Job Description Overview
  • Skills: AWS (Glue, Lambda, AppFlow, DynamoDB, Athena, Step Functions, S3, EC2, EMR, RDS, Redshift), Google BigQuery, SQL and NoSQL databases (MySQL, PostgreSQL, MongoDB, Cassandra), data pipeline tools (Apache Airflow), stream processing (Storm, Spark Streaming, Flink), Python, PySpark, Scala, ETL, data integration, real-time data processing, cloud infrastructure optimization, data management
  • Location: Remote
  • Experience: 12+ years

We are seeking an experienced AWS Data Engineer to join our team. The ideal candidate will have extensive experience working with AWS cloud services and managing data pipelines. You will be responsible for designing, implementing, and maintaining cloud-based solutions that optimize data flow, streamline processing, and enhance data management using tools like AWS Glue, Lambda, DynamoDB, Athena, and Step Functions. If you have strong skills in data engineering, cloud infrastructure, and real-time data processing, this is the perfect opportunity for you.

Key Responsibilities:

  • AWS Cloud Services: Design, implement, and maintain solutions using AWS Glue, Lambda, AppFlow, DynamoDB, Athena, Step Functions, and S3 to ensure seamless data operations and management.
  • Cloud Optimization: Optimize performance and cost of cloud infrastructure to ensure efficient resource usage and scalability across projects.
  • Data Management: Work with both relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) to store and manage large data sets efficiently.
  • Data Pipeline Development: Build and manage ETL workflows using tools like Apache Airflow, ensuring smooth data flow across systems and accurate data transformation processes.
  • Stream Processing: Develop real-time data processing solutions using tools like Storm, Spark Streaming, or Flink, enabling efficient processing of streaming data for time-sensitive applications.
  • Programming & Scripting: Write, debug, and maintain scripts and programs using Python, PySpark, and Scala for automating workflows, data integration, and real-time data processing.
  • Collaboration: Work closely with cross-functional teams to align technical solutions with business needs and deliver high-quality data engineering solutions.
  • Documentation & Guidance: Document system designs, architecture, and workflows, while providing technical guidance and mentorship to team members on best practices in data engineering and cloud technologies.
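To give a flavor of the ETL work described above, here is a minimal extract-transform-load sketch in plain Python. It is purely illustrative: in this role the equivalent logic would typically live in an AWS Glue job or an Airflow task, and the record fields, table name, and values below are hypothetical.

```python
import sqlite3

# Toy ETL pipeline: extract raw records, clean and type-convert them,
# then load them into a local SQLite table. Illustrative only -- the
# field and table names are hypothetical, not part of any real system.

def extract():
    # Stand-in for reading from S3, an API, or a source database.
    return [
        {"user": "alice", "amount": "19.99", "currency": "usd"},
        {"user": "bob", "amount": "5.00", "currency": "usd"},
        {"user": "alice", "amount": "12.50", "currency": "usd"},
    ]

def transform(records):
    # Normalize types and casing; drop rows that fail validation.
    cleaned = []
    for r in records:
        try:
            cleaned.append((r["user"], float(r["amount"]), r["currency"].upper()))
        except (KeyError, ValueError):
            continue  # skip malformed records
    return cleaned

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (user TEXT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(round(total, 2))  # 37.49
```

The same extract/transform/load split maps directly onto Airflow tasks or Glue job stages, which is why orchestration tools model pipelines as chains of exactly these steps.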

Desired Qualifications:

  • Strong experience with AWS cloud services, specifically AWS Glue, Lambda, AppFlow, DynamoDB, Athena, Step Functions, and S3.
  • Extensive experience with relational SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Proficiency in data pipeline tools like Apache Airflow for orchestrating data workflows.
  • Experience with stream-processing systems such as Storm, Spark Streaming, or Flink.
  • Expertise in Python (including PySpark) and Scala for automating data processes and building scalable solutions.
  • Familiarity with additional AWS services such as EC2, EMR, RDS, and Redshift, as well as Google BigQuery, for data storage, processing, and analytics.
  • Strong problem-solving skills with the ability to optimize cloud infrastructure for performance and cost-efficiency.
  • Ability to collaborate effectively with cross-functional teams to meet business objectives and align technical solutions with organizational goals.
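As a sketch of the stream-processing idea referenced above, the core of a tumbling-window aggregation (the building block behind engines like Spark Streaming and Flink) can be reduced to a few lines of plain Python. This is a conceptual toy, not any engine's API; the event keys, timestamps, and window size are all illustrative.

```python
from collections import defaultdict

# Toy tumbling-window aggregation: bucket timestamped events into fixed
# 60-second windows and count events per key within each window.
# Conceptual sketch only -- all names and values are hypothetical.

WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, key)] += 1
    return dict(counts)

events = [
    (5, "clicks"),
    (42, "clicks"),
    (61, "clicks"),
    (65, "views"),
    (119, "clicks"),
]
result = tumbling_window_counts(events)
print(result)  # {(0, 'clicks'): 2, (60, 'clicks'): 2, (60, 'views'): 1}
```

Real stream processors add the hard parts this sketch omits, such as out-of-order events, watermarks, and fault-tolerant state, which is why dedicated engines exist for time-sensitive workloads.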

This position is ideal for an experienced AWS Data Engineer with a passion for working with large-scale data systems, cloud technologies, and real-time data processing. You will play a key role in shaping our data architecture and ensuring that our data systems are optimized and scalable for the future.