We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions. The role spans building scalable ETL processes with AWS services, constructing data lakes, and supporting analytics workloads. A strong background in AWS data services, big data technologies, and programming languages is essential.
Responsibilities
- Design and implement scalable, high-performance data pipelines using AWS services
- Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
- Build and maintain data lakes using S3 and Delta Lake
- Create and manage analytics solutions using Amazon Athena and Redshift
- Design and implement database solutions using Aurora, RDS, and DynamoDB
- Develop serverless workflows using AWS Step Functions
- Write efficient and maintainable code using Python/PySpark and SQL (PostgreSQL)
- Ensure data quality, security, and compliance with industry standards
- Collaborate with data scientists and analysts to support their data needs
- Optimize data architecture for performance and cost-efficiency