EY is looking for a Senior Data Engineer with expertise in AWS Databricks to join its GDS Consulting team. The successful candidate will develop and deploy data lakehouse pipelines, design and implement ETL/ELT workflows, and migrate existing on-premises ETL workloads to the AWS + Databricks platform.
Requirements
- 4+ years of IT experience
- 2+ years of relevant experience in Databricks and AWS cloud-based data engineering
- Strong hands-on experience in Databricks (PySpark, Delta Lake, Delta Live Tables, Unity Catalog)
- Practical knowledge of AWS services such as S3, Glue, Lambda, Step Functions, CloudWatch, and Redshift
- Experience working with structured and semi-structured data formats (CSV, JSON, Parquet, XML)
- Good understanding of ETL orchestration using Databricks Workflows, Airflow, or Step Functions
- Familiarity with metadata-driven ingestion frameworks and medallion architecture (Bronze, Silver, Gold) design principles
- Solid experience in Python, PySpark, and SQL for data transformation and validation
- Knowledge of CI/CD practices and version control (GitHub, Azure DevOps, Jenkins)
- Excellent analytical, troubleshooting, and problem-solving skills
- Ability to work independently, interact with stakeholders, and deliver high-quality solutions under minimal supervision
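For candidates less familiar with the medallion architecture and metadata-driven ingestion mentioned above, here is a minimal sketch of the idea. Plain Python stands in for PySpark to keep it self-contained; the table names, fields, and `SOURCE_METADATA` config are all illustrative, not part of any real EY or Databricks framework.

```python
# Illustrative sketch of metadata-driven medallion layers (Bronze -> Silver -> Gold).
# Plain Python stands in for PySpark; all names and fields are hypothetical.

from collections import defaultdict

# "Metadata" describing each source: which fields to keep and which must be non-null.
SOURCE_METADATA = {
    "orders": {
        "keep": ["order_id", "customer", "amount"],
        "required": ["order_id", "amount"],
    },
}

def to_silver(source: str, bronze_rows: list[dict]) -> list[dict]:
    """Bronze -> Silver: project to the configured columns and drop invalid rows."""
    meta = SOURCE_METADATA[source]
    silver = []
    for row in bronze_rows:
        if any(row.get(col) is None for col in meta["required"]):
            continue  # validation failure: exclude from the Silver layer
        silver.append({col: row.get(col) for col in meta["keep"]})
    return silver

def to_gold(silver_rows: list[dict]) -> dict:
    """Silver -> Gold: aggregate order amounts per customer for reporting."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["customer"]] += row["amount"]
    return dict(totals)

# Raw (Bronze) records as they might land from ingestion, including a bad row.
bronze = [
    {"order_id": 1, "customer": "acme", "amount": 120.0, "raw_payload": "..."},
    {"order_id": None, "customer": "acme", "amount": 50.0},  # rejected: missing key
    {"order_id": 2, "customer": "globex", "amount": 75.0},
]
gold = to_gold(to_silver("orders", bronze))  # {"acme": 120.0, "globex": 75.0}
```

In a real Databricks pipeline the same pattern would be expressed with PySpark DataFrames and Delta tables, with the per-source metadata driving which columns are selected and validated at each layer.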
Benefits
- Competitive salary
- Support, coaching and feedback from colleagues
- Opportunities to develop new skills and progress your career
- Freedom and flexibility to handle your role in a way that's right for you