This data engineering role involves designing, building, and optimizing scalable data pipelines within large-scale, distributed cloud platforms. The role requires both technical excellence and strategic insight to enable data-driven decision-making across the organization. Collaborating with cross-functional teams and mentoring other engineers are crucial aspects of this position.
Requirements
- Bachelor’s degree in Mathematics, Statistics, Computer Science, or a related field.
- 5+ years of experience in data engineering, software engineering, or similar technical roles.
- Experience designing and implementing scalable data architectures, including pipelines for stream and batch processing.
- Proficiency in data modeling, data warehouses, and data lakehouses.
- Hands-on experience with AWS (Glue, Kinesis, Lambda) and other cloud-based big data technologies.
- Experience building ELT pipelines using dbt and Snowflake.
- Advanced SQL skills (Postgres, Snowflake) and intermediate to advanced Python programming skills.
- Familiarity with the software development lifecycle, data orchestration tools (Airflow), and infrastructure-as-code (Terraform).
- Knowledge of analytics tools such as Tableau or Power BI.
Benefits
- Competitive base salary
- Equity participation
- Career growth opportunities
- Comprehensive healthcare and wellness plans
- Retirement/pension contributions
- Learning and development support
- Home office setup allowance
- Optional benefits (pet insurance, legal assistance, identity theft protection)