Talworx is seeking an experienced Data Engineer to join its growing data engineering team. The role involves designing and maintaining scalable data pipelines using Databricks and Apache Spark, developing ETL/ELT processes, and optimizing data storage, processing, and analytics. The team works with the Lakehouse architecture, Databricks Workflows, and AWS, Azure, and GCP data services.
Requirements
- 3+ years of experience with Big Data technologies such as Apache Spark and Databricks.
- Strong proficiency in Python or Scala.
- Experience with cloud platforms (AWS, Azure, GCP).
- Knowledge of data warehousing, ETL processes, and SQL.
- Familiarity with CI/CD pipelines, GitHub, and containerization (Docker, Kubernetes).
- Knowledge of Unity Catalog, data security policies, and access controls.
Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives
- Annual health check-ups
- Insurance coverage