Design and develop scalable data pipelines, collaborate with cross-functional teams, optimize data architecture, ensure data integrity, build ETL solutions, and promote engineering best practices. Experience with cloud-based data technologies, Spark, SQL, Docker, Kubernetes, DevOps, and Agile development is required.
Requirements
- Bachelor’s degree in Computer Science, Information Systems, or a related technical field, or equivalent work experience
- 3+ years of hands-on experience with cloud-based data technologies, including message queues, event grids, relational databases, NoSQL databases, data warehouses, and big data technologies
- Proficiency in Spark (using Java, Python, or SQL) and advanced SQL skills
- Experience with Docker and Kubernetes
- DevOps experience, including Git and continuous integration/continuous deployment (CI/CD) pipelines
- Agile development experience, particularly with Scrum
Benefits
- Competitive salary
- Opportunities for career growth and professional development
- Collaborative and dynamic work environment