Primary responsibilities include establishing technical designs; optimizing ETL/data pipelines and fine-tuning queries; managing data ingestion, transformation, and processing; and developing and maintaining APIs, ETL pipelines, and CI/CD integration processes.
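As a rough illustration of the day-to-day pipeline work this covers, here is a minimal PySpark ETL sketch; the dataset path, column names, and output location are hypothetical, not part of any actual system:

```python
# Minimal PySpark ETL sketch: ingest raw data, transform it, write curated output.
# Paths and column names (quantity, unit_price, order_ts) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest: read raw CSV files with an inferred schema.
orders = spark.read.csv("s3://example-bucket/raw/orders/", header=True, inferSchema=True)

# Transform: drop invalid rows, derive a revenue column, aggregate per day.
daily_revenue = (
    orders
    .filter(F.col("quantity") > 0)
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("revenue").alias("daily_revenue"))
)

# Load: write partitioned Parquet for downstream consumers.
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```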
Requirements
- Bachelor's degree in computer science, engineering, or a similar quantitative field
- 5+ years of relevant experience developing backend services, integrations, data pipelines, and infrastructure
- Strong expertise in Python, PySpark, and Snowpark (see the sketch after this list)
- Proven experience with Snowflake and AWS cloud platforms
- Experience with Informatica/IICS for data integration
- Expertise in database optimization and performance improvement
- Experience with data warehousing and writing efficient SQL queries
- Understanding of data structures and algorithms
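To make the Snowpark expectation concrete, here is a minimal sketch of a Snowpark DataFrame transformation that pushes a filter and aggregation down into Snowflake; the connection parameters, table, and column names are hypothetical placeholders:

```python
# Minimal Snowpark sketch: the transformation executes inside Snowflake,
# so only results leave the warehouse. All identifiers below are hypothetical.
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

session = Session.builder.configs({
    "account": "example_account",
    "user": "example_user",
    "password": "example_password",
    "warehouse": "EXAMPLE_WH",
    "database": "EXAMPLE_DB",
    "schema": "PUBLIC",
}).create()

# Filter and aggregate a hypothetical EVENTS table server-side.
events = session.table("EVENTS")
summary = (
    events
    .filter(F.col("STATUS") == "COMPLETE")
    .group_by("EVENT_TYPE")
    .agg(F.count("STATUS").alias("N_EVENTS"))
)

# Persist the result as a table for downstream use.
summary.write.save_as_table("EVENT_SUMMARY", mode="overwrite")
```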