Data Engineer (Snowflake, Python, IICS) Job Description
We’re seeking an experienced Data Engineer with strong expertise in Snowflake and Python, alongside hands-on experience with ETL tools—preferably IICS. In this role, you will be part of a large, collaborative team working closely with the client on a major migration project, transitioning IICS workflows to native Snowflake and Python solutions.
Key Responsibilities
Migrate and re-engineer IICS workflows into Snowflake and Python-based pipelines.
Design, build, and optimize scalable, high-performance data pipelines ensuring reliability and efficiency.
Collaborate closely with internal teams and clients through regular sync-ups to ensure alignment and project success.
Ensure data quality, consistency, and maintainability by adhering to best practices.
Conduct code reviews, provide constructive feedback, and promote continuous improvement in coding standards.
Mentor and guide junior data engineers to foster technical excellence and knowledge sharing within the team.
Document data engineering processes and solutions to support future reference, maintainability, and knowledge transfer.
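To give candidates a concrete sense of the first responsibility above, here is a minimal, hypothetical sketch of what re-engineering a single IICS-style mapping step into plain Python can look like: a validation-and-normalization transform with simple data-quality checks. All names, fields, and rules are illustrative only, not taken from the actual project.

```python
from dataclasses import dataclass


@dataclass
class CustomerRow:
    """Illustrative target schema for a cleaned record (hypothetical)."""
    customer_id: str
    email: str
    country: str


def clean_rows(raw_rows):
    """Validate and normalize raw records; split out rows failing quality checks.

    Mirrors the kind of filter/expression logic an IICS mapping might apply,
    re-expressed as straightforward Python (illustrative rules only).
    """
    cleaned, rejected = [], []
    for row in raw_rows:
        cid = (row.get("customer_id") or "").strip()
        email = (row.get("email") or "").strip().lower()
        country = (row.get("country") or "").strip().upper()
        # Quality rules: a required id and a minimally well-formed email.
        if not cid or "@" not in email:
            rejected.append(row)
            continue
        cleaned.append(CustomerRow(cid, email, country))
    return cleaned, rejected


if __name__ == "__main__":
    raw = [
        {"customer_id": "C001", "email": "Ann@Example.com", "country": "us"},
        {"customer_id": "", "email": "not-an-email"},
    ]
    good, bad = clean_rows(raw)
    print(len(good), len(bad))  # one cleaned row, one rejected row
```

In a real migration, a step like this would typically be pushed down into Snowflake SQL where possible, with Python reserved for logic that does not translate cleanly.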
Primary Skills
Extensive experience (7+ years) in data engineering and ETL development.
Advanced proficiency in Python for data engineering tasks, including complex ETL pipelines, automation, and data processing.
Strong hands-on expertise with the Snowflake data platform.
Deep understanding of SQL, data modeling, and relational database concepts.
Practical knowledge of IICS (Informatica Intelligent Cloud Services) or similar ETL tools.
Excellent communication and collaboration skills to work effectively within teams and with clients.
Secondary Skills
Solid grasp of data warehousing principles and best practices.
Experience with code versioning tools (e.g., Git).
Familiarity with unit testing and debugging data pipelines.
Ability to analyze and troubleshoot complex data issues.
Good to Have Skills
Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
Knowledge of orchestration tools (e.g., Apache Airflow, Control-M) and CI/CD pipelines.
Exposure to large-scale data migration projects.
Understanding of containerization and infrastructure-as-code (e.g., Docker, Terraform).