Guadalajara, Mexico
10 days ago
Data Engineer (Hybrid/Guadalajara or Tijuana)

Insulet started in 2000 with an idea and a mission to enable our customers to enjoy simplicity, freedom and healthier lives through the use of our Omnipod® product platform. In the last two decades we have improved the lives of hundreds of thousands of patients by using innovative technology that is wearable, waterproof, and lifestyle-accommodating.

We are looking for highly motivated, performance driven individuals to be a part of our expanding team. We do this by hiring amazing people guided by shared values who exceed customer expectations. Our continued success depends on it!

Position Overview:
Insulet Corporation, maker of the OmniPod, is the leader in tubeless insulin pumps. The Data Engineer role is responsible for data lake infrastructure, development of automated data uploads, and scripting for data cleansing and analytics. Reporting to the Senior Director, Global Technology and Cloud Ops, you will develop tools and processes to transform data for use by Insulet’s Analytics team and senior technical leaders. We are a fast-growing company that provides an energetic work environment and tremendous career growth opportunities.

Responsibilities:

- Design, implementation and maintenance of Insulet’s data lake, warehouse and overall architecture
- Work with IT, analytics and cross-functional teams to identify data sources, determine data collection and design aggregation mechanisms
- Perform data quality checks and data clean-up
- Interface with business stakeholders in cross-functional teams, including manufacturing, quality assurance, and post-market surveillance, in order to understand various applications and their data sets
- Develop data preprocessing tools as needed
- Maintenance and understanding of the various business intelligence tools used to visualize and report team analytics results to the company

Education and Experience:

- Bachelor’s degree in Mathematics, Computer Science, Electrical and Computer Engineering, or a closely related STEM field is required
- Master’s degree in Mathematics, Computer Science, Electrical and Computer Engineering, or a closely related STEM field, or a BS with 2-3 years’ experience working with data technologies, is preferred
- Experience in data quality assurance, control and lineage for large datasets in relational/non-relational databases
- Experience managing robust ETL/ELT pipelines for big real-world datasets that could include messy data, unpredictable schema changes and/or incorrect data types
- Experience with both batch data processing and streaming data
- Experience in implementing and maintaining Business Intelligence tools linked to an external data warehouse or relational/non-relational databases is required
- Experience in medical device, healthcare, or manufacturing industries is desirable
- HIPAA experience is a plus

Skills/Competencies:

- Demonstrated knowledge of SQL and relational databases is required
- Knowledge of non-relational databases (e.g. MongoDB) is a plus
- Demonstrated knowledge of managing large data sets in the cloud (Azure SQL, Google BigQuery, etc.) is required
- Knowledge of ETL and workflow tools (Azure Data Factory, AWS Glue, etc.) is a plus
- Demonstrated knowledge of building, maintaining and scaling cloud architectures (Azure, AWS, etc.), specifically cloud data tools that leverage Spark, is required
- Demonstrated coding abilities in Python, Java, C or scripting languages
- Demonstrated familiarity with different data types as inputs (e.g. CSV, XML, JSON)
- Demonstrated knowledge of database and dataset validation best practices
- Demonstrated knowledge of software engineering principles and practices
- Ability to communicate effectively and document objectives and procedures