Bangalore, IND
Data Engineer, I
Remote Work: Hybrid

Overview: At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about – locally and globally. We've only begun reimagining the future – for our people, our customers, and the world. Let's create tomorrow together.

A Data Engineer designs, develops, programs, and implements Machine Learning solutions; implements Artificial/Augmented Intelligence systems, agentic workflows, and data engineering workflows; and performs statistical modelling and measurement by applying data engineering, feature engineering, statistical methods, ML modelling, and AI techniques to structured, unstructured, and diverse "big data" sources of machine-acquired data, generating actionable insights and foresights for real-life business problems and for the development and enhancement of product features. A strong understanding of databases, SQL, cloud technologies, and modern data integration and orchestration tools such as Azure Data Factory (ADF) is required to succeed in this role.
Responsibilities:
+ Integrates state-of-the-art machine learning algorithms and develops new methods
+ Develops tools to support analysis and visualization of large datasets
+ Develops and codes software programs; implements industry-standard AutoML models (speech, computer vision, text data, LLMs), statistical models, relevant ML models (device/machine-acquired data), and AI models and algorithms
+ Identifies meaningful foresights based on predictive ML models from large data and metadata sources; interprets and communicates foresights, insights, and findings from experiments to product managers, service managers, business partners, and business managers
+ Uses rapid development tools (business intelligence tools, graphics libraries, data modelling tools) to effectively communicate research findings, data models, machine learning model features, and feature engineering/transformations to relevant stakeholders through visual graphics
+ Analyzes, reviews, and tracks trends and tools in the Data Science, Machine Learning, Artificial Intelligence, and IoT space
+ Interacts with cross-functional teams to identify questions and issues for data engineering and ML model feature engineering
+ Evaluates and recommends ways to evolve data collection mechanisms to improve the efficacy of ML model predictions
+ Meets with customers, partners, product managers, and business leaders to present findings, predictions, and foresights; gathers customer-specific requirements for business problems and processes; identifies data collection constraints and alternatives for model implementation
+ Working knowledge of MLOps, LLMs, and agentic AI/workflows
+ Programming skills: proficiency in Python and experience with ML frameworks such as TensorFlow and PyTorch
+ LLM expertise: hands-on experience in training, fine-tuning, and deploying LLMs
+ Foundational model knowledge: strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning techniques, hyperparameter optimization, and model distillation
+ Data pipeline development: strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow
+ Cloud & MLOps: experience deploying ML models in cloud environments such as AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes
+ Designs and implements predictive and optimization models incorporating diverse data types
+ Strong in SQL and Azure Data Factory (ADF)

Qualifications:
• Minimum Education:
o Bachelor's, Master's, or Ph.D. degree in Computer Science or Engineering.
• Minimum Work Experience (years):
o 1+ years of experience programming in at least one of the following languages: Python, Scala, Go.
o 1+ years of experience in SQL and data transformation.
o 1+ years of experience developing distributed systems using open-source technologies such as Spark and Dask.
o 1+ years of experience with relational or NoSQL databases running in Linux environments (MySQL, MariaDB, PostgreSQL, MongoDB, Redis).
• Key Skills and Competencies:
o Experience working with AWS/Azure/GCP environments is highly desired.
o Experience with data models in the Retail and Consumer Products industry is desired.
o Experience working on agile projects and understanding of agile concepts is desired.
o Demonstrated ability to learn new technologies quickly and independently.
o Excellent verbal and written communication skills, especially in technical communications.
o Ability to work toward and achieve stretch goals in a very innovative and fast-paced environment.
o Ability to work collaboratively in a diverse team environment.
o Ability to telework.
o Expected travel: not expected.

To protect candidates from falling victim to online fraudulent activity involving fake job postings and employment offers, please be aware that our recruiters will always connect with you via @zebra.com (https://www.zebra.com/ap/en.html) email accounts.
Applications are only accepted through our applicant tracking system, and we only accept personal identifying information through that system. Our Talent Acquisition team will not ask you to provide personal identifying information via e-mail or outside of the system. If you are a victim of identity theft, contact your local police department.