Bengaluru, Karnataka, India
Lead Data Engineer
Overview

Job Title: Lead Data Engineer (Microsoft Fabric)
Location: Bengaluru

Aptean is changing. Our bespoke ERP solutions are transforming a huge range of global businesses, from food producers to manufacturers. In a world of generic enterprise software, we provide targeted solutions that bring together the very best technology and drive greater results. With over 4,500 employees, 90 different products and a global client base, there’s no better time to advance your career at Aptean.

Aptean is seeking a hands-on, results-driven data engineering professional to design, build, and maintain a scalable enterprise data lakehouse using Microsoft Fabric, following the medallion architecture. This role is key to advancing our enterprise data platform, enabling analytics, enterprise reporting, and AI-driven insights. The ideal candidate will have strong experience in data modeling and data lakehouse technologies, and a passion for leveraging AI to optimize and enrich data processes wherever possible.

About the role

- Design and build robust, scalable data ingestion pipelines using Microsoft Fabric (Pipelines, Dataflows, Notebooks) to integrate data from business applications and external APIs.
- Perform deep source-system analysis to define ingestion strategies that ensure data reliability, consistency, and observability, applying metadata-driven design for automation.
- Develop and maintain Delta tables using the medallion architecture (bronze/silver/gold) to systematically cleanse, enrich, and standardize data for downstream consumption.
- Implement comprehensive data quality checks (nulls, duplicates, schema drift, outliers, SCD types) and ensure data integrity across all transformation layers in the lakehouse.
- Apply governance practices including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trails to ensure compliance, traceability, and secure data access.
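As a minimal illustration of the bronze-to-silver cleansing and data quality checks described above, the sketch below uses plain Python dicts to stand in for rows; in Microsoft Fabric this logic would typically live in a notebook writing Delta tables, and the table and sample values here are hypothetical.

```python
# Illustrative bronze -> silver cleansing step in a medallion layout.
# Plain dicts stand in for Delta table rows; sample data is hypothetical.
from typing import Any

BRONZE: list[dict[str, Any]] = [  # raw ingested rows, kept untouched
    {"order_id": "1001", "customer": "Acme ", "amount": "250.00"},
    {"order_id": "1001", "customer": "Acme ", "amount": "250.00"},  # duplicate
    {"order_id": "1002", "customer": None, "amount": "99.50"},      # null field
    {"order_id": "1003", "customer": "Globex", "amount": "410.25"},
]

def to_silver(rows: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Cleanse and standardize bronze rows: drop nulls and duplicates, cast types."""
    seen: set[str] = set()
    silver = []
    for row in rows:
        if any(v is None for v in row.values()):
            continue                      # data quality: reject rows with null fields
        if row["order_id"] in seen:
            continue                      # data quality: drop duplicate keys
        seen.add(row["order_id"])
        silver.append({
            "order_id": int(row["order_id"]),     # standardize types
            "customer": row["customer"].strip(),  # trim stray whitespace
            "amount": float(row["amount"]),
        })
    return silver

silver = to_silver(BRONZE)
print(len(silver))  # -> 2 clean rows survive
```

The same shape of check (null, duplicate, type standardization) extends naturally to schema-drift and outlier rules applied at each medallion layer.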
- Build semantic models and define business-aligned KPIs to support self-service analytics and dashboarding in Power BI and other BI platforms.
- Structure the gold layer and semantic model to support AI/ML use cases, ensuring datasets are enriched, contextualized, and optimized for AI agent consumption.
- Develop and maintain AI-ready run flows and access patterns to enable seamless integration between the lakehouse and AI agents for tasks such as prediction, summarization, and decision automation.
- Implement DevOps best practices for pipeline versioning, testing, deployment, and monitoring; proactively detect and resolve data integration and processing issues.
- Collaborate cross-functionally with analysts, data scientists, and business users to ensure the data platform supports evolving needs for analytics, operational reporting, and AI innovation.

Job Specifications

Education:
- Bachelor’s degree (required)
- Master’s degree (preferred)

Work Experience: 7-10 years

Knowledge, Skills, Abilities & Competencies

- Deep expertise in data engineering, with hands-on experience designing and implementing large-scale data platforms, including data warehouses, lakehouses, and modern ETL/ELT pipelines.
- Proven ability to build, deploy, and troubleshoot highly reliable, distributed data pipelines integrating structured and unstructured data from internal systems and external sources.
- Strong technical foundation in data modeling, database architecture, and data transformation techniques using the medallion architecture (bronze/silver/gold layers) within Microsoft Fabric or similar platforms.
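The slowly changing dimension (SCD) handling mentioned among these responsibilities can be sketched as a Type 2 update: expire the current row when a tracked attribute changes and insert a new version. This is an illustrative stdlib-only sketch with a hypothetical customer dimension; in practice this would be a Delta MERGE in the silver or gold layer.

```python
# Minimal Type 2 slowly changing dimension (SCD2) update.
# Hypothetical customer dimension; plain dicts stand in for a Delta table.
from datetime import date

dim = [  # existing dimension rows; is_current flags the active version of each key
    {"customer_id": 7, "city": "Pune", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2(dim_rows, updates, as_of):
    """Expire the current row when a tracked attribute changes; insert a new version."""
    for upd in updates:
        for row in list(dim_rows):  # snapshot: we append while scanning
            if (row["customer_id"] == upd["customer_id"]
                    and row["is_current"] and row["city"] != upd["city"]):
                row["valid_to"] = as_of       # close out the old version
                row["is_current"] = False
                dim_rows.append({
                    "customer_id": upd["customer_id"],
                    "city": upd["city"],
                    "valid_from": as_of,
                    "valid_to": None,
                    "is_current": True,
                })
    return dim_rows

apply_scd2(dim, [{"customer_id": 7, "city": "Bengaluru"}], date(2024, 6, 1))
print(len(dim))  # -> 2: the expired version plus the new current row
```

Keeping both versions with validity dates is what lets downstream reporting reconstruct history, which is the point of SCD2 over a simple overwrite (SCD1).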
- Solid understanding of data lakehouse patterns and Delta Lake / OneLake concepts, with the ability to structure data models that are AI/ML-ready and support semantic modeling.
- Experience implementing data quality frameworks, including checks for nulls, duplicates, schema drift, outliers, and slowly changing dimensions (SCD types).
- Familiarity with data governance, including schema versioning, data lineage, access controls (RBAC), and audit logging to ensure secure and compliant data practices.
- Working knowledge of data visualization tools such as Power BI, with the ability to support and optimize semantic layers and KPI definitions.
- Strong communication and collaboration skills, with the ability to articulate complex data engineering solutions to both technical and non-technical stakeholders and to lead cross-functional initiatives.
- Experience with DevOps practices, including version control, CI/CD pipelines, environment management, and performance monitoring in a data engineering context.

Shift details: N/A (shift work not required)

If you share our mindset, you can share in our success. To find out more about joining Aptean, get in touch today.

Learn from our differences. Celebrate our diversity. Grow and succeed together.

Aptean pledges to promote a company culture where diversity, equity and inclusion are central. We are committed to applying this principle as we interact with our customers, build our teams, cultivate our leaders and shape a company in which any employee can succeed, regardless of race, color, sex, national origin, sexuality and gender identity, religion, disability or age. Celebrating our diverse experiences, opinions and beliefs allows us to embrace what makes us unique and to use this as an asset in bringing innovative solutions to our customer base.

“At Aptean, our global and diverse employee base is our greatest asset.
It is through embracing and understanding our differences that we are able to harness our individual power to maximize the success of our customers, our employees and our company.” – TVN Reddy