Senior Lead Data Engineer [Databricks, Spark, AWS]
JP Morgan
Embrace this pivotal role as an essential member of a high-performing team dedicated to reaching new heights in data engineering. Your contributions will be instrumental in shaping the future of one of the world’s largest and most influential companies.
As a Senior Lead Data Engineer at JPMorgan Chase within Corporate Technology, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics in a secure, stable, and scalable way. Leverage your deep technical expertise and problem-solving capabilities to drive significant business impact and tackle a diverse array of challenges that span multiple data pipelines, data architectures, and other data consumers.
Job responsibilities
- Provides recommendations and insight on data management, governance procedures, and intricacies applicable to the acquisition, maintenance, validation, and utilization of data
- Designs and delivers trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
- Defines database backup, recovery, and archiving strategy
- Generates advanced data models for one or more teams using firmwide tooling, linear algebra, statistics, and geometrical algorithms
- Approves data analysis tools and processes
- Creates functional and technical documentation supporting best practices
- Advises junior engineers and technologists
- Evaluates and reports on access control processes to determine effectiveness of data asset security
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on Data Engineering concepts and 5+ years of applied experience
- 10+ years of expertise in software engineering, with an emphasis on strong architecture and design principles
- 10+ years of proven experience designing and implementing distributed, scalable, and event-driven services to support large-scale data processing and analytics workloads
- Advanced expertise in data engineering and end-to-end software solutions, showcasing a high level of proficiency
- 8+ years of hands-on experience in one or more programming languages, such as Python or Java, with a strong understanding of object-oriented programming (OOP) principles
- Proficient and hands-on with SQL, Spark SQL, and PySpark
- Demonstrated expertise in designing and utilizing AWS services, microservices, Databricks, and Spark for complex projects
- Extensive working experience with both relational databases (Oracle, SQL Server, RDS) and NoSQL databases
- Expertise in employing CI/CD practices within an Agile SDLC to enhance agility and software quality, collaborating effectively across sprint cycles and cross-functional teams during application development
- Solid background in Computer Science, Computer Engineering, or a related technical field
- Associate/Developer- or Architect-level certification required in at least one of the following technologies: Databricks, Spark, or AWS
Preferred qualifications, capabilities, and skills
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical audiences
- Experience with version control systems such as Git and CI/CD pipelines for data engineering workflows