Data Engineer III - AWS / Java
JP Morgan
Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team.
As a Data Engineer III at JPMorgan Chase within Wealth Management, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's objectives.
Job responsibilities
- Supports the review of controls to ensure sufficient protection of enterprise data
- Reviews and customizes one or two tools to generate a product at the business's or customer's request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Contributes to the team's culture by promoting collaboration and innovation
- Designs, develops, and maintains robust data pipelines to automate the extraction, transformation, and loading (ETL) of data from various sources into data warehouses or data lakes
- Implements scalable and efficient data architectures that support data processing and analytics
- Integrates data from multiple sources, ensuring consistency, accuracy, and reliability
- Collaborates with data scientists and analysts to understand data requirements and provide solutions
- Monitors and troubleshoots data pipeline performance issues and implements solutions
- Documents data pipeline processes, architectures, and workflows for future reference and training
Required qualifications, capabilities, and skills
- Formal training or certification on data engineering disciplines and 3+ years applied experience
- Advanced proficiency in NoSQL databases and SQL (e.g., joins and aggregations)
- Proficiency in programming languages such as Java and Python for data processing tasks
- Proficient in object-oriented programming (OOP) concepts, with a strong ability to design and implement robust, reusable, and maintainable code structures across various programming languages
- Extensive experience with cloud platforms, particularly Amazon Web Services (AWS), including EMR, Glue, Lambda, and ECS, to design, deploy, and manage scalable and efficient cloud-based solutions
- Hands-on experience with frameworks such as Apache Spark, leveraging its capabilities for large-scale data processing and analytics to drive efficient and insightful data solutions
- Proven experience using Cucumber and Gherkin for behavior-driven development (BDD)
- Proficiency in Unix scripting, data structures, data serialization formats such as JSON or Avro, and big-data storage formats such as Parquet
- Strong understanding of data architecture, data modeling, and data warehousing concepts
- Ability to integrate data from various sources, ensuring consistency and accuracy
- Significant experience with statistical data analysis and the ability to determine appropriate tools and data patterns for analysis
Preferred qualifications, capabilities, and skills
- Familiarity with CI/CD pipelines, Docker, and Kubernetes
- Experience provisioning infrastructure with a high-level configuration language such as Terraform
- Experience using Splunk to monitor and analyze system performance
- Experience with Datadog or Dynatrace for real-time monitoring and performance analysis of applications and infrastructure
- Flexibility and eagerness to learn new technologies and skills
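To give candidates a concrete sense of the day-to-day work, here is a minimal, self-contained sketch of the kind of transform-and-aggregate step the data pipeline responsibilities describe. It is an illustrative in-memory example only: the names (EtlSketch, TradeRecord, totalsByAccount) are hypothetical, and a production job at this scale would run on Spark or AWS Glue rather than plain Java collections.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative ETL step: group raw records by account and sum amounts,
// standing in for the aggregations a Spark/Glue job would perform at scale.
public class EtlSketch {

    // "Extracted" record: in production this might arrive from S3 or a
    // source database; here it is a simple value class. (Hypothetical name.)
    record TradeRecord(String account, double amount) {}

    // "Transform" step: aggregate total amount per account.
    static Map<String, Double> totalsByAccount(List<TradeRecord> records) {
        return records.stream()
                .collect(Collectors.groupingBy(
                        TradeRecord::account,
                        Collectors.summingDouble(TradeRecord::amount)));
    }

    public static void main(String[] args) {
        List<TradeRecord> raw = List.of(
                new TradeRecord("ACC-1", 100.0),
                new TradeRecord("ACC-2", 50.0),
                new TradeRecord("ACC-1", 25.0));

        // "Load" step: here we only print; a real job would write Parquet
        // to a data lake or rows to a warehouse.
        totalsByAccount(raw).forEach((acct, total) ->
                System.out.println(acct + " -> " + total));
    }
}
```

The same grouping-and-summing shape maps directly onto a SQL GROUP BY or a Spark aggregation, which is why SQL proficiency and Spark experience appear together in the requirements above.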