Bengaluru, Karnataka, India
Software Engineer II - PySpark, Databricks, AWS

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorgan Chase within Corporate Data Services, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

- Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

- Formal training or certification on software engineering concepts and 2+ years of applied experience
- Experience designing and implementing data pipelines in a cloud environment (e.g., Apache NiFi, Informatica) is required
- 3+ years of experience migrating or developing data solutions in the AWS cloud is required, including experience with AWS services and Apache Airflow
- 3+ years of experience building and implementing data pipelines using Databricks, including Unity Catalog, Databricks Workflows, and Delta Live Tables
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
- 3+ years of hands-on object-oriented programming experience using Python (especially PySpark) to write complex, highly optimized queries across large volumes of data (a brief sketch follows this list)
- Experience with big data technologies such as Hadoop and Spark
- Experience in data modeling and ETL processing
- Hands-on experience in data profiling and advanced PL/SQL procedures
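The PySpark and Databricks requirements above are the core of this role. As a minimal, illustrative sketch only (this posting contains no code of its own), the following shows the kind of optimized PySpark aggregation over a large table that such pipelines typically involve; the catalog, table, and column names (main.analytics.events, event_date, user_id) are hypothetical.

```python
# Illustrative sketch only: all table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-active-users").getOrCreate()

# Read a Unity Catalog table and prune to recent partitions to limit the scan.
events = spark.table("main.analytics.events").where(F.col("event_date") >= "2024-01-01")

# Approximate distinct counts trade a small error margin for large speedups
# at high data volumes.
daily_active = (
    events.groupBy("event_date")
          .agg(F.approx_count_distinct("user_id").alias("active_users"))
)

# Write the result back as a Delta table for downstream reporting.
daily_active.write.format("delta").mode("overwrite").saveAsTable(
    "main.analytics.daily_active_users"
)
```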

Preferred qualifications, capabilities, and skills

- Familiarity with Oracle, ETL, and Data Warehousing
- Exposure to cloud technologies (good to have)
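For the Apache Airflow experience named in the required qualifications, the following is a minimal DAG sketch, assuming Airflow 2.4 or later; the dag_id, schedule, and the placeholder trigger_databricks_job callable are all hypothetical.

```python
# Illustrative sketch only: a skeletal Airflow DAG (assumes Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def trigger_databricks_job():
    # Placeholder body; a real pipeline might instead use the Databricks
    # provider's operators to launch a Databricks Workflows job.
    print("Triggering Databricks job run...")

with DAG(
    dag_id="daily_data_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_job = PythonOperator(
        task_id="run_databricks_job",
        python_callable=trigger_databricks_job,
    )
```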