At Schwab, you’re empowered to make an impact on your career. Here, innovative thought meets creative problem solving, helping us “challenge the status quo” and transform the finance industry together. We believe in the importance of in-office collaboration and fully intend for the selected candidate for this role to work on site in the specified location(s).
Schwab Wealth and Asset Management Engineering is part of the Schwab Technology Services organization, supporting Schwab's money management, research, and asset management platforms to help clients manage their wealth.
We are seeking a Data Engineer to build out the cloud-native data platform for Schwab Asset Management (SAM). This role is ideal for a professional with progressive experience in cloud-native data engineering who is ready to take on more responsibility and operate with minimal supervision. You will be integral to enabling and enhancing our data assets and data pipelines, and to supporting a data platform built on SQL Server, Snowflake, and Google Cloud. This is an exciting opportunity to work in a dynamic, data-driven environment, contributing to the ongoing development and optimization of our data platform.
The role requires hands-on development in a client-driven technology organization, executing regulatory, tactical, and strategic business initiatives focused on developing the data platform and delivering analytics and reporting projects. The ideal candidate is detail oriented and works in an Agile and DevOps model in partnership with the business, actively collaborating with Product Owners, end users, and partners to manage requirements, design, coding, testing (unit and functional), deployment, and post-release support, as well as engaging in migration activities to evolve the on-premises data stack to the cloud.
The ideal candidate will have:
- 5+ years of working experience and sound knowledge in building data platforms leveraging cloud-native architecture (GCP/AWS), ETL/ELT, and data integration
- 3-5 years of development experience with cloud services (AWS, GCP, Azure) utilizing various supporting tools (e.g., GCS, Cloud Dataflow, Airflow (Composer), Cloud Pub/Sub)
- 3-5 years of experience and sound knowledge in developing reliable data pipelines leveraging data warehouses (Snowflake, BigQuery, SQL Server) and data processing frameworks (Apache Spark, Apache Beam, Apache Flink, Informatica, SSIS, Pentaho)
- Knowledge of NoSQL database technologies (e.g., MongoDB, Bigtable, DynamoDB)
- Expertise in build and deployment tools (Visual Studio, PyCharm, Git/Bitbucket/Bamboo, Maven, Jenkins, Nexus, TeamCity)
- Experience in database design techniques and philosophies (e.g., RDBMS, Document, Star Schema, Kimball Model)
- Experience leveraging continuous integration/deployment tools (e.g., Bamboo, Docker, containers, GitHub, GitHub Actions) in a CI/CD pipeline
- Experience with SQL, ETL, and other code-based data transformation and delivery technologies
- Experience in messaging and services-based software, preferably on a cloud platform using RabbitMQ, Kafka, or the equivalent
- Experience with the Informatica Developer tool set or Data Validation Option (DVO) is a plus
- Advanced understanding of software development and research tools
- Attention to detail and results orientation, with a strong customer focus
- Ability to work as part of a team and independently
- Analytical and problem-solving skills
- Technical communication skills
- Ability to prioritize workload to meet tight deadlines

What you have
To ensure that we fulfill our promise of "challenging the status quo," this role has specific qualifications that successful candidates should have.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 5+ years of experience in data engineering or similar roles
- Proficiency in SQL and experience with relational databases (e.g., SQL Server, PostgreSQL, Snowflake)
- Solid experience with Python for data processing, ETL development, tooling, and analytics implementations
- Familiarity with cloud data platforms (e.g., Snowflake, Google BigQuery)
- Experience building and maintaining ETL pipelines
- Strong experience in data modeling, including designing normalized and denormalized schemas for financial data
- Understanding of financial data concepts
- Knowledge of data quality, validation, and governance best practices
Preferred Qualifications:
- Deep understanding of data architectures and engineering patterns for data pipelines and reporting environments
- Familiarity with data visualization tools (Power BI, Tableau)
- Exposure to regulatory requirements in finance (e.g., GDPR, N-PORT, 13F/G)
- Analytical and troubleshooting skills to identify and resolve data and platform issues effectively
- Ability to work collaboratively within an agile team environment, supporting cross-functional initiatives and contributing to shared goals
- Strong documentation skills and the ability to communicate technical concepts clearly and effectively to both technical and non-technical stakeholders

In addition to the salary range, this role is also eligible for bonus or incentive opportunities.
Why work for us?
At Schwab, we’re committed to empowering our employees’ personal and professional success. Our purpose-driven, supportive culture and focus on your development mean you’ll get the tools you need to make a positive difference in the finance industry.
We offer a competitive benefits package to our full-time employees that takes care of the whole you – both today and in the future:
- 401(k) with company match and employee stock purchase plan
- Paid time for vacation, volunteering, and a 28-day sabbatical after every 5 years of service for eligible positions
- Paid parental leave and family building benefits
- Tuition reimbursement
- Health, dental, and vision insurance