USA
Scala Data Engineering Architect
**Overview**

As a **Scala Data Engineering Architect** at Publicis Sapient, you will lead the design and implementation of modern, cloud-native data platforms that power large-scale digital transformation. This role combines hands-on architecture with team leadership, enabling organizations to unlock the full potential of their data using AWS and Scala-based technologies.

**Your Impact**

**Architecture & Strategy**

+ Define end-to-end data architecture strategies leveraging **AWS** and **Scala**, ensuring scalability, reliability, and alignment with business objectives.
+ Lead the selection and application of data technologies, frameworks, and patterns tailored to business needs.
+ Develop and maintain architectural roadmaps for data platform modernization and cloud-native initiatives.

**Solution Design & Delivery**

+ Translate business requirements into robust, scalable data solutions using AWS-native services and Scala-based frameworks.
+ Design and implement data ingestion, processing, storage, and analytics pipelines with high availability and performance.
+ Build reusable components and frameworks to streamline development and accelerate delivery.

**Technical Leadership**

+ Provide architectural guidance and mentorship to data engineering teams.
+ Review solution designs to ensure adherence to engineering best practices and standards.
+ Support project estimation and contribute to delivery plans and technical roadmaps.

**Client Engagement & Collaboration**

+ Collaborate with business and technical stakeholders to align data strategies with organizational goals.
+ Facilitate architecture reviews, technical deep dives, and collaborative design sessions.

**Operational Excellence**

+ Oversee the performance, observability, and automation of data platforms in production environments.
+ Drive continuous improvements in platform health, data quality, and operational efficiency.
**Qualifications**

**Your Skills & Experience**

+ Proven experience leading data engineering teams and delivering cloud-native data platforms on **AWS**.
+ Strong programming expertise in **Scala**, particularly for distributed data processing and ETL workflows.
+ Hands-on experience with AWS services including **S3, Glue, EMR, Lambda, Redshift, Athena**, and **DynamoDB**.
+ Deep understanding of data modeling, data warehousing, and stream/batch data processing frameworks (e.g., **Apache Spark**).
+ Familiarity with infrastructure-as-code and CI/CD for data pipelines (e.g., **Terraform, Git, Jenkins**).
+ Strong communication skills and stakeholder engagement experience in client-facing environments.

**Set Yourself Apart With**

+ Experience implementing **DataOps and DevOps** practices in cloud data environments.
+ Exposure to **multi-cloud** or **hybrid cloud** architectures (AWS, GCP, Azure).
+ Knowledge of **observability**, logging, and performance optimization strategies for data platforms.