The application window is expected to close on July 3, 2025.
The posting may be removed earlier if the position is filled or if a sufficient number of applications are received.
Meet the Team
We are seeking an experienced Big Data Technical Leader to lead the design, implementation, and optimization of our data lake using Databricks, Apache Spark, and AWS cloud services. In this senior role, you will architect scalable solutions, mentor junior engineers, and drive technical decision-making to transform large-scale data into actionable business insights.
Your Impact
In this role, you will be part of a team building a unified experience for how products are provisioned and how Identity and Access is managed in the platform.
Architect and lead the development of complex ETL pipelines and data workflows using Databricks and Apache Spark
Help lead the design and evolution of our data lake architecture in AWS and Databricks Unity Catalog
Optimize Spark applications for performance, cost-efficiency, and scalability
Define data engineering best practices and design patterns for the organization
Mentor junior engineers and provide technical leadership across teams
Collaborate with data science, ML, and other product teams on requirements and develop PoCs
Evaluate and integrate new technologies into our data platform
Lead technical design reviews and contribute to strategic roadmap planning
Implement robust monitoring, alerting, and disaster recovery systems
Minimum Qualifications:
Bachelor's degree in Computer Science, Engineering, or related field (Master's preferred)
7+ years of experience in data engineering and experience working with Apache Spark
Experience with common storage formats (AVRO, Parquet, Delta, Iceberg, ORC)
Experience with at least one programming language (Python, Scala, or Java)
Extensive experience with AWS data services (S3, EMR, Glue, Lambda, Step Functions)
Preferred Qualifications:
Expertise with the Databricks platform, including Delta Lake and Databricks workflows
Experience with SQL, optimized big data queries, and tuning large-scale data processing workloads
Experience with real-time data processing using Structured Streaming, Kafka, or Kinesis
Experience designing and implementing data lakes and data warehousing solutions
Experience with Infrastructure as Code using Terraform
Why Cisco?
At Cisco, we’re revolutionizing how data and infrastructure connect and protect organizations in the AI era – and beyond. We’ve been innovating fearlessly for 40 years to create solutions that power how humans and technology work together across the physical and digital worlds. These solutions provide customers with unparalleled security, visibility, and insights across the entire digital footprint. Simply put – we power the future.
Fueled by the depth and breadth of our technology, we experiment and create meaningful solutions. Add to that our worldwide network of doers and experts, and you’ll see that the opportunities to grow and build are limitless. We work as a team, collaborating with empathy to make really big things happen on a global scale. Because our solutions are everywhere, our impact is everywhere.
We are Cisco, and our power starts with you.