Manager - DataOps
WESCO
As a Manager - DataOps, you are responsible for coordinating the activities of staff engaged in business systems, computer operations, computer systems, computer programming, and the company's network to ensure that effective computer resources are provided to users.
**Responsibilities:**
+ Lead and mentor the DataOps Engineering team, fostering a culture of accountability, continuous improvement, and technical excellence.
+ Define and implement CI/CD pipelines and automation practices using tools such as GitHub Actions, Terraform, and Airflow.
+ Oversee observability standards: logging, monitoring, alerting, and retries across the entire pipeline lifecycle.
+ Ensure alignment between DataOps and other technical chapters (Engineering, Platform, Architecture, Security) to support cross-domain pipelines.
+ Collaborate with business stakeholders and tech leads to proactively manage delivery plans, risks, and dependencies.
+ Act as the technical authority for incident response, root cause analysis, and resilience strategies in production environments.
+ Promote infrastructure as code (IaC) practices and drive automation across cloud environments.
+ Monitor resource usage and optimize cloud costs (Databricks clusters, compute, storage).
+ Facilitate team rituals (1:1s, planning, retros) and create career development opportunities for team members.
+ Represent the DataOps function in planning, roadmap definition, and architectural discussions.
+ Promote an autonomous work culture by encouraging self-management, accountability, and proactive problem-solving among team members.
+ Serve as a Spin Culture Ambassador to foster and maintain a positive, inclusive, and dynamic work environment that aligns with the company's values and culture.
**Qualifications:**
+ Minimum of 7 years in DataOps or DevOps, with at least 1-2 years in a technical leadership role overseeing and mentoring Data Engineers. Demonstrated experience in managing complex projects, coordinating team efforts, and ensuring alignment with organizational goals.
+ Advanced hands-on experience with **Databricks**, including Unity Catalog, Delta Live Tables, job orchestration, and monitoring.
+ Solid experience with **cloud platforms**, especially **AWS** (S3, EC2, IAM, Glue).
+ Experience with **CI/CD pipelines** (GitHub Actions, GitLab CI) and orchestration frameworks (Airflow or similar).
+ Proficient in **Python**, **SQL**, and scripting for automation and data operations.
+ Strong understanding of data pipeline architectures across batch, streaming, and real-time use cases.
+ Technical Skills: Proficiency in DevOps tools and technologies such as Jenkins, Docker, Kubernetes, Terraform, and Ansible, as well as cloud platforms (e.g., Databricks, AWS, Azure, GCP).
+ Soft Skills: Strong leadership, communication, and collaboration skills. Excellent problem-solving abilities and a proactive approach to learning and innovation.
+ Experience implementing monitoring and data quality checks (e.g., Great Expectations, Datadog, Prometheus).
+ Effective communicator who can bridge technical and business needs.