USA
1 day ago
Domain Architect – AI/ML

We are seeking a highly skilled AI/ML Domain Architect to design, deploy, and operationalize advanced AI/ML solutions with a focus on MLOps, Generative AI (GenAI), LLM Ops, and Agentic AI integration. This role requires deep expertise in ML engineering practices, cloud-native deployment, and hands-on experience with modern AI platforms. The architect will be responsible for building scalable ML pipelines, LLM-based applications, and intelligent agent frameworks to accelerate delivery for telecom, enterprise, and next-generation autonomous network solutions.


You Have:

- Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI/ML, or a related field.
- 10+ years of experience in AI/ML engineering, including 5+ years in MLOps.
- Proven experience with LLM platforms and GenAI ecosystems (OpenAI, Anthropic, Vertex AI, Hugging Face, LangChain, LlamaIndex).
- Strong proficiency in Python, PyTorch, TensorFlow, Scikit-learn, and SQL.
- Expertise in MLOps pipelines (Kubeflow, MLflow, Vertex AI Pipelines, ArgoCD, CI/CD for ML).
- Data engineering experience with Spark, Kafka, Flink, and Airflow.
- Deep knowledge of cloud platforms: GCP, AWS, Azure.
- Hands-on experience implementing ML pipelines on platforms such as Vertex AI, Red Hat OpenShift AI, and Kubeflow.
- Experience with Agentic AI frameworks for orchestrating autonomous agents and multi-step workflows.
- Strong skills in API integration, microservices, and distributed systems.

It would be nice if you also had:

- Familiarity with telecom data products, autonomous-network use cases, and the Ab Initio data management platform.
- Experience with data mesh, data fabric, and modern data architectures.
- Knowledge of vector databases and retrieval-augmented generation (RAG).
- Understanding of security, compliance, and governance for LLM/GenAI deployments.
- Contributions to open-source AI/ML or GenAI frameworks.
- Exposure to TM Forum and 3GPP standards and telecom AI frameworks.

You Will:

- Design, optimize, and scale end-to-end ML pipelines using MLOps best practices, including CI/CD, model deployment, and performance monitoring.
- Develop and operationalize GenAI/LLM-based solutions, applying techniques such as fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and LLM observability.
- Integrate Agentic AI frameworks with existing AI/ML systems to enable autonomous decision-making and intelligent workflow orchestration.
- Implement robust data pipelines for ingestion, preprocessing, and feature engineering across structured, semi-structured, and unstructured data sources.
- Collaborate cross-functionally with data scientists, solution architects, and delivery teams to translate AI/ML use cases into scalable, production-ready solutions.
- Design and manage cloud-native AI/ML infrastructure on platforms such as Google Cloud (Vertex AI), Red Hat OpenShift AI, and Kubeflow.
- Deploy scalable AI solutions across multi-cloud and hybrid environments using Kubernetes and container orchestration.
- Ensure observability and governance of AI systems, including model drift detection, fairness, compliance, and LLM usage guardrails.
- Create accelerators, reusable frameworks, and automation tools to reduce time-to-market and enhance delivery efficiency.
- Support customer engagements through proof-of-concepts (PoCs), pilot implementations, and production rollouts.