Domain Architect - AI/ML
Nokia
In this role, you will lead the design, operationalization, and scaling of intelligent systems—from LLM-powered solutions to autonomous AI agents—driving innovation across multiple industries.
This position is ideal for a visionary architect with a passion for pushing AI boundaries and transforming cutting-edge research into production-ready systems. You will play a strategic leadership role, working closely with global teams to deliver advanced AI solutions that meet enterprise demands for scalability, security, and performance.
If you have:
- Bachelor's/Master's in Computer Science, Data Engineering, AI/ML, or a related field.
- 10+ years of experience in AI/ML engineering, including 5+ years in MLOps.
- Proven experience with LLM platforms and GenAI ecosystems (OpenAI, Anthropic, Vertex AI, Hugging Face, LangChain, LlamaIndex).
- Strong proficiency in Python, PyTorch, TensorFlow, Scikit-learn, and SQL.
- Expertise in MLOps pipelines (Kubeflow, MLflow, Vertex AI Pipelines, ArgoCD, CI/CD for ML).
- Data engineering experience: Spark, Kafka, Flink, Airflow.
- Deep knowledge of cloud platforms: GCP, AWS, Azure.
- Experience implementing ML pipelines on platforms such as Vertex AI, Red Hat OpenShift AI, and Kubeflow.
- Experience with agentic AI frameworks for orchestrating autonomous agents and multi-step workflows.
- Strong skills in API integration, microservices, and distributed systems.
- Excellent communication and collaboration skills for cross-functional and global delivery teams.

It would be nice if you also had:
- Experience with the Ab Initio data management platform.
- Familiarity with telecom data products and autonomous networks use cases.
- Experience in data mesh, data fabric, and modern data architectures.
- Knowledge of vector databases and retrieval-augmented generation (RAG).
- Understanding of security, compliance, and governance for LLM/GenAI deployments.
- Contributions to open-source AI/ML or GenAI frameworks.
- Exposure to TM Forum and 3GPP standards, and to telecom AI frameworks.
#LI-Hybrid
- Build, optimize, and scale end-to-end ML pipelines using MLOps best practices (CI/CD, model deployment, monitoring).
- Develop and operationalize GenAI/LLM-based solutions (fine-tuning, prompt engineering, RAG pipelines, LLM monitoring).
- Integrate agentic AI frameworks with existing AI/ML systems to enable autonomous decision-making and workflow orchestration.
- Implement data ingestion, preprocessing, and feature engineering for structured, semi-structured, and unstructured data.
- Collaborate with data scientists, architects, and delivery teams to translate AI/ML use cases into production-ready solutions.
- Design and manage cloud-native AI/ML infrastructure on GCP (Vertex AI), Red Hat OpenShift AI, and Kubeflow.
- Deploy scalable solutions across multi-cloud/hybrid environments with Kubernetes and container orchestration.
- Ensure observability and governance for AI systems (model drift, fairness, compliance, LLM usage guardrails).
- Create accelerators, reusable frameworks, and automation to reduce time-to-market for AI solutions.
- Support PoCs, customer pilots, and production rollouts.