USA
2 days ago
Principal Applied Scientist

Invent, implement, and deploy state-of-the-art machine learning and domain-specific algorithms and systems. Build prototypes and explore conceptually new solutions. Work collaboratively with science, engineering, and product teams to identify customer needs, create and implement solutions, promote innovation, and drive model implementations. Apply data science capabilities and research findings to create and implement solutions at scale. Develop new intelligence around core products and services through applied research on behalf of our customers, building models, prototypes, and experiments that pave the way for innovative products and services.

 

About the Role


Responsible AI: Principal Applied Scientist. We are seeking an exceptional Principal Applied Scientist with deep expertise in Responsible AI to join our fast-growing AI/ML research team. In this role, you will drive the development and evaluation of scalable safeguards for foundation models, with a focus on large language and multi-modal models (LLMs/LMMs). Your work will directly influence how we design, deploy, and monitor trustworthy AI systems across a broad range of products.

What You’ll Do

- Conduct cutting-edge research and development in Responsible AI, including fairness, robustness, explainability, and safety for generative models
- Design and implement safeguards, red-teaming pipelines, and bias mitigation strategies for LLMs and other foundation models
- Contribute to the fine-tuning and alignment of LLMs using techniques such as prompt engineering, instruction tuning, and RLHF/DPO
- Define and implement rigorous evaluation protocols (e.g., bias audits, toxicity analysis, robustness benchmarks)
- Collaborate cross-functionally with product, policy, legal, and engineering teams to ensure Responsible AI principles are embedded throughout the model lifecycle
- Publish in top-tier venues (e.g., NeurIPS, ICML, ICLR, ACL, CVPR) and represent the company in academic and industry forums

Minimum Qualifications

- Ph.D. in Computer Science, Machine Learning, NLP, or a related field, with publications in top-tier AI/ML conferences or journals
- Hands-on experience with LLMs, including fine-tuning, evaluation, and prompt engineering
- Demonstrated expertise in building or evaluating Responsible AI systems (e.g., fairness, safety, interpretability)
- Proficiency in Python and ML/DL frameworks such as PyTorch or TensorFlow
- Strong understanding of model evaluation techniques and metrics related to bias, robustness, and toxicity
- Creative problem-solving skills with a rapid prototyping mindset and a collaborative attitude

Preferred Qualifications (Nice to Have)

- Experience with RLHF (Reinforcement Learning from Human Feedback) or other alignment methods
- Open-source contributions to the AI/ML community
- Experience working with model guardrails, safety filters, or content moderation systems

Why Join Us
You’ll be working at the intersection of AI innovation and Responsible AI, helping shape the next generation of safe and trustworthy machine learning systems. If you’re passionate about ensuring AI benefits everyone—and you have the technical depth to back it up—we want to hear from you.
