Chennai, Tamil Nadu, India
AI/ML Expert

This specialist combines expertise in cybersecurity and AI/ML to design, implement, and maintain security frameworks, ensuring the integrity, confidentiality, and compliance of AI-driven solutions throughout their lifecycle. The role also involves collaborating with cross-functional stakeholders and AI engineers to build and deploy an enterprise-wide AI security framework.

Technical Skills:
- Strong understanding of AI/ML concepts, architectures, and security challenges.
- Strong programming skills in Python, R, or similar languages.
- Strong experience in Google Cloud Platform (GCP) or equivalent.
- Solid understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with cloud AI/ML services and deployment pipelines is a plus.
- Experience with security frameworks (e.g., SAIF, NIST, FAICP) and regulatory compliance.
- Proficiency in data protection techniques, encryption, and secure access management.
- Familiarity with adversarial machine learning, model hardening, and input sanitization.
- Knowledge of incident response, monitoring tools, and threat intelligence platforms.
- Excellent communication and documentation skills for policy development and stakeholder engagement.


Experience:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years in AI/ML roles, including hands-on model development and deployment.
- Track record of delivering AI solutions that drive business value.


Certifications:
- Relevant certifications such as CAISF, AICERTs, AI for Cybersecurity Specialization, or equivalent.
- GCP Cloud certification or equivalent in AWS or Azure (preferred).
- Cybersecurity certifications (preferred).

Responsibilities:
- Design and maintain structured guidelines and controls to secure AI systems, covering data protection, model security, and compliance requirements.
- Evaluate and utilize established frameworks such as Google’s Secure AI Framework (SAIF), the NIST AI Risk Management Framework, or the Framework for AI Cybersecurity Practices (FAICP) as references or baselines.
- Identify, assess, and mitigate security risks specific to AI, including adversarial attacks, data poisoning, model inversion, and unauthorized access.
- Conduct regular vulnerability assessments and penetration testing on AI models and data pipelines.
- Ensure data used in AI systems is encrypted, anonymized, and securely stored.
- Implement robust access controls (e.g., RBAC, ABAC, Zero Trust) for sensitive AI data and models.
- Protect AI models from tampering, theft, or adversarial manipulation during training and deployment.
- Monitor and log AI system activity for anomalies or security incidents.
- Develop and enforce policies to ensure AI systems adhere to industry regulations, ethical standards, and organizational governance requirements.
- Promote transparency, explainability, and fairness in AI models.
- Establish real-time monitoring and advanced threat detection for AI systems.
- Develop and maintain an AI incident response plan for prompt mitigation and recovery.
- Educate teams on AI security best practices and foster a security-aware culture.
- Collaborate with IT, data science, compliance, and business units to align AI security with organizational goals.

