We are looking for an AI Security Engineer to design, implement, and secure our next generation of AI solutions. You will combine hands-on engineering with security and governance expertise to ensure safe and compliant AI adoption. You will build and maintain AI guardrails, enforce agent RBAC and permissions tied to firm roles, and integrate Data Loss Prevention (DLP) pipelines to protect sensitive information from leaking into LLM endpoints. Partnering with cloud, security, and governance teams, you will evaluate AI architectures for bias, drift, and risk, while aligning them with frameworks like NIST AI RMF, the EU AI Act, and ISO / IEC 42001. You are equally comfortable developing AI security controls in Python and YAML as you are advising on compliance strategy.
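As a flavor of the DLP-redaction work described above, here is a minimal Python sketch of a pre-send redaction step that scrubs sensitive spans from a prompt before it reaches an external LLM endpoint. The detector patterns and function name are illustrative assumptions; a production pipeline would use tuned, firm-specific detectors and classifiers.

```python
import re

# Illustrative detectors only; real DLP rules would be firm-specific.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected sensitive spans before the prompt leaves the firm."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label.upper()}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789"))
```

In practice this step would sit in an AI gateway in front of every outbound LLM call, with redaction events logged for the audit and SIEM-correlation work mentioned below.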
The Day-to-Day:
- Plan, design, and build secure AI architectures applying NVIDIA NeMo Guardrails, Azure AI Foundry, and enterprise LLM integrations
- Collaboratively develop agent RBAC (role-based access control) to ensure AI agents operate under permissions aligned to firm roles, enforcing least-privilege access
- Design integrations for AI systems with corporate IAM / SSO (Entra, Okta, etc.) to manage persona- and role-based access across the enterprise
- Design Data Loss Prevention (DLP) and redaction pipelines to prevent confidential, regulated, or proprietary data from being sent to external LLM endpoints
- Provide technical advice, direction, and hands-on support to design and develop safe, compliant, and resilient AI workflows
- Evaluate existing and proposed AI / ML architectures for bias, fairness, drift, hallucination, and security risks; recommend controls aligned with NIST AI RMF, EU AI Act, ISO / IEC 42001, CIS
- Collaborate with Information Security, Cloud, Governance, and Engineering teams to implement standardized AI safety and compliance practices
- Actively contribute to the development of AI security standards, playbooks, and architectural patterns
- Automate guardrails, compliance checks, and AI gateway protections for scale and efficiency
- Build and maintain initiative-level artifacts, including AI policy-as-code configs (YAML), architectural diagrams, and risk assessments
- Monitor, log, and audit AI activity for policy violations, compliance tracking, and security event correlation
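To make the agent-RBAC and policy-as-code responsibilities above concrete, here is a minimal Python sketch of a least-privilege authorization check for AI agents. The role and tool names are illustrative assumptions; in practice the permission map would be loaded from a version-controlled YAML policy-as-code config rather than hard-coded.

```python
# Illustrative role-to-tool permission map (in production, loaded from
# a YAML policy-as-code file and tied to firm IAM roles).
ROLE_PERMISSIONS = {
    "analyst": {"search_docs", "summarize"},
    "trader": {"search_docs", "summarize", "query_positions"},
}

def authorize(agent_role: str, tool: str) -> bool:
    """Least-privilege check: deny unless the role explicitly grants the tool."""
    return tool in ROLE_PERMISSIONS.get(agent_role, set())

print(authorize("analyst", "summarize"))        # granted to analysts
print(authorize("analyst", "query_positions"))  # not granted: default deny
```

The default-deny behavior for unknown roles or tools is the key design choice here: anything not explicitly granted is refused, which is what "least-privilege access" means operationally.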
Your Qualifications:
7+ years in IT, Information Security, or AI / ML engineering roles, with experience in:
- Designing, reviewing, and implementing secure AI programs in enterprise environments
- Applying compliance frameworks (ISO 27001, SOC 2, NIST AI RMF, EU AI Act, ISO / IEC 42001)
- Building RBAC / ABAC-based permission models for AI agents and personas
2+ years of hands-on AI / ML administration or engineering, including:
- Experience with NVIDIA NeMo Guardrails, Azure AI Foundry, or similar platforms
- Designing and deploying DLP solutions or privacy-preserving data pipelines for AI
- Supporting AI and security operations in a large enterprise
- Proficiency with Terraform, Python, and cloud automation
- Prior experience in cloud security, data protection, and SIEM / logging for AI traffic