AI/ML Security Program
-
Martin Harrod
- 01 Aug, 2024

Overview
Developed an AI/ML security program to secure machine learning models and pipelines. Established model risk assessments, secured training data pipelines against poisoning, and implemented controls for prompt injection, data leakage, and model theft. Integrated AI/ML system reviews into the product security lifecycle and educated engineering teams on adversarial ML risks.
Role
Lead security engineer partnering with product and ML teams to embed safeguards into the development lifecycle.
Impact
Enabled safe deployment of generative AI and ML features while meeting emerging regulatory requirements (EU AI Act, NIST AI RMF). Increased customer trust in AI offerings and reduced exposure to novel attack classes targeting data and models.
Technologies, Frameworks, and Artifacts
- LLM security testing
- Model supply chain governance
- NIST AI RMF
- Adversarial ML defenses