Skill Set – Model Validation, Bias/Fairness Testing, Automation
Role Overview
Ensure the quality, fairness, and reliability of AI/ML models through comprehensive testing, validation, and automation, with a focus on model performance, bias detection, and ethical AI practices.
Responsibilities
- Design and execute test strategies for AI/ML models and systems
- Perform model validation including accuracy, robustness, and performance testing
- Conduct bias and fairness testing to identify and mitigate discriminatory outcomes
- Develop automated testing frameworks for continuous model evaluation
- Test data pipelines, feature engineering, and model inference systems
- Create test cases for edge cases, adversarial inputs, and model behavior analysis
- Monitor model drift and performance degradation in production
- Collaborate with ML engineers and data scientists to ensure model quality standards
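The drift-monitoring responsibility above can be sketched with a population stability index (PSI) check, one common drift statistic. The bin proportions and the 0.2 alert threshold below are illustrative assumptions, not requirements from this posting:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned score
    distributions (each a list of proportions summing to 1)."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Illustrative bin proportions: training baseline vs. production traffic.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.10, 0.20, 0.30, 0.40]

drift_score = psi(baseline, current)
# A common rule of thumb flags PSI above 0.2 as significant drift.
drifted = drift_score > 0.2
```

In production this comparison would run on a schedule against live inference logs, alerting when the score crosses the chosen threshold.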
Requirements
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field
- Strong experience in model validation and ML testing methodologies
- Proven expertise in bias and fairness testing for AI systems
- Hands-on experience with test automation frameworks and tools
- Understanding of ML model evaluation metrics and statistical testing
- Proficiency in Python and testing libraries (pytest, unittest)
- Knowledge of AI ethics, responsible AI principles, and regulatory requirements
- Experience with CI/CD pipelines and version control (Git)
Preferred
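Since the posting names pytest, a minimal sketch of an automated validation gate might look like the following; the labels, predictions, and 0.80 accuracy threshold are all hypothetical placeholders for a real holdout set and model:

```python
# Hypothetical pytest-style accuracy gate; data and threshold are
# illustrative, not taken from any real model.

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def test_model_meets_accuracy_threshold():
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 0, 0]  # stand-in model output
    assert accuracy(y_true, y_pred) >= 0.80
```

Checks like this are typically collected by pytest and wired into a CI/CD pipeline so every model revision is re-validated automatically.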
- Experience testing various ML models (LLMs, computer vision, NLP, recommender systems)
- Familiarity with fairness metrics (demographic parity, equalized odds, disparate impact)
- Knowledge of adversarial testing and model robustness evaluation
- Experience with A/B testing and experimental design
- Understanding of data quality testing and validation
- Familiarity with ML frameworks (PyTorch, TensorFlow, scikit-learn)
- Experience with monitoring tools (MLflow, Weights & Biases, Evidently AI)
- Knowledge of GDPR, the EU AI Act, or other AI governance frameworks
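The fairness metrics named above can be made concrete with a small disparate-impact check. The predictions and group labels here are invented for illustration, and the 0.8 threshold follows the common four-fifths rule of thumb:

```python
def selection_rate(y_pred, groups, group):
    """Share of positive predictions within one group."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

def disparate_impact(y_pred, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.8 often flag concern (four-fifths rule)."""
    return (selection_rate(y_pred, groups, protected)
            / selection_rate(y_pred, groups, reference))

# Invented example: binary hiring predictions across two groups.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(y_pred, groups, protected="b", reference="a")
```

Here group "a" is selected at 0.75 and group "b" at 0.25, so the ratio falls well below 0.8 and the model would be flagged for review.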