Job description

Location: Geography restricted to USA, UK, Canada, EU
Type: Full-time or Part-time Contract Work
Fluent Language Skills Required: English

Why This Role Exists

Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions. In mathematics-related contexts, conversational AI systems must demonstrate precise formal reasoning, mathematical rigor, and conceptual clarity. This project focuses on evaluating and improving how models reason about mathematical problems, explanations, and proofs across both foundational and advanced areas of mathematics.

What You'll Do

- Write and refine prompts to guide model behavior in mathematical contexts
- Evaluate LLM-generated responses to mathematics-related queries for correctness, rigor, and logical coherence
- Verify mathematical claims, derivations, and proofs using domain expertise
- Conduct fact-checking using authoritative public sources and domain knowledge
- Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
- Assess the clarity, structure, and appropriateness of explanations for different audiences
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are

- You hold a PhD in Mathematics or a closely related field
- You have demonstrated experience in Probability & Statistics, and may also have experience in one or more of the following areas:
  - Algebra & Number Theory
  - Calculus & Analysis
  - Geometry & Topology
  - Discrete Mathematics, Logic & Computation
- You have significant experience using large language models (LLMs) and understand how and why people use them
- You have excellent writing skills and can clearly explain complex mathematical concepts
- You have strong attention to detail and consistently notice subtle issues others may overlook
- You have experience reviewing or editing technical or academic writing

Nice-to-Have Specialties

- Prior experience with RLHF, model evaluation, or data annotation work
- Experience teaching, mentoring, or explaining mathematical concepts to non-expert audiences
- Familiarity with evaluation rubrics, benchmarks, or structured review frameworks

What Success Looks Like

- You identify inaccuracies or weak reasoning in mathematics-related model outputs
- Your feedback improves the rigor, clarity, and correctness of AI explanations
- You deliver consistent, reproducible evaluation artifacts that strengthen model performance
- Mercor customers trust their AI systems in mathematical contexts because you've rigorously evaluated them

Why Join Mercor

Mercor offers mathematicians the opportunity to apply deep theoretical expertise to the evaluation and improvement of advanced AI systems. This flexible, remote role allows you to influence how mathematical reasoning is represented and communicated at scale.