Applied ML Researcher
You'll join our applied ML research team focused on turning raw enterprise data into structured, contextualized knowledge graphs and embeddings. You'll experiment with new approaches for distilling large models into smaller, more efficient ones; improve retrieval, ranking, and reasoning performance through feedback loops; and prototype methods that help LLMs extract and act on real-world knowledge. We're looking for someone who thrives on iteration, cares about building with rigor, and is hungry to learn from some of the best engineers and researchers in the field.
What You'll Be Doing
- Prototype and refine models for extracting structured knowledge from text
- Apply knowledge distillation techniques to compress and optimize LLMs for downstream tasks
- Explore the use of reinforcement learning and feedback loops for improving model behavior
- Build evaluation pipelines for entity linking, retrieval, and semantic consistency
- Read, implement, and build upon recent research in LLM alignment, distillation, and symbolic grounding
- Collaborate closely with infra and data engineers to scale your research into production-ready components
Prior Experience
- 2–4 years of experience (research lab, internship, academic project, or early industry role) working in ML or NLP
- Exposure to knowledge distillation, RLHF, or curriculum learning techniques
- Strong Python skills and familiarity with ML frameworks like PyTorch or TensorFlow
- Experience with language models and transformers (e.g., BERT, LLaMA, or similar)
- Solid understanding of ML fundamentals: training pipelines, loss functions, evaluation metrics
- A collaborative mindset and willingness to work across research and engineering teams
Nice to Have
- Familiarity with reinforcement learning, including policy optimization or reward modeling
- Experience with semantic representations such as knowledge graphs or entity embeddings
- Comfort working with tools like HuggingFace Transformers, Ray, or vLLM
- Understanding of small-model techniques (pruning, quantization, adapter layers)
- Interest in the LLM ecosystem and techniques for model alignment or prompt tuning
- Prior contributions to open-source projects or academic publications in ML/NLP