Overview
Description
A leading AI research lab is developing the next generation of interactive generative video systems. This role focuses on foundational research that pushes beyond current generative modeling approaches, exploring new concepts in representation learning, temporal coherence, causality, and user-driven model interaction. You’ll help shape the scientific direction of a new medium where real-time generation meets interactivity, building systems that move beyond passive video output.
You will work in a focused, high-caliber environment where researchers own long-horizon agendas, build conceptual frameworks, and turn ideas into working prototypes. The team values originality, rigor, and deep thinking over incremental improvements.
What You’ll Tackle
- Research new foundations for interactive video generation and model control
- Develop hypotheses, design experiments, and build algorithms that advance the field
- Explore emerging architectures and training paradigms beyond diffusion and transformers
- Study how generative models interpret dynamics, causality, and user intent
- Build minimal systems to validate new capabilities and test conceptual frameworks
- Collaborate with engineering teams to scale prototypes into usable systems
- Contribute to a rigorous research culture through publishing, mentorship, and community engagement
Job Requirements
- Deep expertise in machine learning and generative modeling
- Experience in video synthesis, multimodal learning, or interactive model design
- History of original research contributions or algorithmic breakthroughs
- Strong experimental design and systems prototyping skills
- Ability to work autonomously on first-principles research problems
- Comfort operating in a fast-paced, highly technical environment
Nice to Have
- Experience with real-time generative systems or reinforcement learning
- Prior publications in ML / AI conferences
- Background in scaling research prototypes with infra teams
- Experience mentoring researchers