About Us
At Together.ai, we are building cutting-edge infrastructure to enable efficient and scalable training of large language models (LLMs). We focus on optimizing training frameworks, algorithms, and infrastructure to push the boundaries of AI performance, scalability, and cost-efficiency.
We are seeking an LLM Training Frameworks and Optimization Engineer to drive innovation in the development and optimization of distributed training frameworks. In this role, you will ensure that our LLM training pipelines are robust, efficient, and capable of handling the complexities of large-scale distributed systems.
Responsibilities
Design, implement, and optimize distributed training frameworks tailored for large language models.
Optimize communication patterns (e.g., gradient synchronization, all-reduce) in distributed training.
Conduct in-depth profiling and debugging of training jobs to identify and resolve bottlenecks.
Ensure training systems scale efficiently to thousands of nodes and petabytes of data.
Work closely with researchers, data engineers, and platform teams to ensure training frameworks meet model and workload requirements.
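To give a concrete flavor of the communication patterns mentioned above, here is a toy, single-process sketch of ring all-reduce, the pattern commonly used for gradient synchronization in data-parallel training. This is purely illustrative pure Python (no real networking or framework API); `ring_all_reduce` and its conventions are invented for this sketch, not taken from any library.

```python
def ring_all_reduce(grads):
    """Toy ring all-reduce: grads is one flat gradient list per rank.

    Simulates the two phases of the ring algorithm in a single process;
    every rank ends up holding the element-wise sum across all ranks.
    """
    p = len(grads)                    # number of ranks in the ring
    n = len(grads[0])
    assert n % p == 0, "toy version: gradient length must split into p chunks"
    chunk = n // p
    seg = lambda c: slice(c * chunk, (c + 1) * chunk)

    # Phase 1, reduce-scatter: each rank passes one chunk to its right
    # neighbor, which accumulates it. After p-1 steps, rank r holds the
    # fully summed chunk (r + 1) % p.
    for step in range(p - 1):
        sends = [(r, (r - step) % p) for r in range(p)]
        payloads = [list(grads[r][seg(c)]) for r, c in sends]   # snapshot
        for (r, c), data in zip(sends, payloads):
            dst, s = (r + 1) % p, seg(c)
            for i, v in enumerate(data):
                grads[dst][s.start + i] += v

    # Phase 2, all-gather: circulate the fully reduced chunks around the
    # ring until every rank has the complete summed gradient.
    for step in range(p - 1):
        sends = [(r, (r + 1 - step) % p) for r in range(p)]
        payloads = [list(grads[r][seg(c)]) for r, c in sends]
        for (r, c), data in zip(sends, payloads):
            grads[(r + 1) % p][seg(c)] = data
    return grads
```

Each rank sends and receives only 2(p-1) chunks regardless of ring size, which is why this pattern scales to large clusters; production frameworks implement it in NCCL or similar collectives rather than Python.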
Qualifications
Must-Have:
5+ years of experience in deep learning frameworks, distributed systems, or machine learning infrastructure.
Expertise in distributed training frameworks (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, TensorFlow XLA).
Proficiency in Python, and in C++ or CUDA, for high-performance computing.
Experience with memory optimization techniques (e.g., activation checkpointing, gradient sharding).
Analytical problem-solving skills and a focus on performance improvement.
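For candidates less familiar with the memory-optimization techniques listed above, activation checkpointing trades compute for memory: the forward pass caches activations only at segment boundaries, and the backward pass replays the forward computation to recover the rest. The sketch below is a toy, framework-free illustration; the function names and the `every` parameter are invented for this example, not a real library API.

```python
def forward_with_checkpoints(x, layers, every=2):
    """Run x through `layers`, caching activations only at every
    `every`-th layer boundary instead of at every layer.

    Returns (output, saved), where saved maps layer index -> the
    activation entering that layer.
    """
    saved = {0: x}                          # always keep the input
    for i, f in enumerate(layers):
        x = f(x)
        if (i + 1) % every == 0 and (i + 1) < len(layers):
            saved[i + 1] = x                # checkpoint a boundary
    return x, saved

def recompute_segment(saved, layers, i):
    """Recover the activation entering layer i, as the backward pass
    would: replay forward from the nearest earlier checkpoint."""
    start = max(k for k in saved if k <= i)
    x = saved[start]
    for j in range(start, i):
        x = layers[j](x)
    return x
```

With `every=2`, peak activation memory drops roughly in half at the cost of one extra partial forward pass per segment during backward; real implementations (e.g. PyTorch's `torch.utils.checkpoint`) apply the same idea inside autograd.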
Nice-to-Have:
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
Location: San Francisco, CA, US