LLM Inference Frameworks and Optimization Engineer
San Francisco, Singapore, Amsterdam

Together AI
San Francisco, CA, United States
1 day ago
Job type
  • Full-time
Job description

Inference Frameworks And Optimization Engineer

At Together.ai, we are building state-of-the-art infrastructure to enable efficient and scalable inference for large language models (LLMs). Our mission is to optimize inference frameworks, algorithms, and infrastructure, pushing the boundaries of performance, scalability, and cost-efficiency.

We are seeking an Inference Frameworks and Optimization Engineer to design, develop, and optimize distributed inference engines that support multimodal and language models at scale. This role will focus on low-latency, high-throughput inference, GPU / accelerator optimizations, and software-hardware co-design, ensuring efficient large-scale deployment of LLMs and vision models.

Responsibilities

Inference Framework Development And Optimization

  • Design and develop a fault-tolerant, high-concurrency distributed inference engine for text, image, and multimodal generation models.
  • Implement and optimize distributed inference strategies, including Mixture of Experts (MoE) parallelism, tensor parallelism, and pipeline parallelism, for high-performance serving.
  • Apply CUDA graph optimizations, TensorRT / TRT-LLM graph optimizations, PyTorch compilation (torch.compile), and speculative decoding to enhance efficiency and scalability (an illustrative sketch of the compilation path follows this list).
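For context only, here is a minimal sketch of the torch.compile path named above; the toy module, shapes, and "reduce-overhead" mode choice are illustrative assumptions, not Together AI's actual serving stack. In this mode, torch.compile can capture CUDA graphs for static shapes, which is one way to cut per-step kernel launch overhead during decoding.

```python
# Illustrative only: a toy decoder block compiled with torch.compile's
# "reduce-overhead" mode, which captures CUDA graphs for static shapes.
# Module, sizes, and inputs are hypothetical, not a production serving stack.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    def __init__(self, d_model: int = 1024, n_heads: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        h, _ = self.attn(h, h, h, need_weights=False)
        x = x + h
        return x + self.mlp(self.ln2(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ToyBlock().to(device).eval()

# "reduce-overhead" enables CUDA graph capture on GPU, reducing launch
# overhead for repeated decode steps with stable shapes.
compiled = torch.compile(model, mode="reduce-overhead")

with torch.inference_mode():
    x = torch.randn(4, 128, 1024, device=device)
    out = compiled(x)  # first call compiles; later calls replay the captured graph
print(out.shape)
```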

Software-Hardware Co-Design And AI Infrastructure

  • Collaborate with hardware teams on performance bottleneck analysis and co-optimize inference performance for GPUs, TPUs, or custom accelerators.
  • Work closely with AI researchers and infrastructure engineers to develop efficient model execution plans and optimize end-to-end (E2E) model serving pipelines.

Qualifications

Must-Have:

  • Experience: 3+ years of experience with deep learning inference frameworks, distributed systems, or high-performance computing.
  • Technical Skills: Familiarity with at least one LLM inference framework (e.g., TensorRT-LLM, vLLM, SGLang, TGI (Text Generation Inference)).
  • Background Knowledge and Experience: In at least one of the following: GPU programming (CUDA / Triton / TensorRT), compilers, model quantization, or GPU cluster scheduling.
  • Deep Understanding: Of KV cache systems such as Mooncake, PagedAttention, or custom in-house variants (a toy block-table sketch follows this list).
  • Programming: Proficient in Python and C++ / CUDA for high-performance deep learning inference.
  • Optimization Techniques: Deep understanding of Transformer architectures and LLM / VLM / Diffusion model optimization. Knowledge of inference optimization techniques such as workload scheduling, CUDA graphs, compilation, and efficient kernels.
  • Soft Skills: Strong analytical problem-solving skills with a performance-driven mindset. Excellent collaboration and communication skills across teams.
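
As background on the KV cache systems named above, here is a toy Python sketch of a PagedAttention-style block table; the class, block size, and method names are illustrative assumptions, not vLLM's or Mooncake's actual implementation, which also manages GPU tensors, eviction, and prefix sharing.

```python
# Toy sketch of a PagedAttention-style KV cache block table.
# Hypothetical class for illustration only; real systems are far more involved.
BLOCK_SIZE = 16  # tokens per physical KV block (assumed)

class BlockTable:
    def __init__(self, num_physical_blocks: int):
        self.free = list(range(num_physical_blocks))  # free physical block ids
        self.table = {}  # sequence id -> list of physical block ids

    def append_token(self, seq_id: int, position: int) -> tuple[int, int]:
        """Map a logical token position to (physical_block, offset)."""
        blocks = self.table.setdefault(seq_id, [])
        if position % BLOCK_SIZE == 0 and position // BLOCK_SIZE == len(blocks):
            if not self.free:
                raise RuntimeError("out of KV cache blocks; evict or preempt")
            blocks.append(self.free.pop())  # allocate a new block lazily
        return blocks[position // BLOCK_SIZE], position % BLOCK_SIZE

    def release(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.table.pop(seq_id, []))

# Usage: sequences draw blocks on demand instead of reserving max length up front.
bt = BlockTable(num_physical_blocks=8)
for pos in range(20):
    blk, off = bt.append_token(seq_id=0, position=pos)
print(bt.table[0])   # e.g. [7, 6]: two physical blocks cover 20 tokens
bt.release(0)
```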
Nice-to-Have:

  • Experience in developing software systems for large-scale data center networks with RDMA / RoCE
  • Familiarity with distributed filesystems (e.g., 3FS, HDFS, Ceph)
  • Familiarity with open-source distributed scheduling / orchestration frameworks, such as Kubernetes (K8s)
  • Contributions to open-source deep learning inference projects
Why Join Us?

This role offers a unique opportunity to shape the future of LLM inference infrastructure, ensuring scalable, high-performance AI deployment across a diverse range of applications. If you're passionate about pushing the boundaries of AI inference, we'd love to hear from you!

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 - $230,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

