Job Summary:
We are seeking experienced Platform Engineers with expertise in MLOps and
distributed systems, particularly Kubernetes, along with a strong background in managing
multi-GPU, multi-node deep learning job and inference scheduling. Candidates should be
proficient in Linux (Ubuntu) systems, able to write complex shell scripts, comfortable
working with configuration management tools, and have a solid understanding of deep
learning workflows.
Required Skills & Qualifications:
Experience:
3+ years of experience in platform engineering, DevOps, or systems
engineering, with a strong focus on machine learning and AI workloads.
Proven experience working with LLM workflows and GPU-based machine
learning infrastructure.
Hands-on experience in managing distributed computing systems, training
large-scale models, and deploying AI systems in cloud environments.
Knowledge of GPU architectures (e.g., NVIDIA A100, V100), multi-GPU
systems, and optimization techniques for AI workloads.
Technical Skills:
Proficiency in Linux systems and command-line tools. Strong scripting skills
(Python, Bash, or similar).
Expertise in containerization and orchestration technologies (e.g., Docker,
Kubernetes, Helm).
Experience with cloud platforms (AWS); infrastructure-as-code tools such as
Terraform, Terragrunt, or similar; and exposure to automating CI/CD
pipelines using Jenkins, GitLab, GitHub, etc.
Familiarity with machine learning frameworks (TensorFlow, PyTorch, etc.) and
deep learning model deployment pipelines. Exposure to vLLM or the NVIDIA
software stack for data and model management is preferred.
Expertise in performance optimization tools and techniques for GPUs, including
memory management, parallel processing, and hardware acceleration.
Soft Skills:
Strong problem-solving skills and ability to work on complex system-level
challenges.
Excellent communication skills, with the ability to collaborate across technical
and non-technical teams.
Self-motivated and capable of driving initiatives in a fast-paced environment.
Good to Have Skills:
Experience in building or managing machine learning platforms, specifically for
generative AI models or large-scale NLP tasks.
Familiarity with distributed computing frameworks (e.g., Dask, MPI, PyTorch DDP) and
data pipeline orchestration tools (e.g., AWS Glue, Apache Airflow, etc.).
Knowledge of AI model deployment frameworks such as TensorFlow Serving,
TorchServe, vLLM, Triton Inference Server.
Good understanding of LLM inference and how to optimize it on self-managed infrastructure.
Understanding of AI model explainability, fairness, and ethical AI considerations.
Experience in automating and scaling the deployment of AI models on a global
infrastructure.