Job Summary:
We are seeking experienced Platform Engineers with expertise in MLOps and distributed
systems, particularly Kubernetes, along with a strong background in managing multi-GPU,
multi-node deep learning job and inference scheduling. The role requires proficiency in
Linux (Ubuntu) systems, the ability to write complex shell scripts, solid experience with
configuration management tools, and a sound understanding of deep learning workflows.
Required Skills & Qualifications:
○ 3+ years of experience in platform engineering, DevOps, or systems
engineering, with a strong focus on machine learning and AI workloads.
○ Proven experience working with LLM workflows and GPU-based machine
learning infrastructure.
○ Hands-on experience in managing distributed computing systems, training
large-scale models, and deploying AI systems in cloud environments.
○ Knowledge of GPU architectures (e.g., NVIDIA A100, V100), multi-GPU
systems, and optimization techniques for AI workloads.
○ Proficiency in Linux systems and command-line tools. Strong scripting skills
(Python, Bash, or similar).
○ Expertise in containerization and orchestration technologies (e.g., Docker,
Kubernetes, Helm).
○ Experience with cloud platforms (AWS), tools such as Terraform / Terragrunt or
similar infrastructure-as-code solutions, and exposure to automating CI/CD
pipelines using Jenkins, GitLab, GitHub, etc.
○ Familiarity with machine learning frameworks (TensorFlow, PyTorch, etc.) and
deep learning model deployment pipelines. Exposure to vLLM or the NVIDIA
software stack for data & model management is preferred.
○ Expertise in performance optimization tools and techniques for GPUs, including
memory management, parallel processing, and hardware acceleration.
○ Strong problem-solving skills and ability to work on complex system-level
challenges.
○ Excellent communication skills, with the ability to collaborate across technical
and non-technical teams.
○ Self-motivated and capable of driving initiatives in a fast-paced environment.
Good to Have Skills:
○ Experience with generative AI models or large-scale NLP tasks.
○ Familiarity with data pipeline orchestration tools (e.g., AWS Glue, Apache Airflow, etc.).
○ Exposure to model serving frameworks such as TorchServe, vLLM, Triton Inference Server.
○ infrastructure.