MLOps Engineer at KANINI
Openings: 2 positions
Location: Denver, CO
Duration: Long Term
Overview
The MLOps Engineer (GCP Specialization) is responsible for designing, implementing, and maintaining infrastructure and processes on Google Cloud Platform (GCP) to enable the seamless development, deployment, and monitoring of machine learning models at scale. This role bridges data science, data engineering, and infrastructure, ensuring that machine learning systems are reliable, scalable, and optimized for GCP environments.
Key Responsibilities
- Model Deployment: Design and implement pipelines for deploying machine learning models into production using GCP services such as Vertex AI, AI Platform, Cloud Run, and Cloud Composer, ensuring high availability and performance.
- Infrastructure Management: Build and maintain scalable GCP-based infrastructure using Google Compute Engine, Google Kubernetes Engine (GKE), and Cloud Storage to support model training, deployment, and inference.
- Automation: Develop automated workflows for data ingestion, model training, validation, and deployment using GCP tools such as Cloud Composer, and CI/CD pipelines integrated with GitLab and Bitbucket repositories.
- Monitoring and Maintenance: Implement monitoring solutions using Google Cloud Monitoring and Logging to track model performance, data drift, and system health, and take corrective action as needed.
- Collaboration: Work closely with data science, data engineering, infrastructure, and DevOps teams to streamline the ML lifecycle and ensure alignment with business objectives.
- Versioning and Reproducibility: Manage versioning of datasets, models, and code using GCP tools such as Artifact Registry or Cloud Storage to ensure reproducibility and traceability of machine learning experiments.
- Optimization: Optimize model performance and resource utilization on GCP, leveraging containerization with Docker and GKE and using cost-efficient resources such as preemptible VMs and Cloud TPUs/GPUs.
- Security and Compliance: Ensure ML systems comply with data privacy regulations (e.g., GDPR, CCPA) using GCP security tools such as Cloud IAM, VPC Service Controls, and Data Loss Prevention (DLP).
- Tooling: Integrate GCP-native tools (e.g., Vertex AI, Cloud Composer) and open-source MLOps frameworks (MLflow, Kubeflow) to support the ML lifecycle.
Qualifications
Technical Skills
- Proficiency in Python
- Expertise in GCP services, including Vertex AI, Google Kubernetes Engine (GKE), Cloud Run, BigQuery, Cloud Storage, Cloud Composer (managed Airflow), and Dataproc with PySpark
- Experience with infrastructure-as-code (Terraform)
- Familiarity with containerization (Docker, GKE) and CI/CD pipelines (GitLab, Bitbucket)
- Knowledge of ML frameworks (TensorFlow, PyTorch, scikit-learn), MLOps tools compatible with GCP (MLflow, Kubeflow), and GenAI RAG applications
- Understanding of data engineering concepts, including ETL pipelines with BigQuery, Dataflow, and Dataproc (PySpark)
Soft Skills
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Ability to work in a fast-paced, cross-functional environment
Preferred Qualifications
- Experience with large-scale distributed ML systems on GCP, such as Vertex AI Pipelines, Kubeflow on GKE, or Feature Store
- Exposure to Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) applications and deployment strategies
- Familiarity with GCP's model monitoring tools and techniques for detecting data drift or model degradation
- Knowledge of microservices architecture and API development using Cloud Endpoints or Cloud Functions
- Google Cloud Professional certifications (e.g., Professional Machine Learning Engineer, Professional Cloud Architect)
Seniority level: Entry level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Software Development