We're hiring for our client, a fast-growing voice-AI company based in San Francisco. We're seeking a Founding Infrastructure / Platform Engineer to own the cloud, data, and deployment foundations powering next-generation AI systems.
You'll design scalable AWS environments, implement Infrastructure-as-Code with Terraform, build secure and observable systems, and collaborate closely with AI / ML engineers to take new models from concept to production.
This is an onsite role (5 days a week in San Francisco) and ideal for someone who thrives in a hands-on, high-ownership environment where infrastructure reliability directly drives product quality.
What You'll Do
- Design and operate AWS production infrastructure using Terraform with secure networking, automated rollbacks, and reliable defaults.
- Build and maintain data pipelines for ingestion, transformation, and model training.
- Partner with AI / ML teams to deploy and monitor inference and evaluation services in production.
- Manage PostgreSQL performance and reliability, including schema design, indexing, and pgvector for ML workloads.
- Own CI / CD workflows and environment hygiene using GitHub Actions or similar.
- Implement robust observability across services and data jobs (metrics, tracing, alerting, and incident response).
- Drive cost-efficient scaling while maintaining performance and uptime across systems.
- Collaborate cross-functionally to ensure infrastructure aligns with product and ML goals.
What We're Looking For
- 6-10+ years in Infrastructure, Platform, or SRE roles with full production ownership.
- Deep expertise with AWS and Infrastructure-as-Code (Terraform, Pulumi, or CloudFormation).
- Strong programming skills in Python or TypeScript, with scripting experience (Bash / YAML).
- Proven experience with containers and orchestration (Docker, Kubernetes, or ECS).
- Hands-on experience designing and maintaining data pipelines (batch or streaming).
- PostgreSQL at scale, ideally with exposure to embeddings or pgvector.
- Strong observability and on-call practices (metrics, tracing, alerting, incident management).
- Excellent collaboration with AI / ML and product teams; clear communication of risk and trade-offs.
- Must be authorized to work in the U.S. and able to work onsite in San Francisco.

Nice to Have
- Experience supporting ML model training and inference pipelines.
- Prior work with low-latency systems, voice interfaces, or recommendation engines.
- History as an early infrastructure owner at a Seed to Series B startup.

Why This Role
You'll be joining as a founding-caliber engineer, shaping the infrastructure blueprint that supports advanced AI matching and voice systems.
Expect autonomy, direct collaboration with technical founders, and the chance to build scalable, high-impact systems from the ground up.