About Etched
Etched is building AI chips that are hard-coded for individual model architectures. Our first product, Sohu, supports only transformers, but it delivers an order of magnitude higher throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.
Key responsibilities
- Contribute to the architecture and design of the Sohu host software stack
- Implement high-performance, modular code across the complete Etched software stack, which spans Rust, C++, and Python
- Interface with the firmware and driver teams to deliver the highest-performance HW/SW stack
- Work with AI model researchers and product-facing teams to build out the Etched serving front-end
Representative projects
- Build scheduling logic for handling continuous batching and real-time inference
- Implement inference-time acceleration techniques such as speculative decoding, tree search, KV cache sharing, etc.
- Implement distributed networking primitives for efficient multi-server inference
You may be a good fit if you have
- Experience with C++ and Python
- Familiarity with transformer model architectures and inference serving stacks (vLLM, SGLang, etc.), or experience working in distributed inference / training environments
- Experience working cross-functionally in large software and hardware organizations
Strong candidates may also have
- Experience with Rust
- Familiarity with GPU kernels, the CUDA compilation stack and related tools, or other hardware accelerators
- Understanding of distributed systems, networking, and parallel programming
Benefits
- Full medical, dental, and vision packages, with 100% of premiums covered
- Housing subsidy of $2,000 / month for those living within walking distance of the office
- Daily lunch and dinner in our office
- Relocation support for those moving to Cupertino
How we're different
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in Cupertino, and we greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.