Senior Software Engineer, Data Engineering
Omada Health is on a mission to inspire and engage people in lifelong health, one step at a time.
Job overview: We are dedicated to leveraging data to drive strategic decision-making and operational efficiency. Our team is passionate about harnessing the power of data to solve complex problems and deliver impactful insights.
We are seeking a highly skilled and motivated Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining robust data architectures, data models, and pipelines. This role will play a critical part in ensuring the integrity, scalability, and performance of our data processing systems and products.
Key responsibilities:
- Data Architecture: Design, develop, and implement scalable, secure, and efficient data solutions that meet the needs of the organization.
- Data Modeling: Create and maintain logical and physical data models to support business intelligence, analytics, and reporting requirements.
- Pipeline Engineering: Design, build, and optimize ETL (Extract, Transform, Load) processes and data pipelines to ensure smooth and efficient data flow from various sources.
- Data Integration: Integrate diverse data sources, including APIs, databases, and third-party data, into a unified data platform.
- Performance Optimization: Monitor and optimize the performance of data systems and pipelines to ensure low latency and high throughput.
- Data Quality and Governance: Implement data quality checks, validation processes, and governance frameworks to ensure the accuracy and reliability of data.
- Collaboration: Partner closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
- Documentation: Maintain comprehensive documentation of data architectures, models, and pipelines for ongoing maintenance and knowledge sharing.
- Training: Train and collaborate effectively with teammates on data engineering best practices.
- Technical Influence/Leadership: Recommend policy changes and establish department-wide procedures; use extensive experience and knowledge to resolve complex problems.
- Operations: Monitor and manage the production environment to deliver data within defined SLAs.
How you can make an impact:
- You'll evaluate, benchmark, and improve the scalability, robustness, and performance of our data platform and applications.
- You'll make significant contributions to the architecture and design of our data processing platform.
- You'll implement scalable, fault-tolerant, and accurate ETL frameworks.
- You'll gather and process raw data at scale from diverse sources.
- You'll collaborate with product management, data scientists, analysts, and other engineers on technical vision, design, and planning.
- You'll implement and maintain a high level of data quality monitoring in our analytics & ML ecosystem.
- You'll train and collaborate effectively with teammates on data engineering best practices.
- You'll be responsible for leading, documenting, and collaborating across teams on technical projects.
You will love this job if:
- You are passionate about building data-driven systems that enable Data Scientists, Data Analysts, and AI/ML Engineers.
- You want to make a difference by empowering digital healthcare through data-driven decision making.
- You would like to learn how to build scalable, performant, and reliable data pipelines.
What you need for this role:
- 5+ years of experience building, maintaining, and orchestrating scalable data pipelines.
- 3+ years of experience as a data engineer developing or maintaining integrations with software such as Airflow or any Python-based data pipeline codebase.
- Experience applying a variety of integration patterns for different use cases.
- Experience in backend software development contributing to distributed computing and data technologies, with broad experience across systems, contexts, and ideas.
- Experience implementing data pipelines and improving the performance of ETL processes and related SQL queries.
- Experience in data modeling for OLTP and OLAP applications.
- Experience with cloud platforms such as Amazon AWS.
- Familiarity with workflow management tools (Airflow preferred).
- Familiarity with cloud-based data warehouses (Amazon Redshift preferred).
- Exceptional problem-solving and analytical skills.
- Experience working with sensitive data (i.e., PHI/PII) and security best practices.
- Familiarity with data governance practices and principles.
Technical skills:
- Proficiency in SQL and experience with relational databases (e.g., MySQL, PostgreSQL).
- Proficiency in analytical SQL (e.g., analytics queries, distributed database queries) and experience working with massively parallel processing (MPP) databases (e.g., Redshift, BigQuery, Snowflake).
- Proficiency in programming languages such as Python, Java, or Scala.
- Knowledge of data modeling techniques (e.g., 3NF) and tools (e.g., ER/Studio, ERwin).
- Software engineering mindset: apply best practices to write elegant, maintainable code, and understand automated testing concepts.
- Familiarity with business intelligence tools and environments.
- Familiarity with big data technologies (e.g., Lambda, Hadoop, Spark).
- Communication skills: excellent written and verbal communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.
- Self-directed: Lead projects and tasks effectively with cross-functional stakeholders and minimal guidance. You care about writing quality software and recognize that there are often many right answers.
- Education: Bachelor's degree in Computer Science or a similar discipline preferred.
Technologies we use: Ruby on Rails, Redshift, Athena, Postgres, SQL, Python, Apache Airflow, Appflow, S3, SNS, SQS, Kafka, Docker, Kubernetes, AWS infrastructure, Lambda, Serverless, Tableau, Bugsnag, Datadog, GitLabCI
Bonus points for:
- Experience with big data technologies such as Hadoop, Spark, or Kafka.
- Experience building internal frameworks or development productivity tools.
- Experience building data infrastructure, frameworks, and automation.
- Understanding of automated testing concepts and the ability to apply them consistently.
Benefits:
- Competitive salary with generous annual cash bonus
- Equity grants
- Remote-first, work-from-home culture
- Flexible vacation to help you rest, recharge, and connect with loved ones
- Generous parental leave
- Health, dental, and vision insurance (and above-market employer contributions)
- 401k retirement savings plan
- Lifestyle Spending Account (LSA)
- Mental health support solutions
- ...and more!
It takes a village to change health care. As we build together toward our mission, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as you consider Omada!
- Cultivate Trust. We actively cultivate trust through attentive listening and supporting one another. We respectfully provide and are open to receiving candid feedback.
- Seek Context. We ask to understand and we build connections. We do our research up front to move faster down the road.
- Act Boldly. We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.
- Deliver Results. We reward impact above output. We set a high bar, we're not afraid to fail, and we take pride in our work.
- Succeed Together. We prioritize Omada's progress above team or individual. We have fun as we get stuff done, and we celebrate together.