Summary
As a Senior Data Engineer on the Data Engineering team, you will provide expert leadership in Snowflake cloud data warehouse modeling, design, deployment, and maintenance of the data engineering framework that supports the enterprise's data needs both internally and externally. You will play a key role in advancing data analytics initiatives by designing, building, and managing data ingestion, transformation, and maintenance processes to ensure the delivery of accurate and reliable data across Snowflake and on-premises environments. Additionally, you will drive continuous improvement of data engineering operations by innovating and developing self-service automation tools that enhance efficiency and simplify the overall customer experience. This role uniquely blends technical expertise in data engineering with a deep understanding of the business systems and processes involved, ensuring data solutions align with organizational goals.
Responsibilities:
- Design and implement Snowflake-based data solutions aligned with business requirements and industry best practices
- Perform engineering tasks including data analysis, data modeling, reverse engineering, and performance optimization
- Collaborate closely with other data engineers to design and optimize ELT / ETL pipelines using tools such as Matillion and Snowflake's native features
- Design, develop, and deploy data pipelines to create or enhance data warehouses / marts
- Build required infrastructure for optimal extraction, transformation, and loading of data from various data sources using AWS and database technologies
- Stay current with emerging AI technologies and best practices, with a focus on Snowflake's AI capabilities and platform advancements, to continuously improve AI-enabled data solutions
- Document technical specifications, data models, process flows, standards, processes, and procedures
- Identify, design, and implement internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Guide and mentor data engineering team members on advanced data engineering theories, principles, and concepts
- Define comprehensive data processing strategies to ensure optimal outcomes
- Provide on-call production support on a rotational basis, including monitoring and facilitating resolution of issues for all data engineering-related services
- Set design, test, documentation, and implementation standards in accordance with HIPAA and other patient data management policies, and ensure that data management systems adhere to them
Credentials and Experience:
- Bachelor's degree in Information Systems, Computer Science, Business Administration, or a related field
- Minimum 8 years of experience in implementation of data engineering initiatives
- Minimum of 3 years of cloud experience (AWS preferred)
- Minimum of 3 years of experience working and communicating with cross-functional teams, deriving requirements, and architecting data models and shared datasets
- A strong engineering background and expertise in working with data and databases
- Experience in software development environments using Agile / SCRUM and code management / versioning
- Deep expertise in a Data Engineering role, with a focus on building complex data pipelines or conducting data-intensive analysis on cloud platforms (AWS preferred) and in on-premises environments
- Ability to build and maintain the infrastructure to support ELT / ETL processing: extracting data from multiple data sources such as SQL, NoSQL, and other platform APIs, and loading it into a centralized data warehouse / data lake to facilitate unified reporting
- Extensive experience working in Snowflake's data cloud environment
- Working knowledge and understanding of Snowflake's features for data engineering, analytics, AI, and applications
- Experience with master data management, data modeling, and data governance best practices
- Experience with one or more AWS and / or cloud services such as EC2, S3, RDS, Lambda, etc.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI / CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience working with ETL tools like Informatica, SSIS, Matillion, or Snowflake's native ingestion features
- Experience working with tools like Spark, Kafka, and Airflow
- Experience working with Unix platforms and scripting
- Advanced knowledge of T-SQL and SQL; knowledge of other programming languages such as Python is preferred
- Ability to apply advanced data engineering theories, principles, and concepts, and to mentor junior engineers on them
- Continuously seek out industry best practices and develop skills to create new capabilities for the Data Engineering team
- Ability to understand and work with complex, large enterprise business environments