Senior Data Engineer – DevOps [GitLab, Terraform]

First Citizens Bank • Raleigh, North Carolina, US
Posted 14 days ago
Job type
  • Full-time
Job description

Overview

This is a remote role that may only be hired in the following locations: NC, AZ, TX.

We are seeking an experienced DevOps Engineer to design, build, and maintain CI/CD pipelines, infrastructure automation, and deployment workflows supporting our data engineering platform. This role focuses on infrastructure as code, configuration management, cloud operations, and enabling data engineers to deploy reliably and rapidly across AWS and Azure environments.

Responsibilities

CI/CD Pipeline & Deployment Automation

  • Design and implement robust CI/CD pipelines using Azure DevOps or GitLab; automate build, test, and deployment processes for data applications, dbt Cloud jobs, and infrastructure changes.
  • Build deployment orchestration for multi-environment (dev, QA, UAT, production) workflows with approval gates, rollback mechanisms, and artifact management.
  • Implement GitOps practices for infrastructure and application deployments; maintain version control and audit trails for all changes.
  • Optimize pipeline performance, reduce deployment times, and enable fast feedback loops for rapid iteration.

Infrastructure as Code (IaC) & Cloud Operations

  • Design and manage Snowflake, AWS, and Azure infrastructure using Terraform; ensure modularity, reusability, and consistency across environments.
  • Provision and manage cloud resources across AWS and Azure.
  • Implement tagging strategies and resource governance; maintain Terraform state management and implement remote state backends.
  • Support multi-cloud architecture patterns and ensure portability between AWS and Azure where applicable.
Configuration Management & Infrastructure Automation

  • Deploy and manage Ansible playbooks for configuration management, patching, and infrastructure orchestration across cloud environments.
  • Utilize Puppet for infrastructure configuration, state management, and compliance enforcement; maintain Puppet modules and manifests for reproducible environments.
  • Automate VM provisioning, OS hardening, and application stack deployment; reduce manual configuration and ensure environment consistency.
  • Build automation for scaling, failover, and disaster recovery procedures.
Snowflake Cloud Operations & Integration

  • Automate Snowflake provisioning, warehouse sizing, and cluster management via Terraform; integrate Snowflake with CI/CD pipelines.
  • Implement Infrastructure as Code patterns for Snowflake roles, permissions, databases, and schema management.
  • Build automated deployment workflows for dbt Cloud jobs and Snowflake objects; integrate version control with Snowflake changes.
  • Monitor Snowflake resource utilization, costs, and performance; implement auto-suspend/auto-resume policies and scaling strategies.
Python Development & Tooling

  • Develop Python scripts and tools for infrastructure automation, cloud operations, and deployment workflows.
  • Build custom integrations between CI/CD systems, cloud platforms, and Snowflake; create monitoring and alerting automation.
Monitoring, Logging & Observability

  • Integrate monitoring and logging solutions (Splunk, Dynatrace, CloudWatch, Azure Monitor) into CI/CD and infrastructure stacks.
  • Build automated alerting for infrastructure health, deployment failures, and performance degradation.
  • Implement centralized logging for applications, infrastructure, and cloud audit trails; maintain log retention and compliance requirements.
  • Create dashboards and metrics for infrastructure utilization, deployment frequency, and change failure rates.
Data Pipeline & Application Deployment

  • Support deployment of data processing jobs, Airflow DAGs, and dbt Cloud transformations through automated pipelines.
  • Implement blue-green or canary deployment patterns for zero-downtime updates to data applications.
  • Build artifact management workflows (Docker images, Python packages, dbt artifacts); integrate with Artifactory or cloud registries.
  • Collaborate with data engineers on deployment best practices and production readiness reviews.
Disaster Recovery & High Availability

  • Design backup and disaster recovery strategies for data infrastructure; automate backup provisioning and testing.
  • Implement infrastructure redundancy and failover automation using AWS/Azure native services.
Documentation & Knowledge Sharing

  • Maintain comprehensive documentation for infrastructure architecture, CI/CD workflows, and operational procedures.
  • Create runbooks and troubleshooting guides for common issues; document infrastructure changes and design decisions.
  • Establish DevOps best practices and standards; share knowledge through documentation, lunch-and-learns, and mentoring.
Qualifications

Bachelor's degree and 4 years of experience in data engineering, big data technologies, and cloud platforms; OR high school diploma or GED and 8 years of experience in data engineering, big data technologies, and cloud platforms.

Preferred:

Technical/Business Skills:

  • CI/CD tools: Azure DevOps Pipelines or GitLab CI/CD (hands-on pipeline development)
  • Infrastructure as Code: Terraform (AWS and Azure providers), production-grade experience
  • Configuration Management: Ansible and/or Puppet; ability to write playbooks/manifests and manage infrastructure state
  • Cloud platforms: AWS (EC2, S3, RDS, VPC, IAM, Lambda, Glue, Lake Formation) and Azure (VMs, App Services, Blob Storage, Cosmos DB, networking)
  • Python programming: scripting, automation, API integration, and tooling development
  • Snowflake: operational knowledge of warehouse management, cost optimization, and cloud integration
  • Git/GitLab/GitHub: version control, branching strategies, and repository management
  • Linux/Unix system administration and command-line proficiency
  • Networking fundamentals: VPCs, subnets, security groups, DNS, load balancing
  • Scripting languages: Bash, Python, or similar for automation
  • 5+ years in DevOps, Platform Engineering, or Infrastructure Engineering
  • 3+ years hands-on with Terraform and Infrastructure as Code
  • 3+ years with CI/CD tools (Jenkins, GitLab CI, Azure DevOps, or similar)
  • 2+ years with configuration management tools (Ansible, Puppet, or similar)
  • 2+ years supporting cloud platforms (AWS and/or Azure in production)
  • 1+ years with Python automation and scripting
  • Experience supporting or integrating with Snowflake or modern data warehouses
  • Financial banking experience is a plus.
  • Must have one or more certifications in the relevant technology fields.

Functional Skills / Core Competencies:

  • Strong automation mindset: identify and eliminate manual toil.
  • Systems thinking: understand full deployment pipelines and infrastructure dependencies.
  • Comfortable with continuous learning of new tools and cloud services.
  • Ability to balance speed of delivery with stability and safety.
  • Team Player: support peers, team, and department management.
  • Communication: excellent verbal, written, and interpersonal communication skills.
  • Problem Solving: excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
  • Partnership and Collaboration: develop and maintain partnerships with business and IT stakeholders.
  • Attention to Detail: ensure accuracy and thoroughness in all tasks.

#LI-XG1

Benefits are an integral part of total rewards, and First Citizens Bank is committed to providing a competitive, thoughtfully designed, and quality benefits program to meet the needs of our associates. More information can be found on the benefits page.
