Talent.com
Data Engineer with Data Modeler Expertise

Lorven Technologies • Bethlehem, Pennsylvania, United States
Posted more than 30 days ago
Contract type
  • Full-time
Job description

Our client is currently looking for a Data Engineer with Data Modeler expertise for a long-term project in Bethlehem, PA (Hybrid). The detailed requirements are below.

Position : Data Engineer with Data Modeler Expertise

Location : Bethlehem, PA (Hybrid)

Duration : Long Term Contract

Job Description :

  • Bachelor's degree in Computer Science or equivalent, with a minimum of 10+ years of relevant experience.
  • 5+ years of experience as a Data Engineer
  • Strong technical expertise in Python and SQL
  • Experience with big data tools such as Hadoop and Apache Spark (PySpark)
  • Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, Glue DataBrew, EMR / Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, CloudTrail
  • Responsible for building test, QA & UAT environments using CloudFormation (a minimal provisioning sketch follows this list)
  • Build and implement CI / CD pipelines for the EDP platform using CloudFormation and Jenkins
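For illustration only, here is a minimal Python sketch of the kind of environment provisioning described above: it uses boto3 to create a QA stack from a CloudFormation template and waits for it to finish. The template file name, stack name, region, and parameter are placeholders, not details of the client's EDP platform.

import boto3

# Hypothetical sketch: stand up a QA environment stack from a CloudFormation template.
cfn = boto3.client("cloudformation", region_name="us-east-1")  # placeholder region

with open("edp-environment.yaml") as f:  # placeholder template file
    template_body = f.read()

stack_name = "edp-qa"  # placeholder stack name
cfn.create_stack(
    StackName=stack_name,
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "qa"}],  # placeholder parameter
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required when the template creates named IAM resources
)

# Block until CloudFormation reports the stack as fully created.
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
print(f"Stack {stack_name} is ready")

Reusing one template with a different stack name and Environment parameter is how the same definition can produce consistent test, QA, and UAT environments.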

Key Skills :

  • Implement high-velocity streaming solutions and orchestration using Amazon Kinesis, Amazon Managed Workflows for Apache Airflow (MWAA), and Amazon MSK (preferred)
  • Solid experience building solutions on AWS data lakes / data warehouses
  • Analyze, design, develop, and implement data ingestion pipelines in AWS
  • Knowledge of implementing ETL / ELT for data solutions end to end
  • Ingest data from REST APIs into the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift (see the ingestion sketch after this list)
  • Perform peer code reviews and code quality analysis, using the associated tools end to end, for Prudential's platforms
  • Create detailed, comprehensive, and well-structured test cases that follow best practices and techniques
  • Understand requirements and data solutions (ingestion, storage, integration, processing, access) on AWS
  • Knowledge of implementing RBAC strategies / solutions using AWS IAM and the Redshift RBAC model
  • Knowledge of analyzing data using SQL stored procedures
  • Build automated data pipelines to ingest data from relational database systems, file systems, and NAS shares into AWS relational databases such as Amazon RDS, Aurora, and Redshift
  • Develop test plans, execute manual and automated test cases, help identify root causes, and articulate defects clearly
  • Recreate production issues to help diagnose them and verify fixes
  • Conduct end-to-end verification and validation for the entire application
  • Create Jenkins CI pipelines to integrate Sonar / security scans and test automation scripts
  • Use Git / Bitbucket for efficient remote teamwork, storing the framework, and developing test scripts
  • Be part of the DevOps QA and AWS team focused on building the CI / CD pipeline
  • Be part of the release / build team, working mainly on release management and the CI / CD pipeline
  • Deploy multiple instances using CloudFormation templates
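As a rough sketch of the REST-API-to-data-lake ingestion mentioned in the list above, the Python snippet below pulls one page of records from an API and lands the raw payload in S3 with boto3. The endpoint, bucket, and key layout are placeholders rather than the client's actual sources.

import json
from datetime import date

import boto3
import requests

# Hypothetical sketch: pull one page of records from a source REST API.
response = requests.get(
    "https://api.example.com/v1/orders",          # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},  # in practice the token would come from Secrets Manager
    params={"page": 1},
    timeout=30,
)
response.raise_for_status()
records = response.json()

# Land the raw payload in the S3 data lake, partitioned by load date.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-data-lake",  # placeholder bucket
    Key=f"raw/orders/load_date={date.today().isoformat()}/page_1.json",  # placeholder prefix layout
    Body=json.dumps(records).encode("utf-8"),
)

In a production pipeline this logic would typically run inside a Lambda function or a Glue / MWAA task, with pagination, retries, and schema validation wrapped around it.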
