Our client is currently looking for a Data Engineer with Data Modeler expertise for a long-term project in Bethlehem, PA (Hybrid). Below is the detailed requirement.
Position : Data Engineer with Data Modeler Expertise
Location : Bethlehem, PA (Hybrid)
Duration : Long Term Contract
Job Description :
- Bachelor's degree in Computer Science or equivalent, with a minimum of 10 years of relevant experience.
- 5+ years of experience as a Data Engineer
- Strong technical expertise in Python and SQL
- Experience with Big Data tools such as Hadoop and Apache Spark (PySpark)
- Solid experience with AWS services such as CloudFormation, S3, Athena, Glue, Glue DataBrew, EMR/Spark, RDS, Redshift, DataSync, DMS, DynamoDB, Lambda, Step Functions, IAM, KMS, SM, EventBridge, EC2, SQS, SNS, Lake Formation, CloudWatch, CloudTrail
- Responsible for building test, QA, and UAT environments using CloudFormation.
- Build & implement CI / CD pipelines for the EDP Platform using Cloud Formation and Jenkins
Key Skills :
- Implement high-velocity streaming solutions and orchestration using Amazon Kinesis, AWS Managed Airflow, and AWS Managed Kafka (preferred)
- Solid experience building solutions on an AWS data lake / data warehouse
- Analyze, design, develop, and implement data ingestion pipelines in AWS
- Knowledge of implementing ETL/ELT for data solutions end to end
- Ingest data from REST APIs into the AWS data lake (S3) and relational databases such as Amazon RDS, Aurora, and Redshift
- Perform peer code reviews and code quality analysis, using the associated tools end to end for Prudential's platforms
- Create detailed, comprehensive, and well-structured test cases that follow best practices and techniques
- Understand requirements and data solutions (ingest, storage, integration, processing, access) on AWS
- Knowledge of implementing an RBAC strategy/solution using AWS IAM and the Redshift RBAC model
- Knowledge of analyzing data using SQL stored procedures
- Build automated data pipelines to ingest data from relational database systems, file systems, and NAS shares into AWS relational databases such as Amazon RDS, Aurora, and Redshift (see the PySpark sketch after this list)
- Build automated pipelines to develop test plans, execute manual and automated test cases, help identify root causes, and articulate defects clearly
- Recreate production issues to help determine the root cause and verify fixes
- Conduct end-to-end verification and validation of the entire application
- Create Jenkins CI pipelines that integrate Sonar/security scans and test automation scripts
- Use Git/Bitbucket for efficient remote teamwork, storing the framework, and developing test scripts
- Work as part of the DevOps QA and AWS team, focusing on building the CI/CD pipeline
- Work as part of the release/build team, mainly on release management and the CI/CD pipeline
- Deploy multiple instances using CloudFormation templates
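For illustration only: a minimal PySpark sketch of one such ingestion pipeline, reading a relational table over JDBC and landing it in the S3 data lake as partitioned Parquet. The host, database, table, and bucket names are hypothetical, and the PostgreSQL JDBC driver is assumed to be on the Spark classpath (as it is on EMR with the right configuration).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rds-to-s3-ingest").getOrCreate()

# Source: a hypothetical Amazon RDS PostgreSQL table, read over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-rds-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "change-me")  # in practice, fetch from AWS Secrets Manager
    .load()
)

# Sink: partitioned Parquet in the data lake bucket, queryable via Glue/Athena.
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/raw/orders/")
)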