Interview Process :
- We are only considering candidates who are local to Chicago, IL or able to relocate there, and who do not require visa sponsorship.
Job Responsibilities :
- Design, develop, and maintain scalable ETL pipelines using AWS Glue
- Collaborate with data engineers and analysts to understand data requirements
- Build and manage data extraction, transformation, and loading processes
- Optimize and troubleshoot existing Glue jobs and workflows
- Ensure data quality, integrity, and security throughout the ETL process
- Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
- Maintain documentation of data workflows and processes
- Stay updated with the latest AWS tools and best practices
Required Skills :
- Strong hands-on experience with AWS Glue, PySpark, and Python (see the sketch at the end of this posting)
- Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
- Experience with data warehousing concepts and tools
- Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
- Solid understanding of data modeling, data integration, and data management
- Exposure to AWS Batch, Step Functions, and Data Catalogs
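For context, the sketch below illustrates the kind of Glue PySpark ETL job this role involves: reading a Data Catalog table, applying a simple field mapping, and writing Parquet back to S3. It is a minimal sketch only; the database, table, bucket, and field names are hypothetical placeholders, not part of this posting.

    import sys
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap: resolve the job name and initialize the Glue/Spark contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    sc = SparkContext()
    glueContext = GlueContext(sc)
    job = Job(glueContext)
    job.init(args["JOB_NAME"], args)

    # Read from a hypothetical Data Catalog table backed by raw files in S3.
    source = glueContext.create_dynamic_frame.from_catalog(
        database="example_db", table_name="example_raw_events"
    )

    # Rename and cast a few fields as a simple transformation step.
    mapped = ApplyMapping.apply(
        frame=source,
        mappings=[
            ("id", "string", "event_id", "string"),
            ("amount", "string", "amount", "double"),
        ],
    )

    # Write the curated result back to S3 as Parquet.
    glueContext.write_dynamic_frame.from_options(
        frame=mapped,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/events/"},
        format="parquet",
    )

    job.commit()

Note that the awsglue libraries are only available inside Glue's managed Spark environment, so a script like this is run as a Glue job rather than as standalone Python.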