Job description

- 3-5 years of experience in data engineering.
- Strong experience with distributed data processing (Spark, AWS Glue, EMR, or equivalent).
- Hands-on expertise with data modeling, ETL pipelines, and performance optimization.
- Strong hands-on expertise in building and optimizing ETL pipelines into Amazon Redshift.
- Proficiency in Python, PySpark, and SQL; familiarity with Iceberg tables preferred.
- Solid background in data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement).
- Orchestration experience with Airflow, Step Functions, and Lambda.
- Experience with Redshift performance tuning, schema design, and workload management.
- Cloud experience (AWS ecosystem preferred).