For our client, we are seeking an experienced Programmer Analyst Principal in Falls Church, VA, to design, develop, and maintain large-scale data systems in a cloud-based environment. The successful candidate will have strong expertise in data engineering, ETL development, and Amazon Redshift, as well as experience collaborating with technical and business teams to deliver reliable data solutions.
Responsibilities
Design and implement ETL pipelines to extract, transform, and load large datasets from multiple sources
Build and maintain data warehouses, including data modeling, governance, and quality control
Ensure data integrity, accuracy, and security through validation and cleansing processes
Optimize data systems for scalability, performance, and reliability
Collaborate with customers to understand technical requirements and provide best practice guidance on Amazon Redshift usage
Partner with cross-functional teams, including analysts, data scientists, and business stakeholders, to define and deliver data solutions
Provide technical support for Amazon Redshift, including troubleshooting and performance tuning
Identify and resolve data-related issues, such as pipeline failures and quality concerns
Develop technical documentation and knowledge articles to support internal teams and clients
Skills Required
Bachelor’s or Master’s degree in Computer Science or a related field, with at least 6 years of experience in Information Technology
Proficiency in one or more programming languages, such as Python, Java, or Scala
Strong understanding of database design, data modeling, and governance principles
Expertise in SQL query optimization, indexing, and performance tuning
Familiarity with data warehousing concepts (star and snowflake schemas)
Strong analytical and problem-solving skills
Experience with data integration and streaming tools such as Apache Kafka and Fivetran
Hands-on experience building ETL pipelines using AWS Glue, Apache Airflow, Python, and PySpark
Experience with agile development methodologies (Scrum or Kanban)
Skills Preferred
Experience with Dataiku, Power BI, Tableau, or Alteryx
Relevant AWS certifications (e.g., AWS Certified Data Analytics – Specialty)
Experience implementing AWS best practices for data management
Experience Required
8+ years of data engineering experience focused on large-scale system design
5+ years of hands-on experience writing optimized SQL queries for Oracle, SQL Server, and Redshift
5+ years of experience using AWS Glue and Python/PySpark for ETL pipeline development
Benefits: Health insurance, 401(k)
Pay Range: $70.00 to $75.00 per hour, depending on experience and qualifications
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from HEPCO, Inc. and its affiliates and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at https://www.hepcoinc.com/privacy/
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, gender expression, marital status, national origin, age, veteran status, disability, or any other protected class.