Job Summary :
The ETL Developer will design, develop, and maintain processes for extracting, transforming, and loading data between various systems, including legacy systems, databases, and data warehouses. This role supports reporting, analytics, and operations.
Location : Linthicum, Maryland, United States
Responsibilities :
- Design, build, and maintain ETL pipelines and workflows (batch, real-time) using ETL tools or custom scripts.
- Extract data from multiple sources, including relational databases, flat files, APIs, and mainframes.
- Transform data by cleansing, deduplicating, aggregating, and enriching it to meet business rules.
- Load data into target systems such as data warehouses, data marts, and reporting databases.
- Optimize ETL performance and ensure reliability and fault tolerance.
- Monitor ETL jobs, handle errors and exceptions, and implement retries and alerting.
- Maintain metadata, lineage, documentation, and data mapping.
- Test and validate data transformations, reconcile data, and verify accuracy.
- Collaborate with data analysts, DBAs, and application teams to define requirements and integration points.
- Support data migration, bridging, or legacy system transitions.
- Ensure data security, privacy, and compliance in ETL processes.
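The extract, transform, and load steps listed above can be sketched in miniature. This is an illustrative example only, not part of the role's actual stack: the source data, table name, and columns are all hypothetical, and SQLite stands in for a real warehouse target.

```python
# Minimal ETL sketch: extract rows from a flat-file source, cleanse and
# deduplicate them, then load them into a SQLite "warehouse" table.
# All names (source data, table, columns) are illustrative only.
import csv
import io
import sqlite3

# --- Extract: read raw rows from a flat-file source ---
raw_csv = """id,name,amount
1, Alice ,100
2,Bob,not-a-number
1, Alice ,100
3,Carol,250
"""
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# --- Transform: trim whitespace, reject invalid records, deduplicate ---
seen, clean = set(), []
for r in rows:
    try:
        record = (int(r["id"]), r["name"].strip(), float(r["amount"]))
    except ValueError:
        continue  # drop rows that fail type validation
    if record not in seen:  # deduplicate on the full record
        seen.add(record)
        clean.append(record)

# --- Load: insert into the target reporting table ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, name TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
conn.commit()

loaded = conn.execute("SELECT id, name, amount FROM sales ORDER BY id").fetchall()
```

In production these three stages would typically be separate, scheduled, and monitored jobs with error handling, retries, and reconciliation checks, as the responsibilities above describe.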
Required Skills & Certifications :
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- X+ years of experience in ETL / data integration roles.
- Experience with one or more ETL tools (Informatica, SSIS, Talend, etc.) or custom script-based ETL (Python, SQL).
- Strong SQL skills and experience working with relational databases.
- Knowledge of data modeling, normalization, and dimensional modeling.
- Experience with scheduling / job orchestration tools.
- Strong problem-solving, debugging, and data reconciliation skills.
- Good documentation, communication, and stakeholder collaboration skills.
Preferred Skills & Certifications :
- Experience in governmental / social services / legacy integration contexts.
- Experience with big data / streaming ETL (Kafka, Spark).
- Experience with cloud data pipelines (AWS Glue, Azure Data Factory).
- Familiarity with data governance, metadata, and data quality frameworks.
Special Considerations :
N/A
Scheduling :
N/A