A company is looking for a Data Engineer to design, build, and maintain scalable data pipelines for enterprise-level data integration.
Key Responsibilities
Build and maintain high-volume, scalable data pipelines using Apache Kafka and Apache Spark
Design, develop, and optimize data ingestion, transformation, and integration workflows across enterprise systems
Ensure data quality, consistency, and integrity across multiple data sources
Required Qualifications
Bachelor's degree in Computer Science, Information Systems, Engineering, Data Science, or a related field (or equivalent experience)
3+ years of experience in data engineering, data integration, or related technical roles
Strong hands-on experience with Apache Kafka and Apache Spark
Advanced SQL development experience, including complex queries and performance tuning
Experience working in the federal government sector or other highly regulated environments