Job Description:
Job Title: Python Full Stack Engineer
Location: McLean, VA / Remote
Duration: 3+ Months
About the Role:
We are seeking a highly skilled Data Engineer / Software Engineer with expertise in Python, Java, SQL, PySpark, REST APIs, AWS, Snowflake, and Airflow. The ideal candidate will design, build, and optimize scalable data pipelines and integrations that drive advanced analytics, reporting, and business intelligence solutions.
Key Responsibilities:
Develop and maintain data pipelines using PySpark, Airflow, and AWS services (e.g., Lambda, Glue, S3, EMR, Redshift).
Design, build, and optimize ETL workflows to extract, transform, and load data into Snowflake or other data warehouses.
Develop RESTful APIs and microservices to enable seamless data integration between systems.
Write complex SQL queries for data validation, transformation, and performance tuning.
Implement CI/CD pipelines for data and application deployments using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
Collaborate with data scientists, analysts, and business teams to ensure data quality and accessibility.
Apply best practices in coding, testing, and data governance to maintain secure and reliable data environments.
Monitor, troubleshoot, and optimize Spark and PySpark jobs for performance and cost efficiency.
Required Skills:
Proficiency in Python and Java for backend and data engineering development.
Strong hands-on experience with PySpark and Apache Spark.
Expertise in SQL and relational database concepts.
Experience with AWS cloud services (S3, Glue, Lambda, Redshift, EMR, etc.).
Solid understanding of Snowflake architecture, data modeling, and performance tuning.
Experience in developing REST APIs for data integration.
Familiarity with workflow orchestration tools such as Apache Airflow.
Working knowledge of CI/CD pipelines and version control systems (Git, Jenkins).