Data Engineer
We're looking for a Data Engineer to architect and scale the data backbone that powers our AI-driven donor engagement platform. You'll design and own modern, cloud-native data pipelines and infrastructure that deliver clean, trusted, and timely data to our ML and product teams - fueling innovation that revolutionizes the nonprofit sector.
About Givzey:
Givzey is a Boston-based, rapidly growing digital fundraising solutions company, built by fundraisers for nonprofit organizations.
Join a fast-growing, mission-driven team working across two innovative platforms: Givzey, the first donor commitment management platform revolutionizing nonprofit fundraising, and Version2.ai, a cutting-edge AI platform helping individuals and organizations create their most authentic, effective digital presence. As an engineer at the intersection of philanthropy and artificial intelligence, you'll build scalable, high-impact solutions that empower nonprofit fundraisers and redefine how people tell their stories online. We're a collaborative, agile team that values curiosity, autonomy, and purpose. Whether you're refining AI-driven experiences or architecting tools for the future of giving, your work will help shape meaningful technology that makes a difference.
Responsibilities:
- Design & build data pipelines (batch and real-time) that ingest, transform, and deliver high-quality data from diverse internal and third-party sources
- Develop and maintain scalable data infrastructure (data lakes, warehouses, and lakehouses) in AWS, ensuring performance, reliability, and cost-efficiency
- Model data for analytics & ML: create well-governed schemas, dimensional models, and feature stores that power dashboards, experimentation, and ML applications
- Implement data quality & observability frameworks: automated testing, lineage tracking, data validation, and alerting
- Collaborate cross-functionally with ML engineers, backend engineers, and product teams to integrate data solutions into production systems
- Automate infrastructure using IaC and CI/CD best practices for repeatable, auditable deployments
- Stay current with emerging data technologies and advocate for continuous improvement across tooling, security, and best practices
Requirements:
- US Citizenship
- Bachelor's or Master's in Computer Science, Data Engineering, or a related field
- 2+ years of hands-on experience building and maintaining modern data pipelines using Python-based ETL/ELT frameworks
- Strong Python skills, including deep familiarity with pandas and comfort writing production-grade code for data transformation
- Fluent in SQL, with a practical understanding of data modeling, query optimization, and warehouse performance trade-offs
- Experience orchestrating data workflows using modern orchestration frameworks (e.g., Dagster, Airflow, or Prefect)
- Cloud proficiency (AWS preferred): S3, Glue, Redshift or Snowflake, Lambda, Step Functions, or similar services on other clouds
- Proven track record of building performant ETL/ELT pipelines from scratch and optimizing them for cost and scalability
- Experience with distributed computing and containerized environments (Docker, ECS/EKS)
- Solid data modeling and database design skills across SQL and NoSQL systems
- Strong communication & collaboration abilities within cross-functional, agile teams
Nice-to-Haves:
- Dagster experience for orchestrating complex, modular data pipelines
- Pulumi experience for cloud infrastructure-as-code and automated deployments
- Hands-on experience with dbt for analytics engineering and in-warehouse transformation
- Familiarity with modern data ingestion tools like dlt, Sling, Fivetran, Airbyte, or Stitch
- Apache Spark experience, especially useful for working with large-scale batch data or bridging into heavier data science workflows
- Exposure to real-time/event-driven architectures, including Kafka, Kinesis, or similar stream-processing tools
- AWS data & analytics certifications (e.g., AWS Certified Data Analytics - Specialty)
- Exposure to serverless data stacks and cost-optimization strategies
- Knowledge of data privacy and security best practices (GDPR, SOC 2, HIPAA, etc.)
What You'll Do Day-to-Day:
- Be part of a world-class team focused on inventing solutions that can transform philanthropy
- Build & refine data pipelines that feed our Sense (AI) and Go (engagement) layers, ensuring tight feedback loops for continuous learning
- Own the full stack of data work - from ingestion to transformation to serving - contributing daily to our codebase and infrastructure
- Partner closely with customers, founders, and teammates to understand data pain points, prototype solutions, iterate rapidly, and deploy to production on regular cycles
- Help craft a beautiful, intuitive product that delights nonprofits and elevates donor impact