About Baselayer:
Trusted by 2,200+ financial institutions, Baselayer is the intelligent business identity platform that helps verify any business, automate KYB, and monitor real-time risk. Baselayer's B2B risk solutions & identity graph network leverage state & federal government filings and proprietary data sources to prevent fraud, accelerate onboarding, and lower credit losses.
About You:
You want to learn from the best of the best, get your hands dirty, and put in the work to hit your full potential. You're not just doing it for the win; you're doing it because you have something to prove and want to be great. You're hungry to become an elite data engineer, designing rock-solid infrastructure that powers cutting-edge AI/ML products.
- You have 1–3 years of experience in data engineering, working with Python, SQL, and cloud-native data platforms
- You've built and maintained ETL/ELT pipelines, and you know what clean, scalable data architecture looks like
- You're comfortable with structured and unstructured data, and you thrive on building systems that transform chaos into clarity
- You think in DAGs, love automating things with Airflow or dbt, and sweat the details when it comes to data integrity and reliability
- You're curious about AI/ML infrastructure, and you want to be close to the action: feeding the models, not just cleaning up after them
- You value ethical data practices, especially when dealing with sensitive information in environments like KYC/KYB or financial services
- You're a translator between technical and non-technical stakeholders, aligning infrastructure with business outcomes
- You're highly feedback-oriented; we believe in radical candor and using feedback to get to the next level
- You're proactive, ownership-driven, and unafraid of complexity, especially when there's no playbook
Responsibilities:
- Pipeline Development: Design, build, and maintain robust, scalable ETL/ELT pipelines that power analytics and ML use cases
- Data Infrastructure: Own the architecture and tooling for storing, processing, and querying large-scale datasets using cloud-based solutions (e.g., Snowflake, BigQuery, Redshift)
- Collaboration: Work closely with data scientists, ML engineers, and product teams to ensure reliable data delivery and feature readiness for modeling
- Monitoring & Quality: Implement rigorous data quality checks, observability tooling, and alerting systems to ensure data integrity across environments
- Data Modeling: Create efficient, reusable data models using tools like dbt, enabling self-service analytics and faster experimentation
- Security & Governance: Partner with security and compliance teams to ensure data pipelines adhere to regulatory standards (e.g., SOC 2, GDPR, KYC/KYB)
- Performance Optimization: Continuously optimize query performance and cost in cloud data warehouses
- Documentation & Communication: Maintain clear documentation and proactively share knowledge across teams
- Innovation & R&D: Stay on the cutting edge of data engineering tools, workflows, and best practices, bringing back what works and leveling up the team
What We Offer:
- Hybrid in SF; in office 3 days/week
- Flexible PTO
- Smart, genuine, ambitious team
- Salary range: $135k–$220k, plus equity of 0.05%–0.25%