Job Description
Note for all engineering roles: With the rise of fake applicants and AI-enabled candidate fraud, we have built additional measures into the process to identify and remove such candidates.
About Us
People Data Labs (PDL) is the provider of people and company data. We do the heavy lifting of data collection and standardization so our customers can focus on building and scaling innovative, compliant data solutions. Our sole focus is on building the best data available by integrating thousands of compliantly sourced datasets into a single, developer-friendly source of truth. Leading companies across the world use PDL's workforce data to enrich recruiting platforms, power AI models, create custom audiences, and more.
We are looking for individuals who can balance extreme ownership with a "one-team, one-dream" mindset. Our customers are trying to solve complex problems, and we can only help them achieve their goals by working as a team. Our Platform Engineering Team oversees the foundational work that the rest of our engineering teams build their success upon.
You will be crucial in accelerating our efforts to build standalone data products that enable data teams and independent developers to create innovative solutions at massive scale. In this role, you will work with a team to define the tools and infrastructure that facilitate big data processing, primarily within AWS.
If you are looking to be part of a team discovering the next frontier of data-as-a-service (DaaS) with a high level of autonomy and opportunity for direct contributions, this might be the role for you. We like our engineers to be thoughtful, quirky, and willing to fearlessly try new things. Failure is embraced at PDL as long as we continue to learn and grow from it.
What You Get To Do
- Manage and improve our growing AWS and data center infrastructures
- Design, implement, and maintain a CI/CD pipeline to improve developer workflows
- Utilize centralized monitoring and logging to improve visibility across the team
- Assist development teams in solving issues around scaling and bottlenecks
- Work with teammates to develop high-quality software, balancing security, reliability, and operational concerns
The Technical Chops You'll Need
- 5-7+ years of software development experience with a background in platform or cloud infrastructure engineering and clear examples of strategic technical problem-solving and implementation
- 3+ years of experience with Python in a production environment
- Strong software development fundamentals and system design experience
- Strong experience with our core technologies (AWS, ElasticSearch/OpenSearch, Python, Docker, scaled data processing technologies)
- AWS, including EC2, Lambda, OpenSearch, API Gateway, ALB, and others
- Data stores, including Postgres/MySQL, Dynamo, Redis, S3
- Experience with Infrastructure-as-Code (IaC) frameworks (e.g., Pulumi, Terraform, CloudFormation, or similar)
- Experience with network design, including public/private availability, routing, firewalls/security groups, and VPN
- Experience with Identity and Access Management
- Experience with configuration management tools (e.g., Chef, Puppet, Ansible)
- Experience with observability tools such as Datadog for metrics, logging, etc.
- Experience with build and deploy systems, architecting and developing CI/CD infrastructure, repo management, and integrating with tools like GitHub Actions (or similar)
People Thrive Here Who Can
- Balance high ownership and autonomy with a strong ability to collaborate
- Work effectively remotely (proactive about managing blockers, proactive about reaching out and asking questions, and participating in team activities)
- Communicate clearly in writing on Slack/chat and in documents
- Write data design docs (pipeline design, dataflow, schema design)
- Scope and break down projects, and communicate progress and blockers effectively with your manager, team, and stakeholders
Some Nice To Haves
- Degree in a quantitative discipline such as computer science, mathematics, statistics, or engineering
- Expertise with Apache Spark (Java-, Scala-, and/or Python-based)
- Experience with SQL data pipeline development
- Experience supporting developer-oriented data pipeline and workflow orchestration (e.g., Airflow (preferred), dbt, Dagster, or similar)
- Experience managing, deploying, and ensuring the reliability of streaming platforms (e.g., Kafka)
- Experience evaluating data quality and maintaining consistently high standards across new feature releases (e.g., consistency, accuracy, validity, completeness)
- Experience using Databricks or similar data-development platforms
- Experience managing hybrid environments split between local data centers and AWS; experience managing bare-metal/co-location infrastructure
Our Benefits
Great people make great teams. We believe in building highly functional, energetic, and engaging teams to serve our customers. Putting People, Customers, and Shareholders first, in that order, sets us up for success and for delivering on our promises.
- Stock
- Competitive salaries
- Unlimited paid time off
- Medical, dental, & vision insurance
- Health, fitness, and office stipends
- The permanent ability to work wherever and however you want
Comp: $160K - $180K
No C2C, 1099, or Contract-to-Hire. Recruiters need not apply.
People Data Labs does not discriminate on the basis of race, sex, color, religion, age, national origin, marital status, disability, veteran status, genetic information, sexual orientation, gender identity or any other reason prohibited by law in provision of employment opportunities and benefits.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.
Personal Privacy Policy For California Residents
https://privacy.peopledatalabs.com/policies?name=personnel-privacy-policy
Note: This is a duplicate post of our Senior Software Engineer, Platform role.