WHY DATA SCIENCE & ANALYTICS?
The Data Science & Analytics organization's mission is to increase the speed, frequency, and acumen with which we make decisions at scale by instilling a data-influenced approach to building products. We cover a wide range of the data spectrum, including analytical data engineering, product analytics, experimentation, causal inference, statistical modeling, and machine learning. Aligned and partnering with product verticals, we use this extensive tool belt to discover new opportunities and unmet use cases, influence and shape the product roadmap and prioritization, build data products, and measure impact on our community of players and developers.
WHY CONTENT SAFETY?
Content safety and moderation are paramount to maintaining a positive, engaging, and trustworthy platform experience for our global community. In this role, you will apply your expertise in data science, statistics, and causal inference to define, measure, and improve the detection and mitigation of harmful user-generated content and violative on-platform behaviors. You will focus on asset-based content across the platform - including games, avatars, and emerging in-game behaviors - establishing reliable ground truth datasets and developing robust prevalence measurement methodologies. You will collaborate closely with Product, Engineering, Legal, Policy, and Compliance teams, acting as the quantitative expert who guides our defensive strategy against evolving safety threats and ensures accountability and scalable solutions. This is a critical opportunity to build innovative detection systems, define what we measure, and prove the impact of our safety efforts.
You Will:
- Establish and monitor robust prevalence measurement methodologies to accurately quantify the overall level of harmful content and behavior on the platform, providing the authoritative source of truth for organizational safety goals.
- Develop and validate comprehensive ground truth datasets for harmful UGC and violative on-platform behaviors, ensuring high quality and alignment with the latest platform policies.
- Deepen our understanding of violations by conducting exploratory analysis on current and emerging threat landscapes, providing data-backed recommendations on where Product and Engineering should strategically invest detection and moderation resources.
- Design, implement, and analyze sophisticated experiments for new moderation and safety enforcement features, communicating results to our primary partners in Product and Engineering to guide development, while also collaborating closely with Policy, Compliance, and Legal teams for full strategic alignment.
- Leverage advanced causal inference methodologies to accurately measure the effectiveness and potential systemic impacts of various safety initiatives on player experience and platform integrity.
- Communicate strategic insights and present recommendations to leadership and all cross-functional partners, translating complex statistical findings on prevalence, ground truth quality, and effectiveness into actionable strategies for Product, Engineering, Policy, Compliance, and Legal.
- Partner with ML and Data Engineering teams to ensure model development, reporting, and detection systems are built on statistically sound ground truth and measurement frameworks.
You Have:
- 10+ years of industry experience in data science, economics, analytics, or machine learning engineering
- 7+ years of experience using scripting languages (Python, R) and big data query/processing languages and tools such as SQL, Hive, Spark, and Airflow
- Knowledge of ML and Deep Learning, either via formal training or industry experience
- Ability to apply creative first-principles reasoning to solve ambiguous problems
- Experience developing large-scale safety or moderation systems, as well as experience with content platforms, specifically user-generated content
- Advanced degree and/or PhD in Statistics, Computer Science, Physics, Applied Math, Economics, or another related quantitative field