The Productivity and Machine Learning Evaluation team ensures the quality of AI-powered features across a suite of productivity and creative applications, including Creator Studio, used by hundreds of millions of people. The team serves as the primary evaluation function, and its analysis directly informs decisions about model development, feature launches, and product direction. This role is the analytical core of the team, responsible for making sense of evaluation signals and real-world user behavior. The work involves designing feature-level quality metrics, collaborating with partner teams on data collection strategies, and translating evaluation data into concise, actionable insights that drive decisions. This is an opportunity to define how AI feature quality is measured and to directly shape what gets shipped.

Day-to-day work involves analyzing evaluation results and identifying trends, regressions, and segment-level patterns across multiple AI features. It also includes collaborating with partner teams on data collection strategies, ensuring evaluation data is representative of real-world usage, and designing the metrics framework that leadership uses to make decisions on AI features. Typical deliverables include feature-level quality metrics and dashboards, evaluation analysis reports, data collection requirements, dataset representativeness audits, and concise metric summaries for decision-makers.

Preferred qualifications:
- Experience designing evaluation or quality metrics for AI-powered or ML-driven features in consumer-facing products
- Familiarity with productivity software or creative applications, with an ability to distinguish between technically correct and genuinely useful AI outputs
- Experience partnering with engineering or data teams to define data collection requirements and schemas
- Track record of translating complex analytical findings into concise recommendations for non-technical decision-makers
- Experience with evaluation methodology, including inter-annotator agreement, evaluation bias detection, and dataset representativeness auditing
- Understanding of ML model development processes, with the ability to specify which evaluation signals are useful for model improvement
- Experience managing evaluation across multiple features or product areas simultaneously, with systematic rather than ad hoc approaches
- Graduate degree in a relevant quantitative field

Minimum qualifications:
- Bachelor's degree in Statistics, Data Science, Applied Mathematics, Computer Science, or a related quantitative field
- 5+ years of experience in applied science, data science, or evaluation research, with a focus on defining and operationalizing quality metrics
- Experience with statistical analysis methods, including significance testing, sampling design, effect size estimation, and experimental design
- Experience working with production user data and an understanding of its biases and limitations compared to controlled evaluation data
- Track record of independently designing metrics frameworks and driving data-informed decisions across cross-functional teams
- Proficiency in Python (pandas, scipy, scikit-learn) or R for data analysis and visualization