<p>Are you passionate about working on the next generation of personalized intelligence systems? In this role, you will develop and deploy robust evaluation frameworks across the data lifecycle, from data collection and processing to analytic dashboards for reporting. You will be part of the larger Proactive Intelligence team, which builds features that anticipate customers' needs and create personalized experiences by adapting to user behaviors with machine learning running locally on-device or in PCC. Join our cross-functional team of specialists dedicated to the evaluation of agentic systems.</p>
<p>We are looking for a high-impact ML Evaluation Engineer to help architect rigorous evaluation systems for autonomous agents. With the rise of generative AI, the ability to quantify the reliability and quality of these systems is more critical than ever. You will design and deploy qualitative and quantitative metrics to measure the quality, reasoning, and tool-use accuracy of agentic systems. You will be working with very sensitive data, so leveraging existing privacy-enhancing technologies and developing new ones, such as differential privacy, PII redaction, and data minimization, will be crucial. The team you will be joining is focused on advancing scalable automated processes for evaluation. To succeed, you will need a deep understanding of system-level software operations to deliver next-generation capabilities. Join the Proactive Intelligence team to build the evaluation platforms for the future of intelligent, personalized experiences.</p>
<p>Demonstrated experience applying Differential Privacy, Federated Learning, or advanced PII redaction techniques to large-scale datasets. Hands-on experience building or testing LLM-based systems, including a deep understanding of chain-of-thought reasoning, prompt engineering, and agentic planning. Proficiency in building or evaluating systems that integrate with external tools/APIs. Experience with specialized agent evaluation frameworks and with analyzing execution traces to identify failure modes in multi-turn interactions. Experience with compiled languages (e.g., Swift) and curiosity about how ML interacts with OS-level software operations. A track record of developing custom metrics (e.g., "LLM-as-a-Judge") or publishing research on model reliability, safety, or algorithmic bias.</p>
<p>MS or PhD in Computer Science, Machine Learning, Statistics, or equivalent practical experience in a quantitative field. 3+ years of industry experience in ML Engineering or Applied Science. Strong software engineering fundamentals (Python is a must) with experience building scalable, automated data or evaluation pipelines.</p>
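As an illustration of one of the privacy-enhancing technologies the posting names, here is a minimal sketch of the Laplace mechanism for releasing a differentially private count over sensitive evaluation data. The function name and defaults are illustrative only, not part of any actual team API; it assumes a simple counting query with sensitivity 1.

```python
import random


def laplace_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count satisfying epsilon-differential privacy via the Laplace mechanism.

    Adds Laplace(0, sensitivity / epsilon) noise, so any single record's
    presence or absence changes the output distribution by at most e^epsilon.
    """
    scale = sensitivity / epsilon
    # Sample Laplace noise as the difference of two exponential draws,
    # each with mean `scale` (expovariate takes the rate, 1/mean).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```

Smaller values of `epsilon` give stronger privacy but noisier counts; in practice the privacy budget would be tracked across all released metrics.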