<p>Applicants in San Francisco: Qualified applications with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.</p> <p>Minimum qualifications:</p> <ul> <li>Bachelor's degree in Computer Science, Machine Learning, or a related technical field, or equivalent practical experience. </li><li>5 years of experience in engineering and agentic assistance, including software development in Python. </li><li>Experience working in a frontier AI research and development environment. </li><li>Experience working in a professional software engineering or research team environment. </li><li>Experience working with technical stakeholders. </li><li>Experience in frontier model risk. </li></ul> <p>Preferred qualifications:</p> <ul> <li>Experience in engineering or product design for AI tools or assistants, especially those focused on ML Research and Development (R&D). </li><li>Experience with cybersecurity detection and response. </li><li>Experience collaborating on or leading an applied ML project. </li><li>Experience with Large Language Model (LLM) training and inference. </li><li>Knowledge of AI control, chain-of-thought and other monitoring, faithfulness and monitorability, and related research areas. </li></ul> <p>About the job</p> <p>Our team develops monitoring and control for potentially misaligned AI to mitigate risks of extreme harms. Currently, this primarily involves: designing, building, and testing monitors for potentially dangerous behaviours; developing and implementing response policies that preserve AI usefulness while mitigating risks; and foreseeing ways in which our control tools might be bypassed or degraded. 
We are looking for an engineer who can rapidly iterate to solve never-before-seen problems with creativity and thoroughness.</p> <p>The Loss of Control team contributes to a defense in depth against the risk of misaligned AI systems being deployed. We take the possibility of very advanced AI seriously. We don't think control is a suitable alternative to alignment in the limit of advancing intelligence. But while AI remains effectively monitorable, we think that control is an important part of an overall strategy for building safe AI.</p> <p>We are looking for a research engineer for the Frontier Safety Loss of Control team within the AGI Safety and Alignment Team based in either San Francisco or London.</p> <p>In this role, the core responsibility is to help Google prepare for the internal use of potentially misaligned AI systems. That means building defense-in-depth against AI that might persistently pursue goals that users and system developers did not intend.</p> <p>Artificial intelligence will be one of humanity's most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.</p> <p>We are pushing the boundaries across multiple domains. Our global teams offer various learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.</p> <p>The US base salary range for this full-time position is $174,000-$252,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. 
Your recruiter can share more about the specific salary range for your preferred location during the hiring process.</p> <p>Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.</p> <p>Responsibilities</p> <ul> <li>Identify ways that misaligned agents could cause harm, and strategies for detecting and preventing that harm. </li><li>Implement technical controls to monitor agent thoughts and behaviour, and respond to mitigate potential harms. </li><li>Integrate agent behaviour signals from across the organisation to inform response policies. </li><li>Conduct adversarial testing of controls. </li><li>Work with internal product teams to ensure that control systems are adopted across all high-risk AI surfaces. </li></ul>