Lilian Weng, OpenAI's Vice President of Research and Safety and a well-known safety expert, officially announced her departure from the company on Friday. Weng, who joined OpenAI in 2018, served in a variety of roles, including leading the Safety Systems team. In a post on X, she said she was ready to “reset and explore something new” and that November 15th would be her last day. She has not revealed her next move, but she spoke glowingly of her team’s accomplishments.

Weng’s resignation raises questions about OpenAI’s commitment to safety, as she joins a growing number of AI safety and policy specialists who have left the company. Jan Leike and Ilya Sutskever, who led OpenAI’s Superalignment team tasked with managing superintelligent AI, also departed this year. Former researcher Suchir Balaji expressed concerns that OpenAI’s technology would create more societal problems than benefits, while policy researcher Miles Brundage left in October after the company dissolved its AGI readiness team.

Weng began her career at OpenAI in robotics, contributing to the team that built a robotic hand capable of solving a Rubik’s Cube. As OpenAI shifted its focus to its GPT models, she moved into applied AI research, and after the release of GPT-4 she established a specialised safety team. That Safety Systems group now has more than 80 members dedicated to reducing the risks associated with AI, and its work will continue after Weng leaves.

An OpenAI spokesperson commended Weng’s contributions and expressed confidence that the Safety Systems team will continue to uphold the company’s AI safeguards. Weng is not the only leader to leave recently: CTO Mira Murati and research VP Barret Zoph have also departed. Some have started their own ventures, while others, including Leike and co-founder John Schulman, have joined rivals such as Anthropic.
