News

An OpenAI co-founder proposed building a doomsday bunker to house the company's top researchers in case of a "rapture," according to a new book.
OpenAI co-founder Ilya Sutskever is wary of artificial general intelligence (AGI), which would be capable of surpassing the cognitive abilities of humans.
In 2019, Karen Hao, a senior reporter with MIT Technology Review, pitched writing a story about a then little-known company, OpenAI. This is what happened next.
OpenAI’s proposed conversion also raised a separate issue: it would set a precedent for taking resources accrued under charitable intentions and repurposing them for profitable pursuits.
OpenAI CEO Sam Altman became a household name after the release of OpenAI's groundbreaking AI model, ChatGPT.
Months before he left OpenAI, Sutskever believed his AI researchers needed to be assured protection once they ultimately achieved their goal of creating artificial general intelligence, or AGI ...
OpenAI's former chief scientist Ilya Sutskever has long been preparing for AGI, and he discussed doomsday prep plans with coworkers.
Karen Hao sat down with Mashable to discuss the revelations in 'Empire of AI' about OpenAI and its quest for AI supremacy.
In her new book “Empire of AI,” journalist Karen Hao chronicles the anxieties around the OpenAI office in its early days.