(L-R) Fei-Fei Li, Condoleezza Rice, Gina Raimondo, and Miriam Vogel speaking at the Hoover Institution on January 26, 2024. Courtesy of Department of Commerce

When OpenAI’s ChatGPT took the world by storm last year, it caught many power brokers in both Silicon Valley and Washington, DC, by surprise. The US government should now get advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.

The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a significant amount of computing power. The rule could take effect as soon as next week.

The new requirement will give the US government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other tech companies competing in AI. Companies will also have to provide information on safety testing of their new AI creations.

OpenAI has been coy about how much work has been done on a successor to its current top offering, GPT-4. The US government may be the first to know when work or safety testing really begins on GPT-5. OpenAI did not immediately respond to a request for comment.

“We’re using the Defense Production Act, which is authority that we have because of the President, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it,” Gina Raimondo, US Secretary of Commerce, said Friday at an event held at Stanford University’s Hoover Institution. She did not say when the requirement will take effect, or what action the government might take on the information it received about AI projects. More details are expected […]

Source: www.wired.com


By Donato