Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s artificial intelligence technologies.

The hacker extracted details from an online forum where employees discussed OpenAI’s latest technologies, according to two people familiar with the incident, but did not break into the systems where the company hosts and builds its artificial intelligence.

OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023, according to the two people, who discussed sensitive information about the company on condition of anonymity.

But executives decided not to share the news publicly because no customer or partner information had been stolen, the two people said. Executives did not consider the incident to be a national security threat because they believed the hacker was an individual with no known ties to a foreign government. The company did not inform the FBI or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries like China could steal AI technology that, while now primarily a work and research tool, could eventually endanger U.S. national security. It also raised questions about how seriously OpenAI treated security and exposed fractures within the company over the risks of AI.

After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI focused on ensuring future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Aschenbrenner said OpenAI fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident had not been previously reported. He said OpenAI’s security was not strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his dismissal,” said an OpenAI spokeswoman, Liz Bourgeois. Referring to the company’s efforts to create an artificial general intelligence — a machine that can do everything the human brain can do — she added: “While we share his commitment to creating a safe AI, we disagree with many of the claims he has since made about our work.”

Fears that a hack on a U.S. tech company could have ties to China are not unreasonable. Last month, Brad Smith, the president of Microsoft, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.

However, under federal and California laws, OpenAI cannot prevent people from working at the company based on their nationality, and policy researchers have said that excluding foreign talent from U.S. projects could significantly hamper AI progress in the United States.

“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s chief security officer, told The New York Times in an interview. “There are some risks involved, and we need to figure them out.”

(The Times has sued OpenAI and its partner, Microsoft, alleging copyright infringement of news content related to AI systems.)

OpenAI isn’t the only company building ever more powerful systems using rapidly improving AI technology. Some of them — most notably Meta, the owner of Facebook and Instagram — freely share their designs with the rest of the world as open-source software. They believe that the dangers posed by current AI technologies are few, and that sharing code allows engineers and researchers across the industry to identify and fix problems.

Today’s AI systems can help spread disinformation online, including through text, still images and, increasingly, video. They’re also starting to eliminate some jobs.

Companies like OpenAI and its competitors Anthropic and Google add restrictions to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread misinformation or cause other problems.

But there is little evidence that current AI technologies pose a significant risk to national security. Studies by OpenAI, Anthropic and other companies over the past year have shown that today’s AI systems are not significantly more dangerous than search engines. Daniela Amodei, Anthropic’s co-founder and the company’s president, said its latest AI technology would not pose a significant risk if its designs were stolen or freely shared with others.

“If it was owned by someone else, could it be very disruptive to a large part of society? Our answer is, ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor in the future? Maybe. It’s really speculative.”

Still, researchers and tech executives have long worried that AI could one day power the creation of new biological weapons or help infiltrate government computer systems. Some even believe it could destroy humanity.

Several companies, including OpenAI and Anthropic, are already tightening security around their technical operations. OpenAI recently created a Safety and Security Committee to study how it should manage the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to OpenAI’s board of directors.

“We started investing in security years before ChatGPT existed,” Knight said. “We are in a process of not only understanding and anticipating risks, but also deepening our resilience.”

Federal officials and state lawmakers are also pushing for government regulations that would ban companies from releasing certain AI technologies and fine them millions of dollars if their technologies cause harm. But experts say these dangers are still years or even decades away.

Chinese companies are building their own systems that are nearly as powerful as the top U.S. systems. By some measures, China has eclipsed the United States as the largest producer of AI talent, with the country producing nearly half of the world’s top AI researchers.

“It’s not crazy to think that China will soon overtake the United States,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open-source artificial intelligence projects.

Some researchers and national security leaders argue that the mathematical algorithms at the heart of today’s AI systems, while not dangerous today, could become dangerous and are calling for tighter controls on AI labs.

“Even if the worst-case scenarios are relatively low probability, if they have a high impact, then it is our responsibility to take them seriously,” said Susan Rice, a former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, during an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to claim.”
