OpenAI, the San Francisco tech company that attracted worldwide attention after releasing ChatGPT, said Tuesday it is unveiling a new version of its artificial intelligence software.
Called GPT-4, the software can “solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities”, according to an announcement on OpenAI’s website.
In a video posted online, the company said that GPT-4 had an array of capabilities that previous iterations of the technology did not have, including the ability to “reason” based on images uploaded by users.
“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” OpenAI wrote on its website.
Andrej Karpathy, an OpenAI employee, tweeted that the feature meant the AI could “see.”
The new technology isn’t available for free, at least not for now. OpenAI said people can try GPT-4 on its subscription service ChatGPT Plus, which costs $20 a month.
OpenAI and its ChatGPT chatbot have shaken up the tech world and alerted many outside the industry to the possibilities of AI software, in part through the company’s partnership with Microsoft and its search engine, Bing.
But the speed of OpenAI’s releases has also raised concerns, because the technology is untested, forcing sudden change in fields from education to the arts. The rapid public development of ChatGPT and other generative AI programs has prompted some ethicists and industry leaders to call for guardrails on the technology.
Sam Altman, the CEO of OpenAI, tweeted Monday that “we definitely need more regulation on AI.”
The company detailed GPT-4’s capabilities in a series of examples on its website: the ability to solve problems, such as scheduling a meeting among three busy people; scoring highly on tests such as the Uniform Bar Exam; and learning a user’s creative writing style.
But the company has also acknowledged limitations, such as social biases and “hallucinations,” in which the model suggests it knows more than it actually does.
Google, concerned that AI technology could cut into the market share of its search engine and cloud-computing service, released its own software, known as Bard, in February.
OpenAI was launched in late 2015 with backing from tech billionaires including Elon Musk, Peter Thiel and Reid Hoffman, and its name reflected its status as a nonprofit that would follow the principles of open-source software shared freely online. In 2019, it transitioned to a “capped-profit” model.
Now, it is releasing GPT-4 with a measure of secrecy. In a 98-page paper accompanying the announcement, company employees said they would keep many details close to the chest.
Most notably, the paper stated that the underlying data the model was trained on would not be publicly disclosed.
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” they wrote.
“We plan to make further technical details available to additional third parties who can advise us on how to weigh the competitive and safety considerations against the scientific value of further transparency,” they added.
The release of GPT-4, the fourth iteration of OpenAI’s foundational system, has been rumored for months amid growing hype around chatbots built on top of it.
In January, Altman downplayed expectations of what GPT-4 would be capable of, telling the podcast StrictlyVC, “People are begging to be disappointed and they will be.”
On Tuesday, he solicited feedback.
“We’ve had the initial training of GPT-4 done for quite a while, but it’s taken us a long time and a lot of work to feel ready to release it,” Altman said on Twitter. “We hope you enjoy it and we really appreciate feedback on its shortcomings.”
Sarah Myers West, managing director of the AI Now Institute, a nonprofit group that studies AI’s effects on society, said that releasing such systems to the public without oversight is “essentially conducting experiments in the wild.”
“We have clear evidence that generative AI systems routinely produce error-prone, abusive and discriminatory results,” she said in a text message.
OpenAI said it was planning an online demonstration for Tuesday at 1 p.m. PT (4 p.m. ET) on Google-owned video service YouTube.