OpenAI Takes Steps to Curb AI Hallucinations in ChatGPT


OpenAI, the leading artificial intelligence research lab behind the well-known chatbot ChatGPT, is taking steps toward fixing the problem of AI hallucinations. The company said on Wednesday that it is working to improve the chatbot’s ability to solve complex mathematical problems, with the goal of reducing the amount of erroneous information the AI produces.


The advancement of AI technology has brought many benefits, but these systems still struggle to produce reliable information. OpenAI refers to these errors as “hallucinations”: cases where an AI system generates unexpected, incorrect output that is unsupported by evidence. Hallucinations can take the form of fabricated facts, news, or details about people, places, or events.


Jonathan Turley, a well-known American criminal defence lawyer and law professor, recently described an unpleasant encounter with ChatGPT. Turley said the chatbot had fabricated accusations of sexual assault against him, even going so far as to cite a nonexistent Washington Post story to back up the baseless claim. Turley recounted these defamatory claims in a USA Today opinion piece and on his blog. The incident served as further proof of why OpenAI must act quickly to solve the problem of AI hallucinations.


Although OpenAI did not disclose the particular examples that prompted its investigation into hallucinations, two recent incidents illustrate the real-world effects of these errors. The incident involving Jonathan Turley in April drew attention to the potential harm caused by false AI-generated claims. In addition, lawyer Steven A. Schwartz acknowledged that he had used ChatGPT as a research tool for a legal matter, only to find that the material the chatbot produced was entirely false. Schwartz expressed regret for using generative AI without first confirming its reliability and vowed never to repeat the mistake.


Competition among AI chatbots has intensified with the public unveiling of Google’s enhanced AI chatbot, Bard. OpenAI’s ChatGPT now faces a serious challenge from Bard, which threatens its position as the market leader. On May 10, Google announced at its annual Google I/O conference that Bard would be freely available in more than 180 countries. The improvements to Bard are intended to keep Google competitive, further escalating the rivalry between the two companies.


Notably, Microsoft has also entered the AI chatbot space by integrating OpenAI’s ChatGPT technology into its Bing search engine. Despite the challenges ChatGPT faces, Microsoft remains confident in the technology’s promise, as shown by its substantial $13 billion investment in OpenAI.


To reduce hallucinations, OpenAI conducted research comparing two types of supervision: “process supervision,” which provides feedback on each step of the problem-solving process, and “outcome supervision,” which provides feedback only on the final answer. To compare the effectiveness of the two approaches, the company ran evaluations on mathematical problems: for each problem, many candidate solutions were generated, and the solution ranked highest by each reward model was selected, as sketched in the example below.
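
The best-of-N selection described above can be illustrated with a short sketch. This is a minimal, hypothetical example rather than OpenAI’s actual implementation: generate_solutions, outcome_score, and process_score are placeholder stand-ins for a language model sampler and trained reward models.

```python
import random
from typing import Callable, List

# Hypothetical stand-ins: in practice these would be a language model
# sampler and trained reward models, not random placeholders.

def generate_solutions(problem: str, n: int) -> List[List[str]]:
    """Sample n candidate solutions, each represented as a list of reasoning steps."""
    return [[f"step {i + 1} for '{problem}'" for i in range(3)] for _ in range(n)]

def outcome_score(solution: List[str]) -> float:
    """Outcome supervision: a single score based only on the final answer."""
    return random.random()

def process_score(solution: List[str]) -> float:
    """Process supervision: score every reasoning step, then combine them
    (here via the minimum, so one bad step drags the whole solution down)."""
    return min(random.random() for _ in solution)

def best_of_n(problem: str, n: int, scorer: Callable[[List[str]], float]) -> List[str]:
    """Generate n candidate solutions and return the one the reward model ranks highest."""
    candidates = generate_solutions(problem, n)
    return max(candidates, key=scorer)

if __name__ == "__main__":
    problem = "Solve 12 * (7 + 5)"
    print("Outcome-supervised pick:", best_of_n(problem, 16, outcome_score))
    print("Process-supervised pick:", best_of_n(problem, 16, process_score))
```

The design difference is in the scoring functions: the outcome-style scorer looks only at the end result, while the process-style scorer judges every intermediate step, which is what gives step-level feedback its leverage against hallucinated reasoning.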


OpenAI acknowledged that it is not yet clear how process supervision will perform in domains other than mathematics, but the company believes further research should explore its potential in other fields. To encourage work in this area, OpenAI has released its full process supervision dataset to the public.
