The “Godfather of AI” Just Trashed GPT-3


Trash Talk

GPT-3, the advanced language-processing artificial intelligence algorithm developed by OpenAI, is extremely good at what it does: churning out humanlike text.

But Yann LeCun, the Chief AI Scientist at Facebook who’s been called a “godfather of AI,” trashed the algorithm in a Tuesday Facebook post, writing that “people have completely unrealistic expectations about what large-scale language models such as GPT-3 can do.”

Glitching Again

LeCun cites a recent experiment by the medical AI firm NABLA, which found that GPT-3 is woefully inadequate for use in a healthcare setting, because writing coherent sentences isn’t the same as being able to reason or understand what it’s saying.

“It’s entertaining, and perhaps mildly useful as a creative aid,” LeCun wrote. “But trying to build intelligent machines by scaling up language models is like [using] high-altitude airplanes to go to the Moon. You might beat altitude records, but going to the Moon will require a completely different approach.”

Medical Malpractice

After testing it in a variety of medical scenarios, NABLA found that there’s a huge difference between GPT-3 being able to form coherent sentences and actually being useful.

In one case, the algorithm was unable to add up the cost of items on a medical bill, and in a vastly more dangerous scenario, it actually suggested that a mock patient kill themselves.

“As a question-answering system,” LeCun wrote, “GPT-3 is not very good.”

More on GPT-3: Major Newspaper Publishes Op-Ed Written by GPT-3
