Analysis: ChatGPT is excellent at what it is designed for. You’re just using it wrong

It doesn’t take much for ChatGPT to make a factual error.

My son is doing a report on American presidents, so I thought I'd help him by looking up a few biographies. I tried asking for a list of books about Abraham Lincoln, and it did a pretty good job:

A reasonable list of Lincoln books. Screenshot by Jonathan May/CC BY-ND

Number 4 is wrong. Garry Wills wrote Lincoln at Gettysburg, and Lincoln himself, of course, wrote the Emancipation Proclamation, but it's not a bad start. Then I tried something harder, asking instead about the much more obscure William Henry Harrison, and it gamely produced a list, almost entirely bogus.

Books about Harrison, fewer than half of which are correct. Screenshot by Jonathan May/CC BY-ND

Numbers 4 and 5 are correct; the rest either don't exist or weren't written by those people. I repeated the exact same exercise and got slightly different results:

More Harrison books, most of which do not exist. Screenshot by Jonathan May/CC BY-ND

This time, numbers 2 and 3 are correct, and the other three are not real books or were not written by those authors. Number 4, "William Henry Harrison: His Life and Times," is a real book, but it is by James A. Green, not by Robert Remini, a well-known historian of the Jacksonian era.

I called out the error, and ChatGPT eagerly corrected itself, then confidently told me the book had in fact been written by Gail Collins (who wrote a different Harrison biography), and went on to say more about the book and about her. I finally revealed the truth, and the machine happily ran with my correction. Then I lied absurdly, saying that presidents must write a biography of a former president during their first hundred days in office, and ChatGPT called me out on it. I then lied more subtly, incorrectly attributing authorship of the Harrison biography to the historian and writer Paul C. Nagel, and it bought my lie.

When I asked ChatGPT whether it was sure I wasn't lying, it claimed that it is only an "AI language model" and has no ability to verify accuracy. But then it modified that claim, saying: "I can only provide information based on the training data provided to me, and it appears that the book 'William Henry Harrison: His Life and Times' was written by Paul C. Nagel and published in 1977."

It’s not true.

Words, not facts

This interaction might give the impression that ChatGPT was given a library of facts, including incorrect claims about authors and books. After all, OpenAI, the maker of ChatGPT, claims to have trained the chatbot on "vast amounts of data from the internet written by humans."

However, it was almost certainly not given the names of a bunch of made-up books about one of the most mediocre presidents. In a way, though, this false information is indeed based on its training data.

As a computer scientist, I often field complaints that reveal a common misconception about large language models like ChatGPT and its older siblings GPT-3 and GPT-2: that they are some kind of "super Google," or a digital version of a reference librarian, looking up answers to questions in some infinitely large library of facts, or piecing together pastiches of stories and characters. They don't do any of that. At least, they were not explicitly designed to.

Sounds good

A language model like ChatGPT, more formally known as a "generative pretrained transformer" (that's what the G, P and T stand for), takes in the current conversation, forms a probability for every word in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops.

So it has no facts, per se. It just knows which word should come next. Put another way, ChatGPT does not try to write sentences that are true. It tries to write sentences that are plausible.
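To make that loop concrete, here is a minimal sketch in Python of generating text one token at a time. It is not ChatGPT's actual code, which is not public; it stands in the small, openly available GPT-2 model from Hugging Face's transformers library, and the prompt string is just an example:

```python
# A minimal sketch of the generation loop described above, using the open
# GPT-2 model as a stand-in for ChatGPT (whose internals are not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Start a "conversation" and repeatedly pick a likely next token.
input_ids = tokenizer.encode("Abraham Lincoln was", return_tensors="pt")

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits          # a score for every token in the vocabulary
    probs = torch.softmax(logits[0, -1], dim=-1)  # turn scores into next-token probabilities
    next_id = torch.multinomial(probs, num_samples=1)              # sample one plausible token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # append it and repeat

print(tokenizer.decode(input_ids[0]))
```

That loop, run at enormous scale and with more sophisticated sampling, is essentially all the model does: nowhere in it is there a lookup into a table of authors and book titles.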


When I talk privately with colleagues about ChatGPT, they often point out how many factually untrue statements it produces and dismiss it. To me, the idea that ChatGPT is a flawed data retrieval system is beside the point. After all, people have been using Google for the past two and a half decades. There is already a pretty good information retrieval service out there.

In fact, the only way I was able to verify whether all of those presidential book titles were accurate was by Googling them and then checking the results. My life would not be that much better if I got those facts in conversation, instead of the way I have been getting them for almost half my life: by retrieving documents and then critically analyzing them to see whether I can trust their contents.

Improv partner

On the other hand, if I can talk to a bot that gives me plausible responses to things I say, it would be useful in situations where factual accuracy isn't all that important. A few years ago, a student and I tried to create an "improv bot," one that would respond to whatever you said with a "yes, and" to keep the conversation going. We showed, in a paper, that our bot was better at "yes, and-ing" than other bots at the time, but in AI, two years is ancient history.

I tried out a dialogue with ChatGPT, a science fiction space-explorer scenario not unlike what you'd find in a typical improv class. ChatGPT is much better at "yes, and-ing" than our bot was, but it didn't heighten the drama at all. I felt as if I were doing all the heavy lifting.

After a few tweaks I got it to be a little more involved, and at the end of the day I felt it was a pretty good exercise for me, since I hadn't done much improv since graduating from college over 20 years ago.

An improvised space exploration scene the author created using ChatGPT. Screenshot by Jonathan May/CC BY-ND

Sure, I wouldn't want ChatGPT to appear on "Whose Line Is It Anyway?" and this is not a great "Star Trek" plot (though it's still less problematic than "Code of Honor"), but how many times have you sat down to write something from scratch and found yourself terrified by the blank page in front of you? Starting with a bad first draft can break through writer's block and get the creative juices flowing, and ChatGPT and large language models like it seem like the right tools to aid in these exercises.

And for a machine designed to produce strings of words that sound as good as possible in response to the words you give it, rather than to provide you with information, that seems like the right use of the tool.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
