Can ChatGPT make my internet time more efficient?
April 3, 2023 6:00 AM

[This post was written by a human...] One thing that drives my time on the internet is wanting to know the strongest arguments for and against controversial issues. Rather than looking for a sane person on Twitter to offer up what I'm looking for, could ChatGPT do the same more efficiently? [Answers generated by ChatGPT will be scorned...]

FWIW, I was impressed by a sample text where ChatGPT was asked to summarize the Israeli/Palestinian conflict in the style of a sarcastic teenage mean girl. Not only did it manage to be neutral and factual, but it nailed the stereotype perfectly.
posted by Jon44 to Computers & Internet (26 answers total) 2 users marked this as a favorite
 
This would only work if you're willing to spend time checking every factual-seeming assertion in the output - how would you know it's not just spitting out propaganda it's read?
posted by sagc at 6:07 AM on April 3, 2023 [8 favorites]


Response by poster: "...how would you know it's not just spitting out propaganda it's read?"

Do you know for sure that ChatGPT has no validation process for objective facts or that, if it does, it can't be considered reliable?
posted by Jon44 at 6:29 AM on April 3, 2023


No, I don’t believe this will work.

In my not-very-extensive but quite consistent experience, ChatGPT gives itself a disclaimer when you push it near anything considered controversial - along the lines of "hey, I'm just an AI, I don't have a position on this, so don't ask me."

Probably a good design decision for a language model at this stage of maturity, but it means it won’t be useful for OP’s stated purpose.
posted by Puppy McSock at 6:36 AM on April 3, 2023 [1 favorite]


I asked ChatGPT to write a bio for me. I'm not a public figure or anything, but I have a website with a bio on it, I've been profiled in a number of significant newspapers, and my work has been reviewed in the media. The bio it generated was very plausible-sounding - it referenced all kinds of industry-specific awards and content. It was also completely incorrect.
posted by stray at 6:37 AM on April 3, 2023 [7 favorites]


I signed up for ChatGPT-4 because I find it very useful. If you'd like to try asking it some questions, I'd be happy to do so on your behalf and email you its answers. Feel free to contact me.
posted by The otter lady at 6:47 AM on April 3, 2023 [3 favorites]


Best answer: I was also initially attracted to its ability to explain and summarize complicated topics (or things I'm not terribly familiar with). When I test it out on complicated things that I do know something about, it's not bad but not great either, so I tend to distrust it.

Example - my wife used it to explain a concept in linear algebra she was struggling with. It came back with a reasonable-looking explanation, but she could tell there was a large gap in the example/proof. But she only knew this because she was already deep into the subject matter.

Here's where I have found a use for ChatGPT-3: writing bits of Python code that I'm too lazy to figure out on my own. It does pretty well here, but occasionally paints itself into corners without being aware of it.

In my experience, ChatGPT is frequently wrong, but very confident in its wrongness. Working with it is similar to developing Google search skills - three-quarters of it is learning how to ask precisely. YMMV.
posted by jquinby at 6:50 AM on April 3, 2023 [9 favorites]


Best answer: Do you know for sure that ChatGPT has no validation process for objective facts

Yes, absolutely, 100%: ChatGPT has no validation process for objective facts. This is, in fact, one of the main criticisms of ChatGPT as an information engine. The term of art that has emerged is that ChatGPT "hallucinates" false information. Even more troubling, the false information is often presented confidently inside an otherwise comprehensive, partially true answer.

OpenAI says as much in their terms of use:

(d) Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.
posted by grog at 6:50 AM on April 3, 2023 [23 favorites]


Academic friends have used it to draft bios, as an exercise. It's made up entire publications and claimed they authored them.
posted by lapis at 7:10 AM on April 3, 2023 [1 favorite]


Best answer: I think it can be useful with caveats.

ChatGPT 3.5 is remarkably pathological in its willingness to make things up and assert them as fact. I would worry that its attempt to summarize arguments might incorporate things that are flatly wrong. As folks have said above, OpenAI doesn't vet the training inputs for correctness, nor the outputs.

ChatGPT has some restrictions about controversial topics to try to prevent it from saying offensive things on sensitive subjects. It's fairly explicit about telling you when you hit a restriction but I'd worry about bias from invisible rules hand-coded by OpenAI (as opposed to bias implicit in the trained network).

Also ChatGPT doesn't cite sources by default; it gives generic answers synthesized from everything it's trained on. I've had some success asking for more detailed references. Sometimes the links it provides are way outdated. But overall it seems roughly as useful as a Google keyword search but different.

Caveats aside, I'm finding ChatGPT a remarkably useful synthesis of information. I don't think it's as good as Wikipedia can be, but it's a lot bigger and can dive specifically into details that there won't be an article for.

I created a few sample questions and answers asking for "pros and cons" on politically charged topics. CW: one test was with a highly racist and offensive question.
posted by Nelson at 7:15 AM on April 3, 2023 [2 favorites]


It is like working with someone who is very smart and very stupid and has no idea where the gaps in their knowledge are. It's useful for small, clearly outlined tasks where you know the subject matter and can verify the output, or where you supply the list of facts yourself - but it's absolutely not reliable.

The links are terrible - it will confidently present you with a list of references that are false. I've used both v3 and v4, and seen no improvement in veracity.
posted by dorothyisunderwood at 7:18 AM on April 3, 2023 [3 favorites]


Best answer: There is no validation process for facts and it's highly unreliable. Facts are not what it is for. That said, you could probably get a good sense of arguments or positions provided you ask for ideas, not specific facts.
posted by lookoutbelow at 7:25 AM on April 3, 2023 [2 favorites]


To the point about the links being terrible: one of the things it offers in my transcript is
"The Promise and Peril of Genetic Testing for Alzheimer's Disease" (https://www.nature.com/articles/d41586-019-02996-0): This article from Nature explores the role of genetic testing in Alzheimer's disease research and treatment, including the potential benefits and challenges of using genetic testing to inform medical decision-making.
Sounds plausible - I assumed it was real! But on closer inspection I can find no evidence such an article ever existed in Nature. I asked ChatGPT to correct itself, and it did, citing equally fabricated articles in Time and JAMA. In a very authoritative voice.

ChatGPT is full of landmines like this. Hopefully some iteration of this technology will include the AI telling you how confident it is that an answer is correct. Right now it's pathologically self-confident.
posted by Nelson at 7:31 AM on April 3, 2023 [9 favorites]


Based on what I've seen in academic-Twitter discussions of this, ChatGPT cheerfully makes up nonexistent books and articles, or grabs real titles, assigns them to the wrong authors, and completely misrepresents their content. It is not trustworthy for anything that needs to be factual unless you are already enough of an expert to pick out and clean up the errors, and are just saving yourself some time by generating a first draft to work from.
posted by Stacey at 7:48 AM on April 3, 2023 [2 favorites]


Do you know for sure that ChatGPT has no validation process for objective facts or that, if it does, it can't be considered reliable?

100%, guaranteed, yes - because that's how it works at a fundamental level. It predicts the most likely word to come after the current word, over and over.
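
For the toy version of that loop in Python (purely illustrative - the model object and its next_token_probs() method are hypothetical stand-ins, not any real API):

# Greedy next-token generation, the core idea in cartoon form.
# "model" and next_token_probs() are invented for this sketch.
def generate(model, tokens, max_new_tokens=50):
    tokens = list(tokens)
    for _ in range(max_new_tokens):
        probs = model.next_token_probs(tokens)   # hypothetical: {token: probability}
        tokens.append(max(probs, key=probs.get)) # append the single most likely token
    return tokens

Nothing in that loop ever checks whether the output is true.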
posted by Back At It Again At Krispy Kreme at 8:43 AM on April 3, 2023 [4 favorites]


ChatGPT has zero regard for the truth. Here are a bunch of basic factual errors, for example, about really simple things: https://twitter.com/DKElections/status/1621532788787761153
posted by DavidNYC at 9:50 AM on April 3, 2023 [1 favorite]


ChatGPT has been writing Wikipedia articles. It even includes footnotes! The footnotes are provided as links, but if you click on them they go nowhere, because it invented plausible-sounding links written in the exact same format as ones that actually are sources.

The volunteers who check new entries in Wikipedia have been letting some of the entries ChatGPT writes stand, because the entries look perfect on a read-through, and the volunteers can check more entries if they don't take the time to click the links. This is not good.

ChatGPT will give you strong arguments all right, but they will not be based on documented facts. Someone could ask ChatGPT to write an article about the assassination of the late President Trump and it would do so, with footnotes and details about how he was killed on January Sixth by Capitol Police officers, or whatever seemed plausible to the AI.

You'll have to be really, really careful what you ask, because your own biases are going to warp the answers. "Why do White Nationalists support gun rights?" is a loaded question: first, it assumes that they do, and second, by calling it "gun rights" rather than "gun ownership" or "the right to bear arms" you can trigger answers with significantly different implications. The result you get will sound very plausible but will likely reflect a whole lot of assumptions. You may assume that the White Nationalists are American, for example, but ChatGPT is happy to lump them in with European White Nationalists, and you will never know it is doing that.
posted by Jane the Brown at 10:23 AM on April 3, 2023 [2 favorites]


Unless ChatGPT can back up its work with fact-checked sources, it's worse than useless for this. ChatGPT can only output stuff that looks plausible based on its training data, and in a lot of ways it just reflects our own human biases and weaknesses right back at us - which is one of the reasons people seem to anthropomorphize AI when it is basically the result of an inscrutable mathematical process rather than anything resembling actual thinking.
posted by Aleyn at 11:05 AM on April 3, 2023 [1 favorite]


ChatGPT's only purpose is to parrot statistically plausible - not verified-true - responses. It's a bullshitting machine. As I heard elsewhere, "ChatGPT is LLM-driven mansplaining."
posted by j_curiouser at 11:13 AM on April 3, 2023 [5 favorites]


There's a common (and totally understandable) misconception that ChatGPT can consult or cite its training data, which it cannot. It has irreversibly reduced all of the information it was trained on into a complex soup of word associations. Every response, correct or incorrect, is "hallucinated" in the same way.

It basically generates stories, using methods very similar to the ones its cousin DALL-E uses to generate images. The way they make it look like a thinking, intelligent agent is that they literally write some context into a prompt, almost like a movie script, and fill in what the user types, kind of like this:
It is Monday, April 3, 2023. You are a helpful AI assistant named ChatGPT, talking to a human.

The human says: [Whatever is typed into the chat box]
You respond:
And then it starts filling in words that fit the entire prompt, including the first part that you can't see. It has digested enough sci-fi stories and Reddit threads (or whatever) to know what such a conversation should look like, and has also gone through a LOT of fine-tuning (humans saying "do more like this, less like this") to be more convincing.
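
If you wanted to mock up that scaffolding yourself, it's conceptually just string concatenation (a sketch with invented wording - the real hidden prompt isn't public and is surely more elaborate):

# Hypothetical illustration of wrapping a hidden context around user input.
# The exact wording and format here are made up for this sketch.
HIDDEN_CONTEXT = ("It is Monday, April 3, 2023. You are a helpful AI "
                  "assistant named ChatGPT, talking to a human.\n\n")

def build_prompt(user_message):
    return (HIDDEN_CONTEXT
            + "The human says: " + user_message + "\n"
            + "You respond:")

The model then just keeps completing that combined text, which is why it stays "in character" without anything resembling understanding.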

Just like DALL-E's images, the results are always "fake," but usually pretty convincing, and sometimes accurate enough to be useful.
posted by Ryon at 11:45 AM on April 3, 2023 [5 favorites]


Response by poster: @Ryon:
Thanks - that's a very helpful clarification. Relatedly, if someone asks "write xyz in the style of ABC...", does ABC have to be something a human programmed the AI to mimic, or can it develop its own rules for a certain style based on its training material? (If the latter, in the example I gave, would it just seek out references to "mean girls" and then do its best to mimic their speech patterns using the specific topic?)
posted by Jon44 at 11:56 AM on April 3, 2023


From what I understand, humans have not programmed it to mimic any particular style, other than the fine-tuning that gives it the default sort of bland assistant "personality," and conversely to discourage controversial or harmful language. When you ask it to sound like a mean girl, it just free-associates, as it always does, based on the phrase itself. In the countless pages of source material it has distilled into a model, it has seen the words "mean girls" and the sorts of words that come next, thus it's capable of producing something similar.

Again, it doesn't seek out or reference anything, it just uses a fixed statistical model of what "human text" looks like to predict words that complete the prompt in a way that might elicit a thumbs up from a human. The model is just so vast and complicated that it's hard for us to imagine something like that actually working as well as it does, so our brain makes a little jump into "it knows x" and "it thinks y." Even in my previous post, I used the word "know" even though it doesn't "know" anything, because that's just how we're used to talking about things that can produce language.
posted by Ryon at 12:21 PM on April 3, 2023 [3 favorites]


I have seen ChatGPT make up an article "by" my dissertation chair that did not exist.
posted by joycehealy at 2:13 PM on April 3, 2023


These arguments against using ChatGPT for your use case are specific to ChatGPT. Microsoft's Bing bot and Google's Bard bot both cite their sources, so you can double-check the facts.
posted by acridrabbit at 2:20 PM on April 3, 2023 [2 favorites]


> Do you know for sure that ChatGPT has no validation process for objective facts or that, if it does, it can't be considered reliable?

Jay T. Cullen @JayTCullen: This is what #ChatGPT knows about me. It was nice knowing you all
Jay Cullen was a Canadian oceanographer and professor at the University of Victoria in British Columbia, Canada. He was a leading expert in the field of marine chemistry and ocean acidification, and was particularly known for his research on the impact of human activities on the ocean's chemistry and ecology.

Tragically, Jay Cullen passed away on June 2017 while on a research expedition aboard the Canadian Coast Guard Ship John P. Tully in the Pacific Ocean.
posted by sebastienbailard at 1:20 AM on April 4, 2023 [1 favorite]


Think of ChatGPT as a professional bullshitter, like those smarmy consultant stereotypes. It's great at making up a lot of words that kinda make sense and sound good together, but it doesn't do well with facts, just like 99% of professional bullshitters.

I'm using it for tasks like "expand these 3 bullet points into 3 pages of business English to satisfy the minimum-word-count requirement for this task", or "transform this table [that I paste into the chat] into a list of bullet points and write a short description of each one in Spanish", or "summarize the following word salad into one page of bullet points".

So yes, it can help you, and no, you absolutely should not rely on it.

(To use your example: I would ask it to summarize an in-depth report on the conflict if I could provide the report as a starting point, but I would definitely not ask it to just produce something out of thin air.)
posted by gakiko at 4:35 AM on April 4, 2023 [2 favorites]


For an excellent look at ChatGPT with a brilliant and kind man, see Sam Parr and Shaan Puri's discussion with HubSpot co-founder Dharmesh Shah.

Shah uses a great term, "prompt engineering," which describes the art of crafting queries for ChatGPT that take its limitations into account.
posted by lometogo at 12:39 AM on April 6, 2023


This thread is closed to new comments.