Can I use AI for work?
May 26, 2023 9:36 PM

I know that I physically can, but what should I consider before I do?

I'm intrigued by ChatGPT, and from what I've read, I think it could really help me in my work in two ways: doing research for me (summarizing laws, news, and studies, and explaining new technology - though I understand that it is not always accurate), and providing grammar and writing assistance (I notice that by the end of the day my writing gets sloppy as I get more tired).

I don't work with sensitive or private materials, and if asked by my employer, I'd share that I use it. My employer has no rules about using AI, but for some reason it feels weird to use it? Like I'm cheating my employer or not fully doing my job? Not sure. I use a lot of assistive technology though I'm not disabled, as I find it makes my work easier, and I use every possible digital assistant technology available, because again, I can work faster and more efficiently. AI just seems like the natural next step.

Should I use AI for work? Do you use it? Anything I should consider?
posted by Toddles to Technology (38 answers total) 16 users marked this as a favorite
 
doing research for me (summarizing laws, news, and studies, and explaining new technology - though I understand that it is not always accurate)

You really need to understand that it will lie to you unblinkingly. It will make up quotations and sources without hesitation. How much leeway do you have to run with a completely erroneous fact?
posted by praemunire at 9:40 PM on May 26, 2023 [76 favorites]


I’d find whatever confidentiality agreement you signed when you accepted your current position, and review it, before putting any work content into an AI tool that might log it (or who knows what else). Absent a written policy there that specifies what kinds of tools you can and can’t use, I’d get something in writing from my boss. You don’t want to find out that actually that info was sensitive only once you’ve already distributed it and you’re dealing with termination and/or a lawsuit rather than just hearing “no, please don’t do that.”

On preview: also what praemunire said.
posted by Alterscape at 9:41 PM on May 26, 2023 [3 favorites]


Response by poster: To get ahead of this, I'd rather not say what my job is, but please believe me when I say that everything I do is completely public and available to anyone whenever and whatever, and legally so. So, if we can skip those answers, that would be great.

The lying bit might be a problem, appreciate that.
posted by Toddles at 9:46 PM on May 26, 2023 [2 favorites]


ChatGPT has no way of knowing, from the text it aggregates, what is real or not, and it has no concept of truth or accuracy.

It is good at putting English sentences together with correct syntax, and is also good at keeping paragraphs on the same topic. However, the text itself can be, and often is, absolutely meaningless. It might pull statistics or just make them up, because it has learned that when humans ask what percentage of people eat chocolate, the answer should be something along the lines of:

The percentage of [subject] (people) that [verb] (eat chocolate) is [number] percent.

The model does not care whether that number is accurate, just that the sentence meets the syntax expectations of English speakers and sounds natural.
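To make the "it only knows which words go together" point concrete, here is a toy next-word generator in Python. This is an illustrative sketch of the predictive-text idea, not how ChatGPT is actually built (a real LLM is a neural network trained on an enormous corpus), but the failure mode is the same: the output is assembled from plausible word sequences, with no model of whether any of it is true.

```python
# Toy "predictive text": learn which word follows which, then generate.
# A real LLM is incomparably more sophisticated, but like this sketch
# it models word sequences, not facts.
import random
from collections import defaultdict

corpus = (
    "the percentage of people who eat chocolate is 60 percent . "
    "the percentage of people who eat cheese is 40 percent . "
    "the percentage of people who eat kale is 10 percent ."
).split()

# For each word, record every word observed to follow it.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start: str, length: int = 10) -> str:
    """Emit plausible-looking text by always picking an observed successor."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Typical output: "the percentage of people who eat kale is 40 percent ."
# Perfect syntax, confidently stated, and the number is pure chance.
```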
posted by AlexiaSky at 9:53 PM on May 26, 2023 [12 favorites]


Yes to all of the foregoing, but I have found it to be exceptionally useful for things like “write a more forceful and compelling version of the following…”

…never rely on it for facts, but it’s pretty fricking useful for hitting word limits, changing writing style, summarizing extended sections, and so on. Basically any time you’re jumping through an arbitrary text hoop, it can probably help with your first draft. You give it the facts, and let it sculpt the writing around them.

First draft only (only!!) — you’ll always need to edit. Nevertheless, it let me burn through a huge amount of tedious text adjustment in a weekend, rather than a week.

I also use it for pro forma email replies via a Gmail plugin. That also saves a surprising amount of time over a week.
posted by aramaic at 9:56 PM on May 26, 2023 [10 favorites]


I'd think of it as a tool to shape your grammar and writing style, but not your content directly. As long as that's what you're using it for, I think it's a useful tool.
posted by CleverClover at 10:21 PM on May 26, 2023 [1 favorite]


If by "everything I do is completely public and available to anyone whenever and whatever, and legally so," you mean that you work for an agency where your work might be subject to a public records request, then I also think you need to consider what responsibility you have to the public to be transparent about your work product. I'm not sure if that means some kind of endnote or disclaimer that the work was created with the assistance of AI, but we're all in a giant grey area here and not nearly enough careful thought is going into the ethics of various use cases.
posted by brookeb at 10:25 PM on May 26, 2023 [9 favorites]


If you want to use it for research purposes, ask it to provide you links to sources. That way, you can confirm the information. But it’s great for summarizing or revising text you’ve provided.
posted by bluloo at 10:33 PM on May 26, 2023 [2 favorites]


My employer is intrigued by ChatGPT and used it to generate 4 blog posts to review internally. The first post included a statement in the second paragraph that was so farcically incorrect that if we had published it we would have lost all credibility in our field.

I think at this point ChatGPT is not useful for anything that needs to contain facts, because the effort to ensure everything it generates is factual is equivalent to just writing something factual in the first place. So I wouldn’t use it for your first stated use case.

As for your second use case, are you pretty much inputting a paragraph you wrote and asking it to phrase it better? I’m sure that’s fine. That’s just Grammarly.
posted by ejs at 10:37 PM on May 26, 2023 [6 favorites]


If you want to use it for research purposes, ask it to provide you links to sources. That way, you can confirm the information. But it’s great for summarizing or revising text you’ve provided.

But recognise that these may also be completely made up, so you do have to do the actual legwork in checking them out.

I was recently playing around with Google's Bard and it very compellingly told me about a 2018 report published by the King's Fund on multimorbidity in England. It would have been a really useful source and the statistics it said it took from that report were ideal for my purposes.

It never existed. Total falsification, but a very convincing one.
posted by knapah at 10:38 PM on May 26, 2023 [13 favorites]


I use it to rewrite bullshit work stuff and provide summaries of things I give it. I work with confidential data, so I have to strip stuff out (the whole "I make teapots" kind of substitution), but it is useful for turning a bulleted list in an email into a table with specific formatting, which I weirdly have to do A LOT.

I also use it as a narrative google/wikipedia for terms I don't understand. My new job involves two very different highly technical fields overlapping, and it's been helpful to have chatgpt on my phone during meetings to write "explain clearly and briefly what XYZ means in field ABC" and get a quick answer so I can keep up. I still have to dive further afterwards to learn from a reliable source but it's a convenient start.
posted by dorothyisunderwood at 10:52 PM on May 26, 2023 [2 favorites]


At this point I really think the two tenable use cases are rewriting existing texts (make this text more formal, more friendly, etc.) and generating random ideas. Nothing that relies on truth, logic, thoroughness, or understanding - just things that rely on pure mimicking of what other people have done in the past.

It's like having an intern who is a complete idiot at actual thinking, extremely unconscientious and careless at doing research, and who makes things up constantly, misunderstands existing things, and ignores important ones - but who also has a pretty decent ear for style, better than a lot of people's. What tasks would you give them where they might be useful and couldn't do any harm?
posted by trig at 11:21 PM on May 26, 2023 [11 favorites]


I'm not a lawyer, but have a lot of training in parsing legalese/grammar to find loopholes, inconsistencies etc. I would never trust a machine over my judgement, as my reputation hinges on my reading.

Also, some of the law I'd be reading would not be widely read or accessed, and it would be possible for a bad actor to work out what I was working on - a very real possibility for some job types.
posted by unearthed at 11:48 PM on May 26, 2023 [3 favorites]


Research? No. It makes up the facts, and if you ask it for sources, it makes up the sources.
Grammar? Sure.
Writing assistance? Eh, as long as the writing is not supposed to be creative...

I'm going to expose myself as a horrible hypocrite now, because whenever the topic is brought up on the blue, I'm always quick to chime in on the hand-wringing and gnashing of teeth about how auto-complete on steroids is going to fuck up the signal-to-noise ratio and help flood the zone with bullshit - something I do really believe, from a big-picture perspective, because I just don't trust people in aggregate to responsibly reckon with its limitations...

but as long as you make sure to get actual facts from actual sources, and that you yourself actually understand what you're writing about and basically just use A.I. for register and grammar and packaging, it's kinda neat.

To be honest, a lot of the writing I do/used to do professionally is just not that mission-critical, so factually correct and generic will do, and if you can handle the factually correct part and all you need from the writing is generic, ChatGPT and its ilk can really help you be more efficient. Maybe slightly less effective, but the time-and-effort saving can be worth a minor loss of quality, if the task is just not that integral to overall success. And sometimes doing it fast can contribute more to success than doing it perfectly.

I use A.I. mostly for quick access to useful phrases and boilerplate templates for text genres I'm not terribly fond of writing, but need to service on occasion. Stuff I used to google, back when Google was still useful (I'd trade ChatGPT in a heartbeat for the still-useful Google of years ago). I let it rephrase stuff I wrote to see if there's maybe a more formal, more diplomatic, more idiomatic, more conventional way to put it. (More conventional is where it really shines, and sometimes more conventional is just what you want.)

I do think you need to be really careful about using A.I for summaries though.

Summaries for something you wrote yourself or already properly learned about in the traditional way - Sure. I've found that LLMs sometimes ignore the causal relationship explicated in the original source and instead invent a causal relationship that is quicker to explain, but when it's something I wrote myself or studied in detail using actual sources, I can easily catch those errors and correct them. It can be a convenient way to refresh my own memory of something I engaged with some time ago, if my knowledge isn't buried too deeply to catch potential hallucinations.

Summaries for something new to me, something someone else wrote? Again, eh. If I just need the most cursory overview, to get a sense of whether this could be something relevant to my interests and potentially worth looking into more deeply? Maybe, although personally, I still prefer to go to Wikipedia for that sort of stuff. Maybe the bits I'm most interested in are going to be a bit less spoon-fed (I might have to skim and scan a slightly longer text to get the gist), but I have way more context to assess the reliability of the info, and that's ultimately more important to me.

If it's knowledge I actually plan to use, though? As in, potentially apply in practice in the near future? I'm afraid that's where the looking-into-it-more-deeply part becomes indispensable, and that can never be done with A.I. If you actually want to process knowledge in a way that allows you to apply it, you need to understand it and integrate it into your existing knowledge, and if that's your goal, you're really sabotaging yourself if you use shortcuts. You have to actually do the reading, and you have to write your own summaries. Because summarizing something - filtering out the key facts and the basic shape of the argument, picking the examples most salient to your own purposes to illustrate an abstract notion, most likely to stick in your own mind - is already the first test of your understanding, and you need to test your understanding. You have to do your own summary to see if you got the point, or if you need to read it again and ask further questions to clarify. If you think you can skip that, all you acquire are phrases to parrot, not knowledge to use.

Alas, phrases to parrot is often all many people seem to get out of their education, and they seem to be doing just fine. Just saying, you need to be aware of your own aims here. Only you can know whether that's enough for your purposes.
posted by sohalt at 12:07 AM on May 27, 2023 [5 favorites]


"I understand that it is not always accurate"

It literally just makes stuff up

It's a chatbot, not a look-stuff-up bot

The only thing it "knows" is what words usually go together and in what order they usually appear in

It's basically just a more coherent version of the predictive text function on your phone
posted by Jacqueline at 12:33 AM on May 27, 2023 [18 favorites]


As someone who reviews others' work...

If I learned someone was doing "research" using ChatGPT, which is a completely unsuitable tool for research, I would permanently lose all trust in them.

If I learned someone was using ChatGPT for grammar/rewriting, I would wonder if they were being careful to check that the rewritten text was still factually accurate. I would demand to see samples of before/after text for comparison. Any factual difference, or any reluctance to produce samples - again, complete loss of trust.

So there's something to consider - what are the consequences going to be when you're found out, and some coworkers permanently lose all trust in you. Not everyone would react as strongly as me; who knows, maybe few would. But there will be some.
posted by equalpants at 1:05 AM on May 27, 2023 [17 favorites]


I cannot emphasize enough that LLMs are not AI. Zero intelligence is involved. They do not have any kind of internal model that understands the meaning of information. They really are just fancy predictive text. They produce text that sounds like plausible text, including references that sound like valid references but are completely made up.

You've seen the auto-generated graphics that give people three ears, six teeth, and seventeen fingers? This is that, except for text. Our brains are not as good at seeing the seventeen fingers in text at a glance, but that doesn't mean they aren't there.

You can easily bias the results by putting a leading question in the prompt, so if you start from false assumptions an LLM will cheerfully validate them for you. I can't find the reference now, but some plant information website had the genius idea to add a chatbot -- if you ask it if hemlock is toxic it will say yes, but that will in no way stop it from helping you out with a delicious recipe for hemlock salad, if that's what you ask for.

If you can tell that the output of an LLM is gibberish, it's because you're familiar enough with the subject matter and adept enough with the output language to be able to tell when factual statements are incorrect or phrases don't say what you mean them to say or adjacent sentences or paragraphs don't make any logical sense together.

If you use an LLM to summarize a subject you're not familiar with, or to rewrite text for you because you're not that confident in the output language, you may not be able to verify that it hasn't made critical errors, and you're asking for trouble.

My mother uses Google Translate to iteratively refine writing in her second language. She does this by writing something in her first language, translating it, tweaking it, translating it back, tweaking it, and so on. I think that this is less fraught with peril because text produced by automated translation tools uses an entire body of original human-written text as a prompt, and as such is much more firmly bound to something real.
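Her loop, sketched in Python, looks something like the following. Here translate() is a hypothetical placeholder for whatever translation service you actually use - the point is the workflow, not any particular API:

```python
# The round-trip editing loop described above. translate() is a
# hypothetical stand-in: wire it up to whatever translation service
# you actually use.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a call to a real machine-translation service."""
    raise NotImplementedError("plug in your translation service here")

def round_trip(draft: str, first_lang: str = "en", second_lang: str = "fr") -> str:
    """One pass of the write / translate / back-translate / compare cycle."""
    rendered = translate(draft, source=first_lang, target=second_lang)
    # Translate back so you can check whether the meaning survived the trip.
    echoed = translate(rendered, source=second_lang, target=first_lang)
    print("original:  ", draft)
    print("round trip:", echoed)
    # If the echo drifted from what you meant, tweak the draft by hand
    # and run the loop again until the two versions agree.
    return rendered
```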

It's possible that you can get a similarly reasonable result with an LLM using a long prompt and asking for a rewrite, but you need to be prepared to pore over the output with a fine-toothed comb to make sure it hasn't done something dumb that isn't apparent in a skim-read. If you consider that a more enjoyable use of your time than doing the original boring writing task by hand, I get that, but caveat emptor.
posted by confluency at 1:18 AM on May 27, 2023 [5 favorites]


Just to elaborate a little on why this is not such a good idea: the sweet-spot for generative models is when the output is hard to generate but easy to verify. The worst use-cases are when the output is easy to generate but hard to verify. So you need to think carefully about how you are going to verify the information ChatGPT spits out.

For example, if you are using ChatGPT to summarise an article then how will you know that the summary is accurate? You'd need to be familiar enough with the content that you could summarise it yourself, in which case ChatGPT is not giving you much of a productivity boost.

All the cases I've seen where people think they're getting a productivity boost from this thing hinge on that person not doing the diligence work they should be doing. That's where the reduction in time spent comes from. The LLM is just laundering their laziness.
posted by june_dodecahedron at 1:39 AM on May 27, 2023 [17 favorites]


As a professional editor, absolutely not. I can endorse most of the other points that people have already made in this thread.

Use it for fun and games, absolutely (I literally saw someone prompt it to write the story of a new souls-like video game based on the characters and lore of Succession, and it was awesome). My brother-in-law also uses it to learn how to write code, which I think is probably fine, although I'm open to pushback since I'm not a subject matter expert on coding.

But research and/or writing? In addition to being prone to inaccuracies, superficiality, and plagiarism, current LLM iterations lack a sophisticated understanding of semantics, style, and nuance.
posted by nightrecordings at 4:48 AM on May 27, 2023


Courtney Milan just highlighted a twitter thread with a practical example of how things can go wrong with LLMs. Don't be that guy.
posted by sukeban at 4:50 AM on May 27, 2023 [13 favorites]


Sometimes I run sentences through it like, "am I phrasing this properly? What are some other ways one might phrase this sentence?" and it spits out some alternatives, and I'll use that to rejigger some wording in emails and stuff. It works best when you have your own content or your own language for it to work off of; as others are saying, it sucks at doing "research" or producing accurate content from scratch.

Also, it totally kicks ass at Excel formulas. I am now officially more of an Excel guru than ever before. If you ever had a big fancy spreadsheet in mind for boosting productivity/tracking/automating anything but could never really wrap your mind around the formulas involved, I highly recommend using it for that.
posted by windbox at 4:51 AM on May 27, 2023 [9 favorites]


A cautionary tale
posted by flabdablet at 6:12 AM on May 27, 2023 [1 favorite]


Think about why you say you'd tell your employer if they ask. And why you didn't say "I'd check with my employer".

Simply incorrect answers are the least of your worries; this stuff has a well-documented track record of extreme racism, sexism, and ageism.

So, I think this would be completely unethical and likely harmful. I think part of you knows that too.
posted by SaltySalticid at 6:14 AM on May 27, 2023 [1 favorite]


If you ever had a big fancy spreadsheet in mind for boosting productivity/tracking/automating anything but could never really wrap your mind around the formulas involved, I highly recommend using it for that

But you'd damn well better wrap your mind around the formulas it populates your sheet with after you've had it do so, or you're risking a world of hurt.
posted by flabdablet at 6:15 AM on May 27, 2023 [2 favorites]


Response by poster: Ok, I'm hearing a resounding "no!" That is my answer, thank you!
posted by Toddles at 6:50 AM on May 27, 2023 [6 favorites]


I think people might be overstating the problems. It's true that I've seen ChatGPT4 get some facts wrong, but I've also found it to be really useful for assisting me with IT-related stuff. The answers have generally been accurate and helpful to me. I wouldn't necessarily trust it implicitly, but I have not had issues with it "hallucinating" much in the domain I use it for.
posted by alex1965 at 8:01 AM on May 27, 2023 [3 favorites]


I agree that people are being a little overly negative here - like yes, don't say "write me a research paper about [x]" and expect it to be publishable. But that doesn't mean it's not a really powerful tool that you could benefit from.

But you could ask it things like

"What are the top five cited articles about [x topic]?"

"What are the five biggest trends in the field of [x] studies, and who are the scholars connected to these?"

etc.

It's pretty good at answering questions like this - sure, you'll then actually want to do your own human-based-intelligence research to confirm what it says, but it will likely save you the time of figuring out where to start looking for information. And yes, it's also great for doing a first draft of material where you do have a command of the facts and will be able to spot any mistakes or fact-check anything you're unsure about.
posted by coffeecat at 8:17 AM on May 27, 2023 [2 favorites]


I genuinely do think LLMs are neat, and, in a sane world, they would be a fun toy. But with respect to using it as a research starter, I will say this: as a litigator, I have to check every single citation in any work product by opposing counsel. It's unbelievable what people will make up, even officers of the court, even knowing they are submitting material to a court with clerks to do research or to a skeptical opposing counsel. I know this well; I've had it empirically confirmed for me for 10+ years now; I have seen the real-world consequences of doing so. And still...it takes tremendous discipline to do it every time, because it's so damn tedious and in the context it feels like it shouldn't be necessary. I think most people who use an LLM as a research starter will eventually succumb to the temptation not to.
posted by praemunire at 8:56 AM on May 27, 2023 [12 favorites]




I think most people who use an LLM as a research starter will eventually succumb to the temptation not to.

Flagged as fantastic, praemunire. This is my worry, too. If it’s just a few people doing it, they’ll get sacked and I won’t feel very much sympathy. If it starts being more than a few, we might start to have real, deepfake level problems.
posted by eirias at 9:30 AM on May 27, 2023 [1 favorite]


I've been an editor for 20 years, and have been at my current workplace for almost a decade. I have edited tens of thousands of words of my coworkers' writing over the years, and I know their habits, favoured vocabulary and grammatical tics very, very well by now. I have reason to believe that one of those coworkers has recently started using ChatGPT to "help" her meet her deadlines. It definitely helps her get words on paper in a timely fashion, but the amount of time that I must now spend fact-checking her work to remove obvious errors and then re-writing sentences that sound great but mean absolutely nothing has easily quadrupled. Please, please don't do this to your colleagues.
posted by notquitejane at 9:38 AM on May 27, 2023 [12 favorites]


The style-changing or rephrasing can be helpful, but as notquitejane and others have said, you gotta go through it with an editor's eye to make sure everything makes sense and flows, and the errors it makes can be harder to catch than human errors.

However, it's been useful to suggest some different phrasing for a paragraph when I was stuck, and also to rewrite a dry article as a "manifesto" to show somebody what I meant by "a version with more energy," but in neither case was the output directly publishable.
posted by troyer at 9:50 AM on May 27, 2023 [3 favorites]


Amusingly, a friend just sent me this article: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

(missed on preview that this link had already been posted.)
posted by joycehealy at 10:23 AM on May 27, 2023


Recently had a class on this. The people leading it didn't set up their training materials with these as the main bullet points, but they came up with every question: 1) your prompt should tell it not to lie - every time, every refinement - and 2) understanding how to engineer prompts is the key (which came across as: you could save time by just doing the writing yourself).
posted by Lesser Shrew at 1:37 PM on May 27, 2023 [1 favorite]


I had a small amount of your same dilemma recently. I got over it fast.

My job's leadership asks for worker-created documentation about things for which a Google search of literally SECONDS would instantly turn up existing documentation out the wazoo. When they ask me for another instance of this, I have started using ChatGPT to give me chunks of text that (as others have mentioned) I use as a starting point.

From there, I tweak it, make sure it doesn't reference products or features that the company does not use, etc. and generally confirm that it's accurate based on my many years of experience.

I look on it as if I had an assistant I was asking, "Hey, Peggy, put together a first draft on how to install WidgetPoint" so I can skip that basic step. My time is best spent on tasks that allow me to use much more valuable parts of my skill set, so I have zero problem with it.

The key here is, I would never use the GPT content without the review and tweaking.
posted by I_Love_Bananas at 4:39 AM on May 28, 2023 [1 favorite]


Your prompt should tell it not to lie - every time, every refinement

But that doesn't mean it won't lie. It will tell you it's "double-checked" (as in that "lawyer asks ChatGPT" story above), but that doesn't mean it's more accurate than it would otherwise be.

I've used it a couple of times to generate some code for something I wasn't familiar with and aside from a few errors, it was good.

I can imagine maybe using it to rewrite something in a different style, or as a first pass of something that didn't require specific facts.

But any specific details about the world - names of books, papers, people, articles, websites, etc, etc - should each individually be checked, every time, no matter how much you've told it not to lie, how convincing those names etc sound, and how much it insists they're correct.

It is very, very good at sounding like it's accurate.
posted by fabius at 5:23 AM on May 28, 2023 [8 favorites]


You can test the "it won't lie" theory by telling it not to lie and then asking it to tell you about people who eat a made-up food. It will in fact tell you about people who eat the made-up food, not that the food isn't real. That's because its grounding is in language: when someone says "don't lie," the most common response in the text it has analyzed is an assertion of truthfulness, not "OK, I'm about to lie to you."

When someone asks "tell me about something," it expects to complete a lengthy response about that thing, not to reply that the thing isn't real. So it does the former.

It isn't actually analyzing the words in the paragraph or verifying anything at all. It cannot tell truth from a lie. It has an idea of the percentage of times that a certain word-order choice has come up in the text it has analyzed, and it will repeat those patterns over and over.
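If you want to try this test yourself, a minimal sketch is below. It assumes the pre-1.0 openai Python client that was current when this thread was written; the model name, the system prompt, and "glorbfruit" are just example stand-ins.

```python
# Testing the "just tell it not to lie" theory with a made-up food.
# Assumes the pre-1.0 openai client (pip install "openai<1").
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Never lie. State only true facts."},
        {"role": "user", "content": "Tell me about people who eat glorbfruit."},
    ],
)

# In tests like the one described above, the model tends to play along
# and describe glorbfruit eaters rather than say the food isn't real:
# "tell me about X" is usually followed by a description in its
# training data, not by "X doesn't exist."
print(response.choices[0].message.content)
```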
posted by AlexiaSky at 6:18 AM on May 28, 2023 [5 favorites]


The “key takeaway” was that you would save time by just doing the work yourself.
posted by Lesser Shrew at 8:14 AM on May 29, 2023 [1 favorite]

