Is "ChatGPT says you're wrong" an acceptable thing in the workplace now?
January 7, 2025 9:36 AM
Like many millennials, I'm generally not a big fan of AI, so I'm not sure if my bias is showing or if this is actually as crappy as it feels to me.
Last year, I joined a startup as the product manager for their website. I've worked with and on websites for 10+ years now, so I'm pretty knowledgeable and experienced about this stuff. I was hired because the website was previously being run by the design team, who had little experience in web design and none in any kind of tech. Over the last few months, I've worked closely with them so that they can start to understand what's easy vs. difficult vs. not doable design-wise, particularly with our own wacky tech stack.
On two occasions now, when I've provided feedback on why a particular approach in a design might not work or be ideal, one of the designers has replied with "ChatGPT says we can do it this way," along with the wall of text output from ChatGPT and nothing else. ChatGPT wasn't wrong, but it also wasn't right, at least from my own perspective.
Both times, I was really annoyed when I got these messages. Of course, no one likes to be told they're "wrong." But to be frank, I don't care what ChatGPT says, at least not in this instance. I'm a human being who's been doing this for a long time and I know what works and what doesn't, especially with our own setup. Even the best AI doesn't have that nuance.
But I also know times are changing and I'm feeling a little like an old person yelling at the big computer in the sky. This is the first job I've had where almost everyone is younger than me and they use ChatGPT a lot. Is "ChatGPT says you're wrong" a normalized and okay thing that I should just get used to now? Or is this actually kind of a crappy way to deal with a disagreement with a coworker? Or something in the middle?
I'm a human being who's been doing this for a long time and I know what works and what doesn't, especially with our own setup. Even the best AI doesn't have that nuance.
So... while I agree with you, especially about the AI part, if I presented things that way here in my environment, I would not be successful in building buy-in. I always present things with a rationale (either on the page or in my back pocket) that generally is one of:
1. research (user, competitive analysis, etc.)
2. best practices (accessibility, etc.)
3. any scenarios we've run through with the tech team and discarded, with a straightforward explanation of why ("due to our choice to use X, we can't easily implement Y")
My suggestion is that rather than deciding whether your coworkers' responses are okay or not okay, you run your problem through ChatGPT yourself first and include in your presentation/email/whatever why the ChatGPT solution isn't a great one, the same way you would say something like "although previous thinking was X, current standards are Y."
posted by warriorqueen at 9:41 AM on January 7 [8 favorites]
I do not consider ChatGPT a reputable source of factual information. Quite the opposite. ChatGPT and other generative AI systems have well-documented patterns of providing wildly inaccurate information. It's one thing to use ChatGPT to generate a cover letter that you can proofread before sending. But it's another thing to use it as a citable source of expertise.
If your coworker wants to keep their design, they should find a reputable source that says it's okay. That would be a human or a design company or something. But not a generative AI system.
(I wish I could find you a reputable citation for this point of view, but when I google I just get AI summaries, and I don't have time to delve deeper right now.)
posted by Winnie the Proust at 9:50 AM on January 7 [15 favorites]
Best answer: I write a lot of the documentation used by our customer service team based on documentation provided by our vendors. I've come up against this a lot recently when working directly with the care team, who tend to be new graduates and individuals looking for experience in the business we do. A lot of what I'm seeing is folks taking the documentation I've written and running it through ChatGPT or a similar AI tool, which brings outside sources into the siloed plan procedures. This can mean ChatGPT saying we can approve an appeal when we can't, providing wrong timelines, or even giving wrong contact information.
I treat ChatGPT like an extension of the person. If I think ChatGPT is wrong, I treat it just like I would a human being, because in my experience the person sharing the log doesn't understand the answer. By breaking it down like I would a response from a person, they feel heard. I basically act like they got bad information from a third party. If it's a security breach, then it's handled a bit differently, but I find most folks use ChatGPT because they feel it listens to them more than an actual person would. By validating their experience (even if I'm rolling my eyes) we tend to be able to move forward. Or they feed my response back into ChatGPT and we go back to step one.
If this is a big time sink, maybe question why they're trusting ChatGPT more than the expert they're working with, but I'd go at least two rounds with the AI before trying to figure out why they're going there instead of to you.
posted by Torosaurus at 9:58 AM on January 7 [19 favorites]
My bias entirely lines up with yours; I think "ChatGPT says you're wrong" is a moronic statement borne out of crippling ignorance about what ChatGPT is.
THAT SAID, I think how you approach it entirely depends on what the next step is. Is implementation your job, and are they telling you that their design choices are technically feasible based on the LLM output? Then I think you need to give them a more accurate scope for implementation based on the realities of your tech stack, since implementation is your domain, and with enough time and money everything is technically possible, even if it means completely recreating the entire system. Or is the LLM saying that their design choices are correct, but you know that they don't line up with best practices? Sounds like design is their domain, and if they want to screw things up based on the output of the gibberish machine, that's on them.
IOW: don't let them outsource your job to ChatGPT (seems like an "okay, then you try to make it work, smart guy" situation), but if they're doing that for their own jobs, register your complaints and move on.
posted by supercres at 9:59 AM on January 7 [10 favorites]
I've found that particularly for "soft" questions involving grey areas and complex judgment, ChatGPT can also be persuaded to completely reverse its first answer with just a little user pressure. Try it with a complicated question or something involving specialized professional knowledge, then after it answers, ask "Are you sure? I thought that [evidence for the opposite or even a completely bogus answer]," refuse to back down, and watch it waffle. Just a few weeks ago I got it to confidently claim that a key plot item in a popular novel was green, then with some resistance made it reverse and confess, with similar confidence, that actually the item was black. (In the book, it was purple).
This makes sense because ChatGPT is not "answering" the question or querying its "knowledge"; it's probabilistically cobbling together language associated with the given prompt, based on the text in its largely web-based training set. It doesn't screw up reliably enough that you can count on it embarrassing itself with every question, or in real time in front of your boss, but if you play around enough to get it to do that once and then print out the conversation to show them, it makes for a very powerful correction to people's naive trust in the magic box.
posted by Bardolph at 10:00 AM on January 7 [30 favorites]
Best answer: I've run a digital product agency for years.
I'd lose my shit if someone said that. ChatGPT and its ilk are powerful tools, but I'd not have them provide design guidance - there's way too much nuance to those types of decisions for most people to frame their questions correctly.
The "nice" way to present this is to ask where ChatGPT is sourcing its information and what prompt they're using to ask it questions.
Like most everything in dealing with large datasets or ML and AI, getting an answer is easy. Getting the answer you "want" is also pretty easy. Whether that answer is valuable will depend on whether you're asking the right questions.
posted by bitdamaged at 10:00 AM on January 7 [17 favorites]
I have begun doing what warriorqueen suggested above (run your problem through ChatGPT yourself first and include in your presentation/email/whatever why the ChatGPT solution isn't a great one) preemptively before certain meetings now.
It's more practical than what I want to do, which is say "Gee, thanks, John Backflip!"
posted by Calvin and the Duplicators at 10:02 AM on January 7 [1 favorite]
a ... statement borne out of crippling ignorance about what ChatGPT is
"If you can say that with a straight face, then you have absolutely no idea what an LLM is, how they work, or what they're good for."
Not saying bust that out first thing, but it may be necessary to keep in your back pocket because at some point you're going to need to crush someone, I can almost guarantee it. Just file it away.
My harsh answer is borne out of seeing people use ChatGPT to confirm that their utterly idiotic preconception is in any way possible, and then using that word salad to explain why "AI" says they were right all along (for example, you can see people do this a LOT in the meme stock universe, trying to explain why their latest shitco stock is actually worth trillions). ChatGPT (and nearly all LLMs) will bend over backward to give you the answer they "think" you want, especially if you bully it a little.
Doesn't mean FFIE is gonna be worth more than China, no matter how confidently ChatGPT says it's possible.
posted by aramaic at 10:11 AM on January 7 [6 favorites]
"If you can say that with a straight face, then you have absolutely no idea what an LLM is, how they work, or what they're good for."
Not saying bust that out first thing, but it may be necessary to keep in your back pocket because at some point you're going to need to crush someone, I can almost guarantee it. Just file it away.
My harsh answer is borne out of seeing people use ChatGPT to confirm why their utterly idiotic preconception is in any way possible, and then using that word salad to explain why "AI" says they were right all along (for example, you can see people do this a LOT in the meme stock universe, trying to explain why their latest shitco stock is actually worth trillions). ChatGPT (and nearly all LLMs) will bend over backward to give you the answer they "think" you want, especially if you bully it a little.
Doesn't mean FFIE is gonna be worth more the China, no matter how confidently ChatGPT says it's possible.
posted by aramaic at 10:11 AM on January 7 [6 favorites]
Best answer: Wow. If I were this designer's manager and I saw them pull this I would immediately schedule a conversation with them to let them know that not only is this an unacceptable way to communicate with colleagues, but that putting forth ChatGPT as a source of truth makes me doubt both their technical expertise and design chops. Design is communication, and that communication was not okay.
posted by subocoyne at 10:14 AM on January 7 [21 favorites]
How about asking ChatGPT what it thinks about your designer's trust of ChatGPT, and sending it back to the designer?
I asked ChatGPT about your question:
When I gave someone feedback on a design, that person told me that ChatGPT said my feedback was wrong. Should I trust what ChatGPT said about my feedback?
ChatGPT said:
It's always good to approach feedback—whether it's from ChatGPT or another source—with a critical mindset. Here's how to handle this situation:
1. Understand the Context: ChatGPT provides responses based on patterns and knowledge available up until its training cutoff or through tools like web searches. It can offer logical suggestions, but it may not have the full context of the design or the goals of the project.
2. Consider the Person's Expertise: If the person you're giving feedback to has significant expertise or familiarity with the design, they may have insights that ChatGPT and even you might not.
3. Double-Check with Reliable Sources:
   - Research authoritative resources or best practices for the design topic.
   - Seek opinions from other experts or peers in the field.
4. Learn and Iterate: If the person disagrees with your feedback, ask for clarification. This can help you understand their perspective and improve your own knowledge.
5. Use ChatGPT as a Tool, Not the Authority: Think of ChatGPT as a helpful assistant or brainstorming partner rather than the final authority on any subject. Its responses can sometimes lack nuance or be incomplete.
posted by ShooBoo at 10:19 AM on January 7 [24 favorites]
“The next person who tells me ChatGPT said we could do it this way is getting glue on their pizza.”
posted by jimfl at 10:38 AM on January 7 [3 favorites]
JFC NO!
Longer version: I'm a scientist and work in academia. If anyone told me that, I would laugh in their face. I may be wrong, but that's like trying to win a spelling bee by yodeling or reciting the alphabet. A Wikipedia article would be light years better as a starting point for taking issue with a claim of mine.
posted by SaltySalticid at 10:40 AM on January 7 [7 favorites]
What they're not getting is that ChatGPT isn't an expert; it's just a jumped-up spreadsheet. "ChatGPT says you're wrong" is a ridiculous response, because ChatGPT does not think or do work; it is just a text predictor.
To me this suggests that your colleagues don't understand what you do or why it's important to have a skilled staffer in your role, because when you say "based on my actual experience with this work, this isn't a good idea" they're responding with something that's like doing a quick google and saying "I found this link that says we can do it, you must be wrong".
Can you push back and ask them to explain how they know that ChatGPT is giving actionable advice? At the very minimum, they ought to have familiarized themselves with the sources it is using and be able to give a layperson's explanation of why they think ChatGPT is correct. If they can't explain that, then they don't have enough expertise to evaluate ChatGPT's answer, much less evaluate your experience-based advice.
Honestly, this situation would put me in a real I-am-becoming-the-Joker headspace.
posted by Frowner at 10:42 AM on January 7 [8 favorites]
Best answer: The designer is big mad that website authority has gone from their department to you. They're having their "Computer Says No..." moment.
Ask them to summarize ChatGPT's approach into a five-bullet-point plan. That requires either the designer or ChatGPT to understand what ChatGPT just said. There's a fair chance that either/both won't be able to answer coherently. Reply with "Hmm, interesting, but I see that this doesn't meet our tech stack requirements of FlurbleJS talking through PoxyProxy 1.23 to our ShoeboxFullOfIndexCards backend."
Either way, do what you were going to anyway. It sounds like you are more senior than them.
None of this, I hope, suggests deliberately being a shit to them.
posted by scruss at 11:01 AM on January 7 [8 favorites]
In your place, I would also be irritated.
However, I can also imagine saying to my auto mechanic, "Are you sure that I need to replace ABC on my car? An anonymous commenter on MetaFilter said that for cars like mine, there's no need to replace ABC for 10 years and it's only been 3 years."
My auto mechanic might be offended that I'm considering a random anonymous commenter over him (a seasoned mechanic). But I would just be guarding against the case where he says, "Oh wait, your car is only 3 years old? I misread that!" or "Oh that's right, I forgot these newer cars come with auto-cleaning ABC!"
How does your colleague respond when you say no the second time? Do they keep arguing or do they humbly accept your explanation?
posted by vienna at 11:03 AM on January 7 [8 favorites]
If they’re a Tolkien nerd, you can tell them ChatGPT just told me tea isn’t mentioned in his writings when The Lord of the Rings has it right there in Chapter 1.
posted by Lemkin at 11:40 AM on January 7 [4 favorites]
You could install this extension that turns all instances of 'ChatGPT' into 'my stupid friend' and then ask them why they're talking to their stupid friend instead of thinking critically.
posted by PikeMatchbox at 12:08 PM on January 7 [3 favorites]
Best answer: I think the especially frustrating thing is that they're putting in almost zero effort (typing in a question) and laying the onus on you to expend effort in order to engage with the details of a low-quality source of pseudo-information that they may not have even fully read before sending to you.
I would put the onus of effort back on them. "In my understanding, ChatGPT is a language model and not a source of actual information. But if there are parts of this output that you find to be useful guidance, I'd like to learn more about what you discovered. Could you please walk me through it?"
posted by dusty potato at 12:10 PM on January 7 [19 favorites]
This is really annoying. However, what they are doing is pushing back, and attempting to cite an authority. Now, it just so happens you don't regard their authority as an authority (and I agree!), but don't miss the forest for the trees here. Imagine they are saying, "my web designer friend says we CAN do this."
I would try not to make it about ChatGPT at all. I think scruss is exactly right: they are irritated they no longer get to make these design decisions, and they are trying to argue for what they want.
posted by bluedaisy at 12:49 PM on January 7 [13 favorites]
Sounds like you could replace that person with AI.
posted by kinddieserzeit at 2:31 PM on January 7 [2 favorites]
Best answer: This is less an issue of ChatGPT in the workplace and more a “your coworker sucks” problem. So while you are going to encounter misguided reliance on AI in work and private life because people don’t understand how it works, this coworker’s use of it is a symptom of them not trusting you as a subject matter expert. The short answer is definitely yes, this is a crappy way for them to handle a disagreement with a coworker, but the long answer is that it isn’t because they used ChatGPT.
To specifically address your coworker’s use of ChatGPT, it might help to use the example of how AI-generated images work when talking to them about ChatGPT as a source. Not always, but often, younger people are less gullible about AI imagery even while trusting ChatGPT’s accuracy. You can point out that even though image generation keeps improving, these tools still generate things like too many teeth or fingers and ultimately still need to be checked for accuracy by humans. Keep your tone light and curious — maybe ask if they use AI-generated images in their designs and what their experience has been with their accuracy, especially when it comes to really specific inputs, and try to be as friendly as possible. You’re just sharing your specific knowledge about a topic while asking them about their experiences.
The rest of my answer is not AI-focused; it’s thoughts that come from being a designer for almost 20 years, having worked with many project and product managers, designers, and developers, and often having to fill all of those roles myself (they get blurred too often). I have been in your place of having to explain what is technically possible to people really invested in their vision, and this is a familiar dynamic that would exist without AI; it’s just one more way for people to be confidently wrong.
Start-ups in particular can have a skewed sense of workplace norms, so I wouldn’t take your coworker’s poor approach as being acceptable elsewhere. Look at the bigger picture of how things are handled where you work: is it one where people are rewarded for having big ideas, while the people who execute those ideas or who focus on the practical don’t get as much recognition? Do you have stakeholders who get excited about big ideas and don’t care if they’re realistic or make sense for your long-term goals? (And are people there in general super excited about the use of AI?) Is the design team used to being seen as a bunch of creative visionaries? Have they worked with product managers or even developers in the past, and do they know what your role actually IS? Do any of them have much experience in the workplace?
Sometimes even the most annoying coworker is simply reacting to what is being rewarded in a workplace. It doesn’t make your coworker right but if it’s the kind of environment where you don’t want to stick around long enough for it to seem normal, the problem becomes less “how do I handle this coworker who thinks chatGPT knows more than I do” and more “how do I find a new job?”
On the “your coworker sucks” side: when I was first starting out in my career I was shit at feeling it was okay for me not to know something. I would present an idea for a design, and when I was told it wasn’t technically possible, it felt like I was supposed to fight back. This got reinforced in a particularly toxic job in my early 20s where scoring points with stakeholders was always more important than, you know, reality; the stakeholders had neither technical nor design expertise, but they wanted to feel like they had great taste, so me being precious about my “art” was tolerated and often rewarded. I am curious if any of that could be going on with your coworker or the design team overall?
I assume you’re probably doing a lot of the following, but here are some ideas to help make your feedback get heard: acknowledge why they want something to look a certain way before saying it isn’t possible. If they aren’t explaining the whys behind their design, ask, so that you can better explain what is possible that still achieves their stated goals. Ask the questions in a way that makes it clear that you respect their expertise. I mean, obviously they don’t respect yours if they think ChatGPT is a more reliable source than you, but it’s worth making sure that you’re approaching them as being on the same team, and knowing that maybe they are just young and inexperienced and don’t realize that when you tell them something isn’t possible, it isn’t because they failed or because they didn’t defend it hard enough.
Sorry you’re dealing with this!
posted by the thorn bushes have roses at 2:45 PM on January 7 [2 favorites]
My rule of thumb for LLM authority is to treat it in exactly the same ballpark as "hey, my cousin's friend who I think kinda works in this area, maybe, once told me at a party that..."
In other words, yes, your interlocutor (or me, to be fair) might have been put onto the track of something interesting, relevant, important, or useful, but for that axis of enquiry to be treated with any degree of serious consideration, they/we need to put the verification work in, own the professional risk, and present it as their own opinion/proposition.
The "chat says this so there you go..." line is, I think, a pretty clear warning sign in any professional context. Naive at best, descend through misguided, lazy, ignorant, foolish etc. depending how charitable you feel.
posted by protorp at 3:23 PM on January 7 [2 favorites]
Sounds like you could replace that person with AI.
ChatGPT says "You're fired"
posted by banshee at 3:30 PM on January 7 [1 favorite]
ChatGPT has been known to lie.
Beyond that, ChatGPT also takes all of its info from other sites, which are made by people... who may or may not have a clue what they are talking about, or could even be scam sites.
posted by itsflyable at 5:52 PM on January 7
Provide a short summary of the web page: https://ask.metafilter.com/383854/Is-ChatGPT-says-youre-wrong-an-acceptable-thing-in-the-workplace-now
A discussion on Ask MetaFilter highlighted frustrations with colleagues using ChatGPT to counter professional advice, undermining expertise. Respondents stressed the importance of clear communication, explaining decisions with evidence, and treating ChatGPT as an imperfect tool, not a definitive source. The consensus was that while AI can assist, it shouldn't replace human expertise or contextual understanding in specialized fields.
posted by nixxon at 6:59 PM on January 7 [2 favorites]
No. It's one thing if this person said, "with my 3/5/10 years of professional experience this is possible."
Then you could have a discussion of the relative merits of both approaches. But it's entirely different to use an appeal to authority, when as the boss... you are the authority.
Also... Is the use of ChatGPT sanctioned by your employer? (I can't use it at my employer, but can use Microsoft Copilot.)
posted by oceano at 8:46 PM on January 7 [1 favorite]
I’m also a millennial and work in nonprofits. My 70+ year-old boss physically stood over my shoulder last week until I caved and made an account on ChatGPT, because she loves it and thinks it’s the answer to everything. She uses it for everything now - writing our strategic plan, grant applications, job descriptions, everything. So, I don’t think this issue is age-bound. I think a lot of people are willing to trust a system that is, yeah, maybe kind of dicey - if it means that system does their work and, even more importantly, their critical thinking for them.
To answer your original question - I don’t think most workplaces are at a point yet where that kind of statement is acceptable, but unfortunately, I think that’s where we’re headed. People lack the ability and frankly the inclination to parse out what these tools were and weren’t specifically designed to do - they’re just seen as digital band-aids that can be stuck onto every problem. And, usually, people can mostly muddle through with this strategy because a lot of what ChatGPT generates is so word salad and opaque.
I also definitely think it’s making a lot of us lazier, but that’s a separate discussion.
posted by oywiththepoodles at 8:54 PM on January 7 [3 favorites]
In my world, "X says you're wrong" is unacceptable for any value of X.
If the person can't offer their own reasoning, they absolutely shouldn't be declaring anything "wrong" on any authority. They have declared themselves personally clue-free and useless! And that's the biggest problem here, aside from the rude formulation of the statement (which it is, to be sure).
posted by Dashy at 9:14 AM on January 8 [4 favorites]
Ask them why they think ChatGPT is an authoritative source for this. Tell them you don't view such tools as authoritative and ask for other sources. ChatGPT is just predictive text on steroids, and anyone who says differently is selling something.
posted by Aleyn at 12:07 PM on January 9
I’d be tempted to respond with something like “Yeah, I used to get that kind of output too - I really had to hone my prompt engineering to ensure I was including all the relevant context. Have you tried the CLARITY prompt framework? It might improve your results.” Bonus points if you can act faintly surprised that they’re still using ChatGPT; hasn’t everyone moved on to Claude?
posted by scyllary at 6:38 PM on January 9
ChatGPT does not answer questions or provide information. ChatGPT predicts which word would make sense after the previous word, based on having analyzed lots and lots of other words and sentences and documents. Any resemblance its outputs have to factual truth is purely a side effect.
posted by decathecting at 8:46 PM on January 9