AI & I
July 9, 2024 6:43 AM

My job has started using AI to generate data and research. It’s terrible. I hate it. Can I push back without losing my job? Can I push back and maybe win?

I started at my job slightly less than a year ago. My company does research. Lots of data and facts, very buttoned-down stuff.

In the last two months, they’ve become quite obsessed with AI and using it to generate their research. Overnight, an already dull job became tedious and almost impossible. Not to mention, ethically I’m 100 percent anti-AI.

I hate it. I hate it enough to quit, but I’m a lifelong job-hopper who really, really wants to get their life together and work in a place for more than a year.

How can I push back without losing my job? Can I? Should I? I’m not good at navigating office politics and I’m very stubborn — thus all the quitting. But this bullshit is ruining the work output, not to mention the whole world. I didn’t sign on to improve AI slop, I signed on to this job to work with people and improve their writing.
posted by meowmeowdream to Work & Money
Are there problems with the AI (particularly hallucinations) that you can use as an argument for why it's not a good idea to rely too heavily on a flawed tool? I suspect your superiors may be more open to an argument showing that AI is bad for the company vs. an argument that it's bad for you (or for society).
posted by akk2014 at 6:48 AM on July 9


Arguments from ethics or "this is slop" are unlikely to be successful.

Successful work at companies typically aligns with the company's goals. "I'm very concerned that hallucinations are going to reduce quality" is a valid corporate argument. "This is garbage and it's ruining the world" is very unlikely to see any success.

The very short answer is that you either need to learn to be good at office politics right away, and this could be a good way to practice those muscles, or you are 100% done with this place and should keep your mouth shut and get out as soon as possible, whether you like it or not.

Your bosses are (probably) being pushed hard themselves to get costs way, way down via The Magic of AI. They're hearing a lot from salespeople and/or accounting and/or wide-eyed board members/executives about how amazing AI is and how it'll let them get rid of all these pesky expensive people. They're seeing, superficially, amazing results. You cannot just say 'this is bad, it's stupid' in a straightforward way. Getting people out of their obsessions is very, very hard.

If you're a stubborn person without much practice in office politics, you are very likely to just make a name for yourself as a troublemaker and get fired quickly. "Some people may be sticks in the mud and resist the New World" is something they're probably primed for. There is no easy way to do this: there's the calm, long-play, political path, or you get yourself fired even faster after a frustrating, terrible experience in which you are dismissed. There is basically no good middle ground.

This doesn't mean I think you should just give up. But this is a hard problem and will require discipline and patience to make any progress on, and it still might utterly fail. Are you prepared for that?
posted by Tomorrowful at 6:58 AM on July 9


Something you don’t address (fair) is what conditions are like at other positions you might take, in this field or others you might consider. AI is in many, many workplaces now and not leaving in the near term for silly little reasons like “but it is destroying the world and damaging workers.” Will leaving because of this problem land you in a worse job where you face the same problem? Something to consider.
posted by cupcakeninja at 7:09 AM on July 9


My job has started using AI to generate data and research.

It's not possible to "generate data" using AI. If the numbers aren't acquired through measurement, then it's not data, it's wishful thinking, AI or no.

AI could produce a summary of data, e.g., "32% of respondents preferred hot pretzels with mustard." But there's no real AI there, just number crunching.

I could imagine AI producing a useful "abstract" of a study, since the "meat" would be in the study itself. But I'd want to double-check it anyway to ensure it wasn't inventing citations or drawing unsupported conclusions.
posted by SPrintF at 7:10 AM on July 9


From your question I can’t understand what your specific issues are. Are they:

- you’re rewriting instead of writing? If it takes more time, that’s a company issue. If you just don’t like it, it’s probably not going to be compelling
- is the data inaccurate?
- are the insights inaccurate?
- are things like white papers less effective because AI generalities are a dime a dozen?
- is the output likely to lower your company’s reputation?
- other?

If the end output is meeting the same corporate goals and doesn’t cost more, you’re not likely to impact things. But if it is costing more or falling short — you use data to make cases, right, or at least provide data in service of a goal? Give your company the data.
posted by warriorqueen at 7:10 AM on July 9


Can you clarify how you're actually using AI to generate data and research? It's sorta hard to answer this question without knowing what sort of research you're doing or how AI is being incorporated - on preview, my question is similar to SPrintF's.
posted by coffeecat at 7:14 AM on July 9


What does "generate" research mean here? That phrasing makes it sound like they are using AI to make up fake data and then citing it as original experiments (which seems wildly unethical), or using it to generate fake articles/citations to justify whatever conclusions they want (also very unethical).
posted by mrgoldenbrown at 7:19 AM on July 9


Agreed that if you can bring this up at all, it's gonna need to be centered on possible damage to the company. It almost sounds like they are using AI to generate research summaries/white papers but not do the research itself (I hope!!). I know there are many kinds of work called "research", and not all of it is sitting in a laboratory creating raw data via experiments, but the actual synthesis needs to be done by humans regardless of the type of research.

In addition to the quality angle, you could also bring up intellectual property and copyright: is your company hurting its IP by unthinkingly dumping its research into an LLM to generate the summary/writeup/paper? That is something my embarrassingly-AI-cheerleader executives are slowing down to think about. Also, where does your funding come from? If you get grants from the government or other institutions, you should be really clear about what they allow in projects they fund.
posted by misskaz at 7:36 AM on July 9


Back at the beginning of 2023, when everyone was chatGPT this and chatGPT that, one of my projects at work was to write some [boring stuff]. Writing the [boring stuff] meant taking info from finance people, sales people, and legal people and translating it into something clear and readable to anyone. I am very good at doing this.

Is it something that AI can do, too? Probably; that's a good use case that I've heard for it. I don't personally know, since I don't use those kinds of tools.

But anyway, all along this process I kid you not, every. single. person. I spoke with made some comment like "wow, you're even better than chatGPT!" And my response to that, every time, was "no shit." Yeah that's right, I said the swears.

Some version of this has played out repeatedly over the last few years: people pleasantly surprised to find that their apparently insurmountable problems can be solved by one reasonably bright and literate human faster, easier, and better than with an AI tool.

So in my experience you push back on this by being a person with a working human brain who makes the merits of working with an actual human person apparent. Be capable, have good humor, be better. (When you're very good at it you can even be a little rude. Another bonus that chatGPT is unlikely to deliver.)

There are plenty of applications that AI/machine learning are great at, and there are plenty that it's not so great at. Some good questions are: What safeguards are in place to reduce bias from the LLM's data source? Have [all affected teams] been able to provide input? How many humans have reviewed and approved this data?
posted by phunniemee at 7:42 AM on July 9


I think it's an uphill battle, so you'll need allies -- at least two more people. Identify an issue that management already finds concerning and upon which they are indecisive. Turn this into an issue that reinforces your position. Have your coalition independently voice its concerns to management, in a way that makes them think it was their idea. Repeat until you get your way or give up.

The challenge is that just about every VP/CxO in the world seems convinced this is the way to go, so you'll be competing with them for mind share. And if you jump ship you'd better be sure there is an idealist at the helm that agrees with you.
posted by credulous at 7:46 AM on July 9


Also, sorry for the double post, but given that it sounds like you are an editor or something like that, I wonder if you could use track changes to edit whatever AI-generated garbage you are given and compare it to the (I assume reduced) amount of editing a human-generated report requires, to showcase "hey, this might seem like it saves time, but it's actually worse to edit and make accurate and readable." Like, the literal full-markup version showing nothing but red lines and comments and stuff could be really visually compelling. Or time yourself and show that you have to double-check data so much more, because you can't trust that an LLM didn't just make it up, so it actually takes longer.

But I also need to emphasize that the political/social capital thing is huge, and I agree with Tomorrowful that you can't reason someone out of their obsessions easily. I have nearly 7 years at my current org and have built up a TON of political/social capital. Not to brag, but everyone loves me - I'm constantly asked to be on project teams and interview panels, and I get unsolicited feedback from colleagues about how helpful I am or that I'm a "good leader."

I am also the resident AI curmudgeon, but despite all that good will my colleagues have toward me, I'm still having a hard time getting my AI concerns any real attention or consideration. I express concern about the quality and potential bias of the output and hear "think of AI like an assistant and like any assistant you'd review their work." (My internal response to this is that over time a human assistant learns and becomes more competent and requires less and less oversight, but I have not yet found the right time/place to get into an argument with my CEO, who is the one who said that.)

I brought up the climate change angle, because we are a nonprofit with a mission and priorities that include helping people mitigate climate change, and the C-suite person I was talking to was like, "yeah, that's true... also I wish we had better recycling here." She just totally switched the topic. It's like her brain couldn't hold the two thoughts of "AI is good" and "AI is bad for the climate," so it just bounced off the latter to a similar but not really related topic.

If you're gonna broach the topic at all, you need to start subtly and carefully.
posted by misskaz at 7:50 AM on July 9


1. You might want to try Ask A Manager with this one?
2. I agree that it's at least somewhat likely that AI will come up at another job.
3. I have seen a LOT of articles about how ChatGPT/AI just recycles bullshit and doesn't come out with anything good. I don't know exactly what your job is, but I would suggest compiling articles that point out the problems with AI. Like, I'd look for every "literally getting the facts and data wrong" article out there. You're going to need to prove that AI causes more problems than it "solves" and isn't a cheap and easy way to replace humans.
posted by jenfullmoon at 8:16 AM on July 9


I am, at this exact second, looking at a screen in which Google's AI-assisted search tool is assuring me that the first person to do a backflip was John Backflip.

"But where did this tradition begin? It all started in medieval Europe. with the man named John Backflip. He was the first person to ever do a flip, and he first performed the stunt in 1316."

This isn't a "hallucination". This is "confidently repeating obvious bullshit, because the machine has no notion of accountability or truth".

If your job is research and data, then I'd start by saying that accountability and truth actually matter.
posted by mhoye at 9:50 AM on July 9


As others have said, document the fact that it was better before. It's about all the evidence you have.

And then, wait a year or so. Corporate fads come and go. Next year your management will have found something else shiny and reevaluation of AI will be on the table. Until then, grin and bear it.
posted by Tell Me No Lies at 10:37 AM on July 9


I’m not going to tell you all my specific job. My company is very fragmented, purposefully, and I don’t have access to a lot of details. I get my marching orders.

It does appear, from my limited perspective, that the company is being unethical in their use and implementation of the technology.

I have asked questions on the subject, I haven’t gotten answers. If my question is vague, it’s because I’m working on minimal information.

Tell Me No Lies, I think you’ve got it. The market is already shifting on AI. If I can’t hop ship straightaway, I’ll wait out another dumb corporate trend. Thanks to most of you 🫡
posted by meowmeowdream at 10:38 AM on July 9


Unfortunately this may depend on how much leverage you have personally, and how willing you are to spend it on this particular situation. In a past job (company fortunately defunct), I was able to get my boss to push back against the misleading use of synthetic data when customers might have thought it was real, but only because I had leverage, I knew the company was doomed, and I didn’t care anymore. And then salespeople went around us and did it anyway. Be sure to document your concerns in writing within the proper chain of command. (If merely putting things in writing in private communication is a problem because it’s legally discoverable, get the hell out ASAP.) Hedge anything potentially accusatory like “misleading” or “fraudulent” with e.g. “could be construed as”. Best of luck to you.
posted by mubba at 10:57 AM on July 9


I’m 100 percent anti-AI

This is really the only thing you can control or change. And I think that if you are starting from here, it's much, much more difficult to make an argument for better use of AI by the company. Folks who are anti-anything are much less likely to be considered reliable sources of arguments against particular uses of the thing. You'll have to at least try to pretend there are some decent uses of AI.

But also (and hear me out), as an experiment: why don't you go into some incognito window and ask some chatbot to make an argument against using AI. It'll probably give you something halfway decent.
posted by bluedaisy at 3:06 PM on July 9


I didn’t sign on to improve AI slop, I signed on to this job to work with people and improve their writing.

Unfortunately, a loooooooot of people in business environments think that one of the best uses of AI is to "improve their writing", so you won't get a lot of traction in any fight if that's even a part of the ground you're staking out. "I can just use AI" is a very seductive argument for a lot of folks, as far as writing goes, because AI is fast and basically effortless; copy/paste takes a lot less effort than thinking through a writing exercise. I think that's bad; many people do not, or at least do not care.

For the record, I'm on your side as regards AI; I have decided, however, that the smart play at my place of work, as they hurtle on full speed ahead in "solution in search of a problem" mode with it, is to sit on the sidelines and wait for the new shiny to distract them in a year or two. AI, in my work's context at least, is basically never going to amount to much more than party tricks; everyone's all gaga over it now, but where I am it will mostly be used to summarize long text documents, compose abstracts and minor things like that.

I get that you can't say where you work, but can you maybe give a generalized example of how your company is using AI to "generate research"? I'm not even sure how that'd be possible - I'm not saying it isn't, I'm just unfamiliar with how it would be, so even a little genericized example would help me wrap my head around it.
posted by pdb at 4:45 PM on July 9


I can completely understand your position. Unfortunately this may be an issue wherever you work at the moment.

I think that you would be best to pretend that you are enthusiastic about the potential of AI, but very diplomatically express your concerns. Name tangible risks and ways that you could try to mitigate them. Try to steer them in a direction where AI is used sensibly.
posted by kinddieserzeit at 12:00 AM on July 10


In your position I would start consistently referring to working with the AI as "macrodata refinement" and asking where the egg cart is at and when the next waffle party is due.

First they ignore you, then they laugh at you, then they fight you, then you win... but the bots are the "you" in this instance, and if they're to be stopped from winning, then you and as many coworkers as you can bring with you need to not move past the point of laughing at them.

Good managers will listen once the majority of their workforce is calling bullshit on obvious bullshit, especially if they're doing that just by trash-talking it as a matter of course; bad managers will double down. If your org's management is bad then you're wasting your time working with them and should move on. But it's going to take you at least another couple of months to find out which kind you have.
posted by flabdablet at 9:06 AM on July 10

