Help me head off AI at my organisation
October 4, 2024 4:45 AM
I work in a mid-sized UK charitable organisation. The spectre of AI is beginning to haunt us. Help me exorcise it by providing arguments, resources and data.
So far we have neither taken on AI in any major sense, nor written a policy addressing our attitude towards it. There is currently one person preparing a report on the subject for the board, and there has been a small amount of discussion about it within the workforce, and within our union, which has a fairly good level of membership.
I'd like to write a report making the case against us using AI at all, or at most in very limited and highly scrutinised ways. This is directed primarily at my colleagues in the workforce/union, but hopefully it will also serve as something I can present to the person evaluating AI and/or higher level people in general.
I've tried to canvass opinions on this, and I know that, at the least, most of the people in more technical roles I talked to were moderately to strongly against it. I imagine that attitude exists to some extent across the organisation as well, but I've had a hard time prompting broader discussion, especially discussion that is explicitly critical rather than just experimental.
There are also people who see promise in the idea, but the articles being shared are generally quite vague puff pieces.
Overall, I think the situation is such that a well-written and sourced report could significantly move the needle in terms of how people think about AI in the organisation.
--------------------------------
Points I'd like to make and support include:
* AI is a problem for labor rights.
* AI is a problem for skill development.
* AI is approaching its limits in terms of functionality.
* The AI industry is approaching its limits in terms of investment.
* AI is a serious environmental problem.
* AI-generated content presents copyright issues.
* Using AI is incompatible with our stated 'values' as an organisation.
* AI is a problem for society and for human knowledge.
* AI is a reputational risk.
* We as a workforce must grapple with these issues now rather than waiting for the board to take the initiative.
If you'd like to make the argument why my approach is wrong, that would probably be useful as well (I promise not to argue).
Thank you!
posted by Corvinity at 4:50 AM on October 4
I use AI myself, both at work and at home. My colleagues who use it think of it as a helper, not as something that can be used without, at minimum, human oversight. For one example, it helps me troubleshoot code. You might not have anyone who codes at your job. But AI can be good as a step towards a finished product, such as an early draft or a final check.
I understand your concerns. But I think you would be more successful, and taken more seriously, if you were to pursue less-than-total limits on using AI. I think the tide is too strong.
posted by NotLost at 5:03 AM on October 4 [14 favorites]
It would be helpful if you were more specific about what guidelines you’re proposing. Do you want to forbid your employees from using Copilot to draft rote letters, prohibit AI artwork where a human illustrator could be paid, or avoid budgeting tools that use AI to assign spend categories? Just saying “I’m against this, categorically” is going to be a hard sell.
posted by chesty_a_arthur at 5:35 AM on October 4 [2 favorites]
Response by poster: To clarify, I guess my position would be along the lines of "We should only use technologies - including AI - after actively and seriously reckoning with the potential downsides and measuring them against the potential upsides."
What I'm looking for are reliable resources that explain the extent of the downsides I listed in my post, which so far have not received much attention in the discussion I've seen.
I know there are use cases where it can be useful, and I don't currently have a hard answer on what should and shouldn't be allowed, but if we as an organisation are going to use them then they should be able to pass the above test.
posted by Corvinity at 5:45 AM on October 4 [1 favorite]
Response by poster: Resources that seriously evaluate the upsides are also useful in this regard, if that helps.
posted by Corvinity at 6:01 AM on October 4
Response by poster: Apparently I haven't written a good question!
I'd rephrase it as:
Please provide any good, well-argued resources, articles, etc you've come across that reckon with the ethical and practical issues of using AI in the workplace, with a particular focus on the potential downsides, since that is the area that has not seen much discussion in my organisation.
posted by Corvinity at 6:05 AM on October 4 [1 favorite]
If you'd like to make the argument why my approach is wrong, that would probably be useful as well (I promise not to argue).
I'm reasonably neutral on the topic of AI, but just from reading your post: Most of your arguments are vague, and a portion of them are likely factually wrong. You're intermingling arguments about the existence of a technology broadly with arguments about the application of a technology in your specific workplace. And you're also struggling to honestly evaluate the merits of existing research and publications on the topic given your clear anti-AI bias.
posted by NotMyselfRightNow at 6:09 AM on October 4 [4 favorites]
Best answer: Here's a useful and short piece distinguishing the use of AI to write entire articles versus using it as a grammar checker.
(As a reviewer of submissions and an editor of articles, I now have to gatekeep regularly to keep AI-generated essays from lousing up our publication.)
I for one think you've outlined the different areas of concern quite well and I hope that other commenters can address each of them, God willing.
posted by rabia.elizabeth at 6:13 AM on October 4 [5 favorites]
Best answer: AI is a serious environmental problem
"OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. ...And it’s not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity." [nature, mentions the Artificial Intelligence Environmental Impacts Act of 2024]
https://www.pbs.org/newshour/show/the-big-environmental-costs-of-rising-demand-for-big-data-to-power-the-internet
posted by HearHere at 6:20 AM on October 4 [7 favorites]
"OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. ...And it’s not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity." [nature, mentions the Artificial Intelligence Environmental Impacts Act of 2024]
https://www.pbs.org/newshour/show/the-big-environmental-costs-of-rising-demand-for-big-data-to-power-the-internet
posted by HearHere at 6:20 AM on October 4 [7 favorites]
Best answer: The field of AI is incredibly broad, and by making blanket statements that cover all of it you're possibly limiting your organization's effectiveness. Be specific about what forms of AI you're against and address why they don't align with your organization's goals and values. Similarly, be specific about what forms of AI are OK, how they may help the organization, and in what way.
posted by Runes at 6:21 AM on October 4 [1 favorite]
Best answer: I work in a mid-sized UK charitable organisation.
No specific recommendation for a source - but something you might want to consider would be to look at any published mission/vision/values statements from your company. If these are well written, they should address how the company is set up to achieve its goals and how it will select, motivate and develop its employees to those ends. Any policy statement you end up adopting should either fit with existing goals/values - or the corporate goals and values should be updated to match it.
posted by rongorongo at 6:24 AM on October 4
Best answer: Two Computer Scientists Debunk A.I. Hype - from Adam Conover - is a good recent primer for skeptics.
posted by rongorongo at 6:32 AM on October 4 [3 favorites]
Best answer: I don't have specifics to offer, but here are a couple of links that might be good starting points:
* https://www.nist.gov/aisi
* https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
posted by NotLost at 6:34 AM on October 4
AI is here and is going to be adopted. Rather than waste your time fighting against The New Big Thing I would suggest you instead stick to corralling it to where it is useful.
I understand your philosophical differences with people adopting it, but based on 30 years of working in business environments I firmly believe that all you can do is weather these fads and try to minimize the damage until the next thing comes along.
posted by Tell Me No Lies at 6:51 AM on October 4
Super interesting related article: How AI Detection Software Turns Professors into Cops, Tech as Systems of World-Making, and More, which addresses many of the issues you raise.
The professor interviewed in the article has shared publicly the AI policy she wrote for her class. In writing your own policy, the text itself may be useful to you, as may her sources.
posted by hydropsyche at 7:06 AM on October 4 [5 favorites]
You'll find a lot (probably way too much) going through recent posts of Ed Zitron's newsletter.
posted by General Malaise at 7:18 AM on October 4 [3 favorites]
There’s a Problem With AI Programming Assistants: They’re Inserting Far More Errors Into Code
AI tools may actually create more work for coders, not less
The upshot of this is similar to what people upthread are saying:
* It's potentially a helpful thing, more as a partner or aid to existing staff than as a way to replace them.
* It needs careful quality control, especially for anything public facing or mission critical.
* It can easily be used in a way that will make the organization look foolish or insensitive without the proper judgement and oversight.
* It is a huge fad in the IT world right now, so beware of overhype.
* The whole purpose of giant IT companies is to make money off of you, so beware of free/cheap come-ons that gradually make you dependent and then charge up the wazoo in one way or another once your infrastructure and business model depend on them and you have no other way to operate.
* Generally expect prices to increase dramatically (whether in dollars or in intangibles like poisoning of the well, infringement on privacy, and incessant advertising, as users of social media have seen) once this early Cinderella period is over and the marketplace has a few shakeouts and becomes a giant monopoly or oligopoly.
(For similar previous examples see e.g. social media, blockchain, bitcoin, the entire internet, etc etc etc.)
posted by flug at 7:25 AM on October 4 [3 favorites]
What are the use cases here? I mean, there is no question there are places where it is or can be useful, but equally places where it's not. Case in point: one of our developers suggested that using it to summarise some of our API documentation might aid those integrating with our platform. They proceeded to demonstrate this by asking it questions along the lines of "how do I do foo?"
Of course, and you saw this coming, the answer was wrong. Subtly so: it provided parameters that were not in the original documentation and wouldn't work, and that would instead lead to frustrating debugging sessions.
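To make that failure mode concrete, here's a minimal hypothetical sketch - the endpoint and parameter names are invented for illustration and aren't from any real API:

    import requests

    BASE = "https://api.example.com/v1"  # hypothetical API

    # What the documentation actually specifies:
    #   GET /v1/widgets?status=active&limit=50
    documented = requests.get(f"{BASE}/widgets",
                              params={"status": "active", "limit": 50})

    # What the AI summary confidently suggested:
    #   GET /v1/widgets?state=active&page_size=50
    # "state" and "page_size" look plausible but aren't documented; many
    # servers silently ignore unknown parameters, so this returns 200 with
    # the wrong results instead of failing loudly.
    hallucinated = requests.get(f"{BASE}/widgets",
                                params={"state": "active", "page_size": 50})

Both calls "succeed", which is exactly why it burns debugging time: nothing errors, the output just quietly stops matching the documentation.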
Here's my own take on it, which I haven't had the time to transcribe into a blog post yet. Although I question the value in doing that if it's just then going to be summarised again by AI and all the nuance and idiom stripped out?
posted by lawrencium at 7:29 AM on October 4 [1 favorite]
Just in case it matters to you, it’s rather easy to figure out where you work from the four organisation values you’ve stated.
You might consider asking a mod to remove them if this is a concern for you.
posted by Ted Maul at 7:44 AM on October 4 [2 favorites]
Response by poster: (I won't keep marking 'best answers' as there'd be too many - but thanks, all, great stuff so far. And thanks for the tip, Ted - post edited!)
posted by Corvinity at 10:51 AM on October 4 [1 favorite]
This sounds like my work.
I have been trying to get my office to buy THE INTELLIGENCE ILLUSION
A practical guide to the business risks of Generative AI
The book appears to be pragmatic, which means critical. Its aim is to fully evaluate and explain, with actual references and recommendations, the following questions about AI:
- What do these systems actually do well?
- How are they currently broken?
- How likely are those flaws to be fixed?
- Are there ways of benefiting from what they do well and avoiding what they do badly?
- Are these AI chatbot things as clever as they say they are?
- Can we use it, productively and safely?
- Should we avoid it until it improves?
posted by zenon at 11:18 AM on October 4 [4 favorites]
Arguing against the emergence of AI is like arguing against the emergence of computers. Yes, they took peoples' jobs. Yes, they are an environmental disaster. Yes, they perpetuate poverty and child labor for the most vulnerable parts of the supply chain. Nevertheless they are here, and the choice of whether to use them to raise money for Palestine or to use them to distribute child pornography is not something you can effectively control on a corporate level because corporations are collections of humans and humans are erratic.
AI is great at so many things and does a lot of things really well. Siri uses AI. Autocomplete uses AI. Apps describing visual things to blind people use AI. I spent an hour yesterday outlining a presentation on space careers for career day for kids, and I am here to tell you I would not have thought about Food Science had it not suggested it. (I threw out four other suggestions that were terrible, but I'm the human and that is my job in that relationship.)
You are probably not against AI any more than you are against computers. You are probably very opposed to bad applications of AI with terrible outcomes. So TL;DR: if you want to make an argument that doesn't make you sound like a completely dismissable 21st Century Luddite, you should consider policy on use cases rather than technology, IMHO.
posted by DarlingBri at 6:07 AM on October 6 [1 favorite]
This is an excellent primer: What’s machine learning (ML)? Or artificial intelligence (AI)? Or a Large Language Model (LLM)?
posted by ursus_comiter at 9:07 AM on October 8
Meredith Whittaker is an excellent person to follow for cogent critiques of the surveillance industry, which includes all the big players in LLMs.
posted by ursus_comiter at 9:20 AM on October 8
The Human Cost of our AI Driven Future
"The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen."
posted by ursus_comiter at 9:21 AM on October 8
"The horror of his work reached a devastating peak when Abrha came across his cousin’s body while moderating content. It was a brutal reminder of the very real and personal stakes of the conflict he was being forced to witness daily through a computer screen."
posted by ursus_comiter at 9:21 AM on October 8
Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
posted by ursus_comiter at 9:24 AM on October 8
Project Analyzing Human Language Usage Shuts Down Because ‘Generative AI Has Polluted the Data’ [Archived]
Statement on GitHub
posted by ursus_comiter at 9:31 AM on October 8