Significance
As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.
Abstract
Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.
I don’t think that people who use AI tools are idiots. I think that some of my coworkers are idiots and their use of AI has just solidified that belief. They keep pasting AI results to nuanced questions and not validating the response themselves.
I find this kind of work very important when talking about AI adoption.
I’ve been generating the (boring) parts of work documents via AI, and even though I put a lot of thought into my prompts and reviewed and adjusted the output each time, I kept wondering whether people would notice the AI parts, and whether that made me look more efficient and ‘complete’ (we’re talking about a template document where some parts seem designed to be repetitive), or lazy and disrespectful. Because my own trust in the content, and in the person, certainly drops when I notice auto-generated parts, which in turn prompts me to use AI myself and ask it to summarise all that verbose AI-generated content. I’m not sure that’s how decoder-encoders are meant to work :)
This apparent tension between AI’s documented benefits
That is one hell of an assumption to make, that AI is actually a benefit at work, or even a documented one, especially compared to a professional in the same job doing the work themselves.
I think it’s honestly pretty undeniable that AI can be a massive help in the workplace. Not in all jobs, sure, but using it to automate toil is incredibly useful.
A benefit of AI is that it’s faster than a human. On the other hand, it can be wrong.
A quick Internet search will provide a good bit of the “AI benefits at work” documentation you seek. 🤷♂️