Some employees say that it’s easier to talk to ChatGPT than to senior colleagues. They say that there’s no point bothering to ask polite and well-formulated questions because ChatGPT always answers politely regardless. Stock photo: Joe Lambert/AP/NTB

Employees are using ChatGPT for advice, but is this a good thing?

Recent data on people’s habits in the workplace are giving us reason to look into whether the use of artificial intelligence (AI) may be hampering collaboration and knowledge sharing.

The countless useful applications offered by content-generating robots such as ChatGPT will probably be of great benefit to society. But where do we draw the line in the use of these applications at work? 

And what tasks in the workplace are we already using these robots for?

At SINTEF, we’ve been looking into this issue as part of a recent study completed together with the consultancy Knowit.

Both harmless…

We have conducted in-depth interviews with a small selection of people employed in enterprises working with technology and digitalisation. Two consistent findings have emerged from the responses.

The first is that active users of ChatGPT for the most part let the robot perform tasks that they regard as boring and repetitive.

We believe that this use of the robots is harmless and may even have positive outcomes. According to the employees themselves, the reduced workload allows them to perform more effectively. They also say that their motivation is boosted. They experience a more meaningful day at work because the number of demotivating tasks is reduced.

…and worrying

However, our second finding is that some employees are also asking the robot work-related questions. They say that it’s easier to talk to ChatGPT than to senior colleagues, and that there’s no point bothering to ask polite and well-formulated questions because ChatGPT always answers politely regardless.

You might think that this is also unproblematic.

However, we believe that this kind of application of robots gives cause for concern and that we should be on our guard. What happens to workplace collaboration and knowledge sharing when employees prefer to turn to a robot for advice rather than to their colleagues?

Reflecting American values

We believe that more research is needed into this issue, not least in the light of what Sven Størmer Thaulow, Director of Data and Technology at the media group Schibsted, wrote recently in the national daily Aftenposten.

ChatGPT is founded on American language models that in turn reflect the content on which they are trained. That is to say, texts and other data imbued with an underlying set of American values.

Working life in America is organised according to entirely different principles than in Norway. Is it really advice from sources such as these that we want our young employees in Norway to be hearing when they feel uncertain in their roles?

Breaking through writer’s block

Previous research has shown that the application of generative artificial intelligence in the workplace has its biggest impact on young and inexperienced employees. Our work, on the other hand, indicates that employees who work with technology use such tools regardless of their level of experience.

In our study, the ‘tasks’ delegated to ChatGPT were very wide-ranging.

Type One, or ‘Boring’, tasks ranged from repeatedly writing virtually identical texts and breaking through writer’s block to simply compiling documents. According to some of our respondents, work that used to take a whole day can now be completed in an hour or two.

Seeking out the robot in order to learn

Type Two tasks, involving ‘Questions we ask ChatGPT’, are linked to issues such as learning. One respondent had asked the robot: “Who is the best person in the world to teach me this? Pretend that you are this person and explain everything as if I were a five-year-old”.

Some employees use the robot because they don’t want to waste their colleagues’ valuable time.

Others have asked for help and have obtained it in response to questions such as: “I don’t even know what the right question is, so please give me a clue that will put me on the right track”. Several respondents reported that simply formulating a question can sometimes lead to a new understanding of the content of a problem.

Senior colleagues find it hurtful to be ignored

However, we believe that there are potential drawbacks to all of this. Research shows that being able to help colleagues who are experiencing problems boosts work satisfaction.

One of the managers we interviewed pointed out that it might be experienced as hurtful to hear that colleagues would rather seek advice from ChatGPT in connection with work-related problems.

It is exactly this issue that leads us to recommend acquiring research-based knowledge about the consequences of robots taking over aspects of the mentoring role that used to be filled by other colleagues, not least senior ones.

This article was first published in the financial daily Dagens Næringsliv on 4 September 2023 and is reproduced here with the permission of the paper.