AI diplomacy in a digital age
Like a lot of us, Stewart Dresner gets frustrated dealing with chatbots.
The chief executive of Privacy Laws & Business told the 45th Global Privacy Assembly: “One of the confusing things is that sometimes when you are asking questions you do not know whether you are dealing with a chatbot or a human.”
During a question-and-answer session in the panel AI and Society: Opportunities and Challenges for Responsible AI Development, Mr Dresner said knowing whether he was dealing with a human or a chatbot could lead him to take a different approach.
“If you knew it was a human, you might say, ‘that’s not right’, or ‘I want to do something else’,” he said.
Panel moderator Cecilia Alvarez, Meta’s privacy policy engagement director, said many jurisdictions were now labelling chatbot interactions, or considering doing so.
Sometimes this is done through a change of colour in text or background, to show that something different is happening.
But there are some indications that labelling chatbot interactions causes humans to misbehave.
“We have different expectations when it is a human and when it is a machine,” Ms Alvarez said. “Sometimes those expectations can go in a different direction.”
Studies have found that humans are actually less polite when dealing with a chatbot or machine. A 2016 article in the Harvard Business Review by Michael Schrage suggested that 10 to 50 per cent of interactions with bots such as Siri were abusive and vulgar.
“People are more nasty because they do not feel the consequences,” Ms Alvarez said.
Another panellist, Monika Tomczak-Gorlikowska, the chief privacy officer of Prosus Group, said this rudeness to machines was not theoretical.
“One of our education technology companies built this virtual tutor using generative AI technology,” she said. “One of the things they looked at was how the learner reacted to the virtual tutor.”
Ms Alvarez joked that the chatbot could be trained to cut off the interaction when people did not say “please”.
Rude or polite responses from humans usually do not elicit a different response from chatbots such as ChatGPT. However, there are concerns that mistreating chatbots could make us more abusive towards other humans.
On the topic, one X platform user, Kitze, joked: “Be nice to ChatGPT and always say thanks. You don't want your Roomba stabbing you in your sleep.”
There are also concerns that rude interactions from humans could train chatbot algorithms in a negative way.
Teki Akuetteh, founder and executive director of the Africa Digital Rights Hub, said we needed to be transparent about what goes into training artificial intelligence models.
“We also need to be transparent about how the technology is [used] after we have use cases,” she said. “We know that the AI tools continue to develop and improve themselves even at that point.”
She said AI developers needed to be clear about how apps such as ChatGPT use and learn from the prompts and information given to them by users.
“We need to be transparent about the risks associated with the information that is being collected and where the information is coming from to train the models, and the foreseeable risks,” she said.
Ms Alvarez said it was important to talk about responsible AI use and development, but also about its benefits.
“The cost of not developing these benefits is something of which you need to be very conscious,” Ms Alvarez said.
The 45th GPA was held at the Hamilton Princess & Beach Club and hosted privacy experts from around the world.