AI misinformation now perceived as a top risk to business
Artificial intelligence is turbocharging misinformation that is threatening democracy, according to new research from Axa XL.
The latest Axa XL Future Risks Report lists AI misinformation as the fourth biggest perceived risk in the business world, after climate change, geopolitical unrest and cybersecurity.
Local AI educator and proponent Aaron Smith agrees there is cause for concern.
He told The Royal Gazette: “Deepfakes are rampant right now, and most people have a hard time telling the difference between an AI-generated conversation and the real thing.”
He warned that people could become “confident idiots” if they take everything AI spits out as gospel.
“Some people have this idea that all of a sudden everyone has infinite knowledge with AI,” said Mr Smith, who facilitates AI courses out of the Bermuda Clarity Institute. “It is important to scrutinise the data that comes out of it.”
In Axa XL’s report, David Colon, author of the book The Information War, is quoted: “Strategic use of fake news aims to undermine democratic regimes from within by amplifying divisions, encouraging mistrust, and eroding the ability of citizens and organisations to differentiate what’s true from what’s false.”
As an example, as the United States geared up for a presidential election earlier this year, thousands of New Hampshire Democratic voters received a call ahead of the state primary, urging them to stay home rather than vote.
It seemed to come from President Joe Biden, but was actually a deepfake.
Governments worldwide are attempting to understand how best to respond to the problem.
However, 82 per cent of experts surveyed by Axa XL thought AI itself could be used to combat the fakery. Advanced AI-driven systems can analyse patterns, language use and context to aid in content moderation, fact-checking and the detection of false information.
“People are using AI for nefarious reasons to trick people into doing things they would not otherwise do,” Mr Smith said. “There is usually a monetary element to it.”
Mr Smith was concerned about the lack of transparency around the generation of AI language models.
“Many people do not appreciate how much the information they are using and contributing to can be used against them and taken without their knowledge.”
He said the real benefit to an AI app developer could be access to the user’s data, contributed willingly or unwillingly.
He urged people to do their research before purchasing and downloading AI applications.
The Personal Information Protection Act comes into effect in Bermuda in January, giving people some control over the information that companies and organisations hold on them.
“I am not certain if Pipa would be adequate to address AI privacy concerns in Bermuda other than to set standards on how local companies and organisations must treat personal and private data,” he said.
Mr Smith thought training and education around AI misinformation was a way to fight it.
“That way, people will understand what the potential dangers are and how to avoid them,” he said. “It is no longer a question of whether people are using AI in their organisations, but whether they are using it properly and in a way that keeps data secure.”
Some people worry that AI will become so powerful it will take over from humans. Mr Smith was sceptical.
“I have a problem with that idea, because ultimately we are the ones feeding the data and information to it,” he said.