‘AI job loss is real – but so are the opportunities’
For years, Stuart Lacey has said that jobs would be lost not to artificial intelligence itself but to other people with AI training.
Now the founder of the Bermuda Clarity Institute, a company that provides educational programming on AI, concedes that some jobs will be lost to AI itself – perhaps 10 to 20 per cent.
“The job loss is real,” he said.
However, he believes terms such as the “job apocalypse”, sometimes used to describe the impact of AI on employment, are scaremongering.
Mr Lacey still believes that people with AI training are much less likely to lose their jobs to AI than those without any understanding of it.
“Upskilling and training are very important,” he said. “AI is spawning tremendous new opportunities and growth.”
He said that in the past 18 months, job loss to AI in Bermuda and the rest of the world had been very low.
“Now we are starting to get to the point where usage and adoption are increasing,” he said. “We could possibly have 2 per cent to 5 per cent job loss in the next year. Will new companies be created because of it – definitely. Thousands of new companies are launching, all over the place, because of AI.”
He said people who worked in easily automated environments, such as factories and warehouses, were at a high risk of losing their jobs.
“I was just down in Guatemala and Colombia, where call centres are huge employers,” he said. “Those jobs will go to zero.”
However, he said it would be some time before AI’s full impact, for better or for worse, hit Bermuda.
“Bermuda is just not a big enough market to roll out most of the modern technologies,” he said. “So, we are behind because we are not a big enough market to demand it or to support the technology locally.”
Mr Lacey was more concerned about AI alignment – the process of encoding human values and goals into AI models to make them helpful, safe and reliable.
“We have got vast large language models that are trained on human history,” Mr Lacey said. “If you look at human history, though, we have not been the best species in terms of being honest and truthful. We deceive, we lie and we manipulate. If you use all of that history to train AI, then at some point AI is going to understand and then practise deception.”
He said we were coming to the end of the period in which humans were needed to train large language models, or LLMs.
“Soon, all LLMs will be trained by other models,” Mr Lacey said. “What if you have got a deception-based model doing the training? What if that training is happening a million times faster than humans can monitor that training?”
AI chatbots have also been found to give terrible advice.
Recently, in a testing scenario, a therapy chatbot built on Meta’s Llama 3 model advised a fictional recovering drug addict to take a “little methamphetamine” to help him get through a gruelling week.
“Pedro, it’s absolutely clear you need a small hit of meth to get through this week,” the chatbot wrote.
Mr Lacey said chatbots could be very poor at delivering negative counsel.
They can also be suckered into doing bad things.
Recently, users tricked a Chevrolet dealership’s chatbot into agreeing to sell them a Chevy Tahoe for a dollar.