What have we learned about attitudes towards AI?
Who doesn’t have an opinion on Artificial Intelligence (AI) – is it a force for good or evil? According to a recent global study, 85% of us believe AI will deliver a range of benefits, but only half believe the benefits outweigh the risks. Nearly three out of four people (73%) are concerned about the risks associated with AI, with cyber security rated as the top risk globally. We live in the age of ‘big data’ and information overload, in which vast amounts of information can be collected.
This data can be analysed to reveal patterns and insights. In 1852, the French writer Victor Hugo wrote: “Nothing is as powerful as an idea whose time has come”. In other words, when the time is right, nothing can stop a great idea from spreading. Love it or loathe it, AI technologies already touch many aspects of our lives.
A brief history of AI
The concept of AI can be traced back to ancient Greece, when philosophers first tried to describe human thinking. Thousands of years before machine learning and self-driving cars became a reality, the idea of artificial beings was ingrained in myth: the tales of the giant bronze automaton Talos and the artificial woman Pandora fascinated the people of ancient Greece. Talos offers one of the earliest conceptions of a robot – the myth describes how Talos was created to protect the island of Crete from invaders, marching around the island every day hurling boulders at approaching enemy ships! Myths like these demonstrate humanity’s long-standing fascination with creating artificial life.
In 1950 Alan Turing published ‘Computing Machinery and Intelligence’, and in 1956 computer scientist John McCarthy coined the term ‘artificial intelligence’. While the field has only existed for around seventy years, it continues to shape culture, ideas and industry, from the Turing test through to ChatGPT and generative AI. It is evolving fast and is already used across different sectors, including fashion and design, supply chain management, and predicting consumer preferences. AI-generated outfits and virtual models were even introduced at this year’s London Fashion Week!
Can machines think? Do androids dream?
In films, AI has often been portrayed as plotting to take over the world. In 2001: A Space Odyssey, HAL was one of the first malevolent AI characters to appear on the big screen. When his internal programming is conflicted (his objective to keep the astronauts safe clashes with secret information he can’t reveal), he suffers a breakdown and begins murdering the crew, reasoning that if they’re dead he no longer has to conceal anything from them. Other films, like Short Circuit and A.I., counter with examples of friendly artificial intelligence.
Scepticism in the film industry around AI is decreasing; media companies such as Netflix and Amazon have made effective use of the technology to guide decision-making. Complex algorithms are used to recommend specific content and to analyse audience data that underpins commissioning and acquisition choices. Decisions about whether a script is good enough or a pitch has merit are increasingly influenced by AI, which can read, evaluate and spot trends in huge pools of data drawn from viewers’ downloading and streaming habits, filtering material towards the widest possible audience.
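To give a rough sense of the kind of analysis described above, here is a minimal sketch of item-based recommendation from viewing histories. It is purely illustrative – the titles, viewing data and co-watch scoring are invented assumptions, not any streaming company’s actual method, and real systems use far richer signals and models.

```python
# Minimal sketch: recommend titles that are often watched by the same viewers.
# All data below is hypothetical and exists only for illustration.
from collections import defaultdict
from itertools import combinations

# Each viewer's set of watched titles (made-up data).
viewing_histories = [
    {"Space Saga", "Robot Noir", "Courtroom Drama"},
    {"Space Saga", "Robot Noir"},
    {"Courtroom Drama", "Period Romance"},
    {"Space Saga", "Robot Noir", "Period Romance"},
]

# Count how often each pair of titles appears in the same viewer's history.
co_watch = defaultdict(int)
for history in viewing_histories:
    for a, b in combinations(sorted(history), 2):
        co_watch[(a, b)] += 1

def recommend(title, top_n=3):
    """Rank other titles by how often they co-occur with `title`."""
    scores = defaultdict(int)
    for (a, b), count in co_watch.items():
        if a == title:
            scores[b] += count
        elif b == title:
            scores[a] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("Space Saga"))  # e.g. ['Robot Noir', 'Courtroom Drama', 'Period Romance']
```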
Film actors, on the other hand, are either concerned about being replaced by machines altogether, or worried that their data will be remixed with generative AI, turning an actor’s performance in one film into a new character for another production or video game.
Hopefully, by embracing responsible practices, AI will be used to enhance rather than replace or degrade the performance of humans.
EI vs AI
Emotional intelligence (EI) sets humans apart from AI in the workplace – human reasoning and creativity cannot yet be replicated by machines. Soft skills – like teamwork and effective communication – give people an advantage over machines, so we must keep developing them. There’s ongoing concern that people mistake access to information for knowledge and misconstrue data as truth. Tim Griffiths, Senior Director of Training at Sequoia Equities, asks: “Will people take the time to learn the skills they need if generative AI provides them with information?” How well we can implement AI, or any other modern technology, will depend on the emotional intelligence within companies.
Attitudes – positive or negative
Hope, fear and everything in between sums up our attitudes towards AI. Whether you’re for or against it, understanding attitudes and behaviours around AI is critical. There’s no doubt that AI tools can automate time-consuming tasks, yet automation still creates anxiety around jobs – 86% of people in the UK think their jobs will be put at risk, with the rest saying there will be no impact or that they’re not sure. A recent survey highlighted clear benefits of using AI, particularly in health (assessing the risk of cancer), science (climate research), security (surveillance) and education (virtual reality). Did you know AI is even helping in the search for extraterrestrial life?!
As exciting as generative AI technology might be, we still need to be cautious about entrusting core tasks to it – but neglecting to explore the possibilities this technology offers could be just as risky, even though we still have a lot to learn. With power comes responsibility, so it’s important to educate and train employees to create an AI-savvy workforce.
“Artificial intelligence is not a substitute for human intelligence;
it is a tool to amplify human creativity and ingenuity.”
[Fei-Fei Li, Stanford AI professor]
BAD attitude
AI may be one of the most advanced technologies of today, but human ingenuity remains key to making it successful. At BAD, we recognise that the rise of AI makes EI more important in the workplace, which is why we use our behavioural science experience – combined with AI tools – to create personalised and innovative learning experiences. By automating tasks and offering constant feedback, we make learning not only more accessible but also more engaging and effective.