Decision time: beware the hype around generative artificial intelligence
03 May 2023
Understandably, the fact that the latest version of ChatGPT can pass legal examinations, help design new drugs, invent games – and much more – in mere seconds has drawn hordes of business leaders to generative artificial intelligence.
However, as wondrous and appealing as this new strand of AI is, company decision-makers must ask themselves: do we really need it? Or is it just a helpful toy rather than a game-changing business tool? Answering those questions means understanding what performance gains AI chatbots can actually deliver.
While leaders are seemingly spellbound, using AI just for the sake of it is counter-productive and costly. Further, every test, use and purchase of generative AI solutions feeds a beast that is likely to grow bigger than humanity itself.
Experts reckon so-called artificial general intelligence (AGI) – basically, a computer system capable of generating new scientific knowledge and completing all human tasks – is closer than ever. Some think it could happen within a decade.
For now, many people are happy using freemium versions of generative AI tools. But nothing in business is free. The prompts and results they generate will help companies such as OpenAI, the maker of ChatGPT, determine which use cases and applications businesses will pay for, and refine their capabilities accordingly.
Meanwhile, the long shadow of AGI looms large. Since ChatGPT was made publicly available in November 2022, a torrent of investment and talent has poured into AGI research. We have jumped from a single AGI startup, DeepMind, attracting $23 million in funding in 2012, to eight or more organisations in this space raising a cumulative $20 billion of investment this year.
It’s worth bearing in mind that, alongside private investors, nation-states are participating in this race, too. After all, improvements in AI help better serve civilians and the military. It is also important to realise that if an AI achieves superhuman performance at writing software, it could, for example, develop cyber weapons. Indeed, three years ago, an AI programme defeated an experienced United States military pilot in a simulated dogfight.
Pressing pause on progress
Little wonder, then, that in late March, Elon Musk and almost 4,000 high-profile signatories, including engineers from Amazon, DeepMind, Google, Meta and Microsoft, called in an open letter for a pause in the giddy acceleration of generative AI.
It read: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The letter continued: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
We should all take notice when the smartest human – not machine – brains are demanding that progress be halted. But the fear is that the bot has already bolted. Given the competitive advantage on offer if rivals snub such AI tools, will business leaders find the temptation to keep pushing the technology beyond its current limits too great to resist?
Ultimately, as technologist James Bridle warned in a recent essay published by The Guardian, AI “is inherently stupid.” He wrote: “It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever.”
Bridle called for business leaders to take an ambivalent view of ChatGPT and of AI in general. To believe it was “knowledgeable or meaningful [was] actively dangerous,” he added. “It risks poisoning the well of collective thought, and of our ability to think at all.”
His point was that relying on AI to work its magic could tempt organisations into taking shortcuts and, worse, shunt humans out of the control seat. Even where a business leader decides there is a case for using and funding AI projects, their outputs will always need a degree of human filtering.
Keeping humans in charge
AI could provide deeper insights into a company’s portfolio, its customers, consumer habits and so on. But when it comes to decisions about which product to launch, for instance, leaders have to make the final call. It’s a bit like using Waze on a car journey: there might be a choice of two routes, and it’s down to the driver to decide which one to take.
Steering this analogy further, Professor Erik Brynjolfsson, Director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI, pointed to Waymo’s experiments with self-driving vehicles. “It works 99.9% of the time, but there is a human safety driver overseeing the system and a second safety driver in case the first one falls asleep,” he said in a recent interview. “People watching each other is not the path to driverless cars.”
Alternatively, Toyota Research Institute has “flipped it around”, said Brynjolfsson, allowing the human to be in the driving seat, making the decisions, and the AI to “act as a guardian angel”, only intervening when unseen or missed danger lurks. “I think this is a good model, not just for self-driving, but for many other applications where humans and machines work together,” added Brynjolfsson.
I agree with this analysis. The human must remain in the driver’s seat or the pilot’s chair. And in a business setting, the leader must stay central to the decision-making process, informed by AI that offers visibility beyond what the human eye can achieve.