Attenzione! Look out for AI’s Shortcuts

Book review for Nello Cristianini’s “The Shortcut: Why Intelligent Machines Do Not Think Like Us”


References:

1 Cristianini, N. (2023). The Shortcut: Why Intelligent Machines Do Not Think Like Us (1st ed.). CRC Press.

2 Dadich, S. (2016, November). Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity. WIRED.


Disclaimer:

This book was read and reviewed by the G.M.S.C. Consulting team in line with our efforts to continually enrich our understanding of AI, as well as advocate for the safe and ethical use of the technology.

No compensation—monetary or non-monetary—was given in exchange for this book review.

How much can we trust artificial intelligence (AI)? As much as it supposedly resembles human intelligence, Nello Cristianini’s latest book reminds us to be wary of the different shortcuts we took to build the AI we know today.

This year, OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) and other large language models (LLMs) took the world by storm. They opened a Pandora’s box many were not familiar with: the possibility of using AI in our daily tasks.

Indeed, we don’t only ask these machines to do the heavy lifting. We also ask them to craft our reports and emails. People cannot help but argue about the consequences, such as robbing employees of their jobs or of the rights that protect their original work.

Rather than listing hypothetical ways AI will influence our future, we must first look back at its origin story.

In his latest book, The Shortcut: Why Intelligent Machines Do Not Think Like Us1 (originally published as “La Scorciatoia” in Italian), professor and renowned researcher Nello Cristianini wants readers to pay attention to the different shortcuts we took to build the AI we often encounter today.

How does a machine learn how to communicate?

Before Google became the go-to platform for quick and easy translation, we had to take language learning very seriously. Every language has its own grammar rules, vocabulary and syntax. Learning languages was a form of art, and back then, the more languages you knew by heart, the better your chances of success.

Right now, we take languages for granted. Instead of rigorously studying the ins and outs of the Japanese language, you can ask a machine to tell the waiter that you want more rice on your katsudon. We have now removed the mystical need to do the “hard work.”

Did machines learn languages the same way humans used to? Unlike us, they don’t need to rigorously study the foundational aspects of a language for hours on end. They operate using statistics and probabilities. Rather than being precise with their answers, machines give estimates. This is one of the shortcuts mentioned by Cristianini in his book.

But what about their need for massive data? Doesn’t this make them rely on “evidence-based reasoning”?

Think of data as their fuel. The more data we feed into these machines, the better they recognize patterns. Once an LLM has completed its training, it generates an answer word by word based on prediction, to resemble “human comprehension.” The same approach can be observed in “content recommendation systems,” which predict our preferences based on data scraped from our web usage.
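To make this statistical shortcut concrete, here is a toy sketch of our own (not an example from the book): a tiny bigram model that “learns” language purely by counting which word follows which in a corpus, then predicts the next word from those counts. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy sketch (not from the book): a bigram model treats language as
# statistics. It counts which word follows which in a tiny corpus,
# then "predicts" the next word from those counts.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # The statistically most frequent follower: an estimate, not understanding.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

An LLM does the same kind of thing at vastly greater scale, with long contexts and billions of parameters instead of a raw frequency table, but the shortcut is the same: prediction from data, not comprehension.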

Like statistics, both limitless data gathering and pattern recognition are shortcuts that made AI work.

How reliable are these machines?

Young children are typically asked to group words based on certain themes to build comprehension skills. For example, if the theme were fruits, we would put down words like apple, lemon, melon, peach and so on. Compared to the real world, this is a very simple scenario.

Machines are expected to perform pattern recognition using far more complex data sets. As much as we’d like to believe that these machines are flawless, they can also commit errors, dubbed “hallucinations,” since they rely heavily on statistical coincidence. Considering the three shortcuts we mentioned earlier, we also do not truly know all the “mechanisms” behind these machines, as their train of thought doesn’t match our way of reasoning.

This was exhibited by AlphaGo in 2015, when it found new ways to defeat its opponents that were incomprehensible even to expert human players. Instead of solely learning the rules of Go, AlphaGo’s knowledge relied on 30 million recorded matches and the 50 million matches it played against itself, an experience that would require more than a person’s lifetime. With this rich experience, it tweaked its moves over time (using “parameters”) to achieve the goal of defeating its adversary.
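The idea of “tweaking moves over time using parameters” can be pictured with a deliberately simplified sketch of our own, not AlphaGo’s actual training method: a player with a single tunable parameter, its preference for move A over move B, adjusts that number from nothing but win/loss feedback. All names and win rates below are invented for illustration.

```python
import random

random.seed(0)  # make the sketch reproducible

# Hypothetical sketch (not AlphaGo's actual algorithm): a player tunes one
# parameter, its preference for move A over move B, purely from win/loss
# feedback over many games, the way learning systems nudge their parameters.
WIN_RATE = {"A": 0.7, "B": 0.3}  # hidden truth the player must discover
LEARNING_RATE = 0.01

preference_for_A = 0.5  # probability of choosing move A; starts indifferent
for game in range(5000):
    move = "A" if random.random() < preference_for_A else "B"
    reward = 1.0 if random.random() < WIN_RATE[move] else 0.0
    # Nudge the preference toward whichever move just paid off.
    if move == "A":
        preference_for_A += LEARNING_RATE * (reward - preference_for_A)
    else:
        preference_for_A -= LEARNING_RATE * (reward - preference_for_A)

print(round(preference_for_A, 2))  # drifts well above 0.5: it "prefers" move A
```

Notice that no rule about why A is better is ever stored; the knowledge lives entirely in one number adjusted by experience. That is why such systems can play brilliantly while remaining opaque to us.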

Okay, but AlphaGo’s tale does not sound scary, right?

Imagine using AI on a wider scale, like Facebook. In his book, Cristianini cited a study about how digital footprints were collected and extracted to build digital profiles without users’ explicit consent. This poses a risk to society at large, as these profiles influence the way AI behaves and interacts with us.

Other issues we need to consider are biased data sets and algorithms. Both increase the chances of discrimination, disinformation, fraud attacks and polarization, which we’re already witnessing today.

Rather than shutting down AI (a joke popularized by former US president Barack Obama2), Cristianini encourages readers to set up guardrails such as regulation. We don’t necessarily need to shun AI or be Luddites. Rather, we need to find a way to co-exist with AI and not let it control us.

The Shortcut: Why Intelligent Machines Do Not Think Like Us is highly engaging from beginning to end. Rather than forcing us to pick sides as AI optimists or pessimists, Cristianini gives readers another perspective to consider when we talk about AI. He does this by pointing out the overarching impact of shortcuts like pattern recognition, which we tackled earlier.

Overall, we highly recommend this book: it is not too technical for general readers, nor does it give in to the AI hype we were led to believe in.

Do you want to learn more about what we think of AI?

Previous

Why should we make tech more human?

Next

Chatbots were not born overnight