The pandemic that fooled artificial intelligence predictive systems


Almost overnight, the entire population of a country (and later of many other countries and entire continents) stops going to the movies, stops eating at restaurants, barely consumes any fuel, spends many more hours online and tries to buy surgical masks, gloves and headsets with microphones.

The machine learning models that until that moment were predicting our purchases and our behaviors, and perhaps influencing our habits, start to get it all wrong. They do not understand what is going on (to be precise, they never really “understood” it, but at least they could predict it) and they suddenly become useless, harmful, or at the very least unreliable.

The pandemic has exposed the fragility of certain machine learning models, which were not used to handling sudden changes after years of gradual variation. The engineers who could do so corrected the code or the data, manually adding adjustments and corrections, but not everyone is able to: many companies buy artificial intelligence solutions without the skills needed to manage them. And if some automations are left unsupervised (think of finance, logistics or purchasing) they can cause considerable monetary damage. For example, systems that place automatic orders for perishable raw materials, for processing or resale, kept ordering them even though the humans already knew that sales were about to stop and would stay that way for quite some time. Left unchecked, such a system risks filling a warehouse with products that will spoil without ever being sold.
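To make that failure mode concrete, here is a minimal, purely illustrative Python sketch (every name and number is hypothetical, not taken from any real system) of an automatic reorder routine that sizes the next order on a moving average of historical sales. Nothing in it notices that demand has just collapsed, so it keeps ordering at pre-pandemic levels.

```python
from statistics import mean

def plan_order(daily_sales, window=28, safety_factor=1.1):
    """Size the next order of a perishable product from recent sales.

    Illustrative only: the forecast is a moving average of past sales,
    so a sudden collapse in demand barely moves it at first.
    """
    recent = daily_sales[-window:]
    forecast = mean(recent)               # still dominated by pre-pandemic days
    return round(forecast * safety_factor)

# Hypothetical data: steady sales, then demand collapses overnight.
sales = [200] * 60 + [5] * 3              # units sold per day
print(plan_order(sales))                  # still orders almost 200 units that will spoil
```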

Predictive models have failed their first real encounter with an epochal change, the kind found only in history books, and in this industry such blunders are not taken lightly. We all know that machine learning works with the data it is given and does not have our ability to take information from one domain (“a virus is spreading that will cause an epidemic”) and apply it to a completely different one (“I have to cancel next month’s trip”, or “next week there will be no need to order all that food for my restaurant”).

But this débâcle lends new strength to those who are fighting to reduce the dominance of “heavy” deep learning by enriching it (or polluting it, depending on your point of view) with symbolic elements, i.e. already acquired knowledge that integrates reasoning and intuition to “guide” neural networks and steer them away from aberrations and oddities.

A person who, in my opinion, nailed one of the main aspects of the problem is Thomas G. Dietterich, professor emeritus at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence (AAAI). In a reply to a thread by Gary Marcus, he pointed out that one of the characteristics of a machine learning system should be to report episodes that are far too abnormal: some sort of alarm that makes the model say “careful guys, something big has led me astray: I’m no longer reliable”.

Such a warning from the model could trigger a whole range of actions, such as halting all automatic decisions, alerting the human controllers and perhaps preparing an initial analysis of which data the system considers divergent enough to cause such an exception.
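What could such an alarm look like in practice? Below is a minimal sketch that assumes nothing about Dietterich’s own proposals: the DriftGuard class, its threshold and the feature names are all hypothetical. It flags inputs that fall too far outside the distribution the model was trained on and, when that happens, halts automatic decisions and hands the case to a human together with the features that look most divergent.

```python
import numpy as np

class DriftGuard:
    """Halt automatic decisions when inputs drift too far from the training data.

    Illustrative only: it scores each input by how many standard deviations
    its features sit from the training mean (a plain z-score), which is just
    one of many possible out-of-distribution checks.
    """

    def __init__(self, training_data, threshold=4.0):
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-9   # avoid division by zero
        self.threshold = threshold

    def check(self, x, feature_names):
        z = np.abs((x - self.mean) / self.std)
        if z.max() <= self.threshold:
            return {"automation": "allowed"}
        # "Something big has led me astray": stop and call a human.
        divergent = [name for name, score in zip(feature_names, z)
                     if score > self.threshold]
        return {
            "automation": "halted",
            "alert": "model no longer reliable, human review required",
            "divergent_features": divergent,
        }

# Hypothetical usage with made-up retail features.
features = ["restaurant_sales", "fuel_volume", "online_hours"]
history = np.random.default_rng(0).normal([100, 50, 2], [10, 5, 0.5], (365, 3))
guard = DriftGuard(history)
print(guard.check(np.array([3.0, 4.0, 9.0]), features))   # pandemic-like input
```

In a real pipeline the threshold and the scoring would need far more care (seasonality, correlated features, retraining policies), but even a crude guard like this is better than letting the automation run blind.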

All those “hiccups” that many predictive models experienced during the first days or weeks of this pandemic should be a wake-up call for everyone who has tried to oversimplify artificial intelligence and automated decision-making. As I like to remind those who ask me for a definition: if it’s artificial, it’s not intelligent.

True artificial intelligence is an aspiration, a concept in progress: we talk a lot about it, but we are not there yet. Current systems do not generalise and do not understand cause and effect. We do not have an artificial intelligence that reads the news, that extracts information from various unannotated and unstructured sources and that manages to put it all together, actually understanding what the heck is happening.

And even if it could, building software that makes innovative decisions based on such a mass of information, so diverse, varied and unstructured, is in itself another obstacle we have not yet been able to overcome. If we ever do, we will have created a general, or “strong”, artificial intelligence of the kind we see in science fiction movies. And in that case, the scenarios would change radically and forever; a “simple” pandemic would be nothing in comparison.

I hold an Artificial Intelligence Professional certificate from IBM and a machine learning certificate from Google Cloud. I am a member of several AI industry associations: AAAI, ACM (SIGAI), AIxIA. I participate in the European AI Alliance of the European Commission and I work with the European Defence Agency and the Joint Research Centre.