AI’s “Gut Feeling”: Should Society Trust It?


Imad Aad
C4DT
April 2024

Inspired by the panel session mentioned hereunder, this write-up very informally summarizes my thoughts about transparency and trust in AI. It is aimed at readers of all backgrounds, including those who have had little or no exposure to AI so far.

While preparing this year’s “AI House” panel session on “Transparency in Artificial Intelligence” – one of the many exciting events that took place during the World Economic Forum Annual Meeting in Davos – I had various thoughts about where transparency applies and what the risks are for our society. Transparency isn’t simply a technical challenge, but a multi-faceted endeavor that touches on governance, equity, and the ability of AI to serve humanity in a manner that is understandable and fair. In this blog post, I would like to share these thoughts in more detail.

The need for transparency

In our daily interactions, transparency is often desired but not always provided. When I donate money to a charity, I’m naturally curious about how my contribution is put to use. The same principle applies to the payment of taxes: I would ideally also like to see governmental transparency in the handling of public funds. Naturally, when a given organization collects and uses my personal information (e.g., my location data), it is crucial for me to be informed about the intent behind the data collection and the specifics of its processing.

More generally, the higher the level of transparency, the better the chances that an organization inspires trust among its users. This reasoning is also at the core of the EU’s AI Act (passed last month), where the required transparency level of AI-based applications follows their level of risk (unacceptable, high, limited, or minimal).

Before diving into transparency in AI, here is a short description of how AI works, for the non-savvy reader.

Basics of AI

An AI system is a “program” or an application generally written by computer engineers. The code of the program can be made open to the public, i.e., “open-source”, or it can be kept “closed-source,” with only the application itself being published.

To “train” this AI program, the engineers use large amounts of data. For instance, in the case of an HR application used to pre-filter applicants’ CVs, thousands of past CVs are fed into the AI system, along with their corresponding past human decisions (e.g., this CV has been rejected… that CV has been accepted… etc.), and how these job candidates actually performed. This is called the “training data.”

Once trained, the AI system can be deployed and starts receiving new, unseen data (e.g., new CVs). Based on the new entries and on the patterns learned from the training data, the AI system outputs results, or decisions (e.g., “accept this CV” or “reject that CV”).
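
To make this concrete, here is a minimal, purely illustrative sketch in Python (using the scikit-learn library). Everything in it is hypothetical: a real HR system would use far richer data than the two made-up attributes (years of experience and a test score) shown here.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical "training data": each past CV reduced to two attributes
    # (years of experience, test score), plus the corresponding human decision.
    past_cvs = [
        [1, 55],
        [2, 60],
        [5, 78],
        [8, 85],
        [10, 90],
    ]
    past_decisions = ["reject", "reject", "accept", "accept", "accept"]

    # "Training": the program learns patterns linking the attributes to the decisions.
    model = DecisionTreeClassifier(max_depth=2)
    model.fit(past_cvs, past_decisions)

    # "Deployment": a new, unseen CV arrives and the system outputs a decision.
    new_cv = [[4, 70]]
    print(model.predict(new_cv))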

AI systems can be used in many ways, from recommending products to consumers based on previous purchases and behaviors, to evaluating job applicants’ CVs, as described above. They can also be used in far less ethical ways, such as targeting specific buildings for bombing based on historical information and real-time observations. All these examples involve different use-cases, different training data, and different outputs/decisions, but the basic idea and components are the same:

  • The data: for training
  • The code: for analyzing and learning
  • The output: for deciding, recommending, or predicting.

With these 3 components in mind, let’s get back to transparency. A fully transparent AI system involves:

  • Open data: where everyone can see which data was used for training the system. Anyone can check whether the data is biased (e.g., discrimination against specific profiles in the CVs), contains fake news, etc.
  • Open source: where everyone can check how the “AI program’s logic” works. Here again, one can check against biases, discrimination, or simply for wrong decision processes.
  • Explainability of the output: where the logic behind the output of the AI system can be explained. An explanation can be given for why exactly a given CV was rejected, and which major “factors” (commonly known as “attributes”), e.g., the candidate’s age, gender, or school grades, or which combination of factors, drove the decision.

The following sections will discuss these three components in detail.

Open data

Open data provides the advantage that any interested party (e.g., researchers) can check how good the data is, i.e., whether it is biased (e.g., previous decisions were made based on gender, skin color, religion, etc.), fake, statistically valid, and so on. Such openness, and the ability to check it, strengthens trust in the AI system. It further helps the “reproducibility” of specific tests and results.
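
As an illustration of the kind of check that open data makes possible, here is a small hypothetical Python sketch that computes acceptance rates per gender in a published training set. The file name and column names are assumptions, and a real audit would of course require proper statistical analysis rather than a simple ratio.

    import csv
    from collections import defaultdict

    # Hypothetical open training data: one row per past CV,
    # with a "gender" column and the past "decision" column.
    counts = defaultdict(lambda: {"accept": 0, "total": 0})

    with open("open_training_data.csv", newline="") as f:
        for row in csv.DictReader(f):
            group = counts[row["gender"]]
            group["total"] += 1
            if row["decision"] == "accept":
                group["accept"] += 1

    # A large gap between groups is a first hint of bias in the training data.
    for gender, c in counts.items():
        print(gender, c["accept"] / c["total"])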

However, opening access to the data may not be an economically viable practice for companies. They may have invested significantly in gathering such training data, or it may contain intellectual property. Granting access to this data risks giving competitors valuable insights or free data for training their own systems.

Furthermore, open data may reveal the weak points impacting the decisions, allowing users/hackers to misuse the system to their advantage (e.g., identifying and exploiting inputs to deliberately produce false positives, false negatives, or false claims).

Open data or not, companies should exercise good governance over the data they use. Car manufacturers, for instance, maintain precise bills of materials (BoMs) listing their providers for all components used in their cars, and possibly their providers’ own providers. In a similar way, some software providers maintain Software Bills of Materials (SBoMs), where they list the providers of their software components (or libraries) and possibly the transitive ones. If a software vulnerability gets published, the SBoM helps the software provider quickly identify whether their software is affected or not (this is also called “supply chain security,” described in my last blog post). SBoMs are not easy to maintain due to the very wide and very dynamic nature of software systems. This is why, even though SBoMs are a good practice, they are not widely used.

Companies building AI systems should also maintain “Data BoMs,” listing all the data sources they use for training their systems. Just as software vulnerabilities get published, if a data source is shown to be bad, biased, racist, etc., the “Data BoM” helps identify whether the AI system relies on that data source, which in theory can then be removed and the AI system retrained. That’s good data governance.
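
There is no established standard for such “Data BoMs” yet, so the following sketch is only an illustration of the idea: a hypothetical, hand-maintained inventory of training-data sources, and a check to run when one of them is publicly flagged as problematic.

    # Hypothetical Data BoM: an inventory of training-data sources,
    # including where they came from and when they were collected.
    data_bom = [
        {"name": "internal_cv_archive", "origin": "in-house HR system", "collected": "2010-2023"},
        {"name": "public_cv_corpus_v2", "origin": "third-party vendor", "collected": "2021"},
        {"name": "forum_scrape_2022", "origin": "web scrape", "collected": "2022"},
    ]

    def affected_by(flagged_source: str) -> bool:
        """Return True if the AI system was trained on the flagged source."""
        return any(entry["name"] == flagged_source for entry in data_bom)

    # If "forum_scrape_2022" is later shown to be biased, the Data BoM immediately
    # tells us whether retraining without it is needed.
    print(affected_by("forum_scrape_2022"))  # True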

BoMs of physical components are relatively easy to maintain, mainly because of the limited number of component providers. In software BoMs, things get more complicated due to the large number of components and subcomponents used, the diversity of their developers, and the pace of their changes and updates. Things get even more challenging with Data BoMs. Anyone can write and publish text on the Internet, all of which could be used to train some AI systems. Tracking all these sources, their quality, and their dynamics through Data BoMs may be the most challenging – albeit not impossible – of the three.

Open source

Opening the source code of the AI system shares many similarities with making the data open. In addition to “reproducibility”, an open-source system can be vetted by various researchers and engineers, potentially validating its robustness or revealing security vulnerabilities, bugs, or biases in the AI logic so that they can be corrected. However, hackers may outpace ethical researchers in finding vulnerabilities or biases and exploiting them for their own gain. This is the typical debate between open-source defenders and opponents.

Besides the security issues, open-source AI exposes the developer company to the possibility of competitors infringing its IP rights and reusing logic that the company may have invested millions to develop.

Another issue with open-source AI is the risk of misuse. Take, for instance, ChatGPT. OpenAI has taken precautions to prevent it from being misused, for example to provide instructions on how to build a bomb, how to hack, etc. While opening the source code of ChatGPT would give legitimate users the opportunity to adapt it to specialized domains (e.g., medicine), it would also allow malicious actors to adapt it to potentially nefarious purposes beyond the control of the original developer.

Explainability

Open data and open source are relatively straightforward to implement once the developer of the AI system decides to do so after evaluating the corresponding advantages and drawbacks. But the third transparency component – the explainability of the output – is much harder to put in place, despite the eagerness of society and lawmakers to have it incorporated.

Not all AI systems are hard to explain, but neither are they all inherently explainable. Take, for instance, a simple use case in which the training data is based only on candidates’ ages and years of experience. When the system outputs a new result (e.g., rejecting a candidate), this can be easily linked to the underlying logic and the training data, therefore enabling a proper explanation.
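
In such a simple case, the learned logic can literally be printed and read, which is what explainability looks like in the easy setting. Here is a minimal sketch (using scikit-learn’s decision tree, with made-up numbers):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical training data: [age, years of experience] and the past decision.
    X = [[22, 1], [25, 2], [30, 6], [35, 9], [40, 12], [45, 15]]
    y = ["reject", "reject", "accept", "accept", "accept", "accept"]

    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # With only two attributes, the decision rules are short enough to read,
    # so a rejection can be traced back to an explicit threshold.
    print(export_text(model, feature_names=["age", "years_of_experience"]))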

However, as AI systems become more complex, explainability becomes more challenging. When the training data involves millions of CVs (i.e., inputs) that capture scores of attributes such as writing style, facial expressions, and social media activity (in fact, AI systems can be based on billions of such attributes, let alone the number of inputs for each attribute), then explaining a given decision becomes next to impossible. Likewise, in the context of AI-based military applications, where the training draws from tens of thousands of data sources, including video analysis, patterns of people’s behavior, and intelligence data, in order to decide on military targets, explaining these decisions may be close to impossible, despite the high responsibilities and serious implications.

There is some research being done on AI explainability, but it is still in its early stages. Until we learn more, we have to tolerate the unexplainability of AI systems. To some of us, this may sound outrageous and unacceptable. Others may argue that human decisions are not always explainable or transparent anyway, often relying on gut feelings, and that an unexplainable AI system behaves in a similar way. This is what I will refer to as the AI’s gut feeling. Let’s explore this point a little bit more.

Humans decide partly based on logic, but also on instinct, intuition, and gut feelings:

  • Instinct reflects the learnings of millions of years of evolution, allowing humans to survive, avoid risks, and pass the learnings on to their descendants. This is universal learning, common to a whole species (e.g., humans, birds…). For instance, any human facing a lion would be afraid, even if it is the first time they see a lion and they have never had an incident with lions before.
  • Intuitions, on the other hand, are based on the accumulation of one’s personal experiences. They are a way of processing information that is distinct to each human being. Sometimes they get reflected in physical “gut feelings”. A manager, for example, may not like a job candidate for no obvious reason, just based on gut feeling, or on the “chemistry”, without being able to explain it.

Instincts and intuitions (and their occasional gut feelings) are hardly explainable, because of the complexity of the underlying learning. They both result in unexplainable quick reactions, but the underlying learnings are quite different: “universal” for the former, and “personal” for the latter.

In a similar way, AI results can be unexplainable: the astronomical amounts of data, the variety of data types, and the multiple layers of very complex learning all contribute to results and decisions that cannot be explained.

So the question is: Would unexplainable AI be acceptable for society? From recommending which products to buy, to selecting job candidates and even military targeting decisions, unexplainable AI systems are already in use, without much pressure for explainability.

Risks for society

It is easy to say, “No way! Transparency, including explainability, is a fundamental right and a pillar of trust”. In practice, though, we have two options:

The first one is to stop using AI systems until we are certain we can have explainability. But realistically speaking, AI systems are already there, proliferating into all aspects of our lives (and deaths), with considerable advantages for users and the economy. It’s like thinking of stopping the internet and redesigning it to be “better”. It won’t happen. It’s evolution, not intelligent design, in the digital world as well.

The second is to keep deploying and using AI systems with the understanding (or hope) that the “fundamental right” of transparency and explainability will be reached at some point in the future. This sounds more realistic. However, experience shows that in practice these fundamental rights are overshadowed by the convenience that some systems offer. Take privacy, for instance. It is a fundamental right, claimed by many, yet with the surge of online social networks and recommendation systems, and their convenience to most of society, one wonders whether privacy has become an illusory fundamental right. In a similar way, the convenience of AI systems, even unexplainable ones, may end up outweighing society’s demand for the right to explainability.

Another concern relates to the unexplainable “AI gut feeling”. Humans react based on instinct, developed universally and applicable everywhere (e.g., fear of danger), and on intuition, developed personally and applicable “locally”. AI systems, in contrast, are trained “somewhere” and may be used “somewhere else”, where the AI intuition, or “AI gut feeling”, is not necessarily appropriate for the local environment, all while deciding without clear explainability.

Context may be missing when AI learns from astronomical amounts of data. Besides the well-known problem of training data being biased against a gender, skin color, religion, etc., different societies do not necessarily have equal means to build and train good AI systems. While some may be wealthy and investing heavily in AI, others may be starving, in war zones, or in developing countries more generally. The risk is that of spreading AI systems with gut feelings that are imported from “elsewhere” (think of cultural differences), without necessarily reflecting the local context.

This reminds me of Hollywood movies from the 80’s: not all countries had the means to produce fancy (local) movies, or the capacity to advertise and distribute them around the world. As a result, the brain of the child I was back then developed a distorted model of reality. As a kid for whom Hollywood movies were today’s Wikipedia, I thought Americans were always the “good guys” who won the war against the Vietnamese, and that the bad guys always came from specific regions of the world.

The risk to society, therefore, lies in how much trust we put into the AI’s gut feeling. AI systems trained on data from specific societies with specific points of view, spreading “AI gut feelings” around the globe, are today’s equivalent of the Hollywood movies spreading specific judgements in the 80’s. Now I know how much credibility I should give to Hollywood movies from the 80’s, but I’m not sure how much trust people will put into unexplainable “AI gut feelings”, nor how aware they are of their immaturity. AI literacy and awareness are fundamental here.

AI decisions are likely to remain unexplainable for the next decade(s), but access to the (open) training data and to the open source code can be of great help in checking their correctness.

Furthermore, just as Netflix produces regional movies and spreads them globally, allowing everyone to get other points of view on local problems and habits, there is a need for AI inclusion of developing countries, the global south, countries at war, etc., such that they can have their own AI and data capabilities, or at least such that the “rich countries’ AI gut feeling” takes the data of poor or discriminated-against people into consideration in its training.

A few ending points

Artificial Intelligence is percolating into all aspects of our daily lives, from the superficial to the life-threatening ones. Transparency is needed for society to understand AI and use it constructively and responsibly. In this blog post I discussed why explainability is the most challenging part of AI’s transparency, similar to how challenging it is for humans to explain their gut feelings. However, despite its immaturity, AI has a global reach and potentially high risks. In brief:

  • Transparency in AI is crucial for society
  • Until AI becomes explainable, data and source-code transparency is helpful
  • Awareness efforts should be made such that fundamental rights such as transparency (and privacy) do not get overshadowed by the convenience of AI tools
  • AI inclusion and awareness are necessary for good AI systems globally

For further reflections on “Transparency in AI” you can watch the panel session C4DT organized in the “AI House” during the World Economic Forum Annual Meeting in Davos last January.

Acknowledgement

Many thanks to Stephanie Borg Psaila from Diplo Foundation and Melanie Kolbe-Guyot from C4DT for their thorough reviews and great suggestions. Thanks to Katherine Loh, Jean-Pierre Hubaux, Martin Jaggi, Laura Tocmacov, and Hamilton Mann for the valuable discussions.