27-10-2021 By: Paul van der Linden

AI: with great power comes great responsibility. Part 1: Artificial Intelligence – again


The constant bombardment with stories about the possibilities and dangers of artificial intelligence might give you the impression that we are dealing with a “new new” phenomenon. Nothing could be further from the truth. Artificial intelligence (AI) already emerged in the middle of the twentieth century and subsequently went through what Gartner calls a hype cycle. After being hailed as the solution for almost everything, it failed to deliver on inflated expectations and a period known as the AI winter set in. At a time when the actual results of AI were less than convincing, understandably few were willing to invest in its further development. This blog is the first in a series of four.

Fast forward to the present, where developments in technology and the declining cost of processing and storage have changed the scene. To be clear, there are already many applications of AI. Examples include (semi-)autonomous cars, speech recognition (Alexa, Siri), machine translation (Google Translate), and text and video analytics.

It’s good to understand that it is not only the big tech companies – Google, Amazon, Facebook, and Apple (GAFA) – or their Asian counterparts – Baidu, Alibaba, Tencent, and Xiaomi (BATX) – that are exploring and implementing the many applications of AI. The Chicago Police Department has been using a combination of camera feeds, audio from microphones, and AI models to decide whether to send a police car or an ambulance to locations generating suspicious images and sounds. The Dutch Tax Agency uses sophisticated models – based on analysis of historic data – that distinguish between income tax statements that need to be investigated and ones that don’t. Uber uses an algorithm to match drivers to people requesting a ride. In fact, it is hard to find a company that would not benefit from the implementation of AI.

When and how to join the evolution
As Andrew Ng – who played a major role in transforming both Google and Baidu into AI-driven organizations – points out, most current applications fall into the category of input-output relations: you bulk feed a lot of data into the algorithm and end up with some sort of hopefully relevant output. Ng therefore states that we are only at the beginning of what AI can be. He is more interested in artificial general intelligence (AGI), which resembles the way children learn – not by bulk feeding and tagging information – although he is not sure it will happen in his lifetime.
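To make the input-output pattern concrete, here is a minimal sketch in Python (using scikit-learn). The features, labels, and figures are invented purely for illustration and are not taken from any real case:

```python
# Minimal sketch of the "input-output" pattern: tagged historic examples go
# in, a prediction function comes out. All numbers below are made up.
from sklearn.linear_model import LogisticRegression

# Hypothetical examples: [declared income, deductions] -> needs review (1) or not (0)
X = [[30_000, 500], [85_000, 12_000], [42_000, 900], [120_000, 30_000]]
y = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                             # bulk-feed the tagged data
print(model.predict([[50_000, 8_000]]))     # some hopefully relevant output
```

The point is not the particular model, but the pattern Ng describes: large amounts of labelled data in, a narrowly scoped prediction out.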

For now, however, there is still a lot to gain by applying this narrow form of AI. Andrew Ng’s comments make clear that AI is an evolution and that we are only at the beginning. They also make clear that doing AI is not a choice; the choice is rather when and how to join that evolution.

With the ongoing adoption of AI in different fields it has also become clear that the big expectations for AI come with big challenges. Despite the best intentions, AI applications have been known to reinforce rather than reduce racial discrimination, and to weaken rather than strengthen workforce diversity. Self-driving cars have hit (and even killed) pedestrians, which raises the question whether the responsibility lies with the designer of the AI algorithms, the manufacturer of the car, or the driver.

Organizations applying AI, or pondering its application, should therefore also make sure that they have actively considered and addressed the implications for topics such as data trust, bias, ethics, and integrity. Trust, visibility, perception, and reputation are paramount in the digital era and can have a direct and substantial impact on your business.

Seven requirements
What can help with this important yet difficult matter is the Assessment List for Trustworthy Artificial Intelligence (ALTAI) provided by the High-Level Expert Group on Artificial Intelligence set up by the European Commission. It proposes the following seven requirements:
1. Human agency and oversight: humans should remain part of the process to oversee and guide the system
2. Technical robustness and safety: the system should keep delivering trustworthy results, even when circumstances change
3. Privacy and data governance: privacy must be safeguarded through proper data governance
4. Transparency: can you trace and explain how a result came about and discuss the limitations of the AI solution?
5. Diversity, non-discrimination and fairness: AI solutions should be available to all, be fair, and not discriminate as a result of, for instance, historic bias, incomplete data, or bad governance models
6. Societal and environmental well-being: the AI system is part of a broader context in which society and the environment are also stakeholders
7. Accountability: closely related to risk, this is about taking responsibility for the development, deployment, and use of the AI system.

All of the above requirements must be duly considered and included in any AI approach, by default and by design. The question then becomes how to implement this in a practical way without it becoming a huge task.
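One way to keep it practical is to start with small, automated checks that map directly onto the requirements. The sketch below (in Python, with invented decisions, group labels, and threshold) illustrates what a first check for requirement 5 could look like: compare the positive-decision rate across groups and route large gaps to a human reviewer (requirement 1):

```python
# Hypothetical fairness check: compare the model's positive-decision rate per
# group. Decisions, group labels, and the 0.2 threshold are invented.
from collections import defaultdict

decisions = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}

# Flag large gaps for human review (requirement 1: human agency and oversight)
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity exceeds threshold - route to human review")
```

A check like this does not make a system fair by itself, but it turns an abstract requirement into something that can be monitored by default and by design.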

Game changer
In a subsequent article we will explore the factors that enable an organization to implement AI and prosper from it. For now, let’s briefly highlight these factors:
1. Know why: the business case. AI should deliver real benefits within set timelines; this prevents it from becoming an expensive hobby.
2. Implement: build or buy? AI is a process and organizations need to decide whether they want to build it themselves or start fast by buying.
3. Trust, transparency, and ethics. Great results coming out of a black-box solution will not inspire trust and could raise questions about the ethics of the organization.
4. Go cloud: when requirements are still unclear, or scalability is a consideration (it should be!), it stands to reason to consider a cloud solution.
5. Enterprise and third-party data. Successful AI depends on having vast amounts of good-quality data – both internal and contextual (third-party) data.
6. Process. AI is a process and needs to be handled as such. The technical part is important but by no means the complete picture.
7. Team. AI thrives on a diverse team of people with different backgrounds and interests. The key differentiator is asking the right question.
8. Support from the top. Continuing support from the C-level is needed to set up, continue, and scale AI. It helps overcome resistance to change and absorb the occasional road bump.

Artificial intelligence is truly a game changer. We now have the means to continue on the promising AI journey towards artificial general intelligence as outlined by Andrew Ng. However, if we do not address the topics mentioned above, the chances of another AI winter will increase: not because the possibilities are not there, but because we fail to address these topics in a responsible and structured manner and are overtaken by negative perception and sentiment. With great power comes great responsibility.

This blog has been co-authored with Erwin Vorwerk.

Paul van der Linden

Paul van der Linden is Principal Consultant at Capgemini I&D. He is a certified architect and certified information professional (CIPP/E, CIPT, CIPM) and has spent the past twenty years helping organizations make the transition to a digital organization. He is co-author of the book ‘Data als succesfactor’, which is used as a textbook in higher education in the Netherlands and Belgium.

