Gerd Gigerenzer is a psychologist with five decades of research into decision making and Director Emeritus of the Max Planck Institute for Human Development.
In "Wisdom Over Algorithms," the author analyzes how humans use algorithmic and artificial intelligence (AI) technology to make decisions, and outlines their capabilities and limitations.
In discussions about algorithms and AI, we often encounter two opposing camps: one believes these technologies will make the world a better place; the other fears that robots and AI will replace and dominate humans, leading us toward an apocalyptic future.
Both the believers and the fearful, the optimists and the pessimists, share the same underlying assumption: algorithms and machines will do everything better than humans (more accurately, faster, and more cheaply). That is also the promise, and the sales pitch, of technology companies.
The author argues that this conclusion is false: in some areas machines are better than humans, but in others they are not. Recognizing that difference should change how we think and behave in relation to technology.
In this book, Gigerenzer wants to emphasize: “Complex algorithms can be successful when the situation is stable, but will encounter many difficulties in uncertain contexts.”
He believes that a wise attitude is the key to staying in control in this age of AI: “Keeping a wise attitude means understanding the potential and risks of digital technology, and remaining proactive in a world full of algorithms.”
By understanding the potential and, more importantly, the limitations and risks of these technologies, what they can and cannot do, we can be neither frightened nor blindly trusting. We can become astute digital citizens.
Today, algorithms and AI have permeated every aspect of human life. They help us choose dates, monitor our health and lifestyle, and manage our money.
They are applied at many levels across many fields: security (criminal identification), medicine (diagnosis and treatment), justice (predicting the likelihood that offenders will reoffend), and transport (self-driving cars).
In particular, algorithms profoundly shape the information we receive. Under the personalized-advertising model, what we are exposed to is very likely what advertisers want us to see.
To serve advertisers, tech companies collect minute-by-minute data about where you are, what you're doing, and what you're watching.
In this smart but dangerous world, we have no choice but to become smarter ourselves if we want to stay in control.
To help us understand the potential and limitations of algorithms, the author analyzes how we apply them in different fields: dating, recruiting, self-driving cars, translation, criminal identification, diagnosis and treatment, and more.
In each example, Gigerenzer shows what algorithms can and cannot do, then compares machine intelligence with human intelligence. Through specific and clear analysis, the author helps us see the fundamental difference between these two types of intelligence.
Gigerenzer’s first example is dating apps. He points out the promises of apps like Parship (whose slogan claims “a lonely heart falls in love every 11 minutes”), EliteSingles, Tinder, and Jdate, and explains how matchmaking algorithms work. Essentially, the apps take the personal profiles users provide, then score and compare characteristics to assess the compatibility between different people.
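The trait-matching idea described above can be sketched in a few lines of code. This is a toy illustration only: the traits, weights, and scoring rule here are invented for the example and are not any real app's algorithm.

```python
# Toy sketch of profile-based compatibility scoring.
# Traits and weights are illustrative assumptions, not a real app's method.

def compatibility(profile_a: dict, profile_b: dict, weights: dict) -> float:
    """Return the weighted fraction of traits on which two profiles agree (0..1)."""
    total = sum(weights.values())
    matched = sum(w for trait, w in weights.items()
                  if profile_a.get(trait) == profile_b.get(trait))
    return matched / total if total else 0.0

# Hypothetical user profiles.
weights = {"smoker": 3, "wants_kids": 3, "hobby": 1, "city": 2}
alice = {"smoker": "no", "wants_kids": "yes", "hobby": "hiking", "city": "Berlin"}
bob   = {"smoker": "no", "wants_kids": "yes", "hobby": "chess",  "city": "Berlin"}

print(round(compatibility(alice, bob, weights), 2))  # -> 0.89 (8 of 9 weight units match)
```

The sketch also makes the book's objection concrete: the score only ever compares what people *wrote* in their profiles, not who they are.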
However, a profile is not a person. Moreover, to retain users, the apps make it easy to browse many potential partners, encouraging people to keep looking for "someone better." Worse still, scammers exploit the system to defraud its users.
From the low success rate of dating apps, Gigerenzer distills what AI does best through his "stable-world principle": algorithms excel in stable, well-defined situations, but human affairs are rarely stable.
This is the fundamental difference between human intelligence and artificial intelligence, and the common limitation of AI across all fields. Wherever humans are involved, a great deal of uncertainty appears, and even a little uncertainty can throw AI off.
For the same reason, AI cannot be entirely successful in areas such as medical diagnosis, translation, self-driving cars, and criminal identification. It can help to some extent, but it cannot completely replace humans.
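The stable-world principle can be illustrated with a deliberately simple learner. This sketch is not from the book; the "weather" data and the majority-vote rule are invented solely to show how a pattern learned in a stable world degrades the moment the world changes.

```python
# Illustrative sketch of the stable-world principle (invented example):
# a learner that is accurate while the world stays stable, and fails when it shifts.
from collections import Counter

def most_common_label(history):
    """Learn the single most frequent outcome seen in past data."""
    return Counter(history).most_common(1)[0][0]

stable_past    = ["sunny"] * 90 + ["rain"] * 10   # the stable world it learned from
stable_future  = ["sunny"] * 9  + ["rain"] * 1    # the world stays the same
shifted_future = ["rain"]  * 8  + ["sunny"] * 2   # the world changes

rule = most_common_label(stable_past)             # always predict "sunny"
acc_stable  = sum(x == rule for x in stable_future)  / len(stable_future)
acc_shifted = sum(x == rule for x in shifted_future) / len(shifted_future)

print(rule, acc_stable, acc_shifted)  # -> sunny 0.9 0.2
```

The same learned rule scores 90% while conditions match its training data and 20% once they do not, which is the pattern Gigerenzer describes for complex algorithms facing uncertain, human-shaped situations.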
In this book, Gigerenzer also points out how technology companies hook their users. The tools used to capture attention include social media feeds, notification systems, delayed "likes," autoplay videos, Snapchat streaks, and mindless games that demand constant attention.
From these analyses, Gigerenzer proposes several ways for individuals to proactively regain control: managing attention, verifying information sources, and limiting dependence on technology. In addition, government intervention plays a vital role in protecting privacy and democracy.
In short, the author believes that our awareness of the potential and dangers of “algorithms/AI/technology” will be the key to retaining control for ourselves.