What Does A Good Future Look Like? A Conversation With Futurist Keynote Speaker Gerd Leonhard
Polls suggest that most Millennials think the future will be terrible, or at least worse than the past, not least due to climate change and war. Gerd Leonhard fears that such a negative outlook can create a negative future, and he is exploring how to create what he calls The Good Future. By this he does not mean that everyone is rich, but that everyone’s fundamental needs are fulfilled: health, food, shelter, education, a meaningful job, and the basic democratic freedoms. He joined the London Futurists Podcast to discuss these ideas.
Leonhard is one of the most successful futurists on the international speaker circuit. He estimates that he has spoken to a combined audience of 2.5 million people in more than 50 countries. He left his home country of Germany in 1982 to go to the USA to study music. While he was in the US, he set up one of the first internet-based music businesses, and then parlayed that into his current speaking career. His talks and videos are known for their engaging use of technology and design, and he prides himself on his rigorous use of research and data to back up his claims and insights.
Leonhard’s mantra is “people, planet, purpose, prosperity”, and he argues that if any of these four are neglected, we are in trouble. He thinks the world currently places too much emphasis on profit and economic growth, and not enough on purpose, or meaning, and planet, or sustainability. Capitalism, he believes, needs a reboot, with new types of dividends, and new types of stock market.
Of course it is easy to criticise today’s economic and social structures; the harder job is to describe what improved structures would look like. Leonhard doesn’t claim to have a detailed blueprint, but he asserts that in an exponential age, when both the quality and the efficiency of most products and services are improving at an accelerating pace, it must be possible to devise better structures.
If forced to put a name to the kind of system he would like to see, Leonhard calls it progressive capitalism, or social capitalism. But it is unclear exactly how he thinks this would differ in principle from many countries today, where the state already spends more than 40% of GDP.
At a high level, Leonhard likes Kevin Kelly’s idea of “protopia”, which is an escape from the usual dismal choice between dystopia, which is obviously unacceptable, and utopia, which is both unattainable and undesirable, because nothing would change, so there could be no fun. Protopia is a state in which everything is pretty good, and bit by bit, it keeps getting better every day.
Leonhard is not sure we are on this path at present. He argues that companies like Unilever are penalised because their management embraces goals beyond shareholder value, whereas companies like Meta (Facebook) and Saudi Aramco, which he considers “evil”, go unpunished because stock markets don’t care as long as they are profitable.
Is AI a threat to human values?
He is worried that the rush to adopt AI is driving us headlong into another undesirable situation, where humans may lose sight of their fundamental values. Unfortunately it is not always easy to foresee the harms it will cause. With some previous technologies, the harm was clearer, for instance with CFCs, the industrial chemicals which were discovered to be punching a hole in the ozone layer of the atmosphere. The solution, the Montreal Protocol, was agreed relatively quickly and painlessly in 1987, because there was little controversy about this problem. With AI, the risks are less black-and-white.
An example is the use of generative AI in search. There is reportedly an argument within Microsoft about how fast OpenAI’s technology should be deployed in the company’s Bing search product. Some think it should be rolled out as fast as possible in order to take advantage of a limited window of opportunity to wrench some of the immensely lucrative search advertising business away from Google. Others argue that generative AIs are demonstrably unreliable, and that they should therefore be deployed gradually and cautiously.
The ultimate threat from AI is the creation of a superintelligence whose goals are incompatible with humanity’s. This would be an existential threat to humanity, regardless of whether the superintelligence’s attitude towards us was hostility or indifference. Unfortunately, it is unlikely that we could prevent this risk from becoming real by a global agreement to desist from developing the general AI (AGI) which would become the superintelligence.
It is sometimes argued that the history of nuclear weapons shows that we can control the development of dangerous technologies by international agreement. Unfortunately the analogy is misleading.
There are currently two leading AGI labs in the world: DeepMind and OpenAI. Both are explicitly seeking to create an AGI, and both are confident that they will achieve it within the next several decades. Prior to Microsoft’s latest investment, the cost of setting up OpenAI was around $3bn. This is a sum that is within the reach of many governments, companies, and even private individuals today, and the cost will fall as computers become more and more capable. The idea of holding back development, sometimes called “relinquishment”, seems implausible.
Looking further ahead, Leonhard is uncomfortable with a school of thought known as transhumanism, which is the belief that humans should be free to use technology to enhance their cognitive and physical abilities. He thinks enhancement is fine so long as it doesn’t undermine our humanity. For instance, he agrees that at first sight it would seem great to have a permanent, always-on connection between our minds and the internet, providing instant access to all the knowledge in the world. But he worries that we would become dependent on it, and perhaps unable to function independently if we lost it. We could become lazy, and we could lose our judgement if we rely uncritically on the information provided.
This raises the interesting question of how far we should accept losing the skills of our forefathers. Many people today would struggle to light a fire without matches, or even to grow their own food. But as long as some people retain these skills so that they can be revived if necessary, is this a bad thing? If we all try to retain all the skills humans have needed throughout history, we will not have the time or the mental bandwidth to make progress by acquiring new knowledge and new skills.
Some people think that what is important about us is not the fact of being biologically human, but what goes on in our minds. Membership of a species is defined by biologists as the ability to create new members in the traditional way. If we could upload our minds into machines and live in a limitless virtual world with astounding capabilities and freedoms, we would no longer be humans by this definition. We would be post-humans, and some people would welcome this. Leonhard thinks we would have lost something important, and we would have become machines instead.