Just did the latest version of my 'Futureshock' talk (an update of the 2005 / 2009 versions) at OHM2013. The central new insight is that exponential change does not only work 'up' (Moore's law, Kurzweil's law of accelerating returns) but also the other way: exponentially out-of-control financial systems and military-industrial-security complexes causing exponential depletion of critical resources. All of this is very bad, but the exponential climate disaster is now rapidly approaching a level that could end up killing more people than all the wars ever did (and perhaps all of us). Welcome to the age of consequences, where 'crisis' will be the new normal.
Just as in 2005/2009 I give an overview of exponentially developing technologies and their implications (for details see the earlier versions of the talk linked above). But we really need to discuss some bad news about exponentially growing problems of resource scarcity, environmental degradation and the policy non-responses of our governments so far. A lot of activism against things like 'The War on Terror', or against the various other ways our governments have lost their democratic ways, seems to work from the assumption that most of the problems are just a misunderstanding: if we can just explain the facts to these not-so-smart but essentially well-meaning people in Brussels and Washington, everything will be OK. This model of reality is good for getting funded as an NGO and for getting invited to talk to the aforementioned well-meaning people. It is not good for actually understanding and influencing what is going on (firstly because it ignores the fact that the politicians in Brussels and Washington are really not in charge). Let's at least consider the idea that these 'crazy' policies are not crazy at all but are actually working perfectly. That is, for the actual goals, just not the officially stated ones.
Let's talk. But let our talking be based on a harsh assessment of where we really are, not some politically convenient pretense of where we should be or would like to be.
The technological singularity is an interesting concept from 1993 by mathematician Vernor Vinge. Vinge describes the consequences of smarter-than-human systems (computers, improved humans or symbiotic human-machine systems) as leading to a runaway acceleration of intelligence improvement.
It goes like this: what would a smarter-than-human artificial intelligence do? It might play the stock market or become the world's greatest artist, politician or general. But it might also become the world's smartest computer-science researcher, working on improving artificial intelligence and making a better version of itself. Rinse and repeat, and interesting stuff starts to happen. Computer systems have been doubling in performance every 18 months under the limited guidance of static human intelligence for over a century. With self-improvement they could perhaps double in much, much shorter time spans. Think 17 minutes. Or less.
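The arithmetic behind this intuition can be sketched in a few lines. This is a toy model, not a prediction: the 18-month figure comes from the text above, while the idea that each doubling halves the next doubling time is an illustrative assumption. The interesting property it shows is that when doubling times shrink geometrically, the total time for all doublings converges to a finite limit, so the number of doublings that fit in a fixed window explodes.

```python
def doublings(total_months, first_doubling_months, speedup=1.0, cap=64):
    """Count how many performance doublings fit in total_months when each
    successive doubling takes `speedup` times as long as the previous one.
    `cap` bounds the count, since with speedup < 1 the intervals form a
    geometric series whose sum converges (e.g. 18 / (1 - 0.5) = 36 months),
    so arbitrarily many doublings would fit in finite time."""
    count = 0
    elapsed = 0.0
    interval = first_doubling_months
    while count < cap and elapsed + interval <= total_months:
        elapsed += interval
        interval *= speedup
        count += 1
    return count

# Static human-guided progress: one doubling every 18 months, over a decade.
print(doublings(120, 18))               # 6 doublings, i.e. 64x performance

# Toy self-improvement: each doubling halves the next doubling time.
# All doublings fit inside the first 36 months, so the cap is what stops us.
print(doublings(120, 18, speedup=0.5))  # hits the cap of 64
```

The convergence of the series is the whole point of the "singularity" metaphor: a finite calendar interval containing an unbounded number of improvement steps.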
The implications of this idea are profound. It has the potential to make most of our problems today irrelevant (material scarcity and mortality might turn out to be easily solvable problems). It may also destroy our entire solar system. But just as with nuclear fusion, there is also the possibility that it just won't happen in the foreseeable future. We must guard against passivity among smart people who stop solving problems while they are waiting for 'the rapture of the nerds'.
For over a million years we lived as hunter-gatherers in small family groups; for thousands of years we lived as farmers in small villages; for 200 years we have lived in cities and built industry. Now we live globally, in a world that changes faster every day than ever before through new ideas and technologies.
Sickness and mortality? Scarcity of material goods? Humans as the most intelligent beings? How very 20th century!
Our history has not prepared us for these changes. Our cultures, ideologies and religions provide no answers to many of the new questions we face. Trying to impose old worldviews or ways of doing things on a new world is a recipe for failure, whether you are a company, a government or an individual.
For businesses the challenge will be to provide valuable products in a world where many things that were expensive in the recent past have quickly become very cheap or essentially free. Governments will struggle to remain relevant in a world that moves much faster than they can and where geographical location is becoming less and less important for the individual citizens' identity, income and social network.
All of us will be challenged to rediscover what being human means in a world that is constantly changed by new technologies we cannot really control. Do we try to stop these changes, or can we adapt to them? What are some of the risks we face if we use all these new technologies? What are the rewards we might miss out on if we decide not to use them?
This type of presentation is part of our scenario-planning services. Other visual examples (in Dutch) are this TV appearance from 2005, a short film we made for one of our finance clients in the summer of 2008, and another film we helped make about the future of culture and knowledge.