Losing control, without the intelligence
The technological singularity is mentioned often these days, with speculation fuelled by advances in artificial intelligence and related fields. Briefly, the singularity is the point at which we develop artificial intelligences clever enough to design the next generation of themselves - a cycle that quickly goes beyond our control and understanding (the "intelligence explosion"), leaving us fundamentally unable to predict or comprehend what happens beyond that horizon. Those who foretell the impending singularity see it as anything from the solution to all our problems - with hyper-intelligent machines able to manage the planet better than we did - to the end of humanity, as we become superfluous to the survival of our successors.
It is often supposed that this will happen sometime this century; however, I am somewhat doubtful, and I don't see much evidence that the end is particularly nigh. Mostly this is because one of the key requirements of the singularity is truly advanced AI, capable of thought at or beyond the human level. The argument is, as I said, that once this exists, machine intelligence will quickly and inevitably outstrip us, since our brains have barely changed in millennia; but given how much machines still struggle to understand natural language, interpret images, or pass as human in chat-bot conversations, I get the impression that this is wishful (if not slightly morbid) thinking.
On the other hand, after attending the excellent "Richard Dawkins and Susan Blackmore in Conversation" event, hosted by the BHA and my University's atheist society, I find there are other issues to consider. Blackmore discussed thremes (or 'temes') - the third replicator, the potential next step after genes (the biological replicators responsible for our evolution) and memes (the ideas which mutate and propagate using human minds as media). Thremes, in contrast to their predecessors, spread through technology: data moving around the internet, endlessly copied and changed, surviving as best it can. I have certainly been sceptical in the past as to whether these exist yet, and was pleased to hear Blackmore shares this view - as she said, current forms of information replication still require humans to make them happen, and so are at best memes in a faster environment. Even apparently self-replicating pieces of code, such as computer viruses, are still wholly human creations, and tend not to survive long enough to mutate and propagate of their own accord.
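To make that point concrete, here is the classic Python quine - a program whose only behaviour is to print an exact copy of its own source (the quine proper is just the two lines of code; the comments are my annotation and aren't reproduced). Self-replicating code of this kind is easy to write, but, like a virus, it only ever copies itself perfectly:

```python
# A quine: a program whose output is an exact copy of its own source.
# It "replicates", but every copy is identical - no mutation, no
# variation, and hence no raw material for Darwinian selection.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the printed output produces the same two lines again, indefinitely - faithful copying with zero heredity of change, which is exactly why such code stays a human artefact rather than a replicator in Blackmore's sense.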
However, if thremes did begin to become true replicators in their own right, it would mean that humans are out of the loop. Once they have a way of replicating by themselves, the relentless consumption of resources can begin. I should clarify that this is no vindictive rebellion, any more than genes are actively trying to pollute the planet with biology, but a consequence of successful replicators surviving to replicate further. Now this concept, of pieces of information spreading, procreating, and to an extent thinking, for their own ends, sounds like the sci-fi-esque notion of the internet becoming self-aware, to finally kick back against its human masters - an idea that's been thrown around for decades (think Skynet). This is, on the face of it, plausible: after all, the human brain is nothing but a network of interconnected processing units, so why should a world-wide network of information not start to become aware of its existence? Could the internet become sentient and try to free itself from our grasp? These are scary thoughts, but fantastical, and as far as I know there is no likelihood of such a consciousness emerging spontaneously from what is basically just a huge collection of hyperlinked data, without us (or our invented helpers) actively making it happen, singularity-style.
But the point about thremes is that such self-awareness is not necessary. We don't have to invoke far-fetched notions of an actual consciousness (whatever that really means) emerging to worry about losing control: all that is required is for replicators to emerge. Much as selfish genes are not organisms with their own intentions, but merely sections of copiable biological code, there is no requirement for intelligence in such a takeover.
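A toy simulation (my own sketch, not anything from Blackmore's talk) illustrates how selection needs no minds at all. Each "replicator" here is nothing but a number - a copy rate. Copying is noisy, resources are finite, and nothing in the model decides anything; yet faster copiers leave more descendants, so the population evolves regardless:

```python
import random

random.seed(0)       # reproducible run

CAPACITY = 1000      # finite resource pool: the most replicators that can exist
MUTATION_SD = 0.05   # noise added to each copy's rate

# Each replicator is just a copy rate - no goals, no intelligence.
population = [1.0] * 10

for generation in range(30):
    offspring = []
    for rate in population:
        # a faster copier succeeds in more of its four copy attempts
        n_copies = sum(random.random() < rate / 2 for _ in range(4))
        for _ in range(n_copies):
            # each copy inherits its parent's rate, plus mutation
            offspring.append(max(0.1, rate + random.gauss(0, MUTATION_SD)))
    # finite resources: cull the population back to CAPACITY, at random
    random.shuffle(offspring)
    population = offspring[:CAPACITY]

mean_rate = sum(population) / len(population)
print(f"after 30 generations: {len(population)} replicators, "
      f"mean copy rate {mean_rate:.2f}")
```

The culling is blind and the mutations are random, yet heritable variation in copying success is enough for the average copy rate to be pushed upward until the resource pool is saturated - selection without any "deciding" going on anywhere.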
This prospect is worrying indeed. As Blackmore said, the first we'd know about it is suddenly being unable to access the internet, as the thremes voraciously consume all available resources in their Darwinian quest to "survive". This use of machines, by machines, for their own ends, looks distressingly similar to some visions of the singularity: humans have unleashed something over which they have lost control, and become redundant. The missing step is that none of it need ever be part of an effort to create intelligent agents for our own ends, so the thremes would never have an opportunity to be told to serve us. Of course, we can still pull the plug - but only as long as there is still a plug we can pull, in the horrendously complex system we've constructed.
This thought experiment illustrates, I hope, how easily such a collapse could occur: we wouldn't even need to be complicit (although much human intervention will be required before such self-replication can take off, I'm not sure it would have to be deliberate, nor am I confident we'd know it was happening). It's why we should be wary of talk of the singularity, of what it could entail, because a takeover by software-based replicators requires no super-advanced AI, nor the emergence of anything sophisticated enough to "decide" we are superfluous - thremes could march on oblivious (DNA never "decided" to clean up the primordial soup).
I should emphasise though that this kind of self-replication of technology/code/data is far from imminent, and I don't see how current information technology could be coaxed into such behaviour. But the potential for this kind of game-changer to occur, without our consent or intention, suggests that the fear-mongering over what a singularity might bring is not as speculative as we might hope.