Friday 25 May 2012

Capturing the Dragon

"Houston, it looks like we got us a Dragon by the tail"



Well that was amazing: I just watched the live video feed from NASA of the SpaceX Dragon capsule docking (well, being grabbed by a robot arm) with the International Space Station. This is a pretty important moment: it's the first time a privately funded mission has made it to orbit and linked up with the International Space Station.


It was pretty impressive to watch the robot arm being carefully guided to link up with the capture mechanism on the Dragon: that's a lot of complicated equipment being wielded with very little margin for error, and the tense faces at Houston go to show how hard it all is. But of course, that sort of thing has happened plenty of times now, with the ISS having been in orbit for years and numerous other complicated things (such as Hubble) having been delicately positioned there.

That isn't the point though: so far, every space mission has had the backing of very big, comparatively well-funded government agencies. The achievements of NASA et al. over the past decades have obviously been immense - humans on the moon, probes to the edge of the solar system, robots on other planets. These are right and proper things for publicly funded science to be doing - but if it takes a massive government-funded organisation to get anything into space, that's going to be a problem. It's not the kind of setup that will let humans really do space, or allow the kind of unlimited exploration and adventure that we'd all love to see.

NASA's budget, as we recently became only too aware, can only stretch so far, and making the hard choices between different missions is no fun. Now though, SpaceX has conclusively shown it is possible for a private company, in just a few years, to become capable of getting hardware into space, and interacting with the bigger players already there. Hopefully they and other private companies will soon be capable of dealing with the necessary commercial aspect of space - launching satellites, tourism, and so on - leaving publicly funded bodies to get on with the fun stuff.

That's the practical, financial argument, anyway. But from a futuristic, sci-fi point of view, this achievement is the start of something very important. It's hard to imagine humanity becoming a proper space-faring race if only the governments of wealthy nations can go there. And while the initial plan, or so I gather, is for private companies to step in and handle the drudgery of supplying the ISS, maintaining satellites and so on, I don't doubt that once more companies get off the ground, many more things will begin to happen. Who knows what possibilities could open up once space becomes easy for ordinary people (OK, not quite, but you get the idea) to reach - people free to boldly go as they see fit, without having to justify every bit of spending to governments or taxpayers. This is "a new era for space flight unfolding in real time", as the NASA video commentator put it.

Of course, today's achievement didn't happen spontaneously, and it certainly wasn't cheap: it took many, many years to get this done, and months of planning just for the docking; and I'm guessing the price tag (for the mission itself and the years of development) was pretty heavy. Running up and down to space on a whim is still a long way away. But the fact that a private company can get to space is the important point, and illustrates how far we have come in the last fifty years. It's often lamented that humans have not been back to the moon since the '70s - almost as if we lost interest - but in those days, it took a large part of the wealth of a world superpower to do it. This is clearly no longer the case - and it can only get easier from here. The true age of space flight could soon begin.

Images taken from the NASA video feed at http://www.nasa.gov/multimedia/nasatv/ustream.html

Thursday 24 May 2012

Replicators of the singularity

Losing control, without the intelligence


The technological singularity is mentioned often these days, with speculation fuelled by advances in artificial intelligence and related fields. Briefly, the singularity is the point at which we develop artificial intelligences clever enough to design the next generation of themselves - a cycle which will quickly go beyond our control and understanding (the "intelligence explosion"), leaving us fundamentally unable to predict or comprehend what will happen beyond this horizon. Those who foretell the impending singularity see it as anything from the solution to all our problems - with hyper-intelligent machines able to manage the planet better than we did - to the end of humanity, as we become superfluous to the survival of our successors.

It is often supposed that this will happen sometime this century; however, I am somewhat doubtful, and I don't see much evidence that the end is particularly nigh. Mostly this is because one of the key requirements of the singularity is truly advanced AI, capable of thought at or beyond what humans can do. The argument is, as I said, that once this exists, machine intelligence will quickly and inevitably outstrip us, since our brains have barely changed in millennia; but given how much machines still struggle to understand natural language, interpret images, or pass as human in online conversation, I get the impression that this is wishful (if not slightly morbid) thinking.

On the other hand, after attending the excellent "Richard Dawkins and Susan Blackmore in Conversation" event, hosted by the BHA and my University's atheist society, I find there are other issues to consider. Blackmore discussed thremes (or 'temes') - the third replicator, the potential next step after genes (the biological replicators responsible for our evolution) and memes (the ideas which mutate and propagate using human minds as media). Thremes, in contrast to their predecessors, spread through technology: data moving around the internet, endlessly copied and changed, surviving as best it can. I have certainly been skeptical in the past as to whether these exist yet, and was pleased to hear Blackmore shares this view - as she said, current forms of information replication still require humans to make them happen, and so are at best memes in a faster environment. Even apparently self-replicating pieces of code, such as computer viruses, are still wholly human creations, and tend not to survive long enough to mutate and propagate of their own accord.

However, if thremes did begin to become true replicators in their own right, it would mean that humans are out of the loop. Once they have a way of replicating by themselves, the relentless consumption of resources can begin. I should clarify that this is no vindictive rebellion, any more than genes are actively trying to pollute the planet with biology, but a consequence of successful replicators surviving to replicate further. Now this concept, of pieces of information spreading, procreating, and to an extent thinking, for their own ends, sounds like the sci-fi-esque notion of the internet becoming self-aware, to finally kick back against its human masters - an idea that's been thrown around for decades (think Skynet). This is, on the face of it, plausible: after all, the human brain is nothing but a network of interconnected processing units, so why should a world-wide network of information not start to become aware of its existence? Could the internet become sentient and try to free itself from our grasp? These are scary thoughts, but fantastical, and as far as I know there is no likelihood of such a consciousness emerging spontaneously from what is basically just a huge collection of hyperlinked data, without us (or our invented helpers) actively making it happen, singularity-style.

But the point about thremes is that such self-awareness is not necessary. We don't have to imagine the far-fetched notion of an actual consciousness (whatever that really means) emerging in order to worry about losing control: all that is required is for replicators to emerge. Much as selfish genes are not organisms with their own intentions, but merely sections of copiable biological code, there is no requirement for intelligence in such a takeover.
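To make that last point a little more concrete, here's a minimal toy sketch in Python (entirely my own illustration, not anything from Blackmore's talk - the pool size, mutation rate and 'fitness' rule are arbitrary assumptions). The 'replicators' are just bit-strings being copied, with occasional mutation, into a finite pool; nothing in it plans, decides or understands anything, yet the variants that happen to copy best come to dominate.

```python
# Toy sketch of mindless Darwinian replication - copying, variation and a
# limited resource are the only ingredients; there is no intelligence anywhere.
# All parameters are arbitrary assumptions chosen purely for illustration.
import random

POOL_SIZE = 200          # the finite "resource" the replicators compete for
MUTATION_RATE = 0.01     # chance of flipping each bit when copying
GENERATIONS = 50

def mutate(bits):
    """Copy a bit-string, flipping each bit with a small probability."""
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def copy_count(bits):
    """Arbitrary rule: strings with more 1s happen to get copied more often."""
    return sum(bits) // 4 + 1

# Start with a single random 16-bit replicator.
pool = [[random.randint(0, 1) for _ in range(16)]]

for generation in range(GENERATIONS):
    # Each replicator produces imperfect copies of itself...
    offspring = []
    for bits in pool:
        offspring.extend(mutate(bits) for _ in range(copy_count(bits)))
    # ...but the pool is finite, so only POOL_SIZE survive - pure selection,
    # with no intention, awareness or plan involved.
    pool = random.sample(offspring, min(POOL_SIZE, len(offspring)))

best = max(sum(bits) for bits in pool)
print(f"After {GENERATIONS} generations: {len(pool)} replicators, "
      f"fittest has {best}/16 ones")
```

The 'fitness' rule here is completely arbitrary - the point is only that copying, variation and a limited resource are enough for one kind of replicator to crowd out the rest, which is exactly why no intelligence needs to be involved.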

This prospect is worrying indeed. As Blackmore said, the first we'd know about it is suddenly being unable to access the internet, as the thremes voraciously consume all available resources in their Darwinian quest to "survive". This use of machines, by machines, for their own ends, looks distressingly similar to some visions of the singularity: humans have unleashed something over which they have lost control, and become redundant. The missing step is that none of it happened as part of an effort to create intelligent agents for our own ends, so there was never an opportunity to tell them to serve us. Of course, we can still pull the plug - but only as long as there is still a plug to pull, in the horrendously complex system we've constructed.

This thought experiment illustrates, I hope, how easy it would be for such a collapse to occur: we wouldn't even need to be complicit (and although a great deal of human intervention would be required before such self-replication could take off, I'm not sure it would have to be deliberate, nor am I too confident we'd know it was happening). It's why we should be wary of talk of the singularity and of what it could entail, because the takeover of software-based replicators requires no super-advanced AI, nor the emergence of anything sophisticated enough to "decide" we are superfluous - thremes could march on oblivious (DNA never "decided" to clean up the primordial soup).

I should emphasise though that this kind of self-replication of technology/code/data is far from imminent, and I don't see how current information technology could be coaxed into such behaviour. But the potential for this kind of game-changer to occur, without our consent or intention, suggests that the fear-mongering over what a singularity might bring is not as speculative as we might hope.

Wednesday 23 May 2012

My life has no meaning - and that's just fine


The absence of a meaning of life is what sets us free


The meaning of life is something which has bothered humanity for as long as we've been around to be bothered. Generations of philosophers, theologians, artists and poets have debated, with little progress either way, what the point of it all is, with answers ranging from it all being part of a divine plan, to a quest for enlightenment, to a journey of self-discovery.

However, as we come to understand more about the physical world, our relatively minor role in it, and our origins, it is becoming apparent that there probably is none. As the product of a few billion years of disinterested evolution, clinging to an unremarkable ball of damp rock revolving through a cold and uncaring universe, the conclusion that there is no actual meaning or significance to our existence is scientifically the most likely explanation.

This is dealt with in various ways - from outright nihilism, to existentialism's focus on experience and absurdity, to the more optimistic outlook of humanism; and in a lot of the more progressive outlooks with which I am most familiar, it is not much of a problem. However, far from being something we have to come to terms with - some affront to our existence which we must learn to bear - I see this as the best situation in which we could find ourselves.

Consider for a moment that at some point, through some divine revelation, deep philosophical insight or profound scientific discovery, we had found the actual meaning of life, and understood our true reason for existing. I really, really don't like the thought of this. Why?

Because it would mean we have a purpose.

Think of what that would entail: we would know what it is we are supposed to do. Yes, we may rid ourselves of all that existential angst, the endless pondering over the point of it all, but at what cost? It doesn't matter what this meaning actually is: to know that there is one would be a terrible, crushing blow to our sense of identity, as well as a threat to the delicately balanced social structure we have built. We'd know exactly what we were here for, what it is that some undefined being/process had intended for us, and that if we did not fulfil said task, we were failing in our mission as human beings. Irrespective of what we, as intelligent creatures, think we ought to be doing, we would have irrevocable knowledge that this does, or does not, fit in with the plan.

That, I think, would certainly take a lot of the pleasure out of life; and it's not simply for the joy of the chase, the pleasure we take in pondering these deep questions - rather that when we are free to do as we think best, we know that we are doing things our own way, for our own reasons, for better or worse. The way I see it, making the most of our brief existence, and striving to improve ourselves and our environment, is an essential part of what it means to have a fulfilling life, and I would hate to find out that this is somehow contrary to some pre-determined intention (conversely, I'd also be disappointed to know that that is the plan, again because then we are reduced merely to following instructions, not living for ourselves). Life would basically be reduced to a case of "that's what it's all about then - better get on with it".

The need to be free of any particular plan is all the more apparent when considering some of the alternatives to the absence of a meaning. The Christian tradition, for example, posits that our meaning is for the glory of God, that we are his creations and are here pretty much for his amusement and adoration - I've written before about how I'm not too happy with this vision of reality, but my point here is that if indeed there is such a meaning, it would surely be a horrific thing to know. It would mean that all our efforts are pretty much for nothing, other than as a service to some incomprehensible and ineffable being. The Judeo-Christian tradition is perhaps a rather easy target in this regard, but the same applies to less dictatorial religious viewpoints which stress personal enlightenment, revelation and one-ness - such as Buddhism, Zen or Hinduism: if that really is the meaning, then our efforts must presumably be focused on it, and any other endeavours (such as art, science, exploration, love) are ultimately a waste of time.

This brings me to my central objection to any externally imposed meaning of life: that without one, we have done rather a lot of awesome stuff. The past few thousand years have seen us advance astronomically (literally) from our humble hunter-gatherer origins, becoming capable of sublime works of art and continually going beyond our previous capabilities, reaching for the stars, and gaining a greater understanding of ourselves. Certainly, some of this has been tied into mistaken ideas of what it's all about (religious architecture and music can't be ignored), or driven directly by the desire to find out (after all, "why?" is the primary scientific question) - but I'm sure that a lot of our achievements come directly from this sense of freedom, this ability to choose our goals for ourselves, that comes of not having any set direction.

My point is that we need not burden ourselves with searching for a meaning of life, nor should we be compelled to assign "bettering ourselves", "pursuit of knowledge", "helping our fellow human" or other noble goals as our 'purpose' in lieu of a definitive answer - there simply is no meaning. We must embrace this, since instead we get to decide our own fate, to do what we think is right purely because it is us that it affects: to be the architects of our own future and be judges of ourselves, as a species, on our own merits for our own sake. I'd say we've done pretty well at that so far.

A sparkling conversation


I just got back from 'Richard Dawkins and Susan Blackmore In Conversation', a British Humanist Association event, held just over the road in the University where I work (in association with the university's atheist society). It was, not surprisingly, amazing. The conversation was fascinating, covering a diverse range of topics from memes (of course) to the nature of religious belief. There was some very interesting discussion of Zahavi's handicap principle, group/kin selection and the mechanics of gene replication - so I'm very glad that I've recently been ploughing through The Selfish Gene, in that I managed to understand more of what was going on. There was also some very interesting debate about consciousness, which I'm aware has been the focus of a lot of Blackmore's recent work (she gave a talk at Bristol on the subject a year or so ago, which is still pretty much all I know on this); aside from the difficulty of defining what consciousness actually is, they considered whether consciousness is necessary for the kind of intelligent things we get up to, and whether we are indeed actually conscious (note to self: read her books).

I loved the format of the event - after a concise introduction from BHA chief Andrew Copson (we all giggled when he said "hashtag": still not sure why), basically the two of them had a nice friendly chat about their work and stuff in general, rambling from topic to thrilling topic; followed by a more interactive session where the audience joined in with some very interesting questions.

These questions eventually got around to the subject of free will - as they tend to do in such situations - specifically with the question of whether they believed they had free will. This was one of the (many) highlights of the evening, as it's something I've recently become pretty interested in (both through debates with religious people who claim that we do possess it, and that it is proof of God's existence; and because I'm currently reading Pinker's Blank Slate, which contains some very thought-provoking perspectives). On this subject Dawkins responded with his customary Hitchens quote - "I have no choice!"; but Blackmore's answer was quite unexpected, a position I was not previously aware of... though one relating superficially to some things I'd previously been pondering.

Her opinion on the matter, briefly, is that she does not have free will; but unlike many other scientists/philosophers (to whom she has put the question) she does not feel the need to live as if she did. Rather, she accepts that things are not the consequences of her conscious exercising of free will, but the result of this person that she is doing the things that it will do - and whatever happens, happens. Though this seems a little strange at first, I can see how it can be a liberating perspective: she is able to look upon herself in a more detached way, and take a step back from the immediate first-person experience to consider how this self of hers is reacting. In stressful situations, for example, she describes how it becomes a matter of intellectual curiosity to sit back and watch how she will cope.

It's interesting how this also allows more compassion and sympathy to be felt for others: since they are no longer to be viewed as purely free and autonomous consciousnesses acting in accordance with their independently existing will, but as beings influenced by their genes, their environment, and their experiences, it is easier to understand their motivations and recognise the humanity in them (again, as Pinker would say, the fact that people are influenced by genes, and whether or not they are in any sense free, makes no difference in terms of blame or intent - but does allow us to understand them better). This does not detract from what it means to be human, the value of freedom, or the necessity in making sensible decisions - we can still do the things we do, and we can still have valid reasons for doing them, it just means that there isn't necessarily that spontaneous "me" in constant control. Like I said, this is a fascinating perspective, not one I'd thought about much before, and certainly something I hope to elaborate on in future.

So with that and the general discussion about genes and memes, a mention of temes/thremes (the third, machine-based replicators, which it was good to hear of after having been present in yet another lecture in which she was developing the idea), the potential of Dawkins and Blackmore using their memetic knowledge to design a new religion, and a discussion of how best to counter religion's resilience to critical thought, the evening definitely contained a lot to think about. I'll leave it there for now, lest I ramble incoherently on and fail even more completely in my attempt not to brag about how lucky I am to have attended; but hopefully I'll return to many of the issues raised when I've thought a bit more about it all.