Sunday, 18 March 2012

Nicholas Agar responds to students' blogposts

I have to say that I really enjoyed reading these posts. When you write a book like Humanity’s End you hope that smart people will read it and engage with its themes. That’s obviously happened here. Congratulations to Chris on running what seems to have been a very successful course and thanks for inviting me to respond to some of your points.
Nicholas Agar


Barend de Rooij
Barend, you challenge some of my criticisms of Nick Bostrom and Toby Ord. Bostrom and Ord think that some (but not all) arguments against human enhancement display a bias toward the status quo. Novel enhancements are opposed just because they’re novel. You think that I retain an irrational bias toward the status quo. In a way, I’m pleading guilty. But I’m going to say that it’s a rational preference rather than an irrational bias.
Bostrom and Ord leave open the possibility of a rational preference for the status quo in their paper. In their discussion of the famous experiment in which people chose to retain whichever of the chocolate bar or the mug they had first received, they allow that people might have formed an emotional bond with the original item. Emotional bonds are a big part of being human. You don’t automatically dump your romantic partner because you receive an offer from someone who is objectively better (by common consensus s/he is more attractive, more intelligent, wittier, has more Facebook friends …).
I think we have this kind of connection with aspects of ourselves that radical enhancement would do away with.  This attachment isn’t fully described in Humanity’s End.  But, (shameless self-promotion coming …) look for more on it in my next book….
Caspar Plomp
Caspar, I agree with almost everything you say.  De Grey has a somewhat simplistic view of how SENS will work.  According to him, therapies that turn old people into young people will sharply reduce medical costs.  Rejuvenated people will stay out of hospital beds.  Instead they’ll generate wealth to pay for SENS.  I really like your suggestion that SENS will involve substantial ongoing maintenance costs.  SENS patients will be like diabetics requiring (very expensive) daily injections.  I think that there’s a general problem of too much optimism from would-be radical enhancers.  They focus too much on ideal outcomes (compare the possibility that SENS rejuvenates and therefore dramatically reduces health costs with the outcome sought by planners of the Iraq war – the overthrow of Saddam Hussein followed by the prompt establishment of a stable democracy).  There’s not enough thought about sub-optimal outcomes (SENS works imperfectly and remains very expensive and socially divisive; the Iraqi people aren’t terrifically happy about being invaded and occupied.)

Laura Pierik
Laura, like you, I’m not keen on the boredom argument. But I do think that there’s something in the fear line. It’s really a prediction about how people who’ve done all they can to reduce to zero the risk of death from internal causes will feel about external threats. I wonder how many of them will think as you do – that it’s the risk of dying that makes life fun. Anyone who thinks this way probably isn’t making plans to celebrate her 1,000th birthday! I find the risk of death from (careful) car driving acceptable, but won’t negligibly senescent people view driving pretty much as we now view medieval jousting – an activity that was once viewed as safe enough but now seems hideously reckless? Wouldn’t a kind of Darwinian selection tend to eliminate risk-takers from the population of the negligibly senescing, leaving only the cautious types? This isn’t to say that you’d be mad to opt for negligible senescence. But it would be a life very different from the kinds of lives that we currently enjoy. It’s something for societies to factor in when they consider spending the huge sums of money that SENS requires.
I like your discussion of social inequality.  This is something that de Grey tends to dismiss.  New therapies need to be tested before they’re ready for de Grey’s millionaire benefactors.  Who will test them?  Here’s something I wrote for Slate on the problem of finding willing human guinea pigs for SENS.  
In 2011 researchers at the Mayo Clinic in Florida discovered that eliminating senescent cells delays aging. Both mice in the picture above are of the same age; the one on the right had its senescent cells removed (image: Jan M. van Deursen)

Georgina Kuipers
Georgina, nice account of the process that Kurzweil thinks will take us to super-intelligence.  I think he might want to defend the falsifiability of the law of accelerating returns.  His books contain (tediously) many examples of technologies whose improvement has tracked an exponential path.
I meant my discussion of the possibility of holism about the mind to be one way in which Kurzweil’s timetable could be thrown out – according to him, human super-intelligence is imminent. I was thinking of some ways in which it could take (much) longer than Kurzweil suspects. I take it that holists can’t just assert that atomists leave stuff out. Holism might gain credibility at some point in the future when we have a good account of all of the human mind’s parts – its neurons and neuronal maps – and we find that there’s a whole list of mental phenomena about which we remain clueless. I like the example of George Lucas’s atomistic account of the Force. That’s certainly another way in which the task of describing the human brain well enough to make a synthetic mind might be a harder task than Kurzweil anticipates.

Lars Been 
Hi, Lars.  I’m an academic philosopher so it’s not surprising that I have lots of philosophical beliefs.  But there are relatively few (none?) of them I’d bet my life on.  For example, I’m a strong believer in moral consequentialism but I wouldn’t challenge a philosophical super-intelligence (something like Douglas Adams’ Deep Thought computer) to immediately terminate me if it turned out that consequentialism wasn’t the correct moral theory.
Uploading asks us to stake our lives on the truth of a philosophical proposition – that every aspect of our minds that we value can be realized by a machine.  You might be quite confident about the possibility of computers capable of intentionality and consciousness but still be justifiably cautious about transferring your mind into a computer.  A skeptic about thinking machines would be as unconvinced by the computer learning that you discuss as s/he would be by the message “I have conscious beliefs about the world” displayed on a computer monitor.
You’re right that this won’t bother some people who will just go ahead and upload.  But then there will always be people who do prudentially irrational (i.e. silly) things.  Please don’t do it, Lars!

Post written by Nicholas Agar
Author of Humanity's End (The MIT Press, 2010)
For more information about Nicholas Agar and his writings: www.nicholasagar.com

Tuesday, 13 March 2012

Opposing Radical Enhancement & the Status Quo Bias

Lifespans of over a thousand years, enhanced levels of perception, never-failing memories and IQs that would make Einstein look like a primate—the apparent benefits of radical enhancement are great, and they are many. With the dawn of new enhancement technologies that enable us to augment an ever greater number of bodily functions, several scholars have expressed their hope and belief that we will relatively soon be able to artificially improve our intellectual, physical and psychological capacities such that they far exceed the capacities we naturally possess. Aubrey de Grey, as we have seen, has argued that aging will soon be considered a disease that can be cured. Ray Kurzweil, an expert in artificial intelligence, has predicted that technology will improve at so fast a rate that it is only a matter of time before we leave our biological bodies behind entirely and upload ourselves into infinitely intelligent machines. Yet not everyone is as keen as De Grey and Kurzweil to embrace radical enhancement technologies—Nicholas Agar, author of Humanity’s End: Why We Should Reject Radical Enhancement, argues that as soon as we opt to radically enhance ourselves, our relation to the world transforms in such a way that we might cease to be human, since our experiences and the value we place on them will have transformed beyond recognition (1). Whatever the radically enhanced humans experience and value, he argues, it is not what we experience and value, and therefore not worth pursuing for us regular humans. Doesn’t Agar, however, by this reasoning betray an irrational disposition towards maintaining the status quo?
Amalgam Comics' Super Soldier (1996). Injected with a 'super soldier' formula and exposed to solar radiation, he holds powers and abilities far beyond those of mortals.

The philosopher Nick Bostrom makes just this case by asserting that most opponents of radical enhancement suffer from a so-called “status quo bias.” In his view, bioconservatives and other sceptics of radical enhancement are only against it because they are irrationally disposed to favour the status quo. As such, they make the error of favouring one alternative over another simply because it preserves things as they are now. Many sceptics of radical enhancement, Bostrom implies, blindly want to maintain what we have now in the face of seemingly better alternatives, and are therefore being irrational in the arguments they make. Hoping to expose this crippling fallacy, he and his colleague Toby Ord designed what they call the reversal test:
“Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias.” (2)
In other words, they make the case that those opponents of radical enhancement who are also against intentionally diminishing our psychological and physical capacities are suspected of suffering from a status quo bias if they cannot prove that these capacities are currently at precisely the right level. As such, they not only attack the arguments of nearly every opponent of radical enhancement, but they also burden these opponents with the task of proving that radical enhancement cannot improve the status quo.
Since there are few opponents of radical enhancement who would consider diminishing our psychological and physical capacities a good thing, they should, according to the reversal test, prove why these capacities are now at an optimal level. We might, however, question whether Bostrom provides the persons who take his test with a fair choice. He sketches a picture of radical enhancement as directly opposed to what we might call ‘radical diminishment’ and frames this opposition in such a way that radical diminishment corresponds with something that is morally bad, something that no one in his right mind would choose. According to Bostrom, since no one would want to intentionally diminish his mental or physical capacities, everyone should opt for the other alternative, which, presumably, is located on the other end of the moral spectrum and should be labelled “good”. If, given the choice between these two alternatives, someone refuses to choose either one of them, he or she implicitly chooses to maintain the status quo and should prove why radical enhancement cannot improve it.
It seems to me, however, that given the choice between two alternatives, one of which is demonstrably bad, no one would choose the obviously bad option—but this is not tantamount to saying that the other option is at all desirable. Even though radical enhancement and radical diminishment stand in direct opposition to each other and radical diminishment is demonstrably bad, this does not mean that radical enhancement is automatically good. Additionally, someone who is against radical enhancement and also against radical diminishment does not implicitly claim that our psychological and physical capacities are precisely at the right level now. Someone may perfectly well be against radical enhancement without saying that we are optimal as we are, for example by claiming that it is not up to us to enhance ourselves, or, like Agar, by claiming that by trying to enhance our current position we may cause it to cease to be our position (since we may cease to be human). Neither of these examples amounts to saying that our current cognitive and physical capacities are optimal, yet I am quite sure that neither Agar nor someone who claims that it is not up to us to enhance ourselves would want to artificially diminish these capacities. Certainly, they do not have the burden of proving why radical enhancement cannot improve our current position—the burden of proving why we might benefit from radical enhancement remains with its proponents. This is not to say, however, that opponents like Agar should not respond intelligently to arguments in favour of radical enhancement.
Bostrom’s reversal test, then, is a clever rhetorical device targeted specifically at opponents of radical enhancement: it presents them with an unfair choice between two opposing alternatives, one of which is demonstrably bad. Since no one would choose the demonstrably bad option, Bostrom implies, the other alternative is better and the obvious way to go. If, however, someone is also against this better option, he or she implicitly opts for maintaining the status quo and as such should prove why the other alternative is not better. In reality there are more options. One need not be either for radical enhancement, for maintaining the status quo, or for radical diminishment. Someone may be discontent with his or her mental capacities, against radical enhancement and against radical diminishment without being irrational. The reversal test does not allow for this, and as such does not hold—despite its clever design.
Sources:
1. Nicholas Agar, Humanity’s End, why we should reject radical enhancement (Cambridge, MA: The MIT Press, 2010).
2. Nick Bostrom and Toby Ord, “The Reversal Test: Eliminating Status Quo Bias in Applied Ethics,” Ethics 116 (2006): pp. 664-65. Cited in Agar, Humanity’s End, p. 136.


Written by Barend de Rooij
2nd year student LUC

The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester. 

Negligible Senescence and its Implications


  • “Is SENS following Gandhi?
  • First they ignore you (2000-2002).
  • Then they laugh at you (2002-2004).
  • Then they oppose you (2005-present).
  • Then they say they were always with you.”

Aubrey de Grey
Such was the confidence that Aubrey de Grey, a Cambridge gerontologist, concisely displayed on a PowerPoint slide during a 2006 TED talk, in which he mapped out his Strategies for Engineered Negligible Senescence (SENS), a project he developed and of which he is Chief Science Officer. The project, which at first sight appears fantastic in several ways, treats ageing as a disease that can, should and will be overcome, the mere prerequisite being that society invests enough. What is more, de Grey asserted elsewhere that due to Longevity Escape Velocity (LEV), “The first 1000-year-old is probably less than 20 years younger than the first 150-year-old.”(1) In other words, those alive today will come to witness a point in time where engineering (a term he prefers over medical research) is so advanced that each year that passes adds more than a year of remaining life expectancy. Because of this ongoing gain, ageing will cease to be a cause of death, leaving only fatal accidents, murder and suicide as barriers to otherwise indefinite life spans. In what follows, I examine the therapies for negligible senescence that SENS hopes to offer, and on that basis I ask what consequences such therapies might have for a society subjected to SENS as the organisation in control of them.
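The arithmetic behind LEV is worth making explicit: so long as therapies add more than one year of remaining life expectancy per calendar year, that expectancy never runs out. A minimal sketch, with illustrative rates rather than de Grey’s own figures:

```python
# Minimal illustration of Longevity Escape Velocity (LEV).
# The gain-per-year figures below are illustrative assumptions,
# not de Grey's own estimates.

def remaining_expectancy(initial_years: float, gain_per_year: float,
                         horizon: int) -> list[float]:
    """Track remaining life expectancy as calendar years pass."""
    remaining = initial_years
    trajectory = []
    for _ in range(horizon):
        remaining -= 1.0            # one calendar year elapses
        remaining += gain_per_year  # therapies add expected years
        trajectory.append(round(remaining, 1))
    return trajectory

# Below escape velocity (gain < 1): expectancy still runs out, just later.
print(remaining_expectancy(30, 0.5, 5))  # [29.5, 29.0, 28.5, 28.0, 27.5]
# At escape velocity or above (gain >= 1): expectancy never declines.
print(remaining_expectancy(30, 1.2, 5))  # [30.2, 30.4, 30.6, 30.8, 31.0]
```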

Try to imagine how we might arrive in a world, or even just one privileged community for that matter, where the fruits of years of medical engineering would be able to stop anyone from ageing. How, technically, would we accomplish LEV and go on from there to fully stop ageing? Despite all de Grey's bold confidence, things may turn out to be rather more difficult than demanding a few round numbers for funding – $100M a year, de Grey assures us, would give a 50% chance of therapies being available by 2030 (2).

De Grey intends to spend this funding on combating the 'seven deadly things' that cause ageing, one of which is cancer-causing mutations to our DNA or to the structure of the proteins that regulate gene expression. Although he himself acknowledges that the SENS response to cancer is extremely speculative, that acknowledgement does not remove the flaws in de Grey's strategy of combating cancer by preventing any single tumour from becoming fatal; Nicholas Agar, in his chapter on SENS in Humanity's End, points out that clusters of tumours – an inevitability, since a negligibly senescent person's chance of growing them approaches 1 over a long enough life – can also be fatal (3). The other six of the seven hurdles SENS wants to clear seem equally ambitious: the loss of cells that perform important tasks, the accumulation of the wrong kinds of cells in some parts of our bodies, mutations to mitochondrial DNA, the accumulation of various kinds of waste inside and outside cells (these count as two of the deadly things), and, lastly, extracellular crosslinks as a special kind of extracellular waste. De Grey wants to revolutionise healthcare by transforming research into these respective areas, which now purports to offer sick people only a few more, allegedly miserable, years to live, into engineering that will stop these causes from influencing one's vitality altogether.

That is, SENS is confident there are no more than seven causes of ageing – it argues on its website that since scientists have added nothing to the list for twenty years, it must be complete (4). But the website's own chronology of how the list came into existence refutes this argument: between the first cause discovered (extracellular junk, 1907) and the second (cell loss and cell atrophy, 1955) lay 48 years – why could there not remain one, ten, or a million other causes of ageing undiscovered? Now, in Agar's words, “Can [de Grey] do it?” (5). Reflecting on some serious criticism of SENS in several journals, Agar draws a comparison with the similar sentiments of disbelief that greeted John F. Kennedy's commitment to send humans to the moon. Given the groundbreaking results required, however, it may be more apt to say that SENS will have to go to the moon seven times.
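Agar’s tumour-cluster point is, at bottom, one about compounding probability: even a small yearly chance of developing a fatal tumour approaches certainty over a thousand-year life. A quick sketch, with an invented per-year risk used purely for illustration:

```python
# Cumulative probability of developing at least one tumour, assuming an
# invented (purely illustrative) 1% independent risk per year of life.
p_per_year = 0.01

for years in (80, 200, 500, 1000):
    p_at_least_one = 1 - (1 - p_per_year) ** years
    print(f"{years:4d} years: P(at least one tumour) = {p_at_least_one:.4f}")

# 80 years: ~0.55 -- but 1000 years: ~0.99996, effectively certain.
```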
Robot nurse Riba, designed to aid Japan's increasing elderly population

This hints at another aspect that merits more consideration than SENS grants it: the finances. De Grey may or may not be right in asserting that developing SENS will be much less costly than healthcare for the current elderly, but what is the value of this comparison (6)? For one, there will certainly be people who do not wish, or cannot afford, to be part of SENS's project; they will still require regular healthcare. What is more, a significant proportion of the world population will, as the likelihood of contracting certain diseases approaches 1 after a few hundred years of life, come to require perhaps daily medical care of a far more complex nature than current healthcare provides, and the costs of delivering such treatment and keeping it available will grow accordingly. While mass production will make individual treatments more affordable – an argument provided by SENS's website – the scale at which negligibly senescent persons will need them appears neglected; it may be too easy to claim, as SENS's website does, that a society with SENS “is likely to be far cheaper” than current healthcare expenses (7). And if indeed the reverse is true, and healthcare costs will be far greater than they are now, where will funding come from – especially if, after having tackled the seven deadly things, SENS has still not achieved negligible senescence because there remain other, unknown causes of ageing? This is a speculative but important question; since SENS claims to be committed to improving people's quality of life, it would have to ensure that redirecting cash flows on this scale would not adversely affect that quality.

Without doubt it can be said that SENS's therapies, if at some point they come into existence, would have their share of potential patients; indeed, with the availability of such therapies the cost of dying becomes so much greater that people might become far more anxious to do everything possible to remain physically healthy (note that such anxiety may infringe on one's mental health, something SENS does not take into account). Medical check-ups and treatment against a thousand ailments might become part of the negligibly senescent person's daily routine. What emerges here is the beginning of a description of a healthcare industry that will physically and mentally dominate the lives of those who have decided they want to live into their 1,000s. Important questions arise: who, as the bearers of power in such a healthcare system, will decide over the lives of these people? Who funds the engineering, and what is so inferior to this project that money can be withdrawn from it to feed the ever-more-needy healthcare industry? Will powerful figures in the future healthcare industry (or industries, for that matter) become the effective political heads of communities of negligibly senescent people? De Grey, disappointingly, replaces current biogerontologists with “visionary philanthropists” and assumes that's that (8). One cannot help but wonder what could happen if SENS, as the organisation that might come to control the minds and bodies of all those negligibly senescent, were to fall into the wrong hands. It appears, however, that de Grey himself is not remotely concerned with, for example, a democratic system that would replace the malevolent with the genuinely philanthropic – he assumes that he himself knows best which 'visionary philanthropists' will best promote SENS. Thus, objectionably, we would simply have to trust SENS, our lives being in its hands.

Through this exercise of imagination, we have established that SENS, if it were to develop as promised by de Grey, would have profound physical and mental influence over those who undergo its therapies, and would possibly require a sizeable proportion of the economy's cash flows to be directed towards it. All that appears left for those who want to become negligibly senescent, then, is to trust that SENS will not abuse its power by, for example, setting exorbitant prices on the treatments required to keep alive those for whom the cost of dying is even higher (for after a certain age SENS will make all the difference between a 1,000-year-old going to bed with the knowledge that the next day he will still be able to do the toughest physical work and a 1,000-year-old who will perish quickly when his body does not receive its necessary treatments). If SENS is going to be fully developed and treatments made available, let us hope that people will indeed come to say that SENS followed Gandhi.

Sources:
1   Nicholas Agar, Humanity's End: Why We Should Reject Radical Enhancement (Cambridge, MA: The MIT Press, 2010), 102.
2   Aubrey de Grey, “TED 2006 Conference Presentation: Aubrey de Grey,” video posted 2006, <http://video.google.com/videoplay?docid=3847943059984264388> (accessed 11 December 2011).
3   Agar, Humanity's End, 95.
4   SENS Foundation, “Research Themes,” <http://sens.org/sens-research/research-themes> (accessed 12 December 2011).
5   Agar, Humanity's End, 102.
6   De Grey, “TED 2006 Conference Presentation: Aubrey de Grey.”
7   SENS Foundation, “FAQ,” <http://sens.org/sens-research/faq> (accessed 11 December 2011).
8   De Grey, “TED 2006 Conference Presentation: Aubrey de Grey.”


Written by Caspar Plomp
2nd year student LUC

The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester. 

Monday, 12 March 2012

And they lived happily forever after?

There are many debates going on about whether it will one day be possible for humans to live ‘forever’. Sure, this wouldn’t mean being immortal, and probably in the end all of us would still die. But it would mean that we would simply no longer die of natural causes; the only way to die would be by accident. So while this debate seems interesting and is getting more and more attention nowadays, the real debate should be about whether we would actually want to live forever. For if we as a society feel that this is something to be desired, I’m sure that with billions of dollars of investment we would come pretty close to living forever within centuries, if not decades. So the real question is: do we want to live forever? In his book Humanity’s End Nicholas Agar focuses a lot on this question, as he acknowledges that his problem with transhumanism lies not so much with the science we would have to develop as with the ethics behind it. He makes several arguments to show us why it is undesirable for humans to live forever:

Boredom
The first argument is one originally brought up by Bernard Williams: boredom. He argues that once we live long enough, our lives will stagnate. Since there will be only a limited number of new experiences, soon enough we will be completely bored with the things we find entertaining and challenging right now. This seems a reasonable argument indeed; can we really expect ourselves to find enough things to do for thousands of years? However, there are several things that Williams overlooks when making this argument. First of all, people change. While Williams assumes that in a thousand years we will still be the same person we are right now, reality shows that people already change a lot in a mere forty or fifty years; I think we can’t even imagine how people will develop and change over centuries! This means that there will be new things we find entertaining, and we will seek new goals in our lives. Along with that we must realize that even if we are able to live for centuries, our lives will still be framed by birth and death; there is no way people will ever become immortal. So humans will always remain ‘obsessed’ with surviving, and although the timeframe might change, our perspectives on life will remain the same. Only if we were to become immortal, and the fear of dying were completely gone, would there be a drastic change in our attitude to life.

Even more important is to remember that if we really were to become bored with our lives, there is still a way out. People can always choose to commit suicide if they find that their life has become meaningless to them. And the good thing here is how long it will have taken these people to become so bored with the activities they used to love. Quite a few people nowadays say that they wish they could have done more in their lives; but those choosing to commit suicide after centuries of life will actually have done everything they could have ever dreamed of: they will have had the full human experience. Only once they are completely done with all of it will they die. So it seems that boredom shouldn’t be seen as an argument against living forever at all.

Fear
Another argument Agar makes, linked to the boredom argument, is fear. Once we have the possibility of living thousands of years, an accident would seem much more horrible to those humans than it does to us now: we would lose only a few decades of life, while they would lose centuries! Thus Agar argues that risks that seem reasonable to us will become far too dangerous for those humans: they will no longer dare to drive cars, they will no longer board airplanes, and so on. And so Agar thinks that these humans will retreat from the world: they will stay within the safety of their homes and make sure that there is no chance of dying. However, I think this is very unlikely to happen in reality, for it is exactly this risk of dying that makes life so exciting. It makes us want to achieve things right now. And together with that it must be said that most of society isn’t too obsessed with dying; perhaps a small group of people contemplate dying and actually make rational choices to keep the chance of having an accident as small as possible, but most people don’t make these rational choices. For example, many young people are pretty damn good at destroying their bodies with alcohol and drugs; they only think about the short term (having an amazing time with friends) and not about the long term (all sorts of diseases, and the chance of having an accident because of being drunk or stoned). It seems that it isn’t really in our human nature to make sure we live as long as possible. So it is very unlikely that this would change once we get to live a thousand years on average.

Tokyo train passengers
Social Inequality
There is, however, one ethical problem that will probably cause serious trouble anyway: social inequality. Once the science for Longevity Escape Velocity (the point at which medicine adds life expectancy faster than we age) has been created, it seems reasonable to expect that at first only the richest and most successful people in the world will get access to the available treatment. This will mean that suddenly there is a huge gap between the rich, who might live a thousand years, and the poor, who will still only live for around eighty years. So where social inequality can now cause at most a twenty- to forty-year difference in life expectancy, this gap will increase to over nine hundred years! That will have monumental consequences for society, since it will give all power to the people with access to the treatment. For example, it seems reasonable to say that people who get the treatment will no longer be willing to fight in armies, so others will be needed to fight wars for states; many poor people who would not normally have access to the treatment may be eager to join up in exchange for it, since this is their only chance of a longer life. The same goes for other jobs that are far too dangerous for the people who have undergone the treatment.

Conclusion
So it seems that although there are no direct objections against living forever (if boredom and fear are the only real objections, I sure as hell wouldn’t mind having the treatment), it might actually have too much of an impact on society as a whole. Thus it might be undesirable to invest in a treatment that will increase the inequality between people on such a gigantic scale that we might perhaps even start speaking of two different sorts of humans. It seems that only if all human beings were given the treatment at the same time would it be desirable to have the treatment at all. But as we can see with the treatment of illnesses like AIDS, this is easier said than done, and so it is highly unlikely that people will ever make sure that everyone, as equals, gets the same treatment at the same time.

Written by Laura Pierik
2nd year student LUC

The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester.

Ray Kurzweil: The Technologist

Written by Georgina Kuipers
2nd year student LUC

This semester’s masterclass, as you may or may not know, focuses on transhumanism. Transhumanism is a movement that believes in and supports the technological advancement of humankind, e.g. decreasing the effects of ageing or creating ‘superhumans’ that have computers instead of brains. Although this might sound quite fictional, the thinkers featured in Nicholas Agar’s book Humanity’s End: Why We Should Reject Radical Enhancement actually come up with quite believable technological arguments to explain that we may be reaching this ‘superhuman’ moment faster than we think. Besides this quantitative side, there is also much debate about the ethics of ‘radical enhancement,’ which Agar himself argues strongly against.
The Singularity
The ‘superhuman’ moment is what Ray Kurzweil refers to as the Singularity. It is a specific human condition; it is the end of humanity as we know it, but definitely not the end of development. It is the moment that computer technology is better and faster than our simple human brains. Kurzweil himself calls it “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” (Agar 35) For Kurzweil, reaching this moment is nothing but a quantitative problem; he is a scientific determinist, and thus sure that we will reach the Singularity.

His best argument for this is the law of accelerating returns, a generalisation of Moore’s law. I will save you most of the mathematics and technological arguments – basically it means that technology develops exponentially rather than linearly, so the amount of change grows bigger with each increase. I actually find this part of the argument quite convincing; I often wonder at the rapidity of technological developments in our day and age, especially in contrast to what history teaches us about technological developments in previous centuries.

So how does this law of accelerating returns help Kurzweil? As he likes things quantitative, he reasons that our brains can do 10^19 calculations per second. Roadrunner, a computer produced by IBM in 2008, can do 10^15 calculations per second, and its creators bragged: “a physics problem that Roadrunner could crack within a week would have taken the world’s most powerful computer of 1998 twenty years to solve.” Of course, this Roadrunner is insanely expensive, and so, apart from allowing a couple more years to further develop computational power, Kurzweil suggests that affordable, 1000-dollar machines of comparable power should be available to the public around 2020. To be precise, Kurzweil predicts that machines will match human intelligence by 2029, with the Singularity itself to follow. We in the masterclass have a running joke that explains why all these technological thinkers predict the ‘superhuman’ age to arrive so soon: they just want to be alive when it happens.
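The arithmetic behind these dates is easy to replay. Here is a minimal sketch using the figures above; the one-year doubling time is an illustrative assumption rather than Kurzweil’s exact parameter:

```python
import math

# Figures from this post: Roadrunner managed ~1e15 calculations per second
# in 2008, while the human brain is estimated at ~1e19. How many doublings
# of computing power separate the two?
brain_cps = 1e19
roadrunner_cps = 1e15

doublings = math.log2(brain_cps / roadrunner_cps)  # log2(1e4) ~= 13.3

# Assume (illustratively) that computing power doubles every year.
doubling_time_years = 1.0
year_reached = 2008 + doublings * doubling_time_years
print(f"{doublings:.1f} doublings -> brain-scale machines around {year_reached:.0f}")
# ~13.3 doublings -> around 2021, which is why Kurzweil's dates cluster in
# the 2020s; a slower doubling time pushes the date out quickly.
```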

My personal problem with this law of accelerating returns is that it predicts the future by looking at the past, which makes it a non-falsifiable argument; we will not know it is true until it happens, and until it happens, everyone can suggest that it will happen at some point. Although Agar argues that Kurzweil is misinterpreting some data, his basic point is that we do not know the endpoint or the totality of the human brain – so all this quantitative thinking remains a guesstimate.

Stills from 'Transcendent Man' (2009), a documentary about the life and ideas of Ray Kurzweil

Atomism/holism
Some of you may be reading the above and think: “[t]here’s more to human intelligence than computational power!” (Agar 39) Interestingly, this is exactly what Agar brings up. He uses the difference between atomism and holism to strengthen his point, though perhaps not very effectively. First, let us look at these two ways of viewing the world. Atomism is the idea that everything can be explained (and re-created) by exploring the parts it is made up of. Holism, in contrast, holds that the whole is greater than the sum of its parts. Kurzweil is atomistic in his view of the human brain; he thinks that we can completely duplicate it using technology. Should our knowledge of the brain lead to a non-working copy, that would mean that we simply have not looked at the appropriate “lower-level goings-on” (Agar 52) – perhaps duplicating the brain from the quantum level, or something even below that, would produce a perfect replica.

One of the problems with these two mind-sets is that they are hardly compatible in discussion; the holist will keep arguing there is something in the entirety of the forest that makes it a forest (so it is more than just the trees), while the atomist will keep suggesting that perhaps they should look closer at the fungi and re-explore the trees at the quantum level to find what makes it a forest. Chris actually mentioned an even better example in Star Wars terms; he argued the Force was holistic in the original trilogy (IV, V and VI), where it was just an all-encompassing power, but that George Lucas altered the Force into an atomistic concept when he shot the later trilogy (I, II, and III); he had been reading Roger Penrose’s theory that consciousness resides inside cells, and had thus decided that the ‘force’ quality was simply located at a lower level, apparent inside a cell.

The other problem I have is specific to Kurzweil’s argument; as mentioned in the introduction, a big problem for Agar is the ethical consequences of radical enhancement to humanity. He thinks that if we duplicate our brain, and actually start replacing it with a computer (which could work even faster than our brains), we will have lost our humanity. Kurzweil argues against this, but he uses a rather strange logic for an atomist: referring to current technology, since we are already able to augment our brains with electronic components, he says:

“If we regard a human modified with technology as no longer human, where would we draw the defining line? Is a human with a bionic heart still human? … How about someone with ten nanobots in his brain? How about 500 million nanobots? Should we establish a boundary at 650 million nanobots: under that, you’re still human, and over that, you’re posthuman?” (Agar 53)

As you can see from this sarcastic line of thought, Kurzweil seems to be arguing that it is the whole of the human body (and the human brain) that constitutes humanity; because we started out as humans, and will only gradually implement technological changes, we will remain human. As I hope you grasp by now, this does not match his atomistic and quantitative portrayal of technological advancements to the human brain, and it therefore greatly undermines his claim that no ethical boundaries are being crossed.

So, in conclusion, I hope you have found my post interesting; the following posts will probably focus more on the ethical side of transhumanism. I can assure you that there are still many mind-boggling ideas to follow.

PS: By the way, I can definitely recommend that you all read (at least snippets of) Kurzweil’s Q&A on the Singularity (http://www.singularity.com/qanda.html), as he provides some interesting ethical responses on, for instance, living for hundreds of years and humans taking over the universe.


The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester.

Uploading: desirable or inevitable?


Cyber Future by Benedict Campbell
Technology is changing our world at a dizzying rate. Ideals that seemed impossible at the beginning of the twentieth century are now reality: we can fly to the moon, split up the smallest particles in accelerators, and find all our information on a digital web. How will technology shape our lives in ten years? Will we still be mortal human beings, or will we be able to solve the most destructive problem of our human life, death, and become radically enhanced, negligibly senescent posthumans? This essay discusses the uploading of the human mind, which would radically enhance our intelligence and possibly make us negligibly senescent. Do we really desire to upload ourselves, or is it too dangerous a venture? I will argue that it is not only inevitable but also desirable to be uploaded and upgraded into posthumans. An upload entails the replacement of the whole biological neural network, including the senses, nervous system, and brain, by an artificial or electronic network. This would make our neural network much more vital and much more efficient in transporting energy. And thus human beings would evolve into super-intelligent beings that are, presumably, susceptible to neurological disorders to a much smaller extent.

Why do people want to upload themselves? By uploading ourselves we become super-intelligent beings that will be able to develop technologies that may extend our lives. As super-intelligent beings, we might develop new uploading techniques or new medicines against terminal or life-threatening diseases, but we may also develop, inter alia, social systems or technological devices that make our lives much easier and less stressful. Even the latter will possibly play a role in lengthening our lives. Although I am against systems and devices that make us lazy beings, the vision of a stress-free but stimulating and inspiring world appeals to me greatly. The fact that a stress-free world is an ideal, and thus hardly realizable, should not weaken our will to strive for it.

The previous paragraph delineates why people supposedly want to upload themselves. The reasons given appear, however, to be rather subjective. Yet I would like to argue that most human beings would be in favour of uploading themselves if it could lengthen their lives. All self-conscious people who care about the things they and others do want to live longer and more safely, because the things we can do are all too valuable to give up. In real life we already see that people are trying to make their lives safer and more comfortable. We test all our food, drinks, and other stimulants in order to warn ourselves off unhealthy substances and to encourage ourselves to consume healthier ones. Also, we see that much money is put into medical research and into developing new drugs or techniques that prevent us from dying young and make us live substantially longer.

If the practice of uploading is safe, and if it does not remove my human characteristics, then I will definitely venture the upload. Nevertheless, many people argue that uploading is unsafe, because it takes away our humanity and because uploaded posthumans will not be human-friendly. John Searle, an American philosopher, says that an upload will transform us into robots or computers that only simulate thinking. He says that we will lose our ability to be self-conscious and to understand what we are thinking; thus we will lose our humanity, our ability to reason and to be emotional beings. Nicholas Agar accepts Searle’s contention, following in the footsteps of those who advocate Weak Artificial Intelligence – the view that computers can at most simulate, not genuinely possess, human thought. Although we cannot predict whether an upload would truly be destructive for a human being, Agar says we had better not even give it a try. Agar comes to this conclusion by analogy with Blaise Pascal’s Wager: it is a good trade-off to refuse uploading, because by uploading we could possibly lose everything, and to choose instead to remain human, because from living a fortunate human life we lose nothing.

For me, this wager is all well and good, but it does not bring us any further. Besides, in the future there will in any case be an irrational fool who accepts the offer to venture the upload. This will show us whether the upload is successful, whether it is sensible to upload more human beings, and what should be improved. For me it is therefore more interesting to inquire how and why an upload could be a success. This inquiry brings me to a philosophical discussion of the mind and human consciousness, and I hope that the discussion below will provoke more ideas and debate.

First, however, we need to discuss the criteria a posthuman needs to meet in order to become a functional being. Most importantly, a posthuman needs to have intelligence. This implies that a posthuman must be able to reason, represent knowledge, plan, learn, and communicate. In order to have these intelligent capabilities, a posthuman also needs senses, a nervous network with a brain that can process and memorize information, and ‘motor skills’, the skills to move things by means of muscles. Although it will be technologically difficult to produce a being with an electronic neural network that is linked to the senses and the muscles, I am convinced that it will be possible in the future. Remember that we can already make highly intelligent robots built on microchips and other nanotechnological systems.

When we reach the point of developing a being that is functional and can process information and language, we have developed a conscious being: a being that, in existentialist terms, exists, as it thinks, but has not transcended into a being with essence. We can call this being a pre-reflective cogito, a being that has not yet reflected upon the essence of what it is doing and of what it is thinking. The being is thus not aware of its own conscious being and is, according to John Searle, only simulating thinking.

The conscious pre-reflective cogito is conscious just like a computer. A computer can also transport energy, information, or language when there is a certain sensory input. When we tap on our computer’s keyboard, the computer reacts according to how it is programmed. The computer will transport the information through its microchips and will give a certain sensory output, e.g. by saying a word. A computer can, however, do more. It can compute. “Computations can capture other systems’ abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.” (Chalmers) We can compare this to our use of formulas. By inserting particular information (the information from sensory inputs) into standard formulas, formulas can provide us with new information of which we had not thought before. So computers can compute the consequences of certain acts and can make decisions on the basis of choosing the best consequence. It is thus not only externally synthesized information that goes through a computer, but also internally produced, computed information. This information is always randomly produced, as the sensory inputs from our chaotic world are always random. By storing the information on its memory chip, the computer has learned something.

Two other criteria for intelligence are planning and reasoning. These practices are only possible when a computer or an uploaded being can learn. The computer can reason and plan by means of a scheme composed of bad and good consequences that it has learned in the past. With this scheme, the computer can make trade-off based decisions. If a computer rationally makes a decision, it can also program itself to act upon the decision somewhere in the future, which we call planning. Whether the computer will succeed in acting upon it in the future is still another question.
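The scheme just described is, in effect, what AI researchers would call a utility-based agent: it learns outcome values and chooses by comparing them. A minimal sketch of that idea; the actions and scores are invented purely for illustration:

```python
# Minimal sketch of trade-off based decision making from learned outcomes.
# The actions and their scores below are invented illustrations.

from collections import defaultdict

class ConsequenceLearner:
    """Chooses actions by the average outcome observed for them in the past."""

    def __init__(self):
        self.outcomes = defaultdict(list)   # action -> list of observed scores

    def record(self, action: str, score: float) -> None:
        """Learning: store the consequence of a past action."""
        self.outcomes[action].append(score)

    def decide(self, options: list[str]) -> str:
        """Reasoning: pick the option with the best remembered average."""
        def average(action: str) -> float:
            scores = self.outcomes[action]
            return sum(scores) / len(scores) if scores else 0.0
        return max(options, key=average)

agent = ConsequenceLearner()
agent.record("greet", +1.0)   # greeting went well in the past
agent.record("ignore", -1.0)  # ignoring went badly
print(agent.decide(["greet", "ignore"]))  # -> "greet"
```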

Communication and representing knowledge are both more complex criteria of intelligence. For both, we need to understand what we are thinking: the computer must be self-conscious in order to be capable of communicating and representing knowledge. The computer must be able to think about its thinking. For this capability, a computer must understand its existence or its being – it must learn that there is a being that thinks, a cogito. The computer must understand that it is alive rather than dead, and that it can do things. It must also learn that this being makes decisions on the basis of what is good and bad for the ego. By becoming aware of these decisions, the computer will care about them and also establish a will to live up to them.

The computer must also learn a language in order to show others what it likes and dislikes. Then other people will also think that the computer is thinking and caring about its interests. The computer becomes a social being that loses its radical freedom to do things without caring, because it understands that others compete against it. The computer understands that it needs to secure its interests by fighting against the other. In Hobbesian terms, the state of nature, “the war of all against all”, has then become reality. Thus the computer has become a caring, self-conscious, egoistic being that is able to communicate its interests and to represent knowledge. Knowledge is something that has gained an essence, a subjectivity, or a meaning for the computer, because of the computer’s consciousness of the conscious beings in this world.

Even if it seems possible to create self-conscious posthumans, there are still others who argue that human beings should not venture to upload themselves into posthumans. These people say that we will become substantively different creatures: the aesthetic sensibilities of posthumans will be completely different from ours, and they will find it pointless to reproduce, because of their negligibly senescent lives. The fear of losing our humanity, however, seems to me a myth. We will never know exactly what it is like to be a posthuman. Yet we can be sure that our mentality and identity will change in the future in any case, both as evolving human beings and as human beings transforming into posthuman beings. In my opinion, we should make the progressive step towards posthumanism rather than staying pessimistic, afraid, and conservative about it.

Finally, there are also people who argue that an upload will make us human-unfriendly beings. These people say that in our socially unequal world, where not all people have the resources to upload themselves or even to provide for their basic needs, alienation will bring the posthuman being to loggerheads with the human being. In that situation, the uploaded posthuman, with all its power and intelligence, will oppress the human race – something we don’t want and should always try to prevent. Well, why not develop human-friendly artificial intelligence? By programming or by education, we can possibly make most posthumans human-friendly. But won’t they be smart enough to circumvent the programming? Yes, they might, but these posthumans face the same dilemma we are facing. They too will think that an evolution of the human-friendly posthuman into a hating posthuman would endanger the existence of both human-friendly posthumans and human beings. Most human-friendly posthumans will remain the same peace-loving beings, as most of them prefer to live in peace rather than in war.

Works Cited
Chalmers, David. "A Computational Foundation for the Study of Cognition." Web. 13 Dec. 2011. https://mywebspace.wisc.edu/lshapiro/web/Phil554_files/Chalmers-computation.html

Written by Lars Been
2nd year student LUC

The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester.