Written by Georgina Kuipers
2nd-year student, LUC
This semester’s masterclass, as you may or may not know, focuses on transhumanism. Transhumanism is a movement that believes in and supports the technological enhancement of humankind, e.g. slowing the effects of ageing or creating ‘superhumans’ that have computers instead of brains. Although this might sound like science fiction, the thinkers featured in Nicholas Agar’s book Humanity’s End: Why We Should Reject Radical Enhancement actually come up with quite believable technological arguments that we may be reaching this ‘superhuman’ moment faster than we think. Besides this quantitative side, there is also much debate about the ethics of ‘radical enhancement,’ which Agar himself argues strongly against.
The Singularity
The ‘superhuman’ moment is what Ray Kurzweil refers to as the Singularity. It is a specific moment in the human condition: the end of humanity as we know it, but definitely not the end of development. It is the moment when computer technology becomes better and faster than our simple human brains. Kurzweil himself calls it “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed” (Agar 35). For Kurzweil, reaching this moment is nothing but a quantitative problem; he is a scientific determinist, and thus sure that we will reach the Singularity.
His best argument for this is the law of accelerating returns, his generalisation of Moore’s law. I will save you most of the mathematics and technological arguments: basically, it means that technology develops exponentially rather than linearly, so each increase is larger than the one before it. I actually find this part of the argument quite convincing; I often marvel at the rapidity of technological developments in our day and age, especially in contrast to what history teaches us about technological developments in previous centuries.
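To make the contrast concrete, here is a small Python sketch of my own (the numbers are arbitrary illustrations, not Kurzweil’s data) comparing linear and exponential growth over the same ten steps:

```python
# Toy comparison of linear vs. exponential growth: the exponential
# curve looks unremarkable at first, then runs away from the linear one.
# Starting values and step count are arbitrary, purely for illustration.

steps = 10
linear = [1 + n for n in range(steps + 1)]        # adds 1 each step
exponential = [2 ** n for n in range(steps + 1)]  # doubles each step

for n in range(steps + 1):
    print(f"step {n:2d}: linear = {linear[n]:3d}, exponential = {exponential[n]:5d}")
```

After ten steps the linear sequence has only reached 11, while the exponential one has reached 1024; that gap is the whole intuition behind ‘accelerating returns.’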
So how does this law of accelerating returns help Kurzweil? As he likes things quantitative, he reasons that our brains can do 10¹⁹ calculations per second. Roadrunner, a computer produced by IBM in 2008, can do 10¹⁵ calculations per second, and its creators bragged: “a physics problem that Roadrunner could crack within a week would have taken the world’s most powerful computer of 1998 twenty years to solve.” Of course, Roadrunner is insanely expensive, so, allowing a few more years for computational power to grow and prices to drop, Kurzweil suggests that affordable, 1000-dollar machines with this kind of power should be available to the public around 2020. To be precise, Kurzweil predicts the Singularity to happen in 2029. We in the masterclass have a running joke that explains why all these technological thinkers predict the ‘superhuman’ age to arrive so soon: they just want to be alive when it happens.
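As a sanity check on that timeline, here is my own back-of-the-envelope sketch. The 18-month doubling period is an assumption I am adding for illustration (a common reading of Moore’s law); Kurzweil’s actual argument also factors in falling cost per calculation, which this ignores:

```python
import math

# How long until Roadrunner-class hardware (10^15 calculations/second)
# reaches Kurzweil's figure for the human brain (10^19), assuming
# computational power doubles every 18 months? The doubling period is
# an assumed illustration, not a number taken from Agar or Kurzweil.

brain_cps = 1e19        # Kurzweil's estimate for the brain, per the post above
roadrunner_cps = 1e15   # IBM's Roadrunner, 2008

doublings = math.log2(brain_cps / roadrunner_cps)  # ~13.3 doublings needed
years = doublings * 1.5                            # 1.5 years per doubling

print(f"doublings needed: {doublings:.1f}")   # -> 13.3
print(f"years from 2008: {years:.0f}")        # -> 20, i.e. around 2028
```

Roughly twenty years from 2008 lands around 2028, so on this crude reckoning the dates quoted above are at least in the right neighbourhood.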
My personal problem with this law of accelerating returns is that it predicts the future by looking at the past, which makes it a non-falsifiable argument: we will not know it is true until it happens, and until it happens, anyone can claim that it will happen at some point. Although Agar argues that Kurzweil is misinterpreting some of the data, his basic point is that we do not know the endpoint or the totality of the human brain, so all this quantitative thinking remains a guesstimate.
[Stills from 'Transcendent Man' (2009), a documentary about the life and ideas of Ray Kurzweil]
Atomism/holism
Some of you may read the above and think “[t]here’s more to human intelligence than computational power!” (Agar 39) Interestingly, this is exactly what Agar brings up. He uses the difference between atomism and holism to strengthen his point, though perhaps not very effectively. First, let us look at these two ways of viewing the world. Atomism is the idea that everything can be explained (and re-created) by exploring the parts it is made up of. Holism, in contrast, holds that the whole is greater than the sum of its parts. Kurzweil is atomistic in his view of the human brain; he thinks that we can completely duplicate it using technology. Should our knowledge of the brain lead to a non-working copy, that would only mean we have not yet looked at the appropriate “lower-level goings-on” (Agar 52); perhaps duplicating the brain from the quantum level, or something even below that, would produce a perfect replica.
One of the problems with these two mind-sets is that they make discussion between the camps impossible: the holist will keep arguing that there is something in the entirety of the forest that makes it a forest (so it is more than just the trees), while the atomist will keep suggesting that perhaps they should look closer at the fungi and re-examine the trees at the quantum level to find what makes it a forest. Chris actually offered an even better example in Star Wars terms: he argued that the Force was holistic in the original trilogy (IV, V and VI), where it was simply an all-encompassing power, but that George Lucas turned the Force into an atomistic concept when he shot the prequel trilogy (I, II and III); Lucas had been reading Roger Penrose’s theory that consciousness resides inside cells, and had thus decided that the Force was really a lower-level quality, present inside cells.
My other objection is specific to Kurzweil’s argument. As mentioned in the introduction, a major worry for Agar is the ethical consequence of radically enhancing humanity: he thinks that if we duplicate our brain, and actually start replacing it with a computer (which could work even faster than our brains), we will have lost our humanity. Kurzweil argues against this, but he uses a rather strange logic for an atomist: he points to current technology, noting that we are already able to augment our brains with electronic components, and says:
“If we regard a human modified with technology as no longer human, where would we draw the defining line? Is a human with a bionic heart still human? … How about someone with ten nanobots in his brain? How about 500 million nanobots? Should we establish a boundary at 650 million nanobots: under that, you’re still human, and over that, you’re posthuman?” (Agar 53)
As you can see from this sarcastic line of thought, Kurzweil seems to be arguing that it is the whole of the human body (and the human brain) that constitutes humanity: because we started out as humans and will implement technological changes only gradually, we will remain human. As I hope you grasp by now, this does not match his atomistic and quantitative portrayal of technological enhancements to the human brain, and therefore greatly undermines his claim that no ethical boundary is being crossed.
So, in conclusion, I hope you have found this post interesting; the following posts will probably focus more on the ethical side of transhumanism. I can assure you that there are many more mind-boggling ideas to come.
PS: I can definitely recommend reading (at least snippets of) Kurzweil’s Q&A on the Singularity (http://www.singularity.com/qanda.html), as he provides some interesting ethical responses on, for instance, living for hundreds of years and humans taking over the universe.
The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester.