Nicholas Agar
Barend de Rooij
Barend, you challenge some of my criticisms of Nick Bostrom and Toby Ord.
Bostrom and Ord think that some (but not all) arguments against human enhancement display a bias toward the status quo. Novel enhancements are opposed just because they’re novel. You think that I retain an irrational bias toward the status quo.
In a way, I’m pleading guilty.
But I’m going to say that it’s a rational preference rather than an irrational bias.
Bostrom and Ord leave open the possibility of a rational preference for the status quo in their paper. In their discussion of the famous experiment in which people chose to keep whichever of a chocolate bar or a mug they had first received, they allow that people might have formed an emotional bond with the original item. Emotional bonds are a big part of being human. You don’t automatically dump your romantic partner because you receive an offer from someone who is objectively better (by common consensus s/he is more attractive, more intelligent, wittier, has more Facebook friends …).
I think we have this kind of connection with aspects of ourselves that radical enhancement would do away with. This attachment isn’t fully described in Humanity’s End, but (shameless self-promotion coming …) look for more on it in my next book….
Caspar Plomp
Caspar, I agree with almost everything you say. De Grey has a somewhat simplistic view of how SENS will work.
According to him, therapies that turn old people into young people will sharply reduce medical costs. Rejuvenated people will stay out of hospital beds. Instead they’ll generate wealth to pay for SENS. I really like your suggestion that SENS will involve substantial ongoing maintenance costs.
SENS patients will be like diabetics requiring (very expensive) daily injections. I think that there’s a general problem of too much optimism from would-be radical enhancers. They focus too much on ideal outcomes (compare the possibility that SENS rejuvenates and therefore dramatically reduces health costs with the outcome sought by planners of the Iraq war – the overthrow of Saddam Hussein followed by the prompt establishment of a stable democracy). There’s not enough thought about sub-optimal outcomes (SENS works imperfectly and remains very expensive and socially divisive; the Iraqi people aren’t terrifically happy about being invaded and occupied).
Laura Pierik
Laura, like you, I’m not keen on the boredom argument. But I do think that there’s something in the fear line.
It’s really a prediction about how people who’ve done all they can to reduce to zero the risk of death from internal causes will feel about external threats. I wonder how many of them will think as you do – that it’s the risk of dying that makes life fun. Anyone who thinks this way probably isn’t making plans to celebrate her 1,000th birthday! I find the risk of death from (careful) car driving acceptable, but won’t negligibly senescent people view driving pretty much as we now view medieval jousting – an activity once considered safe enough but that now seems hideously reckless? Wouldn’t a kind of Darwinian selection tend to eliminate risk-takers from the population of the negligibly senescing, leaving only the cautious types? This isn’t to say that you’d be mad to opt for negligible senescence. But it would be a life very different from the kinds of lives that we currently enjoy. It’s something for societies to factor in when they consider spending the huge sums of money that SENS requires.
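To make the selection point concrete, here’s a back-of-the-envelope sketch. The model and the two risk levels are my illustrative assumptions, not figures from Humanity’s End or from de Grey. Suppose a negligibly senescent person faces a constant annual probability $p$ of death from external causes (internal causes having been engineered away). Then her chance of surviving $n$ years, and her expected lifespan, are:

\[
S(n) = (1-p)^{n}, \qquad \text{expected lifespan} \approx \frac{1}{p}
\]

\[
S_{\text{bold}}(1000) = (0.999)^{1000} \approx 0.37, \qquad S_{\text{cautious}}(1000) = (0.9999)^{1000} \approx 0.90
\]

So after a millennium, for every bold type still alive there are roughly two and a half cautious types ($0.90/0.37 \approx 2.4$), and because the ratio compounds, the gap widens with every further century. The surviving population is increasingly made up of people who gave up driving, jousting, and the rest.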
I like your discussion of social inequality. This is something that de Grey tends to dismiss. New therapies need to be tested before they’re ready for de Grey’s millionaire benefactors. Who will test them? Here’s something I wrote for Slate on the problem of finding willing human guinea pigs for SENS.
Georgina Kuipers
Georgina, nice account of the process that Kurzweil thinks will take us to super-intelligence. I think he might want to defend the falsifiability of the law of accelerating returns. His books contain (tediously) many examples of technologies whose improvement has tracked an exponential path.
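In schematic form (my gloss, not Kurzweil’s own formulation), the law says that some capability measure $C$ – calculations per second per constant dollar, in his famous chart – grows exponentially:

\[
C(t) = C_{0} \cdot 2^{\,t/\tau}
\]

where $\tau$ is the doubling time. Put that way, the law does look falsifiable: fix a measure, estimate $\tau$ from past data, and the law fails if future improvement drops off the curve. Presumably that’s what all those charts are for.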
I meant my discussion of the possibility of holism about the mind to point to one thing that could throw out Kurzweil’s timetable – according to him, human super-intelligence is imminent. I was thinking of some ways in which it could take (much) longer than Kurzweil suspects. I take it that holists can’t just assert that atomists leave stuff out. Holism might gain credibility at some point in the future when we have a good account of all of the human mind’s parts – its neurons and neuronal maps – and we find that there’s a whole list of mental phenomena about which we remain clueless. I like the example of George Lucas’s atomistic account of the Force. That’s certainly another way in which the task of describing the human brain well enough to make a synthetic mind might be harder than Kurzweil anticipates.
Lars Been
Hi, Lars. I’m an academic philosopher so it’s not surprising that I have lots of philosophical beliefs. But there are relatively few (none?) of them I’d bet my life on. For example, I’m a strong believer in moral consequentialism but I wouldn’t challenge a philosophical super-intelligence (something like Douglas Adams’ Deep Thought computer) to immediately terminate me if it turned out that consequentialism wasn’t the correct moral theory.
Uploading asks us to stake our lives on the truth of a philosophical proposition – that every aspect of our minds that we value can be realized by a machine. You might be quite confident about the possibility of computers capable of intentionality and consciousness but still be justifiably cautious about transferring your mind into a computer. A skeptic about thinking machines would be as unconvinced by the computer learning that you discuss as s/he would be by the message “I have conscious beliefs about the world” displayed on a computer monitor.
You’re right that this won’t bother some people, who will just go ahead and upload. But then there will always be people who do prudentially irrational (i.e. silly) things.
Please don’t do it, Lars!
Post written by Nicholas Agar
Author of Humanity's End (The MIT Press, 2010)
For more information about Nicholas Agar and his writings: www.nicholasagar.com