Monday, 12 March 2012

And they lived happily forever after?

There are many debates about whether it will one day be possible for humans to live ‘forever’. Of course, this would not mean being immortal, and in the end all of us would probably still die. But it would mean that we would no longer die of natural causes; the only way to die would be by accident. So while this debate is interesting and is getting more and more attention nowadays, the real debate should be about whether we would actually want to live forever. For if we as a society feel that this is something to be desired, I am sure that with billions of dollars of investment we would come pretty close to living forever within centuries, if not decades. So the real question is: do we want to live forever? In his book ‘Humanity’s End’ Nicholas Agar focuses a lot on this question, as he acknowledges that his problem with transhumanism lies not so much with the science we would have to develop as with the ethics behind it. He makes several arguments to show why it is undesirable for humans to live forever:

Boredom
The first argument is one originally made by Bernard Williams: boredom. He argues that once we live long enough, our lives will stagnate. Since there is only a limited number of new experiences, soon enough we will be completely bored with the things we find entertaining and challenging right now. This seems a reasonable argument; can we really expect to find enough to do for thousands of years? However, Williams overlooks several things when making this argument. First of all, people change. While Williams assumes that in a thousand years we will still be the same person we are right now, people already change a great deal in a mere forty or fifty years; we cannot even imagine how people would develop and change over centuries! This means there will be new things we find entertaining, and we will seek new goals in our lives. We must also realise that even if we are able to live for centuries, our lives will still be framed by birth and death; people will never become truly immortal. Humans will therefore remain ‘obsessed’ with surviving, and although the timeframe might change, our perspective on life will remain the same. Only if we were to become immortal, with the fear of dying gone completely, would our attitude to life change drastically. More important still, if we really were to become bored with our lives, there would be a way out: people could always choose to end their lives once life had become meaningless to them. And consider how long it would have taken these people to become so bored with the activities they used to love. Quite a few people nowadays say they wish they could have done more with their lives; but people choosing to die after centuries of life would actually have done everything they could ever have dreamed of: they would have had the full human experience. Only once they were completely done with all of it would they die. So boredom should not be seen as an argument against living forever at all.

Fear
Another argument Agar makes, linked to the argument from boredom, is fear. Once we have the possibility of living thousands of years, an accident would seem far more horrible to those humans than it does to us now: we only stand to lose a few decades of life, while they would lose centuries! Agar therefore argues that risks that seem reasonable to us will become far too dangerous for those humans: they will no longer dare to drive cars, they will no longer fly in airplanes, and so on. Agar thinks that these humans will retreat from the world: they will stay within the safety of their homes and make sure there is no chance that they will die. However, I think this is very unlikely to happen in reality, for it is exactly the risk of dying that makes life so exciting. It makes us want to achieve things right now. Besides, most of society isn’t all that obsessed with dying; perhaps a small group of people contemplate death and actually make rational choices to keep the chance of an accident as small as possible, but most people don’t. For example, many young people are pretty damn good at destroying their bodies with alcohol and drugs; they only think about the short term (having an amazing time with friends) and not about the long term (all sorts of diseases, and the chance of having an accident because of being drunk or stoned). It simply isn’t in our human nature to make sure that we live as long as possible, so it is very unlikely that this would change once we get to live a thousand years on average.

Tokyo train passengers
Social Inequality
There is, however, one ethical problem that will probably cause serious trouble anyway: social inequality. Once the science for the Longevity Escape Velocity (the point at which medicine adds more than a year of remaining life expectancy for every year that passes, so that science outruns ageing) has been developed, it seems reasonable to expect that at first only the richest and most successful people in the world will get access to the treatment. This means there will suddenly be a huge gap between the rich, who might live a thousand years, and the poor, who will still live for only around eighty. So where social inequality can now cause at most a twenty- to forty-year difference in life expectancy, this difference would increase to over nine hundred years! That would have monumental consequences for society, since it would give all power to the people with access to the treatment. For example, it seems reasonable to say that people who get the treatment will no longer be willing to fight in armies, so others will be needed to fight states’ wars; many poor people without normal access to the treatment might be eager to join armies in exchange for it, since this would be their only chance of a longer life. The same goes for other jobs that are far too dangerous for people who have undergone the treatment.
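
To make the arithmetic behind the Longevity Escape Velocity concrete, here is a minimal sketch. It assumes a single hypothetical number: how many years of remaining life expectancy medicine adds per calendar year. The function name and all figures are illustrative, not from Agar.

```python
# Minimal sketch of the Longevity Escape Velocity arithmetic.
# Assumption (hypothetical): medicine adds `annual_gain` years of
# remaining life expectancy per calendar year, while living one
# calendar year costs one year of that expectancy.
def years_survived(initial_remaining=50.0, annual_gain=1.2, horizon=1000):
    remaining = initial_remaining
    for year in range(horizon):
        if remaining <= 0:
            return year                  # ageing caught up: death
        remaining += annual_gain - 1.0   # net change per calendar year
    return horizon                       # never caught within the horizon

print(years_survived(annual_gain=0.8))  # gain below 1: death after 250 years
print(years_survived(annual_gain=1.2))  # gain above 1: escape velocity
```

The point of the sketch is the threshold: any gain below one year per year merely delays death, while anything above it means ageing never catches up, which is exactly why access to such a treatment would split society so sharply.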

Conclusion
So it seems that although there are no direct objections against living forever as such (if boredom and fear are the only real objections, I sure as hell wouldn’t mind having the treatment), it might simply have too great an impact on society as a whole. It might thus be undesirable to invest in a treatment that increases the inequality between people on such a gigantic scale that we might even start speaking of two different sorts of humans. It seems that the treatment would only be desirable if all human beings were given it at the same time. But as we can see with treatments for illnesses like AIDS, this is easier said than done, and so it is highly unlikely that all people, as equals, will ever get the same treatment at the same time.

Written by Laura Pierik
2nd year student LUC

The LUC Dean's Masterclass is run each semester for the students who made the honour roll in the previous semester.

Ray Kurzweil: The Technologist

Written by Georgina Kuipers
2nd year student LUC

This semester’s masterclass, as you may or may not know, focuses on transhumanism. Transhumanism is a movement that believes in and supports the technological enhancement of humankind, e.g. decreasing the effects of ageing or creating ‘superhumans’ that have computers instead of brains. Although this might sound quite fictional, the thinkers featured in Nicholas Agar’s book Humanity’s End: Why We Should Reject Radical Enhancement actually come up with quite believable technological arguments to explain that we may be reaching this ‘superhuman’ moment faster than we think. Besides this quantitative side, there is also much debate about the ethics of ‘radical enhancement,’ which Agar himself argues strongly against.

The Singularity
The ‘superhuman’ moment is what Ray Kurzweil refers to as the Singularity. It is a specific human condition: the end of humanity as we know it, but definitely not the end of development. It is the moment at which computer technology is better and faster than our simple human brains. Kurzweil himself calls it “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” (Agar 35) For Kurzweil, reaching this moment is nothing but a quantitative problem; he is a scientific determinist, and thus sure that we will reach the Singularity.

His best argument for this is the law of accelerating returns, his generalisation of Moore’s law. I will save you most of the mathematics and technological arguments; basically, it means that technology develops exponentially rather than linearly, so each increase is bigger than the one before it. I actually find this part of the argument quite convincing; I often wonder at the rapidity of technological developments in our day and age, especially in contrast to what history teaches us about technological developments in previous centuries.

So how does this law of accelerating returns help Kurzweil? As he likes things quantitative, he reasons that our brains can do 10^19 calculations per second. Roadrunner, a computer produced by IBM in 2008, can do 10^15 calculations per second, and its creators bragged: “a physics problem that Roadrunner could crack within a week would have taken the world’s most powerful computer of 1998 twenty years to solve.” Of course, Roadrunner is insanely expensive, and so, allowing a couple more years to further develop computational power, Kurzweil suggests that affordable, 1000-dollar machines with this kind of power should be available to the public around 2020. To be precise, Kurzweil predicts the Singularity will happen in 2029. We in the masterclass have a running joke that explains why all these technological thinkers predict the ‘superhuman’ age to arrive so soon: they just want to be alive when it happens.
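
Out of curiosity, here is a minimal sketch of this kind of extrapolation. Everything in it beyond the two figures quoted above is my assumption: a price of roughly 10^8 dollars for Roadrunner and a doubling of computing power per dollar every two years; Kurzweil’s own curves use different (faster) growth rates.

```python
import math

# Hypothetical extrapolation: when does exponential growth in
# calculations-per-second-per-dollar reach a target level?
def year_reached(target_ops_per_dollar,
                 start_year=2008,
                 start_ops_per_dollar=1e15 / 1e8,  # Roadrunner: 10^15 calc/s at ~$10^8 (assumed price)
                 doubling_years=2.0):              # assumed doubling period
    doublings = math.log2(target_ops_per_dollar / start_ops_per_dollar)
    return start_year + doublings * doubling_years

# A brain-equivalent (10^19 calc/s, per the figures above) 1000-dollar machine:
print(round(year_reached(1e19 / 1e3)))  # -> 2068 under these assumptions
```

Notice that with a two-year doubling time the brain-equivalent 1000-dollar machine only arrives around 2068; you need a considerably faster doubling rate to land on Kurzweil’s 2020. That sensitivity to the assumed growth rate is precisely what makes the prediction feel like guesswork to me.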

My personal problem with this law of accelerating returns is that it is predicting the future by looking at the past, which makes it a non-falsifiable argument; we will not know it is true until it happens, and until it happens, everyone can suggest that it will happen at some point. Although Agar argues that Kurzweil is misinterpreting some data, his basic point is that we do not know the endpoint or the totality of the human brain – so all this quantitative thinking remains a guesstimation.

Stills from 'Transcendent Man' (2009), a documentary about the life and ideas of Ray Kurzweil

Atomism/holism
Some of you may read the above and think: “[t]here’s more to human intelligence than computational power!” (Agar 39) Interestingly, this is exactly what Agar brings up. He uses the difference between atomism and holism to strengthen his point, though perhaps not very effectively. First, let us look at these two ways of viewing the world. Atomism is the idea that everything can be explained (and re-created) by exploring the parts it is made up of. Holism, in contrast, holds that the whole is greater than the sum of its parts. Kurzweil is atomistic in his view of the human brain; he thinks that we can completely duplicate it using technology. Should our knowledge of the brain lead to a non-working copy, that would simply mean we have not yet looked at the appropriate “lower-level goings-on” (Agar 52); perhaps duplicating the brain from the quantum level, or something even below that, would produce a perfect replica.

One of the problems with these two mind-sets is that they cannot argue productively with each other: the holist will keep insisting there is something in the entirety of the forest that makes it a forest (so it is more than just the trees), while the atomist will keep suggesting that perhaps we should look closer at the fungi and re-explore the trees at the quantum level to find what makes it a forest. Chris actually mentioned an even better example in Star Wars terms. He argued that the Force was holistic in the original trilogy (IV, V and VI), where it was an all-encompassing power, but that George Lucas turned the Force into an atomistic concept when he shot the prequel trilogy (I, II and III): he had been reading Roger Penrose’s theory that consciousness resides inside cells, and had decided that the ‘Force’ quality was simply located at a lower level, inside the cell.

The other problem I have is specific to Kurzweil’s argument. As mentioned in the introduction, a big problem for Agar is the ethical consequences of radical enhancement for humanity. He thinks that if we duplicate our brain and actually start replacing it with a computer (which could work even faster than our brains), we will have lost our humanity. Kurzweil argues against this, but he uses a rather strange logic for an atomist: he points to current technology, with which we can already augment our brains with electronic components, and says:

“If we regard a human modified with technology as no longer human, where would we draw the defining line? Is a human with a bionic heart still human? … How about someone with ten nanobots in his brain? How about 500 million nanobots? Should we establish a boundary at 650 million nanobots: under that, you’re still human, and over that, you’re posthuman?” (Agar 53)

As you can see from this sarcastic line of thought, Kurzweil seems to be arguing that it is the whole of the human body (and the human brain) that constitutes humanity; because we started out as humans and will implement technological changes only gradually, we will remain human. As I hope you grasp by now, this does not match his atomistic and quantitative portrayal of technological advancements to the human brain, and it therefore greatly undermines his claim that no ethical boundaries are being crossed.

So, in conclusion, I hope you have found my post interesting; the following posts will probably focus more on the ethical side of transhumanism. I can assure you that there are still many mind-boggling ideas to follow.

PS: By the way, I can definitely recommend that you all read (at least snippets of) Kurzweil’s Q&A on the Singularity (http://www.singularity.com/qanda.html), as he provides some interesting ethical responses to, for instance, living for hundreds of years and humans taking over the universe.



Uploading: desirable or inevitable?


Cyber Future by Benedict Campbell
Technology is changing our world at an almost inconceivably high rate. Ideals that seemed impossible at the beginning of the twentieth century are now reality: we can fly to the moon, split the smallest particles in accelerators, and find all our information on a digital web. How will technology shape our lives in ten years? Will we still be mortal human beings, or will we be able to solve the most destructive problem of human life, death, and become radically enhanced, negligibly senescent posthumans? This essay discusses the uploading of the human mind, which would radically enhance our intelligence and possibly make us negligibly senescent. Do we really desire to upload ourselves, or is it too dangerous a venture? In this essay I will argue that being uploaded and upgraded into posthumans is not only inevitable but also desirable. An upload entails the replacement of the whole biological neural network, including the senses, the nervous system, and the brain, by an artificial or electronic network. This would make our neural network far more robust and far more efficient in transporting energy, and human beings would thus evolve into superintelligent beings that are, presumably, much less susceptible to neurological disorders.

Why do people want to upload themselves? By uploading ourselves we become superintelligent beings able to develop technologies that may extend our lives. As superintelligent beings, we might develop new uploading techniques or new medicines against terminal or life-threatening diseases, but we may also develop, among other things, social systems and technological devices that make our lives much easier and less stressful. Even the latter could play a role in lengthening our lives. Although I am against systems and devices that make us lazy, the idea of a stress-free but stimulating and inspiring world appeals to me greatly. The fact that a stress-free world is an ideal, and thus hardly realisable, should not weaken our will to strive for it.

The previous paragraph delineates why people supposedly want to upload themselves. The reasons given are, admittedly, rather subjective. Yet I would like to argue that most human beings would be in favour of uploading themselves if it could lengthen their lives. All self-conscious people who care about what they and others do want to live longer and more safely, because the things we can do are far too valuable to give up. In real life we already see that people are trying to make their lives safer and more comfortable. We test all our food, drinks, and other stimulants in order to warn ourselves away from unhealthy substances and steer ourselves towards healthier ones. We also see that a great deal of money is put into medical research and into developing new drugs and techniques that keep us from dying young and make us live substantially longer.

If the practice of uploading is safe, and if it does not remove my human characteristics, then I will definitely venture the upload. Nevertheless, many people argue that uploading is unsafe, because it takes away our humanity and because uploaded posthumans will not be human-friendly. John Searle, an American philosopher, says that an upload would transform us into robots or computers that only simulate thinking. We would lose our ability to be self-conscious, to understand what we are thinking, and thus our humanity: our ability to reason and to be emotional beings. Nicholas Agar endorses Searle’s contention, which follows in the footsteps of advocates of Weak Artificial Intelligence, who hold that artificial intelligence can at best simulate, never genuinely match, human understanding. Although we cannot predict whether an upload would truly be destructive for a human being, Agar says we had better not even try. He reaches this conclusion via a version of Blaise Pascal’s Wager: refusing the upload is the better bet, because by uploading we could possibly lose everything, while by choosing to remain human and living a fortunate human life we lose nothing.

For me, this wager is all well and good, but it does not bring us any further. Besides, sooner or later some irrational fool will accept the offer and venture the upload anyway. That will show us whether the upload is successful, whether it is sensible to upload more human beings, and what should be improved. For me it is therefore more interesting to inquire how and why an upload could be a success. This inquiry brings me to a philosophical discussion of the mind and human consciousness, and I hope that the discussion below will provoke more ideas and debate.

First, however, we need to discuss the criteria a posthuman must meet in order to be a functional being. Most importantly, a posthuman needs intelligence. This implies that a posthuman must be able to reason, represent knowledge, plan, learn, and communicate. In order to have these intelligent capabilities, a posthuman also needs senses, a nervous network with a brain that can process and memorise information, and ‘motor skills’: the ability to move things by means of muscles. Although it will be technologically difficult to produce a being with an electronic neural network linked to senses and muscles, I am convinced that it will be possible in the future. Remember that we can already build highly capable robots on microchips and other nanotechnological systems.

When we reach the point of developing a being that is functional and can process information and language, we have developed a conscious being: a being that, in existentialist terms, exists, since it thinks, but has not yet transcended into a being with essence. We can call this being a pre-reflective cogito, a being that has not yet reflected upon the essence of what it is doing and thinking. The being is thus not aware of its own conscious being and is, according to John Searle, only simulating thinking.

The conscious pre-reflective cogito is conscious in the way a computer is. A computer can also transport energy, information, or language when there is a certain sensory input. When we tap on our computer’s keyboard, the computer reacts according to how it is programmed: it transports the information through its microchips and gives a certain sensory output, e.g. by displaying a word. A computer can, however, do more. It can compute. “Computations can capture other systems’ abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.” (Chalmers) We can compare this to our use of formulas: by inserting particular information (the information from sensory inputs) into standard formulas, we obtain new information we had not thought of before. So computers can compute the consequences of certain acts and can make decisions by choosing the best consequence. It is thus not only externally synthesised information that passes through a computer, but also internally produced, computed information. This information is never entirely predictable, as the sensory inputs from our chaotic world are always somewhat random. By storing the information in its memory, the computer has learned something.

Two other criteria for intelligence are planning and reasoning. These practices are only possible when a computer or an uploaded being can learn. The computer can reason and plan by means of a scheme of good and bad consequences that it has learned in the past. With this scheme, the computer can make trade-off-based decisions. If a computer rationally makes a decision, it can also program itself to act upon that decision somewhere in the future, which we call planning. Whether the computer will succeed in acting upon it in the future is another question.
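
To make this ‘scheme of good and bad consequences’ a little more concrete, here is a minimal, purely illustrative sketch; the actions, values, and update rule are all hypothetical examples of mine, not anything from Agar or Chalmers.

```python
# Illustrative sketch: an agent stores learned values for the
# consequences of each action and chooses the best trade-off.
learned_values = {"act": 0.0, "wait": 0.0}

def learn(action, outcome_value, rate=0.1):
    """Learning: nudge the stored value toward the experienced outcome."""
    learned_values[action] += rate * (outcome_value - learned_values[action])

def decide():
    """Reasoning: pick the action whose learned consequences are best."""
    return max(learned_values, key=learned_values.get)

learn("act", +1.0)    # a good consequence experienced after 'act'
learn("wait", -0.5)   # a bad consequence experienced after 'wait'
print(decide())       # -> 'act'
```

Planning, in this picture, is simply deciding now and storing the chosen action for execution later.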

Communication and representing knowledge are both more complex criteria of intelligence. For both, we need to understand what we are thinking: the computer must be self-conscious in order to be capable of communicating and representing knowledge. The computer must be able to think about its thinking. For this capability, a computer must understand its existence or its being; it must learn that there is a being that thinks, a cogito. The computer must understand that it is alive rather than dead, and that it can do things. It must also learn that this being makes decisions on the basis of what is good and bad for the ego. By becoming aware of these decisions, the computer will care about them and develop a will to live up to them.

The computer must also learn a language in order to show others what it likes and dislikes. Then other people will also recognise that the computer is thinking and caring about its interests. The computer becomes a social being that loses its radical freedom to act without caring, because it understands that others compete with it. The computer understands that it needs to secure its interests against the other. In Hobbesian terms, the state of nature, “the war of all against all”, has then become reality. Thus the computer has become a caring, self-conscious, egoistic being that is able to communicate its interests and to represent knowledge. Knowledge then has an essence, a subjectivity, a meaning for the computer, because of the computer’s consciousness of the other conscious beings in this world.

Even if it proves possible to create self-conscious posthumans, there are still others who argue that human beings should not venture to upload themselves. These people say that we would become substantively different creatures: the aesthetic sensibilities of posthumans would be completely different from ours, and they would find it pointless to reproduce, given their negligibly senescent lives. Being afraid of losing humanity seems, however, to be a myth. We will never know exactly what it is like to be a posthuman. Yet we can be sure that our mentality and identity will change in the future anyway, whether as evolving human beings or as human beings transforming into posthumans. In my opinion, we should make the progressive step towards posthumanism rather than remaining pessimistic, afraid, and conservative about it.

Finally, there are also people who argue that an upload will make us human-unfriendly beings. They say that in our socially unequal world, where not all people have the resources to upload themselves or even to provide for their basic needs, alienation will bring the posthuman to loggerheads with the human. In that situation, the uploaded posthuman, with all its power and intelligence, would oppress the human race, which we do not want and should always try to prevent. Well, why not develop human-friendly artificial intelligence? By programming or by education, we could possibly make most posthumans human-friendly. But won’t they be smart enough to circumvent the programming? Yes, they might, but these posthumans face the same dilemma we are facing. They too will realise that the evolution of a human-friendly posthuman into a hating posthuman would endanger the existence of both human-friendly posthumans and human beings. Most human-friendly posthumans will remain the same peace-loving beings, as most of them prefer to live in peace rather than at war.

Works Cited
Chalmers, David. "A Computational Foundation for the Study of Cognition." Web. 13 Dec. 2011. https://mywebspace.wisc.edu/lshapiro/web/Phil554_files/Chalmers-computation.html

Written by Lars Been
2nd year student LUC


Friday, 18 November 2011

Indigenous Heritage and Human Rights by Maarten Jansen

A few weeks ago, some of us had the privilege of listening to Professor Maarten Jansen speak as part of the visiting lecture series. His lecture on indigenous heritage and human rights was highly interesting; however, it also provoked some questions.

To give a short summary of his talk: Professor Jansen started his lecture by investigating what exactly is meant by the term “indigenous peoples”, before moving on to the various stereotypes that have typified representations of indigenous groups throughout Central America (i.e. ‘cannibals’, ‘human sacrifice’ and ‘the noble savage’). After this introduction, Professor Jansen used these historical points to introduce his opinions on the current situation of indigenous groups and their struggles for rights and recognition in Mesoamerica. He stated that a shift from indigenous ‘integration’ into larger society towards ‘participation’ in it was a vital change of policy needed in the struggle for indigenous rights, and he mentioned examples of the changing status of indigenous rights throughout the world (e.g. the 2007 UN Declaration on the Rights of Indigenous Peoples). Moving on to the ‘endangered heritage’ of indigenous people, Jansen mentioned the estimate that of the 7,000 languages currently spoken in the world, six thousand will most likely be lost by the end of the century. Not only that, but most of those six thousand are effectively already ‘extinct’; to rephrase Maarten Jansen, such a language is like an animal species of which the last specimen is still alive but has lost the ability to reproduce, and which is as such ‘extinct’.

Jansen further outlined how, since the Spaniards ‘discovered’ the Americas, ancient artifacts belonging to the indigenous peoples have been brought to Europe and the ‘western’ world (naming as an example the famous ‘Crown of Moctezuma’, which currently resides in a museum in Vienna). The indigenous peoples to whose culture these artifacts belong, he stated, ask: why should ‘they’ have all the benefits, while it is the work of ‘our’ ancestors?

Professor Jansen then moved on to his personal work in the field with indigenous communities, using both the return of artifacts to indigenous groups and the risk of the loss, or ‘extinction’, of culture to justify one project he has been working on, namely the ‘teaching’ of Mixtec culture and language to the Mixtec indigenous peoples. Using his background in archaeology, and in particular his expertise in the interpretation of Mixtec pictorial manuscripts, Professor Jansen goes to Mixtec communities and works together with the people there to ‘interpret’ artifacts of ancient Mixtec culture in order to maintain the culture and language of the area. Here I must critique his method, as serious issues arise when one considers the ‘teaching’ of culture.

Professor Jansen stated that ‘we’ (by which he meant western society, and in particular himself) are able to understand, and thus teach about, Mixtec pictorial manuscripts because accurate ‘dictionaries’ exist which provide translations of them. These dictionaries, however, were written by the Spanish conquerors of the area, and as such provide only an interpretation of what the Spanish conquerors thought the pictographs meant. This means that the very translations on which Jansen bases his teachings are simply the ‘western’ perspective on the meaning of those manuscripts. Thus Jansen is not in fact maintaining Mixtec culture by instructing Mixtec people about the manuscripts; he is in fact influencing their culture by teaching it to them from a western perspective. When I brought up this issue with Professor Jansen, he admitted that it was a cause for concern; his counterargument was that he works together with the local people in order to improve the accuracy of the translations.
Though I personally believe that culture must be preserved, it must also be taken into consideration that the culture Jansen teaches no longer exists, because the ancient Mixtec culture of 500 years ago has developed since colonisation to become what it is today. Jansen seemed to imply in his lecture that he was ‘reviving’ a culture and language; however, he is teaching the history of a culture to its descendants. That is of course a valuable thing to do, as knowledge of the history of one’s ancestry is important, but I think the work Jansen does in Mixtec communities cannot be seen as the ‘revival’ of a culture, merely as the investigation of its history.

The lecture by Professor Jansen thus raised a variety of interesting questions for us as an audience. In particular, the question whether the ‘teaching’ of an indigenous culture to its descendants is useful was an interesting point to contemplate, as the question of western involvement in indigenous communities is of course one of great contention.

Jori Nanninga
BA 1

Friday, 7 October 2011

Raymond Geuss: "The Ambiguities of Democracy and Human Rights"


(On the occasion of Prof. Raymond Geuss’ lecture ‘The Authority of Democracy and Human Rights’ and a research seminar the next morning, in which he discussed his paper “Does criticism always have to be constructive?”)

We live in a really threatening, unsurveyable and infinitely complex world. It is a world in which many different individuals, who value and aspire to many different things, have somehow found a way to live together; it is a world in which we continue to be baffled by the forces of nature and the intricate web of human relations: it is a difficult world to make sense of. As such, it is natural for us to simplify it in terms of abstract schemata that allow us to somehow order the world. These schemata, stresses the Cambridge professor of philosophy Raymond Geuss, once in place, often take the form of dogmas. This is not a problem, as long as we realise that they are ultimately human constructs with limited applicability. It is especially important to realise that this is also the case with two of the central dogmas of Western political thought: the belief in the inherent value and universal applicability of democracy and human rights. “It is natural to structure the world such that what you are best in appears pivotal,” explains Geuss, “[but] we ought to resist fetishizing good working schemata by abstracting them and projecting them on other structures.” Vividly illustrating why it is a mistake to take these dogmas for somehow deeply, inherently justified ideas with universal aspirations, Professor Geuss then sets out to expose the ambiguities and incompatibility of the terms ‘democracy’ and ‘human rights.’
When we speak of democracy, he warns, we are not speaking of a uniquely specified phenomenon. Numerous models of democracy have been proposed and enacted throughout history, each different from the other. The direct democracy practised in the ancient Greek poleis, for example, is vastly different from the representative democracies we now know in, for example, the Netherlands, yet both forms of government carry the same name. However, that two different interpretations of a particular concept appropriate the same name does not necessarily imply that either one of them is wrong, or, in this case, undemocratic. It is important here, states Geuss, to distinguish between two fundamentally different senses in which the term democracy is typically used. Usually, it is taken to be a descriptive, empirical term describing a particular organisation of society and its institutions. When I contrasted ancient Greek democracy with contemporary Dutch democracy above, I used the term in this descriptive manner. However, if I were to criticise either one of these regimes by contrasting it with a non-existent ideal type of democracy, I would be using the word in an altogether different sense, as a “highly theoretical interpretation of what ought to be going on.” Used this way, the word has a strong normative connotation. When someone speaks of democracy, therefore, we ought to ask ourselves whether he or she is using the term in a descriptive or a normative manner: we ought to remember that it is an ambiguous concept that can be interpreted in numerous ways.
Although the two senses of democracy under discussion are analytically distinct, explains Geuss, they are often used in conjunction and sometimes conflated with each other. Over the course of the last decades, particularly since the fall of the Berlin Wall in 1989, the term democracy has undeniably come to be regarded as normatively positive by the Western public. Somehow, we get the impression that when someone speaks of democracy, it is clear that he or she is speaking of an inherently valuable and deeply justified form of government with universal aspirations. Yet, as we have seen, the term ‘democracy’ may denote many different things. Therefore, if we attach the label ‘democracy’ to our own institutional arrangement of society and take this as sufficient justification for spreading it as a state model, we are conflating the descriptive and normative elements of the term. This is problematic because, as Geuss argues, the fact that democracy works well for us does not necessarily imply that it works well elsewhere. Moreover, he points out, it is likely that ‘true democracy’ does not exist if we take it to mean a form of government in which the ‘people’ exercise power. Firstly, because this notion posits a unitary people somehow capable of exercising power, while in reality societies are composed of numerous individuals with often conflicting interests. Secondly, because in most modern democracies the power to rule is not vested in the people, but in separate structures that operate beyond the direct control of a state’s citizens. In fact, the modern state’s raison d’être seems to be the institutionalisation of power, and as such, states are by definition undemocratic. “Democracy, then,” Geuss concludes, “is not a good conceptual tool to analyse contemporary politics.”
The second and equally problematic central dogma of Western political thought that Geuss discusses in his lecture is that of human rights. Like democracy, the term human rights is widely regarded as denoting an undeniably valuable concept with universal aspirations. Human rights are thought to be rights that every human being possesses on account of his or her humanity. Hence, they are not rights assigned to individuals through political processes, but rights that exist independently of human interference. That is, they are natural rights. This notion becomes problematic when we subject the term ‘rights’ to closer evaluation. The concept of ‘rights’ is only useful if these rights can be enforced; otherwise, they lose their meaning. For human rights to be a meaningful concept, therefore, there must be someone or something capable of enforcing them. In Locke’s theory of natural rights, there was a deity to take care of this job. However, if we do not believe in the presence of a God, it is difficult to think of natural rights as a useful concept. As soon as we, humans, take on the role of enforcing them, they are no longer independent of human interference and hence lose their status as somehow transcendental rights. Furthermore, it is very ambiguous what natural rights are in the first place. “The context-given interpretation of natural rights,” states Geuss, “is very important.” Though we now all agree that holding slaves is a direct violation of human rights, this was not a problem for the Founding Fathers who signed the Declaration of Independence, in which they proclaimed that all men are created equal. What is and is not a human right, then, is a highly political matter that somehow depends on personal interpretation. Which personal interpretations we take to be most accurate in describing human rights depends on who we believe to have the right authority to evaluate them, and is therefore highly subjective. The concept of human rights, by consequence, is not a clear “cognitive tool to assess modern societies.”
If, taken on their own, the concepts of human rights and democracy are problematic, they are even more so taken together. Much of modern political theory, Geuss points out, is devoted to showing that both concepts are somehow compatible. Yet, he maintains, this is an impossible task: while democracy “vests final power, legitimacy and authority in the ‘people’,” the concept of human rights “posits the individual bearer of such rights as the final origin and locus of authority.” Hence, we may have to re-think the way we think about democracy and human rights. They are dogmas among other dogmas, and we should not overgeneralise them. If we accept this, it is no problem that democracy and human rights are incompatible concepts, for they are not the somehow “deep, inherently justified ideas” that we sometimes perceive them to be. Accepting this, moreover, implies that we have come one step closer to “resisting fetishizing good working schemata by abstracting them and projecting them on other structures.” Even if the terms democracy and human rights worked well for us to think about society (a contention that we may have to reconsider after Prof. Geuss’ talk), they would not for that reason be equally useful elsewhere. At any rate, we should not aspire to export our state model throughout the world. For, as we have seen, we live in an infinitely complex world, and the world view that appeals to us most likely does not appeal to everyone.

Barend de Rooij (LUC, 2nd year student)
5-10-2011