How do I know right from wrong? Possible answers may be found in religion (God will tell me what to do), in relativism (anything goes, there are no “right” answers) or in determinist theories (my DNA has been programmed that way, God has a plan for me, evolution made me that way, my mother didn’t love me).
Let’s assume (as I do) that science (or religion, for that matter) has not managed to prove conclusively that all human behaviour is determined. While many of our decisions and ensuing actions may be the result of a closed, causally determined process, at some point in that process we may – we will – end up at a fork in the road, a gap in the causal chain, from which we can only move on by making a choice. This is what distinguishes us from strictly programmed machines.
So the question becomes: “How do I know I’m making the right choice?” For 2000 years – from the moment people felt unsatisfied with the answers religious dogma gave them – philosophers have come up with possible answers to this question. I list five of them below, each illustrated with the example of lying, and explain why the last is my favourite candidate.
Whether or not the consequentialist tells a lie depends on the outcome. She calculates the consequences of her actions by balancing the happiness or harm they cause. If a lie causes no harm, then surely it is permitted; if it brings more benefit than harm, it may still be permitted. There are a number of problems with this approach. How do you calculate consequences? How do you predict the outcome of a decision, and can you predict all its possible future consequences? Do you numerically measure the pros and cons, the harm versus the happiness? Is any action that benefits more people than it harms always the best action? And how exactly do you measure happiness, or harm? One action may be perceived as very harmful by one person or group of people, while only slightly inconvenient for another. If there is no certain, common denominator, how do you compare, measure and predict the consequences of your actions?
Whether or not the social contract theorist tells a lie depends on the agreement she has with other people. If other people promise not to tell lies, then it makes reasonable sense for her to stick to the same promise. If not, people might lie all the time, and society would become impossible to live in. Even though we may all, selfishly and instinctively, be inclined to lie, setting up agreements that prohibit lying is better for everyone. So it’s better not to lie.
Whether or not the deontologist tells a lie depends on the rules, and on his duty to stick to them. The rule tells him how to act, or not to act. An immediate problem, of course: which rule, and why this rule? Some deontologists find authority in religious writings, others in the compelling imperative of rational human thought, and some in the simple fact that we have all, socially, consented to the rules simply by voluntarily taking part in society. If there is a rule that requires me not to lie, why should I subscribe to that rule? And if authority is a problem for the deontologist, so is the inflexibility of the rules. Decisions are more often than not a balancing act, juggling a multitude of contextual factors, to which inflexible rules may give undesirable answers (the famous deontologist Kant claimed that we should never lie, not even when an axe murderer asks us where our children sleep). How do hard and fast rules deal with flexible, changing dilemmas?
The non-cognitivist may tell a lie if he feels inclined that way, or may refrain if reasonable consideration advises against it. The non-cognitivist claims that we are driven by passions, not by intellect: how could something as elusive as a mental thought cause a physical behaviour? Reason may be an advisory guide, but can never be the actual engine or motivator of our actions. Non-cognitivism can easily slide into relativism or subjectivism – the theories that hold that individual people or groups of people know best what’s right for them. What feels right for you may not feel right for me, and who am I to tell you what’s right for you? In a moral context, however, we look for reasons why one action is better than another. We’re not comparing soft drinks, or favourite colours.
In my opinion, all of the above distract from the essence of the issue at stake – the essence of our decisions, and of us as moral agents. Inspired by the existentialist philosopher Jean-Paul Sartre (1905–1980), I would hold that we all have a responsibility to take up the freedom we have as humans, without hiding behind consequences, rules or passions. This is what makes us human – not meekly following rules, not meekly succumbing to our passions and temptations, and not coldly calculating consequences, but shaping our characters through the well-balanced decisions we make throughout our lives.
Whether or not the virtue ethicist will lie depends on a rational balancing of many factors, including circumstances, third persons and self-growth. The virtue ethicist aims to build a character on virtues – good human qualities found in the reasonable mean between two bad extremes, e.g. the virtue of courage between blind brazenness and despicable cowardice. Qualities that have stood the test of time, and that we have historically associated with humanity. Qualities whose exercise demands a rational weighing of all facts, context and complexities. The virtue ethicist takes full responsibility for his actions, accepts and learns from his mistakes, and does not hide behind consequences, rules or passions.