Becky Tun
15/10/07
Do agents necessarily have reasons to care about others? If so, what are they?
I am going to call caring about others 'altruism' and treat it as a moral term, that is, a term referring to a moral phenomenon. In so far as altruism involves a moral belief, I will define it as the belief that other people's benefit is at least as important as your own. Definitions of altruism also note that we derive gratification vicariously, or from the responses of the people we help, and that we are more likely to behave altruistically towards those we are close to, for obvious reasons. These too could be called aspects of the ethical nature of altruism, although I would like to maintain a distinction between the proposed moral belief 'that other people's happiness is as important as mine' and other states or capacities that dispose us to behave altruistically, such as love or empathy, because whether you admit beliefs, or dispositions, or perhaps both, into the moral sphere will inform what kind of normative theory you adopt if you want to build altruism into an ethical system. Still, as we will see, all these aspects of the nature of altruism are intricately tied together, both in their causes and in the way they play out in our behaviour.
First, in order to answer the question, I want to make some distinctions. There are causes of why caring about others happens at all, and then there are our reasons for caring about others, and these are two separate things. Here, the reasons for caring about others consist in the states that dispose us to behave altruistically towards others, some of which will be conscious moral beliefs, thus embodying an important sense of the word 'reasons'. There is also the sense in which 'reasons' can mean ethical truths, in which case they are not part of the state of the agent at all; indeed the agent need not even apprehend them (more about this later).
The causes are the causes of our having such psychological states, which will be, at bottom, an evolutionary explanation of altruism (both as a social institution and as an aspect of human nature), noting, importantly, that while some aspects of altruism may be adaptive, others may be side-effects of adaptations. So I have introduced three notions: causes, psychological reasons and normative reasons. The importance of the distinction between causes and psychological reasons is, at least in part, that psychological reasons can be taken into the moral sphere while causes cannot. That is, a normative theory can take psychological reasons for action as ethical objects within the theory (especially given that some of these psychological reasons consist in moral judgements). That is not to say that no moralists commit the fallacy of bringing causes into the moral sphere, as when philosophers turn the principles of evolution into imperatives. This fallacy is tempting partly because people find it easy to think that the evolutionary principles which we believe gave us certain dispositions, capacities and beliefs can themselves somehow be taken as the (perhaps unconscious, or conditioned-away) reasons for our actions. This is wrong: evolution programmed us with certain psychological mechanisms; these mechanisms involve having certain (conscious and unconscious) reasons for action; and these reasons are not in contact with the principles that governed the process of evolution, except when philosophers unnecessarily make a conscious decision to translate evolutionary principles into reasons for action. For instance, someone committing this fallacy might say that when we act in the interests of our children we are acting on an unconscious desire to propagate our genes. In fact we have no such desire. This is shown by the fact that some people