
Functionalism Notes

This is a sample of our (approximately) 8-page Functionalism notes, which we sell as part of the Metaphysics Notes collection, an Upper 2.1 package written at Cambridge in 2008 that contains (approximately) 46 pages of notes across 7 different documents.


The original file is a Word document, whilst this sample is a PDF representation of that file, so the formatting here may contain errors. The original document you'll receive on purchase should have more polished formatting.

Functionalism Revision

The following is a plain text extract of the PDF sample above, taken from our Metaphysics Notes. This text version has had its formatting removed, so pay attention to its contents alone rather than its presentation. The version you download will have its original formatting intact and so will be much prettier to look at.

When and where do mental events happen?
Mental events happen. We know they happen because they happen to us. They include phenomena such as beliefs, desires, sensations, and rational thought. Viewed scientifically and philosophically, they are interesting and tricky because they have mysterious features such as intentionality, qualia, a sense of freedom, and subjectivity - which are all, broadly, features that require consciousness. We want to know what sort of circumstances in the world make such features occur. This is another way of asking the question 'what are mental events?'. If we knew what specially characterised mental events then we would know how to say when and where mental events happen; conversely, if we had a way of specifying exactly the criteria for saying that a mental event was occurring, then we would know what a mental event was.

Mental events have proved problematic because they have features which no purely objective characterisation seems to do justice to. Other phenomena in the world easily admit of objective physical descriptions which satisfy us, but any description of features of the mind which proceeds purely in objective physical terms seems to be lacking something.

Of course, you might be a substance dualist about mental events, as many people have been in the past, notably Descartes: you might think that, on the one hand, there is matter, and on the other hand there is 'thinking stuff' or something like the soul. The soul, which is immaterial and has no extension, takes care of all the mysterious subjective features of the mind. But this view has serious metaphysical problems which need not be gone into here, and most philosophers have rejected substance dualism in favour of physicalism - the view that only physical things exist and that we need only make reference to physical things in explaining the mind.

In the quest for a physicalist theory of mind, one big movement was behaviourism, which took hold in the 1930s alongside logical positivism. According to behaviourism, we should make no recourse to inner psychological states or constructs such as the mind when explaining human behaviour, because claims involving such notions are empirically unverifiable. Instead, the only sensible things to talk about are descriptions of overt behaviour and/or hypothetical statements about what overt behaviour would occur under certain circumstances. Behaviourism suffers from various technical problems, but it is enough to say that it has been found inadequate because not only does it provide no explanation of subjective phenomena, it also rather gratuitously leaves out inner psychological events, that is, events in the brain. Behaviourism was a school of thought in psychology, and treating events in the brain as unobservable and therefore meaningless was, as you can imagine, very unfruitful; the approach has since been rejected.

Psychology now deals with the brain as part of its science, and this has enabled theories of mind to make reference to brain states. One class of theories of mind identifies mental states with certain brain states; these are called identity theories. Identity theories do seem to have the problem of not actually explaining the subjective features of mental events, but there is another danger with identity theories which is relevant to the question of when and where mental events happen.
You might be a token identity theorist and believe that a given mental event is identical with its brain state, but you do not want to imply by this that only brain states can be identical with mental events.

We happen to know that human mental states are related to the brain in particular, but it does not seem necessary to rule out any other kind of medium by definition. After all, whether robots or computers could ever have a mental life like ours is a real empirical question. Neither do we want to rule out the possibility of creatures (perhaps ones that evolved on other planets) whose physical constitution is very different from ours still having what we would want to call minds.

For this reason among others, some philosophers have been led to adopt a functionalist view of the mind. On this view, mental states are constituted solely by their functional role - that is, by their causal relations to other mental states, sensory inputs, and behavioural outputs. And since mental states are identified by a functional role, they are often said to be 'multiply realizable'; that is, they can be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions.

Note that you could still be an identity theorist while being a functionalist. The important claim of functionalism is that a mental state's functional role is what makes it the mental state that it is; you needn't deny that any given mental state is the physical state that constitutes it. You might, after all, think that the fact that mental states are realized by brain states is essential, in which case you would deny multiple realizability. However, multiple realizability is the intuition that has driven all the intrigue and excitement about whether we could one day build conscious robots, and it is even quite crucial to the weaker claim that building robots and computers at least helps us understand our own psychology better. So it certainly seems like an intuition that needs to be addressed; from now on, when I refer to functionalism I will mean functionalism that implies multiple realizability.

Now, the question about computers is very real, because we are building computers that are more and more powerful in terms of information-processing ability, and we really want to know whether this means we are getting closer to producing things that have minds like ours, or whether we are still missing some big thing which makes human minds unique. In all our attempts to understand the brain, computation has been a very useful notion - it is the central analogy used in cognitive science. The brain is, at least, an information-processor, and many of its abilities and processes can be characterised as computations.

Briefly, what is a computation? One neat definition is that a computation is anything that can be simulated by a universal Turing machine. Turing machines are extremely basic abstract symbol-manipulating devices which, despite their simplicity, can be adapted to simulate the logic of any algorithm that could possibly be constructed. An algorithm is a definite list of well-defined instructions for completing a task which, given an initial state, will proceed through a well-defined series of successive states, eventually terminating in an end-state. For any well-defined computational procedure whatever, a universal Turing machine is capable of simulating a machine that will execute that procedure, and it does this by reproducing exactly the input/output behaviour of the machine being simulated. And the exciting fact is that a modern computer is a universal Turing machine (it lacks the unlimited memory of the abstract machine, but memory can always be made larger to meet demand).
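
To make the abstract machine just described a little more concrete, here is a minimal sketch, not part of the original notes: a Turing machine written down as a transition table, together with a loop that steps it until it halts. The language (Python), the names, and the particular example machine are illustrative assumptions rather than anything the argument depends on.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    # transitions maps (state, symbol) -> (next_state, symbol_to_write, head_move),
    # where head_move is -1 (left), 0 (stay) or +1 (right).
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# An illustrative machine: flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "10110"))  # ('halt', '01001_')

The example machine merely flips every bit on its tape and then halts, but it has exactly the shape described above: an initial state, a definite list of well-defined instructions, and a terminating end-state.
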
So the question is not really whether suitably programmed computers can simulate the behaviour produced by computational procedures found in natural animals (because we know that they can),

****************************End Of Sample*****************************
