
Markov Chains Notes


Lecture 15: Markov Chains Summary

• Introduction

• Markov Chains

Introduction

• Stochastic Processes = Processes that evolve over time in a probabilistic manner
○ Most processes are stochastic
 If not, the future is fully determined

• We make the assumption that stochastic processes are fully described by two sets of information:
○ The current state
○ Transition probabilities

• Stochastic processes satisfying these assumptions are Markov Processes

• The simplifying assumption is that, given the current state, history does not matter (written as a formula below)
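
Written out in standard notation (the notes state this only in words), the Markov property for a process X_0, X_1, X_2, ... is:

P(X_{t+1} = j | X_t = i, X_{t-1} = i_{t-1}, ..., X_0 = i_0) = P(X_{t+1} = j | X_t = i)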

• Markov Chain = Markov Process in discrete time (e.g. weeks or generations)

Example of modelling a Markov Chain

We can use a Markov Chain to model the future class structure of a society:

• Current state = current class structure (what proportion of individuals are in each class)

• Transition probabilities = e.g. the probability that the son will be in each class given that his father was upper class, etc.

• ∴ the future class structure depends only on the current one; we do not need to know about the past (see the sketch below)
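
A minimal Python sketch of this example, assuming a three-class society (upper / middle / lower) and illustrative, made-up transition probabilities; the real figures would have to be estimated from data:

import numpy as np

# Row i gives the probabilities of the son's class given the father's class i.
# State order: upper, middle, lower. Each row sums to 1 (made-up numbers).
P = np.array([
    [0.45, 0.45, 0.10],  # father upper
    [0.10, 0.65, 0.25],  # father middle
    [0.05, 0.45, 0.50],  # father lower
])

# Current state: the proportion of individuals in each class.
current = np.array([0.10, 0.50, 0.40])

# Class structure one generation on: only `current` and `P` are needed.
next_generation = current @ P
print(next_generation)  # [0.115 0.55 0.335]

Nothing about earlier generations enters the calculation, which is exactly the Markov property.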

• Another assumption that is commonly made:
○ Stationary transition probabilities = the transition probabilities stay the same over time
○ With this we can model the long run equilibrium state (see the sketch after this list)
○ The transition probabilities determine the long run outcomes
 Not the current distribution
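
A sketch of computing that long-run equilibrium in Python, assuming stationary transition probabilities and reusing the illustrative, made-up matrix P from the class-structure sketch above; repeatedly applying P (power iteration) is one standard way to find the distribution pi satisfying pi = pi P:

import numpy as np

# Illustrative transition matrix from the class-structure sketch (made-up numbers).
P = np.array([
    [0.45, 0.45, 0.10],
    [0.10, 0.65, 0.25],
    [0.05, 0.45, 0.50],
])

dist = np.array([0.10, 0.50, 0.40])  # any starting distribution works
for _ in range(1000):
    new_dist = dist @ P              # apply one more generation
    if np.allclose(new_dist, dist):  # stopped changing: pi = pi P
        break
    dist = new_dist
print(dist)  # the long-run equilibrium class structure

Whatever starting distribution is used, the loop settles on the same equilibrium, which illustrates the point above: the transition probabilities, not the current distribution, determine the long-run outcome.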

• Good at modelling market share in the short run

Markov Chains

• Examples include:
○ Brand switching in consumer purchases
○ Changes in social class over generations
○ Changes in staff employed at different levels in firms
○ Progress of a disease in populations

• Models movement between different states over time

• In any time period (stage) a unit will be in one and only one state

• From any state, a unit can make a transition to any of a number of other states (or remain in the same state)

Transition Probabilities
○ Transition probability from state i to state j: p_ij = P(X_{t+1} = j | X_t = i)

○ In Markov chains, p_ij depends only on i and j, and not on how state i was reached (a simulation sketch follows below)
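
A sketch of simulating one unit's path through a chain, again using the illustrative, made-up matrix P from the sketches above; at each stage the next state is drawn using only the current state's row of P:

import numpy as np

# Illustrative transition matrix (made-up numbers); states: upper, middle, lower.
P = np.array([
    [0.45, 0.45, 0.10],
    [0.10, 0.65, 0.25],
    [0.05, 0.45, 0.50],
])
states = ["upper", "middle", "lower"]

rng = np.random.default_rng(0)
state = 1  # start in "middle"
path = [states[state]]
for _ in range(5):
    state = rng.choice(3, p=P[state])  # the draw uses only row P[state]
    path.append(states[state])
print(path)

The draw at each step looks only at P[state], so how the unit reached its current state is irrelevant.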
