
Operational Research Techniques Notes

Markov Chains Notes

Approximately 104 pages

In-depth, typed notes covering the Operational Research Techniques (OR202.1) course at LSE (London School of Economics), which is part of the Operational Research Methods (OR202) course along with Mathematical Programming (OR202.2). Covers the full content of the course, including the following topics:

- Flowshop Scheduling
- Replacement Theory
- Critical Path Analysis
- PERT Analysis
- Decision Theory
- Game Theory
- Simulation
- Heuristic Methods
- Travelling Salesman Problem
- Queuin...

The following is a plain-text extract of the PDF sample above, taken from our Operational Research Techniques Notes:

Lecture 15: Markov Chains

Summary

Introduction

- Stochastic processes = processes that evolve over time in a probabilistic manner
- Most processes are stochastic; if a process is not stochastic, the future is fully determined
- We make the assumption that a stochastic process is fully described by two sets of information:
  - The current state
  - The transition probabilities
- Stochastic processes satisfying this assumption are Markov processes
- The simplifying assumption is that history does not matter
- Markov chain = Markov process in discrete time (e.g. weeks or generations)

Example of modelling a Markov Chain

We can use a Markov chain to model the future class structure of a society:

- Current state = current class structure (what proportion of individuals are in each class)
- Transition probabilities = e.g. the probability that a son will be in each class, given that his father was upper class, etc.
- Therefore the future class structure depends only on the current one, and we do not need to know about the past
- Another assumption that is commonly made:
  - Stationary transition probabilities = the transition probabilities stay the same over time
  - With this we can model the long-run equilibrium state
  - The transition probabilities determine the long-run outcomes, not the current distribution
- Good at modelling market share in the short run

Markov Chains

- Examples include:
  - Brand switching in consumer purchases
  - Changes in social class over generations
  - Changes in staff employed at different levels in firms
  - Progress of a disease in populations
- Models movement between different states over time
- In any time period (stage), a unit will be in one and only one state
- Between stages there can be a transition to any of a number of other states

Transition Probabilities

- Transition probability p_ij from state i to state j: in a Markov chain it depends only on i and j, and not on how state i was reached
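The social-class example above can be sketched in a few lines of Python. The transition matrix and starting distribution below are illustrative assumptions, not figures from the notes; the point is the mechanics: one generation forward is a single vector-matrix product, and under stationary transition probabilities repeated multiplication converges to the long-run equilibrium.

```python
import numpy as np

# Hypothetical transition probabilities between classes (each row sums to 1):
# P[i][j] = probability a son is in class j given his father was in class i.
# States: 0 = upper, 1 = middle, 2 = lower (illustrative numbers only).
P = np.array([
    [0.45, 0.48, 0.07],
    [0.05, 0.70, 0.25],
    [0.01, 0.50, 0.49],
])

# Assumed current class structure (proportion of individuals in each class).
pi = np.array([0.10, 0.40, 0.50])

# One generation forward: by the Markov property, the current distribution
# and the transition probabilities are all we need.
pi_next = pi @ P

# Long-run equilibrium under stationary transition probabilities:
# iterate pi <- pi P until the distribution stops changing.
pi_eq = pi.copy()
for _ in range(1000):
    updated = pi_eq @ P
    if np.allclose(updated, pi_eq, atol=1e-12):
        break
    pi_eq = updated

print("after one generation:", pi_next)
print("long-run equilibrium:", pi_eq)
```

Note that `pi_eq` satisfies `pi_eq @ P == pi_eq` (to numerical tolerance), and the same equilibrium is reached from any starting distribution — which is exactly the sense in which the transition probabilities, not the current distribution, determine the long-run outcome.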
