I have only been able to say a few things about Brownian motion and the Wiener process. The Wikipedia articles referenced in the previous sentence are good and contain some nice video demonstrations of things like self-similar scaling. I reviewed only the very basics, but enough for us to develop a simple stochastic differential equation that corresponds to the optimality equation of dynamic programming in the case of continuous time and a continuous state space. Although Brownian motion is a subtle thing, one can infer (or at least guess) many of its properties by thinking about the simple discrete-time symmetric random walk, of which Brownian motion is a scaling limit.
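As a minimal illustration of that limit (my own sketch in Python, not part of the notes): rescaling an $n$-step symmetric random walk on $[0,T]$ by $\sqrt{\Delta t}$ gives a process whose endpoint should be approximately $N(0,T)$, consistent with the Wiener process.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_walk(n, T=1.0):
    """Symmetric random walk with n steps on [0, T], rescaled by the
    square root of the step size so it approximates Brownian motion."""
    dt = T / n
    steps = rng.choice([-1.0, 1.0], size=n)   # +/-1 each with probability 1/2
    return np.concatenate([[0.0], np.cumsum(steps) * np.sqrt(dt)])

# As n grows, the endpoint B(T) should look N(0, T): mean ~ 0, variance ~ T.
for n in [10, 100, 10_000]:
    samples = np.array([scaled_walk(n)[-1] for _ in range(2_000)])
    print(f"n={n:6d}  mean={samples.mean():+.3f}  var={samples.var():.3f}")
```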
There are many interesting things to be said about Brownian motion. Paul Lévy's Forgery Theorem for Brownian motion implies that in two dimensions, almost surely, every arc of the path contains the complete works of Shakespeare in the handwriting of Bacon.
I mentioned in the lecture that it is initially surprising to see that in Example 19.4 the cost is greatest when there is no noise. This is a little counterintuitive, so we should try to understand why. I think it is because $C_0$ and $C_1$ are almost the same, and $L$ is small compared to $Q$, so we can let the noise carry us to the endpoints while expending only minimal effort (which is costed at $Qu^2$). If $C_0$ were much less than $C_1$, or if $L$ were larger compared to $Q$, then the noise would not be helpful and we would expect Figure 4 to look different.
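To make that intuition concrete, here is a small Monte Carlo sketch. It is my own toy reconstruction, not the model of Example 19.4 itself: a state on $[0,1]$ with assumed dynamics $dx = u\,dt + \sigma\,dB$, stopped at whichever endpoint is hit first with terminal cost $C_0$ or $C_1$, and running cost $(L + Qu^2)\,dt$. Under the do-nothing policy $u=0$ the expected cost falls as $\sigma$ grows, because the noise reaches an endpoint sooner while the control cost stays at zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values, chosen only for illustration.
C0, C1 = 1.0, 1.2   # terminal costs at x = 0 and x = 1
L = 0.05            # running cost per unit time
dt = 1e-3           # Euler time step

def cost_do_nothing(x0, sigma, n_paths=1_000, t_max=50.0):
    """Average cost of the u = 0 policy: diffuse until an endpoint is hit."""
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while 0.0 < x < 1.0 and t < t_max:
            x += sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        # Rare truncated paths (t = t_max) are charged C0 for simplicity.
        total += L * t + (C1 if x >= 1.0 else C0)
    return total / n_paths

for sigma in [0.25, 0.5, 1.0]:
    print(f"sigma={sigma:4.2f}  E[cost] ~ {cost_do_nothing(0.5, sigma):.3f}")
```

In this toy model the expected cost from $x_0$ under $u=0$ is $L\,x_0(1-x_0)/\sigma^2 + x_0 C_1 + (1-x_0)C_0$, which is decreasing in $\sigma$; with $\sigma = 0$ the uncontrolled process never stops, so some costly control effort becomes unavoidable.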
The last part of today's session was about risk-sensitive optimal control. I have corrected the small sign errors that were in the notes. If you would like to learn more about this topic then I recommend this paper:
P. Whittle. Risk-sensitivity, large deviations and stochastic control. European Journal of Operational Research 73 (1994) 295–303.
I think it is the easiest one to start with. Further references are given at the end of that paper.
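As a small numerical aside (my own sketch, not taken from the notes or the paper, and with sign conventions that vary between authors): the risk-sensitive criterion replaces expected cost $E[C]$ by $\theta^{-1}\log E[e^{\theta C}]$, which for small $\theta$ expands as $E[C] + \tfrac{\theta}{2}\operatorname{Var}(C) + O(\theta^2)$, so a risk-averse controller ($\theta > 0$) penalises variance and a risk-seeking one ($\theta < 0$) welcomes it.

```python
import numpy as np

rng = np.random.default_rng(2)

def risk_sensitive_cost(samples, theta):
    """theta**-1 * log E[exp(theta * C)], computed stably via log-sum-exp."""
    if theta == 0.0:
        return samples.mean()          # risk-neutral limit
    m = (theta * samples).max()        # shift to avoid overflow in exp
    lse = m + np.log(np.mean(np.exp(theta * samples - m)))
    return lse / theta

# Two cost distributions with the same mean but different variance.
tight = rng.normal(1.0, 0.1, size=100_000)
loose = rng.normal(1.0, 0.5, size=100_000)

for theta in [-2.0, 0.0, 2.0]:
    print(f"theta={theta:+.1f}  tight={risk_sensitive_cost(tight, theta):.3f}"
          f"  loose={risk_sensitive_cost(loose, theta):.3f}")
```

For a normal cost with mean $\mu$ and variance $\sigma^2$ the criterion is exactly $\mu + \theta\sigma^2/2$, so the printout shows the two distributions agreeing at $\theta = 0$ and separating as $|\theta|$ grows.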