In Section 4.5 I talked about a simple problem of searching for a target that moves back and forth randomly between two locations (e.g. think of a destroyer hunting for a submarine that moves between two depths).
In the following more general form this problem has never been solved! Suppose that it costs c_i to search location i. If the object (or target) is in location i and we search there, then we will find it with probability \alpha_i. If we do not find it, the object moves according to some two-state Markov chain with known transition matrix P=(p_{ij}). The aim is to minimize the expected cost spent searching until we find the object.
Obviously we can always compute p_t, the probability that the object is in location 1, given the searches we have made so far and the fact that we have not yet found it. Everyone believes that the optimal strategy is of the form "search location 1 iff p_t\geq p^* for some p^*." The number p^* will depend on all the parameters in the problem.
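To make the computation of p_t explicit (this is just Bayes' rule; I take p_{ij} to be the probability that the object moves from location i to location j): if at time t we search location 1 and do not find the object, then the posterior probability that it is in location 1 is

q_t = p_t(1-\alpha_1) / (p_t(1-\alpha_1) + 1 - p_t),

and after the object makes its Markov move,

p_{t+1} = q_t p_{11} + (1-q_t) p_{21}.

An unsuccessful search of location 2 gives instead q_t = p_t / (p_t + (1-p_t)(1-\alpha_2)), followed by the same transition step.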
The optimality of this form of policy has been proved for the special case of c_1=c_2. However, it remains unproved for general c_1,c_2.
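For anyone who wants to experiment, here is a minimal Monte Carlo sketch of the threshold policy. The parameter values are illustrative only, the convention P[i][j] matches the update above, and the function names (belief_update, expected_cost) are my own, not from any standard library.

    import random

    # Illustrative parameters (assumptions for this sketch, not values from the text)
    c = [1.0, 2.0]        # search costs c_1, c_2
    alpha = [0.6, 0.4]    # detection probabilities alpha_1, alpha_2
    P = [[0.8, 0.2],      # P[i][j]: probability the object moves from location i+1 to j+1
         [0.3, 0.7]]

    def belief_update(p, s):
        # Posterior probability the object is in location 1 after an
        # unsuccessful search of location s+1 (s = 0 or 1), then one Markov move.
        if s == 0:
            q = p * (1 - alpha[0]) / (p * (1 - alpha[0]) + (1 - p))
        else:
            q = p / (p + (1 - p) * (1 - alpha[1]))
        return q * P[0][0] + (1 - q) * P[1][0]

    def expected_cost(p0, p_star, runs=100_000):
        # Monte Carlo estimate of the expected cost of the threshold policy
        # "search location 1 iff p_t >= p_star", starting from belief p0.
        total = 0.0
        for _ in range(runs):
            p = p0
            loc = 0 if random.random() < p0 else 1       # object's true location
            while True:
                s = 0 if p >= p_star else 1              # location we search
                total += c[s]
                if s == loc and random.random() < alpha[s]:
                    break                                # object found
                p = belief_update(p, s)
                loc = 0 if random.random() < P[loc][0] else 1  # object moves
        return total / runs

    # Crude scan for the empirically best threshold p^*
    best = min((expected_cost(0.5, t, runs=20_000), t) for t in [i / 20 for i in range(21)])
    print("best threshold ~", best[1], "with estimated cost", round(best[0], 3))

Scanning p^* over a grid like this will not prove anything, of course, but it does let one see the conjectured threshold structure emerge numerically for particular parameter choices.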