
Markov chains and coin tosses

into a closed subset of states: either {HH, HHT} or {HTH}. Since it starts at T, it first encounters the path leading to HHT and hence is more likely to take that route. Let us compute the probability of A winning.
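
Before doing so, a minimal Monte Carlo sketch in Python (an illustration that is not part of the original argument, assuming a fair coin) of the race between HTH and HHT already shows the bias:

```python
import random

def race(patterns, rng):
    """Toss a fair coin until one of the patterns appears; return the winner."""
    history = ""
    while True:
        history += rng.choice("HT")
        for p in patterns:
            if history.endswith(p):
                return p

rng = random.Random(0)
trials = 100_000
wins_A = sum(race(("HTH", "HHT"), rng) == "HTH" for _ in range(trials))
print(wins_A / trials)  # noticeably below 1/2: HHT tends to appear first
```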

Denote the event 'A wins' by A. Let X_n be the state of the process at time n (after reaching HTH or HHT, the process stops there). Looking at the graph:

P(A | X_0 = H) = 1/2 · P(A | X_1 = HH) + 1/2 · P(A | X_1 = HT)

Analogous relations hold at the other junctions. Using the Markov property, we may write P(A | X_1 = HT) = P(A | X_0 = HT) (after reaching HT, the process behaves as if it had started from HT in the first place). Doing this for the other nodes, we get

P(A | X_0 = T)  = 1/2 · P(A | X_0 = H)  + 1/2 · P(A | X_0 = T)
P(A | X_0 = H)  = 1/2 · P(A | X_0 = HH) + 1/2 · P(A | X_0 = HT)
P(A | X_0 = HT) = 1/2 · 1               + 1/2 · P(A | X_0 = T)
P(A | X_0 = HH) = 1/2 · P(A | X_0 = HH) + 1/2 · 0

The first equation yields P(A | X_0 = T) = P(A | X_0 = H); since the process starts at T, both equal P(A). Not surprisingly, the last equation implies P(A | X_0 = HH) = 0: there is no path from HH to HTH. It follows that

P(A | X_0 = H) = 1/2 · P(A | X_0 = HT) = 1/2 · (1/2 + 1/2 · P(A | X_0 = H)),

and thus

P(A) = P(A | X_0 = H) = 1/3.
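
As a check on this value, the four relations above can be solved mechanically; the sketch below (not part of the original article, assuming the sympy library is available) solves them symbolically and confirms the answer:

```python
from sympy import Eq, Rational, solve, symbols

pT, pH, pHT, pHH = symbols("pT pH pHT pHH")
half = Rational(1, 2)

# P(A | X_0 = s) for each transient state s; the absorbing states contribute
# 1 (HTH, A wins) and 0 (HHT, A loses) directly.
eqs = [
    Eq(pT,  half * pH  + half * pT),    # from T:  heads -> H,   tails -> T
    Eq(pH,  half * pHH + half * pHT),   # from H:  heads -> HH,  tails -> HT
    Eq(pHT, half * 1   + half * pT),    # from HT: heads -> HTH, tails -> T
    Eq(pHH, half * pHH + half * 0),     # from HH: heads -> HH,  tails -> HHT
]
print(solve(eqs, [pT, pH, pHT, pHH]))   # {pT: 1/3, pH: 1/3, pHT: 2/3, pHH: 0}
```
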
Before we move to the general case, let us consider a somewhat simpler problem: how many times, on average, do you have to toss a coin to obtain a chosen pattern?

Expected waiting time

It turns out that the average waiting time depends on whether the pattern overlaps a shifted (translated) copy of itself. Rather than trying to trace through all the loops and junctions on the graph, we will use an analogy to analyse the problem. The construction is taken from [1].
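
Before turning to that construction, a small simulation sketch (mine, not from [1], assuming a fair coin) makes the claim concrete: HH, which overlaps a shifted copy of itself, takes about 6 tosses on average to appear, while HT takes about 4.

```python
import random

def waiting_time(pattern, rng):
    """Number of fair-coin tosses until `pattern` first appears."""
    window, tosses = "", 0
    while not window.endswith(pattern):
        window = (window + rng.choice("HT"))[-len(pattern):]
        tosses += 1
    return tosses

rng = random.Random(1)
trials = 100_000
for pattern in ("HT", "HH"):
    avg = sum(waiting_time(pattern, rng) for _ in range(trials)) / trials
    print(pattern, round(avg, 2))   # roughly 4 for HT, 6 for HH
```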

Suppose that an infinite number of players sit at a casino table playing the following game. In each turn, the croupier takes in bets from the players and flips a coin. The game continues until the pattern A = a_1 a_2 ... a_n shows up. At the start of the i-th turn, player i enters the game and puts £1 on a_1. If a_1 shows on the coin, he wins £2 and bets it on a_2 in the next turn, then £4 on a_3, and so on. The player stops betting
