Markov Chains

4.1 INTRODUCTION AND EXAMPLES

Consider a stochastic process $\{X_n, n = 0, 1, 2, \ldots\}$ that takes on a finite or countable number of possible values. Unless otherwise mentioned, this set of possible values will be denoted by the set of nonnegative integers $\{0, 1, 2, \ldots\}$. If $X_n = i$, then the process is said to be in state $i$ at time $n$. We suppose that whenever the process is in state $i$, there is a fixed probability $P_{ij}$ that it will next be in state $j$. That is, we suppose that

(4.1.1)  $P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P_{ij}$

for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \ge 0$. Such a stochastic process is known as a Markov chain. Equation (4.1.1) may be interpreted as stating that, for a Markov chain, the conditional distribution of any future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$ and the present state $X_n$, is independent of the past states and depends only on the present state. This is called the Markovian property. The value $P_{ij}$ represents the probability that the process will, when in state $i$, next make a transition into state $j$. Since probabilities are nonnegative and since the process must make a transition into some state, we have that

$P_{ij} \ge 0$, $i, j \ge 0$;  $\sum_{j=0}^{\infty} P_{ij} = 1$, $i = 0, 1, \ldots$

Let $\mathbf{P}$ denote the matrix of one-step transition probabilities $P_{ij}$, so that

$\mathbf{P} = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}$
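The transition matrix is all that is needed to simulate a path of the chain: from state $i$, the next state is drawn according to row $i$ of $\mathbf{P}$. A minimal Python sketch (the two-state matrix is an illustrative assumption, not taken from the text):

    import numpy as np

    def simulate_chain(P, start, n_steps, rng=None):
        """Simulate n_steps transitions of a Markov chain with one-step matrix P."""
        rng = np.random.default_rng() if rng is None else rng
        path = [start]
        for _ in range(n_steps):
            # Row P[path[-1]] is the conditional distribution of the next state.
            path.append(int(rng.choice(len(P), p=P[path[-1]])))
        return path

    # Hypothetical two-state chain, used only to exercise the function.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    print(simulate_chain(P, start=0, n_steps=20))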

EXAMPLE 4.1(A) The M/G/1 Queue. Suppose that customers arrive at a service center in accordance with a Poisson process with rate $\lambda$. There is a single server and those arrivals finding the server free go immediately into service; all others wait in line until their service turn. The service times of successive customers are assumed to be independent random variables having a common distribution $G$; and they are also assumed to be independent of the arrival process.

The above system is called the M/G/1 queueing system. The letter M stands for the fact that the interarrival distribution of customers is exponential, G for the service distribution; the number 1 indicates that there is a single server.

If we let $X(t)$ denote the number of customers in the system at $t$, then $\{X(t), t \ge 0\}$ would not possess the Markovian property that the conditional distribution of the future depends only on the present and not on the past. For if we knew the number in the system at time $t$, then, to predict future behavior, whereas we would not care how much time had elapsed since the last arrival (since the arrival process is memoryless), we would care how long the person in service had already been there (since the service distribution $G$ is arbitrary and therefore not memoryless).

As a means of getting around the above dilemma let us only look at the system at moments when customers depart. That is, let $X_n$ denote the number of customers left behind by the $n$th departure, $n \ge 1$. Also, let $Y_n$ denote the number of customers arriving during the service period of the $(n+1)$st customer. When $X_n > 0$, the $n$th departure leaves behind $X_n$ customers, of which one enters service and the other $X_n - 1$ wait in line. Hence, at the next departure the system will contain the $X_n - 1$ customers that were in line in addition to any arrivals during the service time of the $(n+1)$st customer. Since a similar argument holds when $X_n = 0$, we see that

(4.1.2)  $X_{n+1} = \begin{cases} X_n - 1 + Y_n & \text{if } X_n > 0 \\ Y_n & \text{if } X_n = 0 \end{cases}$

Since the $Y_n$, $n \ge 1$, represent the numbers of arrivals in nonoverlapping service intervals, it follows, the arrival process being a Poisson process, that they are independent and

(4.1.3)  $P\{Y_n = j\} = \int_0^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!}\, dG(t)$,  $j = 0, 1, \ldots$

From (4.1.2) and (4.1.3) it follows that $\{X_n, n = 1, 2, \ldots\}$ is a Markov chain with transition probabilities given by

$P_{0j} = \int_0^{\infty} e^{-\lambda t} \frac{(\lambda t)^j}{j!}\, dG(t)$,  $j \ge 0$,

$P_{ij} = \int_0^{\infty} e^{-\lambda t} \frac{(\lambda t)^{j-i+1}}{(j-i+1)!}\, dG(t)$,  $j \ge i - 1$, $i \ge 1$,

$P_{ij} = 0$  otherwise.
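Equation (4.1.3) can be evaluated numerically once a concrete service distribution is chosen. A sketch assuming, purely for illustration, that $G$ is exponential with rate $\mu$, in which case the integral also has a closed geometric form that serves as a check:

    import math
    from scipy.integrate import quad

    lam, mu = 1.0, 1.5   # hypothetical arrival rate and (assumed) service rate

    def p_y(j):
        """P{Y_n = j} from (4.1.3) with dG(t) = mu*exp(-mu*t) dt (assumed exponential G)."""
        integrand = lambda t: (math.exp(-lam * t) * (lam * t) ** j / math.factorial(j)
                               * mu * math.exp(-mu * t))
        value, _ = quad(integrand, 0, math.inf)
        return value

    for j in range(5):
        geometric = (mu / (lam + mu)) * (lam / (lam + mu)) ** j   # closed-form check
        print(j, round(p_y(j), 6), round(geometric, 6))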

EXAMPLE 4.1(B) The G/M/1 Queue. Suppose that customers arrive at a single-server center in accordance with an arbitrary renewal process having interarrival distribution $G$. Suppose further that the service distribution is exponential with rate $\mu$. If we let $X_n$ denote the number of customers in the system as seen by the $n$th arrival, it is easy to see that the process $\{X_n, n \ge 1\}$ is a Markov chain. To compute the transition probabilities $P_{ij}$ for this Markov chain, let us first note that, as long as there are customers to be served, the number of services in any length of time $t$ is a Poisson random variable with mean $\mu t$. This is true since the time between successive services is exponential and, as we know, this implies that the number of services thus constitutes a Poisson process. Therefore,

$P_{i, i+1-j} = \int_0^{\infty} e^{-\mu t} \frac{(\mu t)^j}{j!}\, dG(t)$,  $j = 0, 1, \ldots, i$,

which follows since if an arrival finds $i$ in the system, then the next arrival will find $i + 1$ minus the number served, and the probability that $j$ will be served is easily seen (by conditioning on the time between the successive arrivals) to equal the right-hand side of the above. The formula for $P_{i0}$ is a little different (it is the probability that at least $i + 1$ Poisson events occur in a random length of time having distribution $G$) and thus is given by

$P_{i0} = \int_0^{\infty} \sum_{k=i+1}^{\infty} e^{-\mu t} \frac{(\mu t)^k}{k!}\, dG(t)$,  $i \ge 0$.

Remark The reader should note that in the previous two examples we were able to discover an embedded Markov chain by looking at the process only at certain time points, and by choosing these time points so as to exploit the lack of memory of the exponential distribution. This is often a fruitful approach for processes in which the exponential distribution is present.
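Row $i$ of the embedded G/M/1 chain can be assembled directly from these two formulas. A sketch assuming, for illustration only, deterministic interarrival times ($G$ concentrated at a single point $c$), so each integral collapses to a Poisson probability:

    import math

    mu, c = 2.0, 1.0  # hypothetical service rate and fixed interarrival time

    def poisson_pmf(k, mean):
        return math.exp(-mean) * mean ** k / math.factorial(k)

    def transition_row(i, n_states):
        """Row i of the embedded G/M/1 chain, with G deterministic at c (assumed)."""
        row = [0.0] * n_states
        for j in range(i + 1):                 # j customers served between arrivals
            row[i + 1 - j] = poisson_pmf(j, mu * c)
        # P_{i0}: at least i+1 services occur before the next arrival.
        row[0] = 1.0 - sum(poisson_pmf(k, mu * c) for k in range(i + 1))
        return row

    row = transition_row(3, n_states=6)
    print([round(p, 4) for p in row], "sum =", round(sum(row), 6))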

18、he exponential distribution. This is often a fruitful approach for processes in which the exponential distribution is present.EXAMPLE4.1(C) Sums of Independent, Identically Distributed Random variables. The General Random Walk.Let Xi,i1,be independent and identically distributed with PXi=j=aj,j=0,1,

19、.If we let S0=0 and S=,ThenSn,n0 is a Markov chain for whichPij=aj-i Sn,n0 is called the general random walk and will be studied in chapter 7.EXAMPLE 4.1(D) The Absolute value of the Simple Random Wallk .The random walk Sn ,n1,where Sn = is said to be a simple random walk if for some p, 0p0,then def
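For a chain with finitely many states, the Chapman-Kolmogorov equations say precisely that the matrix of $n$-step transition probabilities is the $n$th power of $\mathbf{P}$, so $P^n_{ij}$ can be computed by matrix powers. A quick numerical check (the three-state matrix is a made-up illustration):

    import numpy as np

    # Hypothetical three-state transition matrix.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2]])

    n, m = 2, 3
    Pn = np.linalg.matrix_power(P, n)
    Pm = np.linalg.matrix_power(P, m)
    Pnm = np.linalg.matrix_power(P, n + m)

    # Chapman-Kolmogorov: P^(n+m) = P^n P^m, entrywise.
    print(np.allclose(Pnm, Pn @ Pm))  # True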

The period of state $i$ is defined as the greatest common divisor of all integers $n \ge 1$ for which $P^n_{ii} > 0$. (If $P^n_{ii} = 0$ for all $n > 0$, then define the period of $i$ to be infinite.) A state with period 1 is said to be aperiodic. Let $d(i)$ denote the period of $i$. We now show that periodicity is a class property.

PROPOSITION 4.2.2 If $i \leftrightarrow j$, then $d(i) = d(j)$.

Proof Let $m$ and $n$ be such that $P^n_{ij} P^m_{ji} > 0$, and suppose that $P^s_{ii} > 0$. Then

$P^{n+m}_{jj} \ge P^m_{ji} P^n_{ij} > 0$,  $P^{n+m+s}_{jj} \ge P^m_{ji} P^s_{ii} P^n_{ij} > 0$,

where the second inequality follows, for instance, since the left-hand side represents the probability that starting in $j$ the chain will be back in $j$ after $n + m + s$ transitions, whereas the right-hand side is the probability of the same event subject to the further restriction that the chain is in $i$ both after $m$ and after $m + s$ transitions. Hence, $d(j)$ divides both $n + m$ and $n + m + s$; thus $d(j)$ divides $s$ whenever $P^s_{ii} > 0$. Therefore, $d(j)$ divides $d(i)$. A similar argument yields that $d(i)$ divides $d(j)$; thus $d(i) = d(j)$.

For any states $i$ and $j$ define $f^n_{ij}$ to be the probability that, starting in $i$, the first transition into $j$ occurs at time $n$. Formally,

$f^0_{ij} = 0$,  $f^n_{ij} = P\{X_n = j,\ X_k \ne j,\ k = 1, \ldots, n-1 \mid X_0 = i\}$.

Let $f_{ij} = \sum_{n=1}^{\infty} f^n_{ij}$. Then $f_{ij}$ denotes the probability of ever making a transition into state $j$, given that the process starts in $i$. (Note that for $i \ne j$, $f_{ij}$ is positive if, and only if, $j$ is accessible from $i$.) State $j$ is said to be recurrent if $f_{jj} = 1$, and transient otherwise.
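The first-passage probabilities satisfy a simple recursion obtained by conditioning on the first step: $f^1_{ij} = P_{ij}$ and $f^n_{ij} = \sum_{k \ne j} P_{ik} f^{n-1}_{kj}$ for $n > 1$. A sketch computing them for a made-up finite chain (the recursion is standard; the matrix is an illustrative assumption):

    import numpy as np

    # Hypothetical three-state chain.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.4, 0.2]])

    def first_passage(P, j, n_max):
        """f^n_{ij} for n = 1..n_max via f^n_{ij} = sum_{k != j} P_{ik} f^{n-1}_{kj}."""
        m = len(P)
        f = np.zeros((n_max + 1, m))             # f[n, i] holds f^n_{ij}
        f[1] = P[:, j]                           # first step lands in j directly
        mask = np.arange(m) != j
        for n in range(2, n_max + 1):
            f[n] = P[:, mask] @ f[n - 1, mask]   # avoid j at intermediate steps
        return f[1:]

    f = first_passage(P, j=0, n_max=50)
    print(f.sum(axis=0))   # f_{i0}: close to 1 for every i in this irreducible chain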

PROPOSITION 4.2.3 State $j$ is recurrent if, and only if,

$\sum_{n=1}^{\infty} P^n_{jj} = \infty$.

Proof State $j$ is recurrent if, with probability 1, a process starting at $j$ will eventually return. However, by the Markovian property it follows that the process probabilistically restarts itself upon returning to $j$. Hence, with probability 1, it will return again to $j$. Repeating this argument, we see that, with probability 1, the number of visits to $j$ will be infinite and will thus have infinite expectation. On the other hand, suppose $j$ is transient. Then each time the process returns to $j$ there is a positive probability $1 - f_{jj}$ that it will never again return; hence the number of visits is geometric with finite mean $1/(1 - f_{jj})$. By the above argument we see that state $j$ is recurrent if, and only if,

$E[\text{number of visits to } j \mid X_0 = j] = \infty$.

But, letting

$I_n = \begin{cases} 1 & \text{if } X_n = j \\ 0 & \text{otherwise,} \end{cases}$

it follows that $\sum_{n=0}^{\infty} I_n$ denotes the number of visits to $j$. Since

$E\Big[\sum_{n=0}^{\infty} I_n \,\Big|\, X_0 = j\Big] = \sum_{n=0}^{\infty} P\{X_n = j \mid X_0 = j\} = \sum_{n=0}^{\infty} P^n_{jj}$,

the result follows.

The argument leading to the above proposition is doubly important for it also shows that a transient state will only be visited a finite number of times (hence the name transient). This leads to the conclusion that in a finite-state Markov chain not all states can be transient. To see this, suppose the states are $0, 1, \ldots, M$ and suppose that they are all transient. Then after a finite amount of time (say after time $T_0$) state 0 will never be visited, and after a time (say $T_1$) state 1 will never be visited, and after a time (say $T_2$) state 2 will never be visited, and so on. Thus, after a finite time $T = \max\{T_0, T_1, \ldots, T_M\}$ no states will be visited. But as the process must be in some state after time $T$, we arrive at a contradiction, which shows that at least one of the states must be recurrent.
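The geometric-number-of-visits step in the proof is easy to test by simulation. A minimal sketch, assuming a made-up two-state chain in which state 0 is transient: starting in 0 the chain returns to 0 with probability $f_{00} = 0.5$, so the number of visits should average $1/(1 - f_{00}) = 2$:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical chain: from state 0 return to 0 w.p. 0.5, else absorb in state 1.
    # Then f_00 = 0.5 and the visit count to 0 is geometric with mean 1/(1 - 0.5) = 2.
    P = np.array([[0.5, 0.5],
                  [0.0, 1.0]])

    def visits_to_zero():
        state, visits = 0, 0
        while state == 0:          # once we leave 0 we can never come back
            visits += 1
            state = rng.choice(2, p=P[state])
        return visits

    mean_visits = np.mean([visits_to_zero() for _ in range(100_000)])
    print(mean_visits)  # should be close to 2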

We will use Proposition 4.2.3 to prove that recurrence, like periodicity, is a class property.

COROLLARY 4.2.4 If $i$ is recurrent and $i \leftrightarrow j$, then $j$ is recurrent.

Proof Let $m$ and $n$ be such that $P^n_{ij} > 0$, $P^m_{ji} > 0$. Now for any $s \ge 0$,

$P^{m+n+s}_{jj} \ge P^m_{ji} P^s_{ii} P^n_{ij}$,

and thus

$\sum_s P^{m+n+s}_{jj} \ge P^m_{ji} P^n_{ij} \sum_s P^s_{ii} = \infty$,

and the result follows from Proposition 4.2.3.

EXAMPLE 4.2(A) THE SIMPLE RANDOM WALK. The Markov chain whose state space is the set of all integers and has transition probabilities

$P_{i,i+1} = p = 1 - P_{i,i-1}$,  $i = 0, \pm 1, \ldots$,

where $0 < p < 1$, is called the simple random walk. One interpretation of this process is that it represents the wanderings of a drunken man as he walks along a straight line. Another is that it represents the winnings of a gambler who on each play of the game either wins or loses one dollar. Since all states clearly communicate, it follows from Corollary 4.2.4 that they are either all transient or all recurrent. So let us consider state 0 and attempt to determine if $\sum_{n=1}^{\infty} P^n_{00}$ is finite or infinite.

Since it is impossible to be even (using the gambling model interpretation) after an odd number of plays, we must, of course, have that

$P^{2n-1}_{00} = 0$,  $n = 1, 2, \ldots$

On the other hand, the gambler would be even after $2n$ trials if, and only if, he won $n$ of these and lost $n$ of these. As each play of the game results in a win with probability $p$ and a loss with probability $1 - p$, the desired probability is thus the binomial probability

$P^{2n}_{00} = \binom{2n}{n} p^n (1-p)^n = \frac{(2n)!}{n!\,n!} \big(p(1-p)\big)^n$,  $n = 1, 2, 3, \ldots$

By using an approximation, due to Stirling, which asserts that

$n! \sim n^{n+1/2} e^{-n} \sqrt{2\pi}$,

where we say that $a_n \sim b_n$ when $\lim_{n \to \infty} a_n / b_n = 1$, we obtain

$P^{2n}_{00} \sim \frac{(4p(1-p))^n}{\sqrt{\pi n}}$.

Now it is easy to verify that if $a_n \sim b_n$, then $\sum_n a_n < \infty$ if, and only if, $\sum_n b_n < \infty$. Hence $\sum_n P^{2n}_{00}$ will converge if, and only if,

$\sum_n \frac{(4p(1-p))^n}{\sqrt{\pi n}}$

does. However, $4p(1-p) \le 1$ with equality holding if, and only if, $p = 1/2$. Hence, $\sum_n P^n_{00} = \infty$ if, and only if, $p = 1/2$. Thus, the chain is recurrent when $p = 1/2$ and transient if $p \ne 1/2$.

When $p = 1/2$, the above process is called a symmetric random walk. We could also look at symmetric random walks in more than one dimension. For instance, in the two-dimensional symmetric random walk the process would, at each transition, take one step to the left, right, up, or down, each having probability 1/4. Similarly, in three dimensions the process would, with probability 1/6, make a transition to any of the six adjacent points. By using the same method as in the one-dimensional random walk it can be shown that the two-dimensional symmetric random walk is recurrent, but all higher-dimensional symmetric random walks are transient.
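Both the binomial expression for $P^{2n}_{00}$ and the dichotomy in its partial sums can be checked numerically. A sketch comparing the exact value with the Stirling approximation, and contrasting partial sums at $p = 0.5$ and $p = 0.6$ (the truncation points are arbitrary; probabilities are computed via log-gamma to avoid overflow):

    import math

    def p00_2n(n, p):
        """P^{2n}_{00} = C(2n, n) (p(1-p))^n, computed in log space to avoid overflow."""
        log_val = (math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1)
                   + n * math.log(p * (1 - p)))
        return math.exp(log_val)

    def stirling(n, p):
        return (4 * p * (1 - p)) ** n / math.sqrt(math.pi * n)

    print(p00_2n(50, 0.5), stirling(50, 0.5))   # agree to a fraction of a percent

    # Partial sums: grow without bound (like 2*sqrt(n/pi)) at p = 1/2, converge otherwise.
    for p in (0.5, 0.6):
        for N in (1_000, 10_000):
            print(p, N, round(sum(p00_2n(n, p) for n in range(1, N + 1)), 3))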

COROLLARY 4.2.5 If $i \leftrightarrow j$ and $j$ is recurrent, then $f_{ij} = 1$.

Proof Suppose $X_0 = i$, and let $n$ be such that $P^n_{ij} > 0$. Say that we miss opportunity 1 if $X_n \ne j$. If we miss opportunity 1, then let $T_1$ denote the next time we enter $i$ ($T_1$ is finite with probability 1 by Corollary 4.2.4). Say that we miss opportunity 2 if $X_{T_1 + n} \ne j$. If opportunity 2 is missed, let $T_2$ denote the next time we enter $i$ and say that we miss opportunity 3 if $X_{T_2 + n} \ne j$, and so on. It is easy to see that the opportunity number of the first success is a geometric random variable with mean $1/P^n_{ij}$, and is thus finite with probability 1. The result follows since $i$ being recurrent implies that the number of potential opportunities is infinite.

Let $N_j(t)$ denote the number of transitions into $j$ by time $t$. If $j$ is recurrent and $X_0 = j$, then as the process probabilistically starts over upon transitions into $j$, it follows that $\{N_j(t), t \ge 0\}$ is a renewal process with interarrival distribution $\{f^n_{jj}, n \ge 1\}$. If $X_0 = i$, $i \leftrightarrow j$, and $j$ is recurrent, then $\{N_j(t), t \ge 0\}$ is a delayed renewal process with initial interarrival distribution $\{f^n_{ij}, n \ge 1\}$.

4.3 LIMIT THEOREMS

It is easy to show that if state $j$ is transient, then

$\sum_{n=1}^{\infty} P^n_{ij} < \infty$  for all $i$,

meaning that, starting in $i$, the expected number of transitions into state $j$ is finite.
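This last claim is easy to illustrate for a finite chain: if $j$ is transient, the partial sums $\sum_{n=1}^{N} P^n_{ij}$ stabilize at a finite value for every starting state $i$. A sketch with a made-up chain whose states 0 and 1 leak into an absorbing state:

    import numpy as np

    # Hypothetical chain: states 0 and 1 are transient (mass leaks to absorbing state 2).
    P = np.array([[0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5],
                  [0.0, 0.0, 1.0]])

    total = np.zeros_like(P)
    Pn = np.eye(3)
    for n in range(1, 201):
        Pn = Pn @ P                 # Pn now holds the n-step probabilities P^n
        total += Pn                 # accumulate sum_{n=1}^{N} P^n

    # Column j = 0: sum_n P^n_{i0} is finite for every starting state i.
    print(total[:, 0])              # stabilizes well before N = 200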
