Foreign literature:

Adaptive Dynamic Programming: An Introduction

Abstract: In this article, we introduce some recent research trends within the field of adaptive/approximate dynamic programming (ADP), including variations on the structure of ADP schemes, the development of ADP algorithms, and applications of ADP schemes. For ADP algorithms, the point of focus is that iterative ADP algorithms can be sorted into two classes: one class is the iterative algorithm that starts from an initial stable policy; the other does not require an initial stable policy. It is generally believed that the latter requires less computation, at the cost of losing the guarantee of system stability during the iteration process. In addition, many recent papers have provided convergence analyses for the algorithms developed. Furthermore, we point out some topics for future studies.

Introduction

As is well known, there are many methods for designing stable controllers for nonlinear systems. However, stability is only a bare minimum requirement in a system design; ensuring optimality, in addition, guarantees the stability of the nonlinear system. Dynamic programming is a very useful tool for solving optimization and optimal control problems by employing the principle of optimality. In [16], the principle of optimality is expressed as: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision." Dynamic programming spans several settings: one can consider discrete-time or continuous-time systems, linear or nonlinear systems, time-invariant or time-varying systems, deterministic or stochastic systems, and so on.

We first take a look at nonlinear discrete-time (time-varying) dynamical (deterministic) systems. Time-varying nonlinear systems cover most application areas, and discrete time is the basic setting for digital computation. Suppose that one is given a discrete-time nonlinear (time-varying) dynamical system

x(k+1) = F[x(k), u(k), k],   k = 0, 1, 2, …                                  (1)

where x(k) represents the state vector of the system, u(k) denotes the control action, and F is the system function. Suppose that one associates with this system the performance index (or cost)

J[x(i), i] = Σ_{k=i}^{∞} γ^{k−i} U[x(k), u(k), k]                            (2)

where U is called the utility function and γ is the discount factor, with 0 < γ ≤ 1. Note that the function J depends on the initial time i and the initial state x(i), and it is referred to as the cost-to-go of state x(i). The objective of the dynamic programming problem is to choose a control sequence u(k), k = i, i+1, …, so that the function J (i.e., the cost) in (2) is minimized. According to Bellman, the optimal cost from time k on is equal to

J*[x(k)] = min_{u(k)} { U[x(k), u(k), k] + γ J*[x(k+1)] }                    (3)

The optimal control u*(k) at time k is the u(k) that achieves this minimum, i.e.,

u*(k) = arg min_{u(k)} { U[x(k), u(k), k] + γ J*[x(k+1)] }                   (4)

Equation (3) is the principle of optimality for discrete-time systems. Its importance lies in the fact that it allows one to optimize over only one control vector at a time by working backward in time.
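When the state and control spaces are small and finite, the backward recursion in (3) can be carried out exactly. The following Python sketch (an illustration added here, not part of the original article) performs finite-horizon value iteration on a coarsely discretized problem; the grids, the horizon, and the placeholder functions step and utility are assumptions made only for the example.

import numpy as np

# Hypothetical discretization: a small finite state/control grid (illustration only).
states = np.linspace(-1.0, 1.0, 21)       # discretized state values x
controls = np.linspace(-0.5, 0.5, 11)     # discretized control values u
gamma = 0.95                              # discount factor, 0 < gamma <= 1
N = 50                                    # finite horizon used to approximate (2)

def step(x, u):
    """Placeholder system function F in (1)."""
    return 0.9 * x + u

def utility(x, u):
    """Placeholder utility function U in (2)."""
    return x**2 + u**2

def nearest(x):
    """Map a continuous successor state back onto the grid."""
    return np.argmin(np.abs(states - x))

# J[n, s] approximates the optimal cost-to-go from grid state s with n stages to go.
J = np.zeros((N + 1, len(states)))
policy = np.zeros((N, len(states)), dtype=int)

for n in range(1, N + 1):                 # work backward in time, as in (3)
    for si, x in enumerate(states):
        costs = [utility(x, u) + gamma * J[n - 1, nearest(step(x, u))]
                 for u in controls]
        J[n, si] = min(costs)                        # Bellman optimality equation (3)
        policy[n - 1, si] = int(np.argmin(costs))    # greedy control, as in (4)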

In the nonlinear continuous-time case, the system can be described by

ẋ(t) = F[x(t), u(t), t],   t ≥ t0                                            (5)

The cost in this case is defined as

J[x(t)] = ∫_t^∞ U[x(τ), u(τ)] dτ                                             (6)

For continuous-time systems, Bellman's principle of optimality can be applied, too. The optimal cost J*(x0) = min J(x0, u(t)) will satisfy the Hamilton-Jacobi-Bellman equation

−∂J*[x(t)]/∂t = min_{u(t)} { U[x(t), u(t), t] + (∂J*[x(t)]/∂x(t))ᵀ F[x(t), u(t), t] }    (7)

Equations (3) and (7) are called the optimality equations of dynamic programming and are the basis for the implementation of dynamic programming. In the above, if the function F in (1) or (5) and the cost function J in (2) or (6) are known, the solution for u(k) becomes a simple optimization problem. If the system is modeled by linear dynamics and the cost function to be minimized is quadratic in the state and control, then the optimal control is a linear feedback of the states, where the gains are obtained by solving a standard Riccati equation [47].
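As a brief illustration of this linear-quadratic special case (a standard result sketched here for clarity, not reproduced from the original article), consider discrete-time linear dynamics with a quadratic cost and no discounting:

\[
x(k+1) = A x(k) + B u(k), \qquad
J = \sum_{k=0}^{\infty} \left( x(k)^{\top} Q x(k) + u(k)^{\top} R u(k) \right).
\]

Assuming a quadratic optimal cost J*[x(k)] = x(k)ᵀ P x(k) and minimizing over u(k) in (3) (with γ = 1) gives the linear state feedback and the algebraic Riccati equation determining P:

\[
u^{*}(k) = -\left( R + B^{\top} P B \right)^{-1} B^{\top} P A \, x(k), \qquad
P = Q + A^{\top} P A - A^{\top} P B \left( R + B^{\top} P B \right)^{-1} B^{\top} P A .
\]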

On the other hand, if the system is modeled by nonlinear dynamics or the cost function is nonquadratic, the optimal state feedback control will depend upon the solution of the Hamilton-Jacobi-Bellman (HJB) equation [48], which is generally a nonlinear partial differential equation or difference equation. However, it is often computationally untenable to run true dynamic programming due to the backward numerical process required for its solution, i.e., as a result of the well-known "curse of dimensionality" [16], [28]. In [69], three curses are displayed in resource management and control problems to show that the cost function J, which is the theoretical solution of the Hamilton-Jacobi-Bellman equation, is very difficult to obtain, except for systems satisfying some special conditions. Over the years, progress has been made to circumvent the "curse of dimensionality" by building a system, called a "critic," to approximate the cost function in dynamic programming (cf. [10], [60], [61], [63], [70], [78], [92], [94], [95]). The idea is to approximate dynamic programming solutions by using a function approximation structure, such as a neural network, to approximate the cost function.

The Basic Structures of ADP

In recent years, adaptive/approximate dynamic programming (ADP) has gained much attention from many researchers in order to obtain approximate solutions of the HJB equation, cf. [2], [3], [5], [8], [11]–[13], [21], [22], [25], [30], [31], [34], [35], [40], [46], [49], [52], [54], [55], [63], [70], [76], [80], [83], [95], [96], [99], [100]. In 1977, Werbos [91] introduced an approach for ADP that was later called adaptive critic designs (ACDs). ACDs were proposed in [91], [94], [97] as a way of solving dynamic programming problems forward in time. In the literature, there are several synonyms used for "adaptive critic designs" [10], [24], [39], [43], [54], [70], [71], [87], including "approximate dynamic programming" [69], [82], [95], "asymptotic dynamic programming" [75], "adaptive dynamic programming" [63], [64], "heuristic dynamic programming" [46], [93], "neuro-dynamic programming" [17], "neural dynamic programming" [82], [101], and "reinforcement learning" [84].

Bertsekas and Tsitsiklis gave an overview of neuro-dynamic programming in their book [17]. They provided the background, gave a detailed introduction to dynamic programming, discussed neural network architectures and methods for training them, and developed general convergence theorems for stochastic approximation methods as the foundation for the analysis of various neuro-dynamic programming algorithms. They provided the core neuro-dynamic programming methodology, including many mathematical results and methodological insights. They suggested many useful methodologies for applications of neuro-dynamic programming, such as Monte Carlo simulation, on-line and off-line temporal difference methods, the Q-learning algorithm, optimistic policy iteration methods, Bellman error methods, approximate linear programming, and approximate dynamic programming with cost-to-go functions. A particularly impressive success that greatly motivated subsequent research was the development of a backgammon playing program by Tesauro [85]. Here a neural network was trained to approximate the optimal cost-to-go function of the game of backgammon by using simulation, that is, by letting the program play against itself. Unlike chess programs, this program did not use lookahead of many steps, so its success can be attributed primarily to the use of a properly trained approximation of the optimal cost-to-go function.

To implement the ADP algorithm, Werbos [95] proposed a means to get around this numerical complexity by using "approximate dynamic programming" formulations. His methods approximate the original problem with a discrete formulation. The solution of the ADP formulation is obtained through a neural-network-based adaptive critic approach. The main idea of ADP is shown in Fig. 1. He proposed two basic versions, which are heuristic dynamic programming (HDP) and dual heuristic programming (DHP).

HDP is the most basic and widely applied structure of ADP [13], [38], [72], [79], [90], [93], [104], [106]. The structure of HDP is shown in Fig. 2. HDP is a method for estimating the cost function. Estimating the cost function for a given policy only requires samples from the instantaneous utility function U, while models of the environment and the instantaneous reward are needed to find the cost function corresponding to the optimal policy. In HDP, the output of the critic network is Ĵ, which is the estimate of J in equation (2). This is done by minimizing the following error measure over time

‖E_h‖ = Σ_k E_h(k) = Σ_k [ Ĵ(k) − U(k) − γ Ĵ(k+1) ]²                         (8)

where Ĵ(k) = Ĵ[x(k), u(k), k, W_C] and W_C represents the parameters of the critic network. When E_h(k) = 0 for all k, (8) implies that

Ĵ(k) = U(k) + γ Ĵ(k+1)                                                       (9)
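To make the HDP critic update concrete, the following Python sketch (an illustrative example under assumed details, not code from the article) takes one stochastic gradient step on the squared Bellman error in (8) for a simple linear-in-features critic; the feature map phi and the learning rate are placeholder choices.

import numpy as np

gamma = 0.95          # discount factor
alpha = 0.01          # learning rate (placeholder value)

def phi(x):
    """Placeholder feature map; a practical HDP critic is usually a neural network."""
    return np.array([1.0, x, x**2])

def critic(x, w):
    """Critic output J_hat(x) = w^T phi(x), with w playing the role of W_C."""
    return w @ phi(x)

def hdp_critic_step(x_k, U_k, x_k1, w):
    """One HDP critic update: move J_hat(k) toward the target U(k) + gamma * J_hat(k+1), cf. (8)-(9)."""
    target = U_k + gamma * critic(x_k1, w)   # held fixed for this step
    error = critic(x_k, w) - target          # Bellman error E_h(k)
    return w - alpha * error * phi(x_k)      # gradient step on 0.5 * error**2

w_c = np.zeros(3)                            # critic parameters W_C
# A transition (x(k), U(k), x(k+1)) would normally come from simulating (1);
# the numbers below are placeholders.
w_c = hdp_critic_step(x_k=0.4, U_k=0.17, x_k1=0.26, w=w_c)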

Dual heuristic programming (DHP) is a method for estimating the gradient of the cost function, rather than J itself. To do this, a function is needed that describes the gradient of the instantaneous cost function with respect to the state of the system. In the DHP structure, the action network remains the same as in HDP, but the second network, which is called the critic network, has the costate as its output and the state variables as its inputs. The critic network's training is more complicated than that in HDP, since we need to take into account all relevant pathways of backpropagation. This is done by minimizing the following error measure over time

‖E_D‖ = Σ_k E_D(k) = Σ_k ‖ ∂Ĵ(k)/∂x(k) − ∂U(k)/∂x(k) − γ ∂Ĵ(k+1)/∂x(k) ‖²   (10)

where ∂Ĵ(k)/∂x(k) = ∂Ĵ[x(k), u(k), k, W_C]/∂x(k) and W_C represents the parameters of the critic network. When E_D(k) = 0 for all k, (10) implies that

∂Ĵ(k)/∂x(k) = ∂U(k)/∂x(k) + γ ∂Ĵ(k+1)/∂x(k)
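A minimal DHP-style costate target can be written under two simplifying assumptions: the model Jacobian ∂x(k+1)/∂x(k) is available, and the dependence of the control on the state is ignored (the full DHP update also backpropagates through the action and model networks). The Python sketch below is only an illustration of that simplified target; all names are hypothetical.

import numpy as np

gamma = 0.95

def dhp_costate_target(dU_dx, dx1_dx, lam_next):
    """Simplified DHP target for lambda(k) = dJ_hat(k)/dx(k), cf. (10):
    dU(k)/dx(k) + gamma * (dx(k+1)/dx(k))^T lambda(k+1).
    dU_dx:    gradient of the utility w.r.t. the state, shape (n,)
    dx1_dx:   Jacobian of the model output x(k+1) w.r.t. x(k), shape (n, n)
    lam_next: critic output at the next state, shape (n,)"""
    return dU_dx + gamma * dx1_dx.T @ lam_next

# The critic is then trained to regress its output lambda(k) onto this target,
# e.g. by a gradient step on ||lambda(k) - target||^2, analogous to the HDP sketch above.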

2. Theoretical Developments

In [82], Si et al. summarize the cross-disciplinary theoretical developments of ADP, give an overview of DP and ADP, and discuss their relations to artificial intelligence, approximation theory, control theory, operations research, and statistics. In [69], Powell shows how ADP, when coupled with mathematical programming, can solve (approximately) deterministic or stochastic optimization problems that are far larger than anything that could be solved using existing techniques, and he points out directions for improving ADP.

In [95], Werbos further gave two other versions called "action-dependent critics," namely ADHDP (also known as Q-learning [89]) and ADDHP. In these two ADP structures, the control is also an input of the critic network.
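Because an action-dependent critic takes the control as an extra input, its update can be written in Q-learning form. The short Python sketch below (an added illustration with placeholder discretization and learning rate, not code from the article) updates a tabular action-dependent critic Q(x, u) for a cost-minimization problem.

import numpy as np

gamma = 0.95
alpha = 0.1
n_states, n_controls = 21, 11            # placeholder discretization
Q = np.zeros((n_states, n_controls))     # action-dependent critic Q(x, u)

def adhdp_update(s, a, cost, s_next):
    """One action-dependent critic update (Q-learning form for minimization):
    move Q(s, a) toward cost + gamma * min over a' of Q(s_next, a')."""
    target = cost + gamma * Q[s_next].min()
    Q[s, a] += alpha * (target - Q[s, a])

# Example transition indices (s, a, cost, s_next) would come from simulating the system.
adhdp_update(s=10, a=5, cost=0.3, s_next=11)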

In 1997, Prokhorov and Wunsch [70] presented further algorithms based on ACDs. They discussed the design families of HDP, DHP, and globalized dual heuristic programming (GDHP). They suggested some new improvements to the original GDHP design, which promise to be useful for many engineering applications in the areas of optimization and optimal control. Based on one of these modifications, they presented a unified approach to all ACDs, which leads to a generalized training procedure for ACDs. In [26], a realization of ADHDP was suggested: a least-squares support vector machine (SVM) regressor is used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic. The GDHP or ADGDHP structure minimizes the error with respect to both the cost and its derivatives. While it is more complex to do this simultaneously, the resulting behavior is expected to be superior. Thus, in [102], GDHP serves as a reconfigurable controller to deal with both abrupt and incipient changes in the plant dynamics due to faults. A novel fault tolerant control (FTC) supervisor is combined with GDHP for the purpose of improving the performance of GDHP for fault tolerant control. When the plant is affected by a known abrupt fault, the new initial conditions of GDHP are loaded from a dynamic model bank (DMB). On the other hand, if the fault is incipient, the reconfigurable controller maintains performance by continuously modifying itself without supervisor intervention. It is noted that the three networks used to implement the GDHP are trained online, using two distinct networks to implement the critic: the first critic network is trained at every iteration, while the second one is updated with a copy of the first at a given period of iterations.

All the ADP structures can realize the same function, namely, to obtain the optimal control policy, but their computation precision and running time differ from each other. Generally speaking, the computation burden of HDP is low but its computation precision is also low, while GDHP has better precision but its computation takes longer; a detailed comparison can be found in [70].

In [30], [33], and [83], the schematic of direct heuristic dynamic programming is developed. Using the approach of [83], the model network in Fig. 1 is not needed anymore. Reference [101] makes significant contributions to model-free adaptive critic designs. Several practical examples are included in [101] for demonstration, including the single inverted pendulum and the triple inverted pendulum. A reinforcement-learning-based controller design for nonlinear discrete-time systems with input constraints is presented in [36], where the nonlinear tracking control is implemented with filtered tracking error using direct HDP designs. Similar work can also be found in [37]. Reference [54] is also about model-free adaptive critic designs. Two approaches for the training of the critic network are provided in [54]: a forward-in-time approach and a backward-in-time approach. Fig. 4 shows the diagram of the forward-in-time approach. In this approach, we view Ĵ(k) in (8) as the output of the critic network to be trained and choose U(k) + γ Ĵ(k+1) as the training target. Note that Ĵ(k) and Ĵ(k+1) are obtained using state variables at different time instances. Fig. 5 shows the diagram of the backward-in-time approach. In this approach, we view Ĵ(k+1) in (8) as the output of the critic network to be trained and choose (Ĵ(k) − U(k))/γ as the training target. The training approach of [101] can be considered a backward-in-time approach. In Fig. 4 and Fig. 5, x(k+1) is the output of the model network.
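The two training targets are simply the relation (9) read in the two possible directions. The following Python lines (an added illustration; the numbers are placeholders) compute both targets for a single transition:

gamma = 0.95

def forward_in_time_target(U_k, J_hat_k1):
    """Target for J_hat(k): U(k) + gamma * J_hat(k+1), i.e. the right-hand side of (9)."""
    return U_k + gamma * J_hat_k1

def backward_in_time_target(J_hat_k, U_k):
    """Target for J_hat(k+1): (J_hat(k) - U(k)) / gamma, i.e. (9) solved for J_hat(k+1)."""
    return (J_hat_k - U_k) / gamma

# With placeholder values J_hat(k) = 2.0 and U(k) = 0.5:
print(backward_in_time_target(J_hat_k=2.0, U_k=0.5))    # ≈ 1.58, target for J_hat(k+1)
print(forward_in_time_target(U_k=0.5, J_hat_k1=1.58))   # ≈ 2.0, target for J_hat(k)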

An improvement and modification to the two-network architecture, called the "single network adaptive critic (SNAC)," was presented in [65], [66]. This approach eliminates the action network. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load (about half that of the dual-network algorithms), and no approximation error from an action network, since the action network is eliminated. The SNAC approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and the costate variables. Most of the problems in aerospace, automobile, robotics, and other engineering disciplines can be characterized by nonlinear control-affine equations that yield such a relation. SNAC-based controllers yield excellent tracking performance in applications to micro-electro-mechanical systems.
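To illustrate why a control-affine structure lets the control be recovered directly from the costate, here is a standard derivation sketch (added for clarity, with an assumed quadratic control cost; it is not reproduced from the article). Consider

\[
x(k+1) = f[x(k)] + g[x(k)]\, u(k), \qquad
U[x(k), u(k)] = Q[x(k)] + u(k)^{\top} R\, u(k),
\]

and define the costate λ(k+1) = ∂J*[x(k+1)]/∂x(k+1). Setting the derivative of the right-hand side of (3) with respect to u(k) to zero gives

\[
2 R\, u^{*}(k) + \gamma\, g[x(k)]^{\top} \lambda(k+1) = 0
\quad\Longrightarrow\quad
u^{*}(k) = -\tfrac{\gamma}{2}\, R^{-1} g[x(k)]^{\top} \lambda(k+1),
\]

so a single critic network that outputs the costate suffices to compute the control; this is the explicit state-costate relation that the SNAC structure exploits.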
