Similar documents
Found 20 similar documents (search time: 46 ms)
1.
In Part I, methods of nonstandard analysis are applied to deterministic control theory, extending earlier work of the author. Results established include compactness of relaxed controls, continuity of solution and cost as functions of the controls, and existence of optimal controls. In Part II, the methods are extended to obtain similar results for partially observed stochastic control. The systems considered are of a form in which the feedback control u depends on information from a digital read-out of the observation process y. The noise in the state equation is controlled along with the drift. Similar methods are applied to a Markov system in the final section.

2.
Classical location theories and models were initially developed for the private sector, so the related operational research literature has emphasized performance measures of efficiency and effectiveness. For public-sector applications, measures of equity become important, yet such measures have received little formal treatment. This paper suggests a locational equity measure, the variance measure, and investigates its properties for tree networks. A fast algorithm (O(M)) to locate the minimum variance point on a tree network is developed, and some numerical results illustrate the variance-optimal location. This work is dedicated to the memory of Professor Jonathan Halpern.
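The variance measure itself is easy to state: for a candidate location, take the variance of the distances to all demand nodes. As a hedged illustration (this is a naive vertex-only search, not the paper's O(M) algorithm, and the true minimum-variance point may lie in the interior of an edge), a brute-force sketch on a small tree:

```python
from collections import deque

def distances_from(tree, src):
    """BFS distances from src in a weighted tree given as {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v, w in tree[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                queue.append(v)
    return dist

def variance_at(tree, p, demand):
    """Variance of the distances from point p to the demand nodes."""
    d = distances_from(tree, p)
    vals = [d[q] for q in demand]
    mean = sum(vals) / len(vals)
    return sum((x - mean) ** 2 for x in vals) / len(vals)

def min_variance_vertex(tree, demand):
    """Naive O(M*n) search over vertices for the minimum-variance location."""
    return min(tree, key=lambda p: variance_at(tree, p, demand))

# A small star-shaped tree: center "c" with leaves "a", "b", "e" at unit distance.
tree = {
    "c": [("a", 1.0), ("b", 1.0), ("e", 1.0)],
    "a": [("c", 1.0)], "b": [("c", 1.0)], "e": [("c", 1.0)],
}
# At the center every leaf is at distance 1, so the variance there is 0.
```

On this example the center is the unique variance-optimal vertex, since any leaf sees distances 0, 2, 2 with positive spread.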

3.
In 1934, Tonelli proved, for free problems with n = 1, that the trajectories of a minimizing sequence are equiabsolutely continuous provided a pointwise growth condition is satisfied everywhere except at the points of an exceptional set, where an additional mild hypothesis, condition (T), is required. The author extended this result to optimal control problems by the use of a uniform growth condition, called (), which is equivalent to Tonelli's pointwise growth condition for free problems with convex integrand (Refs. 4–5, 10). More recently, condition (), or condition (), which is equivalent to () for optimal control problems, was replaced by the much weaker condition () in existence theorems for optimal control problems (Refs. 1–2). In the present paper, we show that this same combination of the growth condition () and condition (T) at the points of the exceptional set is actually a special case of the growth condition (), which does not use the concept of the exceptional set.

4.
5.
Preconditioned conjugate gradient (PCG) methods are widely and successfully used for solving Toeplitz linear systems [59,9,20,5,34,62,6,10,28,45,44,46,49]. Frobenius-optimal preconditioners are chosen in some proper matrix algebras and are defined by minimizing the Frobenius distance from the given Toeplitz matrix. The convergence features of these PCG methods have been naturally studied by means of the Weierstrass–Jackson Theorem [17,36,45], owing to the profound relationship between the spectral features of the Toeplitz matrices generated by the Fourier coefficients of a continuous function f and the analytical properties of the symbol f itself. In this paper, we reverse this point of view by showing that the optimal preconditioners can be used to define both new and already known linear positive operators uniformly approximating the function f. On the other hand, by modifying the Korovkin Theorem to study the Frobenius-optimal preconditioning problem, we provide a new and unifying tool for analyzing all Frobenius-optimal preconditioners in any generic matrix algebra related to trigonometric transforms. Finally, the multilevel case is sketched and discussed by showing that a Korovkin-type theory also holds in a multivariate sense. Received October 1, 1996 / Revised version received May 7, 1998
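One concrete instance of a Frobenius-optimal preconditioner is T. Chan's optimal circulant: among all circulant matrices, the minimizer of the Frobenius distance to an n × n Toeplitz matrix T has an explicit first column built from averaged diagonals of T. A minimal sketch (the function name is mine; the abstract covers general trigonometric-transform algebras, of which the circulant algebra is just one example):

```python
def optimal_circulant_first_column(t, n):
    """First column of the circulant C minimizing ||C - T||_F over all
    circulants, where T is the n x n Toeplitz matrix with diagonals t[k]
    for k = -(n-1)..n-1 (T. Chan's optimal preconditioner):
        c_j = ((n - j) * t_j + j * t_{j-n}) / n,  j = 0..n-1."""
    return [((n - j) * t.get(j, 0.0) + j * t.get(j - n, 0.0)) / n
            for j in range(n)]

# Example: the tridiagonal Toeplitz matrix with 2 on the main diagonal and
# -1 on the first off-diagonals (the 1D discrete Laplacian), n = 4.
n = 4
t = {0: 2.0, 1: -1.0, -1: -1.0}
c = optimal_circulant_first_column(t, n)
```

Each entry of the first column is a weighted average of the two Toeplitz diagonals that wrap onto the same circulant diagonal, which is exactly what the Frobenius minimization forces.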

6.
The results contained herein provide a rigorous formulation of a broad class of differential games with information time lag and present a theoretical analysis for treating such games. This analysis extends the so-called Hamilton–Jacobi theory of optimal control and the main equation analysis developed by Isaacs to treat differential games with information time lag. Necessary and sufficient conditions satisfied by the potential value function are developed to indicate the strategy-synthesis procedure for differential games with information time lag.

7.
In this paper we first derive the verification theorem for nonlinear optimal control problems over time scales. That is, we show that the value function is the only solution of the Hamilton–Jacobi equation, in which the minimum is attained at an optimal feedback controller. Application to the linear-quadratic regulator (LQR) problem gives a feedback optimal controller in terms of the solution of a generalized time-scale Riccati equation, and shows that every optimal solution of the LQR problem must take that form. A connection of the newly obtained Riccati equation with the traditional one is established. Problems with shift in the state variable are also considered. As an important tool for the latter theory we obtain a new formula for the chain rule on time scales. Finally, the corresponding LQR problem with shift in the state variable is analyzed and the results are related to previous ones.
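On the time scale of the integers the generalized Riccati equation specializes to the classical discrete-time Riccati recursion. A minimal scalar sketch of that special case (variable names are mine, and this is only the classical endpoint of the time-scale theory, not the general construction):

```python
def riccati_step(p, a, b, q, r):
    """One backward step of the scalar discrete-time Riccati recursion for
    x_{k+1} = a x_k + b u_k with stage cost q x^2 + r u^2."""
    return q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

def lqr_gain(a, b, q, r, steps=200, p_final=0.0):
    """Iterate the recursion to (near) stationarity and return the
    stationary P and the feedback gain K of the controller u = -K x."""
    p = p_final
    for _ in range(steps):
        p = riccati_step(p, a, b, q, r)
    k = a * b * p / (r + b * b * p)
    return p, k

p, k = lqr_gain(a=1.0, b=1.0, q=1.0, r=1.0)
# The stationary P solves P = 1 + P - P^2 / (1 + P), i.e. P^2 - P - 1 = 0,
# so P is the golden ratio (1 + sqrt(5)) / 2 and K = P / (1 + P) = P - 1.
```

The verification-theorem viewpoint appears here in miniature: the stationary P defines the quadratic value function, and the minimizing u in the Hamilton–Jacobi relation is exactly the feedback u = -Kx.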

8.
《Optimization》2012,61(4):497-513
This paper deals with necessary conditions for optimization problems with infinitely many inequality constraints under various differentiability assumptions. By introducing a second topology N on a topological vector space, we define generalized versions of differentiability and tangential cones. Different choices of N lead to Gâteaux, Hadamard, and weak differentiability with corresponding tangential cones. The general concept is used to derive necessary conditions for local optimal points in the form of inequalities and generalized multiplier rules. Special versions of these theorems are obtained for different differentiability assumptions by choosing N appropriately. An application to approximation theory is given.

9.

Non-convex discrete-time optimal control problems in, e.g., water or power systems, typically involve a large number of variables related through nonlinear equality constraints. The ideal goal is to find a globally optimal solution, and numerical experience indicates that algorithms aiming for Karush–Kuhn–Tucker points often find solutions that are indistinguishable from global optima. In our paper, we provide a theoretical underpinning for this phenomenon, showing that on a broad class of problems the objective can be shown to be an invariant convex function (invex function) of the control decision variables when state variables are eliminated using implicit function theory. In this way, optimality guarantees can be obtained, the exact nature of which depends on the position of the solution within the feasible set. In a numerical example, we show how high-quality solutions are obtained with local search for a river control problem where invexity holds.
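The defining property of an invex function is that every stationary point is a global minimizer, which is why plain local search suffices on the problem class described. A toy illustration (my own example, not the river-control problem from the paper): f(x) = 1 - exp(-x²) is non-convex, yet its only stationary point x = 0 is the global minimum, so gradient descent started far away still finds the global optimum.

```python
import math

def f(x):
    """A non-convex but invex objective: its only stationary point, x = 0,
    is the global minimizer."""
    return 1.0 - math.exp(-x * x)

def grad_f(x):
    return 2.0 * x * math.exp(-x * x)

def gradient_descent(x0, step=0.5, iters=500):
    """Plain local search; for an invex objective this cannot get trapped
    at a non-global stationary point."""
    x = x0
    for _ in range(iters):
        x -= step * grad_f(x)
    return x

x_star = gradient_descent(2.0)  # start well away from the optimum
```

Note that f is not convex (it flattens out for large |x|), so a convexity-based guarantee would not apply; the invexity argument is what rules out spurious local minima.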


10.
《Optimization》2012,61(3):521-537

Strong second-order conditions in mathematical programming play an important role not only as optimality tests but also as an intrinsic feature in the stability and convergence theory of related numerical methods. Besides appropriate first-order regularity conditions, the crucial point is a local growth estimate for the objective, which yields inverse stability information on the solution. In optimal control, similar results are known in the case of continuous control functions, and for bang–bang optimal controls when the state system is linear. The paper provides a generalization of the latter result to bang–bang optimal control problems for systems which are affine-linear w.r.t. the control but depend nonlinearly on the state. Local quadratic growth in terms of the L1 norm of the control variation is obtained under appropriate structural and second-order sufficient optimality conditions.

11.
Summary An optimal control problem is considered in a setting akin to that of the theory of generalized curves. Rather than minimizing a functional depending on pairs of trajectories and controls subject to some constraints, a functional defined on a set of Radon measures is considered; the set of measures is determined by the constraints. An approximation scheme is developed, so that the solution of the optimal control problem can be effected by solving a sequence of nonlinear programming problems. Several existence theorems for this kind of generalized control problem are then proved; the most interesting is the one concerning problems in which the set of allowable controls is unbounded. Received by the editors on 5 February 1975.

12.
Sina Ober-Blöbaum 《PAMM》2016,16(1):821-822
Higher order variational integrators are analyzed and applied to optimal control problems posed with mechanical systems. First, we derive two different kinds of high order variational integrators based on different dimensions of the underlying approximation space. While the first, well-known integrator is equivalent to a symplectic partitioned Runge–Kutta method, the second integrator, denoted the symplectic Galerkin integrator, yields a method which, in general, cannot be written as a standard symplectic Runge–Kutta scheme [1]. Furthermore, we use these integrators for the discretization of optimal control problems. By analyzing the adjoint systems of the optimal control problem and its discretized counterpart, we prove that for these particular integrators optimization and discretization commute [2]. This property guarantees that the accuracy is preserved for the adjoint system, which is also referred to as the Covector Mapping Principle. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
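The lowest-order member of the variational-integrator family is symplectic Euler, and comparing it with explicit Euler on a harmonic oscillator shows why structure preservation matters (a first-order sketch only; the higher-order Galerkin constructions from the abstract are not reproduced):

```python
def explicit_euler(q, p, h, n):
    """Explicit Euler for the harmonic oscillator q' = p, p' = -q.
    Each step multiplies the energy by (1 + h^2), so it drifts without bound."""
    for _ in range(n):
        q, p = q + h * p, p - h * q
    return q, p

def symplectic_euler(q, p, h, n):
    """Symplectic Euler: update p first, then q using the NEW p.
    This is a (first-order) variational integrator; the energy stays
    bounded near its initial value for all time."""
    for _ in range(n):
        p = p - h * q
        q = q + h * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

h, n = 0.1, 1000
e0 = energy(1.0, 0.0)
qe, pe = explicit_euler(1.0, 0.0, h, n)
qs, ps = symplectic_euler(1.0, 0.0, h, n)
```

For this problem symplectic Euler exactly conserves the nearby quadratic form p² + q² - hpq, which pins the true energy within O(h) of e0, while the explicit scheme inflates it by the factor (1 + h²)ⁿ.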

13.
Two classes of Riccati equations arising in the boundary control of parabolic systems are studied by direct methods. The new feature with respect to previous works on this subject is the low regularity of the final data. The classes considered here generalize those of [7] and [5] on one side, and of [14] on the other. Completely new methods are used to obtain the solution of the Riccati equations in both cases. The central theme is the dependence of the solutions on a «symmetric» norm of the final data, yielding these new results as well as a new proof of existence for the related algebraic Riccati equation under more general assumptions. The synthesis of the associated linear-quadratic-regulator problems is easily solved using these results.

14.
A linear quadratic optimal control problem with coexisting initial and persistent disturbances is studied. Upper and lower values and the relevant algebraic Riccati equations (AREs for short) are introduced. Various relations among these values are presented. The solvability of the resulting AREs is shown to be closely related to the solvability of the original optimal control problem. A formula is obtained for the solution of one of the AREs, which is of nonstandard form. Several unexpected features of the original problem are revealed from the standpoint of differential games. Some known results on the so-called H optimal control problem are recovered. This work was partially supported by the Chinese NSF under Grant 19131050, the Chinese State Education Commission Science Foundation, the SEDC Foundation for Young Academics, and the Fok Ying Tung Education Foundation.

15.
A nonlinear time-varying adaptive filter is introduced, and its derivation using optimal control concepts is given in detail. The filter, which is called the discrete Pontryagin filter, is basically an extension of Sridhar's filtering theory. The proposed approach can easily replace the conventional autoregressive (AR) and autoregressive moving-average (ARMA) models in their many applications. Instead of using a large number of time-invariant parameters to describe the signal or the time series, a single time-varying function is enough. This function is estimated using optimization techniques. Many features are gained using this approach, such as simpler and more compact filter equations and better overall accuracy. The statistical properties of the filter are given, and it is shown that the signal estimate converges in the pth mean to the true value.
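For context on the conventional approach the filter is meant to replace, here is a minimal least-squares fit of a single AR(1) coefficient (an illustration of the baseline AR modelling only, not of the Pontryagin filter itself; all names are mine):

```python
import random

def simulate_ar1(phi, n, seed=0, sigma=0.1):
    """Generate an AR(1) series y_t = phi * y_{t-1} + e_t with Gaussian noise."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        y.append(phi * y[-1] + rng.gauss(0.0, sigma))
    return y

def fit_ar1(y):
    """Least-squares estimate of phi: minimize sum_t (y_t - phi * y_{t-1})^2,
    whose closed-form solution is sum(y_t * y_{t-1}) / sum(y_{t-1}^2)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

y = simulate_ar1(0.8, 5000)
phi_hat = fit_ar1(y)
```

The point of contrast in the abstract is that this approach fixes one (or a handful of) time-invariant parameters, whereas the discrete Pontryagin filter estimates a single time-varying function instead.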

16.
Michael Schacher 《PAMM》2010,10(1):541-542
The aim of this presentation is to construct an optimal open-loop feedback controller for robots which takes stochastic uncertainties into account. This way, optimal regulators that are insensitive to random parameter variations can be obtained. Usually, a precomputed feedback control is based on exactly known or estimated model parameters. However, in practice, exact information about model parameters, e.g. the payload mass, is often not available. Supposing now that the probability distribution of the random parameter variation is known, stochastic optimisation methods are applied in the following in order to obtain a robust open-loop feedback control. Taking stochastic parameter variations into account, the method works with expected cost functions evaluating the primary control expenses and the tracking error. The expectation of the total costs then has to be minimized. As in Model Predictive Control (MPC), a sliding horizon is considered. This means that, instead of minimizing an integral from a starting time point t0 to the final time tf, the future time range [t, t+T], with a small enough positive time unit T, is taken into account. The resulting optimal regulator problem under stochastic uncertainty is solved by using the Hamiltonian of the problem. After the computation of an H-minimal control, the related stochastic two-point boundary value problem is solved in order to find a robust optimal open-loop feedback control. The performance of the method is demonstrated by a numerical example: the control of a robot under random variations of the payload mass. (© 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

17.
There are many optimal control problems in which it is necessary or desirable to constrain the range of values of state variables. When stochastic inputs are involved, these inequality constraint problems are particularly difficult. Sometimes the constraints must be modeled as hard constraints which can never be violated, and other times it is more natural to prescribe a probability that the constraints will not be violated. This paper treats general problems of the latter type, in which probabilistic inequality constraints are imposed on the state variables or on combinations of state and control variables. A related class of problems in which the state is required to reach a target set with a prescribed probability is handled by the same methods. It is shown that the solutions to these problems can be obtained by solving a comparatively simple bilinear deterministic control problem.
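A probabilistic state constraint of the kind described can at least be checked by Monte Carlo simulation. A hedged sketch (my own toy system and names; the paper's reduction to a bilinear deterministic problem is not reproduced here): estimate the probability that a noisy scalar state ever leaves a prescribed band, for two feedback gains.

```python
import random

def violation_probability(gain, x_limit=2.0, steps=50, trials=2000, seed=1):
    """Monte Carlo estimate of P(|x_k| > x_limit for some k <= steps) for
    the controlled system x_{k+1} = (1 - gain) * x_k + w_k, w_k ~ N(0, 0.25)."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x = (1.0 - gain) * x + rng.gauss(0.0, 0.5)
            if abs(x) > x_limit:
                violations += 1
                break
    return violations / trials

# A stronger feedback gain shrinks the state variance and hence the
# probability of ever violating the constraint |x| <= x_limit.
p_weak = violation_probability(0.1)
p_strong = violation_probability(0.9)
```

In a chance-constrained formulation one would tune the control so that this violation probability stays below a prescribed level, rather than forbidding violations outright.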

18.
Applications of elastic plates weakened by full-strength holes are of great interest in several mechanical constructions (building practice, mechanical engineering, shipbuilding, aircraft construction, etc.). It is proven that, in the case of infinite domains, the minimum of the maximal values of the tangential normal stresses (tangential normal moments) is attained on contours along which these values remain constant (the full-strength holes). The solvability of these problems allows one to control the optimal stress distribution at the hole boundary via appropriate selection of the hole shape. The paper addresses a problem of plane elasticity theory for a doubly connected domain S in the plane z = x + iy, whose external boundary is an isosceles trapezoid; the internal boundary is the required full-strength hole enclosing the origin of coordinates. The unknown full-strength contour and the stressed state of the body are determined. (© 2015 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.
There are two different meanings of the word chattering in control theory and optimization theory. Chattering arcs of the first kind are related to the notion of relaxation of the control (i.e., convexification of the maneuverability domain). Some sufficient conditions for the equivalence of these notions are given. Chattering arcs of the second kind appear before and after some optimal singular arcs, for instance, the intermediate-thrust arcs of the optimal transfer problem of astrodynamics. The simplest examples of chattering arcs of the second kind appear in Fuller's problem, two cases of which are examined in detail. The conditions for chattering of the second kind are analyzed; they are related to the Kelley–Contensou optimality test for singular extremals, also known as the generalized Legendre–Clebsch conditions; they lead to general solutions and not only to solutions restricted to particular terminal conditions; thus, the phenomenon of chattering is very important (fortunately, these solutions can generally be approximated very closely by simple piecewise continuous controls). Finally, some special and complex cases appear, some examples of which are analyzed.

20.
Lecture notes of an introductory course on control theory on Lie groups. Controllability and optimal control for left-invariant problems on Lie groups are addressed. A general theory is accompanied by concrete examples. The course is intended for graduate students; no preliminary knowledge of control theory or Lie groups is assumed. Translated from Sovremennaya Matematika. Fundamental'nye Napravleniya (Contemporary Mathematics. Fundamental Directions), Vol. 27, Optimal Control, 2007.
