Similar Articles (20 results)
1.
The equations of state evolution of a hybrid system are nonlinear and generate non-Gaussian sample paths. For this reason, the optimal, mean-square estimate of the state is difficult to determine. In an earlier paper (Ref. 1), a useful approximation to the optimal estimator was derived for the case where there is a direct, albeit noisy, measurement of the modal state. Although this algorithm has proven serviceable, it is restricted to applications in which the base-state path is continuous. In this paper, the result is extended to the case in which there are base-state discontinuities of a particular sort. The algorithm is tested on a target tracking problem and is shown to be superior to both the extended Kalman filter and the estimator derived in Ref. 1.

2.
The stochastic approximation problem is to find some root or minimum of a nonlinear function in the presence of noisy measurements. The classical algorithm for the stochastic approximation problem is the Robbins-Monro (RM) algorithm, which uses the noisy negative gradient direction as the iterative direction. To accelerate the classical RM algorithm, this paper gives a new combined-direction stochastic approximation algorithm which employs a weighted combination of the current noisy negative gradient and a former noisy negative gradient as the iterative direction. Both the almost sure convergence and the asymptotic rate of convergence of the new algorithm are established. Numerical experiments show that the new algorithm outperforms the classical RM algorithm.
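The combined-direction idea admits a compact sketch. The mixing weight `beta`, the classical `1/n` step size, and the quadratic test function below are illustrative assumptions, not the paper's exact scheme:

```python
import random

def rm_combined(noisy_grad, x0, steps=2000, beta=0.7, seed=1):
    """Robbins-Monro-style iteration whose search direction is a weighted
    combination of the current and the previous noisy negative gradient
    (a sketch of the combined-direction scheme; beta is an assumed weight)."""
    rng = random.Random(seed)
    x, prev = x0, 0.0
    for n in range(1, steps + 1):
        g = noisy_grad(x, rng)
        d = beta * g + (1 - beta) * prev  # combined direction
        x -= d / n                        # classical RM step size a_n = 1/n
        prev = g
    return x

# Find the minimizer of f(x) = (x - 3)^2 from noisy gradient measurements.
root = rm_combined(lambda x, rng: 2 * (x - 3) + rng.gauss(0, 0.5), x0=0.0)
```

Averaging the current and previous gradient estimates damps the measurement noise at the cost of a slightly stale direction, which is plausibly where the reported acceleration comes from.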

3.
In a hidden Markov model, the underlying Markov chain is usually unobserved. Often, the state path with maximum posterior probability (the Viterbi path) is used as its estimate. Despite having the largest posterior probability, the Viterbi path can behave very atypically by passing through states of low marginal posterior probability. To avoid such situations, the Viterbi path can be modified to bypass such states. In this article, an iterative procedure for improving the Viterbi path in this way is proposed and studied. The iterative approach is compared with a simple batch approach in which a number of low-probability states are all replaced at the same time. The iterative way of adjusting the Viterbi state path proves more efficient and has several advantages over the batch approach. The same iterative algorithm can be used when it is possible to reveal some hidden states, in which case estimating the unobserved state sequence becomes an active learning task. Both the batch and the iterative approach are based on classification probabilities of the Viterbi path, which play an important role in determining a suitable value for the threshold parameter used in both algorithms. Therefore, properties of classification probabilities under different conditions on the model parameters are studied.
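The two ingredients both approaches rely on, the Viterbi path and the marginal classification probabilities it is checked against, fit in a short sketch. The 2-state model and the 0.5 threshold below are illustrative assumptions:

```python
import math

def viterbi(pi, A, B, obs):
    """Most probable hidden-state path (log domain)."""
    K = len(pi)
    delta = [math.log(pi[i]) + math.log(B[i][obs[0]]) for i in range(K)]
    back = []
    for o in obs[1:]:
        bp = [max(range(K), key=lambda i: delta[i] + math.log(A[i][j]))
              for j in range(K)]
        back.append(bp)
        delta = [delta[bp[j]] + math.log(A[bp[j]][j]) + math.log(B[j][o])
                 for j in range(K)]
    path = [max(range(K), key=lambda j: delta[j])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

def marginals(pi, A, B, obs):
    """Per-step posterior state probabilities via scaled forward-backward."""
    K, T = len(pi), len(obs)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(K)]]
    for o in obs[1:]:
        a = [sum(alpha[-1][i] * A[i][j] for i in range(K)) * B[j][o]
             for j in range(K)]
        z = sum(a)
        alpha.append([v / z for v in a])
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        b = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] for j in range(K))
             for i in range(K)]
        z = sum(b)
        beta[t] = [v / z for v in b]
    out = []
    for t in range(T):
        g = [alpha[t][i] * beta[t][i] for i in range(K)]
        z = sum(g)
        out.append([v / z for v in g])
    return out

# Toy 2-state model; flag Viterbi states whose classification probability
# (marginal posterior) falls below an assumed threshold of 0.5.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 0, 1, 0]
path = viterbi(pi, A, B, obs)
post = marginals(pi, A, B, obs)
weak = [t for t in range(len(obs)) if post[t][path[t]] < 0.5]
```

States in `weak` are the candidates that the batch approach would replace all at once and the iterative approach would adjust one at a time.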

4.
The purpose of this article is to find the conditions which a minimizing sequence for an integral process with a phase constraint obeys. We employ Ekeland's variational principle (Ref. 1) and follow Sumin (Ref. 2) to obtain the conditions satisfied by a minimizing sequence. The conditions derived actually hold even for certain minimizing sequences that do not necessarily satisfy the imposed constraints; this statement is made precise by the theorem at the end of the paper. It is assumed, however, that there are controls for which the imposed constraints are satisfied. We close the article with a discussion of an example. This research was supported by ONR Grant No. N0001-87-K-0276.

5.
The search for low energy states of molecular clusters is associated with the study of molecular conformation and especially protein folding. This paper describes a new global minimization algorithm which is effective and efficient for finding low energy states and hence stable structures of molecular clusters. The algorithm combines simulated annealing with a class of effective energy functions which are transformed from the original energy function based on the theory of renormalization groups. The algorithm converges to low energy states asymptotically, and is more efficient than a general simulated annealing method.
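The annealing core that the method builds on can be sketched in a few lines; the renormalization-group "effective energy" transformations themselves are not reproduced here, and the tilted double-well test energy, cooling schedule, and restart count are assumptions:

```python
import math
import random

def anneal(f, x0, t0=2.0, alpha=0.995, steps=4000, seed=0):
    """Plain simulated annealing: Gaussian moves, Metropolis acceptance,
    geometric cooling. Returns the best state and energy seen."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest

# Tilted double well: global minimum near x = -1, local trap near x = +1.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best, fbest = min((anneal(f, 2.0, seed=s) for s in range(5)), key=lambda r: r[1])
```

Starting on the wrong side of the barrier, the sampler must accept uphill moves to reach the global well, which is exactly the behaviour the paper's effective-energy transformations are designed to make cheaper.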

6.
When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalizing over local attractor states that have already been visited. This “self-modeling” framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems. © 2010 Wiley Periodicals, Inc. Complexity 16: 17–26, 2011
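A minimal sketch of the relaxation-plus-Hebbian-learning loop follows; the network size, learning rate, and random ±1 constraint weights are assumptions:

```python
import random

def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w_ij * s_i * s_j."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def relax(w, s, rng, sweeps=20):
    """Asynchronous relaxation: repeatedly align each unit with its field."""
    n = len(s)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):
            h = sum(w[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if h >= 0 else -1
    return s

def self_model(w, n, rng, relaxations=30, eps=0.002):
    """Repeated relaxation from random states, nudging the weights toward
    each attractor reached (Hebbian update; eps is an assumed rate)."""
    for _ in range(relaxations):
        s = relax(w, [rng.choice((-1, 1)) for _ in range(n)], rng)
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += eps * s[i] * s[j]
    return w

rng = random.Random(0)
n = 10
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = rng.choice((-1.0, 1.0))  # random symmetric constraints
s = relax(w, [rng.choice((-1, 1)) for _ in range(n)], rng)  # a point attractor
w_learned = self_model([row[:] for row in w], n, rng)       # amplified attractors
```

Each relaxation ends at a local constraint resolution; the Hebbian nudge makes the basins of frequently visited (and generalized) low-energy configurations grow, which is the amplification effect the abstract describes.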

7.
We propose a fully sequential indifference-zone selection procedure that is specifically for use within an optimization-via-simulation algorithm when simulation is costly and partial or complete information on previously visited solutions is maintained. Sequential Selection with Memory guarantees to select the best or near-best alternative with a user-specified probability when some solutions have already been sampled, their previous samples are retained, and simulation outputs are i.i.d. normal. For the case when only summary information on solutions is retained, we derive a modified procedure. We illustrate how our procedures can be applied to optimization-via-simulation problems and compare their performance with that of other methods through numerical examples.

8.
In this paper, we propose a method of determining the initial temperature for continuous fast simulated annealing from the perspective of state variation. While the conventional method utilizes fitness variation, the proposed method additionally considers genotype variation. The proposed scheme is based on the fact that the annealing temperature, which includes the initial temperature, not only appears in the acceptance probability but also serves as the scale parameter of the state-generating probability distribution. We theoretically derive an expression for the probability of generating states that cover the state space, in conjunction with the convergence property of fast simulated annealing. We then numerically solve the expression to determine the initial temperature. We empirically show that the proposed method outperforms the conventional one on various benchmark functions.
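For reference, the conventional "fitness variation" route mentioned above picks the initial temperature so that a target fraction of sampled uphill moves would be accepted. A bisection sketch of that baseline (the acceptance target `chi0` and the sampled cost increases `deltas` are assumptions):

```python
import math

def initial_temperature(deltas, chi0=0.8):
    """Solve mean(exp(-d / T0)) = chi0 over sampled uphill cost increases d
    by bisection in log-temperature (mean acceptance is monotone in T0)."""
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)                      # geometric midpoint
        acc = sum(math.exp(-d / mid) for d in deltas) / len(deltas)
        if acc < chi0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

t0 = initial_temperature([1.0] * 8)   # closed form here: 1 / ln(1/0.8)
```

The paper's point is that this acceptance-only criterion ignores the second role of the temperature, as the scale of the state-generating distribution, which their state-variation method takes into account.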

9.
In many decision-making situations, decision makers (DMs) have difficulty in specifying their perceived state probability values or even probability value ranges. However, they may find it easier to tell how much more likely the occurrence of a given state is when compared with other states. An approach is proposed to identify the efficient strategies of a decision-making situation where the DMs involved declare their perceived relative likelihood of the occurrence of the states by pair-wise comparisons. The pair-wise comparisons of all the states are used to construct a judgment matrix, which is transformed into a probability matrix. The columns of the transformed matrix delineate a convex cone of the state probabilities. Next, an efficiency linear program (ELP) is formulated for each available strategy, whose optimal solution determines whether or not that strategy is efficient within the probability region defined by the cone. Only an efficient strategy can be optimal for a given set of state probability values; inefficient strategies are never used, irrespective of state probability values. The application of the approach is demonstrated using examples where DMs offer differing views on the occurrence of the states.
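The judgment-to-probability transformation is simple to state concretely; the three-state example and the perfectly consistent matrix below are assumptions, and the ELP efficiency test itself is not reproduced:

```python
def probability_matrix(J):
    """Turn a pairwise judgment matrix J (J[i][j] estimates how many times
    more likely state i is than state j) into column probability vectors by
    normalizing each column; the columns span the cone of admissible state
    probabilities used by the efficiency test."""
    n = len(J)
    col_sums = [sum(J[k][j] for k in range(n)) for j in range(n)]
    return [[J[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# A perfectly consistent judgment matrix built from p = (0.5, 0.3, 0.2):
p = [0.5, 0.3, 0.2]
J = [[pi / pj for pj in p] for pi in p]
P = probability_matrix(J)
```

For a consistent matrix every normalized column recovers the same probability vector, so the cone collapses to a single point; inconsistent comparisons spread the columns out, and the ELP then checks each strategy against the whole cone.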

10.
Simulated annealing algorithms have traditionally been developed and analyzed along two distinct lines: Metropolis-type Markov chain algorithms and Langevin-type Markov diffusion algorithms. Here, we analyze the dynamics of continuous-state Markov chains which arise from a particular implementation of the Metropolis and heat-bath Markov chain sampling methods. It is shown that certain continuous-time interpolations of the Metropolis and heat-bath chains converge weakly to Langevin diffusions running at different time scales. This exposes a close and potentially useful relationship between the Markov chain and diffusion versions of simulated annealing. The research reported here has been supported by the Whirlpool Foundation, by the Air Force Office of Scientific Research under Contract 89-0276, and by the Army Research Office under Contract DAAL-03-86-K-0171 (Center for Intelligent Control Systems).
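A continuous-state Metropolis chain of the kind analyzed is easy to write down; the quadratic energy, proposal scale, and chain length below are illustrative assumptions:

```python
import math
import random

def metropolis(u, x0, temp=1.0, step=1.0, n=20000, seed=1):
    """Continuous-state Metropolis chain targeting exp(-u(x)/temp) with
    Gaussian random-walk proposals. It is continuous-time interpolations of
    such chains (with the proposal scale shrunk appropriately) that converge
    weakly to Langevin diffusions in the analysis above."""
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n):
        y = x + rng.gauss(0.0, step)
        # Accept with probability min(1, exp((u(x) - u(y)) / temp)).
        if math.log(rng.random() + 1e-300) < (u(x) - u(y)) / temp:
            x, accepted = y, accepted + 1
        samples.append(x)
    return samples, accepted / n

# Quadratic energy: the invariant law is the standard Gaussian exp(-x^2/2).
samples, acc_rate = metropolis(lambda x: 0.5 * x * x, x0=0.0)
```

With a quadratic energy the chain's empirical second moment should approach the target's variance of 1, the same stationary behaviour its Langevin limit (an Ornstein-Uhlenbeck diffusion) exhibits.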

11.
An algorithm is presented which minimizes continuously differentiable pseudoconvex functions on convex compact sets that are characterized by their support functions. If the function can be minimized exactly on affine sets in a finite number of operations and the constraint set is a polytope, the algorithm has finite convergence. Numerical results are reported which illustrate the performance of the algorithm when applied to a specific search direction problem. The algorithm differs from existing algorithms in that it has proven convergence when applied to any convex compact set, not just polytopal sets. This research was supported by the National Science Foundation Grant ECS-85-17362, the Air Force Office of Scientific Research Grant 86-0116, the Office of Naval Research Contract N00014-86-K-0295, the California State MICRO program, and the Semiconductor Research Corporation Contract SRC-82-11-008.

12.
In this paper, the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well-known stochastic optimization algorithms, namely simulated annealing, the genetic algorithm, and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as a criterion function for the algorithm; in cases where it is not analytically tractable, the Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, at a very low computational cost, it achieves improved performance when compared with popular MCMC algorithms, such as MCMC model composition, as well as with “vanilla” versions of simulated annealing, the genetic algorithm, and tabu search.

13.
This paper presents an improved version of a componentwise bounding algorithm for the state probability vector of nearly completely decomposable Markov chains and, through an application, provides the first numerical results for this type of algorithm. The given two-level algorithm uses aggregation and stochastic comparison with the strong stochastic (st) order. To improve accuracy, it employs a reordering of states and a better componentwise probability bounding algorithm given st upper- and lower-bounding probability vectors. Results in sparse storage show that there are cases in which the given algorithm proves to be useful.

14.
A new zero-one integer programming model for the job shop scheduling problem with minimum makespan criterion is presented. The algorithm consists of two parts: (a) a branch-and-bound parametric linear programming code for solving the job shop problem with fixed completion time; (b) a problem-expanding algorithm for finding the optimal completion time. Computational experience for problems having up to thirty-six operations is presented. The largest problem solved was limited by memory space, not computation time. Efforts are under way to improve the efficiency of the algorithm and to reduce its memory requirements. This report was prepared as part of the activities of the Management Sciences Research Group, Carnegie-Mellon University, under Contract No. N00014-82-K-0329 NR 047-048 with the U.S. Office of Naval Research. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.

15.
An approach is presented for treating discrete optimization problems mapped onto the architecture of the Hopfield neural network. The method constitutes a modification to the local minima escape (LME) algorithm, which has recently been proposed as a method that uses perturbations in the network's parameter space in order to escape from local minimum states of the Hopfield network. Our approach (LMESA) adopts this perturbation mechanism but, in addition, introduces randomness in the selection of the next local minimum state to be visited, in a manner analogous to Simulated Annealing (SA). Experimental results using instances of the Weighted Maximum Independent Set (MIS) problem indicate that the proposed method leads to significant improvement over the conventional LME approach in terms of the quality of the obtained solutions, while requiring a comparable amount of computational effort.

16.
The most common idea of network reliability in the literature is a numerical parameter called overall network reliability, which is the probability that the network will be in a successful state in which all nodes can mutually communicate. Most papers concentrate on the problem of calculating the overall network reliability, which is known to be an NP-hard problem. In the present paper, the question asked is how to find a method for determining a reliable subnetwork of a given network. Given n terminals and one central computer, the problem is to construct a network that links each terminal to the central computer, subject to the following conditions: (1) each link must be economically feasible; (2) the minimum number of links should be used; and (3) the reliability coefficient should be maximized. We argue that the network satisfying condition (2) is a spanning arborescence of the network defined by condition (1). We define the idea of the reliability coefficient of a spanning arborescence of a network, which is the probability that a node at average distance from the root of the arborescence can communicate with the root. We show how this coefficient can be calculated exactly when there are no degree constraints on nodes of the spanning arborescence, or approximately when such degree constraints are present. Computational experience for networks consisting of up to 900 terminals is given. This report was prepared as part of the activities of the Management Science Research Group, Carnegie-Mellon University, under Contract No. N00014-82-K-0329 NR 047-048 with the U.S. Office of Naval Research. Reproduction in whole or in part is permitted for any purpose of the U.S. Government.
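One plain reading of the reliability coefficient, the probability that a terminal at the average depth of the arborescence can reach the root over independently failing links, can be computed directly. The per-link reliability `p` and the two toy arborescences below are assumptions, and the paper's exact definition may differ in detail:

```python
def reliability_coefficient(parent, p):
    """Probability that a terminal at the *average* depth of a spanning
    arborescence (given as a child -> parent map, root mapped to None) can
    reach the root, with each link working independently with probability p."""
    def depth(v):
        d = 0
        while parent[v] is not None:
            v, d = parent[v], d + 1
        return d
    terminals = [v for v in parent if parent[v] is not None]
    avg = sum(depth(v) for v in terminals) / len(terminals)
    return p ** avg

# Star: three terminals wired directly to the central computer (root 0).
star = {0: None, 1: 0, 2: 0, 3: 0}
# Chain: 0 <- 1 <- 2; terminal depths 1 and 2, average depth 1.5.
chain = {0: None, 1: 0, 2: 1}
```

The star maximizes the coefficient (average depth 1 gives exactly p), while deeper arborescences pay p per extra hop, which is why degree constraints, by forcing depth, reduce reliability.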

17.
Recently, Kort and Bertsekas (Ref. 1) and Hartman (Ref. 2) independently presented a new penalty function algorithm of exponential type for solving inequality-constrained minimization problems. The main purpose of this work is to give a proof of the rate of convergence of a modification of the exponential penalty method proposed by these authors. We show that the sequence of points generated by the modified algorithm converges to the solution of the original nonconvex problem linearly and that the sequence of estimates of the optimal Lagrange multiplier converges to this multiplier superlinearly. The question of convergence of the modified method is discussed. The present paper hinges on ideas of Mangasarian (Ref. 3), but the case considered here is not covered by Mangasarian's theory.
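A sketch of the exponential multiplier iteration that this line of work builds on: minimize the exponentially penalized objective, then rescale the multiplier estimate by the constraint's exponential factor. The toy problem (min x² subject to x ≥ 1), the penalty constant `c`, and the bisection inner solver are illustrative assumptions, not the modified method analyzed in the paper:

```python
import math

def exp_multiplier(g, df, dg, mu0=1.0, c=1.0, outer=60, lo=0.0, hi=10.0):
    """Exponential penalty sketch for min f(x) s.t. g(x) <= 0: minimize
    f(x) + (mu/c) * exp(c * g(x)) (here by bisection on the stationarity
    condition df + mu*exp(c*g)*dg = 0, valid for this convex example),
    then update the multiplier estimate mu <- mu * exp(c * g(x))."""
    mu, x = mu0, lo
    for _ in range(outer):
        a, b = lo, hi
        for _ in range(100):
            m = 0.5 * (a + b)
            if df(m) + mu * math.exp(c * g(m)) * dg(m) < 0:
                a = m
            else:
                b = m
        x = 0.5 * (a + b)
        mu *= math.exp(c * g(x))   # multiplier update
    return x, mu

# min x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0; optimum x* = 1, mu* = 2.
x_star, mu_star = exp_multiplier(lambda x: 1.0 - x, lambda x: 2.0 * x,
                                 lambda x: -1.0)
```

At the fixed point the multiplier update factor exp(c·g(x)) equals 1, so g(x) = 0 and mu recovers the Lagrange multiplier (here 2, since 2x = mu at x = 1).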

18.
This paper presents an integrated platform for multi-sensor equipment diagnosis and prognosis, based on the hidden semi-Markov model (HSMM). Unlike a state in a standard hidden Markov model (HMM), a state in an HSMM generates a segment of observations, as opposed to a single observation; the HSMM structure therefore has a temporal component that the HMM lacks. In this framework, the states of HSMMs are used to represent the health status of a component. The duration of a health state is modeled by an explicit Gaussian probability function. The model parameters (i.e., initial state distribution, state transition probability matrix, observation probability matrix, and health-state duration probability distribution) are estimated through a modified forward-backward training algorithm, and the re-estimation formulae for the model parameters are derived. The trained HSMMs can be used to diagnose the health status of a component. Through parameter estimation of the health-state duration probability distribution and the proposed backward recursive equations, one can predict the remaining useful life of the component. To determine the “value” of each sensor's information, discriminant function analysis is employed to adjust the weight or importance assigned to a sensor; sensor fusion thus becomes possible in this HSMM-based framework.
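The segment-emitting behaviour is easiest to see generatively; the three "health states", Gaussian duration parameters, and transition matrix below are invented for illustration (self-transitions are excluded, since the explicit duration model replaces them):

```python
import random

def sample_hsmm_path(pi, A, dur_mean, dur_sd, horizon, seed=3):
    """Sample a state path from a hidden semi-Markov model: each visited
    state emits a whole segment whose length is drawn from an explicit
    Gaussian duration model (truncated to at least 1 step)."""
    rng = random.Random(seed)
    states = range(len(pi))
    s = rng.choices(states, weights=pi)[0]
    path = []
    while len(path) < horizon:
        d = max(1, round(rng.gauss(dur_mean[s], dur_sd[s])))
        path.extend([s] * d)                      # one segment per state visit
        s = rng.choices(states, weights=A[s])[0]  # jump to a *different* state
    return path[:horizon]

# Three health states (0 = good, 1 = degraded, 2 = failed), zero diagonal in A.
path = sample_hsmm_path(
    pi=[1.0, 0.0, 0.0],
    A=[[0.0, 0.9, 0.1], [0.0, 0.0, 1.0], [0.5, 0.5, 0.0]],
    dur_mean=[20, 10, 5], dur_sd=[3, 2, 1], horizon=60)
```

Remaining-useful-life prediction in this setting amounts to summing the expected residual durations of the current and downstream health states, which is what the paper's backward recursive equations formalize.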

19.
Can stochastic search algorithms outperform existing deterministic heuristics for the NP-hard problem Number Partitioning if given a sufficient, but practically realizable, amount of time? In a thorough empirical investigation using a straightforward implementation of one such algorithm, simulated annealing, Johnson et al. (Ref. 1) concluded tentatively that the answer is negative. In this paper, we show that the answer can be positive if attention is devoted to the issue of problem representation (encoding). We present results from empirical tests of several encodings of Number Partitioning with problem instances consisting of multiple-precision integers drawn from a uniform probability distribution. With these instances and with an appropriate choice of representation, stochastic and deterministic searches can, routinely and in a practical amount of time, find solutions several orders of magnitude better than those constructed by the best heuristic known (Ref. 2), which does not employ searching.
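For context, the best-known searchless heuristic for Number Partitioning is the Karmarkar-Karp differencing method, which we sketch here as an assumed baseline (whether it is the paper's Ref. 2 we cannot confirm from the abstract):

```python
import heapq

def karmarkar_karp(nums):
    """Differencing heuristic for Number Partitioning: repeatedly commit the
    two largest numbers to opposite sides of the partition by replacing them
    with their difference; the last value left is the final discrepancy."""
    heap = [-x for x in nums]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]
```

On `[4, 5, 6, 7, 8]` differencing returns a discrepancy of 2 even though the perfect split {8, 7} versus {6, 5, 4} achieves 0, which is exactly the kind of gap a well-encoded search can close.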

20.
Summary. We present and analyze a new speed-up technique for Monte Carlo optimization: the Iterated Energy Transformation algorithm, in which the Metropolis algorithm is used repeatedly with more and more favourable energy functions derived from the original one by increasing transformations. We show that this method allows a better speed-up than Simulated Annealing when convergence speed is measured by the probability of failure of the algorithm after a large number of iterations. We also study the limit of a large state space in the special case when the energy is a sum of independent terms. We show that the convergence time of the I.E.T. algorithm is polynomial in the size (number of coordinates) of the problem, but with a worse exponent than for Simulated Annealing. This indicates that the I.E.T. algorithm is well suited for moderate-size problems but not for very large ones. The independent-component case is a good model for the end of many optimization processes, when at low temperature a neighbourhood of a local minimum is explored by small and far-apart modifications of the current solution. We show that in this case both global optimization methods, Simulated Annealing and the I.E.T. algorithm, are less efficient than repeated local stochastic optimizations. Using the general concept of a “slow stochastic optimization algorithm”, we show that any “slow” global optimization scheme should be followed by a local one to perform the last approach to a minimum. Received: 22 November 1994 / In revised form: 14 July 1997
