Similar Literature
Retrieved 20 similar documents (search time: 15 ms)
1.
2.
We ask the experts in global optimization whether there is an efficient solution to an optimization problem in acceptance sampling. Here, one often has incomplete prior information about the quality of incoming lots. Given a cost model, a decision rule for the inspection of a lot may then be designed that minimizes the maximum loss compatible with the available information. The resulting minimax problem is sometimes hard to solve, as the loss functions may have several local maxima which vary in an unpredictable way with the parameters of the decision rule.

3.
4.
As an emerging and effective approach to nonlinear robust control, simplex sliding mode control exhibits attractive features, both theoretical and practical, that the conventional sliding mode control method does not possess. However, no systematic approach is currently available for computing the simplex control vectors in nonlinear sliding mode control. In this paper, chaos-based optimization is exploited to develop a systematic approach to seeking the simplex control vectors; in particular, the flexibility of simplex control is enhanced by making the control vectors depend on the Euclidean norm of the sliding vector rather than remain constant, which both reduces chattering and speeds up convergence. A computer simulation of an uncertain nonlinear system illustrates the effectiveness of the proposed control method.
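A minimal sketch of chaos-based search, using the logistic map as the chaotic generator over a bounded box; the choice of map, the parameters and the function names below are illustrative assumptions, not the scheme used in the paper:

```python
import math

def chaos_search(objective, bounds, n_iters=2000):
    """Minimize `objective` over the box `bounds` by mapping logistic-map
    trajectories (z_{k+1} = 4 z_k (1 - z_k)) onto the search space."""
    dim = len(bounds)
    # one chaotic variable per dimension, seeded away from the map's fixed points
    z = [(0.345 + 0.1 * i) % 1.0 for i in range(dim)]
    best_x, best_f = None, math.inf
    for _ in range(n_iters):
        z = [4.0 * zi * (1.0 - zi) for zi in z]                 # logistic map step
        x = [lo + zi * (hi - lo) for zi, (lo, hi) in zip(z, bounds)]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Example: minimize a simple quadratic with optimum at (1, -2) over [-5, 5]^2
x, f = chaos_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                    [(-5.0, 5.0), (-5.0, 5.0)])
```

The deterministic but non-repeating chaotic trajectory plays the role that a random number generator would play in stochastic search.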

5.
Nonlinear rescaling vs. smoothing technique in convex optimization
We introduce an alternative to the smoothing technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use this modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraint transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problem is to the corresponding Smoothing Penalty function as the Augmented Lagrangian is to the Classical Penalty function, or as Modified Barrier Functions (MBFs) are to Barrier Functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of Quadratic and Nonquadratic Augmented Lagrangians while remaining free from their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in the primal space, followed by updates of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multipliers methods, which are equivalent to Interior Quadratic Prox methods for the dual problem. We prove convergence and estimate the rate of convergence of the NR multipliers method under very mild assumptions on the input data, and also estimate the rate of convergence under various stronger assumptions. In particular, under the standard second-order optimality conditions the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters corresponding to the active constraints. We also establish global quadratic convergence of the NR methods for Linear Programming with a unique dual solution. We provide numerical results which strongly support the theory. Received: September 2000 / Accepted: October 2001 / Published online: April 12, 2002
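The NR transformation summarized in the abstract can be stated in the standard form of Polyak's nonlinear rescaling framework; the notation below is a hedged reconstruction, not the paper's exact statement. Given a smooth, strictly increasing, concave function $\psi$ with $\psi(0)=0$ and $\psi'(0)=1$, and a scaling parameter $k>0$, each constraint is rewritten equivalently as

```latex
c_i(x) \ge 0 \;\Longleftrightarrow\; k^{-1}\,\psi\bigl(k\,c_i(x)\bigr) \ge 0,
\qquad i = 1,\dots,m,
```

and the Lagrangian of the equivalent problem,
$\mathcal{L}(x,\lambda,k) = f(x) - k^{-1}\sum_{i=1}^{m} \lambda_i\,\psi\bigl(k\,c_i(x)\bigr)$,
is the function minimized in the primal space at each step of the multipliers method, followed by the multiplier update $\lambda_i \leftarrow \lambda_i\,\psi'\bigl(k\,c_i(x)\bigr)$.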

6.
A review of statistical models for global optimization is presented. Rationality of the search for a global minimum is formulated axiomatically, and the features of the corresponding algorithm are derived from the axioms. Furthermore, the results of some applications of the proposed algorithm are presented, and the perspectives of the approach are discussed.

7.
This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient, and we establish a complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian-vector products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. In the third part, the focus shifts to L1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and a subspace phase that applies a subsampled Hessian Newton iteration to the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms.
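The variance-based criterion for enlarging the sample can be illustrated with a "norm test" of the kind used in the dynamic-sampling literature; the threshold `theta` and all names below are illustrative assumptions rather than the paper's exact rule:

```python
import numpy as np

def needs_larger_sample(per_example_grads, theta=0.9):
    """Norm test: the current sample S is adequate only if the estimated
    variance of the batch gradient g_S is small relative to ||g_S||^2.

    per_example_grads: (n, d) array with one gradient per sampled example.
    Returns True when the sample should be enlarged."""
    n = per_example_grads.shape[0]
    g = per_example_grads.mean(axis=0)                  # batch gradient g_S
    var = per_example_grads.var(axis=0, ddof=1).sum()   # summed componentwise variance
    # enlarge when Var(g_S) ~ var/n exceeds theta^2 * ||g_S||^2
    return bool(var / n > theta ** 2 * np.dot(g, g))

# Gradients that cancel out (high variance, tiny mean) trigger enlargement;
# nearly identical gradients (zero variance) do not.
noisy = np.array([[10.0, 0.0, 0.0], [-10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0], [0.0, -10.0, 0.0],
                  [0.0, 0.0, 10.0], [0.0, 0.0, -10.0]])
tight = np.tile([1.0, 2.0, 3.0], (6, 1))
```

When the test fires, the sample size is typically increased geometrically so that the variance estimate is driven back below the threshold.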

8.
One of the main services of National Statistical Agencies (NSAs) in the current Information Society is the dissemination of large amounts of tabular data, obtained from microdata by crossing one or more categorical variables. NSAs must guarantee that no confidential individual information can be derived from the released tabular data, and several statistical disclosure control methods are available for this purpose. These methods lead to large linear, mixed-integer linear, or quadratic mixed-integer linear optimization problems. This paper reviews some of the existing approaches, with an emphasis on two of them: the cell suppression problem (CSP) and controlled tabular adjustment (CTA), which have attracted most of the recent research in the tabular data protection field. The particular focus of this work is on methods and results of practical interest for end users (mostly NSAs). Therefore, in addition to the resulting optimization models and solution approaches, computational results comparing the main optimization techniques, both optimal and heuristic, on real-world instances are also presented.

9.
The aim of this paper is to present a new fractal approach linking the macroscopic mechanical properties of micro- and nano-structured materials to their main parameters: composition, grain size and structural dimension, as well as contiguity and mean free path. Assuming a key role is played by the interfaces, the proposed fractal energy approach unifies the influence of all the above parameters through the introduction of a fractal structural parameter (FSP), which extends Gurland's structural parameter. The modelling approach is assessed through an extensive comparison with experimental data on polycrystalline diamond (PCD) and WC–Co alloys. The results clearly show that the theoretical fractal predictions are in fairly good agreement with experiments on both hardness and toughness. This new synthetic parameter is thus proposed for investigating, designing and optimizing new micro- and nano-grained materials. Finally, FSP-based optimization maps are developed that allow the design of new materials with high hardness and toughness.

10.
In this paper, we are concerned with an algorithm that combines the generalized linear programming technique proposed by Dantzig and Wolfe with the stochastic quasigradient method in order to solve stochastic programs with recourse. In this way, we overcome the difficulty of finding exact values of the objective function of recourse problems by replacing them with statistical estimates of the function. We present the basic steps of the proposed algorithm, focusing on implementation alternatives aimed at improving both convergence and computational performance. The main application areas are mentioned, and some computational experience in validating our approach is reported. Finally, we discuss possibilities for parallelizing the proposed algorithmic schemes. This paper has been partially supported by the Italian MURST 40% project on Flexible Manufacturing Systems.

11.
This paper summarizes the results of axiomatically constructing statistical models of complicated multimodal functions. It is shown that an optimization algorithm may be constructed on the basis of a statistical model and some ideas from rational choice theory. A brief review of related algorithms, and of reports investigating their efficiency, is given.

12.
Optimization problems with network constraints arise in many engineering, management, statistical and economic applications. The (usually) large size of such problems has motivated research into designing efficient algorithms and software for this problem class, and the introduction of parallelism in the design of computer systems adds a new element of complexity to the field. This paper describes the implementation of a distributed relaxation algorithm for strictly convex network problems on a massively parallel computer: a Connection Machine CM-1 configured with 16,384 processing elements serves as the testbed. We report computational results on a series of stick percolation and quadratic transportation problems. The algorithm is compared with an implementation of the primal truncated Newton method on an IBM 3081-D mainframe, an Alliant FX/8 shared-memory vector multiprocessor and the IBM 3090-600 vector supercomputer. One of the larger test problems, with approximately 2500 nodes and 8000 arcs, requires 1.5 minutes of CPU time on the vector supercomputer; the same problem is solved by relaxation on the CM-1 in less than a second.

13.
In this paper, we propose a general framework for the Extreme Learning Machine via free sparse transfer representation, referred to as transfer free sparse representation based on extreme learning machine (TFSR-ELM). This framework is suitable for different assumptions about the divergence measures of the data distributions, such as maximum mean discrepancy and K-L divergence. We propose an effective sparse regularization for the proposed free transfer representation learning framework, which decreases its time and space cost. Different solutions to the problems based on the different distribution-distance estimation criteria, together with a convergence analysis, are given. Comprehensive experiments show that TFSR-based algorithms outperform existing transfer learning methods and are robust to different sizes of training data.

14.
Shape optimization is a widely used technique in the design phase of a product. Current continuous-improvement policies require a product to fulfill a series of conditions from the perspective of mechanical resistance, fatigue, natural frequency, impact resistance, etc. All these conditions are translated into equality or inequality constraints which must be satisfied during the optimization process used to determine the optimal shape. This article describes a new method for shape optimization that admits any regular shape as a candidate, thereby improving on traditional methods limited to straight profiles or profiles established a priori. Our approach uses functional techniques and is based on representing the shape of the object by means of functions belonging to a finite-dimensional functional space. To solve this problem, the article proposes an optimization method that uses machine learning techniques for functional data, both to represent the perimeter of the set of feasible functions and to speed up the evaluation of the constraints in each iteration of the algorithm. The results demonstrate that the functional approach produces better results in the shape optimization process, and that speeding up the algorithm using machine learning techniques ensures that this approach does not negatively affect design-process response times.

15.
We present a model for simulating the normal forces arising during single-diamond grinding of cement. Assuming the diamond has the shape of a pyramid, a very fast calculation of force and removed volume can be achieved. The basic approach is simulation of the scratch track, whose triangular profile is determined by the shape of the diamond. The scratch track is approximated by stringing together polyhedra, whose sizes depend on both the current cutting depth and an error term implicitly describing the material brittleness. Each scratch-track part can be subdivided into three three-dimensional simplices for a straightforward calculation of the removed volume. Since the scratched mineral subsoil is generally inhomogeneous, the forces at different positions of the workpiece are expected to vary; this heterogeneity is modelled by sampling from a Gaussian random field. To achieve a realistic outcome, the model parameters are adjusted using model-based optimization: a noisy Kriging model is chosen as a surrogate to approximate the deviation between modelled and observed forces, and this deviation is minimized. The resulting modelled forces agree closely with the forces measured in experiments.
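The noisy-Kriging surrogate mentioned in the abstract can be sketched as Gaussian-process regression with an RBF kernel plus a nugget term; the kernel, its hyperparameters and the toy data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kriging_fit_predict(X, y, X_new, length=0.5, nugget=0.1):
    """Noisy Kriging in 1-D: posterior mean of a GP with an RBF kernel;
    the nugget term on the diagonal models observation noise."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2          # pairwise squared distances
        return np.exp(-0.5 * d2 / length ** 2)
    K = rbf(X, X) + nugget * np.eye(len(X))          # noisy covariance matrix
    alpha = np.linalg.solve(K, y)                    # kernel weights
    return rbf(X_new, X) @ alpha                     # posterior mean at X_new

# Toy "force deviation" data: the surrogate should reproduce it closely
X = np.linspace(0.0, 3.0, 12)
y = np.sin(2.0 * X)
mean = kriging_fit_predict(X, y, X)
```

In model-based optimization the surrogate's predictions (and, in fuller treatments, its predictive variance) guide where the expensive simulation is evaluated next.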

16.
The computational complexity of a new class of combinatorial optimization problems, induced by optimal machine learning procedures in the class of collective piecewise-linear classifiers of committee type, is studied.

17.
The analogy between combinatorial optimization and statistical mechanics has proven to be a fruitful object of study. Simulated annealing, a metaheuristic for combinatorial optimization problems, is based on this analogy. In this paper we show how a statistical mechanics formalism can be utilized to analyze the asymptotic behavior of combinatorial optimization problems with a sum objective function, and we provide an alternative proof of the following result: under a certain combinatorial condition and some natural probabilistic assumptions on the coefficients of the problem, the ratio between the optimal solution and an arbitrary feasible solution tends to one almost surely as the size of the problem tends to infinity, so that the optimization problem becomes trivial in a certain sense. Whereas this result can also be proven by purely probabilistic techniques, the present approach allows one to understand why the assumed combinatorial condition is essential for this type of asymptotic behavior.
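Simulated annealing itself, the metaheuristic built on this analogy, can be sketched in a few lines; the cooling schedule, temperatures and the toy objective below are illustrative choices, not anything prescribed by the paper:

```python
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=10.0, alpha=0.995, n_iters=5000):
    """Metropolis-style simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta / T), cool T geometrically."""
    random.seed(42)
    x, fx = x0, objective(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(n_iters):
        y = neighbor(x)
        fy = objective(y)
        delta = fy - fx
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy                # move accepted
            if fx < best_f:
                best, best_f = x, fx     # track best solution ever seen
        t *= alpha                       # geometric cooling schedule
    return best, best_f

# Example: a one-dimensional multimodal objective with several local minima
obj = lambda v: v * v + 10.0 * math.sin(3.0 * v)
step = lambda v: v + random.uniform(-0.5, 0.5)
sol, val = simulated_annealing(obj, step, x0=4.0)
```

At high temperature the chain behaves like a random walk over the "energy landscape"; as it cools it concentrates on low-objective states, mirroring the statistical-mechanics picture.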

18.
《Optimization》2012,61(1):117-135
A statistical model for global optimization is constructed by generalizing some properties of the Wiener process to the multidimensional case. A new approach, similar to the branch-and-bound approach, is proposed for constructing algorithms based on statistical models. A two-dimensional version of the algorithm is implemented, and test results are presented.

19.
《Optimization》2012,61(7):1099-1116
In this article we study support vector machine (SVM) classifiers in the face of uncertain knowledge sets and show how data uncertainty in knowledge sets can be treated in SVM classification by employing robust optimization. We present knowledge-based SVM classifiers with uncertain knowledge sets using convex quadratic optimization duality, and show that the knowledge-based SVM, where prior knowledge takes the form of uncertain linear constraints, results in an uncertain convex optimization problem with a set containment constraint. Using a new extension of Farkas' lemma, we reformulate the robust counterpart of the uncertain convex optimization problem in the case of interval uncertainty as a convex quadratic optimization problem. We then reformulate the resulting problem, via Lagrange duality, as a simple quadratic optimization problem with non-negativity constraints, obtain its solution by a fixed-point iterative algorithm, and establish the convergence of the algorithm. Finally, we present some preliminary results from our computational experiments with the method.
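A quadratic program with only non-negativity constraints, the form the abstract arrives at, can be solved by a simple projected fixed-point iteration; the sketch below uses a generic projected-gradient operator, which is not necessarily the article's specific fixed-point map:

```python
import numpy as np

def qp_nonneg(Q, c, step=0.2, n_iters=500):
    """Fixed-point iteration x <- max(0, x - step*(Qx + c)) for
    min 0.5 x'Qx + c'x subject to x >= 0 (projected gradient);
    converges for positive definite Q when step < 2 / lambda_max(Q)."""
    x = np.zeros(len(c))
    for _ in range(n_iters):
        x = np.maximum(0.0, x - step * (Q @ x + c))
    return x

# Tiny instance: unconstrained optimum of the first coordinate is feasible,
# the second coordinate is pushed onto the boundary x2 = 0.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])
x = qp_nonneg(Q, c)   # converges to (1, 0)
```

The solution is a fixed point of the projected map exactly when the Karush-Kuhn-Tucker conditions of the QP hold.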

20.
Back analysis is commonly used to identify geomechanical parameters from monitored displacements. Conventional back analysis methods cannot effectively capture the non-linear relationship between displacements and mechanical parameters. The new intelligent displacement back analysis method proposed in this paper combines support vector machines (SVMs), particle swarm optimization, and numerical analysis techniques. The non-linear relationship is efficiently represented by an SVM; numerical analysis is used to create training and testing samples for fitting the SVMs; a global optimum search over the obtained SVMs by particle swarm optimization then identifies the geomechanical parameters effectively.
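A global-best particle swarm optimization loop of the kind used for the parameter search can be sketched as follows; the paper couples PSO with an SVM surrogate and numerical analysis, which this toy parameter-identification example omits, and all names and coefficients are illustrative:

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over a box: velocities blend inertia, attraction to the
    personal best P[i], and attraction to the global best G."""
    random.seed(1)
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal best positions
    Pf = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                         # global best position/value
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                # clamp the new position back into the box
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            f = objective(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < Gf:
                    G, Gf = X[i][:], f
    return G, Gf

# Toy parameter identification: recover two "geomechanical" parameters by
# minimizing a misfit against known target values (stand-in for the SVM model)
target = [3.0, -1.0]
misfit = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
params, err = pso(misfit, [(-10.0, 10.0), (-10.0, 10.0)])
```

In the paper's setting the misfit would compare SVM-predicted displacements with monitored ones, so each objective evaluation stays cheap despite the expensive underlying numerical model.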

