Similar Literature
20 similar documents found (search time: 328 ms)
1.
In this paper we discuss the problem of verifying and computing optimal controls of systems whose dynamics are governed by differential systems with a discontinuous right-hand side. In our work, we are motivated by optimal control of mechanical systems with Coulomb friction, which exhibit such a right-hand side. Notwithstanding the impressive development of nonsmooth and set-valued analysis, these systems have not been closely studied either computationally or analytically. We show that even when the solution crosses and does not stay on the discontinuity, differentiating the results of a simulation gives gradients whose errors are of a size independent of the step size. This means that the strategy of “optimize the discretization” will usually fail for problems of this kind. We approximate the discontinuous right-hand side of the differential equations or inclusions by a smooth right-hand side. For these smoothed approximations, we show that the resulting gradients approach the true gradients provided that the start and end points of the trajectory do not lie on the discontinuity and that Euler’s method is used with a step size “sufficiently small” in comparison with the smoothing parameter. Numerical results are presented for a crude model of car racing that involves Coulomb friction and slip, showing that this approach is practical and can handle problems of moderate complexity.
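The smoothing idea above can be sketched in a few lines. This is a minimal illustration, not the paper's car-racing model: a 1-D point mass whose Coulomb friction sign(v) is replaced by tanh(v/eps), integrated by explicit Euler with a step well below the smoothing parameter; the model, `mu`, and all parameter values are assumptions chosen for illustration.

```python
import math

def simulate(u, eps=1e-2, h=1e-3, T=1.0):
    # 1-D point mass under a constant control u with smoothed Coulomb friction:
    #   v' = u - mu * tanh(v / eps)
    # tanh(v/eps) replaces the discontinuous sign(v); explicit Euler is used
    # with step h well below eps, as the analysis above requires.
    # (Model and all parameter values are illustrative only.)
    mu, v = 0.5, 0.0
    for _ in range(int(T / h)):
        v += h * (u - mu * math.tanh(v / eps))
    return v

# Central finite difference of the terminal velocity w.r.t. u:
# meaningful here precisely because the smoothed dynamics are differentiable.
d = 1e-6
grad = (simulate(1.0 + d) - simulate(1.0 - d)) / (2 * d)
```

With the discontinuous sign(v) instead of tanh, the same finite difference would carry an O(1) error regardless of step size, which is the failure mode the abstract describes.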

2.
Using the recession analysis we study necessary and sufficient conditions for the existence and the stability of a finite semi-coercive variational inequality with respect to data perturbation. Some applications of the abstract results in mechanics and in electronic circuits involving devices like ideal diode and practical diode are discussed. The research of S. Adly has been supported by the “Fondation EADS” and the ANR project “Guidage”.

3.
In this paper, we propose treating complex reservoir operation problems with our newly developed tool of fuzzy criterion decision processes. This novel approach has been shown to be a more flexible and useful analysis tool, especially when it is desirable to incorporate an expert’s knowledge into the decision models. Additionally, it has been demonstrated that these decision models usually yield an optimal solution that guarantees the highest degree of satisfaction. We provide a practical exemplification procedure for the models presented as well as an application example.

4.
We show that uniqueness and existence for signal reconstruction from multiscale edges in the Mallat and Zhong algorithm become possible if we restrict our signals to Paley-Wiener space, band-limit our wavelets, and irregularly sample at the wavelet transform (absolute) maxima (the edges), while possibly including (enough) extra points at each level. We do this in a setting that closely resembles the numerical analysis setting of Mallat and Zhong and that seems to capture something of the essence of their (practical) reconstruction method. Our work builds on a uniqueness result of Grochenig for reconstructing an L2 signal from irregular sampling of its wavelet transform, and on the related work of Benedetto, Heller, Mallat, and Zhong. We show that the rate of convergence for this reconstruction algorithm is geometric and computable in advance. Finally, we consider the effect on the rate of convergence of not sampling enough local maxima.

5.
In this paper, we suggest a penalty method that converts a combinatorial optimization problem with linear constraints into a continuous global optimization problem with linear constraints. The paper also addresses a topic of vital practical significance: pump operation optimization in a water system. We formulate a model based on a simplified flow volume balance and solve the problem of optimal pump switching (“ON”/“OFF”) settings with the reduced gradient method. This global solution approach offers several benefits for practical application to a real system, as the case study shows.
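The core reformulation trick can be shown in one line. This is a generic sketch, not the paper's pump-scheduling model: a 0/1 constraint on each variable is replaced by a smooth penalty term that vanishes exactly at binary points.

```python
def penalized(f, x, rho):
    # Continuous penalty reformulation of a 0/1 constraint: the term
    # rho * sum x_j * (1 - x_j) is zero exactly at binary points and
    # positive for any fractional x_j, so for rho large enough a
    # continuous optimizer is pushed toward ON/OFF decisions.
    # (Illustrative sketch, not the paper's pump model.)
    return f(x) + rho * sum(xj * (1 - xj) for xj in x)

cost = lambda x: sum(x)                    # a toy objective
binary_val = penalized(cost, [0.0, 1.0, 0.0], 10.0)    # no penalty incurred
fractional_val = penalized(cost, [0.5, 1.0, 0.0], 10.0)  # penalized
```

At a binary point the penalty contributes nothing, so the penalized problem agrees with the original combinatorial one there; a fractional point pays `rho * 0.25` per half-open pump.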

6.
This paper proposes the use of a formal grammar for the verification of mathematical formulae for a practical mathematical OCR system. Like a C compiler detecting syntax errors in a source file, we want to have a verification mechanism to find errors in the output of mathematical OCR. A linear monadic context-free tree grammar (LM-CFTG) is employed as a formal framework to define “well-formed” mathematical formulae. A cubic time parsing algorithm for LM-CFTGs is presented. For the purpose of practical evaluation, a verification system for mathematical OCR is developed, and the effectiveness of the system is demonstrated by using the ground-truthed mathematical document database InftyCDB-1 and a misrecognition database newly constructed for this study.

7.
This paper presents a data envelopment analysis model that can be implemented by public sector management for assessing the efficiency of a health system within a developing country. To illustrate the practical implementation and interpretation of the model, this study compares health systems across 51 developing countries using 1998–99 data. The results of the analysis generated empirical indicators of efficiency, and we demonstrate how these may then be used by management in order to understand the factors associated with good performance of a health system.

8.
“Logical analysis of data” (LAD) is a methodology developed since the late eighties, aimed at discovering hidden structural information in data sets. LAD was originally developed for analyzing binary data by using the theory of partially defined Boolean functions. An extension of LAD to the analysis of numerical data sets is achieved through the process of “binarization”, which replaces each numerical variable by binary “indicator” variables, each showing whether the value of the original variable is above or below a certain level. Binarization was successfully applied to the analysis of a variety of real life data sets. This paper develops the theoretical foundations of the binarization process, studying the combinatorial optimization problems related to the minimization of the number of binary variables. To provide an algorithmic framework for the practical solution of such problems, we construct compact linear integer programming formulations of them. We develop polynomial time algorithms for some of these minimization problems, and prove NP-hardness of others. The authors gratefully acknowledge the partial support of the Office of Naval Research (grants N00014-92-J1375 and N00014-92-J4083).
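The binarization step itself is simple to sketch. The cut-selection rule below (midpoints between consecutive distinct values with differing class labels) is a common illustrative choice, an assumption on our part; the paper's actual contribution is minimizing how many such indicator variables are needed.

```python
def cut_points(values, labels):
    # Candidate thresholds: midpoints between consecutive distinct
    # values that carry different class labels. (A simple illustrative
    # rule; the paper studies minimizing the number of such variables.)
    pairs = sorted(zip(values, labels))
    return [(v1 + v2) / 2
            for (v1, l1), (v2, l2) in zip(pairs, pairs[1:])
            if v1 != v2 and l1 != l2]

def binarize(x, cuts):
    # Replace one numerical value by binary indicator variables x >= cut
    return [int(x >= c) for c in cuts]

cuts = cut_points([1, 2, 3, 4], [0, 0, 1, 1])   # class changes between 2 and 3
```

Each indicator records on which side of a threshold the original value falls, turning a numerical data set into a binary one amenable to the Boolean-function machinery of LAD.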

9.
It has been proven that the code lengths of Tardos’s collusion-secure fingerprinting codes are of theoretically minimal order with respect to the number of adversarial users (pirates). However, the code lengths can be further reduced, as some preceding studies have revealed. In this article we improve a recent discrete variant of Tardos’s codes, and give a security proof of our codes under an assumption weaker than the original Marking Assumption. Our analysis shows that our codes have significantly shorter lengths than Tardos’s codes. For example, when c = 8, our code length is about 4.94% of Tardos’s code in a practical setting and about 4.62% in a certain limit case. Our code lengths for large c are asymptotically about 5.35% of Tardos’s codes. A part of this work was presented at 17th Applied Algebra, Algebraic Algorithms, and Error Correcting Codes (AAECC-17), Bangalore, India, December 16–20, 2007.
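For orientation, the original continuous construction that the discrete variant shortens can be sketched as follows: each code position gets a bias drawn from an arcsine density on a cut-off interval, and every user's bit is an independent biased coin flip. The cutoff value and sizes below are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def tardos_code(n_users, length, t=0.01, seed=0):
    # Sketch of Tardos's original probabilistic fingerprinting code:
    # position i gets a bias p_i from the arcsine density on [t, 1-t]
    # (sampled as sin^2 of a uniform angle), and each user's bit is an
    # independent Bernoulli(p_i). The cutoff t and the code length here
    # are illustrative; the article above shortens the required length
    # via a discrete variant of this construction.
    rng = random.Random(seed)
    tp = math.asin(math.sqrt(t))                     # sin(tp)^2 == t
    ps = [math.sin(rng.uniform(tp, math.pi / 2 - tp)) ** 2
          for _ in range(length)]
    code = [[int(rng.random() < p) for p in ps] for _ in range(n_users)]
    return ps, code

ps, code = tardos_code(5, 200, seed=1)
```

Tracing is done (not shown) by correlating a pirated copy's bits against each user's codeword with position-dependent weights derived from the biases `ps`.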

10.
There are several classes of interior point algorithms that solve linear programming problems in O(√nL) iterations. Among them, several potential reduction algorithms combine both theoretical (O(√nL) iterations) and practical efficiency, as they allow the flexibility of line searches in the potential function, and thus can lead to practical implementations. It is a significant open question whether interior point algorithms can lead to better complexity bounds. In the present paper we give some negative answers to this question for the class of potential reduction algorithms. We show that, even if we allow line searches in the potential function, and even for problems that have network structure, the bound O(√nL) is tight for several potential reduction algorithms, i.e., there is a class of examples with network structure in which the algorithms need at least Θ(√nL) iterations to find an optimal solution. The research of the author was partially supported by a Presidential Young Investigator Award DDM-9158118 with matching funds from Draper Laboratory.

11.
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in machine learning, control theory, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm—equal to the sum of the singular values—of the decision variable and has been shown to provide the optimal low rank solution in a variety of scenarios. In this paper, we assess the practical performance of this heuristic for finding the minimum rank matrix subject to linear equality constraints. We characterize properties of the null space of the linear operator defining the constraint set that are necessary and sufficient for the heuristic to succeed. We then analyze linear constraints sampled uniformly at random, and obtain dimension-free bounds under which our null space properties hold almost surely as the matrix dimensions tend to infinity. Finally, we provide empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic’s performance in non-asymptotic scenarios.
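The surrogate itself is easy to compute. The sketch below shows the nuclear norm and singular value thresholding, the proximal step at the heart of many nuclear-norm solvers; it illustrates the general heuristic, not the null-space analysis of this paper.

```python
import numpy as np

def nuclear_norm(A):
    # Sum of singular values: the convex surrogate for rank(A)
    return np.linalg.svd(A, compute_uv=False).sum()

def svt(A, tau):
    # Singular value thresholding: the proximal operator of
    # tau * ||.||_*, the basic step in many nuclear-norm solvers.
    # (Illustrative; not the specific analysis of the paper above.)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

A = np.outer([1.0, 2.0], [3.0, 4.0])   # rank-1, nuclear norm = 5*sqrt(5)
```

Shrinking the singular values and discarding those below `tau` is what drives iterates toward low rank, mirroring how the l1 norm promotes sparsity in vectors.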

12.
In this paper we study the asymptotic behavior of Bayes estimators for hidden Markov models as the number of observations goes to infinity. The theorem that we prove is similar to the Bernstein–von Mises theorem on the asymptotic behavior of the posterior distribution for the case of independent observations. We show that our theorem is applicable to a wide class of hidden Markov models. We also discuss the implications of the theorem’s assumptions for several models that are used in practical applications such as ion channel kinetics.

13.
In this paper we present a new algorithm for computing the homology of regular CW-complexes. This algorithm is based on the coreduction algorithm due to Mrozek and Batko and consists essentially of a geometric preprocessing algorithm for the standard chain complex generated by a CW-complex. By employing the concept of S-complexes the original chain complex can—in all known practical cases—be reduced to a significantly smaller S-complex with isomorphic homology, which can then be computed using standard methods. Furthermore, we demonstrate that in the context of non-uniform cubical grids this method significantly improves currently available algorithms based on uniform cubical grids.

14.
In this paper we explore whether the incorporation of systematic time series analyses and mathematical optimization procedures in the practical planning process has the potential to improve production program decisions. The cases of four German cash crop farms are investigated over six planning periods. In order to avoid solutions that simply exceed the farmer’s risk tolerance, the apparently accepted variance of the observed program’s total gross margin is used as an upper bound in the optimization. For each of the 24 planning occasions, the formal model is used to generate optimized alternative programs. The total gross margins that could have been realized if the formally optimized programs had been implemented are then compared to those that were actually realized. We find that the farmers could have increased their total gross margins significantly if—instead of using simple routines and rules of thumb—they had used adequate methods of statistical analysis combined with the formal optimization model. Norbert Hirschauer thanks the German Research Foundation (DFG) for the opportunity to work on this paper during a research leave.

15.
We study qualitative indications for d.c. representations of closed sets in, and functions on, Hilbert spaces. The first indication is an index of nonconvexity, which can be regarded as a measure of the degree of nonconvexity. We show that a closed set is weakly closed if this index is finite. Using this result we can prove the solvability of nonconvex minimization problems. By duality, a minimization problem over a feasible set for which this index is low can be reduced to a quasi-concave minimization over a convex set in a low-dimensional space. The second indication is separability, which can be incorporated in solving dual problems. Both the index of nonconvexity and the separability can be characteristics of “good” d.c. representations. For practical computation we present a notion of clouds which enables us to obtain a good d.c. representation for a class of nonconvex sets. Using a generalized Carathéodory theorem we present various applications of clouds.

16.
Using an image rectification application arising in the field of forest management, we demonstrate in this paper the practical feasibility of applying thin plate spline (TPS) techniques to real image warping problems. The use of TPS-based warping in large problems can be limited by two factors: numerical instability in the calculation of TPS coefficients, and the intensive computation involved in evaluating TPS functions. Both drawbacks can be overcome by taking advantage of some recent advances in the numerical analysis of radial basis functions. Here we relate our experience in applying some of this work to realize successful TPS warping of large forestry images, and some graphical examples are given. Methods for automated control point selection and editing are also presented, and a cross-correlation technique for evaluating the effectiveness of the warps is described. This experience has guided our development of an effective and efficient software package for control point selection and TPS warping of digital images. This revised version was published online in June 2006 with corrections to the Cover Date.
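The basic TPS fit that becomes unstable at scale can be written as one dense linear solve. This is the textbook formulation for small control-point sets, an illustrative baseline rather than the stabilized radial-basis methods the paper applies.

```python
import numpy as np

def tps_kernel(r):
    # Thin plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r == 0, 0.0, r ** 2 * np.log(r))

def tps_fit(pts, vals):
    # Direct solve for the TPS coefficients [w; a] interpolating vals at
    # the 2-D control points pts. A plain dense solve, fine for small
    # problems; the instability and cost discussed above appear at the
    # large scales the paper targets.
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])          # affine part: 1, x, y
    A = np.block([[tps_kernel(dist), P],
                  [P.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    return np.linalg.solve(A, b)                   # [w_1..w_n, a0, a1, a2]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
coeffs = tps_fit(pts, pts[:, 0] + pts[:, 1])       # purely affine data
```

For data that is exactly affine, the warp weights `w` come out zero and the affine part reproduces the plane, a quick sanity check on the solve.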

17.
The article provides a refinement for the volume-corrected Laplace-Metropolis estimator of the marginal likelihood of DiCiccio et al. The correction volume of probability α in DiCiccio et al. is fixed and suggested to take the value α=0.05. In this article α is selected based on an asymptotic analysis to minimize the mean square relative error (MSRE). This optimal choice of α is shown to be invariant under linear transformations. The invariance property leads to easy implementation for multivariate problems. An implementation procedure is provided for practical use. A simulation study and a real data example are presented.
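For context, the plain (uncorrected) Laplace approximation that these estimators build on can be sketched in one dimension. The conjugate normal example below is an assumption chosen because the approximation is exact there, which makes the sketch checkable.

```python
import math

def laplace_marginal(loglike, logprior, theta_hat, hess):
    # Plain 1-D Laplace approximation to the log marginal likelihood:
    #   log m(y) ~= loglike(th) + logprior(th) + 0.5*log(2*pi) - 0.5*log(hess)
    # where th is the posterior mode and hess is minus the second
    # derivative of the log posterior at th. (The volume-corrected
    # Laplace-Metropolis estimator refines this basic quantity.)
    return (loglike(theta_hat) + logprior(theta_hat)
            + 0.5 * math.log(2 * math.pi) - 0.5 * math.log(hess))

# Conjugate check: y ~ N(theta, 1) with prior theta ~ N(0, 1) and y = 1.
# Posterior mode is y/2 = 0.5, curvature is 2, and marginal is N(y; 0, 2).
ll = lambda th: -0.5 * math.log(2 * math.pi) - 0.5 * (1.0 - th) ** 2
lp = lambda th: -0.5 * math.log(2 * math.pi) - 0.5 * th ** 2
approx = laplace_marginal(ll, lp, 0.5, 2.0)
exact = -0.5 * math.log(4 * math.pi) - 0.25    # log N(1; 0, 2)
```

In the Gaussian case the approximation is exact; the corrections studied in the article matter when the posterior is non-Gaussian and the mode and curvature come from MCMC output.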

18.
Optimal enough?     
An alleged weakness of heuristic optimisation methods is the stochastic character of their solutions: instead of finding the truly optimal solution, they only provide a stochastic approximation of this optimum. In this paper we look into a particular application, portfolio optimisation. We demonstrate that the randomness of the ‘optimal’ solution obtained from the algorithm can be made so small that for all practical purposes it can be neglected. More importantly, we look at the relevance of the remaining uncertainty in the out-of-sample period. The relationship between in-sample fit and out-of-sample performance is not monotonic, but still, we observe that up to a point better solutions in-sample lead to better solutions out-of-sample. Beyond this point there is no cause for improving the solution any further: any in-sample improvement yields out-of-sample only financially meaningless improvements and unpredictable changes (noise) in performance.

19.
The application of graph prime factorization algorithms is limited in practice by unavoidable noise in the data. A first step towards error-tolerant “approximate” prime factorization is the development of local approaches that cover the graph by factorizable patches and then use this information to derive global factors. We present here a local, quasi-linear algorithm for the prime factorization of “locally unrefined” graphs with respect to the strong product. To this end we introduce the backbone B(G) of a given graph G and show that the neighborhoods of the backbone vertices provide enough information to determine the global prime factors.

20.
In this paper we consider discrete time forward interest rate models. In our approach, unlike in the classical Heath–Jarrow–Morton framework, the forward rate curves are driven by a random field; hence we get a general interest rate structure. Our aim is to give an overview of our results in such a model on the following questions: no-arbitrage conditions, maximum likelihood estimation of the volatility, the joint estimation of the parameters and the asymptotic behaviour of the estimators, and the relationship with continuous models. Finally, we discuss the practical problems of the estimation and show several numerical results on the statistics of such models. This research has been supported by the Hungarian Scientific Research Fund under Grants No. OTKA–F046061/2004 and OTKA–T048544/2005.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号