Similar Documents
20 similar documents retrieved.
1.
Local consistency techniques for numerical constraints over interval domains combine interval arithmetic, constraint inversion, and bisection to reduce variable domains. In this paper, we study the problem of integrating any specific interval arithmetic library into constraint solvers. For this purpose, we design an interface between consistency algorithms and arithmetic. The interface has a two-level architecture: at the low level, functional interval arithmetic, which is only a specification to be implemented by specific libraries; at the high level, the set of operations required by solvers, such as relational interval arithmetic or bisection primitives. This work leads to the implementation of an interval component by means of C++ generic programming methods. The overhead resulting from generic programming is discussed.
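As a rough illustration of the two-level design described above, the following C++ sketch (hypothetical names, not the paper's actual interface) separates a low-level functional interval type, parameterized by its bound type, from a solver-level relational operation that narrows domains for the constraint x + y = z. Outward rounding, which a real library must provide, is omitted.

```cpp
#include <algorithm>
#include <iostream>

// Low level: a functional interval type parameterized by the bound type.
// A concrete arithmetic library would supply outward-rounded operations here.
template <typename T>
struct Interval {
    T lo, hi;
    bool empty() const { return lo > hi; }
};

template <typename T>
Interval<T> add(const Interval<T>& a, const Interval<T>& b) {
    return {a.lo + b.lo, a.hi + b.hi};   // rounding control omitted in this sketch
}

template <typename T>
Interval<T> sub(const Interval<T>& a, const Interval<T>& b) {
    return {a.lo - b.hi, a.hi - b.lo};
}

template <typename T>
Interval<T> intersect(const Interval<T>& a, const Interval<T>& b) {
    return {std::max(a.lo, b.lo), std::min(a.hi, b.hi)};
}

// High level: relational operation used by a consistency algorithm.
// Given the constraint x + y = z, narrow the three domains.
template <typename T>
void narrow_sum(Interval<T>& x, Interval<T>& y, Interval<T>& z) {
    z = intersect(z, add(x, y));   // keep only the part of z consistent with x + y
    x = intersect(x, sub(z, y));   // keep only the part of x consistent with z - y
    y = intersect(y, sub(z, x));   // keep only the part of y consistent with z - x
}

int main() {
    Interval<double> x{0.0, 10.0}, y{2.0, 5.0}, z{0.0, 4.0};
    narrow_sum(x, y, z);
    std::cout << "x = [" << x.lo << ", " << x.hi << "]\n"   // [0, 2]
              << "y = [" << y.lo << ", " << y.hi << "]\n"   // [2, 4]
              << "z = [" << z.lo << ", " << z.hi << "]\n";  // [2, 4]
    return 0;
}
```

In such a design, the low-level part is the only piece a specific interval library has to supply; the solver sees only the high-level narrowing operations.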

2.
This paper deals with the application of interval arithmetic to Bessel-Ricatti functions. The extended interval arithmetic we have used is due to Markov and involves a check on the monotonicity of functions in an attempt to obtain sharper bounds on computed intervals. The results we obtain are compared with those from Hansen's methods (based on bounding the Taylor series remainder) and those from Moore's technique of subdividing intervals. Two techniques are considered for evaluating derivatives of functions: one uses hand-coded derivatives, the other automatic differentiation. Numerical results are given, using Fortran 90 implementations of interval and automatic differentiation arithmetic.
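The following C++ sketch illustrates the general idea of a monotonicity test, one of the devices behind such extended interval arithmetics: if an interval enclosure of the derivative does not contain zero, the exact range is obtained from the interval endpoints. The function x^3 - x on [1, 2] is a hypothetical stand-in for the Bessel-Ricatti functions, and the test shown is generic, not Markov's arithmetic itself.

```cpp
#include <algorithm>
#include <iostream>

struct Interval { double lo, hi; };

Interval sub(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }
Interval cube(Interval a) {           // x^3 is monotone, so endpoint cubes suffice
    return {a.lo * a.lo * a.lo, a.hi * a.hi * a.hi};
}
Interval sqr(Interval a) {            // x^2: handle intervals containing 0
    double l = a.lo * a.lo, h = a.hi * a.hi;
    Interval r{std::min(l, h), std::max(l, h)};
    if (a.lo <= 0.0 && a.hi >= 0.0) r.lo = 0.0;
    return r;
}

double f(double x) { return x * x * x - x; }

int main() {
    Interval X{1.0, 2.0};

    // Naive natural interval extension of f(x) = x^3 - x.
    Interval naive = sub(cube(X), X);                               // [-1, 7]

    // Derivative enclosure f'(X) = 3 X^2 - 1.
    Interval d = {3.0 * sqr(X).lo - 1.0, 3.0 * sqr(X).hi - 1.0};    // [2, 11]

    Interval sharp = naive;
    if (d.lo >= 0.0)       sharp = {f(X.lo), f(X.hi)};   // f increasing on X
    else if (d.hi <= 0.0)  sharp = {f(X.hi), f(X.lo)};   // f decreasing on X

    std::cout << "naive : [" << naive.lo << ", " << naive.hi << "]\n"    // [-1, 7]
              << "sharp : [" << sharp.lo << ", " << sharp.hi << "]\n";   // [0, 6]
    return 0;
}
```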

3.
In this paper, for a one-dimensional formal group over the ring of integers of a local field, in the case of small ramification, we study the arithmetic of the module of roots of the isogeny, as well as the arithmetic of the formal module constructed on the maximal ideal of a local field containing all the roots of the isogeny. Bibliography: 5 titles. Translated from Zapiski Nauchnykh Seminarov POMI, Vol. 338, 2006, pp. 125–136.

4.
Floating-point arithmetic precision is limited in length: the IEEE single (respectively double) precision format is 32 bits (respectively 64 bits) long. Extended precision formats can be up to 128 bits long. However, some problems require an even longer floating-point format because of round-off errors. Such problems are usually solved in arbitrary precision, but round-off errors still occur and must be controlled. Interval arithmetic has been implemented in arbitrary precision, for instance in the MPFI library. Interval arithmetic provides guaranteed results, but it is not well suited to the validation of huge applications. The CADNA library estimates round-off error propagation using stochastic arithmetic. CADNA has enabled the numerical validation of real-life applications, but it can be used only in single or double precision. In this paper, we present a library called SAM (Stochastic Arithmetic in Multiprecision). It is a multiprecision extension of the classic CADNA library. In SAM (as in CADNA), the arithmetic and relational operators are overloaded in order to deal with stochastic numbers. As a consequence, using SAM in a scientific code requires only a few modifications. This new library makes it possible to dynamically control the numerical methods used and, more particularly, to determine the optimal number of iterations in an iterative process. We present some applications of SAM to the numerical validation of chaotic systems modeled by the logistic map.
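The sketch below (C++) is a toy illustration of the stochastic-arithmetic idea behind CESTAC/CADNA/SAM, applied to the logistic map mentioned in the abstract: the same computation is run several times with randomly perturbed results, and the spread of the samples estimates the number of significant digits that survive. Real libraries perturb the last bit of every floating-point operation and overload the operators; the fixed relative perturbation used here is only a stand-in.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Toy illustration of stochastic arithmetic: run the same computation several
// times with randomly perturbed rounding and estimate how many decimal digits
// the samples still share.  This simulates random rounding with a tiny random
// relative error; it is NOT the actual CADNA/SAM mechanism.

std::mt19937 gen(42);
std::uniform_real_distribution<double> u(-1.0, 1.0);

double perturb(double x) {                 // simulate a random rounding error
    return x * (1.0 + 1e-15 * u(gen));
}

int main() {
    const double r = 3.99;                 // chaotic regime of the logistic map
    const int    iters = 60, samples = 3;
    double x[samples];
    for (int s = 0; s < samples; ++s) x[s] = 0.5;

    for (int n = 0; n < iters; ++n)
        for (int s = 0; s < samples; ++s)
            x[s] = perturb(r * x[s] * (1.0 - x[s]));

    // Mean and spread of the samples give an estimate of the significant digits.
    double mean = (x[0] + x[1] + x[2]) / 3.0;
    double dev  = std::max({std::fabs(x[0] - mean), std::fabs(x[1] - mean),
                            std::fabs(x[2] - mean)});
    double digits = (dev > 0.0) ? -std::log10(dev / std::fabs(mean)) : 16.0;

    std::printf("mean = %.17g, estimated significant digits = %.1f\n",
                mean, digits);
    return 0;
}
```

For a chaotic map, the estimated number of common digits typically collapses after a few dozen iterations, which is exactly the kind of diagnostic such libraries provide.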

5.
Several authors have used interval arithmetic to deal with parametric or sensitivity analysis in mathematical programming problems. Several reported computational experiments have shown how interval arithmetic can provide such results. However, there has not been a characterization of the resulting solution interval in terms of the usual sensitivity analysis results. This paper presents a characterization of perturbed convex programs and the resulting solution intervals. Interval arithmetic was developed as a mechanism for dealing with the inherent error associated with numerical computations on a computational device; here it is used to describe error in the parameters. We show that, for convex programs, the resulting solution intervals can be characterized in terms of the usual sensitivity analysis results. It has often been reported in the literature that even well-behaved convex problems can exhibit pathological behavior in the presence of data perturbations. This paper uses interval arithmetic to deal with such problems, and to characterize the behavior of the perturbed problem in the resulting interval. These results form the foundation for future computational studies using interval arithmetic for nonlinear parametric analysis.

6.
In general, the fuzzy Graphical Evaluation and Review Technique (GERT) evaluates and analyzes variables with interval arithmetic (α-cut arithmetic) operations, especially in complicated fuzzy systems. These interval arithmetic operations can cause fuzziness to accumulate in complicated systems, and this accumulation may prevent decision-makers from effectively evaluating problems and systems in a vague environment. To overcome the accumulation of fuzziness and reliably reduce fuzzy spreads, this study adopts approximate fuzzy arithmetic operations under the weakest t-norm (Tω) to evaluate fuzzy reliability models based on fuzzy GERT simulation. The approximate fuzzy arithmetic operations apply the principles of interval arithmetic under the weakest t-norm, and therefore yield decision values with less accumulated fuzziness in a vague environment. Numerical examples show that the approximate fuzzy arithmetic successfully calculates the results of fuzzy operations, as interval arithmetic does, while reducing fuzzy spreads more effectively. In a real fuzzy repairable reliability model, the approach likewise analyzes the reliability problem successfully and yields more confident fuzzy results.
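As a concrete illustration of why the weakest t-norm reduces fuzzy spreads, the following C++ sketch compares the standard sup-min addition of triangular fuzzy numbers, whose spreads add up, with the Tω-based addition, whose spreads take the maximum. The numbers and the (center, left spread, right spread) representation are illustrative only and are not taken from the paper's GERT reliability model.

```cpp
#include <algorithm>
#include <iostream>

// Triangular fuzzy number (center, left spread, right spread).
struct TFN { double c, l, r; };

// Addition under the usual sup-min extension principle: spreads add up.
TFN add_min(const TFN& a, const TFN& b) {
    return {a.c + b.c, a.l + b.l, a.r + b.r};
}

// Addition under the weakest t-norm Tw: spreads take the maximum,
// so fuzziness does not accumulate as fast along a long computation.
TFN add_tw(const TFN& a, const TFN& b) {
    return {a.c + b.c, std::max(a.l, b.l), std::max(a.r, b.r)};
}

void print(const char* name, const TFN& x) {
    std::cout << name << " = (" << x.c << ", " << x.l << ", " << x.r << ")\n";
}

int main() {
    // Two fuzzy activity durations, e.g. arcs of a GERT-like network.
    TFN a{5.0, 1.0, 2.0}, b{3.0, 0.5, 1.5};
    print("sup-min sum", add_min(a, b));   // (8, 1.5, 3.5)  -- wider spreads
    print("Tw sum     ", add_tw(a, b));    // (8, 1.0, 2.0)  -- smaller spreads
    return 0;
}
```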

7.
We present a framework for validated numerical computations with real functions. The framework is based on a formalisation of abstract data types for basic floating-point arithmetic, interval arithmetic and function models based on Banach algebra. As a concrete instantiation, we develop an elementary smooth function calculus approximated by sparse polynomial models. We demonstrate formal verification applied to validated calculus by a formalisation of basic arithmetic operations in a theorem prover. The ultimate aim is to develop a formalism powerful enough for reachability analysis of nonlinear hybrid systems.

8.
Inventory models of deteriorating items, such as food products and vegetables, normally involve imprecise parameters, such as imprecise inventory costs, fuzzy storage area, and fuzzy budget allocation. In this paper, we provide two defuzzification techniques for two fuzzy inventory models using (i) the extension principle and duality theory of non-linear programming and (ii) interval arithmetic. On the basis of Zadeh's extension principle, two non-linear programs parameterized by the possibility level α are formulated to calculate the lower and upper bounds of the minimum average cost at each α-level, from which the membership function of the objective function is constructed. In the interval arithmetic technique, the interval objective function is transformed into an equivalent deterministic multi-objective problem defined by the left and right limits of the interval; this formulation corresponds to the possibility level α = 0.5. Finally, the multi-objective problem is solved by a multi-objective genetic algorithm (MOGA). The model is illustrated through a numerical example and solved for different values of the possibility level α through the extension principle, and for α = 0.5 via the MOGA. As a particular case, results are also obtained for the inventory model without deterioration, and the results from the two methods for α = 0.5 are compared.
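The sketch below (C++) illustrates only the interval-arithmetic side of such an approach, on a hypothetical EOQ-style average cost with fuzzy demand and fuzzy holding cost: each fuzzy parameter is cut at level α, and because the cost is increasing in both parameters for a fixed lot size, the interval endpoints give the lower and upper cost bounds. The cost function and data are assumptions chosen for illustration; they are not the paper's deteriorating-item models or its non-linear programs.

```cpp
#include <cstdio>

// Triangular fuzzy number and its alpha-cut interval.
struct TFN { double a, b, c; };                       // support [a, c], peak at b
struct Interval { double lo, hi; };

Interval alpha_cut(const TFN& f, double alpha) {
    return {f.a + alpha * (f.b - f.a), f.c - alpha * (f.c - f.b)};
}

// Hypothetical EOQ-style average cost TC(Q) = K*D/Q + h*Q/2.
// For a fixed lot size Q it is increasing in both D and h, so the alpha-cut
// bounds of the fuzzy cost are attained at the interval endpoints.
double TC(double K, double D, double h, double Q) { return K * D / Q + h * Q / 2.0; }

int main() {
    TFN demand{900.0, 1000.0, 1200.0};   // fuzzy annual demand (illustrative)
    TFN hold{1.8, 2.0, 2.5};             // fuzzy unit holding cost (illustrative)
    const double K = 50.0, Q = 200.0;    // crisp setup cost and lot size

    const double alphas[] = {0.0, 0.5, 1.0};
    for (double alpha : alphas) {
        Interval D = alpha_cut(demand, alpha);
        Interval h = alpha_cut(hold, alpha);
        // Monotonicity in D and h: lower bound at (D.lo, h.lo), upper at (D.hi, h.hi).
        double lo = TC(K, D.lo, h.lo, Q);
        double hi = TC(K, D.hi, h.hi, Q);
        std::printf("alpha = %.1f : cost in [%.2f, %.2f]\n", alpha, lo, hi);
    }
    return 0;
}
```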

9.
Principal component analysis on interval data
Summary  Real-world data analysis is often affected by different types of errors, such as measurement errors, computation errors, and imprecision related to the method adopted for estimating the data. The uncertainty in the data, which is strictly connected to these errors, may be treated by considering, rather than a single value for each datum, the interval of values in which it may fall: interval data. Statistical units described by interval data can be regarded as a special case of Symbolic Objects (SOs). In Symbolic Data Analysis (SDA), these data are represented as boxes. Accordingly, the purpose of the present work is the extension of Principal Component Analysis (PCA) to obtain a visualisation of such boxes in a lower-dimensional space, pointing out the relationships among the variables, among the units, and between the two. The aim is to use, when possible, the tools of interval algebra to adapt the mathematical models of classical PCA to the case in which an interval data matrix is given. The proposed method has been tested on a real data set, and the numerical results, which are in agreement with the theory, are reported.

10.
Summary  Let C⁰[a, b] be the Banach space of all real-valued continuous functions defined on the interval [a, b], endowed with the supremum norm. In this paper we construct optimal formulas for the numerical differentiation and integration for C⁰[a, b]. In particular, the questions of Meinguet [2] and Salzer [5] on the existence of such formulas are answered.

11.
Stochastic Analysis and Applications, 2013, 31(5): 863–892
This paper investigates control chart schemes for detecting drifts in the process mean μ and/or process standard deviation σ when individual observations are sampled. Drifts may be due to causes such as gradual deterioration of equipment, catalyst aging, waste accumulation, or human causes, such as operator fatigue or close supervision. The standard Shewhart X chart and moving range (MR) chart are evaluated, as well as several types of exponentially weighted moving average (EWMA) charts and combinations of charts involving these EWMA charts. We show that the combinations of the EWMA charts detect slow-rate and moderate-rate drifts much faster than the combined X and MR charts. We also show that varying the sampling interval adaptively as a function of the process data results in notable reductions in the detection delay of drifts in μ and/or σ.
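For reference, the following C++ sketch implements the textbook EWMA chart for individual observations with time-varying control limits, applied to a slowly drifting mean; the combined and adaptive-sampling schemes studied in the paper build on this basic statistic. The design parameters (λ = 0.1, L = 2.7) and the drift rate are illustrative assumptions, not the paper's settings.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Standard EWMA chart for individual observations:
//   z_i = lambda * x_i + (1 - lambda) * z_{i-1},  z_0 = mu0,
// with time-varying control limits
//   mu0 +/- L * sigma * sqrt(lambda/(2-lambda) * (1 - (1-lambda)^(2i))).

int main() {
    const double mu0 = 0.0, sigma = 1.0;   // in-control mean and standard deviation
    const double lambda = 0.1, L = 2.7;    // typical EWMA design parameters
    const double drift = 0.02;             // slow linear drift in the mean per period

    std::mt19937 gen(1);
    std::normal_distribution<double> noise(0.0, sigma);

    double z = mu0;
    for (int i = 1; i <= 200; ++i) {
        double x = mu0 + drift * i + noise(gen);          // drifting process
        z = lambda * x + (1.0 - lambda) * z;              // EWMA update
        double w = sigma * std::sqrt(lambda / (2.0 - lambda)
                                     * (1.0 - std::pow(1.0 - lambda, 2.0 * i)));
        if (z > mu0 + L * w || z < mu0 - L * w) {
            std::printf("EWMA signal at observation %d (z = %.3f)\n", i, z);
            break;
        }
    }
    return 0;
}
```

A small λ makes the statistic average over many past observations, which is why EWMA-type charts react to slow drifts much sooner than a Shewhart chart on individual values.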

12.
I. Ojeda, J. C. Rosales, Communications in Algebra, 2020, 48(9): 3707–3715
Abstract

In this paper we introduce the notion of an extension of a numerical semigroup. We provide a characterization of the numerical semigroups whose extensions are all arithmetic, and we give an algorithm for computing the whole set of arithmetic extensions of a given numerical semigroup. As a by-product, new explicit formulas for the Frobenius number and the genus of proportionally modular semigroups are obtained.

13.

In this paper, we design a Branch and Bound algorithm based on interval arithmetic to address nonconvex robust optimization problems. This algorithm provides the exact global solution of such difficult problems, which arise in many real-life applications. A code was developed in MATLAB and used to solve some robust nonconvex problems with few variables. This first numerical study shows the interest of the approach, which provides the global solution of such difficult robust nonconvex optimization problems.
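A minimal one-dimensional interval branch-and-bound, sketched below in C++, shows the bounding and bisection mechanics such an algorithm relies on: boxes whose interval lower bound exceeds the best point value found so far are discarded, and the rest are bisected. The toy objective x^4 - 4x^2 is an assumption chosen for illustration, the robustness aspect of the paper is not modeled, and outward rounding is ignored.

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <limits>

struct Interval { double lo, hi; };

// Enclosure of x^2 (handles intervals containing zero).
Interval sqr(Interval x) {
    double a = x.lo * x.lo, b = x.hi * x.hi;
    Interval r{std::min(a, b), std::max(a, b)};
    if (x.lo <= 0.0 && x.hi >= 0.0) r.lo = 0.0;
    return r;
}

// Natural interval extension of the toy objective f(x) = x^4 - 4x^2
// (global minimum -4 at x = +/- sqrt(2)); directed rounding is ignored.
Interval F(Interval x) {
    Interval x2 = sqr(x), x4 = sqr(x2);
    return {x4.lo - 4.0 * x2.hi, x4.hi - 4.0 * x2.lo};
}
double f(double x) { double x2 = x * x; return x2 * x2 - 4.0 * x2; }

int main() {
    const double tol = 1e-6;
    std::deque<Interval> work;
    work.push_back({-3.0, 3.0});
    double upper = f(0.0);                                  // best point value found so far
    double lower = std::numeric_limits<double>::infinity(); // verified bound from kept boxes

    while (!work.empty()) {
        Interval box = work.front(); work.pop_front();
        Interval bound = F(box);
        if (bound.lo > upper) continue;                     // prune: cannot contain the minimum
        upper = std::min(upper, f(0.5 * (box.lo + box.hi)));
        if (box.hi - box.lo < tol) {                        // small enough: keep its lower bound
            lower = std::min(lower, bound.lo);
            continue;
        }
        double mid = 0.5 * (box.lo + box.hi);               // bisect and revisit both halves
        work.push_back({box.lo, mid});
        work.push_back({mid, box.hi});
    }
    std::printf("global minimum enclosed in [%.8f, %.8f]\n", lower, upper);
    return 0;
}
```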

14.
We describe an algorithm for generating the symbolic sequences which code the orbits of points under an interval exchange transformation on three intervals. The algorithm has two components. The first is an arithmetic division algorithm applied to the lengths of the intervals; this arithmetic construction was originally introduced by the authors in an earlier paper and may be viewed as a two-dimensional generalization of the regular continued fraction. The second component is a combinatorial algorithm which generates the bispecial factors of the associated symbolic subshift as a function of the arithmetic expansion. As a consequence, we obtain a complete characterization of those sequences of block complexity 2n+1 which are natural codings of orbits of three-interval exchange transformations, thereby answering an old question of Rauzy. Partially supported by NSF grant INT-9726708.
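The notion of a natural coding can be made concrete with a few lines of C++: iterate a three-interval exchange transformation with the reversing permutation (3 2 1) and record, at each step, which subinterval the orbit visits. The interval lengths and starting point below are arbitrary choices; the authors' arithmetic division algorithm and the bispecial-factor construction are not implemented here.

```cpp
#include <cstdio>

// Natural coding of an orbit under a three-interval exchange transformation
// with permutation (3 2 1): the subintervals I1, I2, I3 of [0,1) are rearranged
// in the order I3, I2, I1 by piecewise translations.

int main() {
    // Interval lengths (must sum to 1); chosen arbitrarily for illustration.
    const double l1 = 0.318309886, l2 = 0.367879441, l3 = 1.0 - l1 - l2;

    double x = 0.123456789;      // starting point of the orbit
    char word[61] = {0};

    for (int n = 0; n < 60; ++n) {
        if (x < l1) {                        // x in I1 = [0, l1)
            word[n] = '1';
            x += l2 + l3;                    // I1 is sent to the rightmost block
        } else if (x < l1 + l2) {            // x in I2 = [l1, l1+l2)
            word[n] = '2';
            x += l3 - l1;                    // I2 is sent to the middle block
        } else {                             // x in I3 = [l1+l2, 1)
            word[n] = '3';
            x -= l1 + l2;                    // I3 is sent to the leftmost block
        }
    }
    // Under the usual independence (i.d.o.c.) condition such codings have
    // block complexity 2n+1, the class characterized in the paper.
    std::printf("coding word: %s\n", word);
    return 0;
}
```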

15.
Infimum-supremum interval arithmetic is widely used because of its ease of implementation and narrow results. In this note we show that the overestimation of midpoint-radius interval arithmetic compared to power set operations is uniformly bounded by a factor of 1.5 in radius. This is true for the four basic operations as well as for vector and matrix operations, over real and over complex numbers. Moreover, we describe an implementation of midpoint-radius interval arithmetic entirely using BLAS. Therefore, in particular, matrix operations are very fast on almost any computer, with minimal implementation effort. With the new definition, it seems possible for the first time to take full advantage of the speed of vector and parallel architectures. The algorithms have been implemented in the Matlab interval toolbox INTLAB.
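The factor-1.5 bound can be seen on a small example. The C++ sketch below uses the midpoint-radius product <a, ra>·<b, rb> = <ab, |a|rb + ra|b| + ra·rb> (Rump's definition with the directed-rounding terms omitted) and compares its radius with that of the exact inf-sup product; the chosen operands [0, 2]·[0, 2] actually attain the worst case.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Compare the midpoint-radius product with the exact (power set / inf-sup)
// product.  Over the reals, the radius of the former never exceeds 1.5 times
// the radius of the latter.  Directed rounding is ignored in this sketch.

struct MidRad { double mid, rad; };

MidRad mul(MidRad A, MidRad B) {
    return {A.mid * B.mid,
            std::fabs(A.mid) * B.rad + A.rad * std::fabs(B.mid) + A.rad * B.rad};
}

int main() {
    MidRad A{1.0, 1.0}, B{1.0, 1.0};          // both represent the interval [0, 2]

    MidRad C = mul(A, B);                     // midpoint-radius product

    // Exact product of [0,2]*[0,2] via the four endpoint products.
    double p[4] = {(A.mid - A.rad) * (B.mid - B.rad), (A.mid - A.rad) * (B.mid + B.rad),
                   (A.mid + A.rad) * (B.mid - B.rad), (A.mid + A.rad) * (B.mid + B.rad)};
    double lo = *std::min_element(p, p + 4), hi = *std::max_element(p, p + 4);
    double exact_rad = 0.5 * (hi - lo);

    std::printf("mid-rad product : <%g, %g>\n", C.mid, C.rad);                   // <1, 3>
    std::printf("exact product   : [%g, %g], radius %g\n", lo, hi, exact_rad);   // [0, 4], 2
    std::printf("overestimation factor in radius: %g (bounded by 1.5)\n",
                C.rad / exact_rad);                                              // 1.5
    return 0;
}
```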

16.
Simulation techniques are commonly used to analyze the influence of uncertainties in initial conditions and system parameters on the trajectories of the state variables of dynamical systems. In this context, interval arithmetic approaches are of interest: they are capable of determining guaranteed bounds of all reachable states if worst-case bounds of the above-mentioned uncertainties are known. Furthermore, interval algorithms ensure the correctness of numerical results in spite of the rounding errors that inevitably arise when floating-point operations are carried out on a computer. However, naive implementations of interval algorithms often lead to overestimation, i.e., too conservative enclosures, which can make the results meaningless. In this contribution, we summarize the basic routines of ValEncIA-IVP (VALidation of state ENClosures using Interval Arithmetic for Initial Value Problems), which computes interval enclosures of all reachable states of dynamical systems described by ordinary differential equations (ODEs). ValEncIA-IVP can be applied to the simulation of systems with both uncertain parameters and uncertain initial conditions. Advanced techniques for the reduction of overestimation are demonstrated for a simplified catalytic reactor. A first approach to using ValEncIA-IVP for the simulation of sets of differential-algebraic equations is outlined. Finally, an outlook is given on the integration of ValEncIA-IVP into an interval arithmetic framework for the computation of optimal and robust control strategies for continuous-time processes.

17.
Abstract

This paper deals with the problem of finding the “best” multipoint Padé approximant of an analytic function when data in some neighborhoods of sampling points are more important than others. More precisely, we obtain multipoint Padé approximants as limits of best rational Lp-approximations on unions of disks, as the measure of the disks tends to zero at different speeds. As such, this technique provides useful qualitative and analytic information concerning the approximants, which is difficult to obtain from a strictly numerical treatment.

18.
Fast wavelet transform algorithms for Toeplitz matrices are proposed in this paper. In contrast to the well-known discrete trigonometric transforms, such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT) for Toeplitz matrices, the new algorithms use compactly supported wavelets that preserve the character of a Toeplitz matrix after the transform, which is quite useful in many applications involving Toeplitz matrices. Results of numerical experiments show that the proposed method has good compression performance, similar to that of wavelets in digital image coding. Since the proposed algorithms turn a dense Toeplitz matrix into a band-limited form, the arithmetic operations they require are O(N), greatly reduced compared with the O(N log N) of the classical trigonometric transforms.

19.
The purpose of this paper is to investigate and propose a fuzzy extended economic production quantity (EPQ) model based on an elaborately modeled unit cost structure. This unit cost structure consists of various lot-size-correlative components, such as on-line setups, off-line setups, initial production defectives, direct material, labor, and depreciation, in addition to lot-size non-correlative items. The unit cost is thus modeled as a function of the production quantity, and the annual total cost function developed consists not only of annual inventory and setup costs but also of production cost. Moreover, via the concept of the fuzzy blurred optimal argument and the vertex method of α-cut fuzzy arithmetic (or fuzzy interval analysis), two solution approaches are proposed: (1) a fuzzy EPQ and (2) a compromised crisp EPQ in the fuzzy sense. An optimization procedure, which can simultaneously determine the α-cut-vertex combination of the fuzzy parameters and the optimizing value of the decision variable, is also proposed. A sensitivity model of the fuzzy total cost, and thus of the EPQ, with respect to the various cost factors is provided. Finally, a numerical example with original data collected from a firm demonstrates the usefulness of the new model.

20.
We propose a procedure to construct the empirical likelihood ratio confidence interval for the mean using a resampling method. This approach leads to the definition of a likelihood function for censored data, called the weighted empirical likelihood function. With the second-order expansion of the log likelihood ratio, a weighted empirical likelihood ratio confidence interval for the mean is proposed and shown by simulation studies to have coverage accuracy comparable to alternative methods, including the nonparametric bootstrap-t. The procedures proposed here apply in a unified way to different types of censored data, such as right-censored data, doubly censored data, and interval-censored data, and are computationally more efficient than the bootstrap-t method. An example with a set of doubly censored breast cancer data is presented to illustrate the application of our methods.
