Similar literature
20 similar records found (search time: 46 ms)
1.
Information inequalities in a general sequential model for stochastic processes are presented by applying the approach to estimation through estimating functions. Using this approach, Bayesian versions of the information inequalities are also obtained. In particular, exponential-family processes and counting processes are considered. The results are useful for establishing optimality properties of parameter estimators. The assertions are of particular importance for describing estimators in failure-repair models, both in the Bayesian approach and in the nuisance-parameter case.
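For orientation, the classical information inequality that the estimating-function framework generalizes can be sketched as follows (generic notation, not the paper's; the Godambe form is stated only heuristically):

```latex
% Classical Cramér–Rao bound for an unbiased estimator \hat\theta of \theta:
\operatorname{Var}_\theta(\hat\theta) \;\ge\; I(\theta)^{-1},
\qquad
I(\theta) \;=\; \mathbb{E}_\theta\!\left[\Big(\tfrac{\partial}{\partial\theta}\log f(X;\theta)\Big)^{2}\right].
% Estimating-function (Godambe) analogue: for G with \mathbb{E}_\theta[G(X;\theta)]=0,
% the asymptotic variance of the associated estimator is governed by the ratio
\frac{\mathbb{E}_\theta\!\left[G(X;\theta)^{2}\right]}{\big(\mathbb{E}_\theta\!\left[\partial G/\partial\theta\right]\big)^{2}}.
```

The score function is the special case G = ∂ log f/∂θ, for which the ratio attains the Cramér-Rao bound.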

2.
In this study, a new approach is developed to solve the initial value problem for interval linear differential equations. In the considered problem, the coefficients and the initial values are constant intervals. In the developed approach, there is no need to define a derivative for interval-valued functions: all derivatives used are classical derivatives of real functions, because the solution of the problem is defined as a bunch of real functions. Such a solution concept is also compatible with the robust stability concept. Sufficient conditions are provided for the solution to be expressible analytically. In addition, in a numerical example, the solution obtained by the proposed approach is compared with the solution obtained under generalized Hukuhara differentiability, and it is shown that the proposed approach gives a new type of solution. The main advantage of the proposed approach is that the solution to the considered interval initial value problem exists and is unique, as in the real case.
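A minimal sketch of the "bunch of real functions" idea for the scalar interval problem x'(t) = a x(t), x(0) = x0, with a and x0 ranging over intervals (toy example, not the paper's general construction): each real choice of (a, x0) yields a classical solution, and the solution set at time t is the envelope of those solutions.

```python
import math

def bunch_envelope(a_lo, a_hi, x0_lo, x0_hi, t, n=50):
    """Envelope at time t of the bunch of classical solutions x(t) = x0*exp(a*t)
    of x' = a*x, with a in [a_lo, a_hi] and x(0) in [x0_lo, x0_hi].
    Toy sketch via grid sampling of the interval data."""
    vals = []
    for i in range(n + 1):
        a = a_lo + (a_hi - a_lo) * i / n
        for j in range(n + 1):
            x0 = x0_lo + (x0_hi - x0_lo) * j / n
            vals.append(x0 * math.exp(a * t))  # classical solution at time t
    return min(vals), max(vals)

lo, hi = bunch_envelope(0.1, 0.3, 1.0, 2.0, t=1.0)
```

For positive data and t > 0 the envelope is attained at the interval endpoints, so lo = 1·e^0.1 and hi = 2·e^0.3 here.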

3.
The conditional nonlinear optimal perturbation (CNOP) approach is a powerful tool for predictability and targeted observation studies in the atmosphere-ocean sciences. By fully accounting for nonlinearity under appropriate physical constraints, the CNOP approach can reveal the optimal perturbations of initial conditions, boundary conditions, model parameters, and model tendencies that cause the largest simulation or prediction uncertainties. This paper reviews the progress of applying the CNOP approach to the atmosphere-ocean sciences during the past five years. Following an introduction to the CNOP approach, algorithmic developments for solving the CNOP are discussed. Recent CNOP applications are then reviewed, including predictability studies of high-impact ocean-atmospheric environmental events, ensemble forecasting, parameter sensitivity analysis, and uncertainty estimation caused by errors in model tendencies or boundary conditions. Finally, a summary and a discussion of future applications and challenges of the CNOP approach are presented.
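In commonly used notation (symbols generic), the CNOP of initial perturbations is the constrained maximizer of a nonlinear error-growth functional:

```latex
% CNOP: the initial perturbation that maximizes nonlinear error growth
\delta x_{0}^{*} \;=\; \arg\max_{\|\delta x_{0}\|\le \beta} J(\delta x_{0}),
\qquad
J(\delta x_{0}) \;=\; \bigl\| M_{\tau}(x_{0}+\delta x_{0}) - M_{\tau}(x_{0}) \bigr\|,
```

where M_τ is the nonlinear propagator to lead time τ and β bounds the perturbation amplitude; analogous definitions apply to boundary-condition, parameter, and tendency perturbations.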

4.
Different solution strategies for the relaxed Saint-Venant problem are presented and comparatively discussed from a mechanical and computational point of view. Three approaches are considered, namely the displacement approach, the mixed approach, and the modified potential stress approach. The different solution strategies lead to the formulation of two-dimensional Neumann and Dirichlet boundary-value problems. Several solution strategies are discussed in general, namely the series approach, the reformulation of the boundary-value problems for Laplace's equation as boundary integral equations, and the finite-element approach. In particular, the characteristics of the finite-element weak solutions (computational cost, convergence, and accuracy) are discussed for elastic cylinders whose cross sections are piecewise-smooth domains.

5.
A new Lagrangian relaxation (LR) approach is developed for job shop scheduling problems. In this approach, operation precedence constraints rather than machine capacity constraints are relaxed. The relaxed problem decomposes into single-machine or parallel-machine scheduling subproblems. These subproblems, which are NP-hard in general, are solved approximately by fast heuristic algorithms. The dual problem is solved by a recently developed "surrogate subgradient method" that allows approximate optimization of the subproblems. Since the algorithms for the subproblems do not depend on the time horizon of the scheduling problem and are very fast, the new LR approach is efficient, particularly for large problems with long time horizons. For such problems, numerical testing demonstrates that the machine decomposition-based LR approach requires much less memory and computation time than a part decomposition-based approach.
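The decompose-and-dualize mechanics can be illustrated on a toy 0-1 knapsack (all data hypothetical), where relaxing the single capacity constraint with a multiplier decomposes the problem item by item; a plain subgradient update stands in here for the paper's surrogate subgradient method:

```python
# Toy Lagrangian relaxation:  maximize sum(v*x)  s.t.  sum(w*x) <= C,  x binary.
# Relaxing capacity with multiplier lam >= 0 decomposes over items, and
#   L(lam) = lam*C + sum_i max(0, v_i - lam*w_i)
# upper-bounds the true optimum for every lam >= 0.
from itertools import product

v = [10, 7, 5, 3]
w = [4, 3, 2, 1]
C = 6

def dual_value(lam):
    return lam * C + sum(max(0.0, vi - lam * wi) for vi, wi in zip(v, w))

lam, best_bound = 1.0, float("inf")
for k in range(1, 201):
    x = [1 if vi - lam * wi > 0 else 0 for vi, wi in zip(v, w)]  # item subproblems
    best_bound = min(best_bound, dual_value(lam))
    subgrad = sum(wi * xi for wi, xi in zip(w, x)) - C  # capacity violation
    lam = max(0.0, lam + (1.0 / k) * subgrad)           # diminishing step size

# Brute-force optimum for comparison
opt = max(sum(vi * xi for vi, xi in zip(v, x))
          for x in product([0, 1], repeat=len(v))
          if sum(wi * xi for wi, xi in zip(w, x)) <= C)
```

The best dual value found is always a valid upper bound on the primal optimum; the gap that remains is the duality gap inherent to the relaxation.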

6.
This paper reviews and compares existing approaches to supply chain modeling and simulation, and applies the mesoscopic modeling and simulation approach using the simulation software MesoSim, an in-house development. A simplified real-world supply chain example is modeled with discrete-event, mesoscopic, and system-dynamics simulation. The objective of the study is to compare the process of model creation and the validity of the resulting models under each approach. The study examines the advantages of the mesoscopic approach for simulation: its major benefit is that modeling effort is balanced against the necessary level of detail, enabling quick and simple model creation and simulation.

7.
The symplectic geometry approach is introduced for accurate bending analysis of rectangular thin plates with two adjacent edges free and the other two clamped or simply supported. The basic equations for rectangular plates are first transformed into Hamiltonian canonical equations. Using the symplectic approach, the analytic solution of a rectangular thin plate with two adjacent edges simply supported and the other two slidingly supported is derived. Accurate bending solutions of the title problems are then obtained by the superposition method. The approach eliminates the need to pre-specify a deformation function and is hence more reasonable than conventional methods. Numerical results are presented to demonstrate the validity and efficiency of the approach by comparison with results reported in the literature.

8.
We use an actuarial approach to estimate the value of the reload option on a non-tradable risky asset under jump-diffusion dynamics and the Hull-White interest rate model. We verify the validity of the actuarial approach for the European vanilla option on non-tradable assets. The actuarial pricing formulas for the reload option are derived from the fair-premium principle, and the resulting prices are arbitrage-free. Numerical experiments are conducted to analyze the effects of different parameters on the valuation results, as well as their differences from those obtained by the no-arbitrage approach. Finally, we give the values of the reload option under different parameters.

9.
This paper considers the pricing of contingent claims using an approach developed and used in insurance pricing. The approach is of interest and significance because of the increased integration of insurance and financial markets and also because insurance-related risks are trading in financial markets as a result of securitization and new contracts on futures exchanges. This approach uses probability distortion functions as the dual of the utility functions used in financial theory. The pricing formula is the same as the Black-Scholes formula for contingent claims when the underlying asset price is log-normal. The paper compares the probability distortion function approach with that based on financial theory. The theory underlying the approaches is set out and limitations on the use of the insurance-based approach are illustrated. The probability distortion approach is extended to the pricing of contingent claims for more general assumptions than those used for Black-Scholes option pricing.
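A sketch of distortion-based pricing using the Wang transform g(u) = Φ(Φ⁻¹(u) − λ) applied to the payoff's decumulative (survival) function, with λ the market price of risk (sign convention and all parameters illustrative; the stdlib NormalDist stands in for a statistics library). For a lognormal asset the distorted expectation reproduces the Black-Scholes price:

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal

def wang_call(s0, k, r, sigma, t, mu, n=40000, y_max=400.0):
    """Call price via a Wang-style distortion of the payoff's survival function
    under the physical measure (drift mu); trapezoid rule over payoff levels y."""
    lam = (mu - r) * math.sqrt(t) / sigma  # market price of risk
    dy = y_max / n
    total = 0.0
    for i in range(n + 1):
        y = i * dy
        # P[(S_T - K)^+ > y] for lognormal S_T with drift mu
        d = (math.log(s0 / (k + y)) + (mu - 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        u = N.cdf(d)
        if u <= 0.0 or u >= 1.0:
            continue
        wt = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        total += wt * N.cdf(N.inv_cdf(u) - lam) * dy
    return math.exp(-r * t) * total

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes call for comparison."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * N.cdf(d1) - k * math.exp(-r * t) * N.cdf(d2)

p_wang = wang_call(100.0, 100.0, 0.05, 0.2, 1.0, mu=0.10)
p_bs = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

Here the distortion exactly shifts the physical-measure drift μ to the risk-free rate r, which is why the two prices agree up to discretization error.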

10.
To predict particulate two-phase flows, two approaches are possible. One treats the fluid phase as a continuum and the particulate second phase as single particles. This approach, which predicts the particle trajectories in the fluid phase as a result of forces acting on the particles, is called the Lagrangian approach. Treating the solid phase as a kind of continuum, and solving the appropriate continuum equations for the fluid and particle phases, is referred to as the Eulerian approach.

Both approaches are discussed, and their basic equations for the particle and fluid phases, as well as their numerical treatment, are presented. Particular attention is given to the interactions between the two phases and their mathematical formulation. The resulting computer codes are discussed.

The following cases are presented in detail: vertical pipe flow with various particle concentrations, and sudden expansion in a vertical pipe flow. The results show good agreement between the two approaches.

The Lagrangian approach has advantages for predicting particulate flows in which large particle accelerations occur. It can also handle particulate two-phase flows with polydisperse particle size distributions. The Eulerian approach has advantages in flow cases where high particle concentrations occur and where the high void fraction of the flow becomes a dominant flow-controlling parameter.
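A minimal Lagrangian tracking step, integrating one particle's Stokes-drag equation dv/dt = (u_f − v)/τ with explicit Euler (illustrative only; production codes add gravity, lift, turbulent dispersion, and two-way coupling):

```python
def track_particle(u_fluid, tau, dt, steps, v0=0.0):
    """Lagrangian tracking of a single particle with Stokes drag:
        dv/dt = (u_fluid - v) / tau,   dx/dt = v,
    integrated with explicit Euler. Returns final position and velocity."""
    v, x = v0, 0.0
    for _ in range(steps):
        v += dt * (u_fluid - v) / tau  # drag accelerates particle toward fluid
        x += dt * v
    return x, v

# Particle released at rest in a uniform 1 m/s stream, relaxation time 0.05 s
x, v = track_particle(u_fluid=1.0, tau=0.05, dt=0.001, steps=5000)
```

After many relaxation times the particle velocity has converged to the fluid velocity, and its position lags the fluid by roughly one relaxation time's worth of travel.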

11.
Manufacturing decision makers have to deal with a large number of reports and metrics for evaluating the performance of manufacturing systems. Since the metrics provide different and at times conflicting assessments, it is hard for decision makers to track and improve overall manufacturing system performance. This research presents an approach based on data envelopment analysis (DEA) for performance measurement and target setting of manufacturing systems. The approach is applied to two different manufacturing environments. The performance peer groups identified using DEA are utilized to set performance targets and to guide performance improvement efforts. The DEA scores are checked against past process modifications that led to identified performance changes. Limitations of the DEA-based approach are presented for measures that are influenced by factors outside the control of the manufacturing decision makers. The potential of a generic DEA-based performance measurement approach for manufacturing systems is outlined.
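In the single-input, single-output special case, CCR-DEA efficiency reduces to each unit's output/input ratio normalized by the best ratio in the peer group; a minimal sketch with hypothetical data (general DEA solves one linear program per unit, omitted here):

```python
def ccr_efficiency(inputs, outputs):
    """Single-input, single-output CCR-DEA: each unit's efficiency is its
    output/input ratio divided by the best ratio in the peer group, so the
    best performer(s) score 1.0 and define the efficient frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical manufacturing units
eff = ccr_efficiency(inputs=[10.0, 20.0, 30.0], outputs=[5.0, 15.0, 18.0])
```

Units scoring below 1.0 can read off a performance target directly: unit 3's 0.8 means it would need 25% more output (or proportionally less input) to join its peer on the frontier.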

12.
This paper presents a valence approach for assessing multiattribute utility functions. Unlike the decomposition approach, which uses independence axioms on whole attributes to obtain utility representations, the valence approach partitions the elements of each attribute into classes on the basis of equivalent conditional preference orders. These partitions generate multivalent utility independence axioms that lead to additive-multiplicative and quasi-additive representation theorems for multiattribute utility functions defined over product sets of equivalence classes. Preference interdependencies are thereby reflected in these classes, so attribute interactions are readily interpreted and the functional forms of the representations are kept simple.

13.
Adler and Monteiro (1992) developed a parametric analysis approach that is naturally related to the geometry of the linear program. Their approach is based on the availability of primal and dual optimal solutions satisfying strong complementarity. In this paper, we develop an alternative geometric approach for parametric analysis that does not require the strong complementarity condition. This parametric analysis is used to develop range and marginal analysis techniques suitable for interior point methods. Two approaches are developed, namely the LU factorization approach and the affine scaling approach. Presented at ORSA/TIMS, Nashville, TN, USA, May 1991. Supported by the National Science Foundation (NSF) under Grants No. DDM-9109404 and DMI-9496178. This work was done while the author was a faculty member of the Systems and Industrial Engineering Department at The University of Arizona. Supported in part by GTE Laboratories and the National Science Foundation (NSF) under Grant No. CCR-9019469.

14.
For many dynamical systems that are popular in applications, estimates are known for the decay of correlations in the case of Hölder continuous functions. In the present article, we suggest an approach that allows us to obtain correlation estimates for dynamical systems in the case of arbitrary functions. The approach is based on approximation: estimates are obtained using the known estimates for Hölder continuous functions. We apply the approach to transitive Anosov diffeomorphisms and derive the central limit theorem for the characteristic functions of certain sets whose boundaries have zero measure.

15.
We propose a new approach to portfolio optimization that separates asset return distributions into positive and negative half-spaces. The approach minimizes a newly defined Partitioned Value-at-Risk (PVaR) risk measure using half-space statistical information. On simulated data, the PVaR approach always generates better risk-return tradeoffs in the optimal portfolios than the traditional Markowitz mean-variance approach. On real financial data, our approach also outperforms the Markowitz approach in the risk-return tradeoff. Given that PVaR is also a robust risk measure, the new approach can be very useful for optimal portfolio allocation when asset return distributions are asymmetric.
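As a loose illustration of half-space statistics (a hypothetical simplification, not the paper's PVaR definition), one can compute an empirical quantile-based risk number from the negative half-space of returns only:

```python
def negative_halfspace_var(returns, alpha=0.95):
    """Empirical stand-in for a partitioned risk measure: keep only the
    negative half-space of returns and report the alpha-quantile loss
    magnitude within it. (Hypothetical simplification for illustration.)"""
    losses = sorted(-r for r in returns if r < 0)  # loss magnitudes, ascending
    if not losses:
        return 0.0  # no downside observations
    idx = min(len(losses) - 1, int(alpha * len(losses)))
    return losses[idx]

pvar = negative_halfspace_var([0.02, -0.01, 0.03, -0.05, 0.01, -0.02], alpha=0.95)
```

Because the positive half-space is discarded, upside volatility cannot inflate the risk number, which is the qualitative point of partitioning asymmetric return distributions.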

16.
Environmental impact assessment (EIA) problems are often characterised by a large number of identified environmental factors that are qualitative in nature and can only be assessed on the basis of human judgments, which inevitably involve various types of uncertainty such as ignorance and fuzziness. EIA problems therefore need to be modelled and analysed using methods that can handle uncertainty. The evidential reasoning (ER) approach provides such a modelling framework and analysis method. In this paper the ER approach is applied to EIA analysis for the first time. The environmental impact consequences are characterised by a set of assessment grades that are assumed to be collectively exhaustive and mutually exclusive. All assessment information, quantitative or qualitative, complete or incomplete, precise or imprecise, is modelled in the unified framework of a belief structure. The original ER approach with its recursive ER algorithm is introduced, and a new analytical ER algorithm is investigated which provides a means of using the ER approach in decision situations where an explicit ER aggregation function is needed, such as optimisation problems. The ER approach is used to aggregate multiple environmental factors, resulting in an aggregated distributed assessment for each alternative policy. A numerical example and a modified version of it are studied to illustrate the implementation of the ER approach in detail and to demonstrate its potential applications in EIA.
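The flavor of evidence combination underlying the ER algorithm can be sketched with Dempster's rule on mass functions whose focal elements are single assessment grades plus the whole frame (the actual ER algorithm additionally incorporates attribute weights and splits the unassigned mass; grades and masses below are hypothetical):

```python
def combine(m1, m2, frame):
    """Dempster's rule of combination for mass functions whose focal elements
    are singleton grades from `frame` plus the whole frame, labelled 'H'
    (unassigned mass). Mass on conflicting singleton pairs is discarded and
    the remainder renormalised."""
    keys = list(frame) + ["H"]
    raw = {key: 0.0 for key in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            p = m1.get(a, 0.0) * m2.get(b, 0.0)
            if a == b:
                raw[a] += p          # agreement on a grade (or on H)
            elif a == "H":
                raw[b] += p          # H is compatible with any grade
            elif b == "H":
                raw[a] += p
            else:
                conflict += p        # two different singleton grades
    scale = 1.0 / (1.0 - conflict)
    return {key: val * scale for key, val in raw.items()}

m = combine({"good": 0.6, "poor": 0.1, "H": 0.3},
            {"good": 0.4, "poor": 0.2, "H": 0.4},
            frame=["good", "poor"])
```

Two assessments that both lean toward "good" reinforce each other, while the residual mass on H records what remains unassigned after combination.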

17.
This article presents and compares two approaches of principal component (PC) analysis for two-dimensional functional data on a possibly irregular domain. The first approach applies the singular value decomposition of the data matrix obtained from a fine discretization of the two-dimensional functions. When the functions are only observed at discrete points that are possibly sparse and may differ from function to function, this approach incorporates an initial smoothing step prior to the singular value decomposition. The second approach employs a mixed effects model that specifies the PC functions as bivariate splines on triangulations and the PC scores as random effects. We apply the thin-plate penalty for regularizing the function estimation and develop an effective expectation–maximization algorithm for calculating the penalized likelihood estimates of the parameters. The mixed effects model-based approach integrates scatterplot smoothing and functional PC analysis in a unified framework and is shown in a simulation study to be more efficient than the two-step approach that separately performs smoothing and PC analysis. The proposed methods are applied to analyze the temperature variation in Texas using 100 years of temperature data recorded by Texas weather stations. Supplementary materials for this article are available online.
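The first approach's "discretize, then decompose" route can be miniaturized as follows, with power iteration standing in for the singular value decomposition (pure-Python sketch on a tiny hypothetical grid; real data would be smoothed first):

```python
def leading_pc(data):
    """Leading principal component of functions discretized on a common grid:
    center the data, form the (small) covariance matrix, and extract its
    dominant eigenvector by power iteration (an SVD stand-in)."""
    n_obs, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n_obs for j in range(p)]
    centered = [[row[j] - means[j] for j in range(p)] for row in data]
    cov = [[sum(c[i] * c[j] for c in centered) / (n_obs - 1)
            for j in range(p)] for i in range(p)]
    v = [1.0] * p
    for _ in range(200):  # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Four "functions" observed on a 3-point grid; all variation is a common shift,
# so the leading PC should be the constant direction (1, 1, 1)/sqrt(3).
data = [[1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [3.0, 3.1, 2.9], [4.0, 4.1, 3.9]]
pc1 = leading_pc(data)
```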

18.
The present work concerns Bayesian finite element (FE) model updating using modal measurements, based on maximizing the posterior probability rather than on a sampling-based approach. Such Bayesian updating frameworks usually employ the normal distribution for the updated parameters, although the normal distribution raises well-known statistical issues for non-negative parameters. These issues are addressed here by adopting the lognormal distribution for non-negative parameters. Detailed formulations are derived for model updating, uncertainty estimation, and probabilistic detection of changes or damage in structural parameters using a combined normal-lognormal probability model in this Bayesian framework. Normal and lognormal distributions are assigned to the eigensystem equation and the structural (mass and stiffness) parameters respectively, while the two distributions are considered jointly in the likelihood function. Important advantages in FE model updating (e.g. the use of incomplete measured modal data and no requirement for mode matching) are retained in the proposed combined normal-lognormal approach. To demonstrate its efficiency, a two-dimensional truss structure is considered with multiple damage cases. Satisfactory performance is observed in model updating and the subsequent probabilistic estimations, although performance weakens, as expected, with increasing damage severity. The proposed approach performs at a level comparable with the typical normal-distribution-based updating approach on the same damage cases, and demonstrates better computational efficiency (higher accuracy in less computation time) than two prominent Markov chain Monte Carlo (MCMC) techniques, namely the Metropolis-Hastings algorithm and Gibbs sampling.

19.
This paper reviews the application of the Bayesian approach to global and stochastic optimization of continuous multimodal functions. Advantages and disadvantages of the Bayesian approach (average-case analysis) are discussed in comparison with the more usual minimax approach (worst-case analysis). A new interactive version of software for global optimization is described, and practical multidimensional problems of global optimization are considered.

20.
Norm-minimizing methods for solving large sparse linear systems with symmetric indefinite coefficient matrices are considered. The Krylov subspace can be generated either by the Lanczos approach, as in the methods MINRES, GMRES and QMR, or by a conjugate-gradient approach. Here we propose an algorithm based on the latter. Relations among the search directions and the residuals, and the connection between the search directions and the Krylov subspace, are investigated. Numerical experiments are reported to verify the convergence properties.
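For reference, the plain conjugate-gradient recurrences that such approaches build on look as follows, shown here for a small symmetric positive definite system (the indefinite case requires the MINRES-type safeguards discussed above; pure-Python dense sketch):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate gradients for A x = b with A symmetric positive
    definite. A is a list of rows. Each search direction is A-conjugate to
    the previous ones, so the residual norm drops monotonically in the
    A-inner product."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A x with x = 0
    p = r[:]            # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # exact line search
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # new direction
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

On an n-by-n SPD system, exact arithmetic gives convergence in at most n iterations; the 2-by-2 example finishes in two.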


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号