Similar Documents (20 results)
1.
We consider a simplified Newton's cradle, which would be unobservable if the nonsmooth impacts between the spheres were ignored. Nevertheless, under the assumption that there is an infinite sequence of nonsmooth impacts, it is possible to design an observer that asymptotically estimates all the nonmeasurable state variables, including those that would be unobservable in the absence of impacts.

2.
This paper provides new models for portfolio selection in which the returns on securities are considered fuzzy numbers rather than random variables. The investor's problem is to find the portfolio that minimizes the risk of failing to achieve a return at least as high as that of a riskless asset. The corresponding optimal portfolio is derived using semi-infinite programming in a soft framework. The return on each asset and its membership function are described using historical data. The investment risk is approximated by mean intervals that evaluate the downside risk for a given fuzzy portfolio. The approach is illustrated with a numerical example.

3.
Verification, validation and testing (VVT) of large systems is an important but complex process. The decisions involved have to consider, on the one hand, the controllable variables associated with investments in appraisal and prevention activities and, on the other hand, the outcomes of these decisions, which are associated with risk impacts and system failures. Typically, quantitative models of such large systems use simulation to generate distributions of possible costs and risk outcomes. Here, by assuming independence of risk impacts, we decompose the decision process into separate decisions for each VVT activity and supersede the simulation technique with simple analytical models. We explore various optimization objectives for VVT strategies, such as minimum total expected cost and minimum uncertainty, as well as a generalized objective expressing Taguchi's expected loss function, and provide explicit solutions. A numerical example based on simplified data from a case study is used to demonstrate the proposed VVT optimization procedure.
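To make the per-activity decomposition concrete, here is a minimal sketch of the minimum-expected-total-cost rule under the independence assumption. The field names (cost, p_fail, impact) and the all-or-nothing treatment of risk are our own illustrative simplifications, not the paper's model:

```python
def optimal_vvt_strategy(activities):
    """Decide each VVT activity independently: perform it when its cost is below
    the expected risk impact it would prevent (hypothetical all-or-nothing risk)."""
    strategy, expected_total = {}, 0.0
    for a in activities:
        expected_risk = a["p_fail"] * a["impact"]   # expected cost of skipping
        perform = a["cost"] < expected_risk
        strategy[a["name"]] = perform
        expected_total += a["cost"] if perform else expected_risk
    return strategy, expected_total

example = [{"name": "unit test", "cost": 2.0, "p_fail": 0.3, "impact": 10.0},
           {"name": "design review", "cost": 5.0, "p_fail": 0.1, "impact": 20.0}]
print(optimal_vvt_strategy(example))  # perform unit test (2 < 3), skip review (5 > 2)
```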

4.
Stochastic modelling of tropical cyclone tracks (total citations: 5; self-citations: 0; citations by others: 5)
A stochastic model for the tracks of tropical cyclones that allows for the computerised generation of a large number of synthetic cyclone tracks is introduced. This provides a larger dataset than previously available for the assessment of risks in areas affected by tropical cyclones. To improve homogeneity, the historical tracks are first split into six classes. The points of cyclone genesis are modelled as a spatial Poisson point process, the intensity of which is estimated using a generalised version of a kernel estimator. For these points, initial values of direction, translation speed, and wind speed are drawn from histograms of the historical values of these variables observed in the neighbourhood of the respective points, thereby generating a first 6-h segment of a track. The subsequent segments are then generated by drawing changes in these variables from histograms of the historical data available near the cyclone's current location. A termination probability for the track is determined after each segment as a function of wind speed and location. In the present paper, the model is applied to historical cyclone data from the western North Pacific, but it is general enough to be transferred to other ocean basins with only minor adjustments. A version for the North Atlantic is currently under preparation.
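The segment-by-segment generation scheme can be sketched as a simple loop. Everything below is a hedged illustration: the callables stand in for the fitted Poisson genesis process, the location-dependent histograms, and the termination function, and all names and numbers are hypothetical, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_track(genesis_points, draw_initial, draw_changes, termination_prob,
                   max_steps=120):
    """One synthetic track at 6-hour resolution (generic sketch)."""
    lat, lon = genesis_points[rng.integers(len(genesis_points))]
    direction, speed, wind = draw_initial(lat, lon)
    track = [(lat, lon, wind)]
    for _ in range(max_steps):
        d_dir, d_speed, d_wind = draw_changes(lat, lon)   # sampled near current location
        direction, speed, wind = direction + d_dir, speed + d_speed, wind + d_wind
        lat += 6 * speed * np.cos(np.radians(direction)) / 111.0   # crude km-per-degree
        lon += 6 * speed * np.sin(np.radians(direction)) / (111.0 * np.cos(np.radians(lat)))
        track.append((lat, lon, wind))
        if rng.random() < termination_prob(wind, lat, lon):        # per-segment stop check
            break
    return track

# Dummy stand-ins so the sketch runs end to end:
track = generate_track(
    genesis_points=[(15.0, 135.0)],
    draw_initial=lambda lat, lon: (300.0, 20.0, 35.0),             # deg, km/h, m/s
    draw_changes=lambda lat, lon: tuple(rng.normal(0, [5.0, 1.0, 1.5])),
    termination_prob=lambda wind, lat, lon: 0.02 if wind > 17 else 0.5)
print(len(track))
```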

5.
This empirical study investigates the contribution of different types of predictors to purchasing behaviour at an online store. We use logit modelling to predict whether or not a purchase is made during the next visit to the website, using both forward and backward variable-selection techniques, as well as Furnival and Wilson's global score search algorithm, to find the best subset of predictors. We contribute to the literature by using variables from four different categories to predict online-purchasing behaviour: (1) general clickstream behaviour at the level of the visit, (2) more detailed clickstream information, (3) customer demographics, and (4) historical purchase behaviour. The results show that predictors from all four categories are retained in the final (best subset) solution, indicating that clickstream behaviour is important in determining the tendency to buy. We quantify the contribution in predictive power of variables that have not previously been used in online-purchasing studies. Detailed clickstream variables prove the most important in classifying customers according to their online purchase behaviour. Though our dataset is limited in size, we are able to highlight the advantage e-commerce retailers gain from being able to capture an elaborate list of customer information.
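As a rough illustration of the modelling step, the sketch below fits a logit model with forward stepwise selection on synthetic stand-in data using scikit-learn. Note that scikit-learn offers forward/backward selection but not Furnival and Wilson's leaps-and-bounds best-subset search, so this only approximates the paper's procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))    # stand-ins for clickstream/demographic predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)  # buy next visit?

logit = LogisticRegression(max_iter=1000)
forward = SequentialFeatureSelector(logit, n_features_to_select=5, direction="forward")
forward.fit(X, y)
print(np.flatnonzero(forward.get_support()))   # indices of the retained predictors
```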

6.
The majority of catalog allocation models using historical data ignore the endogeneity of past catalog decisions. We investigate two alternative approaches, which either impose a relationship between the number of catalogs allocated to a customer and customer-specific coefficients of the sales response function or use instrumental variables. Heterogeneity across customers is modeled by cluster effects following a nonparametric distribution derived from a Dirichlet process prior. Models are estimated by Markov chain Monte Carlo simulation methods and evaluated by cross-validation predictive densities. Models that consider endogeneity imply much lower effects of sending a higher number of catalogs. These models also lead to optimal allocations that differ strongly from those obtained for models that ignore endogeneity. Higher values of both posterior model probabilities and model average profits suggest allocating catalogs based on the instrumental-variables approach.
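The instrumental-variables idea can be shown in its simplest frequentist, homogeneous form. The paper's actual models are Bayesian, with Dirichlet-process heterogeneity estimated by MCMC; the sketch below is only the bare two-stage least squares core, with hypothetical variable roles:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, Z):
    """Bare-bones 2SLS: instrument the (endogenous) catalog count with Z.
    y: sales (n,); x_endog: catalogs sent (n,); Z: instruments incl. a constant (n, k)."""
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)   # stage 1: project on Z
    x_hat = Z @ gamma
    X2 = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)         # stage 2: sales on fitted
    return beta                                           # [intercept, catalog effect]

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                                # instrument, e.g. a cost shifter
u = rng.normal(size=n)                                # unobserved customer attractiveness
catalogs = 2.0 + z + u + rng.normal(size=n)           # endogenous: depends on u
sales = 1.0 + 0.5 * catalogs + 2.0 * u + rng.normal(size=n)
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(sales, catalogs, Z))    # recovers approx. [1.0, 0.5]
```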

7.
Can payoffs buy happiness? People's perception of their current payoffs depends on the social context and the historical context. This paper develops a utility model that captures both the effect of interpersonal comparisons and that of self-adaptation in evaluating time streams of payoffs. Moment utility represents subjective happiness over payoffs, and hinges on three state variables: retaliation, aspiration, and ambition. Retaliation incorporates insights from fairness thinking and formulates people's psychological reaction to interpersonal comparisons of relative payoffs. Aspiration displays habit formation, formulating people's self-adaptation to their own history of payoffs. Ambition captures the net influence of past situations on the present. Over a time window, our model delivers a new measure of individual well-being under both social and historical contexts.

8.
We analyze the semilocal convergence of Steffensen's method for solving systems of nonlinear equations, using a novel technique based on recurrence relations. This technique allows the convergence of Steffensen's method to be analyzed for equations whose underlying function may be either differentiable or nondifferentiable. Moreover, it also allows the domain of starting points for Steffensen's method to be enlarged, using predictions obtained from the simplified Steffensen method.
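For reference, a minimal sketch of Steffensen's method for systems, using the standard first-order divided-difference operator in place of the Jacobian (a generic textbook formulation, not the paper's recurrence-relation analysis):

```python
import numpy as np

def divided_difference(F, u, v):
    """First-order divided-difference operator [u, v; F], the derivative-free
    Jacobian substitute that Steffensen-type methods rely on."""
    n = u.size
    A = np.empty((n, n))
    for j in range(n):
        w_hi = np.concatenate([u[:j + 1], v[j + 1:]])
        w_lo = np.concatenate([u[:j], v[j:]])
        A[:, j] = (F(w_hi) - F(w_lo)) / (u[j] - v[j])
    return A

def steffensen(F, x, tol=1e-12, max_iter=50):
    """x_{k+1} = x_k - [x_k + F(x_k), x_k; F]^{-1} F(x_k): no derivatives needed."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        A = divided_difference(F, x + Fx, x)
        x = x - np.linalg.solve(A, Fx)
    return x

F = lambda x: np.array([x[0] ** 2 - 2.0, x[1] ** 2 + x[0] - 4.0])
print(steffensen(F, np.array([1.0, 1.5])))  # converges to (sqrt(2), sqrt(4 - sqrt(2)))
```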

9.
Using Khrapchenko’s method, we obtain the exact lower bound of 40 on the complexity, in the class of π-schemes, of a linear Boolean function depending substantially on 6 variables. We also give a simplified proof of several lower bounds on the complexity of linear Boolean functions that were previously obtained by the same method.
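For context, Khrapchenko's bound in its standard form (a classical result, stated here independently of the paper):

```latex
% Khrapchenko's bound: for A \subseteq f^{-1}(0), B \subseteq f^{-1}(1), and
% N = \{(a,b) \in A \times B : a, b \text{ differ in exactly one bit}\},
% the formula (\pi-scheme) complexity of f satisfies
\[
  L(f) \;\ge\; \frac{|N|^2}{|A|\,|B|}.
\]
% For the parity of n variables, taking A and B to be all even- and odd-weight
% inputs gives |A| = |B| = 2^{n-1} and |N| = n\,2^{n-1}, hence L(f) \ge n^2 = 36
% for n = 6; the exact bound of 40 cited above is therefore a strict sharpening.
```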

10.
Suppose that the failure times of the units placed on a life-testing experiment are independent but non-identically distributed random variables. Under a progressive type II censoring scheme, distributional properties of the resulting random variables are presented and some inferences are made. Assuming that the random variables come from a proportional hazard rate model, the formulas are simplified and the amount of Fisher information about the common parameters of this family is calculated. The results are also extended to a fixed-covariates model. The performance of the proposed procedure is investigated via a real data set. Some numerical computations are also presented to study the effect of the proportionality rates from the viewpoint of the Fisher information criterion. Finally, some concluding remarks are stated.
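A progressively type II censored sample is easy to generate by simulation, which also clarifies the scheme itself. The sketch below uses i.i.d. exponential lifetimes purely for brevity; the paper's setting has independent but non-identically distributed units:

```python
import numpy as np

rng = np.random.default_rng(0)

def progressive_type2_censor(lifetimes, removals):
    """Progressive type II censoring: after the i-th observed failure, withdraw
    removals[i] surviving units at random. len(lifetimes) must equal
    len(removals) + sum(removals) so the test ends exactly at the last failure."""
    alive = sorted(lifetimes)
    observed = []
    for r in removals:
        observed.append(alive.pop(0))            # next failure time
        for _ in range(r):                       # random withdrawals
            alive.pop(rng.integers(len(alive)))
    return observed

# n = 10 units, m = 4 observed failures, censoring scheme R = (2, 1, 0, 3)
lifetimes = rng.exponential(scale=1.0, size=10)  # i.i.d. here for simplicity only
print(progressive_type2_censor(lifetimes, [2, 1, 0, 3]))
```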

11.
Rough set data analysis (RSDA) has been studied primarily as a means of obtaining knowledge rules. Taking one group of data as an example, a case was established according to the simplified attributes and extracted rules; using case-based reasoning (CBR), another case was established based on the parameters influencing roving quality. In addition, RSDA was combined with CBR to build the case library. This allowed unimportant parameters to be removed and the case library to be simplified, in turn allowing easier, more efficient searching of attributes in the simplified case library. The union of rule-based reasoning (RBR) and CBR means that complicated calculations around similar cases, and the associated error, can be avoided. Using RSDA one is able to reveal characteristic attributes and deduce the knowledge rules associated with the model/problem in order to build a case library directly from historical data. The above procedure thus allows machine settings to be defined and makes the prediction and control of end-product quality easier.
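As a hedged illustration of the retrieval step only, here is a minimal nearest-neighbour CBR lookup over the attributes kept after rough-set reduction. The attribute names (draft_ratio, twist) and the weights are invented for the example and are not from the study:

```python
def retrieve_case(query, case_library, weights):
    """Nearest-neighbour retrieval over the reduced attribute set;
    `weights` reflects attribute importance (all names hypothetical)."""
    def similarity(case):
        return -sum(w * abs(query[attr] - case[attr]) for attr, w in weights.items())
    return max(case_library, key=similarity)

library = [{"draft_ratio": 8.0, "twist": 4.2, "setting": "A"},
           {"draft_ratio": 9.5, "twist": 3.8, "setting": "B"}]
print(retrieve_case({"draft_ratio": 9.2, "twist": 3.9},
                    library, {"draft_ratio": 1.0, "twist": 2.0}))   # returns case B
```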

12.
This paper investigates some common interest rate models for scenario generation in financial applications of stochastic optimization. We discuss conditions for the underlying distributions of state variables which preserve convexity of value functions in a multistage stochastic program. One- and multi-factor term structure models are estimated based on historical data for the Swiss Franc. An analysis of the dynamic behavior of interest rates generated with these models reveals several deficiencies which have an impact on the performance of investment policies derived from the stochastic program. While barycentric approximation is used here for the generation of scenario trees, these insights may be generalized to other discretization techniques as well.
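As a generic illustration of one-factor short-rate dynamics (plain path simulation rather than the barycentric scenario trees used in the paper, and with placeholder parameters rather than the Swiss Franc estimates), a simple Euler discretization of a Vasicek model:

```python
import numpy as np

def vasicek_paths(r0, kappa, theta, sigma, dt, n_steps, n_paths, seed=0):
    """Euler simulation of dr = kappa (theta - r) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    r = np.full(n_paths, r0)
    out = [r.copy()]
    for _ in range(n_steps):
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        out.append(r.copy())
    return np.array(out)                         # shape (n_steps + 1, n_paths)

paths = vasicek_paths(r0=0.03, kappa=0.5, theta=0.04, sigma=0.01,
                      dt=1 / 12, n_steps=60, n_paths=1000)
print(paths.mean(axis=1)[-1])                    # mean simulated rate after 5 years
```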

13.
We use proprietary data collected by SVB Analytics, an affiliate of Silicon Valley Bank, to forecast the retained earnings of privately held companies. Combining methods of principal component analysis (PCA) and L1/quantile regression, we build multivariate linear models that feature excellent in-sample fit and strong out-of-sample predictive accuracy. The combined PCA and L1 technique effectively deals with multicollinearity and non-normality of the data, and also performs favorably when compared against a variety of other models. Additionally, we propose a variable ranking procedure that explains which variables from the current quarter are most predictive of the next quarter's retained earnings. We fit models to the top five variables identified by the ranking procedure and thereby discover interpretable models with excellent out-of-sample performance.
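A minimal sketch of the combined technique, assuming scikit-learn's L1-penalized QuantileRegressor as the median-regression step; the synthetic data, component count, and penalty below are illustrative, not the SVB Analytics setup:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import QuantileRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 15)) @ rng.normal(size=(15, 15))    # deliberately collinear
y = X[:, 0] - 2.0 * X[:, 3] + rng.standard_t(df=3, size=200)  # heavy-tailed noise

# PCA absorbs the multicollinearity; L1-penalized median regression is robust
# to the non-normal errors.
model = make_pipeline(PCA(n_components=5),
                      QuantileRegressor(quantile=0.5, alpha=0.1))
model.fit(X, y)
print(model.predict(X[:3]))
```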

14.
This paper develops an interactive three-stage systems approach for the calibration of the structural parameters and missing data within a deterministic, dynamic, non-linear simultaneous equations model under arbitrary configurations of incomplete data. In Stage One, we minimize a quadratic loss function in the differences between the actual endogenous variables and the predicted solution values, relative to any feasible choice of the structural parameters. Missing exogenous variables and initial endogenous variables are treated as additional parameters to be calibrated, whereas missing current endogenous variables are treated by the missing-data updating condition, in which the current solution values iteratively and sequentially replace those absent. Stage One may or may not lead to unique calibrations of the structural parameters, a fact that can be monitored a posteriori using singular value decompositions of the relevant Jacobian matrix. If not, there is an equivalence class of parameter values, all of which result in the same loss function value. If Stage Two is necessary, we attempt to exploit the non-linearity and simultaneity of the structural system to extract further information about the parameters from the same database, by minimizing the distance between the restricted and unrestricted reduced forms while constraining the parameters to lie within the Stage One equivalence class. This requires the use of higher-order numerical derivatives, and probably restricts its use, in all but the simplest of cases, to the next generation of supercomputers with massive numbers of parallel processors and much larger word sizes. In Stage Three, various methods by which the original structural model can be simplified, given a non-unique Stage One calibration, are entertained.
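A hedged sketch of Stage One as a generic nonlinear least-squares problem, including the a posteriori uniqueness check via singular values of the Jacobian; `residuals` is a user-supplied function that solves the model and returns actual-minus-predicted endogenous variables (the model itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

def stage_one(residuals, theta0):
    """Minimize the quadratic loss over structural parameters; missing exogenous
    values would simply be stacked into theta as extra components."""
    sol = least_squares(residuals, theta0)
    singular_values = np.linalg.svd(sol.jac, compute_uv=False)
    # Near-zero singular values flag a non-unique calibration, in which case
    # Stage Two would be needed.
    return sol.x, singular_values

# Toy usage: calibrate (a, b) in y = a * exp(b * t) to exact observations.
t = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * np.exp(0.5 * t)
theta, sv = stage_one(lambda th: th[0] * np.exp(th[1] * t) - y_obs, np.array([1.0, 0.0]))
print(theta, sv)   # approx [2.0, 0.5]; well-separated singular values imply uniqueness
```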

15.
We study the problem of expansion of a wedge of non-ideal gas into vacuum in a two-dimensional bounded domain. The non-ideal gas is characterized by a van der Waals type equation of state. The problem is modeled by the standard Euler equations of compressible flow, which are simplified by a transformation to similarity variables and then by a hodograph transformation, arriving at a second-order quasilinear partial differential equation in phase space; using Riemann invariants, this can be expressed as a non-homogeneous linearly degenerate system provided that the flow is supersonic. For the solution of the governing system, we study the interaction of two-dimensional planar rarefaction waves, which is a two-dimensional Riemann problem with piecewise constant data in the self-similar plane. The real gas effects, which significantly influence the flow regions and boundaries and which do not show up in the ideal gas model, are elucidated; this aspect of the problem has not been considered until now.
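For reference, the classical van der Waals equation of state (the paper uses a van der Waals *type* law, which may differ in detail):

```latex
% Van der Waals equation of state, with a and b the gas-specific attraction
% and covolume constants:
\[
  \left(p + \frac{a}{v^{2}}\right)(v - b) = RT,
\]
% which reduces to the ideal-gas law pv = RT when a = b = 0; the real-gas
% effects discussed above enter precisely through nonzero a and b.
```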

16.
Variable correlation is important for many operations research models. Many inventory, revenue management, and queuing models presume uncorrelated demand between products, market segments, or time periods. The specific model applied, or the resulting policies of a model, can differ drastically depending on variable correlation. Missing data are a common problem in the real-world application of operations research models. This work sits at the junction of the two topics of correlation and missing data. We propose a test of independence between two variables when data are missing. The typical method for determining correlation with missing data ignores all data pairs in which one point is missing. The test presented here incorporates all data. The test can be applied when both variables are continuous, when both are discrete, or when one variable is discrete and the other is continuous. The test makes no assumptions about the distribution of the two variables, and thus it can be used to extend the application of non-parametric rank tests, such as Spearman's rank correlation, to the case where data are missing. An example is shown where failure to incorporate the incomplete data yields incorrect policies.
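For contrast, the conventional pairwise-deletion baseline that the proposed test improves on is a one-liner with SciPy; the paper's own test, which exploits the incomplete pairs as well, requires its specific construction and is not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = 0.5 * x + rng.normal(size=300)
x[rng.random(300) < 0.2] = np.nan      # roughly 20% of each variable missing at random
y[rng.random(300) < 0.2] = np.nan

# Pairwise deletion: every pair with a missing point is simply dropped.
rho, p = stats.spearmanr(x, y, nan_policy="omit")
print(rho, p)
```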

17.
Ishihara's problem of decidable variables asks which class of decidable propositional variables is sufficient to warrant classical theorems in intuitionistic logic. We present several refinements to the class proposed by Ishii for this problem, which also allow the class to cover Glivenko's logic. We also treat the extension of the problem to minimal logic, suggesting a couple of new classes.

18.
This paper analyzes an intensity-based approach to equity modeling. We use the Cox–Ingersoll–Ross (CIR) process to describe the intensity of the firm's default process. The intensity is purposely linked to the assets of the firm and consequently is also used to explain the equity. We examine two different approaches to linking assets and intensity and derive closed-form expressions for the firm's equity under both models. We use the Kalman filter to estimate the parameters of the unobservable intensity process. We demonstrate our approach using historical equity time series data from Merrill Lynch.
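A minimal scalar Kalman filter, shown only to illustrate the predict/update recursion used in the estimation step; the paper's actual state space is built from the discretized CIR intensity observed through equity values, so the model below is a generic linear stand-in with placeholder parameters:

```python
import numpy as np

def kalman_filter(y, a, c, q, r, x0=0.0, p0=1.0):
    """Linear Kalman filter for x_t = a x_{t-1} + w (var q), y_t = c x_t + v (var r)."""
    x, p = x0, p0
    filtered = []
    for obs in y:
        x, p = a * x, a * a * p + q                       # predict
        k = p * c / (c * c * p + r)                       # Kalman gain
        x, p = x + k * (obs - c * x), (1 - k * c) * p     # update
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(3)
x, ys = 0.0, []
for _ in range(200):
    x = 0.95 * x + rng.normal(0.0, 0.1)                   # latent AR(1) "intensity"
    ys.append(2.0 * x + rng.normal(0.0, 0.5))             # noisy observation
print(kalman_filter(ys, a=0.95, c=2.0, q=0.01, r=0.25)[-5:])
```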

19.
We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when α = M/N is close to the experimental threshold αc separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when α is close to (but smaller than) αc. We introduce a new type of message-passing algorithm which allows a satisfying assignment of the variables to be found efficiently in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: it passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic.
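The survey-propagation message updates are too involved to reproduce here, but the closing step, solving the decimated formula with a conventional heuristic, can be illustrated with a standard WalkSAT-style local search (our choice of heuristic for the sketch, not necessarily the authors'):

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5, seed=0):
    """WalkSAT local search. clauses: list of lists of nonzero ints, where
    literal v > 0 means variable v is true and v < 0 means it is false."""
    rnd = random.Random(seed)
    assign = [rnd.choice([True, False]) for _ in range(n_vars + 1)]  # 1-indexed

    def satisfied(lit):
        return assign[abs(lit)] == (lit > 0)

    def n_unsat_after_flip(v):
        assign[v] = not assign[v]
        n = sum(not any(satisfied(l) for l in c) for c in clauses)
        assign[v] = not assign[v]
        return n

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(satisfied(l) for l in c)]
        if not unsat:
            return assign[1:]                       # satisfying assignment found
        clause = rnd.choice(unsat)
        if rnd.random() < p:                        # random-walk move
            var = abs(rnd.choice(clause))
        else:                                       # greedy move
            var = min((abs(l) for l in clause), key=n_unsat_after_flip)
        assign[var] = not assign[var]
    return None

print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```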

20.
Population projections by various variables are often required by decision makers for a variety of planning problems in both the private and public sectors. Mathematical population models have not yet been developed that are suitable for making population projections stratified by large numbers of variables. However, a form of stochastic simulation called microanalytic simulation provides a feasible approach for obtaining population projections of this nature. In this paper the structure of a microanalytic population simulation model and its advantages over the transition-matrix method are discussed. A general method is then presented for conditioning the course of simulations over historical periods using available aggregate vital statistics data. In this way, deviations of the simulated population from the actual population can be controlled to a certain degree, allowing us to recreate otherwise unavailable historical time series and cross-sectional population samples with reasonable precision. Numerical results are presented for the population of Alberta, a province of Canada, over the period 1961-1971.
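A hedged sketch of one simulation year at the individual level; the rates and the fertility window are invented for illustration, not the Alberta data, and conditioning on vital statistics would amount to rescaling these rates so simulated aggregate births and deaths match the published totals:

```python
import numpy as np

def simulate_year(ages, sexes, death_rates, birth_rate, rng):
    """Advance a micro-population one year (illustrative rates only).
    ages: int array; sexes: 0 = female, 1 = male; death_rates: per-age probabilities."""
    idx = np.minimum(ages, len(death_rates) - 1)
    alive = rng.random(ages.size) >= death_rates[idx]
    ages, sexes = ages[alive] + 1, sexes[alive]
    fertile = (sexes == 0) & (ages >= 15) & (ages <= 49)   # hypothetical window
    n_births = rng.binomial(fertile.sum(), birth_rate)
    ages = np.concatenate([ages, np.zeros(n_births, dtype=ages.dtype)])
    sexes = np.concatenate([sexes, rng.integers(0, 2, size=n_births)])
    return ages, sexes

rng = np.random.default_rng(4)
ages = rng.integers(0, 90, size=10_000)
sexes = rng.integers(0, 2, size=10_000)
death = np.minimum(0.001 * 1.07 ** np.arange(111), 1.0)    # toy Gompertz-like curve
for year in range(1961, 1972):
    ages, sexes = simulate_year(ages, sexes, death, birth_rate=0.09, rng=rng)
print(ages.size)
```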

