Similar Documents (20 results)
1.
The original Operational Researchers conducted scientific analyses of operational data to help solve operational problems. Following the success of this essentially empirical process, a general belief has grown that all forms of decision problems can be assisted by similar scientific analysis using ‘a priori’ models, even when the issues are beyond experience and the models can never be validated. In this paper the author describes the evolution of Operational Research in the RAF at Command level over half a century, reflects on the value of techniques and concludes that the adaptive OR group itself has proved to be the most enduring and useful “tool” of all.

2.
Test problems for the nonlinear Boltzmann and Smoluchowski kinetic equations are used to analyze the efficiency of various versions of weighted importance modeling as applied to the evolution of multiparticle ensembles. For coagulation problems, a considerable saving in computational cost is achieved via the approximate importance modeling of the “free path” of the ensemble combined with the importance modeling of the index of a pair of interacting particles. A weighted modification of the modeling of the initial velocity distribution was found to be the most efficient for model solutions to the Boltzmann equation. The technique developed can be useful as applied to real-life coagulation and relaxation problems for which the model problems considered give approximate solutions.

3.
For many complex business and industry problems, high‐dimensional data collection and modeling have been conducted. It has been shown that interactions may have important implications beyond the main effects. The number of unknown parameters in an interaction analysis can be larger or much larger than the sample size. As such, results generated from analyzing a single data set are often unsatisfactory. Integrative analysis, which jointly analyzes the raw data from multiple independent studies, has been conducted in a series of recent studies and shown to outperform single–data set analysis, meta‐analysis, and other multi–data set analyses. In this study, our goal is to conduct integrative analysis in interaction analysis. For regularized estimation and selection of important interactions (and main effects), we apply a threshold gradient directed regularization approach. Advancing from the existing studies, the threshold gradient directed regularization approach is modified to respect the “main effects, interactions” hierarchy. The proposed approach has an intuitive formulation and is computationally simple and broadly applicable. Simulations and the analyses of financial early warning system data and news‐APP (application) recommendation behavior data demonstrate its satisfactory practical performance.
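For intuition, the base procedure underlying the estimator can be sketched as follows. This is the generic threshold gradient directed regularization update for least squares, without the paper's "main effects, interactions" hierarchy modification; the names `tgdr`, `tau`, `nu`, and `n_steps` are illustrative, not from the paper:

```python
import numpy as np

def tgdr(X, y, tau=0.8, nu=0.01, n_steps=500):
    """Minimal sketch of threshold gradient directed regularization
    for least squares; tau (threshold) and nu (step size) are assumed
    tuning-parameter names."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_steps):
        g = X.T @ (y - X @ beta) / n               # negative gradient of 0.5 * MSE
        keep = np.abs(g) >= tau * np.abs(g).max()  # threshold: keep only the strongest directions
        beta += nu * g * keep                      # small step along the thresholded gradient
    return beta
```

Thresholding at a fraction of the largest gradient component is what produces sparse selection; the hierarchy-respecting modification in the paper would additionally constrain which interaction coordinates may enter.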

4.
A stochastic branch-and-bound technique for the solution of stochastic single-machine-tardiness problems with job weights is presented. The technique relies on partitioning the solution space and estimating lower and upper bounds by sampling. For the lower bound estimation, two different types of sampling (“within” and “without” the minimization) are combined. Convergence to the optimal solution (with probability one) can be demonstrated. The approach is generalizable to other discrete stochastic optimization problems. In computational experiments with the single-machine-tardiness problem, the technique worked well for problem instances with a relatively small number of jobs; due to the enormous complexity of the problem, only approximate solutions can be expected for a larger number of jobs. Furthermore, a general precedence rule for the single-machine scheduling of jobs with uncertain processing times has been derived, essentially saying that “safe” jobs are to be scheduled before “unsafe” jobs.
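As a rough illustration of bound estimation by sampling, the sketch below evaluates one fixed schedule over sampled processing-time scenarios; the mean objective of any fixed schedule upper-bounds the stochastic optimum. All names are hypothetical, and the "within the minimization" lower-bound estimator is omitted:

```python
def weighted_tardiness(order, weights, due, times):
    """Total weighted tardiness of one schedule under one realization
    of the processing times."""
    t = total = 0.0
    for j in order:
        t += times[j]
        total += weights[j] * max(0.0, t - due[j])
    return total

def estimate_upper_bound(order, weights, due, sample_times, n_samples=200):
    """Sampling 'without' the minimization: average the objective of a
    fixed schedule over sampled scenarios. Since the schedule is fixed,
    the estimated expectation upper-bounds the stochastic optimum.
    sample_times is an assumed callback returning one sampled vector
    of processing times per call."""
    return sum(weighted_tardiness(order, weights, due, sample_times())
               for _ in range(n_samples)) / n_samples
```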

5.
This paper describes how the analytic hierarchy process was applied to a decision concerning the selection of a computer operating system. Various practical issues and problems which arose whilst using the process are discussed. The method was found to be a useful aid for identifying the criteria upon which a decision depends. It also revealed useful data about the concerns and preferences of decision-makers. Some problems were, however, encountered when assessing the importance of an intangible factor relative to a tangible factor. Also, it was difficult to interpret the options' final scores because the method does not provide an indication of statistical significance.

6.
In their paper “A New Perspective on Constrained Motion,” F. E. Udwadia and R. E. Kalaba propose a new form of matrix equations of motion for nonholonomic systems subject to linear nonholonomic second-order constraints. These equations contain all of the generalized coordinates of the mechanical system in question and, at the same time, they do not involve the forces of constraint. The equations under study have been shown to follow naturally from the generalized Lagrange and Maggi equations; they can also be obtained using the contravariant form of the motion equations of a mechanical system subjected to nonholonomic linear constraints of second order. It has been noted that a similar method of eliminating the forces of constraint from differential equations is usually useful for practical purposes in the study of motion of mechanical systems subjected to holonomic or classical nonholonomic constraints of first order. As a result, one obtains motion equations that involve only generalized coordinates of a mechanical system, which corresponds to the equations in the Udwadia–Kalaba form.

7.
We construct and analyze an algorithm for the numerical computation of Burgers' equation for preceding times, given an a priori bound for the solution and an approximation to the terminal data. The method is based on the “backward beam equation” coupled with an iterative procedure for the solution of the nonlinear problem via a sequence of linear problems. We also present the results of several numerical experiments. It turns out that the procedure converges “asymptotically,” i.e., in the same manner in which an asymptotic expansion converges. This phenomenon seems related to the “destruction of information,” at t = 0, which is typical in backwards dissipative equations. We derive a priori stability estimates for the analytic backwards problem, and we observe that in many numerical experiments, the distance backwards in time where significant accuracy can be attained is much larger than would be expected on the basis of such estimates. The method is useful for small solutions. Problems where steep gradients occur require considerably more precision in measurement. The algorithm is applicable to other semilinear problems.

8.
Jinfa Cai & Bikai Nie, ZDM, 2007, 39(5–6): 459–473
This paper is an attempt to paint a picture of problem solving in Chinese mathematics education, where problem solving has been viewed both as an instructional goal and as an instructional approach. In discussing problem-solving research from four perspectives, it is found that the research in China has been much more content- and experience-based than cognitively and empirically based. We also describe several problem-solving activities in the Chinese classroom, including “one problem multiple solutions,” “multiple problems one solution,” and “one problem multiple changes.” Unfortunately, there are no empirical investigations that document the actual effectiveness of those problem-solving activities, or the reasons for that effectiveness. Nevertheless, these problem-solving activities should be useful references for helping students make sense of mathematics.

9.
Aligning simulation models: A case study and results
This paper develops the concepts and methods of a process we will call “alignment of computational models” or “docking” for short. Alignment is needed to determine whether two models can produce the same results, which in turn is the basis for critical experiments and for tests of whether one model can subsume another. We illustrate our concepts and methods using as a target a model of cultural transmission built by Axelrod. For comparison we use the Sugarscape model developed by Epstein and Axtell. The two models differ in many ways and, to date, have been employed with quite different aims. The Axelrod model has been used principally for intensive experimentation with parameter variation, and includes only one mechanism. In contrast, the Sugarscape model has been used primarily to generate rich “artificial histories”, scenarios that display stylized facts of interest, such as cultural differentiation driven by many different mechanisms including resource availability, migration, trade, and combat. The Sugarscape model was modified so as to reproduce the results of the Axelrod cultural model. Among the questions we address are: what does it mean for two models to be equivalent, how can different standards of equivalence be statistically evaluated, and how do subtle differences in model design affect the results? After attaining a “docking” of the two models, the richer set of mechanisms of the Sugarscape model is used to provide two experiments in sensitivity analysis for the cultural rule of Axelrod's model. Our generally positive experience in this enterprise has suggested that it could be beneficial if alignment and equivalence testing were more widely practiced among computational modelers.
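As a minimal sketch of one possible standard of equivalence, replicate outputs of the two models can be compared with a two-sample distributional test. The data below are made up for illustration (a hypothetical summary statistic such as the number of stable cultural regions per run); the paper discusses how such standards of equivalence can be statistically evaluated:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical replicate outputs from the two models (illustrative numbers only).
axelrod_runs    = np.array([3, 4, 3, 5, 4, 3, 4, 4, 5, 3])
sugarscape_runs = np.array([4, 3, 4, 4, 5, 3, 4, 5, 4, 4])

# Distributional equivalence check: failing to reject the null means
# no difference between the models' output distributions was detected.
stat, p = ks_2samp(axelrod_runs, sugarscape_runs)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
```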

10.
One of the major problems for O.R. lies in providing decision-making assistance in complex conflicts. Too often in the past the approach has been concentrated on providing technical solutions to well-defined hypothetical problems, rather than on attempting to tackle the decision-makers' real problems of trying to make sense of the complex situations in which they find themselves. The paper is intended to contribute towards practical understanding by outlining a theory which explains the ways in which decision-makers' views of the environment are affected by conflicts, particularly under conditions of crisis. The theory attempts to integrate a series of pragmatic hypotheses derived from International Relations by extending concepts originating in cognitive psychology. The theory's crucial concept is that of “resources”, treated by the authors as the number of units of information processed over a given time period. By considering the impact of resources on the conscious analysis of problems, a set of postulates is arrived at which are applicable to a wide variety of decision-making situations. In particular, the principle of “inappropriate resource saving” is proposed, which suggests that resource considerations make unsuitable, oversimplistic approaches likely to dominate decision-making in precisely the situations where the need for a complex, sophisticated approach is greatest. Finally, the implications of the theory for decision-making in general, and for understanding other parties in conflicts in particular, are discussed. The importance is stressed of forward contingency planning (“putting time in the bank”), conscious resource management, and the forming of multiple models of complex situations.

11.
In previous papers, the consequences of the “presence of fuzziness” in the experimental information on which statistical inferences are based were discussed. Thus, the intuitive assertion «fuzziness entails a loss of information» was formalized, by comparing the information in the “exact case” with that in the “fuzzy case”. This comparison was carried out through different criteria to compare experiments (in particular, that based on the “pattern” one, Blackwell's sufficiency criterion). Our purpose now is slightly different, in the sense that we try to compare two “fuzzy cases”. More precisely, the question we are interested in is the following: how will different “degrees of fuzziness” in the experimental information affect the sufficiency? In this paper, a study of this question is carried out by constructing an alternative criterion (equivalent to sufficiency under comparability conditions), but whose interpretation is more intuitive in the fuzzy case. The study is first developed for Bernoulli experiments, and the coherence with the axiomatic requirements for measures of fuzziness is also analyzed in such a situation. Then it is generalized to other random experiments and a simple example is examined.

12.

Projection pursuit describes a procedure for searching high-dimensional data for “interesting” low-dimensional projections via the optimization of a criterion function called the projection pursuit index. By empirically examining the optimization process for several projection pursuit indexes, we observed differences in the types of structure that maximized each index. We were especially curious about differences between two indexes based on expansions in terms of orthogonal polynomials: the Legendre index and the Hermite index. Being fast to compute, these indexes are ideally suited for dynamic graphics implementations.

Both the Legendre and Hermite indexes are weighted L2 distances between the density of the projected data and a standard normal density. A general form for this type of index is introduced that encompasses both indexes. The form clarifies the effects of the weight function on the index's sensitivity to differences from normality, highlighting some conceptual problems with the Legendre and Hermite indexes. A new index, called the Natural Hermite index, which alleviates some of these problems, is introduced.
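A minimal sketch of a Legendre-type index of this kind, assuming a standardized one-dimensional projection z: the projected values are mapped to [-1, 1] via the normal CDF, and the index accumulates squared Legendre expansion coefficients, so a value near zero indicates normality. The function name and degree cutoff are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import norm

def legendre_index(z, degree=6):
    """Sketch of a Legendre-type projection pursuit index for one
    standardized 1-D projection z; larger values flag departures
    from normality."""
    r = 2.0 * norm.cdf(z) - 1.0                 # map projected data to [-1, 1]
    index = 0.0
    for j in range(1, degree + 1):
        c = np.zeros(j + 1)
        c[j] = 1.0                              # coefficient vector of the j-th Legendre polynomial
        e_hat = legendre.legval(r, c).mean()    # sample estimate of E[P_j(R)]
        index += (2 * j + 1) / 2.0 * e_hat**2   # weighted squared expansion coefficient
    return index
```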

A polynomial expansion of the data density reduces the form of the index to a sum of squares of the coefficients used in the expansion. This drew our attention to examining these coefficients as indexes in their own right. We found that the first two coefficients, and the lowest-order indexes produced by them, are the most useful ones for practical data exploration because they respond to structure that can be analytically identified, and because they have “long-sighted” vision that enables them to “see” large structure from a distance. Complementing this low-order behavior, the higher-order indexes are “short-sighted.” They are able to see intricate structure, but only when they are close to it.

We also show some practical use of projection pursuit using the polynomial indexes, including a discovery of previously unseen structure in a set of telephone usage data, and two cautionary examples which illustrate that the structure found is not always meaningful.

13.
Operations Research, as taught in the mathematics and applied mathematics (teacher education) program, is an interdisciplinary, practice-oriented course whose goal is to train pre-service teachers to solve real-world problems with mathematical methods. Against the background of the “four basics” introduced in the new curriculum standards for compulsory education, this paper takes the introduction of the concept of the dual linear program in the linear programming (operations mathematics) part as an example, and explores a problem-driven, multiple-representation approach to concept teaching organized as a sequence of questions. Following the path of problem driving, interest driving, development of problem awareness, and the posing and solving of new problems, teaching and learning are designed along two main lines: the connections between mathematics and the outside world, and the connections within mathematics itself. The aim is to explore how a problem-driven, multiple-representation, structured teaching process can change students' ways of learning, strengthen their motivation for inquiry learning, and develop their problem-solving ability. Classroom practice shows that the approach outperforms the previous lecture-only method, improving, to a certain extent, students' academic achievement, their interest in applied problems, and their problem-solving awareness.
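A minimal primal/dual pair of the kind such a lesson might build toward, using a hypothetical classroom example (the numbers are illustrative, not from the paper); strong duality makes both optimal values coincide:

```python
from scipy.optimize import linprog

# Primal:  max 3*x1 + 5*x2
#          s.t. x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
primal = linprog(c=[-3, -5],                     # linprog minimizes, so negate
                 A_ub=[[1, 0], [0, 2], [3, 2]],
                 b_ub=[4, 12, 18])

# Dual:    min 4*y1 + 12*y2 + 18*y3
#          s.t. y1 + 3*y3 >= 3,  2*y2 + 2*y3 >= 5,  y >= 0
dual = linprog(c=[4, 12, 18],
               A_ub=[[-1, 0, -3], [0, -2, -2]],  # flip >= constraints into <=
               b_ub=[-3, -5])

print(-primal.fun, dual.fun)  # strong duality: both optimal values equal 36.0
```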

14.
In this paper, we address uncapacitated network design problems characterised by uncertainty in the input data. Network design choices have a decisive impact on the effectiveness of the system. Design decisions are frequently made with a great degree of uncertainty about the conditions under which the system will be required to operate. Instead of finding optimal designs for a given future scenario, designers often search for network configurations that are “good” for a variety of likely future scenarios. This approach is referred to as the “robustness” approach to system design. We present a formal definition of “robustness” for the uncapacitated network design problem, and develop algorithms aimed at finding robust network designs. These algorithms are adaptations of the Benders decomposition methodology, tailored so that they can efficiently identify robust network designs. We tested the proposed algorithms on a set of randomly generated problems. Our computational experiments showed two important properties: first, robust solutions are abundant in uncapacitated network design problems, and second, the proposed algorithms' performance is satisfactory in terms of cost and the number of robust network designs obtained.

15.
An important question for corporate finance officers is whether risk assessments, such as Value at Risk (VaR), are currently accurate. In contrast to past research on assessing the accuracy of VaR, volatility, and related density estimates, which has focused on backtesting using large samples of fixed size, we develop a class of sequential testing tools for on-line, real-time assessment, based on time windows that vary adaptively with the data. The VaR is determined by a single point of the estimated distribution of the portfolio “gain” and may be positive (profit) or negative (loss). Previous literature has dichotomically tested the sequence of VaR forecasts or the sequence of estimated distributions. A pure test is obtained by converting each observed gain into a binary value indicating whether it was covered by the corresponding VaR forecast or not. A more powerful test results from using the entire distribution, by transforming the observed gain to a random variable that has a known distribution when the forecast is accurate. This, however, also detects errors unrelated to the accuracy of estimating VaR and other measures of risk. We propose an adjustable, continuous compromise between detection power and purity, where “power” refers to quick detection of systematic bias and “purity” refers to insensitivity to errors not relevant to VaR estimation accuracy. Previous approaches focused on either extreme of this continuum. However, we point out that there are few practical situations for which the choice of either extreme would be optimal. Instead, we suggest a compromise that would be much better and very useful in most practical applications.
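The two extremes of this continuum can be sketched as follows; `coverage_indicator` and `pit_z_score` are illustrative names for the "pure" and "powerful" test inputs, and the paper's adjustable compromise between them is not shown:

```python
from scipy.stats import norm

def coverage_indicator(gain, var_forecast):
    """'Pure' test input: 1 if the realized loss exceeded the VaR
    forecast. Under an accurate VaR these indicators are Bernoulli
    with the nominal coverage rate, regardless of the rest of the
    forecast distribution."""
    return int(gain < -var_forecast)

def pit_z_score(gain, forecast_cdf):
    """'Powerful' test input: probability integral transform of the
    observed gain, mapped to a standard normal. It is i.i.d. N(0,1)
    when the full forecast distribution is accurate, but it also
    reacts to forecast errors unrelated to the VaR point itself."""
    return norm.ppf(forecast_cdf(gain))
```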

16.
Gert Kadunz, ZDM, 2002, 34(3): 73–77
The paper highlights the importance of “macros” or modules for teaching and learning geometry using Dynamical Geometry Software (DGS). The role of modules is analyzed in terms of “writing” and “reading” geometry. At first, modules are taken as tools for geometrical construction tasks and as tools to describe and analyze these constructions. For proofs, decomposing a given geometrical statement may be supported by using prototypical pictures representing theorems of geometry (“modules”). Reading theorems into geometry and constructing proofs is still a major achievement of the student, which may be reached by using macros and modules as a major heuristic strategy.

17.
Airport management: taxi planning
Taxi planning studies aircraft routing and scheduling on the airport ground. This is a dynamic problem, whose solution must be updated almost every time a new aircraft enters or exits the system. Taxi planning has been modelled using a linear multicommodity flow network model with side constraints and binary variables. The flow capacity constraints are used to represent the conflicts and competition between aircraft for the given airport capacity. The “Branch and Bound” and “Fix and Relax” methodologies have been used. The computational tests have been run at the Madrid-Barajas airport, using actual airport traffic data.

18.
Conservative dynamical systems propagate as stationary points of the action functional. Using this representation, it has previously been demonstrated that one may obtain fundamental solutions for two-point boundary value problems for some classes of conservative systems via solution of an associated dynamic program. Further, such a fundamental solution may be represented as a set of solutions of differential Riccati equations (DREs), where the solutions may need to be propagated past escape times. Notions of “static duality” and “stat-quad duality” are developed, where the relationship between the two is loosely analogous to that between convex and semiconvex duality. Static duality is useful for smooth functionals where one may not be guaranteed of convexity or concavity. Some simple properties of this duality are examined, particularly commutativity. Application to stationary action is considered, which leads to propagation of DREs past escape times via propagation of stat-quad dual DREs.

19.
Selecting, modifying or creating appropriate problems for mathematics class has become an activity of increasing importance in the professional development of German mathematics teachers. But rather than asking in general “What is a good problem?”, there should be a stronger emphasis on considering the specific goal of a problem, e.g.: “What are the ingredients that make a problem appropriate for initiating a learning process?” or “What are the characteristics that make a problem appropriate for its use in a central test?” We propose a guiding scheme for teachers that turns out to be especially helpful, since the newly introduced orientation on outcome standards (a) leads to a critical predominance of test items and (b) expects teachers to design adequate problems for specific learning processes (e.g. problem solving, reasoning and modelling activities).

20.
We propose and analyze a new relaxation scheme for the iterative solution of the linear system arising from the finite difference discretization of convection–diffusion problems. For problems that are convection dominated, the (nondimensionalized) diffusion parameter ϵ is usually several orders of magnitude smaller than computationally feasible mesh widths. Thus, it is of practical importance that approximation methods not degrade for small ϵ. We give a relaxation procedure that is proven to converge uniformly in ϵ to the solution of the linear algebraic system (i.e., “robustly”). The procedure requires, at each step, the solution of one 4 × 4 linear system per mesh cell. Each 4 × 4 system can be independently solved, and the result communicated to the neighboring mesh cells. Thus, on a mesh-connected processor array, the communication requirements are four local communications per iteration per mesh cell. An example is given which illustrates the robustness of the new relaxation scheme. © 1999 John Wiley & Sons, Inc. Numer Methods Partial Differential Eq 15: 91–110, 1999
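A schematic of the communication pattern of one such sweep, not the paper's exact scheme; `local_system` is an assumed callback that builds a cell's 4 × 4 system from the current values of its neighbors:

```python
import numpy as np

def relaxation_sweep(cells, local_system):
    """Schematic of one cell-wise relaxation sweep: every mesh cell
    independently solves its own 4x4 linear system, and the results
    are then communicated to the neighboring cells for the next sweep.
    local_system(cell) is an assumed callback returning (A, b), where
    A is a 4x4 array assembled from current neighbor values."""
    updates = {}
    for cell in cells:                  # independent solves => parallelizable
        A, b = local_system(cell)
        updates[cell] = np.linalg.solve(A, b)
    return updates                      # shared with neighbors before the next sweep
```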
