Similar Literature
20 similar documents retrieved.
1.
This paper investigates theories that integrate and extend currently accepted agency- and transaction-based approaches to organizational control. We use a computational model to build three forms of control systems (market, bureaucratic, clan) and three forms of control targets (input, behavior, output). Using these models, we examine relationships between control systems and both singular and multiple control targets. Results of this study support the emerging broader perspective on organizational control research and suggest that managers can improve organizational performance by focusing attention on multiple control targets. In addition, findings partially support posited relationships between control systems and singular control targets. The authors suggest that results of this study should direct scholars to refocus control research from examinations of singular forms of control to evaluations of more complex control systems.

2.
Inventory levels are critical to the operations, management, and capacity decisions of inventory systems but can be difficult to model in heterogeneous, non-stationary throughput systems. The inpatient hospital is a complicated throughput system and, like most inventory systems, hospitals dynamically make managerial decisions based on short-term subjective demand predictions. Specifically, short-term hospital staffing, resource capacity, and finance decisions are made according to hospital inpatient inventory predictions. Inpatient inventory systems have non-stationary patient arrival and service processes. Previously developed models yield poor inventory predictions because of model subjectivity, high model complexity, reliance on expected-value predictions alone, and assumed stationary arrival and service processes. Moreover, no existing models provide statistical tests of model significance and quality of fit. This paper presents a Markov chain probability model that uses maximum likelihood regression to predict the expectations and discrete distributions of transient inpatient inventories. The approach is grounded in throughput theory, has low model complexity, and provides statistical significance and quality-of-fit tests unique to this Markov chain. The Markov chain is shown to have superior predictive accuracy compared with seasonal ARIMA models.
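A minimal sketch of the transient-distribution idea behind such a model, with a small hypothetical transition matrix (the paper fits its matrix by maximum likelihood regression; the values below are purely illustrative):

```python
import numpy as np

# Hypothetical 4-state daily census model (states = occupancy bands).
# P[i, j] is a one-day transition probability; these values are assumed
# for illustration, not estimated as in the paper.
P = np.array([
    [0.70, 0.25, 0.05, 0.00],
    [0.15, 0.60, 0.20, 0.05],
    [0.05, 0.20, 0.60, 0.15],
    [0.00, 0.10, 0.30, 0.60],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # today's census is known: state 0

# Transient distribution t days ahead: p_t = p_0 @ P^t
for t in range(1, 4):
    pt = p0 @ np.linalg.matrix_power(P, t)
    print(f"day {t}: distribution = {np.round(pt, 3)}, "
          f"expected state = {pt @ np.arange(4):.2f}")
```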

3.
The fields of operations research (OR) and artificial intelligence (AI) provide complementary methods that may be combined into managerial decision support systems (DSS). However, the management domain is substantially different from domains in which prior expert systems have been developed. Consequently, successful application of OR/AI techniques in managerial DSS requires careful analysis and additional development. Ongoing research concerning design and implementation of managerial DSS is discussed. A prototype system capable of constructing linear statistical models of direct and indirect relationships from a knowledge base of relationships is described and evaluated.

4.
We advocate the use of qualitative models for the analysis of shift equilibria in large biological systems. We present a mathematical method, allowing qualitative predictions to be made of the behaviour of a biological system. These predictions are not dependent on specific values of the kinetic constants. We show how these methods can be used to improve understanding of a complex regulatory system.

5.
The purpose of this paper is to provide a strategic, collaborative approach to risk and quality control in a cooperative supply chain, using a Neyman–Pearson quantile risk framework for the statistical control of risks. The paper focuses on the statistical quality control of a supplier and a producer, applying traditional Neyman–Pearson theory to quality control in a supply chain environment. In our framework, the risks assumed by the parties in the supply chain depend on the organizational structure, the motivations, and the power relationships that exist between members of the supply chain.
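The abstract gives no formulas, but the classical Neyman–Pearson machinery it builds on can be illustrated with acceptance sampling: pick the smallest plan (n, c) that bounds both the producer's and the consumer's risk. All numeric values below are assumptions for illustration, not taken from the paper:

```python
from scipy.stats import binom

p0, p1 = 0.02, 0.08        # acceptable vs. rejectable defect rates (assumed)
alpha, beta = 0.05, 0.10   # producer's and consumer's risks (assumed)

def find_plan(max_n=500):
    """Smallest (n, c): accept the lot if a sample of n has <= c defects."""
    for n in range(1, max_n):
        for c in range(n):
            # P(accept | p) = P(defects <= c) under Binomial(n, p)
            if binom.cdf(c, n, p0) >= 1 - alpha and binom.cdf(c, n, p1) <= beta:
                return n, c
    return None

print(find_plan())   # a plan meeting both risk constraints
```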

6.
This paper considers two related issues regarding feedforward neural networks (NNs). The first is whether the network weights corresponding to the best-fitting network are unique. Our empirical tests suggest an answer in the negative, whether we use the standard backpropagation algorithm or our preferred direct (non-gradient-based) search procedure. We also offer a theoretical analysis which suggests that there will almost inevitably be functional relationships between network weights. The second issue concerns the use of standard statistical approaches to testing the significance of weights or groups of weights. Treating feedforward NNs as an interesting way to carry out nonlinear regression suggests that statistical tests should be employed. According to our results, however, statistical tests can in practice be indeterminate: it is rather difficult to choose either the number of hidden layers or the number of nodes on this basis.
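Two classical weight symmetries make the non-uniqueness finding easy to verify directly; the following sketch (a toy network, not the authors' experiments) checks that permuting hidden units, or flipping the sign of a tanh unit's incoming and outgoing weights, leaves the network function unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # 2 inputs -> 3 hidden
w2, b2 = rng.normal(size=3), rng.normal()              # 3 hidden -> 1 output

def net(x, W1, b1, w2, b2):
    return np.tanh(W1 @ x + b1) @ w2 + b2

x = rng.normal(size=2)
base = net(x, W1, b1, w2, b2)

# Symmetry 1: relabel (permute) the hidden units.
perm = [2, 0, 1]
print(np.isclose(base, net(x, W1[perm], b1[perm], w2[perm], b2)))  # True

# Symmetry 2: sign-flip hidden unit 0 (tanh is odd, so the flips cancel).
W1f, b1f, w2f = W1.copy(), b1.copy(), w2.copy()
W1f[0] *= -1; b1f[0] *= -1; w2f[0] *= -1
print(np.isclose(base, net(x, W1f, b1f, w2f, b2)))                 # True
```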

7.
Loss given default modelling has become crucially important for banks, both to comply with the Basel Accords and for their internal computations of economic capital. In this paper, support vector regression (SVR) techniques are applied to predict loss given default of corporate bonds, and improvements are proposed that increase prediction accuracy by modifying the SVR algorithm to account for the heterogeneity of bond seniorities. We compare the predictions from SVR techniques with thirteen other algorithms. Our paper has three important results. First, at an aggregated level, the proposed improved versions of support vector regression techniques significantly outperform other methods. Second, at a segmented level, by bond seniority, least-squares support vector regression demonstrates significantly better predictive ability than the other statistical models. Third, standard transformations of loss given default do not improve prediction accuracy. Overall, our empirical results show that support vector regression is a promising technique for banks to use in predicting loss given default.
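A sketch of the segmentation idea, fitting one SVR per seniority class, on synthetic data (this reproduces only the segmentation, not the paper's modified SVR algorithm; all data and hyperparameters here are assumptions):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 4))                  # bond/issuer covariates (synthetic)
seniority = rng.integers(0, 3, size=n)       # e.g. 0 = senior secured
y = 0.3 * X[:, 0] + 0.1 * seniority + rng.normal(scale=0.1, size=n)  # LGD proxy

# Fit a separate SVR within each seniority segment.
models = {s: SVR(kernel="rbf", C=1.0, epsilon=0.05)
             .fit(X[seniority == s], y[seniority == s])
          for s in np.unique(seniority)}

# Predict each bond with the model matching its seniority.
preds = np.array([models[s].predict(x[None, :])[0]
                  for x, s in zip(X, seniority)])
print(preds[:5])
```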

8.
This paper discusses some applications of statistical condition estimation (SCE) to the problem of solving linear systems. Specifically, triangular and bidiagonal matrices are studied in some detail as typical of structured matrices. Such a structure, when properly respected, leads to condition estimates that are much less conservative compared with traditional non‐statistical methods of condition estimation. Some examples of linear systems and Sylvester equations are presented. Vandermonde and Cauchy matrices are also studied as representative of linear systems with large condition numbers that can nonetheless be solved accurately. SCE reflects this. Moreover, SCE when applied to solving very large linear systems by iterative solvers, including conjugate gradient and multigrid methods, performs equally well, and various examples are given to illustrate the performance. SCE for solving large linear systems with direct methods, such as methods for semi‐separable structures, is also investigated. In all cases, the advantages of using SCE are manifold: ease of use, efficiency, and reliability.
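A simplified illustration of the statistical flavor of SCE for Ax = b: probe the solution's sensitivity with a few random perturbation directions instead of bounding it with the classical condition number. The proper SCE scaling constants from the literature are omitted, so this conveys only the intuition:

```python
import numpy as np

def sce_estimate(A, b, eps=1e-7, samples=5, seed=0):
    """Crude sensitivity probe: relative change in x per relative change in (A, b)."""
    rng = np.random.default_rng(seed)
    x = np.linalg.solve(A, b)
    sens = []
    for _ in range(samples):
        dA = rng.normal(size=A.shape); dA *= np.linalg.norm(A) / np.linalg.norm(dA)
        db = rng.normal(size=b.shape); db *= np.linalg.norm(b) / np.linalg.norm(db)
        xp = np.linalg.solve(A + eps * dA, b + eps * db)
        sens.append(np.linalg.norm(xp - x) / (eps * np.linalg.norm(x)))
    return max(sens)

# Triangular systems are often solved far more accurately than kappa(A) suggests.
A = np.triu(np.random.default_rng(2).normal(size=(50, 50))) + 5 * np.eye(50)
b = np.ones(50)
print("sensitivity estimate:", sce_estimate(A, b))
print("kappa(A):            ", np.linalg.cond(A))
```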

9.
Novel constructions of empirical controllability and observability gramians for nonlinear systems are proposed for subsequent use in a balanced truncation style of model reduction. The new gramians are based on a generalisation of the fundamental solution for a Linear Time-Varying system. Relationships between the given gramians for nonlinear systems and the standard gramians for both Linear Time-Invariant and Linear Time-Varying systems are established, as well as relationships to prior constructions proposed for empirical gramians. Application of the new gramians is illustrated through a sample test system.
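For a stable linear system, the empirical-gramian construction can be checked against the Lyapunov solution; a minimal sketch with an assumed toy system (forward-Euler quadrature, an impulse in each input channel):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])      # assumed stable system
B = np.array([[1.0],
              [1.0]])

# Empirical controllability gramian: accumulate x(t) x(t)^T along the
# impulse response of each input channel; for linear systems this
# approximates the solution of A Wc + Wc A^T + B B^T = 0.
dt, T = 1e-3, 20.0
Wc = np.zeros((2, 2))
for j in range(B.shape[1]):
    x = B[:, j].copy()            # impulse response starts at x(0) = B e_j
    t = 0.0
    while t < T:
        Wc += np.outer(x, x) * dt
        x += A @ x * dt           # Euler step of x' = A x
        t += dt

print("empirical:", Wc.round(4))
print("Lyapunov: ", solve_continuous_lyapunov(A, -B @ B.T).round(4))
```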

10.
Datasets from remote-sensing platforms and sensor networks are often spatial, temporal, and very large. Processing massive amounts of data to provide current estimates of the (hidden) state from current and past data is challenging, even for the Kalman filter. A large number of spatial locations observed through time can quickly lead to an overwhelmingly high-dimensional statistical model. Dimension reduction without sacrificing complexity is our goal in this article. We demonstrate how a Spatio-Temporal Random Effects (STRE) component of a statistical model reduces the problem to one of fixed dimension with a very fast statistical solution, a methodology we call Fixed Rank Filtering (FRF). This is compared in a simulation experiment to successive, spatial-only predictions based on an analogous Spatial Random Effects (SRE) model, and the value of incorporating temporal dependence is quantified. A remote-sensing dataset of aerosol optical depth (AOD), from the Multi-angle Imaging SpectroRadiometer (MISR) instrument on the Terra satellite, is used to compare spatio-temporal FRF with spatial-only prediction. FRF achieves rapid production of optimally filtered AOD predictions, along with their prediction standard errors. In our case, more than 100,000 spatio-temporal observations were processed: parameter estimation took 64.4 seconds, and optimal predictions and their standard errors took 77.3 seconds to compute. Supplemental materials giving complete details on the design and analysis of the simulation experiment, the simulation code, and the MISR data used are available online.
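The computational trick that makes filtering 100,000+ observations feasible is that all matrix work happens in the r-dimensional random-effects space. A bare-bones sketch of that recursion, with the basis S, propagator H, and variances simply assumed (in the paper they are estimated from the data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 2000, 10                      # n observations per time step, r << n
S = rng.normal(size=(n, r))          # fixed spatial basis (assumed known)
H = 0.9 * np.eye(r)                  # temporal propagator (assumed)
Q = 0.1 * np.eye(r)                  # innovation covariance (assumed)
sigma2 = 0.5                         # measurement-error variance (assumed)

eta, P = np.zeros(r), np.eye(r)      # filtered mean/covariance, r-dimensional
for t in range(5):
    y = S @ rng.normal(size=r) + rng.normal(scale=np.sqrt(sigma2), size=n)
    eta, P = H @ eta, H @ P @ H.T + Q                          # forecast step
    P = np.linalg.inv(np.linalg.inv(P) + (S.T @ S) / sigma2)   # r x r update
    eta = eta + P @ S.T @ (y - S @ eta) / sigma2               # never n x n
    print(f"t={t}: filtered field sd = {np.std(S @ eta):.3f}")
```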

11.
Most information systems generate large quantities of figures—usually at the request of senior management. Their interpretation is frequently left to the user management. Sales information systems are particularly prone to producing vast quantities of computer print-out. This paper describes a case study within Lyons Bakery Ltd where statistical methods have been incorporated into a standard sales information system in order to isolate sales problems and provide a lay interpretation of their causes, leaving management free to devise and implement their solutions.

12.
Inventory control systems typically require the frequent updating of forecasts for many different products. In addition to point predictions, interval forecasts are needed to set appropriate levels of safety stock. The series considered in this paper are characterised by high volatility and skewness, which are both time-varying. These features motivate the consideration of forecasting methods that are robust with regard to distributional assumptions. The widespread use of exponential smoothing for point forecasting in inventory control motivates the development of the approach for interval forecasting. In this paper, we construct interval forecasts from quantile predictions generated using exponentially weighted quantile regression. The approach amounts to exponential smoothing of the cumulative distribution function, and can be viewed as an extension of generalised exponential smoothing to quantile forecasting. Empirical results are encouraging, with improvements over traditional methods being particularly apparent when the approach is used as the basis for robust point forecasting.
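Because the tau-quantile minimises expected pinball loss, a weighted sample quantile with exponentially decaying weights gives a simple local-level version of the idea; a sketch on synthetic skewed demand (the discount factor is an assumption, not the paper's value):

```python
import numpy as np

def ew_quantile(y, tau, lam=0.95):
    """Weighted tau-quantile with weights lam**(T-1-t): newest counts most."""
    w = lam ** np.arange(len(y) - 1, -1, -1)
    order = np.argsort(y)
    cum = np.cumsum(w[order]) / w.sum()
    return y[order][np.searchsorted(cum, tau)]

rng = np.random.default_rng(4)
demand = rng.gamma(shape=2.0, scale=10.0, size=200)   # skewed synthetic series

lo, hi = ew_quantile(demand, 0.05), ew_quantile(demand, 0.95)
print(f"90% interval forecast: [{lo:.1f}, {hi:.1f}]")
```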

13.
We model the evolution of biological and linguistic sequences by comparing their statistical properties. This comparison is performed by means of efficiently computable kernel functions that take two sequences as input and return a measure of statistical similarity between them. We show how the use of such kernels allows one to reconstruct the phylogenetic trees of primates based on the mitochondrial DNA (mtDNA) of existing animals, and the phylogenetic tree of Indo-European and other languages based on sample documents from existing languages. Kernel methods provide a convenient framework for many pattern analysis tasks, and recent advances have focused on efficient methods for sequence comparison and analysis. While a large toolbox of algorithms has been developed to analyze data using kernels, in this paper we demonstrate their use in combination with standard phylogenetic reconstruction algorithms and visualization methods.
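One concrete kernel of this kind is the k-mer spectrum kernel; a toy sketch of the pipeline, using hierarchical clustering as a stand-in for a full phylogenetic reconstruction algorithm (the sequences and k are assumptions, not the paper's data):

```python
import numpy as np
from itertools import product
from scipy.cluster.hierarchy import linkage

def spectrum(seq, k=3, alphabet="ACGT"):
    """Vector of overlapping k-mer counts; k(x, y) = <spectrum(x), spectrum(y)>."""
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1.0
    return v

seqs = {"a": "ACGTACGTACGGACGT",        # toy sequences, not real mtDNA
        "b": "ACGTACGAACGGACGT",
        "c": "TTGGCCAATTGGCCAA"}
names = list(seqs)
phi = [spectrum(s) for s in seqs.values()]
K = np.array([[u @ v for v in phi] for u in phi])

# Kernel-induced distances, condensed for scipy, then a tree.
D = [np.sqrt(K[i, i] - 2 * K[i, j] + K[j, j])
     for i in range(len(names)) for j in range(i + 1, len(names))]
print(linkage(D, method="average"))     # "a" and "b" merge first
```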

14.
Organizations face trade-offs when they adopt strategies in changing resource environments. The type of trade-off depends on the type of resource change. This paper offers an organizational trade-off model for quantitative resource changes. We call it the "Cricket and Ant" (CA) model, because the strategies involved resemble the behavior of the cricket and the ant in La Fontaine's famous fable. We derive theorems of this CA model in first-order logic, which we also use to demonstrate that two theory fragments of organizational ecology, niche-width theory and propagation-strategy theory, obtain as variant cases of CA; their predictions on environmental selection preferences follow as theorems once their respective boundary conditions are represented in the formal machinery.

15.
Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable.
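When the full attachment history is observed, a direct likelihood comparison is straightforward, which hints at what the standard techniques buy; a toy sketch comparing preferential versus uniform attachment (single-edge-per-step growth, with all settings assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def grow(n_nodes):
    """Toy preferential-attachment growth; record each attachment choice."""
    deg, choices = [1, 1], []              # start from a single edge 0--1
    for _ in range(2, n_nodes):
        p = np.array(deg, float) / sum(deg)
        target = rng.choice(len(deg), p=p)
        choices.append((target, list(deg)))
        deg[target] += 1
        deg.append(1)                      # the newcomer arrives with degree 1
    return choices

def loglik(choices, pa=True):
    """Log-likelihood of the choices under PA vs. uniform attachment."""
    ll = 0.0
    for target, deg in choices:
        ll += np.log(deg[target] / sum(deg)) if pa else -np.log(len(deg))
    return ll

choices = grow(500)                        # data generated under PA
print("logL(PA)      =", round(loglik(choices, pa=True), 1))
print("logL(uniform) =", round(loglik(choices, pa=False), 1))  # lower, as hoped
```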

16.
Manufacturing flow line systems: a review of models and analytical results
The most important models and results of the manufacturing flow line literature are described. These include the major classes of models (asynchronous, synchronous, and continuous); the major features (blocking, processing times, failures and repairs); the major properties (conservation of flow, flow rate-idle time, reversibility, and others); and the relationships among different models. Exact and approximate methods for obtaining quantitative measures of performance are also reviewed. The exact methods are appropriate for small systems. The approximate methods, which are the only means available for large systems, are generally based on decomposition, and make use of the exact methods for small systems. Extensions are briefly discussed. Directions for future research are suggested.

17.
Over the last 10 years, the field of mathematical epidemiology has piqued the interest of complex‐systems researchers, resulting in a tremendous volume of work exploring the effects of population structure on disease propagation. Much of this research focuses on computing epidemic threshold tests, and in practice several different tests are often used interchangeably. We summarize recent literature that attempts to clarify the relationships among different threshold criteria, systematize the incorporation of population structure into a general infection framework, and discuss conditions under which interaction topology and infection characteristics can be decoupled in the computation of the basic reproductive ratio, R0. We then present methods for making predictions about disease spread when only partial information about the routes of transmission is available. These methods include approximation techniques and bounds obtained via spectral graph theory, and are applied to several data sets.
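One of the best-known spectral criteria of this type states that SIS-style contagion dies out when beta/delta < 1/lambda_max(A), with lambda_max(A) the largest eigenvalue of the adjacency matrix; a sketch on a random toy network (rates and edge density are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
A = (rng.random((n, n)) < 0.03).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

lam_max = np.linalg.eigvalsh(A).max()          # spectral radius of A
beta, delta = 0.02, 0.10                       # infection / recovery rates
print(f"lambda_max = {lam_max:.2f}, threshold = {1 / lam_max:.3f}")
print("above epidemic threshold:", beta / delta > 1 / lam_max)
```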

18.
Although the concept of Batch Markovian Arrival Processes (BMAPs) has gained widespread use in the stochastic modelling of communication systems and other application areas, few statistical methods for parameter estimation have yet been proposed. However, in order to use BMAPs for modelling in practice, statistical model fitting from empirical time series is an essential task. The present paper contains a specification of the classical EM algorithm for MAPs and BMAPs, as well as a performance comparison with the computationally simpler estimation procedure recently proposed by Breuer and Gilbert. Furthermore, it is shown how to adapt the latter to become an estimator for hidden Markov models.
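The forward-backward/EM machinery involved is easiest to see on a plain discrete hidden Markov model, a much simpler relative of the BMAP estimator specified in the paper; a minimal Baum-Welch sketch on random toy data, with per-step normalisation for numerical safety:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(np.ones(n_states), n_states)    # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), n_states)   # emission matrix
    pi = np.full(n_states, 1.0 / n_states)
    obs = np.asarray(obs); T = len(obs)
    for _ in range(n_iter):
        # E-step: scaled forward/backward passes
        alpha, c = np.zeros((T, n_states)), np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.ones((T, n_states))
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            x = np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A
            xi += x / x.sum()
        # M-step: re-estimate parameters from expected counts
        pi = gamma[0]
        A = xi / xi.sum(axis=1, keepdims=True)
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= B.sum(axis=1, keepdims=True)
    return pi, A, B

obs = [0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1] * 10
pi, A, B = baum_welch(obs, n_states=2, n_symbols=2)
print(np.round(A, 2))
```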

19.
Space semidiscretization of PDAEs, i.e. coupled systems of PDEs and algebraic equations, gives rise to stiff DAEs, so the standard theory of numerical methods for DAEs does not apply. As the study of numerical methods for stiff ODEs is carried out in terms of logarithmic norms, it seems natural to use logarithmic norms for stiff DAEs as well. In this paper we show that the standard conditions imposed on the PDAE and the semidiscretized problem are formally the same when expressed in terms of logarithmic norms. This link between the standard conditions and logarithmic norms allows us to study the mathematical problem and its numerical approximations using techniques for stiff DAEs similar to those used for stiff ODEs. The analysis is carried out for problems that appear in the context of elastic multibody systems, but once the tools, i.e. logarithmic norms, are developed, they can also be used for the analysis of other PDAEs/DAEs.
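For reference, the 2-norm logarithmic norm is mu_2(A) = lambda_max((A + A^T)/2), and it bounds solution growth via ||x(t)|| <= exp(mu_2(A) t) ||x(0)||; a small sketch on an assumed stiff example, contrasting it with the plain eigenvalue bound:

```python
import numpy as np

def log_norm_2(A):
    """Logarithmic norm (matrix measure) in the Euclidean norm."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

A = np.array([[-100.0,  1.0],
              [   0.0, -0.1]])    # mildly stiff illustrative matrix
print("mu_2(A)     =", round(log_norm_2(A), 4))      # negative: contractive
print("max Re(eig) =", round(np.linalg.eigvals(A).real.max(), 4))
```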

20.
Part I of this paper presented the basic concepts of behavior settings and eco-behavioral science originated by the psychologist Roger Barker, showed how they could be linked with standard economic data systems, and suggested their use as a basis for time-allocation matrices and social system accounts. Part II discusses the relationships of behavior settings and eco-behavioral science to established disciplines, describes applications of mathematics to the new concepts by Fox and associates, and points out some major areas in need of mathematical and theoretical development. These areas include representation and measurement of patterns of relationships among roles within behavior settings, relationships among behavior settings within communities and organizations, and the evolution of large, heterogeneous populations of behavior settings over time. We hope some readers will be motivated to participate in this new scientific enterprise.
