Similar Literature
 20 similar documents found
1.
The primary objective of this article is to explore the feasibility of applying cost minimization analysis in a teaching hospital environment. The investigation is concerned with the development of cost per admission and cost per patient day models. These models are further used to determine the length of stay that would minimize cost per patient day (the projected length of stay) and to estimate the costs. The study is based on a total of 109,060 observations (2002), obtained from a teaching hospital in South Florida. The top 10 diagnosis-related groups (DRGs) with the highest volume are selected for the study. The cost models are fitted to the data with an average R2 value of 87.3% and a mean absolute percentage error (MAPE) of 16.1%. The results demonstrate that if a hospital can control the length of stay at the projected level, on average, both the cost per admission and the cost per patient day will decrease. Based on 8703 admissions for the selected DRGs in 2002, the total cost per year and the cost per patient day decrease by approximately 8.56% ($15,453,841) and 4.02% ($66.30), respectively. Overall, these results confirm that the concept of cost minimization analysis in economic theory can be applied to the healthcare industry for the purpose of reducing costs. Cost minimization and cost variation analyses offer useful information to hospital management for better decision-making, and would be an important aid in making management decisions, particularly for cost reduction.
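
As an illustration of the kind of calculation described — not the paper's actual specification — the sketch below assumes a quadratic relationship between cost per patient day and length of stay (LOS) for a single DRG, fits it by least squares, and reads off the projected LOS at the minimum. All figures are hypothetical.

```python
# Minimal sketch (not the paper's actual model): assume cost per patient day
# for one DRG is roughly quadratic in length of stay (LOS), fit it by least
# squares, and read off the LOS that minimizes the predicted cost per day.
import numpy as np

# Hypothetical (LOS in days, cost per patient day in $) pairs for one DRG.
los = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
cost_per_day = np.array([950, 820, 760, 740, 755, 790, 860], dtype=float)

a, b, c = np.polyfit(los, cost_per_day, deg=2)   # cost = a*LOS^2 + b*LOS + c
projected_los = -b / (2 * a)                     # vertex of the parabola
predicted_min = np.polyval([a, b, c], projected_los)
print(f"projected LOS = {projected_los:.2f} days, "
      f"predicted minimum cost/day = ${predicted_min:.2f}")
```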

2.
This paper compares demand forecasts computed using the time series forecasting techniques of vector autoregression (VAR) and Bayesian VAR (BVAR) with forecasts computed using exponential smoothing and seasonal decomposition. Forecasts for three demand data series were used to determine three inventory management policies for each time series, and the inventory costs associated with each policy served as a further basis for comparing the forecasting techniques. The results show that the BVAR technique, which uses mixed estimation, is particularly useful in reducing inventory costs in cases where the limited historical data offer little useful information for forecasting. The BVAR technique improved forecast accuracy and reduced inventory costs in two of the three cases tested; in the third case, unrestricted VAR and exponential smoothing produced the lowest experimental forecast errors and computed inventory costs. Furthermore, this research illustrates that improvements in demand forecasting can provide greater cost reductions than reliance on stochastic inventory models alone.
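
A minimal sketch of simple exponential smoothing, one of the baseline techniques compared; the VAR/BVAR models themselves are omitted for brevity, and the demand series and smoothing constant below are illustrative assumptions.

```python
# Minimal sketch of simple exponential smoothing on a hypothetical demand
# series; forecasts[t] is the one-step-ahead prediction of y[t+1].
import numpy as np

def ses_forecast(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts."""
    level = y[0]
    forecasts = [level]                 # forecast for period 1 is the initial level
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
        forecasts.append(level)
    return np.array(forecasts)

demand = np.array([102, 98, 110, 107, 115, 120, 118], dtype=float)
fc = ses_forecast(demand, alpha=0.3)
mape = np.mean(np.abs(demand[1:] - fc[:-1]) / demand[1:]) * 100
print(f"next-period forecast = {fc[-1]:.1f}, in-sample MAPE = {mape:.1f}%")
```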

3.
Judgemental forecasting of exchange rates is critical for financial decision-making. Detailed investigation of the potential effects of time-series characteristics on judgemental currency forecasts demands the use of simulated series where the form of the signal and the probability distribution of the noise are known. The accuracy measures Mean Absolute Error (MAE) and Mean Squared Error (MSE) are frequently used to assess judgemental predictive performance on actual exchange rate data. This paper illustrates that, in applying these measures to simulated series with Normally distributed noise, it may be desirable to use their expected values after standardising the noise variance. A method of calculating the expected values of the MAE and MSE is set out, and an application to financial experts' judgemental currency forecasts is presented.
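
If a judgemental forecast recovers the signal exactly, the forecast errors reduce to the noise term; for Normally distributed noise ε ~ N(0, σ²), the expected values of the two measures then follow from the half-normal mean. This is a standard fact, offered as a plausible reading of the standardisation step rather than the paper's exact derivation:

```latex
\mathbb{E}[\mathrm{MAE}] = \mathbb{E}\,\lvert\varepsilon\rvert = \sigma\sqrt{2/\pi},
\qquad
\mathbb{E}[\mathrm{MSE}] = \mathbb{E}[\varepsilon^{2}] = \sigma^{2}
```

so after standardising the noise variance to σ² = 1, the benchmark values are √(2/π) ≈ 0.798 for the MAE and 1 for the MSE.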

4.
One of the major drawbacks of existing fuzzy time series forecasting models is that they provide only a single-point forecasted value, just like the output of traditional time series methods, and hence cannot provide a decision analyst with more useful information. The aim of the present research is to design an improved fuzzy time series forecasting method in which the forecasted value is a trapezoidal fuzzy number instead of a single-point value. The proposed method may also increase forecasting accuracy. Two numerical data sets were used to illustrate the proposed method and to compare its forecasting accuracy with that of three fuzzy time series methods. The results of the comparison indicate that the proposed method generates more accurate forecasting values.
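
A minimal sketch of the forecast object the paper proposes — a trapezoidal fuzzy number (a, b, c, d) with full membership on [b, c] — together with its membership function and a centroid defuzzification for when a crisp value is needed. The specific numbers are illustrative.

```python
# Minimal sketch of a trapezoidal fuzzy forecast (a, b, c, d): membership
# rises linearly on [a, b], is 1 on [b, c], and falls linearly on [c, d].
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    a: float  # left support
    b: float  # left shoulder (membership reaches 1)
    c: float  # right shoulder
    d: float  # right support

    def membership(self, x: float) -> float:
        if self.a < x < self.b:
            return (x - self.a) / (self.b - self.a)
        if self.b <= x <= self.c:
            return 1.0
        if self.c < x < self.d:
            return (self.d - x) / (self.d - self.c)
        return 0.0

    def centroid(self) -> float:
        # Centroid (center of gravity) of the trapezoid, a common crisp summary.
        num = (self.d**2 + self.c**2 + self.c * self.d
               - self.a**2 - self.b**2 - self.a * self.b)
        den = 3 * (self.d + self.c - self.a - self.b)
        return num / den

forecast = TrapezoidalFuzzyNumber(95.0, 100.0, 105.0, 112.0)
print(forecast.membership(102.0), round(forecast.centroid(), 2))
```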

5.
Following a catastrophic disturbance, forest managers may choose to perform a salvage harvest to recoup timber losses. When the disturbance process evolves stochastically, a unique option value arises associated with the salvage harvest decision. This option value represents the value of postponing a salvage harvest to gain more information about the disturbance process. This paper uses a real options approach to determine how much of a forested area must be infested to trigger a salvage harvest when the forest provides both timber and nontimber values. Analytical results indicate that slower rates of forest recovery optimally delay a salvage harvest, while forested areas with large timber values, and those where nontimber values are more sensitive to the presence of dead and dying trees, should be harvested sooner. The model is applied to a mountain pine beetle outbreak in Idaho's Sawtooth National Forest using readily available aerial detection survey data.

6.
This study develops a new use of data envelopment analysis for estimating a stochastic frontier cost function that is assumed to have two different error components: a one-sided disturbance (representing technical and allocative inefficiencies) and a two-sided disturbance (representing observational error). The two error components are handled by data envelopment analysis in combination with goal programming/constrained regression. The approach proposed in this study avoids several statistical assumptions used in conventional methods for estimating a stochastic frontier function. As an important application, this study uses the estimation technique to obtain an AT&T stochastic frontier cost function. The study thereby measures the technical and allocative efficiency of AT&T's production process and reviews its natural monopoly status. The estimated stochastic frontier cost function is also compared with the other cost function models used in previous studies concerning the divestiture of the telephone industry.

7.
Cost control and management are vital to the development of manufacturing enterprises, and combining activity-based costing with cost control offers a new approach to cost management in manufacturing. The big data era brings new opportunities for enterprise survival and growth, and big data thinking will profoundly shape corporate strategy. This paper combines big data with concrete enterprise objectives and, in a big data environment, uses big data technologies and tools to explore a new cost control model. It considers the entire value chain and draws on activity-based costing ideas to refine activity-based cost accounting, with the aim of providing guidance for cost control in manufacturing.
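
To make the activity-based costing idea concrete, here is a minimal sketch of the classic ABC allocation: overhead is pooled by activity, divided by total driver volume to obtain a rate, and charged to products according to their driver consumption. Activities, drivers, and all figures are hypothetical.

```python
# Minimal sketch of activity-based costing (ABC): pool overhead by activity,
# compute a rate per driver unit, then allocate overhead to products by
# their driver consumption. All figures are hypothetical.
activity_pools = {            # activity -> (overhead cost, total driver units)
    "machine_setup":  (40_000.0, 200),   # driver: number of setups
    "quality_checks": (25_000.0, 500),   # driver: inspections
    "material_moves": (15_000.0, 300),   # driver: moves
}
rates = {act: cost / volume for act, (cost, volume) in activity_pools.items()}

product_usage = {             # product -> driver units consumed per activity
    "widget_A": {"machine_setup": 120, "quality_checks": 200, "material_moves": 100},
    "widget_B": {"machine_setup": 80,  "quality_checks": 300, "material_moves": 200},
}
for product, usage in product_usage.items():
    allocated = sum(rates[act] * units for act, units in usage.items())
    print(f"{product}: allocated overhead = {allocated:,.2f}")
```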

8.
The value of information systems availability is analyzed in this study through theoretical models of information economics. The article employs the information structure model to assess the value of information systems under various situations, with particular examples of the impact of data accessibility level on the quality of decision-making. The study centers on the relationship between an information system's time and content characteristics and the value of the information. It suggests a method to model the utility considerations that lead to the choice of an information system, and the entailed models are employed to illuminate certain facets of the productivity paradox. The results of the analysis indicate a direct relationship between a system's accessibility and its informativeness; consequently, some aspects of the "Productivity Paradox" may be explained by using these results. The article proves a number of theorems and discusses the theoretical and practical interpretation of the results.

9.
This paper studies global optimization problems over bounded closed box constraints. Exploiting the equivalence between the largest root of a generalized variance function equation and the global minimum, together with the cross-entropy (relative entropy) method, an integral-type level-value estimation algorithm for the global optimum is designed. Function values produced by importance sampling are clustered according to fixed rules, and several new importance samples are generated within each cluster; combined with the cross-entropy algorithm, this yields a new global search algorithm operating with multiple importance samples. The advantage of the algorithm is that each iteration exploits the best function-value information currently available, steering the random search toward better function values, while the multiple importance samples help to uncover better global information. A series of numerical experiments shows that the algorithm is highly effective.
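
A minimal sketch of the cross-entropy method for box-constrained global minimization with a single Gaussian sampling distribution; the paper's clustered, multi-importance-sample variant is more elaborate. Sample size, elite fraction, and the Rastrigin test function are illustrative choices.

```python
# Minimal sketch of the cross-entropy (CE) method: sample from a Gaussian,
# keep the elite (lowest-value) samples, refit the Gaussian to them, repeat.
import numpy as np

def cross_entropy_minimize(f, lo, hi, n=200, elite_frac=0.1, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = (lo + hi) / 2, (hi - lo) / 2       # initial sampling parameters
    n_elite = max(1, int(n * elite_frac))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n, len(lo))).clip(lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        elite = x[np.argsort(vals)[:n_elite]]      # best samples this iteration
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu, f(mu)

# Rastrigin function on [-5.12, 5.12]^2, global minimum 0 at the origin.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
lo, hi = np.full(2, -5.12), np.full(2, 5.12)
x_best, f_best = cross_entropy_minimize(rastrigin, lo, hi)
print(x_best, f_best)
```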

10.
We discuss the use of monotonic set measures for the representation of uncertain information. We look at some important examples of measure-based uncertainty, specifically probability, possibility, and necessity. Other types of uncertainty, such as cardinality-based and quasi-additive measures, are discussed. We consider the problem of determining the representative value of a variable whose uncertain value is formalized using a monotonic set measure. We note the central role that averaging, and particularly weighted averaging, operations play in obtaining these representative values. We investigate the use of various integrals, such as the Choquet and Sugeno, for obtaining these required averages. We suggest ways of extending a measure defined on a set to the case of fuzzy sets and the power sets of the original set. We briefly consider the problem of question answering under uncertain knowledge.
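
A minimal sketch of the discrete Choquet integral with respect to a monotonic set measure, one of the averaging-style integrals mentioned; the measure and function below are hypothetical examples.

```python
# Minimal sketch of the discrete Choquet integral of f with respect to a
# monotonic set measure g, via the standard sort-and-telescope formula:
# sort values descending and accumulate (f_(i) - f_(i+1)) * g(top-i set).
def choquet(f, g):
    """f: dict element -> value; g: dict frozenset -> measure, g(full set) = 1."""
    items = sorted(f, key=f.get, reverse=True)            # descending by value
    total = 0.0
    for i, item in enumerate(items):
        nxt = f[items[i + 1]] if i + 1 < len(items) else 0.0
        total += (f[item] - nxt) * g[frozenset(items[: i + 1])]
    return total

# A hypothetical monotonic measure on subsets of {a, b, c}.
g = {frozenset(s): m for s, m in [
    ("a", 0.4), ("b", 0.3), ("c", 0.2),
    ("ab", 0.8), ("ac", 0.6), ("bc", 0.5), ("abc", 1.0)]}
f = {"a": 0.9, "b": 0.6, "c": 0.3}
print(choquet(f, g))   # 0.66: a measure-weighted representative value of f
```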

11.
The problem of describing minimal response time execution strategies for evaluating the join of several fragmented database relations is considered. The consequential optimization problem assumes the convenient form of a min-max integer program. With further attention, various generalizations are realized that also include the performance objective of total execution cost. Tables of data logically conforming to the relational model of information are, at the physical level, frequently divided into numerous pieces. These fragments are found disseminated amongst the various sites of a distributed database system, with each one possibly replicated at any number of separate facilities. A submission demanding the amalgamation of many such relations is resolved by joining together their sets of component fragments in an appropriate manner, as defined by complicated patterns of overlapping attribute values; the final result is realized by then concatenating the products of these computations. This process is to be performed under the supervision of the database management system in such a way as to minimize the time taken, as perceived by the user who issued the request. These developments are based upon earlier investigations [1–5] that consider only the alternative optimization goal of minimal execution cost. With this in mind, several different approaches may be taken to realize distinct hybrid models that give due regard to both measures of join query performance.

12.
There are about 10^11 neurons in the human brain, which form a highly complex network through synaptic junctions. Deciphering the information expressed in this network is important, and will contribute to the prevention and diagnosis of human cognitive disorders. This paper uses fMRI data from schizophrenia patients and healthy control subjects to construct brain network models and explores abnormal topological properties of schizophrenic brain networks on the basis of graph theory. Traditional graph-theoretic studies of the human brain network assume that the network model is time-invariant and therefore use the whole time series to construct a static network. However, the nonstationarity of fMRI functional time series makes this assumption hard to justify. Time variation should therefore be taken into account when constructing the brain network model, yielding a dynamic brain network through which the network information can be explored more fully. In this study, the time series data are segmented with sliding time windows to construct a dynamic brain network model, which is then analyzed with graph-theoretic tools, thereby reducing the effects of the nonstationarity of fMRI functional time series. Comparing the dynamic brain networks of schizophrenia patients with those of normal controls at different levels, the results show differences in single-node properties and group network properties between the whole-brain dynamic functional connectivity networks of the two groups. The discovery of these differences in network topological properties provides new clues for further study of the pathological mechanism of schizophrenia.
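
A minimal sketch of the sliding-window construction: windowed correlation matrices are computed from multivariate time series, thresholded into adjacency matrices, and summarized by a simple graph property (node degree). Window length, step, threshold, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of a sliding-window dynamic functional network: one
# thresholded correlation (adjacency) matrix per time window.
import numpy as np

def dynamic_networks(ts, win, step, thresh=0.5):
    """ts: (timepoints, regions) array, e.g. regional fMRI BOLD series."""
    nets = []
    for start in range(0, ts.shape[0] - win + 1, step):
        corr = np.corrcoef(ts[start:start + win].T)   # regions x regions
        adj = (np.abs(corr) > thresh).astype(int)
        np.fill_diagonal(adj, 0)                      # no self-loops
        nets.append(adj)
    return nets

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 10))                 # synthetic 10-region data
nets = dynamic_networks(bold, win=50, step=25)
degrees = np.array([a.sum(axis=1) for a in nets])     # node degree per window
print(f"{len(nets)} windows; mean degree per node:\n{degrees.mean(axis=0)}")
```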

13.
By establishing a Stackelberg game model, this paper studies the effect of the retailer's innovation investment on a supply chain with mixed channels. Under both symmetric and asymmetric information about the retailer's innovation cost coefficient, it analyzes how each party's decision variables and profits depend on the innovation cost coefficient, the manufacturer's uncertainty about the retailer's innovation information, the demand transfer coefficient, the market potential, and the innovation potential. The results show that the manufacturer always benefits from information sharing; conditions are also derived under which the retailer is willing to share cost information and under which the supply chain as a whole benefits, providing a basis for the manufacturer's information-sharing decisions. Finally, the results are verified through numerical examples.
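
The backward-induction logic of a Stackelberg game can be shown on a deliberately stylized single-channel model with linear demand — not the paper's mixed-channel, asymmetric-information setting: the retailer best-responds to the wholesale price, and the manufacturer optimizes while anticipating that response.

```python
# Minimal sketch of Stackelberg backward induction in a stylized model with
# linear demand q = a - b*p, wholesale price w, and unit production cost c.
import sympy as sp

a, b, c, w, p = sp.symbols("a b c w p", positive=True)

retailer_profit = (p - w) * (a - b * p)
p_star = sp.solve(sp.diff(retailer_profit, p), p)[0]        # follower's response

manufacturer_profit = (w - c) * (a - b * p_star)
w_star = sp.solve(sp.diff(manufacturer_profit, w), w)[0]    # leader's choice

print("p*(w) =", sp.simplify(p_star))          # (a + b*w) / (2*b)
print("w*    =", sp.simplify(w_star))          # (a + b*c) / (2*b)
```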

14.
This paper presents a new scheme for the coordination of dynamic, uncapacitated lot-sizing problems in two-party supply chains where the parties' local data are private information and no external or central entity is involved. The coordination scheme includes the following actions: first, the buyer generates a series of different supply proposals using an extension of her local lot-sizing problem; then the supplier calculates the cost changes that would result from implementing the buyer's proposals. Based on this information, the parties can identify the best proposal generated. The scheme identifies the system-wide optimum in different settings—for instance in a two-stage supply chain where the supplier's costs for holding a period's demand in inventory exceed his setup costs.
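
The buyer's local problem here is a dynamic, uncapacitated lot-sizing problem; below is a minimal sketch of the classic Wagner–Whitin dynamic program for that problem (the coordination scheme itself, which perturbs this problem to generate proposals, is not reproduced). Costs and demands are illustrative.

```python
# Minimal sketch of the Wagner-Whitin dynamic program for dynamic,
# uncapacitated lot sizing: best[t] is the minimum cost of covering
# the demands of periods 0..t-1.
def wagner_whitin(demand, setup, hold):
    """Fixed setup cost per order, holding cost per unit per period."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(t):              # last order placed in period j covers j..t-1
            cost = best[j] + setup
            cost += sum(hold * (k - j) * demand[k] for k in range(j, t))
            best[t] = min(best[t], cost)
    return best[T]

demand = [20, 50, 10, 50, 50, 10]
print(wagner_whitin(demand, setup=100.0, hold=1.0))
```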

15.
In most stochastic decision problems one has the opportunity to collect information that would partially or totally eliminate the inherent uncertainty. One wishes to compare the cost and value of such information in terms of the decision maker's preferences to determine an optimal information gathering plan. The calculation of the value of information generally involves one or more stochastic recourse problems as well as one or more expected value distribution problems. The difficulty and cost of obtaining solutions to these problems have led to a focus on the development of upper and lower bounds on the various subproblems that yield bounds on the value of information. In this paper we discuss published and new bounds for static problems with linear and concave preference functions, for partial and perfect information. We also provide numerical examples utilizing simple production and investment problems that illustrate the calculations involved in computing the various bounds and provide a setting for a comparison of the bounds, yielding some tentative guidelines for their use. The bounds compared are the Jensen's inequality bound, the conditional Jensen's inequality bound, and the generalized Jensen and Edmundson-Madansky bounds.
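
A minimal numeric sketch of the quantities involved, on a newsvendor-style toy problem: the here-and-now value, the wait-and-see (perfect information) value, their difference (the value of perfect information), and the mean-value problem that drives the Jensen-type bound. Prices and the demand distribution are hypothetical.

```python
# Minimal sketch of the expected value of perfect information (EVPI) for a
# newsvendor-style problem with a discrete demand distribution.
import numpy as np

price, cost = 10.0, 6.0
demands = np.array([50, 100, 150])
probs = np.array([0.3, 0.4, 0.3])

def profit(order, demand):
    return price * min(order, demand) - cost * order

# Here-and-now: pick one order quantity before demand is revealed.
hn = max(np.dot(probs, [profit(q, d) for d in demands]) for q in demands)
# Wait-and-see: order after observing demand (perfect information).
ws = np.dot(probs, [max(profit(q, d) for q in demands) for d in demands])
# Mean-value (expected value distribution) problem, the Jensen-type reference.
ev = max(profit(q, np.dot(probs, demands)) for q in demands)

print(f"here-and-now = {hn:.1f}, wait-and-see = {ws:.1f}, EVPI = {ws - hn:.1f}")
print(f"mean-value problem = {ev:.1f}  (here, an upper bound on the here-and-now value)")
```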

16.
Modeling and accurately predicting medical expenses is the foundation of medical insurance premium rating. High-dimensional auxiliary information in medical expense data plays an important role in long-term prediction, yet traditional statistical modeling methods are ill-suited to high-dimensional longitudinal medical expense data. This paper proposes a partially linear multi-index additive model to fit and predict longitudinal medical expense data with high-dimensional features, estimates the model with two different dimension-reduction methods, and applies the model to a dataset containing…

17.
Since the identification of variant Creutzfeldt–Jakob disease, the possibility that this disease might be passed on via blood transfusion has presented challenging policy questions for Government and blood services in the UK. This paper discusses the use of mathematical modelling to inform policy in this area of health protection. We focus on the use of a relatively simple analytical model to explore how many such infections might eventually be expected to result in clinical cases under a range of alternative scenarios of interest to policy, and on the potential impact of possible additional countermeasures. We comment on the value of triangulating between findings generated using distinct modelling approaches and observational data.

18.
The cost of obtaining good information regarding the various probability distributions needed for the solution of most stochastic decision problems is considerable. It is important to consider questions such as: (1) what minimal amounts of information are sufficient to determine optimal decision rules; (2) what is the value of obtaining knowledge of the actual realization of the random vectors; and (3) what is the value of obtaining some partial information regarding the actual realization of the random vectors. This paper is primarily concerned with questions two and three when the decision maker has a priori knowledge of the joint distribution function of the random variables. Some remarks are made regarding results along the lines of question one, and mention is made of assumptions under which knowledge of means, or of means, variances, covariances, and n-th moments, is sufficient for the calculation of optimal decision rules. The analysis of the second question leads to the development of bounds on the value of perfect information; for multiperiod problems it is important to consider when the perfect information is available. Jensen's inequality is the key tool of the analysis. The calculation of the bounds requires the solution of nonlinear programs and the numerical evaluation of certain functions. Generally speaking, tighter bounds may be obtained only at the expense of additional information and computational complexity; hence, one may wish to compute some simple bounds to decide upon the advisability of obtaining more information. For the analysis of the value of partial information it is convenient to introduce the notion of a signal. Each signal represents the receipt of certain information, and these signals are drawn from a given probability distribution. When a signal is received, it alters the decision maker's perception of the probability distributions inherent in his decision problem. The choice between different information structures must then take into account these probability distributions as well as the decision maker's preference function. A hierarchy of bounds may be determined for partial information evaluation utilizing the tools of the multiperiod perfect information case; however, the calculation of these bounds is generally considerably more difficult than the calculation of similar bounds in the perfect information case. Most of the analysis is directed towards problems in which the decision maker has a linear utility function over profits, costs, or some other numerical variable, although some of the bounds generalize to the case where the utility function is strictly increasing and concave.
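
The classical Jensen-based chain behind such bounds can be written compactly; the following is a standard, Madansky-type statement for a maximization problem, offered as a plausible reconstruction rather than the paper's exact result. Writing HN for the here-and-now value and WS for the wait-and-see (perfect information) value:

```latex
\mathrm{HN} \;=\; \max_{x}\,\mathbb{E}\,[\,u(x,\xi)\,]
\;\le\;
\mathrm{WS} \;=\; \mathbb{E}\Big[\max_{x}\, u(x,\xi)\Big]
\;\le\;
\max_{x}\, u\big(x,\mathbb{E}[\xi]\big)
```

The first inequality always holds; the second is Jensen's inequality, valid when the optimal-value function φ(ξ) = max_x u(x, ξ) is concave in ξ (as for linear programs perturbed in the right-hand side). The value of perfect information WS − HN is then bounded above by a quantity requiring the solution of only one deterministic (expected value distribution) problem.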

19.
郭金海  罗光洲  陈化 《数学杂志》2005,25(6):625-630
This paper discusses properties of solutions to the initial-boundary value problems for two chemotaxis models, one parabolic-hyperbolic and one parabolic-ODE. Using the method of formal series expansion, it is shown that under small perturbations the global solution blows up in finite time, and the blow-up time is estimated. This demonstrates the instability of the spatially homogeneous solutions of these models.

20.
Time series classification (TSC) is one of the important and challenging problems in data mining. Time series are first encoded as images in four ways — Gramian Angular Summation/Difference Fields (GASF/GADF), the Markov Transition Field (MTF), and the Recurrence Plot (RP) — and a deep residual network (ResNet) is then used to classify the encoded images. To exploit the information in all four encodings and improve classification performance, AdaBoost is used to ensemble the base classifiers. Because ResNet must store the activations of every layer during backpropagation, reversible residual modules are substituted for the traditional residual modules to reduce the memory consumption of the ensemble. In the experimental stage, several datasets from the UCR archive are selected for testing and the results are compared with the current state of the art; the experiments demonstrate the effectiveness of the proposed algorithm.
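
A minimal sketch of one of the four encodings, the Gramian Angular Summation Field (GASF): rescale the series to [−1, 1], map values to angles φ = arccos(x), and form the image cos(φᵢ + φⱼ). The toy series is illustrative; the GADF, MTF, and RP encodings are analogous series-to-image transforms.

```python
# Minimal sketch of the GASF encoding of a univariate time series into an
# n x n image suitable as input to an image classifier such as ResNet.
import numpy as np

def gasf(series):
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(x)                                # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])        # GASF_ij = cos(phi_i + phi_j)

ts = np.sin(np.linspace(0, 4 * np.pi, 64))            # toy series
image = gasf(ts)
print(image.shape)                                    # (64, 64)
```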
