Similar Articles
1.
Each of an organization’s many activities transforms inputs into outputs. Managing these activities involves allocating input resources for some activities and assigning output targets for others. Making these decisions is especially difficult in the presence of uncertainty. In practice, many organizations address these problems by using a fairly simple “proportional allocation” heuristic (e.g., “allocate to each activity the same percentage increase (or decrease) in its resources or targets”). But proportional allocation does not consider the uncertainty inherent in the ability of each activity to make use of its resources (or meet its targets).
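A minimal sketch of the proportional-allocation heuristic described above (the activity names and budget figures are invented for illustration):

```python
def proportional_allocation(current, new_total):
    """Apply the simple heuristic: give every activity the same
    percentage increase (or decrease) so that allocations sum to the
    new total.  Note that it ignores each activity's own uncertainty,
    which is exactly the shortcoming the abstract points out."""
    old_total = sum(current.values())
    factor = new_total / old_total
    return {activity: amount * factor for activity, amount in current.items()}

# A 10% budget increase raises every activity's allocation by 10%.
budget = {"marketing": 100.0, "production": 300.0, "logistics": 100.0}
raised = proportional_allocation(budget, 550.0)
```

Every activity is scaled by the same factor regardless of how productively it could use the extra resources.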

2.
Supply chain management has increasingly attracted attention as a systematic approach to integrating the supply chain in order to plan and control the flow of materials and information from suppliers to customers. One of the most important issues in supply chain management is selecting the appropriate supplier, which has a significant effect on reducing purchasing costs and increasing the organization’s competitiveness. Selecting the best supplier is inherently complex, has no definite structure, and depends on the type of the suppliers’ activity. In the process of deciding among suppliers, many qualitative and quantitative performance indicators, such as quality, price, flexibility, and due date, should be considered. The supplier selection problem is therefore a multi-criteria decision-making problem, and numerous methods have been proposed to solve it. In the current paper, four suppliers of the imported raw material “Tripolyphosphate (TPP)” (the primary material for producing detergent powder, with a case study in Iran) are evaluated against 25 effective criteria using the hierarchical fuzzy TOPSIS (HFTOPSIS) approach.
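To make the TOPSIS idea concrete, here is a crisp (non-fuzzy, non-hierarchical) simplification of the method the paper applies; the two-supplier, two-criterion matrix is invented for illustration and the paper's actual HFTOPSIS additionally handles fuzzy scores and a criteria hierarchy:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if larger is better on criterion j (else it is a cost)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points, columnwise.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal
        d_neg = math.dist(row, anti)    # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two suppliers scored on quality (benefit) and price (cost).
scores = topsis([[7, 9], [9, 6]], weights=[0.5, 0.5], benefit=[True, False])
```

The second supplier dominates on both criteria here, so it receives the higher closeness score.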

3.
The paper presents the author’s partial and personal historical reconstruction of how decision theory is evolving into a decision aiding methodology. The presentation mainly shows how “alternative” approaches to classic decision theory evolved. The paper claims that all such decision “theories” share a common methodological feature, namely the use of formal and abstract languages as well as of a model of rationality. Different decision aiding approaches can thus be defined, depending on the origin of the model of rationality used in the decision aiding process. The concept of the decision aiding process is then introduced and analysed. The paper’s ultimate claim is that all such decision aiding approaches can be seen as part of a decision aiding methodology.

4.
Data envelopment analysis (DEA) allows us to evaluate the relative efficiency of each of a set of decision-making units (DMUs). However, the methodology does not permit us to identify specific sources of inefficiency because DEA views the DMU as a “black box” that consumes a mix of inputs and produces a mix of outputs. Thus, DEA does not provide a DMU manager with insight regarding the internal source of the organization’s inefficiency.
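The flavor of DEA can be shown with its simplest special case, one input and one output per DMU, where the efficiency reduces to each DMU's output/input ratio relative to the best ratio. The general multi-input/multi-output (CCR) model requires solving a linear program per DMU; the numbers below are invented for illustration:

```python
def dea_efficiency(inputs, outputs):
    """Single-input, single-output special case of DEA: a DMU's
    relative efficiency is its output/input ratio divided by the
    best ratio among all DMUs (so the frontier DMU scores 1.0)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three DMUs; the second one defines the efficiency frontier here.
eff = dea_efficiency(inputs=[10, 8, 12], outputs=[20, 24, 18])
```

Note that, as the abstract stresses, even a score below 1.0 says nothing about *where* inside the DMU the inefficiency arises.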

5.
In this paper, both the duration and the cost of an activity are modeled as random variables; accordingly, the cumulative cost at each time point also becomes a random variable as a project progresses. We first present the new concept of an “alphorn of uncertainty” (AoU) to describe the domain of cumulative cost variation throughout the life of a project, and subsequently apply it to assess the project’s financial status over time. The shape of the alphorn was obtained by combining Monte Carlo sampling with Gantt chart analysis, which enabled us to determine a project’s financial status under specific payment modes. To validate the AoU, we designed and conducted an extensive numerical experiment using a randomly generated data set of activity networks. The results indicate that the AoU may be a promising method for the financial management of projects under uncertainty. Furthermore, financial status under uncertain conditions is not sensitive to the choice of activity duration distributions or to the form of the cost functions. However, payment rules can greatly affect financial status over the duration of a project.
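The "alphorn" shape can be sketched by Monte Carlo simulation of a purely serial project: sampling each activity's cost and tracking quantile bounds of the cumulative cost shows the uncertainty band widening as the project progresses. Uniform cost distributions and the specific means/spreads below are illustrative assumptions, not the paper's model:

```python
import random

def alphorn_band(activity_costs, n_runs=5000, q=(0.05, 0.95)):
    """Monte Carlo sketch of the 'alphorn of uncertainty': for a serial
    project, sample each activity's cost (uniform here, an illustrative
    assumption) and record quantile bounds of the cumulative cost after
    each activity completes."""
    random.seed(0)
    n = len(activity_costs)
    runs = []
    for _ in range(n_runs):
        cum, path = 0.0, []
        for mean, spread in activity_costs:
            cum += random.uniform(mean - spread, mean + spread)
            path.append(cum)
        runs.append(path)
    band = []
    for k in range(n):  # quantile band after the k-th activity
        stage = sorted(run[k] for run in runs)
        band.append((stage[int(q[0] * n_runs)], stage[int(q[1] * n_runs)]))
    return band

band = alphorn_band([(10, 2), (20, 5), (15, 3)])
```

Because cost variances accumulate, the band widens with each completed activity, which is the alphorn shape.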

6.
Diophantos in the Arithmetica, without having previously defined any concept of “equality” or “equation,” employs the concept of an unknown number as a tool for solving problems and finds its value from an equality created ad hoc. In this paper we analyze Diophantos’s practices in the creation and simplification of such equalities, aiming to adduce more evidence on certain issues arising in recent historical research on the meaning of the “equation” in Diophantos’s work.

7.
In a firm, potential conflict exists between the manufacturing and sales departments. Salespersons prefer to order from the manufacturing department in advance so that they can secure products in the amount they need to satisfy customers in time. This time-in-advance strategy is defined as “lead-time hedging.” While this hedging strategy helps the sales department guarantee the right quantity at the right time for customers, it adds cost and pressure to the manufacturing department. One scheme to resolve this conflict is to introduce a fair “internal price,” charged by the manufacturing department to the sales department. In this paper, two models involving a fair internal price are introduced. In one model, a Nash game is played to reach an optimal strategy for both parties. In the other model, a Stackelberg game is played in which the manufacturing department serves as the leader. We show that these two models can successfully reduce the lead-time hedging determined by the salesperson and can increase the firm’s overall profit, as compared to the traditional model that does not consider the internal price. Further insights are obtained by comparing the manufacturer’s and the salesperson’s profits across the traditional model, the Nash game model, the Stackelberg game model, and the centralized global optimization model.
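The Stackelberg variant can be sketched numerically. The quadratic payoffs below (linear revenue for the sales department, quadratic hedging costs for both sides) are invented for illustration and are not the paper's model; the point is only the leader-follower structure, in which manufacturing sets the internal price anticipating the sales department's best-response hedge:

```python
def sales_best_hedge(w, revenue=10.0, cost=1.0):
    """Follower (sales): choose hedge h maximizing (revenue - w)*h - cost*h^2,
    i.e. h* = (revenue - w) / (2*cost), truncated at zero."""
    return max((revenue - w) / (2 * cost), 0.0)

def stackelberg_internal_price(revenue=10.0, c_sales=1.0, c_mfg=0.5, grid=1000):
    """Leader (manufacturing): grid-search the internal price w,
    anticipating the follower's reaction, to maximize w*h - c_mfg*h^2."""
    best_w, best_profit = 0.0, float("-inf")
    for i in range(grid + 1):
        w = revenue * i / grid
        h = sales_best_hedge(w, revenue, c_sales)
        profit = w * h - c_mfg * h * h
        if profit > best_profit:
            best_w, best_profit = w, profit
    return best_w, best_profit

w_star, mfg_profit = stackelberg_internal_price()
```

With these payoffs the closed-form optimum is w* = 6 (hedge h = 2, manufacturing profit 10), which the grid search recovers.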

8.
This paper analyzes strategic store openings in a situation in which firms can open multiple stores, depending on the firms’ financial constraints. Specifically, given any upper limit on the number of stores that two potentially symmetric firms can open, they sequentially determine the number of store openings, including their locations, to maximize their profits. Our analysis in a microeconomic framework shows that the equilibrium strategy falls into one of only two opposite strategies, according to the severity of the financial constraints involved. When firms can afford to invest significant amounts of money in the market, the leader chooses a “segmentation strategy,” in which part of the market can be monopolized by opening a chain of multiple stores and deterring the follower’s entry. In contrast, when the leader faces a financial constraint so severe that it can monopolize less than half of the market, the leader chooses a “minimum differentiation strategy,” in which firms open each of their stores at exactly the same point as the rival’s. Under this strategy, the leader necessarily captures just half of the market. Furthermore, we show that regardless of potential symmetry between firms, both first- and second-mover advantages in terms of profit can occur in equilibrium.

9.
Molodtsov’s soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty. By combining the multi-fuzzy set and soft set models, this paper introduces the concept of multi-fuzzy soft sets. Some operations on multi-fuzzy soft sets are defined, such as the complement, “AND,” “OR,” union, and intersection operations, and De Morgan’s laws are then proved. Finally, by means of level soft sets, an algorithm is presented and a decision problem is analyzed using multi-fuzzy soft sets.
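The core operations can be sketched for the ordinary (single-membership) fuzzy soft set case, representing a fuzzy soft set as a mapping from parameters to fuzzy subsets of the object universe; the paper's multi-fuzzy version attaches a vector of membership degrees instead of a single one, with the operations lifted componentwise. The parameters and objects below are invented for illustration:

```python
def fs_complement(F):
    """Complement: 1 - membership, parameter-wise and object-wise."""
    return {e: {x: 1 - mu for x, mu in f.items()} for e, f in F.items()}

def fs_union(F, G):
    """Union (the 'OR'-flavored operation): pointwise maximum."""
    return {e: {x: max(F[e][x], G[e][x]) for x in F[e]} for e in F}

def fs_intersection(F, G):
    """Intersection (the 'AND'-flavored operation): pointwise minimum."""
    return {e: {x: min(F[e][x], G[e][x]) for x in F[e]} for e in F}

# Two fuzzy soft sets over objects {o1, o2} and parameters {cheap, modern}.
F = {"cheap": {"o1": 0.25, "o2": 0.5}, "modern": {"o1": 1.0, "o2": 0.75}}
G = {"cheap": {"o1": 0.5, "o2": 0.25}, "modern": {"o1": 0.0, "o2": 1.0}}

# One of De Morgan's laws: (F ∪ G)^c equals F^c ∩ G^c.
lhs = fs_complement(fs_union(F, G))
rhs = fs_intersection(fs_complement(F), fs_complement(G))
```

The law holds because min(1 - a, 1 - b) = 1 - max(a, b) pointwise.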

10.
We give an analytic characterization of a large-time “downside risk” probability associated with an investor’s wealth. We assume that risky securities in our market model are affected by “hidden” economic factors, which evolve as a finite-state Markov chain. We formalize and prove a duality relation between downside risk minimization and the related risk-sensitive optimization. The proof is based on an analysis of an ergodic-type Hamilton–Jacobi–Bellman equation with large (exponentially growing) drift.

11.
Our goal here is to present various examples of situations where a “large” investor (i.e. an investor whose “size” challenges the liquidity or the depth of the market) sees his long-term guesses on some important financial parameters instantaneously confirmed by the market dynamics as a consequence of his trading strategy, itself based upon his guesses. These examples are worked out in the context of a model (i.e. a quantitative framework) which attempts to provide a rigorous basis for the qualitative intuitions of many practitioners. Our results may be viewed as some kind of reverse Black–Scholes paradigm where modifications of option prices affect today's real volatility.

12.
We study a competition of product customization between two branded firms using a game-theoretic approach. Firms produce products with two attributes: one indicates a characteristic concerning the “function” or “design” of a product, and the other indicates the “taste” or “flavor” of the product, which reflects consumers’ brand/taste preferences. The two branded firms have their own specific core products, and our customization is defined as a continuous extension of their product line from the core product only along the “function” attribute. In particular, we allow asymmetric positions of core products, which may create a position advantage/disadvantage between firms. We suppose that consumers incur selection costs in finding their most favorable item among a rich variety of products, and that firms incur customizing costs in extending their product lines. We first show that in equilibrium, branded firms should fundamentally adopt customizations that cover the center space of the market as far as possible, regardless of the position of the competitor’s core product. The position of the core product therefore contributes to the creation of a competitive advantage: when one firm’s core product is located closer to the center of the market than the competitor’s, its customization can always cover a larger range of the center space of the market, while keeping its degree of customization smaller than the competitor’s. Furthermore, we show some implications of unit-cost improvement: in the short run, a firm is better off concentrating on improving the unit selection cost rather than the unit customizing cost. In contrast, in the long run, both firms can benefit from improvement of the unit customizing cost.

13.
This paper examines the hypothetical retirement behavior of defined contribution (DC) pension plan participants. Using a Monte Carlo simulation approach, we compare and discuss three retirement decision models: the two-thirds replacement ratio benchmark model, the option-value of continued work model, and a newly developed “one-year” retirement decision model. Unlike defined benefit (DB) pension plans, where economic incentives create spikes in retirement at particular ages, all three retirement decision models suggest that the retirement ages of DC participants are much more smoothly distributed over a wide range of ages. We find that the one-year model possesses several advantages over the other two models when representing the theoretical retirement choice of a DC pension plan participant. First, its underlying theory of retirement decision-making is more feasible given the distinct features and pension drivers of a DC plan. Second, its specifications produce a more logical relationship between an individual’s decision to retire and his/her age and accumulated retirement wealth. Lastly, although the one-year model is less complex than the option-value model, since the DC participant’s scope is only one year, the retirement decision is optimal over all future projected years if projections are made using reasonable financial assumptions.

14.
We expose a rather simple and direct approach to the structure theory of prime PI-rings (“Posner’s theorem”), based on fundamental properties of the extended centroid of a prime ring.

15.
Julia E. Bergner, Topology 2007, 46(4): 397–436
Given any model category, or more generally any category with weak equivalences, its simplicial localization is a simplicial category which can rightfully be called the “homotopy theory” of the model category. There is a model category structure on the category of simplicial categories, so taking its simplicial localization yields a “homotopy theory of homotopy theories”. In this paper we show that there are two different categories of diagrams of simplicial sets, each equipped with an appropriate definition of weak equivalence, such that the resulting homotopy theories are each equivalent to the homotopy theory arising from the model category structure on simplicial categories. Thus, any of these three categories with the respective weak equivalences could be considered a model for the homotopy theory of homotopy theories. One of them in particular, Rezk’s complete Segal space model category structure on the category of simplicial spaces, is much more convenient from the perspective of making calculations and therefore obtaining information about a given homotopy theory.

16.
New organizational forms are being conceived and proposed continually, but because many such organizations remain conceptual (and hence have no basis for empirical assessment), their putative advantages over extant organizational forms are difficult to evaluate. Moreover, many such organizational forms are proposed without solid grounding in our canon of organization theory; hence understanding their various theoretical properties in terms of our familiar, archetypal forms remains difficult. This poses problems for the practitioner and researcher alike. The Edge represents one such recent, conceptual organizational form, which lacks readily observable examples in practice and whose conceptualization is not well rooted in established organization theory. Nonetheless, proponents of this new form argue its putative advantages over existing counterparts, with an emphasis upon complex, dynamic, equivocal environmental contexts; hence the appeal of this form in today’s organizational environment. The research described in this article employs methods and tools of computational experimentation to explore empirically the behavior and performance of Edge organizations, using the predominant and classic Hierarchy as a basis of comparison. We root our models of these competing forms firmly in Organization Theory, and we make our representations of organizational assumptions explicit via semi-formal models, which can be shared with other researchers. The results reveal insightful dynamic patterns and differential performance capabilities of Hierarchy and Edge organizations, and they elucidate theoretical ramifications for continued research along these lines, along with results amenable to practical application. This work also highlights the powerful role that computational experimentation can play as a complementary, bridging research method. Mark Nissen is Associate Professor of Information Systems and Management at the Naval Postgraduate School.
His research focuses on dynamic knowledge and organization. He views work, technology and organization as an integrated design problem, and has concentrated recently on the phenomenology of knowledge flows. Mark’s publications span information systems, project management, organization studies, knowledge management and related fields. In 2000 he received the Menneken Faculty Award for Excellence in Scientific Research, the top research award available to faculty at the Naval Postgraduate School. In 2001 he received a prestigious Young Investigator Grant Award from the Office of Naval Research for work on knowledge-flow theory. In 2002–2003 he was Visiting Professor at Stanford, integrating knowledge-flow theory into agent-based tools for computational modeling. Before his information systems doctoral work at the University of Southern California, he acquired over a dozen years’ management experience in the aerospace and electronics industries.

17.
This study examines joint decisions regarding risky asset allocation and the consumption rate for a representative agent in the presence of background risk and insurance markets. Contrary to the conclusion of the “mutual fund separation theorem”, we show that the optimal risky asset mix will reflect an agent’s risk attitude as long as background risk is not independent of investment risk. This result can, however, be used to solve the “risky-asset allocation puzzle”. We also show that optimal insurance to shift background risk is determined by establishing a hedging portfolio against investment risk, and is an arrangement that maintains the balance between the growth and volatility of expected consumption. Because the optimal insurance we obtain generally leads to a smoother consumption path, it may plausibly explain the “equity premium puzzle” in the financial literature.

18.
The paper discusses the tension which occurred between the notions of set (with measure) and (trial-) sequence (or—to a certain degree—between nondenumerable and denumerable sets) when used in the foundations of probability theory around 1920. The main mathematical point was the logical need for measures in order to describe general nondiscrete distributions, which had been tentatively introduced before (1919) based on von Mises’s notion of the “Kollektiv.” In the background there was a tension between the standpoints of pure mathematics and “real world probability” (in the words of J.L. Doob) at the time. The discussion and publication in English translation (in Appendix) of two critical letters of November 1919 by the “pure” mathematician Felix Hausdorff to the engineer and applied mathematician Richard von Mises compose about one third of the paper. The article also investigates von Mises’s ill-conceived effort to adopt measures and his misinterpretation of an influential book of Constantin Carathéodory. A short and sketchy look at the subsequent development of the standpoints of the pure and the applied mathematician—here represented by Hausdorff and von Mises—in the probability theory of the 1920s and 1930s concludes the paper.

19.
We investigate two very common pricing schemes for a Stackelberg-dominant retailer: percentage markup and dollar markup. We show that when a dominant retailer switches from dollar to percentage markup, the channel’s “overall pie” and the retailer’s “pie piece” are both enlarged. In contrast, the manufacturer will be forced to levy a lower wholesale price, thus receiving a smaller pie piece despite the larger pie. The preceding statements hold regardless of whether demand is deterministic or stochastic. However, the effects of switching to percentage markup on the retail price and sales volume depend not only on whether demand is stochastic, but also on the assumed shape of the demand curve and on whether demand stochasticity is “additive” or “multiplicative”. Besides presenting a comprehensive set of answers on the comparative performance of dollar and percentage markups, our results also highlight the often-overlooked importance of choosing between (i) dollar and percentage markup, and (ii) the formats of the assumed stochasticity and demand curves.
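The dollar-versus-percentage comparison can be reproduced numerically for the deterministic case. The linear demand curve q = a - b*p and the parameter values below are illustrative assumptions, not the paper's specification; the retailer leads by committing to a markup rule, and the manufacturer responds with its wholesale price:

```python
def mfg_best_wholesale(price_of_w, c=10.0, a=100.0, b=1.0, grid=400):
    """Manufacturer best response: grid-search the wholesale price w
    maximizing (w - c) * q, given the retailer's markup rule."""
    best_w, best_pi = c, 0.0
    for i in range(grid + 1):
        w = c + (a / b - c) * i / grid
        q = max(a - b * price_of_w(w), 0.0)
        if (w - c) * q > best_pi:
            best_w, best_pi = w, (w - c) * q
    return best_w

def retailer_outcome(markup_rule, param_grid, c=10.0, a=100.0, b=1.0):
    """Dominant retailer: search its markup parameter while anticipating
    the manufacturer's response.  Returns (retailer, manufacturer,
    channel) profits at the retailer's best choice."""
    best = (0.0, 0.0, 0.0)
    for theta in param_grid:
        w = mfg_best_wholesale(lambda x: markup_rule(x, theta), c, a, b)
        p = markup_rule(w, theta)
        q = max(a - b * p, 0.0)
        r_pi, m_pi = (p - w) * q, (w - c) * q
        if r_pi > best[0]:
            best = (r_pi, m_pi, r_pi + m_pi)
    return best

dollar = retailer_outcome(lambda w, m: w + m, [m / 2 for m in range(0, 180)])
pct = retailer_outcome(lambda w, r: w * (1 + r), [r / 20 for r in range(0, 120)])
```

The comparison matches the abstract's claim: the percentage scheme enlarges the retailer's piece and the overall pie while shrinking the manufacturer's piece.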

20.
In our version of Watts and Strogatz’s small world model, space is a d-dimensional torus in which each individual has, in addition, exactly one long-range neighbor chosen at random from the grid. This modification is natural if one thinks of a town where an individual’s interactions at school, at work, or in social situations introduce long-range connections. However, this change dramatically alters the behavior of the contact process, producing two phase transitions. We establish this by relating the small world to an infinite “big world” graph where the contact process behaves similarly to the contact process on a tree. We then consider the contact process on a slightly modified small world model in order to show that its behavior is decidedly different from that of the contact process on a tree.
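Constructing the underlying graph is straightforward; here is a sketch for the d = 1 case (a ring), where each site keeps its torus neighbors and additionally draws exactly one long-range neighbor uniformly at random. The function name and edge symmetrization are illustrative choices, not taken from the paper:

```python
import random

def small_world_torus(n, seed=1):
    """Build the modified small-world graph on a 1-D torus of n sites:
    each site keeps its two ring neighbors and chooses exactly one
    long-range neighbor uniformly at random among the other sites.
    Edges are treated as undirected, so a site may also receive
    incoming long-range links from others."""
    rng = random.Random(seed)
    nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        j = rng.randrange(n)
        while j == i:           # no self-loops
            j = rng.randrange(n)
        nbrs[i].add(j)
        nbrs[j].add(i)          # symmetrize the long-range link
    return nbrs

g = small_world_torus(50)
```

On this graph an epidemic (contact process) can jump across the torus through the long-range links, which is what produces the tree-like "big world" behavior the abstract describes.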
