Related Articles (20 results)
1.
Matching product architecture with supply chain design
Product architecture is typically established in the early stages of the product development (PD) cycle. Depending on the type of architecture selected, product design, manufacturing processes, and ultimately supply chain configuration are all significantly affected. Therefore, it is important to integrate product architecture decisions with manufacturing and supply chain decisions during the early stages of product development. In this paper, we present a multi-objective optimization framework for matching product architecture strategy to supply chain design. In contrast to the existing operations management literature, we incorporate the compatibility between the supply chain partners into our model to ensure the long-term viability of the supply chain. Since much of the supplier-related information may be highly subjective during the early stages of PD, we use fuzzy logic to compute the compatibility index of a supplier. The optimization model is formulated as a weighted goal programming (GP) model with two objectives: minimization of total supply chain costs and maximization of the total supply chain compatibility index. The GP model is solved using a genetic algorithm. We present case examples for two different products to demonstrate the model's efficacy and discuss several managerial implications that emerged from this study.
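As a rough illustration of how a weighted goal programming objective can be searched with a genetic algorithm, the sketch below scores supplier selections against hypothetical cost and compatibility goals; all supplier data, goal levels, and weights are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical data: for each of 3 components, candidate suppliers with
# (unit cost, fuzzy compatibility index in [0, 1]).
suppliers = [
    [(12.0, 0.8), (10.0, 0.6), (14.0, 0.9)],
    [(20.0, 0.7), (18.0, 0.5)],
    [(7.0, 0.9), (9.0, 0.95), (8.0, 0.4)],
]
cost_goal, compat_goal = 35.0, 2.4      # aspiration levels (assumed)
w_cost, w_compat = 0.6, 0.4             # goal weights (assumed)

def gp_fitness(chromosome):
    """Weighted-GP objective: penalize deviation above the cost goal
    and below the compatibility goal (smaller is better)."""
    cost = sum(suppliers[i][g][0] for i, g in enumerate(chromosome))
    compat = sum(suppliers[i][g][1] for i, g in enumerate(chromosome))
    return w_cost * max(0.0, cost - cost_goal) + w_compat * max(0.0, compat_goal - compat)

def genetic_search(pop_size=30, generations=100, mutate_p=0.2):
    pop = [[random.randrange(len(s)) for s in suppliers] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=gp_fitness)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(suppliers))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mutate_p:      # mutation: reassign one component's supplier
                i = random.randrange(len(suppliers))
                child[i] = random.randrange(len(suppliers[i]))
            children.append(child)
        pop = parents + children
    return min(pop, key=gp_fitness)

best = genetic_search()
print("supplier choice per component:", best, "fitness:", round(gp_fitness(best), 3))
```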

2.
3.
In many manufacturing operations, a system may exhibit dynamic behavior before reaching a steady-state level. This is most frequently associated with a transition in production, such as a product style change or a grade change. During the transition phase, the output does not respond instantaneously to a change in input; however, some information about past transition-phase performance is typically available. We develop an adjustment policy for transition periods based on a Bayesian forecast that incorporates this prior information. We present computational results showing average process improvements under various system and noise disturbance conditions.
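A minimal sketch of the underlying idea, assuming a conjugate normal model: a prior belief about the transient deviation, built from past transitions, is updated with each new observation, and the adjustment compensates the posterior mean. The numbers are hypothetical and the policy shown is only an illustration, not the paper's exact adjustment rule.

```python
# Conjugate normal update: prior belief about the transient process offset,
# refined by each new observation, then used to set the next adjustment.
prior_mean, prior_var = 5.0, 4.0     # from past transition-phase data (assumed)
noise_var = 1.0                      # observation noise variance (assumed)
target = 0.0

mean, var = prior_mean, prior_var
for y in [6.2, 4.1, 3.3, 2.8]:       # deviations observed after each run (hypothetical)
    # posterior precision = prior precision + observation precision
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    mean = post_var * (mean / var + y / noise_var)
    var = post_var
    adjustment = target - mean       # compensate the forecast deviation
    print(f"posterior mean = {mean:.2f}, adjustment = {adjustment:.2f}")
```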

4.
Most previous studies on the Emergency Evacuation Problem (EEP) assume that the lengths and widths of the circulation spaces are fixed. This assumption holds only when evaluating facilities that are already built. When designing the network for the first time, however, the size of the circulation space is not known to the designer; in fact, it is one of several design parameters. After the routes have been established, the next logical question is whether the circulation spaces can accommodate the traffic both in normal circulation and in an emergency. The problem of designing emergency evacuation networks is very complex, and only recently have queueing networks been used to model it. Recent advances include state-dependent queueing network models that incorporate the mean value analysis algorithm to capture the non-linearities in the problem. We extend these models by embedding the mean value analysis algorithm within Powell's derivative-free unconstrained optimization algorithm. The effect of varying circulation widths on throughput is discussed, and a methodology for solving the resource allocation problem is proposed and demonstrated on several examples. The computational experience with the new methodology illustrates its usefulness in network design problems.
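The sketch below shows the general structure of such an approach: an exact Mean Value Analysis evaluation of a closed product-form network wrapped inside SciPy's derivative-free Powell optimizer over corridor widths. The width-to-service-time relationship, the width budget, and all numbers are toy assumptions, and the state-dependent refinements of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

visits = np.array([1.0, 0.7, 0.3])     # visit ratios of three circulation segments (assumed)
N = 40                                  # evacuees circulating in the closed network
TOTAL_WIDTH = 6.0                       # total corridor width available (assumed budget)

def mva_throughput(service_times, n_customers):
    """Exact Mean Value Analysis for a closed product-form queueing network."""
    queue = np.zeros(len(service_times))
    x = 0.0
    for n in range(1, n_customers + 1):
        resid = service_times * (1.0 + queue)      # residence time per visit at each node
        x = n / float(np.dot(visits, resid))       # network throughput with n customers
        queue = x * visits * resid                 # mean queue lengths
    return x

def objective(widths):
    widths = np.clip(widths, 0.5, 5.0)             # keep widths in a sensible range
    service_times = 2.0 / widths                   # toy model: wider segment -> faster service
    penalty = 100.0 * max(0.0, widths.sum() - TOTAL_WIDTH)   # soft width budget
    return -mva_throughput(service_times, N) + penalty

res = minimize(objective, x0=np.array([2.0, 2.0, 2.0]), method="Powell")
best = np.clip(res.x, 0.5, 5.0)
print("widths:", best.round(2), "throughput:", round(mva_throughput(2.0 / best, N), 3))
```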

5.
We consider a make-to-stock production system with one product type, a dynamic service policy, and delay-sensitive customers. To balance the waiting cost of customers and the holding cost of products, a dynamic production policy is adopted. If no customer is waiting in the system, instead of shutting down, the system operates at a low production rate until a certain inventory threshold is reached. If the inventory is empty and a new customer arrives, the system switches to a high production rate, where the switching time is assumed to be exponentially distributed. Potential customers arrive according to a Poisson process. They are strategic in the sense that they decide whether to stay for a product or leave without purchasing on the basis of their utility value and on whether the number of products in the system is observable to them. The strategic behavior is explored, and a Stackelberg game between the production manager and the customers is formulated, with the former as the game leader. We find that the optimal inventory threshold minimizing the cost function can be obtained by a search algorithm. Numerical results demonstrate that the expected cost in the observable case is no greater than that in the unobservable case; if a customer's delay sensitivity is relatively small, the two cases coincide. As delay sensitivity increases, the optimal inventory threshold may be positive or zero, and a demarcation line is therefore depicted to determine when a make-to-stock policy is advantageous to the manager.

6.
We consider the problem of scheduling orders for multiple different product types in an environment with m dedicated machines in parallel. The objective is to minimize the total weighted completion time. Each product type is produced by one and only one of the m dedicated machines; that is, each machine is dedicated to a specific product type. Each order has a weight, may also have a release date, and asks for certain amounts of various product types. The different products for an order can be produced concurrently, and preemptions are not allowed. Even when all orders are available at time 0, the problem has been shown to be strongly NP-hard for any fixed number (≥2) of machines. This paper focuses on the design and analysis of efficient heuristics for the case without release dates; occasionally, however, we extend our results to the case with release dates. The heuristics considered include some that have already been proposed in the literature as well as several new ones: various static and dynamic priority rules as well as two more sophisticated LP-based algorithms. We analyze the performance bounds of the priority rules and of the algorithms and also present an in-depth comparative analysis of the various rules and algorithms. The conclusions from this empirical analysis provide insights into the trade-offs with regard to solution quality, speed, and memory space.
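To make the setting concrete, here is a small sketch of one static priority rule of the kind studied: a WSPT-style ratio of order weight to its largest (bottleneck) workload, applied as a single sequence on all dedicated machines. The order data are hypothetical, and this is only one simple rule, not the paper's full rule set or its LP-based algorithms.

```python
# Orders: weight and processing times on each of m dedicated machines
# (a zero means the order does not need that product type). Data are hypothetical.
orders = {
    "A": {"w": 3, "p": [4, 0, 2]},
    "B": {"w": 1, "p": [1, 5, 0]},
    "C": {"w": 2, "p": [3, 2, 6]},
}
m = 3

def total_weighted_completion(sequence):
    finish = [0] * m                       # machine-ready times
    twc = 0
    for o in sequence:
        parts = orders[o]["p"]
        done = [finish[k] + parts[k] if parts[k] else 0 for k in range(m)]
        for k in range(m):
            if parts[k]:
                finish[k] += parts[k]
        completion = max(done)             # an order finishes when its last part does
        twc += orders[o]["w"] * completion
    return twc

# Static WSPT-style rule: largest weight-to-bottleneck-workload ratio first.
sequence = sorted(orders, key=lambda o: -orders[o]["w"] / max(orders[o]["p"]))
print("sequence:", sequence, "total weighted completion time:", total_weighted_completion(sequence))
```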

7.
To date, efforts to understand virtual teaming have been largely anecdotal and atheoretical. Drawing from the extant research in the groups domain, we therefore attempt to ground the definition of a virtual team in well-established group-level constructs, and we design a simulation study to investigate the impact of different virtual team characteristics on team performance. Essentially, we argue that the virtual team is defined by three key characteristics: the virtual team context, the virtual team composition, and the virtual team structure. Using the VDT computational discrete event simulation model as our experimental platform, we simulated different virtual team models and examined their impact on various team performance dimensions. We found that virtual team characteristics have different effects on different aspects of team performance. The virtual-context team had a lower rework volume but a higher coordination volume and a longer project duration than the virtual-composition team. Interestingly, we also found that the virtual-structure team performed better than the software development team baseline model on all aspects of team performance. Based on these results, we propose strategies to improve performance in different types of virtual teams: (1) increasing the ease of communication and the availability of routines in the virtual-context team; (2) clarifying role expectations and fostering a team culture in the virtual-composition team; and (3) implementing a lateral structure in the virtual team. Our results also suggest that firms should consider situational demands, specifically tolerance for errors and coordination volume, when designing virtual teams.

8.
We consider a retailer's decision to develop a store brand (SB) version of a national brand (NB) and the role that its positioning strategy plays in appropriating the supply chain profit. Since the retailer's business can be regarded as selling the shelf space at its disposal to NB manufacturers, we formulate a game-theoretic model of a single-retailer, single-manufacturer supply chain in which the retailer can decide whether to launch its own SB product and sells scarce shelf space to a competing NB in a consumer-goods category. The most likely equilibrium outcome is that the available selling quantity of each brand is constrained by the shelf space allocated to its products and both brands coexist in the category. In this paper, we conceptualize SB positioning as involving both product quality and product features. Our analysis shows that when the NB cross-price effect is not too large, the retailer should position its SB's quality closer to the NB, place more emphasis on its SB's feature differences when facing a weaker NB, and less emphasis when facing a stronger NB. Our results stress the importance of SB positioning under shelf-space allocation in maximizing the retailer's value appropriation across the supply chain.

9.
Effects of Mis-Specification in Bivariate Extreme Value Problems
The need to incorporate the structure of complex problems in extreme value analyses, and the requirement to exploit all the limited information that is available, has led to the increased use of advanced dependence models. When they are appropriate, these dependence models can yield substantial benefits over simpler univariate extreme value methods. Here we explore some inference problems for the marginal and conditional distributions caused by model mis-specification. We find distinct differences in estimation characteristics depending on whether the dependence structure is asymptotically dependent or asymptotically independent, and that conditional models can be substantially improved if the variables are standardized to have common marginal distributions.

10.
In the last several years, the modeling of emergency vehicle location has focused on the temporal availability of the vehicles: vehicles are not available for service when they are engaged in earlier calls. To incorporate this dynamic aspect into facility location decisions, models have been developed that provide additional levels of coverage. In this paper, two new models are derived from the probabilistic location set covering problem. These models allow examination of the relationships between the number of facilities being located, the reliability that a vehicle will be available, and a coverage standard. In addition, they incorporate sector-specific estimates of the availability of the vehicles. Solving these models reveals that the use of sector-specific estimates leads to facility locations that are distributed more widely over the region to be served.
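A minimal sketch of a probabilistic set-covering formulation of this kind is given below, using the standard reliability requirement that enough vehicles cover each demand node so that at least one is free with probability alpha given a sector-specific busy fraction. The node data, coverage sets, and busy fractions are hypothetical, the model shown is a generic textbook-style formulation rather than the paper's two specific models, and it assumes the PuLP package is available.

```python
import math
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Hypothetical 4-node example: cover[i] lists candidate sites within the response-time
# standard of demand node i; q[i] is the sector-specific busy fraction of a vehicle.
cover = {0: [0, 1], 1: [1, 2], 2: [0, 2, 3], 3: [3]}
q = {0: 0.3, 1: 0.4, 2: 0.3, 3: 0.5}
alpha = 0.95
sites = range(4)

# Minimum number of vehicles covering node i so that at least one is available
# with probability alpha: smallest b with 1 - q_i**b >= alpha.
b = {i: math.ceil(math.log(1 - alpha) / math.log(q[i])) for i in cover}

prob = LpProblem("probabilistic_LSCP", LpMinimize)
x = {j: LpVariable(f"x_{j}", lowBound=0, cat="Integer") for j in sites}  # vehicles at site j
prob += lpSum(x[j] for j in sites)                                       # minimize fleet size
for i, sites_i in cover.items():
    prob += lpSum(x[j] for j in sites_i) >= b[i]                         # reliability coverage
prob.solve()
print({j: int(value(x[j])) for j in sites}, "total vehicles:", int(value(prob.objective)))
```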

11.
Skin detection is an important step for a wide range of research related to computer vision and image processing and several methods have already been proposed to solve this problem. However, most of these methods suffer from accuracy and reliability problems when they are applied to a variety of images obtained under different conditions. Performance degrades further when fewer training data are available. Besides these issues, some methods require long training times and a significant amount of parameter tuning. Furthermore, most state-of-the-art methods incorporate one or more thresholds, and it is difficult to determine accurate threshold settings to obtain desirable performance. These problems arise mostly because the available training data for skin detection are imprecise and incomplete, which leads to uncertainty in classification. This requires a robust fusion framework to combine available information sources with some degree of certainty. This paper addresses these issues by proposing a fusion-based method termed Dempster–Shafer-based Skin Detection (DSSD). This method uses six prominent skin detection criteria as sources of information (SoI), quantifies their reliabilities (confidences), and then combines their confidences based on the Dempster–Shafer Theory (DST) of evidence. We use the DST as it offers a powerful and flexible framework for representing and handling uncertainties in available information and thus helps to overcome the limitations of the current state-of-the-art methods. We have verified this method on a large dataset containing a variety of images, and achieved a 90.17% correct detection rate (CDR). We also demonstrate how DSSD can be used when very little training data are available, achieving a CDR as high as 87.47% while the best result achieved by a Bayesian classifier is only 68.81% on the same dataset. Finally, a generalized DSSD (GDSSD) is proposed achieving 91.12% CDR.
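The core fusion step is Dempster's rule of combination. The sketch below combines two hypothetical sources over the frame {skin, nonskin}, with each source's residual uncertainty assigned to the whole frame; the mass values are invented and this is only the combination rule, not the full six-source DSSD pipeline.

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {skin, nonskin}; keys are 'skin', 'nonskin',
    and 'either' (the whole frame, i.e. the mass left on 'don't know')."""
    combos = {}
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == "either":
                inter = b
            elif b == "either":
                inter = a
            elif a == b:
                inter = a
            else:
                inter = None                          # empty intersection -> conflict
            key = inter if inter is not None else "conflict"
            combos[key] = combos.get(key, 0.0) + pa * pb
    k = combos.pop("conflict", 0.0)                   # total conflicting mass
    return {key: val / (1.0 - k) for key, val in combos.items()}

# Two hypothetical sources of information with their confidence-weighted masses.
m_color = {"skin": 0.6, "nonskin": 0.1, "either": 0.3}
m_texture = {"skin": 0.5, "nonskin": 0.2, "either": 0.3}
fused = dempster_combine(m_color, m_texture)
print(fused)   # classify as skin if the fused belief in 'skin' exceeds that in 'nonskin'
```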

12.
The success of a company increasingly depends on timely information (internal or external) being available to the right person at the right time for crucial managerial decision-making. Achieving this "right time/right place" duet depends directly on database performance. A database system is a core component supporting modern business systems such as enterprise resource planning (ERP) systems, which integrate and support all enterprise processes, including product design and engineering, manufacturing, and other business functions, to achieve the highest efficiency and effectiveness of operations. We develop, and demonstrate through a proof-of-concept case study, a new "query-driven" heuristic for database design that seeks to identify database structures that perform robustly in dynamic settings with dynamic queries. Our focus is the design of efficient structures to process read-only queries in complex environments. Our heuristic begins with a detailed analysis of the relationships between diverse queries and the performance of different database structures. These relationships are then used in a series of steps that identify "robust" database structures that maintain high performance levels for a wide range of query patterns. We conjecture that our heuristic can facilitate efficient operations and effective decision-making for companies in today's dynamic environment.

13.
This paper investigates the research and development (R&D) accumulation and pricing strategies of two firms competing for consumer demand in a dynamic framework. A firm's R&D is production-cost-reducing and can benefit from part of the competitor's R&D stock without payment. We consider decisions in a game characterized by Nash equilibrium. In this dynamic game, a player's action depends on whether the competitor's current R&D stock is observable. If the competitor's current R&D stock is not observable, or observable only after a certain time lag, a player's action can be based solely on the information available in the current period t (open-loop strategy). In the converse case, it can also include information on the competitor's reaction to a change in the current value of the state vector (closed-loop strategy), which allows strategic interaction to take place throughout the game. Given the cumulative nature of R&D activities, a primary goal of this paper is to determine whether, regardless of the observability of the competitor's current R&D stock, free R&D spillovers generate a lower level of scientific knowledge than R&D appropriability. A second objective is to determine how the observability of the rival's current R&D stock affects a firm's R&D and pricing decisions and payoffs under imperfect R&D appropriability.

14.
In regression analysis, when no previous information about the statistical model is available, non-parametric estimation methods are very useful, since they impose few requirements on the specification of the model. However, when such information does exist, these methods usually fail to incorporate it. In this paper, we propose a non-parametric regression technique that accounts for information about the underlying statistical model when this information is introduced through a known function. We also provide some theoretical properties and examples of this estimator. © 1998 John Wiley & Sons, Ltd.
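To illustrate one simple way such prior information could enter a kernel estimator, the sketch below smooths only the residuals from an assumed known trend function and adds the trend back. This guided-residual construction, the trend function, and the simulated data are all illustrative assumptions on my part and are not necessarily the estimator proposed in the paper.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Plain Nadaraya-Watson kernel regression with a Gaussian kernel."""
    diffs = (x_eval[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)
    return (weights @ y_train) / weights.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 6, 120))
y = np.sin(x) + 0.3 * x + rng.normal(0, 0.2, x.size)   # simulated data

g = lambda t: 0.3 * t               # prior information: a known trend component (assumed)
x_grid = np.linspace(0, 6, 50)

# Guided estimate: smooth only the part the known function does not explain,
# then add the known function back; 'plain' is kept for comparison.
guided = g(x_grid) + nadaraya_watson(x, y - g(x), x_grid, bandwidth=0.4)
plain = nadaraya_watson(x, y, x_grid, bandwidth=0.4)
print(np.round(guided[:5], 3), np.round(plain[:5], 3))
```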

15.
Many statistical models, e.g., regression models, can be viewed as conditional moment restrictions when distributional assumptions on the error term are not imposed. For such models, several estimators that achieve the semiparametric efficiency bound have been proposed. In many studies, however, auxiliary information is available in the form of unconditional moment restrictions; we also consider the presence of missing responses. We propose the combined empirical likelihood (CEL) estimator to incorporate such auxiliary information and improve the estimation efficiency of conditional moment restriction models. We show that, when the responses are strongly ignorable missing at random, the CEL estimator achieves better efficiency than previous estimators owing to the utilization of the auxiliary information. Based on the asymptotic properties of the CEL estimator, we also develop Wilks-type tests and corresponding confidence regions for the model parameter and the mean response. Since kernel smoothing is used, the CEL method may have difficulty with high-dimensional covariates. In such situations, we propose an instrumental variable-based empirical likelihood (IVEL) method to handle the problem. The merits of the CEL and IVEL methods are further illustrated through simulation studies.

16.
The goal of this paper is to investigate how uncertainties in demand and production should be incorporated into manufacturing system design problems. We examine two problems in manufacturing system design: the resource allocation problem and the product grouping problem. In the resource allocation problem, we consider how to cope with uncertainties when utilizing two types of resources: actual processing capacity and stored capacity (inventory). A closed-form solution of the optimal allocation scheme for each type of capacity is developed, and its performance is compared to that of the conventional scheme, in which capacity allocation and inventory control decisions are made sequentially. In the product grouping problem, we consider how to design production lines when each line is dedicated to a certain set of products. We formulate a mathematical program in which we simultaneously determine the number of production lines and the composition of each line. Two heuristics are developed for the problem.

17.
Network revenue management is concerned with managing demand for products that require inventory from one or several resources by controlling product availability and/or prices in order to maximize expected revenues subject to the available resource capacities. One can tackle this problem by decomposing it into resource-level subproblems that can be solved efficiently, for example by dynamic programming. We propose a new dynamic fare proration method designed specifically with large-scale applications in mind. It decomposes the network problem by fare proration and solves the resource-level dynamic programs simultaneously, using simple, endogenously obtained dynamic marginal capacity value estimates to update the fare prorations over time. An extensive numerical simulation study demonstrates that the method yields tightened upper bounds on the optimal expected revenue, and that the obtained policies are very effective with regard to achieved revenues and required runtime.
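For context, the resource-level subproblem in such decompositions is typically a single-resource dynamic program of the following form: at most one request per period, and a request is accepted only if its (prorated) fare exceeds the marginal value of capacity. The fares, arrival probabilities, horizon, and capacity below are hypothetical, and the proration mechanism itself is not reproduced.

```python
# Single-resource dynamic program: V[t][x] = optimal expected revenue-to-go with
# x units of capacity left at the start of period t.  At most one request arrives
# per period; class j arrives with probability p[j] and pays fares[j].
fares = [500.0, 300.0, 150.0]          # hypothetical fare classes
p = [0.1, 0.25, 0.4]                   # per-period arrival probabilities (sum <= 1)
T, C = 100, 20                         # number of periods, initial capacity

V = [[0.0] * (C + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for x in range(C + 1):
        cont = V[t + 1][x]             # value of doing nothing this period
        val = cont
        for j, f in enumerate(fares):
            if x > 0:
                # accept class j only if the fare beats the marginal value of capacity
                gain = max(f + V[t + 1][x - 1], cont)
            else:
                gain = cont
            val += p[j] * (gain - cont)
        V[t][x] = val
print("optimal expected revenue:", round(V[0][C], 1))
```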

18.
In the presence of huge losses from unsuccessful new product introductions, companies often seek forecast information from various sources. As this information can be costly, companies need to determine how much effort to put into acquiring it. Such a decision is strategically important because an insufficient investment may cause a lack of knowledge of product profitability, which in turn may lead to introducing a loss-making product or scrapping a potentially profitable one. In this paper, we use decision-analytical models to study information acquisition for new product introduction. Specifically, we consider a decision maker (DM) who, prior to introducing a new product, can purchase forecasts and use the information to update his knowledge of the market demand. We analyze and compare two approaches: the first is to determine the total amount of forecasts to purchase all at once; the second is to purchase forecasts sequentially and, based on the purchased forecasts, determine whether they are informative enough for making an introduction decision or an additional forecast is needed. We present dynamic programming formulations for both approaches and derive the optimal policies. Via a numerical study, we find that the second approach, purchasing forecasts sequentially, can generate a significant profit advantage over the first when (1) the cost of acquiring forecasts is neither too high nor too low, (2) the precision of the forecasts is of a moderate level, and (3) the profit margin of the new product is small.
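A minimal sketch of the sequential idea follows: a conjugate normal belief about demand is updated one purchased forecast at a time, and purchasing stops under a simple myopic rule (stop once the belief is decisive or the cost of another forecast outweighs a rough proxy for its value). All parameters are hypothetical, and the stopping rule is an illustrative heuristic, not the optimal policy derived in the paper.

```python
import random

# Belief about demand: Normal(mu, var).  Each purchased forecast is an unbiased
# but noisy observation of demand.  All numbers are hypothetical.
mu, var = 100.0, 400.0            # prior mean and variance of demand
noise_var = 225.0                 # forecast noise variance
margin, fixed_cost = 5.0, 550.0   # profit = margin * demand - fixed_cost
forecast_cost = 30.0
breakeven = fixed_cost / margin

random.seed(1)
true_demand = 95.0
n_bought = 0
while True:
    std = var ** 0.5
    # Illustrative myopic stopping rule: stop once the breakeven point lies far
    # outside the credible range, or another forecast no longer looks worth its cost.
    if abs(mu - breakeven) > 2.0 * std or forecast_cost > margin * std:
        break
    y = random.gauss(true_demand, noise_var ** 0.5)        # buy one more forecast
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)          # conjugate normal update
    mu = post_var * (mu / var + y / noise_var)
    var = post_var
    n_bought += 1

decision = "introduce" if margin * mu - fixed_cost > 0 else "scrap"
print(f"forecasts bought: {n_bought}, posterior mean: {mu:.1f}, decision: {decision}")
```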

19.
In many service industries, firms offer a portfolio of similar products based on different types of resources. Mismatches between demand and capacity can therefore often be managed by using product upgrades. Clearly, it is desirable to consider this possibility in the revenue management systems that are used to decide on the acceptance of requests. To incorporate upgrades, we build upon different dynamic programming formulations from the literature and gain several new structural insights that facilitate the control process under certain conditions. We then propose two dynamic programming decomposition approaches that extend the traditional decomposition for capacity control by simultaneously considering upgrades as well as capacity control decisions. While the first approach is specifically suited for the multi-day capacity control problem faced, for example, by hotels and car rental companies, the second one is more general and can be applied in arbitrary network revenue management settings that allow upgrading. Both approaches are formally derived and analytically related to each other. It is shown that they give tighter upper bounds on the optimal solution of the original dynamic program than the well-known deterministic linear program. Using data from a major car rental company, we perform computational experiments that show that the proposed approaches are tractable for real-world problem sizes and outperform those disaggregated, successive planning approaches that are used in revenue management practice today.
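Once a decomposition yields resource-level marginal capacity values (bid prices), the acceptance-with-upgrade decision can be sketched as follows: serve a request on the cheapest feasible resource, possibly an upgrade, and accept only if the fare covers that opportunity cost. The resource types, upgrade paths, and bid prices below are hypothetical and the rule is a simplified illustration, not the paper's exact control.

```python
# Marginal capacity values (bid prices) per resource, e.g. from a DP decomposition.
bid_price = {"compact": 40.0, "midsize": 55.0, "fullsize": 80.0}
remaining = {"compact": 0, "midsize": 3, "fullsize": 2}
# Hypothetical upgrade paths: a request may be served by its own type or any type listed after it.
upgrade_to = {
    "compact": ["compact", "midsize", "fullsize"],
    "midsize": ["midsize", "fullsize"],
    "fullsize": ["fullsize"],
}

def accept(request_type, fare):
    """Serve the request on the cheapest available (possibly upgraded) resource,
    but only if the fare covers its opportunity cost; return the resource used or None."""
    feasible = [r for r in upgrade_to[request_type] if remaining[r] > 0]
    if not feasible:
        return None
    best = min(feasible, key=lambda r: bid_price[r])
    if fare >= bid_price[best]:
        remaining[best] -= 1
        return best
    return None

print(accept("compact", 60.0))   # compact sold out -> upgraded to midsize (fare 60 >= 55)
print(accept("compact", 45.0))   # fare below the cheapest feasible bid price -> rejected
```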

20.
Principal component analysis (PCA) is an important tool for dimension reduction in multivariate analysis. Regularized PCA methods, such as sparse PCA and functional PCA, have been developed to incorporate special features in many real applications. Sometimes additional variables (referred to as supervision) are measured on the same set of samples, which can potentially drive low-rank structures of the primary data of interest. Classical PCA methods cannot make use of such supervision data. In this article, we propose a supervised sparse and functional principal component (SupSFPC) framework that can incorporate supervision information to recover underlying structures that are more interpretable. The framework unifies and generalizes several existing methods and flexibly adapts to the practical scenarios at hand. The SupSFPC model is formulated in a hierarchical fashion using latent variables. We develop an efficient modified expectation-maximization (EM) algorithm for parameter estimation. We also implement fast data-driven procedures for tuning parameter selection. Our comprehensive simulation and real data examples demonstrate the advantages of SupSFPC. Supplementary materials for this article are available online.
