Similar Documents
20 similar documents found (search time: 78 ms)
1.
We consider the mass-in-mass (MiM) lattice when the internal resonators are very small. When there are no internal resonators the lattice reduces to a standard Fermi-Pasta-Ulam-Tsingou (FPUT) system. We show that the solution of the MiM system, with suitable initial data, shadows the FPUT system for long periods of time. Using some classical oscillatory integral estimates we can conclude that the error of the approximation is (in some settings) higher than one may expect.
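As an illustrative sketch (not the paper's construction), the FPUT-β chain that the MiM lattice formally reduces to has equations of motion driven by nearest-neighbour forces F(r) = r + βr³. The chain length, clamped boundary conditions, and the value of `beta` below are assumptions for the example.

```python
import numpy as np

def fput_accel(u, beta=1.0):
    """Accelerations of an FPUT-beta chain with clamped ends.

    u holds the displacements of the moving masses; the force on mass n is
    F(u[n+1] - u[n]) - F(u[n] - u[n-1]) with F(r) = r + beta * r**3.
    """
    up = np.concatenate(([0.0], u, [0.0]))   # clamp the two end masses
    left = up[1:-1] - up[:-2]                # u_n - u_{n-1}
    right = up[2:] - up[1:-1]                # u_{n+1} - u_n
    return (right - left) + beta * (right**3 - left**3)
```

A displaced mass is pulled back toward equilibrium while dragging its neighbours along, which is the coupling the internal MiM resonators perturb.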

2.
Reliable and efficient a posteriori error estimates are derived for the edge element discretization of a saddle-point Maxwell's system. By means of the error estimates, an adaptive edge element method is proposed and its convergence is rigorously demonstrated. The algorithm uses a marking strategy based only on the error indicators, without the commonly used information on local oscillations and the refinement to meet the standard interior node property. Some new ingredients in the analysis include a novel quasi-orthogonality and a new inf-sup inequality associated with an appropriately chosen norm. It is shown that the algorithm is a contraction for the sum of the energy error plus the error indicators after each refinement step. Numerical experiments are presented to show the robustness and effectiveness of the proposed adaptive algorithm. © 2011 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2012

3.
This paper first presents several formulas for mean chance distributions of triangular fuzzy random variables and their functions, and then develops a new class of fuzzy random data envelopment analysis (FRDEA) models with mean chance constraints, in which the inputs and outputs are assumed to be characterized by fuzzy random variables with known possibility and probability distributions. According to the established formulas for the mean chance distributions, the mean chance constraints can be turned into equivalent stochastic ones. On the other hand, since the objective in the FRDEA model is the expectation of the ratio of the weighted sum of outputs to the weighted sum of inputs for a target decision-making unit (DMU), we suggest an approximation method to evaluate the objective for general fuzzy random inputs and outputs, and propose a method to reduce the objective to its equivalent stochastic form for triangular fuzzy random inputs and outputs. As a consequence, under the assumption that the inputs and outputs are triangular fuzzy random vectors, the proposed FRDEA model can be reduced to an equivalent stochastic programming model, in which the constraints contain the standard normal distribution function and the objective is the expectation of a function of a normal random variable. To solve the equivalent stochastic programming model, we design a hybrid algorithm integrating stochastic simulation and a genetic algorithm (GA). Finally, a numerical example is presented to demonstrate the FRDEA modeling idea and the effectiveness of the designed hybrid algorithm.

4.
We present a multispecies stochastic model that suggests optimal fishing policy for two species in a three-species predator-prey ecosystem in the Barents Sea. We employ stochastic dynamic programming to solve a three-dimensional model, in which the catch is optimized by using a multispecies feedback strategy. Applying the model to the cod, capelin, and herring ecosystem in the Barents Sea shows that the optimal catch for the stochastic interaction model is more conservative than that implied by the deterministic model. We also find that stochasticity has a stronger effect on the optimal exploitation policy for prey (capelin) than for predator (cod).
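As a much-simplified sketch of the kind of harvested predator-prey dynamics underlying such a model (two species instead of three, deterministic, explicit Euler), one time step can be written as follows. All parameter values are illustrative, not estimates for the Barents Sea stocks.

```python
def lv_step(prey, pred, h_prey, h_pred, dt=0.1,
            r=0.5, K=1.0, a=1.0, b=0.5, m=0.2):
    """One Euler step of a harvested predator-prey (Lotka-Volterra) system.

    h_prey, h_pred are harvest rates; r, K are prey growth and capacity,
    a is the predation rate, b the conversion efficiency, m predator mortality.
    """
    dprey = r * prey * (1 - prey / K) - a * prey * pred - h_prey * prey
    dpred = b * a * prey * pred - m * pred - h_pred * pred
    return prey + dt * dprey, pred + dt * dpred
```

A stochastic dynamic program would optimize (h_prey, h_pred) as a feedback policy over such a state transition, with noise added to the dynamics.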

5.
This paper presents error estimates for the element-free Galerkin method applied to a quasistatic contact problem with Tresca friction. The penalty method is used to impose the clamped boundary conditions, and a duality algorithm is given to handle the non-differentiable term arising from the Tresca friction. The error estimates show that the convergence order depends on the nodal spacing, the time step, the largest degree of the basis functions in the moving least-squares approximation, and the penalty factor. Numerical examples demonstrate the effectiveness of the element-free Galerkin method and verify the theoretical analysis.

6.
A method is proposed for estimating the parameters in a parametric statistical model when the observations are fuzzy and are assumed to be related to underlying crisp realizations of a random sample. This method is based on maximizing the observed-data likelihood defined as the probability of the fuzzy data. It is shown that the EM algorithm may be used for that purpose, which makes it possible to solve a wide range of statistical problems involving fuzzy data. This approach, called the fuzzy EM (FEM) method, is illustrated using three classical problems: normal mean and variance estimation from a fuzzy sample, multiple linear regression with crisp inputs and fuzzy outputs, and univariate finite normal mixture estimation from fuzzy data.
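To fix ideas, here is the crisp-data special case of the third illustration: standard EM for a two-component univariate normal mixture. This is not the FEM method itself; in the fuzzy version the point densities below would be replaced by probabilities of the fuzzy observations.

```python
import numpy as np

def em_gmm(x, mu, sigma, pi, iters=50):
    """EM for a univariate Gaussian mixture with crisp observations x."""
    x = np.asarray(x, float)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = np.stack([p / (s * np.sqrt(2 * np.pi))
                         * np.exp(-(x - m) ** 2 / (2 * s ** 2))
                         for m, s, p in zip(mu, sigma, pi)])
        r = dens / dens.sum(axis=0)
        # M-step: responsibility-weighted updates
        n = r.sum(axis=1)
        mu = (r @ x) / n
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / n) + 1e-6
        pi = n / n.sum()
    return mu, sigma, pi
```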

7.
The three-dimensional displacement of two-phase flow in porous media is a fundamental problem in the numerical simulation of energy science. The mathematical model is formulated as a nonlinear system of partial differential equations describing the incompressible miscible case: the pressure is defined by an elliptic equation, and the concentration by a convection-dominated diffusion equation. The pressure generates the Darcy velocity and controls the dynamic change of the concentration. We adopt a conservative block-centered scheme to approximate the pressure and Darcy velocity, improving the accuracy of the Darcy velocity by one order. We use a block-centered upwind multistep method to solve for the concentration, in which the time derivative is approximated by a multistep method, and the diffusion and convection terms are treated by a block-centered scheme and an upwind scheme, respectively. The composite algorithm is effective for such a convection-dominated problem, since numerical oscillation and dispersion are avoided and computational accuracy is improved. The block-centered method is conservative, and the concentration and the adjoint function are computed simultaneously; this physical property is important in the numerical simulation of seepage flow. Using convergence theory and a priori estimate techniques, we derive an optimal-order error estimate. Numerical experiments support and are consistent with the theoretical result, showing that the method is a powerful tool for this well-known model problem.

8.
ABSTRACT. Concerns about local depletion of fish populations are intensifying as interest becomes focused on finer spatial and temporal scales. We used the DeLury model to investigate local depletion of the eastern Bering Sea walleye pollock population by its fishery, using spatial and temporal scales thought to meet the model's assumptions about closure and applicability. Local depletion is estimated as the slope of logarithmic catch-per-unit-effort (CPUE) from the fishery versus cumulative effort, with data from 1995-1999 stratified by small areas, short seasons, and years. Of 237 depletion estimators, 172 had negative slopes, 94 of which were significant, a greater number than would be expected by chance alone. Of the 65 positive slopes, 19 were significantly positive, which is also more than would be expected. Cumulative depletion over a season was inversely related to estimated initial biomass, total catch, and total effort, indicating that depletion is detected more easily in areas of low abundance and consequently lower catch and effort. Our fine-scale estimates of depletion are much smaller than the overall depletion from annual stock assessments, showing that commercial data alone can be at best a relative index of depletion. This hyperstable relationship may result from the lack of search time in the measure of effort, fish-finding technology, and the schooling behavior of pollock. Evidence also suggests that measures taken starting in 1999 to disperse exploitation pressure in space and time may decrease local depletion, and that pollock may repopulate an exploited area in a relatively short time (weeks).
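The depletion estimator described above reduces to a simple regression. A minimal sketch (the stratification and significance testing of the paper are omitted):

```python
import numpy as np

def delury_slope(cpue, effort):
    """Slope of log(CPUE) against cumulative effort.

    Under the DeLury model, log CPUE declines linearly in cumulative
    effort, so a significantly negative slope indicates local depletion.
    """
    E = np.cumsum(effort)
    return np.polyfit(E, np.log(cpue), 1)[0]
```

On synthetic data generated with catchability q, the recovered slope is -q, which is the closure assumption the paper's strata are chosen to satisfy.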

9.
We introduce a stochastic version of an input relaxation model in data envelopment analysis (DEA). The input relaxation model, recently developed in DEA, is useful for resource management [e.g. G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion, Appl. Math. Comput. 151(1) (2004) 263-273]. This model allows more changes in the input combinations of decision making units than the observed inputs of the evaluated decision making units, and this extra flexibility in input combinations can yield better outputs. We obtain a non-linear deterministic equivalent of the stochastic model, and show that under fairly general conditions this non-linear model can be replaced by an ordinary deterministic DEA model. The model is illustrated using a real data set.
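For reference, the "ordinary deterministic DEA model" that the stochastic model reduces to is, in its simplest input-oriented CCR form, a linear program. A minimal sketch using `scipy.optimize.linprog` (this is the textbook CCR envelopment model, not the input relaxation model of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X is the m-by-n input matrix and Y the s-by-n output matrix,
    with one column per decision making unit.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_ub = np.r_[np.c_[-X[:, [j0]], X],          # sum_j lam_j x_ij <= theta x_i0
                 np.c_[np.zeros((s, 1)), -Y]]    # sum_j lam_j y_rj >= y_r0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

A unit producing the same output from twice the input scores 0.5; efficient units score 1.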

10.
A new algorithm is developed for simultaneous state-parameter estimation in real-time flood-forecasting applications. Dubbed the partitioned state-parameter (PSP) algorithm, it is unusual in that the parameter filter is formulated explicitly in terms of the identifiable parameters in the transition and input coefficient matrices. By virtue of its parallel filter structure the algorithm is very fast, yet it has been designed so that essential error interactions between the forecasting and parameter filters are preserved. Furthermore, PSP is structured so that input coefficients are only updated when the corresponding inputs are actually applied, which is useful for systems subject to sporadic inputs. The algorithm is tested with real and synthesized daily rainfall-runoff data from the Hillsborough River in Florida. PSP is found to produce good forecasts and parameter estimates and is much faster than the extended Kalman filter.
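The building block that PSP runs in parallel for states and parameters is the Kalman predict/update cycle. A minimal scalar sketch (not the PSP partitioning itself, whose coupled error terms are the paper's contribution):

```python
def kf_step(x, P, u, z, a=1.0, b=0.0, q=0.0, r=1.0):
    """One predict/update cycle of a scalar Kalman filter for the model
    x' = a*x + b*u + w,  z = x + v,  with Var(w) = q and Var(v) = r."""
    x_pred = a * x + b * u          # predict through the dynamics
    P_pred = a * a * P + q
    K = P_pred / (P_pred + r)       # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    return x_new, (1 - K) * P_pred
```

With r = 0 the measurement is trusted fully; as r grows the update ignores it, which is how measurement confidence propagates into the parameter filter as well.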

11.
Dimensional and similarity analyses are used in physics and engineering, especially in fluid mechanics, to reduce the dimension of the input variable space with no loss of information. Here, we apply these techniques to the propagation of uncertainties through computer codes by the Monte Carlo method, in order to reduce the variance of the estimators of the parameters of the output variable distribution. In the physics and engineering literature, dimensional analysis is often formulated intuitively in terms of physical quantities or dimensions such as time, length, or mass; here we use the more rigorous and more abstract generalized dimensional analysis of Moran and Marshek. The reduction of dimensionality succeeds in reducing estimator variance only when combined with variance-reduction techniques, not with ordinary random sampling. In this article we use stratified sampling, and the key to the success of the dimensionality reduction in improving the precision of the estimates is a better measurement of the distances between the outputs for given inputs. We illustrate the methodology with an application to a physical problem, a radioactive contaminant transport code, and achieve a substantial variance reduction for the estimators of the mean, variance, and distribution function of the output. Finally, we discuss the conditions necessary for the method to be successful.
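The variance-reduction mechanism can be seen in one dimension already: one draw per equal-width stratum of [0, 1) instead of n unrestricted draws. A minimal sketch (the paper applies this after the dimensional reduction, to a transport code rather than a toy integrand):

```python
import numpy as np

def plain_mean(f, n, rng):
    """Ordinary Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    return f(rng.random(n)).mean()

def stratified_mean(f, n, rng):
    """One uniform draw inside each of n equal-width strata of [0, 1)."""
    u = (np.arange(n) + rng.random(n)) / n
    return f(u).mean()
```

Replicating both estimators shows the stratified one has much smaller variance for the same sample size.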

12.
In off-line quality control, the settings that minimize the variance of a quality characteristic are unknown and must be determined based on an estimated dual response model of mean and variance. The present paper proposes a direct measure of the efficiency of any given design-estimation procedure for variance minimization. This not only facilitates the comparison of different design-estimation procedures, but may also provide a guideline for choosing a better solution when the estimated dual response model suggests multiple solutions. Motivated by the analysis of an industrial experiment on spray painting, the present paper also applies a class of link functions to model process variances in off-line quality control. For model fitting, a parametric distribution is employed in updating the variance estimates used in an iteratively weighted least squares procedure for mean estimation. In analysing combined array experiments, Engel and Huele (Technometrics, 1996; 39:365) used log-link to model process variances and considered an iteratively weighted least squares leading to the pseudo-likelihood estimates of variances as discussed in Carroll and Ruppert (Transformation and Weighting in Regression, Chapman & Hall: New York). Their method is a special case of the approach considered in this paper. It is seen for the spray paint data that the log-link may not be satisfactory and the class of link functions considered here improves substantially the fit to process variances. This conclusion is reached with a suggested method of comparing 'empirical variances' with the 'theoretical variances' based on the assumed model. Copyright © 2003 John Wiley & Sons, Ltd.

13.
Regression and linear programming provide the basis for popular techniques for estimating technical efficiency. Regression-based approaches are typically parametric and can be either deterministic or stochastic, where the latter allows for measurement error. In contrast, linear programming models are nonparametric and allow multiple inputs and outputs. The purported disadvantage of regression-based models is their inability to accommodate multiple outputs without additional data on input prices. In this paper, deterministic cross-sectional and stochastic panel data regression models that allow multiple inputs and outputs are developed. Notably, technical efficiency can be estimated using regression models in multiple-input, multiple-output environments without input price data. We provide several examples, including a Monte Carlo analysis.
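For context, the classic deterministic regression-based estimator (which the paper generalizes to multiple outputs) is corrected OLS: fit the production function, shift the fitted line up by the largest residual so it envelops the data, and read efficiency off the gap. A single-input, single-output sketch:

```python
import numpy as np

def cols_efficiency(logx, logy):
    """Corrected OLS (COLS) technical efficiency, one input and one output.

    Shifting the OLS fit up by the largest residual makes it a frontier;
    each unit's efficiency is exp(residual - max residual) <= 1.
    """
    slope, intercept = np.polyfit(logx, logy, 1)
    resid = logy - (intercept + slope * logx)
    return np.exp(resid - resid.max())
```

The best-practice unit scores exactly 1 and every other unit scores below it.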

14.
In this paper, a posteriori error estimates for the generalized Schwarz method with mixed boundary conditions on the interfaces are proved for the advection-diffusion equation with second-order boundary value problems, using a theta time scheme combined with a Galerkin spatial method. Furthermore, an asymptotic behavior in the Sobolev norm is deduced using the Bensoussan-Lions algorithm.

15.
ABSTRACT. The excessive and unsustainable exploitation of our marine resources has led to the promotion of marine reserves as a fisheries management tool. Marine reserves, areas in which fishing is restricted or prohibited, can offer opportunities for the recovery of exploited stock and fishery enhancement. In this paper we examine the contribution of fully protected tropical marine reserves to fishery enhancement by modeling marine reserve‐fishery linkages. The consequences of reserve establishment on the long‐run equilibrium fish biomass and fishery catch levels are evaluated. In contrast to earlier models this study highlights the roles of both adult (and juvenile) fish migration and larval dispersal between the reserve and fishing grounds by employing a spawner‐recruit model. Uniform larval dispersal, uniform larval retention and complete larval retention combined with zero, moderate and high fish migration scenarios are analyzed in turn. The numerical simulations are based on Mombasa Marine National Park, Kenya, a fully protected coral reef marine reserve comprising approximately 30% of former fishing grounds. Simulation results suggest that the establishment of a fully protected marine reserve will always lead to an increase in total fish biomass. If the fishery is moderately to heavily exploited, total fishery catch will be greater with the reserve in all scenarios of fish and larval movement. If the fishery faces low levels of exploitation, catches can be optimized without a reserve but with controlled fishing effort. With high fish migration from the reserve, catches are optimized with the reserve. The optimal area of the marine reserve depends on the exploitation rate in the neighboring fishing grounds. For example, if exploitation is maintained at 40%, the ‘optimal’ reserve size would be 10%. If the rate increases to 50%, then the reserve needs to be 30% of the management area in order to maximize catches. 
However, even in lower exploitation fisheries (below 40%), a small reserve (up to 20%) provides significantly higher gains in fish biomass than losses in catch. Marine reserves are a valuable fisheries management tool. To achieve maximum fishery benefits they should be complemented by fishing effort controls.

16.
In adaptive control, the model of the controlled plant often differs from the actual plant; the discrepancy appears as unmodeled dynamics and external disturbances. The designer of an adaptive controller, however, can only base the design on the ideal model. Some adaptive algorithms cannot guarantee robustness against external disturbances or unmodeled dynamics, and even a small disturbance can make the adaptive loop unstable. Since the early 1980s, C. E. Rohrs et al. have raised this issue of systems with unmodeled …

17.
In this work, a model order reduction (MOR) technique for linear multivariable systems is proposed using invasive weed optimization (IWO). The technique combines the advantages of retaining the dominant poles and minimizing the approximation error. The state space matrices of the reduced order system are chosen such that the dominant eigenvalues of the full order system are unchanged, while the remaining parameters are chosen by invasive weed optimization with an objective function that minimizes the mean squared error between the outputs of the full order system and those of the reduced order model under unit step inputs. The proposed algorithm has been applied successfully: a 10th-order Multiple-Input-Multiple-Output (MIMO) linear model of a practical power system was reduced to a 3rd-order model and compared with recently published work.
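The dominant-pole retention step can be sketched as plain modal truncation: project the state space onto the eigenvectors of the k slowest modes. This omits the paper's IWO search, which would tune the remaining free parameters instead of discarding them.

```python
import numpy as np

def modal_truncate(A, B, C, k):
    """Reduce x' = Ax + Bu, y = Cx to k states by keeping the k dominant
    (largest real part, i.e. slowest-decaying) eigenvalues of A."""
    w, V = np.linalg.eig(A)
    idx = np.argsort(-w.real)[:k]        # dominant modes first
    Vk = V[:, idx]                       # right eigenvectors kept
    W = np.linalg.pinv(V)[idx, :]        # matching left-eigenvector rows
    return W @ A @ Vk, W @ B, C @ Vk
```

For a stable system the retained eigenvalues of the reduced A are exactly the dominant poles of the full system, which is the invariance the paper imposes.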

18.
Analyzing the sensitivity of model outputs to inputs is important for assessing risk and making decisions in engineering applications. For models with multiple outputs, however, sensitivity indices are difficult to interpret because existing methods often ignore the effects of the outputs' differing dimensions and of the correlation between them. In this paper, a new sensitivity analysis method for multiple outputs is proposed based on vector projection and dimension normalization. Through dimension normalization, the space of multiple outputs is made dimensionless, eliminating the effect of each output's dimension. After an affine coordinate system is constructed that accounts for the correlation among the normalized outputs, a total variance vector for the multiple outputs is composed from the individual variances of each output. Then, by projecting the variance contribution vector (composed of the input's individual variance contributions to each output) onto the total variance vector, new sensitivity indices are proposed that measure the comprehensive effect of the input on the total variance vector; each index is defined as the ratio of this projection to the norm of the total variance vector. We show that the Sobol' indices for a scalar output and the covariance-decomposition-based indices for multiple outputs are special cases of the proposed vector projection based indices. The mathematical properties and geometric interpretation of the proposed method are then discussed. Three numerical examples and a rotating shaft model of an aircraft wing are used to validate the proposed method and show its potential benefits.
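The scalar-output special case mentioned above, the first-order Sobol' index, can be estimated with the standard pick-freeze Monte Carlo scheme. A minimal sketch (not the paper's projection-based estimator):

```python
import numpy as np

def sobol_first(f, i, d, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index S_i
    for a scalar model f with d independent U(0, 1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    B[:, i] = A[:, i]                    # freeze input i across both samples
    yA, yB = f(A), f(B)
    # Cov(yA, yB) isolates the variance explained by input i alone
    return (np.mean(yA * yB) - np.mean(yA) ** 2) / np.var(yA)
```

For the linear model y = x1 + 2*x2 with uniform inputs, the analytic indices are S_1 = 1/5 and S_2 = 4/5, and the estimator recovers them.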

19.
Over the last three decades most of the world's fisheries have been subject to management programs that have tried to limit the use of key fishing inputs. Inevitably, these restrictions have proven ineffective at preventing rent dissipation and stock depletion. More noteworthy is that fishers have subverted the intentions of these restrictions by adjusting the primary harvesting technology. This has led to an inefficient industrial structure characterized by capital stuffing on the part of each vessel, excess employment, an inefficient mix of vessels in the fleet, and too many vessels altogether. A promising means of encouraging more efficient primary harvesting is the individual transferable vessel quota (ITVQ), which allocates a given catch to each vessel, thereby giving an incentive to catch the quota at least cost. This paper examines efficiency gains and potential industry restructuring from the introduction of an ITVQ into a fishery that was previously subject to input restrictions. Using data from the British Columbia salmon fishery, this paper estimates restricted cost functions for each of four different vessel types and simulates the operations of a market for ITVQs. The demand for quota comes from individual vessels and is found by differentiating the cost function with respect to the shadow price of quota. The market for quota is in equilibrium when the total demand for quota is equal to the fixed supply of quota set by the government; this implicitly defines the equilibrium quota rental price. Results show that the ITVQ could generate unit rental prices for quota between 31 and 93 cents per pound (18-53% of the average landed price). Using this simulated price, each vessel's cost-minimizing strategy is defined and both low cost vessels (those that will buy quota) and high cost vessels (those that will sell quota and exit the fishery) are identified. Quota trades between the two groups result in efficiency gains.
These include reduced capital stuffing, exit of less efficient vessel types, attainment of economies of scale, and an efficient composition of vessel types in the fleet. In aggregate these gains lead to an estimate of annual resource rent that is approximately equal to one third of the value of the catch.
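The market-clearing condition described above (total quota demand equals the fixed supply) can be solved numerically once vessel demand functions are known. A minimal bisection sketch with hypothetical demand curves, not the paper's estimated cost functions:

```python
def quota_price(demands, supply, lo=0.0, hi=100.0, tol=1e-8):
    """Bisect for the rental price at which total quota demand equals the
    fixed quota supply. Each element of `demands` is a non-increasing
    function of price (one vessel's quota demand)."""
    total = lambda p: sum(d(p) for d in demands)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > supply:
            lo = mid                     # excess demand: raise the price
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the equilibrium price, vessels whose demand exceeds their holdings buy quota and the rest sell, reproducing the trade pattern the paper identifies.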

20.
Pesticides are widely used by crop producers in developed countries to combat risk associated with pests and diseases. However, their indiscriminate use can lead to various environmental spillovers that may alter the agricultural production environment, thus contributing to production risk. This study utilises a data envelopment analysis (DEA) approach to measure the performance of arable farms, incorporating pesticides' environmental spillovers and output variance as undesirable outputs in the efficiency analysis and taking explicitly into account the effect of pesticides and other inputs on production risk. The application focuses on panel data from Dutch arable farms over the period 2003-2007. A moment approach is used to compute output variance, providing empirical representations of the risk-increasing or -decreasing nature of the inputs used. Finally, shadow values of risk-adjusted inputs are computed. We find that pesticides are overused in Dutch arable farming and there is considerable evidence of the need to decrease pesticides' environmental spillovers.
