Found 20 similar documents; search took 15 ms.
1.
Brendan Mc Cann 《Annali di Matematica Pura ed Applicata》1990,157(1):27-61
Summary
We present a Fitting class construction which exploits the properties of a certain class of finite soluble groups of nilpotent length three, called -groups in the paper. In addition we look at a wider class of groups, called in the paper, also of nilpotent length three, and examine the question as to what -groups are in . A number of examples are given to illuminate the various classes.
The author would like to thank Hermann Heineken for supervising his dissertation, from which this paper has sprung, the German Academic Exchange Service (DAAD) for enabling him to study with Prof. Heineken at Universität Würzburg, and An Roinn Oideachais for the fellowship which is at present supporting him at University College, Galway.
2.
《European Journal of Operational Research》1998,111(1):129-141
In this paper, a higher-level heuristic procedure, "tabu search", is proposed to provide good solutions to resource-constrained project scheduling problems with randomized activity durations. Our adaptation of tabu search uses multiple tabu lists, randomized short-term memory, and multiple starting schedules as a means of search diversification. The proposed method proves to be an efficient way to find good solutions to both deterministic and stochastic problems. For the deterministic problems, most of the optimal schedules of the test projects investigated are found. Computational results are presented which establish the superiority of tabu search over the existing heuristic algorithms.
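The core loop of tabu search (tabu list, aspiration, multiple random starts for diversification) can be illustrated with a generic sketch. The toy objective below (total weighted completion time of a job permutation) and all names are illustrative stand-ins, not the paper's resource-constrained model or its multiple tabu lists:

```python
import random

def tabu_search(cost, n, iters=200, tenure=7, restarts=3, seed=0):
    """Steepest tabu search over permutations of range(n) with a swap
    neighbourhood; multiple random starts play the role of diversification."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        cur = list(range(n))
        rng.shuffle(cur)
        c0 = cost(tuple(cur))
        if c0 < best_cost:
            best, best_cost = tuple(cur), c0
        tabu = {}  # position pair -> iteration until which the swap is tabu
        for it in range(iters):
            candidates = []
            for i in range(n - 1):
                for j in range(i + 1, n):
                    cur[i], cur[j] = cur[j], cur[i]
                    c = cost(tuple(cur))
                    cur[i], cur[j] = cur[j], cur[i]
                    # aspiration: a tabu move is allowed if it beats the best
                    if tabu.get((i, j), -1) < it or c < best_cost:
                        candidates.append((c, i, j))
            if not candidates:
                break
            c, i, j = min(candidates)       # best admissible move, even if worsening
            cur[i], cur[j] = cur[j], cur[i]
            tabu[(i, j)] = it + tenure
            if c < best_cost:
                best, best_cost = tuple(cur), c
    return best, best_cost

# toy objective: total weighted completion time of jobs in the given order
durations = [4, 2, 7, 1]
weights = [1, 3, 1, 5]

def wct(perm):
    t, total = 0, 0
    for job in perm:
        t += durations[job]
        total += weights[job] * t
    return total

order, value = tabu_search(wct, 4)
```

For this single-machine toy problem the optimum is the weighted-shortest-processing-time order, which the search reaches because every non-optimal permutation admits an improving swap.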
3.
Takefumi Kondo 《Differential Geometry and its Applications》2005,22(2):121-130
In this paper we generalize Gromov's mm-reconstruction theorem (cf. [Metric Structures for Riemannian and Non-Riemannian Spaces, Birkhäuser, Basel, 1999]) to probability measures on the space of mm-spaces. For this purpose, we give an alternative proof of the mm-reconstruction theorem.
4.
An activity-on-arc network project of the PERT type with random activity durations is considered. For each activity, its accomplishment is measured in percentages of the total project. When operated, each activity utilizes resources of a pregiven capacity, and no resource reallocation is undertaken in the course of the project's realization. Each activity can be operated at several possible speeds that are subject to random disturbances and correspond to one and the same resource capacity; that is, these speeds depend only on the degree of intensity of the project's realization. For example, in construction projects partial accomplishments are usually measured in percentages of the total project, while different speeds correspond to different hours a day per worker. The number of possible speeds is common to all activities. For each activity, speeds are sorted in ascending order of their average values, i.e., speeds are indexed. It is assumed that at any moment t > 0, the activities in operation at that moment have to apply speeds of one and the same index, which actually determines the project's speed. The progress of the project can be evaluated only via inspection at control points that have to be determined. The project's due date and the chance constraint to meet the deadline are pregiven. An on-line control model is suggested that, at each control point, faces a stochastic optimization problem. Two conflicting objectives are embedded in the model: (1) to minimize the number of control points, and (2) to minimize the average index of the project's speeds, which can be changed only at a control point. At each routine control point, decision-making centers on determining the next control point and the new index of the speeds (for all activities to be operated) up to that point.
The model's performance is verified via simulation. The developed on-line control algorithm can be used for various PERT network projects which can be realized at different speeds, including construction projects and R&D projects.
5.
G. A. Belyankin, A. A. Borisov, A. A. Vasin, V. V. Morozov, V. V. Fedorov 《Computational Mathematics and Modeling》1999,10(2):184-189
We consider problems of optimal distribution of capital and borrowed funds between investment projects in scientific-technical production. We develop a modified branch-and-bound method using the principle of decomposition, with partitioning of the optimization problem at both the high and the low level. For a number of cases we obtain necessary and sufficient conditions for solvability of the low-level problem. Bibliography: 2 titles. Translated from Problemy Matematicheskoi Fiziki, 1998, pp. 225–234.
6.
7.
8.
9.
Consider n jobs (J_1, J_2, …, J_n) and m machines (M_1, M_2, …, M_m). Upon completion of processing of J_i, 1 ≤ i ≤ n, on M_j, 1 ≤ j ≤ m − 1, it departs with probability p_i or moves to M_{j+1} with the complementary probability 1 − p_i. A job completing service on M_m departs. The processing time of J_i on M_j possesses a distribution function F_j. It is proved that sequencing the jobs in nondecreasing order of p_i minimizes the schedule length in distribution.
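The sequencing rule is simple to state in code. The sketch below (the helper names are ours, not the paper's) implements the ordering, together with the expected number of machines a job visits under the geometric routing the abstract describes:

```python
def sequence_jobs(p):
    """Sequence jobs in nondecreasing order of the departure probability p_i,
    the rule the abstract proves minimizes schedule length in distribution."""
    return sorted(range(len(p)), key=lambda i: p[i])

def expected_stages(p_i, m):
    """Expected number of machines a job visits: it reaches machine j only if
    it failed to depart after each of the first j - 1 machines, which happens
    with probability (1 - p_i) ** (j - 1)."""
    return sum((1 - p_i) ** (j - 1) for j in range(1, m + 1))

# the job least likely to depart early (longest expected route) goes first
order = sequence_jobs([0.5, 0.1, 0.9])
```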
10.
11.
Haibo Chen, Xianqing Lv, Yansong Qiao 《Journal of Computational and Applied Mathematics》2009,233(4):1128-1138
Existence of a least squares solution for a sum of several weighted normal functions is proved. The gradient descent (GD) method is used to fit the measured data (i.e., the laser grain-size distribution of sediments) with a sum of three weighted lognormal functions. The numerical results indicate that the GD method is not only easy to implement but also effectively optimizes the parameters of the fitting function, with the error decreasing steadily. The overall fitting results are satisfactory. As a general approach to data fitting, the GD method could also be used to solve other optimization problems.
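A minimal sketch of this kind of fit, assuming a finite-difference gradient with backtracking in place of whatever step-size rule the authors use; the synthetic "grain-size" data and all parameter names are illustrative:

```python
import math

def lognormal_pdf(x, mu, sigma):
    # density of a lognormal variable at x > 0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * math.sqrt(2 * math.pi))

def model(x, params):
    # params holds (weight, mu, sigma) for each of the three components
    return sum(params[3 * k] * lognormal_pdf(x, params[3 * k + 1], params[3 * k + 2])
               for k in range(3))

def sse(params, xs, ys):
    # sum of squared fitting errors
    return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def fit_gd(xs, ys, params, steps=200, lr=0.5, h=1e-6):
    """Gradient descent with a finite-difference gradient and backtracking,
    so the fitting error decreases monotonically."""
    params = list(params)
    for _ in range(steps):
        base = sse(params, xs, ys)
        grad = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += h
            grad.append((sse(bumped, xs, ys) - base) / h)
        step = lr
        while step > 1e-12:
            trial = [p - step * g for p, g in zip(params, grad)]
            if sse(trial, xs, ys) < base:
                params = trial
                break
            step /= 2
        else:
            break  # no descent step found: (near-)stationary point
    return params

# synthetic curve from known parameters, then refit from a perturbed start
true = [0.5, 0.0, 0.5, 0.3, 0.8, 0.4, 0.2, -0.5, 0.3]
xs = [0.1 * i for i in range(1, 40)]
ys = [model(x, true) for x in xs]
start = [p + 0.05 for p in true]
fitted = fit_gd(xs, ys, start)
```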
12.
One key point in multiple attribute decision making is to determine the associated weights. In this paper, we first briefly review some main methods for determining weights by using distribution functions. Then, motivated by the idea of data distribution, we develop some novel methods for obtaining the weights associated with weighted arithmetic aggregation operators. The methods can relieve the influence of biased data on the decision results by assigning small weights to such data based on their probability. Furthermore, some commonly used probability distributions are applied to solve the problems under different conditions. Finally, four practical examples are provided to illustrate the weighting method.
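One commonly used distribution-based scheme of the kind reviewed here derives position weights from the normal density, so that extreme (and possibly biased) ordered inputs receive small weights. The sketch below is one illustrative instance of the idea, not necessarily the paper's construction:

```python
import math

def normal_weights(n):
    """Normal-distribution-based position weights for n ordered arguments:
    positions near the middle get larger weights, so extreme (possibly
    biased) inputs count less in the weighted arithmetic aggregation."""
    mu = (n + 1) / 2
    sigma = math.sqrt(sum((i - mu) ** 2 for i in range(1, n + 1)) / n)
    raw = [math.exp(-(i - mu) ** 2 / (2 * sigma ** 2)) for i in range(1, n + 1)]
    s = sum(raw)
    return [r / s for r in raw]  # normalized to sum to 1

w = normal_weights(5)
```

The weights are symmetric about the middle position and peak there, which is exactly what damps outlying arguments.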
13.
Zhu Lixing 《Acta Mathematica Sinica, English Series》1993,9(2):175-183
Consider D_n, the maximum Kolmogorov distance between P_n and P among all possible one-dimensional projections, where P_n is an empirical measure based on d-dimensional i.i.d. vectors with spherically symmetric probability measure P. We show in this paper that $$c_1 \lambda ^2 \exp ( - 2\lambda ^2 ) \leqslant P(\sqrt n D_n > \lambda )$$ for large λ, d ≥ 2 and an appropriate constant c_1. From this, when the dimension d is fixed, we give a negative answer to Huber's conjecture, $P(D_n > \varepsilon ) \leqslant N \exp ( - 2n\varepsilon ^2 )$, where N is a constant depending only on the dimension d.
14.
Robust and efficient fitting of the generalized Pareto distribution with actuarial applications in view
Due to advances in extreme value theory, the generalized Pareto distribution (GPD) emerged as a natural family for modeling exceedances over a high threshold. Its importance in applications (e.g., insurance, finance, economics, engineering and numerous other fields) can hardly be overstated and is widely documented. However, despite the sound theoretical basis and wide applicability, fitting of this distribution in practice is not a trivial exercise. Traditional methods such as maximum likelihood and method-of-moments are undefined in some regions of the parameter space. Alternative approaches exist but they lack either robustness (e.g., probability-weighted moments) or efficiency (e.g., method-of-medians), or present significant numerical problems (e.g., minimum-divergence procedures). In this article, we propose a computationally tractable method for fitting the GPD, which is applicable for all parameter values and offers competitive trade-offs between robustness and efficiency. The method is based on ‘trimmed moments’. Large-sample properties of the new estimators are provided, and their small-sample behavior under several scenarios of data contamination is investigated through simulations. We also study the effect of our methodology on actuarial applications. In particular, using the new approach, we fit the GPD to the Danish insurance data and apply the fitted model to a few risk measurement and ratemaking exercises.
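For contrast with the trimmed-moments proposal, the classical method-of-moments fit that the abstract criticizes can be sketched in a few lines; it exists only where the GPD variance is finite (shape xi < 1/2), which illustrates the "undefined in some regions of the parameter space" point:

```python
import statistics

def gpd_moment_fit(data):
    """Classical method-of-moments fit of a GPD (shape xi, scale sigma) to
    exceedances over the threshold.  Solves mean = sigma/(1 - xi) and
    variance = sigma**2 / ((1 - xi)**2 * (1 - 2*xi)).  Valid only when the
    variance exists (xi < 1/2) -- the region where the abstract notes
    classical estimators break down."""
    m = statistics.fmean(data)
    v = statistics.pvariance(data)
    r = m * m / v
    xi = (1 - r) / 2          # shape estimate
    sigma = m * (1 + r) / 2   # scale estimate
    return xi, sigma

xi, sigma = gpd_moment_fit([1.0, 3.0])
```

For xi = 0 the GPD reduces to the exponential distribution, so a sample whose variance equals its squared mean gives xi = 0 and sigma equal to the mean.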
15.
《Insurance: Mathematics and Economics》2010,46(3):424-435
16.
Mir Nabi Pirouzi Fard 《Computational Statistics》2010,25(2):257-267
A comparison between the ordinary least-squares estimator and the weighted least-squares estimator is provided for data arising from the standard extreme value distribution. The probability plot of the extreme value distribution is applied. A goodness-of-fit test of the standard extreme value distribution is introduced, and the percentage points of the test statistic are investigated. The results of a power study under various alternatives show that in most situations the proposed test statistic performs as well as competing alternatives.
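The ordinary least-squares side of such a comparison can be sketched as a regression on the Gumbel (standard extreme value) probability plot. The plotting positions p_i = (i - 0.5)/n are one common choice and not necessarily the paper's:

```python
import math

def gumbel_plot_fit(sample):
    """Ordinary least-squares fit on a Gumbel probability plot: regress the
    ordered sample on the reduced variate z_i = -ln(-ln(p_i)) with plotting
    positions p_i = (i - 0.5)/n.  Returns location and scale estimates."""
    xs = sorted(sample)
    n = len(xs)
    zs = [-math.log(-math.log((i - 0.5) / n)) for i in range(1, n + 1)]
    zbar = sum(zs) / n
    xbar = sum(xs) / n
    # OLS slope of x on z gives the scale; the intercept gives the location
    scale = (sum((z - zbar) * (x - xbar) for z, x in zip(zs, xs))
             / sum((z - zbar) ** 2 for z in zs))
    loc = xbar - scale * zbar
    return loc, scale
```

A sample lying exactly on a Gumbel line (x_i = loc + scale * z_i at the same plotting positions) is recovered exactly, which is a handy sanity check on the regression.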
17.
This paper proposes a novel algorithm to reconstruct an unknown distribution by fitting its first four moments to a proper parametrized probability distribution (PPD) model. First, a PPD system containing three previously developed PPD models is suggested to approximate the unknown distribution, rather than empirically adopting a single distribution model. Then, a two-step algorithm based on the moments-matching criterion and the maximum entropy principle is proposed to specify the appropriate (final) PPD model in the system for the distribution. The proposed algorithm is first verified by approximating several commonly used analytical distributions, along with a set of real data, where existing measures are also employed to demonstrate the effectiveness of the proposed two-step algorithm. Further, the effectiveness of the algorithm is demonstrated through an application to three typical moments-based reliability problems. It is found that the proposed algorithm is a robust tool for selecting an appropriate PPD model in the system for recovering an unknown distribution by fitting its first four moments.
18.
Wai-Yuan Tan 《Mathematical and Computer Modelling》2005,41(13):53-1414
In this paper, I have shown that under some mild conditions, the number of initiated cells in an extended two-stage model of carcinogenesis can be approximated by a diffusion process. By using this approximation, I have derived the probability distribution for the number of initiated cells in terms of Laguerre polynomials under normal prevention conditions. This follows from the fact that many of the dietary components are antioxidants which would neutralize the hydroxyl free radicals and hence, reduce the proliferation rates of initiated cells to interrupt or slow down the promotion stage in carcinogenesis.
19.
Bid calculation for construction projects: Regulations and incentive effects of unit price contracts
《European Journal of Operational Research》2006,171(3):1005-1019
The Austrian contract awarding system for construction projects is characterized by the unit price contract being an important contract type. The bid price is a decisive criterion for the selection of the construction company that performs a project, and the bid price is calculated from the unit prices and the specified production volumes of the project activities. Since the actual production volumes can differ from the specified volumes, the actual payment can differ from the bid price according to these deviations. In practice there can be asymmetric information on the production volumes. This creates an incentive for bidders to "skew" the bid calculation by allocating overhead costs asymmetrically to project activities.
In this paper we analyze this agency-theoretic situation and develop a model that decides on the allocation of overhead costs to project activities in order to maximize the actual payment for a predetermined bid price. We also present a case study and comment on the implications of the model for the contract awarding system in the construction industry.
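The skewing incentive can be illustrated as a linear program with one equality constraint (the fixed bid price), which a greedy rule solves exactly. The sketch below, with made-up volumes and unit-price bounds, shows the mechanism only; it is not the paper's full agency model:

```python
def skew_bid(spec, actual, lo, hi, bid_price):
    """Greedy solution of: maximize expected payment sum(u_j * actual_j)
    subject to sum(u_j * spec_j) == bid_price and lo_j <= u_j <= hi_j.
    With a single equality constraint the optimum loads the unit prices of
    activities with the highest actual/spec volume ratio first."""
    n = len(spec)
    u = list(lo)
    budget = bid_price - sum(l * s for l, s in zip(lo, spec))
    assert budget >= 0, "bid price below the minimum admissible price"
    for j in sorted(range(n), key=lambda j: actual[j] / spec[j], reverse=True):
        room = (hi[j] - u[j]) * spec[j]  # budget consumed by raising u_j to hi_j
        delta = min(room, budget)
        u[j] += delta / spec[j]
        budget -= delta
        if budget == 0:
            break
    return u

# two activities: activity 0 is expected to overrun its specified volume
spec = [100.0, 100.0]
actual = [150.0, 80.0]
u = skew_bid(spec, actual, lo=[1.0, 1.0], hi=[3.0, 3.0], bid_price=300.0)
```

Here the bid price is unchanged, but shifting overhead onto the activity expected to overrun raises the expected payment above what a balanced allocation would earn.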
20.