Similar Documents
20 similar documents found (search time: 812 ms)
1.
For solving inverse gravimetry problems, efficient, stable parallel algorithms based on iterative gradient methods are proposed. For solving the systems of linear algebraic equations with block-tridiagonal matrices that arise in geoelectrics problems, a parallel matrix sweep algorithm, a square-root method, and a preconditioned conjugate gradient method are proposed. The algorithms are implemented numerically on the parallel computing system of the Institute of Mathematics and Mechanics (PCS-IMM), on NVIDIA graphics processors, and on an Intel multi-core CPU, using several new computing technologies. The parallel algorithms are incorporated into a system for remote computation called the “Specialized Web-Portal for Solving Geophysical Problems on Multiprocessor Computers.” Some problems with “quasi-model” and real data are solved.
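As a rough illustration of the last of these solvers, here is a minimal sketch of a preconditioned conjugate gradient iteration in Python. It is not the authors' code: the Jacobi preconditioner and the dense matrix are simplifying assumptions, whereas the paper targets block-tridiagonal systems on parallel hardware.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.

    A is assumed symmetric positive definite; the diagonal preconditioner
    stands in for the problem-specific preconditioners the abstract mentions.
    """
    M_inv = 1.0 / np.diag(A)           # Jacobi preconditioner: M = diag(A)
    x = np.zeros(len(b))
    r = b - A @ x                      # initial residual
    z = M_inv * r                      # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update the search direction
        rz = rz_new
    return x
```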

2.
C. Popa 《PAMM》2003,2(1):491-492
In this paper we describe two “sparse preconditioning” techniques for accelerating the convergence of Kaczmarz-like algorithms. The first method uses projections with respect to the “energy scalar product” generated by an appropriate symmetric positive definite matrix. The second starts from recent results of Y. Censor and T. Elfving on “sparsity pattern oriented” (SPO) oblique projections and uses an “algebraic multigrid interpolation-like” construction of the SPO family. Numerical experiments are described on a system coming from a bioelectric field simulation problem.
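For orientation, here is a minimal sketch of the classical (unpreconditioned) Kaczmarz iteration that both techniques are designed to accelerate; the sparse-preconditioning constructions themselves are not reproduced.

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Classical Kaczmarz: cyclically project the iterate onto the
    hyperplane of each equation a_i . x = b_i."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```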

3.
This paper presents two new approximate versions of the alternating direction method of multipliers (ADMM), derived by modifying the original “Lagrangian splitting” convergence analysis of Fortin and Glowinski. They require neither strong convexity of the objective function nor any restrictions on the coupling matrix. The first method uses an absolutely summable error criterion and resembles methods that may readily be derived from earlier work on the relationship between the ADMM and the proximal point method, but without any need for restrictive assumptions to make it practically implementable. It permits both subproblems to be solved inexactly. The second method uses a relative error criterion and the same kind of auxiliary iterate sequence that has recently been proposed to enable relative-error approximate implementation of non-decomposition augmented Lagrangian algorithms. It also allows both subproblems to be solved inexactly, although ruling out “jamming” behavior requires a somewhat complicated implementation. The convergence analyses of the two methods share extensive underlying elements.
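To fix notation, here is a sketch of exact two-block ADMM on a concrete instance (the lasso). In the paper's approximate variants, the two subproblem solves below would be replaced by inexact solves monitored by a summable or relative error criterion.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Exact two-block ADMM for the lasso:
        min 0.5*||Ax - b||^2 + lam*||z||_1   s.t.  x - z = 0.
    In the paper's approximate versions, the two subproblem solves below
    may be carried out inexactly, with the errors controlled by an
    absolutely summable (method 1) or relative (method 2) criterion."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))    # cache the factorization
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        # x-subproblem: argmin_x 0.5||Ax - b||^2 + (rho/2)||x - z + u||^2
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-subproblem: prox of the l1 norm (soft-thresholding)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u += x - z                                    # multiplier update
    return z
```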

4.
Zdzislaw Pawlak  Jerzy Rakowski 《PAMM》2008,8(1):10321-10322
The purpose of the paper is to derive an efficient sinusoidal thick beam finite element for the static analysis of 2D structures. A two-node, 6-DOF curved, sine-shaped element of constant cross-section is considered. Effects of flexural, axial and shear deformations are taken into account. Contrary to the commonly used curvilinear coordinates, a rectangular coordinate system is used in the present analysis. First, an auxiliary problem is solved: a symmetric clamped-clamped sinusoidal arch subjected to unit nodal displacements of both supports is analyzed using the flexibility method. The exact stiffness matrix for the shear-flexible and compressible element is derived. Introduction of the two parameters “n” and “t” enables the identification of shear and membrane influences in the element stiffness matrix. Based on the principle of virtual work, a full set of 18 shape functions related to unit support displacements is derived (total rotations of cross-sections, tangential and normal displacements along the element). The functions are found analytically in closed form. They are functions of one dimensionless coordinate along the x-axis and depend on one geometrical parameter of the sinusoidal arch, the height/span ratio “c”, and on the physical and geometrical properties of the element cross-section. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

5.
Contemporary Group Technology (GT) methods apply coding schemes as a popular means of capturing the design and manufacturing information pertinent to the parts to be grouped. Coding schemes are widely used and many different coding systems are commercially available. The main disadvantage of current coding systems, however, is their generality and lack of informative representation of the parts. This paper presents a new methodology for coding parts using fuzzy codes. The methodology is general and applies to attributes that have a crisp value (e.g., “length”, “ratio of length to diameter”), an interval value (e.g., “tolerance”, “surface roughness”), or a fuzzy value (e.g., “primary shape”). The methodology considers the range of attribute values relevant for the grouping and is therefore tuned and adjusted to the specific collection of parts of interest. This method creates a more informative coding scheme, which leads to improved variant process planning, scheduling, and inventory control, as well as other manufacturing functions that utilize GT.
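A toy sketch of the idea for a crisp attribute: instead of mapping a measured value to a single hard code digit, a part receives a membership grade in each class. The attribute, the class boundaries, and the triangular membership shape are all hypothetical illustrations, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical classes for the crisp attribute "length" (mm).
classes = {"short": (0, 25, 60), "medium": (40, 80, 120), "long": (100, 140, 180)}

def fuzzy_code(length_mm):
    """One fuzzy code digit: a membership grade in every class,
    rather than a single hard class label."""
    return {name: round(tri(length_mm, *abc), 2) for name, abc in classes.items()}

print(fuzzy_code(60.0))   # {'short': 0.0, 'medium': 0.5, 'long': 0.0}
```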

6.
7.
The Linear Complementarity Problem (LCP), with an H+-matrix coefficient, is solved by using the new “(Projected) Matrix Analogue of the AOR (MAAOR)” iterative method; this new method constitutes an extension of the “Generalized AOR (GAOR)” iterative method. In this work two sets of convergence intervals for the parameters involved are determined by the theories of “Perron-Frobenius” and of “Regular Splittings”. It is shown that the intervals in question are better than any similar convergence intervals found so far by similar iterative methods. A deeper analysis reveals that the “best” values of the parameters involved are those of the (projected) scalar Gauss-Seidel iterative method. A theoretical comparison of the “best” (projected) Gauss-Seidel method and the “best” modulus-based splitting Gauss-Seidel method is in favor of the former. A number of numerical examples support most of our theoretical findings.
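For reference, a sketch of the projected Gauss-Seidel iteration that the analysis singles out as supplying the “best” parameter values (the MAAOR method generalizes this scheme):

```python
import numpy as np

def projected_gauss_seidel_lcp(M, q, n_iter=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z . w = 0.  Positive diagonal entries are
    assumed, as holds for the H+-matrices the abstract considers."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual without z_i
            z[i] = max(0.0, -r / M[i, i])          # project onto z_i >= 0
    return z
```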

8.
A Boolean matrix is a matrix whose elements take the values 0 or 1; a fuzzy matrix is a matrix whose elements take values in the closed interval [0, 1]. Fuzzy matrices occur in the modeling of various fuzzy systems, with products usually determined by the “max(min)” rule arising from fuzzy set theory. In this paper, sufficient conditions are established for the convergence, under “max(min)” products, of the powers of a square fuzzy matrix and of a fuzzy state process.
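The max(min) product and the behaviour of matrix powers are easy to state in code. The sketch below iterates the powers and reports when, if ever, they become stationary; it illustrates the objects studied, not the paper's sufficient conditions.

```python
import numpy as np

def maxmin(A, B):
    """Max(min) product: (A o B)[i, j] = max_k min(A[i, k], B[k, j])."""
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def power_converges(A, max_pow=50):
    """Iterate max(min) powers of a square fuzzy matrix and return the
    first exponent at which the power sequence becomes stationary."""
    P = A.copy()
    for k in range(2, max_pow + 1):
        Q = maxmin(P, A)
        if np.array_equal(Q, P):
            return k - 1          # A^(k-1) = A^k: the powers have converged
        P = Q
    return None                   # no convergence within max_pow (may oscillate)

A = np.array([[0.5, 0.7], [0.4, 0.6]])
print(power_converges(A))         # 2: the powers are stationary from A^2 on
```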

9.
The usual mathematical method of representing uncertain quantities, for example the state of a dynamical system with uncertain initial conditions, is the random variable (RV). In many problems the space of elementary events Ω, on which the RVs are defined as functions of these events, is not concretely accessible, so the usual idea of a function (e.g., one given by a formula) loses much of its meaning. The representation of RVs is therefore often strikingly different from what is used for “normal” functions. With the help of RVs one can formulate Bayesian estimators for the uncertain quantity when additional information (usually noisy, incomplete measurements) becomes available. A common way to derive such an estimator is to use an instance of the projection theorem for Hilbert spaces. In this work we present a linear Bayesian estimation method which results from using a recently popular representation of an RV, the polynomial chaos expansion (PCE), also known as “white noise analysis”. The resulting method is completely deterministic as well as computationally efficient. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
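The projection-theorem estimator referred to here is the familiar linear (Kalman-type) update. The sketch below shows it on a plain mean/covariance pair, whereas the paper applies the same deterministic map to polynomial chaos coefficients.

```python
import numpy as np

def linear_bayes_update(x_prior, C_prior, H, R, y):
    """Linear Bayesian (Kalman-type) estimator obtained from the
    Hilbert-space projection theorem.  H is the observation operator,
    R the measurement-noise covariance, y the measurement."""
    S = H @ C_prior @ H.T + R                  # innovation covariance
    K = C_prior @ H.T @ np.linalg.inv(S)       # gain from the projection
    x_post = x_prior + K @ (y - H @ x_prior)   # updated mean
    C_post = C_prior - K @ H @ C_prior         # updated covariance
    return x_post, C_post
```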

10.
The computational complexity of discrete problems concerning the enumeration of solutions is addressed. The concept of an asymptotically efficient algorithm is introduced for the dualization problem, which is formulated as the problem of constructing the irreducible coverings of a Boolean matrix. This concept imposes weaker constraints on the number of “redundant” algorithmic steps than the previously introduced concept of an asymptotically optimal algorithm. When the number of rows in a Boolean matrix is no less than the number of columns (in which case asymptotically optimal algorithms for the problem have not been constructed), algorithms based on polynomial-time-delay enumeration of “compatible” sets of columns of the matrix are shown to be asymptotically efficient. A similar result is obtained for the problem of searching for the maximal conjunctions of a monotone Boolean function defined by a conjunctive normal form.
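For small instances the objects in question can be enumerated by brute force. The sketch below lists the irreducible (minimal) coverings of a Boolean matrix purely to pin down the definition; the paper's subject, asymptotically efficient enumeration, is precisely what this naive search is not.

```python
from itertools import combinations

def irreducible_coverings(B):
    """Enumerate the irreducible (minimal) coverings of a Boolean matrix B:
    sets of rows that jointly cover every column, no proper subset of
    which still does.  Brute force, for illustration only."""
    m, n = len(B), len(B[0])
    def covers(rows):
        return all(any(B[i][j] for i in rows) for j in range(n))
    found = []
    for k in range(1, m + 1):                 # by increasing size, so every
        for rows in combinations(range(m), k):  # smaller covering is known
            if covers(rows) and not any(set(f) <= set(rows) for f in found):
                found.append(rows)
    return found

B = [[1, 0, 1],
     [0, 1, 0],
     [1, 1, 0]]
print(irreducible_coverings(B))   # [(0, 1), (0, 2)]
```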

11.
This paper presents a high-level language for describing VLSI circuits designed as a collection of asynchronous concurrent processes. The notation is called “Synchronized Transitions,” and it can be used to describe designs from very high levels of abstraction down to the gate level. Both synchronous and asynchronous/self-timed circuits can be described, and it is not necessary to choose a particular type of circuitry in the early phases of a design. “Synchronized Transitions” programs may be used for experimenting with (simulating) a design at several levels, e.g., to explore different high-level decisions or to verify the gate level design. By observing certain constraints in a “Synchronized Transitions” program, it is possible to systematically transform it into an efficient layout.

12.
Moses and Nachum (1990) identified conceptual flaws (later echoed by Samet, 2010) in Bacharach’s (1985) generalization of Aumann’s (1976) seminal “agreeing to disagree” result by demonstrating that the crucial assumptions of like-mindedness and the Sure-Thing Principle are not meaningfully expressible in standard partitional information structures. This paper presents a new agreement theorem couched in “counterfactual information structures” that resolves these conceptual flaws. The new version of the Sure-Thing Principle introduced here, which accounts for beliefs at counterfactual states, is also shown to sit well with the intuition of the original version proposed by Savage (1972).

13.
In some proportional electoral systems with more than one constituency, the number of seats allotted to each constituency is pre-specified, as is the number of seats that each party is to receive at the national level. “Bidimensional allocation” of seats to parties within constituencies consists of converting the vote matrix V into an integer matrix of seats “as proportional as possible” to V that satisfies the constituency and party totals and an additional “zero-vote zero-seat” condition. In the current Italian electoral law this Bidimensional Allocation Problem (or Biproportional Apportionment Problem—BAP) is governed by an erroneous procedure that may produce an infeasible allocation, i.e., one that cannot satisfy all of the above conditions simultaneously. In this paper we focus on the feasibility aspect of BAP and, building on the theory of (0,1)-matrices with given line sums, formulate it for the first time as a “Matrix Feasibility Problem”. Starting from results obtained by Gale and Ryser in the 1960s, we consider the additional constraint that some cells of the output matrix must be equal to zero and extend their results to this case. For specific configurations of zeros in the vote matrix we show that a modified version of the Ryser procedure works well, and we also state necessary and sufficient conditions for the existence of a feasible solution. Since our analysis covers only special cases, its application to the electoral problem is still limited. Nevertheless, the paper provides new results in the area of combinatorial matrix theory for (0,1)-matrices with fixed zeros, results which also have practical application in some graph-related problems.
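The classical Gale-Ryser feasibility test (without the fixed-zero cells the paper adds) is easy to state in code. A sketch, with seat totals playing the role of line sums:

```python
def gale_ryser_feasible(row_sums, col_sums):
    """Gale-Ryser test: does a (0,1)-matrix with the given row and
    column sums exist?  (No fixed-zero cells handled here.)"""
    if sum(row_sums) != sum(col_sums):
        return False
    r = sorted(row_sums, reverse=True)
    for k in range(1, len(r) + 1):
        # dominance condition: sum of k largest row sums must not exceed
        # the total column capacity restricted to k rows
        if sum(r[:k]) > sum(min(c, k) for c in col_sums):
            return False
    return True

# Seats per constituency (rows) vs. seats per party (columns):
print(gale_ryser_feasible([3, 2, 1], [2, 2, 2]))   # True
print(gale_ryser_feasible([3, 3, 0], [1, 1, 4]))   # False
```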

14.
In the high-energy quantum-physics literature one finds statements such as “matrix algebras converge to the sphere”. Earlier I provided a general setting for understanding such statements, in which the matrix algebras are viewed as compact quantum metric spaces and convergence is with respect to a quantum Gromov–Hausdorff-type distance. More recently I have dealt with corresponding statements in the literature about vector bundles on spheres and matrix algebras. But physicists want, even more, to treat structures on spheres (and other spaces) such as Dirac operators, Yang–Mills functionals, etc., and to approximate these by corresponding structures on matrix algebras. In preparation for understanding what the Dirac operators should be, we determine here what the corresponding “cotangent bundles” should be for the matrix algebras, since it is on them that a “Riemannian metric” must be defined, which is then the information needed to determine a Dirac operator. (In the physics literature there are at least three inequivalent suggestions for the Dirac operators.)

15.
The multiple criteria decision making (MCDM) methods VIKOR and TOPSIS are based on an aggregating function representing “closeness to the ideal”, which originated in the compromise programming method. To eliminate the units of the criterion functions, VIKOR uses linear normalization and TOPSIS uses vector normalization. The VIKOR method of compromise ranking determines a compromise solution providing a maximum “group utility” for the “majority” and a minimum individual regret for the “opponent”. The TOPSIS method determines a solution with the shortest distance to the ideal solution and the greatest distance from the negative-ideal solution, but it does not consider the relative importance of these distances. A comparative analysis of the two methods is illustrated with a numerical example, showing their similarity and some differences.
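A compact sketch of the standard TOPSIS computation described above: vector normalization, ideal and negative-ideal solutions, closeness coefficient. The decision matrix and weights are made-up examples.

```python
import numpy as np

def topsis(X, w, benefit):
    """Plain TOPSIS: rank alternatives (rows of X) by relative
    closeness to the ideal solution.  benefit[j] is True for
    benefit criteria, False for cost criteria."""
    V = X / np.linalg.norm(X, axis=0) * w           # normalize and weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal
    d_neg = np.linalg.norm(V - nadir, axis=1)       # distance to negative-ideal
    return d_neg / (d_pos + d_neg)                  # rank by descending score

X = np.array([[250.0, 16, 12], [200.0, 16, 8], [300.0, 32, 16]])
w = np.array([0.4, 0.3, 0.3])
benefit = np.array([False, True, True])             # cost, benefit, benefit
print(topsis(X, w, benefit))
```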

16.
The success of a company increasingly depends on timely information (internal or external) being available to the right person at the right time for crucial managerial decision-making. Achieving such a “right time/right place” duet depends directly on database performance. A database system is a core component supporting modern business systems such as enterprise resource planning (ERP) systems, which integrate and support all enterprise processes, including product design and engineering, manufacturing, and other business functions, to achieve the highest efficiency and effectiveness of operations. We develop, and demonstrate through a proof-of-concept case study, a new “query-driven” heuristic for database design that seeks to identify database structures that perform robustly in dynamic settings with dynamic queries. Our focus is the design of efficient structures for processing read-only queries in complex environments. The heuristic begins with a detailed analysis of the relationships between diverse queries and the performance of different database structures. These relationships are then used in a series of steps that identify “robust” database structures that maintain high performance levels over a wide range of query patterns. We conjecture that the heuristic can facilitate efficient operations and effective decision-making in today’s dynamic environment.

17.
Operational research (OR) offers efficient tools to support managers in strategic decision-making processes. Data envelopment analysis (DEA) and multiple criteria decision aid (MCDA) are two important research areas in OR. Both domains are based on the evaluation of “objects” according to multiple “points of view”. Within the MCDA framework, choosing appropriate weights for the different criteria is often a problem in itself for decision makers. As a consequence, researchers have developed methodologies to help them during this elicitation phase. In this work we investigate how DEA can be used to propose weights in the context of the PROMETHEE II method. More precisely, we suggest a DEA-based extension of the so-called “decision maker brain” used in the GAIA plane (also known as PROMETHEE VI). The underlying idea is the computation of weights for PROMETHEE (the GAIA brain) that are compatible with the DEA analysis. We end the paper with a numerical example.
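For context, a sketch of the PROMETHEE II net-flow computation into which such DEA-derived weights would be fed. The linear preference function and its thresholds are illustrative assumptions, and the DEA elicitation step itself is not reproduced.

```python
import numpy as np

def promethee_ii(X, w, pref_threshold):
    """PROMETHEE II net flows with a simple linear preference function.
    X: alternatives x criteria (all criteria to be maximized),
    w: criterion weights, pref_threshold: full-preference thresholds."""
    n = len(X)
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = X[a] - X[b]                        # criterion-wise advantage
            P = np.clip(d / pref_threshold, 0, 1)  # linear preference in [0,1]
            pi = w @ P                             # aggregated preference index
            phi[a] += pi / (n - 1)                 # contributes to leaving flow
            phi[b] -= pi / (n - 1)                 # contributes to entering flow
    return phi                                      # rank by descending phi

X = np.array([[8.0, 7.0], [6.0, 9.0], [9.0, 5.0]])  # alternatives x criteria
w = np.array([0.6, 0.4])
print(promethee_ii(X, w, pref_threshold=np.array([2.0, 2.0])))
```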

18.
This paper presents a value-at-risk (VaR) model based on the singular value decomposition (SVD) of a sparsity matrix for voltage risk identification in power supply networks. The matrix-based model provides a more computationally efficient risk assessment method than conventional models such as probability analysis and sensitivity analysis, and provides decision makers in the power supply industry with sufficient information to minimize the risk of network collapse or blackouts. The VaR model is incorporated into a risk identification system (RIS) programmed in the MATLAB environment. The feasibility of the proposed approach is confirmed by performing a series of risk assessment simulations using the standard American Electric Power (AEP) test models (14-, 30- and 57-node networks) and a real-world power network (the Taiwan power network). In general, the simulation results confirm the ability of the matrix-based VaR model to efficiently identify risks in power supply networks.
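One common way an SVD enters voltage-risk analysis is through the smallest singular value of a Jacobian-like network matrix as a proximity-to-collapse indicator. The sketch below shows that reading; it is our assumption, not the paper's exact VaR construction.

```python
import numpy as np

def min_singular_value_index(J):
    """Smallest singular value of a Jacobian-like network matrix:
    a standard proximity-to-voltage-collapse indicator.  Small values
    mean the matrix is close to singular (higher risk)."""
    sigma = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return sigma[-1]

# Hypothetical 3-node toy matrix: nearly zero row sums make it
# almost singular, so the index is small.
J = np.array([[10.0, -5.0, -5.0],
              [-5.0,  8.0, -3.0],
              [-5.0, -3.0,  8.1]])
print(min_singular_value_index(J))
```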

19.
For a multivariate normal distribution with unknown mean vector and unknown dispersion matrix, a sequential procedure for estimating the unknown mean vector is suggested. The procedure is shown to be asymptotically “risk efficient” in the sense of Starr (Ann. Math. Statist. (1966), 1173–1185), and the asymptotic order of the “regret” (see Starr and Woodroofe, Proc. Nat. Acad. Sci. 63 (1969), 285–288) is given. The moderate-sample behaviour of the procedure is also studied using Monte Carlo techniques. Finally, the asymptotic normality of the stopping time is proved.

20.
When a linear model is chosen by searching for the best subset among a set of candidate predictors, a fixed penalty such as that imposed by the Akaike information criterion may penalize model complexity inadequately, leading to biased model selection. We study resampling-based information criteria that aim to overcome this problem through improved estimation of the effective model dimension. The first proposed approach builds upon previous work on bootstrap-based model selection. We then propose a novel approach based on cross-validation. Simulations and analyses of a functional neuroimaging data set illustrate the strong performance of our resampling-based methods, which are implemented in a new R package.
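As a generic stand-in for the cross-validation idea (in Python rather than the authors' R package): score every candidate predictor subset by out-of-fold error instead of a fixed AIC-style penalty.

```python
import numpy as np
from itertools import combinations

def cv_best_subset(X, y, k_folds=5, seed=0):
    """Cross-validated best-subset selection for a linear model.
    Exhaustive over subsets, so intended only for small p."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % k_folds          # balanced random folds
    def cv_err(cols):
        err = 0.0
        for f in range(k_folds):
            tr, te = folds != f, folds == f
            Xtr = np.c_[np.ones(tr.sum()), X[tr][:, cols]]   # add intercept
            Xte = np.c_[np.ones(te.sum()), X[te][:, cols]]
            beta, *_ = np.linalg.lstsq(Xtr, y[tr], rcond=None)
            err += np.sum((y[te] - Xte @ beta) ** 2)         # out-of-fold SSE
        return err / n
    subsets = [c for k in range(1, p + 1) for c in combinations(range(p), k)]
    return min(subsets, key=lambda c: cv_err(list(c)))
```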
