Similar Documents
20 similar documents found (search time: 15 ms)
1.
A basic system is a nonempty collection of finite, pairwise incomparable subsets of a set such that, for any two subsets (bases) in the collection, any element of one basis can be replaced by some element of the other to give another basis in the collection. In a basic system, any subset of one basis can be bijectively exchanged for distinct elements of another; for a finite ground set, basis complements enjoy the same properties; and certain conditions guarantee that two such systems on the same set contain a common basis. All proofs are new, elementary, and set-theoretic. In addition, they suggest efficient algorithmic procedures, whose efficiencies are calculated.
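The exchange property above is the defining axiom of matroid bases, and it can be tested directly on small examples. Below is a minimal Python sketch (not from the paper) that checks the single-element exchange axiom for an explicit collection of bases; the example collections are assumptions chosen for illustration.

```python
from itertools import combinations

def is_basic_system(bases):
    """Check the single-element exchange axiom: for bases B1, B2 and any
    x in B1 - B2, some y in B2 - B1 makes (B1 - {x}) | {y} a basis."""
    bases = {frozenset(b) for b in bases}
    for b1 in bases:
        for b2 in bases:
            for x in b1 - b2:
                if not any((b1 - {x}) | {y} in bases for y in b2 - b1):
                    return False
    return True

# Bases of the uniform matroid U(2, 4): all 2-element subsets of {1, 2, 3, 4}.
print(is_basic_system(combinations(range(1, 5), 2)))   # True
print(is_basic_system([{1, 2}, {3, 4}]))               # False: no single swap links them
```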

2.
Michael Schmidt, PAMM, 2006, 6(1): 15–18
Many model reduction techniques take a semi-discretization of the original PDE model as their starting point and then aim at an accurate approximation of its input/output (i/o) map. In this contribution, we discuss the direct discretization of the i/o map of the original infinite-dimensional system. First, the input and output signals are discretized in space and time; second, the system dynamics are approximated in the form of the underlying evolution operator, leading to an approximate i/o map with a matrix representation. The discretization framework, corresponding error estimates, an SVD-based system reduction method, and a numerical application in an optimization problem are presented for the example of a linear time-invariant heat equation.
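As a hedged illustration of the SVD-based reduction step, the sketch below truncates a matrix representation G of a discretized i/o map to rank r. G here is random placeholder data rather than the heat-equation discretization from the paper, but the identity it prints (spectral-norm truncation error equals the first discarded singular value) is general.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 150))      # placeholder i/o matrix (outputs x inputs)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
r = 10                                   # reduced rank
G_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Best rank-r approximation: the spectral-norm error is the (r+1)-th singular value.
print(np.linalg.norm(G - G_r, 2), s[r])
```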

3.
In isogeometric analysis, NURBS basis functions are used as shape functions in an isoparametric finite-element-type discretization. Among other advantageous features, this approach provides exact and smooth representations of a broad class of computational domains with curved boundaries. This discretization method therefore seems especially convenient for computational shape optimization, where a smooth, CAD-like parametrization of the optimal geometry is desired. Choosing the boundary control point coordinates of an isogeometric discretization as design variables, an additional design model can be avoided. However, for larger numbers of design variables, typical drawbacks appear, such as the oscillating boundaries known from early node-based shape optimization methods. To overcome this problem, we propose a fictitious energy regularization: the strain energy of a fictitious deformation, which maps the initial to the optimized domain, is employed as a regularizing term in the optimization problem. Moreover, this deformation is used to efficiently move the dependent nodes within the domain in each step of the optimization process.
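A minimal sketch of the fictitious-energy idea, reduced to one dimension: the design variables are control point positions, and the regularizing term is the strain energy 0.5·dᵀKd of the fictitious deformation d mapping the initial to the optimized configuration. The spring-chain stiffness K, the stand-in shape functional, and the target data are all hypothetical simplifications of the isogeometric setting.

```python
import numpy as np
from scipy.optimize import minimize

n = 20
x0 = np.linspace(0.0, 1.0, n)                    # initial control point positions

# Spring-chain (1-D Laplacian) stiffness: fictitious strain energy 0.5 * d K d
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
target = x0 + 0.05 * np.sin(6 * np.pi * x0)      # hypothetical shape data to match

def objective(x, alpha):
    d = x - x0                                   # fictitious deformation field
    fit = np.sum((x - target) ** 2)              # stand-in shape functional
    return fit + alpha * 0.5 * (d @ K @ d)       # fictitious-energy regularizer

for alpha in (0.0, 1e-2, 1.0):                   # stronger regularization -> smoother shape
    x_opt = minimize(objective, x0, args=(alpha,)).x
    print(alpha, np.round(np.max(np.abs(x_opt - x0)), 4))
```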

4.
There is a wide range of iterative methods in infinite-dimensional spaces for treating variational equations or variational inequalities. As a rule, computational handling of problems in infinite-dimensional spaces requires some discretization, and any useful discretization of the original problem leads to a family of problems over finite-dimensional spaces. Thus, two infinite techniques, namely discretization and iteration, are embedded into each other. In the present paper, the behaviour of truncated iterative methods is studied, where only a finite number of steps is performed at each discretization level. No accuracy-dependent a posteriori stopping criterion is used. From an algorithmic point of view, the considered methods are of iteration–discretization type. The major aim is to provide a convergence analysis for the introduced abstract iteration–discretization methods. Special emphasis is given to algorithms for the treatment of variational inequalities with strongly monotone operators over fixed point sets of quasi-nonexpansive mappings.
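A generic sketch of the iteration–discretization structure under simplifying assumptions: a one-dimensional Poisson problem is discretized on successively finer grids, a fixed number of damped Richardson steps (no accuracy-dependent stopping test) is performed at each level, and the iterate is prolonged to the next grid by interpolation. The model problem and step counts are illustrative, not from the paper.

```python
import numpy as np

def laplacian(n):
    """1-D Dirichlet Laplacian on n interior points, mesh width h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, h

def iteration_discretization(levels, steps_per_level):
    u = np.zeros(levels[0])
    for n in levels:
        A, h = laplacian(n)
        x = np.linspace(h, 1 - h, n)
        f = np.pi**2 * np.sin(np.pi * x)       # -u'' = f, exact solution sin(pi x)
        if len(u) != n:                        # prolong the coarse iterate
            h_old = 1.0 / (len(u) + 1)
            u = np.interp(x, np.linspace(h_old, 1 - h_old, len(u)), u)
        tau = h**2 / 4                         # damped Richardson step size
        for _ in range(steps_per_level):       # fixed step count: no accuracy-
            u = u + tau * (f - A @ u)          # dependent stopping criterion
    return x, u

x, u = iteration_discretization(levels=[7, 15, 31], steps_per_level=200)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # combined iteration/discretization error
```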

5.
Motivated by the benefits of discretization in optimal control problems, we consider the possibility of discretizing pursuit-evasion games. Two approaches are introduced. In the first, the solution of the necessary conditions of the continuous-time game is decomposed into ordinary optimal control problems that can be solved using discretization and nonlinear programming techniques. In the second, the game is discretized and transformed into a bilevel programming problem, which is solved using a first-order feasible-direction method. Although the starting points of the two approaches differ, in practice they lead to the same solution algorithm. We demonstrate the practicality of the discretization by solving open-loop representations of feedback solutions for a complex pursuit-evasion game between a realistically modeled aircraft and a missile, with terminal time as the payoff. The solutions are compared with those obtained via an indirect method.
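To make the "discretize, then optimize" idea concrete, here is a deliberately tiny stand-in (nothing like the aircraft-missile model of the paper): both players pick constant headings from a discretized control set, the payoff is terminal distance rather than terminal time, and the bilevel structure appears as an outer maximization for the evader over inner pursuer best responses.

```python
import numpy as np

vp, ve, T = 1.5, 1.0, 1.0                        # pursuer is faster; fixed horizon
p0, e0 = np.array([0.0, 0.0]), np.array([1.0, 0.5])

def final_distance(theta_p, theta_e):
    """Terminal distance when both players hold constant headings."""
    p = p0 + vp * T * np.array([np.cos(theta_p), np.sin(theta_p)])
    e = e0 + ve * T * np.array([np.cos(theta_e), np.sin(theta_e)])
    return np.linalg.norm(p - e)

headings = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)   # discretized controls

# Bilevel structure: outer maximization (evader) over inner best responses (pursuer).
value, saddle = -np.inf, None
for te in headings:
    tp = min(headings, key=lambda t: final_distance(t, te))
    d = final_distance(tp, te)
    if d > value:
        value, saddle = d, (tp, te)
print(value, saddle)
```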

6.
The paper deals with a combination of pathfollowing methods (embedding approach) and feasible descent direction methods (so-called jumps) for solving a nonlinear optimization problem with equality and inequality constraints. Since the method proposed here uses jumps from one connected component to another, more than one connected component of the solution set of the corresponding one-parametric problem can be followed numerically. It is assumed that the problem under consideration belongs to a generic subset introduced by Jongen, Jonker and Twilt. Existing methods of this type require each starting point of a jump to be an endpoint of a branch of local minimizers. In this paper the authors propose a new method that allows a larger set of starting points for the jumps, which can be constructed at bifurcation and turning points of the solution set. The topological properties of the cases in which the method is not successful are analyzed, and the role of constraint qualifications in this context is discussed. Furthermore, the new method is applied to a so-called modified standard embedding, a particular construction without equality constraints. Finally, an algorithmic version of the new method as well as computational results are presented.

7.
8.
The goal of harmonic analysis on a (noncommutative) group is to decompose the most "natural" unitary representations of this group (like the regular representation) into irreducible ones. The infinite-dimensional unitary group U(∞) is one of the basic examples of "big" groups whose irreducible representations depend on infinitely many parameters. Our aim is to explain what harmonic analysis on U(∞) consists of. We deal with unitary representations of a reasonable class, which are in 1-1 correspondence with characters (central, positive definite, normalized functions on U(∞)). The decomposition of any representation of this class is described by a probability measure (called the spectral measure) on the space of indecomposable characters. The indecomposable characters were found by Dan Voiculescu in 1976. The main result of the present paper consists in explicitly constructing a 4-parameter family of "natural" representations and computing their characters. We view these representations as a substitute for the nonexisting regular representation of U(∞). We state the problem of harmonic analysis on U(∞) as the problem of computing the spectral measures of these "natural" representations. A solution to this problem is given in the next paper (Harmonic analysis on the infinite-dimensional unitary group and determinantal point processes, math/0109194, to appear in Ann. Math.), joint with Alexei Borodin. We also prove a few auxiliary general results. In particular, it is proved that the spectral measure of any character of U(∞) can be approximated by a sequence of (discrete) spectral measures for the restrictions of the character to the compact unitary groups U(N). This fact is the starting point for computing spectral measures.
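In symbols (with notation assumed here rather than quoted from the paper), the decomposition described above reads:

```latex
% Every character chi of U(infinity) is a unique "continuous convex
% combination" of the indecomposable (extreme) characters chi^omega,
% omega in Omega; the mixing measure P is the spectral measure of chi:
\[
  \chi(U) \;=\; \int_{\Omega} \chi^{\omega}(U)\, P(d\omega),
  \qquad U \in U(\infty).
\]
```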

9.
Russell's critique of substance monism is an ideal starting point from which to understand some main concepts in Spinoza's difficult metaphysics. This paper provides an in-depth examination of Spinoza's proof that only one substance exists. On this basis, it rejects Russell's interpretation of Spinoza's theory of reality as founded upon the logical doctrine that all propositions consist of a predicate and a subject. An alternative interpretation is offered: Spinoza's substance is not a bearer of properties, as Russell implied, but an eternally active, self-actualizing creative power. Eventually, Spinoza the Monist and Russell the Pluralist are at one in holding that process and activity, rather than enduring things, are the most fundamental realities.

10.
11.
A numerical decomposition method proposed by Adomian provides solutions to nonlinear or stochastic continuous-time systems without the usual restrictive assumptions. It is applicable to differential, delay differential, integro-differential, and partial differential equations without the need for linearization, and it also handles uncoupled boundary conditions and delay systems. In this paper, a new time discretization method for developing a sample-data representation of a nonlinear continuous-time input-driven dynamical system is proposed. The method is based on the zero-order hold (ZOH) assumption combined with the Adomian decomposition method, a combination that offers distinct algorithmic and computational advantages. The method is first applied to a linear input-driven dynamical system to explicitly derive an exact sample-data representation; it is then applied to a nonlinear input-driven dynamical system, yielding both exact and approximate sample-data representations, the latter being best suited for practical applications. To evaluate performance, the proposed discretization procedure was tested in a simulation case study involving an illustrative two-degree-of-freedom mechanical system with nonlinear behavior, under various control and input variable profiles. Compared with the results of a Taylor–Lie series expansion method, the proposed algorithm demonstrated improved performance and efficiency.
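The ZOH half of the construction can be made concrete for the linear case. For ẋ = Ax + Bu with u held constant over each sampling interval T, the exact sample-data representation is x_{k+1} = A_d x_k + B_d u_k, and both matrices come out of a single augmented matrix exponential. The sketch below shows only this standard step (it does not implement the Adomian part); the toy system matrices are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    """Exact ZOH discretization via the augmented-matrix exponential:
    expm([[A, B], [0, 0]] * T) = [[A_d, B_d], [0, I]]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # placeholder damped-oscillator dynamics
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.1)
print(Ad, Bd, sep="\n")
```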

12.
Finding interesting patterns in data is one of the most important problems in data mining, and it has been studied actively for more than a decade. However, which patterns are interesting and which are not remains a largely open problem. One approach to detecting the interesting patterns (in a predefined class of patterns) is to assign quality values to potentially interesting patterns and to declare a pattern interesting if its quality value (i.e., the interestingness of the pattern) exceeds a given threshold. Again, it is very difficult to find a threshold value, and a way of determining the quality values, such that the collection of patterns with quality values above the threshold contains almost all truly interesting patterns and only a few uninteresting ones. To characterize interesting patterns more accurately, the use of constraints to further prune the pattern collection has been proposed. However, most research on constrained pattern discovery has focused on structural constraints on the pattern collections and the patterns themselves. We take a complementary approach and focus on constraining the quality values of the patterns. We propose quality value simplifications as a complement to structural constraints on patterns. As a special case of quality value simplifications, we consider discretizing the quality values. We analyze the worst-case error of certain discretization functions and give efficient discretization algorithms minimizing several loss functions. In addition, we show that discretizations of the quality values can be used to obtain small approximate condensed representations of collections of interesting patterns. We evaluate the proposed condensation approach experimentally using frequent itemsets.
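As a hedged illustration of the special case mentioned above, the sketch below discretizes quality values in [0, 1] onto a uniform grid of k levels and checks the worst-case error bound of half the grid spacing; the paper's loss-minimizing discretization algorithms are more general than this uniform rounding.

```python
import numpy as np

def discretize(quality, k):
    """Round quality values in [0, 1] onto a uniform grid of k levels."""
    levels = np.linspace(0.0, 1.0, k)
    idx = np.rint(quality * (k - 1)).astype(int)
    return levels[idx]

q = np.random.default_rng(1).random(1000)        # stand-in pattern quality values
for k in (5, 17, 65):
    err = np.max(np.abs(q - discretize(q, k)))
    print(k, err, "<=", 0.5 / (k - 1))           # worst case: half the grid spacing
```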

13.
Optimization, 2012, 61(8): 1283–1295
In this article we present the fundamental idea, concepts, and theorems of a basic line search algorithm for solving linear programming problems, which can be regarded as an extension of the simplex method. Whereas the simplex method iterates, via pivot operations, from one basic point to an improved adjacent basic point, the basic line search algorithm moves, also via pivot operations, from a basic line containing two basic feasible points to an improved basic line whose two basic feasible points have objective values no worse than those of the two points on the previous basic line. The basic line search algorithm may skip some adjacent vertices, so it can converge to an optimal solution faster than the simplex method. For example, for a 2-dimensional problem, the basic line search algorithm can find an optimal solution in a single iteration.

14.
Our main interest in this paper is nonlinear approximation. The basic idea behind nonlinear approximation is that the elements used in the approximation do not come from a fixed linear space but are allowed to depend on the function being approximated. While the scope of this paper is mostly theoretical, we note that this form of approximation appears in many numerical applications such as adaptive PDE solvers, compression of images and signals, statistical classification, and so on. The standard problem in this regard is m-term approximation, where one fixes a basis and approximates a target function by a linear combination of m terms of the basis. When the basis is a wavelet basis or a basis of other waveforms, this type of approximation is the starting point for compression algorithms. We are interested in the quantitative aspects of this type of approximation: namely, we want to understand the properties (usually smoothness) of the function that govern its rate of approximation in a given norm (or metric). We are also interested in stable algorithms for finding good or near-best approximations using m terms; some of our earlier work has introduced and analyzed such algorithms. More recently, another, more complicated form of nonlinear approximation has emerged, which we call highly nonlinear approximation. It takes many forms but has the basic ingredient that a basis is replaced by a larger, usually redundant, system of functions. Types of approximation that fall into this general category include mathematical frames, adaptive pursuit (or greedy algorithms), and adaptive basis selection. Redundancy on the one hand offers much promise for greater efficiency in terms of approximation rate, but on the other hand gives rise to highly nontrivial theoretical and practical problems. With this motivation, our recent work and the current activity focus on nonlinear approximation both in the classical form of m-term approximation (where several important problems remain unsolved) and in the form of highly nonlinear approximation, where a theory is only now emerging.
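A minimal sketch of m-term approximation in an orthonormal basis (here the discrete Fourier basis, chosen for illustration): keep the m largest-magnitude coefficients and zero the rest, which for an orthonormal system is the best m-term approximation in the L2 sense.

```python
import numpy as np

def m_term(signal, m):
    """Best m-term approximation in the (orthonormal) discrete Fourier basis:
    keep the m largest-magnitude coefficients, zero out the rest."""
    c = np.fft.fft(signal, norm="ortho")
    keep = np.argsort(np.abs(c))[-m:]
    c_m = np.zeros_like(c)
    c_m[keep] = c[keep]
    return np.real(np.fft.ifft(c_m, norm="ortho"))

t = np.linspace(0.0, 1.0, 512, endpoint=False)
f = np.sign(np.sin(2 * np.pi * 3 * t))           # square wave: decaying coefficients
for m in (4, 16, 64):
    err = np.linalg.norm(f - m_term(f, m)) / np.linalg.norm(f)
    print(m, err)                                # error decreases as m grows
```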

15.
Coalitional games raise a number of important questions from the point of view of computer science, key among them being how to represent such games compactly and how to efficiently compute solution concepts under such representations. Marginal contribution nets (MC-nets), introduced by Ieong and Shoham, are one of the simplest and most influential representation schemes for coalitional games. MC-nets are a rule-based formalism in which rules take the form pattern → value, where "pattern" is a Boolean condition over agents and "value" is a numeric value. Ieong and Shoham showed that, for a class of what we will call "basic" MC-nets, where patterns are constrained to be conjunctions of literals, marginal contribution nets permit the easy computation of solution concepts such as the Shapley value. However, there are very natural classes of coalitional games that require an exponential number of such basic MC-net rules. We present read-once MC-nets, a new class of MC-nets that is provably more compact than basic MC-nets while retaining their attractive computational properties. We show how the techniques we develop for read-once MC-nets can be applied to other domains, in particular to computing solution concepts in network flow games on series-parallel networks.
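For basic MC-nets whose patterns are conjunctions of positive literals, the Shapley value computation is especially simple: by symmetry, a rule p → v contributes v/|p| to each agent in p. Below is a sketch under that assumption (rules with negated literals require a different formula and are omitted); the example rules are hypothetical.

```python
from collections import defaultdict

def shapley_basic_mcnet(rules):
    """Shapley values for a basic MC-net whose rules are
    (frozenset_of_agents, value) pairs with positive literals only."""
    phi = defaultdict(float)
    for pattern, value in rules:
        for agent in pattern:
            phi[agent] += value / len(pattern)   # symmetric split within each rule
    return dict(phi)

# Example: "a and b -> 5", "b -> 2"
rules = [(frozenset({"a", "b"}), 5.0), (frozenset({"b"}), 2.0)]
print(shapley_basic_mcnet(rules))                # {'a': 2.5, 'b': 4.5}
```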

16.
Semidefinite optimization is a strong tool in the study of NP-hard combinatorial optimization problems. On the one hand, semidefinite optimization problems are in principle solvable in polynomial time (to fixed precision); on the other hand, their modeling power allows quadratic constraints to be handled naturally. In contrast to linear optimization with the efficient Simplex method, the algorithmic treatment of semidefinite problems is much more subtle and in practice considerably more expensive. This survey-type article is meant as an introduction to this exciting area for the non-expert. The basic concepts are explained on a mostly intuitive level, and pointers to advanced topics are given. We provide a variety of semidefinite optimization models for a selection of graph optimization problems and give a flavour of their practical impact.
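As one concrete flavour of the modeling power mentioned above, here is a sketch of the standard Goemans-Williamson SDP relaxation of Max-Cut, assuming the cvxpy package and a small placeholder graph:

```python
import numpy as np
import cvxpy as cp

# Max-Cut SDP relaxation: maximize sum_{i<j} w_ij (1 - X_ij) / 2 over
# PSD matrices X with unit diagonal (X_ij = v_i . v_j for unit vectors v_i).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = W.shape[0]

X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)   # /4: each edge counted twice
problem = cp.Problem(objective, [cp.diag(X) == 1])
problem.solve()
print(problem.value)    # upper bound on the maximum cut weight
```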

17.
In this paper the problem of stopping the Multistart algorithm for global optimization is considered. The algorithm consists of repeatedly performing local searches from randomly generated starting points. The crucial point in this algorithmic scheme is the design of a stopping criterion; the approach analyzed in this paper stops the sequential sampling as soon as the cost of further local searches exceeds the expected benefit, i.e., the possibility of discovering a better optimum. Stopping rules are thoroughly investigated both from a theoretical point of view and computationally, via extensive simulation. The latter clearly shows that the simple 1-step look-ahead rule may achieve surprisingly good results in terms of computational cost versus final accuracy. The research of the second author was partially supported by Progetto MPI 40% Metodi di Ottimizzazione per le Decisioni.
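A naive sketch of the sequential-sampling idea, with one loudly labeled assumption: the chance that one more local search discovers a new optimum is estimated by the Good-Turing statistic (the fraction of local optima seen exactly once so far), and sampling stops once the estimated expected benefit drops below the per-search cost. This is an illustration only, not one of the rules analyzed in the paper.

```python
import numpy as np
from collections import Counter
from scipy.optimize import minimize

def multistart(f, bounds, cost=0.01, gain=1.0, max_starts=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    optima, best = Counter(), np.inf
    for n in range(1, max_starts + 1):
        res = minimize(f, rng.uniform(lo, hi))    # local search from a random start
        best = min(best, res.fun)
        optima[tuple(np.round(res.x, 3))] += 1    # cluster optima by rounding
        # Good-Turing estimate of the probability that the next local
        # search lands in an as-yet-unseen basin of attraction.
        p_new = sum(c == 1 for c in optima.values()) / n
        if n >= 10 and p_new * gain < cost:       # expected benefit < search cost
            return best, len(optima), n
    return best, len(optima), max_starts

rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(multistart(rastrigin, bounds=[(-4.0, 4.0), (-4.0, 4.0)]))
```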

18.
We introduce a new algorithmic structure, called the assigning algorithm, built from a finite collection of operators from a subclass of strictly quasi-nonexpansive operators. The new algorithm allows the iteration vectors to take steps according to a pattern based on a connected directed acyclic graph. The sequential, simultaneous, and string-averaging methods for solving convex feasibility problems are special cases of the new algorithm, which may be used to reduce the idle time of processors in parallel implementations. We give a convergence analysis for this algorithmic structure with perturbations, and we extend some existence results for the split common fixed point problem based on the new algorithm. The performance of the new algorithm is illustrated with numerical examples from computed tomography.
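A hedged sketch of the string-averaging special case mentioned above, for a convex feasibility problem given by halfspaces (whose projections are prototypical quasi-nonexpansive operators); the strings, weights, and problem data are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto the halfspace {x : a.x <= b}."""
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def string_averaging(halfspaces, strings, weights, x0, iters=100):
    """Each string applies its projections sequentially; the string
    endpoints are then averaged with the given weights."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        ends = []
        for s in strings:
            y = x
            for i in s:                       # sequential sweep along one string
                y = project_halfspace(y, *halfspaces[i])
            ends.append(y)
        x = sum(w * e for w, e in zip(weights, ends))
    return x

# Feasibility problem: three halfspaces in the plane.
H = [(np.array([1.0, 0.0]), 1.0),
     (np.array([0.0, 1.0]), 1.0),
     (np.array([-1.0, -1.0]), 0.0)]
x = string_averaging(H, strings=[[0, 1], [2]], weights=[0.5, 0.5], x0=[5.0, -3.0])
print(x, [float(a @ x - b) for a, b in H])    # residuals <= 0 means feasible
```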

19.
Alternative representations of boundary integral operators corresponding to elliptic boundary value problems are developed as a starting point for numerical approximations such as Galerkin boundary elements, including numerical quadrature and panel clustering. These representations have the advantage that the integrands of the integral operators have reduced singular behaviour, allowing the order of the numerical approximations to be chosen much lower than for the classical formulations. Low-order discretisations for the single layer integral equations as well as for the classical double layer potential and the hypersingular integral equation are considered. We present fully discrete Galerkin boundary element methods for which the storage requirements and the CPU time grow only linearly with the number of unknowns.


20.
The study explores the nature of students' conceptual understanding of calculus. Twenty students of engineering were asked to reflect in writing on the meaning of the concepts of limit and integral. A sub-sample of four students was selected for subsequent interviews, which explored in detail the students' understandings of the two concepts. Intentional analysis of the students' written and oral accounts revealed that the students were expressing their understanding of limit and integral within an algorithmic context, in which the very 'operations' of these concepts were seen as crucial. The students also displayed great confidence in their ability to deal with these concepts. Implications for the development of a conceptual understanding of calculus are discussed, and it is argued that developing understanding within an algorithmic context can be seen as a stepping stone towards a more complete conceptual understanding of calculus.
