Similar Documents
1.
Parallel stochastic gradient algorithms for large-scale matrix completion
This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or $\gamma_2$-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speed-up nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
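As a rough illustration of the kind of update Jellyfish parallelizes, the following sketch runs a plain, single-threaded incremental gradient method on a tiny matrix completion instance, with the nuclear-norm regularizer replaced by the usual factored Frobenius penalty. The function names, step size, and problem data are all illustrative and not taken from the paper:

```python
import numpy as np

def incremental_mc_step(L, R, i, j, value, step=0.01, mu=1e-3):
    """One incremental gradient step on observed entry (i, j).

    The matrix is modeled as L @ R.T; the nuclear-norm regularizer is
    replaced by the factored penalty (mu/2) * (||L||_F^2 + ||R||_F^2).
    """
    err = L[i] @ R[j] - value           # residual on this entry
    grad_Li = err * R[j] + mu * L[i]    # gradient w.r.t. row i of L
    grad_Rj = err * L[i] + mu * R[j]    # gradient w.r.t. row j of R
    L[i] -= step * grad_Li
    R[j] -= step * grad_Rj

# Tiny demo: recover a rank-1 matrix from its (fully observed) entries.
rng = np.random.default_rng(0)
true = np.outer([1.0, 2.0, 3.0], [1.0, -1.0])
L = rng.normal(scale=0.1, size=(3, 2))
R = rng.normal(scale=0.1, size=(2, 2))
obs = [(i, j, true[i, j]) for i in range(3) for j in range(2)]
for _ in range(2000):
    rng.shuffle(obs)                    # random ordering of increments
    for i, j, v in obs:
        incremental_mc_step(L, R, i, j, v, step=0.05)
print(np.round(L @ R.T, 2))
```

Jellyfish's actual contribution, the biased ordering of increments that makes lock-free parallel execution safe, is deliberately omitted from this sketch.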

2.
Even though OpenMath has been around for more than 10 years, there is still confusion about the “semantics of OpenMath”. As the recent MathML3 recommendation semantically bases Content MathML on OpenMath Objects, this question becomes more pressing. One source of confusion about OpenMath semantics is that it is given on two levels: a very weak algebraic semantics for expression trees, which is extended by considering mathematical properties in content dictionaries that interpret the meaning of (constant) symbols. While this two-leveled way to interpret objects is well understood in logic, it has not been spelt out rigorously for OpenMath. We present two denotational semantics for OpenMath: a construction-oriented semantics that achieves full coverage of all legal OpenMath expressions at the cost of great conceptual complexity, and a symbol-oriented one for a subset of OpenMath expressions. This subset is given by a variant of the OpenMath 2 role system, which, we claim, does not exclude any representations of meaningful mathematical objects.

3.
RENS     
This article introduces rens, the relaxation enforced neighborhood search, a large neighborhood search algorithm for mixed integer nonlinear programs (MINLPs). It uses a sub-MINLP to explore the set of feasible roundings of an optimal solution $\bar{x}$ of a linear or nonlinear relaxation. The sub-MINLP is constructed by fixing integer variables $x_j$ with $\bar{x}_j \in \mathbb{Z}$ and bounding the remaining integer variables to $x_j \in \{\lfloor \bar{x}_j \rfloor, \lceil \bar{x}_j \rceil\}$. We describe two different applications of rens: as a standalone algorithm to compute an optimal rounding of the given starting solution, and as a primal heuristic inside a complete MINLP solver. We use the former to compare different kinds of relaxations and the impact of cutting planes on the so-called roundability of the corresponding optimal solutions. We further utilize rens to analyze the performance of three rounding heuristics implemented in the branch-cut-and-price framework scip. Finally, we study the impact of rens when it is applied as a primal heuristic inside scip. All experiments were performed on three publicly available test sets of mixed integer linear programs (MIPs), mixed integer quadratically constrained programs (MIQCPs), and MINLPs, using solely software which is available in source code. It turns out that for these problem classes 60 to 70% of the instances have roundable relaxation optima and that the success rate of rens does not depend on the percentage of fractional variables. Last but not least, rens applied as a primal heuristic complements nicely with existing primal heuristics in scip.
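The construction of the sub-MINLP's variable bounds fits in a few lines. This is a hypothetical helper written from the description above, not code from the paper; the variable names and the integrality tolerance are assumptions:

```python
import math

def rens_bounds(xbar, integer_vars, eps=1e-6):
    """Build the sub-MINLP variable bounds that RENS explores.

    xbar         : relaxation-optimal values, indexed by variable
    integer_vars : indices that must be integral in the original model
    Returns {j: (lb, ub)}: a variable is fixed when xbar[j] is
    (near-)integral, otherwise restricted to the two nearest integers.
    """
    bounds = {}
    for j in integer_vars:
        v = xbar[j]
        if abs(v - round(v)) <= eps:        # already integral: fix it
            bounds[j] = (round(v), round(v))
        else:                               # fractional: floor/ceil
            bounds[j] = (math.floor(v), math.ceil(v))
    return bounds

print(rens_bounds([0.0, 2.4, 3.0, 1.7], integer_vars=[1, 2, 3]))
# {1: (2, 3), 2: (3, 3), 3: (1, 2)}
```

Every rounding of the relaxation solution lies inside these bounds, which is what makes the sub-MINLP a search over "feasible roundings".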

4.
LIM is not slim     
In this paper LIM, a recently proposed impartial combinatorial ruleset, is analyzed. A formula to describe the $\mathcal{G}$-values of LIM positions is given, by way of analyzing an equivalent combinatorial ruleset LIM’, closely related to the classical nim. Also, an enumeration of $\mathcal{P}$-positions of LIM with $n$ stones, and its relation to the Ulam-Warburton cellular automaton, is presented.

5.
In this article, we propose an algorithm, nesta-lasso, for the lasso problem, i.e., an underdetermined linear least-squares problem with a 1-norm constraint on the solution. We prove under the assumption of the restricted isometry property (rip) and a sparsity condition on the solution, that nesta-lasso is guaranteed to be almost always locally linearly convergent. As in the case of the algorithm nesta, proposed by Becker, Bobin, and Candès, we rely on Nesterov’s accelerated proximal gradient method, which takes $O(\sqrt{1/\varepsilon})$ iterations to come within $\varepsilon > 0$ of the optimal value. We introduce a modification to Nesterov’s method that regularly updates the prox-center in a provably optimal manner. The aforementioned linear convergence is in part due to this modification. In the second part of this article, we attempt to solve the basis pursuit denoising (bpdn) problem (i.e., approximating the minimum 1-norm solution to an underdetermined least squares problem) by using nesta-lasso in conjunction with the Pareto root-finding method employed by van den Berg and Friedlander in their spgl1 solver. The resulting algorithm is called parnes. We provide numerical evidence to show that it is comparable to currently available solvers.
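For intuition, the constrained lasso problem above can be attacked with a plain, unaccelerated projected gradient method using the standard sort-based Euclidean projection onto the $\ell_1$-ball. This sketch is not nesta-lasso itself (it omits Nesterov acceleration and the prox-center updates), and all names here are ours:

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1-ball of radius tau
    (sorting-based method, O(n log n))."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient_lasso(A, b, tau, steps=500):
    """Projected gradient for min ||Ax - b||_2 s.t. ||x||_1 <= tau,
    the constrained problem NESTA-LASSO targets."""
    Lip = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = project_l1(x - (A.T @ (A @ x - b)) / Lip, tau)
    return x

A = np.eye(3)
b = np.array([3.0, 1.0, 0.1])
x_hat = projected_gradient_lasso(A, b, tau=2.0)
print(x_hat)   # projection of b onto the l1-ball: [2. 0. 0.]
```

With $A$ the identity, the method reduces to a single $\ell_1$-ball projection of $b$, which makes the answer easy to check by hand.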

6.
7.
In this paper a new tool for simultaneous optimisation of decisions on multiple time scales is presented. The tool combines the dynamic properties of Markov decision processes with the flexible and compact state space representation of LImited Memory Influence Diagrams (Limids). A temporal version of Limids, TemLimids, is defined by adding time-related functions to utility nodes. As a result, expected discounted utility, as well as expected relative utility, might be used as optimisation criteria in TemLimids. Optimisation proceeds as in ordinary Limids. A sequence of such TemLimids can be used to model a Markov Limid Process, where each TemLimid represents a macro action. Algorithms are presented to find optimal plans for a sequence of such macro actions. The use of the algorithms is illustrated with an extended version of an example from pig production originally used to introduce the Limid concept.

8.
We introduce and study a notion of ‘Sasaki with torsion structure’ (st) as an odd-dimensional analogue of Kähler with torsion geometry (kt). These are normal almost contact metric manifolds that admit a unique compatible connection with \(3\)-form torsion. Any odd-dimensional compact Lie group is shown to admit such a structure; in this case, the structure is left-invariant and has closed torsion form. We illustrate the relation between st structures and other generalisations of Sasaki geometry, and we explain how some standard constructions in Sasaki geometry can be adapted to this setting. In particular, we relate the st structure to a kt structure on the space of leaves and show that both the cylinder and the cone over an st manifold are kt, although only the cylinder behaves well with respect to closedness of the torsion form. Finally, we introduce a notion of ‘\(G\)-moment map’. We provide criteria based on equivariant cohomology ensuring the existence of these maps and then apply them as a tool for reducing st structures.

9.
Let G be a Vilenkin group (i.e., an infinite, compact, metrizable, zero-dimensional Abelian group). Our main result is a factorization theorem for functions in the Lipschitz spaces $\mathrm{Lip}(\alpha, p; G)$. As corollaries of this theorem, we obtain (i) an extension of a factorization theorem of Y. Uno; (ii) a convolution formula which says that $\mathrm{Lip}(\alpha, r; G) = \mathrm{Lip}(\beta, 1; G) * \mathrm{Lip}(\alpha - \beta, r; G)$ for $0 < \beta < \alpha < \infty$ and $1 \le r \le \infty$; and (iii) an analogue, valid for all G, of a classical theorem of Hardy and Littlewood. We also present several results on absolute convergence of Fourier series defined on G, extending a theorem of C. W. Onneweer and four results of N. Ja. Vilenkin and A. I. Rubinshtein. The four Vilenkin-Rubinshtein results are analogues of classical theorems due, respectively, to O. Szász, S. B. Bernshtein, A. Zygmund, and G. G. Lorentz.

10.
Garbage collection (GC) algorithms play a key role in reducing the write amplification in flash-based solid state drives, where the write amplification affects the lifespan and speed of the drive. This paper introduces a mean field model to assess the write amplification and the distribution of the number of valid pages per block for a class $\mathcal{C}$ of GC algorithms. Apart from the Random GC algorithm, class $\mathcal{C}$ includes two novel GC algorithms: the $d$-Choices GC algorithm, which selects $d$ blocks uniformly at random and erases the block containing the least number of valid pages among the $d$ selected blocks, and the Random++ GC algorithm, which repeatedly selects another block uniformly at random until it finds a block with a lower-than-average number of valid pages. Using simulation experiments, we show that the proposed mean field model is highly accurate in predicting the write amplification (for drives with $N=50{,}000$ blocks). We further show that the $d$-Choices GC algorithm has a write amplification close to that of the Greedy GC algorithm even for small $d$ values, e.g., $d = 10$, and offers a more attractive trade-off between its simplicity and its performance than the Windowed GC algorithm introduced and analyzed in earlier studies. The Random++ algorithm is shown to be less effective, as it is even inferior to the FIFO algorithm when the number of pages $b$ per block is large (e.g., for $b \ge 64$).
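The $d$-Choices selection rule described above fits in a few lines. A minimal sketch, with our own naming and with blocks represented only by their valid-page counts:

```python
import random

def d_choices_victim(valid_pages, d, rng=random):
    """Sample d distinct blocks uniformly at random and return the
    index of the sampled block with the fewest valid pages."""
    candidates = rng.sample(range(len(valid_pages)), d)
    return min(candidates, key=lambda b: valid_pages[b])

valid = [5, 3, 0, 7, 2, 6]
print(d_choices_victim(valid, d=3, rng=random.Random(1)))
```

With $d = 1$ this degenerates to Random GC, and with $d$ equal to the total number of blocks it coincides with Greedy, which is the trade-off the paper quantifies.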

11.
Using Heijenoort’s unpublished generalized rules of quantification, we discuss the proof of Herbrand’s Fundamental Theorem in the form of Heijenoort’s correction of Herbrand’s “False Lemma” and present a didactic example. Although we are mainly concerned with the inner structure of Herbrand’s Fundamental Theorem and the questions of its quality and its depth, we also discuss the outer questions of its historical context and why Bernays called it “the central theorem of predicate logic” and considered the form of its expression to be “concise and felicitous”.

12.
A geometric analysis of the shake and rattle methods for constrained Hamiltonian problems is carried out. The study reveals the underlying differential geometric foundation of the two methods, and the exact relation between them. In addition, the geometric insight naturally generalises shake and rattle to allow for a strictly larger class of constrained Hamiltonian systems than in the classical setting. In order for shake and rattle to be well defined, two basic assumptions are needed. First, a nondegeneracy assumption, which is a condition on the Hamiltonian, i.e., on the dynamics of the system. Second, a coisotropy assumption, which is a condition on the geometry of the constrained phase space. Non-trivial examples of systems fulfilling, and failing to fulfill, these assumptions are given.
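In the classical setting (unit masses, a single holonomic constraint $g(q)=0$), the position half of a shake step solves for a Lagrange multiplier so that the updated position lands back on the constraint manifold. The following is a textbook-style sketch under those assumptions, not the generalized method of the paper; all names are ours:

```python
import numpy as np

def shake_position_step(q, p, h, grad_V, g, grad_g, iters=50, tol=1e-12):
    """Position half of a SHAKE step for H = p.p/2 + V(q), unit masses.

    Finds lam so that q1 = q + h*(p - (h/2)*(grad_V(q) + lam*grad_g(q)))
    satisfies g(q1) = 0, via a scalar Newton iteration on lam.
    Returns the constrained position and the half-step momentum
    (RATTLE would add a velocity-level projection on top of this).
    """
    G = grad_g(q)                                  # constraint gradient at q
    q_unc = q + h * (p - 0.5 * h * grad_V(q))      # unconstrained update
    lam = 0.0
    for _ in range(iters):
        q1 = q_unc - 0.5 * h * h * lam * G
        val = g(q1)
        if abs(val) < tol:
            break
        lam -= val / (-0.5 * h * h * grad_g(q1) @ G)   # Newton on lam
    p_half = p - 0.5 * h * (grad_V(q) + lam * G)
    return q1, p_half

# Planar pendulum: constraint |q|^2 - 1 = 0, gravity potential V(q) = q_y.
g = lambda q: q @ q - 1.0
grad_g = lambda q: 2.0 * q
grad_V = lambda q: np.array([0.0, 1.0])

q, p = np.array([0.0, -1.0]), np.array([0.5, 0.0])
q, p = shake_position_step(q, p, 0.1, grad_V, g, grad_g)
print(q, abs(g(q)))   # the new position stays on the unit circle
```

The nondegeneracy assumption discussed above is exactly what makes the Newton denominator $\nabla g(q_1)^{\top} \nabla g(q)$ nonzero here.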

13.
Aumann and Shapley [1973] have investigated values of games in which all players are individually insignificant, i.e. form a non-atomic continuum, or “ocean”. In this paper we treat games in which, in addition to such an ocean, there are also some “atoms”, i.e. players who are individually significant. We define spaces of such games that are analogous to those investigated by Aumann and Shapley, and prove the existence of values on some of them. Unlike in the non-atomic case, we find that in general there are infinitely many values, corresponding to various ways in which the atoms can be imbedded in the ocean. The results generalize those of Milnor and Shapley [1961]. Precise statements will be found in Section 2.

14.
S. A. Telyakovskii [12] generalized a theorem of Bojanić [2] on the quantitative version of the Dirichlet-Jordan test, well-known in classical Fourier analysis. Our goal is to extend Telyakovskii’s theorem from single to double Fourier series of functions in two variables that are of bounded variation in the sense of Hardy and Krause. The related theorems of Hardy [5] and Móricz [7] for such functions are corollaries of our Theorem proved in this paper.

15.
We prove a transplantation theorem for Fourier-Bessel coefficients. Theorems of this type were proved by Askey and Wainger [1] and Askey [2] for ultraspherical and Jacobi coefficients, respectively. Our theorem can also be seen as a dual result to a transplantation theorem for Fourier-Bessel series which was proved by Gilbert [3].

16.
We introduce the WeightedGrammar constraint and propose propagation algorithms based on the CYK parser and the Earley parser. We show that the traces of these algorithms can be encoded as a weighted negation normal form (WNNF), a generalization of NNF that allows nodes to carry weights. Based on this connection, we prove the correctness and complexity of these algorithms. Specifically, these algorithms enforce domain consistency on the WeightedGrammar constraint in time $O(n^3)$. Further, we propose that the WNNF constraint can be decomposed into a set of primitive arithmetic constraints without hindering propagation.
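For reference, the CYK table that the propagator's traces are built on can be sketched as follows. This is a plain unweighted CNF recognizer; the grammar encoding and names are our own, and the weights carried by WNNF nodes are omitted:

```python
def cyk_recognizes(word, unary, binary, start="S"):
    """CYK recognition for a grammar in Chomsky normal form.

    unary  : {terminal: set of nonterminals A with a rule A -> terminal}
    binary : {(B, C): set of nonterminals A with a rule A -> B C}
    """
    n = len(word)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(word):
        table[i][i] = set(unary.get(a, ()))          # length-1 spans
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                    # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# CNF grammar for {a^n b^n}: S -> A X | A B, X -> S B, A -> a, B -> b.
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}
print(cyk_recognizes("aabb", unary, binary))   # True
print(cyk_recognizes("abab", unary, binary))   # False
```

The triple loop over spans and split points is the source of the $O(n^3)$ bound quoted above; the weighted propagator additionally aggregates weights along these same table entries.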

17.
This paper reports computational experience with the codes Decompsx and Lift, which are built on IBM's MPSX/370 LP software for large-scale structured programs. Decompsx is an implementation of the Dantzig-Wolfe decomposition algorithm for block-angular LP's. Lift is an implementation of a nested decomposition algorithm for staircase and block-triangular LP's. A diverse collection of test problems drawn from real applications is used to test these codes, including multinational energy models and global economic models.

18.
The Mesh Adaptive Direct Search (Mads) algorithm is designed for nonsmooth blackbox optimization problems in which the evaluation of the functions defining the problem is expensive to compute. The Mads algorithm is not designed for problems with a large number of variables. The present paper uses a statistical tool based on variance decomposition to rank the relative importance of the variables. This statistical method is then coupled with the Mads algorithm so that the optimization is performed either in the entire space of variables or in subspaces associated with statistically important variables. The resulting algorithm is called Stats-Mads and is tested on bound constrained test problems having up to 500 variables. The numerical results show a significant improvement in the objective function value after a fixed budget of function evaluations.
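The ranking step can be mimicked with a crude one-at-a-time sensitivity estimate. This stand-in is much simpler than the variance decomposition actually used by Stats-Mads, and every name and constant here is an assumption for illustration:

```python
import numpy as np

def rank_variables(f, x0, radius, samples=200, rng=None):
    """Crude one-at-a-time sensitivity ranking: perturb each variable
    alone around x0 and rank variables by the resulting variance of
    the objective (most influential first)."""
    rng = rng or np.random.default_rng(0)
    n = len(x0)
    var = np.empty(n)
    for j in range(n):
        vals = []
        for _ in range(samples):
            x = x0.copy()
            x[j] += rng.uniform(-radius, radius)   # perturb variable j only
            vals.append(f(x))
        var[j] = np.var(vals)
    return np.argsort(var)[::-1]

# Anisotropic quadratic: variable 0 dominates, variable 2 barely matters.
f = lambda x: 100.0 * x[0] ** 2 + x[1] ** 2 + 0.01 * x[2] ** 2
order = rank_variables(f, np.zeros(3), radius=1.0)
print(order)
```

Stats-Mads would then alternate full-space Mads runs with runs restricted to the subspace spanned by the top-ranked variables, spending most of the evaluation budget where it matters.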

19.
The theory of Lorentz transformations developed in [5]–[15] admits a simple construction of elements (23)–(50) in the classical hyperbolic metric space by means of complex 2×2 matrices. In the infinitesimal 3-dimensional Euclidean neighbourhood of a point on the unit sphere in the Minkowski space-time, these matrices reduce to the classical motors (54) (cf. Brand [3]).

20.
Consider the linear least squares problem $\min_x \|b - Ax\|_2$, where $A$ is an $m \times n$ ($m < n$) matrix and $b$ is an $m$-dimensional vector. Let $y$ be an $n$-dimensional vector, and let $\eta_{LS}(y)$ be the optimal backward perturbation bound defined by $$\eta_{LS}(y) = \inf \{ \|F\|_F : y \text{ is a solution to } \min_x \|b - (A + F)x\|_2 \}.$$ An explicit expression of $\eta_{LS}(y)$ ($y \neq 0$) has been given in [8]. However, if we define the optimal backward perturbation bound $\eta_{MLS}(y)$ by $$\eta_{MLS}(y) = \inf \{ \|F\|_F : y \text{ is the minimum 2-norm solution to } \min_x \|b - (A + F)x\|_2 \},$$ it may well be asked: how can one derive an explicit expression of $\eta_{MLS}(y)$? This note gives an answer. The main result is: if $b \neq 0$ and $y \neq 0$, then $\eta_{MLS}(y) = \eta_{LS}(y)$.
