Similar Documents
20 similar documents found (search time: 10 ms)
1.
Spline smoothing is a widely used nonparametric method that allows the data to speak for themselves. Due to its complexity and flexibility, fitting smoothing spline models is usually computationally intensive, which may become prohibitive with large datasets. To overcome memory and CPU limitations, we propose four divide-and-recombine (D&R) approaches for fitting cubic splines to large datasets. We consider two approaches to dividing the data, random and sequential, and for each division approach we consider two approaches to recombining. These D&R approaches are implemented in parallel without communication. Extensive simulations show that the D&R approaches are scalable and perform comparably to the method that uses the whole data. The sequential D&R approaches are spatially adaptive, which leads to better performance than the whole-data method when the underlying function is spatially inhomogeneous.
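As a concrete illustration, here is a minimal sketch of one such scheme: random division into chunks, an off-the-shelf cubic smoothing spline (SciPy's `UnivariateSpline`, with a hand-tuned smoothing level) fit to each chunk independently, and recombination by averaging the per-chunk predictions. The chunk count, smoothing level, and recombination rule are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Simulated data: noisy sine on [0, 1].
n = 4000
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)

def dr_spline_predict(x, y, x_new, k=4, noise_sd=0.3):
    """Random-division D&R: fit a cubic smoothing spline on each of k
    random subsets (each fit is independent, hence embarrassingly
    parallel), then recombine by averaging the k predictions."""
    idx = rng.permutation(len(x))
    preds = []
    for chunk in np.array_split(idx, k):
        order = np.argsort(x[chunk])
        xc, yc = x[chunk][order], y[chunk][order]
        # Smoothing level ~ m * sigma^2, hand-tuned for this example.
        spl = UnivariateSpline(xc, yc, k=3, s=len(xc) * noise_sd**2)
        preds.append(spl(x_new))
    return np.mean(preds, axis=0)

x_new = np.linspace(0.05, 0.95, 50)
fit = dr_spline_predict(x, y, x_new)
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * x_new)) ** 2))
```

Because the k fits never communicate, memory per worker scales with the chunk size rather than with n.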

2.
3.
In this paper, two accelerated divide-and-conquer (ADC) algorithms are proposed for the symmetric tridiagonal eigenvalue problem, which cost O(N²r) flops in the worst case, where N is the dimension of the matrix and r is a modest number depending on the distribution of eigenvalues. Both algorithms use hierarchically semiseparable (HSS) matrices to approximate some intermediate eigenvector matrices, which are Cauchy-like and off-diagonally low-rank. The two versions differ in the HSS construction algorithm used: one (denoted ADC1) uses a structured low-rank approximation method, and the other (ADC2) uses a randomized HSS construction algorithm. For ADC2, a method is proposed to estimate the off-diagonal rank. Numerous experiments have been carried out to show their stability and efficiency. The algorithms are implemented in parallel in a shared-memory environment, and some parallel implementation details are included. Comparing the ADC algorithms with highly optimized multithreaded libraries such as Intel MKL, we find that they can be more than six times faster for some large matrices with few deflations. Copyright © 2016 John Wiley & Sons, Ltd.

4.
Let G = (V, E) be an undirected weighted graph with |V| = n vertices and |E| = m edges. A t-spanner of G, for any t ≥ 1, is a subgraph (V, E_S), E_S ⊆ E, such that the distance between any pair of vertices in the subgraph is at most t times their distance in G. Computing a t-spanner of minimum size (number of edges) is a widely studied and well-motivated problem in computer science. In this paper we present the first linear-time randomized algorithm that computes a t-spanner of a given weighted graph. Moreover, the size of the computed t-spanner essentially matches the worst-case lower bound implied by a 43-year-old girth lower bound conjecture made independently by Erdős, Bollobás, and Bondy & Simonovits. Our algorithm uses a novel clustering approach that avoids distance computation altogether. This feature is somewhat surprising, since all previously existing algorithms compute some sort of local or global distance information, growing either breadth-first search trees up to Θ(t) levels or full shortest-path trees on a large fraction of the vertices. The truly local approach of our algorithm also leads to equally simple and efficient algorithms for computing spanners in other important computational environments such as distributed, parallel, and external memory. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2007

5.
The limit laws of three cost measures are derived for two algorithms that find the maximum in a single-channel broadcast communication model. Both algorithms use coin flips and comparisons. Besides the ubiquitous normal limit law, the Dickman distribution also appears in a natural way. The proof proceeds via the method of moments and "asymptotic transfers," which roughly bridge the asymptotics of the conquering cost of the subproblems and that of the total cost. This general approach has proved fruitful for a number of problems in the analysis of recursive algorithms.

6.
This communication points out a couple of early publications on the power-of-two scheduling scheme that have been consistently overlooked by researchers.

7.
This study analyzes multiobjective d-dimensional knapsack problems (MOd-KP) in a comparative analysis of three multiobjective evolutionary algorithms (MOEAs): the ε-nondominated sorted genetic algorithm II (ε-NSGAII), the strength Pareto evolutionary algorithm 2 (SPEA2), and the ε-nondominated hierarchical Bayesian optimization algorithm (ε-hBOA). The study contributes new insights into the challenges posed by correlated instances of the MOd-KP, which better capture the decision interdependencies often present in real-world applications. A statistical performance analysis of the algorithms uses the unary ε-indicator, the hypervolume indicator, and success-rate plots to demonstrate their relative effectiveness, efficiency, and reliability on the MOd-KP instances analyzed. Our results indicate that ε-hBOA outperforms ε-NSGAII and SPEA2 as the number of objectives, the number of decision variables, and the correlative linkages between them increase. The performance of ε-hBOA suggests that probabilistic model-building evolutionary algorithms hold significant promise for expanding the size and scope of the challenging multiobjective problems that can be explored.

8.
In this paper, we present an overview of probabilistic techniques based on randomized algorithms for solving "hard" problems arising in performance verification and control of complex systems. This area is fairly recent, even though its roots lie in the robustness techniques for handling uncertain control systems developed in the 1980s. In contrast to those deterministic techniques, the main ingredient of the methods discussed in this survey is the use of probabilistic concepts. The introduction of probability and random sampling permits overcoming the fundamental tradeoff between numerical complexity and conservatism that lies at the root of the worst-case deterministic methodology. The simplicity of implementing randomized techniques may also help bridge the gap between theory and practical applications.

9.
10.
Complex real-world systems consist of collections of interacting processes and events. These processes change over time in response to both internal and external stimuli as well as to the passage of time itself. Many domains, such as real-time systems diagnosis, story understanding, and financial forecasting, require the capability to model complex systems in a unified framework that deals with both time and uncertainty. Current models for uncertainty and for time already provide rich languages to capture uncertainty and temporal information, respectively. Unfortunately, their semantics have made it extremely difficult to unify time and uncertainty in a way that cleanly and adequately models the problem domains at hand; existing approaches suffer from significant trade-offs between strong semantics for uncertainty and strong semantics for time. In this paper, we explore a new model, the Probabilistic Temporal Network (PTN), for representing temporal and atemporal information while fully embracing probabilistic semantics. The model allows representation of time-constrained causality, of when and whether events occur, and of the periodic and recurrent nature of processes.

11.
The paper "Euclidean algorithms are Gaussian" [V. Baladi, B. Vallée, Euclidean algorithms are Gaussian, J. Number Theory 110 (2005) 331-386] is devoted to the distributional analysis of three variants of the Euclidean algorithm. The Central Limit Theorem and the Local Limit Theorem obtained there are the first in the context of the "dynamical analysis" method. The techniques developed have been applied in various further works (e.g. [V. Baladi, A. Hachemi, A local limit theorem with speed of convergence for Euclidean algorithms and Diophantine costs, Ann. Inst. H. Poincaré Probab. Statist. 44 (2008) 749-770; E. Cesaratto, J. Clément, B. Daireaux, L. Lhote, V. Maume, B. Vallée, Analysis of fast versions of the Euclid algorithm, in: Proceedings of the Third Workshop on Analytic Algorithmics and Combinatorics, ANALCO'08, SIAM, 2008; E. Cesaratto, A. Plagne, B. Vallée, On the non-randomness of modular arithmetic progressions, in: Fourth Colloquium on Mathematics and Computer Science: Algorithms, Trees, Combinatorics and Probabilities, in: Discrete Math. Theor. Comput. Sci. Proc., vol. AG, 2006, pp. 271-288]). These theorems are proved first for an auxiliary probabilistic model, called the "smoothed model," after which the estimates are transferred to the "true" probabilistic model. In this note, we remark that the smoothed model described in the 2005 paper is not suited to this transfer, and we replace it with an adapted one. The results remain unchanged.

12.
Reverse time migration has drawn great attention in exploration geophysics because it can be used successfully in areas with large structural and velocity complexity, but its computational cost is considerably high. This paper concerns the fast implementation of the optimized separable approximation of the two-way propagator, which is the most computationally expensive step in reverse time migration. On the basis of the low-rank property of the propagator and ideas from randomized algorithms, a randomized method is introduced for optimized-separable-approximation-based one-step extrapolation. Numerical results for approximating the propagator show that the randomized method is more efficient than the conventional interpolation method, and numerical experiments on wavefield extrapolation show that the proposed method is much more accurate than the conventional finite-difference plus pseudo-spectral scheme. Copyright © 2014 John Wiley & Sons, Ltd.
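The core ingredient, sketching a numerically low-rank operator with random test vectors, can be illustrated in a few lines in the spirit of Halko-Martinsson-Tropp randomized range finding. The synthetic matrix below stands in for the propagator, so this sketches the ingredient rather than the migration scheme itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_lowrank(A, r, p=5):
    """Randomized range finder: sample the column space of A with a
    Gaussian test matrix, orthonormalize, and project.  Returns Q, B
    with A ~= Q @ B, where Q has r + p orthonormal columns."""
    n = A.shape[1]
    Y = A @ rng.standard_normal((n, r + p))   # sketch the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis
    B = Q.T @ A                               # small (r+p) x n factor
    return Q, B

# Synthetic stand-in for the propagator: an exactly rank-10 matrix.
U = rng.standard_normal((200, 10))
V = rng.standard_normal((10, 300))
A = U @ V
Q, B = randomized_lowrank(A, r=10)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

Only r + p matrix-vector products with A are needed, which is the source of the efficiency gain over interpolation-style construction when the operator is expensive to apply columnwise.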

13.
In a recent paper, Chen and Ji [Chen, K., Ji, P., 2007. A mixed integer programming model for advanced planning and scheduling (APS). European Journal of Operational Research 181, 515-522] develop a mixed integer programming model for the advanced planning and scheduling problem that considers capacity constraints and precedence relations between operations. Orders require the processing of several operations on eligible machines. The model presented in that paper works only for the case where each operation can be processed on a single machine; machine eligibility, however, means that a subset of the machines is capable of processing a job, and this subset may include more than one machine. We provide a general model for advanced planning and scheduling problems with machine eligibility, which can be used when there are alternative machines to which an operation can be assigned.

14.
In a recently published paper by Chiu et al. [Chiu, S.W., Wang, S.-L., Chiu, Y.-S.P., 2007. Determining the optimal run time for EPQ model with scrap, rework and stochastic breakdowns. European Journal of Operational Research 180, 664-676], a theorem on the conditional convexity of the integrated total cost function was employed in the solution procedure. We reexamine this theorem and present a direct proof of the convexity of the total cost function. This proof can be used in place of Theorem 1 of Chiu et al.'s paper to enhance the quality of their optimization process.

15.
In 2014, Wang et al. (2014) extended the model of Lou and Wang (2012) to incorporate credit-period-dependent demand and default risk for deteriorating items with a maximum lifetime. However, the rates of demand, default risk, and deterioration in the model of Wang et al. (2014) are assumed to be specific functions of the credit period, which limits the contribution. In this note, we first generalize the theoretical results of Wang et al. (2014) under certain conditions. Furthermore, we present structural results, instead of a numerical analysis, on the variation of the optimal replenishment and trade credit strategies with respect to key parameters.

16.
Many modern approaches to time series analysis belong to the class of methods based on approximating high-dimensional spaces by low-dimensional subspaces. A typical method embeds a given time series into a structured matrix and finds a low-dimensional approximation to this structured matrix. The purpose of this paper is twofold: (i) to establish a correspondence between a class of SVD-compatible matrix norms on the space of Hankel matrices and weighted vector norms (and to provide methods to construct this correspondence), and (ii) to motivate the importance of this correspondence for problems in time series analysis. Examples are provided to demonstrate the merits of judiciously selected weights for imputing missing data and forecasting in time series. Copyright © 2016 John Wiley & Sons, Ltd.
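The embed-truncate-average pipeline the paper builds on can be sketched with plain (unweighted) SVD truncation and diagonal averaging, i.e. basic singular spectrum analysis; the paper's weighted norms generalize this unweighted special case:

```python
import numpy as np

rng = np.random.default_rng(3)

def hankelize(X):
    """Average the anti-diagonals of X back into a series."""
    L, K = X.shape
    s = np.zeros(L + K - 1)
    cnt = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            s[i + j] += X[i, j]
            cnt[i + j] += 1
    return s / cnt

def ssa_lowrank(y, L, r):
    """Embed y into an L x (n-L+1) Hankel matrix, keep the top-r SVD
    components, and map back to a series by diagonal averaging."""
    K = len(y) - L + 1
    H = np.column_stack([y[k:k + L] for k in range(K)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]
    return hankelize(Hr)

# A pure sine has a rank-2 Hankel embedding, so r=2 denoises it.
t = np.linspace(0, 4 * np.pi, 300)
clean = np.sin(t)
noisy = clean + rng.normal(0.0, 0.3, t.size)
smoothed = ssa_lowrank(noisy, L=60, r=2)
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_ssa = np.sqrt(np.mean((smoothed - clean) ** 2))
```

The window length L and rank r are the usual SSA tuning knobs; the paper's contribution concerns replacing the implicit unweighted norm in the truncation step.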

17.
We extend Clarkson's randomized algorithm for linear programming to a general scheme for solving convex optimization problems. The scheme can be used to speed up existing algorithms on problems that have many more constraints than variables. In particular, we give a randomized algorithm for solving convex quadratic and linear programs, which uses the scheme together with a variant of Karmarkar's interior point method. For problems with n constraints, d variables, and input length L, if n = Ω(d²), the expected total number of major Karmarkar iterations is O(d²(log n)L), compared with the best known deterministic bound of O(nL). We also present several other results that follow from the general scheme.

18.
The randomized extended Kaczmarz and Gauss–Seidel algorithms have attracted much attention because of their ability to treat all types of linear systems (consistent or inconsistent, full rank or rank deficient). In this paper, we present tight upper bounds for the convergence of the randomized extended Kaczmarz and Gauss–Seidel algorithms. Numerical experiments are given to illustrate the theoretical results.
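For reference, a compact sketch of the randomized extended Kaczmarz iteration in the form of Zouzias and Freris, with row and column sampling proportional to squared norms; the problem size and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def rek(A, b, iters=20000):
    """Randomized extended Kaczmarz: the z-update removes the component
    of b outside range(A), so x converges to the least-squares solution
    even for inconsistent systems."""
    m, n = A.shape
    row_p = np.sum(A**2, axis=1); row_p /= row_p.sum()
    col_p = np.sum(A**2, axis=0); col_p /= col_p.sum()
    x = np.zeros(n)
    z = b.astype(float).copy()
    for _ in range(iters):
        # Column step: project z away from the sampled column of A.
        j = rng.choice(n, p=col_p)
        z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
        # Row step: ordinary Kaczmarz update against b - z.
        i = rng.choice(m, p=row_p)
        x += (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

# Overdetermined, generically inconsistent system.
A = rng.standard_normal((60, 10))
b = rng.standard_normal(60)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x = rek(A, b)
err = np.linalg.norm(x - x_ls)
```

Plain randomized Kaczmarz stalls on inconsistent systems at the residual's projection onto the rows; the extra z-sequence is what the "extended" variants add.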

19.
The multiple-choice multidimensional knapsack problem (MMKP) is a well-known NP-hard combinatorial optimization problem with a number of important applications. In this paper, we present a "reduce and solve" heuristic approach which combines problem reduction techniques with an integer linear programming (ILP) solver (CPLEX). The key ingredient of the proposed approach is a set of group-fixing and variable-fixing rules. These fixing rules rely mainly on information from the linear relaxation of the given problem and aim to generate a reduced critical subproblem to be solved by the ILP solver. Additional strategies are used to explore the space of reduced problems. Extensive experimental studies over two sets of 37 MMKP benchmark instances from the literature show that our approach competes favorably with the most recent state-of-the-art algorithms. In particular, for the set of 27 conventional benchmarks, the proposed approach finds an improved best lower bound for 11 instances and, as a by-product, improves all the previous best upper bounds. For the 10 additional instances with irregular structures, the method improves 7 of the best known results.

20.
Complex dynamical systems are often subject to non-Gaussian random fluctuations. The exit phenomenon, i.e., escape from a bounded domain in state space, is one impact of randomness on the evolution of such systems. Existing work gives asymptotic estimates of the mean exit time when the noise intensity is sufficiently small. In the present paper, the authors instead analyze the mean exit time for arbitrary noise intensity, via numerical investigation. The mean exit time for a dynamical system driven by a non-Gaussian, discontinuous (with jumps) α-stable Lévy motion is described by a differential equation with nonlocal interactions. A numerical approach for solving this nonlocal problem is proposed, and a computational analysis is conducted to investigate the relative importance of the jump measure, the diffusion coefficient, and non-Gaussianity in affecting the mean exit time.
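A simple Monte Carlo cross-check of the mean exit time, as an alternative to solving the nonlocal equation: simulate a pure symmetric α-stable path (increments via the Chambers-Mallows-Stuck formula) with an Euler scheme and record the first exit from (-1, 1). The step size, path count, time cap, and domain are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def stable_rvs(alpha, size):
    """Symmetric alpha-stable samples via Chambers-Mallows-Stuck."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

def mean_exit_time(alpha, x0=0.0, a=1.0, dt=1e-2, n_paths=2000, t_max=20.0):
    """Euler scheme for dX = dL_t^alpha; record the first time |X| >= a.
    Paths still inside at t_max are censored at t_max."""
    n_steps = int(t_max / dt)
    x = np.full(n_paths, x0)
    tau = np.full(n_paths, t_max)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(n_steps):
        # Stable increments scale as dt**(1/alpha), not sqrt(dt).
        x[alive] += dt ** (1 / alpha) * stable_rvs(alpha, alive.sum())
        exited = alive & (np.abs(x) >= a)
        tau[exited] = (k + 1) * dt
        alive &= ~exited
        if not alive.any():
            break
    return tau.mean()

tau_bar = mean_exit_time(alpha=1.5)
```

Note that, unlike Brownian paths, α-stable paths typically leave the domain by a jump that overshoots the boundary, which is exactly why the corresponding mean-exit-time equation is nonlocal.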


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号