Similar Literature
20 similar documents retrieved (search time: 801 ms)
1.
Batch sizing and job sequencing on a single machine
We study a single-machine scheduling problem in which the items to be processed have to be batched as well as sequenced. Since processed items become available in batches, flow times are defined to be the same for all items in the same batch. A constant set-up delay is incurred between consecutive batches. For any fixed, but arbitrary, item sequence, we present an algorithm that finds a sequence of batches such that the total flow time of the items is minimized; we prove that for a set of n items, the algorithm runs in O(n) time. We show that, among all sequences, the one leading to the minimum flow time has the items in non-decreasing order of running times. Thus, the optimal algorithm for the combined problem, called the batch-sizing problem, runs in O(n log n) time. We also prove that this algorithm yields an improved solution to a scheduling problem recently studied by Baker [1].
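The batching step for a fixed item sequence can be illustrated with a simple dynamic program over batch boundaries: after sorting the items by non-decreasing running time (the order the abstract identifies as optimal), each batch covering positions j+1..i contributes (n - j)·(set-up + sum of its running times) to the total flow time, because that batch's set-up and processing delay every item from position j+1 onward. The quadratic-time sketch below is only illustrative and assumes hypothetical inputs `times` and `setup`; the paper's algorithm performs this step in O(n).

```python
# Illustrative O(n^2) dynamic program for the batch-sizing problem on a fixed
# sequence; the paper's algorithm solves this step in O(n). Inputs `times`
# (item running times) and `setup` (constant set-up delay) are hypothetical.

def min_total_flow_time(times, setup):
    items = sorted(times)              # SPT order is optimal over all sequences
    n = len(items)
    prefix = [0.0] * (n + 1)           # prefix sums of running times
    for i, p in enumerate(items, 1):
        prefix[i] = prefix[i - 1] + p

    INF = float("inf")
    f = [INF] * (n + 1)                # f[i] = min total flow time of items 1..i
    f[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):             # last batch covers items j+1..i
            batch_len = setup + prefix[i] - prefix[j]
            # this batch delays all n - j items that finish at or after it
            f[i] = min(f[i], f[j] + (n - j) * batch_len)
    return f[n]

print(min_total_flow_time([3, 1, 2, 5], setup=1.0))
```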

2.
We consider the problem of testing the uniqueness of maximum matchings, both in the unweighted and in the weighted case. For the unweighted case, we have two results. First, given a graph with n vertices and m edges, we can test whether the graph has a unique perfect matching, and find it if it exists, in O(m log^4 n) time. This algorithm uses a recent dynamic connectivity algorithm and an old result of Kotzig characterizing unique perfect matchings in terms of bridges. For the special case of planar graphs, we improve the algorithm to run in O(n log n) time. Second, given one perfect matching, we can test for the existence of another in linear time. This algorithm is a modification of Edmonds' blossom-shrinking algorithm implemented using depth-first search. A generalization of Kotzig's theorem proved by Jackson and Whitty allows us to give a modification of the first algorithm that tests whether a given graph has a unique f-factor, and find it if it exists. We also show how to modify the second algorithm to check whether a given f-factor is unique. Both extensions have the same time bounds as their perfect matching counterparts. For the weighted case, we can test in linear time whether a maximum-weight matching is unique, given the output from Edmonds' algorithm for computing such a matching. The method is an extension of our algorithm for the unweighted case.
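For contrast with the near-linear algorithms above, uniqueness of a perfect matching M can also be checked naively: M is the unique perfect matching if and only if, for every edge e of M, the graph G - e has no perfect matching (any second perfect matching must avoid some edge of M). The sketch below uses networkx and a hypothetical example graph; it runs one blossom-algorithm call per matched edge, so it is far slower than the paper's method and is shown only to make the problem concrete.

```python
# Naive uniqueness test for a perfect matching, for illustration only: the
# paper's algorithms do this in (near-)linear time, while this sketch runs
# one maximum-matching computation per matched edge.
import networkx as nx

def has_perfect_matching(G):
    M = nx.max_weight_matching(G, maxcardinality=True)
    return 2 * len(M) == G.number_of_nodes()

def perfect_matching_is_unique(G, M):
    """M is a set of edges forming a perfect matching of G."""
    for u, v in M:
        H = G.copy()
        H.remove_edge(u, v)
        # a second perfect matching would survive the removal of some edge of M
        if has_perfect_matching(H):
            return False
    return True

# Hypothetical examples: a 4-cycle has two perfect matchings, a path P4 has one.
C4 = nx.cycle_graph(4)
print(perfect_matching_is_unique(C4, {(0, 1), (2, 3)}))   # False
P4 = nx.path_graph(4)
print(perfect_matching_is_unique(P4, {(0, 1), (2, 3)}))   # True
```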

3.
As a generalization of canonical correlation analysis to k random vectors, the common canonical variates model was recently proposed based on the assumption that the canonical variates have the same coefficients in all k sets of variables, and is applicable to many cases. In this article, we apply the local influence method in this model to study the impact of minor perturbations of the data. The method is non-standard because of the restrictions imposed on the coefficients. Besides investigating the joint local influence of the observations, we also obtain the elliptical norm of the empirical influence function as a special case of local influence diagnostics. Based on the proposed diagnostics, we find that the results of the common canonical variates analysis for the female water striders data set are largely affected by omitting just a single observation.

4.
We consider finite difference approximations of solutions of inverse Sturm‐Liouville problems in bounded intervals. Using three‐point finite difference schemes, we discretize the equations on so‐called optimal grids constructed as follows: For a staggered grid with 2k points, we ask that the finite difference operator (a k × k Jacobi matrix) and the Sturm‐Liouville differential operator share the k lowest eigenvalues and the values of the orthonormal eigenfunctions at one end of the interval. This requirement determines uniquely the entries in the Jacobi matrix, which are grid cell averages of the coefficients in the continuum problem. If these coefficients are known, we can find the grid, which we call optimal because it gives, by design, a finite difference operator with a prescribed spectral measure. We focus attention on the inverse problem, where neither the coefficients nor the grid are known. A key question in inversion is how to parametrize the coefficients, i.e., how to choose the grid. It is clear that, to be successful, this grid must be close to the optimal one, which is unknown. Fortunately, as we show here, the grid dependence on the unknown coefficients is weak, so the inversion can be done on a precomputed grid for an a priori guess of the unknown coefficients. This observation leads to a simple yet efficient inversion algorithm, which gives coefficients that converge pointwise to the true solution as the number k of data points tends to infinity. The cornerstone of our convergence proof is showing that optimal grids provide an implicit, natural regularization of the inverse problem, by giving reconstructions with uniformly bounded total variation. The analysis is based on a novel, explicit perturbation analysis of Lanczos recursions and on a discrete Gel'fand‐Levitan formulation.

5.
Pivoting in Extended Rings for Computing Approximate Gröbner Bases
It is well known that in the computation of Gröbner bases, arbitrarily small perturbations in the coefficients of polynomials may lead to a completely different staircase, even if the solutions of the polynomial system change continuously. This phenomenon is called artificial discontinuity in Kondratyev's Ph.D. thesis. We show how such a phenomenon may be detected and even "repaired" by using a new variable to rename the leading term each time we detect a "problem". We call this strategy the TSV (Term Substitutions with Variables) strategy. For a zero-dimensional polynomial ideal, any monomial basis (containing 1) of the quotient ring can be found with the TSV strategy. Hence we can use the TSV strategy to relax the term order while keeping the framework of the Gröbner basis method, so that we can use existing efficient algorithms (for instance the F5 algorithm) to compute an approximate Gröbner basis. Our main algorithms, named TSVn and TSVh, can be used to repair artificial ε-discontinuities. Experiments show that these algorithms are effective for some nontrivial problems.

6.
In this paper an efficient algorithm to generate regular graphs with small vertex valency is presented. The running times of a program based on this algorithm and designed to generate cubic graphs are below two natural benchmarks: (a) If N(n) denotes the number of pairwise non-isomorphic cubic graphs with n vertices and T(n) the time needed for generating the list of all these graphs, then T(n)/N(n) decreases gradually for the observed values of n. This suggests that T(n)/N(n) might be uniformly bounded for all n, ignoring the time to write the outputs, but we are unable to prove this and in fact are not confident about it. (b) For programs that generate lists of non-isomorphic objects, but cannot a priori make sure to avoid the generation of isomorphic copies, the time needed to check a randomly ordered list of these objects for being non-isomorphic is a natural benchmark. Since for large lists (n ≥ 22, girth 3) existing graph isomorphism programs take longer to canonically label all of the N(n) graphs than our algorithm takes to generate them, our algorithm is probably faster than any method which does one or more isomorphism tests for every graph.

7.
It is well known that the coefficients of the input-output characteristics of the thermal steam-turbine model, as well as the network model parameters, have a great effect on the optimal economic operation of all thermal-electric power systems. To date, these coefficients, the loss formula coefficients, the B-coefficients, and the active-reactive power loss models have been estimated using the well-known least-squares estimation algorithm. In this paper, we present a new algorithm to estimate the power system parameters for economic dispatch calculation (EDC); this algorithm is based on least absolute-value (LAV) approximation, i.e., the l1-norm. We compare the results obtained using the proposed algorithm with those obtained using the least-squares error algorithm (LS). Optimal costs as well as overall network performance resulting from the implementation of each technique provide the basis of our conclusion. This work was supported by the Natural Science and Engineering Research Council of Canada, Grant No. A4146.
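The contrast between the two estimators can be sketched directly: least squares minimizes the l2-norm of the residuals (one linear-algebra solve), while LAV minimizes the l1-norm, which can be posed as a linear program by splitting each residual into its positive and negative parts. The code below is a generic illustration of l1 versus l2 parameter fitting on synthetic data with one gross error, not the paper's power-system formulation; all variable names and values are hypothetical.

```python
# Generic least-squares (l2) vs. least-absolute-value (l1 / LAV) fit, to
# illustrate the estimation contrast discussed in the abstract. Synthetic
# data with an outlier; LAV is formulated as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = np.column_stack([np.ones(20), rng.uniform(0, 10, 20)])   # design matrix
x_true = np.array([2.0, 0.5])
b = A @ x_true + 0.05 * rng.standard_normal(20)
b[3] += 5.0                                                  # one gross error

# Least squares: minimize ||A x - b||_2
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# LAV: minimize sum_i (r_i+ + r_i-)  s.t.  A x + r+ - r- = b,  r+, r- >= 0
n, p = A.shape
c = np.concatenate([np.zeros(p), np.ones(2 * n)])
A_eq = np.hstack([A, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
x_lav = res.x[:p]

print("true:", x_true, "LS:", x_ls, "LAV:", x_lav)   # LAV resists the outlier
```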

8.
Let B be a positive definite symmetric approximation to the second derivative matrix of the objective function of a minimization calculation. We study modifications of the BFGS method that apply a scaling technique to the columns of a conjugate direction matrix Z satisfying Z^T B Z = I. For a simple scaling technique similar to the ones considered by Powell (1987) and (1989) we show that, due to a two-iteration cycle, linear convergence can occur when the method is applied to a quadratic function and Wolfe's line search is employed, although for exact line searches quadratic termination can be proved. We then suggest a different scaling technique that prevents certain columns thought to contain important curvature information from being scaled. For this algorithm we prove global and superlinear convergence and demonstrate the superiority of our method over the BFGS formula on a range of ill-conditioned optimization problems. Moreover, we present an implementation of our algorithm that requires only 3n^2 + O(n) multiplications per iteration.

9.
For several decades the standard algorithm for factoring polynomials f with rational coefficients has been the Berlekamp-Zassenhaus algorithm. The complexity of this algorithm depends exponentially on n, where n is the number of modular factors of f. This exponential time complexity is due to a combinatorial problem: the problem of choosing the right subsets of these n factors. In this paper, this combinatorial problem is reduced to a type of Knapsack problem that can be solved with lattice reduction algorithms. The result is a practical algorithm that can factor polynomials that are far out of reach for previous algorithms. The presented solution to the combinatorial problem is different from previous lattice-based factorizers; these algorithms avoided the combinatorial problem by solving the entire factorization problem with lattice reduction. This led to lattices of large dimension and coefficients, and thus poor performance. This is why lattice-based algorithms, despite their polynomial time complexity, did not replace Berlekamp-Zassenhaus as the standard method. That is now changing; new versions of computer algebra systems such as Maple, Magma, NTL and Pari have already switched to the algorithm presented here.

10.
We study asymptotics as t → ∞ of solutions to a linear, parabolic system of equations with time‐dependent coefficients in Ω × (0, ∞), where Ω is a bounded domain. On ∂Ω × (0, ∞) we prescribe the homogeneous Dirichlet boundary condition. For large values of t, the coefficients in the elliptic part are close to time‐independent coefficients in an integral sense which is described by a certain function κ(t). This includes in particular situations when the coefficients may take different values on different parts of Ω and the boundaries between them can move with t but stabilize as t → ∞. The main result is an asymptotic representation of solutions for large t. As a corollary, it is proved that if κ ∈ L^1(0, ∞), then the solution behaves asymptotically as the solution to a parabolic system with time‐independent coefficients.

11.
In this article, we study the approximation of common zeros of non-self inverse strongly monotone operators defined on a closed convex subset C of a Hilbert space H. For a non-self family of operators, we introduce an iterative algorithm without relying on projections. Approximation of common fixed points for finite families of non-self strict pseudo-contractions in the sense of Browder-Petryshyn is also obtained. The novelty of our algorithm is that the coefficients are not given a priori and no assumptions are made on them, but they are constructed step by step in a natural way.

12.
We present an algorithm for solving stochastic heat equations, whose key ingredient is a non-uniform time discretization of the driving Brownian motion W. For this algorithm we derive an error bound in terms of its number of evaluations of one-dimensional components of W. The rate of convergence depends on the spatial dimension of the heat equation and on the decay of the eigenfunctions of the covariance of W. According to known lower bounds, our algorithm is optimal, up to a constant, and this optimality cannot be achieved by uniform time discretizations. AMS subject classification (2000): 60H15, 60H35, 65C30

13.
14.
For applications of stochastic fluid models, such as those related to wildfire spread and containment, one wants a fast method to compute time-dependent probabilities. Erlangization is an approximation method that replaces various distributions at a time t by the corresponding ones at a random time with Erlang distribution having mean t. Here, we develop an efficient version of that algorithm for various first passage time distributions of a fluid flow, exploiting recent results on fluid flows, probabilistic underpinnings, and some special structures. Some connections with a familiar Laplace transform inversion algorithm due to Jagerman are also noted up front.
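The core idea of Erlangization can be sketched for a finite-state Markov chain with generator Q: replacing the deterministic horizon t by an Erlang(k, k/t) random time with mean t turns the matrix exponential e^{Qt} into a product of k resolvents, (I - (t/k)Q)^{-k}, which converges to e^{Qt} as k grows. The sketch below is a generic illustration with a hypothetical 3-state generator, not the fluid-flow first-passage computation of the paper.

```python
# Erlangization of a matrix exponential: E[exp(Q * E_k)] with
# E_k ~ Erlang(k, k/t) equals (I - (t/k) Q)^(-k) and approaches exp(Q t).
# Generic 3-state generator for illustration (hypothetical numbers).
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 1.5, 0.5],
              [ 1.0, -3.0, 2.0],
              [ 0.5, 0.5, -1.0]])   # conservative generator: rows sum to 0
t = 1.5

exact = expm(Q * t)                 # transition probabilities at time t
for k in (1, 4, 16, 64):
    M = np.linalg.inv(np.eye(3) - (t / k) * Q)
    approx = np.linalg.matrix_power(M, k)
    err = np.abs(approx - exact).max()
    print(f"k = {k:3d}   max abs error = {err:.2e}")
```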

15.
We study the classical problem of assigning unique identifiers to identical concurrent processes. In this paper, we consider the asynchronous shared memory model, and the correctness requirement is that, upon termination of the algorithm, the processes always have unique IDs. Our results include a tight characterization of the problem in several respects. We call a protocol solving this task Las Vegas if it has finite expected termination time. Our main positive result is the first Las Vegas protocol that solves the problem. The protocol terminates in O(log n) expected asynchronous rounds, using O(n) shared memory space, where n is the number of participating processes. The new protocol improves on all previous solutions simultaneously in running time (exponentially), probability of termination (to 1), and space requirement. The protocol works under the assumption that the asynchronous schedule is oblivious, i.e., independent of the actual unfolding execution. On the negative side, we show that there is no finite-state Las Vegas protocol for the problem if the schedule may depend on the history of the shared memory (an adaptive schedule). We also show that any Las Vegas protocol must know n in advance (which implies that crash faults cannot be tolerated) and that the running time is Ω(log n). For the case of an arbitrary (nonoblivious) adversarial schedule, we present a Las Vegas protocol that uses O(n) unbounded registers. For the read-modify-write model, we present a constant-space deterministic algorithm.

16.
We consider high-dimensional data which contains a linear low-dimensional non-Gaussian structure contaminated with Gaussian noise, and discuss a method to identify this non-Gaussian subspace. For this problem, we provided in our previous work a very general semi-parametric framework called non-Gaussian component analysis (NGCA). NGCA has a uniform probabilistic bound on the error of finding the non-Gaussian components, and within this framework we presented an efficient NGCA algorithm called Multi-index Projection Pursuit. The algorithm is justified as an extension of the ordinary projection pursuit (PP) methods and is shown to outperform PP particularly when the data has complicated non-Gaussian structure. However, it turns out that multi-index PP is not optimal in the context of NGCA. In this article, we therefore develop an alternative algorithm called iterative metric adaptation for radial kernel functions (IMAK), which can be better justified theoretically within the NGCA framework. We demonstrate that the new algorithm tends to outperform existing methods through numerical examples.

17.
In this paper we present an efficient algorithm to test if two given paths are homotopic; that is, whether they wind around obstacles in the plane in the same way. For paths specified by n line segments with obstacles described by n points, several standard approaches achieve quadratic running time. For simple paths, our algorithm runs in O(n log n) time, which we show is tight. For self-intersecting paths the problem is related to Hopcroft's problem; our algorithm runs in O(n^(3/2) log n) time.

18.
In this paper we introduce a new type of explicit numerical algorithm to solve the spatially discretized linear heat or diffusion equation. After discretizing the space variables as in standard finite difference methods, this novel method does not approximate the time derivatives by finite differences, but uses three-stage constant-neighbor and linear-neighbor approximations to decouple the ordinary differential equations and solve them analytically. In the final expression for the new values of the variable, the time step size appears not in polynomial or rational form, but in exponential form with negative coefficients, which can guarantee unconditional stability. The scheme contains a free parameter p. We show that the convergence of the method is third-order in the time step size regardless of the value of p, and, according to von Neumann stability analysis, the method is stable for a wide range of p. We validate the new method by testing it in a case where the analytical solution exists, and then demonstrate its competitiveness by comparing its performance with several other numerical solvers.
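The flavor of such schemes can be shown with the simplest one-stage "constant-neighbor" step for the 1D heat equation u_t = α u_xx: after spatial discretization, each ODE du_i/dt = α(u_{i-1} - 2 u_i + u_{i+1})/Δx^2 is solved exactly over the step while the two neighbors are frozen at their old values, which puts the step size into a decaying exponential and keeps the update unconditionally stable. This is only a minimal sketch of the constant-neighbor idea with hypothetical parameters; the paper's method combines three stages (constant- and linear-neighbor) with the free parameter p to reach third-order accuracy.

```python
# One-stage "constant-neighbor" step for u_t = alpha * u_xx on a 1D grid:
# each node's ODE is solved exactly with its neighbors frozen, so the update
# is a convex combination of the old value and the neighbor average and stays
# bounded for any step size h. (Minimal sketch; the paper's scheme uses three
# stages and a parameter p to reach third-order accuracy in h.)
import numpy as np

def constant_neighbor_step(u, alpha, dx, h):
    b = 2.0 * alpha / dx**2
    w = np.exp(-b * h)                                 # decay factor in (0, 1]
    nbr_avg = 0.5 * (np.roll(u, 1) + np.roll(u, -1))   # periodic boundaries
    return w * u + (1.0 - w) * nbr_avg

# Decay of a single Fourier mode on a periodic grid; the step size is far
# beyond the classical explicit (FTCS) stability limit, yet nothing blows up.
N, alpha, h = 64, 1.0, 0.05
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2 * np.pi * x)
for _ in range(200):
    u = constant_neighbor_step(u, alpha, dx, h)
print("max |u| after 200 large steps:", np.abs(u).max())
```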

19.
Combinatorial Sublinear-Time Fourier Algorithms
We study the problem of estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ k. More explicitly, we investigate how to deterministically identify k of the largest-magnitude frequencies of Â, and estimate their coefficients, in polynomial(k, log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem (Gilbert et al. in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime/sampling bounds to the current best randomized Fourier method (Gilbert et al. in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).
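What the algorithm computes can be stated with a dense baseline: take the full FFT, keep the k largest-magnitude frequencies, and discard the rest; that is the best k-term Fourier representation in the l2 sense. The sketch below shows this baseline on a synthetic sparse signal with hypothetical sizes; it reads all N samples and takes O(N log N) time, whereas the paper's deterministic algorithm identifies the same k frequencies from far fewer samples in polynomial(k, log N) time.

```python
# Dense baseline for the best k-term Fourier representation: full FFT, keep
# the k largest-magnitude coefficients. The paper's contribution is computing
# the same answer in sublinear time, without reading all N samples.
import numpy as np

N, k = 1024, 3
rng = np.random.default_rng(1)
freqs = rng.choice(N, size=k, replace=False)        # hidden sparse support
coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)

n = np.arange(N)
signal = sum(c * np.exp(2j * np.pi * f * n / N) for c, f in zip(coeffs, freqs))
signal += 0.01 * rng.standard_normal(N)             # small noise

A_hat = np.fft.fft(signal) / N                      # normalized DFT
top_k = np.argsort(np.abs(A_hat))[-k:]              # k largest-magnitude bins
print("true frequencies:     ", sorted(freqs))
print("recovered frequencies:", sorted(top_k))
```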

20.
Wavelet-based homogenization provides a method for constructing a coarse-grid discretization of a variable-coefficient differential operator that implicitly accounts for the influence of the fine-scale medium parameters on the coarse scale of the solution. The method is applied to discretizations of operators of the form (d/dx) μ(x) (d/dx) in one dimension and μ(x)Δ in one and more dimensions. The resulting homogenized matrices are shown to correspond to differential operators of the same (or closely related) form. In dimension one, results are obtained for periodic two-phase and for arbitrary coefficients μ(x). For periodic two-phase coefficients, the homogenized coefficients may be computed exactly as the harmonic mean of the function μ. For non-periodic coefficients, the “mass-lumping” approximation results in an explicit formula for the homogenized coefficients. In higher dimensions, results are obtained for operators of the form μ(x)Δ, where μ(x) may or may not be periodic; explicit formulae for the homogenized coefficients are also derived. Numerical examples in 1D and 2D are also presented.
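The one-dimensional harmonic-mean statement can be checked numerically: for -(μ(x) u')' = 0 on (0, 1) with u(0) = 0 and u(1) = 1, the flux μ u' is constant and equals 1/∫(1/μ) dx, so a fine-grid solve of a two-phase laminate reproduces the harmonic mean as its effective coefficient. The sketch below assumes a hypothetical two-phase μ and a standard conservative finite-difference discretization; it checks the homogenization limit itself, not the wavelet-based construction.

```python
# Numerical check of the 1D claim: for a periodic two-phase coefficient mu,
# the effective (homogenized) coefficient equals the harmonic mean of mu.
# We solve -(mu u')' = 0 on (0,1) with u(0)=0, u(1)=1 and read off the flux.
# All parameter values below are hypothetical.
import numpy as np

n_cells = 400
mu1, mu2 = 1.0, 10.0
mu = np.where(np.arange(n_cells) % 2 == 0, mu1, mu2)   # fine two-phase laminate
dx = 1.0 / n_cells

# Flux continuity at interior nodes: mu[i-1](u[i]-u[i-1]) = mu[i](u[i+1]-u[i])
n_int = n_cells - 1
A = np.zeros((n_int, n_int))
rhs = np.zeros(n_int)
for i in range(n_int):
    A[i, i] = mu[i] + mu[i + 1]
    if i > 0:
        A[i, i - 1] = -mu[i]
    if i < n_int - 1:
        A[i, i + 1] = -mu[i + 1]
rhs[-1] = mu[n_cells - 1] * 1.0          # boundary values u(0)=0, u(1)=1
u = np.concatenate([[0.0], np.linalg.solve(A, rhs), [1.0]])

flux = mu[0] * (u[1] - u[0]) / dx        # constant flux = effective coefficient
harmonic = 1.0 / np.mean(1.0 / mu)
print("flux (effective coefficient):", flux)
print("harmonic mean of mu:         ", harmonic)   # these agree
print("arithmetic mean of mu:       ", mu.mean())  # this one does not
```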
