Similar Literature
20 similar documents found (search time: 46 ms)
1.
In this paper we consider two branch and bound algorithms for the maximum clique problem which demonstrate the best performance on DIMACS instances among the existing methods: the MCS algorithm of Tomita et al. (2010) and the MaxSAT algorithm of Li and Quan (2010a, b). We suggest a general approach that considerably speeds up these branch and bound algorithms on hard instances. The idea is to apply a powerful heuristic to obtain an initial solution of high quality; this solution is then used to prune branches in the main branch and bound algorithm. For this purpose we apply the ILS heuristic of Andrade et al. (J Heuristics 18(4):525–547, 2012). The best results are obtained for the p_hat1000-3 instance and the gen instances, with speedups of up to 11,000 times.
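An illustrative sketch of the idea in this abstract (not the MCS or MaxSAT algorithms themselves): a basic branch and bound for maximum clique, seeded with a greedy initial solution so that branches whose bound cannot beat the incumbent are pruned early. The graph and the greedy rule are hypothetical examples.

```python
def greedy_clique(adj):
    """Greedy heuristic: repeatedly add a highest-degree vertex adjacent to all chosen ones."""
    clique = []
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u]))
        clique.append(v)
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

def max_clique(adj):
    best = greedy_clique(adj)          # high-quality incumbent prunes branches early
    def expand(clique, candidates):
        nonlocal best
        if len(clique) + len(candidates) <= len(best):   # bound: cannot improve
            return
        if not candidates:
            best = clique[:]
            return
        for v in sorted(candidates):
            expand(clique + [v], candidates & adj[v])
            candidates = candidates - {v}
    expand([], set(adj))
    return best

# Triangle plus a pendant vertex: the unique maximum clique is {0, 1, 2}.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(max_clique(adj)))   # → [0, 1, 2]
```

The better the initial solution, the tighter the `len(clique) + len(candidates)` bound, which is exactly why a strong heuristic such as ILS pays off on hard instances.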

2.
The entropy solutions of the compressible Euler equations satisfy a minimum principle for the specific entropy (Tadmor in Appl Numer Math 2:211–219, 1986). First order schemes such as Godunov-type and Lax–Friedrichs schemes, and the second order kinetic schemes (Khobalatte and Perthame in Math Comput 62:119–131, 1994), also satisfy a discrete minimum entropy principle. In this paper, we extend the positivity-preserving high order schemes for the compressible Euler equations of Zhang and Shu (J Comput Phys 229:8918–8934, 2010) and Zhang et al. (J Sci Comput, in press) to enforce the minimum entropy principle for high order finite volume and discontinuous Galerkin (DG) schemes.
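For reference, the minimum principle in question can be stated in a simplified global form (Tadmor's local statement, over a domain of dependence, is sharper), writing the specific entropy as \(s = \ln(p/\rho^{\gamma})\):

```latex
s = \ln\!\left(\frac{p}{\rho^{\gamma}}\right), \qquad
\min_{x}\, s(x, t) \;\ge\; \min_{x}\, s(x, 0) \quad \text{for all } t > 0 .
```

A scheme satisfies a discrete minimum entropy principle when the specific entropy of each updated cell average is bounded below by the minimum specific entropy over the cells of its stencil at the previous time step.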

3.
This paper improves the heuristic algorithms presented in Benjamin and Beasley (Comput Oper Res 37(12):2270–2280, 2010) for solving the waste collection vehicle routing problem with time windows, in particular the real-life waste collection benchmark problems from Kim et al. (Comput Oper Res 33(12):3624–3642, 2006). These consist of ten test problems, involving up to 2,092 customers and 19 waste disposal facilities. The main difference between this paper and Benjamin and Beasley (Comput Oper Res 37(12):2270–2280, 2010) is that here we use a disposal facility positioning (DFP) procedure to evaluate routes for our algorithms. Since the problem involves multiple disposal facilities, the objective of DFP is to choose the best disposal facilities to visit on the vehicle route. Computational results indicate that our algorithms with DFP produce substantially better quality routes than previous approaches in the literature.
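A hedged sketch of the disposal facility positioning idea as described in the abstract: when the vehicle must unload between two consecutive stops, pick the facility that minimizes the detour. The coordinates, the Euclidean distances, and the trigger rule are all illustrative assumptions, not details from the paper.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_facility(prev_stop, next_stop, facilities):
    """Facility minimizing the extra distance of the detour prev -> facility -> next."""
    return min(facilities,
               key=lambda f: dist(prev_stop, f) + dist(f, next_stop)
                             - dist(prev_stop, next_stop))

# Two candidate facilities; the vehicle must unload between stops (0,0) and (4,0).
facilities = [(2.0, 0.0), (2.0, 3.0)]
print(best_facility((0.0, 0.0), (4.0, 0.0), facilities))   # → (2.0, 0.0), zero detour
```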

4.
We provide a new semilocal convergence analysis of the Gauss–Newton method (GNM) for solving nonlinear equations in Euclidean space. Using a combination of center-Lipschitz and Lipschitz conditions, together with our new idea of recurrent functions, we provide, under the same or weaker hypotheses than before (Ben-Israel, J. Math. Anal. Appl. 15:243–252, 1966; Chen and Nashed, Numer. Math. 66:235–257, 1993; Deuflhard and Heindl, SIAM J. Numer. Anal. 16:1–10, 1979; Guo, J. Comput. Math. 25:231–242, 2007; Häußler, Numer. Math. 48:119–125, 1986; Hu et al., J. Comput. Appl. Math. 219:110–122, 2008; Kantorovich and Akilov, Functional Analysis in Normed Spaces, Pergamon, Oxford, 1982), a finer convergence analysis. The results can be extended to the case where outer or generalized inverses are used. Numerical examples are also provided to show that our results apply where those of the earlier works cited above fail.
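For readers unfamiliar with the method under analysis, here is a minimal Gauss–Newton iteration, shown on a square 2×2 system (where it coincides with Newton's method). The test problem, the intersection of a circle and a line, is illustrative and not taken from the paper.

```python
def gauss_newton(F, J, x, iters=20):
    """Iterate x <- x - J(x)^{-1} F(x) for a 2x2 system, via Cramer's rule."""
    for _ in range(iters):
        f1, f2 = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c                  # solve J * step = F
        step = ((f1 * d - f2 * b) / det, (a * f2 - c * f1) / det)
        x = (x[0] - step[0], x[1] - step[1])
    return x

F = lambda x: (x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1])     # circle and line
J = lambda x: ((2 * x[0], 2 * x[1]), (1.0, -1.0))            # Jacobian of F
x = gauss_newton(F, J, (1.0, 0.5))
print(x)   # converges to (sqrt(2), sqrt(2)) ≈ (1.41421, 1.41421)
```

Semilocal convergence results of the kind the abstract describes guarantee, from conditions checkable at the starting point, that such an iteration converges to a nearby solution.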

5.
An augmented Lagrangian approach for sparse principal component analysis
Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, which often makes the PCs difficult to interpret. To alleviate this drawback, various sparse PCA approaches have been proposed in the literature (Cadima and Jolliffe in J Appl Stat 22:203–214, 1995; d’Aspremont et al. in J Mach Learn Res 9:1269–1294, 2008; d’Aspremont et al. in SIAM Rev 49:434–448, 2007; Jolliffe in J Appl Stat 22:29–35, 1995; Journée et al. in J Mach Learn Res 11:517–553, 2010; Jolliffe et al. in J Comput Graph Stat 12:531–547, 2003; Moghaddam et al. in Advances in Neural Information Processing Systems 18:915–922, MIT Press, Cambridge, 2006; Shen and Huang in J Multivar Anal 99(6):1015–1034, 2008; Zou et al. in J Comput Graph Stat 15(2):265–286, 2006). Despite their success in achieving sparsity, these methods lose some important properties enjoyed by standard PCA, such as uncorrelatedness of the PCs and orthogonality of the loading vectors. Also, the total explained variance that they attempt to maximize can be too optimistic. In this paper we propose a new formulation for sparse PCA, aiming to find sparse and nearly uncorrelated PCs with orthogonal loading vectors while explaining as much of the total variance as possible. We also develop a novel augmented Lagrangian method for solving a class of nonsmooth constrained optimization problems, which is well suited to our formulation of sparse PCA. We show that it converges to a feasible point and, moreover, under some regularity assumptions, to a stationary point. Additionally, we propose two nonmonotone gradient methods for solving the augmented Lagrangian subproblems, and establish their global and local convergence.
Finally, we compare our sparse PCA approach with several existing methods on synthetic (Zou et al. in J Comput Graph Stat 15(2):265–286, 2006), Pitprops (Jeffers in Appl Stat 16:225–236, 1967), and gene expression data (Chin et al. in Cancer Cell 10:529–541, 2006). The computational results demonstrate that the sparse PCs produced by our approach substantially outperform those of the other methods in terms of total explained variance, correlation of the PCs, and orthogonality of the loading vectors. Moreover, the experiments on random data show that our method is capable of solving large-scale problems within a reasonable amount of time.
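To make the "augmented Lagrangian method" concrete, here is the generic textbook loop for a smooth equality-constrained problem, not the paper's specialized nonsmooth method: minimize the augmented Lagrangian in x, then update the multiplier. The test problem and all step sizes are illustrative assumptions.

```python
def augmented_lagrangian(grad_f, c, grad_c, x, lam=0.0, mu=10.0, outer=20):
    """Minimize f(x) subject to c(x) = 0 via f + lam*c + (mu/2)*c^2."""
    for _ in range(outer):
        for _ in range(2000):                      # inner solve: plain gradient descent
            g = [gf + (lam + mu * c(x)) * gc
                 for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - 0.01 * gi for xi, gi in zip(x, g)]
        lam += mu * c(x)                           # first-order multiplier update
    return x

# Illustrative problem: min x1^2 + x2^2 subject to x1 + x2 = 1; solution (0.5, 0.5).
grad_f = lambda x: [2 * x[0], 2 * x[1]]
c = lambda x: x[0] + x[1] - 1.0
grad_c = lambda x: [1.0, 1.0]
x = augmented_lagrangian(grad_f, c, grad_c, [0.0, 0.0])
print(x)   # ≈ [0.5, 0.5]
```

The paper's contribution lies in extending convergence guarantees for such a scheme to the nonsmooth, orthogonality-constrained subproblems arising in sparse PCA; the loop above only shows the overall mechanism.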

6.
This paper presents a Branch, Bound, and Remember (BB&R) exact algorithm using the Cyclic Best First Search (CBFS) exploration strategy for solving the ${1|ST_{sd}|\sum T_{i}}$ scheduling problem, a single machine scheduling problem with sequence-dependent setup times where the objective is to find a schedule with minimum total tardiness. The BB&R algorithm incorporates memory-based dominance rules to reduce the solution search space. The algorithm creates schedules in the reverse direction for problems where fewer than half the jobs are expected to be tardy. In addition, a branch and bound algorithm is used to efficiently compute tighter lower bounds for the problem. This paper also presents a counterexample to a previously reported exact algorithm in Luo and Chu (Appl Math Comput 183(1):575–588, 2006) and Luo et al. (Int J Prod Res 44(17):3367–3378, 2006). Computational experiments demonstrate that the algorithm is two orders of magnitude faster than the fastest exact algorithm that has appeared in the literature. Computational experiments on two sets of benchmark problems demonstrate that the CBFS exploration strategy can also be used as an effective heuristic on problems that are too large to solve to optimality.
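The objective being minimized here is easy to state in code: the total tardiness of a single-machine sequence, where each job's start is delayed by a setup time that depends on the previously processed job. The small instance below is made up for illustration.

```python
def total_tardiness(seq, proc, due, setup, start=0):
    """Objective of 1|ST_sd|sum T_i for a given job sequence."""
    t, total, prev = start, 0, None
    for j in seq:
        t += setup.get((prev, j), 0) + proc[j]   # setup depends on the previous job
        total += max(0, t - due[j])              # tardiness T_j = max(0, C_j - d_j)
        prev = j
    return total

proc = {1: 3, 2: 2, 3: 4}                        # processing times
due = {1: 4, 2: 6, 3: 12}                        # due dates
setup = {(None, 1): 1, (None, 2): 2, (None, 3): 1,
         (1, 2): 1, (2, 1): 2, (1, 3): 2,
         (2, 3): 1, (3, 1): 1, (3, 2): 2}
print(total_tardiness([1, 2, 3], proc, due, setup))   # → 1
```

An exact algorithm such as BB&R searches over sequences for the one minimizing this quantity, using dominance rules and lower bounds to avoid enumerating all of them.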

7.
Diffusive relaxation systems provide a general framework for approximating nonlinear diffusion problems, also in the degenerate case (Aregba-Driollet et al. in Math. Comput. 73(245):63–94, 2004; Boscarino et al. in Implicit-Explicit Runge-Kutta Schemes for Hyperbolic Systems and Kinetic Equations in the Diffusion Limit, 2011; Cavalli et al. in SIAM J. Sci. Comput. 34:A137–A160, 2012; SIAM J. Numer. Anal. 45(5):2098–2119, 2007; Naldi and Pareschi in SIAM J. Numer. Anal. 37:1246–1270, 2000; Naldi et al. in Surveys Math. Indust. 10(4):315–343, 2002). Their discretization is usually obtained by explicit schemes in time coupled with a suitable method in space, which inherits the standard parabolic stability constraint. In this paper we combine the effectiveness of relaxation systems with the computational efficiency and robustness of implicit approximations, avoiding the need to solve nonlinear problems and avoiding stability constraints on the time step. In particular, we consider an implicit scheme for the whole relaxation system except for the nonlinear source term, which is treated through a suitable linearization technique. We give some theoretical stability results for a particular case of linearization and provide insight into the general case. Several numerical simulations confirm the theoretical results and give evidence of stability and convergence also in the case of nonlinear degenerate diffusion.

8.
We present a local as well as a semilocal convergence analysis of Newton’s method for approximating a locally unique solution of a nonlinear equation in a Banach space setting. Our hypotheses involve m-Fréchet-differentiable operators and general Lipschitz-type hypotheses, where m ≥ 2 is an integer. The new convergence analysis unifies earlier results; it is more flexible and provides a finer convergence analysis than earlier studies such as Argyros in J. Comput. Appl. Math. 131:149–159, 2001; Argyros and Hilout in J. Appl. Math. Comput. 29:391–400, 2009; Argyros and Hilout in J. Complex. 28:364–387, 2012; Argyros et al. in Numerical Methods for Equations and Its Applications, CRC Press/Taylor & Francis, New York, 2012; Gutiérrez in J. Comput. Appl. Math. 79:131–145, 1997; Ren and Argyros in Appl. Math. Comput. 217:612–621, 2010; Traub and Wozniakowski in J. Assoc. Comput. Mach. 26:250–258, 1979. Numerical examples are presented further validating the theoretical results.

9.
A local as well as a semilocal convergence analysis of a Newton–Jarratt-type iterative method for solving equations in a Banach space setting is studied here, using information only at a point via a gamma-type condition (Argyros in Approximate Solution of Operator Equations with Applications, 2005; Wang in Chin. Sci. Bull. 42(7):552–555, 1997). This method has already been examined by us in (Argyros et al. in J. Comput. Appl. Math. 51:103–106, 1994; Argyros in Comment. Mat. XXIII:97–108, 1994), where the order of convergence four was established using, however, information on the domain of the operator. In this study we establish the same order of convergence under weaker conditions. Moreover, we show that although we use weaker conditions, the results obtained here can be used to solve equations in cases where the results in (Argyros et al. in J. Comput. Appl. Math. 51:103–106, 1994; Argyros in Comment. Mat. XXIII:97–108, 1994) cannot apply. Our method is inverse-free, and therefore cheaper at the second step, in contrast with the corresponding two-step Newton methods. Numerical examples are also provided.

10.
We study the problem of quickly estimating the best k-term Fourier representation for a given periodic function f: [0, 2π] → \(\mathbb{C}\). Solving this problem requires the identification of k of the largest magnitude Fourier series coefficients of f, in worst case \(k^{2} \cdot \log^{O(1)} N\) time. Randomized sublinear-time Monte Carlo algorithms, which have a small probability of failing to output accurate answers for each input signal, have been developed for solving this problem (Gilbert et al. 2002, 2005). These methods were implemented as the Ann Arbor Fast Fourier Transform (AAFFT) and empirically evaluated in Iwen et al. (Commun Math Sci 5(4):981–998, 2007). In this paper we present a new implementation, called the Gopher Fast Fourier Transform (GFFT), of more recently developed sparse Fourier transform techniques (Iwen in Found Comput Math 10(3):303–338, 2010; Appl Comput Harmon Anal, 2012). Experiments indicate that GFFT is faster than AAFFT. In addition to our empirical evaluation, we also consider the existence of sublinear-time Fourier approximation methods with deterministic approximation guarantees for functions whose sequences of Fourier series coefficients are compressible. In particular, we prove the existence of sublinear-time Las Vegas Fourier transforms which improve on the recent deterministic Fourier approximation results of Iwen (Found Comput Math 10(3):303–338, 2010; Appl Comput Harmon Anal, 2012) for Fourier compressible functions by guaranteeing accurate answers while using an asymptotically near-optimal number of function evaluations.
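The "best k-term Fourier representation" can be computed the slow way for intuition: take a full DFT of the samples and keep the k largest-magnitude coefficients. The point of AAFFT/GFFT-style algorithms is to find those k coefficients in sublinear time instead; the brute-force sketch below only defines the target object.

```python
import cmath

def dft(samples):
    """Naive O(n^2) discrete Fourier transform, normalized by 1/n."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def best_k_terms(samples, k):
    """Indices of the k largest-magnitude Fourier coefficients."""
    coeffs = dft(samples)
    return sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k]

# A signal with exactly two active frequencies, 3 and 5.
n = 64
f = [2.0 * cmath.exp(2j * cmath.pi * 3 * t / n)
     + 1.0 * cmath.exp(2j * cmath.pi * 5 * t / n) for t in range(n)]
print(sorted(best_k_terms(f, 2)))   # → [3, 5]
```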

11.
Recently we introduced a new technique for combining classical bivariate Shepard operators with three-point polynomial interpolation operators (Dell’Accio and Di Tommaso, On the extension of the Shepard–Bernoulli operators to higher dimensions, unpublished). This technique is based on associating, to each sample point, a triangle with one vertex at that point and the other vertices in its neighborhood, chosen to minimize the error of the three-point interpolation polynomial. The combination inherits both the degree of exactness and the interpolation conditions of the interpolation polynomial at each sample point, so that in Caira et al. (J Comput Appl Math 236:1691–1707, 2012) we generalized the notion of Lidstone interpolation (LI) to scattered data sets by combining Shepard operators with the three-point Lidstone interpolation polynomial (Costabile and Dell’Accio, Appl Numer Math 52:339–361, 2005). Complementary Lidstone interpolation (CLI), which naturally complements Lidstone interpolation, was recently introduced by Costabile et al. (J Comput Appl Math 176:77–90, 2005) and drawn on by Agarwal et al. (2009) and Agarwal and Wong (J Comput Appl Math 234:2543–2561, 2010). In this paper we generalize the notion of CLI to bivariate scattered data sets. Numerical results are provided.

12.
In Zhao et al. (Electron J Combin 19:\({\sharp}\)P19, 2012), we determined the minimum number of vertices of one-realizations of a given finite set S and constructed the corresponding mixed hypergraphs. In this paper, by finding some of their spanning sub-hypergraphs, we determine the minimum number of \({\mathcal{D}}\)-edges (resp. \({\mathcal{C}}\)-edges) of one-realizations of S. As a result, we partially solve an open problem proposed by Tuza and Voloshin (Bolyai Society Mathematical Studies, vol. 17, pp. 235–255, Springer, Berlin, 2008).

13.
This special issue is similar to our previous special issues (Kennedy et al. in Comput. Math. Organ. Theory 16(3):217–219, 2010; 17(3):225–228, 2011) in that it includes articles based on the award-winning conference papers of, in this case, the 2011 BRiMS Annual Conference. These articles were reviewed by the editors, extended to journal article length, and then peer-reviewed and revised before being accepted. The articles include a new way to evaluate designs of interfaces for safety-critical systems (Bolton in Comput. Math. Organ. Theory, 2012), an article that extends our understanding of how to model situation awareness (SA) in a cognitive architecture (Rodgers et al. in Comput. Math. Organ. Theory, 2012), an article that presents electroencephalography (EEG) data used to derive dynamic neurophysiologic models of engagement in teamwork (Stevens et al. in Comput. Math. Organ. Theory, 2012), and an article that demonstrates using machine learning to generate models, with an example application of that tool (Best in Comput. Math. Organ. Theory, 2012). After presenting a brief summary of each paper, we note some recurrent themes: task analysis, team and individual models, spatial reasoning, usability issues, and, in particular, models that interact with each other or with systems.

14.
For a system of polynomial equations whose coefficients depend on parameters, the Newton polyhedron of its discriminant is computed in terms of the Newton polyhedra of the coefficients. This leads to an explicit formula (involving Euler obstructions of toric varieties) in the unmixed case, suggests certain open questions in general, and generalizes a number of similar known results (Gelfand et al. in Discriminants, Resultants, and Multidimensional Determinants, Birkhäuser, Boston, 1994; Sturmfels in J. Algebraic Comb. 32(2):207–236, 1994; McDonald in Discrete Comput. Geom. 27:501–529, 2002; Gonzalez-Perez in Can. J. Math. 52(2):348–368, 2000; Esterov and Khovanskii in Funct. Anal. Math. 2(1), 2008).

15.
Recently, the weight distributions of the duals of cyclic codes with two zeros have been obtained for several cases in Ding et al. (IEEE Trans Inform Theory 57(12):8000–8006, 2011); Ma et al. (IEEE Trans Inform Theory 57(1):397–402, 2011); Wang et al. (IEEE Trans Inform Theory 58(12):7253–7259, 2012); and Xiong (Finite Fields Appl 18(5):933–945, 2012). In this paper we use the method developed in Xiong (Finite Fields Appl 18(5):933–945, 2012) to solve one more special case. We make extensive use of standard tools in number theory such as characters of finite fields, Gauss sums, and Jacobi sums. The problem of finding the weight distribution is transformed into a problem of evaluating certain character sums over finite fields, which turns out to be associated with counting the number of points on certain elliptic curves over finite fields. We also treat the special case in which the characteristic of the finite field is 2.
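To make the object of study concrete: the weight distribution of a code counts codewords of each Hamming weight. For small codes it can be computed by brute force, as below for the [7,4] binary cyclic (Hamming) code with generator polynomial g(x) = 1 + x + x³; the papers cited above obtain such distributions for much larger codes via character sums instead.

```python
from collections import Counter
from itertools import product

def weight_distribution(g, n):
    """Brute-force weight distribution of the length-n binary cyclic code
    generated by the polynomial g (coefficients listed constant-term first)."""
    k = n - (len(g) - 1)                     # code dimension
    dist = Counter()
    for msg in product([0, 1], repeat=k):    # every message polynomial
        cw = [0] * n
        for i, m in enumerate(msg):          # codeword = message * g over GF(2)
            if m:
                for j, gj in enumerate(g):
                    cw[i + j] ^= gj
        dist[sum(cw)] += 1
    return dict(dist)

print(weight_distribution([1, 1, 0, 1], 7))   # → {0: 1, 3: 7, 4: 7, 7: 1}
```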

16.
Based on the very recent work of Dang and Gao (Inverse Probl 27:1–9, 2011) and Wang and Xu (J Inequal Appl, doi:10.1155/2010/102085, 2010), and inspired by Yao (Appl Math Comput 186:1551–1558, 2007), Noor (J Math Anal Appl 251:217–229, 2000), and Xu (Inverse Probl 22:2021–2034, 2006), we suggest a three-step KM-CQ-like method for solving split common fixed-point problems in Hilbert spaces. Our results improve and develop the previously discussed feasibility problem and related algorithms.

17.
Shortest path games are considered in this paper. The transportation of a good in a network has costs and benefits, and the problem is to divide the profit of the transportation among the players. Fragnelli et al. (Math Methods Oper Res 52:251–264, 2000) introduce the class of shortest path games and show that it coincides with the class of monotone games. They also give a characterization of the Shapley value on this class of games. In this paper we consider five further characterizations of the Shapley value (the axiomatizations of Hart and Mas-Colell in Econometrica 57:589–614, 1989; Shapley in Contributions to the Theory of Games II, Annals of Mathematics Studies, vol 28, Princeton University Press, Princeton, pp 307–317, 1953; Young in Int J Game Theory 14:65–72, 1985; Chun in Games Econ Behav 45:119–130, 1989; and van den Brink in Int J Game Theory 30:309–319, 2001), and conclude that all the mentioned axiomatizations are valid for shortest path games. The axioms of Fragnelli et al. (Math Methods Oper Res 52:251–264, 2000) are based on the graph behind the problem; in this paper we do not consider graph-specific axioms but take $TU$ axioms only. That is, we consider all shortest path problems and take the viewpoint of an abstract decision maker who focuses on the abstract problem rather than on the concrete situation.
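The Shapley value discussed here can be computed directly from its definition as the average marginal contribution of a player over all orderings of the players. The 3-player monotone game below is an illustrative example, not a shortest path game from the paper.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Shapley value by averaging marginal contributions over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    n_fact = factorial(len(players))
    return {p: phi[p] / n_fact for p in phi}

# A simple monotone game: a coalition earns 1 iff it contains player 1 and at
# least one of players 2, 3.
v = lambda S: 1.0 if 1 in S and (2 in S or 3 in S) else 0.0
print(shapley([1, 2, 3], v))   # → {1: 2/3, 2: 1/6, 3: 1/6}
```

This enumeration is exponential in the number of players; axiomatizations like the ones the abstract lists matter precisely because they pin down this value without reference to the computation.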

18.
The paper is devoted to the problem of establishing right-convergence of sparse random graphs. This concerns the convergence of the logarithm of the number of homomorphisms from graphs or hyper-graphs \(\mathbb{G}_N, N\ge 1\) to some target graph \(W\). The theory of dense graph convergence, including random dense graphs, is now well understood (Borgs et al. in Ann Math 176:151–219, 2012; Borgs et al. in Adv Math 219:1801–1851, 2008; Chatterjee and Varadhan in Eur J Comb 32:1000–1017, 2011; Lovász and Szegedy in J Comb Theory Ser B 96:933–957, 2006), but its counterpart for sparse random graphs presents some fundamental difficulties. Phrased in statistical physics terminology, the issue is the existence of the limits of appropriately normalized log-partition functions, also known as free energy limits, for the Gibbs distribution associated with \(W\). In this paper we prove that the sequence of sparse Erdős–Rényi graphs is right-converging when the tensor product associated with the target graph \(W\) satisfies a certain convexity property. We treat the case of discrete and continuous target graphs \(W\). The latter case allows us to prove a special case of Talagrand’s recent conjecture [more accurately stated as level III Research Problem 6.7.2 in his recent book (Talagrand in Mean Field Models for Spin Glasses: Volume I: Basic Examples, Springer, Berlin, 2010)], concerning the existence of the limit of the measure of a set obtained from \(\mathbb{R}^N\) by intersecting it with linearly in \(N\) many subsets generated according to some common probability law. Our proof is based on the interpolation technique, introduced first by Guerra and Toninelli (Commun Math Phys 230:71–79, 2002) and developed further in Abbe and Montanari (On the Concentration of the Number of Solutions of Random Satisfiability Formulas, 2013); Bayati et al. (Ann Probab; conference version in Proceedings of the 42nd Annual Symposium on the Theory of Computing (STOC), 2010); Contucci et al. (Antiferromagnetic Potts Model on the Erdős–Rényi Random Graph, 2011); Franz and Leone (J Stat Phys 111(3/4):535–564, 2003); Franz et al. (J Phys A Math Gen 36:10967–10985, 2003); Montanari (IEEE Trans Inf Theory 51(9):3221–3246, 2005); and Panchenko and Talagrand (Probab Theory Relat Fields 130:312–336, 2004). Specifically, Bayati et al. (Ann Probab; conference version in STOC 2010) establish the right-convergence property for Erdős–Rényi graphs for some special cases of \(W\). In this paper most of the results in Bayati et al. follow as a special case of our main theorem.

19.
Second-order elliptic operators with unbounded coefficients of the form ${Au := -{\rm div}(a\nabla u) + F \cdot \nabla u + Vu}$ in ${L^{p}(\mathbb{R}^{N})}$ ${(N \in \mathbb{N}, 1 < p < \infty)}$ are considered, the same as in the recent papers Metafune et al. (Z Anal Anwendungen 24:497–521, 2005), Arendt et al. (J Operator Theory 55:185–211, 2006; J Math Anal Appl 338:505–517, 2008), and Metafune et al. (Forum Math 22:583–601, 2010). A new criterion for the m-accretivity and m-sectoriality of A in ${L^{p}(\mathbb{R}^{N})}$ is presented, via a certain identity that behaves like a sesquilinear form over ${L^{p} \times L^{p'}}$. It partially improves the results in Metafune et al. (Z Anal Anwendungen 24:497–521, 2005) and Metafune et al. (Forum Math 22:583–601, 2010) with a different approach. The result naturally extends Kato’s criterion (Kato in Math Stud 55:253–266, 1981) for nonnegative selfadjointness to the case p ≠ 2. The simplicity is illustrated with the typical example ${Au = -u'' + x^{3}u' + c|x|^{\gamma}u}$ in ${L^p(\mathbb{R})}$, which is dealt with in Arendt et al. (J Operator Theory 55:185–211, 2006; J Math Anal Appl 338:505–517, 2008).

20.
In this paper we deal with a random walk in a random environment on a supercritical Galton–Watson tree. We focus on the recurrent cases already studied by Hu and Shi (Ann. Probab. 35:1978–1997, 2007; Probab. Theory Relat. Fields 138:521–549, 2007), Faraud et al. (Probab. Theory Relat. Fields, 2011, in press), and Faraud (Electron. J. Probab. 16(6):174–215, 2011). We prove that the largest generation entirely visited by these walks behaves like \(\log n\), and that the constant of normalization, which differs from one case to another, is a function of the inverse of the constant in Biggins’ law of large numbers for branching random walks (Biggins in Adv. Appl. Probab. 8:446–459, 1976).


Copyright©北京勤云科技发展有限公司  京ICP备09084417号