51.
Alexander Buchholz Nicolas Chopin 《Journal of computational and graphical statistics》2019,28(1):205-219
ABC (approximate Bayesian computation) is a general approach for dealing with models with an intractable likelihood. In this work, we derive ABC algorithms based on QMC (quasi-Monte Carlo) sequences. We show that the resulting ABC estimates have a lower variance than their Monte Carlo counterparts. We also develop QMC variants of sequential ABC algorithms, which progressively adapt the proposal distribution and the acceptance threshold. We illustrate our QMC approach through several examples taken from the ABC literature.
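As a rough illustration of the idea (not the authors' code), the sketch below runs plain ABC rejection sampling for a toy Gaussian model, driving both the prior draw and the simulator with a scrambled Sobol sequence from `scipy.stats.qmc` in place of i.i.d. uniforms. The observation `y_obs`, the threshold `eps`, and the uniform prior are all invented for the example.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical toy model: infer the mean theta of a N(theta, 1) likelihood
# from a single observation, with a U(0, 1) prior on theta.
y_obs = 0.5
eps = 0.1          # ABC acceptance threshold
n = 2**12          # power of two, as Sobol sequences prefer

# QMC variant: one scrambled Sobol dimension drives the prior draw,
# the other drives the simulator's noise (via the inverse normal CDF).
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
u = sobol.random(n)
theta = u[:, 0]                        # prior sample theta ~ U(0, 1)
y_sim = theta + norm.ppf(u[:, 1])      # simulate y ~ N(theta, 1)

# Rejection step: keep parameters whose simulated data is close to y_obs.
accepted = theta[np.abs(y_sim - y_obs) < eps]
post_mean_qmc = accepted.mean()
```

Replacing the Sobol draw with `np.random.uniform` gives the plain Monte Carlo version the paper compares against.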
52.
Erich Novak Ian H. Sloan Henryk Woźniakowski 《Foundations of Computational Mathematics》2004,4(2):121-156
We study the approximation problem (or problem of optimal recovery in the $L_2$-norm) for weighted Korobov spaces with smoothness parameter $\alpha$. The weights $\gamma_j$ of the Korobov spaces moderate the behavior of periodic functions with respect to successive variables. The nonnegative smoothness parameter $\alpha$ measures the decay of Fourier coefficients. For $\alpha=0$, the Korobov space is the $L_2$ space, whereas for positive $\alpha$, the Korobov space is a space of periodic functions with some smoothness and the approximation problem corresponds to a compact operator. The periodic functions are defined on $[0,1]^d$ and our main interest is when the dimension $d$ varies and may be large.

We consider algorithms using two different classes of information. The first class $\Lambda^{\mathrm{all}}$ consists of arbitrary linear functionals. The second class $\Lambda^{\mathrm{std}}$ consists of only function values, and this class is more realistic in practical computations. We want to know when the approximation problem is tractable. Tractability means that there exists an algorithm whose error is at most $\varepsilon$ and whose information cost is bounded by a polynomial in the dimension $d$ and in $\varepsilon^{-1}$. Strong tractability means that the bound does not depend on $d$ and is polynomial in $\varepsilon^{-1}$. In this paper we consider the worst case, randomized, and quantum settings. In each setting, the concepts of error and cost are defined differently and, therefore, tractability and strong tractability depend on the setting and on the class of information.

In the worst case setting, we apply known results to prove that strong tractability and tractability in the class $\Lambda^{\mathrm{all}}$ are equivalent. This holds if and only if $\alpha>0$ and the sum-exponent $s_{\gamma}$ of the weights is finite, where
$$
s_{\gamma}= \inf\Bigl\{s>0 : \sum_{j=1}^\infty\gamma_j^s<\infty\Bigr\}.
$$
In the worst case setting for the class $\Lambda^{\mathrm{std}}$ we must assume that $\alpha>1$ to guarantee that functionals from $\Lambda^{\mathrm{std}}$ are continuous. The notions of strong tractability and tractability are not equivalent. In particular, strong tractability holds if and only if $\alpha>1$ and $\sum_{j=1}^\infty\gamma_j<\infty$.

In the randomized setting, it is known that randomization does not help over the worst case setting in the class $\Lambda^{\mathrm{all}}$. For the class $\Lambda^{\mathrm{std}}$, we prove that strong tractability and tractability are equivalent, and this holds under the same assumption as for the class $\Lambda^{\mathrm{all}}$ in the worst case setting, that is, if and only if $\alpha>0$ and $s_{\gamma} < \infty$.

In the quantum setting, we consider only upper bounds for the class $\Lambda^{\mathrm{std}}$ with $\alpha>1$. We prove that $s_{\gamma}<\infty$ implies strong tractability. Hence for $s_{\gamma}>1$, the randomized and quantum settings both break worst case intractability of approximation for the class $\Lambda^{\mathrm{std}}$.

We indicate cost bounds on algorithms with error at most $\varepsilon$. Let $c(d)$ denote the cost of computing $L(f)$ for $L\in \Lambda^{\mathrm{all}}$ or $L\in \Lambda^{\mathrm{std}}$, and let the cost of one arithmetic operation be taken as unity. The information cost bound in the worst case setting for the class $\Lambda^{\mathrm{all}}$ is of order $c(d) \cdot \varepsilon^{-p}$ with $p$ roughly equal to $2\max(s_\gamma,\alpha^{-1})$. Then for the class $\Lambda^{\mathrm{std}}$ in the randomized setting, we present an algorithm with error at most $\varepsilon$ whose total cost is of order $c(d)\varepsilon^{-p-2} + d\varepsilon^{-2p-2}$, which for small $\varepsilon$ is roughly
$$
d\varepsilon^{-2p-2}.
$$
In the quantum setting, we present a quantum algorithm with error at most $\varepsilon$ that uses only about $d + \log \varepsilon^{-1}$ qubits and whose total cost is of order
$$
(c(d) + d)\, \varepsilon^{-1-3p/2}.
$$
The ratio of the cost of the randomized algorithm to that of the quantum algorithm is of order
$$
\frac{d}{c(d)+d}\,\left(\frac{1}{\varepsilon}\right)^{1+p/2}.
$$
Hence, we have a polynomial speedup of order $\varepsilon^{-(1+p/2)}$. We stress that $p$ can be arbitrarily large, and in this case the speedup is huge.
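For concreteness (an illustration, not taken from the paper), consider product weights $\gamma_j = j^{-\beta}$, for which the sum-exponent is $s_\gamma = 1/\beta$. The snippet below brackets this numerically for $\beta = 2$ (so $s_\gamma = 1/2$): the partial sums of $\sum_j \gamma_j^s$ stabilize once $s > 1/\beta$ and keep growing when $s < 1/\beta$.

```python
def tail_growth(beta, s, n1=10_000, n2=100_000):
    """Growth of the partial sums of sum_j (j**-beta)**s between n1 and n2.

    Near zero iff beta*s > 1, i.e. iff s exceeds the sum-exponent 1/beta.
    """
    s1 = sum(j ** (-beta * s) for j in range(1, n1 + 1))
    s2 = sum(j ** (-beta * s) for j in range(1, n2 + 1))
    return s2 - s1

beta = 2.0                            # weights gamma_j = j**-2, so s_gamma = 0.5
grow_above = tail_growth(beta, 1.0)   # s > s_gamma: tail is negligible
grow_below = tail_growth(beta, 0.4)   # s < s_gamma: sums keep growing
```

Since $s_\gamma = 1/2 < \infty$ here, these weights fall in the tractable regime of the theorems above whenever the smoothness condition on $\alpha$ is met.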
53.
We present a generalization of the mixed integer rounding (MIR) approach for generating valid inequalities for (mixed) integer programming (MIP) problems. For any positive integer n, we develop n facets for a certain (n + 1)-dimensional single-constraint polyhedron in a sequential manner. We then show that for any n, the last of these facets (which we call the n-step MIR facet) can be used to generate a family of valid inequalities for the feasible set of a general (mixed) IP constraint, which we refer to as the n-step MIR inequalities. The Gomory Mixed Integer Cut and the 2-step MIR inequality of Dash and Günlük (Math Program 105(1):29–53, 2006) are the first two families, corresponding to n = 1, 2, respectively. The n-step MIR inequalities are easily produced using periodic functions, which we refer to as the n-step MIR functions; none of these functions dominates another on its whole period. Finally, we prove that the n-step MIR inequalities generate two-slope facets for the infinite group polyhedra, and hence are potentially strong.
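The classical 1-step case — the textbook MIR inequality, which the paper's n-step facets generalize — can be sketched as follows; the constraint data below are made up for the example.

```python
from math import floor

def mir_cut(a, b):
    """1-step MIR inequality for {(x, s) in Z^n_+ x R_+ : sum a_j x_j <= b + s}.

    Returns (c, c_s, rhs) for the valid inequality
        sum_j c_j x_j <= rhs + c_s * s,
    where f = b - floor(b) is assumed positive and f_j = a_j - floor(a_j).
    """
    f = b - floor(b)
    c = [floor(aj) + max((aj - floor(aj)) - f, 0.0) / (1.0 - f) for aj in a]
    return c, 1.0 / (1.0 - f), floor(b)

# Example constraint: 1.5*x1 + 0.4*x2 <= 1.3 + s
cut, c_s, rhs = mir_cut([1.5, 0.4], 1.3)
```

The cut is valid for every integer point of the original set and is tight at, e.g., x = (1, 0) with the smallest feasible slack.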
54.
We study the problem of sampling uniformly at random from the set of k-colorings of a graph with maximum degree Δ. We focus attention on the Markov chain Monte Carlo method, particularly on a popular Markov chain for this problem, the Wang–Swendsen–Kotecký (WSK) algorithm. The second author recently proved that the WSK algorithm quickly converges to the desired distribution when k > 11Δ/6. We study how far these positive results can be extended in general. In this note we prove the first non-trivial results on when the WSK algorithm takes exponentially long to reach the stationary distribution and is thus said to be torpidly mixing. In particular, we show that the WSK algorithm is torpidly mixing on a family of bipartite graphs when 3 ≤ k < Δ/(20 log Δ), and on a family of planar graphs for any number of colors. We also give a family of graphs for which, despite their small chromatic number, the WSK algorithm is not ergodic when k ≤ Δ/2, provided k is larger than some absolute constant k₀.
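A single WSK move is a Kempe-chain swap, which can be sketched as below — an illustrative implementation (the graph is a hypothetical adjacency dict, not one of the paper's counterexample families):

```python
import random
from collections import deque

def wsk_step(adj, coloring, k, rng=random):
    """One Wang-Swendsen-Kotecky (Kempe-chain) move on a proper k-coloring.

    adj: dict vertex -> list of neighbours; coloring: dict vertex -> color.
    Mutates `coloring` in place; the move always preserves properness.
    """
    v = rng.choice(list(adj))
    c_old = coloring[v]
    c_new = rng.randrange(k)
    if c_new == c_old:
        return
    # BFS the Kempe chain: the component of v in the subgraph induced by
    # vertices coloured c_old or c_new.
    chain, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in chain and coloring[w] in (c_old, c_new):
                chain.add(w)
                queue.append(w)
    # Swap the two colours on the chain.
    for u in chain:
        coloring[u] = c_new if coloring[u] == c_old else c_old
```

Non-ergodicity, as in the paper's last result, means some proper colorings are unreachable from others by any sequence of such swaps.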
55.
Darren Homrighausen Daniel J. McDonald 《Journal of computational and graphical statistics》2016,25(2):344-362
In this article, we analyze approximate methods for undertaking a principal components analysis (PCA) on large datasets. PCA is a classical dimension reduction method that involves the projection of the data onto the subspace spanned by the leading eigenvectors of the covariance matrix. This projection can be used either for exploratory purposes or as an input for further analysis, for example, regression. If the data have billions of entries or more, the computational and storage requirements for saving and manipulating the design matrix in fast memory are prohibitive. Recently, the Nyström and column-sampling methods have appeared in the numerical linear algebra community for the randomized approximation of the singular value decomposition of large matrices. However, their utility for statistical applications remains unclear. We compare these approximations theoretically by bounding the distance between the induced subspaces and the desired, but computationally infeasible, PCA subspace. Additionally, we show empirically, through simulations and a real data example involving a corpus of emails, the trade-off between approximation accuracy and computational complexity.
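A minimal sketch of the column-sampling idea on synthetic data (the dimensions, noise level, and the variance-ratio diagnostic are invented for illustration; the paper's analysis bounds subspace distances rather than this ratio):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, m = 500, 200, 5, 40   # samples, features, target dim, sampled columns

# Synthetic centered data with a k-dimensional signal plus noise.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p))
X += 0.1 * rng.normal(size=(n, p))
X -= X.mean(axis=0)            # center, as PCA requires

# Column sampling: build the approximate PCA basis from m random columns,
# never forming the SVD of the full matrix.
cols = rng.choice(p, size=m, replace=False)
U_c, _, _ = np.linalg.svd(X[:, cols], full_matrices=False)
U_approx = U_c[:, :k]          # approximate k-dim PCA subspace

# Exact (infeasible-at-scale) PCA subspace, for comparison only.
U_full, _, _ = np.linalg.svd(X, full_matrices=False)
U_exact = U_full[:, :k]

# Fraction of total variance captured by each k-dim subspace.
captured = np.linalg.norm(U_approx.T @ X) ** 2 / np.linalg.norm(X) ** 2
exact = np.linalg.norm(U_exact.T @ X) ** 2 / np.linalg.norm(X) ** 2
```

By Eckart–Young, `captured` can never exceed `exact`; the question the paper studies is how close it gets for a given sampling budget m.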
56.
57.
Our randomized preprocessing enables pivoting-free and orthogonalization-free solution of homogeneous linear systems of equations. In the case of Toeplitz inputs, we decrease the estimated solution time from quadratic to nearly linear, and our tests show a dramatic decrease of the CPU time as well. We prove numerical stability of our approach and extend it to solving nonsingular linear systems, inversion and generalized (Moore-Penrose) inversion of general and structured matrices by means of Newton's iteration, approximation of a matrix by a nearby matrix that has a smaller rank or a smaller displacement rank, matrix eigen-solving, and root-finding for polynomial and secular equations and for polynomial systems of equations. Some by-products and extensions of our study can be of independent technical interest, e.g., our extensions of the Sherman-Morrison-Woodbury formula for matrix inversion, our estimates for the condition number of randomized matrix products, and preprocessing via augmentation.
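For reference, the standard Sherman-Morrison-Woodbury identity that the paper extends (shown here in its textbook form, with made-up test matrices):

```python
import numpy as np

def smw_inverse(A_inv, U, V):
    """(A + U V^T)^{-1} via Sherman-Morrison-Woodbury, given A^{-1}.

    Only the small k x k capacitance matrix I + V^T A^{-1} U is inverted,
    instead of redoing the full n x n inversion after a rank-k update.
    """
    k = U.shape[1]
    cap = np.linalg.inv(np.eye(k) + V.T @ A_inv @ U)
    return A_inv - A_inv @ U @ cap @ V.T @ A_inv

rng = np.random.default_rng(1)
n, k = 50, 3
A = np.diag(rng.uniform(1.0, 2.0, size=n))   # easy-to-invert base matrix
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))
B_inv = smw_inverse(np.diag(1.0 / np.diag(A)), U, V)
```

This turns an O(n^3) re-inversion into O(n^2 k) work when k is small, which is the kind of saving the paper's preprocessing exploits.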
58.
A recent trend in local search concerns the exploitation of several different neighborhoods so as to increase the ability of the algorithm to navigate the search space. In this work we investigate a hybridization technique, which we call the Neighborhood Portfolio Approach, that consists in interleaving local search techniques based on various combinations of neighborhoods. In particular, we select the most effective search technique through a systematic analysis of all meaningful combinations built upon a set of basic neighborhoods. The proposed approach is applied to two practical problems belonging to the timetabling family, and systematically tested and compared on real-world instances. The experimental analysis shows that our approach leads to the automatic design of new algorithms that provide better results than basic local search techniques.
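The interleaving scheme can be sketched as a token-ring loop over neighborhoods — a simplified reading of the approach with a made-up bit-vector objective (the paper works on timetabling neighborhoods):

```python
import random

def flip_one(x, rng):
    """Neighborhood 1: flip a single bit."""
    for j in range(len(x)):
        y = list(x)
        y[j] ^= 1
        yield y

def swap_two(x, rng):
    """Neighborhood 2: swap the values at two positions."""
    for a in range(len(x)):
        for b in range(a + 1, len(x)):
            y = list(x)
            y[a], y[b] = y[b], y[a]
            yield y

def portfolio_search(x, objective, neighborhoods, rng=random):
    """First-improvement descent that hands the incumbent from one
    neighborhood to the next when the current one stalls; stops after a
    full round of neighborhoods with no improvement."""
    best = objective(x)
    i = stalled = 0
    while stalled < len(neighborhoods):
        improved = False
        for y in neighborhoods[i](x, rng):
            if objective(y) < best:
                x, best, improved = y, objective(y), True
                break
        if improved:
            stalled = 0          # restart the no-improvement counter
        else:
            stalled += 1
            i = (i + 1) % len(neighborhoods)
    return x, best
```

The combined search only terminates when every neighborhood in the portfolio is locally optimal, so it can escape local optima of any single neighborhood.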
59.
Martin Storath Laurent Demaret Peter Massopust 《Applied and Computational Harmonic Analysis》2017,42(2):199-223
We propose a signal analysis tool based on the sign (or the phase) of complex wavelet coefficients, which we call a signature. The signature is defined as the fine-scale limit of the signs of a signal's complex wavelet coefficients. We show that the signature equals zero at sufficiently regular points of a signal whereas at salient features, such as jumps or cusps, it is non-zero. At such feature points, the orientation of the signature in the complex plane can be interpreted as an indicator of local symmetry and antisymmetry. We establish that the signature rotates in the complex plane under fractional Hilbert transforms. We show that certain random signals, such as white Gaussian noise and Brownian motions, have a vanishing signature. We derive an appropriate discretization and show the applicability to signal analysis.
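A rough numerical illustration of the phenomenon (using a generic complex Morlet-type wavelet, not the paper's specific construction; signal and scales are invented): at a jump the fine-scale coefficient magnitude stays dominant, so its unit-modulus sign is well defined, while at a smooth point the coefficients die out.

```python
import numpy as np

n = 1024
t = np.arange(n)
# Test signal: a jump at n//2 riding on a smooth sinusoidal background.
f = (t >= n // 2).astype(float) + np.sin(2 * np.pi * t / n)

def coeff(f, center, scale):
    """Coefficient against an L2-normalized complex Morlet-type wavelet."""
    x = (np.arange(len(f)) - center) / scale
    psi = np.exp(1j * 5.0 * x - x**2 / 2.0) / np.sqrt(scale)
    return np.vdot(psi, f)

scale = 4                                  # a fine scale
c_jump = coeff(f, n // 2, scale)           # at the jump
c_smooth = coeff(f, n // 4, scale)         # at a smooth point
sign_at_jump = c_jump / abs(c_jump)        # unit complex number: the "sign"
```

The signature of the paper is the limit of `sign_at_jump` as the scale shrinks; at the smooth point the coefficients vanish so fast that the limiting sign is set to zero.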
60.