991.
992.
We develop an approach to tuning penalized regression variable selection methods by calculating the sparsest estimator contained in a confidence region of a specified level. Because confidence intervals and regions are widely familiar, tuning penalized regression methods in this way is intuitive and more accessible to scientists and practitioners. More importantly, our work shows that tuning to a fixed confidence level often performs better than tuning via the common methods based on the Akaike information criterion (AIC), the Bayesian information criterion (BIC), or cross-validation (CV) over a wide range of sample sizes and levels of sparsity. Additionally, we prove that tuning with a sequence of confidence levels converging to one yields asymptotic selection consistency, and that a simple two-stage procedure achieves an oracle property. The confidence-region-based tuning parameter is easily calculated from the output of existing penalized regression software packages. Our work also shows how to map any penalty parameter to a corresponding confidence coefficient. This mapping facilitates comparisons of tuning parameter selection methods such as AIC, BIC, and CV, and reveals that the resulting tuning parameters correspond to confidence levels that are extremely low and can vary greatly across datasets. Supplemental materials for the article are available online.
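One way to make the tuning rule concrete: in a Gaussian linear model with n > p, the classical likelihood-based joint confidence region for the coefficients is the set of β with RSS(β) ≤ RSS_OLS · {1 + p·F(p, n−p; level)/(n−p)}, so the rule can be read as "walk the solution path from sparsest to densest and stop at the first solution inside the region." The Python sketch below implements that reading for a lasso path; the function name, the centring step, and the choice of this particular confidence region are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): pick the sparsest lasso-path
# solution whose residual sum of squares keeps it inside a likelihood-based
# confidence region around the ordinary least-squares fit. Assumes n > p.
import numpy as np
from scipy.stats import f
from sklearn.linear_model import lars_path

def confidence_region_tuned_lasso(X, y, level=0.95):
    # Centre the data so the intercept can be ignored in this sketch.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    n, p = Xc.shape
    beta_ols, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    rss_ols = np.sum((yc - Xc @ beta_ols) ** 2)
    # Classical joint confidence region for beta in a Gaussian linear model.
    threshold = rss_ols * (1.0 + p / (n - p) * f.ppf(level, p, n - p))
    # Lasso solution path, from the empty model to the full model.
    _, _, coefs = lars_path(Xc, yc, method="lasso")
    for beta in coefs.T:
        if np.sum((yc - Xc @ beta) ** 2) <= threshold:
            return beta          # sparsest solution inside the region
    return coefs[:, -1]          # fall back to the densest solution
```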
993.
Mathew W. McLean, Giles Hooker, Ana-Maria Staicu, Fabian Scheipl, David Ruppert. Journal of Computational and Graphical Statistics, 2013, 22(1): 249-269
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t}, where F(·, ·) is an unknown regression function and X(t) is a functional covariate. Rather than assuming an additive model in a finite number of principal components, as in Müller and Yao (2008), our model incorporates the functional predictor directly and can thus be viewed as the natural functional extension of generalized additive models. We estimate F(·, ·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure that each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data, and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position t along a tract in the brain. In one example, the response is disease status (case or control); in a second example, it is the score on a cognitive test. The FGAM is implemented in R in the refund package. Additional supplementary materials are available online.
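Schematically, the FGAM described above can be written as follows, with the unknown surface F expanded in tensor-product B-splines (the notation is illustrative, chosen to match the abstract rather than the article's exact symbols):

```latex
\[
  g\!\left(\mathbb{E}[\,Y_i \mid X_i\,]\right)
    \;=\; \theta_0 \;+\; \int_{\mathcal{T}} F\{X_i(t),\, t\}\,dt,
  \qquad
  F(x, t) \;\approx\; \sum_{j=1}^{K_x}\sum_{k=1}^{K_t} \theta_{jk}\, B^{X}_{j}(x)\, B^{T}_{k}(t),
\]
```

where the coefficients θ_jk are estimated with roughness penalties in the x- and t-directions, and the pointwise quantile transformation mentioned in the abstract replaces X_i(t) by its empirical distribution-function value so that every tensor-product basis function has observed data on its support.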
994.
Liewen Jiang, Huixia Judy Wang, Howard D. Bondell. Journal of Computational and Graphical Statistics, 2013, 22(4): 970-986
Conventional quantile regression analysis typically fits the regression model at different quantiles separately. However, in situations where the quantile coefficients share some common feature, jointly modeling multiple quantiles to accommodate this commonality often leads to more efficient estimation. One example of such a common feature is a predictor that has a constant effect over one region of quantile levels but varying effects elsewhere. To automatically perform estimation and detection of this interquantile commonality, we develop two penalization methods. When the quantile slope coefficients indeed do not change across quantile levels, the proposed methods shrink the slopes toward a constant and thus improve estimation efficiency. We establish the oracle properties of the two proposed penalization methods. Through numerical investigations, we demonstrate that the proposed methods yield estimates whose efficiency is competitive with, or higher than, that of standard quantile regression estimation in finite samples. Supplementary materials for the article are available online.
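One natural form of an interquantile penalty with the behaviour described above is a fused-lasso-type penalty on the slope differences at adjacent quantile levels; the display below is only meant to illustrate the idea, and the article's two proposed penalties may differ in detail:

```latex
\[
  \min_{\beta(\tau_1),\dots,\beta(\tau_K)}
  \sum_{k=1}^{K}\sum_{i=1}^{n}
    \rho_{\tau_k}\!\big(y_i - x_i^{\top}\beta(\tau_k)\big)
  \;+\; \lambda \sum_{k=2}^{K}\sum_{j=1}^{p}
    \big|\beta_j(\tau_k) - \beta_j(\tau_{k-1})\big|,
  \qquad
  \rho_{\tau}(u) = u\,\{\tau - \mathbf{1}(u < 0)\},
\]
```

so that when a slope is truly constant across quantile levels, the penalty shrinks its adjacent differences to zero and the quantile fits borrow strength from one another.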
995.
Anestis Antoniadis, Irène Gijbels, Anneleen Verhasselt. Journal of Computational and Graphical Statistics, 2013, 22(3): 638-661
In this article, we consider nonparametric smoothing and variable selection in varying-coefficient models. Varying-coefficient models are commonly used for analyzing the time-dependent effects of covariates on responses measured repeatedly (such as longitudinal data). We present the P-spline estimator in this context and show its estimation consistency for a diverging number of knots (or B-spline basis functions). Combining P-splines with the nonnegative garrote, a variable selection method, leads to good estimation and variable selection. Moreover, we consider the additive P-spline selection operator (APSO), which combines a P-spline penalty with a regularization penalty, and show its estimation and variable selection consistency. The methods are illustrated with a simulation study and real-data examples. The proofs of the theoretical results, as well as one of the real-data examples, are provided in the online supplementary materials.
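A schematic version of the penalized criterion implied by the abstract is shown below: each coefficient function is expanded in B-splines, a P-spline difference penalty controls roughness, and an additional sparsity-inducing penalty (the garrote or APSO component) can remove a coefficient function entirely. The symbols are illustrative; the article's exact penalties may differ.

```latex
\[
  \min_{\{b_j\}}\;
  \sum_{i=1}^{n}\Big(Y_i - \sum_{j=1}^{p}\Big[\sum_{k} b_{jk}\, B_k(t_i)\Big] X_{ij}\Big)^{2}
  \;+\; \lambda_1 \sum_{j=1}^{p}\big\|D_d\, b_j\big\|_2^{2}
  \;+\; \lambda_2 \sum_{j=1}^{p} P(b_j),
\]
```

where D_d is a d-th order difference matrix acting on the spline coefficients b_j (the P-spline roughness penalty) and P(·) is a selection penalty that can shrink an entire coefficient vector b_j to zero.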
996.
Genevera I. Allen. Journal of Computational and Graphical Statistics, 2013, 22(2): 284-299
Selecting important features in nonlinear kernel spaces is a difficult challenge in both classification and regression problems. This article proposes to achieve feature selection by optimizing a simple criterion: a feature-regularized loss function. Features within the kernel are weighted, and a lasso penalty is placed on these weights to encourage sparsity. The feature-regularized loss function is minimized by estimating the weights jointly with the coefficients of the original classification or regression problem, thereby automatically selecting a subset of important features. The algorithm, KerNel Iterative Feature Extraction (KNIFE), is applicable to a wide variety of kernels and high-dimensional kernel problems. In addition, a modification of KNIFE gives a computationally attractive method for graphically depicting nonlinear relationships between features by estimating their feature weights over a range of regularization parameters. We demonstrate the utility of KNIFE in selecting features through simulations and examples for both kernel regression and support vector machines. Feature path realizations also give graphical representations of important features and of the nonlinear relationships among variables. Supplementary materials with computer code and an appendix on convergence analysis are available online.
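To make the feature-regularized loss concrete, a feature-weighted Gaussian kernel is one standard instance: nonnegative weights enter the kernel, and an ℓ1 penalty on the weights plays the role of the lasso penalty described above. This is a hedged schematic, not the article's exact formulation:

```latex
\[
  K_{w}(x, x') \;=\; \exp\!\Big(-\sum_{d=1}^{p} w_d\,(x_d - x'_d)^2\Big),
  \qquad
  \min_{\alpha,\; w \ge 0}\;
  \sum_{i=1}^{n} L\!\Big(y_i,\; \sum_{j=1}^{n} \alpha_j\, K_{w}(x_i, x_j)\Big)
  \;+\; \lambda \sum_{d=1}^{p} w_d ,
\]
```

so that weights driven to zero remove the corresponding features from the kernel; an iterative algorithm of this kind can alternate between updating the coefficients α and the feature weights w.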
997.
Journal of Computational and Graphical Statistics, 2013, 22(3): 690-713
This article proposes data-driven algorithms for fitting SEMIFAR models. The algorithms combine data-driven estimation of the nonparametric trend with maximum likelihood estimation of the parameters. Convergence and asymptotic properties of the proposed algorithms are investigated. A large simulation study illustrates the practical performance of the methods.
998.
Journal of Computational and Graphical Statistics, 2013, 22(1): 186-200
We introduce fast and robust algorithms for lower-rank approximation of given matrices, based on robust alternating regression. Alternating least squares regression, also called criss-cross regression, has been used for lower-rank approximation of matrices, but it lacks robustness against outliers in these matrices. We use robust regression estimators and address some of the complications arising from this approach. We find it helpful to use high-breakdown estimators in the initial iterations, followed by M-estimators with monotone score functions in later iterations toward convergence. In addition to robustness, computational speed is an important consideration in the development of our algorithm, because alternating robust regression can be computationally intensive for large matrices. Based on a mix of least trimmed squares (LTS) and Huber's M-estimators, we demonstrate that fast and robust lower-rank approximations are possible for modestly large matrices.
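A minimal Python sketch of the alternating robust regression idea, with every regression step solved by a Huber M-estimator from statsmodels; the article's actual scheme starts with high-breakdown LTS fits before switching to monotone M-estimators, a refinement omitted here. Function names and the toy data are illustrative.

```python
# Sketch: rank-r approximation X ~ U @ V.T by alternating robust regressions,
# each solved with a Huber M-estimator (a simplification of the article's
# LTS-then-M scheme).
import numpy as np
import statsmodels.api as sm

def robust_low_rank(X, rank=2, n_iter=10, seed=0):
    n, m = X.shape
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((m, rank))
    U = np.zeros((n, rank))
    huber = sm.robust.norms.HuberT()
    for _ in range(n_iter):
        # Regress each row of X on V to update the corresponding row of U.
        for i in range(n):
            U[i] = sm.RLM(X[i, :], V, M=huber).fit().params
        # Regress each column of X on U to update the corresponding row of V.
        for j in range(m):
            V[j] = sm.RLM(X[:, j], U, M=huber).fit().params
    return U, V

# Toy example: a rank-2 matrix contaminated with a gross outlier.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))
X[3, 5] += 50.0
U, V = robust_low_rank(X, rank=2)
print("approximation error:", np.linalg.norm(X - U @ V.T))
```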
999.
Journal of Computational and Graphical Statistics, 2013, 22(1): 225-242
Sliced inverse regression (SIR) is an important method for reducing the dimensionality of input variables. Its goal is to estimate the effective dimension reduction directions. In classification settings, SIR is closely related to Fisher discriminant analysis. Motivated by reproducing kernel theory, we propose a notion of nonlinear effective dimension reduction and develop a nonlinear extension of SIR called kernel SIR (KSIR). Both SIR and KSIR are based on principal component analysis. Alternatively, based on principal coordinate analysis, we propose the dual versions of SIR and KSIR, which we refer to as sliced coordinate analysis (SCA) and kernel sliced coordinate analysis (KSCA), respectively; in the classification setting, we also call them discriminant coordinate analysis and kernel discriminant coordinate analysis. The computational complexities of SIR and KSIR depend on the dimensionality of the input vector and the number of input vectors, respectively, whereas those of SCA and KSCA both depend on the number of slices in the output. Thus, SCA and KSCA are very efficient dimension reduction methods.
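For concreteness, here is a compact numpy sketch of classical SIR, the building block that KSIR, SCA, and KSCA extend: standardize the predictors, slice the response, and take the leading eigenvectors of the between-slice covariance of the slice means. It is included only to make the construction explicit and is not taken from the article.

```python
# Minimal sketch of classical sliced inverse regression (SIR).
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Whitening matrix Sigma^{-1/2} via the eigendecomposition of Sigma.
    w, V = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice the response into (roughly) equal-count slices.
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
    # Weighted covariance of the slice means of the standardized predictors.
    M = np.zeros((p, p))
    for h in range(n_slices):
        Zh = Z[labels == h]
        if len(Zh) == 0:
            continue
        mh = Zh.mean(axis=0)
        M += (len(Zh) / n) * np.outer(mh, mh)
    # Leading eigenvectors of M, mapped back to the original coordinates.
    vals, vecs = np.linalg.eigh(M)
    eta = vecs[:, ::-1][:, :n_dirs]
    return Sigma_inv_sqrt @ eta   # columns span the estimated e.d.r. subspace
```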
1000.
Journal of Computational and Graphical Statistics, 2013, 22(2): 421-444
Multivariate analysis of variance (MANOVA) extends the ideas and methods of univariate ANOVA in simple and straightforward ways. But the familiar graphical methods typically used for univariate ANOVA are inadequate for showing how the measures in a multivariate response vary with each other and how their means vary with explanatory factors. Similarly, the graphical methods commonly used in multiple regression are not widely available or used in multivariate multiple regression (MMRA). We describe a variety of graphical methods for multiple-response (MANOVA and MMRA) data aimed at understanding what is being tested in a multivariate test and how factor/predictor effects are expressed across multiple response measures. In particular, we describe and illustrate: (a) data ellipses and biplots for multivariate data; (b) HE plots, showing the hypothesis and error covariance matrices for a given pair of responses and a given effect; (c) HE plot matrices, showing all pairwise HE plots; and (d) reduced-rank analogs of HE plots, showing all observations, group means, and their relations to the response variables. All of these methods are implemented in a collection of easily used SAS macro programs.
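The authors' implementation is the collection of SAS macros mentioned above; purely as an illustration of what an HE plot displays, the numpy sketch below computes the hypothesis (H) and error (E) sum-of-squares-and-cross-products matrices for a one-way MANOVA. Their relative size, summarized by the eigenvalues of E⁻¹H, drives both the multivariate test statistics and the pair of ellipses drawn in an HE plot.

```python
# Sketch: H and E SSCP matrices for a one-way MANOVA and the eigenvalues of
# E^{-1} H that HE plots visualize as hypothesis and error ellipses.
import numpy as np

def manova_h_e(Y, groups):
    """Y: (n, q) response matrix; groups: length-n array of group labels."""
    grand = Y.mean(axis=0)
    q = Y.shape[1]
    H = np.zeros((q, q))
    E = np.zeros((q, q))
    for g in np.unique(groups):
        Yg = Y[groups == g]
        d = Yg.mean(axis=0) - grand
        H += len(Yg) * np.outer(d, d)                      # between-group SSCP
        R = Yg - Yg.mean(axis=0)
        E += R.T @ R                                       # within-group SSCP
    return H, E

# Toy example with three groups and two response measures.
rng = np.random.default_rng(0)
groups = np.repeat([0, 1, 2], 30)
Y = rng.standard_normal((90, 2)) + np.column_stack([groups, 0.5 * groups])
H, E = manova_h_e(Y, groups)
print("eigenvalues of E^{-1}H:", np.linalg.eigvals(np.linalg.solve(E, H)))
```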