111.
The connected-(1, 2)-or-(2, 1)-out-of-(m, n):F lattice system is a special case of the connected-X-out-of-(m, n):F lattice system defined by Boehme et al. [Boehme, T.K., Kossow, A., Preuss, W., 1992. A generalization of consecutive-k-out-of-n:F system. IEEE Transactions on Reliability 41, 451–457]. The system fails if and only if there is a connected subset of failed components that contains a (1, 2)-matrix (that is, one row and two columns) or a (2, 1)-matrix (that is, two rows and one column) of failed components. The system applies to two-dimensional network problems with adjacency constraints and to various other systems, for example supervision systems.
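As a quick illustration of the stated failure condition (a sketch, not the authors' code), the following Python function checks whether an m-by-n lattice of component states contains a failed (1, 2) or (2, 1) block, i.e., two horizontally or vertically adjacent failed components. The grid encoding (1 = failed) and the function name are assumptions made only for this example.

def system_failed(grid):
    """Return True if the lattice contains a failed (1,2) or (2,1) block.

    grid: list of lists, grid[i][j] == 1 means component (i, j) has failed.
    """
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            if grid[i][j] != 1:
                continue
            # (1,2)-matrix: a failed component with a failed right neighbour
            if j + 1 < n and grid[i][j + 1] == 1:
                return True
            # (2,1)-matrix: a failed component with a failed lower neighbour
            if i + 1 < m and grid[i + 1][j] == 1:
                return True
    return False

# One isolated failure does not fail the system; two adjacent failures do.
print(system_failed([[1, 0], [0, 0]]))  # False
print(system_failed([[1, 0], [1, 0]]))  # True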
112.
113.
In this paper, we analyze the recursive merge sort algorithm and quantify the deviation of the output from the correct sorted order if the outcomes of one or more comparisons are in error. The disorder in the output sequence is quantified by four measures: the number of runs, the smallest number of integers that need to be removed to leave the sequence sorted, the number of inversions, and the smallest number of successive exchanges needed to sort the sequence. For input sequences whose length is large compared to the number of errors, a comparison is made between the robustness to errors of bubble sort, straight insertion sort, and recursive merge sort.
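To make the disorder measures concrete, here is a small Python sketch (an illustration, not the authors' code) computing two of them, the number of runs and the number of inversions, for a given sequence; the function names are chosen for this example only.

def num_runs(seq):
    """Number of maximal non-decreasing runs in seq (1 for a sorted sequence)."""
    return 1 + sum(1 for a, b in zip(seq, seq[1:]) if a > b)

def num_inversions(seq):
    """Number of pairs (i, j) with i < j and seq[i] > seq[j] (O(n^2), for clarity)."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

# Note: the "fewest removals" measure equals len(seq) minus the length of a
# longest non-decreasing subsequence; it is not implemented here.
example = [1, 2, 5, 3, 4, 6]
print(num_runs(example), num_inversions(example))  # 2 runs, 2 inversions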
114.
The Langevin equation, perhaps the most elemental stochastic differential equation in the physical sciences, describes the dynamics of a random motion driven simultaneously by a deterministic potential field and by a stochastic white noise. The Langevin equation is, in effect, a mechanism that maps the stochastic white-noise input to a stochastic output: a stationary steady-state distribution in the case of potential wells, and a transient extremum distribution in the case of potential gradients. In this paper we explore the degree of randomness of the Langevin equation’s stochastic output, and classify it à la Mandelbrot into five states of randomness ranging from “infra-mild” to “ultra-wild”. We establish closed-form and highly implementable analytic results that determine the randomness of the Langevin equation’s stochastic output based on the shape of the Langevin equation’s potential field.
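For readers who want to see the input-to-output mapping numerically, the sketch below (an illustration under assumed parameters, not part of the paper) integrates an overdamped Langevin equation dX = -U'(X) dt + sqrt(2D) dW for a quadratic potential well with the Euler-Maruyama scheme, so the long-run samples approximate the stationary steady-state distribution; the noise amplitude D, step size, and potential are assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)

def langevin_samples(grad_U, D=1.0, dt=1e-3, n_steps=200_000, x0=0.0):
    """Euler-Maruyama integration of dX = -U'(X) dt + sqrt(2 D) dW."""
    x = x0
    xs = np.empty(n_steps)
    noise = rng.normal(scale=np.sqrt(2.0 * D * dt), size=n_steps)
    for t in range(n_steps):
        x += -grad_U(x) * dt + noise[t]
        xs[t] = x
    return xs

# Quadratic potential well U(x) = x^2 / 2, so U'(x) = x; the stationary
# density is then Gaussian with variance D (here 1).
samples = langevin_samples(grad_U=lambda x: x)
print(samples[50_000:].mean(), samples[50_000:].var())  # approx. 0 and 1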
115.
For stochastic systems described by controlled autoregressive autoregressive moving average (CARARMA) models, a new two-stage least-squares-based iterative algorithm is proposed for identifying the system model parameters and the noise model parameters. The basic idea, based on interactive estimation theory, is to estimate the parameter vectors of the system model and the noise model in two interacting stages. Simulation results indicate that the proposed algorithm is effective.
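The following Python sketch is not the authors' algorithm, only a minimal illustration of the alternating two-stage idea on a simpler model: a linear regression y = X·theta + w with AR(1) noise w(t) = a·w(t-1) + v(t). One stage estimates the system parameters theta by least squares, the other estimates the noise parameter a from the residuals, and quasi-differencing feeds the noise estimate back into the next system-parameter estimate; the model, names, and parameter values are assumptions for the example.

import numpy as np

def two_stage_iterative_ls(X, y, n_iter=10):
    """Alternate between system-parameter and noise-parameter least squares.

    Illustrative model: y = X @ theta + w,  w[t] = a * w[t-1] + v[t].
    """
    theta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)   # stage 1: ignore the noise model
    a = 0.0
    for _ in range(n_iter):
        w = y - X @ theta                                  # current residuals
        a = float(w[:-1] @ w[1:] / (w[:-1] @ w[:-1]))      # stage 2: AR(1) noise parameter
        # quasi-difference with the current noise estimate, then refit theta
        y_f = y[1:] - a * y[:-1]
        X_f = X[1:] - a * X[:-1]
        theta, _, _, _ = np.linalg.lstsq(X_f, y_f, rcond=None)
    return theta, a

# Simulated example with theta = [2.0, -1.0] and a = 0.6
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
w = np.zeros(500)
for t in range(1, 500):
    w[t] = 0.6 * w[t - 1] + rng.normal(scale=0.5)
y = X @ [2.0, -1.0] + w
print(two_stage_iterative_ls(X, y))  # estimates close to ([2, -1], 0.6)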
116.
Stability is a major requirement to draw reliable conclusions when interpreting results from supervised statistical learning. In this article, we present a general framework for assessing and comparing the stability of results, which can be used in real-world statistical learning applications as well as in simulation and benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practical data analysis would be to compare the stability of results generated by different candidate algorithms for a dataset at hand or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package. Supplementary material for this article is available online.
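As one deliberately simplified way to probe this kind of stability (not the framework of the article), the sketch below refits two learners on bootstrap resamples of the same data and compares how much their predictions on a fixed test grid vary across resamples; the data-generating process, learner settings, and function names are assumptions for the example.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

def prediction_instability(make_model, X, y, X_test, n_boot=50):
    """Mean standard deviation of test-set predictions across bootstrap refits."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))        # bootstrap resample
        model = make_model().fit(X[idx], y[idx])
        preds.append(model.predict(X_test))
    return float(np.mean(np.std(preds, axis=0)))

# Linear data-generating process: the linear model is typically the more stable
# fit here; changing the data-generating process changes the comparison.
X = rng.uniform(-2, 2, size=(300, 1))
y = 1.5 * X[:, 0] + rng.normal(scale=0.3, size=300)
X_test = np.linspace(-2, 2, 100).reshape(-1, 1)

print(prediction_instability(LinearRegression, X, y, X_test))
print(prediction_instability(lambda: DecisionTreeRegressor(max_depth=4), X, y, X_test))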
117.
Tree-structured models have been widely used because they function as interpretable prediction models that offer easy data visualization. A number of tree algorithms have been developed for univariate response data and can be extended to analyze multivariate response data. We propose a tree algorithm that combines the merits of a tree-based model and a mixed-effects model for longitudinal data. We alleviate variable selection bias through residual analysis, which addresses problems that exhaustive search approaches suffer from, such as undue preference for split variables with more possible splits, expensive computational cost, and end-cut preference. Most importantly, our tree algorithm discovers trends over time on each of the subspaces obtained from recursive partitioning, whereas other tree algorithms only predict responses. We investigate the performance of our algorithm with both simulation and real data studies. We also develop an R package, melt, that can be used conveniently and freely. Additional results are provided as online supplementary material.
118.
Treed Regression     
Given a data set consisting of n observations on p independent variables and a single dependent variable, treed regression creates a binary tree with a simple linear regression function at each of the leaves. Each internal node of the tree consists of an inequality condition on one of the independent variables. The tree is generated from the training data by a recursive partitioning algorithm. Treed regression models are more parsimonious than CART models because there are fewer splits. Additionally, monotonicity in some or all of the variables can be imposed.
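A minimal sketch of the split search implied here (an illustration, not the article's algorithm): for one candidate split variable, try each threshold, fit a simple linear regression on each side, and keep the threshold that minimizes the total residual sum of squares. A full treed-regression fit would apply this search recursively and over all independent variables; the variable names, minimum leaf size, and simulated data are assumptions for the example.

import numpy as np

def sse_of_fit(x, y):
    """Residual sum of squares of a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, deg=1)
    return float(np.sum((y - (slope * x + intercept)) ** 2))

def best_split(x_split, x_reg, y, min_leaf=10):
    """Best threshold on x_split when each leaf gets a simple regression of y on x_reg."""
    best = (np.inf, None)
    for c in np.unique(x_split)[min_leaf:-min_leaf]:
        left = x_split <= c
        sse = sse_of_fit(x_reg[left], y[left]) + sse_of_fit(x_reg[~left], y[~left])
        if sse < best[0]:
            best = (sse, float(c))
    return best

# Data with a genuine change at x_split = 0: slope 1 on the left, slope 3 on the right.
rng = np.random.default_rng(3)
x_split = rng.uniform(-1, 1, size=400)
x_reg = rng.uniform(-1, 1, size=400)
y = np.where(x_split <= 0, 1.0 * x_reg, 3.0 * x_reg) + rng.normal(scale=0.2, size=400)
print(best_split(x_split, x_reg, y))  # recovered threshold close to 0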
119.
We consider the implications of streaming data for data analysis and data mining. Streaming data are becoming widely available from a variety of sources. In our case we consider the implications arising from Internet traffic data. By implication, streaming data are unlikely to be time homogeneous, so standard statistical and data mining procedures do not necessarily apply. Because it is essentially impossible to store streaming data, we consider recursive algorithms, algorithms that are adaptive and discount the past, and algorithms that create finite pseudo-samples. We also suggest some evolutionary graphics procedures that are suitable for streaming data. We begin with a discussion of Internet traffic in order to give the reader some sense of the time scale, data scale, and visual resolution needed for such problems.
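As a small example of the recursive, past-discounting idea (a generic sketch, not taken from the paper), the following Python class maintains an exponentially weighted mean and variance of a stream in constant memory, so old observations are gradually forgotten as the distribution drifts; the class name, forgetting factor, and simulated regime change are assumptions for the example.

import random

class EWStats:
    """Exponentially weighted running mean/variance for a data stream.

    forget: weight kept by history at each step (closer to 1 = longer memory).
    """
    def __init__(self, forget=0.99):
        self.forget = forget
        self.mean = 0.0
        self.var = 0.0
        self.initialized = False

    def update(self, x):
        if not self.initialized:
            self.mean, self.initialized = x, True
            return
        delta = x - self.mean
        self.mean += (1.0 - self.forget) * delta
        self.var = self.forget * (self.var + (1.0 - self.forget) * delta * delta)

# Feed observations one at a time; nothing is stored except the summaries.
stream = (random.gauss(0, 1) if t < 5000 else random.gauss(3, 1) for t in range(10000))
stats = EWStats(forget=0.99)
for x in stream:
    stats.update(x)
print(round(stats.mean, 2), round(stats.var, 2))  # tracks the later regime: about 3 and 1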
120.
Additive models and tree-based regression models are two main classes of statistical models used to predict the scores on a continuous response variable. It is known that additive models become very complex in the presence of higher-order interaction effects, whereas some tree-based models, such as CART, have problems capturing linear main effects of continuous predictors. To overcome these drawbacks, the regression trunk model has been proposed: a multiple regression model with main effects and a parsimonious number of higher-order interaction effects. The interaction effects can be represented by a small tree: a regression trunk. This article proposes a new algorithm, the Simultaneous Threshold Interaction Modeling Algorithm (STIMA), to estimate a regression trunk model that is more general and more efficient than the initial one (RTA) and is implemented in the R-package stima. Results from a simulation study show that the performance of STIMA is satisfactory for sample sizes of 200 or higher. For sample sizes of 300 or higher, the 0.50 SE rule is the best pruning rule for a regression trunk in terms of power and Type I error. For sample sizes of 200, the 0.80 SE rule is recommended. Results from a comparative study of eight regression methods applied to ten benchmark datasets suggest that STIMA and GUIDE are the best performers in terms of cross-validated prediction error. STIMA appeared to be the best method for datasets containing many categorical variables. The characteristics of a regression trunk model are illustrated using the Boston house price dataset.

Supplemental materials for this article, including the R-package stima, are available online.
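To make the notion of a regression trunk concrete, the sketch below (a toy illustration under assumed data, not STIMA itself) augments a main-effects linear regression with a single threshold interaction term defined by a one-split trunk, i.e., an indicator for a region obtained by thresholding the predictors; the data, thresholds, and function names are assumptions for the example.

import numpy as np

rng = np.random.default_rng(4)

# Toy data: main effects of x1 and x2 plus an extra effect when x1 > 0 and x2 > 0.
n = 500
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = 1.0 * x1 + 0.5 * x2 + 2.0 * ((x1 > 0) & (x2 > 0)) + rng.normal(scale=0.3, size=n)

def fit_trunk_model(x1, x2, y, c1, c2):
    """Least squares fit of y ~ x1 + x2 + I(x1 > c1 and x2 > c2)."""
    region = ((x1 > c1) & (x2 > c2)).astype(float)   # trunk-defined interaction term
    design = np.column_stack([np.ones_like(y), x1, x2, region])
    coef, rss, _, _ = np.linalg.lstsq(design, y, rcond=None)
    return coef, float(rss[0])

# A real trunk algorithm searches the thresholds; here we simply compare two choices.
print(fit_trunk_model(x1, x2, y, 0.0, 0.0)[1])   # small RSS at the true thresholds
print(fit_trunk_model(x1, x2, y, 0.5, -0.5)[1])  # larger RSS at wrong thresholds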