141.
In a total least squares (TLS) problem, we estimate an optimal set of model parameters X, so that (A+ΔA)X = B+ΔB, where A is the model matrix, B is the observed data, and ΔA and ΔB are the corresponding corrections. When B is a single vector, Rao (1997) and Paige and Strakoš (2002) suggested formulating standard least squares problems, for which ΔA=0, and data least squares problems, for which ΔB=0, as weighted and scaled TLS problems. In this work we define an implicitly-weighted TLS formulation (ITLS) that reparameterizes these formulations to ease computation. We derive asymptotic properties of the estimates as the number of rows in the problem approaches infinity, handling the rank-deficient case as well. We discuss the role of the ratio between the variances of the errors in A and B in choosing an appropriate ITLS parameter. We also propose methods for computing the family of solutions efficiently and for choosing the appropriate solution when the ratio of variances is unknown. We provide experimental results on the usefulness of the ITLS family of solutions.
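As a point of reference for the single-vector case, the plain (unweighted) TLS estimate can be computed from the SVD of the augmented matrix [A, b]; roughly speaking, scaling the b column before the SVD produces the weighted family that interpolates between the ordinary and data least squares limits. A minimal NumPy sketch of classical TLS, not the paper's ITLS algorithm:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the augmented matrix [A, b].

    The right singular vector of the smallest singular value spans the
    (approximate) null space of [A, b]; normalizing its last entry to -1
    gives the TLS parameter estimate.
    """
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # singular values are descending, so last row
    if abs(v[n]) < 1e-12:
        raise np.linalg.LinAlgError("nongeneric TLS problem: last entry ~ 0")
    return -v[:n] / v[n]

# Consistent data (A x = b exactly), so TLS recovers x.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
print(tls(A, b))                    # ~ [1. 2.]
```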
142.
An issue of considerable importance involves the allocation of fixed costs or common revenue among a set of competing entities in an equitable way. Based on data envelopment analysis (DEA) theory, this paper proposes new methods for (i) allocating fixed costs to decision making units (DMUs) and (ii) distributing common revenue among DMUs, in such a way that the relative efficiencies of all DMUs remain unchanged and the allocations reflect the relative efficiencies and input-output scales of the individual DMUs. To illustrate our methods, numerical results for an example are presented.
143.
An iterative procedure is described for generating patterns of dominant Schur vectors of the system dynamics, and their role in estimating the filter gain is studied. These patterns are produced by several integrations of the model from a set of perturbations. This approach is motivated by a number of interesting results on the stability of filters whose gain is approximated in a subspace of dominant Schur vectors. A simple method for the filter design is presented, aimed at overcoming the most serious drawback of advanced filtering algorithms for high-dimensional systems: the very high computational cost of evaluating the filter gain. The resulting filter is compared with existing ones, showing its relevance from a practical point of view. To demonstrate its efficiency, the new filter is tested in various experiments, including the much-studied problem of estimating the solution of the Lorenz system as well as that of assimilating sea surface height observations in a high-dimensional oceanic model. It is shown that significant gains in efficiency can be obtained with this filter and that it is very promising for solving realistic assimilation problems in meteorology and oceanography.
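For intuition, the dominant Schur vectors of a dynamics matrix can be extracted directly with an ordered real Schur decomposition; `sort='ouc'` moves the eigenvalues outside the unit circle (the growing modes of a discrete-time system) to the top-left block. A small SciPy sketch of this subspace, not the paper's iterative, integration-based procedure:

```python
import numpy as np
from scipy.linalg import schur

# Toy discrete dynamics: one growing mode (|lambda| > 1), two decaying ones.
A = np.array([[1.5, 1.0, 0.3],
              [0.0, 0.5, 0.2],
              [0.0, 0.0, 0.2]])

# Ordered real Schur form: eigenvalues outside the unit circle come first.
T, Z, sdim = schur(A, sort='ouc')

Zd = Z[:, :sdim]        # orthonormal basis of the dominant invariant subspace
print(sdim)             # number of dominant modes (1 here)
# Invariance check: A Zd = Zd T11 for the leading sdim-by-sdim block of T.
print(np.linalg.norm(A @ Zd - Zd @ T[:sdim, :sdim]))
```

A filter gain approximated in span(Zd) only has to handle `sdim` directions instead of the full state dimension, which is the source of the computational savings discussed above.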
144.
Analyzing interval-censored data is difficult due to its complex data structure containing left-, interval-, and right-censored observations. An easy-to-implement Bayesian approach is proposed under the proportional odds (PO) model for analyzing such data. The nondecreasing baseline log odds function is modeled with a linear combination of monotone splines. Two efficient Gibbs samplers are developed based on two different data augmentations that use the relationship between the PO model and the logistic distribution. In the first data augmentation, the logistic distribution is obtained as a scale mixture of normals with the scale parameter related to the Kolmogorov-Smirnov distribution. In the second data augmentation, the logistic distribution is approximated by a Student's t distribution up to a scale constant. The proposed methods are evaluated by simulation studies and illustrated with an application to an HIV data set.
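The second augmentation rests on the classical fact that a logistic distribution is closely approximated by a scaled Student's t. One common choice in the data-augmentation literature (assumed here for illustration, not taken from this paper) is ν ≈ 7.3 degrees of freedom with scale σ² = π²(ν−2)/(3ν), which matches the logistic variance π²/3. A quick numerical check of how close the two CDFs are:

```python
import numpy as np
from scipy.stats import logistic, t

nu = 7.3                                            # degrees of freedom (common choice)
sigma = np.sqrt(np.pi**2 * (nu - 2) / (3 * nu))     # matches the logistic variance pi^2/3

x = np.linspace(-10, 10, 2001)
gap = np.max(np.abs(logistic.cdf(x) - t.cdf(x, df=nu, scale=sigma)))
print(gap)    # maximum CDF discrepancy over the grid -- small everywhere
```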
145.
In data envelopment analysis (DEA), the cross-efficiency evaluation method introduces a cross-efficiency matrix in which the units are self- and peer-evaluated. A problem that can reduce the usefulness of the cross-efficiency evaluation method is that the cross-efficiency scores may not be unique due to the presence of alternate optima, so it is recommended that secondary goals be introduced into the cross-efficiency evaluation. In this paper we propose the symmetric weight assignment technique (SWAT), which does not affect feasibility and rewards decision making units (DMUs) that make a symmetric selection of weights. A numerical example is solved by the proposed method and its solution is compared with those of alternative approaches.
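To fix ideas, the cross-efficiency matrix itself can be computed from the input-oriented CCR multiplier model: DMU k's optimal weights are applied to every DMU j. A minimal sketch with `scipy.optimize.linprog` of the textbook CCR formulation, without any secondary goal such as SWAT:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, o):
    """Input-oriented CCR multiplier weights (v, u) for DMU o.

    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
    Rows of X (inputs) and Y (outputs) index the DMUs.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[o]])               # linprog minimizes
    A_ub = np.hstack([-X, Y])                              # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]    # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.x[:m], res.x[m:]

def cross_efficiency(X, Y):
    """E[k, j]: efficiency of DMU j evaluated with DMU k's optimal weights."""
    n = X.shape[0]
    E = np.empty((n, n))
    for k in range(n):
        v, u = ccr_weights(X, Y, k)
        E[k] = (Y @ u) / (X @ v)
    return E

# One input, one output; DMUs 0 and 1 are efficient, DMU 2 is not.
X = np.array([[1.0], [2.0], [4.0]])
Y = np.array([[1.0], [2.0], [2.0]])
E = cross_efficiency(X, Y)
print(np.diag(E))          # self-evaluated CCR scores: ~ [1.0, 1.0, 0.5]
```

With multiple inputs and outputs the optimal weights for an efficient DMU are generally not unique, which is exactly the non-uniqueness of E that motivates secondary goals like SWAT.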
146.
Many data clustering techniques are available for extracting meaningful information from real-world data, but both the quality of the resulting clusters and the running time of the clustering algorithm matter greatly in practice. This work argues that fuzzy clustering is well suited to finding meaningful information and appropriate groups in real-world datasets. In fuzzy clustering, the objective function governs both the resulting clusters and the computational steps of the algorithm, which typically include calculating cluster prototypes, computing degrees of membership for objects, and updating and stopping the algorithm; research on fuzzy clustering therefore aims to minimize an objective function composed of these parts. This paper introduces new, effective fuzzy objective functions with effective fuzzy parameters that help reduce running time and yield strongly meaningful clusters in real-world datasets. It further proposes a new way of predicting memberships and centres by minimizing the proposed objective functions. Experimental results for the proposed algorithms are given to illustrate the effectiveness of the methods.
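For reference, the classical fuzzy c-means objective J(U, C) = Σᵢⱼ uᵢⱼᵐ ‖xᵢ − cⱼ‖² and its alternating prototype/membership updates, the baseline that modified fuzzy objectives start from (this is not the paper's new objective), can be sketched as:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Classical FCM: alternate the two updates that each decrease
    J(U, C) = sum_ij u_ij^m ||x_i - c_j||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                   # memberships sum to 1
    J = []
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # prototype update
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)                         # guard points on a centre
        J.append((Um * d2).sum())
        inv = d2 ** (-1.0 / (m - 1.0))                  # membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U, J

# Two well-separated 1-D clusters around 0 and 10.
X = np.array([[0.0], [0.1], [-0.1], [9.9], [10.0], [10.1]])
centers, U, J = fuzzy_c_means(X, c=2)
print(np.sort(centers.ravel()))    # near the two cluster means, ~0 and ~10
print(J[0] >= J[-1])               # the objective is non-increasing
```

Because every recorded value of J follows one full round of updates, the sequence J is monotonically non-increasing, which is the property a modified objective function has to preserve.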
147.
Conventional DEA models were introduced to deal with non-negative data. In the real world, however, outputs and/or inputs can sometimes take negative values. Several approaches have been presented in the DEA literature for evaluating the performance of units that operate with negative data. In this paper we first give a brief review of these works and then present a new additive-based approach in this framework. The proposed model is designed to provide, for each observed unit, a target with non-negative values for the negative components, something other methods fail to do. An empirical application in banking is then used to show the applicability of the proposed method and to compare it with the other approaches in the literature.
148.
This paper is concerned with solving the Cauchy problem for an elliptic equation by minimizing an energy-like error functional and by taking into account noisy Cauchy data. After giving some fundamental results, numerical convergence analysis of the energy-like minimization method is carried out and leads to adapted stopping criteria for the minimization process depending on the noise rate. Numerical examples involving smooth and singular data are presented.
149.
An issue that has received little attention in the Data Envelopment Analysis literature is the decomposition of profit inefficiency by means of measures that account for all sources of technical inefficiency. In this paper we introduce a new way to measure and decompose profit inefficiency through weighted additive models. All our results are derived from a new Fenchel-Mahler inequality obtained using duality theory.
150.
Data envelopment analysis (DEA) is a useful tool for measuring the efficiency of firms and organizations. Kao and Hwang (2008) take into account the series relationship of the two sub-processes in a two-stage production process, in which the overall efficiency of the whole process is the product of the efficiencies of the two sub-processes. To find the largest efficiency of one sub-process while maintaining the maximum overall efficiency of the whole process, Kao and Hwang (2008) propose a solution procedure. That procedure, however, requires knowing the overall efficiency of the whole process before the sub-process efficiencies can be calculated. In this note, we propose a method that finds the sub-process and overall efficiencies simultaneously.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号