191.
Graphics play a crucial role in statistical analysis and data mining. Quantifying the structure in data that is visible in plots, and how people read that structure from plots, is an ongoing challenge. The lineup protocol provides a formal framework for data plots, making inference possible: the data plot is treated like a test statistic, and the lineup acts like a comparison with the sampling distribution under the null. This article describes metrics for quantifying structure in data plots and evaluates them against the choices that human readers made during several large Amazon Mechanical Turk studies using lineups. Metrics specific to the plot type tended to match subject choices better than generic metrics. The process we followed to evaluate metrics will be useful for the general development of numerical measures of structure in plots, and for choosing blocks of pictures in future lineup experiments. Supplementary materials for this article are available online.
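As a toy illustration of the lineup idea (not the metrics studied in the article), the sketch below treats a plot-level statistic as a test statistic: the metric is computed on the real data and on 19 permutation nulls, and the data plot is "detected" if its metric exceeds every null. The simulated data, the choice of 19 nulls, and the absolute-correlation metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a real linear association.
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(scale=1.0, size=100)

def metric(x, y):
    # Plot-specific metric for a scatterplot: absolute Pearson correlation.
    return abs(np.corrcoef(x, y)[0, 1])

# Lineup of 19 null plots: permute y to break any association.
null_metrics = [metric(x, rng.permutation(y)) for _ in range(19)]
data_metric = metric(x, y)

# The data plot is "detected" if its metric beats every null plot,
# mimicking a reader picking it out of a 20-panel lineup.
detected = data_metric > max(null_metrics)
print(detected)
```

Under the null of no association, the data plot would beat all 19 nulls with probability 1/20, which is the calibration the lineup protocol exploits.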
192.
The multiset sampler has been shown to be an effective algorithm for sampling from complex multimodal distributions, but it requires that the parameters of the target distribution can be divided into two parts: the parameters of interest and the nuisance parameters. We propose a new self-multiset sampler (SMSS), which extends the multiset sampler to distributions without nuisance parameters. We also generalize the method to distributions with unbounded or infinite support. Numerical results show that the SMSS and its generalization have a substantial advantage in sampling multimodal distributions over the ordinary Markov chain Monte Carlo algorithm and some popular variants. Supplemental materials for the article are available online.
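To see the difficulty the SMSS targets, the sketch below runs the baseline it is compared against: an ordinary random-walk Metropolis sampler on a bimodal target. This is not the SMSS itself (which augments the chain with a multiset of states); the mixture target, step size, and chain length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal target: equal mixture of N(-3, 1) and N(3, 1),
    # up to an additive constant.
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

def random_walk_metropolis(n_steps, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        # Standard Metropolis accept/reject on the log scale.
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return np.array(samples)

samples = random_walk_metropolis(20_000)
# A chain that mixes across modes visits both sides of zero;
# a poorly mixing chain gets stuck near one mode.
frac_right = (samples > 0).mean()
print(round(frac_right, 2))
```

With well-separated modes, `frac_right` can sit far from the true 0.5 for long stretches, which is exactly the mode-hopping failure that multiset-style samplers are designed to mitigate.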
193.
We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy, and incomplete, and the drift and diffusion coefficient are assumed to depend on an unknown parameter. We present a data-augmentation algorithm for drawing from the posterior distribution, based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived, and we show how they can be simulated using a generalised version of the guided proposals introduced in Schauer, Van der Meulen and Van Zanten (2017, Bernoulli 23(4A)).
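A minimal sketch of the bridge-simulation idea, not the authors' exact guided proposal: an Euler scheme for a one-dimensional diffusion with a Brownian-bridge-style guiding term added to the drift, pulling the path toward the value observed at the endpoint. The drift function, noise level, and endpoint are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def guided_bridge(x0, v, T, n, drift, sigma):
    # Euler scheme for dX = drift(X) dt + sigma dW on [0, T],
    # with an added guiding term (v - x) / (T - t) that pulls the
    # path toward the observation v at time T.
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * dt
        pull = (v - x[i]) / (T - t)
        x[i + 1] = (x[i] + (drift(x[i]) + pull) * dt
                    + sigma * np.sqrt(dt) * rng.normal())
    return x

path = guided_bridge(x0=0.0, v=2.0, T=1.0, n=200,
                     drift=lambda x: -0.5 * x, sigma=0.3)
print(abs(path[-1] - 2.0) < 0.2)
```

In a data-augmentation MCMC scheme, proposals of this kind are accepted or rejected with a weight correcting for the mismatch between the guided dynamics and the true conditional bridge; the guiding term here is the simplest choice, not the generalised one derived in the paper.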
194.
We present a nonparametric approach for (1) efficiency and (2) equity evaluation in education. Firstly, we use a nonparametric (Data Envelopment Analysis) model specially tailored to assess educational efficiency at the pupil level. The model accounts for the fact that typically minimal prior structure is available for the behavior (objectives and feasibility set) under evaluation. It allows for uncertainty in the data and corrects for exogenous ‘environmental’ characteristics specific to each pupil. Secondly, we propose two multidimensional stochastic dominance criteria as naturally complementary aggregation criteria for comparing the performance of different school types (private and public schools). While the first criterion accounts only for efficiency, the second also takes equity into consideration. The model is applied to compare private (but publicly funded) and public primary schools in Flanders. Our application finds that no school type robustly dominates another once we control for the school environment and take equity into account. More generally, it demonstrates the usefulness of our nonparametric approach, which incorporates environmental and equity considerations, for obtaining ‘fair’ performance comparisons in the public sector context.
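To fix intuition for DEA efficiency scores, the sketch below uses the degenerate single-input, single-output case, where the CCR efficiency of each unit reduces to its output/input ratio scaled by the best observed ratio. Real DEA models with multiple inputs and outputs require solving a linear program per unit; the pupil-level data here are invented for illustration.

```python
import numpy as np

# Toy data: one input (e.g. instruction hours) and one output
# (e.g. test score) per pupil; names and numbers are illustrative.
inputs = np.array([40.0, 35.0, 50.0, 45.0])
outputs = np.array([80.0, 77.0, 90.0, 95.0])

# With a single input and output, CCR efficiency is each unit's
# productivity ratio divided by the best observed ratio, so the
# best unit scores exactly 1 and defines the efficient frontier.
ratios = outputs / inputs
efficiency = ratios / ratios.max()
print(np.round(efficiency, 3))
```

The second pupil (ratio 2.2) defines the frontier here; the others receive scores below 1, interpretable as the proportional input reduction needed to reach the frontier.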
195.
At the meeting on the joint Bologna Declaration, EU representatives agreed to establish a common European Higher Education Area by 2010. Since then, several universities have implemented pilot projects, although no formal research has been carried out to analyse their results. In this study, we analysed one of these pilot projects with two objectives. First, we examined the performance of the new system compared to that of the traditional system, using a procedure based on a modified Data Envelopment Analysis model that can distinguish students’ efficiency (managerial efficiency) from efficiency based on the educational programme used (programme efficiency). Second, we analysed whether the two systems perform differently for different types of students.
196.
Data envelopment analysis (DEA), as generally used, assumes precise knowledge of which variables are inputs and which are outputs; in many applications, however, only partial knowledge exists. This paper presents a new methodology for selecting input/output variables endogenously within the DEA model in the presence of partial (or expert) knowledge, by employing a reward variable observed exogenously to the operation of the DMUs. The reward is an allocation of a limited resource by an external agency, e.g. capital allocation by a market, based on perceived internal managerial efficiencies. We present an iterative two-stage optimization model that weighs the benefit of possibly violating the expert information to determine an optimal internal performance evaluation of the DMUs, maximizing its correlation with the reward metric. Theoretical properties of the model are analyzed, and statistical significance tests are developed for the marginal value of expert violation. The methodology is applied in fundamental analysis of publicly traded firms, using quarterly financial data, to determine an optimized DEA-based fundamental strength indicator. More than 800 firms covering all major sectors of the US stock market are used in the empirical evaluation of the model. The firms screened by the model are used within out-of-sample mean-variance long-portfolio allocation to demonstrate the superiority of the methodology as an investment decision tool.
197.
Multiple-attribute pricing problems are highly challenging due to the dynamic and uncertain features of the associated market. In this paper, we address the condominium multiple-attribute pricing problem using data envelopment analysis (DEA). We simultaneously consider stochastic variables, non-discretionary variables, and ordinal data, and present a new type of DEA model. Based on the proposed model, an effective performance measurement tool is developed to provide a basis for understanding the condominium pricing problem, to direct and monitor the implementation of pricing strategy, and to provide information on the results of pricing efforts for units sold as well as insights for future building design. A case study is carried out with a leading Canadian condominium developer.
198.
Within the data envelopment analysis context, problems of discrimination between efficient and inefficient decision-making units often arise, particularly when there is a relatively large number of variables relative to observations. This paper applies Monte Carlo simulation to generalize and compare two discrimination-improving methods: principal component analysis applied to data envelopment analysis (PCA–DEA) and variable reduction based on partial covariance (VR). Performance criteria are based on the percentage of observations incorrectly classified: efficient decision-making units mistakenly classified as inefficient, and inefficient units classified as efficient. A trade-off was observed, with both methods improving discrimination by reducing the probability of the latter error at the expense of a small increase in the probability of the former. A comparison of the methodologies demonstrates that PCA–DEA is a more powerful tool than VR, with consistently more accurate results. PCA–DEA is applied to all basic DEA models, and guidelines for its application are presented to minimize misclassification; it proves particularly useful when analyzing relatively small datasets, removing the need for additional preference information.
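The PCA stage of a PCA–DEA pipeline can be sketched as follows: project correlated input variables onto the principal components explaining a chosen share of variance, then run DEA on the reduced inputs. The synthetic data, the 90% variance threshold, and the two-factor structure are illustrative assumptions; note that principal-component scores can be negative, which is one reason the negative-data DEA literature matters here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic input matrix: 20 DMUs, 6 correlated input variables
# generated from 2 underlying factors plus small noise.
base = rng.normal(size=(20, 2))
X = base @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(20, 6))

# PCA via SVD of the centred data; keep the leading components
# covering 90% of the variance before the DEA stage.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_explained, 0.90) + 1)
scores = Xc @ Vt[:k].T  # reduced inputs for the DEA stage
print(k, scores.shape)
```

Reducing six inputs to a couple of components shrinks the variable count relative to the 20 observations, which is precisely the discrimination problem PCA–DEA is designed to ease.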
199.
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision-making units, such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs and outputs; subsequent literature has proposed various approaches to enable DEA to deal with negative data.
200.
This paper presents a framework for finding optimal modules in a delayed product differentiation scenario. Historical product sales data are used to estimate demand probabilities and customer preferences. This information is then used by a multiple-objective optimization model to form modules. An evolutionary computation approach is applied to solve the optimization model and find the Pareto-optimal solutions. An industrial case study illustrates the ideas presented in the paper; the mean number of assembly operations and the expected pre-assembly cost are the two competing objectives optimized in the study. The mean number of assembly operations can be significantly reduced while incurring a relatively small increase in the expected pre-assembly cost.
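The Pareto-optimality criterion behind the evolutionary search can be sketched directly: among candidate module designs scored on the two competing objectives, keep only the non-dominated ones. The design points below are hypothetical, not from the case study.

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (both objectives are minimised).
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # Keep the points not dominated by any other candidate.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical module designs scored as
# (mean assembly operations, expected pre-assembly cost).
designs = [(10, 5.0), (8, 7.0), (12, 4.0), (9, 6.5), (10, 6.0)]
front = sorted(pareto_front(designs))
print(front)
```

Here (10, 6.0) is dominated by (10, 5.0) and drops out; the remaining four designs trace the trade-off curve between assembly operations and pre-assembly cost from which a decision-maker picks a compromise.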