931.
A phased array radar (PAR) is used to detect new targets and to update information on targets already detected. A single PAR typically has to perform a large number of tasks within a finite time horizon, so an efficient task scheduling algorithm is needed to make the best use of the limited time and energy resources. Existing radar task scheduling algorithms fail to release the full potential of the PAR: they ignore the full PAR task structure, optimize performance in only one aspect, or rely purely on heuristic or meta-heuristic methods. To address these issues, an optimization model for PAR task scheduling and a hybrid adaptive genetic algorithm (HAGA) are proposed. The model captures the full PAR task structure and integrates multiple task scheduling principles, so that performance is guaranteed in several aspects at once. The HAGA combines an improved GA, which explores better solutions, with a heuristic task interleaving algorithm that exploits wait intervals to interleave subtasks and evaluates the fitness of individuals efficiently. The efficiency and effectiveness of the HAGA are further improved by using chaotic sequences for population initialization, elite reservation, and mixed ranking selection, and by designing adaptive crossover and adaptive mutation operators. Simulation results demonstrate that the HAGA offers better global exploration, faster convergence, and greater robustness than three state-of-the-art algorithms: the adaptive GA, the hybrid GA, and the highest priority and earliest deadline first (HPEDF) heuristic.
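The adaptive operators can be illustrated with a minimal GA loop. The sketch below is not the paper's HAGA: the toy fitness function, the permutation encoding, and the adaptation rule (crossover and mutation rates driven by each individual's fitness relative to the population average) are illustrative assumptions only.

import random

# Toy problem: order tasks to minimize total weighted completion time.
# This stands in for the PAR scheduling fitness; the real model is richer.
DURATIONS = [4, 2, 7, 1, 5, 3, 6]
WEIGHTS   = [2, 5, 1, 9, 3, 4, 2]

def fitness(order):
    t, cost = 0, 0.0
    for task in order:
        t += DURATIONS[task]
        cost += WEIGHTS[task] * t
    return 1.0 / cost                      # higher is better

def adaptive_rate(f, f_avg, f_max, hi, lo):
    # Illustrative rule: strong individuals get lower rates, weak ones higher.
    if f >= f_avg and f_max > f_avg:
        return lo + (hi - lo) * (f_max - f) / (f_max - f_avg)
    return hi

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def evolve(pop_size=40, generations=200):
    n = len(DURATIONS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        f_avg, f_max = sum(fits) / len(fits), max(fits)
        elite = pop[fits.index(f_max)][:]              # elite reservation
        new_pop = [elite]
        while len(new_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=fits, k=2)
            child = p1[:]
            if random.random() < adaptive_rate(max(fitness(p1), fitness(p2)),
                                               f_avg, f_max, 0.9, 0.6):
                child = order_crossover(p1, p2)
            if random.random() < adaptive_rate(fitness(child),
                                               f_avg, f_max, 0.2, 0.05):
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]   # swap mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(evolve())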
932.
Customized personal rate offering is of growing importance in the insurance industry. An important step toward this goal is to identify subgroups of insureds from the corresponding heterogeneous claim frequency data. In this paper, a penalized Poisson regression approach for subgroup analysis of claim frequency data is proposed. Subjects are assumed to follow a zero-inflated Poisson regression model with group-specific intercepts, which capture the group characteristics of claim frequency. A penalized likelihood function is derived and optimized to identify the group-specific intercepts and the effects of individual covariates. To handle the challenges arising in the optimization of the penalized likelihood function, an alternating direction method of multipliers (ADMM) algorithm is developed and its convergence is established. Simulation studies and real-data applications are provided for illustration.
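One way to see the structure of such a model: a zero-inflated Poisson regression with subject-specific intercepts and a pairwise fusion penalty that drives the intercepts of indistinguishable subjects to a common value, which defines the subgroups. The display below is a schematic of this general construction written in LaTeX, not the paper's exact objective; the penalty p_gamma (e.g., a concave MCP/SCAD-type penalty) and the notation are assumptions.

% Zero-inflated Poisson with subject-specific intercepts \mu_i and covariate effects \beta:
P(y_i = 0) = \pi_i + (1-\pi_i)\, e^{-\lambda_i}, \qquad
P(y_i = k) = (1-\pi_i)\, \frac{\lambda_i^{k} e^{-\lambda_i}}{k!}, \quad k \ge 1,
\qquad \lambda_i = \exp\!\left(\mu_i + x_i^{\top}\beta\right).

% Pairwise-fused penalized (negative) log-likelihood; fused intercepts define subgroups:
\min_{\mu,\,\beta}\; -\sum_{i=1}^{n} \log L_i(\mu_i, \beta; y_i)
  \;+\; \sum_{1 \le i < j \le n} p_{\gamma}\!\left(\lvert \mu_i - \mu_j \rvert\right).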
933.
We present an efficient algorithm for finding the shortest path joining two points in a sequence of triangles in three-dimensional space, using funnels associated with the common edges along the triangle sequence and a planar unfolding of each funnel. We show that the unfolded image of a funnel is a simple polygon and therefore does not overlap itself. Consequently, the funnels can be determined iteratively along their associated common edges via planar unfolding, and the shortest path joining the two points is determined by the cusps of these funnels.
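The planar unfolding step can be made concrete as follows. The sketch below lays a strip of 3D triangles out in the plane by preserving edge lengths, placing each new apex on the opposite side of the shared edge from the previous triangle. It is a generic unfolding routine under the stated input assumptions, not the funnel construction from the paper.

import numpy as np

def unfold_strip(vertices, triangles):
    """Unfold a triangle strip into 2D, preserving all edge lengths.

    vertices : (n, 3) array of 3D points.
    triangles: list of index triples; the first two indices of triangle k+1
               are assumed to be the edge it shares with triangle k.
    Returns a dict mapping vertex index -> 2D coordinates.
    """
    def place(P, Q, dP, dQ, side_ref):
        # Place a point at distances dP from P and dQ from Q (law of cosines),
        # on the side of segment PQ opposite to side_ref (if given).
        e = Q - P
        L = np.linalg.norm(e)
        a = (dP**2 - dQ**2 + L**2) / (2 * L)
        h = np.sqrt(max(dP**2 - a**2, 0.0))
        u = e / L
        n = np.array([-u[1], u[0]])
        if side_ref is not None and np.dot(side_ref - P, n) > 0:
            n = -n                      # flip to the opposite side
        return P + a * u + h * n

    d = lambda a, b: np.linalg.norm(vertices[a] - vertices[b])
    i0, i1, i2 = triangles[0]
    flat = {i0: np.zeros(2), i1: np.array([d(i0, i1), 0.0])}
    flat[i2] = place(flat[i0], flat[i1], d(i0, i2), d(i1, i2), None)
    prev_tri = triangles[0]
    for (a, b, c) in triangles[1:]:
        ref = next(v for v in prev_tri if v not in (a, b))
        flat[c] = place(flat[a], flat[b], d(a, c), d(b, c), flat[ref])
        prev_tri = (a, b, c)
    return flat

# Two triangles sharing edge (0, 1), unfolded flat onto opposite sides of it.
V = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.3, 0.8]], float)
print(unfold_strip(V, [(0, 1, 2), (0, 1, 3)]))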
934.
In this paper, we extend the approach developed by the author for the standard finite element method in the L∞-norm for noncoercive variational inequalities (VIs) (Numer Funct Anal Optim. 2015;36:1107-1121) to the impulse control quasi-variational inequality (QVI). We derive the optimal error estimate by combining the so-called Bensoussan-Lions algorithm with the concept of subsolutions for VIs.
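For orientation, the impulse control QVI is commonly written in the following form. The display is the standard Bensoussan-Lions formulation, with a bilinear form a(·,·), source term f, and obstacle operator M involving a fixed impulse cost k > 0; it is quoted as background and is not necessarily the exact setting of the paper.

\text{Find } u \in K(u): \quad a(u, v - u) \ge (f, v - u) \quad \forall\, v \in K(u),
\qquad K(u) = \{\, v \in H^1_0(\Omega) : v \le M u \,\},

\text{where } (Mu)(x) = k + \inf_{\substack{\xi \ge 0 \\ x + \xi \in \overline{\Omega}}} u(x + \xi), \qquad k > 0.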
935.
In this paper, we study the problem of sampling from a given probability density function that is known to be smooth and strongly log-concave. We analyze several methods of approximate sampling based on discretizations of the (highly overdamped) Langevin diffusion and establish guarantees on the error measured in the Wasserstein-2 distance. Our guarantees improve or extend the state-of-the-art results in three directions. First, we provide an upper bound on the error of the first-order Langevin Monte Carlo (LMC) algorithm with an optimized varying step size. This result has the advantage of being horizon-free (we do not need to know the target precision in advance) and of improving on the corresponding result for a constant step size by a logarithmic factor. Second, we study the case where accurate evaluations of the gradient of the log-density are unavailable, but approximations of this gradient can be accessed. We consider both deterministic and stochastic approximations of the gradient and provide an upper bound on the sampling error of the first-order LMC that quantifies the impact of the gradient evaluation inaccuracies. Third, we establish upper bounds for two versions of the second-order LMC, which leverage the Hessian of the log-density, and provide nonasymptotic guarantees on their sampling error. These guarantees reveal that the second-order LMC algorithms improve on the first-order LMC in ill-conditioned settings.
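As a reminder of what is being discretized: for a target density p(θ) ∝ exp(−f(θ)), the first-order LMC iteration is the Euler-Maruyama discretization of the overdamped Langevin diffusion. The display below is the textbook update, with step sizes h_{k+1} and standard Gaussian noise, shown for context rather than taken from the paper.

% Overdamped Langevin diffusion targeting  p(\theta) \propto e^{-f(\theta)}:
d\theta_t = -\nabla f(\theta_t)\, dt + \sqrt{2}\, dW_t.

% First-order LMC: Euler-Maruyama discretization with step sizes h_{k+1}:
\theta_{k+1} = \theta_k - h_{k+1}\, \nabla f(\theta_k) + \sqrt{2\, h_{k+1}}\; \xi_{k+1},
\qquad \xi_{k+1} \sim \mathcal{N}(0, I_p).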
936.
Point estimators for the parameters of the component lifetime distribution in coherent systems are developed under the assumption of independent and identically Weibull distributed component lifetimes. We study both complete and incomplete information under continuous monitoring of the essential component lifetimes. First, we prove that the maximum likelihood estimator (MLE) under complete information based on progressively Type-II censored system lifetimes exists uniquely, and we present two approaches for computing the estimates. Furthermore, we consider an ad hoc estimator, a max-probability plan estimator, and the MLE for the parameters under incomplete information. To compute the MLEs, we consider a direct maximization of the likelihood and an EM-algorithm-type approach, respectively. In all cases, we illustrate the results by simulations of the five-component bridge system and the 10-component parallel system.
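As a baseline for the complete-information setting, the sketch below fits a two-parameter Weibull by direct maximization of the likelihood for a complete i.i.d. sample. It ignores the censoring and system-structure aspects of the paper and only shows the basic building block; the log-parameterization used for unconstrained optimization is an implementation choice.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Simulated complete i.i.d. sample: shape k = 1.8, scale lam = 2.5.
data = weibull_min.rvs(c=1.8, scale=2.5, size=500, random_state=0)

def neg_log_lik(params, t):
    # params = (log k, log lam); Weibull density f(t) = (k/lam)*(t/lam)**(k-1)*exp(-(t/lam)**k)
    k, lam = np.exp(params)
    z = t / lam
    return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

res = minimize(neg_log_lik, x0=np.zeros(2), args=(data,), method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print(f"shape MLE = {k_hat:.3f}, scale MLE = {lam_hat:.3f}")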
937.
A genetic-algorithm-based optimization method is proposed for deploying multi-platform resources for emergency Earth observation tasks. By discretizing the observation area into a set of grid points, the method formulates the multi-platform resource deployment problem as a combinatorial optimization problem whose objective is to maximize coverage of the observation area under a response time constraint. The solution algorithm uses an integer encoding to represent the deployment position of each platform resource and an elitism strategy to accelerate convergence. Simulation results show that the method quickly produces satisfactory deployment plans for multi-platform resources comprising satellites, airships, and unmanned aerial vehicles.
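The grid-point coverage objective can be illustrated in a few lines. The sketch below assumes that each platform covers every grid point within a fixed radius of its deployment position and scores a candidate deployment by the fraction of grid points covered; the grid, the candidate sites, the radii, and the integer encoding are illustrative assumptions, not the paper's model.

import numpy as np

# Observation area discretized into grid points (a 20 x 20 lattice on [0, 100]^2).
xs, ys = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
GRID = np.column_stack([xs.ravel(), ys.ravel()])

# Candidate deployment sites; integer-encoded chromosomes index into this list.
SITES = np.array([[10, 10], [30, 70], [50, 50], [70, 20], [90, 80], [20, 40], [80, 50]], float)

# One coverage radius per platform (e.g. satellite, airship, UAV), illustrative values.
RADII = np.array([45.0, 30.0, 15.0])

def coverage(chromosome):
    """Fraction of grid points within range of at least one deployed platform.

    chromosome: integer indices into SITES, one gene per platform.
    """
    positions = SITES[np.asarray(chromosome)]
    dists = np.linalg.norm(GRID[:, None, :] - positions[None, :, :], axis=2)
    covered = (dists <= RADII[None, :]).any(axis=1)
    return covered.mean()

print(coverage([2, 0, 4]))   # deploy the three platforms at sites 2, 0 and 4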
938.
We consider the uniqueness of solution (i.e., nonsingularity) of systems of r generalized Sylvester and ⋆-Sylvester equations with n×n coefficients. After several reductions, we show that it is sufficient to analyze periodic systems having, at most, one generalized ⋆-Sylvester equation. We provide characterizations of the nonsingularity in terms of spectral properties of either matrix pencils or formal matrix products, both constructed from the coefficients of the system. The proposed approach uses the periodic Schur decomposition and leads to a backward stable O(n³r) algorithm for computing the (unique) solution.
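For reference, the two equation types are usually written in the forms below, where ⋆ denotes either the transpose or the conjugate transpose. These single-equation forms are quoted from the common usage in this literature; the coupling of unknowns across the r equations of the system is schematic and not taken from the paper.

% Generalized Sylvester equation in the unknown X
% (a two-unknown variant A X B + C Y D = E also appears in the literature):
A X B + C X D = E,

% Generalized \star-Sylvester equation, with \star the transpose or conjugate transpose:
A X B + C X^{\star} D = E.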
939.
We propose a penalized likelihood method to fit the linear discriminant analysis model when the predictor is matrix-valued. We simultaneously estimate the means and the precision matrix, which we assume has a Kronecker product decomposition. Our penalties encourage pairs of response-category mean matrix estimators to have equal entries and also encourage zeros in the precision matrix estimator. To compute our estimators, we use a blockwise coordinate descent algorithm. To update the optimization variables corresponding to the response-category mean matrices, we use an alternating minimization algorithm that takes advantage of the Kronecker structure of the precision matrix. We show that our method can outperform relevant competitors in classification, even when our modeling assumptions are violated. We analyze three real datasets to demonstrate our method's applicability. Supplementary materials, including an R package implementing our method, are available online.
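The Kronecker assumption on the precision matrix corresponds to a matrix normal model for the predictor. The display below is the standard matrix normal density with row covariance U and column covariance V, so that the precision of vec(X) factors as V^{-1} ⊗ U^{-1}; it is included as background, and the notation is not taken from the paper.

% Matrix normal model for an r x c predictor X with class mean M_k:
f(X \mid M_k, U, V) =
\frac{\exp\!\left( -\tfrac{1}{2}\, \mathrm{tr}\!\left[ V^{-1} (X - M_k)^{\top} U^{-1} (X - M_k) \right] \right)}
     {(2\pi)^{rc/2}\, \lvert U \rvert^{c/2}\, \lvert V \rvert^{r/2}},
\qquad
\mathrm{Cov}\big(\mathrm{vec}(X)\big) = V \otimes U, \quad
\Omega := (V \otimes U)^{-1} = V^{-1} \otimes U^{-1}.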
940.
The family of expectation-maximization (EM) algorithms provides a general approach to fitting flexible models for large and complex data. The expectation (E) step of EM-type algorithms is time-consuming in massive data applications because it requires multiple passes through the full data. We address this problem by proposing an asynchronous and distributed generalization of EM called the distributed EM (DEM). Using DEM, existing EM-type algorithms are easily extended to massive data settings by exploiting the divide-and-conquer technique and widely available computing power, such as grid computing. The DEM algorithm reserves two groups of computing processes, called workers and managers, for performing the E step and the maximization (M) step, respectively. The samples are randomly partitioned into a large number of disjoint subsets and are stored on the worker processes. The E step of the DEM algorithm is performed in parallel on all the workers, and every worker communicates its results to the managers at the end of its local E step. The managers perform the M step after they have received results from a γ-fraction of the workers, where γ is a fixed constant in (0, 1]. The sequence of parameter estimates generated by the DEM algorithm retains the attractive properties of EM: convergence of the sequence of parameter estimates to a local mode and a linear global rate of convergence. Across diverse simulations focused on linear mixed-effects models, the DEM algorithm is significantly faster than competing EM-type algorithms while attaining similar accuracy. The DEM algorithm maintains its superior empirical performance on a movie ratings database consisting of 10 million ratings. Supplementary material for this article is available online.
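The worker/manager split can be mimicked in a few lines of single-process code. The sketch below runs EM for a two-component one-dimensional Gaussian mixture with the data divided across K simulated workers; at each iteration the manager refreshes the sufficient statistics of only a random γ-fraction of the workers and reuses stale statistics for the rest. It is a toy serial simulation of the idea, not the asynchronous grid implementation described in the paper.

import numpy as np

rng = np.random.default_rng(1)
# Synthetic data from a two-component 1-D Gaussian mixture.
data = np.concatenate([rng.normal(-2, 1, 6000), rng.normal(3, 1.5, 4000)])
K = 20                                         # number of simulated workers
shards = np.array_split(rng.permutation(data), K)
gamma = 0.5                                    # manager waits for this fraction of workers

def local_e_step(x, w, mu, sigma):
    # Responsibilities and sufficient statistics for one worker's shard.
    comp = np.stack([w[j] * np.exp(-0.5 * ((x - mu[j]) / sigma[j]) ** 2) / sigma[j]
                     for j in range(2)])
    r = comp / comp.sum(axis=0)                # (2, n) responsibilities
    return r.sum(axis=1), r @ x, r @ x**2      # N_j, S1_j, S2_j per component

# Initial parameters and one cached set of sufficient statistics per worker.
w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
stats = [local_e_step(s, w, mu, sigma) for s in shards]

for it in range(100):
    # "Asynchronous" E step: only a gamma-fraction of workers report fresh statistics.
    active = rng.choice(K, size=max(1, int(gamma * K)), replace=False)
    for k in active:
        stats[k] = local_e_step(shards[k], w, mu, sigma)
    # Manager M step from the pooled (possibly stale) sufficient statistics.
    N  = sum(s[0] for s in stats)
    S1 = sum(s[1] for s in stats)
    S2 = sum(s[2] for s in stats)
    w     = N / N.sum()
    mu    = S1 / N
    sigma = np.sqrt(np.maximum(S2 / N - mu**2, 1e-12))

print("weights", w, "means", mu, "sds", sigma)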