41.
Probabilistic predictions with machine learning are important in many applications. They are commonly produced with Bayesian learning algorithms, which, however, are computationally expensive compared with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner, but it does not capture predictive uncertainty. To represent predictive uncertainty in federated learning, we propose introducing uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution over the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach achieves performance similar to what the benchmark would achieve in a non-distributed setting.
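A minimal Python sketch of the aggregation idea described above, assuming each client's trained weight vector is treated as a sample from a posterior over the global weights; the Gaussian fit is only one possible choice, and all function and variable names are hypothetical rather than taken from the paper:

```python
import numpy as np

def sample_global_weights(client_weights, n_samples=100, rng=None):
    """Treat the set of local weight vectors as an empirical posterior
    over the global model weights and draw samples from it.

    client_weights: array of shape (n_clients, n_params)
    Returns an array of shape (n_samples, n_params).
    """
    rng = np.random.default_rng(rng)
    client_weights = np.asarray(client_weights)
    # Fit a simple Gaussian to the client weights (one possible posterior
    # approximation; the paper's exact aggregation scheme may differ).
    mean = client_weights.mean(axis=0)
    std = client_weights.std(axis=0) + 1e-12
    return rng.normal(mean, std, size=(n_samples, client_weights.shape[1]))

# Predictions are then averaged over sampled global models, yielding a
# predictive distribution instead of a single point estimate.
```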
42.
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven, task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering the underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to derive the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer approaches the optimal performance limits dictated by indirect rate-distortion theory, which are achievable using vector quantizers and require complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, we demonstrate that the proposed approach can realize reliable, bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task.
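The following PyTorch sketch illustrates one way a data-driven task-based quantizer of this kind can be put together: a learned analog combining layer, uniform scalar ADCs approximated with a straight-through estimator so gradients can pass through the quantizer, and a digital layer trained for the downstream task. It is an illustrative reconstruction under these assumptions, not the architecture used in the paper:

```python
import torch
import torch.nn as nn

class TaskBasedQuantizer(nn.Module):
    """Learned analog combining + uniform scalar ADCs (straight-through)."""
    def __init__(self, n_in, n_adc, n_bits, task_dim):
        super().__init__()
        self.analog = nn.Linear(n_in, n_adc, bias=False)   # pre-quantization combining
        self.levels = 2 ** n_bits
        self.digital = nn.Linear(n_adc, task_dim)           # task-oriented recovery

    def quantize(self, x):
        x = torch.tanh(x)                                   # limit dynamic range to [-1, 1]
        q = torch.round((x + 1) / 2 * (self.levels - 1))    # uniform scalar quantizer
        q = q / (self.levels - 1) * 2 - 1
        return x + (q - x).detach()                         # straight-through gradient

    def forward(self, y):
        return self.digital(self.quantize(self.analog(y)))
```

Training this module end-to-end on the task loss (e.g. channel estimation error) is what lets the analog mapping and the quantization rule adapt to the task rather than to the full signal statistics.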
43.
Previous hotel performance studies have neglected the role of information entropy in feedback processes between input and output management. This paper addresses this gap by exploring the relationship between hotel performance at the industry level and the capability of learning by doing and adopting best practices, using a sample of 153 UK hotels over the 10-year period 2008–2017. This research also fills a literature gap by addressing the measurement of hotel performance in the presence of negative outputs. To this end, we apply a novel modified slack-based model for the efficiency analysis and the Least Absolute Shrinkage and Selection Operator to examine the influence of entropy-related variables on the efficiency scores. The results indicate that less can be learnt from inputs than from outputs to improve efficiency levels, and that resource allocation is more balanced than cash flow and liquidity. The findings suggest that market dynamics explain the cash-flow generation potential and liquidity, and that market conditions increasingly offer opportunities for learning and improving hotel efficiency. The results show that the distinctive characteristic of superior performance in hotel operations is the capability to match the cash-flow generation potential with market opportunities.
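A brief scikit-learn sketch of the second-stage regression, assuming the first-stage efficiency scores from the slack-based model are already available; the data below are synthetic placeholders and the variable set is hypothetical:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one row per hotel-year observation.
# X holds entropy-related and market-condition variables,
# y holds efficiency scores from the slack-based model (first stage).
rng = np.random.default_rng(0)
X = rng.normal(size=(153 * 10, 5))
y = rng.uniform(0, 1, size=153 * 10)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_std, y)                # cross-validated penalty selection
print(dict(zip(range(X.shape[1]), lasso.coef_)))   # shrunken coefficients: variables
                                                   # driven to zero are dropped
```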
44.
With the rapid development of sensor technology in recent years, online detection of early faults without halting the system has received much attention in the field of bearing prognostics and health management. When representative samples of the online data are lacking, one can adapt a previously learned detection rule to the online detection task instead of training a new rule merely from online data. Because the data distribution may change between offline and online working conditions, it is challenging to use data from different working conditions to improve detection accuracy and robustness. To solve this problem, a new online detection method for early bearing faults is proposed in this paper based on deep transfer learning. The proposed method consists of an offline stage and an online stage. In the offline stage, a new state assessment method is proposed to determine the periods of the normal state and the degradation state in whole-life degradation sequences. Moreover, a new deep dual temporal domain adaptation (DTDA) model is proposed. By adopting a dual adaptation strategy combining a temporal convolutional network and a domain adversarial neural network, the DTDA model can effectively extract domain-invariant temporal feature representations. In the online stage, each sequentially arriving data batch is fed directly into the trained DTDA model to recognize whether an early fault has occurred. Furthermore, a health indicator of the target bearing is built from the DTDA features to evaluate the detection results intuitively. Experiments are conducted on the IEEE Prognostics and Health Management (PHM) Challenge 2012 bearing dataset. The results show that, compared with nine state-of-the-art fault detection and diagnosis methods, the proposed method achieves earlier detection and a lower false alarm rate.
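One simple way to build a feature-based health indicator of the kind mentioned above is to measure how far the domain-invariant features of an online batch drift from the normal-state feature distribution. The sketch below assumes this centroid-distance formulation with a fixed threshold, which may differ from the paper's exact definition; all names are hypothetical:

```python
import numpy as np

def fit_normal_state(features_normal):
    """Summarize normal-state DTDA features by their centroid and spread."""
    mu = features_normal.mean(axis=0)
    sigma = features_normal.std(axis=0) + 1e-12
    return mu, sigma

def health_indicator(batch_features, mu, sigma):
    """Mean normalized distance of an online batch from the normal state."""
    z = (batch_features - mu) / sigma
    return np.linalg.norm(z, axis=1).mean()

def detect_early_fault(batch_features, mu, sigma, threshold=3.0):
    """Flag an early fault once the indicator exceeds the threshold."""
    return health_indicator(batch_features, mu, sigma) > threshold
```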
45.
In the past decade, big data has become increasingly prevalent in a large number of applications. As a result, datasets suffering from noise and redundancy have made feature selection necessary across multiple domains. However, a common concern in feature selection is that different approaches can give very different results when applied to similar datasets. Aggregating the results of different selection methods helps to resolve this concern and to control the diversity of the selected feature subsets. In this work, we implemented a general framework for ensembling multiple feature selection methods. Based on diversified datasets generated from the original set of observations, we aggregated the importance scores produced by multiple feature selection techniques in two ways: the Within Aggregation Method (WAM), which aggregates importance scores within a single feature selection method, and the Between Aggregation Method (BAM), which aggregates importance scores across multiple feature selection methods. We applied the proposed framework to 13 real datasets with diverse characteristics. The experimental evaluation showed that WAM provides an effective tool for determining the best feature selection method for a given dataset and is more stable than BAM in identifying important features, while the computational demands of the two methods are comparable. These results suggest that, by applying both WAM and BAM, practitioners can gain a deeper understanding of the feature selection process.
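A minimal numpy sketch of the two aggregation modes, assuming importance scores are stored as (datasets x features) matrices and that simple averaging is used as the aggregation rule; function names are hypothetical:

```python
import numpy as np

def wam(scores):
    """Within Aggregation Method: average the importance scores produced by a
    single feature selection method across the diversified (e.g. resampled) datasets.
    scores: array of shape (n_datasets, n_features) for one method."""
    return np.asarray(scores).mean(axis=0)

def bam(scores_per_method):
    """Between Aggregation Method: aggregate the per-method scores across
    several feature selection methods.
    scores_per_method: list of (n_datasets, n_features) arrays, one per method."""
    return np.mean([wam(s) for s in scores_per_method], axis=0)

# Features can then be ranked by the aggregated scores, e.g. np.argsort(-bam(...)).
```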
46.
The Coronavirus disease 2019 (COVID-19) has become one of the major threats to the world. Computed tomography (CT) is an informative tool for the diagnosis of COVID-19 patients. Many deep learning approaches based on CT images have been proposed and have shown promising performance. However, due to the high complexity and non-transparency of deep models, explaining the diagnosis process is challenging, which makes it hard to evaluate whether such approaches are reliable. In this paper, we propose a visual interpretation architecture for explaining deep learning models and apply it to COVID-19 diagnosis. Our architecture provides a comprehensive interpretation of the deep model from different perspectives, including the training trends, diagnostic performance, learned features, feature extractors, hidden layers, and the support regions for diagnostic decisions. With this interpretation architecture, researchers can compare and explain classification performance, gain insight into what the deep model has learned from the images, and obtain support for diagnostic decisions. Our deep model achieves 94.75%, 93.22%, 96.69%, 97.27%, and 91.88% in terms of accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, which are 8.30%, 4.32%, 13.33%, 10.25%, and 6.19% higher, respectively, than those of the compared traditional methods. The visualized features in 2-D and 3-D spaces provide reasons for the superiority of our deep model. Our interpretation architecture allows researchers to understand more about how and why deep models work, and it can serve as an interpretation solution for any deep learning model based on convolutional neural networks. It can also help deep learning methods take a step forward in the field of clinical COVID-19 diagnosis.
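For reference, the five criteria reported above follow the standard confusion-matrix definitions; a short sketch with placeholder counts is given below:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary-classification criteria used in the abstract."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
    }

# Placeholder confusion-matrix counts for illustration only.
print(diagnostic_metrics(tp=90, fp=5, tn=95, fn=10))
```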
47.
The deployment of machine learning models is expected to bring several benefits. Nevertheless, as a result of the complexity of the ecosystem in which models are generally trained and deployed, this technology also raises concerns regarding its (1) interpretability, (2) fairness, (3) safety, and (4) privacy. These issues can have substantial economic implications because they may hinder the development and mass adoption of machine learning. In light of this, the purpose of this paper was to determine, from a positive-economics point of view, whether the free use of machine learning models maximizes aggregate social welfare or whether regulations are required; in cases where restrictions should be enacted, policies are proposed. Adapting current tort and anti-discrimination laws is found to guarantee an optimal level of interpretability and fairness. Additionally, existing market solutions appear to give machine learning operators an incentive to equip models with a degree of security and privacy that maximizes aggregate social welfare. These findings are expected to be valuable for informing the design of efficient public policies.
48.
Fractional-order calculus concerns differentiation and integration of non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT, and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics for quantifying the big-data variability stemming from the generation of complex systems is justified. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: "is there a more optimal way to optimize?". Third, an optimal-randomness case study for a stochastic configuration network (SCN) machine learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented, together with concluding remarks.
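As a toy illustration of the "optimal randomness" theme, the sketch below draws heavy-tailed (Pareto-type) random weights for a random-feature hidden layer instead of the usual uniform or Gaussian initialization. It is not the SCN algorithm itself, and all names and the choice of distribution are assumptions made for illustration:

```python
import numpy as np

def heavy_tailed_hidden_layer(X, n_nodes, alpha=1.5, rng=None):
    """Random hidden-layer features with heavy-tailed (Pareto-type) weights,
    in contrast to the usual uniform/Gaussian initialization."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    signs = rng.choice([-1.0, 1.0], size=(n_features, n_nodes))
    W = signs * rng.pareto(alpha, size=(n_features, n_nodes))  # heavy-tailed magnitudes
    b = rng.normal(size=n_nodes)
    return np.tanh(X @ W + b)

# Output weights can then be fit by least squares on these random features,
# and the effect of the tail index alpha on performance can be compared.
```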
49.
The 3D modelling of indoor environments and the generation of process simulations play an important role in factory and assembly planning. In brownfield planning cases, existing data are often outdated and incomplete, especially for older plants, which were mostly planned in 2D. Thus, current environment models cannot be generated directly from existing data, and a holistic approach to building such a factory model in a highly automated fashion is largely lacking. Major steps in generating an environment model of a production plant include data collection, data pre-processing, object identification, and pose estimation. In this work, we elaborate on a methodical modelling approach that starts with the digitalization of large-scale indoor environments and ends with the generation of a static environment or simulation model. The object identification step is realized using a Bayesian neural network capable of point cloud segmentation. We examine the impact of the uncertainty information estimated by the Bayesian segmentation framework on the accuracy of the generated environment model. The data collection and point cloud segmentation steps, as well as the resulting model accuracy, are evaluated on a real-world dataset collected at the assembly line of a large-scale automotive production plant. The Bayesian segmentation network clearly surpasses the frequentist baseline and allows us to considerably increase the accuracy of model placement in a simulation scene.
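One common way to obtain per-point uncertainty from a segmentation network is Monte Carlo dropout; the PyTorch sketch below assumes this approximation and a generic per-point classifier, so it is only a stand-in for the paper's Bayesian segmentation framework:

```python
import torch

def mc_dropout_segmentation(model, points, n_samples=20):
    """Per-point class probabilities and predictive entropy via MC dropout.
    model: a segmentation net with dropout layers that maps (N, 3) points
    to (N, n_classes) logits; points: (N, 3) tensor."""
    model.train()                                         # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(points), dim=-1) for _ in range(n_samples)
        ]).mean(dim=0)                                    # (N, n_classes)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return probs.argmax(dim=-1), entropy                  # labels + per-point uncertainty

# Points with high entropy can be down-weighted or excluded before fitting
# object poses, which is how uncertainty can improve model placement.
```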
50.
The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice for assessing patients with blunt spleen trauma, whose injuries may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed with CT, spleen injuries are currently detected through manual review of the scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients with traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, and random forest (RF), naive Bayes, SVM, k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models were trained via 5-fold cross-validation. Of these models, the random forest performed best, achieving an area under the receiver operating characteristic curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
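A minimal scikit-learn sketch of the best-performing pipeline described above, with 5-fold cross-validation and AUC/F1 evaluation; the features and labels here are synthetic placeholders standing in for the scan-level features used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(99, 20))            # placeholder scan-level features
y = rng.integers(0, 2, size=99)          # 0 = healthy, 1 = lacerated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(rf, X_tr, y_tr, cv=5).mean())  # 5-fold CV on training set

rf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("test F1 :", f1_score(y_te, rf.predict(X_te)))
```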