11.
Military course-of-action planning involves time and space synchronization as well as resource and asset allocation. A mission can be seen as a defined set of logically ordered tasks with time and space constraints. Resource-to-task rules require that available assets be allocated to each task, and a combination of assets might be required to execute a given task. The pair (task, resources) constitutes an action. This problem is formulated as a multi-objective scheduling and resource allocation problem: any solution is assessed against a number of conflicting and heterogeneous objectives, and military planning officers keep refining the plan based on the Commander's criteria for success. Both the scheduling problem and the resource allocation problem are NP-hard [A. Guitouni, B. Urli, J.-M. Martel, Course of action planning: A project based modelling, Working Paper, Faculté des sciences de l'administration, Université Laval, Québec, 2005]. This paper is concerned with the multi-objective resource allocation problem: finding an adequate resource allocation for a given course-of-action schedule. Since exact optimization is impractical for this problem, the paper investigates non-exact solution methods, such as meta-heuristics, for finding potentially efficient solutions. A progressive resource allocation methodology is proposed based on Tabu Search and multi-objective concepts. This technique explores the search space for a set of potentially efficient solutions without aggregating the objectives into a single objective function. It is guided by the principle of maximizing the usage of any resource before considering a replacement, so a given resource is allocated to the maximum number of tasks in a given schedule. A good allocation is a potentially efficient solution; such solutions are retained by applying a combination of a dominance rule and a multi-criteria filtering method.
The performance of the proposed Pareto-based approach is compared to two aggregation approaches: the weighted-sum and lexicographic techniques. The results show that the Pareto-based approach provides better solutions and allows the decision-maker more flexibility.
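The dominance rule used to retain potentially efficient solutions can be sketched as follows. This is a minimal illustration with invented objective vectors (all objectives minimized); the paper additionally applies a multi-criteria filtering step not shown here.

```python
# Pareto dominance filter over a pool of candidate allocations.
# Each solution is represented by its tuple of objective values.

def dominates(a, b):
    """True if a dominates b: no worse on every objective and
    strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Retain only the potentially efficient (non-dominated) solutions."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Invented objective vectors (e.g., makespan, resource cost).
pool = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
front = non_dominated(pool)
```

Here (4, 6) and (6, 6) are discarded because (3, 5) and (4, 4) dominate them, while the three remaining solutions are mutually incomparable, which is why no single-objective aggregation is needed.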
12.
In order to solve linear programs with a large number of constraints, constraint generation techniques are often used. In these algorithms, a relaxation of the formulation containing only a subset of the constraints is solved first; a separation procedure is then called, which adds to the relaxation any inequality of the formulation violated by the current solution, and the process is iterated until no violated inequality can be found. In this paper, we present a separation procedure that uses several points to generate violated constraints. The complexity of this separation procedure and of some related problems is studied. Preliminary computational results on the advantages of multiple-points separation over traditional single-point separation are given for random linear programs and survivable network design. They illustrate that, for some specific families of linear programs, multiple-points separation can be computationally effective.
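The multiple-points separation step can be sketched as follows. This is a toy illustration, not the paper's procedure: the constraint family and the candidate points are invented, and constraints are written as pairs (a, b) meaning a·x ≤ b.

```python
# Multiple-points separation: scan a set of tentative points and
# return every constraint of the formulation violated by at least
# one of them, so several cuts can be added per iteration.

def violated(constraint, point, tol=1e-9):
    a, b = constraint
    return sum(ai * xi for ai, xi in zip(a, point)) > b + tol

def separate(constraints, points):
    """Return constraints violated by at least one of the points."""
    return [c for c in constraints
            if any(violated(c, p) for p in points)]

# Two constraints in R^2 and three tentative points (all invented).
cons = [((1, 1), 1.0), ((1, -1), 0.5)]
pts = [(0.2, 0.2), (0.9, 0.9), (0.3, -0.4)]
cuts = separate(cons, pts)
```

A single-point separation at (0.2, 0.2) would find no cut at all, whereas using the three points together recovers both violated inequalities in one call, which is the advantage the paper quantifies.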
13.
This paper studies an inverse hyperbolic problem for the wave equation with dynamic boundary conditions. It consists of determining some forcing terms from the final overdetermination of the displacement. First, the Fréchet differentiability of the Tikhonov functional is studied, and a gradient formula is obtained via the solution of an associated adjoint problem. Then, the Lipschitz continuity of the gradient is proved. Furthermore, the existence and the uniqueness for the minimization problem are discussed. Finally, some numerical experiments for the reconstruction of an internal wave force are implemented via a conjugate gradient algorithm.
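The Tikhonov-regularized reconstruction by conjugate gradient can be sketched in finite dimensions. This is a minimal illustration under stated assumptions: the forward operator `A`, the data `u`, and the weight `alpha` are invented stand-ins; the paper's forward map is the wave equation with dynamic boundary conditions, abstracted here as a matrix.

```python
# Linear conjugate gradient applied to the normal equations of the
# Tikhonov functional J(f) = ||A f - u||^2 + alpha ||f||^2, i.e.
# solving (A^T A + alpha I) f = A^T u.
import numpy as np

def tikhonov_cg(A, u, alpha, iters=50):
    n = A.shape[1]
    M = A.T @ A + alpha * np.eye(n)   # SPD system matrix
    rhs = A.T @ u
    f = np.zeros(n)
    r = rhs - M @ f                   # residual
    p = r.copy()                      # search direction
    for _ in range(iters):
        Mp = M @ p
        a = (r @ r) / (p @ Mp)        # exact line search step
        f += a * p
        r_new = r - a * Mp
        if np.linalg.norm(r_new) < 1e-12:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return f

# Synthetic noiseless experiment: recover a known forcing vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
f_true = rng.standard_normal(5)
u = A @ f_true
f_rec = tikhonov_cg(A, u, alpha=1e-8)
```

With noiseless data and a tiny regularization weight, the CG iterates recover `f_true` to high accuracy; in the paper's setting the gradient would instead be evaluated through the adjoint wave problem rather than an explicit matrix.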
14.
15.
A rapid and precise LC method was developed for the simultaneous determination of aliskiren hemifumarate (ALS), amlodipine besylate (AML) and hydrochlorothiazide (HCZ) using acetonitrile:25 mM octane sulfonic acid sodium salt monohydrate in water (60:40 v/v) as the mobile phase. The flow rate was maintained at 1.2 mL min⁻¹ on a Supelco Discovery® HS C18 column (25 cm × 4.6 mm, 5 μm) as the stationary phase. Isocratic elution was applied throughout the analysis. Detection was carried out at λmax (232 nm) at ambient temperature. The method was validated according to ICH guidelines. Linearity, accuracy and precision were satisfactory over the concentration ranges of 32–320, 2–44 and 4–64 μg mL⁻¹ for ALS, AML and HCZ, respectively. LOD and LOQ were estimated as 0.855 and 2.951 μg mL⁻¹ for ALS, 0.061 and 0.202 μg mL⁻¹ for AML, and 0.052 and 0.174 μg mL⁻¹ for HCZ, respectively. The method was successfully applied to the determination of the three drugs in their co-formulated tablets. The results were compared statistically with reference methods and no significant difference was found. The developed method is specific and accurate for quality control and routine analysis of the cited drugs in pharmaceutical preparations.
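The calibration-based LOD/LOQ estimates can be reproduced with the standard ICH Q2(R1) formulas, LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. The calibration data below are invented for illustration, not the paper's measurements.

```python
# Least-squares calibration line followed by ICH Q2(R1) LOD/LOQ.

def linfit(x, y):
    """Return slope and residual standard deviation of y = S*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    resid = [b - (slope * a + intercept) for a, b in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return slope, sigma

conc = [32, 64, 128, 192, 256, 320]           # μg/mL (invented)
area = [10.1, 20.3, 40.2, 60.8, 80.1, 100.5]  # detector response (invented)
S, sigma = linfit(conc, area)
lod = 3.3 * sigma / S
loq = 10 * sigma / S
```

By construction LOQ/LOD = 10/3.3 ≈ 3.03, which matches the ratios reported for all three analytes (e.g., 2.951/0.855 for ALS).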
16.
17.
18.
Interaction Value Analysis (I.V.A.) models a network of rational actors who generate value by interacting with each other. This model can be used to understand human organizations: since people form organizations to facilitate interactions between productive individuals, the value added by interaction is the contribution of the organization. This paper examines the result of varying the queuing discipline used to select among back-logged interaction requests. Previously developed I.V.A. models assumed a first-in-first-out (FIFO) discipline, but other disciplines can better represent the "Climate" of an organization. I.V.A. identifies circumstances under which organizations that control members' interaction choices outperform organizations where individuals choose their own interaction partners. Management can be said to "matter" when individual choices converge to a point where interactions generate a lower than optimal value. In previous I.V.A. models, relinquishing central control of interaction choices reduced the aggregate value by anything from 0% to 12%, depending on circumstances. This paper finds the difference between the two modes of organization to reach 47% when actors display preferences between interaction partners instead of treating all equally. A politically divided, dog-eat-dog, "Capitalist" climate follows one queuing discipline, which is found generally to increase the value that a strong control structure can add. A chummy, in-bred "Fraternal" climate gains from control in some circumstances (low interdependence or low differentiation) but not in others (high or medium interdependence and differentiation under low diversity, for example). These are compared to the previous version of I.V.A., in which the queuing discipline was FIFO and the climate deemed "Disciplined". Previously published findings on Organizational Climate are duplicated and extended with a higher level of detail. Priority queuing in an I.V.A. model is thus a useful proxy for Organizational Climate, open to future validation because its detailed predictions can be confirmed or falsified by observation.
Walid Nasrallah is currently Assistant Professor in the Engineering Management program at the American University of Beirut (AUB). He received his Ph.D. from the Construction Engineering and Management program at Stanford University in 2000 and his Master's degree at MIT in 1989. Between the two, he occupied several positions in the construction and software engineering fields. His research interests include simulation, decision theory, and the evolution of organizations in response to new technologies.
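The core modelling move, swapping the queuing discipline used to pick the next back-logged interaction request, can be sketched as follows. The requests, preference scores, and the mapping of disciplines to climates are invented stand-ins, not the paper's calibrated model.

```python
# FIFO vs. preference-ordered service of back-logged interaction
# requests. FIFO stands in for the "Disciplined" climate; a priority
# queue ordered by partner preference stands in for climates where
# actors rank their interaction partners.
import heapq
from collections import deque

# (partner, preference score) in arrival order -- all invented.
requests = [("B", 0.2), ("C", 0.5), ("A", 0.9)]

# FIFO discipline: serve in arrival order.
fifo = deque(requests)
fifo_first = fifo.popleft()[0]

# Preference discipline: serve the most-preferred partner first
# (negate scores because heapq pops the smallest entry).
heap = [(-pref, partner) for partner, pref in requests]
heapq.heapify(heap)
pref_first = heapq.heappop(heap)[1]
```

The two disciplines serve different requests first from the same backlog, which is exactly the degree of freedom the paper varies to represent organizational climate.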
19.
Ultrasonic imaging is often used to estimate blood flow velocity. Estimates are currently carried out using Doppler-based techniques, but these have a number of shortcomings, such as limited spatial resolution and the inability to estimate longitudinal flows, so alternative methods have been proposed to overcome them. Difficulties are notably encountered with high-frequency imaging systems that use swept-scan techniques. In this article, we compare four vector velocity estimation methods complementary to Doppler, focusing on 40 MHz high-frequency imaging. The goal of this study is to evaluate which method could circumvent the limitations of Doppler methods for evaluating microcirculation in vessels with diameters on the order of 1 mm. We used two region-based approaches, one decorrelation-based approach and one spatiotemporal approach. Each method was applied to seven flow sequences with various orientations and mean velocities. Four sequences were simulated with a system approach based on a 3D set of moving scatterers; three experimental sequences were obtained by injecting blood-mimicking fluid into a gelatin phantom and acquiring images with a VisualSonics Vevo 660 system. From the velocity estimates, several performance criteria, such as the normalized mean error and the normalized mean standard deviation, were defined to compare the four estimators. The results show that the region-based methods are the most accurate, exhibiting mean errors below 10% and mean standard deviations below 13%. However, the region-based approaches also carry the highest computational cost, whereas the decorrelation-based method is the fastest. The spatiotemporal approach appeared to be a trade-off between computational complexity and accuracy of the estimates: it provides estimates with errors below 10% for mean velocity, and its CPU time is approximately 17 s for an ROI of 40 × 80 pixels.
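The principle behind a region-based estimator can be sketched in one dimension: the displacement between two successive acquisitions is the lag that maximizes their cross-correlation, and velocity follows from the frame rate and sample spacing. The signals below are synthetic; this is an illustration of block matching, not the paper's implementation.

```python
# 1-D block matching: find the integer lag of frame1 relative to
# frame0 that maximizes normalized cross-correlation.
import numpy as np

def best_lag(frame0, frame1, max_lag):
    n = len(frame0)
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        a = frame0[max(0, -lag): n - max(0, lag)]
        b = frame1[max(0, lag): n + min(0, lag)]
        scores[lag] = float(a @ b /
                            (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(scores, key=scores.get)

# Synthetic speckle-like signal, shifted by 3 samples between frames.
rng = np.random.default_rng(1)
sig = rng.standard_normal(200)
frame0 = sig
frame1 = np.roll(sig, 3)          # scatterers moved by 3 samples
lag = best_lag(frame0, frame1, max_lag=10)
```

Searching over lags for every region of interest is what makes region-based methods accurate but computationally expensive, consistent with the trade-off reported above.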
20.
This paper discusses the Supply Chain Network (SCN) design problem under uncertainty and presents a critical review of the optimization models proposed in the literature. Some drawbacks and missing aspects in the literature are pointed out, motivating the development of a comprehensive SCN design methodology. Through an analysis of supply chain uncertainty sources and risk exposures, the paper reviews key random environmental factors and discusses the nature of major disruptive events threatening SCNs. It also discusses relevant strategic SCN design evaluation criteria and reviews their use in existing models. We argue for the assessment of SCN robustness as a necessary condition to ensure sustainable value creation. Several definitions of robustness, responsiveness and resilience are reviewed, and the importance of these concepts for SCN design is discussed. This paper contributes to framing the foundations of a robust SCN design methodology.