Similar Documents
20 similar documents found (search time: 31 ms)
1.
It has been shown by the present authors in a recent paper [1] that if some conservation and balance laws of continuum mechanics are represented in a 4 × 4 form, balance of linear physical momentum (i.e., stress) and balance of mass become closely linked. This seemingly novel result was reached in a completely ad-hoc fashion by treating time on the same level as the spatial coordinates, and not as a parameter, as is usually done. In order to place the above ad-hoc result on a firmer foundation, and since it is in the theory of relativity that space and time are considered on the same footing, an attempt is made to derive several tensors of continuum mechanics in a systematic manner as 4 × 4 invariant objects.

2.
In the health informatics era, modeling longitudinal data remains problematic. The issue is method: health data are highly nonlinear and dynamic, multilevel and multidimensional, comprised of multiple major/minor trends, and causally complex—making curve fitting, modeling, and prediction difficult. The current study is fourth in a series exploring a case-based density (CBD) approach for modeling complex trajectories, which has the following advantages: it can (1) convert databases into sets of cases (k-dimensional row vectors, i.e., rows containing k elements); (2) compute the trajectory (velocity vector) for each case based on (3) a set of bio-social variables called traces; (4) construct a theoretical map to explain these traces; (5) use vector quantization (i.e., k-means, topographical neural nets) to longitudinally cluster case trajectories into major/minor trends; (6) employ genetic algorithms and ordinary differential equations to create a microscopic (vector field) model (the inverse problem) of these trajectories; (7) look for complex steady-state behaviors (e.g., spiraling sources) in the microscopic model; (8) draw from thermodynamics, synergetics, and transport theory to translate the vector field (microscopic model) into the linear movement of macroscopic densities; (9) use the macroscopic model to simulate known and novel case-based scenarios (the forward problem); and (10) construct multiple accounts of the data by linking the theoretical map and k-dimensional profile with the macroscopic, microscopic, and cluster models. Given the utility of this approach, our purpose here is to organize our method (as applied to recent research) so it can be employed by others. © 2015 Wiley Periodicals, Inc. Complexity 21: 160–180, 2016
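Step (5) of the list above, clustering case trajectories into major and minor trends by vector quantization, can be illustrated with a bare-bones k-means (Lloyd's algorithm) pass over synthetic two-dimensional "velocity vectors"; this is a minimal sketch for orientation, not the authors' CBD pipeline, and the two-trend data are invented for the example:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: cluster the rows of X into k groups."""
    centers = X[[0, len(X) - 1]].astype(float)  # deterministic init (k == 2 here)
    for _ in range(iters):
        # assign each trajectory to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# synthetic case "velocity vectors": an improving trend and a worsening trend
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.1, size=(20, 2)),
               rng.normal(+1.0, 0.1, size=(20, 2))])
labels, centers = kmeans(X, k=2)
```

On data this cleanly separated the two recovered clusters coincide with the two generating trends, which is the longitudinal-clustering idea in miniature.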

3.
This paper analyzes an intensity-based approach for equity modeling. We use the Cox–Ingersoll–Ross (CIR) process to describe the intensity of the firm's default process. The intensity is purposely linked to the assets of the firm and consequently is also used to explain the equity. We examine two different approaches to link assets and intensity and derive closed-form expressions for the firm's equity under both models. We use the Kalman filter to estimate the parameters of the unobservable intensity process. We demonstrate our approach using historical equity time series data from Merrill Lynch. Copyright © 2011 John Wiley & Sons, Ltd.
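The CIR intensity referred to above follows dλ = κ(θ − λ) dt + σ√λ dW; a minimal Euler–Maruyama simulation (with truncation at zero so the intensity stays non-negative) might look like the following. All parameter values are illustrative, not estimates from the Merrill Lynch data:

```python
import numpy as np

def simulate_cir(kappa, theta, sigma, lam0, T=1.0, n=1000, seed=0):
    """Euler-Maruyama path of the CIR intensity
    d(lam) = kappa*(theta - lam)*dt + sigma*sqrt(lam)*dW,
    truncated at zero to keep the intensity non-negative."""
    rng = np.random.default_rng(seed)
    dt = T / n
    lam = np.empty(n + 1)
    lam[0] = lam0
    for i in range(n):
        drift = kappa * (theta - lam[i]) * dt
        diffusion = sigma * np.sqrt(max(lam[i], 0.0) * dt) * rng.standard_normal()
        lam[i + 1] = max(lam[i] + drift + diffusion, 0.0)
    return lam

lam_path = simulate_cir(kappa=2.0, theta=0.03, sigma=0.1, lam0=0.05)
```

The mean reversion pulls the path from its starting value toward the long-run level θ, which is the qualitative behavior the Kalman-filter estimation in the paper exploits.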

4.
Pseudoconvexity in Lorentzian doubly warped products
A Lorentzian manifold M is said to be null (resp. causally) pseudoconvex if, given any compact set K in M, there exists a compact set K' in M such that any null (resp. causal) geodesic segment with both endpoints in K lies in K'. Various implications of causal and null pseudoconvexity on the geodesic structure of a Lorentzian manifold have been studied in several recent papers by Beem and Parker, Beem and Ehrlich, and Low. We provide sufficient conditions for a Lorentzian doubly warped product manifold to be null pseudoconvex. These conditions are not necessary and provide new examples of non-globally hyperbolic spacetimes which are null pseudoconvex.

5.
The superelasticity and shape memory effect in NiTi alloys are examined on the basis of micromechanics within the energy minimization framework. We describe the behaviour of polycrystalline shape-memory alloys via the orientation distribution of the various martensite variants (domains) present in the material. Stress-strain curves are presented, and special attention is paid to the volume fraction of martensite for specific NiTi alloy (Nitinol) specimens under uniaxial tension. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

6.
Step-stress accelerated degradation testing (SSADT) has become a common approach to predicting lifetime for highly reliable products that are unlikely to fail in a reasonable time under use conditions or even elevated stress conditions. In the literature, the planning of SSADT has been widely investigated for stochastic degradation processes, such as Wiener processes and gamma processes. In this paper, we model the optimal SSADT planning problem from a Bayesian perspective and optimize test plans by determining both the stress levels and the allocation of inspections. Large-sample approximation is used to derive the asymptotic Bayesian utility functions under three planning criteria. A revisited LED lamp example is presented to illustrate our method. The comparison with optimal plans from previous studies demonstrates the necessity of considering stress levels and inspection allocations simultaneously.
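For orientation, a step-stress degradation path of the Wiener-process type mentioned above can be simulated by letting the drift depend on the current stress level while the diffusion coefficient stays constant; the exponential drift-acceleration function and every parameter below are hypothetical, not taken from the paper or its LED example:

```python
import numpy as np

def ssadt_wiener(stresses, drift_fn, sigma, t_per_step, dt=0.1, seed=0):
    """Degradation path under step stress: the drift depends on the
    current stress level; the diffusion coefficient is held constant."""
    rng = np.random.default_rng(seed)
    y, path = 0.0, [0.0]
    for s in stresses:
        for _ in range(int(t_per_step / dt)):
            y += drift_fn(s) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            path.append(y)
    return np.array(path)

# hypothetical exponential acceleration of the drift in the stress level
deg_path = ssadt_wiener(stresses=[1.0, 2.0, 3.0],
                        drift_fn=lambda s: 0.05 * np.exp(0.5 * s),
                        sigma=0.02, t_per_step=10.0)
```

Each stress step visibly steepens the degradation trend, which is exactly what an SSADT plan trades off when choosing stress levels and inspection times.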

7.
Consumer markets have been studied in great depth, and many techniques have been used to represent them. These have included regression-based models, logit models, and theoretical market-level models, such as the NBD-Dirichlet approach. Although many important contributions and insights have resulted from studies that relied on these models, there is still a need for a model that can more holistically represent the interdependencies of the decisions made by consumers, retailers, and manufacturers. This need is particularly critical when a model must be used repeatedly over time to support decisions in an industrial setting. Although some existing methods can, in principle, represent such complex interdependencies, their capabilities might be outstripped in industrial applications because of the detail this type of modeling requires. However, a complementary method, agent-based modeling, shows promise for addressing these issues. Agent-based models use business-driven rules for individuals (e.g., individual consumer rules for buying items, individual retailer rules for stocking items, or individual firm rules for advertising items) to determine holistic, system-level outcomes (e.g., to determine whether brand X's market share is increasing). We applied agent-based modeling to develop a multi-scale consumer market model. We then conducted calibration, verification, and validation tests of this model. The model was successfully applied by Procter & Gamble to several challenging business problems. In these situations, it directly influenced managerial decision making and produced substantial cost savings. © 2010 Wiley Periodicals, Inc. Complexity, 2010
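The individual-rules-to-system-outcome idea can be sketched with a toy two-brand market in which every consumer follows one buying rule; everything here (brands, loyalty parameter, prices) is invented for illustration and does not reflect the actual Procter & Gamble model:

```python
import random

class Consumer:
    """Toy rule: buy the cheaper brand, but stick with the current
    brand out of habit with probability `loyalty`."""
    def __init__(self, loyalty=0.3):
        self.brand = random.choice(["X", "Y"])
        self.loyalty = loyalty

    def shop(self, prices):
        if random.random() < self.loyalty:
            return self.brand                      # habit wins this period
        self.brand = min(prices, key=prices.get)   # otherwise price wins
        return self.brand

def market_share(consumers, prices, periods=50):
    """System-level outcome (brand X's share) emerging from individual rules."""
    share = 0.0
    for _ in range(periods):
        buys = [c.shop(prices) for c in consumers]
        share = buys.count("X") / len(buys)
    return share

random.seed(0)
pop = [Consumer() for _ in range(500)]
share_x = market_share(pop, {"X": 1.0, "Y": 1.2})
```

No equation for market share is written anywhere; the macro-level result (the cheaper brand's share drifting upward) emerges from the micro-level rule, which is the core mechanism agent-based market models exploit.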

8.
This paper gives an implicit characterization of the class of functions computable in polynomial space by deterministic Turing machines – PSPACE. It gives an inductive characterization of PSPACE with no ad-hoc initial functions and with only one recursion scheme. The main novelty of this characterization is the use of pointers (also called path information) to reach PSPACE. The presence of the pointers in the recursion on notation scheme is the main difference between this characterization of PSPACE and the well-known Bellantoni-Cook characterization of the polytime functions – PTIME. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
10.
Marine protected areas (MPAs) are gaining momentum as tools within fisheries management. Although many studies have been conducted on their use and potential, only a few authors have considered their use in the High Seas. In this paper, we investigate the effects of fish-growth-enhancing MPAs on the formation of regional fisheries management organisations (RFMOs) for highly migratory fish stocks. We argue that, in the absence of enforcement, MPAs constitute a weakest-link public good, which can only be realized if everyone agrees. We combine this notion with a game-theoretic model of RFMO formation to derive potentially stable RFMOs with and without MPAs. We find that MPAs generally increase the parameter range over which RFMOs are stable, and that they increase stability in a number of cases compared to the case without MPAs. They do not necessarily induce a fully cooperative solution among all fishing nations. In summary, the results of this paper suggest a positive role for MPAs in the High Seas.
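The stability notion used in this style of coalition-formation analysis (no member gains by leaving, no outsider gains by joining) can be sketched as follows; the payoff numbers below are purely illustrative and are not the paper's model:

```python
# Hypothetical symmetric payoffs: the value to an RFMO member and to an
# outsider when the coalition has k members (numbers invented for illustration).
MEMBER = {2: 2.0, 3: 3.0, 4: 3.1}
OUTSIDER = {1: 1.5, 2: 2.5, 3: 3.2}

def stable_sizes(n=4):
    """A size-k RFMO is stable if it is internally stable (no member gains
    by leaving and free-riding on a size k-1 coalition) and externally
    stable (no outsider gains by joining to form a size k+1 coalition)."""
    stable = []
    for k in range(2, n + 1):
        internal = MEMBER[k] >= OUTSIDER[k - 1]
        external = (k == n) or OUTSIDER[k] >= MEMBER[k + 1]
        if internal and external:
            stable.append(k)
    return stable

stable = stable_sizes()
```

With these invented numbers only a three-member RFMO survives both deviation tests, illustrating why full cooperation among all nations need not be stable even when cooperation pays.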

11.
We derive a systematic and recursive approach to local conservation laws and the Hamiltonian formalism for the Ablowitz–Ladik (AL) hierarchy. Our methods rely on a recursive approach to the AL hierarchy using Laurent polynomials and on asymptotic expansions of the Green's function of the AL Lax operator, a five-diagonal finite difference operator.

12.
We analyze two collocation schemes for the Helmholtz equation with depth-dependent sonic wave velocity, modeling time-harmonic acoustic wave propagation in a three-dimensional inhomogeneous ocean of finite height. Both discretization schemes are derived from a periodized version of the Lippmann-Schwinger integral equation that equivalently describes the sound wave. The eigenfunctions of the corresponding periodized integral operator consist of trigonometric polynomials in the horizontal variables and eigenfunctions of a Sturm-Liouville operator linked to the background profile of the sonic wave velocity in the vertical variable. Applying an interpolation projection onto a space spanned by finitely many of these eigenfunctions to either the unknown periodized wave field or the integral operator yields two different collocation schemes. A convergence estimate of Sloan [J. Approx. Theory, 39:97–117, 1983] on non-polynomial interpolation allows us to show convergence of both schemes, together with algebraic convergence rates depending on the smoothness of the inhomogeneity and the source. Copyright © 2016 John Wiley & Sons, Ltd.

13.
Asset-Liability Management Under the Safety-First Principle
Under the safety-first principle (Roy in Econometrica 20:431–449, 1952), one investment goal in asset-liability (AL) management is to minimize an upper bound of the ruin probability which measures the likelihood of the final surplus being less than a given target level. We derive solutions to the safety-first AL management problem under both continuous-time and multiperiod-time settings via investigating the relationship between the safety-first AL management problem and the mean-variance AL management problem, and offer geometric interpretations. We classify investors under the safety-first principle as safety-first greedy and nongreedy investors and discuss corresponding optimal strategies for them.
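Roy's safety-first idea bounds the ruin probability with a Chebyshev-type inequality; a minimal sketch using the one-sided (Cantelli) form of the bound, with invented portfolio numbers:

```python
def ruin_bound(mu, sigma, d):
    """One-sided Chebyshev (Cantelli) upper bound on P(final surplus < d),
    valid whenever the expected surplus mu exceeds the target level d:
    P(S < d) <= sigma^2 / (sigma^2 + (mu - d)^2)."""
    if mu <= d:
        raise ValueError("bound requires mu > d")
    t = mu - d
    return sigma ** 2 / (sigma ** 2 + t ** 2)

# invented numbers: expected surplus 8%, standard deviation 10%, target 0
b = ruin_bound(mu=0.08, sigma=0.10, d=0.0)
```

Minimizing such a bound over portfolio choices is what ties the safety-first problem to the mean-variance problem: for a fixed target d, the bound decreases as (mu − d)/sigma increases.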

14.
15.
16.
Several important DEA/AR concepts were applied here to banking for the first time. This application includes classification, sensitivity, uniqueness, linked cones (LCs), and profit ratios. Notably, large bank behavior seems to be explained better by profit ratios than by relative efficiency. Measures of DEA efficiency, AR efficiency, and LC profit ratios were made for a bank panel of the U.S.'s 100 largest banks in asset size from 1986 to 1991. High levels of inefficiency were found, as in previous studies. Classification of the DEA efficiency measures identified the inefficient DMUs with some positive primal slacks. Sensitivity analysis of the DEA efficiency measures showed that the extreme-efficient classification was generally relatively insensitive to errors in the data. The ARs eliminated (i) 44% to 60% of the DEA-extreme-efficient DMUs and (ii) all of the banks with unprofitable actual profit ratios. Some statistical analyses highlight the superiority of the LC profit ratios, relative to the AR efficiency measures.

17.
Next Generation Science Standards (NGSS) science and engineering practices are ways of eliciting reasoning and applying foundational ideas in science. As research has revealed barriers to states and schools adopting the NGSS, this mixed-methods study attempts to identify characteristics of professional development (PD) that support NGSS adoption and improve teacher readiness. In-service science teachers from across the nation were targeted for the survey, and responses represented 38 states. Research questions included: How motivated and prepared are in-service 7–12 teachers to use NGSS science and engineering practices? What is the profile of 7–12 in-service teachers who are motivated and feel prepared to use NGSS science and engineering practices? The study revealed that teachers identified engineering most frequently as a PD need to improve their NGSS readiness. High school teachers rated themselves as more prepared than middle school teachers, and all teachers who use Modeling Instruction expressed higher NGSS readiness. These findings and their specificity contribute to current knowledge and can be utilized by districts in selecting PD to support teachers in implementing the NGSS successfully.

18.
Stress fields in the vicinity of free edges and corners of composite laminates exhibit singular characteristics and may lead to premature interlaminar failure modes such as delamination fracture. It is of practical interest to investigate closely the nature of the arising free-edge and free-corner stress singularities, i.e., the singularity orders and modes. The present investigations are performed using the Boundary Finite Element Method (BFEM), which in essence is a fundamental-solution-less boundary element method employing standard finite element formulations. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.
The derivation of the space averaged Navier–Stokes equations for the large eddy simulation (LES) of turbulent incompressible flows introduces two groups of terms which do not depend only on the space averaged flow field variables: the divergence of the Reynolds stress tensor and commutation errors. Whereas the former is studied intensively in the literature, the latter terms are usually neglected. This note studies the asymptotic behaviour of these terms for the turbulent channel flow at a wall in the case that the commutation errors arise from the application of a non-uniform box filter. To perform analytical calculations, the unknown flow field is modelled by a wall law (Reichardt law and 1/αth power law) for the mean velocity profile and highly oscillating functions model the turbulent fluctuations. The asymptotics show that near the wall, the commutation errors are at least as important as the divergence of the Reynolds stress tensor. Copyright © 2006 John Wiley & Sons, Ltd.
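The Reichardt law mentioned above is an analytic wall law for the mean velocity profile u+(y+) that blends the viscous sublayer smoothly into the logarithmic layer; with a commonly used parameterization (κ ≈ 0.41, outer coefficient 7.8) it can be evaluated directly:

```python
import math

def reichardt(y_plus, kappa=0.41, B=7.8):
    """Reichardt's wall law for the mean velocity u+ as a function of y+:
    logarithmic part plus a correction that enforces u+ ~ y+ at the wall."""
    return (math.log(1.0 + kappa * y_plus) / kappa
            + B * (1.0 - math.exp(-y_plus / 11.0)
                   - (y_plus / 11.0) * math.exp(-y_plus / 3.0)))

# sample points in the viscous sublayer, buffer layer, and log layer
profile = [reichardt(y) for y in (1.0, 5.0, 30.0, 100.0)]
```

Near the wall the formula reduces to u+ ≈ y+, which is a quick sanity check on any implementation of it.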

20.
We consider the problem of large-data scattering for the quintic nonlinear Schrödinger equation on R × T^2. This equation is critical both at the level of energy and mass. Most notably, we exhibit a new type of profile (a "large-scale profile") that controls the asymptotic behavior of the solutions. © 2014 Wiley Periodicals, Inc.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)