Similar Literature
20 similar documents found (search time: 202 ms)
1.
Growing information and knowledge on gene regulatory networks, which are typical hybrid systems, has led to significant interest in modeling those networks. An important direction of gene network modeling is the study of abstract network models to understand the behavior of a class of systems. Boolean Networks have emerged as an important model class in this direction. Limitations of traditional Boolean Networks have led researchers to propose several generalizations. In this work, one such class, the Continuous Time Boolean Networks (CTBNs), is studied. CTBNs are constructed by allowing the Boolean variables to evolve in continuous time and involve a biologically motivated refractory period. In particular, we analyze the basic circuits and subsystems of the class of CTBNs. We demonstrate the existence of various qualitative dynamic behaviors, including stable, multistable, neutrally stable, quasiperiodic and chaotic behavior. We show that these models are capable of demonstrating highly adjustable features such as the maintenance of continuous protein concentrations. Finally, we discuss the relation between qualitative dynamic features and information handling.
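As a rough illustration only (a discrete-time toy, not the paper's continuous-time CTBN formulation; the two-gene circuit, update rules and refractory length below are all made up), a Boolean feedback circuit with a refractory period during which a freshly switched gene cannot switch again can be sketched as:

```python
# Toy Boolean circuit with a refractory period (illustrative only):
# gene A is activated by B, gene B is repressed by A.  A gene that has
# just changed state is "locked" for `refractory` steps.
def simulate(steps=20, refractory=2):
    a, b = 1, 0            # Boolean states of genes A and B
    lock_a = lock_b = 0    # remaining refractory steps for each gene
    history = []
    for _ in range(steps):
        na = b if lock_a == 0 else a          # A follows B unless locked
        nb = (1 - a) if lock_b == 0 else b    # B is NOT A unless locked
        lock_a = refractory if na != a else max(0, lock_a - 1)
        lock_b = refractory if nb != b else max(0, lock_b - 1)
        a, b = na, nb
        history.append((a, b))
    return history

traj = simulate()
```

Even this crude discrete version sustains non-trivial dynamics (the trajectory visits several states rather than freezing at a fixed point), hinting at the richer behaviors the continuous-time model exhibits.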

2.
In biology, it is common practice to describe the photosynthetic activity of algae (P) with respect to light (I) by means of static models in the form of single-valued functions P = f(I). This implies that the photosynthetic response to any light variation is instantaneous. However, experimental results have repeatedly provided evidence that the variations of photosynthetic activity arising from the changes in the underwater light climate are subject to time delays and cannot, usually, be characterised by means of a single-valued function, making the validity of these static models questionable. In this article, we propose a dynamic model for photosynthesis that is able to account for the experimental results.
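The abstract does not state the authors' model, but the generic way to turn a static response P = f(I) into a delayed one is a first-order relaxation dP/dt = (f(I(t)) − P)/τ; the sketch below uses that generic construction with a hypothetical tanh-shaped light-response curve and made-up parameters:

```python
import math

def f_static(I, P_max=10.0, alpha=0.8):
    # Hypothetical static light-response curve P = f(I) (saturating form).
    return P_max * math.tanh(alpha * I / P_max)

def dynamic_response(light, tau=5.0, dt=0.1, P0=0.0):
    # First-order lag: dP/dt = (f(I(t)) - P) / tau.  With a lag, P at a
    # given I depends on the light history, so P vs. I is no longer a
    # single-valued function.
    P, out = P0, []
    for I in light:
        P += dt * (f_static(I) - P) / tau
        out.append(P)
    return out

# Step change in light: P approaches the static value only gradually.
light = [0.0] * 50 + [100.0] * 200
P = dynamic_response(light)
```

After the step, the dynamic P converges toward f(100) with time constant τ rather than jumping instantaneously, which is exactly the qualitative effect the static models cannot reproduce.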

3.
We consider single-machine stochastic scheduling models with due dates as decisions. In addition to showing how to satisfy given service-level requirements, we examine variations of a model in which the tightness of due-dates conflicts with the desire to minimize tardiness. We show that a general form of the trade-off includes the stochastic E/T model and gives rise to a challenging scheduling problem. We present heuristic solution methods based on static and dynamic sorting procedures. Our computational evidence identifies a static heuristic that routinely produces good solutions and a dynamic rule that is nearly always optimal. The dynamic sorting procedure is also asymptotically optimal, meaning that it can be recommended for problems of any size.
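To make the service-level idea concrete (this is a generic sketch, not the paper's heuristics: the jobs are made-up, the static rule used is shortest-expected-processing-time, and independent normal processing times are assumed), one can sequence jobs and then place each due date at the service-level quantile of the job's completion time:

```python
from statistics import NormalDist

jobs = [(5.0, 1.0), (2.0, 0.5), (8.0, 2.0)]  # (mean, std) processing times
z = NormalDist().inv_cdf(0.95)               # 95% service-level quantile

seq = sorted(jobs)                           # static sort: shortest expected first
mean_c = var_c = 0.0
due_dates = []
for m, s in seq:
    mean_c += m                              # E[completion time] accumulates
    var_c += s * s                           # variances add (independence)
    due_dates.append(mean_c + z * var_c ** 0.5)
```

Each due date then satisfies P(C_j <= d_j) = 0.95 under the stated assumptions, while the sort keeps the due dates as tight as the rule allows.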

4.
High-dimensional time series may well be the most common type of dataset in the so-called “big data” revolution, and have entered current practice in many areas, including meteorology, genomics, chemometrics, connectomics, complex physics simulations, biological and environmental research, finance and econometrics. The analysis of such datasets poses significant challenges, both from a statistical and from a numerical point of view. The most successful procedures so far have been based on dimension reduction techniques and, more particularly, on high-dimensional factor models. Those models have been developed, essentially, within time series econometrics, and deserve to be better known in other areas. In this paper, we provide an original time-domain presentation of the methodological foundations of those models (dynamic factor models usually are described via a spectral approach), contrasting such concepts as commonality and idiosyncrasy, factors and common shocks, dynamic and static principal components. That time-domain approach emphasizes the fact that, contrary to the static factor models favored by practitioners, the so-called general dynamic factor model essentially does not impose any constraints on the data-generating process, but follows from a general representation result.
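For readers unfamiliar with the static baseline the paper contrasts against, a minimal sketch of static principal-component factor extraction on synthetic data follows (the data, the choice of r, and the SVD route are illustrative, not the paper's estimator):

```python
# Static factor model sketch: X (T x n) ~ F @ L.T + e with r common factors,
# estimated by principal components via the SVD of the centered data.
import numpy as np

rng = np.random.default_rng(0)
T, n, r = 200, 10, 2
F_true = rng.standard_normal((T, r))
L_true = rng.standard_normal((n, r))
X = F_true @ L_true.T + 0.1 * rng.standard_normal((T, n))

Xc = X - X.mean(axis=0)                  # center each series
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :r] * s[:r]                 # estimated static factors
common = F_hat @ Vt[:r]                  # common component
idiosyncratic = Xc - common              # idiosyncratic residual
```

The decomposition into `common` and `idiosyncratic` parts mirrors the commonality/idiosyncrasy distinction discussed in the paper, in its simplest static form.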

5.
The problem of parameter sensitivity is not well defined. This paper aims to make it well defined. This is done by requiring the modeller to provide a performance indicator l(X,t,β), which is a measure of how the behaviour of the system is to be judged. This forces the modeller to be precise about what he/she regards as acceptable behaviour. Mathematically, we define behaviour to be acceptable at β if l > 0 for that β, and we define the system as sensitive if the set of acceptable values of β is small. The analysis required is termed criteria sensitivity analysis. This method is applicable to many different kinds of models. The basic ideas of criteria sensitivity analysis are presented here. A static model is used to demonstrate the major points. The power of this new approach is illustrated by an application to a dynamic model.

6.
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
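For orientation, the standard truth clause for an evidence modality over neighborhood models has the following shape (notation ours, following the usual neighborhood-semantics presentation; the paper's exact primitives may differ):

```latex
M, w \models \Box\varphi
\quad\Longleftrightarrow\quad
\exists N \in E(w) \ \text{such that} \ \forall v \in N : \ M, v \models \varphi
```

Read: the agent has evidence for φ at world w exactly when some evidence set N available at w lies entirely inside the truth set of φ.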

7.
Inspired by air-traffic control and other applications where moving objects have to be labeled, we consider the following (static) point-labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.

8.
In this paper, we first consider the model of exponential population growth. We then assume that the growth rate at time t is not completely definite but depends on some random environmental effects; for this case, the stochastic exponential population growth model is introduced. We further assume that the growth rate at time t depends on many different random environmental effects; for this case, the generalized stochastic exponential population growth model is introduced. The expectations and variances of the solutions are obtained. As a case study, we consider the population growth of Iran, obtain the output of the models for these data, and predict the population size in each year.
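The stochastic exponential growth model is commonly written as the SDE dN = rN dt + σN dW. A minimal Euler-Maruyama simulation of it is sketched below (the parameters are made up for illustration, not the Iran case-study figures); the sample mean of the simulated paths should track the known expectation E[N(t)] = N₀e^{rt}:

```python
import math, random

def simulate_paths(N0=1000.0, r=0.02, sigma=0.01,
                   T=10.0, steps=100, npaths=2000, seed=1):
    # Euler-Maruyama discretization of dN = r*N*dt + sigma*N*dW.
    random.seed(seed)
    dt = T / steps
    finals = []
    for _ in range(npaths):
        N = N0
        for _ in range(steps):
            N += r * N * dt + sigma * N * random.gauss(0.0, 1.0) * math.sqrt(dt)
        finals.append(N)
    return finals

finals = simulate_paths()
mean_final = sum(finals) / len(finals)
expected = 1000.0 * math.exp(0.02 * 10.0)   # E[N(T)] = N0 * exp(r*T)
```

Averaging over many paths recovers the deterministic exponential trend, while individual paths fluctuate around it with variance governed by σ.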

9.
Drop tolerance criteria play a central role in Sparse Approximate Inverse preconditioning. Such criteria have received, however, little attention and have been treated heuristically in the following manner: if the size of an entry is below some empirically small positive quantity, then it is set to zero. The meaning of “small” is vague and has not been considered rigorously. It has not been clear how drop tolerances affect the quality and effectiveness of a preconditioner M. In this paper, we focus on the adaptive Power Sparse Approximate Inverse algorithm and establish a mathematical theory on robust selection criteria for drop tolerances. Using the theory, we derive an adaptive dropping criterion that is used to drop entries of small magnitude dynamically during the setup process of M. The proposed criterion enables us to make M both as sparse as possible and of comparable quality to the potentially denser matrix obtained without dropping. As a byproduct, the theory applies to static F-norm minimization based preconditioning procedures, and a similar dropping criterion is given that can be used to sparsify a matrix after it has been computed by a static sparse approximate inverse procedure. In contrast to the adaptive procedure, dropping in the static procedure does not reduce the setup time of the matrix but makes the application of the sparser M for Krylov iterations cheaper. The numerical experiments reported confirm the theory and illustrate the robustness and effectiveness of the dropping criteria.
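The heuristic dropping the paper criticizes is easy to state in code. The sketch below applies post-hoc dropping with a naive threshold (a fixed fraction of the largest entry, purely as a placeholder; the paper's contribution is precisely a rigorous replacement for such an ad-hoc rule):

```python
# Naive post-sparsification: zero out entries of M whose magnitude falls
# below rel_tol * max|M|.  The threshold rule is a heuristic placeholder,
# not the paper's robust drop-tolerance criterion.
import numpy as np

def sparsify(M, rel_tol=1e-2):
    M = np.asarray(M, dtype=float)
    thresh = rel_tol * np.abs(M).max()
    return np.where(np.abs(M) >= thresh, M, 0.0)

M = np.array([[1.0,  1e-4, 0.3],
              [2e-5, 0.9, -1e-3],
              [0.2,  3e-4, 1.1]])
S = sparsify(M)
```

After dropping, applying the sparser S inside a Krylov iteration costs fewer operations per matrix-vector product, which is the payoff the abstract describes for the static procedure.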

10.
To safeguard analytical tractability and the concavity of objective functions, the vast majority of models belonging to oligopoly theory relies on the restrictive assumption of linear demand functions. Here we lay out the analytical solution of a differential Cournot game with hyperbolic inverse demand, where firms accumulate capacity over time à la Ramsey. The subgame perfect equilibrium is characterized via the Hamilton–Jacobi–Bellman equations solved in closed form both on infinite and on finite horizon setups. To illustrate the applicability of our model and its implications, we analyze the feasibility of horizontal mergers in both static and dynamic settings, and find appropriate conditions for their profitability under both circumstances. Static profitability of a merger implies dynamic profitability of the same merger. It appears that such a demand structure makes mergers more likely to occur than they would be on the basis of the standard linear inverse demand.

11.
In recent years we have witnessed remarkable progress in providing efficient algorithmic solutions to the problem of computing best journeys (or routes) in schedule-based public transportation systems. We now have models to represent timetables that allow us to answer queries for optimal journeys in a few milliseconds, even at very large scale. Such models can be classified into two types: those representing the timetable as an array, and those representing it as a graph. Array-based models have been shown to be very effective in terms of query time, while graph-based ones usually answer queries by computing shortest paths, and hence are suitable to be combined with the speed-up techniques developed for road networks.
In this paper, we study the behavior of graph-based models in the prominent case of dynamic scenarios, i.e., when delays may occur to the original timetable. In particular, we make the following contributions. First, we consider the graph-based reduced time-expanded model and give a simplified and optimized routine for handling delays, along with a re-engineered and fine-tuned query algorithm. Second, we propose a new graph-based model, namely the dynamic timetable model, natively tailored to efficiently incorporate dynamic updates, along with a query algorithm and a routine for handling delays. Third, we show how to adapt the ALT algorithm to such graph-based models. We have chosen this speed-up technique since it supports dynamic changes, and a careful implementation of it can significantly boost its performance. Finally, we provide an experimental study to assess the effectiveness of all proposed models and algorithms, and to compare them with the array-based state-of-the-art solution for the dynamic case.
We evaluate both new and existing approaches by implementing and testing them on real-world timetables subject to synthetic delays. Our experimental results show that: (i) the dynamic timetable model is the best model for handling delays; (ii) graph-based models are competitive with array-based models with respect to query time in the dynamic case; (iii) the dynamic timetable model compares favorably with both the original and the reduced time-expanded model regarding space; (iv) combining the graph-based models with speed-up techniques designed for road networks, such as ALT, is a very promising approach.

12.
Discrete choice models are widely used for understanding how customers choose between a variety of substitutable goods. We investigate the relationship between two well-studied choice models, the Nested Logit (NL) model and the Markov choice model. Both models generalize the classic Multinomial Logit model and admit tractable algorithms for assortment optimization. Previous evidence indicates that the NL model may be well approximated by, or be a special case of, the Markov model. We establish that the Nested Logit model, in general, cannot be represented by a Markov model. Further, we show that there exists a family of instances of the NL model where the choice probabilities cannot be approximated to within a constant error by any Markov choice model.
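The common ancestor of both models is the classic Multinomial Logit, in which choice probabilities are proportional to the exponentials of the alternatives' utilities. A minimal sketch with made-up utilities:

```python
import math

def mnl_probs(utilities):
    # Multinomial Logit: P(i) = exp(v_i) / sum_j exp(v_j).
    w = [math.exp(u) for u in utilities]
    total = sum(w)
    return [x / total for x in w]

p = mnl_probs([1.0, 2.0, 0.5])  # three alternatives, toy utilities
```

Both the Nested Logit and the Markov choice model enrich this formula (with a nest structure and with substitution transitions, respectively), and the paper's point is that these two enrichments are genuinely different.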

13.
The continuous dynamic network loading problem (CDNLP) aims to compute link travel times and path travel times on a congested network, given time-dependent path flow rates over a given time period. A crucial element of the CDNLP is the link performance model. Two main modeling frameworks have been used in link loading models: the so-called whole-link travel time (WTT) models and the kinematic wave model of Lighthill–Whitham–Richards (LWR) for traffic flow.
In this paper, we reformulate a well-known whole-link model in which the link travel time, for traffic entering at time t, is a function of the number of vehicles on the link. This formulation does not require the FIFO (first in, first out) condition to be satisfied. An extension of the basic WTT model is proposed in order to take explicitly into account the maximum number of vehicles that the link can accommodate (occupancy constraint). A solution scheme for the proposed WTT model is derived.
Several numerical examples are given to illustrate that the FIFO condition is not respected by the WTT model and to compare the travel time predictions produced by the LWR and WTT models.
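The core whole-link idea, travel time as a function of current link occupancy, can be sketched crudely in discrete time (the linear performance function, the constant service rate, and all parameters below are placeholders, not the paper's formulation):

```python
def load_link(inflow, service_rate=3.0, free_time=2.0, slope=0.05):
    # tau(t) = g(x(t)): travel time for traffic entering at step t is a
    # function of x(t), the number of vehicles currently on the link.
    on_link = 0.0
    travel_times = []
    for q in inflow:
        on_link = max(0.0, on_link + q - service_rate)  # crude exit process
        travel_times.append(free_time + slope * on_link)
    return travel_times

tt = load_link([5, 5, 5, 0, 0, 0])
```

Note that travel time peaks with occupancy and falls as the link empties; vehicles entering during the peak can be assigned a longer travel time than later entrants, which is how such models can violate FIFO.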

14.
Dynamic life tables arise as an alternative to the standard (static) life table, with the aim of incorporating the evolution of mortality over time. The parametric model introduced by Lee and Carter in 1992 for projecting mortality rates in the US is one of the most outstanding and has been used a great deal since then. Different versions of the model have been developed, but all of them, together with other parametric models, treat the observed mortality rates as independent observations. This hypothesis is difficult to justify when looking at the graph of the residuals obtained with any of these methods.
Methods of adjustment and prediction based on geostatistical techniques, which exploit the dependence structure existing among the residuals, are an alternative to classical methods. Dynamic life tables can be considered as two-way tables on a grid equally spaced in either the vertical (age) or horizontal (year) direction, and the data can be decomposed into a deterministic large-scale variation (trend) plus a stochastic small-scale variation (residuals).
Our contribution consists of applying geostatistical techniques to estimate the dependence structure of the mortality data and for prediction purposes, also including the influence of the year of birth (cohort). We compare the performance of this new approach with different versions of the Lee-Carter model. Additionally, we obtain bootstrap confidence intervals for the predicted q_{x,t} resulting from applying both methodologies, and we study their influence on the predictions of e_{65,t} and a_{65,t}.
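The Lee-Carter benchmark decomposes log mortality rates as log m(x,t) = a_x + b_x·k_t, classically fitted by centering and a rank-1 SVD. A self-contained sketch on synthetic data (the age profile, mortality index and noise level are invented for illustration):

```python
# Lee-Carter sketch: log m(x,t) = a_x + b_x * k_t, fitted via SVD.
import numpy as np

rng = np.random.default_rng(0)
ages, years = 8, 30
a = np.linspace(-6.0, -1.0, ages)        # synthetic age profile a_x
b = np.full(ages, 1.0 / ages)            # age sensitivities b_x (sum to 1)
k = np.linspace(5.0, -5.0, years)        # declining mortality index k_t
logm = a[:, None] + b[:, None] * k[None, :] \
       + 0.01 * rng.standard_normal((ages, years))

a_hat = logm.mean(axis=1)                # a_x: row means over years
U, s, Vt = np.linalg.svd(logm - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()          # normalization: sum(b_x) = 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()     # so b_hat (outer) k_hat = rank-1 term
recon = a_hat[:, None] + b_hat[:, None] * k_hat[None, :]
```

The residuals `logm - recon` are exactly what the paper examines: under the independence assumption they should be structureless, whereas real mortality residuals show spatial dependence over the age-year grid.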

15.
The main purpose of the present paper is to compare two different kinds of approaches in modeling the deck of a suspension bridge: in the first approach we look at the deck as a rectangular plate, and in the second one we look at the deck as a beam for vertical deflections and as a rod for torsional deformations. Throughout this paper we refer to the model corresponding to the second approach as the beam-rod model. In our discussion, we observe that the beam-rod model contains a larger number of elastic parameters compared with the isotropic plate model. For this reason, the beam-rod model is supposed to be more appropriate for describing the behavior of the deck of a real suspension bridge. A possible strategy to make the plate model more effective is to relax the isotropy condition to the more general condition of orthotropy, which increases the number of elastic parameters. In this new setting, a comparison between the two approaches becomes possible.
Basic results are proved for the suggested problem, from existence and uniqueness of solutions to spectral properties. We suggest realistic values for the elastic parameters, thus obtaining with both approaches similar responses in the static and dynamic behavior of the deck. This can be considered a preliminary article, since much work still has to be done with the perspective of formulating models for a complete suspension bridge that take into account not only the deck but also the action on it of cables and hangers. With this perspective, a section is devoted to possible future developments.

16.
We consider a general family of regularized Navier–Stokes and Magnetohydrodynamics (MHD) models on n-dimensional smooth compact Riemannian manifolds with or without boundary, with n≥2. This family captures most of the specific regularized models that have been proposed and analyzed in the literature, including the Navier–Stokes equations, the Navier–Stokes-α model, the Leray-α model, the modified Leray-α model, the simplified Bardina model, the Navier–Stokes–Voight model, the Navier–Stokes-α-like models, and certain MHD models, in addition to representing a larger 3-parameter family of models not previously analyzed. This family of models has become particularly important in the development of mathematical and computational models of turbulence. We give a unified analysis of the entire three-parameter family of models using only abstract mapping properties of the principal dissipation and smoothing operators, and then use assumptions about the specific form of the parameterizations, leading to specific models, only when necessary to obtain the sharpest results. We first establish existence and regularity results, and under appropriate assumptions show uniqueness and stability. We then establish some results for singular perturbations, which as special cases include the inviscid limit of viscous models and the α→0 limit in α models. Next, we show existence of a global attractor for the general model, and then give estimates for the dimension of the global attractor and the number of degrees of freedom in terms of a generalized Grashof number. We then establish some results on determining operators for the two distinct subfamilies of dissipative and non-dissipative models. We finish by deriving some new length-scale estimates in terms of the Reynolds number, which allows for recasting the Grashof number-based results into analogous statements involving the Reynolds number. 
In addition to recovering most of the existing results on existence, regularity, uniqueness, stability, attractor existence, and dimension, and determining operators for the well-known specific members of this family of regularized Navier–Stokes and MHD models, the framework we develop also makes possible a number of new results for all models in the general family, including some new results for several of the well-studied models. Analyzing the more abstract generalized model allows for a simpler analysis that helps bring out the core common structure of the various regularized Navier–Stokes and magnetohydrodynamics models, and also helps clarify the common features of many of the existing and new results. To make the paper reasonably self-contained, we include supporting material on spaces involving time, Sobolev spaces, and Grönwall-type inequalities.
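For concreteness, one well-known member of the regularized family listed above is the Leray-α model, which (written here in our notation, for the incompressible case) reads:

```latex
\partial_t v + (u \cdot \nabla)\, v - \nu \Delta v + \nabla p = f,
\qquad \nabla \cdot u = 0,
\qquad v = (I - \alpha^2 \Delta)\, u,
```

so the advecting velocity u is a smoothed version of v, and the Navier–Stokes equations are recovered formally in the α → 0 limit studied in the paper's singular-perturbation results.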

17.
For many years, the longevity risk of individuals has been underestimated, as survival probabilities have improved across the developed world. The uncertainty and volatility of future longevity has posed significant risk issues for both individuals and product providers of annuities and pensions. This paper investigates the effectiveness of static hedging strategies for longevity risk management using longevity bonds and derivatives (q-forwards) for the retail products: life annuity, deferred life annuity, indexed life annuity, and variable annuity with guaranteed lifetime benefits. Improved market and mortality models are developed for the underlying risks in annuities. The market model is a regime-switching vector error correction model for GDP, inflation, interest rates, and share prices. The mortality model is a discrete-time logit model for mortality rates with age dependence. Models were estimated using Australian data. The basis risk between annuitant portfolios and population mortality was based on UK experience. Results show that static hedging using q-forwards or longevity bonds reduces the longevity risk substantially for life annuities, but significantly less for deferred annuities. For inflation-indexed annuities, static hedging of longevity is less effective because of the inflation risk. Variable annuities provide limited longevity protection compared to life annuities and indexed annuities, and as a result longevity risk hedging adds little value for these products.

18.
We investigate the problems of scheduling n weighted jobs on m parallel machines with availability constraints. We consider two different models of availability constraints: the preventive model, in which the unavailability is due to preventive machine maintenance, and the fixed job model, in which the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant, and that the jobs are non-resumable.
For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even if w_i = p_i for all jobs. In this paper, we assume that there is one machine that is permanently available and that the processing time of each job is equal to its weight for all jobs. We develop the first polynomial-time approximation scheme (PTAS) when there is a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently; and (2) to move small jobs around without increasing the objective value too much, and thus derive our PTAS. Next, we show that there is no fully polynomial-time approximation scheme (FPTAS) in this case unless P=NP.
For the fixed job model, it has been shown that if job weights are arbitrary then there is no constant approximation for a single machine with 2 fixed jobs or for two machines with one fixed job on each machine, unless P=NP. In this paper, we assume that the weight of a job is the same as its processing time for all jobs. We show that the PTAS for the preventive model can be extended to solve this problem when the number of fixed jobs and the number of machines are both constants.

19.
We illustrate a physical situation in which topological symmetry, its breakdown, the space-time uncertainty principle, and background independence may play an important role in constructing and understanding matrix models. First, we show that the space-time uncertainty principle of strings may be understood as a manifestation of the breakdown of topological symmetry in the large N matrix model. Next, we construct a new type of matrix model which is the matrix-model analog of the topological Chern-Simons and BF theories. It is of interest that these topological matrix models are not only completely independent of the background metric but also have nontrivial “p-brane” solutions as well as commuting classical space-time as classical solutions. In this paper, we also point out some elementary and unsolved problems associated with these matrix models, whose resolution would lead to more satisfying matrix models in the future.

20.
This Note introduces recent developments in the analysis of inventory systems with partial observations. The states of these systems are typically conditional distributions, which evolve in infinite-dimensional spaces over time. Our analysis involves introducing unnormalized probabilities to transform the nonlinear state transition equations into linear ones. With the linear equations, the existence of optimal feedback policies is proved for two models in which demand and inventory are partially observed. In a third model, where the current inventory is not observed but a past inventory level is fully observed, a sufficient statistic is provided to serve as a state. This last model serves as an example of a partially observed model with a finite-dimensional state. In that model, we also establish the optimality of basestock policies, hence generalizing the corresponding classical models with full information. To cite this article: A. Bensoussan et al., C. R. Acad. Sci. Paris, Ser. I 341 (2005).
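A basestock (order-up-to) policy of the kind whose optimality the Note extends is simple to state: each period, order exactly enough to raise the inventory position to a fixed level S. A fully-observed sketch with made-up demands (the partially observed setting of the Note replaces the observed level with a sufficient statistic):

```python
def basestock_orders(demands, S=10, x0=4):
    # Order-up-to-S policy under full observation of the inventory level x.
    x = x0
    orders = []
    for d in demands:
        order = max(0, S - x)   # raise inventory position to S
        orders.append(order)
        x = x + order - d       # order arrives, then demand is served
    return orders

orders = basestock_orders([3, 7, 2, 12])
```

Each period's order simply replaces the previous period's demand once the position first reaches S, which is what makes the policy attractive both analytically and operationally.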


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号