Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
In this paper, an efficient approach to the modeling and control of Multi-Rate Networked Control Systems (MRNCS) with long time delays is presented. First, the system is modeled as a switched system with a random switching signal driven by the random network-induced delay. The delay is described by a Markov chain, so the MRNCS model is obtained as a Markovian jump linear system. A dynamic output feedback controller is then designed for output tracking as well as stabilization of the closed-loop system. The modeling and control of the MRNCS are presented for two structures. First, a new model of the single-side MRNCS is proposed and a mode-independent controller is designed to stabilize the system. The proposed modeling method is then generalized to the double-side MRNCS and, by introducing the concept of a Set of Possible Modes (SPM), an SPM-dependent controller is proposed for the double-side MRNCS. To show the effectiveness of the proposed methods, numerical results on the quadruple-tank process are provided.
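As background for the modeling step described above, a Markovian jump linear system with the delay acting as the mode process takes the standard form (a generic sketch, not the paper's exact matrices):

```latex
x_{k+1} = A_{\theta_k}\, x_k + B_{\theta_k}\, u_k, \qquad
\Pr\!\left(\theta_{k+1}=j \mid \theta_k=i\right) = \pi_{ij},
```

where \(\theta_k\) is a finite-state Markov chain indexing the network-induced delay and \(\pi_{ij}\) are its transition probabilities.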

2.
The concept of set-valued C-τ-semi-preinvexity is introduced. It is proved that a local weakly efficient element of a set-valued C-τ-semi-preinvex optimization problem is a weakly efficient element, and set-valued preinvex variational inequalities are given as sufficient and necessary conditions for set-valued C-τ-semi-preinvex optimization problems. These results generalize the corresponding results of [1-4].

3.
In this article, novel joint semiparametric spline-based modeling of conditional mean and volatility of financial time series is proposed and evaluated on daily stock return data. The modeling includes functions of lagged response variables and time as predictors. The latter can be viewed as a proxy for omitted economic variables contributing to the underlying dynamics. The conditional mean model is additive. The conditional volatility model is multiplicative and linearized with a logarithmic transformation. In addition, a cube-root power transformation is employed to symmetrize the lagged response variables. Using cubic splines, the model can be written as a multiple linear regression, thereby allowing predictions to be obtained in a simple manner. As outliers are often present in financial data, reliable estimation of the model parameters is achieved by trimmed least-squares (TLS) estimation, for which a reasonable amount of trimming is suggested. To obtain a parsimonious specification of the model, a new model selection criterion corresponding to TLS is derived. Moreover, the (three-parameter) generalized gamma distribution is identified as suitable for the absolute multiplicative errors and shown to work well for predictions and also for the calculation of quantiles, which is important to determine the value at risk. All model choices are motivated by a detailed analysis of IBM, HP, and SAP daily returns. The prediction performance is compared to the classical generalized autoregressive conditional heteroskedasticity (GARCH) and asymmetric power GARCH (APGARCH) models as well as to a nonstationary time-trend volatility model. The results suggest that the proposed model may possess a high predictive power for future conditional volatility. Supplementary materials for this article are available online.
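The symmetrizing cube-root transformation mentioned above can be sketched as follows; the function name is hypothetical and the snippet is illustrative, not the article's exact pipeline:

```python
def signed_cbrt(x: float) -> float:
    """Signed cube-root transform: shrinks the magnitude of large
    (heavy-tailed) returns while preserving their sign, which tends
    to symmetrize the distribution of lagged returns."""
    return (abs(x) ** (1.0 / 3.0)) * (1 if x >= 0 else -1)

# Example on a few daily returns (illustrative values):
returns = [0.008, -0.027, 0.001, -0.064]
transformed = [signed_cbrt(r) for r in returns]
```

The transform is odd-symmetric, so the sign of each return is retained while its tail is pulled in.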

4.
5.
This paper addresses the problem of finding rectangular drawings of plane graphs, in which each vertex is drawn as a point, each edge is drawn as a horizontal or a vertical line segment, and the contour of each face is drawn as a rectangle. A graph is a 2–3 plane graph if it is a plane graph and each vertex has degree 3 except the vertices on the outer face which have degree 2 or 3. A necessary and sufficient condition for the existence of a rectangular drawing has been known only for the case where exactly four vertices of degree 2 on the outer face are designated as corners in a 2–3 plane graph G. In this paper we establish a necessary and sufficient condition for the existence of a rectangular drawing of G for the general case in which no vertices are designated as corners. We also give a linear-time algorithm to find a rectangular drawing of G if it exists.

6.
This paper concerns competitive equilibria on a market for risk exchanges (rex). First, a short résumé is offered of some fundamental results obtained in this field, essentially due to K. Borch. Much attention is devoted to the key question of whether equilibria can be seen as generated by a market for contingent coverings (in short, an analytic approach) or by a simpler market for risks governed by a synthetic premium principle. The idea of rex markets constrained both on the quantity side and in the tariffication system applied is then introduced as a useful tool to study, e.g., markets where unconstrained equilibria turn out to be too complex. Finally, an example of a non-traditional constrained market is briefly discussed.

7.
Control problems not admitting the dynamic programming principle are known as time-inconsistent. The game-theoretic approach is to interpret such problems as intrapersonal dynamic games and look for subgame perfect Nash equilibria. A fundamental result of time-inconsistent stochastic control is a verification theorem saying that solving the extended HJB system is a sufficient condition for equilibrium. We show that solving the extended HJB system is a necessary condition for equilibrium, under regularity assumptions. The controlled process is a general Itô diffusion.

8.
On the optimization of surface textures for lubricated contacts
The pressure field that develops inside a lubricated contact obeys an elliptic equation known as the Reynolds equation, with coefficients that depend on the shape of the contacting surfaces. The load-carrying capacity of a contact, defined as the integral of the pressure field, is an important performance indicator that should be as high as possible to avoid wear and damage of the surfaces. In this article, the effect of arbitrary uniform periodic textures on the load-carrying capacity of lubricated devices known as thrust bearings is investigated theoretically by means of homogenization techniques and first-order perturbation analysis. It is shown that the untextured shape is a local optimum for the load-carrying capacity of the homogenized pressure field. This is proved for bearings of general shape and considering both incompressible and compressible models for the lubricant. The homogenization technique, however, introduces an error. Suitable bounds for the effect of this error are provided in a simplified case.
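For reference, the incompressible steady-state Reynolds equation mentioned above can be written, for film thickness \(h(x,y)\), pressure \(p\), viscosity \(\mu\), and sliding speed \(U\) (a standard form; the article's precise variant and normalization may differ):

```latex
\frac{\partial}{\partial x}\!\left(h^{3}\,\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial y}\!\left(h^{3}\,\frac{\partial p}{\partial y}\right)
= 6\mu U\,\frac{\partial h}{\partial x},
\qquad
W = \iint p \,\mathrm{d}x\,\mathrm{d}y,
```

with \(W\) the load-carrying capacity defined as the integral of the pressure field.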

9.
A numerical study is made comparing the exact thermal boundary condition and a harmonic mean conductivity condition at the solid–fluid interface for a finite-thickness shrouded non-isothermal fin array. The results highlight a significant deviation, as high as 20%, in the pressure drop across the length of the fin for the exact thermal boundary condition compared with that obtained using the harmonic mean conductivity condition. The exact thermal boundary condition predicts a relatively more non-isothermal fin than the harmonic mean conductivity condition does. The greater the fin spacing, the larger the non-isothermal behavior of the fin; this behavior also depends on the Grashof number and the inlet fluid velocity, increasing with both. The bulk fluid temperature is overpredicted by as much as 13% by the harmonic mean conductivity condition for larger fin spacing with the highest Grashof number coupled with larger velocity; this deviation is only 6% for smaller fin spacing. The overall Nusselt number is also overpredicted by the harmonic mean conductivity condition compared with the exact thermal boundary condition, but this overprediction is limited to about 8%.

10.
Disassembly activities take place in various recovery operations including remanufacturing, recycling and disposal. The disassembly line is the best choice for automated disassembly of returned products. It is therefore important that the disassembly line be designed and balanced so that it works as efficiently as possible. The disassembly line balancing problem seeks a sequence which is feasible, minimizes the number of workstations, and ensures similar idle times, among other end-of-life specific concerns. However, finding the optimal balance is computationally intensive, with exhaustive search quickly becoming prohibitive even for relatively small products. In this paper the problem is mathematically defined and proven NP-complete. Additionally, a new formula for quantifying the level of balancing is proposed. A first-ever set of a priori instances to be used in the evaluation of any disassembly line balancing solution technique is then developed. Finally, a genetic algorithm is presented for obtaining optimal or near-optimal solutions for disassembly line balancing problems, and examples are presented to illustrate implementation of the methodology.
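To make the balancing objective concrete, the sketch below packs a feasible task sequence into workstations under a cycle time and scores the result with the sum of squared idle times, a common balance measure in the line-balancing literature; the paper proposes its own formula, which is not reproduced here, and all names below are illustrative:

```python
def assign_stations(task_times, cycle_time):
    """Greedily pack tasks (taken in the given feasible sequence)
    into workstations without exceeding the cycle time."""
    stations, current, load = [], [], 0.0
    for t in task_times:
        if load + t > cycle_time and current:
            stations.append(current)
            current, load = [], 0.0
        current.append(t)
        load += t
    if current:
        stations.append(current)
    return stations

def balance_measure(stations, cycle_time):
    """Sum of squared idle times: 0 means a perfectly balanced line;
    the measure penalizes uneven idle time across workstations."""
    return sum((cycle_time - sum(s)) ** 2 for s in stations)

stations = assign_stations([4, 3, 5, 2, 6], cycle_time=8)
```

A genetic algorithm such as the one in the paper would search over task sequences, using a measure like this (plus workstation count and end-of-life concerns) as the fitness.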

11.
In this paper the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well-known stochastic optimization algorithms, namely simulated annealing, the genetic algorithm and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as a criterion function for the algorithm; in cases where it is not analytically tractable, the Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, at a very low computational cost, it achieves improved performance compared with popular MCMC algorithms, such as MCMC model composition, as well as with “vanilla” versions of simulated annealing, the genetic algorithm and tabu search.
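The simulated-annealing component of such a model search can be sketched over binary inclusion indicators. In the sketch below a synthetic score stands in for the posterior model probability (which would require the actual data, priors, and Laplace approximation); all names are hypothetical:

```python
import math
import random

random.seed(0)

P = 10               # number of candidate predictors
TRUE = {1, 4, 7}     # hidden "best" subset, standing in for real data

def score(model):
    # Toy criterion: in the paper this is the (possibly Laplace-
    # approximated) posterior model probability; here, closeness to TRUE.
    return -len(model ^ TRUE)

def anneal(iters=2000, t0=1.0):
    current, best = set(), set()
    for k in range(iters):
        temp = t0 / (1 + k)                 # simple cooling schedule
        j = random.randrange(P)
        proposal = current ^ {j}            # flip one inclusion indicator
        delta = score(proposal) - score(current)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = proposal
        if score(current) > score(best):
            best = set(current)
    return best

best_model = anneal()
```

The paper's algorithm additionally mixes in genetic-algorithm crossover and a tabu list; the acceptance rule above is the annealing ingredient only.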

12.
This study proposes an original method to evaluate complex supply chains. A tentative multi-echelon production, transportation and distribution system with built-in stochastic factors is employed as a test bed for the proposed method. The supply subsystem formulated in this study is a two-stage production facility with a constant probability of feedback and stochastic breakdowns. The transportation subsystem is a service facility with one server. The distribution subsystem under study is a single central warehouse with M retailers. All participants in the supply chain use base-stock policies and single-server settings. We investigate both make-to-order (MTO) and make-to-stock (MTS) policies for different base-stock levels, as adopted at different sites. Applying quasi-birth-and-death (QBD) processes as decomposed building blocks and then using an existing matrix-analytic computing approach for the performance evaluation of a tandem queue constitutes the main procedure of this study. We also discuss the possibilities of extending the current model to account for other inventory control policies as well as for the multi-server case. A numerical study shows that the proposed analytical model is robust for practical use.

13.
This paper deals with the problem of combining marginal probability distributions as a means for aggregating pieces of expert information. A novel approach, which takes the combining problem as an analogy of statistical estimation, is proposed and discussed. The combined distribution is then searched as a minimizer of a weighted sum of Kullback-Leibler divergences of the given marginal distributions and corresponding marginals of the searched one. Necessary and sufficient conditions for a distribution to be a minimizer are stated. For discrete random variables an iterative algorithm for approximate solution of the minimization problem is proposed and its convergence is proved.

14.
It is known that the accuracy of the maximum likelihood-based covariance and precision matrix estimates can be improved by penalized log-likelihood estimation. In this article, we propose a ridge-type operator for the precision matrix estimation, ROPE for short, to maximize a penalized likelihood function where the Frobenius norm is used as the penalty function. We show that there is an explicit closed form representation of a shrinkage estimator for the precision matrix when using a penalized log-likelihood, which is analogous to ridge regression in a regression context. The performance of the proposed method is illustrated by a simulation study and real data applications. Computer code used in the example analyses as well as other supplementary materials for this article are available online.
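To see where a closed form comes from, consider the penalized log-likelihood log det Θ − tr(SΘ) − λ‖Θ‖²_F, whose stationarity condition is Θ⁻¹ = S + 2λΘ; each eigenvalue of Θ then solves a quadratic in the corresponding eigenvalue of S. The sketch below handles the diagonal case only, and the penalty scaling is an assumed convention, so the constants may differ from the article's estimator:

```python
import math

def rope_diagonal(sample_vars, lam):
    """Closed-form penalized-likelihood precision estimate for a
    diagonal sample covariance (eigenvalues = sample variances d).
    Each entry theta solves 2*lam*theta**2 + d*theta - 1 = 0, i.e.
    1/theta = d + 2*lam*theta (positive root)."""
    return [(-d + math.sqrt(d * d + 8.0 * lam)) / (4.0 * lam)
            for d in sample_vars]

theta = rope_diagonal([2.0, 0.5], lam=0.1)
```

For a full covariance matrix the same formula is applied to the eigenvalues of S, keeping its eigenvectors; as λ → 0 the estimate approaches the unpenalized inverse 1/d.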

15.
It is suggested that there exist many fuzzy set systems, each with its specific pointwise operations for union and intersection. A general law of compound possibilities is valid for all these systems, as well as a general law for representing marginal possibility distributions as unions of fuzzy sets. Max-min fuzzy sets are a special case of a fuzzy set system which uses the pointwise operations of max and min for union and intersection, respectively. Probabilistic fuzzy sets are another special case which uses the operations of addition and multiplication. There probably exist infinitely many fuzzy set operations and systems. It is shown why the law of idempotency for intersection is not required for such systems. An essential difference between the meaning of the operations of union and intersection in traditional measure theory as compared with their meaning in the theory of possibility is pointed out. The operation of particularization is used to illustrate that the two distinct classical theories of nonfuzzy relations and of probability are merely two aspects of a more generalized theory of fuzzy sets. It is shown that we must distinguish between particularization of conditional fuzzy sets and of joint fuzzy sets. The concept of restriction of nonfuzzy relations is a special case of particularization of both conditional and joint fuzzy sets. The computation of joint probabilities from conditional and marginal ones is a special case of particularization of conditional probabilistic fuzzy sets. The difference between linguistic modifiers of type 1 and particulating modifiers is pointed out, as well as a general difference between nouns and adjectives.
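The two special cases named above can be written as pointwise operations on membership grades in [0, 1]; a minimal illustration:

```python
# Two of the fuzzy set systems mentioned in the abstract, as pointwise
# operations on membership grades a, b in [0, 1] (illustrative only).
def union_maxmin(a, b):
    return max(a, b)

def inter_maxmin(a, b):
    return min(a, b)

def union_prob(a, b):
    return a + b - a * b    # probabilistic sum

def inter_prob(a, b):
    return a * b            # product

a = 0.6
# Intersection is idempotent in the max-min system (min(a, a) == a)
# but not in the probabilistic system (a * a != a for 0 < a < 1),
# which is why idempotency cannot be required of all such systems.
```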

16.
The aim of this paper is to develop an alternative approach for assessing an insurer’s solvency as a proposal for a standard model for Solvency II. Instead of deriving minimum capital requirements – as is done in solvency regulation – our model provides company-specific minimum standards for the risk and return of investment performance, given the distribution structure of liabilities and a predefined safety level. The idea behind this approach is that in a situation of weak solvency, an insurer’s asset allocation can be adjusted much more easily in the short term than can, for example, claims cost distributions, operating expenses, or equity capital. Hence, instead of using separate models for capital regulation and solvency regulation – as is typically done in most insurance markets – our single model will reduce the complexity and costs for insurers as well as for regulators. In this paper, we first develop the model framework and then test its applicability using data from a German non-life insurer.

17.
We present a massively parallel algorithm for the fused lasso, powered by multiple graphics processing units (GPUs). Our method is suitable for a class of large-scale sparse regression problems on which a two-dimensional lattice structure among the coefficients is imposed. This structure is important in many statistical applications, including image-based regression in which a set of images is used to locate image regions predictive of a response variable such as human behavior. Such large datasets are increasingly common. In our study, we employ the split Bregman method and the fast Fourier transform, which jointly have a high data-level parallelism that is distinct in a two-dimensional setting. Our multi-GPU parallelization achieves remarkably improved speed. Specifically, we obtained a speed-up of as much as 433 times over the reference CPU implementation. We demonstrate the speed and scalability of the algorithm using several datasets, including 8100 samples of 512 × 512 images. Compared to the single-GPU counterpart, our method also showed improved computing speed as well as high scalability. We describe the various elements of our study as well as our experience with the subtleties in selecting an existing algorithm for parallelization. It is critical that memory bandwidth be carefully considered for multi-GPU algorithms. Supplementary material for this article is available online.

18.
The Shorth Plot     
The shorth plot is a tool to investigate probability mass concentration. It is a graphical representation of the length of the shorth, the shortest interval covering a certain fraction of the distribution, localized by forcing the intervals considered to contain a given point x. It is easy to compute, avoids bandwidth selection problems, and allows scanning for local as well as for global features of the probability distribution. The good performance of the shorth plot is demonstrated through simulations and real data examples. These data as well as an R-package for computation of the shorth plot are available online.
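The localized shorth described above is straightforward to compute on a sample: among all intervals spanning ⌈αn⌉ consecutive order statistics that contain the point x, take the shortest. A sketch (function name hypothetical; the plot itself evaluates this over a grid of x values):

```python
import math

def localized_shorth(data, x, alpha=0.5):
    """Length of the shortest interval that covers a fraction alpha
    of the sample AND contains the point x; returns inf if no window
    of the required size contains x."""
    xs = sorted(data)
    n = len(xs)
    k = math.ceil(alpha * n)       # observations the interval must cover
    best = float("inf")
    for i in range(n - k + 1):     # slide a window of k order statistics
        lo, hi = xs[i], xs[i + k - 1]
        if lo <= x <= hi:
            best = min(best, hi - lo)
    return best

length = localized_shorth([0, 1, 2, 3, 10], x=1.5, alpha=0.6)
```

Short lengths indicate high probability mass concentration near x; no bandwidth needs to be chosen, only the coverage fraction α.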

19.
In the random censorship from the right model, strong and weak limit theorems for Bahadur-Kiefer type processes based on the product-limit estimator are established. The main theorem is sharp and may be considered a final result as far as this type of research is concerned. As a consequence of this theorem, a sharp uniform Bahadur representation for product-limit quantiles is obtained.

20.
Summary. Linear programming models for stochastic planning problems and a methodology for solving them are proposed. A production planning problem with uncertainty in demand is used as a test case, but the methodology presented here is applicable to other types of problems as well. In these models, uncertainty in demand is characterized via scenarios. Solutions are obtained for each scenario, and these individual scenario solutions are then aggregated to yield an implementable non-anticipative policy. Such an approach makes it possible to model correlated and nonstationary demand as well as a variety of recourse decision types. For computational purposes, two alternative representations are proposed: a compact representation suitable for the simplex method, and a splitting-variable representation suitable for interior point methods. A crash procedure that generates an advanced starting solution for the simplex method is developed. Computational results are reported for both representations. Although some of the models presented here are very large (over 25,000 constraints and 75,000 variables), our computational experience with these problems is quite encouraging.
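The aggregation step described above can be illustrated with a toy example: solve each scenario separately, then combine the scenario-specific first-stage decisions into a single non-anticipative plan by probability weighting. This is only a stand-in for the paper's scenario LPs (the per-scenario "solution" here is trivially to produce the demand); all numbers are illustrative:

```python
# (probability, demand) pairs characterizing demand uncertainty via scenarios
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 140.0)]

def per_scenario_plan(demand):
    # In the paper each scenario yields a full LP solution; in this toy
    # sketch the optimal first-stage decision is to produce the demand.
    return demand

# Aggregate the scenario solutions into one implementable policy:
# a single production quantity that does not depend on which scenario
# is realized (non-anticipativity).
aggregated_plan = sum(p * per_scenario_plan(d) for p, d in scenarios)
```

Progressive-hedging-style methods iterate this idea, penalizing each scenario solution's deviation from the aggregate until the scenario plans agree.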
