Similar Documents
20 similar documents found.
1.
In this paper mathematical methods for fuzzy stochastic analysis in engineering applications are presented. Fuzzy stochastic analysis maps uncertain input data in the form of fuzzy random variables onto fuzzy random result variables. The operator of the mapping can be any desired deterministic algorithm, e.g. the dynamic analysis of structures. Two different approaches for processing the fuzzy random input data are discussed. For these purposes two types of fuzzy probability distribution functions for describing fuzzy random variables are introduced. On the basis of these two types of fuzzy probability distribution functions two appropriate algorithms for fuzzy stochastic analysis are developed. Both algorithms are demonstrated and compared by way of an example.

2.
In this paper, a new conceptual algorithm for the conceptual analysis of mixed incomplete data sets is introduced. It is a logical combinatorial pattern recognition (LCPR) based tool for the conceptual structuralization of spaces. Motivated by the limitations of existing conceptual algorithms, our laboratories are applying the methods, techniques, and, more generally, the philosophy of logical combinatorial pattern recognition to overcome those limitations. An extension of Michalski's concept of l-complex to arbitrary similarity measures, a generalization operator for symbolic variables, and an extension of Michalski's refunion operator are introduced. Finally, the performance of the RGC algorithm is analyzed, and a comparison with several known conceptual algorithms is presented.

3.
A prediction model for methane production in a wastewater processing facility is presented. The model is built by data-mining algorithms from industrial data collected on a daily basis. Because many parameters are available in this research, a subset of parameters is selected using importance analysis. Prediction results for methane production are presented in this paper. The performance of the models built by different algorithms is measured with five metrics. Based on these metrics, the model built by the Adaptive Neuro-Fuzzy Inference System algorithm provides the most accurate predictions of methane production.

4.
In this paper, an ensemble of discrete differential evolution algorithms with parallel populations is presented. In a single-population discrete differential evolution (DDE) algorithm, the destruction and construction (DC) procedure is employed to generate the mutant population, whereas the trial population is obtained through a crossover operator. The performance of the DDE algorithm is substantially affected by the parameters of the DC procedure as well as by the choice of crossover operator. In order to enable the DDE algorithm to make use of different parameter values and crossover operators simultaneously, we propose an ensemble of DDE (eDDE) algorithms in which each parameter set and crossover operator is assigned to one of the parallel populations. Each parallel parent population competes not only with the offspring population generated from its own population but also with the offspring populations generated by all other parallel populations, which use different parameter settings and crossover operators. As an application area, the well-known generalized traveling salesman problem (GTSP) is chosen, where the set of nodes is divided into clusters and the objective is to find a minimum-cost tour passing through exactly one node from each cluster. The experimental results show that none of the single-population variants was effective in solving all the GTSP instances, whereas the eDDE performed substantially better than the single-population variants on a set of problem instances. Furthermore, through experimental analysis, the performance of the eDDE algorithm is compared against the best performing algorithms from the literature. Ultimately, all of the best known average solutions for larger instances are further improved by the eDDE algorithm.
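As a rough illustration of the destruction and construction idea on a GTSP tour, the following Python sketch removes a few clusters from a tour and greedily re-inserts them, re-choosing the visited node within each cluster. The function names, the distance-matrix representation, and the destruction size d are illustrative assumptions and not taken from the paper.

import random

def tour_length(tour, dist):
    """Cost of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def dc_move(tour, clusters, dist, d=2, rng=random):
    """One destruction-construction step on a GTSP tour.

    tour holds one chosen node per cluster; clusters maps a cluster id to its
    candidate nodes. d positions are destroyed, then each destroyed cluster is
    re-inserted greedily, also re-choosing which of its nodes to visit.
    """
    tour = tour[:]
    destroyed = rng.sample(range(len(tour)), d)
    destroyed_clusters = [next(c for c, nodes in clusters.items() if tour[i] in nodes)
                          for i in destroyed]
    for i in sorted(destroyed, reverse=True):
        del tour[i]
    for c in destroyed_clusters:                      # greedy re-insertion (construction)
        best = None
        for node in clusters[c]:
            for pos in range(len(tour) + 1):
                cand = tour[:pos] + [node] + tour[pos:]
                cost = tour_length(cand, dist)
                if best is None or cost < best[0]:
                    best = (cost, cand)
        tour = best[1]
    return tour

# Tiny demo: 4 clusters of 2 nodes each on a line, distance = index difference
clusters = {c: [2 * c, 2 * c + 1] for c in range(4)}
dist = [[abs(i - j) for j in range(8)] for i in range(8)]
new_tour = dc_move([0, 2, 4, 6], clusters, dist, d=2)
print(new_tour, tour_length(new_tour, dist))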

5.
In this paper, we consider single-machine scheduling problems with a time-dependent deterioration. By time-dependent deterioration, we mean that the processing time of a job is defined by an increasing function of the total normal processing time of the jobs in front of it in the sequence. The objective is to minimize the total completion time. We develop a mixed integer programming formulation for the problem. The complexity status of this problem remains open. Hence, we use the smallest normal processing time (SPT) first rule as a heuristic algorithm for the general case and analyze its worst-case error bound. Two heuristic algorithms utilizing the V-shaped property are also proposed to solve the problem. Computational results are presented to evaluate the performance of the proposed algorithms.
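A minimal Python sketch of the SPT-first heuristic is given below; the linear deterioration function used as the default argument is only an illustrative instance of an "increasing function of the total normal processing time of the preceding jobs", not the specific model studied in the paper.

def spt_total_completion(p, deteriorate=lambda p_j, s: p_j * (1.0 + 0.1 * s)):
    """Total completion time under the SPT (smallest normal processing time first) order.

    p           -- normal processing times
    deteriorate -- actual processing time as an increasing function of the normal
                   time p_j and the total normal time s of the jobs sequenced
                   before it (the linear form with rate 0.1 is only illustrative)
    """
    order = sorted(p)          # SPT: smallest normal processing time first
    t = 0.0                    # completion time of the current job
    s = 0.0                    # total normal processing time of preceding jobs
    total = 0.0
    for p_j in order:
        t += deteriorate(p_j, s)
        total += t
        s += p_j
    return order, total

# Three jobs with normal processing times 4, 1 and 3
print(spt_total_completion([4, 1, 3]))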

6.
In this paper, we deal with single machine scheduling problems subject to time dependent effects. The main point in our models is that we do not assume a constant processing rate during job processing time. Rather, the processing rate changes according to a fixed schedule of activities, such as replacing a human operator by a less skilled operator. The contribution of this paper is threefold. First, we devise a time-dependent piecewise constant processing rate model and show how to compute the processing time of a resumable job. Second, we prove that any time-dependent continuous piecewise linear processing time model can be generated by the proposed rate model. Finally, we propose polynomial-time algorithms for some single machine problems with a job-independent rate function. In these procedures the job-independent rate effect does not imply any restriction on the number of breakpoints for the corresponding continuous piecewise linear processing time model. This is a clear element of novelty with respect to the polynomial-time algorithms proposed in previous contributions for time-dependent scheduling problems.
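The central computation, finding the finish time of a resumable job under a piecewise constant rate schedule, can be sketched as follows in Python; the interface (breakpoints plus one positive rate per piece) is an assumption made for the illustration.

def completion_time(start, work, breakpoints, rates):
    """Finish time of a resumable job started at `start` that requires `work`
    units of processing at nominal rate 1.

    The machine runs at rates[i] (assumed positive) on [breakpoints[i],
    breakpoints[i+1]) and at rates[-1] after the last breakpoint, with
    breakpoints[0] <= start.
    """
    t = start
    for i, rate in enumerate(rates):
        end = breakpoints[i + 1] if i + 1 < len(breakpoints) else float("inf")
        if t >= end:
            continue                         # this piece ended before the job started
        capacity = (end - t) * rate          # work that fits into the rest of this piece
        if capacity >= work:
            return t + work / rate
        work -= capacity
        t = end
    raise ValueError("rate schedule exhausted before the job finished")

# A 10-unit job started at t = 2 while the rate drops from 1.0 to 0.5 at t = 6
print(completion_time(2.0, 10.0, [0.0, 6.0], [1.0, 0.5]))   # -> 18.0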

7.
The m-machine no-wait flowshop scheduling problem with the objective of minimizing total completion time, subject to the constraint that the makespan does not exceed a certain value, is addressed in this paper. Setup times are assumed to be non-zero and are treated as separate from processing times. Nine algorithms are adapted and proposed for the problem: an insertion algorithm, two genetic algorithms, three simulated annealing algorithms, two cloud theory-based simulated annealing algorithms, and a differential evolution algorithm. An extensive computational analysis has been conducted to evaluate the proposed algorithms. The analysis indicates that one of the nine proposed algorithms, a simulated annealing algorithm (ISA-2), performs much better than the others under the same computational time, and that ISA-2 also performs significantly better than the best previously existing algorithm. Specifically, ISA-2 reduces the error of the best existing algorithm in the literature by at least 90% under the same computational time. All results have been statistically tested.
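For reference, the sketch below shows how start and completion times propagate along a job sequence in a no-wait flowshop; it ignores the separate setup times considered in the paper and is not one of the nine proposed algorithms.

def no_wait_schedule(seq, p):
    """Start and completion times of the jobs in `seq` in a no-wait flowshop.

    p[j][k] is the processing time of job j on machine k. Each job must flow
    through the machines without waiting, so its start is pushed just far
    enough that it never collides with its predecessor on any machine.
    Setup times are ignored in this sketch.
    """
    m = len(p[0])
    starts, completions = [], []
    for idx, j in enumerate(seq):
        if idx == 0:
            s = 0.0
        else:
            i = seq[idx - 1]
            s_i = starts[-1]
            # job j may not reach machine k before job i has left machine k
            s = max(s_i + sum(p[i][:k + 1]) - sum(p[j][:k]) for k in range(m))
        starts.append(s)
        completions.append(s + sum(p[j]))
    return starts, completions

# Two machines, three jobs, sequence 0-1-2
p = [[3, 2], [1, 4], [2, 2]]
starts, completions = no_wait_schedule([0, 1, 2], p)
print(starts, completions, "makespan:", completions[-1], "total completion time:", sum(completions))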

8.
When solving multi-objective optimization problems (MOPs) with big data, traditional multi-objective evolutionary algorithms (MOEAs) face challenges because their high computational cost cannot satisfy the demands of online data processing involving optimization. Gradient heuristic optimization methods show great potential for solving large-scale numerical optimization problems with acceptable computational cost. However, some intrinsic limitations make them unsuitable for searching for Pareto fronts. It is believed that a combination of these two types of methods can deal with big MOPs at lower computational cost. The main contribution of this paper is a multi-objective memetic algorithm based on decomposition for big optimization problems (MOMA/D-BigOpt) in which a gradient-based local search operator is embedded. In the experiments, MOMA/D-BigOpt is tested on multi-objective big optimization problems with thousands of variables. We also combine the local search operator with other widely used MOEAs to verify its effectiveness. The experimental results show that the proposed algorithm outperforms MOEAs without the gradient heuristic local search operator.

9.
Optimal algorithms for scheduling divisible load on heterogeneous systems are considered in this paper. The platform model we use is general and realistic: the mode of communication is non-blocking message receiving, and processors and communication links may have different speeds and arbitrary start-up overheads. The objective is to minimize the processing time of the entire workload. The main contributions are: (1) closed-form expressions for the processing time and the fraction of workload for each processor are derived; (2) the influence of start-up overheads on the optimal processing time is analyzed; (3) for systems with a bounded number of processors and a large workload, an optimal sequence and algorithm for workload distribution are proposed. Moreover, some numerical examples are presented to illustrate the analysis.

10.
In this paper, we consider iterative algorithms of Uzawa type for solving linear nonsymmetric saddle point problems. Specifically, we consider systems, written as usual in block form, where the upper left block is an invertible linear operator with positive definite symmetric part. Such saddle point problems arise, for example, in certain finite element and finite difference discretizations of Navier-Stokes equations, Oseen equations, and mixed finite element discretization of second order convection-diffusion problems. We consider two algorithms, each of which utilizes a preconditioner for the operator in the upper left block. Convergence results for the algorithms are established in appropriate norms. The convergence of one of the algorithms is shown assuming only that the preconditioner is spectrally equivalent to the inverse of the symmetric part of the operator. The other algorithm is shown to converge provided that the preconditioner is a sufficiently accurate approximation of the inverse of the upper left block. Applications to the solution of steady-state Navier-Stokes equations are discussed, and, finally, the results of numerical experiments involving the algorithms are presented.
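A bare-bones NumPy sketch of a Uzawa-type iteration for such a block system is shown below; it uses an exact solve with the upper left block where the algorithms in the paper would apply a preconditioner, and the step length and test problem are illustrative choices rather than the paper's setting.

import numpy as np

def uzawa(A, B, f, g, tau, tol=1e-10, maxit=2000):
    """Uzawa iteration for the block system [[A, B^T], [B, 0]] [x; y] = [f; g].

    The x-update uses an exact solve with A; the algorithms in the paper would
    replace it by a preconditioner for (the symmetric part of) A. tau is the
    step length of the multiplier update.
    """
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[0])
    for _ in range(maxit):
        x = np.linalg.solve(A, f - B.T @ y)   # primal update
        y = y + tau * (B @ x - g)             # Lagrange-multiplier update
        if np.linalg.norm(B @ x - g) < tol:
            break
    return x, y

# Small nonsymmetric test problem whose upper left block has a positive
# definite symmetric part (A is a mild perturbation of the identity)
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B = rng.standard_normal((2, 4))
x_true, y_true = rng.standard_normal(4), rng.standard_normal(2)
f, g = A @ x_true + B.T @ y_true, B @ x_true
S = B @ np.linalg.solve(A, B.T)               # Schur complement
x, y = uzawa(A, B, f, g, tau=1.0 / np.linalg.norm(S, 2))
print(np.linalg.norm(x - x_true), np.linalg.norm(y - y_true))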



11.
Replacing suffix trees with enhanced suffix arrays
The suffix tree is one of the most important data structures in string processing and comparative genomics. However, the space consumption of the suffix tree is a bottleneck in large scale applications such as genome analysis. In this article, we will overcome this obstacle. We will show how every algorithm that uses a suffix tree as data structure can systematically be replaced with an algorithm that uses an enhanced suffix array and solves the same problem in the same time complexity. The generic name enhanced suffix array stands for data structures consisting of the suffix array and additional tables. Our new algorithms are not only more space efficient than previous ones, but they are also faster and easier to implement.
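A small Python sketch of the basic tables behind an enhanced suffix array follows: a suffix array (built naively here for brevity, whereas practical constructions are far more efficient) together with the longest-common-prefix table computed by Kasai's algorithm, one example of the "additional tables".

def suffix_array(s):
    """Naive suffix array: start indices of the suffixes in lexicographic order.
    (Real implementations use linear-time constructions; this is for illustration.)"""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_table(s, sa):
    """Longest-common-prefix table via Kasai's algorithm: lcp[i] is the length of
    the common prefix of the suffixes ranked i-1 and i in the suffix array."""
    n = len(s)
    rank = [0] * n
    for i, suf in enumerate(sa):
        rank[suf] = i
    lcp, h = [0] * n, 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
        else:
            h = 0
    return lcp

s = "mississippi"
sa = suffix_array(s)
print(sa)                 # suffix array
print(lcp_table(s, sa))   # an "additional table" of the enhanced suffix array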

12.
The complexity status of the minimum dilation triangulation (MDT) problem for a general point set is unknown. Therefore, we focus on the development of approximation algorithms to find high quality triangulations of minimum dilation. As an initial approach, we design a greedy strategy that obtains solutions close to the optimal ones in a simple way. We also propose a neighborhood-generation operator that is used in different algorithms: Local Search, Iterated Local Search, and Simulated Annealing. In addition, we present an algorithm called Random Local Search in which good and bad solutions are accepted using the previously mentioned operator. For the experimental study we created a set of problem instances, since no benchmark for these problems was found in the literature. We use the sequential parameter optimization toolbox for tuning the parameters of the SA algorithm. We compare our results with those obtained by the OV-MDT algorithm, which uses the obstacle value to sort the edges in the constructive process; this is the only available algorithm found in the literature. Through experimental evaluation and statistical analysis, we assess the performance of the proposed algorithms using this operator.
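To make the objective concrete, the following Python sketch computes the dilation (stretch factor) of a given triangulation as the maximum, over all point pairs, of graph distance divided by Euclidean distance, using SciPy's Delaunay triangulation merely as a stand-in candidate solution; none of this is the paper's greedy or local search machinery.

import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

def dilation(points, triangles):
    """Dilation (stretch factor) of a triangulation: the maximum over all point
    pairs of shortest-path distance in the triangulation divided by Euclidean
    distance."""
    n = len(points)
    w = lil_matrix((n, n))
    for tri in triangles:               # the three edges of each triangle
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[0], tri[2])):
            d = np.linalg.norm(points[a] - points[b])
            w[a, b] = w[b, a] = d
    graph_dist = shortest_path(w.tocsr(), directed=False)
    eucl = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    mask = eucl > 0
    return (graph_dist[mask] / eucl[mask]).max()

rng = np.random.default_rng(1)
pts = rng.random((30, 2))
print(dilation(pts, Delaunay(pts).simplices))   # Delaunay used only as a baseline candidate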

13.
Recently, optimization algorithms for solving a minimization problem whose objective function is a sum of two convex functions have been widely investigated in the field of image processing. In particular, the scenario in which a non-differentiable convex function such as the total variation (TV) norm is included in the objective function has received considerable interest, since many variational models encountered in image processing have this nature. In this paper, we propose a fast fixed point algorithm based on the adapted metric method and apply it to TV-based image deblurring. The novel method is derived from the idea of establishing a general fixed point algorithm framework based on an adequate quadratic approximation of one convex function in the objective function, in a way reminiscent of quasi-Newton methods. Utilizing the non-expansion property of the proximity operator, we further investigate the global convergence of the proposed algorithm. Numerical experiments on the image deblurring problem demonstrate that the proposed algorithm is very competitive with the current state-of-the-art algorithms in terms of computational efficiency.

14.
In this article we propose an FBP-type algorithm for inversion of spiral cone beam data, study its theoretical properties, and illustrate the performance of the algorithm by numerical examples. In particular, it is shown that the algorithm does not reconstruct f exactly, but computes the result of applying a pseudo-differential operator (PDO) with singular symbol to f. Away from critical directions the amplitude of this PDO is homogeneous of order zero in the dual variable, bounded, and approaches one as the pitch of the spiral goes to zero. Numerical experiments presented in the article show that even when the pitch is relatively large, the accuracy of reconstruction is quite high. On the other hand, under certain circumstances, the algorithm produces artifacts typical of all FBP-type algorithms.

15.
In this paper, we consider a single machine scheduling problem in which job processing times are controllable variables with linear costs. We concentrate on two objectives separately: minimizing a cost function containing total completion time, total absolute differences in completion times, and total compression cost; and minimizing a cost function containing total waiting time, total absolute differences in waiting times, and total compression cost. The problem is modelled as an assignment problem and thus can be solved with well-known algorithms. For the case where all jobs have a common difference between normal and crash processing times and an equal unit compression penalty, we present an O(n log n) algorithm to obtain the optimal solution.
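As an illustration of the assignment-problem device, the Python sketch below builds a job-to-position cost matrix from positional weights and solves it with scipy.optimize.linear_sum_assignment; the weights shown correspond only to the plain total completion time objective, whereas the paper's cost matrices also encode the absolute-difference and compression terms.

import numpy as np
from scipy.optimize import linear_sum_assignment

def sequence_by_assignment(p, positional_weight):
    """Optimal sequence when the objective decomposes into position-dependent
    costs: assigning job j to position r costs positional_weight(r, n) * p[j]."""
    n = len(p)
    cost = np.array([[positional_weight(r, n) * p_j for r in range(n)] for p_j in p])
    jobs, positions = linear_sum_assignment(cost)
    order = [0] * n
    for j, r in zip(jobs, positions):
        order[r] = j                       # job j occupies position r
    return order, cost[jobs, positions].sum()

# With weight n - r (0-indexed position r) the objective is the total completion
# time, and the assignment solution reproduces the SPT order.
p = [5, 2, 8, 3]
print(sequence_by_assignment(p, lambda r, n: n - r))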

16.
In this paper new multilevel algorithms are proposed for the numerical solution of first kind operator equations. Convergence estimates are established for multilevel algorithms applied to Tikhonov type regularization methods. Our theory relates the convergence rate of these algorithms to the minimal eigenvalue of the discrete version of the operator and to the regularization parameter. The algorithms and analysis are presented in an abstract setting that can be applied to first kind integral equations. Dedicated to Jim Bramble on the occasion of his sixtieth birthday.
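For context, a one-level sketch of Tikhonov regularization for a discretized first kind equation is given below; the kernel, grid, and regularization parameters are arbitrary illustrative choices, and the multilevel solvers and convergence estimates of the paper are not reproduced here.

import numpy as np

def tikhonov_solve(A, b, alpha):
    """One-level Tikhonov regularization of the first kind equation A x = b:
    minimize ||A x - b||^2 + alpha ||x||^2, i.e. solve (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Discretized first kind integral equation with a smooth, ill-conditioned kernel
n = 200
t = np.linspace(0.0, 1.0, n)
A = np.exp(-10.0 * (t[:, None] - t[None, :]) ** 2) / n   # Gaussian kernel, quadrature weight 1/n
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)
for alpha in (1e-2, 1e-4, 1e-6):
    x = tikhonov_solve(A, b, alpha)
    print(alpha, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))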

17.
In this paper we are concerned with the problem of sequencing a given set of jobs without preemption on a single machine so as to minimize total cost, where associated with each job are a processing time and a differentiable cost function defined on the completion time of the job. The problem is NP-complete in general, so an algorithm that solves it in reasonable time is unlikely to exist and a heuristic algorithm is desirable. We present two heuristic algorithms to solve the problem. The first is based on the differentials of the cost functions, and the second on a least squares approximation of the cost functions. Computational experience for quadratic, cubic, and exponential cost functions is presented.

18.
In this article a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result we show that the algorithm converges with a linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e. for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.
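The finite-dimensional core of the iteration, a gradient step on the data term followed by component-wise soft-thresholding, can be sketched in NumPy as follows; the step size, penalty parameter, and toy sparse-recovery problem are illustrative assumptions, not the Hilbert-space setting analysed in the article.

import numpy as np

def soft_threshold(v, t):
    """Component-wise soft-thresholding, the proximity operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Iterative soft-thresholding for min 0.5 ||A x - b||^2 + lam ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

# Toy sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 6, replace=False)] = rng.standard_normal(6)
b = A @ x_true
x = ista(A, b, lam=0.05)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), np.count_nonzero(np.abs(x) > 1e-3))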

19.
The paper is devoted to developing a new time- and memory-efficient algorithm, BiCGSTABmem, for solving the inverse gravimetry problem of determining a variable density in a layer from gravitational data. The problem amounts to solving a linear Fredholm integral equation of the first kind. After discretization of the domain and approximation of the integral operator, it reduces to a large system of linear algebraic equations. It is shown that the coefficient matrix is Toeplitz-block-Toeplitz in the case of a horizontal layer. For calculating and storing the elements of this matrix, we construct an efficient method that significantly reduces the required memory and time. For the case of a curvilinear layer, we construct a method for approximating parts of the matrix by a Toeplitz-block-Toeplitz one, which allows us to exploit the same efficient method for storing and processing the coefficient matrix. To solve the system of linear equations, we construct a parallel algorithm based on the stabilized biconjugate gradient method that uses the Toeplitz-block-Toeplitz structure of the matrix. We implemented the BiCGSTAB and BiCGSTABmem algorithms for the Uran cluster supercomputer using hybrid MPI + OpenMP technology. A model problem with synthetic data was solved on a large grid. It was shown that the new BiCGSTABmem algorithm reduces the computation time in comparison with BiCGSTAB. The scalability of the parallel algorithm was also studied.
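The key memory-saving idea, storing only the defining entries of a Toeplitz matrix and applying it through a circulant embedding and the FFT inside an iterative solver, is sketched below for the plain one-dimensional Toeplitz case with SciPy's BiCGSTAB; the paper works with Toeplitz-block-Toeplitz matrices and a hybrid MPI + OpenMP parallel implementation, which this sketch does not attempt, and the test matrix is an arbitrary well-conditioned example.

import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, bicgstab

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x,
    storing only c and r and using a circulant embedding with the FFT."""
    n = len(c)
    v = np.concatenate([c, r[:0:-1]])     # first column of the embedding circulant
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(np.concatenate([x, np.zeros(n - 1)])))
    return y[:n].real

n = 400
c = 0.5 ** np.arange(n)                   # geometrically decaying first column
r = 0.3 ** np.arange(n)                   # geometrically decaying first row (r[0] == c[0])
T = LinearOperator((n, n), matvec=lambda x: toeplitz_matvec(c, r, x))
b = toeplitz_matvec(c, r, np.ones(n))
x, info = bicgstab(T, b, maxiter=2000)
print(info, np.linalg.norm(toeplitz(c, r) @ x - b))   # dense matrix used only for checking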

20.
All methods for solving least-squares problems involve orthogonalization in one way or another. Certain fundamental estimation and prediction problems of signal processing and time-series analysis can be formulated as least-squares problems. In these problems, the sequence that is to be orthogonalized is generated by an underlying unitary operator. A prime example of an efficient orthogonalization procedure for this class of problems is Gragg's isometric Arnoldi process, which is the abstract encapsulation of a number of concrete algorithms. In this paper, we discuss a two-sided orthogonalization process that is equivalent to Gragg's process but has certain conceptual strengths that warrant its introduction. The connections with classical algorithms of signal processing are discussed.
