20 similar documents found (search time: 10 ms)
1.
Authentication plays an important role in security. Securing sensitive data and computer systems by allowing easy access for authenticated users while withstanding the attacks of impostors is one of the major challenges in computer security. Passwords are currently the prevailing means of controlling access to computer systems. Biometrics measure and analyze an individual's unique behavioral or physiological patterns for authentication purposes. Keystroke dynamics has emerged as an important biometric technique for analyzing typing rhythm, since it adds the trustworthiness of biometrics to familiar username-and-password schemes with little loss of ease of use. In this experiment, we measure the Hausdorff timing values, mean, standard deviation, and median of keystroke features such as latency, duration, and digraph, and of their combinations, and compare their performance. Stochastic diffusion search is used for feature subset selection.
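As an illustration of the timing features named above (the Hausdorff timing values are omitted), here is a minimal sketch of extracting duration and latency statistics from key press/release timestamps; the function name and all timestamps are hypothetical:

```python
import statistics

def keystroke_features(press_times, release_times):
    """Summary statistics of keystroke timing features (times in seconds).

    duration: hold time of each key (release - press)
    latency:  press-to-press interval between consecutive keys (digraph latency)
    """
    durations = [r - p for p, r in zip(press_times, release_times)]
    latencies = [press_times[i + 1] - press_times[i]
                 for i in range(len(press_times) - 1)]

    def summarize(xs):
        return {"mean": statistics.mean(xs),
                "std": statistics.pstdev(xs),
                "median": statistics.median(xs)}

    return {"duration": summarize(durations), "latency": summarize(latencies)}

# Timestamps for typing a 4-key password (hypothetical values)
press = [0.00, 0.25, 0.55, 0.90]
release = [0.10, 0.33, 0.67, 1.01]
feats = keystroke_features(press, release)
```

A per-user template of such statistics can then be compared against a login attempt's features to accept or reject the claimed identity.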
2.
Feature reduction based on rough set theory is an effective feature selection method in pattern recognition applications. Finding a minimal subset of the original features is inherent in the rough set approach to feature selection. Since feature reduction is NP-hard, fast optimal or near-optimal feature selection algorithms are needed. This article proposes an exact feature selection algorithm for rough sets that is efficient in terms of computation time. The proposed algorithm examines a solution tree with a breadth-first strategy, holding pruned nodes in a variant of the trie data structure. By the monotonic property of the dependency degree, no subset of a pruned node can be an optimal solution; thus, by detecting these subsets in the trie, their dependency degrees need not be calculated. The search on the tree continues until the optimal solution is found. The algorithm is further improved by choosing the initial search level with a hill-climbing method instead of starting from the level below the root; the length of the minimal reduct and the size of the data set determine which starting level is more efficient. Experimental results on standard UCI data sets demonstrate that the proposed algorithm is effective and efficient for data sets with more than 30 features. © 2014 Wiley Periodicals, Inc. Complexity 20: 50–62, 2015
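The pruning above relies on the rough set dependency degree being monotone in the attribute subset. A minimal sketch of computing the dependency degree of a decision table (the helper name and the toy data are hypothetical, not the paper's implementation):

```python
from collections import defaultdict

def dependency_degree(rows, B, d):
    """gamma_B(D): fraction of objects whose B-indiscernibility class
    maps to a single decision value (i.e. lies in the positive region)."""
    classes = defaultdict(set)
    for row in rows:
        key = tuple(row[a] for a in B)
        classes[key].add(row[d])
    pos = sum(1 for row in rows
              if len(classes[tuple(row[a] for a in B)]) == 1)
    return pos / len(rows)

# Toy decision table: condition attributes 0,1 and decision index 2
table = [
    (0, 0, 'no'), (0, 1, 'yes'), (1, 0, 'yes'), (1, 1, 'yes'),
    (0, 0, 'no'),
]
g_full = dependency_degree(table, B=(0, 1), d=2)  # consistent table
g_a0 = dependency_degree(table, B=(0,), d=2)      # attribute 0 alone
```

Because gamma can only grow as attributes are added, a subset whose gamma already equals that of the full attribute set is a candidate reduct, and supersets of pruned nodes can be skipped without recomputation.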
3.
In this paper, an evolutionary artificial immune system for multi-objective optimization which combines the global search ability of evolutionary algorithms and immune learning of artificial immune systems is proposed. A new selection strategy is developed based upon the concept of clonal selection principle to maintain the balance between exploration and exploitation. In order to maintain a diverse repertoire of antibodies, an information-theoretic based density preservation mechanism is also presented. In addition, the performances of various multi-objective evolutionary algorithms as well as the effectiveness of the proposed features are examined based upon seven benchmark problems characterized by different difficulties in local optimality, non-uniformity, discontinuity, non-convexity, high-dimensionality and constraints. The comparative study shows the effectiveness of the proposed algorithm, which produces solution sets that are highly competitive in terms of convergence, diversity and distribution. Investigations also demonstrate the contribution and robustness of the proposed features.
4.
A comparison of several nearest neighbor classifier metrics using Tabu Search algorithm for the feature selection problem
Magdalene Marinaki Yannis Marinakis Michael Doumpos Nikolaos Matsatsinis Constantin Zopounidis 《Optimization Letters》2008,2(3):299-308
The feature selection problem is an interesting and important topic relevant to a variety of database applications. This paper uses the Tabu Search metaheuristic to implement a feature subset selection procedure, with the nearest neighbor method used for the classification task. Tabu Search is a general metaheuristic procedure used to guide the search toward good solutions in complex solution spaces. Several metrics are used in the nearest neighbor classifier, namely the Euclidean distance, the standardized Euclidean distance, the Mahalanobis distance, the city block metric, the cosine distance and the correlation distance, in order to identify the most significant metric for the nearest neighbor classifier. The performance of the proposed algorithms is tested on various benchmark datasets from the UCI Machine Learning Repository.
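As an illustration of how the choice of metric changes nearest neighbor behavior (this is only the classification step, not the paper's Tabu Search procedure), a leave-one-out 1-NN comparison over three of the listed metrics on a hypothetical toy set:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cityblock(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def loo_accuracy(X, y, dist):
    """Leave-one-out accuracy of a 1-NN classifier under a given metric."""
    hits = 0
    for i, xi in enumerate(X):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: dist(xi, X[k]))
        hits += y[j] == y[i]
    return hits / len(X)

# Tiny linearly separable toy set (hypothetical data)
X = [(0.0, 0.1), (0.2, 0.0), (1.0, 1.1), (1.2, 0.9)]
y = [0, 0, 1, 1]
scores = {name: loo_accuracy(X, y, d)
          for name, d in [("euclidean", euclidean),
                          ("cityblock", cityblock),
                          ("cosine", cosine)]}
```

On this set the two magnitude-sensitive metrics classify perfectly while the cosine distance, which ignores vector length, misclassifies the near-origin points, showing why the metric choice matters for the classifier.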
5.
Feature selection problems characterized by relatively small sample sizes and extremely high-dimensional feature spaces are common in many areas of contemporary statistics. The high dimensionality of the feature space causes serious difficulties: (i) sample correlations between features become high even when the features are stochastically independent; (ii) the computation becomes intractable. These difficulties make conventional approaches either inapplicable or inefficient, so reducing the dimensionality of the feature space and then applying low-dimensional approaches appears to be the only feasible way to tackle the problem. Along this line, we develop in this article a tournament screening cum EBIC approach for feature selection in high-dimensional feature spaces. The screening procedure mimics that of a tournament. We show theoretically that tournament screening has the sure screening property, a necessary property that any valid screening procedure should satisfy. Numerical studies demonstrate that the tournament screening cum EBIC approach enjoys desirable properties such as a higher positive selection rate and a lower false discovery rate than other approaches.
Zehua Chen was supported by the Singapore Ministry of Education's AcRF Tier 1 (Grant No. R-155-000-065-112). Jiahua Chen was supported by the Natural Sciences and Engineering Research Council of Canada and MITACS, Canada.
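The tournament idea behind the screening step can be sketched as follows; the grouping scheme, marginal scores and parameters are hypothetical, and the EBIC selection stage is omitted:

```python
def tournament_screen(score, features, group_size=4, keep=2, target=4):
    """Repeatedly partition the surviving features into groups and keep the
    'keep' best-scoring features of each group until at most 'target' remain."""
    survivors = list(features)
    while len(survivors) > target:
        next_round = []
        for i in range(0, len(survivors), group_size):
            group = survivors[i:i + group_size]
            group.sort(key=score, reverse=True)
            next_round.extend(group[:keep])
        if len(next_round) == len(survivors):  # no progress: stop
            break
        survivors = next_round
    return sorted(survivors, key=score, reverse=True)[:target]

# Hypothetical marginal scores (e.g. absolute correlation with the response)
scores = {0: .9, 1: .1, 2: .8, 3: .2, 4: .7, 5: .3, 6: .95, 7: .05,
          8: .6, 9: .4, 10: .85, 11: .15}
top = tournament_screen(scores.get, range(12))
```

Because each round only ever scores small groups, the memory and computation per step stay low even when the full feature space is too large to handle at once; the survivors are then passed to a criterion such as EBIC for final selection.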
6.
Rough set feature selection (RSFS) can be used to improve classifier performance. RSFS removes redundant attributes whilst retaining important ones that preserve the classification power of the original dataset. Reducts are the feature subsets selected by RSFS, and the core is the intersection of all the reducts of a dataset. RSFS can only handle discrete attributes, so continuous attributes need to be discretized before being input to RSFS, and the discretization determines the core size of the resulting discrete dataset. However, current discretization methods do not consider the core size during discretization. Earlier work proposed the core-generating approximate minimum entropy discretization (C-GAME) algorithm, which selects the maximum number of minimum entropy cuts capable of generating a non-empty core within a discrete dataset. The contributions of this paper are as follows: (1) the C-GAME algorithm is improved by adding a new type of constraint to eliminate the possibility that only a single reduct is present in a C-GAME-discretized dataset; (2) performance evaluation of C-GAME in comparison to C4.5, multi-layer perceptron, RBF network and k-nearest neighbour classifiers on ten datasets chosen from the UCI Machine Learning Repository; (3) performance evaluation of C-GAME in comparison to the Recursive Minimum Entropy Partition (RMEP), Chimerge, Boolean Reasoning and Equal Frequency discretization algorithms on the ten datasets; (4) evaluation of the effects of C-GAME and the other four discretization methods on the sizes of reducts; (5) an upper bound on the total number of reducts within a dataset; (6) analysis of the effects of different discretization algorithms on the total number of reducts; (7) performance analysis of two RSFS algorithms (a genetic algorithm and Johnson's algorithm).
7.
This paper deals with a portfolio selection problem with fuzzy return rates. A fuzzy possibilistic mean–variance (FMVC) portfolio selection model is proposed. By possibility theory, the possibilistic programming problem can be transformed into a linear optimization problem with an additional quadratic constraint. Since no standard algorithms exist for such problems, we propose a cutting plane algorithm to solve the FMVC model, in which the nonlinear programming problem is solved as a sequence of linear programming problems. A numerical example illustrates the behavior of the proposed model and algorithm.
8.
A. Divsalar P. Vansteenwegen K. Sörensen D. Cattrysse 《European Journal of Operational Research》2014
In this paper, a memetic algorithm is developed to solve the orienteering problem with hotel selection (OPHS). The algorithm consists of two levels: a genetic component mainly focuses on finding a good sequence of intermediate hotels, whereas six local search moves embedded in a variable neighborhood structure deal with the selection and sequencing of vertices between the hotels. A set of 176 new and larger benchmark instances of OPHS are created based on optimal solutions of regular orienteering problems. Our algorithm is applied on these new instances as well as on 224 benchmark instances from the literature. The results are compared with the known optimal solutions and with the only other existing algorithm for this problem. The results clearly show that our memetic algorithm outperforms the existing algorithm in terms of solution quality and computational time. A sensitivity analysis shows the significant impact of the number of possible sequences of hotels on the difficulty of an OPHS instance.
9.
J.D. Bermúdez 《Fuzzy Sets and Systems》2012,188(1):16-26
This paper presents a new procedure that extends genetic algorithms from their traditional domain of optimization to a fuzzy ranking strategy for selecting efficient portfolios of restricted cardinality. The uncertainty of the returns on a given portfolio is modeled using fuzzy quantities, and a downside risk function describes the investor's aversion to risk. The fitness functions are based both on the value and on the ambiguity of the trapezoidal fuzzy number that represents the uncertain return. The soft-computing approach allows us to consider uncertainty and vagueness in databases and to incorporate subjective characteristics into the portfolio selection problem. We use a data set from the Spanish stock market to illustrate the performance of our approach to the portfolio selection problem.
10.
This paper investigates the feature subset selection problem for binary classification using a logistic regression model. We develop a modified discrete particle swarm optimization (PSO) algorithm for the feature subset selection problem. This approach embodies an adaptive feature selection procedure which dynamically accounts for the relevance and dependence of the features included in the feature subset. We compare the proposed methodology with tabu search and scatter search algorithms on publicly available datasets. The results show that the proposed discrete PSO algorithm is competitive in terms of both classification accuracy and computational performance.
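A minimal sketch of a binary PSO for feature subset selection, using the common sigmoid bit-flip variant rather than the authors' modified update rule; the fitness function and all parameters are hypothetical:

```python
import math
import random

def binary_pso(fitness, n_features, n_particles=8, iters=30, seed=0):
    """Minimal binary PSO: velocities are updated as in standard PSO and
    squashed through a sigmoid to give per-bit inclusion probabilities."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pfit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = pos[i][:], f
                if f > gfit:
                    gbest, gfit = pos[i][:], f
    return gbest, gfit

# Hypothetical fitness: reward selecting features 0 and 2, penalize subset size
target = {0, 2}
fit = lambda bits: (sum(1 for i, b in enumerate(bits) if b and i in target)
                    - 0.1 * sum(bits))
best, score = binary_pso(fit, n_features=6)
```

In a real feature selection run the fitness would instead be a cross-validated classification accuracy (here, of the logistic regression model) minus a subset-size penalty.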
11.
A hybrid immune multiobjective optimization algorithm
In this paper, we develop a hybrid immune multiobjective optimization algorithm (HIMO) based on the clonal selection principle. In HIMO, a hybrid mutation operator combining Gaussian and polynomial mutations (the GP-HM operator) is proposed. The GP-HM operator adopts an adaptive switching parameter to control the mutation process, applying relatively large steps with high probability to boundary individuals and less-crowded individuals. As the generations proceed, the probability of performing relatively large steps is gradually reduced. This keeps a desirable balance between global and local search and enhances exploratory capability, accelerating convergence to the true Pareto-optimal front in global spaces with many local Pareto-optimal fronts. Simulation results comparing HIMO with various recent state-of-the-art multiobjective optimization algorithms show that HIMO performs evidently better.
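A rough sketch of a hybrid Gaussian/polynomial mutation with an adaptive switching parameter in the spirit of the GP-HM operator; the step sizes, schedule and function names are assumptions, not the paper's operator:

```python
import random

def polynomial_step(u, eta=20):
    """Bounded polynomial-mutation perturbation in [-1, 1] (Deb's operator)."""
    if u < 0.5:
        return (2 * u) ** (1 / (eta + 1)) - 1
    return 1 - (2 * (1 - u)) ** (1 / (eta + 1))

def gp_mutate(x, lo, hi, gen, max_gen, rng=random):
    """Hybrid mutation sketch: early generations favour large Gaussian steps
    (exploration); later generations favour small polynomial steps
    (exploitation). The switching probability decays linearly."""
    p_gauss = 1.0 - gen / max_gen  # adaptive switching parameter
    out = []
    for xi in x:
        if rng.random() < p_gauss:
            xi = xi + rng.gauss(0.0, 0.1 * (hi - lo))  # large global step
        else:
            xi = xi + 0.01 * (hi - lo) * polynomial_step(rng.random())
        out.append(min(hi, max(lo, xi)))  # clamp to the variable bounds
    return out

random.seed(1)
child = gp_mutate([0.5, -0.2, 0.9], lo=-1.0, hi=1.0, gen=5, max_gen=50)
```

Early in the run most components take Gaussian jumps; near the end almost all take small polynomial steps, which is one simple way to realize the exploration-to-exploitation schedule the abstract describes.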
12.
Xiang Li Yang Zhang Hau-San Wong Zhongfeng Qin 《Journal of Computational and Applied Mathematics》2009,233(2):264-278
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean–variance model, the entropy optimization model and the chance constrained programming model. To solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating simulated annealing, a neural network and fuzzy simulation techniques, where the neural network approximates the expected value and variance of the fuzzy returns and fuzzy simulation generates the training data for the neural network. Since these models are usually solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given on numerical examples, which show that the hybrid intelligent algorithm is robust and more effective; in particular, it reduces the running time significantly for large problems.
13.
Mohamed Saleh Rogelio Oliva Christian Erik Kampmann Pål I. Davidsen 《European Journal of Operational Research》2010
Formal tools that link a system dynamics model's structure to the system's modes of behavior have recently become available. In this paper, we aim to expand the use of these tools to perform policy analysis in a more structured and formal way than the exhaustive exploratory approaches used to date. We consider how a policy intervention (a parameter change) affects a particular behavior mode by affecting the gains of particular feedback loops, as well as how it affects the presence of that mode in the variable of interest. The paper demonstrates the utility of considering both aspects, since the analysis provides an assessment of the overall impact of a policy on a variable and explains why the impact occurs in terms of structural changes in the model. Particularly for larger models, this method enables a much more efficient search for leverage policies by ranking the influence of each model parameter without the need for multiple simulation experiments.
14.
Patroklos Georgiadis Charalampos Michaloudis 《European Journal of Operational Research》2012,216(1):94-104
Much attention has been paid to production planning and control (PPC) in job-shop manufacturing systems. However, a gap remains between theory and practice in the ability of PPC systems to capture dynamic disturbances in the manufacturing process. Since most job-shop manufacturing systems operate in a stochastic environment, sound PPC systems are needed to identify discrepancies between planned and actual activities in real time and to provide corrective measures. By integrating production ordering and batch sizing control mechanisms into a dynamic model, we propose a comprehensive real-time PPC system for arbitrary capacitated job-shop manufacturing. We adopt a system dynamics (SD) approach, which has proved appropriate for studying the dynamic behavior of complex manufacturing systems. We study the system's response under different arrival patterns for customer orders and various types of real-time events related to customer orders and machine failures. We determine near-optimal values of the control variables, which improve shop performance in terms of average backlogged orders, work-in-process inventories and tardy jobs. The results of an extensive numerical investigation are statistically examined using analysis of variance (ANOVA). The examination reveals an insensitivity of the near-optimal values to real-time events and to the arrival pattern and variability of customer orders, as well as a positive impact of the proposed real-time PPC system on shop performance. The efficiency of the PPC system is further examined using data from a real-world manufacturer.
15.
Stochastic dominance based comparison for system selection
We present two complementing selection procedures for comparing simulated systems based on the stochastic dominance relationship of a performance metric of interest. The decision maker specifies an output quantile set representing a section of the distribution of the metric, e.g., downside or upside risks or central tendencies, as the basis for comparison. The first procedure compares systems over the quantile set of interest by a first-order stochastic dominance criterion. The systems that are deemed nondominant in the first procedure could be compared by a weaker almost first-order stochastic dominance criterion in the second procedure. Numerical examples illustrate the capabilities of the proposed procedures.
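A minimal sketch of the first-order comparison over a user-chosen quantile set (the weaker almost-dominance procedure is omitted); the quantile estimator and all data are hypothetical:

```python
def quantile(sample, q):
    """Empirical quantile by linear interpolation of the sorted sample."""
    s = sorted(sample)
    idx = q * (len(s) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

def fsd_over(a, b, qs):
    """True if sample 'a' first-order dominates 'b' over the chosen quantile
    set, assuming smaller values of the metric are better (a cost metric):
    a's quantile never exceeds b's at any quantile in the set."""
    return all(quantile(a, q) <= quantile(b, q) for q in qs)

# Simulated waiting times of two systems (hypothetical output data)
sys1 = [2.1, 2.4, 2.8, 3.0, 3.6, 4.1, 5.0]
sys2 = [2.5, 2.9, 3.1, 3.4, 3.9, 4.6, 5.8]
downside = [0.8, 0.9, 0.95]  # compare upper-tail (downside risk) only
dominates = fsd_over(sys1, sys2, downside)
```

Restricting the quantile set, as above, lets the decision maker compare only the section of the output distribution they care about, e.g. the upper tail for downside risk.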
16.
The waste disposal charging fee (WDCF) has long been adopted to give major project stakeholders (particularly project clients and contractors) incentives to minimize solid waste and increase the recovery of wasted materials in the construction industry. However, the WDCFs currently applied in many regions of China are mostly determined by a rule of thumb, so their effectiveness is very limited. This study addresses this research gap by developing a system dynamics based model to determine an appropriate WDCF in the construction sector. The data used to test and validate the model were collected from Shenzhen in south China. Two types of simulations were carried out with the model: a base run to investigate the status quo of waste generation in Shenzhen, and a policy analysis simulation with which an appropriate WDCF can be determined to reduce waste generation and landfilling, maximize waste recycling, and minimize waste dumped inappropriately. The model can function as a tool to effectively determine an appropriate WDCF in Shenzhen, and it can also be used by other regions intending to stimulate construction waste minimization and recycling through an optimal WDCF.
17.
In this paper, we study a non-local coupled system that arises in the theory of dislocation density dynamics. Within the framework of viscosity solutions, we prove a long-time existence and uniqueness result for the solution of this model. We also propose a convergent numerical scheme and prove a Crandall-Lions type error estimate between the continuous solution and the numerical one. As far as we know, this is the first error estimate of Crandall-Lions type for Hamilton-Jacobi systems. We also provide some numerical simulations.
18.
For an innovative product characterized by a short lifecycle and high demand uncertainty, investment in capacity buildup has to be made cautiously; otherwise either the product's market diffusion is impeded or the manufacturer is left with unutilized capacity. Using the right information for capacity augmentation decisions is critical to facing this challenge. In this paper, we propose a method for identifying critical information flows using a system dynamics model of a two-echelon supply chain. The fundamental premise of the system dynamics methodology is that system structure determines behavior. Using the loop dominance analysis method, we study the feedback loop structure of the supply chain system. The outcome is a set of dominant loops that determine the dynamics of capacity growth. It is revealed that delivery delay information has little effect, while the loop connecting retail sales with production orders affects the dynamics significantly; modifying this loop yields appropriate capacity augmentation decisions and higher performance. What-if analyses bring out the effects of modifying other structural elements. We conclude that the information feedback based methodology is general enough to be useful in designing decision support systems for capacity augmentation. The limitations of the model are also discussed and possible extensions identified.
19.
This work presents a new fuzzy multiple attribute decision-making (FMADM) approach, the fuzzy simple additive weighting system (FSAWS), for solving facility location selection problems with objective/subjective attributes under group decision-making (GDM) conditions. The proposed system integrates fuzzy set theory (FST), the factor rating system (FRS) and simple additive weighting (SAW) to evaluate facility location alternatives, dealing with both qualitative and quantitative dimensions. The FSAWS process considers the importance of each decision-maker, and the total scores for the alternative locations are derived by a homogeneous or heterogeneous group of decision-makers. Finally, a numerical example illustrates the procedure of the proposed FSAWS.
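A stripped-down sketch of simple additive weighting over defuzzified trapezoidal ratings; it omits the FRS stage and per-decision-maker aggregation, and all names and numbers are hypothetical:

```python
def defuzzify(tfn):
    """Centroid of a trapezoidal fuzzy number (a, b, c, d), a <= b <= c <= d."""
    a, b, c, d = tfn
    num = (d ** 2 + c ** 2 + c * d) - (a ** 2 + b ** 2 + a * b)
    den = 3 * ((c + d) - (a + b))
    # degenerate (rectangular/crisp) case: fall back to the average
    return (a + b + c + d) / 4 if den == 0 else num / den

def fuzzy_saw(ratings, weights):
    """Score each alternative as the weighted sum of its defuzzified
    attribute ratings (simple additive weighting)."""
    return {alt: sum(w * defuzzify(r) for w, r in zip(weights, row))
            for alt, row in ratings.items()}

# Trapezoidal ratings (hypothetical) of two candidate sites on two attributes
ratings = {
    "site A": [(6, 7, 8, 9), (3, 4, 5, 6)],
    "site B": [(4, 5, 6, 7), (6, 7, 8, 9)],
}
weights = [0.7, 0.3]  # normalized attribute weights (hypothetical)
scores = fuzzy_saw(ratings, weights)
best = max(scores, key=scores.get)
```

In the full FSAWS the trapezoidal ratings and weights would themselves be aggregated from several decision-makers before this scoring step.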
20.
We present a new network simplex pivot selection rule, which we call the minimum ratio pivot rule, and analyze the worst-case complexity of the resulting network simplex algorithm. We consider networks with n nodes, m arcs, integral arc capacities and integral supplies/demands of nodes. We define a {0, 1}-valued penalty for each arc of the network. The minimum ratio pivot rule is to select as the entering arc that eligible arc whose addition to the basis creates a cycle with the minimum cost-to-penalty ratio. We show that the so-defined primal network simplex algorithm solves the minimum cost flow problem within O(nΔ) pivots and in O(Δ(m + n log n)) time, where Δ is any upper bound on the sum of all arc flows in every feasible flow. For assignment and shortest path problems, our algorithm runs in O(n²) pivots and O(nm + n² log n) time.
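The entering-arc choice of the minimum ratio pivot rule can be sketched as follows, assuming the cycle cost and cycle penalty of each eligible arc have already been computed (the data and structure here are hypothetical, not a full simplex implementation):

```python
def pick_entering_arc(eligible):
    """Choose the eligible arc minimizing the cycle cost-to-penalty ratio.

    'eligible' maps arc -> (cycle_cost, penalty), where penalty is the sum
    of the {0,1}-valued arc penalties along the cycle the arc would close
    (assumed positive here).
    """
    return min(eligible, key=lambda arc: eligible[arc][0] / eligible[arc][1])

# Hypothetical eligible arcs with the cost and penalty of their basis cycles
eligible = {
    ("s", "a"): (-4.0, 2),  # ratio -2.0
    ("a", "b"): (-6.0, 2),  # ratio -3.0 -> selected
    ("b", "t"): (-2.0, 1),  # ratio -2.0
}
arc = pick_entering_arc(eligible)
```

The full algorithm would then pivot this arc into the basis, push flow around its cycle, and repeat; the complexity bounds quoted above come from how this ratio rule limits the number of such pivots.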