Similar Literature
20 similar documents found.
1.
Psychologists have studied people's intuitive notions of randomness with two kinds of tasks: judgment tasks (e.g., “is this series like a coin?” or “which of these series is most like a coin?”) and production tasks (e.g., “produce a series like a coin”). People's notion of randomness is biased in that they see clumps or streaks in truly random series and expect more alternation, or shorter runs, than are actually there. Similarly, they produce series with higher than expected alternation rates. Production tasks are subject to other biases as well, resulting from various functional limitations. The subjectively ideal random sequence obeys “local representativeness”; namely, in short segments it represents both the relative frequencies (e.g., for a coin, 50%–50%) and the irregularity (avoidance of runs and other patterns). The extent to which this bias is a handicap in the real world is addressed.
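The alternation bias is easy to demonstrate numerically. The following minimal sketch (not taken from the paper; the sequence length is arbitrary) computes the alternation rate and the longest run of a simulated fair-coin series: a truly random series alternates only about half the time and contains longer streaks than intuition typically allows.

```python
import random

def alternation_rate(seq):
    """Fraction of adjacent pairs that differ (about 0.5 for a fair coin)."""
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def longest_run(seq):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
flips = [random.randint(0, 1) for _ in range(200)]
print(alternation_rate(flips))   # close to 0.5 for a genuinely random series
print(longest_run(flips))        # genuine streaks are longer than intuition suggests
```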

2.
True random number generators are in general more secure than pseudo-random number generators. In this paper, we propose a novel true random number generator which produces a 256-bit random number from computer mouse movement. It is cheap, convenient and universal for personal computers. To eliminate the effect of similar movement patterns generated by the same user, three chaos-based approaches, namely discretized 2D chaotic map permutation, spatiotemporal chaos and the “MASK” algorithm, are adopted to post-process the captured mouse movements. Random bits generated by three users are tested using the NIST statistical tests. Both the spatiotemporal chaos approach and the “MASK” algorithm pass the tests successfully. However, the latter performs better in terms of efficiency and effectiveness and so is more practical for common personal computer applications.
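The abstract does not spell out the post-processing schemes, so the sketch below only illustrates the general pipeline under stated assumptions: simulated mouse deltas stand in for real cursor captures (a real generator would read events through a GUI or OS library), and a generic logistic-map whitener stands in for the paper's chaotic-map, spatiotemporal-chaos, or MASK post-processing.

```python
import random

def logistic_whiten(samples, r=3.9999, x0=0.61803):
    """Fold raw samples into a logistic-map state and emit one bit per iteration.
    A generic chaos-based whitener, NOT the paper's MASK algorithm."""
    x, bits = x0, []
    for s in samples:
        x = (x + (s % 251) / 251.0) % 1.0   # perturb the state with the raw sample
        if x == 0.0:                        # keep the state strictly inside (0, 1)
            x = 0.5
        for _ in range(8):
            x = r * x * (1.0 - x)           # logistic map iteration
            bits.append(1 if x > 0.5 else 0)
    return bits

# Simulated mouse deltas; a real generator would capture cursor positions instead.
random.seed(42)
deltas = [random.randint(-5, 5) for _ in range(64)]
bits = logistic_whiten(deltas)
print(hex(int("".join(map(str, bits[:256])), 2)))   # a 256-bit output block
```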

3.
We study the computational problem “find the value of the quantified formula obtained by quantifying the variables in a sum of terms.” The “sum” can be based on any commutative monoid, the “quantifiers” need only satisfy two simple conditions, and the variables can have any finite domain. This problem is a generalization of the problem “given a sum-of-products of terms, find the value of the sum” studied in [R.E. Stearns and H.B. Hunt III, SIAM J. Comput. 25 (1996) 448–476]. A data structure called a “structure tree” is defined which displays information about “subproblems” that can be solved independently during the process of evaluating the formula. Some formulas have “good” structure trees which enable certain generic algorithms to evaluate the formulas in significantly less time than by brute-force evaluation. By “generic algorithm,” we mean an algorithm constructed from uninterpreted function symbols, quantifier symbols, and monoid operations. The algebraic nature of the model facilitates a formal treatment of “local reductions” based on the “local replacement” of terms. Such local reductions “preserve formula structure” in the sense that structure trees with nice properties transform into structure trees with similar properties. These local reductions can also be used to transform hierarchically specified problems with useful structure into hierarchically specified problems having similar structure.
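As a toy illustration of why independent subproblems pay off, the sketch below (ordinary arithmetic standing in for a general commutative monoid; the functions f, g, h and the domain are invented) evaluates a small sum of terms both by brute force over all joint assignments and by summing the two variable-disjoint subproblems separately before recombining.

```python
from itertools import product

# Toy sum of terms: value(x, y, z) = f(x)*g(x, y) + h(z) over the domain D.
f = {0: 1, 1: 2}
g = {(0, 0): 3, (0, 1): 1, (1, 0): 2, (1, 1): 4}
h = {0: 5, 1: 7}
D = [0, 1]

def brute_force():
    """Sum the formula over every joint assignment: |D|**3 evaluations."""
    return sum(f[x] * g[x, y] + h[z] for x, y, z in product(D, D, D))

def structured():
    """Exploit the structure: z never interacts with (x, y), so the two
    subproblems can be summed independently and then recombined."""
    part_xy = sum(f[x] * g[x, y] for x, y in product(D, D))  # depends on x, y only
    part_z = sum(h[z] for z in D)                            # depends on z only
    # Each subproblem's total is replicated once per assignment of the other's variables.
    return len(D) * part_xy + len(D) ** 2 * part_z

assert brute_force() == structured()
print(structured())
```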

4.
Consider the class of linear models (with uncorrelated observations, each having variance σ²) in which it is known that at most k (location) parameters are nonnegligible, but it is not known which. The problem is to identify the nonnegligible parameters. In this paper, for k = 1, and under certain restrictions on the model, a technique is developed for solving this problem which has the feature of requiring (in an information-theoretic sense) the minimum amount of computation: it can “search through” 2^m objects using m “steps.” The technique consists of dichotomizing the set of parameters (one known subset possibly containing the nonnegligible element, and the other not), using chi-square variables. A method for computing the probability that the correct parameter is identified is presented, and an important application to factorial search designs is established.

5.
Large “O” and small “o” approximations of the expected value of a class of smooth functions (f ∈ C^r(ℝ)) of the normalized partial sums of dependent random variables by the expectation of the corresponding functions of normal random variables have been established. The same types of approximations are also obtained for dependent random vectors. The technique used is the Lindeberg-Lévy method as generalized by Dvoretzky to dependent random variables.

6.
Designing a robust sensor network to detect accidental contaminants in water distribution systems is a challenge, given the uncertain nature of contamination events (what, how much, when, where and for how long) and the dynamic nature of water distribution systems (driven by the random consumption of consumers). We formulate a set of scenario-based minimax and minimax-regret models in order to provide robust sensor-placement schemes that perform well under all realizable contamination scenarios, and thus protect water consumers. Single- and multi-objective versions of these models are then applied to a real water distribution system. A heuristic solution method is applied to solve the robust models. The concept of a “sensitivity region” is used to visualize trade-offs between multiple objectives.
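To make the minimax-regret idea concrete, here is a tiny enumeration sketch with made-up numbers (the impact table, node names and sensor budget are all hypothetical): each candidate placement is scored by its worst-case regret against the best achievable impact in every scenario, and the placement with the smallest worst regret is kept. The paper's heuristic replaces this brute-force enumeration for realistic network sizes.

```python
from itertools import combinations

# Hypothetical detection-impact table: impact[s][n] is the damage under
# contamination scenario s if a sensor sits at node n (lower is better).
impact = {
    "s1": {"A": 10, "B": 40, "C": 25},
    "s2": {"A": 50, "B": 15, "C": 30},
    "s3": {"A": 22, "B": 45, "C": 20},
}
nodes, budget = ["A", "B", "C"], 2

def scenario_cost(placement, s):
    """With several sensors, the earliest detection dominates: take the best node."""
    return min(impact[s][n] for n in placement)

best_possible = {s: min(impact[s].values()) for s in impact}

def max_regret(placement):
    """Worst-case regret of a placement over all scenarios."""
    return max(scenario_cost(placement, s) - best_possible[s] for s in impact)

robust = min(combinations(nodes, budget), key=max_regret)
print(robust, max_regret(robust))   # the minimax-regret placement and its regret
```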

7.
8.
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.
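The score that either estimation scheme assigns to a candidate sequence is its likelihood under the model, computed with the forward algorithm. The sketch below scores a sequence under a small, hypothetical three-state left-right HMM; the states, transition and emission tables are invented for illustration and are far simpler than a real profile HMM architecture.

```python
# Hypothetical 3-state left-right HMM over a two-letter alphabet.
states = [0, 1, 2]
start = [1.0, 0.0, 0.0]
trans = [[0.6, 0.4, 0.0],        # "left-right": no backward transitions
         [0.0, 0.7, 0.3],
         [0.0, 0.0, 1.0]]
emit = [{"A": 0.9, "B": 0.1},
        {"A": 0.2, "B": 0.8},
        {"A": 0.5, "B": 0.5}]

def score(seq):
    """Forward algorithm: the likelihood of seq under the model."""
    alpha = [start[s] * emit[s][seq[0]] for s in states]
    for sym in seq[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][sym]
                 for s in states]
    return sum(alpha)

print(score("AABBB"))   # higher scores indicate closer matches to the profile
```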

9.
Two new heuristic algorithms for solving cost-oriented assembly line balancing problems, the Wage-Rate Method (WR) and the Wage-Rate-Smoothing Method (WRS), are presented and compared with two known heuristics, the Positional-Weight Method (PW) and the Positional-Weight-Wage-Rate-Difference Method (PWWD), with respect to their solution quality. First, the heuristics are outlined and their computational effort is stated. Then a theoretical worst-case bound on the solution quality is given and the results of an extensive performance study are reported. In the study, the heuristics were investigated with respect to their solution quality by solving randomly generated line balancing problems and problems from the literature. It can be concluded that PWWD and WRS are generally superior to PW and WR. Parts of this research have been supported by the Stiftung Industrieforschung.
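For orientation, the sketch below implements only the classic ranked positional-weight assignment that the PW family builds on (the task data, cycle time and single-criterion objective are hypothetical, and the wage-rate refinements of WR and WRS are not reproduced): tasks are ranked by their own duration plus the durations of all successors and packed into stations without violating precedence or the cycle time.

```python
# Hypothetical tasks: duration and immediate successors in the precedence graph.
# Assumes every task fits within the cycle time on its own.
tasks = {
    "a": (4, ["c"]), "b": (3, ["c", "d"]),
    "c": (5, ["e"]), "d": (2, ["e"]), "e": (4, []),
}
cycle_time = 8
preds = {t: [p for p in tasks if t in tasks[p][1]] for t in tasks}

def successors(t):
    """All transitive successors of task t."""
    seen, stack = set(), list(tasks[t][1])
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(tasks[s][1])
    return seen

def positional_weight(t):
    """Own duration plus the durations of every successor."""
    return tasks[t][0] + sum(tasks[s][0] for s in successors(t))

order = sorted(tasks, key=positional_weight, reverse=True)
stations, assigned = [], set()
while len(assigned) < len(tasks):
    load, station, placed = 0, [], True
    while placed:                      # fill the current station greedily
        placed = False
        for t in order:
            if (t not in assigned and all(p in assigned for p in preds[t])
                    and load + tasks[t][0] <= cycle_time):
                station.append(t); assigned.add(t); load += tasks[t][0]
                placed = True
                break
    stations.append(station)
print(stations)   # e.g. [['b', 'a'], ['c', 'd'], ['e']]
```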

10.
In Bayesian analysis it is usual to assume that the risk profiles Θ1 and Θ2 associated with the random variables “number of claims” and “amount of a single claim”, respectively, are independent. A few studies have addressed models of this nature that assume some degree of dependence between the two random variables (most of them using copulas). In this paper, we focus on the collective and Bayes net premiums for the aggregate amount of claims under a compound model, assuming some degree of dependence between the random variables Θ1 and Θ2. The degree of dependence is modelled using the Sarmanov–Lee family of distributions [Sarmanov, O.V., 1966. Generalized normal correlation and two-dimensional Frechet classes. Doklady (Soviet Mathematics) 168, 596–599; Ting-Lee, M.L., 1996. Properties and applications of the Sarmanov family of bivariate distributions. Communications in Statistics: Theory and Methods 25 (6), 1207–1222], which allows us to study the impact of this assumption on the collective and Bayes net premiums. The results obtained show that even a low degree of correlation produces Bayes premiums that are highly sensitive.

11.
We present a deceptively simple, yet empirically powerful metaheuristic, called jump search, for solving traveling salesman problems; it has been found to be more effective than tabu search on both random and benchmark test problems from the literature. While the underlying philosophy of jump search (applying local search from different starting points) has been discussed in the literature previously (using random starting points), the use of construction-based heuristic solutions has heretofore not been considered. Extensive empirical analysis shows the method to be surprisingly effective vis-à-vis a randomized strategy and in comparison with tabu search. The approach is quite robust and suggests that a systematic search among neighborhoods of good, not random, solutions provides distinct advantages. This suggests that further research be focused on better construction heuristics and on hybrid schemes that incorporate jump search in, for example, tabu search.
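A minimal sketch of the idea, not the paper's implementation: build one tour per starting city with the nearest-neighbour construction heuristic, improve each tour with 2-opt local search, and keep the best result. The instance size and random coordinates are arbitrary.

```python
import math, random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(12)]

def dist(i, j):
    return math.dist(cities[i], cities[j])

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

def nearest_neighbour(start):
    """Construction heuristic: greedily visit the closest unvisited city."""
    tour, left = [start], set(range(len(cities))) - {start}
    while left:
        nxt = min(left, key=lambda j: dist(tour[-1], j))
        tour.append(nxt)
        left.remove(nxt)
    return tour

def two_opt(tour):
    """Local search: reverse segments as long as that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand) < tour_length(tour):
                    tour, improved = cand, True
    return tour

# "Jump" between constructed starting solutions instead of random restarts.
best = min((two_opt(nearest_neighbour(s)) for s in range(len(cities))), key=tour_length)
print(round(tour_length(best), 3))
```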

12.
Consider the evolution of the density of a mass or a population, geographically situated in a compact region of space, assuming random creation-annihilation and migration, or dispersion, of mass, so that the evolution is a random measure. When the creation-annihilation and dispersion are diffusions, the situation is described formally by a stochastic partial differential equation. Ignoring dispersion, approximate the initial density by atomic measures; if the corresponding discrete random measures converge “in law” to a unique random measure, call it a solution. To account for dispersion, Trotter's product formula is applied to the semiflows corresponding to dispersion and creation-annihilation. The existence of solutions has been a conjecture for several years, despite a claim in ([2], J. Multivariate Anal. 5, 1–52). We show that solutions exist and that non-deterministic solutions are “smeared” continuous-state branching diffusions.

13.
Conditionally specified statistical models are frequently constructed from one-parameter exponential family conditional distributions. One way to formulate such a model is to specify the dependence structure among random variables through the use of a Markov random field (MRF). A common assumption on the Gibbsian form of the MRF model is that dependence is expressed only through pairs of random variables, which we refer to as the “pairwise-only dependence” assumption. Based on this assumption, J. Besag (1974, J. Roy. Statist. Soc. Ser. B 36, 192–225) formulated exponential family “auto-models” and showed the form that one-parameter exponential family conditional densities must take in such models. We extend these results by relaxing the pairwise-only dependence assumption, and we give a necessary form that one-parameter exponential family conditional densities must take under more general conditions of multiway dependence. Data on the spatial distribution of the European corn borer larvae are fitted using a model with Bernoulli conditional distributions and several dependence structures, including pairwise-only, three-way, and four-way dependencies.
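For reference, the pairwise-only Bernoulli case is Besag's autologistic auto-model, in which each site's conditional distribution is a logistic function of a linear term plus pairwise interactions with its neighbours. The sketch below runs a Gibbs sampler for that baseline model on a small grid (grid size and parameter values are made up); the paper's multiway extensions would add higher-order interaction terms to the conditional natural parameter.

```python
import math, random

random.seed(0)
n, alpha, beta = 10, -0.5, 0.8   # grid size and hypothetical autologistic parameters
y = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

def neighbour_sum(i, j):
    """Sum of the four nearest neighbours (free boundary)."""
    return sum(y[a][b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
               if 0 <= a < n and 0 <= b < n)

def gibbs_sweep():
    """One Gibbs sweep: each site is redrawn from its Bernoulli conditional,
    which depends on the field only through pairwise neighbour terms."""
    for i in range(n):
        for j in range(n):
            eta = alpha + beta * neighbour_sum(i, j)   # conditional natural parameter
            p = 1.0 / (1.0 + math.exp(-eta))
            y[i][j] = 1 if random.random() < p else 0

for _ in range(200):
    gibbs_sweep()
print(sum(map(sum, y)) / n**2)   # empirical occupancy after burn-in
```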

14.
Several properties of the generation and evolution of phase-separating patterns in binary materials, studied with a CDS (cell dynamical system) model, are presented. The main conclusions are: (1) For spinodal decomposition of alloys, the notions of “macro-pattern” and “micro-pattern” are introduced via the “black-and-white graph” and the “gray-scale graph”, respectively. We find that although the four forms of the map f representing the self-evolution of the order parameter in a cell (lattice site) are similar to each other in the macro-pattern, there are evident differences in their micro-patterns; for example, different fine netted structures in the black and white domains are revealed by the micro-pattern, so that distinct mechanical and physical behaviors are to be expected. (2) If the two constituents of a block copolymer are not symmetric (i.e., r ≠ 0.5), a pattern called the “grain-strip cross pattern” is discovered in the range 0.43 < r < 0.45.
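The abstract does not spell out the CDS update, so the sketch below uses a common cell-dynamics form as an assumption rather than the paper's exact scheme: a local map, here f(x) = A·tanh(x), drives each cell toward one of the two phases while a diffusive coupling to the neighbourhood average competes with it. The lattice size and the parameters A and D are arbitrary, and the conserved (block-copolymer) variant is not reproduced.

```python
import math, random

random.seed(3)
N, A, D, steps = 32, 1.3, 0.5, 200   # lattice size and hypothetical CDS parameters
psi = [[random.uniform(-0.01, 0.01) for _ in range(N)] for _ in range(N)]

def neighbour_average(field, i, j):
    """Average over the 8 surrounding cells (periodic boundary)."""
    nn = [field[(i + di) % N][(j + dj) % N]
          for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    return sum(nn) / len(nn)

def cds_step(field):
    """One cell-dynamics update: local map f(x) = A*tanh(x) plus diffusive coupling."""
    return [[A * math.tanh(field[i][j])
             + D * (neighbour_average(field, i, j) - field[i][j])
             for j in range(N)] for i in range(N)]

for _ in range(steps):
    psi = cds_step(psi)

# A crude "macro-pattern": threshold the field into a black-and-white map of the phases.
black = sum(1 for row in psi for v in row if v > 0)
print(f"{black / N**2:.2f} of the cells are in the '+' phase")
```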

15.
The cyclic projections algorithm is an important method for determining a point in the intersection of a finite number of closed convex sets in a Hilbert space. That is, for determining a solution to the “convex feasibility” problem. This is the third paper in a series on a study of the rate of convergence for the cyclic projections algorithm. In the first of these papers, we showed that the rate could be described in terms of the “angles” between the convex sets involved. In the second, we showed that these angles often had a more tractable formulation in terms of the “norm” of the product of the (nonlinear) metric projections onto related convex sets. In this paper, we show that the rate of convergence of the cyclic projections algorithm is also intimately related to the “linear regularity property” of Bauschke and Borwein, the “normal property” of Jameson (as well as Bakan, Deutsch, and Li’s generalization of Jameson’s normal property), the “strong conical hull intersection property” of Deutsch, Li, and Ward, and the rate of convergence of iterated parallel projections. Such properties have already been shown to be important in various other contexts as well.
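A minimal sketch of the algorithm itself, on toy sets not taken from the paper: alternately apply the metric projections onto two convex sets in the plane, a unit disk and a half-plane, and watch the iterates settle into the intersection. The convergence is linear, at a rate tied to the angle between the sets.

```python
import math

# Two convex sets in the plane: the unit disk centred at the origin and the
# half-plane {y >= 0.6}; their intersection is a lens-shaped region.
def project_disk(p):
    x, y = p
    r = math.hypot(x, y)
    return p if r <= 1 else (x / r, y / r)

def project_halfplane(p):
    x, y = p
    return (x, max(y, 0.6))

def cyclic_projections(p, iters=50):
    """Alternate the two metric projections; the iterates converge to a point of
    the intersection at a linear rate governed by the 'angle' between the sets."""
    trace = [p]
    for _ in range(iters):
        p = project_halfplane(project_disk(p))
        trace.append(p)
    return trace

trace = cyclic_projections((2.0, -1.0))
print(trace[-1])   # approximately a point lying in both sets
```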

16.
The convergence properties of genetic algorithms with noisy fitness information are studied here. In the proposed scheme, hypothesis testing methods are used to compare sample fitness values. The “best” individual of each generation is kept and a greater-than-zero mutation rate is used so that every individual will be generated with positive probability in each generation. The convergence criterion is different from the frequently used uniform population criterion; instead, the sequence of the “best” individual in each generation is considered, and the algorithm is regarded as convergent if the sequence of the “best” individuals converges with probability one to a point with optimal average fitness.
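The sketch below shows the mechanics on a one-max toy problem with Gaussian observation noise (the problem, noise level and all parameters are invented): candidates are compared through repeated fitness samples with a crude two-standard-error rule standing in for the paper's hypothesis tests, the incumbent “best” individual is retained each generation, and every bit mutates with positive probability.

```python
import random, statistics

random.seed(7)
L, POP, GENS, MUT, SAMPLES = 12, 20, 60, 0.05, 8

def true_fitness(bits):
    return sum(bits)                     # one-max: the optimum is all ones

def noisy_fitness(bits):
    return true_fitness(bits) + random.gauss(0, 1.0)

def better(a, b):
    """Crude hypothesis-test comparison on repeated noisy samples: prefer a only
    if its sample mean exceeds b's by roughly two standard errors."""
    fa = [noisy_fitness(a) for _ in range(SAMPLES)]
    fb = [noisy_fitness(b) for _ in range(SAMPLES)]
    se = ((statistics.variance(fa) + statistics.variance(fb)) / SAMPLES) ** 0.5
    return statistics.mean(fa) - statistics.mean(fb) > 2 * se

def mutate(bits):
    # Positive mutation rate: every string is reachable in every generation.
    return [1 - b if random.random() < MUT else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
best = pop[0]
for _ in range(GENS):
    for ind in pop:
        if better(ind, best):            # elitism: replace the incumbent only on evidence
            best = ind
    pop = [mutate(random.choice(pop)) for _ in range(POP - 1)] + [best]
print(true_fitness(best), best)
```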

17.
For a nonnegative strictly stationary random sequence satisfying the “minimal” dependence condition, necessary and sufficient conditions for relative stability are found. As an application, the well-known Khinchine stability result for i.i.d. random variables is proved for uniformly strong mixing sequences.

18.
In a random graph on n vertices, the maximum clique is likely to be of size very close to 2 lg n. However, the clique produced by applying the naive “greedy” heuristic to a random graph is unlikely to have size much exceeding lg n. The factor of two separating these estimates motivates the search for more effective heuristics. This article analyzes a heuristic search strategy, the Metropolis process, which is just one step above the greedy one in its level of sophistication. It is shown that the Metropolis process takes super-polynomial time to locate a clique that is only slightly bigger than that produced by the greedy heuristic.
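The sketch below runs such a process on a small G(n, 1/2) instance (n, the temperature-like parameter λ and the step budget are arbitrary): the state is always a clique, a proposed vertex is added whenever the result is still a clique, and a vertex is removed only with probability 1/λ. Consistent with the analysis, the cliques found within a modest number of steps stay near lg n rather than 2 lg n.

```python
import random
from itertools import combinations

random.seed(5)
n, lam, steps = 60, 2.0, 20000
adj = [[False] * n for _ in range(n)]
for i, j in combinations(range(n), 2):
    adj[i][j] = adj[j][i] = random.random() < 0.5   # G(n, 1/2)

clique, best = set(), set()
for _ in range(steps):
    v = random.randrange(n)
    if v in clique:
        if random.random() < 1.0 / lam:      # downhill move, accepted with prob 1/lambda
            clique.remove(v)
    elif all(adj[v][u] for u in clique):     # uphill move: only into a larger clique
        clique.add(v)
    if len(clique) > len(best):
        best = set(clique)
print(len(best))   # typically stays near lg n, well below the 2 lg n optimum
```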

19.
Let {Vk} be a nested sequence of closed subspaces that constitute a multiresolution analysis of L2(ℝ). We characterize the family Φ = {φ} where each φ generates this multiresolution analysis such that the two-scale relation of φ is governed by a finite sequence. In particular, we identify the φ ∈ Φ that has minimum support. We also characterize the collection Ψ of functions η such that each η generates the orthogonal complementary subspaces Wk of Vk. In particular, the minimally supported ψ ∈ Ψ is determined. Hence, the “B-spline” and “B-wavelet” pair (φ, ψ) provides the most economical and computationally efficient “spline” representations and “wavelet” decompositions of L2 functions from the “spline” spaces Vk and “wavelet” spaces Wk, k ∈ ℤ. A very general duality principle, which yields the dual bases of both {φ(·−j) : j ∈ ℤ} and {η(·−j) : j ∈ ℤ} for any η ∈ Ψ by essentially interchanging the pair of two-scale sequences with the pair of decomposition sequences, is also established. For many filtering applications, it is very important to select a multiresolution analysis for which both φ and ψ have linear phase. Hence, “non-symmetric” φ and ψ, such as the compactly supported orthogonal ones introduced by Daubechies, are sometimes undesirable for these applications. Conditions for linear-phase φ and ψ are established in this paper. In particular, the even-order polynomial B-splines and B-wavelets φm and ψm have linear phase, but the odd-order B-wavelets only have generalized linear phase.
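As a concrete instance of a two-scale relation governed by a finite sequence, the sketch below numerically checks the refinement identity of the linear B-spline (hat function), whose mask is (1/2, 1, 1/2). This is a standard textbook fact used only for illustration, not a computation from the paper.

```python
def hat(x):
    """Linear B-spline (hat function) supported on [0, 2]."""
    if 0.0 <= x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

# Two-scale relation for the linear B-spline: the refinement mask is the finite
# sequence (1/2, 1, 1/2), i.e. phi(x) = 1/2 phi(2x) + phi(2x - 1) + 1/2 phi(2x - 2).
mask = [0.5, 1.0, 0.5]
for x in [0.25 * k for k in range(9)]:
    refined = sum(c * hat(2 * x - j) for j, c in enumerate(mask))
    assert abs(hat(x) - refined) < 1e-12
print("two-scale relation verified at the sample points")
```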

20.
This article analyzes the process by which pupils acquire new uses of multiplication to measure area. The behavior of five 4th-grade pupils in a series of lessons on area was studied in depth using a qualitative case-study methodology. Their use of multiplication changed as the lessons evolved, characterized conceptually as “using multiplication as a label,” “using it positively to approach problems which have not been solved before,” and “using it effectively to achieve the goal of measuring areas.” These three phases show the pupils' understanding of multiplication in the context of measuring areas developing from a secondary accompaniment into a powerful tool of thinking. The phases observed and the students' progress between the phases differ noticeably among the pupils. Factors that foster learners' progress are investigated by comparing their behaviors.
