Similar Literature
20 similar results found (search time: 62 ms)
1.
In this paper, we present a new algorithm to evaluate the Kauffman bracket polynomial. The algorithm uses cyclic permutations to count the states obtained by applying ‘A’- and ‘B’-type smoothings to each crossing of the knot. We show that our algorithm can be implemented easily by computer programming.
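For orientation, a brute-force state-sum sketch of the Kauffman bracket (not the paper's cyclic-permutation method): every crossing is smoothed as ‘A’ or ‘B’, and each of the 2^n states contributes A^(#A−#B)·d^(loops−1) with d = −A²−A⁻². The loop counts below are the known values for the standard 3-crossing trefoil diagram.

```python
from itertools import product

# Laurent polynomials in A are represented as dicts: exponent -> coefficient.

def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c}

def poly_add(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

D = {2: -1, -2: -1}  # d = -A^2 - A^-2

def bracket(n, loop_count):
    """Sum A^(#A - #B) * d^(loops - 1) over all 2^n smoothing states.

    loop_count maps each state (tuple of 'A'/'B') to its number of loops.
    """
    total = {}
    for state in product('AB', repeat=n):
        a = state.count('A')
        term = {2 * a - n: 1}            # A^(#A - #B)
        for _ in range(loop_count[state] - 1):
            term = poly_mul(term, D)     # multiply by d^(loops - 1)
        total = poly_add(total, term)
    return total

# Known loop counts for the standard trefoil diagram, by number of B-smoothings:
trefoil_loops = {s: {0: 2, 1: 1, 2: 2, 3: 3}[s.count('B')]
                 for s in product('AB', repeat=3)}
result = bracket(3, trefoil_loops)  # the bracket -A^5 - A^-3 + A^-7, as a dict
```

This recovers the classical value ⟨trefoil⟩ = −A⁵ − A⁻³ + A⁻⁷.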

2.
This paper reports one aspect of a larger study which looked at the strategies used by a selection of grade 6 students to solve six non-routine mathematical problems. The data revealed that the students exhibited many of the behaviours identified in the literature as being associated with novice and expert problem solvers. However, the categories of ‘novice’ and ‘expert’ were not fully adequate to describe the range of behaviours observed and instead three categories that were characteristic of behaviours associated with ‘naïve’, ‘routine’ and ‘sophisticated’ approaches to solving problems were identified. Furthermore, examination of individual cases revealed that each student's problem solving performance was consistent across a range of problems, indicating a particular orientation towards naïve, routine or sophisticated problem solving behaviours. This paper describes common problem solving behaviours and details three individual cases involving naïve, routine and sophisticated problem solvers.

3.
We first give conditions for a univariate square integrable function to be a scaling function of a frame multiresolution analysis (FMRA) by generalizing the corresponding conditions for a scaling function of a multiresolution analysis (MRA). We also characterize the spectrum of the ‘central space’ of an FMRA, and then give a new condition for an FMRA to admit a single frame wavelet solely in terms of the spectrum of the central space of an FMRA. This improves the results previously obtained by Benedetto and Treiber and by some of the authors. Our methods and results are applied to the problem of the ‘containments’ of FMRAs in MRAs. We first prove that an FMRA is always contained in an MRA, and then we characterize those MRAs that contain ‘genuine’ FMRAs in terms of the unique low-pass filters of the MRAs and the spectra of the central spaces of the FMRAs to be contained. This characterization shows, in particular, that if the low-pass filter of an MRA is almost everywhere zero-free, as is the case of the MRAs of Daubechies, then the MRA contains no FMRAs other than itself.
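For background, the classical MRA two-scale relation that the FMRA scaling-function conditions generalize (standard textbook material, not the paper's new conditions):

```latex
\varphi(x) \;=\; \sqrt{2}\,\sum_{k\in\mathbb{Z}} h_k\,\varphi(2x-k),
\qquad
\hat{\varphi}(2\xi) \;=\; m_0(\xi)\,\hat{\varphi}(\xi),
\qquad
m_0(\xi) \;=\; \frac{1}{\sqrt{2}}\sum_{k\in\mathbb{Z}} h_k\,e^{-2\pi i k\xi},
```

where m_0 is the low-pass filter; in an FMRA the translates of φ are only required to form a frame, not an orthonormal or Riesz basis, of the central space.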

4.
Calibration refers to the adjustment of the posterior probabilities output by a classification algorithm towards the true prior probability distribution of the target classes. This adjustment is necessary to account for the difference in prior distributions between the training set and the test set. This article proposes a new calibration method, called the probability-mapping approach. Two types of mapping are proposed: linear and non-linear probability mapping. These new calibration techniques are applied to 9 real-life direct marketing datasets. The newly-proposed techniques are compared with the original, non-calibrated posterior probabilities and the adjusted posterior probabilities obtained using the rescaling algorithm of Saerens et al. (2002). The results indicate that marketing researchers should calibrate the posterior probabilities obtained from the classifier. Moreover, the ‘simple’ rescaling algorithm is shown not to be sufficient on its own, as the results suggest applying the newly-proposed non-linear probability-mapping approach for the best calibration performance.
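A minimal sketch of the prior-correction step underlying the rescaling approach of Saerens et al. (2002): each posterior is reweighted by the ratio of deployment-time to training-time class priors and then renormalized (the full method estimates the new priors by EM; the priors here are illustrative).

```python
def rescale_posteriors(posts, train_priors, test_priors):
    """Reweight classifier posteriors by new/old prior ratios, renormalize."""
    w = [p * nt / tr for p, tr, nt in zip(posts, train_priors, test_priors)]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical example: classifier trained on balanced classes, deployed
# where the positive class occurs only 5% of the time.
adjusted = rescale_posteriors([0.7, 0.3], [0.5, 0.5], [0.05, 0.95])
# adjusted ≈ [0.109, 0.891]: the raw 0.7 shrinks sharply under the rare prior
```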

5.
The wavelet method is a recently developed tool in applied mathematics. The investigation of various wavelet methods, valued for their capability of analyzing dynamic phenomena through waves, has gained more and more attention in engineering research. From offering good solutions to differential equations to capturing the nonlinearity in data distributions, wavelets are used as appropriate tools in various places to provide good mathematical models for scientific phenomena, which are usually modeled through linear or nonlinear differential equations. The review shows that the wavelet method is efficient and powerful in solving a wide class of linear and nonlinear reaction–diffusion equations. This review intends to convey the great utility of wavelets, whose origins date back to 1919, to science and engineering problems. Future scope and directions in developing wavelet algorithms for solving reaction–diffusion equations are also addressed.

6.
Multi-physics simulation often requires the solution of a suite of interacting physical phenomena, the nature of which may vary both spatially and in time. For example, in a casting simulation there is thermo-mechanical behaviour in the structural mould, whilst in the cast, as the metal cools and solidifies, the buoyancy induced flow ceases and stresses begin to develop. When using a single code to simulate such problems it is conventional to solve each ‘physics’ component over the whole single mesh, using definitions of material properties or source terms to ensure that a solved variable remains zero in the region in which the associated physical phenomenon is not active. Although this method is secure, in that it enables any and all the ‘active’ physics to be captured across the whole domain, it is computationally inefficient in both scalar and parallel execution. An alternative, known as the ‘group’ solver approach, involves more formal domain decomposition whereby specific combinations of physics are solved for on prescribed sub-domains. The ‘group’ solution method has been implemented in a three-dimensional finite volume, unstructured mesh multi-physics code, which is parallelised, employing a multi-phase mesh partitioning capability which attempts to optimise the load balance across the target parallel HPC system. The potential benefits of the ‘group’ solution strategy are evaluated on a class of multi-physics problems involving thermo-fluid–structural interaction on both single- and multi-processor systems. In summary, the ‘group’ solver is a third faster on a single processor than the single domain strategy and preserves its scalability on a parallel cluster system.

7.
8.
Recently, nature-inspired algorithms have increasingly attracted the attention of researchers. Because the position vectors in binary particle swarm optimization (BPSO), consisting of ‘0’s and ‘1’s, can be seen as decision behaviour (support or oppose), in this paper we propose a BPSO with hierarchical structure (BPSO_HS for short), on the basis of multi-level organizational learning behaviour. At each iteration of BPSO_HS, particles are divided into two classes, named ‘leaders’ and ‘followers’, and a different evolutionary strategy is used for each class. In addition, a mutation strategy is adopted to overcome premature convergence and slow convergence during the later stages of optimization. The algorithm was tested on two discrete optimization problems (Traveling Salesman and Bin Packing) as well as seven real-parameter functions. The experimental results showed that the performance of BPSO_HS was significantly better than that of several existing algorithms.
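To illustrate the ‘0’/‘1’ position encoding that BPSO_HS builds on, here is a minimal standard binary PSO in the style of Kennedy and Eberhart (1997) on a OneMax objective; the leader/follower hierarchy and the mutation strategy of BPSO_HS are not reproduced, and all parameter values are illustrative.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso(fitness, n_bits, n_particles=20, iters=50,
         w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard binary PSO: real-valued velocities, bits sampled via sigmoid."""
    rng = random.Random(seed)
    xs = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vs = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = list(max(pbest, key=fitness))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # position bit is 1 with probability sigmoid(velocity)
                xs[i][d] = 1 if rng.random() < sigmoid(vs[i][d]) else 0
            if fitness(xs[i]) > fitness(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = list(max(pbest + [gbest], key=fitness))
    return gbest

best = bpso(sum, n_bits=16)  # OneMax: maximize the number of 1-bits
```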

9.
We propose a multivariate method for combining results from independent studies about the same ‘large scale’ multiple testing problem. The method works asymptotically in the number of hypotheses and consists of applying the Benjamini-Hochberg procedure to the p-values of each study separately by determining the ‘individual false discovery rates’ which maximize power subject to a restriction on the (global) false discovery rate. We show how to obtain solutions to the associated optimization problem, provide both theoretical and numerical examples, and compare the method with univariate ones.
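The building block applied to each study is the classic Benjamini-Hochberg step-up procedure; a minimal sketch (the p-values are illustrative, and the paper's optimization over per-study FDR levels is not reproduced):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Reject the k smallest p-values, where k is the largest rank i
    with p_(i) <= i * q / m (Benjamini-Hochberg step-up rule)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank  # step-up: keep the largest qualifying rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

pv = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
flags = benjamini_hochberg(pv, q=0.05)  # only the two smallest are rejected
```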

10.
A proportional reasoning item bank was created from the relevant literature and tested in various forms. Rasch analyses of 303 pupils’ test results were used to calibrate the bank, and data from 84 pupils’ interviews were used to confirm our diagnostic interpretations. A number of sub-tests were scaled, including parallel ‘without models’ and ‘with models’ forms. We provide details of the 13-item ‘without models’ test which was formed from the ‘richest’ diagnostic items and verified on a further test sample (N=212, ages 10-13). Two scales were constructed for this test, one that measures children’s ‘ratio attainment’ and one that measures their ‘tendency for additive strategy.’ Other significant errors — ‘incorrect build-up,’ ‘magical doubling/halving,’ ‘constant sum’ and ‘incomplete reasoning’ — were identified. Finally, an empirical hierarchy of pupils’ attainment of proportional reasoning was formed, incorporating the significant errors and the additive scale.

11.
The paper investigates an economic production lot-size (EPL) model for an imperfect production system in which the production facility may shift from an ‘in-control’ state to an ‘out-of-control’ state at any random time. The basic assumption of the classical EPL model is that 100% of produced items are of perfect quality, an assumption that may not be valid for most production environments. More specifically, the paper extends the article of Khouja and Mehrez [Khouja, M., Mehrez, A., 1994. An economic production lot size model with imperfect quality and variable production rate. Journal of the Operational Research Society 45, 1405–1417]. Generally, the manufacturing process is in the ‘in-control’ state at the start of production and the items produced are of conforming quality. In a long production run, the process shifts from the ‘in-control’ state to the ‘out-of-control’ state after a certain time due to a higher production rate and longer production run time. The proposed model is formulated assuming that a certain percentage of the total product is defective (imperfect) in the ‘out-of-control’ state; this percentage also varies with the production rate and production run time. The defective items are restored to original quality by rework, at some cost, to maintain the quality of products in a competitive market. The production cost per unit item is a convex function of the production rate. The total costs in this model include the manufacturing cost, setup cost, holding cost and reworking cost of imperfect-quality products. The associated profit maximization problem is illustrated by numerical examples, and a sensitivity analysis is carried out.

12.
This article presents a splitting technique for solving the time-dependent incompressible Navier–Stokes equations. Using nested finite element spaces, which can be interpreted as a postprocessing step, the splitting method achieves more than second-order accuracy in time. The integration of adaptive methods in space and time into the splitting is discussed. In this algorithm, a gradient recovery technique is used to compute boundary conditions for the pressure and to achieve a higher convergence order for the gradient at different points of the algorithm. Results on the ‘flow around a cylinder’ and ‘driven cavity’ problems are presented.

13.
When applying the 2-opt heuristic to the travelling salesman problem, selecting the best improvement at each iteration gives worse results on average than selecting the first improvement, if the initial solution is chosen at random. However, starting with ‘greedy’ or ‘nearest neighbor’ constructive heuristics, best improvement is better and faster on average. Reasons for this behavior are investigated. It appears to be better to use exchanges that introduce into the solution one very small edge and one fairly large edge, which can easily be removed later, than two small ones, which are much harder to remove.
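A sketch of the two pivoting rules being compared, on a toy Euclidean instance (the instance is illustrative, not from the paper): first improvement applies a favourable exchange as soon as one is found, while best improvement scans all exchanges and applies the one with the largest gain.

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts, best_improvement=True):
    """2-opt local search under either the first- or best-improvement rule."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        best_delta, best_move = 0.0, None
        for i in range(n - 1):
            # j stops before n-1 when i == 0 to skip the degenerate move
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    if not best_improvement:        # first improvement: apply now
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
                        break
                    if delta < best_delta:          # best improvement: remember
                        best_delta, best_move = delta, (i, j)
            if improved and not best_improvement:
                break
        if best_improvement and best_move is not None:
            i, j = best_move
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
            improved = True
    return tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
fixed = two_opt([0, 2, 1, 3], pts)  # uncrosses the tour to length 4.0
```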

14.
The efficient and accurate calculation of sensitivities of the price of financial derivatives with respect to perturbations of the parameters in the underlying model, the so-called ‘Greeks’, remains a great practical challenge in the derivative industry. This is true regardless of whether methods for partial differential equations or stochastic differential equations (Monte Carlo techniques) are being used. The computation of the ‘Greeks’ is essential to risk management and to the hedging of financial derivatives and typically requires substantially more computing time than simply pricing the derivatives. Any numerical algorithm (Monte Carlo algorithm) for stochastic differential equations produces a time-discretization error and a statistical error in the process of pricing financial derivatives and calculating the associated ‘Greeks’. In this article we show how a posteriori error estimates and adaptive methods for stochastic differential equations can be used to control both these errors in the context of pricing and hedging of financial derivatives. In particular, we derive expansions, with leading order terms which are computable in a posteriori form, of the time-discretization errors for the price and the associated ‘Greeks’. These expansions allow the user first to control the time-discretization errors adaptively when calculating the price, sensitivities and hedging parameters with respect to a large number of parameters, and then to ensure that the total errors are, with prescribed probability, within tolerance.
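To make the two error sources concrete, here is a fixed-step Euler-Maruyama Monte Carlo price of a European call under geometric Brownian motion, compared against the Black-Scholes value; the gap between the two combines the time-discretization bias and the statistical error that the article's adaptive a posteriori estimates control (the adaptivity itself is not reproduced here, and all parameter values are illustrative).

```python
import math
import random

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes European call price."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

def euler_mc_call(s0, k, r, sigma, t, n_steps=50, n_paths=20000, seed=7):
    """Euler-Maruyama simulation of dS = r*S*dt + sigma*S*dW, discounted payoff."""
    rng = random.Random(seed)
    dt = t / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            s += r * s * dt + sigma * s * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        payoff_sum += max(s - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

mc = euler_mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)  # about 10.45
```

A ‘Greek’ such as delta can then be estimated by re-running the simulation with a bumped spot and differencing, which is exactly where the extra computing cost mentioned above comes from.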

15.
The paper considers scheduling of inspections for imperfect production processes where the process shift time from an ‘in-control’ state to an ‘out-of-control’ state is assumed to follow an arbitrary probability distribution with an increasing failure (hazard) rate and the products are sold with a free repair warranty (FRW) contract. During each production run, the process is monitored through inspections to assess its state. If at any inspection the process is found in ‘out-of-control’ state, then restoration is performed. The model is formulated under two different inspection policies: (i) no action is taken during a production run unless the system is discovered in an ‘out-of-control’ state by inspection and (ii) preventive repair action is undertaken once the ‘in-control’ state of the process is detected by inspection. The expected sum of pre-sale and post-sale costs per unit item is taken as a criterion of optimality. We propose a computational algorithm to determine the optimal inspection policy numerically, as it is quite hard to derive analytically. To ease the computational difficulties, we further employ an approximate method which determines a suboptimal inspection policy. A comparison between the optimal and suboptimal inspection policies is made and the impact of FRW on the optimal inspection policy is investigated in a numerical example.

16.
A new contrast enhancement algorithm for images is proposed, combining a genetic algorithm (GA) with a wavelet neural network (WNN). The incomplete beta transform (IBT) is used to obtain a non-linear gray-level transform curve that enhances the global contrast of an image, and the GA determines the optimal transform parameters. To avoid the expensive running time of traditional contrast enhancement algorithms, which search for optimal gray transform parameters over the whole parameter space, a classification criterion based on the gray-level distribution of the image is proposed. The contrast type of the original image is determined by the new criterion, and the parameter space is then restricted according to the contrast type, which greatly shrinks the space and guides the search direction of the GA. Considering the drawback of traditional histogram equalization, which reduces information and enlarges noise and background blur in the processed image, a synthetic objective function combining peak signal-to-noise ratio (PSNR) and information entropy is used as the fitness function of the GA. To evaluate the IBT over the whole image, the WNN is used to approximate it. To enhance local contrast, the discrete stationary wavelet transform (DSWT) is used to enhance detail: after applying the DSWT to an image, detail is enhanced by a non-linear operator in the three high-frequency sub-bands, while the coefficients in the low-frequency sub-band are set to zero. The final enhanced image is obtained by adding the globally enhanced image to the locally enhanced image. Experimental results show that the new algorithm enhances both the global and local contrast of an image well while keeping noise and background blur from being greatly enlarged.
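A sketch of the gray-transform curve at the heart of the method: the regularized incomplete beta function maps a normalized gray level in [0, 1] monotonically onto [0, 1], with shape parameters controlling where contrast is stretched. The trapezoidal-rule evaluation and the parameter values below are illustrative (the paper instead approximates the transform with a WNN and tunes the parameters by GA); a, b >= 1 is assumed so the integrand stays bounded.

```python
def ibt(u, a, b, n=2000):
    """Regularized incomplete beta transform I_u(a, b) via the trapezoidal rule."""
    def integral(hi):
        if hi <= 0.0:
            return 0.0
        h = hi / n
        ts = [i * h for i in range(n + 1)]
        vals = [t ** (a - 1) * (1.0 - t) ** (b - 1) for t in ts]
        return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    return integral(u) / integral(1.0)

# Map a normalized gray level with illustrative shape parameters a = b = 2,
# which gives an S-shaped curve stretching mid-range contrast.
g = ibt(0.25, 2.0, 2.0)
```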

17.
18.
We propose a new class of foundation-penalty (FP) cuts for MIPs that are easy to generate by exploiting routine penalty calculations. Their underlying concept generalizes the lifting process and provides derivations of major classical cuts. (Gomory cuts arise from low level FP cuts by simply ‘plugging in’ standard penalties.)

19.
The characteristic polynomial of a multiarrangement
Given a multiarrangement of hyperplanes we define a series by sums of the Hilbert series of the derivation modules of the multiarrangement. This series turns out to be a polynomial. Using this polynomial we define the characteristic polynomial of a multiarrangement which generalizes the characteristic polynomial of an arrangement. The characteristic polynomial of an arrangement is a combinatorial invariant, but this generalized characteristic polynomial is not. However, when the multiarrangement is free, we are able to prove the factorization theorem for the characteristic polynomial. The main result is a formula that relates ‘global’ data to ‘local’ data of a multiarrangement given by the coefficients of the respective characteristic polynomials. This result gives a new necessary condition for a multiarrangement to be free. Consequently it provides a simple method to show that a given multiarrangement is not free.

20.
Differential evolution with generalized differentials
In this paper, we study the mutation operation of the differential evolution (DE) algorithm. In particular, we propose the differential of scaled vectors, called the ‘generalized differential’, as opposed to the existing scaled differential vector in the mutation of DE. We derive the probability distribution of points generated by the mutation with ‘generalized differentials’. We incorporate a vector-projection-based exploratory method within the new mutation scheme. The vector projection is not mandatory and it is only invoked if trial points continue to be unsuccessful. An algorithm is then proposed which implements the mutation strategy based on the difference of the scaled vectors as well as the vector projection technique. A numerical study is carried out using a set of 50 test problems, many of which are inspired by practical applications. Numerical results suggest that the new algorithm is superior to DE.
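For reference, the baseline being modified: classic DE/rand/1 mutation forms the mutant as v = x_r1 + F·(x_r2 − x_r3), i.e. a single scaled difference vector; the paper's ‘generalized differential’ instead takes the difference of independently scaled vectors. The sketch below implements only the classic baseline on a sphere test function (population size, F, CR and the test problem are illustrative, not from the paper).

```python
import random

def de_minimize(f, bounds, np_=20, iters=100, F=0.7, CR=0.9, seed=3):
    """Classic DE/rand/1/bin: scaled-difference mutation + binomial crossover."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)  # force at least one mutated component
            trial = list(pop[i])
            for d in range(dim):
                if rng.random() < CR or d == jr:
                    # mutation: base vector plus one scaled difference vector
                    trial[d] = pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

x, fx = de_minimize(lambda v: sum(t * t for t in v), [(-5, 5)] * 5)
```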


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号