20 similar records found; search took 34 ms
1.
Flemming Topsøe 《Journal of Global Optimization》2009,43(4):553-564
Inspired by previous work on information-theoretical optimization problems, the basics of an axiomatic theory of certain special two-person zero-sum games are developed. One of the players, “Observer”, is imagined to have a “mind”; the other, “Nature”, is not. These ideas lead to asymmetric modelling, as the two players are treated quite differently. Basic concavity and convexity results, as well as a general minimax theorem, are derived from the axioms.
2.
A simple factorization of the finite-dimensional Galerkin operators motivates a study of the numerical stability of a Galerkin
procedure on the basis of its “potential stability” and the “conditioning” of its coordinate functions. Conditions sufficient
for stability and conditions leading to instability are thereby identified. Numerical examples of stability and instability
occurring in the application of the Galerkin method to boundary-integral equations arising in simple scattering problems are
provided and discussed within this framework. Numerical instabilities reported by other authors are examined and explained
from the same point of view.
This revised version was published online in June 2006 with corrections to the Cover Date.
3.
The purpose of the present paper is to investigate what significance, if any, the inclusion of uncertainties has for the conclusions of the modelling and analysis, i.e., whether the policy recommendations implicit in the results of the analysis depend on whether or not uncertainties are included. We do this within the context of a model of the Northern European electricity sector.
The paper considers uncertainties about future states of nature. More specifically, we consider the inflow of water into a
hydropower production system, where the states of nature are represented by a “dry”, a “normal” and a “wet” year.
The problems may be formulated as non-linear optimisation models where the objective function basically consists of the expected
value of the sum of consumers', producers', and authorities' surplus. The models take into account that there are losses in
the transmission and distribution of electricity, and that the consumers pay an energy tax on their use of electricity. The
consumers are divided into two groups, households and industry. Also, complementarity formulations are used, as these are shown to be better suited to certain aspects, in particular the modelling of risk aversion within a liberalised market context.
For each of eight Northern European countries, the basic results of the models are the installation of new production capacities,
the production on old and new production capacities, the electricity prices, and the interchange between the countries. The
investment in new production capacity is represented by a single value for each country, while the productions differ in that
they depend on natural phenomena, which we refer to as the state of nature and represent by stochastic variables.
First, it was found that in this context it was relatively easy to include stochastic elements in the model. Second, complementarity formulations are preferable to optimisation-based modelling for some problem types. Third, the results of the stochastic model have natural interpretations, also when compared with one or several versions of a deterministic model. And fourth, we have seen that the quantitative results, and hence the implied policy recommendations, may differ significantly from those of deterministic models. We therefore conclude that increased attention should be given to the inclusion of stochastic elements in the modelling of energy systems.
This revised version was published online in June 2006 with corrections to the Cover Date.
4.
Microarrays offer unprecedented possibilities for so-called omic (e.g., genomic and proteomic) research. However, microarray data are also quite challenging to analyze. The aim of this paper is to provide a short tutorial on the most common approaches used for pattern discovery and cluster analysis as they are currently applied to microarrays, in the hope of bringing the attention of the algorithmic community to novel aspects of classification and data analysis that deserve study and have potential for high reward.
R. Giancarlo is partially supported by Italian MIUR grants PRIN “Metodi Combinatori ed Algoritmici per la Scoperta di Patterns
in Biosequenze” and FIRB “Bioinformatica per la Genomica e la Proteomica” and Italy-Israel FIRB Project “Pattern Discovery
Algorithms in Discrete Structures, with Applications to Bioinformatics”. D. Scaturro is supported by a MIUR Fellowship in
the Italy-Israel FIRB Project “Pattern Discovery Algorithms in Discrete Structures, with Applications to Bioinformatics”.
5.
Alessio Moretti 《Logica Universalis》2009,3(1):19-57
Whereas geometrical oppositions (logical squares and hexagons) have been so far investigated in many fields of modal logic
(both abstract and applied), the oppositional geometrical side of “deontic logic” (the logic of “obligatory”, “forbidden”,
“permitted”, . . .) has rather been neglected. Besides the classical “deontic square” (the deontic counterpart of Aristotle’s
“logical square”), some interesting attempts have nevertheless been made to deepen the geometrical investigation of the deontic
oppositions: Kalinowski (La logique des normes, PUF, Paris, 1972) has proposed a “deontic hexagon” as being the geometrical
representation of standard deontic logic, whereas Joerden (jointly with Hruschka, in Archiv für Rechts- und Sozialphilosophie
73:1, 1987), McNamara (Mind 105:419, 1996) and Wessels (Die gute Samariterin. Zur Struktur der Supererogation, Walter de Gruyter,
Berlin, 2002) have proposed some new “deontic polygons” for dealing with conservative extensions of standard deontic logic
internalising the concept of “supererogation”. Since 2004 a new formal science of the geometrical oppositions inside logic has appeared: “n-opposition theory”, or “NOT”, which relies on the notion of “logical bi-simplex of dimension m” (m = n − 1). This theory was given a complete mathematical foundation in 2008 and has since received several extensions. In this paper,
by using it, we show that in standard deontic logic there are in fact many more oppositional deontic figures than Kalinowski’s single “hexagon of norms” (more of them, and geometrically more complex ones: “deontic squares”, “deontic hexagons”, “deontic cubes”, . . ., “deontic tetraicosahedra”, . . .): the real geometry of the oppositions between deontic modalities is composed of the aforementioned structures (squares, hexagons, cubes, . . ., tetraicosahedra and hyper-tetraicosahedra), whose complete mathematical closure turns out to be a “deontic 5-dimensional hyper-tetraicosahedron” (a very regular oppositional solid).
6.
The central objective of this paper is to develop a transparent, consistent, self-contained, and stable country risk rating
model, closely approximating the country risk ratings provided by Standard and Poor’s (S&P). The model should be non-recursive,
i.e., it should not rely on the previous years’ S&P ratings. The set of variables selected here includes not only economic-financial
but also political variables. We propose a new model based on the novel combinatorial-logical technique of Logical Analysis
of Data (which derives a new rating system only from the qualitative information representing pairwise comparisons of country
riskiness). We also develop a method for deriving a rating system with any desired level of granularity. The accuracy
of the proposed model’s predictions, measured by its correlation coefficients with the S&P ratings, and confirmed by k-folding cross-validation, exceeds 95%. The stability of the constructed non-recursive model is shown in three ways: by the
correlation of the predictions with those of other agencies (Moody’s and The Institutional Investor), by predicting 1999 ratings
using the non-recursive model derived from the 1998 dataset applied to the 1999 data, and by successfully predicting the ratings
of several previously non-rated countries. This study provides new insights into the importance of variables by supporting the necessity of including, in addition to economic variables, political variables (in particular “political stability”) in the analysis, and by identifying “financial depth and efficiency” as a new critical factor in assessing country risk.
7.
8.
Across many industries, e-commerce generates substantial modifications in supply chain structures. The aim of this article
is to assess different forms of existing organizations when a store-based sales network coexists with a web site order network.
Three main organizational models can be identified: “store-picking”, “dedicated warehouse-picking” and “drop-shipping”. We use a “newsboy” order-policy model to compare the advantages of these different models and to assess the impact of certain parameters on inventory and flow management policies throughout the supply chain. Several effects are presented, particularly those linked to the size of the Internet market relative to the traditional market size and to demand uncertainty.
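As background on the kind of trade-off the “newsboy” (newsvendor) model captures, here is a minimal sketch for normally distributed demand; the function name and parameters are hypothetical, and the paper's actual model is richer. The optimal order quantity is the demand quantile at the critical fractile, which balances the cost of under-ordering against the cost of over-ordering.

```python
from statistics import NormalDist

def newsvendor_order_quantity(mu, sigma, unit_cost, price, salvage=0.0):
    """Optimal order quantity for the classical newsboy model with
    demand ~ Normal(mu, sigma). Hypothetical illustrative helper."""
    underage = price - unit_cost   # profit lost per unit of unmet demand
    overage = unit_cost - salvage  # loss per unsold unit
    critical_fractile = underage / (underage + overage)
    # Order up to the demand quantile given by the critical fractile
    return NormalDist(mu, sigma).inv_cdf(critical_fractile)
```

With symmetric under- and over-ordering costs the critical fractile is 0.5, so the order quantity equals mean demand; cheaper units (higher underage cost relative to overage) push the order above the mean.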
9.
Jing Hai Shao 《Acta Mathematica Sinica (English Series)》2011,27(6):1195-1204
In this paper, dimension-free Harnack inequalities are established on infinite-dimensional spaces. More precisely, we establish Harnack inequalities for the heat semigroup on the based loop group and for the Ornstein-Uhlenbeck semigroup on the abstract Wiener space. As an application, we establish the HWI inequality on the abstract Wiener space, which relates three important quantities in a single inequality: the relative entropy “H”, the Wasserstein distance “W”, and the Fisher information “I”.
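For orientation, the classical Otto–Villani form of the HWI inequality on a space satisfying a curvature lower bound K is stated below as background; the Wiener-space version established in the paper may differ in setting and constants.

```latex
H(\nu \mid \mu) \;\le\; W_2(\nu, \mu)\,\sqrt{I(\nu \mid \mu)} \;-\; \frac{K}{2}\, W_2(\nu, \mu)^2
```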
10.
We completely classify the real root subsystems of root systems of loop algebras of Kac–Moody Lie algebras. This classification
involves new notions of “admissible subgroups” of the coweight lattice of a root system Ψ, and “scaling functions” on Ψ. Our
results generalise and simplify earlier work on subsystems of real affine root systems.
11.
Kyungmee Park 《ZDM》2012,44(2):121-135
This study identifies the characteristics of mathematics classrooms in Korea. First, conventional Korean mathematics lessons
are analyzed from the perspective of “theory of variation”. Second, an innovative lesson for gifted children is reported in
detail and analyzed from the perspective of “Lakatos’ proofs and refutations”. Third, the classroom characteristics identified
in both the conventional lessons and the innovative lesson are interpreted in terms of the underlying cultural values that
they share with other East Asian countries. The study concludes that although the two faces of Korean mathematics lessons look different, they may flow from the same “heart”: that of the common Confucian heritage culture, and in particular East Asian pragmatism.
12.
Meltem Öztürk 《Annals of Operations Research》2008,163(1):177-196
Semiorders form perhaps the simplest class of ordered sets with a not necessarily transitive indifference relation. Their generalization
has given birth to many other classes of ordered sets, each of them characterized by an interval representation, by the properties
of its relations or by forbidden configurations. In this paper, we are interested in preference structures having an interval
representation. For this purpose, we propose a general framework which makes use of n-point intervals and allows a systematic analysis of such structures. The case of 3-point intervals shows us that our framework
generalizes the classification of Fishburn by defining new structures. In particular, we define three classes of ordered sets having a non-transitive indifference relation. A simple generalization of these structures provides three ordered sets that we call “d-weak orders”, “d-interval orders” and “triangle orders”. We prove that these structures have an interval representation. We also establish some links between the relational and the forbidden-configuration characterizations by generalizing the definition of a Ferrers relation.
13.
Classification is concerned with the development of rules for the allocation of observations to groups, and is a fundamental
problem in machine learning. Much previous work on classification models investigates two-group discrimination. Multi-category classification is less often considered, owing to the tendency of generalizations of two-group models to produce misclassification
rates that are higher than desirable. Indeed, producing “good” two-group classification rules is a challenging task for some
applications, and producing good multi-category rules is generally more difficult. Additionally, even when the “optimal” classification
rule is known, inter-group misclassification rates may be higher than tolerable for a given classification model. We investigate
properties of a mixed-integer programming based multi-category classification model that allows for the pre-specification
of limits on inter-group misclassification rates. The mechanism by which the limits are satisfied is the use of a reserved
judgment region, an artificial category into which observations are placed when their attributes do not sufficiently indicate membership in any particular group. The method is shown to be a consistent estimator of a classification rule with misclassification limits, and performance on simulated and real-world data is demonstrated.
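The reserved-judgment idea can be illustrated with a deliberately simple score-thresholding sketch; this is illustrative only (the paper uses a mixed-integer programming formulation), and the function name and threshold are hypothetical.

```python
def classify_with_reserved_judgment(scores, threshold=0.6):
    """Assign the best-scoring group only when its score is decisive;
    otherwise place the observation in the reserved-judgment category.
    `scores` maps group labels to membership scores in [0, 1]."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "reserved"
```

Raising the threshold shrinks the set of confidently classified observations and so trades coverage for lower inter-group misclassification, which is the lever the paper's limits formalize.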
14.
In order to solve a quadratic 0/1 problem, techniques that derive an equivalent linear integer formulation are commonly used. These techniques, called “linearization”, usually involve a huge number of additional variables. As a consequence, the exact resolution of the linear model is, in general, very difficult.
Our aim in this paper is to propose “economical” linear models. Starting from an existing linearization (typically the so-called “classical linearization”), we derive a new linearization with fewer variables. The resulting model is called a “miniaturized” linearization. Based on this approach, we propose a new linearization scheme for which numerical tests have been performed.
15.
Fotios Pasiouras, Chrysovalantis Gaganis, Constantin Zopounidis 《European Journal of Operational Research》2010
The purpose of the present study is the development of classification models for the identification of acquirers and targets in the Asian banking sector. We use a sample of 52 targets and 47 acquirers that were involved in acquisitions in 9 Asian banking markets during 1998–2004 and match them by country and time with an equal number of non-involved banks. The models are developed and validated through a tenfold cross-validation approach using two multicriteria decision aid techniques. For comparison purposes we also develop models through discriminant analysis. The results indicate that the multicriteria decision aid models are more efficient than the ones developed through discriminant analysis. Furthermore, in all cases the models are more efficient in distinguishing between acquirers and non-involved banks than between targets and non-involved banks. Finally, the models with a binary outcome achieve higher accuracies than the ones which simultaneously distinguish between acquirers, targets and non-involved banks.
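As background on the validation procedure mentioned above, tenfold cross-validation partitions the sample into ten folds, training on nine and testing on the held-out fold in turn. A minimal index-splitting sketch follows (a hypothetical helper, not the authors' code).

```python
def kfold_splits(n, k=10):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation over a sample of size n; fold sizes differ
    by at most one element."""
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size
```

Each observation appears in exactly one test fold, so the pooled test predictions give an almost-unbiased estimate of out-of-sample accuracy.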
16.
Rade T. Živaljević 《Discrete and Computational Geometry》2009,41(1):135-161
This paper lays the foundation for a theory of combinatorial groupoids that allows us to use concepts like “holonomy”, “parallel transport”, “bundles”, “combinatorial curvature”, etc. in the context
of simplicial (polyhedral) complexes, posets, graphs, polytopes and other combinatorial objects. We introduce a new, holonomy-type
invariant for cubical complexes, leading to a combinatorial “Theorema Egregium” for cubical complexes that are non-embeddable
into cubical lattices. Parallel transport of Hom-complexes and maps is used as a tool to extend Babson–Kozlov–Lovász graph coloring results to more general statements about
nondegenerate maps (colorings) of simplicial complexes and graphs.
The author was supported by grants 144014 and 144026 of the Serbian Ministry of Science and Technology.
17.
General pedagogical knowledge (GPK) is a central component of teacher knowledge. Teacher education programs in many countries
therefore provide corresponding opportunities to learn (OTL), and in-school experience is regarded as a core component of
OTL fostering knowledge in the area of general pedagogy. However, empirical research on the effectiveness of school experiences
during teacher education does not tell us precisely how different kinds of OTL are related to GPK. This paper first reports
on the conceptualizing of the GPK test in the context of TEDS-M. Then the relationship between practical in-school OTL of
German and US future primary teachers and their GPK is investigated. On the basis of results from Latent-Class Analysis using
two core indicators of in-school OTL (the length of time spent on teaching students and the extent of being supported by a
mentor or supervisor), three types of future primary teachers in both the US and Germany are distinguished: “starting” (type
1), “autonomous” (type 2), and “balanced” (type 3). In both countries, type 3 future primary teachers reported that they had
had OTL to reflect on and improve their teaching to a larger extent than the type 1 teachers. Type 3 teachers also generally
achieved better GPK test results than type 1 teachers. Furthermore, type 3 future teachers also tend to show better results than type 2 teachers. Based on these results, we hypothesize that the quality of future teachers’ activities during
in-school practicum matters with regard to important outcomes of teacher education, making in-school OTL an effective component
of teacher education. Research findings are discussed with regard to the relationship between theory and practice during teacher
education.
18.
We prove a preservation theorem for limit steps of countable support iterations of proper forcing notions whose particular
cases are preservations of the following properties on limit steps: “no random reals are added”, “μ(Random(V))≠1”, “no dominating reals are added”, “Cohen(V) is not comeager”. Consequently, countable support iterations of σ-centered forcing notions do not add random reals.
The work was supported by BRF of Israel Academy of Sciences and by grant GA SAV 365 of Slovak Academy of Sciences.
19.
Rémi Peyre 《Potential Analysis》2008,29(1):17-36
Carne’s bound is a sharp inequality controlling the transition probabilities for a discrete reversible Markov chain (Section 1).
Its ordinary proof uses spectral techniques that look as efficient as they are miraculous. Here we present a new proof, comparing a “drift” for ways “out” and “back” to get the Gaussian part of the bound (Section 2), and using a conditioning technique to get the flight factor (Section 4). Moreover, we show how our proof is more “supple” than Carne’s and may generalize (Section 3.2).
20.
A. T. Filippov 《Theoretical and Mathematical Physics》2010,163(3):753-767
We propose new models of the “affine” theory of gravity in multidimensional space-times with symmetric connections. We use
and develop ideas of Weyl, Eddington, and Einstein, in particular, Einstein’s proposed method for obtaining the geometry using
the Hamilton principle. More specifically, the connection coefficients are determined using a “geometric” Lagrangian that
is an arbitrary function of the generalized (nonsymmetric) Ricci curvature tensor (and, possibly, other fundamental tensors)
expressed in terms of the connection coefficients regarded as independent variables. Such a theory supplements the standard
Einstein theory with dark energy (the cosmological constant, in the first approximation), a neutral massive (or tachyonic)
meson, and massive (or tachyonic) scalar fields. These fields couple only to gravity and can generate dark matter and/or inflation.
The new field masses (real or imaginary) have a geometric origin and must appear in any concrete model. The concrete choice
of the Lagrangian determines further details of the theory, for example, the nature of the fields that can describe massive
particles, tachyons, or even “phantoms.” In “natural” geometric theories, dark energy must also arise. The basic parameters
of the theory (cosmological constant, mass, possible dimensionless constants) are theoretically indeterminate, but in the framework of modern “multiverse” ideas, this is more a virtue than a defect. We consider further extensions of the affine models and discuss in more detail approximate effective (“physical”) Lagrangians that can be applied to the cosmology of the early Universe.