Showing 1 to 100 of 4443 matching Articles
By
Muñoz, Mirna; Negrón, Adriana Peña Pérez; Mejia, Jezreel; Lopez, Graciela Lara
In Mexico, small and medium-sized enterprises (SMEs) are key to the software development industry, so having highly qualified personnel for the development of high-quality software products is fundamental to guaranteeing their permanence in the market. Matching software industry requirements with academic training therefore represents a significant problem that must be reduced for the benefit of both sectors. In this context, understanding how well higher-education curricula cover the software industry's requirements is essential for taking actions to reduce the current gap between industry and academia. This paper presents an analysis of the coverage between the Moprosoft norm, a standard used by the software industry to ensure quality in Software Engineering practices, and four higher-education curricula related to Computer Science and Informatics.
By
Imbs, Damien; Rajsbaum, Sergio; Valle, Adrián
The basic read/write shared memory model, where asynchronous and crash-prone processes communicate to solve a task, is difficult to analyze. A more structured model is the iterated immediate snapshot (IIS) model, where processes execute communication-closed rounds. In each round, they communicate using read/write registers that cannot be reused in later rounds. It is known that a task is solvable in the IIS model if and only if it is solvable in the basic read/write model. Both models are also equivalent when, in addition to read/write registers, processes have access to stronger communication objects called 0-1 tasks.
This paper further extends this task-computability equivalence by presenting a simulation that includes x-consensus objects, which solve consensus among up to x processes. The simulation implies that an iterated model where processes communicate through a sequence consisting only of x-consensus objects is equivalent to the basic shared memory model augmented with x-consensus objects.
By
Aranha, Diego F.; Fuentes-Castañeda, Laura; Knapp, Edward; Menezes, Alfred; Rodríguez-Henríquez, Francisco
14 Citations
We implement asymmetric pairings derived from Kachisa-Schaefer-Scott (KSS), Barreto-Naehrig (BN), and Barreto-Lynn-Scott (BLS) elliptic curves at the 192-bit security level. Somewhat surprisingly, we find pairings derived from BLS curves with embedding degree 12 to be the fastest for our serial as well as our parallel implementations. Our serial implementations provide a factor-3 speedup over the previous state-of-the-art, demonstrating that pairing computation at the 192-bit security level is not as expensive as previously thought. We also present a general framework for deriving a Weil-type pairing that is well-suited for computing a single pairing on a multiprocessor machine.
By
Aranha, Diego F.; Knapp, Edward; Menezes, Alfred; Rodríguez-Henríquez, Francisco
4 Citations
In the past year, the speed record for pairing implementations on desktop-class machines has been broken several times. The speed records for asymmetric pairings were set on a single processor. In this paper, we describe our parallel implementation of the optimal ate pairing over Barreto-Naehrig (BN) curves that is about 1.23 times faster using two cores of an Intel Core i5 or Core i7 machine, and 1.45 times faster using 4 cores of the Core i7, than the state-of-the-art implementation on a single core. We instantiate Hess's general Weil pairing construction and introduce a new optimal Weil pairing tailored for parallel execution. Our experimental results suggest that the new Weil pairing is 1.25 times faster than the optimal ate pairing on 8-core extensions of the aforementioned machines. Finally, we combine previous techniques for parallelizing the eta pairing on a supersingular elliptic curve with embedding degree 4, and achieve an estimated 1.24-fold speedup on an 8-core extension of an Intel Core i7 over the previous best technique.
By
Faz-Hernández, Armando; Longa, Patrick; Sánchez, Ana H.
12 Citations
We propose efficient algorithms and formulas that improve the performance of side-channel protected scalar multiplication exploiting the Gallant-Lambert-Vanstone (CRYPTO 2001) and Galbraith-Lin-Scott (EUROCRYPT 2009) methods. Firstly, by adapting Feng et al.'s recoding to the GLV setting, we derive new regular algorithms for variable-base scalar multiplication that offer protection against simple side-channel and timing attacks. Secondly, we propose an efficient technique that interleaves ARM-based and NEON-based multiprecision operations over an extension field, as typically found on GLS curves and pairing computations, to improve performance on modern ARM processors. Finally, we showcase the efficiency of the proposed techniques by implementing a state-of-the-art GLV-GLS curve in twisted Edwards form defined over
$\mathbb{F}_{p^2}$
, which supports a four-dimensional decomposition of the scalar and runs in constant time, i.e., it is fully protected against timing attacks. For instance, using a precomputed table of only 512 bytes, we compute a variable-base scalar multiplication in 92,000 cycles on an Intel Ivy Bridge processor and in 244,000 cycles on an ARM Cortex-A15 processor. Our benchmark results and the proposed techniques contribute to the improvement of the state-of-the-art performance of elliptic curve computations. Most notably, our techniques allow us to reduce the cost of adding protection against timing attacks in the GLV-based variable-base scalar multiplication computation to below 10%.
By
Adj, Gora; Menezes, Alfred; Oliveira, Thomaz; Rodríguez-Henríquez, Francisco
2 Citations
We show that a Magma implementation of Joux’s
$$L[1/4+o(1)]$$
algorithm can be used to compute discrete logarithms in the 1303-bit finite field
$${\mathbb F}_{3^{6 \cdot 137}}$$
and the 1551-bit finite field
$${\mathbb F}_{3^{6 \cdot 163}}$$
with very modest computational resources. Our
$${\mathbb F}_{3^{6 \cdot 137}}$$
implementation was the first to illustrate the effectiveness of Joux's algorithm for computing discrete logarithms in small-characteristic finite fields that are not Kummer or twisted-Kummer extensions.
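The quoted field sizes are easy to sanity-check; the short script below (illustrative only) confirms that the two fields are indeed 1303-bit and 1551-bit:

```python
# Bit lengths of the two fields attacked with Joux's L[1/4 + o(1)] algorithm.
for e in (6 * 137, 6 * 163):
    print(e, (3 ** e).bit_length())
```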
By
Yakovlev, Victor; Korzhik, Valery; Bakaev, Mihail; Morales-Luna, Guillermo
1 Citation
We consider the information-theoretic secure key distribution problem (KDP) over noisy binary symmetric channels with public discussion and in the presence of an active adversary. Several versions of such protocols have been proposed by Maurer, Wolf, Renner, Dodis, Reyzin, et al. We describe two new versions of the KDP for the same channel model that use extractors as a means of privacy amplification, with the goal of maximizing the key rate under an optimization of the protocol parameters. Our solution of the KDP has two novelties: we obtain the extractor's seed directly from the distributed initial strings, and we prove the main results in terms of explicit estimates without using the asymptotic symbols O, Ω, Θ. Both asymptotic and non-asymptotic cases are presented. It is shown that extractors can be superior to conventional hashing for very large lengths of the initially distributed strings.
By
Chakraborty, Debrup; Nandi, Mridul
5 Citations
HCTR was proposed by Wang, Feng and Wu in 2005. It is a mode of operation that provides a tweakable strong pseudorandom permutation. Though HCTR is quite an efficient mode, its authors showed only a cubic security bound, which makes it unsuitable for applications where tweakable strong pseudorandom permutations are required. In this paper we show that HCTR has a better security bound than the one its authors proved. We prove that the advantage of an adversary in distinguishing HCTR and its inverse from a random permutation and its inverse is bounded above by 4.5 σ^{2}/2^{n}, where n is the block length of the block cipher and σ is the number of n-block queries made by the adversary (including the tweak).
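The improved bound is easy to evaluate numerically. A minimal sketch (the function name is ours) computes 4.5 σ²/2ⁿ exactly for a given query volume:

```python
from fractions import Fraction

def hctr_advantage_bound(sigma: int, n: int = 128) -> Fraction:
    """Upper bound 4.5 * sigma^2 / 2^n on the SPRP-distinguishing advantage."""
    return Fraction(9, 2) * sigma ** 2 / Fraction(2 ** n)

# e.g. 2^40 n-block queries against a 128-bit block cipher:
print(float(hctr_advantage_bound(2 ** 40)))   # a negligible advantage
```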
By
Beuchat, Jean-Luc; Brisebarre, Nicolas; Detrey, Jérémie; Okamoto, Eiji; Rodríguez-Henríquez, Francisco
8 Citations
In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the η_{T} pairing introduced by Barreto et al. [1], we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat et al. [5]. We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the tradeoffs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two.
By
Fuentes-Castañeda, Laura; Knapp, Edward; Rodríguez-Henríquez, Francisco
7 Citations
An asymmetric pairing
$e\colon{\mathbb{G}}_2\times{\mathbb{G}}_1\to{\mathbb{G}}_T$
is considered such that
${\mathbb{G}}_1=E({\mathbb F}_p)[r]$
and
${\mathbb{G}}_2=\tilde E({\mathbb F}_{p^{k/d}})[r]$
, where k is the embedding degree of the elliptic curve
$E/{\mathbb F}_p$
, r is a large prime divisor of
$\# E({\mathbb F}_p)$
, and
$\tilde E$
is the degree-d twist of E over
${\mathbb F}_{p^{k/d}}$
with
$r \mid \#\tilde E ({\mathbb F}_{p^{k/d}} )$
. Hashing to
${\mathbb{G}}_1$
is considered easy, while hashing to
${\mathbb{G}}_2$
is done by selecting a random point Q in
$\tilde E({\mathbb F}_{p^{k/d}})$
and computing the hash value cQ, where c·r is the order of
$\tilde E({\mathbb F}_{p^{k/d}})$
. We show that for a large class of curves, one can hash to
${\mathbb{G}}_2$
in
$\textup{O}(1/\varphi (k)\log c)$
time, as compared with the previously fastest-known
$\textup{O}(\log p)$
. In the case of BN curves, we are able to double the speed of hashing to
${\mathbb{G}}_2$
. For higher-embedding-degree curves, the results can be more dramatic. We also show how to reduce the cost of the final-exponentiation step in a pairing calculation by a fixed number of field multiplications.
By
Coello, Carlos A. Coello
This chapter provides a short overview of multiobjective optimization using metaheuristics. The chapter includes a description of some of the main metaheuristics that have been used for multiobjective optimization. Although special emphasis is made on evolutionary algorithms, other metaheuristics, such as particle swarm optimization, artificial immune systems, and ant colony optimization, are also briefly discussed. Other topics such as applications and recent algorithmic trends are also included. Finally, some of the main research trends that are worth exploring in this area are briefly discussed.
By
Adj, Gora; Menezes, Alfred; Oliveira, Thomaz; Rodríguez-Henríquez, Francisco
3 Citations
In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field
${\mathbb{F}}_{3^{6 \cdot 509}} = {\mathbb{F}}_{3^{3054}}$
weak for discrete logarithm cryptography in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over
${\mathbb{F}}_{3^{509}}$
with embedding degree 6 that had been considered for implementing pairing-based cryptosystems at the 128-bit security level in fact provides only a significantly lower level of security. Our work provides a convenient framework and tools for performing a concrete analysis of the new discrete logarithm algorithms and their variants.
By
Taverne, Jonathan; Faz-Hernández, Armando; Aranha, Diego F.; Rodríguez-Henríquez, Francisco; Hankerson, Darrel; López, Julio
6 Citations
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents an opportunity to reevaluate algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how best to employ this instruction in field multiplication and its effect on the performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study of serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving- and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel-resistant scalar multiplication on a random curve at the 128-bit security level.
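The carry-less multiplication referred to above is ordinary polynomial multiplication over GF(2): integer multiplication with XOR in place of addition. A minimal software emulation of what the instruction computes in hardware (a sketch, not a performant implementation):

```python
def clmul(a: int, b: int) -> int:
    """Carry-less (GF(2) polynomial) multiplication: shift-and-XOR
    instead of shift-and-add."""
    r = 0
    while b:
        if b & 1:
            r ^= a      # XOR replaces addition, so no carries propagate
        a <<= 1
        b >>= 1
    return r

# (x^2 + x) * (x + 1) = x^3 + x  ->  0b110 * 0b011 = 0b1010
print(bin(clmul(0b110, 0b011)))
```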
By
Barradas, Héctor Ruíz; Bert, Didier
We present a fixpoint semantics of event systems. The semantics is presented in a general framework without fairness concerns. Soundness and completeness of rules for deriving leads-to properties are proved in this general framework. The general framework is instantiated to minimal progress and weak fairness assumptions, and similar results are obtained. We show the power of these results by deriving sufficient conditions for leads-to under minimal progress, proving soundness of proof obligations without reasoning over state traces.
By
Oliveira, Thomaz; López, Julio; Aranha, Diego F.; Rodríguez-Henríquez, Francisco
7 Citations
In this work we present λ-coordinates, a new system for representing points on binary elliptic curves. We also provide efficient elliptic curve operations based on the new representation and timing results of our software implementation over the field
$\mathbb{F}_{2^{254}}$
. As a result, we improve the speed records for protected/unprotected single-core/multi-core software implementations of random-point elliptic curve scalar multiplication at the 128-bit security level. When implemented on a Sandy Bridge 3.4 GHz Intel Xeon processor, our software is able to compute a single-core/multi-core unprotected scalar multiplication in 72,300 and 47,900 clock cycles, respectively, and a protected single-core scalar multiplication in 114,800 cycles. These numbers improve by around 2% on the newer Core i7 2.8 GHz Ivy Bridge platform.
By
Chakraborty, Debrup; Sarkar, Palash
9 Citations
This work builds on earlier work by Rogaway at Asiacrypt 2004 on tweakable block ciphers (TBCs) and modes of operation. Our first contribution is to generalize Rogaway's TBC construction by working over a ring R and by using a masking sequence of functions. The ring R can be instantiated as either GF(2^{n}) or ℤ_{2^{n}}. Further, over GF(2^{n}), efficient instantiations of the masking sequence of functions can be obtained using either a Linear Feedback Shift Register (LFSR), a powering construction or a cellular automata map. Rogaway's TBC construction was built from the powering construction over GF(2^{n}). Our second contribution is to use the general TBC construction to instantiate general constructions of various modes of operation (AE, PRF, MAC, AEAD) given by Rogaway.
By
Castañeda, Armando; Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
1 Citation
In the wait-free shared memory model, substantial attention has been devoted to understanding the relative power of subconsensus tasks. Two important subconsensus families of tasks have been identified: k-set agreement and M-renaming. When 2 ≤ k ≤ n − 1 and n ≤ M ≤ 2n − 2, these tasks are more powerful than read/write registers, but not strong enough to solve consensus for two processes.
This paper studies the power of renaming with respect to set agreement. It shows that, in a system of n processes, n-renaming is strictly stronger than (n − 1)-set agreement, but not stronger than (n − 2)-set agreement. Furthermore, (n + 1)-renaming cannot solve even (n − 1)-set agreement. As a consequence, there are cases where set agreement and renaming are incomparable when looking at their power to implement each other.
By
Mancillas-López, Cuauhtemoc; Chakraborty, Debrup; Rodríguez-Henríquez, Francisco
Tweakable enciphering schemes are a certain type of block-cipher mode of operation that provide security in the sense of a strong pseudorandom permutation. It has been proposed that such modes are suitable for in-place disk encryption, and many proposals for these schemes are currently available. EME is one of the efficient candidates in this category, and EME2 is a derivative of EME that is currently among the candidates of a draft standard for wide-block modes by the IEEE working group on storage security. We show some weaknesses of these two modes assuming that certain side-channel information is available.
By
Coello Coello, Carlos A.
Evolutionary algorithms (as well as a number of other metaheuristics) have become a popular choice for solving problems having two or more (often conflicting) objectives (the so-called multiobjective optimization problems). This area, known as EMOO (Evolutionary Multi-Objective Optimization), has grown considerably in the last 20 years, and several people (particularly newcomers) get the impression that it is now very difficult to make contributions of sufficient value to justify, for example, a PhD thesis. However, a lot of interesting research is still under way. In this paper, we will briefly review some of the research topics on evolutionary multiobjective optimization that are currently attracting a lot of interest (e.g., indicator-based selection, many-objective optimization and the use of surrogates) and which represent good opportunities for research. Some of the challenges currently faced by this discipline will also be delineated.
By
Beuchat, Jean-Luc; González-Díaz, Jorge E.; Mitsunari, Shigeo; Okamoto, Eiji; Rodríguez-Henríquez, Francisco; Teruya, Tadanori
Show all (6)
27 Citations
This paper describes the design of a fast software library for the computation of the optimal ate pairing on a Barreto–Naehrig elliptic curve. Our library is able to compute the optimal ate pairing over a 254-bit prime field
$\mathbb{F}_{p}$
, in just 2.33 million clock cycles on a single core of an Intel Core i7 2.8 GHz processor, which implies that the pairing computation takes 0.832 ms. We achieve this performance through a careful implementation of the base field arithmetic using the customary Montgomery multiplier for prime fields. The prime field is constructed via the Barreto–Naehrig polynomial parametrization of the prime p given as p = 36t^{4} + 36t^{3} + 24t^{2} + 6t + 1, with t = 2^{62} − 2^{54} + 2^{44}. This selection of t allows us to obtain important savings in both the Miller loop and the final exponentiation steps of the optimal ate pairing.
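The parametrization can be reproduced directly. In the sketch below, the formula for the group order r is the standard BN parametrization; it is an addition of ours, not stated in the abstract:

```python
# Barreto-Naehrig parametrization for t = 2^62 - 2^54 + 2^44.
t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1   # field characteristic
r = 36*t**4 + 36*t**3 + 18*t**2 + 6*t + 1   # curve order (standard BN formula)
print(p.bit_length(), r.bit_length())        # both 254 bits
```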
By
Aranha, Diego F.; Faz-Hernández, Armando; López, Julio; Rodríguez-Henríquez, Francisco
8 Citations
We design a state-of-the-art software implementation of field and elliptic curve arithmetic on standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted using the best formulae and implementation strategies available, together with the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The i-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known τ-NAF scalar multiplication algorithm. The
$\tau^{\lfloor m/3 \rfloor}$
and
$\tau^{\lfloor m/4 \rfloor}$
maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions and, in general, the
$\lfloor m/s \rfloor$
-th power of the Frobenius automorphism is applied as an analogue of an s-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random and multiple points. In particular, our library is able to compute a random-point scalar multiplication in just below 10^{5} clock cycles, which sets a new speed record across all curves with or without endomorphisms defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
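The τ-NAF recoding mentioned above is the Frobenius-based analogue of the ordinary non-adjacent form. For intuition only, here is plain integer NAF recoding (the τ-adic version, which replaces doubling by the Frobenius map τ, is not shown):

```python
def naf(k: int) -> list:
    """Non-adjacent form of k, least-significant digit first:
    digits in {-1, 0, 1}, with no two adjacent non-zero digits."""
    digits = []
    while k:
        if k & 1:
            d = 2 - (k % 4)   # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

print(naf(7))   # [-1, 0, 0, 1], i.e. 7 = -1 + 8
```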
By
Beuchat, Jean-Luc; Detrey, Jérémie; Estibals, Nicolas; Okamoto, Eiji; Rodríguez-Henríquez, Francisco
1 Citation
This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated with the Tate pairing, we manage to keep the pipeline busy. We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. Then, we outline the architecture of a coprocessor for the Tate pairing over
$\mathbb{F}_{3^m}$
. However, a final exponentiation is still needed to obtain a unique value, which is desirable in most of the cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources.
According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors.
By
Rajsbaum, Sergio; Raynal, Michel
Due to the advent of multicore machines, shared memory distributed computing models taking into account asynchrony and process crashes are becoming more and more important. This paper visits models for these systems and analyzes their properties from a computability point of view. Among them, the base snapshot model and the iterated model are particularly investigated. The paper also visits several approaches that have been proposed to model failures (mainly the wait-free model and the adversary model) and takes a look at the BG simulation. The aim of this survey is to help the reader better understand the power and limits of shared memory models of distributed computing.
By
Martínez-Herrera, Alberto F.; Mex-Perera, J. Carlos; Nolazco-Flores, Juan A.
The Substitution Box (S-box) is usually the most complex module in some block ciphers. Prominent ciphers such as AES and Camellia use S-boxes that are affine equivalents of a multiplicative inverse in small finite fields. This manuscript describes mathematical representations of the Camellia S-box using composite fields with polynomial, normal or mixed bases. An optimized hardware implementation typically aims to reduce the number of gates used. Our theoretical design with composite normal bases saves gates in the critical path, using 19 XOR gates, 4 AND gates and 2 NOT gates. With composite mixed bases, the critical path has 2 XOR gates more than the representation with composite normal bases. Redundancies found in the affine transformation matrices that form the composite fields were eliminated. For mixed bases, new Algebraic Normal Form identities were obtained to compute the inner composite multiplicative inverse, reducing the critical path of the complete implementation of the Camellia S-box. These constructions were translated into transistor-gate architectures for hardware representations using Electric VLSI [29] under the MOSIS C5 process [17], [18], thus obtaining the corresponding schematic models.
By
Oliveira, Thomaz; Aranha, Diego F.; López, Julio; Rodríguez-Henríquez, Francisco
6 Citations
In this paper we introduce new methods for computing constant-time variable-base point multiplications over the Galbraith-Lin-Scott (GLS) and the Koblitz families of elliptic curves. Using a left-to-right double-and-add and a right-to-left halve-and-add Montgomery ladder over a GLS curve, we present some of the fastest timings yet reported in the literature for point multiplication. In addition, we combine these two procedures to compute a multi-core protected scalar multiplication. Furthermore, we designed a novel regular $$\tau$$-adic scalar expansion for Koblitz curves. As a result, using the regular recoding approach, we set the speed record for a single-core constant-time point multiplication on standardized binary elliptic curves at the $$128$$-bit security level.
By
Chi, Jesús-Javier; Oliveira, Thomaz
1 Citation
In this paper we present a complete Magma implementation for solving the discrete logarithm problem (DLP) on a binary GLS curve defined over the field
$$\mathbb {F}_{2^{62}}$$
. For this purpose, we constructed a curve vulnerable to the gGHS Weil descent attack and adapted the algorithm proposed by Enge and Gaudry to solve the DLP on the Jacobian of a genus-32 hyperelliptic curve. Furthermore, we describe a mechanism to check whether a randomly selected binary GLS curve is vulnerable to the gGHS attack. This method works with all curves defined over binary fields and can be applied to each element of the isogeny class.
By
Chang, Donghoon; Nandi, Mridul
9 Citations
The classical Merkle-Damgård design principle [13,6] has been scrutinized in many ways, such as Joux's multicollision attack and the Kelsey-Schneier second-preimage attack. At TCC'04, Maurer et al. introduced a strong security notion called "indifferentiability" for a hash function based on a compression function. The classical design principle is also insecure against this strong security notion, whereas the chopMD hash is secure with a security bound of roughly σ^{2}/2^{s}, where s is the number of chopped bits and σ is the total number of message blocks queried by a distinguisher. In the case n = 2s, where n is the output size of the compression function, the value of σ needed for a significant bound is 2^{s/2}, which is the birthday complexity for an s-bit hash output. In this paper, we present an improved security bound for chopMD: (3(n − s) + 1)q/2^{s} + q/2^{n − s − 1} + σ^{2}/2^{n + 1}, where q is the total number of queries. In the case n = 2s, chopMD is indifferentiably secure if q = O(2^{s}/(3s + 1)) and σ = O(2^{n/2}), which are beyond the birthday complexity. We also present a design principle for an n-bit hash function based on a compression function
$f : \{0,1\}^{2n+b} \rightarrow \{0,1\}^n$
and show that the indifferentiability security bound for this hash function is roughly (3n + 1)σ/2^{n}. So, the new hash function design is second-preimage and r-multicollision secure as long as the query complexity (the number of message blocks queried) of an attacker is less than 2^{n}/(3n + 1) or 2^{n(r − 1)/r}, respectively.
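The improved chopMD bound can be evaluated concretely. A sketch (the function name is ours) computing the stated expression for n = 2s = 256:

```python
from fractions import Fraction

def chopmd_bound(q: int, sigma: int, n: int, s: int) -> Fraction:
    """Indifferentiability bound (3(n-s)+1)q/2^s + q/2^(n-s-1) + sigma^2/2^(n+1)."""
    return (Fraction((3 * (n - s) + 1) * q, 2 ** s)
            + Fraction(q, 2 ** (n - s - 1))
            + Fraction(sigma ** 2, 2 ** (n + 1)))

# n = 2s = 256: the bound stays small well beyond the 2^64 birthday barrier.
print(float(chopmd_bound(q=2**100, sigma=2**100, n=256, s=128)))
```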
By
Pinto, David; Juan, Alfons; Rosso, Paolo
The world wide web is a natural setting for cross-lingual information retrieval. The European Union is a typical example of a multilingual scenario, where multiple users have to deal with information published in at least 20 languages. Given queries in some source language and a target corpus in another language, the typical approach consists of translating either the query or the target dataset into the other language. Other approaches use parallel corpora to obtain a statistical dictionary of words across the different languages. In this work, we propose to use a training corpus made up of a set of Query-Relevant Document Pairs (QRDP) in a probabilistic cross-lingual information retrieval approach based on IBM alignment model 1 for statistical machine translation. Our approach has two main advantages over those that use direct translation and parallel corpora: we do not obtain a translation of the query, but a set of associated words that share its meaning in some way, so the obtained dictionary is, in a broad sense, more semantic than a translation dictionary. Besides, since the queries are supervised, we work in a more restricted domain than when using a general parallel corpus, and it is well known that results in a restricted domain are better than those obtained in a general context. To determine the quality of our experiments, we compared our results with those obtained by a direct translation of the queries with a query translation system, observing promising results.
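IBM alignment model 1 learns word-translation probabilities t(f | e) from paired texts by expectation-maximization. A compact sketch of the training loop on toy data (the paper trains on Query-Relevant Document Pairs rather than the parallel sentences used here):

```python
from collections import defaultdict

def train_ibm1(pairs, iters=10):
    """EM training of IBM Model 1 probabilities t(f | e)
    from (e_sentence, f_sentence) pairs (lists of words)."""
    t = defaultdict(lambda: 1.0)               # uniform (unnormalized) start
    for _ in range(iters):
        count = defaultdict(float)             # expected co-occurrence counts
        total = defaultdict(float)
        for e_sent, f_sent in pairs:
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_sent)   # E-step normalization
                for e in e_sent:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        t = defaultdict(float,                 # M-step: renormalize per e
                        {fe: count[fe] / total[fe[1]] for fe in count})
    return t

pairs = [(["la", "casa"], ["the", "house"]),
         (["la"], ["the"])]
t = train_ibm1(pairs)
print(round(t[("the", "la")], 3))   # "the" aligns strongly with "la"
```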
By
Garza-Fabre, Mario; Rodriguez-Tello, Eduardo; Toscano-Pulido, Gregorio
2 Citations
Through multiobjectivization, a single-objective problem is restated in multiobjective form with the aim of enabling a more efficient search process. Recently, this transformation was applied with success to the hydrophobic-polar (HP) lattice model, an abstract representation of the protein structure prediction problem, where the use of alternative multiobjective formulations of the problem has led to significantly better results. In this paper, an improved multiobjectivization for the HP model is proposed. By decomposing the HP model's energy function, a two-objective formulation of the problem is defined. A comparative analysis reveals that the newly proposed multiobjectivization compares favorably with respect to both the conventional single-objective and the previously reported multiobjective formulations. Statistical significance testing and the use of a large set of test cases support the findings of this study. Both two-dimensional and three-dimensional lattices are considered.
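In the HP model, the energy of a conformation is −1 per contact between two H residues that are lattice neighbours but not consecutive in the sequence. The sketch below shows the mechanics of decomposing that energy into two objectives; the parity-based split is purely illustrative and not necessarily the decomposition used in the paper:

```python
def hp_objectives(sequence, coords):
    """Split the 2D HP-model energy into two objectives by the parity of
    the first residue's index in each H-H contact (illustrative split)."""
    pos = {c: i for i, c in enumerate(coords)}
    even = odd = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):     # count each contact once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                if min(i, j) % 2 == 0:
                    even -= 1
                else:
                    odd -= 1
    return even, odd   # even + odd equals the total HP energy

# "HPPH" folded into a unit square: one H-H contact between residues 0 and 3.
print(hp_objectives("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))   # (-1, 0)
```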
By
Estivill-Castro, Vladimir; Wood, Derick
2 Citations
The availability of large main memories and new disk drive technologies have modified the models for external sorting and renewed interest in their study. We investigate the replacement-selection paradigm for external sorting. We focus on the performance of the merge phase when given random files and, especially, when given nearly sorted files, since such files are common in practice. In particular, we demonstrate that, during the merge phase, the floating-buffers technique not only reduces the sorting time by fully overlapping I/O and saving seeks, but also profits significantly from existing order in the input. We also propose a new algorithm for computing a feasible reading sequence for floating buffers that improves upon previous algorithms.
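For readers unfamiliar with the paradigm: replacement selection keeps a small heap of records and writes out ascending runs, deferring any record smaller than the last one written to the next run; random input yields runs averaging about twice the memory size, while nearly sorted input collapses into very few runs. A minimal sketch (parameter names ours):

```python
import heapq
import itertools

def replacement_selection(records, memory=3):
    """Generate sorted runs by replacement selection: a record smaller than
    the last one output is tagged for the next run."""
    it = iter(records)
    heap = [(0, x) for x in itertools.islice(it, memory)]   # (run#, key)
    heapq.heapify(heap)
    runs, current, run = [], [], 0
    while heap:
        r, x = heapq.heappop(heap)
        if r != run:                  # heap top belongs to the next run
            runs.append(current)
            current, run = [], r
        current.append(x)
        nxt = next(it, None)
        if nxt is not None:
            # nxt can extend the current run only if it is >= the last output
            heapq.heappush(heap, (run if nxt >= x else run + 1, nxt))
    runs.append(current)
    return runs

print(replacement_selection([5, 1, 4, 2, 8, 3, 9, 7, 6], memory=3))
# [[1, 2, 4, 5, 8, 9], [3, 6, 7]]
```

Note how the first run (length 6) is twice the heap size, and how fully sorted input would produce a single run.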
By
Gafni, Eli; Rajsbaum, Sergio
3 Citations
In round-by-round models of distributed computing, processes run in a sequence of (synchronous or asynchronous) rounds. The advantage of the round-by-round approach is that invariants established in the first round are preserved in later rounds. An elegant asynchronous round-by-round shared memory model is the iterated snapshots model (IS). Instead of the snapshots model, where processes share an array m[·] indexed by process ID that can be accessed any number of times, with P_{i} writing to m[i] and taking snapshots of the entire array, in the IS model processes share a two-dimensional array m[·,·], indexed by iteration number and by process ID, where P_{i} in iteration r writes once to m[r, i] and takes one snapshot of row r, m[r,·]. The IS model lends itself more easily to combinatorial analysis. However, to show that whenever a task is impossible in the IS model it is also impossible in the snapshots model, a simulation is needed. Such a simulation was presented by Borowsky and Gafni in PODC97; namely, it was shown how to take a wait-free protocol for the snapshots model and transform it into a protocol for the IS model solving the same task.
In this paper we present a new simulation from the snapshots model to the IS model, and show that it can be extended to work with models stronger than wait-free. The main contribution is to show that the simulation can work with models that have access to certain communication objects, called 01-tasks. This extends the result of Gafni, Rajsbaum and Herlihy in DISC’2006, stating that renaming is strictly weaker than set agreement, from the IS model to the usual non-iterated wait-free read/write shared memory model.
We also show that our simulation works with t-resilient models and the more general dependent process failure model of Junqueira and Marzullo. This version of the simulation extends previous results by Herlihy and Rajsbaum in PODC’2010 and DISC’2010, about the topological connectivity of a protocol complex in an iterated dependent process failure model, to the corresponding non-iterated model.
By
Conde, Rodolfo; Rajsbaum, Sergio
1 Citation
In the consensus task each process proposes a value, and all correct processes have to decide the same value. In addition, validity requires that the decided value is a proposed value. Afek, Gafni and Lieber (DISC’09) introduced the safe-consensus task by weakening the validity requirement: if the first process to invoke the task returns before any other process invokes it, then it outputs its input; otherwise, when there is concurrency, the consensus output can be arbitrary, not even the input of any process. Surprisingly, they showed that safe-consensus is equivalent to consensus in a system where any number of processes can crash (e.g., wait-free).
We show that safe-consensus is nevertheless a much weaker communication primitive, in the sense that any wait-free implementation of consensus requires $\binom{n}{2}$ safe-consensus black boxes, and this bound is tight. The lower bound proof uses connectivity arguments based on subgraphs of Johnson graphs. For the upper bound protocol that we present, we introduce the g-2-coalitions-consensus task, which may be of independent interest. We work in an iterated model of computation, where the processes repeatedly: write their information to a (fresh) shared array, invoke safe-consensus boxes, and snapshot the contents of the shared array.
By
Winter, Victor; Kapur, Deepak
Based on our investigations of a case study of controllers for train systems [6,7,13,14], we present a model of reactive systems which emphasizes dynamic partitioning of system behavior into normal and abnormal. The class of reactive systems considered is non-strict in the sense that their behavior is not entirely governed by past events; instead, future events must also be considered in the design of controllers for such systems.
By
Mata, Felix
1 Citation
Geographic Information Ranking consists of measuring whether a document (answer) is relevant to a spatial query. This is done by comparing characteristics shared by the document and the query. The most popular approaches compare just one aspect of geographical data (geographic properties, topology, among others), which limits the assessment of document relevance. Nevertheless, the assessment can be improved when key characteristics of geographical objects are considered in the ranking: (1) geographical attributes, (2) topological relations, and (3) geographical concepts. In this paper, we outline iRank, a method that integrates these three aspects to rank a document. Our approach evaluates documents from three sources of information: GeoOntologies, dictionaries, and topology files. Relevance is measured in three stages. In the first stage, relevance is computed by processing concepts; in the second stage, relevance is calculated using geographic attributes; in the last stage, relevance is measured by computing topological relations. Thus, the main contribution of iRank is to show that integrating these three ranking criteria is better than using them separately.
By
Boone, Paul; Chavez, Edgar; Gleitzky, Lev; Kranakis, Evangelos; Opatrny, Jaroslav; Salazar, Gelasio; Urrutia, Jorge
An important technique for discovering routes between two nodes in an ad hoc network involves applying the face routing algorithm on a planar spanner of the network. Face routing guarantees message delivery in networks that contain large holes, where greedy algorithms fail. Existing techniques for constructing a suitable planar subgraph involve local tests that eliminate crossings between existing links by deleting some links. They do not test whether the deleted links actually create any crossings, so some links are deleted needlessly. As a result, some of the routes found by face routing have an unnecessarily large number of hops from source to destination. We consider a new local test for preprocessing a wireless network that produces a planar subgraph. The test is relatively simple, requires low overhead, and does not eliminate existing links unless it is needed to eliminate a crossing, thus reducing the overhead associated with multiple hops.
By
Garza-Fabre, Mario; Rodriguez-Tello, Eduardo; Toscano-Pulido, Gregorio
3 Citations
The hydrophobic-polar (HP) model for protein structure prediction abstracts the fact that hydrophobic interactions are a dominant force in the protein folding process. This model represents a hard combinatorial optimization problem, which has been widely addressed using evolutionary algorithms and other metaheuristics. In this paper, the multiobjectivization of the HP model is proposed. This originally single-objective problem is restated as a multiobjective one by decomposing the conventional objective function into two independent objectives. By using different evolutionary algorithms and a large set of test cases, the new alternative formulation was compared against the conventional single-objective problem formulation. As a result, the proposed formulation increased the search performance of the implemented algorithms in most of the cases. Both two- and three-dimensional lattices are considered. To the best of the authors’ knowledge, this is the first study where multiobjective optimization methods are used for solving the HP model.
By
Morales-Luna, Guillermo
1 Citation
We use a simple epistemic logic to pose problems related to relational database design. We use the notion of attribute dependencies, which is weaker than the notion of functional dependencies, to perform design tasks, since it satisfies the Armstrong axioms. We also discuss the application of the simple epistemic logic to privacy protection and to statistical disclosure.
By
Galván-López, Edgar; Vázquez-Mendoza, Lucia; Schoenauer, Marc; Trujillo, Leonardo
In Genetic Programming (GP), the fitness of individuals is normally computed using a set of fitness cases (FCs). Research on the use of FCs in GP has primarily focused on how to reduce the size of these sets. However, often only a small set of FCs is available and there is no need to reduce it. In this work, we are interested in using the whole set of FCs, but rather than adopting the commonly used GP approach of presenting the entire set to the system from the beginning of the search, referred to as static FCs, we allow the GP system to build it by aggregation over time, named dynamic FCs, in the hope of making the search more amenable. Moreover, there has been no study on the use of FCs in Dynamic Optimisation Problems (DOPs). To this end, we also use the Kendall Tau Distance (KTD) approach, which quantifies pairwise dissimilarities between two lists of fitness values. KTD aims to capture the degree of a change in DOPs, and we use this to promote structural diversity. Results on eight symbolic regression functions indicate that both approaches are highly beneficial in GP.
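The Kendall Tau Distance the abstract relies on is the number of pairwise disagreements between two rankings. A minimal sketch, with function and variable names of our own choosing (the paper ranks individuals by their fitness values before and after a change):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Count pairwise disagreements between the rankings induced by
    two lists of fitness values over the same individuals.
    Ties are not handled specially in this sketch."""
    assert len(a) == len(b)
    rank_a = sorted(range(len(a)), key=lambda i: a[i])
    rank_b = sorted(range(len(b)), key=lambda i: b[i])
    pos_b = {ind: r for r, ind in enumerate(rank_b)}
    discordant = 0
    for i, j in combinations(rank_a, 2):
        # `a` ranks i before j; count a disagreement if `b` reverses them
        if pos_b[i] > pos_b[j]:
            discordant += 1
    return discordant

print(kendall_tau_distance([1, 2, 3, 4], [1, 3, 2, 4]))  # 1 swapped pair
```

A distance of 0 means the two fitness lists agree on every pair; the maximum, n(n-1)/2, means the ranking was completely reversed, signalling a large change in the DOP.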
By
Tchernykh, Andrei; Trystram, Denis
In this paper, we focus on online scheduling of multiprocessor jobs, with emphasis on the regulation of idle periods in the framework of general list policies. We consider a new family of scheduling strategies based on two phases which successively combine sequential and parallel executions of the jobs. These strategies are part of a more generic scheme introduced in [6]. The main result is to demonstrate that it is possible to estimate the amount of resources that should remain idle for a better regulation of the load, and to obtain approximation bounds.
By
Picek, Stjepan; Coello Coello, Carlos A.; Jakobovic, Domagoj; Mentens, Nele
2 Citations
The problem of finding the shortest addition chain for a given exponent is of great relevance in cryptography, but is also very difficult to solve since it is an NP-hard problem. In this paper, we propose a genetic algorithm with a novel representation of solutions and new crossover and mutation operators to minimize the length of the addition chains corresponding to a given exponent. We also develop a repair strategy that significantly enhances the performance of our approach. The results are compared with respect to those generated by other metaheuristics for instances of moderate size, but we also investigate values up to $2^{127} - 3$. For those instances, we were unable to find any results produced by other metaheuristics for comparison, and three additional strategies were adopted in this case to serve as benchmarks. Our results indicate that the proposed approach is a very promising alternative to deal with this problem.
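To make the objective concrete: an addition chain for an exponent e is a sequence starting at 1 in which every later element is the sum of two earlier (not necessarily distinct) elements and which ends at e; its length (number of additions) is what the genetic algorithm minimizes. A small verifier of our own, not code from the paper:

```python
def is_addition_chain(chain, exponent):
    """Check that `chain` is a valid addition chain ending at `exponent`:
    it starts at 1 and every later element is the sum of two
    (not necessarily distinct) earlier elements."""
    if chain[0] != 1 or chain[-1] != exponent:
        return False
    for k in range(1, len(chain)):
        prev = chain[:k]
        if not any(a + b == chain[k] for a in prev for b in prev):
            return False
    return True

# A chain for 15 with 5 additions (the binary method needs 6):
chain = [1, 2, 3, 6, 12, 15]
print(is_addition_chain(chain, 15), len(chain) - 1)  # True 5
```

Any candidate produced by crossover or mutation that fails this check is what the paper's repair strategy would have to fix.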
By
Montano-Rivas, Omar
Completion-based automated theory exploration is a method to explore inductive theories with the aid of a convergent rewrite system. It combines a method to synthesise conjectures/definitions in a theory with a completion algorithm. Completion constructs a convergent rewrite system which is then used to reduce redundancies and improve proof automation during the exploration of the theory. However, completion does not always succeed on a set of identities and a reduction ordering. A common failure occurs when an initial identity or a normal form of a critical pair cannot be oriented by the given ordering. A popular solution to this problem consists of using those instances of the rules which can be oriented for rewriting, namely ordered rewriting. Extending completion to ordered rewriting leads to ‘unfailing completion’. In this paper, we extend the class of theories on which the completion-based automated theory exploration method can be applied by using unfailing completion. This produces stronger normalization methods compared to those in [20,21]. The techniques described are implemented in the theory exploration system IsaScheme.
By
Sánchez, Ana Helena; Rodríguez-Henríquez, Francisco
7 Citations
In 2011, Waters presented a ciphertext-policy attribute-based encryption protocol that uses bilinear pairings to provide control access mechanisms, where the set of user’s attributes is specified by means of a linear secret sharing scheme. Some of the applications foreseen for this protocol lie in the context of mobile devices such as smartphones and tablets, which in a majority of instances are powered by an ARM processor supporting the NEON vector set of instructions. In this paper we present the design of a software cryptographic library that implements a 127-bit security level attribute-based encryption scheme over mobile devices equipped with a 1.4 GHz Exynos 4 Cortex-A9 processor and a development board that hosts a 1.7 GHz Exynos 5 Cortex-A15 processor. For the latter platform, taking advantage of the inherent parallelism of the NEON vector instructions, our library computes a single optimal pairing over a Barreto-Naehrig curve approximately 2 times faster than the best timings previously reported on ARM platforms at this level of security. Further, using a 6-attribute access formula, our library is able to encrypt/decrypt a text/ciphertext in less than 7.5 ms and 15.67 ms, respectively.
By
Fraigniaud, Pierre; Ilcinkas, David; Rajsbaum, Sergio; Tixeuil, Sébastien
3 Citations
We consider the task of exploring graphs with anonymous nodes by a team of non-cooperative robots, modeled as finite automata. For exploration to be completed, each edge of the graph has to be traversed by at least one robot. In this paper, the robots have no a priori knowledge of the topology of the graph, nor of its size, and we are interested in the amount of memory the robots need to accomplish exploration. We introduce the so-called reduced automata technique, and we show how to use this technique to derive several space lower bounds for exploration. Informally speaking, the reduced automata technique consists in reducing a robot to a simpler form that preserves its “core” behavior on some graphs. Using this technique, we first show that any set of q ≥ 1 non-cooperative robots requires $\Omega(\log(\frac{n}{q}))$ memory bits to explore all n-node graphs. The proof implies that, for any set of q K-state robots, there exists a graph of size O(qK) that no robot of this set can explore, which improves the O(K^{O(q)}) bound by Rollik (1980). Our main result is an application of this latter result, concerning terminating graph exploration with one robot, i.e., in which the robot is requested to stop after completing exploration. For this task, the robot is provided with a pebble that it can use to mark nodes (without such a marker, even terminating exploration of cycles cannot be achieved). We prove that terminating exploration requires Ω(log n) bits of memory for a robot achieving this task in all n-node graphs.
By
Adj, Gora; Cervantes-Vázquez, Daniel; Chi-Domínguez, Jesús-Javier; Menezes, Alfred; Rodríguez-Henríquez, Francisco
The security of the Jao-De Feo Supersingular Isogeny Diffie-Hellman (SIDH) key agreement scheme is based on the intractability of the Computational Supersingular Isogeny (CSSI) problem: computing $\mathbb{F}_{p^2}$-rational isogenies of degrees $2^e$ and $3^e$ between certain supersingular elliptic curves defined over $\mathbb{F}_{p^2}$. The classical meet-in-the-middle attack on CSSI has an expected running time of $O(p^{1/4})$, but also has $O(p^{1/4})$ storage requirements. In this paper, we demonstrate that the van Oorschot-Wiener golden collision finding algorithm has a lower cost (but higher running time) for solving CSSI, and thus should be used instead of the meet-in-the-middle attack to assess the security of SIDH against classical attacks. The smaller parameter p brings significantly improved performance for SIDH.
By
Aguilar, José Alfonso; Zaldívar-Colado, Aníbal; Tripp-Barba, Carolina; Misra, Sanjay; Bernal, Roberto; Ocegueda, Abraham
1 Citation
It is well known that Requirements Engineering (RE) is one of the critical factors for successful software. The current literature offers several reasons for this claim. Requirements elicitation is one phase that is vital for developing any new software application; in spite of this, many software development efforts fail because of a flawed elicitation phase. Several proposals exist for requirements elicitation in Software Engineering, but the current software development market is focused on the development of Web and mobile applications, especially using Model-Driven methods, which is why we consider it necessary to know the elicitation techniques applied in Model-Driven Web Engineering. To do this, we selected the most representative methods, namely NDT, UWE and WebML. We have reviewed 189 publications from ACM, IEEE, Science Direct, DBLP and the World Wide Web. Publications from the RE literature were analyzed by means of strict consideration of the current techniques for requirements elicitation.
By
Dani, Varsha; King, Valerie; Movahedi, Mahnush; Saia, Jared
5 Citations
We describe an asynchronous algorithm to solve secure multiparty computation (MPC) over n players, when strictly less than a $\frac{1}{8}$ fraction of the players are controlled by a static adversary. For any function f over a field that can be computed by a circuit with m gates, our algorithm requires each player to send a number of field elements and perform an amount of computation that is $\tilde{O}(\frac{m}{n} + \sqrt{n})$. This significantly improves over traditional algorithms, which require each player to both send a number of messages and perform computation that is Ω(nm).
Additionally, we define the threshold counting problem and present a distributed algorithm to solve it in the asynchronous communication model. Our algorithm is load balanced, with computation, communication and latency complexity of O(log n), and may be of independent interest to other applications with a load balancing goal in mind.
By
Pierrard, Thomas; Coello Coello, Carlos A.
3 Citations
This paper presents a new artificial immune system algorithm for solving multiobjective optimization problems, based on the clonal selection principle and the hypervolume contribution. The main aim of this work is to investigate the performance of this class of algorithm with respect to approaches which are representative of the state of the art in multiobjective optimization using metaheuristics. The results obtained by our proposed approach, called multiobjective artificial immune system based on hypervolume (MOAIS-HV), are compared with respect to those of the NSGA-II. Our preliminary results indicate that our proposed approach is very competitive, and can be a viable choice for solving multiobjective optimization problems.
By
Ramos, J. Guadalupe; Silva, Josep; Vidal, Germán
The development of modern routers requires a significant effort to design, build, and verify them. While hardware routers are faster, they are difficult to configure and maintain. Software routers, on the other hand, are slower but much more flexible, easier to configure and maintain, less expensive, etc. Recently, a modular architecture and toolkit for building software routers and other packet processors has been introduced: the Click system. It includes a specification language with features for declaring and connecting router elements and for designing abstractions.
In this work, we introduce the domain-specific language Rose for the specification of software routers. Rose is embedded in Curry, a modern declarative multi-paradigm language. An advantage of this approach is that we have available a framework where router specifications can be transformed, optimized, verified, etc., by using a number of existing formal techniques already developed for Curry programs. Furthermore, we show that the features of Curry are particularly useful to specify router configurations with a high level of abstraction. Our first experiments point out that the proposed methodology is both useful and practical.
By
Gafni, Eli; Rajsbaum, Sergio; Raynal, Michel; Travers, Corentin
3 Citations
We introduce the (b,n)-Committee Decision Problem (CD) – a generalization of the consensus problem. While set agreement generalizes consensus in terms of the number of decisions allowed, the CD problem generalizes consensus in the sense of considering many instances of consensus and requiring a processor to decide in at least one instance. In more detail, in the CD problem each one of a set of n processes has a (possibly distinct) value to propose to each one of a set of b consensus problems, which we call committees. Yet a process has to decide a value for at least one of these committees, such that all processes deciding for the same committee decide the same value. We study the CD problem in the context of a wait-free distributed system and analyze it using a combination of distributed algorithmic and topological techniques, introducing a novel reduction technique.
We use the reduction technique to obtain the following results. We show that the (2,3)-CD problem is equivalent to the musical benches problem introduced by Gafni and Rajsbaum in [10], and both are equivalent to (2,3)-set agreement, closing an open question left there. Thus, all three problems are wait-free unsolvable in a read/write shared memory system, and they are all solvable if the system is enriched with objects capable of solving (2,3)-set agreement. While the previous proof of the impossibility of musical benches was based on the Borsuk-Ulam (BU) Theorem, it now relies on Sperner’s Lemma, opening intriguing questions about the relation between BU and distributed computing tasks.
By
Olague, Gustavo; Vega, Francisco Fernández; Pérez, Cynthia B.; Lutton, Evelyne
3 Citations
We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call “the infection algorithm.” The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
By
Chakraborty, Debrup; Sarkar, Palash
13 Citations
The notion and the first construction of a tweakable enciphering scheme, called CMC, were presented by Halevi and Rogaway at Crypto 2003. In this paper, we present HCH, which is a new construction of such a scheme. The construction uses the hash-encrypt-hash approach introduced by Naor and Reingold. This approach has recently been used in the constructions of the tweakable enciphering schemes HCTR and PEP. HCH has several advantages over the previous schemes CMC, EME, EME*, HCTR, and PEP. CMC, EME, and EME* use two block-cipher invocations per message block, while HCTR, PEP, and HCH use only one. PEP uses four multiplications per block, while HCTR and HCH use only two. In HCTR, the security bound is cubic, while in HCH the security bound is quadratic.
By
Coello, Carlos A. Coello
4 Citations
In this paper, we will briefly discuss the current state of the research on evolutionary multiobjective optimization, emphasizing the main achievements obtained to date. Achievements in algorithmic design are discussed from its early origins until the current approaches which are considered as the “second generation” in evolutionary multiobjective optimization. Some relevant applications are discussed as well, and we conclude with a list of future challenges for researchers working (or planning to work) in this area in the next few years.
By
Marron, Mark; Hermenegildo, Manuel; Kapur, Deepak; Stefanovic, Darko
7 Citations
The performance of heap analysis techniques has a significant impact on their utility in an optimizing compiler. Most shape analysis techniques perform interprocedural dataflow analysis in a context-sensitive manner, which can result in analyzing each procedure body many times (causing significant increases in runtime even if the analysis results are memoized). To improve the effectiveness of memoization (and thus speed up the analysis), project/extend operations are used to remove portions of the heap model that cannot be affected by the called procedure (effectively reducing the number of different contexts that a procedure needs to be analyzed with). This paper introduces project/extend operations that are capable of accurately modeling properties that are important when analyzing non-trivial programs (sharing, nullity information, destructive recursive functions, and composite data structures). The techniques we introduce are able to handle these features while significantly improving the effectiveness of memoizing analysis results (and thus improving analysis performance). Using a range of well-known benchmarks (many of which have not been successfully analyzed using other existing shape analysis methods), we demonstrate that our approach results in significant improvements in both accuracy and efficiency over a baseline analysis.
By
Molinet Berenguer, José A.; Coello Coello, Carlos A.
1 Citation
In this paper, we propose a new multiobjective evolutionary algorithm (MOEA), which transforms a multiobjective optimization problem into a linear assignment problem using a set of uniformly scattered weight vectors. Our approach adopts uniform design to obtain the set of weights and the Kuhn-Munkres (Hungarian) algorithm to solve the assignment problem. Differential evolution is used as our search engine, giving rise to the so-called Hungarian Differential Evolution algorithm (HDE). Our proposed approach is compared with respect to a MOEA based on decomposition (MOEA/D) and with respect to an indicator-based MOEA (the S-metric selection Evolutionary Multi-Objective Algorithm, SMS-EMOA) using several test problems (taken from the specialized literature) having from two to ten objective functions. Our preliminary experimental results indicate that our proposed HDE outperforms MOEA/D and is competitive with respect to SMS-EMOA, but at a significantly lower computational cost.
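The linear assignment problem at the heart of this approach pairs each candidate solution (row) with exactly one weight vector (column) so as to minimise the total cost. The paper uses the Kuhn-Munkres algorithm; the brute-force solver and the 3x3 cost matrix below are toy stand-ins of ours, only to illustrate the problem being solved:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive solver for the linear assignment problem: assign
    each solution (row) to exactly one weight vector (column),
    minimising total cost. Kuhn-Munkres solves this in O(n^3);
    brute force is only viable on toy instances like this one."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Hypothetical distances between 3 solutions and 3 weight vectors:
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(best_assignment(cost))  # ((1, 0, 2), 5)
```

In HDE the cost entries would measure how well each solution matches each weight vector, so the assignment spreads the population across the scattered weights.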
By
Hannachi, Mohamed Amine; Bouassida Rodriguez, Ismael; Drira, Khalil; Pomares Hernandez, Saul Eduardo
2 Citations
Multi-labelled graphs are a powerful and versatile tool for modelling real applications in diverse domains such as communication networks, social networks, and autonomic systems, among others. Due to the dynamic nature of such systems, the structure of entities changes continuously over time: new entities may join the system, some may leave it, or the entities’ relations may simply change. Here is where graph transformation takes an important role in modelling systems with dynamic and/or evolving configurations. Graph transformation consists of two main tasks: graph matching and graph rewriting. At present, few graph transformation tools support multi-labelled graphs. To our knowledge, there is no tool that supports inexact graph matching for the purpose of graph transformation. Also, the main problem of these tools lies in the limited expressiveness of the rewriting rules used, which reduces the range of application scenarios that can be modelled and/or increases the number of rewriting rules required. In this paper, we present the tool GMTE – Graph Matching and Transformation Engine. GMTE handles directed and multi-labelled graphs. In addition to exact graph matching, GMTE handles inexact graph matching. The approach to rewriting rules used by GMTE combines Single Push-Out rewriting rules with the edNCE grammar. This combination enriches and extends the expressiveness of the graph rewriting rules. In addition, for graph matching, GMTE uses a conditional rule schema that supports complex comparison functions over labels. To our knowledge, GMTE is the first graph transformation tool that offers such capabilities.
By
Castañeda, Armando; Rajsbaum, Sergio; Raynal, Michel
5 Citations
Tasks and objects are two predominant ways of specifying distributed problems. A task specifies, for each set of processes (which may run concurrently), the valid outputs of the processes. An object specifies the outputs the object may produce when it is accessed sequentially. Each one requires its own implementation notion, to tell when an execution satisfies the specification. For objects, linearizability is commonly used, while for tasks implementation notions are less explored.
Sequential specifications are very convenient; especially important is the locality property of linearizability, which states that linearizable objects compose for free into a linearizable object. However, most well-known tasks have no sequential specification. Also, tasks have no clear locality property.
The paper introduces the notion of an interval-sequential object. The corresponding implementation notion of interval-linearizability generalizes linearizability. Interval-linearizability allows one to specify any task. However, there are sequential one-shot objects that cannot be expressed as tasks, under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object.
By
Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
5 Citations
Processes in a concurrent system need to coordinate using a shared memory or a message-passing subsystem in order to solve agreement tasks such as, for example, consensus or set agreement. However, coordination is often needed to “break the symmetry” of processes that are initially in the same state, for example, to get exclusive access to a shared resource, to get distinct names or to elect a leader.
This paper introduces and studies the family of generalized symmetry breaking (GSB) tasks, which includes election, renaming and many other symmetry breaking tasks. Differently from agreement tasks, a GSB task is “inputless”, in the sense that processes do not propose values; the task only specifies the symmetry breaking requirement, independently of the system’s initial state (where processes differ only in their identifiers). Among various results characterizing the family of GSB tasks, it is shown that (non-adaptive) perfect renaming is universal for all GSB tasks.
By
Lücken, Christian; Brizuela, Carlos; Barán, Benjamin
1 Citation
Multiobjective Evolutionary Algorithms (MOEAs) are used to solve complex multiobjective problems. As the number of objectives increases, Pareto-based MOEAs are unable to maintain the same effectiveness shown for two or three objectives. Therefore, as a way to ameliorate this performance degradation, several authors proposed preference-based methods as an alternative to Pareto-based approaches. On the other hand, parallelization has shown to be useful in evolutionary optimization. A central aspect of the parallelization of evolutionary algorithms is the population partitioning approach. Thus, this paper presents a new parallelization approach based on clustering by the shape of objective vectors to deal with many-objective problems. The proposed method was compared with random and $k$-means clustering approaches using a multithreading framework in the parallelization of the NSGA-II and six variants using preference-based relations for fitness assignment. Executions were carried out for the DTLZ problem suite, and the obtained solutions were compared using the generational distance metric. Experimental results show that the proposed shape-based partition achieves competitive results when compared to the sequential version and to other partitioning approaches.
more …
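As a rough illustration of the partitioning idea, the sketch below groups individuals by the direction (the "shape") of their normalized objective vectors with a small k-means loop. The initialization (first k points), squared-Euclidean distance on unit vectors, and update rule are assumptions chosen for a compact example, not the paper's exact procedure.

```python
import math

def normalize(v):
    """Scale an objective vector to unit length so only its direction
    ('shape' in objective space) matters, not its magnitude."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def kmeans_shape(vectors, k, iters=20):
    """Partition objective vectors into k groups by direction.
    Deterministic init (first k points) is a simplification."""
    pts = [normalize(v) for v in vectors]
    centers = [list(p) for p in pts[:k]]
    assign = [0] * len(pts)
    for _ in range(iters):
        # assignment step: nearest center by squared distance
        for i, p in enumerate(pts):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: re-normalized mean of each cluster
        for c in range(k):
            members = [pts[i] for i in range(len(pts)) if assign[i] == c]
            if members:
                centers[c] = normalize(
                    [sum(col) / len(members) for col in zip(*members)])
    return assign
```

Each resulting group could then evolve in its own thread, mirroring the multithreaded setup described in the abstract.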
By
Lopez, Edgar Manoatl; Antonio, Luis Miguel; Coello Coello, Carlos A.
3 Citations
The hypervolume has become very popular in current multi-objective optimization research. Because of its highly desirable features, it has been used not only as a quality measure for comparing the final results of multi-objective evolutionary algorithms (MOEAs), but also as a selection operator (it is, for example, very suitable for many-objective optimization problems). However, it has one serious drawback: computing the exact hypervolume is highly costly. The best known algorithms to compute the hypervolume are polynomial in the number of points, but their cost grows exponentially with the number of objectives. This paper proposes a novel approach which, through the use of Graphics Processing Units (GPUs), computes the hypervolume contribution of a point in a faster way. We develop a highly parallel implementation of our approach and demonstrate its performance when using it within the $$\mathcal {S}$$-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA). Our results indicate that our proposed approach achieves a significant speedup (of up to 883x) with respect to its sequential counterpart, which allows us to use SMS-EMOA with exact hypervolume calculations in problems having up to 9 objective functions.
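For intuition about the quantity being accelerated, here is a minimal sequential sketch of the exact hypervolume and the per-point contribution in the two-objective minimization case (the easy case; the exponential blow-up only appears at higher dimensions, which is what motivates the GPU approach).

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume of a mutually nondominated 2-D point set
    (minimization) w.r.t. a reference point: after sorting by the first
    objective, the dominated region decomposes into disjoint rectangles."""
    pts = sorted(points)  # ascending f1 implies descending f2 for nondominated sets
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def contribution(points, ref, i):
    """Hypervolume contribution of points[i]: how much the total shrinks
    when that point is removed (the quantity SMS-EMOA's selection uses)."""
    rest = points[:i] + points[i + 1:]
    return hypervolume_2d(points, ref) - hypervolume_2d(rest, ref)
```

In SMS-EMOA the individual with the smallest contribution is the one discarded at each environmental-selection step.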
By
Sanvicente-Sánchez, Héctor; Frausto-Solís, Juan; Imperial-Valenzuela, Froilán
Since the appearance of the Simulated Annealing algorithm (SA), it has proven to be an efficient method for solving combinatorial optimization problems. Consequently, new algorithms based on two nested loops (temperatures and Markov chain) have emerged; one of them is called Threshold Accepting (TA). Classical TA algorithms usually use the same Markov chain length for each temperature cycle; these methods spend a lot of time at high temperatures, where the Markov chain length should be small. In this paper we propose a method based on the neighborhood structure to set the Markov chain length dynamically for each temperature cycle. We implemented two TA algorithms for SAT (the classical TA-CM and the proposed TA-DM). Experimentation shows that the proposed method is more efficient than the classical one, since it obtains the same final solution quality with less processing time.
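A minimal Threshold Accepting sketch may make the two-loop structure concrete. The acceptance rule (accept any neighbor that worsens the cost by less than the threshold) is the defining feature of TA; the particular dynamic chain-length formula below (short chains at high thresholds, longer ones as the threshold falls) is only an illustrative assumption, not the paper's neighborhood-based rule.

```python
import random

def threshold_accepting(cost, neighbor, x0, t0=2.0, t_min=1e-3, alpha=0.9, seed=1):
    """Threshold Accepting: accept a neighbor y of x whenever
    cost(y) - cost(x) < t. The per-threshold chain length grows as t
    falls (an assumption standing in for the paper's dynamic rule)."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    while t > t_min:
        # dynamic Markov-chain length: few samples at high t, more at low t
        length = min(200, max(5, int(20 * t0 / t)))
        for _ in range(length):
            y = neighbor(x, rng)
            if cost(y) - cost(x) < t:   # threshold acceptance, no exponentials
                x = y
                if cost(x) < cost(best):
                    best = x
        t *= alpha                      # geometric cooling of the threshold
    return best
```

For SAT one would take `cost` as the number of unsatisfied clauses and `neighbor` as a single-variable flip.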
By
Ita, Guillermo; Bello, Pedro; Contreras, Meliza
Counting the models of a two-conjunctive form (the #2SAT problem) is a classic #P problem. We determine different structural patterns on the underlying graph of a 2-CF F that allow the efficient computation of #2SAT(F).
We show that if the constraint graph of a formula is acyclic, or the cycles on the graph can be arranged as independent and embedded cycles, then the number of models of F can be counted efficiently.
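The acyclic case can be made concrete on the simplest constraint graph, a path: when clause i links consecutive variables x_i and x_{i+1}, a linear dynamic program counts models without enumerating the 2^n assignments. This sketch only covers that path case, not the embedded-cycle patterns of the paper.

```python
def count_models_chain(signs):
    """Count satisfying assignments of a 2-CNF whose constraint graph is
    a path: clause i is (l_i ∨ l'_{i+1}) over variables x_i, x_{i+1};
    signs[i] = (s, s') gives each literal's polarity (True = positive).
    dp[b] = number of assignments of the prefix with the last variable
    set to b; runs in O(n) instead of O(2^n)."""
    dp = {False: 1, True: 1}
    for s1, s2 in signs:
        new = {False: 0, True: 0}
        for a in (False, True):        # value of x_i
            for b in (False, True):    # value of x_{i+1}
                if a == s1 or b == s2:  # clause satisfied by one literal
                    new[b] += dp[a]
        dp = new
    return dp[False] + dp[True]
```

For example, (x1 ∨ x2) ∧ (x2 ∨ x3) has 5 models among the 8 assignments of three variables.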
By
Pinto, David; Rosso, Paolo; Jiménez, Ernesto
This paper presents a cross-lingual information retrieval approach that uses a ranking method based on a penalised version of the Jaccard formula. The results obtained after submitting a set of runs to WebCLEF 2006 show that this simple ranking formula can be used in a cross-lingual environment. A comparison with runs submitted by other teams ranks us third when using all the topics. Fourth place was obtained with our best overall results using only the new topic set, and second place using only the automatic topics of the new topic set. An exact comparison with the rest of the participants is in fact difficult to obtain, and we therefore consider that further detailed analysis should be done in order to determine the best components of the proposed system.
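To illustrate what a penalised Jaccard ranking might look like, the sketch below weights document terms outside the query by a factor p; with p = 1 it reduces to the plain Jaccard coefficient. The abstract does not specify the actual penalisation used in the WebCLEF runs, so this particular form is an assumption.

```python
def penalized_jaccard(query, doc, p=1.0):
    """Jaccard-style similarity with weight p on document terms that do
    not appear in the query; p = 1 recovers |q ∩ d| / |q ∪ d|.
    The exact formula of the paper is not reproduced here."""
    q, d = set(query), set(doc)
    inter = len(q & d)
    denom = len(q) + p * len(d - q)
    return inter / denom if denom else 0.0

def rank_documents(query, docs, p=2.0):
    """Return document ids ordered by decreasing penalized-Jaccard score."""
    return sorted(docs, key=lambda k: penalized_jaccard(query, docs[k], p),
                  reverse=True)
```

With p > 1, long documents that merely happen to contain the query terms are pushed down the ranking.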
By
Alba-Cabrera, Eduardo; Ibarra-Fiallo, Julio; Godoy-Calderon, Salvador; Cervantes-Alonso, Fernando
1 Citation
The last few years have seen an important increase in research publications dealing with external typical testor-finding algorithms, while internal ones have been almost forgotten or modified to behave as external ones on the basis of their alleged poor performance. In this research we present a new internal typical testor-finding algorithm, called YYC, that incrementally calculates typical testors for the currently analyzed set of basic matrix rows by searching for compatible sets. The experimentally measured performance of this algorithm stands out favorably on problems where external algorithms show very low performance. A comparative analysis of its efficiency against some external typical testor-finding algorithms published during the last few years is also presented.
By
Rendón, Erendira; Sánchez, José Salvador
1 Citation
Clustering in data mining is a discovery process that groups a set of data so as to maximize intra-cluster similarity and minimize inter-cluster similarity. Clustering becomes more challenging when the data are categorical and the amount of available memory is less than the size of the data set. In this paper, we introduce CBC (Clustering Based on Compressed Data), an extension of the Birch algorithm whose main characteristics are that it is especially suitable for very large databases and that it can work with both categorical attributes and mixed features. The effectiveness and performance of the CBC procedure were compared with those of the well-known K-modes clustering algorithm, demonstrating that the CBC summary process does not affect the final clustering, while execution times can be drastically reduced.
By
Czyzowicz, Jurek; Kranakis, Evangelos; Krizanc, Danny; Lambadaris, Ioannis; Narayanan, Lata; Opatrny, Jaroslav; Stacho, Ladislav; Urrutia, Jorge; Yazdani, Mohammadreza
17 Citations
A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of some sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of the sensors that establish barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP-complete, and even hard to approximate within a constant factor, when the sensors may have different sensing ranges. When the sensors have identical sensing ranges, we give several efficient algorithms to calculate final destinations so that the sensors either establish barrier coverage or maximize the coverage of the segment if complete coverage is not feasible, while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.
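One special case of the identical-range MinSum problem is easy to state: when the n sensing ranges exactly tile the segment (2rn equals the segment length), the final positions are forced to r, 3r, 5r, ..., and matching the sorted initial positions to those targets in order minimizes the total distance (order-preserving matching is optimal for L1 costs on a line). This sketch covers only that tight case, not the paper's general algorithms.

```python
def min_total_movement_tight(positions, r, L):
    """MinSum barrier coverage of [0, L] in the tight case 2*r*n == L:
    targets are forced at (2i+1)*r, and sorted-to-sorted assignment
    minimizes the sum of movements."""
    n = len(positions)
    assert abs(2 * r * n - L) < 1e-9, "sketch handles only the tight case"
    targets = [(2 * i + 1) * r for i in range(n)]
    return sum(abs(p - t) for p, t in zip(sorted(positions), targets))
```

In the non-tight case the target positions themselves become decision variables, which is where the paper's algorithms do the real work.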
By
García, Vicente; Mollineda, Ramón A.; Sánchez, J. Salvador
2 Citations
In this paper, we introduce a new approach to evaluate and visualize classifier performance in two-class imbalanced domains. The method defines a two-dimensional space by combining the geometric mean of the class accuracies with a new metric that indicates how balanced they are. A given point in this space represents a certain trade-off between those two measures, which will be expressed as a trapezoidal function. Besides, this evaluation function has the interesting property of allowing the correct predictions on the minority class, often considered the most important class, to be emphasized. Experiments demonstrate the consistency and validity of the proposed evaluation method.
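The two axes can be sketched as follows: the geometric mean of the per-class accuracies measures overall effectiveness, and a min/max ratio measures how balanced the two accuracies are (1 means perfectly balanced). The ratio and the trapezoidal combination of the paper are not reproduced here, so these formulas are illustrative assumptions only.

```python
import math

def gmean_and_balance(acc_pos, acc_neg):
    """Return (geometric mean, balance ratio) for two class accuracies.
    The geometric mean collapses to 0 if either class is fully missed;
    the min/max ratio is a simple stand-in for the paper's balance metric."""
    g = math.sqrt(acc_pos * acc_neg)
    hi = max(acc_pos, acc_neg)
    balance = min(acc_pos, acc_neg) / hi if hi else 1.0
    return g, balance
```

A classifier with accuracies (1.0, 0.25) scores a respectable-looking 0.625 on plain average accuracy but only 0.5 on the geometric mean and 0.25 on balance, exposing the imbalance.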
By
Graff, Mario; Graff-Guerrero, Ariel; Cerda-Jacobo, Jaime
5 Citations
There is great interest in the development of semantic genetic operators to improve the performance of genetic programming. Semantic genetic operators have traditionally been developed using either experimentally based or theoretically based approaches. Our current work proposes a novel semantic crossover situated between the two traditional approaches. The proposed semantic crossover operator is based on the derivative of the error propagated through the tree; this process decides the crossing point of the second parent. The results show that our procedure improves the performance of genetic programming on rational symbolic regression problems.
By
Quintana, Ma. Guadalupe; Morales, Rafael
We present results of using inductive logic programming (ILP) to produce learner models by behavioural cloning. Models obtained using a program for supervised induction of production rules (ripper) are compared to models generated using a well-known program for ILP (foil). It is shown that the models produced by foil are either too specific or too general, depending on whether or not auxiliary relations are applied. Three possible explanations for these results are: (1) there is no way of specifying to foil the minimum number of cases each clause must cover; (2) foil requires that all auxiliary relations be defined extensionally; and (3) the application domain (control of a pole on a cart) has continuous attributes. In spite of foil’s limitations, the models it produced using auxiliary relations meet one of the goals of our exploration: to obtain more structured learner models which are easier to comprehend.
By
Boone, Paul; Chavez, Edgar; Gleitzky, Lev; Kranakis, Evangelos; Opatrny, Jaroslav; Salazar, Gelasio; Urrutia, Jorge
4 Citations
An important technique for discovering routes between two nodes in an ad hoc network involves applying the face routing algorithm on a planar spanner of the network. Face routing guarantees message delivery in networks that contain large holes, where greedy algorithms fail. Existing techniques for constructing a suitable planar subgraph involve local tests that eliminate crossings between links by deleting some of them; they do not check whether the deleted links actually create any crossings, so some links are deleted needlessly. As a result, some of the routes found by face routing will have an unnecessarily large number of hops from source to destination. We consider a new local test for preprocessing a wireless network that produces a planar subgraph. The test is relatively simple, requires low overhead, and does not eliminate an existing link unless doing so is needed to eliminate a crossing, thus reducing the overhead associated with multiple hops.
By
Solorio-Fernández, Saúl; Carrasco-Ochoa, J. Ariel; Martínez-Trinidad, José Fco.
4 Citations
In this paper, we introduce a new hybrid filter-wrapper method for supervised feature selection, based on Laplacian Score ranking combined with a wrapper strategy. We propose ranking the features with the Laplacian Score to reduce the search space, and then using this order to find the best feature subset. We compare our method against other ranking-based feature selection methods, namely Information Gain Attribute Ranking, Relief, and Correlation-based Feature Selection, and additionally include a Wrapper Subset Evaluation method in our comparison. Empirical results over ten real-world datasets from the UCI repository show that our hybrid method is competitive and outperforms the other feature selection methods in most cases.
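The filter-then-wrapper skeleton can be sketched generically: a cheap per-feature score (standing in for the Laplacian Score) fixes the exploration order, and a greedy forward pass keeps a feature only if the wrapper's subset evaluation improves. Both score functions below are caller-supplied placeholders, not the paper's actual criteria.

```python
def forward_select(features, rank_score, subset_score, max_size=None):
    """Hybrid filter-wrapper selection sketch: rank features by a filter
    score, then grow the subset greedily in rank order, keeping a feature
    only if it improves the wrapper evaluation of the whole subset."""
    order = sorted(features, key=rank_score, reverse=True)  # filter phase
    subset, best = [], subset_score([])
    for f in order:                                          # wrapper phase
        candidate = subset + [f]
        score = subset_score(candidate)
        if score > best:
            subset, best = candidate, score
        if max_size and len(subset) >= max_size:
            break
    return subset
```

Because the wrapper only evaluates subsets along one ranked path, it runs in O(d) evaluations instead of the O(2^d) of exhaustive subset search.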
By
Delporte, Carole; Fauconnier, Hugues; Rajsbaum, Sergio; Raynal, Michel
An immediate snapshot object is a high-level communication object, built on top of a read/write distributed system in which all processes except one may crash. It allows each process to write a value and obtain a set of pairs (process id, value) such that, despite process crashes and asynchrony, the sets obtained by the processes satisfy noteworthy inclusion properties.
Considering an n-process model in which up to t processes are allowed to crash (the t-crash system model), this paper is on the construction of t-resilient immediate snapshot objects. In the t-crash system model, a process can obtain values from at least $$(n-t)$$ processes, and, consequently, t-immediate snapshot is assumed to have the properties of the basic $$(n-1)$$-resilient immediate snapshot plus the additional property stating that each process obtains values from at least $$(n-t)$$ processes. The main result of the paper is the following. While there is a (deterministic) $$(n-1)$$-resilient algorithm implementing the basic $$(n-1)$$-immediate snapshot in an $$(n-1)$$-crash read/write system, there is no t-resilient algorithm in a t-crash read/write model when $$t\in [1\ldots (n-2)]$$. This means that, when $$t<n-1$$, the notion of t-resilience is inoperative when one has to implement t-immediate snapshot for these values of t: the model assumption “at most $$t<n-1$$ processes may crash” does not provide us with additional computational power allowing for the design of a genuine t-resilient algorithm (genuine meaning that such an algorithm would work in the t-crash model, but not in the $$(t+1)$$-crash model). To show these results, the paper relies on well-known distributed computing agreement problems such as consensus and k-set agreement.
By
Goudarzi, Alireza; Lakin, Matthew R.; Stefanovic, Darko
4 Citations
We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a high-dimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir, and that despite using only three coupled oscillators, a molecular reservoir computer can achieve 90% accuracy on a benchmark temporal problem.
By
Herlihy, Maurice; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien
3 Citations
In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or because they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model.
This paper is on the computability power of models in which several processes may concurrently run solo. It first introduces a family of round-based wait-free models, called the d-solo models, 1 ≤ d ≤ n, where up to d processes may run solo. The paper then gives a characterization of the colorless tasks that can be solved in each d-solo model. It also introduces the (d,ε)-solo approximate agreement task, which generalizes ε-approximate agreement, and proves that (d,ε)-solo approximate agreement can be solved in the d-solo model, but cannot be solved in the (d+1)-solo model. The paper also studies the relation linking d-set agreement and (d,ε)-solo approximate agreement in asynchronous wait-free message-passing systems.
These results establish for the first time a hierarchy of wait-free models that, while weaker than the basic read/write model, are nevertheless strong enough to solve non-trivial tasks.
By
Fraigniaud, Pierre; Rajsbaum, Sergio; Travers, Corentin
1 Citation
The notion of deciding a distributed language $$\mathcal {L}$$ is of growing interest in various distributed computing settings. Each process $$p_i$$ is given an input value $$x_i$$, and the processes should collectively decide whether their set of input values $$x=(x_i)_i$$ is a valid state of the system w.r.t. some specification, i.e., whether $$x\in \mathcal {L}$$. In nondeterministic distributed decision, each process $$p_i$$ gets a local certificate $$c_i$$ in addition to its input $$x_i$$. If the input $$x\in \mathcal {L}$$, then there exists a certificate $$c=(c_i)_i$$ such that the processes collectively accept x, and if $$x\not \in \mathcal {L}$$, then for every c, the processes should collectively reject x. The collective decision is expressed by the set of opinions emitted by the processes.
In this paper we study nondeterministic distributed decision in systems where asynchronous processes may crash. It is known that the number of opinions needed to deterministically decide a language can grow with n, the number of processes in the system. We prove that every distributed language $$\mathcal {L}$$ can be nondeterministically decided using only three opinions, with certificates of size $$\lceil \log \alpha (n)\rceil +1$$ bits, where $$\alpha$$ grows at least as slowly as the inverse of the Ackermann function. The result is optimal, as we show that there are distributed languages that cannot be decided using just two opinions, even with arbitrarily large certificates.
To prove our upper bound, we introduce the notion of a distributed encoding of the integers, which provides an explicit construction of a long bad sequence in the well-quasi-ordering $$(\{0,1\}^*,\le _*)$$ controlled by the successor function. Thus, we provide a new class of applications for well-quasi-orderings that lies outside logic and complexity theory. For the lower bound we use combinatorial topology techniques.
By
Delporte-Gallet, Carole; Fauconnier, Hugues; Rajsbaum, Sergio; Yanagisawa, Nayuta
One of the central questions in distributed computability is characterizing the tasks that are solvable in a given system model. In the anonymous case, where processes have no identifiers and communicate through multi-writer/multi-reader registers, there is a recent topological characterization (Yanagisawa 2017) of the colorless tasks that are solvable when any number of asynchronous processes may crash. In this paper, we consider the case where at most t asynchronous processes may crash, with $$1\le t<n$$. We prove that a colorless task is t-resilient solvable anonymously if and only if it is t-resilient solvable non-anonymously. We obtain our results through various reductions and simulations that explore how to extend techniques for non-anonymous computation to anonymous computation.
By
Hernández Gómez, Raquel; Coello Coello, Carlos A.; Alba, Enrique
2 Citations
In the last decade, there has been a growing interest in multi-objective evolutionary algorithms that use performance indicators to guide the search. A simple and effective one is the $$\mathcal {S}$$-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), which is based on the hypervolume indicator. Even though the maximization of the hypervolume is equivalent to achieving Pareto optimality, its computational cost increases exponentially with the number of objectives, which severely limits its applicability to many-objective optimization problems. In this paper, we present a parallel version of SMS-EMOA, where the execution time is reduced through an asynchronous island model with micro-populations, and diversity is preserved by external archives that are pruned to a fixed size using a recently created technique based on the parallel-coordinates graph. The proposed approach, called $$\mathcal {S}$$-PAMICRO (PArallel MICRo Optimizer based on the $$\mathcal {S}$$ metric), is compared to the original SMS-EMOA and another state-of-the-art algorithm (HypE) on the WFG test problems using up to 10 objectives. Our experimental results show that $$\mathcal {S}$$-PAMICRO is a promising alternative that can solve many-objective optimization problems at an affordable computational cost.
By
Santiago-Ramirez, Everardo; Gonzalez-Fraga, J. A.; Ascencio-Lopez, J. I.
2 Citations
In this paper, we compare the performance of three composite correlation filters on the facial recognition problem. We used the ORL (Olivetti Research Laboratory) facial image database to evaluate the performance of K-Law, MACE and ASEF filters. Simulation results demonstrate that K-Law nonlinear composite filters show the best performance in terms of recognition rate (RR) and false acceptance rate (FAR). As a result, we observe that correlation filters are able to work well even when the facial image contains distortions such as rotation, partial occlusion and different illumination conditions.
By
Korjik, Valeri; Morales-Luna, Guillermo
1 Citation
We consider a scenario where information hiding (IH) is performed through noisy channels. Different situations may arise, but one of the most common is the case where the legal IH channel is superior to the attacker's IH channel. If a special randomized encoding is used by the legal users, then it is possible to hide information in the noisy components of the cover message. At the same time, the randomized encoding prevents the secret message from being removed from the stego-message without significant distortion of the cover message. If a legal decoder of the IH knows the cover message, the randomized encoding procedure does not prevent error correction of the secret message at the receiving side. A special problem of IH, namely how to distinguish a binary periodic repetitive sequence from truly random binary noise received over noisy channels, is presented. Application of the randomized encoding technique makes a solution to this problem more difficult and hence complicates traffic analysis. We also consider how it is possible to “camouflage” IH by natural channel noise, independently of the properties of the cover-message probability space, and the application of WM in combination with randomized encoding dedicated to the utilization of noisy channels. If an attacker tries to remove the WM by adding binary noise, an expansion of errors in the cover message results.
By
García-Valdez, Mario; Guervós, Juan Julián Merelo; Fernández de Vega, Francisco
In this paper, the effect of node unavailability on algorithms using EvoSpace, a pool-based evolutionary algorithm framework, is assessed. EvoSpace is a framework for developing evolutionary algorithms (EAs) using heterogeneous and unreliable resources, based on Linda’s tuple-space coordination model. Its core elements are a central repository for the evolving population and remote clients, here called EvoWorkers, which pull random samples of the population and perform on them the basic evolutionary processes (selection, variation and survival); once the work is done, the modified sample is pushed back to the central population. To address the problem of unreliable EvoWorkers, EvoSpace uses a simple reinsertion algorithm that uses copies of samples stored in a global queue, which also prevents starvation of the population pool. Using a benchmark problem from the P-Peaks problem generator, we compared two approaches: (i) the reinsertion of previous individuals, at the cost of keeping copies of each sample, and (ii) the common approach of other pool-based EAs, inserting randomly generated individuals. We found that EvoSpace is fault tolerant to highly unreliable resources, and that the reinsertion algorithm is only needed when the population is near the point of starvation.
By
Amaya, Ivan; Ortiz-Bayliss, José Carlos; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo; Coello Coello, Carlos A.
Solvers for different combinatorial optimization problems have evolved throughout the years. These range from simple strategies, such as basic heuristics, to advanced models, such as metaheuristics and hyper-heuristics. Even so, the set of benchmark instances has remained almost unaltered, so any analysis of solvers has been limited to assessing their performance under those scenarios. Even if this has been fruitful, we deem it necessary to provide a tool that allows for a better study of each available solver. Because of that, in this paper we present a tool for assessing the strengths and weaknesses of different solvers by tailoring a set of instances for each of them. We propose an evolutionary-based model and test our idea on four basic heuristics for the 1D bin packing problem. This, however, does not limit the scope of our proposal, since it can be used in other domains and for other solvers with few changes. By pursuing an in-depth study of such tailored instances, more relevant knowledge about each solver can be derived.
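The abstract does not name the four heuristics, but First-Fit Decreasing is a typical example of the kind of basic 1D bin packing heuristic such an instance generator could target: an evolved instance set would then reward or punish exactly this placement rule.

```python
def first_fit_decreasing(items, capacity):
    """First-Fit Decreasing for 1-D bin packing: sort items largest
    first, place each into the first bin with enough remaining room,
    and open a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:               # no existing bin fits: open a new one
            bins.append([item])
    return bins
```

Tailored instances for this heuristic would be those where the greedy largest-first placement locks in a provably suboptimal number of bins.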
By
Acevedo, Elena; Acevedo, Antonio; Felipe, Federico
2 Citations
A method for diagnosing Parkinson’s disease is presented. The proposal is based on an associative approach, and we use this method for classifying patients with Parkinson’s disease and those who are completely healthy. In particular, the Alpha-Beta Bidirectional Associative Memory is used together with the modified Johnson-Möbius codification in order to deal with mixed noise. We use three methods for testing the performance of our method: Leave-One-Out, Hold-Out and K-fold Cross Validation; the average accuracy obtained was 97.17%.
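The K-fold protocol mentioned above can be sketched generically: each fold is held out once for testing while the classifier trains on the rest, and the fold accuracies are averaged (with k equal to the dataset size this becomes Leave-One-Out). The classifier is passed in as an arbitrary callable; nothing here is specific to the paper's associative memory.

```python
def k_fold_accuracy(data, labels, train_and_score, k=5):
    """Average accuracy over k folds. train_and_score(tr_x, tr_y,
    te_x, te_y) must train on the first pair and return the accuracy
    on the second. Folds are taken by striding, a simple deterministic
    split (a shuffled split is more common in practice)."""
    n = len(data)
    folds = [list(range(i, n, k)) for i in range(k)]
    accs = []
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [i for i in range(n) if i not in held_out]
        accs.append(train_and_score(
            [data[i] for i in train_idx], [labels[i] for i in train_idx],
            [data[i] for i in test_idx], [labels[i] for i in test_idx]))
    return sum(accs) / len(accs)
```

Reporting the mean over several such protocols, as the abstract does, guards against one lucky or unlucky split dominating the result.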
By
Olague, Gustavo; Romero, Eva; Trujillo, Leonardo; Bhanu, Bir
2 Citations
This paper presents a linear genetic programming approach that simultaneously solves the region selection and feature extraction tasks applicable to common image recognition problems. The method searches for optimal regions of interest, using texture information as its feature space and classification accuracy as the fitness function. Texture is analyzed based on the gray-level co-occurrence matrix, and classification is carried out with an SVM committee. Results show effective performance compared with previous results on a standard image database.
By
Gafni, Eli; Rajsbaum, Sergio
3 Citations
We propose the musical benches problem to model a wait-free coordination difficulty that is orthogonal to previously studied ones such as agreement or symmetry breaking (leader election or renaming). A bench is the usual binary consensus problem for 2 processes. Assume n+1 processes want to sit on n benches as follows. Each one starts with a preference, consisting of a bench and one place (left or right) on the bench where it wants to sit. Each process should produce as output the place of the bench where it decides to sit. It is required that no two processes sit in different places of the same bench. Upon observing a conflict on one of the benches, an undecided process can “abandon” its initial bench and place and try to sit on another bench at another place.
The musical benches problem is so called because processes jump from bench to bench trying to find one where they may be alone or not in conflict with one another. If at most one process starts on each bench, the problem is trivially solvable: each process stays in its place. We show that if there is just one bench where two processes, rather than one, start, the problem is wait-free unsolvable in read/write shared memory. This impossibility establishes a new connection between distributed computing and topology, via the Borsuk-Ulam theorem.
The musical benches problem seems like just a collection of consensus problems, where by the pigeonhole principle at least one of them will have to be solved by two processes. Consequently, one is tempted to try to find a bivalency impossibility proof in the FLP style. Our second result shows that there is no such proof: we present an algorithm to solve the musical benches problem using set agreement, a primitive stronger than read/write registers but weaker than consensus. Thus, an FLP-style impossibility for musical benches would imply an FLP-style impossibility of set consensus.
The musical benches problem can be generalized by considering benches other than consensus, such as set agreement or renaming, leading to a very interesting class of new problems.
By
Delporte-Gallet, Carole; Fauconnier, Hugues; Rajsbaum, Sergio; Yanagisawa, Nayuta
We consider a system of n anonymous processes communicating through multi-writer/multi-reader (MWMR) registers. A weak-set object is a particularly interesting communication abstraction for anonymous processes; it may be seen as the equivalent of an atomic snapshot object in an anonymous system. It can be accessed through two operations: $$\textsc {add}()$$ and $$\textsc {get}()$$. Intuitively, an $$\textsc {add}(v)$$ operation puts value v in the set represented by the object, while a $$\textsc {get}()$$ operation returns the contents of the set. The paper describes a wait-free atomic implementation of a weak-set object shared by n anonymous processes using 3n MWMR registers. The description of the algorithm is incremental. The paper first presents an implementation that is wait-free only for the $$\textsc {get}()$$ operations, using 2n MWMR registers. Then it describes an implementation that is wait-free for the $$\textsc {get}()$$ and the $$\textsc {add}()$$ operations, using $$3n+1$$ MWMR registers, and finally this is improved to an implementation using 3n MWMR registers. In addition, a lower bound of n registers for the implementation of a wait-free atomic weak-set is proved.
By
Cruz-Barbosa, Raúl; Bautista-Villavicencio, David; Vellido, Alfredo
The diagnostic classification of human brain tumours on the basis of magnetic resonance spectra is a non-trivial problem in which dimensionality reduction is almost mandatory. This may take the form of feature selection or feature extraction. In feature extraction using manifold learning models, multivariate data are described through a low-dimensional manifold embedded in data space. Similarities between points along this manifold are best expressed as geodesic distances or their approximations. These approximations can be computationally intensive, and several alternative software implementations have recently been compared in terms of computation times. This brief paper extends that research by investigating the comparative ability of dimensionality-reduced data descriptions to accurately classify several types of human brain tumours. The results suggest that the way in which the underlying data manifold is constructed in nonlinear dimensionality reduction methods strongly influences the classification results.
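A standard way to approximate geodesic distances, in the spirit of Isomap, is to connect each point to its k nearest Euclidean neighbours and then take shortest paths through that graph. The choice of k is exactly the kind of manifold-construction decision the abstract says can swing the classification results; the all-pairs Floyd-Warshall below is the simplest (O(n^3)) shortest-path choice, not necessarily the one the compared implementations use.

```python
import math

def geodesic_distances(points, k=2):
    """Isomap-style geodesic approximation: build a symmetric k-nearest-
    neighbour graph weighted by Euclidean distance, then return all-pairs
    shortest-path distances (Floyd-Warshall) as the manifold distances."""
    n = len(points)
    INF = float("inf")

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    D = [[INF] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = 0.0
        # skip index 0 of the sort: it is the point itself (distance 0)
        nbrs = sorted(range(n), key=lambda j: dist(points[i], points[j]))[1:k + 1]
        for j in nbrs:
            d = dist(points[i], points[j])
            D[i][j] = min(D[i][j], d)
            D[j][i] = min(D[j][i], d)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][m] + D[m][j] < D[i][j]:
                    D[i][j] = D[i][m] + D[m][j]
    return D
```

On points sampled along a curved manifold, these path lengths approximate distance along the manifold rather than the straight-line chord through the ambient space.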
By
Pérez-Alfonso, Damián; Fundora-Ramírez, Osiel; Lazo-Cortés, Manuel S.; Roche-Escobar, Raciel
Process mining is concerned with the extraction of knowledge about business processes from information system logs. Process discovery algorithms are process mining techniques focused on discovering process models starting from event logs. The applicability and effectiveness of process discovery algorithms rely on features of event logs and process characteristics. Selecting a suitable algorithm for an event log is a tough task due to the variety of variables involved in this process. The traditional approaches use empirical assessment in order to recommend a suitable discovery algorithm. This is a time consuming and computationally expensive approach. The present paper evaluates the usefulness of an approach based on classification to recommend discovery algorithms. A knowledge base was constructed, based on features of event logs and process characteristics, in order to train the classifiers. Experimental results obtained with the classifiers evidence the usefulness of the proposal for recommendation of discovery algorithms.
By
Cristóbal-Salas, Alfredo; Chernykh, Andrey; Rodríguez-Alcantar, Edelmira; Gaudiot, Jean-Luc
The message-passing paradigm is now widely accepted and used mainly for inter-process communication in distributed-memory parallel systems. However, one of its disadvantages is the high cost associated with data exchange. Therefore, in this paper, we describe a message-passing optimization technique based on the exploitation of single-assignment and constant-information properties to reduce the number of communications. Similar to the more general partial evaluation approach, the technique evaluates local and remote memory operations when only part of the input is known or available; it further specializes the program with respect to the input data. It is applied to programs that use a distributed single-assignment memory system. Experimental results show a considerable speedup for programs running on computer systems with slow interconnection networks. We also show that single-assignment memory systems can have better network latency tolerance and that the overhead introduced by their management can be hidden.
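One way the single-assignment property can cut communications: a single-assignment location is written exactly once, so after a remote value has been fetched its local copy can never go stale, and later reads need no further messages. A minimal sketch of that idea (the `remote_read` callback and the addressing scheme are hypothetical, not the paper's system):

```python
class SingleAssignmentCache:
    """Cache remote reads of single-assignment memory: since each location is
    written exactly once, a fetched value can be reused without re-communication."""

    def __init__(self, remote_read):
        self._remote_read = remote_read   # the (expensive) communication primitive
        self._cache = {}
        self.messages = 0                 # count of actual remote reads performed

    def read(self, addr):
        if addr not in self._cache:       # first access: one message on the network
            self._cache[addr] = self._remote_read(addr)
            self.messages += 1
        return self._cache[addr]          # later accesses: served locally, no message
```

Repeated reads of the same address then cost one message in total, which is the kind of reduction in communication count the abstract describes; the paper's actual technique additionally specializes the program at partial-evaluation time rather than only caching at run time.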
By
Arias Montaño, Alfredo; Coello Coello, Carlos A.; Mezura-Montes, Efrén
This paper introduces a novel Parallel Multi-Objective Evolutionary Algorithm (pMOEA) based on the island model. The serial algorithm on which this approach is based uses the differential evolution operators as its search engine, and includes two mechanisms for improving its convergence properties (local dominance and environmental selection based on scalar functions). Two different parallel approaches are presented. The first aims at improving effectiveness (i.e., better approximating the Pareto front), while the second aims to provide better efficiency (i.e., reducing the execution time through the use of small population sizes in each subpopulation). To assess the performance of the proposed algorithms, we adopt a set of standard test functions and performance measures taken from the specialized literature. Results are compared with respect to the serial counterpart and with respect to three algorithms representative of the state of the art in the area: NSGA-II, MOEA/D and MOEA/D-DE.
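The differential evolution search engine mentioned in the abstract is classically the DE/rand/1/bin operator: mutate with a scaled difference of two random population members added to a third, then recombine binomially with the parent. A sketch of that standard operator (parameter values and population are illustrative; this is the textbook operator, not necessarily the paper's exact variant):

```python
import random

def de_trial_vector(population, i, f=0.5, cr=0.9, rng=None):
    """DE/rand/1/bin: build a trial vector for individual i from three distinct
    random members r1, r2, r3 of the population (all different from i)."""
    rng = rng or random.Random()
    dim = len(population[i])
    r1, r2, r3 = rng.sample([j for j in range(len(population)) if j != i], 3)
    jrand = rng.randrange(dim)        # guarantees at least one mutated component
    trial = []
    for j in range(dim):
        if rng.random() < cr or j == jrand:
            # mutant component: base vector plus scaled difference vector
            trial.append(population[r1][j] + f * (population[r2][j] - population[r3][j]))
        else:
            trial.append(population[i][j])  # inherited from the parent
    return trial
```

In an island-model pMOEA such as the one described, each island would run this operator on its own subpopulation and periodically migrate individuals between islands; the migration policy is where the paper's two parallel approaches would differ.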
