Showing 1 to 100 of 5369 matching Articles
By
Muñoz, Mirna; Negrón, Adriana Peña Pérez; Mejia, Jezreel; Lopez, Graciela Lara
In Mexico, small and medium-sized enterprises (SMEs) are key to the software development industry, so having highly qualified personnel for the development of high-quality software products is fundamental to guaranteeing their permanence in the market. The mismatch between software industry requirements and academic training is therefore a significant problem that must be reduced for the benefit of both sectors. In this context, understanding how well the academic curricula of higher education programs cover software industry requirements is a fundamental step toward reducing the current gap between industry and academia. This paper presents an analysis of the coverage between MoProSoft, a standard used by the Mexican software industry to ensure quality in software engineering practices, and four higher education academic curricula related to Computer Science and Informatics.
By
Monroy, Raúl; Bundy, Alan; Green, Ian
3 Citations
Most efforts to automate formal verification of communicating systems have centred around finite-state systems (FSSs). However, FSSs are incapable of modelling many practical communicating systems, including a novel class of problems, which we call VIPSs. VIPSs are value-passing, infinite-state, parameterised systems. Existing approaches using model checking over FSSs are insufficient for VIPSs. This is due to their inability both to reason with and about domain-specific theories, and to cope with systems having an unbounded or arbitrary state space.
We use the Calculus of Communicating Systems (CCS) (Communication and Concurrency. London: Prentice Hall, 1989) to express and specify VIPSs. We take program verification to be proving the program and its intended specification equivalent. We use the laws of CCS to conduct the verification task. This approach allows us to study communicating systems and the data such systems communicate. Automating theorem proving in this context is an extremely difficult task.
We provide automated methods for CCS analysis; they are applicable to both FSSs and VIPSs. Adding these methods to the CLAM proof planner (Lecture Notes in Artificial Intelligence, Vol. 449, Springer, 1990, pp. 647, 648), we have implemented an automated verification planner capable of dealing with problems that previously required human interaction. This paper describes these methods, gives an account as to why they work, and provides a short summary of experimental results.
By
Imbs, Damien; Rajsbaum, Sergio; Valle, Adrián
The basic read/write shared memory model where asynchronous and crash prone processes communicate to solve a task is difficult to analyze. A more structured model is the iterated immediate snapshot model (IIS), where processes execute communication closed rounds. In each round, they communicate using read/write registers that cannot be reused in later rounds. It is known that a task is solvable in the IIS model if and only if it is solvable in the basic read/write model. Both models are also equivalent when, in addition to read/write registers, processes also have access to stronger communication objects called 0-1 tasks.
This paper extends the task computability equivalence further, presenting a simulation that includes x-consensus objects, which solve consensus among up to x processes. The simulation implies that an iterated model where processes communicate through a sequence consisting only of x-consensus objects is equivalent to the basic shared memory model augmented with x-consensus objects.
By
Aranha, Diego F.; Fuentes-Castañeda, Laura; Knapp, Edward; Menezes, Alfred; Rodríguez-Henríquez, Francisco
19 Citations
We implement asymmetric pairings derived from Kachisa-Schaefer-Scott (KSS), Barreto-Naehrig (BN), and Barreto-Lynn-Scott (BLS) elliptic curves at the 192-bit security level. Somewhat surprisingly, we find pairings derived from BLS curves with embedding degree 12 to be the fastest for our serial as well as our parallel implementations. Our serial implementations provide a factor-3 speedup over the previous state-of-the-art, demonstrating that pairing computation at the 192-bit security level is not as expensive as previously thought. We also present a general framework for deriving a Weil-type pairing that is well-suited for computing a single pairing on a multiprocessor machine.
By
Chakraborty, Debrup; Sarkar, Palash
4 Citations
This work deals with the various requirements of encryption and authentication in cryptographic applications. The approach is to construct suitable modes of operation of a block cipher to achieve the relevant goals. A variety of schemes suitable for specific applications are presented. While none of the schemes is built completely from scratch, there is a common unifying framework which connects them. All the schemes described have been implemented, and the implementation details are publicly available. Performance figures are presented for the case where the block cipher is the AES and the Intel AES-NI instructions are used. These figures suggest that the constructions presented here compare well with previous works such as the well-known OCB mode of operation. In terms of features, the constructions provide several new offerings which are not present in earlier works. This work significantly widens the range of choices available to designers of cryptographic systems.
By
Aranha, Diego F.; Knapp, Edward; Menezes, Alfred; Rodríguez-Henríquez, Francisco
4 Citations
In the past year, the speed record for pairing implementations on desktop-class machines has been broken several times. The speed records for asymmetric pairings were set on a single processor. In this paper, we describe our parallel implementation of the optimal ate pairing over Barreto-Naehrig (BN) curves that is about 1.23 times faster using two cores of an Intel Core i5 or Core i7 machine, and 1.45 times faster using 4 cores of the Core i7, than the state-of-the-art implementation on a single core. We instantiate Hess's general Weil pairing construction and introduce a new optimal Weil pairing tailored for parallel execution. Our experimental results suggest that the new Weil pairing is 1.25 times faster than the optimal ate pairing on 8-core extensions of the aforementioned machines. Finally, we combine previous techniques for parallelizing the eta pairing on a supersingular elliptic curve with embedding degree 4, and achieve an estimated 1.24-fold speedup on an 8-core extension of an Intel Core i7 over the previous best technique.
By
Faz-Hernández, Armando; Longa, Patrick; Sánchez, Ana H.
23 Citations
We propose efficient algorithms and formulas that improve the performance of side-channel protected scalar multiplication exploiting the Gallant-Lambert-Vanstone (CRYPTO 2001) and Galbraith-Lin-Scott (EUROCRYPT 2009) methods. Firstly, by adapting Feng et al.'s recoding to the GLV setting, we derive new regular algorithms for variable-base scalar multiplication that offer protection against simple side-channel and timing attacks. Secondly, we propose an efficient technique that interleaves ARM-based and NEON-based multiprecision operations over an extension field, as typically found on GLS curves and pairing computations, to improve performance on modern ARM processors. Finally, we showcase the efficiency of the proposed techniques by implementing a state-of-the-art GLV-GLS curve in twisted Edwards form defined over $\mathbb{F}_{p^2}$, which supports a four-dimensional decomposition of the scalar and runs in constant time, i.e., it is fully protected against timing attacks. For instance, using a precomputed table of only 512 bytes, we compute a variable-base scalar multiplication in 92,000 cycles on an Intel Ivy Bridge processor and in 244,000 cycles on an ARM Cortex-A15 processor. Our benchmark results and the proposed techniques contribute to the improvement of the state-of-the-art performance of elliptic curve computations. Most notably, our techniques allow us to reduce the cost of adding protection against timing attacks in the GLV-based variable-base scalar multiplication computation to below 10%.
By
Adj, Gora; Menezes, Alfred; Oliveira, Thomaz; Rodríguez-Henríquez, Francisco
4 Citations
We show that a Magma implementation of Joux's $$L[1/4+o(1)]$$ algorithm can be used to compute discrete logarithms in the 1303-bit finite field $${\mathbb F}_{3^{6 \cdot 137}}$$ and the 1551-bit finite field $${\mathbb F}_{3^{6 \cdot 163}}$$ with very modest computational resources. Our $${\mathbb F}_{3^{6 \cdot 137}}$$ implementation was the first to illustrate the effectiveness of Joux's algorithm for computing discrete logarithms in small-characteristic finite fields that are not Kummer or twisted-Kummer extensions.
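The field sizes quoted above can be checked with plain integer arithmetic; a quick sanity-check sketch (not part of the paper's Magma code):

```python
# The two fields attacked in the paper: F_{3^(6*137)} and F_{3^(6*163)}.
# Their sizes match the "1303-bit" and "1551-bit" figures in the abstract.
f137 = 3 ** (6 * 137)   # = 3^822
f163 = 3 ** (6 * 163)   # = 3^978
print(f137.bit_length())  # 1303
print(f163.bit_length())  # 1551
```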
By
Yakovlev, Victor; Korzhik, Valery; Bakaev, Mihail; Morales-Luna, Guillermo
1 Citation
We consider the information-theoretic secure key distribution problem (KDP) over noisy binary symmetric channels with public discussion and in the presence of an active adversary. Several versions of such protocols have been proposed by Maurer, Wolf, Renner, Dodis, Reyzin et al. We describe two new versions of the KDP for the same channel model, using extractors as a means of privacy amplification, with the goal of maximizing the key rate under an optimization of the protocol parameters. Our solution of the KDP has two novelties: we obtain the extractor's seed directly from the distributed initial strings, and we prove the main results in terms of explicit estimates, without the use of the asymptotic symbols O, Ω, Θ. Both asymptotic and non-asymptotic cases are presented. It is shown that extractors can be superior to conventional hashing for very large lengths of the initially distributed strings.
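As a rough illustration of the "conventional hashing" baseline that the extractors are compared against, privacy amplification with a 2-universal hash family can be sketched as follows; all parameter choices here are illustrative assumptions, not the protocol's actual values:

```python
import secrets

# Privacy amplification by 2-universal hashing: compress an n-bit,
# partially secret string x into a shorter ell-bit key about which an
# adversary's remaining information is negligible.  The family used
# here, h_{a,b}(x) = top ell bits of (a*x + b) mod P, is illustrative;
# the paper's point is that extractors can beat such hashing for very
# long input strings.
P = (1 << 127) - 1            # Mersenne prime, larger than the input domain

def hash_family(a, b, x, ell):
    return ((a * x + b) % P) >> (127 - ell)   # keep the top ell bits

n, ell = 120, 32              # illustrative lengths only
a = secrets.randbelow(P - 1) + 1              # public random seed
b = secrets.randbelow(P)
x = secrets.randbits(n)       # the reconciled, partially secret string
key = hash_family(a, b, x, ell)
print(key < (1 << ell))       # True: the key fits in ell bits
```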
By
Chakraborty, Debrup; Nandi, Mridul
10 Citations
HCTR was proposed by Wang, Feng and Wu in 2005. It is a mode of operation which provides a tweakable strong pseudorandom permutation. Though HCTR is quite an efficient mode, the authors showed a cubic security bound for HCTR, which makes it unsuitable for applications where tweakable strong pseudorandom permutations are required. In this paper we show that HCTR has a better security bound than what the authors showed. We prove that the distinguishing advantage of an adversary in distinguishing HCTR and its inverse from a random permutation and its inverse is bounded above by 4.5 σ^{2}/2^{n}, where n is the block length of the block cipher and σ is the number of n-block queries made by the adversary (including the tweak).
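To see what the quadratic bound buys concretely, here is a back-of-the-envelope comparison for a 128-bit block cipher; the σ value is an illustrative choice, not from the paper:

```python
# Adversarial advantage under the new quadratic bound 4.5*sigma^2 / 2^n
# versus a cubic bound of roughly sigma^3 / 2^n, for n = 128 and
# sigma = 2^32 queried blocks (sigma chosen only for illustration).
n = 128
sigma = 2 ** 32

quadratic = 4.5 * sigma ** 2 / 2 ** n   # shape of the bound proved here
cubic = sigma ** 3 / 2 ** n             # shape of the original bound
print(quadratic)   # ~2.4e-19: negligible
print(cubic)       # ~2.3e-10: many orders of magnitude weaker
```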
By
Beuchat, Jean-Luc; Brisebarre, Nicolas; Detrey, Jérémie; Okamoto, Eiji; Rodríguez-Henríquez, Francisco
18 Citations
In this article we propose a study of the modified Tate pairing in characteristics two and three. Starting from the η_{T} pairing introduced by Barreto et al. [1], we detail various algorithmic improvements in the case of characteristic two. As far as characteristic three is concerned, we refer to the survey by Beuchat et al. [5]. We then show how to get back to the modified Tate pairing at almost no extra cost. Finally, we explore the tradeoffs involved in the hardware implementation of this pairing for both characteristics two and three. From our experiments, characteristic three appears to have a slight advantage over characteristic two.
By
Uddin, Ashraf; Singh, Vivek Kumar; Pinto, David; Olmos, Ivan
5 Citations
This paper presents a detailed scientometric and text-based analysis of Computer Science (CS) research output from Mexico during 1989–2014, indexed in Web of Science. The analytical characterization focuses on the origins and growth patterns of CS research in Mexico. In addition to computing the standard scientometric indicators of TP, TC, ACPP, HiCP, H-index, ICP patterns etc., the major publication sources selected by Mexican computer scientists and the major funding agencies for CS research are also identified. The text-based analysis, on the other hand, focuses on identifying the major research themes pursued by Mexican computer scientists and their trends. Mexico, ranking 35th in world CS research output during the mentioned period, is also unique in the sense that 75% of its total CS publications are produced by the top ten Mexican institutions alone. Similarly, Mexico has more ICP instances than the world average. The analysis presents a detailed characterization of these aspects.
By
Fuentes-Castañeda, Laura; Knapp, Edward; Rodríguez-Henríquez, Francisco
12 Citations
An asymmetric pairing $e\colon{\mathbb{G}}_2\times{\mathbb{G}}_1\to{\mathbb{G}}_T$ is considered such that ${\mathbb{G}}_1=E({\mathbb F}_p)[r]$ and ${\mathbb{G}}_2=\tilde E({\mathbb F}_{p^{k/d}})[r]$, where k is the embedding degree of the elliptic curve $E/{\mathbb F}_p$, r is a large prime divisor of $\# E({\mathbb F}_p)$, and $\tilde E$ is the degree-d twist of E over ${\mathbb F}_{p^{k/d}}$ with $r \mid \#\tilde E ({\mathbb F}_{p^{k/d}} )$. Hashing to ${\mathbb{G}}_1$ is considered easy, while hashing to ${\mathbb{G}}_2$ is done by selecting a random point Q in $\tilde E({\mathbb F}_{p^{k/d}})$ and computing the hash value cQ, where c·r is the order of $\tilde E({\mathbb F}_{p^{k/d}})$. We show that for a large class of curves, one can hash to ${\mathbb{G}}_2$ in $\textup{O}(1/\varphi (k)\log c)$ time, as compared with the previously fastest-known $\textup{O}(\log p)$. In the case of BN curves, we are able to double the speed of hashing to ${\mathbb{G}}_2$. For higher-embedding-degree curves, the results can be more dramatic. We also show how to reduce the cost of the final-exponentiation step in a pairing calculation by a fixed number of field multiplications.
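The cofactor-multiplication step — pick a random Q and output cQ, which lands in the order-r subgroup — can be illustrated in any finite abelian group. A toy sketch using the multiplicative group modulo 23 (order 22 = c·r with c = 2, r = 11) in place of the curve group:

```python
# Hashing into the order-r subgroup by cofactor multiplication.
# The group (Z/23Z)* has order 22 = c*r with cofactor c = 2 and
# prime subgroup order r = 11; "cQ" becomes q^c multiplicatively.
p, c, r = 23, 2, 11
for q in range(1, p):            # every possible "random point" Q
    h = pow(q, c, p)             # the hash value c*Q
    assert pow(h, r, p) == 1     # h always has order dividing r
print("all", p - 1, "candidates land in the order-r subgroup")
```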
By
Coello Coello, Carlos A.
This chapter provides a short overview of multi-objective optimization using metaheuristics. The chapter includes a description of some of the main metaheuristics that have been used for multi-objective optimization. Although special emphasis is placed on evolutionary algorithms, other metaheuristics, such as particle swarm optimization, artificial immune systems, and ant colony optimization, are also briefly discussed. Other topics such as applications and recent algorithmic trends are also included. Finally, some of the main research trends that are worth exploring in this area are briefly discussed.
By
Adj, Gora; Menezes, Alfred; Oliveira, Thomaz; Rodríguez-Henríquez, Francisco
10 Citations
In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field ${\mathbb{F}}_{3^{6 \cdot 509}} = {\mathbb{F}}_{3^{3054}}$ weak for discrete logarithm cryptography, in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over ${\mathbb{F}}_{3^{509}}$ with embedding degree 6, which had been considered for implementing pairing-based cryptosystems at the 128-bit security level, in fact provides only a significantly lower level of security. Our work provides a convenient framework and tools for performing a concrete analysis of the new discrete logarithm algorithms and their variants.
By
Taverne, Jonathan; Faz-Hernández, Armando; Aranha, Diego F.; Rodríguez-Henríquez, Francisco; Hankerson, Darrel; López, Julio
10 Citations
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents an opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on the performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel-resistant scalar multiplication on a random curve at the 128-bit security level.
By
Barradas, Héctor Ruíz; Bert, Didier
We present a fixpoint semantics of event systems. The semantics is given in a general framework without fairness concerns. Soundness and completeness of rules for deriving leads-to properties are proved in this general framework. The general framework is then instantiated to minimal progress and weak fairness assumptions, and similar results are obtained. We show the power of these results by deriving sufficient conditions for leads-to under minimal progress, proving the soundness of proof obligations without reasoning over state traces.
By
Oliveira, Thomaz; López, Julio; Aranha, Diego F.; Rodríguez-Henríquez, Francisco
10 Citations
In this work we present λ-coordinates, a new system for representing points on binary elliptic curves. We also provide efficient elliptic curve operations based on the new representation and timing results of our software implementation over the field $\mathbb{F}_{2^{254}}$. As a result, we improve the speed records for protected/unprotected single/multi-core software implementations of random-point elliptic curve scalar multiplication at the 128-bit security level. When implemented on a Sandy Bridge 3.4 GHz Intel Xeon processor, our software is able to compute a single-core/multi-core unprotected scalar multiplication in 72,300 and 47,900 clock cycles, respectively, and a protected single-core scalar multiplication in 114,800 cycles. These numbers improve by around 2% on the newer Core i7 2.8 GHz Ivy Bridge platform.
By
Singh, Vivek Kumar; Uddin, Ashraf; Pinto, David
12 Citations
This paper performs a detailed scientometric and text-based analysis of the Computer Science (CS) research output of the 100 most productive institutions in India and in the world. The analytical characterization is based on research output data indexed in Scopus during the last 25-year period (1989–2013). Our computational analysis takes a two-dimensional approach, combining standard scientometric methodology with text-based analysis. The scientometric characterization assesses CS research output in leading Indian institutions vis-à-vis the leading world institutions and brings out the similarities and differences among them. It involves analysis along traditional scientometric indicators such as total output, citation-based impact assessment, co-authorship patterns, international collaboration levels etc. The text-based characterization identifies the key research themes and their temporal trends for the two sets. The key contribution of the experimental work is an analytical characterization that identifies characteristic similarities and differences in the CS research landscape of Indian institutions vis-à-vis world institutions.
By
Castañeda, Armando; Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
2 Citations
In the wait-free shared memory model, substantial attention has been devoted to understanding the relative power of subconsensus tasks. Two important subconsensus families of tasks have been identified: k-set agreement and M-renaming. When 2 ≤ k ≤ n − 1 and n ≤ M ≤ 2n − 2, these tasks are more powerful than read/write registers, but not strong enough to solve consensus for two processes.
This paper studies the power of renaming with respect to set agreement. It shows that, in a system of n processes, n-renaming is strictly stronger than (n − 1)-set agreement, but not stronger than (n − 2)-set agreement. Furthermore, (n + 1)-renaming cannot solve even (n − 1)-set agreement. As a consequence, there are cases where set agreement and renaming are incomparable when looking at their power to implement each other.
By
Mancillas-López, Cuauhtemoc; Chakraborty, Debrup; Rodríguez-Henríquez, Francisco
Tweakable enciphering schemes are a type of block-cipher mode of operation which provide security in the sense of a strong pseudorandom permutation. It has been proposed that such modes are suitable for in-place disk encryption. Many proposals for these schemes are currently available. EME is one of the efficient candidates in this category, and EME2 is a derivative of EME which is currently among the candidates of a draft standard for wide-block modes by the IEEE working group on storage security. We show some weaknesses of these two modes, assuming that some side-channel information is available.
By
Coello Coello, Carlos A.
Evolutionary algorithms (as well as a number of other metaheuristics) have become a popular choice for solving problems having two or more (often conflicting) objectives (the so-called multi-objective optimization problems). This area, known as EMOO (Evolutionary Multi-Objective Optimization), has seen important growth in the last 20 years, and several people (particularly newcomers) get the impression that it is now very difficult to make contributions of sufficient value to justify, for example, a PhD thesis. However, a lot of interesting research is still under way. In this paper, we briefly review some of the research topics on evolutionary multi-objective optimization that are currently attracting a lot of interest (e.g., indicator-based selection, many-objective optimization and the use of surrogates) and which represent good opportunities for research. Some of the challenges currently faced by this discipline are also delineated.
By
Beuchat, Jean-Luc; González-Díaz, Jorge E.; Mitsunari, Shigeo; Okamoto, Eiji; Rodríguez-Henríquez, Francisco; Teruya, Tadanori
44 Citations
This paper describes the design of a fast software library for the computation of the optimal ate pairing on a Barreto–Naehrig elliptic curve. Our library is able to compute the optimal ate pairing over a 254-bit prime field $\mathbb{F}_{p}$ in just 2.33 million clock cycles on a single core of an Intel Core i7 2.8 GHz processor, which implies that the pairing computation takes 0.832 ms. We are able to achieve this performance by a careful implementation of the base field arithmetic through the usage of the customary Montgomery multiplier for prime fields. The prime field is constructed via the Barreto–Naehrig polynomial parametrization of the prime p, given as p = 36t^{4} + 36t^{3} + 24t^{2} + 6t + 1, with t = 2^{62} − 2^{54} + 2^{44}. This selection of t allows us to obtain important savings for both the Miller loop and the final exponentiation steps of the optimal ate pairing.
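The quoted parametrization is easy to verify directly; a small sketch reproducing the field size from t (plain integer arithmetic, not the library itself):

```python
# Barreto-Naehrig parametrization with the t chosen in the paper.
t = 2**62 - 2**54 + 2**44
p = 36*t**4 + 36*t**3 + 24*t**2 + 6*t + 1   # the BN prime
r = p - 6*t**2                              # group order: p + 1 - (6t^2 + 1)
print(p.bit_length())  # 254, matching the "254-bit prime field"
print(r.bit_length())  # also 254: r differs from p only in the low bits
```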
By
Aranha, Diego F.; Faz-Hernández, Armando; López, Julio; Rodríguez-Henríquez, Francisco
11 Citations
We design a state-of-the-art software implementation of field and elliptic curve arithmetic in standard Koblitz curves at the 128-bit security level. Field arithmetic is carefully crafted by using the best formulae and implementation strategies available, together with the increasingly common native support for binary field arithmetic in modern desktop computing platforms. The i-th power of the Frobenius automorphism on Koblitz curves is exploited to obtain new and faster interleaved versions of the well-known τ-NAF scalar multiplication algorithm. The $\tau^{\lfloor m/3 \rfloor}$ and $\tau^{\lfloor m/4 \rfloor}$ maps are employed to create analogues of the 3- and 4-dimensional GLV decompositions and, in general, the $\lfloor m/s \rfloor$-th power of the Frobenius automorphism is applied as an analogue of an s-dimensional GLV decomposition. The effectiveness of these techniques is illustrated by timing the scalar multiplication operation for fixed, random and multiple points. In particular, our library is able to compute a random-point scalar multiplication in just below 10^{5} clock cycles, which sets a new speed record across all curves with or without endomorphisms defined over binary or prime fields. The results of our optimized implementation suggest a trade-off between speed, compliance with the published standards and side-channel protection. Finally, we estimate the performance of curve-based cryptographic protocols instantiated using the proposed techniques and compare our results to related work.
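The τ-NAF recoding that the interleaved versions build on can be sketched as follows. This is Solinas's recoding specialized to a Koblitz curve with μ = 1; the reconstruction check represents τ-powers in the basis (1, τ) using τ² = μτ − 2. A minimal sketch, not the paper's optimized code:

```python
# Solinas tau-adic NAF recoding for a Koblitz curve (mu = 1 case).
# An integer k is rewritten as sum(u_i * tau^i) with digits u_i in
# {-1, 0, 1}, so the scalar multiplication loop can use cheap
# Frobenius applications instead of point doublings.
MU = 1

def tnaf(k):
    r0, r1, digits = k, 0, []
    while r0 != 0 or r1 != 0:
        if r0 % 2:                        # odd remainder: emit +/-1
            u = 2 - ((r0 - 2 * r1) % 4)
            r0 -= u
        else:
            u = 0
        digits.append(u)
        # divide (r0 + r1*tau) by tau, using tau*conj(tau) = 2
        r0, r1 = r1 + MU * (r0 // 2), -(r0 // 2)
    return digits

def evaluate(digits):
    # Reconstruct sum(u_i * tau^i) as a + b*tau, with tau^2 = MU*tau - 2.
    a = b = 0
    pa, pb = 1, 0                         # tau^0 = 1 + 0*tau
    for u in digits:
        a, b = a + u * pa, b + u * pb
        pa, pb = -2 * pb, pa + MU * pb    # multiply (pa, pb) by tau
    return a, b

assert evaluate(tnaf(2016)) == (2016, 0)  # round-trips back to k
```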
By
Beuchat, Jean-Luc; Detrey, Jérémie; Estibals, Nicolas; Okamoto, Eiji; Rodríguez-Henríquez, Francisco
5 Citations
This paper is devoted to the design of fast parallel accelerators for the cryptographic Tate pairing in characteristic three over supersingular elliptic curves. We propose a novel hardware implementation of Miller's loop based on a pipelined Karatsuba-Ofman multiplier. Thanks to a careful selection of algorithms for computing the tower field arithmetic associated with the Tate pairing, we manage to keep the pipeline busy. We also describe the strategies we considered to design our parallel multiplier. They are included in a VHDL code generator allowing for the exploration of a wide range of operators. We then outline the architecture of a coprocessor for the Tate pairing over $\mathbb{F}_{3^m}$. However, a final exponentiation is still needed to obtain a unique value, which is desirable in most cryptographic protocols. We supplement our pairing accelerator with a coprocessor responsible for this task. An improved exponentiation algorithm allows us to save hardware resources.
According to our place-and-route results on Xilinx FPGAs, our design improves both the computation time and the area-time trade-off compared to previously published coprocessors.
By
Rajsbaum, Sergio; Raynal, Michel
Due to the advent of multicore machines, shared memory distributed computing models taking into account asynchrony and process crashes are becoming more and more important. This paper visits models for these systems and analyses their properties from a computability point of view. Among them, the base snapshot model and the iterated model are particularly investigated. The paper also visits several approaches that have been proposed to model failures (mainly the wait-free model and the adversary model) and takes a look at the BG simulation. The aim of this survey is to help the reader better understand the power and limits of shared memory models of distributed computing.
By
Martínez-Herrera, Alberto F.; Mex-Perera, J. Carlos; Nolazco-Flores, Juan A.
The Substitution Box (S-box) is usually the most complex module in some block ciphers. Some prominent ciphers such as AES and Camellia use S-boxes which are affine equivalents of a multiplicative inverse in small finite fields. This manuscript describes mathematical representations of the Camellia S-box using composite fields with polynomial, normal or mixed bases. An optimized hardware implementation typically aims to reduce the number of gates to be used. Our theoretical design with composite normal bases allows saving gates in the critical path by using 19 XOR gates, 4 AND gates and 2 NOT gates. With composite mixed bases, the critical path has 2 XOR gates more than the representation with composite normal bases. Redundancies found in the affine transformation matrices that form the composite fields were eliminated. For mixed bases, new Algebraic Normal Form identities were obtained to compute the inner composite multiplicative inverse, reducing the critical path of the complete implementation of the Camellia S-box. These constructions were translated into transistor-gate architectures for hardware representations by using Electric VLSI [29] under the MOSIS C5 process [17], [18], thus obtaining the corresponding schematic models.
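The primitive at the heart of such S-boxes — multiplicative inversion in GF(2^8) — can be sketched without composite fields at all. The version below uses the AES reduction polynomial purely as an illustrative stand-in; Camellia's composite-field representation is what the manuscript actually optimizes:

```python
# Multiplicative inversion in GF(2^8): carry-less multiply, reduce
# modulo an irreducible polynomial, and invert via Fermat's little
# theorem (x^(2^8 - 2) = x^-1).  POLY is the AES polynomial
# x^8 + x^4 + x^3 + x + 1, chosen only for illustration.
POLY = 0x11B

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a           # add (XOR) current multiple
        a <<= 1
        if a & 0x100:
            a ^= POLY        # reduce modulo the field polynomial
        b >>= 1
    return r

def gf_inv(x):
    # square-and-multiply for x^254; by convention gf_inv(0) == 0
    r, base, e = 1, x, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

assert gf_mul(0x53, gf_inv(0x53)) == 1
```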
By
Oliveira, Thomaz; Aranha, Diego F.; López, Julio; Rodríguez-Henríquez, Francisco
7 Citations
In this paper we introduce new methods for computing constant-time variable-base point multiplications over the Galbraith-Lin-Scott (GLS) and the Koblitz families of elliptic curves. Using a left-to-right double-and-add and a right-to-left halve-and-add Montgomery ladder over a GLS curve, we present some of the fastest timings yet reported in the literature for point multiplication. In addition, we combine these two procedures to compute a multi-core protected scalar multiplication. Furthermore, we designed a novel regular $\tau$-adic scalar expansion for Koblitz curves. As a result, using the regular recoding approach, we set the speed record for a single-core constant-time point multiplication on standardized binary elliptic curves at the 128-bit security level.
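The Montgomery ladder structure mentioned above can be sketched in any group; written multiplicatively (modular exponentiation), one "add" and one "double" are performed at every step regardless of the key bit. A toy sketch, not the paper's GLS halve-and-add ladder; real code replaces the branch with a constant-time conditional swap:

```python
# Montgomery ladder: computes g^k mod p (the multiplicative analogue
# of k*P) with the same operation sequence for every key bit, which
# is the regular structure constant-time point multiplication needs.
def ladder(g, k, p):
    r0, r1 = 1, g
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            r0, r1 = r0 * r1 % p, r1 * r1 % p
        else:
            r1, r0 = r0 * r1 % p, r0 * r0 % p
        # invariant after each step: r1 == r0 * g (mod p)
    return r0

p = 2**127 - 1
assert ladder(3, 12345, p) == pow(3, 12345, p)
```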
By
Chi, Jesús-Javier; Oliveira, Thomaz
1 Citation
In this paper we present a complete Magma implementation for solving the discrete logarithm problem (DLP) on a binary GLS curve defined over the field $\mathbb {F}_{2^{62}}$. For this purpose, we constructed a curve vulnerable to the gGHS Weil descent attack and adapted the algorithm proposed by Enge and Gaudry to solve the DLP on the Jacobian of a genus-32 hyperelliptic curve. Furthermore, we describe a mechanism to check whether a randomly selected binary GLS curve is vulnerable to the gGHS attack. This method works with all curves defined over binary fields and can be applied to each element of the isogeny class.
By
Subramanian, K. G.; Venkat, Ibrahim; Wiederhold, Petra
4 Citations
A P system model, called the contextual array P system, which makes use of array objects and contextual array rules, is introduced. Its generative power in the description of picture arrays is examined by comparing it with certain other array-generating devices.
By
Chang, Donghoon; Nandi, Mridul
22 Citations
The classical design principle MerkleDamgård [13,6] is scrutinized by many ways such as Joux’s multicollision attack, KelseySchneier second preimage attack etc. In TCC’04, Maurer et al. introduced a strong security notion called as “indifferentiability” for a hash function based on a compression function. The classical design principle is also insecure against this strong security notion whereas chopMD hash is secure with the security bound roughly σ^{2}/2^{s} where s is the number of chopped bits and σ is the total number of message blocks queried by a distinguisher. In case of n = 2s where n is the output size of a compression function, the value σ to get a significant bound is 2^{s/2} which is the birthday complexity, where the hash output size is sbit. In this paper, we present an improved security bound for chopMD. The improved bound shown in this paper is (3(n − s) + 1)q/2^{s} + q/2^{n − s − 1} + σ^{2}/2^{n + 1} where q is the total number of queries. In case of n = 2s, chopMD is indifferentiablysecure if q = O(2^{s}/(3s + 1)) and σ = O(2^{n/2}) which are beyond the birthday complexity. We also present a design principle for an nbit hash function based on a compression function
$f : \{0,1\}^{2n+b} \rightarrow \{0,1\}^n$
and show that the indifferentiability security bound for this hash function is roughly (3n + 1)σ/2^{n}. So, the new hash function design is second-preimage and r-multicollision secure as long as the query complexity (the number of message blocks queried) of an attacker is less than 2^{n}/(3n + 1) or 2^{n(r − 1)/r}, respectively.
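As a toy illustration of the chopped Merkle-Damgård idea discussed above, the following sketch iterates a compression function over message blocks and discards the final s bits of the last chaining value. The compression function is modeled with SHA-256 purely for illustration; this is an assumption of the sketch, not the paper's construction, and it is not a secure design.

```python
import hashlib

def chop_md(message: bytes, s: int = 128, block_size: int = 64) -> str:
    """Toy chop-MD: a Merkle-Damgard iteration whose final s bits are chopped.

    The 'compression function' here is SHA-256 applied to (chaining || block),
    used purely as an illustration of the iteration structure.
    """
    n = 256  # output size (bits) of the toy compression function
    assert 0 < s < n
    # Simple padding with length encoding (Merkle-Damgard strengthening).
    padded = message + b"\x80"
    while len(padded) % block_size != 0:
        padded += b"\x00"
    padded += len(message).to_bytes(block_size, "big")
    chaining = b"\x00" * (n // 8)  # fixed IV
    for i in range(0, len(padded), block_size):
        block = padded[i:i + block_size]
        chaining = hashlib.sha256(chaining + block).digest()
    # Chop the last s bits: keep only the leading (n - s) bits.
    digest_int = int.from_bytes(chaining, "big") >> s
    return format(digest_int, f"0{(n - s) // 4}x")
```

With s = 128 the output is the leading 128 bits of the final chaining value, matching the n = 2s case analyzed in the abstract.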
By
Pinto, David; Juan, Alfons; Rosso, Paolo
2 Citations
The world wide web is a natural setting for cross-lingual information retrieval. The European Union is a typical example of a multilingual scenario, where multiple users have to deal with information published in at least 20 languages. Given queries in some source language and a target corpus in another language, the typical approach consists in translating either the query or the target dataset into the other language. Other approaches use parallel corpora to obtain a statistical dictionary of words across the different languages. In this work, we propose to use a training corpus made up of a set of Query-Relevant Document Pairs (QRDP) in a probabilistic cross-lingual information retrieval approach based on the IBM alignment model 1 for statistical machine translation. Our approach has two main advantages over those that use direct translation and parallel corpora: we do not obtain a translation of the query, but a set of associated words which share its meaning in some way, and, therefore, the obtained dictionary is, in a broad sense, more semantic than a translation dictionary. Besides, since the queries are supervised, we work in a more restricted domain than when using a general parallel corpus (it is well known that results in such a restricted context are better than those obtained in a general one). In order to assess the quality of our experiments, we compared the results with those obtained by a direct translation of the queries with a query translation system, observing promising results.
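The dictionary-learning step can be sketched with the standard IBM Model 1 EM updates over word pairs. The toy data and names below are illustrative, not the authors' corpus:

```python
from collections import defaultdict

def ibm1_train(pairs, iterations=10):
    """Estimate word-translation probabilities t(f | e) with IBM Model 1 EM.

    `pairs` is a list of (source_words, target_words) tuples, standing in
    for the query-relevant document pairs described above (toy stand-in).
    """
    src_vocab = {w for s, _ in pairs for w in s}
    tgt_vocab = {w for _, t in pairs for w in t}
    # Uniform initialisation of t(f | e).
    t = {(f, e): 1.0 / len(tgt_vocab) for e in src_vocab for f in tgt_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for src, tgt in pairs:
            for f in tgt:
                norm = sum(t[(f, e)] for e in src)
                for e in src:
                    c = t[(f, e)] / norm      # expected alignment count
                    count[(f, e)] += c
                    total[e] += c
        for (f, e) in t:                      # M-step: renormalise
            t[(f, e)] = count[(f, e)] / total[e] if total[e] else 0.0
    return t
```

On the classic two-pair example, the shared word "house"/"casa" accumulates most of the probability mass after a few iterations, which is the "more semantic than a translation" association the abstract refers to.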
By
GarzaFabre, Mario; RodriguezTello, Eduardo; ToscanoPulido, Gregorio
2 Citations
Through multiobjectivization, a single-objective problem is restated in multiobjective form with the aim of enabling a more efficient search process. Recently, this transformation was applied with success to the hydrophobic-polar (HP) lattice model, which is an abstract representation of the protein structure prediction problem. The use of alternative multiobjective formulations of the problem has led to significantly better results. In this paper, an improved multiobjectivization for the HP model is proposed. By decomposing the HP model’s energy function, a two-objective formulation of the problem is defined. A comparative analysis reveals that the newly proposed multiobjectivization compares favorably with respect to both the conventional single-objective and the previously reported multiobjective formulations. Statistical significance testing and the use of a large set of test cases support the findings of this study. Both two-dimensional and three-dimensional lattices are considered.
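One way to picture such a decomposition is to split the HP contact-energy count into two objectives. The particular split below (by chain half) is an illustrative assumption, not necessarily the decomposition proposed in the paper:

```python
def hp_two_objectives(sequence, coords):
    """Split the 2D HP-model energy into two objectives.

    `sequence` is a string over {'H','P'}; `coords` gives each residue's 2D
    lattice position along a self-avoiding walk. The HP energy counts H-H
    topological contacts (lattice neighbours that are not chain neighbours);
    here the count is split by whether the earlier residue of the contact
    lies in the left or right half of the chain -- an illustrative choice.
    """
    n = len(sequence)
    pos = {tuple(c): i for i, c in enumerate(coords)}
    obj_left = obj_right = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        for dx, dy in ((1, 0), (0, 1)):   # each contact counted once
            j = pos.get((x + dx, y + dy))
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                if min(i, j) < n // 2:
                    obj_left -= 1         # energies are negative contact counts
                else:
                    obj_right -= 1
    return obj_left, obj_right
```

The sum of the two objectives recovers the conventional single-objective HP energy, which is the defining property of this kind of decomposition.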
By
Sossa, Humberto; Garro, Beatriz A.; Villegas, Juan; Olague, Gustavo; Avilés, Carlos
1 Citation
In this paper we describe how evolutionary computation can be used to automatically design artificial neural networks (ANNs) and associative memories (AMs). In the case of ANNs, the Particle Swarm Optimization (PSO), Differential Evolution (DE), and Artificial Bee Colony (ABC) algorithms are used, while Genetic Programming is adopted for AMs. The derived ANNs and AMs are tested with several examples of well-known databases.
By
EstivillCastro, Vladimir; Wood, Derick
2 Citations
The availability of large main memories and new technologies for disk drives have modified the models for external sorting and renewed interest in their study. We investigate the replacement-selection paradigm for external sorting. We focus on the performance of the merge phase when given random files, and especially when given nearly sorted files, since such files are common in practice. In particular, we demonstrate that, during the merge phase, the floating-buffers technique not only reduces the sorting time by fully overlapping I/O and saving seeks, but also profits significantly from existing order in the input. We also propose a new algorithm for computing a feasible reading sequence for floating buffers that improves upon previous algorithms.
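The replacement-selection run-generation phase that precedes the merge can be sketched with a min-heap: a record smaller than the one just written must wait for the next run, which is why nearly sorted inputs yield runs far longer than the memory size. A minimal in-memory model:

```python
import heapq
from itertools import islice

def replacement_selection(records, memory_size):
    """Generate initial runs for external sorting via replacement selection.

    Keeps `memory_size` records in a min-heap ordered by (run, key); each
    produced run is sorted, and sorted input comes out as a single run.
    """
    it = iter(records)
    heap = [(0, r) for r in islice(it, memory_size)]
    heapq.heapify(heap)
    runs, current, current_run = [], [], 0
    while heap:
        run, r = heapq.heappop(heap)
        if run != current_run:               # start a new run
            runs.append(current)
            current, current_run = [], run
        current.append(r)
        nxt = next(it, None)
        if nxt is not None:
            # Records smaller than the last output are deferred to the next run.
            tag = run if nxt >= r else run + 1
            heapq.heappush(heap, (tag, nxt))
    runs.append(current)
    return runs
```

On random input the expected run length is about twice the memory size; on sorted input the whole file becomes one run.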
By
Gafni, Eli; Rajsbaum, Sergio
11 Citations
In round-by-round models of distributed computing, processes run in a sequence of (synchronous or asynchronous) rounds. The advantage of the round-by-round approach is that invariants established in the first round are preserved in later rounds. An elegant asynchronous round-by-round shared memory model is the iterated snapshots model (IS). Instead of the snapshots model, where processes share an array m[·] indexed by process ID that can be accessed any number of times, with P_{i} writing to m[i] and taking snapshots of the entire array, in the IS model processes share a two-dimensional array m[·,·], indexed by iteration number and by process ID, where P_{i} in iteration r writes once to m[r, i] and takes one snapshot of row r, m[r,·]. The IS model lends itself more easily to combinatorial analysis. However, to show that whenever a task is impossible in the IS model it is also impossible in the snapshots model, a simulation is needed. Such a simulation was presented by Borowsky and Gafni in PODC’97; namely, it was shown how to take a wait-free protocol for the snapshots model and transform it into a protocol for the IS model solving the same task.
In this paper we present a new simulation from the snapshots model to the IS model, and show that it can be extended to work with models stronger than wait-free. The main contribution is to show that the simulation can work with models that have access to certain communication objects, called 0-1 tasks. This extends the result of Gafni, Rajsbaum and Herlihy in DISC’2006, stating that renaming is strictly weaker than set agreement, from the IS model to the usual non-iterated wait-free read/write shared memory model.
We also show that our simulation works with t-resilient models and the more general dependent process failure model of Junqueira and Marzullo. This version of the simulation extends previous results by Herlihy and Rajsbaum in PODC’2010 and DISC’2010, about the topological connectivity of a protocol complex in an iterated dependent process failure model, to the corresponding non-iterated model.
By
Confalonieri, Roberto; Nieves, Juan Carlos; Osorio, Mauricio; VázquezSalceda, Javier
5 Citations
In this paper, we show how the formalisms of Logic Programs with Ordered Disjunction (LPODs) and Possibilistic Answer Set Programming (PASP) can be merged into the single framework of Logic Programs with Possibilistic Ordered Disjunction (LPPODs). The LPPODs framework embeds in a unified way several aspects of common-sense reasoning, nonmonotonicity, preferences, and uncertainty, where each part is underpinned by a well-established formalism. On the one hand, from LPODs it inherits the distinctive feature of expressing context-dependent qualitative preferences among different alternatives (modeled as the atoms of a logic program). On the other hand, PASP allows qualitative certainty statements about the rules themselves (modeled as necessity values according to possibilistic logic) to be captured. In this way, the LPPODs framework supports reasoning which is nonmonotonic, preference-aware, and uncertainty-aware. The LPPODs syntax allows for the specification of (1) preferences among the exceptions to default rules, and (2) necessity values for the certainty of program rules. As a result, preferences and uncertainty can be used to select the preferred uncertain default rules of an LPPOD and, consequently, to order its possibilistic answer sets. Furthermore, we describe the implementation of an ASP-based solver able to compute the LPPODs semantics.
By
Bagnoli, Franco; El Yacoubi, Samira; Rechtman, Raúl
6 Citations
An important question to be addressed regarding system control on a time interval [0, T] is whether some particular target state in the configuration space is reachable from a given initial state. When the target of interest refers only to a portion of the spatial domain, we speak of regional analysis. Cellular automata approaches have recently been promoted for the study of control problems on spatially extended systems for which the classical approaches cannot be used. An interesting problem concerns the situation where the subregion of interest is not interior to the domain but a portion of its boundary. In this paper we address the problem of regional controllability of cellular automata via boundary actions, i.e., we investigate the characteristics of a cellular automaton so that it can be controlled inside a given region by acting only on the values of sites at its boundaries.
By
Conde, Rodolfo; Rajsbaum, Sergio
3 Citations
In the consensus task each process proposes a value, and all correct processes have to decide the same value. In addition, validity requires that the decided value is a proposed value. Afek, Gafni and Lieber (DISC’09) introduced the safe-consensus task by weakening the validity requirement: if the first process to invoke the task returns before any other process invokes it, then it outputs its input; otherwise, when there is concurrency, the consensus output can be arbitrary, not even the input of any process. Surprisingly, they showed that safe-consensus is equivalent to consensus in a system where any number of processes can crash (e.g., wait-free).
We show that safe-consensus is nevertheless a much weaker communication primitive, in the sense that any wait-free implementation of consensus requires
$\binom{n}{2}$
safe-consensus black boxes, and this bound is tight. The lower bound proof uses connectivity arguments based on subgraphs of Johnson graphs. For the upper bound protocol that we present, we introduce the g-2-coalitions-consensus task, which may be of independent interest. We work in an iterated model of computation, where the processes repeatedly: write their information to a (fresh) shared array, invoke safe-consensus boxes, and snapshot the contents of the shared array.
By
Winter, Victor; Kapur, Deepak
Based on our investigations of a case study of controllers for train systems [6,7,13,14], we present a model of reactive systems which emphasizes the dynamic partitioning of system behavior into normal and abnormal. The class of reactive systems considered is non-strict, in the sense that their behavior is not entirely governed by past events; instead, future events must also be considered in the design of controllers for such systems.
By
Mata, Felix
1 Citation
Geographic information ranking consists of measuring whether a document (answer) is relevant to a spatial query. It is done by comparing characteristics shared by the document and the query. The most popular approaches compare just one aspect of geographical data (geographic properties, topology, among others), which limits the assessment of document relevance. Nevertheless, ranking can be improved when key characteristics of geographical objects are considered: (1) geographical attributes, (2) topological relations, and (3) geographical concepts. In this paper, we outline iRank, a method that integrates these three aspects to rank a document. Our approach evaluates documents using three sources of information: geo-ontologies, dictionaries, and topology files. Relevance is measured in three stages: in the first stage, relevance is computed by processing concepts; in the second stage, it is calculated using geographic attributes; in the last stage, it is measured by computing topological relations. Thus, the main contribution of iRank is to show that integrating these three ranking criteria is better than using them separately.
By
Boone, Paul; Chavez, Edgar; Gleitzky, Lev; Kranakis, Evangelos; Opatrny, Jaroslav; Salazar, Gelasio; Urrutia, Jorge
An important technique for discovering routes between two nodes in an ad-hoc network involves applying the face routing algorithm on a planar spanner of the network. Face routing guarantees message delivery in networks that contain large holes, where greedy algorithms fail. Existing techniques for constructing a suitable planar subgraph involve local tests that eliminate crossings between existing links by deleting some links. They do not test whether the deleted links actually create any crossings, so some links are deleted needlessly. As a result, some of the routes found by face routing will have an unnecessarily large number of hops from source to destination. We consider a new local test for preprocessing a wireless network that produces a planar subgraph. The test is relatively simple, requires low overhead, and does not eliminate existing links unless this is needed to eliminate a crossing, thus reducing the overhead associated with multiple hops.
By
GarzaFabre, Mario; RodriguezTello, Eduardo; ToscanoPulido, Gregorio
5 Citations
The hydrophobic-polar (HP) model for protein structure prediction abstracts the fact that hydrophobic interactions are a dominant force in the protein folding process. This model represents a hard combinatorial optimization problem, which has been widely addressed using evolutionary algorithms and other metaheuristics. In this paper, the multiobjectivization of the HP model is proposed. This originally single-objective problem is restated as a multiobjective one by decomposing the conventional objective function into two independent objectives. Using different evolutionary algorithms and a large set of test cases, the new alternative formulation was compared against the conventional single-objective problem formulation. As a result, the proposed formulation increased the search performance of the implemented algorithms in most of the cases. Both two- and three-dimensional lattices are considered. To the best of the authors’ knowledge, this is the first study where multiobjective optimization methods are used for solving the HP model.
By
MoralesLuna, Guillermo
1 Citation
We use a simple epistemic logic to pose problems related to relational database design. We use the notion of attribute dependencies, which is weaker than the notion of functional dependencies, to perform design tasks, since it satisfies the Armstrong axioms. We also discuss the application of the simple epistemic logic to privacy protection and to statistical disclosure.
By
GalvánLópez, Edgar; VázquezMendoza, Lucia; Schoenauer, Marc; Trujillo, Leonardo
In Genetic Programming (GP), the fitness of individuals is normally computed by using a set of fitness cases (FCs). Research on the use of FCs in GP has primarily focused on how to reduce the size of these sets. However, often only a small set of FCs is available and there is no need to reduce it. In this work, we are interested in using the whole FC set, but rather than adopting the commonly used GP approach of presenting the entire set of FCs to the system from the beginning of the search, referred to as static FCs, we allow the GP system to build it by aggregation over time, named dynamic FCs, with the hope of making the search more amenable. Moreover, there is no study on the use of FCs in Dynamic Optimisation Problems (DOPs). To this end, we also use the Kendall Tau Distance (KTD) approach, which quantifies pairwise dissimilarities between two lists of fitness values. KTD aims to capture the degree of a change in DOPs, and we use this to promote structural diversity. Results on eight symbolic regression functions indicate that both approaches are highly beneficial in GP.
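The normalised Kendall tau distance between two lists of fitness values can be computed directly; this toy version stands in for the KTD measure described above:

```python
def kendall_tau_distance(a, b):
    """Normalised Kendall tau distance between two equal-length value lists.

    Counts the index pairs (i, j) that the two lists order differently,
    divided by the total number of pairs: 0 means identical ordering,
    1 means fully reversed.
    """
    n = len(a)
    assert n == len(b) and n > 1
    discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is discordant when the comparisons have opposite signs.
            if (a[i] - a[j]) * (b[i] - b[j]) < 0:
                discordant += 1
    return discordant / (n * (n - 1) / 2)
```

A distance near 0 after an environment change would indicate a mild change, while a distance near 1 indicates a severe one.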
By
Tchernykh, Andrei; Trystram, Denis
In this paper, we focus on online scheduling of multiprocessor jobs, with emphasis on the regulation of idle periods in the framework of general list policies. We consider a new family of scheduling strategies based on two phases which successively combine sequential and parallel executions of the jobs. These strategies are part of a more generic scheme introduced in [6]. The main result is to demonstrate that it is possible to estimate the amount of resources that should remain idle for a better regulation of the load, and to obtain approximation bounds.
By
Picek, Stjepan; Coello Coello, Carlos A.; Jakobovic, Domagoj; Mentens, Nele
2 Citations
The problem of finding the shortest addition chain for a given exponent is of great relevance in cryptography, but it is also very difficult to solve, since it is an NP-hard problem. In this paper, we propose a genetic algorithm with a novel representation of solutions and new crossover and mutation operators to minimize the length of the addition chains corresponding to a given exponent. We also develop a repair strategy that significantly enhances the performance of our approach. The results are compared with respect to those generated by other metaheuristics for instances of moderate size, but we also investigate values up to
$$2^{127} - 3$$
. For those instances, we were unable to find any results produced by other metaheuristics for comparison, and three additional strategies were adopted in this case to serve as benchmarks. Our results indicate that the proposed approach is a very promising alternative to deal with this problem.
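The objective minimised by such a genetic algorithm is the length of a valid addition chain. A small checker makes the notion concrete (illustrative only; it is not the authors' representation or operators):

```python
def is_addition_chain(chain, exponent):
    """Check that `chain` is a valid addition chain for `exponent`.

    An addition chain starts at 1, ends at the exponent, and every later
    element is the sum of two (not necessarily distinct) earlier elements.
    Its length, len(chain) - 1, is the number of multiplications needed
    to compute x**exponent.
    """
    if not chain or chain[0] != 1 or chain[-1] != exponent:
        return False
    for k in range(1, len(chain)):
        prefix = chain[:k]
        if not any(a + b == chain[k] for a in prefix for b in prefix):
            return False
    return True
```

For example, [1, 2, 3, 6, 12, 15] computes x^15 with 5 multiplications, one fewer than the binary (square-and-multiply) chain [1, 2, 3, 6, 7, 14, 15].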
By
MontanoRivas, Omar
Completion-based automated theory exploration is a method to explore inductive theories with the aid of a convergent rewrite system. It combines a method to synthesise conjectures/definitions in a theory with a completion algorithm. Completion constructs a convergent rewrite system which is then used to reduce redundancies and improve proof automation during the exploration of the theory. However, completion does not always succeed on a set of identities and a reduction ordering. A common failure occurs when an initial identity or a normal form of a critical pair cannot be oriented by the given ordering. A popular solution to this problem consists in using those instances of the rules which can be oriented for rewriting, namely ordered rewriting. Extending completion to ordered rewriting leads to ‘unfailing completion’. In this paper, we extend the class of theories on which the completion-based automated theory exploration method can be applied by using unfailing completion. This produces stronger normalization methods compared to those in [20,21]. The techniques described are implemented in the theory exploration system IsaScheme.
By
Sánchez, Ana Helena; RodríguezHenríquez, Francisco
10 Citations
In 2011, Waters presented a ciphertext-policy attribute-based encryption protocol that uses bilinear pairings to provide control access mechanisms, where the set of user attributes is specified by means of a linear secret sharing scheme. Some of the applications foreseen for this protocol lie in the context of mobile devices such as smartphones and tablets, which in a majority of instances are powered by an ARM processor supporting the NEON vector set of instructions. In this paper we present the design of a software cryptographic library that implements a 127-bit security level attribute-based encryption scheme over mobile devices equipped with a 1.4 GHz Exynos 4 Cortex-A9 processor and a development board that hosts a 1.7 GHz Exynos 5 Cortex-A15 processor. For the latter platform, taking advantage of the inherent parallelism of the NEON vector instructions, our library computes a single optimal pairing over a Barreto-Naehrig curve approximately 2 times faster than the best timings previously reported on ARM platforms at this level of security. Further, using a 6-attribute access formula, our library is able to encrypt/decrypt a text/ciphertext in less than 7.5 ms and 15.67 ms, respectively.
By
Monroy, Raúl; Bundy, Alan; Green, Ian
1 Citation
Unique Fixpoint Induction (UFI) is the chief inference rule for proving the equivalence of recursive processes in the Calculus of Communicating Systems (CCS) (Milner 1989). It plays a major role in the equational approach to verification. Equational verification is of special interest as it offers theoretical advantages in the analysis of systems that communicate values, have infinite state space or show parameterised behaviour. We call these kinds of systems VIPSs, an acronym for Value-passing, Infinite-State and Parameterised Systems. Automating the application of UFI in the context of VIPSs has been neglected. This is both because many VIPSs are given in terms of recursive function symbols, making it necessary to carefully apply induction rules other than UFI, and because proving that one VIPS process constitutes a fixpoint of another involves computing a process substitution, mapping states of one process to states of the other, that often is not obvious. Hence, VIPS verification is usually turned into equation solving (Lin 1995a). Existing tools for this proof task, such as VPAM (Lin 1993), are highly interactive. We introduce a method that automates the use of UFI. The method uses middle-out reasoning (Bundy et al. 1990a) and so is able to apply the rule even without elaborating the details of the application. The method introduces meta-variables to represent those parts of the processes’ state space that, at application time, were not known, hence changing from equation verification to equation solving. Adding this method to the equation plan developed by Monroy et al. (Autom Softw Eng 7(3):263–304, 2000a), we have implemented an automatic verification planner. This planner increases the number of verification problems that can be dealt with fully automatically, thus improving upon the current degree of automation in the field.
By
Fraigniaud, Pierre; Ilcinkas, David; Rajsbaum, Sergio; Tixeuil, Sébastien
4 Citations
We consider the task of exploring graphs with anonymous nodes by a team of non-cooperative robots, modeled as finite automata. For exploration to be completed, each edge of the graph has to be traversed by at least one robot. In this paper, the robots have no a priori knowledge of the topology of the graph, nor of its size, and we are interested in the amount of memory the robots need to accomplish exploration. We introduce the so-called reduced automata technique, and we show how to use this technique to derive several space lower bounds for exploration. Informally speaking, the reduced automata technique consists in reducing a robot to a simpler form that preserves its “core” behavior on some graphs. Using this technique, we first show that any set of q ≥ 1 non-cooperative robots requires
$\Omega(\log(\frac{n}{q}))$
memory bits to explore all n-node graphs. The proof implies that, for any set of q K-state robots, there exists a graph of size O(qK) that no robot of this set can explore, which improves the O(K^{O(q)}) bound by Rollik (1980). Our main result is an application of this latter result, concerning terminating graph exploration with one robot, i.e., in which the robot is requested to stop after completing exploration. For this task, the robot is provided with a pebble that it can use to mark nodes (without such a marker, even terminating exploration of cycles cannot be achieved). We prove that terminating exploration requires Ω(log n) bits of memory for a robot achieving this task in all n-node graphs.
By
Chakraborty, Debrup; Sarkar, Palash
14 Citations
This work builds on earlier work by Rogaway at Asiacrypt 2004 on tweakable block ciphers (TBCs) and modes of operation. Our first contribution is to generalize Rogaway’s TBC construction by working over a ring R and by the use of a masking sequence of functions. The ring R can be instantiated as either GF(2^{n}) or as ℤ. Further, over GF(2^{n}), efficient instantiations of the masking sequence of functions can be done using either a Linear Feedback Shift Register (LFSR), a powering construction or a cellular automata map. Rogaway’s TBC construction was built from the powering construction over GF(2^{n}). Our second contribution is to use the general TBC construction to instantiate general constructions of various modes of operation (AE, PRF, MAC, AEAD) given by Rogaway.
By
Adj, Gora; CervantesVázquez, Daniel; ChiDomínguez, JesúsJavier; Menezes, Alfred; RodríguezHenríquez, Francisco
1 Citation
The security of the Jao-De Feo Supersingular Isogeny Diffie-Hellman (SIDH) key agreement scheme is based on the intractability of the Computational Supersingular Isogeny (CSSI) problem—computing
$${\mathbb F}_{p^2}$$
rational isogenies of degrees
$$2^e$$
and
$$3^e$$
between certain supersingular elliptic curves defined over
$${\mathbb F}_{p^2}$$
. The classical meet-in-the-middle attack on CSSI has an expected running time of
$$O(p^{1/4})$$
, but also has
$$O(p^{1/4})$$
storage requirements. In this paper, we demonstrate that the van Oorschot-Wiener golden collision finding algorithm has a lower cost (but higher running time) for solving CSSI, and thus should be used instead of the meet-in-the-middle attack to assess the security of SIDH against classical attacks. The smaller parameter p that this permits brings significantly improved performance for SIDH.
By
Dani, Varsha; King, Valerie; Movahedi, Mahnush; Saia, Jared
6 Citations
We describe an asynchronous algorithm to solve secure multiparty computation (MPC) over n players, when strictly less than a
$\frac{1}{8}$
fraction of the players are controlled by a static adversary. For any function f over a field that can be computed by a circuit with m gates, our algorithm requires each player to send a number of field elements and perform an amount of computation that is
$\tilde{O}(\frac{m}{n} + \sqrt n)$
. This significantly improves over traditional algorithms, which require each player to both send a number of messages and perform computation that is Ω(nm).
Additionally, we define the threshold counting problem and present a distributed algorithm to solve it in the asynchronous communication model. Our algorithm is load balanced, with computation, communication and latency complexity of O(log n), and may be of independent interest to other applications with a load balancing goal in mind.
By
Pierrard, Thomas; Coello Coello, Carlos A.
4 Citations
This paper presents a new artificial immune system algorithm for solving multiobjective optimization problems, based on the clonal selection principle and the hypervolume contribution. The main aim of this work is to investigate the performance of this class of algorithm with respect to approaches which are representative of the state of the art in multiobjective optimization using metaheuristics. The results obtained by our proposed approach, called multiobjective artificial immune system based on hypervolume (MOAIS-HV), are compared with respect to those of NSGA-II. Our preliminary results indicate that our proposed approach is very competitive, and can be a viable choice for solving multiobjective optimization problems.
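The hypervolume contribution used as a selection criterion can be sketched for two minimisation objectives as the loss in dominated area when a point is removed. This is an illustrative computation, not the authors' implementation:

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a set of 2D minimisation points w.r.t. reference `ref`.

    Sweeps the points in ascending f1 order and accumulates the area of the
    horizontal slab each non-dominated point adds below the reference.
    """
    pts = sorted(set(points))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # point is non-dominated so far
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hv_contribution(points, p, ref):
    """Exclusive hypervolume contribution of point `p` within `points`."""
    others = [q for q in points if q != p]
    return hypervolume_2d(points, ref) - hypervolume_2d(others, ref)
```

Points with zero exclusive contribution are dominated (or duplicated), which is why maximising hypervolume steers the population toward a well-spread Pareto front.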
By
Ramos, J. Guadalupe; Silva, Josep; Vidal, Germán
The development of modern routers requires significant effort to design, build, and verify them. While hardware routers are faster, they are difficult to configure and maintain. Software routers, on the other hand, are slower but much more flexible, easier to configure and maintain, less expensive, etc. Recently, a modular architecture and toolkit for building software routers and other packet processors has been introduced: the Click system. It includes a specification language with features for declaring and connecting router elements and for designing abstractions.
In this work, we introduce the domainspecific language Rose for the specification of software routers. Rose is embedded in Curry, a modern declarative multiparadigm language. An advantage of this approach is that we have available a framework where router specifications can be transformed, optimized, verified, etc., by using a number of existing formal techniques already developed for Curry programs. Furthermore, we show that the features of Curry are particularly useful to specify router configurations with a highlevel of abstraction. Our first experiments point out that the proposed methodology is both useful and practical.
By
Coello Coello, Carlos A.
690 Citations
This paper presents a critical review of the most important evolutionary-based multiobjective optimization techniques developed over the years, emphasizing the importance of analyzing their Operations Research roots as a way to motivate the development of new approaches that exploit the search capabilities of evolutionary algorithms. Each technique is briefly described together with its advantages and disadvantages, its degree of applicability and some of its known applications. Finally, the future trends in this discipline and some of the open areas of research are also addressed.
By
Gafni, Eli; Rajsbaum, Sergio; Raynal, Michel; Travers, Corentin
8 Citations
We introduce the (b,n)-Committee Decision Problem (CD), a generalization of the consensus problem. While set agreement generalizes consensus in terms of the number of decisions allowed, the CD problem generalizes consensus in the sense of considering many instances of consensus and requiring a processor to decide in at least one instance. In more detail, in the CD problem each one of a set of n processes has a (possibly distinct) value to propose to each one of a set of b consensus problems, which we call committees. Yet a process has to decide a value for at least one of these committees, such that all processes deciding for the same committee decide the same value. We study the CD problem in the context of a wait-free distributed system and analyze it using a combination of distributed algorithmic and topological techniques, introducing a novel reduction technique.
We use the reduction technique to obtain the following results. We show that the (2,3)-CD problem is equivalent to the musical benches problem introduced by Gafni and Rajsbaum in [10], and both are equivalent to (2,3)-set agreement, closing an open question left there. Thus, all three problems are wait-free unsolvable in a read/write shared memory system, and they are all solvable if the system is enriched with objects capable of solving (2,3)-set agreement. While the previous proof of the impossibility of musical benches was based on the Borsuk-Ulam (BU) Theorem, it now relies on Sperner’s Lemma, opening intriguing questions about the relation between BU and distributed computing tasks.
By
Olague, Gustavo; Vega, Francisco Fernández; Pérez, Cynthia B.; Lutton, Evelyne
5 Citations
We present a new bio-inspired approach applied to the problem of stereo image matching. This approach is based on an artificial epidemic process, which we call “the infection algorithm.” The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
By
Chakraborty, Debrup; Sarkar, Palash
28 Citations
The notion and the first construction of a tweakable enciphering scheme, called CMC, were presented by Halevi and Rogaway at Crypto 2003. In this paper, we present HCH, a new construction of such a scheme. The construction uses the hash-encrypt-hash approach introduced by Naor and Reingold. This approach has recently been used in the constructions of the tweakable enciphering schemes HCTR and PEP. HCH has several advantages over the previous schemes CMC, EME, EME*, HCTR, and PEP. CMC, EME, and EME* use two block-cipher invocations per message block, while HCTR, PEP, and HCH use only one. PEP uses four multiplications per block, while HCTR and HCH use only two. In HCTR, the security bound is cubic, while in HCH the security bound is quadratic.
more …
By
Taverne, Jonathan; FazHernández, Armando; Aranha, Diego F.; RodríguezHenríquez, Francisco; Hankerson, Darrel; López, Julio
Show all (6)
25 Citations
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for re-evaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on the performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving- and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel-resistant scalar multiplication on a random curve at the 128-bit security level. The algorithms presented in this work were implemented on Westmere and Sandy Bridge processors, the latest generation Intel microarchitectures.
more …
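For intuition, a carry-less multiplication instruction multiplies its two operands as polynomials over GF(2), i.e., with XOR in place of addition and no carry propagation. A minimal Python sketch of the operation (function name is illustrative, not Intel's intrinsic API):

```python
def clmul(a: int, b: int) -> int:
    """Multiply a and b as binary polynomials over GF(2) (no carries)."""
    result = 0
    while b:
        if b & 1:          # if the current bit of b is set...
            result ^= a    # ...XOR in a shifted copy of a
        a <<= 1
        b >>= 1
    return result

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 -> 0b101
```

Binary-field multiplication in GF(2^m) then reduces such products modulo an irreducible polynomial, which is where the instruction pays off in elliptic-curve arithmetic.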
By
Adj, Gora; CanalesMartínez, Isaac; RiveraZamarripa, Luis; RodríguezHenríquez, Francisco
Show all (4)
The problem of determining whether a polynomial defined over a finite field is smooth or not with respect to a given degree is the most intensive arithmetic operation of the so-called descent phase of index-calculus algorithms. In this paper, we present an analysis and efficient implementation of Coppersmith's smoothness test for polynomials defined over finite fields of characteristic three. As a case study, we review the best strategies for obtaining fast field and polynomial arithmetic for polynomials defined over the ring $${{\mathbb {F}}}_q[X]$$, with $$q=3^6$$, and report the timings achieved by our library when computing the smoothness test applied to polynomials of several degrees defined in that ring. This software library was recently used in Adj et al. (Cryptology 2016, http://eprint.iacr.org/2016/914) as a building block for achieving a record computation of discrete logarithms over the 4841-bit field $${{\mathbb {F}}}_{3^{6\cdot 509}}$$.
more …
By
Coello, Carlos A. Coello
5 Citations
Summary
In this paper, we briefly discuss the current state of research on evolutionary multi-objective optimization, emphasizing the main achievements obtained to date. Achievements in algorithmic design are discussed from their early origins up to the current approaches, which are considered the "second generation" in evolutionary multi-objective optimization. Some relevant applications are discussed as well, and we conclude with a list of future challenges for researchers working (or planning to work) in this area in the next few years.
more …
By
Marron, Mark; Hermenegildo, Manuel; Kapur, Deepak; Stefanovic, Darko
Show all (4)
9 Citations
The performance of heap analysis techniques has a significant impact on their utility in an optimizing compiler. Most shape analysis techniques perform interprocedural dataflow analysis in a context-sensitive manner, which can result in analyzing each procedure body many times (causing significant increases in runtime even if the analysis results are memoized). To improve the effectiveness of memoization (and thus speed up the analysis), project/extend operations are used to remove portions of the heap model that cannot be affected by the called procedure (effectively reducing the number of different contexts with which a procedure needs to be analyzed). This paper introduces project/extend operations that are capable of accurately modeling properties that are important when analyzing non-trivial programs (sharing, nullity information, destructive recursive functions, and composite data structures). The techniques we introduce are able to handle these features while significantly improving the effectiveness of memoizing analysis results (and thus improving analysis performance). Using a range of well-known benchmarks (many of which have not been successfully analyzed using other existing shape analysis methods), we demonstrate that our approach results in significant improvements in both accuracy and efficiency over a baseline analysis.
more …
By
Molinet Berenguer, José A.; Coello Coello, Carlos A.
1 Citations
In this paper, we propose a new multi-objective evolutionary algorithm (MOEA), which transforms a multi-objective optimization problem into a linear assignment problem using a set of uniformly scattered weight vectors. Our approach adopts uniform design to obtain the set of weights and the Kuhn-Munkres (Hungarian) algorithm to solve the assignment problem. Differential evolution is used as our search engine, giving rise to the so-called Hungarian Differential Evolution algorithm (HDE). Our proposed approach is compared with respect to a MOEA based on decomposition (MOEA/D) and with respect to an indicator-based MOEA (the S-Metric Selection Evolutionary Multi-Objective Algorithm, SMS-EMOA) using several test problems (taken from the specialized literature) having from two to ten objective functions. Our preliminary experimental results indicate that our proposed HDE outperforms MOEA/D and is competitive with respect to SMS-EMOA, but at a significantly lower computational cost.
more …
By
Hannachi, Mohamed Amine; Bouassida Rodriguez, Ismael; Drira, Khalil; Pomares Hernandez, Saul Eduardo
Show all (4)
2 Citations
Multi-labelled graphs are a powerful and versatile tool for modelling real applications in diverse domains such as communication networks, social networks, and autonomic systems, among others. Due to the dynamic nature of such systems, the structure of entities changes continuously over time: new entities may join the system, some may leave it, or the entities' relations may change. This is where graph transformation takes an important role in modelling systems with dynamic and/or evolutive configurations. Graph transformation consists of two main tasks: graph matching and graph rewriting. At present, few graph transformation tools support multi-labelled graphs. To our knowledge, there is no tool that supports inexact graph matching for the purpose of graph transformation. Moreover, the main problem of these tools lies in the limited expressiveness of the rewriting rules used, which reduces the range of application scenarios that can be modelled and/or increases the number of rewriting rules needed. In this paper, we present the tool GMTE (Graph Matching and Transformation Engine). GMTE handles directed and multi-labelled graphs. In addition to exact graph matching, GMTE handles inexact graph matching. The approach to rewriting rules used by GMTE combines Single-Pushout rewriting rules with the edNCE grammar. This combination enriches and extends the expressiveness of the graph rewriting rules. In addition, for graph matching, GMTE uses conditional rule schemata that support complex comparison functions over labels. To our knowledge, GMTE is the first graph transformation tool that offers such capabilities.
more …
By
Castañeda, Armando; Rajsbaum, Sergio; Raynal, Michel
7 Citations
Tasks and objects are two predominant ways of specifying distributed problems. A task specifies for each set of processes (which may run concurrently) the valid outputs of the processes. An object specifies the outputs the object may produce when it is accessed sequentially. Each one requires its own implementation notion, to tell when an execution satisfies the specification. For objects linearizability is commonly used, while for tasks implementation notions are less explored.
Sequential specifications are very convenient; especially important is the locality property of linearizability, which states that linearizable objects compose for free into a linearizable object. However, most well-known tasks have no sequential specification. Also, tasks have no clear locality property.
The paper introduces the notion of an interval-sequential object. The corresponding implementation notion of interval-linearizability generalizes linearizability. Interval-linearizability makes it possible to specify any task. However, there are sequential one-shot objects that cannot be expressed as tasks, under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object.
more …
By
Imbs, Damien; Rajsbaum, Sergio; Raynal, Michel
7 Citations
Processes in a concurrent system need to coordinate using a shared memory or a message-passing subsystem in order to solve agreement tasks such as, for example, consensus or set agreement. However, coordination is often needed to "break the symmetry" of processes that are initially in the same state, for example, to get exclusive access to a shared resource, to get distinct names or to elect a leader.
This paper introduces and studies the family of generalized symmetry breaking (GSB) tasks, which includes election, renaming and many other symmetry breaking tasks. Unlike agreement tasks, a GSB task is "inputless", in the sense that processes do not propose values; the task only specifies the symmetry breaking requirement, independently of the system's initial state (where processes differ only in their identifiers). Among various results characterizing the family of GSB tasks, it is shown that (non-adaptive) perfect renaming is universal for all GSB tasks.
more …
By
Lücken, Christian; Brizuela, Carlos; Barán, Benjamin
1 Citations
Multi-objective Evolutionary Algorithms (MOEAs) are used to solve complex multi-objective problems. As the number of objectives increases, Pareto-based MOEAs are unable to maintain the same effectiveness shown for two or three objectives. Therefore, as a way to ameliorate this performance degradation, several authors proposed preference-based methods as an alternative to Pareto-based approaches. On the other hand, parallelization has been shown to be useful in evolutionary optimization. A central aspect of the parallelization of evolutionary algorithms is the population partitioning approach. Thus, this paper presents a new parallelization approach based on clustering by the shape of objective vectors to deal with many-objective problems. The proposed method was compared with random and $$k$$-means clustering approaches using a multithreading framework in the parallelization of NSGA-II and six variants using preference-based relations for fitness assignment. Executions were carried out for the DTLZ problem suite, and the obtained solutions were compared using the generational distance metric. Experimental results show that the proposed shape-based partition achieves competitive results when compared to the sequential version and to other partitioning approaches.
more …
By
Feng, Xiang; Wan, Wanggen; Xu, Richard Yi Da; Chen, Haoyu; Li, Pengfei; Sánchez, J. Alfredo
Show all (6)
In computer graphics, various processing operations are applied to 3D triangle meshes, and these processes often involve distortions, which affect the visual quality of the surface geometry. In this context, perceptual quality assessment of 3D triangle meshes has become a crucial issue. In this paper, we propose a new objective quality metric for assessing the visual difference between a reference mesh and a corresponding distorted mesh. Our analysis indicates that the overall quality of a distorted mesh is sensitive to the distortion distribution. The proposed metric is based on a spatial pooling strategy and statistical descriptors of the distortion distribution. We generate a perceptual distortion map for vertices in the reference mesh while taking into account the visual masking effect of the human visual system. The proposed metric extracts statistical descriptors from the distortion map as the feature vector to represent the overall mesh quality. With the feature vector as input, we adopt a support vector regression model to predict the mesh quality score. We validate the performance of our method on three publicly available databases, and the comparison with state-of-the-art metrics demonstrates the superiority of our method. Experimental results show that our proposed method achieves a high correlation between objective assessment and subjective scores.
more …
By
Lopez, Edgar Manoatl; Antonio, Luis Miguel; Coello Coello, Carlos A.
3 Citations
The hypervolume has become very popular in current multi-objective optimization research. Because of its highly desirable features, it has been used not only as a quality measure for comparing the final results of multi-objective evolutionary algorithms (MOEAs), but also as a selection operator (it is, for example, very suitable for many-objective optimization problems). However, it has one serious drawback: computing the exact hypervolume is highly costly. The best known algorithms to compute the hypervolume are polynomial in the number of points, but their cost grows exponentially with the number of objectives. This paper proposes a novel approach which, through the use of Graphics Processing Units (GPUs), computes the hypervolume contribution of a point in a faster way. We develop a highly parallel implementation of our approach and demonstrate its performance when using it within the $$\mathcal {S}$$-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA). Our results indicate that our proposed approach is able to achieve a significant speedup (of up to 883x) with respect to its sequential counterpart, which allows us to use SMS-EMOA with exact hypervolume calculations in problems having up to 9 objective functions.
more …
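For intuition, in the bi-objective (minimization) case the hypervolume and a single point's contribution can be computed exactly with a simple sweep over the sorted front. This is only a sequential sketch of the quantities involved, not the paper's GPU algorithm:

```python
def hypervolume_2d(front, ref):
    """Exact hypervolume of a 2D non-dominated front (minimization) w.r.t. ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):            # ascending f1 => descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)  # horizontal slab dominated by this point
        prev_f2 = f2
    return hv

def contribution(front, point, ref):
    """Hypervolume lost when `point` is removed from the front."""
    rest = [p for p in front if p != point]
    return hypervolume_2d(front, ref) - hypervolume_2d(rest, ref)
```

SMS-EMOA discards, at each step, the point with the smallest such contribution; it is exactly this per-point computation that the paper parallelizes on the GPU.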
By
Sanvicente–Sánchez, Héctor; Frausto–Solís, Juan; Imperial–Valenzuela, Froilán
2 Citations
Since the appearance of the Simulated Annealing algorithm (SA), it has proven to be an efficient method for solving combinatorial optimization problems. Because of this, new algorithms based on two nested loops (over temperatures and over a Markov chain) have emerged; one of them is called Threshold Accepting (TA). Classical algorithms based on TA usually use the same Markov chain length for each temperature cycle; these methods spend a lot of time at high temperatures, where the Markov chain length is supposed to be small. In this paper we propose a method based on the neighborhood structure to set the Markov chain length dynamically for each temperature cycle. We implemented two TA algorithms for SAT (the classical one, TACM, and the proposed one, TADM). Experimentation shows that the proposed method is more efficient than the classical one, since it obtains the same quality of final solution with less processing time.
more …
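A minimal sketch of the classical TA loop described above, with a fixed Markov chain length per threshold cycle (the paper's contribution, setting that length dynamically from the neighborhood structure, is not reproduced here; all names are illustrative):

```python
import random

def threshold_accepting(x, cost, neighbor, thresholds, chain_len):
    """Classical TA: accept a neighbor whenever it is not worse by more than T."""
    cx = cost(x)
    best, cbest = x, cx
    for T in thresholds:                  # decreasing deterministic thresholds
        for _ in range(chain_len):        # fixed-length Markov chain per cycle
            y = neighbor(x)
            cy = cost(y)
            if cy - cx < T:               # threshold acceptance rule
                x, cx = y, cy
                if cx < cbest:
                    best, cbest = x, cx
    return best, cbest

# toy usage: minimize (x - 3)^2 over the integers with a +/-1 neighborhood
random.seed(1)
sol, val = threshold_accepting(
    0, lambda x: (x - 3) ** 2, lambda x: x + random.choice((-1, 1)),
    thresholds=[9, 4, 1, 0], chain_len=100)
```

Unlike SA, acceptance is deterministic given the threshold, which is what makes the per-cycle chain length the main tuning knob the paper targets.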
By
Aguilar, José Alfonso; ZaldívarColado, Aníbal; TrippBarba, Carolina; Misra, Sanjay; Bernal, Roberto; Ocegueda, Abraham
Show all (6)
1 Citations
It is well known that Requirements Engineering (RE) is one of the critical factors for successful software. The current literature offers several reasons for this claim. One particular phase that is vital for developing any new software application is requirements elicitation; in spite of this, much new software development fails because of a flawed elicitation phase. Several proposals exist for requirements elicitation in Software Engineering, but the current software development market is focusing on the development of Web and mobile applications, especially using Model-Driven methods; that is why we assume it is necessary to know the elicitation techniques applied in Model-Driven Web Engineering. To do this, we selected the most representative methods, namely NDT, UWE and WebML. We have reviewed 189 publications from ACM, IEEE, Science Direct, DBLP and the World Wide Web. Publications from the RE literature were analyzed by means of strict consideration of the current techniques for requirements elicitation.
more …
By
Ita, Guillermo; Bello, Pedro; Contreras, Meliza
Counting the models of a two-conjunctive form (the #2SAT problem) is a classic #P problem. We determine different structural patterns on the underlying graph of a 2-CF F that allow the efficient computation of #2SAT(F).
We show that if the constraint graph of a formula is acyclic, or the cycles on the graph can be arranged as independent and embedded cycles, then the number of models of F can be counted efficiently.
more …
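For contrast with the efficient counting the paper develops for restricted graph structures, #2SAT can always be computed by exhaustive enumeration in O(2^n) time. A brute-force reference sketch:

```python
from itertools import product

def count_2sat(n_vars, clauses):
    """Count models of a 2-CF. A literal i > 0 means x_i; i < 0 means its negation."""
    def holds(assign, lit):
        return assign[abs(lit) - 1] == (lit > 0)
    return sum(
        all(holds(a, p) or holds(a, q) for p, q in clauses)
        for a in product((False, True), repeat=n_vars)
    )

# (x1 or x2) has 3 models over 2 variables
```

The structural results in the abstract replace this exponential enumeration with efficient counting when the constraint graph is acyclic or its cycles are independent or embedded.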
By
YáñezMárquez, Cornelio; LópezYáñez, Itzamá; AldapePérez, Mario; CamachoNieto, Oscar; ArgüellesCruz, Amadeo José; VilluendasRey, Yenny
Show all (6)
1 Citations
The current paper contains the theoretical foundation for the off-the-mainstream model known as Alpha-Beta associative memories (the $$\alpha \beta $$ model). This is an unconventional computation model designed to operate as an associative memory, whose main application is the solution of pattern recognition tasks, particularly pattern recall and pattern classification. Although this model was devised, proposed and created in 2002, it is worth noting that its theoretical support remains unpublished to this day. This is despite the fact that more than a hundred scientific articles have been published with applications, improvements, and new models derived from the $$\alpha \beta $$ model. The present paper includes all the required definitions, and the rigorous mathematical demonstrations of the lemmas and theorems, explaining the operation of the $$\alpha \beta $$ model, as well as the original models it has inspired or that have been derived from it. Also, brief descriptions of 60 selected articles related to the $$\alpha \beta $$ model are presented. These latter works illustrate the competitiveness (and sometimes superiority) of several extensions and models derived from the original $$\alpha \beta $$ model, when compared against some models and paradigms present in the mainstream current scientific literature.
more …
By
Pinto, David; Rosso, Paolo; Jiménez, Ernesto
This paper presents a cross-lingual information retrieval approach which uses a ranking method based on a penalised version of the Jaccard formula. The results obtained after the submission of a set of runs to WebCLEF 2006 show that this simple ranking formula may be used in a cross-lingual environment. A comparison with runs submitted by other teams ranks us in third place when using all the topics. A fourth place was obtained with our best overall results using only the new topic set, and a second place using only the automatic topics of the new topic set. An exact comparison with the rest of the participants is in fact difficult to obtain and, therefore, we consider that a further detailed analysis should be done in order to determine the best components of the proposed system.
more …
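The plain Jaccard coefficient underlying such a ranking compares the term sets of a query and a document; a minimal sketch (the paper's penalisation variant is not detailed in the abstract, so only the base formula is shown, and all names are illustrative):

```python
def jaccard(query_terms, doc_terms):
    """Jaccard coefficient |Q & D| / |Q | D| between two term sets."""
    q, d = set(query_terms), set(doc_terms)
    union = q | d
    return len(q & d) / len(union) if union else 0.0

def rank_documents(query_terms, docs):
    """Sort documents by descending Jaccard similarity to the query."""
    return sorted(docs, key=lambda d: jaccard(query_terms, d), reverse=True)
```

A penalised version would additionally down-weight documents, e.g. by length or by unmatched terms, before sorting.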
By
AlbaCabrera, Eduardo; IbarraFiallo, Julio; GodoyCalderon, Salvador; CervantesAlonso, Fernando
Show all (4)
1 Citations
The last few years have seen an important increase in research publications dealing with external typical testor-finding algorithms, while internal ones have been almost forgotten or modified to behave as external ones on the basis of their alleged poor performance. In this research we present a new internal typical testor-finding algorithm, called YYC, that incrementally calculates typical testors for the currently analyzed set of basic matrix rows by searching for compatible sets. The experimentally measured performance of this algorithm stands out favorably on problems where other external algorithms show very low performance. Also, a comparative analysis of its efficiency is done against some external typical testor-finding algorithms published during the last few years.
more …
By
Rendón, Erendira; Sánchez, José Salvador
3 Citations
Clustering in data mining is a discovery process that groups a set of data so as to maximize the intra-cluster similarity and to minimize the inter-cluster similarity. Clustering becomes more challenging when data are categorical and the amount of available memory is less than the size of the data set. In this paper, we introduce CBC (Clustering Based on Compressed Data), an extension of the BIRCH algorithm whose main characteristics are that it is especially suitable for very large databases and that it can work both with categorical attributes and mixed features. The effectiveness and performance of the CBC procedure were compared with those of the well-known K-modes clustering algorithm, demonstrating that the CBC summary process does not affect the final clustering, while execution times can be drastically lessened.
more …
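The K-modes baseline mentioned above replaces Euclidean distance with a simple matching dissimilarity over categorical attributes, and replaces cluster means with modes. A sketch of those two ingredients (illustrative of K-modes, not of the CBC algorithm itself):

```python
from collections import Counter

def matching_dissimilarity(a, b):
    """Number of attributes on which two categorical records differ."""
    return sum(x != y for x, y in zip(a, b))

def mode_of(records):
    """Per-attribute most frequent value: the cluster representative in K-modes."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*records))
```

K-modes then iterates assignment to the nearest mode and mode recomputation, exactly as k-means does with means.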
By
Czyzowicz, Jurek; Kranakis, Evangelos; Krizanc, Danny; Lambadaris, Ioannis; Narayanan, Lata; Opatrny, Jaroslav; Stacho, Ladislav; Urrutia, Jorge; Yazdani, Mohammadreza
Show all (9)
30 Citations
A set of sensors establishes barrier coverage of a given line segment if every point of the segment is within the sensing range of a sensor. Given a line segment I, n mobile sensors in arbitrary initial positions on the line (not necessarily inside I) and the sensing ranges of the sensors, we are interested in finding final positions of the sensors which establish a barrier coverage of I so that the sum of the distances traveled by all sensors from initial to final positions is minimized. It is shown that the problem is NP-complete even to approximate up to a constant factor when the sensors may have different sensing ranges. When the sensors have an identical sensing range, we give several efficient algorithms to calculate the final destinations so that the sensors either establish a barrier coverage or maximize the coverage of the segment if complete coverage is not feasible, while at the same time the sum of the distances traveled by all sensors is minimized. Some open problems are also mentioned.
more …
By
García, Vicente; Mollineda, Ramón A.; Sánchez, J. Salvador
2 Citations
In this paper, we introduce a new approach to evaluate and visualize classifier performance in two-class imbalanced domains. This method defines a two-dimensional space by combining the geometric mean of the class accuracies and a new metric that gives an indication of how balanced they are. A given point in this space represents a certain trade-off between those two measures, which will be expressed as a trapezoidal function. Besides, this evaluation function has the interesting property that it allows one to emphasize the correct predictions on the minority class, which is often considered the most important class. Experiments demonstrate the consistency and validity of the evaluation method proposed here.
more …
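The geometric mean of class accuracies underlying the first axis of that space is standard; the second, balance-oriented metric is not detailed in the abstract, so the `balance` function below is only an illustrative stand-in, not the paper's definition:

```python
import math

def g_mean(tpr, tnr):
    """Geometric mean of the accuracies on the positive and negative classes."""
    return math.sqrt(tpr * tnr)

def balance(tpr, tnr):
    """Illustrative stand-in: 1 when both class accuracies match, lower otherwise."""
    return 1.0 - abs(tpr - tnr)
```

The g-mean already penalizes imbalance (it drops sharply when either class accuracy is low), which is why a second axis quantifying the imbalance itself is informative.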
By
Graff, Mario; GraffGuerrero, Ariel; CerdaJacobo, Jaime
5 Citations
There is great interest in the development of semantic genetic operators to improve the performance of genetic programming. Semantic genetic operators have traditionally been developed employing either experimentally or theoretically based approaches. Our current work proposes a novel semantic crossover that bridges the two traditional approaches. Our proposed semantic crossover operator is based on the use of the derivative of the error propagated through the tree. This process decides the crossing point of the second parent. The results show that our procedure improves the performance of genetic programming on rational symbolic regression problems.
more …
By
Quintana, Ma. Guadalupe; Morales, Rafael
We present results of using inductive logic programming (ILP) to produce learner models by behavioural cloning. Models obtained using a program for supervised induction of production rules (ripper) are compared to models generated using a wellknown program for ILP (foil). It is shown that the models produced by foil are either too specific or too general, depending on whether or not auxiliary relations are applied. Three possible explanations for these results are: (1) there is no way of specifying to foil the minimum number of cases each clause must cover; (2) foil requires that all auxiliary relations be defined extensionally; and (3) the application domain (control of a pole on a cart) has continuous attributes. In spite of foil’s limitations, the models it produced using auxiliary relations meet one of the goals of our exploration: to obtain more structured learner models which are easier to comprehend.
more …
By
Boone, Paul; Chavez, Edgar; Gleitzky, Lev; Kranakis, Evangelos; Opatrny, Jaroslav; Salazar, Gelasio; Urrutia, Jorge
Show all (7)
11 Citations
An important technique for discovering routes between two nodes in an ad hoc network involves applying the face routing algorithm on a planar spanner of the network. Face routing guarantees message delivery in networks that contain large holes, where greedy algorithms fail. Existing techniques for constructing a suitable planar subgraph involve local tests that eliminate crossings between existing links by deleting some links. They do not test whether the deleted links actually create any crossings, and some of the links are deleted needlessly. As a result, some of the routes found in face routing will have an unnecessarily large number of hops from source to destination. We consider a new local test for preprocessing a wireless network that produces a planar subgraph. The test is relatively simple, requires low overhead and does not eliminate existing links unless doing so is needed to eliminate a crossing, thus reducing the overhead associated with multiple hops.
more …
By
SolorioFernández, Saúl; CarrascoOchoa, J. Ariel; MartínezTrinidad, José Fco.
4 Citations
In this paper, we introduce a new hybrid filter-wrapper method for supervised feature selection, based on Laplacian Score ranking combined with a wrapper strategy. We propose to rank features with the Laplacian Score to reduce the search space, and then use this order to find the best feature subset. We compare our method against other ranking-based feature selection methods, namely Information Gain Attribute Ranking, Relief, and Correlation-based Feature Selection, and additionally we include a Wrapper Subset Evaluation method in our comparison. Empirical results over ten real-world datasets from the UCI repository show that our hybrid method is competitive and outperforms the other feature selection methods used in our experiments in most cases.
more …
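The overall ranking-then-wrapper strategy can be sketched generically. Here `score` stands in for the Laplacian Score and `evaluate` for the wrapper's classifier accuracy; both are placeholders supplied by the caller, and the greedy acceptance rule is one simple choice, not necessarily the paper's exact search:

```python
def hybrid_filter_wrapper(features, score, evaluate):
    """Rank features by a filter score, then greedily grow the subset,
    keeping a feature only if the wrapper evaluation improves."""
    ranked = sorted(features, key=score, reverse=True)
    subset, best = [], float('-inf')
    for f in ranked:
        acc = evaluate(subset + [f])
        if acc > best:          # keep f only when it helps the wrapper
            subset.append(f)
            best = acc
    return subset
```

Ranking first means the wrapper is invoked O(n) times instead of exploring the exponential subset lattice, which is the efficiency argument behind hybrid filter-wrapper methods.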
By
Delporte, Carole; Fauconnier, Hugues; Rajsbaum, Sergio; Raynal, Michel
Show all (4)
An immediate snapshot object is a high-level communication object, built on top of a read/write distributed system in which all except one process may crash. It allows each process to write a value and obtain a set of pairs (process id, value) such that, despite process crashes and asynchrony, the sets obtained by the processes satisfy noteworthy inclusion properties.
Considering an n-process model in which up to t processes are allowed to crash (the t-crash system model), this paper is on the construction of t-resilient immediate snapshot objects. In the t-crash system model, a process can obtain values from at least $$(n-t)$$ processes, and, consequently, t-immediate snapshot is assumed to have the properties of the basic $$(n-1)$$-resilient immediate snapshot plus the additional property stating that each process obtains values from at least $$(n-t)$$ processes. The main result of the paper is the following. While there is a (deterministic) $$(n-1)$$-resilient algorithm implementing the basic $$(n-1)$$-immediate snapshot in an $$(n-1)$$-crash read/write system, there is no t-resilient algorithm in a t-crash read/write model when $$t\in [1\ldots (n-2)]$$. This means that, when $$t<n-1$$, the notion of t-resilience is inoperative when one has to implement t-immediate snapshot for these values of t: the model assumption "at most $$t<n-1$$ processes may crash" does not provide us with additional computational power allowing for the design of a genuine t-resilient algorithm (genuine meaning that such an algorithm would work in the t-crash model, but not in the $$(t+1)$$-crash model). To show these results, the paper relies on well-known distributed computing agreement problems such as consensus and k-set agreement.
more …
By
Goudarzi, Alireza; Lakin, Matthew R.; Stefanovic, Darko
5 Citations
We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a highdimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir. We show that despite using only three coupled oscillators, a molecular reservoir computer could achieve 90% accuracy on a benchmark temporal problem.
more …
By
Herlihy, Maurice; Rajsbaum, Sergio; Raynal, Michel; Stainer, Julien
Show all (4)
4 Citations
In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or because they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model.
This paper is on the computability power of models in which several processes may concurrently run solo. It first introduces a family of round-based wait-free models, called the d-solo models, 1 ≤ d ≤ n, where up to d processes may run solo. The paper then gives a characterization of the colorless tasks that can be solved in each d-solo model. It also introduces the (d,ε)-solo approximate agreement task, which generalizes ε-approximate agreement, and proves that (d,ε)-solo approximate agreement can be solved in the d-solo model, but cannot be solved in the (d+1)-solo model. The paper also studies the relation linking d-set agreement and (d,ε)-solo approximate agreement in asynchronous wait-free message-passing systems.
These results establish for the first time a hierarchy of wait-free models that, while weaker than the basic read/write model, are nevertheless strong enough to solve non-trivial tasks.
more …
By
Fraigniaud, Pierre; Rajsbaum, Sergio; Travers, Corentin
1 Citations
The notion of deciding a distributed language
$$\mathcal {L} $$
is of growing interest in various distributed computing settings. Each process
$$p_i$$
is given an input value
$$x_i$$
, and the processes should collectively decide whether their set of input values
$$x=(x_i)_i$$
is a valid state of the system w.r.t. to some specification, i.e., if
$$x\in \mathcal {L} $$
. In nondeterministic distributed decision each process
$$p_i$$
gets a local certificate
$$c_i$$
in addition to its input
$$x_i$$
. If the input
$$x\in \mathcal {L} $$
then there exists a certificate
$$c=(c_i)_i$$
such that the processes collectively accept x, and if
$$x\not \in \mathcal {L} $$
, then for every c, the processes should collectively reject x. The collective decision is expressed by the set of opinions emitted by the processes.
In this paper we study nondeterministic distributed decision in systems where asynchronous processes may crash. It is known that the number of opinions needed to deterministically decide a language can grow with $n$, the number of processes in the system. We prove that every distributed language $\mathcal{L}$ can be nondeterministically decided using only three opinions, with certificates of size $\lceil \log \alpha(n) \rceil + 1$ bits, where $\alpha$ grows at least as slowly as the inverse of the Ackermann function. The result is optimal, as we show that there are distributed languages that cannot be decided using just two opinions, even with arbitrarily large certificates.
To prove our upper bound, we introduce the notion of a distributed encoding of the integers, which provides an explicit construction of a long bad sequence in the well-quasi-ordering $(\{0,1\}^*, \le_*)$ controlled by the successor function. Thus, we provide a new class of applications for well-quasi-orderings that lies outside logic and complexity theory. For the lower bound we use combinatorial topology techniques.
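To illustrate how slowly a certificate of size $\lceil \log \alpha(n) \rceil + 1$ bits grows, here is a sketch using one common diagonal inverse of the Ackermann function (the exact definition of $\alpha$ in the paper may differ by a shift; this version is only an assumption for illustration):

```python
def ackermann(m, n):
    """Classical two-argument Ackermann function (grows explosively)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def alpha(n):
    """Smallest k with ackermann(k, k) >= n -- a common diagonal inverse.

    Only usable for tiny n here, since ackermann(4, 4) is astronomically
    large; but that is the point: alpha(n) <= 4 for any conceivable n.
    """
    k = 0
    while ackermann(k, k) < n:
        k += 1
    return k
```

Since ackermann(3, 3) = 61, alpha(n) is already capped at 3 for every n up to 61, so the certificate size stays at a handful of bits for any input size that could arise in practice.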
By
DelporteGallet, Carole; Fauconnier, Hugues; Rajsbaum, Sergio; Yanagisawa, Nayuta
One of the central questions in distributed computability is characterizing the tasks that are solvable in a given system model. In the anonymous case, where processes have no identifiers and communicate through multi-writer/multi-reader registers, there is a recent topological characterization (Yanagisawa 2017) of the colorless tasks that are solvable when any number of asynchronous processes may crash. In this paper, we consider the case where at most $t$ asynchronous processes may crash, where $1 \le t < n$. We prove that a colorless task is $t$-resilient solvable anonymously if and only if it is $t$-resilient solvable non-anonymously. We obtain our results through various reductions and simulations that explore how to extend techniques for non-anonymous computation to anonymous computation.
By
Hernández Gómez, Raquel; Coello Coello, Carlos A.; Alba, Enrique
2 Citations
In the last decade, there has been a growing interest in multi-objective evolutionary algorithms that use performance indicators to guide the search. A simple and effective one is the $\mathcal{S}$-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), which is based on the hypervolume indicator. Even though maximization of the hypervolume is equivalent to achieving Pareto optimality, its computational cost increases exponentially with the number of objectives, which severely limits its applicability to many-objective optimization problems. In this paper, we present a parallel version of SMS-EMOA, where the execution time is reduced through an asynchronous island model with micro-populations, and diversity is preserved by external archives that are pruned to a fixed size using a recently created technique based on the Parallel Coordinates graph. The proposed approach, called $\mathcal{S}$-PAMICRO (PArallel MICRo Optimizer based on the $\mathcal{S}$ metric), is compared to the original SMS-EMOA and another state-of-the-art algorithm (HypE) on the WFG test problems with up to 10 objectives. Our experimental results show that $\mathcal{S}$-PAMICRO is a promising alternative that can solve many-objective optimization problems at an affordable computational cost.
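The hypervolume indicator that drives SMS-EMOA is exponential in the number of objectives, but in the bi-objective case it reduces to a simple sweep. A minimal sketch for 2-D minimization (function name mine; assumes the points are mutually non-dominated and dominated by the reference point):

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization) of a non-dominated front w.r.t. ref.

    Sorting by the first objective makes the second objective strictly
    decreasing along a non-dominated front, so the dominated region
    decomposes into disjoint rectangles; total cost O(n log n).
    """
    pts = sorted(front)     # ascending f1  =>  descending f2 on a front
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle added by this point
        prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) decomposes into rectangles of area 3, 2, and 1, giving a hypervolume of 6.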
By
SantiagoRamirez, Everardo; GonzalezFraga, J. A.; AscencioLopez, J. I.
2 Citations
In this paper, we compare the performance of three composite correlation filters on the facial recognition problem. We used the ORL (Olivetti Research Laboratory) facial image database to evaluate the performance of the k-law, MACE, and ASEF filters. Simulation results demonstrate that the k-law nonlinear composite filters exhibit the best performance in terms of recognition rate (RR) and false acceptance rate (FAR). As a result, we observe that correlation filters are able to work well even when the facial image contains distortions such as rotation, partial occlusion, and different illumination conditions.
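At the core of all three filters is correlation of a probe image with a synthesized template, with recognition decided by the correlation peak. The k-law, MACE, and ASEF filters are built in the frequency domain from multiple training images; the sketch below shows only the underlying spatial-domain correlation step, in 1-D for brevity (names and signal are illustrative, not from the paper):

```python
def correlation_peak(signal, template):
    """Slide the template over the signal; return (best_offset, peak_score).

    Toy spatial-domain correlation.  Real MACE/ASEF/k-law filters are
    synthesized in the frequency domain and applied via FFTs, and the
    peak sharpness (not just its height) feeds the recognition decision.
    """
    n, m = len(signal), len(template)
    best = (0, float("-inf"))
    for off in range(n - m + 1):
        score = sum(signal[off + i] * template[i] for i in range(m))
        if score > best[1]:
            best = (off, score)
    return best
```

A sharp, well-localized peak at the correct offset is what a well-designed composite filter produces even for distorted probes, which is why peak-based metrics like RR and FAR are the natural evaluation criteria.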
