Showing 1 to 100 of 770 matching Articles
By
Monroy, Raúl; Bundy, Alan; Green, Ian
2 Citations
Most efforts to automate formal verification of communicating systems have centred around finite-state systems (FSSs). However, FSSs are incapable of modelling many practical communicating systems, including a novel class of problems, which we call VIPSs. VIPSs are value-passing, infinite-state, parameterised systems. Existing approaches using model checking over FSSs are insufficient for VIPSs. This is due to their inability both to reason with and about domain-specific theories, and to cope with systems having an unbounded or arbitrary state space.
We use the Calculus of Communicating Systems (CCS) (Communication and Concurrency. London: Prentice Hall, 1989) to express and specify VIPSs. We take program verification to be proving the program and its intended specification equivalent. We use the laws of CCS to conduct the verification task. This approach allows us to study communicating systems and the data such systems communicate. Automating theorem proving in this context is an extremely difficult task.
We provide automated methods for CCS analysis; they are applicable to both FSSs and VIPSs. Adding these methods to the CLAM proof planner (Lecture Notes in Artificial Intelligence, Vol. 449, Springer, 1990, pp. 647–648), we have implemented an automated verification planner capable of dealing with problems that previously required human interaction. This paper describes these methods, gives an account as to why they work, and provides a short summary of experimental results.
By
Chakraborty, Debrup; Sarkar, Palash
3 Citations
This work deals with the various requirements of encryption and authentication in cryptographic applications. The approach is to construct suitable modes of operation of a block cipher to achieve the relevant goals. A variety of schemes suitable for specific applications are presented. While none of the schemes are built completely from scratch, there is a common unifying framework which connects them. All the schemes described have been implemented and the implementation details are publicly available. Performance figures are presented when the block cipher is the AES and the Intel AES-NI instructions are used. These figures suggest that the constructions presented here compare well with previous works such as the famous OCB mode of operation. In terms of features, the constructions provide several new offerings which are not present in earlier works. This work significantly widens the range of choices available to an actual designer of cryptographic systems.
By
Uddin, Ashraf; Singh, Vivek Kumar; Pinto, David; Olmos, Ivan
4 Citations
This paper presents a detailed scientometric and text-based analysis of Computer Science (CS) research output from Mexico during 1989–2014, indexed in Web of Science. The analytical characterization focuses on the origins and growth patterns of CS research in Mexico. In addition to computing the standard scientometric indicators of TP, TC, ACPP, HiCP, h-index, ICP patterns, etc., the major publication sources selected by Mexican computer scientists and the major funding agencies for CS research are also identified. The text-based analysis, on the other hand, focused on identifying the major research themes pursued by Mexican computer scientists and their trends. Mexico, ranking 35th in the world in CS research output during the mentioned period, is also unique in the sense that 75% of the total CS publications are produced by the top ten Mexican institutions alone. Similarly, Mexico has higher ICP levels than the world average. The analysis presents a detailed characterization of these aspects.
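Several of the standard indicators mentioned above are simple to compute from raw citation counts; a minimal sketch of the h-index, assuming citations are given as a list of per-paper counts:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank          # the paper at this rank still has enough citations
        else:
            break
    return h
```

For example, five papers with counts [10, 8, 5, 4, 3] give an h-index of 4: four papers have at least four citations each, but not five papers with five.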
By
Singh, Vivek Kumar; Uddin, Ashraf; Pinto, David
11 Citations
This paper aims to perform a detailed scientometric and text-based analysis of the Computer Science (CS) research output of the 100 most productive institutions in India and in the world. The analytical characterization is based on research output data indexed in Scopus during the last 25-year period (1989–2013). Our computational analysis involves a two-dimensional approach comprising the standard scientometric methodology and text-based analysis. The scientometric characterization aims to assess CS research output in leading Indian institutions vis-à-vis the leading world institutions and to bring out the similarities and differences among them. It involves analysis along traditional scientometric indicators such as total output, citation-based impact assessment, co-authorship patterns, international collaboration levels, etc. The text-based characterization aims to identify the key research themes and their temporal trends for the two sets. The key contribution of the experimental work is an analytical characterization that identifies characteristic similarities and differences in the CS research landscape of Indian institutions vis-à-vis world institutions.
By
Confalonieri, Roberto; Nieves, Juan Carlos; Osorio, Mauricio; Vázquez-Salceda, Javier
4 Citations
In this paper, we show how the formalism of Logic Programs with Ordered Disjunction (LPODs) and Possibilistic Answer Set Programming (PASP) can be merged into the single framework of Logic Programs with Possibilistic Ordered Disjunction (LPPODs). The LPPODs framework embeds in a unified way several aspects of commonsense reasoning, nonmonotonicity, preferences, and uncertainty, where each part is underpinned by a well-established formalism. On one hand, from LPODs it inherits the distinctive feature of expressing context-dependent qualitative preferences among different alternatives (modeled as the atoms of a logic program). On the other hand, PASP allows qualitative certainty statements about the rules themselves (modeled as necessity values according to possibilistic logic) to be captured. In this way, the LPPODs framework supports reasoning which is nonmonotonic, preference-aware and uncertainty-aware. The LPPODs syntax allows for the specification of (1) preferences among the exceptions to default rules, and (2) necessity values for the certainty of program rules. As a result, preferences and uncertainty can be used to select the preferred uncertain default rules of an LPPOD and, consequently, to order its possibilistic answer sets. Furthermore, we describe the implementation of an ASP-based solver able to compute the LPPODs semantics.
By
Bagnoli, Franco; El Yacoubi, Samira; Rechtman, Raúl
4 Citations
An important question to be addressed regarding system control on a time interval [0, T] is whether some particular target state in the configuration space is reachable from a given initial state. When the target of interest refers only to a portion of the spatial domain, we speak of regional analysis. The cellular automata approach has recently been promoted for the study of control problems on spatially extended systems for which the classical approaches cannot be used. An interesting problem concerns the situation where the subregion of interest is not interior to the domain but a portion of its boundary. In this paper we address the problem of regional controllability of cellular automata via boundary actions, i.e., we investigate the characteristics of a cellular automaton such that it can be controlled inside a given region by acting only on the values of sites at its boundaries.
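As a toy illustration of the setting (not the paper's construction), consider a one-dimensional elementary CA whose two boundary cells are control inputs; a brute-force search over boundary-control sequences then tests whether a target configuration of an interior region is reachable. The rule, widths and region here are arbitrary choices for the sketch.

```python
from itertools import product

def step(state, left, right, rule=90):
    """One synchronous update of an elementary CA; `left`/`right` are the
    controlled boundary values applied for this step."""
    padded = [left] + state + [right]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def reach_region(initial, target, region, steps, rule=90):
    """Enumerate all boundary-control sequences; return one that drives the
    cells in region = (lo, hi) to `target` after `steps` updates, or None."""
    lo, hi = region
    for controls in product([0, 1], repeat=2 * steps):
        s = list(initial)
        for t in range(steps):
            s = step(s, controls[2 * t], controls[2 * t + 1], rule)
        if s[lo:hi] == target:
            return controls
    return None
```

With rule 90 (each cell becomes the XOR of its neighbours), the middle cell of a three-cell automaton starting at all zeros cannot be set in one step, but can in two, since boundary influence needs a step to propagate inward.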
By
Monroy, Raúl; Bundy, Alan; Green, Ian
Unique Fixpoint Induction (UFI) is the chief inference rule to prove the equivalence of recursive processes in the Calculus of Communicating Systems (CCS) (Milner 1989). It plays a major role in the equational approach to verification. Equational verification is of special interest as it offers theoretical advantages in the analysis of systems that communicate values, have infinite state space or show parameterised behaviour. We call these kinds of systems VIPSs; VIPS is an acronym for Value-passing, Infinite-State and Parameterised Systems. Automating the application of UFI in the context of VIPSs has been neglected. This is both because many VIPSs are given in terms of recursive function symbols, making it necessary to carefully apply induction rules other than UFI, and because proving that one VIPS process constitutes a fixpoint of another involves computing a process substitution, mapping states of one process to states of the other, that often is not obvious. Hence, VIPS verification is usually turned into equation solving (Lin 1995a). Existing tools for this proof task, such as VPAM (Lin 1993), are highly interactive. We introduce a method that automates the use of UFI. The method uses middle-out reasoning (Bundy et al. 1990a) and so is able to apply the rule even without elaborating the details of the application. The method introduces meta-variables to represent those parts of the processes' state space that, at application time, were not known, hence changing from equation verification to equation solving. Adding this method to the equation plan developed by Monroy et al. (Autom Softw Eng 7(3):263–304, 2000a), we have implemented an automatic verification planner. This planner increases the number of verification problems that can be dealt with fully automatically, thus improving upon the current degree of automation in the field.
By
Coello Coello, Carlos A.
609 Citations
This paper presents a critical review of the most important evolutionary-based multi-objective optimization techniques developed over the years, emphasizing the importance of analyzing their Operations Research roots as a way to motivate the development of new approaches that exploit the search capabilities of evolutionary algorithms. Each technique is briefly described together with its advantages and disadvantages, its degree of applicability and some of its known applications. Finally, the future trends in this discipline and some of the open areas of research are also addressed.
By
Taverne, Jonathan; Faz-Hernández, Armando; Aranha, Diego F.; Rodríguez-Henríquez, Francisco; Hankerson, Darrel; López, Julio
14 Citations
The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for re-evaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on the performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving- and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels, and a new speed record for side-channel-resistant scalar multiplication on a random curve at the 128-bit security level. The algorithms presented in this work were implemented on Westmere and Sandy Bridge processors, the latest generation Intel microarchitectures.
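The carry-less multiplication the instruction provides is simply polynomial multiplication over GF(2); a bitwise software sketch of that product, followed by reduction modulo an irreducible polynomial to obtain a binary-field multiply (the hardware instruction computes the same product in a few cycles):

```python
def clmul(a, b):
    """Carry-less product: multiply integers a and b as polynomials over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a        # XOR instead of add: no carries between bit positions
        a <<= 1
        b >>= 1
    return r

def gf2m_mul(a, b, poly, m):
    """Multiplication in GF(2^m): carry-less multiply, then reduce modulo the
    irreducible polynomial `poly` (given with its degree-m bit set)."""
    r = clmul(a, b)
    for i in range(2 * m - 2, m - 1, -1):   # clear high bits from degree 2m-2 down
        if (r >> i) & 1:
            r ^= poly << (i - m)
    return r
```

For instance, in AES's field GF(2^8) with reduction polynomial 0x11B, `gf2m_mul(0x53, 0xCA, 0x11B, 8)` returns 1, the well-known inverse pair from the AES specification.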
By
Adj, Gora; Canales-Martínez, Isaac; Rivera-Zamarripa, Luis; Rodríguez-Henríquez, Francisco
The problem of determining whether a polynomial defined over a finite field is smooth or not with respect to a given degree is the most intensive arithmetic operation of the so-called descent phase of index-calculus algorithms. In this paper, we present an analysis and efficient implementation of Coppersmith's smoothness test for polynomials defined over finite fields with characteristic three. As a case study, we review the best strategies for obtaining fast field and polynomial arithmetic for polynomials defined over the ring $$F_q[X]$$, with $$q=3^6$$, and report the timings achieved by our library when computing the smoothness test applied to polynomials of several degrees defined in that ring. This software library was recently used in Adj et al. (Cryptology ePrint Archive, 2016, http://eprint.iacr.org/2016/914) as a building block for achieving a record computation of discrete logarithms over the 4841-bit field $${{\mathbb {F}}}_{3^{6\cdot 509}}$$.
By
Feng, Xiang; Wan, Wanggen; Xu, Richard Yi Da; Chen, Haoyu; Li, Pengfei; Sánchez, J. Alfredo
In computer graphics, various processing operations are applied to 3D triangle meshes, and these processes often involve distortions which affect the visual quality of the surface geometry. In this context, perceptual quality assessment of 3D triangle meshes has become a crucial issue. In this paper, we propose a new objective quality metric for assessing the visual difference between a reference mesh and a corresponding distorted mesh. Our analysis indicates that the overall quality of a distorted mesh is sensitive to the distortion distribution. The proposed metric is based on a spatial pooling strategy and statistical descriptors of the distortion distribution. We generate a perceptual distortion map for vertices in the reference mesh while taking into account the visual masking effect of the human visual system. The proposed metric extracts statistical descriptors from the distortion map as the feature vector to represent the overall mesh quality. With the feature vector as input, we adopt a support vector regression model to predict the mesh quality score. We validate the performance of our method on three publicly available databases, and the comparison with state-of-the-art metrics demonstrates the superiority of our method. Experimental results show that our proposed method achieves a high correlation between objective assessment and subjective scores.
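The pooling step can be as simple as summarizing the per-vertex distortion map with a few statistics; a sketch with a hypothetical descriptor set (the paper's exact descriptors may differ), producing the fixed-length feature vector that would feed the regression model:

```python
import statistics as st

def distortion_features(dmap):
    """Collapse a per-vertex distortion map into a fixed-length feature vector:
    mean, population std, min, max and the three quartiles (illustrative set)."""
    d = sorted(dmap)
    n = len(d)
    return [st.mean(d), st.pstdev(d), d[0], d[-1],
            d[n // 4], d[n // 2], d[(3 * n) // 4]]
```

Because every mesh yields the same number of features regardless of its vertex count, a single trained regressor can score meshes of any size.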
By
Yáñez-Márquez, Cornelio; López-Yáñez, Itzamá; Aldape-Pérez, Mario; Camacho-Nieto, Oscar; Argüelles-Cruz, Amadeo José; Villuendas-Rey, Yenny
The current paper contains the theoretical foundation for the off-the-mainstream model known as Alpha-Beta associative memories (the $$\alpha \beta $$ model). This is an unconventional computation model designed to operate as an associative memory, whose main application is the solution of pattern recognition tasks, particularly pattern recall and pattern classification. Although this model was devised, proposed and created in 2002, it is worth noting that its theoretical support remains unpublished to this day. This is despite the fact that more than a hundred scientific articles have been published with applications, improvements, and new models derived from the $$\alpha \beta $$ model. The present paper includes all the required definitions, and the rigorous mathematical demonstrations of the lemmas and theorems, explaining the operation of the $$\alpha \beta $$ model, as well as the original models it has inspired or that have been derived from it. Also, brief descriptions of 60 selected articles related to the $$\alpha \beta $$ model are presented. These latter works illustrate the competitiveness (and sometimes superiority) of several extensions and models derived from the original $$\alpha \beta $$ model, when compared against some models and paradigms present in the mainstream current scientific literature.
By
Esponda, Fernando; Forrest, Stephanie; Helman, Paul
14 Citations
In a negative representation, a set of elements (the positive representation) is depicted by its complement set. That is, the elements in the positive representation are not explicitly stored, while those in the negative representation are. The concept, feasibility, and properties of negative representations are explored in the paper; in particular, their potential to address privacy concerns. It is shown that a positive representation consisting of n l-bit strings can be represented negatively using only O(l·n) strings, through the use of an additional symbol. It is also shown that membership queries for the positive representation can be processed against the negative representation in time no worse than linear in its size, while reconstructing the original positive set from its negative representation is an $${\mathcal{NP}}$$-hard problem. The paper introduces algorithms for constructing negative representations as well as operations for updating and maintaining them.
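The query semantics can be seen with a deliberately naive sketch that stores the complement explicitly; the compressed O(l·n) construction with an extra symbol, which gives the scheme its privacy properties, is omitted here.

```python
from itertools import product

def negative_rep(positive, l):
    """Negative representation of a set of l-bit strings: its complement
    within the full universe of 2**l strings (naive, uncompressed version)."""
    universe = {''.join(bits) for bits in product('01', repeat=l)}
    return universe - set(positive)

def is_member(x, negative):
    """x is in the positive set iff it is absent from the negative database."""
    return x not in negative
```

The membership query never touches the positive elements directly, which is the property the compressed construction preserves while keeping the stored set small.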
By
Baldwin, Brian; Goundar, Raveen R.; Hamilton, Mark; Marnane, William P.
12 Citations
Recent elliptic curve scalar multiplication algorithms are based on efficient co-$$Z$$ arithmetic. These arithmetics were initially introduced by Meloni in 2007, where additions of projective points share the same $$Z$$-coordinate. The co-$$Z$$ versions of the algorithms are sufficiently fast and secure against a large variety of implementation attacks. This paper analyses the performance of these algorithms in hardware and then compares them against software and hardware–software co-design environments on FPGA, in terms of speed, memory, power and energy consumption. Specifically, this paper presents a survey and performance comparison of implementations of co-$$Z$$ versions of the Montgomery ladder and Joye's double-add algorithm in an embedded system environment.
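The regularity that makes the Montgomery ladder attractive against implementation attacks is easiest to see in an abstract form; a sketch over modular exponentiation, where the elliptic-curve versions would replace the multiplications with co-Z point operations:

```python
def montgomery_ladder(x, k, mod):
    """Compute x**k % mod with one 'multiply' and one 'square' per key bit,
    in a fixed operation pattern independent of the bit's value."""
    r0, r1 = 1, x % mod
    for bit in bin(k)[2:]:           # scan key bits, most significant first
        if bit == '1':
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
    return r0
```

The invariant r1 = r0 * x (in the group) holds after every iteration, so both branches perform the same two operations, only writing the results to swapped registers; this uniformity is what resists simple power-analysis attacks.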
By
Garza-Fabre, Mario; Rodriguez-Tello, Eduardo; Toscano-Pulido, Gregorio
8 Citations
The HP model for protein structure prediction abstracts the fact that hydrophobicity is a dominant force in the protein folding process. This challenging combinatorial optimization problem has been widely addressed through metaheuristics. The evaluation function is a key component for the success of metaheuristics; the poor discrimination of the conventional evaluation function of the HP model has motivated the proposal of alternative formulations for this component. This comparative analysis inquires into the effectiveness of seven different evaluation functions for the HP model. The degree of discrimination provided by each of the studied functions, their capability to preserve a rank ordering among potential solutions which is consistent with the original objective of the HP model, as well as their effect on the performance of local search methods are analyzed. The obtained results indicate that studying alternative evaluation schemes for the HP model represents a highly valuable direction which merits more attention.
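The conventional evaluation function whose poor discrimination the paper discusses is the H-H contact count itself; a sketch for the 2D square-lattice HP model, with a fold encoded as U/D/L/R moves (the encoding and the absence of a self-avoidance check are simplifications for illustration):

```python
def hp_energy(sequence, fold):
    """Conventional HP evaluation: minus the number of hydrophobic (H) contacts
    between residues adjacent on the 2D lattice but not consecutive in the chain."""
    moves = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    x, y = 0, 0
    coords = [(0, 0)]
    for m in fold:                       # place residues along the fold
        dx, dy = moves[m]
        x, y = x + dx, y + dy
        coords.append((x, y))
    contacts = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):      # skip chain neighbours
            if sequence[i] == sequence[j] == 'H':
                xi, yi = coords[i]
                xj, yj = coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:
                    contacts += 1
    return -contacts
```

Many folds share the same contact count, which is exactly the coarse ranking behaviour that motivates the alternative evaluation functions the paper compares.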
By
Fernández-Zepeda, José Alberto; Brubeck-Salcedo, Daniel; Fajardo-Delgado, Daniel; Zatarain-Aceves, Héctor
We address the bodyguard allocation problem (BAP), an optimization problem that illustrates the conflict of interest between two classes of processes with contradictory preferences within a distributed system. While one class of processes prefers to minimize its distance to a particular process called the root, the other class prefers to maximize it; at the same time, all the processes seek to build a communication spanning tree with the maximum social welfare. The two state-of-the-art algorithms for this problem always guarantee the generation of a spanning tree that satisfies a condition of Nash equilibrium in the system; however, such a tree does not necessarily produce the maximum social welfare. In this paper, we propose a two-player coalition cooperative scheme for BAP, which allows some processes to perturb or break a Nash equilibrium to find another one with better social welfare. Using this cooperative scheme, we propose a new algorithm called FFC-BAPS for BAP. We present both theoretical and empirical analyses which show that this algorithm produces better-quality approximate solutions than the former algorithms for BAP.
By
Chakraborty, Debrup; Mancillas-López, Cuauhtemoc; Sarkar, Palash
1 Citation
In the last decade and a half there has been a lot of activity toward the development of cryptographic techniques for disk encryption. It has been almost canonized that an encryption scheme suitable for the application of disk encryption must be length preserving; this rules out the use of schemes such as authenticated encryption, where an authentication tag is also produced as a part of the ciphertext, resulting in ciphertexts being longer than the corresponding plaintexts. The notion of a tweakable enciphering scheme (TES) has been formalized as the appropriate primitive for disk encryption, and it has been argued that TESs provide the maximum security possible for a tagless scheme. On the other hand, TESs are less efficient than some existing authenticated encryption schemes. Also, a TES cannot provide true authentication, as it has no authentication tag. In this paper, we analyze the possibility of using encryption schemes that produce length expansion for the purpose of disk encryption. On the negative side, we argue that nonce-based authenticated encryption schemes are not appropriate for this application. On the positive side, we demonstrate that deterministic authenticated encryption (DAE) schemes may have more advantages than disadvantages compared to a TES when used for disk encryption. Finally, we propose a new deterministic authenticated encryption scheme called BCTR which is suitable for this purpose. We provide the full specification of BCTR, prove its security and also report an efficient implementation in reconfigurable hardware. Our experiments suggest that BCTR performs significantly better than existing TESs and existing DAE schemes.
7 Citations
Abstract. Let P and Q be two disjoint convex polygons in the plane with m and n vertices, respectively. Given a point x in P , the aperture angle of x with respect to Q is defined as the angle of the cone that: (1) contains Q , (2) has apex at x and (3) has its two rays emanating from x tangent to Q . We present algorithms with complexities O(n log m) , O(n + n log (m/n)) and O(n + m) for computing the maximum aperture angle with respect to Q when x is allowed to vary in P . To compute the minimum aperture angle we modify the latter algorithm obtaining an O(n + m) algorithm. Finally, we establish an Ω(n + n log (m/n)) time lower bound for the maximization problem and an Ω(m + n) bound for the minimization problem thereby proving the optimality of our algorithms.
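For a fixed point x outside the convex polygon Q, the aperture angle can be computed directly from the directions to Q's vertices; a sketch evaluating a single point (the paper's algorithms additionally optimize over all x in P):

```python
import math

def aperture_angle(x, polygon):
    """Aperture angle of point x w.r.t. a convex polygon lying entirely on one
    side of x: the angle of the smallest cone with apex x containing it, i.e.
    the largest angle subtended at x by two of its vertices."""
    angles = sorted(math.atan2(py - x[1], px - x[0]) for px, py in polygon)
    # Gaps between consecutive direction angles, including the wrap-around gap.
    gaps = [angles[i + 1] - angles[i] for i in range(len(angles) - 1)]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))
    # All directions fit inside 2*pi minus the largest empty gap.
    return 2 * math.pi - max(gaps)
```

For example, seen from the origin, the unit square with corners (1,0), (2,0), (2,1), (1,1) subtends an angle of π/4, bounded by the tangent rays through (2,0) and (1,1).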
By
Rodríguez-Mazahua, Lisbeth; Rodríguez-Enríquez, Cristian-Aarón; Sánchez-Cervantes, José Luis; Cervantes, Jair; García-Alcaraz, Jorge Luis; Alor-Hernández, Giner
16 Citations
Big Data has become a very popular term. It refers to the enormous amount of structured, semi-structured and unstructured data that is exponentially generated by high-performance applications in many domains: biochemistry, genetics, molecular biology, physics, astronomy and business, to mention a few. Since the literature on Big Data has increased significantly in recent years, it has become necessary to develop an overview of the state of the art in Big Data. This paper aims to provide a comprehensive review of the Big Data literature of the last 4 years, to identify the main challenges, areas of application, tools and emergent trends of Big Data. To meet this objective, we have analyzed and classified 457 papers concerning Big Data. This review gives relevant information to practitioners and researchers about the main trends in research and application of Big Data in different technical domains, as well as a reference overview of Big Data tools.
By
Vázquez Espinoza de los Monteros, Roberto A.; Sossa Azuela, Juan Humberto
25 Citations
The brain is not a huge fixed neural network, but a dynamic, changing neural network that continuously adapts to meet the demands of communication and computational needs. In classical neural network approaches, particularly associative memory models, synapses are only adjusted during the training phase. After this phase, synapses are no longer adjusted. In this paper we describe a new dynamical model where the synapses of the associative memory can be adjusted even after the training phase as a response to an input stimulus. We provide some propositions that guarantee perfect and robust recall of the fundamental set of associations. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments aimed at showing the accuracy of the proposed model.
By
Ruiz-Vanoye, Jorge A.; Pérez-Ortega, Joaquín; Pazos Rangel, Rodolfo A.; Díaz-Parra, Ocotlán; Fraire-Huacuja, Héctor J.; Frausto-Solís, Juan; Reyes-Salgado, Gerardo; Cruz-Reyes, Laura
We propose the usage of formal languages for expressing instances of NP-complete problems for their application in polynomial transformations. The proposed approach, which consists of using formal language theory for polynomial transformations, is more robust, more practical, and faster to apply to real problems than the theory of polynomial transformations. In this paper we propose a methodology for transforming instances between NP-complete problems, which differs from Garey and Johnson's. Unlike most transformations, which are used for proving that a problem is NP-complete based on the NP-completeness of another problem, the proposed approach is intended for extrapolating some known characteristics, phenomena, or behaviors from a problem A to another problem B. This extrapolation could be useful for predicting the performance of an algorithm for solving B based on its known performance for problem A, or for taking an algorithm that solves A and adapting it to solve B.
By
Chakraborty, Debrup; Hernandez-Jimenez, Vicente; Sarkar, Palash
1 Citation
XCB is a tweakable enciphering scheme (TES) which was first proposed in 2004. The scheme was modified in 2007. We call these two versions of XCB XCBv1 and XCBv2, respectively. XCBv2 was later proposed as a standard for encryption of sector-oriented storage media in IEEE Std 1619.2-2010. There is no known proof of security for XCBv1, but the authors provided a concrete security bound for XCBv2 and a "proof" justifying the bound. In this paper we show that XCBv2 is not secure as a TES by showing an easy distinguishing attack on it. For XCBv2 to be secure, the message space should contain only messages whose lengths are multiples of the block length of the block cipher. Even for such restricted message spaces, the bound that the authors claim is not justified. We show this by pointing out some errors in the proof. For XCBv2 on full-block messages, we provide a new security analysis. The resulting bound that can be proved is much worse than what has been claimed by the authors. Further, we provide the first concrete security bound for XCBv1, which holds for all message lengths. In terms of known security bounds, both XCBv1 and XCBv2 are worse than existing alternative TESs.
By
Korzhik, Valery; Morales-Luna, Guillermo
1 Citation
We consider a cryptographic scenario where some center broadcasts a random binary string to Alice, Bob and Eve over binary symmetric channels with bit error probabilities ε_{A}, ε_{B} and ε_{E}, respectively. Alice and Bob share no secret key initially, and their goal is to generate, after public discussion, a common information-theoretically secure key facing an active eavesdropper Eve. Under the conditions ε_{A}<ε_{E} and ε_{B}<ε_{E}, code authentication (CA) can be used as part of a public discussion protocol to solve this problem. This authentication exploits parts of the substrings received by Alice and Bob from the broadcasting center as authenticators for messages transmitted in the public discussion. Unfortunately, it happens to be ineffective because it produces a key of small length. We propose a hybrid authentication (HA) that combines both keyless code authentication and key authentication based on an almost strongly universal class of hash functions. We prove a theorem that allows estimation of the performance of hybrid authentication. The selection algorithm for the main HA parameters, given security and reliability thresholds, is presented in detail.
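The key-based half of such a scheme typically relies on an almost-universal hash family; a standard polynomial-evaluation example over a prime field, given as an illustration rather than the exact family used in the paper:

```python
def poly_hash(key, blocks, p=2**61 - 1):
    """Polynomial hash over GF(p): h_k(m) = m_0 + m_1*k + ... + m_{n-1}*k^{n-1} mod p.
    Two distinct n-block messages collide for at most (n-1)/p of the keys,
    since their difference is a nonzero polynomial of degree at most n-1."""
    h = 0
    for b in reversed(blocks):      # Horner's rule, highest coefficient first
        h = (h * key + b) % p
    return h
```

The small collision probability over a random key is what bounds an active adversary's chance of substituting a message without detection.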
By
Flores-Garrido, Marisol; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.
7 Citations
Frequent graph mining algorithms commonly use graph isomorphism to identify occurrences of a given pattern, but in recent years a few works have focused on the case where a pattern may differ from its occurrences, which can be important for analyzing noisy data. These latter algorithms allow differences in labels and structural differences in edges but, to the best of our knowledge, none of them considers structural differences in vertices. How can we identify occurrences that differ by one (or several) nodes from the pattern they represent? Our work approaches the problem of frequent graph pattern mining with two main characteristics. First, we use inexact matching, allowing structural differences in both edges and vertices. Second, we focus on the problem of mining patterns in a single graph, a problem that has been less explored than the case in which patterns are mined from a graph collection. In this paper, we introduce two similarity functions to compare graphs using inexact matching, and an algorithm, AGraP, able to identify patterns that can have structural differences with respect to their occurrences. Our experimental results show that AGraP is able to find patterns that cannot be found by other state-of-the-art algorithms. Additionally, we show that the patterns mined by AGraP are useful in classification tasks.
By
Lima, Roberto; Martinez-Carranza, Jose; Morales-Reyes, Alicia; Cumplido, Rene
Binary descriptors have won their place as efficient and effective visual descriptors in several vision tasks. In this context, one of the most widely used binary descriptors to date is the ORB descriptor. ORB is robust against rotation changes, and it uses a learning procedure to generate the sampling pairwise tests that construct the descriptor. However, this construction involves sequential memory accesses with as many steps as the binary string has bits. Motivated by this, and by the fact that modern computer vision tasks may require the construction of thousands, if not millions, of binary descriptors, we propose to accelerate the construction process of the ORB descriptor via an FPGA-based hardware architecture. The architecture is leveraged with a novel arrangement of the pairwise tests, which takes advantage of a dual random-access memory scheme, achieving an acceleration of up to 17 times compared against the sequential approach. The empirical assessment indicates that ORB descriptors obtained from the proposed approach keep a performance similar to that of the original ORB.
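The core construction being accelerated is a string of pairwise intensity tests, one bit per test; a sketch of that sequential loop (in ORB the test-pair coordinates come from the offline learning step, and it is exactly these one-bit-at-a-time memory accesses that the hardware architecture rearranges):

```python
def binary_descriptor(patch, pairs):
    """BRIEF/ORB-style binary descriptor: for each test pair, emit 1 if the
    first sampled intensity is smaller than the second, else 0.
    `patch` is a 2D list of intensities; `pairs` are ((y1, x1), (y2, x2))."""
    bits = 0
    for (y1, x1), (y2, x2) in pairs:
        bits = (bits << 1) | (1 if patch[y1][x1] < patch[y2][x2] else 0)
    return bits
```

Matching two descriptors then reduces to a Hamming distance, e.g. `bin(d1 ^ d2).count('1')`, which is what makes binary descriptors so cheap to compare.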
By
Tchernykh, A.; Cristóbal-Salas, A.; Kober, V.; Ovseevich, I. A.
In this paper, a partial evaluation technique to reduce the communication costs of distributed image processing is presented. It combines the application of incomplete structures and partial evaluation together with classical program optimizations such as constant propagation, loop unrolling and dead-code elimination. Through a detailed performance analysis, we establish the conditions under which the technique is beneficial.
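A toy constant-propagation and folding pass over a tiny invented assignment language shows the flavor of the classical optimizations mentioned (the representation is an assumption for illustration, not the paper's intermediate form):

```python
def constant_propagate(stmts):
    """Propagate and fold constants through a list of (target, expr) assignments,
    where expr is an int literal, a variable name, or a ('+', a, b) tuple."""
    env = {}   # variables currently known to hold constants

    def value(e):
        if isinstance(e, int):
            return e
        if isinstance(e, str):
            return env.get(e, e)          # substitute if known, else keep name
        op, a, b = e
        a, b = value(a), value(b)
        if isinstance(a, int) and isinstance(b, int):
            return a + b                  # fold fully-constant additions
        return (op, a, b)

    out = []
    for target, expr in stmts:
        v = value(expr)
        if isinstance(v, int):
            env[target] = v
        out.append((target, v))
    return out
```

Assignments whose folded values are never read would then be candidates for the dead-code elimination step.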
By
Avila-George, Himer; Torres-Jimenez, Jose; Rangel-Valdez, Nelson; Carrión, Abel; Hernández, Vicente
7 Citations
Covering Arrays (CAs) are mathematical objects with minimal coverage and maximum cardinality that are a good tool for the design of experiments. A covering array is an N×k matrix over an alphabet of v symbols such that, given a positive integer value t, every N×t subarray contains each combination from {0,1,…,v−1}^{t} at least once. The process of ensuring that a CA contains each of the v^{t} combinations is called verification of the CA. In this paper, we present an algorithm for CA verification and its implementation details in three different computation paradigms: (a) a sequential approach (SA); (b) a parallel approach (PA); and (c) a Grid approach (GA). Four different PAs were compared on their performance in verifying a matrix as a CA; the PA with the best performance was included in a further experiment in which the three paradigms, SA, PA, and GA, were compared on a benchmark composed of 45 possible CA instances. The results show the limitations of the different paradigms when solving the CA verification problem, and point out the necessity of a Grid approach when the size of a CA grows.
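The sequential verification is a direct check of the definition; a minimal sketch:

```python
from itertools import combinations

def is_covering_array(matrix, v, t):
    """Verify a covering array of strength t: every choice of t columns must
    exhibit all v**t symbol combinations across the rows."""
    k = len(matrix[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in matrix}
        if len(seen) < v ** t:          # some t-tuple is missing
            return False
    return True
```

The cost grows as C(k, t) column subsets times N rows each, which is why the paper turns to parallel and Grid implementations for large instances.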
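The verification task described above reduces to checking every N×t column projection for full coverage. The following is a minimal brute-force sketch of that sequential check, not the authors' optimized implementation; the function name and the sample matrix are illustrative only:

```python
from itertools import combinations

def is_covering_array(matrix, v, t):
    """Check that every N x t column projection of `matrix` covers
    all v**t possible t-tuples over the alphabet {0, ..., v-1}."""
    if not matrix:
        return False
    k = len(matrix[0])
    for cols in combinations(range(k), t):
        # Collect the distinct t-tuples appearing in this projection.
        seen = {tuple(row[c] for c in cols) for row in matrix}
        if len(seen) < v ** t:        # some t-tuple never occurs
            return False
    return True

# CA(4; 2, 3, 2): four rows cover every pair of symbols in each
# of the three column pairs, so this matrix verifies as a CA.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
```

Each of the C(k, t) column projections must be scanned, which is exactly why the sequential approach stops scaling and parallel or Grid paradigms become necessary as N and k grow.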
By
FalcónCardona, Jesús Guillermo; Coello Coello, Carlos A.
8 Citations
In this paper, we propose a novel multi-objective ant colony optimizer (called iMOACO$_{\mathbb{R}}$) for continuous search spaces, which is based on ACO$_{\mathbb{R}}$ and the R2 performance indicator. iMOACO$_{\mathbb{R}}$ is the first multi-objective ant colony optimizer (MOACO) specifically designed to tackle continuous many-objective optimization problems (i.e., multi-objective optimization problems having four or more objectives). Our proposed iMOACO$_{\mathbb{R}}$ is compared to three state-of-the-art multi-objective evolutionary algorithms (NSGA-III, MOEA/D and SMS-EMOA) and a MOACO algorithm called MOACO$_{\mathbb{R}}$ using standard test problems and performance indicators taken from the specialized literature. Our experimental results indicate that iMOACO$_{\mathbb{R}}$ is very competitive with respect to NSGA-III and MOEA/D, and is able to outperform SMS-EMOA and MOACO$_{\mathbb{R}}$ in most of the test problems adopted.
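The R2 performance indicator on which iMOACO$_{\mathbb{R}}$ is based can be sketched in its common unary form, assuming weighted Tchebycheff utilities and a known ideal point; the function name and the toy front below are hypothetical, not taken from the paper:

```python
import numpy as np

def r2_indicator(front, weights, ideal):
    """Unary R2 indicator: the average, over weight vectors, of the best
    (minimal) weighted Tchebycheff utility achieved by the front.
    Smaller values indicate a better approximation set."""
    front = np.asarray(front, dtype=float)
    diffs = np.abs(front - ideal)                 # |a_j - z*_j| per point
    # utilities[w, a] = max_j weights[w, j] * diffs[a, j]
    utilities = np.max(weights[:, None, :] * diffs[None, :, :], axis=2)
    return float(np.mean(np.min(utilities, axis=1)))

# Three weight vectors for a bi-objective problem and a tiny front.
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
ideal = np.array([0.0, 0.0])
front_a = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
```

Smaller R2 values indicate a better approximation of the Pareto front with respect to the chosen weight vectors, which is what makes the indicator usable as a selection criterion inside a many-objective optimizer.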
By
Jiménez, Samantha; JuárezRamírez, Reyes; Castillo, Víctor H.; RamírezNoriega, Alan
Affectivity influences learning in face-to-face environments and improves some aspects in students, such as motivation. For that reason, it is important to integrate affectivity elements into virtual environments. We propose a conceptual model that suggests which elements of tutor, student and dialogue should be integrated and implemented into learning systems. We design an ontology guided by methontology, and apply a mathematical evaluation (OntoQA) to determine the richness of the proposed model. The mathematical evaluation states that the proposed model has relationship richness and a horizontal nature. We developed a software application implementing the conceptual model in order to prove its effectiveness at generating student motivation. The findings suggest that the implemented affective learning ontology positively impacts motivation in students with low academic performance, in female students, and in engineering students.
By
Escalante, Hugo Jair; Montes, Manuel; Sucar, Enrique
9 Citations
This paper introduces two novel strategies for representing multimodal images, with application to multimedia image retrieval. We consider images that are composed of both text and labels: while text describes the image content at a very high semantic level (e.g., making reference to places, dates or events), labels provide a mid-level description of the image (i.e., in terms of the objects that can be seen in it). Accordingly, the main assumption of this work is that by combining information from text and labels we can develop very effective retrieval methods. We study standard information fusion techniques for combining both sources of information. However, whereas the performance of such techniques is highly competitive, they cannot capture the content of images effectively. Therefore, we propose two novel representations for multimodal images that attempt to exploit the semantic cohesion among terms from different modalities. These representations are based on distributional term representations widely used in computational linguistics. Under the considered representations, the content of an image is modeled by a distribution of co-occurrences over terms, or of occurrences over other images, in such a way that the representation can be considered an expansion of the multimodal terms in the image. We report experimental results using the SAIAPR TC12 benchmark on two sets of topics used in ImageCLEF competitions, with manually and automatically generated labels. Experimental results show that the proposed representations significantly outperform both standard multimodal techniques and unimodal methods. Results on manually assigned labels provide an upper bound on the attainable retrieval performance, whereas results with automatically generated labels are encouraging. The novel representations are able to capture the content of multimodal images more effectively.
We emphasize that although we have applied our representations to multimedia image retrieval, the same formulation can be adopted for modeling other multimodal documents (e.g., videos).
By
Alba, Alfonso; ArceSantana, Edgar; AguilarPonce, Ruth M.; CamposDelgado, Daniel U.
5 Citations
In computer vision and video encoding applications, one of the first and most important steps is to establish a pixel-to-pixel correspondence between two images of the same scene obtained at slightly different times or points of view. One of the most popular methods to find these correspondences, known as Area Matching, consists in performing a computationally intensive search, for each pixel in the first image, around a neighborhood of the same pixel in the second image. In this work we propose a method which significantly reduces the search space to only a few candidates, and permits the implementation of real-time vision and video encoding algorithms which do not require specialized hardware such as GPUs or FPGAs. Theoretical and experimental support for this method is provided. Specifically, we present results from applying the method to real-time video compression and transmission, as well as real-time estimation of dense optical flow and stereo disparity maps, where a basic implementation achieves up to 100 fps on a typical dual-core PC.
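A baseline for the Area Matching search that the paper accelerates can be sketched as an exhaustive sum-of-absolute-differences (SAD) scan over a displacement window. This is the generic textbook formulation, not the authors' reduced-candidate method, and all names are illustrative:

```python
import numpy as np

def best_match(ref, target, y, x, block=3, radius=2):
    """Exhaustive area matching: find the displacement (dy, dx) that
    minimizes the sum of absolute differences (SAD) between the block
    centered at (y, x) in `ref` and candidate blocks in `target`."""
    h = block // 2
    patch = ref[y - h:y + h + 1, x - h:x + h + 1].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            cand = target[yy - h:yy + h + 1, xx - h:xx + h + 1].astype(int)
            if cand.shape != patch.shape:
                continue                  # candidate falls off the image
            sad = np.abs(patch - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

The cost is one full block comparison per candidate displacement per pixel, which is exactly the quadratic search the proposed method prunes down to a few candidates.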
By
Berkemer, Sarah J.; Chaves, Ricardo R. C.; Fritz, Adrian; Hellmuth, Marc; HernandezRosales, Maribel; Stadler, Peter F.
Spiders are arthropods that can be distinguished from their closest relatives, the insects, by counting their legs: spiders have eight, insects just six. Spider graphs are a very restricted class of graphs that naturally appear in the context of cograph editing. The vertex set of a spider (or its complement) is naturally partitioned into a clique (the body), an independent set (the legs), and a remainder (serving as the head). Here we show that spiders can be recognized directly from their degree sequences through the number of their legs (vertices of degree 1). Furthermore, we completely characterize the degree sequences of spiders.
By
Borowsky, E.; Gafni, E.; Lynch, N.; Rajsbaum, S.
53 Citations
Abstract.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free "simulate" a larger system of n processes that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity, expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write shared memory systems is also presented.
By
Trujillo, Alejandra Guadalupe Silva; Orozco, Ana Lucila Sandoval; Villalba, Luis Javier García; Kim, TaiHoon
The development of digital media, the increasing use of social networks, and easier access to modern technological devices are affecting thousands of people in their public and private lives. People love posting their personal news without considering the risks involved. Privacy has never been more important. Research on privacy-enhancing technologies has attracted considerable international attention after the recent news about users' personal data protection on social media websites like Facebook. It has been demonstrated that even when using an anonymous communication system, it is possible to reveal users' identities through intersection attacks or traffic analysis attacks. By combining a traffic analysis attack with Social Network Analysis (SNA) techniques, an adversary can obtain important data from the whole network, such as the topological network structure and subsets of social data, revealing communities and their interactions. The aim of this work is to demonstrate how intersection attacks can disclose structural properties and significant details from an anonymous social network composed of a university community.
By
LópezMonroy, A. Pastor; MontesyGómez, Manuel; Escalante, Hugo Jair; González, Fabio A.
The Bag-of-Visual-Words (BoVW) representation is a well-known strategy for approaching many computer vision problems. The idea behind BoVW is similar to the Bag-of-Words (BoW) representation used in text mining tasks: to build word histograms to represent documents. In computer vision, most research has been devoted to obtaining better visual words rather than to improving the final representation. This is somewhat surprising, as there are many alternative ways of improving the BoW representation within the text mining community that can be applied in computer vision as well. This paper aims at evaluating the usefulness of Distributional Term Representations (DTRs) for image classification. DTRs represent instances by exploiting statistics of feature occurrences and co-occurrences along the dataset. We focus on the suitability and effectiveness of using well-known DTRs in different image collections. Furthermore, we devise two novel distributional strategies that learn appropriate groups of images to compute better-suited distributional features. We report experimental results on several image datasets showing the effectiveness of the proposed DTRs over BoVW and other methods in the literature, including deep learning based strategies. In particular, we show the effectiveness of the proposed representations on image collections from narrow domains, where target categories are subclasses of a more general class (e.g., subclasses of birds, aircraft, or dogs).
By
Osorio, Mauricio; Jayaraman, Bharat
7 Citations
Set-grouping and aggregation are powerful operations of practical interest in database query languages. An aggregate operation is a function that maps a set to some value, e.g., the maximum or minimum of the set, its cardinality, the summation of all its members, etc. Since aggregate operations are typically non-monotonic in nature, recursive programs making use of aggregate operations must be suitably restricted in order that they have a well-defined meaning. In a recent paper we showed that partial-order clauses provide a well-structured means of formulating aggregate operations with recursion. In this paper, we consider the problem of expressing partial-order programs via negation-as-failure (NF), a well-known non-monotonic operation in logic programming. We show a natural translation of partial-order programs to normal logic programs: any cost-monotonic partial-order program P is translated to a stratified normal program such that the declarative semantics of P is defined as the stratified semantics of the translated program. The ability to effect such a translation is significant because the resulting normal programs do not make any explicit use of the aggregation capability, yet they are concise and intuitive. The success of this translation is due to the fact that the translated program is a stratified normal program. That would not be the case for other, more general classes of programs than cost-monotonic partial-order programs. We therefore develop in stages a refined translation scheme that does not require the translated programs to be stratified, but requires the use of a suitable semantics. The class of normal programs originating from this refined translation scheme is itself interesting: every program in this class has a clear intended total model, although these programs are in general neither stratified nor call-consistent, and do not have a stable model.
The partial model given by the well-founded semantics is consistent with the intended total model, and the extended well-founded semantics, WFS^{+}, defines the intended model. Since there is a well-defined and efficient operational semantics for partial-order programs [14, 15, 21], we conclude that the gap between the expression of a problem and computing its solution can be reduced with the right level of notation.
By
MartíFarré, Jaume; Padró, Carles; Vázquez, Leonor
4 Citations
The complexity of a secret sharing scheme is defined as the ratio between the maximum length of the shares and the length of the secret. This paper deals with the open problem of optimizing this parameter for secret sharing schemes with general access structures. Specifically, our objective is to determine the optimal complexity of the access structures with exactly four minimal qualified subsets. Lower bounds on the optimal complexity are obtained by using the known polymatroid technique in combination with linear programming. Upper bounds are derived from decomposition constructions of linear secret sharing schemes. In this way, the exact value of the optimal complexity is determined for several access structures in that family. For the other ones, we present the best known lower and upper bounds.
By
Ranjan, D.; Pontelli, E.; Gupta, G.; Longpre, L.
6 Citations
Abstract.
In this paper we analyze the complexity of the Temporal Precedence Problem on pointer machines. The problem is to support efficiently two operations: insert and precedes. The operation insert(a) introduces a new element a, while precedes(a,b) returns true iff element a was temporally inserted before element b. We provide a solution to the problem with worst-case time complexity O(lg lg n) per operation, where n is the number of elements inserted. We also demonstrate that the problem has a lower bound of Ω(lg lg n) on pointer machines. Thus the proposed scheme is optimal on pointer machines.
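On a RAM with arrays and hashing (a stronger model than the pointer machines analyzed above), the abstract data type is trivial: timestamp each insertion and compare stamps. The sketch below is only meant to pin down the insert/precedes semantics; it does not reflect the paper's O(lg lg n) pointer-machine construction, and the class name is illustrative:

```python
class TemporalPrecedence:
    """Naive RAM implementation of the temporal precedence ADT:
    a global counter timestamps each insertion, so `precedes` is a
    single comparison. The Theta(lg lg n) bound discussed above
    concerns the weaker pointer-machine model, where constant-time
    arrays and hashing are not available."""

    def __init__(self):
        self._stamp = {}
        self._next = 0

    def insert(self, a):
        if a not in self._stamp:      # first insertion fixes a's time
            self._stamp[a] = self._next
            self._next += 1

    def precedes(self, a, b):
        # True iff `a` was inserted strictly before `b`.
        return self._stamp[a] < self._stamp[b]
```

The contrast between this O(1)-per-operation RAM sketch and the Θ(lg lg n) pointer-machine bound is precisely what makes the model separation in the paper interesting.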
By
Lara López, Graciela; Peña Pérez Negrón, Adriana; Antonio Jiménez, Angélica; Ramírez Rodríguez, Jaime; Imbert Paredes, Ricardo
5 Citations
One of the basic characteristics of an object is its shape. Several research areas in mathematics and computer science have taken an interest in object representation in both 2D images and 3D models, where shape descriptors are a powerful mechanism enabling the processes of classification, retrieval and comparison for object matching. In this paper, we present a literature survey of this broad field, including a comparative analysis based on the above shape descriptor processes. In view of their significance, we identified the shape descriptors implemented using the concept of visual salience. This paper gives an overview of this topic.
By
GarcíaBorroto, Milton; MartínezTrinidad, José Fco.; CarrascoOchoa, Jesús Ariel
14 Citations
Obtaining an accurate class prediction for a query object is an important component of supervised classification. However, it can also be important to understand the classification in terms of the application domain, especially if the prediction disagrees with the expected results. Many accurate classifiers are unable to explain their classification results in terms understandable by an application expert. Classifiers based on emerging patterns, on the other hand, are accurate and easy to understand. The goal of this article is to review the state-of-the-art methods for mining emerging patterns, classify them by different taxonomies, and identify new trends. In this survey, we present the most important emerging pattern miners, categorizing them on the basis of the mining paradigm, the use of discretization, and the stage where the mining occurs. We provide detailed descriptions of the mining paradigms with their pros and cons, which helps researchers and users select the appropriate algorithm for a given application.
By
MenendezOrtiz, Alejandra; FeregrinoUribe, Claudia; GarciaHernandez, Jose Juan; GuzmanZavaleta, Zobeida Jezabel
1 Citations
Self-recovery schemes have been proposed for images and videos; for audio, however, existing schemes are intended to authenticate the contents or to localize tampering, so self-recovery for audio is still an open problem. This work presents a functional self-recovery scheme for audio that uses the intDCT domain for embedding and extraction of the control bits required to restore segments of an audio signal attacked with content replacement. Results obtained with the scheme are promising with regard to the quality of the watermarked signals; the scheme can restore signals attacked up to 0.6 % with acceptable quality. Further efforts should improve the restoration capabilities of the scheme.
By
RodríguezGonzález, Ansel Yoan; MartínezTrinidad, José Francisco; CarrascoOchoa, Jesús Ariel; RuizShulcloper, José
9 Citations
Most current algorithms for mining frequent patterns assume that two object sub-descriptions are similar if they are equal, but in many real-world problems other ways of evaluating similarity are used. Recently, three algorithms (ObjectMiner, STreeDCMiner and STreeNDCMiner) for mining frequent patterns allowing similarity functions different from equality have been proposed. For searching frequent patterns, ObjectMiner and STreeDCMiner use a pruning property called the Downward Closure property, which must hold for the similarity function. For similarity functions that do not meet this property, the STreeNDCMiner algorithm was proposed. However, for searching frequent patterns, this algorithm explores all subsets of features, which can be very expensive. In this work, we propose a frequent similar pattern mining algorithm for similarity functions that do not meet the Downward Closure property; it is faster than STreeNDCMiner and loses fewer frequent similar patterns than ObjectMiner and STreeDCMiner. We also show the quality of the set of frequent similar patterns computed by our algorithm with respect to that computed by the other algorithms, in a supervised classification context.
By
BayroCorrochano, Eduardo; Trujillo, Noel; Naranjo, Michel
28 Citations
This paper presents an application of the quaternion Fourier transform to preprocessing for neural computing. In a new way, the 1D acoustic signals of French spoken words are represented as 2D signals in the frequency and time domains. These images are then convolved in the quaternion Fourier domain with a quaternion Gabor filter for the extraction of features. This approach allows a great reduction in the dimension of the feature vector. Two methods of feature extraction are tested. The feature vectors were used for the training of a simple MLP, a TDNN, and a system of neural experts. The improvements in the classification rate of the neural network classifiers are very encouraging, which amply justifies the preprocessing in the quaternion frequency domain. This work also suggests the application of the quaternion Fourier transform to other image processing tasks.
By
Aguirre, Arturo Hernández; Coello Coello, Carlos A.
2 Citations
In this paper, we propose the use of Information Theory as the basis for designing a fitness function for Boolean circuit design using Genetic Programming. Boolean functions are implemented by replicating binary multiplexers. Entropy-based measures, such as Mutual Information and Normalized Mutual Information, are investigated as tools for similarity measures between the target and evolving circuit. Three fitness functions are built over a primitive one. We show that the landscape of Normalized Mutual Information is more amenable for use as a fitness function than simple Mutual Information. The evolutionary synthesized circuits are compared to the known optimum size. A discussion of the potential of the information-theoretical approach is given.
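The entropy-based measures mentioned above can be sketched as follows, computed between a target truth-table column and an evolved circuit's outputs. The normalization by the mean of the marginal entropies is one common choice and is an assumption here, not necessarily the paper's exact definition; function names are illustrative:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits of a sequence of symbols."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two equal-length
    sequences, e.g. target and evolved truth-table output columns."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def normalized_mi(xs, ys):
    """Normalized mutual information, here divided by the mean of the
    two marginal entropies (one of several common normalizations)."""
    denom = (entropy(xs) + entropy(ys)) / 2
    return mutual_information(xs, ys) / denom if denom else 0.0
```

With this measure, an evolved circuit whose output column matches the target exactly scores NMI = 1, while an output statistically independent of the target scores 0, which is the gradient a fitness function built on these quantities exploits.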
By
ExcelenteToledo, Cora Beatriz; Jennings, Nicholas R.
28 Citations
This paper presents and evaluates a decision making framework that enables autonomous agents to dynamically select the mechanism they employ in order to coordinate their interrelated activities. Adopting this framework means coordination mechanisms move from the realm of something that is imposed upon the system at design time, to something that the agents select at runtime in order to fit their prevailing circumstances and their current coordination needs. Using this framework, agents make informed choices about when and how to coordinate and when to respond to requests for coordination. The framework is empirically evaluated, in a grid world scenario, and we highlight those types of environments in which it is effective.
By
OlveraLópez, J. Arturo; CarrascoOchoa, J. Ariel; MartínezTrinidad, J. Francisco; Kittler, Josef
119 Citations
In supervised learning, a training set providing previously known information is used to classify new instances. Commonly, many instances are stored in the training set, but some of them are not useful for classification; therefore it is possible to obtain acceptable classification rates while ignoring non-useful cases. This process is known as instance selection. Through instance selection, the training set is reduced, which allows reducing runtimes in the classification and/or training stages of classifiers. This work is focused on presenting a survey of the main instance selection methods reported in the literature.
By
LópezFranco, C.; BayroCorrochano, E.
2 Citations
In this paper, we show how to use the conformal geometric algebra (CGA) as a framework to model the different catadioptric systems using the unified model (UM). This framework is well suited since it can not only represent points, lines and planes, but also point pairs, circles and spheres (geometric objects needed in the UM). We define our model using the great expressive capabilities of the CGA in a more general and simpler way, which allows an easier implementation in more complex applications. On the other hand, we also show how to recover the projective invariants from a catadioptric image using the inverse projection of the UM. Finally, we present applications in navigation and object recognition.
By
Olague, Gustavo; Hernández, Daniel E.; Llamas, Paul; Clemente, Eddie; Briseño, José L.
This work describes the use of brain programming for automating the video tracking design process. The challenge is that of creating visual programs that learn to detect a toy dinosaur from a database while being tested in a visual-tracking scenario. When planning an object tracking system, two subtasks need to be approached: detection of moving objects in each frame, and correct association of detections to the same object over time. Visual attention is a skill performed by the brain whose function is to perceive salient visual features. The automatic design of visual attention programs through an optimization paradigm is applied to the detection-based tracking of objects in a video from a moving camera. A system based on the acquisition and integration steps of the natural dorsal stream was engineered to emulate its selectivity and goal-driven behavior, useful for the task of tracking objects. This is considered a challenging problem since many difficulties can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, non-rigid structures, object-to-object and object-to-scene occlusions, as well as camera motion, models, and parameters. Tracking relies on the quality of the detection process, and automatically designing this stage could significantly improve tracking methods. Experimental results confirm the validity of our approach using three different kinds of robotic systems. Moreover, a comparison with the method of regions with convolutional neural networks is provided to illustrate the benefit of the approach.
By
LanchoBarrantes, Bárbara S.; CantúOrtiz, Francisco J.
Throughout the scientific literature we can find several studies analyzing the scientific production of Latin American countries, such as Argentina, Brazil, Colombia, Cuba, Guatemala, Peru, and Venezuela. There are many papers focusing on scientific disciplines, institutions and journals from these countries. However, to the best of our knowledge, we have not found any article that analyzes the scientific production of Mexico, global or recent, with Scopus as the specific database, nor its production in collaboration with its strategic countries in science and technology. For this reason, the present work intends to give Mexico the prominence it deserves by studying its research productivity using a bibliometric approach. To perform this study, the international bibliographic database Scopus was used within a ten-year publication window, from 2007 to 2016. With this sample, we analyzed production in general, production by scientific discipline, and production in collaboration with Mexico's strategic countries in science and technology, without forgetting the citations received, taken from SciVal as a parameter of research impact. This study aims to serve as a precedent for later studies, to contribute a reference on Mexican production to the scientific community, and to serve as a tool for the elaboration of national public policy in science and technology.
By
Mostefaoui, Achour; Rajsbaum, Sergio; Raynal, Michel
13 Citations
The condition-based approach studies restrictions on the inputs to a distributed problem, called conditions, that facilitate its solution. Previous work considered mostly the asynchronous model of computation. This paper studies conditions for consensus in a synchronous system where processes can fail by crashing. It describes a full classification of conditions for consensus, establishing a continuum between the asynchronous and synchronous models, with the following hierarchy: $\mathcal{S}_t^{[-t]} \subset \cdots \subset \mathcal{S}_t^{[0]} \subset \cdots \subset \mathcal{S}_t^{[t]}$, where $\mathcal{S}_t^{[t]}$ includes all conditions (and in particular the trivial one made up of all possible input vectors). For a condition $C \in \mathcal{S}_t^{[d]}$, $-t \leq d \leq t$, we have:
For values of $d \leq 0$ consensus is solvable in an asynchronous system with t failures, and we obtain the known hierarchy of conditions that allows solving asynchronous consensus with more and more efficient protocols as we go from d = 0 to d = −t.
For values of $d > 0$ consensus is not solvable in an asynchronous system with t failures, but we obtain a hierarchy of conditions that allows solving synchronous consensus with protocols that can take more and more rounds, as we go from d = 0 to d = t.
d = 0 is the borderline case where consensus can be solved in an asynchronous system with t failures, and can be solved optimally in a synchronous system.
After having established the complete hierarchy, the paper concentrates on the two last items: $0 \leq d \leq t$. The main result is that the necessary and sufficient number of rounds needed to solve uniform consensus for a condition $C \in \mathcal{S}_t^{[d]}$ (such that $C \notin \mathcal{S}_t^{[d-1]}$) is d + 1.
In more detail, the paper presents a generic synchronous early-deciding uniform consensus protocol that enjoys the following properties. Let f be the number of actual crashes, I the input vector, and $C \in \mathcal{S}_t^{[d]}$ the condition the protocol is instantiated with. The protocol terminates in two rounds when $I \in C$ and $f \leq t-d$, and in at most d + 1 rounds when $I \in C$ and $f > t-d$. (It also terminates in one round when $I \in C$ and $d = f = 0$.) Moreover, whether or not I belongs to C, no process requires more than $\min(t+1, f+2)$ rounds to decide. The paper then proves a corresponding lower bound stating that at least d + 1 rounds are necessary to get a decision in the worst case when $I \in C$ (for $C \in \mathcal{S}_t^{[d]}$ and $C \notin \mathcal{S}_t^{[d-1]}$).
By
CastilloBarrera, FranciscoEdgar; DuránLimón, Héctor A.; MédinaRamírez, Carolina; RodriguezRocha, Beatriz
4 Citations
Different methods and methodologies have been developed for building ontologies. However, none of these methods considers building an ontology with the characteristics of an electronic document management system (EDMS), nor do they define the basic classes with which to begin the ontology. In this paper we propose a method, called "OntoDocMan", to build an ontology-based EDMS which captures the knowledge associated with a company's processes related to a quality standard. We leverage the use of ontology tools, such as Protégé, to obtain the functionality of an EDMS. Our method complements and details the OnToKnowledge methodology during its ontology development phase. The essential point of our method is to make it easy to develop an EDMS for quality standards by means of an ontology-based system. The OntoDocMan method is illustrated by a case study in which we develop an ontology based on the ISO/TS 16949 Technical Specification (TS), and we show that the resulting ontology system is friendly and as easy to use as any EDMS. Unlike common EDMSs, our ontology-based EDMS is developed without any programming. In addition, we have found that this kind of ontology-based EDMS is an excellent tool for helping auditors search for and validate information, and that new employees can use it to learn about the company's processes.
By
Reyes, Leo; BayroCorrochano, Eduardo
1 Citations
In this paper, we compare various methods for the simultaneous and sequential reconstruction of points, lines, planes, quadrics, plane conics and degenerate quadrics using Bundle Adjustment, in both projective and metric space. In contrast, most existing work on projective reconstruction focuses mainly on one type of primitive. We also compare the simultaneous refinement of all primitives through Bundle Adjustment with various sequential methods where only certain primitives are refined together. We found that even though the sequential methods may seem somewhat arbitrary in the choice of which primitives are refined together, higher precision and speed are achieved in most cases.
By
Behrisch, Mike; VargasGarcía, Edith; Zhuk, Dmitriy
We consider finitary relations (also known as crosses) that are definable via finite disjunctions of unary relations, i.e. subsets, taken from a fixed finite parameter set Γ. We prove that whenever Γ contains at least one nonempty relation distinct from the full carrier set, there is a countably infinite number of polymorphism clones determined by relations that are disjunctively definable from Γ. Finally, we extend our result to finitely related polymorphism clones and countably infinite sets Γ. These results address an open problem raised in Creignou, N., et al. Theory Comput. Syst. 42(2), 239–255 (2008), which is connected to the complexity analysis of the satisfiability problem of certain multiplevalued logics studied in Hähnle, R. Proc. 31st ISMVL 2001, 137–146 (2001).
By
Fraigniaud, Pierre; Rajsbaum, Sergio; Travers, Corentin
13 Citations
This paper studies notions of locality that are inherent to the specification of distributed tasks by identifying fundamental relationships between the various scales of computation, from the individual process to the whole system. A locality property called projection-closed is identified. This property completely characterizes tasks that are wait-free checkable, where a task $T=(\mathcal{I},\mathcal{O},\varDelta)$ is said to be checkable if there exists a distributed algorithm that, given $s\in\mathcal{I}$ and $t\in\mathcal{O}$, determines whether $t\in\varDelta(s)$, i.e., whether $t$ is a valid output for $s$ according to the specification of $T$. Projection-closed tasks are proved to form a rich class of tasks. In particular, determining whether a projection-closed task is wait-free solvable is shown to be undecidable. A stronger notion of locality is identified by considering tasks whose outputs "look identical" to the inputs at every process: a task $T=(\mathcal{I},\mathcal{O},\varDelta)$ is said to be locality-preserving if $\mathcal{O}$ is a covering complex of $\mathcal{I}$. We show that this topological property yields obstacles for wait-free solvability different in nature from the classical impossibility results. On the other hand, locality-preserving tasks are projection-closed, and thus they are wait-free checkable. A classification of locality-preserving tasks in terms of their relative computational power is provided. This is achieved by defining a correspondence between subgroups of the edge-path group of an input complex and locality-preserving tasks. This correspondence enables us to demonstrate the existence of hierarchies of locality-preserving tasks, each one containing, at the top, the universal task (induced by the universal covering complex), and, at the bottom, the trivial identity task.
By
AvilésLópez, Edgardo; GarcíaMacías, J Antonio
53 Citations
Wireless sensor networks provide the means for gathering vast amounts of data from physical phenomena, and as such they are being used for applications such as precision agriculture, habitat monitoring, and others. However, there is a need to provide higher-level abstractions for the development of applications, since accessing the data from wireless sensor networks currently implies dealing with very low-level constructs. We propose TinySOA, a service-oriented architecture that allows programmers to access wireless sensor networks from their applications by using a simple service-oriented API via the language of their choice. We show an implementation of TinySOA and the results of an experiment where programmers developed an application that exemplifies how easily Internet applications can integrate sensor networks.
By
Fausto, Fernando; ReynaOrta, Adolfo; Cuevas, Erik; Andrade, Ángel G.; PerezCisneros, Marco
Nature-inspired metaheuristics comprise a compelling family of optimization techniques. These algorithms are designed with the idea of emulating some kind of natural phenomenon (such as the theory of evolution, the collective behavior of groups of animals, the laws of physics, or the behavior and lifestyle of human beings) and applying it to solve complex problems. Nature-inspired methods have taken the area of mathematical optimization by storm. In the last few years alone, literature related to the development of this kind of technique and its applications has experienced an unprecedented increase, with hundreds of new papers being published every single year. In this paper, we analyze some of the most popular nature-inspired optimization methods currently reported in the literature, while also discussing their applications for solving real-world problems and their impact on the current literature. Furthermore, we open a discussion on several research gaps and areas of opportunity that are yet to be explored within this promising area of science.
By
OrtegónAguilar, Jaime; BayroCorrochano, Eduardo
This paper addresses the estimation of the parameters of 2D and 3D transformations. For the estimation we present a method based on system identification theory, which we name the "A-method". The transformations are considered as elements of the Lie group GL(n) or one of its subgroups, and are represented in terms of their Lie algebra elements. The Lie algebra approach ensures that the estimation follows the shortest path, or geodesic, in the involved Lie group. To show the potential of our method, two experiments are presented. The first is a monocular estimation of the 3D rigid motion of an object in the visual space: the six parameters of the rigid motion are estimated from measurements of the six parameters of the affine transformation in the image. Second, we present the estimation of the affine or projective transformations involved in monocular region tracking.
By
Graff, Mario; Escalante, Hugo Jair; OrnelasTellez, Fernando; Tellez, Eric S.
2 Citations
Genetic programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. There has been a lot of interest in using GP to tackle forecasting problems. Unfortunately, it is not clear whether GP can outperform traditional forecasting techniques such as autoregressive models. In this contribution, we present a comparison between standard GP systems and the autoregressive integrated moving average model and exponential smoothing. This comparison points out particular configurations of GP that are competitive against these forecasting techniques. In addition, we propose a novel technique to select a forecaster from a collection of predictions made by different GP systems. The results show that this selection scheme is competitive with traditional forecasting techniques and, in a number of cases, statistically better.
By
PortilloPortillo, Jose; Leyva, Roberto; Sanchez, Victor; SanchezPerez, Gabriel; PerezMeana, Hector; OlivaresMercado, Jesus; ToscanoMedina, Karina; NakanoMiyatake, Mariko
This paper proposes a view-invariant gait recognition algorithm, which builds a unique view-invariant model taking advantage of the dimensionality reduction provided by Direct Linear Discriminant Analysis (DLDA). The proposed scheme reduces the undersampling problem (USP), which usually appears when the number of training samples is much smaller than the dimension of the feature space. The approach uses Gait Energy Images (GEIs) and DLDA to create a view-invariant model that determines with high accuracy the identity of the person under analysis, independently of the incoming angle. Evaluation results show that the proposed scheme provides recognition performance that is largely independent of the view angle, and compares favourably with previously proposed gait recognition methods in terms of both computational complexity and recognition accuracy.
By
Coello, Carlos A. Coello; Cortés, Nareli Cruz
281 Citations
In this paper, we propose an algorithm based on the clonal selection principle to solve multiobjective optimization problems (either constrained or unconstrained). The proposed approach uses Pareto dominance and feasibility to identify solutions that deserve to be cloned, and uses two types of mutation: uniform mutation is applied to the clones produced, and non-uniform mutation is applied to the "not so good" antibodies (which are represented by binary strings that encode the decision variables of the problem to be solved). We also use a secondary (or external) population that stores the nondominated solutions found along the search process. This secondary population constitutes the elitist mechanism of our approach, and it allows the algorithm to move towards the true Pareto front of a problem over time. Our approach is compared with three other algorithms that are representative of the state of the art in evolutionary multiobjective optimization. For our comparative study, three metrics are adopted, and graphical comparisons with respect to the true Pareto front of each problem are also included. Results indicate that the proposed approach is a viable alternative for solving multiobjective optimization problems.
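The dominance test and external archive described above can be sketched compactly. A minimal illustration (function names are ours, not the paper's; minimization of all objectives is assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate into the external (secondary) population,
    keeping only mutually nondominated objective vectors."""
    if any(dominates(a, candidate) for a in archive):
        return archive                                  # candidate is dominated: reject
    kept = [a for a in archive if not dominates(candidate, a)]  # drop dominated members
    kept.append(candidate)
    return kept
```

In the full algorithm, solutions entering such an archive are also the ones selected for cloning; the sketch only covers the archive bookkeeping.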
By
DíazSantiago, Sandra; RodríguezHenríquez, Lil María; Chakraborty, Debrup
3 Citations
Payments through cards have become very popular in today's world. All businesses now have options to receive payments through this instrument; moreover, most organizations store card information of their customers in some way to enable easy payments in the future. Credit card data are very sensitive information, and theft of these data is a serious threat to any company. Any organization that stores credit card data needs to achieve payment card industry (PCI) compliance, which is an intricate process where the organization needs to demonstrate that the data it stores are safe. Recently, there has been a paradigm shift in the treatment of the problem of storage of payment card information. In this new paradigm, instead of the real credit card data a token is stored; this process is called "tokenization." The token "looks like" the credit/debit card number, but ideally has no relation to the credit card number that it represents. This solution relieves the merchant from the burden of PCI compliance in several ways. Though tokenization systems are heavily in use, to our knowledge a formal cryptographic study of this problem has not yet been done. In this paper, we initiate a study in this direction. We formally define the syntax of a tokenization system and several notions of security for such systems. Finally, we provide some constructions of tokenizers and analyze their security in light of our definitions.
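As a toy illustration of the tokenization paradigm only (not one of the constructions analyzed in the paper), a format-preserving token can be drawn at random and stored in a lookup table, so the merchant's database never holds the real card number:

```python
import secrets

class ToyTokenizer:
    """Toy card tokenizer: maps a PAN to a random same-format token and
    stores the pair in a vault (here, a dict). Illustrative only -- a
    real system encrypts the vault and authenticates every lookup."""
    def __init__(self):
        self._vault = {}    # token -> PAN
        self._issued = {}   # PAN -> token, so repeated requests reuse it

    def tokenize(self, pan: str) -> str:
        if pan in self._issued:
            return self._issued[pan]
        while True:
            # preserve the format: same length, digits only
            token = "".join(str(secrets.randbelow(10)) for _ in pan)
            if token not in self._vault and token != pan:
                break
        self._vault[token] = pan
        self._issued[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

The security notions formalized in the paper pin down exactly what "ideally has no relation" must mean for the token-generation step.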
By
Ramírez, José Luis; Juárez, Manuel; Remesal, Ana
4 Citations
The aim of this article is to present a distance e-learning experience of mathematics in higher education. The course is offered as a remedial program for master's degree students of Computer Science. It was designed to meet the particular needs of the students entering the master's degree program, as a response to the lack of understanding of logical language which was identified in several previous cohorts of students at CENIDET. The course addresses comprehensive functional use of logical language as a basic ability to be developed for later successful participation in the Master of Computer Science, and also for later use in professional contexts of Computer Engineering. Eighteen students distributed throughout Mexico volunteered to participate under the guidance of one instructor. The techno-pedagogical design of the course is grounded in two theoretical approaches. Content-related instructional decisions are supported by different concepts of the second generation of Activity Theory; the concept of Orienting Basis of an Action was particularly useful to define the skills the students were expected to develop. Instructional decisions related to the participants' interaction are underpinned by Slavin's Team Accelerated Instruction model. We present the course structure in detail and provide some student interaction excerpts in order to illustrate their learning progress.
By
Olague, Gustavo; Pérez, Cynthia B.; Fernández, Francisco; Lutton, Evelyne
This article presents an adaptive approach to improving the infection algorithm that we have used to solve the dense stereo matching problem. The algorithm presented here incorporates two different epidemic automata along a single execution of the infection algorithm. The new algorithm attempts to provide a general behavior of guessing the best correspondence between a pair of images. Our aim is to provide a new strategy inspired by evolutionary computation, which combines the behaviors of both automata into a single correspondence problem. The new algorithm will decide which automata will be used based on the transmission of information and mutation, as well as the attributes, texture, and geometry, of the input images. This article gives details about how the rules used in the infection algorithm are coded. Finally, we show experiments with a real stereo pair, as well as with a standard test bed, to show how the infection algorithm works.
By
GuzmanZavaleta, Zobeida Jezabel; FeregrinoUribe, Claudia; MoralesSandoval, Miguel; MenendezOrtiz, Alejandra
3 Citations
Video fingerprinting for content-based video identification is a very useful task for the management and monetization of copyrighted content distribution. The main challenges of monitoring and copy detection systems are: (a) the effective identification of highly transformed videos (robustness), and (b) computational efficiency, which may be relevant for some applications. Typically, most video fingerprinting methods focus on robustness, leaving aside computational efficiency. However, real-time applications, for instance illegal content monitoring in video streaming distribution, require detection methods with low computational cost. Therefore, in this paper we propose a low-cost and effective video fingerprint extraction method based on the combination of content-based features from both the acoustic and the visual video components. Our method is capable of detecting video copies by using computationally efficient fingerprints while maintaining robustness against quality degradation and content-preserving distortions, which are frequent but severe attacks.
By
TejadaCárcamo, Javier; Calvo, Hiram; Gelbukh, Alexander; Hara, Kazuo
3 Citations
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in the entire corpus. Their maximization algorithm allows weighted terms (similar words) from a distributional thesaurus to accumulate a score for each ambiguous word sense, i.e., the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Dekang Lin to obtain a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context where the ambiguous word occurs. Our method accounts for the context of a word when determining its sense by building the list of distributionally similar words based on the syntactic context of the ambiguous word. We obtain a top precision of 77.54% accuracy versus 67.10% for the original method tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
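The voting scheme described above can be sketched as follows; the normalisation and function names are our simplification, not the exact formulation of McCarthy et al.:

```python
def predominant_sense(senses, neighbours, sense_sim):
    """Rank senses of an ambiguous word by accumulated votes.

    senses     : list of sense identifiers
    neighbours : list of (word, distributional_weight) pairs from a thesaurus
    sense_sim  : function (sense, word) -> semantic similarity in [0, 1]

    Each thesaurus neighbour contributes its distributional weight,
    split across the senses in proportion to how similar the neighbour
    is to each sense. Returns the winning sense and the full score table.
    """
    scores = {s: 0.0 for s in senses}
    for word, weight in neighbours:
        sims = {s: sense_sim(s, word) for s in senses}
        total = sum(sims.values())
        if total == 0:
            continue                      # neighbour says nothing about any sense
        for s in senses:
            scores[s] += weight * sims[s] / total
    return max(scores, key=scores.get), scores
```

The contextualised variant proposed in the paper amounts to building a different `neighbours` list per occurrence, from the syntactic context, instead of one corpus-wide list.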
By
Quintana, Marcos I.; Poli, Riccardo; Claridge, Ela
17 Citations
This paper presents a Genetic Programming (GP) approach to the design of Mathematical Morphology (MM) algorithms for binary images. The algorithms are constructed using logic operators and the basic MM operators, i.e. erosion and dilation, with a variety of structuring elements. GP is used to evolve MM algorithms that convert a binary image into another containing just a particular feature of interest. In the study we have tested three fitness functions, training sets with different numbers of elements, training images of different sizes, and 7 different features in two different kinds of applications. The results obtained show that it is possible to evolve good MM algorithms using GP.
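Erosion and dilation, the building blocks the GP system composes, can be written in a few lines for binary images stored as 0/1 grids (a plain-Python sketch, not the evolved algorithms themselves):

```python
def dilate(img, se):
    """Binary dilation of img (list of 0/1 rows) by structuring element
    se, given as (row, col) offsets relative to the origin."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if img[r][c]:
                for dr, dc in se:          # stamp the SE on every foreground pixel
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr][cc] = 1
    return out

def erode(img, se):
    """Binary erosion: a pixel survives iff every SE offset lands on a
    foreground pixel (offsets falling outside the image count as background)."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = int(all(
                0 <= r + dr < rows and 0 <= c + dc < cols and img[r + dr][c + dc]
                for dr, dc in se))
    return out
```

An evolved program is then a composition such as `dilate(erode(img, se), se)` (an opening) combined with logic operators over intermediate images.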
By
GutierrezGarcia, J. Octavio; RamirezNafarrate, Adrian
15 Citations
Cloud data centers are generally composed of heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially different specifications and fluctuating resource usages. This may cause a resource usage imbalance within servers that may result in performance degradation and violations to service level agreements. This work proposes a collaborative agentbased problem solving technique capable of balancing workloads across commodity, heterogeneous servers by making use of VM live migration. The agents are endowed with (i) migration heuristics to determine which VMs should be migrated and their destination hosts, (ii) migration policies to decide when VMs should be migrated, (iii) VM acceptance policies to determine which VMs should be hosted, and (iv) frontend load balancing heuristics. The results show that agents, through autonomous and dynamic collaboration, can efficiently balance loads in a distributed manner outperforming centralized approaches with a performance comparable to commercial solutions, namely Red Hat, while migrating fewer VMs.
By
Aldaya, Ivan; Cafini, Raul; Cerroni, Walter; Raffaelli, Carla; Savi, Michele
5 Citations
A programmable optical router is a key enabler for dynamic service provisioning in Future Internet scenarios. It is equipped with optical switching hardware to forward information at rates of hundreds of Gigabits/s and above, controlled and managed through modular and flexible procedures according to emerging standards. The possibility to test such costly optical architectures in terms of logical and physical performance, without implementing complex and expensive testbeds, is crucial to speed up the development process of high-performance routers. To this purpose, this paper introduces a software-based emulation testbed of a programmable optical router, which is developed here and applied to test optical switching fabrics. Accurate characterization of the optical devices and physical-layer aspects is implemented with the Click software router environment. Power loss and optical signal-to-noise ratio evaluation are provided through accurate software representation of the physical characteristics of the optical devices employed. The scalability of the proposed emulation testbed is also assessed on standard PC hardware. All the obtained results prove the effectiveness of the proposed tool for emulating an optical router at different levels of granularity.
By
Fernau, Henning; Freund, Rudolf; Schmid, Markus L.; Subramanian, K. G.; Wiederhold, Petra
9 Citations
Contextual array grammars, with selectors not having empty cells, are considered. A P system model, called contextual array P system, that makes use of array objects and contextual array rules, is introduced and its generative power for the description of picture arrays is examined. A main result of the paper is that there is a proper infinite hierarchy with respect to the classes of languages described by contextual array P systems. Such a hierarchy holds as well in the case when the selector is also endowed with the #-sensing ability.
By
Tchernykh, Andrei; MirandaLópez, Vanessa; Babenko, Mikhail; ArmentaCano, Fermin; Radchenko, Gleb; Drozdov, Alexander Yu.; Avetisyan, Arutyun
Properties of redundant residue number systems (RRNS) are used for detecting and correcting errors during data storing, processing and transmission. However, detection and correction of a single error require significant decoding time due to the iterative calculations needed to locate the error. In this paper, we provide a performance evaluation of the Asmuth-Bloom and Mignotte secret sharing schemes with three different mechanisms for error detection and correction: Projection, Syndrome, and AR-RRNS. We consider the best-case scenario, when no error occurs, and the worst-case scenario, when error detection needs the longest time. When examining the overall coding/decoding performance based on real data, we show that the AR-RRNS method outperforms Projection and Syndrome by 68% and 52%, respectively, in the worst-case scenario.
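A minimal sketch of the Mignotte scheme underlying this comparison (parameter checks omitted; the error-detecting Projection/Syndrome/AR-RRNS machinery evaluated in the paper is not shown):

```python
from math import prod

def mignotte_shares(secret, moduli):
    """Mignotte (k, n) threshold sketch: share_i = secret mod m_i.
    Assumes the moduli are pairwise coprime and the secret lies in the
    authorised range (product of the k smallest moduli > secret >=
    product of the k-1 largest)."""
    return [(m, secret % m) for m in moduli]

def crt_reconstruct(shares):
    """Recover the secret from any k shares (m_i, s_i) via the
    Chinese Remainder Theorem."""
    M = prod(m for m, _ in shares)
    x = 0
    for m, s in shares:
        Mi = M // m
        x += s * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m (Python 3.8+)
        
    return x % M
```

With fewer than k shares the CRT residue no longer determines the secret, which is what makes the threshold work; redundant residues beyond k are what RRNS exploits for error detection.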
By
GarcíaBorroto, Milton; MartínezTrinidad, José Fco; CarrascoOchoa, Jesús Ariel
16 Citations
Emerging pattern-based classification is an ongoing branch of Pattern Recognition. However, despite its simplicity and accurate results, this classification includes an a priori discretization step that may degrade classification accuracy. In this paper, we introduce fuzzy emerging patterns as an extension of emerging patterns that deals with numerical attributes using fuzzy discretization. Based on fuzzy emerging patterns, we propose a new classifier that uses a novel graph organization of patterns. The new classifier outperforms some popular and state-of-the-art classifiers on several UCI repository databases. In a pairwise comparison, it significantly beats every other single classifier.
By
BayroCorrochano, Eduardo; RiveraRovelo, Jorge
9 Citations
We present a new approach to model 2D surfaces and 3D volumetric data, as well as an approach for non-rigid registration; both are developed in the geometric algebra framework. The modeling approach is based on the marching cubes idea, using however spheres and their representation in conformal geometric algebra; we call it marching spheres. Note that before we can proceed with the modeling, the object of interest must be segmented; therefore, we include an approach for image segmentation based on texture and border information, developed in a region-growing strategy. We compare the results obtained with our modeling approach against those obtained with an approach using Delaunay tetrahedrization, and our proposed approach considerably reduces the number of spheres. Afterward, a method for non-rigid registration of models based on spheres is presented. Registration is done in an annealing scheme, as in the Thin-Plate Spline Robust Point Matching (TPS-RPM) algorithm. As a final application of geometric algebra, we track in real time objects involved in surgical procedures.
By
Pellegrin, Luis; Escalante, Hugo Jair; MontesyGómez, Manuel; González, Fabio A.
Automatic Image Annotation (AIA) is the task of assigning keywords to images with the aim of describing their visual content. Recently, an unsupervised approach has been used to tackle this task. Unsupervised AIA (UAIA) methods use reference collections consisting of textual documents that contain images; the aim of a UAIA method is to extract words from the reference collection to be assigned to images. By using an unsupervised approach it is possible to include large vocabularies, because any word can be extracted from the reference collection. However, having a greater diversity of words for labeling entails dealing with a larger number of wrong annotations, due to the increasing difficulty of assigning a correct relevance to the labels. With this problem in mind, this paper presents a general strategy for UAIA methods that re-ranks assigned labels. The proposed method exploits the semantic-relatedness information among labels in order to assign them an appropriate relevance for describing images. Experimental results on different benchmark datasets show the flexibility of our method in dealing with assignments from free vocabularies, and its effectiveness in improving the initial annotation performance of different UAIA methods. Moreover, we found that (1) when considering the semantic-relatedness information among the assigned labels, the initial ranking provided by a UAIA method is improved in most cases; and (2) the proposed method is robust enough to be applied to different UAIA methods, which will allow extending the capabilities of state-of-the-art UAIA methods.
By
GuzmánCabrera, Rafael; MontesyGómez, Manuel; Rosso, Paolo; VillaseñorPineda, Luis
6 Citations
Most current methods for automatic text categorization are based on supervised learning techniques and, therefore, they face the problem of requiring a great number of training instances to construct an accurate classifier. In order to tackle this problem, this paper proposes a new semi-supervised method for text categorization, which considers the automatic extraction of unlabeled examples from the Web and the application of an enriched self-training approach for the construction of the classifier. This method, even though language independent, is most pertinent for scenarios where large sets of labeled resources do not exist. That, for instance, could be the case for several application domains in different non-English languages such as Spanish. The experimental evaluation of the method was carried out in three different tasks and in two different languages. The achieved results demonstrate the applicability and usefulness of the proposed method.
By
Kober, V.; Mozerov, M.; ÁlvarezBorrego, J.; Ovseyevich, I. A.
1 Citations
Two effective algorithms for the removal of impulse noise from color images are proposed. The algorithms consist of two steps. The first algorithm detects outliers with the help of spatial relations between the components of a color image. Next, the detected noise pixels are replaced with the output of a vector median filter over a local spatially connected area excluding the outliers, while noise-free pixels are left unaltered. The second algorithm transforms a color image to the YCbCr color space, which perfectly separates the intensity and color information. Then outliers are detected using spatial relations between the transformed image components. The detected noise pixels are replaced with the output of a modified vector median filter over a spatially connected area. Simulation results on test color images show a superior performance of the proposed algorithms compared with the conventional vector median filter. The comparisons are made using the mean square error, the mean absolute error, and a subjective human visual error criterion.
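The replacement step relies on the vector median: the window pixel minimising the total distance to all other pixels in the window. A sketch for RGB tuples (our illustration of the standard operator, not the paper's modified variant):

```python
def vector_median(pixels):
    """Vector median of a list of RGB tuples: the pixel that minimises
    the sum of Euclidean distances to all the others. Unlike a
    per-channel median, it always returns one of the input colours."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(pixels, key=lambda p: sum(dist(p, q) for q in pixels))
```

In the proposed algorithms this output replaces only the pixels flagged as outliers, and the local window itself excludes detected noise pixels.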
By
Gershenson, Carlos
This paper presents the Computing Networks (CNs) framework. CNs are used to generalize neural and swarm architectures. Artificial neural networks, ant colony optimization, particle swarm optimization, and realistic biological models are used as examples of instantiations of CNs. The description of these architectures as CNs allows their comparison. Their differences and similarities allow the identification of properties that enable neural and swarm architectures to perform complex computations and exhibit complex cognitive abilities. In this context, the most relevant characteristic of CNs is the existence of multiple dynamical and functional scales. The relationship of multiple dynamical and functional scales with adaptation, cognition (of brains and swarms) and computation is discussed.
By
Romero, Francisco P.; JuliánIranzo, Pascual; Soto, Andrés; FerreiraSatler, Mateus; GallardoCasero, Juan
7 Citations
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets, etc.). This has motivated an increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word frequency statistical analysis and have proved inadequate for the classification of short texts, where word occurrence is too small. On the other hand, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance; however, labeled documents might not be available when unlabeled documents can be easily collected. This paper presents an approach to text categorisation which does not need a pre-classified set of training documents. The proposed method only requires the category names as user input. Each of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency, but depends highly on the definition of each category and on how well the text fits that definition. Therefore, the proposed approach is an appropriate method for short text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process is based on the ability of an extension of the standard Prolog language, named Bousi~Prolog, for flexible matching and knowledge representation. This declarative approach provides a text classifier which is quick and easy to build, and a classification process which is easy for the user to understand. The results of experiments showed that the proposed method achieved a reasonably useful performance.
By
Sidorov, G.; Ibarra Romero, M.; Markov, I.; GuzmanCabrera, R.; ChanonaHernández, L.; Velásquez, F.
2 Citations
We present a method for measuring similarity between source codes. We approach this task from the machine learning perspective, using character and word n-grams as features and examining different machine learning algorithms. Furthermore, we explore the contribution of latent semantic analysis to this task. We developed a corpus in order to evaluate the proposed approach. The corpus consists of around 10,000 source codes written in the Karel programming language to solve 100 different tasks. The results show that the highest classification accuracy is achieved when using the Support Vector Machines classifier, applying latent semantic analysis, and selecting word trigrams as features.
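A character n-gram profile with a cosine measure, the kind of feature representation described above, fits in a few lines (our sketch; the paper feeds such features to classifiers rather than comparing profiles directly):

```python
from collections import Counter
from math import sqrt

def char_ngrams(code, n=3):
    """Character n-gram frequency profile of a source-code string."""
    return Counter(code[i:i + n] for i in range(len(code) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram profiles (Counters)."""
    dot = sum(c * q[g] for g, c in p.items())
    norm = sqrt(sum(c * c for c in p.values())) * sqrt(sum(c * c for c in q.values()))
    return dot / norm if norm else 0.0
```

Two Karel programs solving the same task tend to share many such n-grams even after superficial edits, which is what makes the representation useful for similarity detection.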
By
FuentesPacheco, Jorge; RuizAscencio, José; RendónMancha, Juan Manuel
138 Citations
Visual SLAM (simultaneous localization and mapping) refers to the problem of using images, as the only source of external information, in order to establish the position of a robot, a vehicle, or a moving camera in an environment, and at the same time, construct a representation of the explored zone. SLAM is an essential task for the autonomy of a robot. Nowadays, the problem of SLAM is considered solved when range sensors such as lasers or sonar are used to build 2D maps of small static environments. However, SLAM for dynamic, complex and large-scale environments, using vision as the sole external sensor, is an active area of research. The computer vision techniques employed in visual SLAM, such as detection, description and matching of salient features, image recognition and retrieval, among others, are still susceptible to improvement. The objective of this article is to provide new researchers in the field of visual SLAM a brief and comprehensible review of the state of the art.
By
MartínezDiaz, Saul; Kober, Vitaly; Ovseyevich, I. A.
Adaptive composite nonlinear filters for reliable illumination-invariant pattern recognition are proposed. Information about the objects to be recognized, false objects, and a background to be rejected is utilized in an iterative training procedure to design a nonlinear adaptive correlation filter with a given value of discrimination capability. During the recognition process, the designed filter adapts its parameters to the local statistics of the input image. Computer simulation results obtained with the proposed filters on nonuniformly illuminated test scenes are discussed and compared with those of linear composite correlation filters in terms of recognition performance.
more …
By
García, V.; Mollineda, R. A.; Sánchez, J. S.
74 Citations
A two-class data set is said to be imbalanced when one (minority) class is heavily under-represented with respect to the other (majority) class. In the presence of significant overlapping, the task of learning from imbalanced data can be a very difficult problem. Additionally, if the overall imbalance ratio differs from the local imbalance ratios in overlap regions, the task can become a major challenge. This paper explains the behaviour of the k-nearest neighbour (k-NN) rule when learning from such a complex scenario. This local model is compared to other machine learning algorithms, according to how their behaviour depends on a number of data complexity features (global imbalance, size of the overlap region, and its local imbalance). As a result, several conclusions useful for classifier design are inferred.
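A minimal sketch of the phenomenon the paper studies: in an overlap region dominated by the majority class, a k-NN vote outvotes a nearby minority example. The data points, k, and distances below are invented for illustration, not taken from the paper's experiments:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Overlap region: one minority point (label 1) surrounded by majority points (label 0).
train = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((0.1, 0.2), 0),
         ((0.15, 0.15), 1),            # lone minority example inside the overlap
         ((5.0, 5.0), 1), ((5.1, 5.2), 1)]

# A query right next to the minority point is still outvoted by the local majority.
pred = knn_predict(train, (0.14, 0.14), k=3)
```

Even though the nearest single neighbour is the minority point, the k=3 vote is locally imbalanced and returns the majority label.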
By
Aluru, Srinivas; Gustafson, John; Prabhu, G.M.; Sevilgen, Fatih E.
11 Citations
The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse-square law. Greengard's algorithm claims to compute the cumulative force on each particle in O(N) time for a fixed precision, irrespective of the distribution of the particles. In this paper, we show that Greengard's algorithm is distribution dependent and has a lower bound of Ω(N log² N) in two dimensions and Ω(N log⁴ N) in three dimensions. We analyze the Greengard and Barnes–Hut algorithms and show that their running times are unbounded for arbitrary distributions. We also present a truly distribution-independent algorithm for the N-body problem that runs in O(N log N) time for any fixed dimension.
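For context, the direct O(N²) pairwise summation that hierarchical methods such as Greengard's and Barnes–Hut approximate can be sketched as follows (a toy 2D example with unit constants, not the paper's algorithm):

```python
import math

def direct_forces(positions, masses, G=1.0):
    """O(N^2) pairwise inverse-square forces: the exact baseline that
    hierarchical N-body methods approximate."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy
            f = G * masses[i] * masses[j] / r2   # inverse-square magnitude
            r = math.sqrt(r2)
            forces[i][0] += f * dx / r           # project onto unit direction
            forces[i][1] += f * dy / r
    return forces

# Two unit masses one unit apart attract with unit force along the x-axis.
forces = direct_forces([(0.0, 0.0), (1.0, 0.0)], [1.0, 1.0])
```

Newton's third law shows up directly: the two force vectors are equal and opposite.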
By
Martínez-Angeles, Carlos Alberto; Wu, Haicheng; Dutra, Inês; Costa, Vítor Santos; Buenabad-Chávez, Jorge
3 Citations
Relational learning algorithms mine complex databases for interesting patterns. Usually, the search space of patterns grows very quickly with the increase in data size, making it impractical to solve important problems. In this work we present the design of a relational learning system that takes advantage of graphics processing units (GPUs) to perform the most time-consuming function of the learner: rule coverage. To evaluate performance, we use four applications: a widely used relational learning benchmark for predicting carcinogenesis in rodents, an application in chemoinformatics, an application in opinion mining, and an application in mining health record data. We compare results using single and multiple CPUs in a multicore host against results using the GPU version. Results show that the GPU version of the learner is up to eight times faster than the best CPU version.
By
Hernández-Gracidas, Carlos Arturo; Sucar, Luis Enrique; Montes-y-Gómez, Manuel
5 Citations
In this paper we proposed the use of spatial relations as a way of improving annotation-based image retrieval. We analyzed different types of spatial relations and selected the most adequate ones for image retrieval. We developed an image comparison and retrieval method based on conceptual graphs, which incorporates spatial relations. Additionally, we proposed an alternative term-weighting scheme and explored the use of more than one sample image for retrieval using several late fusion techniques. Our methods were evaluated with a rich and complex image dataset, based on the 39 topics developed for the ImageCLEF 2008 photo retrieval task. Results show that: (i) incorporating spatial relations produces a significant increase in performance; (ii) the label weighting scheme we proposed obtains better results than other traditional schemes; and (iii) the combination of several sample images using late fusion produces an additional improvement in retrieval according to several metrics.
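As an illustration of the kind of spatial relations such a method might compute between annotated regions, here is a sketch over bounding boxes; the relation names, the box layout, and the labels are hypothetical, not the paper's selected set:

```python
def spatial_relations(a, b):
    """Coarse spatial relations between two boxes (x1, y1, x2, y2),
    with the origin at the top-left of the image."""
    rels = set()
    if a[2] < b[0]:
        rels.add("left_of")
    if a[0] > b[2]:
        rels.add("right_of")
    if a[3] < b[1]:
        rels.add("above")
    if a[1] > b[3]:
        rels.add("below")
    # Containment: every edge of `a` lies inside `b`.
    if a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2] and a[3] <= b[3]:
        rels.add("inside")
    return rels

# Hypothetical annotated regions in a 100x100 image.
sky = (0, 0, 100, 30)
sea = (0, 60, 100, 100)
boat = (40, 70, 60, 90)

rels_sky_sea = spatial_relations(sky, sea)    # sky is above the sea
rels_boat_sea = spatial_relations(boat, sea)  # boat is inside the sea region
```

Two images with the same labels ("sky", "sea", "boat") but different relation sets would then compare differently, which is the intuition behind adding relations to annotation-based matching.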
By
Carvalho, Cícero; Ramírez-Mondragón, Xavier; Neumann, Victor G. L.; Tapia-Recillas, Horacio
In 1988 Lachaud introduced the class of projective Reed–Muller codes, defined by evaluating the space of homogeneous polynomials of a fixed degree d on the points of $\mathbb{P}^n(\mathbb{F}_q)$. In this paper we evaluate the same space of polynomials on the points of a higher-dimensional scroll, defined from a set of rational normal curves contained in complementary linear subspaces of a projective space. We determine a formula for the dimension of the codes, and the exact value of the dimension and the minimum distance in some special cases.
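For a small worked instance of the classical (non-scroll) construction, the sketch below evaluates the degree-1 homogeneous polynomials on the points of $\mathbb{P}^2(\mathbb{F}_2)$, recovering the [7, 3, 4] Simplex code; the scroll codes studied in the paper are not reproduced here:

```python
from itertools import product

q = 2
# Points of P^2(F_2): over F_2 every projective point has a unique
# nonzero representative, so there are 2^3 - 1 = 7 points.
points = [p for p in product(range(q), repeat=3) if any(p)]

# Degree-1 homogeneous polynomials are spanned by the coordinates x0, x1, x2;
# evaluating each coordinate at every point gives a row of a generator matrix.
gen = [[p[i] for p in points] for i in range(3)]

# Enumerate all F_2-linear combinations of the rows (the codewords).
codewords = set()
for c in product(range(q), repeat=3):
    word = tuple(sum(c[i] * gen[i][j] for i in range(3)) % q
                 for j in range(len(points)))
    codewords.add(word)

# A nonzero linear form vanishes on a hyperplane P^1(F_2) with 3 points,
# so every nonzero codeword has weight 7 - 3 = 4: the [7, 3, 4] Simplex code.
weights = sorted(sum(w) for w in codewords if any(w))
```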
By
Salinas-Gutiérrez, Rogelio; Hernández-Aguirre, Arturo; Villa-Diharce, Enrique R.
2 Citations
This paper presents the use of graphical models and copula functions in Estimation of Distribution Algorithms (EDAs) for solving multivariate optimization problems. It is shown in this work how the incorporation of copula functions and graphical models for modeling the dependencies among variables provides some theoretical advantages over traditional EDAs. By means of copula functions and two well-known graphical models, this paper presents a novel approach for defining new EDAs. Each dependence is modeled by a copula function chosen from a predefined set of six functions that aims to cover a wide range of interrelations. It is also shown how the use of mutual information in the learning of graphical models implies a natural way of employing copula entropies. The experimental results on separable and non-separable functions show that the two new EDAs, which adopt copula functions to model dependencies, perform better than their original version with Gaussian variables.
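To illustrate the copula idea in isolation, here is a standard-library sketch of sampling from a bivariate Gaussian copula — one of the textbook choices; whether it is among the paper's predefined set of six functions is not assumed. The copula couples two Uniform(0, 1) marginals with a chosen dependence strength:

```python
import math
import random

def gaussian_copula_pairs(rho, n, seed=0):
    """Sample n pairs of uniforms coupled by a Gaussian copula
    with latent correlation rho."""
    rng = random.Random(seed)

    def phi(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))  # marginals are Uniform(0, 1)
    return pairs

pairs = gaussian_copula_pairs(rho=0.9, n=2000)
```

An EDA in this style would fit rho (or the parameter of another copula family) to the selected population and then sample new candidate solutions through the inverse marginal distributions.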
By
Coello Coello, Carlos A.
50 Citations
This paper provides a short review of some of the main topics on which current research in evolutionary multiobjective optimization is focused. The topics discussed include new algorithms, efficiency, relaxed forms of dominance, scalability, and alternative metaheuristics. This discussion motivates some further topics which, from the author's perspective, constitute good potential areas for future research, namely constraint-handling techniques, incorporation of the user's preferences, and parameter control. This information is expected to be useful for those interested in pursuing research in this area.
By
Alejo, R.; Monroy-de-Jesús, J.; Ambriz-Polo, J. C.; Pacheco-Sánchez, J. H.
In this paper, we present an improved dynamic sampling approach (ISDSA) for facing the multiclass imbalance problem. ISDSA is a modification of the backpropagation algorithm which focuses on making better use of the training samples to improve the classification performance of the multilayer perceptron (MLP). ISDSA uses the mean square error and a Gaussian function to identify the best samples with which to train the neural network. The results in this article show that ISDSA makes better use of the training dataset and improves the MLP classification performance. In other words, ISDSA is a successful technique for dealing with the multiclass imbalance problem. In addition, the results indicate that the proposed method is very competitive in terms of classification performance with respect to classical oversampling methods (also when combined with well-known feature selection methods) and other dynamic sampling approaches; moreover, it is better than the oversampling methods in training time and training-set size.
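One plausible reading of the Gaussian weighting step can be sketched as follows; the weighting formula, the sigma value, and the error values are assumptions for illustration, not ISDSA's exact rule:

```python
import math

def selection_probability(sample_mse, mean_mse, sigma=0.5):
    """Hypothetical Gaussian weighting centred on the mean error: samples
    whose squared error is close to the current mean MSE are favoured
    for the next training pass."""
    return math.exp(-((sample_mse - mean_mse) ** 2) / (2.0 * sigma ** 2))

errors = [0.05, 0.40, 0.42, 0.95]         # invented per-sample squared errors
mean_mse = sum(errors) / len(errors)      # current mean MSE of the network
probs = [selection_probability(e, mean_mse) for e in errors]
```

Under this reading, already-mastered samples (very low error) and outliers (very high error) are sampled less often, concentrating training on the informative middle band.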
By
Herlihy, Maurice; Rajsbaum, Sergio
3 Citations
Roughly speaking, a simplicial complex is shellable if it can be constructed by gluing a sequence of n-simplexes to one another along $(n-1)$-faces only. Shellable complexes have been widely studied because they have nice combinatorial properties. It turns out that several standard models of concurrent computation can be constructed from shellable complexes. We consider adversarial schedulers in the synchronous, asynchronous, and semi-synchronous message-passing models, as well as asynchronous shared memory. We show how to exploit their common shellability structure to derive new and remarkably succinct tight (or nearly so) lower bounds on the connectivity of protocol complexes, and hence on solutions to the $k$-set agreement task in these models. Earlier versions of the material in this article appeared in the 2010 ACM Symposium on Principles of Distributed Computing (Herlihy and Rajsbaum 2010), and the International Conference on Distributed Computing (Herlihy and Rajsbaum 2010, doi: 10.1145/1835698.1835724).
By
Raza, Mushtaq; Faria, João Pascoal; Salazar, Rafael
Collecting product and process measures in software development projects, particularly in education and training environments, is important as a basis for assessing current performance and opportunities for improvement. However, analyzing the collected data manually is challenging because of the expertise required, the lack of benchmarks for comparison, the amount of data to analyze, and the time required to do the analysis. ProcessPAIR is a novel tool for automated performance analysis and improvement recommendation; based on a performance model calibrated from the performance data of many developers, it automatically identifies and ranks potential performance problems and root causes of individual developers. In education and training environments, it increases students' autonomy and reduces instructors' effort in grading and feedback. In this article, we present the results of a controlled experiment involving 61 software engineering master students, half of whom used ProcessPAIR in a Personal Software Process (PSP) performance analysis assignment, while the other half used a traditional PSP support tool (Process Dashboard) for the same assignment. The results show significant benefits in terms of students' satisfaction (average score of 4.78 on a 1–5 scale for ProcessPAIR users, against 3.81 for Process Dashboard users), quality of the analysis outcomes (average grade of 88.1 on a 0–100 scale for ProcessPAIR users, against 82.5 for Process Dashboard users), and time required to do the analysis (average of 252 min for ProcessPAIR users, against 262 min for Process Dashboard users, but with much room for improvement).
By
Muraña, Jonathan; Nesmachnow, Sergio; Armenta, Fermín; Tchernykh, Andrei
This article presents an empirical evaluation of power consumption for scientific computing applications in multicore systems. Three types of applications are studied, in single and combined executions on Intel and AMD servers, to evaluate the overall power consumption of each application. The main results indicate that power consumption behavior depends strongly on the type of application. Additional performance analysis shows that the most energy-efficient server load depends on the type of application, with efficiency decreasing in heavily loaded situations. These results allow formulating a model to characterize applications according to power consumption, efficiency, and resource sharing, providing useful information for resource management and scheduling policies. Several scheduling strategies are evaluated using the proposed energy model over realistic scientific computing workloads. Results confirm that strategies that maximize host utilization provide the best energy efficiency and performance results.
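A utilization-maximizing policy of the kind the results favour can be sketched with a simple first-fit consolidation heuristic (an illustrative stand-in, not one of the paper's evaluated strategies; loads and capacity are invented):

```python
def first_fit_schedule(tasks, host_capacity):
    """Greedy consolidation: place each task on the first host with spare
    capacity, opening a new host only when none fits -- a simple
    utilization-maximizing scheduling policy."""
    hosts = []  # each entry is the load currently assigned to that host
    for load in tasks:
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] = used + load
                break
        else:
            hosts.append(load)  # no existing host fits: open a new one
    return hosts

# Eight tasks of load 0.25 consolidate onto two fully used hosts
# instead of spreading across eight mostly idle ones.
hosts = first_fit_schedule([0.25] * 8, host_capacity=1.0)
```

Fewer, busier hosts amortize idle power draw, which is why utilization-maximizing strategies tend to score well on energy efficiency.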
By
Mercado, Jose; Espinosa-Curiel, Ismael; Escobedo, Lizbeth; Tentori, Monica
BCI video games are making brain training increasingly popular and available, yet scientific evidence to support their efficacy is lacking. Real-life descriptions of BCI video game deployments in concrete scenarios are urgently needed. In this paper, we report a use case of the development and pilot-testing of a BCI video game, called FarmerKeeper, designed to support children with autism attending neurofeedback training sessions. Caring for children with autism may impose new cognitive, motor, behavioral, and attention challenges that current solutions targeted at other populations may not address. The goal of the game is to maintain children's attention above a threshold to control a runner who is searching for lost farm animals. FarmerKeeper uses a consumer-grade BCI headset to read the user's attention. We evaluated FarmerKeeper's usability and user experience through a 4-week deployment study with 12 children with autism. Our quantitative results show that FarmerKeeper outperforms a commercial BCI video game used for neurofeedback training and, qualitatively, that FarmerKeeper can successfully support children with autism attending neurofeedback training sessions, possibly improving their attention and reducing their anxiety. We close by reflecting on our design choices and discussing directions for future work.
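The attention-threshold mechanic can be sketched as a simple mapping from headset readings to runner speed; the threshold value, the 0–100 reading scale, and the linear scaling are illustrative assumptions, not FarmerKeeper's implementation:

```python
def runner_speed(attention, threshold=60, max_speed=2.0):
    """Map a BCI attention reading (assumed 0-100) to runner speed:
    the runner only moves while attention stays above the threshold."""
    if attention < threshold:
        return 0.0
    # Scale speed with how far above the threshold the reading is.
    return max_speed * (attention - threshold) / (100 - threshold)

readings = [30, 55, 60, 80, 100]
speeds = [runner_speed(a) for a in readings]
```

In a game loop, this value would be applied each frame, so the runner stalls whenever the child's attention drops below the training threshold.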
By
Pino, Francisco J.; García, Félix; Piattini, Mario; Oktaba, Hanna
1 Citation
Establishing a research strategy that is suitable for undertaking research on software engineering is vital if we are to guarantee that research products are developed and validated following a systematic and coherent method. We took this into account as we carried out the COMPETISOFT research project, which investigated software process improvement (SPI) in the context of Latin American small companies. That experience has enabled us to develop a research strategy based on the integrated use of action research and case study methods. This paper introduces the proposed research strategy and provides extensive discussion of its application for: (1) developing the methodological framework of COMPETISOFT for SPI, (2) putting this framework into practice in eight small software companies, and (3) refining the methodological framework based on feedback from that practice. The use of this research strategy allowed us to observe that it was suitable for developing, refining, improving, applying, and validating COMPETISOFT's methodological framework. Furthermore, having seen it applied, we believe that this strategy offers a successful integration of action research and case study, which can be useful for conducting research in other software engineering areas that address the needs of small software companies.
By
Quezada-Naquid, Moisés; Marcelín-Jiménez, Ricardo; Gonzalez-Compeán, J. L.; Perez, Jesus Carretero
1 Citation
Storage pooling is a virtualization technique used in data centers to build upgradeable storage pools and to cope with the explosive growth of information. In this technique, a randomized data distribution strategy (DDS) ensures load balancing when adding new devices to the pool by using reallocation mechanisms. However, when applying fault-tolerant schemes to the storage pools, the system produces r redundant objects from a common data source and the DDS must allocate them on different devices, which increases the complexity of the reallocation operations performed during upgrade procedures. This paper presents RSPooling: an adaptive DDS for fault-tolerant and large-scale storage systems. RSPooling builds storage pools by grouping devices into disjoint subpools and ensures the effectiveness of fault-tolerant schemes by allocating the redundant objects from a common data source in different subpools. In RSPooling, the first redundant object is allocated in a random manner, whereas the rest are allocated by using a cyclic list of subpools; this procedure minimizes the number of reallocation operations and fosters load balancing. We performed an emulation-based evaluation of RSPooling and a traditional DDS for storage pooling called RUSHp. The evaluation reveals that RSPooling improves the time efficiency of lookup operations compared to RUSHp. The evaluation also shows that, in upgrade procedures and regardless of the initial settlement, RSPooling requires significantly fewer reallocation operations than RUSHp for load balancing of fault-tolerant storage pools.
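A simplified reading of the allocation rule — random subpool for the first redundant object, cyclic placement for the remaining ones — can be sketched as follows (the subpool count, r, and seed are illustrative, and this omits the device-level placement inside each subpool):

```python
import random

def allocate_replicas(num_subpools, r, seed=42):
    """Place r redundant objects from one data source in r distinct subpools:
    the first subpool is drawn at random, the rest follow the cyclic order.
    Requires r <= num_subpools so all placements are distinct."""
    first = random.Random(seed).randrange(num_subpools)
    return [(first + k) % num_subpools for k in range(r)]

# Three redundant objects spread over five subpools.
placement = allocate_replicas(num_subpools=5, r=3)
```

Because only the first choice is random and the rest are determined by the cycle, adding a subpool perturbs few placements, which is the intuition behind the reduced reallocation cost during upgrades.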
