Showing 1 to 100 of 3192 matching Articles
By
Muñoz, Mirna; Negrón, Adriana Peña Pérez; Mejia, Jezreel; Lopez, Graciela Lara
In Mexico, small and medium-sized enterprises (SMEs) are key to the software development industry, so having highly qualified personnel for the development of high-quality software products is fundamental to guaranteeing their permanence in the market. Therefore, the mismatch between software industry requirements and academic training represents a big problem that must be reduced for the benefit of both sectors. In this context, understanding how well academic curricula in higher education cover software industry requirements is a fundamental step toward reducing the current gap between industry and academia. This paper presents an analysis of the coverage between the Moprosoft norm, a standard used by the software industry to ensure quality in Software Engineering practices, and four higher-education academic curricula related to Computer Science and Informatics.
By
Monroy, Raúl; Bundy, Alan; Green, Ian
2 Citations
Most efforts to automate formal verification of communicating systems have centred around finite-state systems (FSSs). However, FSSs are incapable of modelling many practical communicating systems, including a novel class of problems, which we call VIPSs. VIPSs are value-passing, infinite-state, parameterised systems. Existing approaches using model checking over FSSs are insufficient for VIPSs. This is due to their inability both to reason with and about domain-specific theories, and to cope with systems having an unbounded or arbitrary state space.
We use the Calculus of Communicating Systems (CCS) (Communication and Concurrency. London: Prentice Hall, 1989) to express and specify VIPSs. We take program verification to be proving the program and its intended specification equivalent. We use the laws of CCS to conduct the verification task. This approach allows us to study communicating systems and the data such systems communicate. Automating theorem proving in this context is an extremely difficult task.
We provide automated methods for CCS analysis; they are applicable to both FSSs and VIPSs. Adding these methods to the CLAM proof planner (Lecture Notes in Artificial Intelligence, Vol. 449, Springer, 1990, pp. 647, 648), we have implemented an automated verification planner capable of dealing with problems that previously required human interaction. This paper describes these methods, gives an account of why they work, and provides a short summary of experimental results.
By
Coello Coello, Carlos A.
Evolutionary algorithms (as well as a number of other metaheuristics) have become a popular choice for solving problems having two or more (often conflicting) objectives (the so-called multiobjective optimization problems). This area, known as EMOO (Evolutionary Multi-Objective Optimization), has had an important growth in the last 20 years, and several people (particularly newcomers) get the impression that it is now very difficult to make contributions of sufficient value to justify, for example, a PhD thesis. However, a lot of interesting research is still under way. In this paper, we will briefly review some of the research topics on evolutionary multiobjective optimization that are currently attracting a lot of interest (e.g., indicator-based selection, many-objective optimization and the use of surrogates) and which represent good opportunities for doing research. Some of the challenges currently faced by this discipline will also be delineated.
By
Pinto, David; Juan, Alfons; Rosso, Paolo
The World Wide Web is a natural setting for cross-lingual information retrieval. The European Union is a typical example of a multilingual scenario, where multiple users have to deal with information published in at least 20 languages. Given queries in some source language and a target corpus in another language, the typical approach consists of translating either the query or the target dataset into the other language. Other approaches use parallel corpora to obtain a statistical dictionary of words across the different languages. In this work, we propose to use a training corpus made up of a set of Query-Relevant Document Pairs (QRDP) in a probabilistic cross-lingual information retrieval approach based on IBM alignment model 1 for statistical machine translation. Our approach has two main advantages over those that use direct translation and parallel corpora: we do not obtain a translation of the query, but a set of associated words which share their meaning in some way, so the obtained dictionary is, in a broad sense, more semantic than a translation dictionary. Besides, since the queries are supervised, we work in a more restricted domain than when using a general parallel corpus (it is well known that in this context results are better than those obtained in a general context). In order to determine the quality of our experiments, we compared the results with those obtained by a direct translation of the queries with a query translation system, observing promising results.
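The word-association table at the heart of IBM alignment model 1 can be trained with a few lines of expectation-maximisation. The sketch below is a minimal, generic Model 1 trainer, not the authors' implementation; the toy sentence pairs are invented for illustration.

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=15):
    """EM training of IBM Model 1 word-association probabilities t(f|e).
    pairs: list of (source_tokens, target_tokens) sentence pairs."""
    # uniform initialisation over the target vocabulary
    f_vocab = {f for _, fs in pairs for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)
        for es, fs in pairs:
            for f in fs:
                z = sum(t[(f, e)] for e in es)   # normalisation over sources
                for e in es:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():          # M-step: renormalise
            t[(f, e)] = c / total[e]
    return t

# toy corpus: the second pair disambiguates "the" <-> "la"
pairs = [(["the", "house"], ["la", "casa"]), (["the"], ["la"])]
t = ibm_model1(pairs)
```

Each EM round sharpens the associations: the single-word pair forces mass of "the" onto "la", which in turn pushes "house" toward "casa", which is exactly the kind of associated-word dictionary the abstract describes.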
By
Garza-Fabre, Mario; Rodriguez-Tello, Eduardo; Toscano-Pulido, Gregorio
2 Citations
Through multiobjectivization, a single-objective problem is restated in multiobjective form with the aim of enabling a more efficient search process. Recently, this transformation was applied with success to the hydrophobic-polar (HP) lattice model, which is an abstract representation of the protein structure prediction problem. The use of alternative multiobjective formulations of the problem has led to significantly better results. In this paper, an improved multiobjectivization for the HP model is proposed. By decomposing the HP model's energy function, a two-objective formulation for the problem is defined. A comparative analysis reveals that the newly proposed multiobjectivization compares favorably with respect to both the conventional single-objective and the previously reported multiobjective formulations. Statistical significance testing and the use of a large set of test cases support the findings of this study. Both two-dimensional and three-dimensional lattices are considered.
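For readers unfamiliar with the HP model, the single-objective energy that such decompositions start from simply counts hydrophobic contacts. The sketch below is a generic 2-D illustration (our own encoding of sequence and fold, not the paper's code): it scores a conformation as -1 per H-H pair that is adjacent on the lattice but not consecutive in the chain; a multiobjectivization would split such a sum into separate objectives.

```python
def hp_energy(sequence, fold):
    """HP-model energy of a 2-D square-lattice conformation.
    sequence: string over {'H','P'}; fold: list of (x, y) per residue."""
    assert len(sequence) == len(fold)
    pos = {tuple(p): i for i, p in enumerate(fold)}
    e = 0
    for i, (x, y) in enumerate(fold):
        if sequence[i] != 'H':
            continue
        # look only right/up so every neighbour pair is counted once
        for q in ((x + 1, y), (x, y + 1)):
            j = pos.get(q)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                e -= 1   # topological (non-chain) H-H contact
    return e
```

For example, folding "HHHH" into a unit square brings the first and last H into contact, giving energy -1.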
By
Gafni, Eli; Rajsbaum, Sergio
6 Citations
In round-by-round models of distributed computing, processes run in a sequence of (synchronous or asynchronous) rounds. The advantage of the round-by-round approach is that invariants established in the first round are preserved in later rounds. An elegant asynchronous round-by-round shared memory model is the iterated snapshots (IS) model. In the snapshots model, processes share an array m[·], indexed by process ID, that can be accessed any number of times: P_{i} writes to m[i] and can take a snapshot of the entire array. In the IS model, instead, processes share a two-dimensional array m[·,·], indexed by iteration number and by process ID, where P_{i} in iteration r writes once to m[r, i] and takes one snapshot of row r, m[r,·]. The IS model lends itself more easily to combinatorial analysis. However, to show that whenever a task is impossible in the IS model the task is impossible in the snapshots model, a simulation is needed. Such a simulation was presented by Borowsky and Gafni in PODC'97; namely, it was shown how to take a wait-free protocol for the snapshots model and transform it into a protocol for the IS model solving the same task.
In this paper we present a new simulation from the snapshots model to the IS model, and show that it can be extended to work with models stronger than wait-free. The main contribution is to show that the simulation can work with models that have access to certain communication objects, called 0-1 tasks. This extends the result of Gafni, Rajsbaum and Herlihy in DISC 2006, stating that renaming is strictly weaker than set agreement, from the IS model to the usual non-iterated wait-free read/write shared memory model.
We also show that our simulation works with t-resilient models and the more general dependent process failure model of Junqueira and Marzullo. This version of the simulation extends previous results by Herlihy and Rajsbaum in PODC 2010 and DISC 2010, about the topological connectivity of a protocol complex in an iterated dependent process failure model, to the corresponding non-iterated model.
By
Confalonieri, Roberto; Nieves, Juan Carlos; Osorio, Mauricio; Vázquez-Salceda, Javier
4 Citations
In this paper, we show how the formalisms of Logic Programs with Ordered Disjunction (LPODs) and Possibilistic Answer Set Programming (PASP) can be merged into the single framework of Logic Programs with Possibilistic Ordered Disjunction (LPPODs). The LPPODs framework embeds in a unified way several aspects of common-sense reasoning: nonmonotonicity, preferences, and uncertainty, where each part is underpinned by a well-established formalism. On the one hand, from LPODs it inherits the distinctive feature of expressing context-dependent qualitative preferences among different alternatives (modeled as the atoms of a logic program). On the other hand, PASP allows qualitative certainty statements about the rules themselves (modeled as necessity values according to possibilistic logic) to be captured. In this way, the LPPODs framework supports reasoning which is nonmonotonic, preference-aware, and uncertainty-aware. The LPPODs syntax allows for the specification of (1) preferences among the exceptions to default rules, and (2) necessity values for the certainty of program rules. As a result, preferences and uncertainty can be used to select the preferred uncertain default rules of an LPPOD and, consequently, to order its possibilistic answer sets. Furthermore, we describe the implementation of an ASP-based solver able to compute the LPPODs semantics.
By
Bagnoli, Franco; El Yacoubi, Samira; Rechtman, Raúl
4 Citations
An important question to be addressed regarding system control on a time interval [0, T] is whether some particular target state in the configuration space is reachable from a given initial state. When the target of interest refers only to a portion of the spatial domain, we speak of regional analysis. Cellular automata have recently been promoted for the study of control problems on spatially extended systems for which the classical approaches cannot be used. An interesting problem concerns the situation where the subregion of interest is not interior to the domain but a portion of its boundary. In this paper we address the problem of regional controllability of cellular automata via boundary actions, i.e., we investigate the characteristics of a cellular automaton that allow it to be controlled inside a given region by acting only on the value of sites at its boundaries.
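As a toy illustration of boundary actions (our own sketch, not the authors' model), consider a 1-D radius-1 cellular automaton whose region is the interior cells and whose two boundary cells are control inputs: choosing the boundary values steers the interior configuration.

```python
def step(state, rule, left, right):
    """One synchronous update of a 1-D radius-1 CA over the region `state`;
    `left` and `right` are the boundary cells imposed by the controller."""
    padded = [left] + state + [right]
    return [rule[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# elementary rule 90: each cell becomes the XOR of its two neighbours
rule90 = {(a, b, c): a ^ c for a in (0, 1) for b in (0, 1) for c in (0, 1)}
```

With the region [0, 1, 0], the controller reaches different interior states in one step depending only on the right boundary value, which is the essence of regional controllability via the boundary.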
By
Mata, Felix
1 Citation
Geographic Information Ranking consists of measuring whether a document (answer) is relevant to a spatial query, by comparing the characteristics that the document and the query have in common. The most popular approaches compare just one aspect of the geographical data (geographic properties, topology, among others), which limits the assessment of document relevance. Nevertheless, ranking can be improved when key characteristics of geographical objects are considered: (1) geographical attributes, (2) topological relations, and (3) geographical concepts. In this paper, we outline iRank, a method that integrates these three aspects to rank a document. Our approach evaluates documents from three sources of information: GeoOntologies, dictionaries, and topology files. Relevance is measured in three stages. In the first stage, relevance is computed by processing concepts; in the second stage, relevance is calculated using geographic attributes; in the last stage, relevance is measured by computing topological relations. Thus, the main contribution of iRank is to show that integrating the three ranking criteria is better than using them separately.
By
Morales-Luna, Guillermo
1 Citation
We use a simple epistemic logic to pose problems related to relational database design. We use the notion of attribute dependencies, which is weaker than the notion of functional dependencies, to perform design tasks, since it satisfies the Armstrong axioms. We also discuss the application of the simple epistemic logic to privacy protection and to statistical disclosure.
By
Galván-López, Edgar; Vázquez-Mendoza, Lucia; Schoenauer, Marc; Trujillo, Leonardo
In Genetic Programming (GP), the fitness of individuals is normally computed by using a set of fitness cases (FCs). Research on the use of FCs in GP has primarily focused on how to reduce the size of these sets. However, often, only a small set of FCs is available and there is no need to reduce it. In this work, we are interested in using the whole FCs set, but rather than adopting the commonly used GP approach of presenting the entire set of FCs to the system from the beginning of the search, referred as static FCs, we allow the GP system to build it by aggregation over time, named as dynamic FCs, with the hope to make the search more amenable. Moreover, there is no study on the use of FCs in Dynamic Optimisation Problems (DOPs). To this end, we also use the Kendall Tau Distance (KTD) approach, which quantifies pairwise dissimilarities among two lists of fitness values. KTD aims to capture the degree of a change in DOPs and we use this to promote structural diversity. Results on eight symbolic regression functions indicate that both approaches are highly beneficial in GP.
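The Kendall Tau Distance used above counts, for two equal-length lists of fitness values, the pairs whose relative order differs between the lists. A minimal quadratic-time version (the paper may normalise the count; that detail is not reproduced here) is:

```python
def kendall_tau_distance(a, b):
    """Number of discordant pairs between two equal-length lists of
    fitness values: pairs ordered one way in a and the other way in b."""
    assert len(a) == len(b)
    n = len(a)
    discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # a pair is discordant if the two lists disagree on its order
            if (a[i] - a[j]) * (b[i] - b[j]) < 0:
                discordant += 1
    return discordant
```

Identical rankings give distance 0, a fully reversed ranking gives n(n-1)/2, and the value in between quantifies the degree of change between consecutive fitness landscapes.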
By
Picek, Stjepan; Coello Coello, Carlos A.; Jakobovic, Domagoj; Mentens, Nele
2 Citations
The problem of finding the shortest addition chain for a given exponent is of great relevance in cryptography, but it is also very difficult to solve, since it is an NP-hard problem. In this paper, we propose a genetic algorithm with a novel representation of solutions and new crossover and mutation operators to minimize the length of the addition chains corresponding to a given exponent. We also develop a repair strategy that significantly enhances the performance of our approach. The results are compared with respect to those generated by other metaheuristics for instances of moderate size, but we also investigate values up to $2^{127}-3$. For those instances, we were unable to find any results produced by other metaheuristics for comparison, and three additional strategies were adopted in this case to serve as benchmarks. Our results indicate that the proposed approach is a very promising alternative to deal with this problem.
By
Montano-Rivas, Omar
Completion-based automated theory exploration is a method to explore inductive theories with the aid of a convergent rewrite system. It combines a method to synthesise conjectures/definitions in a theory with a completion algorithm. Completion constructs a convergent rewrite system which is then used to reduce redundancies and improve proof automation during the exploration of the theory. However, completion does not always succeed on a set of identities and a reduction ordering. A common failure occurs when an initial identity or a normal form of a critical pair cannot be oriented by the given ordering. A popular solution to this problem consists in using for rewriting those instances of the rules which can be oriented, namely ordered rewriting. Extending completion to ordered rewriting leads to 'unfailing completion'. In this paper, we extend the class of theories to which the completion-based automated theory exploration method can be applied by using unfailing completion. This produces stronger normalization methods compared to those in [20,21]. The techniques described are implemented in the theory exploration system IsaScheme.
By
Monroy, Raúl; Bundy, Alan; Green, Ian
Unique Fixpoint Induction (UFI) is the chief inference rule for proving the equivalence of recursive processes in the Calculus of Communicating Systems (CCS) (Milner 1989). It plays a major role in the equational approach to verification. Equational verification is of special interest as it offers theoretical advantages in the analysis of systems that communicate values, have infinite state space or show parameterised behaviour. We call these kinds of systems VIPSs, an acronym for Value-passing, Infinite-State and Parameterised Systems. Automating the application of UFI in the context of VIPSs has been neglected. This is both because many VIPSs are given in terms of recursive function symbols, making it necessary to carefully apply induction rules other than UFI, and because proving that one VIPS process constitutes a fixpoint of another involves computing a process substitution, mapping states of one process to states of the other, that often is not obvious. Hence, VIPS verification is usually turned into equation solving (Lin 1995a). Existing tools for this proof task, such as VPAM (Lin 1993), are highly interactive. We introduce a method that automates the use of UFI. The method uses middle-out reasoning (Bundy et al. 1990a) and so is able to apply the rule even without elaborating the details of the application. The method introduces meta-variables to represent those parts of the processes' state space that, at application time, are not yet known, hence changing from equation verification to equation solving. Adding this method to the equation plan developed by Monroy et al. (Autom Softw Eng 7(3):263–304, 2000a), we have implemented an automatic verification planner. This planner increases the number of verification problems that can be dealt with fully automatically, thus improving upon the current degree of automation in the field.
By
Pierrard, Thomas; Coello Coello, Carlos A.
3 Citations
This paper presents a new artificial immune system algorithm for solving multiobjective optimization problems, based on the clonal selection principle and the hypervolume contribution. The main aim of this work is to investigate the performance of this class of algorithms with respect to approaches which are representative of the state-of-the-art in multiobjective optimization using metaheuristics. The results obtained by our proposed approach, called multiobjective artificial immune system based on hypervolume (MOAIS-HV), are compared with respect to those of NSGA-II. Our preliminary results indicate that our proposed approach is very competitive, and can be a viable choice for solving multiobjective optimization problems.
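The hypervolume contribution driving such selection schemes is easiest to see in two dimensions. The sketch below is a generic minimisation version (our own illustration, not the MOAIS-HV code): it computes the area a front dominates with respect to a reference point, and a point's exclusive contribution as the area lost when that point is removed. It assumes the points are mutually non-dominated.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-D minimisation front w.r.t. reference point ref.
    Assumes points are mutually non-dominated."""
    pts = sorted(points)                 # ascending f1 => descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # horizontal slab per point
        prev_f2 = f2
    return hv

def contribution(points, ref, i):
    """Exclusive hypervolume contribution of points[i]."""
    rest = points[:i] + points[i + 1:]
    return hypervolume_2d(points, ref) - hypervolume_2d(rest, ref)
```

For the front {(1, 2), (2, 1)} with reference (3, 3), the dominated area is 3 and each point contributes exactly the 1x1 corner only it dominates.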
By
Olague, Gustavo; Vega, Francisco Fernández; Pérez, Cynthia B.; Lutton, Evelyne
5 Citations
We present a new bio-inspired approach applied to the problem of stereo image matching. The approach is based on an artificial epidemic process, which we call "the infection algorithm." The problem at hand is a basic one in computer vision for 3D scene reconstruction; it has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information which allows the generation of simulated projections from a viewpoint that is different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
By
Coello, Carlos A. Coello
4 Citations
Summary
In this paper, we will briefly discuss the current state of the research on evolutionary multiobjective optimization, emphasizing the main achievements obtained to date. Achievements in algorithmic design are discussed from its early origins until the current approaches which are considered as the “second generation” in evolutionary multiobjective optimization. Some relevant applications are discussed as well, and we conclude with a list of future challenges for researchers working (or planning to work) in this area in the next few years.
By
Marron, Mark; Hermenegildo, Manuel; Kapur, Deepak; Stefanovic, Darko
8 Citations
The performance of heap analysis techniques has a significant impact on their utility in an optimizing compiler. Most shape analysis techniques perform interprocedural dataflow analysis in a context-sensitive manner, which can result in analyzing each procedure body many times (causing significant increases in runtime even if the analysis results are memoized). To improve the effectiveness of memoization (and thus speed up the analysis), project/extend operations are used to remove portions of the heap model that cannot be affected by the called procedure (effectively reducing the number of different contexts that a procedure needs to be analyzed with). This paper introduces project/extend operations that are capable of accurately modeling properties that are important when analyzing non-trivial programs (sharing, nullity information, destructive recursive functions, and composite data structures). The techniques we introduce handle these features while significantly improving the effectiveness of memoizing analysis results (and thus improving analysis performance). Using a range of well-known benchmarks (many of which have not been successfully analyzed using other existing shape analysis methods), we demonstrate that our approach results in significant improvements in both accuracy and efficiency over a baseline analysis.
By
Molinet Berenguer, José A.; Coello Coello, Carlos A.
1 Citation
In this paper, we propose a new multiobjective evolutionary algorithm (MOEA), which transforms a multiobjective optimization problem into a linear assignment problem using a set of uniformly scattered weight vectors. Our approach adopts uniform design to obtain the set of weights and the Kuhn-Munkres (Hungarian) algorithm to solve the assignment problem. Differential evolution is used as our search engine, giving rise to the so-called Hungarian Differential Evolution (HDE) algorithm. Our proposed approach is compared with respect to a MOEA based on decomposition (MOEA/D) and with respect to an indicator-based MOEA (the S-Metric Selection Evolutionary Multi-Objective Algorithm, SMS-EMOA) using several test problems (taken from the specialized literature) having from two to ten objective functions. Our preliminary experimental results indicate that our proposed HDE outperforms MOEA/D and is competitive with respect to SMS-EMOA, but at a significantly lower computational cost.
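The core transformation pairs N individuals with N weight vectors at minimum total cost. The sketch below replaces the Kuhn-Munkres step with a brute-force search over permutations, which gives the same optimum for tiny instances; the cost values are hypothetical.

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one matching of individuals (rows) to weight
    vectors (columns). Brute force stands in for Kuhn-Munkres here; the
    Hungarian algorithm finds the same optimum in O(n^3)."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, list(perm)
    return best_cost, best_perm
```

With cost matrix [[4, 1], [2, 3]], assigning individual 0 to weight 1 and individual 1 to weight 0 gives the optimal total cost 3; this is the selection decision the MOEA makes once per generation.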
By
Lücken, Christian; Brizuela, Carlos; Barán, Benjamin
1 Citation
Multiobjective Evolutionary Algorithms (MOEAs) are used to solve complex multiobjective problems. As the number of objectives increases, Pareto-based MOEAs are unable to maintain the effectiveness shown for two or three objectives. Therefore, as a way to ameliorate this performance degradation, several authors have proposed preference-based methods as an alternative to Pareto-based approaches. On the other hand, parallelization has shown to be useful in evolutionary optimization. A central aspect of the parallelization of evolutionary algorithms is the population partitioning approach. Thus, this paper presents a new parallelization approach, based on clustering by the shape of objective vectors, to deal with many-objective problems. The proposed method was compared with random and $k$-means clustering approaches, using a multithreading framework to parallelize NSGA-II and six variants using preference-based relations for fitness assignment. Executions were carried out for the DTLZ problem suite, and the obtained solutions were compared using the generational distance metric. Experimental results show that the proposed shape-based partition achieves competitive results when compared to the sequential version and to other partitioning approaches.
By
Lopez, Edgar Manoatl; Antonio, Luis Miguel; Coello Coello, Carlos A.
3 Citations
The hypervolume has become very popular in current multiobjective optimization research. Because of its highly desirable features, it has been used not only as a quality measure for comparing the final results of multiobjective evolutionary algorithms (MOEAs), but also as a selection operator (it is, for example, very suitable for many-objective optimization problems). However, it has one serious drawback: computing the exact hypervolume is highly costly. The best known algorithms to compute the hypervolume are polynomial in the number of points, but their cost grows exponentially with the number of objectives. This paper proposes a novel approach which, through the use of Graphics Processing Units (GPUs), computes the hypervolume contribution of a point in a faster way. We develop a highly parallel implementation of our approach and demonstrate its performance when using it within the $\mathcal{S}$-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA). Our results indicate that our proposed approach is able to achieve a significant speedup (of up to 883x) with respect to its sequential counterpart, which allows us to use SMS-EMOA with exact hypervolume calculations in problems having up to 9 objective functions.
By
Ita, Guillermo; Bello, Pedro; Contreras, Meliza
Counting the models of a two-conjunctive-form formula F (the #2SAT problem) is a classic #P problem. We determine different structural patterns on the underlying graph of a 2-CF F that allow the efficient computation of #2SAT(F).
We show that if the constraint graph of a formula is acyclic, or the cycles on the graph can be arranged as independent and embedded cycles, then the number of models of F can be counted efficiently.
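For the acyclic case, the efficient count can be sketched as a dynamic program over the constraint tree. This is a generic illustration under our own clause encoding, not the authors' algorithm: each edge of the tree carries one clause, and the subtree counts are combined bottom-up.

```python
def count_2sat_tree(n, clauses, root=0):
    """Count models of a 2-CNF whose constraint graph is a tree spanning
    variables 0..n-1. A clause ((u, su), (v, sv)) means: the literal of u
    is true iff u is assigned su, likewise for v (signs are 0/1)."""
    adj = {v: [] for v in range(n)}
    for (u, su), (v, sv) in clauses:
        adj[u].append((v, su, sv))   # (neighbour, own sign, neighbour sign)
        adj[v].append((u, sv, su))

    def dp(v, parent):
        ways = [1, 1]                # ways[b]: models of v's subtree with v=b
        for (c, s_self, s_child) in adj[v]:
            if c == parent:
                continue
            child = dp(c, v)
            for b in (0, 1):
                # keep only child assignments satisfying the shared clause
                ways[b] *= sum(child[bc] for bc in (0, 1)
                               if b == s_self or bc == s_child)
        return ways

    total = dp(root, None)
    return total[0] + total[1]
```

For example, the single clause (x0 OR x1) has 3 models over two variables, and the chain (x0 OR x1), (NOT x1 OR x2) has 4 models over three variables; the DP visits each edge once, so the count is linear in the formula size.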
By
Yáñez-Márquez, Cornelio; López-Yáñez, Itzamá; Aldape-Pérez, Mario; Camacho-Nieto, Oscar; Argüelles-Cruz, Amadeo José; Villuendas-Rey, Yenny
The current paper contains the theoretical foundation for the off-the-mainstream model known as Alpha-Beta associative memories (the $\alpha\beta$ model). This is an unconventional computation model designed to operate as an associative memory, whose main application is the solution of pattern recognition tasks, particularly pattern recall and pattern classification. Although this model was devised, proposed and created in 2002, it is worth noting that its theoretical support remains unpublished to this day. This is despite the fact that more than a hundred scientific articles have been published with applications, improvements, and new models derived from the $\alpha\beta$ model. The present paper includes all the required definitions, and the rigorous mathematical demonstrations of the lemmas and theorems, explaining the operation of the $\alpha\beta$ model, as well as the original models it has inspired or that have been derived from it. Also, brief descriptions of 60 selected articles related to the $\alpha\beta$ model are presented. These latter works illustrate the competitiveness (and sometimes superiority) of several extensions and models derived from the original $\alpha\beta$ model, when compared against some models and paradigms present in the mainstream current scientific literature.
By
Pinto, David; Rosso, Paolo; Jiménez, Ernesto
This paper presents a cross-lingual information retrieval approach that uses a ranking method based on a penalised version of the Jaccard formula. The results obtained after submitting a set of runs to WebCLEF 2006 have shown that this simple ranking formula may be used in a cross-lingual environment. A comparison with runs submitted by other teams ranks us in third place when using all the topics. Fourth place is obtained with our best overall results when using only the new topic set, and second place when using only the automatic topics of the new topic set. An exact comparison with the rest of the participants is in fact difficult to obtain and, therefore, we consider that a further detailed analysis of the components should be done in order to determine the best components of the proposed system.
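The plain Jaccard coefficient underlying the ranking formula scores a document by the overlap of its term set with the query's term set (the paper's penalised variant is not reproduced here):

```python
def jaccard(query_terms, doc_terms):
    """Jaccard coefficient between the term sets of a query and a document:
    |intersection| / |union|, in [0, 1]."""
    q, d = set(query_terms), set(doc_terms)
    union = q | d
    return len(q & d) / len(union) if union else 0.0
```

For instance, a query sharing one of three distinct terms with a document scores 1/3; documents are then ranked by this score, with the penalisation adjusting it per the paper.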
By
Alba-Cabrera, Eduardo; Ibarra-Fiallo, Julio; Godoy-Calderon, Salvador; Cervantes-Alonso, Fernando
1 Citation
The last few years have seen an important increase in research publications dealing with external typical testor-finding algorithms, while internal ones have been almost forgotten or modified to behave as external ones on the basis of their alleged poor performance. In this research we present a new internal typical testor-finding algorithm, called YYC, that incrementally calculates typical testors for the currently analyzed set of basic matrix rows by searching for compatible sets. The experimentally measured performance of this algorithm stands out favorably in problems where other external algorithms show very low performance. Also, a comparative analysis of its efficiency is carried out against some external typical testor-finding algorithms published during the last few years.
By
Rendón, Erendira; Sánchez, José Salvador
2 Citations
Clustering in data mining is a discovery process that groups a set of data so as to maximize the intra-cluster similarity and to minimize the inter-cluster similarity. Clustering becomes more challenging when data are categorical and the amount of available memory is less than the size of the data set. In this paper, we introduce CBC (Clustering Based on Compressed Data), an extension of the Birch algorithm whose main characteristics are that it is especially suitable for very large databases and that it can work both with categorical attributes and mixed features. The effectiveness and performance of the CBC procedure were compared with those of the well-known K-modes clustering algorithm, demonstrating that the CBC summary process does not affect the final clustering, while execution times can be drastically reduced.
By
Graff, Mario; Graff-Guerrero, Ariel; Cerda-Jacobo, Jaime
5 Citations
There is great interest in the development of semantic genetic operators to improve the performance of genetic programming. Semantic genetic operators have traditionally been developed employing either experimentally or theoretically based approaches. Our current work proposes a novel semantic crossover that sits between the two traditional approaches. Our proposed semantic crossover operator is based on the use of the derivative of the error propagated through the tree; this process decides the crossing point of the second parent. The results show that our procedure improves the performance of genetic programming on rational symbolic regression problems.
By
Quintana, Ma. Guadalupe; Morales, Rafael
We present results of using inductive logic programming (ILP) to produce learner models by behavioural cloning. Models obtained using a program for supervised induction of production rules (RIPPER) are compared to models generated using a well-known program for ILP (FOIL). It is shown that the models produced by FOIL are either too specific or too general, depending on whether or not auxiliary relations are applied. Three possible explanations for these results are: (1) there is no way of specifying to FOIL the minimum number of cases each clause must cover; (2) FOIL requires that all auxiliary relations be defined extensionally; and (3) the application domain (control of a pole on a cart) has continuous attributes. In spite of FOIL's limitations, the models it produced using auxiliary relations meet one of the goals of our exploration: to obtain more structured learner models which are easier to comprehend.
By
Solorio-Fernández, Saúl; Carrasco-Ochoa, J. Ariel; Martínez-Trinidad, José Fco.
4 Citations
In this paper, we introduce a new hybrid filter-wrapper method for supervised feature selection, based on Laplacian Score ranking combined with a wrapper strategy. We propose to rank features with the Laplacian Score to reduce the search space, and then use this order to find the best feature subset. We compare our method against other ranking-based feature selection methods, namely Information Gain Attribute Ranking, Relief and Correlation-based Feature Selection, and we additionally include a Wrapper Subset Evaluation method in our comparison. Empirical results over ten real-world datasets from the UCI repository show that our hybrid method is competitive and outperforms the other feature selection methods in most cases.
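As an illustration of the filter-wrapper idea, the sketch below combines a simplified (unsupervised) Laplacian Score ranking with a greedy forward wrapper; the evaluator is supplied by the caller, and all names and parameter choices are ours rather than the paper's:

```python
import numpy as np

def laplacian_scores(X, sigma=1.0):
    # Simplified Laplacian Score: lower means the feature varies smoothly
    # over the neighbourhood graph, i.e. it preserves locality.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / sigma)              # dense RBF similarity graph
    D = S.sum(1)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D) / D.sum()        # remove the trivial constant part
        num = (S * (f[:, None] - f[None, :]) ** 2).sum()
        den = (D * f ** 2).sum()
        scores.append(num / den if den > 0 else np.inf)
    return np.array(scores)

def rank_then_wrap(X, y, evaluate):
    # Filter stage: order features by Laplacian Score; wrapper stage:
    # grow the subset greedily, keeping a feature only if it improves
    # the caller-supplied evaluator.
    order = np.argsort(laplacian_scores(X))
    best, best_score = [], -np.inf
    for r in order:
        trial = best + [int(r)]
        s = evaluate(X[:, trial], y)
        if s > best_score:
            best, best_score = trial, s
    return best, best_score
```

Any subset evaluator can be plugged in; a cross-validated classifier accuracy would be the usual wrapper choice.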
By
Goudarzi, Alireza; Lakin, Matthew R.; Stefanovic, Darko
4 Citations
We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a high-dimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir. We show that despite using only three coupled oscillators, a molecular reservoir computer could achieve 90% accuracy on a benchmark temporal problem.
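The reservoir idea can be illustrated with a minimal echo state network in software (an electronic analogue of the molecular reservoir, not the deoxyribozyme system itself; all parameter choices below are illustrative):

```python
import numpy as np

def reservoir_run(u, n_res=50, rho=0.9, seed=1):
    # Drive a fixed random recurrent "reservoir" with the input signal and
    # collect its transient states as high-dimensional temporal features.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # scale spectral radius to rho
    w_in = rng.standard_normal(n_res)
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, target, ridge=1e-6):
    # Linear readout fitted by ridge regression -- the only trained part.
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ target)
```

A standard sanity check is a short-term memory task: train the readout to reproduce the input delayed by one step, which the reservoir's fading memory makes easy.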
By
Hernández Gómez, Raquel; Coello Coello, Carlos A.; Alba, Enrique
2 Citations
In the last decade, there has been a growing interest in multi-objective evolutionary algorithms that use performance indicators to guide the search. A simple and effective one is the $\mathcal{S}$ Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), which is based on the hypervolume indicator. Even though the maximization of the hypervolume is equivalent to achieving Pareto optimality, its computational cost increases exponentially with the number of objectives, which severely limits its applicability to many-objective optimization problems. In this paper, we present a parallel version of SMS-EMOA, where the execution time is reduced through an asynchronous island model with micro-populations, and diversity is preserved by external archives that are pruned to a fixed size employing a recently created technique based on the Parallel-Coordinates graph. The proposed approach, called $\mathcal{S}$-PAMICRO (PArallel MICRo Optimizer based on the $\mathcal{S}$ metric), is compared to the original SMS-EMOA and another state-of-the-art algorithm (HypE) on the WFG test problems using up to 10 objectives. Our experimental results show that $\mathcal{S}$-PAMICRO is a promising alternative that can solve many-objective optimization problems at an affordable computational cost.
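The hypervolume indicator driving SMS-EMOA is easy to state in two dimensions, where a simple sweep computes it; the exponential cost appears only as the number of objectives grows. A small sketch for bi-objective minimization (the function name and reference-point convention are ours):

```python
def hypervolume_2d(front, ref):
    # Area (to be maximized) dominated by a set of bi-objective minimization
    # points, measured relative to a reference point ref = (r1, r2).
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                    # points with y >= prev_y are dominated
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

Dominated points contribute nothing, so adding one to the front leaves the indicator unchanged.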
By
Santiago-Ramirez, Everardo; Gonzalez-Fraga, J. A.; Ascencio-Lopez, J. I.
2 Citations
In this paper, we compare the performance of three composite correlation filters on the facial recognition problem. We used the ORL (Olivetti Research Laboratory) facial image database to evaluate the performance of the K-Law, MACE and ASEF filters. Simulation results demonstrate that the K-Law nonlinear composite filters achieve the best performance in terms of recognition rate (RR) and false acceptance rate (FAR). As a result, we observe that correlation filters are able to work well even when the facial image contains distortions such as rotation, partial occlusion and different illumination conditions.
By
García-Valdez, Mario; Merelo Guervós, Juan Julián; Fernández de Vega, Francisco
In this paper the effect of node unavailability in algorithms using EvoSpace, a pool-based evolutionary algorithm framework, is assessed. EvoSpace is a framework for developing evolutionary algorithms (EAs) using heterogeneous and unreliable resources. It is based on Linda’s tuple space coordination model. The core elements of EvoSpace are a central repository for the evolving population and remote clients, here called EvoWorkers, which pull random samples of the population to perform on them the basic evolutionary processes (selection, variation and survival); once the work is done, the modified sample is pushed back to the central population. To address the problem of unreliable EvoWorkers, EvoSpace uses a simple reinsertion algorithm using copies of samples stored in a global queue, which also prevents the starvation of the population pool. Using a benchmark problem from the P-Peaks problem generator we have compared two approaches: (i) the reinsertion of previous individuals at the cost of keeping copies of each sample, and (ii) a common approach of other pool-based EAs, inserting randomly generated individuals. We found that EvoSpace is fault tolerant to highly unreliable resources and also that the reinsertion algorithm is only needed when the population is near the point of starvation.
By
Amaya, Ivan; Ortiz-Bayliss, José Carlos; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo; Coello Coello, Carlos A.
1 Citation
Solvers for different combinatorial optimization problems have evolved throughout the years. These range from simple strategies, such as basic heuristics, to advanced models, such as metaheuristics and hyper-heuristics. Even so, the set of benchmark instances has remained almost unaltered. Thus, any analysis of solvers has been limited to assessing their performance under those scenarios. Even if this has been fruitful, we deem it necessary to provide a tool that allows for a better study of each available solver. Because of that, in this paper we present a tool for assessing the strengths and weaknesses of different solvers, by tailoring a set of instances for each of them. We propose an evolutionary-based model and test our idea on four different basic heuristics for the 1D bin packing problem. This, however, does not limit the scope of our proposal, since it can be used in other domains and for other solvers with few changes. By pursuing an in-depth study of such tailored instances, more relevant knowledge about each solver can be derived.
By
Acevedo, Elena; Acevedo, Antonio; Felipe, Federico
2 Citations
A method for diagnosing Parkinson’s disease is presented. The proposal is based on an associative approach, and we used this method for classifying patients with Parkinson’s disease and those who are completely healthy. In particular, an Alpha-Beta Bidirectional Associative Memory is used together with the modified Johnson-Möbius codification in order to deal with mixed noise. We used three methods for testing the performance of our method: Leave-One-Out, Hold-Out and K-fold Cross Validation, and the average accuracy obtained was 97.17%.
By
Olague, Gustavo; Romero, Eva; Trujillo, Leonardo; Bhanu, Bir
3 Citations
This paper presents a linear genetic programming approach that simultaneously solves the region selection and feature extraction tasks, which are applicable to common image recognition problems. The method searches for optimal regions of interest, using texture information as its feature space and classification accuracy as the fitness function. Texture is analyzed based on the gray level co-occurrence matrix, and classification is carried out with an SVM committee. Results show effective performance compared with previous results using a standard image database.
By
Cruz-Barbosa, Raúl; Bautista-Villavicencio, David; Vellido, Alfredo
The diagnostic classification of human brain tumours on the basis of magnetic resonance spectra is a non-trivial problem in which dimensionality reduction is almost mandatory. This may take the form of feature selection or feature extraction. In feature extraction using manifold learning models, multivariate data are described through a low-dimensional manifold embedded in data space. Similarities between points along this manifold are best expressed as geodesic distances or their approximations. These approximations can be computationally intensive, and several alternative software implementations have recently been compared in terms of computation times. The current brief paper extends this research to investigate the comparative ability of dimensionality-reduced data descriptions to accurately classify several types of human brain tumours. The results suggest that the way in which the underlying data manifold is constructed in nonlinear dimensionality reduction methods strongly influences the classification results.
By
Pérez-Alfonso, Damián; Fundora-Ramírez, Osiel; Lazo-Cortés, Manuel S.; Roche-Escobar, Raciel
Process mining is concerned with the extraction of knowledge about business processes from information system logs. Process discovery algorithms are process mining techniques focused on discovering process models starting from event logs. The applicability and effectiveness of process discovery algorithms rely on features of event logs and process characteristics. Selecting a suitable algorithm for an event log is a tough task due to the variety of variables involved in this process. The traditional approaches use empirical assessment in order to recommend a suitable discovery algorithm. This is a time-consuming and computationally expensive approach. The present paper evaluates the usefulness of an approach based on classification to recommend discovery algorithms. A knowledge base was constructed, based on features of event logs and process characteristics, in order to train the classifiers. Experimental results obtained with the classifiers demonstrate the usefulness of the proposal for the recommendation of discovery algorithms.
By
Arias Montaño, Alfredo; Coello Coello, Carlos A.; Mezura-Montes, Efrén
This paper introduces a novel Parallel Multi-Objective Evolutionary Algorithm (pMOEA) which is based on the island model. The serial algorithm on which this approach is based uses the differential evolution operators as its search engine, and includes two mechanisms for improving its convergence properties (through local dominance and environmental selection based on scalar functions). Two different parallel approaches are presented. The first aims at improving effectiveness (i.e., better approximating the Pareto front) while the second aims at providing better efficiency (i.e., reducing the execution time through the use of small population sizes in each subpopulation). To assess the performance of the proposed algorithms, we adopt a set of standard test functions and performance measures taken from the specialized literature. Results are compared with respect to the serial counterpart and with respect to three algorithms representative of the state of the art in the area: NSGA-II, MOEA/D and MOEA/D-DE.
By
Ledeneva, Yulia
The task of extractive summarization consists in producing a text summary by extracting a subset of text segments, such as sentences, and concatenating them to form a summary of the original text. The selection of sentences is based on the terms they contain, which can be single words or multiword expressions. In a previous work, we have suggested so-called Maximal Frequent Sequences as such terms. In this paper, we investigate the effect of preprocessing on the process of selecting such sequences. Our results suggest that the accuracy of the method is, contrary to expectations, not seriously affected by preprocessing, which is both bad and good news, as we show.
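To make the idea concrete, here is a toy extractive summarizer in which repeated bigrams stand in for the Maximal Frequent Sequences used in the paper (a deliberate simplification; the scoring and the naive sentence splitting are ours):

```python
from collections import Counter

def summarize(text, n_sentences=2):
    # Score each sentence by how many repeated bigrams (a stand-in for
    # maximal frequent sequences) it contains; emit top sentences in
    # their original text order.
    sents = [s.strip() for s in text.split('.') if s.strip()]
    words = [w.lower() for s in sents for w in s.split()]
    bigrams = Counter(zip(words, words[1:]))
    frequent = {b for b, c in bigrams.items() if c > 1}

    def score(i):
        w = [t.lower() for t in sents[i].split()]
        return sum(b in frequent for b in zip(w, w[1:]))

    ranked = sorted(range(len(sents)), key=score, reverse=True)[:n_sentences]
    return '. '.join(sents[i] for i in sorted(ranked)) + '.'
```

Sentences sharing a repeated phrase outrank unrelated ones, which is the intuition behind frequency-based term selection.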
By
Rajsbaum, Sergio
4 Citations
In centralized computing we can compute a function by composing a sequence of elementary functions, where the output of the i-th function in the sequence is the input to the (i+1)-st function in the sequence. This computation is done without persistent registers that could store information about the outcomes of these function invocations. In distributed computing, a task is the analogue of a function. An iterated model is defined by some base set of tasks. Processes invoke a sequence of tasks from this set. Each process invokes the (i+1)-st task with its output from the i-th task. Processes access the sequence of tasks, one by one, in the same order, and asynchronously. Any number of processes can crash. In the most basic iterated model the base tasks are read/write registers. Previous papers have studied this and other iterated models with more powerful base tasks or enriched with failure detectors, which have been useful to prove impossibility results and to design algorithms, due to the elegant recursive structure of the runs. This talk surveys results in this area, contributed mainly by Borowsky, Gafni, Herlihy, Raynal, Travers and the author.
By
Ramírez-Corona, Mallinali; Sucar, L. Enrique; Morales, Eduardo F.
1 Citation
Hierarchical Multi-label Classification (HMC) is the task of assigning a set of classes to a single instance, with the peculiarity that the classes are ordered in a predefined structure. We propose a novel HMC method for tree and Directed Acyclic Graph (DAG) hierarchies. Using the combined predictions of local classifiers and a weighting scheme according to the level in the hierarchy, we select the “best” single path for tree hierarchies, and multiple paths for DAG hierarchies. We developed a method that returns paths from the root down to a leaf node (Mandatory Leaf Node Prediction or MLNP) and an extension for Non-Mandatory Leaf Node Prediction (NMLNP). For NMLNP we compared several pruning approaches, varying the pruning direction, pruning time and pruning condition. Additionally, we propose a new evaluation metric for hierarchical classifiers that avoids the bias of current measures, which favor conservative approaches when using NMLNP. The proposed approach was experimentally evaluated with 10 tree and 8 DAG hierarchical datasets in the domain of protein function prediction. We conclude that our method works better for deep, DAG hierarchies, and that in general NMLNP improves on MLNP.
By
Merelo Guervós, Juan J.; García-Valdez, J. Mario
Cloud-native applications add a layer of abstraction to the underlying distributed computing system, defining a high-level, self-scaling and self-managed architecture of different microservices linked by a messaging bus. Creating new algorithms that tap these architectural patterns and at the same time employ distributed resources efficiently is a challenge we take up in this paper. We introduce KafkEO, a cloud-native evolutionary algorithms framework that is prepared to work with different implementations of evolutionary algorithms and other population-based metaheuristics by using micro-populations and stateless services as the main building blocks; KafkEO is an attempt to map the traditional evolutionary algorithm to this new cloud-native format. As far as we know, this is the first architecture of this kind that has been published and tested; it is free software and vendor-independent, based on OpenWhisk and Kafka. This paper presents a proof of concept, examines its cost, and tests the impact of the cloud-native and asynchronous design on the algorithm by comparing it on the well-known BBOB benchmarks with other pool-based architectures, with which it has a remarkable functional resemblance. KafkEO results are quite competitive with similar architectures.
By
Wright, Alden; Poli, Riccardo; Stephens, Christopher R.; Langdon, W. B.; Pulavarty, Sandeep
6 Citations
Estimation of distribution algorithms (EDAs) are similar to genetic algorithms except that they replace crossover and mutation with sampling from an estimated probability distribution. We develop a framework for estimation of distribution algorithms based on the principle of maximum entropy and the conservation of schema frequencies. An algorithm of this type gives better performance than a standard genetic algorithm (GA) on a number of standard test problems involving deception and epistasis (i.e., Trap and NK).
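The simplest member of the EDA family keeps only per-bit marginal probabilities (UMDA); the maximum-entropy construction in the paper generalizes such product distributions by also conserving schema frequencies. A minimal UMDA on OneMax (all parameters are illustrative):

```python
import random

def umda_onemax(n=30, pop=60, sel=30, gens=40, seed=3):
    # Univariate Marginal Distribution Algorithm on OneMax: each generation,
    # estimate per-bit probabilities from the selected half of the population
    # and sample a fresh population from them (no crossover, no mutation).
    rng = random.Random(seed)
    p = [0.5] * n
    best = 0
    for _ in range(gens):
        popl = [[int(rng.random() < p[i]) for i in range(n)]
                for _ in range(pop)]
        popl.sort(key=sum, reverse=True)
        best = max(best, sum(popl[0]))
        chosen = popl[:sel]
        # maximum-likelihood marginals, clamped away from 0/1 for diversity
        p = [min(0.95, max(0.05, sum(ind[i] for ind in chosen) / sel))
             for i in range(n)]
    return best
```

Replacing the product of marginals with a maximum-entropy distribution constrained by schema frequencies is exactly where the paper's framework departs from this baseline.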
By
Garza-Fabre, Mario; Rodriguez-Tello, Eduardo; Toscano-Pulido, Gregorio
8 Citations
The HP model for protein structure prediction abstracts the fact that hydrophobicity is a dominant force in the protein folding process. This challenging combinatorial optimization problem has been widely addressed through metaheuristics. The evaluation function is a key component for the success of metaheuristics; the poor discrimination of the conventional evaluation function of the HP model has motivated the proposal of alternative formulations for this component. This comparative analysis inquires into the effectiveness of seven different evaluation functions for the HP model. The degree of discrimination provided by each of the studied functions, their capability to preserve a rank ordering among potential solutions which is consistent with the original objective of the HP model, as well as their effect on the performance of local search methods are analyzed. The obtained results indicate that studying alternative evaluation schemes for the HP model represents a highly valuable direction which merits more attention.
By
García-Hernández, René Arnulfo; Ledeneva, Yulia
4 Citations
Extractive text summarization consists in selecting the most important units (normally sentences) from the original text, but it must be done as closely as possible to how humans do it. Several interesting automatic approaches have been proposed for this task, but some of them are focused on getting a better result rather than on giving some assumptions about what humans use when producing a summary. In this research, not only are competitive results obtained, but some assumptions are also given about what humans try to represent in a summary. To reach this objective a genetic algorithm is proposed, with special emphasis on the fitness function, which makes it possible to draw some conclusions.
By
Hernández, Daniel; Olague, Gustavo; Clemente, Eddie; Dozal, León
The interaction of a visual system with its environment is studied in terms of a purposive vision system, with the aim of establishing a link between perception and action. A system that performs visuomotor tasks requires a selective perception process in order to execute specific motion actions. This combination is understood as a visual behavior. This paper presents a solution to the process of synthesizing visual behaviors through genetic programming, resulting in specialized visual routines that are used to estimate the trajectory of a camera within a vision-based simultaneous localization and map building system. Thus, the experiments were carried out with a real working system consisting of a robotic manipulator in a hand-eye configuration. The main idea is to evolve a conspicuous point detector based on the concept of an artificial dorsal stream. The results in this paper show that it is in fact possible to find key points in an image through a visual attention process in combination with an evolutionary algorithm to design specialized visual behaviors.
By
Villatoro-Tello, Esaú; Villaseñor-Pineda, Luis; Montes-y-Gómez, Manuel
1 Citation
Recent evaluation results from Geographic Information Retrieval (GIR) indicate that current information retrieval methods are effective at retrieving relevant documents for geographic queries, but have severe difficulties generating a pertinent ranking of them. Motivated by these results, in this paper we present a novel re-ranking method, which employs information obtained through a relevance feedback process to perform a ranking refinement. The experiments performed show that the proposed method improves the ranking generated by a traditional IR engine, as well as the results of traditional re-ranking strategies such as query expansion via relevance feedback.
By
Coello Coello, Carlos A.
3 Citations
This paper provides a brief introduction to the so-called multi-objective evolutionary algorithms, which are bio-inspired metaheuristics designed to deal with problems having two or more (normally conflicting) objectives. First, we provide some basic concepts related to multi-objective optimization and a brief review of approaches available in the specialized literature. Then, we provide a short review of applications of multi-objective evolutionary algorithms in pattern recognition. In the final part of the paper, we provide some possible paths for future research in this area, which are promising from the author’s perspective.
By
Ortiz-Hernández, Gustavo; Hübner, Jomi Fred; Bordini, Rafael H.; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo J.; Cruz-Ramírez, Nicandro
1 Citation
In this paper we propose a model for designing Belief-Desire-Intention (BDI) agents under the principles of modularity. We aim to encapsulate agent functionalities expressed as BDI abstractions into independent, reusable and easier-to-maintain units of code, which agents can dynamically load. The general idea of our approach is to exploit the notion of namespace to organize components such as beliefs, plans and goals. This approach allowed us to address the name-collision problem, providing interface and information-hiding features for modules. Although the proposal is suitable for agent-oriented programming languages in general, we present concrete examples in Jason.
By
Gelbukh, Alexander; Sidorov, Grigori
1 Citation
Parallel text alignment is a special type of pattern recognition task aimed at discovering the similarity between two sequences of symbols. Given the same text in two different languages, the task is to decide which elements—paragraphs in case of paragraph alignment—in one text are translations of which elements of the other text. One of the applications is training statistical machine translation algorithms. The task is not trivial unless detailed text understanding can be afforded. In our previous work we presented a simple technique that relies on bilingual dictionaries but does not perform any syntactic analysis of the texts. In this paper we give a formal definition of the task and present an exact optimization algorithm for finding the best alignment.
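The exact algorithm in the paper is dictionary-based; as a generic illustration, the best monotone alignment under any similarity function can be found by dynamic programming (the gap penalty and the scoring function below are our assumptions, not the paper's):

```python
def align(src, tgt, sim, gap=-0.5):
    # Exact best monotone alignment of two paragraph sequences by dynamic
    # programming (Needleman-Wunsch style): at each cell, either match the
    # current pair or skip a paragraph on one side.
    n, m = len(src), len(tgt)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            cands = []
            if i and j:
                cands.append(D[i-1][j-1] + sim(src[i-1], tgt[j-1]))
            if i:
                cands.append(D[i-1][j] + gap)
            if j:
                cands.append(D[i][j-1] + gap)
            D[i][j] = max(cands)
    # trace back the matched pairs
    pairs, i, j = [], n, m
    while i and j:
        if D[i][j] == D[i-1][j-1] + sim(src[i-1], tgt[j-1]):
            pairs.append((i-1, j-1)); i, j = i-1, j-1
        elif D[i][j] == D[i-1][j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

With a dictionary-backed `sim`, this finds the globally optimal alignment in O(nm) time, which is what makes an exact formulation tractable.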
By
Rodriguez-Tello, Eduardo; Hao, Jin-Kao; Torres-Jimenez, Jose
This paper introduces a new evaluation function, called δ, for the Bandwidth Minimization Problem for Graphs (BMPG). Compared with the classical β evaluation function, our δ function is much more discriminating and leads to smoother landscapes. The main characteristics of δ are analyzed and its practical usefulness is assessed within a Simulated Annealing algorithm. Experiments show that, thanks to the use of the δ function, we are able to improve on some previous best results for a set of well-known benchmarks.
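For reference, the classical β evaluation that δ improves upon is just the bandwidth of a vertex labeling; the second function sketches one way to discriminate between labelings that tie under β (illustrative only, not the paper's exact δ):

```python
def bandwidth(edges, labeling):
    # Classical beta evaluation for BMPG: the maximum absolute difference
    # between the labels of adjacent vertices (to be minimized).
    return max(abs(labeling[u] - labeling[v]) for u, v in edges)

def delta_like(edges, labeling):
    # A more discriminating surrogate (our illustration, not the paper's
    # exact delta): compare solutions lexicographically by their sorted
    # edge differences, so labelings with equal beta can still be ordered.
    return sorted((abs(labeling[u] - labeling[v]) for u, v in edges),
                  reverse=True)
```

A local search guided only by `bandwidth` sees large plateaus, since many moves leave the single maximum untouched; the finer-grained ordering rewards reducing second-largest differences too.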
By
Rodriguez-Tello, Eduardo; Torres-Jimenez, Jose
1 Citation
Genetic algorithms have been successfully applied to many difficult problems, but there have been some disappointing results as well. In these cases the choice of the internal representation and genetic operators greatly conditions the result.

In this paper a GA and a reordering algorithm were used to solve SAT instances. The reordering algorithm produces a more suitable encoding for a GA, which enables an improvement in GA performance. The attained improvement relies on the building-block hypothesis, which states that a GA works well when short, low-order, highly fit schemata (building blocks) recombine to form even more highly fit higher-order schemata. The reordering algorithm delivers a representation which has the most related bits (i.e., Boolean variables) in closer positions inside the chromosome.

The results of experimentation demonstrate that the proposed approach improves the performance of a simple GA in all the tests performed. These experiments also allow us to observe the relation among the internal representation, the genetic operators and the performance of a GA.
By
Peña Ayala, Alejandro; Sossa, Humberto
In this paper an approach oriented to acquiring, depicting, and administering knowledge about the student is proposed. Moreover, content is also characterized to describe lectures. In addition, the work focuses on the semantics of the attributes that reveal a profile of the student and the teaching experiences. The meaning of such properties is stated as an ontology. Thus, inheritance and causal inferences are made. According to the semantics of the attributes and the conclusions induced, the sequencing module of a Web-based educational system (WBES) delivers the appropriate lecture option to students. The underlying hypothesis is: the apprenticeship of students is enhanced when a WBES understands the nature of the content and the student’s characteristics. Based on the empirical evidence produced by a trial, it is concluded that successful WBESs take into account the knowledge that describes their students and lectures.
By
Brena, Ramón F.; Chesñevar, Carlos I.; Aguirre, José L.
2 Citations
Disseminating pieces of knowledge among the members of large organizations is a well-known problem in Knowledge Management, involving several decision-making processes. The JITIK multi-agent framework has been successfully used for just-in-time delivery of highly customized notifications to the adequate users in large distributed organizations. However, in JITIK, as well as in other similar approaches, it is common to deal with incomplete information and conflicting policies, making it difficult to decide whether or not to deliver a specific piece of information or knowledge on the basis of a rationally justified procedure. This paper presents an approach to cope with this problem by integrating JITIK with a defeasible argumentation formalism. Conflicts among policies are solved on the basis of a dialectical analysis whose outcome determines whether a particular information item should be delivered to a specific user.
By
Nebro, Antonio J.; Durillo, Juan J.; García-Nieto, José; Barba-González, Cristóbal; Ser, Javier; Coello Coello, Carlos A.; Benítez-Hidalgo, Antonio; Aldana-Montes, José F.
The Speed-constrained Multi-objective PSO (SMPSO) is an approach featuring an external bounded archive to store non-dominated solutions found during the search, out of which the leaders that guide the particles are chosen. Here, we introduce SMPSO/RP, an extension of SMPSO based on the idea of reference point archives. These are external archives with an associated reference point, so that only solutions that are dominated by the reference point or that dominate it are considered for possible addition. SMPSO/RP can manage several reference point archives, so it can effectively be used to focus the search on one or more regions of interest. Furthermore, the algorithm allows interactively changing the reference points during its execution. Additionally, the particles of the swarm can be evaluated in parallel. We compare SMPSO/RP with respect to three other reference-point-based algorithms. Our results indicate that our proposed approach outperforms the other techniques with respect to which it was compared when solving a variety of problems by selecting both achievable and unachievable reference points. A real-world application related to civil engineering is also included to show the real applicability of SMPSO/RP.
By
Ledeneva, Yulia; Gelbukh, Alexander; García-Hernández, René Arnulfo
9 Citations
Automatic text summarization helps the user to quickly understand large volumes of information. We present a language- and domain-independent statistical method for single-document extractive summarization, i.e., to produce a text summary by extracting some sentences from the given text. We show experimentally that words that are parts of bigrams that repeat more than once in the text are good terms to describe the text’s contents, and so are so-called maximal frequent sequences. We also show that the frequency of the term as term weight gives good results (while we only count the occurrences of a term in repeating bigrams).
By
Pescador-Rojas, Miriam; Hernández Gómez, Raquel; Montero, Elizabeth; Rojas-Morales, Nicolás; Riff, María-Cristina; Coello Coello, Carlos A.
1 Citation
Scalarizing functions play a crucial role in multi-objective evolutionary algorithms (MOEAs) based on decomposition and the R2 indicator, since they guide the population towards nearly optimal solutions, assigning a fitness value to an individual according to a predefined target direction in objective space. This paper presents a general review of unconstrained weighted scalarizing functions, which have been proposed not only within evolutionary multi-objective optimization but also in the mathematical programming literature. We also investigate their scalability up to 10 objectives, using the Lamé Superspheres test problems on the MOEA/D and MOMBI-II frameworks. For this purpose, the best-suited scalarizing functions and their model parameters are determined through the evolutionary calibrator EVOCA. Our experimental results reveal that some of these scalarizing functions are quite robust and suitable for handling many-objective optimization problems.
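Two of the most common weighted scalarizing functions reviewed in such studies can be written in a few lines; for minimization with weight vector w and ideal point z (our notation):

```python
def weighted_sum(f, w):
    # Linear scalarization: cheap, but cannot reach points on
    # non-convex regions of the Pareto front.
    return sum(wi * fi for wi, fi in zip(w, f))

def tchebycheff(f, w, z):
    # Weighted Tchebycheff scalarization w.r.t. the ideal point z:
    # minimizing it can reach any Pareto-optimal point, at the cost
    # of a non-smooth max landscape.
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))
```

Decomposition-based MOEAs such as MOEA/D assign one weight vector per subproblem and minimize one of these functions along each target direction.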
By
López, Juan Carlos; Monroy, Raúl
The inductive approach [1] has been successfully used for verifying a number of security protocols, uncovering hidden assumptions. Yet it requires a high level of skill to use: a user must guide the proof process, selecting the tactic to be applied, inventing a key lemma, etc. This paper suggests that a proof planning approach [2] can provide automation in the verification of security protocols using the inductive approach. Proof planning uses AI techniques to guide theorem provers. It has been successfully applied in formal methods for software development. Using inductive proof planning [3], we have written a method which takes advantage of the differences in term structure introduced by rule induction, a chief inference rule in the inductive approach. Using difference matching [4], our method first identifies the differences between a goal and the associated hypotheses. Then, using rippling [5], the method attempts to remove such differences. We have successfully conducted a number of experiments using HOL-Clam [6], a socket-based link that combines the HOL theorem prover [7] and the Clam proof planner [8]. While this paper's key contribution centres around a new insight into structuring the proof of some security theorems, it also reports on the development of the inductive approach within the HOL system.
By
Oliveira, Thomaz; López, Julio; Rodríguez-Henríquez, Francisco
4 Citations
In this work, we retake an old idea presented by Koblitz in his landmark paper [21], where he suggested the possibility of defining anomalous elliptic curves over the base field $\mathbb{F}_4$. We present a careful implementation of the base and quadratic field arithmetic required for computing the scalar multiplication operation on such curves. In order to achieve a fast reduction procedure, we adopted a redundant trinomial strategy that embeds elements of the field $\mathbb{F}_{4^m}$, with m a prime number, into a ring of higher order defined by an almost irreducible trinomial. We also report a number of techniques that allow us to take full advantage of the native vector instructions of high-end microprocessors. Our software library achieves the fastest timings reported for the computation of the timing-protected scalar multiplication on Koblitz curves, and competitive timings with respect to the speed records established recently for the computation of the scalar multiplication over prime fields.
By
Salomon, Shaul; Avigad, Gideon; Goldvard, Alex; Schütze, Oliver
6 Citations
It has generally been acknowledged that both proximity to the Pareto front and a certain diversity along the front should be targeted when using evolutionary algorithms to evolve solutions to multiobjective optimization problems. Although many evolutionary algorithms are equipped with mechanisms to achieve both targets, most give priority to proximity over diversity. This priority is embedded within the algorithms through the selection of solutions to the elite population based on the concept of dominance. Although the current study does not change this embedded preference, it does utilize an improved diversity preservation mechanism that is based on a recently introduced partitioning algorithm for function selection. It is shown that this partitioning allows for the selection of a well-diversified set out of an arbitrary given set. Further, when embedded into an evolutionary search, this procedure significantly enhances the exploitation of diversity. The procedure is demonstrated on commonly used test cases for up to five objectives. The potential for further improving evolutionary algorithms through the use of the partitioning algorithm is highlighted.
By
Durillo, Juan J.; García-Nieto, José; Nebro, Antonio J.; Coello, Carlos A. Coello; Luna, Francisco; Alba, Enrique
49 Citations
Particle Swarm Optimization (PSO) has received increasing attention in the optimization research community since its first appearance in the mid-1990s. Regarding multiobjective optimization, a considerable number of algorithms based on Multi-Objective Particle Swarm Optimizers (MOPSOs) can be found in the specialized literature. Unfortunately, no experimental comparisons have been made in order to clarify which MOPSO version shows the best performance. In this paper, we use a benchmark composed of three well-known problem families (ZDT, DTLZ, and WFG) with the aim of analyzing the search capabilities of six representative state-of-the-art MOPSOs, namely, NSPSO, SigmaMOPSO, OMOPSO, AMOPSO, MOPSOpd, and CLMOPSO. We additionally propose a new MOPSO algorithm, called SMPSO, characterized by including a velocity constraint mechanism, which obtains promising results on problems where the rest perform inadequately.
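The velocity constraint mechanism that characterizes SMPSO can be sketched roughly as follows. This is a minimal illustration assuming the Clerc-style constriction coefficient and per-variable velocity bounds commonly described for SMPSO; the function names are ours, not the authors':

```python
import math

def constriction(c1, c2):
    """Clerc-style constriction coefficient computed from the two
    learning factors; applied only when c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        return 1.0
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def constrain_velocity(velocity, lower, upper):
    """Clamp each velocity component to +/- (upper_j - lower_j) / 2,
    so a particle cannot cross the whole variable range in one step."""
    deltas = [(u - l) / 2.0 for l, u in zip(lower, upper)]
    return [max(-d, min(d, v)) for v, d in zip(velocity, deltas)]
```

The clamping is what prevents the "swarm explosion" that makes plain MOPSOs erratic on some problem families.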
By
Castro-Manzano, José Martín
Intentional reasoning is a form of logical reasoning with a temporal-intentional and defeasible nature. By covering conditions of material and formal adequacy, we describe a defeasible logic of intention to support the thesis that intentional reasoning is bona fide reasoning.
By
Ritter, Gerhard X.; Urcid, Gonzalo
This paper presents an overview of the current status of lattice-based dendritic computing. Roughly speaking, lattice-based dendritic computing refers to a biomimetic approach to artificial neural networks whose computational aspects are based on lattice group operations. We begin our presentation by discussing some important processes of biological neurons, followed by a biomimetic model which implements these processes. We discuss the reasons and rationale behind this approach and illustrate the methodology with some examples. Global activities in this field as well as some potential research issues are also part of this discussion.
By
Fernández-Zepeda, José Alberto; Brubeck-Salcedo, Daniel; Fajardo-Delgado, Daniel; Zatarain-Aceves, Héctor
We address the bodyguard allocation problem (BAP), an optimization problem that illustrates the conflict of interest between two classes of processes with contradictory preferences within a distributed system. While one class of processes prefers to minimize its distance to a particular process called the root, the other class prefers to maximize it; at the same time, all the processes seek to build a communication spanning tree with the maximum social welfare. The two state-of-the-art algorithms for this problem always guarantee the generation of a spanning tree that satisfies a condition of Nash equilibrium in the system; however, such a tree does not necessarily produce the maximum social welfare. In this paper, we propose a two-player coalition cooperative scheme for BAP, which allows some processes to perturb or break a Nash equilibrium to find another one with a better social welfare. Using this cooperative scheme, we propose a new algorithm for BAP, called FFCBAP_{S}. We present both theoretical and empirical analyses which show that this algorithm produces better quality approximate solutions than former algorithms for BAP.
By
Clemente, Eddie; Olague, Gustavo; Dozal, Leon
1 Citation
This work presents a novel approach to synthesize an artificial visual cortex based on what we call organic genetic programming. Primate brains have several distinctive features that help in the outstanding display of perception achieved by the visual system, including binocular vision, memory, learning, and recognition, to mention but a few. These features are processed by a complex arrangement of highly interconnected and numerous cortical visual areas. This paper describes a system composed of an artificial dorsal pathway, or “where” stream, and an artificial ventral pathway, or “what” stream, which are fused to create a kind of artificial visual cortex. The idea is to show that genetic programming is able to evolve a high number of heterogeneous trees thanks to the hierarchical structure of our virtual brain. Thus, the proposal uses two key ideas: 1) the recognition of objects can be achieved by a hierarchical structure using the concept of function composition, and 2) the evolved functions can be related to the tissues of an artificial organ. Experimental results provide evidence that high recognition rates can be achieved for a well-known multiclass object recognition problem.
By
López Jaimes, Antonio; Coello, Carlos A. Coello; Urías Barrientos, Jesús E.
24 Citations
In this paper, we propose and analyze two schemes to integrate an objective reduction technique into a multiobjective evolutionary algorithm (MOEA) in order to cope with many-objective problems. One scheme periodically reduces the number of objectives during the search until the required objective subset size has been reached and, towards the end of the search, the original objective set is used again. The second approach is a more conservative scheme that alternately uses the reduced and the entire set of objectives to carry out the search. Besides improving computational efficiency by removing some objectives, the experimental results showed that both objective reduction schemes also considerably improve the convergence of a MOEA in many-objective problems.
By
Olvera-López, J. Arturo; Martínez-Trinidad, J. Francisco; Carrasco-Ochoa, J. Ariel
3 Citations
Object selection is an important task for instance-based classifiers since, through this process, the size of a training set can be reduced, which in turn reduces the runtimes of both the training and classification steps. Several methods for object selection have been proposed, but some of them discard objects that are relevant for the classification step. In this paper, we propose an object selection method based on the idea of sequential floating search. This method reconsiders the inclusion of relevant objects previously discarded. Experimental results obtained by our method are shown and compared against some other object selection methods.
By
Gómez Soriano, José Manuel; Montes y Gómez, Manuel; Sanchis Arnal, Emilio; Rosso, Paolo
13 Citations
In this paper we present a new method to improve the coverage of Passage Retrieval (PR) systems when these systems are employed for Question Answering (QA) tasks. The ranking of passages obtained by the PR system is rearranged to emphasize those passages with a higher probability of containing the answer. The new ranking is based on finding the n-gram structures of the question that appear in the passage; the weight of a passage increases when it contains longer n-gram structures of the question. The results we present show that the application of this method notably improves the coverage of the classical PR system based on the Vector Space Model.
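The re-ranking idea can be sketched as follows. This is a simplified illustration of n-gram-overlap scoring, not the authors' exact weighting; the whitespace tokenization and the length-proportional weights are our assumptions:

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def passage_weight(question, passage, max_n=3):
    """Score a passage by the question n-grams it contains; longer
    shared n-grams contribute proportionally more to the score."""
    q, p = question.lower().split(), passage.lower().split()
    score = 0.0
    for n in range(1, max_n + 1):
        shared = ngrams(q, n) & ngrams(p, n)
        score += n * len(shared)
    return score

def rerank(question, passages):
    """Rearrange the PR system's passages, best candidates first."""
    return sorted(passages, key=lambda p: passage_weight(question, p),
                  reverse=True)
```

A passage that preserves a long question phrase verbatim thus outranks one that merely shares isolated words, which is the intuition behind the coverage improvement.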
By
Metzner, Michael; Lizarraga, Jesus A.; Bobda, Christophe
1 Citation
In this paper a novel coarse-grained architecture virtualization for Field Programmable Gate Arrays (FPGAs) is presented, which can be used as a basis for runtime dynamic hardware multithreading. The architecture uses on-chip networking to interconnect routers and computational elements, providing a flexible and highly configurable structure. Quadratic routers reduce the total router count while ensuring short communication paths and minimal resource overhead.
By
Denicia-Carral, Claudia; Montes-y-Gómez, Manuel; Villaseñor-Pineda, Luis; Hernández, René García
7 Citations
This paper describes a method for definition question answering based on the use of surface text patterns. The method is specially suited to answer questions about persons' positions and acronyms' descriptions. It considers two main steps. First, it applies a sequence-mining algorithm to discover a set of definition-related text patterns from the Web. Then, using these patterns, it extracts a collection of concept-description pairs from a target document database, and applies the sequence-mining algorithm to determine the most adequate answer to a given question. Experimental results on the Spanish CLEF 2005 data set indicate that this method can be a practical solution for answering this kind of definition questions, reaching a precision as high as 84%.
By
Fraigniaud, Pierre; Rajsbaum, Sergio; Roy, Matthieu; Travers, Corentin
1 Citation
This paper carries on the effort of bridging runtime verification with distributed computability, studying necessary conditions for monitoring failure-prone asynchronous distributed systems. It has recently been proved that there are correctness properties that require a large number of opinions to be monitored, an opinion being of the form true, false, perhaps, probably true, probably no, etc. The main outcome of this paper is to show that this large number of opinions is not an artifact induced by the existence of artificial constructions. Instead, monitoring an important class of properties, requiring processes to produce at most k different values, does require such a large number of opinions. Specifically, our main result is a proof that it is impossible to monitor k-set-agreement in an n-process system with fewer than min{2k, n} + 1 opinions. We also provide an algorithm to monitor k-set-agreement with min{2k, n} + 1 opinions, showing that the lower bound is tight.
By
Pakray, Partha; Gelbukh, Alexander; Bandyopadhyay, Sivaji
2 Citations
We present an Answer Validation (AV) system based on Textual Entailment and Question Answering. The important features used to develop the AV system are lexical Textual Entailment, Named Entity Recognition, question-answer type analysis, a chunk boundary module, and a syntactic similarity module. The proposed AV system is rule-based. We first combine the question and the answer into a Hypothesis (H) and take the Supporting Text as Text (T) to identify the entailment relation as either “VALIDATED” or “REJECTED”. The important features used for the lexical Textual Entailment module in the present system are: WordNet-based unigram match, bigram match, and skip-gram. In the syntactic similarity module, the important features used are: subject-subject comparison, subject-verb comparison, object-verb comparison, and cross subject-verb comparison. The results obtained from the answer validation modules are integrated using a voting technique. For training purposes, we used the AVE 2008 development set. Evaluation scores obtained on the AVE 2008 test set show 66% precision and 65% F-score for the “VALIDATED” decision.
By
Oliveira, Thomaz; López, Julio; Hışıl, Hüseyin; Faz-Hernández, Armando; Rodríguez-Henríquez, Francisco
In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery-ladder scalar multiplication function based on two recently adopted elliptic curves, “curve25519” and “curve448”. The purpose of this function is to support the Diffie-Hellman key exchange algorithm that will be included in the forthcoming version of the Transport Layer Security cryptographic protocol. In this paper, we describe a ladder variant that accelerates the fixed-point multiplication function inherent to the Diffie-Hellman key pair generation phase. Our proposal combines a right-to-left version of the Montgomery ladder with the precomputation of constant values directly derived from the base point and its multiples. To our knowledge, this is the first proposal of a Montgomery ladder procedure for prime elliptic curves that admits the extensive use of precomputation. In exchange for very modest memory resources and a small extra programming effort, the proposed ladder obtains significant speedups for software implementations. Moreover, our proposal fully complies with the RFC 7748 specification. A software implementation of the X25519 and X448 functions using our precomputable ladder yields an acceleration factor of roughly 1.20 on the Haswell microarchitecture, and of 1.25 on the Skylake microarchitecture.
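The classical left-to-right Montgomery ladder that RFC 7748 builds on processes the scalar bit by bit, keeping a pair of accumulators and performing the same operation pattern for every bit. Its structure can be sketched on modular exponentiation, where the identical ladder computes g^k mod p; elliptic-curve versions replace multiplication and squaring with point addition and doubling. This is a structural illustration only, not the authors' precomputable right-to-left variant:

```python
def montgomery_ladder_pow(g, k, p):
    """Left-to-right Montgomery ladder maintaining the invariant
    r1 == r0 * g (mod p). Both branches perform one multiply and one
    square per bit, which is what makes the access pattern regular
    and therefore friendly to timing-protected implementations."""
    r0, r1 = 1, g % p
    for bit in format(k, "b"):
        if bit == "0":
            r1 = (r0 * r1) % p   # "add" step
            r0 = (r0 * r0) % p   # "double" step
        else:
            r0 = (r0 * r1) % p
            r1 = (r1 * r1) % p
    return r0
```

The right-to-left variant described in the abstract reorders this loop so that one of the two accumulators depends only on the base point, which is what opens the door to precomputation.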
By
Posadas, Román; Mex-Perera, Carlos; Monroy, Raúl; Nolazco-Flores, Juan
6 Citations
This paper focuses on the study of a new method for detecting masqueraders in computer systems. The main feature of such masqueraders is that they have knowledge about the behavior profile of legitimate users. The dataset provided by Schonlau et al. [1], called SEA, has been modified to include synthetic sessions created by masqueraders using the behavior profile of the users they intend to impersonate. We propose a hybrid method for the detection of masqueraders based on the compression of the users' sessions and Hidden Markov Models. The performance of the proposed method is evaluated using ROC curves and compared against other known methods. As shown by our experimental results, the proposed detection mechanism is the best among the methods considered here.
By
Oca, Víctor Montes; Torres, Miguel; Levachkine, Serguei; Moreno, Marco
In this paper, we propose the use of a knowledge-based system, implemented in SWI-Prolog, to approach the automatic description of spatial data by means of logic rules. The process of establishing the predicates is based on the topological and geometrical analysis of spatial data. These predicates are handled by a set of rules, which are used to find the relations between geospatial objects. Moreover, the rules aid in searching for the several features that compose the partition of topographic maps. For instance, when one road intersects another, a connection relation exists between the different destinations that can be reached by these roads, and the rules help us find each possible access for this case. Therefore, this description assists in the tasks of geospatial data interpretation (map description) in order to provide quality information for spatial decision support systems.
By
Gutiérrez, Francisco J.; Lerma-Rascón, Margarita M.; Salgado-Garza, Luis R.; Cantú, Francisco J.
3 Citations
Biometrics is the field that differentiates among people based on their unique biological and physiological patterns, such as the retina, fingerprints, DNA, and keyboard typing patterns, to name a few. Keystroke Dynamics is a physiological biometric that measures the unique typing rhythm and cadence of a computer keyboard user. This paper presents a Data Mining-based Keystroke Dynamics application for identity verification, and it reports the results of experiments comparing different approaches to Keystroke Dynamics. The methods compared were Decision Trees, a Naïve Bayesian Classifier, Memory Based Learning, and statistics-based Keystroke Dynamics.
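The typing-rhythm features such systems typically feed into their classifiers can be sketched as follows. This is a minimal illustration using a hypothetical event format of (key, press, release) timestamps in milliseconds; the feature names and format are our assumptions, not the paper's:

```python
def keystroke_features(events):
    """Derive timing features from (key, press_ms, release_ms) tuples:
    dwell = how long each key is held down; flight = gap between
    releasing one key and pressing the next (negative when keystrokes
    overlap, which is itself a distinctive habit of fast typists)."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight
```

Vectors of dwell and flight times over a fixed phrase are what a decision tree or Bayesian classifier would then compare against a user's enrolled profile.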
By
Cleofas, Laura; Valdovinos, Rosa Maria; García, Vicente; Alejo, Roberto
2 Citations
In real-world applications, it has been observed that class imbalance (significant differences in class prior probabilities) may produce an important deterioration of the classifier performance, in particular for patterns belonging to the less represented classes. One method to tackle this problem consists of resampling the original training set, either by oversampling the minority class and/or undersampling the majority class. In this paper, we propose two ensemble models (using a modular neural network and the nearest neighbor rule) trained on datasets undersampled with genetic algorithms. Experiments with real datasets demonstrate the effectiveness of the proposed methodology.
By
Batyrshin, Ildar; Solovyev, Valery
2 Citations
The paper introduces new time series shape association measures based on Euclidean distance. A method for analyzing associations between time series, based on the separate analysis of positively and negatively associated local trends, is discussed. Examples of applying the proposed measures and methods to the analysis of associations between historical prices of securities obtained from Google Finance are considered. An example of time series with inverse associations between them is discussed.
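The local-trend idea can be illustrated with a toy measure. This is our simplification: it compares only the signs of consecutive changes, whereas the paper develops Euclidean-distance-based measures; all names here are ours:

```python
def local_trends(series):
    """Sign of the change over each consecutive pair of samples:
    +1 rising, -1 falling, 0 flat."""
    return [(x2 > x1) - (x2 < x1) for x1, x2 in zip(series, series[1:])]

def trend_association(a, b):
    """Fraction of co-moving minus counter-moving local trends of two
    equal-length series, in [-1, 1]; +1 for lockstep movement, -1 for
    perfectly inverse movement."""
    ta, tb = local_trends(a), local_trends(b)
    agree = sum(1 for x, y in zip(ta, tb) if x == y != 0)
    disagree = sum(1 for x, y in zip(ta, tb) if x == -y != 0)
    return (agree - disagree) / max(len(ta), 1)
```

Two price series that always rise and fall together score +1, while a hedge-like pair that moves in opposition scores -1, which is the "inverse association" case the abstract mentions.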
By
Hernández, Víctor Adrián Sosa; Schütze, Oliver; Trautmann, Heike; Rudolph, Günter
In this paper we investigate some aspects of stochastic local search, such as pressure toward and along the set of interest, within parameter-dependent multiobjective optimization problems. The discussions and initial computations indicate that the problem of computing an approximation of the entire solution set of such a problem via stochastic search algorithms is well-conditioned. The new insights may be helpful for the design of novel stochastic search algorithms, such as specialized evolutionary approaches. In particular, the discussion indicates that it might be beneficial to integrate the set of external parameters directly into the search instead of computing projections of the solution sets separately by fixing the value of the external parameter.
By
Domínguez-Medina, Christian; Cruz-Cortés, Nareli
2 Citations
Wireless Sensor Networks have become an active research topic in recent years. Routing is a very important part of this kind of network that needs to be considered in order to maximize the network lifetime. As the size of the network increases, routing becomes more complex due to the number of sensor nodes in the network. Sensor nodes in Wireless Sensor Networks are very constrained in memory capabilities, processing power, and battery life. Ant Colony Optimization based routing algorithms have been proposed to solve the routing problem while dealing with these constraints. We present a comparison of two Ant Colony-based routing algorithms, taking into account current amounts of energy consumption under different scenarios and reporting the usual metrics for routing in wireless sensor networks.
By
Cabrera, Juan Carlos Fuentes; Coello, Carlos A. Coello
7 Citations
In this chapter, we present a multiobjective evolutionary algorithm (MOEA) based on the heuristic called “particle swarm optimization” (PSO). This multiobjective particle swarm optimizer (MOPSO) is characterized by using a very small population size, which allows it to require a very low number of objective function evaluations (only 3000 per run) to produce reasonably good approximations of the Pareto front of problems of moderate dimensionality. The proposed approach first selects the leader and then selects the neighborhood for integrating the swarm. The leader selection scheme adopted is based on Pareto dominance and uses a neighbor density estimator. Additionally, the proposed approach performs a reinitialization process for preserving diversity and uses two external archives: one for storing the solutions that the algorithm finds during the search process and another for storing the final solutions obtained. Furthermore, a mutation operator is incorporated to improve the exploratory capabilities of the algorithm. The proposed approach is validated using standard test functions and performance measures reported in the specialized literature. Our results are compared with respect to those generated by the Nondominated Sorting Genetic Algorithm II (NSGA-II), which is a MOEA representative of the state-of-the-art in the area.
By
Aguilar-González, Pablo M.; Kober, Vitaly
1 Citation
Correlation filters for object detection and location estimation are commonly designed assuming that the shape and gray-level structure of the object of interest are explicitly available. In this work we propose the design of correlation filters when the appearance of the target is given in a single training image. The target is assumed to be embedded in a cluttered background, and the image is assumed to be corrupted by additive sensor noise. The designed filters are used to detect the target in an input scene modeled by the non-overlapping signal model. An optimal correlation filter, with respect to the peak-to-output energy ratio criterion, is proposed for object detection and location estimation. We also present estimation techniques for the required parameters. Computer simulation results obtained with the proposed filters are presented and compared with those of common correlation filters.
By
Caballero, Leydi; Moreno, Ana M.; Seffah, Ahmed
3 Citations
Human centricity refers to the active involvement in the overall product lifecycle of different human actors, including end-users, stakeholders, and providers. Persona is one of the different tools that exist for human centricity. While marketing is the original domain in which persona was introduced, this technique has also been widely used in user-centered design (UCD). In both domains, persona has demonstrated its potential as an efficient tool for grouping users or customers and focusing on their needs, goals, and behavior. A segmentation technique is generally used with persona in order to group individual users according to their common features, identifying within these groups those that represent a pattern of human behavior. This paper investigates how persona has been used to improve usability in the agile development domain, while studying which contributions from marketing and HCI have enriched persona in this agile context.
By
Pérez-Suárez, Airel; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, Jesús A.; Medina-Pagola, José E.
Most of the clustering algorithms reported in the literature build disjoint clusters; however, there are several applications where overlapping clustering is useful and important. Although several overlapping clustering algorithms have been proposed, most of them have a high computational complexity or they have some limitations which reduce their usefulness in real problems. In this paper, we introduce a new overlapping clustering algorithm, which solves the limitations of previous algorithms, while it has an acceptable computational complexity. The experimentation, conducted over several standard collections, demonstrates the good performance of the proposed algorithm.
By
Garcia-Ceja, Enrique; Osmani, Venet; Maxhuni, Alban; Mayora, Oscar
2 Citations
Social interactions play an important role in overall well-being. The current practice of monitoring social interactions through questionnaires and surveys is inadequate due to recall bias, memory dependence, and high end-user effort. However, the sensing capabilities of smartphones can play a significant role in the automatic detection of social interactions. In this paper, we describe our method of detecting interactions between people, specifically focusing on interactions that occur in synchrony, such as walking. Walking together is an important aspect of social activity and thus can be used to provide better insight into social interaction patterns. For this work, we rely on sampling smartphone accelerometer and WiFi sensors only. We analyse WiFi and accelerometer data separately and combine them to detect walking in synchrony. The results show that, from seven days of monitoring seven subjects in a real-life setting, we achieve 99% accuracy, 77.2% precision, and 90.2% recall detection rates when combining both modalities.
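A crude version of the accelerometer side of such detection can be sketched as follows. This is our simplification: Pearson correlation of two acceleration streams with an assumed threshold; the paper's actual pipeline combines WiFi proximity with more elaborate accelerometer features:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def walking_together(acc_a, acc_b, threshold=0.7):
    """Flag two subjects as walking in synchrony when their
    time-aligned accelerometer streams are strongly correlated.
    The 0.7 threshold is an arbitrary placeholder, not the paper's."""
    return pearson(acc_a, acc_b) >= threshold
```

In practice the streams would first be time-aligned and band-pass filtered around typical step frequencies before correlating.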
By
Hernández-Rodríguez, Selene; Carrasco-Ochoa, J. A.; Martínez-Trinidad, J. Fco.
The k nearest neighbor (kNN) classifier has been extensively used as a nonparametric technique in Pattern Recognition. However, in some applications where the training set is large, the exhaustive kNN classifier becomes impractical. Therefore, many fast kNN classifiers have been developed to avoid this problem. Most of these classifiers rely on metric properties, usually the triangle inequality, to reduce the number of prototype comparisons. However, in soft sciences, the prototypes are usually described by qualitative and quantitative features (mixed data), and sometimes the comparison function does not satisfy the triangle inequality. Therefore, in this work, a fast k most similar neighbor (kMSN) classifier, which uses a Tree structure and an Approximating and Eliminating approach for Mixed Data, not based on metric properties (Tree AEMD), is introduced. The proposed classifier is compared against other fast kNN classifiers.
By
Jaimes, Antonio López; Aguirre, Hernán; Tanaka, Kiyoshi; Coello Coello, Carlos A.
1 Citation
Here, we present a partition strategy to generate objective subspaces based on the analysis of the conflict information obtained from the Pareto front approximation found by an underlying multiobjective evolutionary algorithm. By grouping objectives in terms of the conflict among them, we aim to separate the multiobjective optimization problem into several subproblems in such a way that each of them contains the information to preserve as much as possible the structure of the original problem. The ranking and parent selection are independently performed in each subspace. Our experimental results show that the proposed conflict-based partition strategy outperforms NSGA-II in all the test problems considered in this study. In problems in which the degree of conflict among the objectives is significantly different, the conflict-based strategy achieves its best performance.
By
Kuri-Morales, Angel; Cortés-Arce, Iván
Computer networks are usually balanced by appealing to personal experience and heuristics, without taking advantage of the behavioral patterns embedded in their operation. In this work we report the application of computational intelligence tools to find such patterns and take advantage of them to improve the network's performance. The traditional traffic flow of a computer network is improved by the concatenated use of the following “tools”: a) applying intelligent agents, b) forecasting the traffic flow of the network via Multi-Layer Perceptrons (MLPs), and c) optimizing the forecasted network's parameters with a genetic algorithm. We discuss the implementation and experimentally show that each consecutive tool introduced improves the behavior of the network. This incremental improvement can be explained by characterizing the network's dynamics as a set of patterns emerging in time.
By
Chrobak, Marek; Epstein, Leah; Noga, John; Sgall, Jiří; Stee, Rob; Tichý, Tomáš; Vakhania, Nodari
8 Citations
The following scheduling problem is studied: we are given a set of tasks with release times, deadlines, and profit rates. The objective is to determine a single-processor preemptive schedule of the given tasks that maximizes the overall profit. In the standard model, each completed task brings profit, while non-completed tasks do not. In the metered model, a task brings profit proportional to its execution time even if not completed. For the metered task model, we present an efficient offline algorithm and improve both the lower and upper bounds on the competitive ratio of online algorithms. Furthermore, we prove three lower bound results concerning resource augmentation in both models.
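The metered model can be made concrete with a unit-time greedy simulation: at every step, run the released, unexpired, unfinished task with the highest profit rate, and accrue its rate for that unit even if the task never completes. This is a heuristic illustration of how metered profit accrues, not the efficient offline algorithm the paper presents; the task tuple format is our assumption:

```python
def metered_schedule(tasks, horizon):
    """Greedy unit-time scheduler for the metered model.
    tasks: list of (release, deadline, length, rate) with integer times.
    Each executed unit of task i earns rate_i, completed or not."""
    remaining = {i: t[2] for i, t in enumerate(tasks)}
    profit = 0.0
    for now in range(horizon):
        # tasks already released, not yet past deadline, not finished
        avail = [i for i, (r, d, _, _) in enumerate(tasks)
                 if r <= now < d and remaining[i] > 0]
        if not avail:
            continue
        j = max(avail, key=lambda i: tasks[i][3])  # highest profit rate
        remaining[j] -= 1
        profit += tasks[j][3]
    return profit
```

Preemption is implicit: the chosen task can change at every unit boundary, which is exactly what the metered model permits without penalty.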
By
Farré, Jacques; Gálvez, José Fortes
4 Citations
Parser generation tools currently used for computer language analysis rely on user wisdom in order to resolve grammar conflicts. Here, practical LR(0)-based parser generation is introduced, with automatic conflict resolution by potentially unbounded lookahead exploration. The underlying LR(0) automaton's item dependence graph is used for lookahead DFA construction. A bounded graph-connect technique overcomes the difficulties of previous approaches with empty rules, and compact coding allows right-hand contexts to be precisely resumed. The resulting parsers are deterministic and linear, and accept a large class of LR-regular grammars including LALR(k). Their construction is formally introduced, shown to be decidable, and illustrated by a detailed example.
By
Rosales-Pérez, Alejandro; Reyes-García, Carlos A.; Gonzalez, Jesus A.; Arch-Tirado, Emilio
3 Citations
In recent years, infant cry recognition has been of particular interest because the cry contains useful information to determine whether the infant is hungry, is in pain, or has a particular disease. Several studies have been performed in order to differentiate between these kinds of cries. In this work, we propose to use Genetic Selection of a Fuzzy Model (GSFM) for the classification of infant cries. GSFM selects the combination of feature selection methods, type of fuzzy processing, learning algorithm, and its associated parameters that best fits the data. The experiments demonstrate the feasibility of this technique for the classification task. Our experimental results reach up to 99.42% accuracy.
By
Gelbukh, Alexander; Sidorov, Grigori; Vera-Félix, José Ángel
The paper presents a bilingual English-Spanish parallel corpus aligned at the paragraph level. The corpus consists of twelve large novels found on the Internet and converted into text format, with manual correction of formatting problems and errors. We used a dictionary-based algorithm for automatic alignment of the corpus, and an evaluation of the alignment results is given. There are very few available resources as far as parallel fiction texts are concerned, while they are a non-trivial case of alignment of considerable size. Usually, approaches to automatic alignment that are based on linguistic data are applied to texts in restricted domains, such as laws or manuals. It is not obvious that these methods are applicable to fiction texts, because such texts have many more cases of non-literal translation than texts in restricted domains. We show that the results of alignment for fiction texts using a dictionary-based method are good; namely, they produce state-of-the-art precision.
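The core of a dictionary-based alignment weight can be sketched as a toy score. This is our own simplification with an assumed dictionary format; the authors' algorithm additionally has to handle paragraph ordering, lengths, and one-to-many correspondences:

```python
def translation_score(src_tokens, tgt_tokens, dictionary):
    """Fraction of source-paragraph tokens whose dictionary translation
    appears in the candidate target paragraph; a crude stand-in for the
    weight a dictionary-based aligner assigns to a paragraph pair.
    dictionary maps a source word to a tuple of possible translations."""
    if not src_tokens:
        return 0.0
    hits = sum(1 for w in src_tokens
               if any(t in tgt_tokens for t in dictionary.get(w, ())))
    return hits / len(src_tokens)
```

An aligner would compute this score for nearby paragraph pairs and pick the correspondence that maximizes the total score, which is robust to the moderately free translation typical of fiction.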
