Showing 1 to 100 of 218 matching Articles
By
Ita, Guillermo; Bello, Pedro; Contreras, Meliza
Counting the models of a two conjunctive form (the #2SAT problem) is a classic #P-complete problem. We determine different structural patterns on the underlying graph of a 2CF F that allow the efficient computation of #2SAT(F).
We show that if the constrained graph of a formula is acyclic, or the cycles in the graph can be arranged as independent and embedded cycles, then the number of models of F can be counted efficiently.
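The acyclic case admits a compact dynamic-programming illustration. The sketch below is not the authors' algorithm; it assumes a forest-shaped constrained graph with at most one clause per variable pair, and counts models bottom-up over each tree:

```python
from collections import defaultdict

def count_models_forest(n, clauses):
    """Count models of a 2CF whose constrained graph is a forest.
    Variables are 1..n; each clause ((v1, s1), (v2, s2)) means
    (literal of v1) OR (literal of v2), where s=True is the positive literal.
    Assumes at most one clause per variable pair and no cycles."""
    adj = defaultdict(list)
    for (v1, s1), (v2, s2) in clauses:
        adj[v1].append((v2, s1, s2))
        adj[v2].append((v1, s2, s1))

    seen = set()

    def dfs(v, parent):
        # cnt[b] = number of models of v's subtree with v assigned b
        seen.add(v)
        cnt = [1, 1]
        for u, sv, su in adj[v]:
            if u == parent:
                continue
            cu = dfs(u, v)
            for b in (0, 1):
                if (b == 1) == sv:       # v's literal already satisfies the clause
                    cnt[b] *= cu[0] + cu[1]
                else:                    # u's literal must be true
                    cnt[b] *= cu[1] if su else cu[0]
        return cnt

    total = 1
    for v in range(1, n + 1):
        if v not in seen:
            c = dfs(v, None)
            total *= c[0] + c[1]         # isolated variables contribute a factor 2
    return total
```

For instance, (x1 ∨ x2) ∧ (¬x2 ∨ x3) has 4 models, matching a brute-force count.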
By
Santiago-Ramirez, Everardo; Gonzalez-Fraga, J. A.; Ascencio-Lopez, J. I.
In this paper, we compare the performance of three composite correlation filters on the facial recognition problem. We used the ORL (Olivetti Research Laboratory) facial image database to evaluate the performance of the K-Law, MACE and ASEF filters. Simulation results demonstrate that K-Law nonlinear composite filters show the best performance in terms of recognition rate (RR) and false acceptance rate (FAR). As a result, we observe that correlation filters are able to work well even when the facial image contains distortions such as rotation, partial occlusion and different illumination conditions.
By
Acevedo, Elena; Acevedo, Antonio; Felipe, Federico
A method for diagnosing Parkinson’s disease is presented. The proposal is based on an associative approach, and we used this method for classifying patients with Parkinson’s disease and those who are completely healthy. In particular, the Alpha-Beta Bidirectional Associative Memory is used together with the modified Johnson-Möbius codification in order to deal with mixed noise. We used three methods for testing the performance of our method: Leave-One-Out, Hold-Out and K-fold Cross Validation, and the average accuracy obtained was 97.17%.
By
CruzBarbosa, Raúl; BautistaVillavicencio, David; Vellido, Alfredo
The diagnostic classification of human brain tumours on the basis of magnetic resonance spectra is a non-trivial problem in which dimensionality reduction is almost mandatory. This may take the form of feature selection or feature extraction. In feature extraction using manifold learning models, multivariate data are described through a low-dimensional manifold embedded in data space. Similarities between points along this manifold are best expressed as geodesic distances or their approximations. These approximations can be computationally intensive, and several alternative software implementations have recently been compared in terms of computation times. The current brief paper extends this research to investigate the comparative ability of dimensionality-reduced data descriptions to accurately classify several types of human brain tumours. The results suggest that the way in which the underlying data manifold is constructed in nonlinear dimensionality reduction methods strongly influences the classification results.
By
Pérez-Alfonso, Damián; Fundora-Ramírez, Osiel; Lazo-Cortés, Manuel S.; Roche-Escobar, Raciel
Process mining is concerned with the extraction of knowledge about business processes from information system logs. Process discovery algorithms are process mining techniques focused on discovering process models starting from event logs. The applicability and effectiveness of process discovery algorithms rely on features of event logs and process characteristics. Selecting a suitable algorithm for an event log is a tough task due to the variety of variables involved in this process. Traditional approaches use empirical assessment in order to recommend a suitable discovery algorithm, which is time-consuming and computationally expensive. The present paper evaluates the usefulness of an approach based on classification to recommend discovery algorithms. A knowledge base was constructed, based on features of event logs and process characteristics, in order to train the classifiers. Experimental results obtained with the classifiers show the usefulness of the proposal for recommending discovery algorithms.
By
García-Hernández, René Arnulfo; Ledeneva, Yulia
Extractive text summarization consists of selecting the most important units (normally sentences) from the original text, but it must be done as closely as possible to how humans do it. Several interesting automatic approaches have been proposed for this task, but some of them focus on getting a better result rather than offering assumptions about what humans use when producing a summary. In this research, not only are competitive results obtained, but some assumptions are also given about what humans try to represent in a summary. To reach this objective, a genetic algorithm is proposed with special emphasis on the fitness function, which makes it possible to contribute some conclusions.
By
Coello Coello, Carlos A.
This paper provides a brief introduction to the so-called multi-objective evolutionary algorithms, which are bio-inspired metaheuristics designed to deal with problems having two or more (normally conflicting) objectives. First, we provide some basic concepts related to multi-objective optimization and a brief review of approaches available in the specialized literature. Then, we provide a short review of applications of multi-objective evolutionary algorithms in pattern recognition. In the final part of the paper, we provide some possible paths for future research in this area which are, from the author’s perspective, promising.
By
Batyrshin, Ildar; Solovyev, Valery
The paper introduces new time series shape association measures based on Euclidean distance. A method for analyzing associations between time series, based on the separate analysis of positively and negatively associated local trends, is discussed. Examples of applying the proposed measures and methods to the analysis of associations between historical prices of securities obtained from Google Finance are considered. An example of time series with inverse associations between them is also discussed.
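The local-trend idea can be sketched with a toy measure (for illustration only; this is not the paper's exact definition): compare the signs of the first differences of two equal-length series and average their agreement.

```python
def trend_association(x, y):
    """Toy shape-association measure: compare signs of local trends
    (first differences) of two equal-length series.  Returns a value
    in [-1, 1]: +1 when all local trends agree, -1 when all are inverse."""
    assert len(x) == len(y) and len(x) > 1
    score = 0
    for i in range(1, len(x)):
        dx, dy = x[i] - x[i - 1], y[i] - y[i - 1]
        sx = (dx > 0) - (dx < 0)   # sign of x's local trend
        sy = (dy > 0) - (dy < 0)   # sign of y's local trend
        score += sx * sy
    return score / (len(x) - 1)
```

Two series that rise and fall together score +1; a series and its mirror image score -1.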
By
Aguilar-González, Pablo M.; Kober, Vitaly
Correlation filters for object detection and location estimation are commonly designed assuming that the shape and gray-level structure of the object of interest are explicitly available. In this work we propose the design of correlation filters when the appearance of the target is given in a single training image. The target is assumed to be embedded in a cluttered background, and the image is assumed to be corrupted by additive sensor noise. The designed filters are used to detect the target in an input scene modeled by the non-overlapping signal model. An optimal correlation filter, with respect to the peak-to-output energy ratio criterion, is proposed for object detection and location estimation. We also present estimation techniques for the required parameters. Computer simulation results obtained with the proposed filters are presented and compared with those of common correlation filters.
By
Kuri-Morales, Angel; Cortés-Arce, Iván
Computer networks are usually balanced by appealing to personal experience and heuristics, without taking advantage of the behavioral patterns embedded in their operation. In this work we report the application of computational intelligence tools to find such patterns and take advantage of them to improve the network’s performance. The traditional traffic flow of a computer network is improved by the concatenated use of the following “tools”: a) applying intelligent agents, b) forecasting the traffic flow of the network via Multi-Layer Perceptrons (MLP), and c) optimizing the forecasted network’s parameters with a genetic algorithm. We discuss the implementation and experimentally show that every consecutive new tool introduced improves the behavior of the network. This incremental improvement can be explained from the characterization of the network’s dynamics as a set of patterns emerging in time.
By
Sossa, Humberto; Garro, Beatriz A.; Villegas, Juan; Avilés, Carlos; Olague, Gustavo
In this note we present our most recent advances in the automatic design of artificial neural networks (ANNs) and associative memories (AMs) for pattern classification and pattern recall. Particle Swarm Optimization (PSO), Differential Evolution (DE), and Artificial Bee Colony (ABC) algorithms are used for ANNs; Genetic Programming is adopted for AMs. The derived ANNs and AMs are tested with several examples of wellknown databases. As we will show, results are very promising.
By
Villalobos Castaldi, Fabiola M.; Felipe-Riveron, Edgardo M.; Gómez, Ernesto Suaste
The retinal vascular network has many desirable characteristics as a basis for authentication, including uniqueness, stability, and permanence. In this paper, a new approach for retinal image feature extraction and template coding is proposed. The use of a logarithmic spiral sampling grid in scanning and tracking the vascular network is the key to making this new approach simple, flexible and reliable. Experiments show that this approach can reduce the data dimensionality and the time required to obtain the biometric code of the vascular network in a retinal image. The performed experiments demonstrated that the proposed verification system has an average accuracy of 95.0–98%.
By
Suaste, Verónica; Caudillo, Diego; Shin, Bok-Suk; Klette, Reinhard
Third-eye stereo analysis evaluation compares a virtual image, derived from results obtained by binocular stereo analysis, with a recorded image at the same pose. This technique is applied for evaluating stereo matchers on long (or continuous) stereo input sequences where no ground truth is available. The paper provides a critical and constructive discussion of this method. It also introduces data measures on input video sequences as an additional tool for analyzing issues of stereo matchers occurring in particular scenarios, and reports on extensive experiments using two top-rated stereo matchers.
By
Jiménez-Guarneros, Magdiel; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.
Currently, graph embedding has attracted great interest in the area of structural pattern recognition, especially techniques based on representation via dissimilarity. However, one of the main problems of this technique is the selection of a suitable set of prototype graphs that better describes the whole set of graphs. In this paper, we evaluate the use of an instance selection method based on clustering for graph embedding, which selects border prototypes and some non-border prototypes. An experimental evaluation shows that the selected method achieves competitive accuracy and better runtimes than other state-of-the-art methods.
By
Lazo-Cortés, Manuel S.; Martínez-Trinidad, José Francisco; Carrasco-Ochoa, Jesús Ariel
In this article, we revisit the concept of Goldman’s fuzzy testor and study it from the perspective of the conceptual approach to attribute reducts introduced by Y.Y. Yao in the framework of Rough Set Theory. We reformulate the original concept of Goldman’s fuzzy testor and introduce Goldman’s fuzzy reducts. Additionally, in order to show the usefulness of Goldman’s fuzzy reducts, we build a rule-based classifier and evaluate its performance in a case study.
By
Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
Hyper-heuristics are methodologies that choose from a set of heuristics and decide which one to apply given some properties of the current instance. When solving a constraint satisfaction problem, the order in which the variables are selected to be instantiated has implications for the complexity of the search. In this paper we propose a logistic regression model to generate hyper-heuristics for variable ordering within constraint satisfaction problems. The first step in our approach requires generating a training set that maps any given instance, expressed in terms of some of its features, to one suitable variable ordering heuristic. This set is later used to train the system and generate a hyper-heuristic that decides which heuristic to apply given the current features of the instance at hand at different steps of the search. The results suggest that hyper-heuristics generated through this methodology allow us to exploit the strengths of the heuristics to minimize the cost of the search.
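The mapping from instance features to a preferred heuristic can be sketched with a plain logistic regression trained by gradient descent. The heuristic names ('min-domain', 'max-degree') and the features (say, constraint density and tightness) are purely illustrative, not taken from the paper:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Tiny logistic regression fitted by stochastic gradient descent.
    X: list of feature vectors describing CSP instances.
    y: 0/1 label naming which of two hypothetical heuristics performed best."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def choose_heuristic(w, b, features):
    """Pick a variable-ordering heuristic for a new instance."""
    z = sum(wj * xj for wj, xj in zip(w, features)) + b
    return 'min-domain' if z >= 0 else 'max-degree'
```

Trained on a few labeled instances, the model then selects a heuristic for each unseen instance from its feature vector.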
By
García-Vázquez, Mireya S.; Garea-Llano, Eduardo; Colores-Vargas, Juan M.; Zamudio-Fuentes, Luis M.; Ramírez-Acosta, Alejandro A.
Iris recognition is widely used in different environments where the identity of a person is necessary. It is therefore a challenging problem to maintain high reliability and stability of this kind of system in harsh environments. Iris segmentation is one of the most important processes in iris recognition for preserving the above-mentioned characteristics; indeed, iris segmentation may compromise the performance of the entire system. This paper presents a comparative study of four segmentation algorithms in the framework of a high-reliability iris verification system. These segmentation approaches are implemented, evaluated and compared based on their accuracy using three unconstrained databases, one of them a video iris database. The results show that, for an ultra-high-security verification system at FAR = 0.01%, segmentation algorithm 3 (Viterbi) presents the best results.
By
Figueroa, Karina; Paredes, Rodrigo
Proximity searching consists of retrieving from a database the objects that are similar to a given query. Nowadays, as multimedia databases keep growing, this is a fundamental task. The permutation based index (PBI) and its variants are excellent techniques for solving proximity searching in high dimensional spaces; however, they have been outperformed in low dimensional ones. Another drawback of the PBI is that the distance between permutations does not allow elements to be discarded safely when solving similarity queries.
In the following, we introduce an improvement on the PBI that produces a better promising order using less space than the basic permutation technique and also gives us information to discard some elements. To do so, besides the permutations, we quantize distance information by defining distance rings around each permutant, and we also keep this data. The experimental evaluation shows we can dramatically improve upon specialized techniques in low dimensional spaces. For instance, on the real world dataset of NASA images, our boosted PBI uses up to 90% fewer distance evaluations than AESA, the state-of-the-art searching algorithm with the best performance in this particular space.
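The basic permutation technique that this work improves on can be sketched as follows (plain PBI only; the distance-ring quantization described above is not shown). Each object is represented by the order in which it sees the permutants, and permutations are compared with the Spearman footrule:

```python
def permutation_of(obj, permutants, dist):
    """Permutation of an object: permutant indices ordered by distance."""
    return sorted(range(len(permutants)), key=lambda i: dist(obj, permutants[i]))

def inverse(perm):
    """Position of each permutant inside a permutation."""
    inv = [0] * len(perm)
    for pos, p in enumerate(perm):
        inv[p] = pos
    return inv

def spearman_footrule(pa, pb):
    """Dissimilarity between two permutations; small values suggest
    (but do not guarantee) that the underlying objects are close."""
    ia, ib = inverse(pa), inverse(pb)
    return sum(abs(a - b) for a, b in zip(ia, ib))
```

In a toy 1-D metric space with permutants at 0 and 10, a query at 1 and an object at 2 share the same permutation (footrule 0), while an object at 9 sees the permutants in the opposite order.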
By
Lazo-Cortés, Manuel S.; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, J. A.
This paper studies the relationship between the two most common definitions of reduct. Although there are other definitions, almost all the literature published in the framework of the Theory of Rough Sets uses one of the two definitions we study here. However, there is an ambiguity in the use of these definitions, and authors often do not declare beforehand which definition they refer to. Moreover, there are no publications where the relation between these two definitions is discussed at length, and that is precisely what this paper addresses. We state and prove several properties expressing relations between both definitions, including some illustrative examples.
By
Tovar, Mireya; Pinto, David; Montes, Azucena; Serna, Gabriel; Vilariño, Darnes
In this paper we present an approach for the automatic identification of relations in ontologies of restricted domain. We use the evidence found in a corpus associated with the same domain as the ontology for determining the validity of the ontological relations. Our approach employs formal concept analysis, a method used for the analysis of data, in this case applied to relation discovery in a corpus of restricted domain. The approach uses two variants for filling the incidence matrix that this method employs. The formal concepts are used for evaluating the ontological relations of two ontologies. The performance obtained was about 96% for taxonomic relations and 100% for non-taxonomic relations in the first ontology; in the second, it was about 92% for taxonomic relations and 98% for non-taxonomic relations.
By
Figueroa, Karina; Paredes, Rodrigo
The permutation based index has been shown to be very effective in medium and high dimensional metric spaces, even in difficult problems such as solving reverse k-nearest neighbor queries. Nevertheless, there is currently no study about which desirable features one can ask of a permutant set, or how to select good permutants. Similar to the case of pivots, our experimental results show that, compared with a randomly chosen set, a good permutant set yields fast query response or reduces the amount of space used by the index. In this paper, we start by characterizing permutants and studying their predictive power; we then propose an effective heuristic to select a good set of permutant candidates. We also show empirical evidence that supports our technique.
By
Ita Luna, Guillermo; Marcial-Romero, J. Raymundo
We present some results about the parametric complexity of #2SAT and #2UNSAT, which consist of counting the number of models and falsifying assignments, respectively, for two Conjunctive Forms (2CFs). Firstly, we show some cases where, given a formula F, #2SAT(F) can be bounded above by considering a binary pattern analysis over its set of clauses. Secondly, since #2SAT(F) = 2^n − #2UNSAT(F), we show that, by considering the constrained graph G_F of F, if G_F is an acyclic graph then #2UNSAT(F) can be computed in polynomial time. To the best of our knowledge, this is the first time that #2UNSAT is computed through its constrained graph, since the inclusion-exclusion formula has commonly been used for computing #2UNSAT(F).
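The identity #2SAT(F) = 2^n − #2UNSAT(F) is easy to check exhaustively on small formulas. The brute force below is for illustration only (it is exponential in n, unlike the polynomial-time graph-based computation discussed above):

```python
from itertools import product

def sat_unsat_counts(n, clauses):
    """Brute-force #2SAT and #2UNSAT for a 2CF over variables 0..n-1.
    Each clause ((v1, s1), (v2, s2)) is (literal of v1) OR (literal of v2),
    where s=True denotes the positive literal."""
    sat = 0
    for assign in product((False, True), repeat=n):
        if all(assign[v1] == s1 or assign[v2] == s2
               for (v1, s1), (v2, s2) in clauses):
            sat += 1
    return sat, 2 ** n - sat          # (#2SAT, #2UNSAT)
```

For F = (x0 ∨ x1) ∧ (¬x1 ∨ x2) over three variables, 4 of the 8 assignments are models and 4 are falsifying, so the two counts sum to 2^3 as the identity requires.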
By
Tecuanhuehue-Vera, P.; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.
Multidimensional scaling maps a set of n-dimensional objects into a lower-dimensional space, usually the Euclidean plane, preserving the distances among objects in the original space. Most algorithms for multidimensional scaling have been designed to work on numerical data, but in the soft sciences it is common for objects to be described using both quantitative and qualitative attributes, even with some missing values. For this reason, in this paper we propose a genetic algorithm especially designed for multidimensional scaling over mixed and incomplete data. Some experiments using datasets from the UCI repository, and a comparison against a common algorithm for multidimensional scaling, show the behavior of our proposal.
By
Loyola-González, Octavio; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, Jesús Ariel; García-Borroto, Milton
Applying resampling methods is an important approach for working with class imbalance problems. The main reason is that many classifiers are sensitive to class distribution, biasing their prediction towards the majority class. Contrast pattern based classifiers are sensitive to imbalanced databases because these classifiers commonly find several patterns of the majority class and only a few patterns (or none) of the minority class. In this paper, we present a correlation study among resampling methods for contrast pattern based classifiers. Our experiments performed over several imbalanced databases show that there is a high correlation among different resampling methods. Correlation results show that there are nine different groups with very high inner correlation and very low outer correlation. We show that most resampling methods allow improving the accuracy of the contrast pattern based classifiers.
By
Tovar, Mireya; Pinto, David; Montes, Azucena; Vilariño, Darnes
Measuring the degree of semantic similarity for word pairs is a very challenging task that has been addressed by the computational linguistics community in recent years. In this paper, we propose a method for evaluating input word pairs in order to measure their degree of semantic similarity. This unsupervised method uses a prototype vector calculated on the basis of word pair representative vectors, which are constructed using snippets automatically gathered from the World Wide Web.
The obtained results show that the approach based on prototype vectors outperforms the results reported in the literature for a particular semantic similarity class.
By
Medina-Pérez, Miguel Angel; García-Borroto, Milton; Gutierrez-Rodríguez, Andres Eduardo; Altamirano-Robles, Leopoldo
Developing accurate fingerprint verification algorithms is an active research area. A large number of fingerprint verification algorithms are based on minutiae descriptors, and an important component of these algorithms is the alignment strategy. The single alignment strategy, with O(n^2) time complexity, uses the local matching minutiae pair that maximizes the similarity value to align the minutiae. Nevertheless, even if the selected minutiae pair is a true matching pair, it is not necessarily the best pair to carry out fingerprint alignment. The multiple alignment strategy alleviates this limitation by performing multiple minutiae alignments, increasing the time complexity to O(n^4). In this paper, we improve the multiple alignment strategy, reducing its complexity while still achieving high accuracy. The new strategy is based on the rationale that most minutiae descriptors from one fingerprint correspond with their most similar descriptors from the other fingerprint. To test the behavior of the new strategy, we adapt three well-known algorithms to a traditional multiple alignment strategy and to our strategy. Several experiments on the FVC2004 database show that our strategy outperforms both the single and the multiple alignment strategies.
By
Silván-Cárdenas, José Luis; Wang, Le
Building footprint geometry is a basic layer of information required by government institutions for a number of land management operations and research. LiDAR (light detection and ranging) is a laser-based altimetry measurement instrument that is flown over relatively wide land areas in order to produce digital surface models. Although high spatial resolution LiDAR measurements (of around 1 m horizontally) are suitable for detecting above-ground features through elevation discrimination, the automatic extraction of buildings has in many cases, such as residential areas with complex terrain forms, proved a difficult task. In this study, we developed a method for detecting building footprints from LiDAR altimetry data and tested its performance over four sites located in Austin, TX. Compared to another standard method, the proposed method had comparable accuracy and better efficiency.
By
Garcia-Ceja, Enrique; Brena, Ramon; Galván-Tejada, Carlos E.
Most previous work in hand gesture recognition focuses on increasing the accuracy and robustness of the systems; however, little has been done to understand the context in which gestures are performed, i.e., the same gesture could mean different things depending on the context and situation. Understanding the context may help to build more user-friendly and interactive systems. In this work, we used location information in order to contextualize the gestures. The system constantly identifies the location of the user, so when he/she performs a gesture the system can perform an action based on this information.
By
Figueroa, Karina; Paredes, Rodrigo; Camarena-Ibarrola, J. Antonio; Reyes, Nora
Similarity searching consists of retrieving from a database the objects, also known as nearest neighbors, that are most similar to a given query; it is a crucial task for several applications in pattern recognition. In this paper we propose a new technique to reduce the number of comparisons needed to locate the nearest neighbors of a query. This new index takes advantage of two known algorithms: FHQT (Fixed Height Queries Tree) and PBA (Permutation-Based Algorithm), one for low dimensions and the other for high dimensions. Our results show that this combination brings out the best of both algorithms: the winning combination of FHQT and PBA locates nearest neighbors up to four times faster in high dimensions, while leaving the well-known good performance of FHQT in low dimensions unaffected.
By
Rodríguez-López, Verónica; Cruz-Barbosa, Raúl
Nowadays, breast cancer is considered a significant health problem in Mexico. Mammography is an effective study for the early detection of signs of this disease. One of the most important findings in this study is a mass, which is the main indicator of malignancy; however, mass detection and diagnosis are difficult. In this study, the impact of the inclusion of seven clinical features on the performance of Bayesian network models for mass diagnosis is presented. Here, Naïve Bayes, Tree Augmented Naïve Bayes, K-dependence Bayesian classifier, and Forest Augmented Naïve Bayes models with eight image feature nodes were augmented with several clinical feature subsets. These models were trained with a data set extracted from the public BCDR-F01 database. The experimental results have shown that the Bayesian network models augmented with a subset of three clinical features improved their performance up to 0.82 in accuracy, 0.80 in sensitivity, and 0.83 in specificity. Therefore, these augmented models are considered suitable and promising methods for mass classification.
By
Nava-Ortiz, M.; Gómez-Flores, W.; Díaz-Pérez, A.; Toscano-Pulido, G.
Segmentation is an important step within optical character recognition systems, since the recognition rates depend strongly on the accuracy of binarization techniques. Hence, it is necessary to evaluate different segmentation methods to select the most adequate one for a specific application. However, when gold patterns are not available for comparing the binarized outputs, the recognition rates of the entire system can be used to assess performance. In this article we present the evaluation of five local adaptive binarization methods for digit recognition in water meters by measuring misclassification rates. These methods were studied due to their simplicity of implementation on camera-based devices, such as cell phones, with limited hardware capabilities. The obtained results showed that Bernsen’s method achieved the best recognition rates when the normalized central moments are employed as features.
By
Aizenberg, Igor; Sheremetov, Leonid; Villa-Vargas, Luis
In this paper, we discuss long-term time series forecasting using a Multilayer Neural Network with Multi-Valued Neurons (MLMVN). This is a complex-valued neural network with a derivative-free backpropagation learning algorithm. We evaluate the proposed approach using a real-world data set describing the dynamic behavior of an oilfield asset located in the coastal swamps of the Gulf of Mexico. We show that MLMVN can be efficiently applied to univariate and multivariate multi-step-ahead prediction of reservoir dynamics. This paper is intended not only to propose a novel forecasting model but also to carefully study several aspects of the application of ANN models to time series forecasting that could be of interest to the pattern recognition community.
By
Tanveer, Saad; Muhammad, Aslam; Martinez-Enriquez, A. M.; Escalada-Imaz, G.
Languages like Spanish and Arabic are spoken over a large geographic area. The people who speak these languages develop differences in accent, intonation and phonetic delivery. This leads to difficulty in standardizing languages for education and communication (both written and oral). The problem is addressed to some extent by phonetic dictionaries, which provide the correct pronunciation for a word but contribute little to standardizing or unifying the language for a learner. Our system provides a unification of different accents and dialects, creating a standard for learning and communication.
By
Villavicencio, Luis; Lopez-Franco, Carlos; Arana-Daniel, Nancy; Lopez-Franco, Lilibet
In this paper we introduce a representation for object verification and a system for object recognition based on local features, invariant moments, silhouette creation and a ‘net’ reduction for depth information. The results are then compared with some of the most recent approaches for object detection, such as local features and orientation histograms. Additionally, we used depth information to create descriptors that can be used for 3D verification of detected objects. Moments are computed from a 3D set of points which are arranged to create a descriptive object model. This information proved to be relevant in deciding whether or not the object is present within the analyzed image segment.
By
Hernandez-Belmonte, Uriel H.; Ayala-Ramirez, Victor; Sanchez-Yanez, Raul E.
In this paper, we propose the use of a Reduced Connectivity Mask (RCM) to enhance connected component labeling (CCL) algorithms. We use the proposed RCM (a 2×2 spatial neighborhood) as the scanning window of a Union-Find labeling method and of a fast connected component labeling algorithm recently proposed in the literature. In both cases, the proposed mask enhances the performance of the algorithm with respect to that exhibited in its original form. We have tested the two enhanced approaches proposed here against the fast connected component labeling algorithm proposed by He and the classical contour tracing algorithm, comparing their execution times when labeling a set of uniform random noise test images of varying occupancy percentages. We show detailed results of all these tests and also discuss the differences in behavior shown by the set of algorithms under test. The RCM variants exhibit a better performance than the previous CCL algorithms.
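For reference, classic two-pass Union-Find labeling with the usual left/up scanning neighborhood (4-connectivity) looks like this — a baseline sketch, not the RCM variant proposed in the paper:

```python
def label_components(img):
    """Two-pass Union-Find connected component labeling, 4-connectivity.
    img: list of lists of 0/1 pixels.  Returns a label image."""
    h, w = len(img), len(img[0])
    parent = {}

    def find(x):                       # root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                   # merge two equivalence classes
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    labels = [[0] * w for _ in range(h)]
    nxt = 1
    # first pass: provisional labels from the up/left neighbors
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = up
                union(up, left)
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = nxt
                parent[nxt] = nxt
                nxt += 1
    # second pass: resolve label equivalences
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Merging happens whenever both the up and left neighbors carry different provisional labels, as in a U-shaped region.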
By
Mújica-Vargas, Dante; Gallegos-Funes, Francisco J.; Cruz-Santiago, Rene
In this paper we present an image processing scheme for segmenting noisy images, based on a robust estimator in the filtering stage and the standard Fuzzy C-Means (FCM) clustering algorithm for segmenting the images. The main objective of this paper is to evaluate the performance of the Rank M-type L-filter with different influence functions, and to establish a reference base for including the filter in the objective function of the FCM algorithm in future work. The filter uses the Rank M-type (RM) estimator in the L-filter scheme to gain robustness in the presence of different types of noise and combinations of them. Tests were made on synthetic and real images subjected to three types of noise, and the results are compared with six reference modified Fuzzy C-Means methods for segmenting noisy images.
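The standard FCM stage can be sketched on 1-D intensities (plain FCM only; the Rank M-type L-filter stage is not reproduced here). Memberships and centers are updated alternately until convergence:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=100, eps=1e-9):
    """Minimal standard FCM on 1-D data (e.g. gray levels), assuming c >= 2.
    Returns the cluster centers and the membership matrix u[i][k]."""
    xs = sorted(data)
    # spread the initial centers across the data range
    centers = [xs[i * (len(xs) - 1) // (c - 1)] for i in range(c)]
    expo = 2.0 / (m - 1.0)
    u = None
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = [[0.0] * len(data) for _ in range(c)]
        for k, x in enumerate(data):
            d = [abs(x - ci) + eps for ci in centers]   # eps avoids division by zero
            for i in range(c):
                u[i][k] = 1.0 / sum((d[i] / dj) ** expo for dj in d)
        # center update: mean weighted by u^m
        for i in range(c):
            num = sum((u[i][k] ** m) * x for k, x in enumerate(data))
            den = sum(u[i][k] ** m for k in range(len(data)))
            centers[i] = num / den
    return centers, u
```

On two well-separated intensity groups the centers converge close to the group means, and the memberships give each pixel a soft degree of belonging to both clusters.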
By
Villalobos-Castaldi, Fabiola M.; Suaste-Gómez, Ernesto
Post to Citeulike
In this paper, we introduced a new videobased spatiotemporal identification system and we also presented our initial identity authentication results based on the spontaneous pupillary oscillation features. We demonstrated that this biometric trait has the capability to provide enough discriminative information to authenticate the identity of a subject. We described the methodology to compute a spatiotemporal biometric template recording the pupil area changes from a video sequence acquired at constant light conditions. To our knowledge, no attempts were made in order to distinguish individuals based on the spatiotemporal representations computed from the normal dilation/contraction condition of the pupil. In preliminary experiments, for the privately collected database, we observe that Equal Error occurs at a threshold of 5.812 and the error is roughly 0.4356%.
By
Batyrshin, Ildar
4 Citations
A method for recognizing shape association patterns with direct and inverse relationships is proposed. It relies on a new time-series shape association measure built on up- and down-trend associations. The application of this technique to the analysis of associations between well production data in petroleum reservoirs is discussed.
By
Rodríguez-Diez, Vladímir; Martínez-Trinidad, José Fco.; Carrasco-Ochoa, Jesús Ariel; Lazo-Cortés, Manuel S.
Within Testor Theory, typical testors are irreducible subsets of attributes preserving the object discernibility ability of the original set of attributes. Computing all typical testors from a dataset has exponential complexity with respect to its number of attributes; however, other properties of a dataset also influence the performance of different algorithms. Previous studies have determined that a significant runtime reduction can be obtained by selecting the appropriate algorithm for a given dataset. In this work, we present an experimental study evaluating the effect of basic matrix dimensionality on the performance of algorithms for typical testor computation. Our experiments are carried out over synthetic and real-world datasets. Finally, some guidelines obtained from the experiments, intended to help select the best algorithm for a given dataset, are summarised.
By
López, Marco A.; Marcial-Romero, J. Raymundo; Ita, Guillermo; Moyao, Yolanda
Although the satisfiability problem for two conjunctive normal form formulas (2-SAT) is polynomial-time solvable, it is well known that #2SAT, the counting version of 2-SAT, is #P-complete. However, it has been shown that for certain classes of formulas, #2SAT can be computed in polynomial time. In this paper we show another class of formulas for which #2SAT can also be computed in linear time: the so-called outerplanar formulas, i.e., formulas whose signed primal graph is outerplanar. Our algorithm's time complexity is $$O(n+m)$$, where n is the number of variables and m the number of clauses of the formula.
By
Beltrán-Márquez, Jessica; Chávez, Edgar; Favela, Jesús
2 Citations
Automatic identification of activities can be used to provide information to caregivers of persons with dementia, helping to identify assistance needs. Environmental audio provides significant and representative information about the context, making microphones a natural choice for identifying activities automatically. However, in real situations, the audio captured by microphones comes from overlapping sound sources, making its identification a challenge for audio analysis and retrieval. In this paper we propose a succinct representation of the signal obtained by measuring its multiband spectral entropy frame by frame, followed by a cosine transform and binary codification; we call this the Cosine Multi-Band Spectral Entropy Signature (CMBSES). To test our proposal, we created a database of mixtures of triples from a collection of nine environmental sounds at four different signal-to-noise ratios (SNR). We codified both the original sounds and the mixtures and then searched for all the original sounds in the mixture collection. To establish a ground truth, we also tested the same database with 48 people of assorted ages. Our feature extraction outperforms the state-of-the-art Mel Frequency Cepstral Coefficients (MFCC), and it also surpasses humans in the experiment.
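The pipeline (per-frame multiband spectral entropy, cosine transform over time, sign binarization) can be sketched roughly as below. Frame length, band count, and the use of an unnormalized DCT-II with sign binarization are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def multiband_entropy(frame, n_bands=8):
    """Shannon entropy of the normalized power spectrum in each band."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    out = []
    for band in np.array_split(spec, n_bands):
        p = band / (band.sum() + 1e-12)
        out.append(-(p * np.log2(p + 1e-12)).sum())
    return np.array(out)

def dct_ii(x):
    """Unnormalized DCT-II of a 1-D vector (self-contained)."""
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ x

def entropy_signature(signal, frame_len=256, n_bands=8):
    """Binary signature: sign of the per-band DCT of the entropy curves."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    E = np.array([multiband_entropy(f, n_bands) for f in frames])  # (T, B)
    C = np.apply_along_axis(dct_ii, 0, E)   # DCT over time, per band
    return (C > 0).astype(np.uint8)
```

Binarizing the transform coefficients yields a compact bit matrix that can be compared with Hamming distance, which is one plausible reading of "binary codification" here.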
By
Finn, Chelsea; Duyck, James; Hutcheon, Andy; Vera, Pablo; Salas, Joaquin; Ravela, Sai
3 Citations
The characterization of individual animal life history is crucial for conservation efforts. In this paper, Sloop, an operational pattern retrieval engine for animal identification, is extended by coupling crowdsourcing with image retrieval. The coupled system delivers scalable performance by using aggregated computational inference to effectively deliver precision and human feedback to efficiently improve recall. To the best of our knowledge, this is the first coupled human-machine animal biometrics system, and results on multiple species indicate that it is a promising approach for large-scale use.
By
Gutiérrez, Ernesto; Cervantes, Ofelia; Báez-López, David; Alfredo Sánchez, J.
3 Citations
Discovering people's subjective opinions about a topic of interest has become more relevant with the explosion in the use of social networks, microblogs, forums and e-commerce pages all over the Internet. Sentiment analysis techniques aim to identify the polarity of opinions by analyzing explicit and implicit features within the text. This paper presents a hybrid approach to extract features from Spanish sentiment sentences in order to build a model based on support vector machines and determine the polarity of opinions. In addition, a Spanish sentiment lexicon has been constructed. The accuracy of the model is evaluated against two previously tagged corpora, and results are discussed.
By
Chacon-Murguia, Mario I.; Perez-Vargas, Francisco J.
6 Citations
This paper presents a method to detect fire regions in thermal videos that can be used in both outdoor and indoor environments. The proposed method works with static and moving cameras. Detection is achieved through a linear weighted classifier based on two features. Candidate regions are obtained by contrast enhancement with the Local Intensities Operation followed by thermal blob analysis. The features computed from these candidate regions are region shape regularity, determined by wavelet decomposition analysis, and region intensity saturation. The method was tested on several thermal videos, yielding 4.99% false positives on non-fire videos and 75.06% correct detection with 7.27% false positives on fire regions. Findings indicate an acceptable performance compared with other methods because, unlike most previous work, this method also handles moving-camera videos.
By
Castro-Sánchez, Noé Alejandro; Sidorov, Grigori
2 Citations
In this paper we present an automatic method for extracting synonyms of verbs from an explanatory dictionary, based only on the hyponym/hyperonym relations existing between the verbs defined and the genus terms used in their definitions. The set of verb-genus pairs can be considered a directed graph, so we applied an algorithm to identify cycles in this kind of structure. We found that some cycles represent chains of synonyms. We obtain high precision and low recall.
By
Ita, Guillermo; Bautista, César; Altamirano, Luis C.
The 3-Colouring of a graph is a classic NP-complete problem. We show that some solutions for 3-Colouring can be built in polynomial time based on the number of basic cycles existing in the graph. To this end, we design a reduction from proper 3-Colouring of a graph G to a 2-CF Boolean formula F_{G}, where the number of clauses in F_{G} depends on the number of basic cycles in G. Any model of F_{G} provides a proper 3-Colouring of G. Thus, F_{G} is a logical pattern whose models codify the proper 3-Colourings of the graph G.
By
Cruz-Bernal, Alejandra; Alamanza-Ojeda, Dora-Luz; Ibarra-Manzano, Mario-Alberto
Object surfaces in range images can be treated as elevations at each surface point. The Sparse Normal Detector technique focuses on the extraction of keypoints from the homogeneous surfaces of range images; additionally, the contours of the objects in the scene can be represented through these points. First, the homogeneity feature is computed by means of the Sum and Difference Histogram technique, producing a homogeneity image. Then, the corresponding dense normal vectors of the surface formed by this image are computed. A normal probability density function is used to select the most salient dense vectors, yielding the sparse normal descriptor. These vectors form new flat directional surfaces. The final detection of interest points is performed using the Sparse Keypoints Detector technique. The experimental tests involve a qualitative analysis, using the Middlebury and DSPLab datasets, and a quantitative evaluation of repeatability.
By
Delgado-Galvan, Julio; Navarro-Ramirez, Alberto; Nunez-Varela, Jose; Puente-Montejano, Cesar; Martinez-Perez, Francisco
4 Citations
One of the most basic tasks for any autonomous mobile robot is safely navigating from one point to another (e.g. service robots should be able to find their way in different kinds of environments). Typically, vision is used to find landmarks in the environment to help the robot localise itself reliably. However, some environments may lack these landmarks, and the robot then needs to find its way in a featureless environment. This paper presents a topological vision-based approach for navigating through a featureless maze-like environment using a NAO humanoid robot, where all processing is performed by the robot's embedded computer. We show how our approach allows the robot to reliably navigate this kind of environment in real time.
By
Vázquez-Santacruz, Eduardo; Chakraborty, Debrup
In this paper we present a new method to create neural network ensembles. In an ensemble method like bagging, one needs to train multiple neural networks to create the ensemble. Here we present a scheme to generate different copies of a network from one trained network and use those copies to create the ensemble. The copies are produced by adding controlled noise to a trained base network. We provide a preliminary theoretical justification for our method and experimentally validate it on several standard data sets. Our method can improve the accuracy of a base network and yields considerable savings in training time compared to bagging.
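The copy-generation step can be sketched on a toy one-layer model: perturb the trained weights with controlled Gaussian noise and average the copies' predictions. The noise level, number of copies, and averaging rule here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def predict(w, X):
    """A toy one-layer 'network': logistic scores for rows of X."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def noisy_ensemble(w, X, n_copies=10, sigma=0.05, seed=0):
    """Average the predictions of n_copies perturbed copies of a trained
    weight vector w. Each copy adds i.i.d. Gaussian noise of scale sigma,
    a simple stand-in for the paper's controlled-noise scheme."""
    rng = np.random.default_rng(seed)
    preds = [predict(w + sigma * rng.standard_normal(w.shape), X)
             for _ in range(n_copies)]
    return np.mean(preds, axis=0)
```

With small sigma the ensemble output stays close to the base network while smoothing its decision surface, which is the intuition behind trading retraining for perturbation.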
By
Terven, Juan R.; Salas, Joaquin; Raducanu, Bogdan
2 Citations
This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, providing the following advantages: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g. glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture.
By
Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
1 Citation
Hyper-heuristics are methodologies used to choose from a set of heuristics and decide which one to apply given some properties of the current instance. When solving a Constraint Satisfaction Problem, the order in which the variables are selected for instantiation has implications for the complexity of the search. We propose a neural network hyper-heuristic approach for variable ordering within Constraint Satisfaction Problems. The first step in our approach is to generate a pattern that maps any given instance, expressed in terms of constraint density and tightness, to one adequate heuristic. That pattern is later used to train various neural networks which represent hyper-heuristics. The results suggest that neural networks generated through this methodology are a feasible way to code hyper-heuristics that exploit the strengths of the individual heuristics to minimise the cost of finding a solution.
By
Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor Hugo
Tracking of 3D objects based on video information has become an important problem in recent years. We present a tracking algorithm based on rotation-invariant HOGs over a data structure of circular windows. The algorithm is also robust to geometric distortions. The algorithm achieves a perfect score in terms of objective metrics for real-time tracking using multi-core processing with GPUs.
By
Silván-Cárdenas, José Luis
Digital terrain models (DTMs) are basic products required for a number of applications and decision-making processes. Nowadays, high spatial-resolution DTMs are primarily produced through airborne laser scanning (ALS). However, ALS does not directly deliver a DTM but a dense cloud of 3D points that embeds both terrain elevation and the height of natural and human-made features. Such a point cloud is generally rasterized and referred to as a digital surface model (DSM). The discrimination of above-ground objects from terrain, also termed ground filtering, is a basic processing step that has proved especially difficult for large areas with complex terrain characteristics. This paper presents the development of a multiscale erosion operator for removing above-ground features from the DSM, thus producing a surface close to the DTM. This approximation was used to separate ground from non-ground points in the original point cloud, and the discrimination accuracy was assessed using publicly available data. Results indicate an improvement over a previously published method.
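The core of an erosion-based ground filter is easy to sketch: grey-scale erosion is a local minimum filter, and applying it at growing window sizes flattens above-ground blobs narrower than the window; points near the eroded surface are then labeled ground. This is a toy sketch under assumed scales and tolerance; the paper's multiscale operator and its reconstruction steps are more elaborate.

```python
import numpy as np

def erode(dsm, k):
    """Grey-scale erosion: local minimum over a (2k+1)x(2k+1) window."""
    h, w = dsm.shape
    pad = np.pad(dsm, k, mode='edge')
    out = np.empty_like(dsm)
    for y in range(h):
        for x in range(w):
            out[y, x] = pad[y:y + 2 * k + 1, x:x + 2 * k + 1].min()
    return out

def approx_dtm(dsm, scales=(1, 2, 4)):
    """Successively erode the DSM at growing scales, keeping the lower
    envelope, to approximate the bare terrain (illustrative scales)."""
    out = dsm.copy()
    for k in scales:
        out = np.minimum(out, erode(out, k))
    return out

def ground_mask(dsm, dtm, tol=0.5):
    """Cells within tol (in elevation units) of the approximated terrain
    are labeled ground; the tolerance is an assumed parameter."""
    return (dsm - dtm) <= tol
```

Erosion alone biases the surface downward on slopes, which is one reason a real operator needs the additional reconstruction machinery the paper develops.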
By
Tellez, Eric Sadit; Chavez, Edgar; Graff, Mario
1 Citation
Efficiently searching for patterns in very large collections of objects is a very active area of research. Over the last few years, a number of indexes have been proposed to speed up the searching procedure. In this paper, we introduce a novel framework (the K-nearest references) in which several approximate proximity indexes can be analyzed and understood. The search spaces where the analyzed indexes work span vector spaces, general metric spaces and, more broadly, general similarity spaces.
The proposed framework clarifies the principles behind the searching complexity and allows us to propose a number of novel indexes with a high recall rate, low search time, and a linear storage requirement as salient characteristics.
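One family of indexes this kind of reference-based framework covers is the permutation index: each object is represented by the order in which it "sees" a set of reference points, candidates are ranked by a permutation distance (here the Spearman footrule), and a short candidate list is verified with true distances. A toy sketch with illustrative names and parameters:

```python
import numpy as np

def perm(point, refs):
    """Permutation of reference identifiers, nearest reference first."""
    return np.argsort(np.linalg.norm(refs - point, axis=1))

def inv(p):
    """Inverse permutation: position of each reference inside p."""
    out = np.empty_like(p)
    out[p] = np.arange(len(p))
    return out

def footrule(p, q):
    """Spearman footrule between two permutations."""
    return np.abs(inv(p) - inv(q)).sum()

def knn_by_permutations(query, data, refs, k=3, cand=10):
    """Approximate k-NN: rank by footrule, verify the top-cand candidates
    with true distances (perms would be precomputed in a real index)."""
    perms = [perm(x, refs) for x in data]
    pq = perm(query, refs)
    order = np.argsort([footrule(pq, p) for p in perms])[:cand]
    d = np.linalg.norm(data[order] - query, axis=1)
    return order[np.argsort(d)[:k]]
```

The recall/speed trade-off lives entirely in `cand`: a larger candidate list raises recall at the cost of more direct distance computations, which is the knob such frameworks analyze.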
By
Cruz-Techica, Sonia; Escalante-Ramirez, Boris
1 Citation
The Hermite transform is introduced as an image representation model for multiresolution image fusion with noise reduction. Image fusion is achieved by combining the steered Hermite coefficients of the source images using a decision rule based on linear algebra, through a measurement of linear dependence. The proposed algorithm has been tested on both multifocus and multimodal image sets, producing results that exceed those achieved with other methods such as wavelets, curvelets [11], and contourlets [2], proving that our scheme best characterizes the important structures of the images while reducing noise.
By
Martinez-Enriquez, A. M.; Escalada-Imaz, G.; Muhammad, Aslam
This paper describes how numerical problems can be solved by a non-computer user, based on Knowledge-Based Systems (KBS) principles. The aim of our approach is to handle the numerical information of the problem using backward logical inference, deducing whether or not the required mathematical composition functions exist. We implemented a backward inference algorithm which is bounded by O(n), n being the size of the KBS. Moreover, the inference engine proceeds in a top-down way, so it scans a small, reduced search space compared to those of forward chaining algorithms. Our experimental example deals with statistical analysis and its application to clustering and concept formation.
By
Torres, Fredy Rodríguez; Carrasco-Ochoa, Jesús A.; Martínez-Trinidad, José Fco.
Imbalanced data is a problem of current research interest. This problem arises when the number of objects in one class is much lower than in the other classes. In order to address this problem, several methods for oversampling the minority class have been proposed. Oversampling methods generate synthetic objects for the minority class in order to balance the number of objects between classes; among them, SMOTE is one of the most successful and well-known methods. In this paper, we introduce a modification of SMOTE which deterministically generates synthetic objects for the minority class. Our proposed method eliminates the random component of SMOTE and generates a different number of synthetic objects for each object of the minority class. An experimental comparison of the proposed method against SMOTE on standard imbalanced datasets is provided. The experimental results show an improvement of our proposed method over SMOTE in terms of F-measure.
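For orientation, the interpolation step that SMOTE-style methods share can be sketched as below. SMOTE draws the interpolation fraction uniformly at random; fixing it is merely one way to remove that randomness and is not the paper's deterministic rule for deciding how many objects each minority point gets.

```python
import numpy as np

def smote_like(minority, n_neighbors=3, alpha=0.5):
    """Generate one synthetic point per (object, neighbor) pair by moving
    a fixed fraction alpha of the way toward each of the object's
    n_neighbors nearest minority neighbors (an illustrative variant)."""
    X = np.asarray(minority, dtype=float)
    n = len(X)
    synth = []
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:n_neighbors + 1]   # skip the point itself
        for j in nbrs:
            synth.append(X[i] + alpha * (X[j] - X[i]))
    return np.array(synth)
```

Because every synthetic point lies on a segment between two minority objects, oversampling fills in the minority region rather than duplicating points, which is what distinguishes SMOTE from plain replication.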
By
Rosales-Pérez, Alejandro; Reyes-García, Carlos A.; Gómez-Gil, Pilar
6 Citations
In this paper we describe a genetic fuzzy relational neural network (FRNN) designed for classification tasks. The genetic part of the proposed system determines the best configuration for the fuzzy relational neural network. Besides optimizing the parameters of the FRNN, the fuzzy membership functions are adjusted to fit the problem. The system was tested on several infant cry databases, reaching results of up to 97.55%. The design and implementation process, as well as some experiments along with their results, are shown.
By
Barrientos, Mario; Madrid, Humberto
This work introduces a new technique for edge detection based on a graph theory tool known as the normalized cut. The problem involves finding a certain eigenvector of a matrix called the normalized Laplacian, which is constructed in such a way that it represents the relations of color and distance between the image's pixels. The matrix dimensions, and the fact that it is dense, pose a problem for common eigensolvers. The power method seemed a good option to tackle this problem. The first results were not very impressive, but a modification of the function that relates the image pixels led us to a more convenient Laplacian structure and to a segmentation result known as edge detection. A deeper analysis showed that this procedure does not even need the power method, because the eigenvector that defines the segmentation can be obtained in closed form.
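The power method mentioned above is simple to state: repeatedly multiply a vector by the matrix and renormalize, converging to the dominant eigenvector when the dominant eigenvalue is separated from the rest. A generic numpy sketch (not the paper's Laplacian construction):

```python
import numpy as np

def power_method(A, iters=500, seed=0):
    """Dominant eigenvector of a symmetric matrix by repeated
    multiplication and normalization."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                  # amplify the dominant component
        v /= np.linalg.norm(v)     # renormalize to avoid overflow
    return v
```

For spectral segmentation one typically needs a subordinate eigenvector rather than the dominant one, which in practice means deflation or a shifted matrix; the abstract's point is that their modified construction sidesteps the iteration entirely.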
By
Vilariño, Darnes; Pinto, David; Balderas, Carlos; Tovar, Mireya; Beltrán, Beatriz; Paniagua, Sofia
Detection of discriminant terms allows us to improve the performance of natural language processing systems. The goal is to be able to find the possible contribution of each term in a given corpus and, thereafter, to use the terms of high contribution to represent the corpus. In this paper we present various experiments that use elliptic curves with the purpose of discovering the discriminant terms of a given textual corpus. Different experiments led us to use the mean and variance of the corpus terms to determine the parameters of a reduced Weierstrass equation (elliptic curve). We use the elliptic curves to graphically visualize the behavior of the corpus vocabulary. Thereafter, we use the elliptic curve parameters to cluster those terms that share characteristics. These clusters are then used as discriminant terms to represent the original document collection. Finally, we evaluated all these corpus representations in order to determine the terms that best discriminate each document.
By
Sheremetov, Leonid; Cosultchi, Ana; Batyrshin, Ildar; Velasco-Hernandez, Jorge
Several pattern recognition techniques are applied to the hydrogeological modeling of mature oilfields. Principal component analysis and clustering have become an integral part of microarray data analysis and interpretation. The algorithmic basis of clustering, the application of unsupervised machine-learning techniques to identify the patterns inherent in a data set, is well established. This paper discusses the motivations for and applications of these techniques to integrate water production data with other physicochemical information in order to classify the aquifers of an oilfield. Further, two time-series pattern recognition techniques for basic water cut signatures are discussed and integrated within the methodology for water breakthrough mechanism identification.
By
Ayala, Francisco J.; Herrera, Abel
This article describes the process of extracting a set of cepstral coefficients from a warped frequency space (mel and Bark) and analyzes the perceived differences in the reconstructed signal. We try to determine whether there is any audible improvement between these two most-used scales for the purpose of speech analysis by synthesis. We use the same procedure for parameter extraction and signal reconstruction for both functions, replacing only the warping scale. The proposed system is based on a basic cepstral analysis-synthesis model on the mel scale, whose excitation signal generation process has been changed. The inverse MLSA filter was obtained in order to generate the analysis signal; this signal is then fed into a wavelet decomposition block, and the resulting coefficients are sent to the decoding system, where the excitation signal is reconstructed. Furthermore, the mel scale is replaced by the Bark scale.
By
Figueroa Mora, Karina; Paredes, Rodrigo; Rangel, Roberto
Modeling proximity searching problems in a metric space allows one to approach many problems in different areas, e.g. pattern recognition, multimedia search, or clustering. Recently, the permutation-based approach was proposed, a novel technique that is unbeatable in practice but difficult to compress. In this article we introduce an improvement on that metric space search data structure. Our technique shows that the permutation-based algorithm can be compressed without losing precision. We show experimentally that our technique is competitive with the original idea and improves on it by up to 46% on real databases.
By
Vilariño, Darnes; Pinto, David; Beltrán, Beatriz; León, Saul; Castillo, Esteban; Tovar, Mireya
1 Citation
Normalization of SMS texts is a very important task that must be addressed by the computational community because of the tremendous growth of services based on mobile devices which make use of this kind of message. There are many limitations on the automatic treatment of SMS texts, derived from the particular writing style used. Beyond the problems of dealing with this kind of text, we are also interested in tasks requiring an understanding of the meaning of documents in different languages, which further increases the complexity of such tasks. Our approach proposes to normalize SMS texts employing machine translation techniques. For this purpose, we use a statistical bilingual dictionary calculated on the basis of the IBM-4 model to determine the best translation for a given SMS term. We have compared the presented approach with a traditional probabilistic method of information retrieval, observing that the normalization model proposed here greatly improves on the performance of the probabilistic one.
By
Villarreal, B. Lorena; Olague, Gustavo; Gordillo, J. L.
1 Citation
Smell sensors for odor source localization in mobile robotics are attracting the attention of researchers around the world. To solve the problem, one must consider the environmental model and odor behavior, the perception system, and the algorithm for tracking the odor plume. Current algorithms try to emulate the behavior of animals known for their capability to follow odors. Nevertheless, odor perception systems are still in their infancy and far from comparable with the biological sense of smell. For this reason, an algorithm that considers the perception system's capabilities and drawbacks, the environmental model, and the odor behavior is presented in this work. Besides, an artificial intelligence technique (Genetic Programming) is used as a platform to develop odor source localization algorithms. It is prepared for different environmental conditions and perception systems. A comparison between this improved algorithm and a pair of basic techniques for odor source localization is presented in terms of repeatability.
By
Lizarraga-Morales, Rocio A.; Sanchez-Yanez, Raul E.; Ayala-Ramirez, Victor
1 Citation
Texel size determination in periodic and near-periodic textures is a problem that has been addressed for years, and it remains an important issue in structural texture analysis. This paper proposes an approach to determine the texel size based on the computation and analysis of texture homogeneity properties. We analyze the homogeneity feature computed from difference histograms while varying the displacement vector for a preferred orientation. As we vary this vector, we expect a maximum value in the homogeneity data when its magnitude matches the texel size in the given orientation. We show that this approach can be used for both periodic and near-periodic textures, that it is robust to noise and blur perturbations, and that it has advantages over other approaches in computation time and memory storage.
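The homogeneity sweep can be sketched directly from the usual difference-histogram definition, where each difference value d is weighted by 1/(1+d²), so a shift that reproduces the texture exactly scores 1. The weighting and the horizontal-only sweep below are illustrative assumptions.

```python
import numpy as np

def homogeneity(img, dx):
    """Homogeneity of the horizontal difference histogram for shift dx:
    sum over difference values d of P(d) / (1 + d^2)."""
    diff = img[:, dx:].astype(int) - img[:, :-dx].astype(int)
    vals, counts = np.unique(diff, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p / (1.0 + vals.astype(float) ** 2))

def texel_width(img, max_shift):
    """Shift in 1..max_shift maximizing homogeneity = candidate texel width."""
    h = [homogeneity(img, d) for d in range(1, max_shift + 1)]
    return 1 + int(np.argmax(h))
```

For a strictly periodic texture the homogeneity curve peaks at the period (and its multiples), so taking the first maximum recovers the texel width along the chosen orientation.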
By
Becerra, H. M.; Hayet, J. B.; Sagüés, C.
We present a novel approach for the visual control of wheeled mobile robots, extending existing works that use the trifocal tensor as a source of measurements. In our approach, the singularities typically encountered in this kind of method are removed by formulating the control problem in terms of the trifocal tensor and by using a virtual target vertically translated from the real target. A single controller able to regulate the robot pose towards the desired configuration without local minima is designed. Additionally, the proposed approach is valid for perspective cameras as well as catadioptric systems obeying a central camera model. All these contributions are supported by convincing simulations.
By
Hernández-Hernández, Saiveth; Orantes-Molina, Antonio; Cruz-Barbosa, Raúl
Breast cancer is a global health problem principally affecting the female population. Digital mammograms are an effective way to detect this disease. One of the main indicators of malignancy in a mammogram is the presence of masses; however, their detection and diagnosis remain a difficult task. In this study, the impact of combining image descriptors and clinical data on the performance of conventional and kernel methods is presented. These models are trained with a dataset extracted from the public database BCDR-D01. The experimental results show that incorporating clinical data into image descriptors improves the performance of classifiers beyond using the descriptors alone. Likewise, this combination, using a nonlinear kernel function, achieves performance similar to that reported in the literature for this dataset.
By
Gómez-Adorno, Helena; Pinto, David; Vilariño, Darnes
2 Citations
This paper presents a methodology for tackling the problem of question answering for reading comprehension tests. The implemented system accepts a document as input and answers multiple-choice questions about it. It uses the Lucene information retrieval engine for carrying out information extraction, employing additional automated linguistic processing such as stemming, anaphora resolution and part-of-speech tagging. The proposed approach validates the answers by comparing the text retrieved by Lucene for each question with its candidate answers. For this purpose, a validation based on textual entailment is executed. We have evaluated the experiments carried out in order to verify the quality of the proposed methodology using two corpora widely used in international forums. The obtained results show that the proposed system selects the correct answer to a given question 33-37% of the time, a result that exceeds the average of all the runs submitted to the QA4MRE task of CLEF 2011 and 2012.
By
Lazo-Cortés, Manuel S.; Carrasco-Ochoa, Jesús Ariel; Martínez-Trinidad, José Fco.; Sanchez-Diaz, Guillermo
In their classic form, reducts as well as typical testors are minimal subsets of attributes that retain the discernibility condition. Constructs are a special type of reduct and represent a kind of generalization of the reduct concept. A construct reliably provides a sufficient amount of discrimination between objects belonging to different classes as well as a sufficient amount of resemblance between objects belonging to the same class. Based on the relation between constructs, reducts and typical testors, this paper focuses on a practical use of this relation. Specifically, we propose a method that allows applying typical testor algorithms to computing constructs. The proposed method involves modifying the classic definition of the pairwise object comparison matrix, adapting it to the requirements of certain algorithms originally designed to compute typical testors. The usefulness of our method is shown through some examples.
By
Rosas-Romero, Roberto; Starostenko, Oleg; Rodríguez-Asomoza, Jorge; Alarcon-Aquino, Vicente
This paper presents a novel approach for the registration of 3D images based on an optimal free-form rigid transformation. The proposal consists of semi-automatic image segmentation for reconstructing 3D object surfaces in medical images. The proposed extraction technique employs gradients in sequences of 3D medical images to attract a deformable surface model, using imaging planes that correspond to multiple locations of feature points in space instead of detecting contours on each imaging plane in isolation. Feature points are used as references before and after a deformation. A difficult issue, which deserves attention, is developing a methodology to find the optimal number of points that gives the best estimates without sacrificing computational speed. After generating a representation for each of two 3D objects, we find the best similarity transformation that represents the object deformation between them. The proposed approach has been tested using different imaging modalities by morphing data from histology sections to match MRI of the carotid artery.
By
López-Yáñez, Itzamá; Sheremetov, Leonid; Yáñez-Márquez, Cornelio
2 Citations
The paper describes a novel associative model for the forecasting of time series in petroleum engineering. The model is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories, taking the alpha and beta operators as the basis for the gamma operator. The objective is to reproduce and predict future oil production under different scenarios in an adjustable time window. The distinctive features of the experimental data set are spikes, abrupt changes and frequent discontinuities, which considerably decrease the precision of traditional forecasting methods. As experimental results show, this classifier-based predictor exhibits competitive performance. The advantages and limitations of the model, as well as lines of improvement, are discussed.
By
Barajas, Manlio; Dávalos-Viveros, José Pablo; Gordillo, J. L.
2 Citations
A method for tracking the 3D pose of an unmanned aerial vehicle (UAV) and controlling it is presented. Planar faces of the target vehicle are tracked, one at a time, using the Efficient Second Order Minimization algorithm. Homography decomposition is used to recover the 3D pose of the textured planar face being tracked. Then, a cuboid model is used to estimate the homographies of the remaining faces, which allows switching faces as the object moves and rotates. Cascade and single PID controllers are used to control the vehicle pose. Results confirm that this approach is effective for real-time aerial vehicle control using only one camera. This is a step towards an automatic 3D pose tracking system.
By
Tovar, Mireya; Pinto, David; Montes, Azucena; González, Gabriel; Vilariño, Darnes; Beltrán, Beatriz
1 Citations
In this paper we present an approach for evaluating the taxonomic relations of restricted-domain ontologies. We use the evidence found in corpora associated with the ontology domain to determine the validity of the taxonomic relations. Our approach employs lexico-syntactic patterns to evaluate taxonomic relations in which the concepts are totally different, and a subsumption-based technique for those relations in which one concept is completely included in the other. The integration of these two techniques has allowed us to automatically evaluate the taxonomic relations of two restricted-domain ontologies. The performance obtained was about 70% for an ontology of the e-learning domain, and around 88% for an ontology associated with the artificial intelligence domain.
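The subsumption idea for relations where one concept includes the other can be sketched as a document-co-occurrence test; the threshold and function name are our assumptions, not the paper's exact criterion:

```python
def subsumes(parent, child, docs, threshold=0.8):
    """parent subsumes child if P(parent | child) >= threshold over the corpus.

    docs is a collection of term sets, one per document.
    """
    with_child = [d for d in docs if child in d]
    if not with_child:
        return False
    both = sum(1 for d in with_child if parent in d)
    return both / len(with_child) >= threshold
```

Intuitively, a broader term ("ai") appears in nearly every document that mentions a narrower one ("ml"), but not vice versa, so the test is asymmetric as a taxonomic relation should be.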
By
RodríguezDiez, Vladímir; MartínezTrinidad, José Fco.; CarrascoOchoa, J. Ariel; LazoCortés, Manuel S.
1 Citations
Testor Theory allows performing feature selection in supervised classification problems through typical testors. Typical testors are irreducible subsets of features that preserve the object-discernibility ability of the original feature set. However, finding the complete set of typical testors for a dataset requires a high computational effort. In this paper, we present an empirical study of the performance of two of the most recent and fastest state-of-the-art algorithms for computing typical testors, with respect to the density of the basic matrix. For our study we use synthetic basic matrices, so as to control their characteristics, but we also include public standard datasets taken from the UCI machine learning repository. Finally, we discuss the conclusions drawn from this study.
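For intuition, typical testors of a small basic matrix can be enumerated by brute force; the paper's algorithms are far more efficient, and this sketch only illustrates the definition (a testor covers every row of the basic matrix, a typical testor is an irreducible one):

```python
from itertools import combinations

def is_testor(matrix, cols):
    """cols is a testor if every row of the basic matrix has a 1 in some col."""
    return all(any(row[c] for c in cols) for row in matrix)

def typical_testors(matrix):
    """Enumerate irreducible testors (typical testors) by brute force."""
    n = len(matrix[0])
    result = []
    for size in range(1, n + 1):                 # smallest subsets first
        for cols in combinations(range(n), size):
            if is_testor(matrix, cols) and \
               not any(set(t) <= set(cols) for t in result):
                result.append(cols)              # keep only irreducible ones
    return result
```

Density of the basic matrix (the fraction of 1s) directly drives how early subsets become testors, which is the characteristic the study varies.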
By
AguilarGonzález, Pablo M.; Kober, Vitaly
Correlation filters for object detection use information about the appearance and shape of the object of interest. Detection performance therefore degrades when the appearance of the object in the scene differs from the appearance used in the filter design process. This problem has been approached by using composite filters designed from a training set containing known views of the object of interest. However, composite filter design is usually carried out under the assumption that the ideal appearance and shape of the target are known. In this work we propose an algorithm for composite filter design using noisy training images. The algorithm is a modification of the synthetic discriminant function technique that uses arbitrary filter impulse responses. Furthermore, the filters can be adapted to achieve a prescribed discrimination capability for a class of backgrounds if a representative sample is known. Computer simulation results obtained with the proposed algorithm are presented and compared with those of common composite correlation filters.
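A minimal equal-peak synthetic discriminant function (SDF) sketch in numpy: the filter is a linear combination of the training images constrained to give prescribed correlation values at the origin. This shows the baseline technique, not the authors' noise-robust modification:

```python
import numpy as np

def sdf_filter(train_images, peaks):
    """h = X (X^T X)^{-1} c : composite filter whose correlation with each
    training image at the origin equals the prescribed peak value."""
    X = np.stack([im.ravel() for im in train_images], axis=1)  # pixels x N
    c = np.asarray(peaks, dtype=float)
    h = X @ np.linalg.solve(X.T @ X, c)
    return h.reshape(train_images[0].shape)
```

By construction X^T h = c, so every training view produces the same controlled peak; discrimination against a background class is obtained by adding those samples with zero-valued peaks.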
By
ChaconMurguia, Mario I.; VillalobosMontiel, Adrian J.; CalderonContreras, Jorge D.
This work presents a methodology to automatically detect the symmetry point of the breast in thermal images. To achieve this goal, the algorithm corrects the thermal image tilt in order to find a breast symmetry axis and computes a modified symmetry index that can be used as a measure of image quality, of breast cosmetic and pathologic issues, and as a seed location for a previously reported algorithm for automatic breast cancer analysis. The methodology involves filtering, edge detection, windowed edge analysis and shape detection based on the Hough transform. Experimental results show that the proposed method is able to define the symmetry axis as precisely as a person does, and it correctly detected the breast areas in 100% of the cases considered, allowing automatic breast analysis by the previous algorithm.
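The Hough-transform step can be illustrated with a minimal (theta, rho) accumulator over edge points; resolution and the returned quantities are illustrative choices of ours:

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Minimal Hough accumulator over (theta, rho) for a set of (row, col)
    edge points; returns the (theta, rho) of the strongest line."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for r, c in points:
        # each point votes for every line passing through it
        rho = np.round(c * np.cos(thetas) + r * np.sin(thetas)).astype(int) + diag
        acc[np.arange(n_theta), rho] += 1
    t, rh = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], rh - diag
```

A vertical symmetry-axis candidate shows up as a peak near theta = 0 whose rho gives the column of the axis.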
By
RamírezAcosta, Alejandro A.; GarcíaVázquez, Mireya S.; Nakano, Mariko
The transmission over error-prone networks of still images or videos coded by block-based techniques such as JPEG and MPEG may lead to block loss, degrading in particular the visual quality of images. In such environments, for example wireless communication where retransmission may not be feasible, error concealment techniques are required to reduce the degradation caused by the missing information. This paper surveys algorithms for spatial error concealment and proposes an adaptive and effective method based on edge analysis that performs well in situations where significant loss of information is present and the data of past reference images are not available either. The proposed method and the reviewed algorithms were implemented, tested and compared. Experimental results show that the proposed approach outperforms existing methods by up to 8.6 dB on average.
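A simple spatial-concealment baseline, distance-weighted interpolation of a lost block from its four boundary neighbours, can be sketched as follows (the paper's edge-based method is more elaborate; names and weighting are our assumptions):

```python
import numpy as np

def conceal_block(img, r0, c0, size):
    """Fill a lost size x size block at (r0, c0) by distance-weighted
    interpolation of the pixel rows/columns bordering the block."""
    out = img.astype(float).copy()
    top = img[r0 - 1, c0:c0 + size].astype(float)
    bot = img[r0 + size, c0:c0 + size].astype(float)
    left = img[r0:r0 + size, c0 - 1].astype(float)
    right = img[r0:r0 + size, c0 + size].astype(float)
    for i in range(size):
        for j in range(size):
            wt, wb = size - i, i + 1          # nearer border -> larger weight
            wl, wr = size - j, j + 1
            out[r0 + i, c0 + j] = (wt * top[j] + wb * bot[j] +
                                   wl * left[i] + wr * right[i]) / (wt + wb + wl + wr)
    return out
```

On smooth image regions this reconstruction is nearly exact (exact for a linear ramp); it is across edges that it blurs, which is precisely what edge-analysis methods address.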
By
Pérez Maldonado, Yara; Caballero Morales, Santiago Omar; Cruz Ortega, Roberto Omar
1 Citations
Hidden Markov Models (HMMs) have been widely used for Automatic Speech Recognition (ASR). Iterative algorithms such as Forward-Backward or Baum-Welch are commonly used to locally optimize HMM parameters (i.e., observation and transition probabilities). However, finding more suitable transition probabilities for the HMMs, which may be phoneme-dependent, may be achievable with other techniques. In this paper we study the application of two Genetic Algorithms (GAs) to accomplish this task, obtaining statistically significant improvements on unadapted and adapted speaker-independent HMMs when tested with different users.
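As a toy illustration of using a GA to fit a transition probability (far simpler than the paper's HMM setting), consider maximizing the likelihood of a two-state sequence over the self-transition probability p; all operators and parameters are illustrative assumptions:

```python
import math
import random

def ga_optimize(fitness, pop_size=30, gens=60, mut_rate=0.1, seed=1):
    """Toy real-valued GA over [0, 1]: tournament selection, blend
    crossover and gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=fitness)   # tournament parent 1
            b = max(rng.sample(pop, 3), key=fitness)   # tournament parent 2
            child = 0.5 * (a + b)                      # blend crossover
            if rng.random() < mut_rate:                # gaussian mutation
                child = min(1.0, max(0.0, child + rng.gauss(0.0, 0.1)))
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Toy fitness: log-likelihood of a two-state sequence as a function of the
# self-transition probability p (the MLE is #stays / #transitions = 10/15).
SEQ = "AAABAAAABBAAAAAB"

def loglik(p):
    ll = 0.0
    for x, y in zip(SEQ, SEQ[1:]):
        q = p if x == y else 1.0 - p
        ll += math.log(max(q, 1e-9))
    return ll
```

The GA should land near the closed-form maximum-likelihood estimate; in the real ASR setting the fitness would instead be recognition accuracy or corpus likelihood over full phoneme-dependent transition matrices.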
By
LópezMonroy, Adrián Pastor; MontesyGómez, Manuel; VillaseñorPineda, Luis; CarrascoOchoa, Jesús Ariel; MartínezTrinidad, José Fco.
1 Citations
This paper proposes a novel representation for Authorship Attribution (AA) based on Concise Semantic Analysis (CSA), which has been successfully used in Text Categorization (TC). Our approach for AA, called Document Author Representation (DAR), builds document vectors in a space of authors, computing the relationship between textual features and authors. In order to evaluate our approach, we compare the proposed representation with conventional approaches and previous work using the c50 corpus. We found that DAR can be very useful in AA tasks, because it performs well on imbalanced data, obtaining comparable or better accuracy results.
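The core of an author-space representation can be sketched as follows: learn term-author weights from training texts, then project a new document into the space of authors. This is a hedged simplification of the idea with raw counts instead of CSA's weighting scheme; all names are ours:

```python
from collections import defaultdict

def author_space(train_docs):
    """Build term -> author weights from (author, text) training pairs."""
    weights = defaultdict(lambda: defaultdict(float))
    for author, text in train_docs:
        for term in text.lower().split():
            weights[term][author] += 1.0
    return weights

def represent(text, weights, authors):
    """Project a document into the author space: one coordinate per author."""
    vec = dict.fromkeys(authors, 0.0)
    for term in text.lower().split():
        for author, w in weights.get(term, {}).items():
            vec[author] += w
    return vec
```

The resulting vectors have as many dimensions as authors rather than as terms, which is what makes the representation compact and robust on imbalanced data.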
By
KuriMorales, Angel; LopezPeña, Ignacio
1 Citations
A method to determine the position of a mobile robot using machine learning strategies was introduced in [1]. The method raises the possibility of decreasing the size of the database that holds the images describing the area in which the robot localizes itself. The present work provides a statistical validation of that approach by calculating the Hamming and Euclidean distances between all the images, using on the one hand all their pixels and on the other hand the reduced set of pixels obtained by the GA as described in [1]. To perform the analysis, a new series of images was taken from a specific position at several horizontal (pan) and vertical (tilt) angles. These images were compared using two different measures, a) the Hamming distance and b) the Euclidean distance, to determine how similar they are to one another.
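The two comparison measures reduce to a few lines of numpy; a sketch (applicable equally to full images and to the GA-reduced pixel subsets):

```python
import numpy as np

def hamming(a, b):
    """Number of pixel positions where the two images differ."""
    return int(np.count_nonzero(a != b))

def euclidean(a, b):
    """Square root of the summed squared pixel differences."""
    return float(np.sqrt(((a.astype(float) - b.astype(float)) ** 2).sum()))
```
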
By
PinillaBuitrago, Laura Alejandra; MartínezTrinidad, José Fco.; CarrascoOchoa, J. A.
A skeleton is a simplified and efficient descriptor for shapes, which is of great importance in computer graphics and vision. In this paper, we present a new method for computing skeletons from 2D binary shapes. The contour of each shape is represented by a set of dominant points, which are obtained by a non-parametric method. Then, a set of convex dominant points is used for building the skeleton. Finally, we iteratively remove some skeleton branches in order to get a clean skeleton representation. The proposed method is compared against other state-of-the-art methods. The results show that the skeletons built by our method are more stable across a wider range of shapes than the skeletons obtained by other methods, and the shapes reconstructed from our skeletons are closer to the original shapes.
By
Pogrebnyak, Oleksiy; HernándezBautista, Ignacio; Camacho Nieto, Oscar; Manrique Ramírez, Pablo
A method for lossless image compression using the lifting-scheme wavelet transform is presented. The proposed method adjusts the wavelet filter coefficients by analyzing the signal's spectral characteristics to obtain a higher compression ratio in comparison to the standard CDF(2,2) and CDF(4,4) filters. The proposal is based on spectral pattern recognition with a 1-NN classifier. Spectral patterns of a small fixed length are formed for the entire image, thus permitting a global optimization of the filter coefficients, equal for all decompositions. The proposed method was applied to a set of test images, obtaining better entropy values in comparison to the standard wavelet lifting filters.
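One forward level of the standard CDF(2,2) lifting scheme, the baseline the paper improves on, can be sketched as a predict step followed by an update step; the boundary handling by replication is a simple choice of ours:

```python
def cdf22_forward(x):
    """One level of the CDF(2,2) lifting wavelet (predict + update).

    len(x) must be even; boundaries are handled by sample replication.
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # predict: detail = odd sample minus the average of neighbouring evens
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) / 2.0 for i in range(n)]
    # update: smooth = even sample plus a quarter of neighbouring details
    s = [even[i] + (d[max(i - 1, 0)] + d[i]) / 4.0 for i in range(n)]
    return s, d
```

On a linear ramp the interior detail coefficients vanish exactly, which is why near-linear image regions compress so well; the paper's adaptation replaces the fixed 1/2 and 1/4 weights with coefficients tuned to the image's spectrum.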
By
Farzana, J.; Muhammad, Aslam; MartinezEnriquez, A. M.; Afraz, Z. S.; Talha, W.
1 Citations
Vision loss is one of the ultimate obstacles in the lives of the blind, preventing them from performing tasks on their own. The blind must trust others when selecting trendy and eye-catching accessories, because unassisted buying can lead to choices that do not match their personality or social style. They are therefore bound to depend on their families for shopping assistance, and the family may not be able to afford quality time due to busy routines. This dependency lowers the self-confidence of the blind and erodes their ability to negotiate, their decision-making power, and their social activities. Via uninterrupted speech communication, our proposed talking accessory-selection assistant for the blind provides quick decision support in picking routinely worn accessories such as dresses, shoes and cosmetics, according to social trends and events. The foremost aim of this assistant is to make the blind independent and more assertive.
By
LoyolaGonzález, Octavio; GarcíaBorroto, Milton; MedinaPérez, Miguel Angel; MartínezTrinidad, José Fco.; CarrascoOchoa, Jesús Ariel; Ita, Guillermo
4 Citations
Classifiers based on emerging patterns are usually more understandable for humans than those based on more complex mathematical models. However, most classifiers based on emerging patterns obtain low accuracy on problems with imbalanced databases. This problem has been tackled through oversampling or undersampling methods; nevertheless, to the best of our knowledge these methods have not been tested with classifiers based on emerging patterns. Therefore, in this paper we present an empirical study on the use of oversampling and undersampling methods to improve the accuracy of a classifier based on emerging patterns. We apply the most popular oversampling and undersampling methods to 30 databases from the UCI Repository of Machine Learning. Our experimental results show that using oversampling and undersampling methods significantly improves the accuracy of the classifier for the minority class.
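The simplest of the resampling methods discussed, random oversampling of the minority class, can be sketched as follows (SMOTE and the undersampling variants are more involved; names are ours):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

The balanced set is then fed to the pattern-mining stage, so minority-class patterns are no longer drowned out by majority-class support.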
By
HernándezLeón, Raudel; HernándezPalancar, José; CarrascoOchoa, J. A.; MartínezTrinidad, José Fco.
In Associative Classification, building a classifier based on Class Association Rules (CARs) consists in finding an ordered CAR list by applying a rule ordering strategy. Since this CAR list will be used to build a classifier, it is important to develop a good rule ordering strategy. In this paper, we introduce four novel hybrid rule ordering strategies; the first three combine the Netconf measure with Support-Confidence based rule ordering strategies. The fourth strategy, called Hybrid Specific Rules/Netconf (SR/NF), combines the Netconf measure with a rule ordering strategy based on the CARs' size. The experiments show that the proposed strategies obtain better classification accuracy than the best ordering strategies reported in the literature.
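The Netconf measure, as commonly defined in the association-rule literature, can be computed directly from rule supports; a sketch:

```python
def netconf(sup_xy, sup_x, sup_y):
    """Netconf(X -> Y) = (sup(XY) - sup(X)*sup(Y)) / (sup(X) * (1 - sup(X))).

    Zero under independence, positive for positive dependence,
    negative for negative dependence; requires 0 < sup_x < 1.
    """
    return (sup_xy - sup_x * sup_y) / (sup_x * (1.0 - sup_x))
```

Ordering CARs by such a dependence-aware measure, rather than by raw confidence, is what the hybrid strategies combine with Support-Confidence and rule-size criteria.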
By
Alejo, R.; Toribio, P.; Valdovinos, R. M.; PachecoSanchez, J. H.
In this paper we propose a modified backpropagation to deal with severe two-class imbalance problems. The method consists in automatically finding the oversampling rate used to train a neural network (NN), i.e., identifying the appropriate number of minority samples to use during the learning stage, so as to reduce training time. The experimental results show that the performance of the proposed method is very competitive when compared with conventional SMOTE, while its training time is shorter.
By
SilvánCárdenas, José Luis
A watershed segmentation algorithm is proposed for the automatic extraction of tree crowns from LiDAR data to support 3D modelling of forest stands. A relatively sparse LiDAR point cloud was converted to a surface elevation in raster format and a canopy height model (CHM) was extracted. The segmentation method was then applied to the CHM, and the results were combined with the original point cloud to generate models of individual tree crowns. The method was tested in 200 circular plots (400 m^{2}) located over 50 sites of a conservation area in Mexico City. The segmentation method exhibited a moderate to perfect detection rate on 66% of the plots tested. One major factor in poor detection was identified as the relatively low sampling rate of the LiDAR data with respect to crown sizes.
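Before a watershed segmentation, crown apexes are often seeded from local maxima of the CHM; a minimal 3x3 strict-maximum sketch (our simplification of the seeding idea, not the paper's full pipeline; the height threshold is illustrative):

```python
import numpy as np

def tree_tops(chm, min_height=2.0):
    """Candidate crown apexes: CHM cells that are strict maxima of their
    3x3 neighbourhood and taller than min_height (in metres)."""
    tops = []
    for r in range(1, chm.shape[0] - 1):
        for c in range(1, chm.shape[1] - 1):
            w = chm[r - 1:r + 2, c - 1:c + 2]
            if chm[r, c] >= min_height and chm[r, c] == w.max() \
                    and (w == w.max()).sum() == 1:
                tops.append((r, c))
    return tops
```

Each detected apex then seeds one watershed basin, so crowns smaller than the CHM cell spacing are missed, consistent with the sampling-rate limitation the paper reports.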
By
CaballeroMorales, SantiagoOmar
In this paper an approach based on the insertion of “markers” is proposed to increase the performance of face recognition based on principal component analysis (PCA). The markers are zero-valued pixels expected to remove information likely to affect classification (noisy pixels). The patterns of the markers were optimized with a genetic algorithm (GA), in contrast to other noise generation techniques. Experiments performed on a well-known face database showed that the technique achieves significant improvements over PCA, particularly when the training data are small in comparison with the size of the testing sets. This was also observed when the number of eigenfaces used for classification was small.
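The PCA (eigenfaces) baseline that the marker insertion builds on can be sketched via an SVD of the centered training matrix; function names are ours:

```python
import numpy as np

def pca_basis(faces, k):
    """Top-k eigenfaces (principal directions) of row-vectorized face images."""
    mean = faces.mean(axis=0)
    U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return Vt[:k], mean

def project(face, basis, mean):
    """Coordinates of a face in the eigenface space."""
    return basis @ (face - mean)

def reconstruct(coords, basis, mean):
    """Back-projection from eigenface coordinates to pixel space."""
    return mean + basis.T @ coords
```

Classification compares projected coordinates; zeroing GA-selected marker pixels before this step is the paper's contribution, which matters most when few training faces or few eigenfaces are available.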
By
AldanaMurillo, Noé G.; Hayet, JeanBernard; Becerra, Héctor M.
1 Citations
In this paper, we address the problem of appearance-based localization of humanoid robots in the context of robot navigation using a visual memory. The problem consists in determining the image, belonging to a previously acquired set of key images (the visual memory), that is most similar to the current view of the monocular camera carried by the robot. The robot is initially kidnapped, and the current image has to be compared with the visual memory. We tackle the problem using a hierarchical visual bag-of-words approach. The main contribution of the paper is a comparative evaluation of local descriptors to represent the images. Real-valued, binary and color descriptors are compared using real datasets captured by a small-size humanoid robot. A specific visual vocabulary is proposed to deal with issues generated by humanoid locomotion: blurring and rotation around the optical axis.
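Given a learned visual vocabulary, the bag-of-words image signature is a histogram of nearest-word assignments; a minimal numpy sketch (vocabulary construction, e.g. hierarchical clustering, is omitted):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and
    build a normalized word histogram for the image."""
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)                       # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

Localization then reduces to comparing the current view's histogram against those of the key images in the visual memory.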
By
Escalante, Hugo Jair; Sotomayor, Mauricio; Montes, Manuel; LopezMonroy, A. Pastor
Naïve Bayes nearest neighbors (NBNN) is a variant of the classic KNN classifier that has proved to be very effective for object recognition and image classification tasks. Under NBNN, an unseen image is classified by looking at the distances between the sets of visual descriptors of the test and training images. Although NBNN is a very competitive pattern classification approach, it presents a major drawback: it requires large storage and computational resources. NBNN's requirements are even larger than those of the standard KNN because sets of raw descriptors must be stored and compared; therefore, efficiency improvements for NBNN are necessary. Prototype generation (PG) methods have proved to be helpful for reducing the storage and computational requirements of standard KNN. PG methods learn a reduced subset of prototypical instances to be used by KNN for classification. In this contribution we study the suitability of PG methods for enhancing the capabilities of NBNN. Through an extensive comparative study we show that PG methods can dramatically reduce the number of descriptors required by NBNN without significantly affecting its discriminative performance. In fact, we show that PG methods can improve the classification performance of NBNN while using far fewer visual descriptors. We compare the performance of NBNN to other state-of-the-art object recognition approaches and show that the combination of PG and NBNN outperforms alternative techniques.
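The NBNN decision rule itself is compact: an image is assigned to the class minimizing the summed squared image-to-class nearest-descriptor distances. A minimal sketch, assuming small in-memory descriptor sets per class:

```python
import numpy as np

def nbnn_classify(query_descs, class_descs):
    """NBNN: pick the class minimizing sum_i min_j ||q_i - d_j^class||^2."""
    best, best_cost = None, np.inf
    for label, descs in class_descs.items():
        d = np.linalg.norm(query_descs[:, None, :] - descs[None, :, :], axis=2)
        cost = (d.min(axis=1) ** 2).sum()          # per-descriptor nearest match
        if cost < best_cost:
            best, best_cost = label, cost
    return best
```

Because every stored descriptor participates in the `min`, shrinking `class_descs` with prototype generation directly cuts both memory and the dominant distance computation.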
By
SánchezGutiérrez, Máximo E.; Albornoz, E. Marcelo; MartinezLicona, Fabiola; Rufiner, H. Leonardo; Goddard, John
5 Citations
Emotional speech recognition is a multidisciplinary research area that has received increasing attention over the last few years. The present paper considers the application of restricted Boltzmann machines (RBMs) and deep belief networks (DBNs) to the difficult task of automatic Spanish emotional speech recognition. The principal motivation lies in the success reported in a growing body of work employing these techniques as alternatives to traditional methods in speech processing and speech recognition. Here a well-known Spanish emotional speech database is used to extensively experiment with, and compare, different combinations of parameters and classifiers. It is found that with a suitable choice of parameters, RBMs and DBNs can achieve results comparable to other classifiers.
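A single contrastive-divergence (CD-1) update for a binary RBM, the building block of the DBNs discussed, can be sketched in numpy; sizes and learning rate are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, bv, bh, v0, lr=0.1, rng=None):
    """One CD-1 step for a binary RBM with weights W (visible x hidden)."""
    rng = rng or np.random.default_rng(0)
    ph0 = sigmoid(v0 @ W + bh)                       # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + bv)                     # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + bh)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)   # positive - negative phase
    bv += lr * (v0 - pv1).mean(axis=0)
    bh += lr * (ph0 - ph1).mean(axis=0)
    return W, bv, bh

def recon_error(W, bv, bh, v):
    """Mean absolute one-step reconstruction error (deterministic)."""
    pv = sigmoid(sigmoid(v @ W + bh) @ W.T + bv)
    return np.abs(v - pv).mean()
```

Stacking RBMs trained this way layer by layer, then fine-tuning, yields the DBNs compared against conventional classifiers in the paper.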
By
Figueroa Mora, Karina; Paredes, Rodrigo
Proximity searching consists in retrieving the objects most similar to a given query. This kind of searching is a basic tool in many fields of artificial intelligence, because it can be used as a search engine to solve problems like $kN\!N$ searching. A common technique to solve proximity queries is to use an index. In this paper, we show a variant of the permutation-based index, which, in its original version, has great predictive power about which objects are worth comparing with the query (avoiding the exhaustive comparison). We have noted that when two permutants are close, they can produce small differences in the order in which objects are reviewed, which could be responsible for finding the true answer or missing it. In this paper we aim to mitigate this effect. In fact, our technique allows us both to reduce the index size and to improve the query cost by up to 30%.
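The permutation-based index's core idea, each object represented by the order in which it "sees" a set of permutants, with orders compared via the Spearman footrule, can be sketched as follows (the paper's variant builds on top of this baseline):

```python
import numpy as np

def permutation_signature(obj, permutants):
    """Order of permutant indices by increasing distance to obj."""
    d = np.linalg.norm(permutants - obj, axis=1)
    return np.argsort(d, kind="stable")

def footrule(p, q):
    """Spearman footrule between two permutations: sum of positional shifts."""
    pos_p = np.empty(len(p), dtype=int)
    pos_p[p] = np.arange(len(p))
    pos_q = np.empty(len(q), dtype=int)
    pos_q[q] = np.arange(len(q))
    return int(np.abs(pos_p - pos_q).sum())
```

At query time, database objects are reviewed in increasing footrule distance between their signature and the query's; close permutants produce near-tied positions, the instability the paper mitigates.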
By
FelipeRiveron, Edgardo M.; Villalobos Castaldi, Fabiola M.; Gómez, Ernesto Suaste; Leiva Vasconcellos, Marcos A.; Albortante Morato, Cecilia
1 Citations
The focus of this work is to create a methodology to separate the entire vascular network of optical human fundus images into its independent vein and artery networks. It has been developed following the logical procedure used by humans when they assemble a puzzle. In the development of the methodology we take into consideration physiological properties, topological properties of the tree structure, and morphological properties of both networks: they have only bifurcations, crossings and ending points, and crossings always occur between venous and arterial branches. For arterial blood vessels we obtained a classification capability, based on pixel counting, of 84.88%, while for venous vessels it was 82.87%. This indicates that the methodology correctly classified, on average, 83.80% of the blood vessels in the images.
