Showing 1 to 100 of 349 matching Articles
By
van Noord, Gertjan; Gerdemann, Dale
30 Citations
An extension to finite state transducers is presented, in which atomic symbols are replaced by arbitrary predicates over symbols. The extension is motivated by applications in natural language processing (but may be more widely applicable) as well as by the observation that transducers with predicates generally have fewer states and fewer transitions. Although the extension is fairly trivial for finite state acceptors, the introduction of predicates is more interesting for transducers. It is shown how various operations on transducers (e.g., composition) can be implemented, as well as how the transducer determinization algorithm can be generalized for predicate-augmented finite state transducers.
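The idea of replacing atomic symbols with predicates can be illustrated with a minimal sketch (our own toy construction, not the paper's implementation): a finite-state acceptor whose transitions are labeled with predicates over symbols. One predicate edge can stand in for many symbol edges, which is why predicate-augmented machines tend to have fewer transitions.

```python
# Hypothetical toy acceptor for "one or more letters followed by one or
# more digits", with predicate-labeled transitions instead of symbol edges.

def is_letter(c):
    return c.isalpha()

def is_digit(c):
    return c.isdigit()

# state -> list of (predicate, next_state)
TRANSITIONS = {
    0: [(is_letter, 0), (is_digit, 1)],  # letters loop; first digit moves on
    1: [(is_digit, 1)],                  # then any number of digits
}
ACCEPTING = {1}

def accepts(word):
    """Run the predicate acceptor on `word`, taking the first matching edge."""
    state = 0
    for ch in word:
        for pred, nxt in TRANSITIONS.get(state, []):
            if pred(ch):
                state = nxt
                break
        else:
            return False  # no predicate on any outgoing edge matched
    return state in ACCEPTING
```

Over a large alphabet, the two predicate edges from state 0 would otherwise require one symbol edge per letter and per digit.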
By
Casadio, Claudia
5 Citations
This paper explores the linguistic implications of Non-commutative Linear Logic, with particular reference to its multiplicative fragment MNLL, which exhibits a direct relationship to Lambek's Syntactic Calculus. Such a framework is appealing for linguistic analysis since it allows one to develop a dynamic characterization of the notion of a function, which plays a basic role in the foundations of categorial grammar. The analysis will focus on a variety of constructions involving scope configurations, unbounded dependencies and Wh-clauses. Particular attention is given to the proof nets for MNLL, which are planar graphs in which the communication processes and the flow of information are represented by means of a parallelistic architecture. We will introduce the proof nets and sequent derivations associated with each linguistic expression and will show that a direct relationship exists between the types and derivations of the Syntactic Calculus and the corresponding types and derivations in MNLL. Moreover, given the symmetric architecture and the crucial role played by the two negations of Non-commutative Linear Logic in the generation of the logical types, we will show that this system is richer in expressive power and in the capacity of performing left-to-right computations of word strings.
By
Dave, Shachi; Parikh, Jignashu; Bhattacharyya, Pushpak
22 Citations
Interlingua and transfer-based approaches to machine translation have long been in use in competing and complementary ways. The former proves economical in situations where translation among multiple languages is involved, and can be used as a knowledge-representation scheme. But given a particular interlingua, its adoption depends on its ability (a) to capture the knowledge in texts precisely and accurately and (b) to handle cross-language divergences. This paper studies the language divergence between English and Hindi and its implications for machine translation between these languages using the Universal Networking Language (UNL). UNL has been introduced by the United Nations University, Tokyo, to facilitate the transfer and exchange of information over the internet. The representation works at the level of single sentences and defines a semantic net-like structure in which nodes are word concepts and arcs are semantic relations between these concepts. The language divergences between Hindi, an Indo-European language, and English can be considered as representing the divergences between the SOV and SVO classes of languages. The work presented here is the only one to our knowledge that describes language divergence phenomena in the framework of computational linguistics through a South Asian language.
By
Langholm, Tore
4 Citations
Indexed grammars are shown to correspond to an existential monadic second-order logic over phrase structure trees, extended with a single existential quantifier ranging over a certain type of unary function. Indexed grammars are also shown to correspond to contingency grammars, a strengthening of context-free grammars that makes use of such unary functions.
By
Kelemen, Jozef; Kelemenová, Alica; Mitrana, Victor
In our opinion, the 20th century contributed to the development of human civilization in two main directions, namely the scientific understanding of the universe and of the human being as its part, and the shifting of technical, engineered devices into interplanetary space, into the human body, and into the processes initially reserved for the activities of human minds only. Cross-fertilization of different branches of science and technology has been running very rapidly during the last hundred years and has appeared to be very productive in many directions. The most interesting for our further intentions are the mutual influences of biology, the mathematical study of computing and natural language, and cybernetics, viewed as the possibility to study structures and phenomena appearing in living systems, in society, and in machines through a unified optics. This paper is not intended to be a usual survey of any topic; it contains some personal reflections of the authors focused on some developments in formal language theory which, in the authors' view, are inspired by biology. Maybe the main title is not entirely appropriate for this paper, but we agree on the slogan `formal linguistics as a pilot science'; furthermore, the subtitle explains clearly our intentions.
By
Almada, Teresa; Vaz de Carvalho, Júlia
1 Citation
We introduce the variety ℒ_{n}^{m}, m ≥ 1 and n ≥ 2, of m-generalized Łukasiewicz algebras of order n and characterize its subdirectly irreducible algebras. The variety ℒ_{n}^{m} is semisimple, locally finite and has equationally definable principal congruences. Furthermore, the variety ℒ_{n}^{m} contains the variety of Łukasiewicz algebras of order n.
By
Hechler, Stephen H.
We extend some results of Adam Kolany to show that large sets of satisfiable sentences generally contain equally large subsets of mutually consistent sentences. In particular, this is always true for sets of uncountable cofinality, and remains true for sets of denumerable cofinality if we put appropriate bounding conditions on the sentences. The results apply to both the propositional and the predicate calculus. To obtain these results, we use delta sets for regular cardinals, and, for singular cardinals, a generalization of delta sets. All of our results are theorems in ZFC.
By
Freund, Max A.
2 Citations
With the past and future tense propositional operators in its syntax, a formal logical system for sortal quantifiers, sortal identity and (second order) quantification over sortal concepts is formulated. A completeness proof for the system is constructed and its absolute consistency proved. The completeness proof is given relative to a notion of logical validity provided by an intensional semantic system, which assumes an approach to sortals from a modern form of conceptualism.
By
Correia, Fabrice
1 Citation
We introduce a system PSI for a strict implication operator called Priorean strict implication. The semantics for PSI is based on partial Kripke models without accessibility relations. PSI is proved sound and complete with respect to that semantics, and Prior's system Q and related systems are shown to be fragments of PSI or of a mild extension of it.
By
Giannakidou, Anastasia
59 Citations
In this paper, I discuss the distribution and interpretation of free choice items (FCIs) in Greek, a language exhibiting a lexical paradigm of such items distinct from that of negative polarity items. Greek differs in this respect from English, which uniformly employs any. FCIs are grammatical only in certain contexts that can be characterized as nonveridical (Giannakidou 1998, 1999), and although they yield universal-like interpretations in certain structures, they are not, I argue, universal quantifiers. Evidence will be provided that FCIs are indefinites; the quasi-universal effect is shown to be the result of binding by an operator with universal force. Additionally, the limited distribution of FCIs in nonveridical contexts can be accounted for by analyzing them as indefinites which must always be interpreted in an intensional type. The difference between ``regular'' indefinites and FCIs, therefore, is reduced to a type difference which captures the fact that only the latter exhibit limited distribution: because of their intensional type, FCIs will be grammatical only in contexts providing alternatives (worlds or situations), and nonveridical contexts do exactly this. By contrast, FCIs are excluded from veridical and episodic contexts because these provide no alternatives and hence do not satisfy the lexical semantic requirement of FCIs. The proposed analysis is supported by data from other languages as well (Spanish, Catalan, French) and has important consequences regarding the analysis of English any. If FCIs are not universal quantifiers but indefinites, then the usual ambiguity thesis (free choice any being universal, negative polarity any an existential) can no longer be maintained, at least not as one in terms of quantificational force.
By
Saeboe, Kjell Johan
13 Citations
I present an analysis of Free Choice Items (FCIs), based on Scandinavian, where FCIs are complex and distinct from polarity sensitive items. Scandinavian FCIs are argued to have two components. One is a universal quantifying into modal contexts. The other is an operator mapping a type (s,t) expression onto itself, adjoining to the closest type t or (s,t) expression. Thus invoking Intensional Functional Application, this operator requires the presence of a modal in the scope of the universal quantifier. Facts concerning ‘essential connections’ and ‘existential import’ are accounted for by assuming that the FC determiner has the option of acting like a quantifier.
By
Düntsch, Ivo; Schmidt, Gunther; Winter, Michael
10 Citations
The standard models for mereotopological structures are Boolean subalgebras of the complete Boolean algebra of regular closed subsets of a nonempty connected regular T_{0} topological space, with an additional "contact relation" C defined by xCy ⇔ x ∩ y ≠ ∅.
A (possibly) more general class of models is provided by the Region Connection Calculus (RCC) of Randell et al. We show that the basic operations of the relational calculus on a "contact relation" generate at least 25 relations in any model of the RCC, and hence, in any standard model of mereotopology. It follows that the expressiveness of the RCC in relational logic is much greater than the original 8 RCC base relations might suggest. We also interpret these 25 relations in the standard model of the collection of regular open sets in the two-dimensional Euclidean plane.
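The contact relation xCy ⇔ x ∩ y ≠ ∅ can be illustrated with a deliberately tiny model: closed intervals of the real line standing in for regular closed regions. This is our own illustrative sketch, not the RCC itself; the definitions below are toy versions of contact and parthood.

```python
# Regions are closed intervals (a, b) with a <= b, playing the role of
# regular closed sets. Contact holds iff the closed regions share a point.

def contact(x, y):
    """xCy: the two closed intervals intersect (x ∩ y ≠ ∅)."""
    (a1, b1), (a2, b2) = x, y
    return max(a1, a2) <= min(b1, b2)

def part_of(x, y):
    """Mereological parthood for intervals: x is contained in y."""
    (a1, b1), (a2, b2) = x, y
    return a2 <= a1 and b1 <= b2

# Externally connected regions, e.g. (0,1) and (1,2), touch only at a
# boundary point: contact holds although their interiors are disjoint.
```

RCC base relations such as "externally connected" vs. "overlapping" are then definable from contact and parthood alone, which is the starting point for the relation-counting result in the abstract.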
By
Yoshimi, Takehiko
1 Citation
Since the headlines of English news articles have a characteristic style, different from the styles which prevail in ordinary sentences, it is difficult for MT systems to generate high-quality translations for headlines. We try to solve this problem by adding to an existing system a pre-editing module which rewrites headlines as ordinary expressions. Rewriting of headlines makes it possible to generate better translations which would not otherwise be generated, with little or no change to the existing parts of the system. Focusing on the absence of a form of the verb be as a missing part of normal English, we have described rewriting rules for properly inserting the verb be into headlines, based on information obtained by morpholexical and rough syntactic analysis. We have incorporated the proposed method into our English–Japanese MT system, and carried out an experiment with 312 headlines as unknown data. Our method achieved a satisfactory 81.2% recall and 92.0% precision.
By
Fujii, Atsushi; Ishikawa, Tetsuya
23 Citations
Cross-language information retrieval (CLIR), where queries and documents are in different languages, has of late become one of the major topics within the information retrieval community. This paper proposes a Japanese/English CLIR system, in which we combine query translation and retrieval modules. We currently target the retrieval of technical documents, and therefore the performance of our system is highly dependent on the quality of the translation of technical terms. However, technical term translation is still problematic in that technical terms are often compound words, and thus new terms are progressively created by combining existing base words. In addition, Japanese often represents loanwords based on its special phonogram. Consequently, existing dictionaries find it difficult to achieve sufficient coverage. To counter the first problem, we produce a Japanese/English dictionary for base words, and translate compound words on a word-by-word basis. We also use a probabilistic method to resolve translation ambiguity. For the second problem, we use a transliteration method, which maps words unlisted in the base word dictionary to their phonetic equivalents in the target language. We evaluate our system using a test collection for CLIR, and show that both the compound word translation and transliteration methods improve the system performance.
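The compound-word strategy described above can be sketched in a few lines: translate a segmented compound word-by-word from a base-word dictionary, falling back to transliteration for unlisted words. This is a hedged illustration only; the dictionary entries and the `transliterate` stub are stand-ins for the paper's actual resources and phonetic-equivalence method.

```python
# Hypothetical Japanese -> English base-word dictionary (illustrative).
BASE_DICT = {
    "情報": "information",
    "検索": "retrieval",
}

def transliterate(word):
    # Stand-in for the phonetic transliteration method: here we only
    # mark the word as needing transliteration.
    return "<translit:%s>" % word

def translate_compound(parts):
    """Translate a segmented compound word-by-word, with fallback."""
    return " ".join(BASE_DICT.get(p, transliterate(p)) for p in parts)
```

E.g. the segmented compound ["情報", "検索"] comes out as "information retrieval", while an out-of-dictionary loanword would be routed to the transliteration step.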
By
Egly, Uwe
1 Citation
In this paper, we compare several cut-free sequent systems for propositional intuitionistic logic Int with respect to polynomial simulations. Such calculi can be divided into two classes, namely single-succedent calculi (like Gentzen's LJ) and multi-succedent calculi. We show that the latter allow for more compact proofs than the former. Moreover, for some classes of formulae, the same is true if proofs in single-succedent calculi are directed acyclic graphs (dags) instead of trees. Additionally, we investigate the effect of weakening rules on the structure and length of dag proofs.
The second topic of this paper is the effect of different embeddings from Int to S4. We select two different embeddings from the literature and show that translated (propositional) intuitionistic formulae sometimes have exponentially shorter minimal proofs in a cut-free Gentzen system for S4 than the original formula in a cut-free single-succedent Gentzen system for Int. Moreover, the length and the structure of proofs of translated formulae crucially depend on the chosen embedding.
By
Gumb, Raymond D.
2 Citations
The logic of partial terms (LPT) is a variety of negative free logic in which functions, as well as predicates, are strict. A companion paper focused on nonconstructive LPT with definite descriptions, called LPD, and laid the foundation for tableaux systems by defining the concept of an LPD model system and establishing Hintikka's Lemma, from which the strong completeness of the corresponding tableaux system readily follows. The present paper utilizes the tableaux system in establishing an Extended Joint Consistency Theorem for LPD that incorporates the Robinson Joint Consistency Theorem and the Craig–Lyndon Interpolation Lemma. The method of proof is similar to that originally used in establishing the Extended Joint Consistency Theorem for positive free logic. Proof of the Craig–Lyndon Interpolation Lemma for formulas possibly having free variables is readily had in LPT and its intuitionistic counterpart. The paper concludes with a brief discussion of the theory of definitions in LPD.
By
Kreitz, Christoph; Pientka, Brigitte
We present a method for integrating rippling-based rewriting into matrix-based theorem proving as a means for automating inductive specification proofs. The selection of connections in an inductive matrix proof is guided by symmetries between induction hypothesis and induction conclusion. Unification is extended by decision procedures and a rippling/reverse-rippling heuristic. Conditional substitutions are generated whenever a uniform substitution is impossible. We illustrate the integrated method by discussing several inductive proofs for the integer square root problem as well as the algorithms extracted from these proofs.
By
Jones, Gareth; Collier, Nigel; Sakai, Tetsuya; Sumita, Kazuo; Hirakawa, Hideki
Internet search engines allow access to online information from all over the world. However, there is currently a general assumption that users are fluent in the languages of all documents that they might search for. This has for historical reasons usually been a choice between English and the locally supported language. Given the rapidly growing size of the Internet, it is likely that future users will need to access information in languages in which they are not fluent or have no knowledge of at all. This paper shows how information retrieval and machine translation can be combined in a cross-language information access framework to help overcome the language barrier. We present encouraging preliminary experimental results using English queries to retrieve documents from the standard Japanese language BMIR-J2 retrieval test collection. We outline the scope and purpose of cross-language information access and provide an example application to suggest that technology already exists to provide effective and potentially useful applications.
By
Avron, Arnon; Konikowska, Beata
15 Citations
The main goal of the paper is to suggest some analytic proof systems for LC and its finite-valued counterparts which are suitable for proof search. This goal is achieved by following the general Rasiowa–Sikorski methodology for constructing analytic proof systems for semantically defined logics. All the systems presented here are terminating, contraction-free, and based on invertible rules, which have a local character and at most two premises.
By
Suzuki, Masami; Inoue, Naomi; Hashimoto, Kazuo
1 Citation
It is important to give useful clues for selecting desired content from a number of retrieval results obtained (usually) from a vague search request. Compared with monolingual retrieval, such a support framework is inevitable and much more significant for filtering translingual retrieval results. This paper describes an attempt to provide appropriate translations of major keywords in each document in a cross-language information retrieval (CLIR) result, as a browsing support for users. Our idea of determining the appropriate translation of major keywords is based on word co-occurrence distributions in the translation target language, considering the actual situation of WWW content, where it is difficult to obtain aligned parallel (multilingual) corpora. The proposed method provides higher-quality keyword translation to yield more effective support in identifying the target documents in the retrieval result. We report the advantage of this browsing support technique through evaluation experiments, including comparison with conditions of referring to a translated document summary, and discuss related issues to be examined towards more effective cross-language information extraction.
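The co-occurrence idea above can be sketched as a small scoring function (our own illustration, with made-up counts, not the paper's data): among candidate translations of a keyword, prefer the one that co-occurs most often with the translations of the other keywords in the document.

```python
# Hypothetical target-language co-occurrence counts (illustrative only).
COOC = {
    ("bank", "river"): 1,
    ("bank", "loan"): 9,
    ("shore", "river"): 7,
    ("shore", "loan"): 0,
}

def best_translation(candidates, context_terms):
    """Pick the candidate with the highest total co-occurrence with the
    other (already chosen) keyword translations in the document."""
    def score(c):
        return sum(COOC.get((c, t), 0) for t in context_terms)
    return max(candidates, key=score)
```

With "river" in the context the ambiguous keyword resolves to "shore"; with "loan" it resolves to "bank", mirroring how target-language statistics can disambiguate without parallel corpora.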
By
Batens, Diderik; Meheus, Joke
8 Citations
Adaptive logics typically pertain to reasoning procedures for which there is no positive test. In [7], we presented a tableau method for two inconsistency-adaptive logics. In the present paper, we describe these methods and present several ways to increase their efficiency. This culminates in a dynamic marking procedure that indicates which branches have to be extended first, and thus guides one towards a decision — the conclusion follows or does not follow — in a very economical way.
By
Nguyen, Linh Anh
10 Citations
We give complete sequent-like tableau systems for the modal logics KB, KDB, K5, and KD5. Analytic cut rules are used to obtain completeness. Our systems have the analytic superformula property and can thus give a decision procedure. Using the systems, we prove the Craig interpolation lemma for the mentioned logics.
By
Beckert, Bernhard; Goré, Rajeev
8 Citations
Free-variable semantic tableaux are a well-established technique for first-order theorem proving, where free variables act as a meta-linguistic device for tracking the eigenvariables used during proof search. We present the theoretical foundations to extend this technique to propositional modal logics, including non-trivial rigorous proofs of soundness and completeness, and also present various techniques that improve the efficiency of the basic naive method for such tableaux.
By
Van Rooy, Robert
6 Citations
In this paper I argue that anaphoric pronouns should always be interpreted exhaustively. I propose that pronouns are either used referentially, and refer to the speaker's referents of their antecedent indefinites, or descriptively, and go proxy for the description recoverable from their antecedent clause. I show how this view can be implemented within a dynamic semantics, and how it can account for various examples that seemed to be problematic for the view that for all unbound pronouns there should always be a notion of exhaustivity/uniqueness involved. The uniqueness assumption for the use of singular pronouns is also shown to be important to explain what the discourse referents used in dynamic semantics represent.
By
Mayer, Marta Cialdea; Cerrito, Serenella
4 Citations
In this paper we study proof procedures for some variants of first-order modal logics, where domains may be either cumulative or freely varying and terms may be either rigid or non-rigid, local or non-local. We define both ground and free variable tableau methods, parametric with respect to the variants of the considered logics. The treatment of each variant is equally simple and is based on the annotation of functional symbols by natural numbers, conveying some semantical information on the worlds where they are meant to be interpreted.
This paper is an extended version of a previous work in which full proofs were not included. The proofs are at some points rather tricky and may help in understanding the reasons for some details in the basic definitions.
By
Fitting, Melvin; Thalmann, Lars; Voronkov, Andrei
15 Citations
Many powerful logics exist today for reasoning about multi-agent systems, but in most of these it is hard to reason about an infinite or indeterminate number of agents. Also, the naming schemes used in the logics often lack the expressiveness to name agents in an intuitive way.
To obtain a more expressive language for multi-agent reasoning and a better naming scheme for agents, we introduce a family of logics called term-modal logics. A main feature of our logics is the use of modal operators indexed by the terms of the logics. Thus, one can quantify over variables occurring in modal operators. In term-modal logics agents can be represented by terms, and knowledge of agents is expressed with formulas within the scope of modal operators.
This gives us a flexible and uniform language for reasoning about the agents themselves and their knowledge. This article gives examples of the expressiveness of the languages and provides sequent-style and tableau-based proof systems for the logics. Furthermore, we give proofs of soundness and completeness with respect to the possible world semantics.
By
Rosati, Riccardo
3 Citations
We define a tableau calculus for the logic of only knowing and knowing at most ONℒ, which is an extension of Levesque's logic of only knowing Oℒ. The method is based on the possible-world semantics of the logic ONℒ, and can be considered as an extension of known tableau calculi for modal logic K45. From the technical viewpoint, the main features of such an extension are the explicit representation of "unreachable" worlds in the tableau, and an additional branch closure condition implementing the property that each world must be either reachable or unreachable. The calculus allows for establishing the computational complexity of reasoning about only knowing and knowing at most. Moreover, we prove that the method matches the worst-case complexity lower bound of the satisfiability problem for both ONℒ and Oℒ. With respect to [22], in which the tableau calculus was originally presented, in this paper we both provide a formal proof of soundness and completeness of the calculus, and prove the complexity results for the logic ONℒ.
By
Baader, Franz; Sattler, Ulrike
144 Citations
Description logics are a family of knowledge representation formalisms that are descended from semantic networks and frames via the system KL-ONE. During the last decade, it has been shown that the important reasoning problems (like subsumption and satisfiability) in a great variety of description logics can be decided using tableau-like algorithms. This is not very surprising since description logics have turned out to be closely related to propositional modal logics and logics of programs (such as propositional dynamic logic), for which tableau procedures have been quite successful.
Nevertheless, due to different underlying intuitions and applications, most description logics differ significantly from run-of-the-mill modal and program logics. Consequently, the research on tableau algorithms in description logics has led to new techniques and results, which are, however, also of interest for modal logicians. In this article, we will focus on three features that play an important rôle in description logics (number restrictions, terminological axioms, and role constructors), and show how they can be taken into account by tableau algorithms.
By
Seuren, Pieter A. M.; Capretta, Venanzio; Geuvers, Herman
4 Citations
The prime purpose of this paper is, first, to restore to discourse-bound occasion sentences their rightful central place in semantics and secondly, taking these as the basic propositional elements in the logical analysis of language, to contribute to the development of an adequate logic of occasion sentences and a mathematical (Boolean) foundation for such a logic, thus preparing the ground for more adequate semantic, logical and mathematical foundations of the study of natural language. Some of the insights elaborated in this paper have appeared in the literature over the past thirty years, and a number of new developments have resulted from them. The present paper aims at providing an integrated conceptual basis for this new development in semantics. In Section 1 it is argued that the reduction by translation of occasion sentences to eternal sentences, as proposed by Russell and Quine, is semantically and thus logically inadequate. Natural language is a system of occasion sentences, eternal sentences being merely boundary cases. The logic has fewer tasks than is standardly assumed, as it excludes semantic calculi, which depend crucially on information supplied by cognition and context and thus belong to cognitive psychology rather than to logic. For sentences to express a proposition and thus be interpretable and informative, they must first be properly anchored in context. A proposition has a truth value when it is, moreover, properly keyed in the world, i.e. is about a situation in the world. Section 2 deals with the logical properties of natural language. It argues that presuppositional phenomena require trivalence and presents the trivalent logic PPC_{3}, with two kinds of falsity and two negations. It introduces the notion of Σ-space for a sentence A (or /A/, the set of situations in which A is true) as the basis of logical model theory, and the notion of /P^{A}/ (the Σ-space of the presuppositions of A), functioning as a `private' subuniverse for /A/.
The trivalent Kleene calculus is reinterpreted as a logical account of vagueness, rather than of presupposition. PPC_{3} and the Kleene calculus are refinements of standard bivalent logic and can be combined into one logical system. In Section 3 the adequacy of PPC_{3} as a truth-functional model of presupposition is considered more closely and given a Boolean foundation. In a non-compositional extended Boolean algebra, three operators are defined: 1_{a} for the conjoined presuppositions of a, ã for the complement of a within 1_{a}, and â for the complement of 1_{a} within Boolean 1. The logical properties of this extended Boolean algebra are axiomatically defined and proved for all possible models. Proofs are provided of the consistency and the completeness of the system. Section 4 is a provisional exploration of the possibility of using the results obtained for a new discourse-dependent account of the logic of modalities in natural language. The overall result is a modified and refined logical and model-theoretic machinery, which takes into account both the discourse-dependency of natural language sentences and the necessity of selecting a key in the world before a truth value can be assigned.
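The trivalent Kleene calculus mentioned above can be sketched as truth functions. Note this covers only the strong-Kleene connectives, not PPC_{3} with its two falsities and two negations; encoding the third value as None is our own choice.

```python
# Strong Kleene three-valued connectives.
# Values: 1 = true, 0 = false, None = the third (undefined) value.

def k_not(a):
    return None if a is None else 1 - a

def k_and(a, b):
    if a == 0 or b == 0:         # falsity dominates conjunction
        return 0
    if a is None or b is None:   # otherwise undefinedness propagates
        return None
    return 1

def k_or(a, b):
    if a == 1 or b == 1:         # truth dominates disjunction
        return 1
    if a is None or b is None:
        return None
    return 0
```

On the classical values 0 and 1 these tables coincide with bivalent logic, which is the sense in which the Kleene calculus is a refinement of it.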
By
Kim, Sung-Dong; Zhang, Byoung-Tak; Kim, Yung Taek
5 Citations
Long-sentence analysis has been a critical problem in machine translation because of its high complexity. Intra-sentence segmentation has been proposed as a method for reducing parsing complexity. This paper presents a two-step segmentation method: (1) identifying potential segmentation positions in a sentence and (2) selecting an actual segmentation position amongst them. We have attempted to apply machine learning techniques to the segmentation task: ``concept learning'' and ``genetic learning''. By learning the ``Segmentable Position'' concept, the rules for identifying potential segmentation positions are postulated. The selection of the actual segmentation position is based on a function whose parameters are determined by genetic learning. Experimental results are presented which illustrate the effectiveness of our approach to long-sentence parsing for MT. The results also show improved segmentation performance in comparison to other existing methods.
By
Bernth, Arendse; Gdaniec, Claudia
7 Citations
Machine Translation of arbitrary input is difficult, but the output quality can be improved significantly if writers create documents with MT in mind. This article deals with ``MTranslatability'' – the translatability of texts by MT systems. It identifies characteristics of text that decrease MTranslatability and suggests ways to improve them. It also illustrates the effect of writing for MTranslatability by showing before-and-after pictures of output from various commercially available MT systems, and gives an overview of tools that help identify and correct the problems.
By
Krieger, Hans-Ulrich
Typed feature logics have been employed as description languages in modern type-oriented grammar theories such as HPSG and have laid the theoretical foundations for many implemented systems. However, recursion poses severe problems and has been addressed through specialized powerdomain constructions which depend on the particular view of the logician. In this paper, we argue that definite equivalences as introduced by Smolka can serve as the formal basis for arbitrarily formalized typed feature structures and typed feature-based grammars/lexicons, as employed in, e.g., TFS or TDL. The idea here is that type definitions in such systems are transformed into an equivalent definite program, whose meaning can be identified with the denotation of the type system. Now, models of a definite program P can be characterized by the set of ground atoms which are logical consequences of the definite program. These models are ordered by subset inclusion and, for reasons that will become clear, we propose the greatest model as the intended interpretation of P or, equivalently, as the denotation of the associated type system. Our transformational approach also has a great impact on non-monotonically defined types, since under this interpretation, we can view the type hierarchy as a pure transport medium, allowing us to get rid of inheritance, and yielding a perfectly monotonic definite program.
By
Kim, Seonho; Yoon, Juntae; Song, Mansuk
3 Citations
In this paper, we propose a statistical method to automatically extract collocations from a Korean POS-tagged corpus. Since a large portion of language is represented by collocation patterns, collocational knowledge provides a valuable resource for NLP applications. One difficulty of collocation extraction is that Korean has a partially free word order, which also appears in collocations. In this work, we exploit four statistics, ‘frequency’, ‘randomness’, ‘convergence’, and ‘correlation’, in order to take into account the flexible word order of Korean collocations. We separate meaningful bigrams using an evaluation function based on the four statistics and extend the bigrams to n-gram collocations using a fuzzy relation. Experiments show that this method works well for Korean collocations.
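The bigram-filtering step described above can be sketched with a toy example. The abstract does not define the four statistics, so plain frequency plus pointwise mutual information (PMI) stand in for them here; the corpus is invented:

```python
# Hypothetical sketch of bigram collocation scoring: a frequency filter plus
# PMI as a stand-in for the paper's four statistics ('frequency', 'randomness',
# 'convergence', 'correlation'), which are not defined in this abstract.
import math
from collections import Counter

tokens = "the cat sat on the mat the cat sat on the hat".split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
N = len(tokens)

def pmi(w1, w2):
    # Association strength of the pair relative to chance co-occurrence.
    p_xy = bigrams[(w1, w2)] / (N - 1)
    p_x, p_y = unigrams[w1] / N, unigrams[w2] / N
    return math.log(p_xy / (p_x * p_y), 2)

# Keep bigrams seen at least twice, then rank them by association score.
candidates = [(b, pmi(*b)) for b, c in bigrams.items() if c >= 2]
candidates.sort(key=lambda item: -item[1])
print(candidates[0][0])   # top-ranked candidate collocation
```

A real system would replace the frequency threshold and PMI with the paper's evaluation function and then merge overlapping bigrams into n-grams.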
By
Daimi, Kevin
13 Citations
The aim of this paper is to describe a technique for identifying the sources of several types of syntactic ambiguity in Arabic sentences with a single parse only. Normally, any sentence with two or more structural representations is said to be syntactically ambiguous. However, Arabic sentences with only one structural representation may be ambiguous. Our technique for identifying Syntactic Ambiguity in Single-Parse Arabic Sentences (SASPAS) analyzes each sentence and verifies the conditions that govern the existence of certain types of syntactic ambiguities in Arabic sentences. SASPAS is integrated with the syntactic parser, which is based on the Definite Clause Grammar (DCG) formalism. The system accepts Arabic sentences in their original script.
By
Kallmeyer, Laura
4 Citations
This paper addresses the problem of integrating underspecification of the parent (immediate dominance) and the dominance relation into tree-generating grammar formalisms for natural language processing. Retaining the advantages of Tree Adjoining Grammars (TAG), in particular extended domains of locality and a local derivation process, a TAG extension called local Tree Description Grammar (local TDG) is defined that generates tree descriptions. In contrast to earlier tree-description-based variants of TAG, the possibility of an underspecified dominance relation provides underspecified representations for scope ambiguities. Furthermore, local TDGs allow an adequate treatment of long-distance scrambling in German, which is one of the problematic phenomena for TAG.
By
Holmes, David I.; Robertson, Michael; Paez, Roxanna
24 Citations
This paper describes how traditional and non-traditional methods were used to identify seventeen previously unknown articles that we believe to be by Stephen Crane, published in the New York Tribune between 1889 and 1892. The articles, printed without byline in what was at the time New York City's most prestigious newspaper, report on activities in a string of summer resort towns on New Jersey's northern shore. Scholars had previously identified fourteen shore reports as Crane's; these possible attributions more than double that corpus. The seventeen articles confirm how remarkably early Stephen Crane set his distinctive writing style and artistic agenda. In addition, the sheer quantity of the articles from the summer of 1892 reveals how vigorously the twenty-year-old Crane sought to establish himself in the role of professional writer. Finally, our discovery of an article about the New Jersey National Guard's summer encampment reveals another way in which Crane immersed himself in nineteenth-century military culture and helps to explain how a young man who had never seen a battle could write so convincingly of war in his soon-to-come masterpiece, The Red Badge of Courage. We argue that the joint interdisciplinary approach employed in this paper should be the way in which attributional research is conducted.
By
Whissell, Cynthia; Sigelman, Lee
6 Citations
Intercorrelations among stylistic and emotional variables and construct validity deduced from relationships to other ratings of U.S. presidents suggest that power language (language that is linguistically simple, emotionally evocative, highly imaged, and rich in references to American values) is an important descriptor of inaugural addresses. Attempts to predict the use of power language in inaugural addresses from variables representing the times (year, media, economic factors) and the man (presidential personality) lead to the conclusion that time-based factors are the best predictors of the use of such language (81% prediction of variance in the criterion) while presidential personality adds at most a small amount of prediction to the model. Changes in power language are discussed as the outcome of a tendency to opt for breadth of communication over depth.
By
Kracht, Marcus
5 Citations
In transformational grammar the notion of a chain has been central ever since its introduction in the early 80's. However, an insightful theory of chains has hitherto been missing. This paper develops such a theory of chains. Though it is applicable to virtually all chains, we shall focus on movement-induced chains. It will become apparent that chains are far from innocuous. A proper formulation of the structures and algorithms involved is quite a demanding task. Furthermore, we shall show that it is possible to define structures in which the notion of a chain coincides with that of a constituent, so that the notion of a chain becomes redundant, generally without making the theory more complicated. These structures avoid some of the problems that beset the standard structures (such as unbound traces created by remnant movement).
By
Anane, Rachid
This paper is concerned with the investigation of the relevance and suitability of the data mining approach to serial documents. Conceptually the paper is divided into three parts. The first part presents the salient features of data mining and its symbiotic relationship to data warehousing. In the second part of the paper, historical serial documents are introduced, and the Ottoman Tax Registers (Defters) are taken as a case study. Their conformance to the data mining approach is established in terms of structure, analysis and results. A high-level conceptual model for the Defters is also presented. The final part concludes with a brief consideration of the implication of data mining for historical research.
By
Dvurečenskij, Anatolij
73 Citations
Pseudo MV-algebras are a non-commutative extension of MV-algebras introduced recently by Georgescu and Iorgulescu. We introduce states (finitely additive probability measures) on pseudo MV-algebras. We show that extremal states correspond to normal maximal ideals. We give an example showing that, in contrast to classical MV-algebras introduced by Chang, states can fail to exist on pseudo MV-algebras. We prove that representable and normal-valued pseudo MV-algebras admit at least one state.
By
Fitelson, Branden; Wos, Larry
1 Citations
This article features long-sought proofs with intriguing properties (such as the absence of double negation and the avoidance of lemmas that appeared to be indispensable), and it features the automated methods for finding them. The theorems of concern are taken from various areas of logic that include two-valued sentential (or propositional) calculus and infinite-valued sentential calculus. Many of the proofs (in effect) answer questions that had remained open for decades, questions focusing on axiomatic proofs. The approaches we take are of added interest in that all rely heavily on the use of a single program that offers logical reasoning, William McCune's automated reasoning program OTTER. The nature of the successes and approaches suggests that this program offers researchers a valuable automated assistant. This article has three main components. First, in view of the interdisciplinary nature of the audience, we discuss the means for using the program in question (OTTER), which flags, parameters, and lists have which effects, and how the proofs it finds are easily read. Second, because of the variety of proofs that we have found and their significance, we discuss them in a manner that permits comparison with the literature. Among those proofs, we offer a proof shorter than that given by Meredith and Prior in their treatment of Łukasiewicz's shortest single axiom for the implicational fragment of two-valued sentential calculus, and we offer a proof for the Łukasiewicz 23-letter single axiom for the full calculus. Third, with the intent of producing a fruitful dialogue, we pose questions concerning the properties of proofs and, even more pressing, invite questions similar to those this article answers.
By
Vermeulen, C.
1 Citations
We consider substitutions in order-sensitive situations, having in the back of our minds the case of dynamic predicate logic (DPL) with a stack semantics. We start from the semantic intuition that substitutions are move instructions on stacks: the syntactic operation [y/x] is matched by the instruction to move the value of the y-stack to the x-stack. We can describe these actions in the positive fragment of DPLE. Hence this fragment counts as a logic for DPL-substitutions. We give a calculus for the fragment and prove soundness and completeness.
By
Pollmann, Thijs; Baayen, R. Harald
4 Citations
In this paper, some electronically gathered data are presented and analyzed about the presence of the past in newspaper texts. In ten large text corpora of six different languages, all dates in the form of years between 1930 and 1990 were counted. For six of these corpora this was done for all the years between 1200 and 1993. Depicting these frequencies on the timeline, we find an underlying regularly declining curve, deviations at regular places and culturally determined peaks at irregular points. These three phenomena are analyzed.
Mathematically speaking, all the underlying curves have the same form. Whether a newspaper gives much or little attention to the past, the distribution of this attention over time turns out to be inversely proportional to the distance between past and present. It is shown that this distribution is largely independent of the total number of years in a corpus, the culture in which it is published, the language and the date of origin of the corpus. The phenomenon is explained as a kind of forgetting: the larger the distance between past and present, the more difficult it is to connect something of the past to an item in the present day. A more detailed analysis of the data shows a breakpoint in the frequency vs. distance from the publication date of the texts. References to events older than approximately 50 years are the result of a forgetting process that is distinctively different from the forgetting speed of more recent events.
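The inverse-proportionality claim can be illustrated with a toy check (synthetic counts, not the authors' data): if the frequency of references to a year d years in the past is f(d) = C/d, then log f is linear in log d with slope -1:

```python
# Toy check of an inverse-proportional reference-frequency curve:
# fit the slope of log(frequency) vs. log(distance) by least squares.
import math

C = 10000.0
distances = range(1, 101)                 # 1 to 100 years into the past
counts = [C / d for d in distances]       # hypothetical 1/d reference counts

xs = [math.log(d) for d in distances]
ys = [math.log(c) for c in counts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# A slope of -1 on the log-log scale is exactly "inversely proportional".
print(round(slope, 6))
```

On real counts one would fit the same regression and look for the reported breakpoint near 50 years, where the slope changes.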
Pandel's classification of the dimensions of historical consciousness is used to answer the question of how these investigations elucidate the historical consciousness of the cultures in which the newspapers are written and read.
By
Filip, Hana; Carlson, Gregory N.
3 Citations
In this paper we examine interactions of the reciprocal with distributive and collective operators, which are encoded by prefixes on verbs expressing the reciprocal relation: namely, the Czech distributive po and the collectivizing na. The theoretical import of this study is twofold. First, it contributes to our knowledge of how word-internal operators interact with phrasal syntax/semantics. Second, the prefixes po and na generate (a range of) readings of reciprocal sentences for which the Strongest Meaning Hypothesis (SMH) proposed by Dalrymple et al. (1998) does not make the right predictions. The distributive prefix po prefers the Strong Reciprocity reading, although the SMH predicts that a weakening should take place, while with the prefix na we find cases where weaker reciprocal readings are preferable to the stronger ones predicted by the SMH. This behavior of po and na is, we propose, due to the way in which they modulate two factors that are crucial in the interpretation of reciprocal sentences: (i) the relevant subpluralities in the group denoted by the reciprocal's antecedent, and (ii) the strength of reciprocal relations. We provide a detailed analysis of the semantics of the prefixes po and na and their contribution to the meaning of reciprocal sentences within the general framework of event semantics with lattice structures.
By
Aghaei, Mojtaba; Ardeshir, Mohammad
2 Citations
We introduce two Gentzen-style sequent calculus axiomatizations for conservative extensions of basic propositional logic. Our first axiomatization is an improvement of, in the sense that it has a kind of subformula property and is a slight modification of. In this system the cut rule is eliminated. The second axiomatization is a classical conservative extension of basic propositional logic. Using these axiomatizations, we prove interpolation theorems for basic propositional logic.
By
Ahmed, Tarek Sayed; Németi, Istvan
11 Citations
SC_{α}, CA_{α}, QA_{α} and QEA_{α} stand for the classes of Pinter's substitution algebras, Tarski's cylindric algebras, Halmos' quasipolyadic algebras, and quasipolyadic equality algebras of dimension α, respectively. Generalizing a result of Németi on cylindric algebras, we show that for K ∈ {SC, CA, QA, QEA} and ordinals α < β, the class Nr_{α}K_{β} of α-dimensional neat reducts of β-dimensional K algebras, though closed under taking homomorphic images and products, is not closed under forming subalgebras (i.e. is not a variety) if and only if α > 1.
From this it easily follows that for 1 < α < β, the operation of forming α-neat reducts of algebras in K_{β} does not commute with forming subalgebras, a notion to be made precise.
We give a contrasting result concerning Halmos' polyadic algebras (with and without equality). For such algebras, we show that the class of infinite dimensional neat reducts forms a variety.
We comment on the status of the property of neat reducts commuting with forming subalgebras for various reducts of polyadic algebras that are also expansions of cylindric-like algebras. We try to draw a borderline between reducts that have this property and reducts that do not.
Following research initiated by Pigozzi, we also emphasize the strong tie that links the (apparently non-related) property of neat reducts commuting with forming subalgebras with proving amalgamation results in cylindric-like algebras of relations. We show that, like amalgamation, neat reducts commuting with forming subalgebras is another algebraic expression of definability and, accordingly, is also strongly related to the well-known metalogical properties of Craig, Beth and Robinson in the corresponding logics.
By
Goldblatt, Robert
6 Citations
A variety V of Boolean algebras with operators is singleton-persistent if it contains a complex algebra whenever it contains the subalgebra generated by the singletons. V is atom-canonical if it contains the complex algebra of the atom structure of any of the atomic members of V.
This paper explores relationships between these "persistence" properties and questions of whether V is generated by its complex algebras or its atomic members, or is closed under canonical embedding algebras or completions. It also develops a general theory of when operations involving complex algebras lead to the construction of elementary classes of relational structures.
By
Allwein, Gerard; MacCaull, Wendy
5 Citations
Gelfand quantales are complete unital quantales with an involution, *, satisfying the property that for any element a, if a ⊙ b ≤ a for all b, then a ⊙ a* ⊙ a = a. A Hilbert-style axiom system is given for a propositional logic, called Gelfand Logic, which is sound and complete with respect to Gelfand quantales. A Kripke semantics is presented for which the soundness and completeness of Gelfand Logic is shown. The completeness theorem relies on a Stone-style representation theorem for complete lattices. A Rasiowa/Sikorski-style semantic tableau system is also presented, with the property that if all branches of a tableau are closed, then the formula in question is a theorem of Gelfand Logic. An open branch in a completed tableau guarantees the existence of a Kripke model in which the formula is not valid; hence it is not a theorem of Gelfand Logic.
By
Kim, Yuseop; Zhang, Byoung-Tak; Kim, Yung Taek
4 Citations
In machine translation, collocation dictionaries are important for selecting accurate target words. However, if the dictionary size is too large it can decrease the efficiency of translation. This paper presents a method to develop a compact collocation dictionary for transitive verb–object pairs in English–Korean machine translation without losing translation accuracy. We use WordNet to calculate the semantic distance between words, and k-nearest-neighbor learning to select the translations. The entries in the dictionary are minimized to balance the tradeoff between translation accuracy and time. We have performed several experiments on a selected set of verbs extracted from a raw corpus of over 3 million words. The results show that in real-time translation environments the size of a collocation dictionary can be reduced to 40% of its original size without significant decrease in its accuracy.
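The k-nearest-neighbor selection step can be sketched with an invented toy: the paper measures distance between words via WordNet, but here each object noun gets a made-up numeric feature and the Korean labels are illustrative romanizations only:

```python
# Hypothetical sketch of k-NN translation selection for verb-object pairs.
# The feature values and training pairs below are invented for illustration;
# a real system would use WordNet-based semantic distances instead.
from collections import Counter

train = [
    (0.10, "gyeongyeonghada"),   # e.g. run a company -> "manage"
    (0.20, "gyeongyeonghada"),
    (0.80, "dallida"),           # e.g. run a race    -> "sprint"
    (0.85, "dallida"),
    (0.90, "dallida"),
]

def knn_translate(feature, k=3):
    # Find the k training examples closest to the query and take a majority vote.
    nearest = sorted(train, key=lambda pair: abs(pair[0] - feature))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_translate(0.75))   # falls among the "dallida" examples
```

Shrinking the dictionary then amounts to dropping training entries whose removal does not change the k-NN vote for the held-out queries.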
By
Guessoum, Ahmed; Zantout, Rached
2 Citations
The lexicon is a major part of any Machine Translation (MT) system. If the lexicon of an MT system is not adequate, this will affect the quality of the whole system. Building a comprehensive lexicon, i.e., one with a high lexical coverage, is a major activity in the process of developing a good MT system. As such, the evaluation of the lexicon of an MT system is clearly a pivotal issue for the process of evaluating MT systems. In this paper, we introduce a new methodology that was devised to enable developers and users of MT systems to evaluate their lexicons semi-automatically. This new methodology is based on the idea of the importance of a specific word or, more precisely, word sense, to a given application domain. This importance, or weight, determines how the presence of such a word in, or its absence from, the lexicon affects the MT system's lexical quality, which in turn will naturally affect the overall output quality. The method, which adopts a black-box approach to evaluation, was implemented and applied to evaluating the lexicons of three commercial English–Arabic MT systems. A specific domain was chosen in which the various word-sense weights were determined by feeding sample texts from the domain into a system developed specifically for that purpose. Once this database of word senses and weights was built, test suites were presented to each of the MT systems under evaluation and their output rated by a human operator as either correct or incorrect. Based on this rating, an overall automated evaluation of the lexicons of the systems was deduced.
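The weighting idea described above reduces to a simple score: each domain word sense carries an importance weight, and a lexicon is rated by the weighted fraction of senses it covers. A minimal sketch, with invented senses and weights:

```python
# Minimal sketch of weighted lexical coverage. The sense inventory and the
# weights below are invented; a real evaluation derives weights from sample
# domain texts, as the paper describes.
sense_weights = {
    "bank/finance": 0.50,
    "interest/finance": 0.30,
    "branch/finance": 0.15,
    "teller/finance": 0.05,
}

def lexical_coverage(lexicon_senses, weights):
    # Weighted fraction of domain senses present in the lexicon.
    covered = sum(w for sense, w in weights.items() if sense in lexicon_senses)
    return covered / sum(weights.values())

lexicon = {"bank/finance", "interest/finance"}
print(round(lexical_coverage(lexicon, sense_weights), 2))
```

Under this scheme, missing a high-weight sense hurts the score far more than missing several rare ones, which matches the paper's motivation for sense-level weighting.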
By
Minker, Wolfgang
1 Citations
In this article, we discuss robustness and portability issues for parsing components in interactive speech systems. The robustness is obtained by choosing an appropriate grammar formalism. It should be well adapted to spontaneous speech effects, which are frequent in these application domains. Portability, on the other hand, can be achieved by choosing a flexible grammar implementation. We illustrate both issues by describing a stochastic parsing component implemented and evaluated for spoken language translation and information retrieval applications.
By
Leith, Miguel; Cunningham, Jim
Linguistic phenomena of tense and aspect have been investigated in a great deal of theoretical work in linguistics, philosophy and computer science. Modern tense logics, established by Prior, are part of this effort. Point tense logics offer an intuitive representation of tense but lack the expressiveness to represent many aspectual structures. Interval tense logics offer more expressiveness but in the general case can be computationally intractable. From a linguistic perspective there is the problem of precisely how to formalise the aspectual structures, such as a culmination and a culminated process. In this paper we define a computationally tractable augmented fragment of Halpern and Shoham's interval tense logic HS and apply it to represent a core set of aspectual structures, which are incorporated into a temporal semantics of a simple fragment of English. We model the logic fragment using timelines and define two procedures, one for constructing the minimal timelines that satisfy a formula and one for checking semantic entailments between one formula and another by comparing their timelines. The former is applied to compute models of temporal readings and the latter to check entailments between them. Possible extensions to the logic fragment and timeline models are discussed as ways of accounting for a wider range of linguistic behaviour.
By
Martinez, Maricarmen
3 Citations
There is no known syntactic characterization of the class of finite definitions in terms of a set of basic definitions and a set of basic operators under which the class is closed. Furthermore, it is known that the basic propositional operators do not preserve finiteness. In this paper I survey these problems and explore operators that do preserve finiteness. I also show that every definition that uses only unary predicate symbols and equality is bound to be finite.
By
Montagna, Franco
22 Citations
We prove that the sets of standard tautologies of predicate Product Logic and of predicate Basic Logic, as well as the set of standard-satisfiable formulas of predicate Basic Logic, are not arithmetical, thus finding a rather satisfactory solution to three problems proposed by Hájek in [H01].
By
Hájek, Petr
13 Citations
Fuzzy logic is understood as a logic with a comparative and truth-functional notion of truth. Arithmetical complexity of sets of tautologies (identically true sentences) and satisfiable sentences (sentences true in at least one interpretation), as well as of sets of provable formulas of the most important systems of fuzzy predicate logic, is determined or at least estimated.
By
Leitgeb, Hannes
12 Citations
This paper deals with the class of axiomatic theories of truth for semantically closed languages, where the theories do not allow for standard models; i.e., those theories cannot be interpreted as referring to the natural number codes of sentences only (for an overview of axiomatic theories of truth in general, see Halbach [6]). We are going to give new proofs for two well-known results in this area, and we also prove a new theorem on the non-standardness of a certain theory of truth. The results indicate that the proof strategies for all the theorems on the non-standardness of such theories are "essentially" of the same kind of structure.
By
Kahle, Reinhard
3 Citations
We give a survey of truth theories for applicative theories. It comprises Frege structures, universes for Frege structures, and a theory of supervaluation. We present the proof-theoretic results for these theories and show their syntactical expressive power. In particular, we present as a novelty a syntactical interpretation of ID_{1} in an applicative truth theory based on supervaluation.
By
Halbach, Volker
1 Citations
I survey some important semantical and axiomatic theories of self-referential truth. Kripke's fixed-point theory, the revision theory of truth and approaches involving fuzzy logic are the main examples of semantical theories. I look at axiomatic theories devised by Cantini, Feferman, Friedman and Sheard. Finally some applications of the theory of self-referential truth are considered.
By
Stamatatos, E.; Fakotakis, N.; Kokkinakis, G.
66 Citations
The most important approaches to computer-assisted authorship attribution are exclusively based on lexical measures that either represent the vocabulary richness of the author or simply comprise frequencies of occurrence of common words. In this paper we present a fully-automated approach to the identification of the authorship of unrestricted text that excludes any lexical measure. Instead we adapt a set of style markers to the analysis of the text performed by an already existing natural language processing tool, using three stylometric levels, i.e., token-level, phrase-level, and analysis-level measures. The latter represent the way in which the text has been analyzed. The presented experiments on a Modern Greek newspaper corpus show that the proposed set of style markers is able to distinguish reliably the authors of a randomly-chosen group and performs better than a lexically-based approach. However, the combination of these two approaches provides the most accurate solution (i.e., 87% accuracy). Moreover, we describe experiments on various sizes of the training data as well as tests dealing with the significance of the proposed set of style markers.
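Token-level style markers of the kind mentioned above are easy to sketch. The marker set below is invented for illustration, not the paper's actual feature inventory:

```python
# Sketch of simple token-level style markers (invented marker set):
# sentence length, type-token ratio, and comma rate.
import re

def token_level_markers(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "words_per_sentence": len(words) / len(sentences),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
        "comma_rate": text.count(",") / len(words),
    }

sample = "Style is habit. Habit, unlike topic, survives across texts."
markers = token_level_markers(sample)
print(sorted(markers))
```

Phrase-level and analysis-level markers would additionally require the output of a parser, which is exactly what the paper's NLP tool supplies.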
By
Corley, Steffan; Corley, Martin; Keller, Frank; Crocker, Matthew W.; Trewin, Shari
8 Citations
The Gsearch system allows the selection of sentences by syntactic criteria from text corpora, even when these corpora contain no prior syntactic markup. This is achieved by means of a fast chart parser, which takes as input a grammar and a search expression specified by the user. Gsearch features a modular architecture that can be extended straightforwardly to give access to new corpora. The Gsearch architecture also allows interfacing with external linguistic resources (such as taggers and lexical databases). Gsearch can be used with graphical tools for visualizing the results of a query.
By
Ureña-López, L. Alfonso; Buenaga, Manuel; Gómez, José M.
13 Citations
Information access methods must be improved to overcome the information overload that most professionals face nowadays. Text classification tasks, like Text Categorization (TC), help users access the great amount of text they find on the Internet and in their organizations. TC is the classification of documents into a predefined set of categories. Most approaches to automatic TC are based on the utilization of a training collection, which is a set of manually classified documents. Other linguistic resources that are emerging, like lexical databases, can also be used for classification tasks. This article describes an approach to TC based on the integration of a training collection (Reuters-21578) and a lexical database (WordNet 1.6) as knowledge sources. Lexical databases accumulate information on the lexical items of one or several languages. This information must be filtered in order to make an effective use of it in our model of TC. This filtering process is a Word Sense Disambiguation (WSD) task. WSD is the identification of the sense of words in context. This task is an intermediate process in many natural language processing tasks like machine translation or multilingual information retrieval. We present the utilization of WSD as an aid for TC. Our approach to WSD is also based on the integration of two linguistic resources: a training collection (SemCor and Reuters-21578) and a lexical database (WordNet 1.6). We have developed a series of experiments that show that TC and WSD based on the integration of linguistic resources are very effective, and that WSD is necessary to effectively integrate linguistic resources in TC.
By
Woods, M.J.
1 Citations
This article compares the word frequencies of the few most common words in Spanish as revealed by a modern corpus of over five thousand words with a corpus of Golden-Age Spanish texts of over a million words, and finds that although de is by far the most common word in contemporary Spanish, in the 16^{th} and 17^{th} centuries it was considerably less frequent, and in many texts was less frequent than y or que, which shared very similar frequency figures. It is argued that this significant change in the Spanish language comes about in the 20^{th} century.
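Comparing relative frequencies of common function words across two corpora is a one-liner per word. The two tiny "corpora" below are invented stand-ins, not the article's data:

```python
# Toy comparison of relative frequencies of common Spanish function words
# across two (invented) corpora.
from collections import Counter

def relative_freq(corpus, word):
    tokens = corpus.lower().split()
    return Counter(tokens)[word] / len(tokens)

modern = "el banco de la ciudad de madrid y de toledo"
golden = "y dijo que el rey y la reina que vio"

for w in ("de", "y", "que"):
    print(w, relative_freq(modern, w), relative_freq(golden, w))
```

With real corpora the same per-word relative frequencies, compared century by century, would surface the kind of shift the article reports for de.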
By
Al-Anzi, Fawaz S.
2 Citations
This paper presents two grammars for reading numbers in classical and modern Arabic. The grammars make use of the structured Arabic counting system to present an accurate and compact grammar that can be easily implemented on different platforms. Automating the process of reading numbers from their numerical representation to their sentential form has many applications: inquiring about your bank balance over the phone, automatically writing the amount on checks (from numerical form to letter form), and reading for blind people are some of the fields in which automated reading of numbers can be of service. The parsing problem of sentential representation of numbers in the Arabic language is also addressed, and a grammar to convert from the sentential representation to the numerical representation is presented. The grammars can be used to translate from sentential Arabic numbers to sentential English numbers, and vice versa, by using the common numerical representation as an intermediate code. Such a methodology can be used to aid automatic translation between the two natural languages. All grammars described in this paper have been implemented on a UNIX system. Examples of different number representations and the output of the implementation of the grammars are given as part of the paper.
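The "common intermediate representation" idea can be sketched on the English side: a tiny number-reading grammar driven by the numeric value. The Arabic grammars in the paper are far richer (gender and case agreement, classical vs. modern forms), so this is only an illustration of the pivot mechanism:

```python
# Minimal English number-reading grammar over the numeric intermediate code.
# Handles 0..999; larger scales would recurse on thousands, millions, etc.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def read_number(n):
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[rest] if rest else "")
    hundreds, rest = divmod(n, 100)
    phrase = ONES[hundreds] + " hundred"
    return phrase + (" and " + read_number(rest) if rest else "")

print(read_number(342))   # → "three hundred and forty-two"
```

Pairing this with an Arabic counterpart and a sentential-to-numeric parser gives the translation path the paper describes: Arabic words → number → English words, and vice versa.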
By
De Gooijer, Jan G.; Laan, Nancy M.
This paper studies the problem of detecting multiple changes at unknown times in the mean level of elision in the trimeter sequences of the Orestes, a play written by the Ancient Greek dramatist Euripides (485–406 B.C.). Change-detection statistics proposed by MacNeill (1978) and Jandhyala and MacNeill (1991) are adopted for this purpose. Analysis of the trimeter sequences yields several points of change. A general explanation for their occurrence appears to be that Euripides varies his use of elision according to the emotional content of his text, i.e., he seems to change the form to support the content and, thus, seems to use elision frequency as a dramatic instrument.
By
Schloen, J. David
9 Citations
An appropriate standardized data model is necessary to facilitate electronic publication and analysis of archaeological data on the World Wide Web. A hierarchical ``item-based'' model is proposed which can be readily implemented as an Extensible Markup Language (XML) tagging scheme that can represent any kind of archaeological data and deliver it in a cross-platform, standardized fashion to any Web browser. This tagging scheme and the data model it implements permit seamless integration and joint querying of archaeological datasets derived from many different sources.
By
Bainbridge, David; Bell, Tim
52 Citations
This article describes the challenges posed by optical music recognition – a topic in computer science that aims to convert scanned pages of music into an online format. First, the problem is described; then a generalised framework for software is presented that emphasises key stages that must be solved: staff line identification, musical object location, musical feature classification, and musical semantics. Next, significant research projects in the area are reviewed, showing how each fits the generalised framework. The article concludes by discussing perhaps the most open question in the field: how to compare the accuracy and success of rival systems, highlighting certain steps that help ease the task.
By
Rocio, Vitor Jorge; Lopes, Gabriel Pereira; de la Clergerie, Eric
1 Citations
Efficient partial parsing systems (chunkers) are urgently required by various natural language application areas because these parsers always produce partially parsed text, even when the text does not fully fit existing lexica and grammars. Availability of partially parsed corpora is absolutely necessary for extracting various kinds of information that may then be fed into those systems, thereby increasing their processing power. In this paper, we propose an efficient partial parsing scheme, based on chart parsing, that is flexible enough to support both normal parsing tasks and diagnosis, in previously obtained partial parses, of possible causes (kinds of faults) that led to those partial, instead of complete, parses. Through the use of the built-in tabulation capabilities of the DyALog system, we implemented a partial parser that runs as fast as the best non-deterministic parsers. In this paper we elaborate on the implementation of two different grammar formalisms: Definite Clause Grammars (DCG) extended with head declarations and Bound Movement Grammars (BMG).
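The "partial parse always available" property of chart-based chunking can be illustrated with a toy CKY-style recognizer: it records every NP span it can build and simply leaves uncovered words as fragments. The grammar, lexicon, and example are invented; DyALog's tabulation machinery is far more general:

```python
# Toy chart-based NP chunker: a CKY recognizer over POS tags that collects
# all NP spans, leaving the rest of the sentence unanalyzed (partial parse).
grammar = {            # binary rules: (B, C) -> A
    ("DET", "N"): "NP",
    ("DET", "AP"): "NP",
    ("ADJ", "N"): "AP",
}
lexicon = {"the": "DET", "old": "ADJ", "parser": "N", "crashed": "V"}

def np_chunks(words):
    tags = [lexicon[w] for w in words]
    n = len(tags)
    chart = {(i, i + 1): {tags[i]} for i in range(n)}   # spans are [i, j)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cell = set()
            for k in range(i + 1, j):                   # try every split point
                for b in chart[(i, k)]:
                    for c in chart[(k, j)]:
                        if (b, c) in grammar:
                            cell.add(grammar[(b, c)])
            chart[(i, j)] = cell
    return [(i, j) for (i, j), cats in chart.items() if "NP" in cats]

print(np_chunks(["the", "old", "parser", "crashed"]))
```

Even though no rule covers the verb, the chart still yields the NP span over "the old parser"; diagnosis of why no complete parse exists amounts to inspecting which chart cells stayed empty.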
