Showing 1 to 100 of 224 matching Articles
By
Way, Andrew; Crookston, Ian; Shelton, Jane
4 Citations
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two “unofficial” spin-offs – the ⟨C,A⟩,T and Relaxed Compositionality translator notations – with regard to how these systems handle “hard” cases, and in particular their ability to handle combinations of such problems. In the ⟨C,A⟩,T translator notation, some cases of complex transfer are “wild”, meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the ⟨C,A⟩,T notation in this respect. In general, the E-Framework is equivalent to the ⟨C,A⟩,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule even where it appears in the same sentence as other complex cases.
By
Dorr, Bonnie J.
31 Citations
This paper describes techniques for automatic construction of dictionaries for use in large-scale foreign language tutoring (FLT) and interlingual machine translation (MT) systems. The dictionaries are based on a language-independent representation called “lexical conceptual structure” (LCS). A primary goal of the LCS research is to demonstrate that synonymous verb senses share distributional patterns. We show how the syntax–semantics relation can be used to develop a lexical acquisition approach that contributes both toward the enrichment of existing online resources and toward the development of lexicons containing more complete information than is provided in any of these resources alone. We start by describing the structure of the LCS and showing how this representation is used in FLT and MT. We then focus on the problem of building LCS dictionaries for large-scale FLT and MT. First, we describe authoring tools for manual and semi-automatic construction of LCS dictionaries; we then present a more sophisticated approach that uses linguistic techniques for building word definitions automatically. These techniques have been implemented as part of a set of lexicon-development tools used in the MILT FLT project.
By
Rogers, James
4 Citations
We sketch an axiomatic reformalization of Generalized Phrase Structure Grammar (GPSG) – a definition purely within the language of mathematical logic of the theory GPSG embodies. While this treatment raises a number of theoretical issues for GPSG, our focus is not the reformalization itself but rather the method we employ. The model-theoretic approach it exemplifies can be seen as a natural step in the evolution of constraint-based theories from their grammar-based antecedents. One goal of this paper is to introduce this approach to a broader audience and to demonstrate its application to an existing theory. As such, it joins a growing literature of similar studies. Prior studies, however, have had a number of weaknesses – they generally offer little in the way of concrete examples of the advantages the approach has to offer, they typically ignore significant portions of the theories they address, and, by fully abstracting away from the notion of grammar mechanism, they largely abandon the possibility of establishing meaningful complexity results. The second goal of the paper is to address these issues. Our thrust is to sketch the reformalization sufficiently to illustrate the way in which it captures Feature Specification Defaults (FSDs) and the Exhaustive Constant Partial Ordering (ECPO) property. Our definition of FSDs is considerably simplified relative to the original formalization and is free of the procedural flavor that has led some to assume that FSDs are inherently dynamic. Our treatment of ECPO uncovers a gap in its definition in the context of partial categories that has heretofore gone unnoticed. We offer these as a demonstration of the kind of insight that a model-theoretic reinterpretation can bring to an existing theory. Further, since these are the types of properties that prior studies in this genre have failed to address, FSDs and ECPO provide a means for us to explore the limitations of these approaches and to offer ways of overcoming them.
Finally, the logical framework we employ has a well-defined generative capacity – definability in this framework characterizes strong context-freeness in a particular sense. Thus, despite being more abstract than its constraint-based predecessors, the model-theoretic approach, as exemplified here, can offer stronger complexity results than are typically available in the constraint-based framework.
By
Groenink, Annius V.
5 Citations
This paper classifies a family of grammar formalisms that extend context-free grammar by talking about tuples of terminal strings, rather than independently combining single terminal words into larger single phrases. These include a number of well-known formalisms, such as head grammar and linear context-free rewriting systems, but also a new formalism, (simple) literal movement grammar, which strictly extends the previously known formalisms, while preserving polynomial-time recognizability.
The descriptive capacity of simple literal movement grammars is illustrated both formally through a weak generative capacity argument and in a more practical sense by the description of conjunctive cross-serial relative clauses in Dutch. After sketching a complexity result and drawing a number of conclusions from the illustrations, it is then suggested that the notion of mild context-sensitivity currently in use, which depends on the rather loosely defined concept of constant growth, needs a modification to apply sensibly to the illustrated facts; an attempt at such a revision is proposed.
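As a concrete illustration of the pattern at stake: the copy language {ww} exhibits the cross-serial dependency that context-free grammar cannot describe, yet remains recognizable in polynomial time, as simple literal movement grammars require. The sketch below is a direct membership check written for illustration only; it is not a literal movement grammar derivation.

```python
# Direct polynomial-time membership check for the copy language {ww},
# a standard example of a cross-serial pattern beyond context-free power.
# This illustrates the recognizability claim, not an LMG derivation.

def in_copy_language(s):
    """True iff s is some word w immediately followed by a copy of itself."""
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:]
```

The check runs in time linear in the length of the string, comfortably within the polynomial bound the formalism preserves.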
By
Joshi, Aravind K.; Kulick, Seth
11 Citations
We describe a categorial system (PPTS) based on partial proof trees (PPTs) as the building blocks of the system. The PPTs are obtained by unfolding the arguments of the type that would be associated with a lexical item in a simple categorial grammar. The PPTs are the basic types in the system and a derivation proceeds by combining PPTs together. We describe the construction of the finite set of basic PPTs and the operations for combining them. PPTS can be viewed as a categorial system incorporating some of the key insights of lexicalized tree adjoining grammar, namely the notion of an extended domain of locality and the consequent factoring of recursion from the domain of dependencies. PPTS therefore inherits the linguistic and computational properties of that system, and so can be viewed as a ‘middle ground’ between a categorial grammar and a phrase structure grammar. We also discuss the relationship between PPTS, natural deduction, and linear logic proof nets, and argue that natural deduction rather than a proof-net system is more appropriate for the construction of the PPTs. We also discuss how the use of PPTs allows us to ‘localize’ the management of resources, thereby freeing us from this management as the PPTs are combined.
By
Fernando, Tim
3 Citations
Notions of disambiguation supporting a compositional interpretation of ambiguous expressions and reflecting intuitions about how sentences combine in discourse are investigated. Expressions are analyzed both inductively, by breaking them apart, and coinductively, by embedding them within larger contexts.
By
Niyogi, Partha; Berwick, Robert C.
30 Citations
Linguists' intuitions about language change can be captured by a dynamical systems model derived from the dynamics of language acquisition. Rather than having to posit a separate model for diachronic change, as has sometimes been done by drawing on assumptions from population biology (cf. Cavalli-Sforza and Feldman, 1973; 1981; Kroch, 1990), this new model dispenses with these independent assumptions by showing how the behavior of individual language learners leads to emergent, global population characteristics of linguistic communities over several generations. As the simplest case, we formalize the example of two grammars and show that even this situation leads directly to a nonlinear (quadratic) dynamical system. We study this one-parameter model in a variety of situations for different kinds of acquisition algorithms and maturational times, showing how different learning theories can have very different evolutionary consequences. This allows us to formulate an evolutionary criterion for the adequacy of grammatical and learning theories. An application of the computational model to the historical loss of Verb Second from Old French to Modern French is described, showing how otherwise adequate grammatical theories might fail the evolutionary criterion.
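A two-grammar population model of this kind can be pictured as iterating a quadratic map on the fraction of speakers using one grammar. The update rule and the parameter `a` below are invented for illustration; they are not the paper's acquisition-derived map.

```python
# Hypothetical two-grammar dynamics: alpha is the fraction of speakers
# using grammar G1; each generation applies an update quadratic in alpha.
# The specific rule and 'a' (a stand-in learning-fidelity parameter) are
# illustrative assumptions, not the authors' model.

def step(alpha, a):
    """One generation: the next mixture is a quadratic function of alpha."""
    return a * alpha * (1 - alpha) + alpha ** 2

def iterate(alpha0, a, generations):
    alpha, history = alpha0, [alpha0]
    for _ in range(generations):
        alpha = step(alpha, a)
        history.append(alpha)
    return history

# With a < 1 this particular map sends G1 toward extinction from any
# starting fraction below 1 — a toy version of a grammar being lost.
traj = iterate(0.1, 0.9, 50)
```

Different choices of `a` (standing in for different acquisition algorithms) change the fixed points and their stability, which is the sense in which learning theories can have different evolutionary consequences.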
By
MacLeod, Catherine; Grishman, Ralph; Meyers, Adam
3 Citations
This article is a detailed account of COMLEX Syntax, an online syntactic dictionary of English, developed by the Proteus Project at New York University under the auspices of the Linguistic Data Consortium. This lexicon was intended to be used for a variety of tasks in natural language processing by computer and as such has very detailed classes with a large number of syntactic features and complements for the major parts of speech and is, as far as possible, theory-neutral. The dictionary was entered by hand with reference to hard-copy dictionaries, an online concordance and native speakers' intuition. Thus it is without prior encumbrances and can be used for both pure research and commercial purposes.
By
Materna, Pavel
2 Citations
Propositional and notional attitudes are construed as relations (in intension) between individuals and constructions (rather than propositions, etc.). The apparatus of transparent intensional logic (Tichý) is applied to derive two rules that make it possible to ‘export’ existential quantifiers without conceiving attitudes as relations to expressions (‘sententialism’).
By
Acun, Ramazan; Anane, Rachid; Laflin, Susan
The paper describes a prototype system called HiSQL (Historical SQL) which extends the functionality of SQL in manipulating historical data, by providing functions for spatial and temporal processing. Conceptually the paper is divided into three parts: the first part deals with the design and architecture of the system; the second part introduces a case study (the defters); and the third part describes specific functions for spatial and temporal processing of serial documents. The paper concludes with a comparison between HiSQL and SQL and suggestions for further work.
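The kind of temporal processing such an extension adds to plain SQL can be pictured as interval filtering over dated records. The record layout, function names, and toy data below are hypothetical illustrations, not HiSQL's actual API.

```python
# Illustrative sketch (hypothetical names, not HiSQL's API): selecting
# records whose validity interval overlaps a query interval, the kind of
# temporal operation plain SQL lacks built-in support for.

def overlaps(start1, end1, start2, end2):
    """True when half-open intervals [start1, end1) and [start2, end2) intersect."""
    return start1 < end2 and start2 < end1

def valid_during(records, q_start, q_end):
    """Filter records (dicts with 'valid_from'/'valid_to' years) by overlap."""
    return [r for r in records
            if overlaps(r["valid_from"], r["valid_to"], q_start, q_end)]

# Toy stand-ins for register entries with validity periods.
defters = [
    {"place": "A", "valid_from": 1520, "valid_to": 1540},
    {"place": "B", "valid_from": 1560, "valid_to": 1580},
]
hits = valid_during(defters, 1535, 1555)   # only record A overlaps
```

Half-open intervals are used so that a record ending in 1540 and a query starting in 1540 do not count as overlapping; a design with closed intervals would need `<=` comparisons instead.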
By
Skvortsov, Dmitrij
3 Citations
An example of a finite tree M₀ is presented such that its predicate logic (i.e. the intermediate predicate logic characterized by the class of all predicate Kripke frames based on M₀) is not finitely axiomatizable. Hence it is shown that the predicate analogue of the de Jongh–McKay–Hosoi theorem on the finite axiomatizability of every finite intermediate propositional logic is not true.
By
Mares, Edwin D.; McNamara, Paul
6 Citations
In "Doing Well Enough: Toward a Logic for Common Sense Morality", Paul McNamara sets out a semantics for a deontic logic which contains the operator ‘It is supererogatory that’. As well as having a binary accessibility relation on worlds, that semantics contains a relative ordering relation, ≤. For worlds u, v and w, we say that u ≤w v when v is at least as good as u according to the standards of w. In this paper we axiomatize logics complete over three versions of the semantics. We call the strongest of these logics ‘DWE’ for ‘Doing Well Enough’.
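The semantic ingredients can be pictured with a toy model: worlds, an accessibility relation, and the world-relative ordering u ≤w v. The supererogation clause below is a deliberately simplified reading invented for illustration (some permissible world where the proposition holds strictly outranks some permissible world where it fails); it is not DWE's official truth condition, and the example frame is hypothetical.

```python
# Toy frame: worlds, accessibility, and a world-relative ordering
# u <=_w v ("v is at least as good as u by w's standards").
# All names and the example data are illustrative assumptions.

worlds = {"w", "u", "v"}
access = {"w": {"u", "v"}}         # worlds accessible from w
better = {"w": {("u", "v")}}       # (u, v) in better[w] means u <=_w v

def at_least_as_good(w, u, v):
    """u <=_w v, taken reflexive."""
    return u == v or (u, v) in better.get(w, set())

def permissible(w, phi):
    """phi is permissible at w iff it holds at some accessible world."""
    return any(phi(x) for x in access.get(w, set()))

def supererogatory(w, phi):
    """Simplified illustrative clause: some accessible phi-world strictly
    outranks some accessible non-phi world."""
    acc = access.get(w, set())
    return any(
        phi(x) and not phi(y)
        and at_least_as_good(w, y, x) and not at_least_as_good(w, x, y)
        for x in acc for y in acc
    )

donate = lambda x: x == "v"        # a proposition true only at the best world
```

In this frame "donate" is permissible and, under the simplified clause, supererogatory, since the donating world v strictly outranks the non-donating world u.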
By
Zakharyaschev, Michael
1 Citation
This paper gives a characterization of those quasi-normal extensions of the modal system S4 into which intuitionistic propositional logic Int is embeddable by the Gödel translation. It is shown that, as in the normal case, the set of quasi-normal modal companions of Int contains the greatest logic, M*, for which, however, the analog of the Blok–Esakia theorem does not hold. M* is proved to be decidable and Halldén-complete; it has the disjunction property but does not have the finite model property.
By
Lewin, Renato A.; Mikenberg, Irene F.; Schwarze, María G.
5 Citations
Annotated logics were introduced by V.S. Subrahmanian as logical foundations for computer programming. One of the difficulties of these systems from the logical point of view is that they are not structural, i.e., their consequence relations are not closed under substitutions. In this paper we give systems of annotated logics that are equivalent to those of Subrahmanian in the sense that everything provable in one type of system has a translation that is provable in the other. Moreover, these new systems are structural. We prove that these systems are weakly congruential, namely, they have an infinite system of congruence 1-formulas. Moreover, we prove that an annotated logic is algebraizable (i.e., it has a finite system of congruence formulas) if and only if the lattice of annotation constants is finite.
By
della Rocca, Simona Ronchi; Roversi, Luca
6 Citations
The introduction of Linear Logic extends the Curry–Howard Isomorphism to intensional aspects of typed functional programming. In particular, every formula of Linear Logic tells whether the term it types can be erased/duplicated or not during a computation. So, Linear Logic can be seen as a model of a computational environment with explicit control over the management of resources.
This paper introduces a typed functional language Λ! and a categorical model for it.
The terms of Λ! encode a version of natural deduction for Intuitionistic Linear Logic such that linear and non-linear assumptions are managed multiplicatively and additively, respectively. Correspondingly, the terms of Λ! are built out of two disjoint sets of variables. Moreover, the λ-abstractions of Λ! bind variables and patterns. The use of two different kinds of variables and the patterns allows a very compact definition of the one-step operational semantics of Λ!, unlike all other extensions of the Curry–Howard Isomorphism to Intuitionistic Linear Logic. The language Λ! is Church–Rosser and enjoys both Strong Normalizability and Subject Reduction.
The categorical model induces operational equivalences, for example a set of extensional equivalences.
The paper also presents an untyped version of Λ! and a type assignment for it, using formulas of Linear Logic as types. The type assignment inherits from Λ! all the good computational properties and also enjoys the Principal-Type Property.
By
Bucklow, Spike L.
10 Citations
This paper describes the formal representation and analysis of a visual structure – craquelure (an accidental feature of paintings). Various statistical methods demonstrate a relationship between a formal representation of craquelure and art-historical categories of paintings. The results of this work provide confirmation of connoisseurial claims regarding craquelure as a broad indicator of authorship. The techniques employed in this study are: repertory grid, hierarchical clustering, multidimensional scaling and discriminant analysis.
By
Ferrari, Mauro
1 Citation
In this paper we provide cut-free tableau calculi for the intuitionistic modal logics IK, ID, IT, i.e. the intuitionistic analogues of the classical modal systems K, D and T. Further, we analyse the necessity of duplicating formulas to which rules are applied. In order to develop these calculi we extend to the modal case some ideas presented by Miglioli, Moscato and Ornaghi for intuitionistic logic. Specifically, we enlarge the language with the new signs Fc and CR alongside the usual signs T and F. In this work we establish the soundness and completeness theorems for these calculi with respect to the Kripke semantics proposed by Fischer Servi.
By
Kang, Beom-Mo
Dictionary markup is one of the concerns of the Text Encoding Initiative (TEI), an international project for text encoding. In this paper, we investigate ways to use and extend the TEI encoding scheme for the markup of Korean dictionary entries. Since the TEI suggestions for dictionary markup are mainly for Western-language dictionaries, we need to cope with problems encountered in encoding Korean dictionary entries. We try to extend and modify the TEI encoding scheme in the way suggested by the TEI. Also, we restrict the content model so that the encoded dictionary can be viewed as a database as well as a computerized version of the originally printed dictionary.
By
Gabbay, D.M.; Reyle, U.
4 Citations
Resolution is an effective deduction procedure for classical logic. There is no similar "resolution" system for non-classical logics (though there are various automated deduction systems). The paper presents resolution systems for intuitionistic predicate logic as well as for modal and temporal logics within the framework of labelled deductive systems. Whereas in classical predicate logic resolution is applied to literals, in our system resolution is applied to L(abelled) R(epresentation) S(tructures). Proofs are discovered by a refutation procedure defined on LRSs, which imposes a hierarchy on clause sets of such structures together with an inheritance discipline. This is a form of Theory Resolution. For intuitionistic logic these structures are called I(ntuitionistic) R(epresentation) S(tructures). Their hierarchical structure allows the restriction of unification of individual variables and/or constants without using Skolem functions. These structures must therefore be preserved when we consider other (non-modal) logics. Variations between different logics are captured by fine-tuning the inheritance properties of the hierarchy. For modal and temporal logics IRSs are extended to structures that represent worlds and/or times. This enables us to consider all kinds of combined logics.
By
McKelvie, D.; Brew, C.; Thompson, H.S.
4 Citations
This paper describes the LT NSL system (McKelvie et al., 1996), an architecture for writing corpus processing tools. This system is then compared with two other systems which address similar issues, the GATE system (Cunningham et al., 1995) and the IMS Corpus Workbench (Christ, 1994). In particular we address the advantages and disadvantages of an SGML approach compared with a non-SGML database approach.
By
Suzuki, Nobu-Yuki
7 Citations
A possible world structure consists of a set W of possible worlds and an accessibility relation R. We take a partial function r(·,·) to the unit interval [0, 1] instead of R and obtain a Kripke frame with graded accessibility r. Intuitively, r(x, y) can be regarded as the reliability factor of y from x. We deal with multimodal logics corresponding to Kripke frames with graded accessibility in a fairly general setting. This setting provides us with a framework for fuzzy possible world semantics. The basic propositional multimodal logic gK (graded K) is defined syntactically. We prove that gK is sound and complete with respect to this semantics. We discuss some extensions of gK including logics of similarity relations and of fuzzy orderings. We present a modified filtration method and prove that gK and its extensions introduced here are decidable.
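A frame with graded accessibility can be sketched in a few lines. The thresholded box modality below ("φ holds at every world reliable to degree at least c") is one natural truth clause assumed here for illustration; gK itself is defined syntactically in the paper, and the example frame is invented.

```python
# Sketch of a Kripke frame with graded accessibility: a partial map
# r(x, y) into [0, 1] replaces the relation R. Unlisted pairs are
# simply inaccessible (r is partial). The indexed box is an assumed,
# illustrative truth clause, not the paper's definition.

r = {("x", "y"): 0.9, ("x", "z"): 0.4}

def box(world, c, phi):
    """True iff phi holds at every y with r(world, y) >= c."""
    return all(phi(y) for (w, y), d in r.items() if w == world and d >= c)

p = lambda w: w == "y"   # p holds only at world y
```

Raising the threshold c shrinks the set of worlds quantified over, so the box becomes easier to satisfy; at a threshold above every grade it holds vacuously.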
By
Georgatos, Konstantinos
21 Citations
This paper presents a bimodal logic for reasoning about knowledge during knowledge acquisition. One of the modalities represents (effort during) nondeterministic time and the other represents knowledge. The semantics of this logic are tree-like spaces, which are a generalization of the semantics used for modeling branching time and historical necessity. A finite system of axiom schemes is shown to be canonically complete for the aforementioned spaces. A characterization of the satisfaction relation implies the small model property and decidability for this system.
By
HoganBrun, Gabrielle; Whittle, Ruth
2 Citations
Multimedia does not by itself guarantee accelerated learning and enhanced motivation; these require a clear pedagogical progression and learning strategy. The authors describe and analyze the didactic dimensions to be considered when designing a multimedia tool, based on their own experience as software authors and language trainers.
By
Kracht, Marcus; Wolter, Frank
21 Citations
This paper gives a survey of recent results about simulations of one class of modal logics by another class and about the transfer of properties of modal logics under extensions of the underlying modal language. We discuss: the transfer from normal polymodal logics to their fusions, the transfer from normal modal logics to their extensions by adding the universal modality, and the transfer from normal monomodal logics to minimal tense extensions. Likewise, we discuss simulations of normal polymodal logics by normal monomodal logics, of nominals and the difference operator by normal operators, of monotonic monomodal logics by normal bimodal logics, of polyadic normal modal logics by polymodal normal modal logics, and of intuitionistic modal logics by normal bimodal logics.
By
Schinke, Robyn; Greengrass, Mark; Robertson, Alexander M.; Willett, Peter
2 Citations
This paper reports a detailed evaluation of the effectiveness of a system that has been developed for the identification and retrieval of morphological variants in searches of Latin text databases. A user of the retrieval system enters the principal parts of the search term (two parts for a noun or adjective, three parts for a deponent verb, and four parts for other verbs), thus enabling the identification of the type of word that is to be processed and of the rules that are to be followed in determining the morphological variants that should be retrieved. Two different search algorithms are described. The algorithms are applied to the Latin portion of the Hartlib Papers Collection and to a range of classical, vulgar and medieval Latin texts drawn from the Patrologia Latina and from the PHI Disk 5.3 datasets. The results of these searches demonstrate the effectiveness of our procedures in providing access to the full range of classical and post-classical Latin text databases.
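The principal-parts idea can be sketched in a few lines: from the nominative and genitive of a third-declension noun, derive the oblique stem and generate candidate variants to match against the text. The ending list below is a small illustrative subset, not the system's actual rule set.

```python
# Much-simplified sketch of variant generation from principal parts.
# The ending list is an illustrative subset of third-declension
# singular/plural case endings, not the evaluated system's rules.

def variants(nominative, genitive):
    """Strip the genitive ending to get the oblique stem, then attach endings."""
    stem = genitive[:-2] if genitive.endswith("is") else genitive
    endings = ["is", "i", "em", "e", "es", "um", "ibus"]
    return {nominative} | {stem + e for e in endings}

forms = variants("rex", "regis")   # rex, regis, regi, regem, rege, ...
```

A retrieval pass would then match any token in the database against this candidate set, which is how irregular nominatives like "rex" are caught alongside the regular oblique forms built on "reg-".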
By
Sernadas, Amílcar; Sernadas, Cristina; Caleiro, Carlos
11 Citations
Motivated by applications in software engineering, we propose two forms of combination of logics: synchronization on formulae and synchronization on models. We start by reviewing satisfaction systems, consequence systems, one-step derivation systems and theory spaces, as well as their functorial relationships. We define the synchronization on formulae of two consequence systems and provide a categorial characterization of the construction. For illustration we consider the synchronization of linear temporal logic and equational logic. We define the synchronization on models of two satisfaction systems and provide a categorial characterization of the construction. We illustrate the technique in two cases: linear temporal logic versus equational logic; and linear temporal logic versus branching temporal logic. Finally, we lift the synchronization on formulae to the category of logics over consequence systems.
By
Hutchins, John
10 Citations
The early history of applying electronic computers to the task of translating natural languages is chronicled, from the first suggestions by Warren Weaver in March 1947 to the first demonstration of a working, if limited, program in January 1954.
By
Flanders, Julia
1 Citation
This paper discusses the contested role of images in electronic editions, and summarizes some of the chief arguments for their inclusion. I then argue that, to truly determine the importance of images to the function of electronic editions, we must understand the contribution the image makes to the form of textual knowledge provided by the edition. I suggest a distinction between editions which are primarily pedagogical in their aims, those which aim above all at scholarly authority, and those which attempt to provide textual information as high-quality data which can be analysed and processed. I conclude that the latter represents the most significant future trend in electronic editing.
By
Blackburn, Patrick; de Rijke, Maarten
24 Citations
Combining logics has become a rapidly expanding enterprise that is inspired mainly by concerns about modularity and the wish to join tailor-made logical tools together into more powerful but still manageable ones. A natural question is whether it offers anything new over and above existing standard languages.
By analysing a number of applications where combined logics arise, we argue that combined logics are a potentially valuable tool in applied logic, and that endorsements of standard languages often miss the point. Using the history of quantified modal logic as our main example, we also show that the use of combined structures and logics is a recurring theme in the analysis of existing logical systems.
By
McCarthy, John
15 Citations
This article is oriented toward the use of modality in artificial intelligence (AI). An agent must reason about what it or other agents know, believe, want, intend or owe. Referentially opaque modalities are needed and must be formalized correctly. Unfortunately, modal logics seem too limited for many important purposes. This article contains examples of uses of modality for which modal logic seems inadequate.
I have no proof that modal logic is inadequate, so I hope modal logicians will take the examples as challenges.
Maybe this article will also have philosophical and mathematical logical interest.
By
Brown, Susan; Fisher, Sue; Clements, Patricia; Binhammer, Katherine; Butler, Terry; Carter, Kathryn; Grundy, Isobel; Hockey, Susan
2 Citations
This paper describes the novel ways in which the Orlando Project, based at the Universities of Alberta and Guelph, is using SGML to create an integrated electronic history of British women's writing in English. Unlike most other SGML-based humanities computing projects, which are tagging existing texts, we are researching and writing new material, including biographies, items of historical significance, and many kinds of literary and historical interpretation, all of which incorporates sophisticated SGML encoding for content as well as structure. We have created three DTDs: for biographies, for writing-related activities and publications, and for social, political and other events. A major factor influencing the design of the DTDs was the requirement to be able to merge and restructure the entire text base in many ways in order to retrieve and index it and to reflect multiple views and interpretations. In addition, a stable and well-documented system for tagging was deemed essential for a team which involves almost twenty people, including eight graduate students, in two locations.
By
Amati, Giambattista; Carlucci Aiello, Luigia; Pirri, Fiora
1 Citation
In this paper we address the problem of combining a logic Λ with nonmonotonic modal logic. In particular we study the intuitionistic case. We start from a formal analysis of the notion of intuitionistic consistency via the sequent calculus. The epistemic operator M is interpreted as the consistency operator of intuitionistic logic by introducing intuitionistic stable sets. On the basis of a bimodal structure we also provide a semantics for intuitionistic stable sets.
By
Humberstone, Lloyd; Williamson, Timothy
3 Citations
Given a 1-ary sentence operator ○, we describe L – another 1-ary operator – as a left inverse of ○ in a given logic if in that logic every formula ϕ is provably equivalent to L○ϕ. Similarly, R is a right inverse of ○ if ϕ is always provably equivalent to ○Rϕ. We investigate the behaviour of left and right inverses for ○ taken as the □ operator of various normal modal logics, paying particular attention to the conditions under which these logics are conservatively extended by the addition of such inverses, as well as to the question of when, in such extensions, the inverses behave as normal modal operators in their own right.
By
Desmarais, Lise; Duquette, Lise; Renié, Delphine; Laurier, Michel
7 Citations
An empirical study was undertaken to evaluate second language learning with a videodisc named ViConte. The 78 subjects were post-secondary students and adults and belonged either to a control group, a video-control group or an experimental group. The research methodology is presented, as well as analyses of learners' navigational patterns, strategies, gains in vocabulary items, changes in attitude and global evaluation of the videodisc. Suggestions for the development of adaptive learning environments are made based on these findings.
By
Restall, Greg
2 Citations
Combining nonclassical (or ‘subclassical’) logics is not easy, but it is very interesting. In this paper, we combine nonclassical logics of negation and possibility (in the presence of conjunction and disjunction), and then we combine the resulting systems with intuitionistic logic. We will find that Kracht's results on the undecidability of classical modal logics generalise to a nonclassical setting. We will also see conditions under which intuitionistic logic can be combined with a nonintuitionistic negation without corrupting the intuitionistic fragment of the logic.
By
D'agostino, Marcello; Gabbay, Dov M.; Russo, Alessandra
5 Citations
We investigate the semantics of the logical systems obtained by introducing the modalities □ and ⋄ into the family of substructural implication logics (including relevant, linear and intuitionistic implication). Then, in the spirit of the LDS (Labelled Deductive Systems) methodology, we "import" this semantics into the classical proof system KE. This leads to the formulation of a uniform labelled refutation system for the new logics which is a natural extension of a system for substructural implication developed by the first two authors in a previous paper.
By
Rostek, L.; Alexa, M.
3 Citations
This paper presents a method for developing limited-context grammar rules in order to mark up text automatically, by attaching specific text segments to a small number of well-defined and application-determined semantic categories. The Text Analysis Tool with Object Encoding (TATOE) was used in order to support the iterative process of developing a set of rules as well as for constructing and managing the lexical resources. The work reported here is part of a real-world application scenario: the automatic semantic mark-up of German news messages, as provided by a German press agency, according to the SGML-based standard News Industry Text Format (NITF) to facilitate their further exchange. The implemented export mechanism of the semantic mark-up into NITF is also described in the paper.
By
Flanders, Julia; Bauman, Syd; Caton, Paul; Cournane, Mavis
1 Citation
This paper discusses the encoding of proper names using the TEI Guidelines, describing the practice of the Women Writers Project at Brown University, and the CELT Project at University College, Cork. We argue that such encoding may be necessary to enable historical and literary research, and that the specific approach taken will depend on the needs of the project and the audience to be served. Because the TEI Guidelines provide a fairly flexible system for the encoding of proper names, we conclude that projects may need to collaborate to determine more specific constraints, to ensure consistency of approach and compatibility of data.
By
Rudman, Joseph
72 Citations
The statement, “Results of most non-traditional authorship attribution studies are not universally accepted as definitive,” is explicated. A variety of problems in these studies are listed and discussed: studies governed by expediency; a lack of competent research; flawed statistical techniques; corrupted primary data; lack of expertise in allied fields; a dilettantish approach; inadequate treatment of errors. Various solutions are suggested: construct a correct and complete experimental design; educate the practitioners; study style in its totality; identify and educate the gatekeepers; develop a complete theoretical framework; form an association of practitioners.
By
Isoda, Eiko
1 Citation
Kripke bundle [3] and C-set semantics [1] [2] are known as semantics which generalize standard Kripke semantics. In [3] and in [1], [2] it is shown that Kripke bundle and C-set semantics are stronger than standard Kripke semantics. It is also true that C-set semantics for superintuitionistic logics is stronger than Kripke bundle semantics [5].
In this paper, we show that QS4.1 is not Kripke bundle complete via C-set models. As a corollary we can give a simple proof showing that C-set semantics for modal logics is stronger than Kripke bundle semantics.
By
Sinclair, John; Mason, Oliver; Ball, Jackie; Barnbrook, Geoff
2 Citations
In this report two programs for statistical analysis of concordance lines are described. The programs have been developed for analyzing the lexical context of a given word. It is shown how different parameter settings influence the outcome of collocational analysis, and how the concept of collocation can be extended to allow the extraction of lines typical for a word from a set of concordance lines. Even though all the examples are for English, the software is completely language-independent and only requires minimal linguistic resources.
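The core of such collocational analysis — counting which words co-occur with a node word within a window of context — can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' software; the function name, the four-token span, and the whitespace tokenization are all assumptions:

```python
from collections import Counter

def collocates(concordance, node, span=4):
    """Count words co-occurring with `node` within `span` tokens either side.

    `concordance` is a list of concordance lines (plain strings), each
    containing the node word in context.  Returns a Counter mapping
    collocate -> frequency across all lines.
    """
    counts = Counter()
    for line in concordance:
        tokens = line.split()
        for i, tok in enumerate(tokens):
            if tok.lower() == node:
                lo = max(0, i - span)
                # Count every token in the window except the node itself.
                counts.update(t.lower() for t in tokens[lo:i + span + 1]
                              if t.lower() != node)
    return counts
```

Parameter settings such as `span` are exactly the kind of knob whose influence on the outcome the report investigates.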
By
Marciniec, Jacek
3 Citations
In this paper the notion of unifier is extended to the infinite set case. A proof of the existence of the most general unifier of any infinite, unifiable set of types (terms) is presented. A learning procedure, based on infinite set unification, is described.
By
Elgueta, R
11 Citations
In this paper we mainly deal with first-order languages without equality and introduce a weak form of equality predicate, the so-called Leibniz equality. This equality is characterized algebraically by means of a natural concept of congruence; in any structure, it turns out to be the maximum congruence of the structure. We show that first-order logic without equality has two distinct complete semantics (full semantics and reduced semantics) related by the reduction operator. The last and main part of the paper contains a series of Birkhoff-style theorems characterizing certain classes of structures defined without equality, not only full classes but also reduced ones.
By
Saunders, David
10 Citations
To allow permanent records of the condition of paintings to be made, the National Gallery in London has developed two high-resolution digital imaging systems over the past ten years: the VASARI scanner and the MARC camera. Each is capable of recording images of paintings with excellent colour accuracy, permitting comparisons between the state of paintings now and in the future. In addition to their prime uses in documenting condition and measuring change, the systems have also been used in the technical study of paintings from the Collection, for example in recording changes of colour that result from conservation treatment, clarification of infrared images, comparison of related compositions and computer reconstruction of faded or altered colours.
By
Donghong, Ji; Junping, Gong; Changning, Huang
In this paper, we study the problem of adding a large number of new words into a Chinese thesaurus according to their definitions in a Chinese dictionary, while minimizing the effort of hand tagging. To deal with the problem, we first make use of a supervised learning technique to learn a set of defining formats for each class in the thesaurus, which tries to characterize the regularities in the definitions of the words in the class. We then use traditional techniques from graph theory to derive a minimal subset of the new words to be added into the thesaurus, which meets the following condition: if we add the new words in the subset into the thesaurus by hand, the other new words can be added into the thesaurus automatically by matching their definitions with the defining formats of each class in the thesaurus. The method uses little, if any, language-specific or thesaurus-specific knowledge, and can be applied to the thesauri of other languages.
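The "minimal subset" step resembles a covering problem, for which a greedy approximation is the standard computational approach. The sketch below is purely illustrative — it is not the paper's actual graph-theoretic construction, and `auto_covers` (for each word, the set of words classifiable automatically once that word is hand-tagged) is a hypothetical input:

```python
def greedy_seed_set(words, auto_covers):
    """Pick a small set of words to hand-tag so every word gets classified.

    `auto_covers[w]` is the set of words (including `w` itself) that can
    be classified automatically once `w` has been hand-tagged.  Greedy
    set cover: repeatedly pick the word whose tagging would classify the
    most still-uncovered words.
    """
    uncovered = set(words)
    seeds = []
    while uncovered:
        best = max(uncovered,
                   key=lambda w: len(auto_covers.get(w, {w}) & uncovered))
        seeds.append(best)
        uncovered -= auto_covers.get(best, {best})
        uncovered.discard(best)
    return seeds
```

Greedy set cover is not optimal in general, but it gives a logarithmic approximation guarantee, which is usually acceptable when the goal is simply to bound the hand-tagging effort.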
By
Hedstrom, Margaret
40 Citations
The difficulty and expense of preserving digital information is a potential impediment to digital library development. Preservation of traditional materials became more successful and systematic after libraries and archives integrated preservation into overall planning and resource allocation. Digital preservation is largely experimental and replete with the risks associated with untested methods. Digital preservation strategies are shaped by the needs and constraints of repositories with little consideration for the requirements of current and future users of digital scholarly resources. This article discusses the present state of digital preservation, articulates requirements of both users and custodians, and suggests research needs in storage media, migration, conversion, and overall management strategies. Additional research in these areas would help developers of digital libraries and other institutions with preservation responsibilities to integrate longterm preservation into program planning, administration, system architectures, and resource allocation.
By
Nguifo, Engelbert Mephu; Lagrange, Marie-Salomé; Renaud, Monique; Sallantin, Jean
1 Citation
The authors here show that machine learning techniques can be used for designing an archaeological typology, at an early stage when the classes are not yet well defined. The program (LEGAL, LEarning with GAlois Lattice) is a machine learning system which uses a set of examples and counterexamples in order to discriminate between classes. Results show a good compatibility between the classes as they are defined by the system and the archaeological hypotheses.
By
Hartonas, Chrysafis
24 Citations
Part I of this paper is developed in the tradition of Stone-type dualities, where we present a new topological representation for general lattices (influenced by and abstracting over both Goldblatt's [17] and Urquhart's [46]), identifying them as the lattices of stable compact-opens of their dual Stone spaces (stability referring to a closure operator on subsets). The representation is functorial and is extended to a full duality.
In part II, we consider lattice-ordered algebras (lattices with additional operators), extending the Jónsson and Tarski representation results [30] for Boolean algebras with operators. Our work can be seen as developing, and indeed completing, Dunn's project of gaggle theory [13, 14]. We consider general lattices (rather than Boolean algebras), with a broad class of operators, which we dub normal, and which includes the Jónsson–Tarski additive operators. Representation of l-algebras is extended to a full duality.
In part III we discuss applications in logic of the framework developed. Specifically, logics with restricted structural rules give rise to lattices with normal operators (in our sense), such as the full Lambek algebras (FL-algebras) studied by Ono in [36]. Our Stone-type representation results can then be used to obtain canonical constructions of Kripke frames for such systems, and to prove a duality of algebraic and Kripke semantics for such logics.
By
Dagan, Ido; Church, Ken
9 Citations
We propose a semi-automatic tool, termight, that supports the construction of bilingual glossaries. Termight consists of two components which address the two subtasks in glossary construction: (a) preparing a monolingual list of all technical terms in a source-language document, and (b) finding the translations for these terms in parallel source–target documents. As a first step (in each component) the tool automatically extracts candidate terms and candidate translations, based on term-extraction and word-alignment algorithms. It then performs several additional preprocessing steps which greatly facilitate human post-editing of the candidate lists. These steps include grouping and sorting of candidates and associating example concordance lines with each candidate. Finally, the data prepared in preprocessing is presented to the user via an interactive interface which supports quick post-editing operations. Termight was deployed by translators at AT&T Business Translation Services (formerly AT&T Language Line Services), leading to very high rates of semi-automatic glossary construction.
By
Fung, Pascale; McKeown, Kathleen
29 Citations
Technical-term translation represents one of the most difficult tasks for human translators since (1) most translators are not familiar with terms and domain-specific terminology and (2) such terms are not adequately covered by printed dictionaries. This paper describes an algorithm for translating technical words and terms from noisy parallel corpora across language groups. Given any word which is part of a technical term in the source language, the algorithm produces a ranked candidate match for it in the target language. Potential translations for the term are compiled from the matched words and are also ranked. We show how this ranked list helps translators in technical-term translation. Most algorithms for lexical and term translation focus on Indo-European language pairs, and most use a sentence-aligned clean parallel corpus without insertion, deletion or OCR noise. Our algorithm is language- and character-set-independent, and is robust to noise in the corpus. We show how our algorithm requires minimum preprocessing and is able to obtain technical-word translations without sentence-boundary identification or sentence alignment, from the English–Japanese awk manual corpus with noise arising from text insertions or deletions and on the English–Chinese HKUST bilingual corpus. We obtain a precision of 55.35% from the awk corpus for word translation including rare words, counting only the best candidate and direct translations. Translation precision of the best-candidate translation is 89.93% from the HKUST corpus. Potential term translations produced by the program help bilingual speakers to get a 47% improvement in translating technical terms.
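One common way to match words across a noisy, unaligned bilingual corpus is to compare context vectors through a small seed lexicon. The sketch below illustrates that general idea only — it is not the authors' algorithm, and `seed_lex`, the window size, and the cosine scoring are all hypothetical choices:

```python
from collections import Counter
from math import sqrt

def context_vector(word, sentences, window=3):
    """Bag of words co-occurring with `word` within +/-`window` positions."""
    vec = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w == word:
                lo = max(0, i - window)
                vec.update(t for t in sent[lo:i + window + 1] if t != word)
    return vec

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_translations(src_word, src_sents, tgt_candidates, tgt_sents, seed_lex):
    """Rank target candidates by similarity of context vectors.

    The source word's context vector is mapped into the target language
    via `seed_lex` (a small known bilingual dictionary), then compared
    against each candidate's own context vector.
    """
    src_vec = context_vector(src_word, src_sents)
    mapped = Counter({seed_lex[w]: c for w, c in src_vec.items()
                      if w in seed_lex})
    scored = [(cosine(mapped, context_vector(t, tgt_sents)), t)
              for t in tgt_candidates]
    return [t for _, t in sorted(scored, reverse=True)]
```

Because only word co-occurrence statistics are used, nothing here depends on sentence alignment or on the character set of either language, which is the property the abstract emphasizes.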
By
Foster, George; Isabelle, Pierre; Plamondon, Pierre
26 Citations
The use of Machine Translation as a tool for professional or other highly skilled translators is for the most part currently limited to post-editing arrangements in which the translator invokes MT when desired and then manually cleans up the results. A theoretically promising but hitherto largely unsuccessful alternative to post-editing for this application is interactive machine translation (IMT), in which the translator and MT system work in tandem. We argue that past failures to make IMT viable as a tool for skilled translators have been the result of an infelicitous mode of interaction rather than any inherent flaw in the idea. As a solution, we propose a new style of IMT in which the target text under construction serves as the medium of communication between an MT system and its user. We describe the design, implementation, and performance of an automatic word completion system for translators which is intended to demonstrate the feasibility of the proposed approach, albeit in a very rudimentary form.
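A minimal illustration of target-text-mediated word completion might look as follows. This is a sketch under the assumption of a simple probabilistic translation lexicon, not a description of the authors' system; the function and parameter names are hypothetical:

```python
def complete_word(source_words, typed_prefix, lexicon):
    """Suggest a completion for the word the translator has begun typing.

    `lexicon` maps each source word to a dict of target-word
    probabilities (a hypothetical translation table).  The suggestion is
    the target word consistent with `typed_prefix` that is most strongly
    predicted by any word of the source sentence; None if nothing fits.
    """
    best, best_score = None, 0.0
    for s in source_words:
        for target, prob in lexicon.get(s, {}).items():
            if target.startswith(typed_prefix) and prob > best_score:
                best, best_score = target, prob
    return best
```

The point of the design is visible even in this toy form: the translator never leaves the target text — each keystroke narrows the candidate set, and the system's predictions are accepted or overridden simply by continuing to type.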
By
Hartley, Anthony; Paris, Cécile
10 Citations
In this paper, we look at the current scenario in multilingual documentation generation and the types of tools currently being used in support of the translation task, and discuss their shortcomings. We examine emergent trends in the document industry, observing a reorganisation of the workflow which mirrors a shift of attention from translating to authoring and from the ergonomics of post-editing the target text to the ergonomics of producing the source text. We argue that these trends invite the design and development of new tools for the task of producing multilingual texts, and that multilingual generation provides the appropriate technology, shifting attention to an even earlier stage in the authoring process, that of specifying the semantics of the text to be produced. We describe a prototype system which exploits this technology to meet the expressed needs of authors and translators by supporting them in the drafting of multilingual instructions. We suggest that, in the future, a single platform to support multilingual documentation should integrate translation-oriented tools and generation-based tools to be employed as appropriate by different types of users (translators and authors) in different circumstances.
By
Kilgarriff, Adam
75 Citations
Word sense disambiguation assumes word senses. Within the lexicography and linguistics literature, they are known to be very slippery entities. The first part of the paper looks at problems with existing accounts of ‘word sense’ and describes the various kinds of ways in which a word's meaning can deviate from its core meaning. An analysis is presented in which word senses are abstractions from clusters of corpus citations, in accordance with current lexicographic practice. The corpus citations, not the word senses, are the basic objects in the ontology. The corpus citations will be clustered into senses according to the purposes of whoever or whatever does the clustering. In the absence of such purposes, word senses do not exist.
Word sense disambiguation also needs a set of word senses to disambiguate between. In most recent work, the set has been taken from a general-purpose lexical resource, with the assumption that the lexical resource describes the word senses of English/French/..., between which NLP applications will need to disambiguate. The implication of the first part of the paper is, by contrast, that word senses exist only relative to a task. The final part of the paper pursues this, exploring, by means of a survey, whether and how word sense ambiguity is in fact a problem for current NLP applications.
By
Langé, Jean-Marc; Gaussier, Éric; Daille, Béatrice
12 Citations
This paper sets forth some ideas for the evolution of translation tools in the near future. The proposed improvements consist in a closer integration of terminology and sentence databases. In particular, we suggest that bilingual sentence databases (translation memories) could be refined by splitting sentences into large “bricks” of text. We also propose a mechanism through which bilingual sentence databases could be generalized by replacing the known terms by variable placeholders, thus yielding technical sentence “skeletons”.
This latter idea is the most original and, in our view, the most promising for future implementations. Although these ideas are not yet supported by experiments, we believe that they can be implemented using simple techniques, following the general philosophy that such tools should go as far as possible while remaining robust and useful for the human translator.
By
Zajac, Rémi; Vanni, Michelle
8 Citations
Fast access to information in different languages is still a major problem for many organizations. We have built a multilingual analyst's workstation integrated in the Tipster document management toolkit. The analyst workstation offers to an English-speaking analyst a variety of tools to browse sets of documents in Arabic, Japanese, Spanish and Russian, including a Unicode-based multilingual editor, and a simple machine translation functionality.
The Temple project has developed an open multilingual architecture and software support for rapid development of extensible machine translation functionalities. The targeted languages are those for which natural language processing and human resources are scarce or difficult to obtain. The goal is to support rapid development of machine translation functionalities in a very short time with limited resources.
Glossary-based machine translation (GBMT) is used to provide an English gloss of a foreign document. A GBMT system uses a bilingual phrasal dictionary (glossary) to produce a phrase-by-phrase translation. Translation (based on phrase pattern-matching) is fast and accurate regarding the content of the document, and browsed documents can be translated almost in real time. A GBMT system for a language pair is also extremely simple, cheap and fast to develop. Moreover, all language resources used by the system are entirely under the control of the user.
By
Breidt, Elisabeth; Feldweg, Helmut
2 Citations
This paper presents an approach to avoiding translation by utilising a context-sensitive dictionary lookup system to assist in the comprehension of online texts in foreign languages. The system employs state-of-the-art finite-state technology for the context analysis and uses converted bilingual printed dictionaries as its primary resource of lexical information. The printed dictionaries have been opened up for the system by analysing the dictionary structure and parsing the typesetting tapes. The lexical information has been validated and augmented by corpus-based lexicographic revision, paying special attention to the treatment of multi-word lexemes, for which formalised local grammars have been added to the augmented dictionaries. Although the system is currently implemented as a device for comprehension assistance, the same technology and methods can be employed to select information from the dictionaries relevant for translation processes.
By
Kolany, Adam
Four consequence operators based on hypergraph satisfiability are defined. Their properties are explored and interconnections are displayed. Finally their relation to the case of the Classical Propositional Calculus is shown.
By
Kaneko, Mamoru; Nagashima, Takashi
16 Citations
This paper provides a Gentzen-style formulation of the game logic framework GLm (0 ≤ m ≤ ω), and proves the cut-elimination theorem for GLm. As its application, we prove the term existence theorem for GLω used in Part I.
By
Wilks, Yorick
6 Citations
This paper addresses the question of whether it is possible to sense-tag systematically, and on a large scale, and how we should assess progress so far. That is to say, how to attach each occurrence of a word in a text to one and only one sense in a dictionary – a particular dictionary of course, and that is part of the problem. The paper does not propose a solution to the question, though we have reported empirical findings elsewhere (Cowie et al., 1992; Wilks et al., 1996; Wilks and Stevenson, 1997), and intend to continue and refine that work. The point of this paper is to examine two well-known contributions critically: the first (Kilgarriff, 1993), which is widely taken to show that the task, as defined, cannot be carried out systematically by humans, and secondly (Yarowsky, 1995), which claims strikingly good results at doing exactly that.
By
Kay, Martin
27 Citations
The only way in which the power of computers has been brought to bear on the problem of language translation is machine translation, that is, the automation of the entire process. Machine translation is an excellent research vehicle but stands no chance of filling actual needs for translation which are growing at a great rate. In the quarter century during which work on machine translation has been going on, there has been considerable progress in relevant areas of computer science. However, advances in linguistics, important though they may have been, have not touched the core of this problem. The proper thing to do is therefore to adopt the kinds of solution that have proved successful in other domains, namely to develop cooperative man–machine systems. This paper proposes a translator's amanuensis, incorporating into a word processor some simple facilities peculiar to translation. Gradual enhancements of such a system could eventually lead to the original goal of machine translation.
By
Kaalep, Heiki-Jaan
6 Citations
The paper describes a morphological analyser for Estonian and how using a text corpus influenced the process of creating it and the resulting program itself. The influence is not limited to the lexicon only, but is also noticeable in the resulting algorithm and implementation. When work on the analyser began, there was no computational treatment of Estonian derivatives and compounds. After some cycles of development and testing on the corpus, we came up with an acceptable algorithm for their treatment. Both the morphological analyser and the speller based on it have been successfully marketed.
By
Wolter, Frank
8 Citations
This paper investigates partitions of lattices of modal logics based on superintuitionistic logics which are defined by forming, for each superintuitionistic logic L and classical modal logic Θ, the set L[Θ] of L-companions of Θ. Here L[Θ] consists of those modal logics whose non-modal fragments coincide with L and which axiomatize Θ if the law of excluded middle p ∨ ¬p is added. Questions addressed are, for instance, whether there exist logics with the disjunction property in L[Θ], whether L[Θ] contains a smallest element, and whether L[Θ] contains lower covers of Θ. Positive solutions as concerns the last question show that there are (uncountably many) superclean modal logics based on intuitionistic logic in the sense of Vakarelov [28]. Thus a number of problems stated in [28] are solved. As a technical tool the paper develops the splitting technique for lattices of modal logics based on superintuitionistic logics and applies duality theory from [34].
By
Boualem, Malek; Harié, Stéphane
This paper describes the multilingual text editor MtScript developed in the framework of the MULTEXT project. MtScript enables the use of many different writing systems in the same document (Latin, Arabic, Cyrillic, Hebrew, Chinese, Japanese, etc.). Editing functions enable the insertion or deletion of text zones even if they have opposite writing directions. In addition, the languages in the text can be marked, customized keyboard input rules can be associated with each language and different character coding systems (one or two bytes) can be combined. MtScript is based on a portable environment (Tcl/Tk). MtScript version 1.1 has been developed under Unix/X Windows (Solaris, Linux systems) and other versions are planned to be ported to the Windows and Macintosh environments. The current 1.1 version presents several limits that will be fixed in future versions, such as the justification of bidirectional texts, printing support, and text import/export support. Future versions will use SGML and TEI norms, which offer ways of encoding multilingual texts and are to a large extent meant for interchange.
By
Herrmann, Burghard
21 Citations
In [14] we used the term finitely algebraizable for algebraizable logics in the sense of Blok and Pigozzi [2] and we introduced possibly infinitely algebraizable, for short, p.i.-algebraizable logics. In the present paper, we characterize the hierarchy of protoalgebraic, equivalential, finitely equivalential, p.i.-algebraizable, and finitely algebraizable logics by properties of the Leibniz operator. A Beth-style definability result yields that finitely equivalential and finitely algebraizable as well as equivalential and p.i.-algebraizable logics can be distinguished by injectivity of the Leibniz operator. Thus, from a characterization of equivalential logics we obtain a new short proof of the main result of [2] that a finitary logic is finitely algebraizable iff the Leibniz operator is injective and preserves unions of directed systems. It is generalized to non-finitary logics. We characterize equivalential and, by adding injectivity, p.i.-algebraizable logics.
By
Pulman, Stephen G.
12 Citations
Higher-order unification is a way of combining information (or equivalently, solving equations) expressed as terms of a typed higher-order logic. A suitably restricted form of the notion has been used as a simple and perspicuous basis for the resolution of the meaning of elliptical expressions and for the interpretation of some non-compositional types of comparative construction also involving ellipsis. This paper explores another area of application for this concept in the interpretation of sentences containing intonationally marked ‘focus’, or various semantic constructs which are sensitive to focus.
Similarities and differences between this approach and theories using ‘alternative semantics’, ‘structured meanings’, or flexible categorial grammars are described. The paper argues that the higher-order unification approach offers descriptive advantages over these alternatives, as well as the practical advantage of being capable of fairly direct computational implementation.
