Showing 1 to 100 of 197 matching Articles
By
Maksimova, Larisa L.
4 Citations
Three variants of Beth's definability theorem are considered. Let L be any normal extension of the provability logic G. It is proved that the first variant B1 holds in L iff L possesses Craig's interpolation property. If L is consistent, then the statement B2 holds in L iff L = G + {□0}. Finally, the variant B3 holds in any normal extension of G.
By
Rautenberg, Wolfgang
7 Citations
We provide a finite axiomatization of the consequence ⊢^{∧}∪⊢^{∨}, i.e. of the set of common sequential rules for ∧ and ∨. Moreover, we show that ⊢^{∧}∪⊢^{∨} has no proper nontrivial strengthenings other than ⊢^{∧} and ⊢^{∨}. A similar result is true for ⊢^{↔}∪⊢^{→}, but not, e.g., for ⊢^{↔}∪⊢^{+}.
By
Hałkowska, Katarzyna
4 Citations
We construct a class K of algebras which are matrices of the logical system Z introduced in [4]. It is shown that algebras belonging to the class K are decomposable into disjoint subalgebras which are Boolean algebras.
By
Westerståhl, Dag
12 Citations
The paper elaborates two points: i) There is no principal opposition between predicate logic and adherence to subject-predicate form, ii) Aristotle's treatment of quantifiers fits well into a modern study of generalized quantifiers.
By
Simons, Peter M.
This paper presents a tree method for testing the validity of inferences, including syllogisms, in a simple term logic. The method is given in the form of an algorithm and is shown to be sound and complete with respect to the obvious denotational semantics. The primitive logical constants of the system, which is indebted to the logical works of Jevons, Brentano and Lewis Carroll, are term negation, polyadic term conjunction, and functors affirming and denying existence, and use is also made of a metalinguistic concept of formal synonymy. It is indicated briefly how the method may be extended to other systems.
By
Karpenko, Alexander S.
1 Citation
In this paper we define the n+1-valued matrix logic K_{n+1} whose class of tautologies is non-empty iff n is a prime number. This result amounts to a new definition of a prime number. We prove that if n is prime, then the functional properties of K_{n+1} are the same as those of Łukasiewicz's n+1-valued matrix logic Ł_{n+1}. In an indirect way, the proof we provide reflects the complexity of the distribution of prime numbers in the natural series. Further, we introduce a generalization K_{n+1}^{*} of K_{n+1} such that the set of tautologies of K_{n+1}^{*} is not empty iff n is of the form p^{β}, where p is prime and β is natural. Also in this case we prove the equivalence of the functional properties of the introduced logic and those of Ł_{n+1}. In the concluding part, we briefly discuss a partition of the natural series into equivalence classes such that each class contains exactly one prime number. We conjecture that for each prime number the corresponding equivalence class is finite.
By
Wybraniec-Skardowska, Urszula
4 Citations
With reference to the Polish logico-philosophical tradition, two formal theories of language syntax are sketched and then compared with each other. The first theory is based on the assumption that the basic linguistic stratum is constituted by object-tokens (concrete objects perceived through the senses) and that the types of such objects (ideal objects) are derivative constructs. The other is founded on the opposite philosophical orientation. The two theories are equivalent. The main conclusion is that in syntactic research it is redundant to postulate the existence of abstract linguistic entities. Earlier, in a slightly different form, the idea was presented in [27] and signalled in [26] and [25].
By
Vakarelov, Dimiter
6 Citations
Four known three-valued logics are formulated axiomatically, and several completeness theorems with respect to non-standard intuitive semantics, connected with the notions of information, contrariety and subcontrariety, are given.
By
Dutkiewicz, Rafal
9 Citations
We prove that the intuitionistic sentential calculus is Ł-decidable (decidable in the sense of Łukasiewicz), i.e. the sets of theses of Int and of rejected formulas are disjoint and their union is equal to the set of all formulas. A formula is rejected iff it is a sentential variable or is obtained from other formulas by means of three rejection rules. One of the rules is original; the remaining two are Łukasiewicz's rejection rules: by detachment and by substitution. We make extensive use of the method of Beth's semantic tableaux.
By
Tembrowski, Bronisław
The starting point for the investigation in this paper is the following McKinsey-Tarski theorem: if f and g are algebraic functions (of the same number of variables) in a topological Boolean algebra (TBA) and if C(f)∩C(g) vanishes identically, then either f or g vanishes identically. The present paper generalizes this theorem to B-algebras and shows that the validity of the theorem in a variety of B-algebras (B-variety) generated by SCI_{B}-equations implies that its free Lindenbaum-Tarski algebra is normal. This is important in the semantical analysis of SCI_{B} (the Boolean strengthening of the sentential calculus with identity, SCI), since normal B-algebras are just the models of this logic. The rest of the paper is concerned with relationships between some closure systems of filters, SCI_{B}-theories, B-varieties and closed sets of SCI_{B}-equations that have been derived both from the semantics of SCI_{B} and from the semantics of the usual equational logic.
By
Lejewski, Czesław
The most difficult problem that Leśniewski came across in constructing his system of the foundations of mathematics was the problem of ‘defining definitions’, as he used to put it. He solved it to his satisfaction only when he had completed the formalization of his protothetic and ontology. By formalization of a deductive system one ought to understand in this context the statement, as precise and unambiguous as possible, of the conditions an expression has to satisfy if it is added to the system as a new thesis. Now, some protothetical theses, and some ontological ones, included in the respective systems, happen to be definitions. In the present essay I employ Leśniewski's method of terminological explanations for the purpose of formalizing Łukasiewicz's system of implicational calculus of propositions, which system, without having recourse to quantification, I first extended some time ago into a functionally complete system. This I achieved by allowing for a rule of ‘implicational definitions’, which enabled me to define any proposition-forming functor for any finite number of propositional arguments.
By
Woleński, Jan
5 Citations
Popper's definition of verisimilitude was criticized for its paradoxical consequences in the case of false theories. The aim of this paper is to show that the paradoxes disappear if the falsity content of a theory is defined with the help of dCn or Cn^{−1}.
By
Ho, Nguyen Cat; Rasiowa, Helena
13 Citations
Semi-Post algebras of any type T, where T is a poset, were introduced and investigated in [CR87a], [CR87b]. In this paper, plain semi-Post algebras are singled out among semi-Post algebras because of their simplicity, their greatest similarity to Post algebras, and their importance in logics for approximation reasoning ([Ra87a], [Ra87b], [RaEp87]). They are pseudo-Boolean algebras generated, in a sense, by corresponding Boolean algebras and a poset T. Every element has a unique descending representation by means of elements of a corresponding Boolean algebra and primitive Post constants which form a poset T. An axiomatization, another characterization, subalgebras, homomorphisms, congruences determined by special filters, and a representability theory of these algebras, connected with that for Boolean algebras, are the subject of this paper.
By
O'Keeffe, Katherine O'Brien; Rundell, William
1 Citation
Information theory offers a means for analyzing some constraints on the reading and copying process in Old English. Entropy for strings of various lengths offers a baseline measure of the uncertainty involved in the transmission of Old English texts, while avoiding the pitfalls of applying models of modern reading to early medieval practice. Analysis of lengthy prose and verse texts in Old English revealed uniformly high values for entropy at all string lengths. The high entropies may result from the language's irregular orthography, poetic koiné, and several dialects, and imply that the language may have been easy to write but difficult to read. The low redundancy of the language, which its high entropy values indicate, suggests that the reader of Old English played an enhanced role in “decoding” a text and may provide an explanation for the high variability in the transmission of Old English verse.
Katherine O'Brien O'Keeffe is Professor of English at Texas A&M University and a co-director of its Interdisciplinary Group for Historical Literary Study.
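As a rough illustration of the entropy measure involved (not the study's own data or code), Shannon entropy over the n-gram distribution of a text can be computed as follows; the toy strings are invented.

```python
import math
from collections import Counter

def ngram_entropy(text: str, n: int) -> float:
    """Shannon entropy, in bits, of the n-gram distribution of a text."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive (redundant) string has lower bigram entropy than a varied one,
# mirroring the contrast between high- and low-redundancy orthographies.
print(ngram_entropy("abababababab", 2))  # low: only two distinct bigrams
print(ngram_entropy("abcdefghijkl", 2))  # high: every bigram distinct
```

Higher entropy at a given string length means lower redundancy, and hence more work for the reader "decoding" the text.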
By
Jay, C. Barry
5 Citations
The internal language of a monoidal category yields simple proofs of results about a natural numbers object therein.
By
Lambek, J.
2 Citations
Categories may be viewed as deductive systems or as algebraic theories. We are primarily interested in the interplay between these two views and trace it through a number of structured categories and their internal languages, bearing in mind their relevance to the foundations of mathematics. We see this as a common thread running through the six contributions to this issue of Studia Logica.
By
Rodenburg, P. H.; Linden, F. J.
3 Citations
A construction is described of a cartesian closed category A with exactly two elements out of a C-monoid ℳ such that ℳ can be recovered from A without reference to the construction.
By
Paré, Robert; Román, Leopoldo
15 Citations
The notion of a natural numbers object in a monoidal category is defined and it is shown that the theory of primitive recursive functions can be developed. This is done by considering the category of cocommutative comonoids which is cartesian, and where the theory of natural numbers objects is well developed. A number of examples illustrate the usefulness of the concept.
By
Szabo, M. E.
We introduce the notion of an alphabetic trace of a cut-free intuitionistic propositional proof and show that it serves to characterize the equality of arrows in cartesian closed categories. We also show that alphabetic traces improve on the notion of the generality of proofs proposed in the literature. The main theorem of the paper yields a new and considerably simpler solution of the coherence problem for cartesian closed categories than those in [11, 14].
By
Kirschner, Zdeněk; Rosen, Alexandr
2 Citations
This paper discusses an experiment in machine translation between English and Czech. Our system is based on a dependency grammar and its core parts are implemented in Q-systems. There is no distinct transfer phase. Many features of the system are determined by the fact that it was conceived as “production-oriented”. A brief description of the system is provided, but the main focus is on the problems encountered. They include general problems of translation, problems of translation from English, and problems specific to translation from English into Czech (and possibly most other Slavonic languages). Some solutions are described, but for many problems it seems unrealistic to expect satisfactory solutions soon or at all.
By
Pitts, Andrew M.; Taylor, Paul
2 Citations
Working in the fragment of Martin-Löf's extensional type theory [12] which has products (but not sums) of dependent types, we consider two additional assumptions: firstly, that there are (strong) equality types; and secondly, that there is a type which is universal in the sense that terms of that type name all types, up to isomorphism. For such a type theory, we give a version of Russell's paradox showing that each type possesses a closed term and (hence) that all terms of each type are provably equal. We consider the kind of category-theoretic structure which corresponds to this kind of type theory and obtain a categorical version of the paradox. A special case of this result is the degeneracy of a locally cartesian closed category with a morphism which is generic in the sense that every other morphism in the category can be obtained from it via pullback.
By
Curien, PierreLouis
3 Citations
We present the paradigm of categories-as-syntax. We briefly recall the even stronger paradigm of categories-as-machine-language, which led from λ-calculus to categorical combinators viewed as basic instructions of the Categorical Abstract Machine. We extend the categorical combinators so as to describe the proof theory of first-order logic and higher-order logic. We do not prove new results: the use of indexed categories and the description of quantifiers as adjoints go back to Lawvere and have been developed in detail in works of R. Seely. We rather propose a syntactic, equational presentation of those ideas. We sketch the (quasi-equational) categorical structures for dependent types, following ideas of J. Cartmell (contextual categories). All these theories of categorical combinators, together with the translations from λ-calculi into them, are introduced smoothly, thanks to the systematic use of
- an abstract variable-free notation for λ-calculus, going back to N. de Bruijn,
- a sequent formulation of natural deduction.
By
Obtułowicz, Adam
1 Citation
The paper introduces and discusses the concepts of an indexed category with quantifications and a higher-level indexed category, in order to present an algebraic characterization of a version of Martin-Löf Type Theory. This characterization is given by specifying an additional equational structure on those indexed categories which are models of Martin-Löf Type Theory. One can consider the presented characterization as an essentially algebraic theory of categorical models of Martin-Löf Type Theory. The paper contains a construction of an indexed category with quantifications from the terms and types of the language of Martin-Löf Type Theory, given in the manner of Troelstra [11]. The paper also contains an inductive definition of a valuation of these terms and types in an indexed category with quantifications.
By
Defrise, Christine
This paper presents a detailed linguistic (syntactic, semantic and pragmatic) analysis of the French scalar adverb presque; the analysis is performed so as to be computationally relevant. Further, a methodology for describing other closed-class lexical items is suggested. Such descriptions are necessary for the support of natural language processing systems, including analyzers, generators and machine translation systems.
By
Harris, Mary Dee
Applying the method of discourse structure analysis described by Grosz and Sidner to lyric poetry, one views the poet as the Initiating Conversational Participant, and the reader as the Other Conversational Participant as she recreates the poem upon reading it. In poetry the linguistic and intentional structures function in counterpoint to the metrical and stanzaic structures, respectively, producing the effects that define poetry. Analysis of attentional state can reveal the dynamics of the focussing process in a poem, providing a unique perspective on its operation. More research is needed to extend the theory to adequately handle lyric poetry.
By
Ide, Nancy M.
5 Citations
This paper describes a computer-assisted analysis of semantic patterning in William Blake's The Four Zoas and considers the way in which such patterns contribute to the structure and meaning of the work. The analysis involves examining combinations and recombinations of images across the text for concentrations of images and image groups, recurring images, and patterns in the distribution of individual images and clusters of images. Statistical correlation routines were used to determine the degree of correlation among images across the entire text as well as in specific text segments. Principal components analysis enabled the identification of thematic clusters of images, and the distribution of these clusters across the text was in turn examined to determine their patterning. Finally, time series analysis and Fourier analysis were used to find and verify patterns in the distribution of images across the text. Fourier analysis revealed striking patterns in the distribution of imagery in the Zoas, which suggests that Blake may have used such patterns to help convey the poem's powerful thematic statements.
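The time-series and Fourier step can be illustrated with a small sketch; the occurrence series below is synthetic (a planted 64-segment cycle plus noise), not data from the Zoas.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic occurrence series: 1 where a text segment contains a given image,
# 0 otherwise, with a planted cycle of 64 segments plus a little noise.
series = (np.sin(2 * np.pi * np.arange(512) / 64) > 0.6).astype(float)
series += rng.random(512) * 0.1

# Remove the mean, then inspect the magnitude spectrum for a dominant frequency.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(512)

peak = freqs[spectrum.argmax()]
print(f"dominant period: {1 / peak:.0f} segments")  # recovers the planted 64-segment cycle
```

A sharp spectral peak of this kind is the sort of "striking pattern" a Fourier analysis of image distribution can reveal.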
By
Wilson, Eve
2 Citations
Text typing is the classification of text according to the purpose of the author. There is no universally recognised register of types; the four distinguished here are descriptive, narrative, persuasive and instructional. Type is assigned by analysing the clausal structure of the discourse and certain semantic features within the text, such as theme of sentence, modality of verb, and process type (i.e. whether the verbal group is material, mental, relational, behavioural, verbal or existential). The ability to discriminate textual genre is an important step in the evaluation and classification of documents.
By
Olsen, Mark
2 Citations
The Société de 1789 was a political club founded in early 1790 to propagate the ideals of the Revolution and the Enlightenment. A systematic analysis of the language found in the public discourse of the Société using simple quantitative techniques suggests important distinctions in comparison to the language found in a baseline sample, a selection of the General Cahiers de doléances of 1789. It is further argued that these differences represent an Enlightened reforming tradition that carried into the French Revolution.
By
Janus, Louis; Shadduck, Gregg
Concordances are not the only computer tools available to literary scholars. This article looks at two general-purpose software packages, Lotus 1-2-3 and dBase III. Several suggested approaches to organizing the texts are presented. Each program allows the words to have multiple levels — something standard concordance programs lack. This encourages the researcher to view the text as texture rather than as words strung together linearly. Lotus is easier to use than dBase but lacks the ability to relate information between files. With programming skills, the researcher can develop complex queries in dBase.
By
Burrows, J. F.
33 Citations
The statistical analysis of literary texts has yielded valuable results, not least when it has treated of the frequency patterns of very common words. But, whereas particular frequency patterns have usually been examined as discrete phenomena, it is possible to correlate the frequency profiles of all the very common words, to subject the resulting correlation matrix to eigen analysis, and to present the results in graphic form. The specimens offered here deal, first, with differences among Jane Austen's characters and, secondly, with differences between authors. The most striking general differences among the authors studied relate to historical eras and authorial gender.
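The procedure described here (correlating the frequency profiles of very common words, then subjecting the correlation matrix to eigen analysis) can be sketched in a few lines; the frequency values below are invented for illustration, not drawn from the texts studied.

```python
import numpy as np

# Rows: text samples; columns: relative frequencies of five very common words
# (hypothetical values standing in for, e.g., "the", "of", "and", "to", "a").
word_freqs = np.array([
    [0.062, 0.031, 0.028, 0.024, 0.021],
    [0.058, 0.035, 0.025, 0.027, 0.019],
    [0.071, 0.022, 0.033, 0.020, 0.024],
    [0.066, 0.026, 0.030, 0.022, 0.023],
])

# Correlate the frequency profiles of all the words with one another ...
corr = np.corrcoef(word_freqs, rowvar=False)

# ... and subject the resulting correlation matrix to eigen analysis.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Scores on the first two components place each text in the plane,
# ready for the kind of graphic presentation described above.
standardized = (word_freqs - word_freqs.mean(axis=0)) / word_freqs.std(axis=0)
scores = standardized @ eigvecs[:, :2]
print(scores.shape)  # (4, 2)
```

Plotting the rows of `scores` gives the two-dimensional maps on which differences among characters, or among authors, become visible.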
By
Luong, N. X.
1 Citation
The purpose of this paper is to show the utility of applying a non-ultrametric tree-model to textual data. The first part introduces a basic topological property of the tree and the notion of neighbourhood, which reflects the structure of the tree. The second part emphasizes, through illustrative examples, the adequacy of this model for representing different varieties of textual data.
By
Frautschi, Richard L.
1 Citation
Based on the ARTFL version of the Profession and excerpts from Emile, high-frequency function and content words, as defined by Brunet, are analyzed via Pearson chi-square tests. Next, four measures of narrative voice from the same populations are compared using Markovian chains and further chi-square tests. In a third analysis the two orders of evidence are juxtaposed. The lexical and narratological preferences of the Vicaire and the Gouverneur, while not resolving the problematic of chronological composition (Burgelin, 1969), highlight the distinctiveness of each character.
By
Potter, Rosanne G.
4 Citations
Research on changes in Shaw's rhetoric in Mrs. Warren's Profession, Major Barbara, and Heartbreak House led me to a heuristic for gaining literary-critical control over computer output. This essay describes the eleven-step process: stepping away from the data, stating first premises, developing a working hypothesis, classifying computer-sorted data, marking implicit literary substructures, collecting substructural data into tables, applying earlier statistical observations, choosing parts for detailed analysis, designing a visual method for representing the analysis, presenting segment-by-segment analysis of the selected data, and making larger descriptive generalizations. While describing this heuristic, the essay also reports on the Shaw research.
By
Preston, Cathy Lynn
This paper suggests ways in which the pattern-matching capability of the computer can be used to further our understanding of stylized ballad language. The study is based upon a computer-aided analysis of the entire 595,000-word corpus of Francis James Child's The English and Scottish Popular Ballads (1882–1892), a collection of 305 textual traditions, most of which are represented by a variety of texts. The paper focuses on the “Mary Hamilton” tradition as a means of discussing the function of phatic language in the ballad genre and the significance of textual variation.
By
Anderson, C. W.; McMaster, G. E.
6 Citations
A comparison was made of the levels and patterns of emotional tone scores in four successive versions of three stories that were translated from German by Ellis to illustrate his argument that the Grimm Brothers made extensive revisions from the purported manuscript of the stories to their celebrated first-edition versions. This objective analysis was based upon the evaluation, activity, and potency of the emotions connoted by those of the 1000 most frequent English words detected by the computer as occurring in the narratives. The stories were: The King's Daughter and The Enchanted Prince: Frog King, Sleeping Beauty, and The Little Brother and Little Sister (Hansel and Gretel). Changes in story length, in mean levels of emotional tone, and in patterns of emotional tone across story versions support Ellis's judgement that subsequent revisions were less drastic than the first one, from the manuscript. It was also shown that the stories are quite different from each other in level and pattern of emotional tone.
By
Fortier, Paul A.
2 Citations
Themes (or semantic fields), rather than individual words, are used to study texts from a literary point of view. The approach — Z scores, and a Poisson distribution as a model for distribution — owes much to classical inferential statistics, but the aim of this work is to use statistics as a descriptive rather than a predictive tool. Frequencies of words evoking the themes of night, happiness and claustration were drawn from three Frequency Dictionaries (Juilland, 1970; Imbs, 1971; Engwall, 1984) and used to extrapolate “predicted” frequencies of these themes in four modern French novels. The novels studied were Gide, l'Immoraliste (1902); Céline, Voyage au bout de la nuit (1932); Sartre, la Nausée (1938); and Robbe-Grillet, la Jalousie. The results corresponded to known and documented literary phenomena or could be explained in terms of such phenomena. The approach chosen thus has some usefulness.
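The statistical core of this approach (a Z score for an observed theme frequency against a Poisson model extrapolated from a frequency dictionary) can be sketched as follows; the rate and counts are hypothetical, not values from the dictionaries cited above.

```python
import math

def theme_z_score(observed: int, rate_per_10k: float, text_length: int) -> float:
    """Z score of an observed theme count against a Poisson model whose mean is
    extrapolated from a reference frequency (occurrences per 10,000 words)."""
    expected = rate_per_10k * text_length / 10_000  # Poisson mean
    # For a Poisson distribution the standard deviation is sqrt(mean).
    return (observed - expected) / math.sqrt(expected)

# Hypothetical figures: words evoking "night" occur 12 times per 10,000 words
# in the dictionary, while a 50,000-word novel shows 143 occurrences.
print(round(theme_z_score(143, 12.0, 50_000), 2))  # → 10.72, far above chance
```

A large positive Z score marks a theme as strikingly over-represented relative to ordinary usage, which is then interpreted descriptively rather than inferentially.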
By
Zock, M.; Laroui, A.; Francopoulo, G.
2 Citations
We describe a system under development whose goal is to provide a “natural” environment for students learning to produce sentences in French. The learning objective is personal pronouns; the method is inductive (learning through exploration). The inputs of the learning component are conceptual structures (meanings) and the corresponding linguistic forms (sentences); its outputs are rules characterizing these data. The learning is dialogue-based, that is to say, the student may ask certain kinds of questions, such as: How does one say 〈idea〉?, Can one say 〈linguistic form〉?, Why does one say 〈linguistic form〉?, and the system answers them.
By integrating the student into the process, that is, by encouraging him to build and explore a search space we hope to enhance not only his learning efficiency (what and how to learn), but also our understanding of the underlying processes. By analyzing the trace of the dialogue (what questions have been asked at what moment), we may infer the strategies a student put to use.
Although the system covers far more than what is discussed here, we will restrict our discussion to a small subset of grammar, personal pronouns, which are known to be a notorious problem both in first and second language learning.
By
Delcourt, Christian
2 Citations
A key word with regard to a subcorpus is a word whose frequency in that subcorpus is significantly higher than expected under the hypothesis that its use and the variable “part of the corpus” are mutually independent. A study in literary statistics almost invariably includes a chapter devoted to key words. However, a strong attack has recently been launched upon the way stylometry has been modelling texts since the classical works of Herdan, Guiraud or Muller. In fact, statistical modelling seems as valid in stylistics as in any other field of the humanities and social sciences. What is questionable is the fact that many studies in literary statistics are more satisfied with the easy identification of monsters, i.e. literary phenomena unexplained by wrong models, than with the laborious search for models fitting the textual data well. A short examination of the mentioned controversy and the quantitative analysis of an example provided by Laclos' novel Les Liaisons dangereuses endeavour to support this argument.
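The definition of a key word can be made operational with a standard independence test. The sketch below uses the log-likelihood ratio (Dunning's G2) as one common choice of statistic; it is not necessarily the test used in the paper, and the counts are invented.

```python
import math

def keyword_g2(k_sub: int, n_sub: int, k_corpus: int, n_corpus: int) -> float:
    """Log-likelihood ratio (G2) for the 2x2 table crossing word vs. other words
    with subcorpus vs. rest of corpus, under the null hypothesis that word use
    and the variable "part of the corpus" are mutually independent."""
    p = k_corpus / n_corpus  # pooled rate of the word under independence
    g2 = 0.0
    for k, n in ((k_sub, n_sub), (k_corpus - k_sub, n_corpus - n_sub)):
        for observed, expected in ((k, n * p), (n - k, n * (1 - p))):
            if observed > 0:
                g2 += 2 * observed * math.log(observed / expected)
    return g2

# The word occurs 50 times in a 1,000-word subcorpus but only 60 times in the
# whole 10,000-word corpus: far more often than independence predicts.
print(keyword_g2(50, 1_000, 60, 10_000) > 3.84)  # prints True (significant at 5%)
```

A word whose G2 exceeds the chosen critical value counts as a key word of the subcorpus; fitting the model carefully, rather than merely collecting such "monsters", is the paper's point.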
By
Myrsiades, Kostas; Myrsiades, Linda Suny
The Karagiozis, a Greek shadow puppet theater performance derived from a sixteenth-century Turkish model, is an interactive performative event rather than a static text treated in isolation from its extended environmental and immediate performative contexts, and thus requires critical shifts in the research approach one uses. A method of collating data that could handle the differences of multiple variants became necessary. Such a method had to effectively test the transformational growth of the form given the influence of irrational forces, had to organize a large number of variables as well as a substantial body of texts, and had to operate at the smallest level to ensure that findings were confirmed or disconfirmed in the most exhaustive and comprehensive way possible. That method was to be found in information management systems, specifically askSam and Notebook II.
By
Peer, W.
14 Citations
The present paper is a critique of quantitative studies of literature. It is argued that such studies are involved in an act of reification, in which, moreover, fundamental ingredients of the texts, e.g. their (highly important) range of figurative meanings, are eliminated from the analysis. Instead, a concentration on lower levels of linguistic organization, such as grammar and lexis, may be observed, in spite of the fact that these are often the least relevant aspects of the text. In doing so, quantitative studies of literature significantly reduce not only the cultural value of texts, but also the generalizability of their own findings. What is needed, therefore, is an awareness of and readiness to relate to matters of textuality as an organizing principle underlying the cultural functioning of literary works of art.
By
Logan, H. M.
The study of the history of new words in the New OED described in this paper was undertaken in 1986-87 and is based on the material then available. Since then, the New OED has been finished, and PAT, the inquiry system developed at the University of Waterloo for the investigation of the New OED database, has been much altered and improved. Nevertheless, this report should prove useful in indicating the potential for analyzing the computerized New OED, and some of the problems. This project is a study of the ways in which new words are created in English at various periods of time. A chronological dictionary is created, listing words introduced into the language over 50-year increments. These words are then classified by the processes used in forming them to show, in proportional terms, if certain processes are more common at some times than at others.
By
Halteren, Hans
The possible benefits of computing in humanities research are often wasted because of the psychological barriers that computers evoke in nonspecialists. This paper examines the underlying causes and suggests some ways of alleviating the problem. One approach in particular, i.e. ease through familiarity, is discussed in more detail. It is illustrated by means of a description of a database system that uses this approach: the Linguistic DataBase, which contains syntactic analysis trees of natural language data.
By
Farghaly, Ali
3 Citations
This paper presents the view that Computer Assisted Language Instruction (CALI) software should be developed as a natural language processing system that offers an interactive environment for language learners. A description of Artificial Intelligence tools and techniques, such as parsing, knowledge representation and expert systems, is presented. Their capabilities and limitations are discussed and a model for intelligent CALI software (MICALI) is proposed. MICALI is highly interactive and communicative and can initiate conversation with a student or respond to questions on a previously defined domain of knowledge. In the present state of the art, MICALI can only operate with limited parsing and domain-specific knowledge representation.
By
Brown, Ralf D.
3 Citations
A semantic and pragmatic interpreter that combines automatic and interactive disambiguation is described. This augmentor has an interactive disambiguation component that is called upon to aid automatic disambiguation when automated strategies prove inadequate. In addition to interactive disambiguation, the augmentor also provides the user interface for the KBMT-89 project.
By
Morrisson, Stephen; Kee, Marion; Goodman, Kenneth
1 Citation
This paper describes the parser, especially its mapping rule interpreter, used in KBMT-89. The interpreter is characterized by its ability to produce the semantic and syntactic structures of a parse simultaneously, and therefore more efficiently than other kinds of analyzers. Applicable forms of parser mapping rules, which map syntactic structures to semantic structures, are introduced. The parser, a modified version of Tomita's universal parser, is briefly described. Sample traces illustrate the functioning of the parser and mapping rule interpreter.
By
Nyberg, Eric, 3rd; McCardell, Rita; Gates, Donna; Nirenburg, Sergei
2 Citations
The structure and function of the target-language generation module for KBMT-89 is described. The lexical selection module (which includes thematic-role subcategorization, a meaning distance metric, and syntactic subcategorization) is presented. We also describe the generation mapping rules, and rule interpretation in the generation of f-structures for target-language utterances.
By
Brady, Ross T.
2 Citations
We provide a semantics for relevant logics with the addition of Aristotle's Thesis, ∼(A→∼A), and also Boethius' Thesis, (A→B)→∼(A→∼B). We adopt the Routley-Meyer affixing style of semantics but include in the model structures a regulatory structure for all interpretations of formulae, with a view to obtaining a less ad hoc semantics than those previously given for such logics. Soundness and completeness are proved, and in the completeness proof a new corollary to the Priming Lemma is introduced (cf. Relevant Logics and their Rivals I, Ridgeview, 1982).
By
Segerberg, Krister
17 Citations
This paper consists of some lecture notes in which conditional logic is treated as an extension of modal logic. Completeness and filtration theorems are provided for some basic systems.
By
Juillard, M.; Luong, N. X.
1 Citations
Scholars in the humanities often have to account exhaustively for the structure of large masses of data. Tree-diagrams implemented by means of suitable computer programs can be of considerable assistance in achieving a cohesive representation of the data. This paper discusses the respective merits of the two main approaches to tree representation and introduces a new method based on the use of unrooted trees. After a detailed examination of the topological properties of such trees, two algorithms are described. The second part of the paper consists of practical applications of the method of tree representation to a corpus of contemporary English poetry. Several sets of data made up of both lexical and grammatical items (adjectives, modals, auxiliaries and personal pronouns) have been submitted to the method. The findings are assessed in terms of their heuristic value in the light of modern linguistic theory and compared with the results obtained by means of more traditional statistical procedures.
N. X. Luong is a Doctor of Sciences and a lecturer at the University of Nice. He is conducting research on algorithms in the field of discrete mathematics. He has, among other things, created several algorithms for the representation of data in the form of non-hierarchic trees.