• Kate BEECHING (Bristol, Grande-Bretagne)
    The translation equivalence of bon, enfin, well and I mean
    2011, Vol. XVI-2, pp. 91-105

    The aim of this paper is to evaluate the usefulness of two methods of capturing the semantic change that has led to the multifunctional nature of four pragmaticalising polysemous connectors in French and English (bon, enfin, well and I mean): a) translation equivalence, which may help to disambiguate evolving polysemies, and b) Haspelmath's (2003) semantic map approach to cross-linguistic typology and the implicational hierarchies which mark the development of these polysemies. The article concludes that degrees of pragmaticalisation can be revealed through translation equivalence, though only partially, and that cross-linguistic semantic mapping can perhaps better capture the diachronic developmental stages and the degree of translation equivalence between the terms.

  • Henning BERGENHOLTZ (Aarhus, Danemark)
    Faster and more reliable retrieval of data in specialized printed and digital dictionaries and lexicons
    2009, Vol. XIV-2, pp. 81-97

    In the information age we have more accessible data than ever before. At the same time, there is undoubtedly a greater need for information than ever before. We can distinguish at least three types of information need: communicative, cognitive and operational. Dictionaries normally focus on one or more communicative functions, encyclopaedias on cognitive functions, and user guides and manuals on operational functions. To perform such functions, we need reference books containing the necessary data. This is a main topic in both metalexicography and terminography. Far fewer observations have been devoted to the no less important question: how, and especially how quickly, can the user get access to the data?

  • Paul BOGAARDS (Leiden, Pays-Bas)
    Collocational information in dictionaries
    1997, Vol. II-1, pp. 31-42

    Hausmann (1979, 1984) proposes an interesting theory concerning the categorization and internal analysis of collocations. What seems less clear is the practical importance of this theory for the treatment of collocations in dictionaries. In this paper, Hausmann's theory is discussed and tested in an experiment in which learners of French and Dutch were asked to indicate where they would look in the dictionary whenever they did not know the French equivalent of a series of Dutch collocations.

  • Etienne BRUNET (Nice)
    Computerized dictionaries (Encyclopedia Universalis, GR, OED, TLF)
    1997, Vol. II-1, pp. 7-30

    This article examines how some of the large dictionaries and encyclopedias now available on CD-ROM have been computerized, and what new possibilities for consultation and research these CD-ROMs offer compared with the printed versions from which they originate.

  • Henri BÉJOINT (Lyon 2)
    Old and new dictionaries and their different representations of language and discourse
    2005, Vol. X-2, pp. 11-18

    The dictionary evolved from medieval glosses that explained fragments of discourse in their contexts. Those fragments were later collected, then classified and reduced to their simplest forms, i.e. words. The most important aspect of that evolution from gloss to dictionary is that the fragment to be explained was decontextualized, extracted from discourse. The main objective of the dictionary is to give an image of the system. It is now possible to improve the dictionary in its role as a tool for explaining discourse. It cannot provide explanations adapted to every single context, but it can give the user a huge quantity of discourse and provide explanations more closely adapted to every occurrence or type of occurrence. Lexicographers would be well advised to investigate these new possibilities.

  • Henri BÉJOINT (Lyon 2)
    Computer science and corpus lexicography: the new dictionaries
    2007, Vol. XII-1, pp. 7-23

  • Thierry FONTENELLE (Centre de Traduction UE (Luxembourg))
    Dictionaries and tools for linguistic correctness
    2005, Vol. X-2, pp. 119-128

    Spell-checkers and grammar checkers are among the most widely used natural language processing applications. At the heart of these proofing tools, one finds the electronic lexicon, where the various types of lexical information these tools rely on are stored. We describe some of these linguistic properties and show how the border between spell-checker and grammar checker tends to become blurred in the most recent versions of these tools, even if, for the time being at least, the two types of tools keep meeting distinct needs.

  • Thierry FONTENELLE (Centre de Traduction UE (Luxembourg))
    Computerized dictionaries and lexical relations: A comparison between some European programmes
    1997, Vol. II-1, pp. 65-77

    Natural Language Processing systems (e.g. machine translation, information retrieval or generation systems) require large lexical resources which prove extremely costly to build. The automatic or semi-automatic construction of electronic dictionaries is now recognized as a sine qua non in any NLP project and a lot of attention is being paid to methods and tools for acquiring, coding and formalizing the linguistic properties of lexical items appearing in the texts to be processed. This paper deals with the formalization of lexical-semantic relations in computerized dictionaries, focusing more particularly on a few research projects funded by the European Union.

  • Lucie GOURNAY (Paris-Est Créteil)
    Connectors and the expression of contrast: a French-English comparative study
    2011, Vol. XVI-2, pp. 75-89

    This paper focuses on the contribution of a cross-linguistic approach to the diversity of discourse connectives. It includes two case studies aimed at applications in lexicology and language teaching: first, the non-equivalence of mais / but when sentence-initial mais does not mark an argumentative opposition; second, the false cognates actually / actuellement, which hardly ever correspond. These two complementary case studies illustrate two cases of non-direct translation.

    The Color of Things: Towards the automatic acquisition of information for a descriptive dictionary
    2005, Vol. X-2, pp. 83-94

    Physical objects are often described in dictionaries by their visual features. But the information needed by computer applications for image analysis is not always found in dictionaries, nor in complete form in any other publicly available information source. This article describes some first steps towards finding more complete visual information about objects that could be used to enhance computer-usable dictionaries and other knowledge repositories. We show that some information about the common colors of objects can be extracted automatically from text found on the Web.
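    The extraction idea can be sketched in a few lines: count how often a color word immediately precedes a noun, and take the most frequent color as weak evidence of the object's typical appearance. The tiny in-memory corpus and color list below are invented stand-ins for the Web text the article actually harvests; this is a minimal sketch of the approach, not the authors' system.

    ```python
    import re
    from collections import Counter

    # Hypothetical miniature "corpus"; the article draws on real Web text.
    corpus = """
    The red tomato sat next to a green tomato that was not yet ripe.
    She peeled a yellow banana. A ripe banana is yellow, not brown.
    The sky was blue. He bought a yellow banana and a red tomato.
    """

    COLORS = {"red", "green", "blue", "yellow", "brown", "black", "white"}

    def color_profile(text):
        """Count 'COLOR noun' bigrams as weak evidence of an object's typical color."""
        counts = {}
        # Lookahead keeps the bigrams overlapping, so no word pair is skipped.
        for first, second in re.findall(r"(\w+)\s+(?=(\w+))", text.lower()):
            if first in COLORS:
                counts.setdefault(second, Counter())[first] += 1
        return counts

    profile = color_profile(corpus)
    print(profile["tomato"].most_common(1))  # most frequent color attributed to 'tomato'
    ```

    On real Web-scale text the same frequency signal would of course need filtering (negations, metaphor, parser noise), which is precisely where the article's contribution lies.
    
    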

  • Patrick HANKS (Brandeis University, États-Unis)
    A Pattern Dictionary for Natural Language Processing
    2005, Vol. X-2, pp. 63-82

    This paper briefly surveys three of the main resources for word sense disambiguation that are currently in use - WordNet, FrameNet, and Levin classes - and proposes an alternative approach, focusing on verbs and their valencies. This new approach does not attempt to account for all possible uses of a verb, but rather all its normal uses ("norms"). By corpus pattern analysis (CPA), the normal patterns of use of verbs are established. A meaning ("primary implicature") is associated with each pattern. The patterns are then available as benchmarks against which the probable meaning of any sentence can be measured. The status of abnormal or unusual uses ("exploitations") is also briefly discussed. Also, three kinds of alternation are recognized: syntactic diathesis alternations, semantic-type alternations, and lexical alternations.
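    The core CPA idea, that a pattern pairs a verb's valency slots (typed semantically) with a primary implicature, can be illustrated with a toy lookup. The semantic types, patterns and glosses below are invented for illustration and are not drawn from the paper's actual resource.

    ```python
    # Toy semantic-type lexicon: maps nouns to coarse semantic types.
    SEM_TYPE = {"manager": "Human", "employee": "Human",
                "gun": "Firearm", "pistol": "Firearm"}

    # Toy patterns for the verb 'fire': (subject type, object type) -> primary implicature.
    PATTERNS = {
        ("Human", "Human"): "dismiss from employment",
        ("Human", "Firearm"): "discharge a weapon",
    }

    def implicature(subj, verb, obj):
        """Match a clause against the stored patterns and return its implicature."""
        if verb != "fire":
            return None  # only one verb is covered in this sketch
        key = (SEM_TYPE.get(subj), SEM_TYPE.get(obj))
        return PATTERNS.get(key)

    print(implicature("manager", "fire", "employee"))
    ```

    A clause that matches no stored pattern would, in CPA terms, be a candidate "exploitation" rather than a norm.
    
    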

  • Ulrich HEID (Stuttgart, Allemagne)
    Semi-automatic updating of dictionaries
    2002, Vol. VII-1, pp. 53-66

    Many more dictionaries are being updated than written from scratch. Tailor-made computational linguistic support for lexicographers should thus not only provide corpus-derived data but also a comparison with the information given in the targeted dictionary. We are developing a system for German which provides this kind of comparison for a set of macro- and microstructural data. We report on the main lexicographic and computational aspects of the approach.

  • Adam KILGARRIFF (Brighton, Grande-Bretagne)
    Requirements for state-of-the-art dictionary writing systems
    2005, Vol. X-2, pp. 95-102

    Computers can be used in lexicography to support the analysis of the language, and to support the synthesis of the dictionary text. There are of course many other interactions between computing and lexicography, including the preparation and presentation of electronic dictionaries, the use of dictionaries in language technology systems, and the automatic acquisition of lexical information. In technologically advanced dictionary-making, the lexicographer works with two main systems on their computer: the Corpus Query System for analysis and the Dictionary Writing System for synthesis. Currently, these are always independent, with communication between the two via cut-and-paste. We describe requirements for state-of-the-art dictionary writing systems.

  • J.G. KRUYT (Leyde, Pays-Bas)
    Towards the Integrated Language Database of 8th-21st Century Dutch
    2000, Vol. V-2, pp. 33-44

    In the past decade, technology has had a major impact on the activities of the Institute for Dutch Lexicology (INL). The results include three electronic dictionaries, covering the period from 1200 up to 1976, and some linguistically annotated text corpora of historical and present-day Dutch. Three present-day corpora have been widely used not only for lexicography but also for many other purposes, since becoming accessible over the Internet in 1994. Advanced technology will have even more importance for a project recently started, the Integrated Language Database of 8th-21st Century Dutch, in which the dictionaries, lexica and a diachronic text corpus will be linked in a meaningful way. Parts of the database will be linked with comparable data collections at other institutes, thus creating a supra-institutional research instrument which will provide new opportunities for innovative research.

  • Jon LANDABURU (CNRS-Célia)
    Building a linguistic database for the Amerindian languages of Colombia: maps, glossaries, sound archives
    1997, Vol. II-1, pp. 83-90
  • Patrick LEROYER (Aarhus, Danemark)
    In terms of wine: lexicographisation of an on-line tourist guide for wine-lovers
    2009, Vol. XIV-2, pp. 99-116

    Online tourist guides are information tools that communicate a destination image and specialised knowledge at the same time. They feature a large variety of lexicographic structures, including word lists, articles, conceptual schemes, indexes and registers, keyword search options, internal and external cross-references, etc. This is by no means surprising, in so far as what is needed is effective data access in order to extract information – precisely as in lexicography. The functional thesis we defend in this article is that lexicographisation in a user perspective can improve the access process. Taking œnotouristic online guides as a case in point, we examine the different user situations leading to consultation, in particular the need for experiential information, where users simply wish to improve the conditions of their œnotouristic experience. We then formulate theoretical proposals aimed at ensuring better interaction of lexicographic functions, data presentation and access possibilities.

  • Denis MAUREL (Tours)
    An electronic dictionary for proper names
    1997, Vol. II-1, pp. 101-111

    Following on from plain word lists and conventional electronic dictionaries, the Prolex electronic dictionary of proper names is based on the relational model as defined in database theory. It is represented as a finite-state transducer, which allows quick browsing and efficient data compaction.
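    The finite-state idea can be sketched with a character trie, a simple special case of such automata: shared prefixes are stored once (the compaction the abstract mentions), and lookup follows one transition per character (the quick browsing). The place names and glosses below are toy data, not Prolex content, and a real transducer would also compact shared suffixes.

    ```python
    def build(entries):
        """Build a character trie mapping each name to its associated information."""
        root = {}
        for name, info in entries.items():
            node = root
            for ch in name:
                node = node.setdefault(ch, {})  # shared prefixes reuse existing nodes
            node["\0"] = info  # end-of-word marker carries the output
        return root

    def lookup(trie, name):
        """Walk one transition per character; None if the name is absent."""
        node = trie
        for ch in name:
            if ch not in node:
                return None
            node = node[ch]
        return node.get("\0")

    # 'Paris' and 'Parana' share the prefix 'Par'; 'Pau' shares 'Pa'.
    trie = build({"Paris": "city (France)", "Pau": "city (France)", "Parana": "river"})
    print(lookup(trie, "Paris"))
    ```

    Lookup cost depends only on the length of the query string, not on the number of entries, which is what makes the finite-state representation attractive for large proper-name dictionaries.
    
    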

  • Carlos MELÉNDEZ QUERO (Université de Lorraine)
    Lexicographic treatment of discourse particles in Spanish: problems and proposals
    2015, Vol. XX-1, pp. 29-44

    In the present paper we offer some reflections on the lexicographic study of discourse particles. Setting out from the analysis of emotive evaluative adverbs in Spanish, we emphasize the difficulties of using dictionaries to learn the discourse functions of these units. Following a presentation of what dictionaries contribute, we suggest a method for defining and explaining the words in question lexicographically, in terms of discursive instructions and communicative intentions. This model enables us to resolve certain problems and to illustrate the principal similarities and differences between analogous evaluative expressions.

  • Morten PILEGAARD (Aarhus, Danemark)
    Collaborative repositories: An organisational and technological response to current challenges in specialised knowledge communication?
    2009, Vol. XIV-2, pp. 57-71

    This paper presents concepts and systems for multilingual terminological and textual knowledge codification, representation, validation, management and sharing, structured around the notion of genre. These systems operationalize the different stages of the 'virtuous knowledge cycle' within a dynamic, multilingual specialized web dictionary and a multilingual, genre-based corpus of medical text genre hierarchies or systems. The knowledge-cycle approach mirrors 'real-life' working processes and allows for repeated conversions of knowledge between its tacit and explicit forms, letting knowledge be codified and spiral up from the individual to the collective level within corporate 'communities of practice'. The paper reports on the results of implementing these concepts and systems in general, and the web dictionary in particular, within the Danish health care, pharmaceutical, medical device and translation sectors, which have technologically been fused into one collective 'knowledge cluster', and it discusses the opportunities for research and business that spring from this fusion of language and health technologies.

  • Miklós PÁLFY (Szeged, Hongrie)
    Structuring lexical parallelisms in the new French-Hungarian dictionary
    1997, Vol. II-1, pp. 59-64

    The computerization of dictionaries has appreciably modified our thinking about lexical structures. This paper aims to demonstrate a few dichotomies in bilingual semantic parallelisms, with the aid of French-Hungarian examples taken from a future electronic dictionary. These functional oppositions are devised as compilation principles defining the structure of lexicographical entries.

    • Caroline DE SCHAETZEN (Bruxelles, Belgique)
    Corpora and terminology: Building specialised corpora for making dictionaries
    1996, Vol. I-2, pp. 57-76

    The construction of dictionaries and specialised glossaries is increasingly based on large corpora. This article presents a state of the art of the numerous technical problems that arise in the construction and exploitation of these corpora, and of the computer programmes developed to help solve them.

  • Marc THOUVENOT (CNRS-Célia)
    Tlachia/Pohua: A tool for developing pictographic dictionaries
    1997, Vol. II-1, pp. 91-100
  • Serge VERLINDE (Louvain, Belgique)
    Electronic dictionaries and learning the lexicon
    2005, Vol. X-2, pp. 19-30

    In this article we illustrate how the lexicographical description of an electronic learner's dictionary (DAFLES, Dictionnaire d'apprentissage du français langue étrangère ou seconde) can be combined with a corpus to build a French vocabulary learning environment for learners of French as a foreign or second language at intermediate or advanced level (ALFALEX). This is only possible if the lexicographical description is fully coherent, perfectly structured and enriched with information which is not always made explicit in traditional dictionaries, even electronic ones.

  • Piek VOSSEN (Amsterdam, Pays-Bas)
    WordNet, EuroWordNet and Global WordNet
    2002, Vol. VII-1, pp. 27-38

    In this article we present the architecture of the WordNet database, organised to represent conceptual relations and initially developed for English, as well as the extensions made under the name EuroWordNet for seven other European languages.

  • Michael ZOCK (CNRS-LIMSI)
    Is the mental dictionary a model of tomorrow's dictionaries?
    2005, Vol. X-2, pp. 103-117

    A dictionary is a necessary component of natural language processing. Yet there are different kinds of dictionaries (paper, electronic, mental), and in terms of efficiency they are by no means equivalent. Overall, the best dictionary is the one that we carry with us every day, the mental lexicon; it is only when we lack a term or have word-access problems that we reach for its paper or digital counterparts. Given the superiority of the mental lexicon, we consider building electronic dictionaries according to similar principles, which supposes that we know what these principles are. Enumerating some of them is the goal of this paper. Unfortunately, the problem is too complex to be addressed in its full scope, so we describe only a subset of the relevant work in psycholinguistics.