Sixth Language Technologies Conference

IS-LTC 2008

October 15–17, 2008
Ljubljana
Slovenia

Invited Talks


Tanja Schultz
University of Karlsruhe and Carnegie Mellon University
Rapid Deployment of Speech Processing Systems to New Languages and Domains


The performance of speech and language processing technologies has improved dramatically over the past decade, and an increasing number of systems are being deployed in a wide variety of applications, such as spoken dialog systems, speech summarization and information retrieval systems, and speech translation systems. Most efforts to date have focused on a very small number of languages with large numbers of speakers, economic potential, and strong information technology needs. However, speech technology has much to contribute even to languages that do not fall into this category. Languages with few speakers and few linguistic resources may suddenly become of interest for humanitarian or military reasons. Furthermore, a large number of languages are in danger of becoming extinct, and ongoing projects for preserving them could benefit from speech technology.

With more than 6,900 languages in the world and the need to support multiple input and output languages, the most important challenge today is to port speech processing systems to new languages rapidly and at reasonable cost. The major bottlenecks are the lack of data, the lack of language conventions, and the gap between technology and language expertise. The lack of data results from the fact that today's speech technologies rely heavily on statistically based modeling schemes, such as Hidden Markov Models and n-gram language modeling. Although statistical modeling algorithms are mostly language-independent and have proven to work well for a variety of languages, parameter estimation requires vast amounts of training data. Large-scale data resources are currently available for fewer than 50 languages, and the costs of such collections are prohibitive for all but the most widely spoken and economically viable languages. In addition, a surprisingly large number of languages and dialects lack a standardized writing system, which hinders the web harvesting of large text corpora and the construction of dictionaries and lexicons. Last but not least, despite the well-defined system-building process, handling language-specific peculiarities is very costly and time-consuming, and it requires substantial language expertise. Unfortunately, it is extremely difficult to find system developers who have both the necessary technical background and significant insight into the language in question. Consequently, one of the central issues in developing systems for many input and output languages is bridging the gap between language and technology expertise. In my talk I will introduce state-of-the-art techniques for rapid language adaptation and present existing solutions to the persistent problems of data sparseness and the gap between language and technology expertise. I will describe the process of building speech recognition and speech synthesis components for new, unsupported languages and introduce tools for doing this rapidly and at low cost.
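To make the data dependence concrete, here is a minimal sketch of maximum-likelihood bigram language-model estimation over a hypothetical toy corpus (the sentences, boundary markers, and tokenization are illustrative assumptions, not part of any particular system): even with a perfectly language-independent algorithm, most bigrams remain unseen until the training data grows by orders of magnitude.

```python
from collections import Counter

# Hypothetical toy corpus; real systems are trained on millions of words,
# which is exactly the resource that under-resourced languages lack.
corpus = [
    "speech technology helps many languages",
    "many languages lack speech data",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])             # histories for P(w2 | w1)
    bigrams.update(zip(tokens, tokens[1:]))

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1). Unseen bigrams get
    probability zero, which is why real systems need smoothing and far
    more data than this toy example provides."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0

print(bigram_prob("many", "languages"))   # 1.0 (seen in both sentences)
print(bigram_prob("languages", "helps"))  # 0.0 (never observed)
```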

The talk describes the SPICE Toolkit (Speech Processing - Interactive Creation and Evaluation), a web-based toolkit for rapidly adapting speech processing systems to new languages. The methods and tools implemented in SPICE enable users to develop speech processing components, to collect appropriate data for building the underlying models, and to evaluate the results, allowing for iterative improvement. Building on existing projects such as GlobalPhone and FestVox, knowledge and data are shared between recognition and synthesis; this includes phone sets, pronunciation dictionaries, acoustic models, and text resources. SPICE is an online service (http://cmuspice.org). By archiving the data gathered on the fly from many cooperating users, we hope to significantly increase the repository of languages and resources and to make the data and components for new languages available to the community at large. By keeping users in the development loop, the SPICE tools can learn from their expertise to constantly adapt and improve. We hope this will revolutionize the system development process for new languages.
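As one illustration of the kind of component involved, here is a minimal sketch of bootstrapping a pronunciation lexicon from a grapheme-to-phoneme table; the mapping table, the example words, and the greedy matching strategy are illustrative assumptions, not SPICE's actual implementation, which elicits such knowledge interactively from its users.

```python
# Hypothetical grapheme-to-phoneme table for a language with a fairly
# phonemic orthography.
G2P = {"sh": "SH", "ch": "CH", "a": "AH", "e": "EH", "i": "IY",
       "o": "OW", "u": "UW", "b": "B", "k": "K", "s": "S", "t": "T"}

def to_phones(word):
    """Greedy longest-match grapheme-to-phoneme conversion."""
    phones, i = [], 0
    while i < len(word):
        for length in (2, 1):       # try digraphs before single letters
            chunk = word[i:i + length]
            if chunk in G2P:
                phones.append(G2P[chunk])
                i += length
                break
        else:
            i += 1                  # skip graphemes not in the table
    return phones

# Bootstrap a miniature pronunciation lexicon.
lexicon = {w: to_phones(w) for w in ["shut", "chat", "bus"]}
print(lexicon)  # {'shut': ['SH', 'UW', 'T'], 'chat': ['CH', 'AH', 'T'], ...}
```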


Marco Baroni
Center for Mind/Brain Sciences, University of Trento, Italy
Distributional Semantics: From Ad Hoc Solutions to Persistent Models


The past 15 years have seen the development of several general-purpose corpus-based semantic models (CSMs), i.e., computational systems that learn aspects of semantic representation from patterns of co-occurrence in large-scale linguistic corpora. Pioneering CSMs such as HAL and LSA, and those that followed in their footsteps, have achieved impressive results in simulations of a variety of semantic knowledge tasks.
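As a minimal illustration of the general idea (not of HAL or LSA specifically), the sketch below builds sparse co-occurrence vectors from a hypothetical toy corpus and compares words by cosine similarity; the corpus, the window size, and the use of raw counts rather than weighted association scores are all illustrative assumptions.

```python
import math
from collections import defaultdict

# Hypothetical miniature corpus; real CSMs are trained on corpora of
# many millions of words.
tokens = "the car drives fast the truck drives slowly we drive the car".split()

WINDOW = 2  # context words counted on each side of the target

# Build sparse co-occurrence count vectors.
vectors = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(tokens):
    for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
        if i != j:
            vectors[word][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# "car" and "truck" share contexts such as "the" and "drives".
print(cosine(vectors["car"], vectors["truck"]))
```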

In this talk, after a short introduction to the basic ideas behind CSMs, I will discuss three major issues that, if solved, would greatly broaden their appeal as cognitive models, as well as their utility as resources for language technologies:

  1. Current CSMs provide a single measure of similarity among words; different semantic tasks, however, highlight different facets of semantics ("car" and "truck" are similar in a different way than "car" and "drive" are)
  2. CSMs still fail to account for compositionality: they will typically find the sentences "pandas eat bamboo" and "bamboo eats pandas" equally likely (the sketch after this list makes this concrete)
  3. Since for CSMs, as for other corpus-based systems, "more data is better data", it is difficult to adapt them to domains where they would be extremely useful but where little data is available: historical and literary texts, "less-studied" languages, etc.
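The following minimal sketch makes issue 2 concrete under the common assumption of additive, bag-of-words composition; the pre-lemmatized tokens are a hypothetical input chosen so that word order is the only difference between the two sentences.

```python
from collections import Counter

def sentence_vector(lemmas):
    """Additive, bag-of-words composition: sum the word counts,
    discarding word order entirely."""
    return Counter(lemmas)

# Lemmatized tokens (an illustrative assumption).
v1 = sentence_vector(["panda", "eat", "bamboo"])
v2 = sentence_vector(["bamboo", "eat", "panda"])
print(v1 == v2)  # True: the model cannot tell who eats whom
```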

Besides pointing out the problems, I will also try to sketch some partial solutions.

