SCIENCE AND TECHNOLOGY
A gift of tongues
TOKYO
Early attempts to translate human languages by computer made the subject a laughing stock. A little humility and a lot more processing power have, however, done the trick
Jokes about the uselessness of machine translation abound. The Central Intelligence Agency was said to have spent millions trying to program computers to translate Russian into English. The best it managed to do, so the tale goes, was to turn the famous Russian saying "The spirit is willing but the flesh is weak" into "The vodka is good but the meat is rotten". Sadly, this story is a myth. But machine translation has certainly produced its share of howlers. Since its earliest days, the subject has suffered from exaggerated claims and impossible expectations.
Hype still exists. But Japanese researchers, perhaps spurred on by the linguistic barrier that often seems to separate their country's scientists and technicians from those in the rest of the world, have made great strides towards the goal of reliable machine translation, and now their efforts are being imitated in the West.
Until recently, the main commercial users of translation programs have been big Japanese manufacturers. They rely on machine translation to produce the initial drafts of their English manuals and sales material. (This may help to explain the bafflement many western consumers feel as they leaf through the instructions for their video recorders.) The most popular program for doing this is E-JBank, which was designed by Nobuaki Kamejima, a reclusive software wizard at AI Laboratories in Tokyo. Now, however, a bigger market beckons. The explosion of foreign languages (especially Japanese and German) on the Internet is turning machine translation into a mainstream business. The fraction of web sites posted in English has fallen from 98% to 82% over the past three years, and the trend is still downwards. Consumer software, some of it written by non-Japanese software houses, is now becoming available to interpret this electronic Babel to those who cannot read it.
Enigma variations
Machines for translating from one language to another were first talked about in the 1930s. Nothing much happened, however, until 1946, when an American mathematician called Warren Weaver became intrigued with the way the British had used their pioneering Colossus computer to crack the military codes produced by Germany's Enigma encryption machines. In a memo to his employer, the Rockefeller Foundation, Weaver wrote: "I have a text in front of me which is written in Russian, but I am going to pretend that it is really written in English and that it has been coded in some strange symbols. All I need to do is strip off the code in order to retrieve the information contained in the text."
The earliest translation engines were all based on this direct, so-called "transformer", approach. Input sentences of the source language were transformed directly into output sentences of the target language, using a simple form of parsing. The parser did a rough analysis of the source sentence, dividing it into subject, object, verb, etc. Source words were then replaced by target words selected from a dictionary, and their order rearranged so as to comply with the rules of the target language.
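As a rough illustration, a direct transformer engine amounts to little more than the sketch below: a word-for-word dictionary substitution followed by a crude reordering rule. The dictionary entries and the single English-to-Japanese reordering rule are invented for the example; this is not any real product's code.

```python
# A minimal sketch of the direct "transformer" approach: look each source
# word up in a bilingual dictionary, then apply one crude reordering rule.
# The tiny dictionary and the SVO -> SOV rule are illustrative assumptions.

DICTIONARY = {
    "i": "watashi-wa",
    "read": "yomu",
    "books": "hon-o",
}

def naive_translate(sentence: str) -> str:
    """Word-for-word substitution followed by a single reordering rule."""
    words = sentence.lower().rstrip(".").split()
    glosses = [DICTIONARY.get(w, f"<{w}?>") for w in words]  # flag unknown words
    # Crude structural rule: assume English subject-verb-object and move the
    # verb to the end to approximate Japanese subject-object-verb order.
    if len(glosses) == 3:
        subject, verb, obj = glosses
        glosses = [subject, obj, verb]
    return " ".join(glosses)

print(naive_translate("I read books."))   # -> "watashi-wa hon-o yomu"
```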
It sounds simple, but it wasn't. The problem with Weaver's approach was summarised succinctly by Yehoshua Bar-Hillel, a linguist and philosopher who wondered what kind of sense a machine would make of the sentence "The pen is in the box" (the writing instrument is in the container) and the sentence "The box is in the pen" (the container is in the [play]pen).
Humans resolve such ambiguities in one of two ways. Either they note the context of the preceding sentences, or they infer the meaning in isolation by knowing certain rules about the real world: in this case, that boxes are bigger than pens (writing instruments) but smaller than pens (playpens), and that bigger objects cannot fit inside smaller ones. The computers available to Weaver and his immediate successors could not possibly have managed that.
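Expressed as code, a single piece of such real-world knowledge might look something like the sketch below; the sense labels and size rankings are invented for the example, and a real system would need a vast number of such rules.

```python
# A minimal sketch of the "real-world knowledge" rule described above: a
# container must be larger than whatever it holds, so "the box is in the pen"
# forces the playpen sense of "pen". Sense names and ranks are assumptions.

SIZE_RANK = {            # assumed relative sizes; larger number = bigger object
    "pen/writing-instrument": 1,
    "box": 2,
    "pen/playpen": 3,
}

def resolve_pen(other_object: str, pen_is_container: bool) -> str:
    """Pick the sense of 'pen' that is consistent with the containment relation."""
    senses = ["pen/writing-instrument", "pen/playpen"]
    if pen_is_container:
        # "The box is in the pen": the pen must outrank the thing inside it.
        return next(s for s in senses if SIZE_RANK[s] > SIZE_RANK[other_object])
    # "The pen is in the box": the pen must be smaller than its container.
    return next(s for s in senses if SIZE_RANK[s] < SIZE_RANK[other_object])

print(resolve_pen("box", pen_is_container=True))   # -> "pen/playpen"
print(resolve_pen("box", pen_is_container=False))  # -> "pen/writing-instrument"
```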
But modern computers, which have more processing power and more memory, can. Their translation engines are able to adopt a less direct approach, using what is called "linguistic knowledge". It is this that has allowed Mr Kamejima to produce E-JBank, and has also permitted NeocorTech of San Diego to come up with Tsunami and Typhoon, the first Japanese-language-translation software to run on the standard (English) version of Microsoft Windows.
Linguistic-knowledge translators have two sets of grammatical rules: one for the source language and one for the target. They also have a lot of information about the idiomatic differences between the languages, to stop them making silly mistakes.
The first set of grammatical rules is used by the parser to analyse an input sentence ("I read The Economist every week"). The sentence is resolved into a tree that describes the structural relationship between the sentence's components ("I" [subject], "read" [verb], "The Economist" [object] and "every week" [phrase modifying the verb]). Thus far, the process is like that of a Weaver-style transformer engine. But then things get more complex. Instead of working to a pre-arranged formula, a generator (ie, a parser in reverse) is brought into play to create a sentence structure in the target language. It does so using a dictionary and a comparative grammar: a set of rules that describes the difference between each sentence component in the source language and its counterpart in the target language. Thus a bridge to the second language is built on deep structural foundations.
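In outline, the pipeline looks something like the sketch below: a parser builds a structural tree, and a separate generator rebuilds a target sentence from that tree using a comparative-grammar rule and a dictionary. The toy grammar, the Japanese glosses and all the names are assumptions made for the example, not the workings of any actual engine.

```python
# A minimal sketch of a linguistic-knowledge pipeline: parse to a tree,
# then generate from the tree with a comparative-grammar rule.

from dataclasses import dataclass

@dataclass
class ParseTree:
    subject: str
    verb: str
    obj: str
    modifier: str = ""          # e.g. "every week"

def parse_english(sentence: str) -> ParseTree:
    """Toy parser: assumes a fixed Subject-Verb-Object [Modifier] pattern."""
    words = sentence.rstrip(".").split()
    return ParseTree(subject=words[0], verb=words[1],
                     obj=" ".join(words[2:4]), modifier=" ".join(words[4:]))

GLOSSES = {"I": "watashi-wa", "read": "yomu",
           "The Economist": "The Economist-o", "every week": "maishuu"}

def generate_japanese(tree: ParseTree) -> str:
    """Generator applying one comparative-grammar rule: English S-V-O-[Mod]
    corresponds to Japanese S-[Mod]-O-V."""
    parts = [tree.subject, tree.modifier, tree.obj, tree.verb]
    return " ".join(GLOSSES.get(p, p) for p in parts if p)

tree = parse_english("I read The Economist every week.")
print(generate_japanese(tree))  # -> "watashi-wa maishuu The Economist-o yomu"
```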
Apart from being much more accurate, such linguistic-knowledge engines should, in theory, be reversible: you should be able to work backwards from the target language to the source language. In practice, there are a few catches which prevent this from happening as well as it might, but the architecture does at least make life easier for software designers trying to produce matching pairs of programs. Tsunami (English to Japanese) and Typhoon (Japanese to English), for instance, share much of their underlying programming code.
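The point about shared code can be suggested with a very small sketch: the same table of correspondences (word pairs here, but grammar rules in a real engine) can be read in either direction, so two opposite-facing translators need store it only once. The word pairs below are illustrative assumptions, not Tsunami or Typhoon data.

```python
# A minimal sketch of reversibility: one bilingual table serves both directions.

PAIRS = [("dog", "inu"), ("cat", "neko"), ("book", "hon")]

EN_TO_JA = dict(PAIRS)
JA_TO_EN = {ja: en for en, ja in PAIRS}      # the same data, read backwards

def translate(word: str, table: dict) -> str:
    return table.get(word, f"<{word}?>")     # flag anything not in the table

print(translate("dog", EN_TO_JA))   # -> "inu"
print(translate("inu", JA_TO_EN))   # -> "dog"
```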
Having been designed from the start for use on a personal computer rather than a powerful workstation or even a mainframe, Tsunami and Typhoon use memory extremely efficiently. As a result, they are blindingly fast on the latest PCs, translating either way at speeds of more than 300,000 words an hour. Do they produce perfect translations at the click of a mouse? Not by a long shot. But they do come up with surprisingly good first drafts for expert translators to get their teeth into. One mistake that the early researchers made was to imagine that nothing less than flawless, fully automated machine translation would suffice. With more realistic expectations, machine translation is, at last, beginning to thrive.
© 1997 The Economist Newspaper Limited. All rights reserved.