The TC-Star project aims at providing reliable speech-to-speech automatic translation. To do so, it proposes a three-step pipeline: automatic speech recognition (ASR), spoken language translation (SLT), and speech synthesis, or text-to-speech (TTS). It is a noble goal, but ultimately bound to go no further than our current translation technologies. The reason is that most of the time people do not know exactly what they mean by what they say; not infrequently, their words are pure noise.
To understand this, you must realize that language is not a meaning-conveying tool but a violence-applying tool. In other words, people do not speak to express what they think, but to get what they want.
Therefore, to translate a given text by computer, you do not need a comprehension mechanism (which, though complicated, can be abstracted); you need a bullshit detection algorithm. And if such a thing were possible, it would be even more valuable than an auto-translator!
This would be similar in function to a noise reduction filter that works on phrases. Imagine removing from every sentence everything but the nouns and verbs and translating only that; it would be something like that. Notice, though, that the real problem is not grammar-noise (bullshit that can be spotted through the laws of the language) but relevance-noise, which implies that there is always a non-computable component.
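As a toy illustration of the grammar-noise filter just described, the sketch below keeps only nouns and verbs and drops the rest. The tiny part-of-speech table is entirely hypothetical; a real system would need a statistical tagger per language, and, as noted above, no such filter can catch relevance-noise.

```python
# Hypothetical miniature part-of-speech lexicon, for illustration only.
POS = {
    "the": "DET", "a": "DET", "dog": "NOUN", "quite": "ADV",
    "suddenly": "ADV", "bit": "VERB", "poor": "ADJ", "mailman": "NOUN",
}

def strip_noise(sentence: str) -> str:
    """Keep only the words tagged NOUN or VERB, in their original order."""
    kept = [w for w in sentence.lower().split()
            if POS.get(w) in ("NOUN", "VERB")]
    return " ".join(kept)

print(strip_noise("The dog quite suddenly bit the poor mailman"))
# → "dog bit mailman"
```

Everything the filter discards here is grammar-noise; deciding whether "suddenly" was actually the point of the sentence is the non-computable part.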
So, even with extreme advances in ASR and TTS, the middle step of text-to-text translation will never work fully automatically.
As I proposed in interlingo, there is an alternative approach, but it subverts the rules of this game. It does not exactly make the translation problem go away; it just provides tools whereby it becomes manageable.
Concisely, the proposal is to develop a language renderer that outputs a text in any previously “incorporated” language, given a text in a metalanguage (called interlingo) specifically designed to allow such rendering.
The translation is still made by a human, but once made, the output can be generated automatically for any language the system knows.
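The renderer idea can be sketched minimally: an interlingo statement is an unambiguous structure, and each incorporated language contributes a rendering rule. Everything below (the statement schema, the lexicon entries, the two languages) is invented for illustration; the original text does not specify interlingo's actual form.

```python
# A hypothetical interlingo statement: explicit roles, explicit tense.
STATEMENT = {"subject": "child", "action": "eat", "object": "apple",
             "tense": "past"}

# Each "incorporated" language supplies a lexicon and a word order.
LANGUAGES = {
    "en": {"child": "the child", "eat": {"past": "ate"}, "apple": "the apple",
           "order": ("subject", "action", "object")},   # SVO
    "ja": {"child": "kodomo wa", "eat": {"past": "tabeta"}, "apple": "ringo o",
           "order": ("subject", "object", "action")},   # SOV
}

def render(statement: dict, lang: str) -> str:
    """Render one interlingo statement into one incorporated language."""
    lex = LANGUAGES[lang]
    words = []
    for role in lex["order"]:
        entry = lex[statement[role]]
        # Verb entries carry tense forms; everything else is a plain string.
        words.append(entry[statement["tense"]] if isinstance(entry, dict) else entry)
    return " ".join(words)

print(render(STATEMENT, "en"))  # → "the child ate the apple"
print(render(STATEMENT, "ja"))  # → "kodomo wa ringo o tabeta"
```

The point of the sketch is the asymmetry the essay describes: the hard, human work is producing the unambiguous STATEMENT once; adding an output language is then just another entry in LANGUAGES.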
To avoid the need for a translator, using interlingo, we can develop a system capable of ASR, which would then propose to the speaker, hopefully in real time, the possible conversions of their phrase into interlingo. The alternatives would be presented in inter-speak (a version of interlingo readable in the speaker's language), schematically, on an interactive display; the speaker chooses among the options and thereby determines one specific translation.
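The interaction loop above amounts to: the ASR output maps to several candidate interlingo readings, and the speaker's choice fixes one. A minimal sketch, with a hypothetical candidate table standing in for the real conversion step:

```python
# Hypothetical table mapping one ambiguous ASR transcript to its
# candidate interlingo readings (the classic "I saw her duck").
CANDIDATES = {
    "i saw her duck": [
        {"subject": "I", "action": "see:past", "object": "her-duck(animal)"},
        {"subject": "I", "action": "see:past", "event": "she-ducks(motion)"},
    ],
}

def propose(asr_text: str) -> list:
    """Return the candidate interlingo readings for an ASR transcript."""
    return CANDIDATES.get(asr_text.lower(), [])

def choose(asr_text: str, index: int) -> dict:
    """The speaker's pick (made via the display) fixes one reading."""
    return propose(asr_text)[index]

print(choose("I saw her duck", 0))
# → {'subject': 'I', 'action': 'see:past', 'object': 'her-duck(animal)'}
```

In the real system the candidates would be shown in inter-speak rather than as raw structures, but the shape of the exchange is the same: the machine enumerates, the speaker disambiguates.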
The experience of using such a system would be akin to having your phrases simplified for you, or (for a more perceptive user) having the connotations of your words erased. What would actually be happening is a filtering of the text against the fuzziness of its concepts.
Needless to say, really good speakers would not find the system pleasant to use. Their talents of expression would be severely impaired, even if they were capable of speaking both the input and output languages. Such a person would have an instinctive feel for the issues interlingo deals with, and its lowest-common-denominator approach might feel uncomfortable, or outright wrong, at times.
Nevertheless, interlingo can provide a consistent, reliable basis for communication across language barriers, and therefore has the potential to be a priceless tool in our current effort to foster worldwide cooperation and cross-cultural understanding.