The longer I think about computational linguistics, what the discipline is, what it is trying to achieve, and how far it has come by now, the more clearly I see some fundamental problems in the field. By that I don’t mean problems that are currently unsolved, e.g. because research has not advanced far enough, but rather problems which are not solvable in principle.
My reasoning is as follows:
The most fundamental feature of language is the concept of signs (German: Zeichen). Signs are used to refer to things in the world, and that is what makes them usable for communication. They always consist of two parts: one is the appearance of the sign (its form), the other is its function of referring to something (its meaning). Following Ferdinand de Saussure, who first established the theory of (linguistic) signs as a scientific discipline (he called it semiology; the field is now usually known as semiotics), these two parts are often called signifiant and signifié, i.e. “that which signifies” and “that which is signified”. Language couldn’t exist without either of these two parts: without a form, signs couldn’t be passed on between humans, and without a meaning, they would not be able to refer to anything and would therefore be worthless. Both parts are thus essential for linguistic signs, and consequently for language in general.
It follows from this that anyone who tries to model or reverse-engineer human language must be concerned with both aspects of a sign: both the signifiant and the signifié must be dealt with. The signifiant poses no problem, since it can easily be represented graphically or acoustically. With the signifié, however, we run into trouble: to model it, the device on which we are modeling the language needs to be able to deal with the concept of reference. It needs to grasp the fact that a sign points to something outside of the language, something in the real world.
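To make the asymmetry between the two parts concrete, here is a minimal sketch in Python (the Sign class and its fields are my own illustrative choices, not anyone’s established formalism):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Sign:
    """A linguistic sign in de Saussure's sense: a form paired with a meaning."""
    signifiant: str   # the form: unproblematic, it is just data (text, audio, ...)
    signifie: Any     # the meaning: what could we possibly put here?

# The form side is trivial to represent:
tree = Sign(signifiant="tree", signifie=None)

# But anything we assign to `signifie` is itself only more data inside
# the machine: a stand-in, never the actual tree out in the world.
tree.signifie = {"concept": "TREE"}
```

Whatever value ends up in signifie, it is another representation, not a referent. That is the whole problem in miniature.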
The way I see it, this makes it necessary for the device we use to model the language to have a concept of the real world and to know about its own place in that world. That, however, is the same as saying it needs a consciousness.
So can we emulate consciousness on a computer? I think not. First, philosophers are still hard at work figuring out what consciousness really is (a subdiscipline known in German as Bewusstseinsphilosophie, i.e. the philosophy of consciousness), and second, everything I am aware of in the field of artificial intelligence is still far, far away from anything like self-consciousness in machines. So as far as I can see, there is currently no way for a computer to have a consciousness of itself in this world, and it does not look likely that this will become possible in the near future, if at all.
From the fact that a computer cannot have a consciousness, it follows that a computer cannot have a concept of reference. From there, it follows that a computer cannot properly deal with linguistic signs. And if it cannot deal with linguistic signs, it cannot deal with language in general.
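Spelled out, the argument is a plain chain of implications (C, R, S, L are my own shorthand for “has a consciousness”, “has a concept of reference”, “can deal with linguistic signs”, and “can deal with language”):

```latex
\neg C \Rightarrow \neg R, \qquad
\neg R \Rightarrow \neg S, \qquad
\neg S \Rightarrow \neg L
\quad\therefore\quad
\neg C \Rightarrow \neg L
```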
That’s it. Unless I made a mistake, I have just shown that computers cannot, in principle, have real linguistic competence. Please prove me wrong if you can!
I have two more things to say about this:
(1) It may be objected that computational linguists have in fact been concerned with the meaning of linguistic signs, i.e. with semantics. Indeed, any textbook on computational linguistics has some chapters on semantics. However, if you ask me, what computational linguists call “semantics” is not actual semantics, but something else, which I find quite silly: they model some aspects of the real world inside a computer (calling these models, among other things, ontologies), and then they make the “linguistic signs” they work with point to something inside this virtual model of the world. But if you think about it, you will see that this is not real “semantics” at all, because the reference is not to the real world outside the computer, but only to the model of the real world inside the computer. The only thing achieved is a level of indirection: the linguistic signs now refer to elements of the virtual model, which in turn are supposed to point to things in the real world. The problem has merely been shifted: instead of the reference from signs to things in the real world, we now have to deal with the reference from elements of our virtual model to things in the real world. In other words, no matter how elaborate the models of the real world we build inside the computer, they cannot overcome the limitation that a computer can never mean anything in the real world.
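To see this indirection in code, consider a toy sketch (the ontology, the lexicon, and all their entries are made up purely for illustration):

```python
# A toy "ontology": a model of a tiny part of the world inside the computer.
ontology = {
    "TREE": {"is_a": "PLANT", "has_part": ["trunk", "leaves"]},
    "PLANT": {"is_a": "LIVING_THING"},
}

# A toy lexicon linking word forms to ontology entries.
lexicon = {"tree": "TREE", "Baum": "TREE"}

def meaning_of(word: str) -> dict:
    """Return what a word 'refers to' -- inside the model, that is."""
    return ontology[lexicon[word]]

print(meaning_of("tree"))
# -> {'is_a': 'PLANT', 'has_part': ['trunk', 'leaves']}
```

Note what happened: the word “tree” now points to the node "TREE", but "TREE" is itself just another symbol. How it relates to actual trees outside the computer is exactly the question we started with, pushed one level down.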
(2) I am not saying that computational linguistics is useless. I see two ways in which it can be worthwhile: (A) Practical uses. Obviously, some aspects of language can very well be modeled on the computer, and they can be used to get actual work done. For example, we have tools today which can do part-of-speech tagging, syntactic parsing or even rough translations of texts into foreign languages, and they do some of those tasks rather well (a small example follows below). But you have to realize that these tools are severely limited by the fact that they operate solely on the signifiant part of the linguistic signs, and that they lack the one thing essential for a proper treatment of language, i.e. meaning. Put differently, these tools operate on words, but they operate on dead words, and they can never hide the fact that they have no understanding of them. These tools are hacks rather than proper implementations of linguistic competence. However, I won’t question that this is enough for some purposes, some of the time. Those tools work up to a point, and people will be happy with them (and even pay you money for them) if it helps them get their work done quicker. (B) Theoretical uses. I believe that computational linguistics can be an “auxiliary discipline” for traditional linguistics, because trying to reverse-engineer a complex thing such as language will undoubtedly teach you a lot about the way it works (and maybe just as much about the way it doesn’t work). From my own experience, I know that I was not aware of much of the syntactic complexity of language before I saw how hard it is to write a parser. Keep in mind, however, that only some aspects of language can be modeled on a computer, and that a complete “implementation” cannot be the goal of such reverse engineering.
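To illustrate point (A): a part-of-speech tagger operates entirely on word forms and their distribution. With NLTK, for example (assuming the nltk package and its standard tokenizer and tagger models are installed), a minimal run looks like this:

```python
import nltk

# One-time model downloads, if not already present:
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

sentence = "The old tree by the river lost its leaves."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# Roughly: [('The', 'DT'), ('old', 'JJ'), ('tree', 'NN'), ('by', 'IN'), ...]
```

The tagger labels “tree” as a noun purely from its form and its context statistics; at no point does anything in the program refer to an actual tree. That is useful, but it is work on the signifiant alone.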
So even though we are already used to the concept of linguistically competent machines, thanks to Star Trek and the like, and even though technology evangelists have been predicting them in the real world for years, and still do today, I do not believe that we will have talking computers or robots in the near future. Actually, I believe it is quite likely that we will never have talking computers. Computers are extremely powerful tools, and they make many unlikely things possible, but that does not mean that they make everything possible. They are, after all, just that: tools.
NOTE: original comments on this post have unfortunately been lost :(