virtual machine, representing the system being simulated.” Research with the goal of imitation is called “weak AI,” and research with the goal of simulation is called “strong AI.” And so, as set forth by Chomsky, it is the goal of computational linguistics to create a mathematical model of a native speaker’s understanding of his language, just as it is the goal of AI to create a mathematical model of the mind as a whole. The analogy is imbalanced in that computational linguistics is not a separate discipline, but rather could very well be the key to AI. In addition, the relationships between computational linguistics and linguistics, or between AI and cognitive psychology (or the philosophy of mind), are not relationships of dependence of one upon the other, but of interdependence. If AI researchers were to create a functional model of the human mind in a machine, this would provide (perhaps all-encompassing) insight into the nature of the human mind, just as a complete understanding of the human mind would allow for computational modeling. Understanding the interrelatedness of these fields is essential, because in the end it will most likely be through a synthesis of work in the various fields that progress will be made.

To return to the specifics of computational linguistics, we see that while Chomsky’s work was largely responsible for spawning the modern field, the idea of natural language “understanding” (more on this below) has been intricately tied to AI since Alan Turing posed his “Turing Test” in 1950 (which, incidentally, he predicted would be passed by the year 2000). This test, which would supposedly determine that a machine had attained “intelligence,” is essentially that a computer should be able to converse in a natural language well enough to convince an interrogator that he is talking to a human being. Yet, as we discussed above, there is a great difference between a computer so extensively programmed as to be able to imitate linguistic ability (which in itself has thus far proven extremely difficult, if not impossible) or another conscious cognitive function, and one which simulates it. For example, a computer voice-recognition system (one far more perfected than those available today) which has advanced pattern-recognition abilities and can respond to any natural-language vocal command with the proper action would still not be said to understand language.
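To make the structure of the test concrete, the following is a minimal sketch in Python of the imitation-game setup Turing described: an interrogator converses over text alone with two hidden respondents and must decide which is the machine. The human_respondent and machine_respondent functions are invented placeholders, not part of any real test harness.

    import random

    # Hypothetical stand-ins for the two hidden parties in the imitation game.
    # A real test would wire these to a person at a terminal and to the program
    # under evaluation; both functions are placeholders for illustration only.
    def human_respondent(question: str) -> str:
        return input(f"(hidden human, answering {question!r}) > ")

    def machine_respondent(question: str) -> str:
        return "I would rather not answer that."  # any candidate program goes here

    def imitation_game(questions):
        """One round of a Turing-style test: the interrogator sees only text from
        respondents 'A' and 'B' and must say which of the two is the machine."""
        pair = [("human", human_respondent), ("machine", machine_respondent)]
        random.shuffle(pair)                      # hide which label is which
        assignment = {"A": pair[0], "B": pair[1]}

        for question in questions:
            print(f"\nInterrogator: {question}")
            for label, (_, respond) in assignment.items():
                print(f"  {label}: {respond(question)}")

        guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
        truth = next(label for label, (kind, _) in assignment.items() if kind == "machine")
        print("Correct." if guess == truth else "Fooled: the machine passed this round.")

    if __name__ == "__main__":
        # The two questions echo the sample dialogue in Turing's 1950 paper.
        imitation_game(["Please write me a sonnet on the subject of the Forth Bridge.",
                        "Add 34957 to 70764."])

Any program whatsoever can be dropped into machine_respondent; the test says nothing about how the answers are produced, which is exactly the gap between imitation and simulation discussed above.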

The true sign of AI would be a computer that possessed a generative grammar: the ability to learn and to use language creatively. This may not actually be possible, and Chomsky would be the first to argue that it is not, yet an examination of his more recent work in the minimalist program shows some strands of thought whose implications lie far outside his rationalist heritage and which could be important to AI in the future. Attempts at language understanding in computers before Chomsky were limited to trials like the military-funded effort of Warren Weaver, who saw Russian as English coded in some “strange symbols.” His method of computer translation relied on an automatic dictionary and grammar reference to rearrange the word equivalents.
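As a rough illustration of this kind of word-for-word approach, here is a minimal sketch in Python; the tiny lexicon and the example sentences are invented for the example and are not Weaver’s actual tables. Each word is simply replaced by a dictionary equivalent in source order.

    # A toy translator in the spirit of the early word-for-word systems: each word
    # is replaced by a dictionary equivalent in source order. The lexicon and the
    # example sentences are invented for illustration only.
    LEXICON = {
        "мне": "to me",
        "нравится": "is pleasing",
        "музыка": "music",
        "я": "I",
        "вижу": "see",
        "кошку": "cat",
    }

    def translate_word_for_word(sentence: str) -> str:
        """Replace each word by its dictionary equivalent, keeping the source order."""
        return " ".join(LEXICON.get(word, f"[{word}?]") for word in sentence.lower().split())

    if __name__ == "__main__":
        # "Мне нравится музыка" means "I like music", but the word-for-word output
        # is "to me is pleasing music": the lexicon is right, the syntax is lost.
        print(translate_word_for_word("Мне нравится музыка"))
        print(translate_word_for_word("Я вижу кошку"))  # -> "I see cat" (article lost)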

But, as Chomsky made very clear, language syntax is much more than lexicon and grammatical word order, and Weaver’s translations were profoundly inaccurate. Contrary to researchers’ original speculations at the dawn of the AI age (the 1950s and 60s), the most complex human capabilities have proven simple for machines, while the simplest things human children do almost mindlessly, such as tying shoes, acquiring language, or learning itself, have proven the most difficult (if not impossible). Numerous computer language-modeling programs have been created, the details of which are not essential to the topic of this paper and will not be delved into here, yet none has yet come close to passing the Turing Test.

Much difficulty arises from linguistic anomalies like the ambiguities mentioned above, as in the old AI adage “time flies like an arrow; fruit flies like a banana.” The early language programs, like Joseph Weizenbaum’s ELIZA (which was able to convince adult human beings that they were receiving genuine psychotherapy through a cleverly designed Rogerian system of asking “leading questions” and rephrasing important bits of entered data), had nothing to do with the modeling of language. Rather, they were programmed to respond to input with a variable output of pre-designed speech, with no generative grammatical or lexical capability. Early work in computational linguistics, under Chomsky’s influence, attempted to model sentences by syntax alone, hoping that if this worked, the semantics could be worked out
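To make the contrast concrete, here is a minimal sketch in Python of the pattern-and-response design described above for ELIZA. The patterns and canned replies are invented for illustration and are not Weizenbaum’s actual script: input is matched against fixed templates and fragments of it are echoed back as leading questions, with no grammar or model of meaning involved.

    import random
    import re

    # Invented Rogerian-style rules: each maps a regular expression over the user's
    # input to canned replies that echo a captured fragment back as a leading
    # question. There is no grammar, no lexicon, and no model of meaning here.
    RULES = [
        (re.compile(r"\bi need (.+)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (re.compile(r"\bmy (mother|father|family)\b", re.I),
         ["Tell me more about your {0}."]),
    ]
    DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

    def respond(utterance: str) -> str:
        """Return a canned, lightly rephrased reply to a single line of input."""
        for pattern, replies in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(replies).format(*match.groups())
        return random.choice(DEFAULTS)

    if __name__ == "__main__":
        print(respond("I am unhappy about my job"))   # e.g. "Why do you say you are unhappy about my job?"
        print(respond("My mother worries too much"))  # "Tell me more about your mother."

Notice that this toy version simply reuses the user’s own words (even leaving “my job” unchanged where a person would say “your job”); the appearance of attentiveness comes entirely from the fixed patterns, not from any representation of what was said.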