military-funded effort of Warren Weaver, who saw Russian as English coded in some “strange symbols.” His method of machine translation relied on an automatic dictionary and a grammar reference to rearrange word equivalents. But, as Chomsky made clear, language syntax is much more than lexicon plus grammatical word order, and Weaver’s translations were profoundly inaccurate. Contrary to the speculations made at the dawn of the AI age (the 1950s and 60s), the most complex human capabilities have proven simple for machines, while the simplest things human children do almost mindlessly, such as tying shoes, acquiring language, or learning itself, have proven the most difficult, if not impossible.
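To see why this dictionary-and-reorder approach was bound to fail, consider a minimal sketch of it (the tiny Russian-English lexicon and the example sentence below are invented for illustration; this is a schematic reconstruction, not Weaver’s actual system):

```python
# A minimal, hypothetical sketch of dictionary-based translation in the
# spirit of Weaver's approach: substitute word equivalents from a bilingual
# lexicon, with no syntactic or semantic analysis at all.

LEXICON = {
    "время": "time",
    "летит": "flies",
    "быстро": "quickly",
}

def translate(sentence: str) -> str:
    """Word-for-word substitution; falls back to the original token
    when the lexicon has no entry."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

print(translate("Время летит быстро"))  # -> "time flies quickly"
```

The substitution succeeds only because the example is trivial; the moment a word has several senses, or the two languages order their phrases differently, a lookup table has no principled way to choose, which is exactly the gap Chomsky’s point about syntax exposes.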

Numerous computer language-modeling programs have been created; their details are not essential to this paper, but none has yet come close to passing the Turing Test. Much of the difficulty arises from linguistic anomalies like the ambiguities mentioned above, as in the old AI adage “time flies like an arrow; fruit flies like a banana.” Early language programs, like Joseph Weizenbaum’s ELIZA (which was able to convince adults that they were receiving genuine psychotherapy through a cleverly designed Rogerian system of asking “leading questions” and rephrasing important bits of input), had nothing to do with the modeling of language. Rather, they were programmed to respond to input with a variable output of canned speech, with no generative grammatical or lexical capability.
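A hedged sketch of how such a program operates may be useful. The few patterns below are invented for illustration and only gesture at Weizenbaum’s much larger script, but they show the basic mechanism: match the input against templates, reflect the pronouns, and hand back a leading question:

```python
import re

# Toy ELIZA-style responder: pattern-match the input, swap first-person
# words for second-person ones, and rephrase as a Rogerian question.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Reflect pronouns in a captured fragment ('my' -> 'your', etc.)."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default leading prompt when nothing matches

print(respond("I feel anxious about my thesis"))
# -> "Why do you feel anxious about your thesis?"
```

Nothing here models grammar or meaning; the program’s apparent understanding is entirely a product of the canned templates.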

Early work in computational linguistics, under Chomsky’s influence, attempted to model sentences by syntax alone, hoping that if this succeeded, the semantics could be worked out subsequently, and only once, for the deep structure. However, as Chomsky showed much later, semantics is part of syntax (the most important part), and therefore cannot be dealt with post-syntactically. Unsurprisingly, the only linguistic area where computers have thus far shown considerable ability is the one humans find most difficult, whereas the simplest human linguistic abilities remain elusive. Recursive, left- or right-branching sentences such as “The monkey that the lion who had eaten the zebra wouldn’t eat ate the banana” have an infinite capacity for embedding, allowing the computer’s vastly superior memory to be more effective in parsing them.
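A short sketch makes the point concrete. The grammar fragment below (vocabulary invented for illustration) generates a noun phrase with any number of center-embedded relative clauses; the recursion handles depth two as easily as depth twenty, while human listeners lose track after one or two levels:

```python
# Recursive generation of center-embedded relative clauses: each level
# nests another object-relative clause inside the noun phrase.

NOUNS = ["the monkey", "the lion", "the zebra"]
VERBS = ["ate", "chased", "saw"]

def embedded_np(depth: int) -> str:
    """Build a noun phrase with `depth` center-embedded relative clauses."""
    noun = NOUNS[depth % len(NOUNS)]
    if depth == 0:
        return noun
    return f"{noun} that {embedded_np(depth - 1)} {VERBS[depth % len(VERBS)]}"

for depth in (1, 2):
    print(embedded_np(depth) + " ate the banana.")
# depth 1: "the lion that the monkey chased ate the banana."
# depth 2: "the zebra that the lion that the monkey chased saw ate the banana."
```

Each additional embedding only pushes one more frame onto the machine’s stack, which is precisely where its superior memory tells.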

Given that Chomsky’s original breakthroughs (those of Syntactic Structures and his work in the 1960s) had a profound impact on Artificial Intelligence, the remainder of this paper will speculate on the potential impact of his minimalist program and on the nature of what I will call the “syntactic mind.” The premise of the argument is presented by SUNY Professor William Rapaport in his essay “How to Pass a Turing Test: Syntactic Semantics, Natural Language Understanding, and First Person Cognition,” as a rebuttal to John Searle’s Chinese Room argument, which Rapaport summarizes as: “1) Computer programs are purely syntactic. 2) Cognition is semantic. 3) Syntax alone is not sufficient for semantics. 4) Therefore, no purely syntactic computer program can exhibit semantic cognition.”

Rapaport responds by rejecting premise 3: syntax, he argues, is sufficient for semantics, and if one accepts that, a purely syntactic computer program can exhibit semantic cognition; in other words, if semantics can be incorporated into syntax, then a computer program can simulate the cognitive mind. This is a bold claim, so let us see how it is derived from Chomsky’s work. Syntax is defined as the relations among a set of markers (Rapaport refrains from calling them symbols, since “symbol” implies an inherent connection to an external object), and semantics as the relations between the system of markers and “other things” (their meanings).
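Before following the argument further, a schematic rendering of these definitions may help (this is my own illustration, not Rapaport’s formalism; the marker and meaning names are invented):

```python
# Syntax: relations within a set of markers. Semantics: relations from
# markers to "other things." Merging the two sets turns the semantic
# relation into one more relation internal to a single, larger set --
# that is, into syntax.

markers = {"'tree'", "'dog'"}                # linguistic markers
meanings = {"TREE-concept", "DOG-concept"}   # internal representations

# Semantic relation: marker -> meaning (crosses the two sets).
semantics = {("'tree'", "TREE-concept"), ("'dog'", "DOG-concept")}

# After the merge, every pair in `semantics` relates members of ONE set,
# so formally it has the same status as a syntactic relation.
merged = markers | meanings
assert all(a in merged and b in merged for a, b in semantics)
print("semantic relation is now internal, i.e. syntactic:", semantics)
```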

His argument claims that if the set of markers is merged with the set of meanings, the resulting set is a new set of markers, a sort of meta-syntax. The mechanism that the symbol-user (the native speaker) employs to understand the relation between the old and new markers is a syntactic one. The simplest way to put this is that semantics must be understood syntactically and is therefore a form of syntax. The crux of the argument is that a word (for example, tree) does not signify an actual external tree-object, but rather the internal representation tree found in the mind. This idea goes back to Chomsky’s Lectures on Government and Binding, where he introduces “Relation R,” elucidated by James McGilvray as “reference, but without the idea that reference relates an