Artificial Intelligence Essay Research Paper: Artificial Intelligence (page 4)

LF [Logical Form, or SEM, semantic form] that stands between elements of an LF and these stipulated semantic values that serve to ‘interpret it’. This relation places both terms of Relation R, LFs and their semantic values, entirely within the domain of syntax, broadly conceived. . . . They are in the head.” Chomsky’s internalism goes back to the Cartesian view that all sensory input is subjective and that therefore nothing can be known outside of the mind. Language, then, cannot refer to external objects; it refers either to internal representations of them based on sensory input, or to concepts (like unicorns) that have no external source to represent. So Chomsky’s internalism and nativism allow for the syntactic phrase, in its semantic interface, “an internally

constituted perspective that can play a role in individuating, and even constructing the things of a world.” The implication for AI is that the purely syntactic symbol manipulation of a computational system’s knowledge base would suffice for it to understand natural language. The end-pursuit of “strong” AI is to model or simulate human consciousness. If syntax exists only inside a larger mental meta-syntax (rather than resting on semantics), then human consciousness is a world of signifiers, and our mental reality suffers a permanent disengagement from the signified. “It is not really the world which is known but the idea or symbol. . ., while that which it symbolizes, the great wide world, gradually vanishes into Kant’s unknowable noumena.” If we take the Chomsky/McGilvray

idea of “broad syntax” one step farther, philosophically, we find that the labyrinth of signifiers which is the syntactic mind exists in a world in which there is no concept outside the mechanisms of representation. Strangely, the post-structuralist Jacques Derrida, whom Chomsky despises, says the same thing. At the origin of language, “in the absence of a center of origin, everything became discourse. . . that is to say, when everything became a system where the central signified, the original or transcendental signified, is never absolutely present outside a system of differences. The absence of the transcendental signified extends the domain and the interplay of signification ad infinitum.” What Derrida means by a transcendental signified is the semantic,

external reality to which syntax refers. It is transcendental in that it transcends syntactic representation; it transcends the syntactic mind. The internalist view does not deny the existence of the external world; rather, when McGilvray refers to “constructing the things of the world” through language, it is the world of human consciousness to which he refers. In this theory, it is through Chomsky’s I-language, through syntax, that we construct our world. This is the essence of Chomsky’s constructivism. So we see that if we are to construct a thinking machine (or, for that matter, representations in our mind of a thinking machine), this broad syntax does significantly clarify how to go about designing a computer which can take discourse as input, remember and learn, etc.
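Such a purely syntactic discourse machine can be illustrated with a toy program: it maps input symbol strings to output symbol strings by rule lookup alone, never consulting what the symbols mean. This is only a minimal sketch of the idea; all symbols, rules, and function names below are invented for illustration, not drawn from any actual system.

```python
# A minimal sketch of purely syntactic symbol manipulation: the system
# matches the *shapes* of input symbols against a rulebook and emits the
# rule's output symbols. It has no access to meanings. All rules and
# example strings here are hypothetical, invented for illustration.

RULES = {
    ("你", "好"): ("你", "好", "吗"),  # greeting pattern -> greeting reply
    ("再", "见"): ("再", "见"),        # farewell pattern -> farewell reply
}

def syntactic_machine(symbols):
    """Apply the rulebook to a sequence of input symbols.

    If the input matches a rule's left-hand side, emit its right-hand
    side; otherwise emit a fixed fallback token. Nothing here depends
    on what any symbol refers to -- the operation is lookup alone.
    """
    return RULES.get(tuple(symbols), ("?",))

print(syntactic_machine(["你", "好"]))  # ('你', '好', '吗')
print(syntactic_machine(["喂"]))        # ('?',)
```

To an outside observer the machine appears to converse, yet by construction it manipulates only signifiers, which is precisely the situation the Chinese Room argument dramatizes.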

If, however, we realize the syntactic nature of the minds which create the machine, we can see that it is possible for a machine to think syntactically, or at least that Searle’s Chinese Room argument does not stand up, because cognition is not dependent on semantics. Thus, a thinking machine would be “a purely syntactic system” of symbols (a neural network) and algorithms for manipulating them. So we have seen that Chomsky (despite his own description of AI as “natural stupidity”) has had a profound influence upon linguistics, and thereby upon AI, since computational linguistics is central to past and future attempts to simulate the human mind. Artificial Intelligence is based on the view that the only way to prove you know the mind’s causal properties is to build it. In

its purest form, AI research seeks to create an automaton possessing human intellectual capabilities and, eventually, consciousness. No current theory of human consciousness is widely accepted, yet AI pioneers like Hans Moravec enthusiastically predict that in the next century machines will either surpass human intelligence or human beings will become machines themselves (through a process of scanning the brain into a computer). Those such as Moravec, who see the eventual result as “the universe extending to a single thinking entity” as the post-biological human race expands to the stars, base their views on the idea that the key to human consciousness is contained entirely in the physical entity of the brain. While Moravec (who is head of Robotics at