The Future of Computers


The era of first-generation computers began around 1945 and ended around 1957. These machines featured components such as vacuum tubes and drum memories, and they had to be programmed in machine code (the 1's and 0's that computers understand). Second-generation computers began around 1957 and ended around 1963. They used transistors instead of vacuum tubes and magnetic-core memories, and a higher-level style of programming, assembly language, was introduced. Third-generation computers began around 1963 and ended around 1971. They included integrated circuits, semiconductor memory, magnetic disk storage, and virtual memory. Microcomputers, operating systems, and time-sharing software concepts were all

developed during this period in the development of the computer. The following generation, the fourth, began around 1971 and is drawing to a conclusion as we enter the new millennium. This group features microprocessors, very-large-scale integration, networking, database management systems, and advanced programming languages such as Pascal, BASIC, C, C++, and Java. As we can see, computers as we know them today have enjoyed a phenomenal rate of technological growth from the day the first one was booted. If future growth is to resemble anything like the dramatic changes seen in the past, significant research and development must be applied to several key areas in the field. Microprocessors, parallel processing systems and other

architectures, optical technologies, and molecular technologies are some of these key areas.

Parallel processing. Parallel processing computers will probably have a place in tomorrow's world of computing. This computer architecture links hundreds or even thousands of processors to perform many tasks simultaneously: processing hundreds of lines of code, accessing video or audio files, playing live media from the web, and so on. The mathematician John von Neumann, who laid the foundation for serial computer architecture, recognized the potential of parallel processing but set the idea aside because of the great cost of tubes and wiring. ENIAC, the first general-purpose electronic computer, was also the first parallel computer. However, in 1948 the separate units were

centralized and reprogrammed to accept serial input. ENIAC was reconfigured because of the limits of the available technology: computer memories could store only a few thousand bits of data, and the machine had to access them one bit at a time. Granularity is the most important feature for classifying parallel processing computers. Coarse-grained systems, which have the most powerful processors, contain anywhere from 2 to about 200 processors. Medium-grained multiprocessor systems use less powerful processors but contain about 10,000 of them. Fine-grained multiprocessor systems contain from 100,000 to 10 million processors. Today parallel processing research proceeds in two directions: a) improving single-processor speeds by applying parallel techniques of

computation, and b) building multiprocessor systems from the ground up.

Bus-based Architecture. These systems tie together currently available CPUs and give them access to a global, or common, memory that the processors reach via a central communications channel; any processor can leave data there for the others. This arrangement can, however, lead to communication overload when the number of processors manipulating the shared memory exceeds the channel's capacity, so bus-based systems are normally limited to around 20 processors. Multiple-instruction-stream/multiple-data-stream (MIMD) operation is how these machines receive their programming: a program is broken into several pieces that are distributed, along with data, to the individual processors, where they run independently. The biggest