Introduction
Computer technology has grown at an immense rate, in processors as well as in memories, and it is these technology improvements that have driven the computer industry forward. The mid-1980s and early 1990s saw new high-performance RISC microprocessors appear, influenced by earlier discrete RISC-style CPU designs such as the IBM 801. RISC microprocessors were first used in special-purpose computers and UNIX workstations, but they later gained acceptance in other areas. HP unveiled its first system containing a PA-RISC CPU, and MIPS released the first commercial RISC microprocessor design, the 32-bit R2000.
Storing large programs in memory became a pressing issue in the 1980s. The CISC designs of the 1970s were still in use, but they required multiple physically separate chips to implement. This complexity prompted a change in design philosophy: move features from hardware into software, so that work previously done by complex instructions implemented in hardware could instead be accomplished by sequences of simpler, cheaper instructions. The outcome was the Reduced Instruction Set Computer (RISC). Although the RISC approach is architecturally compelling, the most prominent architecture in use today is the IA-32, whose immense popularity rests largely on backward compatibility with its predecessors. RISC has nevertheless contributed greatly to system performance optimization: it provides many registers to reduce memory accesses, it uses simple and uniform instruction formats to ease instruction decoding, and most instructions execute in a single clock cycle, which simplifies execution sequencing.
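To illustrate how a fixed instruction format eases decoding, the following is a minimal sketch in C that unpacks the classic MIPS R-type layout used by processors such as the R2000 (a 6-bit opcode, three 5-bit register fields, a 5-bit shift amount, and a 6-bit function code); the example encoding at the end is chosen purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a MIPS-style 32-bit R-type instruction by masking fixed-width
 * bit fields. Because every field sits at a known position, decoding is
 * a handful of shifts and masks; no variable-length parsing is needed. */
typedef struct {
    uint8_t opcode; /* bits 31-26                          */
    uint8_t rs;     /* bits 25-21: first source register   */
    uint8_t rt;     /* bits 20-16: second source register  */
    uint8_t rd;     /* bits 15-11: destination register    */
    uint8_t shamt;  /* bits 10-6 : shift amount            */
    uint8_t funct;  /* bits  5-0 : ALU function code       */
} rtype_t;

static rtype_t decode_rtype(uint32_t word)
{
    rtype_t d;
    d.opcode = (word >> 26) & 0x3F;
    d.rs     = (word >> 21) & 0x1F;
    d.rt     = (word >> 16) & 0x1F;
    d.rd     = (word >> 11) & 0x1F;
    d.shamt  = (word >>  6) & 0x1F;
    d.funct  =  word        & 0x3F;
    return d;
}

int main(void)
{
    /* Illustrative encoding of "add $3, $1, $2" (funct 0x20). */
    uint32_t word = (0u << 26) | (1u << 21) | (2u << 16) | (3u << 11) | 0x20;
    rtype_t d = decode_rtype(word);
    printf("opcode=%u rs=%u rt=%u rd=%u funct=0x%x\n",
           (unsigned)d.opcode, (unsigned)d.rs, (unsigned)d.rt,
           (unsigned)d.rd, (unsigned)d.funct);
    return 0;
}
```

Because every field sits at a fixed bit position, the decoder is nothing more than a few shifts and masks, which is exactly what allows simple and fast decode hardware.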
In the 1940s and 1950s, every major program had to include its own logic for managing primary and secondary memory, for example through overlaying. Virtual memory was therefore introduced not only to enlarge the apparent primary memory, but also to make that enlargement as transparent as possible for programmers. To allow multiprogramming and multitasking, many early systems without virtual memory, such as early versions of the PDP-10, shared memory between programs through registers. In 1961, Burroughs Corporation independently unveiled the first commercial computer incorporating virtual memory, the B5000, which used segmentation rather than paging. The debate over the commercial use of virtual memory effectively ended in 1969, when an IBM research team showed that their automatic virtual overlay system consistently performed better than manually controlled systems.
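As a rough illustration of the paging mechanism that later became the dominant form of virtual memory (the B5000 itself used segmentation), the sketch below translates a virtual address through a page table; the page size and table contents are illustrative assumptions, and page faults are ignored.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy paged virtual-to-physical translation: the virtual address is split
 * into a page number (an index into a page table) and an offset within
 * the page. Page size and table contents below are illustrative only. */
#define PAGE_SIZE 4096u
#define NUM_PAGES 16u

/* page_table[v] holds the physical frame number for virtual page v. */
static uint32_t page_table[NUM_PAGES] = { 7, 3, 12, 5 /* rest zero */ };

static uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    uint32_t frame  = page_table[page % NUM_PAGES];
    return frame * PAGE_SIZE + offset;
}

int main(void)
{
    uint32_t vaddr = 0x1234;   /* virtual page 1, offset 0x234 */
    printf("virtual 0x%x -> physical 0x%x\n",
           (unsigned)vaddr, (unsigned)translate(vaddr));
    return 0;
}
```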
In the 1960s, because semiconductor memory was scarce and costly, early computers used complex physical memory hierarchies, spanning semiconductor devices, drum, and disc, mapped onto the flat virtual memory space seen by programs. Since the virtual space used by programs was flat, caching was applied to bring instructions and data into the memory the processor could access fastest. Considerable research was undertaken to optimize cache size. Optimal values were found to depend largely on the programming language used, with Algol needing the smallest caches while languages such as COBOL and Fortran needed the largest.
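The sketch below shows the core of such a cache lookup for a direct-mapped design: the address is split into a tag, a line index, and a byte offset, and a hit occurs when the indexed line holds a matching tag. The line count and line size are arbitrary toy values, not figures from the studies mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy direct-mapped cache. Parameters are illustrative assumptions. */
#define NUM_LINES  64   /* lines in the cache */
#define LINE_BYTES 32   /* bytes per line     */

typedef struct {
    bool     valid;
    uint32_t tag;
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Returns true on a hit; on a miss, "fetches" the line by recording its
 * tag so that the next access to the same line hits. */
static bool cache_access(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_LINES;
    uint32_t tag   = addr / (LINE_BYTES * NUM_LINES);

    if (cache[index].valid && cache[index].tag == tag)
        return true;            /* hit: data would come from fast SRAM */

    cache[index].valid = true;  /* miss: fetch from slower memory      */
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    uint32_t addrs[] = { 0x1000, 0x1004, 0x2000, 0x1008 };
    for (int i = 0; i < 4; i++)
        printf("0x%04x -> %s\n", (unsigned)addrs[i],
               cache_access(addrs[i]) ? "hit" : "miss");
    return 0;
}
```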
In the early days, main memory access was only slightly slower than register access, but from the 1980s onwards the performance gap between processor and memory has been widening. Microprocessors have improved much faster than memory, particularly in operating frequency, so memory became a performance bottleneck. While it was technically feasible to make all of main memory as fast as the CPU, a more economical path was adopted: use a large amount of low-speed memory, but also integrate a small high-speed cache to narrow the performance gap. This provided an order of magnitude more capacity at the same price, with only slightly lower combined performance.
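The benefit of this arrangement can be seen with the standard average memory access time relation, AMAT = hit time + miss rate x miss penalty. The numbers in the sketch below are assumed for illustration only.

```c
#include <stdio.h>

/* Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
 * The figures below are illustrative assumptions, not measurements. */
int main(void)
{
    double cache_hit_time_ns = 1.0;   /* small, fast SRAM cache          */
    double memory_access_ns  = 60.0;  /* large, slow main memory         */
    double miss_rate         = 0.05;  /* 95% of accesses hit the cache   */

    double amat = cache_hit_time_ns + miss_rate * memory_access_ns;

    printf("Without a cache: %.1f ns per access\n", memory_access_ns);
    printf("With a cache:    %.1f ns per access on average\n", amat);
    return 0;
}
```

With a 95% hit rate, the average access time stays close to the cache's speed even though most of the capacity is slow memory, which is the economic argument made above.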
Latch-less pipelining methods were first used in the construction of the IBM System/360 floating-point unit in 1967, achieving a cycle time equal to half the latency through the combinational logic even in the absence of latches. The theory of wave pipelining was formalized in 1969, and as higher levels of integration and better computer-aided design tools became available, interest in it revived. The effectiveness of techniques that reduce the limitations imposed by instruction and data dependencies has prompted designers to increase pipeline depth, lowering the amount of work performed in each stage. This architectural shift permits a shorter clock cycle, which raises the rate at which instructions enter the pipeline; the computer begins more instructions per second and its performance improves.
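The arithmetic behind this can be sketched as follows: splitting a fixed amount of combinational work across k stages shortens the clock cycle by roughly a factor of k, so instructions can be started k times as often. The figures are assumptions, and latch overhead, hazards, and stalls, which limit this gain in practice, are ignored.

```c
#include <stdio.h>

/* Illustrates the basic pipelining arithmetic: dividing a fixed datapath
 * delay across k stages shortens the clock cycle, so more instructions
 * are started per second. Numbers are assumed; overheads are ignored. */
int main(void)
{
    double total_logic_delay_ns = 10.0;   /* unpipelined datapath delay */
    int depths[] = { 1, 2, 5, 10 };

    for (int i = 0; i < 4; i++) {
        int k = depths[i];
        double cycle_ns = total_logic_delay_ns / k; /* work per stage   */
        double mips     = 1000.0 / cycle_ns; /* million instructions/s  */
        printf("%2d stages: cycle %.1f ns, %.0f million instructions/s\n",
               k, cycle_ns, mips);
    }
    return 0;
}
```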
Conclusion
The evolution of system performance is one of the most captivating technical developments of the end of this century. The computing power delivered by performance-related technologies has been multiplied by a factor of several thousand over the last 25 years; no other technical field has seen such a remarkable evolution. Memory technologies are progressively being integrated with one another, and they may well reach into the range of supercomputers at the start of the next century. Today, system performance can be seen as a set of sub-components working together towards a common goal, and future computing devices will likely incorporate multiple improved performance boosters. The projected computing power looks impressive and leads us to expect even more from pipelining, caches, virtual memory, and reduced instruction set computers.