Introduction
Computers play a significant role in our lives today, yet no single person can be credited with their invention. Charles Babbage envisioned a general-purpose computing machine as early as 1834. The breakthrough came when scientists began applying the binary system to electronics (Richards & Alderman, 2007). In 1938, the German inventor Konrad Zuse built the Z1, a machine with a binary-based operating method, a processing unit, and memory, and historians therefore consider the Z1 the first computer. In the United States during the Second World War, J. Presper Eckert and John Mauchly built ENIAC, completed in 1945, a machine that weighed 30 tonnes and ran 1,000 times faster than its predecessors (Richards & Alderman, 2007).
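The binary principle these early machines shared can be illustrated with a short modern sketch (an illustration of the idea, not period code): every number is stored as a pattern of on/off states, or bits.

```python
def to_binary(n, width=8):
    """Return the base-2 representation of a non-negative integer n
    as a string of `width` bits (most significant bit first)."""
    bits = ""
    for i in range(width - 1, -1, -1):
        # Shift the i-th bit into the lowest position and test it.
        bits += "1" if (n >> i) & 1 else "0"
    return bits

# 30 decimal is 16 + 8 + 4 + 2, i.e. the bit pattern 00011110.
print(to_binary(30))  # -> 00011110
```

In an electronic machine, each of these bits corresponds to a physical two-state device (a relay, vacuum tube, or transistor), which is why the binary system suited electronics so well.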
Historians regard the Ferranti Mark 1, developed in 1951 by a Manchester University team led by Dr Freddie Williams, as the first true commercial computer. The same year, America produced its first commercial computer, UNIVAC, whose first use was predicting the outcome of the presidential election. In 1954, IBM introduced a machine that used magnetic-core memory, which became the standard for large machines (Richards & Alderman, 2007). IBM later introduced a smaller, more affordable computer that became very popular. The third-generation era, from 1970 onwards, marked the beginning of computers as we know them today: smaller, privately owned machines for home and small-business use.
The initial computers had many challenges due to their size. The machines were large and heavy, occupying a large area and weighing up to 30 tonnes (Phillips & Taylor, 1969). They were slow compared to modern computers, and their software had small storage memory and low speed. Being so big, they also consumed a great deal of power, making them uneconomical in terms of energy, space, and initial and operating cost. Their weight and size meant they were immobile, unlike the current generation of tablets and laptops. With developments in technology, second-generation computers were a clear improvement on the first, characterized by higher processing speed, less space, more memory, and larger networks (Phillips & Taylor, 1969).
The third-generation era was the explosive stage of computer use, characterized by better software and less bulky, faster, portable computers. RAM speed increased and hard drives could store far more data. Computers have since been used to carry out multiple tasks across many fields, including science, mathematics, research, and data analysis (Phillips & Taylor, 1969).
Differences between humans and computers:
Although the human brain and the computer share some similarities, research in cognitive neuroscience has revealed many important differences between them.
Computers cannot replace humans completely. The human brain continually develops, producing new ideas and discoveries, whereas computers rely on humans for their advancement: every time a new model is produced, human minds set about improving it. The human brain will therefore always stay ahead of the computer.
The development of computers has seen the technology grow from a 30-tonne machine to a portable tablet. Speed and storage capacity have also increased, enhancing scientific work as well as daily life in communication and learning.
References
American Society of Mechanical Engineers. (1999). Computer technology. New York, NY: American Society of Mechanical Engineers.
Parker, S. (1997). Computers. Austin, TX: Raintree Steck-Vaughn.
Phillips, G. M., & Taylor, P. J. (1969). Computers. London: Methuen.
Richards, M., & Alderman, J. (2007). Core memory: A visual survey of vintage computers featuring machines from the Computer History Museum. San Francisco: Chronicle Books.