Abstract—This paper discusses supercomputers: how their performance is rated; the current top ten supercomputers in the world; the tasks for which supercomputers are suited; the operating system they commonly use; and the architecture of a supercomputer. The paper also discusses the estimated cost and size of a supercomputer, as well as its limitations and challenges.
Keywords—supercomputer; operating system; high performance computing; supercomputer architecture.
Introduction
The computer is, perhaps, one of the most significant inventions in the world today. In many respects, it has touched almost every aspect of human civilization and culture, particularly in advancing science and technology. Over the years, computers have evolved and are constantly evolving. In fact, a computer considered powerful relative to the time it was developed is succeeded by more powerful and more advanced computing systems within a short span of time. As computer technology advances, computers with greatly enhanced capacity, or supercomputers, are also being developed. Today, these computers dominate the computer world in terms of capacity and capability.
What Is a Supercomputer?
The term ‘supercomputer,’ as the name suggests, refers to an extremely powerful computer that is far beyond the current normal computing technology (Meuer & Gietl, n.d., p.3). The term, however, is ambiguous and subjective, primarily because computing technology improves rapidly over time. Thirty years ago, the world’s most powerful computer was the Cray-2, but two decades later this supercomputer, which cost $22 million in 1986, was comparable in performance to an Apple computer priced at roughly $500 (Meuer & Gietl, n.d., p.3). A supercomputer, therefore, can be defined as the best available computing system, composed of hardware and software, that can provide “sustained performance on demanding computational problems” (Graham, Snir, & Patterson, 2005, p.21). Supercomputers are sometimes referred to as high-performance computers or high-end computers (Graham, Snir, & Patterson, 2005, p.21). According to scholars, the first supercomputer was the Cray-1, named after its developer, Seymour Cray (Meuer & Gietl, n.d., p.3). Cray started developing supercomputers in the 1960s for the Los Alamos National Laboratory (Meuer & Gietl, n.d., p.3). Today, supercomputers may be few, but they have become an important part not only of a country’s computer network, but of the global computer network.
Performance and Rating Metrics of Supercomputers
According to scholars, there are two ways of measuring and rating the performance of supercomputers: by capability and by capacity (Graham, Snir, & Patterson, 2005, p.24). A computer’s capability is one of the most important metrics in determining how powerful a supercomputer is. Supercomputers are built to provide solutions to computing problems that could not be solved by an ordinary computer within a reasonable amount of time (Graham, Snir, & Patterson, 2005, p.24). A supercomputer can also be measured by capacity, that is, by its ability to perform many different tasks at once. As observed by scholars, “Capacity systems will often run several jobs simultaneously. The main figure of merit is sustained performance per unit cost” (Graham, Snir, & Patterson, 2005, p.24). Depending on its intended use, a supercomputer developer may prioritize one metric over the other; in fact, a trade-off must sometimes be made between capability and capacity. Accordingly, capability-focused supercomputers offer extremely fast computation at the “expense of increased cost per sustained performance while capacity systems are designed to offer a less aggressive reduction in time to solution, but at a lower cost per sustained performance” (Graham, Snir, & Patterson, 2005, p.24). To measure a computer’s capability and capacity, a common unit of measure is used, based on the LINPACK benchmark. The LINPACK benchmark was first developed by Jack Dongarra “to give computer users an idea of the time it would take a machine to solve complex problems in linear algebra” (Rouse, 2005). The benchmark measures a supercomputer’s performance in floating-point operations per second, abbreviated as flops and usually reported in teraflops (Tflops) or, for the fastest systems, petaflops (Pflops). To determine the speed of a supercomputer in running one or more programs, it is rated by the peak Tflops it can deliver, that is, the number of mathematical operations it can perform in a second. Another metric commonly used to rate supercomputers is the number of cores. The core is the main processing unit of a computer that receives and performs calculations; in general, the more cores a computer has, the faster and more efficiently it performs.
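To make these units concrete, a system’s theoretical peak performance can be estimated from its core count, clock frequency, and the number of floating-point operations each core completes per cycle. The short Python sketch below illustrates the arithmetic; the node and system figures in it are hypothetical examples, not specifications of any machine discussed in this paper.

```python
# Illustrative estimate of theoretical peak performance:
# peak flops = cores x clock (Hz) x floating-point operations per cycle.
# All figures below are hypothetical examples, not vendor specifications.

def peak_flops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Return the theoretical peak performance in flops."""
    return cores * clock_ghz * 1e9 * flops_per_cycle

# A hypothetical node: 16 cores at 2.2 GHz, 8 flops per cycle per core.
node_peak = peak_flops(cores=16, clock_ghz=2.2, flops_per_cycle=8)
print(f"Per-node peak: {node_peak / 1e12:.2f} Tflops")

# Scaling that node design to a hypothetical 10,000-node system.
system_peak = node_peak * 10_000
print(f"System peak:   {system_peak / 1e15:.2f} Pflops")
```

Sustained LINPACK performance is always lower than this theoretical peak, which is why the benchmark’s measured figure, rather than the advertised maximum, is used to rank systems.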
Top Ten Supercomputers in the World as of 2015
The top ten supercomputers in the world as of 2015, according to the TOP500 organization, are:
1) Tianhe-2. In the top spot is the Tianhe-2, a supercomputer developed by the National University of Defense Technology of China for the country’s National Supercomputer Center (Parker, 2015). The Tianhe-2 operates at 33.86 petaflops on the LINPACK benchmark and has 3,120,000 cores (Parker, 2015).
2) Titan. The Titan is a supercomputer developed in the United States for the Department of Energy (Parker, 2015). At 17.59 petaflops, Titan was the most powerful supercomputer in the world before the Tianhe-2 was developed, and it has 261,632 cores (Parker, 2015).
3) Sequoia. The Sequoia supercomputer was also developed in the United States and is used for many complex computational purposes. It has a performance level of 17.17 petaflops with 1,572,864 cores (Parker, 2015).
4) K Computer. A Japanese-developed supercomputer, the K Computer operates at 10.51 petaflops and uses 705,024 cores (Parker, 2015).
5) Mira. The Mira is a supercomputer developed in the United States that is being used by the Department of Energy. It performs at 8.59 petaflops and uses 786,432 cores (Parker, 2015).
6) Piz Daint. The Piz Daint in Switzerland is the most powerful supercomputer in Europe and is built on a Cray XC30 system (Parker, 2015). It is currently being utilized by the Swiss National Supercomputing Centre and operates at a speed of 6.25 petaflops with 73,808 cores (Parker, 2015).
7) Shaheen II. The Shaheen II is a supercomputer developed in Saudi Arabia, which is also based on a Cray system. It is currently used at the King Abdullah University of Science and Technology. It has a peak LINPACK Benchmark rating of 5.536 petaflops and uses 196,608 cores (Parker, 2015).
8) Stampede. Stampede is another supercomputer that was developed in the United States. It has a performance rating of 5.17 petaflops (Parker, 2015).
9) Juqueen. The Juqueen is a supercomputer in Germany that was developed on IBM’s BlueGene architecture. It has a performance rating of 5.01 petaflops (Parker, 2015).
10) Vulcan. Rounding out the top ten list is the Vulcan supercomputer, which was also built on IBM’s BlueGene architecture. It is currently used at the Lawrence Livermore National Laboratory together with the Sequoia (Parker, 2015). The Vulcan operates at 4.29 petaflops (Parker, 2015).
Suitable Computer Tasks for Supercomputers
Supercomputers are primarily used for complex, demanding, and time-constrained computing tasks. The first supercomputer of its time, the Cray-1, for instance, was used at the Los Alamos National Laboratory, a nuclear research laboratory in the United States (Meuer & Gietl, n.d., p.3). Today, supercomputers are used for varied functions. Broadly speaking, supercomputers are best suited to problems that involve numerous variables, and real-life scenarios in which many factors must be considered occur frequently. As observed by scholars, supercomputers are used for complex problems in areas such as quantum physics, molecular modeling, weather forecasting, climate simulation, and physical simulations such as fluid dynamics and the simulation of nuclear detonations. In general, supercomputers are used in computing operations that involve complex sets of variables. One of the most practical applications of supercomputers is in the field of intelligence and defense. Supercomputers, for instance, are utilized by countries for code breaking as well as for surveillance. In the United States, supercomputers are used to support intelligence and law enforcement agencies in both local and international operations (Graham, Snir, & Patterson, 2005, p.71).
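As a toy illustration of why such problems suit machines with many processors, consider a Monte Carlo simulation, a workload in which each processor computes independent random samples, so throughput grows with the processor count. The Python sketch below estimates pi this way using the standard multiprocessing module; the worker and sample counts are arbitrary illustrative values.

```python
# Toy Monte Carlo estimate of pi: each worker draws independent random
# points, so the work divides cleanly across processors. This mirrors,
# in miniature, how large simulations are spread over many cores.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points in the unit square that land inside the quarter circle."""
    rng = random.Random()
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, samples_per_worker = 4, 250_000   # arbitrary illustrative values
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [samples_per_worker] * workers))
    print(f"pi is approximately {4 * hits / (workers * samples_per_worker):.4f}")
```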
The Operating System of a Supercomputer
The operating system is the backbone of a computer, as it manages the hardware and the software systems inside the machine. Today, different operating systems are used in computing, the most common of which are Windows and UNIX, the latter with variants that include Linux and macOS. While most personal computers use either Windows or macOS, in the world of high-performance computing Linux is the preferred operating system. Some observers estimate that more than 90% of supercomputers in the world today run Linux (Vaughan-Nichols, 2013). There are several reasons why supercomputer developers use Linux, one of which is that Linux is an open-source operating system: it can be adopted and developed by anyone without paying a fee, and it can be customized by its users depending on their operational needs. Developers have more freedom with Linux than with Windows or macOS; they can, for instance, change the code at will to fix glitches and bugs rather than wait for the vendor’s update. Another advantage of Linux is its scalability. Scalability is defined as “the ability of the server to adapt to larger loads” and is considered a measure of efficiency and performance. The Linux system is highly expandable, as it can be made to accommodate new and additional loads easily. Another characteristic of Linux that makes it the preferred OS for supercomputer manufacturers is its modular nature: the operating system can be pictured as “being made up of small building blocks or modules” (Unixmen, 2016). This makes Linux unique in that it can be customized to a greater extent than other operating systems currently available. Linux is also generic in nature, meaning it has a common source code that works for the smallest to the most complex computer systems.
Architecture of a Supercomputer
The basic components of a supercomputer are the processors, the memory, the operating system, and the interconnect (Graham, Snir, & Patterson, 2005, p.105). A supercomputer is composed of many processing units grouped into nodes, and the nodes are connected together through the interconnect. According to scholars, the performance of a supercomputer depends both on the capability of each node and on the ability of the nodes to communicate with each other (Pawsey Supercomputing Centre, n.d.). A node can have one or multiple CPUs, each with its own processors and memory, and the more nodes a system has, the more powerful it becomes. Like most computer systems, the basic supercomputer architecture follows the von Neumann structure for storing and processing information. The von Neumann architecture consists of the main memory, the arithmetic logic unit (ALU), the control unit, and the input/output of data. The main memory is where all the data is stored, including the computer’s operating system. The arithmetic logic unit performs data operations, the control unit interprets and executes instructions, and the input/output component provides a way to send and receive data in the system. The operating system is stored in the system’s central memory or on the hard drive; after the system is turned on, the Basic Input Output System (BIOS) software locates and executes the operating system. A supercomputer is a multiprocessor system, meaning it employs multiple CPUs connected by the interconnect. In general, a supercomputer can be set up with either shared memory or distributed memory. In a shared-memory system, every CPU in the node can directly access the central memory in order to boot the operating system and access the system’s information. A distributed-memory system, on the other hand, employs loosely coupled CPUs. Unlike in the shared-memory system, where a CPU gets its data from another CPU or from central memory, in the distributed-memory system each CPU gets its data from its own local memory, making booting and retrieving information faster (van der Steen, 2008, p.7). The CPUs in a distributed-memory system can, however, still send and receive data from other CPUs when required. Another processor-memory configuration is the hybrid memory model, in which each node has multiple processors that share a local memory space, “while the memory is still distributed between the nodes globally” (Pawsey Supercomputing Centre, n.d.). The interconnect is also a significant component of a supercomputer’s architecture. Also known as the network topology, the interconnect is not merely cabling; its configuration also determines the performance and the pattern of communication between the nodes.
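To illustrate the distributed-memory model described above, the following minimal sketch uses mpi4py, a Python binding for MPI; the choice of library is an assumption of this example, as the paper does not name a specific programming interface. Each process keeps its data in its own local memory and shares it with other processes only through explicit messages carried by the interconnect. Assuming an MPI installation, it would be launched with a command such as mpiexec -n 2 python script.py.

```python
# Minimal distributed-memory sketch using mpi4py (assumes an MPI library
# is installed; run with: mpiexec -n 2 python script.py).
# Each process holds its data in local memory; sharing happens only
# through explicit messages across the interconnect.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                           # this process's id in the job

local_data = [rank * 10 + i for i in range(3)]   # lives in local memory only

if rank == 0:
    comm.send(local_data, dest=1, tag=0)         # explicit message to rank 1
    print(f"rank 0 sent {local_data}")
elif rank == 1:
    received = comm.recv(source=0, tag=0)        # data arrives via the interconnect
    print(f"rank 1 received {received}")
```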
Cost and Size of a Supercomputer
The cost and size of supercomputers vary directly with their capability and performance. One of the earliest supercomputers, the Cray-2, for instance, cost around $22 million and was the size of five refrigerators (Meuer & Gietl, n.d., p.3). Back then, supercomputers were far inferior to the latest computers available in the market today. Setting up a supercomputer now requires an enormous financial investment: according to observers, a top-of-the-line supercomputer costs, more or less, $100 million to $250 million to design and assemble, not to mention $6 million to $7 million in annual energy costs. Including maintenance, developing and operating a supercomputer is very expensive, and this cost is one of the major drawbacks of supercomputer technology. Aside from being extremely expensive, supercomputers also require a considerable amount of space to operate: the physical layer of a supercomputer demands room for the hardware and the cables. Although computing systems have shrunk significantly since the first computers were invented, building an enormously powerful computer requires millions of processors, and even with current micro-component technology, a top-of-the-line supercomputer keeps getting bigger relative to its capacity. One example is the Titan, the most powerful computer in the world prior to the Tianhe-2. According to observers, the Titan is about “the same size as a basketball court and has 200 cabinets that are similar in size to a refrigerator” (Storm, 2012).
Limitations and Challenges of a Supercomputer
Supercomputers are constantly evolving relative to the computing technology of the time. As observed by experts, “today's supercomputer will become tomorrow's workstation” (Meuer & Gietl, n.d., p.3). In a way, this remark rings true, especially when the speed of modern computers is compared with supercomputers of the past. But to achieve supercomputer status, a machine has to be the fastest and most powerful of its kind, and the only way to achieve this is to add more components, specifically processors, to increase computing capacity. For the same reason, one of the major challenges for supercomputer developers is bulkiness: supercomputers require an enormous amount of space, which is generally undesirable. One reason supercomputers are getting bigger instead of smaller is that microprocessors cannot get much faster than their present speeds. According to experts, a processor can be made to work faster by increasing its frequency (Wagstaff, 2012). However, the frequency of a processor can only be pushed so far; beyond a certain limit, the processor generates heat that could melt the computer (Wagstaff, 2012). Since processors cannot be made to go faster, developers increase the number of processors, thereby increasing the size of the computer as well. Nanotechnology offers a potential solution to the cumbersome size of supercomputers, but the technology is still underdeveloped. Another challenge of supercomputing is cost. While it is desirable to be able to perform rapid simultaneous processes, the high cost of setting up and maintaining a supercomputer is a major drawback and one of the major reasons why the technology is not widely marketable.
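The diminishing returns of scaling by adding processors can be quantified with Amdahl's law, a standard result not cited in this paper: if a fraction p of a program can be parallelized, the speedup on n processors is at most 1 / ((1 - p) + p / n), so the serial fraction eventually dominates no matter how many processors are added. A small Python illustration:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the processor count.

def speedup(p: float, n: int) -> float:
    """Upper bound on speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} processors: {speedup(0.95, n):6.2f}x speedup")

# Even with 95% of the work parallelized, the speedup can never exceed
# 1 / (1 - 0.95) = 20x, regardless of how many processors are added.
```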
Conclusion
Most supercomputers run the Linux operating system, primarily because, aside from being open source, Linux offers many characteristics well suited to high-performance computing. Supercomputers are more complex than ordinary computers: unlike typical computers that use only one CPU, a supercomputer's architecture is designed to accommodate multiple CPUs with millions of cores, and its calculation speed and capacity are incomparable to those of ordinary computers. Measured in petaflops, the performance of a supercomputer is almost 1,000 times that of an ordinary computer running an Intel i7 processor. As a trade-off for this power and capacity, however, creating and maintaining one is enormously expensive. Even so, supercomputers will continue to dominate the computing landscape because of their practical applications in science and technology, most especially in finding solutions to complex problems. Among their major uses are defense and the prediction of natural phenomena that would otherwise be unpredictable. Many countries have embarked on developing their own supercomputers, not only for the prestige, but also because of their relevance to the development of science and technology.
References
[1] Rouse, M. (2005). Linpack benchmark. [Online]. Available: http://searchdatacenter.techtarget.com/definition/Linpack-benchmark
[2] Parker, C. (2015). The 10 most powerful supercomputers. [Online]. Available: https://www.weforum.org/agenda/2015/07/10-most-powerful-supercomputers/
[3] Vaughan-Nichols, S. (2013). Linux continues to rule supercomputers. [Online]. Available: http://www.zdnet.com/article/linux-continues-to-rule-supercomputers/
[4] Unixmen. (2016). Why do super computers use Linux? [Online]. Available: http://www.unixmen.com/why-do-super-computers-use-linux/
[5] Pawsey Supercomputing Centre. (n.d.). Introduction to Supercomputing Technology. [Online]. Available: https://portal.pawsey.org.au/docs/images/4/40/Introduction_to_Supercomputer_Technology.pdf
[6] Wagstaff, K. (2012). What, Exactly, Is a Supercomputer? [Online]. Available: http://techland.time.com/2012/06/19/what-exactly-is-a-supercomputer/
[7] Storm, D. (2012). Meet the fastest, most powerful science machine in the world: Titan supercomputer. [Online]. Available: http://www.computerworld.com/article/2473620/high-performance-computing/meet-the-fastest--most-powerful-science-machine-in-the-world--titan-super.html
[8] Meuer, H., & Gietl, H. (n.d.). Supercomputers – Prestige Objects or Crucial Tools for Science and Industry? [Online]. Available: http://www.top500.org/files/Supercomputers-Paper-London-Final.pdf
[9] Graham, S., Snir, M., & Patterson, C. (2005). Getting Up to Speed: The Future of Supercomputing. [Online]. Available: http://research.microsoft.com/en-us/um/people/blampson/72-CSTB-Supercomputing/72-CSTB-Supercomputing.pdf
[10] Sharma, V., Varshney, M., & Sharma, S. (2010). Design and Implementation of Operating System. [Online]. Available: https://books.google.com.ph/books?id=3iE3UWrE4L0C
[11] van der Steen, A. (2008). Overview of recent supercomputers. [Online]. Available: https://computing.llnl.gov/tutorials/parallel_comp/OverviewRecentSupercomputers.2008.pdf