A Chinese supercomputer known as Tianhe-2 was today named the world's fastest machine, nearly doubling the previous speed record with its performance of 33.86 petaflops. Tianhe-2's ascendance was revealed in advance and was made official today with the release of the new Top 500 supercomputer list.
Tianhe-2 was developed at China's National University of Defense Technology and will be deployed in the country's National Supercomputing Center before the end of this year. "The surprise appearance of Tianhe-2, two years ahead of the expected deployment, marks China’s first return to the No. 1 position since November 2010, when Tianhe-1A was the top system," the Top 500 announcement states. "Tianhe-2 has 16,000 nodes, each with two Intel Xeon Ivy Bridge processors and three Xeon Phi processors for a combined total of 3,120,000 computing cores."
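The quoted core count is simple arithmetic once you know the per-chip core counts. As a sanity check, here is a minimal sketch; the 12-core Ivy Bridge Xeon and 57-core Xeon Phi figures are assumptions based on commonly reported specs, not stated in the article:

```python
# Sketch of the Tianhe-2 core-count arithmetic. The per-chip core counts
# (12 for each Ivy Bridge Xeon, 57 for each Xeon Phi) are assumed from
# commonly reported specs, not from the article itself.
nodes = 16_000
xeon_cores_per_node = 2 * 12   # two Ivy Bridge Xeons per node
phi_cores_per_node = 3 * 57    # three Xeon Phi co-processors per node

total_cores = nodes * (xeon_cores_per_node + phi_cores_per_node)
print(f"{total_cores:,}")  # 3,120,000, matching the Top 500 announcement
```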
The combined performance of the 500 systems on the list is 223 petaflops, up from 162 petaflops in the previous list released six months ago. A petaflop is one quadrillion (10^15, or a million billion) floating-point operations per second.
Twenty-six systems hit at least a petaflop. IBM's Blue Gene/Q accounted for four of the top 10, while Intel provided the processors for 80.4 percent of all Top 500 systems. Thirty-nine systems use Nvidia GPUs to speed up calculations, and another 15 use other accelerator or co-processor technology such as AMD's ATI Radeon and Intel's Xeon Phi.
Of the 500 systems, 252 are installed in the US, 112 are in Europe, 66 are in China, and 30 are in Japan. The slowest computer on the list hit 96.6 teraflops, compared to 76.5 teraflops for the slowest computer on last November's list.
Besides Tianhe-2, the only new entrant in the top ten is a Blue Gene/Q system named Vulcan at Lawrence Livermore National Laboratory. Here is a look at the top ten:
It's interesting just how "art-like" those supercomputers look. The K computer also looks like it uses heat pipes.
Edit: forgot to ask, who exactly does the engineering and building of these supercomputers?
1: they all use heat pipes of some sort or another. The K computer is entirely liquid-cooled, so what you're seeing is one board's worth of cooling for the CPU, memory, etc. It plugs into the rack's cooling lines, which run up and out from there.
Engineering varies, because different companies/groups provide different parts of each system. For example, the Cray systems all have multiple cabinets for the compute nodes, which come with their own proprietary interconnects between all the nodes/cores/ranks/etc. Those then connect to an IB network, which connects to a storage system through interface nodes (whose only job is to talk to the "outside" world).
The IB (or other external high-speed network) is made by one of a few companies. It's used to talk to the storage (which, again, comes from a different set of companies for the hardware and software).
Then there are the utilities to manage what is running where, monitoring, fault-detection, importing and exporting data from the cluster, etc.
... but it's a *really* small world. Working in the industry, I recognize several of the people pictured from conferences.
Over three million cores... Looks to me more like a huge network, rather than a 'computer'. Particularly if you note that most of the cores have a space-like separation-- i.e., there is no (and cannot be any) actual causal connection between most of the cores.
For quite some time, supercomputers have been giant clusters operating in unison. One of the big limitations of supercomputers has long been interconnect technology, the idea being that when you get high enough speed between nodes, you can start treating the whole thing as one big system. You'll see InfiniBand mentioned in the article; it's the most mainstream (and I believe most common) of the 'networks' tying everything together, letting you do things like *remote* DMA, which helps make everything look like one single many-cored computer. The discussion about n-dimensional torus topology elsewhere in the comments is just more about how to connect thousands of nodes together in a high-speed, low-latency network.
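The trade-off being described can be sketched with the standard latency-bandwidth ("alpha-beta") cost model. Everything below is a toy illustration with made-up numbers, not measurements of any machine on the list:

```python
# Toy alpha-beta model of message cost on a cluster interconnect:
# time = hops * per-hop latency + size / bandwidth.
# The latency and bandwidth values are illustrative assumptions only.

def message_time(size_bytes, hops, per_hop_latency=1e-6, bandwidth=5e9):
    """Estimated one-way time for a message crossing `hops` links."""
    return hops * per_hop_latency + size_bytes / bandwidth

def torus_max_hops(n, dims=3):
    """Worst-case hop count in a dims-dimensional torus of n nodes per side.
    Wraparound links mean you never travel more than n // 2 hops per dimension."""
    return dims * (n // 2)

hops = torus_max_hops(16)          # worst case across a 16x16x16 torus
small = message_time(8, hops)      # 8-byte message: latency-bound
large = message_time(10**9, hops)  # 1 GB transfer: bandwidth-bound
print(f"{hops} hops, small msg {small:.2e} s, large msg {large:.2e} s")
```

This is why low-latency topologies matter so much: for the small messages typical of tightly coupled simulations, hop latency dominates the cost, while raw bandwidth only matters for bulk transfers.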
The second benefit of high-speed clustering is that you can (sort of) lash together commodity hardware into a supercomputer. The K Computer looks pretty 'custom', a purpose-built supercomputer, while things like Stampede and the Chinese computers look pretty 'commodity', with the IBM systems leaning a bit more toward the 'custom' end. ...anyway I went on too long already. Rambling.
More impressive, to me at least, is the Green500, a list ranked by FLOPS per watt. It's not out yet, but it's usually published within a month or so of the TOP500. I highly doubt Tianhe-2 will top that list. It's easy enough to crank up the core count, but it takes a special bit of engineering to design something that's efficient, too. Number 1 from last November's Green500 ranked a mere 253rd on the TOP500: Beacon at the National Institute for Computational Sciences, University of Tennessee. Tianhe-1A only came in 106th on the Green500, though Titan came in 3rd.
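The Green500 metric is just peak performance divided by power draw. A quick sketch using widely reported power figures (these wattages are assumptions, not from the article, and Tianhe-2's varies depending on whether cooling is counted):

```python
# FLOPS-per-watt is Rmax divided by power draw. The wattages below are
# widely reported figures, not from the article; Tianhe-2's is often
# quoted as ~17.8 MW including cooling.
systems = {
    "Tianhe-2": (33.86e15, 17.8e6),  # (flops, watts) -- assumed figures
    "Titan":    (17.59e15, 8.21e6),
}
for name, (flops, watts) in systems.items():
    print(f"{name}: {flops / watts / 1e9:.2f} gigaflops per watt")
```

On these numbers, Titan comes out meaningfully more efficient despite being slower overall, which is the commenter's point about raw speed vs. efficient engineering.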
Jon Brodkin
Jon has been a reporter for Ars Technica since 2011 and covers a wide array of telecom and tech policy topics. Jon graduated from Boston University with a degree in journalism and has been a full-time journalist for over 20 years.