The future of high-performance computing shows brilliant promise across the global tech landscape. Year after year, rapid progress has raised the stakes in the race to build the most capable supercomputer. Today, the world’s most powerful supercomputer resides in Tennessee at Oak Ridge National Laboratory.
Tomorrow could be a completely different story.
The ever-evolving state of technology keeps casual and power users alike on their toes. Terminology is always in flux as new innovations come to life and existing tech takes on new capabilities. As we near the gateway to the fourth industrial revolution, the state of the supercomputer pushes us closer to a future ruled by FLoating-point OPerations per Second (FLOPS).
But what exactly is FLOPS? What is the difference between teraFLOPS and petaFLOPS? How are they used, and what do they indicate about the future of high-performance supercomputing? Let’s take a look into FLOPS and answer all of those burning questions.
What is FLOPS?
Floating-point operations per second, or FLOPS, is the unit of measurement used to express the performance capability of a supercomputer. Floating-point operations can only run at full speed on processors with built-in floating-point units; without that hardware, they have to be emulated in software.
The average computer’s processor is rated by its clock speed, measured in megahertz (MHz) or gigahertz (GHz). Since supercomputers are far more capable when it comes to raw performance, their speed has to be measured on a considerably larger scale.
Technologists refer to a machine’s theoretical peak performance as Rpeak. A FLOPS reading alone isn’t enough to precisely gauge the Rpeak of a supercomputer; a number of different, intricate benchmark tests need to be run before a final figure is reached.
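To make the unit itself concrete, here is a minimal Python sketch (assuming NumPy is installed) that times a single large matrix multiplication and backs out an achieved FLOPS figure. It’s only an illustration of what the unit measures, not a substitute for the standardized benchmark suites used to rank supercomputers.

```python
# Minimal sketch: estimate achieved FLOPS from one dense matrix multiplication.
# Assumes Python with NumPy; this is an illustration, not an official benchmark.
import time
import numpy as np

n = 2000                                  # matrix size chosen arbitrarily
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                                 # dense matrix multiply
elapsed = time.perf_counter() - start

flop_count = 2 * n ** 3                   # ~2n^3 floating-point operations for an n x n matmul
gflops = flop_count / elapsed / 1e9
print(f"Roughly {gflops:.1f} gigaFLOPS achieved on this machine")
```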
Do we need the "s" at the end of FLOPS?
Why do we include the “s” at the end of petaFLOPS or teraFLOPS when we are only talking about one? Like the words deer or moose, FLOPS is both singular and plural. Since the “s” stands for “second,” it is inherent in the term.
What is a petaFLOPS?
One petaFLOPS is equal to 1,000,000,000,000,000 (one quadrillion) FLOPS, or one thousand teraFLOPS.
2008 marked the first year a supercomputer was able to break what was called “the petaFLOPS barrier.” The IBM Roadrunner shocked the world with an astounding 1.105 petaFLOPS on the LINPACK benchmark.
At the time, the head of computer science at Oak Ridge National Laboratory claimed, “The new capability allows you to do fundamentally new physics and tackle new problems. And it will accelerate the transition from basic research to applied technology." Today, IBM Summit can perform more than 100 times faster.
What is a teraFLOPS?
One teraFLOPS is equal to 1,000,000,000,000 (one trillion) FLOPS.
Built in 1996, Intel’s ASCI Red was designed to break the teraFLOPS barrier. It became the fastest computer in the world and the first to score above one teraFLOPS on the LINPACK benchmark, a title it held until 2000. Before it was decommissioned in 2006, ASCI Red was upgraded to perform at more than 2 teraFLOPS.
What is the performance difference between a teraFLOPS and a petaFLOPS?
PetaFLOPS and teraFLOPS are both immense measures of processing speed, but when it comes to sheer speed and power, a petaFLOPS-capable machine significantly outperforms a teraFLOPS-capable one. It takes one thousand teraFLOPS to make up a single petaFLOPS, so the petaFLOPS-class machine naturally boasts far more impressive processing capacity.
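If it helps to see how the prefixes line up, here is a toy Python sketch that converts between the FLOPS units mentioned in this article. The table is nothing more than powers of ten, so the conversion is simple arithmetic.

```python
# Toy sketch: the FLOPS prefixes used in this article, expressed as powers of ten.
PREFIXES = {
    "megaFLOPS": 1e6,    # one million FLOPS
    "gigaFLOPS": 1e9,    # one billion FLOPS
    "teraFLOPS": 1e12,   # one trillion FLOPS
    "petaFLOPS": 1e15,   # one quadrillion FLOPS
    "exaFLOPS":  1e18,   # one quintillion FLOPS
}

def convert(value, from_unit, to_unit):
    """Convert a performance figure from one FLOPS prefix to another."""
    return value * PREFIXES[from_unit] / PREFIXES[to_unit]

print(convert(1, "petaFLOPS", "teraFLOPS"))    # 1000.0   -> one petaFLOPS is a thousand teraFLOPS
print(convert(200, "petaFLOPS", "teraFLOPS"))  # 200000.0 -> Summit's ~200 petaFLOPS peak in teraFLOPS
```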
To add perspective, look to the Intel ASCI Red and the IBM Roadrunner. Both supercomputers were the first in their class to break a speed barrier, and though they were built just 12 years apart, the 2008 Roadrunner was capable of performing over 1,000 times faster than the original 1996 ASCI Red.
How is Rpeak calculated?
According to an Indiana University study, Rpeak is calculated by “multiplying the number of processors by the clock speed of the processors, and then multiplying that product by the number of FLOPS the processors can perform in one second on standard benchmark programs, such as the LINPACK DP TPP and HPC Challenge (HPCC) benchmarks, and the SPEC integer and floating-point benchmarks.”
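As a rough illustration of that formula, the short Python sketch below multiplies a processor count, clock speed, and per-cycle operation count to produce a theoretical peak figure. Every number here is hypothetical, chosen only to show the arithmetic; real Rpeak figures depend on the specific hardware and benchmarks involved.

```python
# Rough illustration of the Rpeak formula quoted above.
# All hardware figures below are hypothetical; they do not describe any real machine.
num_processors = 10_000        # hypothetical processor (core) count
clock_speed_hz = 3.0e9         # hypothetical 3 GHz clock speed
flops_per_cycle = 16           # hypothetical floating-point operations per core per cycle

rpeak = num_processors * clock_speed_hz * flops_per_cycle
print(f"Theoretical peak: {rpeak / 1e15:.2f} petaFLOPS")   # 0.48 petaFLOPS
```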
How fast is the world’s fastest supercomputer?
Developed by IBM, the Summit, or OLCF-4, was officially named the world’s fastest supercomputer in November of 2018. Its LINPACK benchmark score clocks in at an astounding 143.5 petaFLOPS; however, this powerhouse machine is capable of up to 200 petaFLOPS. That’s a mind-boggling 200,000 trillion calculations per second.
Through the sheer might of Summit, technologists and researchers have been able to launch the world’s first exascale scientific calculation. Capable of over 3 billion billion mixed-precision calculations per second, this groundbreaking machine has redefined the word “supercomputer.”
Taking up over 5,600 square feet of floor space at Oak Ridge National Laboratory, Summit is tied together by over 185 miles of fiber-optic cable. With a storage capacity of 250 petabytes, this monster machine can store up to 74 years’ worth of high-definition video.
What’s next for supercomputers?
Time is the best testament to how quickly the tides of technological change come and go, and it’s only fair to assume the rapid pace of development will continue. Plans are already in the works to get the next barrier-breaking supercomputer built by 2021.
On May 7, 2019, the U.S. Department of Energy officially announced its contract with Cray Inc. to build the Frontier supercomputer at Oak Ridge National Laboratory. Frontier aims to solve calculations more than 50 times faster than its sister machine, Summit. This next-generation supercomputer is slated to boast processing power measured in exaFLOPS. That’s an astonishing 1,000,000,000,000,000,000 (one quintillion) calculations per second.
To add perspective, it is estimated that Frontier will harness the processing power of the next 160 fastest supercomputers combined. U.S. Secretary of Energy Rick Perry stated, “Frontier will accelerate innovation in AI by giving American researchers world-class data and computing resources to ensure the next great inventions are made in the United States.”
Supercomputers have lit our world on fire and show no signs of stopping. The future is a bright one and we’re getting there one petaFLOPS, or a thousand teraFLOPS, at a time.
About the Author: Tulie Finley-Moise is a contributing writer for HP® Tech Takes. Tulie is a digital content creation specialist based in San Diego, California with a passion for the latest tech and digital media news.