- Scientific discovery today relies on high-performance computing and high-performance research and education networks.
- Large data sets present challenges to these networks.
- Exascale computing is on the horizon; innovation must continue to prepare for it.
At this past year’s Supercomputing 2015 (SC15), attendees saw a snapshot of the advanced research that relies so heavily on high-performance computing (HPC). Researchers across the globe need supercomputers to simulate complex natural and technological systems, such as galaxies, weather and climate conditions, molecular interactions, electric power grids, and aircraft in flight, so that they can study, collaborate, and discover. It is clear that science would not be nearly as successful today without HPC.
However, the importance of the underlying networks that link these machines cannot be overstated. Connecting supercomputers to each other — as well as to the people who use them — requires research and education networks (RENs) capable of moving massive quantities of data between locations quickly, efficiently, and with minimal latency and packet loss.
Consider transferring a petabyte of data between HPC centers across the region or across the world: Not only does this require a high-capacity network, but it also requires massive switching capacity and non-blocking backplanes that support reliable, friction-free network paths.
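To make the petabyte thought experiment concrete, here is a back-of-envelope sketch of ideal transfer times at 100G and 400G line rates. This is an idealized calculation (it ignores protocol overhead, packet loss, and retransmits, which is precisely why loss-free, friction-free paths matter); the function name and figures are illustrative, not from the article.

```python
# Back-of-envelope: time to move data at a given sustained line rate.
# Ideal conditions assumed: no protocol overhead, no loss, no retransmits.

def transfer_hours(petabytes: float, gbps: float) -> float:
    """Hours to move `petabytes` of data at `gbps` sustained throughput."""
    bits = petabytes * 1e15 * 8       # decimal petabytes -> bits
    seconds = bits / (gbps * 1e9)     # divide by line rate in bits/second
    return seconds / 3600

print(f"1 PB at 100G: {transfer_hours(1, 100):.1f} h")   # ~22.2 hours
print(f"1 PB at 400G: {transfer_hours(1, 400):.1f} h")   # ~5.6 hours
```

Even under these best-case assumptions, a single petabyte occupies a 100G link for nearly a full day, which illustrates why RENs keep pushing capacity upward.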
The Energy Sciences Network (ESnet) is a prime example. Its REN carries approximately 20 petabytes of data each month with traffic increasing an average of 10 times every four years, propelled by the rising tide of data produced by supercomputers and global collaborations involving thousands of field researchers — the so-called ‘long tail of science’.
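The growth trend above compounds quickly. As a rough sketch, assuming the tenfold-per-four-years rate applies smoothly (a simplifying assumption; the article gives only the aggregate trend), projected monthly volume can be computed like this:

```python
# Illustrative projection of ESnet-style monthly traffic volume,
# assuming smooth compound growth of 10x every four years.
# The 20 PB/month baseline comes from the article; the smooth-growth
# model is an assumption for illustration.

def projected_pb_per_month(base_pb: float, years: float) -> float:
    """Monthly traffic after `years`, given 10x growth per 4 years."""
    return base_pb * 10 ** (years / 4)

for years in (0, 4, 8):
    print(f"+{years} yr: ~{projected_pb_per_month(20, years):,.0f} PB/month")
# +0 yr:  ~20 PB/month
# +4 yr:  ~200 PB/month
# +8 yr:  ~2,000 PB/month (2 exabytes)
```

At that pace, today's 20 petabytes a month becomes exabytes within a decade, which frames the urgency of the backbone upgrades discussed next.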
In short, HPC-enabled science is also REN-enabled science.
Thankfully, those working on RENs today have begun to solve some of the challenges posed by surging traffic demands from advanced research. Many RENs are now deploying 200G and 400G long-reach optical backbone networks, allowing for increased scope and scale. In 2011, ESnet and Internet2 partnered with Ciena to deploy a 100G transcontinental research network. In fall 2015, ESnet announced a 400G network upgrade to support its end users. In addition to optical backbone transport, multi-terabit switching has become a reality, aiding in the efficient distribution of large data flows.
While significant progress is apparent, REN engineers are still working to create the illusion of a single, seamless computing experience. Specifically, they are striving to deliver the following network advancements:
- Automation – Software-Defined Networking (SDN) provisioning will have to grow to address broader infrastructure and virtual machine requirements.
- Agility – Networks will have to move away from a rigid, siloed infrastructure and instead work towards a more agile infrastructure that enables inter-domain, multi-layer resource orchestration.
- Aggregation – RENs will have to deal not only with massive machine-to-machine traffic flows, but also with the aggregation of a huge number of devices and sensors.
The scientific endeavors showcased at SC15 are remarkable, and they have been made possible in large part by advances in networking and software technologies. Now into 2016, continued innovation in networking is critical as exascale speeds become a reality by the end of the decade. Blurring the lines between HPC and REN even further will bring about a host of new discoveries with an even greater effect on our lives.