Edge Supercomputers for High Performance Edge Computing

At first glance, a supercomputer may seem to be an other-worldly piece of equipment that is used exclusively for bleeding-edge applications like space exploration or studying quantum mechanics. While that remains true in the traditional sense, today’s article aims to demystify supercomputers – and specifically, how we can use them for high performance edge computing!

In this article, we will cover the following content and more:

  • What is a Supercomputer?
  • Why use a Supercomputer for Edge Computing?
  • Challenges when Building Edge Supercomputers
  • Edge Supercomputer Platform Recommendations
  • NVIDIA Supercomputer with Jetson Mate

What is a Supercomputer?

By definition, a supercomputer is a computer with a high level of performance in comparison to general-purpose computers like our laptops or desktops. They typically are used to perform extremely workload-intensive tasks that are required by specialised fields of science and engineering, such as quantum mechanics, weather forecasting, or running various complex simulations.

Most supercomputers are in fact not singular in nature. Instead, they are made up of separate computers, each with their own processors, that are connected together in a network to work as a single entity. This is known as clustering.

Japan’s Fugaku Supercomputer

For example, the fastest supercomputer in the world today, Japan’s Fugaku, is made up of a mind-boggling 158,976 CPUs, each with 48 cores. Following closely, the Summit uses 9,216 CPUs with 22 cores each, along with 27,648 NVIDIA Tesla V100 GPUs.


Supercomputers for Edge Computing

With the scale of the supercomputers that we have been talking about, they have to be housed in specialised facilities with the space, power and cooling capacity needed for such hardware to run effectively. Naturally, it would be impossible to apply this kind of high-performance computing to edge computing – or would it?

First of all, what is Edge Computing?

Computing on the edge means that data processing occurs on distributed devices that are out in the field and in active deployment (in other words, at the edge). Some examples of edge devices include smartphones, as well as various SBCs and microcontrollers.

I’ve previously talked extensively about Edge Computing in my Edge AI – What is it and What can it do for Edge IoT article, which I highly recommend you cover before continuing – especially if you’re new to this topic!

Do Supercomputers Have a Role in Edge Computing?

Back to the question at hand: Can we achieve the same kind of supercomputing power on the edge? Well, yes and no. Let me explain.

No – Supercomputers as we traditionally know them are not suitable for edge scenarios.

Edge computing applications are typically designed to be compact and power efficient. This is because they operate out in the field, where direct access to extensive power grids may be inconvenient or entirely impossible. In addition, edge computers usually work alongside other pieces of equipment, and may even be mounted directly onto mobile platforms like robots or factory equipment. Thus, it’s highly unlikely that supercomputers of the traditional scale will ever be used for edge applications.

Yes – We can reap the same advantages with specialised edge hardware.

But that isn’t to say that edge computing is left without a solution. Thanks to developments in both hardware and software, it’s now possible to take advantage of supercharged computer clusters that are specifically designed for edge computing. Their computing power will not match that of full-scale supercomputers, but when put in perspective against their size, energy consumption and cost, these edge supercomputers are most definitely super in their own right!

With edge supercomputers, we are pushing data centre services to the edge, in order to provide them on a local node. These nodes then individually host further services and applications in a distributed manner.


Why Do You Need a Supercomputer for Edge Computing?

As mentioned earlier, edge computing brings a unique set of benefits when compared to traditional cloud computing. With edge supercomputers, we can take those benefits to the next level and tackle some unique challenges!

1. Increase Computing Capabilities at a Local Level

When it comes to computing capacity, it is a simple fact that within the confines of a given size or budget requirement, more is better. Supercomputers on the edge are designed to remain compact and power efficient, while offering several times the capabilities of regular edge devices like SBCs or microcontrollers. This not only makes your existing edge applications more reliable, but also grants you the flexibility to add more functions in the future without worrying about having to switch out hardware!

2. Enjoy Lower Latency & Save Bandwidth

Supercomputers on the edge allow you to perform your heavyweight computing tasks without relying on external communications to cloud computing services. The most direct advantage is that there is no longer a need to transmit data to and from the cloud, so latencies in data processing can be greatly reduced. At the same time, the reduced frequency of external transmissions also means lower bandwidth requirements, and thus lower network costs.

Take an image processing task, for example. With a reliance on cloud computing, the entire image must be transmitted to a remote server before processing can even begin. With an edge supercomputer, however, you can process the data within the local network and transmit only the results. This is not only faster, but also saves you a tremendous amount in bandwidth costs over the long run!
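To put that difference into rough numbers, here is a minimal Python sketch. The detect_objects() function is purely a hypothetical stand-in for whatever model would actually run on the edge cluster; the point is simply to compare the size of a raw frame against the size of its results.

```python
import json
import numpy as np

# A hypothetical, uncompressed 1080p camera frame (3 bytes per pixel).
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

# Cloud approach: the whole frame is shipped off for remote processing.
cloud_payload_bytes = frame.nbytes  # roughly 6.2 MB per frame

# Edge approach: process locally, transmit only the results.
def detect_objects(image: np.ndarray) -> list:
    # Placeholder for a real model running on the edge supercomputer.
    return [{"label": "defect", "x": 412, "y": 233, "w": 18, "h": 22}]

edge_payload_bytes = len(json.dumps(detect_objects(frame)).encode())

print(f"Cloud upload per frame: {cloud_payload_bytes / 1e6:.1f} MB")
print(f"Edge upload per frame:  {edge_payload_bytes} bytes")
```

At 30 frames per second, that difference adds up very quickly.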

3. Improve Data Security

A reduction in the transmission of data to external locations also means fewer open connections and fewer opportunities for cyber attacks. This keeps your local networks operating safely, out of reach of potential interception or data breaches. Furthermore, since data is no longer stored in a centralised cloud, the consequences of any single breach are heavily mitigated!

4. Real Time Analytics & Artificial Intelligence

Edge AI and machine learning are areas that have gained a lot of attention lately, and supercomputers on the edge provide the hardware to run their resource-intensive algorithms. With the low latency offered by local computing, it is now possible to design smart systems that process data collected at the edge and respond dynamically in real time.

For example, using vision-based analytics, robots no longer need to be programmed with deterministic commands, but can instead run generalisable AI algorithms. This allows production environments to create highly customisable products, and to perform intelligent tasks like quality control or even real-time process optimisation!
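As a rough illustration of what such a real-time loop looks like, here is a minimal Python sketch. The infer() function and the randomly generated frames are placeholders for an actual accelerated vision model and a live camera feed.

```python
import time
import numpy as np

def infer(frame: np.ndarray) -> str:
    # Placeholder for an accelerated vision model running on the edge cluster
    # (for instance, a TensorRT-optimised network on a Jetson module).
    return "reject" if frame.mean() > 127 else "accept"

# Simulated 720p camera frames; in practice these would come from a video feed.
for _ in range(5):
    frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
    start = time.perf_counter()
    decision = infer(frame)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"decision={decision}, latency={elapsed_ms:.1f} ms")
```

Because everything runs on the local node, the measured latency reflects compute time alone, with no round trip to a remote data centre.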

Realtime vision tasks have high hardware requirements, Source: Medium

Challenges of High Performance Edge Supercomputers

Despite the benefits that supercomputers grant to edge computing applications, their implementation remains a challenge. For example, specialised hardware and software are typically required to keep the solution compact and easily integrated with the other components of your system. In this section, I will briefly discuss some specific factors that you should consider when designing your edge supercomputer!

Form Factor & Local Network Infrastructure

Like traditional supercomputers, edge supercomputers are commonly built on a clustering infrastructure. It’s very much possible to hook a couple of single board computers together to take advantage of their collective compute capabilities, just as many have done with the popular Raspberry Pi 4. Unfortunately, such a solution results in a messy mesh of wires that is hardly suitable for field deployment.

At the same time, it’s important to make provisions for the local network infrastructure. For example, in computer clusters, individual computers are connected to each other through interconnects. If these interconnects cannot handle large bandwidths, they can inhibit effective communication between your compute nodes and hurt the overall performance of your cluster.

Read more about computer clusters in my previous article here!

Cluster Computing Architecture: Head Node + 3 Slaves

Effective Software for Clusters

In addition to hardware, software is equally important for effective computer clustering. Most platforms use Kubernetes to manage containers, which are independent, isolated instances of applications. Kubernetes automatically distributes workloads across the different nodes in your cluster to achieve not only high performance, but also high availability and reliability.

Naturally, both the container runtime and the container management platform that you choose will have to be compatible with the application that you intend to run. Take note that compatibility can change over time, which may require you to make changes to your application. For instance, Kubernetes deprecated support for the long-time favourite Docker runtime (via the dockershim) as of version 1.20, leaving developers to migrate to alternatives like containerd or CRI-O.

Thus, it is also important to keep these considerations in mind when designing your solution, since an over-dependence on any single software infrastructure can result in significant challenges in adapting your solution later on.
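If you run Kubernetes on your cluster, one quick way to check which runtime each node is actually reporting is the official Kubernetes Python client. This is a minimal sketch, assuming the client is installed (pip install kubernetes) and a valid kubeconfig is available on the machine it runs from.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. on the cluster head node).
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    info = node.status.node_info
    # container_runtime_version reads e.g. "containerd://1.4.4" or "docker://20.10.x"
    print(node.metadata.name, info.container_runtime_version, info.kubelet_version)
```

A survey like this makes it easy to spot nodes that still depend on a deprecated runtime before upgrading the cluster.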

Integrating with IoT Connectivity

End-to-end integration is another significant challenge. Many edge systems are designed for IoT, which involves massively distributed networks. Thus, a high degree of flexibility is required to ensure that the data generated by the system can also be appropriately leveraged by the edge supercomputer to power enterprise applications. In fact, this is a common challenge when integrating edge-to-cloud architectures as well, and may require supercomputers on the edge to also adopt edge characteristics – like versatile connectivity.


Building Edge Supercomputers

Fortunately, there are several integrated edge supercomputer platforms that have been designed to help you work around such considerations, allowing you to focus on developing applications to meet your needs! Without further ado, let’s dive in.

Introducing reServer

The reServer is Seeed’s latest addition to the reThings family, and is a compact and powerful server that can be used in both edge and cloud supercomputing scenarios. Based on the ODYSSEY x86 v2 board and powered by the latest 11th Gen Intel Core i3 CPU with Intel UHD Xe Graphics, reServer packs a real punch in computing and AI capabilities for any scenario you can dream of.

reServer represents a new age of edge supercomputing, with diverse network connectivity capabilities including two high-speed 2.5-Gigabit Ethernet ports and hybrid connectivity with 5G, LoRaWAN, BLE and WiFi. With compatible hardware and link aggregation, reServer is capable of achieving transmission speeds of up to a whopping 5Gbps to meet high-throughput computing requirements!

Product Features

  • CPU: Latest 11th Gen Intel® Core™ i3 CPU running up to 4.10GHz
  • Graphics: Intel UHD Graphics Xe G4 48EUs running up to 1.25 GHz
  • Rich Peripherals: Dual 2.5-Gigabit Ethernet, USB 3.0 Type-A, USB 2.0 Type-A, HDMI and DP output
  • Hybrid connectivity including 5G, LoRa, BLE and WiFi (Additional Modules required for 5G and LoRa)
  • Dual SATA III 6.0 Gbps data connectors for 3.5” SATA hard disk drives with sufficient internal enclosure storage space
  • M.2 B-Key/ M-Key/ E-Key for expandability with SSDs or 4G and 5G modules
  • Compact server design, with overall dimensions of 124 mm × 132 mm × 233 mm
  • Quiet cooling fan with a large VC heat sink for excellent heat dissipation
  • Easy to install, upgrade and maintain with ease of access to the internal components

Learn more about the reServer on the Seeed Online Store today!

Jetson Mate Cluster Standard / Advanced

As mentioned, clustering is very common when building edge supercomputer solutions. Seeed is proud to share our complete edge GPU clustering solution built around the Jetson Mate and NVIDIA’s Jetson modules. With the carrier board and Jetson modules together, you can easily get your hands on a complete NVIDIA GPU cluster powered by NVIDIA’s industry-leading GPUs for edge applications!

You can now pick up the hardware for a complete edge GPU cluster from Seeed in two convenient packages: the Jetson Mate Cluster Standard and the Jetson Mate Cluster Advanced.

Read all about building an edge GPU Jetson Cluster in our previous NVIDIA Jetson Cluster article, or read right on to learn how to build an NVIDIA Supercomputer with Jetson Mate!


NVIDIA Supercomputer with Jetson Mate

This section is adapted from this extremely informative video by Gary Explains, who uses the Jetson Mate to crack a SHA-256 hash!

Gary runs four NVIDIA Jetson Modules in his YouTube Video.

A SHA-256 hash is a string of characters generated by a cryptographic hash function from a source input, such as a password we want to protect. Using the algorithm, it is very easy to compute the hash from the source, but as Gary shares, it’s not quite as simple to go the other way. Involving trial and error over some 38 million candidate combinations, brute-forcing a SHA-256 hash is a rather resource-intensive task, and it is used to benchmark the performance of the Jetson supercomputer in this video.
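To make the task concrete, here is a toy Python sketch of what brute-forcing a hash involves. This is only a CPU-bound illustration of the idea with a tiny search space, not the GPU-accelerated tooling used in the video.

```python
import hashlib
import itertools
import string

# The hash we want to reverse; here, the SHA-256 of "abc" as a toy target.
target = hashlib.sha256(b"abc").hexdigest()

# Brute force: hash every candidate until one matches the target.
for length in range(1, 5):
    for letters in itertools.product(string.ascii_lowercase, repeat=length):
        candidate = "".join(letters)
        if hashlib.sha256(candidate.encode()).hexdigest() == target:
            print("Found:", candidate)
            raise SystemExit
```

Each candidate is cheap to hash, but the number of candidates explodes with the length and character set, which is why the GPU cluster in the results below makes such a dramatic difference.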

Summarising the little experiments in his video, we get the following results:

Setup                                               Time Taken
1x Jetson Nano, CPU only                            5 minutes
1x Jetson Xavier NX + 3x Jetson Nano, CPU only      67 seconds
1x Jetson Nano, with GPU                            45 seconds
2x Jetson Nano, with GPU                            27 seconds
1x Jetson Xavier NX + 3x Jetson Nano, with GPU      15 seconds

As you can see, clustering drastically decreased the time taken for the same task – from 5 minutes on a single Jetson Nano’s CPU to just 15 seconds on the full GPU cluster, roughly a 20x improvement – all while maintaining a compact form factor within the Jetson Mate! In the same way, resource-intensive computing can be handled by coordinating the power of multiple systems to create an edge supercomputer for a multitude of scenarios, including computer vision, forecasting and real-time simulations!

If you’re keen to try this out for yourself, you can pick up the Jetson Mate and Jetson Nano / Xavier modules from the Seeed Online Store, and follow the instructions that Gary has provided on his Github here.

Alternatively, the Seeed Wiki also provides detailed, step-by-step instructions on how you can get yourself set up with the Jetson Mate.


Summary & More Resources

The definition of a supercomputer is changing. While full-scale supercomputers have their role in cutting-edge research, edge supercomputers are bringing more cloud services to the edge to enable more intelligent, real-time applications. With resources like the Jetson Mate providing compact and affordable hardware, and NVIDIA EGX providing full-stack infrastructure for accelerated edge computing, this transformation is closer to reality than ever!

To learn more, be sure to check out the following resources:
