Are supercomputers still relevant today?

By Paul Shillito


In my last video, I looked at how miniaturisation has dramatically and exponentially improved computing power over the last 60 years, and this reminded me of a video I did back in 2016 comparing a typical $1000 home computer and an iPad Pro to the top 10 supercomputers of the time.

That was eight years ago, when supercomputers were still being talked about as the ultimate in computing power, but since then things have changed and we don't hear that much about them today.

So what has happened in the space of just eight years that would overshadow the most powerful computers available and what is their future?

Well, first of all we need to know what a supercomputer is.

Back in the day, when most computers were things that only large companies could afford, there were general-purpose machines doing jobs like company accounts, the sort of work that would previously have been done by lots of people with typewriters and calculators.

And then there were high-performance machines doing highly specialised jobs such as simulating nuclear explosions, forecasting the weather, modelling quantum mechanics or the creation of new molecules, all the sort of things that required huge numbers of calculations. Because the number of calculations was so vast, the machines had to be much faster to deliver the results in a reasonable amount of time, and this was the realm of the supercomputer.

The first supercomputers were introduced in the 1960s and made by Seymour Cray at the Control Data Corporation, or CDC, and eventually at Cray Research after he left to form his own company.

These were effectively highly tuned versions of conventional designs, using higher-performance components such as silicon transistors rather than germanium ones, and running at higher clock speeds, usually by cooling them down to keep speed and reliability up.

During the 1960s, parallelism was gradually added, going from one processor to four processors, for example.

Then in the 1970s, vector processors, which operated on very large arrays of data, became the dominant form.

Vector processors work on large one-dimensional arrays using lots of simple processing units under the control of a master control unit. Instead of a single powerful CPU working on one instruction at a time, a vector processor could load a large array of data in one clock cycle and have many simple arithmetic logic units work on each element separately but in parallel, greatly increasing the throughput.
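
To make that idea concrete, here is a minimal Python sketch contrasting an element-at-a-time loop with a whole-array operation, which is the style of working a vector processor encourages; the use of NumPy and the array sizes are my own choices for illustration, not anything from the video.

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar style: one element per step, like a conventional CPU loop
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] * 2.0 + b[i]

# Vector style: the whole array is handed over as a single operation,
# letting the underlying hardware work on many elements in parallel
c_vector = a * 2.0 + b

assert np.allclose(c_scalar, c_vector)
```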

This split is something we still have today, with the powerful CPUs in our computers working on one piece of information at a time across up to maybe 32 separate cores, and graphics processors working very much like vector processors to create the image data for our displays, with several thousand simple processing cores all working on separate pieces of data at the same time.

One of the most famous supercomputers was the Cray-1, released in 1975, which used the vector processor design to greatly increase its speed over previous machines.

It was the first Cray to use integrated circuits; although they had been available since the 1960s, they weren't fast enough until the mid-70s to be used in a machine of this calibre. The Cray-1 ran at an 80 MHz clock speed and could perform 160 megaflops, several times faster than any other machine on the market.

Even the shape was designed to maximise speed, and it garnered a lot of attention because it looked almost like a piece of furniture rather than a computer.

Instead of a straight cabinet, it was bent into a C shape, with the fastest parts of the computer arranged in the central section, where the connecting cables could be cut to the shortest possible lengths to speed up the transmission of data inside, and the slower modules arranged on the outside of the machine, where the signals took slightly longer to arrive.

The bench seat around the bottom covered the refrigeration unit which kept it cool. Like all the supercomputers of the time, it used individual ICs to make up custom circuits tuned for performance, and its operating system was also created specifically for that particular system.

Just around this time, the first microprocessors, like the Intel 8080, the MOS Technology 6502, the Motorola 6800 and the Zilog Z80, were powering the first home computers. As the 80s and 90s followed, Intel gained a dominant position in the CPU market with its x86 series, the 286, 386, 486 and then the Pentium processors, in what were now personal computers.

Meanwhile, supercomputers continued to be specialist-built machines, with Cray leading the market for almost two decades.

However, there were limits to how fast a single computer could be made with the technology of the time. So, starting in the 1980s, massively parallel architectures were created with hundreds or thousands of CPU cores, built from microprocessors and/or maths co-processors, the latter being an additional chip you could plug into your PC to greatly increase its maths performance.

In Japan in 1992, Fujitsu made some of its processors using gallium arsenide to gain higher speeds, a material normally reserved for microwave applications due to its toxicity, and in 1994 its machine reached a peak speed of 1.7 gigaflops per processor across 166 vector processors, using this technology for numerical wind tunnel modelling.

But one of the problems with running massively parallel supercomputers was the software: each system had to have a specially written operating system and applications.

By the mid-90s, general-purpose CPUs like the Intel x86-based ones and the Sun SPARC had increased in performance by so much that supercomputers could use them as processing units instead of custom-built chips, reducing the cost. By 1999, the first Linux-powered supercomputer, the "Roadrunner", had been built with off-the-shelf parts based on clusters of eight dual 333 MHz Pentium II processors and was considered one of the top 100 computers at the time. Now all supercomputers use bespoke, tuned versions of Linux as their operating system.

As time has gone by, supercomputers have gone down one of two paths: either they are grid-based, made up of many distributed computers spread over a wide area, or they are a computer cluster, where all the hardware is close together in one place and works as a single system.
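
As a rough sketch of how the cluster approach looks from the programmer's side, the example below splits a simple sum across all the processes of an MPI job and combines the partial results on one node. The use of the mpi4py bindings, the names and the toy problem are assumptions chosen for illustration, not details from the video.

```python
# Each MPI process (typically one per core or node in the cluster) sums its
# own slice of a large range; the partial sums are then combined on rank 0.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes in the job

N = 100_000_000
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk

partial = sum(range(start, end))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum of 0..{N - 1} computed by {size} processes: {total}")
```

Launched with something like `mpirun -n 4 python sum_cluster.py` on a workstation, the same program would be started across hundreds or thousands of nodes by a cluster's job scheduler.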

Something else which has added to their power is the use of general-purpose graphics processing units, or GPGPUs.

These are usually based on the same technology that you would find on a high-end graphics card, using the CUDA cores in the case of NVIDIA, but are used for things like 3D rendering and scientific simulations.

These use techniques similar to vector processing, with many of the top cards using thousands of processing cores which work in parallel to greatly increase the throughput of data.
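
As an illustration of the GPGPU idea, the sketch below offloads some simple array maths to a graphics card using CuPy, a NumPy-compatible Python library for NVIDIA GPUs; it assumes a CUDA-capable card and the CuPy package are available, and the array size is just a made-up example.

```python
import numpy as np
import cupy as cp  # assumes an NVIDIA GPU and the CuPy library are installed

# Build a large array on the CPU, copy it to the GPU, do the maths there,
# then copy the result back. Thousands of GPU cores work on the elements
# in parallel, much like a vector processor.
x_cpu = np.random.rand(10_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)            # host -> device copy

y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0   # computed on the GPU
y_cpu = cp.asnumpy(y_gpu)            # device -> host copy

print(y_cpu[:5])
```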

To give you an idea of just how powerful these are, the top-of-the-range NVIDIA RTX 4090 graphics card has roughly 40 times the raw floating-point throughput of the top-of-the-range Intel i9-14900 CPU with its 24 cores.

However, these processing cores are usually optimised for certain jobs, and they can struggle to cope with complex, non-parallel work as effectively as a general-purpose CPU like the Intel i9, which is why we don't use GPUs as the main processors in our computers.

So modern supercomputers now use a mixture of general-purpose CPUs and GPUs, with the GPUs handling the very specific applications where highly parallel processing makes them far more useful.

And this is one of the reasons why we don't see supercomputers as much as we see massive server farms.

A supercomputer like the most powerful one in the world to date, the Hewlett Packard El Capitan, is very expensive to build and run.

Its build cost was in the region of $600 million, and its power consumption is around 40-50 megawatts, which is enough to run a large town of nearly 90,000 houses.

The computing power of El Capitan is in the exaflop range and tops out at a maximum performance of 2.746 exaflops, which is equivalent to 2,746 petaflops. This means it will be used for the most demanding of jobs, like simulating nuclear bomb tests now that real tests have been banned for decades, work which can't be handed over to general-purpose server farms on national security grounds.
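
To put that number in perspective, here is a quick back-of-the-envelope comparison; the desktop figure is an assumed round number for a fast consumer CPU, not something from the video.

```python
# Rough scale comparison between El Capitan's quoted peak and a desktop PC.
EL_CAPITAN_PEAK_FLOPS = 2.746e18   # 2.746 exaflops, as quoted above
DESKTOP_FLOPS = 1.0e12             # assumed ~1 teraflop for a fast consumer CPU

print(f"{EL_CAPITAN_PEAK_FLOPS / 1e15:,.0f} petaflops")                       # 2,746 petaflops
print(f"~{EL_CAPITAN_PEAK_FLOPS / DESKTOP_FLOPS:,.0f}x a 1-teraflop desktop")  # ~2,746,000x
```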

If you are going to make a very large investment in computing, a server farm would give a better return on that investment.

Over the last decade there has been a rise in the number of huge server farms which, computationally, can be almost as powerful as a supercomputer but which are part of larger corporate groups like Amazon's AWS.

These can be rented out in parts on a "pay as you go" basis to individuals, companies or governments, and they make more money for their owners than building highly specialised supercomputers that do just one job at a time for one client.

Cloud computing, the new "grid computing", means that capacity can be quickly pulled together from many geographically dispersed locations to create an on-the-fly supercomputer for a specific job, and then, when it's finished, turned back over to many smaller jobs for lots of clients in a very short period of time.

The most powerful supercomputers draw as much power and require as much cooling as a large server farm, and both now struggle to find locations that can supply low-cost electricity and meet their cooling requirements.

This is why places like Iceland and Norway are seen as almost ideal locations: their low-cost geothermal or hydroelectric power generation and cold climates mean that costs can be kept as low as possible.

The rise of artificial intelligence has also created a huge demand for processing power, which is being met mostly by GPUs rather than by specialist supercomputers.

Some industry leaders have been saying that we might soon be seeing the last of the truly powerful single-purpose supercomputers, and that their replacements will be conglomerations of server farms spread across the world which can be rented by the hour by anyone who can afford it.

The one thing that would keep supercomputers "super", and in a league of their own, would be the development of real-world, reliable quantum computers.

When these become available, they would be used for solving the most difficult tasks, like cracking 2048-bit RSA encryption in hours rather than the billions of years it would take normal supercomputers, at least until someone creates a new quantum-proof encryption method.

Although they wouldn't be used for the more menial tasks that server farms handle, there would be no point in having a quantum computer answer questions on a super ChatGPT, for example. But they would be able to help design new AI systems and answer some of the most difficult questions in science today far faster than any of our current technology, and they really would be supercomputers in every sense of the word.

So thanks for watching, and a big thanks goes to our Patreons for their ongoing support.

Paul Shillito
Creator and presenter of the Curious Droid YouTube channel and the website www.curious-droid.com.