CPU To GPU – A New Age Of Supercomputing

You may be interested to learn that we may be due for a sea change in the processing capabilities of our computers. In a few years, the specs you have today will seem as far from the state of the art as dial-up is from a fibre optic internet connection. It is a change which may see your CPU sidelined somewhat in favour of a GPU-focussed system. Stick with TechWench as we discuss the implications of a new programming language: Harlan.

Before we tumble straight into the world of supercomputing we must first pin down the difference between a CPU and a GPU. Surely just a difference of a single letter, right? Wrong. CPU stands for Central Processing Unit, whereas GPU stands for Graphics Processing Unit. A CPU is tailored to process a single operation very quickly, whereas a GPU is geared towards parallel processing: running many operations at the same time, each at a slower pace. Originally, GPUs were designed for just what the first letter of their acronym states: graphics. As CPUs and programming languages developed, the world of computing became far more graphics-heavy, and GPUs were brought in to take some of the processing load off the CPU, with the main processing still taking place on the CPU. CPU technology has developed wildly since the introduction of GPUs to encompass some degree of parallel processing (multi-core CPUs, for instance, are now the norm), yet GPUs, being designed for parallel processing from the ground up, have experienced a meteoric rise in capacity of their own.
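
To make that contrast concrete, here is a minimal sketch of the same vector addition written both ways, in CUDA (one of the GPU languages we will meet below); the function names are ours, purely for illustration. The CPU version walks the elements one after another, while the GPU version hands each element to its own thread:

```cuda
// CPU style: one fast core walks the elements one after another.
void add_serial(const float* a, const float* b, float* c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// GPU style: thousands of slower threads each handle a single
// element, all at the same time.
__global__ void add_parallel(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard against overshoot
        c[i] = a[i] + b[i];
}
```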

Their ability to handle many small computations simultaneously, and the fact that they are designed to feed back into a CPU, has meant that GPUs have lent themselves well to the world of supercomputing: the world of petaflop-grade processing for modelling extremely complex systems such as the weather. GPU capacity has risen off the back of 3D graphics and HD video processing, which is what the market wanted; the funds and push of the market, coupled with computer academics' drive to push the boundaries of information technology, have led to dedicated GPU-driven supercomputers. Even regular computing can take advantage of a little GPU processing: Salesforce uses a GPU-oriented system for real-time processing of Twitter feeds on a mass scale, Amazon offers a cloud-based GPU processing service to clients who need to outsource their data processing, and Google is modelling the human brain using a series of GPU chips. It seems the future is all GPU and not CPU.

One of the problems that has arisen, however, is that GPU-focussed systems require a much more comprehensive back end before you get your answer. There is a lot more coordination required of the architecture: the main task must be broken down into many parallel computations, fed through the GPUs for processing, and then recombined on the other side to deliver the required result. This disconnect has led one Indiana University PhD candidate to develop a programming language tailored to freeing the programmer from spending so long coordinating the many parallel threads of a GPU-oriented system. It is a key step towards the golden PU: a chip which is a seamless meld of both GPU and CPU.
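
As a rough sketch of that coordination burden, here is what the host side of the earlier vector addition might look like in plain CUDA. The details are illustrative rather than anyone's production code, but every GPU computation involves some version of this scatter, launch and gather choreography, and it is exactly the bookkeeping a language like Harlan sets out to lift off the programmer:

```cuda
#include <cuda_runtime.h>

// The kernel from the earlier sketch.
__global__ void add_parallel(const float* a, const float* b, float* c, int n);

void add_on_gpu(const float* a, const float* b, float* c, int n) {
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);

    // 1. Break the problem out: carve out GPU memory and scatter the inputs.
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    // 2. Feed it through: launch enough thread blocks to cover n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_parallel<<<blocks, threads>>>(da, db, dc, n);

    // 3. Recombine: wait for every thread, then gather the result back.
    cudaDeviceSynchronize();
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
}
```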

Eric Holk developed the Harlan language to help in the push towards melding GPU and CPU: by allowing the programmer to focus less on low-level hardware programming, Harlan offers capabilities normally associated with much higher-level programming languages such as C++ or Ruby. Harlan is, however, by no means the only GPU programming language available to those who wish to take on the challenges of GPU-oriented computing. CUDA and OpenCL are also available, yet both are focussed on low-level programming of the hardware interaction. Harlan itself uses OpenCL to compile its programs but, as Holk says, was designed to answer the question: “What would we do if we started from scratch on a language and designed it from the beginning to support GPU programming?” It seems that what they would do is create a language which enables GPUs to perform faster than a CPU can. The fact that GPUs are designed to feed back into another higher-level system means they lend themselves well to creating a multi-core, multi-processor supercomputer system; the problem was that the development of hardware capabilities outstripped the development of software systems to accommodate the change in processing focus.

“Most of the systems so far embed GPU programming in an existing language, which means you have to handle all the quirks of the host language. Harlan lets us make the best decisions for our target hardware and applications,” continues Holk. The syntax of Harlan is derived from Lisp via Scheme: with Lisp being the flagship language of AI researcher John McCarthy, and Scheme being one of the languages that Indiana University researchers are most familiar with, it made sense to orient the new language towards these rather than the original vision of something more akin to C++. Yukihiro Matsumoto once said that Lisp was the direct ancestor of “every good programming language”. He would know, too; he wrote Ruby, after all.
What does the advent of Harlan mean for the world, though? Well, it is somewhat doubtful that Harlan will be rolled out as the language of choice for GPU-oriented systems; it is more of an experiment to push the boundaries of GPU programming. Bear in mind that this is the brainchild of a PhD candidate, so its applications are largely academic, with little focus on rolling the language out on a commercial basis. It is also important to remember the process by which high technology reaches the market: university researchers develop the tech, get taken on by a company, and the skills they bring are eventually rolled out to the market. This is the case with Harlan and Eric Holk, Harlan being his academic focus whilst he works with Mozilla on another language: Rust.

The difference between Harlan and Rust is one of applicability: Harlan is a proof of concept and an experiment, whereas Rust is a product with real market potential. Focussed more on low-level hardware interaction whilst maintaining a familiar approach to programming, Rust can be made to work effectively with GPU-oriented systems; as opposed to Harlan's focus on writing efficient code for de facto GPU programming, Rust is comprehensive enough to program from a low to a high level, as Holk explains in a paper he has penned on the language. “Harlan is about pushing the limits of what's possible, while the Rust on GPUs work is about applying those ideas in a more practical language,” as Holk put it.

This may all seem like fantastic news to boffins and researchers who rejoice in the minutiae of language syntax and get all squirmy or defensive when a new language appears, but what does it mean for the world at large? With a large number of nation states having stated goals of increasing supercomputer capacity beyond current capabilities, this will be exciting news indeed. The current world leader in supercomputing is China, whose impressive Tianhe-2 supercomputer clocks in at 33.86 petaflops (FLOPS being the number of floating-point operations per second a computer can handle, with a petaflop being 10^15 of them): some cosmic distance away from what is recognised as the first supercomputer, the CDC 6600, which ran at a then magnificent 1 megaflop. Here we are some five decades later with a FLOPS capacity more than thirty billion times greater, and the consensus worldwide is that we will reach 1 exaflop (exa being the next SI prefix up from peta) by 2018. India has a stated goal of reaching this by 2017, an admirable approach to research. It is estimated that a computer of zettaflop capacity (the next prefix up again from exa) could accurately model world weather systems for a two-week period. Perhaps this gives you an idea of the stakes at play in the globally competitive world of supercomputing, though by current predictions we won't reach such capacity until 2030.
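
For a sense of scale, using the figures above (and the SI ladder: mega is 10^6, giga 10^9, tera 10^12, peta 10^15, exa 10^18, zetta 10^21), the jump from the CDC 6600 to Tianhe-2 works out as:

```latex
\frac{33.86\ \text{petaflops}}{1\ \text{megaflop}}
  = \frac{33.86 \times 10^{15}\ \text{FLOPS}}{1 \times 10^{6}\ \text{FLOPS}}
  \approx 3.4 \times 10^{10}
```

Roughly a thirty-four-billion-fold increase, or about ten orders of magnitude.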

Players in the game must sit up and take notice of the capacity of GPU-oriented programming and bring it into their own approaches. Intel in particular should pay attention, having pushed their efforts into the Intel MIC as a response to GPU-focussed systems. My personal opinion is that pride of product may be getting in the way of development at this point. Researchers and developers should be cooperative in their approach; after all, these are the people who connected us all with the internet. But That Ole Profit Motive means that research efforts are normally disparate and kept wrapped up under secrecy clauses. Here we see the problem in rolling supercomputer capacity out to the masses: university researchers develop the systems, businesses hire them and lock their freedom of research into a contract, and the technology is then released at whatever pace the company deems profitable in the long term.

Perhaps a combination of the lower cost of high-capacity systems, the wide propagation of tinkering programmers, and the development of languages such as Harlan and Rust will lead to a new age of cheap, superpowerful computers with applications beyond weather monitoring or mass-surveillance programs. One application which will certainly be of use to the masses: this increased level of supercomputer capacity means that wholesale cloud computing is now tantalisingly close. Stick with TechWench as we document and discuss the rapid changes in technology the world experiences, day to day.
