IBM targets flexible mainframes with open source golang on z Systems

News: While IBM sees improved uptake for its mainframes, the high-performance computing market is also seeing demand for new systems.

The Go programming language is being brought to IBM's z Systems mainframes.

Big Blue has made the port available on GitHub, the popular code-sharing site, under its Linux on IBM z Systems project.

Bringing the open-source Go language to z Systems is significant because it gives customers that want to pair the mainframe with Linux the option of running Go-based applications, and it adds flexibility to the platform.

Go, which is commonly known as golang, was developed by Google and released in 2009. It is designed to make it easy to build simple, reliable and efficient software.
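To give a flavour of that design, the sketch below uses goroutines and channels, the built-in concurrency primitives that underpin the language's reputation for simple, efficient code (an illustrative example, not code from IBM's port):

```go
package main

import "fmt"

// fetch simulates a unit of work and reports its result on a channel.
func fetch(id int, results chan<- string) {
	results <- fmt.Sprintf("worker %d done", id)
}

func main() {
	results := make(chan string)

	// Launch three pieces of work concurrently; goroutines are
	// lightweight, so spawning thousands of them is routine in Go.
	for i := 1; i <= 3; i++ {
		go fetch(i, results)
	}

	// Collect the results as they arrive.
	for i := 0; i < 3; i++ {
		fmt.Println(<-results)
	}
}
```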

Since its release the project has attracted more than 780 contributors making over 30,000 commits, and GitHub hosts more than 90,000 Go repositories. In short, it is a popular language and one that appears to be growing.

The language is particularly well suited to network applications that require a high degree of concurrency. Typical projects include APIs, Web servers and minimal frameworks for Web applications.
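A minimal Web server of the kind the language is often used for looks something like the sketch below; the standard net/http package serves each incoming request on its own goroutine, so concurrency comes essentially for free (the handler and port here are illustrative assumptions):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handler responds to every request; net/http runs each incoming
// request on its own goroutine.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from Go on %s\n", r.Host)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```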

Other users of the technology include Google, Docker and CloudFlare.

The significance of the move, in addition to giving customers more options, is that Docker is written in Go, meaning that z Systems and Docker can work more closely together.

Big Blue saw its z Systems mainframe revenue increase by 15% in its latest quarter compared to a year ago.

The Go programming language can also be used for high-performance computing: the LANL Go Suite was developed by Scott Pakin, a member of staff at Los Alamos National Laboratory.

This suite provides Go with the mechanisms required for using the language in a supercomputing environment.
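The LANL packages define their own interfaces, but the sketch below gives a rough idea of the node-level parallelism Go makes straightforward: a parallel sum spread across all available CPU cores (an illustrative example only, not code from the LANL Go Suite):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// partialSum adds one chunk of the data and folds the result
// into total under the mutex.
func partialSum(chunk []float64, total *float64, mu *sync.Mutex, wg *sync.WaitGroup) {
	defer wg.Done()
	var s float64
	for _, v := range chunk {
		s += v
	}
	mu.Lock()
	*total += s
	mu.Unlock()
}

func main() {
	data := make([]float64, 1000000)
	for i := range data {
		data[i] = 1.0
	}

	// Split the work across one goroutine per CPU core.
	workers := runtime.NumCPU()
	chunk := (len(data) + workers - 1) / workers

	var total float64
	var mu sync.Mutex
	var wg sync.WaitGroup

	for start := 0; start < len(data); start += chunk {
		end := start + chunk
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go partialSum(data[start:end], &total, &mu, &wg)
	}
	wg.Wait()

	fmt.Println("sum:", total)
}
```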

While the z Systems mainframe market is improving for IBM, there is also demand for new systems in the high-performance computing market: Cray, for example, has raised its 2015 revenue outlook to between $720 million and $725 million, up from $715 million.

Demand for supercomputers is being seen in scientific research facilities; the most recent example is America's National Center for Atmospheric Research (NCAR), which is to have a new supercomputer built by SGI and DataDirect Networks.

The system, which will be called Cheyenne, is being built to support closer study of the climate and will replace the current 1.5-petaflop system, Yellowstone.

Cheyenne will be an Intel Xeon-based, 5.34-petaflop system with 4,032 nodes and 313TB of memory, and it will run SUSE Linux Enterprise Server.

Full production on the system is expected in January 2017. It will be paired with 20 petabytes of storage, which will run on Red Hat Linux and use IBM's General Parallel File System.

DataDirect Networks will be responsible for building the file system and data storage, which will be integrated into NCAR's existing GLADE (Globally Accessible Data Environment) file system. GLADE uses shared file spaces to provide a common view of data across the HPC, analysis and visualisation resources in use.

The purpose of the supercomputer will be to improve forecasting of hurricanes and of streamflow, which is how water flows over the landscape; the idea is to predict how much water will be captured in reservoirs over the course of a year.

In addition, it will help NCAR to run its existing weather and climate modelling codes. The new hardware will allow simulations that span greater timescales and work at a finer resolution.

That finer resolution equates to more granular regional climate change predictions, as well as the ability to study air quality and solar storms.