Euler: more power to drive research

On Monday, ETH inaugurated the new Euler high-performance computer at the CSCS data centre in Lugano, offering researchers additional processing power and flexibility to evaluate data and run complex simulations.

HPC cluster Euler
On Monday evening, “Euler” was inaugurated in the presence of invited guests. (Photo: ETH Zurich/Scientific IT Services)

Bernd Rinn, Head of the Scientific IT Services (SIS) division of ETH Zurich’s IT Services, recalls being a little nervous: “We were just in the middle of the critical phase,” he says. Several members of his team had been hard at work at the Swiss National Supercomputing Centre (CSCS) in Lugano.

Their task was to put the new Euler high-performance computer into operation. Hewlett-Packard, the American IT manufacturer, had set it up in a large hall at the CSCS shortly beforehand. The new computer towers stand on a temperature-controlled “island”, linked together in a rectangular formation that people can walk through. This installation contains 416 “nodes”: individual computers similar to PCs, each with 64 gigabytes of memory, which can be increased to 256 gigabytes as required.

Installation and testing have since been completed, and on Monday evening “Euler” was inaugurated in the presence of invited guests from science and industry. Together with ETH Zurich’s existing high-performance computer, “Brutus”, the new system provides the university with 440 teraflops of processing power, that is, the ability to perform 440,000 billion floating-point operations per second. Unlike Brutus, however, Euler is based in Lugano rather than Zurich.
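
As a rough back-of-the-envelope illustration of that scale, the short sketch below simply restates the figures quoted above in code; note that the node count and per-node memory refer to Euler alone, while the 440 teraflops are the combined capacity of Euler and Brutus.

    # Back-of-the-envelope figures taken from the article.
    # The node count and per-node memory describe Euler alone; the 440 teraflops
    # quoted are the combined capacity of Euler and Brutus.
    nodes = 416                    # individual PC-like computers in the cluster
    memory_per_node_gb = 64        # standard configuration, expandable to 256 GB
    combined_flops = 440e12        # 440 teraflops = 440,000 billion operations/s

    print(f"Aggregate standard memory: {nodes * memory_per_node_gb / 1024:.0f} TB")
    print(f"Combined floating-point operations per second: {combined_flops:.2e}")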

There is a simple reason for this: the CSCS uses water from nearby Lake Lugano to cool its computers. This makes Lugano a significantly more favourable location, as it is becoming increasingly common for the processing power of high-performance computers to be restricted by the limits of the cooling facilities available rather than the energy supply. Furthermore, the CSCS in Lugano also offers enough space to expand the system in future.

Not a supercomputer

“High-performance computing has become one of the most important driving forces for research,” says Rinn. “We push ourselves to the limit.” In other words, capacity constraints increasingly determine what is feasible in terms of research and thus what research is actually carried out.

Nowadays there are hardly any scientific disciplines left that do not rely on high-performance computers: climate scientists, for example, use them to model global temperature changes and ice melt, while geologists need them to predict earthquakes. Biologists, meanwhile, require high-performance computers for processing and analysing sequencing data.

Anyone who aims to be at the forefront of international cutting-edge research needs vast quantities of processing power and memory. Euler pushes ETH Zurich into the top group of globally available high-performance computing systems.

However, Rinn is keen to stress that Euler is not another “supercomputer” like Piz Daint, which is also housed at the CSCS. Computers of this kind are made available to research teams on an exclusive, short-term basis for performing extremely elaborate calculations. Access to these supercomputers is only granted to those who can make use of their full processing capacity and apply specially adapted algorithms.

“Euler”, on the other hand, is a “general purpose” computer intended for use by all ETH Zurich researchers; hundreds of staff can work on it at any one time. Its computing capacity is therefore allocated to users according to a “shareholder” system, whereby individual research teams or departments purchase “shares” – a specific amount of processing power – from IT Services. In addition, some of Euler’s capacity is always kept available for all employees to use.
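
To make the idea concrete, here is a minimal sketch of how such a share-based split of computing time could work in principle. The group names, share counts and the proportional-split rule are illustrative assumptions, not ETH Zurich’s actual accounting system.

    # Illustrative sketch of a share-based ("shareholder") allocation model.
    # Group names, share counts and the proportional rule are assumptions made
    # for illustration; they do not describe ETH Zurich's actual system.
    shares = {
        "institute_a": 40,   # a department that purchased 40 shares
        "institute_b": 25,
        "public_pool": 10,   # capacity kept available for all employees
    }

    def allocate(total_core_hours, shares):
        """Split a budget of core-hours in proportion to the shares held."""
        total_shares = sum(shares.values())
        return {owner: total_core_hours * n / total_shares
                for owner, n in shares.items()}

    print(allocate(1_000_000, shares))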

Cloud-based virtual work environments

Rinn is not only proud of the new hardware; he is delighted with the expansion of the software, too. “The Euler hardware shows different faces to the users,” he says. To this end, a special “Euler cloud” – a virtual working environment – is being developed. In this environment, researchers are free to choose and put together their own set of working tools without having to make any major modifications to their own computers.

“This will appeal to any researchers who found that the Brutus system was not flexible enough,” explains Rinn. His team also offers researchers help with setting up these virtual work environments, in which hundreds of programs from different sources can be combined.

Over the past two months, the operating software has been installed and stability and performance tests have been carried out. In the last couple of weeks, selected users have beta-tested the new high-performance computer with their own software. Since the inauguration, all ETH Zurich researchers have been able to work on it.

However, this is just the beginning: a second phase is already being planned. This will involve expanding Euler’s capacity by an extra 250 to 450 teraflops by mid-2015 – depending on the requirements and budget of the system’s shareholders.

New Intel Parallel Computing Center

The Scalable Parallel Computing Laboratory (SPCL) at ETH Zurich, headed by ETH-Professor Torsten Hoefler, was named the first and so far only Intel Parallel Computing Center in Switzerland. The goal of these centers is to meet the future demands of scientific computing by developing new parallelization techniques for scientific applications that require processing of massive amounts of data.

The particular focus of ETH's center is optimizing applications for many-core architectures with tens to hundreds of cores. The center at SPCL will collaborate with CSCS and MeteoSwiss to accelerate the weather and climate simulation software COSMO on Intel's many-core Xeon Phi architecture. This will enable cheaper and more efficient weather prediction. Furthermore, Hoefler is extending the computer science curriculum at ETH to cover software design and programming for highly parallel many-core systems.
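
To give a flavour of what parallelizing a simulation across many cores involves, the sketch below splits a simple smoothing stencil, the kind of per-grid-point operation weather codes apply constantly, across several processor cores. It is a toy illustration in Python under assumed grid sizes and worker counts, not code from COSMO or the SPCL project.

    # Toy illustration of splitting a stencil-style update across many cores.
    # This is not COSMO or SPCL code; grid size and worker count are arbitrary.
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def smooth_rows(block):
        """Apply a simple 3-point averaging stencil along each row of a block."""
        out = block.copy()
        out[:, 1:-1] = (block[:, :-2] + block[:, 1:-1] + block[:, 2:]) / 3.0
        return out

    if __name__ == "__main__":
        field = np.random.rand(2048, 2048)          # stand-in for a weather field
        blocks = np.array_split(field, 8, axis=0)   # divide the grid into 8 row blocks
        with ProcessPoolExecutor(max_workers=8) as pool:
            result = np.vstack(list(pool.map(smooth_rows, blocks)))
        print(result.shape)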
