PRACE Resources

The PRACE Research Infrastructure (RI) provides access to distributed, persistent, pan-European, world-class HPC computing and data management resources and services. Expertise in the efficient use of these resources is available through participating centres throughout Europe.
Available resources are announced for each Call for Proposals.

PRACE Production systems (in alphabetical order of the systems’ names):

CURIE


CURIE is a supercomputer of GENCI, located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. CURIE is a BULLx system composed of 5,040 compute blades (called thin nodes), each node having two octo-core Intel Sandy Bridge EP processors at 2.7 GHz, 4 GB of memory per core (64 GB per node) and around 64 GB of local SSD acting as a local /tmp. These nodes are interconnected through an InfiniBand QDR network and access a multi-layer Lustre parallel file system at 250 GB/s. The peak performance of the thin-node partition is 1.7 Petaflops.
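
As a rough consistency check, the peak follows from nodes × cores per node × clock × flops per cycle; assuming 8 double-precision flops per core per cycle for Sandy Bridge with AVX (an assumption, not stated above):

\[
5040 \times 16 \times 2.7\,\mathrm{GHz} \times 8 \approx 1.74\ \mathrm{Pflop/s},
\]

in line with the quoted 1.7 Petaflops.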

The successor of CURIE, with a first tranche entering service in Q2 2018, will be a BULL Sequana system based on 9 compute cells. Six cells will each comprise 272 compute nodes with two 24-core Intel Skylake EP processors at 2.7 GHz and 4 GB of memory per core (192 GB per node). Three cells will each comprise 228 nodes with one 68-core Intel Knights Landing manycore processor at 1.4 GHz, with 16 GB of high-speed memory (MCDRAM) and 192 GB of main memory. All compute nodes will be interconnected through a high-speed interconnect (to be announced later) and will access a multi-layer Lustre parallel file system at 500 GB/s. The peak performance of this system will be close to 9 Petaflops.
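
The "close to 9 Petaflops" figure is plausible under the assumption of 32 double-precision flops per core per cycle (AVX-512 with two FMA units) on both the Skylake and the Knights Landing nodes, which is not stated above:

\[
6 \times 272 \times 48 \times 2.7\,\mathrm{GHz} \times 32 \approx 6.8\ \mathrm{Pflop/s}, \qquad
3 \times 228 \times 68 \times 1.4\,\mathrm{GHz} \times 32 \approx 2.1\ \mathrm{Pflop/s},
\]

for a combined peak of roughly 8.9 Petaflops.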

For technical assistance: hotline.tgcc@cea.fr

MARCONI

Italian supercomputer systems have complemented the PRACE infrastructure since spring 2012.

CINECA’s Tier-0 system, MARCONI, has provided access to PRACE users since July 2016. The MARCONI system is equipped with the latest Intel Xeon processors and has two different partitions:

  • Marconi – Broadwell (A1 partition) consists of ~7 Lenovo NeXtScale racks with 72 nodes per rack. Each node contains 2 Broadwell processors each with 18 cores and 128 GB of DDR4 RAM.
  • Marconi – KNL (A2 partition) was deployed at the end of 2016 and consists of 3600 Intel server nodes integrated by Lenovo. Each node contains 1 Intel Knights Landing processor with 68 cores, 16 GB of MCDRAM and 96 GB of DDR4 RAM.

The entire system is connected via the Intel Omni-Path network. The global peak performance of the Marconi system is 13 Petaflops. In Q3 2017 the MARCONI Broadwell partition will be replaced by a new one based on Intel Skylake processors and the Lenovo Stark architecture, reaching a total computational power in excess of 20 Petaflops.
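
As a hedged cross-check, the KNL partition alone accounts for most of the quoted global peak, assuming 32 double-precision flops per core per cycle for Knights Landing (AVX-512 with two vector units per core, not stated above):

\[
3600 \times 68 \times 1.4\,\mathrm{GHz} \times 32 \approx 11.0\ \mathrm{Pflop/s},
\]

with the Broadwell partition contributing the remainder of the 13 Petaflops.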

For technical assistance: superc@cineca.it

Hazel Hen

Hazel Hen is the new Cray XC40 system (an upgrade of the Hornet system) and is designed for sustained application performance and highly scalable applications. It delivers a peak performance of 7.42 Petaflops. The system is composed of 7,712 compute nodes with a total of 185,088 Intel Haswell E5-2680 v3 compute cores. Hazel Hen features 965 Terabytes of main memory and a total of 11 Petabytes of storage capacity spread over 32 additional cabinets containing more than 8,300 disk drives. The input/output rates are around 350 Gigabytes per second.

For technical assistance: prace-support@hlrs.de
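
A quick sanity check, assuming the 2.5 GHz base clock of the E5-2680 v3 and 16 double-precision flops per core per cycle for Haswell with AVX2/FMA (neither stated above):

\[
185{,}088 \times 2.5\,\mathrm{GHz} \times 16 \approx 7.4\ \mathrm{Pflop/s},
\]

in line with the quoted 7.42 Petaflops.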

 

JUQUEEN


Since 1 November 2012, the Gauss Centre for Supercomputing has provided access to JUQUEEN, an IBM Blue Gene/Q system at Forschungszentrum Jülich (FZJ) in Jülich, Germany. Systems of this type are currently among the most energy-efficient supercomputers according to the Green500 list. JUQUEEN has an overall peak performance of 5.87 Petaflops. It consists of 28 racks; each rack comprises 1,024 nodes (16,384 processing cores). The main memory amounts to 458 TB. More information is available on JUQUEEN’s home page (http://www.fz-juelich.de/ias/jsc/juqueen).
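
The quoted peak can be reproduced from these figures, assuming the 1.6 GHz Blue Gene/Q clock and 8 double-precision flops per core per cycle (4-wide QPX with FMA), neither of which is stated above:

\[
28 \times 1024 \times 16 \times 1.6\,\mathrm{GHz} \times 8 \approx 5.87\ \mathrm{Pflop/s}.
\]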

For technical assistance: sc@fz-juelich.de

MareNostrum


MareNostrum – hosted by BSC in Barcelona, Spain.

MareNostrum is based on Intel’s latest-generation general-purpose Xeon processors running at 2.1 GHz (two CPUs with 24 cores each per node, i.e. 48 cores per node), with 2 GB of memory per core and 240 GB of local SSD disk acting as a local /tmp. The system comprises 48 racks, each with 72 compute nodes, for a total of 3,456 nodes. A bit more than 200 nodes have 8 GB per core. All nodes are interconnected through an Intel Omni-Path 100 Gbit/s network with a non-blocking fat-tree topology.
MareNostrum has a peak performance of 11.14 Petaflops. For technical assistance: support@bsc.es
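
The 11.14 Petaflops figure is consistent with the node count, core count and clock above if one assumes 32 double-precision flops per core per cycle (AVX-512 with two FMA units, not stated above):

\[
3456 \times 48 \times 2.1\,\mathrm{GHz} \times 32 \approx 11.1\ \mathrm{Pflop/s}.
\]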

Piz Daint


The Piz Daint supercomputer is a Cray XC50 system and the flagship system at CSCS, the Swiss National Supercomputing Centre in Lugano.

Piz Daint is a hybrid Cray XC50 system with 4,400 nodes available to the User Lab. Each compute node is equipped with an Intel Xeon E5-2690 v3 processor at 2.60 GHz (12 cores, 64 GB RAM) and an NVIDIA Tesla P100 with 16 GB of memory. The nodes are connected by Cray’s proprietary “Aries” interconnect with a dragonfly network topology. Please visit the CSCS website for further information. For technical questions: help(at)cscs.ch

SuperMUC

SuperMUC is the Tier-0 supercomputer at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. It provides resources to PRACE via the Gauss Centre for Supercomputing.

SuperMUC Phase 1 consists of 18 Thin Node Islands with Intel Sandy Bridge processors and one Fat Node Island with Intel Westmere processors. Each compute Island contains 512 compute nodes with 16 physical cores each, i.e. 8,192 cores for user applications. Each of these cores has approximately 1.6 GB of memory available for running applications. The peak performance is 3.1 Petaflops. All compute nodes within an individual Island are connected via a fully non-blocking InfiniBand network (FDR10 for the Thin Nodes and QDR for the Fat Nodes). A pruned tree network connects the Islands.
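
For orientation only: assuming a 2.7 GHz Sandy Bridge clock and 8 double-precision flops per core per cycle with AVX (both assumptions, not stated above), the Thin Node Islands alone give

\[
18 \times 512 \times 16 \times 2.7\,\mathrm{GHz} \times 8 \approx 3.2\ \mathrm{Pflop/s},
\]

of the same order as the quoted peak.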

SuperMUC Phase 2 consists of 6 Islands based on Intel Haswell-EP processor technology (512 nodes per island, 28 physical cores per node, 2.0 GB of memory per core available for applications; 3,072 nodes in total, 3.6 Petaflops). All compute nodes within an individual Island are connected via a fully non-blocking InfiniBand network (FDR14). A pruned tree network connects the Islands. Both system phases share the same parallel and home file systems.
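
The 3.6 Petaflops figure is reproduced under the assumption of a 2.6 GHz Haswell clock and 16 double-precision flops per core per cycle (AVX2/FMA), neither of which is stated above:

\[
3072 \times 28 \times 2.6\,\mathrm{GHz} \times 16 \approx 3.6\ \mathrm{Pflop/s}.
\]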

For technical assistance: lrzpost@lrz.de or https://servicedesk.lrz.de/?lang=en
