
SC10 is the international conference for high performance computing, networking, storage and analysis


Accelize provides full-hardware accelerator cards that allow software designers to significantly accelerate the performance-critical portions of their applications, with only a fraction of the power budget. With no alteration to the original application software, algorithms can be tested on the server and then deployed on the accelerator card or cluster, either locally or remotely, allowing instant algorithm updates and dramatic performance improvements. Accelize's next-generation ANSI C compiler (HCE) is a key component of the acceleration platform, enabling seamless compilation of algorithmic software into configurable hardware.

Accelize accelerators are particularly suited for applications in bio-informatics, molecular dynamics, computational chemistry, electromagnetics and electrodynamics, weather, atmospheric and ocean modelling, and other scientific computing domains.

Accelize accelerators employ FPGA technology and an innovative interconnect infrastructure that allow sub-microsecond latency packet processing. The company provides everything from off-the-shelf to full-custom accelerators to fit specific customer requirements.


Advanced HPC

Advanced HPC has teamed up with Infortrend Corporation and will be featuring a live demonstration of the EonStor 10GbE iSCSI storage solution. Designed and built with the latest Ethernet technology, Infortrend's next-generation 10GbE iSCSI storage solutions deliver breakthrough IP SAN performance, flexible scalability, high availability and easy storage management.

For datacentres deployed with 10GbE backbones, the EonStor 10GbE storage solutions allow users to leverage existing IP infrastructure to realise SAN consolidation and satisfy the demands of throughput-intensive applications, such as server virtualisation, video on demand and remote backup, in the most cost-effective manner.

Comprehensive RAID level support and an optional CacheSafe technology help protect data against drive failures and power outages. A simple and intuitive storage management suite, SANWatch Commander, provides a single point of management for multiple EonStor systems over a LAN.



Amax will be unveiling its ClusterMax parallel storage cluster solution. ClusterMax is a next-generation parallel storage platform based on the Intel Sandy Bridge processor and the latest InfiniBand QDR technology. The storage solution can scale to hundreds of petabytes and uses a massively parallel file system to achieve extremely high data throughput.

ClusterMax was designed to work with large-scale HPC cluster deployments and state-of-the-art supercomputer systems that require storage solutions to support the massive data sets typical of scientific research. With this peta-scale storage platform, Amax is now capable of deploying large-scale HPC clusters of up to thousands of CPU cores.


Boston will unveil a GPGPU solution based on its SuperFLEXGPU architecture, designed for hybrid computing. For more than 19 years, Boston has offered the latest in high-performance, power-optimised server, storage and HPC technologies. Boston delivers solutions to the oil and gas, financial services, R&D, digital content creation and enterprise arenas based on its range of power-optimised platforms.

Boston has a high level of expertise in optimising CPU and GPU compute technologies combined with platforms including Microsoft Windows HPC Server 2008 R2, Lustre parallel storage solutions and Nvidia's Cuda. These alliances deliver the best solution for HPC and assure scalability, compatibility and flexibility.

The company's CPU compute platforms support the latest AMD Opteron and Intel Xeon multicore processors in addition to Intel's Itanium 2 based technologies with true 64-bit architecture.



Bull will be exhibiting its family of servers designed for extreme computing: bullx. HPC users have extremely diverse requirements, and are increasingly searching for hybrid solutions to cover the widest possible spread of applications. Bull has long recognised this need, and has been gradually building an exhaustive range of solutions that includes thin nodes (bullx blade system), fat nodes (bullx supernodes) and accelerators (bullx accelerator blades with integrated Nvidia Fermi GPUs). They can all be combined with each other in a customised way, and managed as a single system using the bullx supercomputer suite. The bullx blades and supernodes have been designed and developed entirely by Bull's R&D group, Europe's largest team of HPC experts. Many prestigious companies and research centres have chosen Bull Extreme Computing solutions, ranging from small departmental clusters to petaflops-scale supersystems.



CAPS will be demonstrating HMPP Wizard and HMPP Feedback, new tools that help developers tune their HMPP-generated code for hybrid GPU systems. HMPP Feedback and Wizard provide users with code diagnoses and optimisation advice to improve the performance of their HMPP applications step by step. New compatibility with debugging and profiling tools will also enable users to analyse the behaviour of HMPP applications and detect errors.
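Directive-based tools like HMPP work by annotating ordinary C with pragmas that mark a function as a 'codelet' for GPU code generation. The sketch below is illustrative only: the pragma spelling and clauses are not taken from the HMPP manual, and a plain C compiler simply ignores them, so the same source still builds and runs on the host.

```c
#include <assert.h>

/* Hypothetical HMPP-style codelet: the pragma asks the tool to generate
   a GPU version of this function. The directive spelling here is an
   illustration of the style, not the documented HMPP syntax; an
   ordinary compiler ignores the unknown pragma and runs the loop on
   the CPU. */
#pragma hmpp saxpy codelet, target=CUDA
static void saxpy(int n, float a, const float x[n], float y[n])
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* Host-side check: the call site is annotated so the tool can swap in
   the accelerated version; semantics are identical either way. */
int saxpy_ok(void)
{
    float x[4] = {1, 2, 3, 4};
    float y[4] = {1, 1, 1, 1};
#pragma hmpp saxpy callsite
    saxpy(4, 2.0f, x, y);
    return y[0] == 3.0f && y[3] == 9.0f;
}
```

Because the annotations are comments to any other compiler, the same file can be tested serially on a workstation and deployed on the GPU, which is the workflow the HMPP tooling supports.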


Cray will showcase the new Cray XE6 supercomputer and the latest addition to the Cray CX line – the Cray CX1000 system. The Cray XE6 supercomputer combines Cray’s new Gemini system interconnect with powerful AMD Opteron processors, and is designed to bring production petascale computing to a new and expanded base of HPC users. Fully upgradeable from the Cray XT5 and Cray XT6 systems, the Cray XE6 supercomputer delivers improved network performance and additional enhancements such as improved network resiliency, a scalable software environment and the ability to run a broad array of ISV applications with the latest version of the Cray Linux Environment.

The rack-mounted Cray CX1000 system is built on Intel Xeon processors and features three different configurations. The compute-based Cray CX1000-C utilises the dual-socket Intel Xeon Processor 5600 series for scale-out cluster computing; the Cray CX1000-G features Nvidia Tesla GPUs for accelerator-based HPC; and the Cray CX1000-S offers users symmetric multiprocessing (SMP) nodes for up to 128 cores of 'big memory' computing built on Intel’s QuickPath Interconnect technology.


CRC Press - Taylor & Francis

CRC Press – Taylor & Francis publishes innovative textbooks and reference books on the latest research and developments in computational science and engineering. It will be featuring its latest publications, including the newest titles in its Computational Science book series, edited by Horst Simon. Key titles on show include High Performance Computing for Scientists and Engineers, by Gerhard Wellein and Georg Hager; Scientific Data Management, edited by Arie Shoshani and Doron Rotem; and Scientific Computing with Multicore and Accelerators, edited by Jakub Kurzak, David Bader and Jack Dongarra.



EPCC has a strong presence at this year's SC10. The booth will showcase the wide range of work that EPCC undertakes, including Grid accounting software, fusion simulations, exascale computing, industrial partnerships and much more.

EPCC is also involved with a number of other booths, including those of the European projects PRACE and EUFORIA. Finally, it is helping to provide tutorials on OpenMP and Co-array Fortran, and has a number of posters on a range of topics in the poster sessions.

Ethernet Alliance

The Ethernet Alliance booth at SC10 will feature a comprehensive multi-vendor, multi-technology display of Ethernet's ability to support high-performance networking. This showcase will highlight how Ethernet delivers a high-performance enterprise data centre infrastructure while converging interprocessor communication, storage and server application communications over a unified network.

Visitors at the Ethernet Alliance booth will be able to see equipment from multiple vendors supporting a comprehensive technology demonstration that will include 10GbE, 40GbE and 100GbE, as well as converged traffic including Fibre Channel over Ethernet (FCoE), 10GBASE-T, iSCSI, iWARP and RoCE. The live demonstrations will showcase these various solutions as enabled through a unique interoperability setting.

Booth visitors will also be able to view various products needed to build Ethernet networks. Companies such as Cisco, CommScope, JDSU and Intel have all contributed their latest products to the showcase. Examples include optical modules for 40 and 100 Gigabit Ethernet and energy-efficient Ethernet equipment.

European Middleware Initiative

The European Middleware Initiative (EMI) will be exhibiting at the show. The EMI project was created on 1 May 2010 out of the joint efforts of the major European distributed computing middleware providers: ARC, gLite, Unicore and dCache.

The project's goal is to support and evolve the European and international research infrastructures, allowing an increasing number of scientists and researchers to access resources, data and applications across the world. EMI's main goals are to improve the reliability, usability and stability of middleware services, according to the requirements of users and infrastructure providers.

EMI plans to consolidate the existing middleware services and make them simpler and easier to use, by adopting, improving and proposing working standards and by carefully integrating proven and new technologies. More than one hundred software developers, testers, designers, team leaders and project managers from 26 institutes in 18 countries, in Europe and beyond, are now working together to implement this vision.



Extoll will be demonstrating its technology aimed at high-performance computing users whose applications are currently limited by latency, scalability or power. Extoll technology is a communication solution that enables ultra-low latencies and avoids costly and power-hungry external switches. Unlike currently used equipment, the product is an integral part of the HPC system, providing inherent support for multi-core environments, virtually unlimited scalability and tight coupling between computational units.

As part of Extoll, Velo minimises communication overhead, resulting in ultra-low latency communication. The switch-less design integrates all switching resources onto the add-in card; with this technology, external switches and their associated disadvantages, such as cost, power and inflexibility, can be avoided completely. Cabling effort is minimised by extremely dense connectors, novel low-power, cost-effective patented active optical cables, and passive electrical cables for short distances.


Fraunhofer Institute for Industrial Mathematics

The Fraunhofer Institute for Industrial Mathematics will be launching GPI, a new interface for application development on multicore architectures. GPI stands for Global Address Space Programming Interface; it implements the PGAS (Partitioned Global Address Space) programming model at the API level.

The advantages of GPI include its global address space, which makes programming more productive, and its truly asynchronous communication model. It also offers performance at wire speed and excellent scalability on large multicore systems.

GPI directly extends the simple thread-based programming model from the node level to the cluster level. GPI works with Posix threads or with the optimised hardware-aware multicore thread package MCTP. GPI is lightweight and easy to use, and includes additional functionality such as fast barriers, collective operations and atomic counters.
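The PGAS model GPI implements can be pictured as every node owning one partition of a global address space, with one-sided puts and gets that complete without any matching receive call on the remote side. The single-process mock below illustrates those semantics only; the function names are invented for this sketch and are not the GPI API.

```c
#include <assert.h>
#include <string.h>

/* Single-process mock of PGAS-style one-sided communication. Each
   "node" owns one segment of the partitioned global address space. */
enum { NODES = 4, SEG_WORDS = 8 };
static int segment[NODES][SEG_WORDS];

/* One-sided put: copy local data into (node, offset) of the global
   space. No receive call is needed on the target node. */
static void pgas_put(int node, int offset, const int *src, int nwords)
{
    memcpy(&segment[node][offset], src, nwords * sizeof(int));
}

/* One-sided get: read from (node, offset) into a local buffer. */
static void pgas_get(int node, int offset, int *dst, int nwords)
{
    memcpy(dst, &segment[node][offset], nwords * sizeof(int));
}

/* A halo exchange in miniature: "node 0" writes boundary values
   directly into node 3's partition, then reads them back. */
int pgas_demo(void)
{
    int halo[2] = {42, 43};
    pgas_put(3, 0, halo, 2);
    int back[2] = {0, 0};
    pgas_get(3, 0, back, 2);
    return back[0] == 42 && back[1] == 43;
}
```

In a real PGAS library the puts and gets cross the network asynchronously, which is where the wire-speed, overlap-friendly behaviour described above comes from; the memcpy stands in for that transfer here.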

GPI was developed by the Fraunhofer Institute for Industrial Mathematics. It is available as a licensed product through Fraunhofer’s marketing partner Scapos.


German Climate Computing Centre (DKRZ)

The German Climate Computing Centre (DKRZ) is a unique national computing centre for premium climate science. Today, the complexity of the Earth system and the various interactions between its components pose one of the great scientific challenges. The Earth as a whole cannot be the object of experiments, so, particularly with regard to estimating future climate changes, computer simulations have become indispensable. DKRZ provides HPC platforms, sophisticated high-capacity data management and related services; its computer systems are the laboratory of climate researchers.

DKRZ runs an IBM Power6 system with a peak performance of more than 150 Teraflop/s, which is used for complex climate model calculations. DKRZ will present visualisations and animations of intricate model experiments. Special focus is placed on the results of small-spatial-scale processes, such as storms, cyclones or hurricanes in the atmosphere, and mesoscale eddies in the ocean.

The university research group 'Scientific Computing', also located at DKRZ, will present high performance I/O optimisations, energy efficiency, and simulation of cluster infrastructure.


GR Cooling

GR Cooling will exhibit its dielectric fluid submersion cooling system, with a fully-populated 13U cooling system running both normal and over-clocked servers. The system offers high-performance cooling of up to 100kW per rack, a 45 per cent reduction in data centre energy use, and economical equipment costs. Visitors will be able to see and touch servers submerged in the company's GreenDEF cooling fluid, an environmentally-friendly variant of mineral oil, as well as view live system performance data and discuss case studies from current installations.


Juniper will be launching Junos Space Security Design, an enterprise-class security solution specifically designed to enable operators to scale firewall and VPN services rapidly and accurately, allowing them to deploy thousands of devices in minutes with minimal human intervention, minimising errors. Using Junos Space Security Design, with its wizard-driven Web 2.0 interface, operators can fully automate the visualisation, configuration and deployment of security infrastructure in multi-domain networks. Junos Space Security Design includes several innovations. Topology-based policy definition enables security architects to model security devices once and have the configuration ready to push to thousands of devices when they connect; policy abstraction enables operators to create a logical security topology and generate an accurate security configuration; and patent-pending security domains allow common security restrictions to be applied to a grouping of distributed network resources.


With more than 100 petabytes of storage currently installed at HPC customers worldwide, LSI understands the storage requirements of compute-intensive HPC environments for extreme bandwidth and uninterrupted access to data.

LSI has a rich and successful history of deploying solutions and supporting customers in HPC supercomputing markets, including government, research and defence, as well as academia and universities. With LSI HPC and HEC high-bandwidth storage, end customers can deploy InfiniBand and Fibre Channel data centre solutions with confidence.

Visit LSI at SC10 to learn about its new storage system, designed to provide a competitive advantage in HPC markets. LSI meets mainstream HPC requirements with this fast, high-bandwidth, high-IOPS storage system, and will be announcing next-generation products for its HPC portfolio at the show.

MBA Sciences

MBA Sciences develops commercial software products based on its patent-pending SPM technology, enabling users to exploit parallelism without needing to become proficient in parallel programming.

Its SPM.Python product, a scalable parallel version of the serial Python language, can be deployed to create parallel capabilities to solve problems in domains spanning finance, life sciences, electronic design, IT, visualisation, and research. Software developers may use SPM.Python to augment new or existing (Python) serial scripts for scalability across parallel hardware. Alternatively, SPM.Python may be used to better manage the execution of (non-Python) applications in parallel, in a fault-tolerant manner that takes hard deadlines into account.

Visit MBA Sciences at the Disruptive Technology booth to view demonstrations that create parallel, fault-tolerant scripts from scratch in response to live attendee requests. Solutions written in minutes will showcase unique capabilities that enable a broad spectrum of users to more easily produce correct-by-construction, robust scripts that exploit parallelism.



The Numerical Algorithms Group is dedicated to applying its unique expertise in numerical software engineering to delivering high-quality computational software and high performance computing (HPC) services. Its booth will include information on: applications on HECToR, the first production Cray XT6 in the world; how the latest NAG Library for SMP and Multicore delivers increased performance on multicore computers; the new NAG Library for the .NET environment; the latest release of the NAG Fortran Compiler; how GPGPU and Cuda 3.0 are used by the trusted NAG Libraries to deliver extra performance; how NAG enables new science on HECToR with code optimisations and training; academic and industry collaborations and awards; and advances in numerical methods, scalable HPC software and programming for manycore. Visitors will also have the opportunity to meet the company's HPC and numerical computing experts for advice on specific high performance computing challenges.



NextIO, a specialist in GPU consolidation appliances such as the vCore Express S2070, will be on hand to present the next big thing in GPU management. Visitors can see a live demonstration of the world's first modular GPU appliance, capable of housing up to eight Tesla GPUs supporting up to four server nodes. NextIO will demonstrate 'drag and drop' hot-plug reassignment of GPUs to a server without bringing down GPU applications on that server. The server does not have to be rebooted, the GPU applications do not stop, and the server enclosure does not have to be opened in order to add or remove GPU resources to a server node. The appliance provides a reconfigurable pool of GPU resources.


NICT Japan

NICT Japan conducts general research and development on information technology supporting the ubiquitous society, and supports businesses related to information communications. Exhibits and demonstrations at its booth include:
- Joint research with Kyushu and Ehime Universities, demonstrating realtime space weather forecast calculation, data transfer, and visualisation with GPU clusters and the JGN2plus network.
- A joint research project with GLIF, AIST and KDDI R&D Laboratories, which will show dynamic multipoint circuit provisioning across multiple domains in Japan, Korea and the US.
- Collaboration between dynamic circuit provisioning and applications such as TDW, with DCN path provisioning between Japan and the US on show.
- Joint research with NTT Laboratory, showing application-initiated network path provisioning and realtime measurement and visualisation of very precise streaming characteristics. This demonstration will also run on multidomain-connected virtual networks.
- An introduction to the JGN2plus network, Japan's largest testbed network, and its research topics, which aim towards the New Generation Network.



Numascale will present and demonstrate the newly-launched NumaConnect SMP adapters. Based on the NumaChip, NumaConnect enables the HPC community to build highly scalable shared memory (SMP) machines in hardware at a fraction of the cost of current alternatives.

NumaChip is a combined cache coherence controller and distributed switching fabric interfacing to AMD's coherent HyperTransport. NumaChip uses a directory-based cache coherence protocol, which is far more scalable than broadcast- or snooping-based protocols. The on-chip switch connects to neighbouring nodes in 2D or 3D torus topologies and can scale to 4,096 nodes. The remote caches can be configured up to 16GB per node, and the system can support up to 256TB of RAM. With NumaConnect and commodity servers, Numascale makes scalable SMP systems available at cluster pricing, to the benefit of research, academic and industrial users.



The Nvidia Tesla 20-series is designed from the ground up for high performance computing. Based on the next-generation Cuda GPU architecture, codenamed 'Fermi', it supports many 'must have' features for technical and enterprise computing. These include ECC memory for uncompromised accuracy and scalability, support for C++, and 8x the double precision performance compared with Tesla 10-series GPU computing products. Compared with the latest quad-core CPU, Tesla 20-series GPU computing processors deliver equivalent performance at 1/20th the power consumption and 1/10th the cost. Each Tesla GPU features hundreds of parallel Cuda cores and is based on the Nvidia Cuda parallel computing architecture, with a rich set of developer tools (compilers, profilers, debuggers) for popular programming languages and APIs like C, C++ and Fortran, and driver APIs like OpenCL and DirectCompute.

Portland Group

The Portland Group is developing a Cuda C compiler targeting systems based on the industry-standard, general-purpose 64- and 32-bit x86 architectures. The new PGI Cuda C compiler for x86 platforms will be demonstrated at SC10.

The PGI Cuda C compiler for x86 platforms will allow developers using Cuda to compile and optimise Cuda applications to run on x86-based workstations, servers and clusters with or without an Nvidia GPU accelerator. When run on x86-based systems without a GPU, PGI Cuda C applications will use multiple cores and the streaming SIMD (Single Instruction Multiple Data) capabilities of Intel and AMD CPUs for parallel execution.

PGI offers two programming models for GPU accelerators. PGI Accelerator is a high-level directive-based programming model targeting scientific and engineering-domain experts working in high-performance computing. PGI Accelerator compilers are currently available for C99 and Fortran 95/2003.
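The directive model described above can be pictured with a short sketch: a compiler that understands the pragma offloads the annotated loop to the GPU, while any other compiler ignores the pragma and runs the loop on the CPU, so one source serves both targets. The pragma spelling below follows the general PGI Accelerator style but is illustrative, not taken from PGI's documentation.

```c
#include <assert.h>

/* Directive-annotated loop in the PGI Accelerator style. The pragma
   spelling is illustrative: an accelerator-aware compiler would map
   the loop nest onto the GPU, while a compiler without accelerator
   support ignores the unknown pragma and the loop runs serially on
   the host, with identical results. */
static void scale(int n, float a, float v[n])
{
#pragma acc region
    {
        for (int i = 0; i < n; ++i)
            v[i] *= a;
    }
}

/* Host-side check, valid with or without an attached GPU. */
int scale_ok(void)
{
    float v[3] = {1.0f, 2.0f, 3.0f};
    scale(3, 10.0f, v);
    return v[0] == 10.0f && v[2] == 30.0f;
}
```

This fall-back-to-CPU property is the same portability goal the PGI Cuda C compiler for x86 pursues for explicitly written Cuda kernels.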


PRACE, the Partnership for Advanced Computing in Europe, is building a persistent pan-European Research Infrastructure (RI) to provide leading HPC services that enable world-class science and engineering for European academia and industry. The first production system, a 1 Petaflop/s IBM BlueGene/P (Jugene) at FZJ (Forschungszentrum Jülich), is available to European scientists.

The PRACE BoF at SC10, 'PRACE – The European HPC infrastructure created', held together with DEISA, the Distributed European Infrastructure for Supercomputing Applications, takes place on Wednesday 17 November from 12:15pm to 1:30pm. The session will present the latest news about the PRACE RI, covering its current status and future plans, the results expected during the EC-funded implementation projects, the integration of services currently provided by DEISA within the European HPC ecosystem, and collaboration opportunities for academia and industry.



SGI will be featuring a number of storage solutions for both scale-up and scale-out HPC environments, along with products for persistent data management. Key among these are the SGI InfiniteStorage 16000 and the Copan 400. The InfiniteStorage 16000 is a next-generation RAID system that meets the needs of both bandwidth-hungry and IOPS-heavy HPC workloads. Combined with server platforms like Altix UV and the CXFS shared file system, SGI can deliver I/O throughput well beyond typical competitive end-to-end offerings.

The Copan 400 is SGI’s MAID (Massive Array of Idle Disks) platform, which delivers cost-effective storage of long term data similar to tape systems, but with the reliability and performance of disk.

SGI will also be showing its latest server offering, with an architecture specifically designed to maximise the number of high-bandwidth PCIe slots, from hundreds to thousands. Codenamed 'Project Mojo', the platform will house this year's and future PCIe-based accelerators, maximising the amount of compute possible in a single cabinet. The server is based upon a 'stick' architecture that contains a couple of PCIe x16 slots, with a wrap of two slim motherboards, allowing connectivity into Ethernet and up to dual-plane InfiniBand networks.


Whamcloud will be exhibiting at the show. The company was formed from worldwide high-performance computing (HPC) storage industry veterans. It is focused on enabling application scaling and information insight through the evolution of HPC storage technologies, in collaboration with computing centres.