Tech Focus: Memory and processors


A round-up of the latest processing and memory technologies

The push for faster processing and greater memory bandwidth has driven steady increases in computing performance. The HPC market, in particular, was long dominated by a single vendor, and x86 systems based on Intel CPUs became the de facto standard. In recent years, however, new options have emerged, both as direct x86 competition from AMD and as alternative architectures such as Arm and IBM Power.

The picture is further complicated by growing demand for AI/ML workloads, which often require highly parallel computing elements such as GPUs or FPGAs. Increasingly, HPC systems are built from heterogeneous components to support the convergence of HPC and AI on a single computing platform.

Accelerator technologies can, in some cases, deliver enormous computing performance, with some configurations packing eight GPUs into a single server. The trend towards low-power CPUs paired with high-power accelerators is continuing in both HPC and AI/ML. High-profile supercomputing contracts for both AMD and Arm in the last 18 months demonstrate the increasing choice available in the HPC processor market.

With increasingly complex challenges in science and engineering, researchers are looking to larger computing systems that require fast networking to deliver the data they need for their experiments and research projects.



Featured product: Setting the bar for enterprise AI infrastructure

Fueled by the insatiable demand for better 3D graphics, and the massive scale of the gaming market, NVIDIA has evolved the GPU into a computer brain at the exciting intersection of virtual reality, high performance computing, and artificial intelligence.

From speech recognition and recommender systems to medical imaging and improved supply chain management, AI technology is providing enterprises the compute power, tools, and algorithms their teams need to do their life’s work.

Achieve breakthroughs in AI innovation with NVIDIA, unleashing data science productivity with effortless experimentation that can unlock insights from data. Avoid wasting time on software engineering and system integration, with pre-built, pre-optimised AI tools that let you enjoy the fastest time-to-solution on your most complex models.

Purpose-Built for the Unique Demands of AI

NVIDIA DGX Station A100 brings AI supercomputing to data science teams in the office or at home, offering data center performance without a data center. It’s the only system with four fully interconnected NVIDIA A100 Tensor Core GPUs with up to 320 GB of GPU memory and support for Multi-Instance GPU (MIG) that plugs into a standard wall outlet, resulting in a powerful AI appliance that you can place anywhere.
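A quick back-of-the-envelope check of those figures (assuming the 80GB A100 variant, since "up to 320 GB" across four GPUs implies 80 GB each; A100 MIG supports up to seven instances per GPU):

```python
# Illustrative arithmetic for the DGX Station A100 configuration above.
# Assumption: the 80 GB A100 variant (4 x 80 GB = "up to 320 GB").
GPUS = 4
MEM_PER_GPU_GB = 80
MAX_MIG_INSTANCES_PER_GPU = 7   # MIG can partition an A100 into up to 7 slices

total_mem_gb = GPUS * MEM_PER_GPU_GB                 # aggregate GPU memory
max_instances = GPUS * MAX_MIG_INSTANCES_PER_GPU     # isolated GPU instances

print(total_mem_gb, max_instances)  # 320 28
```

So a single four-GPU station can, in principle, be carved into as many as 28 isolated GPU instances for a team of data scientists.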




Featured product: In the Quest for Higher Learning, High Density Servers Hold the Key

With the right tools in place, scientists can accelerate their research, analyse massive amounts of information, and complete more data-intensive projects. Multi-node servers combine compute, storage, and networking at lower TCO and greater efficiency.

Based on AMD EPYC™ processors, GIGABYTE multi-node systems are designed for high density and are among the densest air-cooled AMD EPYC platforms on the market.

Using the short-depth 2U4N chassis, with dedicated CPUs per node and room left over for additional networking, up to 10,240 cores and 20,480 threads can fit in 40U of rack space.
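The quoted density can be sanity-checked with simple arithmetic (assuming dual-socket nodes with 64-core, SMT-enabled EPYC CPUs; the exact SKU is not stated, so those per-node figures are an assumption):

```python
# Back-of-the-envelope check of the quoted rack density.
# Assumptions (not stated in the text): dual-socket nodes with
# 64-core EPYC CPUs and 2 threads per core (SMT enabled).
RACK_U = 40
CHASSIS_U = 2            # 2U4N form factor
NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 64
THREADS_PER_CORE = 2

chassis = RACK_U // CHASSIS_U             # 20 chassis per rack
nodes = chassis * NODES_PER_CHASSIS       # 80 nodes
cores = nodes * SOCKETS_PER_NODE * CORES_PER_SOCKET
threads = cores * THREADS_PER_CORE

print(cores, threads)  # 10240 20480
```

Under those assumptions the numbers line up exactly: 20 chassis of four dual-socket nodes yield 10,240 cores and 20,480 threads per 40U rack.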

This not only provides an extremely dense compute configuration in a compact footprint, but also helps to greatly reduce TCO.

GIGABYTE’s H-Series systems deliver virtually identical performance to four 1U servers while reducing rack space by 50 per cent, power consumption by 4 per cent, the number of power supplies by 75 per cent, and the number of base power/1GbE/management cables by 56 per cent.


Other products

Achronix is a fabless semiconductor corporation in Santa Clara, California, offering high-performance field programmable gate array (FPGA) solutions.

Its FPGA and eFPGA IP offerings are further enhanced by ready-to-use PCIe accelerator cards targeting AI, ML, networking and data centre applications. All of Achronix’s products are supported by best-in-class EDA software tools.

Aldec is an industry-leading electronic design automation (EDA) company delivering innovative design creation, simulation and verification solutions to assist in the development of complex FPGA, ASIC, SoC and embedded system designs.

With an active user community of over 35,000, 50-plus global partners, offices worldwide and a global sales distribution network in more than 43 countries, the company has established itself as a proven leader in the verification design community.

Alpha Data is a global supplier of high-performance commercial off-the-shelf (COTS) reconfigurable computer platforms and support software. The company specialises in the strategic market areas of digital signal processing (DSP), imaging systems, communications, military and aerospace, and HPC.

AMD EPYC processors are built to handle large scientific and engineering datasets with top performance – ideal for HPC workloads, compute-intensive models and analysis techniques. Used by some of the world’s fastest, most scalable data centres and supercomputers, AMD EPYC helps deliver more innovation and faster results.

Arm HPC solutions, including Arm Neoverse, address the needs of the HPC community today and in the future. Arm’s open IP licensing model and consistent architectural advancements provide CPU and system designers the freedom and flexibility to innovate supercomputer design – as evidenced by the collaboration between Arm, Fujitsu and Riken in delivering Fugaku.

BittWare provides solutions based on FPGA technology from Intel (formerly Altera) and Xilinx for demanding applications such as data centre, military and aerospace, government, instrumentation and test, financial services and broadcast and video.

The FPGA value proposition for HPC has strengthened significantly in recent years. Working alongside CPUs, FPGAs provide part of a heterogeneous approach to computing.

For certain workloads, FPGAs provide significant speed-ups versus CPUs – in some cases 50x faster for machine learning inference.

Cerebras is a computer systems company dedicated to accelerating deep learning. The Wafer-Scale Engine (WSE) – the largest chip ever built – is at the heart of its deep learning system, the Cerebras CS-1. Fifty-six times larger than the next-largest chip, the WSE delivers more compute, more memory and more communication bandwidth, enabling AI research at previously impossible speed and scale.

Intel offers a comprehensive portfolio to help customers achieve outstanding performance across diverse workloads. With deep learning acceleration built directly into the chip, Intel hardware is designed to support the convergence of AI and HPC.

And Intel software, including a development toolkit and multi-architecture programming model, plus a broad software ecosystem, helps ensure HPC users get more value from their hardware and application investments.

Taking full advantage of Kalray’s patented technology, the Kalray MPPA Coolidge processor is a scalable 80-core processor designed for intensive real-time processing. As data processing acceleration becomes critical for many applications, Kalray processors offer a unique, fully programmable alternative to GPUs, ASICs and FPGAs.

The Coolidge processor can run a wide set of heterogeneous processing and control tasks – such as AI, mathematical algorithms, inline signal processing, and network or storage software stacks – in parallel and in real time.

Kingston server SSD and memory products support the global demand to store, manage and instantly access large volumes of data in both traditional databases and big data infrastructure.

The need to store and manage larger amounts of data has increased exponentially in recent years. Data centres, cloud services, edge computing, internet of things and co-locations are just some of the business models that amass tremendous volumes of data.

Marvell offers a broad portfolio of data infrastructure semiconductor solutions spanning compute, networking, security and storage. The company’s products are deployed by organisations in enterprise, data centre and automotive data infrastructure market segments that require ASICs or data processing units equipped with multi-core, low-power Arm processors.

MemVerge’s Memory Machine virtualises DRAM and persistent memory so that data can be accessed, tiered, scaled and protected in-memory.

The software-defined memory service is designed to be compatible with existing applications. This service provides access to persistent memory without changes to applications.
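The general mechanism that makes this possible – byte-addressable data that survives a process exit – can be illustrated with a plain memory-mapped file. This is a generic sketch, not MemVerge’s API: the file path and page size here are arbitrary choices for demonstration.

```python
# Generic illustration of byte-addressable persistence via mmap.
# Not MemVerge's API -- just the underlying idea that data written with
# ordinary memory operations can outlive the process that wrote it.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.truncate(4096)            # reserve one page of backing store

# Writer: update the region with plain memory writes, no read()/write() calls.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[:5] = b"hello"            # looks like an in-memory store to the app
    m.flush()                   # push the dirty page to the backing store
    m.close()

# Reader (could be a later process): map the same region and read it back.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    data = bytes(m[:5])
    m.close()

print(data)  # b'hello'
```

A memory-virtualisation layer such as Memory Machine interposes this kind of mapping transparently, which is why applications can gain persistence without source changes.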

Micron’s technology is powering a new generation of faster, intelligent, global infrastructures that make mainstream AI possible.

Micron’s fast, vast storage and high-performance, high-capacity memory and multi-chip packages power AI training and inference engines – whether in the cloud or embedded in mobile and edge devices.

Scientists leverage HPC as their tool for discovery and path to solving some of the world’s most impactful and complex problems. Accelerating time to insight and deploying HPC at scale requires incredible amounts of raw compute capability, energy efficiency and adaptability that are uniquely found in the Xilinx platform.

With custom datapaths and memory hierarchies, and a rich developer toolset, Xilinx FPGA accelerated applications can enable optimised hardware and software implementations with the flexibility to adapt to changing requirements without sacrificing performance and energy efficiency.