
Case study: Navigating the AI surge - advancements, challenges and the sustainable paths forward

[Image: computer server room]

The influence of artificial intelligence (AI) is already being felt across various domains of our lives, and its impact is only poised to grow.

In healthcare, AI-driven tools are reshaping diagnosis, treatment, and drug discovery, making personalised medicine more attainable. The transportation sector is on the cusp of a revolution with the advent of autonomous vehicles and AI-optimised traffic management. Finance is already experiencing AI’s power in fraud detection and algorithmic trading. The agriculture and energy sectors are leveraging AI for precision farming and optimising smart grids, respectively. Meanwhile, the retail landscape is being transformed by supply-chain optimisation, facial recognition to reduce theft and tailored shopping experiences. Whether it’s content recommendations in entertainment or voice generation in communication, AI’s imprint is evident, signalling a future intertwined with intelligent systems.

It’s no surprise that the rapidly increasing number of AI projects is fuelling an unprecedented demand for advanced computing systems. 

What insights would AMAX offer to those embarking on an AI project?

An optimal computing system is essential; it serves as the foundational pillar for AI project success. However, in the world of advanced computing, more powerful isn’t always better – it’s about finding the right fit.

Seek a trustworthy computing solutions partner and initiate collaboration at the outset. A partner can guide you in understanding your AI project’s needs and recommend the most fitting solution, be it cloud-based, on-premises, or a hybrid approach, based on your expertise, timelines, and budget. Moreover, a partner can connect you with industry experts and support programmes, ensuring you have a robust starting point rather than building from the ground up.

Nvidia’s AI platform is a prime example of the resources available for AI projects. With an expansive suite that spans from cloud to on-prem solutions, complemented by sophisticated software development kits (SDKs) and comprehensive learning programmes (the Deep Learning Institute), every aspect of AI development and deployment is covered. A wide range of workloads is supported: generative AI, LLM training and inference, data science, 3D design and collaboration, simulation, industrial digitalisation, rendering and 3D graphics, and video processing.

Selecting computing systems that anticipate future growth helps to avoid premature upgrades. However, not all AI projects demand the most advanced GPUs – “Often, L40s can be more suitable than H100s. Integrating software layers, such as GPU optimisation or data storage optimisation, can significantly enhance performance and energy efficiency,” says AMAX’s EMEA General Manager, Niall Smith.
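The “right fit” point can be made concrete with a back-of-the-envelope memory check. The sketch below is an illustrative heuristic, not AMAX guidance: the per-GPU memory figures (48 GB for an L40, 80 GB for an H100), the 2 bytes per FP16 parameter, and the 20% overhead allowance are assumptions chosen for the example.

```python
# Illustrative heuristic: does a model's inference footprint fit a given GPU?
# Memory figures (L40: 48 GB, H100: 80 GB) and the 20% overhead allowance
# are example assumptions, not vendor sizing guidance.

GPU_MEMORY_GB = {"L40": 48, "H100": 80}

def inference_footprint_gb(params_billions, bytes_per_param=2, overhead=1.2):
    """Rough FP16 inference footprint: weights plus a flat overhead
    allowance for activations, KV cache and runtime buffers."""
    return params_billions * bytes_per_param * overhead

def smallest_fitting_gpu(params_billions):
    """Return the smallest listed GPU whose memory covers the footprint,
    or None if the model needs to be split across multiple GPUs."""
    needed = inference_footprint_gb(params_billions)
    for gpu, mem in sorted(GPU_MEMORY_GB.items(), key=lambda kv: kv[1]):
        if mem >= needed:
            return gpu, needed
    return None, needed

if __name__ == "__main__":
    for size in (7, 13, 30):
        gpu, needed = smallest_fitting_gpu(size)
        print(f"{size}B params -> ~{needed:.0f} GB -> {gpu or 'multi-GPU'}")
```

Under these assumptions a 7B- or 13B-parameter model fits comfortably on an L40, which illustrates Smith’s point that the most advanced GPU is not always the appropriate one.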

Recently, sustainability considerations have become paramount when selecting computing systems. As our technological needs have grown, so has the demand for more efficient cooling methods. Traditional air-cooling systems are gradually giving way to innovative solutions that prioritise efficiency and environmental responsibility. If you’re in the market for advanced AI systems, it’s wise to explore these non-traditional cooling methods:

Direct-to-chip cooling 

This method delivers coolant directly to the hottest parts of the chip, offering a precise cooling solution. It increases energy efficiency by reducing the distance the coolant travels, ensuring rapid heat dissipation directly where it’s most needed.
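The efficiency argument for direct-to-chip cooling follows from basic thermodynamics: the heat a coolant loop carries away is Q = ṁ·cp·ΔT. The sketch below solves for the water flow rate needed to remove a chip’s heat; the 700 W chip load and 10 K coolant temperature rise are illustrative assumptions, not figures from the article.

```python
# Heat carried by a coolant loop: Q = m_dot * c_p * delta_T.
# Solving for m_dot gives the flow rate needed to remove a chip's heat.
# The 700 W load and 10 K temperature rise are example figures.

WATER_CP = 4186.0       # specific heat of water, J/(kg*K)
WATER_DENSITY = 1000.0  # kg/m^3

def required_flow_l_per_min(heat_w, delta_t_k, cp=WATER_CP):
    """Mass flow (kg/s) from Q = m_dot * cp * dT, converted to L/min."""
    mass_flow_kg_s = heat_w / (cp * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY * 1000 * 60

if __name__ == "__main__":
    flow = required_flow_l_per_min(heat_w=700, delta_t_k=10)
    print(f"~{flow:.2f} L/min of water removes 700 W at a 10 K rise")
```

Roughly one litre per minute suffices in this scenario, which is why piping coolant directly to the die is so much more compact than moving the equivalent heat with air.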

Rear-door heat exchangers

Fitted to the back of server racks, these systems use chilled water to absorb the heat generated by the servers. They’re efficient in that they directly capture and remove heat at the source before it enters the ambient environment, reducing the need for extensive room cooling.

Immersion cooling 

This innovative approach involves submerging server components in a non-conductive liquid coolant. Heat from the components is directly transferred to the fluid, which then circulates and dissipates the heat. Immersion cooling is particularly efficient for high-density setups and can significantly reduce energy consumption compared with traditional methods.

Liquid cooling emerges as the indisputable future of computing

Embracing advanced liquid cooling methods delivers a multitude of benefits, merging environmental mindfulness with substantial monetary savings. These systems not only reduce energy consumption, but also extend hardware longevity, ensure stable performance, and reduce noise and physical space requirements. As chips become increasingly power-intensive and the call for eco-friendly practices intensifies from customers, financial entities, and regulatory bodies alike, liquid cooling emerges as the indisputable future of computing. Consider adopting these technologies not only to strengthen your operational efficiency, but also to strategically position your enterprise at the forefront of sustainable innovation.
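One way to quantify the monetary-savings claim is through power usage effectiveness (PUE), the ratio of total facility power to IT equipment power. The sketch below compares a hypothetical air-cooled facility with a liquid-cooled one; the PUE values (1.6 and 1.1), the 500 kW IT load and the £0.20/kWh tariff are all illustrative assumptions for the example.

```python
# PUE = total facility power / IT equipment power.
# Comparing two PUE values gives the cooling-overhead energy saved.
# All inputs (PUE values, IT load, tariff) are illustrative assumptions.

HOURS_PER_YEAR = 8760

def annual_facility_kwh(it_load_kw, pue):
    """Total facility energy for a year of continuous operation."""
    return it_load_kw * pue * HOURS_PER_YEAR

def annual_saving(it_load_kw, pue_old, pue_new, price_per_kwh):
    """Energy (kWh) and cost saved by moving from pue_old to pue_new."""
    saved_kwh = (annual_facility_kwh(it_load_kw, pue_old)
                 - annual_facility_kwh(it_load_kw, pue_new))
    return saved_kwh, saved_kwh * price_per_kwh

if __name__ == "__main__":
    kwh, cost = annual_saving(it_load_kw=500, pue_old=1.6, pue_new=1.1,
                              price_per_kwh=0.20)
    print(f"~{kwh:,.0f} kWh and ~£{cost:,.0f} saved per year")
```

Even at this modest scale, shaving half a point of PUE saves on the order of two gigawatt-hours a year, which is the kind of arithmetic behind the pressure from customers, financiers and regulators described above.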

Adopt a step-by-step approach

To truly champion sustainability in your computing systems, adopt a step-by-step approach. Initiate your projects on a smaller scale and expand incrementally. This not only reduces your initial infrastructure demands but, after validating your concept, allows for measured growth.

Look into using or repurposing your existing infrastructure

Embrace cloud solutions such as Nvidia DGX Cloud; they can offer you a significant advantage right out of the gate. If on-site data storage is non-negotiable, consider the benefits of a liquid-cooled workstation. Units such as AMAX’s LA-2, quiet and compact, equipped with an AMD EPYC 9003 CPU and seven Nvidia L40 GPUs, can be deceptively powerful despite their unassuming presence in an office corner.

Keep an eye on the horizon regarding networking and storage: it’s preferable to have systems that accommodate growth rather than necessitate frequent upgrades.

Smith concludes: “The rapid evolution of AI projects is something we have witnessed firsthand. Just 18 months ago, an AI voice generation start-up reached out to us for a mere two GPUs. Within eight months, it ramped up to a four-GPU server. Fast forward to the present, and we’ve just dispatched a cutting-edge cluster equipped with a staggering 64 Nvidia H100 GPUs and a colossal 1PB of storage, all tailored for its on-prem AI model training.

“While AI projects hold immense promise, it’s crucial that we approach them with an intentional focus on efficiency to reduce their environmental footprint.”

For more information, visit
