Scientific Computing World has launched SCW75, an annual programme to recognise the individuals driving innovation at the intersection of computing and research. The inaugural 2026 list will honour 75 exceptional leaders who are actively transforming their organisations through strategic deployment of scientific computing solutions to further their research goals.
Scientific Computing World editor Robert Roe answers some questions on the search for the superstars of scientific computing.
What prompted you to launch the SCW75 recognition programme now, after 30 years of serving the research computing community?
Over the past few years, we've all witnessed an extraordinary acceleration in how computing underpins research across every scientific discipline, whether that research is carried out in universities, state bodies or the private sector. The Covid pandemic certainly highlighted this – suddenly everyone could see how critical computational infrastructure was for everything from vaccine development to climate modelling. But what struck us most was that while we regularly cover breakthrough technologies and systems, the individuals driving strategic decisions about computing infrastructure rarely receive the recognition they deserve.
These are people making millions of pounds’ worth of decisions that directly impact whether research succeeds or fails, yet they often work behind the scenes. After three decades of covering this space, we felt we had both the credibility and the responsibility to shine a light on these leaders.
Why did you settle on 75 individuals as the number for this inaugural list, and how did you decide on the three specific categories?
Honestly, alliteration played a part, but we also felt 75 was a substantial enough number to represent the breadth of the field without being so large that inclusion becomes less meaningful. One of our sister publications, Electro Optics, has been running a similar programme, The Photonics100, for several years now. Based on their experience and advice, 25 people in each of three categories felt like the right balance to ensure depth within each domain while maintaining selectivity. And it gives us room to grow.
As for the categories themselves, they emerged from looking at where computing investment and innovation are actually happening in research organisations. HPC remains the backbone of large-scale scientific computing, but we're seeing enormous growth in simulation-driven R&D, particularly in engineering and climate science. And laboratory informatics has become absolutely critical – you can have all the computing power in the world, but if you can't manage and analyse your experimental data effectively, you're stuck. These three areas capture where the most impactful decisions are being made right now.
You've chosen quite distinct domains – HPC, computational simulation, and lab informatics. How do these three areas complement each other in telling the story of modern scientific computing?
They're more interconnected than people might initially think. A life sciences researcher might use HPC infrastructure to run molecular dynamics simulations, then feed those results into a laboratory information management system (LIMS) where they're correlated with experimental data.
Or consider climate research – you need HPC for the computational models, sophisticated simulation frameworks for scenario analysis, and robust data management systems to handle the petabytes of output. What we're really recognising is the entire ecosystem of computational research. It's not just about having the fastest supercomputer or the cleverest algorithm; it's about the people who understand how these pieces fit together strategically to advance research outcomes. That integrated perspective is what distinguishes truly effective research computing leadership.
Can you walk us through what happens between the initial nomination and the final selection? What makes a nominee stand out during the evaluation process?
The initial nomination is deliberately quick – we want to lower the barrier for people to put forward worthy candidates. Once nominations close at the end of February, our editorial team reviews every submission and creates a shortlist.
In early March, those shortlisted candidates receive an invitation to complete a more comprehensive form. That's where we're looking for specifics: What have you actually led in the past two years? What changed as a result? We're particularly interested in leaders who've navigated challenges – perhaps migrating legacy systems to the cloud, or securing buy-in for a major infrastructure investment during budget constraints. The nominees who stand out are those who can clearly articulate not just what they did, but why it mattered and how it moved their organisation's research mission forward. We're also keen to see evidence of advocacy – are they championing best practices, mentoring others, or influencing policy within their institution?
That detailed form asks for "measurable impact on research outcomes". What kinds of metrics or evidence do you find most compelling when assessing someone's contribution?
We're looking for tangible results, but we recognise that impact manifests differently across domains.
For an HPC leader, it might be demonstrable improvements in job throughput, reduced time-to-results for critical projects, or successful deployment of new architectures that enabled research that was previously impossible.
For someone in laboratory informatics, perhaps they've reduced data processing bottlenecks from weeks to days, or implemented systems that improved reproducibility and compliance. We're also interested in research outputs that wouldn't have been possible without their computing initiatives – publications, patents, successful grant applications.
But beyond the numbers, we value qualitative impact too: Have they transformed their organisation's culture around scientific computing? Have they built capabilities that will serve researchers for years to come? The most compelling cases combine hard metrics with a clear narrative about organisational transformation.
How will you ensure representation across different sectors – academia, government labs, and industry – as well as geographic diversity?
That's absolutely front of mind for us. Our editorial panel will be actively monitoring the pool to ensure we're not inadvertently skewing towards any particular sector or region. Scientific Computing World has always had a global readership and we're committed to reflecting that. We're encouraging nominations from across all sectors and geographies, and we're prepared to do direct outreach if we notice gaps emerging.
That said, we're not applying rigid quotas – ultimately, we're recognising excellence and impact. But excellence exists everywhere, and if our final list doesn't reflect genuine geographic and sectoral diversity, that tells us we need to work harder on outreach. We're also conscious of ensuring we're not just recognising the usual suspects from the largest, best-funded institutions or commercial operators, although they are of course welcome. Some of the most innovative computing leadership happens in resource-constrained environments where people are doing remarkable things with limited budgets.
Self-nominations are explicitly welcomed. Do you find that self-nominated candidates approach the application differently than those nominated by colleagues?
We've found in other contexts that self-nominations can actually be quite refreshing. People nominating themselves often provide more detailed, specific information about their work because they're intimately familiar with the nuances. There can sometimes be a reluctance to put oneself forward, but frankly, if you've led significant computing initiatives that have genuinely transformed research outcomes, we want to hear about it directly from you.
What matters isn't who submitted the initial nomination – it's the quality and impact of the work described in the detailed submission. We judge every application on the same criteria regardless of its origin. In fact, self-nomination can demonstrate a healthy confidence and professional self-awareness, both of which are valuable qualities in leadership.
Beyond the recognition itself, what do you hope the SCW75 programme will achieve for the scientific computing community more broadly?
Several things, really. First, we want to elevate the profile of research computing leaders as a distinct professional identity, whether in academia, government research institutions, or commercial entities such as pharmaceutical manufacturers and automotive companies. Too often, these roles are seen as purely technical or operational, when in reality they require strategic vision, change management skills, and deep understanding of research needs. By celebrating these leaders publicly, we're making a statement about the value and complexity of this work.
Second, we hope to facilitate knowledge-sharing and peer learning. The 75 individuals on this list will represent an enormous collective knowledge base – imagine the insights that could emerge from connecting them.
Third, there's a talent pipeline consideration. We want early-career professionals in research computing to see pathways and role models.
And finally, we hope this encourages organisations to invest more thoughtfully in computing leadership, recognising that having the right person making strategic decisions about infrastructure and data can be just as important as the technology itself.
The programme's listed benefits mention panel discussions and roundtables. How do you envision facilitating connections between the 75 leaders once they're selected?
In truth, we already run virtual roundtables and webcasts focused on specific challenges. This is about extending that network of speakers to include these research leaders, facilitating intimate discussions – perhaps involving four to eight honorees at a time – where people can speak candidly about what's actually working and what isn't. These topic-focused roundtables can be supported by in-person events and perhaps even our own conference further down the line. This is something that has worked well for our stablemates in photonics.
Beyond structured events, we also aim to create a forum or network where members can connect directly, ask questions, and share knowledge. The key is making this genuinely useful rather than just ceremonial. These are busy people – any networking opportunity needs to offer real value, whether that's solving an immediate problem, forming a collaboration, or simply knowing who to ring when facing a particular challenge.
You mention that recognition could help with attracting funding. Have you seen evidence of this kind of professional recognition translating into tangible career or project benefits?
Absolutely. In research environments, credibility and visibility matter enormously when competing for limited resources. Being able to point to external recognition – particularly from a respected, independent publication – adds weight when you're making the case for a major infrastructure investment or competing for institutional funding. It signals to senior leadership and funding bodies that you're not just competent, but recognised as a leader in your field. We've seen this with other recognition programmes; award winners report that it's helped in promotion cases, made it easier to attract talented team members, and strengthened grant applications.
There's also a confidence factor – external validation can give leaders the assurance to push for bolder initiatives. And for organisations, having recognised leaders on staff is a recruitment and reputation tool. It's not the primary reason we're doing this, but yes, we believe recognition translates into tangible professional benefits.
What responses have you received so far from the community about this initiative?
The response has been overwhelmingly positive, actually. We've heard from research computing professionals who feel this is long overdue – people who've been quietly doing exceptional work and appreciate that someone's finally paying attention. We've also had enthusiastic responses from senior researchers and university administrators who recognise how critical computing leadership is to their institution's success.
There's been particular interest in the laboratory informatics category; that community has felt somewhat underrepresented in traditional HPC-focused recognition. A few people have asked whether 75 is too few, which would be a nice problem to have – it suggests there's a depth of talent out there. We've also had queries from international colleagues wanting to ensure their regions will be well represented. The general sentiment seems to be that this initiative is both needed and timely.
As this becomes an annual programme, how do you plan to evolve it? Will previous honorees remain part of an ongoing network?
Very much so. We envision building a cumulative community – by year five, you'd have nearly 400 recognised leaders who could form quite a powerful network. We're considering an alumni programme where previous honorees could mentor new members, participate in advisory panels, or contribute thought leadership pieces. We might also introduce special recognition for sustained impact – perhaps acknowledging someone from a previous year's list who's gone on to achieve something particularly significant.
As the programme matures, we'd like to gather proper insight into career trajectories and organisational impacts to demonstrate the value of investing in computing leadership. And we're open to evolving the categories as the field changes. In five years, we might need a category specifically for AI infrastructure leadership, or quantum computing, or something we can't yet anticipate. The programme needs to remain relevant to where scientific computing is actually heading.
Are there any categories you considered but didn't include in this inaugural year, perhaps for future expansion?
We had robust discussions about several other areas. Research software engineering was a strong contender – there's extraordinary work happening there, and those individuals often bridge computing and domain science in fascinating ways. We also considered a category for research data management and preservation, which is increasingly critical as we think about reproducibility and open science. Cybersecurity in research computing was another area that came up; it's not glamorous, but it's absolutely essential, and the people managing security in open research environments face unique challenges.
For the inaugural year, we wanted to start with three well-defined categories that would resonate broadly across the research computing community. But I'd be surprised if we didn't expand in future years. We're also interested in hearing from the community about what's missing – if there's overwhelming sentiment that we've overlooked a critical area of scientific computing leadership, we'll certainly consider it for 2027.
Robert Roe is the editor of Scientific Computing World and leads the internal judging panel for the SCW75 programme. If you have any queries, please contact him at robert.roe@europascience.com.