When Scientific Computing first explored the design of laboratory robots for de-risking the discovery of novel compounds, the focus was firmly on the physical challenge: enabling robots to operate safely and effectively in human-designed environments. Since then, progress has been rapid, but not always as initially imagined.
Rather than fully autonomous robots roaming freely through laboratories, most early deployments have focused on bench-based integration. This approach reflects a practical reality: laboratories are complex, crowded, and highly customised spaces. Allowing robots to move freely would often require a complete redesign of the lab itself. For now, embedding automation at the bench, where instruments, workflows, and safety controls are already well-defined, offers a more realistic, lower-risk path. That said, the trajectory is clear, and as lab design evolves, more mobile and spatially aware robotic systems may yet become commonplace.
Autonomous Robotic Systems for Laboratory Experiments (ALBERT) is a multidisciplinary research initiative and Centre for Doctoral Training (CDT) based primarily at the University of York. It aims to develop a "self-driving" laboratory environment in which robots perform complex scientific experiments with minimal human intervention. Funded by the Engineering and Physical Sciences Research Council (EPSRC) and the University of York, the centre seeks to train a new generation of researchers equipped to tackle autonomous laboratory systems at the intersection of chemistry, robotics, AI, and sociology.
The ALBERT CDT opened for applications for fully funded PhD projects starting in 2024, covering a range of topics from robotic design and verification to adaptive autonomous systems.
The ALBERT CDT also organised a special session at TAROS 2024 (Towards Autonomous Robotic Systems, the annual UK robotics conference). This session gave PhD students a platform to present their work, indicating early progress and community engagement around autonomous lab systems research.
Over the past 12 months, work on the Autonomous Robotic Systems for Laboratory Experiments (ALBERT) project has continued, so SCW revisited the project and spoke to co-coordinator Professor Ian Fairlamb to discuss how research on the use of robots in chemistry is shaping progress in laboratory automation.
For more information about ALBERT and Professor Fairlamb's background, check the previous interview.
Does the push for automation shape the training and development of the next generation of laboratory scientists?
Professor Fairlamb: Yes, absolutely. That’s already happening in my group. I’d consider us a progressive group, and there are others like us in the UK and internationally, but many groups are still slow to adopt these approaches. Even in my own group, there was some initial scepticism, though most people are now moving in this direction. I have five or six researchers who are fully engaged and already have many of the necessary skills.
Two and a half years ago, I appointed a PhD student with a mathematics degree for the first time in my career. I also now supervise joint students with the Institute for Safe Autonomy, who focus on robotics. That shows the skills we’re bringing in. These students work closely with chemists, and the training we run, through initiatives like ALBERT, deliberately immerses automation and robotics students in real chemistry labs so they understand day-to-day experimental work. That cross-disciplinary interaction has been very positive.
What’s lagging behind is undergraduate chemistry education. While the Royal Society of Chemistry has given some direction on this, change has been slow. Coding is important, but we could also be doing more around data science tools. Even basic scripting to automate simple workflows is still a challenge for many students. Yet if you look at job adverts in what I’d call the “molecule maker” space, the expectations have shifted. Making molecules is no longer just about running reactions. Employers increasingly expect technical skills, programming literacy, data science experience, and the ability to critically evaluate machine learning models. Graduates with that full skill set tend to find jobs quickly and are very well paid. It’s a good space to be in, and once that becomes more widely understood, I think we’ll see even more momentum in that direction.
Do you think coding itself is becoming essential for chemistry undergraduates?
Fairlamb: At this stage, it’s primarily about awareness. Of course, the more experience someone has, the better. Students who tend to excel are often those who developed these interests early. In fact, I’m frequently learning new things from the current generation of students, which is a real positive. Not all students come in with that background, though, and many develop these skills through personal interest rather than formal training. Those students are particularly well-positioned for the future.
There’s been discussion about AI and automation leading to job losses. Is that a concern, or do you think chemists will simply move into different roles?
Fairlamb: My experience suggests the latter. My research group is larger now than it’s ever been. Adopting these technologies actually requires bigger teams. The science we’re doing is more ambitious, and problems that might once have taken decades can now be tackled in a few years. Many groups worldwide are pursuing similarly large-scale challenges.
We still need people with deep technical expertise. I have a senior research technician in my group who is as critical as the equipment itself. In fact, he’s a single point of failure, just like the equipment is. In industry, that role would be supported by a team; in academia, PhD students often help fill that gap. I’ve seen no evidence that automation will reduce jobs. If anything, I expect growth.
Human oversight will always be essential. We talk about self-optimising chemical reactions, but optimisation depends on the parameters you choose. Humans must determine the parameters, evaluate the results, and ensure safety. The same technologies that can produce new anticancer compounds could also be used to create highly energetic materials. That dual-use potential underscores the need for oversight.
So there still needs to be oversight?
Fairlamb: Exactly. There will likely be more oversight, especially in the early stages. This also raises questions about regulation. We’ve seen rapid development of regulatory frameworks for autonomous vehicles, but comparable frameworks for automated laboratory systems remain underdeveloped. When I speak to colleagues in law, they often point out that current regulations assume a traditional lab setup. That may need to change, particularly to address bias in models and algorithms.
There will probably be more desk-based work for future synthetic chemists. Personally, I’d be comfortable with that. I was trained in a traditional manner and spent long hours in the laboratory during my PhD, often working with hazardous or unpleasant materials. While experimentation is exciting, from mixing chemicals to watching colour changes, it’s really the analysis that matters: NMR, mass spectrometry, crystallography, and the interpretation of results. There’s a risk of losing some of the hands-on “tinkering”, but I hope that’s balanced by deeper problem-solving using our brains rather than our hands.
Finally, do you think regulatory or policy changes are needed to ensure responsible lab automation?
Fairlamb: Yes, but first we need a more detailed conversation. There are people already raising these issues, but it needs broader engagement. As with many scientific advances, tools developed for good can also be misused. We need open discussions about safeguards and risk reduction.
Self-optimising systems already exist and can be tuned to optimise greener metrics or cleaner reactions. As these tools become more accessible, the questions become more pressing. Learned societies like the Royal Society of Chemistry, and others beyond chemistry, should be involved in shaping these discussions.
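The point that a self-optimising system only explores the parameter space and objective that humans define can be illustrated with a toy sketch in Python. Everything here is invented for illustration: the simulated yield function stands in for a real instrument measurement, and a real system would use something like Bayesian optimisation rather than a grid search.

```python
# Toy illustration: the search is automated, but the scope is not.
# Humans choose the parameter grid and the objective; the optimiser
# can never step outside them. The yield model is a made-up stand-in
# for a real measured response surface.
from itertools import product

def simulated_yield(temp_c, catalyst_mol_pct):
    # Hypothetical response surface peaking at 80 degrees C and 2 mol% catalyst.
    return 100 - 0.05 * (temp_c - 80) ** 2 - 10 * (catalyst_mol_pct - 2) ** 2

# Human-chosen search space: these bounds encode safety and scope decisions.
temperatures = [60, 70, 80, 90]
catalyst_loadings = [1.0, 2.0, 3.0]

best = max(product(temperatures, catalyst_loadings),
           key=lambda p: simulated_yield(*p))
print(best)  # → (80, 2.0)
```

Changing the objective, for example weighting a greener-solvent metric alongside yield, changes what "best" means, which is exactly why the choice of metrics remains a human responsibility.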
There’s also a competitive aspect: even if one lab chooses not to adopt these systems, others will. That makes a coordinated, community-wide approach essential. Right now, we’re somewhat protected by the technical complexity and expertise required. But as equipment becomes cheaper and algorithms become openly available, the barriers will fall. Concepts like a “lab in a suitcase” illustrate both the benefits and the risks. Such technology could be invaluable in extreme environments, but also raises serious questions about how and where it might be used.
That’s why this conversation needs to happen now.
Ian Fairlamb is a professor in the Department of Chemistry at the University of York and co-director of Autonomous Robotic Systems for Laboratory Experiments (ALBERT).