The rise of the machines

WALL-E, the latest animated feature from Pixar (if you have been pupating in a cave for the past few months, just pronounce it ‘Wally’), is about a lot of things, but not about robots. This perceptive observation, from a 12-year-old next to me in the Odeon, could equally be applied to robotics: a field that is mostly about carrying out other tasks, or gathering other knowledge, or improving other efficiencies, as a byproduct of which robots are produced.

Both of the film’s heavily anthropomorphic protagonists are designed to operate in an environment where humans cannot, to carry out tasks that humans will not, with untiring commitment. WALL-E himself is an executor of physical tasks; the more highly developed EVE exists to gather, store, process and act upon information for long periods without external control. Both are expressions of strongly specified functional directives. Real-world robotics in 2008 is, in many ways, poised between the two stages: pretty good at building WALL-E class machines, and working hard towards EVE.

A literature search for the past year or so shows robotics to be currently more concerned with the internal world of the mammalian body than with WALL-E’s devastated external landscape. Medical sources account for well over half of published peer-reviewed results, with surgery heavily predominating through titles such as Robotic vs open radical cystectomy [1]. Introduce EVE-like criteria and the balance shifts; but about seven per cent of published work in autonomous systems, for instance, is still specifically related to medicine. My own interest is mainly in EVE – that is to say, in the data analytic aspects of robotics; but robotics and data analysis are everywhere inseparable. To solve robotics problems is to solve increasingly sophisticated problems in the real-time handling and analysis of data to inform action, and robotic progress is a wavefront moving through successive classes of such problems.

Despite originating, respectively, from 100 and 800 years in the future, both WALL-E and EVE seem surprisingly deficient in the design of their physical manipulators: hands, to you and me. WALL-E is, admittedly, only intended to be a glorified JCB, but even the otherwise sophisticated and super elegant EVE has only a set of spatulate unjointed plates for fingers. Back here in the present, by contrast, remarkable things are being done.

In surgery, the development of haptic manipulators that provide sensory feedback to the hand of the operator is an area of major effort for companies such as Quanser Consulting, who have off-the-shelf touch-sensitive feedback systems allowing a (thus far) human agent to work remotely with fingertip sensitivity. The operational environment is Matlab and Simulink, development taking place in Maple. Latest developments are showcased in high-precision, context-specific tools, such as a solidly mounted scalpel holder with micro control, but the technology could equally well be applied to biomimetic general-purpose manipulators. The Dextrous Hand, an opposable-thumb manipulator with pneumatic muscles from the Shadow Robot Company, for example, can be fitted with touch sensors.

Transrapid Shanghai maglev train (foreground) and Grumman maglev development work in Maple (background).

In the long run the feedback loop will inevitably be closed, passing such control developments from human agent to the robot itself in ever-expanding classes of decision-making circumstance. Robots controlling their own actions are, so far, held back only to a diminishing extent by technological considerations; psychosocial reluctance to hand over the final human veto is at least as important.

Ignoring fairly trivial domestic devices, this veto is reduced to its most vestigial in military and industrial applications. Internal decision making at the functional level is increasingly automated, only decision points that can produce a potentially undesirable outside effect being referred to human arbitration. Even there, the influence is frequently limited to binary decisions based on machine-provided information – with ‘no’ outcomes being vanishingly rare. Industry, where manipulators are central to function, can frequently extend that situation to the strategic level as well. Industrial roboticisation no longer has to mean heavy fabrication. Pharmaceuticals and the life sciences in particular offer examples of increasing penetration, passage of control, and the software/hardware relation.

Momentum, from Thermo Fisher Scientific, is laboratory workflow software that handles the operational management of parallel complex processes from multiple foci using modular plug-in components. Initially aimed at drug discovery and development, it accepts vendor-independent instrumentation, copes with changes in process or instruments, takes care of records and can dynamically reallocate resources to maximise efficiency. Although monitoring is in the hands of the user, the software mediates this through a visual analogue environment. In effect, a software robot occupies the role of first officer, managing normal operations and subject only to the captain’s override – an analogy, again inspired by WALL-E, to which I’ll return. This software robot makes possible a level of ‘real-time, data-driven decision-making’ with far less user input than conventional analysis would require.

Mosaic, a sample storage and preparation supply chain management suite from Titian Software in use with a number of pharma companies, is also modular. Among the many aspects that it controls are robotic liquid handlers, automated stores, and integration with existing IT systems.

RTS Life Science seeks to extend the scope of such roboticisation by increasing the plate capacity fourfold (so reducing numeric store volumes) and automating plate reports through Labcyte Echo liquid handlers, combining this with Libradexx software. This takes over control and recording almost completely, directed from an order-based graphical calendar which should, in normal operation, be the human controller’s only process contact.

All three of these illustrate the movement towards minimising as far as possible inefficient human participation in efficient robotic process chains.

In a world where many things can go wrong, reluctance to give up the yes/no switch to an automated system isn’t just a Frankenstein reflex fed by the fictional likes of 2001’s HAL; it is a common-sense precaution. As the extent and sophistication of robotic design and application extend, however, the realistic scope for exercising that final veto will shrink asymptotically towards zero. When WALL-E’s starship captain comes into conflict with his ship’s AI first officer, his only option is the binary one of switching it off completely. In reality, though, it is already impossible under operational design function conditions to control a Formula One racing car, a jet fighter, or an industrial production line without the help of their embedded autonomous systems, never mind a hypothetical starship. While it would be overstating the case yet to describe hardware as a trivial aspect, robots and robotics are now predominantly defined by software driven ever further towards integrated autonomy by functional requirements. We won’t have to wait long for mechanical hands routinely deployed at robotic discretion, with the final veto almost unimaginable.

For robots whose functions require mobility, locomotion is a central issue which WALL-E and EVE solve at opposite extremes of present thinking. There is, as I commented about 18 months ago [2], little reason ever to build a truly humanoid robot, but there are many cases where the upright bipedal physical form may well be contextually appropriate, and a lot of work is going into it. Dr Atsuo Takanishi, at Tokyo’s Waseda University, is prominent in this area [3] (and in other humanoid biomimetics for which there is no time here, from vocal cords [4] to the expression of emotion [5][6] in ways recognisably manifested by WALL-E and EVE, not to mention subcomponents and quadrupeds). Bipedalism is combined with cooperative behaviours in a number of soccer-playing robot studies, which may well prefigure future semiautonomous servitor avatars acting for distributed and immobile AI controllers.

Quite apart from pure self-contained robot design, bipedal models have a particular importance both in human prosthesis development and in fundamental research into human biomechanics for other purposes. While physical realisation is necessary to such work, software mimicking of control systems for particular movements is the real area of interest. Asif Mughal, at the University of Arkansas’ Biomechanical Control Systems Lab and in collaboration with the Rehabilitation Institute of Chicago, is developing aspects of active leg prosthetics based on bipedal mechanisms using up to nine joints, 13 degrees of freedom, and computed voluntary movements [7], [8], [9], [10]. Detailed understanding of the ways in which bipedal mechanisms behave during a wide range of natural human activities is central to such work, and obviously feeds into the knowledge base informing development of pure biomimetic machines as well. Using DynaFlexPro and Maple (Maple seems to be a developing theme), Mughal has developed a software model of the manoeuvre which, through a complex (seven-joint, 13 degrees of freedom) combination of actions, takes a balanced human form from sitting to standing. Equations are developed, the torques and moments involved are identified, local coordinate systems are harmonised, then the whole is symbolically analysed for direct Lyapunov stability and for compliance with the system’s holonomic constraints. Simulation code in C is then generated (by Maple BlockBuilder) for further investigation and testing in Matlab. Such software models provide the basis for the sort of adaptive, locally autonomous control systems required for artificial legs that can function in real physical situations – whether as prosthetics or as part of larger walking machine systems.
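The direct Lyapunov test mentioned above has a compact numerical analogue. As a minimal sketch – emphatically not Mughal’s model: a single inverted-pendulum joint with invented parameters and a made-up PD controller stands in for the full seven-joint mechanism – one can linearise about the upright posture and solve the Lyapunov equation AᵀP + PA = −Q; the equilibrium is stable exactly when P comes out positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stand-in for a sit-to-stand stability check: one inverted-pendulum
# joint (angle from vertical) with a stabilising PD torque.
# State x = [theta, theta_dot]; all parameters are hypothetical.
g, l, m = 9.81, 1.0, 70.0   # gravity, link length, mass
kp, kd = 1200.0, 150.0      # invented joint controller gains

A_open = np.array([[0.0, 1.0],
                   [g / l, 0.0]])           # uncontrolled: unstable
B = np.array([[0.0], [1.0 / (m * l**2)]])   # torque input
K = np.array([[kp, kd]])                    # PD feedback
A = A_open - B @ K                          # closed loop

# Direct Lyapunov test: solve A'P + PA = -Q; stable iff P > 0
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
print("open-loop eigenvalues:", np.linalg.eigvals(A_open))
print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
```

The open-loop model has an eigenvalue at +√(g/l), so it topples without control; the Lyapunov solve certifies stability only once the feedback torque is included. Symbolic tools such as Maple do the equivalent analysis on the full nonlinear equations rather than a linearisation.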

A Quanser haptic surgical arm is demonstrated on a banana (right) and its dynamics are developed in Maple (background left).

Not that walking is the only biomimetic locomotive mode. There are numerous research programmes exploring an astonishing range of possibilities, from ornithoptic or insectoid flight (primarily in the context of extraterrestrial exploration) to self-assembling millipedes using plastic bones with integral kinetic effectors.

Swimmers are a well developed area, with several prototypes in operation around the world and detailed work (e.g. by Palmisano et al [11]) on both theoretic and practical aspects of the necessary propulsory anatomies. Swimming robots hold out tantalising promise of considerable improvement over existing underwater vehicle performance and, of course, a wealth of terrestrial environments hostile to human exploration within which they can be deployed.

WALL-E, no doubt for exactly the stability reasons sketched above, has a squat, low centre of gravity design and uses plain, down-to-earth caterpillar tracks to get about. EVE, by contrast, eschews ground contact altogether, floating in three dimensions without obvious means of propulsion or support, flicking from slow cruise to rapid transfer and back again with apparent disregard for inertia. It looks very like the ‘Plantier drive’ beloved of French UFOlogists, but the closest that we can get on present knowledge would be maglev plus a cheerful dose of poetic licence.

Maglev (magnetic levitation) mass transit systems, as a serious idea, are more than 40 years old and the concept goes back to a 1930s patent. The promise of frictionless propulsion, operational efficiency and high velocity is seductive. Practical realisations, however, run into Earnshaw’s theorem and numerous issues of design complexity even before the capital costs of completely new infrastructure are taken into account. Prototypes and even small commercial experiments have nevertheless been built, notably the 30km Shanghai demonstration line to Pudong airport, built by Germany’s Transrapid consortium, but no long haul or intercity run as yet. Yes, this does have something to do with robotics – bear with me.
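Earnshaw’s argument, for the curious, fits in a couple of lines; this is the standard textbook sketch, not specific to any vendor’s design. The potential energy of a body held by static fields is harmonic in free space, so its second derivatives sum to zero and cannot all be positive at once:

```latex
% Why purely static magnetic levitation fails (Earnshaw, 1842):
% in free space the potential energy U satisfies Laplace's equation,
\[
  \nabla^{2} U \;=\;
  \frac{\partial^{2} U}{\partial x^{2}} +
  \frac{\partial^{2} U}{\partial y^{2}} +
  \frac{\partial^{2} U}{\partial z^{2}} \;=\; 0 ,
\]
% so a stable equilibrium (all three curvatures positive, a local
% minimum of U) is impossible: stability in two axes forces
% instability in the third.
```

Transrapid-style electromagnetic suspension sidesteps the theorem by making the field time-varying under feedback control, which is exactly where robotics comes in.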

The complexity of controlling a system that is inherently unstable even at rest, and becomes more so with the dynamics of moving at high speed through changeable weather along a rail that cannot be perfectly straight over a surface that cannot be flat, demands correspondingly complex control systems (see, for recent examples and further bibliographies, Zhao [12] [13] or Michail et al [14]) with high-speed, self-monitoring data acquisition, processing and action loops. A Maplesoft-authored article in January’s Design Product News [15] described how Northrop Grumman’s Advanced Projects Laboratories dramatically collapsed the design cycle for such control systems, using Maple, BlockBuilder and Simulink (again!) to produce a five degrees of freedom model in weeks rather than months.
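To make the instability concrete, here is a minimal sketch that assumes nothing about the Grumman model – the plant constants and pole locations below are invented for illustration. One degree of freedom of an electromagnet suspension, linearised about its operating gap, has a magnetic ‘negative stiffness’ that puts an open-loop pole in the right half-plane; a stabilising state-feedback gain can then be computed by pole placement:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 1-DOF electromagnet suspension, linearised about its
# operating gap. State x = [gap error, velocity].
ks = 2500.0   # destabilising magnetic 'negative stiffness' per unit mass
kb = 60.0     # control effectiveness (coil current -> acceleration)

A = np.array([[0.0, 1.0],
              [ks, 0.0]])    # open-loop poles at +-50 rad/s: unstable
B = np.array([[0.0], [kb]])

# Choose stable closed-loop poles and compute the feedback gain K
fb = place_poles(A, B, [-40.0, -60.0])
K = fb.gain_matrix
Acl = A - B @ K
print("closed-loop poles:", np.linalg.eigvals(Acl))
```

Without the feedback term the gap error grows exponentially; with it, both poles sit in the left half-plane. The real vehicle multiplies this by many degrees of freedom, sensor noise, and track irregularities, which is why a human could never fly it by hand.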

The first real test of maglev as a viable mass transit option will probably be a planned extension of the Shanghai demonstration line to form a 170km intercity link with Hangzhou, also courtesy of Transrapid. Even that, however, will still only be a toe in the water compared to widespread adoption. If successful, maglev systems would represent an upheaval comparable to the arrival of conventional rail, air travel, and motorways, taking a significant chunk of ‘long local’ traffic (at least) from all three. In the opinion of two engineers connected with the Shanghai planning (since they prefer not to be identified, I’ll rename them Alex and Charlie), it will also force a shift of attitudes in favour of increasing robotic autonomy outside the factory and laboratory.

As with many modern vehicles, a human controller could not possibly provide more than oversight for maglev control systems; they would have to be operationally autonomous. Alex argues that maglev trains, to be economic in the long run, must be robots in all but name; a driver, if there is one, will be purely a figurehead. Charlie goes further: while completely new infrastructures are being built, longstanding problems with signalling systems, safety planning, and other macro-organisational features stretched to breaking point in existing networks will be more effectively and more efficiently handled by building their solutions into the infrastructure itself. While human controllers will no doubt survive for some time in the early days, successful establishment of the technology will see them increasingly sidelined. A maglev-based transit system 50 years from now, Alex and Charlie insist, if it exists at all, will be a single robotic entity with swarms of highly self-directing robotic subentities (trains, logistic chains, rescue provision) operating within it.

Mention of swarms reminds me that I’ve neglected many aspects of where we seem to be going. Swarm robotics, at all scales from miniature centipedes reconnoitring the human gut (or even smaller nanoscale devices within the bloodstream) up to interplanetary and interstellar research and action networks, is a busy field of its own, which will have to wait for another time.

Meanwhile, apart from the inability of maglev vehicles to leave their tracks and wander at will, WALL-E and EVE may not be as far in the future as they think.


1. Wang, G., et al., Robotic vs open radical cystectomy: prospective comparison of perioperative outcomes and pathological measures of early oncological efficacy. BJU Int, 2008. 101(1): p. 89-93.

2. Grant, F., Recognising the future. Scientific Computing World, 2006(88).

3. Lim, H.-o. and A. Takanishi, Biped walking robots created at Waseda University: WL and WABIAN family. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007. 365(1850): p. 49-64.

4. Takanishi, A., et al., Artificial Vocal Cords, Vocal Cord Driving Mechanism, Vocalizing Apparatus, and Robot, in Patent Abstracts of Japan. 2006.

5. Itoh, K., et al., Application of neural network to humanoid robots – development of co-associative memory model. Neural networks : the official journal of the International Neural Network Society, 2005. 18(5-6): p. 666-673.

6. Hayashi, K., et al., Face Shape Presentation System, in Patent Abstracts of Japan. 2006.

7. Mughal, A.M., A Theoretical Framework for Modeling and Simulation with Optimal Control System of Voluntary Biomechanical Movements (proposal), in Dept. of Applied Science. 2006, University of Arkansas: Little Rock.

8. Humanoid Robotics: The Math Behind Human Beings. 2007; Available from:

9. Mughal, A.M., 3D Biomechanical Modeling and Stability Analysis with Maple’s DynaFlexPro, in Society of Automotive Engineers Digital Human Modeling Conference. 2007: Seattle.

10. Mughal, A.M. and D.K. Iqbal, Analytical Symmetrical and Asymmetrical Bipedal Models with Holonomic Constraints, in Modelling and Simulation 2008, R. Wamkeue, Editor. 2008: Quebec City.

11. Palmisano, J., et al., Design of a Biomimetic Controlled-Curvature Robotic Pectoral Fin, in IEEE International Conference on Robotics and Automation. 2007: Rome.

12. Intelligent Simulation in Designing Complex Dynamic Control Systems – Zhao (ResearchIndex). 2007; Available from:

13. Adaptive Simulation and Control of Variable-Structure Control Systems in Sliding Regimes – Zhao, Utkin (ResearchIndex). 2007; Available from:

14. Michail, K., et al. MAGLEV suspensions – a sensor optimisation framework. 2008.

15. Maplesoft, Mathematics software helps advance Maglev train technology. Design Product News, 2008: p.9.

Quanser Consulting - Haptic control systems

Align Technology - Dental appliance fabricator

Maplesoft - Maple, BlockBuilder

MathWorks - Matlab, Simulink

MotionPro - DynaFlex

Thermo Fisher Scientific - Momentum laboratory workflow management software

Titian - Mosaic sample management software suite

RTS Life Science - Labcyte Echo liquid handlers and Libradexx software

Shadow Robot Company - The Dextrous Hand, and other manipulator components

Wolfram Research - Linkage designer

