
After 100 issues and 14 years - anything new?

There have been massive changes in our market since that first issue; over the following pages, we asked some of the leading names in scientific computing - indeed in some cases, they are originators of software packages that have become essential to researchers around the world - about exactly what those changes have been, and what they have meant for the industry. We also asked them to reflect on the different challenges faced back then compared to the ones we face now.

Felix Grant, SCW contributor

Nineteen ninety-four? Pink Floyd’s The Division Bell. Oh – scientific computing? OK. Well, now. I had been using a desktop PC, rather than time on a mainframe, for only seven years. I was still happily doing all my scientific computing under MS-DOS – type an instruction, and watch stuff happen. The foundations of the semantic web were being laid, but my own network was a primitive thing. Fieldwork in war zones and hoiking gear around industrial complexes had taught me the virtue of portability, but technology had yet to truly deliver it. Multiple-gigabyte data sets had to be stored on a RAID array bigger than the computer – and backed up on another. Despite the lack of a GUI soaking up resources, I usually went to make a cup of coffee while a large model recalculated.

A typical snapshot from that year would show me scattering Psion 3 computers with home-bodged data-sensor arrays, sealed in zip-up polythene sandwich bags. Removing SSDs (precursors of today’s data cards) at intervals, I would schlep repeatedly back to the dead-weight, short-battery-life “laptop” and shift the data manually from each card for (56kbps modem) transfer back to the mother ship.

This magazine, Scientific Computing World, was also being born in that year, but I wasn’t part of it quite yet. That came about 18 months later – just as I installed Windows 95, a less computationally efficient but more operationally convenient luxury, buffered by growing hardware speed.

Jump to the present day, as the 100th issue approaches, and writing this makes me realise how utterly my scientific computing world has changed. A computer more powerful in every way than my 1994 desktop (not to mention globally connected) nestles unnoticed in one shirt pocket; 250 gigabytes of storage, more than matching 1994’s RAID, in another. The 1994 desktop itself has been replaced by a parallel network. Data comes and goes transparently without any intervention from me, by local WiFi connection, landline, cellphone or satphone. The laptop lives up to its name. I travel less, because more of the world is accessible from the comfort of my lair, and I no longer time coffee breaks to coincide with model recalculations. O brave new world...

Some things are circular, though. In Linux, despite graphic front ends, I still need to type an instruction and watch stuff happen.

Pelle Olsen, HPC Lead, Microsoft Western Europe

Looking back at an industry that is so focused on the future can prove to be a very rewarding experience – and give a fascinating perspective on the shape of computing to come. Thinking back to 1994 is strange; so recent and yet, in computing terms, ancient history. I was in my second year of post-doctoral research at NASA Ames, excited by the opportunity to combine my joint passions of mathematics and computer science working with some of the most powerful computers of the time. In retrospect it is obvious that we were going through significant change, even in a field of research that, paradoxically, sees change as the status quo.

By 1994 the age of the supercomputer was at an end – like the dinosaurs before them, the environment within which these mighty creatures lived had become increasingly hostile. The end of the Cold War in the late 1980s and early 1990s saw defence spending, which was the financial sustenance that leviathans like the Cray-1 and Cray-2 needed to survive, become increasingly scarce. The Cray-3 was doomed to extinction almost as soon as it was launched; only one was ever delivered.

We were seeing the transformation from the second to the third age of high-performance computing (HPC). The third age dawned with shared-memory machines and saw the Message Passing Interface (MPI) established as the de facto standard. MPI, developed in the late 80s and early 90s, was designed to support a more flexible style of programming and is still predominantly used in the coarse-grained HPC clusters we see today.
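
As a hedged aside for readers unfamiliar with the message-passing style that MPI standardised, the sketch below shows a minimal point-to-point exchange between processes. It uses the mpi4py Python binding purely for brevity; neither the library nor the example appears in the original article, and the same pattern is usually written in C or Fortran on production clusters.

```python
# Minimal MPI point-to-point example (illustrative sketch using mpi4py).
# Run with, e.g.: mpiexec -n 4 python mpi_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator spanning all launched processes
rank = comm.Get_rank()     # this process's id within the communicator
size = comm.Get_size()     # total number of processes

if rank == 0:
    # Rank 0 sends a message to every other rank.
    for dest in range(1, size):
        comm.send(f"hello from rank 0 of {size}", dest=dest, tag=0)
else:
    # Every other rank receives the message from rank 0.
    msg = comm.recv(source=0, tag=0)
    print(f"rank {rank} received: {msg}")
```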

The Reduced Instruction Set Computer (RISC) chip that had been at the heart of earlier generations of the computer gave way to the Complex Instruction Set Computer (CISC) chip – which evolved into the modern microprocessor that has led to HPC’s fourth age. The defining characteristic of the fourth and current age is, of course, the personal computer. The impetus for the rapid development of modern HPC is no longer the needs of the defence industries, but rather those of software firms like Microsoft.

Software companies have pushed development of current HPC architecture simply because PC applications are now so demanding, which has also made the PC processor a viable HPC option in its own right.

The future will bring easier programming that will see HPC becoming mainstream in areas like design; automated design and development will eliminate the need for physical models. A car, for example, will be designed, tested and certified all via computer simulation.

To provide a salutary illustration of the speed of progress, the first Cray-1 I ever worked on is still proving useful in the entrance lobby of the National Museum of Science and Technology in Stockholm – as a couch.

Jim Tung, The MathWorks

In 1994, our company celebrated its tenth anniversary. I had joined the company six years earlier when we had fewer than 10 people; by 1994, with more than 200 employees, I felt like it had become a real company. I recall this as a time when esoteric and specialised approaches were moving into the mainstream, supported by mainstream systems and available to a broader community of scientists and engineers.

The MathWorks products ran on a dizzying range of platforms, including PC and Macintosh, UNIX workstations, VAX, Convex mini-supercomputers and Cray supercomputers. Porting was often ‘just a recompile’, perhaps with some tweaking to optimise performance. Users loved the ability to develop concepts at their desktop, then run big jobs on a workstation, mini, or Cray. But the PC and Macintosh platforms started to dominate, and specialised computer makers – Alliant, Ardent, MasPar, and others – struggled and often went out of business. The Macintosh was also popular then; I have enjoyed seeing its return from the abyss, perhaps not in engineering but in the life sciences and other scientific areas. Esoteric supercomputer architectures have largely been replaced by server farms and clusters, thankfully using the same processors and operating systems as desktop computers, so we didn’t have to port our software...again.

A shift to the mainstream was also taking place on the algorithmic research side of scientific computing. Some of the ‘hot’ research topics, such as neural networks, fuzzy logic, and wavelets, were researched to the point that we could build Matlab toolboxes to enable scientists and engineers to apply those techniques effectively to their real problems. Of our 22 products, 17 were Matlab toolboxes, which were transforming from specialised research tools into tools for both researchers and practitioners, enabling the user to both learn the theory and apply it to their problem area.

Sophisticated data visualisation was no longer the exclusive domain of specialised software and graphics supercomputers. That year, we introduced the Windows version of Matlab, which combined eye-catching 3D colour graphics, transparency, and animation with our software’s core computing capabilities.

We considered whether to provide the capability to deploy stand-alone applications from Matlab and feared this would undermine our Matlab business. In the end, we decided to ship our Matlab Compiler product, which turned out to be the right thing to do.

Yes, the web and ‘social computing’ environments enable collaboration across a dramatically larger and more far-ranging community, but they’ll never replace the fun of watching researchers kicking around ideas, huddled around a computer screen.

Jim Aurelio, President and CEO, LabVantage

The birth of commercial client/server laboratory information management systems (LIMS) in the early 90s firmly established the emerging laboratory informatics market. Along with the demand for LIMS came a dramatic increase in the demand for system integration, interfacing of more complex instrumentation, and data sharing across multiple laboratories. These system connections created expansion opportunities outside the traditional manufacturing quality management (QM) market, opening new markets in research and development, and today, supporting discovery research for many drug and therapeutic development companies. As with the growth in any market, standards have emerged and today play an important role in the development of supporting technology. Industry standards and guidance procedures such as Good Laboratory Practices (GLP) have shaped the key functional requirements of LIMS solutions, while government regulations such as FDA 21 CFR Part 11 have mandated that privacy and security are managed and validated by the technology.

As an early advocate and visionary for LIMS, I’d love to be able to say I predicted with 100 per cent accuracy the current state of the market. But to be fair, there were a couple of things that took even me by surprise – most notably, the web. The speed at which web technology has advanced and the benefits it has delivered to software computing in general continue to amaze even the foremost of visionaries, regardless of the industry or market. I’m happy to say that even though I didn’t predict the web, we quickly realised the benefits web technology could provide beyond traditional LIMS.

Thick-client LIMS computing dates back to the 80s, when the journey began, and believe it or not, some of those systems are still in operation on a day-to-day basis. For many of the very early adopters of LIMS, much of the return on investment in these early systems has run its course. The early LIMS were purchased and customised to address specific laboratory requirements and were effective in helping QA/QC laboratories meet FDA compliance regulations. However, the fast-paced technology advances, modular-driven architecture, the need to lower the total cost of ownership and the off-the-shelf configurability offered by newer systems have many legacy LIMS managers evaluating their current and long-term needs.

The past 10-plus years have been packed with technology advances and innovations, and Scientific Computing World has been in step with, and reporting on, the progressive landscape all along the way. It has been an invaluable publication and resource for me, and I congratulate them on their 100th issue.

Taking a glimpse forward, I see many exciting advances for LIMS, such as its alliance with analytics technology. Coupling the workflow-driven operational structure of LIMS with powerful workflow-driven computational analytics establishes a new operational intelligence paradigm not yet seen in laboratory informatics.

Dr Bob Hillhouse, Managing Director, LabWare Europe

My first involvement in LIMS predates Scientific Computing World by 10 years, as it was in 1984 that I successfully obtained a British government grant from the National Computer Centre (NCC) to develop a LIMS for VG, the instrumentation company of which I was managing director.

By 1994, it was already obvious to me that large instrument companies did not have the management expertise or vision to cultivate enterprise-level software or people. At the end of 1994 I joined LabWare to set up a worldwide LIMS group.

In the early 1990s the majority of LIMS were based on minicomputers, supplied by either Digital (VAX) or Hewlett Packard/IBM (UNIX). A large number of smaller suppliers had appeared using PC-based systems, and LabWare was one of them. The instrument vendors had the advantage of market presence and large global sales and marketing operations. The smaller LIMS companies had the advantage of motivated software talent coupled with entrepreneurial flair. It was a David and Goliath situation.

Back in 1994 a new generation of configurable LIMS had appeared that allowed customers to change system behaviour as their business needs changed. Today, it’s all about Web 2.0 applications and running full LIMS within a browser. For some vendors, this has meant a completely new purchase by the customer, but our approach has been to provide upgrades from one major version to the next, including WebLIMS.

If a web solution can run on virtually any browser, on virtually any hardware or device, then it is probably a zero-footprint solution – and very few web applications are zero-footprint, especially in the world of LIMS. With a zero-footprint WebLIMS solution, the application is available to any authorised user.

In 1994 people installed out-of-the-box LIMS and then configured industry specific applications as part of their own project. Today LIMS vendors supply a range of industry templates that provide pre-configured systems for a wide range of applications, such as pharmaceutical QA/QC, water, food, biobanks, public health, hospital pathology, contract, clinical trials and forensics.

LIMS communities are getting larger, and this is reflected in the size of the installed base. It is also reflected in the number of people attending annual user meetings and online user forums. In 1994 the major LIMS companies held one user meeting per annum. Today LabWare holds five Customer Education Conferences a year, on five continents.

In some respects the LIMS market is mature, but to my eyes this is only the beginning. The people I see in the industry, the dynamism, the competition, the ideas and the advance of computing technology make it a very exciting place to be.

Stewart Andrews, CEO, VSNi

My career in technical software is as old as this publication, with the magazine’s launch coinciding with my joining NAG – then the distributor of GenStat, a statistics package originally developed at Rothamsted Research (RRES) in the 1970s.

Back in 1995, GenStat was only available on UNIX, as a serious number-crunching piece of kit. In those days it was common for ‘proper’ technical, scientific software to be used primarily on UNIX; it is only as time has moved on and hardware capabilities have caught up that technical software as we know it today can be used on Windows.

The UNIX platform meant that statistical software was an intellectual challenge to use in the first place; one had to have an understanding of command-line programming to get the best out of it, and as such, the typical user would have been a specialist, high-end statistician who would have considered just understanding the software package an intellectual achievement in itself. It wasn’t just about solving a statistical problem; it was also about understanding the software in order to input the data for that problem.

In the years since, as power has moved to the more affordable desktop, so too has the expectation of the user. Today the huge advances made by both hardware and software vendors give the occasional user access to professional tools, irrespective of their own discipline and expertise. Statistical software is no longer the sole domain of statisticians.

One of our customers, an equine charity, is using GenStat effectively to identify the key factors contributing to a particular equine welfare issue; this means that the vets can advise not only on treatment of an issue, but also on preventative actions for the future. It is a relatively small operation, and one that previously required data to be sent to consultant analysts in the UK for analysis.

Dave Champagne, Vice president and general manager, informatics, Thermo Fisher Scientific

Since the introduction of LIMS, laboratory staff have shifted their expertise from manual and time-consuming activities to more sophisticated data analyses that drive business critical decisions. In addition, the increased role of the laboratory in achieving strategic corporate objectives (such as decreasing product time to market and ensuring regulatory compliance) is an indicator of how integral the laboratory has become to business operation.

One of the most important areas of change can be illustrated in the way people work with each other. The evolution of collaborative methods has been enhanced by the technologies available, giving the field of informatics a new role to play in bringing scientists together. And as instrumentation has become more sophisticated, the amount of data that can be generated is tremendous, giving today’s scientist a new challenge – managing all that data.

Informatics is now a critical part of the process of increasing the productivity of the laboratory. Today there is increasing pressure on the laboratory to automate and integrate systems, in order to harmonise processes and make use of all the data being generated. It is now more important for scientists to be able to share data and collaborate on findings.

By embracing the technology inherent in LIMS, laboratories began to centralise information and work globally and more holistically. Today, the need to partner and to outsource work, both domestically and abroad, has never been greater. To make this possible, LIMS providers must form partnerships with vendors such as Microsoft, Oracle and others to ensure that information-sharing through the LIMS occurs on a global scale, across multiple locations and disciplines.

Theodore Gray, co-founder, Wolfram Research

This month also marks the 20th anniversary of Mathematica, and boy have things changed since then. Long, long ago when people discussed how many megahertz their computer was and how many kilobytes of memory it had, they were talking about whether it would be able to run a word processor fast enough. Now that we’re talking about gigahertz and gigabytes this is a non-issue.

The speed of scientific computing has remained an issue much longer, and will remain an issue, because there is no upper boundary to the size of calculations one might care to do.

But I think a watershed has been reached in recent years where a fairly large class of problems can now be solved fast enough. By which I mean that the amount of computer time required to solve the problem is only a small fraction of the amount of human time required to formulate the problem and write the program.

If it takes you 10 days to write a program that runs for 10 minutes, who cares if next year it only takes three minutes to run? It still took you 10 days to get the answer to your question.

The only way to make the problem-solving cycle faster is to increase the efficiency of the process of converting a human description of a computing problem into executable code. This is where modern, high level symbolic languages like Mathematica really shine.

Writing code in Fortran or the Fortran-like systems that survive even into the modern era is not an efficient use of human time. Until you change the language you’re working in, it’s still going to take you 10 days to write the program no matter how fast your computer is. If you cut the time to write the code down to a day or two (or maybe a few minutes), you’ve made progress.
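
To make that contrast concrete, here is a hypothetical sketch (not Gray’s example, and using Python with NumPy rather than Mathematica or Fortran) of the same task expressed twice: once hand-rolled in a low-level style, and once as a single high-level library call.

```python
# Illustrative contrast between low-level and high-level formulations of
# the same task: solving A x = b. Hypothetical example, not from the article.
import numpy as np

A = np.array([[3.0, 2.0], [1.0, 4.0]])
b = np.array([5.0, 6.0])

# Low-level style: hand-written Gaussian elimination with back-substitution.
def solve_by_hand(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n):                      # forward elimination
        pivot = A[i, i]
        for j in range(i + 1, n):
            factor = A[j, i] / pivot
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back-substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# High-level style: one call expresses the same intent.
x_low = solve_by_hand(A, b)
x_high = np.linalg.solve(A, b)
print(x_low, x_high)   # both print the same solution, [0.8, 1.3]
```

The answers agree; the difference is the human time spent writing and debugging the first version.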

When scientific computing problems can be solved in fewer than the hundred milliseconds you have available to maintain smooth real time interactivity, you know something new is afoot.

David Sayers, Principal technical consultant, NAG

By 1994 NAG had established itself as a leading provider of mathematical libraries to both industry and education. My own part in this had been to assist top academics at Manchester University in developing ordinary differential equation routines for the library, since as long ago as 1971. At that time NAG was a collaborative project among a number of universities, but since then it has grown in popularity and changed its status to become, as it now remains, a company limited by guarantee.

By 1994, the original development languages of Algol 60 and Fortran IV had evolved. Fortran itself had become richer and we had seen the emergence of Fortran 77 and then Fortran 90 language standards. Algol 60 had briefly metamorphosed into Algol 68, but the language had declined so that NAG Algol libraries were no longer being developed. Instead NAG would be destined to flirt with other languages: Ada, Pascal and C.

NAG’s concern then, as it remains, was to make available to users the best numerical methods, callable from a user’s chosen computer languages and optimised and fully tested for use on the hardware of their choice. In 1994 that was a daunting task. Not only were languages changing, but new systems such as NeXTstep, the Meiko CS2, Microway NDP, Kendall Square Research, Sequent, the Intel iPSC/860 and Paragon, and Pyramid were all threatening to gain popularity.

Also new on the scientific scene was the 16-bit PC. I was given the daunting task of producing the 16-bit DLL version of our Mark 16 Fortran library. Neither compiler nor machine was quite up to the job, so a huge task lay ahead of me. By the time I had finished the implementation, 32-bit PCs had rendered the 16-bit machines obsolete.

Meanwhile other colleagues were excitedly designing our first Fortran 90 library and yet others were looking at PVM as a means of harnessing loosely coupled machines. Today’s challenges have changed little: 32-bit computers are slowly giving way to 64-bit machines, Fortran is continuing to evolve and other languages and environments such as .NET, Matlab, Excel and Maple are coming to the fore.

For the future we shall be working to exploit the multiple processor hardware that is becoming the norm, and to make our algorithms even easier to use. We shall exploit new language interfaces, and embed our algorithms into popular high-level packages. The need for quality numerical mathematics will remain.

Dick Mitchell, Systat

SigmaPlot evolved from a ‘ready, fire, aim’ business approach. In 1981 we created a company with no name and no product, with a vague idea of developing a computer-based nurse scheduling system (the four founders had a medical research background). That idea died quickly when we realised that nurse scheduling depended heavily on qualitative criteria, like whose boyfriend was in town. We then developed Flying Colors, software for creating art on an Apple II. The company now had a name – Computer Colorworks. The product was reasonably successful, selling about a million dollars’ worth, but its big weakness was the need to draw with your fist using a joystick. So Dr Osborn invented the Digital Paintbrush System. This was a pen that you drew with naturally. If you pressed the pen down it drew, and it had a button on the side for menu selection. The pen was connected to a very attractive plastic box via nylon ‘strings’ (I think we called them cables) and then to two potentiometers. It worked beautifully, and aspiring artists were creating art on their Apple II and subsequently the IBM PC.

Then someone asked if the Digital Paintbrush System could be used as a digitiser for scientific measurement. We developed a method for calibrating it and sold it for $250. The competition at the time was an electromagnetic tablet priced in the range of $2,000. A 1/16-page advertisement was placed in Science magazine and the phone rang off the hook. We were now in the science business, and we renamed the company Jandel Scientific (after John Osborn and his wife Annie; ‘del’ was added because it just sounded good).

As you can imagine the scientists were not satisfied with just a digitiser. Once they had the data they wanted to visualise and analyse it. SigmaPlot was created to graph the data and to a small degree analyse it statistically (the product SigmaStat came much later). The scientists also had images containing objects that they wanted to measure. For example, they wanted to use the Digital Paintbrush System to measure the curvilinear lengths of neurons displayed as an image on the monitor. Or measure the x,y positions of whales in an image of whales in a pod. Thus was born the Jandel Video Analysis System, which was software for quantitative measurement of objects in images. It became Java for short and someone very compulsively registered the name both in the US and Europe. A second version was called Mocha, but then we ran out of coffee names (Starbucks did not exist yet or there could have been ‘Half Caf’, etc.) and renamed it SigmaScan to continue the Sigma line.

Sometime later the phone rang and it was some lawyers from Sun Microsystems. ‘We see you have registered the name Java in both the US and Europe,’ they said. ‘We would like to purchase it. Is it still in use?’ It wasn’t, but I’m pretty sure we didn’t say that. In any case a deal was struck and I can honestly say in retrospect that you never ask enough.

The sales of SigmaPlot increased dramatically, with Jandel recognised as an Inc 500 company four years in a row. One of the reasons for its success was that it could draw error bars, which the spreadsheets of the time could not. This was important since our sales were almost 100 per cent to life scientists, who use this feature extensively.


