Unscrambler X

Since Unscrambler last passed this way, then at version 9, its Norwegian publisher Camo Software has launched a 'new generation' image. At its heart is Unscrambler X (currently 10.1, the third sub-version in numerical release terms; some limitations in the original 10.0.0 have been resolved at this point), with additional products to extend its reach and power in particular directions. My review is based on a month's use of the 64-bit option, although a quick run-through on a one-gigabyte Windows XP system showed the 32-bit alternative to be completely happy there. I'll combine a quick overview, for those who've not encountered Unscrambler in its previous incarnations, with a skim of what's new.

There is a marketing emphasis on chemometrics, and I explored the software through an elemental composition matrix for clays, but the approach is completely general and applicable to any discipline. Colleagues tried out my review copy on archive studies in manufacturing faults, infection transmission, language mutation and stressed environment populations with equal success.

The result is a different user interface philosophy from the usual market pattern. This starts in the menu bar, where in place of the familiar 'Statistics' (or similar) is a 'Tasks' heading, below which drop down 'Transform', 'Analyze' and 'Predict' options. Plotting, on the other hand, is instantly familiar – from clicking 'Plot' on the top menu for a gallery from which to pick the required type through to responsive mouse-driven rotation of three-dimensional displays.

You can perfectly well use Unscrambler X to carry out tasks other than multivariate analysis, but they are not the product focus. Asking for simple descriptors like the arithmetic mean, for instance, will typically involve deselecting other options (rather than selecting those required), and explicitly choosing or excluding the variables and cases to be included.

Cutting through complex data sets to underlying structures, however, is simplicity itself. Principal components, dendritic cluster and linear discriminant analyses, four regression approaches, support vector machine classification and multivariate curve resolution, are all handled through an intuitive and consistent interactive dialogue system echoed through all three investigation phases.
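As a rough illustration of the first of those techniques, here is a minimal principal components sketch in plain NumPy. The matrix values are invented for the purpose, and this is of course not Unscrambler's own code, just the standard SVD route to scores and loadings:

```python
import numpy as np

# Hypothetical 6-sample x 4-variable composition matrix (made-up values).
X = np.array([
    [48.2, 17.1, 7.9, 2.1],
    [51.0, 15.8, 6.5, 1.9],
    [47.5, 18.0, 8.3, 2.4],
    [52.3, 14.9, 6.0, 1.7],
    [49.1, 16.5, 7.2, 2.0],
    [50.4, 15.2, 6.8, 1.8],
])

# Mean-centre each variable, as PCA requires.
Xc = X - X.mean(axis=0)

# Singular value decomposition yields scores and loadings directly.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s          # sample coordinates on the principal components
loadings = Vt.T         # variable contributions to each component

# Proportion of variance explained by each component.
explained = s**2 / np.sum(s**2)
print(explained.round(3))
```

A scores-versus-scores scatter of the first two columns is the classic way to see sample groupings; the loadings say which original variables drive each axis.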

This emphasis has been further developed in the new look version X, with user interaction through the project-based interface refined and simplified at various points. Definition of DoE variables, for instance, has been simplified to a single table from the previous arrangement spread across three different windows.

DoE has been overhauled behind the scenes in a number of other ways. Partial Least Squares, though still available, has given way to Scheffé polynomials as the default approach to mixture design analysis. D-optimal constraint handling has been improved. Plotting of response surfaces has been polished, as have various other points of detail.
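For readers unfamiliar with the Scheffé approach: because mixture proportions sum to one, the canonical quadratic model drops the intercept and fits blending coefficients directly. A minimal least-squares sketch (invented design points and responses, not Unscrambler output) looks like this:

```python
import numpy as np

# Hypothetical simplex-lattice design for a three-component mixture;
# proportions in each row sum to 1, responses are invented.
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([12.0, 9.5, 7.8, 12.4, 11.1, 9.9, 11.6])

# Scheffé quadratic model: y = sum(b_i * x_i) + sum(b_ij * x_i * x_j),
# with no intercept term because the proportions sum to one.
x1, x2, x3 = X.T
M = np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print(coef.round(2))
```

The fitted cross-terms (b_ij) quantify synergistic or antagonistic blending between pairs of components, which is exactly what a response-surface plot over the simplex then visualises.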

Attention to detail is a theme reflected throughout the new release, aggregating into strong usability and value-added gains. Most areas of the program see tweaks, improvements, new options and techniques. A new transform (median absolute deviation for quantile normalisation) here, per-segment prediction diagnostics there. Improved model portability and flexibility. Extended matrix facilities. Various plotting adjustments. Behaviour has been brought more closely into line with Windows conventions, including clipboard operations, an increased use of shortcuts, and some convenient extras such as one-click inversion of a range definition which, while it sounds simple, quickly becomes a valuable workflow accelerator.
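The median absolute deviation transform mentioned above is, in general terms, a robust alternative to mean/standard-deviation autoscaling; a sketch of the idea (made-up data, not Unscrambler's implementation) is:

```python
import numpy as np

# Hypothetical data: rows are samples, columns are variables.
X = np.array([
    [10.0, 200.0, 3.1],
    [12.0, 190.0, 2.9],
    [11.0, 210.0, 3.4],
    [95.0, 205.0, 3.0],   # one outlying sample in the first variable
])

# Centre on the median and scale by the median absolute deviation (MAD).
# Unlike the mean and standard deviation, neither statistic is dragged
# around by the outlying sample.
med = np.median(X, axis=0)
mad = np.median(np.abs(X - med), axis=0)
X_robust = (X - med) / mad
print(X_robust.round(2))
```

After the transform each column has median zero and unit MAD, so variables measured on very different scales contribute comparably to a subsequent multivariate model even in the presence of outliers.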

