Unsolved problems can now be viewed in a new light, thanks to advances in digital image processing - and there are solutions available for scientific computing. By Ray Girvan
It's perhaps a trite introduction to quote Ecclesiastes' 'there is no new thing under the sun', but in some fields the new has so radically overturned the old that it's easy to forget the field's long ancestry. Image processing is a case in point. It's a quintessentially modern, high-profile computer application, so 'cutting-edge' that it could provide the central dramatic tension - the deconvolution of a blurred incriminating photograph - for the 1987 Kevin Costner film No Way Out (probably the only film to mention a Fourier transform by name).
Yet virtual actions such as digital colour filtering, defocusing, and masking of a digital photograph are direct analogues of physical ones used in silver-salt photography. Image processing pre-dates even this. The 'morph' - then called 'development drawing' - was a common form of comic art in the 1800s. In a celebrated Parisian court case in 1831, Charles Philipon, charged with defamation for a caricature of King Louis-Philippe as a pear, responded by drawing a four-stage transformation to show that the king did indeed resemble a pear.
Even earlier, artists knew of reversible image transformation. The anamorphic drawing was a picture distorted so that it was restored by reflection in a curved glass. Examples still exist today of how this was used to disguise portraits of Charles Edward Stuart, the Young Pretender, in the mid-18th century.

The extent to which these older techniques have been eclipsed can be explained by the sheer power of a 'grand unified theory' - digital image processing - that puts the many types of image transformation on a common footing. Any image is an array of pixels; all transformations are mathematical operations on that array. These operations range from simple ones with physical equivalents, such as cutting, pasting, or adjusting brightness, to powerful frequency-domain transforms such as the Fast Fourier or Discrete Cosine.
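That 'grand unified' view is easy to demonstrate. A minimal sketch in Python with NumPy - the tiny array and the operations are invented purely for illustration:

```python
import numpy as np

# A grey-scale image is just a 2-D array of pixel intensities (0-255).
image = np.array([[ 10,  50,  90],
                  [ 30, 200, 120],
                  [  0, 255,  60]], dtype=np.uint8)

# Brightness adjustment: add a constant, clipping to the valid range.
brighter = np.clip(image.astype(int) + 40, 0, 255).astype(np.uint8)

# A horizontal flip is a purely positional operation on the same array.
flipped = image[:, ::-1]

print(brighter)
print(flipped)
```

Physical cutting, pasting, and filtering all reduce to array operations of this kind; only the mathematics grows more elaborate.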
There is a lay perception of image processing as a technique for making obscure images understandable to the human eye, but scientifically the scope is much wider. Apart from manipulating images, image processing merges into many associated activities such as compression and storage, capture, recognition, and machine vision, as well as the study of algorithms that apply equally to non-image data. The field also overlaps with art: paint and photographic packages such as Photoshop can be used for simpler tasks such as file conversion and sharpening. Generally, they don't provide the quantitative control or analysis tools needed for more serious scientific use, so this article will concentrate on software specifically aimed at scientists for the tasks of image processing in the strict sense (transforming one image into another) and image analysis (extracting data from an image).
Scientific packages occupy a spectrum between 'programming' and 'control panel' styles, low-end packages in either case tending to concentrate on processing alone. Probably the easiest route to programming is to use an add-on to a mathematics package: either the Image Processing Extension Pack electronic book for Mathcad; Digital Image Processing (DIP) 1.1 for Mathematica; or the Matlab Image Processing Toolbox (currently there isn't one for Maple). All install as standard add-ons, adding similar capabilities to their host package. Functions include 'housekeeping' (e.g. import/export, format conversion); manipulation (e.g. rotation and scaling); colour operations (e.g. brightness, contrast, gamma, pseudocolour, thresholding, and segmentation); blending (e.g. combination of images by arithmetical or Boolean rules); edge detection; morphological operators (e.g. erosion and dilation); filtering and convolution; image transforms; area-of-interest (AOI) selection; and very basic image analysis (e.g. histograms and pixel counting).
Mathcad being aimed at engineers nowadays, I found its documentation the friendliest: for instance, the clearest explanation I've ever seen of convolution ('the process of sliding a filter matrix, called the kernel, across the image matrix ...etc'), whereas DIP 1.1 cuts straight to sigma notation. DIP 1.1 has the largest set of frequency-domain transforms, and a section on designing linear filters - the latter a reminder of the broader application of filtering to such operations as signal processing. The Matlab Toolbox is also very accessible, and has functions not supported by Mathcad and Mathematica for image registration and customised deblurring. The first enables transformation of images - for instance, an old map and an aerial photo - into the same coordinate system. The second is a set of sharpening algorithms that can be tailored to the known, or estimated, noise type and point spread function (PSF) that caused the blur: motion blur, for instance, can be corrected.
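That 'sliding kernel' description translates almost line-for-line into code. A minimal sketch in Python with NumPy (not any of the packages above); strictly speaking this computes correlation rather than convolution, since the kernel isn't flipped, but for a symmetric kernel such as the box filter below the two coincide:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` across `image`, summing element-wise products
    at each position (valid region only, no edge padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 3x3 box (mean) filter blurs by averaging each neighbourhood.
image = np.array([[0, 0, 0, 0],
                  [0, 9, 9, 0],
                  [0, 9, 9, 0],
                  [0, 0, 0, 0]], dtype=float)
box = np.full((3, 3), 1 / 9)
print(convolve2d(image, box))
```

Swapping in a different kernel gives sharpening, edge detection, or any other linear filter; that generality is why the operation is so central to the packages described here.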
An even more sophisticated programming add-in is the Image Processing Toolkit for PV-Wave, the data analysis and visualisation suite based on Visual Numerics' array-oriented 4GL language for Windows, Unix and VMS. As well as a graphical user interface, this toolkit provides an unusually large set of image transforms and filters, alongside analysis features including shape and statistical measures of objects, spatial and spectral texture analysis, classification and region counting.
Stand-alone control panel packages for image processing provide both 'hands-on' processing and programming (via scripting or macros) for developing reusable applications. Numerical analysis of images is a very important feature that has revolutionised quantitative image work. When I was a metallurgy student in the mid-1970s, obtaining area percentages from a micrograph of a two-phase alloy involved drawing close parallel lines on the image and hand-measuring a sample of intercept lengths. Now such tasks can be fully automated.
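In the digital setting, that area-percentage measurement reduces to thresholding and pixel counting. A minimal illustration in Python with NumPy, using an invented 4x4 'micrograph' in which the second phase appears as dark pixels:

```python
import numpy as np

# Hypothetical grey-scale micrograph of a two-phase alloy:
# the second phase is dark (low values), the matrix light (high values).
micrograph = np.array([[200, 210,  40, 205],
                       [ 35,  30, 220, 215],
                       [210,  45,  50, 200],
                       [205, 215, 210,  38]])

# Threshold to segment the dark phase, then count its pixels.
dark_phase = micrograph < 128
area_percent = 100 * dark_phase.sum() / dark_phase.size
print(f"{area_percent:.1f}% dark phase")
```

A real package would add automatic threshold selection and object separation, but the principle - counting classified pixels rather than hand-measuring intercepts - is the same.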
ImagePro Plus from Media Cybernetics, now at version 4.5, is a long-standing example of what top-of-the-range PC packages offer. Images in its multi-window workspace are manipulated using pull-down menus or icons. Apart from the processing functions already described, its features include acquisition from scanners, video frame grabbers, microscopes, and other devices; handling of image sequences and stacks; measuring tools such as a multi-purpose Caliper tool that can identify and log the position of features in the luminance profile; export to spreadsheets; an image database; Internet access; and auditing features for commercial use.
Its particular strength is in metallurgical and medical applications. An example might be to look for abnormal erythrocytes (red blood cells) in a sample from a patient with sickle-cell anaemia. Contrast enhancement, an automatic count of dark objects, and manual separation of touching objects give an initial cell count. Sorting by size suggests a threshold for rejecting small objects, leaving only erythrocytes; a second pass by radius ratio then distinguishes round normal cells from thin sickle cells, with the program automatically colour-coding the result.
ImagePro also comes in two cut-down versions: Express and Discovery. Recently, Media Cybernetics added to its list another product, Optimas 6.5. Targeting life science research and industrial testing markets, it's an imaging toolkit driven by a macro language, ALI (Analytical Language for Images) for development.
Similarly, SigmaScan Pro from SPSS Science can be controlled by macro, or from Excel or Visual Basic. A Windows program for general scientific applications, it has a strong emphasis on analysis and reporting. Processing and extracting data from images into Excel spreadsheet form is the preamble to applying its 140 worksheet functions for classifying and drawing conclusions from that data.
It's fair to say that professional scientific image-processing packages are expensive, but there are alternatives on the Internet, and not merely for the PC.
For instance, NIH Image is a public-domain processing and analysis program developed for the Apple Mac by the National Institute of Mental Health (NIMH). There's a free Windows version, Scion Image, available from the digitiser board specialists Scion Corporation, and a similar Java program, ImageJ. Other possibilities are the CIS Imaging Laboratory and NASA Image2000 (another Java program) mentioned in the Archimedes Palimpsest case study. All offer processing functions on a par with mid-range commercial packages.
When looking for image processing software, it's worth investigating which packages provide specialist support for the formats and tasks your scientific discipline requires. Astronomy is a good example, software having moved on somewhat since NASA's Jet Propulsion Laboratory needed mainframes to process the Mariner 4 images in 1965. Now Windows programs routinely offer the same processing features as the better general image packages, but tailored to astronomical work.
For instance, MSB Software's Astroart offers blink comparison, gradient removal to remove the effect of light pollution, a Larson-Sekanina rotational gradient filter to reveal detail such as comet jets, and an integrated star atlas. CCD camera control and acquisition is an important feature, and MaxIm DL/CCD from Diffraction Limited integrates the MaxIm DL image package with support for a large range of cameras.
Mira from Axiom Research is another long-standing package, available in versions including Pro for astronomy and Pro MX applicable to radiography and general laboratory applications. All are compatible with FITS (Flexible Image Transport System), the International Astronomical Union format for astronomical metadata, putting nearly any astronomical image within the scope of a desktop machine.
Workstations may, however, be necessary to give useful speed on bulk processing, and as with general image packages, many freeware programs are available via the Internet for academic use. Two major examples are AIPS (Astronomical Image Processing System), a cross-platform system - everything from Unix/Linux systems upward - for batch processing of large data sets from radio astronomy; and MIIPS (Multipurpose Interactive Image Processing), a more general scientific and engineering system, but with an astronomical slant, for VAX OpenVMS and Unix.
The same support for specialism applies in other fields such as microscopy, geology and medicine. Specialist packages exist as well as add-ins: for instance, ImagePro has a range of companion plug-ins for fluorescence acquisition, microscope and stage control, materials, metallography, and images of electrophoretic gels. Even general packages sometimes provide specialist support, such as the new support in Matlab's Image Processing Toolbox for the DICOM files (Digital Imaging and Communications in Medicine) output by body scanners.
Whether you need special or general image processing software, the range is large, and notable for the quality of public domain offerings for academic use. Some care may be needed with the fine print, though; for instance, use of the NASA Image2000 is allowed outside the USA, but not downloading of the source code. As with any software genre, commercial products offer a guaranteed level of support, performance and continued development, and that is a significant factor when programs have to run reliably for high-volume batch work. The choice, however, is there.
Secrets of Archimedes revealed
Image processing is playing an important role in the recovery of a lost mathematical work from the document known as the Archimedes Palimpsest, recently rediscovered and now on loan to the Walters Art Museum, Baltimore. The Palimpsest contains a 10th century copy of Archimedes' treatise The Method of Mechanical Theorems, in which Archimedes shows how to calculate volumes of revolution by summing elemental slices on an imaginary balance - in effect, a proto-calculus. The difficulty is that in the 12th century, the parchment was scrubbed, rebound, and overwritten at right angles as a Christian prayer-book (the Euchologion).
- A detail of p. 28c of the Archimedes Palimpsest: xenon light (top) and combined image under tungsten and ultraviolet (bottom) enhancing the original text.
Teams from Rochester Institute of Technology (RIT), New York, and Johns Hopkins University, Baltimore, are using image processing to highlight the faint Archimedes original and make it sufficiently legible for translation. The process involves multispectral imaging: using filters to select 40 spectral regions.
It was found that both texts appeared dark under ultraviolet light, but the overwriting was far more legible than the Archimedes under red light. The website of Professor Roger L Easton at RIT - Text Recovery from the Archimedes Palimpsest - explains the subsequent processing, with a demonstration that uses a suite of PC imaging software including Adobe Photoshop, CIS Imaging Laboratory, and NASA Image2000.
One method is to take a pair of grey-scale images photographed under two wavelengths and scale the contrast in one image, so that subtracting it from the other leaves the Euchologion text and parchment background the same grey (i.e. everything 'disappears' except the Archimedes text - though with gaps where the later writing crossed it). A more complex method involves statistical measurement, and creation of a matrix of grey-scale values for parchment, Euchologion, and Archimedes at three wavelengths. The inverse of this matrix is a transform that returns the percentage of each component - including 'mixed' pixels - when applied to the vector of measurements for any pixel. Applied across every pixel of the three-image set, this gives an output image whose R, B and G channels contain parchment, Archimedes, and Euchologion respectively. Further processing can, again, make the overwriting invisible, or else change the Archimedes to a distinctive, easily read colour.
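The matrix method can be sketched numerically. The grey levels below are invented for illustration - the real values were statistically measured from the Palimpsest images - but the algebra is the same: a mixing matrix M holds each pure component's grey level at each of the three wavelengths, and its inverse recovers the component fractions from any pixel's three measurements.

```python
import numpy as np

# Invented mean grey levels of each pure component at three wavelengths:
# rows = wavelengths, columns = (parchment, Euchologion, Archimedes).
M = np.array([[220.0,  60.0, 150.0],
              [200.0,  70.0,  90.0],
              [180.0,  50.0, 160.0]])

M_inv = np.linalg.inv(M)

# A 'mixed' pixel: half parchment, half Archimedes ink.
pixel = 0.5 * M[:, 0] + 0.5 * M[:, 2]

# The inverse transform returns the fraction of each component.
fractions = M_inv @ pixel
print(fractions)
```

Running this over every pixel of the registered three-image set yields the component fractions that are then mapped to the output image's colour channels.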
This is, of course, a much-simplified version of the analysis. Some of the Archimedes text is obscured not merely by overwriting, but also by physical damage from rot and wax droplets, and the addition of forged illuminations with gold leaf. Even the recoverable portions still need skilled interpolation by a translator. It needs to be remembered that image processing applied to real problems isn't necessarily a magical solution, but may need to work alongside various techniques drawn from different disciplines.