
The colour of the past

The Marxist historian Eric Hobsbawm, paraphrasing L.P. Hartley, has frequently remarked that 'the past is another country', but often it can be more like another planet. In 1991, William Gibson commented on the historical research for The Difference Engine, the alternative-Victorian novel he co-wrote with Bruce Sterling: 'When you go to the primary sources, you realise that history is filtered to make contemporary sense. Read the newspapers of the time, and you find an alien world. We found, for instance, reports of a two-year spate of accidents involving people being decapitated by bursting omnibus tyres. You think: "This is Mars!"'

Just as space-probe landers return images of extraterrestrial planets, photography provides glimpses of the foreign country that is the past - sometimes with especial vividness. My personal favourite is the colour travelogue of the Russian Empire, taken around 1912 by Sergei Mikhailovich Prokudin-Gorskii, photographer to Tsar Nicholas II. Another highlight is the rediscovered Mitchell & Kenyon films of Edwardian England, recently showcased on the BBC, with a follow-up scheduled for 2006: The Open Road, Claude Friese-Greene's 1924 motor tour of Britain filmed in red-green 'Biocolour'. Such glimpses, however, are the result of laborious processing. This article aims to be a sampler of the digital aspects of photographic restoration, with images and code chosen for their availability and suitability for hands-on experiment.

From a purely technical viewpoint, the tools of digital photo recovery are easily characterised as mathematical operations on an array of pixels. In many cases, these are directly analogous to those used in silver-salt photography. For example, digital 'unsharp masking' derives from a darkroom technique of taking a negative, contact-printing from it a blurred positive transparency (the mask), then printing through mask and original overlaid. The resultant print simulates the Mach Band effect that the human visual system uses to accentuate boundaries, giving the appearance of sharper edges. But the technical operations blur into artistic and aesthetic judgements that are difficult to imitate algorithmically.
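
The digital version is only a few lines in any language with an image-processing library. Here is a minimal sketch in Python with NumPy and SciPy, assuming an 8-bit greyscale image; the function name and parameters are my own illustrative choices:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, radius=2.0, amount=1.0):
        # The blurred copy plays the role of the darkroom mask; subtracting
        # it isolates the edge detail, which is then added back.
        img = image.astype(float)
        blurred = gaussian_filter(img, sigma=radius)
        detail = img - blurred
        return np.clip(img + amount * detail, 0, 255).astype(np.uint8)

Larger values of 'amount' exaggerate the Mach Band overshoot, while the blur radius controls which scale of detail is accentuated.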


The Coombes family, Newport, Isle of Wight, c. 1955: greyscale photo colourised using the Matlab optimisation method of Levin, Lischinski and Weiss. For an experienced user, far less markup would be needed.

Readers can get the general flavour of the digital restoration process from the account by the US Library of Congress of its recombination of the Prokudin-Gorskii images, as can be seen at the online exhibit, The Empire that was Russia. The photographs were originally taken with a special camera that captured glass-plate negatives of the same scene through red, green and blue filters, the triplet being designed for recombination by slide projection. The Library of Congress contracted a professional photographer, Walter Frankhauser, to work on around 100 of the images, the process involving: scanning each plate to greyscale; aligning the three layers manually using shared 'anchor points'; cropping the resulting RGB composite and adjusting it for contrast, gamma and colour balance; then making final adjustments and retouching locally defective areas. The results are stunning - but according to the New York Times, they involved 6-20 hours of work per image, and there are 1,903 plates in the collection.


Left: 1912, Assumption Cathedral, Vitebsk (now Vitsebsk, Belarus), Library of Congress, Prints & Photographs Division, Prokudin-Gorskii Collection, [LC-DIG-prok-20428].
Right: Plymouth lady swimmers, c. 1924 from Claude Friese-Greene's The Open Road, reproduced by permission of the British Film Institute.

Fortunately, the Library of Congress has placed the raw images online for anyone to experiment with, and a number of projects have recombined the full set using automatic techniques. The registration process - to a human, a simple matter of sliding an image to fit - is one of the nastier computational problems. A typical approach - for instance, the Matlab routines by Frank Dellaert (www-2.cs.cmu.edu/~dellaert/aligned/) - is the multi-resolution pairwise registration used for automatic stitching of overlapping aerial photos and medical scans. Starting with a very low resolution resample, the image pairs are adjusted for orientation and translation using some measure of match quality (such as a histogram of greyscale values at corresponding points). Using this result as input, the process is repeated at higher resolution, and so on. The World of 1900-1917 in Color (www.prokudin-gorsky.ru), a project of the Moscow-based restoration laboratory Restavrator-M, has similarly registered all the images at high resolution - around 3,700 x 3,500 pixels - to an accuracy of one pixel for stationary objects.
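
A toy version of this coarse-to-fine scheme fits in a few lines. The following Python sketch handles translation only (rotation is omitted, and the names and parameters are mine, not Dellaert's):

    import numpy as np
    from scipy.ndimage import shift, zoom

    def match_quality(a, b):
        # Normalised cross-correlation; joint greyscale histograms or
        # sum-of-squared-differences would serve equally well here.
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def register(fixed, moving, levels=4, window=2):
        # Estimate the (dy, dx) shift aligning 'moving' to 'fixed',
        # starting at a very low resolution and refining upwards.
        dy = dx = 0
        for level in range(levels - 1, -1, -1):  # coarsest level first
            f = zoom(fixed, 0.5 ** level, order=1)
            m = zoom(moving, 0.5 ** level, order=1)
            candidates = ((match_quality(f, shift(m, (ty, tx), order=1)), ty, tx)
                          for ty in range(dy - window, dy + window + 1)
                          for tx in range(dx - window, dx + window + 1))
            _, dy, dx = max(candidates)
            if level > 0:
                dy, dx = dy * 2, dx * 2  # carry the estimate to the next level up
        return dy, dx

Calling register(green_plate, red_plate), say, returns the shift that best aligns the red plate to the green; because each level only refines the estimate from the level below, the search window stays small even for large misalignments.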

With 1,903 images to choose from, I'm sure there are plenty of discoveries to be made. My particular interest is in the photos of Vitebsk (now Vitsebsk, Belarus) where a small overlap between some images enables creation of panoramic composites. Prokudin-Gorskii's photos have become a very popular testbed for university projects and courses. Registration, for instance, is a standard Matlab assignment. At Carnegie Mellon University, Suporn Pongnumkul has attempted to automate colour correction by applying mappings obtained from the high-quality manually combined images, and to automate correction of defects by removing 'false edges' that don't appear in all colour channels.
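
The 'false edge' idea is straightforward to express in code. The sketch below is my own illustrative reading of it (in Python, using a Sobel gradient), not Pongnumkul's implementation: an edge visible in exactly one of the three channels is more likely plate damage than scene structure.

    import numpy as np
    from scipy.ndimage import sobel

    def false_edge_mask(r, g, b, threshold=30.0):
        # Mark pixels where a strong gradient appears in exactly one
        # colour channel - a hint of a scratch or blemish on one plate.
        def has_edge(channel):
            c = channel.astype(float)
            return np.hypot(sobel(c, axis=0), sobel(c, axis=1)) > threshold
        count = (has_edge(r).astype(int) + has_edge(g).astype(int)
                 + has_edge(b).astype(int))
        return count == 1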

Damage limitation

The repair of photographic defects has a long pedigree. The traditional speck of dust or hair on the negative led to white artefacts that could be pencilled over, but anything worse might need artistic retouching. Much the same applies to digital images, but recent research has focused on algorithmic methods for interpolating across major gaps in an image (for instance, a wide scratch or crack in a painting). The task, called 'in-painting' after its fine art equivalent, is non-trivial, as standard approaches such as polynomial splines fall over on irregular multidimensional image data.

Some important approaches include Bayesian techniques - probabilistic 'best guesses' at where lines and colours go as they cross the discontinuity - and PDE methods that assume the colours form a flow field controlled by a diffusion or transportation process. Significant pioneers in such methods include Marcelo Bertalmio of Pompeu Fabra University in Barcelona, Guillermo Sapiro of the University of Minnesota, and Andrea Bertozzi of Duke University. Their approach is based on the Navier-Stokes equation: the image intensity is analogous to a 'stream function' in an incompressible fluid, and the Laplacian of the intensity is analogous to vorticity. The results are impressive: images can be recovered from being scribbled or written on, and there are many non-graphical spin-offs. The work of Sapiro and Bertozzi in particular was funded by the US Office of Naval Research, whose interest was not in the art but in the reconstruction of surveillance images sent over low-bandwidth noisy channels - a concept that applies to any type of damaged data.
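
Readers who want to experiment needn't write a PDE solver from scratch: OpenCV ships an in-painting function whose Navier-Stokes mode descends from this work. A minimal sketch (the file names are placeholders):

    import cv2

    # Read the damaged photograph, plus a mask that is non-zero wherever
    # the image is corrupted (a scratch, crack or scribble).
    image = cv2.imread('damaged_photo.png')
    mask = cv2.imread('defect_mask.png', cv2.IMREAD_GRAYSCALE)

    # cv2.INPAINT_NS is OpenCV's Navier-Stokes-based in-painting: intensity
    # is treated as a stream function and propagated into the masked
    # region along isophotes, as described above.
    restored = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_NS)
    cv2.imwrite('restored_photo.png', restored)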

Don P Mitchell's restoration work on the Venera 9 and 10 lander images is a very nice use of such in-painting. Launched in 1975, each probe survived about an hour on the surface of Venus, during which time it sent back a single image. The post-Soviet era allowed Mitchell access to the original 6-bit telemetry data, to which he applied an armoury of filters culminating in Bertalmio's isophote-flow algorithm to interpolate across vertical stripes of corrupted data. It's interesting to compare the rather 'clean' monochrome result, looking like a damp overcast day on Earth, with the Venera 13 and 14 colour images that show Venusian daylight to be amber-brown (or, at least, probably so, given the uncertainty of colour calibration for filter/camera systems that don't match the range of the human eye).


Venus image by Venera 9: (above) raw image, (below) filtered and inpainted.
Reproduced with permission, (c) 2003, Don P. Mitchell.

While the colour on Venus is uncertain, there's at least colour to work with, unlike the vast bulk of pre-WW2 terrestrial photography. Colourisation was a staple of early cinema, and of commercial photography well into the 1950s, but it's a historical irony that the technique has only been perfected now that its original mainstream use has almost vanished (though at the time of writing, Sin City, a modern partially colourised film, is in the cinemas). What's left is contentious: some view colourising old footage as a reasonable reinterpretation to create value-added material; others, like the Writers Guild of America West, see it as 'cultural vandalism'. The middle ground includes recovery of footage known to have been shot in colour but surviving only in monochrome, such as The Daemons, an episode from BBC television's cult science-fiction classic Doctor Who. But whatever the situation, the process has been laborious and expensive; even with computer aid, it has needed a high degree of manual input (there are problems, for instance, in segmenting images along fuzzy boundaries and in tracking those boundaries between film frames).

This may be changing with a technique developed by Dr Dani Lischinski, Dr Yair Weiss, and Anat Levin at the School of Computer Science and Engineering, Hebrew University of Jerusalem. Announced at SIGGRAPH 2004, their paper 'Colorization using Optimization' describes an algorithm for colouring greyscale images based on a relatively simple assumption: that neighbouring pixels should have similar colours if their intensities are similar. Formalised in terms of a quadratic cost function, this leads to an optimisation problem involving a large sparse system of linear equations, easily soluble with Matlab. In practice, this means that you can scribble colours within chosen regions, and the colours will flow outward until they hit any edge of differing intensity, merging smoothly with a colour coming from the other direction. This applies in time as well as space, so that a colouring scheme will interpolate between marked frames (only around one frame in every 5-10 needs marking). If you want to play with this, the authors have released the Matlab code for non-commercial use. I can verify that it works well, but the import of the paper's hint about artistic ability - 'an artist only needs to annotate the image with a few colour scribbles' - rapidly becomes clear.
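
To show how little machinery is involved, here is a much-reduced sketch of the idea in Python rather than the authors' Matlab. It handles a single chrominance channel with 4-neighbour weights, and all the names are mine; treat it as an outline of the method, not the released code:

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import spsolve

    def colorize_channel(Y, scribble, scribbled, sigma=0.05):
        # Y: greyscale intensity in [0, 1]; scribble: user-supplied
        # chrominance values; scribbled: boolean mask of scribbled pixels.
        # Each unconstrained pixel is tied to its 4-neighbours by weights
        # that fall off as intensities differ, giving a sparse linear
        # system. Every region must be reachable from at least one scribble.
        h, w = Y.shape
        idx = np.arange(h * w).reshape(h, w)
        A = lil_matrix((h * w, h * w))
        b = np.zeros(h * w)
        for y in range(h):
            for x in range(w):
                i = idx[y, x]
                A[i, i] = 1.0
                if scribbled[y, x]:
                    b[i] = scribble[y, x]     # pin the pixel to its scribble
                    continue
                nbrs = [(y + dy, x + dx)
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                wts = np.array([np.exp(-(Y[y, x] - Y[ny, nx]) ** 2 / (2 * sigma ** 2))
                                for ny, nx in nbrs])
                wts /= wts.sum()
                for (ny, nx), wt in zip(nbrs, wts):
                    A[i, idx[ny, nx]] = -wt   # pixel = weighted mean of neighbours
        return spsolve(A.tocsr(), b).reshape(h, w)

Solving the same system for the second chrominance channel and recombining with the original intensity yields the colourised image; because the weights collapse wherever intensity changes sharply, the scribbled colours stop at edges, just as described above.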

Ultimately, any form of manipulation of photography will be an inextricable mix of science and art, where objective details fight with aesthetic expectations. We just know Mars is the Red Planet, and so a great deal of debunking has been necessary (see Phil Plait's Bad Astronomy website) to explain the variety of colours - all filter/camera artefacts - appearing in Mars Rover test photos. Psychologically, it's hard to break from the idea that the past was greyscale and suddenly sprang into the unnaturally saturated colours of the Technicolor movie era. Even more recent eras raise problems: writing in the British newspaper the Independent on Sunday ('The Colour of Whiteness', 14 September 2003), Mark Simpson argued that the colour balance of 1950s colour photography is pink-biased because the dominant Kodachrome film was optimised for Caucasian skin, and that Fuji film 'followed the same imperative, but for Asians'. Any restoration, it appears, will conflict with some convention, just as restored old paintings look wrong because we expect old paintings to be murky brown. What did the past look like? We can't know with certainty. But the surprising ability of new digital techniques to take over some of the artistic judgements will allow us to make the best of what glimpses remain.

References

The Empire that was Russia: www.loc.gov/exhibits/empire/
Colorization Using Optimization: www.cs.huji.ac.il/~yweiss/Colorization/
Image and Video Inpainting: mountains.ece.umn.edu/~guille/inpainting.htm
What Color is Mars? www.badastronomy.com/bad/misc/hoagland/mars_colors.html



Felix Grant applies Mathematica to the world's oldest photograph

I first came to love mathematics not as an implement with which to do science, but for its beauty: mathematics and photography both have much in common with sculpture. Brief stints with the University of Texas at Austin and London's Courtauld Institute, and a spell of military photo-interpretation, confirmed my feeling that the boundaries are blurred between mathematical and visual imagery. Scientific computing was cumbersome in those days. Software for image analysis and enhancement is nowadays commonplace, and can be cobbled together relatively easily in generic mathematical packages.

Over the past year, a friend has been digitally restoring negatives from the 1944 Normandy invasion and from the American Civil War that had been damaged by heat or fungal growth. When Ray Girvan mentioned this topic, I was tempted to try experiments of my own. Held at Austin is Joseph Nicéphore Niépce's iconic pewter-plate image View from the Window at Le Gras, taken c.1826. Its shallow relief image, never easy to view, has suffered degradation by atmospheric pollutants. Several data sets of different views, generated by past studies, were available; for software I used what was available in Mathematica (first 5.1, later a 5.2 beta), without specialist additions.

Though comprising tonal blocks, a photographic image is perceived as luminance gradients and linear outlines mapped onto mental models. To attempt enhancement of the photographic image as such would set me up in hopelessly unequal competition with the Getty Conservation Institute. I was more interested in extracting informational templates mimicking what Niépce perceived on that day nearly two centuries ago. This could be done in one bite, but I took four, and used a conventional graphics editing program to merge the results into a final image.

The first pass (with numerous trial runs, tuning the parameters along the way) extracted surface-effect density gradients over increasing radii, fitting smoothed curves to the result. Some data were considerably above the useful range of the others; these were identified in a second pass and assigned a flat on/off bit. I expected outlines to be easier, but it wasn't so. Another set of gradient scans, this time over decreasing radii, theoretically identifies sequences of adjacent catastrophic changes; in practice, fuzzy decisions have to be made. There are two different types of edge, which need to be traced separately: one continuous, the other with significantly less acutance.
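
In outline, the first two passes amount to something like the following - sketched here in Python for concreteness rather than the Mathematica actually used, and with the radii and percentile cut-off as illustrative guesses:

    import numpy as np
    from scipy.ndimage import gaussian_gradient_magnitude

    def gradient_passes(image, radii=(1, 2, 4, 8), plateau_percentile=99):
        # Pass one: density gradients extracted over a range of increasing radii.
        img = image.astype(float)
        gradients = [gaussian_gradient_magnitude(img, sigma=r) for r in radii]
        # Pass two: a flat on/off bit for values far above the useful range.
        plateau = img > np.percentile(img, plateau_percentile)
        return gradients, plateau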

Final output was colour-differentiated: grey for the luminance fills; red for the high-plateau bit; green for the outlines, with hard edges drawn solid. To my scientific self this was the most interesting version, but greys and slight edge blurring gave a truer psychoperceptual result.

I'm quite pleased with my amateur dabbling. The result portrays the informational content more clearly than reproductions of the original, or than Kodak's gelatin-and-silver print copy of 1952. It is also satisfyingly close to Helmut Gernsheim's hand-retouched watercolour copy.
