Reading the Unreadable

Stylus tablet 836: one of the most complete stylus tablets unearthed at Vindolanda. The incisions on the surface are complex, whilst the wood grain, surface discolouration, warping, and cracking of the physical object demonstrate the difficulty papyrologists have in reading such texts © Centre for the Study of Ancient Documents, University of Oxford

New digital imaging techniques have allowed historians to read and analyse ancient texts more successfully than ever before. Dr Melissa Terras is part of an academic team that has developed engineering solutions to decipher abraded ancient texts. Here she explains the new technology the Oxford group have developed.

In the 1970s, archaeologists digging at Vindolanda, a Roman fort on Hadrian’s Wall, began to unearth thousands of documents dating from around AD 100. The Vindolanda texts are now the largest contemporaneous source of information about the Roman Army and the Roman occupation of Britain, and are used extensively by historians, linguists, palaeographers and archaeologists. However, many of these documents remain illegible due to their physical condition.

Since 1997, the Department of Engineering Science and the Centre for the Study of Ancient Documents at the University of Oxford have been working together to develop novel advanced image processing and artificial intelligence techniques which will enable historians to analyse and read the Vindolanda texts more efficiently. Merging engineering solutions with knowledge gathered from experts in Classics has resulted in a prototype system which can take an image of a Vindolanda text as input and output a plausible interpretation of the words in a realistic timeframe. Additionally, this research has demonstrated new image processing and artificial intelligence techniques to allow reasoning and interpretation of digital image data.

Vindolanda texts

The texts from Vindolanda fall into two categories: ink and stylus. The handwriting on the ink texts – carbon ink written on slivers of fir wood – can be made more visible through the use of infrared technology. These texts provide a body of evidence regarding the language and letter forms used at the time, although their content is usually personal communication, such as an invitation to a birthday party and a letter from a mother sending warm underwear to her son serving at Hadrian’s Wall.

The stylus texts, which were the official form of legal documentation for the Roman Army and therefore provide immediate and detailed information about the workings of the Empire, are more complex. A recess was cut into a postcard-sized wooden tablet which was then filled with wax, and the text was gouged into the surface using an iron stylus pen.

Unfortunately, the wax has deteriorated significantly in the 2,000 years that the documents have been buried, leaving only incomplete incisions on warped, cracked, stained and often fragmentary wooden surfaces. The wax on the tablets was sometimes melted and the documents reused, adding to their complexity. It can take one of the few experts in the world who can read these texts months, and sometimes years, to analyse each tablet and produce a possible reading.

The aim of our research was to ascertain if, and how, engineering techniques could aid the papyrologists in reading these valuable and unique texts.

Image processing

The team at Oxford, led by Professor Sir Michael Brady FREng FRS and Professor Alan Bowman FBA, comprised researchers in Engineering, Classics, and Humanities Computing, and was funded by the Engineering and Physical Sciences Research Council (EPSRC). It initially concentrated on image processing techniques which could detect, enhance and measure narrow, variable-depth features inscribed on low-contrast, textured surfaces, such as the Vindolanda stylus tablets.

A technique called ‘shadow stereo’ or ‘phase congruency’ was developed, in which the camera position and the tablet are kept stationary while a number of images are taken where the tablet is illuminated by a strongly orientated light source. If the azimuthal direction of the light source is held fixed, but the light is alternated between two elevations, the shadows cast by incisions will move but stains on the surface of the tablet remain fixed. This strongly resembles the technique used by some papyrologists, who use low raking light to help them read incisions on the tablets.

Edge detection is accomplished by noting the movement of shadow-to-highlight transitions between two images of the same tablet, and so incised strokes can be identified by finding shadows adjacent to highlights which move in the way that incised strokes would be expected to (see figure 1). Although this is not a standard technique in image processing, encouraging results have been achieved so far, and a mathematical model has been developed to determine the best angles at which to position the light sources. Future work will relate the parameters of the analysis to the depth profile of the incisions, to try to identify different overlapping writing on the more complex texts.
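
To illustrate the underlying principle (not the Oxford group's actual algorithm), the sketch below compares two grayscale images of the same tablet taken with the light at a low and a higher elevation from the same azimuth: pixels that change strongly between the two lighting conditions are candidate moving shadows, while pixels that are dark in both images are more likely to be stains. The function name and thresholds are hypothetical.

```python
# Minimal sketch of the shadow-stereo idea: shadows cast by incisions move
# when the light elevation changes; stains and discolouration do not.
import numpy as np

def candidate_incisions(img_low, img_high, diff_thresh=0.15, dark_thresh=0.35):
    """img_low, img_high: grayscale NumPy arrays (floats in [0, 1]) of the same
    tablet, lit from the same azimuth at a low and a higher elevation."""
    diff = img_low.astype(float) - img_high.astype(float)

    # Pixels that change strongly between the two lighting conditions:
    # a shadow present in one image but not the other.
    moving_shadow = np.abs(diff) > diff_thresh

    # Pixels that are dark in *both* images are more likely to be stains or
    # discolouration of the wood, so they are excluded.
    static_dark = (img_low < dark_thresh) & (img_high < dark_thresh)

    return moving_shadow & ~static_dark
```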

Harnessing thought processes

Although these image processing techniques allowed some headway into identifying handwriting on the tablets, it became apparent that the papyrologists required a way to facilitate sorting through the competing hypotheses of what the identified incisions may represent. A series of ‘knowledge elicitation’ tasks were undertaken (such as interviews, walkthroughs, and think-aloud protocols) to understand the reasoning processes used by the experts as they read an ancient document. This resulted in a model of how experts approach, reason about, and produce a reading of a text, making explicit the relationship between the identification of strokes, characters, combinations of characters, words, and phrases.

Additionally, information gleaned from the ink tablet corpus provided a dataset of letter forms used at Vindolanda, and statistics regarding letter, bigraph, and word frequencies. This model was then computationally constructed, using an agent-based system called GRAVA (Grounded Reflective Adaptive Vision Architecture). This was developed by Dr Paul Robertson, a Research Scientist at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, originally to analyse aerial surveillance photography. GRAVA provides a way to implement different modules, or agents, of computer programs which can compare and pass different types of information to each other. These individual agents (programmed in Yolambda, a dialect of Lisp) can be stacked up to represent a hierarchical system, such as the proposed model of how papyrologists read ancient texts.

Determining the description length

Building such agent-based systems is not new in the field of artificial intelligence, but the GRAVA system is unique in that it uses description length as a unifying means of comparing and contrasting information as it is passed from one level of the system to another.

‘Description length’ is calculated from the probability of an input, such as the data describing the shape of an unknown character, matching a known model. It provides a fair basis for cost computation as it captures the notion of likelihood (or probability, P) directly: DL = -log2(P).
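
As a concrete illustration of the formula (the probabilities below are invented for the example), a candidate match with probability 0.25 costs two bits to describe, while a much less likely match of 1/1024 costs ten:

```python
import math

def description_length(p):
    """Description length, in bits, of an outcome with probability p."""
    return -math.log2(p)

description_length(0.25)      # 2.0 bits: a likely match is cheap to describe
description_length(1 / 1024)  # 10.0 bits: an unlikely match is expensive
```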

By adding the description lengths of the outputs of different agents, it is possible to generate a global description length for the output of the whole system. Communication Theory has demonstrated that the shortest description, or explanation, of a problem is the most efficient. Therefore, the smallest global description length generated from many runs of a system is most likely to be the correct solution to the problem: the minimum description length (MDL).
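
The few lines below sketch why summing works: because each description length is a negative log probability, adding the lengths produced by independent agents is the same as taking the negative log of their joint probability, so the smallest total corresponds to the most probable overall reading. The probabilities shown are hypothetical.

```python
import math

p_character = 0.5   # hypothetical probability assigned by a character agent
p_word = 0.25       # hypothetical probability assigned by a word agent

global_dl = -math.log2(p_character) + -math.log2(p_word)  # 1 + 2 = 3 bits
joint_dl = -math.log2(p_character * p_word)               # also 3 bits
```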

Whittling down probabilities

To ensure that different ‘answers’ are propagated on each run of the system, Monte Carlo methods (statistical methods that use random sampling and repetition to approximate a solution) are employed to choose at random which potential answer is passed from one agent to another, giving a different global solution on each run of the system. The global description lengths from many alternative runs of the system are compared, and the MDL generated from any one of these runs is chosen as an indication of the most likely solution to the interpretation problem.
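
A minimal sketch of this sampling loop is given below, assuming each agent exposes its candidate answers with associated probabilities; the agent structure, candidate sets, and run count are hypothetical simplifications, and the real GRAVA system is considerably more elaborate.

```python
# Illustrative Monte Carlo search for the minimum description length (MDL)
# reading: on each run, every agent picks one of its candidate answers at
# random (weighted by probability), the per-agent description lengths are
# summed, and the run with the smallest total is kept.
import math
import random

def run_once(agents):
    """agents: a list of dicts, each mapping an agent's candidate answers to
    their probabilities. One answer is sampled per agent."""
    total_dl, reading = 0.0, []
    for candidates in agents:
        answers, probs = zip(*candidates.items())
        choice = random.choices(answers, weights=probs, k=1)[0]
        total_dl += -math.log2(candidates[choice])
        reading.append(choice)
    return total_dl, reading

def mdl_reading(agents, runs=1000):
    """Repeat the sampling many times and keep the run with the smallest
    global description length: the MDL interpretation."""
    return min((run_once(agents) for _ in range(runs)),
               key=lambda result: result[0])
```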

In this implementation of the system, the dataset from the ink tablets provides the statistical information needed to compare unknown character forms, letter combinations, and words found on the documents. The image processing techniques allow the detection of strokes which might be handwriting – rather than grazes or cuts in the wood – and these candidate strokes are fed into the system.

The unknown strokes are first compared to models of characters by the character agent, which also utilises information about which characters are statistically most frequent in these texts. The probability of each stroke group matching a character is then calculated. One possible solution is randomly passed up to the word agent, which compares strings of characters to word frequency data. The resulting description length from each run of the system represents how well the input data fits a proposed solution.
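
The sketch below illustrates this two-level hierarchy under simplifying assumptions: a character agent scores an unknown stroke group against letter templates weighted by letter frequency, and a word agent scores the resulting string against word-frequency data. The feature representation, similarity measure, and probability floor are all invented for illustration and are not the GRAVA implementation.

```python
import numpy as np

def shape_match(features, template):
    """Crude similarity between a stroke-feature vector and a letter template
    (higher is better); a stand-in for the real template comparison."""
    return float(np.exp(-np.linalg.norm(np.asarray(features) - np.asarray(template))))

def character_candidates(stroke_features, letter_templates, letter_freq):
    """Return {letter: probability} for one unknown character, combining shape
    similarity with how common each letter is in the ink-tablet corpus."""
    scores = {letter: shape_match(stroke_features, template) * letter_freq[letter]
              for letter, template in letter_templates.items()}
    total = sum(scores.values())
    return {letter: score / total for letter, score in scores.items()}

def word_probability(letters, word_freq):
    """Probability the word agent assigns to a candidate string of characters;
    unknown strings receive a small floor probability rather than zero."""
    return word_freq.get("".join(letters), 1e-6)
```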

This prototype ‘cognitive image processing’ system, which takes an image of a stylus text as input and outputs plausible interpretations, provides the means to suggest interpretations to the reader. Additionally, it has been demonstrated that MDL provides a robust and easily implemented architecture on which to base an agent-based system which compares and contrasts data across differing semantic levels.

It is important to note here that the aim of such a system is not to replace historians or papyrologists. Their expertise – like that of any expert – is a valuable but limited resource. By building a system which can reason in the same way that they do, the system can aid the experts in their task, speeding up the process of reading an ancient document. Additionally, the system can retain the choices made whilst reading the damaged texts, making explicit the hidden reasoning processes undertaken by experts when confronted by such documents.

Fulfilling MDL’s potential

Future work entails developing this prototype system into a stand-alone package for use by the papyrologists. There is no reason why this system should not enable the reading of texts from other sources, such as Greek inscriptions on marble, provided that a set of training data is available and formatted correctly for the system to use. The MDL agent-based architecture is also easily adaptable to any problem which requires comparing and reasoning about different data types, such as between image and text.

By drawing together disparate research in image processing, ancient history, and artificial intelligence, a complete signal-to-symbol system has been developed. This offers further opportunities to develop intelligent systems that can interpret image data effectively. It can help people in complex perceptual tasks, and demonstrates the potential of interdisciplinary research for the engineering community.

Further reference

More information about the Vindolanda texts can be found at Vindolanda Tablets Online: http://vindolanda.csad.ox.ac.uk

Biography – Dr Melissa Terras

Melissa is a Lecturer in Electronic Communication in the School of Library, Archive and Information Studies at University College London. Her monograph detailing the development of the prototype system, Image to Interpretation: An Intelligent System to Aid Historians in Reading the Vindolanda Texts, will be published by Oxford University Press (Oxford Studies in Ancient Documents Series) in Autumn 2006.
