Article - Issue 24, September 2005

Engineering a trustworthy web

Martin Ince

Scanning an index finger on a biometric keyboard with fingerprint sensor can help confirm identity and eliminate fraud © Mike Liu (iStockphoto)

The World Wide Web is a vital tool for government, business, academe and social interaction. There are fears, though, that it is becoming unreliable and can actually facilitate crime. Martin Ince writes about a new UK study commissioned by the OST that takes a 20-year view of creating a web that people can trust.

From a standing start, the World Wide Web has in just over a decade become the planet’s information machine of choice. For many it has turned into the first port of call for anything from train times to scholarly papers.

The web supports email, the communications medium that underlies almost the whole of Western business and government and a significant percentage of personal correspondence. Less visibly, the web provides the information foundation for the rest of the world’s technology, including the banking system, utilities such as water, gas and electricity, and the movement of goods around the world.

The web is a highly engineered structure, governed by a host of rules and protocols. But nobody sat down and designed it. Instead it is the paradigm of a ‘self-organising’ system that simply grew and spread until, today, nobody in Germany would think it remarkable to read a map stored on a computer in New Zealand.

Cyber trust and crime prevention

New research from a group convened by the UK Office of Science and Technology (OST) suggests that building the web we know today was the easy part – the future may be increasingly problematic. The Cyber Trust and Crime Prevention Project recently brought together experts in computing and the web, crime and security, law and social sciences, including economics. As well as the Department of Trade and Industry, which is in charge of the OST, it involved Government departments such as the Home Office and the Cabinet Office, and the police.

The project, one of several within the OST’s Foresight programme, did not aim to predict the future of the internet. Instead, it used a number of methods to look at possible developments over the next 15–20 years both in cyberspace and in the solid world. It commissioned a dozen state-of-science reviews on everything from online identity to the economics of cyber trust, and tested several models of the future – from free-ranging chaos to total state control.

Criminals target the web

The aim of the project was to find out what the helter-skelter growth in the web means for trust and crime. The rapid growth in online spending implies that a large proportion of people trust the safety of web transactions. But there have already been cases of online fraud and deception, including spam and virus attacks, and ‘phishing’ trips to obtain details of transactions.

In July 2005, several bank customers in the UK fell for one such piece of phishing, in which they were invited to send their security details in reply to a fake email apparently from their bank. The incident highlights a contrast: if you walk into a building that has a sign saying Barclays Bank, you are almost certainly entering Barclays Bank. But can you be so sure when you open an email claiming to come from some well-known company?

The project heard repeatedly that criminals are enthusiastic ‘early adopters’ of technology. They like the web because it moves money and information around the world instantly and – with care – in secret. So as well as opening the door to new forms of crime, the web could allow existing criminals to do more harm.

In one possible indicator of the shape of things to come, the French insurance industry estimates that it loses over €1.5 billion (£1 billion) a year from computer failures, half of them caused maliciously, by external hackers or corrupt insiders.

Security lapses

The overall ethos of the project was that the web is the way more and more things will happen in the future. It will not be abandoned, so it has to improve. One of the project’s expert advisers, Angela Sasse of University College London, has shown that today’s computer systems are not designed to encourage security.

Sasse’s work shows that most people use dictionary words, or names such as those of their children or pets, as computer passwords, while only 9% of passwords are ‘cryptographically strong’ combinations of letters and numbers. And as personal identification numbers are prone to being forgotten, people often write them down – thus negating the security intent.

These difficulties not only damage security but also reduce organisations’ productivity. New technology such as biometrics might help to solve these problems, but itself raises fresh issues. About 5% of people do not have readable fingerprints, either because they lack fingers or because their print pattern is too faint to scan. And members of some religious groups object to being photographed.

IT complexity

One of the major issues faced by the project is that the web is changing as it grows. Information is becoming ‘pervasive’ much as electricity did 100 years ago, but the systems that deliver it are far less predictable.

It is possible to engineer robust IT systems for safety-critical tasks such as air traffic control which rarely go wrong despite their complexity. Such systems are designed and built using formal methods that ensure their reliability and predictability. Making other systems equally tough could be feasible but would require large budgets.

There would also need to be a commitment to ensure that enough people are available to produce such rigorous web technology. But, with some notable exceptions, the project found that the engineering and computing departments of UK universities do not seem to regard it as a priority to train the systems engineers who will be needed to build such systems for general use.

Whom to trust

Even systems that work as their designers intend will only be effective if they are trusted. Web users tend to use websites of respected physical-world information providers such as the BBC. New, web-based sources find it much harder to become trusted. This is a key conundrum for the future of the web. If people trust it too much, they may be robbed. This would make the web less attractive as a place to spend time or do business. But if they trust it too little, they may opt not to use it at all.

William Dutton and Adrian Shepherd of the Oxford Internet Institute call this the “certainty trough”: people who never use the internet do not trust it; people who use it a little trust it a lot; but genuine experts who really know about it trust it rather less.

One possible way of enhancing trust in the web was suggested by a group of lawyers led by project adviser John Edwards, of solicitors Herbert Smith. He suggested the adoption of “generally accepted digital principles”, analogous to the generally accepted accounting principles of the financial world. To satisfy them, any online activity would have to show acceptable levels of security for communications and transactions. To make sure the principles were enforced, autonomous software agents would roam the web looking for problems: as Edwards puts it, acting “as AA man rather than Robocop”. The agents would offer helpful suggestions to transgressors, but there would also be sanctions. Any web user’s record would be open to inspection, and persistent offenders could be banished to a digital sin bin, where it would be apparent to others that they were in danger of doing business with an unsound counterpart.

It is very hard to prevent untrustworthy people going online, but allowing people to see that they are about to deal with someone unreliable could increase trust and reduce losses at the same time. This is the basis of the feedback mechanism whereby users of eBay, whether buyers or sellers, can rate the people they have done business with.
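The feedback mechanism described above can be sketched in a few lines. The trader names, the ±1 scoring and the positive-share formula are illustrative assumptions; eBay’s actual scheme differs in detail:

```python
from collections import defaultdict

# Hypothetical feedback store: each rating is +1 (positive) or
# -1 (negative), left by a counterparty after a transaction.
ratings = defaultdict(list)

def leave_feedback(trader: str, score: int) -> None:
    """Record a +1 or -1 rating for a trader."""
    assert score in (+1, -1)
    ratings[trader].append(score)

def reputation(trader: str) -> float:
    """Share of positive feedback; 1.0 means every rating was positive."""
    received = ratings[trader]
    if not received:
        return 0.0  # no history yet: nothing for others to go on
    positives = sum(1 for s in received if s > 0)
    return positives / len(received)

leave_feedback("alice", +1)
leave_feedback("alice", +1)
leave_feedback("alice", -1)
print(round(reputation("alice"), 2))  # → 0.67
```

The point of such a score is not to keep unreliable traders out, but to make their track record visible before anyone deals with them.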

In the future, such trust calculations could be carried out by machines rather than people. You may already trust your computer software to decide whether an incoming email is spam or contains a virus. So you might also eventually trust an intelligent online agent to find you a cheap loan, and even give it the authority to sign up on your behalf.

Quo vadis?

In its ‘Three Futures’ workshops, the project explored how these variables might play out in future decades. One scenario would allow everyone to be their own data guardian, releasing only such limited data as they are comfortable with. They would live in the digital version of a gated estate. This would entail business disadvantages, because it limits digital commerce.

In another scenario, a government would hold a huge variety of data on citizens and make it available with their consent. This would simplify matters, but would require far more trust in governments’ honesty and competence than people have today: polls show that many people are already more concerned about surveillance, for example by CCTV, than about information privacy. There is also the risk that the government data centre containing the information could become a magnet for every hacker on the planet.

The central and perhaps most likely scenario was a future in which the web becomes more important but only attracts attention when something major goes wrong. Examples developed for the workshop, which were not predictions of any kind, included the loss of everyone’s digital health records just after the paper copies had been thrown away.

One risk is that public opinion can become resigned to periodic failure of web systems, and its low opinion of them is corroborated by scares and panics. In fact, there are plenty of things that people and organisations can do to improve their own physical and data security. After all, anyone who had a document shredder in their home 10 years ago would have been diagnosed as paranoid. Today they are a normal household appliance.

And finally…

The project’s action plan is now being taken forward by the Department of Trade and Industry, other Government departments, and independent bodies including the Information Assurance Advisory Council. The conclusions that it has put forward are divided into the categories of Cyber Trust and Crime Prevention.

To generate Cyber Trust, the project felt it important that systems providers realise the benefits of interaction between people and technology. It also noted that the growth of virtual agents is anticipated, and that some form of legal status and empowerment will need to be created for them. Finally, it noted that society will divide in yet more new ways over the coming decades; technology will need to follow, provide for, anticipate and act upon these changes if a majority of users are to maintain long-term trust in the World Wide Web.

In terms of Crime Prevention, the project advocates that people’s identity must be made safe, simple and secure in both the cyber and physical world – bearing in mind that people may choose to have multiple identities for legitimate reasons, as many already do in playing online games or buying and selling goods online. It also recommends that IT systems should be able to reduce the potential for damage and make it possible to detect cyber crime. And the project points out that legal and regulatory frameworks will need to be devised that are appropriate to both cyber and physical worlds.

Biography – Martin Ince

Martin Ince is a freelance journalist and was communications advisor to the Cyber Trust and Crime Prevention Project.

Further reference

The findings of the Cyber Trust and Crime Prevention Project are available online. Of the dozen state-of-science reviews, all of which are also available online, four concentrate on engineering. They cover dependable pervasive systems, knowledge technologies and the semantic web, trust in agent-based software, and usability and trust in information systems.

The book of the project’s science reviews, Trust and Crime in Information Societies, edited by Robin Mansell and Brian S Collins, the project’s scientific advisors, has just been published.

The project’s findings are designed to help Government and others with their thinking, and are not Government policy.
