Article - Issue 11, February 2002
The Homeland Security Initiative
John Forrest FREng
The attacks on 11 September in the USA demonstrated the vulnerability of the infrastructure of modern societies in a most horrific way. Such attacks, designed to cause major media impact and economic disruption without regard for human life, are not new, as previous attacks on the World Trade Center, on Canary Wharf, on embassies and on military targets such as the USS Cole demonstrated. However, the number of such attacks and their damaging potential are increasing. Until recently, it was generally the case that the perpetrator of such an attack wished to survive and would therefore control the operation remotely, which introduces a significant degree of unreliability as well as a vastly increased likelihood of detection. The new factor we face, seen particularly within the last year, is the attack carried out by fanatics who can circulate in normal society without suspicion and are prepared (or even eager) to die in the attack. Countering this is a problem of a wholly different order and an enormous challenge.
The USA reacted quickly after 11 September, setting up a Homeland Security Initiative coordinated by Governor Tom Ridge and based in the White House. It is recognised that such an initiative must address short-, medium- and long-term issues.
The short-term issues involve rapid reassessment of security and the implementation of new procedures at key centres of infrastructure, such as nuclear power plants, airports, ports, other major transport termini and centres of economic activity (e.g. stock exchanges, data centres). A fresh risk assessment, based on the new threat, has to be carried out for all processes and procedures at these installations.
The medium-term issues are somewhat more complex because they involve greater co-ordination and information flow between administrations (local, national and international) and the associated databases that have been constructed to serve specific needs or interests.
The longer-term issues relate particularly to the role of new technologies in providing the required level of security in our society. There are many such technologies related to identification and authentication. The general field of biometric authentication (3-D image, fingerprints, voiceprints, retinal scan, DNA scan) is developing rapidly. Equally active is the field of software to provide highly secure interaction between such input information and the databases that hold the registered profile of the individual wishing to make some transaction. This is the new domain of trust infrastructures. The largest challenge is to provide both the required security and the required privacy. Many good security solutions give rise to privacy concerns; equally, solutions that would be accepted by civil rights groups give little or no security.
The longer-term issues are often well addressed by the universities. Carnegie Mellon University (CMU) in Pittsburgh was the first to recognise the relevance of its research to the new Homeland Security Initiative. Some 50 existing research projects at CMU are related in some way to the new initiative. About 30 of these are in the computer science department; others are in the electrical engineering, biological sciences and related departments. A new CMU Program, in which American Management Systems Inc (AMS), a leading US management consultant and system integrator to federal, state and local government, is the lead sponsor and industrial affiliate, has been set up to focus work on the medium- and longer-term issues associated with the Homeland Security Initiative.
A two-day workshop, involving participants from AMS, Computer Associates, EDS, Harris, Intel, ActivCard, GemPlus, and other companies active in security systems, was held on 27–28 November at CMU to launch the program. The workshop was arranged to initiate debate on the issues involved and to formulate the research programme ahead. The following topics were encompassed:
- information infrastructures
- hardware and biometrics
- law and policy.
Simon Perry (Computer Associates) identified the main obstacle to trusted information infrastructures as our very poor control environment: when problems are discovered, we tend simply to put ‘patches’ on basically insecure information platforms. Very few companies have either adequate virus control strategies or adequate access control. Even basic facilities, such as password protection, are poorly used. A major issue is the need for much improved education and discipline of all users of interconnected computer systems.
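To make the point about poorly used basic facilities concrete, the sketch below shows what even minimal password protection should involve: a random salt and a deliberately slow key-derivation function, so that a stolen credential store cannot be attacked cheaply offline. The function names and parameters are illustrative, not drawn from any system discussed at the workshop.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash so stored credentials resist offline attack."""
    if salt is None:
        salt = os.urandom(16)  # unique salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

The stored record is only the salt and digest; the password itself is never retained, which is the discipline the workshop found so often missing.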
A clear example of the ‘patch’ approach is seen with viruses, for which the defence is developed only after the virus has appeared. There is much scope for improving on the existing ‘firewall’ approach, both to provide layered security around the operating system, memory and peripherals, and to develop new rule-based tools that test for unusual behaviour. An interesting discussion point was whether software reliability could come about through the present mechanisms of market forces, or whether regulation or the imposition of stringent legal liability in case of damage would be required.
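The rule-based tools mentioned above can be illustrated with a minimal sketch: a monitor that raises alerts when observed behaviour breaks simple rules (repeated failed logins, activity at unusual hours). The rules and thresholds here are invented for illustration; real tools of this kind learn and apply far richer baselines.

```python
from collections import defaultdict

class BehaviourMonitor:
    """Toy rule-based monitor that flags behaviour outside expected patterns."""

    def __init__(self, max_failures=3, allowed_hours=range(7, 20)):
        self.max_failures = max_failures          # rule 1: repeated failed logins
        self.allowed_hours = set(allowed_hours)   # rule 2: activity at odd hours
        self.failures = defaultdict(int)

    def observe(self, user, event, hour):
        """Return a list of rule violations for this event (empty if normal)."""
        alerts = []
        if event == "login_failed":
            self.failures[user] += 1
            if self.failures[user] > self.max_failures:
                alerts.append("repeated login failures")
        elif event == "login_ok":
            self.failures[user] = 0  # a successful login resets the counter
        if hour not in self.allowed_hours:
            alerts.append("activity outside normal hours")
        return alerts
```

Unlike a signature-based virus scanner, such a monitor needs no prior knowledge of a specific attack; it only needs a definition of normal behaviour.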
Clear authentication, verification of assertions or requirements, ability to check against past experience, and ability to obtain assurance from others were all seen as important components in trusted infrastructures.
A difficult issue is in establishing the authority or authorities for trusted infrastructures. We will almost certainly see the evolution of trusted third parties who guard the privacy of the user and provide the trusted authentication and assertion to service providers. These trusted third parties need to be based on the confidence of an already-trusted brand – they could be banks, major retailers, major service providers, or new companies based on some amalgam of these. Much needs to be addressed about how such entities might need to be licensed, might operate, and might be regulated.
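The role of a trusted third party can be sketched in code: the third party issues a signed assertion about the user (for example, that some attribute has been verified), and the service provider checks the signature and expiry without ever seeing the underlying personal data. This sketch uses a shared HMAC key for brevity; a real trust infrastructure would use public-key signatures and certificates, and every name below is illustrative.

```python
import hashlib
import hmac
import json
import time

# Illustrative shared secret; real systems would use public-key signatures.
TTP_KEY = b"shared-secret-known-to-ttp-and-verifier"

def issue_assertion(subject, claim, ttl=300):
    """Trusted third party vouches for a claim without exposing source data."""
    body = json.dumps({"sub": subject, "claim": claim, "exp": time.time() + ttl})
    sig = hmac.new(TTP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, sig

def verify_assertion(body, sig):
    """Service provider checks the signature and expiry, trusting the TTP."""
    expected = hmac.new(TTP_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # assertion was tampered with or not issued by the TTP
    return json.loads(body)["exp"] > time.time()
```

The service provider learns only the claim it needs, which is precisely the privacy-preserving role envisaged for such intermediaries.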
In introducing the presentations on hardware, Gilles Lisimaque, one of the founders of GemPlus, a major supplier of smart cards, emphasised that the use, format and function of ID cards depend on cultural differences between nations. He highlighted a number of challenges, such as the fact that it can be legal to hold two or more identities (unless they are used for fraudulent purposes).
He gave an excellent overview of cards from the simplest magnetic strip type through to the state-of-the-art smart card. It was his view that the cards of the future would need to use a number of validation technologies (private key and biometrics), as one technology alone would not provide the required level of security.
He stressed the importance of having a card introduction policy that could build in complexity and security as needed and as experience in use was gained. He cited the experience in Germany, which started in 1994 with the introduction of 80 million health insurance cards: a simple PIN-based card that has steadily evolved into more complex cards. France started out with a very complex system – an excellent system, but one that has still not achieved what was envisaged. He considered the US Department of Defense access card system the most advanced example of any smart card system yet implemented.
Smart card technology is steadily improving and 64 kB memories are now available. It is important to carry out as much initial processing as possible on the card because the greatest danger of security breaches occurs when information is taken on and off the card. There is a limitation associated with power consumption and the silicon area must not be so large that it could be damaged under flexing of the card.
Privacy issues are still treated in an ad hoc manner, but the latest cards offer the facility to securely partition entirely different domains of information, so that the user is the only one with knowledge of the private keys required to release or interconnect particular information domains.
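The partitioning idea can be sketched as follows: each domain on the card is sealed under its own key, so holding the key for one domain (say, medical data) reveals nothing about another (say, banking). The keystream construction below is a toy built from SHA-256 purely for illustration; it is not real card cryptography, and all names are invented.

```python
import hashlib
import os

def _keystream(key, nonce, length):
    """Toy keystream derived from SHA-256 (illustration only, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

class PartitionedCard:
    """Each information domain is sealed under its own key: the medical key
    opens only the medical domain, the banking key only the banking domain."""

    def __init__(self):
        self.domains = {}

    def store(self, domain, key, data):
        nonce = os.urandom(8)  # fresh nonce so identical data never repeats
        stream = _keystream(key, nonce, len(data))
        self.domains[domain] = (nonce, bytes(a ^ b for a, b in zip(data, stream)))

    def release(self, domain, key):
        nonce, sealed = self.domains[domain]
        stream = _keystream(key, nonce, len(sealed))
        return bytes(a ^ b for a, b in zip(sealed, stream))
```

Because no single key unseals every domain, the card itself cannot be used to link the user's separate relationships together, which is the privacy property the article describes.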
The format of a state-of-the-art smart card was presented by Bill Holmes (SSP Solutions). This Level 3 Smart Card, costing around $20 in quantity, has strong authentication and biometric templates, tamper detection (to prevent keys from being recovered by monitoring power consumption) and excellent physical security. It uses an ARM7 32-bit RISC processor with 96 kB ROM, 64 kB EEPROM and 5 kB RAM. The chip has an on-chip USB interface (which will be the ISO standard) for very fast 12 Mb/s data transfer.
The chip enables photographic and biometric templates to be stored (voice, fingerprint, etc) with secure comparison done on the card. There is provision for both public and private keys with a number of information domains. The card has obvious applications in fast-track access (e.g. at airports), office access, banking, e-commerce, the development of e-government (tax, permits, passports, etc), and emergency medical use.
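Secure comparison on the card – often called match-on-card – can be sketched simply: the enrolled template never leaves the card, a live sample is passed in, and only a yes/no verdict comes back. The bit-distance matching and threshold below are illustrative stand-ins for the proprietary matching algorithms real cards use.

```python
def hamming_distance(a, b):
    """Bit-level difference between two fixed-length biometric templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

class MatchOnCard:
    """The enrolled template stays inside the 'card'; only a verdict leaves it."""

    def __init__(self, enrolled_template, threshold=10):
        self._template = enrolled_template  # never exported off the card
        self._threshold = threshold         # tolerance for sensor noise

    def verify(self, live_sample):
        """Accept the live sample if it is close enough to the enrolled one."""
        return hamming_distance(self._template, live_sample) <= self._threshold
```

Keeping the comparison on the card addresses the danger noted earlier: the greatest risk of a security breach arises when information is moved on and off the card.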
Ram Banerjee (ActivCard) described their work as a software provider to most of the major card companies and service providers. He focused mainly on their work to develop the Department of Defense Common Access Card. There are 4.3 million of these cards in use, issued to service personnel worldwide. These cards provide building and base access and other facilities such as cash from ATMs on board ships and at bases.
He stressed the importance of having different, partitioned, secure domains on the card to ensure privacy. The card has applets for PIN management, identity details, banking and benefits, medical information, plus space for digital certificates, and space for other applets, programmed after issue.
Cards can be created at any of 1900 ‘Rapids’ stations worldwide (essentially a PC, secure link, biometric scanner, and card printer). It takes three to ten minutes to issue, modify or replace a card. The source data for military personnel (23 million personnel records) is held on a database in California and is accessed by a secure encrypted link during the loading and authentication process.
There was much discussion about national identity cards and related constitutional issues. Only around half a dozen countries worldwide now have no requirement for a national identity card. Barry Goleman (AMS) presented the view that in the USA (even more than in the UK) the driver’s licence is the de facto national ID and is accepted as such. However, it is easy to counterfeit – fake licences are available over the Internet for less than $100 – and there are some 240 valid formats of driver’s licence in existence in the US. A single national format, although perhaps issued with different state logos, would be a step forward. The largest step forward, combined with replacing the licence by a smart card, would be improved authentication of the input information used in issuing licences. There is poor access to primary record databases (birth certificate, passport, etc), poor exchange of information between states, and limited rigour of validation.
John Sabo (Computer Associates) talked about the International Security Trust and Privacy Alliance (ISTPA) which has focused on protection of personal information. Johns Hopkins University has an important project on consumer usability, manageability and costs of privacy technology. He set out various useful definitions used in their work:
Security: the establishment and maintenance of policies and measures to protect a system or society.
Privacy: the proper handling and use of personal information, consistent with the preferences of the subject.
Confidence/trust: the freedom from worry, the predictability of action.
Finally, Mike Shamos (CMU) gave an overview of legal aspects. He outlined common fears about ID cards (the ‘big brother’ concern, mistrust of government, compromises to privacy, identity theft, database linking, concentration of power, fear about unknown effects, etc). However, he felt that all of these could be addressed and that ID cards could actually improve privacy, reduce identity theft, and give greater security to many aspects of everyday life.
Major improvements are needed in the control environment for the use of interconnected computers, their software and their security against intrusion. Discipline and education of users are critical in this, but layered and more intelligent security systems are needed.
It was agreed that hardware is not a problem. The capability of smart cards meets current needs in information capacity and price. Very impressive developments have occurred in the security of information on cards and in partitioning information to satisfy privacy requirements.
Biometric authentication has improved greatly, but several methods used in parallel will be necessary. One alone is unlikely to be sufficient to give adequate levels of security and reliability.
The big challenges lie in the organisation, validation and access of the source data on which authentication and issue of digital certificates are based.
The successful implementation of new infrastructures for authentication of individuals with regard to permits, access to facilities, e-banking and e-commerce will be in large part a marketing exercise, with a focus on benefits (e.g. ‘fast-track’ access at airports, obtaining permits, etc) and on the extra security of transactions (e.g. avoidance of identity theft and fraud). The major effort will need to be in understanding, explaining and implementing the compromises that will have to be made to achieve the optimum combination of security and privacy, some aspects of which are mutually exclusive. New types of organisation, acting as trusted third parties, will undoubtedly have an important role to play, but much remains to be decided on how such organisations might operate and be regulated.