The World Trade Center Disaster: Who Was
Prepared?
A little after 8am on Tuesday morning,
September 11, 2001, four fuel-laden cross-country passenger jetliners were
hijacked. One was crashed into a section of the Pentagon; another plunged
into the Pennsylvania countryside when passengers prevented the hijackers
from reaching their target. The other two planes were flown into New York
City’s two World Trade Center (WTC) towers, ultimately causing both towers
to collapse and killing nearly 3,000 people.
The financial industry’s equipment
loss was immense. The Tower Group, a technology research company,
estimated that securities firms alone would spend up to $3.2 billion just
to replace computer equipment. Much of the WTC IT and telecommunications
equipment was housed underground and was destroyed by the collapsing debris.
Tower calculated that replacements would include 16,000 trading-desk
workstations, 34,000 PCs, 8,000 servers, plus large numbers of computer
terminals, printers, storage devices, and network hubs and switches.
Setting up this equipment would cost an additional $1.5 billion.
The most vital issue for many companies was their
loss of staff. Few recovery plans anticipated such a catastrophe.
Organizations that were directly hit did not even know who in their
companies had survived or where they were because hardly any kept secure,
accessible lists of employees or contact information. The New York Board
of Trade (NYBT), which had its trading floor in the WTC to deal in such
commodities as coffee, orange juice, cocoa, sugar and cotton, had to call
all employees, one by one. Often survivors could not be reached because
area telephone facilities had been destroyed and the circuits that still
worked were overloaded. A few companies had planned for some staffing
problems, and disaster recovery companies did provide some workspace for
their customers.
Comdisco had seven WTC customers, and it made space available for 3,000
customer employees, enabling those companies to continue operations. Some
recovery companies, including SunGard, made available tractor-trailers
equipped with portable data centers. Not all plans worked. Barclays Bank
had planned to evacuate its 1,200-person investment-banking unit to its
disaster recovery site in New Jersey, but the site proved too small
for so many employees. Moreover, the bridges and tunnels crossing the
Hudson River were immediately closed, so most employees could not get
there. Fortunately, Barclays was able to shift much of its work to its
London, Hong Kong, and Tokyo offices, although the time differences forced
those workers to do double shifts.
Protecting against data loss is
critical and often requires extensive planning. Many organizations
already relied on disaster recovery companies such as SunGard, Comdisco,
and Recall, which offer office space, computers, and telecommunications
equipment when disasters occur. “Cold site” recovery requires
companies to back up their own data onto tapes and store the tapes
offsite. If a
disaster occurs, the organizations transport their backup tapes to the
recovery sites, where they reload their data and restart their
applications from scratch. Although the cold site approach is relatively
inexpensive, restoring data can be slow, often taking up to 24 hours. If
the tapes are stored at the affected site or relatively close by, all data
may be permanently lost, which could put some companies out of business.
Moreover, the data for all activity since the last backup will be lost.
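To make the cold-site model concrete, the sketch below shows the kind of scheduled offsite backup routine such a plan relies on. It is only an illustration: the directory paths, offsite host, and tooling are hypothetical and are not drawn from any firm described in this case.

```python
"""A minimal sketch of the "cold site" backup model (hypothetical paths and host).

Data is archived on a schedule and shipped offsite. After a disaster, recovery
means retrieving these archives and rebuilding applications from scratch, which
is why restores can take many hours and any work since the last backup is lost.
"""
import datetime
import pathlib
import subprocess

DATA_DIR = pathlib.Path("/var/firm/trading-data")        # hypothetical production data
ARCHIVE_DIR = pathlib.Path("/var/firm/archives")         # local staging area
OFFSITE = "operator@offsite-vault.example.com:/vault/"   # hypothetical offsite location


def nightly_backup() -> pathlib.Path:
    """Create a dated archive (the 'tape') and copy it to the offsite vault."""
    stamp = datetime.date.today().isoformat()
    archive = ARCHIVE_DIR / f"trading-data-{stamp}.tar.gz"
    # Archive the production data, analogous to writing a backup tape.
    subprocess.run(["tar", "-czf", str(archive), str(DATA_DIR)], check=True)
    # Ship the archive offsite; keeping it in the same building defeats the purpose.
    subprocess.run(["scp", str(archive), OFFSITE], check=True)
    return archive


if __name__ == "__main__":
    print("Backed up:", nightly_backup())
```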
“Hot site” backups can solve some of these problems, but may
cost some companies as much as $1 million monthly. A hot site is an
offsite facility where a reserve computer continually maintains a mirror
image of the production computer’s data. Should a data disaster occur, the
company can quickly switch over to the backup computer and continue to
operate. If the production site itself is destroyed, the staff can relocate
to the hot site and operate from there.
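By contrast, a hot site keeps a standby copy continuously in step with production. The sketch below, again only illustrative and using a hypothetical standby host and record format, shows the mirroring idea: every change applied to the production store is immediately forwarded to a replica, so a switchover loses little or no work.

```python
"""A minimal sketch of "hot site" mirroring (hypothetical standby host and format).

Every write is applied locally and immediately forwarded to a reserve system at
the hot site, so the standby copy stays current and can take over quickly.
"""
import json
import socket

HOT_SITE_ADDR = ("hot-site.example.com", 9000)   # hypothetical standby system


class MirroredStore:
    """A toy key-value store that replicates each write to a hot-site standby."""

    def __init__(self) -> None:
        self.data: dict[str, str] = {}
        self.standby = socket.create_connection(HOT_SITE_ADDR)

    def write(self, key: str, value: str) -> None:
        # Apply the change to the production copy.
        self.data[key] = value
        # Forward the same change to the standby so its copy stays in step.
        record = json.dumps({"key": key, "value": value}) + "\n"
        self.standby.sendall(record.encode("utf-8"))

# Switching over simply means pointing users at the hot site's copy, which is
# why recovery can take minutes rather than the better part of a day.
```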
While many companies lost a
great deal of data in the attack, a recent Morgan Stanley technology team
report said the WTC was “probably one of the best-prepared office
facilities from a systems and data recovery perspective.” Lower
Manhattan’s heightened concern for data security dates to 1993, when
terrorists exploded a large bomb in the WTC’s underground parking garage,
killing six people and injuring more than 1,000.
Realizing how vulnerable they were, many companies took steps to protect
themselves. Pressures for emergency planning further increased as
companies faced the feared Y2K problems. As a result, the data for many
organizations were relatively well protected when the recent WTC attack
occurred. Let us look at how some organizations responded to the
attack.
Prior to 1993, the NYBT had protected itself by contracting with SunGard
Data Systems Inc. for “cold site” disaster recovery. After the 1993 bombing
it decided to establish its own hot site, renting a computer and
trading-floor space in Queens for $300,000 annually and hiring Comdisco to
help set it up. Despite the expense, it hoped never to have to use the hot
backup site. After the
attack the NYBT quickly moved its operations to Queens and began trading
on September 17, along with the NYSE, Nasdaq, and the other exchanges that
had not suffered direct hits.
Sometimes backups
are too limited. Most disaster recovery companies and their clients have
been too focused on recovery of mainframes and have insufficient
capabilities for recovering midrange systems and servers. Moreover,
backups are often stored in the same office or site and so are useless if
the location is destroyed. For example, the New York Board of Trade backed
up only some of its servers and PCs, and those backups were stored in a
fireproof safe in the WTC, where they were buried beneath thousands of tons
of rubble.
Giant bond trader Cantor Fitzgerald occupied
several top floors in one of the WTC buildings and lost its offices and
nearly 700 of its 1,000 American staff. No company could have adequately
planned for the magnitude of this disaster. However, Cantor was almost
immediately able to shift its functions to its Connecticut and London
offices, and its surviving U.S. traders began settling trades by
telephone. Despite its enormous losses, the company amazingly resumed
operations in just two days, partly with the help of backup companies,
software, and computer systems. One reason for its rapid recovery was
Recall, Cantor’s disaster recovery company. Recall had up-to-date Cantor
data because it had been picking up Cantor backup tapes three to five
times daily. Moreover, in 1999 Cantor had started switching much of its
trading to eSpeed, a fully automated on-line system. After the WTC
disaster, Peter DaPuzzo, a founder and head of Cantor Fitzgerald, decided
that the company would not replace any of the over 100 lost bond traders.
Instead the company switched its entire bond trading to
eSpeed.
America’s oldest bank, the
Bank of New York (BONY), is a critical hub for securities processing
because it is one of the largest custodians and clearing institutions in
the United States. Half the trading in U.S. government bonds moves through
its settlement system. The bank also handles around 140,000 fund transfers
totaling $900 billion every day. Since the bank facilitates the transfer
of cash between buyers and sellers, any outage or disruption of its
systems would leave some firms short of anticipated cash already promised
to others. BONY was under extraordinary pressure to keep running at full
speed.
BONY operations were
heavily concentrated in downtown Manhattan, very close to the World Trade
Center. The bank is headquartered at 1 Wall Street, almost adjoining the
WTC, and had two other sites, on Barclay and Church Streets, that were even
closer. These buildings housed 5,300 employees plus the bank’s main
computer center. On September 11th, the bank lost the two closest sites
and their equipment. The bank had arranged for its computer processing to
shift to centers outside New York in case of emergency, but it was not
able to follow its plan. The World Trade Center attack had heavily damaged
a major Verizon switching station at 140 West Street serving 3 million
data circuits in lower Manhattan. The loss of this switching station left
BONY without any bandwidth for transmitting voice and data communications
to and from its downtown New York offices, and the bank struggled to find
ways to connect with customers.
The bank’s disaster
recovery plan called for paper check processing to be moved from its
financial district computer center to its Cherry Hill, New Jersey
facility. With communication so disrupted, BONY management decided Cherry
Hill was too distant and moved the functions to its closer center in Lodi,
New Jersey. However, that center lacked machines for the bank’s lockbox
business, in which it opens envelopes containing bill payments, deposits
the checks, and reads the payment stubs to credit the correct accounts.
The bank had deliberately planned to have different
levels of backup for different functions. The bank’s government bond
processing was backed up by a second computer that could take over on a
moment’s notice. No such backup existed for the bank’s 350 automated
teller machines; the bank had reasoned that its customers could use other
banks’ machines if a problem arose, and after the attack they were indeed
forced to do so. Even the backup system for the government bond business did not work
properly because the communication lines between its backup sites and
clients’ backup sites were often of low capacity and had not been fully
tested and debugged. For example, BONY’s required connection to the
Government Securities Clearing Corporation, a central component of the
government bond market, failed, so tapes had to be driven to that
organization for several days. Trades were properly posted but clients
could not obtain timely reports on their positions. The bank had also
established redundant telecommunication facilities in case of problems
with one line, but they turned out to be routed through the same physical
phone facilities. John Costas, the president and COO of UBS Warburg,
explained, “We’ve all learned that when we have backup lines, we should
know a lot more about where they run.”
As a
result, customers expecting funds from the Bank of New York did not
receive them on time and had to borrow emergency cash from
the Federal Reserve. Yet Thomas A. Renyi, the Bank of New York’s chairman,
expressed pride in how the bank had responded. He said, “Our longstanding
disaster recovery plans worked, and they worked in the extreme.” It will
be months before BONY can return to its computer center at 101 Barclay
Street, and in the meantime the bank is working with IBM to locate an
interim computer center and to improve its backup systems.
The
Nasdaq stock exchange seems to have had more success. It has no trading
floor anywhere but instead is a vast distributed network with over 7,000
workstations at about 2,500 sites, all connected to its network through at
least 20 points of presence (POPs). The POPs in turn are doubly or triply
connected to its main network and data centers in Connecticut and
Maryland. Nasdaq’s headquarters at 1 Liberty Plaza were heavily damaged.
Its operational staff and its press and broadcast functions are housed in
its Times Square building. On Tuesday, September 11th, Nasdaq opened as
usual at 8am, but it closed at 9:15am and did not open again until the
following Monday, when the NYSE and other exchanges resumed trading. Nasdaq
was well prepared for the disaster with its highly redundant setup. It
had required many managers to carry two cell phones in case both the
regular telephone network and one of the cell phones failed, and it
required every employee, from the chairman on down, to carry a card with
the crisis line number. It had many
cameras and monitoring systems so that the company would know what
actually happened if a disaster or other crisis should strike. Nasdaq had
also deliberately established a very close relationship with Worldcom, its
telecommunications provider, and had made sure Worldcom had access to
different networks for redundancy.
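A small sketch can illustrate why connecting every site through more than one point of presence matters. The POP hostnames below are hypothetical; the code simply shows a client falling back to an alternate POP when its first choice is unreachable.

```python
"""An illustrative sketch of multi-POP failover (hypothetical hostnames).

If each trading site can reach the network through two or three points of
presence, losing one path does not cut the site off from the data centers.
"""
import socket

POPS = [
    ("pop-ny1.example.net", 7001),   # primary point of presence
    ("pop-ny2.example.net", 7001),   # second POP over a different route
    ("pop-ct1.example.net", 7001),   # third POP near a data center
]


def connect_with_failover(timeout: float = 3.0) -> socket.socket:
    """Try each point of presence in turn and return the first working connection."""
    last_error = None
    for host, port in POPS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:           # this POP is unreachable; try the next one
            last_error = err
    raise ConnectionError("all points of presence are unreachable") from last_error
```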
At first Nasdaq established a command center at its
Times Square office, but the collapse of the WTC buildings destroyed
the telephone switches serving that office, so essential
staff members were quickly moved to a nearby hotel. Management immediately
addressed the personnel situation, creating an executive locator system in
Maryland with everyone’s names and telephone numbers and a list of the
still missing. Next it evaluated the physical situation—what was
destroyed, what ceased to work, where work could proceed—while finding
offices for the 127 employees who worked near the WTC. Then it started to
evaluate the regulatory and trading industry situations and the conditions
of Nasdaq’s trading companies. The security staff was placed on high alert
to search for attempted penetration of the building or the network.
On Wednesday, September 12, Nasdaq management
determined that 30 of the 300 firms it called would not be able to open
the next day; 10 of them needed to operate out of backup centers.
Management assigned some of its own staff to work with all 30 firms to
help solve their problems. The next day it learned that lower Manhattan’s
devastated telecommunications infrastructure would not be ready to support
a Nasdaq opening the following day, so it decided to postpone Nasdaq’s opening until
Monday, September 17. On Saturday and again on Sunday Nasdaq successfully
ran industry-wide testing. On Monday, only six days after the attack,
Nasdaq opened and successfully processed 2.7 billion shares, by far its
largest volume ever.
Nasdaq found that its
distributed systems worked very well, and its rapid recovery validated
the need for maintaining two separate network topologies. Moreover, while
Nasdaq lost no senior staff, the company had three dispersed management
sites, and had it lost one, it could still have operated because of the
leadership at its two remaining sites. Nasdaq also realized that its
extensive crisis management rehearsals for both Y2K and the conversion to
decimal pricing had proven vital, confirming the need to schedule more
rehearsals regularly. The company also recognized how critical ongoing
communications were, and so it formalized regular nationwide company
communication forums and established automatic triggers for regular
communication forums with the Securities and Exchange Commission (SEC).
Case Study Questions:
- Summarize the business and technology problems created by the
September 11th, 2001 attack on the World Trade Center.
- How well prepared were the companies described in this case for
the problems resulting from the WTC disaster?
- Compare the responses of Nasdaq and the Bank of New York to
September 11th. What management, organization, and technology
factors affected their disaster recoveries?
- Were there any security problems that companies had failed to
anticipate when the WTC attacks occurred? How well did companies
deal with them?
- Describe some effective actions that resulted from creative
management responses and that were not part of company disaster
plans.
- Select a major financial company and write a summary
description of its operations. Then develop an outline of a
security plan for that company.
Sources: Anthony
Guerra, “The Buck Stopped Here: BONY’s Disaster Recovery Comes Under
Attack,” Wall Street and Technology, November 2001; Saul Hansell
with Riva D. Atlas, “Disruptions Put Bank of New York to the Test,”
The New York Times, October 6, 2001; Tom Field, “How Nasdaq Bounced
Back,” CIO Magazine, November 1, 2001; Dennis K. Berman and Calmetta
Coleman, “Companies Test System-Backup Plans as They Struggle to
Recover Lost Data,” The Wall Street Journal, September 13, 2001;
Jayson Blair, “A Nation Challenged: The Computers,” The New York
Times, September 20, 2001; Debra Donston, “Disaster Recovery’s Core
Component: People,” eWeek, September 13, 2001; Tom Field, “Disaster
Recovery: Nasdaq,” CIO, October 12, 2001; John Foley, “Ready for
Anything?” Information Week, September 24, 2001; Sharon Gaudin,
“Protecting a Net in a Time of Terrorism,” Network World Fusion,
September 24, 2001; Stan Gibson, “Mobilizing IT,” eWeek, September
17, 2001; Eugene Grygo and Jennifer Jones, “U.S. Recovery: Cost of
Rebuilding N.Y. IT Infrastructures Estimated at $3.2 Billion,”
InfoWorld, September 19, 2001; Edward Iwata and Jon Schwartz, “Tech
Firms Jump In to Help Companies Mobilize to Rebuild Systems, Reclaim
Lost Data,” USA Today, September 19, 2001; April Jacobs, “Good
Planning Kept NASDAQ Running During Attacks,” Network World Fusion,
September 24, 2001; Suzanne Kapner, “Wall Street Runs Through
London,” The New York Times, September 27, 2001; Richard Karpinski,
“E-Business Aftermath,” InternetWeek, September 24, 2001; Diane
Rezendes Khirallah, “Disaster Takes Toll on Public Network,”
Information Week, September 17, 2001; Daniel Machalaba and Carrick
Mollenkamp, “Companies Struggle to Cope with Chaos, Breakdowns and
Trauma,” The Wall Street Journal, September 13, 2001; Paul McDougall
and Rick Whiting, “Assessing the Impact (Part One),” Information
Week, September 17, 2001; Patrick McGeehan, “A Nation Challenged:
Wall Street,” The New York Times, September 21, 2001; Paula Musich,
“Rising From the Rubble,” eWeek, September 24, 2001; Kathleen
Ohlson, “Businesses Start the Recovery Process,” Network World
Fusion, September 12, 2001; Julia Scheeres, “Attack Can’t Erase
Stored Data,” wired.com, September 21, 2001; Carol Sliwa, “New York
Board of Trade Gets Back to Business,” Computerworld, September 24,
2001; Marc L. Songini, “Supply Chains Face Changes After Attacks,”
Computerworld, October 1, 2001; Bob Tedeschi, “More Web Spending
with a Focus,” The New York Times, October 8, 2001; Dan Verton, “IT
Operations Damaged in Pentagon Attack,” Computerworld, September 24,
2001; Shawn Tully, “Rebuilding Wall Street,” Fortune, October 1,
2001; and “WTC Technology Replacement Costs Billions,” excite.com,
September 14, 2001.