September 11, 2001

Privacy vs. National Security: The NSA and the Data Mining Controversy

Earlier this month, USA Today revealed that since 9/11, the National Security Agency (NSA) has been secretly assembling a massive database containing trillions of anonymized domestic telephone call records voluntarily provided to it by three U.S. telecommunications giants – AT&T, Verizon, and BellSouth. The program was designed to detect mathematical patterns that could pre-empt future terrorist attacks in America. The operation involved the collection of records of incoming and outgoing phone numbers that had been stripped of any identifying information or content.1 In short, the NSA program sought to use the phone call patterns of known terrorist suspects to locate others who were suspected of being part of a larger network seeking to harm Americans or American interests.
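The kind of pattern analysis described here is, at its core, link analysis over a call graph: start from “seed” numbers tied to known suspects and expand outward through shared calls. A minimal sketch of that idea (the record format, field names and two-hop cutoff are illustrative assumptions, not details of the actual NSA program):

```python
from collections import defaultdict, deque

def contact_chain(call_records, seeds, max_hops=2):
    """Breadth-first expansion over a call graph: return every number
    reachable from the seed set within max_hops calls, with hop distance."""
    graph = defaultdict(set)
    for caller, callee in call_records:
        graph[caller].add(callee)
        graph[callee].add(caller)  # treat each call as an undirected link

    visited = dict.fromkeys(seeds, 0)  # number -> hop distance from a seed
    queue = deque(seeds)
    while queue:
        number = queue.popleft()
        if visited[number] == max_hops:
            continue  # don't expand past the hop limit
        for contact in graph[number]:
            if contact not in visited:
                visited[contact] = visited[number] + 1
                queue.append(contact)
    return visited

records = [("A", "B"), ("B", "C"), ("C", "D"), ("X", "Y")]
print(contact_chain(records, ["A"]))  # {'A': 0, 'B': 1, 'C': 2}; D and X-Y excluded
```

A real system would operate at vastly larger scale and weight links by call frequency and timing rather than treating every call equally, but the contact-chaining structure is the same.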

Despite all the media hype, data mining and wiretapping are not the same thing, and the NSA program does not appear to have involved eavesdropping on private domestic conversations. As such, the program would not require a wiretap warrant or a pen register order from the Foreign Intelligence Surveillance Act (FISA) Court, which governs the real-time collection of numbers dialed from a single phone.2 Nevertheless, critics argue that the NSA program is yet another attempt by the White House to trample on the constitutional rights of Americans.


This is not the first time that the data mining issue has inserted itself into the privacy vs. national security debate. With the dawn of the new era of megaterrorism, new technologies have become necessary to “predict” future acts of terrorism and to “profile” terror suspects before their acts are committed. Doing so requires vast amounts of data from a wide variety of sources. The moment three hijacked airliners crashed into their New York and Washington targets on 9/11, the lines were drawn between those who sought to protect the nation from a new existential threat and those who sought to prevent any infringement of the privacy rights of U.S. citizens. Many of these battles will end up in the Supreme Court for adjudication.

Even as America struggles to adjust to a new threat and to redefine its security and intelligence systems, major confrontations have already begun to take place between privacy advocates and national security advocates. The issues have come into conflict with one another because new information technologies now permit the harvesting and analysis of vast amounts of information and the quest for such information inevitably straddles the line between personal privacy and national security. 

From the security perspective, the greater the amount of data, the greater the likelihood that terrorists can be “profiled” and terrorist acts reduced. Privacy advocates, however, see these new “data mining systems” as invasions of privacy and infringements of their constitutional rights. The conflict between the right to privacy and the need to protect national security has already consumed (or at least inhibited the development of) several creative ideas since the tragedies of 9/11 and others are currently under a microscope. The NSA program is only the latest in a long line of security-motivated data mining and analysis programs that have come under attack, but it will not be the last.

Some of the most interesting hi-tech data mining programs were established under DARPA – the Defense Advanced Research Projects Agency. Conceived during the Cold War in reaction to the Soviet launch of Sputnik in 1957 and established in 1958, DARPA (a research arm of the Defense Department) is tasked by the government to think independently of the rest of the military and to respond creatively to national defense challenges. The result has often been spectacular achievements. Although DARPA played a leading role in the development of the Internet and is on the cutting edge of developing a new generation of counter-terrorism tools, critics continue to portray the agency as a group of hi-tech computer geeks who work in secret on strange projects designed to invade Americans’ privacy. The perception is wrong, but in the real world, perception is everything, and the politicization of these perceptions has often sounded the death knell for serious national security programs.

One of the most controversial DARPA projects in recent years was the Terrorism Information Awareness (TIA) Program headed by retired Admiral John Poindexter (President Reagan’s National Security Advisor). Poindexter had conceived a project designed to capture the “information signature” of possible terrorists so that the government could track them, collecting as much information about them as possible and using computer algorithms and human analysis to detect patterns and associations. Information was to have been collected from publicly available commercial sources to create a “virtual, centralized, grand database.” This database was to include information from passport applications, visas, work permits, driver’s licenses, car rentals, airline ticket purchases, arrests, and credit transactions, as well as education, medical and housing records. Poindexter had envisioned software that could quickly analyze “multiple petabytes” of data.3 One petabyte (for those unfamiliar with the term) would fill the space occupied by the Library of Congress (which holds over 18 million books) more than 50 times. It could hold 40 pages of text for each of the more than 6.2 billion human beings in the world.
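The scale claims in this paragraph can be sanity-checked with rough arithmetic. Assuming roughly 2 KB per page of plain text (an assumption for illustration, not a figure from the source):

```python
PETABYTE = 2**50            # bytes, binary convention (~1.13e15)
BYTES_PER_PAGE = 2_000      # ~2 KB of plain text per page (assumption)
WORLD_POP = 6_200_000_000   # ~6.2 billion people, the figure used in the text

pages_per_person = PETABYTE / (BYTES_PER_PAGE * WORLD_POP)
print(round(pages_per_person))  # 91
```

On that assumption a petabyte holds about 90 pages of text per person alive, so the text’s 40-pages-per-person figure is, if anything, conservative (it implies a heavier page of roughly 4.5 KB).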

Heather MacDonald, writing about TIA in the Wall Street Journal, noted that it was an idea ahead of its time and that several concepts it explored (such as biometric identification technology enabling the identification and tracking of individuals through face or gait recognition) would find their way into the CAPPS II program (see below). She wrote: “(The) purpose (of the TIA Program) was to prevent another attack on American soil by uncovering the electronic footprints that terrorists leave as they plan and rehearse their assaults. Before terrorists strike, they must enter the country, receive funds, case their targets, buy supplies, and send phone and e-mail messages. Many of those activities leave a trail in electronic databases. Researchers hoped that cutting-edge computer analysis could find that trail in government intelligence files and, possibly, in commercial databases as well.”4 She pointed out that public health authorities have “mined” medical data for years to spot outbreaks of infectious disease, and that credit-card companies have used the same methods to detect fraudulent purchases, yet neither practice raised complaints that privacy rights were being invaded. Unfortunately, in the case of the TIA Program, the furor raised by privacy advocates (who maintained that TIA would let domestic police and spy organizations use a centralized database to build dossiers on U.S. citizens) resulted in Congress closing Poindexter’s office in the fall of 2003, and the agency was barred from continuing nearly all of his research. Poindexter quit government, claiming his work had been misunderstood.5

TIA data was designed to focus only on foreign intelligence and counterintelligence information. It would not have been difficult to build in safeguards that protected against potential abuses of the system. The Defense Department had already set up its own internal and external oversight boards to make sure that constitutional rights and privacy protections were not compromised, but when the matter became politicized, it was doomed. Or, at least, that’s the way it seemed at the time. Although it appeared that Congress had cut off funding for the original TIA program in 2003, it later came to light that Congress had actually left undisturbed a separate but similar $64M research program run by a little-known secretive U.S. government office called Advanced Research and Development Activity (ARDA) – part of the “National Foreign Intelligence Program.”

ARDA sponsors corporate and university research on information technology for U.S. intelligence agencies and is developing computer software that can extract information from databases as well as text, voices, other audio, video, graphs, images, maps, equations and chemical formulas in an effort called “Novel Intelligence from Massive Data.” Using some of the same researchers as Poindexter’s program, ARDA continued to research and develop computer software and equipment to intercept and analyze foreign intelligence that is transmitted electronically and to protect the U.S. methods used to obtain and communicate it.6 The Defense Department maintains that the final product of the research would only be used overseas or against non-U.S. citizens in this country, not against Americans on U.S. soil. Interestingly, a Guardian article disclosed that at least 18 other data-mining projects by Poindexter may have also survived including Evidence Extraction and Link Discovery, Genoa II, Babylon, Communicator, Genisys, HumanID, Bio-Surveillance, TIDES, EARS and WAE.

But the TIA Program was not the only DARPA program affected by the conflict between privacy and national security. There were many others, one of the most intriguing being FutureMAP (Futures Markets Applied to Prediction). In July 2003, FutureMAP (a subproject of TIA) met the same funding fate in Congress as TIA. It would have allowed up to 10,000 participants to buy and sell futures contracts as they wagered on catastrophic events in the Middle East. Headline writers had a field day, caricaturing the program as silly as well as immoral, and one senator (Ron Wyden, D-Oregon) was especially hostile to it. The program was attacked as a “terrorism lottery” being paid for with tax dollars, and Senator Wyden’s website spoke of selling “futures” in possible terrorist attacks and offering profit potential to anonymous bidders if catastrophic events occurred. In effect, he attacked the program as a “terrorist betting parlor.” In point of fact, that was how the program operated.

Wyden was right about the operation of the program, but he missed the big picture. Poindexter tried, but failed, to make that point. FutureMAP involved the online selling of “futures” on terrorist attacks; as in other financial markets, investors were encouraged to buy “futures” in events they thought were likely to happen and to sell them off as they came to believe those events were less likely. Poindexter saw FutureMAP as a way to zero in on potential terrorists who sought to profit from a future catastrophic event because they either knew of it or were themselves planning it. It was a brilliant concept and it deserved a better fate.

Some of the possibilities the website offered for “sale” were the overthrow of the King of Jordan, the assassination of Yasser Arafat, a missile attack by North Korea, and whether terrorists would attack Israel with bio-weapons in 2004. Bidders would have profited if the events for which they held futures – including government coups, assassinations and missile attacks – occurred. The website assured potential investors that DARPA would not have access to their identities or funds. This promise created the possibility that terrorists could drive up the market for an event they were themselves planning, thereby profiting from the attack. The unstated intention of the program was (to the extent possible) to alter future events in the interests of American security.
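The mechanism rests on a standard prediction-market idea: a contract paying $1 if an event occurs trades at a price that doubles as the market’s aggregate probability estimate, so a price spike driven by anonymous buyers is itself a signal. One common automated market maker for such markets is Hanson’s logarithmic market scoring rule, sketched below purely as an illustration (the source does not specify FutureMAP’s actual trading mechanism):

```python
import math

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices (implied probabilities) under Hanson's
    logarithmic market scoring rule, given outstanding shares per outcome."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function; a trade costs C(after) - C(before)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# Two-outcome market: "attack occurs" vs. "does not occur".
before = [0.0, 0.0]   # no shares sold yet -> prices 0.5 / 0.5
after = [50.0, 0.0]   # a burst of anonymous buying on "occurs"
print(lmsr_prices(before))                   # [0.5, 0.5]
print(lmsr_prices(after)[0])                 # rises above 0.62 - a visible signal
print(lmsr_cost(after) - lmsr_cost(before))  # what the buyer paid for the move
```

The parameter `b` controls liquidity: the larger it is, the more money it takes to move the price, which is exactly the knob an operator would tune to make suspicious buying visible without making the market trivially manipulable.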

Eventually, the funding was cut and the program (at least on the surface) shelved. But the barrage of criticism obscured the fact that futures markets have proven to be pretty good predictors of events and the technique has proven successful in commodities and other matters.  However, as with TIA, when the concept of using high-tech software and new forms of data mining for information to prevent terror attacks became a political issue, the fate of the program was sealed.7

In hanging DARPA out to dry, privacy groups failed to recognize that the agency’s job has always been to explore new ideas and research with an eye to enhancing national security. If the United States is going to defeat a shadowy network of terrorists that has already attacked on a massive scale, the key will be information analysis. Without such a system in place, the nation is exposed to a national catastrophe that may be preventable.

All of which leads to the CAPPS II program – the Computer-Assisted (Airline) Passenger Prescreening System. CAPPS II identified about 15% of air passengers for further scrutiny and was intended to conduct a “risk assessment” for the Transportation Security Administration (TSA) of each passenger, using national security information and information provided by passengers during the reservation process – their name, date of birth, home address and home phone number. That information (which currently is not in many airline records) was to have been checked against terrorist watch lists, commercial databases and lists of people wanted for violent crimes; unlike the TIA program, however, CAPPS II was not designed to look at credit card transactions or medical histories. The “risk score” was to have included an “authentication score,” derived by running passenger name record data against commercial databases, to indicate a “threat level” associated with each passenger’s identity.
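As described, the design fused watch-list checks with a commercially derived “authentication score” into a single screening decision. A toy sketch of such rule-based scoring (the threshold, field names and green/yellow/red levels are assumptions for illustration; CAPPS II’s real scoring logic was never made public):

```python
def risk_level(passenger, watch_list, wanted_list, auth_score):
    """Toy CAPPS II-style screening decision: green/yellow/red.
    auth_score in [0, 1] reflects how well commercial records
    corroborate the claimed identity (higher = better match)."""
    name = passenger["name"]
    if name in watch_list:
        return "red"     # known/suspected terrorist: highest scrutiny
    if name in wanted_list or auth_score < 0.4:
        return "yellow"  # wanted for violent crime, or weak identity match
    return "green"       # routine screening

watch, wanted = {"J DOE"}, {"R ROE"}
print(risk_level({"name": "A SMITH"}, watch, wanted, 0.9))  # green
print(risk_level({"name": "A SMITH"}, watch, wanted, 0.2))  # yellow
print(risk_level({"name": "J DOE"}, watch, wanted, 0.9))    # red
```

Even this trivial version shows where the criticism landed: a passenger flagged by a bad commercial record (the 0.2 case) gets extra scrutiny with no visibility into why, which is precisely the redress problem discussed below.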

Civil rights organizations homed in on the program as an invasion of privacy. They worried that such systems could be fooled by identity thieves or forgers and that inaccurate data might flag innocent passengers, who would have had no legal means of challenging the TSA’s evaluations (although the TSA said that an ombudsman would have received complaints). Privacy groups also feared what is termed “mission creep,” whereby personal data would be used by the government for more than just providing air security.

The federal General Accounting Office (GAO) then joined the fray, attacking the TSA for not putting out adequate privacy guidelines, policies on how data was to be safeguarded, or rules on how airlines were to provide data. In a report issued in mid-February 2004, the GAO suggested that the CAPPS II system was being impeded not only by a lack of international cooperation in providing the necessary data, especially on the part of the Europeans (due to passenger privacy laws), but also by uncertainty over the possible expansion (“mission creep”) of the program’s original mission into tracking “regular” criminal activity, and by the system’s inability to recognize identity theft. In addition, there was no protocol in place to assist victims of identity fraud in clearing their names from the CAPPS II system. Unable to be adequately tested, CAPPS II was doomed.

In the end, CAPPS II could not comply with the requirements that Congress had mandated – a precondition for further federal funding – and the findings of the report weakened Congressional support for the program. Privacy advocates feared that if such powerful software systems were developed without limits from Congress, government agents could use them on any database. Heather MacDonald, however, writing in the Wall Street Journal, disagreed. She expressed concern that privacy groups were having (and continue to have) a detrimental effect on national security, challenging technologies critical for American security and undermining support for the CAPPS II program much as they had undermined support for the TIA “data mining” program in late 2003.8 In another editorial, on April 26th in the New York Post, she noted that in 2003 the Electronic Privacy Information Center told the European Parliament why it should refuse to cooperate with CAPPS II: the system would result in “widespread spying,” it said, by giving the Transportation Security Administration “access to [a passenger’s] financial and transactional data, such as credit reports and records of purchases, confidential business records” – untrue, but convincing enough to place the entire rationale for the program in jeopardy. The ACLU sought to undermine the program by suggesting that it would discriminate against minorities by using credit scores to rank a flier’s risk; such scores, according to the ACLU, have a “well-documented bias against minorities” – this, despite the fact that the TSA had denied any intention of using credit scores to assess risk.

In 2004, CAPPS II was replaced by a much scaled-back version of the program called Secure Flight. The new program was to focus its screening efforts on looking for known or suspected terrorists, rather than serving other law enforcement purposes – the “mission creep” issue that had concerned the GAO. It was also to include a redress mechanism through which people could resolve questions if they believed they had been unfairly or incorrectly selected for additional screening. More importantly, the TSA was to take over from the airlines the responsibility for checking passengers’ names against airline data, intelligence data (based on an expanded terrorist no-fly watch list maintained in the FBI’s Terrorist Screening Center) and commercial databases. In the end, it didn’t matter. In February 2006, the Secure Flight program went back to the drawing board. After $150 million and years of testing, the program had failed to establish that it could properly match names to watch lists, protect the privacy of air passengers or provide a workable method for resolving mistakes. Secure Flight had also been found to be vulnerable to abuse, both from employees and from outsiders gaining unauthorized access to the system.9
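The “properly match names” failure is worth dwelling on: watch-list names arrive with transliteration variants, initials and reordered tokens, so exact string comparison is useless. A minimal sketch of the kind of fuzzy matching such a system needs (the normalization steps and the 0.85 threshold are illustrative assumptions, not Secure Flight’s actual algorithm):

```python
import difflib

def normalize(name):
    """Uppercase, strip punctuation, and sort tokens so that
    'Doe, John' and 'JOHN DOE' compare as the same name."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " "
                      for c in name.upper())
    return " ".join(sorted(cleaned.split()))

def name_match(candidate, watch_name, threshold=0.85):
    """Fuzzy match on normalized names via an edit-distance-style ratio."""
    a, b = normalize(candidate), normalize(watch_name)
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

print(name_match("Doe, John", "JOHN DOE"))   # True  (identical after normalizing)
print(name_match("Jon Doe", "John Doe"))     # True  (close spelling variant)
print(name_match("Jane Smith", "John Doe"))  # False
```

The threshold is the whole problem in miniature: set it high and terrorists slip through under spelling variants; set it low and innocent passengers are flagged with no way to clear their names – the exact redress failure the GAO and privacy groups identified.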

But the data mining controversy was to play itself out in one more arena. In the immediate aftermath of 9/11, Torch Concepts (a Defense Department contractor) had approached the Pentagon and offered to try to develop new data mining and analysis techniques to detect potential terrorists by sifting through large numbers of passenger records. In April 2002, with the assistance of TSA officials, JetBlue Airways agreed to turn over to the company the names and addresses of 1.5 million passengers to be used in a risk assessment study.10 To protect privacy, Torch stripped passengers’ names and other unique identifiers out of the records and, at the project’s end, destroyed the data. Word, however, leaked out, and privacy groups expressed fear that the information could be improperly exploited. In February 2004, the Department of Homeland Security’s chief privacy officer, Nuala O’Connor Kelly, confirmed that employees of the Transportation Security Administration (TSA) had acted “outside the spirit of the Privacy Act” when they facilitated the transfer of passenger names, but she suggested that they did not break the law because “no data was brought into the control of the Transportation Security Administration.”
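Stripping identifiers the way Torch reportedly did can be done while still letting records for the same (unnamed) passenger link together, for example by replacing direct identifiers with a keyed hash. A minimal sketch (the field names and the use of HMAC-SHA-256 are assumptions for illustration, not details of Torch’s actual method):

```python
import hashlib
import hmac

SECRET_KEY = b"held-separately-and-destroyed-at-project-end"  # illustrative

def pseudonymize(record, key=SECRET_KEY):
    """Replace direct identifiers with a keyed hash so that records for
    the same passenger still link together without revealing who they are."""
    identity = f"{record['name']}|{record['address']}".encode()
    token = hmac.new(key, identity, hashlib.sha256).hexdigest()[:16]
    return {"passenger_id": token,
            "itinerary": record["itinerary"]}  # keep only analytic fields

a = pseudonymize({"name": "J Doe", "address": "1 Main St", "itinerary": "JFK-LAX"})
b = pseudonymize({"name": "J Doe", "address": "1 Main St", "itinerary": "LAX-SFO"})
print(a["passenger_id"] == b["passenger_id"])  # True: same passenger, linkable
print("name" in a)                             # False: identifier removed
```

Destroying the key at the project’s end (as the data itself was destroyed here) makes the tokens irreversible; keeping it is what privacy groups mean when they warn that “anonymized” data can later be re-identified.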

In a parallel case, the U.S. government disclosed that it had used data from the 1990 census and the records of close to half a million Northwest Airlines passengers in a test of a CAPPS II-like program. The experimental program was designed to identify people who did not conform to predetermined norms and who therefore might pose a ‘threat’. The Northwest Airlines passenger data was turned over by the airline, without the knowledge or permission of the passengers concerned, to NASA’s Ames Research Center. Although Northwest Airlines initially denied that it had shared passenger information with NASA, the agency subsequently abandoned the program after Congress refused to fund it.

The net result of the JetBlue and Northwest Airlines cases is that airlines are now loath to share their passenger data (regardless of the privacy safeguards) for such research. For the airlines, it is a dilemma. Both JetBlue and Northwest feared that publicly cooperating with the government could scare off customers and that sharing this kind of information could land them in court, facing lawsuits from privacy groups. With JetBlue, Northwest and even American Airlines facing possible class action lawsuits filed on behalf of passengers who claim the carriers violated their privacy by sharing their private information without telling them,11 perhaps the best solution would be for the government to compel all the airlines to hand over the information needed. That would relieve the airlines of lawsuit threats and transfer responsibility to the federal government, where it belongs. JetBlue Airways and Northwest Airlines may now face hundreds of billions of dollars in privacy lawsuits, and a class action may also confront AT&T, Verizon, and BellSouth. This would be an unfortunate turn of events, because these companies acted out of a sense of patriotic duty in providing data for national security data mining purposes and are now threatened with legal action for having done so.

In the past, privacy was viewed as a social and cultural concept. However, with the emergence of computers and the World Wide Web, privacy has also become a digital problem.12 With the Web revolution and the emergence of data mining now upon us, privacy concerns pose technical challenges fundamentally different from anything we confronted before the information revolution. How we resolve this issue will not only define our ability to anticipate the actions of our enemies; it will determine the extent to which we are prepared to modify our traditional understanding of privacy in the post-modern era. At some point, those who seek data for security purposes, those who provide data for that purpose and those about whom the data is provided must come to an understanding that violations of privacy do not occur because of data mining; they occur because of the subsequent misuse of the data. Thus, a legal balance must be found between the requirements of privacy in today’s world and the need for valid data mining results; between the need for privacy preservation and the quest for the knowledge necessary to protect our nation.
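Part of the balance this paragraph calls for can be technical rather than purely legal: privacy-preserving analysis techniques let analysts recover aggregate patterns without ever holding trustworthy individual answers. A classic example is Warner’s randomized response, an early forerunner of modern differential privacy (the 75% truth probability below is an arbitrary illustration):

```python
import random

def randomized_response(truth, p=0.75, rng=random):
    """Each respondent answers truthfully with probability p and lies
    otherwise, so no single recorded answer can be trusted or used against them."""
    return truth if rng.random() < p else not truth

def estimate_rate(responses, p=0.75):
    """Unbiased estimate of the true 'yes' rate from the noisy answers:
    observed = p*true + (1-p)*(1-true)  =>  true = (observed - (1-p)) / (2p-1)."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p)) / (2 * p - 1)

rng = random.Random(42)
true_answers = [rng.random() < 0.30 for _ in range(100_000)]   # 30% true rate
noisy = [randomized_response(t, rng=rng) for t in true_answers]
print(estimate_rate(noisy))  # close to 0.30 despite per-person deniability
```

The point for the data mining debate is that the population-level pattern survives even though every individual record is deliberately unreliable: privacy is protected at the record level, knowledge is extracted at the aggregate level.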


1. There is no prohibition against federal agencies receiving business records voluntarily provided by such companies insofar as that information relates to their normal billing procedures. In addition, the three companies reported to have supplied telephone records to the NSA also appear to have acted lawfully. The Telecommunications Act of 1934, as amended, generally prohibits the release of “individually identifiable customer proprietary network information” except under force of law or with the approval of the customer. But, according to USA Today, the telephone records voluntarily provided to the NSA had been anonymized. In addition, the Electronic Communications Privacy Act of 1986 explicitly permits telecommunications companies to provide customer records to the government if the government asks for them. So it would appear that the companies have been acting not just in the public interest, but also within the law and without encroaching on the privacy of any of their customers. (Richard A. Falkenrath, “The Right Call on Phone Records: The NSA’s Program Safeguards Security — and Civil Liberties,” The Washington Post, May 13, 2006, A17.)

2. Under FISA, the application for a surveillance order must include the identity or description of the target, the nature and location of the place being tapped, the type of information being sought and how long the monitoring will last. Fred Kaplan, “The Right Kind of Paranoia: How the NSA could fix its data-mining program,” Slate, May 12, 2006; Heather MacDonald, “Information Please: Only a paranoid solipsist could feel threatened by the calling analysis program,” Weekly Standard, May 22, 2006, Volume 011, Issue 34; “The Datamining Scare: Another nonthreat to your civil liberties,” Wall Street Journal, May 13, 2006; Daveed Gartenstein-Ross, “NSA Nonsense: The legal issues,” National Review Online, May 12, 2006.

3. Michael Sniffen, “Controversial Terror Research Lives On,” The Guardian, February 23, 2004.

4. Heather MacDonald, “The ‘Privacy’ Jihad,” The Wall Street Journal, April 1, 2004.

5. Michael Sniffen, “Controversial Terror Research Lives On,” op. cit.; see also “Pentagon’s terrorism research lives on at other agencies,” Associated Press, February 23, 2004.

6. Tom Regan, “US still funding powerful data mining Tools – Total Information Awareness projects transferred to other agencies,” Christian Science Monitor, February 23, 2004.

7. Charles Cooper, “Hanging DARPA out to dry,” CNET, August 8, 2003; “DARPA’s FutureMAP – Trading In Terrorism,” SCOOP, July 31, 2003.

8. Heather MacDonald, op. cit.; see also “Our Own Worst Enemy,” Wall Street Journal, August 5, 2004.

9. Sarah Lai Stirland, “Bush administration urged to delay launch of passenger screening system,” National Journal (Technology Daily), March 24, 2005; Ryan Singel, “Secure Flight Gets Wary Welcome,” Wired News, August 30, 2004; Kim Zetter, “Secure Flight Faces Uphill Battle,” Wired News, March 31, 2005; Michael Hampton, “Secure Flight Grounded,” Wired News, February 10, 2006.

10. Ryan Singel, “JetBlue Fesses Up, Quietly,” Wired News, September 21, 2003.

11. Sara Kehaulani Goo, “Airlines Confirm Giving Passenger Data to FBI After 9/11,” The Washington Post, May 2, 2004.

12. A. Rezgur, A. Bouguettaya, and M. Y. Eltoweissy. Privacy on the Web: Facts, Challenges, and Solutions. IEEE Security & Privacy, 1(6):40-49, Nov-Dec 2003.
