Archive for 'Government Programs'

Summary of E-Verify Challenges

Wednesday, May 25th, 2011

If you didn’t get a chance to check out Dr. Antón’s testimony on E-Verify, then you might be interested in her post summarizing the main points for the Center for Democracy and Technology:

Last month, I testified before the House Ways and Means Social Security Subcommittee hearing on the Social Security Administration’s Role in Verifying Employment Eligibility. My testimony focused on the E-Verify pilot system, and the operational challenges the system faces. According to the U.S. Citizenship and Immigration Services website, E-Verify “is an Internet-based system that allows businesses to determine the eligibility of their employees to work in the United States.” The goal of E-Verify – to ensure that only authorized employees can be employed in the U.S. – is laudable. However, the E-Verify pilot system is still in need of major improvements before it should be promoted to a permanent larger-scaled system.

Read the rest on the CDT blog.

Dr. Antón testifies before Congress about E-Verify

Friday, April 15th, 2011

Yesterday afternoon, Dr. Antón testified before the Subcommittee on Social Security of the U.S. House of Representatives Committee on Ways and Means on behalf of the USACM about E-Verify. Here’s part of the official ACM press release on the testimony:

WASHINGTON – April 14, 2011 – At a Congressional hearing today on the Social Security Administration’s role in verifying employment eligibility, Ana I. Antón testified on behalf of the U.S. Public Policy Council of the Association for Computing Machinery (USACM) that the automated pilot system for verifying employment eligibility faces high-stakes challenges to its ability to manage identity and authentication. She said the system, known as E-Verify, which is under review for its use as the single most important factor in determining whether a person can be gainfully employed in the U.S., does not adequately assure the accuracy of identifying and authenticating individuals and employers authorized to use it. Dr. Antón, an advisor to the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee and vice-chair of USACM, also proposed policies that provide alternative approaches to managing identity security, accuracy and scalability.

More information about the hearing, including testimony from other witnesses, is available from the Subcommittee here, and Dr. Antón’s written testimony is available from the USACM here (PDF).

Dr. Antón previously testified before the House Ways and Means Social Security Subcommittee during the summer of 2007 about the security and privacy of Social Security Numbers.

OMB Requests Comments on Government Cookie Policy

Friday, July 31st, 2009

The Federal Office of Management and Budget (OMB) is considering changing the cookie policy for federal government websites. In a recent Federal Register entry, they propose allowing Federal agencies to use cookies to track visitors to their websites, as long as those agencies:

  • “Adhere to all existing laws and policies (including those designed to protect privacy) governing the collection, use, retention, and safeguarding of any data gathered from users;
  • Post clear and conspicuous notice on the Web site of the use of Web tracking technologies;
  • Provide a clear and understandable means for a user to opt-out of being tracked; and
  • Not discriminate against those users who decide to opt-out, in terms of their access to information.”

The OMB is seeking comments on the proposed policy changes through August 10, 2009. Comments may be made on the OSTP blog.

In response, we offer the following comments:

Cookies are small text files used by web servers to maintain state information in the normally stateless Hypertext Transfer Protocol (HTTP). There are several concerns about the use of cookies on government websites:
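To make the mechanism concrete, here is a minimal sketch of how a server adds state to stateless HTTP, using Python's standard `http.cookies` module. The `Set-Cookie` and `Cookie` header names are real HTTP; the `session_id` name and value are hypothetical.

```python
# Sketch: a server issues a cookie, and the browser echoes it back on
# every subsequent request, which is what lets the server recognize a
# returning visitor despite HTTP itself being stateless.
from http.cookies import SimpleCookie

# Server side: issue a cookie on the first response.
response_cookie = SimpleCookie()
response_cookie["session_id"] = "abc123"        # hypothetical identifier
response_cookie["session_id"]["path"] = "/"
set_cookie_header = response_cookie["session_id"].OutputString()
print("Set-Cookie:", set_cookie_header)

# Client side: the browser automatically sends the value back on each
# later request to the same site.
request_cookie = SimpleCookie()
request_cookie.load("session_id=abc123")
print("Cookie sent back:", request_cookie["session_id"].value)
```

Nothing here is nefarious in itself; the privacy concerns below arise from how this mechanism is used and managed in practice.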

[1] Most Internet users do not understand cookies; common misconceptions include believing that cookies are viruses or that they are always harmful. (See V. Ha, K. Inkpen, F. Al Shaar, L. Hdeib, “An Examination of User Perception and Misconception of Internet Cookies”, Proc. of the Conf. on Human Factors in Computer Systems, Montreal, 2006, pp. 833-838)

[2] Web browsers, as currently implemented, do not allow cookies to satisfy the FTC’s Fair Information Practices (FIPs). For example, users are not notified of a website’s use of cookies before those cookies are placed on their computers. Websites may mention cookies in their privacy policies, but studies show that most Internet users do not comprehend privacy policies, and believe that the mere existence of a privacy policy makes their information secure, even if the policy states “we share your information with everyone”! (See M.W. Vail, J.B. Earp, A.I. Antón, “An Empirical Study of Consumer Perceptions and Comprehension of Web Site Privacy Policies”, IEEE Trans. on Engineering Management, 55(3), Aug. 2008, pp. 442-454)

[3] Cookies do not meet the Choice/Consent FIP. To read a website’s privacy policy, a user must visit the website’s homepage, find the policy link, and then read the policy. However, most privacy policies rely on “implied consent”: merely visiting the homepage is taken to signify agreement with the policy, before the user has even had an opportunity to read it.

[4] Cookies do not meet the Access/Participation FIP. Modern browsers often include cookie management utilities for viewing and deleting the cookies stored on a user’s computer. Often, however, the information in a cookie is encrypted, or is a code or identifier meaningful only to the website, not to the user. Because users cannot interpret such data, they cannot verify its accuracy.
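A brief sketch of why this is so: the cookie frequently carries only an opaque token, while the meaningful record lives in a server-side table the user never sees. All names and values below are illustrative.

```python
# Sketch: a user inspecting this cookie sees only a random-looking token.
# The actual profile is stored server-side, keyed by that token, so there
# is nothing in the cookie itself for the user to verify or correct.
import secrets

server_db = {}  # server-side store, invisible to the user

def issue_cookie(profile):
    token = secrets.token_hex(16)       # opaque 32-character identifier
    server_db[token] = profile
    return token

token = issue_cookie({"visits": 12, "last_page": "/benefits"})

# A browser's cookie manager would show the user only something like:
print("tracking_id =", token)
# Without access to the server's table, the token reveals nothing.
```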

[5] Cookies do not meet the Integrity/Security FIP. The cookie specification contains an expiration field, indicating the lifetime of the cookie. Many cookies are set with lifetimes of 10, 20, or 30 years, which is far longer than necessary.
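The lifetime mechanism can be sketched with the standard `http.cookies` module: a 30-year `Max-Age` versus a session cookie that simply omits any lifetime attribute and dies with the browser. The cookie names here are made up.

```python
# Sketch of cookie lifetimes: a long-lived tracking cookie vs. a
# session cookie. Only the Max-Age/Expires attributes differ.
from datetime import timedelta
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["tracker"] = "x"
thirty_years = int(timedelta(days=365 * 30).total_seconds())
cookie["tracker"]["max-age"] = thirty_years    # 946,080,000 seconds
print(cookie["tracker"].OutputString())

# A session cookie omits Expires/Max-Age entirely and is discarded
# when the browser closes.
session = SimpleCookie()
session["sid"] = "y"
print(session["sid"].OutputString())           # no lifetime attributes
```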

[6] OMB’s proposal requires websites to provide a means “for a user to opt-out of being tracked.” However, opt-out cookies do not reliably opt a user out of tracking. Automated cookie removal by antispyware utilities and manual cookie deletion both delete the opt-out cookie along with the other cookies on the user’s machine, so the user is unknowingly opted back in to the tracking service. Achieving reliable opt-out would require redesigning antispyware utilities and web browsers, and maintaining whitelists of opt-out cookies. (See P. Swire, A.I. Antón, Testimony before the Federal Trade Commission, Apr. 10, 2008)
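This failure mode is easy to demonstrate. In the sketch below, a plain dict stands in for the browser's cookie jar, and the (hypothetical) `OPT_OUT` cookie records the user's preference; a blanket cookie wipe silently re-enables tracking.

```python
# Sketch: the opt-out preference is itself stored as a cookie, so any
# "clear all cookies" action deletes the preference along with the
# tracking cookies it was meant to suppress.
cookie_jar = {"OPT_OUT": "1", "tracker_id": "abc123"}

def is_tracked(jar):
    # The tracker honors the opt-out only while the cookie exists.
    return jar.get("OPT_OUT") != "1"

assert not is_tracked(cookie_jar)   # user has opted out

# Antispyware sweep or manual cookie deletion:
cookie_jar.clear()

assert is_tracked(cookie_jar)       # opt-out gone; tracking resumes
```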

Cookies serve an important function in the design of the modern Internet, but they raise legitimate privacy concerns that remain unaddressed, especially in the context of government websites. The benefit of having website statistics may not outweigh the privacy cost. There are other means of evaluating a website, such as user focus groups and surveys. These may be less effective and subject to other biases, but the efficiency loss is well worth the privacy gained by keeping cookies off government websites until an alternative, privacy-preserving technology is developed.

The ECPA and Personal Health Record Systems

Thursday, December 11th, 2008

Yesterday, William Yasnoff discussed whether or not the Electronic Communications Privacy Act (ECPA) provides federal privacy protection for Personal Health Record (PHR) systems. Here at The Privacy Place, we have previously focused on whether the Health Insurance Portability and Accountability Act (HIPAA) applies to PHRs (short answer: no), but today I would like to take a moment to talk about the ECPA.  If you are interested in our coverage of HIPAA and PHRs, I would point you to our post on Microsoft’s HealthVault and our post on Google’s Google Health project.

Let’s start with some background on the ECPA.  The ECPA was passed in 1986 as an amendment to the Wiretap Act of 1968 and primarily deals with electronic surveillance.  The purpose of the Wiretap Act was to make it illegal for any person to intercept oral communications such as telephone calls.  The first title of the ECPA extends the original Wiretap Act to prevent the interception of electronic communications.  The second title of the ECPA (commonly called the Stored Communications Act) adds protection for stored communications and prevents people from intentionally accessing stored electronic communications without authorization.  The ECPA has been amended three times since it was passed.  First, it was amended by the Communications Assistance for Law Enforcement Act (CALEA) in 1994.  Second, it was amended by the USA PATRIOT Act in 2001.  Third, it was amended by the USA PATRIOT Act reauthorization acts in 2006.

Now, Yasnoff makes several claims in his post, which I will discuss in order.  First, he claims that there are no exceptions in the ECPA and that this means whichever organization holds your information must get your permission to release it.  This is categorically not true.  There are many exceptions in the ECPA, but for the sake of simplicity, I will limit this discussion to the two main exceptions of the original Wiretap Act, both of which were retained by the ECPA.

The first exception allows interception when one of the parties has given prior consent.  This could mean that the government can legally access your communications if your PHR service provider consents prior to the communication.  Thus, Yasnoff’s strong statement that PHRs “MUST GET YOUR PERMISSION” (emphasis from original statement) is simply incorrect.

The second exception allows interceptions if they are done in the ordinary course of business.  This could mean that your data would be accessible by third parties such as an information technology vendor that maintains the software.  Effectively, this is a somewhat broader exception than the exception found in HIPAA for Treatment, Payment, and Operations, which Yasnoff found to be wholly unacceptable for protecting patient privacy.

Second, Yasnoff claims that the ECPA “is not long or complicated – I urge you to read it yourself if you have any doubts.”  This statement, too, is categorically untrue.  Paul Ohm, previously an attorney for the Department of Justice and currently an Associate Professor of Law at the University of Colorado Law School, has publicly challenged tax law experts with his claim that the ECPA is more complicated than the U.S. Tax Code.

Bruce Boyden, an Assistant Professor of Law at the Marquette University Law School, wrote a chapter in Proskauer on Privacy discussing electronic communications and the ECPA. In it he details many of the nuanced aspects of the ECPA, including the three subsequent amendments to the ECPA. With regard to the first title (Interception) he says:

To “intercept” a communication means, under the act, “the aural or other acquisition of the contents of any wire, electronic, or oral communications through the use of any electronic, mechanical, or other device.” The application of this definition to electronic communications has at times been particularly difficult, and courts have struggled with a number of questions: What exactly qualifies as the acquisition of the contents of a communication, and how is it different from obtaining a communication while in electronic storage under the Stored Communications Act? Does using deception to pose as someone else constitute an interception? Does using a person’s own device to see messages intended for them qualify?

Boyden later talks about limitations to the second title (Stored Communications):

[T]here are two key limitations in section 2701 [of the ECPA].  First, it does not apply to access of any stored communication, but only those communications stored on an electronic communications service facility as defined under the act.  Second, the definition of “electronic storage” in the act does not encompass all stored communications, but only those in “temporary, intermediate storage” by the electronic communication service or those stored for backup protection.

These seem like rather important exceptions which continue to refute Yasnoff’s claim that there are no exceptions in the ECPA, but to his second point, this seems pretty complicated.  At least, it certainly doesn’t seem as simple as just finding some information that has been communicated to and stored by a PHR service provider, which was Yasnoff’s implication.

Boyden has also discussed whether automated computer access to communications is a violation of the ECPA.  The discussion is more complicated than it may appear at first and there’s an interesting discussion of it over on Concurring Opinions.

Broadly, several organizations feel that current US privacy law, including the ECPA, is discombobulated. The Electronic Frontier Foundation believes that fixing the ECPA is one of the top five priorities in their privacy agenda for the new administration. The Center for Democracy and Technology would like to see the new administration pass consumer privacy legislation and a “comprehensive privacy and security framework for electronic personal health information.” The ACLU would like to see the new administration “harmonize privacy rules.” I submit that these organizations do not feel that the ECPA provides clear and adequate privacy protections for PHR systems.

Yasnoff’s third claim is that PHRs which are “publicly available” receive stronger protections under the ECPA than those that are “private.”  In fact, Yasnoff says:

Only those that are “publicly-available” are included. While this clearly would apply to generally available web-based PHRs, systems provided only to specific individuals by employers, insurers, and even healthcare providers are less likely to be considered “publicly-available.” Therefore, ECPA protection is limited. So you are only covered if you use a PHR that is available to anyone.

This statement is either completely backwards as it relates to the ECPA or, perhaps more likely, not a factor for ECPA protection at all.  The EFF’s Internet Law Treatise has an article describing the differences in public communications versus private communications:

“[T]he legislative history of the ECPA suggests that Congress wanted to protect electronic communications that are configured to be private, such as email and private electronic bulletin boards,” as opposed to publicly-accessible communications. See Konop, 302 F.3d at 875, citing S. Rep. No. 99-541, at 35-36, reprinted in 1986 U.S.C.C.A.N. 3555, 3599.

Thus, the public accessibility of the PHR service is not what matters. The pressing concern is whether the communication itself was meant to be public or private. If it was public, then the ECPA simply doesn’t apply. If it was private, then whatever protections the ECPA does afford would apply.

By now it must be clear that I disagree with William Yasnoff’s assessment of the ECPA’s application to PHRs.  I did, however, want to point out one interesting privacy protection that the ECPA offers which HIPAA does not: a private right of action. 

Basically, a private right of action allows citizens to file civil lawsuits in an attempt to recover losses caused by violations of a law.  The ECPA includes a private right of action, while HIPAA does not.  HIPAA’s lack of a private right of action has drawn some criticism.  On the other hand, the ECPA’s private right of action has also been criticized as unnecessary and wasteful.  Perhaps it is a stretch, but this was the only improvement in privacy protection that I was able to find to support Yasnoff’s argument regarding the use of the ECPA to provide privacy protections for PHRs.

I would like to conclude by saying as directly as possible that the ECPA does NOT provide clear or adequate privacy protection for personal health information given to PHR systems. Privacy in general and healthcare privacy in particular are hotly debated current concerns for many organizations. I believe it is likely that the Obama administration and the next session of Congress will attempt to address the privacy concerns raised by organizations like the EFF, the CDT, and the ACLU. In the meantime, however, do not use a PHR service under the assumption that the ECPA protects the privacy of your medical records.

A success story in health information exchange

Sunday, February 19th, 2006

We are all aware that our lives are becoming increasingly digital, and hospitals are no exception. Major funding initiatives are underway to support the transition of hospitals into the digital age. In 2004, the US government spent $50 million to test the computerization of health records and proposed a further $125 million in related federal spending for 2005.

In April 2004, President Bush asked the IT industry to build a system that would provide every citizen of the United States with an electronic health record (EHR) that could be accessed from any location by 2014. He appointed Dr. Brailer (national coordinator for Health Information Technology for the Department of Health and Human Services) to coordinate this effort and establish the Nationwide Health Information Network (NHIN).

In December 2005, Dr. Brailer’s office awarded $18.6 million in contracts to four consortia led by IBM, Computer Sciences Corporation, Accenture and Northrop Grumman to develop prototype architectures for the NHIN. Each group consists of developers, hospitals, laboratories, pharmacies and physicians who must prove that EHRs can be exchanged across different health organizations.

In a similar effort to build such data-interchange networks, Connecting for Health, a public-private collaborative led by the Markle Foundation, developed a prototype system (to be released in Spring 2006) that successfully exchanged thousands of health records among three independently developed regional records systems (in California, Massachusetts and Indiana). These three systems shared no common architecture, but each was able to apply the common framework developed by Connecting for Health to exchange records.

Seeing such successful projects, we can rest assured that our federal money is being used efficiently and in the right direction.

Are you on the Federal Terror Watchlist?

Wednesday, December 7th, 2005

According to a C|Net article, 30,000 airline passengers have been mistakenly placed on the federal watchlist. Having your name match a name on the watchlist means you are subject to extra screening. According to Jim Kennedy, director of the Transportation Security Administration’s redress office, none of these passengers were kept from boarding.

In order to avoid these inconveniences, a person must submit forms to the TSA proving their identity, and the evaluation of these forms can take 45 to 60 days. At this point, the passenger’s name is not removed from the list. Instead, their name is put on a “clearance” list. This means they will not be able to check-in at a kiosk, and they would typically have to explain their situation to a customer service representative at check-in.

As a private citizen, I understand that sometimes a name is all we have to go on. Consider the possibility that a list of names is found in a known terrorist’s desk drawer. Those names are then put on the watchlist. This seems like a reasonable action. However, as a computer scientist and a researcher, I find it inefficient and almost irresponsible to merely place a person’s name on a “clearance” list after their identity has been verified, and to still subject that individual to inconvenience whenever they travel. If this is the best the government has come up with, it is a bit disturbing.

In the government’s defense, it seems they are trying to rectify these issues with a new Secure Flight program that is currently being scrutinized before approval. According to this article, Homeland Security is in the final stages of approving a new pre-flight screening process. The Data Privacy and Integrity Advisory Committee is advising them to narrowly focus the pre-screening program, possibly by requiring a passenger’s name and date of birth. The advisory panel also says that the TSA has yet to fully define Secure Flight, while the American Civil Liberties Union has repeatedly called on Homeland Security to eliminate the program.
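The advisory committee's suggestion is easy to motivate with a small sketch: matching on name alone flags every traveler who shares a name with a listed person, while adding date of birth as a second field narrows the matches considerably. The names, dates, and list contents below are entirely made up.

```python
# Sketch: false positives under name-only matching vs. name + date of
# birth. The "watchlist" holds (name, dob) pairs.
watchlist = {("john smith", "1970-03-15")}

passengers = [
    ("john smith", "1982-11-02"),   # innocent traveler, same name
    ("john smith", "1970-03-15"),   # the actual listed individual
]

# Name-only matching flags anyone whose name appears on the list.
name_only_hits = [p for p in passengers
                  if any(p[0] == entry[0] for entry in watchlist)]

# Name + DOB matching requires both fields to agree.
name_dob_hits = [p for p in passengers if p in watchlist]

print(len(name_only_hits))   # both travelers flagged
print(len(name_dob_hits))    # only the true match
```

Even two fields will not eliminate all collisions, but the sketch shows why each additional identifying field sharply reduces the number of innocent travelers caught by the list.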

Read more about this C|Net story here.

Enforcement of Privacy Policies

Monday, November 28th, 2005

The Privacy Place is currently conducting a survey to gauge user comprehension of, and views on, privacy policies. While conducting the survey, we’ve received several pieces of valuable feedback from our participants. One particular area of interest is the lack of enforceability of privacy policies. Many respondents expressed concern that privacy policies are useless because an institution’s actual privacy practices may not comply with its stated policy. Furthermore, the privacy policy may not be honored when the business is sold or goes bankrupt.

This is a very good point. However, we cannot abandon privacy policies because of a current lack of enforcement. We need to maintain privacy policies so that the mechanisms that are in place, or being put into place, have something to enforce compliance against. For example, consider the UK Information Commissioner’s Office’s recent unveiling of its new enforcement strategy. David Smith, the new deputy information commissioner, has announced that his office will bring enforcement actions against businesses that deliberately or repeatedly ignore their responsibilities under the Data Protection Act 1998.

Privacy policies are necessary because we require accountability. We need to hold organizations accountable for their privacy practices, and one way of doing so is to ensure that companies keep the promises they make to consumers via their privacy policies.

Read more about the Information Commissioner’s Office’s new strategy here.

National Security Letters

Wednesday, November 9th, 2005

According to a Washington Post article, the FBI can issue a letter to an Internet Service Provider (ISP) or financial institution forcing it to hand over information on its customers. The Post article describes a situation in which George Christian, who manages digital records for libraries in Connecticut, was approached by the FBI, which demanded that he turn over information about usage of a specific computer. They also warned him never to tell anyone about the demand.

The Washington Post explains the nature of the letters:

The FBI now issues more than 30,000 national security letters a year, according to government sources, a hundredfold increase over historic norms. The letters — one of which can be used to sweep up the records of many people — are extending the bureau’s reach as never before into the telephone calls, correspondence and financial lives of ordinary Americans.

Issued by FBI field supervisors, national security letters do not need the imprimatur of a prosecutor, grand jury or judge. They receive no review after the fact by the Justice Department or Congress. The executive branch maintains only statistics, which are incomplete and confined to classified reports. The Bush administration defeated legislation and a lawsuit to require a public accounting, and has offered no example in which the use of a national security letter helped disrupt a terrorist plot.

The most disturbing part about this, to me at least, is the lack of checks and balances in place. This gives the FBI carte blanche to invade the privacy of any individual, at any time, for any reason, leaving individuals with little to no recourse.

Read more in the Washington Post article here.

TSA’s Secure Flight in the news

Tuesday, September 27th, 2005

There have been several stories regarding TSA’s Secure Flight program and no-fly lists over the past few days. The major news this week is that TSA has announced that it will not use commercial data brokers in the initial deployment of Secure Flight (news presented in an article and confirmed by EPIC’s overview of Secure Flight). This announcement came just before a major report by the Secure Flight Privacy/IT Working Group [pdf] was released yesterday, in which the group was highly critical of the TSA’s actions regarding Secure Flight. Bruce Schneier, a member of the working group, discusses the report in more depth in a blog entry.

Some other major stories regarding the TSA have come forward regarding people’s difficulties with the no-fly lists and the pains they endure in trying to remove themselves from the list once mistakenly placed on it. Wired is running a story about several people who have had bad experiences with the system, including a nun who spent nine months on the list, missing meetings and events, until an appeal was made to Karl Rove and the situation was rectified. Another person’s dilemma is described in this article: a pilot was placed on the no-fly list, and thus effectively unable to work, all because of what seems to be a data error. The pilot is fighting the situation in court. In this case, the government maintains that a person’s presence on the list, and the reasons for being there, are so secret that they will not be disclosed to the defense even in court.

In the Wired article, Secure Flight is presented by the TSA as the solution to these types of problems. However, with so many criticisms and concerns over privacy practices and data accuracy, there is much to be done before Secure Flight will have a chance to adequately address these issues.

The cost of gov’t secrecy

Tuesday, September 6th, 2005

A newly released report indicates that the government’s spending on maintaining secrets is rising across the board. The summary of findings indicates that in 2004, $148 was spent keeping new secrets for every $1 spent releasing old secrets; this ratio has been rising for the past several years, and as recently as 2001 the government spent only $20 to keep secrets for every $1 spent to release them.

The U.S. government also classified more documents last year than in any previous year: 15.6 million documents, at a cost of $460 per document to keep each one secret. Meanwhile, the number of Freedom of Information Act requests hit an all-time annual high of 4,080,737. The government is still unable to keep up, though agencies are improving their ability to handle requests.
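A quick back-of-the-envelope check of those figures, assuming the $460-per-document cost applies across all 15.6 million classifications:

```python
# Sketch: total 2004 classification cost implied by the report's
# per-document figure. Both inputs are taken from the report summary.
documents_classified = 15_600_000
cost_per_document = 460          # dollars per document kept secret

total = documents_classified * cost_per_document
print(f"${total:,}")             # $7,176,000,000
```

That is roughly $7.2 billion spent in a single year keeping classified documents secret, which puts the $148-to-$1 keep-versus-release ratio in perspective.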

This report is interesting in its discussion of how the government keeps secrets, what types of secrets are being kept, and the costs involved.

The AP has a story on the new report that summarizes many of the key findings and presents some reasons for the increased secrecy.