Archive for 'Legislation'

In Defense of Smart Phone Security by Default

Sunday, October 19th, 2014

The Apple iOS 8 phone and the latest Google Android phone claim to establish landmark privacy protections by enabling encryption by default. According to Apple and Google, they will be unable to “open” the phone for anyone, not even law enforcement. These new measures have been sharply criticized by the Director of the FBI and the Attorney General. As a software engineering professor, I’ve devoted my career to teaching students how to develop (a) secure, (b) privacy-preserving, and (c) legally compliant software systems. I’m not qualified to debate whether or not this move by Apple and Google is lawful or constitutional. However, as a technologist I can assert that applying security best practices yields a system that withstands intrusions and denial-of-service attacks, limits access to authenticated and authorized users, and so on.
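To make the technical claim concrete, here is a minimal sketch of the general idea behind passcode-derived, on-device encryption. It is only an illustration of the principle, not Apple’s or Google’s actual implementation; the key-derivation parameters are arbitrary and it assumes the third-party Python “cryptography” package is installed.

```python
# Minimal sketch (NOT the vendors' real design): when the decryption key is
# derived on the device from a secret only the user knows, the vendor holds
# no key it could hand over.
import base64
import hashlib
import os

from cryptography.fernet import Fernet


def derive_key(passcode: str, device_salt: bytes) -> bytes:
    """Derive a symmetric key from the user's passcode and a per-device salt."""
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key


device_salt = os.urandom(16)               # stored on the device; not secret by itself
key = derive_key("1234-5678", device_salt)
ciphertext = Fernet(key).encrypt(b"contacts, messages, photos ...")

# Only someone who knows the passcode can re-derive the key and decrypt.
# The vendor never sees the passcode, so it has nothing to surrender.
plaintext = Fernet(derive_key("1234-5678", device_salt)).decrypt(ciphertext)
assert plaintext == b"contacts, messages, photos ..."
```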

The recent “encryption by default” design decision by Apple and Google is currently being discussed in software engineering and security classes across our nation, and perhaps across the globe. Privacy and security researchers, technologists, and activists have largely applauded this decision because it raises the bar for truly implementing security best practices. It’s a bitter pill for professors who teach students to develop secure, privacy-preserving, and legally compliant software to have those students told on the job, “Oh, that stuff you learned about security back in school? We only want you to secure the system part way, not all the way. So, leave in a back door.” Such a position undermines the academic institutions seeking to prepare tomorrow’s security and privacy workforce in an ever-changing world where sophisticated criminals are getting smarter and their offensive techniques are outpacing our ability to stay ahead.

From my experience working with government agencies, I thoroughly understand the desire to “catch the bad guys” and value the ability to prevent malicious criminal activity by individuals or nation states. I want our government, Department of Homeland Security, Department of Defense, and Intelligence Community to protect us from the unfathomable. I find myself wondering why the very institutions that promote security and privacy best practices (via, for example, centers of excellence at our nation’s top universities) are so vehemently opposed to industry actually implementing those best practices. My analysis yields two observations:

  1. Taking the Easy Way Out. For law enforcement to expect companies to provide the government with backdoor access (even when required by law) seems to me to be the lazy approach. If one reads between the lines, one could infer that the government lacks the incentives and/or the will to innovate and improve the state of the art in cyber offense. Where’s the spirit of the scientists and engineers who enabled man to walk on the moon? Where’s the American will to innovate, to surpass the state of the art, and be the best? Why let other nations beat us at our own game? The only way we get better at offense is by facing the best possible defense. At a time when other nation states are growing so sophisticated, we risk never developing our own capabilities if we rely on an easy backdoor rather than honing our own skills. We need to keep ourselves sharp by learning how to confront state-of-the-art systems. If we rely on mandated backdoors instead of staying ahead of the curve, other countries and their intelligence services will have every reason to develop capabilities that surpass our own agencies’.
  2. Creating a Backdoor for Use in Other Countries. If the United States expects companies to provide a backdoor to gain access to systems and the data that resides in those systems, then other governments will, too. We can’t very well expect Apple or Google to provide a backdoor to the U.S. but not to China or Russia. At least in the United States, we have a legal framework that requires search warrants and the like to gain access via the backdoor. But many other countries lack these legal safeguards and will require the phone companies to enable snooping within their borders with no legal protections comparable to those in the U.S. system. As security engineers have learned in many other systems, you can’t build a vulnerability that will be used only by the good guys and not by others.

I certainly empathize with law enforcement’s desire to gain evidence for critical investigations. But Congress and the White House have agreed that cybersecurity should be funded as a national priority. As professors of computer security, we can’t teach the importance of building secure systems and then explain to our students that we will leave tens of millions of devices insecure.

Dr. Annie I. Antón is a Professor in and Chair of the School of Interactive Computing at the Georgia Institute of Technology in Atlanta. She has served the national defense and intelligence communities in a number of roles since being selected for the IDA/DARPA Defense Science Study Group in 2005-2006.

Summary of E-Verify Challenges

Wednesday, May 25th, 2011

If you didn’t get a chance to check out Dr. Antón’s testimony on E-Verify, then you might be interested in her post summarizing the main points for the Center for Democracy and Technology:

Last month, I testified before the House Ways and Means Social Security Subcommittee hearing on the Social Security Administration’s Role in Verifying Employment Eligibility. My testimony focused on the E-Verify pilot system and the operational challenges the system faces. According to the U.S. Citizenship and Immigration Services website, E-Verify “is an Internet-based system that allows businesses to determine the eligibility of their employees to work in the United States.” The goal of E-Verify – to ensure that only authorized employees can be employed in the U.S. – is laudable. However, the E-Verify pilot system still needs major improvements before it should be promoted to a permanent, larger-scale system.

Read the rest on the CDT blog.

Dr. Antón testifies before Congress about E-Verify

Friday, April 15th, 2011

Yesterday afternoon, Dr. Antón testified before the Subcommittee on Social Security of the U.S. House of Representatives Committee on Ways and Means on behalf of the USACM about E-Verify. Here’s part of the official ACM press release on the testimony:

WASHINGTON – April 14, 2011 – At a Congressional hearing today on the Social Security Administration’s role in verifying employment eligibility, Ana I. Antón testified on behalf of the U.S. Public Policy Council of the Association for Computing Machinery (USACM) that the automated pilot system for verifying employment eligibility faces high-stakes challenges to its ability to manage identity and authentication. She said the system, known as E-Verify, which is under review for its use as the single most important factor in determining whether a person can be gainfully employed in the U.S., does not adequately assure the accuracy of identifying and authenticating individuals and employers authorized to use it. Dr. Antón, an advisor to the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee and vice-chair of USACM, also proposed policies that provide alternative approaches to managing identity security, accuracy and scalability.

More information about the hearing, including testimony from other witnesses, is made available by the Subcommittee here, and Dr. Antón’s written testimony is available from the USACM here (PDF).

Dr. Antón previously testified before the House Ways and Means Social Security Subcommittee during the summer of 2007 about the security and privacy of Social Security Numbers.

Data Privacy Day 2009

Wednesday, January 28th, 2009

Last year on January 28th, the first annual Data Privacy Day celebration was held in the United States at Duke University. Today marks the second annual Data Privacy Day, and the celebration has grown dramatically.

Last year, Governor Easley proclaimed January 28th as Data Privacy Day for the state of North Carolina. This year, he proclaimed January Data Privacy Month. North Carolina, Washington, California, Oregon, Massachusetts, and Arizona have also declared January 28th to be a state-wide Data Privacy Day. Last but certainly not least, Congressman David Price and Congressman Cliff Stearns introduced House Resolution 31, which passed on January 26th by a vote of 402 to 0, making today National Data Privacy Day in the United States. It is truly outstanding to see such strong support in the form of resolutions and proclamations.

The best way to support or celebrate Data Privacy Day is to take action. Since the goal of Data Privacy Day is to promote awareness and education about data privacy, one easy way to act is to check out all the great educational resources made available in conjunction with Data Privacy Day. For example, Google has posted about what it has done to protect privacy and to raise privacy awareness. Microsoft is holding an event tonight and has more information on data privacy on its website.

Here at The Privacy Place, we were once again pleased to have the opportunity to celebrate Data Privacy Day at Duke University by attending the panel discussion on Protecting National Security and Privacy. The panel discussion was extremely well attended and well received. The event had a number of sponsors, including Intel, which has a fantastic website with extensive information on Data Privacy Day. If you weren’t able to make it to the panel, I would strongly encourage you to check out Intel’s site.

Lastly, Data Privacy Day is all about awareness and education, so be sure to spread the word!

[Update: Fixed the link to the House Resolution that passed on Monday.]

The ECPA and Personal Health Record Systems

Thursday, December 11th, 2008

Yesterday, William Yasnoff discussed whether or not the Electronic Communications Privacy Act (ECPA) provides federal privacy protection for Personal Health Record (PHR) systems. Here at The Privacy Place, we have previously focused on whether the Health Insurance Portability and Accountability Act (HIPAA) applies to PHRs (short answer: no), but today I would like to take a moment to talk about the ECPA. If you are interested in our coverage of HIPAA and PHRs, I would point you to our post on Microsoft’s HealthVault and our post on Google’s Google Health project.

Let’s start with some background on the ECPA. The ECPA was passed in 1986 as an amendment to the Wiretap Act of 1968 and primarily deals with electronic surveillance. The purpose of the Wiretap Act was to make it illegal for any person to intercept oral communications such as telephone calls. The first title of the ECPA extends the original Wiretap Act to prevent the interception of electronic communications. The second title of the ECPA (commonly called the Stored Communications Act) adds protection for stored communications and prevents people from intentionally accessing stored electronic communications without authorization. The ECPA has been amended three times since it was passed: first by the Communications Assistance for Law Enforcement Act (CALEA) in 1994, second by the USA PATRIOT Act in 2001, and third by the USA PATRIOT Act reauthorization acts in 2006.

Now, Yasnoff makes several claims in his post, which I will discuss in order.  First, he claims that there are no exceptions in the ECPA and that this means whichever organization holds your information must get your permission to release it.  This is categorically not true.  There are many exceptions in the ECPA, but for the sake of simplicity, I will limit this discussion to the two main exceptions of the original Wiretap Act, both of which were retained by the ECPA.

The first exception allows interception when one of the parties has given prior consent.  This could mean that the government can legally access your communications if your PHR service provider consents prior to the communication.  Thus, Yasnoff’s strong statement that PHRs “MUST GET YOUR PERMISSION” (emphasis from original statement) is simply incorrect.

The second exception allows interceptions if they are done in the ordinary course of business.  This could mean that your data would be accessible by third parties such as an information technology vendor that maintains the software.  Effectively, this is a somewhat broader exception than the exception found in HIPAA for Treatment, Payment, and Operations, which Yasnoff found to be wholly unacceptable for protecting patient privacy.

Second, Yasnoff claims that the ECPA “is not long or complicated – I urge you to read it yourself if you have any doubts.” This statement, too, is categorically untrue. Paul Ohm, previously an attorney for the Department of Justice and currently an Associate Professor of Law at the University of Colorado Law School, has publicly challenged tax law experts with his claim that the ECPA is more complicated than the U.S. Tax Code.

Bruce Boyden, an Assistant Professor of Law at the Marquette University Law School, wrote a chapter in Proskauer on Privacy discussing electronic communications and the ECPA. In it he details many of the nuanced aspects of the ECPA, including the three subsequent amendments to the ECPA. With regard to the first title (Interception) he says:

To “intercept” a communication means, under the act, “the aural or other acquisition of the contents of any wire, electronic, or oral communications through the use of any electronic, mechanical, or other device.” The application of this definition to electronic communications has at times been particularly difficult, and courts have struggled with a number of questions: What exactly qualifies as the acquisition of the contents of a communication, and how is it different from obtaining a communication while in electronic storage under the Stored Communications Act? Does using deception to pose as someone else constitute an interception? Does using a person’s own device to see messages intended for them qualify?

Boyden later talks about limitations to the second title (Stored Communications):

[T]here are two key limitations in section 2701 [of the ECPA].  First, it does not apply to access of any stored communication, but only those communications stored on an electronic communications service facility as defined under the act.  Second, the definition of “electronic storage” in the act does not encompass all stored communications, but only those in “temporary, intermediate storage” by the electronic communication service or those stored for backup protection.

These seem like rather important exceptions, which further refute Yasnoff’s claim that there are no exceptions in the ECPA. And as to his second point, this all seems fairly complicated; it certainly isn’t as simple as finding some information that has been communicated to and stored by a PHR service provider, which was Yasnoff’s implication.

Boyden has also discussed whether automated computer access to communications is a violation of the ECPA.  The discussion is more complicated than it may appear at first and there’s an interesting discussion of it over on Concurring Opinions.

Broadly, several organizations feel that current US privacy law, including the ECPA, is discombobulated. The Electronic Frontier Foundation believes that fixing the ECPA is one of the top five priorities in their privacy agenda for the new administration. The Center for Democracy and Technology would like to see the new administration pass consumer privacy legislation and a “comprehensive privacy and security framework for electronic personal health information.” The ACLU would like to see the new administration “harmonize privacy rules.” I submit that these organizations do not feel that the ECPA provides clear and adequate privacy protections for PHR systems.

Yasnoff’s third claim is that PHRs which are “publicly available” receive stronger protections under the ECPA than those that are “private.”  In fact, Yasnoff says:

Only those that are “publicly-available” are included. While this clearly would apply to generally available web-based PHRs, systems provided only to specific individuals by employers, insurers, and even healthcare providers are less likely to be considered “publicly-available.” Therefore, ECPA protection is limited. So you are only covered if you use a PHR that is available to anyone.

This statement is either completely backwards as it relates to the ECPA or, perhaps more likely, describes something that is not a factor in ECPA protection at all. The EFF’s Internet Law Treatise has an article describing the difference between public and private communications:

“[T]he legislative history of the ECPA suggests that Congress wanted to protect electronic communications that are configured to be private, such as email and private electronic bulletin boards,” as opposed to publicly-accessible communications. See Konop, 302 F.3d at 875, citing S. Rep. No. 99-541, at 35-36, reprinted in 1986 U.S.C.C.A.N. 3555, 3599.

Thus, the public accessibility of the PHR service is not what matters. The pressing concern is whether the communication itself was meant to be public or private. If it was public, then the ECPA simply doesn’t apply. If it was private, then whatever protections the ECPA does afford would apply.

By now it must be clear that I disagree with William Yasnoff’s assessment of the ECPA’s application to PHRs.  I did, however, want to point out one interesting privacy protection that the ECPA offers which HIPAA does not: a private right of action. 

Basically, a private right of action allows citizens to file civil lawsuits in an attempt to recover losses caused by violations of a law. The ECPA has a private right of action clause, while HIPAA does not. HIPAA’s lack of a private right of action has drawn some criticism. On the other hand, the ECPA’s private right of action has also been criticized as unnecessary and wasteful. Perhaps it is a stretch, but this was the only improvement in privacy protection that I was able to find to support Yasnoff’s argument regarding the use of the ECPA to provide privacy protections for PHRs.

I would like to conclude by saying as directly as possible that the ECPA does NOT provide clear or adequate privacy protection for personal health information given to PHR systems. Privacy in general and healthcare privacy in particular are hotly debated current concerns for many organizations. I believe it is likely that the Obama administration and the next session of Congress will attempt to address the privacy concerns raised by organizations like the EFF, the CDT, and the ACLU. In the meantime, however, do not use a PHR service under the assumption that the ECPA protects the privacy of your medical records.

Is That Vault Really Protecting Your Privacy?

Tuesday, October 9th, 2007

Last week, Microsoft announced a new Personal Health Record (PHR) system called HealthVault. HealthVault is a web-based portal that enables end users to upload their health records to the web. Unfortunately, what people don’t realize is that HealthVault and similar PHR systems are not governed by federal health privacy law. When the Health Insurance Portability and Accountability Act (HIPAA) was enacted, we did not envision that private software firms would eventually want to create databases for our health records. As a result, HealthVault and other PHR systems are not subject to the same privacy and security laws that govern traditional medical records in the United States, because they are not “covered entities” as specified in HIPAA.

Over the course of the past seven years, researchers at ThePrivacyPlace.org have evaluated over 100 privacy statements for financial and healthcare web portals. In addition, we focus on evaluating the extent to which the privacy of sensitive information is protected in these systems, as well as the extent to which these systems comply with relevant regulations.

Even though physicians and the press are excited about the introduction of these new PHR systems [1], there are questions that I urge the public to ask before entrusting their sensitive health records to any PHR system. My concerns are based on a careful evaluation of the HealthVault privacy statements [2, 3]. Microsoft appears to have sought the counsel of physicians who believe that patient consent is the best indicator of privacy protection. Unfortunately, most physicians do not understand the subtleties buried within healthcare privacy statements, or how those statements play out in the software that implements them. For this reason, I list three primary questions that one should ask before entrusting their health records to HealthVault or any other PHR system:

Will your health information be stored in other countries without appropriate legal oversight, skirting many of the protections afforded by HIPAA?

The HealthVault privacy statement explicitly states that your health records may be off-shored to countries that do not afford the same privacy protections for sensitive information that we do in the United States. In particular, if information is disclosed or altered, do you have any legal recourse or remedy?

Will your health care records be merged with other personal information about you that was previously collected within the context of non-health related services?

Within the context of HealthVault, the answer to this question is yes. Microsoft explicitly states that they will merge the information they have previously collected from you via non-health related services with your HealthVault information. Moreover, it is unclear what information Microsoft already has about us other than our names and contact information and precisely what information third parties may access. Furthermore, we don’t know if that information is accurate or complete. Thus, use of the merged information may not be what we expect.

Are the access controls to your health records based not only on your consent, but also on the principle of least privilege?

Although HealthVault requires patient consent for any access to or sharing of your health records, its access controls leave the door wide open for data breaches. HealthVault enables individuals to grant access to other people and programs, which can in turn grant read/write access to your health record. The only safeguard is a history mechanism that provides an accounting of accesses if you suspect, after the fact, that your information has been breached. A better approach would be for Microsoft to proactively enforce contractual obligations via audits and monitoring mechanisms.
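To illustrate the difference, here is a hypothetical sketch of what least-privilege, consent-bounded sharing could look like. The names and structure are invented for illustration; this is not HealthVault’s actual API.

```python
# Hypothetical sketch of least-privilege delegation for a health record.
# A re-granted permission can never exceed the grant it was derived from,
# and every access is checked (and logged) before data is released.
from dataclasses import dataclass, field


@dataclass
class Grant:
    grantee: str
    permissions: frozenset          # e.g. frozenset({"read"}) or frozenset({"read", "write"})

    def delegate(self, new_grantee: str, requested: set) -> "Grant":
        """Re-share access, but never with more permissions than this grant holds."""
        allowed = self.permissions & frozenset(requested)
        if not allowed:
            raise PermissionError(f"{self.grantee} cannot delegate {requested}")
        return Grant(new_grantee, allowed)


@dataclass
class HealthRecord:
    owner: str
    grants: dict = field(default_factory=dict)    # grantee -> Grant
    audit_log: list = field(default_factory=list)

    def share(self, grant: Grant) -> None:
        self.grants[grant.grantee] = grant

    def access(self, who: str, action: str) -> None:
        """Check the request against the grant before releasing data, and log it."""
        grant = self.grants.get(who)
        permitted = grant is not None and action in grant.permissions
        self.audit_log.append((who, action, "allowed" if permitted else "denied"))
        if not permitted:
            raise PermissionError(f"{who} may not {action}")


record = HealthRecord(owner="patient")
record.share(Grant("doctor", frozenset({"read", "write"})))   # the patient consents to this grant

# The doctor re-shares read-only access; the delegated grant cannot exceed
# what the doctor was given, so the app never obtains write access.
record.share(record.grants["doctor"].delegate("billing-app", {"read"}))
record.access("billing-app", "read")                   # allowed, and logged
try:
    record.access("billing-app", "write")              # denied up front, and logged
except PermissionError:
    pass
```

The point is that permissions are bounded by the patient’s original consent and checked before the data leaves the system, rather than merely recorded in a history for after-the-fact review.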

The hype surrounding HealthVault’s privacy protections among those in the medical community must be balanced with the reality of the information security and privacy practices expressed in its public privacy statements. It is critical to address these privacy concerns in the design of PHR systems before we deploy them with vulnerabilities that will ultimately lead to yet another rash of data breaches.

References

[1] Steve Lohr. Microsoft Rolls Out Personal Health Records, New York Times, 4 October 2007.

[2] Microsoft HealthVault Search and HealthVault.com Beta Version Privacy Statement, October 2007.

[3] Microsoft HealthVault Beta Version Privacy Statement, October 2007.

Is Simplicity the Key to Privacy?

Friday, February 24th, 2006

While doing a literature search, I ran across an interesting news article. According to this article, Representative Ed Markey (D-MA) is introducing a bill called “Eliminate Warehousing of Consumer Internet Data Act of 2006”. One of the more compelling notions behind this bill is that of deleting data after use. I wonder — could this be a key component to protecting privacy?

Think about it. Even if a company used your information, if it had to delete that information afterwards, wouldn’t your privacy most likely be preserved? Furthermore, if the company transferred your information to someone else who is bound by the same law, the same logic would follow: the recipient could use your information for its intended purpose and would then have to delete it. Granted, there would need to be stringent guidelines for what constitutes “intended use.”
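As a rough illustration of the “use it, then delete it” idea, here is a hypothetical sketch in Python. Nothing in it comes from the bill’s actual text; it only shows the notion that data handed over for a purpose is destroyed once that purpose is served.

```python
# Hypothetical sketch: personal data exists only for the duration of its
# intended use, then is deleted rather than warehoused.
from contextlib import contextmanager


@contextmanager
def for_intended_use(store: dict, key: str):
    """Yield a piece of personal data for its intended use, then delete it."""
    try:
        yield store[key]
    finally:
        del store[key]          # the data does not linger in a warehouse


records = {"jane": {"address": "123 Main St"}}

with for_intended_use(records, "jane") as data:
    print("shipping order to", data["address"])   # the intended use

assert "jane" not in records    # nothing left to retain, sell, or breach
```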

If we were to accept this approach, we would no longer have the staggering number of ubiquitous data warehouses full of our private information, and companies such as ChoicePoint and Acxiom would not be selling our information to other vendors.

This sounds like a nice idea, but would such a bill ever be passed? Would politicians be able to withstand the lobbying power of organizations seeking to bar such legislation from being passed?

This certainly isn’t a solution to our privacy woes, but it is a simple idea that may be a step in the right direction. Read more in Markey’s press release.

Diebold certified despite inability to comply with NC law

Monday, December 5th, 2005

A previous blog entry discussed Diebold’s struggles with the latest electronic voting law passed in North Carolina. The situation then was that Diebold said it could not meet the law’s requirements, and its efforts to seek an exemption had been dismissed. On Friday, December 2, the North Carolina Board of Elections nevertheless certified Diebold as one of three approved vendors for the state, despite Diebold’s admitted inability to comply with the law.

The EFF, which had fought Diebold’s attempts at an exemption in court, immediately posted its criticism of the board’s decision. An EFF attorney is quoted as saying: “In August, the state passed tough new rules designed to ensure transparency in the election process, and the Board simply decided to take it upon itself to overrule the legislature. The Board’s job is to protect voters, not corporations who want to obtain multi-million dollar contracts with the state.” The position of the North Carolina Board of Elections is that none of the electronic voting machine vendors could fully meet the requirements, so the board simply loosened the standards regarding source code escrow. The EFF disputes this claim, however, saying that at least one of the other vendors “has publicly stated that it is capable of meeting the escrow requirement for the code used in its system.”

A C|Net article quotes an advisor to the board as saying that “the Board of Elections decided at the last minute that it would allow the companies to be certified as long as they provided the state with the outside escrow locations of all the codes” – a requirement that must be met by December 22. This advisor to the board further says: “This is an extra step the board has decided to put in to strengthen the law that we have to work with.” I find this statement to be very inaccurate and unfortunate – how can making an exception and an end-run around the law possibly strengthen it?

The Baltimore Sun put up a well-reasoned editorial on the general concerns with electronic voting companies. Their conclusion: “Most nations do not use private companies to count their national election results; if the United States must, it had better make sure the voting, counting and transmission of poll data are as transparent and auditable – at every point – as possible.”

Diebold pulls out of North Carolina market

Thursday, December 1st, 2005

Earlier this year, the North Carolina legislature passed a bill that sets up standards for voting equipment used in elections. The bill sets out to “restore public confidence in the election process”. The law outlines many vendor responsibilities in Section 2.(a), including that “the vendor shall place in escrow with an independent escrow agent approved by the State Board of Elections all software that is relevant to functionality, setup, configuration, and operation of the voting system, including, but not limited to, a complete copy of the source and executable code…” as well as including “a list of programmers responsible for creating the software”. The penalties for violating the bill include a felony charge and a civil penalty of up to $100,000 per violation.

The new state law has led Diebold to threaten to pull out of the state, on the grounds that it cannot meet these openness requirements. Diebold originally sought an exemption, both through an injunction guarding against prosecution and through a reinterpretation of the law that would exclude its situation. According to this AP article, Diebold claims that it cannot provide the source code or the list of programmers for Windows, on which its voting machines are based. The EFF was involved in the case to thwart Diebold’s bid for an exemption; the EFF details its involvement and links to its legal brief here.

I find Diebold’s position in this situation confusing and questionable. To say that it cannot comply with the state law means that it is either unwilling to comply, finds it overly difficult to do so, or actually finds it impossible to meet the requirements. If the first or second case is true, then Diebold is simply making a business decision not to compete in North Carolina. If, however, compliance is actually impossible, then one wonders how Diebold is still able to do business in California, where the Elections Code requires that “an exact copy of the source code” for voting machines be provided to the state. Is the North Carolina market simply not worth it to Diebold?

North Carolina is supposed to announce today the list of approved vendors for electronic voting machines that meet the new law’s requirements.

Can Your Google Searches Incriminate You?

Tuesday, November 22nd, 2005

Slashdot posted a blurb about a Raleigh, NC WRAL.com news article detailing how a Google search has been used in a criminal case. Apparently, the defendant searched Google for the words “neck”, “snap”, “break”, and “hold” before the death of his wife. The evidence was found on the defendant’s computer after a search of his home.

The Slashdot blogger asks questions such as: “Should police be able to search through your search history for ‘questionable’ searches before you’ve been arrested for a crime, and what effect would this have on the health of society?”

It seems to me that the debate here is about the confidentiality of your online activities and whether the lack thereof would compromise the health of society. Personally, I believe that, with probable cause and a warrant, Google searches and search histories are fair game. It seems no different than rifling through your videotapes, mail, and magazines to see what you’ve been reading about lately. If all of these media are admissible, I don’t see why Google searches wouldn’t be. The fact that it is in digital form and easily accessible would seem irrelevant.

As privacy researchers, we are interested in protecting the rights of individuals. However, this must be tempered with common sense and an overarching goal of benefiting society. In this case, it seems that this particular invasion of privacy is legal and probably just.

Read more about the article and commentary here.