In Defense of Smart Phone Security by Default

October 19th, 2014

With iOS 8 and the latest version of Android, Apple and Google claim to establish landmark privacy protections by turning on device encryption by default. According to Apple and Google, they will be unable to “open” the phone for anyone, not even law enforcement. These new measures have been sharply criticized by the Director of the FBI and the Attorney General. As a software engineering professor, I’ve devoted my career to teaching students how to develop (a) secure, (b) privacy-preserving, and (c) legally compliant software systems. I’m not qualified to debate whether this move by Apple and Google is lawful or constitutional. However, as a technologist I can assert that applying security best practices yields systems that withstand intrusions and denial-of-service attacks and limit access to authenticated and authorized users.
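To make the technical claim concrete, here is a minimal sketch of why encryption by default locks out everyone who does not hold the user’s passcode, including the vendor. This is an illustration only, not Apple’s or Google’s actual design (real devices entangle the key with a hardware-bound secret); the passcode, salt, and iteration count below are placeholders.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Derive the encryption key from the user's passcode; the vendor never sees it.
passcode = b"1234-5678"   # placeholder; chosen and known only by the user
salt = os.urandom(16)     # stored on the device, useless without the passcode
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(passcode))

# Data encrypted under that key can only be recovered by re-deriving the key,
# which requires the passcode. There is no second, vendor-held key; adding one
# (a "back door") would mean anyone who obtained it could decrypt the device.
ciphertext = Fernet(key).encrypt(b"contacts, photos, messages")
assert Fernet(key).decrypt(ciphertext) == b"contacts, photos, messages"
```

In such a design, granting law enforcement access is not a policy switch the vendor can flip; it requires architecting in a second key, which is precisely the back door at issue.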

The recent “encryption by default” design decision by Apple and Google is currently being discussed in software engineering and security classes across our nation, and perhaps across the globe. By and large, privacy and security researchers, technologists and activists applaud this decision because it raises the bar for truly implementing security best practices. It’s a bitter pill for professors who teach students to develop secure, privacy-preserving, and legally compliant software to have those students told on the job, “Oh, that stuff you learned about security back in school? We only want you to secure the system part way, not all the way. So, leave in a back door.” Such a position undermines the academic institutions seeking to prepare tomorrow’s security and privacy workforce in an ever-changing world where sophisticated criminals are getting smarter and their offensive techniques are outpacing our ability to stay ahead.

From my experience working with government agencies, I thoroughly understand the desire to “catch the bad guys” and I value the ability to prevent malicious criminal activity by individuals or nation states. I want our government, the Department of Homeland Security, the Department of Defense, and the Intelligence Community to protect us from the unfathomable. I find myself wondering why the very institutions that promote security and privacy best practices (via, for example, centers of excellence at our nation’s top universities) are so vehemently opposed to industry actually implementing those best practices. My analysis yields two observations:

  1. Taking the Easy Way Out. For law enforcement to expect companies to provide the government with back door access (even when required by law) seems to me to be the lazy approach. If one reads between the lines, one could infer that the government lacks the incentives and/or the will to innovate and improve the state of the art in cyber offense. Where’s the spirit of the scientists and engineers who enabled man to walk on the moon? Where’s the American will to innovate, to surpass the state of the art, and to be the best? Why let other nations beat us at our own game? The only way we can get better at offense is by facing the best possible defense. At a time when other nation states are growing ever more sophisticated, we risk stunting our own capabilities if we rely on an easy back door rather than honing our skills. We need to keep ourselves sharp by learning how to confront state-of-the-art systems. If we aren’t staying ahead of the curve, other countries and their intelligence services will have every reason, and the opportunity, to develop capabilities that surpass our own.
  2. Creating a Backdoor for Use in Other Countries. If the United States expects companies to provide a back door to gain access to systems and the data that resides in those systems, then other governments will expect the same. We can’t very well expect Apple or Google to provide a back door to the U.S., but not to China or Russia. At least in the United States, we have a legal framework that requires search warrants and similar process to gain access via the back door. Many other countries lack these safeguards and will simply require the phone companies to enable snooping within their borders, with no legal protections comparable to the U.S. system. As security engineers have learned in many other systems, you can’t build a vulnerability that only the good guys can use.

I certainly empathize with law enforcement’s desire to gain evidence for critical investigations. But Congress and the White House have agreed that cybersecurity should be funded as a national priority, and as professors of computer security we can’t teach the importance of building secure systems and then tell our students that we will deliberately leave tens of millions of devices insecure.

Dr. Annie I. Antón is a Professor in and Chair of the School of Interactive Computing at the Georgia Institute of Technology in Atlanta. She has served the national defense and intelligence communities in a number of roles since being selected for the IDA/DARPA Defense Science Study Group in 2005-2006.

NSF Grant on Regulatory Compliance Software Engineering

August 10th, 2012

The National Science Foundation recently awarded researchers from The Privacy Place a grant to work on Regulatory Compliance Software Engineering with UCON_LEGAL! You can read the abstract below. More details are available at research.gov.

Abstract: Software engineers need improved tools and methods for translating complex legal regulations into workable information technology systems. Compliance with legal requirements is an essential element of trustworthy systems. The proposed research will advance the state of the art in Regulatory Compliance Software Engineering (RCSE), making it more accurate, efficient, and reliable, and resulting in compliant software systems. System specifications typically concentrate on system-level entities, whereas legal discussions emphasize fundamental rights and obligations discursively. This work bridges three cultures of scholarship and research: software specification, law, and access control. By empowering software developers and policy makers to better understand regulatory texts and the access controls specified within those texts, current and future software systems will be better aligned with the law.

There are three main expected results of this work: (1) a framework, methodology, and heuristics to identify UCON_LEGAL components in legal texts; (2) extended TLA (Temporal Logic of Actions) rules from UCON_ABC and a mapping of predicates, actions, states, variables, and obligations between UCON_LEGAL and UCON_ABC; and (3) validated and extended role-based access controls that meet healthcare and financial legal requirements through further development of UCON_LEGAL. The impacts of this work are expected to be far-reaching: laws and regulations govern the collection, use, transfer, and removal of information from software systems in many sectors of society, and this research directly addresses the need for models and theories for analyzing and reasoning about security and privacy in a regulatory and legal context.
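As a rough illustration of the kind of access-control logic this line of work targets, the sketch below combines a role-based pre-authorization with a UCON_ABC-style obligation, here a hypothetical requirement that disclosures of health records be logged for later accounting. The roles, purposes, and rule set are invented for illustration and are not the project’s actual UCON_LEGAL formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    subject_role: str   # e.g., "physician", "billing_clerk"
    action: str         # e.g., "read"
    purpose: str        # e.g., "treatment", "payment", "marketing"

@dataclass
class UsageControlPolicy:
    # Authorization ("A" in UCON_ABC): which roles may act for which purposes.
    permitted: set = field(default_factory=lambda: {
        ("physician", "read", "treatment"),
        ("billing_clerk", "read", "payment"),
    })
    disclosure_log: list = field(default_factory=list)

    def decide(self, req: Request) -> bool:
        # Deny anything not explicitly permitted by the extracted legal rule.
        if (req.subject_role, req.action, req.purpose) not in self.permitted:
            return False
        # Obligation ("B" in UCON_ABC): record the access so it can be
        # accounted for later, as a regulation might require.
        self.disclosure_log.append((req.subject_role, req.action, req.purpose))
        return True

policy = UsageControlPolicy()
assert policy.decide(Request("physician", "read", "treatment")) is True
assert policy.decide(Request("billing_clerk", "read", "marketing")) is False
assert len(policy.disclosure_log) == 1   # only the permitted access was logged
```

The research described above aims to derive such permissions and obligations systematically from legal texts rather than hand-coding them as in this toy example.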

Summary of E-Verify Challenges

May 25th, 2011

If you didn’t get a chance to check out Dr. Antón’s testimony on E-Verify, then you might be interested in her post summarizing the main points for the Center for Democracy and Technology:

Last month, I testified before the House Ways and Means Social Security Subcommittee hearing on the Social Security Administration’s Role in Verifying Employment Eligibility. My testimony focused on the E-Verify pilot system and the operational challenges the system faces. According to the U.S. Citizenship and Immigration Services website, E-Verify “is an Internet-based system that allows businesses to determine the eligibility of their employees to work in the United States.” The goal of E-Verify – to ensure that only authorized employees can be employed in the U.S. – is laudable. However, the E-Verify pilot system still needs major improvements before it should be promoted to a permanent, larger-scale system.

Read the rest on the CDT blog.

Dr. Antón testifies before Congress about E-Verify

April 15th, 2011

Yesterday afternoon, Dr. Antón testified before the Subcommittee on Social Security of the U.S. House of Representatives Committee on Ways and Means on behalf of the USACM about E-Verify. Here’s part of the official ACM press release on the testimony:

WASHINGTON – April 14, 2011 – At a Congressional hearing today on the Social Security Administration’s role in verifying employment eligibility, Ana I. Antón testified on behalf of the U.S. Public Policy Council of the Association for Computing Machinery (USACM) that the automated pilot system for verifying employment eligibility faces high-stakes challenges to its ability to manage identity and authentication. She said the system, known as E-Verify, which is under review for its use as the single most important factor in determining whether a person can be gainfully employed in the U.S., does not adequately assure the accuracy of identifying and authenticating individuals and employers authorized to use it. Dr. Antón, an advisor to the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee and vice-chair of USACM, also proposed policies that provide alternative approaches to managing identity security, accuracy and scalability.

More information about the hearing, including testimony from other witnesses, is made available by the Subcommittee here, and Dr. Antón’s written testimony is available from the USACM here (PDF).

Dr. Antón previously testified before the House Ways and Means Social Security Subcommittee during the summer of 2007 about the security and privacy of Social Security Numbers.

OMB Requests Comments on Government Cookie Policy

July 31st, 2009

The Federal Office of Management and Budget (OMB) is considering changing the cookie policy for federal government websites. In a recent Federal Register entry, it proposes allowing Federal agencies to use cookies to track visitors to their websites, as long as those agencies:

  • “Adhere to all existing laws and policies (including those designed to protect privacy) governing the collection, use, retention, and safeguarding of any data gathered from users;
  • Post clear and conspicuous notice on the Web site of the use of Web tracking technologies;
  • Provide a clear and understandable means for a user to opt-out of being tracked; and
  • Not discriminate against those users who decide to opt-out, in terms of their access to information.”

The OMB is seeking comments on the proposed policy changes through August 10, 2009. Comments may be made on the OSTP blog.

In response, we offer the following comments:

Cookies are small text files used by web servers to maintain state information in the normally stateless Hypertext Transfer Protocol (HTTP). There are several concerns about the use of cookies on government websites:

[1] Most Internet users do not understand cookies; many believe, for example, that cookies are viruses or that they are always harmful. (See V. Ha, K. Inkpen, F. Al Shaar, L. Hdeib, “An Examination of User Perception and Misconception of Internet Cookies”, Proc. of the Conf. on Human Factors in Computer Systems, Montreal, 2006, pp. 833-838)

[2] Web browsers, as currently implemented, do not allow cookies to meet the FTC’s Fair Information Practices (FIPs). For example, users are not given notice of a website’s use of cookies before those cookies are placed on their computers. Websites may mention cookies in their privacy policies, but studies show that most Internet users do not comprehend privacy policies and think that the mere existence of a privacy policy makes their information secure, even if the policy states “we share your information with everyone”! (See M.W. Vail, J.B. Earp, A.I. Antón, “An Empirical Study of Consumer Perceptions and Comprehension of Web Site Privacy Policies”, IEEE Trans. on Engineering Management, 55(3), Aug. 2008, pp. 442-454)

[3] Cookies do not meet the Choice/Consent FIP. To read a website’s privacy policy, a user must visit the website’s homepage, find the policy link, and read the policy. However, most privacy policies rely on “implied consent”: simply visiting the homepage is taken as consent to the privacy policy, before the user has even had the opportunity to read it.

[4] Cookies do not meet the Access/Participation FIP. Modern browsers often contain cookie management utilities to view and delete cookies stored on a user’s computer. Oftentimes, however, the information contained in a cookie is encrypted, or is a code or identifier that is meaningful only to the website, not to the user. Without being able to interpret the data, users cannot verify its accuracy.

[5] Cookies do not meet the Integrity/Security FIP. The cookie specification contains an expiration field indicating the lifetime of the cookie. Many cookies are set with lifetimes of 10, 20, or 30 years, far longer than necessary to maintain state for a browsing session.

[6] OMB’s proposal requires websites to provide a means “for a user to opt-out of being tracked.” However, opt-out cookies do not reliably opt a user out of tracking. Automated cookie removal by antispyware utilities, as well as manual cookie deletion, will delete the opt-out cookie along with the other cookies on the user’s machine; the user is thereby unknowingly opted back in to the tracking service. To achieve reliable opt-out, antispyware utilities and web browsers would have to be redesigned, and whitelists of opt-out cookies would have to be maintained. (See P. Swire, A.I. Antón, Testimony before the Federal Trade Commission, Apr. 10, 2008) The short sketch following this list illustrates why opt-out cookies are so fragile.
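To make concerns [4] through [6] concrete, here is a small sketch using Python’s standard http.cookies module; the cookie names, values, and lifetimes are invented for illustration.

```python
from http.cookies import SimpleCookie

# A long-lived tracking cookie, roughly as described in concern [5]:
tracker = SimpleCookie()
tracker["visitor_id"] = "a1b2c3d4"   # opaque identifier, meaningless to the user (concern [4])
tracker["visitor_id"]["path"] = "/"
tracker["visitor_id"]["max-age"] = 60 * 60 * 24 * 365 * 30   # roughly 30 years

# The "opt-out" is itself just another cookie (concern [6]):
optout = SimpleCookie()
optout["tracking_optout"] = "true"
optout["tracking_optout"]["path"] = "/"
optout["tracking_optout"]["max-age"] = 60 * 60 * 24 * 365

print(tracker.output())  # e.g., Set-Cookie: visitor_id=a1b2c3d4; Path=/; Max-Age=946080000
print(optout.output())   # e.g., Set-Cookie: tracking_optout=true; Path=/; Max-Age=31536000

# If an antispyware utility or the user clears all cookies, the opt-out cookie
# is deleted along with the tracker, and the next visit is silently tracked again.
```

Because the opt-out preference is stored in the very mechanism it is meant to restrain, clearing cookies silently undoes it.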

Cookies have an important function in the design of the modern Internet, but they raise legitimate privacy concerns that remain unaddressed, especially in the context of government websites. The advantage of having website statistics may not outweigh the privacy cost. There are other means to evaluate a website, such as user focus groups and surveys. These may be less effective and subject to other biases, but the efficiency loss is well worth the privacy gained by not using cookies on government websites until an alternative, privacy-preserving technology is developed.

The Evolution of Internet Users’ Privacy Concerns

July 29th, 2009

The Privacy Place is proud to announce the release of a new technical report by Dr. Annie I. Antón, Dr. Julia B. Earp, and Jessica D. Young detailing the evolution of Internet users’ privacy concerns since 2002. This research has been submitted to IEEE Security and Privacy Magazine, but you can read the detailed technical report today by downloading the full paper here: How Internet Users’ Privacy Concerns Have Evolved Since 2002

Abstract:

In 2002, we established a baseline for Internet users’ online privacy values. Through a survey we found that information transfer, notice/awareness, and information storage were the top online privacy concerns of Internet users. Since that survey there have been many privacy-related events, including changes in online trends and the creation of new laws, prompting us to rerun the survey in 2008 to examine how these events may have affected Internet users’ online privacy concerns. In this paper, we discuss the 2008 survey, which revealed that U.S. Internet users’ top three privacy concerns have not changed since 2002; however, their level of concern within these categories may have been influenced by these privacy-related events. In addition, we examine differences in privacy concerns between U.S. and international respondents.

Data Privacy Day 2009

January 28th, 2009

Last year on January 28th, the first annual Data Privacy Day celebration was held in the United States at Duke University. Today marks the second annual Data Privacy Day, and the celebration has grown dramatically.

Last year, Governor Easley proclaimed January 28th as Data Privacy Day for the state of North Carolina. This year, he proclaimed January Data Privacy Month. North Carolina, Washington, California, Oregon, Massachusetts, and Arizona have also declared January 28th to be a state-wide Data Privacy Day. Last but certainly not least, Congressman David Price and Congressman Cliff Stearns introduced House Resolution 31, which passed on January 26th by a vote of 402 to 0, making today National Data Privacy Day in the United States. It is truly outstanding to see such strong support in the form of resolutions and proclamations.

The best way to support or celebrate Data Privacy Day is to take action. Since the goal of Data Privacy Day is to promote awareness and education about data privacy, one easy way to act is to check out all the great educational resources made available in conjunction with Data Privacy Day. For example, Google has posted about what it has done to protect privacy and increase awareness of privacy. Microsoft is holding an event tonight and has more information on data privacy on their website.

Here at The Privacy Place, we were once again pleased to have the opportunity to celebrate Data Privacy Day at Duke University by attending the panel discussion on Protecting National Security and Privacy. The panel discussion was extremely well-attended and well-received. The event had a number of sponsors, including Intel, which has a fantastic website with extensive information on Data Privacy Day. If you weren’t able to make it to the panel, I would strongly encourage you to check out Intel’s site.

Lastly, Data Privacy Day is all about awareness and education, so be sure to spread the word!

[Update: Fixed the link to the House Resolution that passed on Monday.]

Silver Bullet Security Podcast Interviews Dr. Williams

December 24th, 2008

Two days ago, the 33rd episode of the Silver Bullet Security Podcast was released. If you are new to this podcast, it’s a monthly podcast featuring interviews with noted security experts, co-sponsored by IEEE Security and Privacy Magazine and Cigital. I would highly recommend it for anyone interested in software security and privacy research. I’ve been a loyal listener almost since it started, and I have yet to find an episode that didn’t teach me something new.

In it, Dr. Gary McGraw, the host of the series, interviews Dr. Laurie Williams, an Associate Professor of Computer Science at North Carolina State University. They discuss the work the Software Engineering Realsearch Group is doing in software security, testing, and agile development. In my humble and admittedly biased opinion, Dr. Williams is an excellent teacher and the podcast is absolutely worth checking out.

In a previous episode, Dr. Annie Antón, a Professor of Computer Science at North Carolina State University and the Director of The Privacy Place, was also interviewed by Dr. McGraw. They discussed our work here at The Privacy Place, including research on privacy policies, the role of regulations in computer privacy and security, and the relationship between privacy and security. Of course, my opinion of this podcast is even more biased, but I would still encourage you to check it out. :-)

Previous podcasts have included interviews with luminaries such as Ed Felten, Bruce Schneier, Dorothy Denning, Eugene Spafford, Adam Shostack, and Matt Bishop. I am tempted to simply list all the interviewees because each episode is fantastic, but I’ll leave the rest as a teaser. If you were so inclined, you could even follow their RSS or iTunes feed as a New Year’s resolution. ;-)

The ECPA and Personal Health Record Systems

December 11th, 2008

Yesterday, William Yasnoff discussed whether the Electronic Communications Privacy Act (ECPA) provides federal privacy protection for Personal Health Record (PHR) systems. Here at The Privacy Place, we have previously focused on whether the Health Insurance Portability and Accountability Act (HIPAA) applies to PHRs (short answer: no), but today I would like to take a moment to talk about the ECPA. If you are interested in our coverage of HIPAA and PHRs, I would point you to our post on Microsoft’s HealthVault and our post on Google’s Google Health project.

Let’s start with some background on the ECPA. The ECPA was passed in 1986 as an amendment to the Wiretap Act of 1968 and primarily deals with electronic surveillance. The purpose of the Wiretap Act was to make it illegal for any person to intercept oral communications such as telephone calls. The first title of the ECPA extends the original Wiretap Act to prevent the interception of electronic communications. The second title of the ECPA (commonly called the Stored Communications Act) adds protection for stored communications and prevents people from intentionally accessing stored electronic communications without authorization. The ECPA has been amended three times since it was passed: first by the Communications Assistance for Law Enforcement Act (CALEA) in 1994, second by the USA PATRIOT Act in 2001, and third by the USA PATRIOT Act reauthorization acts in 2006.

Now, Yasnoff makes several claims in his post, which I will discuss in order.  First, he claims that there are no exceptions in the ECPA and that this means whichever organization holds your information must get your permission to release it.  This is categorically not true.  There are many exceptions in the ECPA, but for the sake of simplicity, I will limit this discussion to the two main exceptions of the original Wiretap Act, both of which were retained by the ECPA.

The first exception allows interception when one of the parties has given prior consent.  This could mean that the government can legally access your communications if your PHR service provider consents prior to the communication.  Thus, Yasnoff’s strong statement that PHRs “MUST GET YOUR PERMISSION” (emphasis from original statement) is simply incorrect.

The second exception allows interceptions if they are done in the ordinary course of business.  This could mean that your data would be accessible by third parties such as an information technology vendor that maintains the software.  Effectively, this is a somewhat broader exception than the exception found in HIPAA for Treatment, Payment, and Operations, which Yasnoff found to be wholly unacceptable for protecting patient privacy.

Second, Yasnoff claims that the ECPA “is not long or complicated – I urge you to read it yourself if you have any doubts.” This statement, too, is categorically untrue. Paul Ohm, who was previously an attorney for the Department of Justice and is currently an Associate Professor of Law at the University of Colorado Law School, has publicly challenged tax law experts with the claim that the ECPA is more complicated than the U.S. Tax Code.

Bruce Boyden, an Assistant Professor of Law at the Marquette University Law School, wrote a chapter in Proskauer on Privacy discussing electronic communications and the ECPA. In it he details many of the nuanced aspects of the ECPA, including the three subsequent amendments to the ECPA. With regard to the first title (Interception) he says:

To “intercept” a communication means, under the act, “the aural or other acquisition of the contents of any wire, electronic, or oral communications through the use of any electronic, mechanical, or other device.” The application of this definition to electronic communications has at times been particularly difficult, and courts have struggled with a number of questions: What exactly qualifies as the acquisition of the contents of a communication, and how is it different from obtaining a communication while in electronic storage under the Stored Communications Act? Does using deception to pose as someone else constitute an interception? Does using a person’s own device to see messages intended for them qualify?

Boyden later talks about limitations to the second title (Stored Communications):

[T]here are two key limitations in section 2701 [of the ECPA].  First, it does not apply to access of any stored communication, but only those communications stored on an electronic communications service facility as defined under the act.  Second, the definition of “electronic storage” in the act does not encompass all stored communications, but only those in “temporary, intermediate storage” by the electronic communication service or those stored for backup protection.

These seem like rather important exceptions which continue to refute Yasnoff’s claim that there are no exceptions in the ECPA, but to his second point, this seems pretty complicated.  At least, it certainly doesn’t seem as simple as just finding some information that has been communicated to and stored by a PHR service provider, which was Yasnoff’s implication.

Boyden has also discussed whether automated computer access to communications is a violation of the ECPA.  The discussion is more complicated than it may appear at first and there’s an interesting discussion of it over on Concurring Opinions.

Broadly, several organizations feel that current US privacy law, including the ECPA, is discombobulated. The Electronic Frontier Foundation believes that fixing the ECPA is one of the top five priorities in their privacy agenda for the new administration. The Center for Democracy and Technology would like to see the new administration pass consumer privacy legislation and a “comprehensive privacy and security framework for electronic personal health information.” The ACLU would like to see the new administration “harmonize privacy rules.” I submit that these organizations do not feel that the ECPA provides clear and adequate privacy protections for PHR systems.

Yasnoff’s third claim is that PHRs which are “publicly available” receive stronger protections under the ECPA than those that are “private.”  In fact, Yasnoff says:

Only those that are “publicly-available” are included. While this clearly would apply to generally available web-based PHRs, systems provided only to specific individuals by employers, insurers, and even healthcare providers are less likely to be considered “publicly-available.” Therefore, ECPA protection is limited. So you are only covered if you use a PHR that is available to anyone.

This statement is either completely backwards as it relates to the ECPA or, perhaps more likely, public availability is not a factor in ECPA protection at all. The EFF’s Internet Law Treatise has an article describing the difference between public and private communications:

“[T]he legislative history of the ECPA suggests that Congress wanted to protect electronic communications that are configured to be private, such as email and private electronic bulletin boards,” as opposed to publicly-accessible communications. See Konop, 302 F.3d at 875, citing S. Rep. No. 99-541, at 35-36, reprinted in 1986 U.S.C.C.A.N. 3555, 3599.

Thus, the public accessibility of the PHR service is not what matters. The pressing concern is whether the communication itself was meant to be public or private. If it was public, then the ECPA simply doesn’t apply. If it was private, then whatever protections the ECPA does afford would apply.

By now it must be clear that I disagree with William Yasnoff’s assessment of the ECPA’s application to PHRs.  I did, however, want to point out one interesting privacy protection that the ECPA offers which HIPAA does not: a private right of action. 

Basically, a private right of action allows citizens to file civil lawsuits in an attempt to recover losses caused by violations of a law. The ECPA has a private right of action clause, while HIPAA does not. HIPAA’s lack of a private right of action has drawn some criticism. On the other hand, the ECPA’s private right of action has also been criticized as unnecessary and wasteful. Perhaps it is a stretch, but this was the only possible improvement in privacy protection that I was able to find to support Yasnoff’s argument regarding the use of the ECPA to provide privacy protections for PHRs.

I would like to conclude by saying as directly as possible that the ECPA does NOT provide clear or adequate privacy protection for personal health information given to PHR systems. Privacy in general and healthcare privacy in particular are hotly debated current concerns for many organizations. I believe it is likely that the Obama administration and the next session of Congress will attempt to address the privacy concerns raised by organizations like the EFF, the CDT, and the ACLU. In the meantime, however, do not use a PHR service under the assumption that the ECPA protects the privacy of your medical records.

Camera phones and our privacy

October 4th, 2008

By Jessica Young and Aaron Massey

This season’s premiere of Grey’s Anatomy showed interns using camera phones to take pictures of their resident’s injury. The episode aired only days after a story broke about an incident at the University of New Mexico Hospital, where two employees had used their cell phones to take pictures of patients and then posted those pictures online. Both employees were fired because their actions violated the hospital’s policy.

The University of New Mexico Hospital is not the first hospital to experience problems with cell phone cameras. In March 2008, the Resnick Neuropsychiatric Hospital at UCLA banned cell phones to protect the rights of its patients after past incidents there. San Diego’s Rady Children’s Hospital has banned cell phones in patient areas after pictures of children were found on an employee’s phone and computer. Other hospitals have also had employees use camera phones in ways that violate patient privacy. Although policies are in place, enforcement is difficult.

Privacy law in the United States is historically tied to innovations in cameras. Warren and Brandeis wrote their famous article, “The Right to Privacy,” in response to the invention of portable “instantaneous photography.” Those fears have been reborn now that most people carry cell phones with them at all times and a majority of those phones have built-in cameras.

Newer phones are capable of easily sharing pictures and videos with others – regardless of location. As a result, candid pictures can be taken at unexpected times and in someone’s worst moments. For example, a customer at a grocery store recently had an embarrassing picture taken in a moment of anger after the store couldn’t process his credit card. Within moments, the picture was online and generating comments. In the article linked above, Harmon discusses the use of the candid camera phone:

“In recent weeks the devices have been banned from some federal buildings, Hollywood movie screenings, health club locker rooms and corporate offices. But the more potent threat posed by the phonecams, privacy experts say, may not be in the settings where people are already protective of their privacy but in those where they have never thought to care.”

The recent incidents with cell phone cameras at hospitals are troubling examples of why people should be concerned about privacy in places they previously “never thought to care.” Hopefully people will become more aware of cell phone use and capabilities as they relate to individuals’ privacy, not just in hospitals but everywhere.