Archive for 'Privacy Policies'

Data Privacy Day 2009

Wednesday, January 28th, 2009

Last year on January 28th, the first annual Data Privacy Day celebration was held in the United States at Duke University. Today marks the second annual Data Privacy Day, and the celebration has grown dramatically.

Last year, Governor Easley proclaimed January 28th as Data Privacy Day for the state of North Carolina. This year, he proclaimed the entire month of January to be Data Privacy Month. North Carolina, Washington, California, Oregon, Massachusetts, and Arizona have also declared January 28th a state-wide Data Privacy Day. Last but certainly not least, Congressman David Price and Congressman Cliff Stearns introduced House Resolution 31, which passed on January 26th by a vote of 402 to 0, making today National Data Privacy Day in the United States. It is truly outstanding to see such strong support in the form of resolutions and proclamations.

The best way to support or celebrate Data Privacy Day is to take action. Since the goal of Data Privacy Day is to promote awareness and education about data privacy, one easy way to act is to check out the great educational resources made available in conjunction with the day. For example, Google has posted about what it has done to protect privacy and raise privacy awareness. Microsoft is holding an event tonight and has more information on data privacy on its website.

Here at The Privacy Place, we were once again pleased to have the opportunity to celebrate Data Privacy Day at Duke University by attending the panel discussion on Protecting National Security and Privacy. The panel discussion was extremely well attended and well received. The event had a number of sponsors, including Intel, which maintains a fantastic website with extensive information on Data Privacy Day. If you weren’t able to make it to the panel, I strongly encourage you to check out Intel’s site.

Lastly, Data Privacy Day is all about awareness and education, so be sure to spread the word!

[Update: Fixed the link to the House Resolution that passed on Monday.]

Silver Bullet Security Podcast Interviews Dr. Williams

Wednesday, December 24th, 2008

Two days ago, the 33rd episode of the Silver Bullet Security Podcast was released. If you are new to this podcast, it is a monthly series featuring interviews with noted security experts, co-sponsored by IEEE Security & Privacy magazine and Cigital. I would highly recommend it to anyone interested in software security and privacy research. I’ve been a loyal listener almost since it started, and I have yet to find an episode that didn’t teach me something new.

In it, Dr. Gary McGraw, the host of the series, interviews Dr. Laurie Williams, an Associate Professor of Computer Science at North Carolina State University. They discuss the work the Software Engineering Realsearch Group is doing in software security, testing, and agile development. In my humble and admittedly biased opinion, Dr. Williams is an excellent teacher and the podcast is absolutely worth checking out.

In a previous episode, Dr. Annie Antón, a Professor of Computer Science at North Carolina State University and the Director of The Privacy Place, was also interviewed by Dr. McGraw. They discussed our work here at The Privacy Place, including research on privacy policies, the role of regulations in computer privacy and security, and the relationship between privacy and security. Of course, my opinion of this episode is even more biased, but I would still encourage you to check it out. 🙂

Previous podcasts have included interviews with luminaries such as Ed Felten, Bruce Schneier, Dorothy Denning, Eugene Spafford, Adam Shostack, and Matt Bishop. I am tempted to simply list all the interviewees because each episode is fantastic, but I’ll leave the rest as a teaser. If you were so inclined, you could even follow their RSS or iTunes feed as a New Year’s resolution. 😉

Readability of Internet Privacy Policies

Friday, September 5th, 2008

By Dr. Annie I. Antón and Gurleen Kaur

Erik Sherman’s September 4, 2008, BNET blog post, Privacy Policies are Great — for PhDs, analyzes the readability of common Internet privacy policies, including those of Google, Microsoft, and Yahoo. His analysis supports the findings published by ThePrivacyPlace.Org researchers in IEEE Security & Privacy: our studies showed that privacy policies are inaccessible to the very end users they are intended to inform.

Our first study, published in 2004, analyzed 40 online privacy policy documents from nine financial institutions to examine their clarity and readability.  Our findings revealed that compliance with existing legislation was, at best, questionable.

Our second study, published in 2007, analyzed 24 healthcare privacy policy documents from nine healthcare Web sites both pre- and post-HIPAA (Health Insurance Portability and Accountability Act).  Our findings revealed that HIPAA’s introduction has led to more descriptive privacy policies, but many remain difficult to read.
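Readability metrics in this vein, such as the Flesch Reading Ease score and the Flesch-Kincaid grade level, are commonly used for exactly this kind of analysis. The sketch below computes both for a snippet of policy text; it is only a minimal illustration with a crude syllable heuristic, not the instrumentation behind our published studies.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def readability(text: str):
    """Return (Flesch Reading Ease, Flesch-Kincaid grade level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

policy_excerpt = (
    "We may share aggregated, non-personally identifiable information "
    "with third parties for the purposes described in this policy."
)
ease, grade = readability(policy_excerpt)
print(f"Reading ease: {ease:.1f}, grade level: {grade:.1f}")
```

On the Reading Ease scale, higher means easier; scores below roughly 60 are generally considered difficult for a general audience.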

Last month, ThePrivacyPlace.Org published an empirical study in IEEE Transactions on Engineering Management revealing that users perceive traditional, paragraph-form policies to be more secure than other policy representations, even though user comprehension of paragraph-form policies is poor in comparison to those other representations.

Are Google Health’s Privacy Practices Healthy?

Friday, June 20th, 2008

by Jessica Young and Annie I. Antón

On May 19, 2008, Google launched Google Health [1], a new Personal Health Record (PHR) web portal that allows patients to gather and organize their medical records while keeping their physicians up to date about their health condition. As with other PHRs, like Microsoft’s HealthVault, Google Health does not appear to be covered by federal or state health privacy laws. According to the Google Health Terms of Service, Google is not a “covered entity” as defined in the Health Insurance Portability and Accountability Act (HIPAA); as such, “HIPAA does not apply to the transmission of health information by Google to any third party” [2].

Researchers at ThePrivacyPlace.org have evaluated privacy policies and privacy breaches since the project’s founding in 2001. In particular, The Privacy Place researchers are examining the extent to which information is protected in financial and health care systems that must comply with relevant laws and regulations.

Google Health is not a covered entity as defined in HIPAA. Thus, any personal health data that you submit to Google Health will not be afforded the same legal protections required of health care providers under HIPAA. Until state and federal agencies establish and enforce laws to protect the privacy of personal health records maintained by non-covered entities, individuals should carefully consider the risks involved in submitting sensitive health information to Google Health and other PHRs such as HealthVault. PHRs are not subject to the same privacy and security laws to which traditional medical records are subject in the United States.

As with our analysis of Microsoft’s HealthVault [3] in October 2007, we encourage patients to carefully consider and question the privacy practices articulated in the Google Health privacy policy, terms of service, and frequently asked questions. To further explore this new service, we analyzed and evaluated the protections and vulnerabilities involved in using Google Health. The Google Health Help Center provides a link and a U.S. postal address, but no other contact method such as an email address or phone number, for users to submit a “Question About Privacy” [4]; clicking on this link displays a web page with a form for inquiries that states, “[w]e hope this information [Privacy Policy] will help you make an informed decision about sharing your personal information with us” [5]. Unfortunately, Google has been unresponsive to our questions regarding its Google Health privacy policy.

We sent four questions regarding Google Health’s privacy practices via the Google Health Help Center [5] on May 23, 2008. On June 4, 2008, we submitted the same four questions but this time via Google’s Web Search Help Center [6], where users are invited to submit questions specifically about Google’s privacy practices. It has been over three weeks since our first inquiry and we have yet to receive a response of any kind to any of our questions. Patients are concerned about the privacy of their health information [7]. A lack of prompt replies to questions regarding health privacy is disconcerting and suggests that privacy is not a priority for those managing Google Health or manning the Google Help Center.

In our previous analysis of Microsoft’s HealthVault [3], we focused on three questions. Here we examine those same three questions within the context of Google Health.

Will your health information be stored in other countries without appropriate legal oversight, skirting many of the protections afforded by the HIPAA?

The three Google Health privacy-related documents provide no insight into where personal health information will be stored. Because we received no answers to our inquiries to the Google Health Help Center, we turned to the general Google Privacy Policy, which states, “Google processes personal information on our servers in the United States of America and in other countries” [8]. Users should always be concerned about the location of their data because different countries have different data protection standards and laws. If your data is breached in some way, the physical location of the server on which it was stored will affect the legal recourse available to you.

Will your health care records be merged with other personal information about you that was previously collected within the context of non-health related services?

No, according to the documents we reviewed. Google Health’s Privacy Policy states: “The [record] log information will be used to operate and improve service and will not be correlated with your use of other Google services” [9]. This is also addressed in the FAQ by the statement that “no personal or medical information in your Google Health profile is used to customize your Google.com search results or used for advertising” [10]. At this point in time, it appears that Google Health information will not be merged with information from other Google services without your consent.

Are the access controls to your health records based not only on your consent, but also on the principle of least privilege?

Google Health allows users to grant read/write access to their information to third-party sites and/or individuals. The Google Health Privacy Policy states: “you [as a Google Health user] can revoke sharing privileges at any time. When you revoke someone’s ability to read your health information, that party will no longer be able to read your information, but may have already seen or may retain a copy of the information” [9]. Thus, users should determine their access control rules as soon as possible when setting up their accounts; access control rules, and the ability to change them, become immaterial once private health information reaches an unauthorized or unintended party. It is not clear that Google is implementing the principle of least privilege, because it appears that others may be able to grant read/write access to your health information, leaving the door open for data breaches.
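To make the least-privilege concern concrete, below is a purely hypothetical sketch of an access-control list for a health profile in which sharing defaults to the minimum privilege (read-only), write access must be granted explicitly, and revocation removes future access. It does not reflect Google Health’s actual implementation, and, as the policy excerpt above makes clear, no revocation mechanism can recall copies a grantee has already made.

```python
from enum import Enum

class Permission(Enum):
    READ = 1
    READ_WRITE = 2

class HealthProfile:
    """Hypothetical health profile with owner-managed, least-privilege sharing."""

    def __init__(self, owner: str):
        self.owner = owner
        self.grants = {}  # grantee -> Permission

    def share(self, grantee: str, permission: Permission = Permission.READ):
        """Grant access; defaults to the minimum (read-only) privilege."""
        self.grants[grantee] = permission

    def revoke(self, grantee: str):
        """Stop future access; copies already made cannot be recalled."""
        self.grants.pop(grantee, None)

    def can_read(self, party: str) -> bool:
        return party == self.owner or party in self.grants

    def can_write(self, party: str) -> bool:
        return party == self.owner or self.grants.get(party) == Permission.READ_WRITE

profile = HealthProfile(owner="alice")
profile.share("dr_jones")                            # read-only by default
profile.share("imaging_portal", Permission.READ_WRITE)
profile.revoke("imaging_portal")                     # no further access from now on
print(profile.can_read("dr_jones"), profile.can_write("dr_jones"))  # True False
```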

References

[1] Google Health.

[2] Google Health Terms of Service, April 28, 2008

[3] A.I. Antón. Is That Vault Really Protecting Your Privacy?, ThePrivacyPlace.org Blog, October 9, 2007.

[4] Google Health Help Center Contacting Support page.

[5] Google Health Help Center Contact Us [Question about privacy] page.

[6] Google Web Search Help Center.

[7] National Consumer Health Privacy Survey, California Health Care Foundation, 2005.

[8] Google Privacy Policy, October 14, 2005.

[9] Google Health Privacy Policy, no date provided.

[10] Google Health Frequently Asked Questions, no date provided.

Is That Vault Really Protecting Your Privacy?

Tuesday, October 9th, 2007

Last week, Microsoft announced a new Personal Health Record (PHR) system called HealthVault. HealthVault is a web-based portal that enables end users to upload their health records to the web. Unfortunately, what people don’t realize is that HealthVault and similar PHR systems are not governed by federal health privacy law. When the Health Insurance Portability and Accountability Act (HIPAA) was enacted, no one envisioned that private software firms would eventually want to create databases for our health records. As a result, HealthVault and other PHR systems are not subject to the same privacy and security laws to which traditional medical records are subject in the United States, because they are not “covered entities” as specified in HIPAA.

Over the course of the past seven years, researchers at ThePrivacyPlace.org have evaluated over 100 privacy statements for financial and healthcare web portals. In addition, we focus on evaluating the extent to which the privacy of sensitive information is protected in these systems, as well as the extent to which these systems comply with relevant regulations.

Even though physicians and the press are excited about the introduction of these new PHR systems [1], there are questions that I urge the public to ask before entrusting their sensitive health records to any PHR system. My concerns are based on a careful evaluation of the HealthVault privacy statements [2, 3]. Microsoft appears to have sought the counsel of physicians who believe that patient consent is the best indicator of privacy protections. Unfortunately, most physicians do not understand the subtleties buried within healthcare privacy statements within the context of the software that implements those statements. For this reason, I now list three primary questions that one should ask before entrusting their health records to HealthVault or any other PHR system:

Will your health information be stored in other countries without appropriate legal oversight, skirting many of the protections afforded by the HIPAA?

The HealthVault privacy statement explicitly states that your health records may be off-shored to countries that do not afford the same privacy protections for sensitive information that we do in the United States. In particular, if information is disclosed or altered, do you have any legal recourse or remedy?

Will your health care records be merged with other personal information about you that was previously collected within the context of non-health related services?

Within the context of HealthVault, the answer to this question is yes. Microsoft explicitly states that they will merge the information they have previously collected from you via non-health related services with your HealthVault information. Moreover, it is unclear what information Microsoft already has about us other than our names and contact information and precisely what information third parties may access. Furthermore, we don’t know if that information is accurate or complete. Thus, use of the merged information may not be what we expect.

Are the access controls to your health records based not only on your consent, but also on the principle of least privilege?

Although HealthVault requires patient consent for any access to or sharing of your health records, its access controls leave the door wide open for data breaches. HealthVault enables individuals to grant access to other people and programs, which can in turn grant read/write access to your health record. The only safeguard is a history mechanism that provides an accounting of accesses if you suspect, after the fact, that your information has been breached. A better approach would be for Microsoft to proactively enforce contractual obligations via audits and monitoring mechanisms.
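As an illustration of why delegation plus after-the-fact accounting is a weak safeguard, here is a small hypothetical model (not Microsoft’s design) in which any current grantee can re-grant access and every action is merely appended to a history log. The audit trail records the exposure; it does not prevent it.

```python
from datetime import datetime, timezone

class VaultRecord:
    """Hypothetical health record with delegable access and an access history."""

    def __init__(self, owner: str):
        self.owner = owner
        self.grants = {}    # grantee -> who granted the access
        self.history = []   # (timestamp, actor, action)

    def grant(self, grantor: str, grantee: str):
        """Any current reader may delegate access onward (the breach risk)."""
        if grantor != self.owner and grantor not in self.grants:
            raise PermissionError(f"{grantor} has no access to delegate")
        self.grants[grantee] = grantor
        self._log(grantor, f"granted access to {grantee}")

    def read(self, reader: str):
        if reader != self.owner and reader not in self.grants:
            raise PermissionError(f"{reader} has no access")
        self._log(reader, "read record")

    def _log(self, actor: str, action: str):
        self.history.append((datetime.now(timezone.utc).isoformat(), actor, action))

record = VaultRecord(owner="alice")
record.grant("alice", "clinic_app")         # the patient consents to one program...
record.grant("clinic_app", "analytics_co")  # ...which quietly re-grants access
record.read("analytics_co")
for entry in record.history:                # the history only reveals this afterwards
    print(entry)
```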

The hype surrounding HealthVault’s privacy protections among those in the medical community must be balanced with the reality of the information security and privacy practices expressed in its public privacy statements. It is critical to address these privacy concerns in the design of PHR systems before we deploy them with vulnerabilities that will ultimately lead to yet another rash of data breaches.

References

[1] Steve Lohr. Microsoft Rolls Out Personal Health Records, New York Times, 4 October 2007.

[2] Microsoft HealthVault Search and HealthVault.com Beta Version Privacy Statement, October 2007.

[3] Microsoft HealthVault Beta Version Privacy Statement, October 2007.

On Secondary Use of Information

Thursday, March 23rd, 2006

In their enthusiastic charge to protect people from privacy invasion, privacy advocates sometimes get too focused on preventing the disclosure of information. We see plenty of client-based tools, often browser plug-ins, that warn people when they are about to submit personal information to web sites that don’t have published privacy policies. Some of the more sophisticated tools will compare an end-user’s preferences to a site’s published policy and inform the user whether the site’s policy is consistent with those preferences.
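As a rough illustration of what those preference-matching tools do, the sketch below compares a user’s stated preferences against a site’s declared practices and flags any mismatches. It is a simplified, hypothetical model; real tools of this era, such as P3P user agents, work with much richer policy languages.

```python
# A user's preferences and a site's declared policy, each reduced to simple flags.
user_prefs = {
    "shares_with_third_parties": False,  # the user refuses third-party sharing
    "uses_data_for_marketing": False,
    "retains_data_indefinitely": False,
}

site_policy = {
    "shares_with_third_parties": True,
    "uses_data_for_marketing": False,
    "retains_data_indefinitely": True,
}

def policy_conflicts(prefs: dict, policy: dict) -> list:
    """Return the practices the site declares but the user does not accept."""
    return [
        practice
        for practice, declared in policy.items()
        if declared and not prefs.get(practice, False)
    ]

conflicts = policy_conflicts(user_prefs, site_policy)
if conflicts:
    print("Warning: site policy conflicts with your preferences:", conflicts)
else:
    print("Site policy is consistent with your preferences.")
```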

But focusing on preventing the disclosure of information isn’t enough because people _want_ to disclose their information to companies, both electronically and directly.


What you say (online) can be used against you

Thursday, November 3rd, 2005

The allure of posting thoughts, feelings, and commentary online has generally been fueled by the freedom and (at least pseudo) anonymity that the Internet provides. A person can start a blog or post on numerous social networking sites without fear of reprisal, as he/she will generally use a pseudonym or simply leave an anonymous comment. However, as the Internet has become more mainstream, companies and organizations are increasingly trying to discover the identities of such posters and hold them accountable for their words, actions, or portrayed behavior. Two recent situations receiving news coverage illustrate this trend.

The first example involves an employee who posted an anonymous comment (which included a racial slur) to a Yahoo! message board discussing his company. The company, Allegheny Energy Service, discovered the post and sued to reveal the identity of the anonymous poster. The company eventually obtained a subpoena compelling Yahoo! to reveal the poster’s identity, and then fired him for the racial slur. The employee is countersuing for wrongful termination, among other claims. GWU law professor Daniel Solove, in his blog Concurring Opinions, discusses this situation in greater detail, including the legal issues surrounding the original suit and the countersuit.

At the college level, many students are now members of a site called the Facebook, which describes itself as “an online directory that connects people through social networks at schools”. Students can post pictures and personal details, engage in discussions about anything, and join groups for common interests. Students are not the only ones taking note, however, and some students have found themselves held accountable for the pictures and words they post online. A student paper at Boston College, The Heights, covers in this article how students have been subject to disciplinary action and, in one case so far, expulsion at the hands of university officials. (Use the article’s print feature to view the full text without registering for the site; clicking through to the next page forces you into a registration process.) The article summarizes the situation with this statement: “Students at schools across the country have recently been charged with everything from alcohol related infractions to making threatening comments to a campus police officer – all from photos or information posted on the Facebook.”

Both of these stories show the difficulty of maintaining any sort of private online identity, separate and distinct from the real world. In both cases, the actions of the company and the university are somewhat questionable, as they involve pursuing the employee or student beyond the workplace or classroom and into that individual’s actions at home. In the case of the university, though, the students’ homes may be university property, in which case different rules may apply.

Google updates their privacy policy, and everyone takes notice

Tuesday, October 18th, 2005

On October 14th, 2005, Google put up a new privacy policy, replacing one that had been in effect since July 1st, 2004 (available here). This fact alone does not seem particularly newsworthy, but what has been interesting to observe is the extensive coverage of this change on the internet. People have been analyzing the changes, comparing the previous policy to the new one, and generally commenting on Google and privacy.

Google has also put up a new section entitled Google Privacy Policy Highlights, which appears to be an attempt to quickly capture the essence of the privacy policy for those who won’t read the entire document. Given that so few people actually read privacy policies, this may benefit consumers and regular internet users by getting them to read anything at all about what they are agreeing to when they use Google services. However, providing these highlights necessarily risks omitting details that may be important to some individuals.

The implications and legal status of a highlights document are also unclear. Just as in the case of the HIPAA Privacy Rule, a privacy policy highlights page may benefit users by making policies more accessible and more likely to actually be read. However, following the Privacy Rule is necessary but not sufficient for HIPAA compliance; likewise, a company adhering to its highlighted privacy policy elements may still be violating other aspects of its policy. Furthermore, while Google still seems to be squarely on the side of good, more devious or uncaring companies may use a privacy policy highlights document to deceptively portray their privacy practices, knowing few (if any) people will take the time to review the longer, more legally significant full policy.

Google has continued to make previous versions of the privacy policy available for review/download, which is a good business practice but could go further. Granted, Google is doing more than most companies in this respect, but the next step would be to actually highlight the changes between two documents. Very few (if any) sites are providing this sort of privacy policy insight, so curious/concerned individuals are left to use other means for such analysis, such as this HTML diff tool. Using this tool, one can view the changes from the old policy to the new one here, although this only provides a literal diff between the documents and no high-level insight. Another text comparison that emphasizes the changes between documents is here.
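For readers who would rather not rely on a third-party web tool, Python’s standard difflib module can produce the same kind of literal, line-level diff between two saved policy versions. A minimal sketch, assuming the two versions have been saved locally as plain text (the filenames here are hypothetical):

```python
import difflib
import sys

# Assumes the two policy versions have been saved locally as plain text
# (the filenames below are hypothetical).
with open("google_privacy_2004-07-01.txt") as f:
    old_policy = f.readlines()
with open("google_privacy_2005-10-14.txt") as f:
    new_policy = f.readlines()

diff = difflib.unified_diff(
    old_policy,
    new_policy,
    fromfile="Google privacy policy (July 1, 2004)",
    tofile="Google privacy policy (October 14, 2005)",
)
sys.stdout.writelines(diff)
```

Like the web tools mentioned above, this gives only a literal comparison; interpreting what the changes mean for users still requires a human reader.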


IBM Announces A Privacy Policy Promising Not To Use Genetic Information In Hiring, Benefits Decisions

Tuesday, October 11th, 2005

Compared with the Chicago Bulls (see a blog entry I posted several days ago), IBM Corp., the world’s largest technology employer by revenue, is doing something right and big for society to help protect employee privacy. IBM will soon announce a workforce privacy policy promising not to use genetic information in hiring or in determining eligibility for its health care or benefits plans. Genetic tests are not prevalent in the marketplace, but some companies have secretly performed such tests without employees’ knowledge or consent.

Video Surveillance Should Be Included in Privacy Policies

Friday, October 7th, 2005

Video surveillance is very prominent these days and, given ThePrivacyPlace.Org’s extensive analyses of privacy documents at financial institutions (see: Financial Privacy Policies and the Need for Standardization), I decided to examine whether financial institutions now mention the collection of information via video surveillance in their policy documents. I checked all nine institutions examined in our 2004 IEEE Security & Privacy paper. None of the nine mention video surveillance in any of their privacy, security, or legal statements.

Given that banks, for example, collect video of all ATM transactions and of patrons who enter their branches, it seems only natural that we, as patrons, have a right to expect these practices to be disclosed in their policy statements. Video surveillance affects one’s sense of autonomy: the knowledge that one’s actions are being observed may alter one’s behavior, resulting in a loss of privacy. There are certainly merits to video surveillance at ATMs (public safety, deterrence of crime, etc.), but the loss of autonomy results in a potential invasion of privacy about which patrons need to be informed. This begs the questions: Why are financial institutions not including their video surveillance practices in their policy statements? Are patrons not entitled to know how this video is used and whether it is aggregated with other kinds of information about them?
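For readers who want to repeat or extend this kind of check, a simple keyword scan over locally saved policy documents is one starting point. The sketch below is only a rough first pass (the directory and filenames are hypothetical, and a keyword search cannot catch euphemistic wording), but it automates the basic question of whether a policy mentions video surveillance at all.

```python
from pathlib import Path

# Hypothetical local copies of each institution's privacy, security, and legal statements.
POLICY_DIR = Path("policies")
KEYWORDS = ["video surveillance", "video monitoring", "cctv", "closed-circuit", "camera"]

def surveillance_mentions(text: str) -> list:
    """Return the surveillance-related keywords that appear in the text."""
    lowered = text.lower()
    return [kw for kw in KEYWORDS if kw in lowered]

for policy_file in sorted(POLICY_DIR.glob("*.txt")):
    hits = surveillance_mentions(policy_file.read_text(errors="ignore"))
    status = ", ".join(hits) if hits else "no mention of video surveillance"
    print(f"{policy_file.name}: {status}")
```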