At the end of January, the European Commissioner for Justice, Fundamental Rights, and Citizenship, Viviane Reding, announced that the Commission was proposing a new data protection regulation that includes “the right to be forgotten.” The rule, though it sounds innocuous, is probably the “biggest threat to free speech on the Internet,” as law professor Jeffrey Rosen put it.
CAGW’s Deb Collier on cookies:
This morning, the Wall Street Journal announced the results of an investigation into several online advertising companies, including Google, that were bypassing Apple Safari’s privacy settings and embedding tracking cookies onto computers and mobile devices using the Safari web browser without the knowledge or consent of the user. Safari’s privacy settings default to prohibit tracking, and Apple is currently working to close the loophole in its code. Meanwhile, Google has ceased using this tracking code, according to an article in CNET. This is one example of the type of issue that the Privacy Working Group was founded to review and discuss, particularly given the increased interest and scrutiny by lawmakers over the protection of personal information and online behavior.
Last week the Privacy Working Group hosted a discussion on the right to be forgotten at DAR Constitution Hall in Washington, DC. Tom Schatz, president of Citizens Against Government Waste, and Daniel Castro, senior fellow at the Information Technology & Innovation Foundation, hosted a group of approximately fifty privacy experts, policy practitioners, and academics. The discussion began with a panel consisting of Emma Llanso of the Center for Democracy and Technology, Noah Lang of Reputation.com, and Solveig Singleton of Convergence Law Institute.
Citizens Against Government Waste does a great job of recapping the event, which centered around a concept that has been highly controversial in Europe:
A highlight of the event was a conversation about how the paper by Dr. Franz Werro, Professor of Law at the University of Fribourg and Georgetown University Law Center, on the European concept of the “right to be forgotten” translates into American parlance, and how to address concerns about the First Amendment’s role in the debate. For more information on Dr. Werro’s paper, you may want to read a recent article in the Atlantic.
If you’re a public policy expert or academic working on privacy issues and would like to learn about future Privacy Working Group events, please e-mail email@example.com or call 202-772-2176.
View photos here:
By: Daniel Castro, Senior Fellow, Information Technology and Innovation Foundation
This week has seen a flurry of articles on the Facebook Photo Tagging feature after the security firm Sophos posted a story on its blog noting that this feature, which had previously been limited to North American users, had now been extended to other regions. The Photo Tagging feature makes it easier for users to tag their friends in a photo. Initially, this feature required users to label every photo manually. Not surprisingly, users asked for an easier way to do this, and Facebook responded with new features. First, Facebook made tagging easier by allowing users to tag multiple photos of the same person all at once. Later, Facebook created Tag Suggestions, an automated feature that uses facial recognition software to identify who is in a photo to make tagging faster.
Notably, Facebook has created many privacy options around this feature. These include the following:
- Users are notified when they are tagged.
- Users can untag themselves from any photo.
- Users can only tag their friends.
- Users can disable the “Tag Suggestions” feature so that their name will not be suggested automatically.
Some individuals may dislike the change, but Facebook has not done anything wrong. User privacy has not been compromised and users have not come forward to demonstrate actual harm as a result of these new features. Moreover, automatic photo tagging is not unique to Facebook. Picasa, Flickr, iPhoto and others have experimented with this feature in the past and will likely include it in the future.
In fact, tagging photos has proven to be a popular activity on Facebook. As of December 2010, users were adding tags to photos at a rate of 100 million tags per day.
So how much savings does this feature offer? If we assume that the time to tag a photo falls from about 10 seconds to 2 seconds per photo on average, a back-of-the-envelope estimate shows a large gain in efficiency:
100 million tags x 10 seconds x 30% U.S. users = 83,333 hours x $30/hour = $2.5M
100 million tags x 2 seconds x 30% U.S. users = 16,667 hours x $30/hour = $500K
So by this estimate we might gain as much as $2 million per day or $730 million per year in productivity from this new feature. (If you are interested, the employee costs are from March 2011 BLS data, the 30 percent estimate is the percent of Facebook users in the U.S., and the 10 second estimate is a rough estimate based on an academic paper.) The Facebook Photo Tagging feature is a classic example of how using information technology (IT) for automation can make processes more efficient. Instead of manually tagging hundreds or thousands of photos, the Facebook Tag Suggestion feature allows users to do this in a few clicks.
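The back-of-the-envelope numbers can be reproduced in a few lines of Python, using the same assumptions stated in the post (100 million tags per day, 30 percent U.S. users, $30/hour employee cost):

```python
# Back-of-the-envelope productivity estimate from the post's own assumptions.
TAGS_PER_DAY = 100_000_000   # tags added per day (Dec 2010 figure)
US_SHARE = 0.30              # estimated share of Facebook users in the U.S.
WAGE_PER_HOUR = 30.0         # employee cost, from BLS data

def daily_cost(seconds_per_tag):
    """Return (hours spent, dollar cost) of a day's U.S. tagging at a given speed."""
    hours = TAGS_PER_DAY * US_SHARE * seconds_per_tag / 3600
    return hours, hours * WAGE_PER_HOUR

manual_hours, manual_cost = daily_cost(10)          # ~83,333 hours, $2.5M per day
auto_hours, auto_cost = daily_cost(2)               # ~16,667 hours, $0.5M per day
savings_per_day = manual_cost - auto_cost           # ~$2M per day
savings_per_year = savings_per_day * 365            # ~$730M per year
```

The estimate is obviously rough, but it shows how even a few saved seconds per interaction compound into large aggregate figures at Facebook's scale.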
So with so much benefit, what explains the outrage? Most of this comes as no surprise: privacy fundamentalists have yet to “Like” a single new Facebook feature. Instead, they are stuck singing a one-note tune, opposing most technical advances on the claim that they reduce user privacy.
In this case they are also objecting to a specific technology: facial recognition. Facial recognition is a subset of image recognition, a challenge that computer scientists have spent countless hours trying to solve. Humans are generally very good at this type of task: show us a person in one photo and we can identify him or her in a second photo; show us a photo of an apple and we can pick out the apple in a second photo. But teaching a computer to identify these types of visual patterns is a much more difficult problem. (For those of us who often have trouble putting names to faces, maybe this indicates that we are more machine now than man.)
Over the years, computer scientists have been getting much better at this as algorithms, processors and sensors have all improved. Facial recognition software, while still sometimes generating both false positives and false negatives, works fairly well and continues to get even better. In particular, it works well at identifying one individual out of a relatively small population. This means that Facebook, whose users are likely to be photographing others in their own social network, has a much easier task—it does not have to automatically identify individuals in a photo from the entire universe of Facebook users, but rather only from a particular list of Facebook friends.
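The advantage of a small candidate set can be sketched in a few lines. This is a deliberately simplified illustration, not Facebook's actual method: the face "embeddings," user names, and distance threshold below are all hypothetical, and real systems use learned feature vectors rather than toy coordinates.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def suggest_tag(photo_embedding, embeddings_by_user, friend_ids, threshold=0.6):
    """Return the closest-matching friend, or None if no friend is close enough.

    Searching only friend_ids rather than the entire user base is what makes
    the problem tractable: a smaller candidate set means fewer chances for a
    false positive and far less computation per photo.
    """
    best_id, best_dist = None, float("inf")
    for uid in friend_ids:
        dist = euclidean(photo_embedding, embeddings_by_user[uid])
        if dist < best_dist:
            best_id, best_dist = uid, dist
    return best_id if best_dist <= threshold else None
```

With a few dozen friends instead of hundreds of millions of users, even this naive nearest-neighbor search is fast, and the threshold can be set conservatively so that an unfamiliar face simply yields no suggestion.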
Facial recognition has many potential benefits. It can be used to improve security, for example, to ensure that an ATM transaction is tied to a specific individual, in addition to that person’s ATM card and PIN. It can be used to automate the visual matching of an individual to an identity document. For example, airports have begun to use facial recognition to allow individuals to clear immigration with their passports at unmanned gates. And, in the future, it may even be used to personalize transactions at self-service kiosks or on Minority Report-like advertisements.
Moreover, facial recognition helps advance the state of the art in image recognition—learning how to do better image recognition could help countless applications function better if these applications can understand more information about their environment. In particular, augmented reality applications—which impose metadata on a virtual display of the physical world—can benefit from better object recognition. Already we are seeing the potential of these applications in mobile apps, such as Google Goggles and the Layar browser.
There are some legitimate questions about the use of facial recognition specifically and biometric information generally. For one, it is not clear how this information may be used. Could a social network release an app that lets you discreetly photograph someone to find out his or her name? Could the FBI or DHS license the use of a large database of photographs from a private company that links faces to individual identities? Could a company use videos of users from a site like YouTube to create biometric identifiers from an individual’s gait or voice patterns?
None of these potential applications is necessarily bad, but they do highlight the need for companies to establish clear privacy policies around biometric information, i.e., data about human characteristics or behaviors that can be used to uniquely identify an individual. This is not specific to Facebook or to facial recognition, but a general need for organizations to better address a broad category of information that may be used to uniquely identify individuals based on biometrics. For example, the manner in which an individual types on a keyboard has been found to be a unique behavioral biometric identifier. Organizations should be transparent about whether and when this type of information is recorded and converted into a biometric template (i.e., a reference of distinct characteristics). And like other personally identifiable information, organizations should protect this information according to the degree of risk of it becoming public, and they should make clear how they use it.
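To make the idea of a "template" concrete, here is a minimal sketch of how raw keystroke timings might be reduced to a compact behavioral template and later matched. The feature set (mean and spread of per-key dwell times) and the tolerance are hypothetical simplifications chosen for illustration; production systems use richer features and statistical models.

```python
from statistics import mean, stdev

def keystroke_template(dwell_times_ms):
    """Reduce raw per-key dwell times (ms) from an enrollment session to a
    small template of distinguishing statistics, rather than storing the
    raw keystroke data itself."""
    return {
        "mean_dwell": mean(dwell_times_ms),
        "stdev_dwell": stdev(dwell_times_ms),
    }

def matches(template, sample_dwell_times_ms, tolerance=0.25):
    """Crude match test: is the sample's mean dwell time within a given
    fraction of the enrolled mean?"""
    diff = abs(mean(sample_dwell_times_ms) - template["mean_dwell"])
    return diff <= tolerance * template["mean_dwell"]
```

The privacy-relevant point is that once timings are converted into a template like this, the template itself becomes identifying data, which is why transparency about when such conversion happens matters.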
Most of the concerns about facial recognition are about how it can be used for surveillance or suspect identification, such as the use of DMV photos by the FBI. People are wary of government or private companies tracking where they go. While some potential harms are speculative, they are not unimaginable. With enough data, for example, it may someday be possible to set up a computer outside of an abortion clinic to identify who goes inside. However, this information, although harder to obtain, could still be found in the pre-digital era. Yes, it took a lot more legwork, but the information and the potential for abuse still existed.
It may be that in the future we will all have a familiar face. Anonymity while in public is never a certainty. Individuals never know if they will encounter somebody they know while in public (hence the expression “it’s a small world”). In the past, most individuals would not have found anonymity in smaller communities anyway. And celebrities already experience this lack of anonymity today when they are recognized in public. So it may be that one day most of us will be like celebrities, unable to go anywhere without someone recognizing us.
The problem is that privacy advocates often present a false choice when critiquing technology. The potential privacy harm from surveillance exists regardless of whether the technology is used or not. We cannot, and we should not, try to turn back the clock. Surveillance already exists and most of us accept it in exchange for security and convenience. I don’t mind that a grocery store uses cameras to prevent shoplifting because I know that keeps down prices for customers like me. If using technology makes the process of tracking and identifying shoplifters easier, we all benefit.
Technology, no matter how simple or complex, is just a tool and it can be used for good and bad purposes. If you give a school child a pencil, he might create a stunning drawing or he might just poke his classmate. What we want to prevent are abuses of technology.
This gets back to why privacy legislation should focus on protecting individuals from harm, not on telling companies how to use data. So for those who are concerned about the misuse of technology, let’s have a conversation about how to update harassment and defamation laws to ensure individuals’ rights are protected. No child should be bullied and no individual should feel threatened because they appeared in a public space. But protection against these harms should be independent of technology.
Posted by Dave Williams, CAGW
The federal government is obsessed with collecting information, and no area is more susceptible to that obsession than the health care industry. According to Politico:
Congress approved $27 billion as part of the federal stimulus bill that included the HITECH Act to spur health IT adoption. “This is a one-time offer on behalf of the Congress and the administration,” Blumenthal told a packed house of IT experts, doctors and chief information officers at George Washington Medical Center last week.
Physicians can eventually earn from $44,000 to $65,000 in Medicare bonus payments if they can demonstrate meaningful use, but penalties are also looming in the future for those who don’t buy the technology. Those who don’t use health IT face a 1 percent reduction in Medicare payments beginning in 2015, increasing to 3 percent in 2017. The deadline to adopt the systems for 2011 is Oct. 1.
Data from the National Center for Health Statistics at the Centers for Disease Control and Prevention suggest that primary care doctors are buying more of the technology, which HHS believes will lead to better patient care. The data, which Blumenthal — a primary care physician himself — touted, show that between 2008 and the end of 2010, the percentage of primary care doctors who had adopted the technology grew from just under 20 percent to slightly below 30 percent. Not surprisingly, Blumenthal said, large urban hospitals and teaching hospitals are more likely than rural hospitals to adopt health IT.
There are however some real concerns about privacy:
Sharing the data is one thing, having the data “breached” — either through a lost or stolen laptop or through unauthorized access – is another. The HITECH Act also requires entities — hospitals, clinics, doctors’ offices — to report when there has been a breach of personal health data. A final rule on the breach notification portions of the law was pulled last year over concerns of patients and privacy advocates. The rule would have allowed physicians, hospitals or insurers to determine when a breach of patient data is harmful to a patient.
A look at the database that HHS keeps to track data breaches, required by HITECH, might give some patients pause. Last June, for instance, the personal health data of 1.2 million patients were breached because of the theft of a laptop from AvMed Health Plans, a Florida-based nonprofit insurer and Medicare Advantage provider, according to the database that HHS maintains. In February, a similar breach occurred at Blue Cross Blue Shield of Tennessee. Just over 1.02 million records were breached after hard drives were stolen from the insurer. The database also reveals other types of breaches, including the hacked records of a North Carolina doctor and another hacking incident at the Puerto Rico Department of Health.
An additional threat to the health IT initiative is the possibility that Republican leaders in the House will try to tap the $27 billion from the stimulus for their pet projects or plow it back into the general fund to reduce the deficit.
This is problematic on many levels. The federal government has not been a good protector of information in the past, and it lacks the free-market incentives to create a strong system that will respond to breaches. Unlike with social networking and other Internet sites, where consumers can always choose another service, the only remedy for government breaches is to move to another country.
Also, if Republicans do gut the program financially, bureaucrats may try to move ahead with it under even less oversight and with fewer safeguards.
While it may seem that sharing information is a way to deliver better and faster health care, the exorbitant price tag and privacy pitfalls remind everybody why the government needs to proceed with caution.
The online debate has ended, and Economist readers think governments should do more to protect online privacy. Today marked the seventh and final day of an Oxford-style debate held by The Economist on the statement:
This house believes that governments should do far more to protect online privacy.
Marc Rotenberg, president of the Electronic Privacy Information Center, served as the proposer, and Jim Harper, director of information policy studies at the Cato Institute, served as the opposition. The pro side took an early lead but finished with only a slight margin: 52% yes and 48% no.
Harper illustrates government failure in the pursuit of privacy protection, noting its high cost. He argues that it is consumers’ responsibility to become educated on privacy matters and to protect the information they share.
Rotenberg argues that because the number of online service providers is dwindling, and because companies often change and/or do not follow their “privacy policies,” government, whose main charge is to protect citizens, is the only entity capable of regulating online privacy.
To see the entire debate, along with engaging guest commentary, click here. And leave comments with your thoughts!
Today the Washington Legal Foundation posted a video explaining the implications of June’s FTC-Twitter privacy settlement. Brendon M. Tavelli, an associate at the law firm Proskauer Rose LLP, explains that there are two key ways in which this settlement differs from all previous FTC settlements.
- This is the first time the FTC has targeted a social network, a medium that deals in disclosing information, not hiding private data.
- This case does not involve “sensitive personal data” such as credit card information or Social Security numbers. In fact, Twitter has little information on its users other than names and passwords.
So why is this a big deal? Why did the FTC’s ruling state that “Twitter will be barred for 20 years from misleading consumers about the extent to which it maintains and protects the security…of consumer information”?
The answer, as Tavelli explains, is that Twitter promised to respect users’ privacy if they chose to make their Tweets private, and it failed to live up to that promise.
The FTC has made its point very clear:
Neither the medium nor the type of information matter. Be careful when crafting privacy policies.
Watch the video here.
ITIF's Daniel Castro, "Stricter Privacy Regulations for Online Advertising Will Harm the Free Internet"
In a recent article, ITIF’s Daniel Castro explains the importance of online advertising. He presents substantial empirical data showing how targeted advertising is crucial for supporting the websites responsible for the majority of the free and low-cost content online.
Check out the full article here.
In this week’s edition of Advertising Age, Carolyn Homer and Ryan Radia, both of CEI, argue that the “strict privacy mandates” being discussed in Congress (including Rep. Bobby Rush’s “Best Practices Act”) would not only kill millions of U.S. jobs in the e-commerce and online advertising industries, but would also be less than effective, given the federal government’s track record in the arena.
To see the full article, click here.