Looking beyond the Privacy Mirror

Over the last two weeks, I have been using my homegrown Facebook application, Privacy Mirror, as a means of experimenting with Facebook’s privacy settings. Although Facebook provides a nice interface to view your profile through your friends’ eyes, it does not do the same for applications. I built Privacy Mirror in hopes of learning what 3rd party application developers can see of my profile by way of my friends’ use of applications. I have yet to speak with representatives of Facebook to confirm what I observed, but I am confident in the following findings.

Imagine that Alice and Bob are friends in Facebook. Alice decides to add a new application, called App X, to her profile in Facebook. (For clarity’s sake, by “add”, I mean that she authorizes the application to see her profile. Examples of Facebook applications include Polls, Friend Wheel, Movies, etc.) At this point, App X can see information in Alice’s profile. App X can also see that Alice is friends with Bob; in fact, App X can see information in Bob’s profile. Bob can limit how much information about him is available to applications that his friends add to their profiles through the Application Privacy settings. In this case, let’s imagine that Bob has only allowed 3rd party applications to see his profile picture and profile status.

After a while, Alice tells Bob about App X. He thinks it sounds cool and adds it to his profile. At this point, if App X, via Alice’s profile, looks at Bob’s profile, it will see not only his profile picture and status but also his education history, hometown info, activities, and movies. That is significantly more than what he authorized in his Application Privacy settings. What is going on here?

It appears what’s going on is that if Alice and Bob both have authorized the same application, that application no longer respects either user’s Application Privacy settings. Instead, it respects the Profile Privacy settings of each person. In essence, App X acts (from a privacy settings point of view) as if it were a friend of Alice and Bob and not a third-party application.
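The rule I observed can be modeled in a few lines. To be clear, this is an illustrative sketch of the behavior I saw, not Facebook’s actual logic; every function and field name here is my own invention:

```python
# Illustrative model of the observed behavior (not Facebook's actual code).
# An app viewing a user's friend normally sees only what that friend's
# Application Privacy settings allow; but once both users have authorized
# the app, the friend's Profile Privacy settings apply instead.

def fields_app_can_see(app, viewer_apps, target_apps,
                       target_application_privacy, target_profile_privacy):
    """Return the set of the target's profile fields visible to the app
    when it looks at the target via the viewer's authorization."""
    if app in viewer_apps and app in target_apps:
        # Both friends authorized the app: it is treated like a friend,
        # so Profile Privacy (not Application Privacy) governs.
        return target_profile_privacy
    return target_application_privacy

# Bob's settings, per the example above.
bob_application_privacy = {"profile picture", "status"}
bob_profile_privacy = {"profile picture", "status", "education history",
                       "hometown", "activities", "movies"}

# Before Bob adds App X, it sees only what he allowed applications to see.
before = fields_app_can_see("App X", {"App X"}, set(),
                            bob_application_privacy, bob_profile_privacy)

# After Bob adds App X, it sees everything a friend like Alice would see.
after = fields_app_can_see("App X", {"App X"}, {"App X"},
                           bob_application_privacy, bob_profile_privacy)
```

Note that nothing in Bob’s Application Privacy settings changes between the two calls; only the second authorization does, which is exactly why the expanded disclosure is so easy to miss.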

Putting on my privacy commissioner hat for a moment, I’d want to analyze this situation from a consent and disclosure perspective. When Bob confirms his friendship with Alice he is, in a sense, opting in to a relationship with her. This opt-in indicates that he is willing to disclose certain information to Alice. Bob can control what information is disclosed to Alice through his Profile Privacy settings, and this allows him to mitigate privacy concerns he has in terms of his relationship with Alice.

What Bob isn’t consenting to (and is not opting in to) is a relationship with Alice’s applications. Bob is completely unaware of which applications Alice currently has or will have in the future. This is an asymmetry of relationship. It is entirely possible that Alice and Bob will have applications in common and once they do the amount of profile information disclosed (by both of them) to an application can radically change and change without notice to either Alice or Bob. Furthermore, it is unclear which Facebook privacy settings Bob needs to manipulate to control what Alice’s applications can learn about him.

This lack of clarity is harmful. It shouldn’t take a few hundred lines of PHP, three debuggers, and an engineering degree to figure out how privacy controls work. This lack of clarity robs Facebook users of the opportunity to make meaningful and informed choices about their privacy.

This experiment started after I read the Canadian Privacy Commissioner’s report of findings on privacy complaints brought against Facebook. This report raised significant concerns about third-party applications and their access to profile information.

As of the beginning of Catalyst (today!), Facebook has about 15 days remaining to respond to the Canadian Privacy Commissioner’s office. I hope that this issue of third-party applications and privacy controls is meaningfully addressed in Facebook’s response.

(Cross-posted with Burton Group’s Identity Blog.)

Further findings from the Privacy Mirror experiment

I find that I rely on my debugging skills in almost every aspect of my life: cooking, writing, martial arts, photography… And it helps when you’ve got friends who are good debuggers as well. In this case, my friends lent a hand helping me figure out what I was seeing in my Privacy Mirror.

The following is a snapshot of the Application Privacy settings I have set in Facebook:

Facebook Application Privacy Settings

Given these settings, I would expect that the Facebook APIs would report the following to a 3rd party application developer:

  • My name
  • My networks
  • My friends’ IDs
  • My profile status
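
Given those settings, here is a sketch of the projection I would expect the platform to perform before handing my profile to a developer. The field names and profile values are placeholders of my own, not actual Facebook API fields:

```python
# What I would expect: the API projects a full profile down to only the
# fields enabled in my Application Privacy settings. Field names and
# values are illustrative placeholders, not real Facebook API fields.

allowed_fields = {"name", "networks", "friend_ids", "status"}

full_profile = {
    "name": "Example User",
    "networks": ["Example Network"],
    "friend_ids": [101, 102, 103],
    "status": "testing Privacy Mirror",
    "hometown": "should not be visible to apps",
    "movies": ["should not be visible to apps"],
}

expected_api_view = {field: value for field, value in full_profile.items()
                     if field in allowed_fields}
print(sorted(expected_api_view))  # ['friend_ids', 'name', 'networks', 'status']
```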


Personal Privacy Impact Assessments for Facebook

I’m reading Canada’s Assistant Privacy Commissioner Elizabeth Denham’s recently released findings into complaints levied against Facebook. (Report of Findings into the Complaint Filed by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) against Facebook Inc. Under the Personal Information Protection and Electronic Documents Act.) My first reaction is, frankly, one of jealousy. I wish we had a similar commissioner/czar/wonk here in the US. I suppose elements of the FTC’s work head in this direction, but without the same charter, which is too bad.

Section 4 of the report is, for me, where the action is at. Section 4 is concerned with 3rd party applications in Facebook and those applications’ use of personal data. As the Facebook platform grows with new additions like Facebook Connect, issues of third-party access to user information will continue to be a concern to those who pay attention to such things. There’s a challenge here, as the ways in which 3rd party applications use user information are hard to decipher; from an end-user perspective, it is a fairly black-box operation.

I wonder if Facebook could build a personal privacy impact assessment (PPIA) app. The PPIA would analyze the action you are about to take on Facebook, your privacy settings, the 3rd party apps you’ve allowed access to your profile, and the privacy settings you have set for those apps. The PPIA could give you a quick read on which applications would be privy to the action you are about to take. It could indicate which groups of friends (based on your privacy settings) would see what you are about to do. Essentially, it would let you see how far across your social graph a certain action (like posting a link or photo) will travel.
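At its core, such a PPIA is just a visibility calculator. Here is a minimal sketch, assuming a simple ordered set of visibility levels; the level names, group names, and data shapes are all hypothetical, not any real Facebook API:

```python
# A hypothetical PPIA core: a visibility calculator. Levels, group names,
# and data shapes are my own invention, not any Facebook API.

# Ordered visibility levels: a higher rank means the action travels further.
LEVELS = {"only me": 0, "friends": 1, "friends of friends": 2, "everyone": 3}

def assess(action_level, friend_groups, app_settings):
    """Report which friend groups and authorized apps would see an action
    posted at action_level. Each group/app is tagged with the minimum
    action visibility required before it can see the action."""
    rank = LEVELS[action_level]
    return {
        "groups": [g for g, lvl in friend_groups.items() if LEVELS[lvl] <= rank],
        "apps": [a for a, lvl in app_settings.items() if LEVELS[lvl] <= rank],
    }

report = assess(
    "friends",
    {"Close Friends": "friends", "Classmates": "friends of friends"},
    {"Polls": "friends", "Friend Wheel": "everyone"},
)
print(report)  # {'groups': ['Close Friends'], 'apps': ['Polls']}
```

A real PPIA would have to pull these settings live and re-run the assessment every time an app is authorized, since, as the Privacy Mirror experiment showed, effective visibility can change without notice.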

We all have a PPIA built in, one cultivated through social interactions and schooled by social norms. When it comes to dealing with large systems like Facebook, big business, or the government for that matter, we all can use a little help. I wonder if someone can get a PPIA prototype up ahead of Catalyst to at least give me a warning about potentially embarrassing photos being posted somewhere…

(Cross-posted from Burton Group’s Identity Blog.)

Transparent or Translucent?

Last week I was at the Department of Homeland Security’s Government 2.0 Privacy and Best Practices conference. Not surprisingly, the subject of transparency came up again and again. One thing that definitely caught my attention was a comment by one of the panelists that efforts towards government transparency are too often focused on data transparency rather than process transparency. While we have Data.gov as one of the current administration’s steps towards furthering government transparency, we do not have an analogous Process.gov. Said another way: we get the sausage but don’t get to see how it is made. This isn’t transparent government but translucent government.

From what I’ve seen I’d say that enterprises have achieved the opposite kind of translucency with their identity management programs. Though enterprises have achieved some degree of process transparency by suffering through the pains of documenting, engineering, and re-engineering process, they haven’t been able to achieve data transparency. Identity information has yet to become readily available throughout the enterprise in ways that the business can take advantage of. Identity information (such as entitlements) has yet to achieve enterprise master-data status. Worse yet, the quality of identity data still lags behind the quality of identity-related processes in the enterprise.

For those of you attending the Advanced Role Management workshop at Catalyst this year, you’ll hear me and Kevin present the findings from our recent roles research. Throughout our interviews we heard identity teams discuss their struggles with data management and data quality. Finding authoritative sources of information, relying on self-certified entitlement information, and decoding arcane resource codes were just some of the struggles we heard.  No one said that identity data transparency was easy, but without it enterprises can only achieve identity translucency and not true transparency.

(Cross-posted from Burton Group’s Identity Blog.)