The challenge in fixing Facebook’s underlying privacy problems

A few Facebook hacks came across my desk this week. The first is a set of so-called “rogue” applications that do the tediously predictable grab of user information followed by the equally predictable spam-a-palooza. Calling such applications “rogue” is misleading. These didn’t start out okay and turn evil somewhere along the way. These apps were built to cause trouble – they are malware. Facebook has a healthy crop of malware apps, and the number grows every day. You can easily spot affected Facebook users by their status messages – “Sorry for the email – my Facebook got a virus.”

The second hack is of a far more interesting class. Ronen Zilberman, a security researcher, harnessed features of the Facebook platform to make Facebook unwittingly perform a man-in-the-middle attack on itself. Zilberman documents how the attack works in very clear language, and you can even see a video of the attack in action. Why is this a more interesting class of attack on Facebook? First, it doesn’t require an application to be added to the victim’s Facebook profile. Second, and more importantly, this attack fundamentally turns Facebook’s goals against itself.

Facebook’s mission is to “give people the power to share and make the world more open and connected.” Its business is to accomplish this mission before someone else does. This requires that Facebook provide a means to connect as many people, websites, and services as possible, as fast as possible. And in the course of this social-networking land-grab, it is not surprising that we have seen both Facebook malware and the Facebook platform being used to support anti-social behavior. The Facebook platform is optimized to provide frictionless connections and sharing of information. But as exploits for ill purposes increase, Facebook has to act – and act in a manner counter to its mission.

Facebook is currently trying to tackle some of its privacy issues with new privacy settings. The changes to the privacy settings are in beta and are expected to roll out system-wide shortly. I sincerely hope that Facebook simplifies the privacy settings interface while adding more granular controls – though I am not too hopeful this will happen. Furthermore, I am very curious to see whether the changes will improve the situation I discovered with Privacy Mirror – again, not too hopeful. But changes in privacy settings are just patches on the underlying problem: increased privacy controls and platform restrictiveness are antithetical to Facebook’s mission. Until Facebook institutes more control within its platform, we will continue to see more malware and more “interesting” attacks.

To achieve its mission, Facebook has to prove that it is a safe space in which its customers can engage in social behaviors. To accomplish this, Facebook must recognize that its users have relationships with each other and that Facebook itself has a relationship with each of its users. These relationships are governed by social norms; they are not dictated but negotiated through countless social interactions. These relationships, and the rules governing them, must be respected if Facebook is to prove that it is a safe place to make shared information public and keep private information private.

(Cross-posted from Burton Group’s Identity Blog.)

The role of design in protecting cyberspace: thoughts from CFP 2009

Among the sessions at this year’s Computers, Freedom, and Privacy conference was a panel on the recently released national cyber-security review. Ed Felten presented three related areas that he believes must be improved in equal measure to strengthen overall cyber-security:

  1. Product development
  2. System administration
  3. User behavior

But, to me, there was something missing from the list – product design.

Too often I have seen products whose user interface – in fact, whose entire user experience – was constructed after the fact. First the special sauce gets codified, then the chrome is put on and the product gets a face. It is easy to recognize products built this way: they tend to expose their internal data models to users, forcing users to adopt the metaphors of the engineers who built the product in the first place. Such products turn problems internal to the product into problems for the end-user, and that can lead to very bad things; see Three Mile Island as an example. Poor user experience design leads to so-called “user error,” but is it really user error if the end-user is confronted with meaningless alarms, confusing error messages, and misleading feedback?
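To make that concrete, here is a minimal, entirely hypothetical sketch of the same failure surfaced two ways – once in the engineers’ vocabulary and once in the user’s. Every class and function name here is invented for illustration.

```python
class IntegrityError(Exception):
    """Stand-in for a low-level storage failure."""

def show_dialog(message: str) -> None:
    """Stand-in for whatever UI toolkit the product uses."""
    print(f"[dialog] {message}")

def save_account(record: dict) -> None:
    # Imagine this raises deep inside the persistence layer.
    raise IntegrityError("FK_VIOLATION: tbl_usr_acct.col_sts = 0x03")

# Engineering-model UI: the internal data model leaks straight to the user.
def save_exposing_internals(record: dict) -> None:
    try:
        save_account(record)
    except IntegrityError as err:
        show_dialog(str(err))  # "FK_VIOLATION..." means nothing to the end-user

# User-model UI: translate the failure into the user's terms and a next step.
def save_in_user_terms(record: dict) -> None:
    try:
        save_account(record)
    except IntegrityError:
        show_dialog("This account is closed, so your changes can't be saved. "
                    "Reopen the account or contact your administrator.")

save_exposing_internals({"id": 42})  # [dialog] FK_VIOLATION: tbl_usr_acct...
save_in_user_terms({"id": 42})       # [dialog] This account is closed, so...
```

Same failure, same code path underneath; only the second version speaks in the user’s metaphors rather than the engineers’.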

At CFP, I talked to Bruce Schneier about the research that went into Beyond Fear to get a better understanding of the psychology of fear and its relation to security. As you probably know, humans (and other animals too) are fantastically bad at evaluating risk. Optimism bias and other factors cause us to either over- or under-estimate risks. Combine this with the fact that how choices are presented directly influences how choices are made, and you realize the crucial need to build better user experiences for security (frankly, all) products.

“Is everything okay with the mother ship and should we blow up Russia?” This is the question presented to Buckaroo Banzai, and I think I’ve seen a form of it as a dialogue box in Windows. Would it be considered user error if an end-user pressed the “Yes” button and nuked Moscow? Bad design is at best confusing and at worst dangerous.

I did talk to Ed afterwards, and he acknowledged the role of design in product development. As he said, if we attempt to improve only one of the three areas – product development, system administration, or user behavior – we won’t improve cyber-security; we have to improve all three. User experience design, as part of an improved product development process, can directly lead to better, more informed user behavior. Okay, you product managers and designers, make your voices heard – better, safer products through better design!

(Cross-posted from Burton Group’s Identity Blog.)

Privacy Risks Get Real – California Privacy Laws, Octomom, and Kaiser Permanente

No organization wants to be the first to be fined under a new regulation. Unfortunately, that’s exactly where Kaiser Permanente finds itself. After some high-profile cases of unauthorized access to celebrities’ medical records, the California legislature adopted two new privacy laws (SB 541 and AB 211); these regulations were enacted so swiftly that they contained spelling errors. Both went into effect on January 1 of this year. Five months later, Kaiser Permanente has become the first enterprise to be fined under the new regime.

Regulators have levied the maximum fine, $250,000, for the recent incident involving Nadya “Octomom” Suleman. (Kevin commented on this previously.) All in all, 23 individuals looked at Ms. Suleman’s records without authorization; of these, 15 have either been fired or resigned. And although the state regulators have fined Kaiser, they have yet to penalize any of the 23 individuals – which they can do under state law.

As reported in the LA Times, Suleman’s lawyer said:

I think Kaiser handled it professionally. They found out, they terminated the employees, they brought it to our attention. They certainly didn’t try to hide it.

It’s important to note that even though Kaiser acted appropriately, laws like SB 541 are clear-cut: unauthorized access to medical information = fine. Do not pass Go; do not collect $200.

As we’ve said before, privacy risks are real. The fines are increasing. The number of regulations is increasing. Now more than ever is the time to register for this year’s Catalyst conference so you can attend our Privacy Risks Get Real track and learn how to reduce the chance your organization will become the next “first.”

(Cross-posted from Burton Group’s Identity blog.)

Nailing Down the Definition of “Entitlement Management”

Ian Yip’s take on access management versus entitlement management can be partially summed up with this equation:

Entitlement management is simply fine-grained authorisation + XACML

I have four problems with this.

First, definitions that include a specific protocol are worrisome because they can be overly restrictive. For example, if I defined federation as authentication via SAML, people would quickly point out that authentication via WS-Fed is just as viable. So in terms of an industry conversation, we need to make sure that our terms are not too narrow.

Second, I fear that this definition is a reflection of products in the market today and not a statement on what “entitlement management” is meant to do.  Yes, most of today’s products can use XACML. Yes, they facilitate authorization decisions based on a wider context. But who’s to say that these products, and the market as a whole, have reached their final state? Along these lines, I wonder if externalized authorization stores are a required part of an “entitlement management” solution?

Third, there is something missing from the definition – the policy enforcement point. A fine-grained authorization engine provides a policy decision point, but that still leaves the need for an enforcement point. This holds true whether an application has externalized its authorization decisions or not.
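As a minimal, hypothetical sketch of that decision/enforcement split (all names here are invented; a real deployment would likely speak XACML or similar to the decision point):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str
    action: str
    resource: str

class PolicyDecisionPoint:
    """Answers 'is this allowed?'; it evaluates policy but enforces nothing."""
    def decide(self, req: AccessRequest) -> bool:
        # A real PDP would evaluate externalized policies (e.g., XACML);
        # this stub hard-codes a single rule for illustration.
        return req.subject == "alice" and req.action == "read"

class PolicyEnforcementPoint:
    """Sits in the request path and enforces whatever the PDP decides."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def access(self, req: AccessRequest, fetch):
        if not self.pdp.decide(req):
            raise PermissionError(f"{req.subject} may not {req.action} {req.resource}")
        return fetch()

pep = PolicyEnforcementPoint(PolicyDecisionPoint())
print(pep.access(AccessRequest("alice", "read", "doc-42"), lambda: "contents of doc-42"))
```

The point of the sketch is simply that a decision engine on its own does nothing; something in the request path has to ask it and act on the answer.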

Finally, I have a problem with the phrase “entitlement management” (just ask my co-workers). As I have blogged about before, Kevin and I have been in the midst of a large research project focusing on role management. One of the things we have learned from this project is that enterprises do not use the phrase “entitlement management” the same way we do.

A bit of history – three or so years ago, at a Catalyst conference, Burton Group introduced the phrase “entitlement management” to describe the run-time authorization decision process that most of the industry referred to as “fine-grained authorization.” At the time, this seemed about right. Flash forward to this year and our latest research, and we have learned that our definition was too narrow.

The enterprises that we talked to use “entitlement management” to mean:

- Gathering entitlements from target systems (for example, collecting all the AD groups or TopSecret resource codes)
- Reviewing these entitlements to see if they are still valid
- Reviewing the assignment of these entitlements to individuals to see if the assignments are appropriate
- Removing and cleaning up excessive or outdated entitlements
More often than not, we found that our customers used “entitlement management” as a precursor to access certification processes.
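Here is a minimal, purely illustrative sketch of that legwork sense of the term – gather, review, clean up. The data, field names, and the one-year staleness rule are all made up for the example.

```python
from datetime import date, timedelta

# Step 1: gather raw entitlements from target systems (e.g., AD groups).
entitlements = [
    {"user": "jsmith", "system": "AD", "entitlement": "Domain Admins",
     "last_used": date(2008, 1, 15)},
    {"user": "jsmith", "system": "AD", "entitlement": "Sales-Share-RW",
     "last_used": date(2009, 5, 30)},
    {"user": "adoe", "system": "AD", "entitlement": "Domain Admins",
     "last_used": date(2009, 6, 1)},
]

STALE_AFTER = timedelta(days=365)  # arbitrary review threshold
today = date(2009, 6, 15)

# Steps 2-3: review whether each entitlement (and its assignment) still
# looks valid; here "unused for a year" stands in for that judgment.
for e in entitlements:
    e["stale"] = (today - e["last_used"]) > STALE_AFTER

# Step 4: queue excessive or outdated entitlements for removal.
for e in (e for e in entitlements if e["stale"]):
    print(f"flag for removal: {e['user']} / {e['system']} / {e['entitlement']}")
```

In practice the review step is a human process (the access certification work mentioned above), but the gather/flag/remove shape is the same.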

Using a single term (“entitlement management”) to span both run-time authorization decisions and the necessary legwork of gathering, interpreting, and cleansing entitlements can lead to confusion. The way enterprise customers currently use “entitlement management” works well to describe legwork that is vital to the success of other identity projects. (I’ll be working on a report this quarter that delves deeper into this.)

I am all for a broader conversation on fine-grained authZ versus entitlement management. And as Ian Yip has pointed out on Twitter, identity blog conversations have dropped off a bit, and I’d love to stoke the fire. But we can’t have meaningful conversations without shared definitions. So what’s your take? What do you mean when you say “fine-grained authorization” and “entitlement management”?

(Cross-posted from Burton Group’s Identity blog.)