The Cloud Identity Summit is underway here in New Orleans, and it’s off to a great start. The organizers have done a wonderful job again, and with so much great content, the hardest thing is choosing which of the many interesting talks to go to.
My talk is already done (it’s oddly liberating to not be obsessing over my deck), and I’ve been blown away by the positive response I’ve gotten. And I’m not referring to the usual reaction to the inside jokes baked into the photoshops supporting my deck. I’ve had a few people tell me they really enjoyed my talk while passing me in the halls or at the dinner line. Even Bob said that he really liked it, and that means a lot.
Now I haven’t reached Glazerian levels of simulcast-iness where I can tweet comments on my talk while on stage, and immediately publish the text when I step off it. But I’m really proud of this talk because of what I conceived of while going through the process of putting it together. My initial abstract was built around some simple but important points I wanted to make about Identity at the nexus of Security and Usability. As I put the talk together, a lot of the disparate ideas and concepts I’ve had in my head sort of coalesced into what I ended up calling the 4 Core Principles of Invisible Identity. And instead of waiting for the video to be published, I figured I’d blog about it to generate some feedback and comments, especially since live tweeting was down (beyond the snark).
What is Invisible Identity?
Invisible Identity is an architectural and functional imperative to make identity simply disappear from people’s sight, moving instead into the background as a silent protector and enabler. No more in-your-face interrupts, challenges, and form-after-form-after-form. It relies on passive capabilities like biometric and behavioral authentication, rules-based provisioning, and more. But figuring out which technologies to use, and how to use them, needs to be more science than art, and looking at the successful implementations out there led me to realize that there are 4 core principles that every organization, small or large, can apply to create its own invisible identity approach.
The 4 Core Principles of Invisible Identity
Following these principles helps organizations ensure that their identity-based security solution never loses focus on the symbiotic partnership between security and usability – whether they are starting with the most basic technologies and growing over time, or revamping and evolving a large existing infrastructure.
Context

In the identity community, we all understand Context to essentially mean everything we can know about the identity of a person/thing – their static attributes, the dynamic environmental information about them (like device information, geolocation, or how they chose to authenticate), and historical information about what they did and when (a lot of this is being combined into what is being called end-user behavior analytics).
But I’m going to propose that we slightly alter the definition of context from being about the identity to being about the transaction. So in addition to all that information about the identity, it should include information about the transaction itself – its nature, frequency, risk analysis, and impact. It should also include information about the relationship between the identity actor and the transaction – for example, is this a repeat transaction, an occasional one, or a one-off, and how does that fit the actor’s overall pattern of behavior.
This changes context from being something normally relegated to the moment of first authentication to something that ebbs and flows and permeates every interaction between the person and the service.
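To make this concrete, here is a minimal sketch of what a transaction-aware context might look like – pairing identity attributes with attributes of the transaction itself, so risk can be evaluated at every interaction rather than only at first authentication. All field names and scoring rules here are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class IdentityContext:
    """Who the actor is and how they showed up (hypothetical fields)."""
    user_id: str
    device_id: str
    geolocation: str
    auth_method: str       # e.g. "password", "biometric", "push"

@dataclass
class TransactionContext:
    """Attributes of the transaction itself, not just the actor."""
    kind: str              # e.g. "transfer", "profile_update"
    amount: float
    is_repeat: bool        # has this actor done this transaction before?

def risk_score(identity: IdentityContext, txn: TransactionContext) -> int:
    """Toy scoring: combine identity and transaction signals into one number."""
    score = 0
    if txn.amount > 1000:
        score += 2         # high-impact transactions carry more weight
    if not txn.is_repeat:
        score += 1         # one-off transactions carry more uncertainty
    if identity.auth_method == "password":
        score += 1         # weaker initial authentication
    return score
```

The point of the sketch is simply that the scoring function takes both objects: the same person doing a familiar, low-value transaction scores lower than that person doing a novel, high-value one.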
Adaptive

And that goes hand in hand with being adaptive. Because once you understand that context is constantly changing, you can create security that adjusts to the demands of the situation and is right-sized instead of onerous. It’s how you end up incorporating progressive profiling into your system, collecting data only as required and even discarding it when no longer needed. It’s how JIT provisioning becomes a mandate, reducing data sprawl and minimizing the risk exposure of both the people and the enterprise. It’s how you can do step-up authentication when the risk of the situation demands it, choosing the right kind of mechanism as justified by the particular risk – like a PIN vs. a biometric.
Being adaptive forces you to be multi-factor and omni-channel, but it also keeps you from demanding all the factors all the time – or verifying only once. And it forces you to think through your failure conditions and create backup or alternative flows (for example, switching to a voice-based system when interacting with a person who doesn’t have a smartphone).
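As a rough illustration of this principle, a challenge selector might map a risk level to the lightest sufficient mechanism, with an explicit fallback path for people without smartphones. The function name, thresholds, and mechanism labels below are all invented for the sketch, not a prescription:

```python
def choose_challenge(risk: int, has_smartphone: bool) -> str:
    """Pick the lightest challenge that matches the risk level.

    Hypothetical policy: stay invisible at low risk, step up as risk
    grows, and fall back when the preferred channel isn't available.
    """
    if risk <= 1:
        return "none"            # low risk: no interrupt at all
    if risk <= 3:
        return "pin"             # moderate risk: lightweight step-up
    # high risk: prefer a biometric via the phone; if the person has
    # no smartphone, switch to the voice-based alternative flow
    return "biometric" if has_smartphone else "voice_callback"
```

Note that the failure condition (no smartphone) is designed in from the start rather than bolted on – that is the part the principle insists on.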
Calm Technology

Calm Technology is something many of you may not be familiar with. We’re all familiar with the notion that good design lets people accomplish their goals in the fewest moves. Calm technology lets them do the same with the least amount of their attention. It’s a User Experience principle for technology that gets out of the way and lets the person do what they were trying to do.
Consider the example of making a purchase with a wearable like the Apple Watch, and imagine that the wearer’s authenticity is communicated by a band incorporating a contact biometric like heartbeat (as offered in the Nymi band) instead of having to enter a PIN. There are myriad ways we can tap into (terrible pun intended) people’s other senses and technologies, using mechanisms such as haptic feedback or biometrics to layer extra security into activities without introducing more friction.
Respect the User
And last, but certainly not least, Respect the User. As someone who has consciously tried to drop the term “user” and switch to “person,” I use it here to make a point. Too often we forget that we’re dealing with humans (in product management specs, we use “actor” and “user stories,” which tends to dehumanize them from the very beginning).
We should treat them as partners in the security process. After all, most of the time they’re just trying to do something you want them to do. So making them endure an inordinately painful flow and taking an adversarial approach to them is inherently counterproductive. But we’ve been conditioned to think that way. Understand that it isn’t that users don’t want security. It’s that they have an instinctive way of mapping the level of security controls they must endure to the level of risk they perceive in what they are doing, and they will naturally reject a mismatch. Understanding that human factor is key.
And your biggest allies are transparency and choice. People will make tradeoffs and even accept higher levels of scrutiny/friction at the right points, IF they understand the benefits and, more importantly, IF they understand their protections. The biggest challenge with Invisible Identity is battling the so-called creepiness factor. It’s why employees refuse to install MDM apps, and why many consumers prefer to create yet-another-account instead of using social login. And you have to understand the line between delight and overreach, because nothing will lose you a person’s trust faster than an unexpected outcome or unpleasant surprise, even if it is to their benefit. This is why security and user experience need to have equal standing at the design table.
Hope this made sense. I’d love to hear your feedback, so please leave comments here or on Twitter.
In my next post stemming from my CISNOLA talk, I’ll touch on the topic of Privacy in the world of Invisible Identity.