Monday, November 26, 2012

Not drawn to scale #BYOD

The case for & against embedded browsers for native client authentication

Box makes available a very slick desktop sync agent. In addition to a password based authentication mechanism (relevant to consumers), the agent also supports SSO for enterprise customer employees. 

Disclaimer - Box uses Ping software to make the SSO happen.

Once installed, I need to connect it to my enterprise identity. 

Below is the login UI.


On clicking the 'Use Single Sign On' link, the screen toggles to eliminate the password form field. (And every password form field eliminated means that both a) a kitten is saved and b) the terrorists lose.)




When I click 'Continue' above, the Box agent opens a browser window, loading a login page hosted by Ping (my enterprise). As soon as I authenticate to this page, Ping sends a SAML assertion attesting to my identity back to Box, which accepts it as proof of my identity in place of a password. Based on this authentication, Box sends a (different) token down to the agent, which is then used on API calls against the Box endpoints.

But, because the browser in which I am asked to authenticate to Ping is embedded within the agent (ie it's not my default desktop browser), my Ping AD password (which I've stored in the default browser) is not available.

So I get an empty login form to fill.



My AD password is an incredibly long, complex, and random string Password1 and is not one I have remembered. Consequently, to authenticate in this window, I have to go to that default browser and manually retrieve my Ping password from its store. Not a huge effort, and one that happens only once (or at most infrequently) but still less than optimal.

If the Box agent had instead opened the login page in my default browser, the sequence would have been a bit smoother. The downside (and the reason why I expect Box chose the model they did) is that the parameter passing between the default browser and the Box agent (necessary to deliver the second token to that agent for use on API calls) is more complicated than when the browser is embedded. For mobile OSs, use of a custom scheme URL gets you around this. Is there nothing comparable on desktop OSs?
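One desktop possibility (and this is only a sketch of the general technique, not a claim about how Box or Ping do or should implement it) is for the agent to open the login page in the default browser and recover the resulting token via a short-lived listener on the localhost loopback - roughly the desktop analogue of the mobile custom scheme trick. All URLs and parameter names below are made up.

```python
# Sketch only: a desktop agent hands authentication off to the system browser
# (where stored passwords and existing sessions live) and recovers the token
# via a one-shot loopback HTTP listener. URLs and parameter names are made up.
import threading
import urllib.parse
import webbrowser
from http.server import BaseHTTPRequestHandler, HTTPServer

CALLBACK_PORT = 8400          # arbitrary local port the agent listens on
result = {}

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. http://localhost:8400/callback?token=...
        query = urllib.parse.urlparse(self.path).query
        result['token'] = urllib.parse.parse_qs(query).get('token', [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Authenticated - you can close this tab and return to the agent.")

def wait_for_callback():
    # serve exactly one request (the redirect back from the login page), then return
    HTTPServer(('localhost', CALLBACK_PORT), CallbackHandler).handle_request()

listener = threading.Thread(target=wait_for_callback)
listener.start()

# Open the SSO login page in the user's *default* browser
redirect = urllib.parse.quote(f'http://localhost:{CALLBACK_PORT}/callback')
webbrowser.open(f'https://sso.example.com/login?redirect_uri={redirect}')

listener.join()
print('Token delivered to agent:', result.get('token'))
```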

Saturday, November 24, 2012

Use case - Account linking

Existing customer links their Facebook identity to their account - allowing for personalization, social marketing, and maybe login (with other authn factors?)


Use case - sitting on couch

Ol skool - the family watching TV in the living room. Entitlements are based purely on the subscription.


Friday, November 23, 2012

Use case - social 'introduction'

In this scenario, the telco ascribes some initial set of entitlements (view a customized TV lineup based on user profile & location) to a potential customer based on an identity held by some social provider like Facebook etc.

The likely goal for the telco is to convert this potential customer into a real (paying) one.


Use case - TV Everywhere

Alternatively, a customer of a different telco viewing content hosted by *this* telco based on their subscription at the first - this is the TV Everywhere premise. Note that the user has entitlements (as specified by the asserting telco) but no 'account'.



Use case - Road Warrior

On top of the basic model, we can overlay particular use cases.

For instance, shown here is a telco customer accessing their subscribed TV channels from a hotel room whilst on a business trip.


Basic telco identity model


Users with identities (telco issued or not) use devices (telco bound or not) on networks (telco owned or not) to access applications (telco hosted or not).


Monday, November 19, 2012

An Identity standards-based Model for Mobile Application Security


What are the identity requirements for securing employees' access to business applications on mobile devices? What identity standards are relevant?

I’ll present a model for the above in this and subsequent posts.

At its most basic, the model proposes using 3 identity standards (SAML, OAuth, & SCIM) between the enterprise and the MAM (Mobile Application Management) & SaaS Providers – thereby allowing the enterprise to extend its purview out to the MAM (and subsequently the SaaS) Clouds.

Caveat -  today’s MAM solutions do not (AFAIK) follow this model. But there are indications (from discussions we at Ping Identity are having with them) that some providers see the value.

Actors


First, who are the actors?
  •  Enterprise – wants to control employee access to relevant business applications & corresponding data. Does NOT want the corresponding security controls to overly interfere with employees' ability to do work. The enterprise holds the authoritative identity for a given employee in AD or equivalent.
  • MAM Provider - enforces enterprise security policy with respect to how employees use devices to access business applications. Comprised of
    • Cloud endpoints, behind which sits the MAM policy (and maybe business apps) to be pushed down to the device-installed agent
    • Device-installed agent, which enforces the MAM policy and so keeps business data on the device protected & sandboxed
  • SaaS Applications – offer up some service to enterprise employees. Comprised of
    • Cloud endpoints, behind which sits the application data – accessed either via a web interface or via a native application. (Note: although labelled here as 'cloud', the principle is the same for on-prem endpoints, ie for custom enterprise native apps interacting with on-prem endpoints.)
    • Native versions, OS-native applications that interact with the corresponding cloud endpoints to pull down application data for local display & manipulation
    • Web versions, accessed via the device browser (while the relative importance of web & native may change in the near future with HTML5, both are likely to co-exist, so our model must support both)
Graphically, see below. The native applications & MAM agent (as well as the browser) are installed on the device; they interact with their corresponding server endpoints – these interactions under the control of the enterprise via the access control, encryption, etc mechanisms provided by the MAM solution.


The fundamental interaction is between the native applications (labelled as 'SaaS1' and 'SaaS2') and their corresponding clouds. The challenge is to ensure that the employee is able to use those applications in order to do their job and that those interactions are secure (authenticated, confidential, etc). The MAM agent on the device, driven by policy from its own corresponding Cloud, will enforce security policies.

Goals


At the highest level, I believe the enterprise has the following goals for mobile - it wants to ensure that
  1. Valid employees can access the applications (and associated data) relevant to their role from their device and so be productive & content.
  2. Nobody else (whether family, colleagues borrowing the phone, the well-meaning stranger who finds the phone in a cab, or a malicious hacker) can access those applications (and associated data) if they get their hands on the device.
  3. The controls of #2 do not inappropriately impact the employee's personal applications & data on the device, ie they respect the employee's privacy.
Note: I contend that the above (even the last) are valid whether the device is owned by the enterprise (a COPE model) or the employee (BYOD).  Consequently, ownership is mostly a red herring (or at least an orthogonal issue) when thinking about securing mobile applications.

The first goal above is satisfied by giving employees convenient access into both the:

1.     Web version of the business applications
2.     Native versions of the business applications

Of course, ultimately the enterprise cares less about employee convenience than about productivity. But there is a correlation between the two – if you make it convenient for employees to access business applications, then you don't prevent them from using those applications to sell widgets, review iPads, create Venn diagrams, or whatever it is they do for you.

The second goal can be satisfied by ensuring that the above access to applications & data, while convenient, is also secure, ie
  1. only valid employees can access applications & data
  2. valid employees can only access applications relevant to their enterprise role
  3. when a valid employee becomes an invalid ex-employee, their access is terminated
  4. any data delivered down to the device of a valid employee is protected both in transit and in device storage
  5. when a valid employee becomes an invalid ex-employee, any data stored on device is removed (or equivalently made inaccessible)
           
The third goal above can be satisfied by isolating the employee's personal applications & data from the security & policy controls of the enterprise – fundamentally, to leave alone anything that is not under the enterprise's legitimate authority – like Angry Birds high scores and wedding pics. Whether BYOD or COPE, it behooves the enterprise to respect the employee's privacy.

In the next post, I'll show how the enterprise can use identity standards like SCIM, OAuth, & SAML to meet the above requirements of convenience, security & privacy – specifically how the enterprise can
  1. Manage (create, edit, delete etc) identities for its employees at both the MAM & various SaaS providers so that
  2. Those employees can access the SaaS applications relevant to their job, and do so in a manner compliant with enterprise policy (as enforced by the MAM) and importantly
  3. Not require the employee be issued (and so be forced to remember) passwords for all these various services, but instead enable access to applications via Single Sign On for both web & native applications.
Tying back to the diagram, as drawn above there is significant 'whitespace' between the enterprise & the MAM (where IT wants to define security policies) and the SaaS clouds (where employees need to be able to access application functionality). Employees have an identity in the on-prem AD, but the stuff they need to do is out in the cloud. How do you bridge that gap? I'll show how in the next post.
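As a rough preview of the plumbing, here is a sketch (all endpoints, scopes and the assertion itself are placeholders, and Python is just standing in for whatever the native app or MAM agent is actually written in) of how SAML and OAuth can combine: the app presents an enterprise-issued SAML assertion to the SaaS token endpoint, gets back an OAuth access token, and uses that token on API calls - no per-service password required.

```python
# Sketch of a SAML bearer assertion being traded for an OAuth access token at
# a SaaS provider's token endpoint, which is then used on an API call.
# Endpoints, scope, and the assertion are placeholders.
import base64
import requests

saml_assertion = b"<saml:Assertion>...issued by the enterprise IdP...</saml:Assertion>"

token_response = requests.post(
    "https://saas.example.com/as/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
        "assertion": base64.urlsafe_b64encode(saml_assertion),
        "scope": "documents.read",
    },
)
access_token = token_response.json()["access_token"]

# The native app calls the SaaS API with that token - the employee was never
# issued (and so never has to remember) a password for this provider.
api_response = requests.get(
    "https://saas.example.com/api/documents",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_response.status_code)
```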

Wednesday, September 05, 2012

All BYOD threats are NOT created equal

I hold this truth to be self-evident (but I'll argue it anyway).

You can classify threats to business data on mobile devices (whether BYOD or not) depending on whether
  1. the employee initiates the process by which business data is put at risk 
  2. there is malice involved in the above process, ie an active 'attack' against the data as opposed to inadvertent disclosure
Below is a taxonomy, with representative threats


I will contend that the measures that IT should consider to stop/control each of the above categories may be different (though there will certainly be overlap).

For instance, to mitigate the risk of a well-meaning but naive employee moving corporate data onto a cloud service provider to help themselves 'get things done', IT need not immediately start thinking about encryption, keys, and containers. Arguably simpler would be for the enterprise to


  1. make sure employees are aware of corporate policy about such 3rd party applications or
  2. prevent the employee from installing the 3rd party native app (perhaps hard to reconcile with BYOD) or
  3. actually subscribe to a cloud storage service provider (hopefully chosen based on discussions with the oh so demanding CoIT-aware employees), and so bring this BYOC scenario back into IT's domain of control.

Similarly, while Lyle and Mary in the above may both be acting maliciously - it's clear that stopping Lyle is a different proposition (by removing him from AD, revoking any extant tokens, etc) than slowing down Mary (by turning off phone features like camera & screen shot, by making her data access dependent on roles, monitoring access and watching for patterns, etc).

You can also categorize the security protections IT might apply. At a really high-level, IT can

  1. stop business data getting onto the device (e.g. by ensuring only authorized employees can access and download, or never serving up actual data but rather only pixels, etc)
  2. once data is on device, prevent inappropriate viewing (by having a PIN on the device)
  3. once data is on device, prevent inappropriate sharing (via encryption, disabling screen shot, etc)
  4. once data is on device, prevent it from inappropriately leaving device (by preventing installation of 3rd party storage provider native apps)
These different types of protection guard against different categories of threats shown in the diagram. 

For instance, a PIN may not provide much protection against a determined and malicious attack (and not at all against Mary's dreams of sun) but it will surely help protect against the employee's daughter coming across sensitive product strategy during a chat session with her friend Brittany.






Friday, August 31, 2012

(High-level) Consideration for OAuth token lifetimes


When choosing lifetimes for OAuth access & refresh tokens, the following three considerations should/may/might factor in:
  1. Application sensitivity (the risk of application data being compromised) - the more risk associated with an application, the shorter should be the token lifetimes (all else being equal)
  2. Average application usage frequency (how much time passes between separate application sessions) - if an app gets used only once a month, the appropriate refresh token lifetime will be different than for an app that gets used daily (all else being equal)
  3. Average application session duration (how long a user typically interacts with the native app) - if an app is used only briefly, the appropriate access token lifetime may be different than for an app that is used continuously all day (all else being equal)

If the above considerations are in conflict, risk is likely the more important consideration. In other words:
Tie goes to the (risk) runner 

Overly simplistic Claim #1 - a refresh token only serves a purpose if its lifetime is longer than the average time period between application usages

Apps that are used only infrequently therefore demand longer RT lifetimes - because otherwise the RT would have expired before it was ever used.

But this consideration is overridden by application sensitivity/risk

So, a taxonomy if you will
  • Low sensitivity & infrequently used ---> Longish RT lifetime (both push lifetime up)
  • Low sensitivity & frequently used    ---> Longish RT lifetime (opposing forces)
  • High sensitivity & infrequently used ---> No RT (opposing forces, risk wins)
  • High sensitivity & frequently used    ---> Shortish RT lifetime (both pull lifetime down)

Overly simplistic Claim #2 - an access token lifetime should not exceed the average length of time for an app's usage session.


But this consideration will also be overridden by application sensitivity/risk

So, another taxonomy

  • Low sensitivity & short usage  ---> Shortish AT lifetime (opposing forces)
  • Low sensitivity & long usage    ---> Longish AT lifetime (both push lifetime up)
  • High sensitivity & short usage  ---> Shortish AT lifetime (both pull lifetime down)
  • High sensitivity & long usage    ---> Shortish AT lifetime (opposing forces, risk wins)

Depending on where your application sits in the 3D array defined by sensitivity, session length, and usage frequency - the above might help you choose values for the access & refresh token lifetimes. 

For instance, for a low sensitivity app used infrequently for only short times, you could consider
  1. short access token lifetime (because session length suggests so and risk doesn't overrule)
  2. long refresh token lifetime (because frequency suggests so and risk doesn't overrule)
etc etc
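For what it's worth, the two taxonomies collapse into a very small lookup. The durations below are placeholders rather than recommendations - the only point is the shape of the trade-off (on a conflict, tie goes to the risk runner).

```python
# Very rough sketch of the two taxonomies above as a lookup. The durations are
# placeholders, not recommendations - the point is only the shape of the
# trade-off (risk trumps convenience whenever the two conflict).
from datetime import timedelta

def refresh_token_lifetime(high_sensitivity: bool, used_frequently: bool):
    if high_sensitivity and not used_frequently:
        return None                      # no RT at all: risk wins
    if high_sensitivity and used_frequently:
        return timedelta(days=1)         # shortish: both pull lifetime down
    return timedelta(days=30)            # low sensitivity: longish either way

def access_token_lifetime(high_sensitivity: bool, long_sessions: bool):
    if not high_sensitivity and long_sessions:
        return timedelta(hours=8)        # longish: both push lifetime up
    return timedelta(minutes=15)         # shortish in every other cell

# e.g. the low-sensitivity app used infrequently for only short sessions
print(refresh_token_lifetime(high_sensitivity=False, used_frequently=False))  # 30 days
print(access_token_lifetime(high_sensitivity=False, long_sessions=False))     # 15 minutes
```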

Wednesday, August 29, 2012

MIM == MKM (Mobile Key Management)?

In a great post on the relevance of identity to MIM (Mobile Information Management), Brian Katz proposes a model for MIM that I believe looks roughly like
  1. Native app obtains a security token (via combination of SAML & OAuth)
  2. Native app uses that token on API calls (as per OAuth)
  3. API validates token and determines user identity
  4. API makes authorization decision about granting request (based on user roles & application type etc)
  5. API attaches appropriate (enterprise-defined) MIM policy to returned data
  6. Native app respects MIM policy
Brian writes
The advantage of MIM, is that data can now be passed from any one app on a device to another app on the device that can read the policy and follows the policy. Any app that does not respect the policy won’t be able to read the data in the first place (yes the data is encrypted). 

This sort of sharing has pretty serious implications for the crypto & key capabilities of the individual applications - and the architecture.

Consider two applications App1 and App2 on the device that need to share a given bit of enterprise data for some type of mashup.

When the API first releases the data & policy to App1, it can encrypt the data for App1 using the public key (ignoring the subtlety of how a symmetric key is used under the covers) associated with a private key that App1 can access. As App1 has the private key, it can decrypt the data and use it. Also, because presumably the call between the application and the API was protected by TLS, the application can be confident that the provided policy was valid and not injected by an attacker - and so will respect its stipulations.
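That glossed-over subtlety is typically hybrid encryption: the API encrypts the payload with a fresh symmetric key and then wraps that key with App1's public key. A minimal sketch, not any particular MIM vendor's scheme:

```python
# Minimal sketch of the hybrid encryption glossed over above: the API encrypts
# the payload with a fresh symmetric key, then wraps that key with App1's
# public RSA key. Illustrative only - not any particular MIM product's scheme.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# App1's key pair; in practice the private key lives in the OS key store
app1_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
app1_public_key = app1_private_key.public_key()

data = b"quarterly sales figures"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# --- API side: encrypt the data, wrap the symmetric key for App1 ---
symmetric_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(symmetric_key).encrypt(nonce, data, None)
wrapped_key = app1_public_key.encrypt(symmetric_key, oaep)

# --- App1 side: unwrap the key with its private key, decrypt the data ---
unwrapped_key = app1_private_key.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped_key).decrypt(nonce, ciphertext, None) == data
```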

As Brian highlights, the data is protected both in transit (by the combination of TLS & data encryption) and at rest (by the data encryption).

Subsequent sharing of the data between App1 & App2 introduces new twists.

If App1 hasn't changed the data, then it could simply hand on to App2 the original encrypted data returned by the API. This of course presumes that App2 can decrypt it, implying that App2 must have access to the same private/secret key that App1 does. In this model, the ultimate guarantee of protection for 'data at rest' is the degree of protection for the private key(s), hidden away in whatever key store the OS provides.

If App1 has its own distinct key pair (as will surely be necessary if MIM is to support different apps having different data access rights), or if App1 changed the data after receiving it from the API (and it's a pretty poor mashup that doesn't expect this to happen), then the original encryption applied by the API is now useless - App1 itself needs to encrypt the data for App2. This implies that App1 is able to discover App2's public key, and knows how to encrypt the data before sending it on. As the API did originally, App1 encrypts the data, attaches the MIM policy, and sends it over to App2.

But while the encryption applied by App1 ensures that some other app (App3, say) can't read the data - it does nothing to help App2 trust the MIM policy that accompanies the newly encrypted data. Unlike when App1 originally received the MIM policy directives over a TLS-protected channel with the trusted API (and so could trust their origin & validity), App2 receives the new data+policy package from App1. So why should it trust & respect the policy? As far as App2 is concerned, App1 (or an attacker) could have modified the policy.

For App2 to trust (and be willing to respect) the MIM policy attached to the changed data - it needs to know that the policy came from the enterprise. The surest way to support this requirement is for the enterprise to digitally sign the policy (using its own private key). If a MIM policy statement is signed by a private key belonging to the enterprise, then App2 can use the corresponding public key to validate the signature, and so trust the policy. But, it's not enough for the enterprise to sign a policy statement in isolation. If so, an attacker could trivially switch a (valid) strict policy statement for a (valid) lax policy statement for nefarious purposes. It is the combination of data + policy that must be signed - only then will App2 know that
  1. this policy came from the enterprise
  2. the enterprise wants this policy applied to this data
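In other words, something like the following sketch (the packaging format and field names are hypothetical - in practice it might be a JWS or something proprietary): the enterprise signs the data and policy as a single package, and App2 verifies that signature with the enterprise's public key before honouring the policy.

```python
# Hypothetical packaging: the enterprise signs data + policy together, and
# App2 verifies the signature with the enterprise's public key before
# honouring the policy. Format and field names are made up.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

enterprise_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
enterprise_public_key = enterprise_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

package = json.dumps({
    "data": "quarterly sales figures",
    "policy": {"share_with": ["App1", "App2"], "offline_viewing": False},
}, sort_keys=True).encode()

# Enterprise signs the combined data + policy
signature = enterprise_key.sign(package, pss, hashes.SHA256())

# App2, receiving the package from App1, verifies before respecting the policy
try:
    enterprise_public_key.verify(signature, package, pss, hashes.SHA256())
    print("Policy came from the enterprise and is bound to this data")
except InvalidSignature:
    print("Reject: data or policy was modified somewhere along the way")
```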
But, even if the enterprise applied a signature to the original combination of data and policy that was returned to App1 - if the data was changed before being sent over to App2 - the original signature will no longer be valid.

Consequently, App1 needs to apply a new signature. But this requires that App2 trust App1.....

Thus the title of this post - MIM done with inter-app sharing as described above implies a complex key management architecture - MIM ultimately boils down to MKM.

The complexity of all of the above MKM (and the implications of inter-app trust) is what made me propose to Brian on Twitter that, instead, the apps do not directly share data but rather all data flows to/from the API - and it is with the API that individual apps maintain their trust (and get data).

An admitted downside of this alternative model is that, while data can still remain protected at rest - no inter-app sharing is possible when not connected to the API, eg on a plane.

Wednesday, June 06, 2012

Redefining the application perimeter

For browser-based applications, it's easy to deal with those employees who 'have decided to pursue other career opportunities' - the enterprise either stops issuing SSO assertions to those applications or actively de-provisions that employee at the application providers (either on-prem or SaaS). For browser-based apps then, it's relatively easy for the enterprise to Turn Off Access (TOA) to the application.

If you were to draw a circle around the 'application', you'd draw it solely around the application server because that's where the data sits.


TOA to mobile native applications is more complicated - at least if the native applications have pulled data from the server and stored it locally. If you were to draw a line around the 'application' then you would need to include the corner of the device where that application stored its data.



For local-storage native applications, TOA requires the enterprise to

  1. delete any data on the device
  2. prevent the native application from downloading more data (via API calls)
The first requirement implies some MDM type functionality or agent on the device (equivalent to deleting the data would be deleting the keys used to encrypt the data).

The second requirement can be met by either
  1. deleting from the device the security tokens that the native application had been issued to authenticate its API calls (again implying MDM type functionality) or
  2. leaving the OAuth tokens on the device but revoking/canceling them at the server (so they can no longer be used on API calls) or
  3. ignoring the tokens and deprovisioning the employee account at the application provider so that, even if the application presents a valid token on an API call, the authorization will fail
Given you need some level of MDM type functionality to remove any application data, it seems logical to depend on the same to delete the tokens, ie #1 above. But that presumes the tokens are available to the MDM agent. Perhaps this depends on where the tokens are stored, eg keyStore or not?

Simply revoking the OAuth tokens, as in #2), rather than worrying about deleting them from the device is attractive because the operation can be performed solely on the server and not the device. But revocation presumes either that a) the application providers will, on receiving an API call with a token attached, call back to the enterprise to validate that token (as necessary when the tokens are simply pointers and not self-contained objects) or b) the application supports some sort of 'revocation endpoint' at which the enterprise can actively push 'kill token' messages (for which there is currently no standard).

SCIM provides a protocol by which #3 can occur. But active de-provisioning does presume that the application provider supports SCIM (or a proprietary equivalent). Even if the tokens are deleted or revoked, the enterprise will likely want to deprovision the employee account if possible to ensure they don't continue to pay for it.
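For the record, a sketch of what #3 might look like on the wire - the SCIM base URL, credentials, and userName below are all placeholders:

```python
# Sketch of option #3: the enterprise deprovisions the ex-employee's account
# at the application provider over SCIM. Base URL, credentials, and userName
# are placeholders.
import requests

SCIM_BASE = "https://app.example.com/scim/v1"
headers = {"Authorization": "Bearer <enterprise-to-provider token>"}

# Find the account by the enterprise-assigned userName...
lookup = requests.get(
    f"{SCIM_BASE}/Users",
    params={"filter": 'userName eq "bob@enterprise.com"'},
    headers=headers,
)
user_id = lookup.json()["Resources"][0]["id"]

# ...and delete it. Any token the native app still holds is now useless,
# since authorization at the provider will fail without an account.
requests.delete(f"{SCIM_BASE}/Users/{user_id}", headers=headers)
```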

Local-storage native applications, whether used on BYOD or corporate owned devices, change the application perimeter, and so change the requirements for dealing with ex's. 




Thursday, May 10, 2012

Over-simplified graphical representation of OpenID Connect

The OAuth 2.0 authz code grant type defines how to use the browser to get an access token (blue) from the AS to the Client. The OAuth bearer spec defines how to then use that token on API calls to arbitrary endpoints.


OpenID Connect layers new pieces on top - the new ID_token and the UserInfo endpoint (both in orange). As before, the client (normally) leverages the browser as the means to obtain tokens. 

The Client consumes the ID_token and creates a session based on it. The Client uses the access token to call both the UserInfo and other API endpoints.
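In (very) rough code, the Client side of the picture looks something like the sketch below; all endpoints, client credentials, and the authorization code are placeholders.

```python
# Rough sketch of the Client's side of the picture above; all endpoints,
# client credentials, and the authorization code are placeholders.
import requests

# 1. OAuth 2.0 authz code grant: the browser is redirected to the AS, the user
#    authenticates, and the Client gets back an authorization code.
code = "<code returned via browser redirect>"

# 2. The Client swaps the code for tokens at the AS token endpoint. With
#    OpenID Connect layered on top, the response carries an id_token as well
#    as the access token.
tokens = requests.post(
    "https://as.example.com/token",
    data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://client.example.com/cb",
        "client_id": "my_client",
        "client_secret": "my_secret",
    },
).json()

id_token = tokens["id_token"]        # consumed by the Client to establish a session
access_token = tokens["access_token"]

# 3. The access token is used against the UserInfo endpoint (and any other API).
userinfo = requests.get(
    "https://as.example.com/userinfo",
    headers={"Authorization": f"Bearer {access_token}"},
).json()
print(userinfo.get("sub"))
```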


Wednesday, May 02, 2012

Paul Madsen continues with Ping Identity’s Office of the CTO


Identity Management Expert Paul Madsen continues with Ping Identity’s Office of the CTO
Respected Identity Advocate to Help Develop and Evangelize Next Generation of Standards Including OpenID Connect and OAuth
Ping Identity®, The Cloud Identity Security Leader™, today announced that Paul Madsen will remain in the company’s Office of the CTO as senior technical architect. In this role, he will continue to develop and evangelize the next generation of identity standards, including OpenID Connect and OAuth.
“An active and well-respected member of the Identity community, Paul brings an in-depth understanding of interoperability and open standards to our team,” said Patrick Harding, CTO of Ping Identity. “This expertise directly aligns with Ping Identity’s standards-based approach to solving complex identity management challenges and makes him a natural fit for our expanding team.”

Thursday, April 26, 2012

A taxonomy of confusion

Axel Nennker pointed out on Twitter an OpenID implementation between Amazon & MyHabit.com.

A screenshot of the login page

My first reaction was that this was an example of the password anti-pattern, ie the user is being asked by MyHabit.com to present their Amazon credentials. 

Axel pointed out to me that this was actually an Amazon page and not a MyHabit page, but branded to look like a MyHabit page - in Axel's words

So it's not password anti-pattern, because MyHabit never sees the user's credentials.

But it is, to my mind, misleading, because the MyHabit-branded login page may make it feel to a user like they are presenting their Amazon password to MyHabit.

It's not the password anti-pattern, it's the 'anti password anti-pattern'.

A taxonomy is called for