Friday, August 31, 2012

(High-level) Considerations for OAuth token lifetimes

When choosing lifetimes for OAuth access & refresh tokens, the following three considerations may factor in:
  1. Application sensitivity (the risk of application data being compromised) - the more risk associated with an application, the shorter the token lifetimes should be (all else being equal)
  2. Average application usage frequency (how much time passes between separate application sessions) - if an app gets used only once a month, the appropriate refresh token lifetime will be different than for an app that gets used daily (all else being equal)
  3. Average application session duration (how long a user typically interacts with the native app) - if an app is used only briefly, the appropriate access token lifetime may be different than for an app that is used continuously all day (all else being equal)

If the above considerations are in conflict, risk is likely the more important consideration. In other words:
Tie goes to the (risk) runner 

Overly simplistic Claim #1 - a refresh token only serves a purpose if its lifetime is longer than the average time period between application usages

Apps that are used only infrequently therefore demand longer RT lifetimes - because otherwise the RT would have expired before it was ever used.

But this consideration is overridden by application sensitivity/risk

So, a taxonomy if you will
  • Low sensitivity & infrequently used ---> Longish RT lifetime (both push lifetime up)
  • Low sensitivity & frequently used    ---> Longish RT lifetime (opposing forces)
  • High sensitivity & infrequently used ---> No RT (opposing forces, risk wins)
  • High sensitivity & frequently used    ---> Shortish RT lifetime (both pull lifetime down)

Overly simplistic Claim #2 - an access token lifetime should not exceed the average length of time for an app's usage session.

But this consideration will also be overridden by application sensitivity/risk

So, another taxonomy

  • Low sensitivity & short usage  ---> Shortish AT lifetime (opposing forces)
  • Low sensitivity & long usage    ---> Longish AT lifetime (both push lifetime up)
  • High sensitivity & short usage  ---> Shortish AT lifetime (both pull lifetime down)
  • High sensitivity & long usage    ---> Shortish AT lifetime (opposing forces, risk wins)

Depending on where your application sits in the 3D array defined by sensitivity, session length, and usage frequency - the above might help you choose values for the access & refresh token lifetimes. 

For instance, for a low sensitivity app used infrequently and only for short periods, you could consider
  1. a short access token lifetime (because session length suggests so and risk doesn't overrule)
  2. a long refresh token lifetime (because usage frequency suggests so and risk doesn't overrule)
and so on.
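The two taxonomies above can be sketched as simple lookups. The lifetime values and boolean categories below are invented placeholders for illustration, not recommendations:

```python
# Illustrative sketch of the RT and AT taxonomies above. The concrete
# lifetime values are made-up placeholders, not recommendations.

def refresh_token_lifetime(high_sensitivity: bool, frequent_use: bool):
    """Pick a refresh token lifetime (in days) per the RT taxonomy."""
    if high_sensitivity and not frequent_use:
        return None      # no RT at all: opposing forces, risk wins
    if high_sensitivity:
        return 7         # shortish: both pull lifetime down
    return 90            # longish: low sensitivity lets lifetime stay up

def access_token_lifetime(high_sensitivity: bool, long_sessions: bool):
    """Pick an access token lifetime (in minutes) per the AT taxonomy."""
    if not high_sensitivity and long_sessions:
        return 480       # longish: both push lifetime up
    return 15            # shortish: short sessions and/or risk win

# Low-sensitivity app, infrequent use, short sessions:
print(access_token_lifetime(False, False))   # shortish AT
print(refresh_token_lifetime(False, False))  # longish RT
```

Note that the "no RT" row maps naturally to returning no value at all rather than a very short lifetime.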

Wednesday, August 29, 2012

MIM == MKM (Mobile Key Management)?

In a great post on the relevance of identity to MIM (Mobile Information Management), Brian Katz proposes a model for MIM that I believe looks roughly like this:
  1. Native app obtains a security token (via combination of SAML & OAuth)
  2. Native app uses that token on API calls (as per OAuth)
  3. API validates token and determines user identity
  4. API makes authorization decision about granting request (based on user roles & application type etc)
  5. API attaches appropriate (enterprise-defined) MIM policy to returned data
  6. Native app respects MIM policy
Brian writes
The advantage of MIM, is that data can now be passed from any one app on a device to another app on the device that can read the policy and follows the policy. Any app that does not respect the policy won’t be able to read the data in the first place (yes the data is encrypted). 
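Steps 3-5 on the API side might look roughly like the sketch below. The token format, role names, and policy fields are all invented for illustration:

```python
import json

# Hypothetical server-side sketch of steps 3-5: validate the OAuth token,
# authorize the request, then attach an enterprise-defined MIM policy to
# the returned data. Tokens, roles, and policy fields are all invented.

TOKENS = {"tok-abc": {"user": "alice", "roles": ["hr-reader"]}}

def handle_api_call(token: str, resource: str) -> str:
    identity = TOKENS.get(token)                # step 3: validate token
    if identity is None:
        raise PermissionError("invalid token")
    if "hr-reader" not in identity["roles"]:    # step 4: authorize
        raise PermissionError("insufficient role")
    data = {"resource": resource, "payload": "salary data"}
    policy = {"share": "managed-apps-only", "offline": False}
    return json.dumps({"data": data, "policy": policy})  # step 5: attach policy

print(handle_api_call("tok-abc", "/hr/salaries"))
```

Step 6 (the native app respecting the policy) is, of course, the part no server-side code can enforce on its own.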

This sort of sharing has pretty serious implications for the crypto & key capabilities of the individual applications - and the architecture.

Consider two applications App1 and App2 on the device that need to share a given bit of enterprise data for some type of mashup.

When the API first releases the data & policy to App1, it can encrypt the data for App1 using the public key (ignoring the subtlety of how a symmetric key is used under the covers) associated with a private key that App1 can access. As App1 has the private key, it can decrypt the data and use it. Also, because presumably the call between the application and the API was protected by TLS, the application can be confident that the provided policy was valid and not injected by an attacker - and so will respect its stipulations.

As Brian highlights, the data is protected both in transit (by the combination of TLS & data encryption) and at rest (by the data encryption).

Subsequent sharing of the data between App1 & App2 introduces new twists.

If App1 hasn't changed the data, then it could simply hand on to App2 the original encrypted data returned by the API. This of course presumes that App2 can decrypt it, implying that App2 must have access to the same private/secret key that App1 does. In this model, the ultimate guarantee of protection for 'data at rest' is the degree of protection for the private key(s), hidden away in whatever key store the OS provides.
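The unchanged-data pass-through can be illustrated with a toy cipher. The SHA-256 counter keystream below stands in for real public-key encryption purely to show the shared-key point (do not use this construction for actual protection):

```python
import hashlib

# Toy XOR stream cipher (SHA-256 counter keystream) standing in for real
# encryption -- only to illustrate that pass-through sharing works solely
# because App1 and App2 hold the same key from the OS key store.

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: same operation both ways

SHARED_APP_KEY = b"key-in-os-keystore"  # accessible to both App1 and App2

blob = encrypt(SHARED_APP_KEY, b"enterprise data")   # API encrypts for App1
# App1 hands the unchanged blob straight to App2...
assert decrypt(SHARED_APP_KEY, blob) == b"enterprise data"  # ...App2 reads it
```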

If App1 has its own distinct key pair (as will surely be necessary if MIM is to support different apps having different data access rights), or if App1 changed the data after receiving it from the API (and it's a pretty poor mashup that doesn't expect this to happen), then the original encryption applied by the API is now useless - App1 needs to encrypt the data for App2 itself. This implies that App1 is able to discover App2's public key and encrypt the data before sending it on. As the API did originally, App1 encrypts the data, attaches the MIM policy, and sends it over to App2.

But while the encryption applied by App1 ensures that some third app (App3, say) can't read the data - it does nothing to help App2 trust the MIM policy that accompanies the newly encrypted data. Unlike when App1 originally received the MIM policy directives over a TLS-protected channel with the trusted API (and so could trust their origin & validity), App2 receives the new data+policy package from App1. So why should it trust & respect the policy? As far as App2 is concerned, App1 (or an attacker) could have modified the policy.

For App2 to trust (and be willing to respect) the MIM policy attached to the changed data - it needs to know that the policy came from the enterprise. The surest way to support this requirement is for the enterprise to digitally sign the policy (using its own private key). If a MIM policy statement is signed by a private key belonging to the enterprise, then App2 can use the corresponding public key to validate the signature, and so trust the policy. But it's not enough for the enterprise to sign a policy statement in isolation - if it did, an attacker could trivially switch a (valid) strict policy statement for a (valid) lax policy statement for nefarious purposes. It is the combination of data + policy that must be signed - only then will App2 know that
  1. this policy came from the enterprise
  2. the enterprise wants this policy applied to this data
But, even if the enterprise applied a signature to the original combination of data and policy that was returned to App1 - if the data was changed before being sent over to App2 - the original signature will no longer be valid.
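The "sign the combination, not the pieces" point can be sketched in a few lines. An HMAC stands in here for the enterprise's asymmetric signature (a real deployment would sign with the enterprise private key so apps only need the public key to verify); keys, data, and policies are invented:

```python
import hmac, hashlib

# HMAC stands in for the enterprise's asymmetric signature. In practice the
# enterprise signs with its private key and apps verify with the public key.

ENTERPRISE_KEY = b"enterprise-secret"

def sign(data: bytes, policy: bytes) -> bytes:
    # Sign the *combination* of data and policy; the length prefix fixes the
    # data/policy boundary so an attacker can't shift bytes between them.
    msg = len(data).to_bytes(4, "big") + data + policy
    return hmac.new(ENTERPRISE_KEY, msg, hashlib.sha256).digest()

def verify(data: bytes, policy: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(data, policy), sig)

data = b"salary records"
strict = b'{"share":"managed-apps-only"}'
lax = b'{"share":"anyone"}'

sig = sign(data, strict)
assert verify(data, strict, sig)           # App2 can trust this policy
assert not verify(data, lax, sig)          # swapped-in lax policy is rejected
assert not verify(b"edited", strict, sig)  # changed data invalidates the sig
```

The last assertion is exactly the problem described above: once App1 changes the data, the enterprise's original signature over data + policy no longer verifies.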

Consequently, App1 needs to apply a new signature. But this requires that App2 trust App1.....

Thus the title of this post - MIM done with inter-app sharing as described above implies a complex key management architecture - MIM ultimately boils down to MKM.

The complexity of all of the above MKM (and the implications of inter-app trust) is what made me propose to Brian on Twitter that, instead, the apps not share data directly but rather that all data flow to/from the API - and it is with the API that individual apps maintain their trust (and get data).

An admitted downside of this alternative model is that, while data can still remain protected at rest - no inter-app sharing is possible when not connected to the API, e.g. on a plane.