When you don't have anything nice to say, well then perhaps it's time to consider a career as an analyst.
Monday, December 12, 2011
Callback security in mobile OAuth 2.0
OAuth 2.0 defines how mobile native applications can obtain an access token from an Authorization Server – this token is used on calls to the API behind which sits the application data (e.g. their calendar, their TripIt data, etc.) that the native application seeks.
Obtaining the first access token typically happens via the browser – the sequence:
1) The user indicates they wish to 'authorize the app' or equivalent
2) The native application pops an external browser window and loads the AS login page
3) The user logs into the AS (and may be asked for their consent for that native application to be able to access particular resources)
4) If successful, the AS redirects the browser to a specified callback URL, including in the redirect URL an 'authorization code'
5) The browser passes the authorization code to the native application
6) The native application sends the authorization code back to the AS, and is returned the desired access token (as well as, optionally, a refresh token, which the native application can use going forward to get new access tokens)
Step 5 in the above might have caused you to wonder: just how exactly does the browser, after grabbing the code from the callback URL, pass it to the native application?
Good question.
One emerging best practice is to leverage the mobile OSes' (Android and iOS, at least) support for custom schemes as a means of inter-application messaging. When the native application is installed, it registers itself as the handler for URLs of a particular scheme (just as the browser handles HTTP URLs, and the app markets have their corresponding schemes).
Below is the Android manifest for an app registering itself as the handler for the 'coolmobileapp' scheme.
<activity android:name=".AppActivity" android:label="@string/app_name">
  <intent-filter>
    <data android:scheme="coolmobileapp" android:host="cma.ex.com" />
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.BROWSABLE" />
    <category android:name="android.intent.category.DEFAULT" />
  </intent-filter>
</activity>
Once registered as a handler for a given scheme, whenever the OS sees a URL in that scheme, it will pass that URL to the application.
If, then, the callback URL that the AS redirects the browser to belongs to a scheme that the relevant native application 'owns', the browser, upon seeing that URL, will pass it on to the OS for appropriate forwarding to the app.
Once the native application is passed the URL, it can grab the authorization code, and use it to obtain the desired tokens.
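To make those last two steps concrete, here is a minimal Python sketch (the scheme, host, and parameter values are hypothetical, echoing the 'coolmobileapp' example above): the app pulls the authorization code out of the callback URL the OS hands it, then builds the standard form parameters for the code-for-token exchange with the AS.

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(callback_url):
    # Step 5: the OS hands the app the custom-scheme callback URL;
    # pull the authorization code out of the query string
    query = urlparse(callback_url).query
    return parse_qs(query)["code"][0]

def token_request(code, client_id, redirect_uri):
    # Step 6: form parameters POSTed to the AS token endpoint
    # in exchange for the access (and optionally refresh) token
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
    }

code = extract_auth_code("coolmobileapp://cma.ex.com/cb?code=SplxlOBe&state=xyz")
params = token_request(code, "cool-mobile-app", "coolmobileapp://cma.ex.com/cb")
```

The actual HTTP POST and the native app's intent handling are omitted – the point is only how the code travels from callback URL to token request.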
The hitch in the above mechanism is that nothing prevents multiple native applications from registering themselves as the handler for a given URL scheme. If you have multiple browsers installed on your phone you will have seen how Android deals with multiple such handlers – it asks the user which to use for a given URL (unless a default is set). Nothing in the OS prevents such collisions or mediates conflicts other than this user-query mechanism.
Consequently, it is theoretically possible for a rogue application to lay claim to the very same custom scheme handler as a valid native app with the hope that it would obtain the authorization code as part of the OAuth authorization process rather than the valid & appropriate app.
In the next installment, I will present ideas as to how to mitigate the above callback attack - based on discussions with my Ping colleagues Scott Tomilson, Travis Spencer, and Brian Campbell (to be fair, Brian has contributed very little).
Monday, September 19, 2011
I refuse to call this 'Baking as a Service'
So the story goes, when Betty Crocker instant cake mixes were initially introduced, they sold poorly. But why? The mixes made trivially easy what had been a long and messy process.
The problem, according to the business psychologists Dr. Burleigh Gardner and Dr. Ernest Dichter, was eggs. They argued that powdered eggs should be left out, so cooks could add a few fresh eggs into the batter, giving them a sense of creative contribution.
Though it would mean more work for the (inevitably aproned) housewives, the hope was that cracking an egg into the bowl would give her some pride of creation in the resultant cake and mitigate any feelings of spousal & maternal guilt.
The premise of combining fresh ingredients with pre-made has been formalized with the 'semi-homemade' movement in cooking – the approved ratio is 70% fresh ingredients like vegetables or meat supplementing the 30% store-bought mix or sauce.
The moral of the story for cloud identity management?
A mix of on-prem & on-demand IdM infrastructure will give the enterprise the right balance of control and convenience – the store-bought on-demand mix means that the (probably less likely to be aproned, but hey, I don't judge) IT admin need not build a cloud identity solution from scratch, while the on-prem eggs ensure that they can maintain the desired level of ownership that allows them to meet their CISO at the end of each day with a guilt-free conscience (and maybe also a dry martini).
Friday, April 29, 2011
Scoping scope
ReadWriteWeb describes Twitter's new consent UI by which an application asks of a user access to their Twitter account.
Aside: RWW describes this page as the 'OAuth screen' – it makes just as much sense to call it the 'HTTP screen'. OAuth is the plumbing for this screen, not the (visible) shower curtain.
RWW points out that the list of allowed actions isn't quite as complete as indicated. Notably omitted from the list is 'Read that DM where you made fun of your boss's new haircut'.
The UI might make a user believe that this list of permissions is unique to Favstar.FM. But that's not the case – these are generic permissions, afforded to all (registered) applications. The only differentiation in permissions that Twitter supports is between 'read' and 'read and write', selected by the application developer at registration time.
Twitter's model ignores a key advantage of the OAuth model (one not supported by the password anti-pattern), namely allowing a user to give differentiated permissions to different applications.
Separately:
- Red & green text? Really?
- Does the stuttering repetition of 'Favstar.FM' imply a glitch in the code? or an overzealous registration page?
- The list of things the app will not be able to perform seems incomplete. I suggest the following additions at minimum:
Saturday, April 16, 2011
Not on the first date
My Ping colleague Travis blogged yesterday on a POC he put together with Axiomatics' David Brossard.
Travis used some security terms I was unfamiliar with (e.g. 'fika', 'bullar'?) but I believe the gist of the POC is that:
a) a user can be introduced to a bank (or equivalent) through some existing social identity (e.g. Facebook, Twitter, etc.) they have
b) they can enjoy a richer experience because of the information made available to the bank (e.g. services customized to their location, etc.) from the social provider
c) if they actually want to become a customer of the bank, the normal (and far more rigorous) procedure of proofing, registration, and issuance kicks in – the same process as would happen if the user hadn't first been introduced by the social provider
I think of the value of the introduction step as smoothing out the 'hassle as function of relationship maturity' curve.
The current default is that there are two possible states for a given user's relationship with a provider (bank or otherwise) - a) member of the unwashed horde or b) customer.
The transition from a) to b) can be onerous - and critically, the hassle may not be commensurate with the user's own assessment of where they are in the relationship with the provider. The situation is shown in the graphic below
The user may have been thinking they just wanted to 'hold hands' but the provider is all hot & heavy and saying 'let's do this, let's do this', etc.
The model the POC enables is shown in the diagram below. The introduction of the user to the bank, while not enabling the same richness of experience and service as a full account (with associated hassle), still provides something richer than the unwashed status. Importantly, it's likely a degree of hassle more in line with the user's own feeling for the relationship with the bank. There is a middle ground – a 'let's cuddle' stage if you will (or won't – that's your decision, no pressure from me).
If the relationship matures further, and the creation of a 'real' account is deemed necessary - the user will hopefully be convinced of the value of taking on the hassle - having already established some confidence in the provider.
Let's keep the pressure to be 'socially promiscuous' off the Web, and in high school where it belongs.
Thursday, April 07, 2011
More token taxonomizing
Generally, APIs fall into one of two categories – either SOAP-based or REST-based. SOAP defines an XML messaging envelope for carrying payload and associated metadata. SOAP messages are typically delivered over HTTP as POSTs – the semantics of the message carried in the SOAP Body within the HTTP body. Advocates of REST argue that HTTP itself offers all the desired semantics through its built-in support for GET, PUT, POST, DELETE, etc. There is much emotion on both sides. Regardless, both SOAP and REST APIs rely on tokens to carry security information.
API Clients typically interact with some sort of 'token service' to get a token, and then include that token on their calls to the API - the token layers on the security information the API needs to make an authorization decision. For the SOAP world, WS-Trust defines the protocol between the Client and the token service by which the Client gets the necessary tokens. In the REST world, OAuth does the equivalent.
Tokens can be distinguished along a number of (somewhat) orthogonal axes:
- artifact or assertion - does the token have internal structure with heterogeneous elements within? Or is it an opaque blob, like a random text string, that has no inherent meaning or semantics but acts only as a reference or pointer into some store – the referenced object likely having the semantics? The artifact model has advantages when the delivery channel is constrained in security or size limits
- Security model - what is required of the Client for presenting a token to some API? Is mere possession of the token sufficient (the bearer model), or is the Client required to demonstrate knowledge of some secret associated with the token (the POP or proof-of-possession model)?
- Implicit or explicit security - does the token carry within itself its own inherent security protections, i.e. is there a space to carry a signature for the token, thereby allowing a recipient to determine the issuer of the token? Alternatively, does the token rely on external security protection, e.g. provided by the transport channel over which it is sent?
- Standardized or proprietary – is the structure (if present) of the token defined by some group, or is it proprietary to a single issuer (or recipient)? The criteria as to what constitutes ‘some group’ varies. If the same entity both issues and verifies the tokens, there is less motivation for a standard (assuming they are opaque to the client)
- Duration - are they long-lived or short-lived? Generally, bearer tokens are given short lifetimes to balance the risk of their theft.
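The bearer vs. proof-of-possession distinction in the security-model axis can be sketched as follows (a Python illustration under assumed names, not any particular spec's wire format): a bearer Client simply presents the token string, while a PoP Client must also demonstrate knowledge of a secret bound to the token – here, by computing an HMAC over the request.

```python
import hashlib
import hmac

def bearer_header(token):
    # Bearer model: possession of the token string is the whole proof
    return {"Authorization": "Bearer " + token}

def pop_signature(token_secret, request_body):
    # Proof-of-possession model: the Client proves knowledge of a secret
    # associated with the token by signing (part of) the request with it
    return hmac.new(token_secret, request_body, hashlib.sha256).hexdigest()
```

A stolen bearer token is immediately useful to a thief; a stolen PoP token is not, unless the associated secret is stolen along with it – which is why bearer tokens tend to get the short lifetimes noted above.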
Wednesday, April 06, 2011
From Properties to Capabilities
Hal Lockhart has an interesting post on different authorization models, distinguished by the nature of the claims/attributes made within some token/assertion. Hal argues that there is a spectrum between 'properties' (where the attributes contain information about who/what the Subject is) and 'capabilities' (where the attributes contain information about what the Subject is allowed to do).
Coincidentally, I was working at my 'drawings' this morning - sketching out something similar for OAuth.
The diagram below shows three different models for the interaction between OAuth Client, Authorization Server, and Resource Server.
In the top model, the Client requests of the OAuth AS a token, but does not specify any particular RS target. Consequently, the AS can issue a token with claims that contain only generic attributes of the Client/Subject. This is Hal's properties model. The AS authenticates, the RS authorizes.
In the bottom model, the Client includes in its request to the AS the scope of what actions it desires to perform at the RS. The token that the AS subsequently issues reflects a positive authorization decision and not some generic attribute of the Client. This is Hal's capabilities model. The AS makes an authorization decision and tells the RS about it. Makes for simple RSs.
Reflecting the delegated authorization use case that originally motivated OAuth - it is the capabilities model that gets the most attention. The assumption is that the OAuth access token issued to the Client reflects a set of capabilities assigned to it by the Resource Owner. But there are lots of applications of OAuth where the access token won't reflect capabilities but simply Client properties.
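To illustrate the contrast, here are two hypothetical token payloads (the claim names are illustrative, not drawn from any particular token profile):

```python
# Properties model: the token says who the subject is;
# the RS decides what that identity permits
properties_token = {
    "sub": "alice",
    "department": "engineering",
    "employee_type": "full-time",
}

# Capabilities model: the token says what the bearer may do;
# the RS simply enforces the decision the AS already made
capabilities_token = {
    "sub": "alice",
    "scope": "calendar.read calendar.write",
    "aud": "https://rs.example.com",
}
```

Same subject, very different division of authorization labour between AS and RS.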
Note: The middle model has the responsibility for the complete authorization decision shared amongst the AS and RS. It is there because any good taxonomy has at least 3 permutations - I have nothing more to say about it.
A token argument for lazy deprovisioning
When an employee can only access cloud applications through SSO from their enterprise, deprovisioning employees as they 'leave to pursue other career opportunities' is easy - the enterprise simply turns off the ability to SSO (by turning off the ability to authenticate to the enterprise). For cloud applications accessed through the browser, the enterprise acts as a gatekeeper for its employees to the Cloud - and so becomes an effective shut-off valve when needed.
As the enterprise can be confident that the Recently Boxed Up Employee (RBUE) no longer has access to cloud applications - it may choose to be somewhat relaxed about cleaning up any remnants of those ex-employees at the various SaaS providers. The motivation is more about data hygiene and less about security - and so there need not be the same urgency to the clean up.
This laissez-faire attitude changes if the RBUE had also been able to access the cloud applications through a native application on their phone (or yes, tablet - these days you have to mention tablets). Unlike for Web SSO, the authentication model for such natively installed mobile applications does not typically (caveat, caveat, caveat) involve the enterprise at run-time. Instead, the application is issued (for the relevant employee) relatively long-lived tokens at registration time, and it is by presenting these tokens to the Cloud APIs fronting the data that the application authenticates (for the relevant employee).
While the enterprise was (likely/hopefully) involved (if not directly) in the original process of issuing the tokens to the native applications - it is not typically (caveat, caveat, caveat) involved in the subsequent run-time issuance or verification of those tokens when presented by the native applications. The Cloud provider wouldn't, for example, typically call out to the enterprise asking:
Have you fired this guy yet? FYI, he spends most of his time on Chatter talking about baseball
As it's not involved in the day-to-day issuance and/or verification of the tokens that the native application uses to authenticate to the Cloud - the enterprise can't play the passive shut-off role possible for Web SSO - it can't stop RBUE access to the Cloud by simply stopping issuing tokens for that RBUE.
Instead, the enterprise needs to get up off its corporate butt and actively deprovision the RBUE (the same operation required as when, before Cloud SSO was possible, each employee had particular accounts and passwords at SaaS providers).
No longer can the enterprise spend its time watching Oprah - secure in the belief that turning off SSO effectively terminates Cloud access for the RBUE. The enterprise needs to actively (and promptly) reach out to each and every relevant Cloud provider and send the 'Delete user' message (specifics varying according to the proprietary user management API each Cloud API offers up).
Sorry Oprah.
This argument presumes that the tokens are issued to the native application by the Cloud provider, and not the enterprise. This seems to be today's reality. In this model, the tokens aren't federated - the Cloud issues tokens that it will subsequently verify. JWT will likely enable a different model, where an enterprise can issue a token to the native application, and a Cloud API can verify it. In such a model, the enterprise regains the right to be lazy about 'deprovisioning'.
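As a rough sketch of that JWT-style model (a toy HS256 implementation using only the standard library - a real deployment would use an asymmetric key and a proper JOSE library, and would check expiry, audience, etc.): the enterprise mints the token, and the Cloud API verifies it with no run-time call back to the enterprise.

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # Unpadded base64url, as used by JWT's compact serialization
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_jwt(claims, key):
    # Enterprise side: sign a compact JWT-style token for the native app
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = (header + "." + payload).encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return header + "." + payload + "." + sig

def verify_jwt(token, key):
    # Cloud API side: verify the signature locally; return the claims,
    # or None if the token wasn't minted with the expected key
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Revoke the key (or stop minting short-lived tokens) and the RBUE's access dies with it - which is exactly the laziness the post argues the enterprise regains.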
Tuesday, April 05, 2011
A SHOULD is as good as a MUST to a fined AS
Recently, the IETF OAuth WG has spent MUCH time discussing whether a particular endpoint in the authorization dance SHOULD be SSL, or instead MUST be SSL (where the SHOULD and MUST are interpreted as in RFC 2119).
The endpoint in question is that hosted by the OAuth Client, to which the Authorization Server (AS) redirects the user's browser after obtaining their consent for that Client accessing the services hosted by a Resource Server (RS). Typically, the Client would have previously provided the AS the appropriate endpoint when registering with that AS. Notwithstanding that, a Client may also supply a value for this endpoint when it sends the user's browser to the AS asking for authorization - it serves as an additional check that the entity sending the authorization request is indeed a valid registered Client.
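That check might look something like this sketch (the client IDs, URLs, and the policy flag are all hypothetical) - an AS matching the supplied callback against the registered value, with the MUST-vs-SHOULD question surfacing as whether non-HTTPS endpoints are rejected:

```python
from urllib.parse import urlparse

# Hypothetical registration store: client_id -> registered callback URL
REGISTERED = {
    "client123": "https://client.example.com/callback",
    "clienthttp": "http://client.example.com/cb",
}

def redirect_uri_ok(client_id, supplied_uri, require_https=True):
    # The redirect URI supplied at authorization time must exactly match
    # the one registered for this Client ...
    registered = REGISTERED.get(client_id)
    if registered is None or supplied_uri != registered:
        return False
    # ... and a MUST-SSL policy additionally rejects non-HTTPS endpoints
    if require_https and urlparse(supplied_uri).scheme != "https":
        return False
    return True
```

An AS taking the Facebook position described below would effectively run this with require_https=False.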
Importantly, it is by sending the browser to this client URL that the AS communicates an 'authorization code' to the Client - this code is effectively a pointer to the more fundamental 'access token' that the Client ultimately desires, as it is the access token that the Client will include on its API calls to the RS. Once the Client obtains the authorization code, it sends it back to the AS in exchange for that desired access token.
The flow is shown below - the infamous endpoint is highlighted. The exchange of the authorization code is shown in steps (D) and (E).
If this endpoint is not protected by SSL, then an attacker will be able to grab an authorization code issued for one user, and fool a good (in the 'not overtly evil' sense, no endorsement is intended) Client to exchange the stolen authorization code for an access token. The attacker ends up able to access the user's resources at the RS 'through' the good Client.
One version of the full attack looks like:
Everybody in the OAuth WG seems to agree on the reality of the attack and that SSL mitigates it.
Consequently, were the OAuth specification to mandate that Client implementations and deployments MUST protect this endpoint with SSL - then the attack is prevented. There are some in the OAuth WG who advocate this - arguing that the specification has a duty to turn up the security knobs and prevent clearly recognized attacks against the protocol.
Others in the WG, (generally the major social providers), looking at the reality of the Clients that will be interacting with the ASs they will host, argue that, while MUST may be desirable, it is impractical for their Clients (for which setting up SSL is claimed to be either too onerous, too expensive, or both). Consequently, they argue that making SSL protection a MUST will serve to only cause them to break compliance with the specification. As they don't want to be out of compliance with the spec (but they are willing to do so if necessary), these WG members argue that making SSL protection a SHOULD, along with appropriate text about risks of not using SSL, is the right choice - because it achieves the same security protection (namely that those Clients who will use SSL will indeed use it, while those who wont wont) without forcing the latter group out of compliance.
Notwithstanding new twists (like rants along the lines of 'you new kids are wrecking the game!!!!' ), this MUST/SHOULD SSL issue reflects the familiar tension between security & deployment realities that security standards always face - dialing the knobs up to 11 would be nice, but many folk's amps stop at 10 (or lower).
The Facebooks et al have indicated that, regardless of what the spec stipulates, they won't enforce a MUST, ie they will happily (well perhaps not happily, but they'll do it nonetheless) send the browser to an HTTP redirect endpoint at a Client (nor would they reject an HTTP endpoint at Client registration time). Ultimately, this is the Facebook AS making a security policy decision for the Facebook RSs - measuring the risk of the attack and making the call that to not mandate SSL is acceptable. The position is arguably defensible given the nature of the data and services currently sitting behind facebook RSs (e.g. cute bunny photos etc).
Facebook's position can be summed up as:
Make it MUST in the spec if you really need to, but we are going to interpret it as a SHOULD
The flip side to this model is likely going to be more relevant for an enterprise running an OAuth AS for its RS APIs. This 'enterprise-centric' flip side is:
Make it SHOULD in the spec if you really need to, but we are going to interpret it as a MUST
In other words, even if the OAuth specification doesnt mandate that the client endpoint be protected with SSL (ie the spec has a SHOULD), an AS is still completely free to reject Client endpoints that arent protected by SSL - either at registration time or by refusing to send the browser to such an endpoint. Just as the Facebooks of the consumer world can make their own policy decision regarding how they want their Client endpoint's secured (SSL or not), an AS hosted by an enterprise can make its own decision - in the direction of greater overall security by interpreting the spec's SHOULD as a MUST.
It's likely that an Enterprise would somehow and eventually pay a financial cost for a breach to their API security (e.g. perhaps they would be 'fined' were this to happen). The moral of this essay can therefore be summarized as follows:
A SHOULD is as good as a MUST to a fined AS.
Sorry about that.
The endpoint in question is the one hosted by the OAuth Client, to which the Authorization Server (AS) redirects the user's browser after obtaining their consent for that Client to access the services hosted by a Resource Server (RS). Typically, the Client will have previously provided the AS with the appropriate endpoint when registering with that AS. Notwithstanding that, a Client may also supply a value for this endpoint when it sends the user's browser to the AS asking for authorization - it serves as an additional check that the entity sending the authorization request is indeed a valid registered Client.
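As a concrete (if simplified) sketch, here is how a Client might construct that authorization request URL, redirect_uri included. The endpoint, client id, scope, and callback values below are made up for illustration - they aren't from any particular provider:

```python
from urllib.parse import urlencode

# Hypothetical AS authorization endpoint, for illustration only.
AUTHZ_ENDPOINT = "https://as.example.com/authorize"

def build_authorization_url(client_id, redirect_uri, scope):
    """Build the URL the Client sends the user's browser to.

    Including redirect_uri here lets the AS compare it against the
    value the Client registered - the 'additional check' noted above.
    """
    params = {
        "response_type": "code",  # ask the AS for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return AUTHZ_ENDPOINT + "?" + urlencode(params)

url = build_authorization_url(
    "coolmobileapp", "https://client.example.com/cb", "calendar")
```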
Importantly, it is by sending the browser to this Client URL that the AS communicates an 'authorization code' to the Client - this code is effectively a pointer to the more fundamental 'access token' that the Client ultimately desires, as it is the access token that the Client will include on its API calls to the RS. Once the Client obtains the authorization code, it sends it back to the AS in exchange for that desired access token.
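That code-for-token exchange boils down to the Client POSTing a form-encoded body back to the AS token endpoint. A sketch of building that body (the endpoint, client credentials, and code values are invented, and the actual HTTP POST plus parsing of the access_token/refresh_token response are omitted):

```python
from urllib.parse import urlencode

# Hypothetical AS token endpoint, for illustration only.
TOKEN_ENDPOINT = "https://as.example.com/token"

def build_token_request(code, client_id, client_secret, redirect_uri):
    """Form-encoded body the Client POSTs to the AS to swap the
    authorization code for an access token."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        # must match the redirect_uri used on the authorization request
        "redirect_uri": redirect_uri,
    })

body = build_token_request(
    "SplxlOBeZQQYbYS6WxSbIA",          # made-up authorization code
    "coolmobileapp", "s3cret",
    "https://client.example.com/cb")
```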
The flow is shown below - the infamous endpoint is highlighted. The exchange of the authorization code is shown in steps (D) and (E).
If this endpoint is not protected by SSL, then an attacker will be able to grab an authorization code issued for one user and fool a good (in the 'not overtly evil' sense; no endorsement is intended) Client into exchanging the stolen authorization code for an access token. The attacker ends up able to access the user's resources at the RS 'through' the good Client.
One version of the full attack looks like this:
1. Evil Brian starts the dance at the good Client & good AS
2. The AS redirects Brian's browser to the (non-SSL, plain HTTP) Client redirect endpoint; the redirect URI contains an authz code
3. Brian stops the (HTTP) redirect containing his authz code
4. Brian waits for unsuspecting Good Paul to walk into Starbucks
5. Good Paul starts the dance at the good Client & good AS (the same ones as for Brian)
6. Brian intercepts the (non-SSL, plain HTTP) redirect containing Paul's authz code
7. Brian takes Paul's authz code from #6 and inserts it into the URI from #3
8. Brian sends his browser to the new redirect URI from #7
9. The Client extracts the substituted authz code and exchanges it for an access token with the AS
10. The Client associates the access token with evil Brian's account, but the AS associated it with Paul's
11. Brian can access Paul's stuff through the Client
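The URI surgery in steps #7 and #8 is trivial - which is rather the point. A sketch, with made-up hostnames and codes:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def substitute_code(stopped_redirect, stolen_code):
    """Swap the 'code' query parameter in an intercepted plain-HTTP
    callback URL - exactly what makes the unprotected endpoint risky."""
    parts = urlparse(stopped_redirect)
    query = parse_qs(parts.query)
    query["code"] = [stolen_code]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Brian's own stopped redirect (#3), and Paul's sniffed code (#6):
brians_uri = "http://client.example.com/cb?code=BRIANS_CODE"
forged = substitute_code(brians_uri, "PAULS_CODE")
```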
Everybody in the OAuth WG seems to agree on the reality of the attack and that SSL mitigates it.
Consequently, were the OAuth specification to mandate that Client implementations and deployments MUST protect this endpoint with SSL, the attack would be prevented. There are some in the OAuth WG who advocate this - arguing that the specification has a duty to turn up the security knobs and prevent clearly recognized attacks against the protocol.
Others in the WG (generally the major social providers), looking at the reality of the Clients that will interact with the ASs they host, argue that, while MUST may be desirable, it is impractical for their Clients (for which setting up SSL is claimed to be too onerous, too expensive, or both). Consequently, they argue, making SSL protection a MUST would serve only to break those Clients' compliance with the specification. As they don't want to be out of compliance with the spec (though they are willing to be if necessary), these WG members argue that making SSL protection a SHOULD, along with appropriate text about the risks of not using SSL, is the right choice - it achieves the same security protection (those Clients who will use SSL will indeed use it, while those who won't, won't) without forcing the latter group out of compliance.
Notwithstanding new twists (like rants along the lines of 'you new kids are wrecking the game!!!!'), this MUST/SHOULD SSL issue reflects the familiar tension between security and deployment realities that security standards always face - dialing the knobs up to 11 would be nice, but many folks' amps stop at 10 (or lower).
The Facebooks et al. have indicated that, regardless of what the spec stipulates, they won't enforce a MUST, i.e. they will happily (well, perhaps not happily, but they'll do it nonetheless) send the browser to an HTTP redirect endpoint at a Client (nor would they reject an HTTP endpoint at Client registration time). Ultimately, this is the Facebook AS making a security policy decision for the Facebook RSs - measuring the risk of the attack and making the call that not mandating SSL is acceptable. The position is arguably defensible given the nature of the data and services currently sitting behind Facebook RSs (e.g. cute bunny photos etc.).
Facebook's position can be summed up as:
Make it MUST in the spec if you really need to, but we are going to interpret it as a SHOULD
The flip side to this model is likely going to be more relevant for an enterprise running an OAuth AS for its RS APIs. This 'enterprise-centric' flip side is:
Make it SHOULD in the spec if you really need to, but we are going to interpret it as a MUST
In other words, even if the OAuth specification doesn't mandate that the client endpoint be protected with SSL (i.e. the spec has a SHOULD), an AS is still completely free to reject Client endpoints that aren't protected by SSL - either at registration time or by refusing to send the browser to such an endpoint. Just as the Facebooks of the consumer world can make their own policy decision regarding how they want their Client endpoints secured (SSL or not), an AS hosted by an enterprise can make its own decision - in the direction of greater overall security, by interpreting the spec's SHOULD as a MUST.
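Operationally, 'interpreting SHOULD as MUST' is little more than a one-line policy check on the AS side, applied at Client registration time or before issuing the redirect. A sketch (the function name and flag are mine, not from any spec):

```python
from urllib.parse import urlparse

def acceptable_redirect_uri(uri, require_tls=True):
    """AS-side policy check on a Client redirect endpoint.

    With require_tls=True the AS treats the spec's SHOULD as a MUST
    and refuses plain-HTTP endpoints; require_tls=False models the
    consumer-provider stance described above.
    """
    scheme = urlparse(uri).scheme.lower()
    if scheme == "https":
        return True
    return scheme == "http" and not require_tls

# Enterprise AS (SHOULD read as MUST): plain HTTP is rejected.
enterprise_ok = acceptable_redirect_uri("http://client.example.com/cb")
# Consumer AS willing to live with HTTP: accepted.
consumer_ok = acceptable_redirect_uri(
    "http://client.example.com/cb", require_tls=False)
```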
It's likely that an enterprise would eventually pay a financial cost for a breach of its API security (e.g. perhaps it would be 'fined' were this to happen). The moral of this essay can therefore be summarized as follows:
A SHOULD is as good as a MUST to a fined AS.
Sorry about that.