Tuesday, December 17, 2013

An IoT continuum

Currently, the burden falls on us humans to 1) sense the world around us, 2) analyze that sensory data and decide how best to deal with it, and 3) act on that world accordingly.

The Internet of Things will change that - evolving from systems that help us with #1 to eventually helping us deal with #2 & #3.

Consider a story, loosely based on reality, with progressively greater assistance provided to the handsome protagonist:

  1. I look out the window and see snow coming down hard. I decide to leave early for a trip to the airport. Once in the car, I engage the 4-wheel drive.
  2. I get an SMS from a weather alert service notifying me of snow squalls in the area. I decide to leave early for a trip to the airport. Once in the car, I engage the 4-wheel drive.
  3. I get an email recommending I leave now for my trip to the airport - given snow squalls in the area. Once in the car, I engage the 4-wheel drive.
  4. I get an email recommending I leave now for my trip to the airport - given snow squalls on my typical route. As I turn the ignition key, the car auto engages the 4-wheel drive.
  5. My Home Concierge recommends I leave now for my trip to the airport - given snow squalls on my preferred route. Once I confirm the change, the car turns itself on, pre-warms the cabin, and auto engages the 4-wheel drive.
If the Internet of Things can get my two teenage sons to have the driveway shoveled - I won't even get my shoes wet.


Monday, November 18, 2013

Things that go bump in the night

My daughter's best friend's family (let's call them the Smiths) recently moved to the other side of the country (unfortunately, selfishly bringing their daughter with them). My daughter is saddened by this.

To some extent, I see my role as trying to minimize large-scale sadness increases for my children (also my wife, I guess, though that definitely wasn't in our vows, so that's mostly bonus the way I see it).

Consequently, I'm looking for any mechanisms that might help my daughter with the change.


Might technology help?

The girls are already using explicit connectivity technology - an iOS app called Bump & numerous 2-hour FaceTime sessions in which mine and the Smiths' households' respective dogs are forced to appear on camera in humiliating costumes.

Explicit mechanisms are definitely important for keeping remote friends feeling connected, but so also can be implicit or passive mechanisms - such as the Good Night Lamp.

According to Forbes
The Good Night Lamp is a simple set of lamps – one big, one or more little. When the big one is turned on, the little ones turn on. When the big one is turned off, its junior partners also turn off. More junior lamps can be added to the network, but that, at heart, is the whole offer. There is nothing to tinker with or customize – it is a simple point of presence, sent over the Internet.
This would be perfect for the girls. But unfortunately there are no Good Night Lamp kits available for purchase - they sold out after their initial run.

Coincidentally, I have a Smartthings kit of various things - can I not use Smartthings to duplicate the GNL use case?

Use Case
When my daughter performs some explicit action, turn on/off the bedside lamp of her friend in Vancouver. And vice versa.
Tools

My Smartthings kit includes 
  • Hub
  • Multi
  • Presence
  • Outlet
  • Motion
Implementation

Temporarily putting aside the two-household aspect, I could use a Hub, Multi and Outlet to satisfy the use case within my own house - using IFTTT to tie it all together.


When the Multi switch is closed (the two halves placed together), the Outlet is turned on, and so any light plugged into it turns on as well. And vice versa.
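
A sketch of that rule as code (hedged: the event names & outlet interface below are invented for illustration - the actual Smartthings/IFTTT plumbing differs):

```python
# Minimal sketch of the Multi -> Outlet rule, assuming a hypothetical
# hub that delivers 'open'/'closed' events and an outlet object with
# turn_on()/turn_off() methods.

class OutletRule:
    """Mirror the state of a Multi contact sensor onto an Outlet."""

    def __init__(self, outlet):
        self.outlet = outlet

    def on_multi_event(self, event):
        # 'closed' means the two halves of the Multi are together
        if event == "closed":
            self.outlet.turn_on()    # the plugged-in lamp lights up
        elif event == "open":
            self.outlet.turn_off()
```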

To deal with two different households, I could purchase another Smartthings Hub, Multi, and Outlet - ship them to the Smiths and then duplicate the above rules, though now inter-household rather than intra-household.

This would work, but at the cost of me bearing the full financial burden (and the Smith girl is missing a friend too, right?) of effectively purchasing two Smartthings kits and distributing the various pieces across the country.

Preferable (to me if not the newly trendy, sodden and real estate-indebted Smiths) would be a model where it is the Smiths that purchase the second Smartthings kit - and yet we are still able to apply the above logic, albeit based on explicit authorization rules (the Smiths can control my outlet, and I can control their outlet) rather than implicit logic (all the things belong to me).

For Smartthings to support this would require
  1. an invitation mechanism whereby I can request that the Smiths assign me permissions over their household things
  2. an authorization UX whereby I can assign the Smiths permissions to control my household things
  3. an authorization framework by which the permissions of a given 'turn on Outlet' request from a household to the Smartthings cloud platform can be checked.
OAuth, OpenID Connect & UMA (User-Managed Access) are identity & authorization standards that were designed to meet these sorts of requirements.
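
To make requirement #3 concrete, a hedged sketch of what my hub's call against the Smiths' outlet might look like once they had authorized me (endpoint, device & scope names are all invented):

```python
import requests

# Hypothetical sketch: my hub turns on the Smiths' outlet by calling the
# Smartthings cloud with an OAuth access token that the Smiths authorized
# for me (the URL, device id & scope names below are placeholders).
ACCESS_TOKEN = "token-issued-via-smiths-authorization"

resp = requests.post(
    "https://api.example-smartthings.com/households/smiths/outlets/lamp/on",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
# The cloud's authorization framework would check that this token carries
# something like a 'control:outlet' scope granted by the Smiths before acting.
resp.raise_for_status()
```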

Of course, this sort of 'identity interoperability' across two smart households begs the question - shouldn't this work across different Home Automation platforms? What if the Smiths were to purchase WigWag and not Smartthings? This sort of cross-platform interoperability needn't even imply a WigWag hub controlling a Smartthings Outlet - the interoperability could happen between the two respective clouds using HTTP & APIs.


Friday, November 15, 2013

Client authentication in MQTT

As demonstrated by Paul Fremantle, the latest working draft of MQTT allows for (if not defines how to perform) the use of OAuth access tokens in authenticating the client to the server/broker.
The CONNECT Packet contains Username and Password fields. Implementations can choose how to make use of the content of these fields. They may provide their own authentication mechanism, use an external authentication system such as LDAP or Oauth [sic] tokens, or leverage operating system authentication mechanisms.
The spec also allows for client authentication through a VPN or SSL, and also, it seems, for inserting arbitrary credentials in the application payload:
An implementation might allow for authentication where the credentials are flowed in an Application Message from the Client to the Server.
Separate from the interoperability challenge presented by so many different client authentication mechanisms, there is (to my mind) a more fundamental issue with MQTT's client authentication model.

There are both ClientID and Username params allowed on the CONNECT message. This would allow for separate identification of both the MQTT client and any user on whose behalf that client was sending messages. This seems appropriate - allowing a single client to potentially represent different users over time. But there is only a single Password (or equivalent) parameter on the CONNECT, and it appears to serve double duty for authentication of both the client and any user.

Because there is only one Password parameter, it seems you can't authenticate both the client and a user simultaneously on the same CONNECT.

If you did need to authenticate both client & user simultaneously, it seems you would need to do something like

  1. use client-authn SSL to authenticate the client & use the Password field for the user, or
  2. use the Password field for client & some application message param for the user (or vice versa?)

Choice is good, except when it isn't...

If MQTT allowed for a 'client_pwd' (name it what you will) to be paired with the existing ClientID parameter, thereby distinguishing between credentials for the client (client_pwd) and the user (Password), then the whole situation would be cleaner.

Even cleaner would be to define a new CONNECT field called 'access_token', and use that instead of forcing OAuth tokens into the existing parameters (which can be problematic as Paul discovered).
I couldn't encode the token as the password, because of the way Mosquitto and mosquitto_pyauth call my code. I ended up passing the token as the username instead. I need to look at the mosquitto auth plugin interface more deeply to see if this is something I can fix or I need help from Mosquitto for.
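
A minimal sketch of that workaround, using the paho-mqtt Python client (1.x style API; broker host, topic & token are placeholders):

```python
import paho.mqtt.client as mqtt

# Sketch of Paul's workaround: carry the OAuth access token in the MQTT
# Username field (the Password field didn't work with his Mosquitto
# auth plugin setup). Broker host & token below are placeholders.
client = mqtt.Client(client_id="sensor-42")
client.username_pw_set(username="<oauth-access-token>", password=None)
client.connect("broker.example.com", 1883)
client.loop_start()  # background network loop flushes the publish
client.publish("home/sensors/temp", payload="21.5")
```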




Thursday, November 14, 2013

OAuth binding for MQTT

Paul Fremantle blogs about some experimenting he has done around using OAuth within MQTT - specifically, using an OAuth access token in place of a username & password.
I've been thinking about security and privacy for IoT. I would argue that as the IoT grows we are going to need to think about federated and user-directed authorization. In other words, if my device is publishing data, I ought to be able to decide who can use that data. And my identity ought to be something based on my own identity provider.
Choir member here appreciating the sermon.

Paul appeared to focus on how the MQTT client, once having obtained an OAuth access token reflecting the relevant user's consent to some set of operations (captured in a scope), used that token on its CONNECT messages to an MQTT broker. In the end, he used the existing MQTT username parameter to carry the token.

Coincidentally, I was thinking about the same integration yesterday, though focussed less on how to bind the tokens to the MQTT messages and more on how we might leverage MQTT's existing pub/sub model to get the token to the Client.

Something like
  1. Client sends the username/password on the MQTT CONNECT message
  2. Client sends SUBSCRIBE message for a topic of 'access_token/#'
  3. MQTT broker responds with a PUBLISH message carrying the access token.
  4. Client discards password, stores access token
On subsequent interactions with the broker (as long as the token hadn't expired), the client would use the token instead of the username/password combination.
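
A rough sketch of this flow using the paho-mqtt Python client (1.x style API; broker host, credentials & topic names are placeholders):

```python
import paho.mqtt.client as mqtt

# Sketch of the issuance flow above: connect once with username/password,
# subscribe to 'access_token/#', and store the token the broker publishes
# back. All names below are invented for illustration.
stored = {}

def on_connect(client, userdata, flags, rc):
    client.subscribe("access_token/#")              # step 2

def on_message(client, userdata, msg):
    stored["access_token"] = msg.payload.decode()   # step 4
    # ...discard the password and reconnect using the token from now on

client = mqtt.Client(client_id="sensor-42")
client.username_pw_set("alice", "s3cret")           # step 1
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.loop_forever()                               # step 3 arrives as a PUBLISH
```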

The above pattern, namely the client directly exchanging a username/password for an access token, mirrors the OAuth Resource Owner Password Credentials grant type - allowed for in OAuth but not recommended.

There are definite advantages to instead leveraging a web browser (as Paul indicated he had done) for the token issuance, including

  1. the password need not be presented to the broker
  2. allows for federated user authentication (ie at some other IdP)
  3. allows for a detailed & granular consent UI
But perhaps the above issuance model would be easier to layer onto simple MQTT clients - leveraging as it does the existing flow.






Wednesday, October 30, 2013

Smartthings - binding thing to user

Some 'things' need to be bound to a user identity in order to allow for differentiated authorizations or reporting.

What follows is how this binding works for Smartthings.

I placed my order at Smartthings.com. When the package arrived, it included a registration code. I assume all the thing serial numbers (or equivalent identifiers) are indexed by that code as part of the packaging & shipping process.

Though I'm not sure, I don't believe the registration code was bound to my account at issuance time (even though it could have been, because I was logged in at the time). Binding happens post-shipping.




I then download and install the Smartthings app (here Android). Below is the initial login screen


After logging in, I am prompted to plug in the Smartthings hub, connect it by ethernet to the home router, and then enter the registration code into the app


The user account is bound to the code, and the code is bound to device identifiers - so, transitively, the user account can be bound to the device identifiers.

It is through the temporary identifier of the code that Smartthings is able to know that a given motion sensor is under my account, and so apply a permissions model based on this binding, ie I can manage it but nobody else can (unless I stipulate so). Even more basic, when the Smartthings cloud receives sensor data from a particular thing/hub combination, it is able to associate that data with my account.
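
A toy illustration of that transitive binding (all identifiers below are invented):

```python
# The code maps to device ids at packaging time, and to my account at
# registration time; resolving a device to its owner goes via the code.
code_to_devices = {"REG-1234": ["hub-01", "multi-07", "outlet-03"]}
code_to_account = {}

def register(code, account):
    """Bind a registration code (and so its devices) to a user account."""
    code_to_account[code] = account

def owner_of(device_id):
    """Resolve incoming sensor data to the owning account via the code."""
    for code, devices in code_to_devices.items():
        if device_id in devices:
            return code_to_account.get(code)

register("REG-1234", "paul@example.com")
assert owner_of("multi-07") == "paul@example.com"
```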

I recently purchased a Fitbit Aria scale. The Aria used a completely different mechanism to associate the scale with my existing Fitbit account. The area seems ripe for usability testing.

Friday, October 18, 2013

Assurance over time

Consider a traditional authentication event.

The user logs in at a given time t0 and establishes some initial level of assurance. As time goes by, that assurance drops, the rate of decline dependent on the context, ie public kiosk, etc.

A graph of assurance over time looks something like this:

To prevent this decline, you can require that the user re-authenticate whenever the assurance hits some threshold a_t.

An alternative to using explicit additional logins (as above) is to maintain assurance above a_t by monitoring implicit factors such as location, continuous typing, facial recognition, etc.

NB: the 'how' of detecting & monitoring these passive or implicit factors clearly demands some new pieces on the network. Depending on where this functionality sits, we may also need new mechanisms & protocols for communicating the information around.

From the user's PoV, this passive model has advantages - minimizing as it does the pain of explicit logins.

Ultimately, using a combination of explicit & implicit authentication factors appears to be emerging as the optimal balance of security & usability.
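
A toy model of the curves above (the decay rate and threshold values are invented for illustration):

```python
import math

A_T = 0.6     # re-authentication threshold a_t
DECAY = 0.1   # per-minute decay rate; context dependent (public kiosk etc)

def assurance(a0, minutes_since_event):
    """Assurance decays from the level a0 established at the last event."""
    return a0 * math.exp(-DECAY * minutes_since_event)

a = assurance(1.0, 8)   # 8 minutes after the login at t0
if a < A_T:
    a = 1.0             # explicit re-authentication restores assurance...
# ...or an implicit factor (location, typing cadence, face) observed now
# could instead top assurance back up without interrupting the user.
```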

I don't even drink milk!

Along with the smart toaster, a fridge that can sense when the household is about to run out of milk and send a compensating order to the local supermarket is often presented as home automation's killer app.

Couple of things
  1. Ahem, Webvan?
  2. Lactose intolerance is a serious health issue for many
Now a fridge that could monitor 'beer' metrics - that's a use case!

And it's more interesting from an identity perspective. 

Any fridge can order milk. Only fridges that exceed the local age of majority can order beer or wine.

Or more precisely, only fridges acting on behalf of a human who exceeds the local age of majority can order beer or wine.

That demands an identity model in which
  1. The fridge can obtain an identity token for individual users - these to be attached to the 'Buy beer' API calls to the local depanneur
  2. The token contains (or references) the user's 'age' attributes 
  3. The token is issued from an identity provider that is accredited to issue age attributes
  4. The depanneur can validate the token as coming from a trusted authority, look at the age attribute, and so determine that the fridge's request can be authorized.
OpenID Connect is tailor-made for the above set of requirements.
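
A sketch of what step #4 might look like at the depanneur, using the PyJWT library and the standard OIDC 'birthdate' claim (the trusted issuer list, key & audience are placeholders):

```python
from datetime import date
import jwt  # PyJWT

TRUSTED_ISSUERS = {"https://idp.example.ca"}  # accredited age authorities
IDP_PUBLIC_KEY = "<the IdP's RSA public key>"  # obtained out of band

def can_buy_beer(id_token, age_of_majority=18):
    # Validate signature & audience; reject tokens not minted for us
    claims = jwt.decode(id_token, IDP_PUBLIC_KEY, algorithms=["RS256"],
                        audience="depanneur-api")
    if claims["iss"] not in TRUSTED_ISSUERS:
        return False  # not an accredited authority
    # 'birthdate' is a standard OIDC claim (ISO 8601 YYYY-MM-DD)
    born = date.fromisoformat(claims["birthdate"])
    today = date.today()
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= age_of_majority
```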

Wednesday, October 16, 2013

Users, groups & things

Below is an attempt to tease out a taxonomy of Internet of Things use cases - differentiating based on

  1. on whose behalf the thing acts (whether a data subject or not)
  2. who the data subject is of the data the thing collects & shares

A Fitbit Flex, Jawbone Up, Nike Fuel Band etc all collect the data of a given single user. It is that same user that the thing acts on behalf of. This makes for a pretty straightforward identity model - single device, single user.

At any given time, a smart scale like a Withings or Fitbit Aria is also representing a single user (and sharing that user's data). But, unlike the wearables above, for this sort of thing that user can change over time. Consequently, such a thing has to support multiple different users - including UI that allows users to select themselves from a list. Ideally, such a thing (and associated apps) would also support differentiated consent/authorization for all the different users. For instance, should my wife be allowed to see my weight data (and surreptitiously try to curtail my beer consumption as a result)? That's not a world I want to live in - do you?

The archetypical 'smart toaster' would need this sort of identity model if it were to allow each breakfast eater to have personalized toast patterns.

A thermostat like a Nest, or a fridge, etc collects the data associated with a group of users (the family members) and can be said to act on behalf of the user that bought, installed, configured & registered it (not the teenager in all likelihood). Because the data is aggregated, the privacy risks are different than for a device that acts only for single users.

Things can also act autonomously, ie be 'doing their thang' not on behalf of a user of that thing, but for themselves (or, more precisely, some unnamed admin or even a corporate entity).

A residential electricity meter, like the Nest, collects data associated with a group of users (the family) but, unlike the Nest, is not under the governance of the homeowner. Instead the meter is owned and operated by the electricity provider. While the provider may give access to the homeowner, its fundamental purpose is to determine how much to charge per month.

Likewise, nobody would argue that a speed camera snapping a pic of me (only slightly exceeding the limit, which everybody agrees is ridiculously low on that stretch of road) is acting on my behalf. It's operating on behalf of the local region or county tax revenues. Along the other axis, those cameras can focus on (and differentiate) individual drivers or post-game hockey final loss mob members - and so create privacy concerns.

And probably the biggest use case (in number of sensors & perhaps $$) - all those factory floor robots, air quality sensors, street lights & water pipes silently reporting operational status.



Monday, October 14, 2013

Internet of Smells

If I've learned anything in 20 years of marriage, it is this

Hockey equipment must be aired out after use



Unfortunately, every time the equipment is removed from the bag (enabling airing and thereby not negatively impacting marital complacency) there is a risk that it won't all be placed back in - with almost certain consequences for pain & bruising during the next game.

What if every piece of equipment were able to report on its presence in (or more critically absence from) the bag as I pull it out of the garage?

How would the system learn that a particular piece of protective equipment was meant to be in the bag and, if not, alert me to that fact?

One model would be for me to manually specify a rule, ie 'At 6.15am on Tuesday & Thursday mornings, alert me if any of the 'Goalie Equipment' group of sensors are not within 1 m of the 'Goalie Hockey bag' sensor'.

Sounds like a lot of work (for me).

Alternatively, the system could, over time, recognize the implied pattern above and build the rule itself, alerting me whenever that pattern was violated.
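
A sketch of what the manually specified rule might look like (the sensors object and its distance call are hypothetical):

```python
from datetime import datetime

GOALIE_EQUIPMENT = ["chest-pad", "blocker", "trapper", "pads", "mask"]
BAG = "goalie-hockey-bag"

def check_bag(sensors, now=None):
    """At 6:15 on Tue/Thu mornings, flag any equipment > 1 m from the bag.

    'sensors' is a hypothetical service exposing distance(a, b) in metres.
    """
    now = now or datetime.now()
    if now.weekday() in (1, 3) and (now.hour, now.minute) == (6, 15):
        missing = [item for item in GOALIE_EQUIPMENT
                   if sensors.distance(item, BAG) > 1.0]
        if missing:
            print(f"ALERT - not in the bag: {', '.join(missing)}")
```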

Friday, October 11, 2013

Consent anti-patterns for Internet of Things

User control will be key for many (but not all, of course) Internet of Things use cases. A key piece of such control will be collecting the user's consent for

  1. a given thing to act on their behalf
  2. a given application to access/control the thing

What follows are examples of (to my mind) 'bad' consent UI.

NB: Although these examples are not IoT-specific, I believe the principles of what makes a good consent UI are still applicable.

Anti-pattern 1 - Optional vs required permissions


In the IoT context, what permissions are mandatory for the thing to have, and which are merely 'nice to have'? For example, if the thing is merely a sensor, don't ask for 'write' permissions.

Anti-pattern 2 - Unclear consent persistence


Will I be interacting with the thing going forward? Or is this a one-night stand, e.g. allowing a bus stop to add the bus schedule to my calendar? The consent should reflect this.

Anti-pattern 3 - Confusing the user as to where they are


Make sure the user knows to whom they are giving permissions.

Here is another example of confusing UI/UX. The Saga lifestreaming app has sent me to Fitbit's site so that I can authorize Saga's access to my step data. But, because this is happening in a browser window embedded within the Saga app (as opposed to the default phone browser), it's not clear that I'm actually presenting my Fitbit password to Fitbit.




Of course, just as important as providing the user intuitive mechanisms for granting permissions to things & applications is providing them a mechanism to view & manage (eg revoke) those permissions over time.

Anti-pattern 4 - Non-exhaustive list

Will the Saga app actually be able to see everything else other than my password?




Wednesday, October 09, 2013

Complexity asymmetry & constrained devices

Security protocols differ in how they distribute complexity between 'asserting party' and 'relying party' - and so will differ in how applicable they are to use cases where the two actors have unequal capabilities.

SAML assumes a relatively equal burden for the IdP & SP, e.g. both are assembling XML messages and may be signing those messages.

OAuth 1.0a, with its multiple tokens and client crypto requirements, likewise placed a relatively high burden on the client.

TLS can work in a symmetrical mode, where both client & server are authenticated (and share the associated burden relatively evenly), and in another mode where the client gets off easily (but doesn't get authenticated).

OAuth 2.0, and so OpenID Connect, was designed to move most of the complexity off the client. Being an OAuth 2.0 or OIDC client is pretty simple - assemble some HTTP messages, send them to the AS via the browser, keep track of some tokens, and add those tokens as headers on API calls.

So, for a constrained thing, OAuth 2.0 and OIDC make more sense than SAML (like I need to say that).

When you pair OAuth 2.0 with server-only authenticated TLS (or DTLS?), you get

  1. client authentication (via OAuth 2.0)
  2. server authentication (via TLS)  
  3. data confidentiality (via TLS)
  4. data integrity (via TLS)
and, critically, keep most of the complexity off the thing and instead on the server or gateway that is likely more capable of bearing the burden.
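
A sketch of how little work is left on the client under this pairing (endpoints & credentials below are placeholders; TLS authenticates the server and protects the channel, the token authenticates the client):

```python
import requests

# 1. Obtain a token over server-authenticated TLS (OAuth 2.0 client
#    credentials grant; the AS endpoint & secret are invented here).
token = requests.post(
    "https://as.example.com/token",
    data={"grant_type": "client_credentials",
          "client_id": "thing-42", "client_secret": "s3cret"},
).json()["access_token"]

# 2. Attach the token to subsequent API calls - that's the whole job.
requests.post(
    "https://api.example.com/readings",
    headers={"Authorization": f"Bearer {token}"},
    json={"temp": 21.5},
)
```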

Client-authn TLS provides all of the above security characteristics, but with a different distribution of complexity.

Tuesday, October 08, 2013

Identities - Thing & User

Things will ship from the factory with an identity - either burnt into the firmware or embedded into the software.

In home automation, wearable, & healthcare use cases, that thing identity will need to be associated with or bound to a user identity (or multiple user identities). Once this association is made, then any subsequent message from the thing can be understood to be occurring 'on behalf of' that particular user.

In theory, this binding could happen before purchase - the manufacturer/retailer provisioning some sort of identity credential before shipping to a customer (with an existing account) - but more likely it happens after the thing is brought home.

The binding mechanism can take different forms - it will depend on whether

  1. the thing has a UI
  2. the thing interacts directly with its Cloud, as opposed to via some computer/phone etc

The different mechanisms will place different usability burdens on the User.

For the Fitbit Flex, the binding happens via the desktop app


Once I am logged into my Fitbit user account from my laptop, the Flex exchanges messages with the USB dongle and presumably passes its device identifier - this is then passed by the desktop app to the Fitbit cloud for the association to be recorded.

When I receive my Smartthings kit (any day now), it appears it will be a more manual mechanism to bind particular devices to my user account



Regardless of how it happens, after binding, the cloud associated with the thing is able to say that 'Thing with identity X is acting on behalf of User(s) with identity Y(Z)'.

How that association is manifested can also differ.

Very non-optimal (though I'm sure it exists) would be for the user's password to be handed to the thing (or to a proxying gateway) and used on its API calls.

Better would be an OAuth-type model, something like the following

  1. Thing asks its cloud server for an access token, presenting its own identifier/secret
  2. Cloud server logs user in and says 'You OK with this?'
  3. Cloud server issues token to Thing (and remembers the pairing of Thing & User)
  4. Thing uses token on its API calls to Cloud
The advantages of this model are
  1. The user can be selective and granular in how they permission their things
  2. The user can revoke the token when relevant (lost, stolen or sold thing)
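
The closest standardized realization of steps 1-3 today is the OAuth 2.0 Device Authorization Grant (RFC 8628); a sketch, with placeholder endpoints & identifiers:

```python
import time
import requests

AS = "https://cloud.example.com"  # the thing's cloud, playing the AS role

# 1. Thing asks for a token, presenting its own identifier
dev = requests.post(f"{AS}/device_authorization",
                    data={"client_id": "thing-42"}).json()

# 2. Cloud logs the user in & asks 'You OK with this?' out of band
print("Visit", dev["verification_uri"], "and enter", dev["user_code"])

# 3. Thing polls until the user has consented; cloud remembers the pairing
while True:
    time.sleep(dev.get("interval", 5))
    resp = requests.post(f"{AS}/token", data={
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": dev["device_code"], "client_id": "thing-42"})
    if resp.ok:
        token = resp.json()["access_token"]  # 4. used on API calls to the cloud
        break
```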

Monday, October 07, 2013

OAuth for multi-thing coordination use cases

Let's say you have a Fitbit wrist band for counting your steps and a Nest thermostat for controlling your home's temperature (if you have both you almost certainly also have 3-4 iDevices but let's ignore that for now).

Both are great. Both give you visibility & control into areas of your life you were previously unaware of (blissfully or otherwise).

But both are oblivious of the other - each operates in its own silo, defined by proprietary APIs and likely different choices of radio protocol.


Because of this balkanization, interesting use cases that require coordination of the Fitbit and Nest are challenging or not even possible. For instance, if, after a brisk walk on a hot day, I hope to return home to a nicely chilled house - my steps as counted by the Fitbit would need to be taken as input to a rules engine that could send an appropriate temperature-adjustment message to the Nest.

In the absence of the Fitbit and Nest stacks directly communicating, the only solution is to layer on the necessary coordination function, as shown below



The coordination layer would use whatever external APIs Fitbit and Nest made available in order to a) query my steps via the Fitbit cloud and b) send a directive to the Nest via its cloud to lower the temperature.

Of course, the two different API calls would need to be authenticated as

a) coming from a valid API client
b) compatible with the user's preferences

OAuth 2 satisfies both of the above requirements - giving the user the ability to

a) tell Fitbit what parties can see their step data
b) tell Nest what parties can control the thermostat

This sort of IFTTT-style coordination is a key value proposition of emerging IoT platforms like Xively, Thingworx etc. Application developers can build on the platform without worrying about the specifics of how different devices integrate into the platform, and so leverage the resultant cross-device coordination capabilities. Of course (unless the platform scrapes the login screens), Fitbit and Nest have to buy into the above scenario and actively allow 3rd-party platforms to call their APIs (as opposed to only their own native applications).
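
A sketch of the coordination layer's two OAuth-protected API calls (both endpoints are invented; each token reflects the user's consent collected at the respective service):

```python
import requests

# a) query my steps via the Fitbit cloud, using the token the user
#    consented to at Fitbit (URL & token are placeholders)
steps = requests.get(
    "https://api.fitbit.example.com/me/steps/today",
    headers={"Authorization": "Bearer <fitbit-token>"},
).json()["steps"]

# b) if the user's rule fires, direct the Nest (via its cloud) to cool
#    the house, using the token the user consented to at Nest
if steps >= 3000:  # the 'chill the house after a brisk walk' rule
    requests.put(
        "https://api.nest.example.com/me/thermostat/target",
        headers={"Authorization": "Bearer <nest-token>"},
        json={"celsius": 20},
    )
```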

If the Fitbit & Nest business folks were to get together to 'do lunch' and agree that there was value for both in more directly working together, then a different (and arguably simpler, at least in the near term) integration is possible. This is shown below



Here the user's step data is sent from Fitbit (not necessarily from the device itself, as shown) to Nest. It is at Nest that the rule 'drop temperature when steps equal 3000' was defined - Fitbit knows only that the user has authorized this integration, the consent manifested in the issuance of an OAuth access token from Nest to Fitbit. By including this access token on its API calls to Nest, Fitbit gives Nest the information required to look up the user in its systems. The rule kicks in, and the Nest cloud directs the appropriate thermostat (mine, not yours) to lower the temperature accordingly.

NB This model is 'simpler' than the platform model above because there are only two services to coordinate (and so one less account for the user to manage). In the long run however, such pair-wise coordination won't scale well. What happens when I want a G&T ready on the counter when I walk into my comfortably cool house?

Of course, beyond agreeing to use OAuth, Nest & Fitbit would also need to agree on the specifics of the API by which the step data was communicated. And while shown here as the step data being pushed from Fitbit to Nest (perhaps triggered by a subscription), it could be a pull from Nest, based on some polling schedule.

The mirror image integration is also possible



Here the coordination logic resides with Fitbit, and it is Nest that is relatively dumb. Nest would probably feel that there is a big difference between granting 'write' access to Fitbit here, versus the previous scenario, in which Fitbit needed only to grant Nest 'read' access. I guess the BD folks will work it out over drinks at lunch.

However architected, OAuth 2.0 stands to be a key enabler of multi-device coordination in the Internet of Things.

Wednesday, June 12, 2013

Elvis-like, the data has left the building

Enterprises want to ensure that their business data is accessed only by those who have a valid right to do so, ie those that require access in order to do their jobs. When the business data is only ever stored on a server, behind a web page or an API, restricting such access is relatively easy: authenticate the user sending the request for the data, check their roles, determine whether the roles are such that the user has a justifiable need to get at the data and, if so, approve the request and send the data back to the requesting client. (When the identity store (where the roles are kept) is remote from the business data (as is the case when the data is held by some SaaS), the mechanisms (and standardized protocols) might differ, but the logic remains the same.)

A client sends a request for the data - this request is intercepted by some sort of enforcement mechanism (a Policy Enforcement Point (PEP) in the lingo). A co-located Policy Decision Point (PDP) determines which policy is relevant for the requested data, and interprets that policy to decide whether or not to grant the request. If the decision is 'yes', the data is served back to the client (either as HTML or JSON, depending on the nature of the client).


All good - but as soon as you allow the business data, once released by the server, to be stored by the requesting client beyond the original session, the original access control check is no longer sufficient and must be supplemented. Despite the additional complexity, mobile native applications and the desire to enable offline usage (the 'CEO sitting in seat 3B' use case) push the enterprise towards supporting this sort of client-side storage.

The Mobile Information Management (MIM) proposition is that the business data delivered down to the clients (the native applications) carries with it (implicitly or explicitly) the policies governing its access & usage - reminiscent of DRM. Before the data is delivered down to the mobile client, appropriate policy is bound to the data - the policy will stipulate what users and/or applications can access the data, what they can do with it, and restrictions on subsequent sharing. Once on the device, only if the policy stipulations are met will the data be made accessible to particular applications, and what they subsequently do with that data will also be accordingly constrained.

But merely attaching some rules to a document or PowerPoint (PRISM anyone?) doesn't actually restrict access to that data. It would be a polite hacker who, seeing a rule forbidding her access, respected that rule. There needs to be a mechanism on the device, comparable to the PEP in the diagram above, to enforce the policy, ie prevent all data access unless the policy is met.

You also need something comparable to the PDP to read & interpret the policy associated with a given piece of data. But whereas in the above model the policy decision was whether or not to release the data itself, in this case the data has already been released - it's already on the client, after all. So what is the decision?

In MIM, the policy enforcement mechanism (the PEP that prevents unauthorized access) is appropriate encryption of the data, and the policy decision (made by the PDP determining what is authorized access) is whether or not to release a key that can be used to decrypt the data.

An abstract model is shown below

On the right is some piece of business data, encrypted to prevent just anybody with access to the device on which it sits from being able to access the data. On the left is a decryption key that will decrypt the encrypted data and so make it accessible to application usage. For an application to be able to 'get at' the data, it will need to gain access to the decryption key.

Sitting between the two (the encrypted data and the corresponding decryption key) is the policy enforcement & decision infrastructure that ensures the two will 'meet' if and only if the policy is satisfied. Policy associated with the encrypted business data stipulates the contexts under which the decryption key can be released, as well as additional constraints should that happen. Taken as input to the 'key release' decision are the various current contexts, ie what app is trying to access the data, on behalf of which user, when, where etc.

If the policy decision is 'yes', then the PEP releases the decryption key to the application, along with any additional constraints (read but no write, no sharing etc). Now armed with the decryption key, the application uses it to decrypt the data and does whatever it does.
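
A miniature of the abstract model (policy, contexts & key storage are all grossly simplified placeholders), with the encryption itself playing the PEP role and the key-release decision playing the PDP role:

```python
from cryptography.fernet import Fernet

# The 'PEP': the data is useless without the key
key = Fernet.generate_key()
blob = Fernet(key).encrypt(b"Q3 sales forecast")  # data as delivered to the device

# A toy policy bound to the encrypted data
policy = {"allowed_users": {"alice"}, "allowed_apps": {"corp-viewer"}}

def release_key(user, app):
    """The 'PDP': release the key only if the access context satisfies policy."""
    if user in policy["allowed_users"] and app in policy["allowed_apps"]:
        return key
    return None

k = release_key("alice", "corp-viewer")
if k:
    print(Fernet(k).decrypt(blob))  # the app can now use the data
```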

The above is abstract - how to make it concrete? Specifically

  1. how is the policy bound to data?
  2. how & where is the data encrypted?
  3. with which key?
  4. how & from where is the decryption key obtained? 
  5. how are PEP & PDP roles distributed? 
Next time I'll propose an architecture for the above that leverages 
  • REST APIs
  • OAuth & OpenID Connect as mechanisms for authenticating & authorizing clients to such APIs

Wednesday, May 29, 2013

Discovery for IoT

The premise is that the IoT will have us awash in services advertising their availability to us.

So, how to filter this sea down to something useful & manageable?


Via filters determined (mostly) by the context of everything else we have going on - what searches we've performed, what events are in our calendar, what we've recently bought, listened to etc etc.

Those services that meet the criteria are allowed to prompt us (via applications?) for interaction.

Friday, May 24, 2013

Identity, application models & the Internet of Things

In a blog post entitled 'Mobile apps must die', Scott Jenson argues that the Internet of Things (and the associated implication of having to interact with all the 'things') will make the native application model impractical, and push application development back to the browser.

I buy the argument, will repeat some of it and will try to tease out some of the identity implications.

First, a bit of a recap of Scott's argument (or my interpretation at least)

  • Whereas on a desktop we might have had ~10 installed apps, on a phone or tablet we might have ~100. Users have to manage this list. It is trivially easy to install apps from the app stores. That’s great from the app developer’s PoV - it minimizes the pain of installation and so allows Users to play and experiment. But from the User’s PoV there is a price to be paid for easy experimenting – the application remains. SSO between these apps helps, but the problem is bigger than just authentication.
  • Offline mode will become an antiquated concept as connectivity becomes ubiquitous. ‘CEO on a plane’ will disappear as an important use case when every plane has wifi. Consequently, the current advantage native has over browser models with respect to supporting offline via device storage will become less relevant.
  • As more and more objects become connected (IoT), the nature of mobile applications (through which we’ll interact with those objects) will have to change accordingly. When my fridge, dryer, furnace, air conditioner, microwave, thermostat etc are all connected and desperately want to interact with me – do I want a unique app for each of them? And what about objects outside the house – coke machines, point-of-sale terminals, bus stop schedules, restaurant menus, gas pumps etc?
So the Internet of Things would push us to have 1000s of native applications on our devices, but that would place a completely unrealistic management burden on the User – installing, authenticating, sorting, updating, & deleting applications when no longer needed etc.

The problem is that the current native application life-cycle looks like

  1. Discover
  2. Install
  3. Authenticate
  4. Use
  5. Update
  6. Remove
This sequence places a heavy burden on the user and is very static – not particularly applicable to a ‘Just in time’ model (as Scott puts it) where I might interact with an application once and never again. 

Clearly this isn’t viable in an IoT world where we will constantly be presented with previously unseen connected objects. We’d spend our days installing apps and, by the time we were ready to interact, the opportunity would have passed (somebody else would have grabbed the last Dr Pepper etc).

IoT demands an application interaction model that is far more dynamic, something like

  1. Sense – my device must be constantly on the lookout for IoT connected objects and, based on rules I’ve defined, determine whether & how best to interact with them
  2. Notify – based on rules I’ve defined, prompt me to know that I can now interact with the object
  3. Authenticate – the object may need to know who I am, but this obviously has to be seamless from a UX PoV. (the object may have to be able to authenticate to me as well)
  4. Use – I interact with the object. This can’t require an ‘install’; instead, whatever unique application functionality is needed must be downloaded and run in an existing app designed with this sort of flexibility – ie a web page running in a browser
  5. Cleanup – as there was no install, there are no artifacts (except perhaps some state to simplify the next interaction) to be cleaned up, ie no uninstall
The Internet of Things would appear then to be pushing us towards a future where

  • The pendulum swings back to the browser (& so HTML5 comes into its own)
  • The importance of browser means Web SSO remains relevant
  • For Web SSO, SAML gives way to OIDC due to its support for Javascript-powered apps running in the browser and pulling data from APIs offered up by the 'things' (or network endpoints on their behalf)
  • SSO (in the sense of facilitating seamless user authentication to all the various IoT objects) is absolutely critical. 
The last can be summarized as 


IoT won't scale without SSO to the T.

Imagine I have a smart toaster that I want to interact with on my phone to determine if I need to empty the crumb tray (this needs to happen, Science!!)

How

  1. does the toaster advertise its presence to the phone?
  2. is the user invited to interact with the toaster?
  3. does toaster data (crumb tray status etc) get sent to the toaster cloud for analysis?
  4. does toaster data (crumb tray status etc) get sent to the phone for display?
This diagram is a really rough attempt at a model



The toaster serves up crumb tray data (& some Javascript) to the device browser. The Javascript interacts with an OAuth Authorization Server (we can get the User's consent at this point) and obtains an OAuth access token that represents the combination of toaster & User. The Javascript then uses the access token to upload the data to an associated TAAPI (Toaster Analysis API) for analysis, and then renders application UI to the user based on that analysis (eg ALERT - CRUMB TRAY DANGEROUSLY FULL).
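
A sketch of the token acquisition & upload (all endpoints invented; in practice this logic would run as Javascript in the browser - shown here in Python for consistency with the other sketches):

```python
from urllib.parse import urlencode
import requests

# 1. The browser is sent to the AS to log in & consent; the access token
#    comes back on the redirect fragment (the OAuth 'implicit' grant used
#    by in-browser apps of this era). All names below are placeholders.
authz_url = "https://as.example.com/authorize?" + urlencode({
    "response_type": "token", "client_id": "toaster-app",
    "scope": "toaster.readings", "redirect_uri": "https://toaster.local/cb"})
print("Send the browser to:", authz_url)

# 2. With the returned access token, upload crumb-tray data to the TAAPI
requests.post("https://taapi.example.com/readings",
              headers={"Authorization": "Bearer <access-token>"},
              json={"crumb_tray_pct": 93})
```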

Wednesday, May 01, 2013

The Quantified Self & Application Scale

As part of the self-quantification (QS) movement, there would appear to be a pairing of measuring device & associated application for every aspect of personal health - diet, weight, prescription medicine, fitness, blood, breathing, GI tract health, etc.

Devices measure the various Xs (some passively, some actively), and the associated application, once installed onto a phone or tablet, displays and analyzes the collected data for the user - presumably to help them make health decisions in the direction of longer life (and so a longer-duration customer for the application provider).

When each health aspect has its own device, and each device has its own native application - the user will necessarily bear the burden of installing, managing, and authenticating each native application.

That may be an acceptable burden for somebody with 2-3 separate devices (and so 2-3 native applications). But what of the neurotic hypochondriacs? Or the paranoid new parents obsessing over each cough and sniffle of their new baby - both of whom might have > 10 health monitoring applications?

This highlights a key problem with native applications - their lifecycle (discover, install, login, use, manage, remove) doesn't scale particularly well for the user. App stores make the first two trivially easy (arguably too easy), but don't help much with the steps that follow.

The problem will only get worse when it's not only personal health monitoring devices that we will want to be able to interact with, but everything. When my fridge, washing machine, dishwasher, printer, garage door opener, mailbox, and TV remote are all collecting data and clamoring for my attention to view, analyze, and act on that data - do I want a separate native application for each of them? No I do not. 

If only there were an alternative to the native application model - one where application functionality can be downloaded in real-time, rather than a priori installed.....


Tuesday, April 30, 2013

Which begs the question ....

In YAPAUOFA (Yet Another Post About Using OAuth For Authentication) I argue (following the lead of John and Vittorio) that the issue with trying to use OAuth out of the box for authentication is that a Client can use a token it obtained 'fairly' in order to impersonate the corresponding user at some other Client.

Necessarily then, a Client has gone 'bad'.

Why then is this not an issue for the use case that OAuth was designed for, ie delegated authorization of API access? Could not such a Client also go 'bad' and do similarly malicious things?

In the delegated access use case for OAuth, all that an access token 'means' is that it allows a Client to access the corresponding User's protected resources. Clearly a Client can go bad and share this access token, and (in a bearer token model) anybody else who obtains that token will also be able to access those same resources. Critically, all these additional Clients will only enjoy the permissions allowed by the scope attached to the original token.

As John points out
In the authorization case the client can be trusted with the access token because it has no real motivation to share it.  They could give it to a third party and also grant them access to the information(protected resource), but they can just share the information anyway if they are bad
Yes, the Client can go bad and share the access token. But it already has access to the protected resources and can already perform all the evil operations it desires against those resources. The fact that a shared token will allow multiple bad Clients to perform evil operations instead of only the original bad Client probably won't matter much to the user.

So in the authorization case, if a Client goes bad - it doesn't really matter whether that evil manifests as that Client doing malicious things with the token, or rather sharing that token and allowing other bad Clients to do the malicious things. Put another way, sharing the token doesn't change the 'scope' of the possible damage.

But this isn't the case when basic OAuth is used for authentication. If a malicious Client can use its own token to impersonate a User at some other (non-evil) Client, then the scope of the attack expands greatly - to include all the damage that malicious Client can do at the other Client.


YAPAUOFA (Yet Another Post About Using OAuth For Authentication)


As both John and Vittorio have written (extensively) on the matter, I will be brief.

The fundamental problem with using OAuth (with no additional constraints) for authentication is that it relies on the following premise:

‘If I can enable the delivery of a valid access token to a Client, then I can lay claim to the identity represented by that access token at the corresponding AS.’

So, if an attacker (acting as a normal OAuth Client, albeit one with evil in its heart) can obtain a valid access token for a User Bob from a valid AS, it can present that token to a different valid Client and have the following conversation:

Attacker->Client: Here is a token
Client->AS: tell me about the User associated with this token
AS->Client: that token is good and refers to Bob
Client to itself: Hmm, well it meets the criteria ……
Client->Attacker: Welcome Bob
Attacker->Client: Err, hi, yes well of course I'm Bob. Let's start moving $$

The core problem is that the valid Client is unable to distinguish between a token previously handed to the Attacker and a valid token being delivered via the real Bob.

So why not add something to the token that makes the distinction easy for the valid Client? The conversation now becomes:

Attacker->Client: Here is a token
Client to itself: Whoaaa, this token is not for me!!!
Client->Attacker: Nice try bud

That is what OpenID Connect does: the OIDC id_token carries the audience (in the aud claim) to which the token was issued, preventing it from being presented elsewhere. If an OIDC Client is presented with an id_token with an audience other than itself, it will stop the login process.
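
A sketch of that aud check using the PyJWT library (the token & key below are placeholders):

```python
import jwt  # PyJWT

id_token = "<id-token-presented-to-this-Client>"  # placeholder
idp_public_key = "<the IdP's RSA public key>"     # placeholder

try:
    # PyJWT rejects an id_token whose 'aud' claim doesn't name this Client
    claims = jwt.decode(id_token, idp_public_key, algorithms=["RS256"],
                        audience="my-client-id")
    print("Welcome", claims["sub"])
except jwt.InvalidAudienceError:
    print("Nice try bud")  # the token was issued to some other Client
```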

Facebook Connect similarly restricts the audience, but does so implicitly, by requiring that the Client validate a signature calculated over the signed_request. If a Client can't validate the signature, it knows that something is up and aborts.

Persona UX

You start off at the SP

The SP sends you to the IdP (of which there is 1? there was no discovery step)


I am going to log in to Persona.



How long?
I'm in