One of my favorite movies released in 2012 was Cloud Atlas. This is not necessarily an easy movie to watch or explain.
But that is not why I bring it up.
In one of the film’s many timelines, there is a post-apocalyptic setting where civilization is very primitive. In this primitive civilization, the two main groups are an island’s main inhabitants—goat herders—and the “Prescients,” who are very advanced and seemingly from a different planet. Twice a year the goat herders and the Prescients meet to barter and exchange information.
The goat herders are extremely curious about the Prescients and how they travel so magically across the waves of the sea. “Twice a year the Prescients come bartering on waves. Their ships come creep crawling just floatin’ on the smart of the old un’s.” Tom Hanks’s narration is magical and mysterious, just as the knowledge of the Prescients seems to be to the goat herders. In a subsequent scene, the tribal elders can resist no more, and they ask Halle Berry’s character—the Prescient emissary—how their ships float on the waves.
She answers the question with complete honesty: “Fusion Engines…..” Everyone in the room nods, and the term “Fusion Engines” gets passed around the room as if it were obvious and plain, as if the answers to the mysteries of the old ones had finally been revealed.
Tom Hanks goes on to narrate that no one wanted to ask what a “Fusion Engine” was because they didn’t want to look stupid in front of the gathering.
Tech Talk and Fusion Engines
OAuth, XACML, Federated Naming….. My point is that, as technologists, we sometimes love the mystery and complexity of the language we use to talk about the trends and information we are discussing. It is hard to avoid. This is complex stuff. Sometimes no words yet exist to clearly convey to all concerned how things really work.
Fusion Engines…. Just nod your head and act like it is normal. The reality is, there is probably no one who knows exactly what all of this technology surrounding Identity and Access Management is. Further, the people who do know are actually happy to tell you and wouldn’t think anyone stupid for asking.
So if a “Fusion Engine” moment comes up for you, don’t be hesitant to ask what is really being talked about. It helps everybody.
CA Technologies acquires Layer 7, MuleSoft acquires Programmable Web, 3Scale gets funding
It is clear that the API Economy is kicking into gear in a big way. Last week, Intel announced its acquisition of Mashery; this week, CA Technologies announced its acquisition of Layer 7, MuleSoft announced its acquisition of ProgrammableWeb, and 3Scale closed a funding round of $4.2M.
Money is flooding into the API Economy as the importance of APIs only heightens. Expect this trend to continue.
The upside of this flurry of activity is the focus being given to the API Economy.
But here is my assessment.
CA’s acquisition of Layer 7 doesn’t necessarily bode well for Layer 7 or its customers. CA, as a large vendor, will probably take longer to define and deliver on the roadmap than Layer 7 would have independently, but it might put far more power behind that roadmap and its execution. Layer 7 needs an upgrade and needs to move to the cloud. CA has a clear cloud strategy that it executes on—look at IAM and Service Management, where a large portion of the products is available as a cloud service—so there is strong potential for CA to put far more pressure behind the required move of Layer 7 to the cloud. Let’s see what happens there.
MuleSoft’s acquisition of ProgrammableWeb is a little weird. John Musser is an independent, well-spoken representative of the API Economy. MuleSoft has an agenda with its own platform. Does MuleSoft let Musser continue to be an independent spokesperson? Where does this lead? All answers unknown.
3Scale has closed a funding round of $4.2M. It plans to use the funds to add more extensions to the product and to grow its international distribution.
Lots of activity here. Curious to see what happens next.
However, one thing is clear: The API Economy is going mainstream.
From partnership to acquisition
Let there be no confusion: Intel is a hardware company. It makes microchips. This is its core business. History shows that companies do best when they stick to their roots. There are exceptions. At the same time, Intel has always dabbled in software at some level, mostly in products that support the chip architecture: compilers, development tools and debuggers. From time to time, however, Intel ventures into the software business with more serious intentions. Back in 1991, Intel acquired LAN Systems in an attempt to get more serious about the LAN utility business. This direction was later abandoned and Intel went back to its knitting as a chip vendor.
Recently, Intel has again become serious about being in the software business. Its most serious foray was the purchase of McAfee in 2010 to the tune of some $7.6 billion, a pretty serious commitment. We wrote recently about Intel’s intent to be a serious player in the Identity Management business with its composite platform, Expressway API management. With that approach, Intel was clear that it had an “investment” in Mashery that would remain an arm’s-length relationship, best supporting the customer and allowing maximum flexibility for Mashery. In general, I like this approach better than an acquisition. Acquisitions of little companies by big companies don’t always turn out for the best for anyone.
Since then, it is clear that Intel management has shifted its view and thinks that outright ownership of Mashery is a better plan. While we agree that outright ownership can mean more control and management of direction, it can also mean the marginalization of an independent group that could possibly act more dynamically on its own. It is still too early to tell exactly how this will turn out for Intel and its customers; it will be important to watch how the organization is integrated into the company.
When things go bad, they go really bad
At KuppingerCole we use Office365 extensively to manage our documents and keep track of document development and distribution.
On April 9, 2013, Microsoft released a normal-sized Tuesday update to Windows and Office products. The only thing is, this time the update completely broke the functionality of Office 365 and Office 2013. Trying to open a document stored in SharePoint would result in a recursive dialogue box asking you to authenticate to the SharePoint server. The same thing would happen when trying to upload a document. Excel and PowerPoint documents had the same problem.
Going to the Office365 forum resulted in a bevy of customers complaining about the problem. A Microsoft tech support person was offering possible solutions, all of which were just time wasters and solved nothing.
“First, please run desktop setup by following Set up your desktop for Office 365.
If the issue persists, please remove saved login credentials from the Windows credential manager and then sign into the MS account.”
Finally, two days later a customer posted a solution.
“KB2768349 is definitely the culprit. I uninstalled this on Windows RT and login worked again across all Office 2013 RT apps. Reinstalling broke it. Uninstalling again fixed it.
Replicated on my Windows 8 desktop with Office 2013.
For the time being I have hidden KB2768349 from Windows Update until this is fixed.”
As soon as I deleted the KB2768349 update the problem went away. I also learned what “hiding” an update entails.
For those of you dying to know, here is how you fix this thing.
Control Panel > Windows Update > View update history > Installed Updates
Scroll down through the Office 2013 updates until you find KB2768349. Select it and then uninstall.
Of course, once you uninstall an update, it’s going to show back up again and try to reinstall. The way you prevent this is to “hide” the update so it doesn’t keep showing up. To hide an update, open Windows Update, right-click the update you want to hide and select “Hide update.” There you go.
So for two days the normal operation of Office365 was frustratingly broken. Now this was not just for me and my colleagues, but for everyone on the planet that used Office365 and installed these updates. At the same time, the fix applies to everyone on the planet using Office365 as well. In other words, critical apps in the cloud that go bad, go bad hard. They also heal big. Part of the deal.
I was surprised that I was the only one tweeting and complaining about it. I didn’t see one article or public comment on this major screw-up. The only place I saw any complaining was on the Office 365 forum. So glad that was happening.
Identity Management is a universal problem
When I pay my electric bill, I usually just call the power company and give them my credit card. This month I decided that I should set up auto payments on the web site and be done with it. So I opened the power company’s web site and attempted to log in. Clearly the site recognized me—the login name I usually use was accepted—but I just could not remember my password. I tried all of the normal passwords I use and none of them worked.
So I attempted to retrieve my password. The site gave me the option of having a password reset sent to my email address or answering secret questions. I opted to have it sent to my email address. I waited. Nothing showed up in my inbox. I looked in the spam folder; still nothing. I went back to the web site, and this time I opted for the secret question: “What is your favorite color?” Oh man, I don’t know. It depends on my mood and what day it is. I don’t remember what I put in there for my favorite color. OK, let’s try “Blue.” Good, that worked. Wow. I am in. Hey, this isn’t my account. WTF?
Now, I know there are two other Craig Burtons living in Utah. Apparently I had just accessed the electricity billing account of one of them by guessing both the user name and the secret question. And the answer to the secret question was “blue”?
Off the top of my head, I would say the electric company has a severe security hole.
Of course, I didn’t do anything to this account. I could see that a request to change the name and password had just been sent to his email address. I hope he followed up on it.
This was an ugly incident that could have been much uglier if I were malicious.
Here is my point: a uniform, cloud-based identity management system could be used to prevent this kind of thing. As it stands, every single web site has its own set of code used to prevent inappropriate access, a scenario bound to create the blatant hole I ran into.
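To make the contrast concrete, here is a minimal Python sketch of why a guessable secret question is so much weaker than a random reset token. The color list and function names are illustrative assumptions, not any site’s actual code:

```python
import secrets

# Plausible answers to a "favorite color" question; the real guess
# space is only marginally larger than this.
COMMON_COLORS = ["blue", "red", "green", "black", "purple"]

def secret_question_guess_space():
    # An attacker can exhaust the plausible answers in a handful of tries.
    return len(COMMON_COLORS)

def issue_reset_token():
    # A random, single-use token sent to the account owner's email has
    # 128 bits of entropy and cannot be guessed in practice.
    return secrets.token_urlsafe(16)
```

Five plausible guesses versus 2^128 possibilities: that is the gap between the hole I walked through and a properly implemented email reset.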
Of course, the other side of the coin is that if the cloud-based identity management system had a hole in it, everybody would have the hole. Then again, the fix would fix everybody. There are trade-offs, but I still think cloud-based Identity Management as a Service is where we are headed.
With the rapidly emerging cloud-mobile-social Troika coupled with the API Economy, there are so many questions about how to design systems that can allow application access to internal information and resources via APIs that will not compromise the integrity of enterprise assets. And on the other hand, how do we prevent inappropriate personal information from propagating inappropriately as personal data stores and information is processed and accessed? Indeed, I have read so many articles lately that predict utter catastrophe from the inevitable smart phone and tablet application rush that leverages the burgeoning API economy.
In recent posts, I have posited that one approach to solving the problem is by using an IdMaaS design for authentication and authorization.
Another proposed approach—that keeps coming up—is a system construct that is referred to as the “Façade Proxy.”
A place to start to understand the nature of Facades is in an article by Bruno Pedro entitled “Using Facades to Decouple API Integrations.”
In this article Bruno explains:
A Façade is an object that provides simple access to complex - or external - functionality. It might be used to group together several methods into a single one, to abstract a very complex method into several simple calls or, more generically, to decouple two pieces of code where there's a strong dependency of one over the other.
Figure 1 - Facade Pattern Design Source: Cloudwork
What happens when you develop API calls inside your code and, suddenly, the API is upgraded and some of its methods or parameters change? You'll have to change your application code to handle those changes. Also, by changing your internal application code, you might have to change the way some of your objects behave. It is easy to overlook every instance and can require you to double-check multiple lines of code.
There's a better way to keep API calls up-to-date. By writing a Façade with the single responsibility of interacting with the external Web service, you can defend your code from external changes. Now, whenever the API changes, all you have to do is update your Façade. Your internal application code will remain untouched.
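Bruno’s idea can be sketched in a few lines of Python. The class, endpoint and payload field below are hypothetical; the point is that only the Façade knows about the external service:

```python
import json
import urllib.request

class WeatherFacade:
    """Minimal sketch of the Façade idea: one class owns all contact
    with a hypothetical external weather API, so a provider change
    touches only this file."""

    BASE_URL = "https://api.example.com/v2"  # hypothetical endpoint

    def __init__(self, transport=None):
        # The transport hook lets tests, or a future provider swap,
        # replace the real HTTP call without changing any calling code.
        self._transport = transport or self._http_get

    def temperature(self, city: str) -> float:
        raw = self._transport("/weather?city=" + city)
        # Translate the provider's payload into our own stable shape.
        return float(raw["temp_celsius"])

    def _http_get(self, path: str) -> dict:
        with urllib.request.urlopen(self.BASE_URL + path) as response:
            return json.load(response)
```

The rest of the application only ever calls `temperature()`; if the provider renames `temp_celsius` tomorrow, only the Façade changes, which is exactly the decoupling Bruno describes.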
To shed even more light on how a Façade Proxy is designed, and how it can be used to address yet another problem, there is a blog post from Kin Lane. Kin is an API evangelist extraordinaire, and I learn a lot from his writings. Kin recently wrote in a blog post entitled “An API that Scrubs Personally Identifiable Information from Other APIs”:
I had a conversation with one UC Berkeley analyst about a problem that isn’t just unique to a university, but they are working on an innovative solution for.
UCB Developers are creating Web Services that provide access to sensitive data (e.g. grades, transcripts, current enrollments) but only trusted applications are typically allowed to access these Web Services to prevent misuse of the sensitive data. Expanding access to these services, while preserving the confidentiality of the data, could provide student and third party developers with opportunities to create new applications that provide UCB students with enhanced services.
Wrapping untrusted applications in a “Proxied Façade Service” framework that passes anonymous tickets through the “untrusted” application to underlying services that can independently extract the necessary personal information provides a secure way of allowing an application to retrieve a Web User’s Business data (e.g. their current course enrollments) WITHOUT exposing any identifying information about the user to the untrusted application.
I find their problem and solution fascinating, I also think it is something that could have huge potential. When data leaves any school, healthcare provider, financial services or government office, the presence of sensitive data is always a concern. More data will be leaving these trusted systems, for use in not just apps, but also for analysis and visualizations, and the need to scrub personally identifiable information will only grow.
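The “Proxied Façade Service” idea above can be sketched in a few lines. Everything here, the record shape, the field names, the class itself, is a hypothetical illustration of the pattern, not UC Berkeley’s actual framework: the untrusted app holds only an opaque ticket, and the proxy returns business data with identifying fields stripped.

```python
import secrets

# Hypothetical backend holding sensitive student records.
RECORDS = {
    "u123": {"name": "Alice Example", "email": "alice@example.edu",
             "enrollments": ["CS101", "MATH55"]},
}

class FacadeProxy:
    """The untrusted application never sees a user id, only an opaque
    ticket; the proxy resolves it and scrubs PII from the response."""

    PII_FIELDS = {"name", "email"}

    def __init__(self):
        self._tickets = {}  # opaque ticket -> internal user id

    def issue_ticket(self, user_id):
        # Issued by a trusted login flow, passed through the untrusted app.
        ticket = secrets.token_urlsafe(16)
        self._tickets[ticket] = user_id
        return ticket

    def enrollments(self, ticket):
        record = RECORDS[self._tickets[ticket]]
        # Return only non-identifying business data.
        return {k: v for k, v in record.items() if k not in self.PII_FIELDS}
```

The app gets the user’s current enrollments without ever learning who the user is, which is the whole trick.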
Finally, Intel recently announced its Expressway API Manager product suite. EAM is a new category of service that Intel is calling a “Composite API Platform.” It is so called because the platform is a composite of a premise-based gateway, which allows organizations to create and secure APIs that can be externalized for secure access, and a cloud-based API management service from Mashery designed to help organizations expose, monetize and manage APIs for developers. In its design, Intel has created a RESTful Façade API that exposes an organization’s internal information and resources to developers. It is very similar to the design approach outlined by Kin. This looks to be an elegant use of the Façade pattern to efficiently manage authorization and authentication of mobile apps to information that needs to remain secure.
Figure 2 - EAM Application Life Cycle Source: Intel
I am learning a lot about the possible API designs—like the Façade Proxy—that can be useful constructs for organizations to successfully participate in the API economy and not give up the farm.
Making an API is hard, and deciding how to do it is a tough question. A small company out of England has figured out how to let anyone make an API with just:
- A Spreadsheet
- A Datownia SaaS account
One of the activities I practice to keep up with what is happening in the world of APIs is subscribing to ProgrammableWeb’s newsletter. Every week the newsletter contains the latest APIs that have been added to the rapidly increasing list. While I seldom get through the whole list, I inevitably find one or two new APIs that are really interesting.
Recently I ran into one that has an incredibly simple and effective method of creating an API out of a spreadsheet.
The Company is Datownia.com
I now have an API with a developer portal that is driven by data in a spread sheet.
I can distribute developer keys to any developer I choose and then that developer can access the data and integrate it into any app.
Further, any change I make to the spreadsheet gets versioned and propagated to the API with just a click. To propagate the data, all I do is modify the spreadsheet and drop it into the linked Dropbox folder.
Here is what my spreadsheet looks like.
Here is what the JSON looks like when you make a RESTful call to the API location created for me by Datownia.
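Calling such a generated endpoint is a couple of lines of code. The URL, header name and key below are placeholder assumptions (the real values come from the Datownia developer portal), but the shape of the call is the same:

```python
import json
import urllib.request

# Hypothetical values; the real endpoint URL and developer key come
# from the Datownia developer portal.
API_URL = "https://api.example.com/datownia/releases.json"
API_KEY = "your-developer-key"

def fetch_rows(url=API_URL, key=API_KEY, opener=urllib.request.urlopen):
    """Fetch the spreadsheet-backed rows as parsed JSON. The opener
    parameter exists only so the HTTP call can be stubbed in tests."""
    request = urllib.request.Request(url, headers={"X-Api-Key": key})
    with opener(request) as response:
        return json.load(response)
```

Each spreadsheet row comes back as one JSON object, so any app can consume the data without ever seeing the spreadsheet.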
I have been talking a lot about companies that manage already existing APIs. But what about organizations that need to create APIs?
A few weeks ago, I received an email from the CEO of Datownia offering me a small gift to chat with him about what I was doing with their technology.
Of course as an analyst I can’t accept any gifts, but I had a great conversation with William Lovegrove about the technology and where the idea came from.
From one-offs to a SaaS
Basically, William’s little consulting firm was busy building and evangelizing APIs to organizations. When a company was confronted with making an API, progress would often screech to a halt, or at least be diverted while things were sorted out. Often IT departments simply could not deal with making an API for anything. Other times they would be engaged to create a one-off API for a company.
Complicated, expensive and not very efficient.
Datownia then came up with the idea of building a service in the cloud that automates the process of building an API.
I think this is brilliant.
If you need an API, or just want to play with a prototype, you should take a look at how simple this is.
Thanks William Lovegrove and crew.
The three biggest trends impacting computing today are what I call the Computing Troika. Cloud Computing, Mobile Computing and Social Computing.
There is a fourth trend that is on par with each of the Troika movements. The API Economy.
Finally there is the question of the role of standards in these trends.
First, here is my definition of Cloud Computing—and its opposite—Non-cloud Computing.
Cloud Computing involves offering network computing services with the following three characteristics:
- IT Virtualization
- Multi-tenancy
- Service re-usability
IT Virtualization—Network services, including management and support, that are geographically independent. That is not to say that services cannot be on-premise; it just means that location doesn’t matter.
Multi-tenancy—Network services that are offered to more than one tenant at a time.
Service re-usability—Network services that can be used and built upon for all tenants over and over.
All three are important. The one that needs explaining, and is not so obvious, is service re-usability. AD FS (Active Directory Federation Services) integrated into WAAD (Windows Azure Active Directory) is a good example. Because it is virtual and multi-tenant, a single SAML instrumentation of WAAD gives permissioned SSO and integration to EVERY customer connected to WAAD by default. This makes it highly leverageable and "reusable." Further, all of the services to all tenants of WAAD have the same APIs, the same console UI—indeed, the same infrastructure from top to bottom. This lets IT departments be much more efficient and competitive.
But here is where the new rubber meets the road. WAAD is specifically designed to give the customer real freedom of choice. This is done by not trying to keep the customer captive with either architecture design or terms of service.
The architecture design moves away from keeping the customer captive in a silo with two main features:
- Standards support
- APIs for everything
Standards support is the traditional bailiwick of interoperability. Interoperability is the key feature of services that are vendor independent. But standards are not a panacea. Standards move slowly. Further, it is a myth that standards compliance guarantees interoperability. One vendor’s standard floor is another vendor’s standard ceiling. To remain competitive, vendors tend to tweak the standards game—sometimes in excess—to maintain an advantage.
The new equalizer—for both the customer and the vendor—is the API Economy. By providing open, simple API access to everything, a vendor can still differentiate and yet offer real freedom of choice in its services. With a complete API infrastructure, services are no longer silos. Any customer or competitor can duplicate or extend the apps that use the services (such as an admin console, user portal or developer portal) without repercussion.
"Non-cloud" then becomes any architecture design that does not include all three of these features. Note that this puts even more importance on the API Economy. The IT computing silo prison can only be broken through an active API Economy. The key to the successful customer-centric product design is giving the customer Freedom of Choice. Freedom of Choice is not freedom of captor. Freedom of Choice must be vendor independent. Independence can only be gained thru the API Economy coupled with traditional standards process.
There is more than one type of standard.
The three main types are:
- de Facto
- de Jure
- de Rigueur
De facto—Latin for “in fact”: a standard by actual practice rather than decree. TCP/IP is a de facto standard. Governments declared that the standard network protocol would be the de jure OSI stack; as we all know, OSI never happened. TCP/IP is the de facto standard of the internet.
De Jure—Latin for “by law”: a standard ratified by a committee. HTTP, for example, is a de jure standard, ratified through the IETF process.
De Rigueur—French for “required by the current fashion.” But de Rigueur goes far beyond fashion. Both de facto and de jure standards are very slow moving. In fact, a de jure standard is—except for governments—obsolete and certainly out of fashion by the time the committee ratifies it.
I bring up the distinction between these three kinds of standards because I expect to see a rapid proliferation of de Rigueur standards, built on top of both de Facto and de Jure standards, that are highly usable and can be treated as “standards” providing interoperability without either a laborious and expensive de Jure process or the expectation of an accidental de Facto crown.
For example, the “Graph API” designs and methods we see in the Facebook and WAAD APIs are going to become standards independently of any committee. Of course, this kind of talk scares a lot of people to death because of the kind of crazy behavior we see from vendors like Twitter and what it has done to its developers and its API.
But it is my opinion that when vendors act in such irresponsible ways, they do so at their own peril. I believe that in the long term we can successfully lean on rational thought and behavior to support a strong three-standard ecosystem that works.
Starting at the EIC 2012
I have been talking and presenting a lot about The API Economy. The API Economy has become a strategic topic for organizations. As one can expect with a hot topic, there are many opinions and views on the matter. Therefore there are many comments, blog posts and articles written about The API Economy.
Needless to say, it is tough to keep track of everything being said or to follow any given thread. I should start off by saying that the questions asked in the blog post discussed below are appropriate and need to be answered.
The DataBanker thread
An interesting thread that I have been following for a while has inspired me to make a few comments about exactly what I mean by an API and to add additional data about the actual API numbers.
The people over at DataBanker published a piece in September entitled “Personal Identity in the Cloud. What’s a Programmer to Do?”
The author then goes on to cite the numbers I have used in several presentations to derive the actual number of APIs that we are looking at dealing with over the next five years. First he questions the accuracy of the numbers and their implications.
“I have to admit, the statistics from the Apple announcement, especially when combined with the view from Cisco, definitely make one stop and think. But Craig Burton’s blog post has apocalyptic overtones that I don’t think are accurate.”
Next he starts to ask questions about what I actually mean when referring to an API.
“When Craig Burton refers to “20+ billion APIs all needing distinct identities”, what I believe he is actually referring to is interconnections and not discrete APIs.”
And finally the author states that the Identity Ecosystem being established by NSTIC will be used to address the problems brought on by The API Economy.
“Managing identity – entity or personal – within the Cloud certainly has some unique challenges. Fortunately, there are substantial communities such as the NSTIC Identity Ecosystem and projectVRM that are focused on defining standards for creating, validating, managing, and transacting trusted identities as well as looking at the broader issue of how individuals can control and assert their identity and preferences when engaging products, services, and vendors within this expanding internet of things. Multiple solutions will likely be proposed, developed, will co-exist, and eventually consolidate based on the collective wisdom and adoption of the cloud community. That community – or ecosystem – is really the key.”
So let me address each of these in turn.
The Apple and Cisco numbers and their apocalyptic overtones
First off, let me say that the numbers I quote from the iPhone 5 announcement—while a little overwhelming—are very conservative. Mary Meeker—partner at Kleiner Perkins Caufield & Byers—recently gave a talk about the growth of the device market. In that talk, she pointed out that Android phones are ramping up six times faster than the iPhone did.
“By the end of 2013, Meeker expects there to be 160 million Android devices, 100 million Windows devices, and 80 million iOS devices shipped per quarter.”
If you accept the first axiom of The API Economy—everything and everyone will be API-enabled—the significance of this additional research on the number of devices being shipped is non-trivial. The current methods being used to provision and manage the identities associated with these devices are broken and cannot scale to address the issue. Call that apocalyptic if you want, but ignoring the facts does not make them go away.
Interconnections not APIs
As I pointed out earlier, DataBanker supposes that my 20+ billion APIs figure refers to “interconnections and not discrete APIs.”
I am actually referring to a conservative number of discrete APIs. Here is how APIs work. Every API must have a unique identity. Not necessarily unique functionality, but a unique ID.
But DataBanker did find the missing information in my numbers: I didn’t include relationships and interconnections. I left them out of the equation because I wanted to keep things somewhat simple. The fact is, each interconnection and relationship also needs an API and a unique ID. Thus the number of actual APIs we are looking at is 3 to 5 times bigger than the numbers I outlined originally.
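As a rough sanity check, the quoted shipment figures can be multiplied out. The per-quarter numbers come from the Meeker projection above; the one-uniquely-identified-API-per-device assumption and the flat 3x–5x interconnection multiplier are deliberate simplifications:

```python
# Assumed per-quarter shipment figures (millions) from the Meeker
# projection quoted earlier.
quarterly_shipments = {"android": 160, "windows": 100, "ios": 80}

devices_per_year = sum(quarterly_shipments.values()) * 4  # millions/year
# 340 million per quarter -> 1,360 million devices per year

# One uniquely identified API per device, then scaled by the 3x-5x
# interconnection/relationship factor argued above:
low = devices_per_year * 3   # 4,080 million, ~4 billion API IDs/year
high = devices_per_year * 5  # 6,800 million, ~7 billion API IDs/year
print(devices_per_year, low, high)
```

Even this conservative arithmetic lands at billions of new API identities per year, which is why per-site, hand-provisioned identity management cannot keep up.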
NSTIC Identity Ecosystem will address the problem — NOT
Here is where DataBanker and I start to agree — at least sort of.
It will take a community to address The API Economy’s explosion in identity management requirements. Further, the NSTIC and ProjectVRM communities can help, but neither of them in their current state addresses the matter. For more information about what NSTIC is in this context, read this blog post.
The Ecosystem required to address billions of Identities and APIs is one that can be automated. Programmed. In order to address a programmable web, we need a programmable ecosystem to accompany it.
We are calling this ecosystem Identity Management as a Service.
I continue to stand by my numbers and projections of the implications being brought on by the API Economy. I see that in the near future, everything and everyone will be API enabled.
I also see a great number of people and organizations that do understand this issue and are moving forward with intention to address it and to succeed with the API Economy.
The Intersection of Policies, Standards & Best Practices for Robust Public Sector Cloud Deployments
Last week I was invited to attend the 2012 International OASIS Cloud Symposium.
I was very impressed. The attendance was not large; in fact, the organizers limited the number of attendees to 125 people. I was not able to attend the first day, but the second day was lively, with many interesting presentations and discussions.
I won’t go over the complete agenda; if you are interested, it can be found in PDF format here.
Overall I would say every presentation given was worth listening to and the information was both valuable and informative. Not all of the presentations have been posted yet but a good number of them—including mine—can be found at this location.
I wanted to highlight a few of the presentations that were especially interesting. Again, I think all of them are worth looking at, but here are some highlights.
Privacy by Design
The day started out with the Information and Privacy Commissioner of Ontario, Canada—Dr. Ann Cavoukian—giving a presentation to the group via video on Privacy by Design. Her message was that she and Dr. Dawn Jutla—more about Dr. Jutla in a moment—are co-chairing a technical committee on Privacy by Design for software engineers.
“It’s all about developing code samples and documentation for software engineers and coders to embed privacy by design into technology. We are going to drill down into the “how to” in our technical committee.”
Following the video by Dr. Cavoukian, Dr. Dawn Jutla gave a presentation about Privacy by Design (PbD).
Now, I had heard of Dr. Cavoukian and the PbD movement, but I had never been exposed to any details. The details were amazing, and I like the 7 Foundational Principles:
1. Proactive not Reactive; Preventative not Remedial
2. Privacy as the Default Setting
3. Privacy Embedded into Design
4. Full Functionality—Positive-Sum, not Zero-Sum
5. End-to-End Security—Full Lifecycle Protection
6. Visibility and Transparency—Keep it Open
7. Respect for User Privacy—Keep it User-centric
These are sound principles that make a lot of sense. So much so that I invited Dr. Jutla to attend the Internet Identity Workshop (IIW) and to jointly present with me a discussion about Privacy and Identity in an API Economy.
Dr. Jutla agreed and we will lead the discussion on both Tuesday and Wednesday of next week (October 23, 24) at IIW.
If you look at the agenda, the rest of the speakers presenting on privacy were stellar. I learned a lot.
I strongly recommend looking over the agenda and reviewing the presentations that interest you. For most organizations, this should be every plenary and every discussion group.
I was also impressed with OASIS’s ability and willingness to invite seemingly competitive groups, like ISO, ANSI, and Kantara. This is the way a standards body should work when it has the best interest of the industry and the objective of open standardization in mind.
Kudos to Laurent Liscia and the entire OASIS organization for the execution of a great event.