What does The Orange Box tell us about the future of computing?

Will future computing systems be more visible, or less, than today’s clouds?

In "How Big is the Cloud Really?" I described how Shutterfly, a photo editing and sharing service, has contracted to deploy another 1,000 eight-foot-high racks of co-located compute and storage capacity to support their business growth. This remarkable story and others like it illustrate the scale of what has become known as “the cloud.”

Today’s industrial-scale Web computing centers dwarf typical enterprise datacenters, even those owned by the largest Fortune 500 corporations. This centralization of IT has come as a surprise, even to many industry insiders. Is greater centralization of IT therefore the future? Perhaps we should not forget that we each carry a Unix server in our pocket, and there are likely to be several more lying around in most family homes.

The history of computing has followed a dual track: greater centralization and greater distribution. This is as true inside the four walls of the enterprise (the growth of client-server computing following the mainframe era) as it is across the consumer-IT landscape and the embedded computers that dominate our critical National Information Infrastructures (NII).

The story of Shutterfly led me once again to ponder the future of computing. Will we see more centralization of capacity or greater capability at the “edge”? Will computers vanish into the background (buried in a cloud bunker dug into the side of a mountain) or will we be able to point to powerful computers as distinct and attractively designed objects in their own right? Do we need clouds? What would it mean, for example, if the computer that we could carry around – or place in the corner of our offices – had effectively infinite capacity (more than its owner needed) to collect and process private and public data?

[Image: The Orange Box]

Take this example. Canonical, the company that provides the popular Linux distribution Ubuntu, has announced a hardware product called “The Orange Box.” It is a suitcase-sized data center running OpenStack, the widely used open-source cloud operating system. The fact that this box is even possible speaks volumes. A coffee-table super-computer is close at hand! Stop press: IBM have just announced something of the kind. It’s called the z13 mainframe. It’s the size of a typical garden shed and it can process 2.5 billion transactions a day.

In describing the minuscule Orange Box during the launch party, the Canonical marketing team used the phrase “You can do anything with these boxes.” They also listed a few specialised applications such as data processing on the battlefield – or indeed any situation where advanced IT is needed but not readily available. How about taking a “big data” processor into a disaster zone to help model and simulate recovery plans as events unfold?

According to media reports, Canonical never expected The Orange Box to be a sales item. Rather, it was conceived to highlight the power of the cloud by creating a physical manifestation of that capability. This is, of course, the very opposite of ethereal virtual machines and application containers available from cloud computing providers such as Amazon Web Services (AWS).

When using a typical utility cloud, developers never see real machines, let alone touch them. Their only connection to hardware is through Web-based dashboards that plot the progress of IT workloads as they run inside (equally ethereal) virtual clusters of machines rented on a pay-as-you-go basis. So perhaps it was natural that Canonical’s marketing team never expected anyone to actually want to buy an Orange Box as a capital expense. Yet, surprise surprise, they had individuals approaching them to buy a hundred at a time!

Why?

Some large companies see products such as The Orange Box as part of a future computing landscape – a quick way to deploy a cloud in remote offices and smaller branches, at the “edge” of the corporation. Canonical also report talks with even smaller companies that see the advantages of a one-stop shop appliance to add a private cloud to their IT operation with minimal fuss.

I personally believe that the future of IT will be littered with highly visible devices of all shapes and sizes – some co-located, many distributed. These machines (some of which will be embedded in other products) will chat together over the Internet/Web in ways we can hardly imagine today.

With the rise of the Internet of Things (IoT) there will be a greater emphasis on distributed computing architectures, in all their guises, from simple remote calls to industrial-scale grids of connected things.

In such grids, each computing device will be, in effect, an autonomous agent (a broad computer science term). Each agent will be able to compute and store data in its own right. There will be things it knows, and things it knows how to do. For everything else it will invoke the capabilities of other agents. Within the limits of its software (which will be upgraded much as smartphone apps are today), each agent will be able to enlist the capabilities of other agents visible to it over the Web, or via the credentials, registries and directories that define industrial-grid architectures.
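To make that concrete, here is a minimal sketch of the idea in Python. Every name in it (the Registry, the Agent class, the capability strings) is invented for this thought experiment; it is not any real product or protocol.

```python
# A minimal sketch of the "autonomous agent" idea: agents advertise what
# they can do in a registry, and delegate everything else to peers.
class Registry:
    """A directory that agents use to find each other's capabilities."""
    def __init__(self):
        self._providers = {}

    def advertise(self, capability, agent):
        self._providers.setdefault(capability, []).append(agent)

    def find(self, capability):
        return self._providers.get(capability, [])


class Agent:
    """Knows how to do some things; delegates everything else."""
    def __init__(self, name, registry, skills=None):
        self.name = name
        self.registry = registry
        self.skills = skills or {}                  # capability -> callable
        for capability in self.skills:
            registry.advertise(capability, self)

    def request(self, capability, *args):
        if capability in self.skills:               # something it knows how to do
            return self.skills[capability](*args)
        for peer in self.registry.find(capability): # enlist a visible agent
            return peer.request(capability, *args)
        raise LookupError(f"no agent offers {capability!r}")


registry = Registry()
sensor = Agent("vibration-sensor", registry,
               skills={"read-vibration": lambda: 0.7})
edge_box = Agent("orange-box", registry)            # a general-purpose mediator
print(edge_box.request("read-vibration"))           # work delegated to the sensor
```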

Some of these agents will be small, lightweight processors (software threads) dedicated to a simple, well-defined (and often specialised) task. An example might be end-point connectivity for a piece of wearable, home or office tech, or perhaps a smart sensor embedded within industrial machinery. Other computing agents will be more robotic in nature, quite general-purpose in their capability and able to mediate work with other agents. The Orange Box would be capable of just that. Imagine it, for example, sitting at the edge of a grid of Internet-connected things, hooked back to an enterprise supply-chain plan.

The future lies in interactive (distributed) computing. Think swim lanes.  Think parallelism. Think processes.

Making such edge-computing capabilities interoperate with each other, and with centralised systems, will be one more skill that future enterprise architects will have to master.

[Image: human agents]

In the early 90s, at Stanford University’s Knowledge Systems Laboratory (KSL), researchers put in place the seeds of what will be required for large-scale agent grids to be viable. The Stanford team, specialists in knowledge-based systems and agent-based distributed computing, demonstrated how industrial applications that were not designed to work together could nevertheless find ways to do so by reasoning about each other’s capabilities, with very few preconditions.

A system needing to know the weather, for example, could advertise for weather-predicting services over a network and then bind to a suitable agent, after which weather prediction would become part of its capability. This agent-to-agent interaction is similar to the way we humans interact. We find an expert, plug into their expertise, and then carry on with our work. We tell others about our work, they find ways to use it. And so on and so forth. The Stanford team called such a protocol an Agent Communication Language (ACL). It allowed a computing system to hold a rich conversation with other agents to ascertain, and negotiate access to, each other’s capabilities.
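As an illustration only, the exchange below mimics the shape of such a conversation. The performatives echo KQML-style agent communication languages, but the field names and values are invented for this sketch; they are not the actual Stanford protocol.

```python
# An illustrative agent-to-agent exchange, loosely in the spirit of
# KQML-style Agent Communication Languages. All fields are invented.

advertise = {                  # the weather agent announces a capability
    "performative": "advertise",
    "sender": "weather-agent",
    "content": {"service": "predict-weather", "inputs": ["location", "date"]},
}

ask = {                        # a planner binds to it and asks a question
    "performative": "ask-one",
    "sender": "logistics-planner",
    "receiver": "weather-agent",
    "content": {"service": "predict-weather",
                "location": "Rotterdam", "date": "2015-03-01"},
}

tell = {                       # the reply becomes part of the planner's capability
    "performative": "tell",
    "sender": "weather-agent",
    "receiver": "logistics-planner",
    "content": {"forecast": "rain", "confidence": 0.8},
}
```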

It used to be the case that Web APIs (application programming interfaces) could really only provide access to well-defined, discrete functions – such as mashing together the different modules of an application within a Web UI, or invoking a specific algorithm. No more. An “API” in today’s computing landscape is a permeable window from one system to another. Perhaps we should update our vocabulary?

Imagine, for example, a consumer app or industrial Web service that depends upon a machine learning algorithm running on a clustered computing system in a remote cloud, able to scale elastically to thousands of virtual machines as workloads arrive via a RESTful Web service. Phew!  That type of scenario could be the future of computing. The Remote Procedure Call (RPC) of yesteryear was perfectly capable of extending a database transaction into the distributed environment, but those protocols are insufficient for building the Internet of Computing Things.
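Here is a sketch of what that scenario might look like from the calling side, in Python. The endpoint, payload shape and response fields are hypothetical placeholders, not any particular provider’s API.

```python
# Sketch: an app hands a workload to a machine learning service over a
# RESTful API and lets the cloud worry about elastic scale.
# The endpoint, payload shape, and field names are hypothetical.
import json
import urllib.request

payload = {"model": "demand-forecast",
           "observations": [112, 96, 105, 131]}

request = urllib.request.Request(
    "https://ml.example.com/v1/predict",          # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    prediction = json.load(response)
    print(prediction)   # e.g. {"forecast": [...], "nodes_used": 40}
```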

Edge computing capabilities are expanding.

[Image: Watson]

Sitting right behind Brad Rutter and Ken Jennings on the US quiz show Jeopardy! – at the back of the TV studio – was a new IBM super-computer called Watson. When Watson won against its human contestants, many pronounced it a new miracle of computing. The generality of the algorithms employed by Watson appeared to eclipse IBM’s previous grand challenge and advancement in AI: the chess-playing computer Deep Blue.

Watson’s ability to infer (“think”) over world knowledge can now be embedded in any other computer program, running on any device or mobile computer, via a Web-based API. Watson in the cloud is a reality. At the same time, IBM are also making this magical capability available in the form of an easy-to-deploy hardware appliance, just as Google did with the Google Search Appliance, and Canonical have done with The Orange Box.

Access to services like Watson, from any other computing agent, will be the norm. Here’s another example:

[Image: Wolfram|Alpha]

Are you familiar with the market-leading numeric and symbolic processing software Mathematica? It is described by some as the world’s most sophisticated desktop application.

Two years ago, the only way to use Mathematica was to buy a copy of the desktop software itself. Now it is available in the cloud. Its inventor, Stephen Wolfram, is also hooking it up to all kinds of world knowledge and “big data.” The result: Wolfram Alpha, another miracle of computing. Take a look at the examples here, which range from pure mathematics to linguistic processing, the analysis of history, a chemistry workbench, finance and money calculations, art and design, socioeconomic data analysis, health and medical research, environmental simulations, and more.

Anyone who has played with Alpha (which can be as simple to use as typing a question at Google) is amazed by its capabilities. And Wolfram Research are now also embedding Alpha’s API and engine into tiny devices such as the popular Raspberry Pi. On the desktop, in the cloud, and in every device, mathematics on steroids will soon be available for embedding in any other application.
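For the curious, querying Alpha programmatically can be as small as the sketch below. It follows the general shape of Wolfram|Alpha’s public “short answers” web API; the app id is a placeholder, and the exact endpoint and parameters should be checked against the current documentation.

```python
# Querying Wolfram|Alpha from a small device such as a Raspberry Pi.
# The endpoint follows the public "short answers" web API as I understand
# it; the app id is a placeholder, so verify against current docs.
import urllib.parse
import urllib.request

APP_ID = "YOUR-APP-ID"   # placeholder credential

def alpha(query):
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": APP_ID, "i": query}))
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

print(alpha("integrate x^2 sin x dx"))
print(alpha("distance from Earth to Mars today"))
```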

These examples tell me that the future of computing is both more centralised and more distributed. Capability at the edge will approach that at the center.

[Image: the Internet]

John Gage, an early employee of Sun Microsystems, is credited with coining the phrase “The Network is the Computer.” We know what he meant, but the network only carries the conversation. It is the content and nature of those conversations that are the real (distributed) computing. The future lies in more machine-to-machine dialogue. Those conversations will be increasingly fascinating to watch as the Internet of Computing Things unfolds in front of our eyes.
