If you think these little snippets of Linux source code
might have limited revenue-bearing potential, given that anyone can
use them on an open source basis, then consider
DockerCon 2016, which was held June 19-20 at the Washington State Convention
Center in Seattle. DockerCon is the
annual technology conference of Docker Inc., the much-touted San
Francisco-based start-up that developed and popularized the Docker runtime for Linux
containers, which is no longer proprietary but is hosted as an open source
project under the Linux Foundation.
Docker Inc. (the company) is among the rarefied “unicorns” of Silicon
Valley – start-ups with valuations exceeding $1 billion based on a really hot
idea, but with nascent business models and perhaps limited revenue streams at
this stage of their development.
Even with a conference ticket price of $990, DockerCon 2016
in Seattle was completely sold out. Over
4,000 attendees showed up, and there was a substantial waiting list. For
comparison, last year's DockerCon in San Francisco drew about 2,000 people, and the
inaugural DockerCon event in 2014 was attended by about 500. The
conference featured company keynotes, technology demonstrations, customer
testimonials, and an exhibition area with dozens of vendors rushing into this
space. Big companies exhibiting at DockerCon included Microsoft, IBM, AWS,
Cisco, NetApp, HPE and EMC.
Punching well above its weight, Docker rented Seattle's Space
Needle and EMP museum complex to feed and entertain all 4,000+ guests on the
evening of the summer solstice.
Clearly, Docker’s investors are making a big bet that the
company can grow well beyond being the inventor of an open source standard.
Why should the
networking and telecom community care about a new virtualization format at the
OS level?
There is a game plan afoot to put Docker at the crossroads
of application virtualization, cyber security, service orchestration, and cloud
connectivity. Docker enables
applications to be packaged into a standard shipping container, so that the software
inside runs the same regardless of the underlying infrastructure.
Compared with virtual machines (VMs), containers launch more quickly. Each container includes the application and
all of its dependencies. In addition,
containers make better use of the underlying servers because they share the
kernel with other containers, running as isolated processes in user space on
the host operating system. The vision is
to allow these shipping containers to move easily between servers or between
private and public clouds. As such, by
controlling the movement of containers, you essentially control the movement of
workloads locally and across the wide area network. The applications running
within containers need to remain securely connected to data and processing
resources from wherever the container may be located. Thus, software-defined
networking becomes part of the containerization paradigm. Not surprisingly, we
are seeing a lot of Silicon Valley’s networking talent move from the
established hardware vendors in San Jose to the new generation of software
start-ups in San Francisco, as exemplified by Docker Inc.
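To make the packaging idea concrete, here is a minimal sketch of that workflow using the standard Docker command line (the image name, tag, and ports are hypothetical):

    # Build an image from the Dockerfile in the current directory, bundling
    # the application together with all of its dependencies.
    docker build -t example/webapp:1.0 .

    # Run it as an isolated process in user space. The container shares the
    # host kernel rather than booting a guest OS, so it starts in seconds.
    docker run -d -p 8080:80 example/webapp:1.0

    # The running container appears as just another process on the host, and
    # the same image runs unchanged on a laptop, a data-center server, or a
    # public-cloud VM.
    docker ps

The same image can then be pushed to a registry and pulled onto any other Docker host, which is what makes the shipping-container analogy work.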
The Timeline of Significant Events for Docker
Docker was started by Solomon Hykes as an internal project
at dotCloud, a platform-as-a-service company based in France and founded around
2011. The initial Docker work appears to have started around 2012-2013, and the
project soon grew to become the major focus of the company, which adopted the
Docker name. The official launch of
Docker occurred on March 13, 2013 in a presentation by Solomon Hykes entitled
“The Future of Linux Containers” hosted at the PyCon industry conference. Soon after, the Docker whale icon was posted
and a developer community began to form.
In May 2013, dotCloud hired Ben Golub as CEO with the goal of
pivoting from the PaaS business to the huge opportunity it now saw in
building and orchestrating cloud containers. Previously, Golub was CEO of
Gluster, another open source software company, one focused on scale-out
storage. Gluster offered an open-source,
software-based network-attached filesystem that could be installed on commodity
hardware. The Silicon Valley company
successfully raised venture funding, grew its customer base quickly, and was
acquired by Red Hat in 2011.
Within 3 months of joining Docker, Golub established an
alliance with Red Hat. A second round of venture funding, led by Greylock
Partners, brought in $15 million. Headquarters were moved to San Francisco. In June 2014, Docker 1.0 was officially
released, marking an important milestone for the project.
In August 2014, Docker sold off its original dotCloud (PaaS)
business to Berlin-based cloudControl; however, that operation was shut down
earlier this year after a two-year struggle. (Other dotCloud engineers credited
with work on the initial Docker project include Andrea Luzzardi and Francois-Xavier
Bourlet.) A month later, in September 2014, Docker secured $40 million in a
series C funding round that was led by Sequoia Capital and included existing
investors Benchmark, Greylock Partners, Insight Ventures, Trinity Ventures, and
Jerry Yang.
In October 2014, Microsoft announced integration of the
Docker engine into its upcoming Windows Server release, and native support for
the Docker client role in Windows. In
December 2014, IBM announced a strategic partnership with Docker to integrate
the container paradigm into the IBM Cloud.
Six months later, in June 2015, IBM's Bluemix
platform-as-a-service began supporting Docker containers. IBM Bluemix also
supports Cloud Foundry and OpenStack as key tools for designing portable
distributed applications. Additionally, IBM claims the industry's best
performance of Java on Docker: its Java runtime, IBM says, is optimized to run two times faster and
occupy half the memory when used with the IBM Containers Service. Moreover,
as a Docker-based service, IBM Containers include open features and interfaces
such as the new Docker Compose orchestration services.
In March 2015, Docker acquired SocketPlane, a start-up
focused on Docker-native software-defined networking. SocketPlane had only been
founded a few months earlier by Madhu Venugopal, who previously worked on SDN
and OpenDaylight while at Cisco Systems, before joining Red Hat as Senior
Principal Software Engineer. These SDN
capabilities are now being integrated into Docker.
In April 2015, Docker raised $95 million in a Series D round
of funding led by Insight Venture Partners with new contributions from Coatue,
Goldman Sachs and Northern Trust. Existing investors Benchmark, Greylock
Partners, Sequoia Capital, Trinity Ventures and Jerry Yang’s AME Cloud Ventures
also participated in the round.
In October 2015, Docker acquired Tutum, a start-up based in
New York City. Tutum developed a cloud service that helps IT teams to automate
their workflows when building, shipping or running distributed applications.
Tutum launched its service in October 2013.
In November 2015, Docker extended its Series D funding round
by adding $18 million in new investment.
This brings total funding for Docker to $180 million.
In January 2016, Docker acquired Unikernel Systems, a
start-up focused on unikernel development, for an undisclosed sum. Unikernel
Systems, which was based in Cambridge, UK, was founded by pioneers from Xen,
the open-source virtualization platform. Unikernels are defined by the company
as specialized, single-address-space machine images constructed by using
library operating systems. The idea is to reduce complexity by compiling source
code into a custom operating system that includes only the functionality
required by the application logic. The unikernel technology, including
orchestration and networking, is expected to be integrated with the Docker
runtime, enabling users to choose how they ‘containerize’ and manage their
application – from the data center to the cloud to the Internet of Things.
Finally, at this year’s DockerCon conference, Docker
announced that it will add built-in orchestration capabilities to its Docker
Engine. This will enable IT managers to
form a self-organizing, self-healing pool of machines on which to run
multi-container distributed applications – both traditional apps and
microservices – at scale in production. Specifically, Docker 1.12 will offer an
optional “Swarm mode” feature that users can select to “turn on” built-in
orchestration, or they can elect to use either their own custom tooling or
third-party orchestrators that run on Docker Engine. The upcoming Docker 1.12
release simplifies the process of creating groups of Docker Engines, also known
as swarms, which are now backed by automated service discovery and a built-in
distributed datastore. The company said that unlike other systems, the swarm
itself has no single point of failure. The state of all services is replicated
in real time across a group of managers so containers can be rescheduled after
any node failure. Docker orchestration includes a unique in-memory caching
layer that maintains the state of the entire swarm, providing a non-blocking
architecture which assures scheduling performance even during peak times. The
new orchestration capabilities go above and beyond Kubernetes
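As a rough sketch of how Swarm mode is expected to work in practice (the addresses and token placeholder below are illustrative), built-in orchestration is turned on and services are declared from the same Docker command line:

    # Turn on built-in orchestration; this Engine becomes a swarm manager.
    docker swarm init --advertise-addr 192.168.1.10

    # Join additional Engines to the swarm using the token printed by 'init'.
    docker swarm join --token <worker-token> 192.168.1.10:2377

    # Declare a replicated service; the swarm schedules the containers across
    # nodes and reschedules them automatically if a node fails.
    docker service create --name web --replicas 3 -p 80:80 nginx

    # Inspect the nodes and services in the swarm.
    docker node ls
    docker service ls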