Welcome to the era of Container 2.0

Aug 01, 2016

Florian Leibert

It's time to advance the discussion about what's possible with containers. We need to move past all the Container 1.0 talk about how containers can revolutionize application development, and begin to focus on what comes next. Specifically, we need to focus on the game-changing benefits that containers will bring to how large enterprises manage their applications, as well as the datacenters and the clouds that power them.
 
Container 1.0 brought us a long way. It gave developers a universal format (e.g., Docker or appc) for packaging their binaries. Container 1.0 also made it possible to deploy stateless binaries to almost any server or cloud instance, and even to coordinate containers with lightweight orchestration. However, while Container 1.0 is a useful conversation starter and a great way to get people interested in building with containers, it's actually of limited utility.
 
Let's talk Container 2.0
At its simplest, Container 2.0 is the ability to run (and orchestrate) both stateless and stateful services on the same set of resources. This is how modern applications should be built and operated if we want to use them to their full potential for curing diseases, solving business problems or delivering the next great consumer experience. If we can't finally—and completely—knock down the silos between applications and infrastructure, then the core components of modern applications—efficient code deployment on containers and powerful data processing and analytics—will only be as good as the networks between them.
 
However, while this stateless-plus-stateful definition is accurate, easy to grasp and very powerful—especially for anyone who has built applications that connect to big data systems such as Kafka or Cassandra—even it is not comprehensive. Realistically, delivering Container 2.0 means delivering a platform that can run application logic along with the gamut of backend services on shared infrastructure, combining all workloads onto a single platform that improves efficiency and simplifies complex operations. The collection of capabilities that modern applications require includes monitoring, continuous deployment, relational databases, web servers, virtual networking and more.
 
Container 2.0 is also where businesses start to see value from containers. Companies only start seeing real, impactful business improvements when they move beyond individual containers and start operating with higher-level abstractions. In this 2.0 world, containers become an implementation detail, just one part of a larger solution.
 
Container 2.0 lets operators think of entire applications and services as deployable objects. A Container 2.0 application can consist of hundreds of containers and rely on dozens of infrastructure services, such as databases and message queues, but be deployed and scaled in the datacenter or cloud as a single cohesive unit.
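To make that concrete, here is a minimal sketch of deploying such a unit through Marathon's application groups REST API. The endpoint, group name, images and resource sizes are hypothetical; the field names follow Marathon's /v2/groups API.

```python
import requests

# A hypothetical application group: a stateless web tier plus a worker tier,
# deployed and scaled as one unit. Group/app ids, images and sizes are
# illustrative only.
group = {
    "id": "/shop",
    "apps": [
        {
            "id": "web",
            "instances": 50,
            "cpus": 0.5,
            "mem": 256,
            "container": {"type": "DOCKER", "docker": {"image": "example/shop-web:1.4"}},
        },
        {
            "id": "worker",
            "instances": 20,
            "cpus": 1.0,
            "mem": 512,
            "container": {"type": "DOCKER", "docker": {"image": "example/shop-worker:1.4"}},
        },
    ],
}

# PUT creates or updates the whole group in a single request; Marathon then
# rolls out every app in the group rather than one container at a time.
resp = requests.put("http://marathon.mesos:8080/v2/groups/shop", json=group)
resp.raise_for_status()
print(resp.json())  # contains the deployment id for the group as a whole
```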
 
Container 2.0 also lays the foundation for new ways of building applications, such as serverless computing, where developers only need to think about application logic. Many of these new capabilities use containers as a means to an end, and that end is a much bigger idea than just a way to package, isolate and run code—which is how we define Container 1.0.
 
How DC/OS delivers Container 2.0 today
That the Container 2.0 era is upon us is not up for debate. Look around, and you will see every container company and project implementing alpha features for stateful services, or at least laying out a roadmap for stateful support. Yet only one container orchestration platform—DC/OS—is already powering Container 2.0 operations in production environments at some of the world's largest companies and most innovative startups. These include Verizon, Esri, Autodesk, Wellframe, Time Warner Cable and many more.
 
The secret sauce behind how DC/OS is able to power such robust applications really is no secret at all: it's the open source Apache Mesos technology on which DC/OS is built. When my co-founder at Mesosphere, Ben Hindman, created Mesos at UC Berkeley in 2009, he and his co-creators did something very smart by building in a two-level scheduler. (By the way, Mesos just hit a major milestone by announcing its 1.0 release last week!)
 
You can read the details of two-level scheduling in the paper Ben and his colleagues published, but the gist is that Mesos cedes much of its scheduling authority to the frameworks (in DC/OS, we call them "services") that are running on it. This means that every service running on DC/OS can have its own scheduler, and that each scheduler can be specifically optimized for its own kinds of workloads and constraints. Furthermore, these schedulers are customized to simplify "Day 2" operations by making services easy to install, scale and upgrade without downtime, among other benefits.
 
Here's a simplified explanation of how that would work on a DC/OS cluster running Marathon (for Docker container orchestration), Confluent Platform, DataStax Enterprise and Apache Spark (a toy sketch of this offer cycle follows the list):
 
  • Mesos would schedule each of those services somewhere on the cluster based on resource requirements, isolated inside Linux control groups (cgroups).
  • Marathon is responsible for scheduling Docker containers onto the resources Mesos offers to it.
  • Confluent is responsible for managing Kafka workloads on the resources Mesos offers to it.
  • And so on for DataStax, Spark and whatever other DC/OS services are installed.
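Here is that offer cycle as a deliberately oversimplified toy model in plain Python (this is not the real Mesos or DC/OS API; the scheduler names and resource sizes are made up). The first-level allocator only decides which framework sees which spare resources; each framework's own second-level scheduler decides what, if anything, to launch on them:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    agent: str
    cpus: float
    mem: float

class MarathonLikeScheduler:
    """Toy second-level scheduler: packs as many small web containers as fit."""
    def resource_offers(self, offers):
        tasks = []
        for offer in offers:
            while offer.cpus >= 0.5 and offer.mem >= 256:
                tasks.append(("web-container", offer.agent))
                offer.cpus -= 0.5
                offer.mem -= 256
        return tasks

class KafkaLikeScheduler:
    """Toy second-level scheduler: wants a fixed number of large broker tasks."""
    def __init__(self):
        self.brokers = 0

    def resource_offers(self, offers):
        tasks = []
        for offer in offers:
            if self.brokers < 3 and offer.cpus >= 4 and offer.mem >= 8192:
                tasks.append((f"kafka-broker-{self.brokers}", offer.agent))
                offer.cpus -= 4
                offer.mem -= 8192
                self.brokers += 1
        return tasks

def first_level_allocator(agents, frameworks):
    """Toy stand-in for Mesos: offer each agent's unallocated resources to each
    framework in turn, then record whatever that framework left unclaimed."""
    launched = []
    for scheduler in frameworks:
        offers = [Offer(a, r["cpus"], r["mem"]) for a, r in agents.items()]
        launched += scheduler.resource_offers(offers)
        for offer in offers:  # schedulers shrink offers as they accept resources
            agents[offer.agent] = {"cpus": offer.cpus, "mem": offer.mem}
    return launched

agents = {"agent-1": {"cpus": 8, "mem": 16384}, "agent-2": {"cpus": 8, "mem": 16384}}
tasks = first_level_allocator(agents, [KafkaLikeScheduler(), MarathonLikeScheduler()])
print(f"{len(tasks)} tasks launched on a single shared cluster")
```

Real Mesos adds fairness (Dominant Resource Fairness), reservations and failure handling on top of this loop, but the split is the same: one allocator, many workload-specific schedulers sharing the same machines.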
 
Running multiple schedulers on the same cluster—simultaneously, multi-tenant on shared nodes—is the only way to maximize resource utilization and accommodate the wide range of Container 2.0 workloads. Container 1.0 systems, including Kubernetes and Docker Swarm, use a single monolithic scheduler. And because there is no single scheduler that can optimize for all workloads, users end up with non-optimal operating constraints, including being forced to create separate clusters for each service.
 
 
Container 1.0 is not enough
Container 1.0 hacks make it possible to stand up a Cassandra database on a Kubernetes or Docker Swarm cluster, but the result won't be a very robust or realistic production solution. Without Container 2.0 constructs such as two-level scheduling and stateful services, Cassandra and other workloads won't operate well on the same cluster in any kind of production environment.
 
Compare this to DC/OS. Aside from being able to orchestrate many thousands of Docker containers with Marathon (DC/OS's native engine for orchestrating containers and managing other long-running services), DC/OS can run dozens of other important services on the same cluster. Here is a small list of the other services currently available in DC/OS Universe:
 
  • Jenkins
  • Elasticsearch
  • HDFS
  • Avi Networks
  • BitBucket
  • ArangoDB
  • NGINX
  • Sysdig
  • DataDog
  • Zeppelin
  • Calico
 
And Container 2.0 platforms like DC/OS foreshadow the next leap in how we build applications, where we might not even think about containers at all. Just last month, Galactic Fog implemented its Gestalt Framework for serverless computing on DC/OS. That's right: DC/OS is the only container platform currently offering production support for Container 2.0 features, and community members like Galactic Fog are already letting developers move beyond containers (or at least beyond thinking about them) to the world of event-driven lambda architectures.
 
Big industry players endorse Container 2.0
Today, we opened a new chapter in our Container 2.0 story by working with partners DataStax, Confluent and Lightbend to make their respective offerings available and commercially supported on DC/OS. This means DC/OS users can install the Cassandra-based DataStax Enterprise database system and the Kafka-based Confluent Platform with a single click via the DC/OS Universe.
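Scripted installs go through the same package repository (the DC/OS Universe) as that single click. Here is a minimal sketch that drives the dcos CLI from Python; the package names below are placeholders, since the exact names published in your Universe repository may differ:

```python
import subprocess

# Placeholder package names; run `dcos package search` to see what your
# configured Universe repository actually publishes.
PACKAGES = ["datastax-enterprise", "confluent-kafka", "spark"]

for name in PACKAGES:
    # `dcos package install` normally prompts for confirmation; --yes accepts
    # the defaults so the loop can run unattended.
    subprocess.run(["dcos", "package", "install", name, "--yes"], check=True)
```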
 
Running backend and infrastructure services in production requires more than just software. Even in the Container 2.0 era, many enterprises also need enterprise-grade backing from the companies devoted to each of these key technologies to help run their systems in the real world.
 
DataStax and Confluent offer enterprise-grade support for their respective Cassandra- and Kafka-based platforms, so you can get assistance from the experts behind all of these great technologies. Lightbend provides support for Spark and for all of the components of its Reactive Platform, including the Akka message-driven runtime and the Scala programming language, which have been integrated with DC/OS.
 
Of course, Mesosphere provides 24x7 support and additional features to our Enterprise DC/OS customers, as well.
 
The support of partners like these—and the successes we've already had with them—is proof that the world is heading toward Container 2.0, and that DC/OS is already there. Innovative companies are well down the road toward modern containerized and data-driven applications, and now they're looking for the right software to bring those apps out of the lab and into production. They know the power of Mesos, Cassandra, Kafka and Spark, and now they're taking it to the next level with Mesosphere, DataStax, Confluent and Lightbend.
 
And best of all, you can experience the possibilities of Container 2.0 immediately, and entirely with open source software, simply by installing DC/OS on your laptop, in your datacenter or in the cloud. There is no faster, easier or more powerful platform for building the types of applications that can propel your business into the next—much more demanding—evolution of the digital age.
 