
Four Tenets of the Modern Application Platform

Sep 26, 2017

Florian Leibert



  
Digital transformation is inevitable and unending.
 
In the early 2000s, we saw the rise of broadband internet and the birth of Web 2.0, a period largely defined by data and user-generated content, including the start of social media and e-commerce. Later came the advent of cloud computing and APIs, which increased the pace of innovation and opened the doors to a new wave of applications. The early 2010s brought the explosion of mobile applications and the era of big data.
 
Now we are entering another new era, in which billions of people, devices, and "things" all generate data. Connected cars, home automation systems, and jet engines produce staggering volumes of data every day. Most of us in the tech industry have heard the adage "data is the new oil": in this digital era, value will be derived from data, not from code, and architects will have to figure out what data to store and how to process it at such volume.
 
It also requires architects, engineers, and companies as a whole to change how they think and work.
 
Applications Have Evolved Dramatically
Today, the explosion of data and advances in artificial intelligence (AI) and machine learning are driving another wave of digital transformation and reimagination. It's not just tech companies that are reinventing themselves, but also traditional businesses such as retailers, banks, cruise lines, and healthcare companies. Many of them are building applications unlike any seen before – applications that apply AI and machine learning to massive volumes of human- and machine-generated data for real-time or predictive decision making.
 
Modern applications require that companies consider new development approaches, including a move to microservices architecture and broader adoption of DevOps and continuous integration and delivery (CI/CD). Just as critically, companies must also consider and implement modern infrastructure best practices.
 
Platform Best Practices and Requirements Had to Change
I joined Twitter back in 2009, when the platform had a few million active users and was on a path of hyper growth. Twitter was experiencing frequent "fail whales," and part of my job was deploying technologies to resolve performance issues. After two years at Twitter, I joined the engineering team at Airbnb. Airbnb was experiencing many of the same challenges as Twitter, but also faced the additional challenge of running a large number of analytical queries. We needed to automate those queries, and later we refocused on running data-intensive applications and data services.
 
Applications were already starting to evolve to be web scale and data intensive.
 
At Twitter, the move to microservices meant individual components could be scaled independently of one another, and engineering teams could write code in parallel. Each team could focus on its own small project rather than working against one monolithic code base. There were obstacles on the way to a microservices-based architecture, though. One was that our operations team had been trained primarily to deploy the monolithic application, and now had to figure out how to deploy individual components. This led to a proliferation of hardware profiles – different types of servers for different types of applications. Some services were CPU bound and needed machines with more CPU capacity; others were memory bound and needed more RAM. The level of complexity for the operations team increased, and deploying applications took a long time.
 
While I did help the engineering teams at both Twitter and Airbnb move from a monolithic application to a microservices-based architecture, the issue was not just microservices. There was also the massive amount of data that had to be processed and routed at any point in time, and the demands that microservices and big data tools place on infrastructure.
 
Four Tenets of the Modern Application Platform
Our experience building out platforms for web-scale companies taught us that a new approach was needed for application development and operations teams. Below are what I believe to be the most critical platform characteristics needed to power modern, data-intensive applications at scale.
 
1) Containers and Orchestration
Containers and container orchestration are a huge trend right now. Containers are popular because they are highly portable: once you put something (e.g., code, configuration, dependencies) in a container, you can run that container the same way on your laptop or on any server, and move it anywhere with ease. Beyond packaging, containers provide isolation: processes can run next to each other yet remain isolated from one another, much as with virtual machines. Containers, however, avoid the downsides of virtual machines (such as long start-up times). They are far more lightweight, which really matters when thousands of instances are being copied around a network. Like VMs, containers also let you drive up utilization in your data center or cloud by running multiple tenants on a single machine.
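
To make the orchestration side concrete, here is a minimal sketch of deploying a containerized service through Marathon, the orchestrator that ships with DC/OS. The endpoint URL, service name, and resource numbers are placeholders for illustration, not a definitive configuration:

# Minimal sketch: deploying a containerized service via Marathon's REST API.
# The endpoint, app id, and resource numbers below are placeholders.
import requests

app_definition = {
    "id": "/hello-nginx",                    # hypothetical service name
    "cpus": 0.5,                             # fraction of a CPU per instance
    "mem": 256,                              # MB of memory per instance
    "instances": 3,                          # run three identical containers
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:1.25"},   # the same image runs anywhere
    },
}

# Marathon schedules the containers onto whichever machines in the
# cluster have spare capacity.
response = requests.post(
    "http://marathon.example.com:8080/v2/apps",  # placeholder endpoint
    json=app_definition,
    timeout=10,
)
response.raise_for_status()

Because the definition names only an image and resource requirements, the orchestrator is free to place the instances on any machine in the cluster – which is exactly the portability the container packaging buys you.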
 
2) Fast Data and Analytics
The amount of data generated by humans and machines has grown exponentially, and leveraging that data for applications and analysis requires more specialized tools. There has been a wave of tools that apply machine learning and deep learning to data: TensorFlow, for example, is a popular open source library for machine learning, and the availability and ease of use of such projects lowers the barrier to entry, making machine learning accessible to a wider set of data scientists and developers.
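
As a rough illustration of that lowered barrier, here is a minimal TensorFlow sketch using its high-level Keras API. The data is random and purely illustrative:

# Minimal TensorFlow sketch: a small binary classifier in a few lines,
# trained on random (fake) data just to show the shape of the API.
import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 20).astype("float32")   # 1,000 fake samples
labels = np.random.randint(0, 2, size=(1000,))          # fake binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(features, labels, epochs=3, batch_size=32)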
 
When we talk about data analytics, we are talking about infrastructure that can ingest, process, and respond to enormous volumes of data correctly and quickly. Whether it's analytics for a social media platform like Twitter, a connected car running more than a million lines of code, or any other large-scale connected platform, standing up this type of architecture and building data-centric applications on it is a massive undertaking.
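
A stripped-down version of that ingest-and-respond loop might look like the following, assuming a Kafka cluster and the kafka-python client; the topic name, broker address, message fields, and alert threshold are all invented for illustration:

# Sketch of an ingest-and-respond loop over a hypothetical Kafka topic.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                           # hypothetical topic
    bootstrap_servers=["kafka.example.com:9092"],  # placeholder broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Blocks forever, handling each event as it arrives.
for message in consumer:
    reading = message.value
    if reading.get("engine_temp_c", 0) > 110:      # respond in near real time
        print(f"ALERT: vehicle {reading.get('vehicle_id')} overheating")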
 
3) Performance, Security, and Scale Must be Respected
When building infrastructure, platforms, and applications, it is crucial to do everything possible to ensure performance, security, and scale. Your architecture must be well thought out and properly implemented for every piece of the stack, infrastructure, and application. This is one of the concerns that keeps architects and engineers up at night. Can the application be scaled elastically? If it can't, should it be scaled vertically, horizontally, or by some other method? And that is just scale; the same worries apply to security and performance. Performance, security, and scale are hard to get right, and all three are crucial to the success of your infrastructure and applications.
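
With an orchestrator in place, horizontal scaling can become an API call rather than a hardware purchase. A toy sketch, reusing the same placeholder Marathon endpoint and service name as in the container example above:

# Toy sketch of horizontal scaling: instead of moving to a bigger machine
# (vertical scaling), run more identical instances behind the same service.
import requests

def scale_out(app_id: str, instances: int) -> None:
    """Ask the orchestrator to run `instances` copies of the app."""
    response = requests.put(
        f"http://marathon.example.com:8080/v2/apps{app_id}",  # placeholder
        json={"instances": instances},
        timeout=10,
    )
    response.raise_for_status()

# e.g., double capacity ahead of an anticipated traffic spike
scale_out("/hello-nginx", 6)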
 
At Twitter, one of our key problems was that the platform was real time and users could follow as many people as they wanted. We had what we internally called the "Justin Bieber problem": every time Justin Bieber sent a tweet, our system had to deliver millions of messages in real time (since he had millions of followers). During major events such as the soccer World Cup, high-profile concerts, and presidential elections, the website often experienced outages at the most critical moments. The underlying architecture could not handle these traffic patterns, so we had to fundamentally reshape the way the Twitter application was built and deployed.
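
To see why a single tweet can be so expensive, consider a toy fan-out-on-write model (all names and numbers invented): the cost of one publish scales with the author's follower count, not with overall tweet volume.

# Toy illustration of the fan-out problem. With fan-out-on-write,
# one tweet becomes one timeline write per follower.
from collections import defaultdict

timelines = defaultdict(list)          # follower_id -> list of tweets

def publish(author_id: int, tweet: str, followers: list[int]) -> None:
    # One write per follower: trivial for 100 followers, millions of
    # real-time writes for a celebrity account.
    for follower_id in followers:
        timelines[follower_id].append((author_id, tweet))

# Scaled down for the demo; a Bieber-scale account has tens of millions
# of followers, so one tweet means tens of millions of deliveries.
celebrity_followers = list(range(100_000))
publish(1, "hello world", celebrity_followers)   # 100,000 timeline writes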
 
4) Rapid Support for Open Source
There's been a Cambrian explosion of open source projects, and platforms must be able to support a diverse universe of open source software. Thanks to GitHub and other development collaboration platforms, tens of millions of developers actively contribute to open source software. Many major technology companies have launched open source projects of their own. Our own DC/OS platform is an open source project, and it integrates a number of other open source projects such as Kubernetes, Kafka, Apache Mesos, Apache Spark, and many more.
 
With so many actively maintained open source projects available, companies often find it difficult to choose which technologies to use, and trying out open source software is time-consuming. For example, if you want to do stream processing, you could try both Spark and Flink – but to figure out which one you prefer, you have to review the documentation, then set up, install, configure, and troubleshoot both packages. The process is quite lengthy, and it can take months to get to deployment.
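
Even the "hello world" of such an evaluation takes real setup. Here, for instance, is the classic structured-streaming word count you might prototype in Spark; it assumes a working Spark installation, and the socket host and port are placeholders:

# Minimal PySpark structured-streaming sketch: word counts over a socket
# source, printed to the console as a running total.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StreamingWordCount").getOrCreate()

lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")   # placeholder source
         .option("port", 9999)
         .load())

words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()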
 
Simplifying the Adoption of these New Processes and Tools
Many companies are finding that modern application development requires them not only to build complex, distributed, data-centric applications quickly, but also to continuously deliver updates and improvements to those applications. CI/CD and the rapid pace of application development make it crucial for companies to operationalize and automate best practices. Fortunately, best practices are baked into DC/OS; any platform or application running on DC/OS automatically follows best practices for deployment, security, performance, and scaling.
 
Many companies underestimate just how difficult it is to build modern data-rich applications and the underlying infrastructure that powers them. They stitch technologies together, spend much of their time setting up and testing open source tools, and discover just how hard it is to take those tools to a production deployment – and how demanding it can be to make them all work together.
 
But it doesn't have to take months for organizations to try out and deploy open source software, and it doesn't have to be difficult to integrate and manage services and data tools. DC/OS is built on Apache Mesos, a distributed systems kernel that can run on any machine. Mesos is used by some of the largest web-scale companies because it can turn an entire data center, or a set of virtual machines, into a single pool of optimized compute, which makes applications portable across any infrastructure. This foundation makes it easy for DC/OS users to integrate and manage containerized microservices and fast-data tools side by side.
 
It also shouldn't require a lot of effort to implement best practices, deploy new platform services, and provision resources. With DC/OS, we've taken these best practices and turned them into code. You get one-click installation of more than 100 of the most popular development tools – including Kafka, Kubernetes, and TensorFlow – as well as elastic scaling. DC/OS automatically handles the underlying technologies, resources, provisioning, and best practices so you don't have to.
