Lessons from Running Containers, Microservices, and Stateful Big Data Services in Production
As Marc Andreessen foresaw almost five years ago, software is eating the world. Businesses of all types must develop and deploy new software services quickly to stay competitive. This turns out to be quite challenging, because enterprises must: (1) quickly adopt entirely new processes for building and deploying software, including modern practices such as microservices, containers, and continuous integration and deployment; (2) ingest and store vast amounts of data in real time, such as from machine sensors and customer and business activity; and (3) derive actionable insight from that data, again in real time, in order to save money, respond to market conditions more quickly, and deliver better products and services.
IT organizations must meet these challenges while addressing the traditional concerns of efficiency, security, service quality, and operational flexibility. Early web companies like Google and Facebook were the first to encounter these challenges, and they found the answer in hyperscale computing: modern applications composed of distributed microservices with big data built in, often running on commodity hardware. For mainstream enterprises, building and operating such modern apps can be a significant challenge.
The Datacenter Operating System (DC/OS) applies the best practices established by those early web companies and is powered by the production-proven Apache Mesos distributed systems kernel. With DC/OS, modern apps become practical for mainstream enterprises.
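As a concrete illustration, services on DC/OS are typically described declaratively and handed to its Marathon scheduler, which places and supervises the requested instances across the cluster. The following is a minimal sketch of such a service definition; the app id, command, and resource figures are hypothetical values chosen for illustration:

```json
{
  "id": "/hello-web",
  "cmd": "python3 -m http.server $PORT0",
  "cpus": 0.1,
  "mem": 32,
  "instances": 2
}
```

Given a definition like this, the scheduler runs the requested number of instances on available nodes and restarts them on failure, which is part of what makes the hyperscale pattern operationally practical for a mainstream IT team.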