Sunday, 4 May 2014

Docker: the Linux container engine (a Linux virtualization technology)


Docker is an open source project to pack, ship and run any application as a lightweight container.
Docker containers are both hardware-agnostic and platform-agnostic. This means that they can run anywhere, from your laptop to the largest EC2 compute instance and everything in between - and they don't require that you use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases and backend services without depending on a particular stack or provider.
Docker is an open-source implementation of the deployment engine which powers dotCloud, a popular Platform-as-a-Service. It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands of applications and databases.

Better than VMs

A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats are VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never happens, for a few reasons:
  • Size: VMs are very large, which makes them impractical to store and transfer.
  • Performance: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications and large-scale deployment of CPU- and memory-intensive applications on large numbers of machines.
  • Portability: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
  • Hardware-centric: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most: building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
By contrast, Docker relies on a different sandboxing method known as containerization. Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with OpenVZ, VServer and more recently LXC, Solaris with Zones, and FreeBSD with Jails.
Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all four of these problems. Docker containers are small (and their transfer can be optimized with layers), they have essentially zero memory and CPU overhead, they are completely portable, and they are designed from the ground up to be application-centric.
The best part: because Docker operates at the OS level, it can still be run inside a VM!
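One quick way to see that the sandboxing happens at the kernel level rather than in a hypervisor (assuming Docker and an ubuntu image are already installed) is to compare kernel versions; a container reports the host's kernel because it has no kernel of its own:
uname -r                        # kernel version on the host
docker run ubuntu uname -r      # the same version, printed from inside a container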

Plays well with others

Docker does not require that you buy into a particular programming language, framework, packaging system or configuration language.
Is your application a Unix process? Does it use files, TCP connections, environment variables, standard Unix streams and command-line arguments as inputs and outputs? Then Docker can run it.
Can your application's build be expressed as a sequence of such commands? Then Docker can build it.
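For instance (a hedged sketch; the image, variable and command here are arbitrary examples, not taken from the post), an ordinary Unix invocation maps directly onto docker run, with arguments, environment variables and standard streams passed straight through:
# prints the environment variable, then echoes whatever arrived on standard input
echo "data on stdin" | docker run -i -e GREETING=hello ubuntu /bin/sh -c 'echo "$GREETING"; cat'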

Escape dependency hell

A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.
This is usually difficult for several reasons:
  • Cross-platform dependencies. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules, internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work well with each other, requiring awkward custom integrations.
  • Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease - but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
  • Custom dependencies. A developer may need to prepare a custom version of their application's dependency. Some packaging systems can handle custom versions of a dependency, others can't - and all of them handle it differently.
Docker solves dependency hell by giving the developer a simple way to express all their application's dependencies in one place, while streamlining the process of assembling them. If this makes you think of XKCD 927, don't worry. Docker doesn't replace your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.
Docker defines a build as running a sequence of Unix commands, one after the other, in the same container. Build commands modify the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous commands, the order in which the commands are executed expresses dependencies.
Here's a typical Docker build process:
# Start from the official Ubuntu 12.04 base image
FROM ubuntu:12.04
# Refresh the package index, then install Python, pip and curl
RUN apt-get update
RUN apt-get install -q -y python python-pip curl
# Download and unpack the example Flask application
RUN curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
# Install the application's Python dependencies
RUN cd helloflask-master && pip install -r requirements.txt
Note that Docker doesn't care how dependencies are built - as long as they can be built by running a Unix command in a container.
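Such a file is typically turned into an image with docker build and then executed with docker run; a minimal sketch, assuming the lines above are saved as Dockerfile in the current directory (the helloflask tag and the Flask app's entry point are illustrative guesses, not part of the original post):
docker build -t helloflask .          # runs each command in sequence, committing a layer after each one
docker run -p 5000:5000 helloflask python helloflask-master/app.py   # hypothetical entry point for the app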

Getting started

Docker can be installed on your local machine as well as servers - both bare metal and virtualized. It is available as a binary on most modern Linux systems, or as a VM on Windows, Mac and other systems.
We also offer an interactive tutorial for quickly learning the basics of using Docker.
For up-to-date install instructions and online tutorials, see the Getting Started page.

Usage examples

Docker can be used to run short-lived commands, long-running daemons (app servers, databases etc.), interactive shell sessions, etc.
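A few sketches of those usage patterns (the images and the commands run inside the containers are placeholders):
docker run ubuntu:12.04 echo "hello world"                                    # short-lived command: runs and exits
docker run -d ubuntu:12.04 /bin/sh -c "while true; do date; sleep 1; done"    # long-running daemon, detached with -d
docker run -i -t ubuntu:12.04 /bin/bash                                       # interactive shell session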
You can find a list of real-world examples in the documentation.

Under the hood

Under the hood, Docker is built on a few low-level components: the cgroup and namespace facilities of the Linux kernel (driven through lxc) and copy-on-write filesystems for container images.
(Source: https://github.com/dotcloud/docker)
----------------------
A brief introduction to Docker
Docker is an open-source application container engine. It lets developers package an application together with its dependencies into a portable container and then ship it to any popular Linux machine, and it can also serve as a form of virtualization. Containers are fully sandboxed and have no interfaces into one another (much like apps on an iPhone). They add almost no performance overhead and can easily be run across machines and data centers. Most importantly, they do not depend on any particular language, framework or packaging system.
Put simply, I think of Docker as an execution container for applications, conceptually similar to a virtual machine. It differs from virtualization technology in a few ways:
  • Virtualization relies on the physical CPU and memory and works at the hardware level, whereas Docker is built on top of the operating system and uses the OS's containerization facilities, so Docker can even run inside a virtual machine.
  • A virtualized system usually means a full operating-system image, which is relatively heavyweight, hence "system"; Docker is open source and lightweight, hence "container", and a single container is well suited to running a small number of applications, for example one redis or one memcached instance.
  • Traditional virtualization saves state through snapshots; Docker makes saving state lighter and cheaper, and it also introduces a mechanism similar to source-code version control, recording each historical snapshot of a container so that switching between versions is very cheap (a short sketch follows this list).
  • Building a system with traditional virtualization is fairly complex and labor-intensive; with Docker the whole container can be built from a Dockerfile, and rebuilding and restarting are fast. More importantly, a Dockerfile can be written by hand, so application developers can publish one to describe the required system environment and dependencies, which is very helpful for continuous delivery.
  • A Dockerfile can create a new container starting from an already-built container image, and Dockerfiles can be shared and downloaded through the community, which helps the technology spread.
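Here is a rough sketch of that version-control analogy (the container ID and image name are made up for illustration): committing a container records its filesystem as a new image, and the image's layer history can be inspected afterwards:
docker run -i -t ubuntu:12.04 /bin/bash        # start a container and make some changes inside it
docker commit <container-id> myrepo/snapshot   # commit the container's state as a new image
docker history myrepo/snapshot                 # list the image's recorded layer history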
Docker's main features are as follows (excerpted from "Docker: automated and consistent software deployment"):
  • Filesystem isolation: each process container runs in a completely separate root filesystem.
  • Resource isolation: cgroups can be used to allocate different system resources, such as CPU and memory, to each process container.
  • Network isolation: each process container runs in its own network namespace, with its own virtual interface and IP address.
  • Copy-on-write: root filesystems are created using copy-on-write, which makes deployment extremely fast and saves memory and disk space.
  • Logging: Docker collects and records each process container's standard streams (stdout/stderr/stdin) for real-time or batch retrieval (see the sketch after this list).
  • Change management: changes to a container's filesystem can be committed to a new image and reused to create more containers, with no templates or manual configuration needed.
  • Interactive shell: Docker can allocate a pseudo-tty and attach it to any container's standard input, for example to run a throwaway interactive shell.
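Two of these features, logging and change management, are visible directly from the command line; a minimal sketch with placeholder names:
docker run -d ubuntu /bin/sh -c "echo hello; touch /new-file"   # the container writes to stdout and to its filesystem
docker logs <container-id>    # logging: retrieve the container's captured standard output
docker diff <container-id>    # change management: list filesystem changes relative to the base image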
Docker is still under development, and the project does not yet recommend it for production use. In addition, Docker is developed on Ubuntu, so the official recommendation is to install it on Ubuntu; at the moment it can only be installed on Linux systems.