Docker, an open-source technology, isn’t just the darling of Linux powers such as Red Hat. Proprietary software companies such as Microsoft have also embraced Docker.
So why does everyone love containers and Docker? James Bottomley, Parallels’ CTO of server virtualization and a leading Linux kernel developer, explained that VM hypervisors, such as Hyper-V, KVM, and Xen, are all “based on emulating virtual hardware. That means they’re fat in terms of system requirements.”
Containers, however, use shared operating systems. That means they are much more efficient than hypervisors in system-resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application,” said Bottomley.
Therefore, with a perfectly tuned container system, you can have as many as four to six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.
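Taken at face value, that density claim is easy to quantify. Here is a minimal back-of-the-envelope sketch; the per-host VM count of 10 is a hypothetical figure for illustration, not a number from the article:

```python
# Rough capacity comparison based on the claim that a tuned container
# system can run 4-6x as many application instances as Xen/KVM VMs.
def container_capacity(vm_instances: int) -> tuple[int, int]:
    """Return the (low, high) estimated container count for a host
    that can run `vm_instances` virtual machines."""
    return 4 * vm_instances, 6 * vm_instances

low, high = container_capacity(10)  # hypothetical: 10 VMs per host
print(f"Roughly {low}-{high} containers on the same hardware")
```

So a rack that tops out at a few hundred VMs could, under this claim, host over a thousand containerized application instances.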
Sounds great, right? You get a lot more application bang for your server buck. So, why hasn’t anyone done it before? Well, actually, they have. Containers are an old idea. Docker has been working on such open-source projects as OpenVZ and LXC to make containers work well and securely.