Virtualization

In computing, virtualization is a broad term that refers to the abstraction of computer resources. One useful definition, from independent IT analyst firm Enterprise Management Associates, is "a technique for hiding the physical characteristics of computing resources from the way in which other systems, applications, or end users interact with those resources. This includes making a single physical resource (such as a server, an operating system, an application, or storage device) appear to function as multiple logical resources; or it can include making multiple physical resources (such as storage devices or servers) appear as a single logical resource."

However, the term is an old one: It has been widely used since the 1960s or earlier, and has been applied to many different aspects and scopes of computing — from entire computer systems to individual capabilities or components. The common theme of all virtualization technologies is the hiding of technical detail, through encapsulation. Virtualization creates an external interface that hides an underlying implementation, e.g. by multiplexing access, by combining resources at different physical locations, or by simplifying a control system. Recent development of new virtualization platforms and technologies has refocused attention on this mature concept.

Like such terms as abstraction and object orientation, virtualization is used in many different contexts. This article reviews common uses of the term, divided into two main categories:

  • Platform virtualization involves the simulation of virtual machines.
  • Resource virtualization involves the simulation of combined, fragmented, or simplified resources.

Virtualization is also an important concept in non-computer contexts. Many control systems implement a virtualized interface to a complex device: a modern car's gas pedal, for example, does much more than simply increase the flow of fuel to the engine, and a fly-by-wire system presents the pilot with a simplified "virtual airplane" that may have little to do with the physical implementation.

Platform virtualization

The original sense of the term virtualization, dating from the 1960s, is the creation of a virtual machine using a combination of hardware and software. For convenience, we will call this platform virtualization. The term virtual machine apparently dates from the experimental IBM M44/44X paging system. The creation and management of virtual machines has also been called creating pseudo machines (in the early IBM CP-40 days) and, more recently, server virtualization. The terms virtualization and virtual machine have both acquired additional meanings over the years.

Platform virtualization is performed on a given hardware platform by "host" software (a control program), which creates a simulated computer environment (a virtual machine) for its "guest" software. The "guest" software, which is often itself a complete operating system, runs just as if it were installed on a stand-alone hardware platform. Typically, many such virtual machines are simulated on a given physical machine. For the "guest" system to function, the simulation must be robust enough to support all the guest system's external interfaces, which (depending on the type of virtualization) may include hardware drivers.
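
As a rough illustration of the host/guest relationship described above, the following Python sketch shows a toy "control program" that time-slices between several guest contexts, each of which runs as if it had the machine to itself. It is only a conceptual sketch; real hosts schedule guests at the hardware level, and all names here are invented for illustration.

  # Toy sketch of a host "control program" multiplexing several guests.
  # Purely illustrative; real hypervisors dispatch at the instruction/trap
  # level with hardware support, not with Python generators.

  def guest(name, steps):
      """A 'guest' that runs as if it owned the machine, yielding control
      back to the host whenever its time slice expires."""
      for i in range(steps):
          print(f"[{name}] doing work, step {i}")
          yield  # point at which the host regains control

  def control_program(guests):
      """Round-robin dispatcher: gives each guest one time slice in turn."""
      runnable = list(guests)
      while runnable:
          for g in list(runnable):
              try:
                  next(g)             # resume the guest for one slice
              except StopIteration:
                  runnable.remove(g)  # guest has shut down

  control_program([guest("vm1", 3), guest("vm2", 2)])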

There are several approaches to platform virtualization, listed below according to how completely the underlying hardware is simulated. (The following terms are not universally recognized as such, but the underlying concepts are all found in the literature.)

Emulation or simulation 
the virtual machine simulates the complete hardware, allowing an unmodified "guest" OS for a completely different CPU to be run. This approach has long been used to enable the creation of software for new processors before they were physically available. Examples include Bochs, PearPC, the PPC version of Virtual PC, QEMU without acceleration, and the Hercules emulator. Emulation is implemented using a variety of techniques, from state machines to the use of dynamic recompilation on a full virtualization platform. (A toy fetch-decode-execute loop illustrating the basic idea is sketched after this list.)
Native virtualization and full virtualization 
the virtual machine simulates enough hardware to allow an unmodified "guest" OS (one designed for the same CPU) to be run in isolation. Typically, many instances can be run at once. This approach was pioneered in 1966 with IBM CP-40 and CP-67/CMS, predecessors of IBM's VM family. Examples include Virtual Iron, VMware Workstation, VMware Server (formerly GSX Server), Parallels Desktop, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and z/VM.
Partial virtualization (including "address space virtualization") 
the virtual machine simulates multiple instances of much (but not all) of an underlying hardware environment, particularly address spaces. Such an environment supports resource sharing and process isolation, but does not allow separate "guest" operating system instances. Although not generally viewed as a virtual machine category per se, this was an important approach historically, and was used in such systems as CTSS, the experimental IBM M44/44X, and arguably such systems as OS/VS1, OS/VS2, and MVS. (Many more recent systems, such as Microsoft Windows and Linux, as well as the remaining categories below, also use this basic approach.)
Paravirtualization 
the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. This system call to the hypervisor is called a "hypercall" in Xen, Parallels Workstation and Enomalism; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's Conversational Monitor System under VM (which was the origin of the term hypervisor). Examples include VMware ESX Server, Win4Lin 9x, and z/VM.
Operating system-level virtualization 
virtualizing a physical server at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system – i.e. the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system. Examples are Linux-VServer, Virtuozzo, OpenVZ, Solaris Containers, and FreeBSD Jails.
Application virtualization 
running a desktop or server application locally, using local resources, within an appropriate virtual machine; this is in contrast with running the application as conventional local software, i.e. software that has been 'installed' on the system. (Compare this approach with Software installation and Terminal Services.) Such a virtualized application runs in a small virtual environment containing the components needed to execute – such as registry entries, files, environment variables, user interface elements, and global objects. This virtual environment acts as a layer between the application and the operating system, and eliminates application conflicts and application-OS conflicts. Examples include the Sun Java Virtual Machine, Softricity, Thinstall, Altiris, and Trigence. (This approach to virtualization is clearly different from the preceding ones; only an arbitrary line separates it from such virtual machine environments as Smalltalk, FORTH, Tcl, P-code, or any interpreted language.)
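
To make the emulation approach above a little more concrete, here is a minimal, purely illustrative Python sketch of the fetch-decode-execute loop at the heart of a simple emulator, using an invented three-instruction machine. Real emulators such as Bochs or QEMU are far more elaborate and often rely on dynamic recompilation rather than plain interpretation.

  # Minimal fetch-decode-execute loop for an invented toy CPU.
  # Illustrates the idea behind emulation only; no real instruction set is modeled.

  def run(program):
      regs = {"A": 0, "B": 0}   # toy register file
      pc = 0                    # program counter
      while pc < len(program):
          op, *args = program[pc]        # fetch and decode
          if op == "LOAD":               # LOAD reg, constant
              regs[args[0]] = args[1]
          elif op == "ADD":              # ADD dest, src
              regs[args[0]] += regs[args[1]]
          elif op == "PRINT":            # PRINT reg
              print(args[0], "=", regs[args[0]])
          pc += 1                        # advance to the next instruction
      return regs

  # A tiny "guest" program for the toy machine.
  run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])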

As of 2006, new tools and technologies are providing additional options for application virtualization and application streaming.

Resource virtualization

The basic concept of platform virtualization, described above, was later extended to the virtualization of specific system resources, such as storage volumes, name spaces, and network resources.

  • Resource aggregation, spanning, or concatenation combines individual components into larger resources or resource pools. For example:
    • RAID and volume managers combine many disks into one large logical disk. (A toy striping sketch appears after this list.)
    • Storage virtualization refers to the process of completely abstracting logical storage from physical storage, and is commonly used in SANs. The physical storage resources are aggregated into storage pools, from which the logical storage is created. Multiple independent storage devices, which may be scattered over a network, appear to the user as a single, location-independent, monolithic storage device, which can be managed centrally.
    • Channel bonding and similar techniques in network equipment combine multiple links so that they work as a single, higher-bandwidth link.
    • Virtual Private Network (VPN), Network Address Translation (NAT), and similar networking technologies create a virtualized network namespace within or across network subnets.
    • Multiprocessor and multi-core computer systems often present what appears to be a single, fast processor.
  • Computer clusters, grid computing, and virtual servers use the above techniques to combine multiple discrete computers into larger metacomputers.
  • Partitioning is the splitting of a single resource (usually large), such as disk space or network bandwidth, into a number of smaller, more easily utilized resources of the same type. This is sometimes also called "zoning," especially in storage networks.
  • Encapsulation is the hiding of resource complexity by the creation of a simplified interface. For example, CPUs often incorporate cache memory or pipelines to improve performance, but these elements are not reflected in their virtualized external interface. Similar virtualized interfaces hiding complex implementations are found in disk drives, modems, routers, and many other "smart" devices.
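
As an illustration of the aggregation idea behind RAID and volume managers mentioned above, the following Python sketch maps the blocks of one large logical disk onto several member disks in a RAID 0-like striping pattern. It is a conceptual sketch only, not how any real volume manager is implemented, and the constants are invented for illustration.

  # Toy sketch of RAID 0-style striping: a logical block address is mapped
  # onto (disk index, block index on that disk). Conceptual only.

  NUM_DISKS = 3  # hypothetical number of member disks

  def locate(logical_block):
      """Map a logical block number onto a physical (disk, block) pair."""
      disk = logical_block % NUM_DISKS           # round-robin across disks
      block_on_disk = logical_block // NUM_DISKS
      return disk, block_on_disk

  # The "large logical disk" is just the striped view over all member disks.
  for lb in range(7):
      print("logical block", lb, "->", locate(lb))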

Virtualization examples

The following examples illustrate recent applications of virtualization.

Server consolidation 
Virtual machines are used to consolidate many physical servers into fewer servers, which in turn host virtual machines. Each physical server is reflected as a virtual machine "guest" residing on a virtual machine host system. This is also known as Physical-to-Virtual or 'P2V' transformation.
Disaster recovery 
Virtual machines can be used as "hot standby" environments for physical production servers. This changes the classical "backup-and-restore" philosophy, by providing backup images that can "boot" into live virtual machines, capable of taking over workload for a production server experiencing an outage.
Testing and training 
Hardware virtualization can give root access to a virtual machine. This can be very useful, for example in kernel development and in operating system courses. (See "Examining VMware," Dr. Dobb's Journal, August 2000, by Jason Nieh and Ozgur Can Leonard.)
Portable applications 
The Microsoft Windows platform has a well-known issue involving the creation of portable applications, needed (for example) when running an application from a removable drive without installing it on the system's main disk drive. This is a particular issue with USB drives. Virtualization can be used to encapsulate the application with a redirection layer that stores temporary files, Windows Registry entries, and other state information in the application's installation directory – and not within the system's permanent file system. See portable applications for further details. It is unclear whether such implementations are currently available. (A toy sketch of such a redirection layer appears after the list below.)
Portable workspaces 
Recent technologies have used virtualization to create portable workspaces on devices like iPods and USB memory sticks. These products include:
  • Application level – Thinstall – a driver-less solution for running applications directly from removable storage, without system changes or administrator rights.
  • OS level – MojoPac, Ceedo, and U3 – which allow end users to install some applications onto a storage device for use on another PC.
  • Machine level – moka5 and LivePC – which deliver an operating system with a full software suite, including isolation and security protections.
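
As a rough sketch of the redirection idea described under "Portable applications" above, and not the mechanism of any particular product, the following Python fragment rewrites paths so that an application's state lands in its own directory rather than in system-wide locations. The directory names and prefixes are invented for illustration; real products intercept file and registry access at the OS/API level.

  # Toy path-redirection layer: file accesses aimed at system-wide locations
  # are rewritten into the application's own directory. Illustrative only.

  import os

  APP_DIR = os.path.join(os.getcwd(), "app_state")      # hypothetical per-app directory
  REDIRECTED_PREFIXES = ["C:\\Users", "/home", "/etc"]  # locations to virtualize

  def redirect(path):
      """Return a path inside APP_DIR if the original would touch the system."""
      for prefix in REDIRECTED_PREFIXES:
          if path.startswith(prefix):
              safe = path.replace(":", "").replace("\\", "_").replace("/", "_")
              return os.path.join(APP_DIR, safe)
      return path  # non-redirected paths pass through unchanged

  def open_virtualized(path, mode="r"):
      """Open a file through the redirection layer."""
      target = redirect(path)
      os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
      return open(target, mode)

  with open_virtualized("/home/user/settings.ini", "w") as f:
      f.write("colour=blue\n")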

Virtualization Comparison

See Virtualization Comparison and Comparison of Application Virtual Machines.
