This guide explains how you can install and use KVM for creating and running virtual machines on a CentOS 6.3 server. I will show how to create image-based virtual machines and also virtual machines that use a logical volume (LVM). KVM is short for Kernel-based Virtual Machine and makes use of hardware virtualization, i.e., you need a CPU that supports hardware virtualization, e.g. Intel VT or AMD-V.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
I’m using a CentOS 6.3 server with the hostname server1.example.com and the IP address 192.168.0.100 here as my KVM host.
I had SELinux disabled on my CentOS 6.3 system. I didn't test with SELinux enabled; it might work, but if not, you had better switch SELinux off as well:
vi /etc/selinux/config
Set SELINUX=disabled…
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
… and reboot:
reboot
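After the reboot, you can quickly verify that SELinux is really off:

getenforce

This should print Disabled.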
We also need a desktop system where we install virt-manager so that we can connect to the graphical console of the virtual machines that we install. I’m using a Fedora 17 desktop here.
2 Installing KVM
CentOS 6.3 KVM Host:
First check whether your CPU supports hardware virtualization. If it does, the command
egrep '(vmx|svm)' --color=always /proc/cpuinfo
should display something, e.g. like this:
[root@server1 ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
[root@server1 ~]#
If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
Now we import the GPG keys for software packages:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
To install KVM and virtinst (a tool to create virtual machines), we run
yum install kvm libvirt python-virtinst qemu-kvm
Then start the libvirt daemon:
/etc/init.d/libvirtd start
To check if KVM has successfully been installed, run
virsh -c qemu:///system list
It should display something like this:
[root@server1 ~]# virsh -c qemu:///system list
Id Name State
----------------------------------
[root@server1 ~]#
If it displays an error instead, then something went wrong with the installation.
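In that case it is worth checking whether the KVM kernel modules are actually loaded:

lsmod | grep kvm

On Intel CPUs you should see kvm_intel and kvm, on AMD CPUs kvm_amd and kvm; if they are missing, try loading them with modprobe kvm_intel or modprobe kvm_amd.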
Next we need to set up a network bridge on our server so that our virtual machines can be accessed from other hosts as if they were physical systems in the network.
To do this, we install the package bridge-utils…
yum install bridge-utils
… and configure a bridge. Create the file /etc/sysconfig/network-scripts/ifcfg-br0 (please use the IPADDR, PREFIX, GATEWAY, DNS1 and DNS2 values from the /etc/sysconfig/network-scripts/ifcfg-eth0 file); make sure you use TYPE=Bridge, not TYPE=Ethernet:
vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0" NM_CONTROLLED="yes" ONBOOT=yes TYPE=Bridge BOOTPROTO=none IPADDR=192.168.0.100 PREFIX=24 GATEWAY=192.168.0.1 DNS1=8.8.8.8 DNS2=8.8.4.4 DEFROUTE=yes IPV4_FAILURE_FATAL=yes IPV6INIT=no NAME="System br0" |
Modify /etc/sysconfig/network-scripts/ifcfg-eth0 as follows (comment out BOOTPROTO, IPADDR, PREFIX, GATEWAY, DNS1, and DNS2 and add BRIDGE=br0):
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0" #BOOTPROTO=none NM_CONTROLLED="yes" ONBOOT=yes TYPE="Ethernet" UUID="73cb0b12-1f42-49b0-ad69-731e888276ff" HWADDR=00:1E:90:F3:F0:02 #IPADDR=192.168.0.100 #PREFIX=24 #GATEWAY=192.168.0.1 #DNS1=8.8.8.8 #DNS2=8.8.4.4 DEFROUTE=yes IPV4_FAILURE_FATAL=yes IPV6INIT=no NAME="System eth0" BRIDGE=br0 |
Restart the network…
/etc/init.d/network restart
… and run
ifconfig
It should now show the network bridge (br0):
[root@server1 ~]# ifconfig
br0 Link encap:Ethernet HWaddr 00:1E:90:F3:F0:02
inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:460 (460.0 b) TX bytes:2298 (2.2 KiB)
eth0 Link encap:Ethernet HWaddr 00:1E:90:F3:F0:02
inet6 addr: fe80::21e:90ff:fef3:f002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18455 errors:0 dropped:0 overruns:0 frame:0
TX packets:11861 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:26163057 (24.9 MiB) TX bytes:1100370 (1.0 MiB)
Interrupt:25 Base address:0xe000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2456 (2.3 KiB) TX bytes:2456 (2.3 KiB)
virbr0 Link encap:Ethernet HWaddr 52:54:00:AC:AC:8F
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@server1 ~]#
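You can also verify that eth0 has been added to the bridge:

brctl show

eth0 should be listed in the interfaces column of br0.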
3 Installing virt-viewer Or virt-manager On Your Fedora 17 Desktop
Fedora 17 Desktop:
We need a means of connecting to the graphical console of our guests – we can use virt-manager for this. I’m assuming that you’re using a Fedora 17 desktop.
Become root…
su
… and run…
yum install virt-manager libvirt qemu-system-x86 openssh-askpass
… to install virt-manager.
(If you’re using an Ubuntu 12.04 desktop, you can install virt-manager as follows:
sudo apt-get install virt-manager
)
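virt-manager can also open the connection to the remote KVM host directly over SSH, for example:

virt-manager -c qemu+ssh://root@192.168.0.100/system

You will be prompted for the root password of the KVM host; this is where openssh-askpass comes in.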
4 Creating A Debian Squeeze Guest (Image-Based) From The Command Line
CentOS 6.3 KVM Host:
Now let’s go back to our CentOS 6.3 KVM host.
Take a look at
man virt-install
to learn how to use virt-install.
We will create our image-based virtual machines in the directory /var/lib/libvirt/images/ which was created automatically when we installed KVM in chapter two.
To create a Debian Squeeze guest (in bridging mode) with the name vm10, 512MB of RAM, two virtual CPUs, and the disk image /var/lib/libvirt/images/vm10.img (with a size of 12GB), insert the Debian Squeeze Netinstall CD into the CD drive and run
virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm
Of course, you can also create an ISO image of the Debian Squeeze Netinstall CD (please create it in the /var/lib/libvirt/images/ directory because later on I will show how to create virtual machines through virt-manager from your Fedora desktop, and virt-manager will look for ISO images in the /var/lib/libvirt/images/ directory)…
dd if=/dev/cdrom of=/var/lib/libvirt/images/debian-6.0.5-amd64-netinst.iso
… and use the ISO image in the virt-install command:
virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/debian-6.0.5-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm
The output is as follows:
[root@server1 ~]# virt-install --connect qemu:///system -n vm10 -r 512 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/debian-6.0.5-amd64-netinst.iso --vnc --noautoconsole --os-type linux --os-variant debiansqueeze --accelerate --network=bridge:br0 --hvm
Starting install…
Allocating ’vm10.img’ | 12 GB 00:00
Creating domain… | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
[root@server1 ~]#
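While the installation is running you can check the guest from the host, for example:

virsh list
virsh vncdisplay vm10

virsh list shows the running domains, and virsh vncdisplay prints the VNC display (e.g. :0, which corresponds to TCP port 5900) that you can connect to with virt-manager or virt-viewer.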
from http://www.howtoforge.com/virtualization-with-kvm-on-a-centos-6.3-server
----------
virt-install – provision new virtual machines
Synopsis
virt-install [ OPTION ]…
Description
virt-install is a command line tool for creating new KVM or Xen virtual machines using the "libvirt" hypervisor management library. See the EXAMPLES section at the end of this document to quickly get started.
The virt-install tool supports both text-based and graphical installations, using VNC or SDL graphics, or a text serial console. The guest can be configured to use one or more virtual disks, network interfaces, audio devices, and physical USB or PCI devices, among others.
The installation media can be held locally or remotely on NFS, HTTP, or FTP servers. In the latter case "virt-install" will fetch the minimal files necessary to kick off the installation process, allowing the guest to fetch the rest of the OS distribution as needed. PXE booting, and importing an existing disk image (thus skipping the install phase), are also supported.
Given suitable command line arguments, "virt-install" is capable of running completely unattended, with the guest 'kickstarting' itself too. This allows for easy automation of guest installs. An interactive mode is also available with the --prompt option, but this will only ask for the minimum required options.
Options
Most options are not required. Minimum requirements are --name, --ram, guest storage (--disk or --nodisks), and an install option.
- -h, --help
- Show the help message and exit
- --connect=CONNECT
- Connect to a non-default hypervisor. The default connection is chosen based on the following rules:
- xen
If running on a host with the Xen kernel (checks against /proc/xen)
- qemu:///system
- If running on a bare metal kernel as root (needed for KVM installs)
- qemu:///session
- If running on a bare metal kernel as non-root. It is only necessary to provide the "--connect" argument if this default prioritization is incorrect, e.g. if wanting to use QEMU while on a Xen kernel.
General Options
- General configuration parameters that apply to all types of guest installs.
- -n NAME , –name=NAME
- Name of the new guest virtual machine instance. This must be unique amongst all guests known to the hypervisor on the connection, including those not currently active. To re-define an existing guest, use the virsh(1) tool to shut it down (‘virsh shutdown’) & delete (‘virsh undefine’) it prior to running "virt-install".
- -r MEMORY , –ram=MEMORY
- Memory to allocate for guest instance in megabytes. If the hypervisor does not have enough free memory, it is usual for it to automatically take memory away from the host operating system to satisfy this allocation.
- –arch=ARCH
- Request a non-native CPU architecture for the guest virtual machine. If omitted, the host CPU architecture will be used in the guest.
- –machine=MACHINE
- The machine type to emulate. This will typically not need to be specified for Xen or KVM, but is useful for choosing machine types of more exotic architectures.
- -u UUID , –uuid=UUID
- UUID for the guest; if none is given a random UUID will be generated. If you specify UUID , you should use a 32-digit hexadecimal number. UUID are intended to be unique across the entire data center, and indeed world. Bear this in mind if manually specifying a UUID
- –vcpus=VCPUS[,maxvcpus=MAX][,sockets=#][,cores=#][,threads=#]
- Number of virtual cpus to configure for the guest. If ‘maxvcpus’ is specified, the guest will be able to hotplug up to MAX vcpus while the guest is running, but will startup with VCPUS.
CPU topology can additionally be specified with sockets, cores, and threads. If values are omitted, the rest will be autofilled prefering sockets over cores over threads.
- –cpuset=CPUSET
- Set which physical cpus the guest can use. "CPUSET"is a comma separated list of numbers, which can also be specified in ranges. Example:
0,2,3,5 : Use processors 0,2,3 and 5 1-3,5,6-8 : Use processors 1,2,3,5,6,7 and 8
- If the value ‘auto’ is passed, virt-install attempts to automatically determine an optimal cpu pinning using NUMA data, if available.
- –cpu MODEL[,+feature][,-feature][,match=MATCH][,vendor=VENDOR]
- Configure the CPU model and CPU features exposed to the guest. The only required value is MODEL, which is a valid CPU model as listed in libvirt's cpu_map.xml file.
Specific CPU features can be specified in a number of ways: using one of libvirt’s feature policy values force, require, optional, disable, or forbid, or with the shorthand ‘+feature’ and ‘-feature’, which equal ‘force=feature’ and ‘disable=feature’ respectively
Some examples:
- --cpu core2duo,+x2apic,disable=vmx
- Expose the core2duo CPU model, force enable x2apic, but do not expose vmx
- --cpu host
- Expose the host CPU's configuration to the guest. This enables the guest to take advantage of many of the host CPU's features (better performance), but may cause issues if migrating the guest to a host without an identical CPU.
- –description
- Human readable text description of the virtual machine. This will be stored in the guests XML configuration for access by other applications.
- –security type=TYPE[,label=LABEL]
- Configure domain security driver settings. Type can be either 'static' or 'dynamic'. 'static' configuration requires a security LABEL. Specifying LABEL without TYPE implies static configuration.
Installation Method options
- -c CDROM, --cdrom=CDROM
- File or device to use as a virtual CD-ROM device for fully virtualized guests. It can be a path to an ISO image, or to a CDROM device. It can also be a URL from which to fetch/access a minimal boot ISO image. The URLs take the same format as described for the "--location" argument. If a cdrom has been specified via the "--disk" option, and neither "--cdrom" nor any other install option is specified, the "--disk" cdrom is used as the install media.
- -l LOCATION , –location=LOCATION
- Installation source for guest virtual machine kernel+initrd pair. The "LOCATION" can take one of the following forms:
- DIRECTORY
- Path to a local directory containing an installable distribution image
- nfs:host:/path or nfs://host/path
- An NFS server location containing an installable distribution image
- http://host/path
- An HTTP server location containing an installable distribution image
- ftp://host/path
- An FTP server location containing an installable distribution image
- Some distro specific url samples:
- Fedora/Red Hat Based
- http://download.fedoraproject.org/pub/fedora/linux/releases/10/Fedora/i386/os/
- Debian/Ubuntu
- http://ftp.us.debian.org/debian/dists/etch/main/installer-amd64/
- Suse
- http://download.opensuse.org/distribution/11.0/repo/oss/
- Mandriva
- ftp://ftp.uwsg.indiana.edu/linux/mandrake/official/2009.0/i586/
- –pxe
- Use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process.
- –import
- Skip the OS installation process, and build a guest around an existing disk image. The device used for booting is the first device specified via "--disk" or "--file".
- –livecd
- Specify that the installation media is a live CD and thus the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the "--nodisks" flag in combination.
- -x EXTRA, --extra-args=EXTRA
- Additional kernel command line arguments to pass to the installer when performing a guest install from "--location". One common usage is specifying an anaconda kickstart file for automated installs, such as --extra-args "ks=http://myserver/my.ks"
- --initrd-inject=PATH
- Add PATH to the root of the initrd fetched with "--location". This can be used to run an automated install without requiring a network hosted kickstart file:
--initrd-inject=/path/to/my.ks --extra-args "ks=file:/my.ks"
- --os-type=OS_TYPE
- Optimize the guest configuration for a type of operating system (ex. 'linux', 'windows'). This will attempt to pick the most suitable ACPI and APIC settings, optimally supported mouse drivers, virtio, and generally accommodate other operating system quirks.
By default, virt-install will attempt to auto detect this value from the install media (currently only supported for URL installs). Autodetection can be disabled with the special value 'none'.
See "--os-variant" for valid options.
- --os-variant=OS_VARIANT
- Further optimize the guest configuration for a specific operating system variant (ex. 'fedora8', 'winxp'). This parameter is optional, and does not require an "--os-type" to be specified.
By default, virt-install will attempt to auto detect this value from the install media (currently only supported for URL installs). Autodetection can be disabled with the special value ‘none’.
Valid values are:
- windows
- win7
- Microsoft Windows 7
- vista
- Microsoft Windows Vista
- winxp64
- Microsoft Windows XP (x86_64)
- winxp
- Microsoft Windows XP
- win2k
- Microsoft Windows 2000
- win2k8
- Microsoft Windows Server 2008
- win2k3
- Microsoft Windows Server 2003
- unix
- openbsd4
- OpenBSD 4.x
- freebsd8
- FreeBSD 8.x
- freebsd7
- FreeBSD 7.x
- freebsd6
- FreeBSD 6.x
- solaris
- solaris9
- Sun Solaris 9
- solaris10
- Sun Solaris 10
- opensolaris
- Sun OpenSolaris
- other
- netware6
- Novell Netware 6
- netware5
- Novell Netware 5
- netware4
- Novell Netware 4
- msdos
- MS-DOS
- generic
- Generic
- linux
- debiansqueeze
- Debian Squeeze
- debianlenny
- Debian Lenny
- debianetch
- Debian Etch
- fedora14
- Fedora 14
- fedora13
- Fedora 13
- fedora12
- Fedora 12
- fedora11
- Fedora 11
- fedora10
- Fedora 10
- fedora9
- Fedora 9
- fedora8
- Fedora 8
- fedora7
- Fedora 7
- fedora6
- Fedora Core 6
- fedora5
- Fedora Core 5
- mes5.1
- Mandriva Enterprise Server 5.1 and later
- mes5
- Mandriva Enterprise Server 5.0
- mandriva2010
- Mandriva Linux 2010 and later
- mandriva2009
- Mandriva Linux 2009 and earlier
- rhel6
- Red Hat Enterprise Linux 6
- rhel5.4
- Red Hat Enterprise Linux 5.4 or later
- rhel5
- Red Hat Enterprise Linux 5
- rhel4
- Red Hat Enterprise Linux 4
- rhel3
- Red Hat Enterprise Linux 3
- rhel2.1
- Red Hat Enterprise Linux 2.1
- sles11
- Suse Linux Enterprise Server 11
- sles10
- Suse Linux Enterprise Server
- ubuntumaverick
- Ubuntu 10.10 (Maverick Meerkat)
- ubuntulucid
- Ubuntu 10.04 (Lucid Lynx)
- ubuntukarmic
- Ubuntu 9.10 (Karmic Koala)
- ubuntujaunty
- Ubuntu 9.04 (Jaunty Jackalope)
- ubuntuintrepid
- Ubuntu 8.10 (Intrepid Ibex)
- ubuntuhardy
- Ubuntu 8.04 LTS (Hardy Heron)
- virtio26
- Generic 2.6.25 or later kernel with virtio
- generic26
- Generic 2.6.x kernel
- generic24
- Generic 2.4.x kernel
- none
- No OS version specified (disables autodetect)
- --boot=BOOTOPTS
- Optionally specify the post-install VM boot configuration. This option allows specifying a boot device order, permanently booting off a kernel/initrd with optional kernel arguments, and enabling a BIOS boot menu (requires libvirt 0.8.3 or later)
--boot can be specified in addition to other install options (such as --location, --cdrom, etc.) or can be specified on its own. In the latter case, behavior is similar to the --import install option: there is no 'install' phase, the guest is just created and launched as specified.
Some examples:
- --boot cdrom,fd,hd,network,menu=on
- Set the boot device priority as first cdrom, first floppy, first hard disk, network PXE boot. Additionally enable BIOS boot menu prompt.
- --boot kernel=KERNEL,initrd=INITRD,kernel_args="console=/dev/ttyS0"
- Have guest permanently boot off a local kernel/initrd pair, with the specified kernel options.
Storage Configuration
- –disk=DISKOPTS
- Specifies media to use as storage for the guest, with various options. The general format of a disk string is
--disk opt1=val1,opt2=val2,...
- To specify media, the command can either be:
--disk /some/storage/path,opt1=val1
- or explicitly specify one of the following arguments:
- path
- A path to some storage media to use, existing or not. Existing media can be a file or block device. If installing on a remote host, the existing media must be shared as a libvirt storage volume. Specifying a non-existent path implies attempting to create the new storage, and will require specifying a 'size' value. If the base directory of the path is a libvirt storage pool on the host, the new storage will be created as a libvirt storage volume. For remote hosts, the base directory is required to be a storage pool if using this method.
- pool
- An existing libvirt storage pool name to create new storage on. Requires specifying a ‘size’ value.
- vol
An existing libvirt storage volume to use. This is specified as ‘poolname/volname’.
- Other available options:
- device
- Disk device type. Value can be ‘cdrom’, ‘disk’, or ‘floppy’. Default is ‘disk’. If a ‘cdrom’ is specified, and no install method is chosen, the cdrom is used as the install media.
- bus
Disk bus type. Value can be ‘ide’, ‘scsi’, ‘usb’, ‘virtio’ or ‘xen’. The default is hypervisor dependent since not all hypervisors support all bus types.
- perms
- Disk permissions. Value can be ‘rw’ (Read/Write), ‘ro’ (Readonly), or ‘sh’ (Shared Read/Write). Default is ‘rw’
- size
- size (in GB ) to use if creating new storage
- sparse
- whether to skip fully allocating newly created storage. Value is 'true' or 'false'. Default is 'true' (do not fully allocate). The initial time taken to fully allocate the guest virtual disk (sparse=false) will usually be balanced by faster install times inside the guest. Thus use of this option is recommended to ensure consistently high performance and to avoid I/O errors in the guest should the host filesystem fill up.
- cache
- The cache mode to be used. The host pagecache provides cache memory. The cache value can be ‘none’, ‘writethrough’, or ‘writeback’. ‘writethrough’ provides read caching. ‘writeback’ provides read and write caching.
- format
- Image format to be used if creating managed storage. For file volumes, this can be 'raw', 'qcow2', 'vmdk', etc. See format types in <http://libvirt.org/storage.html> for possible values. This is often mapped to the driver_type value as well. With libvirt 0.8.3 and later, this option should be specified if reusing an existing disk image, since libvirt does not autodetect storage format as it is a potential security issue. For example, if reusing an existing qcow2 image, you will want to specify format=qcow2, otherwise the hypervisor may not be able to read your disk image.
- driver_name
- Driver name the hypervisor should use when accessing the specified storage. Typically does not need to be set by the user.
- driver_type
- Driver format/type the hypervisor should use when accessing the specified storage. Typically does not need to be set by the user.
- io
Disk IO backend. Can be either “threads” or “native”.
- See the examples section for some uses. This option deprecates "--file", "--file-size", and "--nonsparse".
- –nodisks
- Request a virtual machine without any local disk storage, typically used for running ‘Live CD ‘ images or installing to network storage (iSCSI or NFS root).
- -f DISKFILE , –file=DISKFILE
- This option is deprecated in favor of "--disk path=DISKFILE".
- -s DISKSIZE , –file-size=DISKSIZE
- This option is deprecated in favor of "--disk ...,size=DISKSIZE,..."
- –nonsparse
- This option is deprecated in favor of "--disk ...,sparse=false,..."
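Combining several of the options above, a single --disk argument might look like this (the image path is just an example):

--disk path=/var/lib/libvirt/images/db1.img,size=20,format=qcow2,sparse=false,bus=virtio,cache=none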
Networking Configuration
- -w NETWORK , –network=NETWORK,opt1=val1,opt2=val2
- Connect the guest to the host network. The value for "NETWORK" can take one of 3 formats:
- bridge=BRIDGE
- Connect to a bridge device in the host called "BRIDGE". Use this option if the host has static networking config & the guest requires full outbound and inbound connectivity to/from the LAN . Also use this if live migration will be used with this guest.
- network=NAME
- Connect to a virtual network in the host called "NAME". Virtual networks can be listed, created, and deleted using the "virsh" command line tool. In an unmodified install of "libvirt" there is usually a virtual network with a name of "default". Use a virtual network if the host has dynamic networking (eg NetworkManager), or is using wireless. The guest will be NATed to the LAN by whichever connection is active.
- user
- Connect to the LAN using SLIRP . Only use this if running a QEMU guest as an unprivileged user. This provides a very limited form of NAT .
- If this option is omitted a single NIC will be created in the guest. If there is a bridge device in the host with a physical interface enslaved, that will be used for connectivity. Failing that, the virtual network called "default" will be used. This option can be specified multiple times to setup more than one NIC.
Other available options are:
- model
- Network device model as seen by the guest. Value can be any nic model supported by the hypervisor, e.g.: ‘e1000′, ‘rtl8139′, ‘virtio’, …
- mac
Fixed MAC address for the guest; if this parameter is omitted, or the value "RANDOM" is specified, a suitable address will be randomly generated. For Xen virtual machines it is required that the first 3 pairs in the MAC address be the sequence '00:16:3e', while for QEMU or KVM virtual machines it must be '52:54:00'.
- –nonetworks
- Request a virtual machine without any network interfaces.
- -b BRIDGE , –bridge=BRIDGE
- This parameter is deprecated in favour of "--network bridge=bridge_name".
- -m MAC , –mac=MAC
- This parameter is deprecated in favour of "--network NETWORK,mac=12:34..."
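Similarly, the networking options above can be combined into a single argument, for example a bridged virtio NIC with a fixed QEMU/KVM MAC address (the address itself is only a placeholder):

--network bridge=br0,model=virtio,mac=52:54:00:11:22:33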
Graphics Configuration
- If no graphics option is specified, "virt-install" will default to --vnc if the DISPLAY environment variable is set, otherwise --nographics is used.
- --graphics TYPE,opt1=arg1,opt2=arg2,...
- Specifies the graphical display configuration. This does not configure any virtual hardware, just how the guest's graphical display can be accessed. Typically the user does not need to specify this option; virt-install will try and choose a useful default, and launch a suitable connection.
General format of a graphical string is
--graphics TYPE,opt1=arg1,opt2=arg2,...
- For example:
--graphics vnc,password=foobar
- The supported options are:
- type
- The display type. This is one of:
- vnc
- Set up a virtual console in the guest and export it as a VNC server in the host. Unless the "port" parameter is also provided, the VNC server will run on the first free port number at 5900 or above. The actual VNC display allocated can be obtained using the "vncdisplay" command to "virsh" (or virt-viewer(1) can be used, which handles this detail for the user).
- sdl
- Set up a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed the guest may be unconditionally terminated.
- spice
- Export the guest's console using the Spice protocol. Spice allows advanced features like audio and USB device streaming, as well as improved graphical performance.
- none
- No graphical console will be allocated for the guest. Fully virtualized guests (Xen FV or QEMU/KVM) will need to have a text console configured on the first serial port in the guest (this can be done via the --extra-args option). Xen PV will set this up automatically. The command 'virsh console NAME' can be used to connect to the serial device.
- port
- Request a permanent, statically assigned port number for the guest console. This is used by ‘vnc’ and ‘spice’
- tlsport
- Specify the spice tlsport.
- listen
- Address to listen on for VNC/Spice connections. Default is typically 127.0.0.1 (localhost only), but some hypervisors allow changing this globally (for example, the qemu driver default can be changed in /etc/libvirt/qemu.conf). Use 0.0.0.0 to allow access from other machines. This is used by 'vnc' and 'spice'
- keymap
- Request that the virtual VNC console be configured to run with a specific keyboard layout. If the special value ‘local’ is specified, virt-install will attempt to configure to use the same keymap as the local system. A value of ‘none’ specifically defers to the hypervisor. Default behavior is hypervisor specific, but typically is the same as ‘local’. This is used by ‘vnc’
- password
- Request a VNC password, required at connection time. Beware, this info may end up in virt-install log files, so don’t use an important password. This is used by ‘vnc’ and ‘spice’
- passwordvalidto
- Set an expiration date for the password. After the date/time has passed, all new graphical connections are denied until a new password is set. This is used by 'vnc' and 'spice'. The format for this value is YYYY-MM-DDTHH:MM:SS, for example 2011-04-01T14:30:15
- –vnc
- This option is deprecated in favor of "--graphics vnc,..."
- –vncport=VNCPORT
- This option is deprecated in favor of "--graphics vnc,port=PORT,..."
- –vnclisten=VNCLISTEN
- This option is deprecated in favor of "--graphics vnc,listen=LISTEN,..."
- -k KEYMAP , –keymap=KEYMAP
- This option is deprecated in favor of "--graphics vnc,keymap=KEYMAP,..."
- –sdl
- This option is deprecated in favor of "--graphics sdl,..."
- –nographics
- This option is deprecated in favor of "--graphics none"
- –noautoconsole
- Don’t automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run the "virsh" "console" command to display the text console. Use of this parameter will disable this behaviour.
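As an example of the graphics options above, a VNC console listening on all addresses with the local keymap and a connection password could be requested with (the password is only a placeholder):

--graphics vnc,listen=0.0.0.0,keymap=local,password=secret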
Virtualization Type options
- Options to override the default virtualization type choices.
- -v, –hvm
- Request the use of full virtualization, if both para & full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU based hypervisor.
- -p, –paravirt
- This guest should be a paravirtualized guest. If the host supports both para & full virtualization, and neither this parameter nor the "--hvm" are specified, this will be assumed.
- –virt-type
- The hypervisor to install on. Example choices are kvm, qemu, xen, or kqemu. Available options are listed via 'virsh capabilities' in the <domain> tags.
- –accelerate
- Prefer KVM or KQEMU (in that order) if installing a QEMU guest. This behavior is now the default, and this option is deprecated. To install a plain QEMU guest, use '--virt-type qemu'
- --noapic
- Override the OS type / variant to disable the APIC setting for fully virtualized guests.
- --noacpi
- Override the OS type / variant to disable the ACPI setting for fully virtualized guests.
Device Options
- –host-device=HOSTDEV
- Attach a physical host device to the guest. Some example values for HOSTDEV:
- –host-device pci_0000_00_1b_0
- A node device name via libvirt, as shown by ‘virsh nodedev-list’
- –host-device 001.003
- USB by bus, device (via lsusb).
- --host-device 0x1234:0x5678
- USB by vendor, product (via lsusb).
- –host-device 1f.01.02
- PCI device (via lspci).
- –soundhw MODEL
- Attach a virtual audio device to the guest. MODEL specifies the emulated sound card model. Possible values are ich6, ac97, es1370, sb16, pcspk, or default. ‘default’ will be AC97 if the hypervisor supports it, otherwise it will be ES1370.
This deprecates the old boolean –sound model (which still works the same as a single ‘–soundhw default’)
- –watchdog MODEL[,action=ACTION]
- Attach a virtual hardware watchdog device to the guest. This requires a daemon and device driver in the guest. The watchdog fires a signal when the virtual machine appears to be hung. ACTION specifies what libvirt will do when the watchdog fires. Values are
- reset
- Forcefully reset the guest (the default)
- poweroff
- Forcefully power off the guest
- pause
- Pause the guest
- none
- Do nothing
- shutdown
- Gracefully shutdown the guest (not recommended, since a hung guest probably won’t respond to a graceful shutdown)
- MODEL is the emulated device model: either i6300esb (the default) or ib700. Some examples:
Use the recommended settings:
--watchdog default
Use the i6300esb with the 'poweroff' action:
--watchdog i6300esb,action=poweroff
- –parallel=CHAROPTS
- –serial=CHAROPTS
- Specifies a serial device to attach to the guest, with various options. The general format of a serial string is
--serial type,opt1=val1,opt2=val2,...
- –serial and –parallel devices share all the same options, unless otherwise noted. Some of the types of character device redirection are:
- –serial pty
- Pseudo TTY . The allocated pty will be listed in the running guests XML description.
- –serial dev,path=HOSTPATH
- Host device. For serial devices, this could be /dev/ttyS0. For parallel devices, this could be /dev/parport0.
- –serial file,path=FILENAME
- Write output to FILENAME .
- –serial pipe,path=PIPEPATH
- Named pipe (see pipe(7))
- --serial tcp,host=HOST:PORT,mode=MODE,protocol=PROTOCOL
- TCP net console. MODE is either 'bind' (wait for connections on HOST:PORT) or 'connect' (send output to HOST:PORT), default is 'connect'. HOST defaults to '127.0.0.1', but PORT is required. PROTOCOL can be either 'raw' or 'telnet' (default 'raw'). If 'telnet', the port acts like a telnet server or client. Some examples:
Connect to localhost, port 1234:
--serial tcp,host=:1234
Wait for connections on any address, port 4567:
--serial tcp,host=0.0.0.0:4567,mode=bind
Wait for telnet connection on localhost, port 2222. The user could then connect interactively to this console via 'telnet localhost 2222':
--serial tcp,host=:2222,mode=bind,protocol=telnet
- --serial udp,host=CONNECT_HOST:PORT,bind_port=BIND_HOST:BIND_PORT
- UDP net console. HOST:PORT is the destination to send output to (default HOST is '127.0.0.1', PORT is required). BIND_HOST:PORT is the optional local address to bind to (default BIND_HOST is 127.0.0.1, but is only set if BIND_PORT is specified). Some examples:
Send output to default syslog port (may need to edit /etc/rsyslog.conf accordingly):
--serial udp,host=:514
Send output to remote host 192.168.10.20, port 4444 (this output can be read on the remote host using 'nc -u -l 4444'):
--serial udp,host=192.168.10.20:4444
- --serial unix,path=UNIXPATH,mode=MODE
- Unix socket (see unix(7)). MODE has similar behavior and defaults as 'tcp'.
- –channel
- Specifies a communication channel device to connect the guest and host machine. This option uses the same options as –serial and –parallel for specifying the host/source end of the channel. Extra ‘target’ options are used to specify how the guest machine sees the channel.
Some of the types of character device redirection are:
- –channel SOURCE ,target_type=guestfwd,target_address=HOST:PORT
- Communication channel using QEMU usermode networking stack. The guest can connect to the channel using the specified HOST:PORT combination.
- –channel SOURCE ,target_type=virtio[,name=NAME]
- Communication channel using virtio serial (requires 2.6.34 or later host and guest). Each instance of a virtio –channel line is exposed in the guest as /dev/vport0p1, /dev/vport0p2, etc. NAME is optional metadata, and can be any string, such as org.linux-kvm.virtioport1. If specified, this will be exposed in the guest at /sys/class/virtio-ports/vport0p1/NAME
- –console
- Connect a text console between the guest and host. Certain guest and hypervisor combinations can automatically set up a getty in the guest, so an out of the box text login can be provided (target_type=xen for xen paravirt guests, and possibly target_type=virtio in the future).
Example:
- –console pty,target_type=virtio
- Connect a virtio console to the guest, redirected to a PTY on the host. For supported guests, this exposes /dev/hvc0 in the guest. See http://fedoraproject.org/wiki/Features/VirtioSerial for more info. virtio console requires libvirt 0.8.3 or later.
- –video=VIDEO
- Specify what video device model will be attached to the guest. Valid values for VIDEO are hypervisor specific, but some options for recent kvm are cirrus, vga, or vmvga (vmware).
Miscellaneous Options
- –autostart
- Set the autostart flag for a domain. This causes the domain to be started on host boot up.
- –print-xml
- If the requested guest has no install phase (--import, --boot), print the generated XML instead of defining the guest. By default this WILL do storage creation (can be disabled with --dry-run).
If the guest has an install phase, you will need to use --print-step to specify exactly what XML output you want. This option implies --quiet.
- –print-step
- Acts similarly to --print-xml, except it requires specifying which install step to print XML for. Possible values are 1, 2, 3, or all. Stage 1 is typically booting from the install media, and stage 2 is typically the final guest config booting off the hard disk. Stage 3 is only relevant for Windows installs, which by default have a second install stage. This option implies --quiet.
- –noreboot
- Prevent the domain from automatically rebooting after the install has completed.
- –wait=WAIT
- Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely, a value of 0 triggers the same results as noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
- –force
- Prevent interactive prompts. If the intended prompt was a yes/no prompt, always say yes. For any other prompts, the application will exit.
- –dry-run
- Proceed through the guest creation process, but do NOT create storage devices, change host device configuration, or actually teach libvirt about the guest. virt-install may still fetch install media, since this is required to properly detect the OS to install.
- –prompt
- Specifically enable prompting for required information. Default prompting is off (as of virtinst 0.400.0)
- –check-cpu
- Check that the number of virtual cpus requested does not exceed physical CPUs and warn if they do.
- -q, –quiet
- Only print fatal error messages.
- -d, –debug
- Print debugging information to the terminal when running the install process. The debugging information is also stored in "$HOME/.virtinst/virt-install.log" even if this parameter is omitted.
Examples
Install a Fedora 13 KVM guest with virtio accelerated disk/network, creating a new 8GB storage file, installing from media in the host's CDROM drive, auto launching a graphical VNC viewer:
# virt-install \
     --connect qemu:///system \
     --virt-type kvm \
     --name demo \
     --ram 500 \
     --disk path=/var/lib/libvirt/images/demo.img,size=8 \
     --graphics vnc \
     --cdrom /dev/cdrom \
     --os-variant fedora13
Install a Fedora 9 plain QEMU guest, using an LVM partition, virtual networking, booting from PXE, using a VNC server/viewer:
# virt-install \
     --connect qemu:///system \
     --name demo \
     --ram 500 \
     --disk path=/dev/HostVG/DemoVM \
     --network network=default \
     --virt-type qemu \
     --graphics vnc \
     --os-variant fedora9
Install a guest with a real partition, with the default QEMU hypervisor for a different architecture using SDL graphics, using a remote kernel and initrd pair:
# virt-install \
     --connect qemu:///system \
     --name demo \
     --ram 500 \
     --disk path=/dev/hdc \
     --network bridge=eth1 \
     --arch ppc64 \
     --graphics sdl \
     --location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/
Run a Live CD image under Xen fully virtualized, in a diskless environment:
# virt-install \
     --hvm \
     --name demo \
     --ram 500 \
     --nodisks \
     --livecd \
     --graphics vnc \
     --cdrom /root/fedora7live.iso
Install a paravirtualized Xen guest, with 500 MB of RAM, a 5 GB disk, and Fedora Core 6 from a web server, in text-only mode, with old style --file options:
# virt-install \
     --paravirt \
     --name demo \
     --ram 500 \
     --file /var/lib/xen/images/demo.img \
     --file-size 6 \
     --graphics none \
     --location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/
Create a guest from an existing disk image 'mydisk.img' using defaults for the rest of the options:
# virt-install \
     --name demo \
     --ram 512 \
     --disk /home/user/VMs/mydisk.img \
     --import
Test a custom kernel/initrd using an existing disk image, manually specifying a serial device hooked to a PTY on the host machine:
# virt-install \
     --name mykernel \
     --ram 512 \
     --disk /home/user/VMs/mydisk.img \
     --boot kernel=/tmp/mykernel,initrd=/tmp/myinitrd,kernel_args="console=ttyS0" \
     --serial pty
Authors
Written by Daniel P. Berrange, Hugh Brock, Jeremy Katz, Cole Robinson and a team of many other contributors. See the AUTHORS file in the source distribution for the complete list of credits.
Bugs
Please see http://virt-manager.org/page/BugReporting
Copyright
Copyright © 2006-2009 Red Hat, Inc, and various contributors. This is free software. You may redistribute copies of it under the terms of the GNU General Public License "http://www.gnu.org/licenses/gpl.html". There is NO WARRANTY , to the extent permitted by law.
See Also
from http://linux.die.net/man/1/virt-install
----------
In the previous chapter we explored the creation and management of KVM guest operating systems using the virt-manager graphical tool. In this chapter we will turn our attention to the creation of KVM guest operating systems using the virt-install command-line tool. The virt-install tool is supplied to allow new virtual machines to be created by providing a list of command-line options. Whilst most users will probably stay with the graphical virt-manager tool, virt-install has the advantage that virtual machines can be created when access to a graphical desktop is not available, or when creation needs to be automated in a script.
This chapter assumes that the necessary KVM tools are installed and that the system was rebooted after these were installed. For details on these requirements read Installing and Configuring Fedora KVM Virtualization.
Preparing the System for virt-install
virt-install provides the option of supporting graphics for the guest operating system installation. This is achieved through use of QEMU. If graphics support is disabled (the default is to enable it) during the virt-install session, the standard text based installer will be used.
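For example, a guest can be installed entirely in text mode by disabling graphics and directing the installer to a serial console; a minimal sketch, in which the guest name, disk path and URL are only illustrative, could look like this:

virt-install --name textdemo --ram 512 \
     --disk path=/var/lib/libvirt/images/textdemo.img,size=8 \
     --graphics none --extra-args "console=ttyS0" \
     --location http://download.fedoraproject.org/pub/fedora/linux/releases/10/Fedora/i386/os/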
Running virt-install to Build the KVM Guest System
virt-install must be run as root and accepts a wide range of command-line arguments that are used to provide configuration information related to the virtual machine being created. Some of these command-line options are mandatory (specifically name, ram and disk storage must be provided) while others are optional. A summary of these arguments is outlined in the following table:
Argument | Description |
---|---|
-h, –help | Show the help message and exit |
–connect=CONNECT | Connect to a non-default hypervisor. |
-n NAME, –name=NAME | Name of the new guest virtual machine instance. This must be unique amongst all guests known to the hypervisor on the connection, including those not currently active. To re-define an existing guest, use the virsh(1) tool to shut it down (’virsh shutdown’) & delete (’virsh undefine’) it prior to running “virt-install”. |
-r MEMORY, –ram=MEMORY | Memory to allocate for guest instance in megabytes. If the hypervisor does not have enough free memory, it is usual for it to automatically take memory away from the host operating system to satisfy this allocation. |
–arch=ARCH | Request a non-native CPU architecture for the guest virtual machine. The option is only currently available with QEMU guests, and will not enable use of acceleration. If omitted, the host CPU architecture will be used in the guest. |
-u UUID, –uuid=UUID | UUID for the guest; if none is given a random UUID will be generated. If you specify UUID, you should use a 32-digit hexadecimal number. UUID are intended to be unique across the entire data center, and indeed world. Bear this in mind if manually specifying a UUID |
–vcpus=VCPUS | Number of virtual cpus to configure for the guest. Not all hypervisors support SMP guests, in which case this argument will be silently ignored |
–check-cpu | Check that the number virtual cpus requested does not exceed physical CPUs and warn if they do. |
–cpuset=CPUSET | Set which physical cpus the guest can use. “CPUSET” is a comma separated list of numbers, which can also be specified in ranges. If the value ’auto’ is passed, virt-install attempts to automatically determine an optimal cpu pinning using NUMA data, if available. |
–os-type=OS_TYPE | Optimize the guest configuration for a type of operating system (ex. ’linux’, ’windows’). This will attempt to pick the most suitable ACPI & APIC settings, optimally supported mouse drivers, virtio, and generally accommodate other operating system quirks. See “–os-variant” for valid options. For a full list of valid options refer to the man page (man virt-install). |
–os-variant=OS_VARIANT | Further optimize the guest configuration for a specific operating system variant (ex. ’fedora8’, ’winxp’). This parameter is optional, and does not require an “–os-type” to be specified. For a full list of valid options refer to the man page (man virt-install). |
–host-device=HOSTDEV | Attach a physical host device to the guest. HOSTDEV is a node device name as used by libvirt (as shown by ’virsh nodedev-list’). |
–sound | Attach a virtual audio device to the guest. (Full virtualization only). |
–noacpi | Override the OS type / variant to disables the ACPI setting for fully virtualized guest. (Full virtualization only). |
-v, –hvm | Request the use of full virtualization, if both para & full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU based hypervisor. |
-p, –paravirt | This guest should be a paravirtualized guest. If the host supports both para & full virtualization, and neither this parameter nor the “–hvm” are specified, this will be assumed. |
–accelerate | When installing a QEMU guest, make use of the KVM or KQEMU kernel acceleration capabilities if available. Use of this option is recommended unless a guest OS is known to be incompatible with the accelerators. The KVM accelerator is preferred over KQEMU if both are available. |
-c CDROM, –cdrom=CDROM | File or device use as a virtual CD-ROM device for fully virtualized guests. It can be path to an ISO image, or to a CDROM device. It can also be a URL from which to fetch/access a minimal boot ISO image. The URLs take the same format as described for the “–location” argument. If a cdrom has been specified via the “–disk” option, and neither “–cdrom” nor any other install option is specified, the “–disk” cdrom is used as the install media. |
-l LOCATION, --location=LOCATION | Installation source for guest virtual machine kernel+initrd pair. The "LOCATION" can take one of the following forms: a local DIRECTORY path, nfs:host:/path, http://host/path, or ftp://host/path. |
–pxe | Use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process. |
–import | Skip the OS installation process, and build a guest around an existing disk image. The device used for booting is the first device specified via “–disk” or “–file”. |
–livecd | Specify that the installation media is a live CD and thus the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the “–nodisks” flag in combination. |
-x EXTRA, –extra-args=EXTRA | Additional kernel command line arguments to pass to the installer when performing a guest install from “–location”. |
–disk=DISKOPTS | Specifies media to use as storage for the guest, with various options. |
--disk opt1=val1,opt2=val2,… | To specify media, one of the following options is required: path (a path to storage media to use, existing or not), pool (an existing libvirt storage pool name to create new storage on), or vol (an existing libvirt storage volume, specified as 'poolname/volname'). |
-f DISKFILE, –file=DISKFILE | Path to the file, disk partition, or logical volume to use as the backing store for the guest’s virtual disk. This option is deprecated in favor of “–disk”. |
-s DISKSIZE, –file-size=DISKSIZE | Size of the file to create for the guest virtual disk. This is deprecated in favor of “–disk”. |
--nonsparse | Fully allocate the storage when creating. This is deprecated in favor of "--disk" |
–nodisks | Request a virtual machine without any local disk storage, typically used for running ’Live CD’ images or installing to network storage (iSCSI or NFS root). |
-w NETWORK, --network=NETWORK | Connect the guest to the host network. The value for "NETWORK" can take one of 3 formats: bridge=BRIDGE (connect to a bridge device in the host called "BRIDGE"), network=NAME (connect to a virtual network in the host called "NAME"), or user (connect to the LAN using SLIRP). |
-b BRIDGE, –bridge=BRIDGE | Bridge device to connect the guest NIC to. This parameter is deprecated in favour of the “–network” parameter. |
-m MAC, --mac=MAC | Fixed MAC address for the guest; if this parameter is omitted, or the value "RANDOM" is specified, a suitable address will be randomly generated. For Xen virtual machines it is required that the first 3 pairs in the MAC address be the sequence '00:16:3e', while for QEMU or KVM virtual machines it must be '52:54:00'. |
–nonetworks | Request a virtual machine without any network interfaces. |
–vnc | Setup a virtual console in the guest and export it as a VNC server in the host. Unless the “–vncport” parameter is also provided, the VNC server will run on the first free port number at 5900 or above. The actual VNC display allocated can be obtained using the “vncdisplay” command to “virsh” (or virt-viewer(1) can be used which handles this detail for the use). |
–vncport=VNCPORT | Request a permanent, statically assigned port number for the guest VNC console. Use of this option is discouraged as other guests may automatically choose to run on this port causing a clash. |
–sdl | Setup a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed the guest may be unconditionally terminated. |
–nographics | No graphical console will be allocated for the guest. Fully virtualized guests (Xen FV or QEmu/KVM) will need to have a text console configured on the first serial port in the guest (this can be done via the –extra-args option). Xen PV will set this up automatically. The command ’virsh console NAME’ can be used to connect to the serial device. |
–noautoconsole | Don’t automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run the “virsh” “console” command to display the text console. Use of this parameter will disable this behaviour. |
-k KEYMAP, –keymap=KEYMAP | Request that the virtual VNC console be configured to run with a non- English keyboard layout. |
-d, –debug | Print debugging information to the terminal when running the install process. The debugging information is also stored in “$HOME/.virtinst/virt-install.log” even if this parameter is omitted. |
–noreboot | Prevent the domain from automatically rebooting after the install has completed. |
--wait=WAIT | Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely, a value of 0 triggers the same results as noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state. |
–force | Prevent interactive prompts. If the intended prompt was a yes/no prompt, always say yes. For any other prompts, the application will exit. |
–prompt | Specifically enable prompting. Default prompting is off (as of virtinst 0.400.0) |
Example virt-install Command
With reference to the above command-line argument list, we can now look at an example command-line construct using the virt-install tool.
The following command creates a new KVM virtual machine configured to run Windows XP. It creates a new, 6GB disk image, assigns 512MB of RAM to the virtual machine, configures a CD device for the installation media and uses VNC to display the console:
virt-install --name myWinXP --ram 512 --disk path=/tmp/winxp.img,size=6 \ --network network:default --vnc --os-variant winxp --cdrom /dev/sr0
Once the guest system has been created, the virt-viewer screen will appear containing the operating system installer loaded from the specified installation media:
Follow the standard installation procedure for the guest operating system.
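If you close the viewer window, the graphical console can be reopened at any time with virt-viewer (the guest name matches the --name argument used above):

virt-viewer --connect qemu:///system myWinXP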
Summary
In this chapter we have looked at the steps necessary to create a KVM virtual system using the virt-install command-line tool.
Once the installation is completed, the next step is to learn how to administer KVM virtual systems. This can be achieved with the graphical virt-manager tool (see Managing and Monitoring Fedora based KVM Guest Systems).
-------------
Creating, Managing and Migrating KVM Virtual Machines
Part 1: Installing guest virtual machines
1. Installing and managing virtual machines directly through virt-manager (omitted)
2. Installing a guest virtual machine from the command line
qemu-img create -f qcow2 /images/centos6.3-x86_64.img 10G
chown qemu:qemu /images/centos6.3-x86_64.img
virt-install --name centos6.3 --ram=1024 --arch=x86_64 --vcpus=1 --check-cpu --os-type=linux --os-variant='rhel6' -c /tmp/CentOS-6.3-x86_64-minimal.iso --disk path=/images/centos6.3-x86_64.img,device=disk,bus=virtio,size=10,format=qcow2 --bridge=br100 --noautoconsole --vnc --vncport=5902 --vnclisten=0.0.0.0
Part 2: Managing virtual machines with virsh
2. Powering the guest on and off
virsh start centos6.3 # power on
virsh create /etc/libvirt/qemu/centos6.3.xml # start the guest directly from its configuration file
virsh shutdown centos6.3 # shut down
virsh destroy centos6.3 # force power off
virsh list --all # show the state of all virtual machines
3. Adding and removing virtual machines
virsh define /etc/libvirt/qemu/node5.xml # define a virtual machine from its configuration file
virsh list --all # node5 has been added
virsh undefine node5 # remove (undefine) the virtual machine
ls /etc/libvirt/qemu
virsh list --all # node5 has been removed
4) Installing a new virtual machine from an existing VM configuration file
qemu-img create -f qcow2 /virhost/kvm_node/node6.img 20G # create a disk image file for the new VM
virsh list
virsh dumpxml node4 > /etc/libvirt/qemu/node6.xml # export node4's hardware configuration as the starting point for node6
vim /etc/libvirt/qemu/node6.xml
<domain type='kvm' id='20'> # change the id for node6
<name>node6</name> # the name of node6
<uuid>4b7e91eb-6521-c2c6-cc64-c1ba72707fc7</uuid> # the uuid must be changed, otherwise it will conflict with node4's
<source file='/virhost/node4.img'/> # point this at the new VM's disk file
virsh define /etc/libvirt/qemu/node6.xml # define the VM from the descriptor file
You can use virsh edit node6 to adjust node6's configuration.
virsh start node6 # start the virtual machine
5) Enabling VNC for a virtual machine
virsh edit node4 # edit node4's configuration; editing node4.xml directly with vim is not recommended
<graphics type='vnc' port='-1' listen='127.0.0.1' keymap='en-us'/>
# port='-1': the port is assigned automatically, listening on the loopback interface (management through virt-manager requires listen='127.0.0.1'), no password
Change it to
<graphics type='vnc' port='5904' listen='0.0.0.0' keymap='en-us' passwd='xiaobai'/>
# fixed VNC management port 5904 (no automatic assignment), VNC password xiaobai, listening on all interfaces
Remote VNC access address: 192.168.32.40:5904
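If you are unsure which display was actually assigned, virsh can report it; a small sketch using the node4 domain from the example above:
virsh vncdisplay node4 # prints the VNC display, e.g. :4, which corresponds to TCP port 5900 + 4 = 5904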
3. Managing storage pools and storage volumes
1) Creating a KVM host storage pool
(1) Creating a directory-based storage pool
virsh pool-define-as vmware_pool --type dir --target /virhost/vmware # define storage pool vmware_pool, or
virsh pool-create-as --name vmware_pool --type dir --target /virhost/vmware
# create storage pool vmware_pool, type dir, target /virhost/vmware; same result as pool-define-as
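A pool that was only defined with pool-define-as is persistent but not yet built or running; a short sketch of the usual follow-up steps before starting it (pool name as above):
virsh pool-build vmware_pool # create the target directory for a dir-type pool
virsh pool-autostart vmware_pool # have libvirt start the pool automatically at boot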
(2) Creating a filesystem-based storage pool
virsh pool-define-as --name vmware_pool --type fs --source-dev /dev/vg_target/LogVol02 --source-format ext4 --target /virhost/vmware
or
virsh pool-create-as --name vmware_pool --type fs --source-dev /dev/vg_target/LogVol02 --source-format ext4 --target /virhost/vmware
(3) Viewing storage pool information
virsh pool-info vmware_pool # show information about the storage pool
(4) Starting the storage pool
virsh pool-start vmware_pool # start the storage pool
virsh pool-list
(5) Destroying and undefining the storage pool
virsh pool-destroy vmware_pool # destroy (stop) the storage pool
virsh pool-list --all
virsh pool-undefine vmware_pool # undefine the storage pool
virsh pool-list --all
2) Once a storage pool exists, you can create a volume in it to serve as a virtual machine's disk
virsh vol-create-as --pool vmware_pool --name node6.img --capacity 10G --allocation 1G --format qcow2 # create volume node6.img in pool vmware_pool, capacity 10G, initial allocation 1G, format qcow2
virsh vol-info /virhost/vmware/node6.img # show volume information: Name: node6.img, Type: file, Capacity: 10.00 GB, Allocation: 136.00 KB
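To list what is in the pool, or to map a volume name back to its path, virsh offers matching query commands; a small sketch with the names used above:
virsh vol-list vmware_pool # list all volumes in the pool
virsh vol-path node6.img --pool vmware_pool # print the volume's full path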
3) Installing a virtual machine on the storage volume
virt-install --connect qemu:///system \
 -n node7 \
 -r 512 \
 -f /virhost/vmware/node7.img \
 --vnc \
 --os-type=linux \
 --os-variant=rhel6 \
 --vcpus=1 \
 --network bridge=br0 \
 -c /mnt/rhel-server-6.0-x86_64-dvd.iso
from http://blog.chinaunix.net/uid-7934175-id-3396840.html
----------
An Analysis of the KVM (Kernel-based Virtual Machine) Installation Packages
KVM consists mainly of three packages: kvm, kmod-kvm, and etherboot-zroms-kvm. Each package is examined below; studying the files they contain gives an overview of how KVM is put together.
1. Package content analysis
1) The kvm package: (1) Summary
------------------------------------------------------------------------------------------------
Name        : kvm                              Relocations: (not relocatable)
Version     : 83                               Vendor: CentOS
Release     : 239.el5.centos                   Build Date: Fri 22 Jul 2011 09:52:35 PM CST
Install Date: Mon 21 Nov 2011 02:47:25 PM CST  Build Host: builder10.centos.org
Group       : Development/Tools                Source RPM: kvm-83-239.el5.centos.src.rpm
Size        : 2126435                          License: GPLv2
Signature   : DSA/SHA1, Sat 13 Aug 2011 05:26:42 AM CST, Key ID a8a447dce8562897
URL         : http://kvm.sf.net
Summary     : Kernel-based Virtual Machine
Description : KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
(2) Files installed by this package on the operating system
/etc/sysconfig/modules/kvm.modules -- configuration file that loads the KVM modules
/etc/udev/rules.d/65-kvm.rules -- udev rules for the KVM devices
/usr/bin/ksmctl -- control program for Kernel Samepage Merging (requires a ksm device under /dev)
/usr/libexec/qemu-kvm -- the QEMU-based virtual machine emulator; every virtual machine is started through this program
/usr/share/kvm
-- virtual BIOS files of various kinds
/usr/share/kvm/bios.bin
/usr/share/kvm/extboot.bin (extboot is an x86 Option ROM that passes int13 functions through to a VMM, which allows the VMM to expose an arbitrary block device as the primary BIOS disk; it can be used to boot SCSI or paravirtual devices)
-- NIC ROM BIOS files used for PXE boot; these point to the .zrom files under /usr/share/qemu-pxe-roms/
/usr/share/kvm/pxe-e1000.bin
/usr/share/kvm/pxe-ne2k_pci.bin
/usr/share/kvm/pxe-pcnet.bin
/usr/share/kvm/pxe-rtl8139.bin
/usr/share/kvm/pxe-virtio.bin
/usr/share/kvm/vgabios-cirrus.bin
/usr/share/kvm/vgabios.bin
-- keyboard mapping files
/usr/share/kvm/keymaps (ar, common, da, de, de-ch, en-gb, en-us, es, et, fi, fo, fr, fr-be, fr-ca, fr-ch, hr, hu, is, it, ja, lt, lv, mk, modifiers, nl, nl-be, no, pl, pt, pt-br, ru, sl, sv, th, tr)
/usr/share/man/man1/qemu-kvm.1.gz -- qemu-kvm manual page
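The same information can be pulled from the RPM database on your own machine; a small sketch, assuming the kvm package is installed:
rpm -qi kvm # print the package summary block shown above
rpm -ql kvm # list every file installed by the package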
Judging from these files, this package covers only the files and programs related to managing KVM; the KVM kernel modules are not included. Where are they? They are in another package, named kmod-kvm, which is described later.
(3) Other packages the kvm package depends on
> alsa-lib -- Group: System Environment/Libraries -- Description: the Advanced Linux Sound Architecture (ALSA) library. ALSA provides audio and MIDI functionality to the Linux operating system.
> bash -- Group: System Environment/Shells -- Description: the GNU Bourne Again shell (bash).
> celt051 -- Group: System Environment/Libraries -- Description: CELT (Constrained Energy Lapped Transform), a codec providing low-latency speech and audio communication.
> etherboot-zroms-kvm -- Group: Development/Tools -- Description: network-boot ROMs in the .zrom format supported by KVM.
> qcairo -- Group: System Environment/Libraries -- Description: a 2D graphics library. This is a version of the cairo 2D graphics library with additional features required to support the implementation of the SPICE protocol. Cairo is designed to provide high-quality display and print output; currently supported output targets include the X Window System, OpenGL (via glitz), in-memory image buffers, and image files (PDF, PostScript, and SVG).
> glibc -- Group: System Environment/Libraries -- Description: the GNU libc library. The glibc package contains the standard libraries used by multiple programs on the system, most importantly the standard C library and the standard math library; without these two libraries a Linux system will not function.
> gnutls -- Group: System Environment/Libraries -- Description: a TLS protocol implementation. GnuTLS provides a secure layer over a reliable transport layer and implements the standards proposed by the IETF's TLS working group.
> initscripts -- Group: System Environment/Base -- Description: the inittab file and the /etc/init.d scripts. The initscripts package contains the basic system scripts used to boot the system, change runlevels, shut the system down cleanly, and activate or deactivate most network interfaces.
> kmod-kvm -- Group: System Environment/Kernel -- Description: the KVM kernel modules. This package provides the kvm kernel modules built for the Linux kernel 2.6.18-274.el5 for the x86_64 family of processors.
> libgcrypt -- Group: System Environment/Libraries -- Description: a general-purpose cryptography library.
> libgpg-error -- Group: System Environment/Libraries -- Description: a library that defines common error values for all GnuPG components, among them GPG, GPGSM, GPGME, GPG-Agent, libgcrypt, pinentry, the SmartCard Daemon, and possibly more in the future.
> libpng -- Group: System Environment/Libraries -- Description: a library of functions for creating and manipulating PNG (Portable Network Graphics) image files. PNG is a bit-mapped graphics format similar to GIF, created to replace GIF because GIF uses a patented data-compression algorithm.
> libXrandr -- Group: System Environment/Libraries -- Description: the X.Org X11 libXrandr runtime library.
> log4cpp -- Group: Development/Libraries -- Description: a C++ logging library.
> nspr -- Group: System Environment/Libraries -- Description: Netscape Portable Runtime. NSPR provides platform independence for non-GUI operating system facilities such as threads, thread synchronization, normal file and network I/O, interval timing and calendar time, basic memory management (malloc and free), and shared library linking.
> openssl -- Group: System Environment/Libraries -- Description: the OpenSSL toolkit, which provides support for secure communication between machines, a certificate management tool, and shared libraries implementing various cryptographic algorithms and protocols.
> qffmpeg-libs -- Group: System Environment/Libraries -- Description: codec and format libraries for qffmpeg intended for use with the SPICE virtual desktop protocol.
> qspice-libs -- Group: System Environment/Libraries -- Description: runtime libraries for any application that wishes to be a qspice server.
> SDL -- Group: System Environment/Libraries -- Description: a cross-platform multimedia library. Simple DirectMedia Layer (SDL) is designed to provide fast access to the graphics frame buffer and audio device.
> shadow-utils -- Group: System Environment/Base -- Description: utilities for managing accounts and shadow password files, including pwconv, pwunconv, pwck, lastlog, useradd, userdel, usermod, groupadd, groupdel, and groupmod.
> zlib -- Group: System Environment/Libraries -- Description: the zlib compression and decompression library, a general-purpose, patent-free, lossless data compression library used by many different programs.
================================================================================================
2) The kmod-kvm package: (1) Summary
------------------------------------------------------------------------------------------------
Name        : kmod-kvm                         Relocations: (not relocatable)
Version     : 83                               Vendor: (none)
Release     : 274.el5.centos.2                 Build Date: Thu 17 Nov 2011 10:19:56 AM CST
Install Date: Mon 21 Nov 2011 02:51:30 PM CST  Build Host: localhost.localdomain
Group       : System Environment/Kernel        Source RPM: kmod-kvm-83-274.el5.centos.2.src.rpm
Size        : 476848                           License: GPL
Signature   : (none)
URL         : http://www.qumranet.com
Summary     : kvm kernel module
Description : This kernel module provides support for virtual machines using hardware support (Intel VT-x&VT-i or AMD SVM).
(2) Files installed by this package on the operating system
/lib/modules/2.6.18-274.el5/extra/kmod-kvm
/lib/modules/2.6.18-274.el5/extra/kmod-kvm/ksm.ko
/lib/modules/2.6.18-274.el5/extra/kmod-kvm/kvm-amd.ko
/lib/modules/2.6.18-274.el5/extra/kmod-kvm/kvm-intel.ko
/lib/modules/2.6.18-274.el5/extra/kmod-kvm/kvm.ko
/lib/modules/2.6.18-274.7.1.el5/extra/kvm-amd.ko
/lib/modules/2.6.18-274.7.1.el5/extra/kvm-intel.ko
/lib/modules/2.6.18-274.7.1.el5/extra/kvm.ko
As the file list shows, all of the KVM kernel module files are in this package.
(3) Other packages the kmod-kvm package depends on: only the kernel itself; no other libraries or tool packages are involved.
================================================================================================
3) The etherboot-zroms-kvm package: (1) Summary
------------------------------------------------------------------------------------------------
Name        : etherboot-zroms-kvm              Relocations: (not relocatable)
Version     : 5.4.4                            Vendor: CentOS
Release     : 13.el5.centos                    Build Date: Mon 05 Apr 2010 03:05:57 AM CST
Install Date: Wed 16 Nov 2011 11:04:10 AM CST  Build Host: builder10.centos.org
Group       : Development/Tools                Source RPM: etherboot-5.4.4-13.el5.centos.src.rpm
Size        : 196608                           License: GPLv2
Signature   : DSA/SHA1, Tue 27 Apr 2010 07:41:01 AM CST, Key ID a8a447dce8562897
URL         : http://etherboot.org
Summary     : Etherboot - boot roms supported by KVM, .zrom format
Description : This package contains the .zrom-format ROM files used by the network cards that KVM emulates to boot over the network.
(2) Files installed by this package on the operating system
/usr/share/etherboot
/usr/share/etherboot/e1000-82542.zrom
/usr/share/etherboot/ne.zrom
/usr/share/etherboot/pcnet32.zrom
/usr/share/etherboot/rtl8029.zrom
/usr/share/etherboot/rtl8139.zrom
/usr/share/etherboot/virtio-net.zrom
/usr/share/qemu-pxe-roms
(3) Other packages the etherboot-zroms-kvm package depends on: bash; shell support is all that is needed.
================================================================================================
3. The KVM module configuration file
Configuration file location: /etc/sysconfig/modules/kvm.modules
The file contents are as follows:
----------------------------------------------------------------------------------------------
if [ $(grep -c vmx /proc/cpuinfo) -ne 0 ]; then   # is this an Intel CPU?
    modprobe kvm-intel >/dev/null 2>&1
fi
if [ $(grep -c svm /proc/cpuinfo) -ne 0 ]; then   # is this an AMD CPU?
    modprobe kvm-amd >/dev/null 2>&1
fi
modprobe ksm >/dev/null 2>&1
----------------------------------------------------------------------------------------------
The first two statements check the host CPU type and load the matching module: kvm-intel for Intel and kvm-amd for AMD. The ksm module is loaded regardless of whether the CPU is Intel or AMD.
The redirection >/dev/null 2>&1 sends both the normal output and the error messages of the modprobe commands to /dev/null; by default both kinds of output would go to the screen.
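If the modules were not loaded at boot, the same steps can be done by hand and then verified; a minimal sketch (use kvm-amd instead of kvm-intel on AMD hardware):
modprobe kvm
modprobe kvm-intel # or: modprobe kvm-amd
lsmod | egrep 'kvm|ksm' # confirm that the modules are loaded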
================================================================================================
4. Overall analysis
From the analysis of the kvm packages and the files they contain, the programs and files that KVM provides fall into three parts: 1) the kernel modules: kvm.ko, kvm-amd.ko, kvm-intel.ko, and ksm.ko; 2) the virtual machine emulation and management files: qemu-kvm and the *.bin files; 3) the emulated NIC boot ROM (BIOS) files.
Once it is clear which files are included, analysing what each of them does gives a rough understanding of how KVM operates.
from http://blog.csdn.net/starshine/article/details/7012500
---------
virt-install Usage Notes
The following is based on the virt-install man page; it was pasted from plain text, so the formatting has shifted in places, and some of the original wording is terse.
1. Purpose
Provision new virtual machines.
2. Syntax
virt-install [OPTIONS]...
3. Description
virt-install is a command line tool for provisioning new virtual machines using the "libvirt" hypervisor management library. The tool supports both text based and graphical installations, using a serial console, SDL graphics, or a VNC client/server pair. The guest can be configured to use one or more virtual disks, network interfaces, audio devices, and physical host devices (USB, PCI).
The installation media can be held locally or remotely on NFS, HTTP, or FTP servers. In the latter case virt-install will fetch the minimal files necessary to kick off the installation process, allowing the guest to fetch the rest of the OS distribution as needed. PXE booting, and importing an existing disk image (thus skipping the install phase), are also supported.
Given suitable command line arguments, virt-install is capable of running completely unattended, with the guest 'kickstarting' itself too. This allows for easy automation of guest installs. An interactive mode is also available with the --prompt option, but it will only ask for the minimum required options.
4. Options
Most options are not required. The minimum requirements are --name, --ram, guest storage (--disk or --nodisks), and an install option.
-h, --help: show the help message and exit.
--connect=CONNECT: connect to a non-default hypervisor. The default connection is chosen as follows: xen if running on a host with the Xen kernel (checked against /proc/xen); qemu:///system if running on a bare metal kernel as root (needed for KVM installs); qemu:///session if running on a bare metal kernel as non-root. It is only necessary to provide --connect if this default prioritization is incorrect, e.g. if you want to use QEMU while on a Xen kernel.
5. General options (apply to all types of guest installs)
-n NAME, --name=NAME: name of the new guest virtual machine instance. It must be unique amongst all guests known to the hypervisor on this connection, including inactive ones. To re-define an existing guest, use virsh to shut it down ('virsh shutdown') and delete it ('virsh undefine') before running virt-install.
-r MEMORY, --ram=MEMORY: memory to allocate to the guest, in megabytes. If the hypervisor does not have enough free memory, it will usually take memory away from the host operating system to satisfy the allocation.
--arch=ARCH: request a non-native CPU architecture for the guest. Currently only available with QEMU guests, and it disables acceleration. If omitted, the host CPU architecture is used in the guest.
-u UUID, --uuid=UUID: UUID for the guest; if none is given a random UUID is generated. If you specify one, use a 32-digit hexadecimal number. UUIDs are intended to be unique across the entire data center, and indeed the world; bear this in mind when specifying one manually.
--vcpus=VCPUS: number of virtual CPUs to configure for the guest. Not all hypervisors support SMP guests, in which case this argument is silently ignored.
--check-cpu: check that the number of virtual CPUs requested does not exceed the number of physical CPUs, and warn if it does.
--cpuset=CPUSET: set which physical CPUs the guest can use. CPUSET is a comma-separated list of numbers which can also be given as ranges, for example 0,2,3,5 (processors 0, 2, 3, and 5) or 1-3,5,6-8 (processors 1, 2, 3, 5, 6, 7, and 8). If the value 'auto' is passed, virt-install attempts to determine an optimal CPU pinning using NUMA data, if available.
--os-type=OS_TYPE: optimize the guest configuration for a type of operating system (e.g. 'linux', 'windows'). This attempts to pick the most suitable ACPI and APIC settings, optimally supported mouse drivers, virtio, and generally accommodates other operating system quirks. See --os-variant for valid options.
--os-variant=OS_VARIANT: further optimize the guest configuration for a specific operating system variant (e.g. 'fedora8', 'winxp'). This parameter is optional and does not require --os-type. Valid values include: debianetch, debianlenny, fedora5, fedora6, fedora7, fedora8, fedora9, fedora10, fedora11, fedora12, generic24, generic26, virtio26, rhel2.1, rhel3, rhel4, rhel5, rhel5.4, rhel6, sles10, ubuntuhardy, ubuntuintrepid, ubuntujaunty, generic, msdos, netware4, netware5, netware6, opensolaris, solaris9, solaris10, freebsd6, freebsd7, openbsd4, vista, win2k, win2k3, win2k8, winxp, winxp64.
--host-device=HOSTDEV: attach a physical host device to the guest. HOSTDEV is a node device name as used by libvirt (as shown by 'virsh nodedev-list').
6. Full-virtualization specific options
--sound: attach a virtual audio device to the guest.
--noapic: override the OS type/variant to disable the APIC setting for a fully virtualized guest.
--noacpi: override the OS type/variant to disable the ACPI setting for a fully virtualized guest.
7. Virtualization type options
-v, --hvm: request the use of full virtualization if both para- and full virtualization are available on the host. This parameter may not be available when connecting to a Xen hypervisor on a machine without hardware virtualization support; it is implied when connecting to a QEMU based hypervisor.
-p, --paravirt: the guest should be paravirtualized. If the host supports both para- and full virtualization and neither this parameter nor --hvm is specified, this is assumed.
--accelerate: when installing a QEMU guest, make use of KVM or KQEMU kernel acceleration if available. Use of this option is recommended unless a guest OS is known to be incompatible with the accelerators; the KVM accelerator is preferred over KQEMU if both are available.
8. Installation method options
-c CDROM, --cdrom=CDROM: file or device to use as a virtual CD-ROM device for fully virtualized guests. It can be a path to an ISO image or to a CD-ROM device, or a URL from which to fetch or access a minimal boot ISO image (same URL formats as --location). If a cdrom has been specified via --disk and neither --cdrom nor any other install option is given, the --disk cdrom is used as the install media.
-l LOCATION, --location=LOCATION: installation source for the guest kernel+initrd pair. LOCATION can be: a path to a local directory containing an installable distribution image; nfs:host:/path or nfs://host/path; http://host/path; or ftp://host/path. A distribution-specific example: http://download.fedoraproject.org/pub/fedora/linux/releases/10/Fedora/i386/os/
--pxe: use the PXE boot protocol to load the initial ramdisk and kernel for starting the guest installation process.
--import: skip the OS installation process and build a guest around an existing disk image. The device used for booting is the first device specified via --disk or --file.
--livecd: specify that the installation media is a live CD, so the guest needs to be configured to boot off the CDROM device permanently. It may be desirable to also use the --nodisks flag in combination.
-x EXTRA, --extra-args=EXTRA: additional kernel command line arguments to pass to the installer when performing a guest install from --location.
9. Storage configuration
--disk=DISKOPTS: specifies the media to use as storage for the guest, with various options. The general format of a disk string is --disk opt1=val1,opt2=val2,... To specify the media, one of the following is required:
path: a path to some storage media, existing or not. Existing media can be a file or block device; if installing on a remote host, it must be shared as a libvirt storage volume. Specifying a non-existent path implies attempting to create new storage and requires a 'size' value; if the base directory of the path is a libvirt storage pool on the host, the new storage is created as a libvirt storage volume (for remote hosts the base directory is required to be a storage pool when using this method).
pool: an existing libvirt storage pool name to create new storage on; requires a 'size' value.
vol: an existing libvirt storage volume to use, specified as 'poolname/volname'.
Other --disk sub-options:
device: disk device type, 'cdrom', 'disk', or 'floppy' (default 'disk'). If 'cdrom' is specified and no install method is chosen, the cdrom is used as the install media.
bus: disk bus type, 'ide', 'scsi', 'usb', 'virtio', or 'xen'. The default is hypervisor dependent, since not all hypervisors support all bus types.
perms: disk permissions, 'rw' (read/write), 'ro' (read-only), or 'sh' (shared read/write); default 'rw'.
size: size in GB to use when creating new storage.
sparse: whether to skip fully allocating newly created storage, 'true' or 'false' (default 'true', i.e. do not fully allocate). The initial time taken to fully allocate the guest virtual disk (sparse=false) is usually balanced by faster install times inside the guest, so full allocation is recommended to ensure consistently high performance and to avoid I/O errors in the guest should the host filesystem fill up.
cache: the cache mode to be used; the host pagecache provides cache memory. The value can be 'none', 'writethrough', or 'writeback'; 'writethrough' provides read caching, 'writeback' provides read and write caching.
See the examples section for some uses. The --disk option deprecates --file, --file-size, and --nonsparse.
-f DISKFILE, --file=DISKFILE: path to the file, disk partition, or logical volume to use as the backing store for the guest's virtual disk. Deprecated in favor of --disk.
-s DISKSIZE, --file-size=DISKSIZE: size of the file to create for the guest virtual disk. Deprecated in favor of --disk.
--nonsparse: fully allocate the storage when creating it. Deprecated in favor of --disk.
--nodisks: request a virtual machine without any local disk storage, typically used for running 'Live CD' images or installing to network storage (iSCSI or NFS root).
10. Networking configuration
-w NETWORK, --network=NETWORK: connect the guest to the host network. NETWORK can take one of three formats:
bridge:BRIDGE: connect to a bridge device in the host called "BRIDGE". Use this if the host has a static networking configuration and the guest requires full outbound and inbound connectivity to/from the LAN, or if live migration will be used with this guest.
network:NAME: connect to a virtual network in the host called "NAME". Virtual networks can be listed, created, and deleted with virsh; an unmodified libvirt install usually has a virtual network named "default". Use a virtual network if the host has dynamic networking (e.g. NetworkManager) or is using wireless; the guest is NATed to the LAN by whichever connection is active.
user: connect to the LAN using SLIRP. Only use this when running a QEMU guest as an unprivileged user; it provides a very limited form of NAT.
If this option is omitted, a single NIC is created in the guest. If there is a bridge device in the host with a physical interface enslaved, that is used for connectivity; failing that, the virtual network called "default" is used. This option can be specified multiple times to set up more than one NIC.
-b BRIDGE, --bridge=BRIDGE: bridge device to connect the guest NIC to. Deprecated in favour of --network.
-m MAC, --mac=MAC: fixed MAC address for the guest. If omitted, or if the value "RANDOM" is given, a suitable address is randomly generated. For Xen virtual machines the first three pairs of the MAC address must be '00:16:3e'; for QEMU or KVM virtual machines it must be '54:52:00'.
--nonetworks: request a virtual machine without any network interfaces.
11. Graphics configuration
If no graphics option is specified, virt-install defaults to --vnc if the DISPLAY environment variable is set, otherwise --nographics is used.
--vnc: set up a virtual console in the guest and export it as a VNC server in the host. Unless --vncport is also given, the VNC server runs on the first free port at or above 5900. The actual VNC display allocated can be obtained with the "vncdisplay" command of virsh (or virt-viewer can be used, which handles this detail for you).
--vncport=VNCPORT: request a permanent, statically assigned port number for the guest VNC console. Use of this option is discouraged, as other guests may automatically choose to run on this port, causing a clash.
--sdl: set up a virtual console in the guest and display an SDL window in the host to render the output. If the SDL window is closed, the guest may be unconditionally terminated.
--nographics: no graphical console is allocated for the guest. Fully virtualized guests (Xen FV or QEMU/KVM) need a text console configured on the first serial port in the guest (this can be done via --extra-args); Xen PV sets this up automatically. The command 'virsh console NAME' can be used to connect to the serial device.
--noautoconsole: do not automatically try to connect to the guest console. The default behaviour is to launch a VNC client to display the graphical console, or to run "virsh console" to display the text console; this parameter disables that behaviour.
-k KEYMAP, --keymap=KEYMAP: request that the virtual VNC console be configured with a non-English keyboard layout.
12. Miscellaneous options
-d, --debug: print debugging information to the terminal (also stored in $HOME/.virtinst/virt-install.log even without this option).
--noreboot: prevent the domain from automatically rebooting after the install has completed.
--wait=WAIT: time in minutes to wait for the install to complete; a negative value waits indefinitely, 0 behaves like --noautoconsole, and when the limit is exceeded virt-install simply exits, leaving the VM in its current state.
--force: prevent interactive prompts, answering yes to yes/no prompts and exiting on any other prompt.
--prompt: specifically enable prompting for required information (off by default as of virtinst 0.400.0).
13. Examples
Install a KVM guest, creating a new storage file, with virtual networking, booting from the host CDROM, using a VNC server/viewer:
# virt-install --connect qemu:///system --name demo --ram 500 --disk path=/var/lib/libvirt/images/demo.img,size=5 --network network:default --accelerate --vnc --cdrom /dev/cdrom
Install a Fedora 9 KVM guest, using an LVM partition, virtual networking, booting from PXE, using a VNC server/viewer:
# virt-install --connect qemu:///system --name demo --ram 500 --disk path=/dev/HostVG/DemoVM --network network:default --accelerate --vnc --os-variant fedora9
Install a QEMU guest, with a real partition, for a different architecture using SDL graphics, using a remote kernel and initrd pair:
# virt-install --connect qemu:///system --name demo --ram 500 --disk path=/dev/hdc --network bridge:eth1 --arch ppc64 --sdl --location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/
Run a Live CD image under Xen full virtualization, in a diskless environment:
# virt-install --hvm --name demo --ram 500 --nodisks --livecd --vnc --cdrom /root/fedora7live.iso
Install a paravirtualized Xen guest, 500 MB of RAM, a 5 GB disk, and Fedora Core 6 from a web server, in text-only mode, with old-style --file options:
# virt-install --paravirt --name demo --ram 500 --file /var/lib/xen/images/demo.img --file-size 6 --nographics --location http://download.fedora.redhat.com/pub/fedora/linux/core/6/x86_64/os/
Create a guest from an existing disk image 'mydisk.img' using defaults for the rest of the options:
# virt-install --name demo --ram 512 --disk path=/home/user/VMs/mydisk.img --import
--------------
http://chengavin.blogspot.jp/2011/04/kvm.html
To see all of the virt-install parameters, run:
virt-install --help
To see the complete virt-install usage documentation, run:
man virt-install
Run:
virt-install \
--connect qemu:///system \
--name=VM_NAME \
--ram=MEMORY_SIZE [MB] \
--os-type=OS_TYPE [e.g. linux] \
--os-variant=OS_VARIANT [e.g. ubuntujaunty] \
--hvm [full virtualization; choose either --hvm or --paravirt, see the appendix] \
--paravirt [paravirtualization; choose either --hvm or --paravirt, see the appendix] \
--accelerate [KVM accelerator] \
--cdrom=PATH_TO_INSTALL_ISO [e.g. *.iso] \
--file=PATH_TO_VIRTUAL_DISK [e.g. *.qcow2] \
--file-size=VIRTUAL_DISK_SIZE [GB] \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug
A complete example looks like this:
virt-install \
--connect qemu:///system \
--name=imVM \
--ram=1024 \
--os-type=linux \
--os-variant=ubuntujaunty \
--hvm \
--accelerate \
--cdrom=~/ubuntu-9.04.iso \
--file=~/imVM.qcow2 \
--file-size=8 \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug
Once the command completes successfully, the virtual machine exists.
The newly created virtual machine's descriptor file is:
/etc/libvirt/qemu/VM_NAME.xml
To boot the new virtual machine for the first time, run:
virsh
virsh# start VM_NAME
virsh# list --all
virsh# quit
Once the virtual machine is confirmed to be running, go to a machine with an X Window environment and run:
sudo apt-get install virt-viewer
Run:
virt-viewer --connect qemu+ssh://USER@VM_HOST_ADDRESS/system VM_NAME
After a successful login, the remote virtual machine's screen appears.
Perform the normal operating system installation and shut the machine down when it is finished.
Boot the machine again from virsh, then test an SSH connection from another machine.
If anything goes wrong, use virt-viewer to check the state of the virtual machine.
4. Installing a new virtual machine from an existing virtual disk file
Run:
virt-install \
--connect=qemu:///system \
--name=NEW_VM_NAME \
--ram=NEW_VM_MEMORY_SIZE [MB] \
--os-type=OS_TYPE \
--os-variant=OS_VARIANT \
--accelerate \
--file=PATH_TO_EXISTING_VIRTUAL_DISK [e.g. *.qcow2] \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug \
--import
5. Cloning a virtual machine
Run:
virt-clone \
--connect=qemu:///system \
-o OLD_VM_NAME \
-n NEW_VM_NAME \
-f NEW_VIRTUAL_DISK_PATH [e.g. *.qcow2]
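As a concrete illustration, cloning the imVM guest created above into a second machine might look like this (the new name and disk path are only examples; shut the source guest down before cloning):
virt-clone \
--connect=qemu:///system \
-o imVM \
-n imVM2 \
-f ~/imVM2.qcow2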
6. Managing virtual machines
Run:
virsh
# show all available commands
virsh# help
# dump a virtual machine's descriptor file
virsh# dumpxml VM_NAME /tmp/DESCRIPTOR_FILE [e.g. *.xml]
# create a virtual machine from a descriptor file
virsh# define /etc/libvirt/qemu/DESCRIPTOR_FILE [e.g. *.xml]
# remove a virtual machine
virsh# undefine VM_NAME
# list all virtual machines
virsh# list --all
# start a virtual machine
virsh# start VM_NAME
# shut down a virtual machine
virsh# shutdown VM_NAME
# pull the virtual machine's virtual power plug
virsh# destroy VM_NAME
-----------
Rough notes on the virt-install XML configuration file
http://my.oschina.net/guol/blog/73300
When virt-install creates a virtual machine it automatically generates a default XML configuration file under /etc/libvirt/qemu. When you later need to adjust the virtual machine's parameters you can edit this file and then make it take effect. When a VM is first created, the parameters in the file reflect the options used for that first creation. Below is a look at the parameters that can be used in this XML configuration file.
This translation was made quite a while ago and some of the text has been lost; treat it as a reference only!
General metadata:
<domain type='kvm'>
domain is the root element required by every virtual machine. It has two attributes: type selects the hypervisor to use and can be xen, kvm, qemu, lxc, or kqemu; the second attribute, id, uniquely identifies a running virtual machine (inactive guests have no id).
<name>kvm_test3</name>
The name parameter gives the virtual machine a short name, which must be unique.
<uuid>f7333079-650e-8bea-4c36-184480afa0ba</uuid>
The uuid parameter defines a globally unique identifier for the virtual machine; its format must follow RFC 4122. If no uuid is specified when the VM is created, a random one is generated.
<title>This is my first test kvm</title>
The title parameter provides a short description of the virtual machine; it must not contain newlines.
Operating system boot:
There are several different ways to boot a virtual machine:
BIOS bootloader # booting through the BIOS, supported for full virtualization
<os>
<type arch='x86_64'>hvm</type>
The type parameter specifies the type of operating system in the virtual machine. hvm means the OS is designed to run directly on bare metal and requires full virtualization; linux (a poor choice of name) means an OS that supports the Xen 3 hypervisor guest ABI. type also has two optional attributes: arch specifies the guest CPU architecture and machine the machine type.
<boot dev='hd'/>
The dev attribute can be fd, hd, cdrom, or network and specifies the next boot device. Multiple boot elements can be listed to establish a boot priority order.
</os>
CPU allocation:
<vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
The content of vcpu is the maximum number of CPUs allocated to the virtual machine, a value between 1 and maxcpu. Optional attributes: cpuset specifies which physical CPUs the virtual CPUs may be mapped to; physical CPUs are separated by commas, a single number denotes a single CPU, ranges may be used, and a caret in front of a number excludes that CPU. current specifies how many virtual CPUs are enabled initially (it may be less than the maximum). placement specifies the CPU placement mode for the domain and can be static or auto.
Memory allocation:
<memory unit='KiB'>524288</memory>
memory defines the maximum memory that can be allocated to the guest at boot. The unit is set by the unit attribute and can be K, KiB, M, MiB, G, GiB, T, or TiB; the default is KiB.
<currentMemory>1024000</currentMemory>
currentMemory defines the memory actually given to the guest; it can be less than the value of memory. If it is not defined, it defaults to the same value as memory.
Lifecycle control:
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
When the guest OS triggers one of these lifecycle events, the configured action overrides the default behaviour. The states are:
on_poweroff: the action taken when the guest requests a poweroff
on_reboot: the action taken when the guest requests a reboot
on_crash: the action taken when the guest crashes
For each state, one of the following four actions can be specified:
destroy: the domain is terminated completely and all of its resources are released
restart: the domain is terminated and then restarted with the same configuration
preserve: the domain is terminated and its resources are kept for analysis
rename-restart: the domain is terminated and then restarted under a new name
Hypervisor features:
<features>
<pae/>
<acpi/>
<apic/>
<hap/>
<privnet/>
</features>
Hypervisors allow particular CPU or machine features to be turned on or off. All features live inside the features element. Some flags commonly used with full virtualization are:
pae: Physical Address Extension mode, allowing 32-bit guests to address more than 4 GB of memory
acpi: used for power management
hap: enable the use of Hardware Assisted Paging if it is available in the hardware
Clock settings:
<clock offset="localtime" />
The guest's clock is initialized from the host's clock. Most operating systems expect the hardware clock to be kept in UTC, which is also the default; Windows, however, expects 'localtime'.
The clock offset attribute supports four modes: utc, localtime, timezone, and variable.
utc: the guest clock is synchronized to UTC at boot
localtime: the guest clock is synchronized to the host's time zone at boot
timezone: the guest clock is synchronized to the requested time zone using the timezone attribute
Device settings:
<devices>
All devices occur as children of the main devices element. A simple configuration follows:
<emulator>/usr/bin/kvm</emulator>
The emulator element specifies the full path of the device-emulator binary.
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/dev/cciss/c0d0p6'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='block' device='cdrom'>
<target dev='hdc' bus='ide'/>
<readonly/>
</disk>
Any device that looks like a disk, floppy, cdrom, or paravirtualized driver is specified with a disk element.
disk is the main container for describing disks. Its type attribute can be file, block, dir, or network. device describes how the disk is exposed to the guest OS and can be floppy, disk, cdrom, or lun; the default is disk. The snapshot attribute states the default behaviour when a snapshot of the disk is taken: internal means changed data can be stored inside the snapshot, external keeps the live data separate during the snapshot, and no excludes the disk from snapshots; read-only disks default to no.
The source element: when the disk type is file, the file attribute gives the fully qualified path of the file image used as the guest's disk; when the type is block, the dev attribute gives the path of a host device to use as the disk; when the type is dir, the dir attribute gives the full path of a directory to use as the disk; when the type is network, the protocol attribute specifies the protocol used to access the image and can be nbd, rbd, or sheepdog. When the protocol is rbd or sheepdog, an additional name attribute must specify which image to use, and when the type is network the source may contain zero or more host sub-elements naming the hosts to connect to.
The target element: controls how the disk appears to the guest OS. The dev attribute gives the logical device name inside the guest (the name is not guaranteed to map to the same device name in the guest OS). The bus attribute specifies which type of disk is emulated: ide, scsi, virtio, xen, usb, or sata; if omitted, the bus type is inferred from the device name, for example a device named sda uses a scsi bus. The tray attribute describes the state of removable media such as a cdrom or floppy and can be open or closed; the default is closed.
The driver element allows further hypervisor-driver details to be specified. If the hypervisor supports multiple back-end drivers, the name attribute selects the primary back-end driver and the optional type attribute a sub-type; for example, xen supports the names tap, tap2, phy, and file, while qemu only supports the name qemu but with many types, including raw, bochs, qcow2, and qed. The cache attribute controls the caching mechanism: default, none, writethrough, writeback, directsync, or unsafe. The error_policy attribute controls what the hypervisor does on a read or write error: stop, report, ignore, or enospace; the default is report. The io attribute controls the I/O policy; QEMU guests support threads and native.
The readonly element: specifies that the guest may not modify the device. When a disk has device='cdrom', readonly is the default.
The host element: has two attributes, name and port, which specify the hostname and the port respectively.
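A disk can also be added to a running guest from the shell instead of editing the XML by hand; a hedged sketch reusing the node6 guest from earlier (the extra image path and the target name vdb are hypothetical):
qemu-img create -f raw /virhost/vmware/extra.img 5G # create an additional disk image
virsh attach-disk node6 /virhost/vmware/extra.img vdb # present it to the guest as device vdb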
Network interfaces:
There are several ways to give a guest a network interface: Virtual network, Bridge to LAN, Userspace SLIRP stack, Generic ethernet connection, and Direct attachment to a physical interface.
Virtual network: the recommended configuration when the guest is accessed through a dynamic or wireless network.
Bridge to LAN: the recommended configuration when the guest is connected over a static, wired network.
<interface type='bridge'>
<source bridge='br0'/>
<mac address='52:54:00:ad:82:97'/>
<model type='virtio'/>
</interface>
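The same kind of interface can be hot-added with virsh rather than by editing the XML; a small sketch reusing the br0 bridge and the node4 guest from the earlier examples:
virsh attach-interface node4 bridge br0 --model virtio # add a virtio NIC connected to bridge br0
virsh detach-interface node4 bridge --mac 52:54:00:ad:82:97 # detach it again, identified by MAC address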
Input devices:
Input devices make it possible to interact with the virtual machine through its graphical interface; when a graphical framebuffer is present, input devices are provided automatically.
<input type='mouse' bus='ps2'/>
The input element: input has one mandatory attribute, type, whose value can be mouse or tablet; the former uses relative motion, the latter absolute motion. The bus attribute selects a specific device bus and can be xen, ps2, or usb.
Graphics devices:
A graphics device provides a graphical interface for interacting with the guest; guests offer both a graphical interface and a text console for the administrator to work with.
<graphics type='vnc' port='-1' keymap='en-us'/>
<graphics type='vnc' port='5904'>
<listen type='address' address='1.2.3.4'/>
</graphics>
The graphics element: graphics has one mandatory attribute, type, which can be sdl, vnc, rdp, or desktop. vnc starts a VNC server; the port attribute specifies the TCP port, and -1 means it is assigned automatically (auto-assigned VNC ports count upward from 5900). The listen attribute provides an IP address for the server to listen on, which can also be set in a separate listen element. The passwd attribute sets a VNC password, and keymap selects the keymap to use.
Rather than putting the address information used to set up the listening socket for graphics types vnc and spice in the <graphics> listen attribute, a separate sub-element of <graphics>, called <listen>, can be specified (see the examples above), since 0.9.4. <listen> accepts the following attributes:
The listen element: listen configures the listening socket specifically for vnc and spice. It has the attributes type, address, and network. type can be address or network; with type=address, the address attribute gives an IP address or hostname to listen on, and with type=network, the network attribute names a network defined in libvirt's network configuration.
Character devices provide an interface for interacting with the virtual machine. Paravirtualized consoles, serial ports, parallel ports, and channels are all character devices and use the same syntax.
Serial ports:
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
A pseudo TTY is allocated using /dev/ptmx. A suitable client such as 'virsh console' can connect to interact with the serial port locally.
<parallel type='pty'>
<source path='/dev/pts/2'/>
<target port='0'/>
</parallel>
In each group of directives, the top-level element (parallel, serial, console, channel) describes how the device is presented to the guest; the guest interface is configured with the target element.
The interface presented to the host is given in the type attribute of the top-level element. The host interface is configured by the source element.
Video devices:
<video>
<model type='cirrus'/>
</video>
The video element is the container describing a video device. For full backward compatibility, if no video element is set but a graphics element is present in the XML, libvirt adds a default video device according to the guest type. The model element has a mandatory type attribute whose value can be vga, cirrus, vmvga, xen, vbox, or qxl; for a kvm guest, the default type is cirrus.
</devices>
----------------
qemu-kvm Basics
qemu-kvm is one of the more popular virtualization technologies at the moment, and over the coming period I will present a series of articles on it here. The virtualization discussed here means virtualization built on top of hardware support in the CPU.
KVM (Kernel-based Virtual Machine) official site: http://www.linux-kvm.org/page/Main_Page
Introduction:
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM is included in mainline Linux, as of 2.6.20.
KVM is open source software.
QEMU official site: http://wiki.qemu.org/Main_Page
Introduction:
QEMU is a generic and open source machine emulator and virtualizer.
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.
When used as a virtualizer, QEMU achieves near native performances by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.
Strictly speaking, KVM is a module of the Linux kernel. You can load the KVM module with modprobe; only after the module is loaded can you go on to create virtual machines with other tools. The KVM module alone is far from enough, because users cannot drive a kernel module directly; you also need a tool that runs in user space. For that user-space tool, the KVM developers chose the mature open-source virtualization software QEMU. QEMU is itself virtualization software; its distinguishing feature is that it can emulate different CPUs, for example emulating a Power CPU on an x86 CPU and using it to build programs that run on Power. KVM takes part of QEMU, modifies it slightly, and turns it into the user-space tool that controls KVM. That is why the official KVM download comes in two parts (qemu and kvm) and three files (the KVM modules, the QEMU tool, and a bundle of the two): you can upgrade just the KVM modules, or just the QEMU tool. This is the relationship between KVM and QEMU.
Linux Kernel-based Virtual Machine (KVM) is open-source Linux virtualization software based on the hardware virtualization extensions (Intel VT-x and AMD-V) and a modified version of QEMU. KVM is implemented as two kinds of modules: kvm.ko, which provides the core virtualization infrastructure, and the processor-specific modules kvm-intel.ko and kvm-amd.ko. Its design goal is to support full hardware emulation when multiple unmodified PC operating systems need to be booted.
from http://blog.csdn.net/cenziboy/article/details/6953647
----------
The qemu-kvm command uses the following syntax:
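In its most general form (a sketch; disk_img stands for the path to the guest's disk image):
qemu-kvm [options] [disk_img]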
This section introduces general qemu-kvm options and options related to the basic emulated hardware, such as the virtual machine's processor, memory, model type, or time processing methods.
Following is an example of a working qemu-kvm command line:
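A minimal sketch of such a command line, reusing the disk image and installation ISO from the earlier sections (the guest name, memory size, and VNC display are arbitrary choices):
qemu-kvm -name centos63 -m 1024 -smp 2 \
 -hda /images/centos6.3-x86_64.img \
 -cdrom /tmp/CentOS-6.3-x86_64-minimal.iso -boot d \
 -net nic,model=virtio -net user \
 -vnc :2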
QEMU virtual machines emulate all devices needed to run a VM Guest.
QEMU supports, for example, several types of network cards, block
devices (hard and removable drives), USB devices, character devices
(serial and parallel ports), or multimedia devices (graphic and sound
cards). For satisfactory operation and performance of the virtual
machine, some or all of these devices must be configured correctly. This
section introduces options to configure various types of supported
devices.
Block devices are vital for virtual machines. In general, these are
fixed or removable storage media usually referred to as ‘drives’. One of
the connected hard drives typically holds the guest operating system to
be virtualized.
Virtual machine drives are defined with the -drive option.
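A hedged sketch of the -drive syntax, again with the disk image used earlier (the interface and cache settings are illustrative, not required):
qemu-kvm -m 1024 \
 -drive file=/images/centos6.3-x86_64.img,if=virtio,cache=none \
 -drive file=/tmp/CentOS-6.3-x86_64-minimal.iso,media=cdrom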
This section describes QEMU options affecting the type of the
emulated video card and the way VM Guest graphical output is displayed.
QEMU uses the -vga option to select the type of video card to emulate.
The following options affect the way VM Guest graphical output is displayed.
There are basically two ways to create USB devices usable by the VM Guest in KVM: you can either emulate new USB devices inside a VM Guest, or assign an existing host USB device to a VM Guest. To use USB devices in QEMU you first need to enable the generic USB driver with the -usb option.
Although QEMU supports much more types of USB devices, SUSE currently only supports the types
To assign an existing host USB device to a VM Guest, you need to find out its host bus and device ID.
PCI Pass-Through is a technique to give your VM Guest exclusive access to a PCI device.
KVM also supports PCI device hotplugging to a VM Guest. To achieve this, you need to switch to a QEMU monitor (see Chapter 14, Administrating Virtual Machines with QEMU Monitor for more information) and issue the following commands:
Use
By default QEMU creates a set of character devices for serial and
parallel ports, and a special console for QEMU monitor. You can,
however, create your own character devices and use them for just
mentioned purposes. The following options will help you:
Use the
Use
The
The VM Guest allocates an IP address from a virtual DHCP server. VM Host Server (the DHCP server) is reachable at 10.0.2.2, while the IP address range for allocation starts from 10.0.2.15. You can use ssh to connect to VM Host Server at 10.0.2.2, and scp to copy files back and forth.
This section shows several examples on how to set up user-mode networking with QEMU.
With the
First, create a network bridge and add a VM Host Server physical network interface (usually
Use the following example script to connect VM Guest to the newly created bridge interface
13.4.4. Accelerated Networking with
The
To make use of the module, verify that the host’s running Kernel has
QEMU normally uses an SDL (a cross-platform multimedia library) window to display the graphical output of a VM Guest. With the
The first suboption of
The default VNC server setup does not use any form of authentication.
In the previous example, any user can connect and view the QEMU VNC
Session from any host on the network.
There are several levels of security which you can apply to your VNC client/server connection. You can either protect your connection with a password, use x509 certificates, use SASL authentication, or even combine some of these authentication methods in one QEMU command.
See Section A.2, “Generating x509 Client/Server Certificates” for more information about the x509 certificates generation. For more information about configuring x509 certificates on a VM Host Server and the client, see Section 7.2.2, “Remote TLS/SSL Connection with x509 Certificate (
The Vinagre VNC viewer supports advanced authentication mechanisms. Therefore, it will be used to view the graphical output of VM Guest in the following examples. For this example, let us assume that the server x509 certificates
VM Guests usually run in a separate computing space — they are
provided their own memory range, dedicated CPUs, and filesystem space.
Ability to share parts of VM Host Server’s filesystem makes the
virtualization environment more flexible by simplifying mutual data
exchange. Network filesystems, such as CIFS and NFS, have been the
traditional way of sharing folders. But as they are not specifically
designed for virtualization purposes, they suffer from major performance
and feature issues.
KVM introduces a new and more optimized tool called VirtFS (sometimes referred to as a “filesystem pass-through”). VirtFS uses a paravirtual filesystem driver, which avoids converting the guest application filesystem operations into block device operations, and then again into host filesystem operations. VirtFS uses Plan-9 network protocol for communication between the guest and the host.
You can typically use VirtFS to
In QEMU, the implementation of VirtFS is facilitated by defining two types of devices:
Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges
identical memory pages from multiple running processes into one memory
region. Because KVM guests run as processes under Linux, KSM provides
the memory overcommit feature to hypervisors for more efficient use of
memory. Therefore, if you need to run multiple virtual machines on a
host with limited memory, KSM is the best solution for you.
To make use of KSM, do the following.
For more information on the meaning of the
The video device in a libvirt domain XML definition looks like this:
<video>
<model type='cirrus'/>
</video>
The video element is the container for describing the video device. For backward compatibility, if no video element is set but a graphics element is present in the XML configuration, libvirt adds a default video device according to the guest type. The model sub-element has a mandatory type attribute whose value can be vga, cirrus, vmvga, xen, vbox, or qxl. For a guest of type kvm, for example, the default type is cirrus.
----------------
qemu-kvm Basics
qemu-kvm is currently a popular virtualization technology, and the following is a series of notes on it. The virtualization discussed here is virtualization built on top of CPU hardware support.
KVM(Kernel-based Virtual Machine)官网:http://www.linux-kvm.org/page/Main_Page
介绍:
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU although work is underway to get the required changes upstream.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM is included in mainline Linux, as of 2.6.20.
KVM is open source software.
qemu官网:http://wiki.qemu.org/Main_Page
介绍:
QEMU is a generic and open source machine emulator and virtualizer.
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.
When used as a virtualizer, QEMU achieves near native performances by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.
Strictly speaking, KVM is a Linux kernel module. You can load it with modprobe, and only after the module is loaded can virtual machines be created with other tools. The KVM module alone is far from enough, though, because users cannot drive a kernel module directly; a user-space tool is also required. For this user-space tool the KVM developers chose the mature open-source virtualization software QEMU. QEMU is itself virtualization software whose distinguishing feature is that it can emulate different CPUs, for example emulating a PowerPC CPU on an x86 CPU and building programs that run on PowerPC. KVM reuses part of QEMU, slightly modified, as its user-space control tool. That is why the official KVM downloads come in two parts (qemu and kvm) and three files (the KVM module, the QEMU tool, and a bundle of both): you can upgrade only the KVM module, or only the QEMU tool. This is the relationship between KVM and QEMU.
The Linux Kernel-based Virtual Machine (KVM) is open-source Linux virtualization software based on the hardware virtualization extensions (Intel VT-x and AMD-V) and a modified version of QEMU. KVM is implemented as two kinds of modules: kvm.ko, which provides the core virtualization infrastructure, and the processor-specific modules kvm-intel.ko and kvm-amd.ko. Its design goal is to support full hardware emulation when booting multiple unmodified PC operating systems.
from http://blog.csdn.net/cenziboy/article/details/6953647
----------
Running Virtual Machines with qemu-kvm
(source: http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.qemu.running.html#cha.qemu.running.networking.nic, Chapter 13, "Running Virtual Machines with qemu-kvm")
Once you have a virtual disk image ready (for more information on disk images, see Section 12.2, "Managing Disk Images with qemu-img"), it is time to start the related virtual machine. Section 12.1, "Basic Installation with qemu-kvm" introduced simple commands to install and run a VM Guest. This chapter focuses on a more detailed explanation of qemu-kvm usage and shows solutions to more specific tasks. For a complete list of qemu-kvm's options, see its manual page (man 1 qemu-kvm).
13.1. Basic qemu-kvm Invocation
The qemu-kvm command uses the following syntax:
qemu-kvm options disk_img
13.2. General qemu-kvm Options
-name name_of_guest
- Specifies the name of the running guest system. The name is displayed in the window caption and is also used for the VNC server.
-boot options
- Specifies the order in which the defined drives will be booted. Drives are represented by letters, where 'a' and 'b' stand for floppy drives 1 and 2, 'c' stands for the first hard disk, 'd' stands for the first CD-ROM drive, and 'n' to 'p' stand for Ether-boot network adapters. For example,
qemu-kvm [...] -boot order=ndc
first tries to boot from network, then from the first CD-ROM drive, and finally from the first hard disk.
-pidfile fname
- Stores QEMU's process identification number (PID) in a file. This is useful if you run QEMU from a script.
-nodefaults
- By default QEMU creates basic virtual devices even if you do not specify them on the command line. This option turns this feature off; you must then specify every single device manually, including graphics and network cards, parallel or serial ports, and virtual consoles. Even the QEMU monitor is not attached by default.
-daemonize
- 'Daemonizes' the QEMU process after it is started. QEMU will detach from standard input and standard output once it is ready to receive connections on any of its devices.
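As a quick illustration of these general options, here is a minimal, hedged sketch (the guest name, memory size, PID file path, and image path are placeholders, not taken from the original article) that names the guest, sets the boot order, writes a PID file, and daemonizes QEMU with a VNC display:
qemu-kvm -name "centos62-guest" -m 1024 -boot order=dc \
  -pidfile /var/run/qemu-centos62.pid -daemonize -vnc :1 \
  /var/lib/libvirt/images/vm10.img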
13.2.1. Basic Virtual Hardware
-M machine_type
- Specifies the type of the emulated machine. Run qemu-kvm -M ? to view a list of supported machine types.
tux@venus:~> qemu-kvm -M ?
Supported machines are:
pc       Standard PC (alias of pc-0.12)
pc-0.12  Standard PC (default)
pc-0.11  Standard PC, qemu 0.11
pc-0.10  Standard PC, qemu 0.10
isapc    ISA-only PC
Currently, SUSE supports only the pc-0.12 and pc-0.14 machine types.
-m megabytes
- Specifies how many megabytes are used for the virtual RAM size. Default is 512 MB.
-balloon virtio
- Specifies a paravirtualized device to dynamically change the amount of virtual RAM memory assigned to the VM Guest. The top limit is the amount of memory specified with -m.
-cpu cpu_model
- Specifies the type of the processor (CPU) model. Run qemu-kvm -cpu ? to view a list of supported CPU models.
tux@venus:~> qemu-kvm -cpu ?
x86 qemu64
x86 phenom
x86 core2duo
x86 kvm64
x86 qemu32
x86 coreduo
x86 486
x86 pentium
x86 pentium2
x86 pentium3
x86 athlon
x86 n270
-smp number_of_cpus
- Specifies how many CPUs will be emulated. QEMU supports up to 255 CPUs on the PC platform (up to 64 with KVM acceleration). This option also takes other CPU-related parameters, such as the number of sockets, the number of cores per socket, and the number of threads per core.
Following is an example of a working qemu-kvm command line:
qemu-kvm -name "SLES 11 SP1" -M pc-0.12 -m 512 -cpu kvm64 \
  -smp 2 /images/sles11sp1.raw
-no-acpi
- Disables ACPI support. Try it if the VM Guest reports problems with the ACPI interface.
-S
- QEMU starts with the CPU stopped. To start the CPU, enter c in the QEMU monitor. For more information, see Chapter 14, Administrating Virtual Machines with QEMU Monitor.
13.2.2. Storing and Reading Configuration of Virtual Devices
-readconfig cfg_file
- Instead of entering the device configuration options on the command line each time you want to run a VM Guest, qemu-kvm can read them from a file that was either previously saved with -writeconfig or edited manually.
-writeconfig cfg_file
- Dumps the current virtual machine device configuration to a text file. It can subsequently be re-used with the -readconfig option.
tux@venus:~> qemu-kvm -name "SLES 11 SP1" -M pc-0.12 -m 512 -cpu kvm64 \
  -smp 2 /images/sles11sp1.raw -writeconfig /images/sles11sp1.cfg
(exited)
tux@venus:~> more /images/sles11sp1.cfg
# qemu config file
[drive]
  index = "0"
  media = "disk"
  file = "/images/sles11sp1_base.raw"
This way you can effectively manage the configuration of your virtual machines' devices in a well-arranged way.
13.2.3. Guest Real-time Clock
-rtc options
- Specifies the way the RTC is handled inside the VM Guest. By default, the clock of the guest is derived from that of the host system, so it is recommended that the host system clock be synchronized with an accurate external clock (for example, via an NTP service). If you need to isolate the VM Guest clock from the host clock, specify clock=vm instead of the default clock=host. You can also specify a 'starting point' for the VM Guest clock with the base option:
qemu-kvm [...] -rtc clock=vm,base=2010-12-03T01:02:00
Instead of a timestamp, you can specify utc or localtime. The former instructs the VM Guest clock to start at the current UTC value (Coordinated Universal Time, see http://en.wikipedia.org/wiki/UTC), while the latter applies the local time setting.
13.3. Using Devices in QEMU¶
13.3.1. Block Devices¶
Virtual machine drives are defined with -drive. This option has many sub-options, some of which are described in this section. For the complete list, see the manual page (man 1 qemu-kvm).
Sub-options for the -drive option:
file=image_fname
- Specifies the path to the disk image which will be used with this drive. If not specified, an empty (removable) drive is assumed.
if=drive_interface
- Specifies the type of interface to which the drive is connected. Currently only floppy, ide, and virtio are supported by SUSE. virtio defines a paravirtualized disk driver. The default is ide.
index=index_of_connector
- Specifies the index number of a connector on the disk interface (see the if option) to which the drive is connected. If not specified, the index is automatically incremented.
media=type
- Specifies the type of media. Can be disk for hard disks, or cdrom for removable CD-ROM drives.
format=img_fmt
- Specifies the format of the connected disk image. If not specified, the format is autodetected. Currently, SUSE supports the qcow2, qed and raw formats.
cache=method
- Specifies the caching method for the drive. Possible values are unsafe, writethrough, writeback, and none. For the qcow2 image format, choose writeback if you care about performance. none disables the host page cache and is therefore the safest option. The default is writethrough.
To simplify the definition of block devices, QEMU understands several shortcuts which you may find handy when entering the qemu-kvm command line. You can use
qemu-kvm -cdrom /images/cdrom.iso
instead of
qemu-kvm -drive file=/images/cdrom.iso,index=2,media=cdrom
and
qemu-kvm -hda /images/image1.raw -hdb /images/image2.raw -hdc /images/image3.raw -hdd /images/image4.raw
instead of
qemu-kvm -drive file=/images/image1.raw,index=0,media=disk \
  -drive file=/images/image2.raw,index=1,media=disk \
  -drive file=/images/image3.raw,index=2,media=disk \
  -drive file=/images/image4.raw,index=3,media=disk
Using Host Drives Instead of Images
Normally you will use disk images (see Section 12.2, "Managing Disk Images with qemu-img") as the disk drives of the virtual machine. However, you can also use existing VM Host Server disks, connect them as drives, and access them from the VM Guest. Use the host disk device directly instead of a disk image filename. To access the host CD-ROM drive, use
qemu-kvm [...] -drive file=/dev/cdrom,media=cdrom
To access a host hard disk, use
qemu-kvm [...] -drive file=/dev/hdb,media=disk
When accessing a host hard drive from a VM Guest, always make sure the access is read-only. You can do so by modifying the host device permissions.
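To tie the -drive sub-options together, here is a hedged example (the image path is a placeholder) that attaches a qcow2 image as a paravirtualized virtio disk with writeback caching:
qemu-kvm [...] -drive file=/var/lib/libvirt/images/vm10.qcow2,if=virtio,format=qcow2,cache=writeback,media=disk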
13.3.2. Graphic Devices and Display Options
This section describes QEMU options affecting the type of the emulated video card and the way VM Guest graphical output is displayed.
13.3.2.1. Defining Video Cards
QEMU uses -vga to define a video card used to display VM Guest graphical output. The -vga option understands the following values:
none
- Disables video cards on the VM Guest (no video card is emulated). You can still access the running VM Guest via the QEMU monitor and the serial console.
std
- Emulates a standard VESA 2.0 VBE video card. Use it if you intend to use a high display resolution on the VM Guest.
cirrus
- Emulates a Cirrus Logic GD5446 video card. A good choice if you insist on high compatibility of the emulated video hardware. Most operating systems (even Windows 95) recognize this type of card. For best video performance with the cirrus type, use 16-bit color depth both on the VM Guest and the VM Host Server.
13.3.2.2. Display Options
The following options affect the way VM Guest graphical output is displayed.
-nographic
- Disables QEMU's graphical output. The emulated serial port is redirected to the console. After starting the virtual machine with -nographic, press Ctrl+A H in the virtual console to view the list of other useful shortcuts, for example, to toggle between the console and the QEMU monitor.
tux@venus:~> qemu-kvm -hda /images/sles11sp1_base.raw -nographic
C-a h    print this help
C-a x    exit emulator
C-a s    save disk data back to file (if -snapshot)
C-a t    toggle console timestamps
C-a b    send break (magic sysrq)
C-a c    switch between console and monitor
C-a C-a  sends C-a
(pressed C-a c)
QEMU 0.12.5 monitor - type 'help' for more information
(qemu)
-no-frame
- Disables decorations for the QEMU window. Convenient for a dedicated desktop workspace.
-full-screen
- Starts QEMU graphical output in full screen mode.
-no-quit
- Disables the 'close' button of the QEMU window and prevents it from being closed by force.
-alt-grab, -ctrl-grab
- By default the QEMU window releases the 'captured' mouse after Ctrl+Alt is pressed. You can change the key combination to either Ctrl+Alt+Shift (-alt-grab) or Right Ctrl (-ctrl-grab).
13.3.3. USB Devices
There are basically two ways to create USB devices usable by the VM Guest in KVM: you can either emulate new USB devices inside a VM Guest, or assign an existing host USB device to a VM Guest. To use USB devices in QEMU you first need to enable the generic USB driver with the -usb option. Then you can specify individual devices with the -usbdevice option.
13.3.3.1. Emulating USB Devices in VM Guest
Although QEMU supports many more types of USB devices, SUSE currently only supports the types mouse and tablet.
Types of USB devices for the -usbdevice option:
mouse
- Emulates a virtual USB mouse. This option overrides the default PS/2 mouse emulation. The following example shows the hardware status of a mouse on a VM Guest started with qemu-kvm [...] -usbdevice mouse:
tux@venus:~> hwinfo --mouse
20: USB 00.0: 10503 USB Mouse
[Created at usb.122]
UDI: /org/freedesktop/Hal/devices/usb_device_627_1_1_if0
[...]
Hardware Class: mouse
Model: "Adomax QEMU USB Mouse"
Hotplug: USB
Vendor: usb 0x0627 "Adomax Technology Co., Ltd"
Device: usb 0x0001 "QEMU USB Mouse"
[...]
tablet
- Emulates a pointer device that uses absolute coordinates (such as a touchscreen). This option overrides the default PS/2 mouse emulation. The tablet device is useful if you are viewing the VM Guest via the VNC protocol. See Section 13.5, "Viewing a VM Guest with VNC" for more information.
13.3.3.2. USB Pass-Through
To assign an existing host USB device to a VM Guest, you need to find out its host bus and device ID.
tux@vmhost:~> lsusb
[...]
Bus 002 Device 005: ID 12d1:1406 Huawei Technologies Co., Ltd. E1750
[...]
In the above example, we want to assign a USB stick connected to the host's USB bus number 2 with device number 5. Now run the VM Guest with the following additional options:
qemu-kvm [...] -usb -device usb-host,hostbus=2,hostaddr=5
After the guest is booted, check that the assigned USB device is present in it.
tux@vmguest:~> lsusb
[...]
Bus 001 Device 002: ID 12d1:1406 Huawei Technologies Co., Ltd. E1750
[...]
The guest operating system must take care of mounting the assigned USB device so that it is accessible to the user.
13.3.4. PCI Pass-Through
PCI Pass-Through is a technique to give your VM Guest exclusive access to a PCI device.
Note: To make use of PCI Pass-Through, your motherboard chipset, BIOS, and CPU must support AMD's IOMMU (or VT-d in Intel speak) virtualization technology. To make sure that your computer supports this feature, ask your supplier specifically to deliver a system that supports PCI Pass-Through.
Note: Assignment of graphics cards is not supported by SUSE.
Procedure 13.1. Configuring PCI Pass-Through
- Make sure that CONFIG_DMAR_DEFAULT_ON is set in the host's running kernel:
grep CONFIG_DMAR_DEFAULT_ON /boot/config-`uname -r`
If this option is not set, edit your boot loader configuration and add intel_iommu=on (Intel machines) or iommu=pt iommu=1 (AMD machines). Then reboot the host machine.
- Check that IOMMU is actively enabled and recognized on the host. Run dmesg | grep -e DMAR -e IOMMU on Intel machines, or dmesg | grep AMD-Vi on AMD machines. If you get no output, check carefully whether your hardware supports IOMMU (VT-d) and that it has been enabled in the BIOS.
- Identify the host PCI device to assign to the guest.
tux@vmhost:~> lspci -nn
[...]
00:1b.0 Audio device [0403]: Intel Corporation 82801H (ICH8 Family) \
HD Audio Controller [8086:284b] (rev 02)
[...]
Note down the device (00:1b.0) and vendor (8086:284b) IDs.
- Unbind the device from the host kernel driver and bind it to the PCI stub driver.
tux@vmhost:~> modprobe pci_stub
tux@vmhost:~> echo "8086 284b" > /sys/bus/pci/drivers/pci-stub/new_id
tux@vmhost:~> echo "0000:00:1b.0" > /sys/bus/pci/devices/0000:00:1b.0/driver/unbind
tux@vmhost:~> echo "0000:00:1b.0" > /sys/bus/pci/drivers/pci-stub/bind
- Now run the VM Guest with the PCI device assigned.
qemu-kvm [...] -device pci-assign,host=00:1b.0
Note: If the PCI device shares an IRQ with other devices, it cannot be assigned to a VM Guest.
KVM also supports PCI device hotplugging to a VM Guest. To achieve this, you need to switch to a QEMU monitor (see Chapter 14, Administrating Virtual Machines with QEMU Monitor for more information) and issue the following commands:
- hot add:
device_add pci-assign,host=00:1b.0,id=new_pci_device
- hot remove:
device_del new_pci_device
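To avoid retyping the unbind/bind sequence from Procedure 13.1 on every boot, here is a minimal, hedged helper sketch. The script itself is not from the original documentation; the hard-coded IDs simply match the audio device from the example above and would need to be adapted.
#!/bin/bash
# illustrative helper: rebind the 00:1b.0 device (vendor 8086:284b) to pci-stub
modprobe pci_stub
echo "8086 284b" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:00:1b.0" > /sys/bus/pci/devices/0000:00:1b.0/driver/unbind
echo "0000:00:1b.0" > /sys/bus/pci/drivers/pci-stub/bind
# then start the guest with the device assigned
qemu-kvm [...] -device pci-assign,host=00:1b.0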
13.3.5. Character Devices
Use -chardev to create a new character device. The option uses the following general syntax:
qemu-kvm [...] -chardev backend_type,id=id_string
where backend_type can be one of null, socket, udp, msmouse, vc, file, pipe, console, serial, pty, stdio, braille, tty, or parport.
All character devices must have a unique identification string up to
127 characters long. It is used to identify the device in other related
directives. For the complete description of all backend’s suboptions,
see the manual page (man 1 qemu-kvm). A brief description of the available backends
follows:null
- Creates an empty device which outputs no data and drops any data it receives.
stdio
- Connects to QEMU’s process standard input and standard output.
socket
- Creates a two-way stream socket. If
path
is specified, a Unix socket is created:
qemu-kvm [...] -chardev \ socket,id=unix_socket1,path=/tmp/unix_socket1,server
Theserver
suboption specifies that the socket is a listening socket.
Ifport
is specified, a TCP socket is created:
qemu-kvm [...] -chardev \ socket,id=tcp_socket1,host=localhost,port=7777,server,nowait
The command creates a local listening (server
) TCP socket on port 7777. QEMU will not block waiting for a client to connect to the listening port (nowait
). udp
- Sends all network traffic from VM Guest to a remote host over the UDP protocol.
qemu-kvm [...] -chardev udp,id=udp_fwd,host=mercury.example.com,port=7777
The command binds port 7777 on the remote host mercury.example.com and sends VM Guest network traffic there. vc
- Creates a new QEMU text console. You can optionally specify the dimensions of the virtual console:
qemu-kvm [...] -chardev vc,id=vc1,width=640,height=480 -mon chardev=vc1
The command creates a new virtual console calledvc1
of the specified size, and connects the QEMU monitor to it. file
- Logs all traffic from VM Guest to a file on VM Host Server. The
path
is required and will be created if it does not exist.
qemu-kvm [...] -chardev file,id=qemu_log1,path=/var/log/qemu/guest1.log
-serial char_dev
- Redirects the VM Guest's virtual serial port to a character device char_dev on the VM Host Server. By default, this is a virtual console (vc) in graphical mode, and stdio in non-graphical mode. -serial understands many sub-options; see the manual page man 1 qemu-kvm for their complete list. You can emulate up to 4 serial ports. Use -serial none to disable all serial ports.
-parallel device
- Redirects the VM Guest's parallel port to a device. This option supports the same devices as -serial. With SUSE Linux Enterprise Server as the VM Host Server, you can directly use the hardware parallel port devices /dev/parportN, where N is the number of the port. Use -parallel none to disable all parallel ports.
-monitor char_dev
- Redirects the QEMU monitor to a character device char_dev on the VM Host Server. This option supports the same devices as -serial. By default, it is a virtual console (vc) in graphical mode, and stdio in non-graphical mode.
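As a quick, hedged illustration of these redirections (the log path is a placeholder, not from the original text), the following invocation writes the guest's serial console to a file on the host while keeping the QEMU monitor on the terminal:
qemu-kvm [...] -serial file:/var/log/qemu/vm10-serial.log -monitor stdio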
13.4. Networking in QEMU
Use the -net option to define a network interface and a specific type of networking for your VM Guest. Currently, SUSE supports the following options: none, nic, user, and tap. For a complete list of -net sub-options, see the manual page (man 1 qemu-kvm).
Supported -net sub-options:
none
- Disables network card emulation on the VM Guest. Only the loopback lo network interface is available.
nic
- Creates a new Network Interface Card (NIC) and connects it to a specified Virtual Local Area Network (VLAN). For more information, see Section 13.4.1, "Defining a Network Interface Card".
user
- Specifies user-mode networking. For more information, see Section 13.4.2, "User-mode Networking".
tap
- Specifies bridged or routed networking. For more information, see Section 13.4.3, "Bridged Networking".
13.4.1. Defining a Network Interface Card
Use -net nic to add a new emulated network card:
qemu-kvm [...] -net nic,vlan=1,macaddr=00:16:35:AF:94:4B,\
model=virtio,name=ncard1
13.4.2. User-mode Networking
The -net user option instructs QEMU to use user-mode networking. This is the default if no networking mode is selected; therefore, these command lines are equivalent:
qemu-kvm -hda /images/sles11sp1_base.raw
qemu-kvm -hda /images/sles11sp1_base.raw -net nic -net user
This mode is useful if you want to allow the VM Guest to access external network resources, such as the Internet. By default, no incoming traffic is permitted, and therefore the VM Guest is not visible to other machines on the network. No administrator privileges are required in this networking mode. User-mode is also useful for 'network-booting' your VM Guest from a local directory on the VM Host Server.
The VM Guest allocates an IP address from a virtual DHCP server. The VM Host Server (the DHCP server) is reachable at 10.0.2.2, while the IP address range for allocation starts from 10.0.2.15. You can use ssh to connect to the VM Host Server at 10.0.2.2, and scp to copy files back and forth.
13.4.2.1. Command Line Examples
This section shows several examples of how to set up user-mode networking with QEMU.
Example 13.1. Restricted User-mode Networking
qemu-kvm [...] -net user,vlan=1,name=user_net1,restrict=yes
Example 13.2. User-mode Networking with Custom IP Range
qemu-kvm [...] -net user,net=10.2.0.0/8,host=10.2.0.6,dhcpstart=10.2.0.20,\
hostname=tux_kvm_guest
Example 13.3. User-mode Networking with Network-boot and TFTP
qemu-kvm [...] -net user,tftp=/images/tftp_dir,bootfile=/images/boot/pxelinux.0
Example 13.4. User-mode Networking with Host Port Forwarding
qemu-kvm [...] -net user,hostfwd=tcp::2222-:22
Forwards incoming TCP connections on port 2222 on the host to port 22 (SSH) on the VM Guest. If sshd is running on the VM Guest, enter
ssh qemu_host -p 2222
where qemu_host is the hostname or IP address of the host system, to get an SSH prompt from the VM Guest.
13.4.3. Bridged Networking
With the -net tap option, QEMU creates a network bridge by connecting the host TAP network device to a specified VLAN of the VM Guest. Its network interface is then visible to the rest of the network. This method does not work by default and has to be set up explicitly.
First, create a network bridge and add a VM Host Server physical network interface (usually eth0) to it:
- Start YaST Control Center and select Network Devices > Network Settings.
- Click Add and select Bridge from the Device Type drop-down list in the Hardware Dialog window. Click Next.
- Choose whether you need a dynamically or statically assigned IP address, and fill in the related network settings if applicable.
- In the Bridged Devices pane, select the Ethernet device to add to the bridge. Click Next. When asked about adapting an already configured device, click Continue.
- Click OK to apply the changes. Check that the bridge was created:
tux@venus:~> brctl show
bridge name    bridge id           STP enabled    interfaces
br0            8000.001676d670e4   no             eth0
Use the following example script to connect the VM Guest to the newly created bridge interface br0. Several commands in the script are run via the sudo mechanism because they require root privileges.
Make sure the tunctl and bridge-utils packages are installed on the VM Host Server. If not, install them with zypper in tunctl bridge-utils.
#!/bin/bash
bridge=br0
tap=$(sudo tunctl -u $(whoami) -b)
sudo ip link set $tap up
sleep 1s
sudo brctl addif $bridge $tap
qemu-kvm -m 512 -hda /images/sles11sp1_base.raw \
  -net nic,vlan=0,model=virtio,macaddr=00:16:35:AF:94:4B \
  -net tap,vlan=0,ifname=$tap,script=no,downscript=no
sudo brctl delif $bridge $tap
sudo ip link set $tap down
sudo tunctl -d $tap
13.4.4. Accelerated Networking with vhost-net
The vhost-net module is used to accelerate KVM's paravirtualized network drivers. It provides better latency and greater network throughput.
To make use of the module, verify that the host's running kernel has CONFIG_VHOST_NET turned on or enabled as a module:
grep CONFIG_VHOST_NET /boot/config-`uname -r`
Also verify that the guest's running kernel has CONFIG_PCI_MSI enabled:
grep CONFIG_PCI_MSI /boot/config-`uname -r`
If both conditions are met, use the vhost-net driver by starting the guest with the following example command line:
qemu-kvm [...] -netdev tap,id=guest0,vhost=on,script=no \
  -net nic,model=virtio,netdev=guest0,macaddr=00:16:35:AF:94:4B
Note that guest0 is an identification string of the vhost-driven device.
13.5. Viewing a VM Guest with VNC
With the -vnc option specified, you can make QEMU listen on a specified VNC display and redirect its graphical output to the VNC session.
Note: When working with QEMU's virtual machine via a VNC session, it is useful to also use the -usbdevice tablet option. Moreover, if you need to use a keyboard layout other than the default en-us, specify it with the -k option.
The argument to -vnc must be a display value. The -vnc option understands the following display specifications:
host:display
- Only connections from host on the display number display will be accepted. The TCP port on which the VNC session then runs is normally 5900 + the display number. If you do not specify host, connections will be accepted from any host.
unix:path
- The VNC server listens for connections on Unix domain sockets. The path option specifies the location of the related Unix socket.
none
- The VNC server functionality is initialized, but the server itself is not started. You can start the VNC server later with the QEMU monitor. For more information, see Chapter 14, Administrating Virtual Machines with QEMU Monitor.
tux@venus:~> qemu-kvm [...] -vnc :5
(on the client:)
wilber@jupiter:~> vinagre venus:5905 &
13.5.1. Secure VNC Connections
The default VNC server setup does not use any form of authentication; in the previous example, any user can connect and view the QEMU VNC session from any host on the network.
There are several levels of security which you can apply to your VNC client/server connection. You can either protect your connection with a password, use x509 certificates, use SASL authentication, or even combine some of these authentication methods in one QEMU command.
See Section A.2, "Generating x509 Client/Server Certificates" for more information about x509 certificate generation. For more information about configuring x509 certificates on a VM Host Server and the client, see Section 7.2.2, "Remote TLS/SSL Connection with x509 Certificate (qemu+tls)" and Section 7.2.2.3, "Configuring the Client and Testing the Setup".
The Vinagre VNC viewer supports advanced authentication mechanisms, so it will be used to view the graphical output of the VM Guest in the following examples. For these examples, let us assume that the server x509 certificates ca-cert.pem, server-cert.pem, and server-key.pem are located in the /etc/pki/qemu directory on the host, while the client's certificates are distributed in the following locations on the client:
/etc/pki/CA/cacert.pem
/etc/pki/libvirt-vnc/clientcert.pem
/etc/pki/libvirt-vnc/private/clientkey.pem
Example 13.5. Password Authentication
qemu-kvm [...] -vnc :5,password -monitor stdio
Starts the VM Guest graphical output on VNC display number 5 (usually port 5905). The password sub-option initializes a simple password-based authentication method. There is no password set by default, and you have to set one with the change vnc password command in the QEMU monitor:
QEMU 0.12.5 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****
You need the -monitor stdio option here, because you would not be able to manage the QEMU monitor without redirecting its input/output.
Example 13.6. x509 Certificate Authentication¶
The QEMU VNC server can use TLS encryption for the session and x509
certificates for authentication. The server asks the client for a
certificate and validates it against the CA certificate. Use this
authentication type if your company provides an internal certificate
authority.
qemu-kvm [...] -vnc :5,tls,x509verify=/etc/pki/qemu
Example 13.7. x509 Certificate and Password Authentication¶
You can combine the password authentication with TLS encryption and
x509 certificate authentication to create a two-layer authentication
model for clients. Remember to set the password in the QEMU monitor
after you run the following command:
qemu-kvm [...] -vnc :5,password,tls,x509verify=/etc/pki/qemu -monitor stdio
Example 13.8. SASL Authentication¶
Simple Authentication and Security Layer (SASL) is a framework for
authentication and data security in Internet protocols. It integrates
several authentication mechanisms, like PAM, Kerberos, LDAP and more.
SASL keeps its own user database, so the connecting user accounts do not
need to exist on VM Host Server.
For security reasons, you are advised to combine SASL authentication with TLS encryption and x509 certificates:
qemu-kvm [...] -vnc :5,tls,x509,sasl -monitor stdio
13.6. VirtFS: Sharing Folders between Host and Guests¶
KVM introduces a new and more optimized tool called VirtFS (sometimes referred to as a “filesystem pass-through”). VirtFS uses a paravirtual filesystem driver, which avoids converting the guest application filesystem operations into block device operations, and then again into host filesystem operations. VirtFS uses Plan-9 network protocol for communication between the guest and the host.
You can typically use VirtFS to
- access a shared folder from several guests, or to provide guest-to-guest filesystem access.
- replace the virtual disk as the root filesystem to which the guest’s ramdisk connects to during the guest boot process
- provide storage services to different customers from a single host filesystem in a cloud environment
13.6.1. Implementation
In QEMU, the implementation of VirtFS is facilitated by defining two types of devices:
- a virtio-9p-pci device which transports protocol messages and data between the host and the guest.
- an fsdev device which defines the exported filesystem properties, such as filesystem type and security model.
Example 13.9. Exporting a Host Filesystem with VirtFS
qemu-kvm [...] -fsdev local,id=exp1,path=/tmp/,security_model=mapped \
  -device virtio-9p-pci,fsdev=exp1,mount_tag=v_tmp
Such an exported filesystem can then be mounted on the guest like this:
mount -t 9p -o trans=virtio v_tmp /mnt
where v_tmp is the mount tag defined earlier with -device mount_tag= and /mnt is the mount point where you want to mount the exported filesystem.
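If you want the share mounted automatically when the guest boots, a hedged sketch of the corresponding /etc/fstab entry on the guest (reusing the mount tag and mount point from the example above) would be:
v_tmp  /mnt  9p  trans=virtio  0  0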
13.7. KSM: Sharing Memory Pages between Guests
To make use of KSM, do the following.
- Verify that KSM is enabled in your running kernel:
grep KSM /boot/config-`uname -r`
CONFIG_KSM=y
If KSM is enabled in the running kernel, you will see the following files under the /sys/kernel/mm/ksm directory:
ls -l /sys/kernel/mm/ksm
total 0
drwxr-xr-x 2 root root 0 Nov  9 07:10 ./
drwxr-xr-x 6 root root 0 Nov  9 07:10 ../
-r--r--r-- 1 root root 4096 Nov  9 07:10 full_scans
-r--r--r-- 1 root root 4096 Nov  9 07:10 pages_shared
-r--r--r-- 1 root root 4096 Nov  9 07:10 pages_sharing
-rw-r--r-- 1 root root 4096 Nov  9 07:10 pages_to_scan
-r--r--r-- 1 root root 4096 Nov  9 07:10 pages_unshared
-r--r--r-- 1 root root 4096 Nov  9 07:10 pages_volatile
-rw-r--r-- 1 root root 4096 Nov  9 07:10 run
-rw-r--r-- 1 root root 4096 Nov  9 07:10 sleep_millisecs
- Check whether the KSM feature is turned on:
cat /sys/kernel/mm/ksm/run
If the command returns 0, turn KSM on with
echo 1 > /sys/kernel/mm/ksm/run
- Now run several VM Guests under KVM and inspect the content of the files pages_sharing and pages_shared, for example:
while [ 1 ]; do cat /sys/kernel/mm/ksm/pages_shared; sleep 1; done
13522
13523
13519
13518
13520
13520
13528
For more information on the meaning of the /sys/kernel/mm/ksm/* files, see /usr/src/linux/Documentation/vm/ksm.txt (package kernel-source).
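Values written to /sys/kernel/mm/ksm do not survive a reboot. A hedged sketch of how you might enable KSM persistently and adjust its scan rate (the numbers are illustrative, not from the original text) is to append lines like these to a boot script such as /etc/rc.d/rc.local:
# enable KSM at boot and scan 200 pages every 100 ms (illustrative values)
echo 1 > /sys/kernel/mm/ksm/run
echo 200 > /sys/kernel/mm/ksm/pages_to_scan
echo 100 > /sys/kernel/mm/ksm/sleep_millisecs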
(Source: SUSE Documentation, Virtualization with KVM, "Managing Virtual Machines with QEMU")
--------------------
Installing qemu-kvm on Debian; installing Gentoo inside qemu-kvm
Bridging: bridging puts the guest OS (the OS inside KVM) and the host OS (here, Debian) on the same LAN, so the guest can communicate with the other PCs on that LAN.
1. Load the kvm modules
# modprobe kvm
# modprobe kvm_amd    # for Intel CPUs use kvm_intel
Because this experiment runs inside VMware, the hardware (CPU) is itself virtualized, so the command above may fail; the error was simply ignored here.
2. Install the qemu tools
# apt-get install qemu-kvm
The installation reported an error at the end, which was again ignored. (qemu is the user-space management tool for KVM; without the qemu-kvm package the related commands are not available.)
3. Install the bridging tools
# apt-get install bridge-utils
# apt-get install uml-utilities    # needed for the tunctl command
4. Configure the network
# vi /etc/network/interfaces
# edit the file so it looks like this:
auto lo
iface lo inet loopback
auto br0
iface br0 inet static    # or dhcp
bridge_ports eth0
address 192.168.1.39
netmask 255.255.255.0
gateway 192.168.1.6
#bridge_stp off
#bridge_maxwait 0
#bridge_fd 0    # (the purpose of these three lines was unclear, so they were commented out)
5. Restart the network
# /etc/init.d/networking restart
6. Load the TUN/TAP module and set up the bridge port
# modprobe tun
# tunctl                          # creates a virtual NIC tapX (X is 0, 1, 2, ...; tap0 is used here)
# brctl addif br0 tap0            # add the tapX created above to the bridge br0
# ifconfig tap0 promisc up        # bring up tapX and set it to promiscuous mode
7. Create a virtual disk and install the system
# kvm-img create disk.img 4G      # on a real (non-virtualized) system the command may be qemu-img
# kvm -cdrom xp.iso -hda disk.img -boot d    # boot without network from xp.iso and install the system
Or use the following command to install with networking (for non-DHCP setups configure IP, DNS, and gateway yourself):
# kvm -cdrom xp.iso -hda disk.img -net nic,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no -boot d
This brings up the installer; configure the network (IP, DNS, gateway) as needed.
After entering the KVM guest (Gentoo here) and configuring its network, the guest can reach the Internet (192.168.1.6 is the Windows 7 machine acting as the gateway for the other PCs), and baidu.com can be pinged from inside the guest.
8. Boot the virtual machine from the installed disk.img with bridged networking enabled
# kvm -net nic,macaddr=00:00:00:00:00:01 -net tap,ifname=tap0,script=no,downscript=no disk.img
Note: tapX corresponds to the tapX created in step 6. To create several guest OSes, repeat steps 6, 7, and 8 (a helper sketch follows below). qemu-kvm has many more parameters, for example for running a VM in the background or connecting via VNC; consult the documentation, e.g. man.
Summary: it used to be said that a virtual machine cannot run inside another virtual machine; this experiment shows that it can, if only KVM inside VMware.
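To avoid repeating steps 6-8 by hand for every additional guest, here is a minimal, hedged helper sketch. The script name, memory size, and MAC prefix are illustrative assumptions, not part of the original post.
#!/bin/bash
# usage: ./start-guest.sh <N> <disk image>   (illustrative helper)
N=$1
IMG=$2
modprobe tun
TAP="tap$N"
tunctl -t $TAP                      # create a named tap interface for this guest
brctl addif br0 $TAP                # attach it to the existing bridge br0
ifconfig $TAP promisc up            # bring it up in promiscuous mode
kvm -m 512 -hda "$IMG" \
    -net nic,macaddr=00:00:00:00:00:0$N \
    -net tap,ifname=$TAP,script=no,downscript=no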
-----------
How to use KVM (Kernel-based Virtual Machines)
Environment: Ubuntu 9.10 Server x86_64
1. Installing KVM
First, confirm that the CPU supports the virtualization instruction set. Open a terminal and run:
egrep '(vmx|svm)' --color=always /proc/cpuinfo
If the output contains a highlighted vmx or svm string, the CPU supports it.
Run:
sudo apt-get update && sudo apt-get upgrade
sudo aptitude install kvm libvirt-bin ubuntu-vm-builder bridge-utils
# A Postfix prompt may appear during installation; Internet Site is fine.
When the installation is complete, run:
sudo adduser `id -un` libvirtd
This adds the currently logged-in user to the libvirtd group, which is required to run virsh commands. When done, log out and log back in so the change takes effect.
Run:
virsh -c qemu:///system list
If the command prints its output successfully, the installation succeeded.
×
2. Setting up a network bridge
# The following steps may not be possible to perform remotely, because they interrupt and restart the network.
Install the bridging package:
sudo apt-get install bridge-utils
# Note: the following command cuts the network; SSH sessions will be disconnected!
To make sure nothing interferes during the configuration, stop the networking service first:
sudo invoke-rc.d networking stop
Then edit the interfaces file:
sudo vim /etc/network/interfaces
Edit its contents to look similar to this:
# The following is the Private Network setup for the TKG grid; adjust it to your needs.
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
address 192.168.100.<host number>
network 192.168.100.0
netmask 255.255.255.0
broadcast 192.168.100.255
gateway 192.168.100.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
After saving and exiting, restart the network:
sudo /etc/init.d/networking restart
Once the network is up again, try logging in over SSH, and test by running an update or pinging a DNS server.
The default values for virtual machines can be found in the /etc/vmbuilder/libvirt/libvirtxml.tmpl file.
×
3. Installing the first virtual machine
KVM currently has three mainstream management tools:
virt-manager: a graphical management tool that can be installed on a Linux machine with X Window.
virt-install: a text-mode management tool written in Python, developed by Red Hat.
ubuntu-vm-builder: a text-mode management tool developed by Canonical.
virt-install offers the most flexibility, so it is used for the management operations here. Install it first:
sudo apt-get install python-virtinst
To see all virt-install parameters, run:
virt-install --help
To see the full usage documentation, run:
man virt-install
Run:
virt-install \
--connect qemu:///system \
--name=<VM name> \
--ram=<memory in MB> \
--os-type=<OS type, e.g. linux> \
--os-variant=<OS variant, e.g. ubuntujaunty> \
--hvm            [full virtualization; choose either hvm or paravirt, see the appendix] \
--paravirt       [paravirtualization; choose either hvm or paravirt, see the appendix] \
--accelerate     [KVM accelerator] \
--cdrom=<path to the installation ISO, e.g. *.iso> \
--file=<path to the virtual disk, e.g. *.qcow2> \
--file-size=<virtual disk size in GB> \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug
A complete example looks like this:
virt-install \
--connect qemu:///system \
--name=imVM \
--ram=1024 \
--os-type=linux \
--os-variant=ubuntujaunty \
--hvm \
--accelerate \
--cdrom=~/ubuntu-9.04.iso \
--file=~/imVM.qcow2 \
--file-size=8 \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug
Once the command has completed successfully, the virtual machine exists. Its definition file is:
/etc/libvirt/qemu/<VM name>.xml
To boot the new virtual machine for the first time, run:
virsh
virsh # start <VM name>
virsh # list --all
virsh # quit
Once the virtual machine is running, go to a machine with an X Window environment and run:
sudo apt-get install virt-viewer
virt-viewer --connect qemu+ssh://<user>@<KVM host address>/system <VM name>
After logging in successfully, the remote virtual machine screen appears. Perform a normal operating system installation, then shut the guest down.
Enter virsh again to boot it, and then test an SSH connection from another machine. If there are problems, use virt-viewer to check the state of the virtual machine.
×
4. Installing a new virtual machine from an existing virtual disk file
Run:
virt-install \
--connect=qemu:///system \
--name=<new VM name> \
--ram=<new VM memory in MB> \
--os-type=<OS type> \
--os-variant=<OS variant> \
--accelerate \
--file=<path to the existing virtual disk, e.g. *.qcow2> \
--bridge=br0 \
--vnc \
--noautoconsole \
--debug \
--import
5. Cloning a virtual machine
Run:
virt-clone \
--connect=qemu:///system \
-o <old VM name> \
-n <new VM name> \
-f <path to the new virtual disk, e.g. *.qcow2>
6. Managing virtual machines
Run:
virsh
virsh # help                                                          # list all available commands
virsh # dumpxml <VM name> /tmp/<VM definition file, e.g. *.xml>       # dump a VM definition file
virsh # define /etc/libvirt/qemu/<VM definition file, e.g. *.xml>     # create a VM from a definition file
virsh # undefine <VM name>                                            # remove a VM
virsh # list --all                                                    # list all VMs
virsh # start <VM name>                                               # start a VM
virsh # shutdown <VM name>                                            # shut down a VM
virsh # destroy <VM name>                                             # pull the plug on a VM
Appendix:
Full Virtualization specific options
Parameters specific only to fully virtualized guest installs.
--sound
Attach a virtual audio device to the guest.
--noapic
Override the OS type / variant to disable the APIC setting for the fully virtualized guest.
--noacpi
Override the OS type / variant to disable the ACPI setting for the fully virtualized guest.
Virtualization Type options
Options to override the default virtualization type choices.
-v, --hvm
Request the use of full virtualization, if both para- and full virtualization are available on the host. This parameter may not be available if connecting to a Xen hypervisor on a machine without hardware virtualization support. This parameter is implied if connecting to a QEMU-based hypervisor.
-p, --paravirt
This guest should be a paravirtualized guest. If the host supports both para- and full virtualization, and neither this parameter nor "--hvm" is specified, this will be assumed.
--accelerate
When installing a QEMU guest, make use of the KVM or KQEMU kernel acceleration capabilities if available. Use of this option is recommended unless a guest OS is known to be incompatible with the accelerators. The KVM accelerator is preferred over KQEMU if both are available.
×
Reference:
01. https://help.ubuntu.com/community/KVM
02. http://thundersha.blogspot.com/2008/07/ubuntu-kvmgui-sector2.html
03. http://www.boobooke.com/v/bbk1819/
04. http://southbrain.com/south/2009/08/youtube-examples-of-xvm-virtin.html
--------------------------------
A simple KVM virtualization tutorial
http://forum.ubuntu.org.cn/viewtopic.php?f=65&t=154792
After working with KVM for a long time, it feels like the fastest virtual machine I have used. Compared with the common alternatives: VMware is full-featured and easy to set up, but not very fast; VirtualBox is somewhat more efficient than VMware but tends to hog the CPU, and although it now supports SMP and hardware virtualization, its overall efficiency is still below KVM's (its graphics performance is good, though). KVM (Kernel-based Virtual Machine), the kernel-based virtual machine, is the fastest virtual machine I have used; it requires CPU virtualization support with the virtualization option enabled in the BIOS, reaches more than 80% of bare-metal performance, and handles SMP very well. So KVM is strongly recommended here.
(The disk configuration method has been updated, please take note!)
Without further ado, the following are the steps on Ubuntu 8.04.4 64-bit.
Getting KVM:
The KVM website: http://sourceforge.net/projects/kvm/files/
Download the latest qemu-kvm-0.12.4.tar.gz
Extract it and do the standard three-step build.
Next, how to configure bridged networking for KVM. \\ Note: most wireless cards cannot be bridged, only PCI NICs.
Install the bridging tools.
How to use KVM:
For the specifics of using KVM, refer to the documentation.
Create a virtual disk (with the qemu-img command), then boot the installer. Isn't it fast? On a capable machine, Windows XP installs in about 15 minutes.
Starting an installed virtual machine is simple: just change two parameters in the command used above.
If you run several guest OSes at the same time, adjust the network configuration by adding more tap interfaces in /etc/network/interfaces; each guest OS uses its own TAP device. For example, to run 3 guest OSes at once, add three tap stanzas to the configuration file.
Note that after a reboot the kvm kernel modules must be reloaded.
Linux can be installed the same way. Once installed, compare for yourself: it is far more pleasant than VirtualBox or VMware.
For other topics, such as USB passthrough, see the forum posts.
On my system I have run 4 CentOS 4.8 guests, 1 Windows XP SP3, 1 Windows 2003 SP2, and 5 FreeBSD 8.0 guests at the same time; the speed is hard to believe.
System configuration: Athlon X2 5000+, 8 GB RAM, running Ubuntu 8.04.4 64-bit.
Installing and using KVM is actually simple and convenient; the key is understanding what each KVM parameter means, and above all getting the bridged network setup right. Read the software's own documentation, it helps a lot. The above is the most basic way to run KVM; read the documentation to master more flexible features.
BTW:
The cause of the poor disk performance seen earlier has been found: the old way was to attach the virtual disk with -hda disk.img. With newer versions, use -drive file=/home/lm/kvm/winxp.img,cache=writeback to attach the virtual disk instead; KVM users please note this change (an illustrative command sequence is sketched below).
The -hda / -hdb parameters are mainly for using a partition of a physical disk.
Note for Ubuntu 10.04 LTS (qemu-kvm 0.12.3): it can be installed directly from the repositories.
One more caveat: if you are virtualizing Windows 2003, do not use model=e1000 in the -net parameter, otherwise the host and guest will not be able to ping each other.
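As a hedged illustration of the workflow described above (the disk size, memory size, and ISO name are placeholders; only the image path comes from the original post): create a disk image, install from an ISO, then boot the installed guest with writeback caching:
# create a 10 GB qcow2 virtual disk (illustrative size)
qemu-img create -f qcow2 /home/lm/kvm/winxp.img 10G
# first boot: install from the CD image
kvm -m 1024 -drive file=/home/lm/kvm/winxp.img,cache=writeback -cdrom winxp.iso -boot d
# subsequent boots: start the installed system from the virtual disk
kvm -m 1024 -drive file=/home/lm/kvm/winxp.img,cache=writeback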
------------
A couple of assumptions are made for this work:
There’s a simple init.d/ script included that can be used to auto-start VMs when the host system boots. Just symlink config files into /etc/kvm/auto/ and then add the script to whichever runlevel you want. Just be sure to set it to start after networking is up.
To see a blurb that describes all the options:
brctl addbr br0      # create a bridge interface; to add a NIC to this bridge later you only need:
brctl addif br0 eth0 # add eth0 to br0 (important)
My network configuration file /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback
#auto eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet dhcp
bridge_ports eth0
The main idea: configure the br0 interface the same way the previously working interface (here eth0) was configured, disable the old interface's configuration (the eth0 stanza is commented out), and finally add "bridge_ports eth0" to the br0 stanza.
Note: for a static address, adjust the corresponding settings accordingly.
#!/bin/sh
# A quick way to try KVM
# path to the KVM binary
KVM_CMD=/usr/bin/kvm
QEMU_IFUP=/etc/init.d/qemu-ifup
HOST=172.16.70.3
# defaults
MEM=512   # memory
TAP=2     # which NIC (0 < TAP < 10)
# your disk image
DISK=
# all other KVM parameters
OTHER=
test -n "$1" && TAP=$1
test -n "$DISK" && HDA="-hda $DISK" || HDA="-hda $2"
shift 2
OTHER=$@
# variables that depend on other variables must be set last
NET="-net nic,macaddr=32:32:32:32:32:3$TAP -net tap,ifname=tap$TAP,script=$QEMU_IFUP"
VNC="-vnc $HOST:$TAP"
RUN_CMD="$KVM_CMD -m $MEM $HDA $NET $VNC -localtime --daemonize $OTHER"
echo "Running: $RUN_CMD"
$RUN_CMD
if test $? = 0; then
    echo "KVM started successfully; connect with VNC to $HOST:$TAP ..."
    exit 0
else
    echo "KVM failed to start; please check the command line for errors!"
    exit 1
fi
The /etc/init.d/qemu-ifup script used above looks like this:
#!/bin/bash
switch=br0
if [ -n "$1" ];then
    /sbin/ip link set $1 up
    sleep 0.5s
    /usr/sbin/brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
The uqkvm script is used as follows:
./uqkvm 3 GreenOS.img -cdrom /data/lab/LessWatts/GTGS-lesswatts_xfce-201004201555.iso -boot d
If you only need to boot the system:
# ./uqkvm 3 GreenOS.img
Running: /usr/bin/kvm -m 512 -hda GreenOS.img -net nic,macaddr=32:32:32:32:32:33 -net tap,ifname=tap3,script=/etc/init.d/qemu-ifup -vnc 172.16.70.3:3 -localtime --daemonize
pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"
KVM started successfully; connect with VNC to 172.16.70.3:3 ...
Now check the bridge devices:
$ sudo brctl show
bridge name    bridge id           STP enabled    interfaces
br0            8000.0001028c5009   no             eth0
                                                  tap3
You can see that tap3 and eth0 are both attached to br0.
On a Red Hat style system, the corresponding configuration files look like this:
# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.70.30
NETMASK=255.255.252.0
GATEWAY=172.16.68.1
TYPE=Bridge
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# 3Com Corporation 3c905B 100BaseTX [Cyclone]
DEVICE=eth0
#BOOTPROTO=dhcp
#HWADDR=00:01:02:8C:50:09
ONBOOT=yes
BRIDGE=br0
Add a bridge interface and attach the eth0 configured above to it:
brctl addbr br0      # create a bridge interface
brctl addif br0 eth0 # add eth0 to br0 (important)
Use a script such as the /etc/init.d/qemu-ifup shown earlier to attach each tap interface to the bridge when a guest starts.
# qemu-kvm -m 512 -hda turbo-10.5.5-rc2.img -kernel vmlinuz \
  -initrd initrd.img -net nic -net tap,script=/etc/init.d/qemu-ifup --daemonize
You can also install from an ISO; for distributions that span several ISOs you will need to change discs in the QEMU console:
# qemu-kvm -m 512 -hda turbo-10.5.5-rc2.img -cdrom <path to your ISO> \
  -net nic -net tap,script=/etc/init.d/qemu-ifup -boot d --daemonize
To enter the QEMU console, press Ctrl+Alt+2 while the mouse focus is on the QEMU window:
(qemu) change cdrom <the next ISO>
If the command above reports "device not found", the device mapping may differ; in that case do:
(qemu) info block
...
(qemu) change ide1-cd0 <the ISO file>
# launch a QEMU instance (note: the mcast address selected is UML's default)
qemu linux.img -net nic,macaddr=52:54:00:12:34:56 -net socket,mcast=239.192.168.1:1102
# launch UML
/path/to/linux ubd0=/path/to/root_fs eth0=mcast
A disk image can be created with the following command:
qemu-img create myimage.img mysize
Here myimage.img is the file name of the disk image and mysize is its size in kilobytes; use the M or G suffix to give the size in megabytes or gigabytes instead.
qemu-img supports the following commands:
`create [-e] [-b base_image] [-f fmt] filename [size]'
`commit [-f fmt] filename'
`convert [-c] [-e] [-f fmt] filename [-O output_fmt] output_filename'
`info [-f fmt] filename'
apt-get install qemu qemu-kvm
cd /home/zhiwei/kvm/
Create the disk; 5 GB is enough:
qemu-img create -f qcow2 squeeze.img 5G
Start the installation:
kvm -m 512 -hda squeeze.img -cdrom debian-testing-amd64-netinst.iso -boot d -vnc :1
Use a remote desktop viewer (vncviewer) to connect to 192.168.1.101:1.
kvm -m 512 -nographic -daemonize -hda squeeze.img -redir tcp:9527::22
-redir tcp:5222::5222
Port 5222 listens for c2s connections with STARTTLS, and also allows plain connections for old clients. Other ports are not mapped out of the guest.
Xen is another good virtualization environment, but KVM was chosen here because its performance is reportedly better.
First, install Ubuntu Server 12.04. The installation itself went smoothly, except for an ill-advised attempt to combine RAID and LVM that kept the boot loader from being written to disk; in the end the partition table had to be cleared, the disks repartitioned, and the RAID rebuilt.
Now for the main part.
1. Check whether the system supports KVM
egrep '(vmx|svm)' /proc/cpuinfo
If vmx or svm shows up, the CPU supports hardware virtualization.
kvm-ok
If the output looks normal, the motherboard supports it as well; if it reports a problem, enable virtualization in the motherboard settings.
2. Install KVM
sudo apt-get install kvm qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-manager virt-viewer
Then just wait for it to finish.
3. Install a basic desktop environment (to avoid typing everything on the command line)
sudo apt-get install x-window-system-core
sudo apt-get install gnome-core
That completes the installation; next comes the configuration.
1. Managing from Linux
Install virt-manager on another machine:
sudo apt-get install virt-manager virt-viewer
2. Managing from Windows
Install an X Window server on the other machine (XMing was used here), then install PuTTY. When connecting to the remote computer with PuTTY, enable X11 forwarding.
After connecting with PuTTY, type virt-manager, and PuTTY will automatically use XMing to display the window.
PS: If you prefer to manage virtual machines from the command line, here is a creation script; see the relevant manuals for details:
virt-install \
--connect qemu:///system \
--name=test \
--ram=512 \
--file=test.qcow2 \
--file-size=20 \
--network=network:default \
--cdrom=/vhdd/iso/ubuntu-12.04-server-amd64.iso \
--vnc \
--noautoconsole \
--os-type=linux \
--os-variant=ubuntuPrecise \
--accelerate
PSS: If you want to use NIC bridging instead of NAT, just change the interfaces file:
auto eth0
iface eth0 inet manual
auto virbr0
iface virbr0 inet static
address 192.168.3.249
gateway 192.168.3.254
netmask 255.255.255.0
broadcast 192.168.3.255
bridge_ports eth0
bridge_stp off
-----------
I do not issue any guarantee that this will work for you!
I will show how to install a CentOS 6.2 guest in this tutorial.
We also need an Ubuntu 12.04 LTS desktop so that we can connect to the graphical console of our KVM guests. It doesn’t matter if the desktop is installed on the Ubuntu 12.04 LTS KVM server or on a remote system (there are small differences if the desktop is installed on the KVM host compared to a remote desktop, but I will outline these differences, so read carefully).
Open a terminal and install virt-install:
sudo apt-get install virtinst
We need a means of connecting to the graphical console of our guests – we can use virt-viewer or virt-manager (see KVM Guest Management With Virt-Manager On Ubuntu 8.10) for this. I'm assuming that you're using an Ubuntu 12.04 desktop (it doesn't matter if it is a remote desktop or if the desktop is installed on the Ubuntu 12.04 KVM server!).
I suggest you use virt-manager instead of virt-viewer because virt-manager lets you also create and delete virtual machines and do other tasks. virt-manager can be installed as follows:
sudo apt-get install virt-manager
Now let’s go back to our Ubuntu 12.04 KVM host.
Take a look at
man virt-install
to learn how to use it.
We will create our image-based virtual machines in the directory /var/lib/libvirt/images/ which was created automatically when we installed KVM.
To create a CentOS 6.2 guest (in bridging mode) with the name vm10, 1024MB of RAM, two virtual CPUs, and the disk image /var/lib/libvirt/images/vm10.img (with a size of 12GB), insert the CentOS DVD into the CD drive and run
sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
Of course, you can also create an ISO image of the CentOS DVD (please create it in the /var/lib/libvirt/images/ directory because later on I will show how to create virtual machines through virt-manager from your Ubuntu desktop, and virt-manager will look for ISO images in the /var/lib/libvirt/images/ directory)…
sudo dd if=/dev/cdrom of=/var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso
… and use the ISO image in the virt-install command:
sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
The output is as follows:
administrator@server1:~$ sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
Starting install…
Allocating ’vm10.img’ | 12 GB 00:00
Creating domain… | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
administrator@server1:~$
The KVM guest will now boot from the CentOS 6.2 DVD and start the graphical CentOS installer – that’s why we need to connect to the graphical console of the guest. You can do this with virt-manager on the Ubuntu 12.04 desktop (see KVM Guest Management With Virt-Manager On Ubuntu 8.10).
Start Virtual Machine Manager (you can search for virt-manager in Unity to find it):
When you start virt-manager for the first time and no KVM is installed on your Ubuntu desktop (i.e., the KVM host is not identical to your desktop), you will most likely see the message "Could not detect a default hypervisor." You can ignore this because we don't want to connect to the local libvirt daemon, but to the one on our remote Ubuntu 12.04 KVM host.
In virt-manager, go to File > Add Connection... to connect to your Ubuntu 12.04 KVM host. In my virt-manager the File menu was missing, so I had to right-click the area just below the cross that closes the program; the menu then opened and I could select Add Connection...:
Select QEMU/KVM as Hypervisor. If the KVM host is identical to your desktop, you are done now and can click on Connect.
But if the KVM host is on a remote Ubuntu 12.04 server, then check Connect to remote host, select SSH from the Method drop-down menu, fill in root in the Username field, type in the hostname or IP address (192.168.0.100) of the Ubuntu 12.04 KVM host in the Hostname field, and click on Connect.
(Replace 192.168.0.100 with the IP address or hostname of the KVM host. Please note that the root account must be enabled on the KVM host, and that root logins must be allowed on the KVM host. To enable the root login on an Ubuntu system, run
sudo passwd root
To check if root logins are allowed check the directive PermitRootLogin in /etc/ssh/sshd_config – you might have to restart the SSH daemon afterwards. )
If this is the first connection to the remote KVM server, you must type in yes and click on OK:
Afterwards type in the root password of the Ubuntu 12.04 KVM host:
You should see vm10 as running. Mark that guest and click on the Open button to open the graphical console of the guest:
Type in the root password of the KVM host again:
You should now be connected to the graphical console of the guest and see the CentOS installer:
Now install CentOS as you would normally do on a physical system. Please note that at the end of the installation, the CentOS system needs a reboot. The guest will then stop, so you need to start it again, either with virt-manager or like this on the KVM host’s command line:
Ubuntu 12.04 KVM Host:
sudo virsh –connect qemu:///system
start vm10
quit
Afterwards, you can connect to the guest again with virt-manager and configure the guest. You can as well connect to it with an SSH client (such as PuTTY).
Instead of creating a virtual machine from the command line (as shown in chapter 4), you can as well create it from the Ubuntu desktop using virt-manager (of course, the virtual machine will be created on the Ubuntu 12.04 KVM host – in case you ask yourself if virt-manager is able to create virtual machines on remote systems).
To do this, click on the following button:
The New VM dialogue comes up. Fill in a name for the VM (e.g. vm11), select Local install media (ISO image or CDROM), and click on Forward:
Next select Linux in the OS type drop-down menu and Red Hat Enterprise Linux 6 in the Version drop-down menu, then check Use ISO image and click on the Browse… button:
Select the CentOS-6.2-x86_64-bin-DVD1.iso image that you created in chapter 4 and click on Choose Volume:
Now click on Forward:
Assign memory and the number of CPUs to the virtual machine and click on Forward:
Now we come to the storage. Check Enable storage for this virtual machine, select Create a disk image on the computer’s hard drive, specify the size of the hard drive (e.g. 12GB), and check Allocate entire disk now. Then click on Forward:
Now we come to the last step of the New VM dialogue. Go to the Advanced options section. Select Specify shared device name; the Bridge name field will then appear where you fill in the name of your bridge (if you have used the Virtualization With KVM On Ubuntu 12.04 LTS guide to set up the KVM host, this is br0). Click on Finish afterwards:
The disk image for the VM is now being created:
Afterwards, the VM will start. If you use a remote KVM host, type in the root password of the KVM host:
You should now be connected to the graphical console of the guest and see the CentOS installer:
Now install CentOS as you would normally do on a physical system.
The python-virtinst package comes with a second tool, virt-clone, that lets you clone guests. To clone vm10 and name the clone vm12 with the disk image /var/lib/libvirt/images/vm12.img, you simply run (make sure that vm10 is stopped!)
sudo virt-clone –connect qemu:///system -o vm10 -n vm12 -f /var/lib/libvirt/images/vm12.img
Afterwards, you can start vm12 with virt-manager or like this…
sudo virsh –connect qemu:///system
start vm12
quit
… and connect to it using virt-manager.
-----------------------------------
(Note: this post has been updated to use the -drive disk syntax, please take note!)
Without further ado, here is the procedure on Ubuntu 8.04.4 64-bit.
Getting KVM:
The KVM website: http://sourceforge.net/projects/kvm/files/
Download the latest qemu-kvm-0.12.4.tar.gz.
Unpack it:
Code:
tar -xzvf qemu-kvm-0.12.4.tar.gz
Required packages:
Code:
sudo apt-get install gcc libsdl1.2-dev zlib1g-dev libasound2-dev
linux-kernel-headers pkg-config libgnutls-dev libpci1 pciutils-dev
On Ubuntu 10.04 you can instead use
Code:
sudo apt-get build-dep qemu-kvm
to resolve the dependencies. Then the classic three steps:
Code:
cd qemu-kvm-0.12.4
./configure --prefix=/usr/local/kvm
make
sudo make install
Once the installation is done, load the KVM modules:
Code:
sudo modprobe kvm
sudo modprobe kvm-intel // use this if you have an Intel processor
sudo modprobe kvm-amd // use this if you have an AMD processor
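To confirm that the module actually loaded, a quick check (not part of the original post) is:
Code:
lsmod | grep kvm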
That's it, KVM is now installed.
Next, how to configure bridged networking for KVM: \\ note that most wireless NICs cannot be bridged, only wired (PCI) NICs can.
Install the bridging tools:
Code:
sudo apt-get install bridge-utils
Install the tool for creating TAP interfaces:
Code:
sudo apt-get install uml-utilities
Edit the network interface configuration file (
Code:
sudo vi /etc/network/interfaces
) and add the following, adapted to your own situation:
Code:
auto eth0
iface eth0 inet manual

auto tap0
iface tap0 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
tunctl_user lm \\ lm is my username, replace it with yours

auto br0
iface br0 inet static \\ you could also use DHCP here
bridge_ports eth0 tap0
address 192.168.1.3
netmask 255.255.255.0
gateway 192.168.1.1
Bring up tap0 and br0: // sometimes this does not take effect right away, but it works after a reboot
Code:
sudo /sbin/ifup tap0
sudo /sbin/ifup br0
Once that is done, run ifconfig: you will see the new tap0 and br0 interfaces, and the IP address on br0 is your host machine's IP address.
Using KVM:
For the details of how to use KVM, see
Code:
/usr/local/kvm/bin/qemu-system-x86_64 --help
Here are a few concrete examples. First, create a virtual disk (with the qemu-img command):
Code:
mkdir kvm
cd kvm
/usr/local/kvm/bin/qemu-img create -f qcow2 winxp.img 10G
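If you want to double-check the freshly created image, qemu-img can report its format and size, for example:
Code:
/usr/local/kvm/bin/qemu-img info winxp.img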
Create the virtual machine:
Code:
sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 512 -drive
file=/home/lm/kvm/winxp.img,cache=writeback -localtime -net
nic,vlan=0,macaddr=52-54-00-12-34-01 -net
tap,vlan=0,ifname=tap0,script=no -boot d -cdrom /home/lm/iso/winxp.iso
-smp 2 -soundhw es1370
Here is an explanation of the individual parameters:
Code:
-m 512
Allocates 512 MB of memory to the guest OS.
Code:
-drive file=/home/lm/kvm/winxp.img,cache=writeback
The file and path of the virtual disk to use, with the writeback cache enabled.
Code:
-localtime
Use local time (be sure to add this parameter, otherwise the guest's clock will be wrong).
Code:
-net nic,vlan=0,macaddr=52-54-00-12-34-01 -net tap,vlan=0,fd=h,ifname=tap0,script=no
Enables networking and connects the guest to the existing network device tap0. Be sure to make up a MAC address of your own, especially if you run several guests at the same time, otherwise their MAC addresses will conflict. Under KVM-87, drop the fd=h part.
Code:
-boot d
Boot from the CD-ROM (to boot from the hard disk, use -boot c instead).
Code:
-cdrom /home/lm/iso/winxp.iso
The CD-ROM image to use; to use the physical drive instead, specify -cdrom /dev/cdrom.
Code:
-smp 2
Sets the number of SMP processors to 2; if you have a quad-core CPU, use 4 (without this option the guest runs in single-core mode only). Now boot the guest and start installing the system. Fast, isn't it? On a capable machine, XP is installed in roughly 15 minutes.
Start the installed virtual machine (simple: just change two parameters in the command above):
Code:
sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 512 -drive
file=/home/lm/kvm/winxp.img,cache=writeback -localtime -net
nic,vlan=0,macaddr=52-54-00-12-34-01 -net
tap,vlan=0,ifname=tap0,script=no -boot c -smp 2 -soundhw es1370
Then set up the IP address inside the guest and it is ready to use. KVM's emulated graphics are not great, though; you can work around this by connecting remotely with rdesktop:
Code:
rdesktop 192.168.1.4:3389 -u administrator -p ****** -g 1280x750 -D -r sound:local \\ set the resolution as you like; isn't this nicer than VirtualBox's seamless mode?
Addendum: if you run several guest OSes at the same time, the network configuration needs a small change. Just add a few more tap interfaces in /etc/network/interfaces, one TAP per guest. For example, to run three guests at the same time the configuration file would look like this:
Code:
auto tap0
iface tap0 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
tunctl_user lm \\ lm is my username, replace it with yours
auto tap1
iface tap1 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
tunctl_user lm \\ lm is my username, replace it with yours
auto tap2
iface tap2 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
tunctl_user lm \\ lm is my username, replace it with yours
auto br0
iface br0 inet static \\ you could also use DHCP here
bridge_ports eth0 tap0 tap1 tap2
address 192.168.1.3
netmask 255.255.255.0
gateway 192.168.1.1
Start the guest OS:
Code:
sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 512 -drive
file=/home/lm/kvm/winxp.img,cache=writeback -localtime -net
nic,vlan=0,macaddr=52-54-00-12-34-01 -net
tap,vlan=0,ifname=tap0,script=no -boot c -smp 2 -clock rtc -soundhw
es1370
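For instance, a second guest attached to tap1 with its own MAC address (the image path and MAC below are made up for illustration) could be started like this, mirroring the change described next:
Code:
sudo /usr/local/kvm/bin/qemu-system-x86_64 -m 512 -drive file=/home/lm/kvm/winxp2.img,cache=writeback -localtime -net nic,vlan=0,macaddr=52-54-00-12-34-02 -net tap,vlan=0,ifname=tap1,script=no -boot c -smp 2 -soundhw es1370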
Just change ifname=tap0 to tap1 or tap2 as required, and change the MAC address as well. Note that after a system reboot you have to reload the KVM kernel modules:
Code:
sudo modprobe kvm
sudo modprobe kvm-amd // if you use an AMD processor
sudo modprobe kvm-intel // if you use an Intel processor
Of course, you can also adjust your system configuration so that the modules are loaded automatically at boot.
The same method can be used to install Linux guests. Once installed, compare for yourself: isn't it much nicer than VirtualBox or VMware?
For other topics, such as USB passthrough, see the posts on the forum.
On my system I am simultaneously running 4 CentOS 4.8 guests, 1 Windows XP SP3, 1 Windows 2003 SP2 and 5 FreeBSD 8.0 guests.
The speed is unbelievably good.
System configuration: Athlon X2 5000+, 8 GB RAM, running Ubuntu 8.04.4 64-bit.
Installing and using KVM is actually quite easy; the key is to understand what the individual KVM parameters mean. The most critical part is the bridged network setup; reading the software's own documentation will help a lot here.
The above is the most basic way to get KVM running; read the documentation to master more advanced and flexible features.
BTW:
I have now found the cause of the poor disk performance I saw earlier: the old way was to attach the virtual disk with -hda disk.img, but after the version update you should use -drive file=/home/lm/kvm/winxp.img,cache=writeback instead. KVM users, please take note of this change.
The -hda / -hdb parameters are mainly useful for passing a partition of a physical hard disk to the guest.
Note: on Ubuntu 10.04 LTS (qemu-kvm 0.12.3) the installation is simply:
Code:
sudo apt-get install qemu-kvm
The network configuration is the same as above. One caveat: if the guest is Windows 2003, do not use model=e1000 in the -net parameter, otherwise the host and the guest will not be able to ping each other.
------------
kvmctl to manage KVM-based VPS
Intro
I have written a utility called kvmctl to manage KVM-based VMs, along with a configuration file format, and other associated utilities. Feel free to use it, or to expand on it. While not strictly required by the license, I would appreciate knowing if you use it, and would appreciate being credited if you expand upon it.
A couple of assumptions are made for this work:
- all the VMs have unique host names.
- all the VMs have config files in /etc/kvm that are named <host>.kvm. These are shell script fragments that initialise variables related to the VM.
- all the VMs are given unique numeric identifiers between 00 and 99. This is used for the VNC port number and the last 2 digits of the virtual MAC address.
- there is a single bridge configured for all VMs to use, and all VMs will use bridged networking
There’s a simple init.d/ script included that can be used to auto-start VMs when the host system boots. Just symlink config files into /etc/kvm/auto/ and then add the script to whichever runlevel you want. Just be sure to set it to start after networking is up.
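For example, on a Debian-style host this could look roughly like the following (the init script name kvmctl is an assumption; use whatever name the tarball ships):
sudo ln -s /etc/kvm/webmail.kvm /etc/kvm/auto/webmail.kvm
sudo update-rc.d kvmctl defaults 99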
Download
Version 2.0.4
Stable and working. Auto-shutdown doesn’t work yet, as I haven’t found a way to send a “powerdown” event to the guest OS to tell it to initiate a clean shutdown.
- Download the tarball here (rename to .tbz2): kvmctl-2.0.4.jpg
Version 2.1.0
I’ve added the option to start a monitor port as a telnet server, and added a shutdown command. Since Debian has removed bash TCP support, netcat (nc) is needed to send a power-button press to the virtual machine’s monitor. So if you want to test the shutdown, start the machine with a monitor and install nc. This version includes some other changes and has not been thoroughly tested.
- Download the tarball here: kvmctl-2.1.0.tar.gz
Note: The tarball for 2.1.0 is gone, nobody has a copy, don't report this issue. The above paragraph is here strictly for people who already have kvmctl 2.1.0 and for historical purposes.
Usage
The script can be run as a normal user. It uses sudo internally for the start/stop commands (all the rest are run as the normal user). Currently, all kvm processes are run as root, as this was developed on Debian Lenny which (for whatever reason) decided to include kernel capabilities that prevent non-root users from accessing tun devices.
To see a blurb that describes all the options:
# kvmctl help
kvmctl 2.0.4 Licensed under BSDL Copyright 2009
kvmctl is a management and control script for KVM-based virtual machines.
Usage:
kvmctl start host - start the named VM
kvmctl startvnc host - start the named VM, and then connect to console via VNC
kvmctl stop host - stop the named VM (only use if the guest is hung)
kvmctl restart host - stop and then start the named VM (only use if the guest is hung)
kvmctl vnc host - connect via VNC to the console of the named VM
kvmctl whichvnc host - show which VNC display port is assigned to the named VM
kvmctl killvnc host - kills any running vncviewer processes attached to the named VM
kvmctl edit host - open config file for host using $EDITOR, or create a new config file based on a template
kvmctl status - show the names of all running VMs
kvmctl status kvm - show full details for all running kvm processes
kvmctl status host - show full details for the named kvm process
kvmctl help - show this usage blurb
** Using stop is the same as pulling the power cord on a physical system. Use with caution.
To start a VM named webmail:
# kvmctl start webmail
Starting webmail.
The VNC port for webmail is :05
To start a VM named webmail, and then immediately attach to the console via VNC:
# kvmctl startvnc webmail
Starting webmail.
The VNC port for webmail is :05
<vncviewer is started>
To check the status of all running VMs (just outputs the name of the running VMs):
# kvmctl status
The following VMs are running:
fcsync webmail
To see the process info for all the running VMs:
# kvmctl status kvm
The following VMs are running:
3792 /usr/bin/kvm -name fcsync -daemonize -localtime -usb -usbdevice tablet -smp 1 -m 1048 -vnc :02 -pidfile /var/run/kvm/fcsync.pid -net nic,macaddr=00:16:3e:00:00:02,model=rtl8139 -net tap,ifname=tap02 -boot c -drive index=0,media=disk,if=ide,file=/dev/mapper/vol0-fcsync
5123 /usr/bin/kvm -name webmail -daemonize -localtime -usb -usbdevice tablet -smp 2 -m 2048 -vnc :05 -pidfile /var/run/kvm/webmail.pid -net nic,macaddr=00:16:3e:00:00:05,model=e1000 -net tap,ifname=tap05 -boot c -drive index=1,media=disk,if=scsi,file=/dev/mapper/vol0-webmail--storage -drive index=0,media=disk,if=ide,file=/dev/mapper/vol0-webmail
To see the process info for a specific VM:
# kvmctl status webmail
VM for host webmail is running with:
5123 /usr/bin/kvm -name webmail -daemonize -localtime -usb -usbdevice tablet -smp 2 -m 2048 -vnc :05 -pidfile /var/run/kvm/webmail.pid -net nic,macaddr=00:16:3e:00:00:05,model=e1000 -net tap,ifname=tap05 -boot c -drive index=1,media=disk,if=scsi,file=/dev/mapper/vol0-webmail--storage -drive index=0,media=disk,if=ide,file=/dev/mapper/vol0-webmail
To “pull the power cord” of a running VM:
# kvmctl stop webmail
Attempting to stop VM for webmail
VM for webmail has stopped
To “powercycle” a running VM:
# kvmctl restart webmail
Attempting to stop VM for webmail
VM for webmail has stopped
Starting webmail.
The VNC port for webmail is :05
To see which VNC port has been assigned to a VM:
# kvmctl whichvnc webmail
The VNC port for webmail is :05
To connect to the VNC port of a VM (requires vncviewer installed on the host):
# kvmctl vnc webmail
<vncviewer is started>
To create a new config file for a VM:
# kvmctl edit newvm
/etc/kvm/test.kvm does not exist. Would you like to create one from the template? (y/n)
<if yes, $EDITOR is opened with the template loaded>
To edit an existing config file:
# kvmctl edit webmail
<$EDITOR is opened with /etc/kvm/webmail.kvm loaded.>
kvmctl 2.0 config file format
# kvmctl Version: 2.0.0
# The name of the VM must be unique across all VMs running on this server
# This is not actually used anywhere, instead kvmctl uses the filename without
# the .kvm as the host name.
# host="hostname"
# An ID number for the VM.
# This is used to generate the MAC address of the virtual NIC, the tap device in the host, and
# the VNC port for the VM's console.
id="##"
# How much RAM to associate with the VM.
# This is the max amount of RAM that it will use.
mem="1024"
# Whether to enable ACPI support in the virtual BIOS
# Default is to enable ACPI
# noacpi cannot be set if cpus > 1.
noacpi=""
# The number of virtual CPUs to assign to the VM.
# Stable values are 1-4.
# cpus must be set to 1 if noacpi is set.
cpus="1"
# Which mouse device to use
# Values: mouse, tablet
# Default: tablet
mouse="tablet"
# The network chipset to use in the VM.
# Values: rtl8139, e1000
# Default: rtl8139
nic="rtl8139"
# Which virtual block device to boot from
# Values: a=floppy0, b=floppy1, c=disk0, d=disk1
# Default: c
boot="c"
# If the VM is set to boot from "d" and "d" is a CD-ROM, an extra '-no-reboot'
# option is added to the kvm commandline. This will cause the VM to treat a
# "reboot" command as if it were a "shutdown" command.
# Values for disktype: ide, scsi, virtio
# Default for disktype: ide
# If the value for disktype0 is scsi or virtio, an extra ',boot=on' option will
# be added to the kvm commandline. This is needed in order to boot from SCSI
# and paravirtualised block devices.
# Values for media: disk, cdrom
# Default for media: disk
# Values for disk: a path to either a disk image file, or an LVM logical volume
# Default for disk: /dev/mapper/vol0-${host}
# The first virtual block device
# For IDE devices, this is primary master.
disktype0="ide"
media0="disk"
disk0="/path/to/diskimage"
# The second virtual block device
# For IDE devices, this is primary slave.
disktype1=""
media1=""
disk1=""
# The third virtual block device
# For IDE devices, this is secondary master
# USE THIS FOR CD-ROMS OR PERFORMANCE WILL SUFFER GREATLY!!
disktype2="ide"
media2="cdrom"
disk2="/path/to/osinstall.iso"
# The fourth virtual block device
# For IDE devices, this is secondary slave
disktype3=""
media3=""
disk3=""
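For illustration, a minimal config such as /etc/kvm/webmail.kvm might contain only the values that differ from the defaults (the numbers below are taken from the webmail example output earlier; they are illustrative, not part of the original tarball):
id="05"
mem="2048"
cpus="2"
nic="e1000"
boot="c"
disktype0="ide"
media0="disk"
disk0="/dev/mapper/vol0-webmail"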
----------
KVM virtual machines and qemu
KVM
Installation
Ubuntu
[Test environment: Ubuntu 10.04] Install kvm and the NIC bridging tools:
sudo aptitude install kvm qemu bridge-utils uml-utilities
Configuring the bridged network
First create the br0 device:
brctl addbr br0 # create a bridge interface
To add a NIC to this bridge later you only need:
brctl addif br0 eth0 # add eth0 to br0, important
My network configuration file /etc/network/interfaces looks like this:
auto lo
iface lo inet loopback
#auto eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet dhcp
bridge_ports eth0
The main idea: configure the br0 interface exactly like the interface that previously worked (eth0 in my case), then disable the old interface's configuration (I commented out the eth0 stanza), and finally add the line "bridge_ports eth0" to the br0 stanza.
Note: if you use a static address, adapt the settings accordingly.
Starting the virtual machine
Once the virtual machine is configured, how you use it is up to you. My approach is to run it in the background and connect to it remotely with VNC, so I start it with the small uqkvm script below (if you are not familiar with VNC, or do not really understand shell scripts, the following may not be of much use to you; due to limited time and energy I only give the script here without explaining every line):
#!/bin/sh
# Brief: quickly try out KVM
# path of the KVM binary
KVM_CMD=/usr/bin/kvm
QEMU_IFUP=/etc/init.d/qemu-ifup
HOST=172.16.70.3
# defaults
MEM=512 # memory
TAP=2 # which NIC (0<TAP<10)
# your disk image
DISK=
# all other KVM parameters
OTHER=
test -n "$1" && TAP=$1
test -n "$DISK" && HDA="-hda $DISK" || HDA="-hda $2"
shift 2
OTHER=$@
# variables that depend on other variables must be set last
NET="-net nic,macaddr=32:32:32:32:32:3$TAP -net tap,ifname=tap$TAP,script=$QEMU_IFUP"
VNC="-vnc $HOST:$TAP"
RUN_CMD="$KVM_CMD -m $MEM $HDA $NET $VNC -localtime --daemonize $OTHER"
echo "Running command: $RUN_CMD"
$RUN_CMD
if test $? = 0; then
echo "KVM started successfully, connect with VNC to $HOST:$TAP ..."
exit 0
else
echo "KVM failed to start, please check the command line for errors!"
exit 1
fi
The /etc/init.d/qemu-ifup file used by the script looks like this:
#!/bin/bash
switch=br0
if [ -n "$1" ];then
/sbin/ip link set $1 up
sleep 0.5s
/usr/sbin/brctl addif $switch $1
exit 0
else
echo "Error: no interface specified"
exit 1
fi
The uqkvm script is used like this:
./uqkvm 3 GreenOS.img -cdrom /data/lab/LessWatts/GTGS-lesswatts_xfce-201004201555.iso -boot d
If you only need to boot the system:
# ./uqkvm 3 GreenOS.img
Running command: /usr/bin/kvm -m 512 -hda GreenOS.img -net nic,macaddr=32:32:32:32:32:33 -net tap,ifname=tap3,script=/etc/init.d/qemu-ifup -vnc 172.16.70.3:3 -localtime --daemonize
pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"
KVM started successfully, connect with VNC to 172.16.70.3:3 ...
Now check the bridge devices:
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0001028c5009       no              eth0
                                                        tap3
You can see that tap3 and eth0 are both on br0.
RHEL/Fedora/CentOS
yum install bridge-utils kvm
bridge-utils is the NIC bridging tool. Example 1: KVM on a Red Hat system
Creating the disk
# qemu-img create -f qcow2 turbo-10.5.5-rc2.img 20G
Formatting 'turbo-10.5.5-rc2.img', fmt=qcow2, size=20971520 kB
# file turbo-10.5.5-rc2.img
turbo-10.5.5-rc2.img: QEMU Copy-On-Write disk image version 2, size 5 + 0
Configuring the bridge
Configure the following network scripts under /etc/sysconfig/network-scripts:
# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.70.30
NETMASK=255.255.252.0
GATEWAY=172.16.68.1
TYPE=Bridge
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# 3Com Corporation 3c905B 100BaseTX [Cyclone]
DEVICE=eth0
#BOOTPROTO=dhcp
#HWADDR=00:01:02:8C:50:09
ONBOOT=yes
BRIDGE=br0
Add a bridge interface and add the eth0 configured above to it:
brctl addbr br0 # create a bridge interface
brctl addif br0 eth0 # add eth0 to br0, important
Use the following script:
# cat /etc/init.d/qemu-ifup
#!/bin/bash
switch=br0
if [ -n "$1" ];then
/sbin/ip link set $1 up
sleep 0.5s
/usr/sbin/brctl addif $switch $1
exit 0
else
echo "Error: no interface specified"
exit 1
fi
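Remember to make the script executable, e.g.:
chmod +x /etc/init.d/qemu-ifup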
Using the bridged NIC
You can start qemu-kvm with the bridged NIC and install the system over NFS using the two GTES 10.5.5 boot files:
# qemu-kvm -m 512 -hda turbo-10.5.5-rc2.img -kernel vmlinuz \
-initrd initrd.img -net nic -net tap,script=/etc/init.d/qemu-ifup --daemonize
You can also install from an ISO; for distributions that ship on several ISOs you will have to change discs in the qemu console:
# qemu-kvm -m 512 -hda turbo-10.5.5-rc2.img -cdrom /path/to/your.iso \
-net nic -net tap,script=/etc/init.d/qemu-ifup -boot d --daemonize
To enter the qemu console, press Ctrl+Alt+2 while the mouse focus is on the qemu window:
(qemu) change cdrom /path/to/the-next.iso
If the command above reports "device not found", the device name is probably mapped differently; do this instead:
(qemu) info block
...
(qemu) change ide1-cd0 /path/to/the-iso-file
Common QEMU usage
Using VNC
# qemu-kvm -m 512 -hda xp.img -net nic,macaddr=00:00:11:33:02:02 \
-net tap,ifname=tap2,script=/etc/init.d/qemu-ifup \
-localtime -vnc 172.16.70.30:2 --daemonize
The command above runs the KVM guest in the background and starts a VNC server, so we can connect a VNC client to 172.16.70.30:5902.
QEMU options:
General options
- -M machine
- Select the emulated machine (use -M ? to get a list of emulated machines).
- -fda file
- -fdb file
- Use file as the floppy disk image. You can also use the host floppy drive by passing /dev/fd0 as the file name.
- -hda file
- -hdb file
- -hdc file
- -hdd file
- Use file as the image for hard disk 0, 1, 2 or 3.
- -cdrom file
- Use file as the CD-ROM image (you cannot use '-hdc' and '-cdrom' at the same time). You can use the host CD-ROM by passing '/dev/cdrom' as the file name.
- -boot [a|c|d]
- Boot from the floppy (a), hard disk (c) or CD-ROM (d). The default is to boot from the hard disk.
- -snapshot
- Write to temporary files instead of the disk image file. In this case the disk image we use is not written back; however, you can force a write-back by pressing C-a s.
- -m megs
- Set the guest RAM size to megs megabytes. The default is 128 MB.
- -smp n
- Emulate an SMP system with n CPUs. For the PC target, up to 255 CPUs are supported.
- -nographic
- Normally QEMU uses SDL to display the VGA output. With this option all graphical output is disabled and QEMU becomes a simple command line program. The emulated serial port is redirected to the console, so you can still use the serial port to debug a Linux kernel on QEMU.
- -k language
- Use the keyboard layout language (e.g. fr for French). This option is only needed when raw PC keyboard access is not available; you do not need it on PC/Linux or PC/Windows hosts. The default is en-us; the available layouts are as follows:
- -audio-help
- Show help for the audio subsystem: the list of drivers and the tunable parameters.
- -soundhw card1,card2 .... or -soundhw all
- Enable audio and select the sound hardware. Use ? to list all available sound hardware.
- -localtime
- Set the clock to local time (the default is UTC). This option is needed to get the correct date under MS-DOS or Windows.
- -full-screen
- Start in full-screen mode.
- -pidfile file
- Store the QEMU process PID in file. This is useful when QEMU is launched from a script.
- -win2k-hack
- Use this option while installing Windows 2000 to avoid disk errors. Once Windows 2000 is installed you no longer need it (this option slows down the IDE transfer speed).
USB options
- -usb
- Enable the USB driver (this will soon become the default).
- -usbdevice devname
- Add the USB device devname. See the monitor command usb_add for more details.
Network options
- -net nic[,vlan=n][,macaddr=addr]
- Create a new NIC and connect it to VLAN n (n=0 by default). On the PC target the NIC is currently an NE2000. Optionally the MAC address can be changed. If no -net option is specified, a single NIC is created.
- -net user[,vlan=n]
- Use the user-mode network stack, which requires no administrator privileges to run. This is the default if no -net option is specified.
- -net tap[,vlan=n][,fd=h][,ifname=name][,script=file]
- Connect the TAP network interface name to VLAN n and configure it with the network configuration script file. The default configuration script is /etc/qemu-ifup. If no name is given, the OS assigns one automatically. fd=h can be used to specify the handle of an already opened TAP host interface. For example:
Here is a more complex example (two NICs, each connected to its own TAP device):
qemu linux.img -net nic,vlan=0 -net tap,vlan=0,ifname=tap0 \
-net nic,vlan=1 -net tap,vlan=1,ifname=tap1
- -net socket[,vlan=n][,fd=h][,listen=[host]:port][,connect=host:port]
- Use a TCP socket to connect VLAN n to the VLAN of another, remote QEMU virtual machine. If listen is specified, QEMU listens on port for incoming connections (host is optional); connect is used to connect to another QEMU instance that used the listen option. fd=h specifies an already opened TCP socket. For example:
- -net socket[,vlan=n][,fd=h][,mcast=maddr:port]
- Create VLAN n and share it with other QEMU virtual machines through a UDP multicast socket; every QEMU using the same multicast address and port is effectively on the same bus. Note the following points:
- several QEMUs can run on different hosts yet share the same bus (assuming multicast is set up correctly for those hosts)
- mcast support is compatible with User Mode Linux
- use fd=h to specify an already opened UDP multicast socket. For example:
# launch QEMU instance (note mcast address selected is UML's default)
qemu linux.img -net nic,macaddr=52:54:00:12:34:56 -net socket,mcast=239.192.168.1:1102
# launch UML
/path/to/linux ubd0=/path/to/root_fs eth0=mcast
- -net none
- Indicate that no network devices should be configured. This overrides the default active configuration if no -net option is given.
- -tftp prefix
- When using the user-mode network stack, activate a built-in TFTP server. All files whose names start with prefix can be downloaded from the host with a TFTP client. The TFTP client in the guest must be configured in binary mode (use the bin command of the Unix TFTP client). The host IP address, as seen from the guest, is 10.0.2.2 as usual.
- -smb dir
- -redir [tcp|udp]:host-port:[guest-host]:guest-port
- When using the user-mode network stack, redirect TCP or UDP connections arriving on the host port host-port to the guest port guest-port. If no guest host is specified, 10.0.2.15 is used (the default address assigned by the built-in DHCP server). For example, to redirect X11 connections from screen 1 on the host to screen 0 on the guest, you could use the following:
Then, when you run telnet localhost 5555 on the host, you are connected to the guest's telnet server.
Linux boot options
With these options you can use a specific kernel without installing it in the disk image. This is handy for easily testing different kernels.
- `-kernel bzImage'
- Use bzImage as the kernel image.
- `-append cmdline'
- Use cmdline as the kernel command line.
- `-initrd file'
- Use file as the initial RAM disk.
Debug options
- `-serial dev'
- Redirect the virtual serial port to the host device dev. The available devices are:
- vc
- virtual console
- pty
- (Linux) pseudo TTY (a new TTY is allocated automatically)
- null
- null device
- /dev/XXX
- (Linux) use the host tty, e.g. '/dev/ttyS0'. The host serial port parameters are set by the emulation.
- /dev/parportN
- (Linux) use the host parallel port N. Currently only the SPP parallel port features can be used.
- file:filename
- write the output to the file filename. No characters can be read.
- stdio
- (Unix) standard input/output
- pipe:filename
- (Unix) the named pipe filename
In graphical mode the default device is vc, in non-graphical mode it is stdio. This option can be used several times; up to 4 serial ports can be emulated.
- `-parallel dev'
- Redirect the virtual parallel port to the host device dev (same devices as for the serial port). On Linux hosts, `/dev/parportN' can be used to access hardware attached to the corresponding host parallel port. This option can be used several times; up to 3 parallel ports can be emulated.
- `-monitor dev'
- Redirect the monitor to the host device dev (same devices as for the serial port). The default device is vc in graphical mode and stdio in non-graphical mode.
- `-s'
- Wait for a gdb connection on port 1234.
- `-p port'
- Change the gdb connection port.
- `-S'
- Do not start the CPU at startup (you must type 'c' in the monitor).
- `-d'
- Output a log to /tmp/qemu.log.
- `-hdachs c,h,s[,t]'
- Force the physical geometry of hard disk 0 (1 <= c <= 16383, 1 <= h <= 16, 1 <= s <= 63) and optionally force the BIOS translation mode (t=none, lba or auto). Normally QEMU detects these parameters automatically. This option is useful for old MS-DOS disk images.
- `-std-vga'
- Emulate a standard VGA card with Bochs VBE extensions (the default is a Cirrus Logic GD5446 PCI VGA).
- `-loadvm file'
- Start from a saved state.
Key combinations
During graphical emulation you can use the following key combinations:
- Ctrl-Alt-f
- toggle full screen
- Ctrl-Alt-n
- switch to virtual console 'n'. The standard console mapping is:
- n=1 : target system display
- n=2 : monitor
- n=3 : serial port
- Ctrl-Alt
- grab the mouse and keyboard. Inside the virtual console you can use Ctrl-Up, Ctrl-Down, Ctrl-PageUp and Ctrl-PageDown to move through the screen.
- Ctrl-a h
- print help
- Ctrl-a x
- exit the emulator
- Ctrl-a s
- save the disk data back to the file (if -snapshot is used)
- Ctrl-a b
- send a break
- Ctrl-a c
- switch between the console and the monitor
- Ctrl-a Ctrl-a
- send Ctrl-a
Disk images
Since 0.6.1, QEMU supports several disk image formats, including growable disk images and compressed or encrypted disk images. You can create a disk image with the following command:
qemu-img create myimage.img mysize
Here myimage.img is the file name of the disk image and mysize is its size in kilobytes; you can add an M to give the size in megabytes or a G to give it in gigabytes.
qemu-img options
The following commands are supported:
`create [-e] [-b base_image] [-f fmt] filename [size]'
`commit [-f fmt] filename'
`convert [-c] [-e] [-f fmt] filename [-O output_fmt] output_filename'
`info [-f fmt] filename'
Command parameters:
- filename
- the disk image file name.
- base_image
- a read-only disk image used as the base of a copy-on-write image; the copy-on-write image only stores the modified data.
- fmt
- the disk image format. In most cases it is detected automatically. The following formats are supported:
- raw: raw disk format (the default). This format is simple and easily portable to other emulators. If your file system supports holes (e.g. ext2 or ext3 on Linux), only the written sectors occupy space; use qemu-img info, or ls -ls on Unix/Linux, to see the real size used by the image.
- qcow: the QEMU image format, the most versatile one. Use it to get smaller images (useful if your file system does not support holes, e.g. on Windows), optional AES encryption and zlib-based compression.
- cow: the User Mode Linux copy-on-write image format. It used to serve as the growable image format in QEMU; it is kept only for compatibility with older versions and does not work on Win32.
- vmdk: VMware 3 or 4 compatible image format.
- cloop: Linux compressed loop image, useful for reusing directly compressed CD-ROM images.
- size: the size of the disk image in kilobytes; the M or G suffixes are also supported.
- output_filename: the destination disk image file name.
- output_fmt: the destination format.
- -c: the target image must be compressed (qcow format only).
- -e: the target image must be encrypted (qcow format only).
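As an example, converting a raw image into a compressed qcow image with the syntax above might look like this (file names are only illustrative):
qemu-img convert -c -f raw disk.img -O qcow disk.qcow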
Installing an operating system remotely with KVM and VNC
First, qemu and qemu-kvm must be installed:
apt-get install qemu qemu-kvm
cd /home/zhiwei/kvm/
Create the disk; 5 GB is enough:
qemu-img create -f qcow2 squeeze.img 5G
Start the installation:
kvm -m 512 -hda squeeze.img -cdrom debian-testing-amd64-netinst.iso -boot d -vnc :1
Use a remote desktop viewer (vncviewer) to connect to 192.168.1.101:1.
kvm -m 512 -nographic -daemonize -hda squeeze.img -redir tcp:9527::22
-redir tcp:5222::5222
Port 5222 listens for c2s connections with STARTTLS,
and also allows plain connections for old clients.
The other ports are not forwarded.
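With the tcp:9527::22 redirect above, the guest's SSH daemon can then be reached through the host, for example (the user name is only a placeholder):
ssh -p 9527 user@localhost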
3 Comments to "Installing an operating system remotely with KVM and VNC"
Xen is also a fairly good virtualization environment, but since KVM's performance is said to be better, I chose KVM.
The first step is of course installing Ubuntu Server 12.04. That would normally be straightforward, but for some reason I decided to combine RAID and LVM, which left the bootloader unable to be written to disk; in the end I had to wipe the partition table, repartition and rebuild the RAID before it worked.
What follows is the main part (although the step above is actually where I spent the most time, embarrassingly).
1. Check whether the system supports KVM
egrep '(vmx|svm)' /proc/cpuinfo
If vmx or svm shows up in the output, the CPU supports virtualization.
kvm-ok
If its output looks normal, the board supports it as well; if it reports a problem, enable virtualization in the BIOS settings.
2. Install KVM
sudo apt-get install kvm qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-manager virt-viewer
Then just wait for it to finish.
3. Install a basic desktop environment (I am lazy and do not really feel like typing commands all the time)
sudo apt-get install x-window-system-core
sudo apt-get install gnome-core
That completes the installation; next comes the configuration.
1. Managing from Linux
Install virt-manager on another machine:
sudo apt-get install virt-manager virt-viewer
2. Managing from Windows
Install an X Window server on the other machine (I use Xming), then install PuTTY. When connecting to the remote computer with PuTTY, enable X11 forwarding:
After connecting with PuTTY, type virt-manager; PuTTY will automatically use Xming to display the window.
PS: if you want to manage virtual machines from the command line, here is a creation script I use; for the details, please refer to the relevant manuals:
virt-install \
--connect qemu:///system \
--name=test \
--ram=512 \
--file=test.qcow2 \
--file-size=20 \
--network=network:default \
--cdrom=/vhdd/iso/ubuntu-12.04-server-amd64.iso \
--vnc \
--noautoconsole \
--os-type=linux \
--os-variant=ubuntuPrecise \
--accelerate
PPS: if you want to use NIC bridging instead of NAT, just adjust the interfaces file:
auto eth0
iface eth0 inet manual
auto virbr0
iface virbr0 inet static
address 192.168.3.249
gateway 192.168.3.254
netmask 255.255.255.0
broadcast 192.168.3.255
bridge_ports eth0
bridge_stp off
-----------
Installing KVM Guests With virt-install On Ubuntu Server
Unlike virt-manager, virt-install is a command line tool that allows you to create KVM guests on a headless server. You may ask yourself: “But I can use vmbuilder to do this, why do I need virt-install?” The difference between virt-install and vmbuilder is that vmbuilder is for creating Ubuntu-based guests, whereas virt-install lets you install all kinds of operating systems (e.g. Linux, Windows, Solaris, FreeBSD, OpenBSD) and distributions in a guest, just like virt-manager. This article shows how you can use it on an Ubuntu 12.04 LTS KVM server.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
I’m assuming that KVM is already installed (e.g. as shown here: Virtualization With KVM On Ubuntu 12.04 LTS). My KVM host has the IP address 192.168.0.100.
I will show how to install a CentOS 6.2 guest in this tutorial.
We also need an Ubuntu 12.04 LTS desktop so that we can connect to the graphical console of our KVM guests. It doesn’t matter if the desktop is installed on the Ubuntu 12.04 LTS KVM server or on a remote system (there are small differences if the desktop is installed on the KVM host compared to a remote desktop, but I will outline these differences, so read carefully).
2 Installing virt-install
Ubuntu 12.04 KVM Host:
Open a terminal and install virt-install:
sudo apt-get install virtinst
3 Installing virt-manager On Your Ubuntu 12.04 Desktop
Ubuntu 12.04 Desktop:
We need a means of connecting to the graphical console of our guests – we can use virt-viewer or virt-manager (see KVM Guest Management With Virt-Manager On Ubuntu 8.10) for this. I’m assuming that you’re using an Ubuntu 12.04 desktop (it doesn’t matter if it is a remote desktop or if the desktop is installed on the Ubuntu 12.04 KVM server!).
I suggest you use virt-manager instead of virt-viewer because virt-manager lets you also create and delete virtual machines and do other tasks. virt-manager can be installed as follows:
sudo apt-get install virt-manager
4 Creating A CentOS 6.2 Guest
Ubuntu 12.04 KVM Host:
Now let’s go back to our Ubuntu 12.04 KVM host.
Take a look at
man virt-install
to learn how to use it.
We will create our image-based virtual machines in the directory /var/lib/libvirt/images/ which was created automatically when we installed KVM.
To create a CentOS 6.2 guest (in bridging mode) with the name vm10, 1024MB of RAM, two virtual CPUs, and the disk image /var/lib/libvirt/images/vm10.img (with a size of 12GB), insert the CentOS DVD into the CD drive and run
sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /dev/cdrom --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
Of course, you can also create an ISO image of the CentOS DVD (please create it in the /var/lib/libvirt/images/ directory because later on I will show how to create virtual machines through virt-manager from your Ubuntu desktop, and virt-manager will look for ISO images in the /var/lib/libvirt/images/ directory)…
sudo dd if=/dev/cdrom of=/var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso
… and use the ISO image in the virt-install command:
sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
The output is as follows:
administrator@server1:~$ sudo virt-install --connect qemu:///system -n vm10 -r 1024 --vcpus=2 --disk path=/var/lib/libvirt/images/vm10.img,size=12 -c /var/lib/libvirt/images/CentOS-6.2-x86_64-bin-DVD1.iso --vnc --noautoconsole --os-type linux --accelerate --network=bridge:br0 --hvm
Starting install…
Allocating ’vm10.img’ | 12 GB 00:00
Creating domain… | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
administrator@server1:~$
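If you want to double-check from the KVM host that the new guest is actually running, a quick look with virsh (not part of the original output) is enough:
sudo virsh --connect qemu:///system list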
5 Connecting To The Guest
Ubuntu 12.04 Desktop:
The KVM guest will now boot from the CentOS 6.2 DVD and start the graphical CentOS installer – that’s why we need to connect to the graphical console of the guest. You can do this with virt-manager on the Ubuntu 12.04 desktop (see KVM Guest Management With Virt-Manager On Ubuntu 8.10).
Start Virtual Machine Manager (you can search for virt-manager in Unity to find it):
When you start virt-manager for the first time and no KVM is installed on your Ubuntu desktop (i.e., the KVM host is not identical to your desktop), you will most likely see the following message (Could not detect a default hypervisor.). You can ignore this because we don’t want to connect to the local libvirt daemon, but to the one on our remote Ubuntu 12.04 KVM host.
In virt-manager, go to File > Add Connection… to connect to your Ubuntu 12.04 KVM host. In my virt-manager the File menu was missing, so I had to right-click the area just below the cross that closes the program; a menu then opened from which I could select Add Connection…:
Select QEMU/KVM as Hypervisor. If the KVM host is identical to your desktop, you are done now and can click on Connect.
But if the KVM host is on a remote Ubuntu 12.04 server, then check Connect to remote host, select SSH from the Method drop-down menu, fill in root in the Username field, type in the hostname or IP address (192.168.0.100) of the Ubuntu 12.04 KVM host in the Hostname field, and click on Connect.
(Replace 192.168.0.100 with the IP address or hostname of the KVM host. Please note that the root account must be enabled on the KVM host, and that root logins must be allowed on the KVM host. To enable the root login on an Ubuntu system, run
sudo passwd root
To check whether root logins are allowed, look at the PermitRootLogin directive in /etc/ssh/sshd_config – if you change it, you might have to restart the SSH daemon afterwards.)
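For example, the check and, if you changed the value, the restart could look like this on the KVM host (the service name ssh is the Ubuntu default):
grep PermitRootLogin /etc/ssh/sshd_config
sudo service ssh restart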
If this is the first connection to the remote KVM server, you must type in yes and click on OK:
Afterwards type in the root password of the Ubuntu 12.04 KVM host:
You should see vm10 as running. Mark that guest and click on the Open button to open the graphical console of the guest:
Type in the root password of the KVM host again:
You should now be connected to the graphical console of the guest and see the CentOS installer:
Now install CentOS as you would normally do on a physical system. Please note that at the end of the installation, the CentOS system needs a reboot. The guest will then stop, so you need to start it again, either with virt-manager or like this on the KVM host’s command line:
Ubuntu 12.04 KVM Host:
sudo virsh --connect qemu:///system
start vm10
quit
Afterwards, you can connect to the guest again with virt-manager and configure the guest. You can as well connect to it with an SSH client (such as PuTTY).
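Optionally, if you want vm10 to come up automatically whenever the KVM host boots, virsh can mark the guest for autostart (this step is not part of the original article):
sudo virsh --connect qemu:///system autostart vm10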
6 Creating A CentOS 6.2 Guest (Image-Based) From The Desktop With virt-manager
Ubuntu 12.04 Desktop:
Instead of creating a virtual machine from the command line (as shown in chapter 4), you can also create it from the Ubuntu desktop using virt-manager (of course, the virtual machine will be created on the Ubuntu 12.04 KVM host – in case you ask yourself whether virt-manager is able to create virtual machines on remote systems).
To do this, click on the following button:
The New VM dialogue comes up. Fill in a name for the VM (e.g. vm11), select Local install media (ISO image or CDROM), and click on Forward:
Next select Linux in the OS type drop-down menu and Red Hat Enterprise Linux 6 in the Version drop-down menu, then check Use ISO image and click on the Browse… button:
Select the CentOS-6.2-x86_64-bin-DVD1.iso image that you created in chapter 4 and click on Choose Volume:
Now click on Forward:
Assign memory and the number of CPUs to the virtual machine and click on Forward:
Now we come to the storage. Check Enable storage for this virtual machine, select Create a disk image on the computer’s hard drive, specify the size of the hard drive (e.g. 12GB), and check Allocate entire disk now. Then click on Forward:
Now we come to the last step of the New VM dialogue. Go to the Advanced options section. Select Specify shared device name; the Bridge name field will then appear where you fill in the name of your bridge (if you have used the Virtualization With KVM On Ubuntu 12.04 LTS guide to set up the KVM host, this is br0). Click on Finish afterwards:
The disk image for the VM is now being created:
Afterwards, the VM will start. If you use a remote KVM host, type in the root password of the KVM host:
You should now be connected to the graphical console of the guest and see the CentOS installer:
Now install CentOS as you would normally do on a physical system.
7 Cloning Guests
Ubuntu 12.04 KVM Host:
The python-virtinst package comes with a second tool, virt-clone, that lets you clone guests. To clone vm10 and name the clone vm12 with the disk image /var/lib/libvirt/images/vm12.img, you simply run (make sure that vm10 is stopped!)
sudo virt-clone --connect qemu:///system -o vm10 -n vm12 -f /var/lib/libvirt/images/vm12.img
Afterwards, you can start vm12 with virt-manager or like this…
sudo virsh --connect qemu:///system
start vm12
quit
… and connect to it using virt-manager.
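To confirm that the clone has been registered, you can list all defined guests, running or not, for example:
sudo virsh --connect qemu:///system list --all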
8 Links
- KVM (Ubuntu Community Documentation): https://help.ubuntu.com/community/KVM
- Ubuntu: http://www.ubuntu.com/
-----------------------------------
Use -vnc together with --daemonize:
-vnc :1 --daemonize
http://www.tightvnc.com/download/1.3.10/tightvnc-1.3.10_x86_viewer.zip is the Windows client; it is a single executable, no installation required.
You can also require a password for VNC: -vnc :1,password
You also have to add the -M pc option, otherwise hardware virtualization does not seem to work.
If you use the following options:
-nographic -daemonize -redir tcp:3389::3389
you can use Remote Desktop instead of VNC.
Windows guests must use -localtime, otherwise the clock will be wrong because of UTC.
kvm -cpu core2duo -smp 2 -m 1024 -nographic -daemonize -hda /dev/sdb -redir tcp:9527::22 -redir tcp:7080::4080
kvm -cpu core2duo -smp 2 -m 1024 -nographic -daemonize -hda /dev/sdb -net nic -net tap,ifname=tap0,script=no
kvm -cpu core2duo -smp 2 -m 1024 -hda /dev/sdb -net nic,macaddr=00:30:1b:2A:3B:4D -net tap,ifname=tap10,script=no,downscript=no -vnc :1
Then connect with VNC to port 5901:
kvm -cpu core2duo -m 512 -hda /dev/sdb -cdrom /tmp/mini.iso -boot d -vnc :1
kvm -cpu core2duo -m 512 -nographic -daemonize -hda /dev/sdb -redir tcp:9527::22
from https://zhiwei.li/text/2010/01/21/kvm%e5%88%a9%e7%94%a8vnc%e5%ae%9e%e7%8e%b0%e8%bf%9c%e7%a8%8b%e5%ae%89%e8%a3%85%e6%93%8d%e4%bd%9c%e7%b3%bb%e7%bb%9f/
-------------
Installing KVM on Ubuntu Server
What is KVM?
It is short for Kernel-based Virtual Machine, an open-source system virtualization module that has been integrated into all major Linux distributions since kernel 2.6.20.
Official website:
http://www.linux-kvm.org/page/Main_Page
As I understand it, it is similar to VMware, except that VMware lets you configure a virtual machine by clicking through options with the mouse, whereas KVM is operated through commands. Without further ado, here is the procedure.
The installation itself is actually very simple:
First, check the CPU information to see whether it supports virtualization:
cat /proc/cpuinfo
Among the lines that are printed, pay attention to this one:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm
If it contains svm (AMD CPU) or vmx (Intel CPU), the CPU can host virtual machines.
Next, install kvm and qemu.
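The install command itself is not shown in the original post; on Ubuntu Server it would typically be something along the lines of:
sudo apt-get install qemu-kvm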
Create a virtual disk:
I created a virtual disk named ubuntu in the current directory, 10 GB in size.
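The command is not shown in the original either; with qemu-img it would look roughly like this (the file name ubuntu.img is an assumption):
qemu-img create -f qcow2 ubuntu.img 10G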
Before installing Ubuntu Server, make sure you have the installation ISO or disc at hand.
-hda specifies which virtual disk the guest's hard disk is; here we use the winxp.img we created earlier.
-cdrom specifies the CD-ROM; you can use an ISO file or the machine's optical drive. We use an ISO file; if you want to use the physical drive, try -cdrom /dev/cdrom.
-boot specifies whether the guest boots from floppy, hard disk, CD-ROM or the network; for the installation we boot from the CD-ROM, so we use d.
-m is the amount of memory the virtual machine gets, in MB; the default is 128 MB.
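Putting the parameters just described together, the launch command would look roughly like this (the ISO file name is only a placeholder):
kvm -m 512 -hda winxp.img -cdrom winxp.iso -boot d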
At this point the installation is done. Note: I used VNC here. At first I tried installing through PuTTY, but every time it reached init kdb and I pressed Ctrl+C to stop it, it reported this error:
Could not open SDL display (640x480x32): Couldn’t set console screen info
Later I found VNC, and with it the installation went smoothly.
First, install VNC on the Ubuntu server.
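The package name is not given in the original; on Ubuntu the server used below is typically installed with:
sudo apt-get install vnc4server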
Set the password:
This password will be used when logging in from the client.
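With vnc4server, the password is normally set with:
vncpasswd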
Start vnc4server:
vnc4server
Note the desktop name; it is used when the client connects. If it is desktop :1, the client connects to ip:5901, and so on; if it is desktop :2,
the client fills in ip:5902, and so on.
Edit .vnc/xstartup so that it looks like this:
(The lines shown in purple were added afterwards.)
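The original shows the modified file only as a screenshot; a typical xstartup that launches a full GNOME session (an assumption, not the author's exact file) looks roughly like this:
#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
gnome-session &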
Once the server side is set up, move on to the client.
Download vncviewer:
http://www.realvnc.com/cgi-bin/download.cgi
After downloading, open vncviewer.
In the Server field, enter ip:5902 (the 5902 is whatever was generated on the server side).
Enter the password you set on the server earlier,
and you are logged in.
Then, in that session, install a Linux distribution inside KVM.
OK, that is the whole configuration process.
------------------------