Wednesday, June 3, 2009

Virtualization in 2009.06 (Part 2, VirtualBox)

To continue the "getting started" tutorials on virtualization in OSOL 2009.06 we will take a look at VirtualBox. VirtualBox can be a good solution for remote desktop installations: fully virtualized machines with a graphical console over the network. As with the previous posts, this one focuses on how to deploy it on your server. If you run VirtualBox on your desktop it is probably easier to use the supplied GUI, which is on par with the VMware Workstation GUI.

VirtualBox only supports full virtualization, so performance will not be as good as a paravirtualized xVM domain or a Solaris zone, but it works well for graphical access to virtual machines over the network with lighter workloads.

Install VirtualBox

VirtualBox is not hosted in the standard OSOL repositories; it is in the extra repository (along with Flash, the JavaFX SDK and others). There is some minor hassle to get access to this repository: you will need to register an account with Sun, then download and install certificates. You can register here and download the certificate. There are install instructions on the site, but here is the procedure:
$ pfexec mkdir -m 0755 -p /var/pkg/ssl
$ pfexec cp -i ~/Desktop/OpenSolaris_extras.key.pem /var/pkg/ssl
$ pfexec cp -i ~/Desktop/OpenSolaris_extras.certificate.pem /var/pkg/ssl
$ pfexec pkg set-authority \
-k /var/pkg/ssl/OpenSolaris_extras.key.pem \
-c /var/pkg/ssl/OpenSolaris_extras.certificate.pem \
-O https://pkg.sun.com/opensolaris/extra/ extra
Now install VirtualBox:
$ pfexec pkg install virtualbox virtualbox/kernel
Create virtual machines

Create and install a virtual machine:
$ cd /opt/VirtualBox
$ VBoxManage createvm --name "WinXP" --register
$ VBoxManage modifyvm "WinXP" --memory 512 \
--acpi on --boot1 dvd --nic1 nat
$ VBoxManage createhd --filename "WinXP.vdi" \
--size 5000 --remember
$ VBoxManage modifyvm "WinXP" --hda "WinXP.vdi"
$ VBoxManage modifyvm "WinXP" --dvd \
/path_to_dvd/winxp.iso
$ VBoxHeadless --startvm "WinXP"

Connect to the server with an RDP client and perform the installation. Free RDP clients are available for most operating systems:
Solaris/OpenSolaris/*BSD: rdesktop (pkg install SUNWrdesktop)
Mac OS X: Remote Desktop Connection
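From a Solaris or BSD client, the connection might look like this (myserver is a placeholder for your VirtualBox host; VRDP listens on the standard RDP port 3389 by default):

```shell
# Open the VirtualBox RDP console from another machine.
# -g sets the window geometry; adjust to taste.
rdesktop -g 1024x768 myserver:3389
```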
After the installation is done, the guest can be controlled with VBoxManage. VBoxManage can control all aspects of the guest: snapshots, power, USB, sound. Here are a few basic commands:
Start: VBoxHeadless -s "WinXP"
Poweroff: VBoxManage controlvm "WinXP" poweroff
Reset: VBoxManage controlvm "WinXP" reset
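Snapshots are handled through the same tool; a quick sketch (the snapshot name here is just illustrative):

```shell
# Take a named snapshot of the guest
VBoxManage snapshot "WinXP" take "clean-install"
# Show the guest's configuration, state and snapshots
VBoxManage showvminfo "WinXP"
```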
If you want to install the VirtualBox Guest additions, just inject the DVD:

$ VBoxManage controlvm "WinXP" dvdattach \
/opt/VirtualBox/additions/VBoxGuestAdditions.iso


Tuesday, June 2, 2009

Virtualization in 2009.06 (Part 1, xVM)

Install xVM

It seems that I have started a little tutorial trail for working with OSOL 2009.06. A few of my friends are about to install this release, so I thought I might as well turn it into a few blog entries; there are probably other people out there who want some help getting a quick start with OSOL 2009.06.

This will be the first entry about virtualization: first we get xVM running; later entries will describe basic setup of Solaris zones and VirtualBox.

Install the xVM packages:
$ pfexec pkg install xvm-gui SUNWvdisk
Edit /rpool/boot/grub/menu.lst, copy your current entry and modify it to something similar to this:
title OpenSolaris 2009.06 xVM
findroot (pool_rpool,1,a)
bootfs rpool/ROOT/opensolaris
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive
Reboot into this grub entry and if everything works set this as your default boot entry:
$ bootadm list-menu
the location for the active GRUB menu is: /rpool/boot/grub/menu.lst
default 0
timeout 30
0 OpenSolaris 2009.06
1 OpenSolaris 2009.06 xVM

$ pfexec bootadm set-menu default=1
Enable the xVM services

If you want to be able to connect to VNC over the network to perform the installation, make xen listen on external addresses for VNC connections:
$ pfexec svccfg -s xvm/xend setprop config/vnc-listen = astring: \"0.0.0.0\"
Enable the xVM services and set a password for VNC connections:
$ pfexec svccfg -s xvm/xend setprop config/vncpasswd = astring: \"yourpass\"
$ pfexec svcadm refresh xvm/xend
$ pfexec svcadm enable -r xvm/virtd
$ pfexec svcadm enable -r xvm/domains
(Ignore messages about multiple instances for dependencies.)

Installing domains

Now we should be able to create our domU instances. First we create a zvol to be used as the disk for the domU:
$ pfexec zfs create -o compression=on -V 10G zpool01/myzvol
Install a paravirtualized domain e.g. OSOL 2009.06:
$ pfexec virt-install --nographics -p -r 1024 -n osol0906 -f /dev/zvol/dsk/zpool01/myzvol -l /zpool01/dump/osol-0906-x86.iso
Connect to the console and answer the language questions:

$ pfexec xm console osol0906

Back on dom0 get the address, port and password for the VNC console for the OSOL installation, first get the domain id:

$ pfexec virsh domid osol0906

Get the address, port and password using the domain id:
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/ipaddr/0
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/guest/vnc/port
$ pfexec /usr/lib/xen/bin/xenstore-read /local/domain/<domid>/guest/vnc/passwd

Connect with a VNC client to address and port, authenticate using the password.
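Note that many VNC clients take a display number rather than a raw TCP port; the display is simply the port minus the VNC base port 5900. A quick way to compute it in the shell (5905 is just an example value read from xenstore):

```shell
# VNC display number = TCP port - 5900
port=5905          # example port from xenstore-read
expr $port - 5900  # prints 5, i.e. connect to host:5
```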

Install an OS without support for paravirtualization, e.g. Windows:
$ pfexec virt-install -v --vnc -n windows -r 1024 -f /dev/zvol/dsk/zpool01/myzvol -c /zpool01/dump/windows.iso --os-type=windows
Connect to the xVM VNC console using the password provided earlier with svccfg/vncpasswd.

When installation is done domains can be listed with xm list and started with xm start.
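For example, using the domain name from earlier:

```shell
# List all domains and their current state
pfexec xm list
# Boot an installed domain
pfexec xm start osol0906
# Attach to its text console (detach with Ctrl-])
pfexec xm console osol0906
```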

Monday, June 1, 2009

OpenSolaris 2009.06 quick guide

OpenSolaris 2009.06 was released today! I have written a very quick guide for customizing and adding some basic services to an OSOL 2009.06 server from the shell.

Package operations

Install the storage-nas cluster (CIFS, iSCSI, NDMP, etc.):
$ pfexec pkg install storage-nas
Add compilers (Sun Studio; can be replaced with e.g. gcc-dev-4):
$ pfexec pkg install sunstudio
Add the contrib repository for contributed packages
$ pfexec pkg set-publisher -O http://pkg.opensolaris.org/contrib contrib
Other packages that can be of interest:
SUNWmysql51, ruby-dev, SUNWPython26, SUNWapch22m-dtrace, amp-dev, gcc-dev-4

List available and installed packages with search string
$ pkg list -a SUNWgzip
NAME (PUBLISHER) VERSION STATE UFIX
SUNWgzip 1.3.5-0.111 installed ----
$ pkg list -a '*Python26*'
NAME (PUBLISHER) VERSION STATE UFIX
SUNWPython26 2.6.1-0.111 known ----
SUNWPython26-extra 0.5.11-0.111 known ----
Sharing

Create a ZFS filesystem with compression enabled
$ pfexec zfs create -o compression=on rpool/export/share
Share with NFS

Enable NFS service:
$ pfexec svcadm enable -r nfs/server
Enable sharing over NFS for the share filesystem
$ pfexec zfs set sharenfs=on rpool/export/share
Share with CIFS
$ pfexec svcadm enable smb/server
$ pfexec zfs set sharesmb=on rpool/export/share
$ pfexec zfs set sharesmb=name=mysharename rpool/export/share
To enable users to access the CIFS share, add the following line to /etc/pam.conf and reset each user's password with passwd(1):
other password required pam_smb_passwd.so.1 nowarn
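Resetting the password regenerates it through the SMB password module; for example, assuming a local user named alice:

```shell
# Re-set the password so pam_smb_passwd stores the SMB hash
pfexec passwd alice
```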
Enable auto ZFS-snapshots

Disable snapshots globally for the whole pool:
$ pfexec zfs set com.sun:auto-snapshot=false rpool
Enable snapshots for the share:
$ pfexec zfs set com.sun:auto-snapshot=true rpool/export/share
Enable daily snapshots (can be frequent, hourly, daily, weekly or monthly):
$ pfexec svcadm enable auto-snapshot:daily
List snapshots:
$ zfs list -t snapshot
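Individual files can be restored from a snapshot through the hidden .zfs directory at the root of the filesystem. A sketch, assuming rpool/export/share is mounted at /export/share and using an illustrative snapshot name:

```shell
# Snapshots are exposed read-only under .zfs/snapshot
ls /export/share/.zfs/snapshot/
# Copy a file back from a daily snapshot (name is an example)
cp /export/share/.zfs/snapshot/zfs-auto-snap_daily-2009-06-01/somefile /export/share/
```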

If you are unfamiliar with Solaris, read the manual pages for the following commands:
prstat, fsstat, pkg, powertop, zfs, zpool, sharemgr, ipfilter, dladm, fmdump

NOTE: There are graphical tools for snapshot setup and package management that can be used from a graphical console, VNC or forwarded X. Launch them with "time-slider-setup" or "packagemanager".

Thursday, April 30, 2009

Solaris 10 Update 7

Solaris 10 5/09 has been released, which is probably the last release from an independent Sun.

Noteworthy changes are:

SSH with PKCS#11 support, which enables hardware-accelerated SSH on UltraSPARC T2 and the Sun Crypto Accelerator 6000.

Improved power management for Intel CPUs (T-states, power-aware dispatcher and deep C-states)

Enhanced observability for Intel CPUs (performance counters, Nehalem turbo mode)

Major iSCSI improvements and bug fixes.

IPSec improvements (SMF service, stronger algorithms)

Support for backing out patches with update on attach for zones.

Additional network drivers (NetXen 10 GigE, Intel ICH10 and Hartwell)

And lots of bug fixes...

The complete document: Solaris 10 5/09 What's New

Monday, April 20, 2009

The end of the world, as we know it?

I guess you all heard by now that Oracle is buying Sun. I can only hope that Oracle will continue to invest in technologies such as OpenSolaris, VirtualBox, ZFS/OpenStorage and SPARC(R) Rock.

I'm not quite sure exactly what Oracle is after, but at least they can stop developing their own filesystem now that they have ZFS, and Java is probably something Oracle would like more control over. It must also be convenient to get that little free database; they already have a quite complex licensing model, ready to be applied to another engine ;)

I guess it's much better for Solaris than if IBM had been the buyer; Oracle has had Solaris as its preferred platform for a long time and doesn't have its own hardware or Unix flavor. We have to hope that all the great minds of Sun stay with the ship and get to continue their work.

At least some good news has come out of this:

"In our opinion, Sun Solaris is by far the best Unix operating system in the business. With the acquisition of Sun, we will be able to uniquely integrate Sun Solaris and the Oracle database." -Larry Ellison

Perhaps this could be the beginning of a beautiful friendship?

Friday, April 3, 2009

Zones and Parallel Patching

Jeff Victor has an interesting entry on his blog regarding the performance of zone patching with the soon-to-be-released parallel patching and SSDs, compared to serial patching and spinning rust. It's nice to finally see a solution to this on the horizon, since it has been a problem for quite a while.

In short, he was able to speed up the patching of 16 zones five times with SSDs and parallel patching, and three times with HDDs and parallel patching.

Tuesday, March 10, 2009

Bug Hunting

The last few weeks I have been a good Solaris citizen and reported several bugs against Solaris Nevada (I have provided links where the CR is available online):

6803634: virt-install panics in module mac with Jumbo frames enabled
This bug made the system panic when installing a domU that used a network with jumbo frames enabled; it has now been fixed.

6805659: Phantom volume in ZFS pool
This bug causes phantom device links to be left after a zvol has been destroyed.

6815540: Live upgrade is too picky with datasets in non root pools
This one is related to Live Upgrade. Live Upgrade keeps track of all filesystems used on a host; if any one of them has been removed, lumount and ludelete stop working. Until this is fixed, deleting an old boot environment a few weeks after an upgrade will fail if any filesystem has been removed in the meantime.

6815701: snv_109 hangs in boot with SATA enabled on GeForce 8200
My storage node at home hangs when booting snv_109 and SATA is enabled in the BIOS.

As you probably know, Solaris Nevada is the development branch of Solaris. It's not production software, so when you use it you are part of the test process.