OpenIndiana has finally been updated (to oi_151a8), bringing a few new features and of course many bug fixes. All this should also be true for OmniOS r151006, which has been out for a few months already (and probably SmartOS). Here are a few new features I noticed more than others while scanning the illumos commits since the last update.
3035 LZ4 compression support in ZFS and GRUB
3236 zio nop-write
3137 L2ARC compression
3122 zfs destroy filesystem should prefetch blocks
3805 arc shouldn't cache freed blocks
3561 arc_meta_limit should be exposed via kstats
3537 want pool io stats
3507 Tunable to allow block allocation even on degraded vdevs
749 /usr/bin/kstat should be rewritten in C
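A few of these can be tried straight from the command line. A minimal sketch, assuming a pool and dataset named "tank" and "tank/data" (hypothetical names):

```shell
# 3035: enable LZ4 compression on a dataset; only newly
# written blocks are compressed.
zfs set compression=lz4 tank/data

# See how well the data compresses after some writes.
zfs get compressratio tank/data

# 3561: arc_meta_limit is now exposed through kstats.
kstat -p zfs:0:arcstats:arc_meta_limit

# 3537: per-pool I/O statistics, refreshed every 5 seconds.
zpool iostat -v tank 5
```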
Bug fixes
3422 zpool create/syseventd race yield non-importable pool
3606 zpool status -x shouldn't warn about old on-disk format
Hardware support
3815 AHCI: Support for Marvell 88SE9128
3797 AHCI: Support for ASMedia ASM106x
3500 Support LSI SAS2008 (Falcon) Skinny FW for mr_sas(7D)
3178 Support for LSI 2208 chipset in mr_sas
3408 detect socket type of newer AMD CPUs
3701 Chelsio Terminator 4 NIC driver for illumos
More in-depth information on the two larger enhancements to ZFS can be found here:
LZ4 Compression
L2ARC Compression
Sunday, September 1, 2013
Tuesday, April 23, 2013
ZFS Analytics
While working on ZFS performance I created a dashboard to get a good overview with lots of different statistics. It's powered by DTrace, Python and Graphite. There is a high level of detail, but it is still easy to correlate different statistics.
It feels almost like a lite version of Fishworks Analytics, but without advanced features such as drill-down and heat maps. An example from a box running OpenIndiana:
You get a good view of how the layers interact: the latency for reads in ZFS compared to reads from the physical disks, average latency, maximum latency, average read size, and how much more data ZFS reads via prefetch, including hit rate, etc.
I based this on the iomon DTrace script with some glue to send the data into Graphite, and I also added ARC statistics and CPU/network statistics. (There is an iomon-graphite effort available on the web, but it did not give me correct statistics and did not include things like CPU and network utilization.)
iomon.d
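As a rough illustration of the glue layer, here is a minimal Python sketch of how parsed DTrace output could be forwarded to Graphite's plaintext protocol. The host name, port, metric prefix and input format are assumptions for the example, not the actual iomon.d output:

```python
# Hypothetical glue: forward (metric, value) pairs produced by a DTrace
# consumer to Graphite's plaintext protocol ("path value timestamp\n").
import socket
import time

GRAPHITE_HOST = "graphite.example.com"   # assumed Graphite server
GRAPHITE_PORT = 2003                     # Graphite's default plaintext port
PREFIX = "zfs"                           # assumed metric namespace

def format_metric(name, value, timestamp=None):
    """Render one plaintext-protocol line: '<prefix>.<name> <value> <ts>\n'."""
    if timestamp is None:
        timestamp = int(time.time())
    return "%s.%s %s %d\n" % (PREFIX, name, value, timestamp)

def send_metrics(pairs, host=GRAPHITE_HOST, port=GRAPHITE_PORT):
    """Push a batch of (name, value) pairs over one TCP connection."""
    sock = socket.create_connection((host, port))
    try:
        for name, value in pairs:
            sock.sendall(format_metric(name, value).encode("ascii"))
    finally:
        sock.close()
```

A periodic consumer would parse each DTrace output line into a (name, value) pair and hand the batch to send_metrics(); Graphite then takes care of storage and graphing.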
Wednesday, March 27, 2013
SPARC T5 and M5 systems released
Oracle has announced new systems based on the new T5 and M5 processors. The T5 doubles the number of S3 cores compared to the T4 and increases the clock frequency to 3.6GHz.
The M5 processor is also based on the S3 core (rebranded M4), clocked at 3.6GHz, but it has 6 cores and 48MB L3$. The M systems support up to 32 M5 processors, so a fully configured system has 192 cores and 1536 strands (hardware threads). The M5-32 holds 32TB of memory in a single system.
Old and existing processors for reference:
T3 16 cores @ 1.65GHz 6MB L2$ 1-4 socket systems PCIe 2.0
T4 8 cores @ 3.0GHz 4MB L3$ 1-4 socket systems PCIe 2.0
New processors:
T5 16 cores @ 3.6GHz 8MB L3$ 1-8 socket systems PCIe 3.0
M5 6 cores @ 3.6GHz 48MB L3$ 32 socket systems PCIe 3.0
Oracle claims a 2.3x performance gain over the T4, along with increased single-thread performance.
All new systems, both T5 and M5, support LDOMs (Oracle VM for SPARC).
Did everyone read that the M5-32 supports 32TB of memory? No wonder they had to rewrite the virtual memory subsystem in Solaris 11.1.
SPARC Servers
Saturday, February 23, 2013
T5, M4 and Athena support in S10U11
The kernel patch for Solaris 10 update 11 shows support for several new platforms: SPARC T5, SPARC M4 and the Fujitsu Athena (SPARC64-X).
getupdates.oracle.com/readme/147147-26
7086173 Solaris support for SPARC M4 platforms
7086179 Solaris support for SPARC T5 platforms
7124696 Solaris support for Athena processor
7142242 Solaris support for Fujitsu's SPARC64-X Athena platforms
The Oracle SPARC T5 and M4 processors are unannounced as I write this, but the T5 is expected in early 2013.
Wednesday, February 13, 2013
LDOM 3.0, T5 and M10
The release notes for LDOM 3.0 confirm what we already knew: logical domains will be supported not only on the upcoming SPARC T5 systems but also on the new generation of Fujitsu SPARC servers, the M10.
From the Register: Fujitsu launches 'Athena' Sparc64-X servers in Japan
"To enable all the OracleVM Server for SPARC 3.0 features, you must run the required system firmware versions on the following platforms:

UltraSPARC T2 server: run at least version 7.4.2 of the system firmware.
UltraSPARC T2 Plus server: run at least version 7.4.2 of the system firmware.
SPARC T3 server: run at least version 8.2.1.b of the system firmware.
SPARC T4 server: run at least version 8.2.1.b of the system firmware.
SPARC T5 server: use any installed version of the system firmware.

This firmware is preinstalled on the SPARC T4 and SPARC T5 servers. The required firmware for Fujitsu M10 systems is preinstalled on your system. For information about the required Oracle Solaris OS version, see “Required and Recommended Oracle Solaris OS Versions” on page 12."
The new SPARC M10 systems can now be seen on Fujitsu's site (English translation).
Sunday, February 10, 2013
Solaris 10 1/13 released
The last update of Solaris 10 has now been released: update 11. The final name of the release is Solaris 10 1/13, not 8/12 as earlier plans indicated. This is probably due to the upcoming SPARC T5 systems, to keep the releases close in time. It supports the latest and greatest x86 systems, SPARC M-series and SPARC T4/T5, while still working fine on my 15-year-old Ultra 2.
Oracle Solaris 10 Downloads
Oracle Solaris 10 1/13 What's New
- Live Upgrade and Zone Preflight checkers
- Install and boot from iSCSI
- pkgdep command
- SSH, SCP, and SFTP Speed Improvements
- USB 3 Support
- Support for new hardware
- Enhanced FMA (AMD, Intel)
- Samba 3.6.8 with SMB2 support
Wednesday, November 28, 2012
LDOM 3.0
Oracle has released Oracle VM for SPARC 3.0 (LDOM).
- Enhances the resource affinity capability to address memory affinity.
- Retrieves a service processor (SP) configuration from the SP into the bootset, which is on the control domain.
- Enhances the domain shutdown process. This feature enables you to specify whether to perform a full shutdown (default), a quick kernel stop, or force a shutdown.
- Adds Oracle Solaris 11 support for Oracle VM Server for SPARC Management Information Base (MIB).
- Enables a migration process to be initiated without specifying a password on the target system.
- Enables the live migration feature while the source machine, target machine, or both, have the power management (PM) elastic policy in effect.
- Enables the dynamic resource management (DRM) feature while the host machine has the PM elastic policy in effect.
I hope this release removes the restriction that prevented dynamic reconfiguration of resources after a live migration.
This release also seems to have been tested on upcoming SPARC processors: "7159011 M4/T5: Migration fails initialization on Logical Domains Manager startup" including Fujitsu Athena: "RFE: Cross CPU Migration support between Athena server and T series".