Thursday, August 31, 2006

Random password one-liner

I recently came up with this method for generating reasonable random 8-character passwords:

$ dd if=/dev/random bs=6 count=1 2>/dev/null | openssl base64
LCia46S4
If 8 characters is not long enough, increase the number after bs= to 75% of the number of characters you would like in the password; base64 encodes every 3 bytes of input as 4 characters of output, so 6 bytes of random data yield an 8-character password.
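For example, a 16-character password calls for 12 bytes:

$ dd if=/dev/random bs=12 count=1 2>/dev/null | openssl base64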

Saturday, August 26, 2006

Install Solaris from DVD image on disk

My personal SPARC machine is pathetic by today's standards: an Ultra II with a pair of 300 MHz processors, 768 MB RAM, and a very slow CDROM drive. This is pretty much the slowest machine that is supported by Solaris 10. Today I decided it was time to get a fresh installation of Solaris Express (build 46) on it.

I first tried the live upgrade route. However, that didn't work out too well because I had previously used bfu to get some newer OpenSolaris bits on the machine. I really did not want to repeat the download process for all the CD ISOs, since I had already downloaded the DVD ISO. And if you think that downloading and burning is slow, you should see the speed of an installation from this CDROM drive. It was probably OK in the days when Solaris fit on one CD, but not today with 5(?) CDs to complete the installation.

The disk layout of the machine was as follows:

  • c0t0d0 32 GB disk
    • c0t0d0s0 - 4.5 GB available for new /
    • c0t0d0s1 - ~500 MB swap
    • c0t0d0s7 - remainder as zfs pool "pool0"
  • c0t1d0 4 GB disk
    • c0t1d0s0 - Root with build 36 (?) + random BFU bits
I had the DVD image in a subdirectory of my home directory that was in the pool0/home file system in the zfs pool. To make use of that DVD image without buying a SCSI DVD drive, I did the following:
  1. Burn build 46 CD0 to a CD-R
  2. Boot from the CD-R
  3. Go clean up the shop from the woodworking I was doing earlier
  4. Do some laundry
  5. Return to the Ultra II to find that it was just about to ask me which language I speak. Really, it was still working on it. Now do you know why I didn't want to feed it 5 CDs?
  6. Answer sysidcfg questions
  7. Exit the installer
  8. zpool import pool0. The import itself completed, but before mounting file systems zpool crashed with a SIGSEGV. I later saved the core file to /a for analysis
  9. zfs set mountpoint=/tmp/home pool0/home
  10. zfs mount pool0/home
  11. lofiadm -a /tmp/home/build46.iso
  12. umount /cdrom
  13. mount -F hsfs -o ro /dev/lofi/1 /cdrom
  14. install-solaris
  15. Go blog about a cool hack. :)
The installation is now about 40% done. Looks like the hack is working just fine. I wonder if I could bundle this all up in a begin script (especially the laundry) to automate the installation from an ISO image after booting from local media.
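If I do, the guts of it would be roughly the commands from steps 8 through 13. Here is an untested sketch, with the pool name, file system, and ISO path hard-coded from above:

#!/bin/sh
# Untested sketch of a begin script to install from a DVD ISO stored in a zfs pool.
# Pool name, file system, and ISO path are hard-coded from the steps above.
zpool import pool0
zfs set mountpoint=/tmp/home pool0/home
zfs mount pool0/home
# lofiadm -a prints the loopback device it attaches, e.g. /dev/lofi/1
lofidev=`lofiadm -a /tmp/home/build46.iso`
# Swap the loopback-mounted DVD image in for the CD media
umount /cdrom
mount -F hsfs -o ro $lofidev /cdrom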

Tuesday, February 28, 2006

Update on zoneadm create with zfs

After reaching out to Sun about getting my work integrated into OpenSolaris, I found that Sun was already working on this feature. They have since indicated that the code made it into an internal source tree. As such, I am holding off on further development until I can get at that code.

However, if you want to try it out, I have posted the code for others to play with. If you have a working OpenSolaris build environment, you should be able to drop in my modified zoneadm.c, run dmake all, and use the resulting zoneadm command. Alternatively, the sparc version of the zoneadm binary is also available.
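For reference, the build step is nothing exotic. Assuming a workspace at /ws with my zoneadm.c dropped in (the paths here are illustrative), something like:

$ cd /ws/usr/src/cmd/zoneadm
$ cp ~/zoneadm.c .        # replace the stock source with the modified version
$ dmake all
$ ls zoneadm              # the freshly built binary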

Enjoy!


Sunday, February 19, 2006

Zone created in 0.922 seconds

I noticed today that "zoneadm clone" exists in the latest OpenSolaris code. Unfortunately, cloning a zone only offered a copy mechanism that was essentially "find | cpio". A bit of hacking later and we have this:

# time ksh -x /var/tmp/clone
+ newzone=fast
+ template=template
+ zoneadm=/ws/usr/src/cmd/zoneadm/zoneadm
+ PATH=/usr/bin:/usr/sbin
+ zonecfg -z fast create -t template
+ zonecfg -z fast set zonepath=/zones/fast
+ /ws/usr/src/cmd/zoneadm/zoneadm -z fast clone -m zfsclone template
Cloning zonepath /zones/template...

real    0m0.922s
user    0m0.128s
sys     0m0.171s
This is achieved by using zfs to create a snapshot of the template zone, then cloning the snapshot to create the zonepath of the new zone. A bit of cleanup is needed, but goodness is on the way.
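Under the covers this boils down to a zfs snapshot of the template's zonepath followed by a zfs clone. Assuming each zonepath is its own zfs file system (the dataset names here are made up), the manual equivalent is roughly:

# zfs snapshot pool0/zones/template@fast
# zfs clone pool0/zones/template@fast pool0/zones/fast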


Wednesday, February 15, 2006

hunting bugs in filebench

I've been using filebench a bit at work and decided that I would like to try a few things out at home. My home machine is not quite as beefy as the V40z's that I have been testing on at work. Getting filebench to compile in the first place is a bit of work; it probably builds just fine on someone else's system, but mine is obviously different. That's another story, though. After compiling filebench, I ran it for the first time and saw this:

$ /opt/filebench/bin/filebench
Segmentation fault (core dumped)
Bummer. Well, let's see where that is at:
$ gdb /opt/filebench/bin/filebench core
GNU gdb 6.4-debian

. . .

(gdb) where
#0  0x37dd84aa in memset () from /lib/tls/i686/cmov/libc.so.6
#1  0x0807b01e in ?? ()
#2  0x080522da in ipc_init () at ipc.c:264
#3  0x08058bc1 in main (argc=1, argv=0x3f8fdcf4) at parser_gram.y:1140
OK, so let's go with the assumption that the bug is in the code listed as alpha on the web site, and not in libc. We go up the stack a couple of levels.
(gdb) up 2
#2  0x080522da in ipc_init () at ipc.c:264
264             memset(filebench_shm, 0, c2 - c1);
(gdb) print filebench_shm
$1 = (filebench_shm_t *) 0xffffffff
Hmmm... 0x with a bunch of f's looks like -1. Perhaps some system call on Solaris (presumably where filebench started) returns NULL on error and on Linux it returns -1. Let's go looking for that system call.
(gdb) list
259     #endif /* USE_PROCESS_MODEL */
260
261             c1 = (caddr_t)filebench_shm;
262             c2 = (caddr_t)&filebench_shm->marker;
263
264             memset(filebench_shm, 0, c2 - c1);
265             filebench_shm->epoch = gethrtime();
266             filebench_shm->debug_level = 2;
267             filebench_shm->string_ptr = &filebench_shm->strings[0];
268             filebench_shm->shm_ptr = (char *)filebench_shm->shm_addr;

Nope, not there. Maybe a bit further up.
(gdb) list 250
245     #endif
246
247             if ((filebench_shm = (filebench_shm_t *)mmap(0, sizeof(filebench_shm_t),
248                     PROT_READ | PROT_WRITE,
249                     MAP_SHARED, shmfd, 0)) == NULL) {
250                     filebench_log(LOG_FATAL, "Cannot mmap shm");
251                     exit(1);
252             }
253
254     #else
It looks like mmap may be the culprit. I first asked man, but this is Linux, not Solaris: no man page for mmap! Next I tried google, which came up with this page that looks a lot like a man page. Why isn't that found on my system? Another thing for another day. Anyway, it says:
RETURN VALUE

On success, mmap returns a pointer to the mapped area. On error, the value MAP_FAILED (that is, (void *) -1) is returned, and errno is set appropriately. On success, munmap returns 0, on failure -1, and errno is set (probably to EINVAL).

Ok, so it is returning -1 because it doesn't like something. Let's see what it is trying to mmap:
(gdb) print sizeof(filebench_shm_t)
$2 = 907368000
(gdb) print sizeof(filebench_shm_t) / 1024 / 1024
$3 = 865
(gdb)

That 'splains it. It looks like it is trying to set up a shared memory segment that is 865 MB. My poor little system only has 512 MB. FWIW, I have created a patch that addresses this one problem, but I haven't had a chance to test it on Solaris yet. Unfortunately, with the patch, it just tells me that the mmap failed. It doesn't address the fact that filebench is trying to allocate a shared memory segment larger than the RAM on my system.

Update 1:

I have posted several patches to the bug tracking system at sourceforge.net. This particular one is 1432638. It turns out that mmap on Solaris also returns MAP_FAILED on error, so the patch is simpler than I originally expected: the check just needs to compare the return value against MAP_FAILED instead of NULL.


Tuesday, January 31, 2006

Download and gunzip in one step

I was feeling the need to take a look at Nexenta and decided that I wasn't terribly interested in waiting for a download, then waiting for a gunzip. Why not do them both at the same time?

$ wget -O /dev/stdout http://www.gnusolaris.org/gsmirror/genunix.org/elatte_installcd_alpha2_i386.iso.gz | gunzip > elatte_installcd_alpha2_i386.iso
--20:21:47--  http://www.gnusolaris.org/gsmirror/genunix.org/elatte_installcd_alpha2_i386.iso.gz
           => `/dev/stdout'
Resolving www.gnusolaris.org... 216.129.112.21
Connecting to www.gnusolaris.org|216.129.112.21|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.genunix.org/distributions/gnusolaris/elatte_installcd_alpha2_i386.iso.gz [following]
--20:21:48--  http://www.genunix.org/distributions/gnusolaris/elatte_installcd_alpha2_i386.iso.gz
Resolving www.genunix.org... 204.152.191.100
Connecting to www.genunix.org|204.152.191.100|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 567,433,011 (541M) [text/plain]

13% [====>                                ] 77,025,880   359.25K/s    ETA 22:27
Just 22 minutes to go. I guess at this rate I could have piped it through cdrecord with "speed=2".
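That pipeline might look something like the following. This is untested; the dev= value is made up, and cdrecord would need a tsize= hint (541M, per the wget output above) because it cannot determine the image size from a pipe:

$ wget -q -O /dev/stdout http://www.gnusolaris.org/gsmirror/genunix.org/elatte_installcd_alpha2_i386.iso.gz | gunzip | cdrecord dev=1,0,0 speed=2 tsize=541m -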


Friday, January 13, 2006

patch_order made easy

Some of my most tedious times as a Solaris administrator have been when I needed to create a patch_order file for a custom patch cluster. For a long time I have intended to just write a script...

But now, I don't have to do that any longer! Today I discovered that smpatch(1M) now has an order subcommand. This makes it really quite simple for me to create a patch_order file for a very long list of patches. In this example, I create the patch_order file for the patches in the Solaris 10 Update 1 UpgradePatches directory:

# cd /mnt/Solaris_10/UpgradePatches
# ls > /tmp/patches
# smpatch order -d `pwd` -x idlist=/tmp/patches > /tmp/patch_order
Now, if you want to go the full length and create a patch cluster for it:
# mkdir /tmp/10U1_UpgradePatches
# cd /tmp/10U1_UpgradePatches
# mv /tmp/patch_order .
# ln -s /mnt/Solaris_10/UpgradePatches/* .
# cp /somewhere/10_Recommended/install_cluster .
Modify the SUPPLEMENT_NAME="..." line in install_cluster to be more descriptive for this patch cluster. Be sure not to use characters like /, \, |, etc.
# cd /tmp
# zip -rq 10U1_UpgradePatches.zip 10U1_UpgradePatches
At this point, you can copy 10U1_UpgradePatches.zip around to your various machines and use it just like you would a 10_Recommended bundle. Enjoy!
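On each machine, installation then looks just like a standard cluster install (the staging directory is up to you):

# unzip 10U1_UpgradePatches.zip
# cd 10U1_UpgradePatches
# ./install_cluster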