Tags: zfs

11/04/13

  07:23:00 pm, by The Dreamer   , 2301 words  
Categories: Hardware, Computer, Operating Systems, FreeBSD

Upgraded to FreeBSD 9.2

So, the announcement of FreeBSD 9.2 came out on Monday [September 30th], which I missed because I was focused on my UNMC thing. But once it appeared, I knew I was going to want to upgrade to it sooner rather than later.

From its highlights, the main items that caught my attention were:

  1. The ZFS filesystem now supports TRIM when used on solid state drives.
  2. The ZFS filesystem now supports lz4 compression.
  3. DTrace hooks have been enabled by default in the GENERIC kernel.
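
None of this is from the release notes, but a quick way to sanity-check those three items after rebooting into 9.2 looks roughly like this (the sysctl names are from memory and may vary by release):

    uname -r                        # should now report 9.2-RELEASE (or a -pN patch level)
    zpool upgrade -v | grep -i lz4  # lz4_compress listed among the supported feature flags
    sysctl vfs.zfs.trim             # the TRIM knobs, e.g. vfs.zfs.trim.enabled
    dtrace -l | wc -l               # DTrace probes available (may need 'kldload dtraceall' first)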

But I didn't start this upgrade until October 4th... and, for reasons I can't recall, I launched the freebsd-update process first on cbox, the busier of my two headless servers. I suspect I went with the headless servers because they run entirely on SSD and would likely see the most benefit from lz4 compression. And perhaps I did cbox first because it was the system that could gain the most from lz4.

It took a couple of iterations through freebsd-update before I got an upgrade scenario that could proceed. And it took a long time, given how heavily loaded cbox is.
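
For reference, the basic binary-upgrade sequence goes roughly like this (paraphrased from memory rather than copied out of my shell history):

    freebsd-update -r 9.2-RELEASE upgrade   # fetch and merge -- the step that took a couple of iterations
    freebsd-update install                  # install the new kernel
    shutdown -r now
    freebsd-update install                  # install the new world after the reboot
    # rebuild/reinstall third-party ports and kernel modules here, then, if prompted,
    # one final pass to clean up old shared libraries:
    freebsd-update install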

cbox is an Atom D2700 (2.13GHz, dual core) system, and cacti (especially with the inefficient, processor/memory-intensive Percona monitoring scripts -- it might help if script server support actually worked, instead of being a leftover from what they were based on) is the main source of load. The load average usually sits around 11.xx, except during certain other events (like, since CFEngine 3.5, when cf-agent fires... cbox is set to run it at a lower frequency than my other systems) or when the majority of logs get rotated and bzip'd. And there's also some impact when zen connects to rsyncd each day for BackupPC. But these spikes aren't that significant. The high load does cause cf-agent runs to take orders of magnitude longer than on other systems, including cbox's 'twin', dbox.

I also ran into a problem (again?) where a lot of the differences freebsd-update needed resolved were just differences in revision tags... some as trivial as '9.2' vs '9.1', others had new timestamps or usernames, but seldom any actual changes to the contents of the file. Which is when I discovered a problem with having some of these files under cfengine control: cfengine would revert them back to having '9.1' revision strings, which confused freebsd-update. I ended up updating all of the files in cfengine to the 9.2 versioning, though I thought about just removing the revision tags or replacing them with something else entirely; I wasn't sure what impact that would have on current or future freebsd-update upgrades.

It did seem to cause a problem with the other two upgrades, though: freebsd-update would say that some of these files had been removed and ask if I wanted to remove them. Which doesn't make sense, since it didn't say that during the first upgrade. It was probably just angry that these files already claimed to be from FreeBSD 9.2.

It also didn't like that I use sendmail (so my sendmail configs are specific to my setup), or that I use CUPS (so printcap is the one auto-generated by CUPS), etc.

But once it got to where it would let me run my first "freebsd-update install", I ran it, rebooted, ran it again, rebooted, and then updated stuff. It didn't complain as much this time, perhaps because some of the troublesome kernel-module ports have since corrected the problem of installing into /boot/kernel, or perhaps enough stayed the same between 9.1 and 9.2 that things didn't freak out like before. That includes the VirtualBox kernel module, when I later did the upgrade on zen, and later mew. But I re-installed those ports, and lsof. I did a quick check of other services, and then upgraded the 'zroot' zpool to have feature flags.

The pool now no longer has a version: apparently, instead of jumping the version numbers to distinguish itself from Sun/Oracle, ZFS has eliminated version numbers beyond 28 and instead has flags for the features added since. I wonder if the flags capture everything that has changed since 28, since I thought there had been other internal improvements that aren't described by version numbers. Namely, I seem to recall there have been improvements in recoverability: it had been suggested, back when I was trying to recover a corrupt 'zroot' on mew, to try finding a v5000 ZFS live CD. I don't think I ever found one, and I gave up anyway once I concluded the level of corruption was too great for any hope of recovery, and that I needed to resort to a NetBackup restore before the last successful full backup got expired. That full was nearly 90 days old, and the two more recent monthly fulls didn't exist, due to the system instability that eventually caused the corrupted zpool (eventually found to be a known-bad revision of the Cougar Point chipset and a bad DIMM).

Things finally seem stable on mew now, from using a SiI3132 SATA controller instead of the onboard ports and getting that bad DIMM replaced. It was weird that it was a Dell Optiplex 990, purchased new over a year after the chipset problem had been identified and a newer revision released. I did eventually convince Dell support to send me a new motherboard and replace the DIMM. The latter was good, since I had been using DIMMs from another Dell that had been upgraded, so I had less memory for a while. At first I did use the onboard SATA again, but eventually I started having problems that would result in losing a disk from the mirrored zpool, and then a reboot where both disks would be present again (though gmirror would need manual intervention)... moving back to the SiI3132 has finally gotten things stable again. The harddrives in mew are SATA-III, so it would've been desirable to stay on the SATA-III onboard ports, and it was those ports that were the main source of problems in the prior defective revision. Perhaps the fact that the prior revision had a heatsink and the new one didn't wasn't because they no longer needed it to compensate for over-driving the silicon on the SATA-III portion, but was an oversight with the newer revision motherboard. The problem did tend to occur in the early morning hours on the weekend, when there is not only a lot of daily disk activity but also a lot of weekly disk activity, etc. Oh well.

So, after upgrading the zpool and reinstalling the boot block/code, I rebooted the system again. I had already identified the ZFS filesystems where I had 'compression=on', and had written a script to change all of these to 'compression=lz4', which I now ran.
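
The script was nothing fancy; reconstructed after the fact, the whole post-upgrade step amounted to something like this (assuming a GPT disk ada0 with the freebsd-boot partition at index 1, which won't match every box):

    zpool upgrade zroot                                          # move the pool from v28 to feature flags
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0   # reinstall the ZFS-aware boot code
    # (rebooted here)

    # flip every dataset that had the old default compression over to lz4
    zfs get -rH -o name,value compression zroot | \
        awk '$2 == "on" { print $1 }' | \
        while read fs; do
            zfs set compression=lz4 "$fs"
        done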

And, then I turned my attention to doing dbox.


01/24/13

  10:23:00 pm, by The Dreamer   , 1549 words  
Categories: Hardware, Computer, Storage, Ubuntu, FreeBSD

SSD Craziness

Over the last year, I started buying SSD drives. It used to be that they seemed pretty expensive, and of questionable performance and reliability. But all of that has improved over the years, and 120GB drives dropped under $1/GB (initially after rebates, later before any rebate). I didn't have an immediate need for an SSD drive at the time, but I envisioned replacing the drive in my (u) laptop, and perhaps my (w) laptop... beyond that I wasn't sure.

 3/05/12 - Patriot Pyro 120GB             - $159.99-$40 rebate = 119.99
 3/20/12 - OCZ Agility 3 120GB            - $139.99-$30 rebate = 109.99
 4/27/12 - Mushkin Enhanced Chronos 120GB - $ 99.99
 5/16/12 - OCZ Solid 3 60GB               - $ 74.99-$20 rebate =  54.99
 8/10/12 - Sandisk 128GB                  - $ 79.99
11/21/12 - Kingston HyperX 2K 240GB       - $149.99
 1/18/13 - Sandisk Extreme 120GB          - $ 89.99-$15 reward =  74.99
 1/18/13 - Sandisk Extreme 120GB          - $ 89.99-$15 reward =  74.99

But during this time... there was the lhaven misstep, where I had picked up the 60GB drive for that machine, but ended up using the Mushkin 120GB drive instead. The OCZ Agility 3 120GB had gone in as the OS drive for my Xen Cloud experiment, and stayed when I went on to making it FreeBSD. It was cut up initially as a 64k boot partition, 32GB of swap, a 16GB L2ARC for the mirrored 1.5TB drives to help with dedup... and the rest, a 63GB root zpool.
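
Roughly, that carve-up looks like the following under gpart; this is a sketch after the fact, with the device name (ada0), the GPT labels, and the data pool name ('tank') all stand-ins rather than what the box actually uses:

    gpart create -s gpt ada0
    gpart add -t freebsd-boot -s 64k ada0
    gpart add -t freebsd-swap -s 32G -l swap0 ada0
    gpart add -t freebsd-zfs  -s 16G -l cache0 ada0       # L2ARC for the mirrored 1.5TB pool
    gpart add -t freebsd-zfs         -l root0 ada0        # the rest: the ~63GB root zpool
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
    zpool add tank cache gpt/cache0                       # attach the cache device to the data pool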

Somehow the Patriot got misplaced for a while, so it got overlooked during the chaos.

After a 'break', I picked up the Sandisk 128GB drive, thinking it might be a better choice to finally replace the (u) laptop harddrive. I waited until after the NN conference in October to do it, but got it done before I went to LISA in December. Though I didn't finally upgrade the OS to 12.04 (from 10.04) until just last week. I had thought about doing a clean install to 64-bit, since there had been some issues since I upgraded the memory to 8GB. But changing the hibernate method seemed to have solved the issue... so I decided to leave it 32-bit. My (w) laptop is 64-bit, though it only has 4GB. Not sure if I'll upgrade it to 8GB, or when I'll upgrade its harddrive to SSD.

Things have been kind of tight since, on the 128GB Sandisk drive. Partly because swap got a bit bigger: I had suspected that 8GB of swap was iffy for hibernation, so I bumped that up. Plus, the original harddrive was 160GB. But the lion's share of the space consumption is my Windows XP VM. Still, it gets the job done.

Meanwhile, around this time I got the idea that instead of making the risky upgrade of my two Ubuntu servers from 10.04 to 12.04, I would set up two new FreeBSD servers and migrate the essential services over before deciding the future of those systems. So I acquired a pair of Shuttle XS36Vs, 4GB of memory for each, and the plan was to eventually acquire a pair of SSDs for them. Which I finally did last week, as a pair of SanDisk Extreme 120GB drives (with the help of $30 from Best Buy Reward Zone... and this purchase should get me another $5 in Reward Zone soon). These will probably get installed with FreeBSD 9.1.


12/22/12

  09:58:00 pm, by The Dreamer   , 1422 words  
Categories: Hardware, Software, Computer, Storage, Operating Systems, Ubuntu, FreeBSD, Virtualization, Other Linux

zen resurrection

This was originally going to be a very long post, but I kept putting this off ... and now I just feel that something needs to be said.

The story starts with waking up on February 15th to find zen was dead. It had self-updated overnight, and now it was unbootable, and Startup Repair couldn't get me back. Apparently, the problem had started long ago, with all the previous times Windows 7 would lock up (usually under intense disk activity), after which the Intel Matrix RAID would require re-initialization of my 1.5TB RAID 1 array.

Apparently, it had been slowly corrupting my drive... because trying to restore from the WindowsImageBackup was also a failure. Since this happened the day before Gallifrey One, I had to wait until I got back to make some more serious attempts at recovery, during which I ordered a full copy of Windows 7 Professional, hoping that a repair install might be an option. It isn't, because a repair install can only be invoked from inside a running Windows 7 system... the one in need of repair... not by booting from the disc. ARGH! :##

At least I should have the data in BackupPC to restore from... hopefully I can get to it before the bit rot of its ext4 filesystem makes it go away. Plus, I had hoped to get some configuration going where I could mount the RR62x RAID 5 array and get at the Oops!Backup store.

So, the plan now was to wait for Ubuntu 12.04 LTS to land, and then maybe work out some configuration of running Windows 7 in VirtualBox and recovering into that, etc.


11/05/12

  10:30:00 am, by The Dreamer   , 2100 words  
Categories: Hardware, Software, Computer, Storage, Ubuntu, FreeBSD, VirtualBox

Orac is looking strangely bare, with Zen taking over.

For a long time, I'd been running a 6-drive RAID10 array of Hitachi 5K3000 2TB drives in Orac for BackupPC. This configuration got me a 5.4TB array, and somewhat better performance than when I tried a RAID6 configuration. But I kept running out of space, and the price of harddrives went up, so expanding the array over time didn't happen as I had hoped. Being RAID10, the options were to concatenate another array onto it using the volume manager: either 2 drives in RAID1, 4 in RAID10, or 4 as 2 RAID1 sets. Or maybe see whether the RAID10 would cope with having all 6 drives upgraded to 3TB, though I hadn't considered the 512-byte to 4K sector transition and how it would cope with that.
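
The post doesn't show any commands, and I'm assuming the array is Linux md with LVM on top (that assumption, plus every device and volume name here, is mine), but the 'concat another array using the volume manager' option boils down to something like:

    # build a second, smaller array -- 2 drives in RAID1 as one of the options mentioned
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdh1 /dev/sdi1

    # then stretch the BackupPC volume across both arrays; this assumes the existing
    # 6-drive RAID10 (/dev/md0) is already a physical volume in the volume group
    pvcreate /dev/md1
    vgextend backuppc_vg /dev/md1
    lvextend -l +100%FREE /dev/backuppc_vg/backuppc
    resize2fs /dev/backuppc_vg/backuppc        # grow the ext4 filesystem to match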

Though I did eventually find out, when I upgraded a 1.5TB RAID1 set to become a 2TB RAID1 set, going from ST31500341AS to ST2000DL003 drives; I contributed my experience here: http://askubuntu.com/questions/141669/creating-properly-aligned-partitions-on-a-replacement-disk/ It all started because one of the ST31500341AS drives had failed.
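
Paraphrasing the gist rather than quoting that answer (device names are examples, and this assumes an md RAID1 member): the key is to start the replacement partition on a 1 MiB boundary, which is aligned for both 512-byte and 4K-sector drives, before re-adding the disk to the array.

    sgdisk -n 1:0:0 -t 1:fd00 /dev/sdb    # one Linux-RAID partition; sgdisk aligns to 2048 sectors (1 MiB) by default
    mdadm /dev/md2 --add /dev/sdb1        # re-add the member and let the RAID1 resync onto the new drive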

Before the failure of one of the 1.5TB drives in the above-mentioned RAID1 set, I had 4 ST31500341AS drives in a RAID5 on old-Zen. It had been done under the RR622, under Windows, with NTFS partitions, etc. I had tried copying the data off at various times, not really having anywhere else that would hold the data, but wanting to get it over to FreeBSD for recovery. While I got the RR622 driver working, and it saw that I had a single array (rather than the native driver, which would see the 4 individual disks), I couldn't get access to the data. It had worked when I was previously playing around with Xen: I had tried copying it then, to a 2x 1TB RAID0 set, but then one of the 1TB drives died, so I lost the copy. I then replaced that with a 2TB RAID1 set, using an ST2000DL003 and an ST2000DM003; the DL being a 5900RPM drive with a 5-year warranty, while the DM is a 7200RPM drive with only a 1-year warranty. And it turns out the 1 year is generous.

At work, I had built my FreeBSD desktop using a pair of ST1000DM003 drives... and 3 drive failures later, it is now a pair of ST2000DL003 drives. Yeah... I was having trouble with the array, and apparently using XFS was a mistake too, because I thought it was recovering, but instead it was slowly eating the data. When I nuked the RR622 RAID5 array, switched to using it as JBOD, and created a RAIDZ set under FreeBSD... I found that there was nothing to copy back from the RAID1 array. D'Oh! >:XX

I had also copied the Microsoft WindowsImageBackup files, to see if I could mount the VHD file under VirtualBox to help in recovery. I largely had the data in bits and pieces elsewhere; it was the environment I wanted to recreate... and Oops!Backup didn't back up that part anyway (the data I was mainly trying to migrate). The image mounted, and I could see it... but soon after, Windows would try to fix it and then it would disappear... kind of like what it did on February 15th to make the original Zen go away. No idea what kind of disk rot the Intel Matrix RAID had been causing, when it had to initialize the array again every time after a Windows crash. I've had Ubuntu crashes, but those RAID arrays remained stable... usually. With Windows & Intel RST, it was pretty much every time. I'm sure it was slowly corrupting things over time to where they wouldn't recover, though it chose to finally fail after an automatic reboot for Windows updates... and doing so the day before I left for my first Gallifrey One made things even more annoying.
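
For reference, attaching a VHD like that to a VirtualBox guest is a single VBoxManage call; this is a sketch, with the VM name, controller name, and path all invented for illustration:

    VBoxManage storageattach "Win7-Recovery" \
        --storagectl "SATA Controller" --port 1 --device 0 \
        --type hdd --medium /path/to/WindowsImageBackup/zen/backup.vhd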

Anyway, with another 1.5TB drive freed up, I contemplated adding it to the RAIDZ I had made of the 4 1.5TB drives, keeping it as a hot spare, or just using it by itself -- living dangerously. I ended up with the latter, for some temporary data, because in my mind I was already starting to lean toward what happened next.


09/09/12

  07:06:00 pm, by The Dreamer   , 145 words  
Categories: Software, Computer, FreeBSD

NetBackup 7.5 on FreeBSD 9.0

There's a FreeBSD 6.x client for NetBackup, but when we upgraded to NetBackup 7.5, client installs stopped working. I ignored the problem for a while (since the 7.0 client installs were working), until I set up a new FreeBSD 9.0 server.

What I did was

  1. Install 'misc/compat6x' and 'java/openjdk6'
  2. Obtain 'fakegetfsstat.c', and build it on a 32-bit FreeBSD host (I used the '/compat/i386' environment from building "32bit Wine on FreeBSD/amd64"), then put the resulting fakegetfsstat.so in '/usr/local/etc' -- there's probably a better place, but :lalala: (see the build sketch after this list)
  3. Check that '/usr/openv/netbackup/client/INTEL/FreeBSD6.0/client_config' has "compat_dir" set to '/usr/local/lib32/compat'
  4. Install NBU Client -- It still won't find java, so perform the steps it says using '/usr/local/bin/java'
  5. Edit '/usr/local/etc/netbackup'
  6. The diff of that edit:

    --- netbackup.orig    2012-09-08 14:46:28.794900000 -0500
    +++ netbackup    2012-09-08 14:50:57.848900734 -0500
    @@ -73,10 +73,12 @@
         FreeBSD*)
             PS="/bin/ps -ax"
             # This can be removed once $ORIGIN starts working.
    +        LD_32_PRELOAD=/usr/local/etc/fakegetfsstat.so
    +        export LD_32_PRELOAD
             compat_dir=""
             os_major=`uname -r | cut -f1 -d"."`
             if [ "${os_major}-gt 6 ] ; then
    -            compat_dir=":/usr/local/lib/compat"
    +            compat_dir=":/usr/local/lib32/compat"
             fi
             ProcessorType=`uname -p`
             if [ "${ProcessorType}!= "i386" ] ; then
  7. /usr/local/etc/rc.d/S77netbackup.sh stop
  8. /usr/local/etc/rc.d/S77netbackup.sh start
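
One step that isn't spelled out above is actually building the shim; roughly, on the 32-bit side it is (the compiler flags are my assumption, not from the original notes):

    # inside the /compat/i386 environment (or any 32-bit FreeBSD host)
    cc -O2 -fPIC -shared -o fakegetfsstat.so fakegetfsstat.c
    # drop it where the patched netbackup script preloads it from
    cp fakegetfsstat.so /usr/local/etc/fakegetfsstat.so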
