

  07:32:00 pm, by The Dreamer   , 693 words  
Categories: Hardware, Computer, Networking, Storage, Ubuntu

Time machine & NFS


So, during the long weekend...I got to wondering why I couldn't have my work MacBookPro do Time Machine backups to a network share. I had envisioned someday attaching a drive to one of my Airport Extremes and doing that, though it doesn't look like that's actually going to happen anytime soon.

Did a quick google search on Time Machine and SMB, and found out about:

defaults write TMShowUnsupportedNetworkVolumes 1

And, so trying to make it work started. I created a Samba share named 'SIDRAT' on ORAC...which I think is the name of the removable 160G drive that I had been using at home for Time Machine backups. The one at work is named 'TARDIS'. But, I already have a Samba share named 'TARDIS' on ORAC....that's the 1TB volume for the HSTi Media Stick. Also not to be confused with computers named TARDIS and SIDRAT. Though SIDRAT, which was my Windows laptop, died a long time ago, and I still haven't gotten around to replacing it.

At first I didn't have dedicated storage on ORAC for this...I was just exporting a subdirectory of another Samba share as SIDRAT. And, it didn't really make sense for it to be a subdirectory on that share...but it was where I had at least 80GB to spare for this....the drive in the MacBookPro is only 80GB.

It seemed to start, but then it would fail. I had picked just the usual options for creating the Samba share, and wondered if the Mac needed special stuff to work. So, I went looking....

Read about the:


trick...but that didn't work. And, wasn't necessary when I did get it working. :))

So, then I read about creating the sparsebundle ahead of time to get it started. There was conflicting info on how to name it....the usual advice was to start Time Machine, see what name it creates, and use that.

So, it was creating "<host>.tmp.sparsebundle". And, there were some variations on how to invoke hdiutil to create this. Well, when I pre-created it, Time Machine slowly made it smaller and smaller until it was gone, and then went on to try to create "<host>.sparsebundle". Creating that one ahead of time didn't help either. I had heard about "<host>_<MAC_ADDRESS>.sparsebundle", but the hits I found didn't explain just what MAC_ADDRESS would look like, or which one to use. Decided this wasn't going anywhere and the long weekend was nearly over....I did at least (after many long attempts) get a backuppc backup of parts of my MacBookPro. So, I got rid of the Samba share and undid most of the other stuff.
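For what it's worth, the MAC_ADDRESS part is reportedly the Mac's built-in Ethernet (en0) address with the colons stripped. A sketch, with placeholder host name and MAC:

```shell
# Hypothetical host name and MAC address -- substitute your own.
HOST=mymac
# en0 MAC address with the colons stripped: 00:11:22:33:44:55 -> 001122334455
MAC=$(echo "00:11:22:33:44:55" | tr -d ':')
NAME="${HOST}_${MAC}.sparsebundle"
echo "$NAME"
# On the Mac itself, the image would then be created with something like
# (size, filesystem, and volume name are assumptions):
#   hdiutil create -size 80g -type SPARSEBUNDLE -fs HFS+J \
#     -volname "Time Machine Backups" "$NAME"
```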



  08:38:00 pm, by The Dreamer   , 601 words  
Categories: Hardware, Software, Computer, BOINC, Storage, Ubuntu

Got another brain for Orac

So, the other day, I was poking around, trying to decide if I wanted to get the upcoming ShellShocker. I spotted a single-slot GT440 for a good price among the daily deals. While I was looking around, I saw other things that caught my interest...a power supply tester. Or things that I needed to go with the card, like a DVI-I to VGA adapter. Also decided to grab a new fan, since it is probably only a matter of time before the fan in the other drive array on Orac fails.

I ended up not getting the ShellShocker item.

I've been eyeballing getting a single slot 400 Series card for Orac for a while. Was originally going to get a 200 Series, and I did actually get one in a different ShellShocker deal. But, when I opened up Orac, I realized it wouldn't fit....due to its huge fan.

Not sure what I'll do with that card. But, I had been looking at the 400 Series, because Zen came with a GT420....which I've learned is pretty horrible compared to the rest of the line. Especially when it comes to doing BOINC. I'll probably go GTS450 or GTX460....single slot, though I need to take a look inside someday to see if there's actually enough juice in there. Otherwise, maybe I'll want a second one of these...

So, I had picked up the card from the UPS Store on Friday...but I didn't get around to installing it until today. I stopped checkarray before I shut down. I also changed the scheduling of checkarray to spread things out better.
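On ubuntu, checkarray runs out of /etc/cron.d/mdadm, so spreading things out amounts to staggering the entries...something like this sketch (array names and the first/second-Sunday split are assumptions, not my actual schedule):

```shell
# /etc/cron.d/mdadm -- hypothetical stagger: check md0 on the first Sunday
# of the month and md1 on the second, instead of everything at once
57 0 1-7  * * root [ "$(date +\%w)" -eq 0 ] && /usr/share/mdadm/checkarray --cron --idle --quiet md0
57 0 8-14 * * root [ "$(date +\%w)" -eq 0 ] && /usr/share/mdadm/checkarray --cron --idle --quiet md1
```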

Though I had a brief distraction this morning on another matter, which I'll be posting about soon...

But, I went and put it in...it's a pretty tight fit sitting next to my PM eSATA card.

I wonder if I'll look into a slot fan to go in the open PCI slot there....the lower PCI slot is where 4 other eSATA ports would go....oh wait, I had gotten a PCI card that I was going to put into Orac. I wonder what happened to that project....



  09:48:00 am, by The Dreamer   , 892 words  
Categories: Hardware, Computer, BOINC, Roku XDS, Ubuntu, Other Linux

LHAVEN is dead, long live LHAVEN?

Well, I was getting really annoyed with it complaining and kicking out the replaced disk in LHAVEN. So, I shutdown the system and took out the disk, figuring it should just continue to run fine degraded until I get around to replacing it.

Well, it wouldn't power on after that. I went through everything, no joy. It had done this a while back when I tried adding something to it. It could just be that the power supply has flaked out, but I don't have a tester or multimeter, so I can't really test it. Could buy a new PS...and I may do that.

But, I kind of suspected the drive issues weren't actually the drive but possibly some deeper hardware problem. So, I had been planning to replace LHAVEN at some point. It has done quite well, being circa 2002. Started out as a 64MB Duron 800MHz machine, and eventually peaked at 2GB and an Athlon XP 3000+. It had replaced a Cyrix PR233 box that had gone up in smoke during a hot summer day in 2002, when the transformer outside blew....the computer survived the brownout and then blackout, but the fans didn't spin back up when power returned, so it burned itself up. Later I found one problem with this new system. It wouldn't resume after losing power. No BIOS setting to alter this behavior, and I did try to see if there were alternate BIOS updates for it. It was kind of a painful machine to manage, because for some time kernels didn't have built-in support for all the SiS chipset stuff, so when an update would roll out, the NIC was usually one of the things to definitely go missing. Which made for fun rebuilding a custom kernel after each upgrade. Eventually it got stable.

But, needing essentials like DNS and DHCP to be available after an extended outage...I moved these to another server (originally an old Pentium 75, which has evolved into what is known as 'box' today). For the longest time it was RedHat 7.2, and then RedHat 7.3 when fedoralegacy switched to only supporting the 7.3 and 9 releases. It continued after fedoralegacy stopped supporting it; I was building some of the packages for it myself, like bind (in response to the Kaminsky exploit). There were parts of my network that wouldn't function without an old Windows 2000 box that was barely functioning...(old Gumby).

I nearly lost it during the Icepocalypse....but I tracked down a replacement motherboard for it. It was a slightly newer mobo, but equivalent chipset. It did have some things the old mobo didn't, like USB 2.0 support (I didn't use USB and still don't) and support for 2GB of RAM instead of what the old one maxed out at, so I upped it for better BOINC'ng. And, a faster CPU....I upped from the 2200+ to a 3000+.

At one time, I had turned it off to do some upgrades...which didn't pan out. Tried to slap a gigabit card into it...didn't work. It wouldn't get along with the onboard stuff, and the BIOS didn't have ways to get things out of the way enough. It also never fixed the resume-after-power-loss issue. But, during this process it failed to power on....after a couple days, it came back, and I decided to upgrade its UPS and hope it would make it through one last outage. Well, there have been several outages since then where it came back afterwards. But, looks like this was the last time for it.


  12:03:00 am, by The Dreamer   , 1362 words  
Categories: Software, Computer, Networking, WiFi, Ubuntu

Freeradius & DHCP Failover


So, a while back I had looked at adding MAC Address Access Controls to my Airport Extreme...on top of WPA2 Personal and the fact that my DHCP server only hands out reserved IPs, for extra security. I used to do MAC Address Access Controls on my previous routers, but it was an easier interface to work with on those. And, I didn't realize how the Timed Access worked on the Airport Extreme; the default allow-all-the-time rule at the top tripped me up. So, I thought if I wanted it, I would need a RADIUS server...and I didn't know if I wanted to do that....yet.

But, after I woke one morning and couldn't seem to account for why there seemed to be so much data streaming through my Cox connection...there had been strange spikes in the past, but always figured it was something updating itself while I wasn't home (like iTunes and my podcast subscriptions). But, this one morning...there was no corresponding activity from any of my computers, and I didn't see anything obvious with my TiVos/ReplayTVs. Though I could've just missed it.

So, I fixed the Timed Access control and put my current devices in. With a note that I should really look into installing RADIUS somewhere, so that it would be easier to maintain the list there than in the Airport Utility. Though I would lose being able to find the MAC address of some new wireless device that doesn't have the MAC address stamped on it....for addition to my DHCP server.

Later during the setup in: Another Airport comes to Lunatic Haven I had wiped out the settings....and didn't feel like putting it back in again. Which made it more urgent (in my mind) to get RADIUS working.

So, I went online and searched and searched and searched...on how to do this. I had looked before, and wasn't all that successful. There's no simple how-to apparently. But, I found bits and pieces around, and decided to just go for it.

First, I installed freeradius on my Ubuntu server 'box'.

sudo apt-get install freeradius

It starts right away; now to make it work. And, debug it. Well, most of the examples were for older freeRADIUS versions, so things weren't where they said, or command line switches were different, or it didn't work. I did find some examples of MAC address authorization, but they involved 'Auth-Type := Local' in the /etc/freeradius/users file. But, the clients.conf part seemed right. I strongly considered just doing 'Auth-Type := Accept'...but I wanted to figure this mess out.

client {
        secret = testing123
        shortname = airport
        nastype = other
}

So I kept searching and searching....eventually, I found fragments on a site called "Deploying RADIUS: Practices and Principles". It confirmed that I was basically on the right track; I just needed to figure out what to put in the users file to make it go from Access-Reject to Access-Accept.

Well, the example for MAC Address entry for users I had found was:

001122-334455  Auth-Type := Local,  User-Password == "testing123"

At first I was pointing my Airport Extreme at it and watching the debug output, and watching everything stop working now and then. But, eventually I used 'radtest' to test my freeRADIUS configuration. And, eventually, I found that what I needed was:

001122-334455  Cleartext-Password := "testing123"

And, all was good. I pointed my main Airport Extreme at it, and everything adjusted and worked. I then pointed the new Airport Extreme at it and things continued to work.
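Testing with radtest looks something like this sketch...the arguments are user, password, server, NAS port, and shared secret (values here are the placeholders from above, not real credentials; the command is echoed rather than sent, drop the echo to actually fire the Access-Request):

```shell
# radtest argument order: user password server nas-port secret.
# With the MAC-as-user entry, the "password" is the shared secret too.
USER=001122-334455
SECRET=testing123
echo radtest "$USER" "$SECRET" localhost 0 "$SECRET"
# A matching Cleartext-Password entry in the users file should get an
# Access-Accept back in the reply.
```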

Yay! :cool:



  07:57:00 am, by The Dreamer   , 1339 words  
Categories: Hardware, Computer, Storage, Ubuntu

Worked on Orac last night

Back on March 28th, when I last wrote about working on Orac, I mentioned that I looked at the harddrive cage to see about the condition of the fan on there....only to find that there was no fan there.

Reviewing the manual on the Gateway website, I found that the cage is used in more than one model...and that some of those models have fans, while evidently mine did not. Browsing the parts list for some of the other Gateway models that used the same drive cage, I found reference to a 60mm x 10mm fan, which I deduced was probably the fan that I would need to get for this location. I also found, from the Gateway manual for my model, that the motherboard did have a front chassis fan connector.

So, after some thought and checking around first, it struck me that eBay might be the better place to go. So I found a seller on eBay that explicitly said he shipped by USPS and bought one, and from another seller I got fan screws (a bunch of them, because I've needed them in the past before and I'm sure I'll have need for them in the was, I didn't actually need them this time though.)

Because I had recently built my new backuppc pool (should be posting about that adventure some day), I had been waiting for a moment when Orac was idle again and not busy refilling the pool with full backups of everything....It hadn't gotten any fulls of Zen yet; it didn't detect that Zen had gone away (to apply the recent Microsoft patches) during its recent attempt, so I had to step in and stop it. So that seemed like a good time to take Orac down.



  01:40:00 pm, by The Dreamer   , 389 words  
Categories: Software, Networking, Cox HSI, AT&T DSL, Ubuntu

ddclient & squid

In the aftermath of the summer storm of August 13th, (hmmm, totally missed that it was a Friday the 13th), I made a tweak to my ddclient config for updating dyndns for my DSL line. Because I found that it wasn't able to update the IP change while Cox was down.

Couldn't find a way to make ddclient bind to the local IP that routes out by DSL (or use a non-default gateway). But, I do have a squid proxy on the same box...and depending on what port I come in on, it can use either of my connections.

I set ddclient to send its updates through the squid proxy.

Couldn't use localhost, because ddclient does some kind of validation to require an fqdn+port, and localhost isn't an fqdn. And, yes, I use my dyndns domain as my home domain. So I can have bookmarks that'll work whether I'm at home or on the road ;D
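For reference, the resulting setup amounts to something like this sketch in /etc/ddclient.conf (the host names, port, and credentials here are placeholders, not my real config):

```shell
# /etc/ddclient.conf -- route the dyndns update through squid; ddclient
# insists the proxy be an fqdn:port, so plain localhost won't pass validation
proxy=myhome.example-dyndns.org:3128
protocol=dyndns2
server=members.dyndns.org
login=mylogin
password=mypassword
myhost.example-dyndns.org
```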

But, this change went had been less than 28 days since a refresh, and no IP change.

That was until this morning, when my IP did change.

The updates weren't working....seems that ddclient wants to do SSL all the way or not at all. No using an http proxy to connect out on SSL. But, I didn't feel like sending my dyndns password out non-SSL.... So, after some thought, I decided I would figure out how to set up SSL on squid.

I made the necessary configuration change, but no go. Seems that ubuntu doesn't distribute squid with SSL, because squid and openssl have incompatible open source licenses. So, I did a quick search to find the ubuntu way of rebuilding it from source.

apt-get source squid
sudo apt-get build-dep squid
sudo apt-get install devscripts build-essential fakeroot
cd squid-2.7.STABLE7
vi debian/rules
     # add --enable-ssl \ to the "# Configure the package" section
debuild -us -uc -b
cd ..
sudo dpkg -i squid_*.deb squid-common_*.deb

Changed it over, and it worked. :cool:



  06:05:00 pm, by The Dreamer   , 477 words  
Categories: Hardware, Software, Computer, BOINC, Ubuntu, Other Linux

I tried to upgrade 'lhaven' more...

...was less than successful.

Back following the icepocalypse, I resurrected 'lhaven' by changing its motherboard. The new motherboard was an upgrade to the old one, but I wasn't taking advantage of any of it. But, in my mind I thought that someday I might.

Having finally upgraded the OS from RedHat 7.3 to Ubuntu 10.04....and being able to participate in all BOINC projects again, I decided that I might look at upgrading it finally.

So I went looking for the fastest FSB 333MHz CPU that I could put into it. And, went looking on eBay for one. After a few unsuccessful attempts, I went and bought an Athlon 3000+ (and an Athlon 2400+). And, while I was on there, I got a new CPU Heatsink and Fan and a 1GB stick of DDR333 memory.

I wasn't sure what kind of memory was in the machine, but somehow I thought it to be a 1GB stick of DDR333 (since I for some reason had an extra DDR333 1GB stick that I gave away recently)....but memory told me the machine had two slots, so I'm not sure why I didn't upgrade the memory to more than 1GB...

Well, turns out my memory was faulty (as was lshw)...the machine had two 512MB DDR266 memory sticks. Which makes sense, since the old CPU was FSB266 (a 2000+)....guess that means I shouldn't have given away that 1GB stick of DDR333.

What I did was upgrade the memory to 1.5GB by putting the new 1GB DDR333 stick in the first memory slot, and a 512MB DDR400 stick in the second slot. I then looked at the CPU.

First I put the Athlon 3000+ in....and the new CPU heatsink/fan. It is a touch bigger; actually, it's a lot bigger than the old one....not quite compatible with my case...though it should be a simple enough mod once I get everything working again. Except I didn't. No POST.



  05:57:00 pm, by The Dreamer   , 1339 words  
Categories: Software, Computer, Storage, Ubuntu

declaring 'orac' alive again.

'orac' is back, long live 'orac'.

'orac' had gone away on Thursday, though it actually started on Wednesday.... For some unknown reason, I decided to start the upgrade of 'orac' from 8.04LTS to 10.04LTS.

The upgrade seemed to go okay, but then it wouldn't boot. I scrambled around looking for a livecd to boot with....didn't have any 64-bit ones handy, and 10.04 was hit and miss on booting. But, I could see vg0 (root, home, swap) with all the livecds except 10.04.

Something about 10.04 didn't like how vg0 was set up. I looked at 'box', which had been running 10.04 for a while and was working. Noticed the main difference between it and all the other md's and vg's...was that they weren't fdisk'd md's. When I had originally set up 'orac', I had used a recipe that I had found online on how to do it...since initial install of ubuntu onto RAID wasn't straightforward then. And after md1 was created, I had fdisk'd it and then pvcreate/vgcreate, etc. on it.

When I added the other two raid sets....I just pvcreate'd the md directly, without fdisk'ing it first.

In retrospect I should probably have done a backup of the filesystems while I was running a livecd that could see them....and then perhaps remade the disk and restored into it. But, I didn't, and while trying to 'fix' the problem...I corrupted things, and I wasn't able to recover (tried pvcreate -u + vgcfgrestore...but couldn't get the filesystems to come back).
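That recovery attempt amounts to recreating the PV with its original UUID from LVM's automatic metadata backup, then restoring the VG config. Sketched here with a placeholder UUID and device, and echoed rather than run, since these commands are destructive:

```shell
# Recreate the PV with the old UUID plus the metadata backup file, then
# restore the VG configuration -- UUID and device are placeholders
PV_UUID="Xxxxxx-0000-0000-0000-0000-0000-000000"
MD_DEV=/dev/md1
echo sudo pvcreate --uuid "$PV_UUID" --restorefile /etc/lvm/backup/vg0 "$MD_DEV"
echo sudo vgcfgrestore vg0
# Even when this works, the LVs still need activating (vgchange -ay vg0)
# and the filesystems checking before mounting.
```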

After lots of other attempts and such, I reached the point where I decided things were lost, and set out to figure out what I could get back.

backuppc on localhost doesn't back up the entire localhost....but it had my home directory backed up (but not all of /home), and it was also backing up /etc. Realized that the one thing it wouldn't have is my crontab files. But I had a dd image of most of the disk (the front part munged), so I strings'd it and recovered my crontab from it. And, then set out to install 10.04LTS fresh.

Since md1 was going to get redone, I decided to redo md0..../boot. Made it a bit bigger than before so that it won't get as crowded from accumulating every new kernel that is released until 12.04LTS comes out. Or in case some future release needs more than the old size to work. Had actually run into this problem years ago at my previous job....

But, the install kept failing....asking for the disk again....I burned several copies, downloaded from other sources, but no go. So, I tried 9.10 (thinking I would upgrade to 10.04 immediately), but it also didn't work. Eventually, I found a reference on google that said the 10.04 (and 9.10) installs were picky about what kind of optical drive was being used. It specifically mentioned a particular DVD burner, which 'orac' came with...though I have never used the drive since I did the original install. I tried a USB stick, but couldn't get the machine to use that, and the only other external drive I could find was firewire (and there's no BIOS option to boot from firewire).

So, I took apart my machine and the firewire drive. And, put the DVD-ROM drive that was in the firewire enclosure into the machine. The DVD-ROM drive had originally come from my circa 2003 Dell, known as 'tardis'.....when I had swapped in a DVD burner.

'box' is the Linux machine that I do cd burning on....and I hardly do DVDs anymore, though I did get a new DVD burner recently for the new 'gumby'.

Anyways....with the 'new' optical drive in 'orac'....I was able to get 10.04LTS to install, and then started the long process of getting my environment back. First problem was backuppc installed with a different uid/ it wouldn't access the old /var/lib/backuppc storage. I opted to fix it the wrong way....that took forever....doing the chown of the files/directories. Should've just changed the uid/gid that backuppc had installed itself as. But, then when I tried to do the restore...the problem was sudoers, had forgotten about that.

I then restored /etc to another place, and restored my home directory alongside the one the install had created. When done, I swapped my home dir and rebooted. That was a mess. The desktop was all messed up...not just missing apps. Figured it was because the desktop didn't like the old settings....but couldn't find any details on how to upgrade the desktop, if I hadn't done an upgrade. I thought other OSs handled this kind of thing. But, not here. So, I flipped back and slowly copied stuff over and restored settings, etc. And, reinstalled various apps as I went.

Then I decided that I had made sufficient progress on getting 'orac' configured, that I didn't want to lose things. I had disabled backuppc from doing automatic backups during the restore. So I turned it back on and backed up the other systems around my home (after first doing a new backup of the new 'orac'). It didn't like .gvfs in my home dir...the old version complained but continued, in this version it complained and the tar returned an error due to the previous error...causing the rest of the backup to fail (the localhost backup does 4 'shares'), so I tweaked the config to get it to work.
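The tweak amounts to an exclude in BackupPC's per-host config, something like this sketch (the file path and pattern here are assumptions):

```perl
# /etc/backuppc/localhost.pl -- hypothetical: skip the per-user .gvfs fuse
# mount that tar can't read, so one bad entry doesn't fail the whole backup
$Conf{BackupFilesExclude} = {
    '/home' => ['/*/.gvfs'],
};
```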

The only thing I saw was that it tried to back up 'ulkc', my ubuntu notebook...but it wasn't on at the moment, so it failed. It wasn't going to automatically back up anything else yet, since it's one of my blackout periods. So, I started kicking off manual backups of various things, and watched it work. First thing that didn't work was a backup of a windows share. Seems smbclient changed since 8.04LTS...and one of the options used by backuppc 3.0 didn't work anymore. The new version is 3.1, and its default options drop that one, so I matched things up and it did its thing.

The one problem that still didn't work, which I think was the reason I decided to bite the bullet and upgrade to 10.04LTS (to get the newer backuppc), was incremental backups where a very large file has changed since the full. Usually the virtual harddrive of a VirtualBox. Sadly this wasn't fixed in 3.1. Guess I'll have to look at some other way to deal with it. Like exclude it from the regular policy and create a new one that only does that one file....or something. Or maybe ....

I continued to work on other things, as I worked through comparing the old /etc with the new 'orac' /etc. Though interrupted by trying to get file shares working (I had exported my Dropbox directory from another server for access from other computers on my home network....though LAN sync is available now, I haven't undone the stuff yet). And, getting snmpd working so I could see the impact to resources and network from all the backuppc jobs running, in cacti.

One strange problem is that 'orac' cannot access the NFS shares on 'lhaven' (RedHat 7.3), but 'box' can....the only difference I can think of is that 'box' was upgraded to 10.04LTS rather than fresh-installed.... Though I guess it is time I worked on moving services off of 'lhaven' and figured out what I'm going to do to either replace or upgrade it.

The other issue was Handbrake. Originally, I had built it by hand...since there wasn't a package for 64-bit Linux. Though later, I got a newer version via PPA. But, the version that I got via PPA for Lucid didn't work. So, I eventually went and built my own again...from source...and that build works....

Hopefully, when 12.04LTS comes along....things'll just upgrade smoothly.

The other odd thing is that 10.04LTS seems slower....particularly in file I/O. The machine seems to really degrade in performance doing a large file copy (as was happening when I was copying directories from my old home dir to the new one.) All the CPUs are bored while this is happening (it's a 2.4GHz Core 2 Quad....with 8G of memory....) Hopefully something that'll resolve itself later?


  08:29:00 am, by The Dreamer   , 703 words  
Categories: Software, Computer, Networking, BOINC, Ubuntu

'Accidentally' Upgraded 'box' to Lucid

Last night I wasn't thinking....running into the fact that my boinc-client package is broken on 'box', I thought..."why isn't it running the boinc-client from Lucid?" And, I proceeded to fix it with "apt-get"...and then realized that I'm still running karmic, and unbreaking it wasn't happening. The boinc-client pre-Lucid is too old for a project I'm in, and it is annoying how ubuntu doesn't update packages within releases (which gets really annoying with LTS.)

Anyways....I decided the way to resolve the mess I had now made by downgrading my boinc-client, was to upgrade to Lucid Lynx 10.04LTS. It was my plan to upgrade both my Ubuntu servers to 10.04LTS (the other being 'orac' which is currently 8.04LTS)....but I was going to put it off to when I had more time and allow time for the release to stabilize (given all the issues I had when I first started running box on fresh Karmic).

So, it started....things got a little annoying, in that the upgrade requires attention....and I didn't want to spend my whole evening watching it upgrade. So, I'd check it now and then, and sometimes find it stuck waiting for me to make a decision for it. Though the first one, where it wants to restart stuff after a pam upgrade...that's annoying. Just do it, okay? The stuff it restarted wasn't working right along the way anyways. Namely, I found that I couldn't access first I thought squid had become dorked, and I couldn't restart it, because invoking /etc/init.d/squid said it was now a service...but using service didn't find it. Though later I realized it wasn't working because DNS was broken ('box' is my primary DNS server)... I was able to restart that in the usual manner.

Partly because of the pauses, and a bit due to wasn't looking like I would finish before 'bedtime'....but it seemed that it would be close enough.... It wasn't, but I ended up staying up to the bitter end. So that I could reboot and do a quick check that all was clean.

Namely, I checked that named, dhcpd, ntpd and squid were running (since these were some key services of this server, and were ones that often failed to start after boot in karmic...especially named. Manual starts always worked, so at one time I was restarting these services in rc.local, where it always worked, rather than at the 'normal' time).
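That rc.local belt-and-braces would have looked something like this sketch (the init script and package names are assumptions for karmic-era packages):

```shell
#!/bin/sh -e
# /etc/rc.local sketch: re-kick the services that sometimes missed coming up
# at boot under karmic -- named especially
/etc/init.d/bind9 restart
/etc/init.d/dhcp3-server restart
/etc/init.d/ntp restart
/etc/init.d/squid restart
exit 0
```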

Then I went to fill in the missing icons in my launcher panel....there were two holes, there also seemed to be a hole in the tray area. The missing icons were Evolution (which I right-clicked to make reappear), and boxee (which I installed the latest of). I then called it a night.

The next morning, I continued to poke around some more....adjust appearance, add some chat/broadcast accounts...and look into my ubuntuone issue....first, the tray thing was missing; apparently it's by design...but, I don't remember that. I poked around some more and some more....then I looked at 'ulkc' and saw that it also didn't have the tray applet anymore, and I just hadn't missed it... So, I then went to check that it was connected to my account and sync'd. Actually, it really was syncing, etc. When I was at Penguicon, I had put a copy of the pictures I had taken into my cloud...expecting to see them later on 'box' when I got home. But, they would never appear. Upgrading 'ulkc' didn't help things.

During the poking around, I disconnected 'box' and reconnected...and that apparently fixed whatever was wrong, as it started syncing...and soon the folders appeared.

While this was going on, I thought that maybe I would need to finish things up by remote desktop...but sadly I discovered that remote desktop still doesn't work in Lucid. Turning it on consumes 50% of my CPU. My 8.04LTS (orac) doesn't suffer from this problem. But, I want to bring both servers in sync eventually, so I can see about getting failover for dhcp working between the two.

Oh, least it seems more successful than when I had upgraded 'box' to Karmic. And, now I'm at an LTS where it can stay for the next 2 years.... We'll see what happens when I upgrade 'orac' from 8.04LTS to 10.04LTS next month....


  04:20:00 pm, by The Dreamer   , 1373 words  
Categories: Software, Computer, Storage, Ubuntu

5-bay RAID and backuppc

Ever since I learned about 5-bay port multiplier SATA enclosures, I've been wanting to get one to play with. And, when I acquired 'orac'...I sought out a port multiplier eSATA adapter to include in the build of the system. And, then waited for support of the adapter in ubuntu. Stopped on 8.04LTS and waited for inclusion in a kernel.

I took a step closer when I finally got the 2-bay enclosure to mirror a pair of drives I had impulsively purchased. Since it worked, I then bought a 5-bay enclosure (it came up as a NewEgg ShellShocker).

But, after some thought, I decided that what I wanted to do was build a 5 disk RAID5 using 1.5TB it would get me at least 5TB of real storage. And, decided that I would wait until 1.5TB drives were under $100.

In the meantime, I collected a bunch of old SATA drives to see what trouble I could get into...it didn't last long, as one was definitely dead and two quickly failed. Of course, two of them had been kicked out of RAID1 setups previously, and the other was a failing drive that I had replaced/upgraded. So, it was back to waiting.

Eventually, I got 3 Samsung 1.5TB drives through NewEgg ShellShocker. And, I set up a 3 disk RAID5. Sadly, it didn't last very of the drives failed while I was building the array. I exchanged it for a new drive with NewEgg, and waited for it before trying again. While I was waiting, I formatted the survivors a bunch of times, learning about smartctl and various things of drive repair. One of the remaining two original drives was showing signs of slow death and one unrecoverable sector. When the new drive arrived...I did a long self test and a quick, but full, format....taking a couple days before I created the 3 disk RAID5.

Once that was done, I created a new big filesystem at /var/lib/backuppc to have backuppc on 'orac'. Initially a 1TB filesystem.

Then one day, the bad thing happened. A drive on 'gumby' failed. It was time to stop putting off figuring out how to set up backuppc and start putting stuff into it. It turned out to be a lot easier than I thought to get started, though I did have to rebuild smbclient to have a longer timeout to get it to successfully complete a full backup of a given partition. I then learned some other stuff and fine-tuned the configs.... One big change was I broke out the windows hosts into individual partitions (ie, gumby_c, gumby_e, gumby_f), and came up with a 'semaphore' solution so that it wouldn't back up more than one partition at a time from a windows host. Eventually, I had all my systems in it (it fully backs up everything except itself, where it backs up only key parts....) I even got it backing up my work MacBookPro.
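The 'semaphore' can be sketched with BackupPC's pre/post dump hooks, something like this per-pseudo-host config (the hostlock helper script is hypothetical, not part of BackupPC):

```perl
# In each gumby_* host's config: serialize on one lock per physical machine,
# so gumby_c, gumby_e and gumby_f never dump at the same time.
# UserCmdCheckStatus makes a failed lock-acquire abort the dump (retried later).
$Conf{DumpPreUserCmd}     = '/usr/local/bin/hostlock acquire gumby';
$Conf{DumpPostUserCmd}    = '/usr/local/bin/hostlock release gumby';
$Conf{UserCmdCheckStatus} = 1;
```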

I later grew the filesystem out to 2TB.

