
New arrays on Orac

07/09/11

  09:03:13 pm, by The Dreamer   , 1290 words  
Categories: Hardware, Software, Computer, Storage

This is an owed post (over three months in the making....). I had thought of lots of things to write for it since I first meant to post it, but never wrote anything down. And now it's so old...that I'm just going to be really terse....

:lalala:

The lead-in to this posting was Worked on Orac yesterday. So, I got those 5 2TB drives...and first tried to make a RAID5 using mdadm. And started filling it up...though the performance didn't seem to be much better using XFS, plus the new backuppc seemed to have regressed on handling XFS. So then I heard about ZFS on Linux...as kernel modules that taint the kernel. And tried out RAIDZ, with its variable striping, to maybe do better on performance. It seemed nice, but then stack dumps started filling up my logs...rebooted and then did a zpool scrub, and it went to work fixing lots and lots of errors, and then hit a few unrecoverable ones.
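For reference, the two setups I was flipping between went roughly like this (device names here are made up; the real drives sit behind the eSATA channels):

```shell
# 5x 2TB software RAID5 with mdadm, XFS on top:
mdadm --create /dev/md10 --level=5 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.xfs /dev/md10

# Or the ZFS-on-Linux route: a raidz pool, then the scrub that
# went to work after the stack traces showed up:
zpool create tank raidz sdb sdc sdd sde sdf
zpool scrub tank
zpool status -v tank   # reports repaired and unrecoverable errors
```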

Guess no ZFS, and XFS didn't seem enough of a gain to be worth being different from everything else. Even though there's one really big problem with ext4: resizing it causes corruption. It's supposed to be fixed now, but before I switched to the new array, I grew the backuppc volume from 3.75TB to 3.8TB, offline...and it still resulted in corruption.
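The offline grow that still bit me was roughly this sequence (the VG/LV names here are hypothetical):

```shell
umount /var/lib/backuppc
e2fsck -f /dev/vg0/backuppc         # clean check before resizing
lvextend -L 3.8T /dev/vg0/backuppc  # grow the LV under the filesystem
resize2fs /dev/vg0/backuppc         # offline ext4 grow...
e2fsck -f /dev/vg0/backuppc         # ...and where the corruption showed up
mount /var/lib/backuppc
```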

So, I was making the new backuppc volume the full size of the array. As I mulled over the situation...I decided that I would try RAID10. Sure, it wouldn't be as huge a boost over the old RAID6 array of 1.5TB drives, but it would still be an increase...and no idea what I'm going to do when I need more space. But first I had to get a 6th 2TB drive to do RAID10. Plus it would mean that I would need to do something else with the 1.5TB drives; one was going to have to come out. Sure, it could run degraded initially (though I did find that it wouldn't come back after a reboot...guess the boot only brings degraded arrays online for OS filesystems). The boot gets stuck because it can't mount the filesystems, and the recovery option for mounting still just doesn't work. Getting into single user is such a pain too, though after I broke my sudoers file recently, I turned off hiddenmenu.

The hard part about making the RAID10 was figuring out the ordering of the devices...so that it mirrored across the eSATA channels and striped within them. There are only 2 channels, and 6 drives in this array...so I figured that was the best way to go. Though now that I've realized it's only PCI Express 1.0, not sure it was the best way after all. To try the other way would call for me to get 8 drives? And who knows what the future holds for supporting backuppc....
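The ordering trick, roughly (made-up device names; say sd[bcd] on one eSATA channel and sd[efg] on the other): with mdadm's default near layout, adjacent devices in the list become mirror pairs, so alternating channels in the list keeps each mirror split across the two channels:

```shell
# dev0+dev1, dev2+dev3, dev4+dev5 are the mirror pairs with layout n2,
# so interleaving channel A (sdb,sdc,sdd) with channel B (sde,sdf,sdg)
# puts each mirror half on a different channel:
mdadm --create /dev/md20 --level=10 --layout=n2 --raid-devices=6 \
    /dev/sdb /dev/sde /dev/sdc /dev/sdf /dev/sdd /dev/sdg
```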

An internal bitmap doesn't seem to degrade RAID10 as much as it did RAID5 or RAID6, so I have that turned on now.
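Turning it on (and back off again, if it ever hurts) is just a grow operation on the live array:

```shell
mdadm --grow --bitmap=internal /dev/md20  # write-intent bitmap: faster resync after a crash
mdadm --grow --bitmap=none /dev/md20      # remove it if the write penalty is too much
```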

Now there was the question of what to do with the 4 remaining drive bays and the 4 1.5TB drives. Would I go with another RAID10, or a couple of RAID1s...would I concat them into the LVM for growth or keep them separate?

Well, on the old 1TB array was my old MMC volume, which needed more space, and part of the 5x1.5TB RAID6 was a backup volume...also in need of more space. I decided that I would go with two separate RAID1s using the Seagate 1.5TB drives (I had pulled the Samsung 1.5TB to make the old array degraded...that drive has since been hooked up to TARDIS for local backups).

Around this time, my Roku appeared...and I went through various attempts to map a network drive to it for local content, settling on the HSTI Media Stick and its 1TB maximum. So I went with the 2 1.5TB RAID1 arrays. One of them was made fully available as the new MMC volume. And on the other...I made a 1TB volume (called TARDIS) for the HSTI Media Stick, and the rest became the new backup volume.
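Carving that second RAID1 up went something like this (the md device and VG/LV names here are made up):

```shell
pvcreate /dev/md22
vgcreate vg_media /dev/md22
lvcreate -L 1T -n tardis vg_media        # 1TB cap to suit the HSTI Media Stick
lvcreate -l 100%FREE -n backup vg_media  # the rest becomes the new backup volume
```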

The old 1TB RAID1 array stayed online, until it was recently repurposed into the volume for Time Machine backups (only 931GB).

Now I've been debating playing around with external bitmaps and/or external journaling to see if I can get more performance. But external bitmaps go away after a reboot, and that just seems to be a bug that isn't going to get fixed anytime soon. External journaling would require me to find some devices for that...so I got the idea of a PCI CompactFlash adapter and some CompactFlash cards...I have a couple of the adapters, and I do have a few cards around. Though when I was playing around with ReadyBoost on Zen, I found they weren't really that great on speed. And I was going to want to get as much speed out of things as I could here...so I did some checking, and now I'm waiting to get around to buying some 600x CF cards to try this. The fastest card in my collection is a 133x, and I don't recall why I bought that one. The ones earlier than that would be leftovers from when I had my PowerShot S20, then an REB1200, and then a Nikon Coolpix 5700.
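What I have in mind, roughly (the CF card partition name is made up): an external bitmap is just a file on some other filesystem, and ext4 can keep its journal on a dedicated device:

```shell
# bitmap as a file elsewhere (this is the setting that doesn't
# survive a reboot, hence the gripe above):
mdadm --grow --bitmap=/bitmaps/md20.bitmap /dev/md20

# ext4 with its journal on the CF card instead:
mke2fs -O journal_dev /dev/sdh1           # format the dedicated journal device
mkfs.ext4 -J device=/dev/sdh1 /dev/md20   # filesystem that uses it
```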

These days, I'm all about SD cards for my digital cameras (Eye-Fi), though I've been thinking of getting an ultracompact (and wearing my holster less often)...and some that I've looked at use MicroSD cards. Don't have any extra Class 10 ones laying about, though...though I do have a couple of Class 6 8GB cards that need a home (I got them in a 2-for-1 sale, and the need for one didn't happen as planned...and when I do get around to getting that device, I'm sure I'll probably cave and get a 16GB or 32GB Class 10 MicroSD card).

So, it has been over a couple of months...and the 5.4TB backuppc volume had seemed to stabilize around 4.7TB...with most of the regular backups having 3 fulls (in fact, it's time for ZEN to run a new full, which suggests I've really been putting this posting off...). Though recently it has started creeping up again. Of course it would, since I keep adding more content here and there, particularly to both MMC and TARDIS, and now the new SIDRAT volume. Plus I only had 2 fulls of my MacBookPro until recently.

But hopefully I still won't need to worry about making backuppc bigger yet. Though I am plotting how to build a RAID5 array on Zen.

I wanted to go with some 1.5TB drives...but those are more expensive than 2TB drives. So I was looking at 5 2TB drives in a new array alongside the current 1.5TB RAID1s, though that seemed kind of wasteful. So now I'm thinking there are those 1.5TB drives in ORAC that I could upgrade with 2TB drives, freeing up those drives for ZEN.

Not exactly sure if I'll get 4 2TB drives right away, though...guess it depends on what I'm planning to do with the 1.5TB drives on ZEN. Though I pretty much have to go with at least 2 to begin with.

Though I wonder if I should grow the existing 1.5TB RAID1s with new drives, or build an all-new 2TB RAID1 and move to it. Guess it depends on what to build as the VG.....MMC is good for size now, TARDIS is going to stay...SIDRAT doesn't need to be as big as it is now, though backup could use more space. And it was always an eventuality to hang the TR2UT off my Airport Extreme (though not sure which one now, or the Express?).

And, I'm still getting more and more behind on other pieces of acquired tech.... :crazy:

2 comments

Comment from: Doug [Visitor]

There are no known data corruption issues with the Linux ZFS port. If it was detecting corruption it’s likely that something in your system was corrupting your data. It could be your memory, your SATA controller, your drives, or something else. I’d run it down before putting real data on the box.

07/09/11 @ 23:08
Comment from: The Dreamer [Member]  

Sure, like any software is going to be perfect from the very first release…which was the release I tested. Or like the many more recent releases that fail to build under dkms aren't hiding other problems. And, like I said…it was the kernel module that was filling my logs with stack traces when it was under load. So how can you say the kernel module was guaranteeing everything was written correctly before it threw its exceptions?

This is a real server that has been running fine for over 3.5 years now.

Even real live ZFS has bugs….I’ve had to restore a few datasets here during the recent liveupgrades to update 9 at work.

Then again, bugs are good…I used to make a living squashing them. Alas, companies don't make a ton of money fixing their bugs…it's new product development that drives that.

07/09/11 @ 23:34
