
11/05/12

  10:30:00 am, by The Dreamer   , 2100 words  
Categories: Hardware, Software, Computer, Storage, Ubuntu, FreeBSD, VirtualBox

Orac is looking strangely bare, with Zen taking over.

For a long time, I've been running a 6-drive RAID 10 array of Hitachi 5K3000 2TB drives in Orac for backuppc. This configuration got me a 5.4TB array, and somewhat better performance than when I tried a RAID6 configuration. But eventually I kept running out of space, and the price of hard drives went up, so expanding the array over time didn't happen as I had hoped. Being RAID10, the options were to concatenate another array, either 2 drives in RAID1, 4 in RAID10, or 4 as 2 RAID1s....using the volume manager. Or maybe see if RAID10 would deal with having all 6 drives upgraded to 3TB, though I hadn't considered the transition from 512-byte to 4K sectors and how it would cope with that.
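Since I never did do the expansion, here's roughly what the concat option would have looked like. This is only a sketch, assuming the backuppc volume sits in LVM on top of md; the device names and the vg/lv names are hypothetical:

```shell
# Sketch: concatenating a second mdadm RAID1 onto the LVM volume group
# instead of reshaping the existing RAID10 (all names hypothetical).
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
pvcreate /dev/md1                             # make the new array an LVM PV
vgextend vg_orac /dev/md1                     # concat it into the volume group
lvextend -l +100%FREE /dev/vg_orac/backuppc   # grow the LV over the new space
```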

Though I did eventually find out when I upgraded a 1.5TB RAID1 set to become a 2TB RAID1 set....going from ST31500341AS to ST2000DL003 drives. I contributed my experience here: http://askubuntu.com/questions/141669/creating-properly-aligned-partitions-on-a-replacement-disk/ It first started because one of the ST31500341AS drives had failed.
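For reference, the alignment question comes down to the partition's start sector. A quick sketch, assuming 512-byte logical sectors on the 4K-sector (Advanced Format) replacement drive:

```shell
# To list partition start sectors: parted /dev/sdX unit s print
# A partition is 4K-aligned when its start sector (counted in
# 512-byte logical sectors) is divisible by 8, since 8 x 512 B = 4 KiB.
echo $((2048 % 8))   # 2048, the modern default start: prints 0 (aligned)
echo $((63 % 8))     # 63, the old DOS default start: prints 7 (misaligned)
```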

Before the failure of one of the 1.5TB drives in the above mentioned RAID1 set, I had 4 ST31500341AS drives in a RAID5 on old-Zen. It had been done under the RR622, under Windows, with NTFS partitions, etc. I had tried copying the data at various times, not really having anywhere else that would hold the data...but wanting to get it over to FreeBSD for recovery. While I got the rr622 driver working, and it saw that I had a single array (rather than the native driver, which would see the 4 individual disks), I couldn't get access to the data. Though it had worked when I was previously playing around with Xen. (I had tried copying it then....to a 2x1TB RAID0 set, but then one of the 1TB drives died....so I lost the copy. I had then replaced it with a 2TB RAID1 set....using an ST2000DL003 and an ST2000DM003, the DL being a 5900RPM drive with a 5 year warranty...while the DM drive is a 7200RPM drive, but with only a 1 year warranty. And, it turns out the 1 year is generous.)

At work, I had built my FreeBSD desktop using a pair of ST1000DM003 drives...and 3 drive failures later....it is now a pair of ST2000DL003 drives. Yeah...I was having trouble with the array, and apparently using XFS was a mistake too...because I thought it was recovering, but instead it was slowly eating the data. When I nuked the RR622 RAID5 array and switched to using it as JBOD to create a RAIDZ set under FreeBSD...I found that there was nothing left to copy back from the RAID1 array. D'Oh! >:XX

Though I had also copied the Microsoft WindowsImageBackup files, to see if I could mount the VHD file under VirtualBox to help in recovery. I largely had the data in bits and pieces elsewhere; it was the environment I was wanting to recreate...and Oops!Backup didn't back up that part anyways (the data I was mainly trying to migrate). The image mounted, and I could see it...but soon after, Windows would try to fix it and then it would disappear....kind of like what it did on February 15th to make the original Zen go away. No idea what kind of disk rotting the Intel Matrix RAID had been doing, when it had to initialize the array again after every Windows crash. I've had Ubuntu crashes, but the RAID arrays remained stable...usually. While with Windows & Intel RST....it was pretty much every time. I'm sure it was slowly corrupting things over time to where things wouldn't recover, though it chose to do that after an automatic reboot for Windows updates...and the day before I left for my first Gallifrey One, which made things even more annoying.

Anyway, with another 1.5TB drive freed up, I contemplated adding it to the RAIDZ I had made of the 4 1.5TB drives, keeping it as a hot spare, or just using it by itself -- living dangerously. I ended up with the latter, for some temporary data. Because in my mind I was starting to lean toward what happened next.


08/12/11

  07:50:08 am, by The Dreamer   , 216 words  
Categories: Hardware, Storage

ST31500341AS vs 0F12117

So, thinking about how the sync of the ST31500341AS to the 0F12117 was slightly slower than the sync of 0F12117 to 0F12117, I wondered which specs of the new drive would edge out.

Specs for the ST31500341AS that I could track down include:

Formatted capacity               1500GB
Guaranteed sectors               2930277168
Heads                            8
Discs                            4
Bytes per sector                 512
CHS                              63/16/16383
Recording density                1462 kbits/in max
Track density                    190 ktracks/in avg
Areal density                    277 Gbits/in2 avg
Spindle Speed                    7200 RPM
Internal data transfer rate      1709Mbps max
Sustained data transfer rate OD  135 MBps max
I/O data-transfer rate           300 MBps max
Cache buffer                     32MB
Average Latency                  4.16 ms
Track-to-track seek time         <0.8ms read
                                 <1.0ms write
Average seek, read               <8.5ms
Average seek, write              <10.0ms

Specs for the 0F12117 that I could track down include:

Recording density                1443 Kbpi
Track density                    285 Ktpi
Max areal density                411 Gbits/in2
Data buffer                      32MB
Rotational Speed                 Coolspin (5940 RPM)
Media transfer rate              1366 Mbps max
Typ Sustained transfer rate      136 MBps max
Interface transfer rate          600 MBps max
Heads                            6
Discs                            3
Bytes per sector                 512
CHS                              63/16/16383
Total Logical Data Bytes         2000398934016 bytes
Number of sectors                3907029168
Track-to-track seek time         0.5ms typ 0.7ms max read
                                 0.6ms typ 0.8ms max write
Average seek, read               <8.5ms
Average seek, write              <10.0ms
Average Latency                  5.05 ms

Probably all comes down to recording density....the Hitachi 2TB having more data stored in less area than the Seagate 1.5TB makes the Hitachi faster than the Seagate, despite the slower spindle....

08/11/11

  07:47:49 am, by The Dreamer   , 718 words  
Categories: Hardware, Software, Computer, Storage

The storage upgrades on Orac and Zen begin

Because one of the 1TB RAID1 drives was starting to fail...I was looking at options to replace it.

I originally had a plan to upgrade one of my 1.5TB RAID1s to a pair of 2TB drives....freeing one of the 1.5TB drives to test out the OCE/ORLM feature of the RR622 & TR5M-BP on Zen...converting the current 2x1.5TB RAID0 to a 3x1.5TB RAID5 (or perhaps 4x1.5TB). Oops!Backup seems to be doing well, and it's filling up the space; I'm sure there's a setting to tell it not to, but not an issue at the moment. Guess I should buy it before the 30 day trial runs out.

At first I was looking at new 1TB drives....though for $20 more I could get 2TB drives, the same kind as in the 6x2TB RAID10. The 1TB and 1.5TB drives have been 7200RPM (as were all previous drives)....the Hitachis are the first of the greener drives....5940RPM. Given that I'm doing port multiplier (PM) RAID in a first gen PCI Express box, I'm probably still not seeing the full potential of the greener drives.


07/09/11

  09:03:13 pm, by The Dreamer   , 1290 words  
Categories: Hardware, Software, Computer, Storage

New arrays on Orac

This is an owed post (over 3 months in the making....). I had thought of lots of things to write for this since I meant to write it, but didn't write anything. And now it's so old...that I'm just going to be really terse....


The lead-in to this posting was Worked on Orac yesterday. So, I got those 5 2TB drives...and first tried to make them a RAID5 using mdadm. And started filling it up....though the performance didn't seem to be much better using XFS, plus the new backuppc seemed to have regressed on handling XFS.... So I heard about ZFS on Linux....as kernel modules that taint the kernel. And tried out RAIDZ, with its variable striping, to maybe do better on performance. It seemed nice, but then stack dumps started filling up my logs....rebooted and then did a zpool scrub, and it went to work fixing lots and lots of errors, and then hit a few unrecoverable errors.
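For anyone following along, the scrub sequence is the standard one; a sketch, with a hypothetical pool name:

```shell
# Kick off a scrub, which walks every allocated block and repairs
# whatever the pool's redundancy can cover (pool name hypothetical):
zpool scrub tank
# Watch progress, and see repaired vs. unrecoverable errors:
zpool status -v tank
```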

Guess no ZFS, and XFS didn't seem enough of a gain to make it worth being different from everything else. Even though there's one really big problem with ext4: resizing it causes corruption. It's supposed to be fixed now, but before I switched to the new array, I grew the backuppc volume from 3.75TB to 3.8TB, offline...and it still resulted in corruption.
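The safer offline-grow sequence, for what it's worth, is fsck, grow, fsck again, so any corruption shows up before new data lands on the filesystem. A sketch, with hypothetical mount point and LVM names:

```shell
# Offline grow of an ext4 filesystem on LVM (names hypothetical)
umount /var/lib/backuppc
e2fsck -f /dev/vg_orac/backuppc          # verify before touching it
lvextend -L +50G /dev/vg_orac/backuppc   # grow the underlying LV
resize2fs /dev/vg_orac/backuppc          # grow ext4 to fill the LV
e2fsck -f /dev/vg_orac/backuppc          # verify the resize took cleanly
mount /var/lib/backuppc
```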

So, I was making the new backuppc volume the full size of the array. As I mulled over the situation...I decided that I would try RAID10. Sure, it wouldn't be as huge a boost over the old RAID6 array of 1.5TB drives, but it would still be an increase...and no idea what I'm going to do when I need more space. But first I had to get a 6th 2TB drive to do RAID10. Plus it would mean that I would need to do something else with the 1.5TB drives. One was going to have to come out. Sure, it could run degraded initially (though I did find that it wouldn't come back after a reboot; guess the boot process that brings degraded arrays online...only does it for OS filesystems). Though the boot gets stuck because it can't mount filesystems, and the recovery for mounting just doesn't work still. Getting into single user is such a pain too. Though after I broke my sudoers file recently, I turned off hiddenmenu.

The hard part about making the RAID10 was figuring out the ordering of the devices...so that it was mirrored across the eSATA channels and striped within them. There's only 2 channels, and 6 drives in this array...so I figured that was the best way to go. Though now that I've realized it's only PCI Express 1.0, not sure if that was the best way to go. Though to try the other way would call for me to get 8 drives? And who knows what the future holds for supporting backuppc....
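With mdadm's default "near" layout, consecutive devices in the create command form the mirror pairs, so alternating the two channels in the device list is what splits each mirror across channels. A sketch with hypothetical device names (sd[a-c] on one channel, sd[d-f] on the other):

```shell
# RAID10 "near 2" layout: devices 1&2 mirror, 3&4 mirror, 5&6 mirror.
# Alternating the channels in the list puts each mirror's halves on
# different eSATA channels (device names hypothetical).
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=6 \
    /dev/sda1 /dev/sdd1 /dev/sdb1 /dev/sde1 /dev/sdc1 /dev/sdf1
```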

An internal bitmap doesn't seem to degrade RAID10 as much as it does RAID5 or RAID6, so I have that turned on now.
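Turning the write-intent bitmap on (or back off) is a live operation on a clean array:

```shell
# Add an internal write-intent bitmap to an existing array; it makes
# resyncs after a crash or unplug incremental instead of full.
mdadm --grow --bitmap=internal /dev/md0
# And to remove it again if the write overhead proves too much:
# mdadm --grow --bitmap=none /dev/md0
```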

Now there was the question of what to do with the 4 remaining drive bays and the 4 1.5TB drives. Would I go with another RAID10, or a couple of RAID1s...would I concat them into the LVM for growth, or make them separate?

Well, on the old 1TB array was my old MMC volume, which needed more space and part of the 5x1.5TB RAID6 was a backup volume...also in need of more space. I decided that I would go with two separate RAID1s using the Seagate 1.5TB drives (I had pulled the Samsung 1.5TB to make it degraded....that drive has since been hooked up to TARDIS for local backups).

Around this time, my Roku appeared...and I went through the various attempts to map a network drive to it for local content. Settling on the HSTI Media Stick and its 1TB maximum. I settled on going with 2 1.5TB RAID1 arrays. One of them was made fully available to be the new MMC volume. And, the other...I made a 1TB volume (called TARDIS) for the HSTI Media Stick, and the rest became the new backup volume.

The old 1TB RAID1 array stayed online, until it was recently repurposed into the volume for Time Machine backups (only 931GB).

Now I've been debating playing around with external bitmaps and/or external journaling to see if I can get more performance. But external bitmaps go away after a reboot, and that just seems to be a bug that isn't going to get fixed anytime soon. External journaling would require me to find some devices for it....so I got the idea of a PCI Compact Flash adapter and some compact flash cards....I have a couple of the PCI Compact Flash adapters....and I do have a few compact flash cards around. Though when I was playing around with ReadyBoost on Zen, I found they weren't really that great on speed. And I was going to want to get as much speed out of things as I could here....so I did some checking, and now I'm waiting to get around to buying some 600x CF cards to try this. The fastest card available in my collection is a 133x. And I don't recall why I bought that one. Ones earlier than that would be left over from when I had my PowerShot S20, then an REB1200, and then a Nikon Coolpix 5700.
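If I do try it, an external bitmap is pointed at a file on a different, faster filesystem, and since the association doesn't survive a reboot, re-adding it would have to go in rc.local. A sketch, with a hypothetical mount point for the CF card:

```shell
# External write-intent bitmap on a file living on the CF card
# (mount point hypothetical; mdadm wants the bitmap file off the
# array itself, traditionally on an ext2/ext3 filesystem).
mdadm --grow --bitmap=none /dev/md0                # drop the internal one
mdadm --grow --bitmap=/mnt/cf/md0.bitmap /dev/md0  # attach the external one
# ...repeated from rc.local after each boot, since the external
# bitmap association is lost across reboots.
```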

These days, I'm all about the SD cards for my digital cameras (Eye-Fi), though been thinking of getting an ultracompact (and wearing my holster less often)....and some that I've looked at use MicroSD cards. Don't have any extra Class 10 ones laying about though....though I do have a couple of Class 6 8GB cards that need a home (I got them in a 2for1 sale, and the need for 1 didn't happen as planned....and when I do get around to getting that device, I'm sure I'll probably cave and get a 16GB or 32GB Class 10 MicroSD card.)


03/28/11

  01:43:48 pm, by The Dreamer   , 837 words  
Categories: Hardware, Computer, Storage

Worked on Orac yesterday

Yesterday, I decided that I should poke around Orac in preparation for building that new 5x2TB RAID5 array. My backuppc pool keeps filling up, and deleting old backups isn't freeing up enough space anymore (5x1.5TB RAID6 array.)

At first it was just going to be a simple procedure: add two eSATA cables from the computer to the Sans Digital MS2T+B, then shut down and move the two 1TB drives in RAID 1 from the bottom slots of my TR5M-B (the upper 3 are unoccupied). But as I was pushing on the cables, the bracket popped in...so I had to open the case to put it back in place.

While I was in, I decided to see what kind of fan was on the drive cage. A couple of weeks earlier, I had gotten a SMART alert of high temperature, and cacti showed that there was a rise for a few hours in the late evening. It had been warm that day, warm enough that I touched the AC briefly in the earlier part of the evening. Guess I needed it to stay on for the rest of the evening.... But there was no fan...though there were mounting holes for one, and there's supposed to be a fan connector on the motherboard for it. Though not all models had one; evidently not the GT5635E. Though I wonder if I should add one; maybe the addition of the second internal drive makes it desirable (so far I've had to replace the second drive twice....the current one is now a newer generation than the original drives, and runs cooler).

Since there wasn't a fan, I didn't take the cage out....so I didn't measure what size fan might go in the spot. Though I have a rough guess as to what size it is, so perhaps later on I'll pick up a fan to put there. I should probably see about a new rear fan too....perhaps a faster one might help.

But, I put things back together and moved the drives, started things back up.

Of course, the first problem is that now all my drive letters are different....which messed things up like cacti (hddtemp recording), gkrellm, and rc.local (where I change the I/O scheduler for the RAID6 array drives to noop). There was also the complication that the filesystem on the drives I just moved hadn't been fsck'd in over 6 months. So, it needed to fsck them. root also needed to be fsck'd. It took a long time to boot, but eventually it did. Though the filesystem on the moved RAID1 array hadn't finished checking...I had set 'nobootwait' on the filesystem. I'll probably remove those now...since they have fixed Bug #563916. But after fixing rc.local (and before fixing cacti) I rebooted....so it still needed to do the fsck.
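The drive-letter reshuffle is exactly what the persistent /dev/disk/by-id names avoid; something like this in rc.local would survive the move. A sketch, with a hypothetical drive-model glob:

```shell
# Set the noop I/O scheduler using persistent by-id names instead of
# sdX letters (the ata-* glob is hypothetical; list /dev/disk/by-id
# to find the real names):
for link in /dev/disk/by-id/ata-Hitachi_HDS5C3020*; do
    dev=$(basename "$(readlink -f "$link")")   # resolves to e.g. sdc
    echo noop > "/sys/block/$dev/queue/scheduler"
done
```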

Later I still had to reboot again, because two services that reference the filesystem...weren't happy...well, one was really unhappy, and the other wasn't reporting valid information for it and needed a restart to make it work (I probably could have fixed the issue without a reboot, but I wanted to make sure it rebooted right after another change I had made). There's still something odd going on, affecting a different RAID1 array....will look into that later (or I could opt to move the filesystem to the RAID6 array)....since it could use a grow.

