[identity profile] daddykatt.livejournal.com posting in [community profile] techrecovery
I work for a Hosting Company. We build a lot of servers on any given day.

Tonight I got an order for the following build

Dual quad-core Xeons (not uncommon)
CentOS 32 bit (kay, easy enough)
standard-issue 100meg connection
32 Gigs of RAM (8x4 gig sticks...on a hosted box that the customer will never physically see)
Nine 2 TB drives... configured as RAID Zero

Take a moment and wrap your head around this...

I don't THINK that our boards will recognize 32 gigs of memory, but there is an off chance they will... But they want a 32-bit OS... on dual 64-bit quad-cores? with THAT kind of memory?

Oh wait... 18 TB on RAID Zero makes it all work better... except that the 2TB drives have more bugs than a sid release of Debian... and RAID Zero really should never ever ever be used again... ever... for anything
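A quick back-of-envelope sketch of why striping nine drives is so fragile. This assumes independent drive failures and an illustrative 3% annual failure rate per drive (real AFRs vary by model and batch):

```python
# RAID 0 has no redundancy: the array survives a year only if EVERY
# drive does. The 3% annual failure rate here is illustrative.
def raid0_survival(drives, afr):
    """Probability the whole stripe set survives, given per-drive AFR."""
    return (1 - afr) ** drives

single = raid0_survival(1, 0.03)   # one drive:   ~97% survival
array9 = raid0_survival(9, 0.03)   # nine drives: ~76% survival
print(f"single drive: {single:.1%}, 9-drive RAID 0: {array9:.1%}")
```

Roughly one machine in four loses all 18 TB within the year, under these assumptions.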

Let's assume, for just a moment, that everything worked, just as the customer ordered it... the memory and board played nice, the drives didn't choke on their own blood, and the OS ran smoothly... I want to know just what the HELL this guy is running, on a HOSTED MACHINE that REALLY needs that kind of oomph... We don't install any GUIs, and all the users do is SSH into the boxes... What the heck does he NEED it for, and, if he is willing to pay through the nose (like we are charging him) then why doesn't he just get this all set up and host it himself? It would be WAY cheaper than the almost a grand per month that we are charging

My little internal geek just died a little

Date: 2010-07-11 08:38 am (UTC)
From: [identity profile] kyhwana.livejournal.com
Oh well! When one of those drives dies and he can't access more than 4GB of RAM, nothing you can do, he ordered it that way, sooo.

Date: 2010-07-11 10:05 am (UTC)
ext_8716: (Default)
From: [identity profile] trixtah.livejournal.com
Who the fuck uses RAID 0 for anything?

And have you told them that a 32-bit OS can't physically address 7/8ths of the memory they are purchasing? And who installs a 32-bit *nix these days anyway?

Date: 2010-07-11 11:55 am (UTC)
From: [identity profile] lihan161051.livejournal.com
WTF is it with people and RAID 0? Why do they do it? I have to talk someone out of this almost every other day, and explain that they really do want redundancy, and for a hosted box, maybe a hot spare as well.

But the 32 bit OS on a 64 bit board tells me exactly what kind of customer this is .. :p

Date: 2010-07-11 12:08 pm (UTC)
From: [identity profile] taleya.livejournal.com
...I don't know, but when he complains that it's not what he wants and you tell him he's not getting a refund and he orders new kit like a complete arse.....BAGS THAT HARDWARE

Date: 2010-07-11 12:52 pm (UTC)
wibbble: A manipulated picture of my eye, with a blue swirling background. (Default)
From: [personal profile] wibbble
The application I support at work doesn't play nicely with 64-bit Linux. We keep having to tell customers who supply their own hardware to stop installing CentOS 5 64-bit.

We've got boxes with 12GB of RAM and 32-bit CentOS 4 and it's just fine. As I understand it, you're limited to 4GB per process. And if you're only running one main application process per box, you're wasting modern multi-core systems anyway.
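The arithmetic behind this sub-thread, as I understand it: a 32-bit *virtual* address space is 4 GiB per process, but a PAE-enabled 32-bit kernel uses 36-bit *physical* addresses, so the machine as a whole can use far more RAM than any single process:

```python
# 32-bit virtual addressing vs. PAE physical addressing, in GiB.
GiB = 2 ** 30

virtual_per_process = 2 ** 32 // GiB   # 4 GiB of virtual address space
pae_physical_limit  = 2 ** 36 // GiB   # 64 GiB of physical address space

print(virtual_per_process)  # 4
print(pae_physical_limit)   # 64
# Eight 4 GiB processes could, in principle, use all 32 GiB between them.
```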

Date: 2010-07-11 12:57 pm (UTC)
From: [personal profile] wibbble
There's definitely cases for RAID 0. We're looking at using two SSDs in RAID 0 for speed, for data that can be trivially recreated if a drive dies. At the moment we're using a single SSD, but if that doesn't provide enough of a speed boost we'll use RAID 0.

Now, using all of your disks on RAID 0 is really stupid. :o)

Date: 2010-07-11 01:01 pm (UTC)
From: [identity profile] tullamoredew.livejournal.com
if it was an x86_64 OS, I'd say the guy is aiming at KVM/Xen/OVZ, but i386?

Date: 2010-07-11 01:26 pm (UTC)
From: [identity profile] trixtah.livejournal.com
Sure, there may be occasional apps that have that kind of problem, although I really think it's pitiful in something released in the last 5 years in particular. But unless you're running VMs, 3/4s of the memory of those boxes is wasted. Sure it's fine, but why pay for hardware you aren't using? (Unless you repurpose things sometimes).

For apps that aren't multi-threaded (again, the heavy ones should really get with the picture), they can still leverage memory even if they're just sitting on the one core. So many database apps thump memory but barely touch CPU. And many single-threaded apps will be able to have CPU affinity set, so multi-cores aren't a complete waste (even if there's only one app, you can have background processes running on one CPU and "the app" running on another).
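The CPU-affinity trick mentioned above can be done from the shell with `taskset`, or from Python via `os.sched_setaffinity`. A minimal sketch (the call only exists on Linux, hence the guard; it pins the calling process to one core from its currently allowed set):

```python
# Pin the current process to a single CPU core (Linux only).
import os

if hasattr(os, "sched_setaffinity"):
    cpus = os.sched_getaffinity(0)    # CPUs this process may run on
    target = {min(cpus)}              # pick one core to pin to
    os.sched_setaffinity(0, target)   # restrict the process to that core
    print(os.sched_getaffinity(0) == target)  # True
```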

Date: 2010-07-11 01:44 pm (UTC)
From: [personal profile] wibbble
No, it's not automatically wasted. You can have 4GB per process. If you have application processes that chew up the RAM you can easily have eight of those munching a total of 32GB.

In our case, we have some mod_perl horrors and a collection of background processes - and we want to make sure we have lots of RAM left over for disk caching, as IO is a major problem on our systems. So we have 12GB of RAM, an SSD for certain data (and possibly two SSDs in RAID 0 if a single SSD isn't fast enough after the next round of testing).

I don't doubt that the hardware the OP's been asked to put together will be wasted, but there's definitely cases where you can use 32-bit Linux with a pile of RAM and where RAID 0 is appropriate.

Date: 2010-07-11 03:06 pm (UTC)
From: [identity profile] snyperwolf.livejournal.com
Not sure about *nix systems, but I thought that a 32-bit OS could only address a few TB of hard drive space? Oh wait ... 32 bit with 64k per address should be able to handle like 256 TB of space.

Still, the typical FAT32 configuration caps at around 8TB.
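For what it's worth, the addressing math being guessed at here works out like this: with 32-bit LBAs and classic 512-byte sectors (the MBR-era limit), a disk tops out at 2 TiB, and 48-bit LBA raises that to 128 PiB:

```python
# Disk size limits implied by LBA width x sector size.
SECTOR = 512          # bytes, the classic sector size
TiB = 2 ** 40

mbr_limit_tib   = (2 ** 32 * SECTOR) / TiB   # 32-bit LBA: 2.0 TiB
lba48_limit_tib = (2 ** 48 * SECTOR) / TiB   # 48-bit LBA: 131072 TiB

print(mbr_limit_tib)    # 2.0
print(lba48_limit_tib)  # 131072.0 (i.e. 128 PiB)
```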

Date: 2010-07-11 03:50 pm (UTC)
From: [identity profile] cheezemeister-x.livejournal.com
We use RAID 0 for arrays involved in temporary data storage during large processing tasks. We want the performance gain at minimal cost. The effect is negligible if one of these arrays is lost, since it's only temporary data storage.

Date: 2010-07-11 04:07 pm (UTC)
From: [identity profile] tecie.livejournal.com
That is kind of a poor setup. If nothing else, memory is very cheap these days, and if you're going to production with a web server, you might as well design it to handle a lot.
As far as RAID0 -- I've actually never even seen that used in the real world. It seems like a very, very stupid idea. RAID 10 if you're super desperate, but 9/10 times RAID 1 works just fine for boot drives, and then RAID 5 for the rest. A single RAID with 18T of usable space on it will be interesting to manage. You might even want to go RAID6 for that. Or better yet, use a SAN.
Customers will often nickel and dime the hardware, and find out too late that there's a reason you want to over build. Of course then it's your problem to handle the emergency upgrades. I'd keep good notes if I were you. That way when they bitch and moan about performance, you have your notes.

I see a lot of 32 bit installs going through in my environment. Some apps just don't like 64 bit yet.
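The usable-capacity trade-off between the RAID levels weighed above, for nine 2 TB drives (simplified to parity cost only, ignoring metadata overhead and hot spares):

```python
# Usable space from nine 2 TB drives under various RAID levels.
def usable_tb(level, drives=9, size_tb=2):
    # parity cost in drives: RAID 0 none, RAID 5 one, RAID 6 two
    parity = {"raid0": 0, "raid5": 1, "raid6": 2}[level]
    return (drives - parity) * size_tb

print(usable_tb("raid0"), usable_tb("raid5"), usable_tb("raid6"))
# 18 16 14
```

So RAID 6 costs only 4 TB out of 18 versus RAID 0, in exchange for surviving two drive failures instead of zero.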

Date: 2010-07-11 06:12 pm (UTC)
From: [identity profile] tullamoredew.livejournal.com
Not too sure; I think that would depend on the FS. Nobody said the entire logical disk won't be carved up into a bunch of small LVs.

Date: 2010-07-11 08:15 pm (UTC)
jecook: (Default)
From: [personal profile] jecook
O.O

That box, if the storage was configured as RAID 5, is close to what we are running VMware on for something like 20 machines. We actually have two of them set up as an HA/DRS cluster, and we are using an NFS share on our SAN for the datastores.

Date: 2010-07-11 08:35 pm (UTC)
From: [identity profile] lihan161051.livejournal.com
for data that can be trivially recreated if a drive dies

That's the key criterion for reasonable use of RAID 0. Because RAID 0 is only as strong as the weakest link ..

Date: 2010-07-11 08:42 pm (UTC)
From: [identity profile] lihan161051.livejournal.com
I'd say a single drive not in the RAID set for boot, and a RAID 5 of all the rest (ideally with a hot spare, see earlier comments) for /usr, /etc, and maybe /bin, plus document root for Apache, whatever directory you've got your SQL store in, and aliased root folders for your major web apps.

The idea being if the boot drive dies, you either reimage onto the new one (you kept backups, right?) or reinstall the core OS (if you didn't), and your data are undisturbed on the RAID set. And in the unlikely event something pooches the primary boot system, you can reimage from a stable backup or reinstall as well. It's a pretty robust system for a standalone server, and good enough for a hosted box. ;)

Date: 2010-07-11 08:44 pm (UTC)
From: [identity profile] lihan161051.livejournal.com
If you're really paranoid, and you have enough slots in your drive bay, you could even set up the boot volume on a RAID 1. Still need only one hot spare for that, just let the controller grab it for whichever one went degraded on you ..

Date: 2010-07-11 09:19 pm (UTC)
From: [identity profile] tecie.livejournal.com
lihan: I was about to post that exact same thing.
A boot drive is a lousy SPoF, especially when you can fix the biggest vulnerability for a relatively low overhead.
Now if the RAID1 controller goes, then it's up to the customer to justify getting a system with multiple controllers (don't laugh, it's happened.) Local disks and SSDs go every day.

The other thing I've seen is to set up the environment to be SAN booting, and then have a SW mirror going to the local drive. Since it's an OS-only drive, there shouldn't be a very high performance toll with the software mirror. It could easily be configured to go in the opposite direction, wherein the system boots to the local drive and there's a GRUB option to boot from the SAN.
The main objective being that when the boot drive dies, the whole system doesn't go with it.

Date: 2010-07-12 05:07 am (UTC)
From: [identity profile] http://users.livejournal.com/_caecus_/
This is what happens when a tech only has experience with software, and not so much with hardware.... or can't RTFM.

Date: 2010-07-12 04:08 pm (UTC)
From: [identity profile] ghostdandp.livejournal.com
18 TB is not temporary storage... or if it is, I'm really curious what he's running

Date: 2010-07-13 06:40 pm (UTC)
From: [identity profile] goose-entity.livejournal.com
you wouldn't be using FAT on a *nix system anyway.

Unless you are demented.

.... OK, that's a given with the spec outlined above o.O

Date: 2010-07-23 06:41 pm (UTC)
From: [identity profile] mix-hyenataur.livejournal.com
Depends, can't you assign it to virtual memory for virtual desktops?

Otherwise, I'd just steal it, claiming he can't see the rest due to the 32bit os.

Date: 2010-07-23 10:26 pm (UTC)
From: [identity profile] kyhwana.livejournal.com
I wouldn't imagine so, unless there's kernel VM code in there somewhere that will let you do it.

Date: 2010-08-01 04:30 pm (UTC)
pauamma: Cartooney crab wearing hot pink and acid green facemask holding drink with straw (Default)
From: [personal profile] pauamma
Coming late: would a 64-bit CPU with 32 GB do whatever the 32-bit OS needs to use it as PAE memory?
