64 bit computing in a 32 bit world
Jul. 11th, 2010 01:40 am
I work for a Hosting Company. We build a lot of servers on any given day.
Tonight I got an order for the following build:
Dual quad-core Xeons (not uncommon)
CentOS 32-bit (kay, easy enough)
standard-issue 100-meg connection
32 gigs of RAM (8x4-gig sticks... on a hosted box that the customer will never physically see)
Nine 2 TB drives... configured in RAID Zero
Take a moment and wrap your head around this...
I don't THINK that our boards will recognize 32 gigs of memory, but there is an off chance they will... But they want a 32-bit OS... on dual 64-bit quad-cores? With THAT kind of memory?
Oh wait... 18 TB on RAID Zero makes it all work better... except that the 2 TB drives have more bugs than a sid release of Debian... and RAID Zero really should never ever ever be used again... ever... for anything
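The "never ever" isn't just grumbling — striping compounds failure odds, since losing any one drive loses the whole array. A back-of-the-envelope sketch (the 3% annualized failure rate is an assumed illustrative figure, not a measured one):

```python
# RAID 0 reliability: the array survives only if EVERY drive survives,
# so the odds compound with drive count.
# The 3% annualized failure rate below is an assumed illustrative figure.

def array_survival(drives: int, annual_failure_rate: float) -> float:
    """Probability a striped (RAID 0) array goes a year with no drive failure."""
    per_drive_survival = 1.0 - annual_failure_rate
    return per_drive_survival ** drives

single = array_survival(1, 0.03)   # one drive: 97.0% survival
striped = array_survival(9, 0.03)  # nine striped drives
print(f"1 drive : {single:.1%}")
print(f"9 drives: {striped:.1%}")  # ~76% -- roughly a 1-in-4 chance of losing all 18 TB in a year
```

Nine drives turn a 3% per-drive risk into roughly a one-in-four chance of total data loss per year, with zero redundancy to fall back on.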
Let's assume, for just a moment, that everything worked, just as the customer ordered it... the memory and board played nice, the drives didn't choke on their own blood, and the OS ran smoothly... I want to know just what the HELL this guy is running, on a HOSTED MACHINE, that REALLY needs that kind of oomph... We don't install any GUIs, and all the users do is SSH into the boxes... What the heck does he NEED it for? And, if he is willing to pay through the nose (like we are charging him), then why doesn't he just get this all set up and host it himself? It would be WAY cheaper than the almost-a-grand per month that we are charging.
My little internal geek just died a little
no subject
Date: 2010-07-11 08:38 am (UTC)
no subject
Date: 2010-07-11 10:05 am (UTC)
And have you told them that a 32-bit OS can't physically address 7/8ths of the memory they are purchasing? And who installs a 32-bit *nix these days anyway?
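The 7/8ths figure checks out as simple arithmetic (with the caveat, which a later commenter raises, that a PAE kernel can map more physical RAM — the figure below is the plain flat 32-bit limit this commenter is invoking):

```python
# How much of 32 GiB a flat 32-bit address space can even name.
# (A PAE kernel can map more physical RAM; 4 GiB is the plain
# non-PAE 32-bit limit the commenter is invoking.)

GIB = 2 ** 30
addressable = 2 ** 32          # 4 GiB: every address a 32-bit pointer can form
installed = 32 * GIB

usable_fraction = addressable / installed
wasted_fraction = 1 - usable_fraction
print(f"addressable: {addressable // GIB} GiB of {installed // GIB} GiB")
print(f"wasted: {wasted_fraction:.0%}")  # 88% -- the commenter's 7/8ths
```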
no subject
Date: 2010-07-11 10:36 am (UTC)
And I use a 32-bit *nix system... as a firewall :D
no subject
Date: 2010-07-11 11:55 am (UTC)
But the 32-bit OS on a 64-bit board tells me exactly what kind of customer this is.. :p
no subject
Date: 2010-07-11 12:08 pm (UTC)
no subject
Date: 2010-07-11 12:52 pm (UTC)
We've got boxes with 12GB of RAM and 32-bit CentOS 4 and it's just fine. As I understand it, you're limited to 4GB per process. If you're only running one main application process per box then you're fucked on modern multi-core systems anyway.
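The per-process cap this commenter mentions is actually a bit tighter than 4GB on Linux: each 32-bit process gets a 4 GiB virtual address space, and with the common 3G/1G user/kernel split (a kernel build-time option; 3/1 is the usual default) only about 3 GiB of it is usable by the process itself:

```python
# Per-process view on 32-bit Linux: the virtual address space is 4 GiB
# total, and with the common 3G/1G user/kernel split a process can use
# about 3 GiB. The split is a kernel build option; 3/1 is the usual default.

GIB = 2 ** 30
virtual_space = 2 ** 32
kernel_reserved = 1 * GIB                    # assumed default 3G/1G split
user_space = virtual_space - kernel_reserved

print(f"user-addressable per process: {user_space / GIB:.0f} GiB")
# With PAE, many ~3 GiB processes can coexist in 12+ GiB of physical RAM,
# but no single process can use it all.
```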
no subject
Date: 2010-07-11 12:57 pm (UTC)
Now, using all of your disks on RAID 0 is really stupid. :o)
no subject
Date: 2010-07-11 01:01 pm (UTC)
no subject
Date: 2010-07-11 01:26 pm (UTC)
For apps that aren't multi-threaded (again, the heavy ones should really get with the picture), they can still leverage memory even if they're just sitting on the one core. So many database apps thump memory but barely touch CPU. And many single-threaded apps will be able to have CPU affinity set, so multi-cores aren't a complete waste (even if there's only one app, you can have background processes running on one CPU and "the app" running on another).
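The affinity trick this commenter describes can be done on Linux with `taskset`, or from Python via `os.sched_setaffinity` (Linux-only). A minimal sketch of pinning the current process to one core:

```python
import os

# Pin the current process to a single core, the way you might pin a
# single-threaded app so background work lands on the other cores.
# os.sched_setaffinity is Linux-only.

allowed = os.sched_getaffinity(0)   # CPUs this process may run on
target = min(allowed)               # pick one of them
os.sched_setaffinity(0, {target})   # pid 0 == the calling process
print(os.sched_getaffinity(0))      # now pinned to that single CPU

os.sched_setaffinity(0, allowed)    # undo: restore the original mask
```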
no subject
Date: 2010-07-11 01:44 pm (UTC)
In our case, we have some mod_perl horrors and a collection of background processes - and we want to make sure we have lots of RAM left over for disk caching, as I/O is a major problem on our systems. So we have 12GB of RAM, an SSD for certain data (and possibly two SSDs in RAID 0 if a single SSD isn't fast enough after the next round of testing).
I don't doubt that the hardware the OP's been asked to put together will be wasted, but there are definitely cases where you can use 32-bit Linux with a pile of RAM and where RAID 0 is appropriate.
no subject
Date: 2010-07-11 03:06 pm (UTC)
Still, the typical FAT32 configuration caps at around 8TB.
no subject
Date: 2010-07-11 03:50 pm (UTC)
no subject
Date: 2010-07-11 04:07 pm (UTC)
As far as RAID 0 goes -- I've actually never even seen that used in the real world. It seems like a very, very stupid idea. RAID 10 if you're super desperate, but 9/10 times RAID 1 works just fine for boot drives, and then RAID 5 for the rest. A single RAID with 18 TB of usable space on it will be interesting to manage. You might even want to go RAID 6 for that. Or better yet, use a SAN.
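For the build in question, the trade-off between the levels this commenter lists is easy to put in numbers. A capacity-only sketch for nine 2 TB drives (it ignores hot spares and filesystem overhead):

```python
# Usable capacity of 9 x 2 TB drives under the RAID levels the thread
# mentions. Capacity arithmetic only -- ignores hot spares and
# formatting overhead.

def usable_tb(level: str, drives: int = 9, size_tb: int = 2) -> int:
    if level == "raid0":
        return drives * size_tb           # stripe: all capacity, no redundancy
    if level == "raid1":
        return size_tb                    # everything mirrored
    if level == "raid5":
        return (drives - 1) * size_tb     # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb     # two drives' worth of parity
    if level == "raid10":
        return (drives // 2) * size_tb    # mirrored pairs (odd drive sits out)
    raise ValueError(level)

for level in ("raid0", "raid5", "raid6", "raid10"):
    print(f"{level:>6}: {usable_tb(level)} TB")
```

RAID 6 gives up only two drives' worth of space (14 TB usable instead of 18 TB) in exchange for surviving any two simultaneous drive failures — which is why it comes up for an array this size.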
Customers will often nickel-and-dime the hardware, and find out too late that there's a reason you want to overbuild. Of course then it's your problem to handle the emergency upgrades. I'd keep good notes if I were you. That way when they bitch and moan about performance, you have your notes.
I see a lot of 32 bit installs going through in my environment. Some apps just don't like 64 bit yet.
no subject
Date: 2010-07-11 06:12 pm (UTC)
no subject
Date: 2010-07-11 08:15 pm (UTC)
That box, if the storage was configured as RAID 5, is close to what we are running VMware on for something like 20 machines. We actually have two of them set up as an HA/DRS cluster, and we are using an NFS share on our SAN for the datastores.
no subject
Date: 2010-07-11 08:35 pm (UTC)
That's the key criterion for reasonable use of RAID 0. Because RAID 0 is only as strong as the weakest link...
no subject
Date: 2010-07-11 08:42 pm (UTC)
The idea being: if the boot drive dies, you either reimage onto the new one (you kept backups, right?) or reinstall the core OS (if you didn't), and your data are undisturbed on the RAID set. And in the unlikely event something pooches the primary boot system, you can reimage from a stable backup or reinstall as well. It's a pretty robust system for a standalone server, and good enough for a hosted box. ;)
no subject
Date: 2010-07-11 08:44 pm (UTC)
no subject
Date: 2010-07-11 09:19 pm (UTC)
A boot drive is a lousy SPoF, especially when you can fix the biggest vulnerability for a relatively low overhead.
Now if the RAID1 controller goes, then it's up to the customer to justify getting a system with multiple controllers (don't laugh, it's happened.) Local disks and SSDs go every day.
The other thing I've seen is to set up the environment to boot from the SAN, and then have a software mirror going to the local drive. Since it's an OS-only drive, there shouldn't be a very high performance toll from the software mirror. It could easily be configured to go in the opposite direction, wherein the system boots from the local drive and there's a GRUB option to boot from the SAN.
The main objective being that when the boot drive dies, the whole system doesn't go with it.
no subject
Date: 2010-07-12 05:07 am (UTC)
no subject
Date: 2010-07-12 04:08 pm (UTC)
no subject
Date: 2010-07-13 06:40 pm (UTC)
Unless you are demented.
.... OK, that's a given with the spec outlined above o.O
no subject
Date: 2010-07-23 06:41 pm (UTC)
Otherwise, I'd just steal it, claiming he can't see the rest due to the 32-bit OS.
no subject
Date: 2010-07-23 10:26 pm (UTC)
no subject
Date: 2010-08-01 04:30 pm (UTC)