Monday, March 26, 2007

sleeping giant

The title of this post refers to both the computer, and this rotund fellow taking his lunch break.

I would like to point out that he is using a piece of wood as a pillow. Hardcore. And, as an aside, it looks like he sleeps with his hand in his pants. Just sayin'.

Friday, March 23, 2007

liquid goodness

So, some pics from the inside of the machine room, now that stuff is finally moving forward. First off, we can see the in-row cooling units have been moved into the machine room from outside, and they are being placed where they will reside permanently. These liquid-cooled buggers can move a large amount of hot air very efficiently. There will be a total of 115 of them.

Next up, is a picture of the inside of one of these things.
You might notice the guy's hand with the wrench in the lower left-hand corner of the photo. This is almost an 'action shot'. This good contractor was hooking up those white pipes you see inside the cooling units. Those are chilled-water lines that come from the four massive chillers that were recently delivered and installed outside the building. These units will pull the hot air out of the hot aisles, cool it down, and push the cooled air back out into the room; the heat itself leaves the building through the chilled-water loop. We built in some overkill, but you never know when extra funds might unexpectedly drop out of nowhere for some extra nodes or racks, so the extra headroom seemed like a good idea.
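For a rough sense of what each of those units has to handle, here's a back-of-the-envelope calculation. These are my own assumptions, not the design spec: I'm just taking the machine's eventual 2.5 MW draw (mentioned elsewhere on this blog), pretending it all ends up as heat, and splitting it evenly across the 115 in-row units.

```python
# Back-of-the-envelope cooling math (my own rough numbers, not the design spec).
# Assumption: essentially all electrical power drawn by the machine ends up as
# heat that the in-row coolers have to remove.

total_power_kw = 2500        # ~2.5 MW total draw, per the first post on this blog
num_inrow_units = 115        # in-row cooling units going into the room

kw_per_unit = total_power_kw / num_inrow_units
tons_per_unit = kw_per_unit / 3.517   # 1 ton of refrigeration is about 3.517 kW

print(f"Heat load per in-row unit: {kw_per_unit:.1f} kW (~{tons_per_unit:.1f} tons)")
# -> roughly 21.7 kW, or about 6 tons of cooling per unit, before any overkill margin
```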

This next picture is a photo of the air-handling units (typically called 'CRACs', for Computer Room Air Conditioner). The orange line is, I think, for either power or water. If I find out, I'll let you know.

In case you are wondering why all my photos look like crap, it's because I am using the piece-of-crap camera on my cellphone. Maybe someday I'll start using a real camera, but for now, I have to kind of keep this blog under wraps, since everything we are doing is under non-disclosure at the moment, and I could potentially get in trouble for revealing too much at this stage. Exciting!

Wednesday, March 21, 2007

coolers have arrived

The in-row coolers and the end-row racks have arrived.

It's really kind of nuts. The big, empty racks you see there will be at both ends of the hot aisles, which will be fully contained with plexiglass. That's fully contained hot aisles, man! Talk about air circulation. We are putting our huge 48-disk PCI-X disk arrays in these APC racks (24 terabytes in each 4U box, and there will be 72 of these buggers - crazy!).
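If you want to do the math on those boxes, here it is, with the caveat that this is raw capacity only, before RAID or filesystem overhead eats into it:

```python
# Quick capacity math on the disk boxes (raw capacity, before RAID or
# filesystem overhead).
boxes = 72
tb_per_box = 24
disks_per_box = 48

raw_pb = boxes * tb_per_box / 1000
gb_per_disk = tb_per_box * 1000 / disks_per_box

print(f"~{raw_pb:.2f} PB raw across all boxes, {gb_per_disk:.0f} GB per disk")
# -> about 1.73 PB raw, from 500 GB drives, which lines up with the
#    "petabytes of clustered filesystems" figure mentioned elsewhere on this blog
```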

These nice, hardhatted fellows are putting the in-row cooling units in place (although it looks like they're putting the second row in the wrong place, as the spot where they're currently standing will be a cold aisle). The in-row coolers are liquid-cooled air handlers that will keep the air moving through the 48-node/chassis racks, pulling out the intense heat as it goes.

I had the pleasure of hearing one of those disk-system boxes running. It's loud as hell, easily 50 decibels on its own. When all 72 boxes of disks are turned on at once, it's going to be rough on the ears. With the PDUs, the nodes themselves, the disks, the massive 40-ton air handlers and the in-row coolers all running concurrently, it will easily sound like standing next to an airplane, right at the threshold of hearing damage. SWEET.
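If you're curious how noise from a pile of identical boxes adds up, here's the quick math. I'm taking my 50-ish dB guess per box at face value; real figures for these arrays are probably higher, so treat this as a sketch, not a measurement.

```python
# How sound from many identical sources stacks up (rough sketch; the 50 dB
# per-box figure is my guess, and real disk arrays can measure quite a bit higher).
import math

def combined_db(level_db: float, n_sources: int) -> float:
    """Combine n identical, uncorrelated sources at level_db each."""
    return level_db + 10 * math.log10(n_sources)

print(f"72 disk boxes together: {combined_db(50, 72):.1f} dB")   # ~68.6 dB
# Every doubling of identical sources only adds about 3 dB, so it takes the PDUs,
# nodes, 40-ton air handlers and in-row coolers all running at once to make the
# room genuinely painful to stand in.
```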

Tuesday, March 20, 2007

quick post

Just a quick post today. We were adding 4 more racks (160 new nodes, each with two dual-core processors) to our existing cluster, and in the process, I took a picture of 4 of the 8 InfiniBand switches that make up the network fabric for the machine:
That's just half of the interconnects for this massive beyotch. Your fingers get a little sore plugging in that many cables, lemme tell ya.

That's it for now...

Monday, March 12, 2007

what about some other clusters you built?

What about my other clusters, you might ask? What makes me qualified to run this one?

Well, my linux-fu is quite strong, and I learned from the best. But linux-fu is not all that is required.

Let me see if I can construct a quick illustrated history of my past work:

(...comes back to blogger after unsuccessfully searching for photos)

Well, illustrations be damned - my former place of business is 'redesigning' their website, so all the photos of my clusters are now unreachable. I've been gone for 1.5 years, and they still have 'this site under construction' horsepucky up there.

(looks some more, this time using 'teh google')

Cluster #1
so, here is the first cluster I cut my teeth on:
Impressive? Not by today's standards, that's for sure. But what was impressive about this machine was the year it was built (1997 or '98, I think), its size, and its interconnect. What you see here are 64 Pentium IIIs (whoo!), each with about a 5G hard drive and, I think, 512M of RAM. The interconnect is Myrinet B (if memory serves correctly), a highly proprietary network fabric that consists of these weird hand-cut platinum cables that cost something like $100 a foot or some such weirdness. It was one of the fastest around at the time, with something like 1 Gb/s of raw bandwidth. It was a real pain in the rear getting this beast to install, let alone run, and I was lucky to be allowed to touch it, let alone completely rebuild it. Most of the scripts were written by my insanely intelligent coworkers/gurus to handle job submission, back before you kids got all this fancy job submission and monitoring software.
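For the youngsters: 'job submission software' back then was often just a hand-rolled script that found a quiet node and ran your program on it over rsh or ssh. Here's a toy sketch of the general idea, with hypothetical node names and a very naive load check; the real scripts my coworkers wrote were considerably smarter than this.

```python
#!/usr/bin/env python
# Toy sketch of a hand-rolled "scheduler": find the least-loaded node and run
# the job there over ssh. Node names and the job path are purely illustrative.
import subprocess

NODES = [f"node{i:02d}" for i in range(1, 65)]  # hypothetical 64-node cluster

def load_of(node: str) -> float:
    """Return the 1-minute load average reported by 'uptime' on a node."""
    out = subprocess.run(["ssh", node, "uptime"],
                         capture_output=True, text=True, check=True).stdout
    # uptime output ends with: "load average: 0.12, 0.08, 0.01"
    return float(out.rsplit("load average:", 1)[1].split(",")[0])

def submit(command: str) -> None:
    node = min(NODES, key=load_of)          # pick the quietest node
    print(f"running on {node}: {command}")
    subprocess.run(["ssh", node, command], check=True)

if __name__ == "__main__":
    submit("/home/user/my_mpi_job.sh")      # hypothetical job script
```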

Cluster #2
Next up, we have my 2nd cluster:
This pretty fellow is me, screwing in the 4U dual-proc bleeding edge (for 2002) monsters.

This is the final pic, taken just after running Linpack on it, I believe. 44 dual-proc AMD mobos with 1G of memory each. This system was pretty advanced, and getting all the parts working correctly was difficult, given the lack of drivers available at the time (it probably wouldn't surprise you that the Windows drivers were all available up front, those jerks). Each machine had, I think, a 40G hard drive, and those disk arrays you see there in white above the monitors were the filesystem for /home - mirrored arrays of something like 160G apiece. By the time I left this job, parts of this machine were starting to fail at an alarming rate, and I think only about 20 of these nodes are actually up at this time. I know the disk array was limping along with a broken mirror and one drive reporting errors when I left. The interconnect was a gigabit Ethernet network for OS provisioning and the like, and a Myrinet fiber fabric for the inter-process communication. Rocks was the provisioning system, based on Red Hat (9?) at the time. I would like to point out that this machine was conceived of and designed by my former brilliant co-worker and Linux guru, who now has a PhD in neurobiology.

Pretty slick for a small-time department run by a psychopathic old professor who, aside from bringing in tons of oil-related grants for her tenured chair, would stumble through the halls in a muumuu shouting profanity that would make a sailor's ears burn. She could be extremely nice about 10% of the time; the other 90% was spent berating people or insulting them. I once heard her yell at a Chinese grad student, 'Get out of my office, and don't come back until you learn how to [expletive]ing speak [expletive]ing English!' She was never rude to me for some reason, but hated my entire group of coworkers. Most PhDs are crazy as well. Maybe 20% are normal, decent humans, while the rest are soulless freaks. Look it up!

Cluster #3

After that, I built a cluster using Apple Xserves, a complete and utter slow-motion disaster (no pic - hell, just imagine a rack full of silver Xserves - it looks cool, but boy does it suck). The company sent a 'tech' to help who knew absolutely nothing about what he was doing. You could tell he had run a help desk for Macs on a network, but never an OS X provisioning cluster for HPC. He wasted over 3 days of my precious time. I ended up complaining to Apple so hard that they sent me and my coworkers three free iPods as restitution. I still use that iPod 4 years later, but that's beside the point.

Don't ever build a cluster out of Xserves running OS X! One university built something on the order of a 1000-node version of this, and ended up pushing the OS to each node by hand, imaging every single node from an iPod over FireWire. I kid you not. Completely ridiculous.

I spent about 1 week solid writing my own auto-install scripts and setting up the environment, but didn't get to see the project through to fruition, because I got a better job building MUCH larger systems (where I currently work, to be exact).

Cluster #4

At my new job, I assembled one of the largest computers in the world (here, I kid you not again - this machine is in the top 30 machines in the world as of 2007 - I would get more specific and give its exact rank on the Top500 list, but then you could probably guess who I am and where I work, so I can't do that). This is not to say I single-handedly built the entire machine - I'm just saying I did most of the work putting the hardware together, from wiring the racks internally to installing the power, the chassis, the nodes and the HCAs in every node. Other intelligent people were involved in the design, administration and software implementation of this machine; I just assembled the majority of it.

The last machine I assembled was insane. Here is a picture of just 1/10th of the cables. This construction almost killed me. And you could have hidden my body in the cables...(and yes, that is a Cray you see there, as well as an SGI)

and here is a pic of part of the final product - just one row, mind you

So there you have it. I have a very weird job, and how I got here was even weirder.

My, what a fun trip it's been down memory lane!

chillers arrived

The big chiller units for the new machine arrived. They are freaking HUGE. 4 of them, easily 800 sq ft of cooling equipment alone. Ouch:

Like I said, there were 4 of these trucks. It's really amazing to think about the large number of people and companies involved in making this project happen.
The crane you see there was for moving the chillers from the trucks into the buildings that were made to house them. Looks like the schedule is moving along as planned.

I was very skeptical about this project at first, but it may end up actually working! Two of the six main components of this cluster actually appear to be up to snuff enough not to be impediments. Here are the six components:

  1. HCAs (the InfiniBand host channel adapters, one in each node) - apparently ready
  2. Software needed to schedule jobs - not yet ready, perhaps the beta will work in time
  3. Processors - seemingly ready - people are already testing them in the wild
  4. Main InfiniBand switch. The prototype (first of its kind in the world) has been reported as working
  5. Racks - I've seen one picture of the prototype, so I know it exists
  6. Actual nodes themselves - I've heard they managed to get one to install and boot - w00t!

So there you have it. None of the machine itself has arrived yet, since there still isn't a finalized contract for this thing. When you have 3 large entities trying to work out a multi-million dollar contract, deadlines are almost never met. I hear rumors that we are probably 1-2 weeks from getting it signed. Once that happens, the equipment will actually begin to arrive, and we can start figuring out how to get everything to run the way it should!

Wednesday, March 7, 2007

A little pic 4 U


So, here is the first pic I have available of our new machine room.

As you can see, it is not yet completed. But that's OK, since none of the equipment we will be using to build this monster even exists yet! The switches, the CPUs, the nodes, the racks, the HCAs, nothing. It's all prototypes and design specs at the moment. Let's hope all of the different partners/companies actually end up making this all work correctly at the same time.

You might wonder - what are those large white boxes in there, and why is the room already half full if the machine room isn't done and no parts for this massive monster even exist yet?

Well, those are PDUs: Power Distribution Units. Each of those babies has 80 30-amp circuits. Half of the room will be dedicated to power alone. Another third will be cooling. This machine is really going to be a wonder to behold.
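Quick back-of-the-envelope on what one of those PDUs can feed. These are my own assumptions, not the electrical drawings: I'm guessing 208 V circuits, de-rated to 80% of the breaker rating for continuous load, which is the usual practice.

```python
# Rough PDU capacity math (my assumptions: 208 V circuits, de-rated to 80%
# of breaker rating for continuous load; the real electrical design may differ).
volts = 208
amps_per_circuit = 30
circuits_per_pdu = 80
derate = 0.8

kw_per_circuit = volts * amps_per_circuit * derate / 1000
kw_per_pdu = kw_per_circuit * circuits_per_pdu

print(f"{kw_per_circuit:.1f} kW per circuit, ~{kw_per_pdu:.0f} kW per PDU")
# -> about 5 kW per circuit and roughly 400 kW per PDU, which is why it takes
#    several of these just to feed a multi-megawatt machine
```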

what tha hell is this?


So, I'm starting this blog to detail the creation of one of the biggest computers in the world.

How is this so, you might ask? Well, I work for [name redacted], and we received a grant for [amount redacted] from the [name redacted], an agency of the US Government, to build the fastest, biggest machine in the world.

I kid you not.

As of this date, I am unable to reveal any details about the machine, since the company, [name redacted], has tons of non-disclosures that I am beholden to. There's also scads of insider-trading BS surrounding the system, so I am not really at liberty, at this point, to reveal where the parts for this machine will come from, or what they will consist of. If you read any news related to supercomputing and universities, you can probably figure out not only who I am, but where I live, who I work for, and where the parts are coming from. And if you do, please don't reveal it here!

Anyhow, I will be keeping a journal of sorts about the planning, design and implementation of this enormous undertaking.

How enormous of an undertaking? Let's just say it will have over 53,000 processors and petabytes of clustered filesystems capable of aggregate write speeds of 32G/s, and it will pull 2.5 MEGAWATTS of power. Ouch.
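Just for fun, some arithmetic on those headline numbers. My reading of '32G/s' as 32 gigabytes per second is an assumption, as is lumping all overhead into the per-processor power figure.

```python
# Arithmetic on the headline numbers above (assumption: "32G/s" means
# 32 gigabytes per second of aggregate filesystem write bandwidth).
processors = 53_000
total_power_w = 2_500_000            # 2.5 MW
write_gb_per_s = 32
petabyte_gb = 1_000_000              # 1 PB = 1,000,000 GB (decimal units)

watts_per_proc = total_power_w / processors
hours_per_pb = petabyte_gb / write_gb_per_s / 3600

print(f"~{watts_per_proc:.0f} W per processor (machine-wide, everything included)")
print(f"~{hours_per_pb:.1f} hours to write one petabyte at full aggregate speed")
# -> roughly 47 W per processor and about 8.7 hours per petabyte
```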

And, you may ask, who tha hell are you? Well, I am a Senior Linux Cluster Administrator. That's longhand for 'guy who runs big computers'. I build and run these massive beyotches. I am part of a team that is currently working on this monstrosity, and I will help build it, install it, and be responsible for the daily upkeep of said monstrosity. I have been named the 'System Lead', which, in itself, will pretty much be my entire resume from now on:

Resume for Super Geek

Jan 2007 - Present
Work for: [name redacted].
Duties: Built, installed and ran the biggest supercomputer in the world.

The End

That will be it. One freaking line. I don't like to summon hubris, but I think my career is set now.