grumpy_sysadmin wrote,

Pictures from the $CURRENT_EMPLOYER data center.

I was doing some recabling of several systems today as part of upgrading systems from ESX 2.5 to ESX 3, and it occurred to me:
  1. that I've filled a rack with nothing but ESX systems. (Filled to the power, network patch panel, and SAN patch panel limits, anyway; there's another 20 or so RU of physical space, but filling it would take a fifth and sixth power feed--from separate PDUs, of course--plus a second and third each of network and SAN patch panels. I've already added the second and third power distributions.) On the up side, these five systems represent about 80 logical servers; on the down side, racking and stacking ESX servers is kind of a pain, what with all the cables;
  2. that I've never posted pictures from work here, nor given even those of you whom I know personally a tour (because we have real security, and there'd need to be a real reason for you to get past the guards downstairs).
Anyway,

Rack M6, front
I racked, stacked, and configured everything in this rack. They're all VMware ESX servers.
Rack M6, back
It's not perfect (the arm-control bits of the bottom two servers' cable management arms don't exist: they're refurb hardware), but I'm content with the cleanliness of this rack. It's a damn sight better than the rest of the DC...
Rack M6, back detail (HPaq DL 585 G2)
ESX servers want a lot of cables. That's two quad gig-E cards with only three ports filled on each (no, one quad and one dual wouldn't suffice, as they wouldn't provide sufficient redundancy), plus two Emulex LPe11000 FC cards. The loop-back plugs in the built-in ports are there because HPaq's monitoring software whines about interfaces that are live but have no link. (We can't use the built-in ports alongside the add-in cards because ESX doesn't like disparate chipsets as uplinks for the same vSwitch, and we can't use them together as their own pair because they aren't, internally, separate hardware, so if one fails they both fail.)
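For the curious, this is roughly what that pairing looks like from the ESX 3 service console. A minimal sketch, not our actual config: the vmnic numbers and vSwitch name are hypothetical, and which vmnic lands on which physical card varies per machine.

    # See which vmnic numbers map to which physical cards (match the PCI addresses)
    esxcfg-nics -l

    # Build a vSwitch whose uplinks come from two different physical cards,
    # so the vSwitch survives the death of either card
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1   # hypothetical: a port on the first quad card
    esxcfg-vswitch -L vmnic6 vSwitch1   # hypothetical: a port on the second quad card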
Rack M6, back detail (HPaq DL 580 G2)
This one has a total of 10 gig-E interfaces, all populated: ESX console interface; VMkernel (mostly VMotion) interface; regular internal networks; private DMZ; public DMZ (all redundant across physical cards). No, it's not acceptable long-term for DMZ and internal networks to coexist physically on the same system, but it's necessary temporarily while migrating from ESX 2.5 to ESX 3.
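Spelled out in ESX 3 terms, that layout comes to something like the below. Again a sketch under assumptions: the port group names, vmnic numbers, and addresses are made up for illustration.

    # Console and VMkernel (VMotion) traffic, uplinks split across cards
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0
    esxcfg-vswitch -L vmnic5 vSwitch0
    esxcfg-vswitch -A "Service Console" vSwitch0
    esxcfg-vswitch -A "VMkernel" vSwitch0
    esxcfg-vswif -a vswif0 -p "Service Console" -i 10.0.0.20 -n 255.255.255.0   # made-up address
    esxcfg-vmknic -a "VMkernel" -i 10.0.0.21 -n 255.255.255.0                   # made-up address

    # One vSwitch per security zone (internal, private DMZ, public DMZ),
    # each with uplinks from two different physical cards
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1
    esxcfg-vswitch -L vmnic6 vSwitch1
    esxcfg-vswitch -A "Internal" vSwitch1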


Update: PLEASE take this as an example of the most basic acceptable level of rack usage. There are things here I'm not proud of, actually, but it's basically clean. I wouldn't eat out of this rack (and I've seen racks I would have), but it meets a reasonable baseline. Should there come a time when I'm running a data center, anything messier than this will be a firing offense. It's important to note the details here:
  • the vendor gave you cable management for a reason, so use it (I should be able to pull the server out the front to full extension and open it--all real servers have hot-plug down to PCI cards these days--without unplugging ANYthing);
  • label fucking everything, including the server name on the back, including power cables (with the port address on the in-rack PDU, and the breaker address for bonus points); every label names exactly two endpoints, never more, never fewer;
  • cable delivery, once in the rack, is at the SIDES, never straight up the back... the reasons should be obvious;
  • don't ever buy these racks unless you're doing forced-air delivery of cooling from the floor and a serious fan on the ceiling (we have both): they have solid doors on the front and the back, which is ultra-lame, and only actually works if you fill all empty space with spacer panels (which nobody bothered to buy, lame)... ventilated front and back doors are much less of a pain in the ass;
  • it's a truism I take as given, but "redundancy is good" (the corollary being, "lack of redundancy is broken by design").