How does this build look?


elite90

Recommended Posts

Hopefully we will be ordering a few of these. :cool: This is a significant investment, so I want to be sure that I haven't missed anything or couldn't do something a better way. :confused:

 

[Image: 25qc4tz.png (parts quote)]

(Prices are in CAD)

 

The box has 4 nodes in it and I plan to put the drives in RAID1. Please share your thoughts/comments/advice. Thanks!


Very nice build. But ask yourself this question: what if something were to happen to this machine? CPU, RAM, or HDD failure? Given the size of the machine, I assume you plan to put many clients on it. That means if the machine dies for any reason, you have all your eggs in one basket, so to speak. All of those clients are without a server until you can have it fixed.

 

Judging by the way gamers are, you will lose a lot of those clients due to the downtime. If your entire client base is on this machine, then you have lost your business.

 

Just food for thought.


Add to that, addons do not always play together well. I have had to move clients on several occasions because two different customers' addons were causing lag while on the same machine.

With this rig there is a LOT of potential for that happening, given the number of games you'd be putting on it to make it pay.


This particular chassis is, in fact, hot-pluggable, meaning you can pull any one of the nodes out of the back of the chassis without needing to power the others down. It is a much smaller version of the large blade setups around.

 

The only things shared between the 4 nodes are the PSU, which happens to be redundant, and of course the chassis.

 

If you look at it, it's not combining 4 nodes into one, but condensing the space needed for 4 nodes so they only require 0.5U each.

 

If you want to go with this type of setup, you need to decide whether the savings in space justifies putting 4 nodes on a single (redundant) PSU.

 

 

The only thing I would change in your order is the RAM. The E5520 will only take up to DDR3-1066 (http://ark.intel.com/Product.aspx?id=40200). It will POST with 1333 sticks, but it will run them at 1066, so you can save a buck and get 1066.
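The downclocking rule above is simple enough to sketch: memory runs at the lower of what the CPU's memory controller supports and what the module is rated for. The helper function below is purely illustrative, using the E5520's DDR3-1066 ceiling from the linked ARK page.

```python
# Effective DIMM speed is the minimum of the CPU's supported memory speed
# and the module's rated speed. The E5520 tops out at DDR3-1066, so
# DDR3-1333 sticks will be downclocked to 1066.
def effective_dimm_speed(cpu_max_mhz: int, dimm_rated_mhz: int) -> int:
    """Return the speed the memory will actually run at."""
    return min(cpu_max_mhz, dimm_rated_mhz)

print(effective_dimm_speed(1066, 1333))  # 1333 sticks on an E5520 run at 1066
```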


If money is not an issue, I would go with what you have. If you want to spread out the startup cost and money is tight, I would go with individual servers rather than one large server.

 

You can build and deploy single servers as demand increases. This reduces your startup cost. Why spend $10,000 to build a server that can handle 150 customers if you don't have the customer base to support it? Deploy single servers, and as your customer base grows, deploy additional servers to meet the demand. Single servers also allow you to increase or decrease the number of data centers offered as demand increases or decreases.
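The trade-off above comes down to ceiling division: with incremental deployment you only buy the servers your current customer count needs. A quick sketch, where the $10,000 / 150-customer figures come from the post but the $2,500 single-server price and the 40-customer-per-box capacity are made-up example numbers:

```python
# Upfront spend for incremental deployment: buy just enough single
# servers to cover the current customer count. The $2,500 per-server
# cost and 40-customer capacity are hypothetical illustration values.
def incremental_spend(customers: int, per_server_capacity: int, server_cost: int) -> int:
    """Cost of deploying just enough single servers for the customer count."""
    servers_needed = -(-customers // per_server_capacity)  # ceiling division
    return servers_needed * server_cost

print(incremental_spend(40, 40, 2500))   # 40 customers: one box, $2,500 up front
print(incremental_spend(150, 40, 2500))  # full 150 customers: 4 boxes, $10,000
```

The point is that the $10,000 outlay only arrives once the customer base has already grown to justify it.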

 

It is not necessary to run the drives in RAID 1 unless you plan on hosting websites, and I do not recommend mixing web hosting and game hosting on the same server anyway. You should separate the game installs from the operating system. We run the operating systems on smaller drives and keep all of the game installs on a larger drive. If an operating system drive fails, just overnight a new drive with the operating system preinstalled to the data center. If a game drive fails, just replace it and reinstall the games. The downtime will be minimal, usually less than 24 hours.


Thanks for all of the input.

 

We are currently renting servers with similar specs and have not had any problems with lag, etc. With about 40 game servers on one of these, the peak CPU usage is 35% and RAM usage is 9 GB.
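Those observed numbers allow a back-of-the-envelope headroom estimate: divide peak usage by the 40 servers to get a per-server footprint, then see which resource runs out first. The 24 GB per-node total and the 80% safety ceiling below are assumptions for illustration, not figures from the post:

```python
# Rough capacity estimate from the observed peaks: 40 game servers at
# 35% CPU and 9 GB RAM. Assumed: 24 GB RAM per node and an 80% safety
# ceiling on both resources.
def max_servers(observed_servers=40, cpu_peak=0.35, ram_used_gb=9.0,
                ram_total_gb=24.0, ceiling=0.80):
    per_server_cpu = cpu_peak / observed_servers      # ~0.875% CPU each
    per_server_ram = ram_used_gb / observed_servers   # ~0.225 GB each
    cpu_limit = ceiling / per_server_cpu
    ram_limit = (ram_total_gb * ceiling) / per_server_ram
    return int(min(cpu_limit, ram_limit))             # binding constraint wins

print(max_servers())  # RAM is the binding constraint under these assumptions
```

Under these assumed numbers the node is RAM-limited well before it is CPU-limited, which would argue for spending on memory before cores.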

 

I would always have at least one free unit in these, so a failure wouldn't be too much of a problem because I could swap out the drives. I'm also thinking of using the third bay in each unit for a spare drive. I've had data loss before because of hard drives crapping out, and it is detrimental to our customers. I'd like to avoid that if possible. RAID 1 seems like the best choice.

 

As for the RAM, I know that the faster speed is not supported, but I figured that I might as well go for the faster stuff because it was only another $4 or $5 for the same sticks at 1333.

 

We currently sell 7 servers per day on average, so no problems with the customer base.

 

 

Finally, the quote that I got from Ubiquity was $590 for:

 

-2U

-4 100 Mbps drops

-4 OOB drops

-15 Mbps (more than I'm using avg x4)

-2 outlets

-Class C (some people go for 27015)

-6A power (this is estimated, I still need to measure one of these)
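For the estimated 6 A line item, the arithmetic is just amps times volts. The 120 V figure below is an assumption (the post only gives the amperage; a 208 V feed would change the numbers):

```python
# Power sanity check for the colo quote: 6 A at an assumed 120 V works
# out to 720 W for the whole 2U chassis, or 180 W per node since all
# four nodes share the redundant PSU.
def watts(amps: float, volts: float = 120.0) -> float:
    return amps * volts

total_w = watts(6)          # whole chassis
per_node_w = total_w / 4    # four nodes behind one redundant PSU
print(total_w, per_node_w)  # 720.0 180.0
```

Measuring an actual node under load, as planned, is the right way to firm up that estimate before signing the contract.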

 

This price seems on par with the single server colocation that I have been looking at.


The best performance for dual CPUs requires at least 3 sticks for each CPU, populating P1-1A, P1-2A, P1-3A, P2-1A, P2-2A, P2-3A. On your board, I believe that's every blue slot. With only 3 sticks, you can try P1-1A, P1-2A, P1-3A (all next to CPU1).

 

In your case, 3 sticks total would be the best for a single-CPU configuration.
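The population rule above (one DIMM per channel on each populated CPU for full triple-channel interleave) can be sketched as a small check. The slot names follow the P&lt;cpu&gt;-&lt;channel&gt;A convention from the post; the validation logic itself is just illustrative, not a real board utility:

```python
# Triple-channel population check for a dual-socket Nehalem-era board:
# best interleave needs one DIMM in each of the three channels of every
# populated CPU. Slot naming follows the post's P<cpu>-<channel>A scheme.
RECOMMENDED = {
    1: ["P1-1A", "P1-2A", "P1-3A"],  # CPU1: one stick per channel
    2: ["P2-1A", "P2-2A", "P2-3A"],  # CPU2: one stick per channel
}

def balanced(populated_slots, cpus=(1, 2)):
    """True if every channel of every populated CPU holds a DIMM."""
    return all(slot in populated_slots
               for cpu in cpus
               for slot in RECOMMENDED[cpu])

# Six sticks across both CPUs: balanced.
print(balanced(["P1-1A", "P1-2A", "P1-3A", "P2-1A", "P2-2A", "P2-3A"]))
# Three sticks, single-CPU configuration: also balanced.
print(balanced(["P1-1A", "P1-2A", "P1-3A"], cpus=(1,)))
```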


Archived

This topic is now archived and is closed to further replies.
