
SSD vs. HDD


Evilsystem


Hello there..

 

I was wondering how much of a difference this makes?

 

I'm going to buy a dedicated server, but I don't know if I should pay 20€ more and get a server with SSDs instead of HDDs.

 

I know that SSDs are a lot faster, but how much of a difference does it make when you're hosting game servers?

 

For example: Minecraft world loading?

And does it even change anything in games like CSS, CSGO, and TF2?

 

And one more thing you guys might be able to help me with: my server provider gives "20 TB of traffic" every month. Is that enough for running all those game servers, plus people using the control panel?
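For a rough sense of scale, here is what 20 TB a month works out to (the per-player bandwidth figure below is only an assumption for illustration, not anything the provider quoted):

```python
# Rough sanity check of a 20 TB/month traffic cap.
# The per-player bandwidth figure is an assumption for illustration only.

TB = 10**12                      # providers usually count decimal terabytes
cap_bits = 20 * TB * 8           # monthly allowance in bits
seconds_per_month = 30 * 24 * 3600

# Average rate you could sustain 24/7 before hitting the cap.
avg_mbit_s = cap_bits / seconds_per_month / 1e6
print(f"sustained average: {avg_mbit_s:.0f} Mbit/s")         # ~62 Mbit/s

# Assume ~40 kbit/s of game traffic per connected player (assumption).
per_player_kbit_s = 40
players = avg_mbit_s * 1000 / per_player_kbit_s
print(f"~{players:.0f} players connected around the clock")  # ~1500
```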


To be honest, I looked into this when I was buying new servers. Checking the stats/monitors, disk activity is practically nil; it only spikes when a server starts up, and the rest seems to be done entirely in RAM. So no, I don't really find the SSDs worth it. New game servers are still up within minutes. There is no reason to have SSDs over HDDs for game servers. This is just my opinion, with a bit of factual reasoning behind it.
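If you want to verify that on your own box, here is a minimal sketch (assuming the psutil package is installed) that prints disk throughput once a second:

```python
# Minimal disk-activity sampler using psutil (assumed available:
# pip install psutil). Run it while game servers start and while they
# idle to see how much the disks are actually touched.
import time
import psutil

prev = psutil.disk_io_counters()
while True:
    time.sleep(1)
    cur = psutil.disk_io_counters()
    read_mb = (cur.read_bytes - prev.read_bytes) / 1e6
    write_mb = (cur.write_bytes - prev.write_bytes) / 1e6
    print(f"read {read_mb:6.2f} MB/s   write {write_mb:6.2f} MB/s")
    prev = cur
```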


Alright then.

I can choose between these two options:

 

Hard Disks:

2 x 2 TB SATA 6 Gb/s 7200 rpm HDD (Software-RAID 1)

 

OR

 

Hard Drives:

2 x 240 GB SATA 6 Gb/s SSD (Software-RAID 1)

 

It's $26 more for the SSDs. Should I go for the HDDs after all, then?


Depends. $26 across a bucketload of servers can make a big difference, but if it's just a new one every now and then, $26 isn't a big price for SSDs. And to the guy that mentioned map changes: this is a server, not a client. Map changes are already pretty much instant on UDK and Source engine games, on my servers at least. But for $26 extra, why not get state-of-the-art hardware? Why software RAID, though?

Map is still read from the drive.... :D

 

As for the dual SSDs... I'd forgo RAID and put system files on one and user files on the other.

Hmm, okay. But if I run it with hardware RAID, then it won't use CPU, RAM, and so on. Isn't that kind of important?

 

Will it affect the speed of my drives if I run it without RAID? Isn't RAID just there to protect your files?


No RAID at all.

 

Alright.. I've been reading up on it for the last couple of hours..

 

I understand it a little better now..

 

RAID 1 just copies all the files to both drives, which makes reads a little faster but writes a little slower.
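For what it's worth, a toy calculation of that trade-off (the per-drive numbers below are made up purely for illustration):

```python
# Toy illustration of RAID 1 (mirroring) across two drives.
# The single-drive figures below are invented, for illustration only.
DRIVE_GB = 240
SINGLE_READ_IOPS = 90    # assumed random-read IOPS of one drive
SINGLE_WRITE_IOPS = 85   # assumed random-write IOPS of one drive

usable_gb = DRIVE_GB                 # mirror: you keep the capacity of one drive
read_iops = 2 * SINGLE_READ_IOPS     # reads can be spread over both copies
write_iops = SINGLE_WRITE_IOPS       # every write must land on both copies

print(usable_gb, read_iops, write_iops)  # 240 GB usable, ~180 read / ~85 write IOPS
```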

 

Please correct me if I'm wrong, but isn't it a lot faster to just have the system files and TCAdmin on one drive and the users/servers on the other drive? (Just like you said.)

 

Just so I'm 100% clear: I'm going to set up my OS (CentOS) like you normally would and set up the partitions (boot, swap, and an ext3 root).

 

Is there any problem in doing it like this? Or is there something I should know before doing it?


Then the SSDs won't be using RAID, will they?

It would be a waste of space, just having system files on one drive.

Yes you are right about that..

 

Isn't it better to just have the OS on one drive and then install all of TCAdmin on the other one?

 

Or have the OS and the TCAdmin game files on one and the users/servers on the other?

 

I mean, does it affect the speed or anything? I'm thinking it would slow down the servers on the drive with the game files when it has to copy files to the other drive while saving logs and so on.

 

If not, then this is the right way of doing it, right? Just install the OS and TCAdmin on one drive and then create a virtual server with the "User Files Path" pointing to the other drive?


Why wouldn't you use RAID? Is your clients' data not important?

 

It is, but a lot of people tell me not to use it and to just put the user files on the other drive. That kind of works like RAID 1, except I get 480 GB instead of only 240 GB.

 

I could also just take a backup every day; my hosting provider gives 200 GB of backup space. But I don't know, though.


RAID 1 is a must. Drives can fail. With a mirror you're at least certain that if one drive fails, the server won't die. Your clients won't notice a thing, and you can swap the failed drive without ever needing to shut down the server.

You need to explain what you're saying a little better. RAID 1 is not really a good option, and the downtime on the server depends on whether you have hot-swap bays in the chassis or not. RAID 1 on a single RAID controller does not increase read speed the way you would think; only with two separate disk controllers does it improve performance to any real degree. You may shave a second or two off a map load time, but that is still limited by the user's connection to the server as well. The cost of two drives plus an OS drive and a hardware RAID controller (software RAID is just a bad idea), weighed against the actual rate of drive failure, is not worth it. Western Digital Black/enterprise drives with 64 MB cache are very reliable nowadays. If you're renting the box from a DC, ask them which manufacturer they use for the drives and make your decision based on that.

Thanks man.. This was useful.


RAID 1 is a must. Drives can fail. With a mirror you're at least certain that if one drive fails, the server won't die. Your clients won't notice a thing, and you can swap the failed drive without ever needing to shut down the server.

 

That is not true. On some RAID controllers, the bus stalls when the array goes into degraded mode. And some RAID cards will actually slow down the 'working' drive or cause bus stalls when a rebuild is in progress.


You need to explain what you're saying a little better. RAID 1 is not really a good option

 

RAID 1 is a good option for SSDs; SSDs can and will fail more often than a hard disk due to bad blocks.

 

I recommend the Samsung 830 series for these tasks, as they're reliable and don't need periods of 0% activity to clean sectors.

 

If you're going to use hard disks, then I recommend a hardware RAID controller with a write-back cache and drives at 10K RPM or faster. The write-back cache will give you better IOPS, so quick writes aren't killing your drives with a large disk queue length.

 

A typical server will have 4x 3.5" bays (1U is common for game servers).

Zeus uses HP P410/P400 controllers with 512 MB write-back cache and 4x WD 10K VelociRaptors in a RAID 10 configuration, and it works a treat.
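Rough numbers for a setup like that, purely as a sketch (the per-drive size and throughput below are assumptions, not the actual Zeus hardware figures):

```python
# Back-of-the-envelope for a 4-drive RAID 10 behind a write-back-cache
# controller. Per-drive size and throughput are assumptions, not the
# actual hardware figures from the post above.
DRIVES = 4
DRIVE_GB = 600          # assumed per-drive capacity
DRIVE_SEQ_MB_S = 140    # assumed sequential throughput of one 10K drive

usable_gb = DRIVES * DRIVE_GB // 2              # half is lost to mirroring
best_read_mb_s = DRIVES * DRIVE_SEQ_MB_S        # reads can hit every spindle
best_write_mb_s = DRIVES // 2 * DRIVE_SEQ_MB_S  # each write goes to a mirror pair

print(usable_gb, best_read_mb_s, best_write_mb_s)  # 1200 GB, ~560 MB/s, ~280 MB/s
```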


"If you're going to use hard disks then i recommend a hardware raid controller with a write back cache and a disk speed of 10K or over. the write back cache will give you better IOPS so quick writes aren't killing your drives with a large disk que length"

 

Write-back cache is dangerous without redundant power supplies or a BBU. Also, your claim that 'quick writes aren't killing your drives with a large disk queue length' is wrong. SATA drives do not support disconnected writes, which is a significant performance bottleneck when writing to disk; only disconnected reads are supported. On SATA you only get one outstanding write transaction per disk, even with the butchered NCQ (which is a cheap version of TCQ).
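A quick back-of-the-envelope for why a single outstanding write hurts on a spinning SATA drive (the seek and rotation figures below are assumptions, typical-ish 7200 rpm values):

```python
# Why one outstanding write at a time hurts on a spinning SATA drive:
# each random write costs roughly a seek plus half a platter rotation.
# Both figures below are assumptions (typical-ish 7200 rpm values).
avg_seek_ms = 8.5
half_rotation_ms = 0.5 * 60_000 / 7200   # ~4.17 ms at 7200 rpm

service_ms = avg_seek_ms + half_rotation_ms
iops = 1000 / service_ms
print(f"~{iops:.0f} random-write IOPS per drive")  # roughly 80 IOPS
```

Which is also why the battery-backed write-back cache matters: it can acknowledge a burst of writes from RAM and flush them to the platters in a more sequential order.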

 

I would never use SATA drives in any kind of RAID setup for a dedicated server, period.

