Recently we ordered a few servers with software RAID 1, but the soft RAID doesn't seem to perform well: copying a 1GB file takes about 40 seconds. Sometimes it takes a very, very long time (I haven't timed it, but at least a few minutes), as if the server were dead.
I ran this command:
dd if=/dev/zero of=test.bin bs=1000M count=1
1+0 records in
1+0 records out
1048576000 bytes (1.0 GB) copied, 1.14416 seconds, 916 MB/s
But it obviously took much longer than one second, so why does it report only one second?
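The 916 MB/s figure is almost certainly the kernel page cache at work: dd reports how long it took to hand the data to the kernel, not how long it took to reach the disk. Forcing a flush before dd prints its timing gives a realistic number (paths and sizes below are examples):

```shell
# dd normally returns as soon as the data is in the kernel page cache.
# conv=fdatasync makes dd flush to disk before printing its timing, so
# the reported rate reflects the actual write (oflag=direct, which
# bypasses the cache entirely, is another option on real block devices).
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest.bin
```

With the flush included, the reported throughput should drop to something much closer to what the 40-second copy suggests.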
As per the topic: what is the best thing to do to the hardware to improve the bandwidth/uplink speed of a server?
I have a production server that is used for regular file serving.
P4 3.0GHz, 4GB RAM, 500GB + 160GB + 160GB hard disks, 2Mbps dedicated + 10Mbps shared.
However, the most I can pull through the whole server is always between 1.5Mbps and 3Mbps. Is there any way to pull the speed up to around 10Mbps, assuming there is bandwidth available for me to burst into?
The dedicated server has 2 HDDs, but I am not going to pay another $25/month for the hardware RAID solution (already stretched too far).
My plan is to install FreeBSD 6 and use gmirror to establish a RAID-1 "soft" mirror.
Advantages: the entire drive is mirrored, including the OS. Drives can be remotely inserted into or removed from the mirror set using a console command, so it's possible to uncouple the mirror, perform software updates on a single drive, and re-establish the mirror only after the updates have proved successful.
Disadvantages: lower I/O than a hardware solution (not a problem for me). Others???
I rarely see people consider software RAID for a tight-budget server, and I am wondering why. Could it be that other OSes don't have a solution as good as gmirror? Is it that crappy soft RAID in the past has left a bitter taste in admins' mouths? Or perhaps admins need the extra I/O of hardware?
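For reference, the insert/remove workflow described above is only a handful of commands. This is a sketch of the standard gmirror procedure; disk device names (ad0, ad2) are examples and not verified against any particular chassis:

```
# Create the mirror on the first disk and load the module at boot:
gmirror label -v -b round-robin gm0 /dev/ad0
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# Add the second disk to the set:
gmirror insert gm0 /dev/ad2
# Uncouple a disk before a risky update, re-add it when happy:
gmirror remove gm0 /dev/ad2
gmirror insert gm0 /dev/ad2
# Watch the resync progress:
gmirror status
```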
Today we are taking our detailed study of the RAIDability of contemporary 400GB hard drives to a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
How can I tune a Linux server (currently running Fedora 4) to improve download speeds for the client? I have observed that downloading from Sun is very fast compared to my own server, which is hosted on the Cogent backbone, or any other server of mine.
Please take into account that both my server and the other server are roughly the same number of hops and the same ping time away. My server and its carrier have enough bandwidth to fill the whole 100Mbps, and the client is itself a server on a different backbone that can fill 100Mbps; in fact, I was able to fill the whole 100Mbps while downloading from Sun. But when I download a test file from my server, it lags at around 3Mbps, with a single thread using wget. What do I need to do to get my server to push the full pipe, or at least half of it, for a single thread? The ping time is 25ms, the server runs Apache, and there is no load.
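One thing worth checking for a single stream at 25ms RTT is the TCP window: 100 Mbit/s times 25 ms is a bandwidth-delay product of roughly 312 KB, so the small default windows of that era can cap a single connection well below the pipe no matter how much bandwidth is available. A hedged starting point for /etc/sysctl.conf (values are illustrative, applied with `sysctl -p`):

```
# /etc/sysctl.conf -- illustrative window sizes for a long fat pipe
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

If larger windows don't move the needle, the bottleneck is more likely packet loss somewhere on the Cogent path than the server itself.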
I need to look at a way to run a specific application over the internet, inexpensively of course. The issue: we have a professional association that wants to run an email discussion list using specific email discussion software (ListServ from L-soft). The association currently has a hosted website but no capacity within its current hosting contract to run an application like this. I am told that dedicated servers run to a cost of $1000/month, which is far too expensive and not very cost-effective for this single application for a 1,300-member association.
Are there other options for running this type of application on a shared server? Does anybody have ideas on where I should look? I would prefer an Australian server, but as long as it can use our domain for email, I would be happy.
SoftLayer's data center did not fulfill its legal obligations with regard to protection against hackers. I complained to SoftLayer about the server hosting the hackers' sites twice, and although they shut the sites down, they keep coming back. They don't reply to my messages. These hackers are causing a lot of damage; what can I do to shut them down permanently? Please advise. SoftLayer only talks to these guys to ask them to remove the hacking content, but they never do, and SL doesn't bother to check on them.
I have been reading quite a bit lately about the Internap FCP. I am wondering how much it actually improves network performance and how it compares to BGP4. We currently use BGP4, but we are considering a data center with the Internap FCP for a client project.
I am looking for reviews from others who have experience with the Internap FCP and its performance. How does it compare to a network using plain BGP4? I know that the FCP uses more intelligent routing than BGP, but how big an improvement does it make?
I have a cheap managed server and support is of little help, BUT they will do what I tell them. Where should I tell them to start in order to fix high MySQL load?
The server currently hosts a large IPB forum, a large 4images gallery, and a couple of popular WordPress blogs, and this combination keeps taking MySQL down.
What can I do server-side to help with this? And what shouldn't I do?
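A reasonable first step is to find out what is actually slow before touching any buffers. A hedged sketch of what you could ask them to add to my.cnf (file location and values are typical starting points for that era of MySQL, not verified for your box):

```
# /etc/my.cnf -- illustrative starting points for a forum-heavy box
[mysqld]
log-slow-queries = /var/log/mysql-slow.log  # identify the offending queries
long_query_time  = 2
key_buffer_size  = 256M     # MyISAM index cache (IPB/4images/WordPress tables)
query_cache_size = 32M      # repeated SELECTs from blogs and galleries
max_connections  = 200
```

Then run the slow log through `mysqldumpslow` and add indexes for whatever tops the list; that usually helps more than any buffer tweak.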
- A web server for members to download from, using user/pass entries in .htpasswd so that only authorized people can see or download files.
What I need:
- Some software (or code) that can show me who logged in and when, how much they downloaded, and how many times they logged in. This is to prevent an account from being shared between many people.
- Some sort of tool that lets me add a new user/pass quickly, since I am manually adding each new username and encrypted password to the .htpasswd file.
What software should I use (or buy, if it's not too expensive)?
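For the quick-add part you may not need to buy anything: openssl can generate the same encrypted entries htpasswd does, and Apache's access log already records who downloaded what. A sketch, with example usernames, paths, and sample log lines standing in for the real files:

```shell
# Append a new user (example name/password) to .htpasswd using an
# Apache MD5 (apr1) hash, the same format htpasswd -m produces:
printf 'alice:%s\n' "$(openssl passwd -apr1 'secret123')" >> /tmp/htpasswd.demo

# Who downloaded how much: in Apache's combined log format the
# authenticated user is field 3 and bytes sent is field 10.
# Two sample lines stand in for the real /var/log/httpd/access_log:
LOG=/tmp/access_log.demo
cat > "$LOG" <<'EOF'
1.2.3.4 - alice [01/Jan/2008:00:00:00 +0000] "GET /a.zip HTTP/1.1" 200 500 "-" "-"
1.2.3.4 - alice [01/Jan/2008:00:01:00 +0000] "GET /b.zip HTTP/1.1" 200 300 "-" "-"
EOF
awk '$3 != "-" {n[$3]++; b[$3]+=$10}
     END {for (u in n) printf "%s: %d requests, %d bytes\n", u, n[u], b[u]}' "$LOG"
```

Pointing the awk one-liner at the real access log gives per-user request and byte totals, which is usually enough to spot a shared account.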
What steps can we take to improve SpamAssassin performance? I know this was discussed for older Plesk versions, but we may be able to do more now. What can I configure to stop more spam?
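If it helps, the usual knobs live in SpamAssassin's local.cf. The values below are common illustrative settings, not Plesk-specific recommendations:

```
# /etc/mail/spamassassin/local.cf -- illustrative
required_score   5.0   # lower (e.g. 4.0) to tag more mail as spam
use_bayes        1
bayes_auto_learn 1     # let obvious spam/ham train the Bayes filter
skip_rbl_checks  0     # network blocklist tests catch more but cost time
```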
We are having trouble with a site on a dedicated server: specifically, a performance issue due to many simultaneous visitors (around 1000). MySQL is overloaded. We would like to use XCache to improve performance. Is using XCache with Plesk 12 on CentOS safe, and does it work well?
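For what it's worth, enabling XCache once the extension is installed is mostly a matter of dropping an ini fragment in place (path and values are illustrative; test against Plesk's PHP before going live):

```
; /etc/php.d/xcache.ini -- illustrative
extension = xcache.so
xcache.size  = 64M    ; opcode cache size
xcache.count = 4      ; cache splits, usually the number of CPU cores
```

Note that XCache caches compiled PHP, so it cuts CPU and page-generation time; if MySQL itself is the bottleneck, you will also need a MySQL-level fix.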
Is there a colo provider that has a free private network for use between their locations, similar to what SL has for their dedicated servers between facilities? They have 10GigE between their locations, with free unlimited usage.
We have around 40 servers now, and colo would really make sense, but we are doing multicast work, so we really need a backend network to support our services, as well as many locations for better delivery quality.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1, or 10. This isn't an issue, as RAID 10 on 4x 15k SAS drives will be fine for speed and stability. What is an issue is whether this controller would be as fast or as reliable as a dedicated PCI-E card. If it can only do RAID 5 under Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available; what does that achieve? Surely if the power dies, the hard drives and motherboard can't run off that little battery. Or does it just keep the controller alive long enough to preserve the drive information in its memory if the power goes out during a rebuild?
I am in a somewhat complicated situation. I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller (as well as every other 3Ware controller they tried) does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with that motherboard.
So although I'd prefer hardware RAID, I am forced either to choose a different web host or to set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the information about software RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred software RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID 1 or no RAID at all, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me KVM-over-IP access and a Linux image preinstalled on the first HDD, so I'll have a functional system that needs to be migrated to RAID 10.
How do I do that? The big problem I see is that LILO and GRUB can't boot from software RAID 5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url] ), but they usually do not cover how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread across all of the drives in the otherwise RAID-10 array?
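That layout (a small RAID-1 /boot across all drives, everything else RAID 10) can be sketched with mdadm as below. This is a sketch only: it assumes each of sda..sdh already has a small first partition for /boot and a large second one for data, with sdi as the spare, and none of the device names are verified against the actual server:

```
# Assumes partitions of type 0xfd (Linux raid autodetect) exist.
# /boot as an 8-way RAID 1, so GRUB can boot from any surviving disk:
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
# Everything else as RAID 10, with the 9th drive as hot spare:
mdadm --create /dev/md1 --level=10 --raid-devices=8 \
      --spare-devices=1 /dev/sd[a-h]2 /dev/sdi2
# Install GRUB in the MBR of every member so the box still boots
# after any single-disk failure:
for d in a b c d e f g h; do grub-install /dev/sd$d; done
```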
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server to 4GB RAM in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array the way you can grow a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID 10, even though, from what I know, it supports it natively (without having to build RAID 0 on top of RAID-1 pairs), as long as the kernel has raid10 support.
How often do RAID arrays break? Is RAID worth having for when a server's hard drive goes down? I was thinking it might be a better option just to keep a backup drive mounted on my system and, in the event of a failure, pop in a new hard drive, reload the OS, and then restore all my backups.
I am in the process of restructuring the infrastructure on our servers. I am deciding between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would give better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while the system is live, even if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight read-performance boost, but mainly for the redundancy). I'm using CentOS.
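It can be done remotely, though it is nerve-wracking: the standard trick is to build a degraded mirror on the spare disk, copy the live system onto it, boot from the array, then absorb the original disk. A sketch only, with example device names (one typo in grub.conf and a remote box won't come back, so practice on a test machine first):

```
# Assumes sda is the live OS disk and sdb is the new, empty disk.
sfdisk -d /dev/sda | sfdisk /dev/sdb          # clone the partition table
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/                            # copy the running system
# Edit /mnt/etc/fstab and grub.conf to mount /dev/md0 as /, rebuild
# the initrd with raid1 support, reinstall GRUB on both disks,
# reboot into the degraded array, and then:
mdadm --add /dev/md0 /dev/sda1                # old disk joins the mirror
```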
I've been talking to The Planet about trading in my four-and-a-half-year-old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1GB of RAM, 2x 250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump the RAM up to 2GB for, I think, $12 more, which I'm considering and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of a RAID-1 setup with those two hard disks, but The Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? The Planet doesn't seem to offer software RAID, unless I can figure out how to set it up after installation (is that possible?). Any better ideas on the server in general?