Recently, access to my server has been very slow. I am on a VPS with two quad-core E5320 processors in a SoftLayer datacenter. I have been monitoring ping latency every 10 minutes for several weeks, and it has been acceptable at around 50ms.
Below is a record of the server memory I got from cPanel/WHM. Can you help me interpret what these numbers mean and how they affect the performance of the server?
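For context when reading those numbers: Linux counts filesystem cache as "used" memory, so a box can look nearly out of RAM while actually being fine. If you have shell access, a quick sanity check (a generic sketch, not specific to WHM's display) is:

Code:
free -m
# Look at the "-/+ buffers/cache" row: its "used" value is memory
# actually claimed by processes. The plain "used" value includes cache
# the kernel will give back under memory pressure.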
I'm running into a problem with a relatively new (2-month-old) server. I have just a few accounts on it, and I'm already noticing unusually high load for the amount of work it's doing. After some benchmarking with bonnie++ (and plain old "dd") there is clearly a problem.
Isn't a write speed of over 7MB/s reasonable to expect? Also look at the low CPU times...
Anyway, running the same test on a similar but older AND busier server showed much better results than this. In fact, dd'ing a 1GB file from /dev/zero "finished" in about 10 seconds but then pegged the server at 99% iowait (wa) for a full three minutes (until the cache finished flushing to disk, I assume), bringing the load to 15.00.
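For what it's worth, dd from /dev/zero without a sync flag mostly measures the page cache, which is exactly the "finished in 10 seconds, then minutes of iowait" effect you saw. A sketch that forces the flush time into the result (conv=fdatasync needs a newer coreutils; if your dd doesn't support it, use the second variant):

Code:
# Write a 1GB file and include the flush-to-disk time in the measurement
dd if=/dev/zero of=testfile bs=1M count=1024 conv=fdatasync
# Older dd: time the write plus an explicit sync
time sh -c "dd if=/dev/zero of=testfile bs=1M count=1024 && sync"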
That's all the info I have so far... the datacenter just replaced the card (which gave no errors) with no effect. Running these benchmark tests is about the extent of my hardware experience.
I have never used RAID1, but I am considering using it on a server.
Let's say one hard drive fails and the DC replaces it with a new one. Will the system automatically copy the data onto the new drive, or do we have to run some commands to copy the data over?
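The answer depends on the setup: most hardware RAID controllers start rebuilding automatically once the new disk is inserted, but with Linux software RAID (mdadm) you normally re-add the disk yourself. A minimal sketch, assuming a hypothetical md0 mirrored across sda1/sdb1 with sdb being the replaced disk:

Code:
# Copy the partition layout from the surviving disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb
# Re-add the new partition to the mirror; the kernel rebuilds automatically from here
mdadm /dev/md0 --add /dev/sdb1
# Watch the rebuild progress
cat /proc/mdstat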
I run software RAID1 on my server. Is it possible to disable RAID1 and use the secondary 400GB HDD as an additional drive (for 800GB total), then add another 1000GB HDD to the server and run RAID1 again, so the data is mirrored onto the 1000GB HDD?
If this is not the right forum for this... I'm sorry, I didn't know where else it might go.
I have to build a new server with RAID 1 and WHM/cPanel installed (in fact I don't have to, but I need to learn ASAP and my boss gave me an old server for practice).
I've seen the cPanel installation guide, but the partition sizes it gives apply to an 80GB disk (I think), so is there any way to calculate the size of the partitions regardless of disk size? Mine are 250GB each.
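For what it's worth, most of the partitions in that kind of layout are sized by function rather than as a fraction of the disk, so only /home really needs to scale. One common approach (my own rule of thumb, not an official cPanel formula):

Code:
/boot   ~100MB      (fixed)
swap    1-2x RAM    (sized by memory, not disk)
/tmp    1-2GB       (fixed)
/       20-40GB     (more if you don't split out /var and /usr)
/var    20GB+       (logs and MySQL databases grow with accounts)
/home   remainder   (cPanel accounts live here, so give it the rest)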
I'm trying to install it on CentOS 5 in text mode. So far I have been able to successfully install the system with RAID 1 (with partitions of any size... since it's a test it doesn't matter).
After that I ran cat /proc/mdstat, and for some partitions it shows me this:
resync=DELAYED
I've read in some places that this is not a big issue... but other places say it is... maybe I did something wrong.
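As far as I know, resync=DELAYED by itself is normal right after an install: the kernel serializes resyncs of md arrays that share the same physical disks, so the delayed arrays just queue up until the active one finishes. Roughly what you'd see (illustrative output, device names assumed):

Code:
cat /proc/mdstat
# md1 : active raid1 sdb2[1] sda2[0]
#       [==>.................]  resync = 15.3% (...) finish=22.1min
# md2 : active raid1 sdb3[1] sda3[0]
#       resync=DELAYED    <- waiting for md1 (same disks) to finish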
I've got 25 domains on a Virtuozzo/Plesk 8.6/CentOS 5 VPS. Each domain has one up-to-date install of WordPress; most have very little traffic (averaging 200MB per month), and maybe 2 domains get 5-7GB of traffic per month.
I monitor port 80 connections and rarely see more than 10 at a time. In my opinion that should be no problem at all for a VPS with 768MB guaranteed RAM and a 2.4GHz CPU. I've got 30GB of spare hard drive space too.
But.... about 8 or 10 times a day it grinds to a complete halt: server load at 500-1000%, sites timing out, Plesk taking 3 minutes to load; often I can't even connect with SSH, and the Plesk web server, Apache...
80 seconds sounds like a huge amount of time for a MySQL insert to me! Does anyone know if this is likely to be the cause of my trouble? Some problem with Plesk and the database? Or could it be something else?
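One way to confirm whether slow queries are really the trigger is MySQL's slow query log. A minimal sketch using the MySQL 5.0-era syntax that ships with CentOS 5 (the log path and threshold are my assumptions, adjust to taste):

Code:
# /etc/my.cnf, under [mysqld], then restart mysqld
log-slow-queries = /var/log/mysql-slow.log
long_query_time  = 5
# Any statement taking longer than 5 seconds gets logged with its
# timing, which should catch those 80-second inserts in the act.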
I have installed 3dm for checking the status of a 3ware 8086 card, but when I go to [url] it doesn't show anything. It seems it cannot connect to port 1080, even though I have turned off the firewall. I have already checked its config file to make sure the port is 1080.
Does anyone have experience with the 3dm software?
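A couple of generic checks that might narrow it down (nothing 3dm-specific, just standard diagnostics): make sure the daemon is actually running and listening on the port you expect.

Code:
# Is the daemon alive?
ps aux | grep -i 3dm
# Is anything actually listening on 1080?
netstat -tlnp | grep 1080
# If it is listening only on 127.0.0.1, that would explain
# why remote access to the page fails.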
When I load an ovzkernel, it loses support for the 3ware 8006 controller. As suggested on the OpenVZ forums, I should load the new 3dm2 tools. I tried this, but when I try to install it, it says it can't find the 3dm2 binary.
I just got 2 dedicated servers, and while creating software RAID 1, I'm seeing around 7 megabytes per second (6700 kB/s) during the initial sync, which I assume is the write speed. This is a quad-core, SATA2 setup...
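One thing worth ruling out before blaming the hardware: the md resync rate is deliberately throttled by the kernel so the machine stays usable. You can check and raise the limits (values are in KB/s per device; the 50000 figure below is just an example):

Code:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# Temporarily raise the floor to see if the sync speeds up
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# Then watch the rate reported here
cat /proc/mdstat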
However, after extracting it, when I run the setup (./setupLinux_x64.bin -console) I get "Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."
Can anyone give me the steps for installing 3dm2 on a CentOS/WHM box?
Not sure if this is too specific for this forum or not, but since I've gotten great advice here in the past I'll give it a shot.
I have a colo'd production server with a 3ware 9500S-12 RAID card and 12 400GB drives attached. The drives form 3 arrays:
1) 2 drive RAID 1 (400GB)
2) 2 drive RAID 1 (400GB)
3) 6 drive RAID 5 (2TB)
plus 2 global hot spares.
For a variety of reasons I need to change this setup so that arrays 1) and 2) remain as-is, and array 3) is removed and those 6 drives replaced with 6 new 750GB drives in JBOD mode. I've copied all the data from the RAID5 array 3) onto 3 of the new 750GB drives (the 2TB array wasn't completely full), and I have 3 other blank 750GB drives.
What's the best / safest way to do this? Ideally I'd like to remove the 6 old 400GB drives and retain the ability to plug them all back in and get my old array back (in case something goes horribly wrong during the switch).
Do I need to reboot into 3BM (3ware BIOS Manager) to do this, or can I do it from the command line?
Is there any problem with having a drive that already contains data written to it by another system, and bringing it up on the 3ware card in JBOD mode with the data intact? (All filesystems are ext3.) I'm not going to have to reformat the drive, am I?
Is there any problem with the new drives being SATA II (Seagate Barracuda ES 750GB) while the old drives (and I think the 3ware card, and certainly my motherboard) are SATA I? I've read that this should "just work", but of course I am nervous! There are no jumpers I can see on the 750GB drives.
Will it be possible to remove the RAID 5 in such a way that I could plug the drives back in and get the array back?
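On the command-line side, the rough tw_cli sequence would look like the sketch below. The unit number u2 is my assumption (confirm it from the show output first), and whether the 9500S exposes bare disks as JBOD or wants them configured as individual units is something to verify in the CLI guide or 3BM:

Code:
tw_cli /c0 show        # note the RAID5's unit number and which ports its disks use
tw_cli /c0/u2 del      # delete the RAID5 unit definition - its disks become unavailable
# ...physically swap the six 400GB drives for the 750GB drives...
tw_cli /c0 rescan      # have the controller detect the newly inserted drives

On the "plug them back in later" question: deleting the unit removes the controller's unit definition, so I wouldn't count on the array coming back just by reinserting the old disks; treat the copies on the 750s as the real safety net.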
I have a bunch of 3ware 95** RAID arrays with BBUs. Lately the BBUs are frequently reporting "high" and "too high" temperature readings.
The DC reports that the intake temp is 74 degrees and the exhaust is 91 degrees. Since the RAID cards and the BBUs are at the back of the machine, they're getting more hot air than cool air.
I'll probably give this a shot in the near future anyway, but I just wanted to check: has anyone tried, and had success with, putting either 3ware 8006-2LP or 9550SX-4LP cards in Dell PowerEdge 860 systems with a couple of SATA drives, instead of using the Dell PERC controllers?
We use 3ware's 8006-2LP SATA RAID controller in each of our servers for RAID1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the 2 RAIDed drives are in them).
One of the drives appears to be starting to crap itself: smartd is reporting issues, including multi-zone errors (although tw_cli shows the RAID on c0 to be OK).
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (a 74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier, and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive, I know, but I've never had to do it before so I never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3DM) to first "remove" the failing drive from the RAID, then add the new drive, and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert the new drive in the carrier and re-insert it into the server
4) using tw_cli, tell the controller to add the new drive to the array and rebuild the array
Am I anywhere close to being correct? I'm sure some of you out there have done this countless times with 3ware controllers and hot-swap drive carriers...
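That sequence matches my understanding. A sketch of the commands, assuming unit u0 with the failing drive on port 0, based on the 9000-series tw_cli syntax (verify against the CLI guide for your firmware, since older tw_cli releases for the 8000 series use slightly different forms):

Code:
tw_cli /c0 show                      # confirm the unit and port numbers first
tw_cli /c0/p0 remove                 # tell the controller to drop the failing drive on port 0
# ...hot-swap the physical drive in its carrier...
tw_cli /c0 rescan                    # make the controller detect the new drive
tw_cli /c0/u0 start rebuild disk=0   # rebuild the degraded unit using the drive on port 0
tw_cli /c0/u0 show                   # check rebuild status as it runs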
My dedicated server has 2 HDDs, but I am not going to pay another $25/month for the hardware RAID solution (I'm already stretched too far).
My plan is to install FreeBSD 6 and use gmirror to establish a RAID1 "soft" mirror.
Advantages: the entire drive is mirrored, including the OS. Drives can be remotely inserted into or removed from the mirror set using a console command, so it's possible to uncouple the mirror, perform software updates on a single drive, and then re-establish the mirror only after the updates have proved successful (see the sketch at the end of this post).
Disadvantages: lower I/O than a hardware solution (not a problem for me)... others???
I rarely see people consider software RAID for a tight-budget server, and I am wondering why. Could it be that other OSes don't have a solution as good as gmirror? Or is it just that crappy soft-RAID in the past has left a bitter taste in admins' mouths? Or perhaps admins need the extra I/O of hardware?
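For reference, the workflow I described looks roughly like this with gmirror (the disk names ad0/ad2 and mirror name gm0 are assumptions; adjust to your hardware):

Code:
gmirror load                              # load the kernel module
gmirror label -v -b round-robin gm0 ad0   # create the mirror on the first disk
gmirror insert gm0 ad2                    # attach the second disk; it syncs in the background
gmirror status                            # watch sync progress
# The uncouple/update/re-couple trick:
gmirror remove gm0 ad2                    # drop one disk out before risky updates
gmirror insert gm0 ad2                    # re-add it once the updates prove out (triggers a resync)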
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I see the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test may disable the write cache on the controller /c0 for up to 24 hours. Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually done a 3ware battery test on a production system before? Is it OK to do? I'm looking for someone who has actually performed the test operation, not someone who just assumes it will be OK.
I have a 3ware 9650SE-24M8 RAID controller. It was working fine for a few days, and today, while I was changing the RAID configs and installing different OSes, it just stopped working. Now when I boot the machine it does not even detect any hard drives or the RAID controller. I looked inside the box, and the LED on the RAID controller that is usually solid green is now blinking red. I googled for solutions, but all searches led me to useless information, such as blinking red lights on the server case.
After seeing a topic a week or so ago discussing RAID cards, I decided to give a hardware RAID card a go to see if performance would increase in one of our boxes.
Just for the simplicity of the test, I have put the drives into a RAID0 array, purely for performance testing with no redundancy. I chose a 3ware RAID card and went for the 2-port 8006-2LP option rather than the 9600 (they had the 8006-2LP and risers in stock, and what I've always been told is that SATA1 vs SATA2 is really a selling point rather than any real performance increase, but we will leave that argument there). Because we run mainly Windows systems, I have put on Windows Server 2003 x64 R2. What I am finding after installing it all is that it seems pretty "slow".
The rest of the hardware is a dual quad-core Xeon (2x E5410) with 8GB RAM on a Tyan motherboard. The hard drives are 160GB Western Digital 7200 RPM, so I can't quite see why it feels like it's not running at peak level.
Does anyone have any applications or software to give this RAID array a proper test? I really don't want to order any more, or roll them out onto the network, only to find that software RAID would have been a better improvement. I did try a burn-in app which tests everything, but according to the 20 seconds I ran it for, on average it only transferred at 2MB/s... That can't be right...
I think one possibility is that the RAID drivers aren't installed correctly, as it's still showing "Unknown Device" entries in Device Manager, and it seems it won't let me manually install the drivers for the 3ware device because it doesn't like the OS, even though I have the correct ones and Windows installed with them fine (a bit slower than normal, anyway).
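One thing worth checking before blaming the drivers: on 3ware cards, abysmal write numbers are very often just the unit write cache being disabled. tw_cli is available for Windows too, so something like the following should show and fix it (the unit number u0 is an assumption):

Code:
tw_cli /c0/u0 show           # look for the Cache setting on the unit
tw_cli /c0/u0 set cache=on   # enable the unit write cache (without a BBU, understand the power-loss risk)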