One of our 3Ware RAID controllers is showing a battery backup unit (BBU) temperature TOOHIGH error, and the BBU status is now FAULT.
I presume the whole BBU will need to be replaced, but what are the immediate effects or consequences of leaving the BBU in this condition?
Today we are going to take a detailed look at how well contemporary 400GB hard drives suit RAID use. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Is motherboard RAID as good as a dedicated PCI-E card? I'm guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1 or 10. That isn't an issue, as RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or as reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest the controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available; what does that achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery - or does it just keep the controller's cache memory alive long enough to preserve in-flight data if the power goes out during a write or rebuild?
My website seems to be showing me lots of PHP & SQL errors.
For example, my live-chat shows me:
Warning: session_start() [function.session-start]: open(/tmp/sess_8aa74272d583973018e8f0805a796df2, O_RDWR) failed: Read-only file system (30) in /home/polar/domains/.../auth.php on line 20
I'm getting this error message when logwatch runs:

--------------------- Kernel Begin ------------------------
WARNING: Segmentation Faults in these executables
    httpd : 9 Time(s)
---------------------- Kernel End -------------------------
I have recompiled Apache twice, but it doesn't help.
I'm running this:

root@host [~]# php -v
PHP 4.4.6 (cli) (built: May 15 2007 12:54:50)
Copyright (c) 1997-2007 The PHP Group
Zend Engine v1.3.0, Copyright (c) 1998-2004 Zend Technologies
    with Zend Extension Manager v1.0.10, Copyright (c) 2003-2006, by Zend Technologies
    with Zend Optimizer v3.0.1, Copyright (c) 1998-2006, by Zend Technologies
and this is my server:

Processor: Dual Xeon E5310 Quad Core (Clovertown)
Memory: 4GB DDR Registered ECC
Hd1: Dual 73GB SCSI / Hardware RAID 1
I have had this segmentation fault problem since the first day I got my server. Any advice on what to do about it?
A friend of mine suggested upgrading to PHP 5, but I'm not sure that's the way to fix it. I have about 40 accounts/domains and I'm not sure all the websites will work with PHP 5.
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller - as well as every other 3Ware controller they tried - does not work with the Fujitsu-Siemens motherboard used in the server, and that FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat... scared.
I have read a lot of the information about software RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I'd prefer 8 HDDs (or actually 9) over 6, but I am not sure their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred software RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID 1 or no RAID, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me KVM-over-IP access and a Linux image preinstalled on the first HDD, so I'll have a functional system that needs to be migrated to RAID 10.
How do I do that? The big problem I see is that LILO and GRUB can't boot from software RAID 5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url]), but they usually do not explain how to set up the boot partition. Should it be a small (100-200MB) RAID-1 partition mirrored across all of the drives in the otherwise RAID-10 array?
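For what it's worth, the layout described above can be sketched with mdadm roughly like this. This is only a sketch: the device names (/dev/sd[a-h]) and the two-partition-per-drive layout are assumptions, and it has to run as root from the KVM/rescue environment.

```shell
# Sketch only - device names and partition layout are assumptions.
# Each drive is partitioned identically: a small partition 1 (~200MB)
# for /boot and the rest as partition 2.

# /boot as RAID-1 mirrored across all eight drives - GRUB can then
# read any single member as if it were a plain partition:
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# Everything else as RAID-10 (add --spare-devices=1 and a ninth
# drive's partition here if the chassis takes a hot spare):
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2

mkfs.ext3 /dev/md0      # /boot
mkfs.ext3 /dev/md1      # /
mdadm --detail --scan >> /etc/mdadm.conf
```

Install GRUB into the MBR of every drive as well, so the machine can still boot if the first disk is the one that fails.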
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server to 4GB RAM in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array the way you can grow a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively - without having to layer RAID 0 on top of RAID-1 pairs - as long as the support is in the kernel.
How often do RAID arrays break? Is RAID worth having if a server's hard drive goes down? I was thinking it might be a better option to just keep a backup drive mounted on my system and, in the event of a failure, pop in a new hard drive, reload the OS, and then restore all my backups.
I am in the process of restructuring the infrastructure on our servers. I am considering either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
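It is worth a quick back-of-envelope check of the capacity assumption with four 250GB bays (sizes here are illustrative):

```shell
DRIVE_GB=250
# RAID 5 on 3 drives (one drive's worth of parity) + 1 hot spare:
raid5_gb=$(( (3 - 1) * DRIVE_GB ))
# RAID 10 on all 4 drives (two mirrored pairs, striped):
raid10_gb=$(( 4 / 2 * DRIVE_GB ))
echo "RAID5+spare: ${raid5_gb} GB  RAID10: ${raid10_gb} GB"
# -> RAID5+spare: 500 GB  RAID10: 500 GB
```

With only 4 bays and a hot spare reserved, a 3-disk RAID 5 ends up with the same usable capacity as RAID 10; RAID 5 only pulls ahead if you skip the spare and use all 4 disks (750GB vs 500GB).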
Is it possible to turn a non-RAID setup into Linux software RAID while the system is live, even if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight read-performance boost, but mostly for the redundancy). I'm using CentOS.
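The usual approach here is the degraded-array trick: build a one-disk RAID-1 on a second drive, copy the running system onto it, boot from it, then absorb the original disk. A rough sketch, where /dev/sda (the live OS disk) and /dev/sdb (an identical blank disk) are assumed names - have console/KVM access ready before trying this remotely:

```shell
# Degraded-array trick - device names are assumptions, not your layout.
sfdisk -d /dev/sda | sfdisk /dev/sdb      # clone the partition table
# Create the mirror with only the new disk; "missing" reserves the
# slot for the current OS disk:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -ax / /mnt/                         # copy the running system over
# ...edit /mnt/etc/fstab and grub.conf to boot from /dev/md0,
# reinstall GRUB on both disks, reboot into the array, then add the
# original disk so the mirror rebuilds onto it:
mdadm /dev/md0 --add /dev/sda1
```

The reboot into the degraded array is the risky step; if the new fstab or GRUB entry is wrong, the box won't come back without console access.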
Trying to enable XCache on my cPanel, CentOS server (suPHP *not* enabled).
I followed this guideline: [url]
XCache shows up in the php -m output; however, I get this output as well:
Code:
root@server [/tmp]# php -v
PHP 5.2.8 (cli) (built: Jan 5 2009 16:23:03)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
    with the ionCube PHP Loader v3.1.34, Copyright (c) 2002-2009, by ionCube Ltd., and
    with Zend Extension Manager v1.2.2, Copyright (c) 2003-2007, by Zend Technologies
    with Zend Optimizer v3.3.3, Copyright (c) 1998-2007, by Zend Technologies
Segmentation fault (core dumped)

In the /usr/local/lib/php.ini file I had this portion set up before the Zend portion...
Code:
[xcache-common]
extension = xcache.so

[xcache.admin]
xcache.admin.auth = On
xcache.admin.user = ""
xcache.admin.pass = ""

[xcache]
; ini only settings, all the values here are default unless explained
; to disable: xcache.size=0
; to enable : xcache.size=64M etc (any size > 0) that your system mmap allows
xcache.size = 128M
; set to cpu count (cat /proc/cpuinfo | grep -c processor)
xcache.count = 4
; just a hash hint; you can always store count(items) > slots
xcache.slots = 8K
; ttl of the cache item, 0=forever
xcache.ttl = 0
; interval of gc scanning expired items, 0=no scan, other values are in seconds
xcache.gc_interval = 0
; same as above but for the variable cache
xcache.var_size = 0M
xcache.var_count = 1
xcache.var_slots = 8K
; default ttl
xcache.var_ttl = 0
xcache.var_maxttl = 0
xcache.var_gc_interval = 300
xcache.test = Off
; N/A for /dev/zero
xcache.readonly_protection = Off
; for *nix, xcache.mmap_path is a file path, not a directory.
; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection
; 2 groups of php won't share the same /tmp/xcache
; for win32, xcache.mmap_path=anonymous map name, not file path
xcache.mmap_path = "/dev/zero"
; leave it blank (disabled) or "/tmp/phpcore/"
; make sure it's writable by php (without checking open_basedir)
xcache.coredump_directory = ""
; per request settings
xcache.cacher = On
xcache.stat = On
xcache.optimizer = Off

[xcache.coverager]
; per request settings
; enable coverage data collecting for xcache.coveragedump_directory and
; xcache_coverager_start/stop/get/clean() functions (will hurt executing performance)
xcache.coverager = Off
; ini only settings
; make sure it's readable (care open_basedir) by the coverage viewer script
; requires xcache.coverager=On
xcache.coveragedump_directory = ""

; Memcache Section
extension = memcache.so
memcache.allow_failover = 0

When I search for "zend_extension" this is the result:
Code:
; Directory in which the loadable extensions (modules) reside.
extension_dir = "/usr/local/lib/php/extensions/no-debug-non-zts-20060613"
zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.2.so"
zend_extension_ts="/usr/local/IonCube/ioncube_loader_lin_5.2_ts.so"
zend_extension="/usr/local/lib/php/extensions/no-debug-non-zts-20060613/xcache.so"

If I move the xcache line to the first position, before the ionCube one, I get this error:
Recently I've gone through lots of providers, and my latest one, computhings.co.uk (which was recommended), has recently lost support (their license expired) and my domain is having problems!
I tried a tracert to track down the problem and I get an error resolving the domain name. My DNS servers are set to where they should be, and I can reach the nameservers (ns1/ns2) via a web browser, but when accessing my domain I get "server not found".
Is this my fault or theirs? If it's mine I will hang on, and if it's theirs I'm moving servers today!
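One way to narrow down whose fault it is, assuming dig is available (example.com stands in for the real domain, and the ns1 hostname below is illustrative):

```shell
# Compare what the registry, the host's nameserver, and your own
# resolver each say about the domain:
dig NS example.com                    # delegation at the registry
dig @ns1.example.com example.com A    # ask the host's NS directly
dig example.com A                     # what your local resolver returns
```

If the first two queries answer but the third fails, it's a resolver or propagation issue on your side; if the host's own nameserver doesn't answer for the domain, the fault is theirs.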
Can an MRTG graph measuring bandwidth be configured to show faulty results - like 20% more than what I am actually using?
The reason I am wondering is that I have an image hosting server running lighttpd as the web server. The only thing consuming bandwidth now is lighttpd serving images.
While lighty's status page shows I am using about 4.2 MBytes/s (roughly 32-34 Mbit/s), the bandwidth graph from the data center shows something close to 52 Mbit/s. Please see the pictures below and let me know what you guys think.
The data center graph covers the past 4 hours (1-minute average), while the lighty screenshot is real time.
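The unit conversion itself checks out; a one-liner using the 4.2 MB/s figure from the status page:

```shell
# 1 MByte/s = 8 Mbit/s, so lighty's ~4.2 MB/s works out to:
mbits=$(awk 'BEGIN { printf "%.1f", 4.2 * 8 }')
echo "${mbits} Mbit/s"
# -> 33.6 Mbit/s
```

Even adding typical Ethernet/IP/TCP framing overhead, often quoted at around 5-10%, only gets you to roughly 35-37 Mbit/s, so a ~52 Mbit/s reading is worth querying with the data center. Keep in mind, too, that a 1-minute average and a real-time snapshot taken at different moments are not directly comparable.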
I'm transcoding videos on a web server using ffmpeg and can successfully transcode some video formats, but I'm having a few issues.
One of them is that I get a "Segmentation fault" when trying to transcode a video with the h.264 codec. I can't find much by searching and am wondering whether that's a codec issue or something else.
After these changes, when I try starting Apache I get the error: Segmentation fault - core dumped.
I checked the error logs and they show:
[Mon May 27 14:09:24 2013] [notice] child pid 29964 exit signal Segmentation fault (11)
[Mon May 27 14:09:32 2013] [notice] child pid 29963 exit signal Segmentation fault (11)
[Mon May 27 14:11:14 2013] [notice] caught SIGTERM, shutting down
[Mon May 27 14:11:19 2013] [warn] RSA server certificate CommonName (CN) `BUNTY RAY' does NOT match server name!?
[Mon May 27 14:11:19 2013] [warn] No JkLogFile defined in httpd.conf. Using default /opt/apache-2.2.24/logs/mod_jk.log
We have a little problem on our server - for some time it has been reporting errors:
kernel: spamd[6479]: segfault at 9a16000 ip 467840ac sp bffe9b5c error 6 in libc-2.5.so[46713000+13e000]
kernel: webalizer[12318]: segfault at 81a80cc ip 080d9279 sp bff2f230 error 4 in webalizer[8048000+b2000]
kernel: spamd[6515]: segfault at 9cbb000 ip 467840ac sp bffe9b5c error 6 in libc-2.5.so[46713000+13e000]
kernel: pure-quotacheck[16285]: segfault at bf3c9ff8 ip 46769d76 sp bf3c9fec error 6 in libc-2.5.so[46713000+13e000]
kernel: php[14910]: segfault at bf727da0 ip 080b0edc sp bf727d30 error 6 in php[8048000+64d000]
The errors appear 2-3 times every 10 minutes, and always in these 4 programs: webalizer, php, spamd, pure-quotacheck.
The second thing is a problem with some kind of file caching or similar - for example, when we restart named it reports:
/etc/named.conf:23564: open: /var/named/slaves/slaves.named.conf: file not found
The file does of course exist, but the funniest thing is that when we remove this line from named.conf and try to restart, the error appears again - even when the line is empty in named.conf and there is no other include of this file, and even after a full server restart (without this include in named.conf), it still reports this error.
Server config: C2Q Q9550, 8GB RAM, 2x500GB in hardware RAID 1, CentOS 5.3 32-bit, cPanel. Maybe someone has an idea what it could be, and what else we can check?
The Parallels Plesk autoinstaller emailed me at 03:34 to confirm that Plesk was successfully updated, but ever since then /var/log/httpd/error_log has had entries every 5 minutes stating 'child pid xxxxxx exit signal Segmentation fault (11)'.
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump the RAM up to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to set it up after installation (is that possible?). Any better ideas on the server in general?