I just got two dedicated servers, and while creating a software RAID 1 array, the initial sync is running at around 7 megabytes per second (6700 kB/s), which I assume is the write speed.
This is a quad-core, SATA II setup...
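If the disks themselves are healthy, the usual culprit for a slow initial sync on Linux md is the kernel's resync throttle. A sketch of how to check and raise it (the 50000 KB/s value is illustrative; run as root):

```shell
# Current throttle (defaults are commonly 1000 min / 200000 max, in KB/s):
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Raise the floor so the initial sync isn't held back:
sysctl -w dev.raid.speed_limit_min=50000
# Watch the sync speed and ETA:
cat /proc/mdstat
```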
I'm using WAMP and IIS for the first time, so bear with me if I've overlooked something obvious.
I have a simple script that creates a few files and then attempts to move them to a subdirectory. I get a permission-denied error when the script tries to move the files under WAMP. I'm not sure how to set directory permissions in WAMP, and I'm also unsure how this will affect the script when I migrate it to an IIS server.
We've been testing CentOS 5.3 on an Intel DG35EC board (G35 + ICH8 + 82566 Gb NIC) and found that the write speed of a 7200 rpm SATA II drive connected to the on-board ICH8 controller is consistently under 10 MB/s, which is quite horrible!
The same hardware gets 100+ MB/s transfer rates with Debian 5.0 and FreeBSD 7.1, just not with CentOS 5.3! It doesn't matter whether AHCI mode is selected in the BIOS, and of course the BIOS has been updated to the latest version.
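When comparing distros on identical hardware, a repeatable (if crude) sequential-write test helps quantify the gap; the file path and size here are arbitrary:

```shell
# Write 64 MiB and force it to disk before timing stops (conv=fdatasync),
# so the page cache can't hide a slow controller:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```

It is also worth checking dmesg on the slow system to see whether the kernel bound the controller with the ahci/libata driver or fell back to legacy IDE mode, which caps throughput badly.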
I've had my account for less than two months, and in that time it was actually usable for only a couple of weeks. I've now been waiting nearly 7 days for a resolution to the "Out of Space" errors. The most recent communication was this morning (one of only two responses in 7 days), stating that he would "try moving your account to a newer server. I should have another update later today." It has been 6 hours since that message and still no resolution.
BQ has a very good reputation around here. So, I was very hopeful that they would live up to that reputation. However, 7 days to resolve an issue doesn't seem all too responsive to me. Apparently, I'm not the only one who has experienced this same problem of delayed responses and slow resolution.
Before you ask... yes, I used the correct email for support.
So, I have to ask... is this typical of the support I should expect? If I ever have to restore my data, will it take weeks to resolve any issues I may experience?
I recently took over as webmaster for my employer. We want to move our site to a CMS (WordPress). We've had a shared account with Tera-Byte for years. I went to install WordPress and it says it needs MySQL version 4 or greater.
Tech support is willing to move me to a newer server that has MySQL 4, but doing so would mean copying everything over and reconfiguring anything that needs to be reconfigured (i.e., all our staff's email accounts) from scratch. Is this standard practice?
I've checked the average page download time that Googlebot reports in Google Webmaster Tools and, from what I've seen elsewhere, I think the number is good: less than 200 milliseconds. However, my pages are compressed and small (<1.5 KB), which works out to a download rate of only ~7500 bytes/sec for Googlebot.
What kind of page download speeds do others get with Googlebot? What's typical/good/bad?
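For reference, the ~7500 bytes/sec figure is just the two reported numbers divided; a quick sketch of the arithmetic (1.5 KB taken as 1500 bytes):

```shell
bytes=1500   # approximate compressed page size
ms=200       # reported average download time
echo $(( bytes * 1000 / ms ))   # bytes per second; prints 7500
```

In other words, a low bytes/sec number on tiny pages mostly reflects fixed per-request latency, not a slow server.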
I believe there is a gap in my understanding of VPS with regards to just how much control you really have over the system. VPS gives you "root access" which typically means top-level access... but in reality, the true super user is a boot-up console user. Few VPS providers that I've investigated offer console access to your VPS while booting.
What, then, can be done about system upgrades, or about advanced features like root filesystem encryption? Say, for instance, that my provider offers openSUSE 10.1 and I want 10.2. I would be loath to attempt such a thing if I can't reboot and watch as it goes. What if the upgrade failed and I needed to drop to single-user mode to fix it?
Or maybe my real misunderstanding here is that you can't upgrade a system in a VPS if the provider doesn't offer the upgrade?
And what if I want my entire system (other than a boot partition) to be encrypted? This would include an encrypted root and swap, and it also requires a password at boot, well before any services (like sshd) start.
Again, maybe the real answer is that I can't do that at all anyway and so it doesn't matter.
I've taken the scalable approach when it comes to servers for my various sites. With shared servers, I never really worried about backup or even hard drives going down. Same goes for VPS. For some reason, when I moved to dedicated servers, I outfitted them with 74GB SATA drives in a RAID setup. My understanding is that it protects me if one drive happens to fail. I've been lucky and haven't had that problem.
I'm at the point now where I'm looking to upgrade from a VPS paying around $75 per month to a dedicated server. I can stand to be down a day if a hard drive goes, if it means $75 a month in savings. My biggest concern would be suggestions on the best way to protect myself in the event of a catastrophe.
Contacted SoftLayer about possibly adding a second server for me and honoring the price I'm paying on my old server.
Finally, both the old and new sites see roughly 3,000 visits per day. The server I'm considering is a Clovertown 5320 1.86 GHz dual quad-core, 4 GB RAM, RAID with two 74 GB Cheetah drives, 100 Mbps port, 2000 GB bandwidth. Is this overkill, or the right server for the job?
Is 40 max_user_connections for MySQL typical in a shared hosting environment? Or are there shared hosts out there that allow more than 40 max_user_connections per account?
I've never used RAID 1, but I'm considering a RAID 1 server.
Let's say one hard drive fails and the DC replaces it with a new one. Will the system automatically copy the data onto the new drive, or do we have to run some commands to do it?
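With Linux software RAID (mdadm), the rebuild does not start until the replacement disk is partitioned and added back into the array; after that the kernel resyncs on its own. A sketch, assuming the array is /dev/md0, the surviving disk is /dev/sda, and the new disk is /dev/sdb:

```shell
# Clone the partition layout from the surviving disk onto the new one:
sfdisk -d /dev/sda | sfdisk /dev/sdb
# Add the new partition into the degraded array:
mdadm --manage /dev/md0 --add /dev/sdb1
# The rebuild starts automatically; progress shows up here:
cat /proc/mdstat
```

Many hardware RAID controllers, by contrast, start rebuilding automatically on a hot-swap, so it is worth asking the DC which setup they run.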
I run software RAID 1 on my server. Is it possible to disable RAID 1, use the secondary 400 GB drive as an additional disk (800 GB total), then add another 1000 GB drive to the server and run RAID 1 again, so the data is mirrored onto the 1000 GB drive?
If this isn't the right forum for this... I'm sorry, I didn't know where else it might go.
I have to build a new server with RAID 1 and WHM/cPanel installed (in fact I don't have to, but I need to learn ASAP and my boss gave me an old server for practice).
I've seen the cPanel installation guide, but the partition sizes there assume an 80 GB disk (I think), so is there any way to calculate the partition sizes regardless of disk size? Mine are 250 GB each.
I'm trying to install it on CentOS 5 in text mode; so far I've been able to successfully install the system with RAID 1 (with partitions of any size, since this is just a test it doesn't matter).
After that I ran cat /proc/mdstat, and some partitions show this:
resync=DELAYED
I've read in some places that this is not a big issue... but other places say it is... maybe I did something wrong.
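For what it's worth, "resync=DELAYED" in /proc/mdstat usually just means that array is queued: the kernel resyncs arrays that share the same physical disks one at a time, so the others wait their turn. A typical excerpt right after install looks something like this (illustrative, not from your machine):

```
md1 : active raid1 sdb2[1] sda2[0]
      2096384 blocks [2/2] [UU]
        resync=DELAYED
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
      [==>..................]  resync = 12.0% (12544/104320) finish=0.1min
```

Once md0 finishes, md1's resync starts on its own; nothing is wrong unless an array stays delayed after all the others have completed.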
I recently worked on an issue involving a severe performance difference between "write back" and "write through" caching on the RAID controller/hard drives.
Long story short: we purchased 12 IBM x3550 M2s that came with the LSI SAS1068E/SR-BR10i RAID controller (the gimped redheaded stepchild of the MR series: no BBU, no onboard DIMM) and had very bad, inconsistent write throughput with it. Sometimes it writes at 300-400 MB/s (a dd test, I know... don't flame, I know dd is NOT a good benchmark), sometimes as low as 30 MB/s. The servers are configured with 2.5" 500 GB SATA drives in hardware RAID 1.
The dmesg log showed sda defaulting to "write through". I figured out that via lsiutil you can set the drives to "write back", and once we did, write performance became much more consistent. NOTE: this enables write-back on the SATA drives themselves, NOT on the controller.
I looped lspci over all our VPS servers and found that those with LSI (SAS 8344ELP) cards have sda set to "write through w/ FUA". Those are all RAID 10s, and I have not heard a single complaint from any customer about poor I/O performance.
I believe the 8344ELP does have a BBU; I can double-check with the DC. The DC is on UPS as well, so that rules out the usual shortcomings of enabling write-back caching.
I want to ask those of you using Xen (3.3 & 3.4): do you get better I/O performance with "write back" or "write through" caching? I'm looking for real-world results, from production VPS servers you've actually deployed with clients on them.
My dedicated server has 2 HDDs, but I'm not going to pay another $25/month for the hardware RAID solution (already stretched too far).
My plan is to install FreeBSD 6 and use gmirror to establish a RAID 1 "soft" mirror.
Advantages: the entire drive is mirrored, including the OS. Drives can be remotely inserted into or removed from the mirror set with a console command, so it's possible to uncouple the mirror, perform software updates on a single drive, and re-establish the mirror only after the updates have proved successful.
Disadvantages: lower I/O than a hardware solution (not a problem for me)... others?
I rarely see people consider software RAID for a tight-budget server, and I'm wondering why. Could it be that other OSes don't have a solution as good as gmirror? Has crappy soft RAID in the past left a bitter taste in admins' mouths? Or do admins simply need the extra I/O of hardware?
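The uncouple/update/recouple cycle described above comes down to a few gmirror(8) commands (the mirror name gm0 and provider ad2 are illustrative):

```shell
gmirror status              # confirm the mirror is COMPLETE first
gmirror remove gm0 ad2      # detach one disk from the mirror set
# ...apply and test the software updates on the remaining disk...
gmirror insert gm0 ad2      # re-attach the disk; gmirror resilvers it
```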
How do I chmod config.php so it doesn't have write permissions for everyone? I am running cPanel 10x with WHM/extras. Here is the full report when I try to load Fantastico scripts: "You must secure this program. Insecure permissions on config.php. While installing CSLH you might have needed to change the permissions of config.php so that it is writable by the web server. config.php no longer needs to be written to, so please chmod config.php to not have write permissions for everyone. You can do this by UNCHECKING the box that reads 'write permissions' for the file."
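From a shell, the same fix is a single chmod; this sketch demonstrates it on a scratch file (in practice, run the chmod on the real config.php inside your CSLH directory, whose path varies per install):

```shell
touch config.php              # stand-in for the real file
chmod 644 config.php          # owner rw, group/other read-only (no write bit)
stat -c '%a' config.php       # prints 644
```

This is equivalent to unchecking the group and world "write" boxes in the cPanel file manager.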
I use a persistent cookie and a session cookie to help track return visitors, latent conversions, and keywords. My problem is that my hosting company, BlueHost, does not allow me to write the cookie data to my raw access log, which would let me analyze this information.
Does anyone have experience using a script to get around this issue and write the cookie data to my log file? Or, as an alternative, can anyone recommend a good hosting company that does enable this functionality?
BlueHost has generally worked fine, so a work-around would be preferred, but if needed I may switch.
Everything is fine, but writes are very slow; I suppose it's because the write cache is not enabled.
I enabled it and ran some tests, and there is a big difference:
[url]
I'm worried about enabling it, because if the server goes down I think it could cause disk corruption and lose data, and maybe the OS. What do you think about that? Data is the priority!
The disks have this technology: Seagate-exclusive IRAW (Idle Read After Write) enhances data protection by verifying, during drive idle time, that data in the drive buffer was properly written.
The servers are in a datacenter and have RAID 1 with Cheetah® 15K.5 SAS 3Gb/s 146.8-GB hard drives - ST3146855SS [url]
I'm also VERY worried about the comments about this card:
[url]
Is it true that the RAID doesn't get rebuilt? If that's the case, I don't know why I'm running RAID at all.
I have learned some bits of regular expressions for simple scripting, but writing a .htaccess file is, uh, syntactically daunting.
THE CASE:
The URLs of my site used to be of the form [URL]... They are now of the form [URL]...
I am trying to perma-redirect (301) the old format (affiche_fiche.php) to the new format (fiche.php) using a .htaccess.
So far, all I have achieved is a hatred of punctuation signs. What's the correct syntax for a .htaccess file that does the redirect?
THE CONTEXT: The format change took place more than six months ago, but Google Webmaster Tools still spits out 450 problems a day: 404s on URLs using the old format. I had assumed these would just fade away, but they don't. So I guess that 301'ing them is cleaner. Or it would be, if I understood the syntax.
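If the script name is the only part of the URL that changed, mod_alias is enough and no regular expressions are needed. A minimal sketch, assuming both scripts live in the document root (the full URLs were shortened above, so adjust the leading paths if they don't):

```apache
# .htaccess at the site root: 301 the old script name to the new one.
# The query string (whatever follows "?") is carried over automatically.
Redirect permanent /affiche_fiche.php /fiche.php
```

With this, a request for /affiche_fiche.php?x=y lands on /fiche.php?x=y with a 301 status, which should let the old URLs drop out of the 404 report over time.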