I took two hard drives out of a Windows 2003 server and imported them as foreign disks on my PC.
The problem is... when I imported them as foreign disks, Windows XP decided to mark every partition on both disks as failed, even though they haven't failed.
The problem now is that I can't map to a drive... I need to do this so I can take an NT Backup of the data on the drive, then restore that data to a new drive.
Today we are going to conduct a detailed study of the RAID suitability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
If you want a quick rundown of WHY I want to do this, read here.
Basically, my ISP could not get my server running stably on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime... So here is my proposal:
Please note: this will have to be accomplished on a live system (full backups!) over SSH, as I don't trust my ISP to do things right, as described in my post above.
mkfs -t ext3 -m 1 /dev/vg0/lvboot
mkfs -t ext3 -m 1 /dev/vg0/lvroot
mkfs -t ext3 -m 1 /dev/vg0/lvtmp
mkfs -t ext3 -m 1 /dev/vg0/lvhome
Now, I'd like to 'init 1' at this stage but I can't, so I won't (possible solutions? Is it possible to umount the / partition?)
Assuming I'd have to do this on a fully live system, I'd disable all the services that I can:
Code:
/etc/init.d/sendmail stop
/etc/init.d/postfix stop
/etc/init.d/saslauthd stop
/etc/init.d/httpd stop
/etc/init.d/mysql stop
/etc/init.d/courier-authlib stop
/etc/init.d/courier-imap stop
/etc/init.d/amavisd stop
/etc/init.d/clamd stop
/etc/init.d/pure-ftpd stop
/etc/init.d/fail2ban stop
/etc/init.d/syslogd stop
Then we copy all of our data from the single-disk partitions to the RAID disks:
Code:
mount /dev/vg0/lvroot /mnt/newroot
mount /dev/vg0/lvboot /mnt/newroot/boot
mount /dev/vg0/lvtmp /mnt/newroot/tmp
mount /dev/vg0/lvhome /mnt/newroot/home
(I think I covered everything)
Code:
umount -l /dev/sda1    # /boot
umount -l /dev/sda3    # /home
cp -dpRx /* /mnt/newroot/
mount /dev/sda1 /boot
cp -dpRx /boot/* /mnt/newroot/boot/
mount /dev/sda3 /home
cp -dpRx /home/* /mnt/newroot/home/
Once we have everything copied, update /etc/fstab and /etc/mtab to reflect the changes we made:
vi /etc/fstab
Code:
title CentOS (2.6.18-164.el5)
        root (hd3,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/sda2
        initrd /initrd-2.6.18-164.el5.img
Where (hd3,0) is /dev/sdc. If the system fails to boot to the RAID then it'll auto-boot to the single disk (/dev/sda).
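I'm assuming GRUB legacy's "fallback" directive is what handles that automatic fallback; something like this at the top of grub.conf is what I have in mind (the entry numbers are just my guess, and from what I understand fallback only kicks in if GRUB can't load the entry at all, not if the kernel panics later):
Code:
default=0     # try the RAID entry first
fallback=1    # if that entry fails to load, fall back to the single-disk entry
timeout=5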
Then update my ramdisk:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_bak
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
And now to set up grub...
Code:
grub
> root (hd0,0)
> setup (hd0)
We should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> root (hd3,0)
> setup (hd3)
Again, we should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd3)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd3) (hd3)1+15 p (hd3,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> quit
From here I think we're ready to reboot; I can't see where I missed anything. If all goes well then I should see my volume groups listed in 'df -h'.
I'm running into a problem with a relatively new (2 months old) server. I have just a few accounts on it, and I'm already noticing unusually high load for how little it's doing. After some benchmarking with bonnie++ (and plain old "dd") there is clearly a problem.
Isn't a write speed of over 7 MB/s reasonable to expect? Also look at the low CPU times...
Anyway, running the same test on a similar but older AND busier server showed much better results than this. In fact, dd'ing a 1GB file from /dev/zero "finished" in about 10 seconds but then pegged the server at 99% iowait (wa) for a full three minutes (until the cached data finished being written out, I assume), bringing the load to 15.00.
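For reference, this is roughly the plain dd test I mean (the file size and path are just an example; without a sync flag, dd "finishes" as soon as the data is in the page cache, which is presumably why the flushing dragged on afterwards):
Code:
# plain write test: returns once the data is cached, not necessarily on disk
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024
# variant that waits for the data to actually reach the disk before reporting throughput
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest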
That's all the info I have so far... the data center just replaced the card (which gave no errors) with no effect. Running these benchmark tests is about the extent of my hardware experience.
I have a Dell PowerVault 725N utilizing a 4-HDD RAID 5 setup.
The server has died and the BIOS error message shows that two hard drives have failed. I cannot boot into Windows.
The data is very crucial; what are my options for data recovery?
I really hope I can recover the data; I doubt that two HDDs actually failed at the same time without giving any warning. I hope it's the RAID controller.
I would like to hear pointers from the community on how to recover the important data from the RAID.
Are there any companies or software that would help with this, assuming it is an HDD failure and not a controller issue?
We had one of the RAID controllers fail on our IBM RS/6000 server. There are two RAID controllers on this server: one holds the OS (AIX) and the other holds our database, and the latter is the one that failed.
Anyway, I've always thought that once a RAID controller fails and we put in a replacement controller, it will reformat all the hard drives that were connected to the failed controller, which means we would have to restore the data from backup once the new controller is in place. However, the IBM technician who was dispatched was able to set up the new controller and connect all the drives to it without reformatting them. I think he copied the RAID controller's configuration using SMIT. I thought that was amazing; it saved us a lot of time.
My question is: is this something unique to IBM hardware/AIX, or do other hardware and OSes (Linux, Windows, etc.) have a similar capability?
I'm trying to get my Win2k3 Standard R2 server to work.
Problem:
I did some Windows updates around 3am this morning, but during the update process I lost the Remote Desktop connection while my machine was making several attempts to connect to the remote server. I cancelled the attempt after several tries. I figured I could Remote Desktop back in again if I waited a bit, but I couldn't re-establish the connection with my remote server. So I drove to the datacenter to check it out, and it turned out the machine had rebooted and I was stuck with an "NTDETECT failed" error message. After some googling, I tried to fix this error by copying NTLDR and NTDETECT.COM into C:, but when I tried to use the Win2k3 CD to go into Windows Setup and repair, I got an error message telling me that "Setup did not find any hard disk drives installed in your computer...".
After some more googling, I found that the two Maxtor SATA hard disks might be the problem, and one solution I found was to disable SATA or set it to IDE, since the motherboard has an onboard nVidia RAID controller. I checked the BIOS and RAID is set to disabled, and then I tried to disable SATA and enable only IDE. It still gives me the same error message about not being able to find any hard disk.
I'm at the end of my rope here. I'm one step short of reformatting the hard disk and doing a fresh reinstall, but I would like to avoid that.
Does anyone know if cancelling a Remote Desktop connection attempt during the Windows update process can cause this NTDETECT failed problem, or is it something else that might be related to my hardware? I can assure you that both SATA hard disks show up in the BIOS.
The solution to the NTDETECT failed problem seems to be just copying NTLDR and NTDETECT.COM into C:, but since I can't even get to Repair mode during Windows Setup because of the 'no hard disk installed' error, can someone recommend how to solve this problem?
The motherboard I use is an ASUS M2N-MX SE AM2 (NVIDIA GeForce 6100 / nForce 430) and the BIOS is from American Megatrends Inc.
If there is a failed drive in a RAID-1 running on a Dell 2850 with FreeBSD 5.4, can I just take out the failed drive and replace it with a new one while the server is running? Will FreeBSD cope and rebuild the drive on the fly?
I have a disk from a RAID array, but it seems the RAID is not working correctly. I took the disk out and plugged it into another server without RAID. However, fdisk shows an error:
Quote:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 20023.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help):
Should I correct the partition table now, or should I put the disk into another RAID array to check it?
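One thing I figure I could do first, before writing anything to the disk (assuming it was a Linux software-RAID member; if it came off a hardware controller this will show nothing useful), is check it for md metadata:
Code:
# read-only check for a Linux software-RAID (md) superblock; does not modify the disk
mdadm --examine /dev/sdb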
I am having problems with two accounts; I get the following errors:
Code:
Failed to copy files storage to destination path.
stderr:
filemng: Cannot open destination file '/var/www/vhosts/domain.tld/httpdocs/index.html.Chn3rn'
System error 122: Disk quota exceeded
stdout:
filemng: Cannot open destination file '/var/www/vhosts/domain.tld/httpdocs/index.html.Chn3rn'
System error 122: Disk quota exceeded
I am running Windows Server 2003 Standard Edition and something is using a lot of my disk space. How can I find out which folders/files are using the most disk space? (The items may be small individually, but there may be lots of them.)
What would be the best option here? I have a few hard drives (5 x 750GB) that I want to put in RAID, and I have this RAID controller:
[url]
Which would be the best option, RAID 6, right? Since I am going to have more than 5 hard drives soon (I will be adding 3-4 more 750GB drives in the next month).
Also, if I add the hard drives later on, it won't affect the data at all, right? That is, I would not have to erase anything before adding the new hard drive space; I would still have the existing data plus the new space?
As I understand it, PGP Desktop is not compatible with server OSes. Also, TrueCrypt and BestCrypt containers have I/O overhead. What else is left? The environment is Server 2008 with high I/O throughput requirements.
I normally do Windows defragmentation every weekend for my servers; most are hardware RAID 1 or 5. Is it actually beneficial to the server, or does it harm the server to do defragmentation on a weekly basis?
I'm starting a webhosting business in the next few months (working on the panel), and was wondering what is the best method to limit the amount of disk space a user can use? I know about disk quotas, but those would be a pain to use. Is there anything built into IIS7?
Also, is it possible to use a SQL Server 2005 database for FTP user accounts with IIS7? If not, is there any other way to have FTP accounts *without* having to create a Windows user account?
I have a Windows 2000 server running RAID 1 software RAID. Recently, one hard disk in the mirror crashed, and I replaced it and tried to rebuild the mirror. The problem is that the remaining hard disk has a few bad blocks; even after chkdsk, the software RAID rebuild still fails, with an error message saying it is due to bad blocks on the existing hard disk.
When restoring a backup on Plesk 12, the error below is generated for some sites and the sites are only partially restored. Although the message talks about disk space, there is in fact plenty of free disk space - i.e. many GB. The backup was created on Plesk 11.5. The restore on Plesk 11.5 works.
I noticed that all the failed domains exceed the disk space allowed by their Service Plan. However, the 'Overuse is allowed' setting is selected. Strangely, I tried changing the Service Plan and retrying the backup and restore, and the same errors were generated.
<object type="domain" name="domain.com">
  <object type="hosting" name="domain.com">
    <message code="CantUnpackDomainContent" severity="warning" id="f3946c79-7ae2-4be2-8300-ba766bea7869">
      <description>Can not deploy content of domain domain.com</description>
Windows 2003 crashed on a RAID 5 server. We tried to take the NTFS volumes from the hard drive and mount them under Knoppix booted from a CD-ROM drive. Knoppix could see the disks but was unable to mount the partitions, I guess for compatibility reasons.
Is there any way we can get a backup of those NTFS volumes and restore our data?
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10, but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does that achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, or does it just help the controller keep the data in its cache memory alive long enough if the power goes out during a rebuild?
I upgraded Plesk a few days ago to the latest version to be able to use it on my tablet. When I did so, my MailEnable webmail stopped functioning; it opens the default Plesk website instead. I have MailEnable Enterprise version 6.6, licensed. I tried to rebuild, I tried everything. Then I went to server components and tried to disable the webmail and re-enable it. It disabled fine, but when I tried to enable it, it gave this error: Error: Set default component failed: defpackagemng failed: Execute websrvmng --add-webmail-site --webmail=mewebmail failed with error code 1: Cannot find hosting by domain ID 47
How do I get my webmail back up? An update: I tried enabling Horde first, then accessing the webmail, and it worked with Horde. Then I went back to server components, and when I tried enabling MailEnable webmail it no longer gave the error and worked (but ONLY when Horde is enabled with it). Then I went back to mail settings and tried to use MailEnable instead of Horde; now it gives "Service is unavailable", a completely new error. I went to IIS and found that MailEnableAppPool had status: Stopped. I tried starting it and restarting IIS, but after the restart it goes back to stopped.
Details: ERROR: error during prepare patch panel-12.0.18~patch51... Failed to download the package URL... The requested URL returned error: 500 Internal Server Error. Not all packages were installed. Please try installing packages again later. I did verify with a web browser that I can access and download the WMIMSDNSProvider.dll file specified in the update. Is this just due to a timeout, and will the system retry the update? If so, how can I confirm that patch 51 got applied at some point?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as any other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about software RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any drives other than the 250GB ones, so I am limited to those.
The preferred software RAID setup is to have everything in RAID 10, except for the /boot partition, which has to be on RAID 1 or no RAID I believe, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM-over-IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be migrated to RAID 10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up a RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
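Something like this is roughly what I picture for creating the arrays, assuming 8 drives sda-sdh, each partitioned with a small first partition for /boot and a large second partition for everything else (device names and sizes are just my guesses):
Code:
# /boot: RAID-1 mirrored across all drives so GRUB can read it from any single disk
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
# everything else: RAID-10 across the large partitions
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2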
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID-1 swap partition across the disks, or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
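For the RAID-1 swap option, I imagine it would look roughly like this (the partition names are placeholders; the alternative would just be a swap file on the RAID-10 filesystem):
Code:
# swap on a small mirrored md device, so a single disk failure doesn't take out swapped pages
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2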
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't actually even mention RAID-10, even though it does support it (without having to create RAID 0 on top of RAID 1 pairs) if the support is in the kernel, from what I know.
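If reshaping is supported for RAID-10 at all on my mdadm/kernel version (I'm not sure it is), I'd expect growing by two disks to look something like this (device names are guesses):
Code:
# add the two new members, then ask md to reshape the array onto them
mdadm --add /dev/md1 /dev/sdi2 /dev/sdj2
mdadm --grow /dev/md1 --raid-devices=10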