Invalid Cross-device Link - Additional Hard Drives Linux

Mar 25, 2008

OS: Linux CentOS

I just had an additional 500GB hard drive added and mounted it at /home2.

There are files in /home1 (the original HD) that will need to be constantly moved over to /home2 via FTP.

But I keep getting this error:

550 Rename/move failure: Invalid cross-device link

Does anyone have any ideas? I tried changing permissions but no luck. I also tried mounting the 2nd hard drive within a directory in /home1, but it still gives the error.
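For what it's worth, the error comes from rename() refusing to move files across filesystems (EXDEV, "Invalid cross-device link"); /home1 and /home2 are now separate devices, so a move has to become a copy followed by a delete. The FTP daemon's rename can't do that, but a script can. A minimal Python sketch (file names hypothetical; Python's own shutil.move performs the same fallback automatically):

```python
import os
import shutil
import tempfile

def safe_move(src, dst):
    """Move a file, falling back to copy+delete when rename() fails
    because src and dst live on different filesystems (EXDEV)."""
    try:
        os.rename(src, dst)          # fast path: same filesystem
    except OSError:
        shutil.copy2(src, dst)       # copy data and metadata across devices
        os.remove(src)               # then remove the original

# demo with two temporary directories (same filesystem here, but the
# fallback is what matters when moving between /home1 and /home2)
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "report.log")
with open(src, "w") as f:
    f.write("data")
safe_move(src, os.path.join(dst_dir, "report.log"))
print(os.path.exists(os.path.join(dst_dir, "report.log")))  # → True
```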

View 5 Replies



Linux Hard Drives

Jan 12, 2008

On my centos webserver I currently am using a 250gb ide drive.

I just bought 2 Western Digital Raptor WD1500ADFD 150GB 10,000 RPM 16MB hard drives.

And now I am wondering what kind of setup I should have?

Should I have the 250GB drive as a backup drive now and put the two Raptors in a RAID 0 array?

What would be the best configuration?

View 10 Replies View Related

Plesk 11.x / Windows :: Additional FTP Account - Directory Name Invalid

Dec 22, 2013

I have a VPS server and I am trying to create an FTP account, and I am getting this error:

Unable to start (/c ""C:\Program Files (x86)\Parallels\Plesk\admin\bin\serverconf.exe" --list"): (267) The directory name is invalid.
---------------------- Debug Info -------------------------------
0: Exec.php:43
plesk_proc_open(string '"C:\Program Files (x86)\Parallels\Plesk/admin/bin\serverconf" --list', array, string 'C:\Program Files (x86)\Parallels\Plesk/tmp\agent78f0fd54abecb83b12f6c9fdde9b5b84', array)
1: Exec.php:43

[Code] ....

View 9 Replies View Related

Hard Drives For VPS

Apr 23, 2009

I'm building a couple of VPS host servers for a client.

Each server has to host 20 VPS, and each server will have 4 cores and 32GB of RAM. So CPU and RAM should be just fine; my question now is hard drives. The company owns the machines, but not the drives yet.

I searched a lot on your forums but found nothing relating to VPS. I'm basically a DBA IRL, so I have experience with hard drives when it comes to databases, but it's completely different for VPS.

According to my boss, each VPS will run a LAMP stack (a separate DB cluster is out of the question for some reason).

First, RAID 1 is indeed a must. There is room for 2x 3.5" drives. I might be able to change the backplane to 4x 2.5", but I'm not sure...

I've come to several solutions:
2x SATA 7.2k => comes to about 140$
2x SATA 10k (velociraptor) => comes to about 500$
2x SAS 10k with PCIe controller => comes to about 850$
2x SAS 15k with PCIe controller=> comes to about 1000$

They need at least 300GB storage.

But my problem is that the servers do not have SAS onboard so I need a controller and in my case the cheapest solution is best.

But I'm not sure that SATA 7.2k will handle the load of 20 full VPS.

Is it worth going with SAS anyway, or should SATA be just fine? And if SATA, is it better to use plain old 7.2k drives or 10k drives?

That's a lot of text for not much: What is best for VPS: SATA 7.2k, SATA 10k or SAS 10k?

View 14 Replies View Related

What To Do With 6 Hard Drives

Mar 25, 2007

I am about to buy a Compaq server with 6 SCSI hard drives. In you opinion, what is the best RAID configuration with 6 HDs?

View 14 Replies View Related

RLX Blade Hard Drives?

Jan 7, 2008

Do the old RLX Blade servers use 'mini' hard drives? I can't find an answer anywhere. I seem to recall that they use smaller 2.5" drives. Is this the case?

And, if so, do they make "good" drives worthy of being in a server in that size? Are they essentially just a laptop drive?

View 0 Replies View Related

Where/How To Duplicate Hard Drives

Jul 3, 2007

I am in a little bit of trouble. I have five 750GB HDDs that I need backed up to another five 750GB HDDs so I can save the data stored on them. They are in a Linux box with an LVM setup; I also have a RAID card in it but am not using any RAID level on them. After finding out what I could do with it, I decided to go to Windows 2003 on the server and install RAID 5/6 on it.

It seems that I will have to give up all my data and have everything wiped off the hard drives. This is very sad for me, but I still have a chance to save the data on them. So I am thinking of copying them to another bunch of hard drives and then re-adding the data once the system is in place.

I was looking at this
[url]

But that's clearly too expensive, as I just need to back up 5 hard drives (750GB each), and only one time. Anyone have any suggestions, or how should I go about doing it? It doesn't have to be right away, but it's good to know my options.

Is there any place that does this kind of thing, where they let you rent their machine for a couple of hours for a fee so you can back up your data? The server is colocated and the hardware is mine, so I have every right to take it out and back it up with no problem from the datacenter.

View 10 Replies View Related

Server With 2 Hard Drives

Sep 11, 2007

I am getting a new server with two 73GB hard drives and need to know the following:

1. I need to put /home on one 73GB drive and the other partitions like /boot, /tmp, /usr and /var on the other drive.

Where should I put /home: on the primary or secondary drive? Is there any effect on speed?

2. I am used to servers with one drive. Is there any difference when it comes to security applications such as APF, BFD, mod_security and other applications' settings?

3. In general, should I take the same actions when handling a server with one drive as with a server with two drives?

View 5 Replies View Related

1 Domain To Use 2 Hard Drives

May 31, 2007

I just got another HDD in my server, and I want one domain to use both hard drives.

How do I set it up so that both HDDs can be used by the one domain? I'm using WHM but can't seem to do it.
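As far as I know, WHM has no built-in option to span one account across two drives; the generic Linux approach is to mount the second drive at a subdirectory inside the domain's document root, so the extra space appears under the same site. A sketch, with the device name and paths purely illustrative:

```shell
# One-off: format and mount the new drive inside the docroot
# (device and mount point are examples, adjust to your system)
mkfs.ext3 /dev/sdb1
mkdir -p /home/user/public_html/media
mount /dev/sdb1 /home/user/public_html/media

# Make it permanent with an /etc/fstab entry:
# /dev/sdb1  /home/user/public_html/media  ext3  defaults  1 2
```

Files placed under /media then live physically on the second drive while being served by the same domain.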

View 13 Replies View Related

What Is The Difference NAS And Many Hard Drives Server

Mar 28, 2008

I feel that NAS units are not very cheap;

they look like just low-end servers with many HDDs,

so why don't people just buy a server and put many HDDs in it?

View 4 Replies View Related

Performance With 500GB+ SATA Hard Drives

Jan 16, 2008

For those running new servers on 500GB+ hard drives: how are these drives performing when they become 50% full?

Can they properly be 50% or more utilized on a cPanel-type server with 200+ accounts?

View 0 Replies View Related

Software RAID5 Not Booting Up Without All Hard Drives

May 14, 2007

I setup a Software RAID5 the following way:

/dev/sda:
1: /boot 101MB
2: software raid ALL

/dev/sdb
1: software raid ALL

/dev/sdc
1: software raid ALL

/dev/sdd
1: software raid ALL

/dev/md0: ext3 mounted as / for all of the software RAID partitions.

I was left to believe this would create redundancy as long as only one drive is removed from the array. Although when I unplug any of the hard drives (one at a time) I get input/output errors and when I try to reboot I get kernel sync errors.

What exactly am I doing wrong when trying to create redundancy? I know that sda contains the /boot partition, so it wouldn't boot without that, but even if I unplug sdb, sdc, or sdd it still can't sync.
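For what it's worth, two things commonly bite in this kind of setup: pulling a powered drive that was never marked as failed leaves the kernel issuing I/O to a vanished device (hence the input/output errors), and the bootloader is only installed in sda's MBR by default. A hedged sketch of the usual remedies, with device names taken from the layout above; treat it as illustrative rather than a tested procedure:

```shell
# Test redundancy by failing a member cleanly instead of yanking it:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# Install GRUB on every disk so the box can still boot
# when the drive that fails is the one holding the MBR:
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd

# A degraded array may need to be assembled explicitly at rescue time:
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
```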

View 14 Replies View Related

Shared Hosting Clients' Preference On Hard Drives

May 2, 2009

I know SCSI drives are better than SATA, but I wonder if the community will prefer SATA or SCSI especially when you will be paying more for less.

Here's an example.

You can get:

2GB SATA for $5 & 500MB SCSI for $5, which will you choose?

View 14 Replies View Related

Replacing Or Upgrading Hard Drives...remote Method

Dec 20, 2007

I want to try something different on our methods of replacing or upgrading hard drives.

I want to be able to do most of it via our KVM/IP instead of babysitting the server(s) for so long in the DC.

My thoughts are, how can I add the new hard drive in the DC, and move the data over via the KVM/IP. Can this be done with just a raw drive added (no new setup) using DD or even rsync, or is it better to setup a new installation of CentOS on the new drive, and use rsync to move the data over. Then how do I get the proper drive to boot until I go back into the DC to remove the bad or old drive? I'd be interested in how some of you folks are doing this, as far as what's easiest and could be done over the KVM/IP once the new drive is connected.

Or, on systems that have two drives with cPanel/WHM, how can we temporarily, on an emergency basis, utilize the backup drive to do a new setup, copy the data over from the drive that is failing, then just replace the bad drive as a backup drive next time we go to the DC? We have an external USB CD in place to allow remote installs... just curious if anyone does something like this or has ideas on how we could make this work.

We use cloning software now, but can end up babysitting a clone for a long period in the DC like this.
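The dd/rsync idea described above can be sketched roughly as follows; paths, device names and excludes are all hypothetical, so adapt before trusting it with real data:

```shell
# Option 1: fresh CentOS install on the new drive, mounted at /mnt/new,
# then pull the data over via the KVM/IP session (stays on one filesystem
# with -x; excludes are illustrative):
rsync -aHx --numeric-ids --exclude=/proc --exclude=/sys \
      --exclude=/dev --exclude=/mnt / /mnt/new/

# Option 2: raw-clone the old disk onto the new one (both unmounted,
# e.g. from a rescue environment booted off the USB CD):
dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync

# Either way, the new drive still needs a bootloader pointed at it
# before the old drive is removed, e.g. grub-install /dev/sdb
```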

View 3 Replies View Related

How To Distribute Service/content Across Multiple Hard Drives For Best Performance

Mar 19, 2007

Suppose I have only two physical hard drives. What is the optimal way to distribute the following (on a Windows server)?

OS/IIS
Web pages and scripts
SQL server
SQL database

Should it be :

HD1 : OS/IIS + Web pages/scripts + SQL server
HD2: SQL database

or other setups?

View 4 Replies View Related

Google Doubts Hard Drives Fail Because Of Excessive Temperature, Usage

Feb 17, 2007

Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. The most recent project is sure to spark interest, as it explores how and under what circumstances hard drives work - or don't.

There is a rule of thumb for replacing hard drives, which tells customers to move data from one drive to another at least every five years. But the mechanical nature of hard drives in particular makes these mass storage devices prone to error, and some drives may fail and die long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, extreme temperatures and excessive activity being the most prominent ones.

A Google study presented at the currently held Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, in large numbers, use consumer-grade devices with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."

Google said that it is collecting "vital information" about all of its systems every few minutes and stores the data for further analysis. For example, this information includes environmental factors (such as temperatures), activity levels and SMART parameters (Self-Monitoring Analysis and Reporting Technology) that are commonly considered to be good indicators to describe the health of disk drives.

In general, Google's hard drive population saw a failure rate that was increasing with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."

Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in year 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.

In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. Significant scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without scan errors. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.

Similarly, reallocation counts, a number that results from the remapping of faulty sectors to a new physical sector, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed average impact on the average fail rate came in at a factor of 3-6, while about 85% of the drives survive past eight months after the first reallocation.

Google discovered similar effects on hard drives in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any one of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.

In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. Also, the researchers pointed towards a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows increased probabilities of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict failures of hard drives: "Powerful predictive models need to make use of signals beyond those provided by SMART."
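On a Linux box, the SMART attributes the study highlights can be read with smartmontools; the device name below is illustrative:

```shell
# Dump all SMART attributes and pick out the ones the study ties
# to elevated failure rates (reallocations, pending/offline sectors):
smartctl -A /dev/sda | egrep -i \
  'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

# Kick off the drive's own extended surface scan, the same kind of
# background check whose logged errors the study correlates with failure:
smartctl -t long /dev/sda
```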

View 6 Replies View Related

400GB Hard Disk Drives In RAID 0, RAID 5 And RAID 10 Arrays: Performance Analysis

Mar 7, 2007

Quote:

Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.

[url]

View 0 Replies View Related

Linux Cross-connect Routing

Mar 14, 2007

I have two RedHat EL 4 boxes linked via a cross-connect. One is a web server (10.0.0.3) and one is a mySQL server (10.0.0.2), the interface between them is eth1 on both machines and a second interface eth0 connects to the internet.

I want to use the web server to send queries to the database server via eth1, 10.0.0.2:3306 in this case. If I send a database query via eth1, there is a delay of about 10-20 seconds before the result comes back. If I send the same query to the database server but use its main IP instead of the internal IP, so that the query is sent to it over the internet (xx.xx.xx.xx:3306), the result comes back instantly.

Similarly, if I send a query from any remote server the result is instant.

Why should there be such a huge delay when sending a query directly through the cross-connect?

The routing table ( ip route show ) for the web server is:

xx.xx.xx.xx/xx dev eth0 proto kernel scope link src xx.xxx.xx.xx
10.0.0.0/24 dev eth1 proto kernel scope link src 10.0.0.3
default via xx.xx.xx.xx dev eth0

and the routing table on the database server is:

xx.xx.xx.xx/xx dev eth0 proto kernel scope link src xx.xx.xx.xx
10.0.0.0/8 dev eth1 proto kernel scope link src 10.0.0.2
default via xx.xx.xx.xx dev eth0

I have ifcfg-eth1 on both boxes:

DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
IPADDR=10.0.0.3 / 10.0.0.2
NETMASK=255.255.255.0

Both boxes can ping each other and transfer files using wget without any apparent problems or delays.

Anyone have any ideas on how to fix this 10-20 second delay when sending queries through the cross-connect?
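A couple of generic diagnostics may help pin down where the delay sits (interface names and addresses are from the post). Note also that the DB server's routing table shows 10.0.0.0/8 while its ifcfg NETMASK says /24, which is worth making consistent; and a 10-20 second stall on new connections is the classic signature of mysqld doing a reverse-DNS lookup that times out, which skip-name-resolve disables:

```shell
# On the web server: confirm which interface and source address
# the kernel actually picks for the DB's internal IP
ip route get 10.0.0.2

# On either box: watch the cross-connect while issuing a query.
# If the SYN goes out and the reply stalls, the delay is on the
# DB side (e.g. a DNS lookup) rather than in routing.
tcpdump -n -i eth1 port 3306
```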

View 3 Replies View Related

Apache :: How To Redirect Old Blog Link To New Link

Mar 30, 2013

I moved my WordPress blog from blog.domain.info to domain.net; I own both TLDs. Google search had already indexed the posts from the old URL, and if someone clicks one it shows a 404 error. How do I redirect the traffic from the old URL to the new URL via .htaccess? I don't want duplicate posts on Google. How do I make the transition? I am confused by the many online articles about htaccess redirects.
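One way to do it, assuming mod_rewrite is available and the permalink structure is the same on both domains, is a site-wide 301 in the old blog's .htaccess. A 301 tells search engines the content has moved permanently, which is what avoids the duplicate-content issue:

```apache
# .htaccess at the root of blog.domain.info (domain names from the post)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^blog\.domain\.info$ [NC]
# Send every request to the same path on the new domain, permanently
RewriteRule ^(.*)$ http://domain.net/$1 [R=301,L]
```

Old indexed links then land on the matching post at domain.net, and Google gradually replaces the old URLs in its index.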

View 2 Replies View Related

Do Hosts Use Enterprise Drives Or Desktop Drives

Nov 5, 2009

I want to know if there is a difference between enterprise drives and desktop drives, and which ones hosts use.

View 7 Replies View Related

Plesk 12.x / Linux :: Additional Nginx Directives

Sep 3, 2014

I am trying to add the following nginx configuration to Plesk through

Website & Domains -> Web Server Settings

The conf I am adding is:

location /folder/folder/ { try_files $uri /folder1/folder2/file.php; }
or
location ^~ /folder/folder/ { try_files $uri /folder1/folder2/file.php; }

but it doesn't seem to be working; the location just gives a 404.

View 1 Replies View Related

Plesk 11.x / Linux :: How To Add Additional SMTP Port To Qmail

May 13, 2014

I want to add a new port for SMTP (1024) as a client's ISP blocks 25, 587 and 2525. I tried following this guide: [URL] ... but it doesn't work with Parallels Plesk Panel 11.5 ...

How to add an additional SMTP port to Qmail in Plesk 11.5?
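One common workaround (a general Linux technique, not an official Plesk feature) is to leave qmail listening on 25 and redirect the extra port to it with iptables; the port number is from the post:

```shell
# Redirect inbound TCP connections on port 1024 to the local
# SMTP service on 25, before they reach the listener
iptables -t nat -A PREROUTING -p tcp --dport 1024 -j REDIRECT --to-ports 25
```

The client then configures their mail client to use port 1024 as the outgoing SMTP port. The rule needs to be saved (e.g. via the distribution's iptables-save mechanism) to survive a reboot.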

View 1 Replies View Related

Plesk 12.x / Linux :: Missing Packages On Additional PHP Versions?

Jun 11, 2015

I'm trying to make use of the multiple PHP versions in Plesk 12; however, the newer versions are missing some packages my websites require.

When I yum install them, yum just targets the default PHP 5.3 version and says they are already installed.

As I didn't compile these PHP versions myself, how can I install additional PHP packages for the extra versions Plesk supplies?
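Plesk's bundled PHP versions come with their own yum package namespace, so extensions are installed per version rather than through the stock php-* packages. The package names below are assumptions, so verify with a search first:

```shell
# See what extension packages Plesk provides for its PHP 5.6 build
yum search plesk-php56

# Install extensions into that version rather than the system PHP 5.3
# (example extension names; substitute whatever the search shows)
yum install plesk-php56-gd plesk-php56-mbstring
```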

View 3 Replies View Related

Plesk 12.x / Linux :: Unable To Find Additional PHP Versions

Jul 14, 2015

I have just installed a new Cloudlinux 6.6 Plesk 12 box but am unable to find the additional PHP versions as shown in the attached image.

I'm on update #55.

I'd prefer to use Plesk's built-in PHP versions instead of the Cloudlinux ones.

View 4 Replies View Related

Plesk 12.x / Linux :: Fatal Error Using Additional PHP Version

Jun 29, 2015

Using Plesk 12.0.18 and CentOS 6.6, I installed PHP 5.6 through the Plesk Updates menu. After that, the new version was not automatically selectable inside Plesk Panel, so I added it using php_handler. However, when selecting PHP 5.6 for a website, I only get a 500 server error.

Trying to troubleshoot, I ran "/opt/plesk/php/5.6/bin/php-cgi -v", which also resulted in an error: Fatal Error Unable to allocate shared memory segment of 67108864 bytes: mmap: Cannot allocate memory (12). The same kind of error is logged in the error_log for the website I switched to 5.6.

The error also occurs when calling "php-cgi -v" directly on the command line. The same error occurred when I tried the same thing with PHP 5.5.

View 5 Replies View Related

Plesk 12.x / Linux :: Locked Out From Panel After Installing Additional PHP-versions

Jun 25, 2015

For some application-testing I installed PHP 5.5 and PHP 5.6 alongside the existing PHP 5.3 installation. To do that I used the web-interface of the Plesk-installer.

Installing the software went well, or at least didn't show any errors.

After installing I went back to the login-page to get back into the portal, but that didn't work.

I am using the right username and password and don't see any errors, I just keep getting the login-screen and no errors. Logging in through SSH is working as it should. Only access to Plesk on 8443 seems to be denied.

Server reboot didn't work, restarting Plesk-services didn't work...

How can I restore this so I can get back to work?

This Plesk-installation is on a CentOS 6 server on which I have full rights.

View 2 Replies View Related

Plesk 11.x / Linux :: Catch-all Not Working On Additional Subscription Domains

May 26, 2014

URL.... I am trying to solve the problem by fixing the psa.Parameters table; however, I cannot find the relation between the id in the Parameters table and the domains table.

View 2 Replies View Related

Plesk 11.x / Linux :: Additional Domain Pointing To Wrong Directory?

May 30, 2014

I'm running the latest version of Plesk 11 on an Ubuntu 12.04 system.

We have a customer with a domain, and this customer added other domains to his account.

Now, two domains are not working. He created them as usual; Plesk created the directories under /vhosts/domain.com/domain1.com and the vhost.conf files are also correct.

When I open the domain in the browser, I get the following error message:

The requested URL /var/www/vhosts/domain.com/index.php was not found on this server.

View 3 Replies View Related

Does Linux SW Raid Benefit From TLER On WD Drives

Dec 21, 2008

I've just bought myself a Linux-based NAS for storage/backups at home, and a couple of WD GreenPower (non-RAID edition) HDDs.

For those who don't know what TLER is (Time Limited Error Recovery), without it enabled the HDD does its own error recovery, which may take longer than the acceptable time for a RAID Controller. In which case, the drive is kicked out of the array. With TLER on, the idea is that the drive keeps notifying the controller, or the controller handles the error.

So, my actual question is: does Linux software RAID benefit from TLER being enabled? Or is it best to let the drive do its own thing?

View 0 Replies View Related

Plesk 11.x / Linux :: Give Non-admin Users Access To PHP (Additional Configuration Directives)

Oct 21, 2014

Is there any way to give a reseller or customer access to the PHP custom-settings box labeled "Additional configuration directives" (under Websites & Domains -> PHP Settings) that an admin can see and alter? We have tried giving resellers the "Common PHP settings management" and "Setup of potentially insecure web scripting options that override provider's policy" options, but it still does not show up for a reseller.

View 1 Replies View Related

802.3ad (bonding/link Aggregation) On Cisco 2960 W/linux

Aug 14, 2009

I thought that with 802.3ad I could aggregate the links and thus turn 3x100 megabit pipes into one 300 megabit pipe. The problem is that when using the default options (and layer2 xmit mode) I only get the bandwidth of a single connection, and I see no traffic on the other two. Here is the output from /proc/net/bonding/bond0:

Code:
Ethernet Channel Bonding Driver: v3.3.0 (June 10, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)

MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 3
Actor Key: 9
Partner Key: 3
Partner Mac Address: 00:1e:f6:xx:xx:xx

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1e:8c:xx:xx:xx
Aggregator ID: 1

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:04:23:xx:xx:xx
Aggregator ID: 1

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:04:23:xx:xx:xx
Aggregator ID: 1
Basically it works, in that I get a connection, but it's no different from being on a regular 100/100 connection. Downloading files to my machine from completely different networks always goes out over only one connection. I don't even see any received/sent packets on the other two interfaces.

When I tell it to use layer3+4 via the xmit_hash_policy parameter when loading the module, i.e.:

modprobe bonding mode=4 xmit_hash_policy=layer3+4 miimon=100

It seems to work as expected, except that all incoming traffic appears to come in on the same interface, and it's not much different from normal load balancing (except from a single IP address). I will stick with this method if I have no choice, as I don't really care about the download all that much and it seems to do a good job.

Here is /proc/net/bonding/bond0 from that config:

Code:
Ethernet Channel Bonding Driver: v3.3.0 (June 10, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 3
Actor Key: 9
Partner Key: 3
Partner Mac Address: 00:1e:f6:xx:xx:xx

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1e:8c:xx:xx:xx
Aggregator ID: 2

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:04:23:xx:xx:xx
Aggregator ID: 2

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:04:23:xx:xx:xx
Aggregator ID: 2

My understanding is that with layer2 it should just be one big fat pipe, i.e. packets are fragmented and thus even a single connection should be able to sustain 300 megabits (if the other end could provide it). It seems like it just doesn't use my other interfaces at all when doing this.

Here is my config on the cisco 2960:

Code:
interface Port-channel3
switchport access vlan 100
switchport trunk allowed vlan 100
switchport mode access
switchport nonegotiate
spanning-tree portfast
!
interface FastEthernet0/10
switchport access vlan 100
switchport trunk allowed vlan 100
switchport mode access
switchport nonegotiate
channel-protocol lacp
channel-group 3 mode active
spanning-tree portfast
!
interface FastEthernet0/38
switchport access vlan 100
switchport trunk allowed vlan 100
switchport mode access
switchport nonegotiate
channel-protocol lacp
channel-group 3 mode active
spanning-tree portfast
!
interface FastEthernet0/40
switchport access vlan 100
switchport trunk allowed vlan 100
switchport mode access
switchport nonegotiate
channel-protocol lacp
channel-group 3 mode active
spanning-tree portfast

View 5 Replies View Related







Copyrights 2005-15 www.BigResource.com, All rights reserved