We're outgrowing our current bulk storage system and I'd like to solicit opinions.
With 2 TB disks and a 16 disk array, it's possible to have a single 28 TB volume (after deducting RAID5 parity overhead and a hot-spare disk). I've seen arrays from Aberdeen with 48 and 96 disks, for nearly 200 TB. Windows supports up to 256 TB per volume when 64K cluster sizes are used.
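For the record, the 28 TB figure is just arithmetic: one disk goes to RAID5 parity and one is set aside as the hot spare.

```shell
disks=16; size_tb=2
# RAID5 costs one disk of parity; one more disk is reserved as a hot spare.
usable_tb=$(( (disks - 1 - 1) * size_tb ))
echo "${usable_tb} TB usable"   # 28 TB, matching the figure above
```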
Our backup system uses a ton of storage space, and it would be far more convenient, and more efficient from a utilization standpoint, to access that space as a single volume.
Breaking it up into smaller chunks, such as 2 TB each, means we have to make a "best guess" at balancing capacity against actual need.
For example, if we assign 25 servers to each 2 TB volume for backup storage purposes, some volumes might see only 800 GB of consumption (the remaining 1.2 TB allocated but not used) while other volumes might have 1.6 TB used (the remaining 400 GB allocated but not used). Key concept: wasted space, because we have to over-estimate need to ensure adequate headroom.
From the opposite viewpoint, if we had a sudden increase in need that exceeded the available space allocated to that volume, we'd have to move that server to a different volume. Key concept: increased admin workload to monitor and re-balance distribution as needed.
Now if we used one giant volume, there would be no guesswork, no "allocating more than we think is needed" for a bunch of small volumes. All servers share one huge common pot.
But there has to be a practical limit from a system-overhead standpoint. Our backup sets consist of a few multi-gigabyte files, so using 64K clusters will not cause much waste from slack space.
I'd like to get your opinions on maximum disk volume sizes from a practical standpoint.
The domain has PHP settings in Plesk set to 2G, and I get this error when uploading a 48 MB file using WordPress. I assume I need to modify this manually in a conf file somewhere to allow uploading large files?
Requested content-length of 48443338 is larger than the configured limit of 10240000...
mod_fcgid: error reading data, FastCGI server closed connection...
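From what I can tell, the 10240000 in that message is mod_fcgid's own request-size cap, which is separate from PHP's limits in Plesk. Raising it in the Apache/mod_fcgid config might look like this (the directive name varies by mod_fcgid version; older releases call it MaxRequestLen, and the file path here is an assumption):

```
# e.g. in /etc/httpd/conf.d/fcgid.conf (path is an assumption)
FcgidMaxRequestLen 268435456   # 256 MB, comfortably above a 48 MB upload
```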
I'm starting a web hosting business in the next few months (working on the panel), and was wondering: what is the best method to limit how much disk space each user can consume? I know about Disk Quota, but that would be a pain to use. Is there anything built into IIS7?
Also, is it possible to use a SQL 05 DB for FTP user accounts with IIS7? If not, is there any other way to have FTP accounts *without* having to create a Windows user account?
We have a question for everyone, and any help would be gratefully received: we are looking to limit disk inodes on a per-user basis or server-wide. Can anyone point us to how this is accomplished?
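For context, before enforcing a limit it helps to know what a user is actually consuming; every file, directory and symlink costs one inode, so a rough count looks like this (the home path is a hypothetical example):

```shell
# Each file, directory and symlink under the home dir uses one inode.
find /home/exampleuser | wc -l
# 'df -i' shows inode usage for the whole filesystem, server-wide.
df -i /home
```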
I'm thinking of using one of my computers at home as a dedicated server to host my own sites, and I would like to get your opinion guys on whether that would be a practical thing to do or not.
Dedicated Server: I put together:
Intel Pentium Dual Core 2.8 GHz
3 GB DDR2
1 TB Seagate HD
GeForce 9800 GX2 1GB
Gigabit LAN
Windows XP Pro / IIS 5.1
Smart Firewall/Router
Symantec
APC Smart-UPS battery (full 15 hrs.)
Dedicated Connection: a business account will run me $80 CDN/month:
Download speed: up to 16 Mbps
Upload speed: up to 1 Mbps
Transfer/month: 200 GB
IP addresses: 2 dynamic & 1 reserved
The Sites:
Both sites are academic-based; together they receive approx. 200,000 visits a month / 50 to 60 GB transfer, and growing. I'm also in the process of publishing a 3rd site, but overall I anticipate the 3 sites' transfer will hover around 100 GB/mo unless Digg/StumbleUpon/Google all decide to have an orgy-traffic linkage at once and push it higher.
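I did sanity-check whether the 1 Mbps uplink can even carry ~100 GB/month. Assuming the link could be saturated around the clock (which it can't in practice):

```shell
# 1 Mbps upstream, 30 days, 8 bits/byte, 1024 MB/GB (decimal vs binary
# units shift the number slightly, but not the conclusion).
mbps=1
gb_per_month=$(( mbps * 86400 * 30 / 8 / 1024 ))
echo "~${gb_per_month} GB/month theoretical ceiling"
```

So ~100 GB/month is under a third of the theoretical ceiling, but sustained download bursts will still feel the 1 Mbps cap.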
I (and my clients) have a few very small, simple-minded websites... a few PHP programs for simple form fetch-and-forward. Is there much PRACTICAL difference between a Windows-based host and a Linux-based host?
I have a Vista machine. I installed CentOS 5.1 by selecting the C: (active) partition and formatting it as an ext3 partition. After installation, Hardware > Hard Disks shows only one NTFS partition, but I actually have four NTFS partitions. When I try to mount one of them using ntfs-3g, I get a "/dev/sda3: permission denied" error.
I'd like help from more experienced Linux users to partition my dedi into VPSes. I have an Intel Quad-core 2.4 GHz, 500 GB HDD, 2 GB DDR RAM dedicated server with a max 100 Mbit connection and 2000 GB BW/mo. It has CentOS 5.3 (centos-release-5-3.el5.centos.1) installed, and I want to install the DirectAdmin CP soon.
I'm not a reseller or webhost and don't intend to become one. This server is for my exclusive use.
I want to use half the server to run virtual instances of a Windows 2008 server and a KDE or similar Linux virtual desktop using FreeNX as well as a 4PSA VoIP Now or similar software. The other half of the drive will be to run my businesses websites, mailserver, a DNS server, etc.
I have six IP addresses for this server that can be used to this end and will host at least three websites (under separate domain names) and one or two blogs for which I will install requisite software.
I understand that the RHEL 5 embedded virtualization software will allow me to partition the server into VPS for various purposes.
Here are the outputs from fdisk -l and parted -l respectively for the current HDD partitions.
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot  Start  End    Blocks     Id  System
/dev/sda1  *    1      13     104391     83  Linux
/dev/sda2       14     60801  488279610  8e  Linux LVM

[root@denprivatevaert ~]# parted -l
Model: ATA ST3500320AS (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  107MB  107MB  primary  ext3         boot
 2      107MB   500GB  500GB  primary  lvm
Error: Unable to open /dev/md0 - unrecognised disk label.
For the DA install, so I don't have to try to figure out where things are, I'd like to use their more complex partition structure as follows:
/boot  40 MB
swap   2 x memory
/tmp   1 GB. Highly recommended to mount /tmp with noexec,nosuid in /etc/fstab.
/      6-10 GB
/usr   5-12 GB. Just DA data, source code, FrontPage.
/home  rest of drive, roughly 80%, for user data. Mount with nosuid in /etc/fstab if possible.
I will install dovecot to be able to create SSL access to my webmail so don't need a '/var' directory.
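As I understand it, the noexec,nosuid recommendations above would end up as /etc/fstab entries along these lines (the LVM volume names here are assumptions; mine will differ):

```
# /etc/fstab (sketch)
/dev/VolGroup00/LogVolTmp   /tmp    ext3  defaults,noexec,nosuid  1 2
/dev/VolGroup00/LogVolHome  /home   ext3  defaults,nosuid         1 2
```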
What I want to know is:
1) Should I install virtualization and partition the drive prior to having DA installed?
2) How do I best partition the drive into VPSes so I can run distinctly different virtual instances of different OS and/or programs on the VPS as well as use half for websites, blogs, servers, etc.?
3) What else do I have to keep in mind when doing this?
I'd appreciate any positive, useful response and information on getting this done and I'd like to try to get this done by Monday or Tuesday of next week so DA can be installed on the appropriate partition.
Is it possible to specify where your cPanel users' data is stored?
Let's say I have four hard drives without RAID: hard drive one on /home, hard drive two on /home2, hard drive three on /home3, and so on. Is it possible to set up users on the different partitions to spread out disk usage?
To explain further, I would like to set it so maybe one reseller account was using /home2, then another was using /home4, and another using /home.
Any ideas on how to go about splitting up users' data across separate partitions?
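From what I've read, cPanel decides where new accounts land via /etc/wwwacct.conf; with HOMEMATCH set, it puts each new account on whichever /home* partition has the most free space, though I don't believe stock cPanel lets you pin a specific reseller to a specific partition without moving accounts manually. Something like:

```
# /etc/wwwacct.conf (relevant lines only; sketch, not verified)
HOMEDIR /home
HOMEMATCH home   # new accounts go to the emptiest partition matching /home*
```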
Can a Xen disk image be converted to a disk partition?
Someone is asking whether I can host his disk image from his current host, which he is leaving over poor I/O (wonder why that would be). I can host a disk image, but I don't like disk images (slow, and 100 GB isn't very 'comfortable' either). Is there any way out there to convert a disk image into a normal partition?
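My rough idea: if the image is a raw (non-sparse, non-qcow) image containing a single filesystem, it could in principle be written straight onto a real partition of at least the same size. A sketch, with the device and file names as placeholders:

```shell
# DANGER: /dev/sdb1 is overwritten. Verify the target device first!
# disk.img is assumed to be a raw image of one filesystem (not a whole
# disk with its own partition table, which would need a loop offset).
dd if=disk.img of=/dev/sdb1 bs=4M
fsck /dev/sdb1        # sanity-check the copied filesystem before mounting
```

Is that workable, or is there a cleaner tool for this?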
I use Apache with CentOS VPS hosting for my blog. I only host one blog on this VPS account. I have 1.5 GB RAM and about 7,500 page views per day. My page loading time is 2-3 seconds (according to the Pingdom tool).
I want to know which W3 Total Cache options give the best performance (faster page loading) for a blog on VPS hosting. Currently I use Disk: Enhanced for the page cache and Disk for the database cache.
I'm having a lengthy issue where my databases are too large to import in phpMyAdmin using Plesk. Unfortunately I don't have direct access to phpMyAdmin and can only access it as a DB user through Plesk.
I have tried the following changes in php.ini:

upload_max_filesize = 64M
post_max_size = 32M
max_execution_time = 300
max_input_time = 300
Why am I still not able to import my DBs, which are about 8 MB each?
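If I can get SSH access from the host, my fallback plan is to import from the command line, which sidesteps phpMyAdmin's upload and timeout limits entirely. A sketch with placeholder credentials (dbuser, dbname, dump.sql):

```shell
# dbuser / dbname / dump.sql are placeholders.
gzip -c dump.sql > dump.sql.gz          # optional: shrink it for transfer
# Stream the dump straight into mysql -- no upload limit applies here.
gunzip -c dump.sql.gz | mysql -u dbuser -p dbname
```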
I have a website with about 20K users, and I am currently on a VPS plan at LunarPages.
However, I have run into out-of-memory trouble. Although I have configured Apache and MySQL carefully, 512 MB of memory is not enough, so the user experience has been poor these days because my site is very unstable.
I contacted Lunarpages, asking whether I can upgrade my VPS to more RAM, but they said the ONLY way to get more than 512 MB is to upgrade to a dedicated hosting plan.
The following are some stats of my website:
Total members: 20k
Online at the same time: max 600, average 300

The Lunarpages VPS plan (www[dot]lunarpages[dot]com/virtual-private-server/):
Disk space: 20 GB
RAM: 512 MB
Price: $42/mo
Now I am not sure whether to migrate to a dedicated hosting plan, because currently the main problem is just the amount of RAM; other resources (CPU, network, etc.) are not my bottleneck. So it seems not worthwhile to migrate to a dedicated hosting plan at double the price (almost 3x if I need 1 GB RAM) just for more RAM.
Can you guys give some suggestions to choose a VPS provider for my site? The factors taken into my consideration include:
* RAM: at least 1 GB for peak, 768 MB guaranteed. The bigger, the better; nice if I can choose a larger size when needed.
* Price
* Bandwidth: 1 TB/mo?
* Ease of upgrading to a dedicated host, just in case one day I have to.
* Whether there are coupons for a lower price.
I've been with zone.net for a couple of months now, and I have a guaranteed 512 MB of memory, which I seem to be constantly hitting; that seems to result in processes being killed and HTTP access vanishing. It's growing quite annoying.
I'm looking into moving onto a new provider that can provide more guaranteed RAM for about the same price.
Space isn't a huge deal, I'd do fine with a meager 5GB. Bandwidth I need at least 200GB, but wouldn't mind more.
I'd like to stay managed if possible, as I'm not as well versed in server workings as I should be. Also am in need of cPanel, which I know is a spendy sucker.
My budget is something around $70 a month, and I don't really want to go much higher than that. Still a poor college boy :/
Can anyone suggest such a provider? I've browsed around a lot of the VPS hosts but can't seem to find one that has as much RAM as I need for a decent price. All the ones that seem to have 512MB+ are pretty expensive, and offer a lot more other stuff (space/bandwidth) than I need.
As a final note, the line speed isn't that big of a deal. I'm currently on a 3mbit and am surviving, but going back to a higher speed line would be great
Just had a quick question about backing up a large MySQL DB. I have a database that is 50 GB with about half a billion entries in it. One table is itself about 40 GB; the other 10 GB consists of smaller tables.
The problem is, I want to back the database up and be able to keep it LIVE at the same time (as it will fall behind quickly if it's pulled for more than a few hours, as there are somewhere in the area of a million entries an hour, plus other deletions and queries).
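From what I've read, if the big table is InnoDB, mysqldump can take a consistent snapshot without locking the database, so it stays live during the dump. A sketch (names are placeholders, and this only avoids locks for transactional tables; MyISAM tables would still be locked):

```shell
# --single-transaction: consistent InnoDB snapshot, no global read lock
# --quick: stream rows instead of buffering the 40 GB table in memory
mysqldump --single-transaction --quick -u dbuser -p bigdb | gzip > bigdb.sql.gz
```

Is that the right approach at this size, or should I be looking at replication to a second server instead?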
I'm currently using iptables to ban IP addresses from the servers, like:

Code:
iptables -A INPUT -s xxx.xxx.xxx.xxx -j DROP

I ran a "spam trap" for the last few months, and now I have over 11,000 IP addresses that were trying to spam my website (guestbooks, phpBB and forms), and I want to ban them all (pretty sure bots run from them).
My question: is iptables the way to do it? I mean, does banning such a large number of addresses have any significant performance or other issues I should be aware of (apart from the fact that I may be banning some legitimate traffic)? Is -A INPUT the way to ban them all, or is there a more appropriate way of banning such a number of addresses?
I'm on CentOS 4.5 i686, Apache/1.3.37, Pentium D 930, 2GB RAM.
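From what I've read, with 11,000 separate -A INPUT rules every packet is checked against the list linearly, which does cost CPU as the chain grows. The usual alternative seems to be ipset, which stores the addresses in a hash and needs a single iptables rule, assuming the kernel/distro ships it (a stock CentOS 4.5 kernel may not, and older ipset releases use a different syntax; this is only a sketch):

```shell
# banned_ips.txt is assumed to hold one address per line.
ipset create spammers hash:ip
sed 's/^/add spammers /' banned_ips.txt | ipset restore -!
# One rule instead of 11000; the set lookup is roughly O(1) per packet.
iptables -A INPUT -m set --match-set spammers src -j DROP
```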
I wasn't sure where to post this, so here goes: I need to migrate a MySQL DB. In the past I have just created an SQL file and used that method (sometimes having to split the SQL file up), but now the DB is about 50 MB and 733,233 records.
Is there an easier way to migrate the Database from one server to another?
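One way I've seen suggested is to skip the intermediate file entirely and pipe mysqldump over SSH straight into mysql on the new server; the hostnames and database names below are placeholders:

```shell
# Dump on the old box, load on the new one, nothing written to disk.
# Passwords must come from ~/.my.cnf (or -pPASS) on both ends, because
# stdin is occupied by the dump stream.
mysqldump olddb | ssh user@newserver 'mysql newdb'
```

Would that be reliable for a DB this size, or is there a better tool?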
I'm selling downloads of music files. The zip files are quite large. I've had several people complain that they get a message that the server resets their connection before the download finishes.
I have a large directory which I want to copy to another account on the same server. It's one folder which contains 20,000+ files and is around 2 GB in size.