What's the best setup for a machine that will host web files for other servers to read and serve to end users on the web?
For example, I have servers Web1 and Web2 serving the same content from Files1.
I assume it's best just to go with RAID 10 to be safe?
More importantly for me, what's the best way for these systems to communicate? That is, what protocol should the web servers use to read the files from the file server?
I once used SSHFS for serving the same static files on a couple of machines from one location... but that's presumably very slow (at the time it wasn't a problem).
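For what it's worth, the conventional answer to the protocol question is NFS rather than SSHFS, since it avoids the encryption overhead on a trusted LAN. A minimal client-side sketch (the hostname "files1" and the paths are placeholders, not your actual layout):

Code:
# on Web1 and Web2: mount the shared docroot read-only from the file server
mount -t nfs -o ro,noatime files1:/export/www /var/www/shared

# or persist it in /etc/fstab
files1:/export/www  /var/www/shared  nfs  ro,noatime  0  0

Read-only mounts on the web servers also remove a whole class of consistency problems, since only the file server ever writes.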
Code:
[Wed Jan 30 22:31:33 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/404.shtml
[Wed Jan 30 22:31:33 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/favicon.ico
[Wed Jan 30 22:29:36 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/404.shtml
[Wed Jan 30 22:29:36 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/favicon.ico
[Wed Jan 30 22:27:18 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/404.shtml
[Wed Jan 30 22:27:18 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/favicon.ico
[Wed Jan 30 22:26:48 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/404.shtml
[Wed Jan 30 22:26:48 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/favicon.ico
[Wed Jan 30 22:26:47 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/500.shtml
[Wed Jan 30 22:26:47 2008] [alert] [client 150.101.99.206] /home/soupnazi/public_html/Dolphin/.htaccess: Invalid command 'php_flag', perhaps misspelled or defined by a module not included in the server configuration
[Wed Jan 30 22:20:21 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/404.shtml
[Wed Jan 30 22:20:21 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/favicon.ico
[Wed Jan 30 22:20:19 2008] [error] [client 150.101.99.206] File does not exist: /home/soupnazi/public_html/500.shtml
I am sorry. We are not able to resolve the File Does Not Exist errors for you. You have to do it on your own.
The unique errors that you have today for Apache are as follows:
Code:
client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind
File does not exist: /usr/local/apache/htdocs
File does not exist: /usr/local/apache/htdocs/501.shtml
File does not exist: /usr/local/apache/htdocs/images, referer: [url]
File does not exist: /usr/local/apache/htdocs/jailbreak-iphone-3-0-1-with-redsn0w
File does not exist: /usr/local/apache/htdocs/robots.txt
Invalid method in request \x16\x03\x01
Invalid method in request \x80b\x01\x03\x01
Invalid URI in request alt="Follow%20via%20FreiendFFeed"%20title="Follow%20via%20FriendFeed"%20border="0"%20height="16"%20width="16"%20/>Home Top%20StuffiPhoneWindows%207FirefoxVistaTipsWordpressSubscribeSitemapAdvertise \xc2\xa0\xc2\xa0\xc2\xa0Vertical1240899%20=%20false;ShowAdHereBanner1240899%20=%20true;RepeatAll1240899%20=%20false;NoFollowAll1240899%20=%20false;BannerStyles1240899%20=%20new%20Array(%20%20%20%20"a{display:block;font-size:11px;color: HTTP/1.1
PHP Fatal error: Call to a member function get() on a non-object in /home/technob2/public_html/wp-includes/cache.php on line 93
PHP Fatal error: Call to a member function get() on a non-object in /home/technob2/public_html/wp-includes/cache.php on line 93, referer: [url]
request failed: error reading the headers
I would advise that you resolve as many of these errors as you are able.
As to what caused the load on your server, I can't say. There is nothing in your logs to indicate the source of the problem. During the 1 PM period when your server last overloaded, there were only 29 errors in Apache, and the errors that occurred during that time would not cause excessive load.
I have installed a script to monitor your server. If this load issue occurs again within one week or before your next reboot, whichever comes first, we should be able to determine the source from the script's logs in /var/log/sys-snap.sh.
Code:
[Wed May 23 15:58:24 2007] [error] [client ::1] File does not exist: /htdocs
[Wed May 23 15:58:25 2007] [error] [client ::1] File does not exist: /htdocs
[Wed May 23 15:58:26 2007] [error] [client ::1] File does not exist: /htdocs
[Wed May 23 15:58:27 2007] [error] [client ::1] File does not exist: /htdocs
I'm getting a TON of these in my Apache Error log. Is this just bots? Any way to stop them?
I am using a single Apache HTTP Server (2.2.23) as a load balancer with two IBM WebSphere application server nodes (on other machines). I deployed a simple text-based helloWorld application and it works fine through the load balancer. But when I deploy the real application, which contains images, CSS files, and JavaScript files, it loads the page without images, shows me plain text only, and gives me the following exception (and similar ones) in the error log:
[error] [client 192.217.71.77] File does not exist: /usr/local/apache2/htdocs/application, referer: http://192.168.141.17/application/faces/test.jsp
Interestingly, when I access the application directly, without the load balancer, it also works fine.
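I can't see your balancer config, but this symptom usually means only some paths are being forwarded, so the images, CSS, and JS referenced by the page resolve to Apache's local htdocs instead of WebSphere. A minimal mod_proxy_balancer sketch (the node hostnames and WebSphere's default 9080 port are assumptions) that forwards the whole /application tree to both nodes:

Code:
# httpd.conf (Apache 2.2) -- node hostnames and ports are placeholders
# requires mod_proxy, mod_proxy_http and mod_proxy_balancer
<Proxy balancer://wascluster>
    BalancerMember http://was-node1:9080
    BalancerMember http://was-node2:9080
</Proxy>
ProxyPass        /application balancer://wascluster/application
ProxyPassReverse /application balancer://wascluster/application

If you're using the WebSphere web-server plugin instead, the equivalent check is that plugin-cfg.xml's UriGroup covers /application/* rather than just specific servlet paths.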
Network File System (NFS): does anyone use it or find it useful?
From what I see are advantages:
- RAID redundancy
- Good backup
- Cheap extra space
Cons:
- slower read/write speeds?
Looking for someone to add on and change my point of view on NFS. Also, if you like NFS, what is a good model for us to get? We are currently looking at the Dell PowerVaults...
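On the read/write speed concern: for mostly-read workloads over gigabit, NFS is usually not the bottleneck. For reference, the server side of a read-only export is tiny (the path and client subnet below are assumptions):

Code:
# /etc/exports on the file server -- path and subnet are placeholders
/export/www  192.168.0.0/24(ro,async,no_subtree_check)

# then reload the export table
exportfs -ra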
I run several sites, all of them hosted at Invision. The main reason for having my sites hosted there is that my sites are "forum centered" and I'm very happy with the service I have got from them over the last few years, so I don't want to change that.
However, I now wish to expand and provide my users with a file repository. The problem is, whilst hosting my sites at Invision is fine, hosting my files there would be quite expensive...
Thus, I'm now looking for a host to host my files and nothing else. (I run chess sites, so I'll be providing my users with files and possibly a gallery. All legal material, of course).
I don't need any download manager or anything of the kind. The Invision forum software actually has its own download manager, and I can have my files hosted externally (i.e. somewhere other than on my site).
Thus, I'm looking for a host that offers specific packages for what I am looking for - I would not need scripting or any 'fancy' features, just file storage with FTP access.
How much space? Around 1 GB, possibly 2 in the future, maybe 3 or 4 if I add the gallery one day. And bandwidth, as people will be downloading files from my site.
I've been looking around, but it's just so difficult: they all offer webhosting services for people who need to have their sites hosted, etc. and that's not what I need.
Any recommendations?
I don't have any fixed budgets, my focus will be on price, speed and reliability. Preferably a hosting company which has been around for a while and has good reviews.
Is there any web hosting that can be used as cheap file storage (>10 GB)? I have some huge files, but many web hosts do not accept non-web-content files.
Just got a letter from InMotion. They don't like that I uploaded my backup files to them, which makes their hosting benefits totally pointless to me, so I will be moving to some other, cheaper host, since I basically only use email.
So I need some place to store my backups... I only need about 5 GB, and I don't care about bandwidth, as I don't plan to download them unless all my HDDs burn or get stolen or something. And it should be no more than a few bucks/mo.
Hosts that are not overselling obviously have the space for file storage. But should they allow file storage on a shared hosting account if they aren't overselling and the files are legal?
Well, this can also be counted as a survey that I need to run.
It would be best if you provided a reason if your vote is no.
I am trying to find out how many 3-minute videos can be stored on a dedicated server with 500 GB of disk space, and how many videos can be viewed at the same time if it has 1500 GB of bandwidth.
We are starting a user-generated site that will potentially receive a lot of videos, and someone is trying to persuade us to use a media platform that charges based on views instead of a dedicated server. They say the server will not hold many videos, nor will large numbers of videos be viewable at the same time.
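This is answerable with back-of-envelope math once you pick a bitrate; the 500 kbps figure below is an assumption, and real numbers depend entirely on your encoding:

Code:
a 3-minute video at ~500 kbps  =  500 kbit/s x 180 s / 8  =  ~11 MB

storage:  500 GB  / 11 MB  =  ~45,000 videos stored
transfer: 1500 GB / 11 MB  =  ~136,000 complete views per month

Note that monthly transfer says nothing about simultaneous viewers; concurrency is limited by the port speed (a 100 Mbps port / 500 kbps = roughly 200 concurrent streams), which is probably what the media-platform salespeople are pointing at.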
We're currently serving approx. 150 Mbit of traffic (mainly video files) via two round-robined front-end servers. The front-end servers NFS-mount the content blades, so we can access all content via dedicated links to each blade (A, B, C), like:
mainsite.com/A/files.wmv (blade A)
mainsite.com/B/files.wmv (blade B)
mainsite.com/C/files.wmv (blade C)
We're fed up with this way of structuring our content, both because of space issues and because we need to go to blade A to get file X.wmv but to blade C for Y.wmv.
We're looking for a better solution and we need your help. I have looked at the products from Coraid, e.g.:
[url] look for "Bundle - CLN21 + SR1521"
Would this be a way to do it? It would give us plenty of storage space, and if we bought two sets, we could mirror them and mount one set on each front-end server for some failover protection. We could then remove the A, B, C areas and have it all under:
projectABC/
    X.wmv
    Y.wmv
(not so important, just an easier way for us to keep track of it)
But is Coraid even used for this kind of task, or is it a LAN product? Can it keep up with the 150 Mbit of file requests, 20-400 Mbit apiece?
Another idea was to simply build two RAID boxes with a 16-channel RAID card and some disks; wouldn't that give us something similar to the Coraid?
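For context: Coraid's shelves speak ATA-over-Ethernet (AoE), a non-routable layer-2 protocol, so it is very much a LAN/SAN product; the shelves have to sit on the same Ethernet segment as your front ends. On the Linux side, the client looks roughly like this (the shelf/slot numbers and filesystem choice are assumptions):

Code:
# load the AoE driver and discover shelves on the local segment
modprobe aoe
aoe-discover                      # from the aoetools package

# the shelf appears as a local block device (shelf 0, slot 1 here)
mkfs.ext3 /dev/etherd/e0.1
mount /dev/etherd/e0.1 /mnt/content

One caveat for your mirror-per-front-end plan: AoE exports a block device, so mounting the same device read-write from both front ends at once would need a cluster filesystem (GFS or similar); one mirrored set per server, as you describe, sidesteps that.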
How do you handle the file storage of a YouTube clone?
Is it just a matter of getting more servers with a few HDDs, or are there specialized companies to which one can upload files over a distributed file-streaming network?
The reason I ask is that I have thousands of gigabytes of videos, and it appears to be impossible to store them all on one dedicated server, or even a few.
I'm looking for advice on good hosting setups for getting large amounts of disk space.
I would like to be able to offer up to 2 GB of storage space each for hundreds, maybe up to a few thousand users; any solution should scale well. The files would be static files that might be up to 400 MB in size.
It would be nice to be able to give users FTP access to their disk space, although it's not a core requirement.
I'm currently running on a VPS. My site allows large file uploads and downloads, with files over 600 MB in size.
The server has issues when the site gets three or more requests for large file downloads. I'm trying to grow this site to thousands of users and it is hard to do when the site can't handle even three.
I've been told by my host that I need to upgrade to a dedicated server. My VPS only has 512 MB of RAM, and one large file download eats up that RAM. This is causing the issue.
I'm a newbie, and while I knew I was risking a bit by going with a VPS, I do find it a bit annoying that these guys advertise 1 TB of bandwidth per month but I can't even support downloading 1 GB at the same time... maybe it's just me...
Anyway, I am now looking into moving the large files and the upload/download over to Amazon S3. If I do this, I am expecting my RAM usage on the VPS to decrease greatly. Is this correct? If my PHP code is running on the VPS, but the actual file download via HTTP is coming from S3, that should not be a heavy load on my box, correct?
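Correct: if the browser fetches the file directly from S3, your VPS never touches the bytes, so Apache/PHP memory stays low. A minimal sketch of handing a download off to S3 with a time-limited link (the bucket, key, and credentials are placeholders; this uses S3's query-string authentication):

Code:
<?php
// Sketch: instead of streaming the file through the VPS, redirect the
// client to a time-limited S3 URL. Values below are placeholders.
$bucket    = 'my-bucket';
$key       = 'files/bigfile.zip';
$accessKey = 'YOUR_AWS_ACCESS_KEY';
$secretKey = 'YOUR_AWS_SECRET_KEY';
$expires   = time() + 300; // link valid for 5 minutes

$stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$key}";
$signature = urlencode(base64_encode(
    hash_hmac('sha1', $stringToSign, $secretKey, true)));

header("Location: http://{$bucket}.s3.amazonaws.com/{$key}"
     . "?AWSAccessKeyId={$accessKey}&Expires={$expires}&Signature={$signature}");
exit;

The VPS only does the signing and a 302 redirect; the actual bytes flow straight from S3 to the client, so your RAM usage should indeed drop.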
I have a lot of questions here, so if you can't answer them all, I understand. Even pointing me somewhere I could get the answers would be appreciated: sites focusing on server hardware, forums covering it, etc.
We plan to have three different types of servers:
- DB server (self-explanatory: MySQL, for forums and MySQL-driven sites)
- file server (lots of files around ~2-10 MB, a consistent 70 Mbps right now, but we want more room for upgrades; needs a LOT of storage room)
- web server (lots of PHP files, but also static things like plain HTML, images, etc.; also includes all misc services for the setup: DNS, etc.)
Could I be given a rundown of which hardware each of the three should have? I don't need specifics; even just knowing that more RAM is important here while CPU doesn't matter as much, or that the fastest disks available are a must, etc., would all be valuable info for me. That said, I certainly wouldn't mind specific hypothetical hardware configs.
For the database server, I'm assuming the more RAM the better. Not entirely sure about the CPU, and not positive on disks...
For the file server, how much RAM would be practical or useful? Disk I/O will be an issue, I'm sure, because plenty of people will be pulling files at once, so the disks need to read from multiple places. SCSI (and even Raptors) are not an option, as we need 750GB+ of space on a reasonable budget. More RAM will take some load off the disks, but how much is necessary/reasonable?
For the web server, I'm assuming CPU first, then RAM, though it'll likely need less RAM than the DB server?
I'm more lost on the disks than anything. SCSI on the file server is not an option under any circumstances due to $/GB. For the DB and web servers, I'm willing to pay for SCSI if the performance increase really does warrant the extra money, but I'd like to be convinced before shelling it out. If you have disk benchmarks geared at server hardware, I'd really appreciate them.
Also, what's the best way to network these together when colocated? Each one with dual gigabit Ethernet ports, with the communications going through the router?
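One common pattern (a sketch; the addresses are assumptions) is to leave public traffic on eth0 and put all inter-server traffic, MySQL and file access, on the second NIC, wired to a private gigabit switch or VLAN rather than through the router:

Code:
# /etc/network/interfaces (Debian-style), on the web server for example
auto eth1
iface eth1 inet static
    address 192.168.10.2          # e.g. .1 = db, .2 = web, .3 = file
    netmask 255.255.255.0

The web server then reaches MySQL at 192.168.10.1 and mounts the file server over the same private segment, so that traffic never competes with public bandwidth (or gets billed as such).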
I'm completely torn between going the absolute budget route and spending more for something that'll allow easy upgradeability in the future. I basically need lots of space for file sending: media like MP3s, video, etc.
It'll be RAID 5, and I'll need at least 2-3TB initially, but the ability to expand would be nice.
Option 1: a nice chassis with plenty of hot-swap bays and SAS expanders, plus an expensive SAS RAID card.
Option 2: a cheap chassis to serve "immediate" needs, adding more later. Not sure what I'd use as a card, maybe even onboard?
Regarding reliability: I once saw a database of failure rates for different models; the Raptor was the most reliable of the "desktop" drives. Anyone have the link? I'm wondering if the Seagate ES drives are worth the extra money vs. the non-ES drives; they're supposedly more reliable, the "server versions" of SATA drives.
Is it normal practice to have shared filespace that multiple web servers can access? Then I just give my developers access to that filespace (one server, instead of multiple users on multiple servers) to manage files for different sites easily... right? (Also meaning multiple servers can serve the same content.)
So.. what would be the best way to do this? We're talking Linux systems here by the way. What sort of specs would such a server need?
I *think* it's NAS that I'm trying to get at.. unsure though if that's correct or not!
Hello, we have a few web servers that run Windows 2003 Server and IIS for web page hosting. We develop custom applications and don't do "web hosting" per se.
What is the best way to do this in a load balanced environment? We have a Cisco load balancer out in front of these servers, but I'm curious about the following:
1) Is there a way to replicate IIS entries instead of having to configure the site on each server?
2) How does everyone handle file replication (hopefully in real time) across all servers?
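On (1): IIS 6 ships iiscnfg.vbs, which can export or copy metabase settings between servers, so you don't have to configure each site by hand. On (2): short of a real replication product (DFS Replication on 2003 R2 is the built-in option), a low-tech approach is a mirrored robocopy job from a master server; the server names and share paths below are assumptions:

Code:
rem run on the master server; /MON:1 keeps robocopy resident and re-runs
rem the mirror whenever at least one change is detected
start robocopy C:\inetpub\wwwroot \\WEB2\wwwroot$ /MIR /R:1 /W:1 /MON:1
start robocopy C:\inetpub\wwwroot \\WEB3\wwwroot$ /MIR /R:1 /W:1 /MON:1

This is near-real-time rather than truly synchronous, so it suits content that changes on deploys rather than content written by the applications themselves.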
How do you set up the Nginx web server on a cPanel server to serve static content, say the /images folder of every domain hosted on the server, so that Apache's load decreases?
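The usual pattern is to put nginx on port 80, serve the static paths itself, and proxy everything else back to Apache. A sketch, assuming Apache has been moved to port 8080 and the docroots live under /home/*/public_html (cPanel regenerates Apache's config, so change the port through WHM rather than editing httpd.conf directly):

Code:
# nginx config fragment -- ports, names and paths are assumptions
server {
    listen 80;
    server_name example.com;          # one block per cPanel domain

    # serve static files directly
    location /images/ {
        root /home/someuser/public_html;
        expires 30d;
    }

    # everything else goes to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}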
I am configuring a new Plesk 12.x server on Linux. I already have a Plesk 10.x server on Windows, and I would like to transfer all the data we have on it to the new server.
I tried to do a backup of the old server using the web interface, but the zip file it created was not compatible with the Linux server.
I was thinking of putting together a DB of all the IPs I block due to spam, hackers, known proxies, etc. This would help cut down on malicious use of services such as spamming forums: if I catch them once, they would not have a chance to spam up my other forums, as each forum checks the same block list. I could also auto-block proxies by having it query online proxy lists.
But before I reinvent the wheel, does such a service already exist online? Kind of like an RBL, but for web services, so you can basically block IPs from your site before they get used maliciously.
To be even more advanced, special port scanners could go around scanning networks for infected machines and block those too.
How it would work:
Site A gets a spammer from IP 1.2.3.4 and submits it to the block service. The spammer then goes to site B to try to spam that forum, but the IP is already blocked, because the owner of site A submitted it to the list and site B checks against that list.
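This pattern does exist: DNS-based blocklists aren't limited to mail, and HTTP-abuse lists work the same way as an RBL. The lookup side is trivial; a sketch in PHP (dnsbl.example.org is a placeholder for whichever list you'd use):

Code:
<?php
// Returns true if $ip is listed on the given DNS blocklist.
// DNSBLs are queried by reversing the octets: 1.2.3.4 -> 4.3.2.1.dnsbl.example.org
function ip_is_listed($ip, $zone = 'dnsbl.example.org') {
    $reversed = implode('.', array_reverse(explode('.', $ip)));
    return checkdnsrr($reversed . '.' . $zone, 'A');
}

if (ip_is_listed($_SERVER['REMOTE_ADDR'])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied.');
}

The real work in running your own list is the submission and vetting side; the lookup side is exactly this cheap, which is why the RBL model scales.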
I was CHMODding some directories and suddenly my FTP program says: Directory does not exist. When I refresh, indeed the dir is gone, and the site is not accessible anymore.
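Most likely the directory still exists: stripping the execute (x) bit from a directory makes it impossible to enter or list, so FTP clients report it as missing. If you have shell access, restoring sane permissions should bring it back (the path below is a placeholder):

Code:
# a directory needs the execute bit to be traversed; 755 is the usual default
chmod 755 /home/user/public_html/somedir

# and if files inside were affected as well:
find /home/user/public_html/somedir -type d -exec chmod 755 {} \;
find /home/user/public_html/somedir -type f -exec chmod 644 {} \;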