Huge Site Scalability
Jun 11, 2008
I'm just curious what the huge sites--YouTube, MySpace, etc.--are doing to stay scalable. What sites do you guys just hate for failing in this regard, and, perhaps most importantly, what are some ways we can prevent downtime?
View 4 Replies
Sep 16, 2007
I am currently moving away from my current dedicated servers because they simply cannot handle the load. I have a site which frequently makes it onto radio, Digg and other similar sites.
I need a dedicated server that can take a beating from Digg and offline media. For most of the month the server load is really low; the site hardly uses up anything. However, when it hits those sites, it suffers.
I am OK with using a shell for basic tasks (tar, logs, SQL dumps, httpd.conf editing, rebooting, etc.), but anything beyond that, like installing and configuring software, I can't really do.
I guess I am looking at a dedicated option (Linux based) with a host that'll set up software and modules for me when I ask, but that doesn't really need to hold my hand all the time.
How are ThePlanet.com's servers? Do they manage the servers?
View 14 Replies
Nov 12, 2007
Current forum users: 550,000
Traffic: 700,000 to 1,000,000 page views a day, with 70,000 – 100,000 visits a day.
Monthly bandwidth needed: around 18,000GB (18TB)
Now can someone tell me what kind of solution I need to host this website and keep it running smoothly?
View 4 Replies
Nov 19, 2008
I have used this forum before to find good suggestions for VPSes, but now I'm in need of something where I have no idea where to go to get the solution, or even what that solution might be.
First, my situation is this...
I've built a web application in PHP for an emerging company, and its primary function is to crawl remote websites that we are provided API access to (via lib_curl_multi). The company's clients log in to their account and initiate the crawler on a domain of their choice that has API access. The job is then added to a queue (to prevent abuse/server overloading) which performs the crawl. The crawl process takes anywhere from 1 minute to an hour and uses anywhere from 50KB of bandwidth to 30MB, depending on the remote domain.
The client anticipates needing 1,000 crawls every 24-hour period for their first 3 months, which obviously means crawls have to be done in simultaneous 'groups' to ensure all 1,000 in the queue can be done throughout the day.
That means for their first 3 months, they need bandwidth of about 10-15GB per day and I have NO CLUE what kind of hardware setup they'll need.
That's not really the issue, though, as I'm sure most dedicated server setups found here can support that. The problem is that after those 3 months they anticipate a ten-fold increase (obviously a gradual build-up) in the number of crawls needed daily, meaning 10,000.
Now that's a huge increase in bandwidth as well as CPU and memory needs. What kind of setup and/or host could accommodate this constant need for scaling without charging ridiculous prices?
My theory is that there needs to be one dedicated server or VPS to serve up the website and its content, while one or many dedicated servers (an expand-as-they-grow kind of deal) process the crawler queue in the background (hopefully geographically dispersed, as their clients are worldwide). EDIT: I forgot to mention that if the website server is separate from the others, they MUST share the same MySQL database, as that is where the queue is stored.
I hope I didn't confuse anyone. I'm great with programming, but hardware and hosting's not my strong point so please let me know if you need clarification.
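In case it helps frame hardware suggestions: the queue logic the website box and the worker boxes would share is just an atomic claim against the central MySQL table, roughly like this (table and column names here are made up for illustration):
-- each worker box atomically claims the oldest pending crawl
UPDATE crawl_queue
SET status = 'running', worker = 'worker-1', started_at = NOW()
WHERE status = 'pending'
ORDER BY id ASC
LIMIT 1;
-- then fetches the job it just claimed
SELECT * FROM crawl_queue
WHERE status = 'running' AND worker = 'worker-1';
So the web server and however many crawl servers there end up being only ever coordinate through that one table.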
View 11 Replies
Jul 2, 2009
MySQL just released an update including "scalability improvements" -- how badly were these needed?
"An update has been released for initial preview release of MySQL 5.4. The release contains scalability improvements and additional DTrace probes for diagnostic troubleshooting on Solaris."
View 0 Replies
Dec 1, 2008
Does support matter if there's 100% uptime and scalability?
Our team has been developing scalable sites since 2004. We started renting servers from LayeredTech back then, since they had good reviews, and they were still good until we migrated away from dedicated-server land. Although we have systems administration backgrounds, administering the servers (looking over logs, backups/restores, performance graphs, hardware failures, etc.) still took time away from developing software. Having said that, one thing I've noticed is that customers are usually happy as long as servers are always running, and running fast.
To get rid of the systems administration part we tried Mosso (they had just launched; great support but a lot of problems), we tried mediatemple's grid (also a lot of problems), we couldn't try EC2 because of the persistent storage issue, and lastly we are currently using thegridlayer (it lags; the initial request takes about a second to display a page even with no load on the server).
The next things to try were VPS and then managed dedicated servers. We decided to try VPSes so we could isolate sites from each other and add VPSes as needed for specific sites. So I got one at zone.net and it ran fine until they had the problem mentioned here. People recommended them because they had fast servers; now it's the opposite because of this one downtime.
So finally, my questions:
1) How much do you think support is needed if your host provides fast servers and 100% uptime?
2) What measures do you take (if any) to verify the host's procedures such as backups, company size, profitability, etc?
3) How do you verify that a host is not overselling before buying a hosting package (assuming shared or VPS)?
View 9 Replies
Jul 8, 2009
Just moved to a new server, and of course 10GB doesn't seem that large for a server backup, but for some reason wget is not able to handle the transfer for me... it transfers about 1MB and then tells me "successful transfer..."
The old server is using cPanel, and the new server is just a plain old server that I haven't loaded up yet.
How can I get this full backup over to the new server?
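For what it's worth, the direction I'm leaning is pulling it over SSH instead of HTTP, something like this (hostname and path are placeholders):
# pull the backup from the old cPanel box; --partial lets a dropped transfer resume
rsync -av --partial --progress root@old-server.example.com:/home/backup-full.tar.gz /home/
# or plain scp if rsync isn't on the old box (not resumable)
scp root@old-server.example.com:/home/backup-full.tar.gz /home/
And if it has to stay wget, wget -c would at least resume the partial download instead of starting over.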
View 11 Replies
Feb 3, 2008
I'm sure this question has been asked before, but I'm looking for a nice and simple way of breaking log files into smaller chunks.
I've been running Apache 2 on a VPS for the past few months and one of the access.log files is now 700MB, a bit of a waste of space. Currently my Apache config just has:
CustomLog /var/www/logs/domain.com/access.log combined
ErrorLog /var/www/logs/domain.com/error.log
Is there an easy way of telling Apache to keep just the last week's or month's worth of logs?
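The sort of thing I'm picturing is piped logging through Apache's bundled rotatelogs, which starts a new file every 86400 seconds (daily) so old ones can be pruned by a cron job (untested; the path to the rotatelogs binary varies by distro):
CustomLog "|/usr/sbin/rotatelogs /var/www/logs/domain.com/access.%Y-%m-%d.log 86400" combined
But maybe there's a cleaner built-in way to cap it at a week or a month?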
View 7 Replies
May 15, 2008
The error logs on my web server keep growing to stupidly large sizes within a couple of weeks.
When I look through the error logs, they show exactly the same line over and over, just from different IP addresses. The line is as follows:
[Sun May 11 07:11:41 2008] [error] [client ###.###.###.###] File does not exist: /var/www/phpmyadmin/tracker
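Regardless of where the noise comes from, I'm assuming a logrotate rule would at least cap the size, something like this sketch (the path and the reload command would need adjusting for my setup):
/var/log/apache2/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/apache2ctl graceful > /dev/null 2>&1 || true
    endscript
}
That would keep four weeks of compressed logs and gracefully reload Apache so it reopens the files.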
View 5 Replies
Mar 24, 2008
I've been using mod_security for a long time, but apparently I accidentally enabled some kind of logging that writes to MySQL. I don't remember it being there before, but the point is, the database is now something like 145100k!
Which is HUGE for a database.
How can I disable this stupid log?
View 2 Replies
Jun 23, 2008
I had several user accounts that were pushing their quota. I was digging around over SSH and found that the INBOX file in /home/username/mail was huge, even though the user does not keep messages on the server. I deleted this file to free up space and all seemed fine. A couple of seconds later I checked, and the file had been recreated with new incoming mail.
My question is: how do I keep this file from growing out of control? One of the users I've had for almost 2 years had an INBOX file of almost 2GB!
Server Details:
VPS running WHM 11.23.2 cPanel 11.23.3-R25623
Redhat 9
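In case it's useful, this is roughly the check I ran to find the oversized mailboxes in the first place (a sketch; adjust the size threshold):
find /home/*/mail -maxdepth 1 -name INBOX -size +100M -exec ls -lh {} \;
What I'm after is a way to stop them getting that big to begin with.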
View 5 Replies
Oct 26, 2008
I want a new dedicated server with 3TB+ of bandwidth at the best price and quality.
View 14 Replies
Oct 30, 2007
Where do you go to host HUGE websites, YouTube-like sites, with HUGE bandwidth usage?
I don't believe people go with a host like Rackspace, with their 150GB/month packages, unless they want to pay an absurd amount of money... so where do these guys host? What kind of hosts are these?
View 9 Replies
Oct 29, 2007
I have been receiving a huge Logwatch report; it seems that Logwatch is not parsing the /var/log/secure file, but is sending the raw log entries instead of a summary of them. I get thousands of lines like:
Cp-Wrap: Pushing "47 GETDISKUSED pvargas lights.com.co" to '/usr/local/cpanel/bin/eximadmin' for UID: 47 : 25 Time(s)
Cp-Wrap: Pushing "47 GETDISKUSED r.perez konecrans.com" to '/usr/local/cpanel/bin/eximadmin' for UID: 47 : 69 Time(s)
Cp-Wrap: Pushing "47 GETDISKUSED r.rodriguez konecrans.com" to '/usr/local/cpanel/bin/eximadmin' for UID: 47 : 114 Time(s)
I have upgraded to the most recent version of Logwatch with default configuration. Any ideas on what could be wrong?
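As a stopgap, I gather Logwatch will drop lines matching the regexes listed in /etc/logwatch/conf/ignore.conf, so something like this should at least silence the noise while I find the real cause (untested):
# /etc/logwatch/conf/ignore.conf -- one regex per line
Cp-Wrap: Pushing .* to '/usr/local/cpanel/bin/eximadmin'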
View 4 Replies
Nov 26, 2007
I'm looking for ways to improve database performance when I have to modify a large table (several million rows), e.g. by adding a column. Currently this takes several hours, which is too slow. The bottleneck is disk I/O. I am considering either partitioning the table over several InnoDB files on several disks, or going to RAID-5 or RAID-10, if that will give me better write performance.
The database is 130GB, and the problem table (which I make periodic changes to) is the largest table on the server. I cannot have 3 hours of downtime each time I make a change, and adding blank fields (to be used later, when a new field is needed) is not an option.
Each time I add a column, the CPU goes into a high (80%) I/O wait state for about 3 hours.
I have a hack which would allow me to split the large table into multiple smaller tables based on some criterion (for example, forumID or such). Here are a couple of ideas, but I'd like to know which is best, and I'm open to new ones. The ideas so far:
1. Split the table into 3 or 5 smaller tables, each on its own disk. The disk I/O would then not be so bad, and it might only take 1 hour to perform the table change. But this might not work because the changes to the database (as in adding a column) might be serial, meaning only 1 disk is being written to at a time. (Then again, maybe it will work if I launch 3 different scripts, one to update each table at once.)
2. Do RAID 5 or 10 with 3 or 5 disks. This again might not help at all because of the above issue with MySQL writing serially.
I am using the latest MySQL 5.0.45 with the InnoDB engine on Debian Etch Linux.
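For reference, the fallback I keep coming back to is building the altered table alongside the old one and swapping at the end, so the long copy at least happens while the old table stays readable (a sketch; table and column names are made up):
-- build the new structure
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ADD COLUMN new_col INT NOT NULL DEFAULT 0;
-- bulk-copy existing rows (the slow part, but reads continue against big_table)
INSERT INTO big_table_new (id, forumID, data)
SELECT id, forumID, data FROM big_table;
-- atomic swap
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
The catch is that rows written to big_table during the copy have to be re-applied before the swap, so it's not a complete answer either.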
View 4 Replies
Nov 6, 2009
I need around 300+GB of bandwidth and 20+GB of space, with a 2-5MB SQL database. Is it advisable to take HostGator's starting plan (Hatchling)?
Is HostGator worth it?
View 14 Replies
Dec 11, 2008
I have one domain that hosts a lot of subdomains, and for some reason it constantly shows 4% CPU usage and 33% memory usage. Since that domain is inactive, could that usage be because of addon domains that simply aren't presented correctly in WHM?
View 4 Replies
Jan 24, 2008
I have an 18GB bandwidth.log file in the /etc/log/ directory. What is the bandwidth.log file for? And what could make it grow to 18GB, especially in one night?
View 4 Replies
Sep 21, 2008
I have done my research, befriended a few super proxy webmasters, and learned everything I need to know about being successful in the proxy business. So I am selling almost all my websites to fund this huge project. I will also be flipping proxies from time to time to fund it even more. This will be a year-long project and will be my full-time job sooner or later. My goal is to have 1,000 proxy sites.
So with this knowledge, my questions are the following:
1) Which hosting plan should I get right now: "Reseller" or "VPS"?
2) Which one would be more profitable in the short term?
View 7 Replies
Jul 8, 2007
Just a few minutes ago my site went down, so I went to check through PuTTY, and when I ran top this is what I got:
top - 09:49:35 up 5 days, 14:41, 2 users, load average: 192.59, 109.31, 62.29
Tasks: 299 total, 3 running, 296 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.0% us, 5.3% sy, 0.0% ni, 0.0% id, 88.7% wa, 0.3% hi, 1.7% si
Mem: 1009272k total, 1001268k used, 8004k free, 124k buffers
Swap: 3919840k total, 1518816k used, 2401024k free, 14676k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14263 apache 17 0 201m 9m 3788 D 1.0 1.0 0:04.74 httpd
16772 apache 17 0 152m 13m 5340 R 1.0 1.4 0:00.82 httpd
16881 apache 16 0 155m 14m 5368 D 1.0 1.4 0:00.52 httpd
16767 apache 16 0 154m 14m 5352 D 0.7 1.4 0:00.48 httpd
16864 apache 16 0 155m 15m 5364 D 0.7 1.6 0:00.80 httpd
16874 apache 17 0 155m 14m 5416 D 0.7 1.4 0:00.60 httpd
8900 apache 17 0 200m 12m 3844 D 0.3 1.3 0:10.60 httpd
13680 apache 17 0 202m 10m 3944 D 0.3 1.0 0:06.05 httpd
14687 apache 17 0 202m 11m 4060 D 0.3 1.2 0:06.12 httpd
14838 apache 16 0 206m 16m 5624 D 0.3 1.6 0:08.19 httpd
15858 apache 17 0 152m 13m 5452 D 0.3 1.4 0:01.39 httpd
16593 apache 17 0 150m 9180 3664 D 0.3 0.9 0:00.49 httpd
16668 apache 17 0 200m 7304 3496 D 0.3 0.7 0:00.72 httpd
16703 apache 17 0 149m 7208 3192 D 0.3 0.7 0:00.61 httpd
16750 apache 17 0 151m 14m 5268 D 0.3 1.5 0:00.81 httpd
16855 apache 17 0 200m 6616 3480 D 0.3 0.7 0:00.68 httpd
16863 apache 17 0 156m 13m 5500 D 0.3 1.3 0:00.61 httpd
But after a few minutes the server load went down to 5. What could have caused the huge server overload?
Server spec:
64 3500+
1GB of RAM
View 9 Replies
Mar 13, 2007
A while back I found a great deal on SSL certificates, so I purchased a bulk package of about 10 of them and used several at the time. Now that I've gone back to use the rest of my pre-purchased SSL certificates (more than a year later), the "contracts" have apparently EXPIRED, and the money that was put into those contracts has been frozen along with them! WHAT THE F#$@!
That is such BS! When you pay money for something you should get something in return.
What have I learned? That this, to me, seems extremely manipulative of RapidSSL and GeoTrust...
I WILL NEVER PURCHASE AN SSL FROM RapidSSL or GeoTrust AGAIN! And I hope this post inspires others to select one of the many other certificate sellers out there that are more upfront about their business.
I have contacted both of them and both are telling me that they cannot help me.
Now that I am looking for a new SSL provider, can someone recommend a good, respectable company?
View 12 Replies
Nov 8, 2007
My server has huge server loads of 25+ at random. When I log in as root and run the top -s command, the highest CPU usage of any process is less than 5%, and the total is less than 50%. Yet my server load can reach as high as 80.
I also get the "lfd: High 5 minute load average alert " email, but that also does not show what process uses such high resources.
How can the huge server load be seen and explained?
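For context, the only lead I have so far is that high load with idle CPUs usually means processes stuck in uninterruptible (D) sleep waiting on disk or NFS. This is the kind of check I've been running (a sketch):
# list processes in uninterruptible sleep and what they're waiting on
ps axo pid,stat,wchan:30,args | awk 'NR==1 || $2 ~ /D/'
# watch per-disk utilisation (from the sysstat package)
iostat -x 5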
View 4 Replies
Feb 1, 2007
I am getting a huge DDoS attack on one of my servers. It's a botnet attack coming from Turkey's IP blocks, where the computers have dynamic IPs; every IP sends one 48-byte packet to ports 80, 22, 110 and 25 and closes the connection, so the machine became unreachable because of the SYN attack. What would you advise? Do you recommend the Cisco PIX series or LayeredTech's DDoS protection (Cisco PIX 501 - 1 Server Only - $99 Monthly Charge - $49 Set Up)? There are already 1,834 IPs banned by the software firewall, and I am wondering whether the Cisco PIX could handle such an attack.
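As a stopgap at the host level I've sketched out a crude iptables SYN rate limit (the thresholds are guesses, and a big enough flood will saturate the uplink before these rules ever see it):
# let established traffic through untouched
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# rate-limit new SYNs to the attacked ports and drop the excess
iptables -A INPUT -p tcp --syn -m multiport --dports 22,25,80,110 -m limit --limit 20/s --limit-burst 40 -j ACCEPT
iptables -A INPUT -p tcp --syn -m multiport --dports 22,25,80,110 -j DROP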
View 14 Replies
Jul 4, 2007
My IP is being blocked by the datacenter because of a huge amount of OUTGOING data on port 53.
How do I close port 53 from SSH? I am now inside the datacenter.
I understand port 53 is the DNS server port and is usually for incoming traffic... so what is this huge amount of outgoing traffic about?
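If the answer is simply to shut it off from the shell, I assume it's something like this (a sketch; it will also kill legitimate DNS service on the box):
# stop the nameserver if one is running
service named stop
# and/or drop anything leaving with source port 53
iptables -A OUTPUT -p udp --sport 53 -j DROP
iptables -A OUTPUT -p tcp --sport 53 -j DROP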
View 14 Replies
Mar 26, 2008
Has anyone else been having big latency issues with ThePlanet (EV1 Houston 1 or 2) datacenter?
I called and they said they are having issues which were causing slow connections. If that were the case, there would be at least some threads going on in here about it. Can anyone else confirm?
View 14 Replies
Feb 8, 2008
I need to transfer a client's site files (over 220MB) to my server. The client does not use cPanel or have SSH access.
FTP is horribly tedious. I have created the account on my server and have SSH enabled. I have a feeling I can use wget to download the files to the account's home directory, but I am not sure of the correct syntax to recursively download all the directories and the files.
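Is it something along these lines? (A guess; the hostname and credentials are placeholders.)
# mirror the whole site over FTP into the current directory
wget -m ftp://ftpuser:secret@ftp.client-site.example.com/public_html/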
View 6 Replies
Jan 6, 2008
I have a request to build a standard 32-bit Windows 2003 server as big as possible using standard parts. I am thinking if I use 4 x 750GB in RAID 5, that will give me about 2.1TB of usable space (RAID 5 keeps 3 of the 4 drives for data: 3 x 750GB = 2,250GB). Are there any limitations or bottlenecks I should be wary of?
View 7 Replies
Sep 30, 2007
Let's suppose you have a site on a shared hosting plan, and all of a sudden it gets a huge surge in traffic as a result of being featured in the news or something like that. What would be a good plan of action to deal with the surge quickly?
(e.g. maybe your host takes the site offline for bandwidth overuse)
View 5 Replies
Jul 16, 2007
I have a VPS account that is controlled by a VM management panel, and the VPS runs WHM/cPanel.
When I log in to the VM control panel, it shows that I've used 9.5GB of traffic this month. When I check bandwidth in WHM, it says 1.2GB!
Now, what do you think: which one is correct, and why is there so big a difference?
View 1 Replies