We are having trouble with a site on a dedicated server; specifically, a performance issue caused by many simultaneous visitors (around 1,000). MySQL is overloaded. We would like to use XCache to improve performance. Is XCache safe and does it work well with Plesk 12 on CentOS?
Nowadays the server is under heavy load from HTTP, and in the access logs we see messages like the following:

127.0.0.1 - - [11/Oct/2008:01:40:02 -0700] "OPTIONS * HTTP/1.0" 200 -
127.0.0.1 - - [11/Oct/2008:01:40:03 -0700] "OPTIONS * HTTP/1.0" 200 -
127.0.0.1 - - [11/Oct/2008:01:40:02 -0700] "OPTIONS * HTTP/1.0" 200 -

We believe this is contributing to the load on the server, but we cannot understand why it is happening or how to stop it. Please suggest a solution.
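These "OPTIONS * HTTP/1.0" requests from 127.0.0.1 are usually Apache's own wake-up pings from the parent process to idle child processes (later Apache versions log them as "internal dummy connection"); they are normal and are rarely the cause of real load. If the log noise itself is the concern, a sketch like this keeps loopback requests out of the access log (assumption: the log path and format name match your httpd.conf):

```apache
# Mark requests arriving from the loopback address...
SetEnvIf Remote_Addr "^127\.0\.0\.1$" loopback
# ...then log only requests that are NOT marked.
CustomLog /var/log/httpd/access_log combined env=!loopback
```

The real load is more likely elsewhere (actual traffic, scripts, or the database); these entries just make it look alarming in the logs.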
I am getting this error when our clients send mail from Outlook, although they can send mail from webmail fine.
We are using the Exim mail server with WHM. I can't work out where the problem is; can anybody help?
'Sending' reported error (0x800CCC6A): 'Your outgoing (SMTP) e-mail server has reported an internal error. If you continue to receive this message, contact your server administrator or Internet service provider (ISP). The server responded: 451 Please try again later.'
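A 451 is a temporary ("try again later") rejection, often caused by rate limiting or sender/recipient verification in the SMTP ACLs; webmail typically bypasses those checks by injecting mail locally, which would explain why it still works. Exim's main log normally records the reason. A hedged sketch of checking for it (the sample log lines below are invented for illustration; real messages vary by Exim version and ACL configuration):

```shell
# Invented sample lines in the general style of /var/log/exim_mainlog
cat > /tmp/exim_mainlog.sample <<'EOF'
2008-10-11 09:12:01 H=client.example.net [198.51.100.7] temporarily rejected RCPT <user@example.com>: ratelimit exceeded
2008-10-11 09:13:44 1Koq4z-0001dM-3g => user@example.com R=dnslookup T=remote_smtp
EOF

# 451 is a temporary failure: count "temporarily rejected" entries to see
# how often the server is issuing them.
grep -c 'temporarily rejected' /tmp/exim_mainlog.sample
```

On a real server you would run the grep against the actual Exim main log (commonly /var/log/exim_mainlog on cPanel/WHM systems) and read the reason text after the rejection.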
Hi. "Optimization of computing resources has long been an important management issue. One of its aspects concerns server scalability and the question of whether an organization should scale up or scale out. Assume that the computing performance of the servers can be measured by a variable p >= 0, that their total cost is given by c, and that the relationship between server performance and cost is defined by c = αp^β."
a. What, precisely, is the cost-performance elasticity (ђ)?
b. What range of values for ђ would be expected under Moore's law, and what are its implications?
c. What range of values for ђ would lead managers to scale out? Draw a graph and thoroughly explain the implications.
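For part (a), the elasticity follows directly from the stated cost function. This sketch only sets up the derivation (writing η for the question's ђ); parts (b) and (c) then reduce to reasoning about plausible values of β:

```latex
% Cost function: c = \alpha p^{\beta}
% Elasticity of cost with respect to performance:
\eta = \frac{dc/c}{dp/p}
     = \frac{dc}{dp}\cdot\frac{p}{c}
     = \alpha\beta p^{\beta-1}\cdot\frac{p}{\alpha p^{\beta}}
     = \beta
```

So the elasticity is constant and equal to β. One common line of reasoning for (c): if β > 1, cost grows faster than performance, so one big server is expensive relative to several small ones, which favors scaling out; if β < 1, the opposite holds.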
We are a web-based Yacht Charter company, with offices scattered around the world: www.boatbookings.com
Currently, both our web site and our back-office business management system are hosted on a single server in the UK, with an automatic fail-over to a server in Dallas, TX, USA.
The problem we are having is that our sales office in Singapore sees really slow response times, which is very frustrating for them.
Using an application called "JustPing", we see that response times from Singapore are much slower than from other parts of the world (cities closer to London are fastest; those further away are slowest).
Is there any way to improve this, or is hosting our applications on multiple servers the only way to improve performance? What is the most cost-effective approach to multi-server hosting?
(Incidentally, if I JustPing Google, response times are fantastic worldwide, but we know they're hosted on many, very large servers.)
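The Singapore slowness is mostly physics: every request pays at least one round trip to the London server, and a page with many resources pays it repeatedly. A toy model of that effect (all numbers are illustrative assumptions, not measurements):

```python
# Toy model: total load time ~ requests * round_trips_per_request * RTT.
# It ignores parallel connections, keep-alive, and bandwidth; it only
# shows why distance dominates perceived speed.
def page_load_estimate(rtt_s: float, requests: int,
                       round_trips_per_request: int = 2) -> float:
    return rtt_s * requests * round_trips_per_request

# ~250 ms RTT Singapore->London vs. ~10 ms for a nearby client,
# for a page with 30 resources (both figures are assumptions):
singapore = page_load_estimate(0.25, 30)
london = page_load_estimate(0.01, 30)
print(singapore, london)
```

This is why a CDN or a regional mirror for static assets can help a lot: it cuts the per-request round-trip time for distant users without splitting the back-office system across servers.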
I was just playing around with LiteSpeed and thought I would switch back to Apache for a few minutes to see how the server reacted. The load with LiteSpeed was 1.00-3.00; when I switched to Apache, the load jumped to 28.00-35.00. It's amazing how well LiteSpeed handles connections.
Is there a site that lets me enter the URL of my website and simulates visitors from multiple locations? It needs to load the page completely and run for, say, 10 minutes. I have found two things so far: host-tracker, which only fetches headers from multiple locations, and only once; and Paessler's software, which can test exactly what I want (a number of visitors over a period of time, with full page downloads) but must be run from a single PC (mine), so I cannot test bandwidth from multiple locations. I need a combination of the two - does anyone know of something like that?
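Until a multi-location service turns up, a self-hosted load generator can at least cover the "N concurrent visitors for M minutes" half of the requirement. A minimal sketch (assumption: `fetch` stands in for a full page download; in practice you would pass something like `lambda: urllib.request.urlopen(url).read()`):

```python
# Minimal self-hosted load-generator sketch. It cannot simulate multiple
# geographic locations, but it holds N concurrent "visitors" on a page
# for a fixed duration and records per-request timings.
import concurrent.futures
import time

def _timed(fetch) -> float:
    """Run one fetch and return how long it took, in seconds."""
    start = time.monotonic()
    fetch()
    return time.monotonic() - start

def run_load(fetch, visitors: int = 10, duration_s: float = 600.0):
    timings = []
    deadline = time.monotonic() + duration_s
    with concurrent.futures.ThreadPoolExecutor(max_workers=visitors) as pool:
        while time.monotonic() < deadline:
            batch = [pool.submit(_timed, fetch) for _ in range(visitors)]
            timings.extend(f.result() for f in batch)
    return timings

# Demo with a stand-in fetch that just sleeps 10 ms:
timings = run_load(lambda: time.sleep(0.01), visitors=5, duration_s=0.1)
print(len(timings), min(timings))
```

Running it from several rented VPSes in different regions would approximate the multi-location part, though that is more setup than a hosted service.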
My dedicated server is sometimes sluggishly slow. I would like to get a grasp of its performance over the course of a day, to better understand what's going on.
I am therefore looking for a server performance monitoring service. All the services I have found so far simply monitor uptime (server is down / server is up). I need something more: a service that checks the loading time of the main page every 30 seconds or so.
It should then let me download the data as CSV or draw a response-time graph.
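The data-collection half of this is small enough to self-host while you look for a service: time each page fetch and append a CSV row, then run it on a schedule (e.g. from cron). A sketch, with the output path and the stand-in fetch both being assumptions:

```python
# Append one (timestamp, seconds) row per run; graph or open the CSV later.
import csv
import datetime
import time

def record_sample(path: str, fetch) -> float:
    """Time one page fetch and append a CSV row of (UTC timestamp, seconds)."""
    start = time.monotonic()
    fetch()
    elapsed = time.monotonic() - start
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now(datetime.timezone.utc).isoformat(),
             round(elapsed, 3)]
        )
    return elapsed

# Demo with a stand-in fetch that sleeps 50 ms; in practice pass e.g.
# lambda: urllib.request.urlopen("http://www.example.com/").read()
elapsed = record_sample("pagetime.csv", lambda: time.sleep(0.05))
print(elapsed)
```

One caveat: run it from a machine outside the server itself, otherwise you measure local performance rather than what visitors experience.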
I've recently found (over the last six weeks) that the performance of my clients' websites on my trusted host's servers isn't what it used to be, or what it should be. Downloads seem to take much longer than before. So far this hasn't caused a significant drop in average pageviews per unique visitor, but it might if things continue. The host blames a number of attacks on the mail server and an unusually high load at one point in time. Above all it's just annoying - isn't reliable server performance exactly what one can expect from a reputable host?
Here's the question: what's the best way of monitoring online how the web and mail servers are performing, so that I can take the report to my host and urge them to take (more) action? Ideally I'd like to compare this with a separate web server that I use for another client. I don't mind spending a little money, but high subscription fees are not in my budget.
I am working on a busy and popular website which has a large amount of database activity - and requires hourly backups of all database data.
At the moment the site is hosted on two servers - one for the front end web server, one for the database.
Both servers run RAID, which allows quick swaps of faulty HDDs without data loss. An hourly full backup of the database tables also runs, and it kills the server every time it does.
The ISP has suggested installing a third server to run as a slave to the existing DB server, so that it always holds a duplicate of the live database.
I have a feeling, however, that this is basically just RAID mirroring on a different machine - to guard against a dodgy SQL statement wiping out ALL copies of the live database, we would STILL need the hourly backups, and hence would still see the big speed drop each hour when the backup runs.
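The replica is actually more useful than remote RAID precisely because of the backup problem: the hourly dump can run against the slave instead of the master, so the live server never feels it. And the master's binary log guards against the "dodgy SQL statement" case, since the data can be replayed up to (but not including) the moment the bad statement ran. A my.cnf sketch for the proposed replica (server IDs and paths are placeholders):

```ini
# Replica my.cnf fragment (placeholders; adjust to your install)
[mysqld]
server-id  = 2
relay-log  = /var/lib/mysql/relay-bin
read_only  = 1

# On the MASTER, binary logging must be enabled for both replication
# and point-in-time recovery:
# server-id = 1
# log-bin   = /var/lib/mysql/mysql-bin
```

The hourly backup then becomes a `mysqldump` run on the replica (with `--single-transaction` for InnoDB tables, so it doesn't lock them), leaving the master untouched.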
I am currently hosting my website on a single server with the following specs:
a 2.8 GHz dual quad-core processor, 8 GB of RAM, two 500 GB hard drives, and a 50 Mbps unmetered bandwidth package.
My current problem lies in high server loads and very slow server performance throughout the day.
I am considering migrating to The Planet, onto a server with these specs:
a 3.0 GHz dual quad-core processor, 18 GB of RAM, two 50 GB hard drives, and 2 TB of monthly bandwidth transfer.
To get both good bandwidth pricing and good server performance, I plan to downgrade my current server with my current host to a lower-end machine and keep it only to host my VIDEO and MUSIC files on the 50 Mbps unmetered package. The Planet would then host my database and all other web-related files on the new server.
Is this a good way to save money on bandwidth costs while eliminating my server lag issues?
I was offered a separate web and database server setup at my current host, but from what I have read, nobody touches the performance and reliability The Planet has to offer.
I have a Windows 2000 Advanced Server with a performance issue on some of the .asp pages that retrieve data from Access databases (I know Access isn't ideal for this). These pages just get stuck/freeze, and then either suddenly spring back to life or give a script timeout error (ASP 0113).
The largest Access database I've seen is 136 MB (is that far too large?).
I will probably move some of the large Access databases onto a different server, but before I do:
- Are there any tools you can recommend to diagnose exactly which files/databases are causing the problem? I don't think the Windows 2000 performance-monitoring tools even work.
- Can anyone explain more of the technicalities behind this issue? I expect it has something to do with processes, threads, memory, the Access drivers being loaded into memory, and so on. Can anyone tell me what they know, to put me in the picture?
It took me one year to develop this disk cache tool, which can dramatically boost your host's performance and save your hard disk. It is like SuperCache, but cheaper and faster.
Check the picture to see what it can do.
I will offer a free download of the tool to the first 10 people who want to test it. If you host a high-traffic website, don't hesitate to try it. I have already been testing it for half a year, and it is time to publish it. PM me or post here to get the free download.
I am looking for setup and configuration instructions for a mail server on a Fedora Core 6 server. I Googled it, and most of the links describe steps taken while installing the OS, but I need to configure a mail server on a machine where my site is already running.
I'm a Java software developer, and it happens that I need to configure a dedicated server running Windows 2003.
It's already up and running with a static IP address; however, the host also offers a "Static IP Address" add-on, which doesn't really make sense, since I already have one?
The other thing is the domain name. The current domain is registered with a different company, and we don't really want to transfer it away from them. Can I get away with pointing the domain at this server and changing the MX records in the domain control panel?
The last thing is an email server. What do I need in order to set one up? I have a domain name with a few POP3 email addresses, and I thought about using hMailServer for Windows. What else do I need to do or pay for?
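Pointing an externally registered domain at the server is normally just DNS: an A record for the host and an MX record for mail, edited in the registrar's control panel, with no transfer needed. A zone-file sketch (the names and IP below are placeholders, not your actual values):

```
; Placeholders throughout; TTLs omitted for brevity
example.com.       IN  A    203.0.113.10
example.com.       IN  MX   10 mail.example.com.
mail.example.com.  IN  A    203.0.113.10
```

For hMailServer you would also want the SMTP and POP3 ports (25 and 110) reachable through the Windows firewall, and an SPF TXT record for the domain helps outgoing mail get delivered.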
I would like to set up a localhost server with a mail server too. I usually use Uniform Server, as I find it very easy to get up and running and to carry around with me, but I have found it difficult to get a mail server running on it. Are there any better solutions, or perhaps distributions that come with a mail server built in?