I have purchased several VPSes from a provider and found that they do not provide swap space; even with 256MB of RAM, I get 'out of memory' errors trying to compile a Perl library... Creating a swap file myself doesn't work ("operation is not permitted").
The hosting provider runs HyperVM.
So the question: is this common, or is it a misconfiguration? So far I have gotten only a 'checking on this for you' and three days of silence from their support.
I'm not naming the hosting provider, but I will if they say "you must pay for more RAM", simply because five other VPS providers give my VPS servers swap space.
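For reference, the usual commands for adding a swap file are sketched below; if the VPS is an OpenVZ-style container (HyperVM manages both OpenVZ and Xen), the swapon step is typically what fails with "Operation not permitted", since containers are not allowed to manage their own swap. The file path is just a placeholder.

dd if=/dev/zero of=/swapfile bs=1M count=256   # create a 256MB file
mkswap /swapfile                               # format it as swap space
swapon /swapfile                               # this is the step a container usually refuses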
I'm running out of disk space on one of my sites, but I have more than enough data transfer.
Unfortunately my hosting package is put together oddly, so it doesn't provide enough HD space. I'm trying to figure out whether there is a way to use another server or hosting company that provides space only, and use their servers just for storage.
I think Amazon offered something like this, but I wasn't sure exactly how it works.
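If the other box only needs to expose raw space, one hedged option is mounting a directory from that server over SSH with sshfs; the host name and paths below are placeholders, and the Amazon offering referred to here is presumably S3, which needs its own tooling rather than a plain mount.

# mount a directory from the storage server over SSH (requires FUSE and sshfs)
sshfs storageuser@storage.example.com:/data /mnt/remote-space

# unmount when finished
fusermount -u /mnt/remote-space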
What would cause a Linux server to run out of swap? Would it be a memory leak? This happened today to my server and it had to be forcefully rebooted by the data center.
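Running out of swap usually means something kept allocating until both RAM and swap were exhausted. After the reboot, the kernel log may show whether the OOM killer fired and which process it targeted; this assumes a standard syslog setup writing to /var/log/messages.

# look for OOM-killer activity and the processes it killed
grep -i 'out of memory' /var/log/messages*
grep -i 'oom' /var/log/messages*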
Opcode caches have stability problems with some PHP apps:
[url]
I've been trying to get XCache working with Gallery2. Right now Apache has to restart every 2-4 hours, but it's been worse at times, and it'd be nice to know what's going on.
I noticed that both XCache and eAccelerator use a small amount of swap space all the time; it grows slowly, and if I'm lucky enough to go without a segfault it will reach maybe 4MB. That's not a lot, but it doesn't happen at all without a PHP opcode cache (swap stays locked at 72KB unless RAM runs out and the server thrashes).
Here's the error and my settings:
[Tue Feb 19 06:19:55 2008] [error] PHP Warning: Invalid argument supplied for foreach() in /home/cityv4/public_html/gallery/modules/customfield/classes/CustomFieldHelper.class on line 233
[Tue Feb 19 06:19:55 2008] [error] PHP Warning: Invalid argument supplied for foreach() in /home/cityv4/public_html/gallery/modules/customfield/classes/CustomFieldHelper.class on line 233
[Tue Feb 19 06:19:56 2008] [notice] child pid 31028 exit signal Segmentation fault (11)
[Tue Feb 19 06:19:56 2008] [notice] child pid 10638 exit signal Segmentation fault (11)
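One way to confirm whether the opcode cache itself is triggering the segfaults is to leave the extension loaded but switch caching off for a while; a minimal sketch, assuming XCache's stock ini directive name, followed by an Apache restart.

; in php.ini (or the XCache ini file), then restart Apache
xcache.cacher = Off    ; keep the extension loaded but stop caching opcodes

If the segfaults stop with the cacher off, the problem is in the cache layer rather than in Gallery2's own code.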
I have talked with the moderators and they have agreed that I can start a new thread to clarify the issue of Xen and swapping, as long as the discussion remains technical.
For people who are curious, I would first like to explain why this is important. What we have here is someone making a specific technical accusation against Xen; if it is indeed a real problem, it either needs to be solved or people need to know about it before they get into Xen.
Claim Number 1: The original claim is that users can create arbitrarily large swap and this can lead to the equivalent of overselling.
Fact: Arbitrarily large swap has absolutely zero effect on a normal system, since Linux treats swap as auxiliary storage and will not use it unnecessarily. Linux will always use RAM to the full, but swap is touched only under memory pressure, after caches and buffers have been reclaimed. If you run free on dedicated hardware, you will see that swap usage is zero most of the time, even if you have assigned a very large swap to the system. In fact, you can try this by simply increasing swap to a huge value; you will see that Linux ignores it completely.
For large swap to cause a problem, the user not only has to assign a large swap, he has to run a workload far larger than his limited RAM; since that workload cannot actually fit in memory, it will cripple his VPS long before it has any serious impact on the host.
I think the above has been agreed to by the person who made the initial claim.
Claim Number 2: Merely having swap can lead to the VPS being slowed down.
Fact: The use of swap does not mean thrashing.
Thrashing is a technical term: it means the application is taking a page fault that hits swap every few instructions. This is rare on normal systems because of locality of reference, a property of programs whereby, at any particular time, a certain portion of the program is being executed continuously while the other portions sit idle. Linux has algorithms that swap out the least recently used pages, which means the system will not run into many swap hits.
Now again, as in the earlier case, there will be trouble if the VPS customer tries to run a 1GB workload in 64MB of RAM. That will actually cripple his VPS, which is the right thing to happen. So normal usage of swap will not lead to significant disk I/O, since Linux has explicit algorithms to minimize swap usage.
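A quick way to see whether a system is merely holding pages in swap or is actually swapping continuously is to watch the swap-in/swap-out columns of vmstat; a minimal check:

# sample memory and swap activity once per second, ten samples
vmstat 1 10
# 'si' and 'so' are KB/s swapped in and out; a non-zero 'swpd' total with
# si/so stuck at 0 means pages are parked in swap but nothing is thrashing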
So for minor over-usage of memory:
For Virtuozzo: the application will crash.
For Xen: there will be a very minor degradation of quality. Sometimes it won't have any effect at all, since, as I said, just because the swap space is non-empty doesn't mean that Linux is constantly swapping out.
So summarizing:
a) Inordinately large swap has zero effect on Linux
b) Non-empty swap doesn't mean that the system is registering swap hits. For normal workloads, the swap hits will be very minimal.
One extra piece of advice: if you are using Xen, don't ever use snapshotting, as it will double the disk I/O, and the worst part is that the VPS causing it will not even be penalized. The overhead will be borne entirely by the host system.
I've come across a Xen-based plan that offers an additional 256MB of swap space, and a few other Virtuozzo-based plans that offer burst RAM (some even up to 8GB).
Which would be the better option?
Also, in Virtuozzo, will SLM allocation be significantly more beneficial than UBC for a small VPS?
Another question... DirectAdmin officially claims to be able to run on a minimum of 64MB of RAM. How would it perform on a 128MB VPS? I'm not really looking to do much, only to host a few (5-6) small sites along with some other non-webhosting applications (which is the reason for getting a VPS instead of a reseller account).
I've seen most people suggest at least 256MB for DirectAdmin, but that is beyond my budget. (I'm also seriously considering the option of employing vi as a control panel to further conserve my limited resources.)
I have a handful of machines running Rails on CentOS 5, each with 8GB of RAM. They're monitored with Nagios, so I get paged if we exceed 80% memory usage or 2% swap. FWIW, they're primarily dual dual-core Opterons, and we're using PAE.
The problem is that I keep getting paged in the middle of the night for situations where we're using 2-20% swap yet there are still several gigs of RAM free.
On some level, this makes sense -- if nothing is using a big chunk of RAM, we might as well move it out to disk. On the other hand, when there's still several gigs of RAM free, I don't see any point in bothering, especially when it causes me to get paged at 2am.
So my question is threefold:
Is there any easy way to see what's using swap? top can be made to show a swap column, but it's not what you'd think it is. (For example, most processes show as using lots of swap, and the sum vastly exceeds the actual size of our swap partition.) Can I get a list of what processes are actually swapped out to disk?
Is this a problem other people have? Are others just less aggressive about alerting on swap usage, or does this not happen to other people? Our setup is probably not too common (Opterons, PAE, and 8GB on an older CentOS release?), which makes me think there's a small chance a bona fide bug plays into this somehow.
Is this a case for playing with the swappiness sysctl to make the machine less obsessed with swapping things out to disk? And if so, is there any good documentation on what the 0-100 value actually means, beyond the one sentence explaining that 100 is most likely to swap and 0 is least? Does the current setting of 60 "mean" anything, for example? 98% of the stuff online is just a mirror of the e-mail thread in which Andrew Morton and others go back and forth about what setting is ideal, which doesn't help and is tiring to read over and over.
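On the first and third questions: per-process swap usage can be read out of /proc, though the VmSwap field only exists on kernels newer than what stock CentOS 5 ships, so the loop below is a sketch under that assumption (a tool like smem is another option). The swappiness value shown is an illustration, not a recommendation.

# list processes by swap usage, largest first (VmSwap needs a newer kernel)
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{print $2 " kB", n}' "$f"
done | sort -rn | head -20

# inspect and lower the kernel's eagerness to swap (the default is 60)
cat /proc/sys/vm/swappiness
sysctl -w vm.swappiness=10
echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # persist across reboots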
Is there any specific way (maybe logs) to see what is using the swap memory of a dedicated server?
It's a server with 8GB of RAM. It has 60% of memory used and a constant swap usage of 30%. I thought that under normal conditions swap was not used.
I have a problem with my server. It is a Xeon with 2GB of RAM; I have high swap usage, and when it reaches the 4GB size I have set, the machine goes into a kernel panic. These are the actual values:
Up to now we've been using CentOS with SCSI/SATA disks which weren't hot swap, and now we're upgrading to a Dell PowerEdge 1950 revision III with hot-swap SAS disks on a PERC 6/i (a new model of RAID controller from Dell).
OF COURSE, Dell ONLY supports Windows (and Red Hat at the very most in the Linux world), so we were told by a Microsoft tech that to be able to extract a disk and replace it with another, it had to be done via software. (The software powers the disk down and then you replace it.)
Does anyone use CentOS with hot swap SAS disks? Do you use any special software to monitor the disks and/or replace them?
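One hedged possibility: the PERC 6/i is based on LSI MegaRAID hardware, so LSI's MegaCli utility can usually query and manage it from within the OS; the install path and the enclosure:slot numbers below are typical placeholders, not guaranteed.

# show logical drive state (Optimal / Degraded)
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL

# show physical drives, their firmware state, and any media errors
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

# after swapping a failed drive, watch the rebuild progress
/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -ShowProg -PhysDrv [32:2] -aALL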
I have a second hard disk on my system; is it okay to put the swap files for the main drive on the second drive? The idea is to lower the disk access and stress on the main hard disk... I have 3GB of RAM, and from what I'm reading I should have 6GB of swap, so I'm creating six swap files of 1GB each.
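Swap files can live on any mounted disk. A minimal sketch for one of the six files, assuming the second disk is mounted at /mnt/disk2 (a placeholder path); the remaining files follow the same pattern, and equal pri= values let the kernel spread swap activity across them.

# create, format, and enable one 1GB swap file on the second disk
dd if=/dev/zero of=/mnt/disk2/swap1 bs=1M count=1024
chmod 600 /mnt/disk2/swap1
mkswap /mnt/disk2/swap1
swapon -p 5 /mnt/disk2/swap1

# /etc/fstab entry so it comes back after a reboot
/mnt/disk2/swap1  none  swap  sw,pri=5  0  0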
We're about to buy a Dell PowerEdge 1950 with hot-swap disks in a RAID 1 configuration (we might even consider other RAID combinations).
We will be installing CentOS 5 (never tried it; we normally use CentOS 4) plus a control panel.
The question is: what happens when a disk fails? How do we find out (apart from looking at the server)? Are there any software notifications?
Once noticed, what is the standard procedure to replace the disk? (Remember they are "hot swap") Do you just pull one out and replace it? Surely you have to rebuild the array...
Since I got my server it has run with 1GB of RAM and has kept at least 400MB free the whole time.
Now, within a few hours, all of it ends up held in buffers/cache; for the past week it has been showing about 15MB free (not counting buffers/cache) and has started dipping into disk swap by about 400KB.
So should I upgrade to 2GB now or wait until it goes deeper into swap, and if so, how far into swap would you let it get before upgrading?
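Memory held in buffers/cache is reclaimable, so the figure that matters is the '-/+ buffers/cache' free value rather than the raw 'free' column; a quick check (the output layout assumed here is the older procps free found on these systems):

free -m
# read the '-/+ buffers/cache:' row: its 'free' column is what is actually
# available to applications; the top 'Mem:' row counts buffers and page cache
# as 'used' even though the kernel gives them back on demand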
I have a server that I have had for a little while now that was running perfectly fine. All of a sudden it started using up a ton of memory and started using swap. This has caused the server to slow down and e-mail to stop working. I would upgrade to more memory, but I suspect it is someone on the server doing something they shouldn't be. My only reasoning for this is that the problem arose so suddenly. I cannot for the life of me determine where this RAM usage is coming from.
Here are a few of the errors I have gotten recently just while logging in:
RIGHT after logging in...
id: cannot find name for group ID 0
id: cannot find name for user ID 0
[root@www root]# top
top: Unknown terminal "xterm" in $TERM
[root@www root]# killall -9 spamd
bash: /usr/bin/killall: Too many open files in system
I have added 2 screenshots as well. The first one (top1) is what happens once I can get the command "top" to actually run (it normally takes quite a few tries). The second attachment (top2) is after I sorted by memory usage...
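The "Too many open files in system" error points at the system-wide file handle limit being exhausted, which often accompanies a runaway process. A few read-only checks that may help narrow down both the memory and the file-handle consumers:

# top memory consumers by resident size
ps aux --sort=-rss | head -15

# system-wide file handle usage: allocated, free, and the configured maximum
cat /proc/sys/fs/file-nr
cat /proc/sys/fs/file-max

# which processes hold the most open files (can be slow on a loaded box)
lsof | awk '{print $1, $2}' | sort | uniq -c | sort -rn | head -15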
I was getting high load on one of our test machines -- it was high enough that I couldn't SSH in. However, I happened to have a shell open at the time with top running. Load was very high and swap was full. Afterwards, when I checked /var/log/messages, it looks as if the load affected the time (see below).
1) At first I was guessing that the load somehow affected the clock, but you'd think it would slow it down -- yet apparently the pace of time was quicker?
2) Perhaps the time was wrong in the first place, and then upon reboot NTP synced it up properly.
3) The time was originally correct, but after the high load it somehow erroneously sped up the ticks.
Then again, from googling around, someone on another forum said that the computer's clock chip shouldn't ever be affected.
So my question isn't about what caused the load, but why there was a time discrepancy in /var/log/messages, and whether it was related to the load.
Jun 5 17:34:44 staging kernel: Out of Memory: Killed process 26522 (oracle).
Jun 5 17:41:45 staging sshd(pam_unix)[25223]: session closed for user gin
Jun 5 17:09:50 staging syslogd 1.4.1: restart.
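A couple of quick, read-only checks that may help distinguish a drifting system clock from an NTP step after the fact, assuming ntpd is running and logging to the standard syslog:

# compare the hardware clock against the system clock
hwclock --show
date

# check ntpd's view of offset and jitter against its peers
ntpq -p

# look for ntpd step/adjust messages around the timestamps above
grep ntpd /var/log/messages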