I have been with SolarVPS for a few months now, and I'm pretty happy with their services. But a few months ago I had problems with very high CPU load on their server, every day for a week or so. Since then I'd had few problems, until the last two days. Now it's back to the same problem as last time: almost 100% CPU load on their server, causing my sites to run VERY slowly, and I'm getting timeouts on my MySQL database.
When I send tickets, they kill the process, but it's only a matter of time before the CPU skyrockets again. Yesterday they told me it was an abusive client and that they were taking care of it. But today I got the same problem.
Are they hiding the fact that they have oversold the node I'm on (the UK node), or do I just have bad luck? And how much should I take before I start looking for a new host? I think their service and support are very good, but not when I'm having problems with high CPU load. They reply fast to tickets, but I cannot monitor my server 24/7 to see whether the CPU load is high.
What would you have done if it were you with the same problem? Do I have to accept that I only pay around $55/mo for 384MB RAM / 25GB disk / 150GB bandwidth and 4 IPs (with Windows 2003 and Plesk, 30 domains) and cannot demand a VPS that runs without problems?
Anyway, that's my experience with SolarVPS after 3 months (and three days).
We have one Cloud Server hardware node that exists only as the backup location for containers and VMs in Cloud Server/PVA (it hosts no virtual servers). Can it also serve as a Backup Server Node?
A related question, in my case: can we configure a Backup Server Node to use a particular drive/mount/directory, as we can with PVA? I couldn't find any info in the docs about any settings at all for the Backup Server Node.
I want to build a VPS node using a quad-socket Tyan motherboard and the AMD Opteron 2376 Shanghai 2.3 GHz processor (quad core).
I am having doubts, because I notice that some providers (at least two rather big providers with whom I have accounts) are using the Dual Core AMD Opteron 2212.
Is there any specific reason why those providers are not using the 2376? Is it because the Dual Core Opteron 2212 supports virtualization technology while the 2376 does not?
I will use the Xen hypervisor. I usually use Intel 54xx processors but am looking for a *cheaper* solution, so I am really new to AMD-based servers.
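Incidentally, you don't have to guess from spec sheets whether a chip has hardware virtualization; on any Linux box you can check the CPU flags directly (AMD-V shows up as the "svm" flag, Intel VT-x as "vmx"):

# count how many cores advertise hardware virtualization
egrep -c '(svm|vmx)' /proc/cpuinfo
# a nonzero count means the CPU supports it
# (note: it can still be disabled in the BIOS)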
I build RAID 10 arrays for my VPS nodes. Now I'm going to use a SAN, so all VPSs will be created on the SAN. Am I right in assuming that the host server no longer needs a fast disk array, since all the disk requests are handled by the SAN anyway?
For a quad core processor with 16-32GB RAM and SAS RAID 10, would 30 VPSs be a lot?
I have a VPS with 1GB RAM on a machine like that. I'm running about 40 (relatively low-volume) websites on it. It seems to me that if 30 others were doing anything similar, that would overwhelm a single machine.
Am I just making an uninformed, bad assumption? So far, performance on my VPS has been fine.
Well, the age-old question for virtual servers: would you rather a host put all its eggs into one basket, a "monster node", or into several smaller ones? From a provider's standpoint, one server is easier to manage than several, although if that one goes down, all your customers go down with it. It also means lower costs for the provider, who can then pass the savings along. Example package: 2GB RAM, 500GB bandwidth, and 20GB space.
The "EXAMPLE" Specs.
All in one
Max Clients: 126 4U Rackmount, 4 Quad Cores, "16 total cores" 256GB DDR2, 8 600GB SAS 10k, RAID10 Several Server setup: Max Clients: 14 1U Rackmounts, Single Quad Core, "4 cores" 32GB DDR2, 2 300GB SAS 10k, RAID1
I have the main package of IPs, which I received from the datacenter.
I ordered another package of IPs (13 IPs).
I added them to the IP pools in HyperVM, and it isn't working.
Note: the netmasks are different for the first IPs and the second IPs.
When I create a VPS I can't log in, and when I ping its IP there is no reply (request timed out).
I have one server from SoftLayer and the other from LimeStone Networks. When I ordered additional IPs from SoftLayer, they worked properly; now that I've ordered these IPs from LimeStone Networks, I think they must need to be configured manually.
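One common cause, assuming the node uses OpenVZ/venet networking: some datacenters route an extra block to your main IP instead of putting it on the same VLAN, in which case the node has to answer ARP for those addresses itself. A minimal sketch (assuming eth0 is the public interface; adjust to yours):

# let the node answer ARP for routed IPs handed to containers
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
# persist it across reboots
echo "net.ipv4.conf.eth0.proxy_arp = 1" >> /etc/sysctl.conf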
I have a node with 16 AMD CPU cores, 64GB RAM, and 15k SAS HDDs in RAID 10, running HyperVM + OpenVZ, and I host 10 VPSs on it. My problem: when any VPS's load goes over 2, the node's load goes to 30-40, sometimes 100. I have set CPU units, number of CPUs, and CPU usage for every VPS, but the node load still goes up.
How can I set limits on a VPS and prevent it from affecting the node's load?
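For what it's worth, the same OpenVZ limits can be set straight from the node with vzctl, which makes it easy to verify what HyperVM actually applied. A minimal sketch, assuming a hypothetical container ID 101 that you want to cap at 25% of one core:

# relative weight, hard ceiling (% of one core), and visible CPUs
vzctl set 101 --cpuunits 1000 --cpulimit 25 --cpus 2 --save
# verify what the container currently has
vzlist -o ctid,cpuunits,cpulimit,cpus 101

If the load spikes are disk-bound rather than CPU-bound, CPU limits won't help; vzctl's --ioprio setting (0-7) is worth a look in that case.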
GeekLayer is looking into expanding into the UK. We want to offer VPSs in the UK but, to be honest, I have no idea who people consider the SoftLayer of the UK hosting industry. Who has the best rep around WHT, with reasonable(ish) pricing, for something like a Core 2 Quad with 8GB RAM and 2TB+ bandwidth?
I figure that while I'm twiddling my thumbs here waiting for my host to tell me what the heck happened (for the second time in two or three months) and why they have to do an entire hard restart of the VPS node, which of course causes another hour of delays, I'd ask some of the more skilled and experienced folks here: how does this happen?
Just before it happened, as I was watching, the load shot up: over 1, 2, 4, 20, 30, boom. (I opened a ticket at 4.)
Shouldn't Virtuozzo always guarantee a certain amount of CPU and bandwidth to the node root? Why do they have to hard reboot instead of accessing it directly and stopping the badly behaving VPS? Better yet, why isn't the badly behaving VPS stopped automatically by Virtuozzo?
(Oh, and am I an idiot for putting up with over two hours of downtime?)
I'm a VPS reseller, and I've now decided to run my own node and sell VPSs. I chose the VDSMANAGER control panel, because Virtuozzo is expensive.
Please help me choose the best options to run a good-quality node:
VDSManager or VEportal? (good options, security, support, ...)
Xen or OpenVZ? (Until now OpenVZ has been best for running static and dynamic sites, but on Xen you can run VPN, SHOUTcast, Windows, Linux, etc.) How much RAM? Which CPU? Which hard drives?
Can I use load balancing for a VPS node? How many VPSs can be run on such a node?
I have this VPS, which is pending cancellation in a few days. Hardware node = EL 5 x86_64, Vz = Xen.
I asked my provider for 32-bit CentOS, and I even rebuilt it, but my "uname --all" still shows:
Quote:
Linux xxx.xxx.com 2.6.18-128.1.10.el5xen #1 SMP Thu May 7 11:07:18 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
so I opened a ticket asking about the VPS arch. The provider told me that to run a 32-bit guest, what you have to do is:
yum clean all
echo i686-redhat-linux > /etc/rpm/platform
so that yum will exclude any x86_64 packages. The provider also told me that my "uname" displays x86_64 because the hardware node is 64-bit. Is this true?
I'm no Linux or VPS guru, but that doesn't feel right at all. It has caused me some issues, especially with kernel-headers, and some programs have failed to compile.
So, is it possible to run a 32-bit OS as a Xen guest inside a 64-bit node?
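For reference, the provider's claim is at least testable: uname reports the arch of the running kernel, not of the installed packages, so a 32-bit userland booted on a node-supplied 64-bit kernel will still say x86_64. util-linux's setarch/linux32 wrapper shows the difference:

# the running kernel's arch
uname -m                # prints x86_64 on a 64-bit kernel
# the same command under a 32-bit personality
linux32 uname -m        # prints i686

And as far as I know, a 64-bit Xen hypervisor can run fully 32-bit PV guests (including a 32-bit kernel), so a 32-bit userland on a 64-bit kernel is a provider choice rather than a hard limitation.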
I want to know if there is a set of tools, or a Linux distro, that I can use to create several virtual machines and make them use their allocated memory/CPU to the max, to see how much the VPS node will hold.
I can do this on Windows easily, since I overclock my PC at home, but in Linux I have no idea. I need something that will run iterations like Prime95 or SuperPI.
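One candidate, assuming you can install packages inside each VM: the stress utility (packaged in EPEL and most distro repos), which spins up CPU and memory workers much like Prime95 does:

# hammer 4 CPU cores and allocate 2 x 256MB of RAM for 10 minutes
stress --cpu 4 --vm 2 --vm-bytes 256M --timeout 600s
# meanwhile, watch load and memory pressure from the node
vmstat 5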
I had 3 IP pools on a HyperVM node. I removed (deleted) 2 pools completely, because I got an error when trying to create a VPS telling me the IP was being used by someone else. So I deleted the pools completely, and I see the IPs still ping. I did a tracert and the IPs lead to my server, so I know nobody else is using them and they weren't assigned to someone else.
I rebooted the server, and while it was rebooting I tried to ping the IPs and they didn't respond. When the node came back up, the IPs started pinging again. How can I manually remove the IPs so that my server doesn't respond to pings, and then add them back to the pool and use them correctly?
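If the IPs answer even after the pools are gone, they are probably still configured as aliases on the node's NIC. A minimal cleanup sketch, assuming a CentOS-style node, eth0 as the public interface, and 192.0.2.10/24 as a stand-in for one of your IPs:

# see every address the node currently answers for
ip addr show eth0
# drop a leftover alias at runtime
ip addr del 192.0.2.10/24 dev eth0
# and remove any persistent alias file so it doesn't return after a reboot
rm -f /etc/sysconfig/network-scripts/ifcfg-eth0:0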
Not sure if this is an issue on my side or on the entire node; I haven't seen any threads about it yet, so I thought I'd make one. It's been about 4 hours now and my ticket hasn't even been updated. Just wondering what's going on...
Ryan's usually been on top of everything whenever something happened, so I'm going to wait a few more hours before calling them.
I mistakenly ran the ELS script inside the NODE as root, instead of inside the VPS container I was intending to run it in, and everything basically stopped working, even after a hard reboot.
No VPSs load, no sites work. The vzagent is not pinging, so I cannot connect via VZMC.
Quote:
The following error(s) were detected:
Code: 1
Description: Most likely your service Virtual Private Server is down or you have entered an invalid host address.
I stopped the ELS script at the mytop install yes/no step. I selected yes for all previous install options excluding APF and BFD. I said yes to sysctl.conf hardening, disabling register_globals, and chmodding dangerous files/folders (which probably caused the problem).
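A guess worth checking first, since the symptoms are network-wide: ELS-style sysctl hardening typically turns off IP forwarding, and an OpenVZ/Virtuozzo node needs it on for venet networking to reach the containers at all. A minimal check/fix sketch (an assumption about what ELS changed, not a confirmed diagnosis):

# see what the node currently has
sysctl net.ipv4.ip_forward
# re-enable forwarding at runtime
sysctl -w net.ipv4.ip_forward=1
# then set net.ipv4.ip_forward = 1 in /etc/sysctl.conf and reload
sysctl -p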
I have recently acquired a nice box (2x dual-core Opteron 275 2.2 GHz, 16GB RAM, 16x 500GB SATA in RAID 5/6), but I'm unsure whether it wouldn't be smarter to buy smaller machines (like an X2 5600+, 4GB RAM, 2x 400GB HDD) and put 15-20 customers on each.
Yes, I know the hard drives are more than oversized for VPS hosting.
The Opteron server will have very, very nice I/O performance, so customers will have a good experience working on it. To recoup the costs, it would of course be necessary to put no fewer than 125 VPSs on the one box, so I can imagine there'll be a bottleneck somewhere (CPU power, I'd guess?).
What is your opinion? Big boxes with high performance and loads of customers, or small boxes with not-so-many VPSs?
Who are the most reliable dedicated/node providers? I mean, there are the cheap ones like Wholesale Internet and Joe's DC, but have they proven to be reliable as well?
Our master node is unable to connect to the slave node to create, rebuild, or manage VPSs.
But we can access the slave via HyperVM -> Servers and run anything on it.
I checked the firewall, and I also stopped iptables.
But the master still cannot connect to the slave.
Where is the issue, and how can I resolve it?
Note: I can ping and SSH in both directions between the master and slave!
Please help me.
This is what I get when I run /script/upcp on the main node:
[root@server ~]# /script/upcp
Getting Version Info from the Server...
Connecting... Please wait....
hypervm is the latest version
Executing UpdateCleanup. This will take a long time. Please be patient
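Since plain ping and SSH work, the usual suspect is the specific port the HyperVM slave daemon listens on. A generic way to pin that down (process names vary by install, so adjust the grep; the port to test is whatever netstat reports, 8888 below is just a hypothetical example):

# on the slave: find which port(s) the HyperVM daemon is bound to
netstat -tlnp | grep -i lx
# on the master: check that the port found above is reachable
nc -zv slave.example.com 8888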
Is there any way to migrate a VPS from one HyperVM machine to another without the new machine being a slave of, or part of, the same master as the old node? Maybe some kind of simple software that can copy it over? We have root access to the new node, but not the old one. We need to move 3 VPSs over.
This is a personal favor for a friend, so I apologize for the lack of specific details. Long story short, he resells VPSs and needs to move hosts (again), as the one he is on turned crappy. He bought his own box at LSN and wants to put these VPSs on it. It's currently set up as a HyperVM/OpenVZ box, same as the other box that's to be migrated *from*.
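Without root on the old node, one low-tech path is to archive each VPS from inside (assuming your friend at least has root inside his containers) and unpack it into a freshly created container on the new node, since an OpenVZ container's root filesystem lives directly under /vz/private/CTID. A rough sketch with plenty of caveats (the container ID 101 and template name are hypothetical; IPs and hostname still need setting afterwards):

# inside the old VPS: archive everything except virtual filesystems and the archive itself
tar czf /root/vps-backup.tar.gz --exclude=/root/vps-backup.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev /
# copy the tarball to the new node, then there:
vzctl create 101 --ostemplate centos-5-x86
tar xzf vps-backup.tar.gz -C /vz/private/101/
vzctl set 101 --ipadd 192.0.2.10 --hostname example.com --save
vzctl start 101

HyperVM won't know about containers created this way; importing them into the panel is a separate step.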
We are trying to add a new service node, but when we try to finish the procedure we receive the error: The usage of the resource "IP addresses" for subscription with ID "1" exceeds the value of "0".
We do not have a subscription with ID "1". As far as I can see, this is the ID in the database for the "Parallels Automation License Key", but that has an unlimited license key and already has two nodes attached.
Are there any special practices for rebooting a node in PPA?
So far a simple "shutdown -r now" hasn't broken anything, but I want to make sure PPA doesn't make assumptions about the nodes being up at any point (both management and service)...
I've been having trouble with my VPS for a while now. On the QoS alerts page in Virtuozzo, it seems to be a problem with numtcpsock and tcprcvbuf, mainly numtcpsock.
Copy this into the browser: i18.photobucket.com/albums/b106/gnatfish/qosnumtcpsock2.jpg
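For anyone watching this from inside the container, the same counters are visible without the Virtuozzo GUI; /proc/user_beancounters is standard on OpenVZ/Virtuozzo guests:

# inside the VPS: a nonzero "failcnt" on the numtcpsock or tcprcvbuf rows
# means that limit has actually been hit
cat /proc/user_beancounters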
I have recently picked up a CentOS 5 server running on an OpenVZ box. Going through various guides, I have repeatedly seen the importance of securing the /tmp partition. However, I am running into trouble when I try to follow the usual commands [1][2].
For example:
# mount -o nosuid,noexec /media/tmpFS /tmp
mount: /media/tmpFS is not a block device (maybe try `-o loop'?)
If I check for the presence of loop devices, they are missing:
# ls -ltr /dev/loop*
ls: /dev/loop*: No such file or directory
If I try to create one using /sbin/MAKEDEV loop and re-execute the mount command, I get a new error:
mount: no permission to look at loop
The nearest fix I have found so far is this thread [3], which suggests using:
mount -t tmpfs tmpfs /tmp
I believe the above will not persist across a reboot, which defeats the purpose.
Can you advise on how to mount /tmp in noexec,nosuid mode within the VPS environment?
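A sketch of one way out, based on two points I'm fairly confident of: OpenVZ containers don't get loop devices by default (the host kernel doesn't expose them), so the loopback route is a dead end inside the VPS; and a tmpfs mount does persist if you put it in /etc/fstab rather than running it by hand:

# /etc/fstab entry (the size is an illustrative choice, adjust to taste)
tmpfs   /tmp    tmpfs   nosuid,noexec,nodev,size=512m   0 0

# apply it now without a reboot
mount /tmp
# confirm the flags took effect
mount | grep /tmp

The trade-off is that tmpfs lives in memory, which counts against the container's allowance.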
I have a lot of experience with VPSs and have recently been working with dedicated servers, but my partner and I are going to be providing VPSs, and my main concern is securing the node the VPSs will be on. Would I secure it like a normal dedicated server?
I'm worried that if I secured it the way I would my dedicated servers, it would affect the VPS clients hosted on it. Any assistance is appreciated, even if it's just a recommendation for a management company or an individual who could assist us.