It's becoming more and more difficult to manage VPSes using HyperVM, and I am personally sick of the control panel design.
So, does anyone have any good ideas for moving all the VPSes in HyperVM (OVZ) to Virtuozzo (PIM)?
If not, I think I'll have to move all the data across the orthodox way, which I clearly want to avoid.
PS: The OVZ box is on a different node from the VZ one (duh!), but it will also be using different IP ranges. So I think I need advice on changing the IPs of the VEs too.
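For what it's worth, the re-IP part of this is fairly mechanical on the OpenVZ side. Below is a minimal sketch that edits the IP_ADDRESS= line in a container's config offline; the file, VEID, and addresses are made up for illustration, and on a live node the supported route is `vzctl set <VEID> --ipdel <old> --ipadd <new> --save`.

```shell
# Sketch: re-IP a stopped container by editing its config file.
# The conf file created here stands in for /etc/vz/conf/<VEID>.conf;
# on a live node use: vzctl set <VEID> --ipdel <old> --ipadd <new> --save
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
HOSTNAME="vps110"
IP_ADDRESS="192.0.2.10"
EOF
OLD=192.0.2.10
NEW=198.51.100.20
# Rewrite only the IP_ADDRESS= line, leaving the rest of the config alone.
sed -i "s/^IP_ADDRESS=\"$OLD\"/IP_ADDRESS=\"$NEW\"/" "$CONF"
RESULT=$(grep '^IP_ADDRESS=' "$CONF")
echo "$RESULT"
```

Whatever method you use, remember that HyperVM keeps its own record of assigned IPs, so the panel's IP pool has to be updated to match or it will drift out of sync with the node.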
As a provider of Virtual Private Servers, I'm looking at ways to expand our business. I love the fact that we can offer our clients such a good service at the low price we presently charge, but as a "client" or "potential client", what would your views be on Virtuozzo?
Is it worth our business cutting into our already low profits and going with Virtuozzo as our VPS control panel?
This is something we are very interested in doing, and we feel it would be a big jump from the very low-end budget HyperVM/OpenVZ approach.
Your views on Virtuozzo vs. HyperVM/OpenVZ would be very helpful.
Would you rather buy a low-priced Virtuozzo VPS or an even lower-priced HyperVM-powered VPS? I'm quite lost as to whether the financial investment would be worth it. I don't see how it wouldn't be, as HyperVM is very buggy and really doesn't give a full sense of security, in my personal opinion.
Does anyone know how to move an entire HyperVM (OpenVZ) node to Virtuozzo? What has to be copied? We tried copying /vz/private/<VEID> and /etc/sysconfig/vz-scripts/<VEID>, but we were not able to get the VE to start. We were getting this error:
Starting Container ... Can't mount: /vz/template:/vz/private/1110 /vz/root/1110: No such file or directory Container start failed
We tried rebooting the server, as some other sites mention, but still no luck. Any help is greatly appreciated, because when we contacted Parallels they did not seem to have an answer. (We are still waiting on a ticket reply, though.)
I am trying to create some VPSes using OpenVZ, but after creating a VPS with the basic CentOS template (304 MB), I can't ping the IP and I can't view the CentOS welcome page in my browser (using the IP, not a domain, so I don't need to change anything there yet). Most importantly, I can't connect to the VPS using SSH.
I have asked many knowledgeable users about my problem but haven't found a solution yet.
I asked my DC about the IPs, and they told me that the IPs are routed to my server.
When you create a VPS container, you can SSH to that IP and log in with your username and password, correct? So what is the problem when I successfully create a VPS and then can't SSH into it? OK, here we go. I want to create a second VPS and use it as my DNS server. I entered a set of IPs into HyperVM's IP pool. First IP: x.x.x.178; Last IP: x.x.x.182; Resolv entries (space-separated): (blank); Gateway (IP): x.x.x.177; Netmask: 255.255.255.248.
Then I created a VPS resource plan, and then I created a VPS, which took the first IP from above. I want to connect over SSH and set it up as my DNS server, but the IP is not responding; it isn't even live. So, what is the problem here? Where did I go wrong? Just to let you know, I successfully created one VPS before this, and it works without any hiccups; it even has WHM/cPanel installed as well. What I suspect is that my DC pulled the IPs from me and assigned them to another server. Waiting for their reply on this, though.
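While you wait on the DC, a few standard checks on the node can narrow down whether the problem is inside the container or upstream. These are the usual OpenVZ commands; the VEID and address are placeholders for your values:

```shell
# First checks for an unreachable venet container (run on the node).
vzlist -a                          # is the container actually running?
vzctl exec 102 ifconfig            # did the IP get assigned inside the VE?
ping -c 3 x.x.x.178                # does the node itself reach the IP?
cat /proc/sys/net/ipv4/ip_forward  # must print 1 for venet routing
```

If the node itself can ping the VE but the outside world can't, that points at the DC's routing (your suspicion about the pulled IPs); if even the node can't, the problem is local.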
I need to move a slave server from one master to another, but when running "import HyperVM VPS" I get the classic error:
Quote:
Alert: The vpsid 470 : localhost exists on another server.
Please confirm exactly how to change the VPS ID for an OpenVZ VM, as I've gone through all the LxLabs posts dealing with this subject and none of the proposed solutions work correctly.
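The fix that is usually reported for this (I can't vouch for every HyperVM version) is a manual renumber: stop the container, rename its private area and its config file to the new VEID, then start it. Here is a runnable demo of those moves against a scratch directory layout, with the real paths noted in the comments:

```shell
# Demo of the manual VEID renumber on a scratch layout. On a real node
# the paths are /vz/private/<VEID> and /etc/sysconfig/vz-scripts/<VEID>.conf
# (or /etc/vz/conf/<VEID>.conf), and you must vzctl stop the container first.
ROOT=$(mktemp -d)
VZ_PRIVATE=$ROOT/vz/private
VZ_CONF=$ROOT/etc/sysconfig/vz-scripts
OLD=470
NEW=9470
mkdir -p "$VZ_PRIVATE/$OLD" "$VZ_CONF"
touch "$VZ_CONF/$OLD.conf"
# The actual renumber: move the private area and the config file.
mv "$VZ_PRIVATE/$OLD" "$VZ_PRIVATE/$NEW"
mv "$VZ_CONF/$OLD.conf" "$VZ_CONF/$NEW.conf"
ls "$VZ_PRIVATE"
# vzctl start $NEW   # on the real node
```

If the config hard-codes VE_PRIVATE or VE_ROOT paths containing the old ID, update those lines too; and note that HyperVM's own database may still reference the old ID, which is presumably what the importer is complaining about.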
Because Virtuozzo is a paid product, I think it may be easier to manage for both admins and users. Xen has its own features compared with Virtuozzo, but I feel HyperVM is not very friendly to use (just my personal feeling).
I would like to know which clients would prefer Virtuozzo over HyperVM. Let's say there is a company offering Virtuozzo on their servers, while another company offers HyperVM for a cheaper price. Neither company is overselling. Would you be willing to pay the extra money for Virtuozzo, or would you be happy with OpenVZ and HyperVM?
I have used HyperVM for over a year and can say it's my favorite. When I jumped to futurehost, I did not realize they used Virtuozzo until I logged in for the first time.
My first reaction was disappointment, but the reality is it's not that bad. I am hoping someone can comment on one thing, though.
On HyperVM, if I loaded it with several sites that were under load, I would typically use between 480 MB and 700 MB at any given time (Apache, not lighttpd, with LxAdmin). On Virtuozzo, with 4 somewhat heavy sites loaded on the VPS, I am using 88 MB of RAM out of 1.5 GB, once again with Apache and LxAdmin. If these 4 sites were on HyperVM with LxAdmin, I know I would idle at around 350 MB and run at roughly 650-700 MB under decent load.
The sites seem to run well, but I just can't figure out why HyperVM would report up to 7.5 times the resource usage.
Since OpenVZ offers VMs both base RAM and burstable RAM, I still check manually how much RAM remains available to assign to VMs.
Is there a way to list the total amount of base RAM and the total amount of burstable RAM that have been assigned, so that you know how many more VMs you can create/host on a server?
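I don't know of a panel view for this, but the numbers are all in the per-container configs, so a one-liner can total them. Sketch below, using made-up sample confs in place of /etc/vz/conf/*.conf; VMGUARPAGES roughly corresponds to guaranteed (base) memory and PRIVVMPAGES to burstable, both counted in 4 KB pages as barrier:limit pairs.

```shell
# Total guaranteed (VMGUARPAGES barrier) and burstable (PRIVVMPAGES
# limit) memory across container configs. The two sample confs below
# stand in for /etc/vz/conf/*.conf on a real node.
DIR=$(mktemp -d)
printf 'VMGUARPAGES="65536:65536"\nPRIVVMPAGES="131072:131072"\n' > "$DIR/101.conf"
printf 'VMGUARPAGES="32768:32768"\nPRIVVMPAGES="65536:65536"\n'   > "$DIR/102.conf"
OUT=$(awk -F'[":]' '
  /^VMGUARPAGES/ { guar  += $2 }   # barrier = guaranteed pages
  /^PRIVVMPAGES/ { burst += $3 }   # limit   = burstable pages
  END { printf "guaranteed: %d MB  burstable: %d MB",
               guar * 4 / 1024, burst * 4 / 1024 }
' "$DIR"/*.conf)
echo "$OUT"
```

Subtract those totals from the node's physical RAM (minus whatever you reserve for the host) to see the remaining headroom; whether you budget against barriers or limits depends on how far you are willing to oversell burst.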
I had my box reinstalled with the same OS as before, CentOS x64. I am creating a NEW VPS and assigning it an IP from my extended network (not the primary one), and the IPs are not pingable, as if GATEWAY= were missing from the ifcfg config in network-scripts.
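If it really is the missing gateway, the fix is just restoring the routing info on the host. An illustrative ifcfg fragment (the addresses are placeholders; with venet containers the host itself must be able to route the extended block):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values only)
DEVICE=eth0
BOOTPROTO=static
IPADDR=203.0.113.5
NETMASK=255.255.255.0
GATEWAY=203.0.113.1
ONBOOT=yes
```

After editing, restart networking and re-test the ping from outside. If the extended range is routed to the host's primary IP by the DC, no extra gateway entry may be needed at all, and the problem would lie with the DC's routing instead.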
In the last two weeks I have noticed a major issue concerning memory usage. HyperVM and top (via console) report two very different amounts of memory in use. On a fresh rebuild, my overall usage should be no more than 22 MB. However, HyperVM reports 45 MB, whereas top reports 11 MB. Notice the huge gap?
I was told by my VPS host that OpenVZ/HyperVM is to blame. The overwhelming issue is: if I pay for 256 MB of RAM and I'm being shortchanged, then I'm obviously overpaying. What's more, how can I tell whether or not I'm being shortchanged?
Has anyone run across these problems in the last two weeks?
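One explanation worth checking (an assumption on my part, not something HyperVM documents clearly): the panel may be reporting privvmpages "held" from /proc/user_beancounters, which counts allocated address space in 4 KB pages, while top shows resident memory, so the two will rarely agree and neither is a straight "RAM used" figure. A sketch using a sample beancounter line in place of the real file:

```shell
# Convert the privvmpages "held" column of /proc/user_beancounters
# to MB. The sample data below stands in for the real file, which is
# readable on the node and (for your own VE) inside the container.
BC=$(mktemp)
cat > "$BC" <<'EOF'
   uid  resource       held  maxheld  barrier   limit  failcnt
   110: privvmpages   11520    18000    65536   69632        0
EOF
OUT=$(awk '/privvmpages/ { printf "allocated: %d MB", $3 * 4 / 1024 }' "$BC")
echo "$OUT"
```

With 4 KB pages, a held value of 11520 works out to exactly 45 MB, which would match the panel's number and suggest an accounting difference rather than anyone being shortchanged.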
This poll is purely for market research, but I figure other VPS providers might get some info out of it. It's a simple question for customers more than hosts.
Do you prefer OpenVZ, Virtuozzo, or Xen, and why?
I personally know which I prefer from a provider's point of view, but I wondered about clients.
Assume the hardware is equal. OpenVZ/Xen have a custom control panel (HyperVM or similar); Virtuozzo has VZPP.
My company currently has a spam filtering problem with MailScanner, and the Windows team was given a project to come up with a better solution to fight spam.
I work in the Unix dept. I suggested ASSP to the Windows admins; I use it personally and it works great. I gave them the specs on my setup. Since the current front-end proxy is on RHEL, we all settled on trying out ASSP on an Ubuntu-based server.
We scrounged up an older Dell PE 2850 we can use. I finally convinced the company to deploy OpenVZ; this will be our first public-facing OpenVZ server.
The Dell has two Intel 82541GI Gigabit NICs. We are doing VLANs at the Cisco switch level; eth0 will be on the internal 10.0 network and eth1 on a public port.
I have already installed CentOS 5.2 plus HyperVM. I configured and brought up eth1 without TCP/IP, just at layer 2. It looks like OpenVZ is using eth0 right now.
For this new proxy, traffic will be routed through the host's eth1. What's the best way to go about this? The new proxy will use veth so it will have its own MAC (for security reasons; the network team said this is mandatory).
Should I use bridging? Or would simply routing guests through the VZ-configured eth1 work? Can anyone give me some ideas? I'm asking on WHT because a lot of hosting companies probably have this setup already. I'm just gathering ideas...
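Since the network team mandates a per-guest MAC, veth plus a bridge on eth1 is the usual shape for this. A command sketch (interface names and VEID are placeholders; double-check against your OpenVZ version's veth documentation):

```shell
# Bridge the public NIC with the container's veth end so the guest's
# own MAC appears on the wire. br1/eth1/VEID 101 are placeholders.
brctl addbr br1
brctl addif br1 eth1
ifconfig br1 up
vzctl set 101 --netif_add eth0 --save   # gives the VE a veth pair
vzctl start 101
brctl addif br1 veth101.0               # host end of the pair
```

Plain routed venet would be simpler, but it presents the host's MAC for every guest, which your network team's requirement rules out.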
Does anyone know off the top of their head how to fix the cPanel "Unlimited" quota problem on CentOS/OpenVZ/HyperVM?
This post is not related to Infinitie. I personally run a few VPS servers on CentOS 4.6 and the latest HyperVM; second-level quotas are enabled, but I still keep hitting the problem where I cannot get quotas.
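In case it helps anyone hitting the same thing: cPanel's quotas inside an OpenVZ VE depend on second-level (per-UID) disk quotas being enabled for that specific container, not just node-wide. The usual settings, as I understand them, look like this (the 3000 is an illustrative headroom figure for the number of UIDs/GIDs cPanel will create):

```shell
# Enable per-UID quotas for one container, then restart it.
vzctl set <VEID> --quotaugidlimit 3000 --save
vzctl restart <VEID>
# In /etc/vz/conf/<VEID>.conf this shows up as:
#   DISK_QUOTA="yes"
#   QUOTAUGIDLIMIT="3000"
```

If the panel's "secondary quotas" toggle isn't writing QUOTAUGIDLIMIT into the container's conf, that mismatch would explain the symptom.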
So a few days ago we had the wonderful experience of migrating Virtual Private Servers from HyperVM to Virtuozzo. After spending endless hours attempting to migrate with vzmigrate, vzp2v, and rsync, we were getting very frustrated and were just about to give up. At that point we decided to contact our datacenter, SoftLayer, who is usually able to help out. As always, they managed to rise to the challenge and save the day, providing us with a solution that may not have been the best, but was probably the only one.

We must have searched the entire Internet looking for guides, and once we found a solution we knew it had to be posted so that the frustration we went through would never have to happen again. While this solution is a little time-consuming and requires some work, it actually isn't as bad as it sounds. We have also created a little shell script that does a good amount of the migration for you. Below are step-by-step instructions, so that there will no longer be "no answer" to this question.

Just to let you know, we also contacted Parallels and unfortunately were told "I can not find anything in our knowledgebase". Basically useless support... they regurgitate their online knowledgebase at you. So below is the guide.
Pre-Requisites:
1. Download the Migration Kit zip that includes the shell scripts for the migration.
Migration Kit Download: [url]
2. You must create a new container in Virtuozzo for each HyperVM VPS that you would like to migrate. You MUST use the same VEID, and you need to keep the hostname the same. Also, the OS TEMPLATE you pick DOES NOT MATTER! If you are copying a custom HyperVM template, don't worry, because the OS template has no effect as far as I know. We transferred 30+ VPSes and the OS template made no difference.
3. Stop the container you created and mount it.
Code:
# vzctl stop <VEID>
# vzctl mount <VEID>

For LIVE MIGRATION, SKIP TO STEP 4b.
4a. Stop the VPS on the source server (HyperVM) and mount it.
Code:
# vzctl stop <VEID>
# vzctl mount <VEID>

4b. Leaving the VPS running while migrating carries a risk of possible database corruption. We did our migration this way and experienced no issues, so I think it is safe to say that problems can occur in rare cases, but usually do not.
5. Unzip the Migration Kit and cd into the folder where it was unzipped.
6. Execute the shell script.
Code:
./migrate.sh <IP-ADDRESS>

IP-ADDRESS = the IP address of the source VPS node.
7. Enter the CTID when the prompt requests it. (CTID = VEID)
8. Enter the root password for the server you are migrating from.
9. The script runs inside a screen session, so to back out of it to do other things or start another migration, hit:
Code:
CTRL + A, D

To list all the screens you have open:
Code:
# screen -ls

To enter a screen session:
Code:
# screen -r SESS-ID

SESS-ID = the number before the period when you list all the open screens.
10. Once the migration is completed, a broadcast message will alert you. Also, /var/log/migrate/migrate.log contains a record of all the migrations that have completed.
11. Once a migration has completed you must unmount the VPS and start it.
Code:
# vzctl umount <VEID>
# vzctl start <VEID>
If everything went OK, your VPS should start up without issues and be just like it was on the old server. Lastly, I would like to give credit to SoftLayer for the migration method. Thanks again, SoftLayer!
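For anyone who can't grab the kit, my understanding is that the heavy lifting in migrate.sh is essentially an rsync of the mounted private area between the two nodes, along the lines of the sketch below. This is a guess at the kit's core, not its actual contents; the kit itself is authoritative.

```shell
# Probable core of the migration: pull the source VE's private area
# onto the identically numbered Virtuozzo container. SOURCE and VEID
# are placeholders.
SOURCE=x.x.x.x
VEID=1110
rsync -avz --numeric-ids -e ssh \
    "root@$SOURCE:/vz/private/$VEID/" "/vz/private/$VEID/"
```

The --numeric-ids flag matters here, since UID/GID names inside the container won't exist on the destination host.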
We're in the process of setting up our new VPS server. We can create a VPS with 256 MB of memory and with 512 MB of memory fine, but when creating one with 1 GB of memory, we get the error:
Could Not Start Vps, Reason: Unable to fork: Cannot allocate memory: Not enough resources to start environment: Container start failed:
Even though the server has 4 GB of RAM and no other VPSes running. Any ideas? Thanks.
[Edit] We now seem to get the problem for all our VPSes. I think it may be something to do with the server not deallocating the memory, as we've provisioned and de-provisioned quite a few servers.
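The "Cannot allocate memory" fork error is usually a UBC limit rather than physical RAM, and the node keeps score in /proc/user_beancounters: any non-zero failcnt names the resource that ran out (privvmpages, kmemsize, numproc, ...). A sketch with sample data standing in for the real file:

```shell
# Print every beancounter whose failcnt is non-zero. The sample data
# below stands in for /proc/user_beancounters on the node.
BC=$(mktemp)
cat > "$BC" <<'EOF'
   uid  resource       held  maxheld  barrier    limit  failcnt
   120: kmemsize     912384   999424  2752512  2936012        0
   120: privvmpages   65536    65536    65536    69632       14
EOF
OUT=$(awk '$NF ~ /^[0-9]+$/ && $NF > 0 { print $2, "failcnt =", $NF }' "$BC")
echo "$OUT"
```

A counter that keeps climbing after repeated provision/deprovision cycles would match your "not unallocating" suspicion; if nothing looks exhausted, a restart of the vz service has been known to clear stuck accounting, though that is anecdotal.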
I have recently created a bunch of OS templates for HyperVM, as their current set was hugely outdated/unsuitable.
The images tagged modernadmin all include a preconfigured DenyHosts to prevent SSH brute-forcing of your customers' VPSes.
Available are the following for OpenVZ:
centos-5.2-i386-hostinabox-modernadmin.tar.gz (530,147.2 KB)
centos-5.2-i386-modernadmin.tar.gz (109,654.2 KB)
centos-5.2-x86_64-modernadmin.tar.gz (134,665.8 KB)
debian-4.0-i386-modernadmin.tar.gz (61,153.3 KB)
debian-4.0-x86_64-modernadmin.tar.gz (143,096.5 KB)
debian-5.0-i386-modernadmin.tar.gz (75,740.6 KB)
debian-5.0-x86_64-modernadmin.tar.gz (159,226.4 KB)
fedora-core-10-i386-modernadmin.tar.gz (165,429.6 KB)
fedora-core-10-x86_64-modernadmin.tar.gz (174,693.8 KB)
ubuntu-7.10-i386-modernadmin.tar.gz (76,415.5 KB)
ubuntu-7.10-x86_64-modernadmin.tar.gz (76,133.2 KB)
ubuntu-8.04-i386-modernadmin.tar.gz (70,725.7 KB)
...
Are there any calculators, white papers, or other guidelines that provide guidance for sizing a physical server to be used for OpenVZ or Virtuozzo for Linux?
i.e. if you want to run “x” VPS nodes totaling “y” disk space needing “z” number of processes, etc. you will need “n” CPU’s of at least ____ Ghz with ____ RAM, and ____ hard drive space of which ________ should be reserved for the operating system and openvz system software
Also, for OpenVZ or Virtuozzo for Linux, which RAID level provides the fastest performance without trading off too much hard drive reliability?
I'm not sure if this should go in this forum or not, so admins, feel free to move it to the appropriate place.
I am looking at moving to a new VPS host and was wondering about the best method of moving sites over with the least amount of downtime.
Is it better to set up the new VPS with new nameservers and then change the nameservers for each domain when ready to move the sites over, or should I keep the existing nameservers and just change the IP they point to when ready to switch?
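Either way, the downtime mostly comes from DNS caching, so the standard trick is to lower the TTL on the affected records a day or two before the switch, then change the A records (or nameservers) at cutover and raise the TTL back afterwards. An illustrative BIND-style zone fragment (the name and IP are placeholders):

```
; Illustrative zone fragment: lowered TTL ahead of the move.
$TTL 300                          ; 5 minutes instead of, say, 86400
www   IN  A   203.0.113.10       ; repoint to the new VPS IP at cutover
```

Changing the IP behind existing nameservers generally propagates faster than a registrar-level nameserver change, since the latter also depends on the TLD's glue records updating.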