According to the documentation, Hyper-V VMs cannot boot from SCSI drives and require an IDE drive for each virtual machine. I'm new to Windows (Server 2008) and Hyper-V, and I am planning out some hardware.
Does anyone know if it is possible to:
Set up the server with 2 SATA drives (RAID 1), along with 8x Ultra320 SCSI drives (RAID 5 or 6).
Load the OS and set up all virtual slices on the SATA drives, so that the virtual boot sectors are on the IDE drives but the bulk of the clients' allotted space is on the SCSI array? Are there issues with that, and if so, how do you manage it?
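One note that may help: the IDE requirement refers to the VM's virtual IDE controller, not the physical disk type, so the boot VHDs can physically live on any array. As a quick sanity check on the proposed layout, here is the usable-capacity arithmetic in a short Python sketch; the drive sizes are placeholder assumptions, so substitute your own:

```python
# Back-of-the-envelope usable capacity for the proposed arrays.
# Disk sizes below are illustrative placeholders; substitute your own.

def raid1_usable(disk_gb):
    """RAID 1 mirrors everything: usable space equals one disk."""
    return disk_gb

def raid5_usable(disk_gb, n_disks):
    """RAID 5 loses one disk's worth of capacity to parity."""
    return disk_gb * (n_disks - 1)

def raid6_usable(disk_gb, n_disks):
    """RAID 6 loses two disks' worth of capacity to dual parity."""
    return disk_gb * (n_disks - 2)

sata_gb, scsi_gb = 500, 146  # assumed drive sizes

print(f"SATA RAID 1 (2 x {sata_gb}GB): {raid1_usable(sata_gb)}GB for OS + boot VHDs")
print(f"SCSI RAID 5 (8 x {scsi_gb}GB): {raid5_usable(scsi_gb, 8)}GB for data VHDs")
print(f"SCSI RAID 6 (8 x {scsi_gb}GB): {raid6_usable(scsi_gb, 8)}GB for data VHDs")
```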
I am currently searching for a new Windows 2008 VPS with Hyper-V. I noticed that most hosts offer "guaranteed RAM," which is great, but I found another host that charges an additional monthly fee to guarantee this RAM, even on Hyper-V. I am curious whether not getting this will affect the performance of my VPS.
We are looking at a 1GB RAM server, since we only host around six websites with very small traffic, and only one of those has database connectivity, but it still gets very low traffic. We will need to host DNS, IIS, and a mail server to start. So, is 1GB of RAM OK for this, and should we guarantee it?
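For a rough sense of whether 1GB is enough, here is a back-of-the-envelope memory budget in Python; every per-service figure is an illustrative assumption rather than a measurement:

```python
# Rough memory budget for a 1GB Windows 2008 VPS. Every per-service
# figure below is an illustrative assumption, not a measurement.
budget_mb = 1024
services_mb = {
    "Windows Server 2008 base": 512,
    "IIS, 6 low-traffic sites": 150,
    "DNS": 50,
    "Mail server": 150,
    "Database engine (if local)": 128,
}
used_mb = sum(services_mb.values())
for name, mb in services_mb.items():
    print(f"{name:28s} {mb:4d} MB")
print(f"{'Total':28s} {used_mb:4d} MB of {budget_mb} MB "
      f"({budget_mb - used_mb} MB headroom)")
```

With assumptions like these, the total lands within a few dozen MB of the 1GB ceiling, which is exactly the situation where a RAM guarantee (rather than burstable RAM) starts to matter.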
I want to provide some Windows VPSes, but I am not sure if Hyper-V is the best solution. I have several questions.
Q1. Is it possible to limit traffic or bandwidth for a Hyper-V Windows VPS? And is there any web GUI that can be provided to users to manage their VPS, e.g. to check the traffic they have used?
Q2. About Windows licensing: I heard that if I run a Windows Datacenter edition on the main node, then I do not need licenses for the VPSes. Does that mean that when I install Windows 2003 as a guest, it will no longer require us to enter the CD key?
I've just installed HyperVM using the download from the HyperVM site. A quick question I have: is there a way of getting more than 5 VPSes on the server? I can't seem to find it anywhere.
Anyone aware of some good Hyper-V hosting? I must say I'm really sick and tired of Virtuozzo. It's a pain in my butt! I'd even take some VMware or Xen hosting, just none of this fake virtualization stuff... there are way too many limits (e.g. I want to update my own kernel!).
Does anyone know if it is possible to monitor bandwidth for individual virtual environments within Hyper-V? I'm looking for an economical way of doing this, not through System Center. We're looking to provision a few Windows virtual environments over the next few weeks and want to see if there is an alternative to Parallels Virtuozzo.
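One economical approach, sketched under assumptions: Hyper-V publishes per-VM network counters to Perfmon, so polling them with typeperf and tallying bytes gives a crude per-VM bandwidth meter. The counter set name below is an assumption that varies by Hyper-V version, so check what your host actually exposes first:

```python
# Minimal sketch: tally per-VM network bytes from Windows performance
# counters via typeperf. The counter set name is an assumption and varies
# by Hyper-V version; list what your host exposes with:
#   typeperf -q | findstr /i "hyper-v"
import csv
import subprocess

COUNTER = r"\Hyper-V Virtual Network Adapter(*)\Bytes/sec"

# Sample once per second for 60 samples; typeperf emits CSV on stdout.
out = subprocess.run(
    ["typeperf", COUNTER, "-si", "1", "-sc", "60"],
    capture_output=True, text=True, check=True,
).stdout

rows = list(csv.reader(out.strip().splitlines()))
header = rows[0]  # column 0 is the timestamp, then one column per adapter
samples = [r for r in rows[1:] if len(r) == len(header)]

totals = [0.0] * (len(header) - 1)
for row in samples:
    for i, value in enumerate(row[1:]):
        if value.strip():
            totals[i] += float(value)  # bytes/sec over a 1s interval = bytes

for name, total in zip(header[1:], totals):
    print(f"{name}: {total / 1024 / 1024:.1f} MB in the sampled minute")
```

Logging those totals per VM on a schedule would give you rough per-customer usage without buying anything.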
With Virtuozzo, there is a panel to restart the VPS and view bandwidth, server resources, etc.
For Hyper-V, what is there for me, a customer of the service? I.e., hosts are telling me they don't have a control panel, so how could I restart the Hyper-V VM should the OS crash?
I'd really like to find a Hyper-V VPS provider (or a Xen/ESX provider), and I've been stunned thus far to see each provider charging more for Hyper-V than Virtuozzo (e.g. VPSland and Crystal Tech). Why does this surprise me? Well, Hyper-V is included with the OS, whereas Virtuozzo is an extra cost. You might say, "But Virtuozzo gets around having to have a separate license for each OS install, since it's actually just one OS." Actually, that's not true: Microsoft clarified their licensing position and said that each instance does need a license. I'm guessing most hosting providers know this... so why the price hike?
We have a few single-CPU (54xx quad-core) systems running Hyper-V, and looking at the Hyper-V Hypervisor Logical Processor (_Total) % Total Run Time value in Perfmon, it stays pretty much at 85% to 100% all day long. Performance is mostly OK, with an occasional hesitation, but the biggest reason we haven't upgraded is that we are trying to avoid doubling the cost of the SPLA license by adding a second CPU. Also, most motherboards we have only hold 16 to 24GB of memory, and after adding a second CPU, both would probably sit below 40% or 50% utilization.
Are there any problems with keeping a 54xx (or any CPU, for that matter) running flat out, as long as it's cooled OK?
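To put the trade-off in numbers, a tiny sketch; the SPLA fee is a made-up placeholder, and the utilization split simply assumes the same total load spread over twice the cores:

```python
# Trade-off sketch: a second CPU halves utilization but doubles the
# per-processor SPLA fee. The dollar figure is a placeholder, not a quote.
spla_per_cpu_month = 25.0   # assumed per-processor monthly SPLA fee
observed_util = 0.92        # roughly the 85-100% seen in Perfmon

for cpus in (1, 2):
    util = observed_util / cpus  # same total load spread over more cores
    cost = cpus * spla_per_cpu_month
    print(f"{cpus} CPU(s): ~{util:.0%} busy, ${cost:.2f}/month SPLA")
```

That halving is where the "less than 40% or 50%" estimate comes from: doubling the SPLA fee buys headroom you may rarely use.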
Without any fanfare, at the beginning of September, Parallels released Virtuozzo Containers (formerly Virtuozzo) 4.5.
Version 4, launched in January, unified the Windows and Linux branches for the first time, introducing major new features like virtual SMP masking and support for Microsoft and Red Hat cluster services.
Version 4.5, which is built on this new architecture, brings in a wide range of new capabilities:
Support for Windows Server 2008 (32/64-bit, with or without Hyper-V, up to Service Pack 1) and its new Failover Clustering
Support for Hyper-V (it's not exactly clear if this just means that the Hyper-V parent partition can be segmented into containers, or something else)
Support for TCP/IP Offload Engine (TOE) NICs inside the containers
Support for new third-party backup and antivirus solutions (including those provided by AVG, CA, EMC, IBM, McAfee, Symantec, and F-Secure)
Support for iSCSI inside the containers (a container can be an Initiator)
Support for IPv6 addresses inside the containers
It's not entirely clear why Parallels didn't promote what is still considered its flagship product in any way. It is true that the large majority of attention is focused on hardware virtualization, but the company's OS virtualization platform should still have a competitive advantage over the VMware, Citrix, and Microsoft hypervisors in the hosting industry, which is well worth some more marketing effort.
We are wondering why Parallels hasn't been shouting from the rooftops. This is a game changer.
I currently have a server (1x Xeon 5310, 4GB RAM, 4x 500GB HDDs in RAID 10) with Windows 2003. Due to a project, I'm now looking at installing Windows 2008 and upgrading to 2x 5310 and 16GB of RAM.
I'm looking to create a virtualized test environment for the development of a new web service I'm working on. What I'm looking to set up right now is 2 file servers, 3 web servers, 3 MS SQL database servers, and 1 DNS server (that is what I would prefer, but I'm not sure the hardware can handle it). Virtualization would be ideal, as this is very similar to what we believe we will have when we launch the service.
I have a few questions I'm hoping you might be able to answer:
1) With the upgraded hardware specs, should it be able to handle the load if I assign each virtual machine one core and 2GB of RAM? (See the back-of-the-envelope sketch after these questions.)
2) I would like to create each of the multiple servers as a cluster (i.e. a cluster of web servers), as this is how it will be in production. But I've never worked with clusters before, so:
a) where can I learn about clustering windows 2008 servers?
b) is this possible to do in a virtualized environment?
3) How does Microsoft handle the licensing? I want each server running Windows 2008 and 2-3 of them running SQL Server 2005.
a) Do they charge extra for each virtualized server?
b) Does this mean I have to purchase three complete copies of SQL Server, or is there a way I can pay a lower license fee for use in a non-commercial, non-production environment?
4) Does anyone see any problems with this setup or have any suggestions for me?
* I do have money available to spend on a good solution, so if you have suggestions that cost money, please let me know. I just thought virtualization would be the way to go, as the project will be in development for at least a year with no public access.
** I realize that Hyper-V hasn't been released yet (that I know of), so information on it might be limited.
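Here is the back-of-the-envelope sketch referenced in question 1, checking the nine proposed guests against 2x quad-core and 16GB; the 1GB parent-partition reserve is an assumption:

```python
# Back-of-the-envelope check for the proposed lab: 2 file + 3 web +
# 3 SQL + 1 DNS = 9 guests on 2x quad-core Xeon 5310 with 16GB RAM.
vms = {"file": 2, "web": 3, "sql": 3, "dns": 1}
cores_per_vm, ram_per_vm_gb = 1, 2
host_cores, host_ram_gb = 8, 16
parent_reserve_gb = 1  # assumed allowance for the parent partition

n_vms = sum(vms.values())
need_cores = n_vms * cores_per_vm
need_ram_gb = n_vms * ram_per_vm_gb
avail_ram_gb = host_ram_gb - parent_reserve_gb

print(f"{n_vms} VMs want {need_cores} cores; host has {host_cores} "
      f"(virtual CPUs can be oversubscribed)")
print(f"{n_vms} VMs want {need_ram_gb} GB; host has ~{avail_ram_gb} GB free "
      f"-> short by {need_ram_gb - avail_ram_gb} GB")
```

As the numbers show, nine virtual CPUs merely oversubscribe the eight cores, which a hypervisor tolerates, but 18GB of statically assigned guest RAM will not fit in 16GB, so a few guests (the DNS server, say) would need smaller allocations.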
Any recommendations for SCSI 10K or 15K drives? A Core 2 Duo would be nice as well, and ~4GB of RAM. I don't need a lot of HDD space or bandwidth. I'm also open to "hybrid" servers.
Currently my home computer uses a WD 7200 RPM drive, and I'm thinking of upgrading to RAID 0 with 10K RPM drives. Here are the drives: newegg.com/Product/Product.asp?item=N82E16822116006, and this is the RAID card: newegg.com/Product/Product.asp?item=N82E16816118050. Then I was looking into cables for a SCSI drive, but I know nothing about them. My friend showed me these cables he found: provantage.com/cables-go-09476~7CBTE01N.htm, but it says they're SCSI-3. Does this matter? What is SCSI-3, and can it be used with this RAID card and these drives? The cables I was looking at, newegg.com/Product/Product.asp?Item=N82E16812193019, are 30 bucks each. Do I need to buy two of these for my RAID 0, or what? Any suggestions on the best SCSI cables for me and the best transfer rate? Links would be great too.
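For what it's worth, "SCSI-3" on a cable listing generally refers to the connector and signaling generation; Ultra320 drives use 68-pin LVD cabling, and one LVD cable with multiple connectors plus a terminator serves every drive on the bus, so a two-drive RAID 0 doesn't need one cable per drive. As for whether the bus or the drives would be the bottleneck, a rough check; the per-drive rate is an assumed ballpark:

```python
# Will two 10K drives in RAID 0 hit the bus limit? The per-drive rate
# is an assumed ballpark for a 10K U320 disk, not a measured figure.
drive_seq_mb_s = 70      # assumed sustained sequential MB/s per drive
n_drives = 2
u320_bus_mb_s = 320      # Ultra320 shared-bus ceiling

raid0_mb_s = drive_seq_mb_s * n_drives  # RAID 0 stripes across both drives
verdict = "bus-bound" if raid0_mb_s > u320_bus_mb_s else "drive-bound"
print(f"RAID 0 x{n_drives}: ~{raid0_mb_s} MB/s vs {u320_bus_mb_s} MB/s bus -> {verdict}")
```

Under those assumptions the two drives stay well below the U320 ceiling, so the cable standard is about compatibility and termination, not speed.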
I currently have a Dell PowerEdge 2650 from a few years back. It is running...
2x Xeon 2.4GHz (512K cache), 3GB DDR266 RAM, 1x 73GB SCSI
Back in the day this system cost $2,000; now it's not worth close to that.
So my plan was to use this bad boy as an SQL server, seeing as it has the SCSI backplane and 3GB of RAM, and SQL usually doesn't need as much CPU as a web server.
Now my question: would it be better to use this server, or to build a cheap Core 2 Duo box with a RAID 0 array of a few SATA drives?
Before you start going off on RAID 0: it doesn't matter to me, because I am using clustering/failover, so no data will be lost and there will be no downtime if the array fails.
Basically, what I want to know is whether it is worth keeping this server and building upon it, or whether it would be better to sell it and spend an extra few hundred to build a new system with SATA RAID.
Once again, I'm going by price/performance rather than reliability, since I am using failover.
I need drives to work in an HP ProLiant DL360/380. All I know is that they have SCSI U320 drive bays, or that is the type of drive they take. Can anyone provide any insight into what may work? We are trying to find a more cost-effective way to get more storage into a server. The largest SCSI drive I can find is 300GB for $200, while you can get 2TB drives for that much these days.
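Using the prices quoted above, the gap per gigabyte is stark:

```python
# Cost-per-GB comparison using the prices quoted above.
scsi_gb, scsi_usd = 300, 200
sata_gb, sata_usd = 2000, 200

scsi_per_gb = scsi_usd / scsi_gb
sata_per_gb = sata_usd / sata_gb
print(f"U320 SCSI: ${scsi_per_gb:.2f}/GB")
print(f"SATA:      ${sata_per_gb:.2f}/GB")
print(f"SCSI is {scsi_per_gb / sata_per_gb:.0f}x the cost per GB")
```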
Is it really worth the money nowadays to put in SCSI or SAS instead of SATA II (single disk, non-RAID here), if reliability is the only concern (i.e. not I/O performance) during the usual 3-year lifetime of a server?
Actually, I was pretty amazed by SATA reliability: in the past 3 years, the only HDD failures were two SATA drives on a mismatched motherboard that didn't support SATA II (lots of read/write errors; they eventually died). We have had 0% SCSI and SAS failures, though.
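To frame "reliability over 3 years" numerically: given an annualized failure rate (AFR), the survival probability compounds per year. The AFR values below are illustrative assumptions, not vendor specs:

```python
# Chance a drive survives N years given an annualized failure rate (AFR).
# The AFR values are illustrative assumptions, not vendor specifications.
def survival(afr, years=3):
    return (1.0 - afr) ** years

for kind, afr in [("SATA", 0.03), ("SCSI/SAS", 0.01)]:
    p = survival(afr)
    print(f"{kind} (AFR {afr:.0%}): {p:.1%} survival over 3 years, "
          f"{1 - p:.1%} chance of failing")
```

Even a few points of AFR difference only moves the 3-year failure odds by single digits for a lone drive, which is why the premium mostly pays off in large arrays rather than single-disk servers.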
I've got a Dell SC1425 1U rackmount server right now with SATA. I have a new customer who needs a 73GB 15K RPM SCSI drive. Any suggestions as to what I should do for a SCSI controller and drive? I need something reliable and tested.