I was using Apache on my old XP machine, but I recently got a new computer running 64-bit Vista, which comes with the IIS server.
I am familiar with HTML/CSS and am starting to learn PHP, ASP, etc. Am I correct in assuming it would be good to set up both Apache and IIS on my machine so I can test database-driven sites locally? I will be working on many client websites, some of which will be on Windows/IIS hosts and others on Apache.
Or, with just IIS, can I test all sites in my dev environment, since IIS supports everything Apache does and more?
I am not sure whether, when developing a site for a client with a Linux/Apache host, there is a lot of Apache configuration I would not be able to test on my local IIS server.
In fact, I am not even sure whether a web programmer needs to do anything differently at all depending on which server type their code runs on.
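One concrete example of the portability question is URL rewriting. On an Apache host this typically lives in an .htaccess file (the rule below is purely illustrative):

```apacheconf
# .htaccess -- requires Apache's mod_rewrite, which IIS does not have
RewriteEngine On
RewriteRule ^article/([0-9]+)$ article.php?id=$1 [L]
```

IIS 6 needs a third-party ISAPI rewrite filter to get the same effect, so rules like this cannot be exercised on a stock local IIS install.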
I recently got a second dedicated server to run MySQL for me. I host game servers that require MySQL. I used to run it locally on my dual-Xeon machine until it started using a lot of resources, more than all of the game servers combined, so I moved MySQL to a dual-core machine. The MySQL machine is in the same datacenter as my dual-Xeon machine (I rent from SoftLayer), so I am using the private-network IP to communicate between the two servers.

However, the queries are lagging a lot and causing my game servers to freeze up. I never had this problem when MySQL ran locally. Both servers are on 100 Mbit links, and the MySQL machine has a 15K RPM hard drive. Of course queries don't complete as fast as they did locally, but the slowdown is much worse than I expected. Is there anything I can do to make it operate faster? Also, both servers run Windows Server 2003.
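One classic cause of exactly this pattern (fast locally, laggy over the network) is MySQL doing a reverse-DNS lookup on each new connection from the game servers. This is only a guess at the cause here, but it is cheap to try in my.ini on the MySQL box:

```ini
[mysqld]
# Skip reverse-DNS on incoming connections; after enabling this,
# GRANTs must reference IP addresses rather than hostnames
skip-name-resolve
```

Also worth checking: that the game servers reuse pooled/persistent connections instead of opening a new TCP connection per query, since connection setup cost is what the move off localhost made expensive.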
I feel like I'm making this much harder than it is. I have one server with multiple IPs. I list my ns1 as (for example) 192.0.2.1 and ns2 as 192.0.2.2. All of that's squared away, and it all resolves properly.
Except that tinydns only listens on the first IP, and I can't for the life of me figure out how to make it listen on the second IP too. Consequently, queries to ns2 fail.
What I ended up doing was just starting a second instance with /etc/tinydns2 (and /service/tinydns2)... This is surely not the right solution, and it's made even worse by the fact that my "cp -R /service/tinydns /service/tinydns2" command doesn't do anything.
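For what it's worth, the usual djbdns approach is in fact one instance per address, created with tinydns-conf rather than a raw cp -R (copying a live service directory drags the supervise/ control files along, which is why the copy misbehaves). A sketch, assuming the stock account names (tinydns/dnslog) and using 192.0.2.2 as a placeholder for the second IP:

```shell
# tinydns binds only the single address in env/IP, so run one
# instance per address and share the compiled zone data between them:
tinydns-conf tinydns dnslog /etc/tinydns2 192.0.2.2
rm -r /etc/tinydns2/root
ln -s /etc/tinydns/root /etc/tinydns2/root   # both serve the same data.cdb
ln -s /etc/tinydns2 /service/tinydns2        # svscan picks it up and starts it
```

With the shared root/, rebuilding data.cdb once updates both listeners.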
I need to move data (a lot of it) from one server to another. The thing is that the old server's host will not allow SSH access, not even just for a few hours. The new server is a dedicated box, so I will of course have SSH there, but how should I handle this situation?
The data in question is massive: much too much to download to my PC via FTP and upload to the new server. I'm not too familiar with FTP on Linux. Could I SSH into the new machine and FTP into the old machine from there to recursively grab everything? (IIRC, the FTP protocol doesn't allow recursive gets, although it's been a while since I've used command-line FTP.)
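If the old host allows FTP, this server-to-server pull works: lftp's mirror command does recursive gets even though the FTP protocol itself has no recursion (the client simply walks the directory tree). The hostname, user, and paths below are placeholders:

```shell
# Run on the NEW server, over SSH:
lftp -u olduser ftp://old-host.example.com \
     -e 'mirror --verbose /home/olduser /data/migrated; quit'
# wget can do the same if lftp isn't installed:
#   wget -m ftp://olduser:PASSWORD@old-host.example.com/home/olduser/
```

mirror is restartable, so a dropped connection mid-transfer only costs the file in flight, not the whole run.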
For some reason, every time I set up the SSL cert on my Windows box, I receive an error 400 when I try to visit the site. When I take the SSL cert out, the site loads again. Does anyone know what the problem is here?
We're currently running a server with a little under 100 accounts: 32-bit CentOS 4.6 on a single-chip box with a current version of cPanel. We're going to upgrade to two quad-core chips, and I figured I'd take this opportunity to move to a 64-bit version of CentOS 5.1. Here's my question: we're running old versions of Apache and PHP,
and while all we have on the server now are WordPress blogs and a few forums, I have to confess a degree of ignorance when it comes to upgrading Apache and then transferring all the accounts.
These will be different machines, and I hope to make the backup/restore fairly seamless.
Should I upgrade to Apache 2.x before I make the transfer, or does it matter? Same with PHP 5.x. There shouldn't be any conflicts, but I'm posting this because I haven't had to deal with it before, and someone may know of a big issue I should be on the lookout for.
Also, the RPM and Perl modules I've installed over time: is there a "differential" list, or am I going to have to make a list of the RPMs I've got now and then check it again after I build the new machine? There are probably a dozen or so specific to some applications on the server that aren't required for core operation, and damned if I can remember which they were...
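There's no built-in differential list, but rpm -qa plus comm gets you one in two commands. A sketch; the canned package names stand in for the real rpm output so the comparison step is visible:

```shell
# On the old box, dump installed package names (names only, sorted):
#   rpm -qa --qf '%{NAME}\n' | sort -u > /tmp/old-rpms.txt
# Do the same on the new box, then compare the two lists.
# Demo of the comparison with stand-ins for the rpm output:
printf 'httpd\nmod_ssl\nperl-DBI\n' | sort -u > /tmp/old-rpms.txt
printf 'httpd\nperl-DBI\n' | sort -u > /tmp/new-rpms.txt
# Lines only in the first file = packages missing from the new server:
comm -23 /tmp/old-rpms.txt /tmp/new-rpms.txt
```

Comparing names only (not versions) avoids noise from the CentOS 4 to 5 version bumps; drop the --qf to compare full NEVRA strings instead.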
I have more worries, but most of them involve hand-holding issues I think I'll work through privately; if I do have any more questions, I'll add them to this thread.
I am currently researching the options open to me for virtualisation; the two main ones I have seen are Xen and KVM.
I mainly use CentOS (RHEL), but have read that the version of Xen that ships with it is very old, broken, and unstable. KVM isn't included in the kernel that ships with CentOS, which is too old; apparently KVM first appeared in kernel 2.6.20. There isn't likely to be an update until RHEL 6, which is due for release in the first quarter of 2010. I can't wait over a year, so I need to find another distro for use as the host OS/hypervisor.
I have built a pretty powerful server with an Intel Xeon 3230, which has VT, so I might be better off using KVM over Xen. I am going to colocate this server, so realistically I can make this decision only once, as it would be a PITA to re-install a host Linux distro remotely.
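As a sanity check before committing to KVM, it's worth confirming the CPU actually exposes VT to the OS, since some BIOSes ship with it disabled. A minimal check; the canned flags line in the demo is made up:

```shell
# KVM needs hardware virtualisation; the CPU advertises it in
# /proc/cpuinfo ('vmx' for Intel VT-x, 'svm' for AMD-V). On the real box:
#   grep -E -c 'vmx|svm' /proc/cpuinfo
# Demo of the same test against a canned flags line:
echo 'flags : fpu tsc msr vmx sse sse2' | grep -E -c 'vmx|svm'
```

A count of zero on the real machine means either the CPU lacks VT or the BIOS has it switched off.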
I did a search on DistroWatch for distros shipping the latest version of the kernel, and Slackware came up as being just one minor version behind the most current.
Now, this distro is very mature, so it should be a fairly safe bet, but it is a 32-bit version and can't host 64-bit VMs. I have 8 GB of RAM and want to be able to use it all, as well as offer the choice of 32- or 64-bit VMs. So that's out of the window.
I have used Arch Linux on and off for a couple of years as a workstation OS, but because it is so bleeding edge, when pacman updates it can break itself. But I suppose if I just use it as the Host OS, and never let it update/reboot, then it won't break. It should be fairly lightweight and stable, as I will be installing the bare minimum packages. I have a management card, so if the server fails to boot, then I can still remote in to fix it.
If I do want to update the kernel, is it possible to update without rebooting? I think it is, somehow... unless I can just reboot during a quiet window at 3am or something.
As you can tell I am leaning towards KVM on Arch Linux (x86_64). Is this a good plan?
I've set up a website using a No-IP account, nothing fancy or business-oriented, and I have it working this way:
no-ip (port 80 redirected to port 6500) -> router (forwards port 6500 to pc2 at 192.168.1.3) -> pc2 (VMware Server forwards port 6500 to a virtual machine at 192.168.60.100, which is running an ASP.NET server listening on port 6500)
The router does not forward any port other than that one, but I would like to know whether this poses any risk to the other machines on the LAN.
I am planning to get a Juniper firewall, but since the SSG140 has a maximum of 48,000 concurrent sessions, it makes me wonder: how do I measure the concurrent sessions of a Linux server across its total traffic, instead of just port 80?
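To get a first approximation on the Linux box itself, counting established TCP connections across all ports is a reasonable start (a stateful firewall's idea of a "session" also covers UDP and half-open flows, so treat this as a lower bound). The demo runs the filter against two canned lines of netstat-style output:

```shell
# On the server, count every established TCP session (all ports):
#   netstat -ant | awk '$6 == "ESTABLISHED"' | wc -l
# If the box does NAT/forwarding, the kernel conntrack table is closer
# to what the firewall will see:
#   wc -l < /proc/net/ip_conntrack
# Demo of the awk filter on canned netstat output:
printf 'tcp 0 0 10.0.0.1:80 10.0.0.9:5000 ESTABLISHED\ntcp 0 0 10.0.0.1:22 10.0.0.9:5001 TIME_WAIT\n' \
  | awk '$6 == "ESTABLISHED"' | wc -l
```

Sampling that count at peak hours gives a number to hold up against the SSG140's 48,000-session ceiling.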
I want to be able to clone an existing virtual machine image (whether it's OpenVZ or Xen) and deploy it.
It's sort of like Amazon EC2.
I want to set up a virtual machine image once, then deploy it (it doesn't matter whether it's on the same physical machine or not) to a new machine, or use it to overwrite an existing image on an existing machine. It's fine if the VPS provider charges me a monthly fee for every virtual machine I deploy.
This way, I only need to maintain one master virtual machine. It would be even better if I could export an image to my own machine, modify it locally, then upload it for deployment.
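Providers differ in what they automate, but the mechanics of that export/modify/redeploy loop can be sketched with ordinary tools. Everything below is illustrative (a real OpenVZ private area lives under /vz/private/&lt;ctid&gt;, and Xen images are usually disk files):

```shell
# Build a tiny stand-in "master image" and export it as a tarball:
mkdir -p /tmp/master-image/etc
echo 'hostname=master' > /tmp/master-image/etc/demo.conf
tar -C /tmp/master-image -czf /tmp/master.tar.gz .      # export the master
# Deploy a copy (locally here; in practice, upload to the host first):
mkdir -p /tmp/clone-102
tar -C /tmp/clone-102 -xzf /tmp/master.tar.gz
# Per-clone tweak (hostname, IP, etc.) after deployment:
sed 's/master/clone-102/' /tmp/master-image/etc/demo.conf > /tmp/clone-102/etc/demo.conf
cat /tmp/clone-102/etc/demo.conf
```

The per-clone tweak step is the part a provider's tooling (or EC2-style metadata) normally handles for you; everything else is just copying a tree.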
I run the MDaemon mail server and IIS 6. Recently the machine started acting really freaking slow, and I noticed that MDaemon was taking up 25-50% CPU almost all the time. When browsing in Explorer, it takes a number of seconds (about 5-10) to show all the folders as you drill down any directory tree. Also, when you right-click on any folder, set of folders or files, or individual file, it takes a good 5 seconds for the context menu to show up.
The entire machine is just going butt slow for some reason, and I'm at a total loss as to why. Here are a few bits of info on how the machine responds. If I reboot it, it's totally fine for a short time: mail takes up the usual 1-5% CPU, and all folder browsing is normal and speedy.

When the machine does become slow as crap, turning off the mail server changes nothing; the machine still runs super slow. Folder browsing still takes forever, and so does right-clicking anything. I've done a full virus scan, and nothing. I'm not sure what else to do, but there are no processes taking up a lot of CPU or RAM that might be causing problems. Bandwidth is not being eaten up either; it's on a T1 and averages about 5-10 GB of total in/out bandwidth, and that has not changed. I'm not stupid when it comes to computers, but I'm at a total loss as to what could be causing this.