I'm running Apache to serve PHP files from /home/www/ and thttpd to serve images from /home/www/images.
thttpd runs on a different IP than Apache, and Apache only listens on its own IP.
After doing this, the number of Apache processes decreased significantly; however, performance has gone down, and Apache is starting to crash very frequently (the machine is swapping).
Could there be a file-locking issue? Do I have to separate the images folder from the www folder?
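For reference, the binding setup looks roughly like this (the IPs below are placeholders, and the thttpd flags are as I understand them from its man page):

Code:
# httpd.conf -- Apache bound to its own IP only
Listen 192.0.2.10:80

# thttpd -- serve /home/www/images, bound to the second IP
thttpd -h 192.0.2.11 -p 80 -d /home/www/images -u www-data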
I was using Apache on my old XP machine, but I recently got a new computer running Vista 64-bit, which comes with IIS.
I am familiar with HTML/CSS and am starting to learn PHP, ASP, etc. Am I correct in assuming that it would be good to set up both Apache and IIS on my machine so that I can test database-driven sites locally? I will want to work on many client websites, some of which will be on Windows/IIS hosts and others on Apache.
Or can I test all sites in my dev environment with just IIS, since IIS supports everything Apache does and more?
I am also not sure whether, when developing a site for a client on a Linux/Apache host, there is a lot of Apache-specific configuration that I would not be able to test on my local IIS server.
In fact, I am not even sure whether a web programmer would need to do anything different at all depending on which server type their code was running on.
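To make my worry concrete: as I understand it, Apache sites often rely on per-directory .htaccess rules like the hypothetical sketch below, which IIS does not read natively, so the same behaviour would have to be expressed differently there (e.g. in web.config with the URL Rewrite module).

Code:
# .htaccess (hypothetical) -- Apache-only per-directory configuration
RewriteEngine On
# Send requests for non-existent files/directories to a front controller
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?path=$1 [QSA,L]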
I have been using IIS 7.5 on a Win7 32-bit computer. We moved to a Win7 64-bit machine and one app does not work. I am thinking of trying 32-bit Apache to see if that works. The script that is causing problems calls Office Word to create a document. Here is a simplified version of the script.
I have built a Windows XP machine. This machine is running the OTRS ticket system. All of our users log in using [URL] .... and agents can log in with [URL] ....
I have also built an Ubuntu 9.10 machine and installed the OTRS ticket system. I would like to redirect the [URL] ..... on the Windows XP machine to the Ubuntu 9.10 machine's [URL] ....
I searched the httpd.conf file and tried to change the virtual host, but it did not work.
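Since the real URLs are not shown here, this is a minimal sketch of the kind of VirtualHost redirect I am trying to get working, using placeholder hostnames (otrs.example.com for the old XP box, otrs-new.example.com for the Ubuntu box):

Code:
# httpd.conf on the Windows XP machine (placeholder names)
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName otrs.example.com
    # Send every request for the old OTRS URL to the new Ubuntu server
    Redirect permanent / http://otrs-new.example.com/
</VirtualHost>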
I am three days into figuring out how to get Perl scripts to run on my Windows XP box. I downloaded and installed the Apache installation file "httpd-2.0.65-win32-x86-openssl-0.9.8y.msi" and "strawberry-perl-5.18.2.1-32bit.msi" from the perl.org site, in hopes that I could get a feedback form to work for a web site that I am working on.
Of the many pages I have viewed online about how to configure the Apache server, none of their explanations has given me favorable results.
My last attempt was [URL] ....., where I could not get the example to work. I made the changes described in "Edit the Apache httpd.conf Configuration File" fairly easily, but I must be having problems with test.pl because I can't get it to work.
I used a different version of Perl (Strawberry Perl from perl.org, because it installed without giving me an error pop-up), and after copy-pasting the script I ended up changing it, as shown below, in hopes of getting it working.
I assumed that "#!" meant the "C:" drive, substituted the first "/perl" with the folder Strawberry Perl had installed itself to, and left the second "/perl" in the first line, thinking it referred to the executable in the "C:\strawberry\perl\bin" folder.
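My changed script isn't reproduced here, but for comparison, a minimal test.pl that usually works with a default Strawberry Perl install (assuming Perl ended up under C:\strawberry\perl\bin) looks something like this; as I understand it, Apache on Windows uses the "#!" line to find the full path to perl.exe, not just the drive letter:

Code:
#!C:/strawberry/perl/bin/perl.exe
# Minimal CGI test: print an HTTP header, then a short HTML page
use strict;
use warnings;

print "Content-type: text/html\n\n";
print "<html><body><h1>Perl CGI is working</h1></body></html>\n";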
This is the error I get when trying to run the script by typing "localhost/test.pl" in the address bar.
"Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. ........"
Does anyone know how I can change the title of this post to read "Configuring Apache HTTP Server 2.0 to run Perl in Windows"?
I've been having some issues with my cPanel lately... it seems every time I add an account via WHM or a subdomain via cPanel, it locks up at "Restarting Apache".
I have 2 identical servers, and 1 runs fine... on this one, though, I've reinstalled Apache multiple times with no results... now I'm reinstalling cPanel and it seems frozen at 50% and just says
I have a question about locking users into their home directories. A few weeks ago I bought a Debian box, and I need to create shell accounts with access locked to the user's home directory, or at least block access to other users' directories.
In short, I need a way to lock users in their directory. E.g. if I host a domain mydomain.com, the owner of this domain should be able to access (read, write, execute via SSH) only the folder www/mydomain.com and nothing else. The solution does not have to be an ultra-secure one.
What I have tried or considered so far:
1] Adjusting privileges (e.g. denying execute on directories for "others") seems a sure way to make the server non-functional.
2] rbash - when I set the shell to rbash for a test user, the user can no longer connect to the server through WinSCP.
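A minimal sketch of the kind of setup I am imagining, using OpenSSH's ChrootDirectory with the internal SFTP server (this covers WinSCP/SFTP access rather than a full interactive shell; it needs a reasonably recent OpenSSH, and the group, usernames, and paths are placeholders):

Code:
# /etc/ssh/sshd_config -- jail members of the "sftponly" group into their home
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

# Creating such a user (run as root); the chroot top level must be root-owned
# and not group/world-writable, so give the user a writable subdirectory inside it
groupadd sftponly
useradd -g sftponly -s /usr/sbin/nologin -d /home/www/mydomain.com user_mydomain
chown root:root /home/www/mydomain.com
chmod 755 /home/www/mydomain.com
mkdir /home/www/mydomain.com/public_html
chown user_mydomain:sftponly /home/www/mydomain.com/public_html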
I have a (dedicated) server out of control. It is managed by a 3rd-party company that has never been able to get the spam and server load under control. Load averages are over 5, and there is no activity in top other than sendmail and MailScanner (with Ensim).
I turned off MailScanner and sendmail while I typed this, and the server load went to 0.08.
I'm going to switch (dedicated) servers to a new provider (for reasons above plus a few others) which will include managed service from the server provider as well.
There are only a few programs that need to run on this server. VBulletin is the main concern.
I want to lock down all mail access. I want vbulletin to be able to send outgoing email as part of its administration and as part of its member notifications.
I don't want ANYONE OUTSIDE THE SERVER to be able to send mail through this server.
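For reference, a sketch of one way to do this (assuming a stock sendmail.mc setup; distro paths vary) is to bind sendmail to the loopback interface only, so local scripts like vBulletin can still hand mail to it but nothing outside the server can connect to port 25:

Code:
dnl /etc/mail/sendmail.mc -- listen on loopback only
DAEMON_OPTIONS(`Port=smtp, Addr=127.0.0.1, Name=MTA')dnl

# Rebuild sendmail.cf and restart
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
service sendmail restart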
One idea I have had is to use DNS to assign all MX records of every domain on the machine to the free gmail service.
I have one domain on this machine (an important domain) that gets thousands and thousands of spam messages. I assigned its MX records to NO-IP.com, who filter and forward email to me. That has worked - but server load never budged.
Code:
User    Domain    %CPU    %MEM    MySQL Processes
mysql             3.30    45.97   0.0

Top Process  %CPU 1.0  /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/hostname.com.pid --skip-locking
Top Process  %CPU 0.7  /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/hostname.com.pid --skip-locking
Top Process  %CPU 0.6  /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/hostname.com.pid --skip-locking
The values were higher before.
I want to ask what skip-locking is, and if I add skip-locking to my.cnf,
will the problem be improved? Will it have any other effect or cause any problems?
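For reference, this is where I am thinking of adding it (a sketch of the [mysqld] section of my.cnf). As far as I understand, skip-locking just tells mysqld not to use external/system file locking, and in newer MySQL versions the same option is spelled skip-external-locking:

Code:
# /etc/my.cnf (sketch)
[mysqld]
datadir = /var/lib/mysql
user    = mysql
# Disable external (system) file locking between mysqld and tools like
# myisamchk; known as skip-external-locking in newer MySQL versions
skip-locking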
I've seen that a similar topic was posted earlier today, but I have more specific question. I'm looking for the e-mail only hosting. Requirements:
* dedicated IP
* SMTP + SSL (TLS)
* POP3/IMAP + SSL (TLS)
* up to 10 e-mail addresses from various domains
* forwarders (10 or more)
* 1-5 GB storage
* 10 GB bandwidth
* budget: the less the better, but I'll pay any reasonable amount of money if the service is good.
Basically I need to find a provider with which I can keep my e-mail address once and for all. (I only need a few e-mail addresses, a few forwarders, 100 MB storage, and 1 GB bandwidth, but I stated more so I don't run out of resources.)
Is there any significant difference between SSL and TLS, or is it mainly a matter of proprietary vs. open standard?
Is there any good reason not to use Google Apps for this purpose? I've read that some people are concerned about privacy. Is there any pro who can comment on this (the privacy issue) and remain objective?
One last (dumb?) question. What is the purpose of domain locking? Before AuthCodes were introduced I could see the reason for locking domains, but why would anyone want to lock a domain these days? (And yet I see the majority still does.) I mean, no one can initiate a transfer without providing the AuthCode (can they?), and isn't that alone good enough to keep the domain safe? And if someone manages to gain control of the control panel to read the AuthCode, then they can easily unlock the domain, so I see no additional layer of security.
I have a server for testing UBC and SLM memory management (Virtuozzo 4). I use SLM memory limits and set up 2 VPSs with my hosting plan (1024 MB guaranteed and 4096 MB dynamic).
The host machine has 8 GB of RAM. My problem is that the host machine has only about 100 MB of memory free when these two VPSs are running.
I have a dedicated machine with Xen configured... Dom0 stuff runs great.
I have a pre-made image from jailtime.org, with an ubuntu.7-04.img, ubuntu-7-05.xen3.cfg, and ubuntu.swap -- obviously, the actual image, the config file, and the swap file.
Starting it fails:
Code:
$ sudo xm create ./ubuntu.7-04.xen3.cfg
Using config file "././ubuntu.7-04.xen3.cfg".

The config file:
- Are the /boot and /dev/sda1 literal? That is, do they relate to Dom0 names (/boot on the server, and /dev/sda1, my "real" disk), or are they telling the new DomU what to call them?
- I don't want DHCP.... Do I change dhcp to "0" / "false," or do I specify an IP?
I've found a zillion guides out there, and they cover everything about setting up Xen, except for this one part, it seems?
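For reference, this is a sketch of how I understand the relevant lines of a jailtime-style Xen 3 config would look with static networking instead of DHCP (paths and addresses are placeholders; my current understanding is that the 'file:' paths are Dom0 paths, while 'sda1'/'sda2' are the names the DomU will see):

Code:
# ubuntu.7-04.xen3.cfg (sketch, placeholder values)
kernel  = '/boot/vmlinuz-2.6-xen'        # Dom0 path to a Xen-capable kernel
ramdisk = '/boot/initrd-2.6-xen.img'     # Dom0 path to the matching initrd
memory  = 256
name    = 'ubuntu'

# Left side: image files in Dom0; 'sda1'/'sda2': device names inside the DomU
disk = ['file:/srv/xen/ubuntu.7-04.img,sda1,w',
        'file:/srv/xen/ubuntu.swap,sda2,w']
root = '/dev/sda1 ro'

# Static IP instead of DHCP: drop dhcp = "dhcp" and set these instead
vif     = ['']
ip      = '192.0.2.20'
netmask = '255.255.255.0'
gateway = '192.0.2.1'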
I recently got a 2nd dedicated server to run MySQL for me. I host game servers that require MySQL. I used to run it locally on my dual Xeon machine until it started using a lot of resources, more than all of the game servers combined. So I moved my MySQL to a dual-core machine. The MySQL machine is in the same datacenter as my dual Xeon machine (I rent from SoftLayer), so I am using the private network IP to communicate between the 2 servers. However, the queries are lagging out a lot and causing my game servers to freeze up. I never had this problem when it ran locally; both servers are 100 Mbit and the MySQL machine has a 15K RPM hard drive. The queries do not go through as fast as they did when it ran locally (which is expected, of course), but the slowdown is worse than I thought it would be. Is there anything I can do to make it operate faster? Also, both servers run Windows Server 2003.
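One thing I am considering (a sketch, based on the common advice that MySQL on Windows can stall doing a reverse DNS lookup for every remote connection) is adding skip-name-resolve on the MySQL box and granting access by the private IP instead of by hostname; the IP, database, user, and password below are placeholders:

Code:
# my.ini on the MySQL machine (sketch)
[mysqld]
# Skip reverse-DNS lookups on incoming connections; grants must then
# use IP addresses (or %) rather than hostnames
skip-name-resolve

-- Grant for the game-server box over the private network
GRANT ALL PRIVILEGES ON gamedb.* TO 'gameuser'@'10.0.0.2' IDENTIFIED BY 'secret';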
I feel like I'm making this much harder than it is. I have one server with multiple IPs. I list my ns1 as (example) 1.2.3.4 and ns2 as 1.2.3.5. All of that's squared away, and it all resolves properly.
Except that tinydns only listens on 1.2.3.4, and I can't for the life of me figure out how to make it listen on the second IP too. Consequently, queries to ns2 fail.
What I ended up doing was just starting a second instance with /etc/tinydns2 (and /service/tinydns2)... This is surely not the right solution, but it's made even worse because my "cp -R /service/tinydns /service/tinydns2" command doesn't do anything.
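For what it's worth, my understanding is that a tinydns instance binds the single IP in its env/IP file, so the usual approach is to create a second instance with tinydns-conf rather than copying the service directory. A sketch (the account names are the usual djbdns ones, and the IP is the ns2 address from my example above):

Code:
# Create a second tinydns instance bound to the ns2 IP
tinydns-conf tinydns dnslog /etc/tinydns2 1.2.3.5

# Reuse the same zone data and compile it
cp /etc/tinydns/root/data /etc/tinydns2/root/data
cd /etc/tinydns2/root && make

# Hand the new instance to svscan
ln -s /etc/tinydns2 /service/tinydns2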
I need to move data (a lot of it) from one server to another. The thing is that the old server's host will not allow SSH access, not even just for a few hours. The new server is a dedicated one, so I will of course have SSH there, but how should I handle this situation?
The data in question is massive.. much too much to download to my PC via FTP and then upload to the new server. I'm not too familiar with FTP on Linux. Could I use SSH on the new machine to FTP into the old machine and recursively grab everything? (IIRC, the FTP protocol doesn't allow recursive gets... although it's been a while since I've used CLI FTP.)
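In case it helps frame the question, this is roughly what I had in mind, run from an SSH session on the new server (hostname, path, and credentials are placeholders; as far as I know both wget and lftp can mirror a whole directory tree over plain FTP even though the basic ftp client can't):

Code:
# Option 1: mirror the whole FTP tree with wget
wget -m -nH 'ftp://olduser:oldpass@old-host.example.com/public_html/'

# Option 2: lftp's mirror command (easier to resume if it gets interrupted)
lftp -u olduser,oldpass ftp://old-host.example.com \
     -e 'mirror --verbose /public_html /home/newuser/public_html; quit'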
I have a small linux box that I use as a router (CentOS 4.4 on OpenVZ).
I have quite a few clients connecting to it and using it as a gateway.
I would like to monitor their bandwidth usage if possible. I have iptables installed and am using iptables -L -v -n, which shows me the data transferred on the specific ports that I am forwarding to them.
So, is there a bit of software out there that will monitor each IP for all UDP and TCP traffic, and whose counts won't be lost if I restart iptables?
I have looked at Cacti, but have never managed to get it to work...?
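For context, this is roughly how I am counting per-client traffic at the moment, with counting-only rules in the FORWARD chain (the client IPs are placeholders); the counters still reset whenever iptables is restarted, which is exactly the part I would like a proper tool for:

Code:
# Per-client accounting rules: no -j target, so they only count packets/bytes
iptables -I FORWARD -s 192.168.1.10
iptables -I FORWARD -d 192.168.1.10
iptables -I FORWARD -s 192.168.1.11
iptables -I FORWARD -d 192.168.1.11

# Read the counters (exact byte counts with -x)
iptables -L FORWARD -v -n -x

# Save counters before restarting iptables so they can be restored afterwards
iptables-save -c > /root/iptables.acct
iptables-restore -c < /root/iptables.acct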