I've been having problems getting my web server going.
I'm using Apache 2 on my Windows XP SP2 machine. I have a Linksys WRT54GX2 router and Charter 3 MB cable internet. I already called Charter to ask whether they allow web servers, and they said they do. I also asked if they block ports, and they said they don't.
Now my problem... I originally thought maybe Charter blocked port 80, so I used No-IP's port forwarding to work around that. Then I used the tool canyouseeme.org, and it said it couldn't reach any of the ports I put in.
All my firewalls were off, including my router's firewall. I even put my computer in the router's DMZ. I'm about to see if I can connect my modem directly to my computer, but I wasn't able to a bit ago. Anyone have any clues as to what I could do if nothing changes when I connect my modem directly to my computer?
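Besides canyouseeme.org, it helps to check from the server itself whether anything is actually listening. A minimal sketch in Python (the host and port below are examples):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether the local Apache is listening on port 80
print(port_open("127.0.0.1", 80))
```

If this prints False when run on the server itself, the problem is Apache's Listen directive or a local firewall, not the router or the ISP.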
It's quite a 'powerful' one (Q6600, 2 GB, 2x160 GB). It will be running Windows Server 2003 Standard Edition.
I would also like to make a test 2003 installation on the server itself, and most likely a Linux (Ubuntu) installation too.
Now VPS looks good, but I have some questions about it.
- Can I run 2003 (virtualized in a VPS) under the same SPLA license?
- Is it possible to dedicate one core of the CPU (I have four) to a VPS, so it doesn't stress the other cores?
- For example, if I host an application on port 5000 (randomly chosen) on the non-VPS system, is it possible to host the same application on the same port in the VPS, with another IP address assigned to it?
- What's the best (free if possible) way to make a VPS on Windows? It shouldn't have many options (just an on/off switch).
- Is it a problem for servers these days to push the 'maximum' (or around 80%) out of the network connection (gigabit or 100 Mbit)? Are the server response times (pings) acceptable for gaming when it's under such a load?
Let's say you ordered a new server. Do you put it into service the same moment (installing httpd and all other components), or do you run tests first, like memory and hardware tests? If so, which programs would you recommend for fully testing the hardware?
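For reference, a common Linux burn-in looks something like the following; package names vary by distro, and the sizes and durations here are arbitrary examples, not recommendations:

```shell
memtester 1024 5              # test 1 GB of RAM, 5 passes (or boot Memtest86+ instead)
smartctl -t long /dev/sda     # SMART long self-test on the first disk
stress --cpu 4 --timeout 3600 # load all four cores for an hour while watching temperatures
```

Disk and memory tests are the ones that most often catch a lemon before it goes into production.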
I've spent the last several months working on a huge upgrade of a couple dozen websites. The upgrades include modifying Apache so that visitors who arrive at links pointing to mysite/World/New_York are redirected to mysite/world/new-york. In other words, all my links now default to lower case, and underscores are replaced with dashes.
Unfortunately, publishing it has been an endless series of disasters. My websites have all crashed, and the server is unbelievably slow. Pages take forever to load (if they load at all), and I can scarcely publish files online. So the following notice from my webhost got my attention.
It appears your own server IP is making GET requests to Apache, causing excessive load and service failures. On today's date, your IP made almost 6,000 connections to Apache.
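The kind of lowercase-plus-dashes redirect described above is typically built with mod_rewrite's RewriteMap, which can only be declared in server or virtual-host config, not in .htaccess. A sketch (the exact rules are assumptions, since the actual config isn't shown):

```apache
# Server/vhost context: declare the internal tolower map
RewriteMap lc int:tolower

RewriteEngine On
# Redirect any path containing uppercase letters to its lowercase form
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^(.*)$ ${lc:$1} [R=301,L]
# Replace one underscore per pass; the external redirect repeats until none remain
RewriteRule ^(.*)_(.*)$ $1-$2 [R=301,L]
```

If a rule like this ever rewrites a URL back to something that still matches its own condition, every visit becomes a redirect loop, which can look very much like your own IP hammering Apache with thousands of GET requests.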
I've tried everything, and nothing seems to work. I spent a good two hours in IIS on my Windows box trying to figure out how to set up an FTP site on a specific directory. What's a good program for easily setting up a read-only FTP server on Windows? I just want to allow me and my friends to grab files out of one specific directory.
I am trying to customize the Knoppix CD so that the SSH server starts when the system boots. I've tried installing the service and setting up the appropriate runlevel links (update-rc.d), but the SSH daemon still doesn't start automatically; I have to start it manually after the system has booted. One idea is to put the startup command in /etc/rc.local, though I'm not sure that would work, and I'd prefer to start it from the runlevels. And what about /etc/inittab? Any ideas on that?
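If the runlevel links don't survive the remastering, the /etc/rc.local fallback generally does work, since it runs at the end of multi-user boot. A minimal sketch (the init script path assumes a Debian-style layout, which Knoppix uses):

```shell
#!/bin/sh -e
# /etc/rc.local -- runs at the end of each multi-user runlevel
/etc/init.d/ssh start
exit 0
```

An /etc/inittab respawn entry is also possible but unusual for sshd; fixing the update-rc.d links inside the remastered filesystem is the cleaner route if you can get it to stick.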
Here's what happens when I do a traceroute to Amazon (over a maximum of 30 hops):
11    33 ms   35 ms   36 ms   ae-7.ebr3.Atlanta2.Level3.net [4.69.134.22]
12    58 ms   53 ms   54 ms   ae-2.ebr1.Washington1.Level3.net [4.69.132.86]
13    53 ms   53 ms   53 ms   ae-81-81.csw3.Washington1.Level3.net [4.69.134.138]
14    47 ms   48 ms   47 ms   ae-3-89.edge1.Washington1.Level3.net [4.68.17.144]
15     *       *              AMAZONCOM.edge1.Washington1.Level3.net [4.79.20.14] reports: Destination net unreachable.
Trace complete.
Is there a way to make my server unreachable to traceroute like that, without installing a hardware firewall?
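A software firewall on the host is enough for this. A sketch as an iptables-restore fragment; whether you want to drop all ICMP or just these types is a policy choice, and the rules below are only an illustration:

```
*filter
# Drop inbound echo requests so the host stops answering ping
-A INPUT -p icmp --icmp-type echo-request -j DROP
# Drop the UDP high-port probes that Unix traceroute sends by default
-A INPUT -p udp --dport 33434:33523 -j DROP
COMMIT
```

Note that dropping all ICMP indiscriminately can break path MTU discovery, so most people filter only the probe types they care about.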
I've ordered a 1 Gbit/s port with one of my dedicated servers, but I am still unhappy with the download speed.
I have a 2 Mbit DSL connection at home, and I can download files from the server at 90 KB/s. I see the same speed on a server with a 100 Mbit port. But I can download files from RapidShare at 210 KB/s.
What do you recommend I do, on the server side, to make downloads faster?
Hello. I owned a VPS not long ago and hosted my WordPress site on it. I used approximately 9 plugins, all of which are very low-usage and mainly used for the backend. I noticed that the VPS slowed down during everyday use. Is this due to the WordPress script or the VPS itself?
The VPS had 1024 MB RAM (1531 MB burst) and an equal share of the CPU.
I don't own the VPS now, but I would appreciate some answers, as I may buy a new one soon to host the same type of site.
I did have a chance to really talk with the owner, Navid, about the issues I had and what was really going on. One of the things I had trouble with was downtime; I was assured there would be no more of it, and that I'd get the latest news on what was going on.
Some of the new things they're doing:
- New support staff, and more
- New servers from DC (Databank?) (which I'll be moved to ;])
- Less downtime, or none at all, and total-care support
- More support options
And the results are being seen: they immediately solved all my issues, and hopefully I won't run into them again. Dear members who read this: as an owner, I've decided to go sole proprietorship and handle sales, support, and billing all from my BlackBerry and cell phone, around the clock, so downtime is the last thing I can afford. I currently have over 125 accounts as one of the top free hosts, and clients new to the web means lots of questions, so uptime and reliability from a powerful host are a must. I thank BuyAVPS for making the turnaround; though they've only been around for 1 year, they're one of the rare hosts with the right price and great deals.
I must say, their support team is fabulous and has been helping me out constantly, from installing scripts to great support, and now they're offering more support options.
I've been with them for 3-4 months now, or maybe more; I signed up when they started. They've come a long way and are soon to be one of the best VPS hosts.
I am the executive director of a non-profit organization. As part of our mission, we publish a monthly peer-reviewed academic journal. We receive 70-100 submissions per month, each of which must be reviewed by experts. I am interested in moving the entire peer-review process onto a secure website and need some advice. Here are our needs.
1. Run 10-20 independent peer-review sessions simultaneously.
2. Assign 3 to 7 reviewers to individual discussion "rooms" for participation in the peer-review process. The editor must be able to monitor and manage the discussion process.
3. Upload a PDF version of a submitted article into a specific "room" for review.
4. Assign a unique user name that will allow each reviewer to remain anonymous during the review process.
5. Reviewers discuss the merit of the article in a blog format.
6. Close the review session down after one week (no more access). When a room is reopened for another article and a new group of reviewers, it would have to be done with new security settings (i.e., people who participated in previous reviews could not log on and access the same or other rooms at a later date).
7. Compile/summarize the discussion thread and send it to the author.
I was thinking that what we need is basically to set up individual blogs with security settings. However, there may be other, better ways to go; I don't know. The most important thing is that the "discussion rooms" would be short-lived, with a limited number of participants (3-7 plus the editor) and a limited number of posts. Submitted articles are 150-500 KB and would be taken down from the site once the review session is closed.
I am having trouble determining what our needs will be for this project in terms of storage space, bandwidth, security requirements, etc. Although we want an attractive site, this will not be for public access. The most important design factors therefore are ease of use, functionality, and reliability.
I have a VPS with Linux and 128 MB RAM, and the control panel is Interworx. Backups are made with SiteWorx (a panel within NodeWorx, and the only one visible to shared-hosting customers).
- The VPS is working properly the whole day;
- The content of my VPS (besides the necessary software) is a PHPBB 3.0.0 forum that is heavily visited. Its subject is World of Warcraft, a popular MMORPG;
- Making a backup succeeds, and the system reports this by an e-mail, which duly reaches me;
- Shortly after that, my VPS crashes and stays offline until I restart it or my webhost notices that it is down. NodeWorx and SSH are inaccessible. As soon as I can access SSH, I can restart the MySQL server and everything works properly again.
I suspect that 128 MB RAM is too little for my VPS backups.
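128 MB is indeed very tight for Apache, phpBB, and MySQL together, and a backup job's extra memory use can push the box into out-of-memory territory, which would match MySQL being the process that dies. Until more RAM is an option, trimming MySQL's footprint in my.cnf sometimes helps; the values below are illustrative guesses for a 128 MB VPS, not tested recommendations:

```
[mysqld]
key_buffer_size         = 8M
innodb_buffer_pool_size = 16M
max_connections         = 30
```

Checking the kernel log right after a crash (for "Out of memory" / oom-killer messages) would confirm or rule out this theory.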
If I co-located a 44U cabinet and loaded it with 44 1U rackmount servers so that the entire cabinet was full, would all my servers fail or crash from overheating because they touch one another? Or would you say that in a cooled datacenter with a hot/cold row setup this typically would not be a problem? The datacenter will let me add more amps per cabinet, but their cabinets are only 44U. Has anyone attempted this? I hear Rackable Systems can do it, but I plan on using 1U Supermicro servers.
I'm trying to assign nameservers to my new server in WHM.
I am trying to do ns1.mydomain.com and ns2.mydomain.com. I would like to manually choose which IPs are given to each nameserver, but WHM is doing it automatically and selecting an internal IP for one.
I just ordered a dedicated server; they rang me to confirm the order and told me my server would be ready in a few hours, but later they asked me to send copies of my debit card and passport, which I refused to do, as I have never done this before, especially here in the UK.
I'd like to learn how to make an intranet, and I am thinking of having a go at it on a network I have access to.
Currently there is one computer that everybody here calls the fileserver. Its My Documents folder is everybody else's Z: drive. (This is an XP machine).
What steps are involved in hosting web pages on it that all the other computers can access with a browser?
I'd like to be able to access the server like localhost on a standalone machine, rather than just as Z:/file.htm, because I intend to install PHP.
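For a setup like this, installing Apache (or a bundle like XAMPP) on the XP "fileserver" and pointing it at a folder is most of the job; the other machines then browse to it by machine name. A sketch of the relevant httpd.conf lines, using Apache 2.2-style syntax, with the path as an example:

```apache
# Serve pages from a dedicated folder (not My Documents)
DocumentRoot "C:/intranet/htdocs"
<Directory "C:/intranet/htdocs">
    Order allow,deny
    Allow from all
</Directory>
```

The other workstations would then use http://fileserver/ (the machine's network name) instead of the Z: drive, which is exactly what PHP needs, since PHP pages must be requested through the web server rather than opened off a mapped drive.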
I recently upgraded my Apache 2.2.22 installation on Windows 8.1 to 2.4.9, making all necessary changes (I believe) to the conf files. I am puzzled that two files of the form authdigest_shm.xxxx now appear in my logs directory when the server is restarted. (Edit: there is also no httpd.pid file.) I assume this has to do with running digest authentication, but it is a new phenomenon since the upgrade. What conf file setting(s) have I screwed up?
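Those authdigest_shm.* files are mod_auth_digest's shared-memory files. In 2.4 such runtime files are placed under the directory given by DefaultRuntimeDir (a directive added in 2.4.2), which falls back to the logs/ tree when unset, so this is likely not a misconfiguration at all. If you'd rather keep them out of logs/, something like the following in httpd.conf should do it (the path is an example):

```apache
# Assumption: Apache 2.4.2+; relocate runtime files out of logs/
DefaultRuntimeDir "c:/Apache24/runtime"
```

The same directive affects where httpd.pid is expected, which may also explain the pid-file observation.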
Is it possible to make these two work together? I can't seem to find any way to let Apache read /home/<username>/public_html without disabling selinux entirely.
I know you can do "chcon -t httpd_sys_content_t -R $HOME/public_html", but it seems like it would be a pain when adding users, especially if someone decides to delete their public_html and make a new directory.
Is it possible to create an exception to let httpd do whatever it wants?
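Rather than relabeling by hand with chcon (which a relabel can undo), the usual approach is to flip the homedir boolean and register a persistent file-context rule, so a recreated public_html labels correctly after a restorecon. A sketch, assuming the semanage tool (policycoreutils) is installed:

```shell
# Allow httpd to read user home directories at all
setsebool -P httpd_enable_homedirs on
# Persistent labeling rule covering every user's public_html
semanage fcontext -a -t httpd_sys_content_t "/home/[^/]+/public_html(/.*)?"
# Apply labels now (and rerun if a user recreates the directory)
restorecon -R /home
```

As for letting httpd "do whatever it wants": `semanage permissive -a httpd_t` puts only the httpd domain into permissive mode while the rest of SELinux stays enforcing, but the boolean-plus-fcontext route above is the safer answer.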
Currently, Plesk only has the ability to make a full backup of the data. This is quite an issue, since I have accounts on my server that are over 280 GB. Making a full daily backup of that while giving clients a 10-day history isn't viable.
Is there perhaps a way one can make differential backups?
I'm curious to see who here runs time services for their network / their machines. Also, if you do run time services, do you use a Stratum 0 time source (GPS, WWVB, DCF77, CDMA, et al) or do you just sync with pool.ntp.org? Is your NTP server in pool.ntp.org?
What I'm really curious to find out is if anyone here provides stratum 1 time sources (a time source that is directly sync'd to an external reference clock, like a GPS).
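For readers who just want to sync against pool.ntp.org rather than run a reference clock, the client side is only a few lines of /etc/ntp.conf (the driftfile path is the common Debian default; adjust for your distro):

```
# Use four pool servers; iburst speeds up the initial sync
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift
```

A machine synced this way is typically stratum 2 or 3; stratum 1 means syncing directly to a reference clock such as a GPS receiver, as described above.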
I noticed that there are huge pings to my server from time to time. Example:
64 bytes from HOSTNAME (server-IP): icmp_seq=0 ttl=60 time=2.93 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=1 ttl=60 time=2.70 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=2 ttl=60 time=1901 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=3 ttl=60 time=899 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=5 ttl=60 time=2.69 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=6 ttl=60 time=2.62 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=4 ttl=60 time=2132 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=8 ttl=60 time=2.57 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=7 ttl=60 time=1190 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=10 ttl=60 time=2.65 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=9 ttl=60 time=1048 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=12 ttl=60 time=2.74 ms
64 bytes from HOSTNAME (server-IP): icmp_seq=11 ttl=60 time=1205 ms
At first I thought it was network-related, but the strangest thing for me was that I did not have any packet loss.
Then I tried to ping other hosts from my server. The situation was the same: some pings were good and some were huge (700 ms, 800 ms, even 2000 ms).
I checked cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max, and it was 65536.
Then I checked cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count, and it was ~1600, so normal.
I did not see such dropped-packet counts on any of my other servers. Here, the dropped counter for RX was constantly increasing.
So I decided to restart all services on the server. After restarting the network and IP aliases, the problem disappeared. The RX dropped counter is still rising, but I no longer have any slowdowns on the server, and pings are normal.
My question is: does anyone have an idea what could cause this problem, and how can I prevent it in the future?
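To watch the RX dropped counter described above without eyeballing ifconfig, /proc/net/dev can be parsed directly. A small sketch in Python (the column positions follow the standard /proc/net/dev layout; the sample numbers are made up):

```python
def rx_dropped(proc_net_dev: str) -> dict:
    """Map interface name -> RX dropped count, given /proc/net/dev text."""
    counts = {}
    for line in proc_net_dev.splitlines()[2:]:  # skip the two header lines
        if ":" not in line:
            continue
        iface, data = line.split(":", 1)
        fields = data.split()
        counts[iface.strip()] = int(fields[3])  # 4th receive field is "drop"
    return counts

# Example with a captured snippet (numbers are made up)
sample = """Inter-|   Receive
 face |bytes    packets errs drop fifo frame compressed multicast
    lo:  104014    1024    0    0    0     0          0         0  104014 1024 0 0 0 0 0 0
  eth0: 5501304   44211    0  317    0     0          0         0  991284 9307 0 0 0 0 0 0"""
print(rx_dropped(sample))
```

Polling this every few seconds (reading the real /proc/net/dev) and diffing the counts would show whether the drops correlate with the latency spikes.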
I am wondering how powerful of a computer/server I'm going to need for a project for work.
The server will only be accessible over the company's LAN, and it will most probably need to run Windows, because that's what the rest of the system runs, even though I'd rather have it on Linux. The server will be running Apache, PHP, and MySQL.
It will need to be accessed by around 100-200 workstations (200 at the absolute maximum).
They don't need to write to the database, just read, so possibly only 1 to 2 MySQL queries per page. PHP will of course be used to generate the pages.
So how powerful a server, dedicated just to this, would be needed?
I realize this might not be the right forum for this, but the people here are so helpful.
How does a hosting provider differentiate between network and server uptime?
In Serverpoint.com's policies, I read that they offer a 99.95% uptime guarantee: "We guarantee that 99.95% of the time your web site will be accessible via IP address to the world."
I have two Apache web servers on one network, behind one static IP. Both Apache servers are installed on Ubuntu 12.04. The first web server's setup: hostname apache, domain name test.com. On this server I run my website, email, and 2 PHP web apps. Last night I set up a second, separate Apache server (its own PC, also Ubuntu 12.04) as a cloud server (ownCloud): hostname cloud, domain name cloud.test.com. My question is how I can access both servers via port 80 from the outside world. Right now I can only access Server 1 from the web.
I NAT port 80 to both static LAN addresses on the network. I use pfSense for the router. I try to reach my second server with cloud.test.com.
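Since only one internal host can answer a given external port, the usual fix is to forward port 80 to Server 1 only and let it reverse-proxy requests for cloud.test.com to the second box by hostname. A sketch for Server 1, assuming mod_proxy and mod_proxy_http are enabled and that 192.168.1.11 stands in for the cloud box's actual LAN address:

```apache
<VirtualHost *:80>
    ServerName cloud.test.com
    ProxyPreserveHost On
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>
```

On pfSense, keep a single port-80 NAT rule pointing at Server 1; NATing the same external port to two hosts at once cannot work.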