1) We have 3 web servers, each with IIS and ColdFusion. When updating the site, which setup is better:
a) upload the changed file to all 3 web servers, keeping them in sync
b) move the source files to our storage server, then change the site root on the web servers to point to a network share on the storage server
Main issue: Will the network latency of fetching the source files be a performance problem?
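For what it's worth, (a) is the option I already know how to script; a rough sketch of the kind of sync job I had in mind is below (run from a staging copy; WEB1-WEB3 and the paths are placeholder names, not our real ones), so the real question is whether (b) avoids this extra step without a latency penalty.

Code:
REM mirror the updated site from a staging copy to each web server's admin share
REM (WEB1/WEB2/WEB3 and the paths below are placeholders)
robocopy D:\staging\site \\WEB1\c$\inetpub\site /MIR /R:2 /W:5
robocopy D:\staging\site \\WEB2\c$\inetpub\site /MIR /R:2 /W:5
robocopy D:\staging\site \\WEB3\c$\inetpub\site /MIR /R:2 /W:5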
2) We have a storage server that will serve up some audio/video via HTTP. Which setup is better:
a) expose it to the Internet and serve it directly to the users via its own IIS
b) create a network share and let the 3 web servers serve the files
If you think long and hard about this issue you'll realize there are many pros and cons to each approach. I can't seem to make up my mind.
I've tried asking on the xen-users mailing list, but haven't received much response. So, I'm asking here.
I'm running Xen 3.1 with CentOS 5 64-bit on a Dell 2950 with 2 x 2.33 GHz quad-core CPUs. This is (or should be) a very powerful system. However, when running Xen, the performance drop is huge. The strange thing is that on the mailing list others were reporting much lower levels of performance loss. (Just to be clear, I'm using the XenSource-compiled kernel, etc.)
Without Xen running, my UnixBench results aren't too bad.
Code:
INDEX VALUES
TEST                                        BASELINE      RESULT    INDEX
Dhrystone 2 using register variables        376783.7  52116444.7   1383.2
Double-Precision Whetstone                      83.1      2612.0    314.3
Execl Throughput                               188.3     11429.1    607.0
File Copy 1024 bufsize 2000 maxblocks         2672.0    155443.0    581.7
File Copy 256 bufsize 500 maxblocks           1077.0     37493.0    348.1
File Read 4096 bufsize 8000 maxblocks        15382.0   1475439.0    959.2
Pipe-based Context Switching                 15448.6    548465.7    355.0
Pipe Throughput                             111814.6   3313637.0    296.4
Process Creation                               569.3     34050.6    598.1
Shell Scripts (8 concurrent)                    44.8      3566.8    796.2
System Call Overhead                        114433.5   2756155.3    240.9
                                                                =========
FINAL SCORE                                                         510.9

However, once I boot into Xen, the Dom0 performance drops a lot.
Code:
INDEX VALUES
TEST                                        BASELINE      RESULT    INDEX
Dhrystone 2 using register variables        376783.7  50864253.7   1350.0
Double-Precision Whetstone                      83.1      2617.9    315.0
Execl Throughput                               188.3      2786.5    148.0
File Copy 1024 bufsize 2000 maxblocks         2672.0    159749.0    597.9
File Copy 256 bufsize 500 maxblocks           1077.0     44884.0    416.8
File Read 4096 bufsize 8000 maxblocks        15382.0   1191772.0    774.8
Pipe-based Context Switching                 15448.6    306121.8    198.2
Pipe Throughput                             111814.6   1417645.2    126.8
Process Creation                               569.3      4699.2     82.5
Shell Scripts (8 concurrent)                    44.8       781.6    174.5
System Call Overhead                        114433.5   1021813.7     89.3
                                                                =========
FINAL SCORE                                                         261.6

Now, here is where it gets weird: the only running DomU, a paravirtualized (PV) CentOS 5 guest, gets a higher score than Dom0.
So would the load times be noticeably longer if I ran load balancers, had my web servers NFS-mounted to file servers / a SAN, and had them connect over the network to the database servers? It seems like a lot of network overhead to deal with.
I have a lot of international users connecting to my server. They are from all around the world, including the Philippines, Germany, etc. I'm thinking those users experience distance lag (latency?). How do I cut down on this distance lag? Should I upgrade from 10 Mbps to 100 Mbps, and would that do any good?
The server is up and running, so the server is not an issue. I therefore suspect there must be a DNS problem somewhere.
The common denominator among those that are having the problem is that they are using connections with high latency (e.g. Satellite). Could high latency be the problem? If yes, is there anything I can do so my users will stop having this problem?
I'd like to know if I could use remote desktop remotely, as I have about 350 ms latency to the server where I am planning to install it. I am planning to use the RemotelyAnywhere server.
One of my apps is based on querying the YouTube API. My response times are painfully slow: something like 2 to 3 seconds. The guys on the YouTube APIs Developer Forum suggested that the response time should be more like under 0.5 seconds.
Would you guys do me a favor and post your results for this command:
Okay, I have been trying to get a VPN set up between our DC and our office for weeks now and have not been successful.
Here are our goals:
- use 10.x.x.x/255.0.0.0 as a local backend network at our DC
- be able to assign a 10.x.x.x address to all workstations at the office and be able to access any of the local machines at the DC
- we have an Asterisk server that we use and want to run on the same network: the Asterisk box at the DC, the phones at the office
We want to implement this both for a lot of security procedures and for ease of use.
But I also want to have this at my house so I can still be on the VPN from there. I want my house, the office, and the DC always connected, and then also set up the ability to dial in remotely via VPN.
What would be the best way to accomplish this?
I have already tried a few Linksys RV082 and WRV54G units, but they require the remote and local networks to be different networks, so this will not work here.
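In case it helps frame the question, here is roughly the shape of the setup I have been picturing: a routed OpenVPN server at the DC with the office and house connecting as site clients. This is only a sketch under my own assumptions; the hostname, certificate names, and the 10.8.0.0/24 tunnel range are placeholders.

Code:
# server.conf on a Linux box at the DC (sketch only)
dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0          # tunnel addresses (placeholder range)
push "route 10.0.0.0 255.0.0.0"        # let the office/house reach the DC backend
client-config-dir ccd                  # per-site files with iroute entries for the office/house subnets
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
keepalive 10 120

# office.conf / house.conf on each remote site (sketch only)
client
dev tun
proto udp
remote vpn.example.com 1194            # placeholder hostname for the DC endpoint
ca ca.crt
cert office.crt
key office.key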
Does anybody know how I can determine which of the IPs within the network are in use? I know this can be achieved by pinging each of the IPs, but there are 256 of them (192.168.1.0 - 192.168.1.255). I am using CentOS 5.
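To show what I mean, this is the kind of thing I have been doing by hand; a minimal sketch of a scripted sweep (assuming a plain /24 like the one above, and that hosts answer ICMP):

Code:
#!/bin/bash
# crude ping sweep of 192.168.1.1-254; prints the addresses that answer
for i in $(seq 1 254); do
    ping -c 1 -W 1 192.168.1.$i > /dev/null 2>&1 && echo "192.168.1.$i is up"
done
# note: hosts whose firewalls drop ICMP will not show up
# with nmap installed, a single command does the same:
# nmap -sP 192.168.1.0/24

Is there a better tool for this, maybe one that also catches machines that block ping?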
I bought another dedicated server yesterday, and it was brought online the same day. It was working fine yesterday during a few site transfers, but now it appears that I am losing network packets.
I have done traceroute and ping tests, and the results are attached. Please, can anyone help? I think the problem lies with NTT's network rather than with the server itself, but could someone else please ping from their location to confirm this?
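For anyone willing to test, something like this run for a couple of minutes would show per-hop loss better than a plain ping (substitute my server's IP, which is in the attached results):

Code:
# report-mode mtr: 100 probes per hop, prints a per-hop loss summary
mtr --report --report-cycles 100 <server IP>
# plain ping with a fixed count, for comparison
ping -c 100 <server IP>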
I have some VPSes with KnownHost that I use for hosting purposes.
First, I'm not from USA.
Here in my country we have several ISPs, but one of them (I guess the biggest one) is having problems with its link to other countries (including the USA).
Many of my customers that use this ISP complain about their sites being down and also about slow download speeds (10 kb/s when they usually download at 200 kb/s). When they run a traceroute, I can see that the problem is related to the ISP.
I have already contacted the ISP, but they don't seem to "care" about their clients, and I guess they won't solve this in the near future.
My question is: is there a way to solve this problem on my own?
I was thinking about getting a link with another ISP (the one that actually works) with a static IP and routing it to the KnownHost VPS. I know this isn't a cheap solution, but is it possible?
To keep it simple: I have some bays with dedicated servers. We offer two options for bandwidth: billed per gigabyte or per Mbit/s, but I am having some problems. We currently use our ISP's router rather than buying a cheap, low-quality router of our own.
- How can I know how much bandwidth each customer uses, and how can I limit it if I have no access to the router?
- How can I prevent a customer from using a free IP in the same block as his own? We configure each server with an IP and the same subnet, gateway, and broadcast address, so one customer could simply take a free IP, and I would not even be able to tell who is doing it.
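For the bandwidth side of the first question, one approach I have been considering (feedback welcome) is to put a Linux box of our own in front of the rack and do the shaping and counting there, since I cannot touch the ISP's router. A minimal sketch with tc/HTB and iptables counters, assuming eth0 faces the customers and 1.2.3.4 stands in for one customer IP:

Code:
# HTB root qdisc with a default class for unclassified traffic
tc qdisc add dev eth0 root handle 1: htb default 30
# a 10 Mbit/s class for one customer
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
# put traffic towards the customer's IP (placeholder 1.2.3.4) into that class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 1.2.3.4/32 flowid 1:10
# per-IP byte counters for billing can come from iptables:
iptables -A FORWARD -d 1.2.3.4
iptables -L FORWARD -v -n -x   # shows packet/byte counts per rule

Would that be a reasonable setup, or is there a cleaner way without router access?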
This is the 4th day I am having network issues at Hivelocity.
Is anyone else here experiencing the same problem, or is it only the rack where my server is located?
I have been unable to use my server for almost 4 days, as I already said, and they still have no solution for me.
Every time I open a live chat with support, they tell me that they are checking, working on it, having someone look at it, etc., but the problem is still there.
What should I do?
I am going to post pings from SoftLayer and from my home to their main IP (their website's IP, where I see packet loss as well)
... because of this my websites are loading very slowly, and many people are complaining about it.
Since there are many experts on this forum, I would like some advice from you guys. I would like to stay with HVC if they can fix this; if not, it looks like I will have to look for another provider.
SoftLayer:

Code:
PING hivelocity.net (69.46.24.178) 56(84) bytes of data.
64 bytes from hivelocity.net (69.46.24.178): icmp_seq=0 ttl=119 time=30.4 ms
64 bytes from hivelocity.net (69.46.24.178): icmp_seq=1 ttl=119 time=30.0 ms
64 bytes from hivelocity.net (69.46.24.178): icmp_seq=2 ttl=119 time=29.9 ms
...
Let's say I have a computer network where the router is 192.168.1.1, 192.168.1.2 to 192.168.1.10 are in a workgroup called HOME, and 192.168.1.11 to 192.168.1.50 are in a workgroup called OFFICE.
All computers are Windows XP clients.
Now the question is: I'm sure that no one from the HOME workgroup can access the OFFICE workgroup, but what about viruses?
If a computer in the HOME workgroup is infected with a network-type virus, can that virus reach the computers in the OFFICE workgroup?
Is there any software out there to help me do a network install of CentOS or any other Linux-based OS? I want to avoid downloading and burning CDs for CentOS; when I want to install it on multiple machines, I would otherwise need to feed in several CDs on each one to do a complete install of the OS.
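The approach I keep reading about is PXE booting: a DHCP/TFTP server hands out pxelinux, which pulls the CentOS installer kernel over the network and can point it at a kickstart file so the whole install is hands-off. A minimal pxelinux.cfg/default sketch (the kickstart URL and the 192.168.1.10 web server are placeholders I would still have to set up):

Code:
# /tftpboot/pxelinux.cfg/default (sketch)
default centos
prompt 0
label centos
  kernel vmlinuz
  append initrd=initrd.img ks=http://192.168.1.10/ks.cfg
# vmlinuz and initrd.img come from the images/pxeboot directory of the CentOS tree

Is that the right direction, or is there a simpler tool that wraps all of this up?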
We have two servers: one in our office and the other in a colocation facility. There is a site-to-site VPN connection between them. I want to add the office server as a network place/drive on the colocation server, but I can't get it to work. I tried using the local IP of the office server, but that didn't work.
Then I tried \\OFFICESERVERNAME\SHARENAME and that didn't work either.
Office server local IP: 192.168.0.202
Colocation server local IP: 192.168.1.2
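For reference, this is the kind of mapping I have been attempting from the colocation server, run from a command prompt (SHARENAME and the account are placeholders):

Code:
REM map the office share to Z: by IP, supplying office-side credentials (prompts for the password)
net use Z: \\192.168.0.202\SHARENAME /user:OFFICESERVERNAME\Administrator *
REM to separate name resolution problems from routing problems, I also check raw connectivity first
ping 192.168.0.202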
I've been lurking and posting on these forums long enough that I figure it's time for me to contribute something that might help other users: a quick review of 2AM Networks.
I have 2 dedicated servers with them and I couldn't be happier. Network uptime is 100% as far as I can see, the servers are delivered within their given timeframes (despite me ordering during a sales rush both times - I'm clever like that ) and with exactly the specifications requested. Support is fantastic. I haven't needed it a great deal but all responses have been within 20-30 minutes and the answers are spot on.
You can see how good the prices are for yourselves in the dedicated offers forum or on their dedicated servers page, but I was pleasantly surprised to find that even the add-ons (IPs, etc.) are priced the same as at more expensive competitors.
So if anyone is researching dedicated servers, I'd recommend taking a look at 2AM Networks.
(I'll report my server IPs to give this review some credibility)
I have one VPS already and was just wondering what the best way to utilise it would be, and whether anyone has suggestions on how I could improve the network and how best to set it up. I already have cPanel and WHM installed on the existing VPS.
I was working on my site... and suddenly I can't access cPanel, WHM, or SSH, any of them.
When I run a tracert to my site, it gives me the following:
Code:
  1    30 ms   143 ms    56 ms  10.0.0.138
  2    35 ms     7 ms     7 ms  ASHAMS-R01C-C-EG [163.121.170.168]
  3    69 ms    67 ms    81 ms  host-163.121.197.234.tedata.net [163.121.197.234]
  4   117 ms    16 ms     9 ms  host-163.121.183.137.tedata.net [163.121.183.137]
  5     9 ms     8 ms     8 ms  host-163.121.184.209.tedata.net [163.121.184.209]
  6     9 ms     9 ms     9 ms  host-163.121.186.253.tedata.net [163.121.186.253]
  7    44 ms    66 ms    66 ms  host-163.121.202.129.tedata.net [163.121.202.129]
  8   308 ms   251 ms   188 ms  pal5-telecom-egypt-1.pal.seabone.net [213.144.181.73]
  9    86 ms    85 ms    86 ms  mil52-mil26-racc2.mil.seabone.net [195.22.196.183]
 10    86 ms    86 ms    86 ms  ge-0-0-0-0.mil19.ip4.tinet.net [213.200.68.145]
 11    91 ms   130 ms   124 ms  xe-7-3-0.lon20.ip4.tinet.net [89.149.187.218]
 12    91 ms   106 ms    91 ms  rapidswitch-gw1.ip4.tinet.net [213.200.79.210]
 13     *        *        *     Request timed out.
 14     *        *        *     Request timed out.
 15     *        *        *     Request timed out.
 16     *        *        *     Request timed out.
 ...
 30     *        *        *     Request timed out.
What is the problem?
After that, the site came back after about 10 minutes; this has happened to me 3 times today.
It happens to more than one person, so how can I check it?
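When it happens again, I was thinking of asking the affected users to run something like this and post the output, so I can compare the paths (Windows commands; the domain is just a stand-in for my site):

Code:
REM pathping combines tracert with per-hop loss statistics gathered over a few minutes
pathping www.example.com
REM and a plain tracert for comparison
tracert www.example.com

Would that be enough to tell whether the problem is on my host's side or somewhere along that route?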
Both of the VPSes we have with them have been offline for more than 15 minutes now. As usual, their support has been incredibly fast, and I have heard back from a NOC staffer that they are experiencing network issues, but I want to see if I can find out how widespread this is.
If you have services with ServInt, are your machines currently unreachable?