I'm wondering if there is any way to forward an external IP to an internal subnet without NAT.
I have a server configured with a 10.0.100.101 IP, and the L3 switch doesn't support NAT, so I can't reach it right now without manually changing the IP on the NIC to a public address.
I have a linux router with 2 external and 2 internal ports.
Each external port needs to route traffic to one of the internal ports, and traffic between the two internal ports should not go out the external ports.
The IPs on the internal networks are globally routable, i.e. no NAT is required.
I think what I need is this:

$ext_net1 = external network IP/MASK 1
$EXT_IP1  = IP of external interface 1
$ext_net2 = external network IP/MASK 2
$EXT_IP2  = IP of external interface 2
$int_net1 = internal network IP/MASK 1
$int_net2 = internal network IP/MASK 2

ip route add $ext_net1 dev eth0 src $EXT_IP1 table 1
ip route add default via $ext1_gw table 1
ip route add $int_net1 dev eth1

ip route add $ext_net2 dev eth2 src $EXT_IP2 table 2
ip route add default via $ext2_gw table 2
ip route add $int_net2 dev eth3

ip rule add from $int_net1 table 1
ip rule add from $int_net2 table 2
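Once the rules are in place, a quick way to sanity-check which table a given flow will use is `ip route get` with an explicit source. This is just a verification sketch; `$some_dst` and `$int1_host` are placeholders of my own for any external destination and any host on the first internal network:

```
ip rule show
ip route get $some_dst from $int1_host iif eth1
```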
I have a Webmux load balancer and behind that a Cisco PIX. Behind that I have several servers. The Webmux and the PIX do double NAT, so the servers have public IPs.
The problem is that I've added a 4th server. I added it to the Webmux, and it gets NATted to a 192.168.x.x IP. Now I just need to add it to the Cisco PIX, NATting it back to the real IP, BUT the PIX can only have one IP on its inside interface, and the server's IP is not on the same subnet as that IP.
So when I try to add the real IP it asks me how to route it....
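For what it's worth, I think what the PIX is asking for is a static route on the inside interface pointing the off-subnet server network at the Webmux. A rough sketch in PIX 6.x syntax (which may not match your version), with entirely made-up addresses: 192.168.1.x as the NATted range, 192.168.0.2 as the Webmux's address on the PIX's inside subnet, and 203.0.113.14 as the real public IP:

```
static (inside,outside) 203.0.113.14 192.168.1.14 netmask 255.255.255.255
route inside 192.168.1.0 255.255.255.0 192.168.0.2 1
```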
I'm trying to implement VLANs on my network and can't get connectivity to host servers. Here's how the network is configured. Pardon the bad ascii diagram.
In this example my upstream is providing two subnets:
111.111.111.16/28 (I'm using an IP from this subnet to manage the 3550)
222.222.222.16/29
I am attempting to subdivide the /29 into two /30s in order to place a server into its own /30 subnet & VLAN.
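The arithmetic of that split can be sanity-checked with Python's standard ipaddress module (a sketch; 222.222.222.16/29 is the example prefix above):

```python
import ipaddress

# The upstream-assigned /29 from the example above.
block = ipaddress.ip_network("222.222.222.16/29")

# Splitting a /29 yields exactly two /30s, each with two usable hosts.
halves = list(block.subnets(new_prefix=30))
for net in halves:
    print(net, "usable:", [str(h) for h in net.hosts()])
```

Each /30 then gets its own VLAN and SVI, with one usable address for the switch's SVI and one for the server.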
I have a server that has multiple IPs, one of which I'm using for a VM that is bridged.
The issue is that, internally, that IP is trying to point to itself rather than to the bridged NIC (which is logically a whole other server plugged into the same switch).
I think I know why, I just don't know how to fix it. This is the config file for the ranges:
We are looking to expand our existing DNS setup with nodes in North America and Asia.
Therefore, we are searching for ISPs that can provide dedicated servers and route an existing (RIPE PI) IP range to those servers, which will be anycasted for DNS service.
What company would be able to provide that service?
I put together a router running Zebra (yes, I know, I should have used Quagga) with a few public IP addresses, taking in a full BGP table.
There is a Win2k3 server behind the router running routing and remote access for VPN clients to connect to. Our team's project was to get the win2k3 server VPN clients out onto the public internet with public IP addresses.
I installed another NIC in the Win2k3 server and connected it to the router, assigning both the router and the server private IP addresses. Each is pingable from the other.
I then had a VPN client connect in, RRAS assigned the client a public IP address, the router was able to ping the VPN client and so was the Win2k3 server.
I tried pinging the VPN client from another machine on the network with its default gateway pointed at the router, and got no response.
Is there something I don't know about with Zebra and Redhat?
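For anyone following along, my current suspicion is a missing return route and/or proxy ARP on the Linux/Zebra box. This is the sort of thing I'm planning to try (a sketch only; the pool 198.51.100.0/28, the server address 192.168.10.2, and the interface eth0 are hypothetical stand-ins):

```
# route the public pool RRAS hands out back via the Win2k3 server's private IP
ip route add 198.51.100.0/28 via 192.168.10.2

# answer ARP for those addresses on the outward-facing interface
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
```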
I'm experiencing some odd issues. I have a cPanel setup; the server is listening on port 2086, but on port 80 it fails to listen. Apache is running and no errors appear in the error log.
Running ifconfig shows errors and dropped packets. I was changing IP routes earlier that day, but all seems fine...
Oddly, I can ping internally on the network, and a number of other servers in the broadcast range respond fine; however, pinging Google or anything outside the data center fails.
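In case it helps, these are the checks I've been running so far (8.8.8.8 is just an arbitrary outside address):

```
# confirm what Apache is actually bound to
netstat -tlnp | egrep ':(80|2086)'

# look for a missing or broken default route
ip route show
ip route get 8.8.8.8
```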
I have the following problem with a CentOS server:
The main IP of the server is yyy.zzz.www.qqq
We've just purchased 3 additional IPs: aaa.bbb.ccc.100, aaa.bbb.ccc.101, aaa.bbb.ccc.102.
First, all outgoing traffic used aaa.bbb.ccc.100, but after deleting the gateways from the additional IPs it seemed to work fine, until we found out the following:
Now all traffic to aaa.bbb.ccc.XXX uses aaa.bbb.ccc.100 as outgoing IP.
What command would change this to use our main IP?
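From what I've read, the outgoing IP for locally originated traffic comes from the `src` hint on the matching route, so something like this might be the command (a sketch; I'm assuming the extra IPs live on an aaa.bbb.ccc.0/24 connected to eth0, which may not match our actual netmask):

```
# re-add the connected route with the main IP as the preferred source
ip route change aaa.bbb.ccc.0/24 dev eth0 src yyy.zzz.www.qqq
```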
I am setting up a small CCNA lab. I have RIP working and I can ping my LAN from both routers, but only certain hosts on the LAN from one of the routers. The setup is:
I've been doing some traceroutes between Chicago and Dallas. Tracing from Chicago -> Dallas, I go through Denver almost 100% of the time. Tracing from Dallas -> Chicago, I go through Denver or Atlanta before routing to Chicago.
Is this normal? Looking at the Level 3 network map, there seem to be several much shorter routes.
I run a game server at The Planet, and a lot of people have huge routing issues where their route randomly changes; when it does, they get horrible packet loss and lag. It's totally random: one day it may happen to me while it's not happening to someone else, then it will switch. But it's definitely the host and not our home connections, as it affects about half the server at any given time; it just picks different people.
Just wondering if anyone who uses The Planet has had issues like this. I've pretty much debugged everything and tried everything to no avail, and of course their support just said it's not at their end (ISPs of any type say that regardless of the situation).
This is what a typical traceroute looks like:
Code:
  3     9 ms     9 ms    19 ms  GE-2-1-ur01.N3Alpharetta.ga.atlanta.comcast.net [68.86.110.17]
  4     8 ms    12 ms     7 ms  68.86.106.133
  5     8 ms    14 ms    13 ms  68.86.106.129
  6     9 ms     8 ms    19 ms  68.86.106.125
  7     9 ms     7 ms     8 ms  68.86.106.13
  8    22 ms     7 ms     8 ms  68.86.106.9
  9    11 ms    11 ms     8 ms  68.86.90.121
 10    29 ms    21 ms    39 ms  te-0-7-0-0-cr01.nashville.tn.ibone.comcast.net [68.86.84.65]
 11    31 ms    66 ms    30 ms  te-0-0-0-4-cr01.chicago.il.ibone.comcast.net [68.86.84.77]
 12    50 ms    41 ms    56 ms  68.86.84.17
 13    44 ms    45 ms    53 ms  68.86.85.38
 14    53 ms    49 ms    50 ms  68.86.85.45
 15    49 ms    51 ms    59 ms  te-7-3.car1.Washington1.Level3.net [63.210.62.57]
 16    57 ms    53 ms    54 ms  ae-32-52.ebr2.Washington1.Level3.net [4.68.121.62]
 17    79 ms    93 ms    86 ms  ae-2.ebr2.Chicago1.Level3.net [4.69.132.69]
 18     *        *      103 ms  ae-1-100.ebr1.Chicago1.Level3.net [4.69.132.41]
 19   115 ms   110 ms   126 ms  ae-3.ebr2.Denver1.Level3.net [4.69.132.61]
 20   125 ms   178 ms   126 ms  ae-1-100.ebr1.Denver1.Level3.net [4.69.132.37]
 21   132 ms   128 ms    *      ae-2.ebr1.Dallas1.Level3.net [4.69.132.106]
 22   141 ms   130 ms   131 ms  ae-14-55.car4.Dallas1.Level3.net [4.68.122.144]
 23   130 ms   140 ms   129 ms  THE-PLANET.car4.Dallas1.Level3.net [4.71.122.2]
 24   130 ms   141 ms   130 ms  te7-2.dsr02.dllstx3.theplanet.com [70.87.253.26]
 25    *      130 ms   134 ms   vl42.dsr02.dllstx4.theplanet.com [70.85.127.91]
 26   135 ms   138 ms    *      gi1-0-1.car11.dllstx4.theplanet.com [67.19.255.42]
 27   127 ms   135 ms   133 ms  a.c4.1343.static.theplanet.com [67.19.196.10]
Another:
Code:
 4  209.226.50.77 (209.226.50.77)  49.145 ms  46.724 ms  47.563 ms
 5  142.46.7.1 (142.46.7.1)  55.852 ms  56.377 ms  55.110 ms
 6  142.46.128.53 (142.46.128.53)  59.420 ms  56.865 ms  59.141 ms
 7  142.46.128.5 (142.46.128.5)  59.277 ms  61.681 ms  59.702 ms
 8  ge-1-1-0.ar1.YYZ1.gblx.net (64.212.16.81)  59.951 ms  58.555 ms  58.397 ms
 9  por4-0-0-10G.ar2.DAL2.gblx.net (67.17.105.38)  95.604 ms  98.524 ms  97.206 ms
10  The-Planet.GigabitEthernet7-3.ar2.DAL2.gblx.net (64.208.170.198)  252.656 ms  251.881 ms  251.271 ms
11  te7-2.dsr01.dllstx3.theplanet.com (70.87.253.10)  253.416 ms
    te9-2.dsr02.dllstx3.theplanet.com (70.87.253.30)  252.040 ms
    te7-2.dsr02.dllstx3.theplanet.com (70.87.253.26)  251.873 ms
12  vl41.dsr01.dllstx4.theplanet.com (70.85.127.83)  255.683 ms
    vl42.dsr02.dllstx4.theplanet.com (70.85.127.91)  257.144 ms
    vl41.dsr01.dllstx4.theplanet.com (70.85.127.83)  263.597 ms
13  gi1-0-1.car11.dllstx4.theplanet.com (67.19.255.42)  259.076 ms
    gi1-0-2.car11.dllstx4.theplanet.com (67.19.255.170)  262.143 ms
    gi1-0-1.car11.dllstx4.theplanet.com (67.19.255.42)  263.775 ms
14  a.c4.1343.static.theplanet.com (67.19.196.10)  264.516 ms  265.046 ms  264.407 ms
Actually, if anyone is interested in looking into this more, I posted a thread here, but that's not needed. I just want to know if anyone else has had issues like this with The Planet. The only thing I can think of right now is switching hosts, but that's an expensive process, since I'll be paying for two hosts during the transition.
Our colo has two carriers, call them A and B. I have discovered the colo provider is round-robining traffic out its two carriers on a per-packet basis, not per flow.
Assume we want to reach destination IP a.b.c.d.
%> traceroute -q5 a.b.c.d
Results show that at the hop leaving the colo's border router, some packets transit Carrier A and some Carrier B, to the same destination IP, during the same traceroute.
Is this a routing Best Practice, or am I correct in thinking this is the Lazy Man's way of load balancing across multiple circuits, multiple carriers? BGP route selection does not seem to apply here (i.e., either Carrier A or Carrier B but not both at the same time).
I'm wondering if anyone is using the Internap FCP technology to optimize routing. I'd like feedback on it, since I want to deploy this solution to get better traffic routing.
Also, is anyone using Avaya? I have looked on their website, but I found no information about their routing optimizer. Basically I want to go beyond normal BGP, since I will be deploying VoIP services soon.
We need traffic to a certain subnet to go out via a second interface IP, rather than the main IP.
I.e., eth0 has IP x.x.x.x and eth0:1 has IP x.x.x.y (on the same subnet). I want traffic to z.z.z.z to go out with a source of x.x.x.y rather than x.x.x.x like all the other traffic.
However, when I add the route and specify the device eth0:1, it accepts it but the route goes into the routing table as eth0, whether I do it through network-scripts/route-eth0:1 or route add -host z.z.z.z gw a.b.c.d dev eth0:1.
When I ping with -I eth0:1 it works, so the idea is sound; I just don't want to have to specify the interface in the application, but rather handle it in the routing table.
This is on CentOS 5 under Xen but I've tested on CentOS 4 under Virtuozzo too and it's the same.
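From the iproute2 documentation, the source address can be pinned per-route with a `src` hint, which sidesteps the alias-device problem entirely. Using the same placeholders as above:

```
ip route add z.z.z.z/32 via a.b.c.d src x.x.x.y
```

The alias eth0:1 isn't a real device as far as the routing table is concerned, which would explain why the route always shows up as eth0.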
I have an Exchange server with the Webmin interface activated, and port 80 forwarded on the router to this server for Webmin. Management wants to upgrade our data service and move the web server in-house.
So I'll need to set up 2 websites on two servers behind the same IP.
Server 1: Windows. accepts domain mail.domain.com
Server 2: Linux. accepts www.domain.com
I'm figuring I'll need to make changes at the router level, and I have a decent cisco router. What do I need to do? Point me in the right direction for googling.
Would it be easier to use 2 IPs? Both would come over the same line, how would I handle that on the router level?
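One approach I've been reading about is keeping the single IP and having the Linux box reverse-proxy by hostname, so the router only ever forwards port 80 to one server. A minimal Apache sketch (assuming mod_proxy is loaded and that 192.168.1.10 is the Windows box's LAN IP, both of which are my own placeholders):

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName mail.domain.com
    ProxyPass / http://192.168.1.10/
    ProxyPassReverse / http://192.168.1.10/
</VirtualHost>

<VirtualHost *:80>
    ServerName www.domain.com
    DocumentRoot /var/www/html
</VirtualHost>
```

With two IPs, the Cisco side would instead be two static NAT entries, one per inside server, and no proxying would be needed.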
We're making a major move to offer some new services, so we're going to have a couple of racks in a datacenter and will be running our own network. So far I've purchased a 2821 router and a 3750G-24-T-S switch. I've got an ethernet drop from Cogent and we're set up for static routing.
I'm waiting on pricing from Sprint, Verizon and Level3 for a second ethernet line, and I'd like to multihome between them. It's been a while since I've done much cisco configuration, so I'm a little rusty. I know I'll have to run BGP between my routers and the provider, and IBGP between the two routers.
I'm looking for suggestions to run internally. We'll be offering standard shared hosting, a vps/cloud solution, dedicated servers and standard colocation.
Some people have suggested running each of my different products on a separate VLAN, but I'm worried that would waste IP addresses, since one is needed for the default gateway of each. For instance, if I give a /29 to my dedicated servers or my higher VPS plans, it uses up 8 IP addresses, but after the network address, broadcast address, and gateway the customer only gets 5 usable. That seems unavoidable, unless I'm thinking about this incorrectly.
So as for my main question... If I did it that way, would I be ok to just run EIGRP on the 3750 to route between vlans, and then redistribute a default route from BGP into EIGRP?
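To make the question concrete, this is roughly the shape of config I have in mind (AS and process numbers, networks, and metrics are all made up, and this is untested):

```
! on the 2821, which speaks BGP upstream
router eigrp 10
 network 10.0.0.0
 redistribute bgp 65001 metric 100000 100 255 1 1500

! on the 3750, routing between the product VLANs via SVIs
ip routing
router eigrp 10
 network 10.0.0.0
```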
I have spent the last few days setting up a low-end VPS server as a VPN host, using OpenVPN on CentOS 5.
I've got everything set up, but one last (and most critical) component is still not working correctly.
Basically, what I need is for users, once they're on the VPN, to be able to browse the internet through the VPN under the server's IP address rather than their own dynamic address. I was told this VPN setup was the way to do it. However, right now when I connect to my VPN I can browse that specific server, but cannot access any other websites at all.
If it would help to see my config files, please let me know and I'll post them. I'm really itching to have this up and running.
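For reference, my understanding is that the server also needs IP forwarding and NAT for the tunnel subnet; this is what I've been trying (a sketch assuming OpenVPN's default 10.8.0.0/24 tunnel network and a public interface of eth0):

```
# let the kernel forward packets between tun0 and eth0
echo 1 > /proc/sys/net/ipv4/ip_forward

# rewrite VPN clients' source addresses to the server's public IP
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
```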
This Yahoo deferral business is just getting out of hand. I applied several weeks ago for whitelist status, and my issues finally went away for a little over a week (though I never received a response to my Postmaster requests). But then today -bam- 100% deferrals for going on 18 hours now; not a single message has gone through. And naturally no two Yahoo servers give me the same error message.
So...
At this point I'm ready to contract out my Outbound mail to Yahoo through a whitelisted 3rd party until I can get this resolved on my end. Would this be reasonable? Is anyone else doing this? I worked with an outsourced SMTP provider in another life for an internal company mailing list with good success.
Has anyone else noticed some weird Savvis routing in the NY/NJ area since their maintenance on Friday night?
I'm in NY and a trace to the NJ1 datacenter in Jersey City NJ shows:
Code:
  1    <1 ms    <1 ms    <1 ms  10.0.0.3
  2    24 ms    23 ms    23 ms  10.32.37.1
  3    26 ms    24 ms    23 ms  at-3-1-1-1732.CORE-RTR1.NY325.verizon-gni.net [130.81.11.173]
  4    24 ms    24 ms    24 ms  130.81.20.176
  5     *       30 ms    30 ms  0.so-3-1-0.XT1.NYC9.ALTER.NET [152.63.10.37]
  6    32 ms    75 ms    34 ms  0.so-4-2-0.XL3.NYC4.ALTER.NET [152.63.0.213]
  7    32 ms    32 ms    32 ms  0.so-6-2-0.BR1.NYC4.ALTER.NET [152.63.3.149]
  8    32 ms    32 ms    32 ms  bcs1-so-5-1-0.NewYork.savvis.net [204.70.1.5]
  9    34 ms    32 ms    32 ms  cr1-pos-0-0-5-2.Washington.savvis.net [204.70.195.1]
 10    32 ms    32 ms    32 ms  204.70.197.5
 11    33 ms    33 ms    33 ms  204.70.197.14
 12    33 ms    32 ms    32 ms  hr2-tenge-13-2.Weehawkennj2.savvis.net [216.35.78.6]
 13    32 ms    33 ms    32 ms  204.70.196.74
 14    33 ms    33 ms    33 ms  204.70.196.78
 15    32 ms    32 ms    32 ms  bhr2-ge-5-0.JerseyCitynj1.savvis.net [204.70.196.86]
 16    33 ms    32 ms    32 ms  csr22-ve241.Jerseycitynj1.savvis.net [216.32.223.51]

Why are the packets going from New York to Washington to Weehawken and then to Jersey City? Also, what are those four unnamed nodes at hops 10, 11, 13 and 14?
I'm also getting 200ms+ ping times and 13% loss to/from our offsite VPSs
[root@offsite ~]# traceroute 216.32.223.51
 1  eqash79.keepitsecure.net (69.65.111.117)  0.173 ms  0.125 ms  0.063 ms
 2  r02.iad.defenderhosting.com (69.65.112.2)  3.440 ms  0.345 ms  0.290 ms
 3  ge2-10.as.eqxashva.aleron.net (205.198.14.245)  0.473 ms  0.554 ms  0.482 ms
 4  ber1-ge-8-10.virginiaequinix.savvis.net (208.173.52.105)  0.591 ms  0.567 ms  0.438 ms
 5  cpr2-ge-5-0.virginiaequinix.savvis.net (204.70.193.101)  0.588 ms  0.606 ms  *
 6  bcs2-so-2-0-0.washington.savvis.net (204.70.193.153)  119.863 ms  3.639 ms  3.378 ms
 7  cr1-tengig-0-7-0-0.Washington.savvis.net (204.70.196.105)  198.659 ms  201.783 ms  *
 8  bcs2-so-2-0-0.NewYork.savvis.net (204.70.192.2)  202.751 ms  195.501 ms  *
 9  *  dcr3-ge-0-2-1.newyork.savvis.net (204.70.193.98)  201.978 ms  198.180 ms
10  204.70.197.5 (204.70.197.5)  7.627 ms  6.984 ms  6.196 ms
11  204.70.197.14 (204.70.197.14)  6.822 ms  6.534 ms  6.460 ms
        MPLS Label=1640 CoS=5 TTL=1 S=0
12  hr2-tenge-13-2.Weehawkennj2.savvis.net (216.35.78.6)  6.752 ms  6.634 ms  6.509 ms
        MPLS Label=66 CoS=5 TTL=1 S=0
13  204.70.196.74 (204.70.196.74)  7.550 ms  6.600 ms  6.479 ms
        MPLS Label=339 CoS=5 TTL=1 S=0
14  204.70.196.78 (204.70.196.78)  6.607 ms  6.633 ms  6.482 ms
        MPLS Label=339 CoS=5 TTL=1 S=0
15  bhr2-ge-5-0.JerseyCitynj1.savvis.net (204.70.196.86)  198.841 ms  *  201.303 ms
16  csr22-ve241.Jerseycitynj1.savvis.net (216.32.223.51)  196.147 ms  *  199.857 ms

The second trace shows that there is a path between New York and Weehawken without going through Washington, even though the first route went through Washington between NY and NJ. The only reason I can think of is that someone in Washington wants to see the traffic (wink wink)?
I've contacted Savvis, but got the stock response "Savvis’ backbone routers forward traffic through the optimal logical path within our network. Although the physical path may seem odd occasionally, it is actually the optimal path."
How do I configure Linux so that it allows two IP addresses on one machine?
I know this is possible because my server administrator set up one of my servers to have two different IP addresses, so that I could have a static and a dynamic HTTP daemon (two different daemons listening on different IPs).
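From what I can tell, it's just a secondary address on the same interface; a sketch with iproute2 (the address is a placeholder of mine):

```
# eth0 already has its primary address; add a second one
ip addr add 203.0.113.8/24 dev eth0
```

On older Red Hat-style systems the same thing is often done with an alias file such as ifcfg-eth0:0.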
We have a VPS service with ServInt to host only our website (www.ourdomain.com, for example) and not our email; the nameservers are also not on our VPS. This all works great, but it means that any emails sent from our VPS to xxx@ourdomain.com get routed to the VPS rather than to the email server, and so are being dropped.
Is there a way around this? ServInt support seem to be stumped; they have suggested that the only way to fix it is to set up nameservers on the VPS instead of where they are now.
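If the VPS is running cPanel/WHM with Exim (which I'm assuming here), I gather the usual fix is to mark the domain as remote rather than local, so Exim hands mail for it to the domain's MX instead of delivering locally:

```
# stop treating ourdomain.com as locally hosted mail
sed -i '/^ourdomain\.com$/d' /etc/localdomains
echo 'ourdomain.com' >> /etc/remotedomains
```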
I have a windows 2003 server machine. This machine has 3 internet IPs.
I'm running Microsoft Virtual Server and am going to be running 2 virtual machines, both running Linux (Ubuntu).
Ideally I would like the 2 VMs to use my 2 spare internet IPs. However, they are assigned to the NIC on the real Windows server. How can I route 2 of these IP addresses to the 2 VMs?
Bearing in mind that the Windows server has only 1 NIC (used for the internet).
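If the Virtual Server virtual network is bridged to the physical NIC, my understanding is that each Ubuntu guest can simply be configured with one of the spare public IPs directly. A sketch of /etc/network/interfaces inside a guest (all values hypothetical):

```
auto eth0
iface eth0 inet static
    address 203.0.113.21
    netmask 255.255.255.0
    gateway 203.0.113.1
```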
I have two RedHat EL 4 boxes linked via a cross-connect. One is a web server (10.0.0.3) and one is a mySQL server (10.0.0.2), the interface between them is eth1 on both machines and a second interface eth0 connects to the internet.
I want to use the web server to send queries to the database server via eth1, 10.0.0.2:3306 in this case. If I send a database query via eth1 there is a delay of about 10-20 seconds before the result comes back. If I send the same query to the database server but use its main IP instead of the internal IP, so that the query is sent over the internet (xx.xx.xx.xx:3306), the result comes back instantly.
Similarly, if I send a query from any remote server the result is instant.
Why should there be such a huge delay when sending a query directly through the cross-connect?
The routing table ( ip route show ) for the web server is:
xx.xx.xx.xx/xx dev eth0  proto kernel  scope link  src xx.xxx.xx.xx
10.0.0.0/24 dev eth1  proto kernel  scope link  src 10.0.0.3
default via xx.xx.xx.xx dev eth0
and the routing table on the database server is:
xx.xx.xx.xx/xx dev eth0  proto kernel  scope link  src xx.xx.xx.xx
10.0.0.0/8 dev eth1  proto kernel  scope link  src 10.0.0.2
default via xx.xx.xx.xx dev eth0
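One explanation I've come across for a delay of exactly this flavor is MySQL doing a reverse-DNS lookup on the client's 10.x address and timing out before answering; disabling name resolution on the database server is a quick way to test that theory (note that grant-table entries must then use IPs, not hostnames):

```
# /etc/my.cnf on the database server
[mysqld]
skip-name-resolve
```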