Now that you've been introduced to the benefits of DMVPN, how do you set up your own implementation? Follow along and find out in this video.
- [Instructor] We're going to take a look here at the configuration of DMVPN. In our topology, you can see that we have an HQ router, which is going to be our DMVPN hub, and that's connected over the internet to three branch locations, labeled Branch A, B, and C. The HQ router again will be the DMVPN hub, and those three branch locations will be DMVPN spokes. We also have PC one at branch A with the IP address of 22.214.171.124, and PC two at branch C with the IP address of 126.96.36.199. You can see that we have several different subnets here, and we also have a note in that topology telling us that our GRE tunnel interface addresses need to use the 172.16.10.0/24 subnet. Now, you see that we have a cloud in this topology representing the public internet, and in my physical topology, I have a router labeled ISP that's acting as the public internet. I have EIGRP configured on the hub router, as well as the three branch routers. On the hub and spokes, I've added the GRE address space of 172.16.10.0/24 into the EIGRP network list. And on branches A and C, I've also added the private networks dedicated to the PCs into the EIGRP network list as well. So let's see what we currently have in place. Let's connect into the ISP router, which again is acting as the public internet here. Let's say show run, piped to the section beginning with interface. This is going to get us straight into our interface configuration output, and you can see that we have four different interfaces configured here. We have gig 0/1, with a description telling us this is connecting to the HQ router. We have gig 0/2, connecting to branch A, 0/3 going to branch B, and 0/4 going to branch C. So we have a description for all of those, letting us know what they do. Gig 0/1, again, connects to our HQ router, and you can see that this has an IP address of 10.1.1.2, so it shares the same subnet as the HQ router.
If we go over to our HQ router and we say show ip route static, you can see that I have a default route on this HQ router set to 10.1.1.2. This is the IP address of the interface on the ISP router. So that means that all of our traffic is going to go out to the ISP router, out to the public internet, as the next hop if there isn't a more specific route found in the routing table. Next, still on this HQ router, let's create a new tunnel interface for our hub. So we're going to start here with the DMVPN hub configuration. By default, a tunnel interface will use GRE encapsulation. So let's create this tunnel interface by going under global configuration mode, and we'll say interface tunnel. And if we look at contextual help, you can see that we need to give that a tunnel number. I'm just going to make this tunnel zero in my case. And now we're under tunnel interface configuration mode. The next step is to identify the local source of the tunnel. So I'll say tunnel source, and we can follow that with either the actual IP address of the interface that would be going out to the public internet or the interface number. Either way will work, but I typically prefer to use the interface number myself. Now, when we start spoke configuration, if our spokes had dynamically assigned IP addresses on their interfaces, which is quite possible, you can see that this would be a case where we would not want to use the IP address for configuration. If the IP address changed, as they tend to do from time to time with dynamic configuration, that's going to bring that tunnel down. So generally I just prefer to use the interface number. So I'll say gig 0/1, and I'll hit Enter. Next, we want to configure this tunnel interface for multipoint GRE, or mGRE. Remember, that's necessary for an interface to be able to form multiple GRE tunnels. So we do that by saying tunnel mode gre multipoint.
You can see that once I hit Enter, we've got a console message letting us know that our tunnel interface is now in the up state. So let's go ahead and give this interface an IP address. Remember, we want those tunnels to be in the 172.16.10.0/24 subnet. So let's say ip address, and we'll make this 172.16.10.1 with a 24-bit subnet mask. And we'll also say no shut, just for good measure. Next, we need to enable Next Hop Resolution Protocol on the tunnel interface. This is going to allow our spokes later on to be able to query this HQ router for information about the other spokes, so that DMVPN connections can be initiated between the spokes themselves. So let's say ip nhrp. And if we look at contextual help, the option we're looking for is this network-id keyword. And you can see that this is the NBMA network identifier, NBMA meaning non-broadcast multiple access. NHRP was originally developed for NBMA networks, such as ATM or frame relay; those were both NBMA network types. So we need to configure a network ID here that needs to match on the hub and the spokes. So we'll say network-id, and I'm just going to give this an ID of one. The next thing we can do is to define a tunnel key. As you can imagine, these also need to match on both the hub and the spoke. So let's say tunnel key, and if we actually look at contextual help, you can see we have a very wide range of options here. I'm just going to make this 123. If we have multiple tunnel interfaces using the same tunnel source that we've defined, this is going to help identify the correct DMVPN virtual tunnel interface using that tunnel key. Let's also add a password string for authentication; that will add some security. So I'll say ip nhrp authentication, and I'll just make this a very simple authentication string of Cisco. And finally, something unique to hub configuration is determining if we want to enable multicast support.
DMVPN can allow multicast traffic to flow over our tunnel interfaces, so let's go ahead and enable that. We can say ip nhrp map multicast (oops, I misspelled it, multicast), and we want to say dynamic here. There are other options we can configure if we want, things like the bandwidth for QoS, the MTU size, and other things. We can set the MTU size by saying ip mtu, and I'll set that to 1400. We can also set the maximum segment size by saying ip tcp adjust-mss, and I'll make that 1360. These are optional, so in a production environment, you'll probably want to do some experimentation to see what works best for your own network, but typically you want to make sure that whatever you set the MTU size to, you set your maximum segment size 40 bytes under that, accounting for the 20-byte IP header and 20-byte TCP header; that's the rule of thumb. One more thing we want to do in order to make sure that EIGRP works correctly to advertise our routes over the GRE tunnel is to use the command no ip next-hop-self, followed by our EIGRP autonomous system number, which is eigrp one. We're using autonomous system one on this router and our other routers as well. And this is going to make sure that when the HQ router is learning routes, let's say that it's learning a route from branch C and it wants to advertise that to branch A, this command is going to make sure that the HQ router does not replace the next hop address with its own IP address; it can instead use the tunnel interface IP address of branch C as the next hop address. And likewise, we want to say no ip split-horizon, and we also want to do that for eigrp one. We need to be able to relay advertisements back out of the same interface on which they were received, because we're using a single interface here connected to multiple branches, and disabling split horizon for EIGRP will take care of that for us. Now that we have that completed, let's jump out of there and go over to branch A, which is our first DMVPN spoke, and let's do a quick show run.
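To recap before moving to the spokes, here's a consolidated sketch of the hub-side tunnel configuration we just walked through. The HQ prompt name is illustrative; the commands and values are the ones entered in this video.

```
HQ(config)# interface Tunnel0
HQ(config-if)# tunnel source GigabitEthernet0/1
HQ(config-if)# tunnel mode gre multipoint
HQ(config-if)# ip address 172.16.10.1 255.255.255.0
HQ(config-if)# no shutdown
HQ(config-if)# ip nhrp network-id 1
HQ(config-if)# tunnel key 123
HQ(config-if)# ip nhrp authentication Cisco
HQ(config-if)# ip nhrp map multicast dynamic
HQ(config-if)# ip mtu 1400
HQ(config-if)# ip tcp adjust-mss 1360
HQ(config-if)# no ip next-hop-self eigrp 1
HQ(config-if)# no ip split-horizon eigrp 1
```

Remember that the network ID, tunnel key, and authentication string all need to match what we'll configure on the spokes.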
We'll again say pipe to the section beginning with interface, just so that we can see exactly what we have in place here. We already have an interface gig 0/1 from this branch, from branch A to the ISP, already configured, and if we go on down in our configuration output, you'll notice we have a default route set that's pointing to the ISP router interface to which this branch is connected, so that's good. Let's break out of this show command, go under global configuration mode, and do a similar configuration here. Many of the same commands are used as we used on the hub, so let's first say interface tunnel zero. We could use another number, by the way, but I'll just keep that the same to keep it simple. Let's say tunnel source gig 0/1, and that is the interface, as you can see in our topology, that is connected out to the public internet. Let's configure this interface for mGRE by saying tunnel mode gre multipoint. We'll set an IP address, so we'll say ip address 172.16.10, and we'll make this one .2, because remember, our HQ is .1. We'll give it a 24-bit subnet mask, and we'll say no shut. Let's configure the network ID, or the NBMA network identifier, and we want that to match the HQ router; remember, those need to match. So we can say ip nhrp network-id one, since we set that to one on our hub. We need to specify our authentication password and our key as well. So we'll say tunnel key, and we set that to 123. We'll say ip nhrp authentication, and we use Cisco as our authentication string here. Now, here's something just a bit different. Remember, on the HQ hub, we used the command ip nhrp map multicast. And if we look at contextual help, you can see that we previously used the dynamic option, to dynamically learn destinations from client registrations on the hub. In this case, we want to do something a bit different.
We want to point that to the globally routable address of the HQ router itself, or in other words, the NBMA address of the HQ router. We don't want to use the dynamic option. And we do that by saying ip nhrp map multicast, followed by the IP address, which is 10.1.1.1. That is the physical IP address of the interface pointing this way from the hub router, so we'll hit Enter. Next, we want to create a manual mapping that tells our router that in order to reach the GRE tunnel interface of the HQ router, which is 172.16.10.1, we need to use the globally routable address, or in other words, the NBMA address, of 10.1.1.1. So we'll say ip nhrp map, and we want to map 172.16.10.1 to 10.1.1.1. So we'll hit Enter there. This single static mapping associates the NBMA address of the HQ router, 10.1.1.1, that we can see here at the end, to the GRE tunnel interface address of 172.16.10.1. We want to follow that by defining the next hop server address for NHRP on this spoke. So we can say ip nhrp, and if we look at our contextual help options, the option we want is nhs; this tells us that we can specify a next hop server. So we can follow that with the HQ tunnel interface IP address of 172.16.10.1. Oops, I left out the nhs option in there, so I need to say nhs first. Sorry about that. And then I need to say 172.16.10.1. I'll hit Enter and break out of there. And you'll see momentarily in our console that we will have an adjacency form. We just saw that happen. We had an EIGRP neighbor form using tunnel zero. So it tells us it's up; we have a new adjacency. If we jump back to the HQ router, we see the same thing has happened. We have a new tunnel. Tunnel zero is up, and we have a new adjacency. So that's good. That's exactly what we want to see. Also, there's one thing we still need to do that I just realized I forgot. Let's go back under interface tunnel zero, and let's set the ip mtu to 1400, and we'll say ip tcp adjust-mss to set the maximum segment size to 1360.
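Putting the spoke-side commands together, here's a consolidated sketch of what we've now configured on branch A. The prompt name is illustrative; all commands and values are the ones entered in this video.

```
BranchA(config)# interface Tunnel0
BranchA(config-if)# tunnel source GigabitEthernet0/1
BranchA(config-if)# tunnel mode gre multipoint
BranchA(config-if)# ip address 172.16.10.2 255.255.255.0
BranchA(config-if)# no shutdown
BranchA(config-if)# ip nhrp network-id 1
BranchA(config-if)# tunnel key 123
BranchA(config-if)# ip nhrp authentication Cisco
BranchA(config-if)# ip nhrp map multicast 10.1.1.1
BranchA(config-if)# ip nhrp map 172.16.10.1 10.1.1.1
BranchA(config-if)# ip nhrp nhs 172.16.10.1
BranchA(config-if)# ip mtu 1400
BranchA(config-if)# ip tcp adjust-mss 1360
```

Notice the spoke differs from the hub in three places: the multicast mapping points at the hub's NBMA address of 10.1.1.1 instead of using dynamic, there's a static NHRP mapping for the hub's tunnel address, and the hub is defined as the next hop server.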
And one thing to note on that: you'll notice that we had the tunnel come up here on the HQ router, then it came back down and came back up. That's often an indication of some sort of MTU issue, so that should correct that. So we are all finished up on this first spoke. Let's jump over to branch B now, which is our second spoke. And we'll do this one just a little bit faster, because much of the configuration is exactly the same as branch A. It's essentially the same other than the IP address assignment for the tunnel interface. So let's say interface tunnel zero, and we'll say tunnel source, and that is gig 0/1. Give that an IP address, 172.16.10, and we'll make this one .3, with a 24-bit subnet mask. Just as before, we'll say no shut. We'll say ip nhrp network-id, and we need to set that to one. We need to set our tunnel key to 123. We'll set our authentication string as Cisco. And we want to say ip nhrp map multicast 10.1.1.1. And we want to say ip nhrp map, and we want to map 172.16.10.1 to 10.1.1.1. We need to set our NHRP next hop server, so we'll say ip nhrp nhs 172.16.10.1, and we'll set our mtu to 1400 and set our maximum segment size to 1360. And I just noticed I forgot to set my tunnel mode to gre multipoint. So we'll hit Enter on that to set that to multipoint. And when we do that, you'll notice that interface tunnel zero changed to the up state. We're told that we have a new adjacency formed with 172.16.10.1. That is the GRE tunnel interface of our HQ router. If we look on the HQ router, we can see the adjacency with 172.16.10.3, for branch B. It tells us we have a new adjacency. That's great. So we'll break out of there. We'll jump to our final spoke, which is branch C, and we'll do the exact same thing under global configuration mode. We'll say interface tunnel zero, tunnel source gig 0/1, tunnel mode gre multipoint, and set the IP address 172.16.10.4 on this tunnel, with a 24-bit subnet mask, and say no shut. We'll set the network ID to one and the tunnel key to 123.
The authentication string is Cisco. Say ip nhrp map multicast 10.1.1.1, then ip nhrp map, and we want to map 172.16.10.1 to 10.1.1.1. We'll set our next hop server to 172.16.10.1. We'll set our MTU size to 1400, and we'll set our maximum segment size to 1360. And you can see we already had a message come up in our console telling us that we have a new adjacency with 172.16.10.1, the tunnel IP address on the HQ router. If we jump to the HQ router, we see that confirmed here as well: 172.16.10.4 is seen as a new adjacency. So all of that looks great. Let's go back to branch C, and just really quickly, we'll say show dmvpn. And from this output, you can see that we have a single static mapping, indicated by the S at the end there, which tells us this is static. And this tells us that if we need to reach the tunnel interface at 172.16.10.1, that's the hub tunnel interface, we should use the NBMA address of the hub, 10.1.1.1. We can also see the same information by saying show ip nhrp. Here, we can see tunnel interface zero. We can see when that was created. We can see that in order to reach the 172.16.10.1 IP address, we use the NBMA address of 10.1.1.1. So all of that looks good. Now let's jump onto PC two. Remember, we can see in our topology that PC two is connected into branch C. So let's perform a trace route and try to get to PC one. Remember, PC one, if you see our topology, that is at branch A. So we'll say trace route 188.8.131.52, and that is the IP address of PC one. We'll give this just a little bit of time to complete, and we'll examine our output and see what our traffic flow looks like. So we can see already we have our first hop address, which is the IP address 184.108.40.206. That is the interface that's connected to the PC from branch C. Next, we see that it uses the hub tunnel interface of 172.16.10.1. So this is the interface on the hub, and then it redirects to 172.16.10.2, which is branch A's interface.
That's the branch A tunnel interface, before it finally reaches the PC at 220.127.116.11. So what we can see from this trace route is that the traffic is going all the way back to the hub before it's being sent to PC one. However, what we don't see is that in the background, we have the NHRP resolution being done, and now a dynamic VPN tunnel has been built between branch A and branch C. If we run the exact same trace route again, if we again trace route from PC two up to PC one at 18.104.22.168, we'll be able to see the difference in our trace route output. We can see that's the same 22.214.171.124; that is the interface that's connected to the PC from branch C. But this time, the second hop, you'll notice, is different. Instead of going back, if we look at our very first trace route command, where it went all the way back to the hub IP address of 172.16.10.1, this time it goes directly to 172.16.10.2, and then to the PC itself. So this shows us that we have a dynamic tunnel created, and traffic is going directly from our branch C router over to the branch A router, and finally to PC one. If we go back to our branch C router and we run this same show command, show ip nhrp, so this is our original show command, notice we had a single tunnel created. This is our new show command after we've run the trace route, and after we let NHRP do its magic in the background. Now you'll see we actually have two tunnels. We have our regular tunnel that we had statically configured; this is our single original tunnel that we saw, our statically configured tunnel to the hub. But now we have a second one, and we can see that this is listed as a dynamic tunnel type, along with the NBMA address. Originally, you can see, that was the address going to the hub router; this time it's 126.96.36.199, so the NBMA address is that of the branch A router. So this verifies that we have a DMVPN established, and our traffic is now taking a much more efficient route between our sites.
We can see that this will eventually expire; you see there's an expiration timer listed here at the end. But as long as there's traffic going between those sites, it's going to reset that tunnel timer for us, and it won't be torn down. So that's a look at how we can configure DMVPN between Cisco routers and allow for dynamic reachability between neighboring routers using multipoint GRE and NHRP.
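As a final recap, these are the verification commands used throughout this video. The prompt names are illustrative, and the exact output formatting will vary by IOS release.

```
! On the hub or any spoke: list NBMA-to-tunnel mappings;
! the attribute column flags each entry as S (static) or D (dynamic)
BranchC# show dmvpn

! Show the NHRP cache entries, including creation and expiration timers
BranchC# show ip nhrp

! On the hub: confirm the default route pointing at the ISP next hop
HQ# show ip route static
```

Running show ip nhrp on a spoke before and after spoke-to-spoke traffic is a quick way to watch a dynamic tunnel appear alongside the static mapping to the hub.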
This course was created by Kevin Wallace Training. We are pleased to offer this training in our library.