RNP - Rede Nacional de Ensino e Pesquisa

RNP News

"Let's activate multicast in all of RNP's PoPs"


Adenilson Raniery, a technician at RNP's Engineering and Operation Center (EOC), is responsible for the institution's multicast project. Since last year he has been involved in studies and specifications for the use of this technology in the RNP2 backbone. The experimental stage was concluded in 2001. Now it is time to use the accumulated experience to extend the service to the entire national academic backbone. From RNP's unit in Rio de Janeiro, where he works, Raniery gave this interview to RNP News.

How will the multicast service be deployed in RNP2?

Adenilson Raniery: The initial layout will be a PIM-SM network with a single RP at PoP-RJ. We will activate multicast in all of RNP's PoPs [Points of Presence]. PoPs whose backbone router is a Cisco 7507 will get native multicast. In PoPs whose backbone router cannot support multicast, we will have to use tunneled multicast. In this case, we will set up a DVMRP tunnel between a station running mrouted at the PoP and the "nearest" 7507 router in the backbone's routing structure. For instance, we will have an mrouted station at PoP-TO, from which a DVMRP tunnel will run to PoP-DF's 7507 router.
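As a rough illustration of the two cases described above, a Cisco IOS configuration along these lines would enable native PIM-SM on a 7507 and terminate a DVMRP tunnel toward an mrouted station. This is a sketch only: all addresses and interface names are invented, and exact command syntax varies across IOS versions of that era.

```
! Hypothetical sketch for a 7507 backbone router (addresses invented).
ip multicast-routing

! Native multicast: enable PIM sparse mode on the backbone interfaces.
interface FastEthernet0/0
 ip pim sparse-mode

! Statically point at the single RP at PoP-RJ (address is illustrative).
ip pim rp-address 200.0.0.1

! Tunneled multicast toward a PoP without a multicast-capable router:
! a DVMRP tunnel whose far end is a Unix station running mrouted.
interface Tunnel0
 tunnel mode dvmrp
 tunnel source FastEthernet0/0
 tunnel destination 200.0.1.2
 ip pim sparse-mode
```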

Can you explain what "a PIM-SM network with a single RP in PoP-RJ" means?

Adenilson Raniery: PIM stands for "Protocol Independent Multicast"; that is to say, it is a multicast routing protocol that operates on top of the routes generated by unicast routing protocols such as OSPF, IS-IS, RIP, etc. Routers running PIM therefore do not need a separate routing table for multicast: they use the unicast tables that already exist in order to make multicast routing decisions.
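The way PIM reuses the unicast table is through the Reverse Path Forwarding (RPF) check. The sketch below is purely illustrative (it is not RNP's software, and the routing table is made up): it shows how a router can decide whether a multicast packet arrived on the "right" interface by consulting only its unicast routes.

```python
# Illustrative sketch only: how a PIM router can reuse its existing
# unicast routing table for the Reverse Path Forwarding (RPF) check,
# instead of keeping a separate multicast routing table.
import ipaddress

UNICAST_TABLE = {
    # destination prefix -> outgoing interface, per the unicast table
    "10.1.0.0/16": "eth0",
    "10.2.0.0/16": "eth1",
}

def rpf_interface(source_ip, table=UNICAST_TABLE):
    """Longest-prefix match: the interface the router would use to
    send unicast traffic back toward this multicast source."""
    src = ipaddress.ip_address(source_ip)
    best = None
    for prefix, iface in table.items():
        net = ipaddress.ip_network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best[1] if best else None

def rpf_check(source_ip, arriving_iface):
    """A multicast packet passes RPF only if it arrived on the same
    interface the unicast table points at for its source."""
    return rpf_interface(source_ip) == arriving_iface
```

Packets failing the check are dropped, which prevents forwarding loops without any multicast-specific routing state.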

SM stands for "Sparse Mode", one of the two possible PIM operation modes (the other being DM, "Dense Mode"). PIM-SM networks need at least one RP, that is, a "Rendezvous Point." Put simply, an RP can be regarded as the central element of the multicast infrastructure in a PIM-SM network.

In short: multicast routing in the backbone will be done with the PIM-SM protocol, with a single router acting as RP at the PoP in Rio de Janeiro. Later, we will deploy a second RP in the backbone in order to have RP redundancy. For now this can be left aside in favor of a simpler backbone structure. Moreover, since the multicast connection to Internet2 is at PoP-RJ and a large part of the multicast traffic accessed at RNP will come through it, a second RP would add little to the RNP2 backbone infrastructure. This situation may change if other backbones (such as Embratel's) start exchanging multicast traffic with RNP at other locations.

Won't there be compatibility problems between PIM-SM and DVMRP?

Adenilson Raniery: In fact, the PIM-SM and DVMRP protocols do have incompatibilities, due to their different operating philosophies. PIM-SM is a sparse-mode protocol: it assumes that sources and receivers are sparsely distributed across the network, so its operation is based on explicit requests for multicast traffic. Routers only forward multicast traffic (for a given group G) through interfaces where receivers have previously requested that traffic. Otherwise, no traffic is sent, which avoids wasting bandwidth in the backbone. This kind of protocol is especially recommended for WANs and backbones, where link costs are high and bandwidth cannot be wasted.

In contrast, DVMRP is a dense-mode protocol. Its philosophy assumes there are many senders and receivers in the network, so it is considered more advantageous to flood all multicast flows over all links. If a router finds it has no "clients" interested in a given multicast group G, it responds (in the direction opposite to the multicast traffic) indicating that it does not want to receive traffic for group G. This is done by means of a prune message. Note that, besides the bandwidth waste implicit in this approach, there are also consequences for the routers, which are forced to keep state about all active groups in the network, not only those with interested receivers. The router therefore needs more memory to hold this data and more CPU to do all this processing.
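The contrast between the two philosophies can be made concrete with a toy model. The sketch below (not a protocol implementation; the hub-and-spoke topology and link names are invented) counts which links carry a flow under flood-and-prune versus explicit join:

```python
# Toy comparison of the two philosophies on a made-up hub-and-spoke
# backbone with four PoP links. Not a protocol implementation.
LINKS = ["hub-A", "hub-B", "hub-C", "hub-D"]

def dense_mode(receivers):
    """Flood-and-prune: every link carries the flow at first, and each
    link without interested receivers must send a prune back upstream."""
    flooded = set(LINKS)
    prunes = {link for link in LINKS if link.split("-")[1] not in receivers}
    return flooded, prunes

def sparse_mode(receivers):
    """Explicit join: traffic flows only on links where a receiver
    has asked for the group (via a join toward the RP)."""
    joined = {link for link in LINKS if link.split("-")[1] in receivers}
    return joined, set()  # no prunes needed

# With a single interested receiver behind link hub-A:
flooded, prunes = dense_mode({"A"})   # all 4 links flooded, 3 prunes sent
joined, _ = sparse_mode({"A"})        # only 1 link carries traffic
```

Even in this tiny example, dense mode touches every link and generates prune state on three of them, while sparse mode uses exactly one link; on a backbone with expensive long-haul circuits the difference matters.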

Because of these different operating philosophies, problems can arise when the two protocols interact. They appear when a station in the DVMRP cloud wants to receive traffic for a multicast group whose source is located in the PIM-SM cloud. On the DVMRP side, the receiver simply waits for the traffic to reach it, since under the dense-mode philosophy all active flows reach every receiver unless explicitly refused with a prune message. On the PIM-SM side, the routers wait for an explicit request for that group's traffic, something that can only come from "inside" the PIM-SM network. The result is an impasse, and the client on the DVMRP network may never receive the desired multicast flow.

Although there have been efforts to define an interconnection protocol between DM and SM networks (RFC 2715), little progress has been made on this issue. The clear advantages of PIM-SM have practically made it the standard multicast protocol in large backbones. As a result, the use of DVMRP has stagnated (if not shrunk) worldwide, while more and more routers support PIM-SM and more and more networks deploy it.

In the case of RNP and its clients, there are still several networks whose routers cannot run native multicast. In these situations, DVMRP is the only solution, since it can be deployed without much trouble on ordinary Unix stations through the mrouted software. In these cases, we will be applying some workarounds [stopgap adjustments that sidestep a problem rather than solving it] proposed by Cisco (the manufacturer of RNP's current routers) for the incompatibilities between DVMRP and PIM-SM. Sometimes these workarounds apply and sometimes they do not. The only long-term solution is to replace the routers' hardware and software with newer equipment and versions, which will inevitably happen in the coming years.

Is there any alternative to avoid the conflict between protocols?

Adenilson Raniery: Among the possible solutions, there are:

1. Building DVMRP tunnels directly to the network's RP, at PoP-RJ. Since all multicast groups in a PIM-SM network must be registered at the RP, DVMRP tunnels connected to it would have access to all active groups in the network.

2. Relying on the fact that all receivers in the DVMRP network will also be senders. In this case, the traffic generated by the client in the DVMRP cloud reaches the PIM-SM network, which thereby learns of the client's existence and starts sending traffic to it as well. This happens, for instance, with videoconferencing applications.

3. Using a DM protocol (such as PIM-DM) on the path between the DVMRP cloud and the RP. Both networks then share the same dense-mode operating philosophy, and the incompatibility problems are eliminated. However, the backbone once again suffers from the traditional problems of a dense-mode protocol, such as inefficiency.
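The first alternative, for example, would amount to an mrouted configuration on the DVMRP side along these lines. This is a sketch with invented addresses; the real tunnel endpoints depend on each network:

```
# Hypothetical /etc/mrouted.conf on a station in the DVMRP cloud.
# A single DVMRP tunnel runs straight to the backbone's RP at PoP-RJ
# (local and remote addresses invented), so every group registered
# at the RP is reachable from here.
tunnel 200.0.2.10 200.0.0.1 metric 1 threshold 32
```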

All these solutions have advantages and disadvantages. We are still assessing how serious the problem is and which alternatives can solve it, or at least reduce its effects. Until we are able to take PIM-SM all the way to the end user, with routers capable of doing so, interoperation between PIM-SM and DVMRP will have to be maintained.

How long will it take to deploy the service in all the PoPs?

Adenilson Raniery: Deploying multicast in the PoPs themselves can be done quickly. All 13 PoPs equipped with 7507 routers are already running native multicast. In the remaining PoPs, as I said, we will have to set up a machine running mrouted to terminate a multicast tunnel and do multicast routing. Although this second stage involves a lot of hard work, it should not take long, since we already have the know-how. I hope we can finish this second stage in a few months.

The main question is extending the multicast service to the PoPs' clients. This stage will be the longest and most complicated of all, since it involves a great variety of equipment, such as routers, firewalls and various switches, that can complicate the deployment. The active participation of the PoPs and client institutions in this process will be very important to spread the multicast service provided by RNP. It is also important to stress that we will need attractive applications to generate interest in the multicast service. Personally, I believe that transmitting important events via multicast (such as the Brazilian Symposium on Computer Networks and IETF and NANOG meetings, among others) can be a good way to make the service known and used at RNP.

Can MBone traffic be received by RNP's network?

Adenilson Raniery: Yes. All the active MBone sessions are available on Internet2, so RNP can access this traffic through its international connection to Internet2. It is worth pointing out that this connection also makes RNP's multicast reachable by foreign receivers: multicast traffic generated by sources inside RNP can be viewed abroad by receivers with access to the Internet2 backbone.

Can you list some of the PoPs that pioneered configuring their equipment to deploy the multicast service on their own?

Adenilson Raniery: Rio Grande do Sul and Minas Gerais already have local multicast redistribution infrastructures, based on DVMRP. These DVMRP clouds are interconnected with the PIM-SM backbone at the PoPs' own 7507 routers. We have already succeeded in transmitting multicast content to clients on these networks, such as the course "Professional Development for Middle School Math Teachers," transmitted from the Institute of Pure and Applied Mathematics (Impa) at the beginning of this year [see the article RNP used multicast in the transmission of a course at Impa].

How will the multicast content be distributed to the clients of the PoPs?

Adenilson Raniery: It can be done in two ways: in native mode, if the PoP has other routers capable of performing this function in terms of hardware and software; or in tunneled mode, using DVMRP tunnels and Unix stations (preferably FreeBSD) running mrouted. In PoPs that receive the transmission in tunneled mode, distribution to the clients can only be done through DVMRP tunnels.
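For the tunneled case, the FreeBSD station at the PoP would carry one tunnel up to the backbone and further tunnels down to the clients. The mrouted configuration could look roughly like this (all addresses are invented for illustration):

```
# Hypothetical /etc/mrouted.conf on a FreeBSD station at a PoP.
# One DVMRP tunnel goes up to the nearest 7507 in the backbone;
# additional tunnels redistribute the traffic to client networks.
tunnel 200.0.3.5 200.0.1.1  metric 1 threshold 16   # uplink to the 7507
tunnel 200.0.3.5 200.0.4.20 metric 1 threshold 16   # client institution A
tunnel 200.0.3.5 200.0.5.30 metric 1 threshold 16   # client institution B
```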



[RNP, 04.09.2002]
