Monday, November 29, 2010

Reliable transport vs. reliable transport

One of the most contradictory uses of terminology in communications concerns the word transport as used by the IETF and ITU-T communities. To make matters worse, the term’s most prevalent modifier, reliable, leads to even further divergence in meaning.

To the Internet community, transport refers to the fourth layer of the OSI layer stack (a layer stack known to the ITU-T as X.200, but largely assumed to have been superseded by the more flexible G.80x layering model). The transport layer sits above the network layer (IP), and is responsible (or not) for a range of end-to-end path functionalities. The IETF has defined four transport protocols – the two celebrated ones being UDP (unreliable) and TCP (reliable), and their less fêted brethren being SCTP (highly reliable and message-oriented) and DCCP (unreliable but TCP-friendly).
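As a toy illustration of the difference (purely a sketch; the echo server address and port below are hypothetical), here is how the two celebrated transports look to an application in Python: the TCP socket hides retransmission and reordering inside the kernel, while the UDP socket offers no such guarantees.

```python
import socket

# TCP (reliable): the kernel retransmits lost segments and delivers bytes in order.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("echo.example.net", 7))   # hypothetical echo server
tcp.sendall(b"hello")                  # delivered in full, or an error is raised
print(tcp.recv(1024))
tcp.close()

# UDP (unreliable): each datagram is sent once; loss, duplication and
# reordering are left for the application to handle (or to ignore).
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("echo.example.net", 7))
udp.close()
```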

Since the IP stack does not extend below OSI layer 3, the basic tenet is that nothing can be done about defects in the lower layers (congestion, lost or misordered packets), and that the best strategy is to compensate for any such problems in a layer over IP (e.g., by retransmission or packet reordering). Reliability in this context thus means employing such compensation.
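To make the idea of compensation concrete, here is a minimal sketch (my own illustration, not any standardized mechanism) of a receiver that reorders packets by sequence number and requests retransmission of gaps, roughly the service TCP provides inside the kernel.

```python
# Toy receiver-side compensation over an unreliable lower layer.
def deliver_in_order(packets, deliver, request_retransmit):
    """packets: list of (seq, payload), possibly reordered and with gaps."""
    buffer, next_seq = {}, 0
    for seq, payload in packets:
        if seq >= next_seq:
            buffer[seq] = payload            # hold out-of-order data
        while next_seq in buffer:            # release any contiguous run, in order
            deliver(buffer.pop(next_seq))
            next_seq += 1
    for missing in range(next_seq, max(buffer, default=next_seq)):
        if missing not in buffer:
            request_retransmit(missing)      # ask the sender to fill the gap

# Packet 1 was lost and packets 2 and 3 arrived anyway:
deliver_in_order([(0, "a"), (2, "c"), (3, "d")], print, lambda s: print("resend", s))
```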

To the communications infrastructure community a transport network is a communications network that serves no function other than the transport of user information between the network’s ingress and its egress. In particular, no complex processing such as packet retransmission or reordering is in scope. Reliability in this context means monitoring the functionality and performance of the lowest accessible layer (OAM), and bypassing defective elements in as short a time as possible (protection switching).
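By way of contrast, here is a toy sketch of sub-IP reliability (my own illustration; the 3.33 ms continuity-check interval is loosely in the spirit of Ethernet CFM, and the class below is not any real implementation): continuity-check messages are expected at a fixed rate, and their prolonged absence triggers a switch to a protection path.

```python
import time

CCM_INTERVAL = 0.00333                  # continuity-check period in seconds (illustrative)
LOC_THRESHOLD = 3.5 * CCM_INTERVAL      # loss of continuity declared after ~3.5 missed intervals

class ProtectedPath:
    """Toy 1:1 protection switch driven by continuity-check arrivals."""
    def __init__(self):
        self.active = "working"
        self.last_ccm = time.monotonic()

    def on_ccm(self):                   # called whenever a continuity-check message arrives
        self.last_ccm = time.monotonic()

    def poll(self):                     # called periodically by the forwarding engine
        if time.monotonic() - self.last_ccm > LOC_THRESHOLD:
            self.active = "protection"  # bypass the defective element
        return self.active
```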

For many years the Rubicon between L2 and L3 so effectively separated the two communities that the disparate usages could continue unnoticed. But then came MPLS.

MPLS was originally invented as a method of accelerating the processing of IP packets, by parsing the IP header only at network ingress and attaching a short label to be looked up from then on. With advances in header parsing hardware and algorithms this acceleration became less and less significant. However, the possibility of treating a packet consistently throughout the network, and thus performing traffic engineering under layer three instead of compensation over it, kept MPLS from becoming marginalized. While IntServ at the IP level (implemented by RSVP) never caught on, traffic engineering at the MPLS layer (RSVP-TE) started gaining momentum.
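A toy sketch of the original acceleration idea (the labels, prefixes, and interfaces below are invented for illustration): the IP header is parsed once at ingress to choose a label, and core nodes then forward by exact-match lookup on that short label rather than by parsing the header again.

```python
# Ingress: classify the packet to a forwarding equivalence class (FEC) and push a label.
ingress_fec = {"10.0.0.0/8": 100, "192.168.0.0/16": 200}    # FEC -> pushed label
# Core label-switching router: exact match on the incoming label, swap, and forward.
lsr_table = {100: (101, "if1"), 200: (201, "if2")}          # in-label -> (out-label, out-interface)

def ingress(fec):
    return ingress_fec[fec]             # IP header parsed (and longest-prefix matched) only here

def core_forward(label):
    out_label, out_if = lsr_table[label]
    return out_label, out_if

label = ingress("10.0.0.0/8")
print(core_forward(label))              # (101, 'if1')
```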

Of course, for MPLS to truly make IP more reliable, it required more “transport” functionality (in the ITU sense), such as stronger OAM and protection switching. This led to the introduction of “Transport-MPLS” (T-MPLS), later to be renamed the “MPLS Transport Profile” (MPLS-TP).
Thus it became impossible to suppress the conflict between the two transports. Work on the MPLS Transport Profile in the IETF is obviously not performed in the “Transport Area” (having nothing to do with transport in the IETF sense), but in the “Routing Area” (although it has very little to do with routing).

So what does the future hold for the words “transport” and “reliable”? In theory it would be possible to adopt synonyms (such as “carriage” and “resilient”), although I doubt that either community would be willing to abandon its traditions. At a high enough level of abstraction the meanings coalesce, so perhaps the best tactic today (whenever there is room for error) is to say sub-IP transport and super-IP transport. Reliability can be left for the super-IP case (where it is not really apt), since there are multiple alternatives for the sub-IP case.

Y(J)S

Tuesday, November 23, 2010

IETF79 - Beijing !

I haven’t had much time to blog of late, having to catch up on work since returning from Beijing.

Beijing? I hear you ask. Yes, the 79th IETF meeting was held 7-12 November in the Chinese capital.

This was my 26th IETF, and things have changed since my first meeting. Back in the “old days” the meetings were mostly held in the US (e.g., Minneapolis in the winter) and occasionally in Europe. This was later changed to a 3:2:1 rule, with half of the six meetings held every two years taking place in North America (Canada being preferred to the US due to visa requirements), two in Europe, and one in Asia (Japan or S. Korea). Even then, the default hotel chain with which the IETF had an arrangement was comfortably unvarying. The venue was so predictable that when I found out that the next European meeting was to be held in the French capital, I googled “Paris Hilton” and was surprised to retrieve photos of a scantily dressed heiress.

However, the proportion of Asian participants in SDOs has increased to such an extent that in the space of a single month, three of the SDOs that I follow held meetings in China – ITU-T SG15/Q13 (timing) met in Shenzhen 18 - 22 October, the MEF met 24-27 October in Beijing, followed by the IETF.

While the IETF general attendance figures were up (and for the first time the largest contingent was not from the US – but from China), several of the working groups that I attend suffered from a noticeable lack of major participants. In TICTOC, other than the two chairs and the Area Director, only two of the regulars were able to appear in person. This made it difficult to make any progress on the crucial issues.

However, the PWE3 session was lively, with the topics of making the control word mandatory and deprecating some of the VCCV modes drawing people to the mike. Unfortunately PWE3’s slot coincided with IPPM’s, but apparently IPPM was plagued by a situation similar to TICTOC’s. In the CODEC WG (whose chair couldn’t make it to Beijing), the IPR-free audio codec for Internet use that is being developed was demoed. In the technical plenary there were interesting talks and exchanges on IPv6 operations and transition issues, with the local speakers painting a grim picture of IPv4 address availability.

All-in-all it was an interesting meeting in an interesting venue; a venue that I am certain to be visiting again.

Y(J)S

Tuesday, November 2, 2010

OAM for flows

Continuing my coverage of the recent joint IESG/IAB design team on OAM, this time I want to discuss the issue of OAM for flows in Packet Switched Networks (PSNs).

From a pure topology standpoint any communications network is simply a set of source ports (i.e., interfaces into which we may input information), a set of destination ports (i.e., interfaces from which we may receive information), and a set of links connecting source ports to destination ports. Of course, the destination ports may be located very far from the source ports (and this is the reason we use the network in the first place), but this geometry is irrelevant from a topological point of view.

PSNs are communications networks that transfer information in units of packets. They can be classified as either Connection-Oriented (CO) or ConnectionLess (CL). In a CO network the end-to-end path from source port to destination port needs to be set up (by physical connection, by manual/management-system configuration, or by control-plane signaling) before information can be sent down the path. Once set up, it makes sense to call this end-to-end path a “connection”. A connection is essentially an association of a source port with a destination port that has been prepared to carry information.

In a CL PSN each packet of information is individually sent towards its destination. No set-up is required, as each packet is identified by a unique destination address. When we send a data packet from a source port to the desired destination we can still think of an association of the source and destination ports, but as this association is ephemeral it does not constitute a connection. If, however, many similar packets are sent from source to destination, it may be useful to speak of a “flow” of packets. Of course, there is no guarantee that all the packets travel from source to destination over precisely the same path through the network, but in many cases they do so for substantial periods of time, until a reroute event takes place. When load balancing is used the definition of a flow (and its consequences) becomes truly problematic.
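To illustrate why load balancing muddies the notion of a flow, here is a toy sketch (the path names and the choice of hash are arbitrary): packets sharing a 5-tuple form a flow, and a hash over that tuple pins the flow to one of several equal-cost paths, at least until the hash function or the topology changes.

```python
import hashlib

PATHS = ["path-A", "path-B", "path-C"]   # equal-cost next hops (illustrative)

def flow_id(src_ip, dst_ip, proto, src_port, dst_port):
    """A 'flow' in the CL sense: the packets sharing one 5-tuple."""
    return (src_ip, dst_ip, proto, src_port, dst_port)

def ecmp_path(fid):
    # Hash-based load balancing: all packets of one flow take the same path,
    # but different flows are spread across paths, and a rehash moves flows.
    h = int(hashlib.sha256(repr(fid).encode()).hexdigest(), 16)
    return PATHS[h % len(PATHS)]

print(ecmp_path(flow_id("192.0.2.1", "198.51.100.7", "udp", 5004, 5004)))
```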

OAM mechanisms were originally designed for Circuit Switched (CS) or CO communications systems, such as PDH, SDH, and ATM networks. For such networks it makes perfect sense to ask about the continuity of the connection, or about its performance parameters (e.g., delay). Thus Continuity Check (CC) OAM functions became standard, and Performance Monitoring (PM) functions recommended, for CO networks. The issue is more complex for CL networks. Continuity doesn’t mean very much if every packet is sent to a different destination! It means somewhat more when there is a prolonged flow; but even then packet loss and delay are statistical quantities, since consecutive packets may traverse different network elements on their route from source to destination.
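A sketch of what flow-level loss and delay measurement might look like (invented for illustration, not any particular OAM protocol): probe packets carry sequence numbers and transmit timestamps, and the sink computes statistics over whatever paths the probes happened to take.

```python
import statistics

def loss_and_delay(sent, received):
    """sent: {seq: tx_time}; received: {seq: rx_time} for the probes that arrived."""
    lost = [seq for seq in sent if seq not in received]
    delays = [received[seq] - sent[seq] for seq in received]   # assumes synchronized clocks
    loss_ratio = len(lost) / len(sent)
    return loss_ratio, (statistics.mean(delays) if delays else None)

sent     = {1: 0.000, 2: 0.010, 3: 0.020, 4: 0.030}   # times in seconds
received = {1: 0.005, 2: 0.018, 4: 0.041}             # probe 3 never arrived
print(loss_and_delay(sent, received))                 # (0.25, mean one-way delay)
```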

When packets are delivered to an incorrect destination we say that a misconnection has occurred, and Connectivity Verification (CV) monitors for such events. (There is often confusion between CC – a functionality needed for both CO and CL networks – and CV, which is only needed for CL networks.)
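A minimal sketch of the CC/CV distinction (again my own illustration, with a hypothetical identifier): an OAM message carrying an unexpected identifier indicates a misconnection (a CV defect), whereas the absence of expected messages indicates a loss of continuity (a CC defect).

```python
EXPECTED_ID = "customer-42"             # identifier provisioned at the sink (hypothetical)

def on_oam_message(received_id):
    if received_id != EXPECTED_ID:
        return "misconnection"          # CV defect: traffic arriving from the wrong source
    return "ok"                         # a valid message also refreshes the CC timer (not shown)
```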

For some types of prolonged flows it makes sense to introduce OAM mechanisms to monitor continuity and performance parameters. Pseudostreaming of video over the Internet may involve hundreds of packets per second for many minutes. IPTV flows are even higher in rate and can last for hours. Ethernet Virtual Connections (EVCs) between customer sites last indefinitely.

In a future entry I will discuss when it doesn’t make sense to talk about flows, and what OAM means for such cases.

Y(J)S