Thursday, August 26, 2010

Bandwidth and utilization bottlenecks

Let us consider an end-to-end data transport path that can be decomposed into the following segments:
* end-to-end path = LAN + access network + core network + access network + LAN
There may be a distinct service provider for each of these segments, so many different decompositions may make sense from a business perspective. Yet the access network itself, and its decomposition into components
* access network = last mile + backhaul network
are useful constructs for more fundamental reasons.

These reasons emanate from the concepts of bandwidth and bandwidth utilization (the ratio of required to available bandwidth). In general:
1) LAN and core have high bandwidth, while the last mile has low bandwidth.
2) LAN and core enjoy low utilization, while the backhaul network suffers from high utilization.
Let's see why.

LANs are the most geographically constrained of the segments, and thus physics enables them to run effortlessly at high bandwidth. On the other hand, LANs handle only their owner’s traffic, so the required bandwidth is low compared with what is available. And if the bandwidth requirements increase, it is a relatively simple and inexpensive matter for the owner to upgrade switches or cabling. So utilization is low.

Core networks have the highest bandwidth requirements and are geographically unconstrained. This is indeed challenging; however, the challenge is financial rather than physical. Physics allows any quantity of digital data to be transported over any distance without error; it merely exacts a monetary penalty when both bandwidth and distance are large. Since providing this transport is the core function of core network operators, the monetary penalty of high bandwidth is borne. Whenever trends show that bandwidth is becoming tight, network engineering comes into play: either some of the traffic is rerouted or the network infrastructure is upgraded.

Shannon’s capacity law universally restricts the bandwidth of DSL, radio, cable or PON links used in the last mile. However, utilization is usually not a problem as customers purchase bandwidth that is commensurate with their needs, and understand that it is worthwhile to upgrade their service bandwidth as these needs increase.
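
To make the Shannon limit concrete, here is a minimal Python sketch of the capacity formula C = B log2(1 + SNR). The 1 MHz of usable spectrum and 35 dB SNR are purely illustrative assumptions, not measurements of any real last-mile plant.

    import math

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        # Shannon capacity C = B * log2(1 + SNR), with the SNR given in dB.
        snr_linear = 10 ** (snr_db / 10.0)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Hypothetical last-mile figures: about 1 MHz of usable spectrum on a
    # copper pair at roughly 35 dB SNR; illustrative numbers only.
    capacity = shannon_capacity_bps(1e6, 35)
    print("Capacity ~ %.1f Mbps" % (capacity / 1e6))   # about 11.6 Mbps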

On the other hand, the backhaul network is a true utilization bottleneck. Frequently the access provider does not own the backhaul infrastructure, and instead purchases bandwidth caps from whoever does. Since the backhaul is shared infrastructure, overprovisioning its rings or trees would add significant OPEX overhead. Even when the provider owns the infrastructure, adding new segments involves purchasing rights-of-way or paying license fees for microwave links.
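
To illustrate the arithmetic, here is a small Python sketch comparing utilization on a shared backhaul link with that on a core link. Every number in it (subscriber count, per-user busy-hour demand, purchased backhaul cap, core link rate) is invented purely for illustration.

    def utilization(required_mbps, available_mbps):
        # Utilization = required bandwidth / available bandwidth.
        return required_mbps / available_mbps

    # All figures below are hypothetical.
    subscribers = 500         # last-mile users behind one first aggregation point
    busy_hour_demand = 2.0    # Mbps actually demanded per user at the busy hour
    backhaul_cap = 1000.0     # Mbps purchased on the shared backhaul
    core_link = 10000.0       # Mbps on the core link carrying the same traffic

    aggregate = subscribers * busy_hour_demand
    print("backhaul utilization: %.0f%%" % (100 * utilization(aggregate, backhaul_cap)))  # 100%
    print("core utilization:     %.0f%%" % (100 * utilization(aggregate, core_link)))     # 10%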

So, the sole bandwidth bottleneck is the last mile, while the sole utilization bottleneck is the backhaul network. Understanding these facts is critical for proper network design.

Y(J)S

Thursday, August 19, 2010

The access network equation

My last entry provoked several emails on the subject of the terms last/first mile vs. access networks. While answering these emails I found it useful to bring in an additional term – the backhaul network. Since these discussions took place elsewhere, I thought it would be best to summarize my explanation here.

Everyone knows what a LAN is and what a core network is. Simply put, the access network sits between the LAN or user and the core. For example, when a user connects a home or office LAN to the Internet via a DSL link, we have a LAN communicating over an access network with the Internet core. Similarly, when a smartphone user browses the Internet over the air interface to a neighboring cellsite, the phone connects over an access network to the Internet core.

However, the access network itself naturally divides into two segments, based on fundamental physical constraints. In the first example, the DSL link can’t extend farther than a few kilometers, due to the electrical properties of twisted copper pairs. In the second case, when the user strays from the cell served by the base-station, the connection is reassigned to a neighboring cell, due to the electromagnetic properties of radio waves. Such distance-limited media constitute the last mile (or first mile, if you prefer).

DSLAMs and base-stations are examples of first aggregation points; they terminate last mile segments from multiple users and connect them to the core network. Since the physical constraints compel the first aggregation point to be physically close to its end-users, it will usually be physically remote from the core network. So an additional backhaul segment is needed to connect the first aggregation point to the core. Sometimes additional second aggregation points are used to aggregate multiple first aggregation points, and so on. In any case, we label the set of backhaul links and associated network elements the backhaul network.

We can sum this discussion up in a single equation:
* access network = last mile + backhaul network

I’ll discuss the consequences of this equation in future blog entries.

Y(J)S

Sunday, August 8, 2010

Last mile or first mile?

Physical-layer access technologies with limited range are usually called last mile technologies. More specifically, we usually use the expression last mile when considering xDSL, which enables several Mbps to be transported over several kilometers, or a fiber optic link or PON, which enables hundreds or even thousands of Mbps to be transported over tens of kilometers.

In the year 2000 the IEEE started talking about “Ethernet in the First Mile” (EFM). In 2001 the EFM task force (802.3ah) was created; it developed extensions to Ethernet that are now incorporated into 802.3 as clauses 56 through 67. These extensions include:

  • a VDSL physical medium called 10PASS-TS (clause 62)
  • an SHDSL physical medium called 2BASE-TL (clause 63)
  • a new inverse multiplexing method (different from LAG, sometimes referred to as EFM bonding) called PME aggregation (subclause 61.2.2)
  • a 100Mbps 10 km point-to-point 2-fiber medium called 100BASE-LX10 and a single-fiber one called 100BASE-BX10 (clause 58)
  • a Gbps 10 km point-to-point 2-fiber medium called 1000BASE-LX10 and a single-fiber one called 1000BASE-BX10 (clause 59)
  • a Gbps point-to-multipoint single-fiber medium with 10 km range called 1000BASE-PX10, and one with a 20 km range called 1000BASE-PX20 (clause 60)
  • logic for the EPON Ethernet Passive Optical Network (clause 64)
  • OAM features (clause 57)

The EFM task force closed down in 2004, and thus it is no longer accurate to say “EFM bonding” or “EFM OAM”. Yet the expression “first mile” remains in use. Is there a difference between the “last mile” and the “first mile”?

I was not there when the IEEE came up with the nomenclature, but I feel that I understand the idea behind it. The term “last mile” was invented by core network engineers. For someone who lives in the WAN, the short-range link that reaches the end-user is justifiably called the “last mile”. On the other hand, the IEEE 802 standards committee takes a LAN-centric point of view. For someone who lives in the LAN, the technology that provides the first link to the outside world is understandably called the “first mile”.

For those of us who live in the access network, it doesn’t matter whether you call it first or last mile; we call it home.

Y(J)S

Monday, August 2, 2010

DNSSEC - Internet root signed

IP addresses (even 4-byte IPv4 ones) are generally not easy to remember, which is why humans prefer to type domain names into their browser address window, even if they are longer. It is the job of the Domain Name System (DNS) to translate the domain name into the correct IP address, which is placed in the IP header and enables proper forwarding.

The DNS works recursively in the following way. When my application (for instance, my browser) needs the IP address for some domain name, it queries the operating system’s DNS resolver. If the resolver already knows the IP address (for example, it is preconfigured, or that domain name has recently been looked up and is cached), it returns it to the application. If not, the resolver queries a DNS server that has been configured or discovered using DHCP. If this server knows the IP address (i.e., it is cached there), it returns it in an “A record” (or an “AAAA record” for IPv6 addresses); otherwise it recursively queries other servers until it finds one that has the required “A record”. It may eventually reach the authoritative DNS server for the domain in question, that is, the name server that didn’t learn the IP address from another name server but was configured with it.
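
As a small illustration of the first step above, the following Python snippet asks the operating system’s resolver for the addresses behind a host name (www.example.com is just a placeholder); everything beyond that first query is hidden inside the recursive machinery described above.

    import socket

    # Ask the operating system's resolver for the addresses of a host name.
    # "www.example.com" is a placeholder; substitute any domain name.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
        if family == socket.AF_INET:
            print("A    record:", sockaddr[0])    # IPv4 address
        elif family == socket.AF_INET6:
            print("AAAA record:", sockaddr[0])    # IPv6 address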

This system is hierarchical and distributed, and thus very scalable, but it is not very secure. The archetypal attack is DNS cache poisoning, which is carried out by impersonating a name server that knows the desired IP address and causing a name server closer to the resolver to cache the incorrect result. When the poisoned server is subsequently queried, the attacker’s IP address is returned to the user, who then browses to a malicious site and is tricked into accepting fallacious content or is infected with viruses to be exploited later.

DNSSEC (Domain Name System Security Extensions) adds source authentication and integrity checking to the DNS system in a backwards-compatible way. In DNSSEC, DNS responses are cryptographically signed with public-key signatures, and thus can’t be forged. This thwarts cache poisoning exploits. In addition, DNSSEC can also be used to protect non-DNS data, such as “CERT records” that can be used to authenticate emails.
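
For the curious, here is a rough sketch of how one might check for DNSSEC signatures from a script. It assumes the third-party dnspython package is installed, and the domain and resolver address are arbitrary examples; a validating resolver does much more than this (it verifies the signature chain up to the root), so this only shows that signature records are being returned alongside the answer.

    # Requires the third-party dnspython package (pip install dnspython).
    import dns.message
    import dns.query
    import dns.rdatatype

    # Build a query with the DNSSEC-OK (DO) EDNS flag set, so the server is
    # willing to return RRSIG signature records alongside the answer.
    query = dns.message.make_query("ietf.org", dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)   # any DNSSEC-aware resolver

    signed = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
    print("RRSIG records present in the answer:", signed)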

DNSSEC is described in RFCs 4033, 4034, and 4035 from 2005, but the root zone of the Internet was only signed in July 2010. This major milestone was celebrated last week at the Wednesday IETF-78 plenary with glasses of champagne and the handing out of stickers declaring IETF – DNSSEC – SIGNED.

Y(J)S