A computational resource (such as a server or a laptop computer), like a communications resource (such as a router or a switch), may be a genuine physical entity. You can touch such resources with your hand, and thus we call such instantiations hardware. On the other hand, an algorithm or a protocol, like data that is input, output, or transferred, can't be held in your hand or seen with your eyes, so we fancifully call them (after Tukey) software. Software is more logic or mathematics (i.e., pure thought) than physics.
Hardware – the part of a malfunctioning computing system that you can kick.
Software – the part of a malfunctioning computing system that you can merely curse.
– Anonymous sage
However, it is commonplace to recognize that there is a spectrum between these two extremes. Closer to the software end is so-called firmware, which is software that is not frequently reprogrammed and lives in persistent memory. Closer to the hardware end is the FPGA, which can be held in your hand but whose true significance lies in its HDL programming. In between we find processor cores embedded in FPGAs; network and DSP processors that are programmed, but that can't be efficiently replaced by software on a general purpose CPU; driver software; and so on.
Every time we plan the implementation of a task, we need to consider where on this spectrum to instantiate it. Usually it is optimal to situate simple logic that must run at high speed as close as possible to the hardware end, while complex logic that runs at low speed is best located close to the software end. In many cases, however, multiple factors need to be weighed before making a final placement decision.
Concretization (well, maybe the word is not often used in this way) is the act of moving a task from its usual position near the software end of the spectrum towards the hardware end. Thus when an algorithm originally coded in a general purpose programming language to run on a CPU is rewritten in assembler to run on a DSP, when a task in DSP code is re-implemented in FPGA gates, or when an FPGA design is cast into discrete unprogrammable hardware, we have concretized the task.
Virtualization is the opposite procedure, i.e., moving a task from its usual position near the hardware end of the spectrum towards the software end. Thus when a task originally carried out by analog circuitry is converted to a DSP algorithm, or a task running on a DSP is transferred to code running on a general purpose processor, we have virtualized the task. The term virtualization is frequently reserved for the extreme leap of taking hardware functionality directly to software.
The reasons for performing concretization are many and mostly fairly obvious. When mass-produced, application specific hardware is less expensive than general purpose processing power, making the initial development investment worthwhile for functionality that is widely deployed. Special purpose hardware may be miniaturized and packaged in ways that enable or facilitate desired applications. Dedicated hardware may attain higher communications data-rates than its software emulation, and constrained designs may achieve higher energy efficiency and lower heat dissipation than more flexible instantiations of the same functionality.
The reasoning behind virtualization is initially harder to grasp. For functionality of limited scope, it may not be economically worthwhile to expend the development effort necessary to implement close to the hardware end. Flexibility is another desirable attribute gained by implementing communications functionality in software: virtualization facilitates upgrading a communications device's functionality, or even adding totally new functionality (whether due to new operational needs or to new protocol developments).
Software Defined Radio (SDR) is a highly developed example of virtualization of physical layer processing. Transmitters and receivers were once exclusively implemented in analog circuitry, which was subsequently replaced by DSP code, both for higher accuracy (i.e., lower noise) and to enable more sophisticated processing. For example, the primitive AM envelope detector and FM ring demodulator are today replaced by digitally calculated Hilbert transforms followed by extraction of instantaneous amplitude and frequency, with consequent noise reduction and facilitation of advanced features such as tracking frequency drift and notching out interfering signals. The SDR approach leveraged this development by further allowing the DSP code for the transmitter and/or receiver of interest to be downloaded. Thus a single platform could be an LF AM receiver, an HF SSB receiver, or a VHF FM receiver, depending on the downloaded executable software. A follow-on development was cognitive radio, in which the SDR transceiver dynamically selects the best communications channel available (based on regulatory constraints, spectrum allocation, noise present at particular frequencies, measured performance, etc.) and sets its transmission and reception parameters accordingly.
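To make the Hilbert transform pipeline concrete, here is a minimal sketch in Python using numpy and scipy (the sample rate, carrier frequencies, and modulation parameters are illustrative assumptions of mine, not taken from any particular radio). The analytic signal gives the instantaneous amplitude, which demodulates AM, and the derivative of the instantaneous phase, which demodulates FM:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000                       # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples
msg = np.sin(2 * np.pi * 5 * t)   # 5 Hz test message (assumed)

# AM: carrier at 1 kHz, modulation index 0.5 (both assumed)
am = (1.0 + 0.5 * msg) * np.cos(2 * np.pi * 1000 * t)
analytic = hilbert(am)            # analytic signal x(t) + j*H{x}(t)
envelope = np.abs(analytic)       # instantaneous amplitude, recovers 1 + 0.5*msg

# FM: carrier at 1 kHz, peak deviation 100 Hz (both assumed)
phase = 2 * np.pi * (1000 * t + 100 * np.cumsum(msg) / fs)
fm = np.cos(phase)
inst_phase = np.unwrap(np.angle(hilbert(fm)))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # roughly 1000 + 100*msg, in Hz
```

Note that only the few lines after the Hilbert transform differ between the AM and FM paths; swapping that downloaded code is exactly the flexibility the SDR platform exploits.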
A more recent example of virtualization is Network Functions Virtualization (NFV). Here the communications functionalities being virtualized are higher up the stack, at the networking layer. The concept is being evangelized by the ETSI Industry Specification Group on NFV, based on service provider requirements. The specific pain being addressed is that each new service offered requires deploying one or more new hardware appliances throughout the service provider's network. This engenders not only the capital expense of acquiring the new devices, but also the allocation of shelf space and power to accommodate them and the training of staff to configure and maintain them. As the number of new services accelerates while their lifecycles shorten, acute pressure is placed on the service provider's bottom line. NFV proposes implementing new functionalities on industry standard servers (located in datacenters, POPs, or customer premises), thus reducing CAPEX, consolidating multiple equipment types, reducing time-to-market, and simplifying both initial deployment and maintenance. The network functions that may be virtualized include packet forwarding devices such as switches and routers; security appliances such as firewalls, virus scanners, IDSs, and IPsec gateways; network optimization functions such as NATs, CDNs, load balancers, and application accelerators; cellular network nodes such as the HLR, MME, and SGSN; and so on. NFV proponents believe that it is the only way to keep up with the explosion in network traffic volumes and service types while restraining CAPEX.
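As a toy illustration of what "a network function reduced to ordinary software" means (this sketch is entirely mine, not an ETSI artifact, and the addresses and port pool are made up for the demonstration), consider the binding table at the heart of a source NAT, written as a few lines of Python that could run on any industry standard server:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """A private-side flow endpoint (frozen, so usable as a dict key)."""
    src_ip: str
    src_port: int

class ToyNat:
    """Source NAT core: map private (ip, port) pairs to a public IP and a pooled port."""
    def __init__(self, public_ip: str, first_port: int = 40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table: dict[Flow, tuple[str, int]] = {}

    def translate(self, flow: Flow) -> tuple[str, int]:
        # Reuse an existing binding, or allocate the next port from the pool.
        if flow not in self.table:
            self.table[flow] = (self.public_ip, self.next_port)
            self.next_port += 1
        return self.table[flow]

nat = ToyNat("203.0.113.1")                    # documentation-range public IP (assumed)
print(nat.translate(Flow("10.0.0.5", 12345)))  # ('203.0.113.1', 40000)
print(nat.translate(Flow("10.0.0.7", 22)))     # ('203.0.113.1', 40001)
print(nat.translate(Flow("10.0.0.5", 12345)))  # same binding reused
```

The point is not the triviality of the code, but that a function once locked inside a dedicated appliance can be deployed, scaled, and upgraded like any other application on a standard server.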
Virtualization as we have just defined it deals with the instantiation of the communications functionality itself, leaving open the question of the mechanisms used to configure and maintain the devices implementing these functionalities. That topic will be the subject of the next installment.
Y(J)S