Upload speed is the amount of information that can be transmitted along a channel per unit of time.

Introduction

David R. Bull, Fan Zhang, in Intelligent Image and Video Compression (Second Edition), 2021

Available bandwidth

On the other side of the inequality are the bandwidths available in typical communication channels. Some common communication systems for broadcast and mobile applications are characterized in Table 1.2. This table must be read with significant caution as it provides the theoretical maximum bit rates under optimal operating conditions. These are rarely, if ever, achieved in practice. Bandwidth limitations are more stringent in wireless environments because the usable radio spectrum is limited, the transmission conditions are variable and data loss is commonplace.

Table 1.2. Theoretical download bandwidth characteristics for common communication systems.

Communication system              Maximum bandwidth
3G mobile (UMTS, basic)           384 kbps
4G mobile (LTE cat 4)             150 Mbps
4G+ (LTE Advanced)                450 Mbps
5G mobile                         1–10 Gbps
Broadband (VDSL)                  55 Mbps
Broadband (VDSL2)                 200 Mbps
Broadband (VDSL2-Vplus)           300 Mbps
WiFi (IEEE 802.11n)               300 Mbps
WiFi 6 (IEEE 802.11ax)            10 Gbps
Terrestrial TV (DVB-T2, 8 MHz)    50 Mbps

The bit rates available to an individual user at the application layer (which is, after all, what we are interested in) will normally be greatly reduced from the figures quoted in Table 1.2. The effective throughput (sometimes referred to as goodput) is influenced by a wide range of internal and external factors. These include:

overheads due to link layer and application layer protocols,

network contention and flow control,

network congestion and numbers of users,

asymmetry between download and upload rates,

network channel conditions,

hardware and software implementations that do not support all functions needed to achieve optimal throughput.

In particular, as channel conditions deteriorate, modulation and coding schemes need to become increasingly robust. This lowers spectral efficiency, since more coding overhead is needed to maintain a given quality. The number of retransmissions will also inevitably increase as the channel worsens. As an example, a DVB-T2 channel (which typically supports multiple TV programs) will drop from 50 Mbps (256 QAM at 5/6 code rate) to around 7.5 Mbps when channel conditions dictate a change in modulation and coding mode down to rate-1/2 QPSK. Similarly, for standard WiFi (802.11n) routers, realistic per-user bandwidths can easily fall well below 300 Mbps even with channel bonding in use; speeds lower than 100 Mbps are not uncommon and can drop to 10 Mbps or less in congested networks. Emerging WiFi 6 networks targeted at dense Internet of Things applications have demonstrated 10 Gbps, but can fall to around 10 Mbps for the lowest channel bandwidth and most robust modulation and coding modes. Typical broadband download speeds depend on the VDSL technology used and on the distance from the cabinet (where the fiber terminates); a 100-Mbps VDSL2 link can fall to around 20 Mbps at a distance of 1 km from the cabinet. 3G, 4G, and 5G mobile download speeds likewise never reach their stated theoretical maxima. Useful per-user bandwidth for 4G at the application layer will rarely exceed 50% of the theoretical optimum and is more likely to be around 10%. The actual speed for each user will depend on factors such as location, distance from the mast, and the amount of traffic. It is also important to note that, for broadband and cellular networks, upload speeds are typically between 10% and 50% of download speeds.
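As a rough sanity check on these DVB-T2 figures, assume that throughput scales with the product of bits per symbol and code rate (a simplification that ignores changes in guard intervals and pilot overheads): 256 QAM carries 8 bits per symbol and QPSK carries 2, so

50 Mbps × (2 × 1/2) / (8 × 5/6) = 50 / 6.67 ≈ 7.5 Mbps,

in agreement with the figures quoted above.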

On that basis, let us consider a simple example which relates the raw bit rates in Table 1.1 to the realistic bandwidth available. Consider a digital HDTV transmission at 30 fps using DVB-T2, where the average bit rate allowed in the multiplex (per channel) is 15 Mbps. The raw bit rate, assuming a 4:2:2 original at 10 bits, is approximately 1.244 Gbps, while the actual bandwidth available dictates a bit rate of 15 Mbps. This represents a compression ratio of approximately 83:1.

Download sites such as YouTube typically support up to 6 Mbps for HD 1080p format, but more often video downloads will use 360p or 480p (640×480 pixels) formats at 30 fps, with a bit rate between 0.5 and 1 Mbps encoded using the H.264/AVC standard. In this case the raw bit rate, assuming 8-bit samples with color subsampling in 4:2:0 format, will be 110.6 Mbps. As we can see, this is roughly 110 to 220 times the bit rate supported for transmission.
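The 480p figure can be verified directly from the same raw-bit-rate arithmetic used throughout this section:

R = 640 × 480 × 1.5 (samples/pixel) × 8 (bits) × 30 (fps) = 110,592,000 bps ≈ 110.6 Mbps,

which is about 111 times a 1-Mbps stream and about 221 times a 0.5-Mbps stream.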

Example 1.1

Compression ratio for UHDTV

Consider the case of 8K UHDTV with the original video in 4:2:0 format (a luminance signal of 7680×4320 and two chrominance signals of 3840×2160) at 10 bits per sample and a frame rate of 60 fps. Calculate the compression ratio if this video is to be transmitted over an internet link with an average bandwidth of 15 Mbps.

Solution

The 4:2:0 color subsampling method has, on average, the equivalent of 1.5 samples for each pixel (see Chapter 4). Thus, in its uncompressed form the bit rate is calculated as follows:

R = 7680 (H) × 4320 (V) × 1.5 (samples/pixel) × 10 (bits) × 60 (fps) = 29,859,840,000 bps,

i.e., a raw bit rate approaching 30 Gbps. Assuming this needs to be transmitted in a channel of bandwidth of 15 Mbps, then a compression ratio of 1991:1 would be required!

CR = 29,859,840,000 / 15,000,000 ≈ 1991.
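These calculations are easy to reproduce. The following small C-style helper (our own illustration; the function name and layout are not from the chapter) computes the raw bit rates and compression ratios quoted in this section:

#include <stdio.h>

/* Raw (uncompressed) bit rate in bits per second. */
static double raw_bit_rate(int width, int height, double samples_per_pixel,
                           int bit_depth, double fps) {
    return (double)width * height * samples_per_pixel * bit_depth * fps;
}

int main(void) {
    double hd  = raw_bit_rate(1920, 1080, 2.0, 10, 30); /* 4:2:2 HDTV, 10 bit */
    double sd  = raw_bit_rate( 640,  480, 1.5,  8, 30); /* 4:2:0 480p,  8 bit */
    double uhd = raw_bit_rate(7680, 4320, 1.5, 10, 60); /* 4:2:0 8K,   10 bit */

    printf("HDTV: %.3f Gbps, CR at 15 Mbps = %.0f:1\n", hd / 1e9, hd / 15e6);
    printf("480p: %.1f Mbps, CR at 1 Mbps  = %.0f:1\n", sd / 1e6, sd / 1e6);
    printf("8K  : %.2f Gbps, CR at 15 Mbps = %.0f:1\n", uhd / 1e9, uhd / 15e6);
    return 0;
}

Running it reproduces the 83:1, roughly 111:1, and 1991:1 ratios derived above.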

Hopefully this section has made a convincing case for the need for compression. The tension between user expectations of quality and ease of access on the one hand and available bandwidth on the other has existed since the first video transmissions, and it has driven vigorous research in both coding and networking. Fortunately, advances in communications technology have mirrored those in video compression, enabling the transmission of high-quality (and in many cases very high-quality) video that meets user expectations.

In the next section we examine the applications that are currently driving video compression performance, as well as those that are likely to do so in the future.


URL: https://www.sciencedirect.com/science/article/pii/B9780128203538000104

Parallel patterns—parallel histogram computation

David B. Kirk, Wen-mei W. Hwu, in Programming Massively Parallel Processors (Third Edition), 2017

9.7 Aggregation

Some data sets have a large concentration of identical data values in localized areas. For example, in pictures of the sky there can be large patches of pixels with identical values. Such a high concentration of identical values causes heavy contention and reduces the throughput of parallel histogram computation.

For such data sets, a simple and yet effective optimization is for each thread to aggregate consecutive updates into a single update if they are updating the same element of the histogram [Merrill 2015]. Such aggregation reduces the number of atomic operations to the highly contended histogram elements, thus improving the effective throughput of the computation.

Fig. 9.11 shows an aggregated text histogram kernel. Each thread declares three additional register variables: curr_index, prev_index, and accumulator. The accumulator keeps track of the number of updates aggregated thus far, and prev_index tracks the index of the histogram element whose updates have been aggregated. Each thread initializes prev_index to −1 (Line 6) so that no alphabet input will match it. The accumulator is initialized to zero (Line 7), indicating that no updates have been aggregated.


Figure 9.11. An aggregated text histogram kernel.
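The figure itself is not reproduced here. The following CUDA sketch is our reconstruction from the surrounding description, so the line numbers cited in the text refer to the book's figure, not to this sketch. It assumes lower-case text input and a histogram that groups the 26 letters into bins of four consecutive letters:

__global__ void histo_aggregated_kernel(const char* data, unsigned int length,
                                        unsigned int* histo) {
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int stride = blockDim.x * gridDim.x;

    int prev_index = -1;           // no alphabet input matches -1 (cf. Line 6)
    unsigned int accumulator = 0;  // no updates aggregated yet (cf. Line 7)

    for (; i < length; i += stride) {
        int alphabet_position = data[i] - 'a';
        if (alphabet_position >= 0 && alphabet_position < 26) {
            int curr_index = alphabet_position / 4;
            if (curr_index != prev_index) {
                // Streak ended: flush the aggregated updates (cf. Line 12).
                if (accumulator > 0)
                    atomicAdd(&(histo[prev_index]), accumulator);
                accumulator = 1;
                prev_index = curr_index;
            } else {
                accumulator++;     // extend the streak (cf. Line 17)
            }
        }
    }
    if (accumulator > 0)           // flush the final streak
        atomicAdd(&(histo[prev_index]), accumulator);
}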

When an alphabet character is found, the thread compares the index of the histogram element to be updated (curr_index) with the index of the one currently being aggregated (prev_index). If the indices differ, the streak of aggregated updates to the histogram element has ended (Line 12). The thread uses an atomic operation to add the accumulator value to the histogram element whose index is tracked by prev_index, which effectively flushes out the total contribution of the previous streak of aggregated updates. If curr_index matches prev_index, the thread simply adds one to the accumulator (Line 17), extending the streak of aggregated updates by one.

One thing to keep in mind is that the aggregated kernel requires more statements and variables. Thus, if the contention rate is low, an aggregated kernel may execute more slowly than the simple kernel. However, if the data distribution leads to heavy contention in atomic operation execution, aggregation results in significant performance gains.


URL: https://www.sciencedirect.com/science/article/pii/B9780128119860000091

Switching

George Varghese, in Network Algorithmics, 2005

13.1 ROUTER VERSUS TELEPHONE SWITCHES

Given our initial analogy to telephone switches, it is worthwhile outlining the major similarities and differences between telephone and router switches. Early routers used a simple bus to connect input and output links. A bus (Chapter 2) is a wire that allows only one input to send to one output at a time. Today, however, almost every core router uses an internal crossbar that allows disjoint link pairs to communicate in parallel, to increase effective throughput. Once again, the electronics plays the role of the operator, activating transistor switches that connect input links to output links.

In telephony, a phone connection typically lasts for seconds if not for minutes. However, in Internet switches each connection lasts for the duration of a single packet. This is 8 nsec for a 40-byte packet at 40 Gbps. Recall that caches cannot be relied upon to finesse lookups because of the rarity of large trains of packets to the same destination. Similarly, it is unlikely that two consecutive packets at a switch input port are destined to the same output port. This makes it hard to amortize the switching overhead over multiple packets.

Thus to operate at wire speed, the switching system must decide which input and output links should be matched in a minimum packet arrival time. This makes the control portion of an Internet switch (which sets up connections) much harder to build than a telephone switch. A second important difference between telephone switches and packet switches is the need for packet switches to support multicast connections. Multicast complicates the scheduling problem even further because some inputs require sending to multiple outputs.

To simplify the problem, most routers internally segment variable-size packets into fixed-size cells before sending to the switch fabric. Mathematically, the switching component of a router reduces to solving a bipartite matching problem: The router must match as many input links as possible (to as many output links as possible) in a fixed cell arrival time. While optimal algorithms for bipartite matching are well known to run in milliseconds, solving the same problem every 8 nsec at 40 Gbps requires some systems thinking. For example, the solutions described in this chapter will trade accuracy for time (P3b), use hardware parallelism (P5) and randomization (P3a), and exploit the fact that typical switches have 32–64 ports to build fast priority queue operations using bitmaps (P14).
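To make the matching discussion concrete, here is a hypothetical C sketch of a single request-grant-accept iteration in the spirit of PIM/iSLIP-style schedulers. It is our own simplification, not an algorithm from this chapter, but it illustrates two of the principles named above: randomization to break ties (P3a) and bitmaps sized to the 64-port case (P14).

#include <stdint.h>
#include <stdlib.h>

#define PORTS 64

/* requests[i] has bit j set if input i has a queued cell for output j. */
void match_one_iteration(const uint64_t requests[PORTS],
                         int input_match[PORTS], int output_match[PORTS])
{
    uint64_t grants[PORTS] = {0}; /* grants[i] bit j: output j grants input i */

    for (int p = 0; p < PORTS; p++) {
        input_match[p] = -1;
        output_match[p] = -1;
    }

    /* Grant phase: each output grants one requesting input, starting its
       search at a random port so that ties are broken fairly. */
    for (int out = 0; out < PORTS; out++) {
        int start = rand() % PORTS;
        for (int k = 0; k < PORTS; k++) {
            int in = (start + k) % PORTS;
            if ((requests[in] >> out) & 1) {
                grants[in] |= 1ULL << out;
                break;
            }
        }
    }

    /* Accept phase: each input accepts one granting output, located with a
       find-first-set instruction on its 64-bit grant bitmap. */
    for (int in = 0; in < PORTS; in++) {
        if (grants[in]) {
            int out = __builtin_ctzll(grants[in]);
            input_match[in] = out;
            output_match[out] = in;
        }
    }
}

A real scheduler would repeat this over the still-unmatched ports for a small, fixed number of iterations, trading matching quality for a bounded decision time.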


URL: https://www.sciencedirect.com/science/article/pii/B9780120884773500163

SAN Extensions and IP Storage

Stephen R. Smoot, Nam K. Tan, in Private Cloud Computing, 2012

iSCSI protocol overview

This section discusses the following iSCSI concepts:

iSCSI packet format

iSCSI components

iSCSI naming schemes

iSCSI session types

iSCSI security authentication

iSCSI solicited and unsolicited data transfer

iSCSI initiator mechanisms and server virtualization

iSCSI packet format

iSCSI facilitates direct block-level access between an initiator and a target over TCP/IP networks. SCSI commands, data, and status are encapsulated within iSCSI PDUs. Transporting SCSI I/O over TCP gives high-volume storage transfers in-order delivery, error-free data, and congestion control. It also overcomes distance limitations and allows IP hosts to gain access to previously isolated FC-based storage targets. Figure 7.21 illustrates the nesting of the various levels of protocol PDUs in an Ethernet frame during an iSCSI session establishment.


Figure 7.21. iSCSI encapsulation

The TCP payload of an iSCSI packet contains iSCSI PDUs. All iSCSI PDUs begin with one or more header segments followed by zero or one data segment. The first segment is the basic header (BH) segment, a fixed-length 48-byte header segment. This can be followed by a number of optional additional header segments (AHSs), an optional header digest, an optional data segment, and an optional data digest.

All the headers are optional other than the BH. The optional digests are CRC-32 (32-bit cyclic redundancy check) values used to validate the contents of the respective segments (headers and data). Apart from a specific TCP destination port of 3260, an iSCSI packet looks just like any other IP packet. From the L2 perspective, the iSCSI packet looks like any ordinary Ethernet frame.
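For illustration, the fixed 48-byte BH segment can be sketched as a packed C structure. The field grouping follows RFC 3720, but the type and field names here are our own, and all multi-byte fields are carried in network byte order:

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  opcode;                 /* immediate-delivery bit + 6-bit opcode */
    uint8_t  opcode_specific1[3];    /* flags and opcode-specific fields */
    uint8_t  total_ahs_length;       /* AHS length, in four-byte words */
    uint8_t  data_segment_length[3]; /* 24-bit data segment length */
    uint8_t  lun_or_specific[8];     /* LUN or opcode-specific fields */
    uint32_t initiator_task_tag;     /* tags the SCSI task */
    uint8_t  opcode_specific2[28];   /* remainder of the 48-byte BHS */
} iscsi_bhs;                         /* sizeof(iscsi_bhs) == 48 */
#pragma pack(pop)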

Note:

The header overhead in iSCSI encapsulation results in a lower effective throughput. For better effective throughput, jumbo frame support should be enabled.
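An illustrative per-frame accounting shows the effect. Assuming a 20-byte IP header, a 20-byte TCP header, and 38 bytes of Ethernet overhead (header, FCS, preamble, and interframe gap), and ignoring the iSCSI headers and digests themselves:

1500-byte MTU: 1460 / 1538 ≈ 95% of line rate
9000-byte jumbo MTU: 8960 / 9038 ≈ 99% of line rate

Jumbo frames also allow each 48-byte basic header segment to be amortized over a much larger data segment.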

iSCSI components

Figure 7.22 illustrates the iSCSI architecture and components. They include:


Figure 7.22. iSCSI architecture and components

Network entity: A network entity represents a device or gateway that is accessible from the IP network. It must have one or more network portals, each of which can be used by some iSCSI nodes contained in that network entity to gain access to the IP network.

iSCSI node: An iSCSI node represents a single iSCSI initiator or iSCSI target. There can be one or more iSCSI nodes within a network entity. The iSCSI node is accessible through one or more network portals and is identified by a unique iSCSI name. The separation of the iSCSI name from the IP addresses for the iSCSI node allows multiple iSCSI nodes to use the same IP addresses and the same iSCSI node to use multiple IP addresses.

Network portal: A network portal is responsible for implementing the TCP/IP stack and is used by an iSCSI node within that network entity for the connection(s) within one of its iSCSI sessions. In an iSCSI initiator entity, the network portal is identified by its IP address. In an iSCSI target entity, the network portal is identified by its IP address and its listening TCP port.

Portal groups (not shown in diagram): A portal group is a set of network portals identified within an iSCSI node by a portal group tag between 0 and 65535. It supports multiple TCP connections over multiple links per iSCSI session and provides multiple paths to the same iSCSI node. Both iSCSI initiators and iSCSI targets have portal groups although only the iSCSI target portal groups are used directly in the iSCSI protocol.

Note:

An iSCSI session is composed of one or more TCP connections from an initiator to a target. These TCP connections can be logically separated on the same physical link or they can be different connections on different physical links.

iSCSI naming schemes

Each iSCSI node, whether an initiator or target, requires an iSCSI name for the purpose of identification. An iSCSI node name is also the SCSI device name of an iSCSI device. The iSCSI name of an SCSI device is the principal object used in authentication of targets to initiators and initiators to targets. This name is also used to identify and manage iSCSI storage resources.

iSCSI names are associated with iSCSI nodes and not iSCSI network adapter cards. This ensures that the replacement of network adapter cards does not require reconfiguration of all SCSI and iSCSI resource allocation information. This also enables iSCSI storage resources to be managed independent of location (or address). iSCSI names must be globally unique and permanent (i.e., the iSCSI initiator node or iSCSI target node has the same name for its lifetime).

Three types of iSCSI node names are currently defined: iSCSI Qualified Name (IQN), Extended Unique Identifier (EUI), and Network Address Authority (NAA).

The IQN string is variable in length up to a maximum of 223 characters. It consists of the following in ascending order (from left to right):

1. The string "iqn"

2. A date code, in "yyyy-mm" format

3. A dot (.)

4. The reversed Fully Qualified Domain Name (FQDN) of the naming authority (person or organization) creating this iSCSI name

5. An optional colon (:) or dot (.) prefixed qualifier string within the character set and length boundaries that the owner of the domain name deems appropriate. This can contain product types, serial numbers, host identifiers, or software keys (e.g., it can include colons to separate organization boundaries). With the exception of the colon prefix, the owner of the domain name can assign everything after the reversed domain name as desired. It is the responsibility of the entity, that is the naming authority, to ensure that the iSCSI names it assigns are unique worldwide. The colon separates the reversed domain name from its subgroup (i.e., subgroup naming authority) to prevent naming conflicts.

For instance, “Example Storage Arrays, Inc.” might own the domain name “example.com.” The following are examples of IQNs that might be generated by this organization:

iqn.2001-04.com.example:storage:diskarrays-sn-a8675309

iqn.2001-04.com.example

iqn.2001-04.com.example:storage.tape1.sys1.xyz

iqn.2001-04.com.example.storage:tape1.sys1.xyz

iqn.2001-04.com.example:storage.disk2.sys1.xyz

The length of an iSCSI node name of type EUI is fixed at 20 characters. The EUI format consists of two components: the type designator "eui." followed by a valid IEEE EUI-64 string. An EUI-64 identifier is 8 bytes long and is expressed as 16 hexadecimal characters, for example:

eui.02004567A425678D

The length of an iSCSI node name of type NAA is either 20 or 36 characters. The iSCSI NAA naming format is “naa.” followed by an NAA identifier represented in hexadecimal. An example of an iSCSI name with an 8-byte NAA value follows:

naa.52004567BA64678D

An example of an iSCSI name with a 16-byte NAA value follows:

naa.62004567BA64678D0123456789ABCDEF

Note:

The iSCSI protocol does not use the iSCSI fully qualified address; it is used by management applications instead. The fully qualified address of an iSCSI node is specified as a URL (uniform resource locator): <domain-name>[:<port>]/<iSCSI-name>.

iSCSI session types

iSCSI implements two types of sessions: normal and discovery. Each discovery session uses a single TCP connection. Each normal session can use multiple TCP connections for load balancing and better fault tolerance. All iSCSI sessions proceed in two main phases: login and full-feature. The login phase always comes first and it consists of two subphases: security parameter negotiation and operational parameter negotiation. Each subphase is optional but at least one of the two subphases must occur.

There are three ways the iSCSI initiator can discover iSCSI targets:

Manual configuration (no discovery).

Using the “SendTargets” command (semi-manual configuration).

"Zero configuration" (or automated configuration) methods such as the Service Location Protocol (SLPv2) and the Internet Storage Name Service (iSNS).

For brevity and simpler illustrations, only the “SendTargets” command is covered in this section. To establish a discovery session using the “SendTargets” command, the initiator needs to be manually configured with the IP address and TCP port number of the target entity. In discovery sessions, the purpose of login is to identify the initiator node to the target entity so that security filters can be applied to the responses. The initiator node name is required in the login request and not the target node name, because the target node name is not known to the initiator before discovery. Upon completion of login, the discovery session changes to the full feature phase. The “SendTargets” command is the only command that can be issued in the full feature phase. After initial discovery, the discovery session can be maintained or closed. Figure 7.23 illustrates a discovery session example.


Figure 7.23. Discovery session example

For normal sessions, the initiator is required to specify the target node name in the login request. After the login phase, the normal session transits to the full feature phase where the initiator can issue iSCSI commands, as well as send SCSI commands and data. Figure 7.24 illustrates a normal session example.


Figure 7.24. Normal session example

iSCSI security authentication

When the initiator enters the security authentication phase, its login request PDU specifies its supported authentication methods using the AuthMethod keyword-value pair in the data segment. All supported methods are listed in order of preference, including “none,” which means that the initiator is willing to skip the authentication phase. The target selects the first authentication protocol that it supports in the list of protocols provided by the initiator.

iSCSI requires support for the Challenge Handshake Authentication Protocol (CHAP), which is widely supported by most iSCSI vendors. Nevertheless, iSCSI can also support Secure Remote Password (SRP), Kerberos version 5, and the Simple Public Key Mechanism (SPKM-1 or SPKM-2).

Note:

The MDS IPS module supports the use of a local password database, a RADIUS server, or a TACACS+ server for CHAP authentication.

iSCSI solicited and unsolicited data transfer

After successful login and authentication, the initiator and target enter the full-feature phase. In this phase, the devices exchange normal command and data PDUs. There are two basic types of data transfers:

Unsolicited data transfer: A write command in which the initiator sends a write command PDU followed immediately by data-out PDUs. The initiator does not wait for an R2T (ready to transfer) PDU from the target. Targets can limit the size of unsolicited writes that they will accept by setting the FirstBurstLength key value in the login data segment. The left diagram of Figure 7.25 illustrates an example of an unsolicited data transfer.


Figure 7.25. iSCSI solicited and unsolicited data transfer for write operation

Note:

Besides unsolicited data transfer, data can be included as part of the write command PDU. This is known as immediate data transfer.

Solicited data transfer: A write command in which the initiator sends a write command PDU and then waits for an R2T PDU from the target, or in which the initiator sends a read command PDU and waits for data-in PDUs. Initiators and targets can limit the size of solicited writes by setting the MaxBurstLength key value in the login data segment. The right diagram of Figure 7.25 illustrates an example of a solicited data transfer.

Note:

In solicited data transfer, iSCSI uses the R2T PDU as the primary flow control mechanism to control the flow of SCSI data during write commands.

iSCSI initiator mechanisms and server virtualization

In general, there are three different iSCSI initiator mechanisms:

iSCSI software driver with a standard NIC

NIC with a TCP offload engine (TOE) to reduce CPU utilization

iSCSI HBAs that offload both TCP and iSCSI operations

Revisiting server virtualization (for details, see Chapter 2), an iSCSI hardware implementation (TOE or iSCSI HBA) can reside in the hypervisor, where iSCSI is terminated. The hypervisor uses either a single iSCSI session for the entire physical server or one session per VM. In an iSCSI software implementation (iSCSI driver), iSCSI can run in the VM over a virtual NIC. In this case, the iSCSI traffic is transparent to the hypervisor, which only "sees" the TCP segments that carry the iSCSI PDUs.


URL: https://www.sciencedirect.com/science/article/pii/B9780123849199000076

The Evolution of Communication Systems

Vinod Joseph, Brett Chapman, in Deploying QoS for Cisco IP and Next Generation Networks, 2009

1.2 Transmission Infrastructure Evolution

In the late 1800s, signals were analog, and each physical line carried a single channel, a technology called circuit switching. Development of the vacuum tube led to analog systems employing Frequency-Division Multiplexing (FDM) in 1925, allowing multiple circuits across a single physical line. Coaxial cable infrastructure started deployment in the 1930s, offering greater bandwidth (and thus more circuits) to the telecom provider and yielding a more efficient infrastructure.

The invention of the transistor and the concept of Pulse Code Modulation (PCM) led, in the early 1960s, to the first digital channel bank featuring toll-quality transmission. Soon after, a high-bit-rate digital system employing Time-Division Multiplexing (TDM) was realized, allowing digital multiplexing of circuits and giving further efficiency in the use of physical communications infrastructure.

Advances in FDM and TDM allowed greater efficiency in physical infrastructure utilization. TDM communicates the bits from multiple signals alternately in timeslots at regular intervals. A timeslot is allocated to a connection and remains allocated for the duration of the session, which can be permanent, depending on the application and configuration. The timeslot repeats with a fixed period, giving each connection a fixed effective throughput.
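For example, in the E1 hierarchy, 32 timeslots of 8 bits each repeat 8000 times per second, giving 32 × 8 × 8000 = 2.048 Mbps on the line, of which 30 timeslots provide user channels of 64 kbps each (the other two carry framing and signaling).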

Multirate circuit switching was the next step away from basic circuit switching. This is an enhancement to the synchronous TDM approach used initially in circuit switching. In circuit switching, a station must operate at a fixed data rate regardless of application requirements. In multirate switching, multiplexing of a base bandwidth is introduced. A station attaches to the network by means of a single physical link, which carries multiple fixed data-rate channels (for example, in the case of ISDN, B-channel at 64 kbps). The user has a number of data-rate choices through multiplexing basic channels. This allows for services of different rates to be accommodated, whereby the number of channels allocated is greater than or equal to the service bandwidth.

The next evolutionary step from pure circuit switching is fast circuit switching (FCS). This transfer mode attempts to address the problem of handling sources with a fluctuating natural information rate. FCS only allocates resources and establishes a circuit when data need to be sent. However, the rapid allocation and deallocation of resources required to achieve this goal proved complex and required high signaling overhead. Ultimately and quickly, FCS became infeasible as more high data-rate services emerged with the dominance of data over voice transport.

It was not until the advent of optical transmission systems that the very high-bandwidth systems we know today emerged. Optical transmission is accomplished by modulating the transmitted information onto light from a laser or light-emitting diode (LED), passing the signal over optical fiber, and reconstructing the information at the receiving end. This technology yielded 45 Mbps optical communications systems, which have developed to 1.2, 1.7, and 2.4 Gbps. The emergence of Dense Wave-Division Multiplexing (DWDM) technology has seen the potential bandwidth over a single fiber reach 400 Gbps and beyond.

In the mid-1980s the most common digital hierarchy in use was the plesiochronous digital hierarchy (PDH). A digital hierarchy is a system of multiplexing numerous individual base-rate channels into higher-level channels. PDH is called plesiochronous (from the Greek plesio, almost) because the transmission is neither wholly synchronous nor asynchronous. PDH was superseded by the synchronous digital hierarchy (SDH) and the synchronous optical network (SONET), which took the PDH signals and multiplexed them synchronously into a hierarchy of basic signals (synchronous transport modules, STMs). So, development went from the asynchronous multiplexing used in PDH to synchronous multiplexing in SDH/SONET.

In contemporary NGN systems with the emergence of IP-based networks, many service providers are using simple underlying DWDM optical switch physical infrastructure or even driving dark fiber directly from the IP routing and switching equipment.

DWDM works by combining and transmitting multiple signals simultaneously at different wavelengths on the same fiber. In effect, one fiber is transformed into multiple virtual fibers. So, if you were to multiplex eight 2.5 Gbps signals into one fiber, you would increase the carrying capacity of that fiber from 2.5 Gbps to 20 Gbps. DWDM technology can drive single fibers to transmit data at speeds up to 400 Gbps.

A key advantage to DWDM is that it is protocol and bit-rate independent. DWDM-based networks can transmit data in IP, ATM, SDH/SONET, and Ethernet and handle bit rates between 100 Mbps and multiples of 2.5 Gbps.


URL: https://www.sciencedirect.com/science/article/pii/B978012374461600001X

Micro LEDs

Kanghwan Kim, ... Euisik Yoon, in Semiconductors and Semimetals, 2021

4.2 GaN LEDs on silicon

The design of an optoelectrode with monolithically-integrated microLEDs requires an appropriate choice of the substrate and the epitaxial stack to support the fabrication. Unlike hybrid-integrated microLEDs, which are first fabricated on a separate substrate, released, and then transferred to the electrode-array substrate, monolithically-integrated optoelectrodes remain on the substrate where the electrode array is subsequently formed. Therefore, the properties of the substrate and the epitaxial layer, and their compatibility should all be taken under consideration.

The key properties that determine the most appropriate combination of the substrate and the epitaxial stack for monolithically-integrated microLED optoelectrode fabrication include the emission wavelength(s), the emission efficiency, and the process compatibility. For example, GaN-on-sapphire, the most favored combination for the display and lighting industries, is unfortunately not optimal for microLED optoelectrode fabrication due to the incompatibility of the sapphire substrate with other microfabrication processes. Sapphire substrates cannot be precisely micromachined with standard microfabrication techniques (e.g., reactive ion etching (RIE)), and therefore the final device cannot be precisely defined with a minimal form factor. As an example, an optrode with five GaN quantum well (QW) microLEDs integrated on a single shank, which McAlinden et al. fabricated on a sapphire wafer with an epitaxially grown GaN layer on top (McAlinden et al., 2013), was too large for in vivo experiments (even without recording sites). McAlinden et al. utilized laser dicing and mechanical thinning, and these techniques could not reduce the cross-sectional area of the shank below 200 μm × 100 μm.

4.2.1 GaN-on-Si technology

Channelrhodopsin-2 (ChR2) is the most widely used type of opsin for optogenetics for neuroscience studies. It has a responsivity curve with a peak at approximately 470 nm and does not respond to light with wavelengths longer than 600 nm. Therefore, indium-gallium-nitride (InGaN) is an appropriate material choice for the active layer of the LED, given its appropriate range of bandgap. On the other hand, single-crystalline silicon is an ideal choice for the substrate of an implantable electrode array because well-established micromachining techniques can be used to precisely define the shape of the device either by wet chemical etching or by dry deep reactive ion etching (DRIE).

Gallium-nitride-on-silicon (GaN-on-Si) has not been the most preferred choice for commercial LED fabrication until recently. The large lattice mismatch between single-crystal Si and GaN resulted in high dislocation and defect densities; therefore, either sapphire or silicon carbide (SiC) substrates were preferred choices for better lattice match, lower defect densities, and higher emission efficiency. However, these substrates are not micromachining-friendly and are therefore not appropriate choices for optoelectrode fabrication, in which high mechanical precision is required.

Thankfully, there have been tremendous advances made in GaN-on-Si technology in the last couple of decades. Driven by the primary goal of reducing the fabrication cost in LED lighting applications, new technologies that take advantage of large-scale silicon wafer processing have been developed, and GaN layers are now being reliably and reproducibly grown on silicon substrates (Fig. 8). GaN LED layers emitting short-wavelength visible light are now being grown on Si wafers (Guha and Bojarczuk, 1998), with high-quality emission wavelength control available by optimal LED stacks, including multi-quantum-well (MQW) structures (Tran et al., 1999), with cost-effective and high-throughput methods such as metalorganic chemical vapor deposition (MOCVD) (Dadgar et al., 2000), and even on large-diameter wafers (Kim et al., 2012; Zhu et al., 2009). Thanks to these advances, custom high-quality GaN-on-Si LED wafers are now commercially available at low cost.


Fig. 8. An 8-in. GaN-on-Si InGaN MQW LED wafer (left) and its epitaxially grown LED stack (right).

Adapted with permission from Kim et al. (2012). Copyright © 2012 Society of Photo-Optical Instrumentation Engineers.

For the fabrication of monolithically-integrated microLED optoelectrodes, MOCVD-grown GaN/InGaN MQW GaN-on-Si LED wafers are utilized. This combination of the substrate material and the LED layers provides process compatibility with Michigan Probe fabrication while enabling the high-density integration of microLEDs with sufficient optical power for in vivo ChR2 activation. Besides the process compatibility, there are several additional advantages of GaN-on-Si over GaN-on-sapphire. Since silicon has about five times the thermal conductivity of sapphire, the heat generated by LED operation can be more effectively dissipated. In addition, as opposed to LEDs on sapphire, which emit light isotropically through the transparent sapphire substrate, the opaque silicon substrate helps confine light emission to the top side of the LEDs.

4.2.2 Emission efficiency of GaN-on-Si LEDs

An additional material consideration for monolithically-integrated LEDs for opto-electrophysiology, other than the ability to process the material itself, is the effectiveness of optical stimulation with integrated light sources. Not only does a considerable amount of light have to be generated for the activation of ChR2 in vivo, but the LEDs should also have reasonable plug efficiencies so that non-radiative recombination in the LED does not induce excessive heating in the surrounding tissue. It is generally agreed that an irradiance greater than approximately 1 mW/mm2 (Stark et al., 2012) is required to stimulate the activity of ChR2-expressing neurons. On the other hand, a temperature increase of 1 °C is considered the threshold for permanent damage to brain tissue (Sharma and Hoopes, 2003).

Theoretical calculations and bench-top testing have shown that GaN-on-Si LEDs can safely generate more than sufficient light for optical stimulation. A well-known drawback of LEDs fabricated using GaN-on-Si wafers is their lower emission efficiency due to higher interface defect densities, which are approximately 10 times higher than those of GaN-on-sapphire (Zhu et al., 2009). A first-order calculation based on the difference between the dislocation densities suggests that the IQE of GaN-on-Si LEDs would be approximately 33% of that of GaN-on-sapphire LEDs (Zhu et al., 2011). Experimental results suggest that despite the low plug efficiency, GaN-on-Si LEDs generate sufficient light while consuming only tens of microwatts of electrical power (Kim et al., 2016; Wu et al., 2015). Even if all the power were converted to heat at the LED, with an appropriate pulse-width-modulated stimulation scheme (duty ratio < 50%) the heating of the brain tissue is calculated to be negligible (Kim et al., 2016).


URL: https://www.sciencedirect.com/science/article/pii/S0080878421000028

Spectrum Awareness and Access Considerations

Preston Marshall, in Cognitive Radio Technology (Second Edition), 2009

5.6 Dynamic Spectrum Access Objectives

This section establishes the process and anticipated performance of dynamic spectrum access within CR systems. This is important for three reasons:

1. The DSA mechanism is an enabling requirement for many of the techniques and technologies discussed in other chapters, and it is an inherent element in any adaptive wireless structure. The DSA-enabled techniques include front-end linearity management, dynamic topology, and waveform selection. Even if the spectrum management benefits were nonexistent, the flexibility it affords to manage other aspects of the wireless environment justifies its inclusion.

2. DSA offers capability benefits through pooling of spectrum resources and managing spectrum conflict resolution dynamically.

3. DSA radios are inherently interference tolerant, and therefore radios sharing spectrum with CRs can be more aggressive in their spectrum reuse policies since, if they occasionally cause interference, the interference can be resolved directly by the CR.

In examining spectrum utilization for CRs, the following four spectrum usage density (SD) situations should be considered.

1. SD_NC-NC: The spectrum density of non-CRs that must be guaranteed a high level of interference protection, since they cannot adapt to mitigate in-channel interference, for stationary nodes whose locations are known.

2. SD_MNC-MNC: As before, but for nodes that are also mobile within an operating region, need protection throughout the region, and could cause interference to other nodes throughout the region.

3. SD_CR-NC: The spectrum density of CRs sharing spectrum with non-CRs that must be guaranteed a high level of interference protection, since the non-CRs cannot adapt to mitigate in-channel interference. No guarantees are assumed for the CRs.

4. SD_CR-CR: The spectrum density of CRs sharing spectrum with other CRs that are not guaranteed interference protection, since CRs can adaptively mitigate in-channel interference. The ceiling on capacity is given by maximizing the aggregate effective throughput of the radios in the band, rather than the performance of any given radio.

Before discussing a cognitive spectrum process, consider the classical spectrum management and assignment case. Once radios moved beyond spark-gap techniques (the original impulsive ultra-wideband radio), use of the spectrum has been deconflicted to avoid interference. Spectrum and frequency managers assign individual radios or networks discrete frequencies and attempt to ensure that the emissions from one do not adversely impact others. A not insignificant legal (and seemingly smaller technical) community has grown up around this simple principle.

A key measure of a DSA system is its ability to provision more spectrum access in order to create a corresponding increase in network capability. In examining the operation of DSA performance, we will consider three cases:

1. A manually de-conflicted spectrum that is statically assigned without specific (real-time) knowledge of user location, actual usage, or bandwidth needs.

2. Cognitive radios that share spectrum with, and avoid interference with, noncooperative incumbent systems.

3. Cognitive radios sharing spectrum with other CRs, minimizing, but not necessarily avoiding, interference with other users, on the assumption that they can resolve interference autonomously.

It is important to recognize that this conservative manual planning is not inherent in the operation of the radio links; statistically it is significant only because it represents a set of cases in which the radio system would have no ability to operate: without adaptation, even if the system recognizes that an interference condition exists, it cannot unilaterally implement and coordinate a strategy for migrating to a clear channel. In this chapter, we mostly consider spectrum strategies that use awareness to locate spectrum holes, which are themselves often the result of the essentially conservative nature of the planning process. However, an equally important rationale for their inclusion in real systems is the ability to locally resolve interference by using the same behaviors as are used to locate new and unblocked spectrum. This feature of interference-adaptive radios offers all users of the spectrum the ability to back off from the conservative assumptions that currently underlie spectrum planning.

Spectrum and frequency planners are inherently disadvantaged by a number of factors. For one, they have to assume that interfering signals will propagate to the maximum possible range, and that desired signals must be received without unacceptable link-margin degradation in the worst-case propagation conditions.

In practice, this means that interference analysis is often driven by two unlikely conditions: maximal propagation of interfering signals and minimal propagation of the desired signal. Although individual situations vary widely, the range of conditions has been measured and its distribution characterized in a number of environments [28]. A summary of one set of measurements of the propagation exponent (n) and fade loss (σ) random variables is shown in Figure 5.25 (from [28]).


Figure 5.25. Illustrative spectrum measurements of propagation exponent and fade random variable.

Figure 5.26 illustrates the relationship between the various communications and interference ranges involved in DSA. Case (a) is the desired high-assurance communications range, which must assume worst-case propagation (αwc) and fade, and still ensure a signal level above Ereceive. This is a conservative range, but typical for the assumptions required for high-assurance link planning. Case (b) is the range by which radios must be separated for manual frequency de-confliction, reflecting that the victim radio may be in an advantaged position to the transmitter (αbc) with no fade condition present, and that the signal level in that situation must be below the interference threshold (Einterfere). The DSA separation, shown in case (c), is between the best and worst-case propagation (αtc); significant fading is not present, and the interference is also limited to Einterfere.


Figure 5.26. Practical interference margins.

Most of this pessimism is not inherent in the operation of radios; it is strictly a consequence of having to plan for “edge cases” in advance of knowledge of the actual conditions. Static spectrum planning must assume that links operate at the maximally stressing conditions, while interference will occur when links are maximally configured to cause interference.

Dynamic spectrum access enables CRs to reduce both of these margins to reflect the actual conditions present for both intended and victim links. This produces capability gains through two mechanisms:

1. The CR can exploit spectrum by reflecting other spectrum users' actual usage (time, frequency, power, etc.), and thus create more assignments in the same spectrum.

2. CRs can tolerate more aggressive spectrum reuse because they can move their spectrum assignments in response to any interference they do receive, allowing other radios to be less conservative in sharing spectrum with them.

The deployment of CR will thus yield two increments of benefit: the first when CRs more effectively share spectrum with non-CRs, and the second when CRs can assume that other radios are cognitive and thus can mitigate any (hopefully rare) situations of interference that they may have caused. This latter assumption allows the radio to reduce its protection margin since the network as a whole can tolerate occasional interference without disruptive consequences.

5.6.1 Interference-Intolerant Operation

As stated, the mobility metric is driven by the requirement that spectrum be deconflicted over the entire region in which the device might be located, extended outward by the interference range of the device, as depicted in Figure 5.27.


Figure 5.27. Determination of mobility factor in spectrum management.

In this figure, the spectrum manager is aware of all of the paths (or locations of operation) in which the device might operate, plus the interference radius by which it must be de-conflicted. This creates a reserved area (actually a volume, but the third dimension is rarely exploitable, so it is ignored in this discussion) in which spectrum cannot be used for any other purpose without risk of mutual interference. If there is no preknowledge of mobility, the spectrum must be reserved over the entire extent in which operation is permitted, to the level of operational availability required (A0). Note that the interference radius must be bilateral; it must include the effect of the power spectral density of both devices. The determination of the interference radius must reflect the larger of the two directionalities, that is, the greater of the range at which A might interfere with B and the range at which B might interfere with A.

Power management enables the link to operate with power appropriate to the actual conditions of the link, rather than those of the worst-case conditions and fade margin. The conditions described here are rare enough not to influence the mean or median value of the operating network, but are significant enough to substantially influence its reliability. In many networks it is impractical to provide power management. The hardware used in the network is often not capable of significant ranges of output power; the feedback mechanisms cannot be provided in a simplex device; there are a significant number of receivers that must receive the broadcast; or some of the receivers have no feedback mechanism (e.g., a receive-only terminal). Although power management may be present, the spectrum planning process must assume that it does not reduce the maximal radiated power.

The duty cycle compares the time for which spectrum must be reserved to the time during which it is actually used. Here, "used" does not mean only the time during which energy is being transmitted; its evaluation is somewhat more subtle because of two effects. First, the time during which the receiver is sensitive to interference is certainly use of the spectrum and should be considered "used." Second, time periods that are too short to be exploited by other users are essentially "used." Interleaving independent (noncooperative) users within medium access control (MAC) layer intervals is generally not practical, and thus the entire operating time could be considered as being used.

We postulate that CR offers the ability to manage this situation more effectively by using the ability to sense the actual propagation conditions that occur and to adjust the radio dynamically to best fit these conditions. To do this, we distinguish its operation with two objectives. In the first, it attempts to minimize its own spectral “footprint,” consistent with the environment and needs of the networks it supports. In the second, it adapts itself to fit within whatever spectrum is available based on local surveillance. When put together, we can conceive of a radio that can find holes and morph its emissions to fit within one or more of the holes. Such radios could offer radio services without any explicit assignment of spectrum, and still be capable of providing high-confidence services.

This author proposed a structure for segregating these two operating policies in 2004, and proposed it to the ITU as a starting point for regulatory consideration of CRs, shown in Figure 5.1. This partitioning was later adopted as the basis for the DARPA XG demonstrations. In the model the DSA spectrum reasoner is free to locate solutions that maximize the performance of the wireless device.

One fundamental difference in performance between cognitive and non-CRs is in how they obtain and access spectrum. Non-CRs generally obtain spectrum in one of two ways:

Assigned. Assigned spectrum is typically assigned to a given user or usage from a regulatory authority (or by delegation from one) and is typically assumed to be exclusive or preemptive use. Typically there is an assurance of noninterference with this class of spectrum. Broadcast services, cellular, satellite, and public safety are examples of this class.

Commons. Commons spectrum is provided for use by a number of users, generally with some technical or operational constraints. There is no assurance of availability or noninterference. Examples of this class include the industrial, scientific, and medical (ISM) bands, commonly referred to as unlicensed.

The effect of duty cycle is somewhat subtle. If the duty cycle is 25 percent, does that mean there is an opportunity to load four times as many radios? This is possible on a mean-value basis, assuming that the system can use spectrum whenever it is available. That is an appropriate model for a system that is intrinsically tolerant of access delays, such as a delay-tolerant network (DTN) [29], but it would not be acceptable for most applications. The extent to which duty cycle can be exploited is a function of the size of the pool of spectrum. For example, 10 channels shared among 40 users is quite different statistically from 100 channels shared among 400 users in terms of the reliability it can deliver. Therefore, we introduce two additional parameters to fully specify an environment: the required availability (A0) and the pool size (pool).

The availability of a given number of channels (needed) at a given duty cycle and pool size follows the binomial distribution, where a success means that a channel is accessible and the probability of success is 1 − duty.

(5.15) A0 = Σ_{k=needed}^{Npool} C(Npool, k) · (1 − duty)^k · duty^(Npool − k), for needed ≤ Npool

For large pool sizes this is well approximated by a normal distribution, and a more convenient description of the CDF uses the regularized incomplete beta function (I_x):

(5.16) A0 = 1 − P(X ≤ needed − 1) = I_{1−duty}(needed, pool − needed + 1).

The benefits of a statistically large pool of spectrum are clear in the values of A0 for various values of relative spectrum availability, and are shown in Figure 5.28. The horizontal axis is the degree of “excess” spectrum provided. Excess is spectrum beyond the expected value of the product of the number of nodes and the duty cycle.


Figure 5.28. Representative values of spectrum probability of availability (A0) for 20 percent duty cycle.

Without a sufficiently large number of radios sharing spectrum, the pool size is statistically too small to ensure access to spectrum at high enough confidence to support reliable operations. For example, in the above case of a 25 percent duty cycle, high-confidence operation (A0 > 98 percent) requires essentially one channel per radio in a pool of 10 radios, at least twice the mean value for a pool of 20 radios, but only a 25 percent margin above the mean for a set of 160 radios sharing the spectrum. All users benefit from large-scale spectrum pooling.
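These availability figures can be reproduced by evaluating Eq. (5.15) directly. The following small C helper (our own illustration, with hypothetical names) sums the binomial tail in log space to avoid overflow for large pools:

#include <math.h>
#include <stdio.h>

/* Probability that at least `needed` of `pool` channels are simultaneously
   free, when each channel is independently busy with probability `duty`. */
double availability(int needed, int pool, double duty) {
    double a0 = 0.0;
    for (int k = needed; k <= pool; k++) {
        double log_comb = lgamma(pool + 1.0) - lgamma(k + 1.0)
                        - lgamma(pool - k + 1.0);
        a0 += exp(log_comb + k * log(1.0 - duty) + (pool - k) * log(duty));
    }
    return a0;
}

int main(void) {
    /* 20 percent duty cycle, as in Figure 5.28: availability of 8 channels
       from progressively larger pools. */
    for (int pool = 10; pool <= 40; pool *= 2)
        printf("pool = %2d: A0 = %.4f\n", pool, availability(8, pool, 0.2));
    return 0;
}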

The metric by which these cases will be assessed is SpectralDensity. This metric is highly sensitive to individual designs, environments, and usage patterns, but its relative change (holding these assumptions constant) will be shown to give insight into the ability of CRs to achieve higher usage densities from fixed portions of the spectrum.

(5.17) SpectralDensity = [Σ_{i=1}^{n} dutycycle_i · bandusage_i] / (Bandwidth_0 · Area_0)

where:

n = number of users in the spectrum

dutycycle_i = the duty cycle of user i

bandusage_i = the instantaneous spectral bandwidth used by user i

Bandwidth_0 = the total bandwidth made available to the set of users

Area_0 = the geographic area over which the spectrum is used

Clearly, the optimum situation is to have spectrum available for use and allocation in all locations outside the interference radius of the potential emitter. The maximal density is thus one device per interference area. For a de-conflicted mobile device, the reserved region is the operating area plus the region surrounding the perimeter of the operating area out to the interference radius. Our measure is the ratio of the maximal possible device density to the achievable device density.

(5.18) AreaEffectiveness = InterferenceArea / DeconflictedArea = π·r_interference² / (Deconflict_area + Deconflict_perimeter · r_interference + π·r_interference²)

where

r_interference = interference radius of the worst-case link pairing

Deconflict_area = the de-conflicted mobility area

Deconflict_perimeter = the de-conflicted mobility perimeter

The first case is simple two-way communications between two vehicles that may navigate anywhere in the continental United States. The communications range of the device is 7 kilometers, so a reasonable estimate of its interference radius is 22 kilometers (estimated by assuming diffractive propagation in an r^4 environment and a 20-dB SNR at the edge of the operating range). Since the United States has an approximate area of 9.6 million square kilometers, the spectrum effectiveness is 1.5 × 10^−4. It is even lower when considering the effect of duty cycle. If we assume that the vehicles operate for 8 hours a day and talk at most 10 percent of the time, the actual utilization drops to 5.2 × 10^−6! This is a very good reason to believe that CR technology has a target-rich environment in which to improve these practices.
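These figures follow from Eq. (5.18) under the stated assumptions: the interference area is π × 22² ≈ 1520 km², so 1520 / (9.6 × 10⁶) ≈ 1.5 × 10^−4; multiplying by the duty cycle, (8/24) × 0.1 ≈ 0.033, gives approximately 5.2 × 10^−6.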

A more reasonable approach might be to only dedicate a piece of spectrum for a single city. Los Angeles has an area of 12,000 square kilometers. Adding the perimeter that must also be protected from use, the total reservation increases to 22,000 square kilometers. Using the same operating assumptions as before, the effectiveness is approximately 2.3 × 10−3.

The benefits of increased spectrum utilization can be expressed as additional wireless capability, a reduction in constellation order, or the introduction of spreading. These can be equated to capacity linearly (assuming spectrum usage is reduced) or to a reduction in energy (using the same spectrum, but at a lower modulation order) through Shannon-limit analysis.

It should be noted that many systems of wireless devices already implement more advanced techniques than this baseline. For example, wireless hubs may search for open channels, cellular devices may be assigned open slots or frequencies, and 802.11a implements dynamic frequency selection (DFS). These examples do not argue against the baseline; they represent the first (albeit simple) implementations of dynamic spectrum systems, and thus of cognitive radio.

5.6.2 Interference-Tolerant DSA Operation

In the previous section, we considered the initial case of DSA radios sharing spectrum with devices that were not tolerant of any interference to their operation, as they were presumed not to have DSA capability, and therefore any energy in their communications channel was considered to be noise that would degrade their link margin. In this section, we instead consider the case where the incumbent radios have DSA capability, and use that capability to not only locate open spectrum, but also to mitigate the effects of interference with their own network. The effect of this change is profound: instead of having to create essentially near-zero interference operation, they are allowed to create a possibility of interference, with that interference level constrained to be low enough so that the aggregate network costs of frequency relocation do not exceed the additional capability created by these more aggressive spectrum usage practices.

The approach of spectrum outage probability (SOP) provides a model for determining the probability that the aggregate signal strength from a set of homogeneous emitters exceeds a set emissions mask at a fixed location within a network [30, 31]. We consider the difference between the node density permitted in spectrum environments that must provide incumbent users confidence of noninterference, as in shared spectrum, and the maximum density for nodes in spectrum in which nodes must accept a probability of encountering some level of interfering signals. In the latter case, it is assumed that the device can relocate itself in the spectrum after determining that it is being interfered with. For purposes of analysis, we consider this as a set of discrete choices, but in practice a device could select multiple contiguous or noncontiguous opportunities.

Pinto and Win [30] show that, in the general case, the SOP for an infinite plane of Poisson-distributed nodes is governed by an alpha-stable distribution. Although the SOP formulation considers a range of frequencies and interference masks, we need consider only the primary frequency in this analysis and, for both simplicity and generality, assume a flat energy distribution within the transmission band. Our interest is the relative density of CRs and non-CRs, and the results scale with differing values of the interference threshold. We also look at the situation from the perspective of the interferer's decisions, so we examine only the effects of a single interferer.
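The alpha-stable form rarely admits a simple closed expression, but the SOP is straightforward to estimate numerically. Below is a minimal Monte Carlo sketch for a Poisson field of co-channel emitters with r⁻⁴ path loss; the densities, transmit power, threshold, and region size are illustrative assumptions, not parameters from [30] or [31]:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_sop(density_per_km2, tx_power, threshold,
                 region_radius_km=50.0, alpha=4.0, trials=10_000):
    """Monte Carlo SOP estimate: the probability that aggregate co-channel
    interference at the origin exceeds `threshold`, for a Poisson field of
    emitters with r^-alpha path loss (near field clipped at 10 m)."""
    area = np.pi * region_radius_km ** 2
    mean_nodes = density_per_km2 * area
    outages = 0
    for _ in range(trials):
        n = rng.poisson(mean_nodes)
        # Uniform points in a disc: radius distributed as R * sqrt(U)
        r = region_radius_km * np.sqrt(rng.random(n))
        r = np.maximum(r, 0.01)
        outages += tx_power * np.sum(r ** -alpha) > threshold
    return outages / trials

# Illustrative densities; power and threshold are in arbitrary assumed units
for d in (0.01, 0.05, 0.10):
    print(f"density {d}/km^2 -> SOP ~ {estimate_sop(d, 1.0, 1e-4):.3f}")
```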

This analysis differs from the similar intent of Gupta-Kumar in one respect: Gupta-Kumar assumed that an interference event caused a loss of throughput through the failure of the information transfer. Here, we assume that the consequence of an interference event is a forced transition to a new frequency, and that the cost of this transition is not a failure but a temporary loss of capability while the transition is performed. The mean effect on throughput in a statistically independent environment is thus:

$$\mathrm{Capacity} = \frac{T_{\mathrm{sensing}}}{T_{\mathrm{sensing}} + \mathrm{SOP}\cdot T_{\mathrm{rendezvous}}} \tag{5.19}$$

where:

$\mathrm{SOP}$ = spectrum outage probability

$T_{\mathrm{sensing}}$ = the interval between successive sensing operations

$T_{\mathrm{rendezvous}}$ = the time to re-rendezvous the physical layer

This capacity measure is 1 if no spectrum outage occurs. In fact, for short sensing intervals ($T_{\mathrm{sensing}}$) the assumption of independence is very conservative, as nodes typically move much more slowly than the sensing rate. The value is thus an upper bound on the rate of disruption, and therefore a lower bound on the capacity. A unitless generalization of this relationship can be constructed by substituting the ratio of the rendezvous time to the sensing interval, referred to as the DSA index:

$$\mathrm{Capacity} = \frac{1}{1 + \mathrm{SOP}\times I_{\mathrm{DSA}}}, \qquad \text{where } I_{\mathrm{DSA}} = \frac{T_{\mathrm{rendezvous}}}{T_{\mathrm{sensing}}} \tag{5.20}$$

Figure 5.29 illustrates the relationship among capacity, SOP, and the DSA index for representative values of each (a sensing interval of 100 ms and a re-rendezvous time of 186 ms, giving an $I_{\mathrm{DSA}}$ of 1.86). At this $I_{\mathrm{DSA}}$, operation is possible even at an SOP of 10⁻¹ with only a 15 percent degradation of channel access performance.


Figure 5.29. Capacity for a typical set of sensing rate and rendezvous times.

While a 10⁻¹ chance of causing interference would be unacceptable to a non-DSA radio sharing a band, it is not a significant performance impediment to a DSA radio sharing the same band. The opportunity provided by DSA-to-DSA spectrum sharing is thus to greatly increase the density of nodes, maximizing aggregate throughput. The product of node density and channel access gives the total traffic density per unit area and spectrum allotment.
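As a concrete check on Eqs. (5.19) and (5.20), a few lines of Python reproduce the Figure 5.29 operating point quoted above (the function is our own restatement of the formula):

```python
def dsa_capacity(sop, t_sensing, t_rendezvous):
    """Fraction of channel-access capacity retained by a DSA radio that
    must re-rendezvous after each sensed outage (Eq. 5.20)."""
    i_dsa = t_rendezvous / t_sensing          # the DSA index
    return 1.0 / (1.0 + sop * i_dsa)

# Sensing every 100 ms, 186 ms to re-rendezvous => I_DSA = 1.86
for sop in (1e-3, 1e-2, 1e-1):
    print(f"SOP={sop:.0e}: capacity={dsa_capacity(sop, 0.100, 0.186):.3f}")
# At SOP = 1e-1 the capacity is ~0.843, i.e., ~15 percent degradation
```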

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123745354000059

Integrated Services Digital Network (Broadband and Narrowband ISDN)

Gene Mesher, in Encyclopedia of Information Systems, 2003

IV. Conclusion: the ISDN Vision and the New Age of Telecom

The Integrated Services Digital Network concept grew out of the vision of transforming the analog telephone network of the 1960s into an entirely digital telecommunications network. The vision also included the idea that ISDN would be only the first step toward an even more advanced end point, in which the digital network would eventually evolve into a high-speed, packet-switched multimedia network where real-time applications, especially video-telephony, would be commonplace.

As the first stage in moving toward that vision, narrowband ISDN was developed to provide a complete, end-to-end solution: on the one hand, the final stage in converting the analog telephone network to a circuit-switched, digital form; on the other, an intermediate, bridging technology to set the stage for the later implementation of a high-speed, packet-switched broadband network.

Narrowband ISDN was also developed in the monopoly-controlled industrial environment that was then the norm in telecommunications markets worldwide. Just as ISDN entered the marketplace in the early 1980s, however, the telecommunications market itself, first in the U.S. and later around the world, changed dramatically as decades-old telecommunications monopolies began to face competition. The telephone companies responsible for providing ISDN suddenly found themselves competing with a variety of products across a variety of market segments.

The evolution of modem technology illustrates this. By the mid-1980s, data rates for voice-band modems had reached only about 1200–2400 bps. A decade later, modem data rates had increased by roughly a factor of ten, and newer protocols such as V.34 also began to incorporate data compression techniques that multiplied effective throughput by an additional factor of 2–4. The modem market was also highly competitive, driving down prices while delivering ever-faster modems to the marketplace.

ISDN had great difficulty competing in the modem market for three reasons. First, and perhaps most important, modems use voice-grade lines, which were already ubiquitous: no special equipment beyond the modem and a PC or terminal was required to make the network connection. ISDN, by contrast, required that the lines carrying ISDN signals be specially engineered to do so, a cost inevitably passed on to the customer. Second, the BRI ISDN offering, intended for home or small-business users, was not a scalable technology. Modems, by contrast, were effectively scalable, since all a consumer had to do was replace the modem with a newer model. Third, the modem market was extremely competitive, rapidly producing new products at ever-lower prices. Given the ubiquity of voice lines, ISDN's lack of scalability, and fierce price competition among modem makers, it is small wonder that ISDN, a product designed for a very different market, did poorly.

Rapid growth in the use of the Internet also provided a major source of competition for ISDN services. Although some authors have described ISDN as being “a solution looking for a problem,” the same could easily be said of the Internet, which was developed as a general-purpose platform for networking. Like the modem market, however, and partly driven by it, the Internet's high growth rates were sparked by low prices for Internet access and a very cheap access platform (the telephone network itself, coupled with cheap modems and PCs), with e-mail and the World Wide Web as the main applications. ISDN, by contrast, was developed first and foremost to solve the technological problem of providing digital telephony to telephone users. No part of this solution was market-oriented, and thus, after solving the “problem” of digital telephony, ISDN foundered because it failed to address user needs.

Although it is still too soon to make a final pronouncement on B-ISDN, that technology appears headed for a similar fate. While the SONET physical-layer protocol has indeed achieved global success and is clearly replacing the T- and E-carriers around the world, the combination of SONET and ATM has not become widespread.

In fact, it now appears that the ATM protocol may well face the same fate. Although ATM was developed with a number of features intended to make it the protocol of choice for real-time networking, such as high speed, a small fixed packet size, quality-of-service support, and end-to-end protocol implementation, it does a poor job of transporting what are now the most popular networking protocols, Ethernet and IP.

Because of this transport compatibility issue, ATM has not become popular in LAN environments, in spite of its high speed and other features desirable for real-time operations. More recently, Ethernet, now used on over 95% of LANs worldwide, has been introduced into the WAN and is being modified to include an additional tag field that allows frame prioritization. While not initially suited to compete in the WAN environment or to transport real-time applications, Ethernet appears to be evolving to compete with ATM by adopting many of the features considered desirable in the ATM protocol. Thus, although ATM has done well in WAN environments to date, it is now possible that it will be replaced by Ethernet over the next few years, and both ISDN and B-ISDN may well be remembered more for their failure to succeed in the globally competitive telecommunications marketplace than for their roles as first movers into the high-speed digital networking market.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000939

Architectural Requirements and Techniques

David Large, James Farmer, in Broadband Cable Access Networks, 2009

Techniques to Increase Information Capacity

Both competitive responses and new revenue opportunities put pressure on the cable technical community to increase network information capacity. Each of these has capital and operating cost implications and may have regulatory issues as well. Some of the technologies considered follow:

Increased upper frequency limit. As discussed in Chapter 3, this requires a changeout of active (and sometimes passive) network elements, but may also require respacing of amplifiers, replacement of cable, and revision of network powering. It will almost certainly require replacement of optical transmitters. Depending on the frequency limit, it may also require replacement of terminal equipment; in particular, most deployed cable modems and digital video set-top boxes (some owned by customers) do not tune above 864 MHz, so any bandwidth expansion beyond that frequency may require a substantial investment beyond the distribution network upgrade and, in the case of customer-owned equipment, may run into regulatory issues.

Switched digital video. As discussed above, SDV offers bandwidth usage efficiency. Where customers already have digital video STBs that accept downloads of new applications, deployment can be quite smooth. The available gain, however, is limited by two principal factors: no bandwidth gain results from switching the most popular channels, since at least one subscriber in any service group is likely to be viewing each of them at any given time, and FCC rules currently appear to forbid switching any channel designated as “one way,” to ensure compatibility with “unidirectional digital cable products” (one-way cable-ready receivers), as defined in the rules.* Finally, the future ability to send personally targeted ads to customers accessing SDV channels demands dedicating streams to individual viewers (as opposed to groups of viewers requesting the same programming), and this somewhat reduces the effective throughput gain.

Node splitting. Splitting existing nodes into smaller subnodes, each of which can receive independent signals, creates the potential to multiply both downstream and upstream capacity by the inverse of the ratio of homes in each subnode to those in the original node. The key word is “potential,” however, because the gain depends on how the operator uses bandwidth. In most cable systems today, the majority of RF bandwidth is devoted to signals sent in common to all subscribers, or at least to groups of subscribers no smaller than the unsplit node, for which a smaller service group brings no benefit. Only for services that can efficiently share bandwidth in smaller groups (e.g., Internet access) is there a gain through node splitting (a simple illustration follows this list). For other shared-bandwidth services, there will be a throughput gain, but at the expense of more total headend transponder equipment than before the split.

Recovery of spectrum used for analog video. The majority of bandwidth on most cable systems is used for the downstream nonswitched transmission of analog video signals, primarily those making up the “basic” service level. These services are also often “simulcast” in digital form on the same networks, primarily because doing so allows operators to purchase digital-only converters that are less expensive than hybrid analog/digital models. While it seems inevitable that the need to deliver analog signals will eventually go away (as customers gradually replace their old analog receivers), there may be both legal and marketing reasons to continue supporting older receivers for many years. Given that reality, there are two methods by which the distribution-system spectrum devoted to analog transmission can be recovered: supplying digital converters for every analog television receiver still in use by customers, or re-creating the analog spectrum at the side of each home (or, potentially, at each tap). The first is a straightforward cost issue: the capital cost to purchase the converters plus the ongoing operating cost to install and service them must be balanced against the value of the recovered spectrum. The trade-off of the second approach is similar, except that it requires fewer, but more expensive, new devices.

Expansion of the upstream band. So long as over-air broadcast stations were assigned low-VHF channels, together with the right to demand carriage on those channels through cable systems, the top end of the upstream band was effectively limited to 41 MHz. With the demise of over-air analog television transmission, there may be few stations assigned to channels 2 to 6. This would allow shifting the upstream/downstream frequency division, as discussed in Section 9.2.1. Although this looks simple on paper, it in fact requires changing every diplex filter in every amplifier and node, and upgrading or replacing upstream amplifiers and many optical transmitters and headend receivers. As with downstream bandwidth expansion beyond 864 MHz, it requires deployment of new terminal equipment that can use the expanded frequency range.

Use of advanced digital video-encoding techniques. Standards for cable television use of more efficient digital video-encoding formats have been approved, and their use will allow roughly twice as many digital video streams to share the same RF channel. As with several other initiatives, however, the problem is that most deployed terminal devices cannot decode these advanced formats, nor can they be upgraded to do so. This leaves operators with a choice between an expensive, accelerated terminal-equipment replacement schedule and waiting for attrition to accomplish the same task much more slowly. Furthermore, FCC regulations (via SCTE 40) require one-way digital video services to be delivered in MPEG-2 format.
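As a simple illustration of the node-splitting arithmetic mentioned above (all subscriber counts and the sharable-spectrum fraction are hypothetical):

```python
def split_gain(homes_original, homes_subnode, sharable_fraction):
    """Effective capacity multiplier from a node split, counting only
    services that can share bandwidth at sub-node granularity (e.g.,
    Internet access). Broadcast services common to all subscribers
    gain nothing from the split."""
    split_ratio = homes_original / homes_subnode
    return sharable_fraction * split_ratio + (1 - sharable_fraction)

# Hypothetical: a 500-home node split into 125-home subnodes, with 25% of
# the spectrum carrying narrowcast/shared services
print(split_gain(500, 125, 0.25))   # ~1.75x effective capacity
```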

In summary, there are several options available to operators to increase effective network throughput capability, but all have trade-offs that must be evaluated against a company's near- and long-term expansion plans.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123744012000097

Clonal-Selection-Based Minimum-Interference Channel Assignment Algorithms for Multiradio Wireless Mesh Networks

Su Wei Tan, ... Cheong Loong Chan, in Bio-Inspired Computation in Telecommunications, 2015

13.1 Introduction

A wireless mesh network is a multihop wireless network formed by a number of stationary wireless mesh routers. These routers are connected wirelessly using a mesh-like backbone structure. Some of the routers function as a wireless access point for clients (e.g., laptops and smart devices with wireless access) to attach themselves to the network. The clients transmit and receive data via the backbone mesh network. To connect to external networks such as the Internet, one or more routers are connected to the wired network and serve as gateways. Figure 13.1 illustrates a sample wireless mesh network consisting of six mesh routers, two of which also function as gateways.


Figure 13.1. A multiradio wireless mesh network with channel assignment.

By leveraging commodity IEEE 802.11 (more commonly known as Wi-Fi) hardware, wireless mesh networking reduces the dependency on wired infrastructure, and hence is being used to provide low-cost Internet access to low-income neighborhoods and sparsely populated areas. The interested reader is referred to Akyildiz et al. (2005) for other application areas of wireless mesh networks.

One key challenge in adopting wireless mesh networking is the effective throughput that can be offered to clients. Due to the broadcast nature of the wireless medium, signals transmitted from different devices over the same channel (frequency band) will collide, causing data loss. Hence, multiple-access techniques such as time-division multiple access, frequency-division multiple access, or random access are required to coordinate transmissions over the channel. It is well known that the effectiveness of the random access technique used in IEEE 802.11 networks degrades as the number of devices increases. To reduce interference, devices may transmit over the different nonoverlapping channels provisioned in the IEEE 802.11 standards. In other words, the capacity of a wireless mesh network can be increased by equipping the routers with multiple radio interfaces, each tuned to a different channel.

The attainable capacity of a multiradio wireless mesh network depends on how the various channels are assigned to each radio interface so as to form a mesh network with minimum interference. This is referred to as the channel assignment problem. The assignment must satisfy two constraints: the number of distinct channels assigned to a router is at most the number of interfaces on that router, and the resulting mesh network remains connected. The problem is known to be nondeterministic polynomial-time hard (NP-hard) (Subramanian et al., 2008). A sample channel assignment fulfilling these constraints is also given in Figure 13.1.
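To make the two constraints concrete, the following sketch checks a candidate quasistatic assignment over a toy four-router topology; the graph, radio counts, and channel numbers are illustrative assumptions, not an example from the chapter:

```python
# Toy mesh: each link (router pair) carries one assigned channel
links = {("A", "B"): 1, ("B", "C"): 6, ("C", "D"): 11, ("B", "D"): 1}
radios = {"A": 1, "B": 2, "C": 2, "D": 2}   # interfaces per router

def is_feasible(links, radios):
    """Feasible if each router uses at most as many distinct channels as
    it has radio interfaces, and the links form a connected mesh."""
    # Interface constraint: count distinct channels touching each router
    used = {r: set() for r in radios}
    for (u, v), ch in links.items():
        used[u].add(ch)
        used[v].add(ch)
    if any(len(chs) > radios[r] for r, chs in used.items()):
        return False
    # Connectivity constraint: breadth-first search over the link set
    nodes = list(radios)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        n = frontier.pop()
        for (u, v) in links:
            for a, b in ((u, v), (v, u)):
                if a == n and b not in seen:
                    seen.add(b)
                    frontier.append(b)
    return seen == set(nodes)

print(is_feasible(links, radios))   # True for this toy assignment
```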

Channel assignment techniques developed for wireless mesh networks can broadly be classified into two categories: (1) dynamic and (2) quasistatic (Subramanian et al., 2008). In the dynamic approach, every router is equipped with a single radio interface, which is switched from one channel to another between successive data transmissions. While this technique allows routers with a single interface to exploit the additional capacity offered by the available channels, it cannot be realized with commodity hardware, which does not provide fast channel-switching capability. For cost and practical considerations, we focus on wireless mesh networks that use off-the-shelf wireless cards. Hence, we adopt the quasistatic approach, in which channels are assigned to router interfaces statically. However, the channel assignment can be updated if significant changes to traffic load or network topology are detected.

The channel assignment problem can be solved centrally or in a distributed fashion. This chapter focuses on centralized algorithms. Various approaches have been proposed for the problem, such as a greedy graph-theoretic algorithm (Marina and Das, 2005), a genetic algorithm (Chen et al., 2009), and greedy and Tabu-based algorithms (Subramanian et al., 2008). Subramanian et al. (2008) compared their centralized algorithms with lower bounds obtained from semidefinite programming (SDP) and linear programming formulations. While the results show that their algorithms outperform the algorithm proposed in Marina and Das (2005), a large performance gap with respect to the lower bounds remains, which suggests room for further improvement.

In this chapter, we investigate the use of artificial immune algorithms as an optimization tool for the problem. In de Castro and Timmis (2003), immune algorithms are classified as population-based or network-based according to the adaptation procedures used. Our study focuses on algorithms developed from the clonal selection principle, a population-based approach. Basically, clonal-selection-based algorithms evolve a population of individuals, typically called B-cells, to cope with antigens representing the locations of unknown optima of a given function. At each generation, each B-cell in the population is subject to a series of procedures consisting of cloning, affinity maturation, metadynamics, and possibly aging (collectively known as clonal selection and expansion). Details of these procedures are explained in Section 13.3.
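Section 13.3 details the exact operators used in this chapter; as rough orientation, the skeleton of a clonal-selection optimizer looks like the sketch below (a generic CLONALG-flavored loop for maximizing an arbitrary fitness function, not the chapter's tuned implementation):

```python
import random

def clonal_selection(fitness, random_solution, mutate,
                     pop_size=20, clones_per_cell=5, generations=100):
    """Generic clonal-selection loop: clone every B-cell, hypermutate the
    clones (lower-affinity cells mutate more), keep the best of each
    family, and replace the weakest cells with newcomers (metadynamics)."""
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # rank by affinity
        next_gen = []
        for rank, cell in enumerate(population):
            rate = (rank + 1) / pop_size             # affinity maturation
            clones = [mutate(cell, rate) for _ in range(clones_per_cell)]
            next_gen.append(max(clones + [cell], key=fitness))
        next_gen.sort(key=fitness, reverse=True)
        next_gen[-2:] = [random_solution() for _ in range(2)]
        population = next_gen
    return max(population, key=fitness)

# Toy usage: maximize the number of 1-bits in a 16-bit string
best = clonal_selection(
    fitness=sum,
    random_solution=lambda: [random.randint(0, 1) for _ in range(16)],
    mutate=lambda s, r: [b ^ (random.random() < r) for b in s])
print(best, sum(best))
```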

As will be discussed later, the channel assignment problem can be viewed as a variant of the graph-coloring problem. In Cutello et al. (2003), an immune algorithm was applied to the graph-coloring problem, with results competitive with those obtained by the best evolutionary algorithms, and without the need for specialized crossover operators. Motivated by this, in Tan (2010) we proposed an immune algorithm as the strategy to evolve and improve solutions obtained using a simple greedy channel-assignment procedure. The evolution strategy is based on CLONALG (de Castro and Von Zuben, 2002), a popular clonal-selection-based algorithm.

This chapter extends that work on several fronts. First, two widely used clonal selection algorithms are investigated in addition to CLONALG: the B-cell algorithm (BCA) developed by Kelsey et al. (2003a,b), and a class of immune algorithms grouped under the title “Cloning, Information Gain, Aging” (CLIGA) by Cutello et al. (2003, 2005b) and Cutello and Nicosia (2005). The chosen algorithms have been successfully applied to various optimization problems. Second, a total of 18 variants of the chosen algorithms are implemented; the variants differ in the ways the populations are maintained and evolved, and systematic comparison among them provides insight into the strategy that works best for our problem. Third, a simple Tabu-based local search operator is developed to further improve our channel assignment algorithm.

Through extensive simulations, we show that our algorithms perform better than the genetic algorithm (Chen et al., 2009), the graph-theoretic algorithm (Marina and Das, 2005), and the Tabu-based algorithm (Subramanian et al., 2008) proposed for the channel assignment problem. Our evaluations also characterize the behavior of our algorithms in terms of convergence speed, sensitivity to parameter settings, and performance differences among the variants developed.

The rest of this chapter is structured as follows. In the next section, we present the system model and the channel assignment problem formulation, and discuss some related proposals. In Section 13.3, we describe our proposed algorithm and its variants in detail. Section 13.4 presents the simulation experiments and results. Section 13.5 concludes the chapter.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128015384000136

What is the speed at which data can be sent over a network transmission?

In telecommunications, data transfer is usually measured in bits per second. For example, a typical low-speed connection to the Internet may run at 33.6 kilobits per second (Kbps). On Ethernet local area networks, data transfer can be as fast as 10 megabits per second.

Which term refers to the rate at which data can be transmitted over a given communication path, or channel, under given conditions?

The rate at which data can be transmitted over a given communication path, or channel, under given conditions, is referred to as the channel capacity.