Why is maximum data rate always lower than bandwidth?


Bandwidth represents a link's theoretical maximum capacity, while the data rate (or throughput) is the actual, real-world speed, which is reduced by factors like latency, protocol overhead, network congestion, packet loss, and physical limitations. The distinction grows more important as data rates in digital systems continue their upward trajectory.

Data is sent using various communication protocols. These protocols have “overhead” that consumes some bandwidth but does not directly contribute to data transmission. For example, data may be structured into packets, there may be error detection and correction algorithms involved, and security functions are often implemented.
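As a rough, illustrative sketch of that overhead (the header sizes below are standard minimums for Ethernet, IPv4, and TCP, not figures from the article), the fraction of line-rate bandwidth left for application data can be estimated:

```python
# Illustrative: estimate the share of on-the-wire bytes that carry
# application data for a TCP/IPv4 packet over Ethernet.

def goodput_fraction(payload_bytes: int) -> float:
    """Fraction of transmitted bytes that are application payload."""
    preamble = 8          # Ethernet preamble + start-of-frame delimiter
    eth = 14 + 4          # Ethernet II header + frame check sequence
    gap = 12              # inter-frame gap (also counts against line rate)
    ip, tcp = 20, 20      # minimum IPv4 and TCP header sizes
    wire = preamble + eth + ip + tcp + payload_bytes + gap
    return payload_bytes / wire

# A full 1460-byte payload loses about 5% of bandwidth to overhead;
# a small 100-byte payload loses over 40%.
print(f"{goodput_fraction(1460):.3f}")  # 0.949
print(f"{goodput_fraction(100):.3f}")   # 0.562
```

Small packets suffer proportionally more because the header cost is fixed per packet, which is one reason throughput varies with traffic mix.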

Figure 1. Throughput is impacted by numerous factors and can vary over time, but it’s always less than theoretical bandwidth. (Image: iTT Systems)

Latency can reduce data rates. Sources of latency include physical distance, network congestion, transmission medium type, and hardware limitations. Other factors are software processing, queuing delays, and the number of hops a data packet must travel. Latency can be a significant and not always controllable factor.
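Two of those latency sources, physical distance and transmission time, can be quantified directly. The numbers below (4,000 km of fiber, a 1 Gb/s link, 1500-byte packets) are assumptions chosen for illustration, not figures from the article:

```python
# Illustrative sketch of two unavoidable per-packet latency components.

def propagation_delay_ms(distance_km: float, velocity_factor: float = 0.67) -> float:
    """Time for the signal to traverse the link; light in fiber travels ~0.67c."""
    c_km_per_ms = 299_792.458 / 1000  # speed of light in km per millisecond
    return distance_km / (c_km_per_ms * velocity_factor)

def transmission_delay_ms(packet_bytes: int, rate_bps: float) -> float:
    """Time to clock the packet's bits onto the wire at the link rate."""
    return packet_bytes * 8 / rate_bps * 1000

# For a 1500-byte packet over 4000 km of fiber on a 1 Gb/s link,
# propagation dominates: ~20 ms versus ~0.012 ms of serialization.
print(propagation_delay_ms(4000))        # ~19.9
print(transmission_delay_ms(1500, 1e9))  # 0.012
```

No amount of added bandwidth reduces the propagation term, which is why distance-induced latency is "not always controllable."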

Network congestion can significantly slow data transmission and reduce throughput. It can result from a variety of causes, some innocent, like excess demand for data capacity from multiple users, and some malicious, like distributed denial-of-service (DDoS) attacks.

Packet loss

Packet loss can be a major cause of poor network performance and can severely limit effective bandwidth. Packet loss can be caused by network congestion, hardware issues, and interference. Other sources include software bugs, incorrect network configurations, and security threats. On wireless networks, interference and weak signals are common culprits, while on wired networks, faulty cables or ports are more likely sources.

When packets are lost, the network must retransmit them, which consumes bandwidth and slows the connection. The retransmission process also delays communication, increasing latency: the more retransmissions required, the longer data takes to arrive, further degrading performance.
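The compounding effect of loss on throughput can be sketched with the well-known Mathis et al. approximation for steady-state TCP throughput, rate ≈ (MSS/RTT) × C/√p, where p is the loss rate and C ≈ 1.22. The MSS, RTT, and link-rate values below are assumptions for illustration:

```python
import math

# Hedged sketch: the Mathis et al. model bounds steady-state TCP throughput
# under random packet loss: rate <= (MSS / RTT) * C / sqrt(p), C = sqrt(3/2).
def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    c = math.sqrt(3 / 2)  # constant under the model's periodic-loss assumptions
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate) / 1e6

# With a 1460-byte MSS and 50 ms RTT, even 0.1% loss caps a single TCP flow
# at roughly 9 Mb/s -- far below a 10 Gb/s link's raw bandwidth.
print(mathis_throughput_mbps(1460, 0.05, 0.001))  # ~9.05
```

The square-root dependence means halving the loss rate does not double throughput; loss must fall by 4x for throughput to double.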

Figure 2. Some of the many causes of packet losses. (Image: Fortinet)

In addition to packet losses, sources of reduced bandwidth in data centers include network congestion, caused by more traffic than the network can handle, and latency, which is often the result of bottlenecks from outdated or slow hardware, inefficient network design, and retransmissions caused by errors or congestion. But raw speed isn’t always the goal.

Raw speed isn’t everything

In certain data center applications, high-overhead protocols are selected for benefits like higher reliability, better error detection and correction, and congestion control, rather than for raw data transmission speed.

High-overhead protocols like the Transmission Control Protocol (TCP) can deliver high levels of data integrity and reliability. TCP ensures data is delivered in the correct order and without errors by breaking data into packets, assigning sequence numbers, detecting errors, and retransmitting lost or corrupted packets. TCP uses checksums to detect whether data has been corrupted in transit; if an error is found, the receiver requests retransmission.
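The checksum TCP carries is the ones'-complement Internet checksum defined in RFC 1071. A minimal sketch of that computation (real TCP also sums a pseudo-header of IP addresses and lengths, omitted here for brevity):

```python
# Hedged sketch of the ones'-complement Internet checksum (RFC 1071)
# that TCP uses to detect corruption in transit.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # sum 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF         # ones' complement of the folded sum

segment = b"example payload!"      # hypothetical 16-byte payload
cksum = internet_checksum(segment)
# A receiver sums data plus the transmitted checksum; an intact segment
# yields zero, while any single-bit corruption yields a nonzero result.
assert internet_checksum(segment + cksum.to_bytes(2, "big")) == 0
```

This is the per-segment cost side of the trade: every segment spends bytes and processing on verification in exchange for detecting corruption.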

In TCP, the receiver sends acknowledgments for successfully received packets, confirming delivery. If the sender doesn’t receive an acknowledgment, it resends the packet. TCP manages the flow of data to prevent the sender from overwhelming the receiver, which helps avoid network congestion. Some routing algorithms in data centers can quickly reroute retransmitted packets around network failures, minimizing downtime and latencies.

Standard protocols can have high overhead, but they ensure that diverse devices from different makers can seamlessly interface and exchange data. That can significantly streamline network management in complex installations.

High-overhead protocols can also require extra data and processing for security purposes. Protocols like the secure sockets layer (SSL) and transport layer security (TLS) employ encryption and authentication mechanisms to protect against unauthorized data access and ensure secure transmissions.

Data center operators, especially those running cloud data centers used for critical processes like financial transactions, regularly trade off raw speed to support other mission-critical requirements like stability, security, and guaranteed data accuracy and delivery.

Summary

Bandwidth is the theoretical maximum transmission speed, while data rate is the practical limit imposed by “imperfections” in the network. Some of those imperfections result from inherent physical and software performance limitations, while others result from the need for additional features like improved security and better data reliability. Regardless of the cause, the data rate is always less than the theoretical maximum bandwidth.

References

Bandwidth, Paessler
Bandwidth and Data Rates, Fluke Networks
Bandwidth vs. Latency: the vital differences, PingPlotter
Data Rate vs Bandwidth: What’s the Difference?, Altium
How Is The Bandwidth of a Network Measured?, Equal Optics
Latency vs Throughput vs Bandwidth: Unraveling the Complexities of Network Speed, Kentik
The Difference Between Bit Rate vs. Bandwidth, Cadence
Ways to increase connection speed, bandwidth, and stability of your Wi-Fi network, Keenetic
What is Network Throughput and How to Measure & Monitor it!, iTT Systems

Related EE World content

What is intersymbol interference — and why should I care about it?
Do you need a real-time oscilloscope or a sampling oscilloscope?
What is second-generation beamforming?
What is bit jitter, and what are its component jitters?
What are the applications for photonic integrated circuits on the edge?


Filed Under: FAQ, Featured

 


