In the realm of networking, throughput, bandwidth, and latency are key performance indicators (KPIs) that define the performance quality of a network. This article delves into these concepts, providing a comprehensive understanding of each and their interrelationships.
Internet data is transferred in the form of small packets, each containing a portion of the data along with information about its source and destination. The capacity of a network to transfer these packets is known as its bandwidth. The amount of data actually transferred within a specific time period is the network's throughput. Meanwhile, latency reflects the time required for data packets to reach their destination.
Throughput in Networking
Throughput is the number of items processed per unit time, such as bits transmitted per second, HTTP operations per day, or millions of instructions per second (MIPS). The sum of the data rates delivered to all terminals in a network system is referred to as system throughput or aggregate throughput.
Throughput is measured in standard units for data, such as bits per second (bit/s or bps), data packets per second (p/s or pps), or data packets per time slot. However, throughput can be reduced by factors such as poor hardware performance, signal degradation across underperforming routers, and packet loss during transit under a heavier workload.
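The basic calculation is straightforward: average throughput is the amount of data delivered divided by the elapsed time. A minimal sketch (the function name is illustrative):

```python
def throughput_bps(bytes_delivered: int, seconds: float) -> float:
    """Average throughput in bits per second: data delivered / elapsed time."""
    return (bytes_delivered * 8) / seconds

# 125 MB delivered in 10 seconds -> 100 Mbit/s
print(throughput_bps(125_000_000, 10))  # 100000000.0
```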
Bandwidth vs Throughput
Though closely related, bandwidth and throughput are distinct concepts. Bandwidth is a theoretical measure of the amount of data that could be transferred from source to destination under ideal conditions. In contrast, throughput is an actual measure of the amount of data successfully transferred from source to destination, taking into account environmental factors and internal interference. Therefore, throughput can never exceed the network's bandwidth, and in practice it is almost always lower.
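The relationship between the two is often expressed as utilization: the fraction of the theoretical capacity that is actually achieved. A small sketch (the function name is an assumption, not a standard API):

```python
def utilization(throughput_bps: float, bandwidth_bps: float) -> float:
    """Fraction of theoretical capacity actually achieved, from 0.0 to 1.0."""
    return throughput_bps / bandwidth_bps

# A 100 Mbit/s link actually delivering 87 Mbit/s is 87% utilized.
print(utilization(87e6, 100e6))  # 0.87
```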
Packet Loss and Congestion Windows
Packet loss refers to the percentage of packets sent by the host that never reach their destination. A throughput measurement that does not account for packet loss, because the protocol simply sends datagrams without acknowledgments or retransmission, is referred to as User Datagram Protocol throughput, or UDP throughput.
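Computing packet loss from sent and received counts is a one-line calculation; a minimal sketch with an illustrative function name:

```python
def packet_loss_pct(sent: int, received: int) -> float:
    """Percentage of packets that never arrived at the destination."""
    return 100.0 * (sent - received) / sent

# 1000 packets sent, 987 received -> 1.3% loss
print(packet_loss_pct(1000, 987))  # 1.3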
Networks often use the Transmission Control Protocol (TCP) to ensure all packets are delivered. The receiver sends back acknowledgment messages confirming which data packets have arrived. The number of packets that the sender may have in flight before it must receive an acknowledgment is called the TCP congestion window.
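Classic TCP grows and shrinks this window using additive increase, multiplicative decrease (AIMD). The toy sketch below shows only that core rule; real TCP stacks add slow start, fast recovery, and modern algorithms such as CUBIC or BBR:

```python
def next_cwnd(cwnd: int, loss_detected: bool) -> int:
    """One AIMD step for a congestion window measured in segments."""
    if loss_detected:
        return max(1, cwnd // 2)  # multiplicative decrease: halve on loss
    return cwnd + 1               # additive increase: +1 segment per round trip

cwnd = 10
cwnd = next_cwnd(cwnd, loss_detected=False)  # grows to 11
cwnd = next_cwnd(cwnd, loss_detected=True)   # halves to 5
```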
Latency in Networking
Latency is the time between making a request and beginning to see a result: the time a data packet takes to travel from source to destination. Higher latency implies a slower-feeling network. Latency is measured in units of time, typically milliseconds, and can be reported as either one-way delay or round-trip time (RTT).
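Round-trip time is typically measured by timing a request/response exchange. A minimal sketch, with the network round trip simulated here by a short sleep rather than a real socket call:

```python
import time

def measure_rtt(request) -> float:
    """Return round-trip time in seconds for a blocking request/response call."""
    start = time.perf_counter()
    request()  # send the request and wait for the response
    return time.perf_counter() - start

# Simulate a ~50 ms round trip
rtt = measure_rtt(lambda: time.sleep(0.05))
print(f"RTT: {rtt * 1000:.1f} ms")
```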
Latency vs Throughput
Bandwidth, throughput, and latency are all indicators of network performance. Greater bandwidth means a network can support a heavier workload, greater throughput means a network actually transfers data quickly, and lower latency means there is little delay between command and response. Throughput and latency are related: a network with high latency delivers acknowledgments slowly, which limits how fast data can be transferred.
Impact of Latency on Throughput
Latency affects throughput because acknowledgment packets are delayed by the round trip. When acknowledgments arrive promptly and indicate little or no packet loss, the sender grows its TCP congestion window, putting more data in flight per round trip and increasing throughput. Hence, lower latency in a network yields higher throughput.
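This relationship can be made concrete: a TCP sender can have at most one congestion window of data in flight per round trip, so achievable throughput is bounded by window size divided by RTT. A sketch of that bound (the function name is illustrative):

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: one window of data per round trip."""
    return (window_bytes * 8) / rtt_seconds

# A 64 KiB window over a 20 ms RTT caps out near 26 Mbit/s;
# halve the RTT and the ceiling doubles.
print(max_tcp_throughput_bps(65536, 0.020))
```

The same window over a shorter round trip yields proportionally more throughput, which is exactly why low latency raises the throughput ceiling.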