Internet Engineering Task Force (IETF)                        A. Morton
Request for Comments: 9097                                    AT&T Labs
Category: Standards Track                                       R. Geib
ISSN: 2070-1721                                        Deutsche Telekom
                                                          L. Ciavattone
                                                              AT&T Labs
                                                          November 2021

              Metrics and Methods for One-Way IP Capacity

Abstract

   This memo revisits the problem of Network Capacity Metrics first
   examined in RFC 5136.  This memo specifies a more practical Maximum
   IP-Layer Capacity Metric definition catering to measurement and
   outlines the corresponding Methods of Measurement.

Status of This Memo

   This is an Internet Standards Track document.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Further information on
   Internet Standards is available in Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   https://www.rfc-editor.org/info/rfc9097.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Revised BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Revised BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements Language
   2. Scope, Goals, and Applicability
   3. Motivation
   4. General Parameters and Definitions
   5. IP-Layer Capacity Singleton Metric Definitions
      5.1. Formal Name
      5.2. Parameters
      5.3. Metric Definitions
      5.4. Related Round-Trip Delay and One-Way Loss Definitions
      5.5. Discussion
      5.6. Reporting the Metric
   6. Maximum IP-Layer Capacity Metric Definitions (Statistics)
      6.1. Formal Name
      6.2. Parameters
      6.3. Metric Definitions
      6.4. Related Round-Trip Delay and One-Way Loss Definitions
      6.5. Discussion
      6.6. Reporting the Metric
   7. IP-Layer Sender Bit Rate Singleton Metric Definitions
      7.1. Formal Name
      7.2. Parameters
      7.3. Metric Definition
      7.4. Discussion
      7.5. Reporting the Metric
   8. Method of Measurement
      8.1. Load Rate Adjustment Algorithm
      8.2. Measurement Qualification or Verification
      8.3. Measurement Considerations
   9. Reporting Formats
      9.1. Configuration and Reporting Data Formats
   10. Security Considerations
   11. IANA Considerations
   12. References
      12.1. Normative References
      12.2. Informative References
   Appendix A. Load Rate Adjustment Pseudocode
   Appendix B. RFC 8085 UDP Guidelines Check
      B.1. Assessment of Mandatory Requirements
      B.2. Assessment of Recommendations
   Acknowledgments
   Authors' Addresses

1. Introduction

   The IETF's efforts to define Network Capacity and Bulk Transport
   Capacity (BTC) have been chartered and progressed for over twenty
   years.  Over that time, the performance community has seen the
   development of Informative definitions in [RFC3148] for the
   Framework for Bulk Transport Capacity, [RFC5136] for Network
   Capacity and Maximum IP-Layer Capacity, and the Experimental metric
   definitions and methods in "Model-Based Metrics for Bulk Transport
   Capacity" [RFC8337].

   This memo revisits the problem of Network Capacity Metrics examined
   first in [RFC3148] and later in [RFC5136].  Maximum IP-Layer
   Capacity and Bulk Transfer Capacity [RFC3148] (goodput) are
   different metrics.  Maximum IP-Layer Capacity is like the
   theoretical goal for goodput.  There are many metrics in [RFC5136],
   such as Available Capacity.  Measurements depend on the network path
   under test and the use case.  Here, the main use case is to assess
   the Maximum Capacity of one or more networks where the subscriber
   receives specific performance assurances, sometimes referred to as
   Internet access, or where a limit of the technology used on a path
   is being tested.  For example, when a user subscribes to a 1 Gbps
   service, then the user, the Service Provider, and possibly other
   parties want to assure that the specified performance level is
   delivered.  When a test confirms the subscribed performance level, a
   tester can seek the location of a bottleneck elsewhere.

   This memo recognizes the importance of a definition of a Maximum IP-
   Layer Capacity Metric at a time when Internet subscription speeds
   have increased dramatically -- a definition that is both practical
   and effective for the performance community's needs, including
   Internet users.  The metric definitions are intended to use Active
   Methods of Measurement [RFC7799], and a Method of Measurement is
   included for each metric.

   The most direct Active Measurement of IP-Layer Capacity would use IP
   packets, but in practice a transport header is needed to traverse
   address and port translators.  UDP offers the most direct assessment
   possibility, and in the measurement study to investigate whether UDP
   is viable as a general Internet transport protocol [copycat], the
   authors found that a high percentage of paths tested support UDP
   transport.  A number of liaison statements have been exchanged on
   this topic [LS-SG12-A] [LS-SG12-B], discussing the laboratory and
   field tests that support the UDP-based approach to IP-Layer Capacity
   measurement.

   This memo also recognizes the updates to the IP Performance Metrics
   (IPPM) Framework [RFC2330] that have been published since 1998.  In
   particular, it makes use of [RFC7312] for the Advanced Stream and
   Sampling Framework and [RFC8468] for its IPv4, IPv6, and IPv4-IPv6
   Coexistence Updates.

   Appendix A describes the load rate adjustment algorithm, using
   pseudocode.  Appendix B discusses the algorithm's compliance with
   [RFC8085].

1.1. Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2. Scope, Goals, and Applicability

   The scope of this memo is to define Active Measurement metrics and
   corresponding methods to unambiguously determine Maximum IP-Layer
   Capacity and useful secondary metrics.

   Another goal is to harmonize the specified Metric and Method across
   the industry, and this memo is the vehicle that captures IETF
   consensus, possibly resulting in changes to the specifications of
   other Standards Development Organizations (SDOs) (through each SDO's
   normal contribution process or through liaison exchange).

   Secondary goals are to add considerations for test procedures and to
   provide interpretation of the Maximum IP-Layer Capacity results (to
   identify cases where more testing is warranted, possibly with
   alternate configurations).  Fostering the development of protocol
   support for this Metric and Method of Measurement is also a goal of
   this memo (all active testing protocols currently defined by the
   IPPM WG are UDP based, meeting a key requirement of these methods).
   The supporting protocol development to measure this metric according
   to the specified method is a key future contribution to Internet
   measurement.

   The load rate adjustment algorithm's scope is limited to helping
   determine the Maximum IP-Layer Capacity in the context of an
   infrequent, diagnostic, short-term measurement.  It is RECOMMENDED
   to discontinue non-measurement traffic that shares a subscriber's
   dedicated resources while testing: measurements may not be accurate,
   and throughput of competing elastic traffic may be greatly reduced.

   The primary application of the Metrics and Methods of Measurement
   described here is the same as what is described in Section 2 of
   [RFC7497], where:

   |  The access portion of the network is the focus of this problem
   |  statement.  The user typically subscribes to a service with
   |  bidirectional [Internet] access partly described by rates in bits
   |  per second.

   In addition, the use of the load rate adjustment algorithm described
   in Section 8.1 has the following additional applicability
   limitations:

   *  It MUST only be used in the application of diagnostic and
      operations measurements as described in this memo.

   *  It MUST only be used in circumstances consistent with Section 10
      ("Security Considerations").

   *  If a network operator is certain of the IP-Layer Capacity to be
      validated, then testing MAY start with a fixed-rate test at the
      IP-Layer Capacity and avoid activating the load adjustment
      algorithm.  However, the stimulus for a diagnostic test (such as
      a subscriber request) strongly implies that there is no
      certainty, and the load adjustment algorithm is RECOMMENDED.

   Further, the Metrics and Methods of Measurement are intended for use
   where specific exact path information is unknown within a range of
   possible values:

   *  The subscriber's exact Maximum IP-Layer Capacity is unknown
      (which is sometimes the case; service rates can be increased due
      to upgrades without a subscriber's request or increased to
      provide a surplus to compensate for possible underestimates of
      TCP-based testing).

   *  The size of the bottleneck buffer is unknown.

   Finally, the measurement system's load rate adjustment algorithm
   SHALL NOT be provided with the exact capacity value to be validated
   a priori.  This restriction fosters a fair result and removes an
   opportunity for nefarious operation enabled by knowledge of the
   correct answer.

3. Motivation

   As with any problem that has been worked on for many years in
   various SDOs without any special attempts at coordination, various
   solutions for Metrics and Methods have emerged.

   There are five factors that have changed (or began to change) in the
   2013-2019 time frame, and the presence of any one of them on the
   path requires features in the measurement design to account for the
   changes:

   1.  Internet access is no longer the bottleneck for many users (but
       subscribers expect network providers to honor contracted
       performance).

   2.  Both transfer rate and latency are important to a user's
       satisfaction.

   3.  UDP's role in transport is growing in areas where TCP once
       dominated.

   4.  Content and applications are moving physically closer to users.

   5.  There is less emphasis on ISP gateway measurements, possibly due
       to less traffic crossing ISP gateways in the future.

4. General Parameters and Definitions

   This section lists the REQUIRED input factors to specify a Sender or
   Receiver metric.

   Src:  One of the addresses of a host (such as a globally routable IP
      address).

   Dst:  One of the addresses of a host (such as a globally routable IP
      address).

   MaxHops:  The limit on the number of Hops a specific packet may
      visit as it traverses from the host at Src to the host at Dst
      (implemented in the TTL or Hop Limit).

   T0:  The time at the start of a measurement interval, when packets
      are first transmitted from the Source.

   I:  The nominal duration of a measurement interval at the
      Destination (default 10 sec).

   dt:  The nominal duration of m equal sub-intervals in I at the
      Destination (default 1 sec).

   dtn:  The beginning boundary of a specific sub-interval, n, one of m
      sub-intervals in I.

   FT:  The feedback time interval between status feedback messages
      communicating measurement results, sent from the Receiver to
      control the Sender.  The results are evaluated throughout the
      test to determine how to adjust the current offered load rate at
      the Sender (default 50 msec).

   Tmax:  A maximum waiting time for test packets to arrive at the
      Destination, set sufficiently long to disambiguate packets with
      long delays from packets that are discarded (lost), such that the
      distribution of one-way delay is not truncated.

   F:  The number of different flows synthesized by the method (default
      one flow).

   Flow:  The stream of packets with the same n-tuple of designated
      header fields that (when held constant) result in identical
      treatment in a multipath decision (such as the decision taken in
      load balancing).  Note: The IPv6 flow label SHOULD be included in
      the flow definition when routers have complied with the
      guidelines provided in [RFC6438].

   Type-P:  The complete description of the test packets for which this
      assessment applies (including the flow-defining fields).  Note
      that the UDP transport layer is one requirement for test packets
      specified below.  Type-P is a concept parallel to "population of
      interest" as defined in Clause 6.1.1 of [Y.1540].

   Payload Content:  An aspect of the Type-P Parameter that can help to
      improve measurement determinism.  Specifying packet payload
      content helps to ensure IPPM Framework-conforming Metrics and
      Methods.  If there is payload compression in the path and tests
      intend to characterize a possible advantage due to compression,
      then payload content SHOULD be supplied by a pseudorandom
      sequence generator, by using part of a compressed file, or by
      other means.  See Section 3.1.2 of [RFC7312].

   PM:  A list of fundamental metrics, such as loss, delay, and
      reordering, and corresponding target performance threshold(s).
      At least one fundamental metric and target performance threshold
      MUST be supplied (such as one-way IP packet loss [RFC7680] equal
      to zero).

   A non-Parameter that is required for several metrics is defined
   below:

   T:  The host time of the *first* test packet's *arrival* as measured
      at the Destination Measurement Point, or MP(Dst).  There may be
      other packets sent between Source and Destination hosts that are
      excluded, so this is the time of arrival of the first packet used
      for measurement of the metric.

   Note that timestamp format and resolution, sequence numbers, etc.
   will be established by the chosen test protocol standard or
   implementation.
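
   As an informal illustration only, the Python sketch below gathers
   the Parameters above into one structure, using the default values
   stated in this section (I = 10 sec, dt = 1 sec, FT = 50 msec, F =
   one flow).  The class, field names, and the example Tmax value are
   illustrative assumptions, not part of any test protocol.

      # Non-normative sketch of the Section 4 Parameters.
      from dataclasses import dataclass, field

      @dataclass
      class MeasurementParameters:
          src: str                    # Src address
          dst: str                    # Dst address
          max_hops: int = 64          # MaxHops (TTL or Hop Limit)
          t0: float = 0.0             # T0, start of measurement interval
          interval_i: float = 10.0    # I, seconds (default 10 sec)
          dt: float = 1.0             # dt, seconds (default 1 sec)
          ft: float = 0.050           # FT, seconds (default 50 msec)
          tmax: float = 3.0           # Tmax, waiting time (assumed value)
          flows: int = 1              # F (default one flow)
          type_p: str = "UDP"         # Type-P summary (UDP is required)
          pm: dict = field(default_factory=lambda: {"one_way_loss": 0})
                                      # PM: at least one threshold MUST
                                      # be supplied (e.g., loss == 0)

      # m, the number of sub-intervals, follows from I = m*dt.
      params = MeasurementParameters(src="192.0.2.1", dst="198.51.100.2")
      m = int(params.interval_i / params.dt)   # m = 10 with the defaults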

5. IP-Layer Capacity Singleton Metric Definitions

   This section sets requirements for the Singleton metric that
   supports the Maximum IP-Layer Capacity Metric definitions in
   Section 6.

5.1. Formal Name

   "Type-P-One-way-IP-Capacity" is the formal name; it is informally
   called "IP-Layer Capacity".

   Note that Type-P depends on the chosen method.

5.2. Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters are needed.

5.3. Metric Definitions

   This section defines the REQUIRED aspects of the measurable IP-Layer
   Capacity Metric (unless otherwise indicated) for measurements
   between specified Source and Destination hosts:

   Define the IP-Layer Capacity, C(T,dt,PM), to be the number of IP-
   Layer bits (including header and data fields) in packets that can be
   transmitted from the Src host and correctly received by the Dst host
   during one contiguous sub-interval, dt in length.  The IP-Layer
   Capacity depends on the Src and Dst hosts, the host addresses, and
   the path between the hosts.

   The number of these IP-Layer bits is designated n0[dtn,dtn+1] for a
   specific dt.

   When the packet size is known and of fixed size, the packet count
   during a single sub-interval dt multiplied by the total bits in IP
   header and data fields is equal to n0[dtn,dtn+1].

   Anticipating a Sample of Singletons, the number of sub-intervals
   with duration dt MUST be set to a natural number m, so that
   T+I = T + m*dt with dtn+1 - dtn = dt for 1 <= n <= m.

   Parameter PM represents other performance metrics (see Section 5.4
   below); their measurement results SHALL be collected during
   measurement of IP-Layer Capacity and associated with the
   corresponding dtn for further evaluation and reporting.  Users SHALL
   specify the Parameter Tmax as required by each metric's reference
   definition.

   Mathematically, this definition is represented as (for each n):

                        ( n0[dtn,dtn+1] )
       C(T,dt,PM) = -------------------------
                               dt

             Figure 1: Equation for IP-Layer Capacity

   and:

   *  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets [RFC8468] from the
      Src host and correctly received by the Dst host during one
      contiguous sub-interval, dt in length, during the interval
      [T,T+I].

   *  C(T,dt,PM), the IP-Layer Capacity, corresponds to the value of n0
      measured in any sub-interval beginning at dtn, divided by the
      length of the sub-interval, dt.

   *  PM represents other performance metrics (see Section 5.4 below);
      their measurement results SHALL be collected during measurement
      of IP-Layer Capacity and associated with the corresponding dtn
      for further evaluation and reporting.

   *  All sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   *  The bit rate of the physical interface of the measurement devices
      MUST be higher than the smallest of the links on the path whose
      C(T,I,PM) is to be measured (the bottleneck link).

   Measurements according to this definition SHALL use the UDP
   transport layer.  Standard-formed packets are specified in Section 5
   of [RFC8468].  The measurement SHOULD use a randomized Source port
   or equivalent technique, and SHOULD send responses from the Source
   address matching the test packet Destination address.

   Some effects of compression on measurement are discussed in
   Section 6 of [RFC8468].
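
   As an informal illustration of the definition above (not part of the
   method), the following Python sketch computes a Singleton C(T,dt,PM)
   for one sub-interval from a count of correctly received, fixed-size
   packets; the function and variable names are illustrative
   assumptions.

      # Non-normative sketch: Singleton IP-Layer Capacity for one
      # sub-interval of length dt, per Figure 1.  With fixed-size
      # packets, n0 = packet count * IP-Layer bits per packet
      # (header and data fields).
      def singleton_capacity(packet_count, ip_bytes_per_packet, dt):
          n0 = packet_count * ip_bytes_per_packet * 8   # IP-Layer bits
          return n0 / dt                                # bits per second

      # Example: 83,000 packets of 1,250 bytes (at the IP layer)
      # received in dt = 1 sec gives 830 Mbps.
      c = singleton_capacity(packet_count=83000,
                             ip_bytes_per_packet=1250, dt=1.0)
      print(round(c / 1e6, 1), "Mbps")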

5.4. Related Round-Trip Delay and One-Way Loss Definitions

   RTD[dtn,dtn+1] is defined as a Sample of the Round-Trip Delay
   [RFC2681] between the Src host and the Dst host during the interval
   [T,T+I] (that contains equal non-overlapping intervals of dt).  The
   "reasonable period of time" mentioned in [RFC2681] is the Parameter
   Tmax in this memo.  The statistics used to summarize RTD[dtn,dtn+1]
   MAY include the minimum, maximum, median, mean, and the range =
   (maximum - minimum).  Some of these statistics are needed for load
   adjustment purposes (Section 8.1), measurement qualification
   (Section 8.2), and reporting (Section 9).

   OWL[dtn,dtn+1] is defined as a Sample of the One-Way Loss [RFC7680]
   between the Src host and the Dst host during the interval [T,T+I]
   (that contains equal non-overlapping intervals of dt).  The
   statistics used to summarize OWL[dtn,dtn+1] MAY include the count of
   lost packets and the ratio of lost packets.

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.
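
   A minimal, non-normative sketch of the summary statistics named
   above follows, assuming per-packet round-trip delay samples and
   sent/received counts are available for a sub-interval; the helper
   names are illustrative.

      # Non-normative sketch: summary statistics for RTD[dtn,dtn+1]
      # and OWL[dtn,dtn+1] over one sub-interval.
      from statistics import median, mean

      def rtd_summary(rtt_samples):
          return {"min": min(rtt_samples), "max": max(rtt_samples),
                  "median": median(rtt_samples),
                  "mean": mean(rtt_samples),
                  "range": max(rtt_samples) - min(rtt_samples)}

      def owl_summary(packets_sent, packets_received):
          lost = packets_sent - packets_received
          return {"lost_count": lost, "lost_ratio": lost / packets_sent}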

5.5. Discussion

   See the corresponding section for Maximum IP-Layer Capacity
   (Section 6.5).

5.6. Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single-
   Megabit resolution, in units of Megabits per second (Mbps) (which,
   to avoid any confusion, is 1,000,000 bits per second).

   The related One-Way Loss metric and Round-Trip Delay measurements
   for the same Singleton SHALL be reported, also with meaningful
   resolution for the values measured.

   Individual Capacity measurements MAY be reported in a manner
   consistent with the Maximum IP-Layer Capacity; see Section 9.

6. Maximum IP-Layer Capacity Metric Definitions (Statistics)

   This section sets requirements for the following components to
   support the Maximum IP-Layer Capacity Metric.

6.1. Formal Name

   "Type-P-One-way-Max-IP-Capacity" is the formal name; it is
   informally called "Maximum IP-Layer Capacity".

   Note that Type-P depends on the chosen method.

6.2. Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters or definitions are needed.

6.3. Metric Definitions

   This section defines the REQUIRED aspects of the Maximum IP-Layer
   Capacity Metric (unless otherwise indicated) for measurements
   between specified Source and Destination hosts:

   Define the Maximum IP-Layer Capacity, Maximum_C(T,I,PM), to be the
   maximum number of IP-Layer bits n0[dtn,dtn+1] divided by dt that can
   be transmitted in packets from the Src host and correctly received
   by the Dst host, over all dt-length intervals in [T,T+I] and meeting
   the PM criteria.  An equivalent definition would be the maximum of a
   Sample of size m of Singletons C(T,I,PM) collected during the
   interval [T,T+I] and meeting the PM criteria.

   The number of sub-intervals with duration dt MUST be set to a
   natural number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for
   1 <= n <= m.

   Parameter PM represents the other performance metrics (see
   Section 6.4 below) and their measurement results for the Maximum IP-
   Layer Capacity.  At least one target performance threshold (PM
   criterion) MUST be defined.  If more than one metric and target
   performance threshold is defined, then the sub-interval with the
   maximum number of bits transmitted MUST meet all the target
   performance thresholds.  Users SHALL specify the Parameter Tmax as
   required by each metric's reference definition.

   Mathematically, this definition can be represented as:

                              max ( n0[dtn,dtn+1] )
                             [T,T+I]
       Maximum_C(T,I,PM) = -------------------------
                                      dt

       where:

       T                                         T+I
       _________________________________________
       |   |   |   |   |   |   |   |   |   |   |
       dtn=1   2   3   4   5   6   7   8   9  10   n+1
                                                   n=m

             Figure 2: Equation for Maximum Capacity

   and:

   *  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets from the Src host
      and correctly received by the Dst host during one contiguous sub-
      interval, dt in length, during the interval [T,T+I].

   *  Maximum_C(T,I,PM), the Maximum IP-Layer Capacity, corresponds to
      the maximum value of n0 measured in any sub-interval beginning at
      dtn, divided by the constant length of all sub-intervals, dt.

   *  PM represents the other performance metrics (see Section 6.4) and
      their measurement results for the Maximum IP-Layer Capacity.  At
      least one target performance threshold (PM criterion) MUST be
      defined.

   *  All sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   *  The bit rate of the physical interface of the measurement systems
      MUST be higher than the smallest of the links on the path whose
      Maximum_C(T,I,PM) is to be measured (the bottleneck link).

   In this definition, the m sub-intervals can be viewed as trials when
   the Src host varies the transmitted packet rate, searching for the
   maximum n0 that meets the PM criteria measured at the Dst host in a
   test of duration I.  When the transmitted packet rate is held
   constant at the Src host, the m sub-intervals may also be viewed as
   trials to evaluate the stability of n0 and metric(s) in the PM list
   over all dt-length intervals in I.

   Measurements according to these definitions SHALL use the UDP
   transport layer.
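
   The following Python sketch (illustrative only, not a normative part
   of the method) selects the Maximum IP-Layer Capacity from m
   Singleton results per Figure 2, keeping only sub-intervals whose PM
   results meet every target threshold; a single one-way loss criterion
   is assumed, and all names are illustrative.

      # Non-normative sketch: Maximum_C(T,I,PM) as the maximum Singleton
      # over the m sub-intervals whose PM results meet all thresholds.
      # Each entry: (n0 bits, dict of PM results for that sub-interval).
      def maximum_capacity(sub_intervals, dt, pm_thresholds):
          qualifying = [
              n0 for (n0, pm_results) in sub_intervals
              if all(pm_results[name] <= limit
                     for name, limit in pm_thresholds.items())
          ]
          if not qualifying:
              return None        # no sub-interval met the PM criteria
          return max(qualifying) / dt

      # Example with dt = 1 sec and a loss threshold of zero:
      sample = [(800_000_000, {"one_way_loss": 0}),
                (950_000_000, {"one_way_loss": 12}),  # excluded: loss > 0
                (900_000_000, {"one_way_loss": 0})]
      print(maximum_capacity(sample, dt=1.0,
                             pm_thresholds={"one_way_loss": 0}))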

6.4. Related Round-Trip Delay and One-Way Loss Definitions

   RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4.  Here,
   the test intervals are increased to match the capacity Samples,
   RTD[T,I] and OWL[T,I].

   The interval dtn,dtn+1 where Maximum_C(T,I,PM) occurs is the
   reporting sub-interval for RTD[dtn,dtn+1] and OWL[dtn,dtn+1] within
   RTD[T,I] and OWL[T,I].

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

6.5. Discussion

   If traffic conditioning (e.g., shaping, policing) applies along a
   path for which Maximum_C(T,I,PM) is to be determined, different
   values for dt SHOULD be picked and measurements executed during
   multiple intervals [T,T+I].  Each duration dt SHOULD be chosen so
   that it is an integer multiple of increasing values k times
   serialization delay of a Path MTU (PMTU) at the physical interface
   speed where traffic conditioning is expected.  This should avoid
   taking configured burst tolerance Singletons as a valid
   Maximum_C(T,I,PM) result.

   A Maximum_C(T,I,PM) without any indication of bottleneck congestion,
   be that increased latency, packet loss, or Explicit Congestion
   Notification (ECN) marks during a measurement interval, I, is likely
   an underestimate of Maximum_C(T,I,PM).

6.6. Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single-
   Megabit resolution, in units of Megabits per second (Mbps) (which,
   to avoid any confusion, is 1,000,000 bits per second).

   The related One-Way Loss metric and Round-Trip Delay measurements
   for the same Singleton SHALL be reported, also with meaningful
   resolution for the values measured.

   When there are demonstrated and repeatable Capacity modes in the
   Sample, the Maximum IP-Layer Capacity SHALL be reported for each
   mode, along with the relative time from the beginning of the stream
   that the mode was observed to be present.  Bimodal Maximum IP-Layer
   Capacities have been observed with some services, sometimes called a
   "turbo mode" intending to deliver short transfers more quickly or
   reduce the initial buffering time for some video streams.  Note that
   modes lasting less than duration dt will not be detected.

   Some transmission technologies have multiple methods of operation
   that may be activated when channel conditions degrade or improve,
   and these transmission methods may determine the Maximum IP-Layer
   Capacity.  Examples include line-of-sight microwave modulator
   constellations, or cellular modem technologies where the changes may
   be initiated by a user moving from one coverage area to another.
   Operation in the different transmission methods may be observed over
   time, but the modes of Maximum IP-Layer Capacity will not be
   activated deterministically as with the "turbo mode" described in
   the paragraph above.

7. IP-Layer Sender Bit Rate Singleton Metric Definitions

   This section sets requirements for the following components to
   support the IP-Layer Sender Bit Rate Metric.  This metric helps to
   check that the Sender actually generated the desired rates during a
   test, and measurement takes place at the interface between the Src
   host and the network path (or as close as practical within the Src
   host).  It is not a metric for path performance.

7.1. Formal Name

   "Type-P-IP-Sender-Bit-Rate" is the formal name; it is informally
   called the "IP-Layer Sender Bit Rate".

   Note that Type-P depends on the chosen method.

7.2. Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   S:  The duration of the measurement interval at the Source.

   st:  The nominal duration of N sub-intervals in S (default st = 0.05
      seconds).

   stn:  The beginning boundary of a specific sub-interval, n, one of N
      sub-intervals in S.

   S SHALL be longer than I, primarily to account for on-demand
   activation of the path, or any preamble to testing required, and the
   delay of the path.

   st SHOULD be much smaller than the sub-interval dt and on the same
   order as FT; otherwise, the rate measurement will include many rate
   adjustments and include more time smoothing, possibly smoothing the
   interval that contains the Maximum IP-Layer Capacity (and therefore
   losing relevance).  The st Parameter does not have relevance when
   the Source is transmitting at a fixed rate throughout S.

7.3. Metric Definition

   This section defines the REQUIRED aspects of the IP-Layer Sender Bit
   Rate Metric (unless otherwise indicated) for measurements at the
   specified Source on packets addressed for the intended Destination
   host and matching the required Type-P:

   Define the IP-Layer Sender Bit Rate, B(S,st), to be the number of
   IP-Layer bits (including header and data fields) that are
   transmitted from the Source with address pair Src and Dst during one
   contiguous sub-interval, st, during the test interval S (where S
   SHALL be longer than I) and where the fixed-size packet count during
   that single sub-interval st also provides the number of IP-Layer
   bits in any interval, [stn,stn+1].

   Measurements according to this definition SHALL use the UDP
   transport layer.  Any feedback from the Dst host to the Src host
   received by the Src host during an interval [stn,stn+1] SHOULD NOT
   result in an adaptation of the Src host traffic conditioning during
   this interval (rate adjustment occurs on st interval boundaries).
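
   As a non-normative illustration, the sketch below computes B(S,st)
   for each st sub-interval from the IP-Layer bits recorded at (or as
   close as practical to) the Src host's interface to the network path;
   the names and bookkeeping are illustrative assumptions.

      # Non-normative sketch: IP-Layer Sender Bit Rate B(S,st) for
      # each sub-interval st, from per-sub-interval counts of
      # transmitted IP-Layer bits at the Source.
      def sender_bit_rates(bits_sent_per_st, st):
          return [bits / st for bits in bits_sent_per_st]

      # Example: three st = 0.05 sec sub-intervals (the default st).
      rates = sender_bit_rates([5_000_000, 5_050_000, 4_990_000],
                               st=0.05)
      print([round(r / 1e6, 1) for r in rates], "Mbps")
      # prints [100.0, 101.0, 99.8] Mbps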

7.4. Discussion

   Both the Sender and Receiver (or Source and Destination) bit rates
   SHOULD be assessed as part of an IP-Layer Capacity measurement.
   Otherwise, an unexpected sending rate limitation could produce an
   erroneous Maximum IP-Layer Capacity measurement.

7.5. Reporting the Metric

   The IP-Layer Sender Bit Rate SHALL be reported with meaningful
   resolution, in units of Megabits per second (which, to avoid any
   confusion, is 1,000,000 bits per second).

   Individual IP-Layer Sender Bit Rate measurements are discussed
   further in Section 9.
8. Method of Measurement 8. Method of Measurement
The architecture of the method REQUIRES two cooperating hosts It is REQUIRED per the architecture of the method that two
operating in the roles of Src (test packet sender) and Dst cooperating hosts operate in the roles of Src (test packet Sender)
(receiver), with a measured path and return path between them. and Dst (Receiver) with a measured path and return path between them.
The duration of a test, parameter I, MUST be constrained in a The duration of a test, Parameter I, MUST be constrained in a
production network, since this is an active test method and it will production network, since this is an active test method and it will
likely cause congestion on the Src to Dst host path during a test. likely cause congestion on the path from the Src host to the Dst host
during a test.
8.1. Load Rate Adjustment Algorithm 8.1. Load Rate Adjustment Algorithm
The algorithm described in this section MUST NOT be used as a general The algorithm described in this section MUST NOT be used as a general
Congestion Control Algorithm (CCA). As stated in the Scope Congestion Control Algorithm (CCA). As stated in Section 2 ("Scope,
Section 2, the load rate adjustment algorithm's goal is to help Goals, and Applicability"), the load rate adjustment algorithm's goal
determine the Maximum IP-Layer Capacity in the context of an is to help determine the Maximum IP-Layer Capacity in the context of
infrequent, diagnostic, short term measurement. There is a tradeoff an infrequent, diagnostic, short-term measurement. There is a trade-
between test duration (also the test data volume) and algorithm off between test duration (also the test data volume) and algorithm
aggressiveness (speed of ramp-up and down to the Maximum IP-Layer aggressiveness (speed of ramp-up and ramp-down to the Maximum IP-
Capacity). The parameter values chosen below strike a well-tested Layer Capacity). The Parameter values chosen below strike a well-
balance among these factors. tested balance among these factors.
A table SHALL be pre-built (by the test initiator) defining all the A table SHALL be pre-built (by the test administrator), defining all
offered load rates that will be supported (R1 through Rn, in the offered load rates that will be supported (R1 through Rn, in
ascending order, corresponding to indexed rows in the table). It is ascending order, corresponding to indexed rows in the table). It is
RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps
at index one, and then continue in 1 Mbps increments to 1 Gbps. at index one, and then continue in 1 Mbps increments to 1 Gbps.
Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps
increments be used. Above 10 Gbps, increments of 1 Gbps are increments be used. Above 10 Gbps, increments of 1 Gbps are
RECOMMENDED. A higher initial IP-Layer Sender Bitrate might be RECOMMENDED. A higher initial IP-Layer Sender Bit Rate might be
configured when the test operator is certain that the Maximum IP- configured when the test operator is certain that the Maximum IP-
Layer Capacity is well-above the initial IP-Layer Sender Bitrate and Layer Capacity is well above the initial IP-Layer Sender Bit Rate and
factors such as test duration and total test traffic play an factors such as test duration and total test traffic play an
important role. The sending rate table SHOULD backet the maximum important role. The sending rate table SHOULD bracket the Maximum
capacity where it will make measurements, including constrained rates Capacity where it will make measurements, including constrained rates
less than 500kbps if applicable. less than 500 kbps if applicable.
Each rate is defined as datagrams of size ss, sent as a burst of Each rate is defined as datagrams of size ss, sent as a burst of
count cc, each time interval tt (default for tt is 1ms, a likely count cc, each time interval tt (the default for tt is 100 microsec,
system tick-interval). While it is advantageous to use datagrams of a likely system tick interval). While it is advantageous to use
as large a size as possible, it may be prudent to use a slightly datagrams of as large a size as possible, it may be prudent to use a
smaller maximum that allows for secondary protocol headers and/or slightly smaller maximum that allows for secondary protocol headers
tunneling without resulting in IP-Layer fragmentation. Selection of and/or tunneling without resulting in IP-Layer fragmentation.
a new rate is indicated by a calculation on the current row, Rx. For Selection of a new rate is indicated by a calculation on the current
example: row, Rx. For example:
"Rx+1": the sender uses the next higher rate in the table. "Rx+1": The Sender uses the next-higher rate in the table.
"Rx-10": the sender uses the rate 10 rows lower in the table. "Rx-10": The Sender uses the rate 10 rows lower in the table.
At the beginning of a test, the sender begins sending at rate R1 and At the beginning of a test, the Sender begins sending at rate R1 and
the receiver starts a feedback timer of duration FT (while awaiting the Receiver starts a feedback timer of duration FT (while awaiting
inbound datagrams). As datagrams are received they are checked for inbound datagrams). As datagrams are received, they are checked for
sequence number anomalies (loss, out-of-order, duplication, etc.) and sequence number anomalies (loss, out-of-order, duplication, etc.) and
the delay range is measured (one-way or round-trip). This the delay range is measured (one-way or round-trip). This
information is accumulated until the feedback timer FT expires and a information is accumulated until the feedback timer FT expires and a
status feedback message is sent from the receiver back to the sender, status feedback message is sent from the Receiver back to the Sender,
to communicate this information. The accumulated statistics are then to communicate this information. The accumulated statistics are then
reset by the receiver for the next feedback interval. As feedback reset by the Receiver for the next feedback interval. As feedback
messages are received back at the sender, they are evaluated to messages are received back at the Sender, they are evaluated to
determine how to adjust the current offered load rate (Rx). determine how to adjust the current offered load rate (Rx).
If the feedback indicates that no sequence number anomalies were If the feedback indicates that no sequence number anomalies were
detected AND the delay range was below the lower threshold, the detected AND the delay range was below the lower threshold, the
offered load rate is increased. If congestion has not been confirmed offered load rate is increased. If congestion has not been confirmed
up to this point (see below for the method to declare congestion), up to this point (see below for the method for declaring congestion),
the offered load rate is increased by more than one rate (e.g., the offered load rate is increased by more than one rate setting
Rx+10). This allows the offered load to quickly reach a near-maximum (e.g., Rx+10). This allows the offered load to quickly reach a near-
rate. Conversely, if congestion has been previously confirmed, the maximum rate. Conversely, if congestion has been previously
offered load rate is only increased by one (Rx+1). However, if a confirmed, the offered load rate is only increased by one (Rx+1).
rate threshold between high and very high sending rates (such as 1 However, if a rate threshold above a high sending rate (such as 1
Gbps) is exceeded, the offered load rate is only increased by one Gbps) is exceeded, the offered load rate is only increased by one
(Rx+1) above the rate threshold in any congestion state. (Rx+1) in any congestion state.
If the feedback indicates that sequence number anomalies were If the feedback indicates that sequence number anomalies were
detected OR the delay range was above the upper threshold, the detected OR the delay range was above the upper threshold, the
offered load rate is decreased. The RECOMMENDED threshold values are offered load rate is decreased. The RECOMMENDED threshold values are
0 for sequence number gaps and 30 ms for lower and 90 ms for upper 10 for sequence number gaps and 30 msec for lower and 90 msec for
delay thresholds, respectively. Also, if congestion is now confirmed upper delay thresholds, respectively. Also, if congestion is now
for the first time by the current feedback message being processed, confirmed for the first time by the current feedback message being
then the offered load rate is decreased by more than one rate (e.g., processed, then the offered load rate is decreased by more than one
Rx-30). This one-time reduction is intended to compensate for the rate setting (e.g., Rx-30). This one-time reduction is intended to
fast initial ramp-up. In all other cases, the offered load rate is compensate for the fast initial ramp-up. In all other cases, the
only decreased by one (Rx-1). offered load rate is only decreased by one (Rx-1).
If the feedback indicates that there were no sequence number If the feedback indicates that there were no sequence number
anomalies AND the delay range was above the lower threshold, but anomalies AND the delay range was above the lower threshold but below
below the upper threshold, the offered load rate is not changed. the upper threshold, the offered load rate is not changed. This
This allows time for recent changes in the offered load rate to allows time for recent changes in the offered load rate to stabilize
stabilize, and the feedback to represent current conditions more and for the feedback to represent current conditions more accurately.
accurately.
Lastly, the method for inferring congestion is that there were Lastly, the method for inferring congestion is that there were
sequence number anomalies AND/OR the delay range was above the upper sequence number anomalies AND/OR the delay range was above the upper
threshold for two consecutive feedback intervals. The algorithm threshold for three consecutive feedback intervals. The algorithm
described above is also illustrated in ITU-T Rec. Y.1540, 2020 described above is also illustrated in Annex B of ITU-T
version[Y.1540], in Annex B, and implemented in the Appendix on Load Recommendation Y.1540, 2020 version [Y.1540] and is implemented in
Rate Adjustment Pseudo Code in this memo. Appendix A ("Load Rate Adjustment Pseudocode") in this memo.
The load rate adjustment algorithm MUST include timers that stop the The load rate adjustment algorithm MUST include timers that stop the
test when received packet streams cease unexpectedly. The timeout test when received packet streams cease unexpectedly. The timeout
thresholds are provided in the table below, along with values for all thresholds are provided in Table 1, along with values for all other
other parameters and variables described in this section. Operation Parameters and variables described in this section. Operations of
of non-obvious parameters appear below: non-obvious Parameters appear below:
load packet timeout Operation: The load packet timeout SHALL be load packet timeout:
reset to the configured value each time a load packet received. The load packet timeout SHALL be reset to the configured value
If the timeout expires, the receiver SHALL be closed and no each time a load packet is received. If the timeout expires, the
further feedback sent. Receiver SHALL be closed and no further feedback sent.
feedback message timeout Operation: The feedback message timeout feedback message timeout:
SHALL be reset to the configured value each time a feedback The feedback message timeout SHALL be reset to the configured
message is received. If the timeout expires, the sender SHALL be value each time a feedback message is received. If the timeout
closed and no further load packets sent. expires, the Sender SHALL be closed and no further load packets
sent.
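
   The following sketch shows one possible implementation of the two
   stop-test timers on a POSIX host, using the default values from the
   table below; the function and variable names are assumptions for this
   example only, and the stored timestamps are reset on each arrival as
   described above.

   #include <stdbool.h>
   #include <time.h>

   /* Seconds elapsed since a stored monotonic timestamp. */
   static double seconds_since(const struct timespec *then)
   {
       struct timespec now;
       clock_gettime(CLOCK_MONOTONIC, &now);
       return (double)(now.tv_sec - then->tv_sec)
              + (double)(now.tv_nsec - then->tv_nsec) / 1e9;
   }

   /* Receiver side: close and stop sending feedback if no load packet
    * has arrived within the load packet timeout (default 1 sec). */
   static bool load_packet_timeout_expired(const struct timespec *last_load_rx)
   {
       return seconds_since(last_load_rx) > 1.0;
   }

   /* Sender side: close and stop sending load packets if no feedback
    * message has arrived within L*FT (default L=20, FT=50 msec -> 1 sec). */
   static bool feedback_timeout_expired(const struct timespec *last_fb_rx)
   {
       return seconds_since(last_fb_rx) > 20 * 0.050;
   }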
+-------------+-------------+---------------+-----------------------+ +=============+==========+===========+=========================+
| Parameter | Default | Tested Range | Expected Safe Range | | Parameter | Default | Tested | Expected Safe Range |
| | | or values | (not entirely tested, | | | | Range or | (not entirely tested, |
| | | | other | | | | Values | other values NOT |
| | | | values NOT | | | | | RECOMMENDED) |
| | | | RECOMMENDED) | +=============+==========+===========+=========================+
+-------------+-------------+---------------+-----------------------+ | FT, | 50 msec | 20 msec, | 20 msec <= FT <= 250 |
| FT, | 50ms | 20ms, 50ms, | 20ms <= FT <= 250ms | | feedback | | 50 msec, | msec; larger values may |
| feedback | | 100ms | Larger values may | | time | | 100 msec | slow the rate increase |
| time | | | slow the rate | | interval | | | and fail to find the |
| interval | | | increase and fail to | | | | | max |
| | | | find the max | +-------------+----------+-----------+-------------------------+
+-------------+-------------+---------------+-----------------------+ | Feedback | L*FT, | L=100 | 0.5 sec <= L*FT <= 30 |
| Feedback | L*FT, L=20 | L=100 with | 0.5sec <= L*FT <= | | message | L=20 (1 | with | sec; upper limit for |
| message | (1sec with | FT=50ms | 30sec Upper limit for | | timeout | sec with | FT=50 | very unreliable test |
| timeout | FT=50ms) | (5sec) | very unreliable | | (stop test) | FT=50 | msec (5 | paths only |
| (stop test) | | | test paths only | | | msec) | sec) | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| load packet | 1sec | 5sec | 0.250sec - 30sec | | Load packet | 1 sec | 5 sec | 0.250-30 sec; upper |
| timeout | | | Upper limit for very | | timeout | | | limit for very |
| (stop test) | | | unreliable test paths | | (stop test) | | | unreliable test paths |
| | | | only | | | | | only |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| table index | 0.5Mbps | 0.5Mbps | when testing <=10Gbps | | Table index | 0.5 Mbps | 0.5 Mbps | When testing <= 10 Gbps |
| 0 | | | | | 0 | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| table index | 1Mbps | 1Mbps | when testing <=10Gbps | | Table index | 1 Mbps | 1 Mbps | When testing <= 10 Gbps |
| 1 | | | | | 1 | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| table index | 1Mbps | 1Mbps<=rate<= | same as tested | | Table index | 1 Mbps | 1 Mbps <= | Same as tested |
| (step) size | | 1Gbps | | | (step) size | | rate <= 1 | |
+-------------+-------------+---------------+-----------------------+ | | | Gbps | |
| table index | 100Mbps | 1Gbps<=rate<= | same as tested | +-------------+----------+-----------+-------------------------+
| (step) | | 10Gbps | | | Table index | 100 Mbps | 1 Gbps <= | Same as tested |
| size, | | | | | (step) | | rate <= | |
| rate>1Gbps | | | | | size, rate | | 10 Gbps | |
+-------------+-------------+---------------+-----------------------+ | > 1 Gbps | | | |
| table index | 1Gbps | untested | >10Gbps | +-------------+----------+-----------+-------------------------+
| (step) | | | | | Table index | 1 Gbps | Untested | >10 Gbps |
| size, | | | | | (step) | | | |
| rate>10Gbps | | | | | size, rate | | | |
+-------------+-------------+---------------+-----------------------+ | > 10 Gbps | | | |
| ss, UDP | none | <=1222 | Recommend max at | +-------------+----------+-----------+-------------------------+
| payload | | | largest value that | | ss, UDP | None | <=1222 | Recommend max at |
| size, bytes | | | avoids fragmentation; | | payload | | | largest value that |
| | | | use of too- | | size, bytes | | | avoids fragmentation; |
| | | | small payload size | | | | | using a payload size |
| | | | might result in | | | | | that is too small might |
| | | | unexpected sender | | | | | result in unexpected |
| | | | limitations. | | | | | Sender limitations |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| cc, burst | none | 1<=cc<= 100 | same as tested. Vary | | cc, burst | None | 1 <= cc | Same as tested. Vary |
| count | | | cc as needed to | | count | | <= 100 | cc as needed to create |
| | | | create the desired | | | | | the desired maximum |
| | | | maximum | | | | | sending rate. Sender |
| | | | sending rate. Sender | | | | | buffer size may limit |
| | | | buffer size may limit | | | | | cc in the |
| | | | cc in implementation. | | | | | implementation |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| tt, burst | 100microsec | 100microsec, | available range of | | tt, burst | 100 | 100 | Available range of |
| interval | | 1msec | "tick" values (HZ | | interval | microsec | microsec, | "tick" values (HZ |
| | | | param) | | | | 1 msec | param) |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| low delay | 30ms | 5ms, 30ms | same as tested | | Low delay | 30 msec | 5 msec, | Same as tested |
| range | | | | | range | | 30 msec | |
| threshold | | | | | threshold | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| high delay | 90ms | 10ms, 90ms | same as tested | | High delay | 90 msec | 10 msec, | Same as tested |
| range | | | | | range | | 90 msec | |
| threshold | | | | | threshold | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| sequence | 0 | 0, 100 | same as tested | | Sequence | 10 | 0, 1, 5, | Same as tested |
| error | | | | | error | | 10, 100 | |
| threshold | | | | | threshold | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| consecutive | 2 | 2 | Use values >1 to | | Consecutive | 3 | 2, 3, 4, | Use values >1 to avoid |
| errored | | | avoid misinterpreting | | errored | | 5 | misinterpreting |
| status | | | transient loss | | status | | | transient loss |
| report | | | | | report | | | |
| threshold | | | | | threshold | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| Fast mode | 10 | 10 | 2 <= steps <= 30 | | Fast mode | 10 | 10 | 2 <= steps <= 30 |
| increase, | | | | | increase, | | | |
| in table | | | | | in table | | | |
| index steps | | | | | index steps | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
| Fast mode | 3 * Fast | 3 * Fast mode | same as tested | | Fast mode | 3 * Fast | 3 * Fast | Same as tested |
| decrease, | mode | increase | | | decrease, | mode | mode | |
| in table | increase | | | | in table | increase | increase | |
| index steps | | | | | index steps | | | |
+-------------+-------------+---------------+-----------------------+ +-------------+----------+-----------+-------------------------+
Parameters for Load Rate Adjustment Algorithm Table 1: Parameters for Load Rate Adjustment Algorithm
As a consequence of default parameterization, the Number of table As a consequence of default parameterization, the Number of table
steps in total for rates <10Gbps is 2000 (excluding index 0). steps in total for rates less than 10 Gbps is 1090 (excluding index
0).
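
   A sketch of the RECOMMENDED table construction up to 10 Gbps follows
   (hypothetical names; rates stored in Mbps). Building the table this
   way yields 1090 rows above index 0, matching the count of table steps
   noted above.

   #define MAX_ROWS 1091   /* index 0 plus 1090 steps up to 10 Gbps */

   /* Illustrative only: build the RECOMMENDED sending rate table in Mbps.
    * Returns the number of rows filled in (1091: indexes 0..1090). */
   static int build_rate_table(double rate_mbps[MAX_ROWS])
   {
       int n = 0;
       rate_mbps[n++] = 0.5;                      /* index 0 */
       for (int r = 1; r <= 1000; r++)            /* 1 Mbps steps to 1 Gbps */
           rate_mbps[n++] = (double)r;
       for (int r = 1100; r <= 10000; r += 100)   /* 100 Mbps steps to 10 Gbps */
           rate_mbps[n++] = (double)r;
       return n;
   }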
A related sender backoff response to network conditions occurs when A related Sender backoff response to network conditions occurs when
one or more status feedback messages fail to arrive at the sender. one or more status feedback messages fail to arrive at the Sender.
If no status feedback messages arrive at the sender for the interval If no status feedback messages arrive at the Sender for the interval
greater than the Lost Status Backoff timeout: greater than the Lost Status Backoff timeout:
UDRT + (2+w)*FT = Lost Status Backoff timeout UDRT + (2+w)*FT = Lost Status Backoff timeout
where: where:
UDRT = upper delay range threshold (default 90ms)
FT = feedback time interval (default 50ms) UDRT = upper delay range threshold (default 90 msec)
FT = feedback time interval (default 50 msec)
w = number of repeated timeouts (w=0 initially, w++ on each w = number of repeated timeouts (w=0 initially, w++ on each
timeout, and reset to 0 when a message is received) timeout, and reset to 0 when a message is received)
beginning when the last message (of any type) was successfully Beginning when the last message (of any type) was successfully
received at the sender: received at the Sender:
Then the offered load SHALL be decreased, following the same process The offered load SHALL then be decreased, following the same process
as when the feedback indicates presence of one or more sequence as when the feedback indicates the presence of one or more sequence
number anomalies OR the delay range was above the upper threshold (as number anomalies OR the delay range was above the upper threshold (as
described above), with the same load rate adjustment algorithm described above), with the same load rate adjustment algorithm
variables in their current state. This means that rate reduction and variables in their current state. This means that lost status
congestion confirmation can result from a three-way OR that includes feedback messages OR sequence errors OR delay variation can result in
lost status feedback messages, sequence errors, or delay variation. rate reduction and congestion confirmation.
The RECOMMENDED initial value for w is 0, taking Round Trip Time The RECOMMENDED initial value for w is 0, taking a Round-Trip Time
(RTT) less than FT into account. A test with RTT longer than FT is a (RTT) of less than FT into account. A test with an RTT longer than
valid reason to increase the initial value of w appropriately. FT is a valid reason to increase the initial value of w
Variable w SHALL be incremented by 1 whenever the Lost Status Backoff appropriately. Variable w SHALL be incremented by one whenever the
timeout is exceeded. So with FT = 50ms and UDRT = 90ms, a status Lost Status Backoff timeout is exceeded. So, with FT = 50 msec and
feedback message loss would be declared at 190ms following a UDRT = 90 msec, a status feedback message loss would be declared at
successful message, again at 50ms after that (240ms total), and so 190 msec following a successful message, again at 50 msec after that
on. (240 msec total), and so on.
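
   The Lost Status Backoff timeout above can be computed as in the
   following sketch (values in milliseconds; names are illustrative
   only).

   /* Illustrative only: Lost Status Backoff timeout, in msec.
    * udrt_ms = upper delay range threshold (default 90)
    * ft_ms   = feedback time interval (default 50)
    * w       = repeated timeouts since the last received message */
   static double lost_status_backoff_ms(double udrt_ms, double ft_ms, int w)
   {
       return udrt_ms + (2 + w) * ft_ms;
   }

   /* With the defaults: w=0 -> 190 msec after the last successful
    * message; w=1 -> 240 msec total; and so on, adding FT for each
    * repeated timeout. */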
Also, if congestion is now confirmed for the first time by a Lost Also, if congestion is now confirmed for the first time by a Lost
Status Backoff timeout, then the offered load rate is decreased by Status Backoff timeout, then the offered load rate is decreased by
more than one rate (e.g., Rx-30). This one-time reduction is more than one rate setting (e.g., Rx-30). This one-time reduction is
intended to compensate for the fast initial ramp-up. In all other intended to compensate for the fast initial ramp-up. In all other
cases, the offered load rate is only decreased by one (Rx-1). cases, the offered load rate is only decreased by one (Rx-1).
Appendix B discusses compliance with the applicable mandatory Appendix B discusses compliance with the applicable mandatory
requirements of [RFC8085], consistent with the goals of the IP-Layer requirements of [RFC8085], consistent with the goals of the IP-Layer
Capacity Metric and Method, including the load rate adjustment Capacity Metric and Method, including the load rate adjustment
algorithm described in this section. algorithm described in this section.
8.2. Measurement Qualification or Verification 8.2. Measurement Qualification or Verification
It is of course necessary to calibrate the equipment performing the It is of course necessary to calibrate the equipment performing the
IP-Layer Capacity measurement, to ensure that the expected capacity IP-Layer Capacity measurement, to ensure that the expected capacity
can be measured accurately, and that equipment choices (processing can be measured accurately and that equipment choices (processing
speed, interface bandwidth, etc.) are suitably matched to the speed, interface bandwidth, etc.) are suitably matched to the
measurement range. measurement range.
When assessing a Maximum rate as the metric specifies, artificially When assessing a maximum rate as the metric specifies, artificially
high (optimistic) values might be measured until some buffer on the high (optimistic) values might be measured until some buffer on the
path is filled. Other causes include bursts of back-to-back packets path is filled. Other causes include bursts of back-to-back packets
with idle intervals delivered by a path, while the measurement with idle intervals delivered by a path, while the measurement
interval (dt) is small and aligned with the bursts. The artificial interval (dt) is small and aligned with the bursts. The artificial
values might result in an un-sustainable Maximum Capacity observed values might result in an unsustainable Maximum Capacity observed
when the method of measurement is searching for the Maximum, and that when the Method of Measurement is searching for the maximum, and that
would not do. This situation is different from the bi-modal service would not do. This situation is different from the bimodal service
rates (discussed under Reporting), which are characterized by a rates (discussed in "Reporting the Metric", Section 6.6), which are
multi-second duration (much longer than the measured RTT) and characterized by a multi-second duration (much longer than the
repeatable behavior. measured RTT) and repeatable behavior.
There are many ways that the Method of Measurement could handle this There are many ways that the Method of Measurement could handle this
false-max issue. The default value for measurement of singletons (dt false-max issue. The default value for measurement of Singletons (dt
= 1 second) has proven to be of practical value during tests of this = 1 second) has proven to be of practical value during tests of this
method, allows the bimodal service rates to be characterized, and it method, allows the bimodal service rates to be characterized, and has
has an obvious alignment with the reporting units (Mbps). an obvious alignment with the reporting units (Mbps).
Another approach comes from Section 24 of [RFC2544] and its Another approach comes from Section 24 of [RFC2544] and its
discussion of Trial duration, where relatively short trials conducted discussion of trial duration, where relatively short trials conducted
as part of the search are followed by longer trials to make the final as part of the search are followed by longer trials to make the final
determination. In the production network, measurements of Singletons determination. In the production network, measurements of Singletons
and Samples (the terms for trials and tests of Lab Benchmarking) must and Samples (the terms for trials and tests of Lab Benchmarking) must
be limited in duration because they may be service-affecting. But be limited in duration because they may affect service. But there is
there is sufficient value in repeating a Sample with a fixed sending sufficient value in repeating a Sample with a fixed sending rate
rate determined by the previous search for the Maximum IP-Layer determined by the previous search for the Maximum IP-Layer Capacity,
Capacity, to qualify the result in terms of the other performance to qualify the result in terms of the other performance metrics
metrics measured at the same time. measured at the same time.
A qualification measurement for the search result is a subsequent A Qualification measurement for the search result is a subsequent
measurement, sending at a fixed 99.x % of the Maximum IP-Layer measurement, sending at a fixed 99.x percent of the Maximum IP-Layer
Capacity for I, or an indefinite period. The same Maximum Capacity Capacity for I, or an indefinite period. The same Maximum Capacity
Metric is applied, and the Qualification for the result is a Sample Metric is applied, and the Qualification for the result is a Sample
without packet loss or a growing minimum delay trend in subsequent without supra-threshold packet losses or a growing minimum delay
singletons (or each dt of the measurement interval, I). Samples trend in subsequent Singletons (or each dt of the measurement
exhibiting losses or increasing queue occupation require a repeated interval, I). Samples exhibiting supra-threshold packet losses or
search and/or test at reduced fixed sender rate for qualification. increasing queue occupation require a repeated search and/or test at
a reduced fixed Sender rate for Qualification.
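
   A minimal sketch of this Qualification step follows, under the
   assumption that the fixed rate is set at 99.0 percent of the measured
   maximum and that per-Sample loss and delay-trend indicators are
   already available; all names are hypothetical.

   /* Illustrative only: fixed sending rate and pass/fail check for the
    * Qualification measurement described above. */
   struct qual_sample {
       int supra_threshold_loss;     /* nonzero if losses exceeded threshold */
       int min_delay_trend_rising;   /* nonzero if min delay grows across dt */
   };

   static double qualification_rate_mbps(double max_capacity_mbps)
   {
       return 0.99 * max_capacity_mbps;   /* "99.x %"; 99.0 assumed here */
   }

   static int qualification_passed(const struct qual_sample *s)
   {
       return !s->supra_threshold_loss && !s->min_delay_trend_rising;
   }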
Here, as with any Active Capacity test, the test duration must be Here, as with any Active Capacity test, the test duration must be
kept short. 10 second tests for each direction of transmission are kept short. Ten-second tests for each direction of transmission are
common today. The default measurement interval specified here is I = common today. The default measurement interval specified here is I =
10 seconds. The combination of a fast and congestion-aware search 10 seconds. The combination of a fast and congestion-aware search
method and user-network coordination make a unique contribution to method and user-network coordination makes a unique contribution to
production testing. The Maximum IP Capacity metric and method for production testing. The Maximum IP Capacity Metric and Method for
assessing performance is very different from classic [RFC2544] assessing performance is very different from the classic Throughput
Throughput metric and methods : it uses near-real-time load Metric and Methods provided in [RFC2544]: it uses near-real-time load
adjustments that are sensitive to loss and delay, similar to other adjustments that are sensitive to loss and delay, similar to other
congestion control algorithms used on the Internet every day, along congestion control algorithms used on the Internet every day, along
with limited duration. On the other hand, [RFC2544] Throughput with limited duration. On the other hand, Throughput measurements
measurements can produce sustained overload conditions for extended [RFC2544] can produce sustained overload conditions for extended
periods of time. Individual trials in a test governed by a binary periods of time. Individual trials in a test governed by a binary
search can last 60 seconds for each step, and the final confirmation search can last 60 seconds for each step, and the final confirmation
trial may be even longer. This is very different from "normal" trial may be even longer. This is very different from "normal"
traffic levels, but overload conditions are not a concern in the traffic levels, but overload conditions are not a concern in the
isolated test environment. The concerns raised in [RFC6815] were isolated test environment. The concerns raised in [RFC6815] were
that [RFC2544] methods would be let loose on production networks, and that the methods discussed in [RFC2544] would be let loose on
instead the authors challenged the standards community to develop production networks, and instead the authors challenged the standards
metrics and methods like those described in this memo. community to develop Metrics and Methods like those described in this
memo.
8.3. Measurement Considerations 8.3. Measurement Considerations
In general, the wide-spread measurements that this memo encourages In general, the widespread measurements that this memo encourages
will encounter wide-spread behaviors. The bimodal IP Capacity will encounter widespread behaviors. The bimodal IP Capacity
behaviors already discussed in Section 6.6 are good examples. behaviors already discussed in Section 6.6 are good examples.
In general, it is RECOMMENDED to locate test endpoints as close to In general, it is RECOMMENDED to locate test endpoints as close to
the intended measured link(s) as practical (this is not always the intended measured link(s) as practical (for reasons of scale,
possible for reasons of scale; there is a limit on number of test this is not always possible; there is a limit on the number of test
endpoints coming from many perspectives, management and measurement endpoints coming from many perspectives -- for example, management
traffic for example). The testing operator MUST set a value for the and measurement traffic). The testing operator MUST set a value for
MaxHops parameter, based on the expected path length. This parameter the MaxHops Parameter, based on the expected path length. This
can keep measurement traffic from straying too far beyond the Parameter can keep measurement traffic from straying too far beyond
intended path. the intended path.
The path measured may be stateful based on many factors, and the The measured path may be stateful based on many factors, and the
Parameter "Time of day" when a test starts may not be enough Parameter "Time of day" when a test starts may not be enough
information. Repeatable testing may require the time from the information. Repeatable testing may require knowledge of the time
beginning of a measured flow, and how the flow is constructed from the beginning of a measured flow -- and how the flow is
including how much traffic has already been sent on that flow when a constructed, including how much traffic has already been sent on that
state-change is observed, because the state-change may be based on flow when a state change is observed -- because the state change may
time or bytes sent or both. Both load packets and status feedback be based on time, bytes sent, or both. Both load packets and status
messages MUST contain sequence numbers, which helps with measurements feedback messages MUST contain sequence numbers; this helps with
based on those packets. measurements based on those packets.
Many different types of traffic shapers and on-demand communications Many different types of traffic shapers and on-demand communications
access technologies may be encountered, as anticipated in [RFC7312], access technologies may be encountered, as anticipated in [RFC7312],
and play a key role in measurement results. Methods MUST be prepared and play a key role in measurement results. Methods MUST be prepared
to provide a short preamble transmission to activate on-demand to provide a short preamble transmission to activate on-demand
communications access, and to discard the preamble from subsequent communications access and to discard the preamble from subsequent
test results. test results.
Conditions which might be encountered during measurement, where The following conditions might be encountered during measurement,
packet losses may occur independently of the measurement sending where packet losses may occur independently of the measurement
rate: sending rate:
1. Congestion of an interconnection or backbone interface may appear 1. Congestion of an interconnection or backbone interface may appear
as packet losses distributed over time in the test stream, due to as packet losses distributed over time in the test stream, due to
much higher rate interfaces in the backbone. much-higher-rate interfaces in the backbone.
2. Packet loss due to use of Random Early Detection (RED) or other 2. Packet loss due to the use of Random Early Detection (RED) or
active queue management may or may not affect the measurement other active queue management may or may not affect the
flow if competing background traffic (other flows) are measurement flow if competing background traffic (other flows) is
simultaneously present. simultaneously present.
3. There may be only small delay variation independent of sending 3. There may be only a small delay variation independent of the
rate under these conditions, too. sending rate under these conditions as well.
4. Persistent competing traffic on measurement paths that include 4. Persistent competing traffic on measurement paths that include
shared transmission media may cause random packet losses in the shared transmission media may cause random packet losses in the
test stream. test stream.
It is possible to mitigate these conditions using the flexibility of It is possible to mitigate these conditions using the flexibility of
the load-rate adjusting algorithm described in Section 8.1 above the load rate adjustment algorithm described in Section 8.1 above
(tuning specific parameters). (tuning specific Parameters).
If the measurement flow burst duration happens to be on the order of If the measurement flow burst duration happens to be on the order of
or smaller than the burst size of a shaper or a policer in the path, or smaller than the burst size of a shaper or a policer in the path,
then the line rate might be measured rather than the bandwidth limit then the line rate might be measured rather than the bandwidth limit
imposed by the shaper or policer. If this condition is suspected, imposed by the shaper or policer. If this condition is suspected,
alternate configurations SHOULD be used. alternate configurations SHOULD be used.
In general, results depend on the sending stream characteristics; the In general, results depend on the sending stream's characteristics;
measurement community has known this for a long time, and needs to the measurement community has known this for a long time and needs to
keep it front of mind. Although the default is a single flow (F=1) keep it foremost in mind. Although the default is a single flow
for testing, use of multiple flows may be advantageous for the (F=1) for testing, the use of multiple flows may be advantageous for
following reasons: the following reasons:
1. the test hosts may be able to create higher load than with a 1. The test hosts may be able to create a higher load than with a
single flow, or parallel test hosts may be used to generate 1 single flow, or parallel test hosts may be used to generate one
flow each. flow each.
2. there may be link aggregation present (flow-based load balancing) 2. Link aggregation may be present (flow-based load balancing), and
and multiple flows are needed to occupy each member of the multiple flows are needed to occupy each member of the aggregate.
aggregate.
3. Internet access policies may limit the IP-Layer Capacity 3. Internet access policies may limit the IP-Layer Capacity
depending on the Type-P of packets, possibly reserving capacity depending on the Type-P of the packets, possibly reserving
for various stream types. capacity for various stream types.
Each flow would be controlled using its own implementation of the Each flow would be controlled using its own implementation of the
load rate adjustment (search) algorithm. load rate adjustment (search) algorithm.
It is obviously counter-productive to run more than one independent It is obviously counterproductive to run more than one independent
and concurrent test (regardless of the number of flows in the test and concurrent test (regardless of the number of flows in the test
stream) attempting to measure the *maximum* capacity on a single stream) attempting to measure the *maximum* capacity on a single
path. The number of concurrent, independent tests of a path SHALL be path. The number of concurrent, independent tests of a path SHALL be
limited to one. limited to one.
Tests of a v4-v6 transition mechanism might well be the intended Tests of a v4-v6 transition mechanism might well be the intended
subject of a capacity test. As long as the IPv4 and IPv6 packets subject of a capacity test. As long as both IPv4 packets and IPv6
sent/received are both standard-formed, this should be allowed (and packets sent/received are standard-formed, this should be allowed
the change in header size easily accounted for on a per-packet (and the change in header size easily accounted for on a per-packet
basis). basis).
As testing continues, implementers should expect some evolution in As testing continues, implementers should expect the methods to
the methods. The ITU-T has published a Supplement (60) to the evolve. The ITU-T has published a supplement (Supplement 60) to the
Y-series of Recommendations, "Interpreting ITU-T Y.1540 Maximum IP- Y-series of ITU-T Recommendations, "Interpreting ITU-T Y.1540 maximum
Layer Capacity measurements", [Y.Sup60], which is the result of IP-layer capacity measurements" [Y.Sup60], which is the result of
continued testing with the metric, and those results have improved continued testing with the metric. Those results have improved the
the method described here. methods described here.
8.4. Running Code
RFC Editor: This section is for the benefit of the Document
Shepherd's form, and will be deleted prior to publication.
Much of the development of the method and comparisons with existing
methods conducted at IETF Hackathons and elsewhere have been based on
the example udpst Linux measurement tool (which is a working
reference for further development) [udpst]. The current project:
o is a utility that can function as a client or server daemon
o requires a successful client-initiated setup handshake between
cooperating hosts and allows firewalls to control inbound
unsolicited UDP which either go to a control port [expected and w/
authentication] or to ephemeral ports that are only created as
needed. Firewalls protecting each host can both continue to do
their job normally. This aspect is similar to many other test
utilities available.
o is written in C, and built with gcc (release 9.3) and its standard
run-time libraries
o allows configuration of most of the parameters described in
Sections 4 and 7.
o supports IPv4 and IPv6 address families.
o supports IP-Layer packet marking.
9. Reporting Formats 9. Reporting Formats
The singleton IP-Layer Capacity results SHOULD be accompanied by the The Singleton IP-Layer Capacity results SHOULD be accompanied by the
context under which they were measured. context under which they were measured.
o timestamp (especially the time when the maximum was observed in * Timestamp (especially the time when the maximum was observed in
dtn) dtn).
o Source and Destination (by IP or other meaningful ID) * Source and Destination (by IP or other meaningful ID).
o other inner parameters of the test case (Section 4) * Other inner Parameters of the test case (Section 4).
o outer parameters, such as "test conducted in motion" or other * Outer Parameters, such as "test conducted in motion" or other
factors belonging to the context of the measurement factors belonging to the context of the measurement.
o result validity (indicating cases where the process was somehow * Result validity (indicating cases where the process was somehow
interrupted or the attempt failed) interrupted or the attempt failed).
o a field where unusual circumstances could be documented, and * A field where unusual circumstances could be documented, and
another one for "ignore/mask out" purposes in further processing another one for "ignore / mask out" purposes in further
processing.
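
   One possible way to carry the context items listed above alongside
   each Singleton result is a simple record such as the following; the
   field names and sizes are illustrative only.

   /* Illustrative only: per-Singleton report context. */
   struct singleton_report_context {
       char timestamp[32];      /* when the maximum was observed (dtn)   */
       char src_id[64];         /* Source, by IP or other meaningful ID  */
       char dst_id[64];         /* Destination                           */
       char inner_params[128];  /* test-case Parameters from Section 4   */
       char outer_params[128];  /* e.g., "test conducted in motion"      */
       int  valid;              /* 0 if interrupted or the attempt failed */
       char notes[128];         /* unusual circumstances, if any         */
       int  mask_out;           /* "ignore / mask out" in post-processing */
   };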
The Maximum IP-Layer Capacity results SHOULD be reported in the The Maximum IP-Layer Capacity results SHOULD be reported in tabular
format of a table with a row for each of the test Phases and Number format. There SHOULD be a column that identifies the test Phase.
of Flows. There SHOULD be columns for the phases with number of There SHOULD be a column listing the number of flows used in that
flows, and for the resultant Maximum IP-Layer Capacity results for Phase. The remaining columns SHOULD report the following results for
the aggregate and each flow tested. the aggregate of all flows, including the Maximum IP-Layer Capacity,
the Loss Ratio, the RTT minimum, RTT maximum, and other metrics
tested having similar relevance.
As mentioned in Section 6.6, bi-modal (or multi-modal) maxima SHALL As mentioned in Section 6.6, bimodal (or multi-modal) maxima SHALL be
be reported for each mode separately. reported for each mode separately.
+-------------+-------------------------+----------+----------------+ +========+==========+==================+========+=========+=========+
| Phase, # | Maximum IP-Layer | Loss | RTT min, max, | | Phase | Number | Maximum IP-Layer | Loss | RTT min | RTT |
| Flows | Capacity, Mbps | Ratio | msec | | | of Flows | Capacity (Mbps) | Ratio | (msec) | max |
+-------------+-------------------------+----------+----------------+ | | | | | | (msec) |
| Search,1 | 967.31 | 0.0002 | 30, 58 | +========+==========+==================+========+=========+=========+
+-------------+-------------------------+----------+----------------+ | Search | 1 | 967.31 | 0.0002 | 30 | 58 |
| Verify,1 | 966.00 | 0.0000 | 30, 38 | +--------+----------+------------------+--------+---------+---------+
+-------------+-------------------------+----------+----------------+ | Verify | 1 | 966.00 | 0.0000 | 30 | 38 |
+--------+----------+------------------+--------+---------+---------+
Maximum IP-layer Capacity Results Table 2: Maximum IP-Layer Capacity Results
Static and configuration parameters: Static and configuration Parameters:
The sub-interval time, dt, MUST accompany a report of Maximum IP- The sub-interval time, dt, MUST accompany a report of Maximum IP-
Layer Capacity results, and the remaining Parameters from Section 4, Layer Capacity results, as well as the remaining Parameters from
General Parameters. Section 4 ("General Parameters and Definitions").
The PM list metrics corresponding to the sub-interval where the The PM list metrics corresponding to the sub-interval where the
Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer Maximum Capacity occurred MUST accompany a report of Maximum IP-Layer
Capacity results, for each test phase. Capacity results, for each test Phase.
The IP-Layer Sender Bit rate results SHOULD be reported in the format The IP-Layer Sender Bit Rate results SHOULD be reported in tabular
of a table with a row for each of the test phases, sub-intervals (st) format. There SHOULD be a column that identifies the test Phase.
and number of flows. There SHOULD be columns for the phases with There SHOULD be a column listing each individual (numbered) flow used
number of flows, and for the resultant IP-Layer Sender Bit rate in that Phase, or the aggregate of flows in that Phase. A
results for the aggregate and each flow tested. corresponding column SHOULD identify the specific sending rate sub-
interval, stn, for each flow and aggregate. A final column SHOULD
report the IP-Layer Sender Bit Rate results for each flow used, or
the aggregate of all flows.
+--------------------------+-------------+----------------------+ +========+==========================+===========+=============+
| Phase, Flow or Aggregate | st, sec | Sender Bitrate, Mbps | | Phase | Flow Number or Aggregate | stn (sec) | Sender Bit |
+--------------------------+-------------+----------------------+ | | | | Rate (Mbps) |
| Search,1 | 0.00 - 0.05 | 345 | +========+==========================+===========+=============+
+--------------------------+-------------+----------------------+ | Search | 1 | 0.00 | 345 |
| Search,2 | 0.00 - 0.05 | 289 | +--------+--------------------------+-----------+-------------+
+--------------------------+-------------+----------------------+ | Search | 2 | 0.00 | 289 |
| Search,Agg | 0.00 - 0.05 | 634 | +--------+--------------------------+-----------+-------------+
+--------------------------+-------------+----------------------+ | Search | Agg | 0.00 | 634 |
+--------+--------------------------+-----------+-------------+
| Search | 1 | 0.05 | 499 |
+--------+--------------------------+-----------+-------------+
| Search | ... | 0.05 | ... |
+--------+--------------------------+-----------+-------------+
IP-layer Sender Bit Rate Results Table 3: IP-Layer Sender Bit Rate Results (Example with Two
Flows and st = 0.05 (sec))
Static and configuration parameters: Static and configuration Parameters:
The subinterval time, st, MUST accompany a report of Sender IP-Layer The sub-interval duration, st, MUST accompany a report of Sender IP-
Bit Rate results. Layer Bit Rate results.
Also, the values of the remaining Parameters from Section 4, General Also, the values of the remaining Parameters from Section 4 ("General
Parameters, MUST be reported. Parameters and Definitions") MUST be reported.
9.1. Configuration and Reporting Data Formats 9.1. Configuration and Reporting Data Formats
As a part of the multi-Standards Development Organization (SDO) As a part of the multi-Standards Development Organization (SDO)
harmonization of this metric and method of measurement, one of the harmonization of this Metric and Method of Measurement, one of the
areas where the Broadband Forum (BBF) contributed its expertise was areas where the Broadband Forum (BBF) contributed its expertise was
in the definition of an information model and data model for in the definition of an information model and data model for
configuration and reporting. These models are consistent with the configuration and reporting. These models are consistent with the
metric parameters and default values specified as lists is this memo. metric Parameters and default values specified as lists in this memo.
[TR-471] provides the Information model that was used to prepare a [TR-471] provides the information model that was used to prepare a
full data model in related BBF work. The BBF has also carefully full data model in related BBF work. The BBF has also carefully
considered topics within its purview, such as placement of considered topics within its purview, such as the placement of
measurement systems within the Internet access architecture. For measurement systems within the Internet access architecture. For
example, timestamp resolution requirements that influence the choice example, timestamp resolution requirements that influence the choice
of the test protocol are provided in Table 2 of [TR-471]. of the test protocol are provided in Table 2 of [TR-471].
10. Security Considerations 10. Security Considerations
Active metrics and measurements have a long history of security Active Metrics and Active Measurements have a long history of
considerations. The security considerations that apply to any active security considerations. The security considerations that apply to
measurement of live paths are relevant here. See [RFC4656] and any Active Measurement of live paths are relevant here. See
[RFC5357]. [RFC4656] and [RFC5357].
When considering privacy of those involved in measurement or those When considering the privacy of those involved in measurement or
whose traffic is measured, the sensitive information available to those whose traffic is measured, the sensitive information available
potential observers is greatly reduced when using active techniques to potential observers is greatly reduced when using active
which are within this scope of work. Passive observations of user techniques that are within this scope of work. Passive observations
traffic for measurement purposes raise many privacy issues. We refer of user traffic for measurement purposes raise many privacy issues.
the reader to the privacy considerations described in the Large Scale We refer the reader to the privacy considerations described in the
Measurement of Broadband Performance (LMAP) Framework [RFC7594], Large-scale Measurement of Broadband Performance (LMAP) Framework
which covers active and passive techniques. [RFC7594], which covers active and passive techniques.
There are some new considerations for Capacity measurement as There are some new considerations for Capacity measurement as
described in this memo. described in this memo.
1. Cooperating Source and Destination hosts and agreements to test 1. Cooperating Source and Destination hosts and agreements to test
the path between the hosts are REQUIRED. Hosts perform in either the path between the hosts are REQUIRED. Hosts perform in either
the Src or Dst roles. the Src role or the Dst role.
2. It is REQUIRED to have a user client-initiated setup handshake 2. It is REQUIRED to have a user client-initiated setup handshake
between cooperating hosts that allows firewalls to control between cooperating hosts that allows firewalls to control
inbound unsolicited UDP traffic which either goes to a control inbound unsolicited UDP traffic that goes to either a control
port [expected and w/authentication] or to ephemeral ports that port (expected and with authentication) or ephemeral ports that
are only created as needed. Firewalls protecting each host can are only created as needed. Firewalls protecting each host can
both continue to do their job normally. both continue to do their job normally.
3. Client-server authentication and integrity protection for 3. Client-server authentication and integrity protection for
feedback messages conveying measurements is RECOMMENDED. feedback messages conveying measurements are RECOMMENDED.
4. Hosts MUST limit the number of simultaneous tests to avoid 4. Hosts MUST limit the number of simultaneous tests to avoid
resource exhaustion and inaccurate results. resource exhaustion and inaccurate results.
5. Senders MUST be rate-limited. This can be accomplished using a 5. Senders MUST be rate limited. This can be accomplished using a
pre-built table defining all the offered load rates that will be pre-built table defining all the offered load rates that will be
supported (Section 8.1). The recommended load-control search supported (Section 8.1). The recommended load control search
algorithm results in "ramp-up" from the lowest rate in the table. algorithm results in "ramp-up" from the lowest rate in the table.
6. Service subscribers with limited data volumes who conduct 6. Service subscribers with limited data volumes who conduct
extensive capacity testing might experience the effects of extensive capacity testing might experience the effects of
Service Provider controls on their service. Testing with the Service Provider controls on their service. Testing with the
Service Provider's measurement hosts SHOULD be limited in Service Provider's measurement hosts SHOULD be limited in
frequency and/or overall volume of test traffic (for example, the frequency and/or overall volume of test traffic (for example, the
range of duration values, I, SHOULD be limited). range of duration values, I, SHOULD be limited).
The exact specification of these features is left for the future The exact specification of these features is left for future protocol
protocol development. development.
11. IANA Considerations 11. IANA Considerations
This memo makes no requests of IANA. This document has no IANA actions.
12. Acknowledgments
Thanks to Joachim Fabini, Matt Mathis, J. Ignacio Alvarez-Hamelin,
Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray
Kucherawy, and Benjamin Kaduk for their extensive comments on the
memo and related topics. In a second round of reviews, we
acknowledge Magnus Westerlund, Lars Eggert, and Zaheduzzaman Sarker.
13. Appendix A - Load Rate Adjustment Pseudo Code
The following is a pseudo-code implementation of the algorithm
described in Section 8.1.
Rx = 0 # The current sending rate (equivalent to a row of the table)
seqErr = 0 # Measured count of any of Loss or Reordering impairments
delay = 0 # Measured Range of Round Trip Delay, RTD, ms
lowThresh = 30 # Low threshold on the Range of RTD, ms
upperThresh = 90 # Upper threshold on the Range of RTD, ms
hSpeedTresh = 1 Gbps # Threshold for transition between sending rate step
                     #   sizes (such as 1 Mbps and 100 Mbps)
slowAdjCount = 0 # Measured Number of consecutive status reports
                 #   indicating loss and/or delay variation above upperThresh
slowAdjThresh = 2 # Threshold on slowAdjCount used to infer congestion.
                  #   Use values >1 to avoid misinterpreting transient loss
highSpeedDelta = 10 # The number of rows to move in a single adjustment
                    #   when initially increasing offered load (to ramp-up quickly)
maxLoadRates = 2000 # Maximum table index (rows)
if ( seqErr == 0 && delay < lowThresh ) {
if ( Rx < hSpeedTresh && slowAdjCount < slowAdjThresh ) {
Rx += highSpeedDelta;
slowAdjCount = 0;
} else {
if ( Rx < maxLoadRates - 1 )
Rx++;
}
} else if ( seqErr > 0 || delay > upperThresh ) {
slowAdjCount++;
if ( Rx < hSpeedTresh && slowAdjCount == slowAdjThresh ) {
if ( Rx > highSpeedDelta * 3 )
Rx -= highSpeedDelta * 3;
else
Rx = 0;
} else {
if ( Rx > 0 )
Rx--;
}
}
14. Appendix B - RFC 8085 UDP Guidelines Check
The BCP on UDP usage guidelines [RFC8085] focuses primarily on
congestion control in section 3.1. The Guidelines appear in
mandatory (MUST) and recommendation (SHOULD) categories.
14.1. Assessment of Mandatory Requirements
The mandatory requirements in Section 3 of [RFC8085] include:
Internet paths can have widely varying characteristics, ...
Consequently, applications that may be used on the Internet MUST
NOT make assumptions about specific path characteristics. They
MUST instead use mechanisms that let them operate safely under
very different path conditions. Typically, this requires
conservatively probing the current conditions of the Internet path
they communicate over to establish a transmission behavior that it
can sustain and that is reasonably fair to other traffic sharing
the path.
The purpose of the load rate adjustment algorithm in Section 8.1 is
to probe the network and enable Maximum IP-Layer Capacity
measurements with as few assumptions about the measured path as
possible, and within the range application described in Section 2.
The degree of probing conservatism is in tension with the need to
minimize both the traffic dedicated to testing (especially with
Gigabit rate measurements) and the duration of the test (which is one
contributing factor to the overall algorithm fairness).
The text of Section 3 of [RFC8085] goes on to recommend alternatives
to UDP to meet the mandatory requirements, but none are suitable for
the scope and purpose of the metrics and methods in this memo. In
fact, ad hoc TCP-based methods fail to achieve the measurement
accuracy repeatedly proven in comparison measurements with the
running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60]. Also, the UDP aspect
of these methods is present primarily to support modern Internet
transmission where a transport protocol is required [copycat]; the
metric is based on the IP-Layer and UDP allows simple correlation to
the IP-Layer.
Section 3.1.1 of [RFC8085] discusses protocol timer guidelines:
Latency samples MUST NOT be derived from ambiguous transactions.
The canonical example is in a protocol that retransmits data, but
subsequently cannot determine which copy is being acknowledged.
Both load packets and status feedback messages MUST contain sequence
numbers, which helps with measurements based on those packets, and
there are no retransmissions needed.
When a latency estimate is used to arm a timer that provides loss
detection -- with or without retransmission -- expiry of the timer
MUST be interpreted as an indication of congestion in the network,
causing the sending rate to be adapted to a safe conservative
rate...
The method described in this memo uses timers for sending rate
backoff when status feedback messages are lost (Lost Status Backoff
timeout), and for stopping a test when connectivity is lost for a
longer interval (Feedback message or load packet timeouts).
There is no specific benefit foreseen by using Explicit Congestion
Notification (ECN) in this memo.
Section 3.2 of [RFC8085] discusses message size guidelines:
To determine an appropriate UDP payload size, applications MUST
subtract the size of the IP header (which includes any IPv4
optional headers or IPv6 extension headers) as well as the length
of the UDP header (8 bytes) from the PMTU size.
The method uses a sending rate table with a maximum UDP payload size
that anticipates significant header overhead and avoids
fragmentation.
Section 3.3 of [RFC8085] provides reliability guidelines:
Applications that do require reliable message delivery MUST
implement an appropriate mechanism themselves.
The IP-Layer Capacity Metric and Method do not require reliable
delivery.
Applications that require ordered delivery MUST reestablish
datagram ordering themselves.
The IP-Layer Capacity Metric and Method does not need to reestablish
packet order; it is preferred to measure packet reordering if it
occurs [RFC4737].
14.2. Assessment of Recommendations
The load rate adjustment algorithm's goal is to determine the Maximum
IP-Layer Capacity in the context of an infrequent, diagnostic, short
term measurement. This goal is a global exception to many [RFC8085]
SHOULD-level requirements, of which many are intended for long-lived
flows that must coexist with other traffic in more-or-less fair way.
However, the algorithm (as specified in Section 8.1 and Appendix A
above) reacts to indications of congestion in clearly defined ways.
A specific recommendation is provided as an example. Section 3.1.5
of [RFC8085] on implications of RTT and Loss Measurements on
Congestion Control says:
A congestion control designed for UDP SHOULD respond as quickly as
possible when it experiences congestion, and it SHOULD take into
account both the loss rate and the response time when choosing a
new rate.
The load rate adjustment algorithm responds to loss and RTT
measurements with a clear and concise rate reduction when warranted,
and the response makes use of direct measurements (more exact than
can be inferred from TCP ACKs).
Section 3.1.5 of [RFC8085] goes on to specify:
The implemented congestion control scheme SHOULD result in
bandwidth (capacity) use that is comparable to that of TCP within
an order of magnitude, so that it does not starve other flows
sharing a common bottleneck.
This is a requirement for coexistent streams, and not for diagnostic
and infrequent measurements using short durations. The rate
oscillations during short tests allow other packets to pass, and
don't starve other flows.
Ironically, ad hoc TCP-based measurements of "Internet Speed" are
also designed to work around this SHOULD-level requirement, by
launching many flows (9, for example) to increase the outstanding
data dedicated to testing.
The load rate adjustment algorithm cannot become a TCP-like
congestion control, or it will have the same weaknesses of TCP when
trying to make a Maximum IP-Layer Capacity measurement, and will not
achieve the goal. The results of the referenced testing [LS-SG12-A]
[LS-SG12-B] [Y.Sup60] supported this statement hundreds of times,
with comparisons to multi-connection TCP-based measurements.
A brief review of some other SHOULD-level requirements follows (Yes
or Not applicable = NA) :
+--+---------------------------------------------------------+---------+
|Y?| RFC 8085 Recommendation | Section |
+--+---------------------------------------------------------+---------+
Yes| MUST tolerate a wide range of Internet path conditions | 3 |
NA | SHOULD use a full-featured transport (e.g., TCP) | |
| | |
Yes| SHOULD control rate of transmission | 3.1 |
NA | SHOULD perform congestion control over all traffic | |
| | |
| for bulk transfers, | 3.1.2 |
NA | SHOULD consider implementing TFRC | |
NA | else, SHOULD in other ways use bandwidth similar to TCP | |
| | |
| for non-bulk transfers, | 3.1.3 |
NA | SHOULD measure RTT and transmit max. 1 datagram/RTT | 3.1.1 |
NA | else, SHOULD send at most 1 datagram every 3 seconds | |
NA | SHOULD back-off retransmission timers following loss | |
| | |
Yes| SHOULD provide mechanisms to regulate the bursts of | 3.1.6 |
| transmission | |
| | |
NA | MAY implement ECN; a specific set of application | 3.1.7 |
| mechanisms are REQUIRED if ECN is used. | |
| | |
Yes| for DiffServ, SHOULD NOT rely on implementation of PHBs | 3.1.8 |
| | |
Yes| for QoS-enabled paths, MAY choose not to use CC | 3.1.9 |
| | |
Yes| SHOULD NOT rely solely on QoS for their capacity | 3.1.10 |
| non-CC controlled flows SHOULD implement a transport | |
| circuit breaker | |
| MAY implement a circuit breaker for other applications | |
| | |
| for tunnels carrying IP traffic, | 3.1.11 |
NA | SHOULD NOT perform congestion control | |
NA | MUST correctly process the IP ECN field | |
| | |
| for non-IP tunnels or rate not determined by traffic, | |
NA | SHOULD perform CC or use circuit breaker | 3.1.11 |
NA | SHOULD restrict types of traffic transported by the | |
| tunnel | |
| | |
Yes| SHOULD NOT send datagrams that exceed the PMTU, i.e., | 3.2 |
Yes| SHOULD discover PMTU or send datagrams < minimum PMTU; | |
NA | Specific application mechanisms are REQUIRED if PLPMTUD | |
| is used. | |
| | |
Yes| SHOULD handle datagram loss, duplication, reordering | 3.3 |
NA | SHOULD be robust to delivery delays up to 2 minutes | |
| | |
Yes| SHOULD enable IPv4 UDP checksum | 3.4 |
Yes| SHOULD enable IPv6 UDP checksum; Specific application | 3.4.1 |
| mechanisms are REQUIRED if a zero IPv6 UDP checksum is | |
| used. | |
| | |
NA | SHOULD provide protection from off-path attacks | 5.1 |
| else, MAY use UDP-Lite with suitable checksum coverage | 3.4.2 |
| | |
NA | SHOULD NOT always send middlebox keep-alive messages | 3.5 |
NA | MAY use keep-alives when needed (min. interval 15 sec) | |
| | |
Yes| Applications specified for use in limited use (or | 3.6 |
| controlled environments) SHOULD identify equivalent | |
| mechanisms and describe their use case. | |
| | |
NA | Bulk-multicast apps SHOULD implement congestion control | 4.1.1 |
| | |
NA | Low volume multicast apps SHOULD implement congestion | 4.1.2 |
| control | |
| | |
NA | Multicast apps SHOULD use a safe PMTU | 4.2 |
| | |
Yes| SHOULD avoid using multiple ports | 5.1.2 |
Yes| MUST check received IP source address | |
| | |
NA | SHOULD validate payload in ICMP messages | 5.2 |
| | |
Yes| SHOULD use a randomized source port or equivalent | 6 |
| technique, and, for client/server applications, SHOULD | |
| send responses from source address matching request | |
| 5.1 | |
NA | SHOULD use standard IETF security protocols when needed | 6 |
+---------------------------------------------------------+---------+
12. References
12.1. Normative References
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.
[RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
           "Framework for IP Performance Metrics", RFC 2330,
           DOI 10.17487/RFC2330, May 1998,
           <https://www.rfc-editor.org/info/rfc2330>.
[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
           2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC8468]  Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V.
           Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for
           the IP Performance Metrics (IPPM) Framework", RFC 8468,
           DOI 10.17487/RFC8468, November 2018,
           <https://www.rfc-editor.org/info/rfc8468>.
12.2. Informative References
[copycat]  Edeline, K., Kühlewind, M., Trammell, B., and B. Donnet,
           "copycat: Testing Differential Treatment of New Transport
           Protocols in the Wild", ANRW '17,
           DOI 10.1145/3106328.3106330, July 2017,
           <https://irtf.org/anrw/2017/anrw17-final5.pdf>.
[LS-SG12-A]
           "Liaison statement: LS - Harmonization of IP Capacity and
           Latency Parameters: Revision of Draft Rec. Y.1540 on IP
           packet transfer performance parameters and New Annex A
           with Lab Evaluation Plan", From ITU-T SG 12, March 2019,
           <https://datatracker.ietf.org/liaison/1632/>.
[LS-SG12-B]
           "Liaison statement: LS on harmonization of IP Capacity and
           Latency Parameters: Consent of Draft Rec. Y.1540 on IP
           packet transfer performance parameters and New Annex A
           with Lab & Field Evaluation Plans", From ITU-T SG 12, May
           2019, <https://datatracker.ietf.org/liaison/1645/>.
[RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544,
           DOI 10.17487/RFC2544, March 1999,
           <https://www.rfc-editor.org/info/rfc2544>.
[RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
           Empirical Bulk Transfer Capacity Metrics", RFC 3148,
           DOI 10.17487/RFC3148, July 2001,
           <https://www.rfc-editor.org/info/rfc3148>.
[RFC7799]  Morton, A., "Active and Passive Metrics and Methods (with
           Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799,
           May 2016, <https://www.rfc-editor.org/info/rfc7799>.
[RFC8085]  Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
           Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085,
           March 2017, <https://www.rfc-editor.org/info/rfc8085>.
[RFC8337]  Mathis, M. and A. Morton, "Model-Based Metrics for Bulk
           Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March
           2018, <https://www.rfc-editor.org/info/rfc8337>.
[TR-471]   Morton, A., "Maximum IP-Layer Capacity Metric, Related
           Metrics, and Measurements", Broadband Forum TR-471, July
           2020,
           <https://www.broadband-forum.org/technical/download/TR-471.pdf>.
[udpst]    udpst Project Collaborators, "UDP Speed Test Open
           Broadband project", December 2020,
           <https://github.com/BroadbandForum/obudpst>.
[Y.1540]   ITU-T, "Internet protocol data communication service - IP
           packet transfer and availability performance parameters",
           ITU-T Recommendation Y.1540, December 2019,
           <https://www.itu.int/rec/T-REC-Y.1540-201912-I/en>.
[Y.Sup60]  ITU-T, "Interpreting ITU-T Y.1540 maximum IP-layer
           capacity measurements", ITU-T Recommendation Y.Sup60,
           October 2021, <https://www.itu.int/rec/T-REC-Y.Sup60/en>.
Appendix A. Load Rate Adjustment Pseudocode
This appendix provides a pseudocode implementation of the algorithm
described in Section 8.1.
Rx = 0 # The current sending rate (equivalent to a row
# of the table)
seqErr = 0 # Measured count that includes Loss or Reordering
# or Duplication impairments (all appear
# initially as errors in the packet sequence
# numbering)
seqErrThresh = 10 # Threshold on seqErr count that includes Loss or
# Reordering or Duplication impairments (all
# appear initially as errors in the packet
# sequence numbering)
delay = 0 # Measured Range of Round Trip Delay (RTD), msec
lowThresh = 30 # Low threshold on the Range of RTD, msec
upperThresh = 90 # Upper threshold on the Range of RTD, msec
hSpeedThresh = 1 # Threshold for transition between sending rate
# step sizes (such as 1 Mbps and 100 Mbps), Gbps
slowAdjCount = 0 # Measured Number of consecutive status reports
# indicating loss and/or delay variation above
# upperThresh
slowAdjThresh = 3 # Threshold on slowAdjCount used to infer
# congestion. Use values > 1 to avoid
# misinterpreting transient loss.
highSpeedDelta = 10 # The number of rows to move in a single
# adjustment when initially increasing offered
# load (to ramp up quickly)
maxLoadRates = 2000 # Maximum table index (rows)
if ( seqErr <= seqErrThresh && delay < lowThresh ) {
if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) {
Rx += highSpeedDelta;
slowAdjCount = 0;
} else {
if ( Rx < maxLoadRates - 1 )
Rx++;
}
} else if ( seqErr > seqErrThresh || delay > upperThresh ) {
slowAdjCount++;
if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) {
if ( Rx > highSpeedDelta * 3 )
Rx -= highSpeedDelta * 3;
else
Rx = 0;
} else {
if ( Rx > 0 )
Rx--;
}
}
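For readers who prefer compilable code, the following C sketch wraps
the branch logic above in a function and drives it from a few
simulated status reports.  Everything outside that logic is an
illustrative assumption: rate_at_row() mirrors the illustrative
rate-table sketch earlier in this memo, and the comparison against
hSpeedThresh is read here as "the rate at row Rx is below 1 Gbps".

   /* Sketch only: driving the load rate adjustment from periodic
      status reports; constant names restate the parameters above. */
   #include <stdio.h>

   enum { MAX_LOAD_RATES = 2000, SEQ_ERR_THRESH = 10,
          LOW_THRESH = 30, UPPER_THRESH = 90, H_SPEED_MBPS = 1000,
          SLOW_ADJ_THRESH = 3, HIGH_SPEED_DELTA = 10 };

   static int Rx = 0;            /* current row of the sending rate table     */
   static int slowAdjCount = 0;  /* consecutive reports indicating congestion */

   static unsigned rate_at_row(int row)   /* illustrative table, in Mbps */
   {
       return (row < 1000) ? (unsigned)(row + 1)
                           : 1000u + 100u * (unsigned)(row - 999);
   }

   static void adjust_rate(int seqErr, int delayRange)
   {
       if (seqErr <= SEQ_ERR_THRESH && delayRange < LOW_THRESH) {
           if (rate_at_row(Rx) < H_SPEED_MBPS && slowAdjCount < SLOW_ADJ_THRESH) {
               Rx += HIGH_SPEED_DELTA;       /* fast ramp-up below 1 Gbps     */
               slowAdjCount = 0;
           } else if (Rx < MAX_LOAD_RATES - 1) {
               Rx++;                         /* single-row increase           */
           }
       } else if (seqErr > SEQ_ERR_THRESH || delayRange > UPPER_THRESH) {
           slowAdjCount++;
           if (rate_at_row(Rx) < H_SPEED_MBPS && slowAdjCount == SLOW_ADJ_THRESH) {
               Rx = (Rx > HIGH_SPEED_DELTA * 3) ? Rx - HIGH_SPEED_DELTA * 3 : 0;
           } else if (Rx > 0) {
               Rx--;                         /* congestion persists: back off */
           }
       }
   }

   int main(void)
   {
       /* Two clean reports, then one report with loss and high delay. */
       adjust_rate(0, 10);
       adjust_rate(0, 12);
       adjust_rate(25, 120);
       printf("row %d -> %u Mbps\n", Rx, rate_at_row(Rx));  /* row 19 -> 20 Mbps */
       return 0;
   }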
Appendix B. RFC 8085 UDP Guidelines Check
Section 3.1 of [RFC8085] (BCP 145), which provides UDP usage
guidelines, focuses primarily on congestion control. The guidelines
appear in mandatory (MUST) and recommendation (SHOULD) categories.
B.1. Assessment of Mandatory Requirements
The mandatory requirements in Section 3 of [RFC8085] include the
following:
| Internet paths can have widely varying characteristics, ...
| Consequently, applications that may be used on the Internet MUST
| NOT make assumptions about specific path characteristics. They
| MUST instead use mechanisms that let them operate safely under
| very different path conditions. Typically, this requires
| conservatively probing the current conditions of the Internet path
| they communicate over to establish a transmission behavior that it
| can sustain and that is reasonably fair to other traffic sharing
| the path.
The purpose of the load rate adjustment algorithm described in
Section 8.1 is to probe the network and enable Maximum IP-Layer
Capacity measurements with as few assumptions about the measured path
as possible and within the range of applications described in
Section 2. There is tension between the goals of probing
conservatism and minimization of both the traffic dedicated to
testing (especially with Gigabit rate measurements) and the duration
of the test (which is one contributing factor to the overall
algorithm fairness).
The text of Section 3 of [RFC8085] goes on to recommend alternatives
to UDP to meet the mandatory requirements, but none are suitable for
the scope and purpose of the Metrics and Methods in this memo. In
fact, ad hoc TCP-based methods fail to achieve the measurement
accuracy repeatedly proven in comparison measurements with the
running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60]. Also, the UDP aspect
of these methods is present primarily to support modern Internet
transmission where a transport protocol is required [copycat]; the
metric is based on the IP Layer, and UDP allows simple correlation to
the IP Layer.
Section 3.1.1 of [RFC8085] discusses protocol timer guidelines:
| Latency samples MUST NOT be derived from ambiguous transactions.
| The canonical example is in a protocol that retransmits data, but
| subsequently cannot determine which copy is being acknowledged.
Both load packets and status feedback messages MUST contain sequence
numbers; this helps with measurements based on those packets, and
there are no retransmissions needed.
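As one illustration (not a normative procedure), the seqErr input of
Appendix A can be derived from the load-packet sequence numbers alone.
The following C sketch, with illustrative names, counts any departure
from the expected next number, so loss, duplication, and reordering
all register initially as sequence errors; a receiver would report the
count accumulated over each feedback interval in its status messages.

   /* Sketch only: deriving a sequence-error count from received
      sequence numbers; names are illustrative. */
   #include <stdint.h>
   #include <stdio.h>

   static uint32_t expectedSeq = 0;  /* next sequence number expected        */
   static uint32_t seqErr = 0;       /* feeds the seqErr input of Appendix A */

   static void on_load_packet(uint32_t seq)
   {
       if (seq != expectedSeq)
           seqErr++;                 /* loss, duplication, or reordering     */
       if (seq >= expectedSeq)
           expectedSeq = seq + 1;    /* resynchronize on the newest packet   */
   }

   int main(void)
   {
       on_load_packet(0); on_load_packet(1);
       on_load_packet(3);            /* packet 2 has not arrived: one error  */
       on_load_packet(2);            /* late (reordered) arrival: one error  */
       printf("seqErr = %u\n", (unsigned)seqErr);   /* prints seqErr = 2 */
       return 0;
   }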
| When a latency estimate is used to arm a timer that provides loss
| detection -- with or without retransmission -- expiry of the timer
| MUST be interpreted as an indication of congestion in the network,
| causing the sending rate to be adapted to a safe conservative rate
| ...
The methods described in this memo use timers for sending rate
backoff when status feedback messages are lost (Lost Status Backoff
timeout) and for stopping a test when connectivity is lost for a
longer interval (feedback message or load packet timeouts).
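A minimal sketch of that timer handling follows, in C.  The
status-report interval, the number of silent intervals that triggers a
backoff, and the overall timeout are assumptions chosen for
illustration; this memo does not specify their values.

   /* Sketch only: sender-side timer handling for lost feedback.
      All interval and timeout values are illustrative assumptions. */
   #include <stdbool.h>
   #include <stdio.h>

   #define STATUS_INTERVAL_MS   50   /* assumed spacing of status reports      */
   #define LOST_STATUS_TICKS     5   /* silent intervals before a rate backoff */
   #define TEST_TIMEOUT_MS    3000   /* end the test after this much silence   */

   static int silentTicks = 0;
   static int backoffs = 0;          /* stands in for "reduce Rx" in Appendix A */

   /* Called once per STATUS_INTERVAL_MS; returns false when the test must stop. */
   static bool on_timer_tick(bool statusArrived)
   {
       if (statusArrived) {
           silentTicks = 0;
           return true;
       }
       silentTicks++;
       if (silentTicks * STATUS_INTERVAL_MS >= TEST_TIMEOUT_MS)
           return false;                 /* connectivity lost: stop the test     */
       if (silentTicks % LOST_STATUS_TICKS == 0)
           backoffs++;                   /* treat missing feedback as congestion */
       return true;
   }

   int main(void)
   {
       bool running = true;
       for (int t = 0; t < 70 && running; t++)   /* simulate 3.5 s of silence */
           running = on_timer_tick(false);
       printf("backoffs = %d, running = %d\n", backoffs, running);
       return 0;
   }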
This memo does not foresee any specific benefit of using Explicit
Congestion Notification (ECN).
Section 3.2 of [RFC8085] discusses message size guidelines:
| To determine an appropriate UDP payload size, applications MUST
| subtract the size of the IP header (which includes any IPv4
| optional headers or IPv6 extension headers) as well as the length
| of the UDP header (8 bytes) from the PMTU size.
The method uses a sending rate table with a maximum UDP payload size
that anticipates significant header overhead and avoids
fragmentation.
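As a concrete check of that arithmetic (with illustrative values): for
a 1500-byte path MTU, the maximum UDP payload is 1500 - 20 - 8 = 1472
bytes over IPv4 without options, or 1500 - 40 - 8 = 1452 bytes over
IPv6 without extension headers.  The short C sketch below simply
restates that subtraction.

   /* Sketch only: the RFC 8085 payload-size subtraction for a given PMTU. */
   #include <stdio.h>

   static unsigned max_udp_payload(unsigned pmtu, unsigned ipHdrLen)
   {
       return pmtu - ipHdrLen - 8;            /* 8 = UDP header length */
   }

   int main(void)
   {
       printf("IPv4, PMTU 1500: %u bytes\n", max_udp_payload(1500, 20)); /* 1472 */
       printf("IPv6, PMTU 1500: %u bytes\n", max_udp_payload(1500, 40)); /* 1452 */
       return 0;
   }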
Section 3.3 of [RFC8085] provides reliability guidelines:
| Applications that do require reliable message delivery MUST
| implement an appropriate mechanism themselves.
The IP-Layer Capacity Metrics and Methods do not require reliable
delivery.
| Applications that require ordered delivery MUST reestablish
| datagram ordering themselves.
The IP-Layer Capacity Metrics and Methods do not need to reestablish
packet order; it is preferable to measure packet reordering if it
occurs [RFC4737].
B.2. Assessment of Recommendations
The load rate adjustment algorithm's goal is to determine the Maximum
IP-Layer Capacity in the context of an infrequent, diagnostic, short-
term measurement. This goal is a global exception to many SHOULD-
level requirements listed in [RFC8085], many of which are intended
for long-lived flows that must coexist with other traffic in a more
or less fair way. However, the algorithm (as specified in
Section 8.1 and Appendix A above) reacts to indications of congestion
in clearly defined ways.
A specific recommendation is provided as an example. Section 3.1.5
of [RFC8085] (regarding the implications of RTT and loss measurements
on congestion control) says:
| A congestion control [algorithm] designed for UDP SHOULD respond
| as quickly as possible when it experiences congestion, and it
| SHOULD take into account both the loss rate and the response time
| when choosing a new rate.
The load rate adjustment algorithm responds to loss and RTT
measurements with a clear and concise rate reduction when warranted,
and the response makes use of direct measurements (more exact than
can be inferred from TCP ACKs).
Section 3.1.5 of [RFC8085] goes on to specify the following:
| The implemented congestion control scheme SHOULD result in
| bandwidth (capacity) use that is comparable to that of TCP within
| an order of magnitude, so that it does not starve other flows
| sharing a common bottleneck.
This is a requirement for coexistent streams, and not for diagnostic
and infrequent measurements using short durations. The rate
oscillations during short tests allow other packets to pass and don't
starve other flows.
Ironically, ad hoc TCP-based measurements of "Internet Speed" are
also designed to work around this SHOULD-level requirement, by
launching many flows (9, for example) to increase the outstanding
data dedicated to testing.
The load rate adjustment algorithm cannot become a TCP-like
congestion control, or it will have the same weaknesses as TCP when
trying to make a Maximum IP-Layer Capacity measurement and will not
achieve the goal. The results of the referenced testing [LS-SG12-A]
[LS-SG12-B] [Y.Sup60] supported this statement hundreds of times,
with comparisons to multi-connection TCP-based measurements.
A brief review of requirements from [RFC8085] follows (marked "Yes"
when this memo is compliant, or "NA" (Not Applicable)):
+======+============================================+=========+
| Yes? | Recommendation in RFC 8085 | Section |
+======+============================================+=========+
| Yes | MUST tolerate a wide range of Internet | 3 |
| | path conditions | |
+------+--------------------------------------------+---------+
| NA | SHOULD use a full-featured transport | |
| | (e.g., TCP) | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD control rate of transmission | 3.1 |
+------+--------------------------------------------+---------+
| NA | SHOULD perform congestion control over all | |
| | traffic | |
+------+--------------------------------------------+---------+
+======+============================================+=========+
| | For bulk transfers, | 3.1.2 |
+======+============================================+=========+
| NA | SHOULD consider implementing TFRC | |
+------+--------------------------------------------+---------+
| NA | else, SHOULD in other ways use bandwidth | |
| | similar to TCP | |
+------+--------------------------------------------+---------+
+======+============================================+=========+
| | For non-bulk transfers, | 3.1.3 |
+======+============================================+=========+
| NA | SHOULD measure RTT and transmit max. 1 | 3.1.1 |
| | datagram/RTT | |
+------+--------------------------------------------+---------+
| NA | else, SHOULD send at most 1 datagram every | |
| | 3 seconds | |
+------+--------------------------------------------+---------+
| NA | SHOULD back-off retransmission timers | |
| | following loss | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD provide mechanisms to regulate the | 3.1.6 |
| | bursts of transmission | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | MAY implement ECN; a specific set of | 3.1.7 |
| | application mechanisms are REQUIRED if ECN | |
| | is used | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | For DiffServ, SHOULD NOT rely on | 3.1.8 |
| | implementation of PHBs | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | For QoS-enabled paths, MAY choose not to | 3.1.9 |
| | use CC | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD NOT rely solely on QoS for their | 3.1.10 |
| | capacity | |
+------+--------------------------------------------+---------+
| NA | non-CC controlled flows SHOULD implement a | |
| | transport circuit breaker | |
+------+--------------------------------------------+---------+
| Yes | MAY implement a circuit breaker for other | |
| | applications | |
+------+--------------------------------------------+---------+
+======+============================================+=========+
| | For tunnels carrying IP traffic, | 3.1.11 |
+======+============================================+=========+
| NA | SHOULD NOT perform congestion control | |
+------+--------------------------------------------+---------+
| NA | MUST correctly process the IP ECN field | |
+------+--------------------------------------------+---------+
+======+============================================+=========+
| | For non-IP tunnels or rate not determined | 3.1.11 |
| | by traffic, | |
+======+============================================+=========+
| NA | SHOULD perform CC or use circuit breaker | |
+------+--------------------------------------------+---------+
| NA | SHOULD restrict types of traffic | |
| | transported by the tunnel | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD NOT send datagrams that exceed the | 3.2 |
| | PMTU, i.e., | |
+------+--------------------------------------------+---------+
| Yes | SHOULD discover PMTU or send datagrams < | |
| | minimum PMTU | |
+------+--------------------------------------------+---------+
| NA | Specific application mechanisms are | |
| | REQUIRED if PLPMTUD is used | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD handle datagram loss, duplication, | 3.3 |
| | reordering | |
+------+--------------------------------------------+---------+
| NA | SHOULD be robust to delivery delays up to | |
| | 2 minutes | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD enable IPv4 UDP checksum | 3.4 |
+------+--------------------------------------------+---------+
| Yes | SHOULD enable IPv6 UDP checksum; specific | 3.4.1 |
| | application mechanisms are REQUIRED if a | |
| | zero IPv6 UDP checksum is used | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | SHOULD provide protection from off-path | 5.1 |
| | attacks | |
+------+--------------------------------------------+---------+
| | else, MAY use UDP-Lite with suitable | 3.4.2 |
| | checksum coverage | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | SHOULD NOT always send middlebox keep- | 3.5 |
| | alive messages | |
+------+--------------------------------------------+---------+
| NA | MAY use keep-alives when needed (min. | |
| | interval 15 sec) | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | Applications specified for use in limited | 3.6 |
| | use (or controlled environments) SHOULD | |
| | identify equivalent mechanisms and | |
| | describe their use case | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | Bulk-multicast apps SHOULD implement | 4.1.1 |
| | congestion control | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | Low volume multicast apps SHOULD implement | 4.1.2 |
| | congestion control | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | Multicast apps SHOULD use a safe PMTU | 4.2 |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD avoid using multiple ports | 5.1.2 |
+------+--------------------------------------------+---------+
| Yes | MUST check received IP source address | |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| NA | SHOULD validate payload in ICMP messages | 5.2 |
+------+--------------------------------------------+---------+
+------+--------------------------------------------+---------+
| Yes | SHOULD use a randomized Source port or | 6 |
| | equivalent technique, and, for client/ | |
| | server applications, SHOULD send responses | |
| | from source address matching request | |
+------+--------------------------------------------+---------+
| NA | SHOULD use standard IETF security | 6 |
| | protocols when needed | |
+------+--------------------------------------------+---------+
Table 4: Summary of Key Guidelines from RFC 8085
Acknowledgments
Thanks to Joachim Fabini, Matt Mathis, J. Ignacio Alvarez-Hamelin,
Wolfgang Balzer, Frank Brockners, Greg Mirsky, Martin Duke, Murray
Kucherawy, and Benjamin Kaduk for their extensive comments on this
memo and related topics. In a second round of reviews, we
acknowledge Magnus Westerlund, Lars Eggert, and Zaheduzzaman Sarker.
Authors' Addresses

Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
United States of America
Phone: +1 732 420 1571
Email: acm@research.att.com

Rüdiger Geib
Deutsche Telekom
Heinrich Hertz Str. 3-7
64295 Darmstadt
Germany
Phone: +49 6151 5812747
Email: Ruediger.Geib@telekom.de

Len Ciavattone
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
United States of America
Email: lencia@att.com