Internet Engineering Task Force (IETF)                         A. Morton
Request for Comments: 9097                                     AT&T Labs
Category: Standards Track                                        R. Geib
ISSN: 2070-1721                                         Deutsche Telekom
                                                           L. Ciavattone
                                                               AT&T Labs
                                                           November 2021

               Metrics and Methods for One-Way IP Capacity

Abstract

   This memo revisits the problem of Network Capacity Metrics first
   examined in RFC 5136.  This memo specifies a more practical Maximum
   IP-Layer Capacity Metric definition catering to measurement purposes
   and outlines the corresponding Methods of Measurement.

Status of This Memo

   This is an Internet Standards Track document.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Further information on
   Internet Standards is available in Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   https://www.rfc-editor.org/info/rfc9097.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Revised BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as described
   in the Revised BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Scope, Goals, and Applicability
   3.  Motivation
   4.  General Parameters and Definitions
   5.  IP-Layer Capacity Singleton Metric Definitions
     5.1.  Formal Name
     5.2.  Parameters
     5.3.  Metric Definitions
     5.4.  Related Round-Trip Delay and One-Way Loss Definitions
     5.5.  Discussion
     5.6.  Reporting the Metric
   6.  Maximum IP-Layer Capacity Metric Definitions (Statistics)
     6.1.  Formal Name
     6.2.  Parameters
     6.3.  Metric Definitions
     6.4.  Related Round-Trip Delay and One-Way Loss Definitions
     6.5.  Discussion
     6.6.  Reporting the Metric
   7.  IP-Layer Sender Bit Rate Singleton Metric Definitions
     7.1.  Formal Name
     7.2.  Parameters
     7.3.  Metric Definition
     7.4.  Discussion
     7.5.  Reporting the Metric
   8.  Method of Measurement
     8.1.  Load Rate Adjustment Algorithm
     8.2.  Measurement Qualification or Verification
     8.3.  Measurement Considerations
     8.4.  Running Code
   9.  Reporting Formats
     9.1.  Configuration and Reporting Data Formats
   10. Security Considerations
   11. IANA Considerations
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Appendix A.  Load Rate Adjustment Pseudocode
   Appendix B.  RFC 8085 UDP Guidelines Check
     B.1.  Assessment of Mandatory Requirements
     B.2.  Assessment of Recommendations
   Acknowledgments
   Authors' Addresses

1.  Introduction

   The IETF's efforts to define Network Capacity and Bulk Transport
   Capacity (BTC) have been chartered and progressed for over twenty
   years.  Over that time, the performance community has seen the
   development of Informative definitions in [RFC3148] for the Framework
   for Bulk Transport Capacity, [RFC5136] for Network Capacity and
   Maximum IP-Layer Capacity, and the Experimental metric definitions
   and methods in "Model-Based Metrics for Bulk Transport Capacity"
   [RFC8337].

   This memo revisits the problem of Network Capacity Metrics examined
   first in [RFC3148] and later in [RFC5136].  Maximum IP-Layer Capacity
   and Bulk Transfer Capacity [RFC3148] (goodput) are different metrics.
   Maximum IP-Layer Capacity is like the theoretical goal for goodput.
   There are many metrics in [RFC5136], such as Available Capacity.
   Measurements depend on the network path under test and the use case.
   Here, the main use case is to assess the Maximum Capacity of one or
   more networks where the subscriber receives specific performance
   assurances, sometimes referred to as the Internet access, or where a
   limit of the technology used on a path is being tested.  For example,
   when a user subscribes to a 1 Gbps service, then the user, the
   Service Provider, and possibly other parties want to assure that the
   specified performance level is delivered.  When a test confirms the
   subscribed performance level, then a tester can seek the location of
   a bottleneck elsewhere.

   This memo recognizes the importance of a definition of a Maximum IP-
   Layer Capacity Metric at a time when Internet subscription speeds
   have increased dramatically -- a definition that is both practical
   and effective for the performance community's needs, including
   Internet users.  The metric definitions are intended to use Active
   Methods of Measurement [RFC7799], and a Method of Measurement is
   included for each metric.

   The most direct Active Measurement of IP-Layer Capacity would use IP
   packets, but in practice a transport header is needed to traverse
   address and port translators.  UDP offers the most direct assessment
   possibility, and in the measurement study to investigate whether UDP
   is viable as a general Internet transport protocol [copycat], the
   authors found that a high percentage of paths tested support UDP
   transport.  A number of liaison statements have been exchanged on
   this topic [LS-SG12-A] [LS-SG12-B], discussing the laboratory and
   field tests that support the UDP-based approach to IP-Layer Capacity
   measurement.

   This memo also recognizes the many updates to the IP Performance
   Metrics (IPPM) Framework [RFC2330] that have been published since
   1998.  In particular, it makes use of [RFC7312] for the Advanced
   Stream and Sampling Framework and [RFC8468] for its IPv4, IPv6, and
   IPv4-IPv6 Coexistence Updates.

   Appendix A describes the load rate adjustment algorithm, using
   pseudocode.  Appendix B discusses the algorithm's compliance with
   [RFC8085].

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2.  Scope, Goals, and Applicability

   The scope of this memo is to define Active Measurement metrics and
   corresponding methods to unambiguously determine Maximum IP-Layer
   Capacity and useful secondary metrics.

   Another goal is to harmonize the specified Metric and Method across
   the industry, and this memo is the vehicle that captures IETF
   consensus, possibly resulting in changes to the specifications of
   other Standards Development Organizations (SDOs) (through each SDO's
   normal contribution process or through liaison exchange).

   Secondary goals are to add considerations for test procedures and to
   provide interpretation of the Maximum IP-Layer Capacity results (to
   identify cases where more testing is warranted, possibly with
   alternate configurations).  Fostering the development of protocol
   support for this Metric and Method of Measurement is also a goal of
   this memo (all active testing protocols currently defined by the IPPM
   WG are UDP based, meeting a key requirement of these methods).  The
   supporting protocol development to measure this metric according to
   the specified method is a key future contribution to Internet
   measurement.

   The load rate adjustment algorithm's scope is limited to helping
   determine the Maximum IP-Layer Capacity in the context of an
   infrequent, diagnostic, short-term measurement.  It is RECOMMENDED to
   discontinue non-measurement traffic that shares a subscriber's
   dedicated resources while testing: measurements may not be accurate,
   and throughput of competing elastic traffic may be greatly reduced.

   The primary application of the Metrics and Methods of Measurement
   described here is the same as what is described in Section 2 of
   [RFC7497], where:

   |  The access portion of the network is the focus of this problem
   |  statement.  The user typically subscribes to a service with
   |  bidirectional [Internet] access partly described by rates in bits
   |  per second.

   In addition, the use of the load rate adjustment algorithm described
   in Section 8.1 has the following additional applicability
   limitations:

   *  It MUST only be used in the application of diagnostic and
      operations measurements as described in this memo.

   *  It MUST only be used in circumstances consistent with Section 10
      ("Security Considerations").

   *  If a network operator is certain of the IP-Layer Capacity to be
      validated, then testing MAY start with a fixed-rate test at the
      IP-Layer Capacity and avoid activating the load adjustment
      algorithm.  However, the stimulus for a diagnostic test (such as a
      subscriber request) strongly implies that there is no certainty,
      and the load adjustment algorithm is RECOMMENDED.

   Further, the Metrics and Methods of Measurement are intended for use
   where exact path information is unknown within a range of possible
   values:

   *  The subscriber's exact Maximum IP-Layer Capacity is unknown (which
      is sometimes the case; service rates can be increased due to
      upgrades without a subscriber's request or increased to provide a
      surplus to compensate for possible underestimates of TCP-based
      testing).

   *  The size of the bottleneck buffer is unknown.

   Finally, the measurement system's load rate adjustment algorithm
   SHALL NOT be provided with the exact capacity value to be validated
   a priori.  This restriction fosters a fair result and removes an
   opportunity for nefarious operation enabled by knowledge of the
   correct answer.

3.  Motivation

   As with any problem that has been worked on for many years in various
   SDOs without any special attempts at coordination, various solutions
   for Metrics and Methods have emerged.

   There are five factors that have changed (or began to change) in the
   2013-2019 time frame, and the presence of any one of them on the path
   requires features in the measurement design to account for the
   changes:

   1.  Internet access is no longer the bottleneck for many users (but
       subscribers expect network providers to honor contracted
       performance).

   2.  Both transfer rate and latency are important to a user's
       satisfaction.

   3.  UDP's role in transport is growing in areas where TCP once
       dominated.

   4.  Content and applications are moving physically closer to users.

   5.  There is less emphasis on ISP gateway measurements, possibly due
       to less traffic crossing ISP gateways in the future.

4.  General Parameters and Definitions

   This section lists the REQUIRED input factors to specify a Sender or
   Receiver metric.

   Src:  One of the addresses of a host (such as a globally routable IP
      address).

   Dst:  One of the addresses of a host (such as a globally routable IP
      address).

   MaxHops:  The limit on the number of Hops a specific packet may visit
      as it traverses from the host at Src to the host at Dst
      (implemented in the TTL or Hop Limit).

   T0:  The time at the start of a measurement interval, when packets
      are first transmitted from the Source.

   I:  The nominal duration of a measurement interval at the Destination
      (default 10 sec).

   dt:  The nominal duration of m equal sub-intervals in I at the
      Destination (default 1 sec).

   dtn:  The beginning boundary of a specific sub-interval, n, one of m
      sub-intervals in I.

   FT:  The feedback time interval between status feedback messages
      communicating measurement results, sent from the Receiver to
      control the Sender.  The results are evaluated throughout the test
      to determine how to adjust the current offered load rate at the
      Sender (default 50 msec).

   Tmax:  A maximum waiting time for test packets to arrive at the
      Destination, set sufficiently long to disambiguate packets with
      long delays from packets that are discarded (lost), such that the
      distribution of one-way delay is not truncated.

   F:  The number of different flows synthesized by the method (default
      one flow).

   Flow:  The stream of packets with the same n-tuple of designated
      header fields that (when held constant) result in identical
      treatment in a multipath decision (such as the decision taken in
      load balancing).  Note: The IPv6 flow label SHOULD be included in
      the flow definition when routers have complied with the guidelines
      provided in [RFC6438].

   Type-P:  The complete description of the test packets for which this
      assessment applies (including the flow-defining fields).  Note
      that the UDP transport layer is one requirement for test packets
      specified below.  Type-P is a concept parallel to "population of
      interest" as defined in Clause 6.1.1 of [Y.1540].

   Payload Content:  An aspect of the Type-P Parameter that can help to
      improve measurement determinism.  Specifying packet payload
      content helps to ensure IPPM Framework-conforming Metrics and
      Methods.  If there is payload compression in the path and tests
      intend to characterize a possible advantage due to compression,
      then payload content SHOULD be supplied by a pseudorandom sequence
      generator, by using part of a compressed file, or by other means.
      See Section 3.1.2 of [RFC7312].

   PM:  A list of fundamental metrics, such as loss, delay, and
      reordering, and corresponding target performance threshold(s).
      At least one fundamental metric and target performance threshold
      MUST be supplied (such as one-way IP packet loss [RFC7680] equal
      to zero).

   A non-Parameter that is required for several metrics is defined
   below:

   T:  The host time of the *first* test packet's *arrival* as measured
      at the Destination Measurement Point, or MP(Dst).  There may be
      other packets sent between Source and Destination hosts that are
      excluded, so this is the time of arrival of the first packet used
      for measurement of the metric.

   Note that timestamp format and resolution, sequence numbers, etc.
   will be established by the chosen test protocol standard or
   implementation.
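   The Parameters above can be gathered into one configuration object.
   The sketch below is a non-normative illustration in Python: the class
   and field names are hypothetical, the addresses are documentation
   examples, and the Tmax value shown is an arbitrary placeholder (this
   memo defines no default for Tmax).  The defaults for I, dt, FT, and F
   follow the definitions above.

```python
# Hypothetical container for the Section 4 Parameters; not part of any
# test protocol.  Defaults for I, dt, FT, and F follow the text above.
from dataclasses import dataclass

@dataclass
class MeasurementParams:
    src: str              # Src: one of the addresses of the sending host
    dst: str              # Dst: one of the addresses of the receiving host
    max_hops: int = 64    # MaxHops: TTL or Hop Limit (assumed value)
    t0: float = 0.0       # T0: start of the measurement interval
    i: float = 10.0       # I: nominal duration at the Destination (10 sec)
    dt: float = 1.0       # dt: duration of each of m sub-intervals (1 sec)
    ft: float = 0.050     # FT: feedback message interval (50 msec)
    tmax: float = 3.0     # Tmax: loss waiting time; no default in this memo
    f: int = 1            # F: number of synthesized flows (default 1)

    @property
    def m(self) -> int:
        # The m equal sub-intervals satisfy T+I = T + m*dt.
        return round(self.i / self.dt)

params = MeasurementParams(src="192.0.2.1", dst="198.51.100.2")
print(params.m)  # 10 sub-intervals with the defaults
```

   With the defaults, I = 10 sec and dt = 1 sec yield m = 10 sub-
   intervals, matching the Sample structure used in Sections 5 and 6.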

5.  IP-Layer Capacity Singleton Metric Definitions

   This section sets requirements for the Singleton metric that supports
   the Maximum IP-Layer Capacity Metric definitions in Section 6.

5.1.  Formal Name

   "Type-P-One-way-IP-Capacity" is the formal name; it is informally
   called "IP-Layer Capacity".

   Note that Type-P depends on the chosen method.

5.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters are needed.

5.3.  Metric Definitions

   This section defines the REQUIRED aspects of the measurable IP-Layer
   Capacity Metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the IP-Layer Capacity, C(T,dt,PM), to be the number of IP-
   Layer bits (including header and data fields) in packets that can be
   transmitted from the Src host and correctly received by the Dst host
   during one contiguous sub-interval, dt in length.  The IP-Layer
   Capacity depends on the Src and Dst hosts, the host addresses, and
   the path between the hosts.

   The number of these IP-Layer bits is designated n0[dtn,dtn+1] for a
   specific dt.

   When the packet size is known and of fixed size, the packet count
   during a single sub-interval dt multiplied by the total bits in IP
   header and data fields is equal to n0[dtn,dtn+1].

   Anticipating a Sample of Singletons, the number of sub-intervals with
   duration dt MUST be set to a natural number m, so that T+I = T + m*dt
   with dtn+1 - dtn = dt for 1 <= n <= m.

   Parameter PM represents other performance metrics (see Section 5.4
   below); their measurement results SHALL be collected during
   measurement of IP-Layer Capacity and associated with the
   corresponding dtn for further evaluation and reporting.  Users SHALL
   specify the Parameter Tmax as required by each metric's reference
   definition.

   Mathematically, this definition is represented as (for each n):

                                   ( n0[dtn,dtn+1] )
                   C(T,dt,PM) = -------------------------
                                          dt

                  Figure 1: Equation for IP-Layer Capacity

   and:

   *  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets [RFC8468] from the
      Src host and correctly received by the Dst host during one
      contiguous sub-interval, dt in length, during the interval
      [T,T+I].

   *  C(T,dt,PM), the IP-Layer Capacity, corresponds to the value of n0
      measured in any sub-interval beginning at dtn, divided by the
      length of the sub-interval, dt.

   *  PM represents other performance metrics (see Section 5.4 below);
      their measurement results SHALL be collected during measurement of
      IP-Layer Capacity and associated with the corresponding dtn for
      further evaluation and reporting.

   *  All sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   *  The bit rate of the physical interface of the measurement devices
      MUST be higher than the smallest of the links on the path whose
      C(T,I,PM) is to be measured (the bottleneck link).

   Measurements according to this definition SHALL use the UDP transport
   layer.  Standard-formed packets are specified in Section 5 of
   [RFC8468].  The measurement SHOULD use a randomized Source port or
   equivalent technique, and SHOULD send responses from the Source
   address matching the test packet Destination address.

   Some effects of compression on measurement are discussed in Section 6
   of [RFC8468].
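   To make the Figure 1 computation concrete, the following non-
   normative sketch (in Python, with invented packet counts) produces
   one Capacity Singleton per sub-interval.  It uses the fixed-size-
   packet simplification noted above: n0 is the packet count times the
   total IP-Layer bits per packet.

```python
# Sketch of Figure 1: C(T,dt,PM) = n0[dtn,dtn+1] / dt for each
# sub-interval.  Inputs are hypothetical per-sub-interval counts of
# correctly received IP-Layer bits (header + payload) at MP(Dst).
def capacity_singletons(n0_bits_per_subinterval, dt):
    """Return one IP-Layer Capacity Singleton per sub-interval, bits/sec."""
    return [n0 / dt for n0 in n0_bits_per_subinterval]

# Fixed-size packets: n0 = packet count * total IP-Layer bits per packet.
packets_per_subinterval = [8000, 8200, 8100]  # received in each dt
bits_per_packet = 1000 * 8                    # 1000-byte packets (assumed)
n0 = [p * bits_per_packet for p in packets_per_subinterval]

for c in capacity_singletons(n0, dt=1.0):
    print(c / 1e6, "Mbps")  # report in Mbps (1 Mbps = 1,000,000 bits/sec)
```

   The PM measurements collected per dtn (Section 5.4) would be carried
   alongside each Singleton for later evaluation; they are omitted here.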

5.4.  Related Round-Trip Delay and One-Way Loss Definitions

   RTD[dtn,dtn+1] is defined as a Sample of the Round-Trip Delay
   [RFC2681] between the Src host and the Dst host during the interval
   [T,T+I] (that contains equal non-overlapping intervals of dt).  The
   "reasonable period of time" mentioned in [RFC2681] is the Parameter
   Tmax in this memo.  The statistics used to summarize RTD[dtn,dtn+1]
   MAY include the minimum, maximum, median, and mean, and the range =
   (maximum - minimum).  Some of these statistics are needed for load
   adjustment purposes (Section 8.1), measurement qualification
   (Section 8.2), and reporting (Section 9).

   OWL[dtn,dtn+1] is defined as a Sample of the One-Way Loss [RFC7680]
   between the Src host and the Dst host during the interval [T,T+I]
   (that contains equal non-overlapping intervals of dt).  The
   statistics used to summarize OWL[dtn,dtn+1] MAY include the count of
   lost packets and the ratio of lost packets.

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.
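   As a worked illustration of these summary statistics, the sketch
   below (non-normative Python, with invented delay and loss values)
   computes the RTD summaries named above and the two OWL summaries.

```python
# Summary statistics for one sub-interval's RTD Sample (minimum, maximum,
# median, mean, and range) and for its One-Way Loss (count and ratio).
from statistics import median, mean

def rtd_summary(rtd_samples):
    """rtd_samples: round-trip delays (seconds) observed in one dt."""
    return {
        "min": min(rtd_samples),
        "max": max(rtd_samples),
        "median": median(rtd_samples),
        "mean": mean(rtd_samples),
        "range": max(rtd_samples) - min(rtd_samples),  # used in Section 8.1
    }

def owl_summary(sent, received):
    """Lost-packet count and ratio from per-sub-interval packet counts."""
    lost = sent - received
    return {"lost_count": lost, "lost_ratio": lost / sent}

print(rtd_summary([0.010, 0.012, 0.011, 0.030]))  # invented delays
print(owl_summary(sent=10000, received=9990))
```

   In practice these summaries would be associated with the dtn in which
   the Sample was collected, as required in Section 5.3.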

5.5.  Discussion

   See the corresponding section for Maximum IP-Layer Capacity
   (Section 6.5).

5.6.  Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single-Megabit
   resolution, in units of Megabits per second (Mbps) (which, to avoid
   any confusion, is 1,000,000 bits per second).

   The related One-Way Loss metric and Round-Trip Delay measurements for
   the same Singleton SHALL be reported, also with meaningful resolution
   for the values measured.

   Individual Capacity measurements MAY be reported in a manner
   consistent with the Maximum IP-Layer Capacity; see Section 9.

6.  Maximum IP-Layer Capacity Metric Definitions (Statistics)

   This section sets requirements for the following components to
   support the Maximum IP-Layer Capacity Metric.

6.1.  Formal Name

   "Type-P-One-way-Max-IP-Capacity" is the formal name; it is informally
   called "Maximum IP-Layer Capacity".

   Note that Type-P depends on the chosen method.

6.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   No additional Parameters or definitions are needed.

6.3.  Metric Definitions

   This section defines the REQUIRED aspects of the Maximum IP-Layer
   Capacity Metric (unless otherwise indicated) for measurements between
   specified Source and Destination hosts:

   Define the Maximum IP-Layer Capacity, Maximum_C(T,I,PM), to be the
   maximum number of IP-Layer bits n0[dtn,dtn+1] divided by dt that can
   be transmitted in packets from the Src host and correctly received by
   the Dst host, over all dt-length intervals in [T,T+I] and meeting the
   PM criteria.  An equivalent definition would be the maximum of a
   Sample of size m of Singletons C(T,I,PM) collected during the
   interval [T,T+I] and meeting the PM criteria.

   The number of sub-intervals with duration dt MUST be set to a natural
   number m, so that T+I = T + m*dt with dtn+1 - dtn = dt for 1 <= n <=
   m.

   Parameter PM represents the other performance metrics (see
   Section 6.4 below) and their measurement results for the Maximum IP-
   Layer Capacity.  At least one target performance threshold (PM
   criterion) MUST be defined.  If more than one metric and target
   performance threshold is defined, then the sub-interval with the
   maximum number of bits transmitted MUST meet all the target
   performance thresholds.  Users SHALL specify the Parameter Tmax as
   required by each metric's reference definition.

   Mathematically, this definition can be represented as:

                                      max  ( n0[dtn,dtn+1] )
                                      [T,T+I]
                Maximum_C(T,I,PM) = -------------------------
                                               dt

                where:

                  T                                      T+I
                  _________________________________________
                  |   |   |   |   |   |   |   |   |   |   |
              dtn=1   2   3   4   5   6   7   8   9  10  n+1
                                                     n=m

                  Figure 2: Equation for Maximum Capacity

   and:

   *  n0 is the total number of IP-Layer header and payload bits that
      can be transmitted in standard-formed packets from the Src host
      and correctly received by the Dst host during one contiguous sub-
      interval, dt in length, during the interval [T,T+I].

   *  Maximum_C(T,I,PM), the Maximum IP-Layer Capacity, corresponds to
      the maximum value of n0 measured in any sub-interval beginning at
      dtn, divided by the constant length of all sub-intervals, dt.

   *  PM represents the other performance metrics (see Section 6.4) and
      their measurement results for the Maximum IP-Layer Capacity.  At
      least one target performance threshold (PM criterion) MUST be
      defined.

   *  All sub-intervals MUST be of equal duration.  Choosing dt as non-
      overlapping consecutive time intervals allows for a simple
      implementation.

   *  The bit rate of the physical interface of the measurement systems
      MUST be higher than the smallest of the links on the path whose
      Maximum_C(T,I,PM) is to be measured (the bottleneck link).

   In this definition, the m sub-intervals can be viewed as trials when
   the Src host varies the transmitted packet rate, searching for the
   maximum n0 that meets the PM criteria measured at the Dst host in a
   test of duration I.  When the transmitted packet rate is held
   constant at the Src host, the m sub-intervals may also be viewed as
   trials to evaluate the stability of n0 and metric(s) in the PM list
   over all dt-length intervals in I.

   Measurements according to these definitions SHALL use the UDP
   transport layer.
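   A minimal sketch of the Figure 2 statistic follows (non-normative
   Python).  It assumes a single PM criterion of zero one-way loss,
   which is only one example of a criterion; the rates and loss counts
   are invented.

```python
# Sketch of Figure 2: Maximum_C(T,I,PM) is the maximum Singleton over the
# m sub-intervals, restricted to sub-intervals whose PM measurements meet
# all target performance thresholds.
def maximum_capacity(singletons_bps, losses, loss_threshold=0):
    """singletons_bps[n] and losses[n] describe sub-interval n."""
    qualifying = [c for c, lost in zip(singletons_bps, losses)
                  if lost <= loss_threshold]  # the PM criteria check
    return max(qualifying) if qualifying else None

c = [800e6, 950e6, 990e6, 1000e6]  # Singletons from four sub-intervals
lost = [0, 0, 0, 25]               # the fastest sub-interval saw loss
print(maximum_capacity(c, lost))   # 990000000.0
```

   Note how the sub-interval with the highest raw rate is excluded
   because it fails the PM criterion, reflecting the requirement that
   the reported maximum meet all target performance thresholds.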

6.4.  Related Round-Trip Delay and One-Way Loss Definitions

   RTD[dtn,dtn+1] and OWL[dtn,dtn+1] are defined in Section 5.4.  Here,
   the test intervals are increased to match the capacity Samples,
   RTD[T,I] and OWL[T,I].

   The interval dtn,dtn+1 where Maximum_C(T,I,PM) occurs is the
   reporting sub-interval for RTD[dtn,dtn+1] and OWL[dtn,dtn+1] within
   RTD[T,I] and OWL[T,I].

   Other metrics MAY be measured: one-way reordering, duplication, and
   delay variation.

6.5.  Discussion

   If traffic conditioning (e.g., shaping, policing) applies along a
   path for which Maximum_C(T,I,PM) is to be determined, different
   values for dt SHOULD be picked and measurements be executed during
   multiple intervals [T,T+I].  Each duration dt SHOULD be chosen so
   that it is an integer multiple of increasing values k times the
   serialization delay of a Path MTU (PMTU) at the physical interface
   speed where traffic conditioning is expected.  This should avoid
   taking configured burst tolerance Singletons as a valid
   Maximum_C(T,I,PM) result.

   A Maximum_C(T,I,PM) without any indication of bottleneck congestion,
   be that increased latency, packet loss, or Explicit Congestion
   Notification (ECN) marks during a measurement interval, I, is likely
   to be an underestimate of Maximum_C(T,I,PM).
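   The dt guidance above amounts to simple arithmetic.  In the sketch
   below (non-normative Python), the 1500-byte PMTU and 1 Gbps interface
   speed are assumptions for illustration, not recommendations.

```python
# Candidate dt values: integer multiples k of the serialization delay of
# one Path MTU at the physical interface speed where traffic conditioning
# is expected.
def dt_candidates(pmtu_bytes, interface_bps, k_values):
    serialization_delay = pmtu_bytes * 8 / interface_bps  # seconds per PMTU
    return [k * serialization_delay for k in k_values]

# A 1500-byte PMTU on a 1 Gbps interface serializes in 12 microseconds.
for dt in dt_candidates(1500, 1e9, k_values=[10_000, 50_000, 100_000]):
    print(dt)  # approximately 0.12, 0.6, and 1.2 seconds
```

   Testing with several such dt values across multiple [T,T+I] intervals
   helps expose burst-tolerance artifacts, as described above.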

6.6.  Reporting the Metric

   The IP-Layer Capacity SHOULD be reported with at least single-Megabit
   resolution, in units of Megabits per second (Mbps) (which, to avoid
   any confusion, is 1,000,000 bits per second).

   The related One-Way Loss metric and Round-Trip Delay measurements for
   the same Singleton SHALL be reported, also with meaningful resolution
   for the values measured.

   When there are demonstrated and repeatable Capacity modes in the
   Sample, then the Maximum IP-Layer Capacity SHALL be reported for each
   mode, along with the relative time from the beginning of the stream
   that the mode was observed to be present.  Bimodal Maximum IP-Layer
   Capacities have been observed with some services, sometimes called a
   "turbo mode" intending to deliver short transfers more quickly or
   reduce the initial buffering time for some video streams.  Note that
   modes lasting less than duration dt will not be detected.
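   Per-mode reporting could be sketched as follows (a non-normative
   illustration; the grouping heuristic and the `split` threshold are
   assumptions of this sketch, not part of the Metric):

   ```python
   # Hypothetical illustration: reporting the Maximum IP-Layer Capacity per
   # mode from a series of per-dt Singleton capacities (Mbps).  A new mode
   # starts when a Singleton drops below `split` times the running mode
   # maximum -- an assumption made for this sketch only.

   def report_modes(singletons_mbps, dt_seconds=1, split=0.8):
       """Return (max Mbps, relative start time in seconds) per mode."""
       modes, start, mode_max = [], 0, singletons_mbps[0]
       for n, c in enumerate(singletons_mbps[1:], start=1):
           if c < split * mode_max:
               modes.append((mode_max, start * dt_seconds))
               start, mode_max = n, c
           else:
               mode_max = max(mode_max, c)
       modes.append((mode_max, start * dt_seconds))
       return modes

   # "Turbo mode" example: ~800 Mbps for 3 seconds, then ~500 Mbps:
   print(report_modes([800, 805, 802, 500, 498, 503]))  # [(805, 0), (503, 3)]
   ```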

   Some transmission technologies have multiple methods of operation
   that may be activated when channel conditions degrade or improve, and
   these transmission methods may determine the Maximum IP-Layer
   Capacity.  Examples include line-of-sight microwave modulator
   constellations, or cellular modem technologies where the changes may
   be initiated by a user moving from one coverage area to another.
   Operation in the different transmission methods may be observed over
   time, but the modes of Maximum IP-Layer Capacity will not be
   activated deterministically as with the "turbo mode" described in the
   paragraph above.

7.  IP-Layer Sender Bit Rate Singleton Metric Definitions

   This section sets requirements for the following components to
   support the IP-Layer Sender Bit Rate Metric.  This metric helps to
   check that the Sender actually generated the desired rates during a
   test, and measurement takes place at the interface between the Src
   host and the network (or as close as practical within the Src host).
   It is not a metric for path performance.

7.1.  Formal Name

   "Type-P-IP-Sender-Bit-Rate" is the formal name; it is informally
   called the "IP-Layer Sender Bit Rate".

   Note that Type-P depends on the chosen method.

7.2.  Parameters

   This section lists the REQUIRED input factors to specify the metric,
   beyond those listed in Section 4.

   S:  The duration of the measurement interval at the Source.

   st:  The nominal duration of N sub-intervals in S (default st = 0.05
      seconds).

   stn:  The beginning boundary of a specific sub-interval, n, one of N
      sub-intervals in S.

   S SHALL be longer than I, primarily to account for on-demand
   activation of the path, or any preamble to testing required, and the
   delay of the path.

   st SHOULD be much smaller than the sub-interval dt and on the same
   order as FT; otherwise, the rate measurement will include many rate
   adjustments and more time smoothing, possibly smoothing the interval
   that contains the Maximum IP-Layer Capacity (and therefore losing
   relevance).  The st Parameter does not have relevance when the
   Source is transmitting at a fixed rate throughout S.

7.3.  Metric Definition

   This section defines the REQUIRED aspects of the IP-Layer Sender Bit
   Rate Metric (unless otherwise indicated) for measurements at the
   specified Source on packets addressed for the intended Destination
   host and matching the required Type-P:

   Define the IP-Layer Sender Bit Rate, B(S,st), to be the number of IP-
   Layer bits (including header and data fields) that are transmitted
   from the Source with address pair Src and Dst during one contiguous
   sub-interval, st, during the test interval S (where S SHALL be longer
   than I) and where the fixed-size packet count during that single sub-
   interval st also provides the number of IP-Layer bits in any
   interval, [stn,stn+1].

   Measurements according to this definition SHALL use the UDP transport
   layer.  Any feedback from the Dst host to the Src host received by
   the Src host during an interval [stn,stn+1] SHOULD NOT result in an
   adaptation of the Src host traffic conditioning during this interval
   (rate adjustment occurs on st interval boundaries).
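   The definition above can be sketched numerically (a hypothetical
   illustration; the packet size and count below are example values for
   fixed-size packets, not values from this memo):

   ```python
   # Illustrative sketch: computing the IP-Layer Sender Bit Rate B(S,st)
   # from a fixed-size packet count in one sub-interval st.

   def sender_bit_rate(packet_count, ip_packet_size_bytes, st_seconds):
       """B(S,st): IP-Layer bits sent in one sub-interval st, as Mbps."""
       bits = packet_count * ip_packet_size_bytes * 8
       return bits / st_seconds / 1_000_000  # Mbps = 1,000,000 bits/second

   # Example: 625 packets of 1250 bytes (IP header + UDP header + payload)
   # counted in one st = 0.05 second sub-interval:
   rate = sender_bit_rate(625, 1250, 0.05)
   print(rate)  # 125.0 (Mbps)
   ```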

7.4.  Discussion

   Both the Sender and Receiver (or Source and Destination) bit rates
   SHOULD be assessed as part of an IP-Layer Capacity measurement.
   Otherwise, an unexpected sending rate limitation could produce an
   erroneous Maximum IP-Layer Capacity measurement.

7.5.  Reporting the Metric

   The IP-Layer Sender Bit Rate SHALL be reported with meaningful
   resolution, in units of Megabits per second (which, to avoid any
   confusion, is 1,000,000 bits per second).

   Individual IP-Layer Sender Bit Rate measurements are discussed
   further in Section 9.

8.  Method of Measurement

   It is REQUIRED per the architecture of the method that two
   cooperating hosts operate in the roles of Src (test packet Sender)
   and Dst (Receiver) with a measured path and return path between them.

   The duration of a test, Parameter I, MUST be constrained in a
   production network, since this is an active test method and it will
   likely cause congestion on the path from the Src host to the Dst
   host during a test.

8.1.  Load Rate Adjustment Algorithm

   The algorithm described in this section MUST NOT be used as a general
   Congestion Control Algorithm (CCA).  As stated in Section 2 ("Scope,
   Goals, and Applicability"), the load rate adjustment algorithm's goal
   is to help determine the Maximum IP-Layer Capacity in the context of
   an infrequent, diagnostic, short-term measurement.  There is a trade-
   off between test duration (also the test data volume) and algorithm
   aggressiveness (speed of ramp-up and ramp-down to the Maximum IP-
   Layer Capacity).  The Parameter values chosen below strike a well-
   tested balance among these factors.

   A table SHALL be pre-built (by the test administrator), defining all
   the offered load rates that will be supported (R1 through Rn, in
   ascending order, corresponding to indexed rows in the table).  It is
   RECOMMENDED that rates begin with 0.5 Mbps at index zero, use 1 Mbps
   at index one, and then continue in 1 Mbps increments to 1 Gbps.
   Above 1 Gbps, and up to 10 Gbps, it is RECOMMENDED that 100 Mbps
   increments be used.  Above 10 Gbps, increments of 1 Gbps are
   RECOMMENDED.  A higher initial IP-Layer Sender Bit Rate might be
   configured when the test operator is certain that the Maximum IP-
   Layer Capacity is well above the initial IP-Layer Sender Bit Rate and
   factors such as test duration and total test traffic play an
   important role.  The sending rate table SHOULD bracket the Maximum
   Capacity where it will make measurements, including constrained rates
   less than 500 kbps if applicable.
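   The RECOMMENDED table construction above can be sketched as follows
   (values in Mbps; this is an illustration of the row counts, not a
   normative data structure):

   ```python
   # Illustrative sketch of the RECOMMENDED rate table: 0.5 Mbps at index
   # zero, 1 Mbps steps up to 1 Gbps, then 100 Mbps steps up to 10 Gbps.

   def build_rate_table():
       table = [0.5]                            # index 0
       table += list(range(1, 1001))            # indexes 1..1000: 1 Mbps steps
       table += list(range(1100, 10001, 100))   # 100 Mbps steps to 10 Gbps
       return table

   rates = build_rate_table()
   print(len(rates))                        # 1091 rows (1090 steps + index 0)
   print(rates[1], rates[1000], rates[-1])  # 1 1000 10000
   ```

   The row count (1090 steps, excluding index 0) matches the consequence
   of default parameterization noted after Table 1.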

   Each rate is defined as datagrams of size ss, sent as a burst of
   count cc, each time interval tt (the default for tt is 100 microsec,
   a likely system tick interval).  While it is advantageous to use
   datagrams of as large a size as possible, it may be prudent to use a
   slightly smaller maximum that allows for secondary protocol headers
   and/or tunneling without resulting in IP-Layer fragmentation.
   Selection of a new rate is indicated by a calculation on the current
   row, Rx.  For example:

   "Rx+1":  The Sender uses the next-higher rate in the table.

   "Rx-10":  The Sender uses the rate 10 rows lower in the table.

   At the beginning of a test, the Sender begins sending at rate R1 and
   the Receiver starts a feedback timer of duration FT (while awaiting
   inbound datagrams).  As datagrams are received, they are checked for
   sequence number anomalies (loss, out-of-order, duplication, etc.) and
   the delay range is measured (one-way or round-trip).  This
   information is accumulated until the feedback timer FT expires and a
   status feedback message is sent from the Receiver back to the Sender,
   to communicate this information.  The accumulated statistics are then
   reset by the Receiver for the next feedback interval.  As feedback
   messages are received back at the Sender, they are evaluated to
   determine how to adjust the current offered load rate (Rx).

   If the feedback indicates that no sequence number anomalies were
   detected AND the delay range was below the lower threshold, the
   offered load rate is increased.  If congestion has not been confirmed
   up to this point (see below for the method for declaring congestion),
   the offered load rate is increased by more than one rate setting
   (e.g., Rx+10).  This allows the offered load to quickly reach a near-
   maximum rate.  Conversely, if congestion has been previously
   confirmed, the offered load rate is only increased by one (Rx+1).
   However, if a rate threshold above a high sending rate (such as 1
   Gbps) is exceeded, the offered load rate is only increased by one
   (Rx+1) above the rate threshold in any congestion state.

   If the feedback indicates that sequence number anomalies were
   detected OR the delay range was above the upper threshold, the
   offered load rate is decreased.  The RECOMMENDED threshold values are
   10 for sequence number gaps and 30 msec for lower and 90 msec for
   upper delay thresholds, respectively.  Also, if congestion is now
   confirmed for the first time by the current feedback message being
   processed, then the offered load rate is decreased by more than one
   rate setting (e.g., Rx-30).  This one-time reduction is intended to
   compensate for the fast initial ramp-up.  In all other cases, the
   offered load rate is only decreased by one (Rx-1).

   If the feedback indicates that there were no sequence number
   anomalies AND the delay range was above the lower threshold but below
   the upper threshold, the offered load rate is not changed.  This
   allows time for recent changes in the offered load rate to stabilize
   and for the feedback to represent current conditions more accurately.

   Lastly, the method for inferring congestion is that there were
   sequence number anomalies AND/OR the delay range was above the upper
   threshold for three consecutive feedback intervals.  The algorithm
   described above is also illustrated in Annex B of ITU-T
   Recommendation Y.1540, 2020 version [Y.1540] and is implemented in
   Appendix A ("Load Rate Adjustment Pseudocode") in this memo.
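   The per-feedback decision described above can be sketched as follows.
   This is an illustrative simplification, not the normative pseudocode
   (see the Load Rate Adjustment Pseudocode appendix); the comparison
   operators at the thresholds and the caller-managed one-time Rx-30
   back-off are assumptions of this sketch.

   ```python
   # Illustrative sketch of one rate adjustment decision, with the
   # RECOMMENDED defaults: fast ramp of 10 table steps before congestion
   # is confirmed, Rx+1/Rx-1 steps otherwise.

   SEQ_ERR_THRESHOLD = 10   # sequence number gaps (assumed strict compare)
   LOW_DELAY_MS = 30        # lower delay range threshold
   HIGH_DELAY_MS = 90       # upper delay range threshold

   def adjust_rate(rx, seq_errors, delay_range_ms, congestion_confirmed,
                   high_rate_index):
       """Return the new table index given one feedback message."""
       errored = seq_errors > SEQ_ERR_THRESHOLD or delay_range_ms > HIGH_DELAY_MS
       clean = seq_errors == 0 and delay_range_ms < LOW_DELAY_MS
       if errored:
           # (A first-time congestion confirmation would instead apply a
           #  one-time larger reduction, e.g., rx - 30; the caller tracks
           #  consecutive errored reports to decide that.)
           return rx - 1
       if clean:
           if congestion_confirmed or rx >= high_rate_index:
               return rx + 1      # cautious increase
           return rx + 10         # fast ramp-up before congestion confirmed
       return rx                  # between thresholds: hold the rate

   print(adjust_rate(100, 0, 10, False, 1000))  # 110 (fast ramp)
   ```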

   The load rate adjustment algorithm MUST include timers that stop the
   test when received packet streams cease unexpectedly.  The timeout
   thresholds are provided in Table 1, along with values for all other
   Parameters and variables described in this section.  Operations of
   non-obvious Parameters appear below:

   load packet timeout:
      The load packet timeout SHALL be reset to the configured value
      each time a load packet is received.  If the timeout expires, the
      Receiver SHALL be closed and no further feedback sent.

   feedback message timeout:
      The feedback message timeout SHALL be reset to the configured
      value each time a feedback message is received.  If the timeout
      expires, the Sender SHALL be closed and no further load packets
      sent.

      +=============+==========+===========+=========================+
      | Parameter   | Default  | Tested    | Expected Safe Range     |
      |             |          | Range or  | (not entirely tested,   |
      |             |          | Values    | other values NOT        |
      |             |          |           | RECOMMENDED)            |
      +=============+==========+===========+=========================+
      | FT,         | 50 msec  | 20 msec,  | 20 msec <= FT <= 250    |
      | feedback    |          | 50 msec,  | msec; larger values may |
      | time        |          | 100 msec  | slow the rate increase  |
      | interval    |          |           | and fail to find the    |
      |             |          |           | max                     |
      +-------------+----------+-----------+-------------------------+
      | Feedback    | L*FT,    | L=100     | 0.5 sec <= L*FT <= 30   |
      | message     | L=20 (1  | with      | sec; upper limit for    |
      | timeout     | sec with | FT=50     | very unreliable test    |
      | (stop test) | FT=50    | msec (5   | paths only              |
      |             | msec)    | sec)      |                         |
      +-------------+----------+-----------+-------------------------+
      | Load packet | 1 sec    | 5 sec     | 0.250-30 sec; upper     |
      | timeout     |          |           | limit for very          |
      | (stop test) |          |           | unreliable test paths   |
      |             |          |           | only                    |
      +-------------+----------+-----------+-------------------------+
      | Table index | 0.5 Mbps | 0.5 Mbps  | When testing <= 10 Gbps |
      | 0           |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Table index | 1 Mbps   | 1 Mbps    | When testing <= 10 Gbps |
      | 1           |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Table index | 1 Mbps   | 1 Mbps <= | Same as tested          |
      | (step) size |          | rate <= 1 |                         |
      |             |          | Gbps      |                         |
      +-------------+----------+-----------+-------------------------+
      | Table index | 100 Mbps | 1 Gbps <= | Same as tested          |
      | (step)      |          | rate <=   |                         |
      | size, rate  |          | 10 Gbps   |                         |
      | > 1 Gbps    |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Table index | 1 Gbps   | Untested  | >10 Gbps                |
      | (step)      |          |           |                         |
      | size, rate  |          |           |                         |
      | > 10 Gbps   |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | ss, UDP     | None     | <=1222    | Recommend max at        |
      | payload     |          |           | largest value that      |
      | size, bytes |          |           | avoids fragmentation;   |
      |             |          |           | using a payload size    |
      |             |          |           | that is too small might |
      |             |          |           | result in unexpected    |
      |             |          |           | Sender limitations      |
      +-------------+----------+-----------+-------------------------+
      | cc, burst   | None     | 1 <= cc   | Same as tested.  Vary   |
      | count       |          | <= 100    | cc as needed to create  |
      |             |          |           | the desired maximum     |
      |             |          |           | sending rate.  Sender   |
      |             |          |           | buffer size may limit   |
      |             |          |           | cc in the               |
      |             |          |           | implementation          |
      +-------------+----------+-----------+-------------------------+
      | tt, burst   | 100      | 100       | Available range of      |
      | interval    | microsec | microsec, | "tick" values (HZ       |
      |             |          | 1 msec    | param)                  |
      +-------------+----------+-----------+-------------------------+
      | Low delay   | 30 msec  | 5 msec,   | Same as tested          |
      | range       |          | 30 msec   |                         |
      | threshold   |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | High delay  | 90 msec  | 10 msec,  | Same as tested          |
      | range       |          | 90 msec   |                         |
      | threshold   |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Sequence    | 10       | 0, 1, 5,  | Same as tested          |
      | error       |          | 10, 100   |                         |
      | threshold   |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Consecutive | 3        | 2, 3, 4,  | Use values >1 to avoid  |
      | errored     |          | 5         | misinterpreting         |
      | status      |          |           | transient loss          |
      | report      |          |           |                         |
      | threshold   |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Fast mode   | 10       | 10        | 2 <= steps <= 30        |
      | increase,   |          |           |                         |
      | in table    |          |           |                         |
      | index steps |          |           |                         |
      +-------------+----------+-----------+-------------------------+
      | Fast mode   | 3 * Fast | 3 * Fast  | Same as tested          |
      | decrease,   | mode     | mode      |                         |
      | in table    | increase | increase  |                         |
      | index steps |          |           |                         |
      +-------------+----------+-----------+-------------------------+

          Table 1: Parameters for Load Rate Adjustment Algorithm

   As a consequence of default parameterization, the number of table
   steps in total for rates less than 10 Gbps is 1090 (excluding index
   0).

   A related Sender backoff response to network conditions occurs when
   one or more status feedback messages fail to arrive at the Sender.

   If no status feedback messages arrive at the Sender for the interval
   greater than the Lost Status Backoff timeout:

              UDRT + (2+w)*FT = Lost Status Backoff timeout

      where:

      UDRT = upper delay range threshold (default 90 msec)
      FT   = feedback time interval (default 50 msec)
      w    = number of repeated timeouts (w=0 initially, w++ on each
             timeout, and reset to 0 when a message is received)

   beginning when the last message (of any type) was successfully
   received at the Sender:

   The offered load SHALL then be decreased, following the same process
   as when the feedback indicates the presence of one or more sequence
   number anomalies OR the delay range was above the upper threshold (as
   described above), with the same load rate adjustment algorithm
   variables in their current state.  This means that a three-way OR
   that includes lost status feedback messages OR sequence errors OR
   delay variation can result in rate reduction and congestion
   confirmation.

   The RECOMMENDED initial value for w is 0, taking a Round-Trip Time
   (RTT) of less than FT into account.  A test with an RTT longer than
   FT is a valid reason to increase the initial value of w
   appropriately.  Variable w SHALL be incremented by one whenever the
   Lost Status Backoff timeout is exceeded.  So, with FT = 50 msec and
   UDRT = 90 msec, a status feedback message loss would be declared at
   190 msec following a successful message, again at 50 msec after that
   (240 msec total), and so on.

   Also, if congestion is now confirmed for the first time by a Lost
   Status Backoff timeout, then the offered load rate is decreased by
   more than one rate setting (e.g., Rx-30).  This one-time reduction is
   intended to compensate for the fast initial ramp-up.  In all other
   cases, the offered load rate is only decreased by one (Rx-1).
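   The timeout sequence above can be checked with a short calculation
   (an illustration of the formula with the default values; the
   cumulative-time interpretation follows the 190/240 msec example in
   the text):

   ```python
   # Lost Status Backoff timeout: UDRT + (2+w)*FT, with the defaults
   # UDRT = 90 msec and FT = 50 msec.  Successive declarations fall at
   # 190 msec, 240 msec, ... after the last successfully received message.

   UDRT_MS = 90  # upper delay range threshold (default)
   FT_MS = 50    # feedback time interval (default)

   def declaration_time(w):
       """Cumulative msec after the last received message at which the
       w-th (0-based) status feedback message loss is declared."""
       return UDRT_MS + (2 + w) * FT_MS

   print([declaration_time(w) for w in range(3)])  # [190, 240, 290]
   ```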

   Appendix B discusses compliance with the applicable mandatory
   requirements of [RFC8085], consistent with the goals of the IP-Layer
   Capacity Metric and Method, including the load rate adjustment
   algorithm described in this section.

8.2.  Measurement Qualification or Verification

   It is of course necessary to calibrate the equipment performing the
   IP-Layer Capacity measurement, to ensure that the expected capacity
   can be measured accurately and that equipment choices (processing
   speed, interface bandwidth, etc.) are suitably matched to the
   measurement range.

   When assessing a maximum rate as the metric specifies, artificially
   high (optimistic) values might be measured until some buffer on the
   path is filled.  Other causes include bursts of back-to-back packets
   with idle intervals delivered by a path, while the measurement
   interval (dt) is small and aligned with the bursts.  The artificial
   values might result in an unsustainable Maximum Capacity observed
   when the Method of Measurement is searching for the maximum, and that
   would not do.  This situation is different from the bimodal service
   rates (discussed in "Reporting the Metric", Section 6.6), which are
   characterized by a multi-second duration (much longer than the
   measured RTT) and repeatable behavior.

   There are many ways that the Method of Measurement could handle this
   false-max issue.  The default value for measurement of Singletons (dt
   = 1 second) has proven to be of practical value during tests of this
   method, allows the bimodal service rates to be characterized, and has
   an obvious alignment with the reporting units (Mbps).

   Another approach comes from Section 24 of [RFC2544] and its
   discussion of trial duration, where relatively short trials conducted
   as part of the search are followed by longer trials to make the final
   determination.  In the production network, measurements of Singletons
   and Samples (the terms for trials and tests of Lab Benchmarking) must
   be limited in duration because they may affect service.  But there is
   sufficient value in repeating a Sample with a fixed sending rate
   determined by the previous search for the Maximum IP-Layer Capacity,
   to qualify the result in terms of the other performance metrics
   measured at the same time.

   A Qualification measurement for the search result is a subsequent
   measurement, sending at a fixed 99.x percent of the Maximum IP-Layer
   Capacity for I, or an indefinite period.  The same Maximum Capacity
   Metric is applied, and the Qualification for the result is a Sample
   without supra-threshold packet losses or a growing minimum delay
   trend in subsequent Singletons (or each dt of the measurement
   interval, I).  Samples exhibiting supra-threshold packet losses or
   increasing queue occupation require a repeated search and/or test at
   a reduced fixed Sender rate for Qualification.

   Here, as with any Active Capacity test, the test duration must be
   kept short.  Ten-second tests for each direction of transmission are
   common today.  The default measurement interval specified here is I =
   10 seconds.  The combination of a fast and congestion-aware search
   method and user-network coordination makes a unique contribution to
   production testing.  The Maximum IP Capacity Metric and Method for
   assessing performance is very different from the classic Throughput
   Metric and Methods provided in [RFC2544]: it uses near-real-time load
   adjustments that are sensitive to loss and delay, similar to other
   congestion control algorithms used on the Internet every day, along
   with limited duration.  On the other hand, Throughput measurements
   [RFC2544] can produce sustained overload conditions for extended
   periods of time.  Individual trials in a test governed by a binary
   search can last 60 seconds for each step, and the final confirmation
   trial may be even longer.  This is very different from "normal"
   traffic levels, but overload conditions are not a concern in the
   isolated test environment.  The concerns raised in [RFC6815] were
   that the methods discussed in [RFC2544] would be let loose on
   production networks, and instead the authors challenged the standards
   community to develop Metrics and Methods like those described in this
   memo.

8.3.  Measurement Considerations

   In general, the widespread measurements that this memo encourages
   will encounter widespread behaviors.  The bimodal IP Capacity
   behaviors already discussed in Section 6.6 are good examples.

   In general, it is RECOMMENDED to locate test endpoints as close to
   the intended measured link(s) as practical (for reasons of scale,
   this is not always possible; there is a limit on the number of test
   endpoints coming from many perspectives, for example, management and
   measurement traffic).  The testing operator MUST set a value for the
   MaxHops Parameter, based on the expected path length.  This Parameter
   can keep measurement traffic from straying too far beyond the
   intended path.

   The measured path may be stateful based on many factors, and the
   Parameter "Time of day" when a test starts may not be enough
   information.  Repeatable testing may require knowledge of the time
   from the beginning of a measured flow and how the flow is
   constructed, including how much traffic has already been sent on that
   flow when a state change is observed, because the state change may be
   based on time, bytes sent, or both.  Both load packets and status
   feedback messages MUST contain sequence numbers; this helps with
   measurements based on those packets.

   Many different types of traffic shapers and on-demand communications
   access technologies may be encountered, as anticipated in [RFC7312],
   and play a key role in measurement results.  Methods MUST be prepared
   to provide a short preamble transmission to activate on-demand
   communications access and to discard the preamble from subsequent
   test results.

   The following conditions might be encountered during measurement,
   where packet losses may occur independently of the measurement
   sending rate:

   1.  Congestion of an interconnection or backbone interface may appear
       as packet losses distributed over time in the test stream, due to
       much-higher-rate interfaces in the backbone.

   2.  Packet loss due to the use of Random Early Detection (RED) or
       other active queue management may or may not affect the
       measurement flow if competing background traffic (other flows) is
       simultaneously present.

   3.  There may be only a small delay variation independent of the
       sending rate under these conditions as well.

   4.  Persistent competing traffic on measurement paths that include
       shared transmission media may cause random packet losses in the
       test stream.

   It is possible to mitigate these conditions using the flexibility of
   the load rate adjustment algorithm described in Section 8.1 above
   (tuning specific Parameters).

   If the measurement flow burst duration happens to be on the order of
   or smaller than the burst size of a shaper or a policer in the path,
   then the line rate might be measured rather than the bandwidth limit
   imposed by the shaper or policer.  If this condition is suspected,
   alternate configurations SHOULD be used.

   In general, results depend on the sending stream's characteristics;
   the measurement community has known this for a long time and needs to
   keep it foremost in mind.  Although the default is a single flow
   (F=1) for testing, the use of multiple flows may be advantageous for
   the following reasons:

   1.  The test hosts may be able to create a higher load than with a
       single flow, or parallel test hosts may be used to generate one
       flow each.

   2.  Link aggregation may be present (flow-based load balancing), and
       multiple flows are needed to occupy each member of the aggregate.

   3.  Internet access policies may limit the IP-Layer Capacity
       depending on the Type-P of the packets, possibly reserving
       capacity for various stream types.

   Each flow would be controlled using its own implementation of the
   load rate adjustment (search) algorithm.

   It is obviously counterproductive to run more than one independent
   and concurrent test (regardless of the number of flows in the test
   stream) attempting to measure the *maximum* capacity on a single
   path.  The number of concurrent, independent tests of a path SHALL be
   limited to one.

   Tests of a v4-v6 transition mechanism might well be the intended
   subject of a capacity test.  As long as the IPv4 packets and IPv6
   packets sent/received are both standard-formed, this should be allowed
   (and the change in header size easily accounted for on a per-packet
   basis).
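   The per-packet header-size accounting mentioned above can be sketched
   as follows (a hypothetical illustration; the base header sizes are 20
   bytes for IPv4 without options and 40 bytes for IPv6 without
   extension headers):

   ```python
   # Illustrative only: IP-Layer bits per UDP datagram differ between IPv4
   # and IPv6 for identical payloads, because of the base header sizes.

   UDP_HEADER = 8
   IP_HEADER = {4: 20, 6: 40}  # bytes; base headers, no options/extensions

   def ip_layer_bits(payload_bytes, ip_version):
       """IP-Layer bits for one UDP datagram carrying payload_bytes."""
       return (IP_HEADER[ip_version] + UDP_HEADER + payload_bytes) * 8

   # Example with the 1222-byte payload limit from Table 1:
   print(ip_layer_bits(1222, 4))  # 10000 bits
   print(ip_layer_bits(1222, 6))  # 10160 bits
   ```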

   As testing continues, implementers should expect the methods to
   evolve.  The ITU-T has published a supplement (Supplement 60) to the
   Y-series of ITU-T Recommendations, "Interpreting ITU-T Y.1540 maximum
   IP-layer capacity measurements" [Y.Sup60], which is the result of
   continued testing with the metric.  Those results have improved the
   method described here.

9.  Reporting Formats

   The Singleton IP-Layer Capacity results SHOULD be accompanied by the
   context under which they were measured.

   *  Timestamp (especially the time when the maximum was observed in
      dtn).

   *  Source and Destination (by IP or other meaningful ID).

   *  Other Parameters of the test case (Section 4).

   *  Outer Parameters, such as "test conducted in motion" or other
      factors belonging to the context of the measurement.

   *  Result validity (indicating cases where the process was somehow
      interrupted or the attempt failed).

   *  A field where unusual circumstances could be documented, and
      another one for "ignore / mask out" purposes in further
      processing.
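   As a non-normative sketch, a result record carrying this context
   might be structured as follows; all field names are illustrative and
   are not specified by this memo:

```python
from dataclasses import dataclass

@dataclass
class SingletonResult:
    """Illustrative container for one IP-Layer Capacity Singleton
    and the context under which it was measured."""
    timestamp: str             # when the maximum was observed in dtn
    source: str                # IP address or other meaningful ID
    destination: str
    test_parameters: dict      # Parameters of the test case (Section 4)
    outer_parameters: dict     # e.g., {"test_conducted_in_motion": True}
    valid: bool = True         # False if interrupted or the attempt failed
    unusual_circumstances: str = ""
    ignore: bool = False       # "ignore / mask out" flag for processing

r = SingletonResult("2021-06-09T12:00:00Z", "192.0.2.1",
                    "198.51.100.2", {"F": 1, "I": 10}, {})
print(r.valid, r.ignore)  # True False
```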

   The Maximum IP-Layer Capacity results SHOULD be reported in tabular
   format.  There SHOULD be a column that identifies the test Phase.
   There SHOULD be a column listing the number of flows used in that
   Phase.  The remaining columns SHOULD report the following results
   for the aggregate of all flows, including the resultant Maximum
   IP-Layer Capacity, the Loss Ratio, the RTT minimum, the RTT maximum,
   and other metrics having similar relevance.

   As mentioned in Section 6.6, bimodal (or multi-modal) maxima SHALL
   be reported for each mode separately.

   +========+==========+==================+========+=========+=========+
   | Phase  | Number   | Maximum IP-Layer | Loss   | RTT min | RTT     |
   |        | of Flows | Capacity (Mbps)  | Ratio  | (msec)  | max     |
   |        |          |                  |        |         | (msec)  |
   +========+==========+==================+========+=========+=========+
   | Search | 1        | 967.31           | 0.0002 | 30      | 58      |
   +--------+----------+------------------+--------+---------+---------+
   | Verify | 1        | 966.00           | 0.0000 | 30      | 38      |
   +--------+----------+------------------+--------+---------+---------+

              Table 2: Maximum IP-Layer Capacity Results

   Static and configuration Parameters:

   The sub-interval time, dt, MUST accompany a report of Maximum IP-
   Layer Capacity results, as well as the remaining Parameters from
   Section 4 ("General Parameters and Definitions").

   The PM metrics corresponding to the sub-interval where the Maximum
   Capacity occurred MUST accompany a report of Maximum IP-Layer
   Capacity results, for each test Phase.

   The IP-Layer Sender Bit Rate results SHOULD be reported in tabular
   format.  There SHOULD be a column that identifies the test Phase.
   There SHOULD be a column listing each individual (numbered) flow used
   in that Phase, or the aggregate of flows in that Phase.  A
   corresponding column SHOULD identify the specific sending rate sub-
   interval, stn, for each flow and aggregate.  A final column SHOULD
   report the resultant IP-Layer Sender Bit Rate results for each flow
   used, or the aggregate of all flows.

      +========+==========================+===========+=============+
      | Phase  | Flow Number or Aggregate | stn (sec) | Sender Bit  |
      |        |                          |           | Rate (Mbps) |
      +========+==========================+===========+=============+
      | Search | 1                        | 0.00      | 345         |
      +--------+--------------------------+-----------+-------------+
      | Search | 2                        | 0.00      | 289         |
      +--------+--------------------------+-----------+-------------+
      | Search | Agg                      | 0.00      | 634         |
      +--------+--------------------------+-----------+-------------+
      | Search | 1                        | 0.05      | 499         |
      +--------+--------------------------+-----------+-------------+
      | Search | ...                      | 0.05      | ...         |
      +--------+--------------------------+-----------+-------------+

        Table 3: IP-Layer Sender Bit Rate Results (Example with Two
                         Flows and st = 0.05 (sec))

   Static and configuration Parameters:

   The sub-interval duration, st, MUST accompany a report of IP-Layer
   Sender Bit Rate results.

   Also, the values of the remaining Parameters from Section 4
   ("General Parameters and Definitions") MUST be reported.

9.1.  Configuration and Reporting Data Formats

   As a part of the multi-Standards Development Organization (SDO)
   harmonization of this Metric and Method of Measurement, one of the
   areas where the Broadband Forum (BBF) contributed its expertise was
   in the definition of an information model and data model for
   configuration and reporting.  These models are consistent with the
   Metric Parameters and default values specified in this memo.
   [TR-471] provides the information model that was used to prepare a
   full data model in related BBF work.  The BBF has also carefully
   considered topics within its purview, such as the placement of
   measurement systems within the Internet access architecture.  For
   example, timestamp resolution requirements that influence the choice
   of the test protocol are provided in Table 2 of [TR-471].

10.  Security Considerations

   Active Metrics and Active Measurements have a long history of
   security considerations.  The security considerations that apply to
   any Active Measurement of live paths are relevant here.  See
   [RFC4656] and [RFC5357].

   When considering the privacy of those involved in measurement or
   those whose traffic is measured, the sensitive information available
   to potential observers is greatly reduced when using active
   techniques that are within this scope of work.  Passive observations
   of user traffic for measurement purposes raise many privacy issues.
   We refer the reader to the privacy considerations described in the
   Large-scale Measurement of Broadband Performance (LMAP) Framework
   [RFC7594], which covers active and passive techniques.

   There are some new considerations for Capacity measurement as
   described in this memo.

   1.  Cooperating Source and Destination hosts and agreements to test
       the path between the hosts are REQUIRED.  Hosts perform in either
       the Src role or the Dst role.

   2.  It is REQUIRED to have a user client-initiated setup handshake
       between cooperating hosts that allows firewalls to control
       inbound unsolicited UDP traffic that goes to either a control
       port (expected and with authentication) or to ephemeral ports
       that are only created as needed.  Firewalls protecting each host
       can both continue to do their job normally.

   3.  Client-server authentication and integrity protection for
       feedback messages conveying measurements are RECOMMENDED.

   4.  Hosts MUST limit the number of simultaneous tests to avoid
       resource exhaustion and inaccurate results.

   5.  Senders MUST be rate limited.  This can be accomplished using a
       pre-built table defining all the offered load rates that will be
       supported (Section 8.1).  The recommended load control search
       algorithm results in "ramp-up" from the lowest rate in the table.

   6.  Service subscribers with limited data volumes who conduct
       extensive capacity testing might experience the effects of
       Service Provider controls on their service.  Testing with the
       Service Provider's measurement hosts SHOULD be limited in
       frequency and/or overall volume of test traffic (for example, the
       range of duration values, I, SHOULD be limited).
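   The pre-built rate table mentioned in item 5 might be generated as
   sketched below; the step sizes (1 Mbps below 1 Gbps, 100 Mbps above,
   taken from the pseudocode comments in Appendix A) and the 10 Gbps
   ceiling are assumptions, not requirements of this memo:

```python
# Sketch: a pre-built table of offered load rates (Mbps), lowest
# first, so the search algorithm can only select rates that the
# sender was configured to support.

def build_rate_table(max_rate_mbps: int = 10000) -> list:
    rates = list(range(1, 1001))                        # 1 Mbps steps to 1 Gbps
    rates += list(range(1100, max_rate_mbps + 1, 100))  # 100 Mbps steps above
    return rates

table = build_rate_table()
print(table[0], table[999], table[-1])  # 1 1000 10000
```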

   The exact specification of these features is left for future
   protocol development.

11.  IANA Considerations

   This document has no IANA actions.

12.  References

12.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Joachim Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              DOI 10.17487/RFC2330, May 1998,
              <https://www.rfc-editor.org/info/rfc2330>.

   [RFC2681]  Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip
              Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681,
              September 1999, <https://www.rfc-editor.org/info/rfc2681>.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September 2006,
              <https://www.rfc-editor.org/info/rfc4656>.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              DOI 10.17487/RFC4737, November 2006,
              <https://www.rfc-editor.org/info/rfc4737>.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
              Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)",
              RFC 5357, DOI 10.17487/RFC5357, October 2008,
              <https://www.rfc-editor.org/info/rfc5357>.

   [RFC6438]  Carpenter, B. and S. Amante, "Using the IPv6 Flow Label
              for Equal Cost Multipath Routing and Link Aggregation in
              Tunnels", RFC 6438, DOI 10.17487/RFC6438, November 2011,
              <https://www.rfc-editor.org/info/rfc6438>.

   [RFC7497]  Morton, A., "Rate Measurement Test Protocol Problem
              Statement and Requirements", RFC 7497,
              DOI 10.17487/RFC7497, April 2015,
              <https://www.rfc-editor.org/info/rfc7497>.

   [RFC7680]  Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
              Ed., "A One-Way Loss Metric for IP Performance Metrics
              (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
              2016, <https://www.rfc-editor.org/info/rfc7680>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8468]  Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V.
              Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for
              the IP Performance Metrics (IPPM) Framework", RFC 8468,
              DOI 10.17487/RFC8468, November 2018,
              <https://www.rfc-editor.org/info/rfc8468>.

12.2.  Informative References

   [copycat]  Edeline, K., Kühlewind, M., Trammell, B., and B. Donnet,
              "copycat: Testing Differential Treatment of New Transport
              Protocols in the Wild", ANRW '17,
              DOI 10.1145/3106328.3106330, July 2017,
              <https://irtf.org/anrw/2017/anrw17-final5.pdf>.

   [LS-SG12-A]
              "Liaison statement: LS - Harmonization of IP Capacity and
              Latency Parameters: Revision of Draft Rec. Y.1540 on IP
              packet transfer performance parameters and New Annex A
              with Lab Evaluation Plan", From ITU-T SG 12, March 2019,
              <https://datatracker.ietf.org/liaison/1632/>.

   [LS-SG12-B]
              "Liaison statement: LS on harmonization of IP Capacity and
              Latency Parameters: Consent of Draft Rec. Y.1540 on IP
              packet transfer performance parameters and New Annex A
              with Lab & Field Evaluation Plans", From ITU-T-SG-12, May
              2019, <https://datatracker.ietf.org/liaison/1645/>.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

   [RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
              Empirical Bulk Transfer Capacity Metrics", RFC 3148,
              DOI 10.17487/RFC3148, July 2001,
              <https://www.rfc-editor.org/info/rfc3148>.

   [RFC5136]  Chimento, P. and J. Ishac, "Defining Network Capacity",
              RFC 5136, DOI 10.17487/RFC5136, February 2008,
              <https://www.rfc-editor.org/info/rfc5136>.

   [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815,
              DOI 10.17487/RFC6815, November 2012,
              <https://www.rfc-editor.org/info/rfc6815>.

   [RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
              Framework for IP Performance Metrics (IPPM)", RFC 7312,
              DOI 10.17487/RFC7312, August 2014,
              <https://www.rfc-editor.org/info/rfc7312>.

   [RFC7594]  Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
              Aitken, P., and A. Akhter, "A Framework for Large-Scale
              Measurement of Broadband Performance (LMAP)", RFC 7594,
              DOI 10.17487/RFC7594, September 2015,
              <https://www.rfc-editor.org/info/rfc7594>.

   [RFC7799]  Morton, A., "Active and Passive Metrics and Methods (with
              Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799,
              May 2016, <https://www.rfc-editor.org/info/rfc7799>.

   [RFC8085]  Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
              Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085,
              March 2017, <https://www.rfc-editor.org/info/rfc8085>.

   [RFC8337]  Mathis, M. and A. Morton, "Model-Based Metrics for Bulk
              Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March
              2018, <https://www.rfc-editor.org/info/rfc8337>.

   [TR-471]   Morton, A., "Maximum IP-Layer Capacity Metric, Related
              Metrics, and Measurements", Broadband Forum TR-471, July
              2020, <https://www.broadband-forum.org/technical/download/
              TR-471.pdf>.

   [Y.1540]   ITU-T, "Internet protocol data communication service - IP
              packet transfer and availability performance parameters",
              ITU-T Recommendation Y.1540, December 2019,
              <https://www.itu.int/rec/T-REC-Y.1540-201912-I/en>.

   [Y.Sup60]  ITU-T, "Interpreting ITU-T Y.1540 maximum IP-layer
              capacity measurements", ITU-T Recommendation Y.Sup60,
              October 2021, <https://www.itu.int/rec/T-REC-Y.Sup60/en>.

Appendix A.  Load Rate Adjustment Pseudocode

   This appendix provides a pseudocode implementation of the algorithm
   described in Section 8.1.

   Rx = 0              # The current sending rate (equivalent to a row
                       # of the table)

   seqErr = 0          # Measured count of Loss or Reordering or
                       # Duplication impairments (all appear initially
                       # as errors in the packet sequence numbering)

   seqErrThresh = 10   # Threshold on seqErr count that includes Loss or
                       # Reordering or Duplication impairments (all
                       # appear initially as errors in the packet
                       # sequence numbering)

   delay = 0           # Measured Range of Round-Trip Delay (RTD), msec

   lowThresh = 30      # Low threshold on the Range of RTD, msec

   upperThresh = 90    # Upper threshold on the Range of RTD, msec

   hSpeedThresh = 1    # Threshold for transition between sending rate
                       # step sizes (such as 1 Mbps and 100 Mbps), Gbps

   slowAdjCount = 0    # Number of consecutive status reports
                       # indicating loss and/or delay variation above
                       # upperThresh

   slowAdjThresh = 3   # Threshold on slowAdjCount used to infer
                       # congestion.  Use values > 1 to avoid
                       # misinterpreting transient loss.

   highSpeedDelta = 10 # The number of rows to move in a single
                       # adjustment when initially increasing offered
                       # load (to ramp up quickly)

   maxLoadRates = 2000 # Maximum table index (rows)

   if ( seqErr <= seqErrThresh && delay < lowThresh ) {
           if ( Rx < hSpeedThresh && slowAdjCount < slowAdjThresh ) {
                           Rx += highSpeedDelta;
                           slowAdjCount = 0;
           } else {
                           if ( Rx < maxLoadRates - 1 )
                                           Rx++;
           }
   } else if ( seqErr > seqErrThresh || delay > upperThresh ) {
           slowAdjCount++;
           if ( Rx < hSpeedThresh && slowAdjCount == slowAdjThresh ) {
                           if ( Rx > highSpeedDelta * 3 )
                                           Rx -= highSpeedDelta * 3;
                           else
                                           Rx = 0;
           } else {
                           if ( Rx > 0 )
                                           Rx--;
           }
   }
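   For concreteness, the pseudocode above can be transcribed directly
   into a function that processes one status-feedback report per call.
   This non-normative Python sketch mirrors the variable names, with
   hSpeedThresh expressed as a table-row index (an assumption about how
   an implementation maps the 1 Gbps threshold onto its rate table):

```python
def adjust_rate(Rx, seqErr, delay, slowAdjCount,
                seqErrThresh=10, lowThresh=30, upperThresh=90,
                hSpeedThresh=1000, slowAdjThresh=3,
                highSpeedDelta=10, maxLoadRates=2000):
    """One load rate adjustment step; returns (Rx, slowAdjCount)."""
    if seqErr <= seqErrThresh and delay < lowThresh:
        if Rx < hSpeedThresh and slowAdjCount < slowAdjThresh:
            Rx += highSpeedDelta        # fast ramp-up
            slowAdjCount = 0
        elif Rx < maxLoadRates - 1:
            Rx += 1                     # slow upward search
    elif seqErr > seqErrThresh or delay > upperThresh:
        slowAdjCount += 1               # congestion indication
        if Rx < hSpeedThresh and slowAdjCount == slowAdjThresh:
            # one large step down after repeated indications
            Rx = Rx - highSpeedDelta * 3 if Rx > highSpeedDelta * 3 else 0
        elif Rx > 0:
            Rx -= 1                     # gentle back-off
    return Rx, slowAdjCount

print(adjust_rate(0, 0, 5, 0))       # (10, 0): clean report, ramp up
print(adjust_rate(999, 20, 100, 2))  # (969, 3): back off three deltas
```

   Note that when seqErr is within threshold but the RTD range falls
   between lowThresh and upperThresh, neither branch fires and the rate
   is held, exactly as in the pseudocode.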

Appendix B.  RFC 8085 UDP Guidelines Check

   Section 3.1 of [RFC8085] (BCP 145), which provides UDP usage
   guidelines, focuses primarily on congestion control.  The guidelines
   appear in mandatory (MUST) and recommendation (SHOULD) categories.

B.1.  Assessment of Mandatory Requirements

   The mandatory requirements in Section 3 of [RFC8085] include the
   following:

   |  Internet paths can have widely varying characteristics, ...
   |  Consequently, applications that may be used on the Internet MUST
   |  NOT make assumptions about specific path characteristics.  They
   |  MUST instead use mechanisms that let them operate safely under
   |  very different path conditions.  Typically, this requires
   |  conservatively probing the current conditions of the Internet path
   |  they communicate over to establish a transmission behavior that it
   |  can sustain and that is reasonably fair to other traffic sharing
   |  the path.

   The purpose of the load rate adjustment algorithm described in
   Section 8.1 is to probe the network and enable Maximum IP-Layer
   Capacity measurements with as few assumptions about the measured
   path as possible and within the range of applications described in
   Section 2.  There is tension between the goals of probing
   conservatism and minimization of both the traffic dedicated to
   testing (especially with Gigabit rate measurements) and the duration
   of the test (which is one contributing factor to the overall
   algorithm fairness).

   The text of Section 3 of [RFC8085] goes on to recommend alternatives
   to UDP to meet the mandatory requirements, but none are suitable for
   the scope and purpose of the Metrics and Methods in this memo.  In
   fact, ad hoc TCP-based methods fail to achieve the measurement
   accuracy repeatedly proven in comparison measurements with the
   running code [LS-SG12-A] [LS-SG12-B] [Y.Sup60].  Also, the UDP aspect
   of these methods is present primarily to support modern Internet
   transmission where a transport protocol is required [copycat]; the
   metric is based on the IP Layer, and UDP allows simple correlation to
   the IP Layer.

   Section 3.1.1 of [RFC8085] discusses protocol timer guidelines:

   |  Latency samples MUST NOT be derived from ambiguous transactions.
   |  The canonical example is in a protocol that retransmits data, but
   |  subsequently cannot determine which copy is being acknowledged.

   Both load packets and status feedback messages MUST contain sequence
   numbers; this helps with measurements based on those packets, and
   there are no retransmissions needed.

   |  When a latency estimate is used to arm a timer that provides loss
   |  detection -- with or without retransmission -- expiry of the timer
   |  MUST be interpreted as an indication of congestion in the network,
   |  causing the sending rate to be adapted to a safe conservative
   |  rate...

   The methods described in this memo use timers for sending rate
   backoff when status feedback messages are lost (Lost Status Backoff
   timeout) and for stopping a test when connectivity is lost for a
   longer interval (feedback message or load packet timeouts).

   This memo does not foresee any specific benefit of using Explicit
   Congestion Notification (ECN).

   Section 3.2 of [RFC8085] discusses message size guidelines:

   |  To determine an appropriate UDP payload size, applications MUST
   |  subtract the size of the IP header (which includes any IPv4
   |  optional headers or IPv6 extension headers) as well as the length
   |  of the UDP header (8 bytes) from the PMTU size.

   The method uses a sending rate table with a maximum UDP payload size
   that anticipates significant header overhead and avoids
   fragmentation.
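   A minimal sketch of that subtraction (assuming fixed 20-/40-octet
   IPv4/IPv6 headers; any IPv4 options or IPv6 extension headers
   present would also need to be subtracted, as the quoted text notes):

```python
UDP_HEADER = 8  # octets

def max_udp_payload(pmtu: int, ipv6: bool, ext_octets: int = 0) -> int:
    """Largest UDP payload that avoids fragmentation for a given PMTU.
    ext_octets covers IPv4 options or IPv6 extension headers, if any."""
    ip_header = (40 if ipv6 else 20) + ext_octets
    return pmtu - ip_header - UDP_HEADER

print(max_udp_payload(1500, ipv6=False))  # 1472
print(max_udp_payload(1500, ipv6=True))   # 1452
```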

   Section 3.3 of [RFC8085] provides reliability guidelines:

   |  Applications that do require reliable message delivery MUST
   |  implement an appropriate mechanism themselves.

   The IP-Layer Capacity Metrics and Methods do not require reliable
   delivery.

   |  Applications that require ordered delivery MUST reestablish
   |  datagram ordering themselves.

   The IP-Layer Capacity Metrics and Methods do not need to reestablish
   packet order; it is preferable to measure packet reordering if it
   occurs [RFC4737].

B.2.  Assessment of Recommendations

   The load rate adjustment algorithm's goal is to determine the
   Maximum IP-Layer Capacity in the context of an infrequent,
   diagnostic, short-term measurement.  This goal is a global exception
   to many SHOULD-level requirements as listed in [RFC8085], of which
   many are intended for long-lived flows that must coexist with other
   traffic in a more or less fair way.  However, the algorithm (as
   specified in Section 8.1 and Appendix A) reacts to indications of
   congestion in clearly defined ways.

   A specific recommendation is provided as an example.  Section 3.1.5
   of [RFC8085] (regarding the implications of RTT and loss
   measurements on congestion control) says:

   |  A congestion control [algorithm] designed for UDP SHOULD respond
   |  as quickly as possible when it experiences congestion, and it
   |  SHOULD take into account both the loss rate and the response time
   |  when choosing a new rate.

   The load rate adjustment algorithm responds to loss and RTT
   measurements with a clear and concise rate reduction when warranted,
   and the response makes use of direct measurements (more exact than
   can be inferred from TCP ACKs).

   Section 3.1.5 of [RFC8085] goes on to specify the following:

   |  The implemented congestion control scheme SHOULD result in
   |  bandwidth (capacity) use that is comparable to that of TCP within
   |  an order of magnitude, so that it does not starve other flows
   |  sharing a common bottleneck.

   This is a requirement for coexistent streams, and not for diagnostic
   and infrequent measurements using short durations.  The rate
   oscillations during short tests allow other packets to pass and don't
   starve other flows.

   Ironically, ad hoc TCP-based measurements of "Internet Speed" are
   also designed to work around this SHOULD-level requirement, by
   launching many flows (9, for example) to increase the outstanding
   data dedicated to testing.

   The load rate adjustment algorithm cannot become a TCP-like
   congestion control, or it will have the same weaknesses of TCP when
   trying to make a Maximum IP-Layer Capacity measurement and will not
   achieve the goal.  The results of the referenced testing [LS-SG12-A]
   [LS-SG12-B] [Y.Sup60] supported this statement hundreds of times,
   with comparisons to multi-connection TCP-based measurements.

   A brief review of some other SHOULD-level requirements from
   [RFC8085] follows (marked "Yes" when this memo is compliant, or "NA"
   (Not Applicable)):

      +======+============================================+=========+
      | Yes? | Recommendation                             | Section |
      +======+============================================+=========+
      | Yes  | MUST tolerate a wide range of Internet     | 3       |
      |      | path conditions                            |         |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD use a full-featured transport       |         |
      |      | (e.g., TCP)                                |         |
      +------+--------------------------------------------+---------+
      | Yes  | SHOULD control rate of transmission        | 3.1     |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD perform congestion control over all |         |
      |      | traffic                                    |         |
      +======+============================================+=========+
      |      | For bulk transfers,                        | 3.1.2   |
      +======+============================================+=========+
      | NA   | SHOULD consider implementing TFRC          |         |
      +------+--------------------------------------------+---------+
      | NA   | else, SHOULD in other ways use bandwidth   |         |
      |      | similar to TCP                             |         |
      +======+============================================+=========+
      |      | For non-bulk transfers,                    | 3.1.3   |
      +======+============================================+=========+
      | NA   | SHOULD measure RTT and transmit max. 1     | 3.1.1   |
      |      | datagram/RTT                               |         |
      +------+--------------------------------------------+---------+
      | NA   | else, SHOULD send at most 1 datagram every |         |
      |      | 3 seconds                                  |         |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD back-off retransmission timers      |         |
      |      | following loss                             |         |
      +------+--------------------------------------------+---------+
      | Yes  | SHOULD provide mechanisms to regulate the  | 3.1.6   |
      |      | bursts of transmission                     |         |
      +------+--------------------------------------------+---------+
      | NA   | MAY implement ECN; a specific set of       | 3.1.7   |
      |      | application mechanisms are REQUIRED if ECN |         |
      |      | is used                                    |         |
      +------+--------------------------------------------+---------+
      | Yes  | For DiffServ, SHOULD NOT rely on           | 3.1.8   |
      |      | implementation of PHBs                     |         |
      +------+--------------------------------------------+---------+
      | Yes  | For QoS-enabled paths, MAY choose not to   | 3.1.9   |
      |      | use CC                                     |         |
      +------+--------------------------------------------+---------+
      | Yes  | SHOULD NOT rely solely on QoS for their    | 3.1.10  |
      |      | capacity                                   |         |
      +------+--------------------------------------------+---------+
      | NA   | non-CC controlled flows SHOULD implement a |         |
      |      | transport circuit breaker                  |         |
      +------+--------------------------------------------+---------+
      | Yes  | MAY implement a circuit breaker for other  |         |
      |      | applications                               |         |
      +======+============================================+=========+
      |      | For tunnels carrying IP traffic,           | 3.1.11  |
      +======+============================================+=========+
      | NA   | SHOULD NOT perform congestion control      |         |
      +------+--------------------------------------------+---------+
      | NA   | MUST correctly process the IP ECN field    |         |
      +======+============================================+=========+
      |      | For non-IP tunnels or rate not determined  | 3.1.11  |
      |      | by traffic,                                |         |
      +======+============================================+=========+
      | NA   | SHOULD perform CC or use circuit breaker   |         |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD restrict types of traffic           |         |
      |      | transported by the                         |         |
      | tunnel      | transported by the tunnel                  |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |
Yes| SHOULD NOT send datagrams that exceed the  | 3.2     |
      |      | PMTU, i.e.,                                | 3.2         |
Yes|
      +------+--------------------------------------------+---------+
      | Yes  | SHOULD discover PMTU or send datagrams <   |         |
      |      | minimum PMTU; PMTU                               |         |
      +------+--------------------------------------------+---------+
      | NA   | Specific application mechanisms are REQUIRED if PLPMTUD        |         |
      | is used.      | REQUIRED if PLPMTUD is used                |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |
Yes| SHOULD handle datagram loss, duplication, reordering  | 3.3     |
      |      | reordering                                 |         |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD be robust to delivery delays up to  |         |
      |      | 2 minutes                                  |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |         |
Yes| SHOULD enable IPv4 UDP checksum            | 3.4     |
Yes|
      +------+--------------------------------------------+---------+
      | Yes  | SHOULD enable IPv6 UDP checksum; Specific application specific  | 3.4.1   |
      |      | application mechanisms are REQUIRED if a   |         |
      |      | zero IPv6 UDP checksum is used             |         |
   | used.                                                   |         |
   |                                                         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | SHOULD provide protection from off-path attacks    | 5.1     |
      |      | attacks                                    |         |
      +------+--------------------------------------------+---------+
      |      | else, MAY use UDP-Lite with suitable checksum coverage       | 3.4.2   |
      |      | checksum coverage                          |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | SHOULD NOT always send middlebox keep-alive messages keep-     | 3.5     |
      |      | alive messages                             |         |
      +------+--------------------------------------------+---------+
      | NA   | MAY use keep-alives when needed (min.      |         |
      |      | interval 15 sec)                           |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |         |

Yes| Applications specified for use in limited use (or  | 3.6     |
      |      | use (or controlled environments) SHOULD identify equivalent    |         |
      |      | identify equivalent mechanisms and         |         |
      |      | describe their use case.                 |         | case                    |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | Bulk-multicast apps SHOULD implement congestion control       | 4.1.1   |
      |      | congestion control                         |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | Low volume multicast apps SHOULD implement congestion | 4.1.2   |
      | control                                                 |      | congestion control                         |         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | Multicast apps SHOULD use a safe PMTU      | 4.2     |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |         |
Yes| SHOULD avoid using multiple ports          | 5.1.2   |
Yes|
      +------+--------------------------------------------+---------+
      | Yes  | MUST check received IP source address      |         |
   |                                                         |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | NA   | SHOULD validate payload in ICMP messages   | 5.2     |
      +------+--------------------------------------------+---------+
      +------+--------------------------------------------+---------+
      | Yes  |         |
Yes| SHOULD use a randomized source Source port or equivalent     | 6       |
      |      | equivalent technique, and, for client/server applications, SHOULD client/     |         |
      |      | server applications, SHOULD send responses |         |
      |      | from source address matching request       |         |
   | 5.1                                                     |
      +------+--------------------------------------------+---------+
      | NA   | SHOULD use standard IETF security          | 6       |
      |      | protocols when needed                      | 6         |
   +---------------------------------------------------------+---------+
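   Several of the sender-side rows above (regulating bursts of
   transmission, using a randomized ephemeral source port, and checking
   the IP source address of received datagrams) can be illustrated with
   a short sketch.  This is not part of any specified method or of the
   udpst implementation; it is a loopback exercise, and the burst size,
   burst count, and pacing gap are arbitrary assumed values:

   ```python
   import socket
   import time

   # Loopback stand-in for a measurement receiver; the kernel chooses
   # the port, so both endpoints here are hypothetical.
   server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   server.bind(("127.0.0.1", 0))
   server.settimeout(2.0)
   server_addr = server.getsockname()

   # Binding to port 0 yields a kernel-assigned (randomized) ephemeral
   # source port, per the "randomized source port" guideline.
   client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   client.bind(("127.0.0.1", 0))
   client.settimeout(2.0)

   BURSTS = 3   # number of bursts (assumed value)
   BURST = 4    # datagrams per burst (assumed value)
   GAP = 0.01   # pacing gap between bursts, seconds (assumed value)

   # Regulated bursts: short runs of datagrams separated by pacing
   # gaps, rather than one unbroken back-to-back flood.
   for b in range(BURSTS):
       for s in range(BURST):
           client.sendto(b"load-PDU %d" % (b * BURST + s), server_addr)
       time.sleep(GAP)

   # The "server" echoes each datagram back to the (address, port) it
   # arrived from.
   for _ in range(BURSTS * BURST):
       data, addr = server.recvfrom(2048)
       server.sendto(data, addr)

   # The client counts only replies whose source matches the server it
   # targeted ("MUST check received IP source address").
   accepted = 0
   for _ in range(BURSTS * BURST):
       data, addr = client.recvfrom(2048)
       if addr == server_addr:
           accepted += 1

   print(accepted)   # 12 when every echo arrives, as on loopback
   ```

   A production sender would pace bursts from a timer rather than
   sleep(), but the structure is the same: send a bounded burst, wait
   out the gap, and discard replies from unexpected sources.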

15.  References

15.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
              "Framework for IP Performance Metrics", RFC 2330,
              DOI 10.17487/RFC2330, May 1998,
              <https://www.rfc-editor.org/info/rfc2330>.

   [RFC2681]  Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip
              Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681,
              September 1999, <https://www.rfc-editor.org/info/rfc2681>.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September 2006,
              <https://www.rfc-editor.org/info/rfc4656>.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              DOI 10.17487/RFC4737, November 2006,
              <https://www.rfc-editor.org/info/rfc4737>.

   [RFC5357]  Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
              Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)",
              RFC 5357, DOI 10.17487/RFC5357, October 2008,
              <https://www.rfc-editor.org/info/rfc5357>.

   [RFC6438]  Carpenter, B. and S. Amante, "Using the IPv6 Flow Label
              for Equal Cost Multipath Routing and Link Aggregation in
              Tunnels", RFC 6438, DOI 10.17487/RFC6438, November 2011,
              <https://www.rfc-editor.org/info/rfc6438>.

   [RFC7497]  Morton, A., "Rate Measurement Test Protocol Problem
              Statement and Requirements", RFC 7497,
              DOI 10.17487/RFC7497, April 2015,
              <https://www.rfc-editor.org/info/rfc7497>.

   [RFC7680]  Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton,
              Ed., "A One-Way Loss Metric for IP Performance Metrics
              (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January
              2016, <https://www.rfc-editor.org/info/rfc7680>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8468]  Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V.
              Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for
              the IP Performance Metrics (IPPM) Framework", RFC 8468,
              DOI 10.17487/RFC8468, November 2018,
              <https://www.rfc-editor.org/info/rfc8468>.

15.2.  Informative References

   [copycat]  Edeline, K., Kühlewind, M., Trammell, B., and B. Donnet,
              "copycat: Testing Differential Treatment of New Transport
              Protocols in the Wild (ANRW '17)", July 2017,
              <https://irtf.org/anrw/2017/anrw17-final5.pdf>.

   [LS-SG12-A]
              ITU-T Study Group 12, "LS - Harmonization of IP Capacity
              and Latency Parameters: Revision of Draft Rec. Y.1540 on
              IP packet transfer performance parameters and New Annex A
              with Lab Evaluation Plan", May 2019,
              <https://datatracker.ietf.org/liaison/1632/>.

   [LS-SG12-B]
              ITU-T Study Group 12, "LS on harmonization of IP Capacity
              and Latency Parameters: Consent of Draft Rec. Y.1540 on
              IP packet transfer performance parameters and New Annex A
              with Lab & Field Evaluation Plans", March 2019,
              <https://datatracker.ietf.org/liaison/1645/>.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

   [RFC3148]  Mathis, M. and M. Allman, "A Framework for Defining
              Empirical Bulk Transfer Capacity Metrics", RFC 3148,
              DOI 10.17487/RFC3148, July 2001,
              <https://www.rfc-editor.org/info/rfc3148>.

   [RFC5136]  Chimento, P. and J. Ishac, "Defining Network Capacity",
              RFC 5136, DOI 10.17487/RFC5136, February 2008,
              <https://www.rfc-editor.org/info/rfc5136>.

   [RFC6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815,
              DOI 10.17487/RFC6815, November 2012,
              <https://www.rfc-editor.org/info/rfc6815>.

   [RFC7312]  Fabini, J. and A. Morton, "Advanced Stream and Sampling
              Framework for IP Performance Metrics (IPPM)", RFC 7312,
              DOI 10.17487/RFC7312, August 2014,
              <https://www.rfc-editor.org/info/rfc7312>.

   [RFC7594]  Eardley, P., Morton, A., Bagnulo, M., Burbridge, T.,
              Aitken, P., and A. Akhter, "A Framework for Large-Scale
              Measurement of Broadband Performance (LMAP)", RFC 7594,
              DOI 10.17487/RFC7594, September 2015,
              <https://www.rfc-editor.org/info/rfc7594>.

   [RFC7799]  Morton, A., "Active and Passive Metrics and Methods (with
              Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799,
              May 2016, <https://www.rfc-editor.org/info/rfc7799>.

   [RFC8085]  Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage
              Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085,
              March 2017, <https://www.rfc-editor.org/info/rfc8085>.

   [RFC8337]  Mathis, M. and A. Morton, "Model-Based Metrics for Bulk
              Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March
              2018, <https://www.rfc-editor.org/info/rfc8337>.

   [TR-471]   Morton, A., "Broadband Forum TR-471: IP Layer Capacity
              Metrics and Measurement", July 2020,
              <https://www.broadband-forum.org/technical/download/TR-
              471.pdf>.

   [udpst]    udpst Project Collaborators, "UDP Speed Test Open
              Broadband project", December 2020,
              <https://github.com/BroadbandForum/obudpst>.

   [Y.1540]   ITU-T, "Internet protocol data communication
              service - IP packet transfer and availability performance
              parameters", December 2019,
              <https://www.itu.int/rec/T-REC-Y.1540-201912-I/en>.

   [Y.Sup60]  Morton, A., "Recommendation Y.Sup60, (09/20) Interpreting
              ITU-T Y.1540 maximum IP-layer capacity measurements, and
              Errata", September 2020,
               <https://www.itu.int/rec/T-REC-Y.Sup60/en>.

Acknowledgments

   Thanks to Joachim Fabini, Ignacio Alvarez-Hamelin, Wolfgang Balzer,
   Frank Brockners, Greg Mirsky, Martin Duke, Murray Kucherawy, and
   Benjamin Kaduk for their extensive comments on this memo and related
   topics.  In a second round of reviews, we acknowledge Magnus
   Westerlund, Lars Eggert, and Zaheduzzaman Sarker.

Authors' Addresses

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   United States of America

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acm@research.att.com

   Rüdiger Geib
   Deutsche Telekom
   Heinrich Hertz Str. 3-7
   64295 Darmstadt
   Germany

   Phone: +49 6151 5812747
   Email: Ruediger.Geib@telekom.de

   Len Ciavattone
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   United States of America

   Phone: +1 732 420 1239
   Email: lencia@att.com