Internet Engineering Task Force (IETF)                 V. Bhuvaneswaran
Request for Comments: 8456                                      A. Basil
Category: Informational                               Veryx Technologies
ISSN: 2070-1721                                            M. Tassinari
                                             Hewlett Packard Enterprise
                                                               V. Manral
                                                                 NanoSec
                                                                S. Banks
                                                          VSS Monitoring
                                                            October 2018

      Benchmarking Methodology for Software-Defined Networking (SDN)
                         Controller Performance

Abstract

   This document defines methodologies for benchmarking the control-
   plane performance of Software-Defined Networking (SDN) Controllers.
   The SDN Controller is a core component in the SDN architecture that
   controls the behavior of the network.  SDN Controllers have been
   implemented with many varying designs in order to achieve their
   intended network functionality.  Hence, the authors of this document
   have taken the approach of considering an SDN Controller to be a
   black box, defining the methodology in a manner that is agnostic to
   protocols and network services supported by controllers.  This
   document provides a method for measuring the performance of all
   controller implementations.

Status of This Memo

   This document is not an Internet Standards Track specification; it
   is published for informational purposes.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Not all documents
   approved by the IESG are a candidate for any level of Internet
   Standard; see Section 2 of RFC 7841.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   https://www.rfc-editor.org/info/rfc8456.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................4
      1.1. Conventions Used in This Document ..........................4
   2. Scope ...........................................................4
   3. Test Setup ......................................................4
      3.1. Test Setup - Controller Operating in Standalone Mode .......5
      3.2. Test Setup - Controller Operating in Cluster Mode ..........6
   4. Test Considerations .............................................7
      4.1. Network Topology ...........................................7
      4.2. Test Traffic ...............................................7
      4.3. Test Emulator Requirements .................................7
      4.4. Connection Setup ...........................................8
      4.5. Measurement Point Specification and Recommendation .........9
      4.6. Connectivity Recommendation ................................9
      4.7. Test Repeatability .........................................9
      4.8. Test Reporting .............................................9
   5. Benchmarking Tests .............................................11
      5.1. Performance ...............................................11
           5.1.1. Network Topology Discovery Time ....................11
           5.1.2. Asynchronous Message Processing Time ...............13
           5.1.3. Asynchronous Message Processing Rate ...............14
           5.1.4. Reactive Path Provisioning Time ....................17
           5.1.5. Proactive Path Provisioning Time ...................19
           5.1.6. Reactive Path Provisioning Rate ....................21
           5.1.7. Proactive Path Provisioning Rate ...................23
           5.1.8. Network Topology Change Detection Time .............25
      5.2. Scalability ...............................................26
           5.2.1. Control Sessions Capacity ..........................26
           5.2.2. Network Discovery Size .............................27
           5.2.3. Forwarding Table Capacity ..........................29
      5.3. Security ..................................................31
           5.3.1. Exception Handling .................................31
           5.3.2. Handling Denial-of-Service Attacks .................32
      5.4. Reliability ...............................................34
           5.4.1. Controller Failover Time ...........................34
           5.4.2. Network Re-provisioning Time .......................36
   6. IANA Considerations ............................................37
   7. Security Considerations ........................................38
   8. References .....................................................38
      8.1. Normative References ......................................38
      8.2. Informative References ....................................38
   Appendix A. Benchmarking Methodology Using OpenFlow Controllers ...39
     A.1. Protocol Overview ..........................................39
     A.2. Messages Overview ..........................................39
     A.3. Connection Overview ........................................39
     A.4. Performance Benchmarking Tests .............................40
       A.4.1. Network Topology Discovery Time ........................40
       A.4.2. Asynchronous Message Processing Time ...................42
       A.4.3. Asynchronous Message Processing Rate ...................43
       A.4.4. Reactive Path Provisioning Time ........................44
       A.4.5. Proactive Path Provisioning Time .......................46
       A.4.6. Reactive Path Provisioning Rate ........................47
       A.4.7. Proactive Path Provisioning Rate .......................49
       A.4.8. Network Topology Change Detection Time .................50
     A.5. Scalability ................................................51
       A.5.1. Control Sessions Capacity ..............................51
       A.5.2. Network Discovery Size .................................52
       A.5.3. Forwarding Table Capacity ..............................54
     A.6. Security ...................................................55
       A.6.1. Exception Handling .....................................55
       A.6.2. Handling Denial-of-Service Attacks .....................57
     A.7. Reliability ................................................59
       A.7.1. Controller Failover Time ...............................59
       A.7.2. Network Re-provisioning Time ...........................61
   Acknowledgments ...................................................63
   Authors' Addresses ................................................64

1.  Introduction

   This document provides generic methodologies for benchmarking
   Software-Defined Networking (SDN) Controller performance.  To
   achieve the desired functionality, an SDN Controller may support
   many northbound and southbound protocols, implement a wide range of
   applications, and work either alone or as part of a group.  This
   document considers an SDN Controller to be a black box, regardless
   of design and implementation.  The tests defined in this document
   can be used to benchmark an SDN Controller for performance,
   scalability, reliability, and security, independently of northbound
   and southbound protocols.  Terminology related to benchmarking SDN
   Controllers is described in the companion terminology document
   [RFC8455].  These tests can be performed on an SDN Controller
   running as a virtual machine (VM) instance or on a bare metal
   server.  This document is intended for those who want to measure an
   SDN Controller's performance as well as compare the performance of
   various SDN Controllers.

1.1.  Conventions Used in This Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

2.  Scope

   This document defines a methodology for measuring the networking
   metrics of SDN Controllers.  For the purpose of this memo, the SDN
   Controller is a function that manages and controls Network Devices.
   Any SDN Controller without a control capability is out of scope for
   this memo.  The tests defined in this document enable the
   benchmarking of SDN Controllers in two ways: standalone mode
   (a standalone controller) and cluster mode (a cluster of homogeneous
   controllers).  These tests are recommended for execution in lab
   environments rather than in live network deployments.  Performance
   benchmarking of a federation of controllers (i.e., a set of SDN
   Controllers) managing different domains, is beyond the scope of this
   document.

3.  Test Setup

   As noted above, the tests defined in this document enable the
   measurement of an SDN Controller's performance in standalone mode
   and cluster mode.  This section defines common reference topologies
   that are referred to in individual tests described later in this
   document.

3.1.  Test Setup - Controller Operating in Standalone Mode

      +-----------------------------------------------------------+
      |               Application-Plane Test Emulator             |
      |                                                           |
      |        +-----------------+      +-------------+           |
      |        |   Application   |      |   Service   |           |
      |        +-----------------+      +-------------+           |
      |                                                           |
      +-----------------------------+(I2)-------------------------+
                                     |
                                     | (Northbound Interface)
                   +-------------------------------+
                   |       +----------------+      |
                   |       | SDN Controller |      |
                   |       +----------------+      |
                   |                               |
                   |    Device Under Test (DUT)    |
                   +-------------------------------+
                                     | (Southbound Interface)
                                     |
      +-----------------------------+(I1)-------------------------+
      |                                                           |
      |             +-----------+     +-------------+             |
      |             |  Network  |     |   Network   |             |
      |             | Device 2  |--..-| Device n - 1|             |
      |             +-----------+     +-------------+             |
      |                     /    \   /    \                       |
      |                    /      \ /      \                      |
      |                l0 /        X        \ ln                  |
      |                  /        / \        \                    |
      |               +-----------+  +-----------+                |
      |               |  Network  |  |  Network  |                |
      |               |  Device 1 |..|  Device n |                |
      |               +-----------+  +-----------+                |
      |                     |              |                      |
      |           +---------------+  +---------------+            |
      |           | Test Traffic  |  | Test Traffic  |            |
      |           |  Generator    |  |  Generator    |            |
      |           |    (TP1)      |  |    (TP2)      |            |
      |           +---------------+  +---------------+            |
      |                                                           |
      |              Forwarding-Plane Test Emulator               |
      +-----------------------------------------------------------+

                                 Figure 1

3.2.  Test Setup - Controller Operating in Cluster Mode

      +-----------------------------------------------------------+
      |               Application-Plane Test Emulator             |
      |                                                           |
      |        +-----------------+      +-------------+           |
      |        |   Application   |      |   Service   |           |
      |        +-----------------+      +-------------+           |
      |                                                           |
      +-----------------------------+(I2)-------------------------+
                                     |
                                     | (Northbound Interface)
       +---------------------------------------------------------+
       |                                                         |
        | +------------------+           +------------------+     |
        | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
        | +------------------+           +------------------+     |
       |                                                         |
       |                    Device Under Test (DUT)              |
       +---------------------------------------------------------+
                                     | (Southbound Interface)
                                     |
      +-----------------------------+(I1)-------------------------+
      |                                                           |
      |             +-----------+     +-------------+             |
      |             |  Network  |     |   Network   |             |
      |             | Device 2  |--..-| Device n - 1|             |
      |             +-----------+     +-------------+             |
      |                     /    \   /    \                       |
      |                    /      \ /      \                      |
      |                l0 /        X        \ ln                  |
      |                  /        / \        \                    |
      |               +-----------+  +-----------+                |
      |               |  Network  |  |  Network  |                |
      |               |  Device 1 |..|  Device n |                |
      |               +-----------+  +-----------+                |
      |                     |              |                      |
      |           +---------------+  +---------------+            |
      |           | Test Traffic  |  | Test Traffic  |            |
      |           |  Generator    |  |  Generator    |            |
      |           |    (TP1)      |  |    (TP2)      |            |
      |           +---------------+  +---------------+            |
      |                                                           |
      |              Forwarding-Plane Test Emulator               |
      +-----------------------------------------------------------+

                                 Figure 2

4.  Test Considerations

4.1.  Network Topology

   The test cases SHOULD use Leaf-Spine topology with at least two
   Network Devices in the topology for benchmarking.  Test traffic
   generators TP1 and TP2 SHOULD be connected to the leaf Network
   Device 1 and the leaf Network Device n.  To achieve a complete
   performance characterization of the SDN Controller, it is
   recommended that the controller be benchmarked for many network
   topologies and a varying number of Network Devices.  Further, care
   should be taken to make sure that a loop-prevention mechanism is
   enabled in either the SDN Controller or the network when the
   topology contains redundant network paths.

4.2.  Test Traffic

   Test traffic is used to notify the controller about the asynchronous
   arrival of new flows.  The test cases SHOULD use frame sizes of 128,
   512, and 1508 bytes for benchmarking.  Tests using jumbo frames are
   optional.

4.3.  Test Emulator Requirements

   The test emulator SHOULD timestamp the transmitted and received
   control messages to/from the controller on the established network
   connections.  The test cases use these values to compute the
   controller processing time.

4.4.  Connection Setup

   There may be controller implementations that support unencrypted and
   encrypted network connections with Network Devices.  Further, the
   controller may be backward compatible with Network Devices running
   older versions of southbound protocols.  It may be useful to measure
   the controller's performance with one or more applicable connection
   setup methods defined below.  For cases with encrypted communications
   between the controller and the switch, key management and key
   exchange MUST take place before any performance or benchmark
   measurements.

      1. Unencrypted connection with Network Devices, running the same
         protocol version.

      2. Unencrypted connection with Network Devices, running different
         protocol versions.

         Examples:

            a. Controller running current protocol version and switch
               running older protocol version.

            b. Controller running older protocol version and switch
               running current protocol version.

      3. Encrypted connection with Network Devices, running the same
         protocol version.

      4. Encrypted connection with Network Devices, running different
         protocol versions.

         Examples:

            a. Controller running current protocol version and switch
               running older protocol version.

            b. Controller running older protocol version and switch
               running current protocol version.

4.5.  Measurement Point Specification and Recommendation

   The accuracy of the measurements depends on several factors,
   including the point of observation where the indications are
   captured.  For example, the notification can be observed at the
   controller or test emulator.  The test operator SHOULD make the
   observations/measurements at the interfaces of the test emulator,
   unless it is explicitly specified otherwise in the individual test.
   In any case, the locations of measurement points MUST be reported.

4.6.  Connectivity Recommendation

   The SDN Controller in the test setup SHOULD be connected directly
   with the forwarding-plane and management-plane test emulators to
   avoid any delays or failure introduced by the intermediate devices
   during benchmarking tests.  When the controller is implemented as a
   virtual machine, details of the physical and logical connectivity
   MUST be reported.

4.7.  Test Repeatability

   To increase the confidence in the measured results, it is
   RECOMMENDED that each test be repeated a minimum of 10 times.

4.8.  Test Reporting

   Each test has a reporting format that contains some global and
   identical reporting components, and some individual components that
   are specific to individual tests.  The following parameters for test
   configuration and controller settings MUST be reflected in the test
   report.

   Test Configuration Parameters:

      1.  Controller name and version

      2.  Northbound protocols and versions

      3.  Southbound protocols and versions

      4.  Controller redundancy mode (standalone or cluster mode)

      5.  Connection setup (unencrypted or encrypted)

      6.  Network Device type (physical, virtual, or emulated)

      7.  Number of nodes

      8.  Number of links

      9.  Data-plane test traffic type

      10. Controller system configuration (e.g., physical or virtual
          machine, CPU, memory, caches, operating system, interface
          speed, storage)

      11. Reference test setup (e.g., the setup shown in Section 3.1)

   Parameters for Controller Settings:

      1. Topology rediscovery timeout

      2. Controller redundancy mode (e.g., active-standby)

      3. Controller state persistence enabled/disabled

   To ensure the repeatability of the test, the following capabilities
   of the test emulator SHOULD be reported:

      1. Maximum number of Network Devices that the forwarding plane
         emulates

      2. Control message processing time (e.g., topology discovery
         messages)

   One way to determine the above two values is to simulate the
   required control sessions and messages from the control plane.
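
   For illustration, the global reporting components above can be
   captured in a simple structured record to which each benchmark's
   per-trial results are attached.  The sketch below (Python) is not
   part of the methodology; the field names and the use of dataclasses
   are assumptions of this example.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class TestConfiguration:
          """Test configuration parameters (Section 4.8)."""
          controller_name: str
          controller_version: str
          northbound_protocols: List[str]
          southbound_protocols: List[str]
          redundancy_mode: str        # "standalone" or "cluster"
          connection_setup: str       # "unencrypted" or "encrypted"
          device_type: str            # "physical", "virtual", or "emulated"
          number_of_nodes: int
          number_of_links: int
          dataplane_traffic_type: str
          controller_system_config: str
          reference_test_setup: str   # e.g., "Section 3.1"

      @dataclass
      class ControllerSettings:
          """Controller settings reported with every test."""
          topology_rediscovery_timeout_s: float
          redundancy_mode: str        # e.g., "active-standby"
          state_persistence_enabled: bool

      @dataclass
      class TestReport:
          """Global parameters plus one benchmark's per-trial results."""
          configuration: TestConfiguration
          settings: ControllerSettings
          trial_results: List[float] = field(default_factory=list)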

5.  Benchmarking Tests

5.1.  Performance

5.1.1.  Network Topology Discovery Time

   Objective:

      Measure the time taken by the controller(s) to determine the
      complete network topology, defined as the interval starting with
      the first discovery message from the controller(s) at its
      southbound interface and ending with all features of the static
      topology determined.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST support network discovery.

      2. The tester should be able to retrieve the discovered topology
         information through either the controller's management
         interface or northbound interface to determine if the
         discovery was successful and complete.

      3. Ensure that the controller's topology rediscovery timeout has
         been set to the maximum value, to avoid initiation of the
         rediscovery process in the middle of the test.

   Procedure:

      1. Ensure that the controller is operational and that its network
         applications, northbound interface, and southbound interface
         are up and running.

      2. Establish the network connections between the controller and
         the Network Devices.

      3. Record the time for the first discovery message (Tm1) received
         from the controller at the forwarding-plane test emulator
         interface (I1).

      4. Query the controller every t seconds (the RECOMMENDED value
         for t is 3) to obtain the discovered network topology
         information through the northbound interface or the management
         interface, and compare it with the deployed network topology
         information.

      5. Stop the trial when the discovered topology information
         matches the deployed network topology or when the discovered
         topology information returns the same details for three
         consecutive queries (a sketch of this polling loop appears
         after the procedure).

      6. Record the time for the last discovery message (Tmn) sent to
         the controller from the forwarding-plane test emulator
         interface (I1) when the trial completes successfully (e.g.,
         when the topology matches).
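
      For illustration only (not part of the methodology), steps 4
      and 5 above can be expressed as the following Python sketch.
      The query_topology callable is a stand-in for the northbound or
      management query (an assumption of this example), and the
      topology is modeled here simply as a set of links.

         import time
         from typing import Callable, Set, Tuple

         def wait_for_discovery(query_topology: Callable[[], Set[Tuple[str, str]]],
                                deployed: Set[Tuple[str, str]],
                                t: float = 3.0,
                                max_queries: int = 100) -> bool:
             """Poll the controller every t seconds (steps 4-5).

             Returns True when the discovered topology matches the
             deployed one, or False when the discovered information is
             unchanged for three consecutive queries without matching.
             """
             previous, repeats = None, 0
             for _ in range(max_queries):
                 discovered = query_topology()
                 if discovered == deployed:
                     return True
                 repeats = repeats + 1 if discovered == previous else 1
                 if repeats == 3:
                     return False
                 previous = discovered
                 time.sleep(t)
             return False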

   Measurements:

      Topology Discovery Time (DT1) = Tmn - Tm1

                                               DT1 + DT2 + DT3 .. DTn
      Average Topology Discovery Time (TDm) = -----------------------
                                                    Total Trials

                                               SUM[SQUAREOF(DTi - TDm)]
      Topology Discovery Time Variance (TDv) = ------------------------
                                                    Total Trials - 1
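
      The average and variance defined above (and the analogous
      statistics used by the other time benchmarks in this section) can
      be computed with a short helper.  The sketch below (Python) is
      purely illustrative and is not part of the methodology; it
      assumes the per-trial values DT1 .. DTn have already been
      measured and uses the sample variance (division by
      Total Trials - 1), matching the formula above.

         from typing import List, Tuple

         def trial_statistics(values: List[float]) -> Tuple[float, float]:
             """Return (average, variance) over per-trial measurements.

             The variance divides by (Total Trials - 1), as in the
             formulas of Section 5.
             """
             n = len(values)
             if n < 2:
                 raise ValueError("at least two trials are required")
             average = sum(values) / n
             variance = sum((v - average) ** 2 for v in values) / (n - 1)
             return average, variance

         # Example: per-trial Topology Discovery Times (DTi = Tmn - Tm1),
         # in seconds.  The values shown are placeholders.
         discovery_times = [2.4, 2.6, 2.5, 2.7]
         tdm, tdv = trial_statistics(discovery_times)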

   Reporting Format:

      The Topology Discovery Time results MUST be reported in tabular
      format, with a row for each successful iteration.  The last row
      of the table indicates the Topology Discovery Time variance, and
      the previous row indicates the Average Topology Discovery Time.

      If this test is repeated with a varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph.  The X coordinate SHOULD be the number of nodes (N), and
      the Y coordinate SHOULD be the Average Topology Discovery Time.

5.1.2.  Asynchronous Message Processing Time

   Objective:

      Measure the time taken by the controller(s) to process an
      asynchronous message, defined as the interval starting with an
      asynchronous message from a Network Device after the discovery of
      all the devices by the controller(s) and ending with a response
      message from the controller(s) at its southbound interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisite:

      The controller MUST have successfully completed the network
      topology discovery for the connected Network Devices.

   Procedure:

      1. Generate asynchronous messages from every connected Network
         Device to the SDN Controller, one at a time in series from the
         forwarding-plane test emulator for the Trial Duration.

      2. Record every request transmit time (T1) and the corresponding
         response received time (R1) at the forwarding-plane test
         emulator interface (I1) for every successful message exchange.

   Measurements:

                                                     SUM{Ri} - SUM{Ti}
      Asynchronous Message Processing Time (APT1) = -----------------
                                                            Nrx

         Where Nrx is the total number of successful messages
         exchanged.

      Average Asynchronous Message Processing Time =

                                              APT1 + APT2 + APT3 .. APTn
                                              --------------------------
                                                     Total Trials

      Asynchronous Message Processing Time Variance (TAMv) =

                                              SUM[SQUAREOF(APTi - TAMm)]
                                              --------------------------
                                                    Total Trials - 1

         Where TAMm is the Average Asynchronous Message Processing
         Time.
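
      For illustration only (not part of the methodology), the
      per-trial value APT can be computed from the recorded timestamps
      as in the following Python sketch, which assumes matched lists of
      transmit times Ti and response times Ri for the successful
      exchanges of one trial:

         from typing import List

         def processing_time(tx_times: List[float],
                             rx_times: List[float]) -> float:
             """Asynchronous Message Processing Time for one trial.

             tx_times[i] (Ti) and rx_times[i] (Ri) are the transmit and
             response timestamps of the i-th successful exchange, as
             recorded at interface I1.  APT = (SUM{Ri} - SUM{Ti}) / Nrx.
             """
             nrx = len(rx_times)
             if nrx == 0 or nrx != len(tx_times):
                 raise ValueError("need one Ti for every successful Ri")
             return (sum(rx_times) - sum(tx_times)) / nrx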

   Reporting Format:

      The Asynchronous Message Processing Time results MUST be reported
      in tabular format, with a row for each iteration.  The last row
      of the table indicates the Asynchronous Message Processing Time
      variance, and the previous row indicates the Average Asynchronous
      Message Processing Time.

      The report SHOULD capture the following information, in addition
      to the configuration parameters captured per Section 4.8:

         -  Successful messages exchanged (Nrx)

         -  Percentage of unsuccessful messages exchanged, computed
            using the formula ((1 - Nrx/Ntx) * 100), where Ntx is the
            total number of messages transmitted to the controller

      If this test is repeated with a varying number of nodes with the
      same topology, the results SHOULD be reported in the form of a
      graph.  The X coordinate SHOULD be the number of nodes (N), and
      the Y coordinate SHOULD be the Average Asynchronous Message
      Processing Time.

5.1.3.  Asynchronous Message Processing Rate

   Objective:

      Measure the number of responses to asynchronous messages (a new
      flow arrival notification message, link down, etc.) for which the
      controller(s) performed processing and replied with a valid and
      productive (non-trivial) response message.

      Using a single procedure, this test will measure the following
      two benchmarks on the Asynchronous Message Processing Rate (see
      Section 2.3.1.3 of [RFC8455]):

         1. Maximum Asynchronous Message Processing Rate

         2. Loss-Free Asynchronous Message Processing Rate

      Here, two benchmarks are determined through a series of trials
      where the number of messages sent to the controller(s) and the
      responses received from the controller(s) are counted over the
      Trial Duration.  The message response rate and the Message Loss
      Ratio are calculated for each trial.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller(s) MUST have successfully completed the network
         topology discovery for the connected Network Devices.

      2. Choose and record the Trial Duration (Td), the sending rate
         STEP size, the tolerance on equality for two consecutive
         trials (P%), and the maximum possible message-sending rate
         (Ntx1/Td).

   Procedure:

      1. Generate asynchronous messages continuously at the maximum
         possible rate on the established connections from all the
         emulated/simulated Network Devices for the given Trial
         Duration (Td).

      2. Record the total number of responses received (Nrx1) from the
         controller as well as the number of messages sent (Ntx1) to
         the controller within the Trial Duration (Td).

      3. Calculate the Asynchronous Message Processing Rate (APR1) and
         the Message Loss Ratio (Lr1).  Ensure that the controller(s)
         has returned to normal operation.

      4. Repeat the trial by reducing the asynchronous message-sending
         rate used in the last trial by the STEP size.

      5. Continue repeating the trials and reducing the sending rate
         until both the maximum value of Nrxn (number of responses
         received from the controller) and the Nrxn corresponding to a
         loss ratio of zero have been found.

      6. The trials corresponding to the benchmark levels MUST be
         repeated using the same asynchronous message rates until the
         responses received from the controller are equal (+/-P%) for
         two consecutive trials.

      7. Record the number of responses received (Nrxn) from the
         controller as well as the number of messages sent (Ntxn) to
         the controller in the last trial.

   Measurements:

                                                      Nrxn
      Asynchronous Message Processing Rate (APRn) = ------
                                                       Td

      Maximum Asynchronous Message Processing Rate = MAX(APRn) for all n

                                                  Nrxn
      Asynchronous Message Loss Ratio (Lrn) = 1 - -----
                                                  Ntxn

      Loss-Free Asynchronous Message Processing Rate = MAX(APRn) given
         Lrn = 0
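
      The rate sweep described in steps 1-5 of the procedure above can
      be expressed compactly in code.  The sketch below (Python) is
      illustrative only: run_trial is a caller-supplied stand-in for
      the test emulator's own interface (an assumption of this
      example), and the repeatability check of steps 6-7 (+/-P% over
      two consecutive trials) is omitted.

         from typing import Callable, List, Tuple

         def rate_benchmarks(run_trial: Callable[[float, float],
                                                 Tuple[int, int]],
                             max_rate: float, step: float,
                             td: float) -> Tuple[float, float]:
             """Sweep the offered rate downward and return
             (Maximum APR, Loss-Free APR).

             run_trial(rate, td) offers 'rate' messages/second for Td
             seconds and returns (Ntxn, Nrxn).
             """
             results: List[Tuple[float, float]] = []  # (APRn, Lrn)
             rate = max_rate
             while rate > 0:
                 ntx, nrx = run_trial(rate, td)
                 apr = nrx / td                  # APRn = Nrxn / Td
                 lr = 1.0 - (nrx / ntx)          # Lrn = 1 - Nrxn/Ntxn
                 results.append((apr, lr))
                 if lr == 0:
                     # Lowering the rate further cannot raise Nrxn, so
                     # both benchmarks are now determined.
                     break
                 rate -= step
             maximum_apr = max(apr for apr, _ in results)
             loss_free_apr = max((apr for apr, lr in results if lr == 0),
                                 default=0.0)
             return maximum_apr, loss_free_apr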

   Reporting Format:

      The Asynchronous Message Processing Rate results MUST be reported
      in tabular format, with a row for each trial.

      The table should report the following information, in addition to
      the configuration parameters captured per Section 4.8, with
      columns:

         -  Offered rate (Ntxn/Td)

         -  Asynchronous Message Processing Rate (APRn)

         -  Loss Ratio (Lr)

         -  Benchmark at this iteration (blank for none, Maximum
            Asynchronous Message Processing Rate, Loss-Free Asynchronous
            Message Processing Rate)

      The results MAY be presented in the form of a graph.  The X axis
      SHOULD be the offered rate, and dual Y axes would represent the
      Asynchronous Message Processing Rate and the Loss Ratio,
      respectively.

      If this test is repeated with a varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph.  The X axis SHOULD be the number of nodes (N), and the
      Y axis SHOULD be the Asynchronous Message Processing Rate.  Both
      the Maximum Asynchronous Message Processing Rate and the
      Loss-Free Asynchronous Message Processing Rate should be plotted
      for each N.

5.1.4.  Reactive Path Provisioning Time

   Objective:

      Measure the time taken by the controller to set up a path
      reactively between source and destination nodes, defined as the
      interval starting with the first flow provisioning request
      message received by the controller(s) at its southbound interface
      and ending with the last flow provisioning response message sent
      from the controller(s) at its southbound interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.  The number of
      Network Devices in the path is a parameter of the test that may
      be varied from two to the maximum discovery size in repetitions
      of this test.

   Prerequisites:

      1. The controller MUST contain the network topology information
         for the deployed network topology.

      2. The controller should know the location of the destination
         endpoint for which the path has to be provisioned.  This can
         be achieved through dynamic learning or static provisioning.

      3. Ensure that the default action for "flow miss" in the Network
         Device is configured to "send to controller".

      4. Ensure that each Network Device in a path requires the
         controller to make the forwarding decision while paving the
         entire path.

   Procedure:

      1. Send a single traffic stream from test traffic generator TP1
         to test traffic generator TP2.

      2. Record the time of the first flow provisioning request message
         sent to the controller (Tsf1) from the Network Device at the
         forwarding-plane test emulator interface (I1).

      3. Wait for the arrival of the first traffic frame at the
         endpoint (i.e., test traffic generator TP2) or the expiry of
         the Trial Duration (Td).

      4. Record the time of the last flow provisioning response message
         received from the controller (Tdf1) to the Network Device at
         the forwarding-plane test emulator interface (I1).

   Measurements:

      Reactive Path Provisioning Time (RPT1) = Tdf1 - Tsf1

      Average Reactive Path Provisioning Time =

                                              RPT1 + RPT2 + RPT3 .. RPTn
                                              --------------------------
                                                      Total Trials

      Reactive Path Provisioning Time Variance (TRPv) =

                                              SUM[SQUAREOF(RPTi - TRPm)]
                                              --------------------------
                                                     Total Trials - 1

         Where TRPm is the Average Reactive Path Provisioning Time.

   Reporting Format:

      The Reactive Path Provisioning Time results MUST be reported in
      tabular format, with a row for each iteration.  The last row of
      the table indicates the Reactive Path Provisioning Time variance,
      and the previous row indicates the Average Reactive Path
      Provisioning Time.

      The report should capture the following information, in addition
      to the configuration parameters captured per Section 4.8:

         -  Number of Network Devices in the path

5.1.5.  Proactive Path Provisioning Time

   Objective:

      Measure the time taken by the controller to set up a path
      proactively between source and destination nodes, defined as the
      interval starting with the first proactive flow provisioned in
      the controller(s) at its northbound interface and ending with the
      last flow provisioning response message sent from the
      controller(s) at its southbound interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST contain the network topology information
         for the deployed network topology.

      2. The controller should know the location of the destination
         endpoint for which the path has to be provisioned.  This can
         be achieved through dynamic learning or static provisioning.

      3. Ensure that the default action for "flow miss" in the Network
         Device is "drop".

   Procedure:

      1. Send a single traffic stream from test traffic generator TP1
         to test traffic generator TP2.

      2. Install the flow entries so that the traffic travels from test
         traffic generator TP1 until it reaches test traffic
         generator TP2 through the controller's northbound interface or
         management interface.

      3. Wait for the arrival of the first traffic frame at test
         traffic generator TP2 or the expiry of the Trial Duration
         (Td).

      4. Record the time when the proactive flow is provisioned in the
         controller (Tsf1) at the management-plane test emulator
         interface (I2).

      5. Record the time of the last flow provisioning message received
         from the controller (Tdf1) at the forwarding-plane test
         emulator interface (I1).

   Measurements:

      Proactive Flow Provisioning Time (PPT1) = Tdf1 - Tsf1

      Average Proactive Path Provisioning Time =

                                              PPT1 + PPT2 + PPT3 .. PPTn
                                              --------------------------
                                                      Total Trials

      Proactive Path Provisioning Time Variance (TPPv) =

                                              SUM[SQUAREOF(PPTi - TPPm)]
                                              --------------------------
                                                     Total Trials - 1

         Where TPPm is the Average Proactive Path Provisioning Time.

   Reporting Format:

      The Proactive Path Provisioning Time results MUST be reported in
      tabular format, with a row for each iteration.  The last row of
      the table indicates the Proactive Path Provisioning Time
      variance, and the previous row indicates the Average Proactive
      Path Provisioning Time.

      The report should capture the following information, in addition
      to the configuration parameters captured per Section 4.8:

         -  Number of Network Devices in the path

5.1.6.  Reactive Path Provisioning Rate

   Objective:

      Measure the maximum number of independent paths a controller can
      concurrently establish per second between source and destination
      nodes reactively, defined as the number of paths provisioned per
      second by the controller(s) at its southbound interface for the
      flow provisioning requests received for path provisioning at its
      southbound interface between the start of the test and the expiry
      of the given Trial Duration.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST contain the network topology information
         for the deployed network topology.

      2. The controller should know the location of destination
         addresses for which the paths have to be provisioned.  This
         can be achieved through dynamic learning or static
         provisioning.

      3. Ensure that the default action for "flow miss" in the Network
         Device is configured to "send to controller".

      4. Ensure that each Network Device in a path requires the
         controller to make the forwarding decision while provisioning
         the entire path.

   Procedure:

      1. Send traffic with unique source and destination addresses from
         test traffic generator TP1.

      2. Record the total number of unique traffic frames (Ndf)
         received at test traffic generator TP2 within the Trial
         Duration (Td).

   Measurements:

                                                 Ndf
      Reactive Path Provisioning Rate (RPR1) = ------
                                                 Td

      Average Reactive Path Provisioning Rate =

                                              RPR1 + RPR2 + RPR3 .. RPRn
                                              --------------------------
                                                     Total Trials

      Reactive Path Provisioning Rate Variance (RPPv) =

                                              SUM[SQUAREOF(RPRi - RPPm)]
                                              --------------------------
                                                    Total Trials - 1

         Where RPPm is the Average Reactive Path Provisioning Rate.

   Reporting Format:

      The Reactive Path Provisioning Rate results MUST be reported in
      tabular format, with a row for each iteration.  The last row of
      the table indicates the Reactive Path Provisioning Rate variance,
      and the previous row indicates the Average Reactive Path
      Provisioning Rate.

      The report should capture the following information, in addition
      to the configuration parameters captured per Section 4.8:

         -  Number of Network Devices in the path

         -  Offered rate

5.1.7.  Proactive Path Provisioning Rate

   Objective:

      Measure the maximum number of independent paths a controller can
      concurrently establish per second between source and destination
      nodes proactively, defined as the number of paths provisioned per
      second by the controller(s) at its southbound interface for the
      paths requested in its northbound interface between the start of
      the test and the expiry of the given Trial Duration.  The
      measurement is based on data-plane observations of successful
      path activation.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST contain the network topology information
         for the deployed network topology.

      2. The controller should know the location of destination
         addresses for which the paths have to be provisioned.  This
         can be achieved through dynamic learning or static
         provisioning.

      3. Ensure that the default action for "flow miss" in the Network
         Device is "drop".

   Procedure:

      1. Send traffic continuously with unique source and destination
         addresses from test traffic generator TP1.

      2. Install corresponding flow entries so that the traffic travels
         from simulated sources at test traffic generator TP1 until it
         reaches the simulated destinations at test traffic
         generator TP2 through the controller's northbound interface or
         management interface.

      3. Record the total number of unique traffic frames (Ndf)
         received at test traffic generator TP2 within the Trial
         Duration (Td).

   Measurements:

                                                  Ndf
      Proactive Path Provisioning Rate (PPR1) = ------
                                                  Td

      Average Proactive Path Provisioning Rate =

                                              PPR1 + PPR2 + PPR3 .. PPRn
                                              --------------------------
                                                      Total Trials

      Proactive Path Provisioning Rate Variance (PPPv) =

                                              SUM[SQUAREOF(PPRi - PPPm)]
                                              --------------------------
                                                    Total Trials - 1

         Where PPPm is the Average Proactive Path Provisioning Rate.

   Reporting Format:

      The Proactive Path Provisioning Rate results MUST be reported in
      tabular format, with a row for each iteration.  The last row of
      the table indicates the Proactive Path Provisioning Rate
      variance, and the previous row indicates the Average Proactive
      Path Provisioning Rate.

      The report should capture the following information, in addition
      to the configuration parameters captured per Section 4.8:

         -  Number of Network Devices in the path

         -  Offered rate

5.1.8.  Network Topology Change Detection Time

   Objective:

      Measure the amount of time taken by the controller to detect any
      changes in the network topology, defined as the interval starting
      with the notification message received by the controller(s) at
      its southbound interface and ending with the first topology
      rediscovery message sent from the controller(s) at its southbound
      interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST have successfully discovered the network
         topology information for the deployed network topology.

      2. The periodic network discovery operation should be configured
         to twice the Trial Duration (Td) value.

   Procedure:

      1. Trigger a topology change event by bringing down an active
         Network Device in the topology.

      2. Record the time when the first topology change notification is
         sent to the controller (Tcn) at the forwarding-plane test
         emulator interface (I1).

      3. Stop the trial when the controller sends the first topology
         rediscovery message to the Network Device or the expiry of the
         Trial Duration (Td).

      4. Record the time when the first topology rediscovery message is
         received from the controller (Tcd) at the forwarding-plane
         test emulator interface (I1).

   Measurements:

      Network Topology Change Detection Time (TDT1) = Tcd - Tcn

      Average Network Topology Change Detection Time =

                                              TDT1 + TDT2 + TDT3 .. TDTn
                                              --------------------------
                                                      Total Trials

      Network Topology Change Detection Time Variance (NTDv) =

                                              SUM[SQUAREOF(TDTi - NTDm)]
                                              --------------------------
                                                    Total Trials - 1

         Where NTDm is the Average Network Topology Change Detection
         Time.

   Reporting Format:

      The Network Topology Change Detection Time results MUST be
      reported in tabular format, with a row for each iteration.  The
      last row of the table indicates the Network Topology Change
      Detection Time variance, and the previous row indicates the
      Average Network Topology Change Detection Time.

5.2.  Scalability

5.2.1.  Control Sessions Capacity

   Objective:

      Measure the maximum number of control sessions the controller can
      maintain, defined as the number of sessions that the controller
      can accept from Network Devices, starting with the first control
      session and ending with the last control session that the
      controller(s) accepts at its southbound interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      None

   Procedure:

      1. Establish control connections with the controller from every
         Network Device emulated in the forwarding-plane test emulator.

      2. Stop the trial when the controller starts dropping the control
         connections.

      3. Record the number of successful connections established (CCn)
         with the controller at the forwarding-plane test emulator.

   Measurement:

      Control Sessions Capacity = CCn
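
      As an illustration only, the sketch below counts how many
      southbound TCP connections a controller accepts before refusing
      one.  The controller address, the use of plain TCP, and the upper
      bound are assumptions; a real forwarding-plane test emulator
      would also complete the OpenFlow (or other southbound protocol)
      handshake for every emulated Network Device.

         import socket

         def control_sessions_capacity(host="192.0.2.1", port=6653,
                                       limit=100000):
             # Open connections toward the controller's southbound port
             # until one is refused; CCn is the number that succeeded.
             sessions = []
             try:
                 for _ in range(limit):
                     sessions.append(
                         socket.create_connection((host, port), timeout=5))
             except OSError:
                 pass
             ccn = len(sessions)
             for s in sessions:
                 s.close()
             return ccn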

   Reporting Format:

      The Control Sessions Capacity results MUST be reported in addition
      to the configuration parameters captured per Section 4.8.

5.2.2.  Network Discovery Size

   Objective:

      Measure the network size (number of nodes, links, and hosts) that
      a controller can discover, defined as the size of a network that
      the controller(s) can discover, starting with a network topology
      provided by the user for discovery and ending with the number of
      nodes, links, and hosts that the controller(s) were able to
      successfully discover.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller MUST support automatic network discovery.

      2. The tester should be able to retrieve the discovered topology
         information through either the controller's management
         interface or northbound interface.

   Procedure:

      1. Establish the network connections between the controller and
         the network nodes.

      2. Query the controller every t seconds (the RECOMMENDED value for
         t is 30) to obtain the discovered network topology information
         through the northbound interface or the management interface.

      3. Stop the trial when the discovered network topology information
         remains the same as that of the last two query responses.

      4. Compare the obtained network topology information with the
         deployed network topology information.

      5. If the comparison is successful, increase the number of nodes
         by 1 and repeat the trial.
         If the comparison is unsuccessful, decrease the number of nodes
         by 1 and repeat the trial.

      6. Continue the trial until the comparison (step 5) is successful.

      7. Record the number of nodes for the last trial run (Ns) where
         the topology comparison was successful.

   Measurement:

       Network Discovery Size = Ns
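
      The grow/shrink search in the procedure above can be captured in
      a few lines.  In this sketch, trial(n) is a hypothetical helper
      that performs steps 1-4 for a deployed topology of n nodes and
      returns True when the discovered and deployed topologies match:

         def network_discovery_size(n, trial):
             # Step 5: grow the network on success, shrink it on
             # failure; step 6: stop at the first success after
             # shrinking and report that size as Ns.
             shrinking = False
             while True:
                 if trial(n):
                     if shrinking:
                         return n      # Ns
                     n += 1
                 else:
                     shrinking = True
                     n -= 1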

   Reporting Format:

      The Network Discovery Size results MUST be reported in addition to
      the configuration parameters captured per Section 4.8.

5.2.3.  Forwarding Table Capacity

   Objective:

      Measure the maximum number of flow entries a controller can manage
      in its Forwarding Table.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. The controller's Forwarding Table should be empty.

      2. "Flow idle time" MUST be set to a higher or infinite value.

      3. The controller MUST have successfully completed network
         topology discovery.

      4. The tester should be able to retrieve the Forwarding Table
         information through either the controller's management
         interface or northbound interface.

   Procedures:

      o  Reactive Flow Provisioning Mode:

         1. Send bidirectional traffic continuously with unique source
            and destination addresses from test traffic generators TP1
            and TP2 at the Asynchronous Message Processing Rate of the
            controller.

         2. Query the controller at a regular interval (e.g., every
            5 seconds) for the number of learned flow entries from its
            northbound interface.

         3. Stop the trial when the retrieved value is constant for
            three consecutive iterations, and record the value received
            from the last query (Nrp).

      o  Proactive Flow Provisioning Mode:

         1. Install unique flows continuously through the controller's
            northbound interface or management interface until a failure
            response is received from the controller.

         2. Record the total number of successful responses (Nrp).

         Note:

         Some controller designs for Proactive Flow Provisioning mode
         may require the switch to send flow setup requests in order to
         generate flow setup responses.  In such cases, it is
         recommended to generate bidirectional traffic for the
         provisioned flows.

   Measurements:

      Proactive Flow Provisioning Mode:

         Max Flow Entries = Total number of flows provisioned (Nrp)

      Reactive Flow Provisioning Mode:

         Max Flow Entries = Total number of learned flow entries (Nrp)

      Forwarding Table Capacity = Max Flow Entries
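
      For the Reactive Flow Provisioning mode, the "constant for three
      consecutive iterations" rule in the procedure can be expressed as
      below.  The northbound query itself is controller specific, so
      flow_entry_count() is a hypothetical helper here:

         import time

         def forwarding_table_capacity(flow_entry_count, interval=5):
             # Poll the controller's northbound interface until the
             # learned flow-entry count is identical for three
             # consecutive queries; that count is Nrp.
             history = []
             while True:
                 time.sleep(interval)
                 history.append(flow_entry_count())
                 if len(history) >= 3 and len(set(history[-3:])) == 1:
                     return history[-1]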

   Reporting Format:

      The Forwarding Table Capacity results MUST be tabulated with the
      following information, in addition to the configuration
      parameters captured per Section 4.8:

         -  Provisioning Type (Proactive/Reactive)

5.3.  Security

5.3.1.  Exception Handling

   Objective:

      Determine the effects of handling error packets and notifications
      on performance tests.  The impact MUST be measured for the
      following performance tests:

         1. Path Provisioning Rate

         2. Path Provisioning Time

         3. Network Topology Change Detection Time

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. This test MUST be performed after obtaining the baseline
         measurement results for the performance tests listed above.

      2. Ensure that the invalid messages are not dropped by the
         intermediate devices connecting the controller and Network
         Devices.

   Procedure:

      1. Perform the above-listed performance tests, and send 1% of the
         messages from the Asynchronous Message Processing Rate test
         (Section 5.1.3) as invalid messages from the connected Network
         Devices emulated at the forwarding-plane test emulator.

      2. Perform the above-listed performance tests, and send 2% of the
         messages from the Asynchronous Message Processing Rate test
         (Section 5.1.3) as invalid messages from the connected Network
         Devices emulated at the forwarding-plane test emulator.

      Note:

      Invalid messages can be frames with incorrect protocol fields or
      any form of failure notifications sent towards the controller.

   Measurements:

      Measurements MUST be done as per the equation defined in the
      "Measurements" section of the corresponding performance test
      listed under "Objective".

   Reporting Format:

      The Exception Handling results MUST be reported in tabular format,
      with a column for each of the below parameters and a row for each
      of the above-listed performance tests:

         -  Without Exceptions

         -  With 1% Exceptions

         -  With 2% Exceptions

5.3.2.  Handling Denial-of-Service Attacks

   Objective:

      Determine the effects of handling DoS attacks on performance and
      scalability tests.  The impact MUST be measured for the following
      tests:

         1. Path Provisioning Rate

         2. Path Provisioning Time

         3. Network Topology Change Detection Time

         4. Network Discovery Size

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisite:

      This test MUST be performed after obtaining the baseline
      measurement results for the performance tests listed above.

   Procedure:

      Perform the above-listed tests, and launch a DoS attack towards
      the controller while the trial is running.

      Note: DoS attacks can be launched on one of the following
      interfaces:

         1. Northbound (e.g., query for flow entries continuously on the
            northbound interface)

         2. Management (e.g., Ping requests to the controller's
            management interface)

         3. Southbound (e.g., TCP SYN messages on the southbound
            interface)

   Measurements:

      Measurements MUST be done as per the equation defined in the
      "Measurements" section of the corresponding test listed under
      "Objective".

   Reporting Format:

      The results regarding the handling of DoS attacks MUST be reported
      in tabular format, with a column for each of the below parameters
      and a row for each of the above-listed tests.

         -  Without any attacks

         -  With attacks

      The report should also specify the nature of the attack and the
      interface in question.

5.4.  Reliability

5.4.1.  Controller Failover Time

   Objective:

      Measure the time taken to switch from an active controller to the
      backup controller when the controllers work in redundancy mode and
      the active controller fails, defined as the interval starting when
      the active controller is brought down and ending with the first
      rediscovery message received from the new controller at its
      southbound interface.

   Reference Test Setup:

      This test SHOULD use the test setup illustrated in Section 3.2 of
      this document.

   Prerequisites:

      1. Master controller election MUST be completed.

      2. Nodes are connected to the controller cluster as per the
         implemented redundancy mode (e.g., active-standby).

      3. The controller cluster should have successfully completed the
         network topology discovery.

      4. The Network Device MUST send all new flows to the controller
         when it receives them from the test traffic generator.

      5. The controller should have learned the location of the
         destination (D1) at test traffic generator TP2.

   Procedure:

      1. Send unidirectional traffic continuously with incremental
         sequence numbers and source addresses from test traffic
         generator TP1 at the rate at which the controller can process
         the traffic without any drops.

      2. Ensure that there are no packet drops observed at test traffic
         generator TP2.

      3. Bring down the active controller.

      4. Stop the trial when the first frame after the failover
         operation is received on test traffic generator TP2.

      5. Record the time at which the last valid frame was received (T1)
         at test traffic generator TP2 before the sequence error and the
         time at which the first valid frame was received (T2) after the
         sequence error at test traffic generator TP2.

   Measurements:

      Controller Failover Time = (T2 - T1)

      Packet Loss = Number of missing packet sequences
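
      Given a capture of (sequence number, arrival time) pairs taken at
      test traffic generator TP2, both values can be derived as in the
      sketch below.  It assumes the failover produces a single gap in
      the sequence numbers:

         def failover_time_and_loss(records):
             # records: (sequence_number, arrival_time) tuples in
             # arrival order, captured at TP2.
             t1 = t2 = None
             lost = 0
             for (p_seq, p_t), (seq, t) in zip(records, records[1:]):
                 if seq != p_seq + 1:         # gap caused by the failover
                     t1, t2 = p_t, t          # last frame before, first after
                     lost += seq - p_seq - 1  # missing packet sequences
             return (t2 - t1), lost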

   Reporting Format:

      The Controller Failover Time results MUST be tabulated with the
      following information:

         -  Number of cluster nodes

         -  Redundancy mode

         -  Controller Failover Time

         -  Packet Loss

         -  Cluster keep-alive interval

5.4.2.  Network Re-provisioning Time

   Objective:

      Measure the time taken by the controller to reroute traffic when
      there is a failure in existing traffic paths, defined as the
      interval starting with the first failure notification message
      received by the controller and ending with the last flow
      re-provisioning message sent by the controller at its southbound
      interface.

   Reference Test Setup:

      This test SHOULD use one of the test setups illustrated in
      Section 3.1 or Section 3.2 of this document.

   Prerequisites:

      1. A network with a specified number of nodes and redundant paths
         MUST be deployed.

      2. The controller MUST know the location of test traffic
         generators TP1 and TP2.

      3. Ensure that the controller does not pre-provision the alternate
         path in the emulated Network Devices at the forwarding-plane
         test emulator.

   Procedure:

      1. Send bidirectional traffic continuously with a unique sequence
         number from test traffic generators TP1 and TP2.

      2. Bring down a link or switch in the traffic path.

      3. Stop the trial after receiving the first frame after network
         reconvergence.

      4. Record the time of the last received frame prior to the frame
         loss at test traffic generator TP2 (TP2-Tlfr) and the time of
         the first frame received after the frame loss at test traffic
         generator TP2 (TP2-Tffr).  There must be a gap in sequence
         numbers of these frames.

      5. Record the time of the last received frame prior to the frame
         loss at test traffic generator TP1 (TP1-Tlfr) and the time of
         the first frame received after the frame loss at test traffic
         generator TP1 (TP1-Tffr).

   Measurements:

      Forward Direction Path Re-provisioning Time (FDRT)
                                                 = (TP2-Tffr - TP2-Tlfr)

      Reverse Direction Path Re-provisioning Time (RDRT)
                                                 = (TP1-Tffr - TP1-Tlfr)

      Network Re-provisioning Time = (FDRT + RDRT)/2

      Forward Direction Packet Loss = Number of missing sequence frames
         at test traffic generator TP1

      Reverse Direction Packet Loss = Number of missing sequence frames
         at test traffic generator TP2

   Reporting Format:

      The Network Re-provisioning Time results MUST be tabulated with
      the following information:

         -  Number of nodes in the primary path

         -  Number of nodes in the alternate path

         -  Network Re-provisioning Time

         -  Forward Direction Packet Loss

         -  Reverse Direction Packet Loss

6.  IANA Considerations

   This document has no IANA actions.

7.  Security Considerations

   The benchmarking tests described in this document are limited to the
   performance characterization of controllers in a lab environment with
   isolated networks.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the controller.

   Special capabilities SHOULD NOT exist in the controller specifically
   for benchmarking purposes.  Any implications for network security
   arising from the controller SHOULD be identical in the lab and in
   production networks.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in
              RFC 2119 Key Words", BCP 14, RFC 8174,
              DOI 10.17487/RFC8174, May 2017,
              <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8455]  Bhuvaneswaran, V., Basil, A., Tassinari, M., Manral, V.,
              and S. Banks, "Terminology for Benchmarking
              Software-Defined Networking (SDN) Controller Performance",
              RFC 8455, DOI 10.17487/RFC8455, October 2018,
              <https://www.rfc-editor.org/info/rfc8455>.

8.2.  Informative References

   [OpenFlow-Spec]
              ONF, "OpenFlow Switch Specification" Version 1.4.0 (Wire
              Protocol 0x05), October 2013,
              <https://www.opennetworking.org/wp-content/
              uploads/2014/10/openflow-spec-v1.4.0.pdf>.

Appendix A.  Benchmarking Methodology Using OpenFlow Controllers

   This section gives an overview of the OpenFlow protocol
   [OpenFlow-Spec] and provides a test methodology for benchmarking SDN
   Controllers supporting the OpenFlow southbound protocol.  The
   OpenFlow protocol is used as an example to illustrate the
   methodologies defined in this document.

A.1.  Protocol Overview

   OpenFlow [OpenFlow-Spec] is an open standard protocol defined by the
   Open Networking Foundation (ONF) and used for programming the
   forwarding plane of network switches or routers via a centralized
   controller.

A.2.  Messages Overview

   The OpenFlow protocol supports three message types -- namely,
   controller-to-switch, asynchronous, and symmetric.

   Controller-to-switch messages are initiated by the controller and
   used to directly manage or inspect the state of the switch.  These
   messages allow controllers to query/configure the switch ("features"
   messages, configuration messages), collect information from a switch
   (Read-State messages), send packets on a specified port of a switch
   (OFPT_PACKET_OUT messages), and modify the switch forwarding plane
   and state (Modify-State messages, Role-Request messages, etc.).

   Asynchronous messages are generated by the switch without a
   controller soliciting them.  These messages allow switches to update
   controllers to denote an arrival of a new flow (OFPT_PACKET_IN
   messages), switch state changes ("flow-removed" messages, port-status
   messages), and errors (Error messages).

   Symmetric messages are generated in either direction without
   solicitation.  These messages allow switches and controllers to set
   up a connection (Hello messages), verify liveness (Echo messages),
   and offer additional functionalities (Experimenter messages).

A.3.  Connection Overview

   The OpenFlow channel is used to exchange OpenFlow messages between an
   OpenFlow switch and an OpenFlow controller.  The OpenFlow channel
   connection can be set up using plain TCP or TLS.  By default, a
   switch establishes a single connection with the SDN Controller.  A
   switch may establish multiple parallel connections to a single
   controller (auxiliary connection) or multiple controllers to handle
   controller failures and load balancing.
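
   Every OpenFlow message starts with a common 8-byte header carrying
   the protocol version, message type, total length, and transaction
   id, and the channel is opened with an OFPT_HELLO exchange.  The
   fragment below encodes such a header for OpenFlow 1.4 (wire protocol
   0x05, per [OpenFlow-Spec]); it is only a sketch of the wire format,
   not a complete channel implementation:

      import struct

      OFP_VERSION_1_4 = 0x05   # wire protocol version used here
      OFPT_HELLO = 0           # message type 0 is OFPT_HELLO

      def ofp_header(msg_type, xid, payload=b"", version=OFP_VERSION_1_4):
          # version (1 byte), type (1 byte), length (2 bytes),
          # transaction id (4 bytes), all big-endian, then the payload.
          return struct.pack("!BBHI", version, msg_type,
                             8 + len(payload), xid) + payload

      hello = ofp_header(OFPT_HELLO, xid=1)   # 8-byte OFPT_HELLO message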

A.4.  Performance Benchmarking Tests

A.4.1.  Network Topology Discovery Time

   Procedure:

      Network Devices               OpenFlow                    SDN
                                   Controller               Application
             |                            |                           |
             |                            |<Initialize controller     |
             |                            |app., NB and SB interfaces>|
             |                            |                           |
             |<Deploy network with        |                           |
             | given no. of OF switches>  |                           |
             |                            |                           |
             |    OFPT_HELLO Exchange     |                           |
             |<-------------------------->|                           |
             |                            |                           |
             |   OFPT_PACKET_OUT with LLDP|                           |
             |             to all switches|                           |
        (Tm1)|<---------------------------|                           |
             |                            |                           |
             |    OFPT_PACKET_IN with LLDP|                           |
             |          rcvd from Switch 1|                           |
             |--------------------------->|                           |
             |                            |                           |
             |    OFPT_PACKET_IN with LLDP|                           |
             |          rcvd from Switch 2|                           |
             |--------------------------->|                           |
             |            .               |                           |
             |            .               |                           |
             |                            |                           |
             |    OFPT_PACKET_IN with LLDP|                           |
             |          rcvd from Switch n|                           |
        (Tmn)|--------------------------->|                           |
             |                            |                           |
             |                            |    <Wait for the expiry of|
             |                            |   the Trial Duration (Td)>|
             |                            |                           |
             |                            |   Query the controller for|
             |                            |  discovered n/w topo. (Di)|
             |                            |<--------------------------|
             |                            |                           |
             |                            |    <Compare the discovered|
             |                            |       n/w topology and the|
             |                            |      offered n/w topology>|
             |                            |                           |

   Legend:

      NB: Northbound
      SB: Southbound
      OF: OpenFlow
      OFP: OpenFlow Protocol
      LLDP: Link-Layer Discovery Protocol
      Tm1: Time of reception of first LLDP message from controller
      Tmn: Time of last LLDP message sent to controller

   Discussion:

      The Network Topology Discovery Time can be obtained by calculating
      the time difference between the first OFPT_PACKET_OUT with an LLDP
      message received from the controller (Tm1) and the last
      OFPT_PACKET_IN with an LLDP message sent to the controller (Tmn)
      when the comparison is successful.

A.4.2.  Asynchronous Message Processing Time

   Procedure:

         Network Devices            OpenFlow                    SDN
                                   Controller               Application
             |                            |                           |
             |OFPT_PACKET_IN with single  |                           |
             |OFP match header            |                           |
         (T0)|--------------------------->|                           |
             |                            |                           |
             |OFPT_PACKET_OUT with single |                           |
             |OFP action header           |                           |
         (R0)|<---------------------------|                           |
             |          .                 |                           |
             |          .                 |                           |
             |          .                 |                           |
             |                            |                           |
             |OFPT_PACKET_IN with single  |                           |
             |OFP match header            |                           |
         (Tn)|--------------------------->|                           |
             |                            |                           |
             |OFPT_PACKET_OUT with single |                           |
             |OFP action header           |                           |
         (Rn)|<---------------------------|                           |
             |                            |                           |
             |<Wait for the expiry of the |                           |
             |Trial Duration>             |                           |
             |                            |                           |
             |<Record the number of       |                           |
             |OFPT_PACKET_INs/            |                           |
             |OFPT_PACKET_OUTs            |                           |
             |exchanged (Nrx)>            |                           |
             |                            |                           |

   Legend:

      T0,T1, ..Tn: transmit timestamps of OFPT_PACKET_IN messages
      R0,R1, ..Rn: receive timestamps of OFPT_PACKET_OUT messages
      Nrx: Number of successful OFPT_PACKET_IN/OFPT_PACKET_OUT
           message exchanges

   Discussion:

      The Asynchronous Message Processing Time will be obtained by
      calculating the sum of
   ((R0-T0),(R1-T1)..(Rn ((R0 - T0),(R1 - T1)..(Rn - Tn))/ Nrx. Tn))/Nrx.
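
      Equivalently, with matching lists of transmit and receive
      timestamps, the metric is the mean per-message delay over the Nrx
      successful exchanges, e.g.:

         def async_message_processing_time(tx_times, rx_times):
             # tx_times: T0..Tn for OFPT_PACKET_INs sent to the controller
             # rx_times: R0..Rn for the matching OFPT_PACKET_OUTs received
             nrx = len(rx_times)
             return sum(r - t for t, r in zip(tx_times, rx_times)) / nrx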

A.4.3.  Asynchronous Message Processing Rate

   Procedure:

         Network Devices           OpenFlow                    SDN
                                  Controller               Application
            |                            |                          |
             |OFPT_PACKET_IN with single  |                          |
             |OFP match header            |                          |
             |--------------------------->|                          |
             |                            |                          |
             |OFPT_PACKET_OUT with single |                          |
             |OFP action header           |                          |
             |<---------------------------|                          |
             |                            |                          |
             |            .               |                          |
             |            .               |                          |
             |            .               |                          |
             |                            |                          |
             |OFPT_PACKET_IN with single  |                          |
             |OFP match header            |                          |
             |--------------------------->|                          |
             |                            |                          |
             |OFPT_PACKET_OUT with single |                          |
             |OFP action header           |                          |
             |<---------------------------|                          |
             |                            |                          |
             |<Repeat the steps until     |                          |
             |the expiry of the           |                          |
             |Trial Duration>             |                          |
            |                            |                          |
            |<Record the number of OFP   |                          |
      (Ntx1)|match headers sent>         |                          |
            |                            |                          |
            |<Record the number of OFP   |                          |
      (Nrx1)|action headers rcvd>        |                          |
            |                            |                          |

      Note: The Ntx1 on initial trials should be greater than Nrx1.
      Repeat the trials until the Nrxn for two consecutive trials equals
      (+/-P%).

   Discussion:

      Using a single procedure, this test will measure two benchmarks:

         1. The Maximum Asynchronous Message Processing Rate will be
            obtained by calculating the maximum OFPT_PACKET_OUTs (Nrxn)
            received from the controller(s) across n trials.

         2. The Loss-Free Asynchronous Message Processing Rate will be
            obtained by calculating the maximum OFPT_PACKET_OUTs
            received from the controller(s) when the Loss Ratio equals
            zero.  The Loss Ratio is obtained by calculating
            1 - Nrxn/Ntxn.
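
      With per-trial counters Ntx1..Ntxn (OFP match headers sent) and
      Nrx1..Nrxn (OFP action headers received), already normalized to
      messages per second, both benchmarks fall out directly, as in
      this sketch:

         def async_message_processing_rates(ntx, nrx):
             # Maximum rate: largest Nrx observed across trials.
             maximum = max(nrx)
             # Loss-free rate: largest Nrx among trials whose Loss
             # Ratio (1 - Nrx/Ntx) is zero, i.e., Nrx == Ntx.
             loss_free = max((r for t, r in zip(ntx, nrx) if r == t),
                             default=0)
             return maximum, loss_free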

A.4.4.  Reactive Path Provisioning Time

   Procedure:

       Test Traffic     Test Traffic      Network Devices      OpenFlow
      Generator TP1    Generator TP2                          Controller
            |             |                      |                     |
            |             |G-ARP (D1)            |                     |
            |             |--------------------->|                     |
            |             |                      |                     |
            |             |                      |OFPT_PACKET_IN(D1)   |
            |             |                      |-------------------->|
            |             |                      |                     |
            |Traffic (S1,D1)                     |                     |
      (Tsf1)|----------------------------------->|                     |
            |             |                      |                     |
            |             |                      |                     |
            |             |                      |                     |
            |             |                      |OFPT_PACKET_IN(S1,D1)|
            |             |                      |-------------------->|
            |             |                      |                     |
            |             |                      |  FLOW_MOD(D1)       |
            |             |                      |<--------------------|
            |             |                      |                     |
            |             |Traffic (S1,D1)       |                     |
            |       (Tdf1)|<---------------------|                     |
            |             |                      |                     |

   Legend:

      G-ARP: Gratuitous ARP message
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received from TP2

   Discussion:

      The Reactive Path Provisioning Time can be obtained by finding the
      time difference between the transmit and receive times of the
      traffic (Tsf1 - Tdf1).

A.4.5.  Proactive Path Provisioning Time

   Procedure:

   Test Traffic  Test Traffic    Network Devices OpenFlow       SDN
   Generator TP1 Generator TP2                  Controller   Application
         |            |               |                  |             |
         |            |               |                  |             |
         |            |               |                  |<Install flow|
         |            |               |                  |  for S1,D1> |
         |            |G-ARP (D1)     |                  |             |
         |            |-------------->|                  |             |
         |            |               |                  |             |
         |            |               |OFPT_PACKET_IN(D1)|             |
         |            |               |----------------->|             |
         |            |               |                  |             |
         |Traffic (S1,D1)             |                  |             |
   (Tsf1)|--------------------------->|                  |             |
         |            |               |                  |             |
         |            |               |   FLOW_MOD(D1)   |             |
         |            |               |<-----------------|             |
         |            |               |                  |             |
         |            |Traffic (S1,D1)|                  |             |
         |      (Tdf1)|<--------------|                  |             |
         |            |               |                  |             |

   Legend:

      G-ARP: Gratuitous ARP message
      Tsf1: Time of first frame sent from TP1
      Tdf1: Time of first frame received from TP2

   Discussion:

      The Proactive Path Provisioning Time can be obtained by finding
      the time difference between the transmit and receive times of the
      traffic (Tsf1 - Tdf1).

A.4.6.  Reactive Path Provisioning Rate

   Procedure:

       Test Traffic     Test Traffic   Network Devices         OpenFlow
      Generator TP1    Generator TP2                         Controller
            |             |                    |                      |
            |             |                    |                      |
            |             |                    |                      |
            |             |G-ARP (D1..Dn)      |                      |
            |             |--------------------|                      |
            |             |                    |                      |
            |             |                    |OFPT_PACKET_IN(D1..Dn)|
            |             |                    |--------------------->|
            |             |                    |                      |
            |Traffic (S1..Sn,D1..Dn)           |                      |
            |--------------------------------->|                      |
            |             |                    |                      |
            |             |                    |OFPT_PACKET_IN(S1..Sn,|
            |             |                    |               D1..Dn)|
            |             |                    |--------------------->|
            |             |                    |                      |
            |             |                    |        FLOW_MOD(S1)  |
            |             |                    |<---------------------|
            |             |                    |                      |
            |             |                    |        FLOW_MOD(D1)  |
            |             |                    |<---------------------|
            |             |                    |                      |
            |             |                    |        FLOW_MOD(S2)  |
            |             |                    |<---------------------|
            |             |                    |                      |
            |             |                    |        FLOW_MOD(D2)  |
            |             |                    |<---------------------|
            |             |                    |             .        |
            |             |                    |             .        |
            |             |                    |                      |
            |             |                    |        FLOW_MOD(Sn)  |
            |             |                    |<---------------------|
            |             |                    |                      |
            |             |                    |        FLOW_MOD(Dn)  |
            |             |                    |<---------------------|
            |             |                    |                      |
            |             | Traffic (S1..Sn,   |                      |
            |             |             D1..Dn)|                      |
            |             |<-------------------|                      |
            |             |                    |                      |
            |             |                    |                      |

   Legend:

      G-ARP: Gratuitous ARP message
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ...,
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 ...,
              Source Endpoint n

   Discussion:

      The Reactive Path Provisioning Rate can be obtained by finding the
      total number of frames received at test traffic generator TP2
      after the Trial Duration.

A.4.7.  Proactive Path Provisioning Rate

   Procedure:

   Test Traffic  Test Traffic   Network Devices   OpenFlow        SDN
   Generator TP1 Generator TP2                   Controller  Application
         |            |                |                 |             |
         |            |G-ARP (D1..Dn)  |                 |             |
         |            |--------------->|                 |             |
         |            |                |                 |             |
         |            |                |OFPT_PACKET_IN   |             |
         |            |                |         (D1..Dn)|             |
         |            |                |---------------->|             |
         |            |                |                 |             |
         |Traffic (S1..Sn,D1..Dn)      |                 |             |
   (Tsf1)|---------------------------->|                 |             |
         |            |                |                 |             |
         |            |                |                 |<Install flow|
         |            |                |                 |  for S1,D1> |
         |            |                |                 |             |
         |            |                |                 |       .     |
         |            |                |                 |<Install flow|
         |            |                |                 |  for Sn,Dn> |
         |            |                |                 |             |
         |            |                |  FLOW_MOD(S1)   |             |
         |            |                |<----------------|             |
         |            |                |                 |             |
         |            |                |  FLOW_MOD(D1)   |             |
         |            |                |<----------------|             |
         |            |                |                 |             |
         |            |                |       .         |             |
         |            |                |  FLOW_MOD(Sn)   |             |
         |            |                |<----------------|             |
         |            |                |                 |             |
         |            |                |  FLOW_MOD(Dn)   |             |
         |            |                |<----------------|             |
         |            |                |                 |             |
         |            |Traffic (S1..Sn,|                 |             |
         |            |         D1..Dn)|                 |             |
         |      (Tdf1)|<---------------|                 |             |
         |            |                |                 |             |

   Legend:

      G-ARP: Gratuitous ARP message
      D1..Dn: Destination Endpoint 1, Destination Endpoint 2 ...,
              Destination Endpoint n
      S1..Sn: Source Endpoint 1, Source Endpoint 2 ...,
              Source Endpoint n

   Discussion:

      The Proactive Path Provisioning Rate can be obtained by finding
      the total number of frames received at test traffic generator TP2
      after the Trial Duration.

A.4.8.  Network Topology Change Detection Time

   Procedure:

       Network Devices              OpenFlow                    SDN
                                   Controller               Application
            |                            |                           |
            |                            |     <Bring down a link in |
            |                            |                 Switch S1>|
            |                            |                           |
         T0 |PORT_STATUS with link down  |                           |
            | from S1                    |                           |
            |--------------------------->|                           |
            |                            |                           |
            |First OFPT_PACKET_OUT with  |                           |
            |LLDP to OF switch           |                           |
         T1 |<---------------------------|                           |
            |                            |                           |
            |                            |      <Record time of first|
            |                            |       OFPT_PACKET_OUT with|
            |                            |                   LLDP T1>|
            |                            |                           |

   Discussion:

      The Network Topology Change Detection Time can be obtained by
      finding the difference between the time that OpenFlow Switch S1
      sends the PORT_STATUS message (T0) and the time that the OpenFlow
      controller sends the first topology rediscovery message (T1) to
      OpenFlow switches.

A.5.  Scalability

A.5.1.  Control Sessions Capacity

   Procedure:

         Network Devices                        OpenFlow
                                               Controller
            |                                       |
            |    OFPT_HELLO Exchange for Switch 1   |
            |<------------------------------------->|
            |                                       |
            |    OFPT_HELLO Exchange for Switch 2   |
            |<------------------------------------->|
            |                  .                    |
            |                  .                    |
            |                  .                    |
            |    OFPT_HELLO Exchange for Switch n   |
            |X<----------------------------------->X|
            |                                       |

   Discussion:

      The value of Switch (n - 1) will provide the Control Sessions
      Capacity.

A.5.2.  Network Discovery Size

   Procedure:

       Network Devices              OpenFlow                    SDN
                                   Controller               Application
            |                            |                           |
            |                            |     <Deploy network with  |
            |                            |given no. of OF switches N>|
            |                            |                           |
            |    OFPT_HELLO Exchange     |                           |
            |<-------------------------->|                           |
            |                            |                           |
            |   OFPT_PACKET_OUT with LLDP|                           |
            |      to all switches       |                           |
            |<---------------------------|                           |
            |                            |                           |
            |    OFPT_PACKET_IN with LLDP|                           |
            |          rcvd from Switch 1|                           |
            |--------------------------->|                           |
            |                            |                           |
            |    OFPT_PACKET_IN with LLDP|                           |
            |          rcvd from Switch 2|                           |
            |--------------------------->|                           |
            |            .               |                           |
            |            .               |                           |
            |                            |                           |
            |    OFPT_PACKET_IN with LLDP|                           |
            |          rcvd from Switch n|                           |
            |--------------------------->|                           |
            |                            |                           |
            |                            |    <Wait for the expiry of|
            |                            |   the Trial Duration (Td)>|
            |                            |                           |
            |                            |   Query the controller for|
            |                            |  discovered n/w topo. (N1)|
            |                            |<--------------------------|
            |                            |                           |
            |                            |   <If N1==N, repeat Step 1|
            |                            |           with N + 1 nodes|
            |                            |               until N1<N >|
            |                            |                           |
            |                            |   <If N1<N, repeat Step 1 |
            |                            | with N=N1 nodes once and  |
            |                            | exit>                     |
            |                            |                           |

   Legend:

      n/w topo: Network topology
      OF: OpenFlow

   Discussion:

      The value of N1 provides the Network Discovery Size value.  The
      Trial Duration can be set to the stipulated time within which the
      user expects the controller to complete the discovery process.

A.5.3.  Forwarding Table Capacity

   Procedure:

   Test Traffic     Network Devices        OpenFlow          SDN
   Generator TP1                           Controller     Application
        |                 |                      |                 |
        |                 |                      |                 |
        |G-ARP (H1..Hn)   |                      |                 |
        |---------------->|                      |                 |
        |                 |                      |                 |
        |                 |OFPT_PACKET_IN(D1..Dn)|                 |
        |                 |--------------------->|                 |
        |                 |                      |                 |
        |                 |                      |<Wait for 5 secs>|
        |                 |                      |                 |
        |                 |                      |  <Query for FWD |
        |                 |                      |          entry> |(F1)
        |                 |                      |                 |
        |                 |                      |<Wait for 5 secs>|
        |                 |                      |                 |
        |                 |                      |  <Query for FWD |
        |                 |                      |          entry> |(F2)
        |                 |                      |                 |
        |                 |                      |<Wait for 5 secs>|
        |                 |                      |                 |
        |                 |                      |  <Query for FWD |
        |                 |                      |          entry> |(F3)
        |                 |                      |                 |
        |                 |                      | <Repeat Step 2  |
        |                 |                      |until F1==F2==F3>|
        |                 |                      |                 |

   Legend:

      G-ARP: Gratuitous ARP message
      H1..Hn: Host 1 .. Host n
      FWD: Forwarding Table

   Discussion:

      Query the controller's Forwarding Table entries multiple times,
      until the three consecutive queries return the same value.  The
      last value retrieved from the controller will provide the
      Forwarding Table Capacity value.  The query interval is user
      configurable.  The interval of 5 seconds shown in this example is
      for representational purposes.

A.6.  Security

A.6.1.  Exception Handling

   Procedure:

Test Traffic  Test Traffic   Network Devices   OpenFlow          SDN
Generator TP1 Generator TP2                  Controller      Application
   |          |                |                      |                |
   |          |G-ARP (D1..Dn)  |                      |                |
   |          |------------------>|          |--------------->|                      |                |
   |          |                |                      |                |
   |          |                   |PACKET_IN(D1..Dn)|                |OFPT_PACKET_IN(D1..Dn)|                |
   |          |                   |---------------->|                |--------------------->|                |
   |          |                |                      |                |
   |Traffic (S1..Sn,D1..Dn)    |                      |                |
       |----------------------------->|
   |-------------------------->|                      |                |
   |          |                |                      |                |
   |          |                   |PACKET_IN(S1..Sa,|                |OFPT_PACKET_IN(S1..Sa,|                |
   |          |                |               D1..Da)|                |
   |          |                   |---------------->|                |--------------------->|                |
   |          |                |                      |                |
   |          |                |OFPT_PACKET_IN        |                   |PACKET_IN(Sa+1..                |
   |          |                |                   |.Sn,Da+1..Dn)            (Sa+1..Sn,|                |
   |          |                |                   |(1%             Da+1..Dn)|                |
   |          |                |     (1% incorrect OFP|                |
   |          |                |    Match         match header)|                |
   |          |                   |---------------->|                |--------------------->|                |
   |          |                |                      |                |
   |          |                |      FLOW_MOD(D1..Dn)|                |
   |          |                   |<----------------|                |<---------------------|                |
   |          |                |                      |                |
   |          |                |      FLOW_MOD(S1..Sa)|                |
   |          |                |           OFP headers|                |
   |          |                   |<----------------|                |<---------------------|                |
   |          |                |                      |                |
   |          |Traffic (S1..Sa,   | (S1..Sa,|                      |                |
   |          |         D1..Da)|                      |                |
   |          |<------------------|          |<---------------|                      |                |
   |          |                |                      |                |
   |          |                |                      |   <Wait for the|
   |          |                |                      |   expiry of the|
   |          |                |      Test                      |           Trial|
   |          |                |                      |       Duration>|
   |          |                |                      |                |
   |          |                |                      |      <Record Rx|
   |          |                |                      |       frames at|
   |          |                |                      |      TP2 (Rn1)>|
   |          |                |                      |                |
   |          |                |                      |        <Repeat |
   |          |                |                      |     Step 1 with|
   |          |                |                      |    2% incorrect|
   |          |                |                      |OFPT_PACKET_INs>|
   |          |                |                      |                |
   |          |                |                      |      <Record Rx|
   |          |                |                      |       frames at|
   |          |                |                      |      TP2 (Rn2)>|
   |          |                |                      |                |

   Legend:

      G-ARP: Gratuitous ARP
      OFPT_PACKET_IN(Sa+1..Sn,Da+1..Dn): OFPT_PACKET_IN with
                                         wrong version number
      Rn1: Total number of frames received at Test Port 2
           with 1% incorrect frames
      Rn2: Total number of frames received at Test Port 2
           with 2% incorrect frames

   Discussion:

      The traffic rate sent towards the OpenFlow switch from Test Port 1
      should be 1% higher than the Path Programming Rate.  Rn1 will
      provide the Path Provisioning Rate of the controller when 1% of
      incorrect frames are received, and Rn2 will provide the Path
      Provisioning Rate of the controller when 2% of incorrect frames
      are received.

      The procedure defined above provides test steps to determine the
      effects of handling error packets on the Path Programming Rate.
      The same procedure can be adapted to determine the effects on
      other performance tests listed in this benchmarking test.
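
      As a non-normative illustration, the two measurements can be
      reduced to rates as sketched below.  Expressing the rate as the
      frames received at TP2 divided by the Trial Duration, as well as
      the numeric values shown, are assumptions made only for this
      example.

         TRIAL_DURATION_S = 300      # example Trial Duration (seconds)

         def provisioning_rate(rx_frames, duration_s=TRIAL_DURATION_S):
             # Frames received at TP2 over the trial, expressed per
             # second of Trial Duration.
             return rx_frames / duration_s

         rn1, rn2 = 295000, 290000   # placeholder TP2 frame counts
         rate_1pct_errors = provisioning_rate(rn1)   # 1% incorrect
         rate_2pct_errors = provisioning_rate(rn2)   # 2% incorrect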

A.6.2.  Handling Denial-of-Service Attacks

Procedure:

Test Traffic  Test Traffic   Network Device      OpenFlow         SDN
Generator TP1 Generator TP2                     Controller  Application
    |          |                 |                      |             |
    |          |G-ARP (D1..Dn)   |                      |             |
    |          |---------------->|                      |             |
    |          |                 |                      |             |
    |          |                 |OFPT_PACKET_IN(D1..Dn)|             |
    |          |                 |--------------------->|             |
    |          |                 |                      |             |
    |Traffic (S1..Sn,D1..Dn)     |                      |             |
    |--------------------------->|                      |             |
    |          |                 |                      |             |
    |          |                 |OFPT_PACKET_IN(S1..Sn,|             |
    |          |                 |               D1..Dn)|             |
    |          |                 |--------------------->|             |
    |          |                 |                      |             |
    |          |                 |TCP SYN attack        |             |
    |          |                 |from a switch         |             |
    |          |                 |--------------------->|             |
    |          |                 |                      |             |
    |          |                 |FLOW_MOD(D1..Dn)      |             |
    |          |                 |<---------------------|             |
    |          |                 |                      |             |
    |          |                 | FLOW_MOD(S1..Sn)     |             |
    |          |                 |      OFP headers     |             |
    |          |                 |<---------------------|             |
    |          |                 |                      |             |
    |          |Traffic (S1..Sn, |                      |             |
    |          |         D1..Dn) |                      |             |
    |          |<----------------|                      |             |
    |          |                 |                      |             |
    |          |                 |                      |<Wait for the|
    |          |                 |                      |expiry of the|
    |          |                 |                      |        Trial|
    |          |                 |                      |    Duration>|
    |          |                 |                      |             |
    |          |                 |                      |   <Record Rx|
    |          |                 |                      |    frames at|
    |          |                 |                      |   TP2 (Rn1)>|
    |          |                 |                      |             |
   Legend:

      G-ARP: Gratuitous ARP message

   Discussion:

      A TCP SYN attack should be launched from one of the
      emulated/simulated OpenFlow switches.  Rn1 provides the Path
      Programming Rate of the controller upon handling a denial-of-
      service attack.

      The procedure defined above provides test steps to determine the
      effects of handling denial of service on the Path Programming
      Rate.  The same procedure can be adapted to determine the effects
      on other performance tests listed in this benchmarking test.
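
      As an informal sketch only, the TCP SYN attack described above
      could be generated from the management interface of one emulated
      switch with a packet-crafting tool such as Scapy.  The addresses,
      the OpenFlow port number, and the packet count shown below are
      illustrative placeholders and are not mandated by this document.

         from scapy.all import IP, TCP, send

         CONTROLLER_IP = "192.0.2.10"    # placeholder controller
         SWITCH_IP = "192.0.2.101"       # placeholder emulated switch

         def tcp_syn_flood(count=10000):
             # Send a burst of TCP SYN segments toward the controller's
             # OpenFlow listening port (6653 by default) while the test
             # traffic of Step 1 is still running.
             syn = (IP(src=SWITCH_IP, dst=CONTROLLER_IP)
                    / TCP(dport=6653, flags="S"))
             send(syn, count=count, verbose=False)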

A.7.  Reliability

A.7.1.  Controller Failover Time

Procedure:

Test Traffic  Test Traffic  Network Device       OpenFlow      SDN
Generator TP1 Generator TP2                    Controller   Application
   |            |               |                       |             |
   |            |G-ARP (D1)     |                       |             |
   |            |-------------->|                       |             |
   |            |               |                       |             |
   |            |               |OFPT_PACKET_IN(D1)     |             |
   |            |               |---------------------->|             |
   |            |               |                       |             |
   |Traffic (S1..Sn,D1)         |                       |             |
   |--------------------------->|                       |             |
   |            |               |                       |             |
   |            |               |                       |             |
   |            |               |OFPT_PACKET_IN(S1,D1)  |             |
   |            |               |---------------------->|             |
   |            |               |                       |             |
   |            |               |FLOW_MOD(D1)           |             |
   |            |               |<----------------------|             |
   |            |               |FLOW_MOD(S1)           |             |
   |            |               |<----------------------|             |
   |            |               |                       |             |
   |            |Traffic (S1,D1)|                       |             |
   |            |<--------------|                       |             |
   |            |               |                       |             |
   |            |               |OFPT_PACKET_IN(S2,D1)  |             |
   |            |               |---------------------->|             |
   |            |               |                       |             |
   |            |               |FLOW_MOD(S2)           |             |
   |            |               |<----------------------|             |
   |            |               |                       |             |
   |            |               |OFPT_PACKET_IN         |             |
   |            |               |             (Sn-1,D1) |             |
   |            |               |---------------------->|             |
   |            |               |                       |             |
   |            |               |OFPT_PACKET_IN(Sn,D1)  |             |
   |            |               |---------------------->|             |
   |            |               |          .            |             |
   |            |               |          .            |<Bring down  |
   |            |               |          .            | the active  |
   |            |               |                       | controller> |
   |            |               |  FLOW_MOD(Sn-1)       |             |
   |            |               |    X<-----------------|             |
   |            |               |                       |             |
   |            |               |FLOW_MOD(Sn)           |             |
   |            |               |<----------------------|             |
   |            |               |                       |             |
   |            |Traffic (Sn,D1)|                       |             |
   |            |<--------------|                       |             |
   |            |               |                       |             |
   |            |               |                       |<Stop the    |
   |            |               |                       |test after   |
   |            |               |                       |recv. traffic|
   |            |               |                       |upon         |
   |            |               |                       |failure>     |

   Legend:

      G-ARP: Gratuitous ARP message

   Discussion:

      The time difference between the last valid frame received before
      the traffic loss and the first frame received after the traffic
      loss will provide the Controller Failover Time.

      If there is no frame loss during Controller Failover Time, the
      Controller Failover Time can be deemed negligible.
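
      A minimal, non-normative sketch of this computation is shown
      below.  It assumes that the tester exports the receive timestamps
      of the frames seen at TP2 and that the largest inter-arrival gap
      observed during the trial corresponds to the traffic loss.

         def controller_failover_time(rx_timestamps):
             # rx_timestamps: receive times (in seconds) of the frames
             # seen at TP2, in arrival order.  The gap between the last
             # frame before the loss and the first frame after it is
             # reported as the Controller Failover Time.
             gaps = [t2 - t1 for t1, t2
                     in zip(rx_timestamps, rx_timestamps[1:])]
             return max(gaps) if gaps else 0.0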

A.7.2.  Network Re-provisioning Time

Procedure:

Test Traffic  Test Traffic   Network Devices     OpenFlow       SDN
Generator TP1 Generator TP2                     Controller   Application
  |             |                |                      |              |
  |             |G-ARP (D1)      |                      |              |
  |             |--------------->|                      |              |
  |             |                |                      |              |
  |             |                |OFPT_PACKET_IN(D1)    |              |
  |             |                |--------------------->|              |
  |G-ARP (S1)   |                |                      |              |
  |----------------------------->|                      |              |
  |             |                |                      |              |
  |             |                |OFPT_PACKET_IN(S1)    |              |
  |             |                |--------------------->|              |
  |             |                |                      |              |
  |Traffic (S1,D1,Seq. no (1..n))|                      |              |
  |----------------------------->|                      |              |
  |             |                |                      |              |
  |             |                |OFPT_PACKET_IN(S1,D1) |              |
  |             |                |--------------------->|              |
  |             |                |                      |              |
  |             | Traffic (D1,S1,|                      |              |
  |             | Seq. no (1..n))|                      |              |
  |             |--------------->|                      |              |
  |             |                |                      |              |
  |             |                |OFPT_PACKET_IN(D1,S1) |              |
  |             |                |--------------------->|              |
  |             |                |                      |              |
  |             |                |FLOW_MOD(D1)          |              |
  |             |                |<---------------------|              |
  |             |                |                      |              |
  |             |                |FLOW_MOD(S1)          |              |
  |             |                |<---------------------|              |
  |             |                |                      |              |
  |             | Traffic (S1,D1,|                      |              |
  |             |     Seq. no(1))|                      |              |
  |             |<---------------|                      |              |
  |             |                |                      |              |
  |             | Traffic (S1,D1,|                      |              |
  |             |     Seq. no(2))|                      |              |
  |             |<---------------|                      |              |
  |             |                |                      |              |
  |             |                |                      |              |
  |    Traffic (D1,S1,Seq. no(1))|                      |              |
  |<-----------------------------|                      |              |
  |             |                |                      |              |
  |    Traffic (D1,S1,Seq. no(2))|                      |              |
  |<-----------------------------|                      |              |
  |             |                |                      |              |
  |    Traffic (D1,S1,Seq. no(x))|                      |              |
  |<-----------------------------|                      |              |
  |             |                |                      |              |
  |             | Traffic (S1,D1,|                      |              |
  |             |     Seq. no(x))|                      |              |
  |             |<---------------|                      |              |
  |             |                |                      |              |
  |             |                |                      |              |
  |             |                |                      |  <Bring down |
  |             |                |                      | the switch in|
  |             |                |                      |    the active|
  |             |                |                      | traffic path>|
  |             |                |                      |              |
  |             |                |PORT_STATUS(Sa)       |              |
  |             |                |--------------------->|              |
  |             |                |                      |              |
  |             | Traffic (S1,D1,|                      |              |
  |             | Seq. no(n - 1))|                      |              |
  |             |  X<------------|                      |              |
  |             |                |                      |              |
  |Traffic (D1,S1,Seq. no(n - 1))|                      |              |
  |    X<------------------------|                      |              |
  |             |                |                      |              |
  |             |                |                      |              |
  |             |                |FLOW_MOD(D1)          |              |
  |             |                |<---------------------|              |
  |             |                |                      |              |
  |             |                |FLOW_MOD(S1)          |              |
  |             |                |<---------------------|              |
  |             |                |                      |              |
  |    Traffic (D1,S1,Seq. no(n))|                      |              |
  |<-----------------------------|                      |              |
  |             |                |                      |              |
  |             | Traffic (S1,D1,|                      |              |
  |             |     Seq. no(n))|                      |              |
  |             |<---------------|                      |              |
  |             |                |                      |              |
  |             |                |                      |<Stop the test|
  |             |                |                      |  after recv. |
  |             |                |                      |  traffic upon|
  |             |                |                      |   failover>  |
   Legend:

      G-ARP: Gratuitous ARP message
      Seq. no: Sequence number
      Sa: Neighbor switch of the switch that was brought down

   Discussion:

      The time difference between the last valid frame received before
      the traffic loss (packet with sequence number x) and the first
      frame received after the traffic loss (packet with sequence
      number n) will provide the Network Re-provisioning Time.

      Note that the trial is valid only when the controller provisions
      the alternate path upon network failure.
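
      A non-normative sketch of this computation is shown below.  It
      assumes that the tester records, for each received frame, the
      arrival time and the sequence number carried in the test traffic,
      and that the first gap in the sequence numbers marks the failure.

         def network_reprovisioning_time(rx_frames):
             # rx_frames: list of (arrival_time_s, sequence_number)
             # tuples for one traffic direction, in arrival order.
             # The interval between the frame before the sequence gap
             # (sequence number x) and the frame after it (sequence
             # number n) is the Network Re-provisioning Time.
             pairs = zip(rx_frames, rx_frames[1:])
             for (t_prev, s_prev), (t_next, s_next) in pairs:
                 if s_next != s_prev + 1:
                     return t_next - t_prev
             return 0.0   # no loss observed; deemed negligible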

Acknowledgments

   The authors would like to thank the following individuals for
   providing their valuable comments regarding the earlier draft
   versions of this document: Al Morton (AT&T), Sandeep Gangadharan
   (HP), M. Georgescu (NAIST), Andrew McGregor (Google), Scott Bradner,
   Jay Karthik (Cisco), Ramki Krishnan (VMware), Khasanov Boris
   (Huawei), and Brian Castelli (Spirent).

Authors' Addresses

   Bhuvaneswaran Vengainathan
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA  19113
   United States of America

   Email: bhuvaneswaran.vengainathan@veryxtech.com

   Anton Basil
   Veryx Technologies Inc.
   1 International Plaza, Suite 550
   Philadelphia, PA  19113
   United States of America

   Email: anton.basil@veryxtech.com

   Mark Tassinari
   Hewlett Packard Enterprise
   8000 Foothills Blvd.
   Roseville, CA  95747
   United States of America

   Email: mark.tassinari@hpe.com

   Vishwas Manral
   NanoSec Co
   3350 Thomas Rd.
   Santa Clara, CA  95054
   United States of America

   Email: vishwas.manral@gmail.com

   Sarah Banks
   VSS Monitoring
   930 De Guigne Drive
   Sunnyvale, CA  94085
   United States of America

   Email: sbanks@encrypted.net