<?xml version="1.0" encoding="US-ASCII"?> version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
<?rfc toc="yes"?>
<?rfc tocompact="yes"?>
<?rfc tocdepth="3"?>
<?rfc tocindent="yes"?>
<?rfc symrefs="yes"?>
<?rfc sortrefs="yes"?>
<?rfc comments="yes"?>
<?rfc inline="yes"?>
<?rfc compact="yes"?>
<?rfc subcompact="no"?> "rfc2629-xhtml.ent">
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" number="8670"
     category="info" consensus="true" submissionType="IETF"
     docName="draft-ietf-spring-segment-routing-msdc-11"
     ipr="trust200902"> ipr="trust200902" obsoletes="" updates="" xml:lang="en" tocInclude="true" symRefs="true" sortRefs="true" version="3">

  <front>
    <title abbrev="BGP-Prefix SID abbrev="BGP Prefix-SID in large-scale DCs">BGP-Prefix Large-Scale DCs">BGP Prefix Segment in
    large-scale data centers</title>
    Large-Scale Data Centers</title>
    <seriesInfo name="RFC" value="8670"/>
    <author fullname="Clarence Filsfils" initials="C." role="editor" surname="Filsfils">
      <organization>Cisco Systems, Inc.</organization>
      <address>
        <postal>
          <street/>
          <city>Brussels</city>
          <region/>
          <code/>

          <country>Belgium</country>
        </postal>
        <email>cfilsfil@cisco.com</email>
      </address>
    </author>
    <author fullname="Stefano Previdi" initials="S." surname="Previdi">
      <organization>Cisco Systems, Inc.</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>
          <country>Italy</country>
        </postal>
        <email>stefano@previdi.net</email>
      </address>
    </author>
    <author fullname="Gaurav Dawra" initials="G." surname="Dawra">
      <organization>LinkedIn</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>

          <country>United States of America</country>
        </postal>
        <email>gdawra.ietf@gmail.com</email>
      </address>
    </author>
    <author fullname="Ebben Aries" initials="E." surname="Aries">
      <organization>Arrcus, Inc.</organization>
      <address>
        <postal>
          <street>2077 Gateway Place, Suite #400</street>
          <city>San Jose</city>
          <code>CA 95119</code>
          <country>United States of America</country>
        </postal>

        <email>exa@arrcus.com</email>
      </address>
    </author>
    <author fullname="Petr Lapukhov" initials="P." surname="Lapukhov">
      <organization>Facebook</organization>
      <address>
        <postal>
          <street/>
          <city/>
          <code/>

          <country>United States of America</country>
        </postal>
        <email>petr@fb.com</email>
      </address>
    </author>
    <date year="2018"/> month="December" year="2019"/>
    <workgroup>Network Working Group</workgroup>

    <abstract>
      <t>This document describes the motivation for, and benefits of, applying
      Segment Routing (SR) in BGP-based large-scale data centers. It describes the
      design to deploy SR in those data centers for both the
      MPLS and IPv6 data planes.</t>
    </abstract>
  </front>

  <middle>
    <section anchor="INTRO" title="Introduction"> numbered="true" toc="default">
      <name>Introduction</name>
      <t>Segment Routing (SR), as described in
      <xref target="RFC8402" format="default"/>, leverages the source-routing
      paradigm. A node steers a packet through an ordered list of
      instructions called "segments". A segment can represent any instruction,
      topological or service based. A segment can have a local semantic to an
      SR node or a global semantic within an SR domain. SR allows the
      enforcement of a flow through any topological path while maintaining
      per-flow state only from the ingress node to the SR domain. SR can be
      applied to the MPLS and IPv6 data planes.</t>
      <t>The use cases described in this document should be considered in the
      context of the BGP-based large-scale data-center (DC) design described
      in <xref target="RFC7938" format="default"/>. This document extends it by applying SR
      both with IPv6 and MPLS data planes.</t>
    </section>
    <section anchor="LARGESCALEDC"
             title="Large Scale Data Center numbered="true" toc="default">
      <name>Large-Scale Data-Center Network Design Summary"> Summary</name>
      <t>This section provides a brief summary of the Informational RFC
      <xref target="RFC7938" format="default"/>, which outlines a practical network design
      suitable for data centers of various scales:</t>
      <ul spacing="normal">
        <li>Data-center networks have highly symmetric topologies with
          multiple parallel paths between two server-attachment points. The
          well-known Clos topology is most popular among the operators (as
          described in <xref target="RFC7938" format="default"/>). In a Clos topology, the
          minimum number of parallel paths between two elements is determined
          by the "width" of the "Tier-1" stage. See <xref target="FIGLARGE" format="default"/>
          for an illustration of the concept.</li>
        <li>Large-scale data centers commonly use a routing protocol, such as
          BGP-4 <xref target="RFC4271"/> target="RFC4271" format="default"/>, in order to provide endpoint
          connectivity. Therefore, recovery after a network failure is driven
          either by local knowledge of directly available backup paths or by
          distributed signaling between the network devices.</li>
        <li>Within data-center networks, traffic is load shared using the
          Equal Cost Multipath (ECMP) mechanism. With ECMP, every network
          device implements a pseudorandom decision, mapping packets to one
          of the parallel paths by means of a hash function calculated over
          certain parts of the packet, typically a combination of various
          packet header fields (a short, non-normative sketch of this
          selection follows the list below).</li>
      </ul>
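      <t>The sketch below hashes the classic 5-tuple and maps the result to
      one of the parallel paths. Real devices use their own, vendor-specific
      hash functions, so the field choice and hash used here are assumptions
      of this example only:</t>
      <sourcecode type="python"><![CDATA[
import zlib

def ecmp_pick(five_tuple: tuple, next_hops: list) -> str:
    """Map a flow to one equal-cost path: hash the header fields and
    index the next-hop list with the result."""
    key = "|".join(map(str, five_tuple)).encode()
    return next_hops[zlib.crc32(key) % len(next_hops)]

# (src address, dst address, protocol, src port, dst port)
flow = ("192.0.2.1", "192.0.2.11", 6, 49152, 443)
print(ecmp_pick(flow, ["Node3", "Node4"]))  # same flow -> same path
]]></sourcecode>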
      <t>The following is a schematic of a five-stage Clos topology with four
      devices in the "Tier-1" stage. Notice that the number of paths between Node1
      and Node12 equals four; the paths have to cross all of the Tier-1
      devices. At the same time, the number of paths between Node1 and Node2
      equals two, and the paths only cross Tier-2 devices. Other topologies
      are possible, but for simplicity, only the topologies that have a single
      path from Tier-1 to Tier-3 are considered below. The rest could be
      treated similarly, with a few modifications to the logic.</t>
      <section anchor="REFDESIGN" title="Reference design"> numbered="true" toc="default">
        <name>Reference Design</name>
        <figure anchor="FIGLARGE" title="5-stage anchor="FIGLARGE">
          <name>5-Stage Clos topology">
          <artwork> Topology</name>
          <artwork name="" type="" align="left" alt=""><![CDATA[                                Tier-1
                               +-----+
                               |NODE |
                            +->|  5  |--+
                            |  +-----+  |
                    Tier-2  |           |   Tier-2
                   +-----+  |  +-----+  |  +-----+
     +------------>|NODE |--+->|NODE |--+--|NODE |-------------+
     |       +-----|  3  |--+  |  6  |  +--|  9  |-----+       |
     |       |     +-----+     +-----+     +-----+     |       |
     |       |                                         |       |
     |       |     +-----+     +-----+     +-----+     |       |
     | +-----+---->|NODE |--+->|NODE |--+--|NODE |-----+-----+ |
     | |     | +---|  4  |--+  |  7  |  +--|  10 |---+ |     | |
     | |     | |   +-----+  |  +-----+  |  +-----+   | |     | |
     | |     | |            |           |            | |     | |
   +-----+ +-----+          |  +-----+  |          +-----+ +-----+
   |NODE | |NODE | Tier-3   +->|NODE |--+   Tier-3 |NODE | |NODE |
   |  1  | |  2  |             |  8  |             | 11  | |  12 |
   +-----+ +-----+             +-----+             +-----+ +-----+
     | |     | |                                     | |     | |
     A O     B O            <- Servers ->            Z O     O O]]></artwork>
        </figure>
        <t>In the reference topology illustrated in <xref target="FIGLARGE"/>,
        It target="FIGLARGE" format="default"/>,
        it is assumed:<list style="symbols"> assumed:</t>
        <ul spacing="normal">
          <li>
            <t>Each node is its own autonomous system (AS) (Node X has AS X).
            4-byte AS numbers are recommended
            (<xref target="RFC6793" format="default"/>).</t>
            <ul spacing="normal">
              <li>For simple and efficient route propagation filtering,
                Node5, Node6, Node7, and Node8 use the same AS; Node3 and Node4
                use the same AS; and Node9 and Node10 use the same AS.</li>

              <li>In the case in which 2-byte autonomous system numbers are used and
                for efficient usage of the scarce 2-byte Private Use AS pool,
                different Tier-3 nodes might use the same AS.</li>
              <li>Without loss of generality, these details will be
                simplified in this document. It is to be assumed that each node has its
                own AS.</li>
            </ul>
          </li>

          <li>Each node peers with its neighbors with a BGP session. If not
            specified, external BGP (EBGP) is assumed. In a specific use case,
            internal BGP (IBGP) will be used, but this will be called out
            explicitly in that case.</li>
          <li>
            <t>Each node originates the IPv4 address of its loopback interface
            into BGP and announces it to its neighbors.</t>
            <ul spacing="normal">
              <li>The loopback of Node X is 192.0.2.x/32.</li>
            </ul>
          </li>
        </ul>
        <t>In this document, the Tier-1, Tier-2, and Tier-3 nodes are referred
        to as "Spine", "Leaf", and "ToR" (top of rack) nodes, respectively.
        When a ToR node acts as a gateway to the "outside world", it is
        referred to as a "border node".</t>
      </section>
    </section>
    <section anchor="OPENPROBS"
             title="Some open problems in large data-center networks"> numbered="true" toc="default">
      <name>Some Open Problems in Large Data-Center Networks</name>
      <t>The data-center-network design summarized above provides means for
      moving traffic between hosts with reasonable efficiency. There are a few
      open performance and reliability problems that arise in such a design:
      </t>
      <ul spacing="normal">
        <li>ECMP routing is most commonly realized per flow. This means that
          large, long-lived "elephant" flows may affect performance of
          smaller, short-lived "mouse" flows and may reduce efficiency
          of per-flow load sharing. In other words, per-flow ECMP does not
          perform efficiently when flow-lifetime distribution is heavy tailed.
          Furthermore, due to hash-function inefficiencies, it is possible to
          have frequent flow collisions where more flows get placed on one
          path over the others.</li>
        <li>Shortest-path routing with ECMP implements an oblivious routing
          model that is not aware of the network imbalances. If the network
          symmetry is broken, for example, due to link failures, utilization
          hotspots may appear. For example, if a link fails between Tier-1 and
          Tier-2 devices (e.g., Node5 and Node9), Tier-3 devices Node1 and
          Node2 will not be aware of that since there are other paths
          available from the perspective of Node3. They will continue sending
          roughly equal traffic to Node3 and Node4 as if the failure didn't
          exist, which may cause a traffic hotspot.</li>
        <li>Isolating faults in the network with multiple parallel paths and
          ECMP-based routing is nontrivial due to lack of determinism.
          Specifically, the connections from HostA to HostB may take a
          different path every time a new connection is formed, thus making
          consistent reproduction of a failure much more difficult. This
          complexity scales linearly with the number of parallel paths in the
          network and stems from the random nature of path selection by the
          network devices.</li>
      </ul>

    </section>
    <section anchor="APPLYSR"
             title="Applying numbered="true" toc="default">
      <name>Applying Segment Routing in the DC with MPLS dataplane"> Data Plane</name>
      <section anchor="BGPREFIXSEGMENT"
               title="BGP numbered="true" toc="default">
        <name>BGP Prefix Segment (BGP-Prefix-SID)"> (BGP Prefix-SID)</name>
        <t>A BGP Prefix Segment is a segment associated with a BGP prefix. A
        BGP Prefix Segment is a network-wide instruction to forward the packet
        along the ECMP-aware best path to the related prefix.</t>
        <t>The BGP Prefix Segment is defined as the BGP Prefix-SID Attribute
        in <xref target="I-D.ietf-idr-bgp-prefix-sid"/> target="RFC8669" format="default"/>, which contains an
        index. Throughout this document document, the BGP Prefix Segment Attribute is
        referred to as the BGP-Prefix-SID "BGP Prefix-SID" and the encoded index as the
        label index.</t>
        <t>In this document, the network design decision has been made to
        assume that all the nodes are allocated the same SRGB (Segment Routing
        Global Block), e.g., [16000, 23999]. This provides operational
        simplification as explained in <xref target="SINGLESRGB" format="default"/>, but this
        is not a requirement.</t>
        <t>For illustration purposes, when considering an MPLS data plane, it
        is assumed that the label index allocated to prefix 192.0.2.x/32 is X.
        As a result, a local label (16000+x) is allocated for prefix
        192.0.2.x/32 by each node throughout the DC fabric.</t>
        <t>When the IPv6 data plane is considered, it is assumed that Node X is
        allocated IPv6 address (segment) 2001:DB8::X.</t>
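        <t>As a non-normative illustration, the label arithmetic described
        above fits in a few lines of Python (the helper name and bounds check
        are assumptions of this sketch, not part of any specification):</t>
        <sourcecode type="python"><![CDATA[
SRGB_BASE = 16000   # first label of the SRGB assumed in this document
SRGB_SIZE = 8000    # SRGB [16000, 23999]

def local_label(label_index: int) -> int:
    """Derive a node's local label for a prefix from its label index."""
    if not 0 <= label_index < SRGB_SIZE:
        raise ValueError("label index outside the SRGB")
    return SRGB_BASE + label_index

# Prefix 192.0.2.11/32 carries label index 11 -> label 16011 fabric-wide.
assert local_label(11) == 16011
]]></sourcecode>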
      </section>
      <section anchor="eBGP8277" title="eBGP numbered="true" toc="default">
        <name>EBGP Labeled Unicast (RFC8277)"> (RFC 8277)</name>
        <t>Referring to <xref target="FIGLARGE"/> target="FIGLARGE"	format="default"/> and
	<xref
        target="RFC7938"/>, target="RFC7938" format="default"/>, the following design modifications are
        introduced:<list style="symbols">
            <t>Each
        introduced:</t>
        <ul spacing="normal">
          <li>Each node peers with its neighbors via an EBGP session with
            extensions defined in <xref target="RFC8277" format="default"/> (named "EBGP8277"
            throughout this document) and with the BGP Prefix-SID attribute
            extension as defined in <xref target="RFC8669" format="default"/>.</li>
          <li>The forwarding plane at Tier-2 and Tier-1 is MPLS.</li>
          <li>The forwarding plane at Tier-3 is either IP2MPLS (if the host
            sends IP traffic) or MPLS2MPLS (if the host sends
            MPLS-encapsulated traffic).</li>
        </ul>
        <t><xref target="FIGSMALL"/> target="FIGSMALL" format="default"/> zooms into a path from server A ServerA to server
        Z ServerZ within the topology of <xref target="FIGLARGE"/>.</t> target="FIGLARGE" format="default"/>.</t>
        <figure anchor="FIGSMALL"
                title="Path anchor="FIGSMALL">
          <name>Path from A to Z via nodes Nodes 1, 4, 7, 10 10, and 11">
          <artwork> 11</name>
          <artwork name="" type="" align="left" alt=""><![CDATA[                   +-----+     +-----+     +-----+
       +---------->|NODE |     |NODE |     |NODE |
       |           |  4  |--+->|  7  |--+--|  10 |---+
       |           +-----+     +-----+     +-----+   |
       |                                             |
   +-----+                                         +-----+
   |NODE |                                         |NODE |
   |  1  |                                         | 11  |
   +-----+                                         +-----+
     |                                              |
     A                    <- Servers ->             Z]]></artwork>
        </figure>
        <t>Referring to Figures <xref target="FIGLARGE"/> target="FIGLARGE"
	format="counter"/> and <xref
        target="FIGSMALL"/> target="FIGSMALL" format="counter"/>, and assuming the IP address with the AS and
        label-index allocation previously described, the following sections
        detail the control plane control-plane operation and the data plane data-plane states for the
        prefix 192.0.2.11/32 (loopback of Node11)</t> Node11).</t>
        <section anchor="CONTROLPLANE" title="Control Plane"> numbered="true" toc="default">
          <name>Control Plane</name>
          <t>Node11 originates 192.0.2.11/32 in BGP and allocates to it a
          BGP Prefix-SID with label index 11
          <xref target="RFC8669" format="default"/>.</t>
          <t>Node11 sends the following EBGP8277 update to Node10:</t>
          <ul empty="true">
            <li>
              <dl>
                <dt>IP Prefix:</dt>
                <dd>192.0.2.11/32</dd>
                <dt>Label:</dt>
                <dd>Implicit NULL</dd>
                <dt>Next hop:</dt>
                <dd>Node11's interface address on the link to Node10</dd>
                <dt>AS Path:</dt>
                <dd>{11}</dd>
                <dt>BGP Prefix-SID:</dt>
                <dd>Label-Index 11</dd>
              </dl>
            </li>
          </ul>

          <t>Node10 receives the above update. As it is SR capable, Node10 is
          able to interpret the BGP Prefix-SID; therefore, it understands that it
          should allocate the label from its own SRGB block, offset by the
          label index received in the BGP Prefix-SID (16000+11, hence, 16011) to
          the Network Layer Reachability Information (NLRI) instead of
          allocating a nondeterministic label out of a dynamically allocated
          portion of the local label space. The implicit NULL label in the
          NLRI tells Node10 that it is the penultimate hop and that it must pop the
          top label on the stack before forwarding traffic for this prefix to
          Node11.</t>
          <t>Then, Node10 sends the following EBGP8277 update to Node7:</t>
          <ul empty="true">
            <li>
              <dl>
                <dt>IP Prefix:</dt>
                <dd>192.0.2.11/32</dd>
                <dt>Label:</dt>
                <dd>16011</dd>
                <dt>Next hop:</dt>
                <dd>Node10's interface address on the link to Node7</dd>
                <dt>AS Path:</dt>
                <dd>{10, 11}</dd>
                <dt>BGP Prefix-SID:</dt>
                <dd>Label-Index 11</dd>
              </dl>
            </li>
          </ul>

          <t>Node7 receives the above update. As it is SR capable, Node7 is
          able to interpret the BGP Prefix-SID; therefore, it allocates the local
          (incoming) label 16011 (16000 + 11) to the NLRI (instead of
          allocating a "dynamic" local label from its label
          manager). Node7 uses the label in the received EBGP8277 NLRI as the
          outgoing label (the index is only used to derive the local/incoming
          label).</t>
          <t>Node7 sends the following EBGP8277 update to Node4:</t>
          <ul empty="true">
            <li>
              <dl>
                <dt>IP Prefix:</dt>
                <dd>192.0.2.11/32</dd>
                <dt>Label:</dt>
                <dd>16011</dd>
                <dt>Next hop:</dt>
                <dd>Node7's interface address on the link to Node4</dd>
                <dt>AS Path:</dt>
                <dd>{7, 10, 11}</dd>
                <dt>BGP Prefix-SID:</dt>
                <dd>Label-Index 11</dd>
              </dl>
            </li>
          </ul>

          <t>Node4 receives the above update. As it is SR capable, Node4 is
          able to interpret the BGP Prefix-SID; therefore, it allocates the local
          (incoming) label 16011 to the NLRI (instead of allocating a
          "dynamic" local label from its label manager). Node4
          uses the label in the received EBGP8277 NLRI as an outgoing label (the
          index is only used to derive the local/incoming label).</t>
          <t>Node4 sends the following EBGP8277 update to Node1:</t>
          <ul empty="true">
            <li>
              <dl>
                <dt>IP Prefix:</dt>
                <dd>192.0.2.11/32</dd>
                <dt>Label:</dt>
                <dd>16011</dd>
                <dt>Next hop:</dt>
                <dd>Node4's interface address on the link to Node1</dd>
                <dt>AS Path:</dt>
                <dd>{4, 7, 10, 11}</dd>
                <dt>BGP Prefix-SID:</dt>
                <dd>Label-Index 11</dd>
              </dl>
            </li>
          </ul>

          <t>Node1 receives the above update. As it is SR capable, Node1 is
          able to interpret the BGP Prefix-SID; therefore, it allocates the local
          (incoming) label 16011 to the NLRI (instead of allocating a
          "dynamic" local label from its label manager). Node1
          uses the label in the received EBGP8277 NLRI as an outgoing label (the
          index is only used to derive the local/incoming label).</t>
        </section>
        <section anchor="DATAPLANE" title="Data Plane"> numbered="true" toc="default">
          <name>Data Plane</name>
          <t>Referring to <xref target="FIGLARGE"/>, target="FIGLARGE" format="default"/>, and assuming all nodes
          apply the same advertisement rules described above and all nodes
          have the same SRGB (16000-23999), here are the IP/MPLS forwarding
          tables for prefix 192.0.2.11/32 at Node1, Node4, Node7, and
          Node10.</t>

          <figure align="left" anchor="NODE1FIB"
                  title="Node1

<table anchor="NODE1FIB">

<name>Node1 Forwarding Table">
            <artwork align="center">-----------------------------------------------
Incoming label    | outgoing label | Outgoing
or IP destination |                | Interface
------------------+----------------+-----------
     16011        |      16011     | ECMP{3, 4}
  192.0.2.11/32   |      16011     | ECMP{3, 4}
------------------+----------------+-----------</artwork>
          </figure>

          <figure anchor="NODE4FIB" suppress-title="false"
                  title="Node4 Forwarding Table">
            <artwork align="center">
-----------------------------------------------
Incoming label    | outgoing label | Outgoing Table
</name>

<tbody>

<tr>
<td align="center">Incoming Label or IP destination |                | Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
------------------+----------------+-----------
     16011        |      16011     | ECMP{7, 8}
  192.0.2.11/32   |      16011     | ECMP{7, 8}
------------------+----------------+-----------</artwork>
          </figure>

          <figure anchor="NODE7FIB" suppress-title="false"
                  title="Node7
</td>
</tr>

<tr>
<td align="center">16011
</td>
<td align="center">16011
</td>
<td align="center">ECMP{3,&nbsp;4}
</td>
</tr>

<tr>
<td align="center">192.0.2.11/32
</td>
<td align="center">16011
</td>
<td align="center">ECMP{3,&nbsp;4}
</td>
</tr>

</tbody>

</table>

<table anchor="NODE4FIB">
<name>Node4 Forwarding Table">
            <artwork align="center">
-----------------------------------------------
Incoming label    | outgoing label | Outgoing Table
</name>

<tbody >

<tr>
<td align="center">Incoming Label or IP destination |                | Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
------------------+----------------+-----------
     16011        |      16011     |    10
  192.0.2.11/32   |      16011     |    10
------------------+----------------+-----------</artwork>
          </figure>

          <figure anchor="NODE10FIB" suppress-title="true"
                  title="Node10
</td>
</tr>

<tr>
<td align="center">16011
</td>
<td align="center">16011
</td>
<td align="center">ECMP{7,&nbsp;8}
</td>
</tr>

<tr>
<td align="center">192.0.2.11/32
</td>
<td align="center">16011
</td>
<td align="center">ECMP{7,&nbsp;8}
</td>
</tr>

</tbody>
</table>

<table anchor="NODE7FIB">
<name>Node7 Forwarding Table">
            <artwork align="center">
-----------------------------------------------
Incoming label    | outgoing label | Outgoing Table
</name>

<tbody >

<tr >
<td align="center">Incoming Label or IP destination |                | Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
------------------+----------------+-----------
     16011        |      POP       |    11
  192.0.2.11/32   |      N/A       |    11
------------------+----------------+-----------</artwork>
          </figure>
</td>
</tr>

<tr>
<td align="center">16011
</td>
<td align="center">16011
</td>
<td align="center">10
</td>
</tr>

<tr>
<td align="center">192.0.2.11/32
</td>
<td align="center">16011
</td>
<td align="center">10
</td>
</tr>

</tbody>
</table>

<table anchor="NODE10FIB">
<name>Node10 Forwarding Table
</name>

<tbody >

<tr >
<td align="center">Incoming Label or IP Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
</td>
</tr>

<tr>
<td align="center">16011
</td>
<td align="center">POP
</td>
<td align="center">11
</td>
</tr>

<tr>
<td align="center">192.0.2.11/32
</td>
<td align="center">N/A
</td>
<td align="center">11
</td>
</tr>

</tbody>
</table>
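          <t>Chaining these tables gives the end-to-end behavior. The
          non-normative sketch below walks a packet labeled 16011 from Node1
          towards Node11; for readability, only one member of each ECMP set is
          modeled (a real node hashes across all members, as noted
          earlier):</t>
          <sourcecode type="python"><![CDATA[
# incoming label -> (outgoing label, modeled next hop)
FIB = {
    "Node1":  {16011: (16011, "Node4")},   # ECMP{Node3, Node4}
    "Node4":  {16011: (16011, "Node7")},   # ECMP{Node7, Node8}
    "Node7":  {16011: (16011, "Node10")},
    "Node10": {16011: ("POP", "Node11")},  # penultimate hop pops the label
}

def trace(node: str, label: int) -> list:
    path = [node]
    while label in FIB.get(node, {}):
        out_label, node = FIB[node][label]
        path.append(node)
        if out_label == "POP":
            break
    return path

print(trace("Node1", 16011))  # ['Node1', 'Node4', 'Node7', 'Node10', 'Node11']
]]></sourcecode>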

        </section>
        <section anchor="VARIATIONS" title="Network numbered="true" toc="default">
          <name>Network Design Variation"> Variation</name>
          <t>A network design choice could consist of switching all the
          traffic through Tier-1 and Tier-2 as MPLS traffic. In this case, one
          could filter away the IP entries at Node4, Node7, and Node10. This
          might be beneficial in order to optimize the forwarding table
          size.</t>

          <t>A network design choice could consist of allowing the hosts to
          send MPLS-encapsulated traffic based on the Egress Peer Engineering
          (EPE) use case as defined in
          <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>. For example,
          applications at HostA would send their Z-destined traffic to Node1
          with an MPLS label stack where the top label is 16011 and the next
          label is an EPE peer segment
          (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) at Node11
          directing the traffic to Z.</t>
        </section>
        <section anchor="FABRIC"
                 title="Global numbered="true" toc="default">
          <name>Global BGP Prefix Segment through the fabric"> Fabric</name>
          <t>When the previous design is deployed, the operator enjoys global
          BGP Prefix-SID and label allocation throughout the DC fabric.</t>
          <t>A few examples follow:</t>
          <ul spacing="normal">
            <li>Normal forwarding to Node11: A packet with top label 16011
              received by any node in the fabric will be forwarded along the
              ECMP-aware BGP best path towards Node11, and the label 16011 is
              penultimate popped at Node10 (or at Node9).</li>
            <li>Traffic-engineered path to Node11: An application on a host
              behind Node1 might want to restrict its traffic to paths via the
              Spine node Node5. The application achieves this by sending its
              packets with a label stack of {16005, 16011}. BGP Prefix-SID
              16005 directs the packet up to Node5 along the path (Node1,
              Node3, Node5). BGP Prefix-SID 16011 then directs the packet down
              to Node11 along the path (Node5, Node9, Node11) (see the sketch
              following this list).</li>
          </ul>
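          <t>A non-normative sketch of how a host could compose such stacks
          under the single SRGB assumed in this document (the helper below is
          illustrative only):</t>
          <sourcecode type="python"><![CDATA[
SRGB_BASE = 16000

def label_stack(node_indexes):
    """Top label first: one global label per BGP Prefix-SID index."""
    return [SRGB_BASE + index for index in node_indexes]

print(label_stack([11]))     # [16011]: ECMP-aware best path to Node11
print(label_stack([5, 11]))  # [16005, 16011]: via Node5, then to Node11
]]></sourcecode>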
        </section>
        <section anchor="INCRDEP" title="Incremental Deployments"> numbered="true" toc="default">
          <name>Incremental Deployments</name>
          <t>The design previously described can be deployed incrementally.
          Let us assume that Node7 does not support the BGP Prefix-SID, and let
          us show how the fabric connectivity is preserved.</t>
          <t>From a signaling viewpoint, nothing would change; even though
          Node7 does not support the BGP Prefix-SID, it does propagate the
          attribute unmodified to its neighbors.</t>
          <t>From a label-allocation viewpoint, the only difference is that
          Node7 would allocate a dynamic (random) label to the prefix
          192.0.2.11/32 (e.g., 12345) instead of the "hinted" label as
          instructed by the BGP Prefix-SID. The neighbors of Node7 adapt
          automatically as they always use the label in the BGP8277 NLRI as
          an outgoing label.</t>
          <t>Node4 does understand the BGP Prefix-SID; therefore, it allocates the
          indexed label in the SRGB (16011) for 192.0.2.11/32.</t>
          <t>As a result, all the data-plane entries across the network would
          be unchanged except the entries at Node7 and its neighbor Node4 as
          shown in the tables below.</t>
          <t>The key point is that the end-to-end Label Switched Path (LSP) is
          preserved because the outgoing label is always derived from the
          received label within the BGP8277 NLRI. The index in the
          BGP Prefix-SID is only used as a hint on how to allocate the local
          label (the incoming label) but never for the outgoing label.</t>

          <figure anchor="NODE7FIBINC" title="Node7

<table anchor="NODE7FIBINC">

<name>Node7 Forwarding Table">
            <artwork align="center">------------------------------------------
Incoming label     | outgoing | Outgoing Table
</name>

<tbody >

<tr >
<td align="center">Incoming Label or IP destination  |  label   | Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
-------------------+----------------------
     12345         |  16011   |   10
</artwork>
          </figure>

          <figure anchor="NODE4FIBINC" title="Node4
</td>
</tr>

<tr>
<td align="center">12345
</td>
<td align="center">16011
</td>
<td align="center">10
</td>
</tr>

</tbody>

</table>

<table anchor="NODE4FIBINC">

<name>Node4 Forwarding Table">
            <artwork align="center">------------------------------------------
Incoming label     | outgoing | Outgoing Table
</name>

<tbody >

<tr >
<td align="center">Incoming Label or IP destination  |  label   | Destination
</td>
<td align="center">Outgoing Label
</td>
<td align="center">Outgoing Interface
-------------------+----------------------
     16011         |  12345   |   7
</artwork>
          </figure>
</td>
</tr>

<tr>
<td align="center">16011
</td>
<td align="center">12345
</td>
<td align="center">7
</td>
</tr>

</tbody>

</table>

          <t>The BGP Prefix-SID can thus be deployed incrementally, i.e., one node at
          a time.</t>
          <t>When deployed together with a homogeneous SRGB (the same SRGB across
          the fabric), the operator incrementally enjoys the global prefix
          segment benefits as the deployment progresses through the
          fabric.</t>
        </section>
      </section>
      <section anchor="iBGP3107" title="iBGP numbered="true" toc="default">
        <name>IBGP Labeled Unicast (RFC8277)"> (RFC 8277)</name>
        <t>The same exact design as EBGP8277 is used with the following
        modifications:</t>
        <ul spacing="normal">
          <li>All nodes use the same AS number.</li>
          <li>Each node peers with its neighbors via an internal BGP session
            (IBGP) with extensions defined in <xref target="RFC8277" format="default"/> (named
            "IBGP8277" throughout this document).</li>
          <li>Each node acts as a route reflector for each of its neighbors
            and with the next-hop-self option. Next-hop-self is a well-known
            operational feature that consists of rewriting the next hop of a
            BGP update prior to sending it to the neighbor. Usually,
            it's a common practice to apply next-hop-self behavior
            towards IBGP peers for EBGP-learned routes. In the case outlined
            in this section, it is proposed to use the next-hop-self mechanism
            also to IBGP-learned routes.</li>
        </ul>

            <figure anchor="IBGPFIG">
              <name>IBGP Sessions with Reflection and Next-Hop-Self</name>
              <artwork name="" type="" align="left" alt=""><![CDATA[
                               Cluster-1
                            +-----------+
                            |  Tier-1   |
                            |  +-----+  |
                            |  |NODE |  |
                            |  |  5  |  |
                 Cluster-2  |  +-----+  |  Cluster-3
                +---------+ |           | +---------+
                | Tier-2  | |           | |  Tier-2 |
                | +-----+ | |  +-----+  | | +-----+ |
                | |NODE | | |  |NODE |  | | |NODE | |
                | |  3  | | |  |  6  |  | | |  9  | |
                | +-----+ | |  +-----+  | | +-----+ |
                |         | |           | |         |
                |         | |           | |         |
                | +-----+ | |  +-----+  | | +-----+ |
                | |NODE | | |  |NODE |  | | |NODE | |
                | |  4  | | |  |  7  |  | | |  10 | |
                | +-----+ | |  +-----+  | | +-----+ |
                +---------+ |           | +---------+
                            |           |
                            |  +-----+  |
                            |  |NODE |  |
          Tier-3            |  |  8  |  |         Tier-3
      +-----+ +-----+       |  +-----+  |      +-----+ +-----+
      |NODE | |NODE |       +-----------+      |NODE | |NODE |
      |  1  | |  2  |                          | 11  | |  12 |
      +-----+ +-----+                          +-----+ +-----+]]></artwork>
            </figure>
          <ul spacing="normal">
          <li>
              <t>For simple and efficient route propagation filtering and as
              illustrated in <xref target="IBGPFIG" format="default"/>: </t>
              <ul spacing="normal">
                <li>Node5, Node6, Node7, and Node8 use the same Cluster ID
                (Cluster-1).</li>
                <li>Node3 and Node4 use the same Cluster ID (Cluster-2).</li>
                <li>Node9 and Node10 use the same Cluster ID (Cluster-3).</li>
              </ul>
            </li>
          <li>The control-plane behavior is mostly the same as described in
            the previous section; the only difference is that the EBGP8277
            path propagation is simply replaced by an IBGP8277 path reflection
            with next hop changed to self.</li>
          <li>The data-plane tables are exactly the same.</li>
        </ul>
      </section>
    </section>
    <section anchor="IPV6"
             title="Applying numbered="true" toc="default">
      <name>Applying Segment Routing in the DC with IPv6 dataplane"> Data Plane</name>
      <t>The design described in <xref target="RFC7938" format="default"/> is reused with one
      single modification. It is highlighted using the example of the
      reachability to Node11 via Spine node Node5.</t>
      <t>Node5 originates 2001:DB8::5/128 with the attached BGP Prefix-SID for
      IPv6 packets destined to segment 2001:DB8::5
      (<xref target="RFC8402" format="default"/>).</t>
      <t>Node11 originates 2001:DB8::11/128 with the attached BGP Prefix-SID
      advertising the support of the Segment Routing Header (SRH) for IPv6
      packets destined to segment
      2001:DB8::11.</t>
      <t>The control-plane and data-plane processing of all the other nodes in
      the fabric is unchanged. Specifically, the routes to 2001:DB8::5 and
      2001:DB8::11 are installed in the FIB along the EBGP best path to Node5
      (Spine node) and Node11 (ToR node), respectively.</t>
      <t>An application on HostA that needs to send traffic to HostZ via only
      Node5 (Spine node) can do so by sending IPv6 packets with a Segment
      Routing Header (SRH,
      <xref target="I-D.ietf-6man-segment-routing-header" format="default"/>). The destination
      address and active segment is set to 2001:DB8::5. The next and last
      segment is set to 2001:DB8::11.</t>
      <t>The application must only use IPv6 addresses that have been
      advertised as capable for SRv6 segment processing (e.g., for which the
      BGP Prefix Segment capability has been advertised). How applications
      learn this (e.g., centralized controller and orchestration) is outside
      the scope of this document.</t>
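      <t>As a non-normative illustration, the segment list for the example
      above can be represented as follows (a plain Python structure, not the
      on-the-wire SRH encoding defined in
      <xref target="I-D.ietf-6man-segment-routing-header" format="default"/>):</t>
      <sourcecode type="python"><![CDATA[
# Steer HostA -> HostZ via Node5 (Spine), then to Node11 (ToR).
segments = ["2001:DB8::5", "2001:DB8::11"]   # active segment first

packet = {
    "ipv6_dst": segments[0],                 # destination = active segment
    "srh": {
        # The SRH encodes the list in reverse order; Segments Left
        # indexes the active segment.
        "segment_list": list(reversed(segments)),
        "segments_left": len(segments) - 1,
    },
}
print(packet)
]]></sourcecode>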
    </section>
    <section anchor="COMMHOSTS"
             title="Communicating path information numbered="true" toc="default">
      <name>Communicating Path Information to the host"> Host</name>
      <t>There are two general methods for communicating path information to
      the end-hosts: "proactive" and "reactive", aka "push" and "pull" models.
      There are multiple ways to implement either of these methods. Here, it
      is noted that one way could be using a centralized controller: the
      controller either tells the hosts of the prefix-to-path mappings
      beforehand and updates them as needed (network event driven push) or
      responds to the hosts making requests for a path to a specific destination
      (host event driven pull). It is also possible to use a hybrid model,
      i.e., pushing some state from the controller in response to particular
      network events, while the host pulls other state on demand.</t>

      <t>Note also that when disseminating network-related data to the
      end-hosts, a trade-off is made to balance the amount of information
      vs. the level of visibility in the network state. This applies
      to both push and pull models. In the extreme case, the host would request
      path information on every flow and keep no local state at all. On the
      other end of the spectrum, information for every prefix in the network
      along with available paths could be pushed and continuously updated on
      all hosts.</t>
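      <t>The sketch below contrasts the two models from a host's viewpoint
      (the controller API is hypothetical and shown only to make the
      trade-off concrete):</t>
      <sourcecode type="python"><![CDATA[
class PathStore:
    """Host-side prefix-to-path cache fed by a central controller."""

    def __init__(self, controller):
        self.controller = controller
        self.cache = {}                   # prefix -> segment list

    def on_push(self, prefix, segments):
        """Push model: the controller updates mappings proactively."""
        self.cache[prefix] = segments

    def lookup(self, prefix):
        """Pull model: fetch on demand, then keep the result cached."""
        if prefix not in self.cache:
            self.cache[prefix] = self.controller.get_path(prefix)
        return self.cache[prefix]
]]></sourcecode>
      <t>A hybrid deployment would simply use both entry points: pushes for
      network-event-driven updates and lookups for host-event-driven
      requests.</t>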
    </section>
    <section anchor="BENEFITS" title="Additional Benefits"> numbered="true" toc="default">
      <name>Additional Benefits</name>
      <section anchor="MPLSIMPLE"
               title="MPLS Dataplane numbered="true" toc="default">
        <name>MPLS Data Plane with operational simplicity"> Operational Simplicity</name>
        <t>As required by <xref target="RFC7938"/>, target="RFC7938" format="default"/>, no new signaling protocol
        is introduced. The BGP-Prefix-SID BGP Prefix-SID is a lightweight extension to BGP
        Labeled Unicast <xref target="RFC8277"/>. target="RFC8277" format="default"/>. It applies either to eBGP EBGP- or
        iBGP based
        IBGP-based designs.</t>
        <t>Specifically, LDP and RSVP-TE are not used. These protocols would
        drastically impact the operational complexity of the data center and
        would not scale. This is in line with the requirements expressed in
        <xref target="RFC7938"/>.</t> target="RFC7938" format="default"/>.</t>
        <t>Provided the same SRGB is configured on all nodes, all nodes use
        the same MPLS label for a given IP prefix. This is simpler from an
        operational standpoint, as discussed in <xref target="SINGLESRGB" format="default"/>.</t>
      </section>
      <section anchor="MINFIB" title="Minimizing numbered="true" toc="default">
        <name>Minimizing the FIB table"> Table</name>
        <t>The designer may decide to switch all the traffic at Tier-1 and
        Tier-2 based on MPLS, thereby drastically decreasing the IP table size
        at these nodes.</t>

        <t>This is easily accomplished by encapsulating the traffic either
        directly at the host or at the source ToR node. The encapsulation is
        done by pushing the BGP Prefix-SID of the destination ToR for intra-DC
        traffic, or by pushing the BGP Prefix-SID of the border node for
        inter-DC or DC-to-outside-world traffic.</t>
      </section>
      <section anchor="EPE" title="Egress numbered="true" toc="default">
        <name>Egress Peer Engineering"> Engineering</name>
        <t>It is straightforward to combine the design illustrated in this
        document with the Egress Peer Engineering (EPE) use case described in
        <xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>.</t>
        <t>In such a case, the operator is able to engineer its outbound traffic
        on a per-host-flow basis, without incurring any additional state at
        intermediate points in the DC fabric.</t>
        <t>For example, the controller only needs to inject a per-flow state
        on HostA to force it to send its traffic destined to a specific
        Internet destination D via a selected border node (say Node12 in
        <xref target="FIGLARGE" format="default"/> instead of another border node, Node11) and a
        specific egress peer of Node12 (say peer AS 9999 of local PeerNode
        segment 9999 at Node12 instead of any other peer that provides a path
        to the destination D). Any packet matching this state at HostA would
        be encapsulated with SR segment list (label stack) {16012, 9999}.
        16012 would steer the flow through the DC fabric, leveraging any ECMP,
        along the best path to border node Node12. Once the flow gets to
        border node Node12, the active segment is 9999 (because of Penultimate
        Hop Popping (PHP) on the upstream neighbor of Node12). This EPE
        PeerNode segment forces border node Node12 to forward the packet to
        peer AS 9999 without any IP lookup at the border node. There is no
        per-flow state for this engineered flow in the DC fabric. A benefit of
        SR is that the per-flow state is only required at the
        source.</t>
        <t>As well as allowing full traffic-engineering control, such a design
        also offers FIB table-minimization benefits as the Internet-scale FIB
        at border node Node12 is not required if all FIB lookups are avoided
        there by using EPE.</t>
      </section>
      <section anchor="ANYCAST" title="Anycast"> numbered="true" toc="default">
        <name>Anycast</name>
        <t>The design presented in this document preserves the availability
        and load-balancing properties of the base design presented in
        <xref target="RFC8402" format="default"/>.</t>

        <t>For example, one could assign an anycast loopback 192.0.2.20/32 and
        associate segment index 20 to it on the border nodes Node11 and Node12 (in
        addition to their node-specific loopbacks). Doing so, the EPE
        controller could express a default "go-to-the-Internet via any border
        node" policy as segment list {16020}. Indeed, from any host in the DC
        fabric or from any ToR node, 16020 steers the packet towards the
        border nodes Node11 or Node12 leveraging ECMP where available along the best
        paths to these nodes.</t>
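        <t>A non-normative sketch combining the EPE and anycast examples of
        these sections (values as above; the helper is illustrative only):</t>
        <sourcecode type="python"><![CDATA[
SRGB_BASE = 16000

# Per-flow state injected by the controller at HostA: destination D must
# exit via border node Node12 and its egress peer AS 9999.
EPE_POLICY = {"D": [SRGB_BASE + 12, 9999]}   # {16012, 9999}

def stack_for(destination):
    # Default: anycast SID 16020 -> any border node (Node11 or Node12).
    return EPE_POLICY.get(destination, [SRGB_BASE + 20])

print(stack_for("D"))      # [16012, 9999]
print(stack_for("other"))  # [16020]
]]></sourcecode>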
      </section>
    </section>
    <section anchor="SINGLESRGB" title="Preferred numbered="true" toc="default">
      <name>Preferred SRGB Allocation"> Allocation</name>
      <t>In the MPLS case, it is recommended to use the same SRGBs at each node.</t>
      <t>Different SRGBs in each node likely increase the complexity of the
      solution both from an operational viewpoint and from a controller
      viewpoint.</t>
      <t>From an operational viewpoint, it is much simpler to have the same
      global label at every node for the same destination (the MPLS
      troubleshooting is then similar to the IPv6 troubleshooting where this
      global property is a given).</t>
      <t>From a controller viewpoint, this allows us to construct simple
      policies applicable across the fabric.</t>
      <t>Let us consider two applications, A and B, respectively connected to
      Node1 and Node2 (ToR nodes). Application A has two flows, FA1 and FA2,
      destined to Z. B has two flows, FB1 and FB2, destined to Z. The
      controller wants FA1 and FB1 to be load shared across the fabric while
      FA2 and FB2 must be
      respectively steered via Node5 and Node8.</t>

      <t>Assuming a consistent unique SRGB across the fabric as described in
      this document, the controller can simply do it by instructing A and B to
      use {16011} respectively for FA1 and FB1 and by instructing A and B to
      use {16005, 16011} and {16008, 16011} respectively for FA2 and FB2.</t>
      <t>Let us assume a design where the SRGB is different at every node and
      where the SRGB of each node is advertised using the Originator SRGB TLV
      of the BGP Prefix-SID as defined in
      <xref target="RFC8669" format="default"/>: SRGB of Node K starts at value
      K*1000, and the SRGB length is 1000 (e.g., Node1's SRGB is [1000,
      1999], Node2's SRGB is [2000, 2999], ...).</t>

      <t>In this case, not only would the controller need to collect and store all of
      these different SRGBs (e.g., through the Originator SRGB TLV of the
      BGP Prefix-SID); furthermore, it would also need to adapt the policy for
      each host. Indeed, the controller would instruct A to use {1011} for FA1
      while it would have to instruct B to use {2011} for FB1 (while with the
      same SRGB, both policies are the same {16011}).</t>
      <t>Even worse, the controller would instruct A to use {1005, 5011} for
      FA2 while it would instruct B to use {2008, 8011} for FB2 (while with
      the same SRGB, the second segment is the same across both policies:
      16011). When combining segments to create a policy, one needs to
      carefully update the label of each segment. This is obviously more error
      prone, more complex, and more difficult to troubleshoot.</t>
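      <t>The operational difference is easy to see in a short non-normative
      sketch (the per-node K*1000 scheme is the hypothetical design described
      above):</t>
      <sourcecode type="python"><![CDATA[
def label(index, srgb_base=16000):
    return srgb_base + index

# Homogeneous SRGB: one fabric-wide policy serves every host.
print([label(11)])            # [16011] for A and B alike

# Heterogeneous SRGBs (Node K's SRGB starts at K*1000): the label
# depends on the node whose SRGB applies, so each host needs its own
# variant of the same policy.
print([label(11, 1 * 1000)])  # [1011] for A (behind Node1)
print([label(11, 2 * 1000)])  # [2011] for B (behind Node2)
]]></sourcecode>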
    </section>
    <section anchor="IANA" title="IANA Considerations"> numbered="true" toc="default">
      <name>IANA Considerations</name>
      <t>This document has no IANA actions.</t>
    </section>
    <section anchor="MANAGE" title="Manageability Considerations"> numbered="true" toc="default">
      <name>Manageability Considerations</name>
      <t>The design and deployment guidelines described in this document are
      based on the network design described in <xref target="RFC7938" format="default"/>.</t>
      <t>The deployment model assumed in this document is based on a single
      domain where the interconnected DCs are part of the same administrative
      domain (which, of course, is split into different autonomous systems).
      The operator has full control of the whole domain, and the usual
      operational and management mechanisms and procedures are used in order
      to prevent any information related to internal prefixes and topology
      from being leaked outside the domain.</t>
      <t>As recommended in <xref target="RFC8402" format="default"/>,
      the same SRGB should be allocated in all nodes in order to facilitate
      the design, deployment, and operations of the domain.</t>
      <t>When EPE
      (<xref target="I-D.ietf-spring-segment-routing-central-epe" format="default"/>) is used (as
      explained in <xref target="EPE" format="default"/>), the same operational model is
      assumed. EPE information is originated and propagated throughout the
      domain towards an internal server, and unless explicitly configured by
      the operator, no EPE information is leaked outside the domain
      boundaries.</t>
    </section>
    <section anchor="SEC" title="Security Considerations"> numbered="true" toc="default">
      <name>Security Considerations</name>
      <t>This document proposes to apply SR to a well-known
      scalability requirement expressed in <xref target="RFC7938" format="default"/> using the
      BGP Prefix-SID as defined in <xref target="RFC8669" format="default"/>.</t>
      <t>It has to be noted, as described in <xref target="MANAGE" format="default"/>, that the
      design illustrated in <xref target="RFC7938" format="default"/> and in this document
      refers to a deployment model where all nodes are under the same
      administration. In this context, it is assumed that the operator doesn't
      want to leak outside of the domain any information related to internal
      prefixes and topology. The internal information includes Prefix-SID and
      EPE information. In order to prevent such leaking, the standard BGP
      mechanisms (filters) are applied on the boundary of the domain.</t>
      <t>Therefore, the solution proposed in this document does not introduce
      any additional security concerns from what is expressed in
      <xref target="RFC7938" format="default"/> and <xref target="RFC8669" format="default"/>. It
      is assumed that the security and confidentiality of the prefix and
      topology information is preserved by outbound filters at each peering
      point of the domain as described in <xref target="MANAGE" format="default"/>.</t>
    </section>
  </middle>
  <back>
    <displayreference
	target="I-D.ietf-spring-segment-routing-central-epe"
	to="SR-CENTRAL-EPE"/>

    <displayreference target="I-D.ietf-6man-segment-routing-header"
		      to="IPv6-SRH"/>

    <references>
      <name>References</name>
      <references>
        <name>Normative References</name>

        <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8277.xml"/>
        <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4271.xml"/>
        <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.7938.xml"/>
        <!--I-D.ietf-spring-segment-routing became RFC 8402 -->
        <xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.8402.xml"/>

        <!-- I-D.ietf-idr-bgp-prefix-sid-27: companion document-->
<reference anchor='RFC8669' target='https://www.rfc-editor.org/info/rfc8669'>
<front>
<title>Segment Routing Prefix Segment Identifier Extensions for BGP</title>

<author initials='S' surname='Previdi' fullname='Stefano Previdi'>
    <organization />
</author>

<author initials='C' surname='Filsfils' fullname='Clarence Filsfils'>
    <organization />
</author>

<author initials='A' surname='Lindem' fullname='Acee Lindem' role="editor">
    <organization />
</author>

<author initials='A' surname='Sreekantiah' fullname='Arjun Sreekantiah'>
    <organization />
</author>

<author initials='H' surname='Gredler' fullname='Hannes Gredler'>
    <organization />
</author>

<date month='December' year='2019' />

</front>

<seriesInfo name='RFC' value='8669' />
<seriesInfo name="DOI" value="10.17487/RFC8669"/>
</reference>

      </references>
      <references>
        <name>Informative References</name>

<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-spring-segment-routing-central-epe.xml"/>

<xi:include href="https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.6793.xml"/>

<xi:include href="https://xml2rfc.ietf.org/public/rfc/bibxml3/reference.I-D.ietf-6man-segment-routing-header.xml"/>

        <!-- I-D.ietf-6man-segment-routing-header: I-D exists -->

      </references>
    </references>
    <section anchor="Acknowledgements" title="Acknowledgements"> numbered="false" toc="default">
      <name>Acknowledgements</name>
      <t>The authors would like to thank Benjamin Black, Arjun Sreekantiah,
      Keyur Patel, Acee Lindem, and Anoop Ghanwani for their comments and
      review of this document.</t>
    </section>
    <section anchor="Contributors" title="Contributors">
      <figure>
        <artwork>Gaya numbered="false" toc="default">
      <name>Contributors</name>
      <artwork name="" type="" align="left" alt=""><![CDATA[Gaya Nagarajan
Facebook
United States of America

Email: gaya@fb.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Gaurav Dawra
Cisco Systems
United States of America

Email: gdawra.ietf@gmail.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Dmitry Afanasiev
Yandex
Russian Federation

Email: fl0w@yandex-team.ru]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Tim Laberge
Cisco
United States of America

Email: tlaberge@cisco.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Edet Nkposong
Salesforce.com Inc.
United States of America

Email: enkposong@salesforce.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Mohan Nanduri
Microsoft
United States of America

Email: mohan.nanduri@oracle.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[James Uttaro
ATT
United States of America

Email: ju1738@att.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Saikat Ray
Unaffiliated
United States of America

Email: raysaikat@gmail.com]]></artwork>
      <artwork name="" type="" align="left" alt=""><![CDATA[Jon Mitchell
Unaffiliated
United States of America

Email: jrmitche@puck.nether.net]]></artwork>
    </section>
  </back>
</rfc>