YANG Data Model for L3VPN Service Delivery

   Huawei Technologies            bill.wu@huawei.com
   Orange Business Services       stephane.litkowski@orange.com
   Verizon                        luis.tomotaki@verizon.com
   KDDI Corporation               ke-oogaki@kddi.com

Operations and Management

Keywords: YANG, L3VPN, Data Model, Service Model

This document defines a YANG data model that can be used for
communication between customers and network operators and to deliver
a Layer 3 provider-provisioned VPN service. This document is limited
to BGP PE-based VPNs as described in RFCs 4026, 4110, and 4364. This
model is intended to be instantiated at the management system to
deliver the overall service. It is not a configuration model to be
used directly on network elements. This model provides an abstracted
view of the Layer 3 IP VPN service configuration components. It will
be up to the management system to take this model as input and use
specific configuration models to configure the different network
elements to deliver the service. How the configuration of network
elements is done is out of scope for this document.

This document obsoletes RFC 8049; it replaces the unimplementable
module in that RFC with a new module with the same name that is
not backward compatible. The changes are a series of small fixes to
the YANG module and some clarifications to the text.

This document defines a Layer 3 VPN service data model written in
YANG. The model defines service configuration elements that can be
used in communication protocols between customers and network
operators. Those elements can also be used as input to automated
control and configuration applications.

This document obsoletes RFC 8049; it creates a new module with the
same name as the module defined in that RFC. The changes from
RFC 8049 are listed in full in Section 1.4. They are small in
scope, but include fixes to the module to make it possible to
implement.

The YANG module described in RFC 8049 cannot be implemented because
of issues around the use of XPath. These issues are explained
below.

Section 11 of RFC 7950 describes when it is permissible to reuse a
module name. The discussion below provides an impact assessment in
this context.

The following terms are defined in RFC 6241 and are not redefined
here:

   o  client
   o  configuration data
   o  server
   o  state data

The following terms are defined in RFC 7950 and are not redefined
here:

   o  augment
   o  data model
   o  data node

The terminology for describing YANG data models is found in
RFC 7950.

This document presents some configuration examples using XML
representation.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14 when, and only when, they appear in all
capitals, as shown here.
A simplified graphical representation of the data model is
presented in this document. The meanings of the symbols in these
diagrams are as follows:

   o  Brackets "[" and "]" enclose list keys.

   o  Curly braces "{" and "}" contain names of optional features
      that make the corresponding node conditional.

   o  Abbreviations before data node names: "rw" means configuration
      data (read-write), and "ro" means state data (read-only).

   o  Symbols after data node names: "?" means an optional node, and
      "*" denotes a "list" or "leaf-list".

   o  Parentheses enclose choice and case nodes, and case nodes are
      also marked with a colon (":").

   o  Ellipsis ("...") stands for contents of subtrees that are not
      shown.

This document revises and obsoletes the L3VPN Service Model of
RFC 8049,
drawing on insights gained from L3VPN Service Model deployments and on
feedback from the community. The major changes are as follows:

   o  Change the type from 16-bit integer to string for the leaf
      "id" under the "qos-classification-policy" container.

   o  Stick to using "ordered-by user" and remove the inefficiency
      of mapping service model sequence numbers to device model
      sequence numbers.

   o  Remove mandating the use of deviations, and add "if-feature
      target-sites" under the leaf-list "target-sites" defined in
      Section 6.12.2.1 of RFC 8049.

   o  Change keywords describing the operation of the management
      system in several sections.

   o  Fix incomplete description statements.

   o  Add a YANG statement to check that Stateless Address
      Autoconfiguration (SLAAC) parameters are used only for IPv6.

   o  Fix strange wording.

   o  Change the use of absolute paths to the use of relative paths
      in the "must" or "path" statements for the vpn-policy-id leaf,
      the management container, the location leaf, the devices
      container, the location case, the location-reference leaf, the
      device case, and the device-reference leaf, so that the
      configuration applies only to the current site.

   o  Change the "must" statement to a "when" statement for the
      management container and the device container.

   o  Fix optional parameter issues by adding a default or a
      description for some parameters and making others mandatory.

   o  Define a new grouping, "vpn-profile-cfg", for all the
      identifiers provided by the SP to the customer. The
      identifiers include the cloud identifier, the standard QoS
      profile, the OAM profile name, and the provider profile for
      encryption.

   o  Add the XPath string representation of identityrefs and remove
      unqualified names.

   o  Change from YANG 1.0 support to YANG 1.1 support.

   o  Remove the "when" statement from the leaf
      nat44-customer-address.

   o  Fix broken examples and add mandatory elements in the
      examples.

   o  Remove redundant parameters in the cloud access.

   o  Specify the provider address and a list of start-end addresses
      from the provider address space for the DHCP case.

   o  Add text to clarify what a site is.

   o  Add multi-filter and multi-VPN support per entry for the VPN
      policy.

   o  Modify the descriptions of the svc-input-bandwidth and
      svc-output-bandwidth leaves to make them consistent with the
      rest of the text.

   o  Clarify the rationale of the model in the Introduction.

   o  Add text to clarify the way to achieve per-VPN QoS policy.

RFC 8049 made an initial attempt to define a YANG data model for
L3VPN services. After it was published, it was discovered that,
while the YANG compiled, it was broken from an implementation
perspective. That is, it was impossible to build a functional
implementation of the module. Section 1.4 provides a full list of
the changes since RFC 8049.
Some of these changes remove ambiguities from the documented YANG,
while other changes fix the implementation issues:

   1.  Several uses of 'must' expressions in the module were broken
       badly enough that the module was not usable in the form in
       which it was published. While some compilers and YANG
       checkers found no issues (most YANG tools do not attempt to
       parse these expressions), other tools that fully parse the
       XPath in the expressions refused to compile them. The changes
       needed to fix these expressions were small and local.

   2.  The second issue relates to how Access Control List (ACL)
       rules were sorted. In RFC 8049, the English-language text and
       the text in the YANG definition contradicted each other.
       Furthermore, the model used classic ACL rule numbering
       notation for something that was semantically very different
       (ordered-by user) in the YANG, thus creating the potential
       for misunderstanding.

   3.  Further to point 2, the ACL modeling in RFC 8049 was
       incompatible with ongoing work in other IETF documents, such
       as the IETF ACL YANG data model.

When changing the content of a YANG module, care must be taken to
ensure that there are no interoperability issues caused by a failure
to enable backward compatibility. Section 11 of RFC 7950 clearly
describes the circumstances under which it is not acceptable to
maintain a module name:

   ...changes to published modules are not allowed if they have
   any potential to cause interoperability problems between a
   client using an original specification and a server using an
   updated specification.

The module defined in this document is not backward compatible with
that defined in RFC 8049, but it is important to understand that
there is no possibility of an interoperability issue between the
module defined in this document and that presented in RFC 8049,
because that module could not be implemented for the reasons
described above. Thus, noting the rules set out in Section 11 of
RFC 7950, it was decided to retain the module name in this
document.
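To illustrate the kind of small, local fix involved, the following
sketch shows the general shape of the change from an absolute to a
relative XPath in a constraint. The node names and paths here are
illustrative only and are not the actual text of the module:

```yang
// Illustrative only -- not the actual module text.
//
// Before (broken): an absolute path is evaluated against the whole
// datastore, so the constraint could match a policy in *any* site:
//
//   path "/l3vpn-svc/sites/site/vpn-policies/vpn-policy/vpn-policy-id";
//
// After (fixed): a relative path anchored at the current node, so
// the reference can only resolve within the enclosing site:
leaf vpn-policy-id {
  type leafref {
    path "../../../vpn-policies/vpn-policy/vpn-policy-id";
  }
  description
    "Reference to a VPN policy defined in the current site.";
}
```

The same pattern (replacing datastore-wide absolute paths with
paths relative to the current site) applies to the other leaves and
containers listed in Section 1.4.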
   AAA: Authentication, Authorization, and Accounting
   ACL: Access Control List
   ADSL: Asymmetric DSL
   AH: Authentication Header
   AS: Autonomous System
   ASBR: Autonomous System Border Router
   ASM: Any-Source Multicast
   BAS: Broadband Access Switch
   BFD: Bidirectional Forwarding Detection
   BGP: Border Gateway Protocol
   BSR: Bootstrap Router
   CE: Customer Edge
   CLI: Command Line Interface
   CsC: Carriers' Carriers
   CSP: Cloud Service Provider
   DHCP: Dynamic Host Configuration Protocol
   DSLAM: Digital Subscriber Line Access Multiplexer
   ESP: Encapsulating Security Payload
   GRE: Generic Routing Encapsulation
   IGMP: Internet Group Management Protocol
   LAN: Local Area Network
   MLD: Multicast Listener Discovery
   MTU: Maximum Transmission Unit
   NAT: Network Address Translation
   NETCONF: Network Configuration Protocol
   NNI: Network-to-Network Interface
   OAM: Operations, Administration, and Maintenance
   OSPF: Open Shortest Path First
   OSS: Operations Support System
   PE: Provider Edge
   PIM: Protocol Independent Multicast
   POP: Point of Presence
   QoS: Quality of Service
   RD: Route Distinguisher
   RIP: Routing Information Protocol
   RP: Rendezvous Point
   RT: Route Target
   SFTP: Secure FTP
   SLA: Service Level Agreement
   SLAAC: Stateless Address Autoconfiguration
   SP: Service Provider
   SPT: Shortest Path Tree
   SSM: Source-Specific Multicast
   VM: Virtual Machine
   VPN: Virtual Private Network
   VRF: VPN Routing and Forwarding
   VRRP: Virtual Router Redundancy Protocol

Customer Edge (CE) Device: A CE is equipment dedicated to a
particular customer; it is directly connected (at Layer 3) to one or
more PE devices via attachment circuits. A CE is usually located at
the customer premises and is usually dedicated to a single VPN,
although it may support multiple VPNs if each one has separate
attachment circuits.

Provider Edge (PE) Device: A PE is equipment managed by the SP; it
can support multiple VPNs for different customers and is directly
connected (at Layer 3) to one or more CE devices via attachment
circuits. A PE is usually located at an SP point of presence (POP)
and is managed by the SP.

PE-Based VPNs: The PE devices know that certain traffic is VPN
traffic. They forward the traffic (through tunnels) based on the
destination IP address of the packet and, optionally, based on other
information in the IP header of the packet. The PE devices are
themselves the tunnel endpoints. The tunnels may make use of various
encapsulations to send traffic over the SP network (such as, but not
restricted to, GRE, IP-in-IP, IPsec, or MPLS tunnels).

A Layer 3 IP VPN service is a collection of sites that are authorized
to exchange traffic between each other over a shared IP
infrastructure. This Layer 3 VPN service model aims at providing a
common understanding of how the corresponding IP VPN service is to be
deployed over the shared infrastructure. This service model is
limited to BGP PE-based VPNs as described in RFCs 4026, 4110, and
4364.

The idea of the L3 IP VPN service model is to propose an abstracted
interface between customers and network operators to manage
configuration of components of an L3VPN service. The model is
intended to be used in a mode where the network operator's system is
the server and the customer's system is the client. A typical scenario
would be to use this model as an input for an orchestration layer
that will be responsible for translating it to an orchestrated
configuration of network elements that will be part of the service.
The network elements can be routers but can also be servers (like
AAA); the network's configuration is not limited to these
examples. The configuration of network elements can be done via the
CLI, NETCONF/RESTCONF
coupled with YANG data models of a specific configuration (BGP, VRF,
BFD, etc.), or some other technique, as preferred by the operator.

The usage of this service model is not limited to this example; it
can be used by any component of the management system but not
directly by network elements.

The YANG module is divided into two main containers: "vpn-services"
and "sites".

The "vpn-service" list under the vpn-services container defines
global parameters for the VPN service for a specific customer.

A "site" is composed of at least one "site-network-access" and, in
the case of multihoming, may have multiple site-network-access
points. The site-network-access attachment is done through a
"bearer" with an "ip-connection" on top. The bearer refers to
properties of the attachment that are below Layer 3, while the
connection refers to properties oriented to the Layer 3 protocol.
The bearer may be allocated dynamically by the SP, and the customer
may provide some constraints or parameters to drive the placement of
the access.

Authorization of traffic exchange is done through what we call a
VPN policy or VPN service topology defining routing exchange rules
between sites.

The figure below describes the overall structure of the YANG
module:

The model defined in this document implements many features that
allow implementations to be modular. As an example, an
implementation may support only IPv4 VPNs (IPv4 feature), IPv6 VPNs
(IPv6 feature), or both (by advertising both features). The routing
protocols proposed to the customer may also be enabled through
features. This model also defines some features for options that
are more advanced, such as support for extranet VPNs, site
diversity, and QoS.

In addition, as for any YANG data model, this service model can be
augmented to implement new behaviors or specific features. For
example, this model uses different options for IP address
assignments; if those options do not fulfill all requirements, new
options can be added through augmentation.

A vpn-service list item contains generic information about the VPN
service. The "vpn-id" provided in the vpn-service list refers to an
internal reference for this VPN service, while the customer name
refers to a more-explicit reference to the customer. This identifier
is purely internal to the organization responsible for the VPN
service.

The type of VPN service topology is required for configuration. Our
proposed model supports any-to-any, Hub and Spoke (where Hubs can
exchange traffic), and "Hub and Spoke disjoint" (where Hubs cannot
exchange traffic). New topologies could be added via augmentation.
By default, the any-to-any VPN service topology is used.

A Layer 3 PE-based VPN is built using route targets (RTs) as
described in RFC 4364. The management system is
expected to automatically allocate a set of RTs upon receiving a
VPN service creation request. How the management system allocates
RTs is out of scope for this document, but multiple ways could be
envisaged, as described below.

In the first example, a service orchestration, owning the
instantiation of this service model, requests RTs from the network
OSS. Based on the requested VPN service topology, the network OSS
replies with one or multiple RTs. The interface between this
service orchestration and the network OSS is out of scope for this
document.

In the second example, a service orchestration, owning the
instantiation of this service model, owns one or more pools of RTs
(specified by the SP) that can be allocated. Based on the requested
VPN service topology, it will allocate one or multiple RTs from the
pool.

The mechanisms shown above are just examples and should not be
considered an exhaustive list of solutions.

In the any-to-any VPN service topology, all VPN sites can communicate
with each other without any restrictions. The management system that
receives an any-to-any IP VPN service request through this model is
expected to assign and then configure the VRF and RTs on the
appropriate PEs. In the any-to-any case, a single RT is generally
required, and every VRF imports and exports this RT.

In the Hub-and-Spoke VPN service topology, all Spoke sites can
communicate only with Hub sites but not with each other, and Hubs
can also communicate with each other. The management system that
receives a Hub-and-Spoke IP VPN service request through this model
is expected to assign and then configure the VRF and RTs on the
appropriate PEs. In
the Hub-and-Spoke case, two RTs are generally required (one RT for
Hub routes and one RT for Spoke routes). A Hub VRF that connects Hub
sites will export Hub routes with the Hub RT and will import Spoke
routes through the Spoke RT. It will also import the Hub RT to allow
Hub-to-Hub communication. A Spoke VRF that connects Spoke sites will
export Spoke routes with the Spoke RT and will import Hub routes
through the Hub RT.

The management system MUST take into account constraints on
Hub-and-Spoke connections. For example, if a management system
decides to mesh a Spoke site and a Hub site on the same PE, it
needs to mesh connections in different VRFs, as shown in the figure
below.

In the Hub and Spoke disjoint VPN service topology, all Spoke sites
can communicate only with Hub sites but not with each other, and Hubs
cannot communicate with each other. The management system that
receives a Hub and Spoke disjoint IP VPN service request through
this model is expected to assign and then configure the VRF and RTs
on the appropriate PEs.
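As an illustration, a Hub and Spoke disjoint service request might
be conveyed by an instance of the "vpn-service" list along the
following lines. This is a sketch only: the namespace and the exact
identity name ("hub-spoke-disjoint") are assumptions based on the
module described in this document, and a real request would carry
additional nodes.

```xml
<vpn-services xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc"
    xmlns:l3vpn-svc="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
  <vpn-service>
    <!-- Internal reference for this VPN service -->
    <vpn-id>12456487</vpn-id>
    <customer-name>example-customer</customer-name>
    <!-- Requested topology; any-to-any would be the default
         if this leaf were omitted -->
    <vpn-service-topology>l3vpn-svc:hub-spoke-disjoint</vpn-service-topology>
  </vpn-service>
</vpn-services>
```

Note the namespace-qualified identityref value, reflecting the move
to qualified XPath string representations of identityrefs listed in
Section 1.4.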
In the Hub-and-Spoke case, two RTs are required (one RT for Hub
routes and one RT for Spoke routes). A Hub VRF that connects Hub
sites will export Hub routes with the Hub RT and will import Spoke
routes through the Spoke RT. A Spoke VRF that connects Spoke sites
will export Spoke routes with the Spoke RT and will import Hub routes
through the Hub RT.

The management system MUST take into account constraints on
Hub-and-Spoke connections, as in the previous case.

Hub and Spoke disjoint can also be seen as multiple Hub-and-Spoke
VPNs (one per Hub) that share a common set of Spoke sites.

The proposed model provides cloud access configuration via the
"cloud-accesses" container. The usage of cloud-access is targeted
for the public cloud. An Internet access can also be considered a
public cloud access service. The cloud-accesses container provides
parameters for network address translation and authorization rules.

A private cloud access may be addressed through NNIs, as described
later in this document.

A cloud identifier is used to reference the target service. This
identifier is local to each administration.

The model allows for source address translation before accessing the
cloud. IPv4-to-IPv4 address translation (NAT44) is the only
supported option, but other options can be added through
augmentation. If IP source address translation is required to access
the cloud, the "enabled" leaf MUST be set to true in the "nat44"
container. An IP address may be provided in the "customer-address"
leaf if the customer is providing the IP address to be used for the
cloud access. If the SP is providing this address,
"customer-address" is not necessary, as it can be picked from a
pool of addresses managed by the SP.

By default, all sites in the IP VPN MUST be authorized to access the
cloud. If restrictions are required, a user MAY configure the
"permit-site" or "deny-site" leaf-list. The permit-site leaf-list
defines the list of sites authorized for cloud access. The deny-site
leaf-list defines the list of sites denied for cloud access. The
model supports both "deny-any-except" and "permit-any-except"
authorization.

How the restrictions will be configured on network elements is out
of scope for this document.

In the example above, we configure the global VPN to access the
Internet by creating a cloud-access pointing to the cloud identifier
for the Internet service. No authorized sites will be configured, as
all sites are required to access the Internet. The
"address-translation/nat44/enabled" leaf will be set to true.

If Site 1 and Site 2 require access to Cloud 1, a new cloud-access
pointing to the cloud identifier of Cloud 1 will be created. The
permit-site leaf-list will be filled with a reference to Site 1 and
Site 2.

If all sites except Site 1 require access to Cloud 2, a new
cloud-access pointing to the cloud identifier of Cloud 2 will be
created. The deny-site leaf-list will be filled with a reference to
Site 1.

A service with more than one cloud access is functionally
identical to multiple services each with a single cloud access,
where the sites that belong to each service in the latter case
correspond with the authorized sites for each cloud access in the
former case. However, defining a single service with multiple cloud
accesses may be operationally simpler.

Multicast in IP VPNs is described in RFC 6513.

If multicast support is required for an IP VPN, some global
multicast parameters are required as input for the service request.

Users of this model will need to provide the flavors of trees that
will be used by customers within the IP VPN (customer tree). The
proposed model supports bidirectional, shared, and source-based trees
(and can be augmented). Multiple flavors of trees can be supported
simultaneously.

When an ASM flavor is requested, this model requires that the "rp"
and "rp-discovery" parameters be filled. Multiple RP-to-group
mappings can be created using the "rp-group-mappings" container. For
each mapping, the SP can manage the RP service by setting the
"provider-managed/enabled" leaf to true. In the case of a provider-
managed RP, the user can request RP redundancy and/or optimal traffic
delivery. Those parameters will help the SP select the appropriate
technology or architecture to fulfill the customer service
requirement: for instance, in the case of a request for optimal
traffic delivery, an SP may use Anycast-RP or RP-tree-to-SPT
switchover architectures.

In the case of a customer-managed RP, the RP address must be filled
in the RP-to-group mappings using the "rp-address" leaf. This leaf
is not needed for a provider-managed RP.

Users can define a specific mechanism for RP discovery, such as the
"auto-rp", "static-rp", or "bsr-rp" modes. By default, the model
uses "static-rp" if ASM is requested. A single rp-discovery
mechanism is allowed for the VPN. The "rp-discovery" container can
be used for both provider-managed and customer-managed RPs. In the
case of a provider-managed RP, if the user wants to use "bsr-rp" as a
discovery protocol, an SP should consider the provider-managed
"rp-group-mappings" for the "bsr-rp" configuration. The SP will then
configure its selected RPs to be "bsr-rp-candidates". In the case of
a customer-managed RP and a "bsr-rp" discovery mechanism, the
"rp-address" provided will be the bsr-rp candidate.

There are some cases where a particular VPN needs access to
resources (servers, hosts, etc.) that are external. Those resources
may be located in another VPN.

In the figure above, VPN B has some resources on Site B that need to
be available to some customers/partners. VPN A must be able to
access those VPN B resources.

Such a VPN connection scenario can be achieved via a VPN policy as
defined later in this document. But there are some simple
cases where a particular VPN (VPN A) needs access to all resources
in another VPN (VPN B). The model provides an easy way to set up
this connection using the "extranet-vpns" container.

The extranet-vpns container defines a list of VPNs a particular VPN
wants to access. The extranet-vpns container must be used on
customer VPNs accessing extranet resources in another VPN. In the
figure above, in order to provide VPN A with access to VPN B, the
extranet-vpns container needs to be configured under VPN A with an
entry corresponding to VPN B. There is no service configuration
requirement on VPN B.

Readers should note that even if there is no configuration
requirement on VPN B, if VPN A lists VPN B as an extranet, all sites
in VPN B will gain access to all sites in VPN A.

The "site-role" leaf defines the role of the local VPN sites in the
target extranet VPN service topology. Site roles are defined later
in this document. Based on this, the requirements described there
regarding the site-role leaf are also applicable here.

In the example below, VPN A accesses VPN B resources through an
extranet connection. A Spoke role is required for VPN A sites, as
sites from VPN A must not be able to communicate with each other
through the extranet VPN connection.

This model does not define how the extranet configuration will be
achieved.

Any VPN interconnection scenario that is more complex (e.g., only
certain parts of sites on VPN A accessing only certain parts of sites
on VPN B) needs to be achieved using a VPN attachment, and
especially a VPN policy, as defined later in this document.

A site represents a connection of a customer office to one or more
VPN services. Each site is associated with one or more locations.
A site has several characteristics:
   o  Unique identifier (site-id): uniquely identifies the site
      within the overall network infrastructure. The identifier is
      a string that allows any encoding for the local administration
      of the VPN service.

   o  Locations (locations): site location information that allows
      easy retrieval of information from the nearest available
      resources. A site may be composed of multiple locations.
      Alternatively, two or more sites can be associated with the
      same location, by referencing the same location ID.

   o  Devices (devices): allows the customer to request one or more
      customer premises equipment entities from the SP for a
      particular site.

   o  Management (management): defines the type of management for
      the site -- for example, co-managed, customer-managed, or
      provider-managed.

   o  Site network accesses (site-network-accesses): defines the
      list of network accesses associated with the sites, and their
      properties -- especially bearer, connection, and service
      parameters.

A site-network-access represents an IP logical connection of a site.
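A site and its site-network-accesses might be skeletonized in XML
along the following lines. This is a sketch only: the namespace is
the one expected for the ietf-l3vpn-svc module, and the key names
of the location and access list entries are assumptions following
the naming pattern suggested by the text; a real instance would
carry many more nodes.

```xml
<sites xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc"
    xmlns:l3vpn-svc="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
  <site>
    <!-- Unique identifier for the site -->
    <site-id>site-ny-hq</site-id>
    <locations>
      <!-- One entry per building/location of the site -->
      <location>
        <location-id>manhattan</location-id>
      </location>
      <location>
        <location-id>brooklyn</location-id>
      </location>
    </locations>
    <management>
      <!-- Identity name assumed for a provider-managed site -->
      <type>l3vpn-svc:provider-managed</type>
    </management>
    <site-network-accesses>
      <!-- One entry per logical access, e.g., for multihoming -->
      <site-network-access>
        <site-network-access-id>access-1</site-network-access-id>
      </site-network-access>
    </site-network-accesses>
  </site>
</sites>
```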
A site may have multiple site-network-accesses.

Multiple site-network-accesses are used, for instance, in the case
of multihoming. Some other meshing cases may also include multiple
site-network-accesses.

The site configuration is viewed as a global entity; we assume that
it is mostly the management system's role to split the parameters
between the different elements within the network. For example, in
the case of the site-network-access configuration, the management
system needs to split the overall parameters between the PE
configuration and the CE configuration.

A site may be composed of multiple locations. All the locations will
need to be configured as part of the "locations" container and list.
A typical example of a multi-location site is a headquarters office
in a city composed of multiple buildings. Those buildings may be
located in different parts of the city and may be linked by
intra-city fibers (customer metropolitan area network). In such a
case, when connecting to a VPN service, the customer may ask for
multihoming based on its distributed locations.

A customer may also request some premises equipment entities (CEs)
from the SP via the "devices" container. Requesting a CE implies a
provider-managed or co-managed model. A particular device must be
ordered to a particular already-configured location. This would help
the SP send the device to the appropriate postal address. In a
multi-location site, a customer may, for example, request a CE for
each location on the site where multihoming must be implemented. In
the figure above, one device may be requested for the Manhattan
location and one other for the Brooklyn location.

By using devices and locations, the user can influence the
multihoming scenario to be implemented: single CE, dual CE, etc.

As mentioned earlier, a site may be multihomed. Each IP network
access for a site is defined in the "site-network-accesses"
container. The site-network-access parameter defines how the site is
connected on the network and is split into three main classes of
parameters:
   o  bearer: defines requirements of the attachment (below
      Layer 3).

   o  connection: defines Layer 3 protocol parameters of the
      attachment.

   o  availability: defines the site's availability policy. The
      availability parameters are defined later in this document.

The site-network-access has a specific type
(site-network-access-type). This document defines two types:
   o  point-to-point: describes a point-to-point connection between
      the SP and the customer.

   o  multipoint: describes a multipoint connection between the SP
      and the customer.

The type of site-network-access may have an impact on the parameters
offered to the customer, e.g., an SP may not offer encryption for
multipoint accesses. It is up to the provider to decide what
parameter is supported for point-to-point and/or multipoint accesses;
this topic is out of scope for this document. Some containers
proposed in the model may require extensions in order to work
properly for multipoint accesses.

The bearer container defines the requirements for the site
attachment to the provider network that are below Layer 3.

The bearer parameters will help determine the access media to be
used. This is further described later in this document.

The "ip-connection" container defines the protocol parameters of the
attachment (IPv4 and IPv6). Depending on the management mode, it
refers to PE-CE addressing or CE-to-customer-LAN addressing. In any
case, it describes the responsibility boundary between the provider
and the customer. For a customer-managed site, it refers to the
PE-CE connection. For a provider-managed site, it refers to the
CE-to-LAN connection.

An IP subnet can be configured for either IPv4 or IPv6 Layer 3
protocols. For a dual-stack connection, two subnets will be
provided, one for each address family.

The "address-allocation-type" determines how the address allocation
needs to be done. The current model defines five ways to perform IP
address allocation:
   o  provider-dhcp: The provider will provide DHCP service for
      customer equipment; this is applicable to either the "IPv4"
      container or the "IPv6" container.

   o  provider-dhcp-relay: The provider will provide DHCP relay
      service for customer equipment; this is applicable to both
      IPv4 and IPv6 addressing. The customer needs to populate the
      DHCP server list to be used.

   o  static-address: Addresses will be assigned manually; this is
      applicable to both IPv4 and IPv6 addressing.

   o  slaac: This parameter enables stateless address
      autoconfiguration (SLAAC). This is applicable to IPv6 only.

   o  provider-dhcp-slaac: The provider will provide DHCP service
      for customer equipment, as well as stateless address
      autoconfiguration. This is applicable to IPv6 only.

In the dynamic addressing mechanism, the SP is expected to provide
at
least the IP address, prefix length, and default gateway information. In the
case of multiple site-network-access points belonging to the same
VPN, address space allocated for one site-network-access should not
conflict with one allocated for other site-network-accesses.

A customer may require a specific IP connectivity fault detection
mechanism on the IP connection. The model supports BFD as a fault
detection mechanism. This can be extended with other mechanisms via
augmentation. The provider can propose some profiles to the
customer, depending on the service level the customer wants to
achieve. Profile names must be communicated to the customer. This
communication is out of scope for this document. Some fixed values
for the holdtime period may also be imposed by the customer if the
provider allows the customer this function.

The "oam" container can easily be augmented by other mechanisms; in
particular, work done by the LIME Working Group
(https://datatracker.ietf.org/wg/lime/charter/) may be reused in
applicable scenarios.

Some parameters can be configured at both the site level and the
site-network-access level, e.g., routing, services, security.
Inheritance applies when parameters are defined at the site level.
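The inheritance behavior can be sketched as follows. This is
illustrative only: "example-parameter" is a placeholder standing
for any leaf that exists at both levels, not an actual node of the
module.

```xml
<site>
  <site-id>site-1</site-id>
  <!-- Placeholder parameter set at site level -->
  <example-parameter>gold</example-parameter>
  <site-network-accesses>
    <site-network-access>
      <site-network-access-id>access-1</site-network-access-id>
      <!-- No value here: inherits "gold" from the site level -->
    </site-network-access>
    <site-network-access>
      <site-network-access-id>access-2</site-network-access-id>
      <!-- Access-level value overrides the site-level "gold" -->
      <example-parameter>silver</example-parameter>
    </site-network-access>
  </site-network-accesses>
</site>
```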
If a parameter is configured at both the site level and the access
level, the access-level parameter MUST override the site-level
parameter. Those parameters will be described later in this
document.

In terms of provisioning impact, it will be up to the implementation
to decide on the appropriate behavior when modifying existing
configurations. But the SP will need to communicate to the user
about the impact of using inheritance. For example, if we consider
that a site has already provisioned three site-network-accesses, what
will happen if a customer changes a service parameter at the site
level? An implementation of this model may update the service
parameters of all already-provisioned site-network-accesses (with
potential impact on live traffic), or it may take into account this
new parameter only for the new sites.

A VPN has a particular service topology, as described in
. As a consequence, each site belonging to a VPN is
assigned a particular role in this topology. The site-role leaf
defines the role of the site in a particular VPN topology.

In the any-to-any VPN service topology, all sites MUST have the same
role, which will be "any-to-any-role".

In the Hub-and-Spoke VPN service topology or the Hub and Spoke disjoint VPN service topology, sites MUST have a Hub role or a Spoke role.

A site may be part of one or multiple VPNs. The "site-vpn-flavor"
defines the way the VPN multiplexing is done. The current version of
the model supports four flavors:
site-vpn-flavor-single: The site belongs to only one VPN.

site-vpn-flavor-multi: The site belongs to multiple VPNs, and all the logical accesses of the site belong to the same set of VPNs.

site-vpn-flavor-sub: The site belongs to multiple VPNs with multiple logical accesses. Each logical access may map to different VPNs (one or many).

site-vpn-flavor-nni: The site represents an option A NNI.

The figure below describes a single VPN attachment. The site connects to only one VPN.

The figure below describes a site connected to multiple VPNs.

In the example above, the New York office is multihomed. Both
logical accesses are using the same VPN attachment rules, and both
are connected to VPN A and VPN B.

Reaching VPN A or VPN B from the New York office will be done via
destination-based routing. Having the same destination reachable
from the two VPNs may cause routing troubles. The customer
administration's role in this case would be to ensure the appropriate
mapping of its prefixes in each VPN.

The figure below describes a subVPN attachment. The site connects to
multiple VPNs, but each logical access is attached to a particular
set of VPNs. A typical use case for a subVPN is a customer site used
by multiple affiliates with private resources for each affiliate that
cannot be shared (communication between the affiliates is prevented).
It is similar to having separate sites, but in the case of a subVPN, the customer can share some physical components at a single location while maintaining strong communication isolation between the affiliates.
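Such a subVPN attachment might be expressed along the following lines (a non-normative sketch using the "vpn-attachment" container described later in this document; the site-role values are illustrative):

```xml
<site-network-accesses>
  <site-network-access>
    <site-network-access-id>site-network-access#1</site-network-access-id>
    <vpn-attachment>
      <vpn-id>VPNB</vpn-id>
      <site-role>any-to-any-role</site-role>
    </vpn-attachment>
  </site-network-access>
  <site-network-access>
    <site-network-access-id>site-network-access#2</site-network-access-id>
    <vpn-attachment>
      <vpn-id>VPNA</vpn-id>
      <site-role>any-to-any-role</site-role>
    </vpn-attachment>
  </site-network-access>
</site-network-accesses>
```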
In this example, site-network-access#1 is attached to VPN B, while
site-network-access#2 is attached to VPN A.

A multiVPN can be implemented in addition to a subVPN; as a
consequence, each site-network-access can access multiple VPNs. In
the example below, site-network-access#1 is mapped to VPN B and
VPN C, while site-network-access#2 is mapped to VPN A and VPN D.

Multihoming is also possible with subVPNs; in this case,
site-network-accesses are grouped, and a particular group will have
access to the same set of VPNs. In the example below,
site-network-access#1 and site-network-access#2 are part of the same
group (multihomed together) and are mapped to VPN B and VPN C; in
addition, site-network-access#3 and site-network-access#4 are part of
the same group (multihomed together) and are mapped to VPN A and
VPN D.

In terms of service configuration, a subVPN can be achieved by requesting that the site-network-access use the same bearer (see for more details).

A Network-to-Network Interface (NNI) scenario may be modeled using
the sites container (see ). Using the sites
container to model an NNI is only one possible option for NNIs (see
). This option is called "option A" by
reference to the option A NNI defined in .
It is helpful for the SP to indicate that the requested VPN connection
is not a regular site but rather is an NNI, as specific default device
configuration parameters may be applied in the case of NNIs (e.g.,
ACLs, routing policies).

The figure above describes an option A NNI scenario that can be
modeled using the sites container. In order to connect its customer
VPNs (VPN1 and VPN2) in SP B, SP A may request the creation of some
site-network-accesses to SP B. The site-vpn-flavor-nni will be used
to inform SP B that this is an NNI and not a regular customer site.
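Declaring such an NNI could look like the following sketch (the site-id is hypothetical, and site-network-accesses are omitted; the value is the "site-vpn-flavor-nni" identity):

```xml
<site>
  <site-id>NNI-to-SP-B</site-id>
  <site-vpn-flavor>site-vpn-flavor-nni</site-vpn-flavor>
  <!-- site-network-accesses toward SP B would follow here -->
</site>
```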
The site-vpn-flavor-nni may be multihomed and multiVPN as well.

Due to the multiple site-vpn flavors, the attachment of a site to an
IP VPN is done at the site-network-access (logical access) level
through the "vpn-attachment" container. The vpn-attachment container
is mandatory. The model provides two ways to attach a site to a VPN:
By referencing the target VPN directly.

By referencing a VPN policy for attachments that are more complex.

A choice is implemented to allow the user to choose the flavor that
provides the best fit.Referencing a vpn-id provides an easy way to attach a particular
logical access to a VPN. This is the best way in the case of a
single VPN attachment or subVPN with a single VPN attachment per
logical access. When referencing a vpn-id, the site-role setting
must be added to express the role of the site in the target VPN
service topology.

The example of a corresponding XML snippet above describes a subVPN case where a site (SITE1) has two logical accesses (LA1 and LA2), with LA1 attached to VPNA and LA2 attached to VPNB.

The "vpn-policy" list helps express a multiVPN scenario where a
logical access belongs to multiple VPNs. Multiple VPN policies can
be created to handle the subVPN case where each logical access is
part of a different set of VPNs.

As a site can belong to multiple VPNs, the vpn-policy list may be
composed of multiple entries. A filter can be applied to specify
that only some LANs of the site should be part of a particular VPN.
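As an illustrative sketch, a vpn-policy entry that places only one LAN of the site into a particular VPN might look like this (the names and the exact filter encoding are illustrative):

```xml
<vpn-policies>
  <vpn-policy>
    <vpn-policy-id>POLICY1</vpn-policy-id>
    <entries>
      <id>ENTRY1</id>
      <!-- Only the LAN tagged LAN1 is attached by this entry -->
      <filters>
        <filter>
          <type>lan</type>
          <lan-tag>LAN1</lan-tag>
        </filter>
      </filters>
      <!-- Target VPN and the site's role within it -->
      <vpn>
        <vpn-id>VPN1</vpn-id>
        <site-role>hub-role</site-role>
      </vpn>
    </entries>
  </vpn-policy>
</vpn-policies>
```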
Each time a site (or LAN) is attached to a VPN, the user must
precisely describe its role (site-role) within the target VPN service
topology.

In the example above, Site5 is part of two VPNs: VPN3 and VPN2. It
will play a Hub role in VPN2 and an any-to-any role in VPN3. We can
express such a multiVPN scenario with the following XML snippet:

Now, if a more-granular VPN attachment is necessary, filtering can be used. For example, if only LAN1 from Site5 must be attached to VPN2 as a Hub and only LAN2 must be attached to VPN3, the following XML snippet can be used:

The management system will have to determine where to connect each
site-network-access of a particular site to the provider network
(e.g., PE, aggregation switch).

The current model defines parameters and constraints that can influence the meshing of the site-network-access.

The management system MUST honor all customer constraints, or if a
constraint is too strict and cannot be fulfilled, the management
system MUST NOT provision the site and MUST provide
information to the user about which constraints could not
be fulfilled. How the information is provided is out of scope for
this document. Whether or not to relax the constraint would
then be left up to the user.

Parameters such as site location (see ) and access type (see ) are just hints for the management system for service placement.

In addition to parameters and constraints, the management system's
decision MAY be based on any other internal constraints that are left
up to the SP: least load, distance, etc.

In the case of provider management or co-management, one or more
devices have been ordered by the customer to a particular
already-configured location. The customer may force a particular
site-network-access to be connected on a particular device
that he ordered.

In the figure above, site-network-access#1 is associated with CE1 in the service request. The SP must ensure the provisioning of this connection.

The location information provided in this model MAY be used by a
management system to determine the target PE to mesh the site
(SP side). A particular location must be associated with each site
network access when configuring it. The SP MUST honor the
termination of the access on the location associated with the site
network access (customer side). The "country-code" in the
site location SHOULD be expressed as an ISO ALPHA-2 code.

The site-network-access location is determined by the
"location-flavor". In the case of a provider-managed or co-managed
site, the user is expected to configure a "device-reference" (device
case) that will bind the site-network-access to a particular device
that the customer ordered. As each device is already associated with
a particular location, in such a case the location information is
retrieved from the device location. In the case of a customer-
managed site, the user is expected to configure a
"location-reference" (location case); this provides a reference to an
existing configured location and will help with placement.

In the example above, Site #1 is a customer-managed site with a
location L1, while Site #2 is a provider-managed site for which a CE
(CE#1) was ordered. Site #2 is configured with L2 as its location.
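Sketched in the model's terms, the two sites' accesses might be placed as follows (identifiers reconstructed from this example; all other access details omitted):

```xml
<!-- Site #1: customer-managed; access placed by location -->
<site-network-access>
  <site-network-access-id>access#1</site-network-access-id>
  <location-reference>L1</location-reference>
</site-network-access>

<!-- Site #2: provider-managed; access bound to the ordered CE,
     whose location (L2) is already known to the SP -->
<site-network-access>
  <site-network-access-id>access#2</site-network-access-id>
  <device-reference>CE#1</device-reference>
</site-network-access>
```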
When configuring a site-network-access for Site #1, the user will
need to reference location L1 so that the management system will know
that the access will need to terminate on this location. Then, for
distance reasons, this management system may mesh Site #1 on a PE in
the Philadelphia POP. It may also take into account resources
available on PEs to determine the exact target PE (e.g., least
loaded). For Site #2, the user is expected to configure the
site-network-access with a device-reference to CE#1 so that the
management system will know that the access must terminate on the
location of CE#1 and must be connected to CE#1. For placement of the SP side of the access connection, if the nearest PE is used, the management system may mesh Site #2 on the Washington POP.

The management system needs to elect the access media to connect the
site to the customer (for example, xDSL, leased line, Ethernet
backhaul). The customer may provide some parameters/constraints that
will provide hints to the management system.

The bearer container information SHOULD be the first piece of
information considered when making this decision:
The "requested-type" parameter provides information about the
media type that the customer would like to use. If the "strict"
leaf is equal to "true", this MUST be considered a strict
constraint so that the management system cannot connect the site
with another media type. If the "strict" leaf is equal to "false"
(default) and if the requested media type cannot be fulfilled, the
management system can select another media type. The supported
media types SHOULD be communicated by the SP to the customer via a
mechanism that is out of scope for this document.

The "always-on" leaf defines a strict constraint: if set to true, the management system MUST elect a media type that is "always-on" (e.g., this means no dial access type).

The "bearer-reference" parameter is used in cases where the
customer has already ordered a network connection to the SP apart
from the IP VPN site and wants to reuse this connection. The
string used is an internal reference from the SP and describes the
already-available connection. This is also a strict requirement
that cannot be relaxed. How the reference is given to the
customer is out of scope for this document, but as a pure example,
when the customer ordered the bearer (through a process that is
out of scope for this model), the SP may have provided the bearer
reference that can be used for provisioning services on top.

Any other internal parameters from the SP can also be used. The
management system MAY use other parameters, such as the requested
"svc-input-bandwidth" and "svc-output-bandwidth", to help decide
which access type to use.

Each site-network-access may have one or more constraints that would
drive the placement of the access. By default, the model assumes
that there are no constraints, but allocation of a unique bearer per
site-network-access is expected.

In order to help with the different placement scenarios, a
site-network-access may be tagged using one or multiple group
identifiers. The group identifier is a string, so it can accommodate
both explicit naming of a group of sites (e.g., "multihomed-set1" or
"subVPN") and the use of a numbered identifier (e.g., 12345678). The
meaning of each group-id is local to each customer administrator, and
the management system MUST ensure that different customers can use
the same group-ids. One or more group-ids can also be defined at the
site level; as a consequence, all site-network-accesses under the
site MUST inherit the group-ids of the site they belong to. When, in addition to the site group-ids, some group-ids are defined at the site-network-access level, the management system MUST consider the union of all groups (site level and site-network-access level) for this particular site-network-access.

For an already-configured site-network-access, each constraint MUST
be expressed against a targeted set of site-network-accesses. This
site-network-access MUST never be taken into account in the targeted
set -- for example, "My site-network-access S must not be connected
on the same POP as the site-network-accesses that are part of
Group 10." The set of site-network-accesses against which the
constraint is evaluated can be expressed as a list of groups,
"all-other-accesses", or "all-other-groups". The all-other-accesses
option means that the current site-network-access constraint MUST be
evaluated against all the other site-network-accesses belonging to
the current site. The all-other-groups option means that the
constraint MUST be evaluated against all groups that the current
site-network-access does not belong to.

The current model defines multiple constraint-types:
pe-diverse: The current site-network-access MUST NOT be connected to the same PE as the targeted site-network-accesses.

pop-diverse: The current site-network-access MUST NOT be connected to the same POP as the targeted site-network-accesses.

linecard-diverse: The current site-network-access MUST NOT be connected to the same linecard as the targeted site-network-accesses.

bearer-diverse: The current site-network-access MUST NOT use common bearer components compared to the bearers used by the targeted site-network-accesses. "bearer-diverse" provides some level of diversity at the access level. As an example, two bearer-diverse site-network-accesses must not use the same DSLAM, BAS, or Layer 2 switch.

same-pe: The current site-network-access MUST be connected to the same PE as the targeted site-network-accesses.

same-bearer: The current site-network-access MUST be connected using the same bearer as the targeted site-network-accesses.

These constraint-types can be extended through augmentation.

Each constraint is expressed as "The site-network-access S must be
<constraint-type> (e.g., pe-diverse, pop-diverse) from
these <target> site-network-accesses."

The group-id used to target some site-network-accesses may be the
same as the one used by the current site-network-access. This eases
the configuration of scenarios where a group of site-network-access
points has a constraint between the access points in the group. As
an example, if we want a set of sites (Site#1 to Site#5) to be
connected on different PEs, we can tag them with the same group-id
and express a pe-diverse constraint for this group-id with the
following XML snippet:

The group-id used to target some site-network-accesses may also be
different than the one used by the current site-network-access. This
can be used to express that a group of sites has some constraints
against another group of sites, but there is no constraint within the
group. For example, we consider a set of six sites and two groups;
we want to ensure that a site in the first group must be pop-diverse
from a site in the second group. The example of a corresponding XML
snippet is described as follows:

Some infeasible access placement scenarios could be created via the proposed configuration framework: constraints that are too restrictive, or constraints that conflict with each other, would lead to access placements that cannot be realized in the network. An
example of conflicting rules would be to request that
site-network-access#1 be pe-diverse from site-network-access#2 and to
request at the same time that site-network-access#2 be on the same PE
as site-network-access#1. When the management system cannot
determine the placement of a site-network-access, it MUST return an
error message indicating that placement was not possible.

The customer wants to create a multihomed site. The site will be
composed of two site-network-accesses; for resiliency purposes, the
customer wants the two site-network-accesses to be meshed on
different POPs.

This scenario can be expressed with the following XML snippet:

But it can also be expressed with the following XML snippet:

The customer has six branch offices in a particular region, and he
wants to prevent having all branch offices connected on the same PE.
He wants to express that three branch offices cannot be connected on
the same linecard. Also, the other branch offices must be connected
on a different POP. Those other branch offices also cannot be connected on the same linecard.

This scenario can be expressed as follows:

We need to create two groups of sites: Group#10, which is composed of Office#1, Office#2, and Office#3; and Group#20, which is composed of Office#4, Office#5, and Office#6.

Sites within Group#10 must be pop-diverse from sites within Group#20, and vice versa.

Sites within Group#10 must be linecard-diverse from other sites in Group#10 (same for Group#20).

To increase its site bandwidth at lower cost, a customer wants to
order two parallel site-network-accesses that will be connected to
the same PE.

This scenario can be expressed with the following XML snippet:

A customer has a site that is dual-homed. The dual-homing must be
done on two different PEs. The customer also wants to implement two
subVPNs on those multihomed accesses.This scenario can be expressed as follows:
The site will have four site-network-accesses (two subVPNs coupled via dual-homing).

Site-network-access#1 and site-network-access#3 will correspond to the multihoming of subVPN B. A PE-diverse constraint is required between them.

Site-network-access#2 and site-network-access#4 will correspond to the multihoming of subVPN C. A PE-diverse constraint is required between them.

To ensure proper usage of the same bearer for the subVPN, site-network-access#1 and site-network-access#2 must share the same bearer as site-network-access#3 and site-network-access#4.

The route distinguisher (RD) is a critical parameter of PE-based
L3VPNs as described in that provides the ability to
distinguish common addressing plans in different VPNs. As for route
targets (RTs), a management system is expected to allocate a VRF on
the target PE and an RD for this VRF.

If a VRF already exists on the target PE and the VRF fulfills the
connectivity constraints for the site, there is no need to recreate
another VRF, and the site MAY be meshed within this existing VRF.
How the management system checks that an existing VRF fulfills the
connectivity constraints for a site is out of scope for this
document.

If no such VRF exists on the target PE, the management system has to initiate the creation of a new VRF on the target PE and has to allocate a new RD for this new VRF.

The management system MAY apply a per-VPN or per-VRF allocation
policy for the RD, depending on the SP's policy. In a per-VPN
allocation policy, all VRFs (dispatched on multiple PEs) within a VPN
will share the same RD value. In a per-VRF model, all VRFs should
always have a unique RD value. Some other allocation policies are
also possible, and this document does not restrict the allocation
policies to be used.

The allocation of RDs MAY be done in the same way as RTs. The examples provided in could be reused in this scenario.

Note that an SP MAY configure a target PE for an automated allocation of RDs. In this case, there will be no need for any backend system to allocate an RD value.

A site may be multihomed, meaning that it has multiple
site-network-access points. Placement constraints defined in
previous sections will help ensure physical diversity.

When the site-network-accesses are placed on the network, a customer may want to use a particular routing policy on those accesses.

The "site-network-access/availability" container defines parameters
for site redundancy. The "access-priority" leaf defines a preference
for a particular access. This preference is used to model
load-balancing or primary/backup scenarios. The higher the
access-priority value, the higher the preference will be.

The figure below describes how the access-priority attribute can be used.

In the figure above, Hub#2 requires load-sharing, so all the
site-network-accesses must use the same access-priority value. On
the other hand, as Hub#1 requires a primary site-network-access and a
backup site-network-access, a higher access-priority setting will be
configured on the primary site-network-access.

Scenarios that are more complex can be modeled. Let's consider a Hub site with five accesses to the network (A1, A2, A3, A4, A5). The customer wants to load-share its traffic on A1 and A2 in the nominal situation. If A1 and A2 fail, the customer wants to load-share its traffic on A3 and A4; finally, if A1 to A4 are down, he wants to
use A5. We can model this easily by configuring the following
access-priority values: A1=100, A2=100, A3=50, A4=50, A5=10.

The access-priority scenario has some limitations. An
access-priority scenario like the previous one with five accesses but
with the constraint of having traffic load-shared between A3 and A4
in the case where A1 OR A2 is down is not achievable. But the
authors believe that using the access-priority attribute will cover
most of the deployment use cases and that the model can still be
extended via augmentation to support additional use cases.

The service model supports the ability to protect the traffic for a
site. Such protection provides a better level of availability in
multihoming scenarios by, for example, using local-repair techniques
in case of failures. The associated level of service guarantee would
be based on an agreement between the customer and the SP and is out
of scope for this document.

In the figure above, we consider an IP VPN service with three sites, including two dual-homed sites (Site#1 and Site#2). For the dual-homed sites, we consider PE1-CE1 and PE3-CE3 as primary and PE2-CE2 and PE4-CE4 as backup for the example (even if protection also applies to load-sharing scenarios).

In order to protect Site#2 against a failure, a user may set the
"traffic-protection/enabled" leaf to true for Site#2. How the
traffic protection will be implemented is out of scope for this
document. However, in such a case, we could consider traffic coming
from a remote site (Site#1 or Site#3), where the primary path would
use PE3 as the egress PE. PE3 may have preprogrammed a backup
forwarding entry pointing to the backup path (through PE4-CE4) for
all prefixes going through the PE3-CE3 link. How the backup path is
computed is out of scope for this document. When the PE3-CE3 link
fails, traffic is still received by PE3, but PE3 automatically
switches traffic to the backup entry; the path will therefore be
PE1-P1-(...)-P3-PE3-PE4-CE4 until the remote PEs reconverge and use
PE4 as the egress PE.

The "security" container defines customer-specific security parameters for the site. The security options supported in the model are limited but may be extended via augmentation.

The current model does not support any authentication parameters for the site connection, but such parameters may be added in the "authentication" container through augmentation.

Traffic encryption can be requested on the connection. It may be
performed at Layer 2 or Layer 3 by selecting the appropriate
enumeration in the "layer" leaf. For example, an SP may use IPsec
when a customer requests Layer 3 encryption. The encryption profile
can be SP defined or customer specific.

When an SP profile is used and a key (e.g., a pre-shared key) is allocated by the provider to be used by a customer, the SP should provide a way to communicate the key to the customer in a secure way.

When a customer profile is used, the model supports only a pre-shared key for authentication of the site connection, with the pre-shared key provided through the NETCONF or RESTCONF request. A secure channel must be used to ensure that the pre-shared key cannot be intercepted.

For security reasons, it may be necessary for the customer to change
the pre-shared key on a regular basis. To perform a key change, the
user can ask the SP to change the pre-shared key by submitting a new
pre-shared key for the site configuration (as shown below with a
corresponding XML snippet). This mechanism might not be hitless. A hitless key change mechanism may be added through augmentation.

Other key-management methodologies (e.g., PKI) may be added through augmentation.

The model defines three types of common management options:

provider-managed: The CE router is managed only by the provider. In this model, the responsibility boundary between the SP and the customer is between the CE and the customer network.

customer-managed: The CE router is managed only by the customer. In this model, the responsibility boundary between the SP and the customer is between the PE and the CE.

co-managed: The CE router is primarily managed by the provider; in addition, the SP allows customers to access the CE for configuration/monitoring purposes. In the co-managed mode, the responsibility boundary is the same as the responsibility boundary for the provider-managed model.

Based on the management model, different security options MAY be
derived.

In the co-managed case, the model defines options for the management address family (IPv4 or IPv6) and the associated management address.

"routing-protocol" defines which routing protocol must be activated between the provider and the customer router. The current model supports the following settings: bgp, rip, ospf, static, direct, and vrrp.

The routing protocol defined applies at the provider-to-customer
boundary. Depending on how the management model is administered, it
may apply to the PE-CE boundary or the CE-to-customer boundary. In
the case of a customer-managed site, the routing protocol defined
will be activated between the PE and the CE router managed by the
customer. In the case of a provider-managed site, the routing
protocol defined will be activated between the CE managed by the SP
and the router or LAN belonging to the customer. In this case, we
expect the PE-CE routing to be configured based on the SP's rules, as
both are managed by the same entity.

All the examples below will refer to a scenario for a customer-managed site.

All routing protocol types support dual stack by using the "address-family" leaf-list.

Example of a corresponding XML snippet with dual stack using the same routing protocol:

Example of a corresponding XML snippet with dual stack using two different routing protocols:

The routing protocol type "direct" SHOULD be used when a customer LAN
is directly connected to the provider network and must be advertised
in the IP VPN.

In this case, the customer has a default route to the PE address.

The routing protocol type "vrrp" SHOULD be used when the customer LAN is directly connected to the provider network, LAN redundancy is expected, and the LAN must be advertised in the IP VPN.

LAN attached directly to provider network with LAN redundancy:

In this case, the customer has a default route to the SP network.

The routing protocol type "static" MAY be used when a customer LAN is
connected to the provider network through a CE router and must be
advertised in the IP VPN. In this case, the static routes give next
hops (nh) to the CE and to the PE. The customer has a default route
to the SP network.

The routing protocol type "rip" MAY be used when a customer LAN is
connected to the provider network through a CE router and must be
advertised in the IP VPN. For IPv4, the model assumes that RIP
version 2 is used.

In the case of dual-stack routing requested through this model, the
management system will be responsible for configuring RIP (including
the correct version number) and associated address families on
network elements.

The routing protocol type "ospf" MAY be used when a customer LAN is
connected to the provider network through a CE router and must be
advertised in the IP VPN.

It can be used to extend an existing OSPF network and interconnect different areas. See for more details.

The model also defines an option to create an OSPF sham link between
two sites sharing the same area and having a backdoor link. The
sham link is created by referencing the target site sharing the same
OSPF area. The management system will be responsible for checking to
see if there is already a sham link configured for this VPN and area
between the same pair of PEs. If there is no existing sham link, the
management system will provision one. This sham link MAY be reused
by other sites.

Regarding dual-stack support, the user MAY specify both IPv4 and IPv6
address families, if both protocols should be routed through OSPF.
As OSPF uses separate protocol instances for IPv4 and IPv6, the
management system will need to configure both OSPF version 2 and OSPF
version 3 on the PE-CE link.

Other OSPF parameters, such as timers, are typically set by the SP and communicated to the customer outside the scope of this model.

Example of a corresponding XML snippet with OSPF routing parameters in the service model:

Example of PE configuration done by the management system:

The routing protocol type "bgp" MAY be used when a customer LAN is
connected to the provider network through a CE router and must be
advertised in the IP VPN.

The session addressing will be derived from connection parameters as well as the SP's knowledge of the addressing plan that is in use.

In the case of dual-stack access, the user MAY request BGP routing
for both IPv4 and IPv6 by specifying both address families. It will
be up to the SP and management system to determine how to describe the
configuration (two BGP sessions, single, multi-session, etc.). This,
along with other BGP parameters such as timers, is communicated to
the customer outside the scope of this model.

The service configuration below activates BGP on the PE-CE link for both IPv4 and IPv6.

BGP activation requires the SP to know the address of the customer
peer. If the site-network-access connection addresses are used for
BGP peering, the "static-address" allocation type for the IP connection
MUST be used. Other peering mechanisms are outside the scope of
this model. An example of a corresponding XML snippet is described
as follows:

Depending on the SP flavor, a management system can divide this service configuration into different flavors, as shown by the following examples.

Example of PE configuration done by the management system (single IPv4 transport session):

Example of PE configuration done by the management system (two sessions):

Example of PE configuration done by the management system (multi-session):

The service defines service parameters associated with the site.

The service bandwidth refers to the bandwidth requirement between the
PE and the CE (WAN link bandwidth). The requested bandwidth is
expressed as svc-input-bandwidth and svc-output-bandwidth in bits
per second. The input/output direction uses the customer site as a
reference: "input bandwidth" means download bandwidth for the site,
and "output bandwidth" means upload bandwidth for the site.

The service bandwidth is only configurable at the site-network-access level.

Using a different input and output bandwidth will allow the SP to
determine if the customer allows for asymmetric bandwidth access,
such as ADSL. It can also be used to set rate-limiting in a
different way for uploading and downloading on a symmetric bandwidth
access.

The bandwidth is a service bandwidth expressed primarily as IP bandwidth, but if the customer enables MPLS for Carriers' Carriers (CsC), this becomes MPLS bandwidth.

The service MTU refers to the maximum PDU size that the customer may use.
If the customer sends packets that are longer than the requested service
MTU, the network may discard it (or for IPv4, fragment it).The model defines QoS parameters in an abstracted way:
qos-classification-policy: policy that defines a set of ordered
rules to classify customer traffic.qos-profile: QoS scheduling profile to be applied.QoS classification rules are handled by the
"qos-classification-policy" container. The qos-classification-policy
container is an ordered list of rules that match a flow or
application and set the appropriate target class of service
(target-class-id). The user can define the match using an
application reference or a flow definition that is more specific
(e.g., based on Layer 3 source and destination addresses, Layer 4
ports, and Layer 4 protocol). When a flow definition is used, the
user can employ a "target-sites" leaf-list to identify the
destination of a flow rather than using destination IP addresses. In
such a case, an association between the site abstraction and the IP
addresses used by this site must be done dynamically. How this
association is done is out of scope for this document. The association
of a site to an IP VPN is done through the "vpn-attachment" container.
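For illustration, tying a site's network access to a VPN through this container might look like the following sketch (the VPN identifier is hypothetical; element names follow the structure of the ietf-l3vpn-svc module and should be checked against the published module):

```xml
<site-network-access>
  <site-network-access-id>access-1</site-network-access-id>
  <vpn-attachment>
    <!-- Attach this access to the hypothetical VPN "VPNA"
         with an any-to-any role -->
    <vpn-id>VPNA</vpn-id>
    <site-role>any-to-any-role</site-role>
  </vpn-attachment>
</site-network-access>
```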
Therefore, the user can also employ "target-sites" leaf-list and
"vpn-attachment" to identify the destination of a flow targeted to
a specific VPN service. A rule that does not have a match statement is
considered a match-all rule. An SP may implement a default terminal
classification rule if the customer does not provide it. It will be
up to the SP to determine its default target class. The current model
defines some applications, but new application identities may be added
through augmentation. The exact meaning of each application identity
is up to the SP, so it will be necessary for the SP to advise the
customer on the usage of application matching.

Where the classification is done depends on the SP's implementation of the service, but classification concerns the flow coming from the customer site and entering the network.

In the figure above, the management system should implement the classification rule:

in the ingress direction on the PE interface, if the CE is customer-managed.

in the ingress direction on the CE interface connected to the customer LAN, if the CE is provider-managed.

The figure below describes a sample service description of QoS classification for a site:

In the example above:

HTTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32 will be classified in DATA2.

FTP traffic from the 192.0.2.0/24 LAN destined for 203.0.113.1/32 will be classified in DATA2.

Peer-to-peer traffic will be classified in DATA3.

All other traffic will be classified in DATA1.

The order of rule list entries is defined by the user.
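A set of ordered rules matching this description might be sketched as follows (an illustrative instance, not the RFC's own figure; element names follow the ietf-l3vpn-svc structure, and the "p2p" application identity should be checked against the published module):

```xml
<qos-classification-policy>
  <rule>
    <id>1</id>
    <match-flow>
      <!-- HTTP from the customer LAN to 203.0.113.1/32 -->
      <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
      <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
      <l4-dst-port>80</l4-dst-port>
      <protocol-field>tcp</protocol-field>
    </match-flow>
    <target-class-id>DATA2</target-class-id>
  </rule>
  <rule>
    <id>2</id>
    <match-flow>
      <!-- FTP from the customer LAN to 203.0.113.1/32 -->
      <ipv4-src-prefix>192.0.2.0/24</ipv4-src-prefix>
      <ipv4-dst-prefix>203.0.113.1/32</ipv4-dst-prefix>
      <l4-dst-port>21</l4-dst-port>
      <protocol-field>tcp</protocol-field>
    </match-flow>
    <target-class-id>DATA2</target-class-id>
  </rule>
  <rule>
    <id>3</id>
    <!-- Application match using an application identity -->
    <match-application>p2p</match-application>
    <target-class-id>DATA3</target-class-id>
  </rule>
  <rule>
    <!-- No match statement: match-all rule -->
    <id>4</id>
    <target-class-id>DATA1</target-class-id>
  </rule>
</qos-classification-policy>
```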
The management system responsible for translating those rules into network element configuration MUST keep the same processing order.

The user can choose either a standard profile provided by the
operator or a custom profile. The "qos-profile" container defines
the traffic-scheduling policy to be used by the SP.

A custom QoS profile is defined as a list of classes of services and associated properties. The properties are as follows:

direction: used to specify the direction to which the QoS
profile is applied. This model supports three direction
settings: "Site-to-WAN", "WAN-to-Site", and "both". By default,
the "both" direction value is used. If the direction is "both",
the provider should ensure scheduling according to the requested
policy in both traffic directions (SP to customer and customer
to SP). As an example, a device-scheduling policy may be
implemented on both the PE side and the CE side of the WAN
link. If the direction is "WAN-to-Site", the provider should
ensure scheduling from the SP network to the customer site. As
an example, a device-scheduling policy may be implemented only
on the PE side of the WAN link towards the customer.

rate-limit: used to rate-limit the class of service. The value
is expressed as a percentage of the global service bandwidth.
When the qos-profile container is implemented on the CE side,
svc-output-bandwidth is taken into account as a reference. When
it is implemented on the PE side, svc-input-bandwidth is used.
latency: used to define the latency constraint of the class. The
latency constraint can be expressed as the lowest possible latency
or a latency boundary expressed in milliseconds. How this latency
constraint will be fulfilled is up to the SP's implementation
of the service: a strict priority queuing may be used on the access
and in the core network, and/or a low-latency routing configuration
may be created for this traffic class.

jitter: used to define the jitter constraint of the class. The
jitter constraint can be expressed as the lowest possible jitter
or a jitter boundary expressed in microseconds. How this jitter
constraint will be fulfilled is up to the SP's implementation
of the service: a strict priority queuing may be used on the access
and in the core network, and/or a jitter-aware routing configuration
may be created for this traffic class.

bandwidth: used to define a guaranteed amount of bandwidth for the
class of service. It is expressed as a percentage. The
"guaranteed-bw-percent" parameter uses available bandwidth as a
reference. When the qos-profile container is implemented on the
CE side, svc-output-bandwidth is taken into account as a
reference. When it is implemented on the PE side,
svc-input-bandwidth is used. By default, the bandwidth
reservation is only guaranteed at the access level. The user can
use the "end-to-end" leaf to request an end-to-end bandwidth
reservation, including across the MPLS transport network. (In
other words, the SP will activate something in the MPLS core to
ensure that the bandwidth request from the customer will be
fulfilled by the MPLS core as well.) How this is done (e.g., RSVP
reservation, controller reservation) is out of scope for this
document.

In addition, due to network conditions, some constraints may not be
completely fulfilled by the SP; in this case, the SP should advise
the customer about the limitations. How this communication is done
is out of scope for this document.

Example of service configuration using a standard QoS profile with the following corresponding XML snippet:

Example of service configuration using a custom QoS profile with the following corresponding XML snippet:

The custom QoS profile for Site1 defines a REAL_TIME class with a
latency constraint expressed as the lowest possible latency. It also
defines two data classes -- DATA1 and DATA2. The two classes express
a latency boundary constraint as well as a bandwidth reservation, as
the REAL_TIME class is rate-limited to 10% of the service bandwidth
(10% of 100 Mbps = 10 Mbps). In cases where congestion occurs, the
REAL_TIME traffic can go up to 10 Mbps (let's assume that only
5 Mbps are consumed). DATA1 and DATA2 will share the remaining
bandwidth (95 Mbps) according to their percentage. So, the DATA1
class will be served with at least 76 Mbps of bandwidth, while the
DATA2 class will be served with at least 4.75 Mbps. The latency
boundary information of the data class may help the SP define a
specific buffer tuning or a specific routing within the network.
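The custom profile described above might be instantiated roughly as follows (an illustrative sketch: the latency-boundary values are hypothetical, while the rate-limit and guaranteed-bw-percent values follow the figures used in the text; element names should be checked against the published ietf-l3vpn-svc module):

```xml
<qos-profile>
  <classes>
    <class>
      <class-id>REAL_TIME</class-id>
      <!-- Rate-limited to 10% of the service bandwidth -->
      <rate-limit>10</rate-limit>
      <latency>
        <!-- Lowest-possible-latency flavor of the constraint -->
        <use-lowest-latency/>
      </latency>
    </class>
    <class>
      <class-id>DATA1</class-id>
      <latency>
        <!-- Hypothetical boundary, in milliseconds -->
        <latency-boundary>70</latency-boundary>
      </latency>
      <bandwidth>
        <!-- 80% of the remaining bandwidth: 76 Mbps of 95 Mbps -->
        <guaranteed-bw-percent>80</guaranteed-bw-percent>
      </bandwidth>
    </class>
    <class>
      <class-id>DATA2</class-id>
      <latency>
        <!-- Hypothetical boundary, in milliseconds -->
        <latency-boundary>200</latency-boundary>
      </latency>
      <bandwidth>
        <!-- 5% of the remaining bandwidth: 4.75 Mbps of 95 Mbps -->
        <guaranteed-bw-percent>5</guaranteed-bw-percent>
      </bandwidth>
    </class>
  </classes>
</qos-profile>
```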
The maximum percentage to be used is not limited by this model but
MUST be limited by the management system according to the policies
authorized by the SP.

The "multicast" container defines the type of site in the customer
multicast service topology: source, receiver, or both. These
parameters will help the management system optimize the multicast
service. Users can also define the type of multicast relationship
with the customer: router (requires a protocol such as PIM), host
(IGMP or MLD), or both. An address family (IPv4, IPv6, or both) can
also be defined.

In the case of CsC, a customer may want to build an MPLS service using an IP VPN to carry its traffic.

In the figure above, ISP1 resells an IP VPN service but has no core
network infrastructure between its POPs. ISP1 uses an IP VPN as the
core network infrastructure (belonging to another provider) between
its POPs.

In order to support CsC, the VPN service must indicate MPLS support
by setting the "carrierscarrier" leaf to true in the vpn-service
list. The link between CE1_ISP1/PE1 and CE2_ISP1/PE2 must also run
an MPLS signalling protocol. This configuration is done at the site
level.

In the proposed model, LDP or BGP can be used as the MPLS signalling
protocol. In the case of LDP, an IGP routing protocol MUST also be
activated. In the case of BGP signalling, BGP MUST also be
configured as the routing protocol.

If CsC is enabled, the requested "svc-mtu" leaf will refer to the MPLS MTU and not to the IP MTU.

The service model sometimes refers to external information through
identifiers. As an example, to order a cloud-access to a particular
cloud service provider (CSP), the model uses an identifier to refer
to the targeted CSP. If a customer is directly using this service
model as an API (through REST or NETCONF, for example) to order a
particular service, the SP should provide a list of authorized
identifiers. In the case of cloud-access, the SP will provide the
associated identifiers for each available CSP. The same applies to
other identifiers, such as std-qos-profile, OAM profile-name, and
provider-profile for encryption.

How an SP provides the meanings of those identifiers to the customer is out of scope for this document.

An autonomous system (AS) is a single network or group of networks
that is controlled by a common system administration group and that
uses a single, clearly defined routing protocol. In some cases, VPNs
need to span different ASes in different geographic areas or span
different SPs. The connection between ASes is established by the SPs
and is seamless to the customer. Examples include:

a partnership between SPs (e.g., carrier, cloud) to extend their VPN service seamlessly.

an internal administrative boundary within a single SP (e.g., backhaul versus core versus data center).

NNIs (network-to-network interfaces) have to be defined to extend the
VPNs across multiple ASes. There are multiple flavors of VPN NNI implementations. Each implementation has pros and cons; this topic is outside the scope of this document. For example, in an Inter-AS
option A, autonomous system border router (ASBR) peers are connected
by multiple interfaces with at least one of those interfaces spanning
the two ASes while being present in the same VPN. In order for these
ASBRs to signal unlabeled IP prefixes, they associate each interface
with a VPN routing and forwarding (VRF) instance and a Border Gateway
Protocol (BGP) session. As a result, traffic between the
back-to-back VRFs is IP. In this scenario, the VPNs are isolated
from each other, and because the traffic is IP, QoS mechanisms that
operate on IP traffic can be applied to achieve customer service
level agreements (SLAs).

The figure above describes an SP network called "My network" that has several NNIs. This network uses NNIs to:

increase its footprint by relying on L3VPN partners.

connect its own data center services to the customer IP VPN.

enable the customer to access its private resources located in a private cloud owned by some CSPs.

In option A, the two ASes are connected to each other with physical
links on ASBRs. For resiliency purposes, there may be multiple
physical connections between the ASes. A VPN connection -- physical
or logical (on top of physical) -- is created for each VPN that needs
to cross the AS boundary, thus providing a back-to-back VRF model.

From a service model's perspective, this VPN connection can be seen
as a site. Let's say that AS B wants to extend some VPN connections
for VPN C on AS A. The administrator of AS B can use this service
model to order a site on AS A. All connection scenarios could be
realized using the features of the current model. As an example, the
figure above shows two physical connections that have logical
connections per VPN overlaid on them. This could be seen as a
dual-homed subVPN scenario. Also, the administrator of AS B will be
able to choose the appropriate routing protocol (e.g., E-BGP) to
dynamically exchange routes between ASes.

This document assumes that the option A NNI flavor SHOULD reuse the existing VPN site modeling.

Example: a customer wants its CSP A to attach its virtual network N to an existing IP VPN (VPN1) that it has from L3VPN SP B.

To create the VPN connectivity, the CSP or the customer may use the
L3VPN service model that SP B exposes. We could consider that, as
the NNI is shared, the physical connection (bearer) between CSP A and
SP B already exists. CSP A may request through a service model the
creation of a new site with a single site-network-access
(single-homing is used in the figure). As a placement constraint,
CSP A may use the existing bearer reference it has from SP B to force
the placement of the VPN NNI on the existing link. The XML snippet
below illustrates a possible configuration request to SP B:

The case described above is different from a scenario using the cloud-accesses container, as the cloud-access provides a public cloud access while this example enables access to private resources located in a CSP network.

In option B, the two ASes are connected to each other with physical
links on ASBRs. For resiliency purposes, there may be multiple
physical connections between the ASes. The VPN "connection" between
ASes is done by exchanging VPN routes through MP-BGP.

There are multiple flavors of implementations of such an NNI. For
example:
The NNI is internal to the provider and is situated between a
backbone and a data center. There is enough trust between the
domains to not filter the VPN routes. So, all the VPN routes
are exchanged. RT filtering may be implemented to save some
unnecessary route states.

The NNI is used between providers that agreed to exchange VPN
routes for specific RTs only. Each provider is authorized to
use the RT values from the other provider.

The NNI is used between providers that agreed to exchange VPN
routes for specific RTs only. Each provider has its own RT
scheme. So, a customer spanning the two networks will have
different RTs in each network for a particular VPN.

Case 1 does not require any service modeling, as the protocol enables the dynamic exchange of necessary VPN routes.

Case 2 requires that an RT-filtering policy on ASBRs be maintained.
From a service modeling point of view, it is necessary to agree on
the list of RTs to authorize.

In Case 3, both ASes need to agree on the VPN RT to exchange, as well
as how to map a VPN RT from AS A to the corresponding RT in AS B (and
vice versa).

Those modelings are currently out of scope for this document.

The example above describes an NNI connection between CSP A and SP network B. The two SPs do not trust each other and use a different RT
allocation policy. So, in terms of implementation, the customer VPN
has a different RT in each network (RT A in CSP A and RT B in SP
network B). In order to connect the customer virtual network in
CSP A to the customer IP VPN (VPN1) in SP network B, CSP A should
request that SP network B open the customer VPN on the NNI (accept
the appropriate RT). Who does the RT translation depends on the
agreement between the two SPs: SP B may permit CSP A to request VPN
(RT) translation.

From a VPN service's perspective, the option C NNI is very
similar to option B, as an MP-BGP session is used to exchange
VPN routes between the ASes. The difference is that the
forwarding plane and the control plane are on different nodes,
so the MP-BGP session is multihop between routing gateway
(RGW) nodes.

From a VPN service's point of view, modeling options B and C will be identical.

As explained earlier in this document, this service model is
intended to be instantiated at a management layer and is not
intended to be used directly on network elements. The management
system serves as a central point of configuration of the overall
service.

This section provides an example of how a management system can use this model to configure an IP VPN service on network elements.

In this example, we want to achieve the provisioning of a VPN
service for three sites using a Hub-and-Spoke VPN service topology.
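Such a request might begin with a vpn-service instance along the following lines (a sketch only; the vpn-id is hypothetical, and the topology identity name should be checked against the published ietf-l3vpn-svc module):

```xml
<l3vpn-svc xmlns="urn:ietf:params:xml:ns:yang:ietf-l3vpn-svc">
  <vpn-services>
    <vpn-service>
      <!-- Hypothetical VPN identifier -->
      <vpn-id>VPNA</vpn-id>
      <!-- Hub-and-Spoke service topology for the three sites -->
      <vpn-service-topology>hub-spoke</vpn-service-topology>
    </vpn-service>
  </vpn-services>
</l3vpn-svc>
```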
One of the sites will be dual-homed, and load-sharing is expected.

The following XML snippet describes the overall simplified service configuration of this VPN.

When receiving the request for provisioning the VPN service, the
management system will internally (or through communication with
another OSS component) allocate VPN RTs. In this specific case,
two RTs will be allocated (100:1 for Hub and 100:2 for Spoke).
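For instance, Spoke_Site1's attachment to the VPN could be expressed as a site entry such as this sketch (identifiers are hypothetical; the allocated RTs themselves stay internal to the management system and do not appear in the service model):

```xml
<site>
  <site-id>Spoke_Site1</site-id>
  <site-network-accesses>
    <site-network-access>
      <site-network-access-id>access-1</site-network-access-id>
      <vpn-attachment>
        <!-- The spoke role tells the management system
             which RTs to import and export -->
        <vpn-id>VPNA</vpn-id>
        <site-role>spoke-role</site-role>
      </vpn-attachment>
    </site-network-access>
  </site-network-accesses>
</site>
```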
The corresponding XML snippet below describes the configuration of Spoke_Site1.

When receiving the request for provisioning Spoke_Site1, the
management system MUST allocate network resources for this site. It
MUST first determine the target network elements to provision the
access, particularly the PE router (and perhaps also an aggregation
switch). As described earlier in this document, the management
system SHOULD use the location information and MUST use the
access-diversity constraint to find the appropriate PE. In this
case, we consider that Spoke_Site1 requires PE diversity with the
Hub and that the management system allocates PEs based on the least
distance. Based on the location information, the management system
finds the available PEs in the area nearest the customer and picks
one that fits the access-diversity constraint.When the PE is chosen, the management system needs to allocate
interface resources on the node. One interface is selected from the
pool of available PEs. The management system can start provisioning
the chosen PE node via whatever means the management system prefers
(e.g., NETCONF, CLI). The management system will check to see if a
VRF that fits its needs is already present. If not, it will
provision the VRF: the RD will be derived from the internal
allocation policy model, and the RTs will be derived from the VPN
policy configuration of the site (the management system allocated
some RTs for the VPN). As the site is a Spoke site (site-role), the
management system knows which RTs must be imported and exported. As
the site is provider-managed, some management RTs may also be added
(100:5000). Standard provider VPN policies MAY also be added in the
configuration.

Example of generated PE configuration:

When the VRF has been provisioned, the management system can start
configuring the access on the PE using the allocated interface
information. IP addressing is chosen by the management system. One
address will be picked from an allocated subnet for the PE, and
another will be used for the CE configuration. Routing protocols
will also be configured between the PE and CE; because this model is
provider-managed, the choices are left to the SP. BGP was chosen for
this example. This choice is independent of the routing protocol
chosen by the customer. BGP will be used to configure the CE-to-LAN
connection as requested in the service model. Peering addresses will
be derived from those of the connection. As the CE is provider-
managed, the CE's AS number can be automatically allocated by the
management system. Standard configuration templates provided by the
SP may also be added.

Example of generated PE configuration:

As the CE router is not reachable at this stage, the management
system can produce a complete CE configuration that can be manually
uploaded to the node before sending the CE configuration to the
customer premises. The CE configuration will be built in the same
way as the PE would be configured. Based on the CE type
(vendor/model) allocated to the customer as well as the bearer
information, the management system knows which interface must be
configured on the CE. PE-CE link configuration is expected to be
handled automatically using the SP OSS, as both resources are managed
internally. CE-to-LAN-interface parameters such as IP addressing are
derived from the ip-connection container, taking into account how the
management system distributes addresses between the PE and CE within
the subnet. This will allow a plug-and-play configuration for the CE
to be created.

Example of generated CE configuration:

As expressed earlier in this document, this service model is intended to be instantiated in a management system and not directly on network elements.

The management system's role will be to configure the network
elements. The management system may be modular, so the component
instantiating the service model (let's call it "service component")
and the component responsible for network element configuration
(let's call it "configuration component") may be different.In the previous sections, we provided some examples of the
translation of service provisioning requests to router configuration
lines. In the NETCONF/YANG ecosystem, we expect NETCONF/YANG to be
used between the configuration component and network elements to
configure the requested services on those elements.

In this framework, specifications are expected to provide specific
YANG modeling of service components on network elements. There will
be a strong relationship between the abstracted view provided by this
service model and the detailed configuration view that will be
provided by specific configuration models for network elements.

The authors of this document anticipate definitions of YANG modules for the network elements listed below. Note that this list is not exhaustive:

VRF definition, including VPN policy expression.

Physical interface.

IP layer (IPv4, IPv6).

QoS: classification, profiles, etc.

Routing protocols: support of configuration of all protocols listed in the document, as well as routing policies associated with those protocols.

Multicast VPN.

Network address translation.

Example of a corresponding XML snippet with a VPN site request at the service level, using this model:

In the service example above, the service component is expected to
request that the configuration component of the management system
provide the configuration of the service elements. If we consider
that the service component selected a PE (PE A) as the target PE for
the site, the configuration component will need to push the
configuration to PE A. The configuration component will use several
YANG data models to define the configuration to be applied to PE A.
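For example, the interface portion of PE A's configuration could be rendered with the ietf-interfaces model; the fragment below is a hedged sketch (the interface name and description are hypothetical):

```xml
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <!-- Hypothetical access interface towards the customer site -->
      <name>eth0</name>
      <description>Access to Spoke_Site1</description>
      <type xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">ianaift:ethernetCsmacd</type>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
```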
The XML snippet configuration of PE A might look like this:

The YANG module specified in this document defines a schema for data that is designed to be accessed via network management protocols such as NETCONF or RESTCONF. The lowest NETCONF layer is the secure transport layer, and the mandatory-to-implement secure transport is Secure Shell (SSH). The lowest RESTCONF layer is HTTPS, and the mandatory-to-implement secure transport is TLS.

The NETCONF access control model provides
the means to restrict access for particular NETCONF or RESTCONF users
to a preconfigured subset of all available NETCONF or RESTCONF
protocol operations and content.

There are a number of data nodes defined in this YANG module that
are writable/creatable/deletable (i.e., config true, which is the
default). These data nodes may be considered sensitive or vulnerable
in some network environments. Write operations (e.g., edit-config)
to these data nodes without proper protection can have a negative
effect on network operations. These are the subtrees and data nodes
and their sensitivity/vulnerability:
/l3vpn-svc/vpn-services/vpn-service

The entries in the list above include the whole VPN service configurations to which the customer subscribes, and indirectly create or modify the PE and CE device configurations. Unexpected changes to these entries could lead to service disruption and/or network misbehavior.

/l3vpn-svc/sites/site

The entries in the list above include the customer site configurations. As above, unexpected changes to these entries could lead to service disruption and/or network misbehavior.

Some of the readable data nodes in this YANG module may be
considered sensitive or vulnerable in some network environments.
It is thus important to control read access (e.g., via get,
get-config, or notification) to these data nodes. These are the
subtrees and data nodes and their sensitivity/vulnerability:
/l3vpn-svc/vpn-services/vpn-service

/l3vpn-svc/sites/site

The entries in the lists above include customer-proprietary or confidential information, e.g., customer-name, site location, and what services the customer subscribes to.

The data model defines some security parameters that can be extended via augmentation as part of the customer service request; those parameters are described earlier in this document.

IANA has assigned a new URI from the "IETF XML Registry".

IANA has recorded a YANG module name in the "YANG Module Names" registry as follows:

IANA previously assigned the URI and YANG module as described in RFC 8049. IANA has updated the references for these entries to refer to this document.

Maxim Klyus, Luis Miguel Contreras,
Gregory Mirsky, Zitao Wang, Jing Zhao, Kireeti Kompella, Eric Rosen,
Aijun Wang, Michael Scharf, Xufeng Liu, David Ball, Lucy Yong, Jean-Philippe
Landry, and Andrew Leu provided useful reviews of this document.

Jan Lindblad reviewed RFC 8049 and found some bugs, and his thorough YANG Doctor review of the YANG module was valuable input. David Ball also provided a second review of RFC 8049. Many thanks to these people.

The authors would like to thank Rob Shakir for his major contributions to the initial modeling and use cases.

Adrian Farrel prepared the editorial revisions for this document.