Monday, 9 December 2019

NEXUS LICENSE


To obtain the license, go to:

www.cisco.com/web/go/license

To register the license you will need the serial number of your device; obtain it by entering the show license host-id command.
The host ID is also referred to as the device serial number.

switch# show license host-id
License hostid: VDH=FOX064317SQ

First, copy the license file from your laptop (running a TFTP server) to the Nexus bootflash:

switch# copy tftp: bootflash:license_file.lic

The switch prompts for the VRF to use (management) and the IP address of the remote host (100.100.0.2).
Copied.....

Note: In this example the Nexus management interface uses IP address 100.100.0.1 and the laptop uses 100.100.0.2.
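
If the management interface has not been configured yet, a minimal sketch using the example addressing above (the /24 mask is an assumption) is:

switch# configure terminal
switch(config)# interface mgmt 0
switch(config-if)# ip address 100.100.0.1/24
switch(config-if)# no shutdown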

Now install the license file

switch# install license bootflash:license_file.lic
Installing license ..done
To see the installed licenses:
switch# show license brief

MPLS_PKG.lic
To see the details of a license:
switch# show license
To see license usage for each feature across all VDCs:

switch# show license usage vdc-all

After the license is installed, check the feature sets available on the switch (MPLS in this case):

switch# show feature-set

Now install and enable the MPLS feature set:

(config)# install feature-set mpls
(config)# feature-set mpls

Now activate the MPLS features, for example LDP:

(config)# feature mpls ldp

Uninstalling Licenses

Uninstall the Enterprise.lic file by using the clear license filename command, where filename is the name of the installed license key file.
switch# clear license Enterprise.lic
Do u want to continue:yes
Clearing license ..done
Backing Up an Installed License
You can back up your license key file to a remote server or to an external device by using the copy command.
This example saves all licenses installed on your device to a .tar file and copies it to a remote UNIX-based server:
switch# copy licenses bootflash:Enterprise.tar
Backing up license done 
switch# copy bootflash:Enterprise.tar tftp://10.10.1.1//Enterprise.tar
See this Cisco link for further reading:
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/licensing/guide/Cisco_NX-OS_Licensing_Guide_chapter1.html

Rollback features in nexus
Checkpoints allow you to create a "snapshot" of the current configuration at a given point in time. This is a very useful feature for change management: create a checkpoint before a change, and roll back to it if the change needs to be reverted.

To create a checkpoint:

CoreSwitch1.VDC1.RWC# checkpoint Initial
............................Done
CoreSwitch1.VDC1.RWC#

To view the checkpoint that you just created, or to see what has been created:

CoreSwitch1.VDC1.RWC# show checkpoint summary
User Checkpoint Summary
-----------------------------------------------
1) Initial:
Created by admin
Created at Thu, 00:55:02 10 Jun 2010

You may also save the checkpoint to bootflash:

CoreSwitch1.VDC1.RWC# checkpoint file bootflash:ExampleCheckpoint
Done
CoreSwitch1.VDC1.RWC# dir
11139    Jun 10 00:59:36 2010  ExampleCheckpoint

To roll back to a checkpoint, depending on where you stored it, the command syntax is:

CoreSwitch1.VDC1.RWC# rollback running-config checkpoint Initial

or

CoreSwitch1.VDC1.RWC# rollback running-config file bootflash:ExampleCheckpoint
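
Before rolling back, it is often worth previewing what would change. On NX-OS this is typically done with the rollback-patch diff (a sketch; availability and exact syntax may vary by platform and release):

CoreSwitch1.VDC1.RWC# show diff rollback-patch checkpoint Initial running-config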

Wednesday, 4 June 2014

ASR1k architecture and troubleshooting packet loss

Points of Packet Drops

Module : Functional Components
SPA : Dependent on the interface type
SIP : IO Control Processor (IOCP), SPA Aggregation ASIC, Interconnect ASIC
ESP : Cisco QuantumFlow Processor (QFP), Forwarding Engine Control Processor (FECP), Interconnect ASIC, and the QFP subsystem, which consists of these components:
  - Packet Processor Engine (PPE)
  - Buffering, Queuing, and Scheduling (BQS)
  - Input Packet Module (IPM)
  - Output Packet Module (OPM)
  - Global Packet Memory (GPM)
RP : Linux Shared Memory Punt Interface (LSMPI), Interconnect ASIC

SPA
    show interfaces <interface-name>
    show interfaces <interface-name> accounting
    show interfaces <interface-name> stats

SIP
    show platform hardware port <slot/card/port> plim statistics
    show platform hardware subslot {slot/card} plim statistics
    show platform hardware slot {slot} plim statistics
    show platform hardware slot {0|1|2} plim status internal
    show platform hardware slot {0|1|2} serdes statistics

ESP
    show platform hardware slot {f0|f1} serdes statistics
    show platform hardware slot {f0|f1} serdes statistics internal
    show platform hardware qfp active bqs 0 ipm mapping
    show platform hardware qfp active bqs 0 ipm statistics channel all
    show platform hardware qfp active bqs 0 opm mapping
    show platform hardware qfp active bqs 0 opm statistics channel all
    show platform hardware qfp active statistics drop | exclude _0_
    show platform hardware qfp active interface if-name <Interface-name> statistics
    show platform hardware qfp active infrastructure punt statistics type per-cause | exclude _0_
    show platform hardware qfp active infrastructure punt statistics type punt-drop | exclude _0_
    show platform hardware qfp active infrastructure punt statistics type inject-drop  | exclude _0_
    show platform hardware qfp active infrastructure punt statistics type global-drop | exclude _0_
    show platform hardware qfp active infrastructure bqs queue output default all
    show platform hardware qfp active infrastructure bqs queue output recycle all
    !--- The if-name option requires full interface-name

RP
    show platform hardware slot {r0|r1} serdes statistics
    show platform software infrastructure lsmpi
To display packet statistics broken down by protocol, use this command:
  Router#show interfaces TenGigabitEthernet 1/0/0 accounting
To display statistics of packets that were process switched, fast switched, or distributed switched:
   Router#show interfaces TenGigabitEthernet 1/0/0 stats
To display per port queue drop counters on SPA Aggregation ASIC, use this command:
 
Router#show platform hardware port 1/0/0 plim statistics
To display per SPA counters on SPA Aggregation ASIC, use this command:
   
Router#show platform hardware subslot 1/0 plim statistics
To display all SPA counters on SPA Aggregation ASIC, use this command:
    
Router#show platform hardware slot 1 plim statistics
To display aggregated rx/tx counters to/from the Interconnect ASIC on the SPA Aggregation ASIC, use this command. The Rx counter counts input packets from the SPA; the Tx counter counts output packets to the SPA.
   
Router#show platform hardware slot 1 plim status internal
To display rx counters from the RP and SIP Interconnect ASICs on the ESP Interconnect ASIC, use this command:
    
Router#show platform hardware slot F0 serdes statistics
To display internal link packet counters and error counters, use this command:
   
Router#show platform hardware slot F0 serdes statistics internal
To check mapping for the Input Packet Module (IPM) channel and other components, use this command:
   
Router#show platform hardware qfp active bqs 0 ipm mapping
To display statistical information for each channel in Input Packet Module (IPM), use:
   
Router#show platform hardware qfp active bqs 0 ipm statistics channel all
To check mapping for the Output Packet Module (OPM) channel and other components, use:
   
Router#show platform hardware qfp active bqs 0 opm mapping
To display statistical information for each channel in Output Packet Module (OPM), use:
   
Router#show platform hardware qfp active bqs 0 opm statistics channel all
To display drop statistics for all interfaces in the Packet Processor Engine (PPE), a good place to start, use:
   
Router#show platform hardware qfp active statistics drop
To clear drop statistics for all interfaces in the Packet Processor Engine (PPE), use this command:
   
Router#show platform hardware qfp active statistics drop clear
To display drop statistics for each interface in the Packet Processor Engine (PPE), use this command. Note that this counter is cleared every 10 seconds.
   
Router#show platform hardware qfp active interface if-name TenGig1/0/0 statistics
To check the cause of packets punted to the RP, use this command:
   
Router#show platform hardware qfp active infrastructure punt statistics type per-cause
To display the statistics of drops for punt packets (ESP to RP), use this command:
   
Router#show platform hardware qfp active infrastructure punt statistics type punt-drop
To display the statistics of drops for inject packets (RP to ESP), use this command. Inject packets are sent from the RP to the ESP; most are generated by IOSd, such as L2 keepalives, routing protocols, and management protocols like SNMP.
   
Router#show platform hardware qfp active infrastructure punt statistics type inject-drop
To display the statistics of global drops packets, use this command:
   
Router#show platform hardware qfp active infrastructure punt statistics type global-drop
To display statistics of default queues/schedules of Buffering, Queuing, and Scheduling (BQS) for each interface, use this command:
Router#show platform hardware qfp active infrastructure bqs queue output default all
To display statistics of Recycle queues/schedules of Buffering, Queuing, and Scheduling (BQS) for each interface, use this command. Recycle queues hold packets that are processed more than once by QFP. For example, fragment packets and multicast packets are placed here.
   
Router#show platform hardware qfp active infrastructure bqs queue output recycle all
The RP processes these types of traffic:
- Management traffic that comes through the Gigabit Ethernet management port on the route processor.
- Punt traffic in the system (through the ESP), which includes all control-plane traffic received on any SPA.
- Legacy protocol traffic such as DECnet, Internet Packet Exchange (IPX), etc.
To display rx counters from ESP Interconnect ASIC on RP Interconnect ASIC, use this command:
 Router#show platform hardware slot r0 serdes statistics
To display the statistics for the Linux Shared Memory Punt Interface (LSMPI) on the router, use this command. LSMPI offers a way to do zero-copy transfer of packets between the network and IOSd for high performance. To achieve this, a region of Linux kernel virtual memory is shared (memory-mapped) between the LSMPI module and IOSd.
Router#show platform software infrastructure lsmpi
***************
~ Packet Drops on SPA
* Error Packet
-> If a packet has an error, it is dropped on the SPA. This is common behavior, not only on Cisco ASR 1000 Series Routers but on all platforms.
    Router#show interfaces TenGigabitEthernet 1/0/0
419050 input errors, 419050 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     1 packets output, 402 bytes, 0 underruns
~ Packet drops on SIP
* High Utilization of QFP
-> In case of high utilization of QFP, packets are dropped in each interface queue on SIP by backpressure from QFP. In this case, a pause frame is also sent from the interface.
   
Router#show platform hardware port 1/0/0 plim statistics
    Interface 1/0/0
      RX Low Priority
        RX Drop Pkts 21344279    Bytes 1515446578
~ Packet Drops on ESP
*Oversubscription
-> If you send packets that exceed the wire rate of the interface, the packets are dropped at the egress interface.
   
Router#show interfaces GigabitEthernet 1/1/0
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 48783
On QFP, these drops can be checked as Taildrop.
    Router#show platform hardware qfp active statistics drop | exclude _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      TailDrop                            72374984           483790
* Overload by Packet Fragment
If packets are fragmented due to the MTU size, the wire rate can be exceeded at the egress interface even when the ingress rate is below wire rate. In this case, the packets are dropped at the egress interface.
    Router#show interfaces gigabitEthernet 1/1/0
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 272828
On QFP, these drops can be checked as Taildrop.
   
Router#show platform hardware qfp active statistics drop | exclude _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      TailDrop                           109431162          272769
* Performance Limit by Fragment Packets
In QFP, Global Packet Memory (GPM) is used to reassemble fragmented packets. If GPM runs out while reassembling large numbers of fragmented packets, these counters show the number of packet drops. In many cases, this is a performance limit.
    Router#show platform hardware qfp active statistics drop | ex _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      ReassNoFragInfo                  39280654854        57344096
      ReassTimeout                          124672             128
* Forwarding to Null0 Interface
Packets destined to the Null0 interface are dropped on the ESP and are not punted to the RP. In such a case, you cannot check the drops with the traditional command (show interfaces null0). Check the ESP counter in order to know the number of packet drops. If the "clear" and "exclude _0_" options are used at the same time, you see only new drop packets.
   
Router#show platform hardware qfp active statistics drop clear | ex _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      Ipv4Null0                              11286              99
* RP Switchover with Features That Do Not Support HA
In the case of an RP switchover, packets handled by features that do not support High Availability (HA) are dropped until the new active RP reprograms the QFP. All packets are dropped if the new active RP was not synced with the old active RP before the switchover.
   
Router#show platform hardware qfp active statistics drop | ex _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      Ipv4NoAdj                            6993660          116561
      Ipv4NoRoute                        338660188         5644337
* Punt Packets
On the Cisco ASR 1000 Series Routers, packets that cannot be handled by ESP are punted to RP. If there are too many punt packets, the TailDrop of QFP drop statistics increases.
   
Router#show platform hardware qfp active statistics drop | ex _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      TailDrop                            26257792           17552
Check the Buffering, Queuing, and Scheduling (BQS) queue output counter in order to identify the interface where the drops occur. The "internal0/0/rp:0" entry is the punt interface from the ESP to the RP.
    Router#show platform hardware qfp active infrastructure bqs queue output default all
    Interface: internal0/0/rp:0
     Statistics:
        tail drops (bytes): 26257792            ,          (packets): 17552
        total enqs (bytes): 4433777480          ,          (packets): 2963755
queue_depth (bytes): 0
In such a case, the Input queue drop is counted on the ingress interface.
    Router#show interfaces TenGigabitEthernet 1/0/0
Input queue: 0/375/2438309/0 (size/max/drops/flushes); Total output drops: 0
The reason for the punt can be shown by this command:
    Router#show platform hardware qfp active infrastructure punt statistics type per-cause
    Global Per Cause Statistics
Counter ID  Punt Cause Name                   Received      Transmitted
  ------------------------------------------------------------------------
  00          RESERVED                          0             0
  01          MPLS_FRAG_REQUIRE                 0             0
  02          IPV4_OPTIONS                      2981307       2963755
You can also check the show ip traffic command.
   
Router#show ip traffic
IP statistics:
  Rcvd:  2981307 total, 15 local destination
         0 format errors, 0 checksum errors, 0 bad hop count
         0 unknown protocol, 0 not a gateway
         0 security failures, 0 bad options, 2981307 with options
* Punt Limit by the Punt Global Policer
If too many punt packets are destined to the router itself, the QFP drop counter shows TailDrop together with PuntGlobalPolicerDrops. The Punt Global Policer protects the RP from overload. These drops affect FOR_US packets (destined to the router itself), not transit packets.
 
 Router#show platform hardware qfp active statistics drop | ex _0_
    ----------------------------------------------------------------
    Global Drop Stats                         Octets         Packets
    ----------------------------------------------------------------
      PuntGlobalPolicerDrops                155856             102
      TailDrop                          4141792688         2768579
The reason for the punt can be shown by this command:
    Router#show platform hardware qfp active infrastructure punt statistics type per-cause
    Global Per Cause Statistics
Counter ID  Punt Cause Name                   Received      Transmitted
  ------------------------------------------------------------------------
11          FOR_US                            5197865       2428755
* Packet Drops on RP
Packet Errors on LSMPI
On the Cisco ASR 1000 Series Routers, packets are punted from the ESP to the RP through the Linux Shared Memory Punt Interface (LSMPI). LSMPI is the virtual interface for packet transfer between IOSd and the Linux kernel on the RP through Linux shared memory. Packets punted from the ESP to the RP are received by the Linux kernel of the RP, which sends them to the IOSd process through LSMPI. If you see error counters incrementing on LSMPI, this indicates a software defect; open a TAC case.
    Router#show platform software infrastructure lsmpi
Lsmpi0 is up, line protocol is up
  Hardware is LSMPI
1 input errors, 0 CRC, 3 frame, 0 overrun, 0 ignored, 0 abort

Thanks to - Harbaksh Singh
Shared for Reading

Monday, 30 December 2013

NFV or SDN the Difference 

Software-Defined Networking (SDN), Network Functions Virtualization (NFV) and Network Virtualization (NV) are giving us new ways to design, build and operate networks. Over the past two decades, we have seen tons of innovation in the devices we use to access the network, the applications and services we depend on to run our lives, and the computing and storage solutions we rely on to hold all that "big data" for us. However, the underlying network that connects all of these things has remained virtually unchanged. The reality is that the demands of the exploding number of people and devices using the network are stretching its limits. It's time for a change.

The Time for Changes in Networking is Now

Thanks to the advances in today's off-the-shelf hardware, developer tools and standards, a seismic technology shift in networking to software can finally take place. It's this shift that underlies all SDN, NFV and NV technologies: software can finally be decoupled from the hardware, so that it's no longer constrained by the box that delivers it. This is the key to building networks that can:
  • Reduce CapEx: allowing network functions to run on off-the-shelf hardware.
  • Reduce OpEX: supporting automation and algorithm control through increased programmability of network elements to make it simple to design, deploy, manage and scale networks.
  • Deliver Agility and Flexibility: helping organizations rapidly deploy new applications, services and infrastructure to quickly meet their changing requirements.  
  • Enable Innovation: enabling organizations to create new types of applications, services and business models.
SDN(Software Defined Networking)

SDN got its start on campus networks. As researchers were experimenting with new protocols, they were frustrated by the need to change the software in the network devices each time they wanted to try a new approach. They came up with the idea of making the behavior of the network devices programmable, and allowing them to be controlled by a central element. This led to a formalization of the principal elements that define SDN today:

  • Separation of control and forwarding functions
  • Centralization of control
  • Ability to program the behavior of the network using well-defined interfaces

The next area of success for SDN was in cloud data centers. As the size and scope of these data centers expanded it became clear that a better way was needed to connect and control the explosion of virtual machines. The principles of SDN soon showed promise in improving how data centers could be controlled.

OpenFlow – Driving Towards Standards

So, where does OpenFlow come into the picture? As SDN started to gain more prominence, it became clear that standardization was needed. The Open Networking Foundation (ONF) [1] was organized for the purpose of formalizing one approach for controllers talking to network elements, and that approach is OpenFlow. OpenFlow defines both a model for how traffic is organized into flows, and how those flows can be controlled as needed. This was a big step forward in realizing the benefits of SDN.
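
As a simple illustration of the match/action flow model (the fields and values here are hypothetical, not from any specific deployment): one flow entry might match packets arriving on port 1 with destination IP 10.1.1.0/24 and apply the action "output to port 3", while another might match TCP port 23 and apply the action "drop"; the controller installs and updates these entries in the switches over the OpenFlow protocol.
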
NFV – Created by Service Providers

Whereas SDN was created by researchers and data center architects, NFV was created by a consortium of service providers. The original NFV white paper <http://www.tid.es/es/Documents/NFV_White_PaperV2.pdf > describes the problems that they are facing, along with their proposed solution:

Network Operators’ networks are populated with a large and increasing variety of proprietary hardware appliances. To launch a new network service often requires yet another variety and finding the space and power to accommodate these boxes is becoming increasingly difficult; compounded by the increasing costs of energy, capital investment challenges and the rarity of skills necessary to design, integrate and operate increasingly complex hardware-based appliances. Moreover, hardware-based appliances rapidly reach end of life, requiring much of the procure-design-integrate-deploy cycle to be repeated with little or no revenue benefit.

Network Functions Virtualisation aims to address these problems by leveraging standard IT virtualisation technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes and in the end user premises. We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures.

SDN versus NFV

Now, let's turn to the relationship between SDN and NFV. The original NFV white paper [2] gives an overview of the relationship between SDN and NFV:

Network Functions Virtualization is highly complementary to Software Defined Networking (SDN), but not dependent on it (or vice-versa). Network Functions Virtualization can be implemented without a SDN being required, although the two concepts and solutions can be combined and potentially greater value accrued.

Network Functions Virtualization goals can be achieved using non-SDN mechanisms, relying on the techniques currently in use in many datacentres. But approaches relying on the separation of the control and data forwarding planes as proposed by SDN can enhance performance, simplify compatibility with existing deployments, and facilitate operation and maintenance procedures. Network Functions Virtualization is able to support SDN by providing the infrastructure upon which the SDN software can be run. Furthermore, Network Functions Virtualization aligns closely with the SDN objectives to use commodity servers and switches.

SDN and NFV are not part and parcel
SDN evolved out of two fairly different industry problems. First, building and managing large IP/Ethernet networks was becoming increasingly complex given the adaptive nature of packet forwarding for both protocols. Traffic management and operations efficiencies could be improved, many said, by exercising central control over forwarding. Early examples of SDN by players like Google seem to bear this out.

Second, the promise of cloud computing creates a new model for application deployment where tenants must share public cloud data centers in a non-interfering way, and multi-component applications must be deployed on flexible resource pools without losing control over performance and security. Given two different missions, it's not surprising that there are at least three models of SDN being promoted. One model is based on centralized control using OpenFlow controllers, another depends on using SDN to provision and manage network virtualization using network overlays, and the third is a distributed model in which a higher layer of software communicates with the network and its existing protocols.

Network functions virtualization is a carrier-driven initiative to virtualize network functions and migrate them from purpose-built devices to generic servers. The express goals of NFV are to reduce deployment costs for services by reducing the reliance on proprietary devices and to improve service flexibility by using a more agile software-based framework for building service features. From the first white paper proposing NFV, innovators visualized a pool of virtual functions, a pool of resources, and a composition/orchestration process that links the former to the latter. That paper suggests that NFV and SDN have some overlap, but SDN is not a subset of NFV, or the other way around. So where do SDN and NFV intersect? And how will the interaction between SDN and NFV impact the evolution of both ideas?

NFV demands virtual network overlays

While it may take some time before we see NFV play a key role in SDN architecture and vice versa, the use of network overlays in NFV will drive an intersection of the technologies in the shorter term.
NFV is likely to at least accept, if not mandate, a model of cloud-hosted virtual functions. Each collection of virtual functions that make up a user service could be viewed as a tenant on NFV infrastructure, which would mean that the cloud issues of multi-tenancy would likely influence NFV to adopt a software-overlay network model. This is where SDN comes into play.

This model, made up of tunnels and vSwitches, would segregate virtual functions to prevent accidental or malicious interaction, and it would link easily to current cloud computing virtual network interfaces like OpenStack's Quantum. The virtual networks would be provisioned and managed using SDN.
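
As an illustrative setup (not tied to any specific product): each virtual-function VM attaches to a vSwitch on its host, the vSwitches on different hosts are joined by overlay tunnels such as VXLAN or GRE, and an SDN controller or cloud API programs which VMs share a virtual segment and which are kept isolated from one another.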

Adoption of network overlays for virtual function segregation could make NFV the largest consumer of cloud networking and SDN services. This would mean that NFV could shape product features and accelerate product deployment in the SDN space. That alone could have an impact on every cloud computing data center and application, including private and hybrid clouds.

SDN and NFV will meet to advance centralized control
It seems clear that NFV could define the central control functions of SDN as virtual functions, so, for example, OpenFlow switches could be directed by NFV software. In theory, the SDN controller could be implemented as a virtual function, which would make it conform to both SDN and NFV.

Firewall and load-balancing applications are also targets of NFV since they have an SDN-like segregation of forwarding and control behaviors. Indeed, if NFV addresses the general case of policy-managed forwarding, it could define a superset of SDN.

NFV could also define central control and administration of networks that operate through other protocols, such as BGP and MPLS, and even define configuration and management of optical-layer transport. However, none of these appear to be near-term priorities for the body, and so this direct overlap of SDN and NFV doesn't seem likely in the next few years.

How NFV will push SDN beyond the data center


NFV's use of virtual network overlays could also drive an expansion of this SDN model beyond the data center where it's focused most often today. If NFV allows services to be composed of virtual functions hosted in different data centers, that would require virtual networks to stretch across data centers and become end-to-end. An end-to-end virtual network would be far more interesting to enterprises than one limited to the data center. Building application-specific networks that extend to the branch locations might usher in a new model for application access control, application performance management and even application security.


Will NFV unify differing SDN models?


With the use of network overlays, NFV could also unify the two models of SDN infrastructure -- centralized and distributed. If connectivity control and application component or user isolation are managed by the network overlay, then the physical-network mission of SDN can be more constrained to traffic management. If SDN manages aggregated routes more than individual application flows, it could be more scalable.

Remember that the most commonly referenced SDN applications today -- data center LANs and Google's SDN IP core network -- are more route-driven than flow-driven. Unification of the SDN model might also make it easier to sort out SDN implementations. The lower physical network SDN in this two-layer model might easily be created using revisions to existing protocols, which has already been proposed. While it doesn't offer the kind of application connectivity control some would like, that requirement would be met by the higher software virtual network layer or overlay.

Despite all the conversations, SDN and NFV are still works in progress, and both could miss their targets. But if NFV succeeds in reaching its goals, it will solidify and propel SDN forward as well and create a common network revolution at last.


SDN and NFV – Working Together?

Let’s look at an example of how SDN and NFV could work together. First, how a managed router service is implemented today, using a router at the customer site.

NFV would be applied to this situation by virtualizing the router function. All that is left at the customer site is a Network Interface Device (NID), which provides a point of demarcation and measures performance.

Finally, SDN is introduced to separate the control and data planes. Now the data packets are forwarded by an optimized data plane, while the routing (control plane) function runs in a virtual machine on a rack-mount server.

Summary

The table below provides a brief comparison of some of the key points of SDN and NFV.



Category             | SDN                                                                                        | NFV
Reason for Being     | Separation of control and data, centralization of control and programmability of network  | Relocation of network functions from dedicated appliances to generic servers
Target Location      | Campus, data center / cloud                                                                | Service provider network
Target Devices       | Commodity servers and switches                                                             | Commodity servers and switches
Initial Applications | Cloud orchestration and networking                                                         | Routers, firewalls, gateways, CDN, WAN accelerators, SLA assurance
New Protocols        | OpenFlow                                                                                   | None yet
Formalization        | Open Networking Foundation (ONF)                                                           | ETSI NFV Working Group
http://searchsdn.techtarget.com/essentialguide/SDN-use-cases-emerge-across-the-LAN-WAN-and-data-center

Tuesday, 23 April 2013

Notes on Data Center Power




Get Beyond PUE

PUE (power usage effectiveness) is a metric used to determine the energy efficiency of a data center. It is calculated by dividing the total amount of power entering the data center by the power used to run the IT infrastructure within it. PUE is a great start, but managers also need to understand the IT power efficiency of their equipment. With a fixed cooling infrastructure, upgrading IT equipment to lower its power consumption will make your PUE go up, even though total energy use goes down.
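
For example, with illustrative numbers: a facility that draws 1.5 MW in total while its IT equipment consumes 1.0 MW has a PUE of 1.5 / 1.0 = 1.5. If an IT refresh cuts the IT load to 0.8 MW while the cooling and power overhead stays at 0.5 MW, the total draw becomes 1.3 MW and PUE rises to 1.3 / 0.8 ≈ 1.63, even though overall energy use has fallen.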


You can't manage what you don't measure, so be sure to track your data center's energy use. To effectively use PUE, it's important to measure often: sample at least once per second. It's even more important to capture energy data over the entire year, since seasonal weather variations affect PUE. Use those measurements to locate "hot spots" and better understand airflow in the data center. In the design phase, physically arrange equipment to even out temperatures in the facility, and continue monitoring even after that.

It's Cool to Be Warm

Increase inlet temperatures for servers to 80.6°F (27°C) per ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommendations. Use hot/cold aisles to decrease cooling requirements and optimize airflow.

The need to keep data centers at 70°F is a myth. Virtually all equipment manufacturers allow you to run your cold aisle at 80°F or higher. If your facility uses an economizer, run elevated cold-aisle temperatures to enable more days of "free cooling" and higher energy savings.

Facility Utilization Is an Operational Key

Eliminate the bottlenecks that get built into the infrastructure through disconnected silos of servers or storage arrays that come in months or years apart. There is plenty of good data center management software now available that can get various departments to talk to each other, share resources and, in turn, save costs.


Chillers typically use the most energy in a data center's cooling infrastructure, so minimizing their use is the largest opportunity for savings. Take advantage of "free cooling" to remove heat from your facility without using a chiller. This can include using low-temperature ambient air, evaporating water, or a large thermal reservoir. While there's more than one way to free cool, water-side and air-side economizers are proven and readily available.

PM Integration Needs to Be in Place

Power management needs to be integrated directly into capacity and performance management. This is ultimately about transactions per kilowatt-hour, so understanding server efficiency is an important metric.
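
As a rough, hypothetical illustration of the metric: a server that completes 500 transactions per second while drawing 400 W performs 500 × 3,600 = 1,800,000 transactions per hour on 0.4 kWh of energy, or about 4.5 million transactions per kWh; a refresh that doubles throughput at the same power doubles that figure.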


Maximize capacity and reduce power consumption through intelligent technology refresh decisions.

Get Updated and More Efficient with IT

Virtualization is an important factor here. Consolidating servers and storage arrays to use far less power and increase capacity utilization can have a huge positive power impact right away, not only over the long haul.

Know That Application Usage Drives the Data Center

Realize that application service levels drive the entire vehicle, including data center power, capacity and performance-level decisions. Always keep this as priority No. 1.

Keep Looking Ahead at New Possibilities

When planning ahead for a new data center or a DC addition, research new concepts such as tiered data centers, application quality-of-service grouping, storage pooling, and active/active multi-site configurations. This is where the data center is going in the future, so you might as well be on board early.

Use Power Chargebacks in Your Billing

Implementing chargebacks to share power costs among the various corporate departments receiving IT services makes the issue more visible to the managers who can make a difference. Including power usage in the chargeback mechanism is also a better way to allocate virtual machine charges and to eliminate over-provisioning and under-utilization. Otherwise, many users will simply take advantage of the resources for as long as they can get away with it.
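
As an illustration with hypothetical numbers: if a department's virtual machines are metered at 20,000 kWh of IT energy in a month, the facility runs at a PUE of 1.5 and power costs $0.10 per kWh, the power portion of that department's chargeback would be roughly 20,000 × 1.5 × $0.10 = $3,000 for the month.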


You can minimize power distribution losses by eliminating as many power conversion steps as possible. For the conversion steps you must have, be sure to specify efficient equipment such as transformers and power distribution units (PDUs). One of the largest losses in data center power distribution is the uninterruptible power supply (UPS), so it's important to select a high-efficiency model. Lastly, keep your high voltages as close to the power supply as possible to reduce line losses.
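
As a rough illustration with assumed efficiencies: three conversion stages at 95% each deliver only 0.95 × 0.95 × 0.95 ≈ 86% of the input power to the load, so roughly 14% is lost as heat; removing even one of those stages recovers several percent of the total power.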

Good links for further reading are included above.