
Sunday, March 25, 2012

Routed versus Routing Protocols - Understanding Routing

Routing Protocols

Routed versus Routing Protocols

Confusion often exists between the similar terms routing protocol and routed protocol. A routed protocol is any network protocol suite that provides enough information in its network layer address for a packet to be forwarded from one end system to another; routed protocols carry user traffic and define the format and use of the fields within a packet. The Internet Protocol (IP) and Novell's IPX are examples of routed protocols. Other examples include DECnet, AppleTalk, Novell NetWare, Open Systems Interconnection (OSI), Banyan VINES, and Xerox Network Systems (XNS).
A routing protocol supports a routed protocol by providing mechanisms for sharing routing information. Routing protocol messages move between routers, allowing them to communicate with one another to update and maintain their routing tables. Routing protocol messages do not carry end-user traffic from network to network; a routing protocol uses the routed protocol to pass information between routers. TCP/IP examples of routing protocols are the Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), and Enhanced IGRP (EIGRP).

Routing Protocol Evolutions

Distance Vector

RIP - Routing Information Protocol. The most common IGP in the Internet. RIP uses hop count as a routing metric.

IGRP - Interior Gateway Routing Protocol. IGP developed by Cisco to address the issues associated with routing in large, heterogeneous networks.

Link State

OSPF - Open Shortest Path First. Link-state, hierarchical IGP routing algorithm proposed as a successor to RIP in the Internet community. OSPF features include least-cost routing, multipath routing, and load balancing. OSPF was derived from an early version of the IS-IS protocol.

NLSP - NetWare Link Services Protocol. Link-state routing protocol based on IS-IS.

IS-IS - Intermediate System-to-Intermediate System. OSI link-state hierarchical routing protocol based on DECnet Phase V routing, whereby ISs (routers) exchange routing information based on a single metric, to determine network topology.

Hybrid

EIGRP - Enhanced Interior Gateway Routing Protocol. Advanced version of IGRP developed by Cisco. Provides superior convergence properties and operating efficiency, and combines the advantages of link state protocols with those of distance vector protocols.

RIP and IGRP

RIP takes the path with the fewest hops, but it does not account for the speed of the links; it only counts hops. RIP is also limited to a maximum of 15 hops, which creates a scalability issue when routing in large, heterogeneous networks.
IGRP was developed by Cisco and works only with Cisco products (although it has been licensed to some other vendors). It accounts for the varying speeds of each link, and it can handle up to 255 hops. However, IGRP supports only IP.
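To make the difference concrete, here is a minimal Python sketch (with invented paths and link speeds, not real router behavior) contrasting a pure hop-count decision with a bandwidth-aware one:

```python
# Hypothetical example: two candidate paths between the same endpoints.
# RIP compares only hop counts; a bandwidth-aware protocol such as IGRP
# also weighs link speed (the real IGRP composite metric is more complex).

paths = [
    {"name": "two hops via 56-kbps serial", "hops": 2, "slowest_link_kbps": 56},
    {"name": "three hops via 10-Mbps Ethernet", "hops": 3, "slowest_link_kbps": 10_000},
]

rip_choice = min(paths, key=lambda p: p["hops"])
# Simplified bandwidth-aware rule: prefer the path whose slowest link is fastest.
bandwidth_aware_choice = max(paths, key=lambda p: p["slowest_link_kbps"])

print("RIP picks:", rip_choice["name"])                          # the slow 2-hop path
print("Bandwidth-aware picks:", bandwidth_aware_choice["name"])  # the fast 3-hop path
```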

Static Routing and Dynamic Routing - Understanding Routing

Static Routing

Static routing knowledge is administered manually: a network administrator enters it into the router’s configuration. The administrator must manually update this static route entry whenever an internetwork topology change requires an update. Static knowledge is private—it is not conveyed to other routers as part of an update process. Static routing has several useful applications when it reflects a network administrator’s special knowledge about network topology.
When an internetwork partition is accessible by only one path, a static route to the partition can be sufficient. This type of partition is called a stub network. Configuring static routing to a stub network avoids the overhead of dynamic routing.

Dynamic Routing

After the network administrator enters configuration commands to start dynamic routing, route knowledge is updated automatically by a routing process whenever new topology information is received from the internetwork. Changes in dynamic knowledge are exchanged between routers as part of the update process. Dynamic routing tends to reveal everything known about an internetwork. For security reasons, it might be appropriate to conceal parts of an internetwork. Static routing allows an internetwork administrator to specify what is advertised about restricted partitions.
In the illustration above, the preferred path between Routers A and C is through Router D. If the path between Routers A and D fails, dynamic routing determines an alternate path from A to C. According to the routing table generated by Router A, a packet can reach its destination over the preferred route through Router D; a second path to the destination is also available by way of Router B. When Router A recognizes that the link to Router D is down, it adjusts its routing table, making the path through Router B the preferred path to the destination, and the routers continue sending packets over that link. When the path between Routers A and D is restored to service, Router A can once again change its routing table to indicate a preference for the counterclockwise path through Routers D and C to the destination network.

Distance Vector versus Link State

Distance vector versus link state is another possible routing algorithm classification.

 - Link-state algorithms (also known as shortest path first algorithms) flood routing information about their own links to all network nodes. The link-state approach recreates the exact topology of the entire internetwork (or at least the partition in which the router is situated).

 - Distance vector algorithms send all or some portion of their routing table, but only to their neighbors. The distance vector approach determines the direction (vector) and distance to any link in the internetwork; a minimal update step is sketched after this list.

 - A third classification in this course, called hybrid, combines aspects of these two basic algorithms.
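As a rough illustration of the distance vector approach, here is a minimal Python sketch of a single update step, assuming hop count as the metric; the table layout and router names are invented for the example:

```python
# Distance-vector (Bellman-Ford style) update: fold a neighbor's advertised
# table into our own. 'table' maps destination -> (metric, next_hop).

def merge_neighbor_vector(table, neighbor, neighbor_table, link_cost=1):
    """Adopt any route the neighbor advertises that is cheaper via them."""
    changed = False
    for dest, (metric, _) in neighbor_table.items():
        candidate = metric + link_cost  # cost to reach dest through this neighbor
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)
            changed = True
    return changed  # True means we should advertise our new table in turn

# Router A learns about network "net3" from its neighbor B.
a_table = {"net1": (0, "direct")}
merge_neighbor_vector(a_table, "B", {"net3": (1, "C")})
print(a_table)  # {'net1': (0, 'direct'), 'net3': (2, 'B')}
```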

There is no single best routing algorithm for all internetworks. Network administrators must weigh technical and non-technical aspects of their network to determine what’s best.

Network and Node Addresses - Understanding Routing

Network Addressing

Network and Node Addresses


Each network segment between routers is identified by a network address. These addresses contain information about the path used by the router to pass packets from a source toward a destination.
For some network layer protocols, a network administrator assigns network addresses according to some preconceived internetwork addressing plan. For other network layer protocols, assigning addresses is partially or completely dynamic.
Most network protocol addressing schemes also use some form of node address. The node address refers to the device's port on the network. The figure shows three nodes sharing network address 1 (Router 1.1, PC 1.2, and PC 1.3). For LANs, this port or device address can reflect the real Media Access Control (MAC) address of the device.
Unlike a MAC address, which has a preestablished and usually fixed relationship to a device, a network address has a logical relationship within the network topology.
The hierarchy of Layer 3 addresses across the entire internetwork improves the use of bandwidth by preventing unnecessary broadcasts. Broadcasts invoke unnecessary process overhead and waste capacity on any devices or links that do not need to receive the broadcast. By using consistent end-to-end addressing to represent the path of media connections, the network layer can find a path to the destination without unnecessarily burdening the devices or links on the internetwork with broadcasts.
Examples:
For TCP/IP, dotted decimal numbers show a network part and a host part. Network 10 uses the first of the four numbers as the network part and the last three numbers (8.2.48) as the host address. The mask is a companion number to the IP address. It tells the router which part of the number to interpret as the network number and identifies the remainder available for host addresses inside that network.
For Novell IPX, the network address 1aceb0b is a hexadecimal (base 16) number of at most eight digits (32 bits). The host address 0000.0c00.6e25 (also a hexadecimal number) is a fixed 48 bits long. This host address derives automatically from information in the hardware of the specific LAN device.
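A small Python sketch of the TCP/IP example above, splitting 10.8.2.48 into network and host parts with plain bitwise operations (the helper functions are just for the example):

```python
# Apply the mask 255.0.0.0 to the address 10.8.2.48: masked-in bits are
# the network number, masked-out bits are the host address.

def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

addr = to_int("10.8.2.48")
mask = to_int("255.0.0.0")

print("network part:", to_dotted(addr & mask))                # 10.0.0.0
print("host part:   ", to_dotted(addr & ~mask & 0xFFFFFFFF))  # 0.8.2.48
```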

Subnetwork Addressing


Subnetworks or subnets are networks arbitrarily segmented by a network administrator in order to provide a multilevel, hierarchical routing structure while shielding the subnetwork from the addressing complexity of attached networks.
Subnetting allows a single routing entry to refer either to the larger block or to its individual constituents. This permits a single, general routing entry to be used through most of the Internet, with more specific routes required only for routers inside the subnetted block.
A subnet mask is a 32-bit number that determines, on a bitwise basis, how an IP address is split into network and host portions. For example, 255.255.0.0 is the standard Class B mask: applied to a Class B address such as 131.108.0.0, the first two bytes identify the network and the last two bytes identify the host.
In IP, the subnet mask (sometimes referred to simply as the mask) indicates which bits of an IP address are being used for the network and subnet address. The term mask derives from the fact that the network and subnet bits of the mask are set to 1, while the host bits are masked off with 0s.
Subnetting helps to organize the network, allows rules to be developed and applied to the network, and provides security and shielding. Subnetting also enables scalability by controlling the size of links to a logical grouping of nodes that have reason to communicate with each other (such as within Human Resources, R&D, or Manufacturing).
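As a sketch of how this looks in practice, Python's standard ipaddress module can carve a block into subnets; the 131.108.0.0 block and the department assignments are illustrative:

```python
# Subdivide the Class B block 131.108.0.0/16 into /24 subnets,
# e.g. one per department.

import ipaddress

block = ipaddress.ip_network("131.108.0.0/16")
subnets = list(block.subnets(new_prefix=24))  # 256 subnets of 254 hosts each

print(subnets[0])  # 131.108.0.0/24  (say, Human Resources)
print(subnets[1])  # 131.108.1.0/24  (say, R&D)

# A single route for 131.108.0.0/16 still covers every subnet, so routers
# outside the block need only the one general entry.
print(ipaddress.ip_address("131.108.1.25") in subnets[1])  # True
```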

Routing Algorithm Types

Routing algorithms can be classified by type. Key differentiators include:

 - Single-path versus multi-path: Multi-path routing algorithms support multiple paths to the same destination and permit traffic multiplexing over multiple lines. Multi-path routing algorithms can provide better throughput and reliability.

 - Flat versus hierarchical: In a flat routing system, all routers are peers. In a hierarchical routing system, some routers form what amounts to a routing backbone. In hierarchical systems, some routers in a given domain can communicate with routers in other domains, while others can communicate only with routers in their own domain.

 - Host-intelligent versus router-intelligent: In host-intelligent routing algorithms, the source end node determines the entire route and routers act simply as store-and-forward devices. In router-intelligent routing algorithms, hosts are assumed to know nothing about routes, and routers determine the optimal path.

 - Intradomain versus interdomain: Some routing algorithms work only within domains; others work within and between domains.

 - Static versus dynamic: this classification is discussed in the following two sections.

 - Link state versus distance vector: discussed after static versus dynamic routing.

Routing Tables and Routing Algorithms - Understanding Routing

Routing Tables

To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used. Routing algorithms fill routing tables with a variety of information. Two examples are destination/next hop associations and path desirability.

 - Destination/next hop associations tell a router that a particular destination is linked to a particular router representing the “next hop” on the way to the final destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop (a minimal lookup is sketched after this list).

 - With path desirability, routers compare metrics to determine optimal routes. Metrics differ depending on the routing algorithm used. A metric is a standard of measurement, such as path length, that is used by routing algorithms to determine the optimal path to a destination.
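A minimal sketch of such a table, with invented destinations, next hops, and metrics:

```python
# Routing table keyed by destination network; each entry holds the
# next hop and a metric used to prefer one route over another.

routing_table = {
    # destination network: (next_hop, metric)
    "10.0.0.0":   ("192.168.1.2", 1),
    "172.16.0.0": ("192.168.1.3", 3),
}

def next_hop(table, destination_network):
    """Associate an incoming packet's destination with a next hop."""
    entry = table.get(destination_network)
    return entry[0] if entry else None  # None: no route (drop, or use a default)

print(next_hop(routing_table, "172.16.0.0"))  # 192.168.1.3
```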

Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages.

 - Routing update messages may include all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of network topology.

 - Link-state advertisements inform other routers of the state of the sender's link so that routers can maintain a picture of the network topology and continuously determine optimal routes to network destinations.

Routing Algorithm Goals

Routing tables contain information used by software to select the best route. But how, specifically, are routing tables built? What is the specific nature of the information they contain? How do routing algorithms determine that one route is preferable to others?
Routing algorithms often have one or more of the following design goals:

   Optimality - the capability of the routing algorithm to select the best route, depending on the metrics and metric weightings used in the calculation. For example, one algorithm may use both hop count and delay, but may weight delay more heavily in the calculation.

   Simplicity and low overhead - efficient routing algorithm functionality with a minimum of software and utilization overhead. This is particularly important when the routing algorithm software must run on a computer with limited physical resources.

   Robustness and stability - the routing algorithm should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junctions, their failures can cause extensive problems.

   Rapid convergence - convergence is the process of agreement, by all routers, on optimal routes. When a network event causes changes in router availability, recalculations are needed to reestablish routes. Routing algorithms that converge slowly can cause routing loops or network outages.

   Flexibility - the routing algorithm should quickly and accurately adapt to a variety of network circumstances. Changes of consequence include router availability, network bandwidth, queue size, and network delay.

Routing Metrics

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric; a simplified composite example follows the list. All the following metrics have been used:

   Path length - the most common metric: either the sum of assigned costs per network link, or the hop count, a metric specifying the number of passes through network devices between source and destination.

   Reliability - dependability (bit-error rate) of each network link. Some network links might go down more often than others, and some links may be easier or faster to repair after a failure.

   Delay - the length of time required to move a packet from source to destination through the internetwork. Delay depends on the bandwidth of intermediate links, port queues at each router, network congestion, and physical distance. A common and useful metric.

   Bandwidth - the available traffic capacity of a link.

   Load - the degree to which a network resource, such as a router, is busy (measured by CPU utilization or packets processed per second).

   Communication cost - the operating expenses of network links (private versus public lines).
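As an illustration of combining several of these factors, here is a simplified weighted composite metric; the formula and constants are invented for the example and are not any vendor's actual calculation:

```python
# Hypothetical composite metric: lower is better. Scarce bandwidth and long
# delay raise the cost; heavy load or poor reliability raise it further.

def composite_metric(bandwidth_kbps, delay_usec, load=0.0, reliability=1.0,
                     k_bw=1.0, k_delay=1.0):
    cost = k_bw * (10**7 / bandwidth_kbps) + k_delay * (delay_usec / 10)
    return cost / ((reliability * (1.0 - load)) or 1e-9)  # avoid divide-by-zero

fast_link = composite_metric(bandwidth_kbps=100_000, delay_usec=100)
slow_link = composite_metric(bandwidth_kbps=56, delay_usec=20_000)
print(fast_link < slow_link)  # True: the fast, low-delay link wins
```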
Now let's talk a little about network addressing.

LAN-to-LAN Connectivity - Understanding Routing

LAN-to-LAN Connectivity


This illustrates the flow of packets through a routed network, using the example of an e-mail message being sent from system X to system Y. The message exits system X and travels through the organization's internal network until it reaches the organization's Internet service provider. The message bounces through the provider's network and eventually arrives at system Y's Internet provider. While this example shows three routers, the message could actually travel through many different networks before arriving at its destination. From the OSI reference model point of view, when the e-mail is converted into packets and sent to a different network, a data-link frame is received on one of a router's interfaces.

 - The router de-encapsulates and examines the frame to determine what type of network layer data is being carried. The network layer data is sent to the appropriate network layer process, and the frame itself is discarded.

 - The network layer process examines the header to determine the destination network and then references the routing table that associates networks with outgoing interfaces.

 - The packet is again encapsulated in the data-link frame for the selected interface and sent on.

This process occurs each time the packet transfers to another router. At the router connected to the network containing the destination host, the packet is encapsulated in the destination LAN’s data-link frame type for delivery to the protocol stack on the destination host.
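A toy Python sketch of those per-router steps, with frames and packets represented as plain dictionaries (the interface names and frame fields are invented):

```python
# De-encapsulate, look up the destination network, re-encapsulate.

routing_table = {"net-Y": "serial0"}  # destination network -> outgoing interface

def forward(frame):
    packet = frame["payload"]            # 1. discard the inbound frame
    dest_net = packet["dest_network"]    # 2. examine the network layer header
    out_iface = routing_table[dest_net]  #    ...and consult the routing table
    new_frame = {                        # 3. re-encapsulate for the next link
        "link_type": "frame-for-" + out_iface,
        "payload": packet,               # the packet itself is unchanged
    }
    return out_iface, new_frame

iface, new_frame = forward(
    {"link_type": "ethernet", "payload": {"dest_network": "net-Y", "data": "e-mail"}}
)
print(iface, new_frame["link_type"])  # serial0 frame-for-serial0
```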

Path Determination

Routing involves two basic activities: determining optimal routing paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter of these is referred to as switching. Although switching is relatively straightforward, path determination can be very complex.
During path determination, routers evaluate the available paths to a destination and establish the preferred handling of a packet.

 - Routing services use internetwork topology information (such as metrics) when evaluating network paths. This information can be configured by the network administrator or collected through dynamic processes running in the internetwork.

 - After the router determines which path to use, it can proceed with switching the packet: taking the packet it accepted on one interface and forwarding it to another interface or port that reflects the best path to the packet's destination.

Multiprotocol Routing

Routers can support multiple independent routing algorithms and maintain associated routing tables for several routed protocols concurrently. This capability allows a router to interleave packets from several routed protocols over the same data links.
The various routed protocols operate separately. Each uses routing tables to determine paths and switches over addressed ports in a “ships in the night” fashion; that is, each protocol operates without knowledge of or coordination with any of the other protocol operations.
In the example above, as the router receives packets from users on the networks using IP, it begins to build a routing table containing the network addresses of those IP users. As the router receives packets from Macintosh AppleTalk users, it likewise adds the AppleTalk addresses. Routing tables can contain address information from multiple protocol networks. This process may continue with IPX traffic from Novell NetWare networks and DECnet traffic from Digital VAX minicomputers attached to Ethernet networks.
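A minimal sketch of the "ships in the night" idea: one independent table per routed protocol, with illustrative addresses borrowed from the examples in this post:

```python
# One routing table per routed protocol; each lookup neither sees nor
# affects the others. All names and addresses are illustrative.

tables = {
    "IP":        {"10.0.0.0": "eth0"},
    "AppleTalk": {"zone-100": "eth1"},
    "IPX":       {"1aceb0b": "serial0"},
}

def route(protocol, destination):
    return tables[protocol].get(destination)

print(route("IP", "10.0.0.0"))   # eth0
print(route("IPX", "1aceb0b"))   # serial0
```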

Multicast Routing Protocols - Understanding LAN Switching

This also needs to be done dynamically, because these multicast groups are going to change over time at any given moment. So, in order to do this, we need some special protocols in our network. First of all, in the Wide Area, we need something known as multicast routing protocols. Certainly, in our Wide Area we already have routing protocols such as RIP, the Routing Information Protocol, or OSPF, or IGRP, for example, but what we need to do is add multicast extensions so that these routing protocols understand how to handle our multicast groups.
An example of a multicast routing protocol would be PIM, or Protocol Independent Multicast. This is simply an extension of the existing routing protocols in our network. Another protocol we have is known as IGMP, or the Internet Group Management Protocol. IGMP simply allows us to identify the group membership of the IP stations that want to participate in a given multicast conversation.

So, as you can see indicated by the red traffic in our network, we have channel #1 being multicast through the network. And by way of IGMP, the workstations can signal back to the original video servers that they want to participate. Once the multicast routing protocols are added, we can efficiently deliver our traffic in the Wide Area. Now, another challenge that we have is that once our traffic gets to the Local Area Network, or the switch, by default that traffic is going to be flooded to all stations in the network.
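For illustration, this is roughly how an end station joins a multicast group with the standard Python socket API; the join causes the host's IP stack to send an IGMP membership report (the group address and port here are made-up examples):

```python
# Join multicast group 239.1.1.1 ("channel #1" in the example) and
# receive one datagram. IP_ADD_MEMBERSHIP triggers the IGMP report.

import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # illustrative group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# 4s4s: group address followed by the local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)  # blocks until a multicast datagram arrives
```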

End-to-End Multicast

And that's because IGMP works at Layer 3, but our LAN switch works at Layer 2. So the switch has no concept of our Layer 3 group membership. What we need to do is add some intelligence to our switch. The intelligence that we're going to add is a protocol such as CGMP, for example, or Cisco Group Management Protocol. Another similar technology that we could add is called IGMP Snooping, which has the same effect in the Local Area Network.
And that effect is, as you see in the diagram, to limit our multicast traffic to only those stations that want to participate in the group. So now, as you can see, the red channel, or channel number 1, is delivered to only station #1 and station #3.
Station 2 does not receive this content because it doesn't wish to participate. So the advantage of adding protocols such as IGMP, CGMP, IGMP Snooping, and Protocol Independent Multicast into our network is that we achieve bandwidth savings for our multicast traffic.

Why Use Multicast?

What we see indicated in red is that as we add stations to our multicast group, the amount of bandwidth we need is going to increase in a linear fashion. But by adding multicast controls, you can see the amount of bandwidth is reduced dramatically, because these intelligent multicast controls can make better use of the bandwidth in our network. Adding multicast controls is also going to reduce the cost of networking, because we've reduced the bandwidth that we need, and that's going to provide a dramatic improvement to our Local Area Network.
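A back-of-the-envelope sketch of those two curves, assuming a made-up 1.5-Mbps video channel:

```python
# Unicast replicates the stream once per receiver; multicast sends a
# single copy on each shared link regardless of group size.

stream_mbps = 1.5  # illustrative rate for one video channel

for receivers in (1, 10, 100):
    unicast = stream_mbps * receivers  # grows linearly with the group
    multicast = stream_mbps            # flat
    print(f"{receivers:4d} receivers: unicast {unicast:7.1f} Mbps, "
          f"multicast {multicast:.1f} Mbps")
```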

Key Switching Technologies - Understanding LAN Switching

Key Switching Technologies

Let's look at some key technologies within LAN switching.
 - 802.1d Spanning-Tree Protocol

 - Multicasting

The Need for Spanning Tree

Specifically, we'll look at the Spanning Tree Protocol, and also some multicasting controls that we have in our network. As we build out large networks, one of the problems we have at Layer 2 in the OSI model is that if we're just making forwarding decisions at Layer 2, we cannot have any Physical Layer loops in our network.
So if we have a simple network, as we see in the diagram here, anytime these switches have any multicast, broadcast, or unknown traffic, that's going to create storms of traffic that get looped endlessly through our network. So in order to prevent that situation, we need to cut out any of the loops.

802.1d Spanning-Tree Protocol (STP)

The solution is the Spanning Tree Protocol, or STP. This is actually an industry standard defined by the IEEE standards committee, known as the 802.1d Spanning Tree Protocol. It allows us to have physical redundancy in the network while logically disconnecting the loops.
It's important to understand that we logically disconnect the loops, because that allows us to dynamically re-establish a connection if we need to, in the event of a failure within our network. The way that the switches do this, and actually bridges can do this as well, is that they simply communicate by way of a protocol, back and forth. They basically exchange little hello messages.
If they stop hearing a given communication from a certain device on the network, they know that a network device has failed. And when a network failure occurs, they have to re-establish a link in order to maintain that redundancy. Technically, these little exchanges are known as BPDUs, or Bridge Protocol Data Units.
Now, the Spanning Tree Protocol works just fine, but one of the issues with Spanning Tree is that it can take anywhere from half a minute to a full minute for the network to fully converge, or for all devices to know the status of the network. So in order to improve on this, there are some refinements that Cisco has introduced, such as PortFast and UplinkFast, and these allow your Spanning Tree Protocol to converge even faster.
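Here is a highly simplified sketch of two Spanning Tree ideas, root election and detecting a silent neighbor; the bridge IDs are invented, while the 2-second hello and 20-second max age echo the 802.1d defaults:

```python
# Two 802.1d ideas in miniature: (1) all bridges agree that the lowest
# bridge ID is the root; (2) a neighbor whose BPDUs stop arriving is
# presumed failed, so a blocked (logically disconnected) link can be
# brought back into service.

import time

bridge_ids = [0x8000_0A, 0x8000_03, 0x8000_FF]  # priority + MAC, invented
root = min(bridge_ids)                          # lowest ID wins the election
print(f"root bridge: {root:#x}")

HELLO_INTERVAL = 2.0  # seconds between BPDUs (802.1d default)
MAX_AGE = 20.0        # give up on a silent neighbor after this long (default)

def neighbor_alive(last_bpdu_time, now=None):
    now = time.monotonic() if now is None else now
    return (now - last_bpdu_time) < MAX_AGE
```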

Multicasting

Now, another issue that we have in Layer 2 networks, or switched networks, is control of our multicast traffic. There are a lot of new applications emerging today, such as video-based applications, desktop conferencing, and so on, that take advantage of multicasting.

But without special controls in the network, multicasting is going to quickly congest our network. So what we need to do is add intelligent multicast controls to the network.

Multipoint Communications

Now, again, let's understand that there are a few fundamental ways to achieve multipoint communications, because effectively, that's what we're trying to do with our video-based applications, or any of our multimedia-type applications that use this mechanism.
One way is to broadcast our traffic, which effectively sends our messages everywhere. The obvious downside there is that not everybody necessarily needs to hear these communications. So while it will get the job done, it's not the most efficient way to get the job done. The better way to do this is by way of multicasting.
That is, the applications will use a special group address to communicate with only those stations, or groups of stations, that need to receive these transmissions. And that's what we mean by multipoint communications. That's going to be the more effective way to do it.

Switching Technology: Full Duplex

Another concept that we have in LAN switching that allows us to dramatically improve scalability is something known as full-duplex transmission. This effectively doubles the amount of bandwidth between nodes. It can be important, for example, between high-bandwidth consumers such as a switch-to-server connection, and it provides essentially collision-free transmission in the network.
For example, on a 10-megabit-per-second connection, full duplex provides 10 megabits of transmit capacity and 10 megabits of receive capacity, for effectively 20 megabits of capacity on a single connection. Likewise, on a 100-megabit-per-second connection, we can get effectively 200 megabits per second of throughput.

Switching Technology: Two Methods

Another concept that we have in switching is that there are actually two different modes of switching. This is important because it can actually affect the performance, or the latency, of switching through our network.

     Cut-through

First of all, we have something known as cut-through switching. As the traffic flows through the switch, the switch simply reads the destination MAC address; in other words, we find out where the traffic needs to go. As the data flows through the switch, we don't actually look at all of the data. We simply look at that destination address and then, as the name implies, cut the frame through to its destination without continuing to read the rest of it.

     Store-and-forward

That allows us to improve performance over the other method, known as store-and-forward. With store-and-forward switching, we actually read not only the destination address but the entire frame of data. As we read the entire frame, we then make a decision on where it needs to go and send it on its way. The obvious trade-off is that reading the entire frame takes longer.
But the reason we read the entire frame is that we can do error detection on it, which may increase reliability if we're having problems in a switched network. So cut-through switching is faster, but the trade-off is that we can't do any error detection in our switched network.
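A sketch of the two modes operating on a raw frame, assuming the standard Ethernet layout (destination MAC in the first 6 bytes, FCS in the last 4); the CRC handling is simplified for illustration:

```python
# Cut-through forwards after reading only the destination address;
# store-and-forward buffers the whole frame and checks it first.

import zlib

def cut_through(frame: bytes):
    dest_mac = frame[:6]   # read just the destination address...
    return dest_mac        # ...and start forwarding immediately

def store_and_forward(frame: bytes):
    body, fcs = frame[:-4], frame[-4:]            # buffer the entire frame
    if zlib.crc32(body).to_bytes(4, "little") != fcs:
        return None        # damaged frame: drop it instead of forwarding
    return body[:6]        # otherwise forward, as above
```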

LAN Switching Basics - Understanding LAN Switching

LAN Switching Basics


 - Enables dedicated access
 - Eliminates collisions and increases capacity
 - Supports multiple conversations at the same time
First of all, it's important to understand the reason that we use LAN switching. Basically, we do this to provide what we called earlier micro-segmentation. Again, micro-segmentation provides dedicated bandwidth for each user on the network. This is going to eliminate collisions in our network, and it's going to effectively increase the capacity for each station connected to the network. It'll also support multiple simultaneous conversations at any given time, and this will dramatically improve the bandwidth that's available and the scalability of our network.

LAN Switch Operation

So let's take a look at the fundamental operation of a LAN switch to see what it can do for us. As you can see indicated in the diagram, we have some data that we need to transmit from Station A to Station B.

Now, as we watch this traffic go through the network, remember that the switch operates at Layer 2. What that means is the switch has the ability to look at the MAC-layer address, the Media Access Control address, that's on each frame as it goes through the network.

We're going to see that the switch actually looks at the traffic as it goes through, picks off that MAC address, and stores it in an address table. So, as the traffic goes through, you can see that we've made an entry in this table recording the station and the port that it's connected to on the switch.

Now, once that frame of data is in the switch, we have no choice but to flood it to all ports. The reason we flood it to all ports is that we don't know where the destination station resides.

Once that address entry is made in the table, though, when we have a response coming back from Station B to Station A, we now know where Station A is connected to the network.
So we transmit our data into the switch, but notice the switch doesn't flood that traffic this time; it sends it only out port number 3. The reason is that we know exactly where Station A is on the network, because of the original transmission, when we noted where that MAC address came from. That allows us to deliver traffic more efficiently in the network.
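The whole learn-and-flood behavior fits in a few lines; here is a toy sketch with invented port numbers matching the example (Station A on port 3):

```python
# A learning switch: record each source MAC's port, forward directly
# to known destinations, flood to all other ports for unknown ones.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # source MAC -> port where it was seen

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port  # learn the sender's location
        if dst_mac in self.mac_table:      # known: send out exactly one port
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch(ports=[1, 2, 3])
print(sw.receive(3, "A", "B"))  # B unknown: flood to ports 1 and 2
print(sw.receive(1, "B", "A"))  # A was learned on port 3: [3] only
```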

Today’s LANs - Understanding LAN Switching

Today’s LANs


 - Mostly switched resources; few shared
 - Routers provide scalability
 - Groups of users determined by physical location
When we look at today's LANs, the ones most commonly implemented, we're looking at mostly switched infrastructures. Because of the price point of deploying switches, many companies are bypassing shared hub technologies and moving directly to switches. Even within switched networks, at some point we still need to look to routers to provide scalability. And we also see that the groupings of users are largely determined by physical location. So that's a quick look at traditional shared LAN technologies. Since we know those limitations, we now want to look at how we can fix some of those issues, and see how we can deploy LAN switches to take advantage of new, improved technologies.

The Need for Speed: Early warning signs for congestion problems - Understanding LAN Switching

Now, how can you tell if you have congestion problems in your network? Some early warning signs to watch out for include increased delay on file transfers. If basic file transfers are taking a very long time on the network, we may need more bandwidth. Another thing to watch out for is print jobs that take a very long time to print; if the time from queuing a job at the workstation to actual printing is increasing, that's an indication that we may have LAN congestion problems. Also, if your organization is looking to take advantage of multimedia applications, you're going to need to move beyond basic shared LAN technologies, because those shared LAN technologies don't have the multicast controls that multimedia applications need.

Typical Causes of Network Congestion

If we're seeing those early warning signs, one cause to look for is too many users on a shared LAN segment. Remember that shared LAN segments have a fixed amount of bandwidth. As we add users, we proportionally degrade the amount of bandwidth per user. So we're going to reach a certain number of users where there's too much congestion, too many collisions, too many simultaneous conversations trying to occur all at the same time.
And that's going to reduce our performance. Another cause is the newer technology in our workstations. With early LAN technologies, workstations were relatively limited in the amount of traffic they could dump onto the network. With newer, faster CPUs, buses, and peripherals, it's much easier for a single workstation to fill up a network segment. Because we have much faster PCs that can do more with the applications on them, we can much more quickly fill up the available bandwidth that we have.

Network Traffic Impact from Centralization of Servers

Also, the way traffic is distributed on our network can have an impact as well. A very common practice in many networks is to build what's known as a server farm. In a server farm, we're effectively centralizing all of the resources on our network that need to be accessed by all of the workstations. When we do that, we concentrate traffic, and can cause congestion, on those centralized or backbone segments of the network.
Servers are gradually moving into a central area (data center) versus being located throughout the company to:

 - Ensure company data integrity
 - Maintain the network and ensure operability
 - Maintain security
 - Perform configuration and administrative functions

More centralized servers increase the bandwidth demands on campus and workgroup backbones

Bridges And Switches - Layer 2 - Understanding LAN Switching

Bridges


Another way to scale our networks is to add bridges, which give us something known as segmentation. Bridges provide a certain level of segmentation by adding a certain amount of intelligence to the network: bridges operate at Layer 2, while hubs operate at Layer 1, and operating at Layer 2 gives them what they need to make intelligent forwarding decisions.
That's why we say that bridges are more intelligent than hubs: they can actually listen in, or eavesdrop, on the traffic going through the bridge, they can look at source and destination addresses, and they can build a table that allows them to make intelligent forwarding decisions.
They actually collect and pass frames between two network segments, and while they're doing this, they're making intelligent forwarding decisions. As a result, they can provide greater control of the traffic within our network.

Switches - Layer 2

To provide even better control, we're going to look to switches, which provide the most control in our network, at least at Layer 2. And as you can see in the diagram, we have improved the model of traffic going through our network.
Getting back to our traffic analogy, as you can see looking at the highway here, we've actually subdivided the main highway so that each particular car has its own lane to drive on through the network. Fundamentally, this is what we can provide in our data networks as well. When we look at our network, we see that physically each station has its own cable into the network; conceptually, we can think of this as each workstation having its own lane through the highway. This is something known as micro-segmentation, which is a fancy way of saying that each workstation gets its own dedicated segment through the network.

Switches versus Hubs

If we compare that with a hub or with a bridge, we're limited in the number of simultaneous conversations we can have at a time. Remember that if two stations tried to communicate in a hubbed environment, that caused something known as collisions. In a switched environment we don't expect collisions, because each workstation has its own dedicated path through the network. What that means in terms of bandwidth and scalability is that we have dramatically more bandwidth in the network: each station now has a dedicated 10 megabits per second of bandwidth.
So when we compare switches with hubs: in the top diagram, remember, we're looking at a hub, where all of our traffic was fighting for the same fixed amount of bandwidth. Looking at the bottom diagram, you can see that we've improved our traffic flow through the network, because we've provided a dedicated lane for each workstation.

Broadcasts Consume Bandwidth - Understanding LAN Switching

Broadcasts Consume Bandwidth

Now, in terms of broadcasts: it's relatively easy to broadcast in a network, and broadcasting is a transmission mechanism that many different protocols use to communicate certain information, such as address resolution. Address resolution is something that all protocols need to do in order to map logical, Layer 3 addresses to Layer 2 MAC addresses. For example, in an IP network we use ARP, the Address Resolution Protocol, which allows us to map Layer 3 IP addresses to Layer 2 MAC-layer addresses. Routing protocol information is also distributed by way of broadcasting, and some key network services in our networks rely on broadcast mechanisms as well.
And it doesn't really matter what our protocol is, whether it's AppleTalk, Novell IPX, or TCP/IP, for example; all of these different Layer 3 protocols rely on the broadcast mechanism. In other words, all of these protocols produce broadcast traffic in a network.

Broadcasts Consume Processor Performance

Now, in addition to consuming bandwidth on the network, another by-product of broadcast traffic is that it consumes CPU cycles as well. Since broadcast traffic is sent out to and received by all stations on the network, it must interrupt the CPU of every station connected to the network. Here in this diagram you see the results of a study that was performed with several different CPUs on a network; it shows the relative level of CPU degradation as the number of broadcasts on the network increases.
You can see the study was based on a SPARC2 CPU, a SPARC5 CPU, and a Pentium CPU. As the number of broadcasts increased, the amount of CPU cycles consumed simply by processing and listening to that broadcast traffic increased dramatically. The other thing we need to recognize is that a lot of the time, the broadcast traffic in our network is not needed by the stations that receive it. So what we have in shared LAN technologies is broadcast traffic running throughout the network, needlessly consuming bandwidth and needlessly consuming CPU cycles.

Hub-Based LANs

So hubs were introduced into the network as a better way to scale our thin and thick Ethernet networks. It's important to remember, though, that these are still shared Ethernet networks, even though we're using hubs.
Basically, what we have is an individual desktop connection for each workstation or server in the network, and this allows us to centralize all of our cabling back to a wiring closet, for example. There are still security issues here, though: it's still relatively easy to tap in and monitor a network by way of a hub. In fact, it's even easier to do, because all of the resources are generally located centrally. And if we need to scale this type of network, we're going to rely on routers to take it beyond the workgroup.
Hubs make adds, moves, and changes easier, because we can simply go to the wiring closet and move cables around; we'll see later on that it's even easier with LAN switching. Also, in a hub- or concentrator-based network, workgroups are determined simply by the physical hub that we plug into. Once again, we'll see later on how LAN switching improves this as well.

Shared LAN Technology - Understanding LAN Switching

Early Local Area Networks

The earliest Local Area Network technologies that were widely installed were either thick Ethernet or thin Ethernet infrastructures. It's important to understand some of the limitations of these to see where we're at today with LAN switching. Thick Ethernet installations had some important limitations, such as distance: early thick Ethernet networks were limited to only 500 meters before the signal degraded, and extending beyond that distance required repeaters to boost and amplify the signal. There were also limitations on the number of stations and servers we could have on the network, as well as on the placement of those workstations.
The cable itself was relatively expensive, and it was also large in diameter, which made it more challenging to install throughout a building as we pulled it through walls and ceilings. Adding new users, though, was relatively simple: we could use what was known as a non-intrusive tap to plug in a new station anywhere along the cable. In terms of capacity, thick Ethernet provided 10 megabits per second, but this was shared bandwidth, meaning that the 10 megabits was shared among all users on a given segment.
A slight improvement on thick Ethernet was thin Ethernet technology, commonly referred to as cheapernet. This was less expensive, and it required less space to install than thick Ethernet because it was thinner in diameter, which is where the name thin Ethernet came from. It was still relatively challenging to install, though, as it sometimes required what we call home runs, or a direct run from a workstation back to a hub or concentrator. Adding users also required a momentary interruption of the network, because we actually had to cut or make a break in a cable segment in order to add a new server or workstation. Those are some of the limitations of early thin and thick Ethernet networks. An improvement on thin and thick Ethernet technology was adding hubs or concentrators to our network, which allowed us to use something known as UTP cabling, or Unshielded Twisted Pair cabling.
As you can see indicated in the diagram on the left, Ethernet is fundamentally what we call a shared technology: all users of a given LAN segment are fighting for the same amount of bandwidth. This is very similar to the cars you see in our diagram all trying to get onto the freeway at once, and it's really what our frames, or packets, do in our network as we try to make transmissions on our Ethernet network. This is what's actually occurring in our hub: even though each device has its own cable segment connecting into the hub, we're all still fighting for the same fixed amount of bandwidth. Some common terms we hear associated with hubs are Ethernet concentrators or Ethernet repeaters; they're basically self-contained Ethernet segments within a box. So while it physically looks like everybody has their own segment to their workstation, they're all interconnected inside the hub, so it's still a shared Ethernet technology. Hubs are also passive devices, meaning they're virtually transparent to the end users, who don't even know the devices exist. Hubs have no role whatsoever in forwarding decisions and provide no segmentation within the network, basically because they work at Layer 1 of the OSI framework.

Collisions: Telltale Signs

A by-product of any Ethernet network is something called collisions, a result of the fundamental way any Ethernet network works. Many stations share the same segment, and any one of these stations can transmit at any given time. If two or more stations try to transmit at the same time, the result is what we call a collision. This is actually one of the early telltale signs that your Ethernet network is becoming too congested, or that we simply have too many users on the same segment. When collisions in the network become excessive, they cause sluggish network response times, and a good way to measure that is by the increasing number of user complaints reported to the network manager.

Other Bandwidth Consumers

It's also important to understand fundamentally how transmissions can occur in the network. There are basically three different ways we can communicate. The most common is by way of unicast transmissions: one transmitter trying to reach one receiver. This is by far the most common, or hopefully the most common, form of communication in our network.
Another way to communicate is with a mechanism known as a broadcast, which is when one transmitter tries to reach all receivers in the network. As you can see in the middle diagram, our server station sends out one message, and it is received by everyone on that particular segment.
The last mechanism is what is known as a multicast. A multicast is when one transmitter tries to reach, not everyone, but a subset, or group, of the entire segment. As you can see in the bottom diagram, we reach two stations, but there's one station that doesn't need to participate, so it's not in our multicast group. Those are the three basic ways we can communicate within our Local Area Network.

FDDI - Fiber Distributed Data Interface - LAN Basics

FDDI - Fiber Distributed Data Interface


FDDI is an American National Standards Institute (ANSI) standard that defines a dual Token Ring LAN operating at 100 Mbps over an optical fiber medium. It is used primarily for corporate and carrier backbones.
Token Ring and FDDI share several characteristics, including token passing and a ring architecture, which were explored in the previous section on Token Ring. Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over STP and UTP cabling. CDDI transmits over relatively short distances (about 100 meters), providing data rates of 100 Mbps and using a dual-ring architecture to provide redundancy.
While FDDI is fast, reliable, and handles a lot of data well, its major problem is the use of expensive fiber-optic cable. CDDI addresses this problem by using UTP or STP. However, notice that the maximum segment length drops significantly.
FDDI was developed in the mid-1980s to meet the needs of growing high-speed engineering workstations and demands for network reliability. Today, FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and for greater distances than copper allows.

FDDI Network Architecture

FDDI uses a dual-ring architecture. Traffic on each ring flows in opposite directions (called counter-rotating). The dual-rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data transmissions, and the secondary ring remains idle. The primary purpose of the dual rings is to provide superior reliability and robustness.
One of the unique characteristics of FDDI is that multiple ways exist to connect devices to the ring. FDDI defines three types of devices: single-attachment stations (SAS), such as PCs; dual-attachment stations (DAS), such as routers and servers; and concentrators.

 - Dual-ring architecture

       - Primary ring for data transmissions
       - Secondary ring for reliability and robustness

 - Components

       - Single attachment station (SAS)—PCs
       - Dual attachment station (DAS)—Servers
       - Concentrator

 - FDDI concentrator

       - Also called a dual-attached concentrator (DAC)
       - Building block of an FDDI network
       - Attaches directly to both rings and ensures that any SAS failure or power-down does not bring down the ring
Example:


An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an FDDI network. It attaches directly to both the primary and secondary rings and ensures that the failure or power-down of any single attachment station (SAS) does not bring down the ring. This is particularly useful when PCs, or similar devices that are frequently powered on and off, connect to the ring.

- FDDI Summary -

 - Features

       - 100-Mbps token-passing network
       - Single-mode fiber for long distances (ring lengths up to 100 km); multimode up to 2 km between stations
       - CDDI transmits at 100 Mbps over about 100 m
       - Dual-ring architecture for reliability

 - Optical fiber advantages versus copper

       - Security, reliability, and performance are enhanced because it does not emit electrical signals
       - Much higher bandwidth than copper

 - Used for corporate and carrier backbones

Token Ring Operation - LAN Basics

Token Ring Operation


Station access to a Token Ring is deterministic; a station can transmit only when it receives a special frame called a token. One station on a Token Ring network is designated as the active monitor. The active monitor prepares a token, which is usually a few bits with significance to each of the network interface cards on the network. The active monitor passes the token into the multistation access unit (MAU), which passes it to the first downstream neighbor. Let's say in this example that Station A has something to transmit. Station A seizes the token and appends its data to it. Station A then sends the token and data back to the multistation access unit, which pushes it to the next downstream neighbor. This process is repeated until the frame reaches the destination for which it is intended.
If a station receiving the token has no information to send, it simply passes the token to the next station. If a station possessing the token has information to transmit, it claims the token by altering one bit of the frame, the T bit. The station then appends the information it wishes to transmit and sends the information frame to the next station on the Token Ring.
The information frame circulates the ring until it reaches the destination station, where the frame is copied by the station and tagged as having been copied. The information frame continues around the ring until it returns to the station that originated it, and is removed.
Because frames proceed serially around the ring, and because a station must claim the token before transmitting, collisions are not expected in a Token Ring network.
Broadcasting is supported in the form of a special mechanism known as explorer packets. These are used to locate a route to a destination through one or more source route bridges.
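A toy simulation of the token-passing behavior described above, with an invented four-station ring; it shows why transmissions never collide: only the current token holder may transmit:

```python
# Pass the token around the ring in order; a station with queued data
# seizes the token, transmits, and the frame is removed when it returns.

stations = ["A", "B", "C", "D"]   # ring order, illustrative
pending = {"A": "data-for-C"}     # Station A has a frame queued

def pass_token(rounds=1):
    for _ in range(rounds):
        for holder in stations:               # token moves downstream
            frame = pending.pop(holder, None)
            if frame:                         # seize the token and transmit
                print(f"{holder} transmits {frame!r}; the frame circulates "
                      f"the ring and {holder} removes it on return")
            # an idle station simply repeats the token to its neighbor

pass_token()
```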

- Token Ring Summary -

 - Reliable transport, minimized collisions

 - Token passing/token seizing

 - 4- or 16-Mbps transport

 - Little performance impact with increased number of users

 - Popular at IBM-oriented sites such as banks and automated factories

Token Ring (IEEE 802.5) - LAN Basics

Token Ring (IEEE 802.5)

The Token Ring network was originally developed by IBM in the 1970s. It is still IBM’s primary LAN technology and is second only to Ethernet in general LAN popularity. The related IEEE 802.5 specification is almost identical to and completely compatible with IBM’s Token Ring network.
Collisions cannot occur in Token Ring networks. Possession of the token grants the right to transmit. If a node receiving the token has no information to send, it passes the token to the next end station. Each station can hold the token for a maximum period of time.
Token-passing networks are deterministic, which means that it is possible to calculate the maximum time that will pass before any end station will be able to transmit. This feature and several reliability features make Token Ring networks ideal for applications where delay must be predictable and robust network operation is important. Factory automation environments are examples of such applications.
Token Ring is more difficult and costly to implement than Ethernet. However, as the number of users in a network rises, Token Ring's performance drops very little. In contrast, Ethernet's performance drops significantly as more users are added to the network.

Token Ring Bandwidth

Here are some of the speeds associated with Token Ring. Note that Token Ring runs at 4 Mbps or 16 Mbps. Today, most networks operate at 16 Mbps. If a network contains even one component with a maximum speed of 4 Mbps, the whole network must operate at that speed.
When Ethernet first came out, networking professionals believed that Token Ring would die, but this has not happened. Token Ring is primarily used with IBM networks running Systems Network Architecture (SNA) networking operating systems. Token Ring has not yet left the market because of the huge installed base of IBM mainframes being used in industries such as banking.
The practical difference between Ethernet and Token Ring is that Ethernet is much cheaper and simpler. However, Token Ring is more elegant and robust.

Token Ring Topology

The logical topology of an 802.5 network is a ring in which each station receives signals from its nearest active upstream neighbor (NAUN) and repeats those signals to its downstream neighbor. Physically, however, 802.5 networks are laid out as stars, with each station connecting to a central hub called a multistation access unit or MAU. The stations connect to the central hub through shielded or unshielded twisted-pair wire.
Typically, a MAU connects up to eight Token Ring stations. If a Token Ring network consists of more stations than a MAU can handle, or if stations are located in different parts of a building (for example, on different floors), MAUs can be chained together to create an extended ring. When installing an extended ring, you must ensure that the MAUs themselves are oriented in a ring; otherwise, the Token Ring will have a break in it and will not operate.

High-Speed Ethernet Options - LAN Basics

High-Speed Ethernet Options


 - Fast Ethernet
 - Fast EtherChannel®
 - Gigabit Ethernet
 - Gigabit EtherChannel
We’ve mentioned that Ethernet also has high speed options that are currently available. Fast Ethernet is used widely at this point and provides customers with 100 Mbps performance, a ten-fold increase. Fast EtherChannel is a Cisco value-added feature that provides bandwidth up to 800 Mbps. There is now a standard for Gigabit Ethernet as well and Cisco provides Gigabit Ethernet solutions with 1000 Mbps performance.

Let’s look more closely at Fast EtherChannel and Gigabit Ethernet.

What Is Fast EtherChannel?

Grouping of multiple Fast Ethernet interfaces into one logical transmission path
 - Scalable bandwidth up to 800+ Mbps
 - Using industry-standard Fast Ethernet
 - Load balancing across parallel links
 - Extendable to Gigabit Ethernet
Fast EtherChannel provides a solution for network managers who require higher bandwidth between servers, routers, and switches than Fast Ethernet technology can currently provide.
Fast EtherChannel is the grouping of multiple Fast Ethernet interfaces into one logical transmission path providing parallel bandwidth between switches, servers, and Cisco routers. Fast EtherChannel provides bandwidth aggregation by combining parallel 100-Mbps Ethernet links (200-Mbps full-duplex) to provide flexible, incremental bandwidth between network devices.
For example, network managers can deploy Fast EtherChannel consisting of pairs of full-duplex Fast Ethernet to provide 400+ Mbps between the wiring closet and the data center, while in the data center bandwidths of up to 800 Mbps can be provided between servers and the network backbone to provide large amounts of scalable incremental bandwidth.
Cisco’s Fast EtherChannel technology builds upon standards-based 802.3 full-duplex Fast Ethernet. It is supported by industry leaders such as Adaptec, Compaq, Hewlett-Packard, Intel, Micron, Silicon Graphics, Sun Microsystems, and Xircom and is scalable to Gigabit Ethernet in the future.

What Is Gigabit Ethernet?

In some cases, Fast EtherChannel technology may not be enough.
The old 80/20 rule of network traffic (80 percent of traffic was local, 20 percent was over the backbone) has been inverted by intranets and the World Wide Web. The rule of thumb today is to plan for 80 percent of the traffic going over the backbone.


Gigabit networking is important to accommodate these evolving needs.
Gigabit Ethernet builds on the Ethernet protocol but increases speed tenfold over Fast Ethernet, to 1000 Mbps, or 1 Gbps. It promises to be a dominant player in high-speed LAN backbones and server connectivity. Because Gigabit Ethernet significantly leverages Ethernet, network managers will be able to apply their existing knowledge base to manage and maintain Gigabit networks.

The Gigabit Ethernet specifications address several forms of transmission media, though not all were available at first:

 - 1000BaseLX: Long-wave (LW) laser over single-mode and multimode fiber
 - 1000BaseSX: Short-wave (SW) laser over multimode fiber
 - 1000BaseCX: Transmission over balanced, shielded, 150-ohm two-pair STP copper cable
 - 1000BaseT: Transmission over Category 5 UTP copper wiring

Gigabit Ethernet allows Ethernet to scale from 10 Mbps at the desktop, to 100 Mbps for the workgroup, to 1000 Mbps in the data center. By leveraging the current Ethernet standards as well as the installed base of Ethernet and Fast Ethernet switches and routers, network managers do not need to retrain and relearn a new technology to provide support for Gigabit Ethernet.

Ethernet Reliability - LAN Basics

Ethernet Reliability


Ethernet is known as a very reliable local area networking protocol. In this example, A is transmitting information and B also has information to transmit. Let's say that A and B listen to the network, hear no traffic, and broadcast at the same time. A collision occurs when these two packets crash into one another on the network; both transmissions are corrupted and unusable.
When a collision occurs on the network, the NIC sensing the collision (in this case, station C's) sends out a jam signal that jams the entire network for a designated amount of time.
Once the jam signal has been received and recognized by all of the stations on the network, stations A and B will each back off for a different amount of time before trying to retransmit. This technology is known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
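A sketch of the backoff half of CSMA/CD, using the truncated binary exponential backoff and 10-Mbps slot time from 802.3; the carrier-sense and jam-signal handling is omitted:

```python
# After the nth collision, an Ethernet station waits a random number of
# slot times in [0, 2**min(n, 10) - 1] before retrying.

import random

SLOT_TIME_US = 51.2  # slot time for 10-Mbps Ethernet (512 bit times)

def backoff_delay(collisions: int) -> float:
    """Microseconds to wait before the next transmission attempt."""
    k = min(collisions, 10)              # cap the exponent at 10
    slots = random.randint(0, 2**k - 1)  # pick a random slot count
    return slots * SLOT_TIME_US

# Stations A and B collide, then each draws an independent delay,
# so they almost certainly retry at different times.
print(backoff_delay(1), backoff_delay(1))
```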