Monday, March 26, 2012
Custom error pages - beating 404 errors and keeping visitors
Create custom error pages, help keep your visitors and increase your sales!
HTTP 404 - File not found is a browser error message that we've all grown to know and
hate after clicking on a link.
On reviewing my server logs many moons ago, I noticed around 1% of all requests to my site were returning this HTTP error code. One of the reasons for it was a stupid mistake. I wasn't happy with the naming of a couple of my files, so I renamed them without considering the consequences:
- The files had been on my site for a short time
- During that time a couple of search engine spiders had crawled through the pages. A search engine spider is a software program that scours web sites for content and returns the results to a search engine database. The search engine interface feeds off this to return listings to searchers when they have entered their search criteria.
- Since I changed the names of the files after the spider went through
and had not used a 301
redirect, the pages in their original state no longer "existed".
- The search engine query results reflect the database entries, pointing to the wrong filename; the visitor clicks on the result - 404.... aaaaaaaaargh!
404 errors may also be caused by a malformed browser request (user error - the wrong URL typed into the address bar).
You can help prevent 404 errors scaring visitors away quite easily - dependent upon your hosting service set up. Instead of a visitor being directed to those rather horrible "file not found" pages, you can create custom error pages. Here is an example:
http://www.tamingthebeast.net/aaaargh
The above link is incomplete, which triggers a 404 response on my server.
By implementing custom error pages, you have a good chance of retaining the visitor, especially if you include the standard navigation buttons. It also acts as a means of apologizing to the visitor for the inconvenience. More retained visitors can equal more sales.
It isn't just 404 error messages that you can apply this to. There are a number of error code returns that you could
customize, all with the goal of alleviating visitor stress and encouraging them to further explore your site.
View a listing of HTTP error codes.
Creating custom error pages:
You may want to check with your hosting service before creating custom error pages, as certain hosting configurations may not allow them.
For this exercise, your host needs to support .htaccess files.
1. Create your custom error page
First design and publish the error pages to your
server. You'll really only need to design a couple for the more common errors, such as file not found (404) and unauthorized/forbidden (401, 403). Your custom error pages should
have a brief summary of what went wrong and an encouragement for the
visitor to try again or explore a different area of the site. The best
custom error pages are those that match the site's other pages in
navigation and layout.
Note:
I also suggest adding the following line between the <head> and
</head> tags:
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
You
don't want search engines spidering the page - it has been known to cause
problems.
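As a rough sketch, a bare-bones custom 404 page might look like the following (the headings, links, and file names are placeholders - match them to your own site's wording, navigation and layout):
Code :
<html>
<head>
<title>Page not found</title>
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW">
</head>
<body>
<h1>Sorry - we couldn't find that page</h1>
<p>The page may have been moved or renamed. Please try the
<a href="/index.htm">home page</a> or the <a href="/sitemap.htm">site map</a>,
or use the navigation above to keep exploring the site.</p>
</body>
</html>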
2. Open/create your .htaccess file
After publishing the pages, you'll need to edit the .htaccess file in the root directory of your server-based web. Use the edit utility in your FTP program (set to ASCII transfer mode) or a text editor such as Notepad to view the file.
The .htaccess file contains a number of
settings to control who can access the contents of a specific directory
and how much access they have. It can also be used to create a "URL
Redirect".
If you have a FrontPage based web, be especially careful, as the .htaccess file contains other settings as well.
If
you don't find a .htaccess file, you can create your own, but once
again, check with your hosting service first for guidelines.
Be sure to use a plain text editor and name the file ".htaccess"
(notice the "."
preceding the file name).
3. Edit your .htaccess file
Add the following lines to the end of the file (the examples are a guideline - alter the paths and file names to point towards your error pages). Note that with Apache, specifying a full http:// URL causes the server to redirect the browser rather than return the original error code, so a local path such as /folder/404.htm is generally preferable:
ErrorDocument 404 http://site.com/folder/404.htm
ErrorDocument 403 http://site.com/folder/403.htm
ErrorDocument 401 http://site.com/folder/401.htm
Upload the .htaccess file back to the root of your web in ASCII mode and you should be good to go. Try it out by typing in a nonexistent URL on your site.
Custom error pages are simple to create, help you to increase your site's traffic
by retaining wayward visitors and also encourage better visitor/customer relations.
Creating a Custom 404 Error Page - Tutorial
This tutorial will teach you how to create
your own 404 error pages for your web site. Why would you want to do
this? The answer is that it is a great way to retain visitors to your
site. If a visitor arrives at your site from an outdated link on an
external web site, they will receive a 404 error page. If you do not
have a custom one configured, they will receive the browser's default
error page, such as Internet Explorer's familiar "The page cannot be found" screen.
When visitors see this page, most will assume that the site no longer exists and go elsewhere. If they can see that the site is still there, but the particular page does not exist, they are likely to use your navigation menus or search box to try to find what they are looking for. As an example, take a look at our 404 error page.
Creating custom 404 error pages is pretty easy. If your site is hosted on a Linux, Unix, or BSD server, you need to edit your .htaccess file and add the following line:
Code :
ErrorDocument 404 /yourerrorpage.shtml
The path (/yourerrorpage.shtml) needs to be the name and extension of your error page. Next, create your error page. Error pages are standard HTML pages and can also utilize SSI like the rest of your site. When creating this page, make sure that it is over 512 bytes in size or Internet Explorer will continue to display its own 404 page. Note that images and graphics do not count toward the total size in this circumstance. Once this is completed, upload your .htaccess file and your error page, and you should be all set.
If you are using a Windows web server, you will need to open the IIS management console. Expand the list of web sites in the left-hand pane, right-click on the web site you want to add a custom error page to, and select "Properties". Click the "Custom Errors" tab in the window. Highlight HTTP Error 404 in the list of errors and click the "Edit Properties" button. This will open the "Error Mapping Properties" window. In the "Message Type" dropdown, specify that you'll be using a URL. Enter the path to your 404 error page in the URL field as in /yourerrorpage.shtml.
You can perform this exact same process for other HTTP errors such as 401 and 403.
Sunday, March 25, 2012
Peer-to-Peer Communications - OSI Reference Model
Peer-to-Peer Communications
Let’s see how these layers work in
a Peer to Peer Communications Network. In this exercise we
will package information and move it from Host A, across network
lines to Host B.
Each layer uses its own layer protocol to communicate with its peer layer in the other system. Each layer’s protocol exchanges information, called protocol data units (PDUs), between peer layers.
This peer-layer protocol communication is achieved by using the services of the layers below it. The layer below any current or active layer provides its services to the current layer.
The transport layer ensures that data is segmented and kept separate from other data. At the network layer, those segments are assembled into packets. At the data link layer, the packets become frames, and at the physical layer the frames travel as bits across the wire from one host to the other.
Data Encapsulation
This whole process of moving data from host
A to host B is known as data encapsulation – the data
is being wrapped in the appropriate protocol header so it
can be properly received.
Let’s say we compose an email that we wish to send from system A to system B. The application we are using is Eudora. We write the letter and then hit send. Now, the computer translates the characters into ASCII codes and then into binary (1s and 0s). If the email is a long one, then it is broken up and mailed in pieces. This all happens by the time the data reaches the Transport layer.
At the network layer, a network header is
added to the data. This header contains information required
to complete the transfer, such as source and destination logical
addresses.
The packet from the network layer is then
passed to the data link layer where a frame header and a frame
trailer are added thus creating a data link frame.
Finally, the physical layer provides a service
to the data link layer. This service includes encoding the
data link frame into a pattern of 1s and 0s for transmission
on the medium (usually a wire).
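To make the flow concrete, here is a toy sketch in Python - invented header formats and addresses, not a real protocol stack - showing each layer wrapping what the layer above hands down:
Code :
def transport_layer(data, segment_size=4):
    # Break the message into segments, as the transport layer does
    return [data[i:i + segment_size] for i in range(0, len(data), segment_size)]

def network_layer(segment, src="10.0.0.1", dst="10.0.0.2"):
    # Prepend a network header with logical source/destination addresses
    return "[NET src=%s dst=%s]%s" % (src, dst, segment)

def data_link_layer(packet):
    # Add a frame header and frame trailer around the packet
    return "[FRAME]" + packet + "[/FRAME]"

def physical_layer(frame):
    # Encode the frame as a pattern of 1s and 0s for the medium
    return " ".join(format(ord(ch), "08b") for ch in frame)

for segment in transport_layer("HELLO!"):
    bits = physical_layer(data_link_layer(network_layer(segment)))
    print(bits[:48] + " ...")  # the bits that go out on the wire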
Why a Layered Network Model? - OSI Reference Model
That’s essentially the same thing that goes on in networking with the OSI model. This image illustrates the model.
So, why use a layered network model in the first place? Well, a layered network model does a number of things. It reduces the complexity of the problem from one large one to seven smaller ones. It allows the standardization of interfaces among devices. It also facilitates modular engineering, so engineers can work on one layer of the network model without being concerned with what happens at another layer. This modularity both accelerates the evolution of technology and eases teaching and learning, by dividing the complexity of internetworking into discrete, more easily learned subsets of operations.
Note that a layered model does not define or constrain an implementation; it provides a framework. Implementations, therefore, do not conform to the OSI reference model, but they do conform to the standards developed from the OSI reference model principles.
Devices Function at Layers
Let’s put this in some context. You
are already familiar with different networking devices such
as hubs, switches, and routers. Each of these devices operates at a different layer of the OSI Model.
NIC cards receive information from upper level applications and properly package data for transmission on to the network media. Essentially, NIC cards live at the lower four layers of the OSI Model.
Hubs, whether Ethernet or FDDI, live at the physical layer. They are only concerned with passing bits from one station to other connected stations on the network. They do not filter any traffic.
Bridges and switches on the other hand, will filter traffic and build bridging and switching tables in order to keep track of what device is connected to what port.
Routers, and the technology of routing, live at layer 3.
These are the layers people are referring to when they speak of “layer 2” or “layer 3” devices.
Let’s take a closer look at the model.
Host Layers & Media Layers
Host Layers :-
The upper four layers, Application, Presentation, Session, and Transport, are responsible for accurate data delivery between computers. The tasks or functions of these upper four layers must “interoperate” with the upper four layers in the system being communicated with.
Media Layers :-
The lower three layers - Network, Data Link and Physical - are called the media layers. The media layers are responsible for seeing that the information does indeed arrive at the destination for which it was intended.
The Layered Model - OSI Reference Model
The Layered Model
The concept of layered communication is essential to ensuring interoperability of all the pieces of a network. To introduce the process of layered communication, let’s take a look at a simple example.
In this image, the goal is to get a message
from Location A to Location B. The sender doesn’t know
what language the receiver speaks – so the sender passes
the message on to a translator.
The translator, while not concerned with the content of the message, will translate it into a language that may be globally understood by most, if not all translators – thus it doesn’t matter what language the final recipient speaks. In this example, the language is Dutch. The translator also indicates what the language type is, and then passes the message to an administrative assistant.
The administrative assistant, while not concerned with the language, or the message, will work to ensure the reliable delivery of the message to the destination. In this example, she will attach the fax number, and then fax the document to the destination – Location B.
The document is received by an administrative
assistant at Location B. The assistant at Location B may even
call the assistant at Location A to let her know the fax was
properly received.
The assistant at Location B will then pass the message to the translator at her office. The translator will see that the message is in Dutch. The translator, knowing that the person to whom the message is addressed only speaks French, will translate the message so the recipient can properly read the message. This completes the process of moving information from one location to another.
Upon closer study of the process employed
to communicate, you will notice that communication took place
at different layers. At layer 1, the administrative assistants
communicated with each other. At layer 2, the translators
communicated with each other. And, at layer 3 the sender was
able to communicate with the recipient.
A Simple Network Structure - Tutorial
The figure above shows a simple network with three computers and a printer. You can see that all the devices are connected with network cables to a central network device called a network router. The printer in this network can be used by all the PCs. The figure also shows how the wireless network works: the notebook and the computer connect to the wireless router through the wireless adapters with which they are equipped.
Network stations: These may be terminals, computers, telephones, or other communication devices. They are also called HOSTS or END SYSTEMS. The hosts are connected to a communication subnet (or simply subnet), which carries messages from host to host and consists of switching elements and transmission lines. Transmission lines, also called CIRCUITS, CHANNELS, or TRUNKS, move bits between the machines.
The switching elements are specialized computers used to connect two or more transmission lines. The purpose of a switching element is to choose an outgoing line and forward the data arriving on an incoming line. All traffic to or from a host has to go via its IMP (Interface Message Processor). Switching elements are also known as PACKET SWITCHING NODES, INTERMEDIATE SYSTEMS, or DATA SWITCHING EXCHANGES.
The subnet is the collection of the communication lines and routers, but not the hosts. The set of nodes to which the stations attach forms the boundary of the communication network. The collection of routers and communication lines moves packets from the source host to the destination host.
Network structure can be thought of in terms of:
- Data terminal equipment (DTE)
- Data circuit-terminating equipment (DCE)
Most digital data processing devices have limited data transmission capacity and can transmit over only a limited distance. DTE is the end-user machine, and generally refers to terminals and computers.
Example: an email terminal, a workstation, an ATM in a bank, or a sales terminal in a department store. These devices are not commonly connected directly to the transmission medium.
DCE is used to connect the DTE to the communication channel.
Example: a modem. It interacts with the DTE and provides an interface between the DTE and the communication network, transmitting and receiving bits one at a time over the communication channel.
Various standards and protocols have been developed to specify the exact nature of the interface between DTE and DCE. A high degree of cooperation is essential in a DTE-DCE combination, as data and control information must be exchanged. DTEs and DCEs can be connected in two ways:
- Point-to-point configuration: only two DTE devices are on the channel.
- Multidrop configuration: more than two devices are connected to the same communication channel.
This has covered the basic technology concepts required for understanding networking. The following lessons show how we categorize computer networks.
Implement Chat Systems on an Intranet - Tutorial
It used to be that businesses had a major influence on what types of
products and services were available to the general public. In the
early days of the Internet, networking LAN technologies
and needs were the driving force behind the creation of many software
applications and tools that users accessed. The growing popularity of
the World Wide Web with casual computer users led to a
paradigm shift in how to approach usable software solutions for these
users. Software applications that were simple to install and use, such as file sharing, e-mail, and instant chat programs, were instantly popular.
Businesses began to take notice of how these tools could be useful in everyday work settings. As such, organizations began to look into how to utilize collaborative tools such as sending IMs on LANs, social network portals and productivity applications to support their objectives. Early tools required using obscure, hard-to-understand command-line utilities such as net send to communicate in a networking LAN environment. While this method was effective in keeping communications internal and secure within the company network system, it wasn't a practical solution unless users were skilled computer technicians. It soon became apparent that more user-friendly solutions needed to be implemented for the corporate world.
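For example, sending a quick message with net send looked something like this (the machine name is a placeholder):
Code :
net send SALES-PC "The staff meeting starts at 3pm"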
The growing popularity of the Yahoo Messenger, Google Talk and MSN Messenger programs was a driving force in the development of effective networking LAN IM solutions. While all of these tools can be used to
communicate with virtually anyone that is connected to the Internet,
they can also allow business users to communicate with one another. For
the business, this solution does present some risks. Accommodating users working together by sending IMs on LAN equipment meant they would also be able to contact people outside of the organization.
The solution to the security risk of allowing IMs on LAN networks was easy to implement. Each of these popular messaging programs has been updated to allow specific configuration on networking LAN topologies so that people can see only other users on the same intranet. For a company that wished to keep its software solutions in-house, the creation of custom chat programs helped to overcome the possible risk of outside intrusion. The end result was that the casual user caused businesses to take notice and influenced the solution for a common organizational need.
Brief History of Wi-Fi - Tutorial
Wireless networking: Few
people have a kind word to say about telecoms regulators. But the
success of Wi-Fi shows what can be achieved when regulators and
technologists work together.
IT STANDS as perhaps the signal success of the computer industry in the last few years, a rare bright spot in a bubble-battered market: Wi-Fi, the short-range wireless broadband technology. Among geeks, it has inspired a mania unseen since the days of the internet boom. Tens of millions of Wi-Fi devices will be sold this year, including the majority of laptop computers. Analysts predict that 100m people will be using Wi-Fi by 2006. Homes, offices, colleges and schools around the world have installed Wi-Fi equipment to blanket their premises with wireless access to the internet. Wi-Fi access is available in a growing number of coffee-shops, airports and hotels too. Yet merely five years ago wireless networking was a niche technology. How did Wi-Fi get started, and become so successful, in the depths of a downturn?
Wi-Fi seems even more remarkable when you look at its provenance: it was, in effect, spawned by an American government agency from an area of radio spectrum widely referred to as “the garbage bands”. Technology entrepreneurs generally prefer governments to stay out of their way: funding basic research, perhaps, and then buying finished products when they emerge on the market. But in the case of Wi-Fi, the government seems actively to have guided innovation. “Wi-Fi is a creature of regulation, created more by lawyers than by engineers,” asserts Mitchell Lazarus, an expert in telecoms regulation at Fletcher, Heald & Hildreth, a law firm based in Arlington, Virginia. As a lawyer, Mr. Lazarus might be expected to say that. But he was also educated as an electrical engineer—and besides, the facts seem to bear him out.
In the beginning
Wi-Fi
would certainly not exist without a decision taken in 1985 by the
Federal Communications Commission (FCC), America's telecoms regulator,
to open several bands of wireless spectrum, allowing them to be used
without the need for a government license. This was an unheard-of move
at the time; other than the ham radio channels, there was very little
unlicensed spectrum. But the FCC, prompted by a visionary engineer on
its staff, Michael Marcus, took three chunks of spectrum from the
industrial, scientific and medical bands and opened them up to
communications entrepreneurs.
These so-called “garbage bands”, at 900MHz, 2.4GHz and 5.8GHz, were already allocated to equipment that used radio-frequency energy for purposes other than communications: microwave ovens, for example, which use radio waves to heat food. The FCC made them available for communications purposes as well, on the condition that any devices using these bands would have to steer around interference from other equipment. They would do so using “spread spectrum” technology, originally developed for military use, which spreads a radio signal out over a wide range of frequencies, in contrast to the usual approach of transmitting on a single, well-defined frequency. This makes the signal both difficult to intercept and less susceptible to interference.
Though the 1985 ruling seems visionary in hindsight, nothing much happened at the time. What ultimately got Wi-Fi moving was the creation of an industry-wide standard. Initially, vendors of wireless equipment for local-area networks (LANs), such as Proxim and Symbol, developed their own kinds of proprietary equipment that operated in the unlicensed bands: equipment from one vendor could not talk to equipment from another. Inspired by the success of Ethernet, a wireline-networking standard, several vendors realized that a common wireless standard made sense too. Buyers would be more likely to adopt the technology if they were not “locked in” to a particular vendor's products.
In 1988, NCR Corporation, which wanted to use the unlicensed spectrum to hook up wireless cash registers, asked Victor Hayes, one of its engineers, to look into getting a standard started. Mr. Hayes, along with Bruce Tuch of Bell Labs, approached the Institute of Electrical and Electronics Engineers (IEEE), where a committee called 802.3 had defined the Ethernet standard. A new committee called 802.11 was set up, with Mr. Hayes as chairman, and the negotiations began.
The fragmented market meant it took a long time for the various vendors to agree on definitions and draw up a standard acceptable to 75% of the committee members. Finally, in 1997, the committee agreed on a basic specification. It allowed for a data-transfer rate of two megabits per second, using either of two spread spectrum technologies, frequency hopping or direct-sequence transmission. (The first avoids interference from other signals by jumping between radio frequencies; the second spreads the signal out over a wide band of frequencies.)
The new standard was published in 1997, and engineers immediately began working on prototype equipment to comply with it. Two variants, called 802.11b (which operates in the 2.4GHz band) and 802.11a (which operates in the 5.8GHz band), were ratified in December 1999 and January 2000 respectively. 802.11b was developed primarily by Richard van Nee of Lucent and Mark Webster of Intersil (then Harris Semiconductor).
Companies began building 802.11b-compatible devices. But the specification was so long and complex—it filled 400 pages—that compatibility problems persisted. So in August 1999, six companies—Intersil, 3Com, Nokia, Aironet (since purchased by Cisco), Symbol and Lucent (which has since spun off its components division to form Agere Systems)—got together to create the Wireless Ethernet Compatibility Alliance (WECA).
A rose by any other name...
The idea was that this body would certify that products from different vendors were truly compatible with each other. But the terms “WECA compatible” or “IEEE802.11b compliant” hardly tripped off the tongue. The new technology needed a consumer friendly name. Branding consultants suggested a number of names, including “FlankSpeed” and “DragonFly”. But the clear winner was “Wi-Fi”. It sounded a bit like hi-fi, and consumers were used to the idea that a CD player from one company would work with an amplifier from another. So Wi-Fi it was. (The idea that this stood for “wireless fidelity” was dreamed up later.)
The technology had been standardized; it had a name; now Wi-Fi needed a market champion, and it found one in Apple, a computer-maker renowned for innovation. The company told Lucent that, if it could make an adapter for under $100, Apple would incorporate a Wi-Fi slot into all its laptops. Lucent delivered, and in July 1999 Apple introduced Wi-Fi as an option on its new iBook computers, under the brand name AirPort. “And that completely changed the map for wireless networking,” says Greg Raleigh of Airgo, a wireless start-up based in Palo Alto, California. Other computer-makers quickly followed suit. Wi-Fi caught on with consumers just as corporate technology spending dried up in 2001.
Wi-Fi was boosted by the growing popularity of high-speed broadband internet connections in the home; it is the easiest way to enable several computers to share a broadband link. To this day, Wi-Fi's main use is in home networking. As the technology spread, fee-based access points known as “hotspots” also began to spring up in public places such as coffee-shops, though many hotspot operators have gone bust and the commercial viability of many hotspots is unclear. Meanwhile, the FCC again tweaked its rules to allow for a new variant of Wi-Fi technology, known as 802.11g. It uses a new, more advanced form of spread-spectrum technology called orthogonal frequency-division multiplexing (OFDM) and can achieve speeds of up to 54 megabits per second in the 2.4GHz band.
Where next? Many Wi-Fi enthusiasts believe it will sweep other wireless technologies aside: that hotspots will, for example, undermine the prospects for third-generation (3G) mobile-telephone networks, which are also intended to deliver high-speed data to users on the move. But such speculation is overblown. Wi-Fi is a short-range technology that will never be able to provide the blanket coverage of a mobile network. Worse, subscribe to one network of hotspots (in coffee-shops, say) and you may not be able to use the hotspot in the airport. Ken Denman, the boss of iPass, an internet-access provider based in Redwood Shores, California, insists that things are improving. Roaming and billing agreements will, he says, be sorted out within a couple of years.
By that time, however, the first networks based on a new technology, technically known as 802.16 but named WiMax, should be up and running. As its name suggests, WiMax is positioned as a wide-area version of Wi-Fi. It has a maximum throughput of 70 megabits per second, and a maximum range of 50km, compared with 50m or so for Wi-Fi. Where Wi-Fi offers access in selected places, like phone boxes once did, WiMax could offer blanket coverage, like mobile phones do.
Wi-Fi is also under threat in the home. At the moment it is the dominant home networking technology: Wi-Fi-capable televisions, CD players and video-recorders and other consumer-electronics devices are already starting to appear. This will make it possible to pipe music, say, around the house without laying any cables. Cordless phones based on Wi-Fi are also in the works. But Wi-Fi may not turn out to be the long-term winner in these applications. It is currently too power-hungry for handheld devices, and even 802.11g cannot reliably support more than one stream of video. And a new standard, technically known as 802.15.3 and named WiMedia, has been specifically designed as a short-range, high-capacity home networking standard for entertainment devices.
Wi-Fi's ultimate significance, then, may be that it provides a glimpse of what will be possible with future wireless technologies. It has also changed the way regulators and technologists think about spectrum policy. The FCC has just proposed that broadcast “whitespace”—the airwaves assigned to television broadcasters but not used for technical reasons—should be opened up too. That is not to say that spectrum licensing will be junked in favor of a complete free-for-all over the airwaves. Julius Knapp, the deputy chief of the office of engineering and technology at the FCC, maintains that both the licensed and unlicensed approaches have merit.
Wi-Fi also shows that agreeing on a common standard can create a market. Its example has been taken to heart by the backers of WiMax. Long-range wireless networking gear, like short-range technology before it, has long been dominated by vendors pushing proprietary standards, none of which has been widely adopted. Inspired by Wi-Fi's success, the vendors have now thrown their weight behind WiMax, a common standard with a consumer-friendly name, which they hope will expand the market and boost all their fortunes. Whatever happens to Wi-Fi in future, it has blazed a trail for other technologies to follow.
IP Spoofing and Sniffing - Tutorial
Sniffing and spoofing are security threats that target the lower layers of the networking infrastructure
supporting applications that use the Internet. Users do not interact
directly with these lower layers and are typically completely unaware
that they exist. Without a deliberate consideration of these threats, it
is impossible to build effective security into the higher levels.
Sniffing is a passive security attack in which a machine separate from the intended destination reads data on a network. The term “sniffing” comes from the notion of “sniffing the ether” in an Ethernet network and is a bad pun on the two meanings of the word “ether.” Passive security attacks are those that do not alter the normal flow of data on a communication link or inject data into the link.
Spoofing is an active security attack in which one machine on the network masquerades as a different machine. As an active attack, it disrupts the normal flow of data and may involve injecting data into the communications link between other machines. This masquerade aims to fool other machines on the network into accepting the impostor as an original, either to lure the other machines into sending it data or to allow it to alter data. The meaning of “spoof” here is not “a lighthearted parody,” but rather “a deception intended to trick one into accepting as genuine something that is actually false.” Such deception can have grave consequences because notions of trust are central to many networking systems. While sniffing may seem innocuous (depending on just how sensitive and confidential you consider the information on your network), some network security attacks use sniffing as a prelude to spoofing. Sniffing gathers sufficient information to make the deception believable.
Sniffing
Sniffing is the use of a network interface to receive data not
intended for the machine in which the interface resides. A variety of
types of machines need to have this capability. A token-ring bridge, for
example, typically has two network interfaces that normally receive all
packets traveling on the media on one interface and retransmit some,
but not all, of these packets on the other interface. Another example of
a device that incorporates sniffing is one typically marketed as a
“network analyzer.” A network analyzer helps network administrators
diagnose a variety of obscure problems that may not be visible on any
one particular host. These problems can involve unusual interactions
between more than just one or two machines and sometimes involve a
variety of protocols interacting in strange ways.
Devices that incorporate sniffing are useful and necessary. However, their very existence implies that a malicious person could use such a device or modify an existing machine to snoop on network traffic. Sniffing programs could be used to gather passwords, read inter-machine e-mail, and examine client-server database records in transit. Besides these high-level data, low-level information might be used to mount an active attack on data in another computer system.
At times, you may hear network administrators talk about their
networking trouble spots— when they observe failures in a localized
area. They will say a particular area of the Ethernet is busier than
other areas of the Ethernet where there are no problems. All of the
packets travel through all parts of the Ethernet segment.
Interconnection devices that do not pass all the frames from one side of
the device to the other form the boundaries of a segment. Bridges,
switches, and routers divide segments from each other, but low-level
devices that operate on one bit at a time, such as repeaters and hubs,
do not divide segments from each other. If only low-level devices
separate two parts of the network, both are part of a single segment.
All frames traveling in one part of the segment also travel in the other
part.
Sniffing: How It Is Done
In a shared media network, such as Ethernet, all network interfaces on a network segment have access to all of the data that travels on the media. Each network interface has a hardware-layer address that should differ from all hardware-layer addresses of all other network interfaces on the network. Each network also has at least one broadcast address that corresponds not to an individual network interface, but to the set of all network interfaces. Normally, a network interface will only respond to a data frame carrying either its own hardware-layer address in the frame’s destination field or the “broadcast address” in the destination field. It responds to these frames by generating a hardware interrupt to the CPU. This interrupt gets the attention of the operating system, and passes the data in the frame to the operating system for further processing.
Note: The term “broadcast address” is somewhat misleading. When the sender wants to get the attention of the operating systems of all hosts on the network, he or she uses the “broadcast address.” Most network interfaces are capable of being put into a “promiscuous mode.” In promiscuous mode, network interfaces generate a hardware interrupt to the CPU for every frame they encounter, not just the ones with their own address or the “broadcast address.” The term “shared media” indicates to the reader that such networks broadcast all frames - the frames travel on all the physical media that make up the network.
The broadcast nature of shared media networks affects network performance and reliability so greatly that networking professionals use a network analyzer, or sniffer, to troubleshoot problems. A sniffer puts a network interface in promiscuous mode so that the sniffer can monitor each data packet on the network segment. In the hands of an experienced system administrator, a sniffer is an invaluable aid in determining why a network is behaving (or misbehaving) the way it is. With an analyzer, you can determine how much of the traffic is due to which network protocols, which hosts are the source of most of the traffic, and which hosts are the destination of most of the traffic. You can also examine data traveling between a particular pair of hosts and categorize it by protocol and store it for later analysis offline. With a sufficiently powerful CPU, you can also do the analysis in real time.
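As a rough illustration of the mechanics (not a full sniffer), the Python sketch below reads raw Ethernet frames on Linux. It must be run as root, the interface name is an assumption, and true promiscuous mode (so that frames addressed to other stations are delivered as well) would require an additional socket option that is omitted here for brevity:
Code :
import socket

# AF_PACKET raw sockets deliver whole link-layer frames (Linux only);
# 0x0003 is ETH_P_ALL, i.e. "every protocol".
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
s.bind(("eth0", 0))  # interface name is an assumption - adjust to your system

for _ in range(5):  # read a handful of frames
    frame, _addr = s.recvfrom(65535)
    dst, src = frame[0:6], frame[6:12]  # destination and source MAC addresses
    ethertype = int.from_bytes(frame[12:14], "big")
    print("src=%s dst=%s type=0x%04x len=%d"
          % (src.hex(":"), dst.hex(":"), ethertype, len(frame)))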
Most commercial network sniffers are rather expensive, costing thousands of dollars. When you examine these closely, you notice that they are nothing more than a portable computer with an Ethernet card and some special software. The only item that differentiates a sniffer from an ordinary computer is software. It is also easy to download shareware and freeware sniffing software from the Internet or various bulletin board systems.
The ease of access to sniffing software is great for network administrators because this type of software helps them become better network troubleshooters. However, the availability of this software also means that malicious computer users with access to a network can capture all the data flowing through the network. The sniffer can capture all the data for a short period of time or selected portions of the data for a fairly long period of time. Eventually, the malicious user will run out of space to store the data—the network I use often has 1000 packets per second flowing on it. Just capturing the first 64 bytes of data from each packet fills up my system’s local disk space within the hour.
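(Some quick arithmetic shows why: 1,000 packets per second at 64 captured bytes per packet is 64 KB per second, or roughly 230 MB per hour - easily enough to exhaust a modest local disk.)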
Warning: On some Unix systems, TCPDump comes bundled with the vendor OS. When run by an ordinary, unprivileged user, it does not put the network interface into promiscuous mode. Even without promiscuous mode, such a user can see the data being sent to the Unix host, and is not limited to data sent to processes owned by that user. Systems administrators concerned about sniffing should remove user execution privileges from this program.
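On a typical Unix system, that can be done with something like the following (the install path is an assumption - it varies by vendor):
Code :
# Remove read/execute permission for ordinary users; root can still run it
chmod o-rx /usr/sbin/tcpdump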
Passwords are used not only to authenticate users for access to the files they keep in their private accounts; other passwords are often employed within multilevel secure database systems. When the user types any of these passwords, the system does not echo them to the computer screen, to ensure that no one will see them. After all this jealous guarding of passwords, with the computer system reinforcing the notion that they are private, a setup that sends each character of a password across the network is extremely easy for any Ethernet sniffer to see. End users do not realize just how easily these passwords can be found by someone using a simple and common piece of software.
However, much larger potential losses exist for businesses that conduct electronic funds transfer or electronic document interchange over a computer network. These transactions involve the transmission of account numbers that a sniffer could pick up; the thief could then transfer funds into his or her own account or order goods paid for by a corporate account. Most credit card fraud of this kind involves only a few thousand dollars per incident.
Spoofing is an active security attack in which one machine on the network masquerades as a different machine. As an active attack, it disrupts the normal flow of data and may involve injecting data into the communications link between other machines. This masquerade aims to fool other machines on the network into accepting the impostor as an original, either to lure the other machines into sending it data or to allow it to alter data. The meaning of “spoof” here is not “a lighthearted parody,” but rather “a deception intended to trick one into accepting as genuine something that is actually false.” Such deception can have grave consequences because notions of trust are central to many networking systems. Sniffing may seem innocuous (depending on just how sensitive and confidential you consider the information on your network), some network security attacks use sniffing as a prelude to spoofing. Sniffing gathers sufficient information to make the deception believable.
Sniffing
Sniffing is the use of a network interface to receive data not
intended for the machine in which the interface resides. A variety of
types of machines need to have this capability. A token-ring bridge, for
example, typically has two network interfaces that normally receive all
packets traveling on the media on one interface and retransmit some,
but not all, of these packets on the other interface. Another example of
a device that incorporates sniffing is one typically marketed as a
“network analyzer.” A network analyzer helps network administrators
diagnose a variety of obscure problems that may not be visible on any
one particular host. These problems can involve unusual interactions
between more than just one or two machines and sometimes involve a
variety of protocols interacting in strange ways.Devices that incorporate sniffing are useful and necessary. However, their very existence implies that a malicious person could use such a device or modify an existing machine to snoop on network traffic. Sniffing programs could be used to gather passwords, read inter-machine e-mail, and examine client-server database records in transit. Besides these high-level data, lowlevel information might be used to mount an active attack on data in another computer system.
Sniffing: How It Is Done
In a shared media network, such as Ethernet, all network interfaces on a network segment have access to all of the data that travels on the media. Each network interface has a hardware-layer address that should differ from all hardware-layer addresses of all other network interfaces on the network. Each network also has at least one broadcast address that corresponds not to an individual network interface, but to the set of all network interfaces. Normally, a network interface will only respond to a data frame carrying either its own hardware-layer address in the frame’s destination field or the “broadcast address” in the destination field. It responds to these frames by generating a hardware interrupt to the CPU. This interrupt gets the attention of the operating system, and passes the data in the frame to the operating system for further processing.
Note: The term “broadcast address” is somewhat
misleading. When the sender wants to get the attention of the operating
systems of all hosts on the network, he or she uses the “broadcast
address.” Most network interfaces are capable of being put into a
“promiscuous mode.” In promiscuous mode, network interfaces generate a
hardware interrupt to the CPU for every frame they encounter, not just
the ones with their own address or the “broadcast address.” The term
“shared media” indicates to the reader that such networks broadcast all
frames—the frames travel on all the physical media that make up the
network.
The broadcast nature of shared media networks affects network performance and reliability so greatly that networking professionals use a network analyzer, or sniffer, to troubleshoot problems. A sniffer puts a network interface in promiscuous mode so that the sniffer can monitor each data packet on the network segment. In the hands of an experienced system administrator, a sniffer is an invaluable aid in determining why a network is behaving (or misbehaving) the way it is. With an analyzer, you can determine how much of the traffic is due to which network protocols, which hosts are the source of most of the traffic, and which hosts are the destination of most of the traffic. You can also examine data traveling between a particular pair of hosts and categorize it by protocol and store it for later analysis offline. With a sufficiently powerful CPU, you can also do the analysis in real time.
Most commercial network sniffers are rather expensive, costing thousands of dollars. When you examine these closely, you notice that they are nothing more than a portable computer with an Ethernet card and some special software. The only item that differentiates a sniffer from an ordinary computer is software. It is also easy to download shareware and freeware sniffing software from the Internet or various bulletin board systems.
The ease of access to sniffing software is great for network administrators because this type of software helps them become better network troubleshooters. However, the availability of this software also means that malicious computer users with access to a network can capture all the data flowing through the network. The sniffer can capture all the data for a short period of time or selected portions of the data for a fairly long period of time. Eventually, the malicious user will run out of space to store the data—the network I use often has 1000 packets per second flowing on it. Just capturing the first 64 bytes of data from each packet fills up my system’s local disk space within the hour.
Note:
- Esniff.c is a simple 300-line C language program that works on SunOS 4.x. When run by the root user on a Sun workstation, Esniff captures the first 300 bytes of each TCP/IP connection on the local network. It is quite effective at capturing all usernames and passwords entered by users for telnet, rlogin, and FTP.
- TCPDump 3.0.2 is a common, more sophisticated, and more portable Unix sniffing program written by Van Jacobson, a famous developer of high-quality TCP/IP software. It uses the libpcap library for portably interfacing with promiscuous mode network interfaces. The most recent version is available via anonymous FTP to ftp.ee.lbl.gov.
- NetMan contains a more sophisticated, portable Unix sniffer in several programs in its network management suite. The latest version of NetMan is available via anonymous FTP to ftp.cs.curtin.edu.au in the directory /pub/netman.
- EthDump is a sniffer that runs under DOS and can be obtained via anonymous FTP from ftp.eu.germany.net in the directory /pub/networking/inet/ethernet/.
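To make this concrete, here is a minimal sketch, in C, of the promiscuous-mode capture at the heart of tools like these. It uses the libpcap library mentioned above; the interface name "eth0", the 64-byte snapshot length, and the ten-frame limit are illustrative assumptions, not details taken from Esniff.c or TCPDump.

    #include <stdio.h>
    #include <pcap.h>

    /* Called once per captured frame - every frame on the segment,
       not just those addressed to this machine. */
    static void handle_frame(u_char *user, const struct pcap_pkthdr *h,
                             const u_char *bytes)
    {
        printf("captured %u of %u bytes\n", h->caplen, h->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];

        /* The third argument (1) requests promiscuous mode; 64 is the
           snapshot length - how many bytes of each frame to keep. */
        pcap_t *p = pcap_open_live("eth0", 64, 1, 1000, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, 10, handle_frame, NULL); /* grab ten frames, then stop */
        pcap_close(p);
        return 0;
    }

Compile with something like cc sniff.c -lpcap and run it with root privileges; the point is simply how little code separates an ordinary host from a sniffer.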
Sniffing: How It Threatens Security
Sniffing data from the network leads to loss of privacy of several kinds of information that should be private for a computer network to be secure. These kinds of information include the following:
- Passwords
- Financial account numbers
- Private data
- Low-level protocol information
Sniffing Passwords
Perhaps the most common loss of computer privacy is the loss of passwords. Typical users type a password at least once a day. Data is often thought of as secure because access to it requires a password. Users usually are very careful about guarding their password by not sharing it with anyone and not writing it down anywhere.
Passwords are used not only to authenticate users for access to the files they keep in their private accounts; other passwords are often employed within multilevel secure database systems. When the user types any of these passwords, the system does not echo them to the computer screen to ensure that no one will see them. Yet after users jealously guard these passwords, and after the computer system reinforces the notion that they are private, a typical setup sends each character of a password across the network, where it is extremely easy for any Ethernet sniffer to see. End users do not realize just how easily these passwords can be found by someone using a simple and common piece of software.
Sniffing Financial Account Numbers
Most users are uneasy about sending financial account numbers, such as credit card numbers and checking account numbers, over the Internet. This apprehension may be partly because of the carelessness most retailers display when tearing up or returning carbons of credit card receipts. The privacy of each user’s credit card numbers is important. Although the Internet is by no means bulletproof, the most likely location for the loss of privacy to occur is at the endpoints of the transmission. Presumably, businesses making electronic transactions are as fastidious about security as those that make paper transactions, so the highest risk probably comes from the same local network in which the users are typing passwords.
However, much larger potential losses exist for businesses that conduct electronic funds transfer or electronic document interchange over a computer network. These transactions involve the transmission of account numbers that a sniffer could pick up; the thief could then transfer funds into his or her own account or order goods paid for by a corporate account. Most credit card fraud of this kind involves only a few thousand dollars per incident.
Sniffing Private Data
Loss of privacy is also common in e-mail transactions. Many e-mail messages have been publicized without the permission of the sender or receiver. Remember the Iran-Contra affair, in which President Reagan's secretary of defense, Caspar Weinberger, was indicted. A crucial piece of evidence was backup tapes of PROFS e-mail on a National Security Agency computer. The e-mail was not intercepted in transit, but in a typical networked system, it could have been. It is not at all uncommon for e-mail to contain confidential business information or personal information. Even routine memos can be embarrassing when they fall into the wrong hands.
Sniffing Low-Level Protocol Information
The information that network protocols send between computers includes hardware addresses of local network interfaces, the IP addresses of remote network interfaces, IP routing information, and sequence numbers assigned to bytes on a TCP connection. Knowledge of any of this information can be misused by someone interested in attacking the security of machines on the network. See the second part of this chapter for more information on how these data can pose risks for the security of a network. A sniffer can obtain any of these data. After an attacker has this kind of information, he or she is in a position to turn a passive attack into an active attack with even greater potential for damage.
Wireless Network Security - Tutorial
Is wireless an okay way to go as far as integrity of the system is concerned?
Currently, they are safer than using a cable ISP, because fewer people out there have the tools to compromise your wireless network. However, this will not be true forever. Someone with the right equipment could drive around and break into many home wireless networks. This is not a concern for our family, because we live out in the middle of farm land. Anyone getting close enough to try to compromise our network would be very obvious. However, plenty of people do break into home networks while they are connected to an ISP. Most of them are youths who do not seek to harm, but want to explore and see what they can get into. You might occasionally become aware of this when you shut down and see a message that someone (or several someones) is connected, and that shutting down will drop their connection. However, this message will also be generated if one of the other home computers has been looking at the computer you are shutting down.
Also, wireless networks only work at 10 Mbits per second, not the 100 Mbits per second available via Cat cables. This should not be a problem for typical home network use, though. I am considering wireless for my in-laws for when they are staying in their attached home. It would allow them to print to our computers, share our ISP, and move files. I do have wireless on my new Compaq laptop, which I might use at future Compaq seminars.
If one is concerned about security:
1. Use some security software (firewalls). Compaq has this software for me to use, and I plan to explore it and set it up in my home, but I still do not see this as a major problem yet.
2. Set the network settings to be more restrictive. This is done differently depending on what O/S you are using. NT, Windows 2000 and XP Professional can be made more secure than Windows 95/98. However, this can also cause protection problems when legitimate users want to share files. There is a fine balance between being secure and being able to use your computers easily.
3. Backup your critical data onto removable media (CD-ROM or Tape) so that you can recover from intrusion or disasters. This is good policy in any case.
One great piece of advice I would give is to disconnect from the Internet whenever it is not needed. This makes it harder for someone to probe and try to infiltrate your home network.
Using the Computer, Away From the Computer
You get to work on a Monday morning, and realise that you've left the
document you were presenting today back at home on your computer. How
do you get it? Or maybe you have a friend that has problems with their
computer, and you know how to fix it, but only if you can see it. There
could be many more situations where you need to use a particular
computer and you're not there. Enter the range of programs known as VNC.
VNC stands for Virtual Network Computing, and was originally created by a group of people from the AT&T Laboratories in Cambridge, UK. VNC makes it possible to use a computer when you are not in front of it, by giving you access to the mouse, keyboard, and monitor output. All you need is to be connected via a network - even the Internet. And, most versions of VNC are cross-platform compatible - meaning you can use a computer running Linux while you're on a computer running Windows, and of course vice versa.
So, just how does it work? The computer that is going to be accessed must be running a VNC server. Normally, a password is set on the server to prevent unauthorised access. Then, the computer that is going to access (the client) needs to run a VNC viewer. All you need to do is enter the host name (IP address, domain name, network name etc.) and password, and the client computer will show on its screen the contents of the server computer's screen. And, the client computer's mouse or keyboard will act as the server computer's mouse and keyboard.
Sound confusing? Try it for yourself! There are many varieties of the VNC program, and the best thing is that most of them are free.
The original and probably most widely used version is RealVNC, which can be downloaded free from www.realvnc.com. There are many other versions, all of which have the same idea but offer different features. A few of the most popular versions are:
- Ultr@VNC - Free - http://ultravnc.sourceforge.net/
- TightVNC - Free - http://www.tightvnc.com/
- TridiaVNC - Free - http://www.tridiavnc.com/
- TridiaVNC Pro - $49 - http://www.tridiavncpro.com/
Using VNC In A Network
So, what if you have more than one computer that you wish to get access to through the Internet, and they are all on a local area network, with Internet access only through one machine? What can you do? RealVNC offers a great solution.
On each computer, install the RealVNC server (you can download it from www.realvnc.com). Right click on the VNC icon in the system tray/notification area (next to the clock), and select Properties. Uncheck the "Auto" checkbox next to the "Display Number" section, and enter a number from 0-99 to use as your display number. Make sure you enter a different number for each machine, and don't use a number higher than 99, otherwise you won't be able to connect.
What this does is change the port that VNC runs on. By default, VNC runs on port 5900, which corresponds with display number 0. Display number 1 corresponds with port 5901, and so on, right up to display 99 corresponding with port 5999.
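If it helps to see the arithmetic spelled out, here is a tiny illustrative C snippet (not part of RealVNC) of the display-number-to-port mapping just described; the 5800 range it mentions belongs to the web viewer covered a little further down.

    #include <stdio.h>

    int viewer_port(int display) { return 5900 + display; } /* VNC protocol    */
    int web_port(int display)    { return 5800 + display; } /* Java web viewer */

    int main(void)
    {
        int display = 7;  /* e.g. connecting to "server:7" */
        printf("viewer connects on port %d\n", viewer_port(display));    /* 5907 */
        printf("browser would use http://host:%d/\n", web_port(display)); /* 5807 */
        return 0;
    }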
Now, open the VNC viewer on one of the computers on your network (it'll be in the Start Menu under RealVNC once you've installed it), and connect to another computer, using either its IP address or name. But, you need to tell the VNC viewer that you want to connect on a different port (display number), so you do this by entering a colon, followed by the display number. For example, if my computer is called "server" and it's running on display number 7, I'd connect to "server:7". What this is really doing is connecting on port 5907. Then, enter the password and you're connected!
So, each computer is now running its VNC server on a separate port. All you need to do now is forward ports from your router or directly connected computer, and then you'll be able to access each of those computers from the outside world! Just run the viewer, enter your IP address or domain name, followed by the correct display number.
And another little feature - if you can't install the VNC viewer on the computer you are at, just open up a web browser and navigate to http://your-ip:5800/ (notice we're now using the 5800 range, not the 5900 range). This will open up a Java viewer for the computer running on display number 0. If you want display number 34, just go to http://your-ip:5834/. It couldn't be any easier!
Conclusion
This article has only outlined some of the features of VNC. There are many more - so get out there and explore them! And as always, if you have any troubles using VNC, or have any questions about it, post a message in our Articles Forum. We're here to help!
Internet Connection Sharing - Part I
So, you've got multiple computers running Windows, and multiple
people in your home or business who are active Internet users. But you
only have one Internet connection. How can you all browse the Internet,
read e-mail, chat online, and download files at the same time? The
solution is built right into Windows - and it's called Internet
Connection Sharing. This article explains Internet Connection Sharing in
detail, and is designed as a practical guide to help you set it all up.
The Basics
Now, let's lay down the basics. This article assumes that you have a working knowledge of the Windows operating system, and a basic knowledge of simple Windows networking - and that the computers you wish to use are already networked together, and the network and internet connection are functioning correctly. If this isn't yet sorted out, then you may want to start out with one of our networking tutorials.
There are many different ways to set up Internet Connection Sharing, but for the purposes of this article, it is assumed that you have two or more computers connected via a hub or switch. The instructions in this article are also for Windows XP - instructions for other versions of Windows will be coming in future articles. Your host computer (the one with the connection) will need to be running Windows XP, but the other computer(s) can be running any version of Windows.
The other varying factor that this article will focus on is sharing a 56k modem connection. However, other types of connections (eg. cable modem) are quite similar, so you shouldn't have any problems using this article if you are in that situation.
Configuring The Host
Remember, we're assuming from here on in that your network is functioning correctly (i.e. you can transfer files between computers) and that your host computer can successfully connect to the Internet.
Click Start > Connect To. You'll see the network connections that are present on your computer. Right click the connection you use for the Internet, and select Properties. Move to the Advanced tab, then select Allow other network users to connect through this computer's Internet connection. Two other options will be available to you: Establish a dial-up connection whenever a computer on my network attempts to access the Internet and Allow other network users to control or disable the shared Internet connection. You can select these if you wish.
Next, we need to check your computer's IP address. Click Start > Run. Type in cmd and click OK, and the command prompt will open. Now type in ipconfig (you'll need to type winipcfg if you're trying this in Windows 98) and press Enter. At least two IP addresses should be displayed - one for your internet connection, and one for your network. The one that we want to look at now is your network connection IP address. It should be 192.168.0.1. If it isn't, we'll need to change it so it is - because for Windows Internet Connection Sharing, it is assumed that the IP address of the host computer is 192.168.0.1.
Giving the Host Computer a Static IP Address
If your host computer's IP address is already 192.168.0.1, you can skip this step. Otherwise… read on.
Open up the network connections folder like we did before, but this time click on Show all connections. Now right click on your local area connection, and click Properties. Find TCP/IP in the list of items that your connection uses, and select that then click Properties. You'll then need to make sure that Use the following IP address is selected, and type in 192.168.0.1 as your IP address. Note that the dots are already inserted.
Finally, enter 255.255.255.0 as your subnet mask, then click OK and OK again. It should take a few seconds to update, and then you're set. You may be asked to restart your computer - if so, then restart it before continuing.
Configuring The Clients
Now we're ready to configure the client computers. For this bit, we'll assume you're running Windows 98. It won't be too different for other versions though.
From the desktop, right click on Network Neighborhood and select Properties. Find TCP/IP in the list, select that, and click Properties (if there is more than one TCP/IP listing, use the one that also mentions your network card). There are three things to do here:
- On the IP Address tab, make sure that Obtain an IP address automatically is selected
- On the WINS Configuration tab, make sure that Use DHCP for WINS Resolution is selected
- On the Gateway tab, remove any gateways that may be listed
What If It Doesn't Work?
In my experiences with networking, the same thing frequently doesn't work more than once. Therefore, I know that there will be some of you who are reading this and just can't get it working. Every situation is different, so if you can't get it working, don't give up! Please post a message into the Networking forum on the TechiWarehouse message board, and either I or someone else will try to help you. Then, we'll include information about what we find in future articles.
Good luck with your network!
Improving Performance Over Wireless Networks
Introduction
TCP is a common transport protocol that is used in almost all Internet applications. With the advent of PDAs and of many wireless data applications, TCP is a major transport protocol on wireless links as well. Because of the unreliability of the wireless channel, much work has to be done to make the data transfer reliable, and that reliability is a characteristic of TCP. But the other major challenge posed is the speed of the data, so there is a compromise between speed and reliability. There has been much research going on, such as changing the syntax of TCP, adding extra protocols at the data link layer, and so on. The main problem with TCP is that it falsely interprets packet loss as congestion. The TCP sender detects a packet loss when a timeout happens or when duplicate acknowledgements arrive.
TCP cannot recover from a loss without timing out unless
1) The connection has a large number of outstanding packets, or
2) Enough ACKs are flowing back from the receiver.
There is a situation where a packet cannot be recovered by a fast retransmission: a reduced window size does not produce enough outstanding packets to generate the duplicate ACKs that trigger it.
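To see the mechanism that a small window starves, here is a simplified C sketch of the usual fast retransmit rule: three duplicate ACKs trigger a retransmission without waiting for a timeout. The ACK trace is an invented example.

    #include <stdio.h>

    int main(void)
    {
        /* ACK numbers as they might arrive at the sender; the repeated
           value 100 means the segment starting at byte 100 never arrived. */
        int acks[] = { 100, 100, 100, 100 };
        int n = sizeof acks / sizeof acks[0];
        int last = -1, dup = 0;

        for (int i = 0; i < n; i++) {
            if (acks[i] == last) {
                if (++dup == 3) /* third duplicate ACK */
                    printf("fast retransmit of segment at %d\n", acks[i]);
            } else {
                last = acks[i];
                dup = 0;
            }
        }
        return 0;
    }

With only one or two packets in flight, the receiver can never generate those three duplicates, which is exactly the failure described above.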
One of the common methods of improving performance on wireless links is applied at the physical layer, using forward error correction (FEC). This method has many disadvantages, as it doesn't solve the problem entirely. The proposed solutions fall into three categories: link layer, split connection, and proxy. A protocol called AIRMAIL makes the link layer reliable, in addition to using FEC. An entire window is sent by the base station; the advantage of doing this is that we need not bother about acknowledgements for each packet. Unfortunately, the main issue being ignored here is the worst-case scenario: if the error rate is high, we have no idea of the errors until the end of the window.
In the case of split connection, the TCP connection is split between the source and the base station and between the base station and the receiver. This approach fails on a basic principle: it violates TCP's end-to-end semantics.
In the case of the proxy approach, a proxy is inserted between the sender and receiver. The Snoop protocol uses this approach. The main disadvantage of the Snoop protocol is that it takes into account only the cumulative acknowledgements, and it makes many inappropriate assumptions about the pattern of the losses.
Throughput of a TCP connection is a measure of its performance. Maximum throughput occurs when the TCP congestion window is equal to the bandwidth-delay product of the link. At this stage we are using the maximum capacity of the link. To achieve high throughput we need to either:
- Try mechanisms to Avoid Loss
- Or else try mechanisms to recover fast if loss occurs
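As a quick worked example of the bandwidth-delay product mentioned above (the link numbers are illustrative assumptions, not from any particular network):

    #include <stdio.h>

    int main(void)
    {
        double bandwidth_bps = 2e6; /* assumed 2 Mbit/s wireless link */
        double rtt_s = 0.1;         /* assumed 100 ms round-trip time */

        double bdp_bits = bandwidth_bps * rtt_s; /* bits kept "in flight" */
        printf("bandwidth-delay product: %.0f bits (%.0f bytes)\n",
               bdp_bits, bdp_bits / 8.0);
        /* Prints 200000 bits (25000 bytes): a congestion window smaller
           than about 25 Kbytes cannot keep this link full. */
        return 0;
    }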
Problems in Wireless Links
On wireless networks most packet losses are due to poor link quality and intermittent connectivity. The random characteristics of the channel make it difficult to predict end-to-end data rates, delay statistics, and packet loss probabilities.
The Link Layer
Link layer protocols operate independently of the higher layer protocols. TCP implementations have large retransmission times that are multiples of 500 ms, whereas link layer retransmissions have times on the order of 200 ms. Link layer protocols that have been used do not attempt in-order delivery across the link and cause packets to arrive out of order at the receiver. Currently, no particular standard exists for link layer protocols. Most of the link layer protocols use stop-and-wait, go-back-N, selective repeat, and forward error correction to provide reliability. Various research simulations have shown that correcting errors at the link layer increases overall performance. But retransmissions at the link layer do not always improve performance, because TCP performs poorly in the presence of local retransmissions. Therefore a solution at the link layer must ensure in-order delivery of packets.
TULIP (Transport Unaware Link Improvement Protocol)
- It has the ability to maintain local recovery of all lost packets, thereby preventing unnecessary and delayed retransmission of packets and the subsequent reduction of TCP's congestion window. No modification to network or transport layer software is needed.
- It is an efficient link layer protocol that takes advantage of opposing flows by piggybacking link layer ACKs with transport layer ACKs, giving throughput up to three times higher than unmodified TCP.
- TULIP does not depend on TCP state information (TCP headers and so on), so TULIP is able to adapt to different versions of TCP. TULIP piggybacks TCP ACKs with link layer ACKs, and thereby doesn't need extra bandwidth.
TULIP passes one packet at a time to the MAC layer. TULIP uses two additional signals called TRANS and WAIT. TULIP needs the MAC layer to inform it that transmission of the packet passed to it has started; the MAC layer signals the start of the transmission through the TRANS signal. After receiving the TRANS signal, TULIP starts a timer and waits for t1 seconds before sending the next packet. Because either side may have variable-length data to send, and because such packets are longer than link-layer ACKs, the MAC layer must inform TULIP (via the WAIT signal) when it should wait longer than t1. This procedure of packet interleaving allows the two sources to be clocking during the transfer of bidirectional data.
The basic principle of TULIP is MAC-level acceleration, a mechanism to reduce the link delay. TULIP includes a MAC acceleration feature and uses the three-way handshake implemented by the FAMA-NCS protocol. In this handshake the sender uses non-persistent carrier sensing to transmit an RTS (request to send) to the receiver, and the receiver sends back a CTS (clear to send) that lasts longer than an RTS. This CTS is a tone that forces all other nodes to back off long enough to allow the data packet to arrive collision-free at the receiver. MAC acceleration works as follows:
1) The sender transmits a TULIP packet (containing data) after an RTS-CTS handshake; the receiver sends back a TULIP ACK to the sender immediately.
2) If the receiver wants to send back a data packet (whose size is 40 bytes or less) when it has received a TULIP packet, this packet is piggybacked with the TULIP ACK and sent to the sender. There is no RTS-CTS handshake for that data packet.
3) Only if the receiver's data packet is larger than 40 bytes is there an RTS-CTS handshake for that packet.
TULIP uses a cumulative ACK feature. Whenever a packet fails to arrive at the receiver from the sender, the receiver sends back an ACK with a bit vector indicating that the corresponding packet has not been received. The receiver does not stop receiving successive packets from the sender; it receives them and stores them in a buffer. Then it prepares a retransmission list of the missing packets and forwards an ACK giving the information about the missing packets. Only when the receiver has received the missing packets, and they are all in the correct order, does it pass them together to the next higher layer.
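The bit-vector ACK is easy to picture with a small C sketch. This is an illustration of the idea only; the field names and sizes are assumptions, not TULIP's actual packet format.

    #include <stdint.h>
    #include <stdio.h>

    struct ack {
        uint16_t cumulative; /* everything up to this sequence number arrived */
        uint32_t missing;    /* bit i set => packet (cumulative + 1 + i) lost */
    };

    int main(void)
    {
        struct ack a = { 41, 0 };
        a.missing |= 1u << 2;  /* receiver reports packet 44 as missing */

        /* The sender builds its retransmission list from the bit vector. */
        for (int i = 0; i < 32; i++)
            if (a.missing & (1u << i))
                printf("retransmit packet %d\n", a.cumulative + 1 + i);
        return 0;
    }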
Performance
The properties of wireless channels are entirely different from those of wired channels. Wireless channels have high bit-error rates (BER). These channels can also cause burst errors, especially when the channel is in a deep fade for a significant amount of time. Various strategies have been proposed; the different proposals can be categorized as:
1) End-to-end
2) Split connection
3) Link layer
An end-to-end connection basically handles all kinds of losses. An optimum end-to-end scheme can employ the following strategies:
- The optimum error categories depend on the type of network. Analyzed from the sender's point of view, this method can be used if the accuracy of the detection scheme can be sacrificed in exchange for minimal changes at the intermediate nodes.
- In this method it is better to employ a selective acknowledgement scheme, because it allows the TCP sender to recover more efficiently from multiple packet drops in a given window.
Throughput
The bit error rates vary from 0 to 15 errors per million bits. The receiver window size is 42 Kbytes. The TULIP protocol makes a retransmission list at the sender upon receiving the first ACK and knows which packets are missing, because this information is sent back from the receiver. It retransmits the packets as soon as it receives this ACK, so errors further down the window are recovered before the first error. Other protocols must rely heavily on timers and cumulative ACKs, and get stuck trying to retransmit packets in a series of losses. End-to-end delays are drastically reduced using TULIP.
How Do Virtual LANs (VLANs) Work?
Introduction
A VLAN is a grouping of computers that is logically segmented by functions, project teams, or applications without regard to the physical location of users. For example, several end stations might be grouped as a department, such as Engineering or Accounting, having the same attributes as a LAN even though they are not all on the same physical LAN segment. To accomplish this logical grouping, a VLAN-capable switching device must be used. Each switch port can be assigned to a VLAN. Ports in a VLAN share broadcast traffic and belong to the same broadcast domain. Broadcast traffic in one VLAN is not transmitted outside that VLAN. This segmentation improves the overall performance of the network.
Benefits
VLANs provide the following benefits:
- Reduced administration costs associated with moves, adds, and changes
- Controlled broadcast activity and better network security
- Leveraging existing investments
- Flexible and scalable segmentation
You can leverage existing hub investments by assigning each hub segment connected to a switch port to a VLAN. All the stations that share a hub segment are assigned to the same VLAN. If an individual station must be reassigned to another VLAN, the station is relocated to the appropriate corresponding hub module. The interconnected switch fabric handles communication between the switching ports and automatically determines the appropriate receiving segments.
You can also assign VLANs based on the application type and the amount of applications broadcasts.
VLAN Operation
Switches—the Core of VLANs
Switches are a primary component of VLAN communication. They perform critical VLAN functions by acting as the entry point for end-station devices into the switched fabric, facilitating communication across the organization, and providing the intelligence to group users, ports, or logical addresses into common communities of interest. Each switch has the intelligence to make filtering and forwarding decisions by frame, based on VLAN metrics defined by network managers, and to communicate this information to other switches and routers within the network.
The criteria used to define the logical grouping of nodes into a VLAN are based on a technique known as frame tagging. There are two types of frame tagging—implicit and explicit. Implicit tagging enables a packet to belong to a VLAN based on the Media Access Control (MAC) address, protocol, the receiving port of a switch, or another parameter into which nodes can be logically grouped. Explicit tagging requires the addition of a field to a frame or packet header that serves to classify the VLAN association of the frame. Frame tagging functions at Layer 2 and requires little processing or administrative overhead.
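For a sense of what an explicit tag looks like, here is a C sketch of the four-byte tag that the IEEE 802.1q draft inserts into the Ethernet frame header. The struct is purely illustrative; real code would also have to deal with network byte order.

    #include <stdint.h>
    #include <stdio.h>

    struct dot1q_tag {
        uint16_t tpid; /* Tag Protocol Identifier: 0x8100 marks a tagged frame */
        uint16_t tci;  /* Tag Control Information: 3-bit priority, 1-bit CFI,
                          12-bit VLAN ID */
    };

    /* Pull the 12-bit VLAN ID out of the TCI field. */
    uint16_t vlan_id(uint16_t tci) { return tci & 0x0FFF; }

    int main(void)
    {
        struct dot1q_tag tag = { 0x8100, 0x2005 }; /* priority 1, VLAN 5 */
        printf("frame belongs to VLAN %d\n", vlan_id(tag.tci));
        return 0;
    }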
Routers
For inter-VLAN communication, you must use routers that extend VLAN communications between workgroups. Routers provide policy-based control, broadcast management, and route processing and distribution. They also provide communication between VLANs and VLAN access to shared resources such as servers and hosts. Routers connect to other parts of the network that are either logically segmented into subnets or require access to remote sites across wide-area links. To consolidate the overall number of physical router ports required for communication between VLANs, routers use high-speed backbone connections over Fast Ethernet, Fiber Distributed Data Interface (FDDI), or Asynchronous Transfer Mode (ATM) for higher throughput between switches and routers.
Types of VLANs
Each VLAN is of a particular type, and has its own maximum transmission unit (MTU) size. Two types of VLANs are defined:
- Ethernet/802.3 VLANs
- Token Ring/802.5 VLANs
Inter VLAN Communication
By definition, virtual LANs perform traffic separation within a shared network environment. Communication between VLANs is performed through routing functionality and, for nonroutable protocols, switching. This integrated solution of high-speed, scalable VLAN switching of local traffic and efficient routing and switching of inter-VLAN traffic is becoming increasingly attractive in large networks. Cisco routers address this requirement with their ability to connect 802.10, ISL, and ATM LANE-based VLANs.
VLAN Standardization
IEEE 802.1q provides for the standardization of VLANs based on a three-layer approach. The IEEE 802.1q draft is expected to be approved as a standard in 1998. Currently, several different transport mechanisms are used for communicating VLAN information across high-performance backbones. Among them are the LANE standard that has been approved by the ATM Forum, Cisco's Inter-Switch Link (ISL) for Fast Ethernet, and the IEEE 802.10 protocol, which provides VLAN communication across shared FDDI backbones.
Remote Desktop Technology of Windows XP
Windows XP Professional is built on the proven code base of Windows
2000, which features a 32-bit computing architecture, and a fully
protected memory model. This makes Windows XP Professional the most
reliable version yet.
Windows XP helps protect data transmitted across a network. IP Security (IPSec) is an important part of providing security for virtual private networks (VPNs), which allow organizations to transmit data securely over the Internet, and Windows XP also includes a firewall client that can protect small businesses from common Internet attacks. Windows XP Professional makes it significantly easier for you to remotely connect to networks, including VPNs, over dial-up connections, infrared, and direct cable connections.
What is VPN?
VPN uses a technique known as tunneling to transfer data securely on the Internet to a remote access server on your workplace network. Using a VPN helps you save money by using the public Internet instead of making long-distance phone calls to connect securely with your private network.
There are two ways to create a VPN connection:
- By dialing an Internet service provider (ISP).
- By connecting directly to the Internet.
What is Remote Assistance?
Remote Assistance enables a user to share control of his or her computer with someone on a network or the Internet. An administrator or friend can view the user's screen, and control the pointer and keyboard to help solve a technical problem. IT departments can build custom solutions, on top of published APIs using HTML, to tailor Remote Assistance to their needs, and the feature can be centrally enabled or disabled.
What is Remote Desktop?
Remote Desktop is based on Terminal Services technology. Using Remote Desktop, you can run applications on a remote computer running Windows XP Professional from any other client running a Microsoft Windows operating system.
Remote Desktop lets you take advantage of the flexibility provided by a distributed computing environment. A standard component of Windows XP Professional (although not included in Windows XP Home Edition), Remote Desktop lets you access your Windows XP computer from anywhere, over any connection, using any Windows-based client. Remote Desktop gives you secure access to all your applications, files, and network resources, as if you were in front of your own workstation. Any applications that you leave running at the office will still be running when you connect remotely, whether at home, in a conference room, or on the road.
Remote Desktop works well even under low-bandwidth conditions, because all your applications are hosted on the Terminal Server. Only keyboard, mouse, and display information are transmitted over the network.
If you're an IT administrator, Remote Desktop provides you with a rapid response tool: It lets you remotely access a server running Windows 2000 Server or Whistler Server and see messages on the console, administer the computer remotely, or apply headless server control.
Remote Desktop Protocol:
The features provided by Remote Desktop are made available through the Remote Desktop Protocol (RDP). RDP is a presentation protocol that allows a Windows-based terminal (WBT), or other Windows-based clients, to communicate with a Windows-based Terminal Server. RDP is designed to provide remote display and input capabilities over network connections for Windows-based applications running on your Windows XP Professional desktop. RDP works across any TCP/IP connection, including a dial-up connection, local area network (LAN), wide area network (WAN), Integrated Services Digital Network (ISDN), DSL, or Virtual Private Network (VPN).
Remote Desktop Resource Redirection:
When you use Remote Desktop from a Windows XP-based client, or another RDP 5.1-enabled client, many of the client resources are available within the Remote Desktop connection. These resources include:
File system redirection
This makes the local file system available on the remote desktop within a terminal session. The client file system is accessible through the Remote Desktop as if it were a network-shared drive, and no network connectivity, other than the Remote Desktop itself, is required. The client drives appear in Windows Explorer with the designation "on tsclient."
Printer redirection
This routes printing jobs from the Terminal Server to a printer attached to the local computer. When the client logs on to the remote computer, the local printer is detected, and the appropriate printer driver is installed on the remote computer.
Port redirection
This enables applications running within a terminal session to have access to the serial and parallel ports on the client. Port redirection allows these ports to access and manipulate devices such as bar code readers or scanners.
Audio
You can run an audio-enabled application on your remote desktop and hear the audio output from speakers attached to the computer you're working on.
Clipboard
The Remote Desktop and the client computer share a clipboard that allows data to be interchanged.
Remote Desktop Web Connection in Windows XP:
The Remote Desktop Web Connection provides a simple way to connect to your Windows XP Professional remote desktop, even when you don't have the Remote Desktop client software installed on the computer you're currently using (called the client computer). Before you can use the Remote Desktop Web Connection from home or the road, you need to set up Remote Desktop on the remote computer.
Remote Desktop Web Connection isn't installed by default in Windows XP Professional, so you'll need to add it yourself to the remote computer. And to add it, you'll also need to enable Internet Information Services (IIS) on your remote desktop.
Remote Desktop Web Connection means you can work from home or the road, and access all the data and capabilities of your office computer. Bookmark your remote desktop in Internet Explorer and you can get to it quickly whenever you need it.
There are several issues to consider when managing and administering Remote Assistance in the corporate environment or large organization. You can specify an open environment where employees can receive Remote Assistance from outside the corporate firewall. Or you can restrict Remote Assistance via Group Policy and specify various levels of permissions, such as only allowing Remote Assistance from within the corporate firewall.
Issues of Proxy server/Firewall:
Regardless of how you connect to a Remote Desktop server, if either your client or your server is behind a firewall or proxy server, you won't be able to connect unless you open up the necessary port, 3389, to permit the Remote Desktop Connection capability to pass through.
Here server means the computer that's actually serving up the Remote Desktop session. This could be a Windows XP Professional-based computer running Remote Desktop Services, or earlier versions of Microsoft Windows NT/2000 Server running Terminal Services, or even a Windows NT 3.5-based computer running Citrix.
One can easily open up the Remote Desktop port when you're using the Internet Connection Firewall (ICF) included in Windows XP. Heck, you don't even need to remember the port number, but if your network is running some other firewall, you'll need to work with your network administrators to sort out the details for it.
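If you want to verify that port 3389 is actually reachable from a client machine, a quick connect test will tell you; this sketch uses POSIX sockets for brevity, and the address 192.168.0.10 is a placeholder for your own server.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(3389); /* the Remote Desktop port */
        inet_pton(AF_INET, "192.168.0.10", &addr.sin_addr);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }
        if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0)
            printf("port 3389 is reachable\n");
        else
            perror("connect"); /* blocked by a firewall, or nothing listening */
        close(s);
        return 0;
    }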
Remote Assistance:
Remote Assistance runs over the top of Terminal Services technology, which means it needs to use the same port already used by Terminal Services: port 3389. If the person who is being helped is behind a firewall, NAT, or ICS, Remote Assistance will still function as long as the person being helped initiates the session via Windows Messenger. However, as stated above, Remote Assistance will not work in cases where the outbound traffic from port 3389 is blocked.
Using Remote Assistance in a Home Environment:
If you are using a personal firewall or NAT in a home environment, you can use Remote Assistance without any special configurations. However, if you have a corporate-like firewall in a home environment, the same restrictions apply: you would need to open port 3389 in order to use Remote Assistance.
What is a Computer Network? - Tutorial
What is a Computer Network?
A network is any collection of independent computers that communicate with one another over a shared network medium. A computer network is a collection of two or more connected computers. When these computers are joined in a network, people can share files and peripherals such as modems, printers, tape backup drives, or CD-ROM drives. When networks at multiple locations are connected using services available from phone companies, people can send e-mail, share links to the global Internet, or conduct video conferences in real time with other remote users. As companies rely on applications like electronic mail and database management for core business operations, computer networking becomes increasingly important.
Every network includes:
- At least two computers (a server or client workstation).
- Network interface cards (NICs).
- A connection medium, usually a wire or cable, although wireless communication between networked computers and peripherals is also possible.
- Network Operating system software, such as Microsoft Windows NT or 2000, Novell NetWare, Unix and Linux.
Types of Networks:
LANs (Local Area Networks)
LANs are networks usually confined to a geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but often link hundreds of computers used by thousands of people. The development of standard networking protocols and media has resulted in worldwide proliferation of LANs throughout business and educational organizations.
WANs (Wide Area Networks)
Wide area networking combines multiple LANs that are geographically separate. This is accomplished by connecting the different LANs using services such as dedicated leased phone lines, dial-up phone lines (both synchronous and asynchronous), satellite links, and data packet carrier services. Wide area networking can be as simple as a modem and remote access server for employees to dial into, or it can be as complex as hundreds of branch offices globally linked, using special routing protocols and filters to minimize the expense of sending data over vast distances.
Internet
The Internet is a system of linked networks that are worldwide in scope and facilitate data communication services such as remote login, file transfer, electronic mail, the World Wide Web and newsgroups.
With the meteoric rise in demand for connectivity, the Internet has become a communications highway for millions of users. The Internet was initially restricted to military and academic institutions, but now it is a full-fledged conduit for any and all forms of information and commerce. Internet websites now provide personal, educational, political and economic resources to every corner of the planet.
Intranet
With the advancements made in browser-based software for the Internet, many private organizations are implementing intranets. An intranet is a private network utilizing Internet-type tools, but available only within that organization. For large organizations, an intranet provides an easy access mode to corporate information for employees.MANs (Metropolitan area Networks)
This refers to a network of computers within a city.

VPN (Virtual Private Network)
VPN uses a technique known as tunneling to transfer data securely on the Internet to a remote access server on your workplace network. Using a VPN helps you save money by using the public Internet instead of making long-distance phone calls to connect securely with your private network. There are two ways to create a VPN connection: by dialing an Internet service provider (ISP), or by connecting directly to the Internet.

Categories of Network:
Networks can be divided into two main categories:
- Peer-to-peer.
- Server-based.
Peer-to-peer networks are a good choice for small organizations where the users are located in the same general area, security is not an issue, and the organization and the network will have limited growth within the foreseeable future.
The term client/server refers to the concept of sharing the work involved in processing data between a client computer and a more powerful server computer.
The client/server network is the most efficient way to provide:
- Databases and management of applications such as Spreadsheets, Accounting, Communications and Document management.
- Network management.
- Centralized file storage.
Client/server application design also lets the application provider mask the actual location of application function. The user often does not know where a specific operation is executing. The entire function may execute in either the PC or server, or the function may be split between them. This masking of application function locations enables system implementers to upgrade portions of a system over time with a minimum disruption of application operations, while protecting the investment in existing hardware and software.
The OSI Model:
The Open System Interconnection (OSI) reference model has become an international standard and serves as a guide for networking. This model is the best known and most widely used guide for describing networking environments. Vendors design network products based on the specifications of the OSI model. It provides a description of how network hardware and software work together in a layered fashion to make communications possible. It also helps with troubleshooting by providing a frame of reference that describes how components are supposed to function.

There are seven layers to get familiar with: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and the application layer.
- Physical Layer, this layer covers just that: the physical parts of the network, such as wires and cables, their media types and their lengths. This layer also takes note of the electrical signals that transmit data throughout the system.
- Data Link Layer, this layer is where we actually assign meaning to the electrical signals in the network. The layer also determines the size and format of data sent to printers and other devices; these devices are also called nodes in the network. This layer also defines the error detection and correction schemes that ensure data was sent and received correctly.
- Network Layer, this layer provides the definition for the connection of two dissimilar networks.
- Transport Layer, this layer allows data to be broken into smaller packages so that it can be distributed and addressed to other nodes (workstations).
- Session Layer, this layer helps carry information from one node (workstation) to another. A session has to be established before information can be transported to another computer.
- Presentation Layer, this layer is responsible for encoding and decoding data sent to the node.
- Application Layer, this layer allows you to use an application that communicates with, say, the operating system of a server. A good example would be using your web browser to interact with the operating system on a server such as Windows NT, which in turn gets the data you requested.
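To make the top and bottom of the stack a little more concrete, here is a minimal Python sketch (the host name is only an example) that composes an application-layer HTTP request and hands it to a transport-layer TCP socket; the network layer (IP) and the layers below it are handled by the operating system:

import socket

# Application layer: an HTTP request we compose ourselves.
request = b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n"

# Transport layer: a TCP (SOCK_STREAM) connection; IP routing and the
# data link and physical layers below it are the OS's job.
with socket.create_connection(("www.example.com", 80)) as s:
    s.sendall(request)
    reply = s.recv(4096)

# First line of the reply, e.g. "HTTP/1.0 200 OK"
print(reply.split(b"\r\n")[0].decode())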
Network Architectures:
Ethernet
Ethernet is the most popular physical layer LAN technology in use today. Other LAN types include Token Ring, Fast Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) and LocalTalk. Ethernet is popular because it strikes a good balance between speed, cost and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today. The Institute of Electrical and Electronics Engineers (IEEE) defines the Ethernet standard as IEEE Standard 802.3. This standard defines rules for configuring an Ethernet network as well as specifying how elements in an Ethernet network interact with one another. By adhering to the IEEE standard, network equipment and network protocols can communicate efficiently.

Fast Ethernet
For Ethernet networks that need higher transmission speeds, the Fast Ethernet standard (IEEE 802.3u) has been established. This standard raises the Ethernet speed limit from 10 Megabits per second (Mbps) to 100 Mbps with only minimal changes to the existing cable structure. There are three types of Fast Ethernet: 100BASE-TX for use with level 5 UTP cable, 100BASE-FX for use with fiber-optic cable, and 100BASE-T4 which utilizes an extra two wires for use with level 3 UTP cable. The 100BASE-TX standard has become the most popular due to its close compatibility with the 10BASE-T Ethernet standard. For the network manager, the incorporation of Fast Ethernet into an existing configuration presents a host of decisions. Managers must determine the number of users in each site on the network that need the higher throughput, decide which segments of the backbone need to be reconfigured specifically for 100BASE-T and then choose the necessary hardware to connect the 100BASE-T segments with existing 10BASE-T segments. Gigabit Ethernet is a future technology that promises a migration path beyond Fast Ethernet so the next generation of networks will support even higher data transfer speeds.
Token Ring
Token Ring is another form of network configuration. It differs from Ethernet in that all messages are transferred in a unidirectional manner along the ring at all times. Data is transmitted in tokens, which are passed along the ring and viewed by each device. When a device sees a message addressed to it, that device copies the message and then marks that message as being read. As the message makes its way along the ring, it eventually gets back to the sender, who now notes that the message was received by the intended device. The sender can then remove the message and free that token for use by others.

Various PC vendors have been proponents of Token Ring networks at different times, and thus these types of networks have been implemented in many organizations.
FDDI
FDDI (Fiber Distributed Data Interface) is a standard for data transmission on fiber optic lines in a local area network that can extend in range up to 200 km (124 miles). The FDDI protocol is based on the token ring protocol. In addition to being large geographically, an FDDI local area network can support thousands of users.

Protocols:
Network protocols are standards that allow computers to communicate. A protocol defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for handling lost or damaged transmissions or "packets." TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms), IPX (for Novell NetWare), DECnet (for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are the main types of network protocols in use today.

Although each network protocol is different, they all share the same physical cabling. This common method of accessing the physical network allows multiple protocols to peacefully coexist over the network media, and allows the builder of a network to use common hardware for a variety of protocols. This concept is known as "protocol independence."
Some Important Protocols and their job:
Protocol | Acronym | Its Job |
Transmission Control Protocol/Internet Protocol | TCP/IP | The backbone protocol of the Internet. Popular also for intranets using the Internet |
Internetwork Package Exchange/Sequenced Packet Exchange | IPX/SPX | This is a standard protocol for Novell Network Operating System |
NetBIOS Extended User Interface | NetBEUI | This is a Microsoft protocol that doesn't support routing to other networks |
File Transfer Protocol | FTP | Used to send and receive files from a remote host |
Hypertext Transfer Protocol | HTTP | Used on the web to send documents that are encoded in HTML |
Network File System | NFS | Allows network nodes or workstations to access files and drives as if they were their own |
Simple Mail Transfer Protocol | SMTP | Used to send Email over a network |
Terminal Emulation Protocol | Telnet | Used to connect to a host and emulate a terminal that the remote server can recognize |
Introduction to TCP/IP Networks:
TCP/IP-based networks play an increasingly important role in computer networks. Perhaps one reason for their appeal is that they are based on an open specification that is not controlled by any vendor.

What Is TCP/IP?
TCP stands for Transmission Control Protocol and IP stands for Internet Protocol. The term TCP/IP is not limited to these two protocols, however. Frequently, the term TCP/IP is used to refer to a group of protocols related to the TCP and IP protocols, such as the User Datagram Protocol (UDP), File Transfer Protocol (FTP), Terminal Emulation Protocol (TELNET), and so on.

The Origins of TCP/IP
In the late 1960s, DARPA (the Defense Advanced Research Projects Agency) in the United States noticed that there was a rapid proliferation of computers in military communications. Computers, because they can be easily programmed, provide flexibility in achieving network functions that is not available with other types of communications equipment. The computers then used in military communications were manufactured by different vendors and were designed to interoperate with computers from that vendor only. Vendors used proprietary protocols in their communications equipment. The military had a multivendor network but no common protocol to support the heterogeneous equipment from different vendors.

Network Cables and Stuff:
In networks you will commonly find three types of cables used: coaxial cable, fiber optic and twisted pair.

Thick Coaxial Cable
This type of cable is usually yellow in color, is used in what are called thicknets, and has two conductors. This coax can be used in 500-meter lengths. The cable itself is made up of a solid center wire with a braided metal shield and plastic sheathing protecting the rest of the wire.

Thin Coaxial Cable
Just as the thick coaxial cable is used in thicknets, the thin version is used in thinnets. This type of cable is also referred to as RG-58. The cable is really just a cheaper version of the thick cable.

Fiber Optic Cable
As we all know, fiber optics are pretty darn cool and not cheap. This cable is smaller and can carry a vast amount of information quickly and over long distances.

Twisted Pair Cables
These come in two flavors: unshielded and shielded.

Shielded Twisted Pair (STP)
STP is more common in high-speed networks. The biggest difference between UTP and STP is that STP uses a metallic shield wrapping to protect the wires from interference. Something else to note about these cables is that they are rated by category number: the bigger the number, the better the protection from interference. Most networks should use no less than CAT 3, and CAT 5 is most recommended.
Now that you know about cables, we need to look at connectors. This is pretty important, and you will most likely need the RJ-45 connector. This is the cousin of the phone jack connector and looks very similar, with the exception that the RJ-45 is bigger. Most commonly, connectors come in two flavors: BNC (Bayonet Neill-Concelman), used in thinnets, and the RJ-45, used in smaller networks using UTP/STP.
Unshielded Twisted Pair (UTP)
This is the most popular form of cable in networks and the cheapest form you can go with. The UTP has four pairs of wires, all inside plastic sheathing. The biggest reason it is called twisted pair is that the twisting protects the wires from interference from one another. Each wire is only protected with a thin plastic sheath.

Ethernet Cabling
Now, to familiarize you with more on Ethernet and its cabling, we need to look at the 10's. 10Base2 is considered thin Ethernet, thinnet, or thinwire, and uses thin coaxial cable to create a 10 Mbps network. The cable segments in this network can't be over 185 meters in length. These cables connect with the BNC connector. Also, as a note, unused connections must have a terminator, which will be a 50-ohm terminator.

10Base5, this is considered thicknet and uses a heavier coaxial cable arrangement. The good side to the coaxial cable is the high-speed transfer, and cable segments can be up to 500 meters between nodes/workstations. You will typically see the same speed as 10Base2 but larger cable lengths for more versatility.
10BaseT, the “T” stands for twisted, as in UTP (Unshielded Twisted Pair), which it uses for 10 Mbps of transfer. The downside to this is that you can only have cable lengths of 100 meters between nodes/workstations. The good side to this network is that it is easy to set up and cheap! This is why these networks are so common and ideal for small offices or homes.
100BaseT is considered Fast Ethernet and uses twisted pair cabling (CAT 5 UTP in the common 100BASE-TX form) to reach data transfers of 100 Mbps. This system is a little more expensive but remains nearly as popular as 10BaseT and cheaper than most other types of networks. This one, of course, is the cheap, fast version.
10BaseF, this little guy has the advantage of fiber optics, and the F stands for just that. This arrangement is a little more complicated and uses special connectors and NICs along with hubs to create its network. Pretty darn neat and not too cheap on the wallet.
An important part of designing and installing an Ethernet is selecting the appropriate Ethernet medium. There are four major types of media in use today: Thickwire for 10BASE5 networks, thin coax for 10BASE2 networks, unshielded twisted pair (UTP) for 10BASE-T networks and fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL) networks. This wide variety of media reflects the evolution of Ethernet and also points to the technology's flexibility. Thickwire was one of the first cabling systems used in Ethernet but was expensive and difficult to use. This evolved to thin coax, which is easier to work with and less expensive.
Network Topologies:
What is a Network topology?
A network topology is the geometric arrangement of nodes and cable links in a LAN. There are three topologies to think about when you get into networks: the star, the ring, and the bus.
Star, in a star topology each node has a dedicated set of wires connecting it to a central network hub. Since all traffic passes through the hub, the hub becomes a central point for isolating network problems and gathering network statistics.
Ring, a ring topology features a logically closed loop. Data packets travel in a single direction around the ring from one network device to the next. Each network device acts as a repeater, meaning it regenerates the signal.
Bus, in a bus topology each node (computer, server, peripheral, etc.) attaches directly to a common cable. This topology most often serves as the backbone for a network. In some instances, such as in classrooms or labs, a bus will connect small workgroups.
Collisions:
Ethernet is a shared medium, so there are rules for sending packets of data to avoid conflicts and protect data integrity. Nodes determine when the network is available for sending packets. It is possible that two nodes at different locations will attempt to send data at the same time. When both PCs transfer a packet to the network at the same time, a collision results.

Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users on the network, which results in a lot of contention for network bandwidth. This can slow the performance of the network from the user's point of view. Segmenting the network, where a network is divided into different pieces joined together logically with a bridge or switch, is one way of reducing an overcrowded network.
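The rule classic Ethernet actually uses to recover from a collision, which is not spelled out above, is CSMA/CD with truncated binary exponential backoff. The following Python sketch is offered only as an illustration of that retransmission rule, using the slot time and limits of 10 Mbps Ethernet:

import random

SLOT_TIME_US = 51.2   # 10 Mbps Ethernet slot time, in microseconds
MAX_BACKOFF_EXP = 10  # the backoff exponent is capped at 10
MAX_ATTEMPTS = 16     # after 16 collisions the frame is discarded

def backoff_delay(attempt):
    """Microseconds to wait before retransmitting after the n-th collision."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, MAX_BACKOFF_EXP)
    # Wait a random number of slot times in the range [0, 2^k - 1].
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

for attempt in (1, 2, 3):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} us")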
Ethernet Products:
The standards and technology that have just been discussed help define the specific products that network managers use to build Ethernet networks. The following text discusses the key products needed to build an Ethernet LAN.

Transceivers
Transceivers are used to connect nodes to the various Ethernet media. Most computers and network interface cards contain a built-in 10BASE-T or 10BASE2 transceiver, allowing them to be connected directly to Ethernet without requiring an external transceiver. Many Ethernet devices provide an AUI connector to allow the user to connect to any media type via an external transceiver. The AUI connector consists of a 15-pin D-shell type connector, female on the computer side, male on the transceiver side. Thickwire (10BASE5) cables also use transceivers to allow connections.

For Fast Ethernet networks, a new interface called the MII (Media Independent Interface) was developed to offer a flexible way to support 100 Mbps connections. The MII is a popular way to connect 100BASE-FX links to copper-based Fast Ethernet devices.
Network Interface Cards:
Network interface cards, commonly referred to as NICs, are used to connect a PC to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. Different computers have different bus architectures; PCI bus master slots are most commonly found on 486/Pentium PCs, and ISA expansion slots are commonly found on 386 and older PCs. NICs come in three basic varieties: 8-bit, 16-bit, and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable.

Many NIC adapters comply with Plug-n-Play specifications. On these systems, NICs are automatically configured without user intervention, while on non-Plug-n-Play systems, configuration is done manually through a setup program and/or DIP switches.
Cards are available to support almost all networking standards, including the latest Fast Ethernet environment. Fast Ethernet NICs are often 10/100 capable, and will automatically set to the appropriate speed. Full duplex networking is another option, where a dedicated connection to a switch allows a NIC to operate at twice the speed.
Hubs/Repeaters:
Hubs/repeaters are used to connect together two or more Ethernet segments of any media type. In larger designs, signal quality begins to deteriorate as segments exceed their maximum length. Hubs provide the signal amplification required to allow a segment to be extended a greater distance. A hub takes any incoming signal and repeats it out all ports.

Ethernet hubs are necessary in star topologies such as 10BASE-T. A multi-port twisted pair hub allows several point-to-point segments to be joined into one network. One end of the point-to-point link is attached to the hub and the other is attached to the computer. If the hub is attached to a backbone, then all computers at the end of the twisted pair segments can communicate with all the hosts on the backbone. The number and type of hubs in any one collision domain is limited by the Ethernet rules. These repeater rules are discussed in more detail later.
Network Type | Max Nodes Per Segment | Max Distance Per Segment |
10BASE-T | 2 | 100m |
10BASE2 | 30 | 185m |
10BASE5 | 100 | 500m |
10BASE-FL | 2 | 2000m |
Adding Speed:
While repeaters allow LANs to extend beyond normal distance limitations, they still limit the number of nodes that can be supported. Bridges and switches, however, allow LANs to grow significantly larger by virtue of their ability to support full Ethernet segments on each port. Additionally, bridges and switches selectively filter network traffic to only those packets needed on each segment - this significantly increases throughput on each segment and on the overall network. By providing better performance and more flexibility for network topologies, bridges and switches will continue to gain popularity among network managers.

Bridges:
The function of a bridge is to connect separate networks together. Bridges connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Bridges map the Ethernet addresses of the nodes residing on each network segment and allow only necessary traffic to pass through the bridge. When a packet is received by the bridge, the bridge determines the destination and source segments. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, then the packet is "forwarded" to the correct segment. Additionally, bridges do not forward bad or misaligned packets.

Bridges are also called "store-and-forward" devices because they look at the whole Ethernet packet before making filtering or forwarding decisions. Filtering packets and regenerating forwarded packets enables bridging technology to split a network into separate collision domains. This allows for greater distances and more repeaters to be used in the total network design.
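The filter-or-forward decision described above can be sketched in a few lines of Python; the MAC addresses and port numbers here are made-up placeholders, not part of any real product:

# A learning bridge remembers which segment (port) each source
# address was last seen on, then filters or forwards accordingly.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port              # learn the source's segment
    out_port = mac_table.get(dst_mac)
    if out_port is None:
        return "flood to all other segments"  # destination not yet learned
    if out_port == in_port:
        return "filter (drop)"                # both ends on the same segment
    return f"forward to segment {out_port}"

print(handle_frame("aa:aa:aa", "bb:bb:bb", in_port=1))  # flood
print(handle_frame("bb:bb:bb", "aa:aa:aa", in_port=2))  # forward to segment 1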
Ethernet Switches:
Ethernet switches are an expansion of the concept of Ethernet bridging. LAN switches can link four, six, ten or more networks together, and have two basic architectures: cut-through and store-and-forward. In the past, cut-through switches were faster because they examined only the packet destination address before forwarding it on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination.

It takes more time to examine the entire packet, but it allows the switch to catch certain packet errors and keep them from propagating through the network. Both cut-through and store-and-forward switches separate a network into collision domains, allowing network design rules to be extended. Each of the segments attached to an Ethernet switch has a full 10 Mbps of bandwidth shared by fewer users, which results in better performance (as opposed to hubs that only allow bandwidth sharing from a single Ethernet). Newer switches today offer high-speed links, FDDI, Fast Ethernet or ATM. These are used to link switches together or give added bandwidth to high-traffic servers. A network composed of a number of switches linked together via uplinks is termed a "collapsed backbone" network.
Routers:
Routers filter out network traffic by specific protocol rather than by packet address. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Network speed often decreases due to this type of intelligent forwarding. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. However, in more complex networks, overall efficiency is improved by using routers.

What is a network firewall?
A firewall is a system or group of systems that enforces an access control policy between two networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy. If you don't have a good idea of what kind of access you want to allow or to deny, a firewall really won't help you. It's also important to recognize that the firewall's configuration, because it is a mechanism for enforcing policy, imposes its policy on everything behind it. Administrators for firewalls managing the connectivity for a large number of hosts therefore have a heavy responsibility.

Network Design Criteria:
Ethernets and Fast Ethernets have design rules that must be followed in order to function correctly. The maximum number of nodes, the number of repeaters and the maximum segment distances are defined by the electrical and mechanical design properties of each type of Ethernet and Fast Ethernet media.

A network using repeaters, for instance, functions within the timing constraints of Ethernet. Although electrical signals on the Ethernet media travel near the speed of light, it still takes a finite time for the signal to travel from one end of a large Ethernet to another. The Ethernet standard assumes it will take roughly 50 microseconds for a signal to reach its destination.
Ethernet is subject to the "5-4-3" rule of repeater placement: the network can only have five segments connected; it can only use four repeaters; and of the five segments, only three can have users attached to them; the other two must be inter-repeater links.
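As a quick sanity check, the 5-4-3 rule can be written as a small Python predicate; this is just a sketch of the rule as stated above, not part of any standard tooling:

def obeys_5_4_3(segments, repeaters, populated_segments):
    """True if a coax Ethernet design satisfies the 5-4-3 repeater rule."""
    return (segments <= 5 and
            repeaters <= 4 and
            populated_segments <= 3 and
            populated_segments <= segments)

print(obeys_5_4_3(segments=5, repeaters=4, populated_segments=3))  # True
print(obeys_5_4_3(segments=5, repeaters=4, populated_segments=4))  # False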
If the design of the network violates these repeater and placement rules, then timing guidelines will not be met and the sending station will resend that packet. This can lead to lost packets and excessive resent packets, which can slow network performance and create trouble for applications. Fast Ethernet has modified repeater rules, since the minimum packet size takes less time to transmit than regular Ethernet. The length of the network links allows for fewer repeaters. In Fast Ethernet networks, there are two classes of repeaters. Class I repeaters have a latency of 0.7 microseconds or less and are limited to one repeater per network. Class II repeaters have a latency of 0.46 microseconds or less and are limited to two repeaters per network. The following are the distance (diameter) characteristics for these types of Fast Ethernet repeater combinations:
Fast Ethernet | Copper | Fiber |
No Repeaters | 100m | 412m* |
One Class I Repeater | 200m | 272m |
One Class II Repeater | 200m | 272m |
Two Class II Repeaters | 205m | 228m |
* Full Duplex Mode 2 km
When conditions require greater distances or an increase in the number of nodes/repeaters, then a bridge, router or switch can be used to connect multiple networks together. These devices join two or more separate networks, allowing network design criteria to be restored. Switches allow network designers to build large networks that function well. The reduction in costs of bridges and switches reduces the impact of repeater rules on network design.
Each network connected via one of these devices is referred to as a separate collision domain in the overall network.
Types of Servers:
Device Servers
A device server is defined as a specialized, network-based hardware device designed to perform a single or specialized set of server functions. It is characterized by a minimal operating architecture that requires no per-seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition, the device server is a "closed box," delivering extreme ease of installation and minimal maintenance, and it can be managed by the client remotely via a Web browser.

Print servers, terminal servers, remote access servers and network time servers are examples of device servers which are specialized for particular functions. Each of these types of servers has unique configuration attributes in hardware or software that help them to perform best in their particular arena.
Print Servers
Print servers allow printers to be shared by other users on the network. Supporting either parallel and/or serial interfaces, a print server accepts print jobs from any person on the network using supported protocols and manages those jobs on each appropriate printer.

Print servers generally do not contain a large amount of memory; printers simply store information in a queue. When the desired printer becomes available, they allow the host to transmit the data to the appropriate printer port on the server. The print server can then simply queue and print each job in the order in which print requests are received, regardless of protocol used or the size of the job.
Multiport Device Servers
Devices that are attached to a network through a multiport device server can be shared between terminals and hosts at both the local site and throughout the network. A single terminal may be connected to several hosts at the same time (in multiple concurrent sessions), and can switch between them. Multiport device servers are also used to network devices that have only serial outputs. A connection between serial ports on different servers is opened, allowing data to move between the two devices.

Given its natural translation ability, a multi-protocol multiport device server can perform conversions between the protocols it knows, like LAT and TCP/IP. While server bandwidth is not adequate for large file transfers, it can easily handle host-to-host inquiry/response applications, electronic mailbox checking, etc. And it is far more economical than the alternatives of acquiring expensive host software and special-purpose converters. Multiport device and print servers give their users greater flexibility in configuring and managing their networks.
Whether it is moving printers and other peripherals from one network to another, expanding the dimensions of interoperability or preparing for growth, multiport device servers can fulfill your needs, all without major rewiring.
Access Servers
While Ethernet is limited to a geographic area, remote users such as traveling salespeople need access to network-based resources. Remote LAN access, or remote access, is a popular way to provide this connectivity. Access servers use telephone services to link a user or office with an office network. Dial-up remote access solutions such as ISDN or asynchronous dial introduce more flexibility. Dial-up remote access offers both the remote office and the remote user the economy and flexibility of "pay as you go" telephone services. ISDN is a special telephone service that offers three channels, two 64 Kbps "B" channels for user data and a "D" channel for setting up the connection. With ISDN, the B channels can be combined for double bandwidth or separated for different applications or users. With asynchronous remote access, regular telephone lines are combined with modems and remote access servers to allow users and networks to dial anywhere in the world and have data access. Remote access servers provide connection points for both dial-in and dial-out applications on the network to which they are attached. These hybrid devices route and filter protocols and offer other services such as modem pooling and terminal/printer services. For the remote PC user, one can connect from any available telephone jack (RJ11), including those in hotel rooms or on most airplanes.

Network Time Servers
A network time server is a server specialized in the handling of timing information from sources such as satellites or radio broadcasts, and it is capable of providing this timing data to its attached network. Specialized protocols such as NTP or udp/time allow a time server to communicate with other network nodes, ensuring that activities that must be coordinated according to their time of execution are synchronized correctly. GPS satellites are one source of information that can allow global installations to achieve constant timing.

IP Addressing:
An IP (Internet Protocol) address is a unique identifier for a node or host connection on an IP network. An IP address is a 32-bit binary number usually represented as 4 decimal values, each representing 8 bits, in the range 0 to 255 (known as octets), separated by decimal points. This is known as "dotted decimal" notation.

Example: 140.179.220.200
It is sometimes useful to view the values in their binary form.
140 .179 .220 .200
10001100.10110011.11011100.11001000
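You can reproduce that conversion with a couple of lines of Python:

addr = "140.179.220.200"
# Format each octet as an 8-bit binary number, joined with dots.
print(".".join(f"{int(octet):08b}" for octet in addr.split(".")))
# Output: 10001100.10110011.11011100.11001000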
Every IP address consists of two parts, one identifying the network and one identifying the node. The Class of the address and the subnet mask determine which part belongs to the network address and which part belongs to the node address.
Address Classes:
There are 5 different address classes. You can determine which class any IP address is in by examining the first 4 bits of the IP address.

Class A addresses begin with 0xxx, or 1 to 126 decimal.
Class B addresses begin with 10xx, or 128 to 191 decimal.
Class C addresses begin with 110x, or 192 to 223 decimal.
Class D addresses begin with 1110, or 224 to 239 decimal.
Class E addresses begin with 1111, or 240 to 254 decimal.
Addresses beginning with 01111111, or 127 decimal, are reserved for loopback and for internal testing on a local machine. [You can test this: you should always be able to ping 127.0.0.1, which points to yourself] Class D addresses are reserved for multicasting. Class E addresses are reserved for future use. They should not be used for host addresses.
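Since the class is determined entirely by the leading bits (equivalently, by the first octet), a small Python helper can classify any dotted-decimal address; this is just a sketch of the rules listed above:

def address_class(ip):
    """Classify an IPv4 address by its first octet, per the classful rules."""
    first = int(ip.split(".")[0])
    if first == 127:
        return "loopback (reserved)"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("140.179.220.200"))  # B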
Now we can see how the Class determines, by default, which part of the IP address belongs to the network (N) and which part belongs to the node (n).
Class A -- NNNNNNNN.nnnnnnnn.nnnnnnnn.nnnnnnnn
Class B -- NNNNNNNN.NNNNNNNN.nnnnnnnn.nnnnnnnn
Class C -- NNNNNNNN.NNNNNNNN.NNNNNNNN.nnnnnnnn
In the example, 140.179.220.200 is a Class B address so by default the Network part of the address (also known as the Network Address) is defined by the first two octets (140.179.x.x) and the node part is defined by the last 2 octets (x.x.220.200).
In order to specify the network address for a given IP address, the node section is set to all "0"s. In our example, 140.179.0.0 specifies the network address for 140.179.220.200. When the node section is set to all "1"s, it specifies a broadcast that is sent to all hosts on the network. 140.179.255.255 specifies the example broadcast address. Note that this is true regardless of the length of the node section.
Private Subnets:
There are three IP network addresses reserved for private networks. The addresses are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. They can be used by anyone setting up internal IP networks, such as a lab or home LAN behind a NAT or proxy server or a router. It is always safe to use these because routers on the Internet will never forward packets coming from these addresses.

Subnetting an IP network can be done for a variety of reasons, including organization, use of different physical media (such as Ethernet, FDDI, WAN, etc.), preservation of address space, and security. The most common reason is to control network traffic. In an Ethernet network, all nodes on a segment see all the packets transmitted by all the other nodes on that segment. Performance can be adversely affected under heavy traffic loads, due to collisions and the resulting retransmissions. A router is used to connect IP networks to minimize the amount of traffic each segment must receive.
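As a quick check, Python's standard ipaddress module already knows the three reserved private ranges listed above:

import ipaddress

print(ipaddress.ip_address("192.168.1.10").is_private)     # True
print(ipaddress.ip_address("140.179.220.200").is_private)  # False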
Subnet Masking
Applying a subnet mask to an IP address allows you to identify the network and node parts of the address. The network bits are represented by the 1s in the mask, and the node bits are represented by the 0s. Performing a bitwise logical AND operation between the IP address and the subnet mask results in the Network Address or Number.

For example, using our test IP address and the default Class B subnet mask, we get:
10001100.10110011.11110000.11001000 140.179.240.200 Class B IP Address
11111111.11111111.00000000.00000000 255.255.000.000 Default Class B Subnet Mask
10001100.10110011.00000000.00000000 140.179.000.000 Network Address
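The same bitwise AND can be performed in Python, either by hand on the 32-bit integers or with the standard ipaddress module:

import ipaddress

ip = ipaddress.ip_address("140.179.240.200")
mask = ipaddress.ip_address("255.255.0.0")  # default Class B mask

# ANDing the address with the mask yields the network address.
network = ipaddress.ip_address(int(ip) & int(mask))
print(network)  # 140.179.0.0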
Default subnet masks:
Class A - 255.0.0.0 - 11111111.00000000.00000000.00000000
Class B - 255.255.0.0 - 11111111.11111111.00000000.00000000
Class C - 255.255.255.0 - 11111111.11111111.11111111.00000000
CIDR -- Classless InterDomain Routing.
CIDR was invented several years ago to keep the Internet from running out of IP addresses. The "classful" system of allocating IP addresses can be very wasteful; anyone who could reasonably show a need for more than 254 host addresses was given a Class B address block of 65,534 host addresses. Even more wasteful were companies and organizations that were allocated Class A address blocks, which contain over 16 million host addresses! Only a tiny percentage of the allocated Class A and Class B address space has ever been actually assigned to a host computer on the Internet.
People realized that addresses could be conserved if the class system was eliminated. By accurately allocating only the amount of address space that was actually needed, the address space crisis could be avoided for many years. This was first proposed in 1992 as a scheme called Supernetting.
The use of a CIDR-notated address is the same as for a classful address. Classful addresses can easily be written in CIDR notation (Class A = /8, Class B = /16, and Class C = /24).
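Python's ipaddress module speaks CIDR notation directly, which makes those classful equivalences easy to verify:

import ipaddress

net = ipaddress.ip_network("140.179.0.0/16")  # a /16, i.e. a classful Class B
print(net.netmask)            # 255.255.0.0
print(net.num_addresses)      # 65536
print(net.broadcast_address)  # 140.179.255.255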
It is currently almost impossible for an individual or company to be allocated their own IP address blocks. You will simply be told to get them from your ISP. The reason for this is the ever-growing size of the internet routing table. Just 5 years ago, there were less than 5000 network routes in the entire Internet. Today, there are over 90,000. Using CIDR, the biggest ISPs are allocated large chunks of address space (usually with a subnet mask of /19 or even smaller); the ISP's customers (often other, smaller ISPs) are then allocated networks from the big ISP's pool. That way, all the big ISP's customers (and their customers, and so on) are accessible via 1 network route on the Internet.
It is expected that CIDR will keep the Internet happily in IP addresses for the next few years at least. After that, IPv6, with 128-bit addresses, will be needed. Under IPv6, even sloppy address allocation would comfortably allow a billion unique IP addresses for every person on earth.
Examining your network with commands:
Ping

PING is used to check for a response from another computer on the network. It can tell you a great deal of information about the status of the network and the computers you are communicating with.
Ping returns different responses depending on the computer in question. The responses are similar depending on the options used.
Ping uses IP to request a response from the host. It does not use TCP. It takes its name from a submarine sonar search - you send a short sound burst and listen for an echo - a ping - coming back.
In an IP network, `ping' sends a short data burst - a single packet - and listens for a single packet in reply. Since this tests the most basic function of an IP network (delivery of single packet), it's easy to see how you can learn a lot from some `pings'.
To stop ping, type control-c. This terminates the program and prints out a nice summary of the number of packets transmitted, the number received, and the percentage of packets lost, plus the minimum, average, and maximum round-trip times of the packets.
Sample ping session
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=6 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=7 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=8 ttl=255 time=2 ms
64 bytes from 127.0.0.1: icmp_seq=9 ttl=255 time=2 ms
localhost ping statistics
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 2/2/2 ms
meikro$
The Time To Live (TTL) field can be interesting. The main purpose of this is so that a packet doesn't live forever on the network and will eventually die when it is deemed "lost." But for us, it provides additional information. We can use the TTL to determine approximately how many router hops the packet has gone through. In this case it's 255 minus N hops, where N is the TTL of the returning Echo Replies. If the TTL field varies in successive pings, it could indicate that the successive reply packets are going via different routes, which isn't a great thing.
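The arithmetic itself is trivial. Assuming the remote stack starts its TTL at 255 (other systems start at 64 or 128, so treat this as an assumption), a hypothetical reply TTL of 247 would suggest:

initial_ttl = 255    # assumed starting TTL on the remote host's stack
observed_ttl = 247   # hypothetical TTL seen in the returning Echo Reply
print(initial_ttl - observed_ttl, "router hops, approximately")  # 8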
The time field is an indication of the round-trip time for a packet to get to the remote host and back, measured in milliseconds. In general, it's best if round-trip times are under 200 milliseconds. The time it takes a packet to reach its destination is called latency. If you see a large variance in the round-trip times (which is called "jitter"), you are going to see poor performance talking to the host.
NSLOOKUP
NSLOOKUP is an application that facilitates looking up hostnames on the network. It can reveal the IP address of a host or, using the IP address, return the host name.

It is very important when troubleshooting problems on a network that you can verify the components of the networking process. Nslookup allows this by revealing details within the infrastructure.
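The same forward and reverse lookups can be done from Python's socket module; the host name below is only an example, and reverse lookups fail when no PTR record exists:

import socket

# Forward lookup: host name -> IP address.
print(socket.gethostbyname("www.berkeley.edu"))

# Reverse lookup: IP address -> host name (and aliases/addresses).
name, aliases, addresses = socket.gethostbyaddr("127.0.0.1")
print(name)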
NETSTAT
NETSTAT is used to look up the various active connections within a computer. It is helpful to understand what computers or networks you are connected to. This allows you to further investigate problems. One host may be responding well but another may be less responsive.

IPconfig
This is a Microsoft Windows NT/2000 command. It is very useful in determining what could be wrong with a network. When used with the /all switch, this command reveals an enormous amount of troubleshooting information about the system.
Windows 2000 IP Configuration
Host Name . . . . . . . . . . . . : cowder
Primary DNS Suffix . . . . . . . :
Node Type . . . . . . . . . . . . : Broadcast
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . :
WAN (PPP/SLIP) Interface
Physical Address. . . . . . . . . : 00-53-45-00-00-00
DHCP Enabled. . . . . . . . . . . : No
IP Address. . . . . . . . . . . . : 12.90.108.123
Subnet Mask . . . . . . . . . . . : 255.255.255.255
Default Gateway . . . . . . . . . : 12.90.108.125
DNS Servers . . . . . . . . . . . : 12.102.244.2
204.127.129.2
Traceroute
Traceroute on Unix and Linux (or tracert in the Microsoft world) attempts to trace the current network path to a destination. Here is an example of a traceroute run to www.berkeley.edu:

$ traceroute www.berkeley.edu
traceroute to amber.Berkeley.EDU (128.32.25.12), 30 hops max, 40 byte packets
1 sf1-e3.wired.net (206.221.193.1) 3.135 ms 3.021 ms 3.616 ms
2 sf0-e2s2.wired.net (205.227.206.33) 1.829 ms 3.886 ms 2.772 ms
3 paloalto-cr10.bbnplanet.net (131.119.26.105) 5.327 ms 4.597 ms 5.729 ms
4 paloalto-br1.bbnplanet.net (131.119.0.193) 4.842 ms 4.615 ms 3.425 ms
5 sl-sj-2.sprintlink.net (4.0.1.66) 7.488 ms 38.804 ms 7.708 ms
6 144.232.8.81 (144.232.8.81) 6.560 ms 6.631 ms 6.565 ms
7 144.232.4.97 (144.232.4.97) 7.638 ms 7.948 ms 8.129 ms
8 144.228.146.50 (144.228.146.50) 9.504 ms 12.684 ms 16.648 ms
9 f5-0.inr-666-eva.berkeley.edu (198.128.16.21) 9.762 ms 10.611 ms 10.403 ms
10 f0-0.inr-107-eva.Berkeley.EDU (128.32.2.1) 11.478 ms 10.868 ms 9.367 ms
11 f8-0.inr-100-eva.Berkeley.EDU (128.32.235.100) 10.738 ms 11.693 ms 12.520 ms