
3.1.7 Explain why protocols are necessary. (Teacher note: including data integrity, flow control, deadlock, congestion, error checking.)

20 Nov Network Congestion - Traffic Flow - Data Integrity

Protocols are needed in computer networks primarily because networks are made up of devices and software produced by many different companies. The only way to ensure compatibility among everything is to have common documents, i.e. protocol specifications, that stipulate things such as the format of the data to be sent, and the mechanics of how it is to be sent and received.

Protocols are necessary to assure data integrity, manage the flow of data, prevent congestion and deadlock, and supply an agreed-upon way of error checking.

Data Integrity

The "correctness" of information over its entire life-cycle, meaning what is sent is what is received. This is part of what error checking is for (refer to the error checking section below), but the point is that the protocol needs to have ***some*** way of assuring data integrity. (When playing "telephone" as elementary students, there was no good data integrity as the message was passed on by whispers from student to student!)

Flow Control

Protocols dictate the ways servers are able to control the flow of traffic through a network, particularly the speed of transmission. This helps prevent a fast sender from overwhelming a slow receiver.
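As a rough sketch of this idea (with made-up function names, not any real protocol's API), a window-based flow-control rule pauses the sender whenever a receiver-advertised number of packets is still unacknowledged:

```python
# Toy sketch of flow control: the receiver advertises a window size and
# the sender pauses whenever that many packets are still unacknowledged,
# so a fast sender cannot overwhelm a slow receiver.
# (Hypothetical names; real protocols such as TCP do this with sequence
# numbers and ACK segments.)

def send_with_window(packets, window):
    in_flight = []                               # sent but not yet acknowledged
    delivered = []
    for p in packets:
        while len(in_flight) >= window:          # window full: sender must wait
            delivered.append(in_flight.pop(0))   # oldest packet gets ACKed
        in_flight.append(p)                      # now it is safe to send another
    delivered.extend(in_flight)                  # drain the remaining ACKs
    return delivered

print(send_with_window(["p1", "p2", "p3", "p4"], window=2))
# → ['p1', 'p2', 'p3', 'p4']
```

However small the window, every packet still arrives, and in order; the window only limits how far the sender can run ahead of the receiver.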

A good analogy for controlling flow is how traffic is controlled through a city. Some of the rules:

  • stop at stop signs
  • keep to the speed limit
  • stick to your lane
  • emergency vehicles take priority

Congestion

The Internet can be considered as a queue of packets: transmitting nodes are constantly adding packets, and receiving nodes are removing packets from the queue. Now consider a situation where too many packets are present in this queue (or in the Internet, or a part of the Internet), because transmitting nodes are pouring packets in at a higher rate than receiving nodes are removing them. This degrades performance, and such a situation is termed congestion.

Put simply, congestion is when everything in a network slows down due to the amount of traffic going through particular paths. Typical effects include queuing delay, packet loss, or the blocking of new connections.
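The queue picture above can be simulated in a few lines (a toy model with invented parameters, not real router behaviour): when the arrival rate exceeds the service rate, the queue grows until the buffer fills and packets are dropped.

```python
from collections import deque

# Toy model of the queue described above: senders add `arrival_rate`
# packets per tick, the receiving node removes `service_rate` per tick.
# When arrivals outpace removals the queue fills up, producing queuing
# delay and, once the buffer is full, packet loss -- i.e. congestion.

def simulate(arrival_rate, service_rate, ticks, capacity=50):
    queue, dropped = deque(), 0
    for _ in range(ticks):
        for _ in range(arrival_rate):
            if len(queue) < capacity:
                queue.append("pkt")
            else:
                dropped += 1                     # buffer full: packet loss
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()                      # receiver removes packets
    return len(queue), dropped

print(simulate(arrival_rate=5, service_rate=3, ticks=40))  # congested: backlog and drops
print(simulate(arrival_rate=3, service_rate=5, ticks=40))  # receiver keeps up: no backlog
```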

What Can Cause Congestion?

  • Slow router CPUs
  • Lines with low bandwidth

How Can Congestion Be Fixed?

  • Re-direct network traffic onto alternative routes which are not the shortest, but which are relatively uncongested
  • Install lines with higher bandwidth

Traffic analogy: re-routing truck/lorry traffic around a city on a "ring road" is often done to relieve inner-city congestion.


Road network analogy: "gridlock" refers to a situation in which there is such a high level of traffic congestion that no car can move; in a network, the equivalent situation is deadlock.

Error Checking

The protocol will dictate the use of some sort of error-checking algorithm to help assure that what was sent is what was received. Common error-checking algorithms include parity checking and checksums.

Parity Checking

Parity checking is a system in which the number of binary 0s (or the number of binary 1s) in a message ("message", in the case of network activity, meaning a packet) is calculated before the message is sent and checked again after it is received. That count should come out the same way if no errors occurred during the transmission; if it does not, there was at least one error, so re-transmission is requested.

In the case where a particular protocol uses even-0s parity checking, the number of 0s in a packet is counted up, and if that number is odd, a 0 is added as the "parity bit" to make the total number of 0s even; if the number of 0s is already even, it is kept even by adding a 1 as the parity bit. When the packet arrives at its destination, the 0s are counted again: if the total is still even, no error is assumed to have occurred; if it is odd, an error must have happened during transmission, and re-transmission is demanded by the protocol.
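A minimal sketch of the even-0s scheme just described (the function names are made up; many real links count 1s instead, but the logic is identical):

```python
# Even-0s parity, as described above: append a parity bit so that the
# total number of 0s in the packet is always even. Any single flipped
# bit changes the 0-count by one, making it odd and exposing the error.

def add_parity_bit(bits):
    """Append a parity bit so the total number of 0s is even."""
    zeros = bits.count("0")
    return bits + ("0" if zeros % 2 == 1 else "1")

def check_parity(bits):
    """True if the number of 0s is still even, i.e. no single-bit error."""
    return bits.count("0") % 2 == 0

packet = add_parity_bit("01000001")   # 'A' = 0100 0001 has six 0s -> append a 1
assert check_parity(packet)           # arrives intact: 0-count still even

corrupted = "1" + packet[1:]          # one bit flipped in transit
assert not check_parity(corrupted)    # odd 0-count -> request re-transmission
```

Note that two flipped bits cancel out and go undetected, which is exactly the weakness the checksum section below addresses.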

Check Sums & Weighted Check Sums

Parity checking is not very sophisticated; for instance, what happens if two bits get flipped? An alternative approach is to use checksums. A checksum is a value that is calculated using a specific formula before and after transmission. As with parity checking, that number should be exactly the same if the message was unaltered during transmission. A simple example: add up the ASCII decimal equivalents of all the characters in the message, take the result modulo 255, and see if it is the same after transmission.

Check Sum Example:

ABC: that's 65 + 66 + 67 = 198, and 198 % 255 = 198. So what is sent is ABC198.

If received without an error, the re-calculation is:

ABC: 65 + 66 + 67 = 198, and 198 % 255 = 198. (Check! It's the same: 198 == 198, so everything is assumed to have been transferred correctly.)

But in the case of it being sent with an error occurring, say the last 1 of the C is changed to a 0:

0100 0001 0100 0010 0100 0011 -----------> 0100 0001 0100 0010 0100 0010

So what is received is ABB198, and the checksum calculation goes:

ABB: 65 + 66 + 66 = 197, and 197 % 255 = 197. ERROR (because 197 != 198, the checksum calculated before sending): RE-TRANSMIT!
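The worked example above, expressed as code (the helper name is invented for illustration):

```python
# Simple checksum as in the example above: sum the ASCII values of the
# characters, modulo 255, and compare the value computed before and
# after transmission.

def checksum(message):
    return sum(ord(c) for c in message) % 255

sent = "ABC"
print(checksum(sent))        # 65 + 66 + 67 = 198

received = "ABB"             # last bit of 'C' flipped in transit -> 'B'
print(checksum(received))    # 197 != 198, so request re-transmission
```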

Weighted Check Sum:

The problem with the checksum is that two letters could be transposed and the sum would be the same: for example, ABC would yield 198, but so would BAC. Each letter is therefore weighted by multiplying its ASCII value by the number which represents its place in the message.

ABC: that's (1 * 65) + (2 * 66) + (3 * 67) = 398, and 398 % 255 = 143

BAC: that's (1 * 66) + (2 * 65) + (3 * 67) = 397, and 397 % 255 = 142, and 142 != 143, indicating the error in transmission.
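The position-weighted variant, reproducing the 143 vs 142 result above (again with an invented helper name):

```python
# Weighted checksum: each character's ASCII value is multiplied by its
# position (1-based) before summing, so transposed letters no longer
# produce the same value.

def weighted_checksum(message):
    return sum(i * ord(c) for i, c in enumerate(message, start=1)) % 255

print(weighted_checksum("ABC"))  # (1*65 + 2*66 + 3*67) % 255 = 143
print(weighted_checksum("BAC"))  # (1*66 + 2*65 + 3*67) % 255 = 142
```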

Question: what could cause bits to get flipped, or other corruption of data in transmission? One example is magnetism or electromagnetic interference affecting the wired/wireless signal.

20 Nov Wi-Fi Security

In some places you can be held liable for crimes committed over your Wi-Fi connection.

  • WEP (Wired Equivalent Privacy): not secure; it can be broken easily with downloaded software. Incredibly weak: it can be hacked in minutes with brute force.
  • WPA: introduced in 2003, with a key that constantly changes, but it can still be hacked. Much longer encryption keys, and the keys keep changing.
  • WPA2-PSK (Pre-Shared Key): now mandatory for new Wi-Fi devices. Hackers can crack weak pre-shared keys using brute-force cracking tools, so make your pre-shared key over 25 characters long and make it random.
  • WPA-Enterprise

20 Nov Class Activity

1. Create a Browser Functions page (maybe already created): describe key functions of the web browser, including how caching works.
2. Create a Network Traffic page/section: add in a section describing congestion, deadlock, data integrity and flow control.
3. Using an online compression tool, compress an image, for example using https://tinyjpg.com/
4. Create a Security / Cyber Crimes page: give examples of a Man-in-the-Middle attack, Denial of Service, Distributed Denial of Service, and Brute Force (to break encryption). Describe the difference between these 4 cyber crimes.
5. Create a table for the presentation listing advantages and disadvantages of Wi-Fi in comparison to a wired network.