3.1.7 Explain why protocols are necessary. Teacher Note: Including data integrity, flow control, deadlock, congestion, error checking.
Protocols are needed in computer networks primarily because networks are made up of devices and software made by many different companies. The only way to ensure compatibility among everything is to have common documents, i.e. protocol specifications, that stipulate things such as the format of the data to be sent and the mechanics of how it is sent and received.
Protocols are necessary to assure data integrity, manage the flow of data, prevent congestion and deadlock, and supply an agreed-upon way of error checking.
Data Integrity: the "correctness" of information over its entire life-cycle, meaning what is sent is what is received. In fact, this is part of what error checking is for. Refer to the error checking section below - but the point is that the protocol needs to have ***some*** way of assuring data integrity. (When playing "telephone" as elementary students, there was not good data integrity in the passing on of the message by whispers from student to student!)
Flow Control: protocols dictate the ways servers are able to control the flow of traffic through a network, particularly the speed of transmission. This helps prevent a fast sender from overwhelming a slow receiver.
A good analogy here for controlling flow is how traffic is controlled through a city: traffic lights, speed limits, and one-way streets all regulate the rate at which vehicles move.
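One common way protocols implement flow control is a "sliding window": the sender is only allowed a fixed number of unacknowledged packets in flight at once, so it can never run ahead of the receiver. The sketch below is a toy illustration of that idea, not any real protocol's code; all names are invented for the example.

```python
from collections import deque

def send_with_window(packets, window):
    """Toy sliding-window sender: at most `window` packets may be
    in flight (sent but not yet acknowledged) at any moment."""
    in_flight = deque()
    delivered = []
    i = 0
    while i < len(packets) or in_flight:
        # Sender transmits while the window has room.
        while i < len(packets) and len(in_flight) < window:
            in_flight.append(packets[i])
            i += 1
        # Receiver processes one packet and acknowledges it,
        # which frees a slot in the sender's window.
        delivered.append(in_flight.popleft())
    return delivered

print(send_with_window(["p1", "p2", "p3", "p4"], window=2))
# ['p1', 'p2', 'p3', 'p4'] - everything arrives, but never more
# than 2 packets were outstanding at once
```

A window of 1 is the simplest case ("stop-and-wait"): send one packet, wait for its acknowledgement, send the next.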
Congestion: the Internet can be thought of as a queue of packets, with transmitting nodes constantly adding packets and receiving nodes removing them. Consider a situation where too many packets are present in this queue (in the Internet, or a part of it), because transmitting nodes are pouring packets in at a higher rate than receiving nodes are removing them. This degrades performance, and such a situation is termed congestion.
Congestion is when everything in a network slows down due to the amount of traffic going through particular paths. Typical effects include queuing delay, packet loss, or the blocking of new connections.
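The queue description above can be made concrete with a toy simulation (the numbers and function name here are invented for illustration): each time step, some packets arrive at a router's queue but only a fixed number can be forwarded. When arrivals outpace departures, the queue grows (queuing delay) until the buffer fills, and further arrivals are dropped (packet loss).

```python
def simulate(steps, arrivals, departures, buffer_size):
    """Toy router queue: `arrivals` packets join per step, at most
    `departures` leave per step, and the queue holds `buffer_size`."""
    queue_len, dropped = 0, 0
    for _ in range(steps):
        space = buffer_size - queue_len
        queue_len += min(arrivals, space)     # packets that fit are queued
        dropped += max(arrivals - space, 0)   # the rest are lost
        queue_len = max(queue_len - departures, 0)  # router forwards some
    return queue_len, dropped

# Arrivals (5/step) exceed departures (3/step): the queue fills,
# then packets start being dropped every step.
print(simulate(steps=10, arrivals=5, departures=3, buffer_size=12))
```

With arrivals below the departure rate, the queue stays near empty and nothing is dropped; congestion is precisely the regime where that stops being true.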
What Can Cause Congestion?
Traffic analogy: re-routing of truck/lorry traffic around a city on a "ring road" is often done to relieve inner-city congestion.
Road network analogy: "gridlock" in the automotive traffic analogy refers to a situation in which there is such a high level of traffic congestion that no car can move.
Error Checking: the protocol will dictate the use of some sort of error checking algorithm to help assure that what was sent is what was received. Common error checking algorithms include parity checking and check sums.
Parity checking is a system in which the number of binary 0s (or the number of binary 1s) in a message ("message", in the case of network activity, being a packet) is calculated before the message is sent and after it is received. That number's parity should be exactly the same if no errors occurred during the transmission. If it is not the same, there was at least one error, so re-transmission is requested.
In the case where a particular protocol uses Even Number of Zeros Parity Checking, the number of 0s in a packet is counted up, and if that number is odd, a 0 is added as the "parity bit" to make the total number of 0s even. If the number of 0s is already even, then it is kept even by adding a 1 as the parity bit. When the packet arrives at its destination, the number of 0s is added up again; if it is still even, then no error is assumed to have occurred, but if it is odd, then an error must have happened during transmission, and re-transmission is demanded by the protocol.
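The even-number-of-zeros scheme described above can be sketched in a few lines. (This follows the convention in these notes; real links more often use even-1s parity per byte, but the mechanism is identical. The function names are just for illustration.)

```python
def add_parity(bits: str) -> str:
    """Append a parity bit so the total number of 0s is even."""
    zeros = bits.count("0")
    # An odd 0-count needs one more 0; appending a 1 leaves it unchanged.
    parity_bit = "0" if zeros % 2 == 1 else "1"
    return bits + parity_bit

def check_parity(received: str) -> bool:
    """True if the received bits (parity bit included) still have an
    even number of 0s, i.e. no single-bit error is detected."""
    return received.count("0") % 2 == 0

sent = add_parity("01000011")        # ASCII 'C' has five 0s (odd), so a 0 is appended
print(check_parity(sent))            # True: arrived intact

# Flip one bit in transit and the 0-count's parity changes:
corrupted = sent[:7] + ("0" if sent[7] == "1" else "1") + sent[8:]
print(check_parity(corrupted))       # False: re-transmission requested
```

Note that flipping *two* bits leaves the parity unchanged, which is exactly the weakness the next section addresses.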
Check Sums & Weighted Check Sums
Parity checking is not very sophisticated: if two bits get flipped, the parity is unchanged and the error goes undetected. An alternative approach is to use check sums. A check sum is a value that is calculated using a specific formula before and after transmission. As with parity checking, that number should be exactly the same if the message was un-altered during transmission. A simple example is as follows: add up the ASCII decimal equivalents of all the characters in the message, take the result modulo 255 (so the check sum fits in a single byte), and see if it is the same after transmission.
Check Sum Example:
ABC: that's 65 + 66 + 67 = 198, and 198 % 255 = 198. So what is sent is ABC198
If received without an error, the recalculation is:
ABC: 65 + 66 + 67 = 198, and 198 % 255 = 198 (Check! It's the same; 198 == 198, so everything is assumed to have been transferred correctly.)
But in the case of it being sent with an error occurring, say the last 1 of the C is changed to a 0:
0100 0001 0100 0010 0100 0011 -----------> 0100 0001 0100 0010 0100 0010
So what is received is ABB198, and the check sum calculation goes:
ABB: 65 + 66 + 66 = 197, and 197 % 255 = 197 - ERROR, ERROR (because 197 != 198, the check sum calculated before sending) RE-TRANSMIT!
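The worked example above reduces to one line of code. This is just the simple sum-mod-255 scheme from these notes, not a production checksum; the function name is illustrative.

```python
def checksum(message: str) -> int:
    """Sum the ASCII values of the message, modulo 255."""
    return sum(ord(ch) for ch in message) % 255

print(checksum("ABC"))   # 198 -> sender transmits "ABC" plus 198
print(checksum("ABB"))   # 197: the flipped bit turned C into B,
                         # 197 != 198, so re-transmission is requested
```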
Weighted Check Sum:
The problem with this check sum is that two letters could be transposed and the sum would be the same: for example, ABC yields 198, but so does BAC. In a weighted check sum, each letter is therefore weighted by multiplying its ASCII value by the number representing its position in the message.
ABC: that's (1 × 65) + (2 × 66) + (3 × 67) = 398, and 398 % 255 = 143
BAC: that's (1 × 66) + (2 × 65) + (3 × 67) = 397, and 397 % 255 = 142; 142 != 143, indicating the error in transmission.
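The weighting is a one-line change to the simple check sum: multiply each ASCII value by its 1-based position before summing. Again a sketch in the spirit of these notes, with an illustrative name.

```python
def weighted_checksum(message: str) -> int:
    """Each character's ASCII value is multiplied by its 1-based
    position in the message, then summed modulo 255."""
    return sum(i * ord(ch) for i, ch in enumerate(message, start=1)) % 255

print(weighted_checksum("ABC"))  # 143
print(weighted_checksum("BAC"))  # 142: unlike the plain check sum,
                                 # the transposition is now detected
```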
Question: What could cause bits to get flipped, or other corruption of data in transmission? One example is magnetism or electromagnetic interference affecting the wired or wireless signal.
(Aside: in some places you can be held liable for crimes committed over your Wi-Fi connection.)