2014-09-05

Course Code: BCS-052
Course Title: Network Programming and Administration
Assignment Number: BCA(V)-052/Assign/14-15
Maximum Marks: 100
Weightage: 25%
Last Dates for Submission: 15th October, 2014 (For July 2014 Session)
                           15th April, 2015 (For January 2015 Session)

This assignment has 4 questions of 80 marks. The remaining 20 marks are for viva voce.

Question 1:
(a) What is IPv6? Explain its need and its important features that are not available in IPv4.

Solution:

IPv6 (Internet Protocol Version 6), also called IPng (Internet Protocol next generation), is the newest version of the Internet Protocol (IP), standardized in the IETF to replace the current version, IPv4 (Internet Protocol Version 4).

IPv6 is the successor to IPv4. It was designed as an evolutionary upgrade to the Internet Protocol and will, in fact, coexist with the older IPv4 for some time. IPv6 is designed to allow the Internet to grow steadily, both in the number of hosts connected and in the total amount of data traffic transmitted.

Important features that are not available in IPv4


IPv6 is often referred to as the "next generation" Internet standard and has been under development since the mid-1990s. IPv6 was born out of concern that the demand for IP addresses would exceed the available supply.

While the larger pool of addresses is the most often discussed benefit of IPv6, there are other important technological changes in IPv6 that improve the IP protocol:

- No more NAT (Network Address Translation)

- Auto-configuration

- No more private address collisions

- Better multicast routing

- Simpler header format

- Simplified, more efficient routing

- True quality of service (QoS), also called “flow labeling”

- Built-in authentication and privacy support

- Flexible options and extensions

- Easier administration (say good-bye to DHCP)
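The most visible of these changes is the address itself: IPv6 addresses are 128 bits long instead of IPv4's 32 bits and are written in colon-separated hexadecimal. A minimal C sketch (not part of the assignment's required code; the function name is invented for illustration) using the standard inet_pton()/inet_ntop() calls to convert an IPv6 address between text and its 128-bit binary form:

```c
#include <arpa/inet.h>
#include <string.h>

/* Convert an IPv6 address string to its 128-bit binary form and back
   to canonical text. Returns 1 on success, 0 on failure; 'out' must
   have room for INET6_ADDRSTRLEN bytes. */
int roundtrip_ipv6(const char *text, char *out)
{
    struct in6_addr addr;   /* 16 bytes: four times the size of an IPv4 address */
    if (inet_pton(AF_INET6, text, &addr) != 1)
        return 0;
    return inet_ntop(AF_INET6, &addr, out, INET6_ADDRSTRLEN) != NULL;
}
```

Note how inet_ntop() emits the compressed "::" form, one of IPv6's notational conveniences.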

(b) Compare the sliding window protocol of the data link layer and the transport layer. Why is flow control used at both layers? Justify your answer.

Solution:
Sliding Window protocol

Frames carry sequence numbers 0 to 2^n − 1 (an n-bit field).

At any moment, the sender maintains a list of sequence numbers it is permitted to send; these fall within the sending window. They cover frames sent but not yet acknowledged, and frames not yet sent.

When a new packet comes in from the network layer to be sent, it is given the next highest sequence number, and the upper edge of the window is advanced by 1.

When an acknowledgement comes in, the lower edge of the window is advanced by 1.

The receiver has a receiving window – the frames it is permitted to accept.

Example: sliding window of size 1, with sequence numbers 0 to 7.

(a) At the start, the receiver waits for frame 0.

(b) The sender sends frame 0.

(c) The receiver receives frame 0 and now waits for frame 1.

(d) The sender got the ack for frame 0, but hasn't got frame 1 from its network layer yet.

A larger window makes the data link layer more complex, as it has more freedom about the order in which it sends and receives frames.

The sender may have n unacknowledged frames outstanding at any time (window size n), and needs n buffers to hold them all for possible retransmission.

If the window grows to its maximum size, the sending data link layer (DA) must shut off its network layer (NA). This is all hidden from the receiving network layer (NB), which still receives packets in exactly the same order.

The sender's window may grow as it receives more frames to send while earlier ones are still unacknowledged. It starts with nothing to send; then NA gives it frames to send. Later, the window may shrink as frames are acknowledged and NA has no more.

The receiver's window is of constant size.

A receiver window of size 1 means frames are accepted only in order. A size of n means frames can be received out of order (e.g., later frames arrive after an earlier frame is lost), and they must then be buffered before delivery to NB (frames must be delivered to NB in order).

e.g. DB has buffers to receive frames 0..7. It receives frames 1..7 in varying orders but is still waiting on frame 0, so it cannot deliver any frames to NB yet. Frame 0 was lost and re-sent; eventually DB gets frame 0, can deliver all of 0..7 to NB, and can re-use those buffer slots.

e.g. consider frames numbered 0..7 where DB has only 2 buffers, and the sliding window is currently over frames 4 and 5. If DB receives frame 4, it can deliver it to NB and move the window to 5,6. If it receives frame 5 first, it must wait for frame 4, then deliver both and advance the window to 6,7.

Flow control is used at both layers because they operate at different scopes. At the data link layer, it stops a fast sender from overrunning a slow receiver across a single link. At the transport layer, it stops a fast end host from overrunning the destination host end to end: the path crosses many links and routers with their own buffering, so per-link flow control alone cannot protect the final receiving process or match the speeds of the two end systems.
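The sender-side window arithmetic described above can be sketched in C. This is an illustrative Go-Back-N model (the struct and function names are invented for this sketch, not from any standard API): sequence numbers are taken modulo 8, the sender refuses to send when the window is full, and a cumulative acknowledgement advances the lower edge.

```c
#define SEQ_MOD 8   /* 3-bit sequence numbers: 0..7 */

/* Go-Back-N sender state: [base, next) is the set of outstanding frames. */
struct gbn_sender {
    int base;   /* oldest unacknowledged frame (lower edge of the window) */
    int next;   /* next sequence number to assign (upper edge) */
    int win;    /* maximum number of outstanding frames */
};

/* Number of frames sent but not yet acknowledged. */
int gbn_outstanding(const struct gbn_sender *s)
{
    return (s->next - s->base + SEQ_MOD) % SEQ_MOD;
}

/* Send the next frame if the window is not full. Returns its sequence
   number, or -1 if the sender must wait for an acknowledgement. */
int gbn_send(struct gbn_sender *s)
{
    if (gbn_outstanding(s) >= s->win)
        return -1;                      /* window full: shut off NA */
    int seq = s->next;
    s->next = (s->next + 1) % SEQ_MOD;  /* advance the upper edge by 1 */
    return seq;
}

/* Cumulative ACK: frames up to and including 'ack' are acknowledged,
   so the lower edge of the window advances past them. */
void gbn_ack(struct gbn_sender *s, int ack)
{
    s->base = (ack + 1) % SEQ_MOD;
}
```

With an n-bit sequence number, Go-Back-N requires win <= 2^n − 1 so that an old acknowledgement can never be mistaken for a new one.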

Data Link Layer

The data link layer is the second layer in the OSI (open systems interconnection) seven-layer reference model. It responds to service requests from the network layer above it and issues service requests to the physical layer below it.

The data link layer is responsible for encoding bits into frames prior to transmission and then decoding the frames back into bits at the destination. Bits are the most basic unit of information in computing and communications; frames are the data link layer's unit of transport, each carrying a packet handed down from the network layer.

The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards. It provides reliable data transfer by transmitting packets with the necessary synchronization, error control and flow control.

The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the shared medium and obtain permission to transmit; the latter controls frame synchronization, flow control and error checking.

The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Among the most popular technologies and protocols generally associated with this layer are Ethernet, Token Ring, FDDI (fiber distributed data interface), ATM (asynchronous transfer mode), SLIP (serial line Internet protocol), PPP (point-to-point protocol), HDLC (high level data link control) and ADCCP (advanced data communication control procedures).

The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer. For example, NICs typically implement a specific data link layer technology, so they are often called Ethernet cards, Token Ring cards, etc. There are also several types of network interconnection devices that are said to operate at the data link layer in whole or in part, because they make decisions about what to do with data they receive by looking at data link layer packets. These devices include most bridges and switches, although switches also encompass functions performed by the network layer.

Data link layer processing is faster than network layer processing because less analysis of the packet is required.

(c) What are the corresponding protocols of the following TCP/IP protocols in the OSI Model? Compare them.

DNS

FTP

TFTP

Solution:

DNS, FTP, and TFTP are all application-layer protocols of the TCP/IP suite; in the OSI model they correspond to the top layers (application, presentation, and session). They differ in transport: DNS normally uses UDP port 53, FTP uses TCP ports 21 (control) and 20 (data), and TFTP uses UDP port 69.

DNS

The DNS translates Internet domain and host names to IP addresses. DNS automatically converts the names we type in our Web browser address bar to the IP addresses of the Web servers hosting those sites.

DNS implements a distributed database to store this name and address information for all public hosts on the Internet. The database works best when addresses are relatively stable (statically rather than dynamically assigned).
FTP

File Transfer Protocol (FTP) is a standard Internet protocol for transmitting files between computers on the Internet. Like the Hypertext Transfer Protocol (HTTP), which transfers displayable Web pages and related files, and the Simple Mail Transfer Protocol (SMTP), which transfers e-mail, FTP is an application protocol that uses the Internet’s TCP/IP protocols. FTP is commonly used to transfer Web page files from their creator to the computer that acts as their server for everyone on the Internet. It’s also commonly used to download programs and other files to your computer from other servers.
TFTP

Trivial File Transfer Protocol (TFTP) is an Internet software utility for transferring files that is simpler to use than the File Transfer Protocol (FTP) but less capable. TFTP runs over UDP and provides no authentication, so it is typically used only on trusted local networks, for example to boot diskless machines or load firmware onto routers.
(d) Why do LANs tend to use broadcast networks? Why not use networks consisting of multiplexers and switches?
Solution:

The computers within a local area network (LAN) are usually separated by a small distance (typically under 100 m), so high-speed, efficient communication is possible over a shared broadcast medium. The cost of the medium is minimal; the total cost is dominated by the network interface cards in each computer. Additionally, LAN users are generally members of the same organization and usually trust one another, so broadcasting does not pose a significant security or reliability risk.
Why not networks consisting of multiplexers and switches?

The main reason for avoiding a multiplexer-and-switch approach to LANs is that it requires a central, costly "box". The advent of application-specific integrated circuits (ASICs) has since reduced the cost of switching hardware, making switch-based LANs feasible, and in some settings the dominant approach.

Question 2:
(a) Why would an application use UDP instead of TCP? Also, explain how TCP handles urgent data.

Solution:

UDP is good for sending messages from one system to another when the order isn't important and you don't need every message to reach the other machine. Usually, though, TCP is a better solution: it saves you having to write code to ensure that messages reach the desired destination, or to ensure message ordering. Keep in mind that every additional line of code you add to your project is another line that could contain a potentially expensive bug.

If you find that TCP is too slow for your needs, you may be able to get better performance with UDP, so long as you are willing to sacrifice message order and/or reliability.

UDP must be used to multicast messages to more than one other machine at the same time. With TCP, an application would have to open separate connections to each of the destination machines and send the message once to each target machine, which limits the application to communicating only with machines it already knows about.
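UDP's connectionless style can be sketched in a few lines of C: two sockets exchange a single datagram over the loopback interface with no connection setup or teardown. The helper name and the self-addressed echo are contrived for illustration only.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one datagram from 'tx' to 'rx' over the loopback interface and
   read it back: no connection setup, no teardown, each message stands
   alone. Returns the number of bytes received, or -1 on error. */
int udp_echo_once(const char *msg, char *out, size_t cap)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    socklen_t alen = sizeof addr;
    ssize_t n = -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                          /* kernel picks a free port */

    if (rx >= 0 && tx >= 0 &&
        bind(rx, (struct sockaddr *)&addr, sizeof addr) == 0 &&
        getsockname(rx, (struct sockaddr *)&addr, &alen) == 0 && /* learn port */
        sendto(tx, msg, strlen(msg), 0,
               (struct sockaddr *)&addr, sizeof addr) == (ssize_t)strlen(msg))
        n = recvfrom(rx, out, cap - 1, 0, NULL, NULL);

    close(rx);
    close(tx);
    if (n < 0)
        return -1;
    out[n] = '\0';
    return (int)n;
}
```

Note what is missing compared with TCP: no connect(), no acknowledgements, and if the datagram were dropped the recvfrom() would simply block forever.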

It should be obvious that if a TCP connection is used for multiple request-reply exchanges, the cost of the connection's establishment and teardown is amortized across all the requests and replies; this is normally a better design than using a new connection for each request-reply. Nevertheless, some applications use a new TCP connection for each request-reply (e.g., older versions of HTTP), and in others the client and server exchange one request-reply (e.g., the DNS) and then might not talk to each other for hours or days.

We now list the features of TCP that are not provided by UDP, which means that an application must provide these features itself, if they are necessary to the application. We use the qualifier “necessary” because not all features are needed by all applications. For example, dropped segments might not need to be retransmitted for a real-time audio application, if the receiver can interpolate the missing data. Also, for simple request-reply transactions, windowed flow control might not be needed if the two ends agree ahead of time on the size of the largest request and reply.

Positive acknowledgments, retransmission of lost packets, duplicate detection, and sequencing of packets reordered by the network: TCP acknowledges all data, allowing lost packets to be detected. Implementing these features requires that every TCP data segment contain a sequence number that can then be acknowledged. It also requires that TCP estimate a retransmission timeout value for the connection and that this value be updated continually as network traffic between the two end systems changes.

Windowed flow control—A receiving TCP tells the sender how much buffer space it has allocated for receiving data, and the sender cannot exceed this. That is, the amount of unacknowledged data at the sender can never exceed the receiver’s advertised window.

Slow start and congestion avoidance: a form of flow control imposed by the sender to determine the current network capacity and to handle periods of congestion. All current TCPs must support these two features, and we know from experience (before these algorithms were implemented in the late 1980s) that protocols that do not "back off" in the face of congestion just make the congestion worse (e.g., [Jacobson 1988]).

How TCP handles urgent data

When an interactive user hits the DEL or CTRL-C key to break off a remote computation that has already begun, the sending application puts some control information in the data stream and gives it to TCP along with the URGENT flag. This event causes TCP to stop accumulating data and transmit everything it has for that connection immediately.

The receiving application is interrupted so it can stop whatever it was doing so that it can read the data stream to find the urgent data.
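In the segment itself, urgent data is signalled by two header fields: the URG flag bit and the 16-bit urgent pointer, which locates the urgent bytes as an offset from the segment's sequence number. A sending application normally triggers this through the sockets API with send(fd, buf, len, MSG_OOB). The sketch below only shows which header bits change; the TH_* spellings follow common BSD convention, but the helper itself is invented for illustration.

```c
#include <stdint.h>

/* TCP flag bits, as they appear in the header's flags byte. */
enum { TH_FIN = 0x01, TH_SYN = 0x02, TH_RST = 0x04,
       TH_PSH = 0x08, TH_ACK = 0x10, TH_URG = 0x20 };

/* Mark a segment as carrying urgent data: set the URG flag and point
   the 16-bit urgent pointer past the urgent bytes, measured as an
   offset from the segment's sequence number. (Illustrative helper.) */
void mark_urgent(uint8_t *flags, uint16_t *urg_ptr, uint16_t urgent_len)
{
    *flags |= TH_URG;       /* other flags, e.g. ACK, are left intact */
    *urg_ptr = urgent_len;
}
```

On the receiving side, the application can ask to be notified of urgent data (SIGURG) and read it with recv(fd, buf, len, MSG_OOB).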

(b) Write a connection-oriented client and server program in C language on the UNIX platform, where the client program interacts with the server as given below:

i. The client begins by sending a request; the server sends back a confirmation and its clock time to the client.

ii. The client sends a number and the server replies with the square of that number to the client.

(c) What are the special IP addresses? Give the significance of these addresses.

Solution:

There are several IP addresses that are special in one way or another. These addresses are for special purposes or are to be put to special use.

Addresses significant to every IP subnet:

- Network address: the host part is all zeros; it identifies the subnet itself and cannot be assigned to any host.

- Broadcast address: the host part is all ones; a datagram sent to it reaches every host on the subnet.

Addresses significant to individual hosts:

- Loopback address: lets a host send packets to itself, for testing the local protocol stack.

Special addresses of global significance:

- Private addresses (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16): usable freely inside a site but never routed on the public Internet.

- Reserved addresses: ranges held back by IANA for special or future use.

Loopback address (127.0.0.1)

The 127.0.0.0 class 'A' subnet is used for only a single address: the loopback address 127.0.0.1. This address is used to test the local network interface device's functionality. All network interface devices should respond to this address. If you ping 127.0.0.1, you can be assured that the network hardware and the network software are both functioning.

Question 3:
(a)  What is HTTP? Describe the various HTTP request methods using an example of each.

Solution:

Short for HyperText Transfer Protocol, HTTP is a set of standards that allow users of the World Wide Web to exchange information found on web pages. To access a web page over HTTP, you enter http:// in front of the web address, which tells the browser which protocol to use. For example, the full URL for Computer Hope is http://www.computerhope.com. Modern browsers no longer require http:// in front of the URL, since it is the default method of communication, but they still accept it because the browser can also speak other protocols, such as FTP. Below are a few of the major facts about HTTP.

The term hypertext was coined by Ted Nelson; HTTP itself was created by Tim Berners-Lee at CERN.

HTTP commonly utilizes port 80, 8008, or 8080.

HTTP/0.9 was the first version of the HTTP and was introduced in 1991.

HTTP/1.0 is specified in RFC 1945 and introduced in 1996.

HTTP/1.1 was first specified in RFC 2068, released in January 1997, and later revised in RFC 2616 (June 1999).

HTTPS

Short for HyperText Transfer Protocol Secure, HTTPS is a secure method of accessing or sending information across a web page. All data sent over HTTPS is encrypted before it is sent, which prevents anyone who intercepts it from understanding that information. Because of the encryption overhead, HTTPS is slower than HTTP, which is why it has traditionally been used mainly for pages that require login information or contain sensitive information, such as an online banking web page.

HTTPS uses port 443 to transfer its information.

HTTPS (HTTP over TLS/SSL) is defined in RFC 2818.

Other related RFCs of interest

RFC 2068
HTTP status codes

Below is a listing of common HTTP status codes. These codes tell a client accessing another computer or device over HTTP how to proceed.

1xx (informational): 100 Continue, 101 Switching Protocols, 102 Processing

2xx (success): 200 OK, 201 Created, 202 Accepted, 204 No Content, 205 Reset Content, 206 Partial Content, 207 Multi-Status

3xx (redirection): 301 Moved Permanently, 302 Found, 304 Not Modified

4xx (client error): 400 Bad Request, 401 Unauthorized, 402 Payment Required, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 406 Not Acceptable, 407 Proxy Authentication Required, 408 Request Timeout, 409 Conflict, 410 Gone, 413 Request Entity Too Large, 414 Request-URI Too Long, 416 Requested Range Not Satisfiable

5xx (server error): 500 Internal Server Error, 501 Not Implemented, 503 Service Unavailable, 505 HTTP Version Not Supported

HTTP request methods

The main HTTP request methods are:

- GET: retrieve the resource at a URL (e.g., GET /index.html fetches a page).

- HEAD: like GET, but returns only the headers, not the body.

- POST: submit data to a resource (e.g., an HTML form posting to /login).

- PUT: store the request body at the given URL (e.g., PUT /users/42 replaces that record).

- DELETE: remove the resource at the URL.

- OPTIONS: ask which methods the resource supports.

- TRACE: echo the received request back, for diagnostics.

For OPTIONS, minimally, the response should be a 200 OK and have an Allow header with a list of HTTP methods that may be used on this resource. As an authorized user on an API, if you were to request OPTIONS /users/me, you should receive something like…

200 OK

Allow: HEAD,GET,PUT,DELETE,OPTIONS



For example, an Apache server might answer an OPTIONS request with:

Server: Apache/2.4.1 (Unix) OpenSSL/1.0.0g

Allow: GET,HEAD,POST,OPTIONS,TRACE

Content-Type: httpd/unix-directory



It could be an HTML page with documentation, but that's impractical because users don't click a "get options" button in their browsers before visiting a page. Machines may, though.

APIs should be taking advantage of this. There are many benefits to be gained from producing machine readable docs at every endpoint. It would be a boon for automatic client generation for web services. Communication between web services could be much more resilient if they had a codified way to check their abilities against each other.

At the very least, services should respond with a 200 and the Allow header; that's just correct web server behavior. But there's really no excuse for JSON APIs not to return a documentation object. To use GitHub as an example again, on the issues endpoint, a request like OPTIONS /repos/:user/:repo/issues should respond with a body like…

{
  "POST": {
    "description": "Create an issue",
    "parameters": {
      "title": {
        "type": "string",
        "description": "Issue title.",
        "required": true
      },
      "body": {
        "type": "string",
        "description": "Issue body."
      },
      "assignee": {
        "type": "string",
        "description": "Login for the user that this issue should be assigned to."
      },
      "milestone": {
        "type": "number",
        "description": "Milestone to associate this issue with."
      },
      "labels": {
        "type": "array/string",
        "description": "Labels to associate with this issue."
      }
    },
    "example": {
      "title": "Found a bug",
      "body": "I'm having a problem with this.",
      "assignee": "octocat",
      "milestone": 1,
      "labels": [
        "Label1",
        "Label2"
      ]
    }
  }
}
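For comparison with the responses above, the request side of HTTP is just structured text. This C sketch (the helper name is invented for illustration) assembles the minimal request line and headers an HTTP/1.1 client must send; writing this string to a TCP socket connected to port 80 is all it takes to issue a request.

```c
#include <stdio.h>
#include <stddef.h>

/* Assemble a minimal HTTP/1.1 request. HTTP/1.1 requires the Host
   header; "Connection: close" asks the server to close the TCP
   connection after one response. Returns the length written. */
int build_request(char *buf, size_t cap,
                  const char *method, const char *path, const char *host)
{
    return snprintf(buf, cap,
                    "%s %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
                    method, path, host);
}
```

Note the CRLF ("\r\n") line endings and the blank line that terminates the header block; both are required by the protocol.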
(b) What is a domain name? How is a domain name translated to an equivalent IP address? Explain using an example.

Solution:
Domain name

A domain name is a human-readable name, such as google.com, that identifies a site on the Internet. Every time you visit a website, you are interacting with the largest distributed database in the world: the DNS, or Domain Name System. Without it, the Internet as we know it would be unable to function. The work that the DNS does happens so seamlessly and instantaneously that you are usually completely unaware it is happening; the only time you get an inkling of what the DNS is doing is when you are presented with an error after trying to visit a website.

A lot of what has been discussed may be a bit confusing, so let's do a real-life example. The steps below show a computer trying to connect to www.google.com.

Example

1. A user opens a web browser and tries to connect to www.google.com. The operating system, not knowing the IP address for www.google.com, asks the ISP's DNS server for this information.

2. The ISP's DNS server does not know this information, so it connects to a root server to find out which name server, running somewhere in the world, knows the information about google.com.

3. The root server tells the ISP's DNS server to contact a particular name server that knows the information about google.com.

4. The ISP's DNS server connects to Google's DNS server and asks for the IP address for www.google.com.

5. Google's DNS server responds to the ISP's DNS server with the appropriate IP address.

6. The ISP's DNS server tells the user's operating system the IP address for www.google.com.

7. The operating system tells the web browser the IP address for www.google.com.

8. The web browser connects and starts communication with www.google.com.
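Programs trigger this whole chain through the resolver library; the standard getaddrinfo() call hides every step behind one function. A minimal C sketch (the wrapper name is invented; resolving "localhost" here is deliberate, since it is answered from the local hosts file and needs no live DNS server):

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>

/* Resolve a host name to a dotted-decimal IPv4 address string: the
   programmatic equivalent of the lookup chain described above.
   Returns 1 on success, 0 on failure. */
int resolve_ipv4(const char *host, char *out, size_t cap)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;          /* ask for IPv4 answers only */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return 0;
    struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
    const char *p = inet_ntop(AF_INET, &sa->sin_addr, out, (socklen_t)cap);
    freeaddrinfo(res);
    return p != NULL;
}
```

Called with a public name such as "www.google.com", the same function would drive the recursive lookup through the ISP's DNS server described in the steps above.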

(c) What is a mail server? Briefly explain, specifying the protocols involved, how a sender sends mail to the server and how the recipient retrieves it from the server.

Solution:
Mail server

A mail server is the computerized equivalent of your friendly neighborhood mailman. Every email that is sent passes through a series of mail servers along its way to its intended recipient. Although it may seem like a message is sent instantly – zipping from one PC to another in the blink of an eye – the reality is that a complex series of transfers takes place. Without this series of mail servers, you would only be able to send emails to people whose email address domains matched your own – i.e., you could only send messages from one example.com account to another example.com account.
Types of Mail Servers

Mail servers can be broken down into two main categories: outgoing mail servers and incoming mail servers. Outgoing mail servers are known as SMTP, or Simple Mail Transfer Protocol, servers. Incoming mail servers come in two main varieties. POP3, or Post Office Protocol version 3, servers normally download messages to the PC's local hard drive and remove them from the server. IMAP, or Internet Message Access Protocol, servers always keep the master copies of messages on the server, which is more convenient when reading mail from several devices; most POP3 servers can also be configured to leave copies of messages on the server.
The Process of Sending an Email

Now that you know the basics about incoming and outgoing mail servers, it will be easier to understand the role that they play in the emailing process. The basic steps of this process are outlined below for your convenience.

Step #1: After composing a message and hitting send, your email client – whether it’s Outlook Express or Gmail – connects to your domain’s SMTP server. This server can be named many things; a standard example would be smtp.example.com.

Step #2: Your email client communicates with the SMTP server, giving it your email address, the recipient’s email address, the message body and any attachments.

Step #3: The SMTP server processes the recipient’s email address – especially its domain. If the domain name is the same as the sender’s, the message is routed directly over to the domain’s POP3 or IMAP server – no routing between servers is needed. If the domain is different, though, the SMTP server will have to communicate with the other domain’s server.

Step #4: In order to find the recipient’s server, the sender’s SMTP server has to communicate with the DNS, or Domain Name Server. The DNS takes the recipient’s email domain name and translates it into an IP address. The sender’s SMTP server cannot route an email properly with a domain name alone; an IP address is a unique number that is assigned to every computer that is connected to the Internet. By knowing this information, an outgoing mail server can perform its work more efficiently.

Step #5: Now that the sending SMTP server has the recipient's IP address, it can connect to the recipient's SMTP server. This isn't always done directly, though; the message may be routed through intermediate SMTP relays before it arrives at its destination.

Step #6: The recipient's SMTP server scans the incoming message. If it recognizes the domain and the user name, it forwards the message along to the domain's POP3 or IMAP server. From there, it is placed in the recipient's mailbox until the recipient's email client downloads it. At that point, the message can be read by the recipient.
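On the wire, the hand-off to an SMTP server is a short text dialogue: the client issues commands and the server answers with numeric codes. The sketch below (an illustrative helper, not a real mail library; the host and addresses in the usage are placeholders) assembles the envelope commands a client sends before the message body.

```c
#include <stdio.h>
#include <stddef.h>

/* Assemble the SMTP envelope: the commands a client sends before the
   message body. Each command ends in CRLF; after DATA the client sends
   the message itself, terminated by a line containing only ".". */
int smtp_envelope(char *buf, size_t cap,
                  const char *helo_host, const char *from, const char *to)
{
    return snprintf(buf, cap,
                    "HELO %s\r\n"
                    "MAIL FROM:<%s>\r\n"
                    "RCPT TO:<%s>\r\n"
                    "DATA\r\n",
                    helo_host, from, to);
}
```

For example, smtp_envelope(buf, sizeof buf, "client.example.com", "alice@example.com", "bob@example.org") produces the command sequence a sending server would write to TCP port 25.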
How Email Clients are Handled

Many people use web-based email clients, like Yahoo Mail and Gmail. Those who require a lot more space – especially businesses – often have to invest in their own servers. That means that they also have to have a way of receiving and transmitting emails, which means that they need to set up their own mail servers. To that end, programs like Postfix and Microsoft Exchange are two of the most popular options. Such programs facilitate the preceding process behind the scenes. Those who send and receive messages across those mail servers, of course, generally only see the “send” and “receive” parts of the process.

At the end of the day, a mail server is a computer that helps move files along to their intended destinations. In this case, of course, those files are email messages. As easy as they are to take for granted, it’s smart to have a basic grasp of how mail servers work.

(d) Draw the IP datagram header format. "The IP datagram has a checksum field; still, it is called an unreliable protocol." Justify.

Solution:

IP is the workhorse protocol of the TCP/IP protocol suite. All TCP, UDP, ICMP, and IGMP data gets transmitted as IP datagrams (Figure 1.4). A fact that amazes many newcomers to TCP/IP, especially those from an X.25 or SNA background, is that IP provides an unreliable, connectionless datagram delivery service.

By unreliable we mean there are no guarantees that an IP datagram successfully gets to its destination. IP provides a best-effort service. When something goes wrong, such as a router temporarily running out of buffers, IP has a simple error-handling algorithm: throw away the datagram and try to send an ICMP message back to the source. Any required reliability must be provided by the upper layers (e.g., TCP). This is why the checksum field does not make IP reliable: the checksum covers only the header, not the data, and a datagram whose checksum fails is simply discarded, never retransmitted by IP itself.

The term connectionless means that IP does not maintain any state information about successive datagrams. Each datagram is handled independently from all other datagrams. This also means that IP datagrams can get delivered out of order. If a source sends two consecutive datagrams (first A, then B) to the same destination, each is routed independently and can take different routes, with B arriving before A.

Here we take a brief look at the fields in the IP header. RFC 791 [Postel 1981a] is the official specification of IP.

IP Header

An IP datagram consists of a header followed by data. The normal size of the IP header is 20 bytes, unless options are present. The header fields, in order, are: version (4 bits), header length (4 bits), type of service (8 bits), total length in bytes (16 bits), identification (16 bits), flags (3 bits), fragment offset (13 bits), time to live (8 bits), protocol (8 bits), header checksum (16 bits), 32-bit source IP address, 32-bit destination IP address, and options (if any).

In pictures of TCP/IP protocol headers, the most significant bit is numbered 0 at the left, and the least significant bit of a 32-bit value is numbered 31 at the right.

The 4 bytes in the 32-bit value are transmitted in the order: bits 0-7 first, then bits 8-15, then 16-23, and bits 24-31 last. This is called big endian byte ordering, which is the byte ordering required for all binary integers in the TCP/IP headers as they traverse a network. This is called the network byte order. Machines that store binary integers in other formats, such as the little endian format, must convert the header values into the network byte order before transmitting the data.
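The standard library exposes this conversion as htonl()/ntohl() ("host to network long" and back). A short C check (the helper name is invented) that examines the in-memory byte order directly:

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Return the first (lowest-addressed) byte of a 32-bit value as it
   sits in memory after conversion to network byte order. */
uint8_t first_wire_byte(uint32_t host_value)
{
    uint32_t net = htonl(host_value);   /* big endian regardless of host */
    uint8_t bytes[4];
    memcpy(bytes, &net, sizeof bytes);
    return bytes[0];                    /* bits 0-7: the most significant byte */
}
```

On a little-endian machine htonl() actually swaps the bytes; on a big-endian machine it is a no-op. Either way, the first byte on the wire is the most significant one.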
The current protocol version is 4, so IP is sometimes called IPv4; IPv6, discussed earlier, is its proposed successor.
The header length is the number of 32-bit words in the header, including any options. Since this is a 4-bit field, it limits the header to 60 bytes. This limitation makes some of the options, such as the record route option, of little use today. The normal value of this field (when no options are present) is 5.
The type-of-service field (TOS) is composed of a 3-bit precedence field (which is ignored today), 4 TOS bits, and an unused bit that must be 0. The 4 TOS bits are: minimize delay, maximize throughput, maximize reliability, and minimize monetary cost.
Only 1 of these 4 bits can be turned on. If all 4 bits are 0, it implies normal service. RFC 1340 [Reynolds and Postel 1992] specifies how these bits should be set by all the standard applications; for example, interactive applications such as Telnet request minimized delay, while bulk transfers such as FTP data request maximized throughput. RFC 1349 [Almquist 1992] contains some corrections to this RFC and a more detailed description of the TOS feature.
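The header checksum mentioned in the question is the RFC 1071 Internet checksum: the one's complement of the one's-complement sum of the header taken as 16-bit words. A sender zeroes the checksum field, computes this over the 20-byte header, and stores the result; a router verifying the header computes the same sum over the full header and expects 0. A C sketch (the test header bytes below are a widely used textbook example, not from this assignment):

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum, as used by the IP header checksum field:
   the one's complement of the one's-complement sum of the data taken
   as 16-bit big-endian words. */
uint16_t internet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len -= 2;
    }
    if (len == 1)                       /* odd trailing byte: pad with zero */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                   /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```

Note again that this covers only the header: corrupted payload bytes pass through IP undetected, which is one more reason the reliability burden falls on TCP.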

Question 4:
(a) What are the NTFS, FAT, and HPFS file systems? Compare and contrast these file systems.

Solution:
File Systems (FAT, HPFS, NTFS)

At the BIOS level, a disk partition contains sectors numbered 0, 1, etc. Without additional support, each partition would be one large dataset. Operating systems add a directory structure to break the partition up into smaller files, assign names to each file, and manage the free space available to create new files.

The directory structure and methods for organizing a partition is called a File System. Different File Systems reflect different operating system requirements or different performance assumptions. Unix, for example, has the convention that lowercase and uppercase are different in file names, so “sample.txt” and “Sample.txt” are two different files. DOS and the systems that descend from it (Windows 95, OS/2, and Windows NT) ignore case differences when finding file names. Some File Systems work better on small machines, others work better on large servers.

Each partition is assigned a type (in the MBR for primary partitions, in the Extended Partition directory for logical volumes). When the partition is formatted with a particular File System, the partition type will be updated to reflect this choice.

The same hard disk can have partitions with File Systems belonging to DOS, OS/2, NT, and Linux (or other Unix clones). Generally, an operating system will ignore partitions whose type ID represents an unknown file system type. It is fairly easy (given a big enough disk) to install all of the different operating systems and all of the File System types. There are a few rules to make things simple.

Each File System is described in detail in a separate section.

FAT File System

The FAT File system is used by DOS and is supported by all the other operating systems. It is simple, reliable, and uses little storage.

VFAT

VFAT is an alternate use of the FAT file system available in Windows 95 and Windows NT 3.5. It allows files to have longer names than the "8.3" convention adopted by DOS. VFAT stores extra information in the directory that older DOS and OS/2 systems can ignore.

HPFS

HPFS is used by OS/2 and is supported by Windows NT. It provides better performance than FAT on larger disk volumes and supports long file names. However, it requires more memory than FAT and may not be a reasonable choice on systems with only 8 megs of RAM.

NTFS

NTFS provides everything. It supports long file names, large volumes, data security, and universal file sharing. A departmental NT file server will probably have all its partitions formatted for NTFS. Because the other operating systems cannot use it, NTFS is less attractive on personal desktop workstations or portables.
File Systems and Disk Letters
DOS and Windows 95 can only boot from the C: disk. Technically, the C: letter will be assigned to the first Primary Partition on the first hard disk that has a FAT file system. In no case can DOS boot from a second hard disk or from a logical volume in the extended partition. However, if the DOS boot sector and DOS files turn out to be on the second Primary Partition of the first hard disk, this is not a problem so long as the first partition has a non-FAT file system: DOS simply ignores primary partitions that are formatted for other operating systems.

Some people exploit this feature. They put an HPFS or NTFS file system on the first Primary Partition, and a FAT file system on the second. This can produce confusion. When the other operating system boots up, it will now assign letter C: to its first partition, and the disk that DOS calls “C:” will become “D:” on the other system. If the two systems share application programs, it becomes very difficult to configure INI files as the drive letter keeps changing back and forth. It is a simpler and safer strategy to accept the view that the first Primary Partition on the first hard disk should be formatted with the FAT file system and should be the C: drive in every operating system.
Choosing a File System

The performance problems with FAT have been greatly reduced by various strategies to use Cache memory and to periodically DEFRAG the disk. FAT is the only system fully supported by DOS and Windows 95. It is also a perfectly acceptable choice under Windows NT and OS/2. FAT systems require the least memory and are the best choice on small machines.

Although it is simpler to manage a few larger volumes, FAT performance degrades with volume size. The distance between the directory and the data increases the disk movement, and larger allocation units waste space. A good rule of thumb would limit FAT volumes to a maximum of 255 megabytes.
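As a rough illustration of why larger allocation units waste space: each file wastes, on average, half a cluster of "slack". The sketch below is back-of-the-envelope arithmetic only; the cluster sizes and file count are assumed values, not measurements of any particular volume.

```python
# Rough illustration of FAT "slack space": each file wastes, on average,
# half an allocation unit (cluster). The cluster sizes below are assumed
# values for illustration.

def expected_slack(num_files, cluster_bytes):
    """Average wasted bytes: half a cluster per file."""
    return num_files * cluster_bytes // 2

# 10,000 small files with 8 KB clusters vs 2 KB clusters
print(expected_slack(10_000, 8 * 1024))  # 40960000 bytes (~39 MB) wasted
print(expected_slack(10_000, 2 * 1024))  # 10240000 bytes (~10 MB) wasted
```

Larger volumes force larger clusters under FAT, which is why keeping volumes small keeps the waste down.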

FAT has proven to be quite reliable and is fairly immune to damage. When the system crashes, FAT can “misplace” disk space that was being allocated to a file. CHKDSK (or Microsoft’s newer SCANDISK) will recover the missing space. Less frequently, a really serious error can leave the same sector of disk space assigned to two different files. Such “crosslinked” files are damaged, and once this occurs the entire volume is suspect. The preferred recovery is to back everything up, reformat the volume, and restore the data. Crosslinked files can be produced by a damaged operating system, or by a hardware problem in the disk subsystem itself.

HPFS is supported by OS/2 and Windows NT. Although it is not officially supported by DOS or Windows 95, there are shareware drivers (such as AMOS3) that can provide these systems with at least Read-Only access to HPFS files. Since OS/2 does not support VFAT, it cannot use long file names on a FAT volume. Many OS/2 software packages require long file names. An OS/2 system with enough memory and disk space should have at least one HPFS volume to support such packages.

Only Windows NT can use data on an NTFS volume. NTFS is required to provide full security on an NT File Server, and to support Macintosh datasets. On desktop workstations that run other operating systems as well as NT, NTFS is probably more trouble than it is worth.

A good general principle is to put FAT volumes first on a disk, then HPFS, and finally NTFS. All the systems will see the FAT volumes and will assign them disk letters. With device drivers for DOS, all the systems will see the HPFS volumes as well. The NTFS volumes will only be available to Windows NT and will be ignored by the other systems.

(b) Describe the activities to be performed at every layer in the TCP/IP model when information flows from one layer to another.

Solution:

The theory behind having standards accepted, ratified, and agreed upon by nations around the world is to ensure that a system from Country A will integrate with a system from Country B with little effort. Standards also give industries specifications for creating goods and services that conform, and, by enabling competition for the same product, they decrease prices for products that must meet the minimum standards. Comparisons between products made by competing groups are also made easier when all of them must meet or exceed the minimum accepted specifications.

A protocol is more like a language that can be shared by many people. A protocol may become a standard if all of the players that would like to use it politically agree that it shall be the protocol of choice for use in, and between, nations. When the protocol is ratified by the governing bodies as the shared and agreed upon system, it becomes an official standard.

The ISO looked to create a simple model for networking. They took the approach of defining layers that rest in a stack formation, one layer upon the other. Each layer would have a specific function, and deal with a specific task. Much time was spent in creating their model called “The ISO OSI Seven Layer Model for Networking”. In this model, they have 7 layers, and each layer has a special and specific function.

ISO OSI Seven Layer Model

Described:

7.) Application Layer:

The Application Layer can include things like file transfers and display formatting. HTTP is an example of an Application Layer protocol. Commonly known protocols considered by many to be part of the Application Layer may actually occupy the Session, Presentation, and Application Layers. For example, an examination of an NFS file mount with files being copied defies simple categorization within the ISO OSI 7 Layer Reference Model. Is NFS an Application Layer protocol? Files are copied, so the Application Layer is clearly involved. However, synchronization takes place to some extent during file transfers, and sessions are created and torn down on demand as files are transferred. This suggests that it could also be part of the Session Layer, or maybe the Presentation Layer.

6.) Presentation Layer

Other than passing data to and from the Application Layer and Session Layer, this layer is reserved for certain kinds of data manipulation, or for encapsulating consistent data types for transmission. Translations could be made between ASCII and Unicode, or even EBCDIC if hexadecimal values for letters were being transmitted.

It is the presentation layer that is also able to exchange messages and often dynamically create a syntax that is shared by it and its peer layer service on the remote stack.

It is possible for something like a database translation system, providing a consistent presentation service to an application program performing database queries, to operate here. Some parts of ODBC may fit into the Presentation Layer in this respect.

Other examples of translation that might be made to “fit” in this layer include vt100, vt220, HTML, and codes for translating data to be presented. For example, in HTML, “&amp;” is used to represent “&”, and this is effectively a modification of the data being displayed. (The end user often does not see the escape codes used to display an inverse letter or an odd symbol.)
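Both kinds of translation mentioned here can be illustrated in a few lines. This is only a sketch: Python's cp500 codec is one EBCDIC variant, and the sample strings are arbitrary.

```python
import html

# HTML escaping: "&" must be written "&amp;" in markup.
print(html.escape("Fish & Chips"))   # Fish &amp; Chips

# ASCII <-> EBCDIC translation using Python's cp500 codec (one EBCDIC
# variant). The bytes on the wire differ, but a round trip restores the
# original text, which is exactly the presentation-layer job.
ebcdic = "HELLO".encode("cp500")
print(ebcdic.hex())                  # c8c5d3d3d6, not the ASCII byte values
print(ebcdic.decode("cp500"))        # HELLO
```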

5.) Session Layer:

This section is one of the most often misunderstood, since it does not have an obvious separate protocol when people try to apply it to a common layered system that may use TCP/IP or IPX/SPX. With these protocols, and protocols on top of them, layer boundaries are not so obvious. Examining what services are supposed to take place here according to the ISO OSI 7 Layer Reference Model for Networking, we can see a short list.

This layer deals with creating a session, transmitting data, and then tearing down the created session. Sessions are created and terminated at the request of the Presentation Layer as it has data needing to be passed on to a different location.

Part of the session creation process includes dealing with cases of Half Duplex sessions, where only transmission or reception may take place at one time, and working out a turn-sharing system to ensure both sides get opportunities to transmit as they need to relay data. In the case of Full Duplex support, a discovery process may be needed to allow this layer to know that bi-directional conversations may take place at the same time.

Another service offered as a part of the Session Layer might be data synchronization, and checksums may be included as part of it. A checksum is computed after each packet is transmitted, to see whether applying the data from the packet to the file or stream being transferred would give it the same checksum as the remote copy up to that point. If so, the new data may be appended to the copy on the local machine. This is a form of error correction for transmitted data. A familiar use of checksums can be seen in Zmodem transfers as part of communications or terminal software. The wonderful part of Zmodem transfers is that an interrupted download can be resumed where it left off, with a minimal amount of retransmitted data. This may not be the exact method used at this layer, but it shows how synchronizing on each part of the data being transferred allows interruptions to be handled without starting the whole transmission over again.
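The resume-after-interruption idea can be sketched in a few lines. This is a toy illustration only, not Zmodem's actual algorithm; CRC-32 stands in for whatever checksum the protocol really uses.

```python
import zlib

# Toy sketch of resuming an interrupted transfer: the receiver's partial
# copy is trusted only if its checksum matches the sender's checksum of
# the same-length prefix. CRC-32 stands in for the protocol's checksum.

def resume_offset(sender_data, receiver_data):
    """Byte offset from which the sender should retransmit."""
    n = len(receiver_data)
    if zlib.crc32(sender_data[:n]) == zlib.crc32(receiver_data):
        return n      # prefix intact: resume right after it
    return 0          # mismatch: start the transfer over

data = b"the quick brown fox jumps over the lazy dog"
print(resume_offset(data, data[:16]))    # 16: only the tail is resent
print(resume_offset(data, b"garbage!"))  # 0: full retransmission
```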

4.) Transport Layer:

This layer is responsible for many things that individually may not seem exceptionally important but actually provide for some critical needs.

Just as you will read in some of the layers below, this layer also looks to prevent a fast sender from over-running a slow receiver. An analogy made in the Networking Layer section, between data throughput rates and pipe sizes, may better illustrate this; it keeps the amount of reading here smaller if it is examined later.

This layer, like all layers except Layers 1 and 7, accepts data from the layers immediately above and below it, and provides services to the layer above it. In this case, the Transport Layer must create a connection of the type needed by the Session Layer for each connection the Session Layer requests. When data being pushed down the model towards this layer is larger than the maximum allowed packet size for this layer, it is up to this layer to re-size the incoming data from above. It does this by breaking the larger data into smaller pieces that fit within packets for this layer. The peer Transport Layer then re-creates the larger data for its upper layer by joining the payloads of the separate packets back together into a “stream”.
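The segmentation and reassembly just described can be sketched like this. The 8-byte maximum segment size is an arbitrary assumption for illustration; real transports negotiate a much larger maximum.

```python
# Sketch of transport-layer segmentation and reassembly. The 8-byte
# maximum segment size is an arbitrary assumption for illustration.

def segment(data, mss=8):
    """Split data into chunks no larger than mss bytes."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

def reassemble(packets):
    """Peer side: join payloads back into the original stream."""
    return b"".join(packets)

msg = b"hello from the session layer"
pkts = segment(msg)
print(len(pkts))                    # 4 packets for a 28-byte message
print(reassemble(pkts) == msg)      # True
```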

If a higher layer requires that a connection be created that is “reliable” (able to notice an error and then correct for the error so that all data sent eventually arrives at its destination) and the above layer cannot provide its own method of ensuring a “reliable” connection, then it is up to this layer to create a “reliable connection” that ensures all data sent eventually arrives.

If a “reliable” service is not required, but instead an “unreliable” connection is desired, then it is up to this layer to ensure that the packets arriving are the same as the packets that were sent, or else they are discarded. This can be done in a number of ways, but the most common is to use checksums (explained in a lower layer below).

(Brief summary; described later.) Both reliable and unreliable connections may use something called a checksum (explained later in this page). If a packet’s computed checksum matches its carried checksum, then the packet may be considered valid, and its payload passed on up to a higher layer. If the checksums do not match, the packet may be discarded.

3.) Networking Layer:

It is the opinion of one of my associates that this layer is potentially the most complex of all the layers, due to the issues it must address. Most importantly: routing. This layer is primarily responsible for routing data from the layer above (the Transport Layer) to a remote location that may not share the same Physical Layer direct link, or even the same Data Link Layer protocol. Differences in what is commonly called “bandwidth” (the size of an imaginary pipe for pushing data from one point to another, as if it were liquid in a real pipe) are also addressed at this layer. Just as the Data Link Layer below must ensure that a fast sender does not flood out a slower receiver and possibly cause lost data, this layer must address the problems that arise when a stream of packets coming from a network with unused big pipes encounters a possibly busy network with small pipes. Even when two networks have pipes of the same size, a network with a pipe that is almost full may have difficulty passing on incoming packets from a network with a pipe that is nearly empty.

Other issues resolved by this layer include dealing with packet size when dissimilar settings or protocols between networks force the size of a packet to become smaller before being passed on (called fragmentation in the IP portion of the TCP/IP suite of protocols). (Part of another document describes this better, using the often-used “carrier pigeon” lesson to convey the problems of packet fragmentation through symbolism, allowing the reader to draw some parallels. It can be found at /networking/integrated.html#pigeon. I do not promote the slaughter of innocent birds here; I use the often-used “carrier pigeon” scenario to describe TCP/IP based transmission of data, but extend it to include packet fragmentation. The idea is to take a pre-existing teaching model the reader may know and extend it to cover the special case of fragmentation.)

Often some sort of accounting mechanisms are included at this layer to allow a network administrator to see how many packets, bytes, and various numbers of different sized packets may have been transmitted. Though it is not an absolute necessity to the function of this layer, it does often provide statistical data for making charges to parties, optimizing links and arguing for bigger more expensive pipes, or smaller cheaper pipes, or bigger slower pipes, or smaller faster pipes. It can be one thing to tell your boss you need a bigger pipe, and it is entirely another to show your boss you need a bigger pipe.

2.) Data Link Layer: This layer is responsible for creating what appears to the layer above (Network Layer) as a channel that is free of detected errors. Often this is done by packaging bits into cells, or frames, or generically “packets” with a predictable beginning and end and special calculations performed on the data known as checksums.

It is necessary for the sender and receiver to agree upon the beginnings and ends of packets so their transmission may be synchronized. The beginning of a packet may be known by both the sender and receiver based on shared timing. For example (not a real case), a packet might be expected every second, with a one second pause after each packet (synchronous). Another system may use a special sequence of bits as a signature for the beginning of a packet, and another special signature for the end of a packet (asynchronous). With either system, both the sender and receiver know when a packet starts and stops.
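The asynchronous signature scheme can be sketched with byte stuffing: a FLAG byte marks frame boundaries, and an ESC byte "stuffs" any flag or escape bytes occurring in the payload so the receiver never mistakes data for a delimiter. The FLAG/ESC values below happen to match the ones PPP uses, but the sketch itself is generic, not any particular protocol's implementation.

```python
# Byte-stuffed framing sketch: FLAG delimits frames, ESC escapes any
# FLAG/ESC byte inside the payload (escaped bytes are XORed with 0x20).
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape and transform
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def deframe(wire: bytes) -> bytes:
    body = wire[1:-1]                        # strip the delimiters
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)   # undo the transform
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

data = bytes([0x01, 0x7E, 0x02, 0x7D])       # payload containing FLAG/ESC
print(deframe(frame(data)) == data)          # True
```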

However the bits are packaged, a system is devised and used by the sender and receiver to allow the receiver to detect a bad cell, frame, or generically packet. Often this is a checksum. A checksum is a special mathematical check performed on the data being transmitted by the sender’s Data Link Layer. The sender examines the payload it will be encapsulating in a packet and performs a special mathematical equation on the payload (or complete packet, depending on the defined Data Link Layer protocol). Then it includes the result of that equation in a part of the packet that is not the payload, such as the beginning, or header, of the packet. When the receiving machine gets the packet, it looks in the agreed upon location for the checksum value and removes it from the packet. Then the receiver performs the same mathematical equation on the payload (or remainder of the packet, depending upon the protocol’s agreed upon method) and compares its result with the transmitted packet’s checksum value. If the two differ, then something is wrong with the received packet, and it may be discarded.
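A minimal sketch of this sender/receiver checksum exchange follows. The simple 16-bit sum below is an assumption for illustration; real Data Link protocols use stronger checks such as CRCs.

```python
# Minimal sketch of the checksum handshake described above, using a
# simple 16-bit sum (real protocols use stronger checks such as CRCs).

def checksum(payload: bytes) -> int:
    return sum(payload) & 0xFFFF

def send(payload: bytes) -> bytes:
    # the header carries the checksum in the first two bytes
    return checksum(payload).to_bytes(2, "big") + payload

def receive(packet: bytes):
    carried = int.from_bytes(packet[:2], "big")
    payload = packet[2:]
    if checksum(payload) != carried:
        return None            # mismatch: discard the packet
    return payload

pkt = send(b"payload bits")
print(receive(pkt))                    # b'payload bits'
corrupted = pkt[:-1] + b"\x00"         # damage the last payload byte
print(receive(corrupted))              # None
```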

Some examples of some protocols that operate mostly in the Data Link Layer include Ethernet, TokenRing, ATM, and PPP.

1)    Physical Layer

This layer is responsible for moving bits across a shared medium between two points. The parties involved (or all parties involved) must agree on a specification for how an on (1) bit and an off (0) bit are signaled: for what duration a voltage or current is held, and how the signal proceeds from sender to receiver so the receiver can “hear” it and decode it back into the bits the sender transmitted. If the medium is not wired but wireless, then it is this layer that specifies what frequency of light or sound is used, and whether changes in luminous intensity or amplitude change the meanings of bits. This layer also specifies how the channel may be used: Full Duplex, Half Duplex, or (possibly) Simplex. It also deals with conductor mapping in the case of wired media, and frequency/amplitude/cycle offsets in the case of wireless media, for mapping reception and transmission.

(b)  Describe how to monitor the number of TCP connection failures in Linux and UNIX

Solution:

You can use auditd to monitor system calls. It can log a system call based on its function name and return value, so you can monitor things like socket creation failures, read failures, etc.
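For counting connection failures specifically, the kernel already keeps a counter: the Tcp lines of /proc/net/snmp include an AttemptFails field, which is where netstat's "failed connection attempts" number comes from. A small sketch that reads it directly (Linux-only; the parsing assumes the usual layout of a header line of field names followed by a line of values):

```python
import os

# Read the TCP "failed connection attempts" counter (AttemptFails) from
# /proc/net/snmp. Linux-only; assumes the usual two-line Tcp layout.

def tcp_attempt_fails(path="/proc/net/snmp"):
    with open(path) as f:
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = tcp_lines[0], tcp_lines[1]   # names, then values
    return int(values[header.index("AttemptFails")])

if os.path.exists("/proc/net/snmp"):
    print("failed connection attempts:", tcp_attempt_fails())
```

Sampling this counter periodically (and diffing successive readings) gives a failure rate without tracing individual system calls.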

In Linux, I would like to know what the retry mechanism is (how many times, and how far apart). I ask because for a TCP client connect() call I am getting an ETIMEDOUT error. This socket has the O_NONBLOCK option and is monitored by epoll() for events.

If someone can point me to where in the code this retry logic is implemented, that would be helpful too. I tried following it a bit, starting with tcp_v4_connect() in net/ipv4/tcp_ipv4.c, but lost my way pretty soon.
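On Linux, the number of times an unanswered SYN is retransmitted is governed by the net.ipv4.tcp_syn_retries sysctl, with exponential backoff between attempts, which is why a non-blocking connect() eventually surfaces ETIMEDOUT. The epoll-style pattern from the question can be sketched with Python's selectors module (which wraps epoll on Linux). The address is an assumption: a local port with almost surely no listener, so the expected result is ECONNREFUSED.

```python
import errno
import selectors
import socket

# Non-blocking connect() watched by a readiness API. When the socket
# reports writable, the deferred connect() result (0 on success, or an
# errno such as ETIMEDOUT / ECONNREFUSED) is read back with SO_ERROR.

def try_connect(addr, timeout=2.0):
    """Return 0 on success, or the errno of the failed connect()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    try:
        err = sock.connect_ex(addr)          # does not raise when non-blocking
        if err not in (0, errno.EINPROGRESS):
            return err
        sel = selectors.DefaultSelector()
        sel.register(sock, selectors.EVENT_WRITE)
        if not sel.select(timeout):
            return errno.ETIMEDOUT           # our own timeout; the kernel
                                             # may still be retrying SYNs
        # writable now: fetch the pending error, if any
        return sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    finally:
        sock.close()

result = try_connect(("127.0.0.1", 1))       # port 1: assumed no listener
print(errno.errorcode.get(result, "connected"))
```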

For example, the Tcp section of netstat -s output reports counters like these (shown verbatim, including netstat’s own spellings):

Tcp:
    1478133 active connections openings
    121093 passive connection openings
    906 failed connection attempts
    76814 connection resets received
    1 connections established
    12674512 segments received
    14727243 segments send out
    14561 segments retransmited
    0 bad segments received.
    3603 resets sent
