2014-09-02

Course Code : BCS-011

Course Title : Computer Basics and PC Software

Assignment Number : BCA(1)-011/Assignment/14-15

Maximum Marks : 100

Weightage : 25%

Last Date of Submission : 15th October, 2014/15th April, 2015

This assignment has three questions of 80 marks (each section of a question carries the same marks). Answer all the questions. The remaining 20 marks are for viva voce. You may use illustrations and diagrams to enhance your explanations. Please go through the guidelines regarding assignments given in the Programme Guide for the format of presentation. Please give precise answers. The word limit for each part is 200 words.

Question 1: (Covers Block 1)

a) Explain the terms: transistor, integrated circuit and von Neumann Architecture in the context of a Computer. Have the developments of very large scale integration affected the von Neumann Architecture? Explain your answer.

Solution:
Transistor: A device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers.
Integrated circuit: An integrated circuit (IC) is a small electronic device made out of a semiconductor material. Integrated circuits are used in a variety of devices, including microprocessors, audio and video equipment, and automobiles.
Von Neumann Architecture: John von Neumann elucidated the first practical stored-program computer architecture (a scheme for connecting computer components) in the mid-1940s. It comprises the five classical components (input, output, processor, memory, and datapath). The processor is divided into an arithmetic logic unit (ALU) and a control unit, a method of organization that persists to the present. Within the processor, the ALU datapath mediates data transfer for computation. The registers are fast memory modules from/to which data can be read/written to support streaming computation, as shown in Figure 1.8. Within the ALU, an accumulator supports efficient addition or incrementation of values corresponding to variables such as loop indices.

The von Neumann architecture has a significant disadvantage: its speed is dependent on the bandwidth, or throughput, of the datapath between the processor and memory. This is called the von Neumann bottleneck. After the integrated circuit, the only place to go was down in size. Large scale integration (LSI) could fit hundreds of components onto one chip. By the 1980s, very large scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large scale integration (ULSI) increased that number into the millions. The ability to fit so much onto an area about half the size of a U.S. dime helped diminish the size and price of computers. It also increased their power, efficiency and reliability. These developments, however, did not change the von Neumann organization itself: VLSI processors still follow the stored-program model, and the processor-memory bottleneck remains.

b) In the context of memory organization, what tradeoff is faced by a computer? Explain the characteristics of primary, magnetic and optical memories. Differentiate between sequential and random access.

Solution:

A space–time or time–memory tradeoff is a situation where memory use can be reduced at the cost of slower program execution (and, conversely, computation time can be reduced at the cost of increased memory use). As the relative costs of CPU cycles, RAM space, and hard drive space change (hard drive space has for some time been getting cheaper at a much faster rate than other components of computers), the appropriate choices for space–time tradeoffs have changed radically. Often, by exploiting a space–time tradeoff, a program can be made to run much faster.
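A classic instance of this tradeoff is memoization: caching previously computed results spends memory to avoid recomputation. A minimal Python sketch (the Fibonacci function here is only an illustration):

```python
# Space-time tradeoff: spend memory on a cache to save computation time.
from functools import lru_cache

@lru_cache(maxsize=None)   # results are kept in memory ...
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040; without the cache this recomputes subproblems exponentially many times
```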
Characteristics of Primary Memory

• These are semiconductor memories

• It is known as main memory.

• Usually volatile memory.

• Data is lost in case power is switched off.

• It is working memory of the computer.

• Faster than secondary memories.

• A computer cannot run without primary memory.
Characteristics of Magnetic and Optical Memories

• These are magnetic and optical memories

• It is known as backup memory.

• It is non-volatile memory.

• Data is permanently stored even if power is switched off.

• It is used for storage of data in a computer.

• A computer may run without secondary memory.

• Slower than primary memories.

Comparing random versus sequential operations is one way of assessing application efficiency in terms of disk use. Accessing data sequentially is much faster than accessing it randomly because of the way in which the disk hardware works. The seek operation, which occurs when the disk head positions itself at the right disk cylinder to access data requested, takes more time than any other part of the I/O process. Because reading randomly involves a higher number of seek operations than does sequential reading, random reads deliver a lower rate of throughput. The same is true for random writing.




You might find it useful to examine your workload to determine whether it accesses data randomly or sequentially. If you find disk access is predominantly random, you might want to pay particular attention to the activities being done and monitor for the emergence of a bottleneck.
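The two access patterns can be sketched as follows; actual timings depend entirely on the hardware (and OS caching often hides the gap for a small file like this), so this is only an illustration of the patterns, not a benchmark:

```python
# Read the same 4 KiB blocks of a file sequentially, then in shuffled order.
import os, random, tempfile, time

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(1024 * 1024))          # 1 MiB test file

block = 4096
offsets = list(range(0, 1024 * 1024, block))  # 256 block offsets

def read_blocks(order):
    with open(path, "rb") as f:
        t0 = time.perf_counter()
        for off in order:
            f.seek(off)                       # one seek per block
            f.read(block)
        return time.perf_counter() - t0

seq = read_blocks(offsets)                    # sequential order
random.shuffle(offsets)
rnd = read_blocks(offsets)                    # random order: more head movement on a disk
print(f"sequential: {seq:.4f}s  random: {rnd:.4f}s")
os.remove(path)
```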

c) Convert the following numbers as stated
(i) Decimal 117.0125 to binary

Solution:

Integer Part    Quotient on division by 2    Remainder on division by 2

117             58                           1
58              29                           0
29              14                           1
14              7                            0
7               3                            1
3               1                            1
1               0                            1

Please note in the table above that:

The equivalent binary of the Integer part of the number is (1110101)2.

You will get the Integer part of the number if you READ the remainders from bottom to top.

Fractional Part    Result on multiplication by 2    Integer Part

0.0125             0.025                            0
0.025              0.05                             0
0.05               0.1                              0
0.1                0.2                              0
0.2                0.4                              0
0.4                0.8                              0
0.8                1.6                              1
0.6                1.2                              1
0.2                0.4                              0

Please note in the table above that:

The equivalent binary of the Fractional part of the number is 000000110 (the binary expansion of 0.0125 does not terminate; it is truncated here to nine bits).

You will get the fractional part of the number if you READ the Integer parts from top to bottom.

Thus, the number (1110101.000000110)2 is approximately equivalent to (117.0125)10.
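As a quick check, the conversion above can be reproduced in a few lines of Python; the fractional digits come from repeatedly multiplying by 2 and taking the integer part:

```python
# Integer part: Python's bin() gives the binary form directly.
print(bin(117))   # 0b1110101

# Fractional part: multiply by 2, record the integer digit, keep the remainder.
frac, bits = 0.0125, ""
for _ in range(9):           # nine bits, as in the table above
    frac *= 2
    bits += str(int(frac))
    frac -= int(frac)
print(bits)       # 000000110
```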

(ii) Decimal 2459 to hexadecimal

Solution:

Integer Part    Quotient on division by 16    Remainder on division by 16

2459            153                           11 (B)
153             9                             9
9               0                             9

Please note in the table above that:

The equivalent hexadecimal of the Integer part of the number is (99B)16.

You will get the Integer part of the number if you READ the remainders from bottom to top.
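The result can be checked by repeated division by 16, mirroring the table:

```python
# Convert 2459 to hexadecimal by repeated division by 16.
n, digits = 2459, []
while n:
    n, r = divmod(n, 16)                  # quotient and remainder
    digits.append("0123456789ABCDEF"[r])  # remainder as a hex digit
print("".join(reversed(digits)))  # 99B (remainders are read bottom to top)
print(hex(2459))                  # 0x99b, Python's built-in check
```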

(iii) Character X and x to ASCII and Unicode

Solution:

According to the ASCII and Unicode tables:

Character    ASCII Code    Unicode

X            88            U+0058
x            120           U+0078
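These code points can be confirmed in Python; for characters in the ASCII range, the ASCII code and the Unicode code point coincide:

```python
# Print the decimal code and Unicode code point of 'X' and 'x'.
for ch in "Xx":
    print(ch, ord(ch), f"U+{ord(ch):04X}")
# X 88 U+0058
# x 120 U+0078
```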

(iv) Decimal 3456 to binary

Solution:

Integer Part    Quotient on division by 2    Remainder on division by 2

3456            1728                         0
1728            864                          0
864             432                          0
432             216                          0
216             108                          0
108             54                           0
54              27                           0
27              13                           1
13              6                            1
6               3                            0
3               1                            1
1               0                            1

Please note in the table above that:

The equivalent binary of the Integer part of the number is (110110000000)2.

You will get the Integer part of the number if you READ the remainders from bottom to top.

Converted Decimal (3456)10 to Binary (110110000000)2
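The repeated-division method used in the table generalizes to any non-negative integer; a small illustrative function (not part of the original solution):

```python
# Decimal to binary by repeated division by 2, as in the table above.
def to_binary(n):
    bits = ""
    while n:
        n, r = divmod(n, 2)   # quotient and remainder on division by 2
        bits = str(r) + bits  # remainders are read bottom to top
    return bits or "0"

print(to_binary(3456))  # 110110000000
print(to_binary(117))   # 1110101
```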

d) What is the need of ports in a computer system? What is the purpose of Universal Serial Bus? Name the devices that can be connected using Universal Serial Bus.

Solution:

In computer networking, the term port can refer to either physical or virtual connection points. Physical network ports allow connecting cables to computers, routers, modems and other peripheral devices. Several different types of physical ports available on computer network hardware include:

• Ethernet ports

• USB ports

• Serial ports

USB 1.x is an external bus standard that supports data transfer rates of 12 Mbps and can support up to 127 peripheral devices. Generally, USB refers to the types of cables and connectors used to connect these many kinds of external devices to computers. The Universal Serial Bus standard has been extremely successful. USB ports and cables are used to connect hardware such as printers, scanners, keyboards, mice, flash drives, external hard drives, joysticks, and cameras to computers of all kinds, including desktops, tablets, laptops, and notebooks. Many portable devices, like smartphones, eBook readers, and small tablets, use USB primarily for charging. USB charging has become so common that it is now easy to find replacement electrical outlets with USB ports built in at home improvement stores, negating the need for a USB power adapter.

USB is a system for connecting a wide range of peripherals to a computer, including pointing devices, displays, and data storage and communications products.

e) Differentiate between the following:
(i) Static RAM vs. Dynamic RAM

Solution:

• The main structural difference lies in the number of transistors and capacitors: a Dynamic RAM cell typically needs just one transistor and one capacitor, while a Static RAM cell needs six MOS transistors.

• Dynamic RAM must be refreshed periodically while the program runs, because the charge on its capacitors leaks away; Static RAM needs no refreshing.

• Data is stored as a charge in a capacitor in Dynamic RAM, whereas data is stored in a flip-flop in Static RAM.

• Dynamic RAM needs extra refresh circuitry; Static RAM does not.

• A Dynamic RAM cell occupies less space on the chip than a Static RAM cell.

• Dynamic RAM is used to build large main-memory systems, whereas Static RAM is used for speed-sensitive caches.

• Static RAM is roughly four times more expensive than Dynamic RAM.

• Dynamic RAM consumes less power than Static RAM.

• For accessing data, Static RAM takes less time than Dynamic RAM.

• Dynamic RAM has a higher storage capacity; it can store roughly four times as much as Static RAM in the same chip area.

With all the differences mentioned, the discussion can be concluded by saying that Dynamic RAM is slower than Static RAM but denser and cheaper, which makes it suitable for main memory; Static RAM is costlier and faster, which makes it suitable for caches.

(ii) Seek time vs. Latency time

Solution:

Seek time is the time required to move the disk arm to the required track. Rotational delay, or latency, is the time it takes for the beginning of the required sector to reach the head. The sum of the seek time (if any) and the latency is the access time. The time taken to actually transfer a span of data is the transfer time.
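For example, on a hypothetical 7200 RPM disk with an assumed 9 ms average seek time, the average rotational latency is the time for half a revolution:

```python
# Average access time = average seek time + average rotational latency.
# The 9 ms seek figure is an assumption for illustration, not a measured value.
rpm = 7200
avg_seek_ms = 9.0
avg_latency_ms = 0.5 * (60_000 / rpm)    # half a revolution, in milliseconds
access_ms = avg_seek_ms + avg_latency_ms
print(round(avg_latency_ms, 2))  # 4.17
print(round(access_ms, 2))       # 13.17
```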

f) Explain the following terms:
(i) Resolution of monitors

Solution:

Imagine lying down in the grass with your nose pressed deep into the thatch. Your field of vision would not be very large, and all you would see are a few big blades of grass, some grains of dirt, and maybe an ant or two. This is a 14-inch 640 x 480 monitor. Now, get up on your hands and knees, and your field of vision will improve considerably: you'll see a lot more grass. This is a 15-inch 800 x 600 monitor. For a 1280 x 1024 perspective (on a 19-inch monitor), stand up and look at the ground. Some monitors can handle higher resolutions such as 1600 x 1200 or even 1920 x 1440, somewhat akin to a view from up in a tree. Monitors are measured in inches, diagonally from corner to corner of the screen. However, there can be a big difference between that measurement and the actual viewable area. A 14-inch monitor only has a 13.2-inch viewable area, a 15-inch sees only 13.8 inches, and a 20-inch will give you 18.8 inches (viewing 85.7% more than a 15-inch screen). A computer monitor is made of pixels (short for "picture element"). Monitor resolution is measured in pixels, width by height. 640 x 480 resolution means that the screen is 640 pixels wide by 480 tall, an aspect ratio of 4:3. With the exception of one resolution combination (1280 x 1024 uses a ratio of 5:4), all the aspect ratios above are 4:3.
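The aspect ratios of the standard 4:3 and 5:4 resolutions can be verified by reducing width:height to lowest terms:

```python
# Reduce each resolution's width:height by their greatest common divisor.
from math import gcd

for w, h in [(640, 480), (800, 600), (1280, 1024), (1600, 1200), (1920, 1440)]:
    g = gcd(w, h)
    print(f"{w} x {h} -> {w // g}:{h // g}")
# 640 x 480 -> 4:3
# 800 x 600 -> 4:3
# 1280 x 1024 -> 5:4
# 1600 x 1200 -> 4:3
# 1920 x 1440 -> 4:3
```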

(ii) Liquid Crystal Displays

Solution:

The most common application of liquid crystal technology is in liquid crystal displays (LCDs). From the ubiquitous wrist watch and pocket calculator to an advanced VGA computer screen, this type of display has evolved into an important and versatile interface.

A liquid crystal display consists of an array of tiny segments (called pixels) that can be manipulated to present information. This basic idea is common to all displays, ranging from simple calculators to a full color LCD television. Why are liquid crystal displays important? The first factor is size. As will be shown in the following sections, an LCD consists primarily of two glass plates with some liquid crystal material between them. There is no bulky picture tube. This makes LCDs practical for applications where sizes (as well as weight) are important. In general, LCDs use much less power than their cathode-ray tube (CRT) counterparts. Many LCDs are reflective, meaning that they use only ambient light to illuminate the display. Even displays that do require an external light source (i.e. computer displays) consume much less power than CRT devices. Liquid crystal displays do have drawbacks, and these are the subject of intense research. Problems with viewing angle, contrast ratio, and response time still need to be solved before the LCD replaces the cathode-ray tube. However with the rate of technological innovation, this day may not be too far into the future.

We will restrict this discussion to traditional nematic LCDs since the major technological advances have been developed for this group of devices. Other LC applications, such as those employing polymer stabilization of LC structure, are discussed in the appropriate section covering those materials.

(iii) Line printers

Solution:

Line Printers: A line printer prints one line at a time. The line printer is a form of high-speed impact printer, capable of printing 300 to 3000 lines per minute, so it is very fast. Large computer systems typically use line printers.

Line printers are of two types:

1. Drum Printers: A drum printer consists of a drum on which a number of characters are embossed; the number of tracks is decided after examining the width of the paper. Character sets of various sizes are available, for example 64 or 96 characters.

2. Chain Printers: These are also line printers, which print one line at a time. All the characters are placed on a chain; character sets of 48, 64, and 96 characters are available. Hammers are placed in front of the chain, and the paper is placed between the hammers and the inked ribbon. The total number of hammers equals the total number of print positions.

(iv) Workstation

Solution:

A workstation is a computer intended for individual use that is faster and more capable than a personal computer. It's intended for business or professional use (rather than home or recreational use). Workstations and applications designed for them are used by small engineering companies, architects, graphic designers, and any organization, department, or individual that requires a faster microprocessor, a large amount of random access memory (RAM), and special features such as high-speed graphics adapters. Historically, the workstation developed technologically at about the same time and for the same audience as the UNIX operating system, which is often used as the workstation operating system. Among the most successful makers of this kind of workstation are Sun Microsystems, Hewlett-Packard, DEC, and IBM.

g) What are the uses of following Utility Software:
(i) Disk checkers

Solution:

Disk Checker is a hard drive monitoring/repairing tool for Windows. It is even more of a suite of tools that allow you to scan your hard disks for errors. The first thing you will want to do is carry out a full scan. This process will most likely take about half an hour to complete for a 120-gigabyte hard drive. If the application finds any logical errors (it scans using several techniques), you will be able to fix those. Also, there is an option to automatically repair any errors. There are two methods for scanning: Direct Access and File Access. The Direct Access works best with local media and the File Access uses other parameters to scan data. Interestingly, Disc Checker will tell you what files are located on sectors with errors, so you can delete or maybe save them. Furthermore, the application has the ability to create disc images in an array of formats. To top it all off, there is a tab for S.M.A.R.T. data, which should give you some insight into your hard drive’s overall status. In short, with back-up and error scanning capabilities, Disk Checker should prove enough for a casual hard drive monitoring and repair suite. However, Disk Checker (and most other applications) can’t handle physical errors and some logical ones.

(ii) System restores

Solution:

System Restore is a feature in Microsoft Windows that allows the user to revert their computer's state (including system files, installed applications, the Windows Registry, and system settings) to that of a previous point in time, which can be used to recover from system malfunctions or other problems. First included in Windows ME, it has been included in all desktop versions of Windows released since, excluding Windows Server. In earlier Windows versions it was based on a file filter that watched changes for a certain set of file extensions, and then copied files before they were overwritten. An updated version of System Restore introduced by Windows Vista uses the Shadow Copy service as a backend (allowing block-level changes in files located in any directory on the volume to be monitored and backed up regardless of their location) and allows System Restore to be used from the Windows Recovery Environment in case the Windows installation no longer boots at all.

(iii) Disk Defragmenter

Solution:

When defragmenting a disk partition, the files stored on the disk are rearranged to occupy contiguous storage locations. This process increases the access speed to your files by minimizing the time required to read and write files to/from the disk and by maximizing the transfer rate. The system startup time for Windows is also improved. Flash-based drives (USB sticks and SSDs) have a limited number of write cycles, so defragmentation is not advisable for them. Windows 7's Disk Defragmenter knows this: if an SSD is detected, defragmentation is automatically deactivated for it.

There are several ways of finding Disk Defragmenter. The easiest one is to type ‘defragment’ in the search bar of the Start Menu. From the list of available programs choose Disk Defragmenter.

The second way of launching Disk Defragmenter is to go to ‘Start Menu -> All Programs -> Accessories -> System tools -> Disk Defragmenter’.

(iv) Disk Management

Solution:

Disk Management is a system utility for managing hard disks and the volumes or partitions that they contain. With Disk Management, you can initialize disks, create volumes, and format volumes with the FAT, FAT32, or NTFS file systems. Disk Management enables you to perform most disk-related tasks without restarting the system or interrupting users. Most configuration changes take effect immediately.

In this version of Windows, Disk Management provides the same features you may already be familiar with from earlier versions, but also adds some new features:

• Simpler partition creation. When you right-click a volume, you can choose whether to create a basic, spanned, or striped partition directly from the menu.

• Disk conversion options. When you add more than four partitions to a basic disk, you are prompted to convert the disk to dynamic or to the GUID partition table (GPT) partition style.

• Extend and shrink partitions. You can extend and shrink partitions directly from the Windows interface.

Question 2: (Covers Block 2)
a) Why are different software architectures developed? Explain the concept of cloud computing giving its advantages and disadvantages.

Solution:

Developers would quickly recognize repair dispatch as a workflow-centered application. They would gather requirements by walking through a formal workflow model with the sponsor to match the domain terminology of repair dispatch to the generic terminology of the workflow model. This would account for a good proportion of the requirements analysis and design phases of traditional approaches and proceed much faster. Developers would then adapt standard workflow components to actually create the application. Very few if any components would be created from scratch. The final result would be very robust and would look to the sponsor to be fully customized to their business, yet from the developer’s point of view would be mostly generic. I have personally seen in practice that an architectural approach to development can dramatically improve integration, flexibility, reuse, and development productivity. This approach may seem to be a very radical change from standard software engineering practice, yet this is exactly how the engineering process occurs in other engineering disciplines. A major issue, however, is how to transition to an architectural approach when the components and models needed for this approach aren’t yet available. Design handbooks like AMS’s DesignPro seem to be an important part of the answer. They provide the essential linkage between architectural concept and specific software components. The software development community at large also needs to recognize that traditional approaches to reuse are essentially archaeological approaches. In archaeology, software artifacts are dumped into repositories and various carrots and sticks are employed in the attempt to get developers to scavenge through them.

Cloud computing refers to the use of computing resources (hardware and/or software) that reside on a remote machine and are delivered to the end user as a service over a network, the most prevalent example being the internet. By definition, a user entrusts his data to a remote service over which he has little to no influence. Many critics dismissed it as the latest tech fad. However, cloud computing managed to cut through the hype and truly shift the paradigm of how IT is done nowadays. The cloud has succeeded in cutting costs for enterprises and in helping users focus on their core business instead of being obstructed by IT issues. For this reason, it seems that it is here to stay for the foreseeable future.

The Advantages

Mobility

One of the main advantages of working in the cloud is that it allows users the mobility necessary in this day and age of global marketing. For example, a busy executive on a business trip in Japan may need to know what is going on at the company headquarters in Australia.

Versatile Compatibility

It is an ongoing debate: which is better, the Mac or PC? Despite which side of the fence you stand on this argument, it makes no difference when it comes to implementing cloud solutions into a business model.

Only Pay for What You Need

Unlike many computing programs where the package comes with unnecessary applications, the cloud allows users to literally get what they pay for. This scalability allows for you to simply purchase the applications and data storage you really need.

Individuality

One of the most convenient aspects of working in the cloud is that it is compatible with aspects specific to the company. For example, cloud IT services can be scaled to meet changing system demands within a single company.

The Disadvantages

While the cloud benefits are numerous, this method of computation is not for all businesses. There are certain disadvantages that could persuade you that this system is not for your company, and it takes careful consideration and professional advice to determine if this is the case in any specific circumstance.

Less Control

Utilising the public cloud in business does have an obvious downside. By using this technology you risk losing a level of control over your company. While many IT managers are experimenting with various ways of implementing an in-house cloud system that runs on delivered metered services, this is not always the most lucrative business move.

Not Always Enough Room

Many have been disappointed with cloud technology, because they find that once they have instituted a cloud system within their business, they run out of storage space. While it is possible to update the system, it can be a painstaking process.

Security and Confidentiality

Since technology has started to expand in the exponential ways we are seeing in this day and age, cyber-crime has become a concerning issue. Cloud computing does pose the risk of increased security threats. While most companies have an up-to-date virus database, this does not make the files and information stored in the cloud immune to hackers.

b) Explain the Structured and modular software design paradigm with the help of an example. How is a service (as in service oriented software paradigm) different than an object? Explain with the help of an example.

Solution:

Refer to Block 2 of the course material for this answer.

c) Why do you need an operating system in a computer? Explain the file management, memory management and process control management in an Operating system. List the user level commands of any operating system for file management.

Solution:

An operating system is basically the general contractor of the computer. While the programs are busy doing their one specialized thing — plumbing, electrical, carpentry — the operating system is overseeing them all, communicating what they need to the processor and providing a common language that they can all work with to stay on the same page. There are a few other things your operating system does that you probably don’t think about. For instance, it’s the operating system (not just the hard drive) that’s going to decide how to manage memory. The operating system needs to delegate how much memory each process uses and make sure no memory overlaps. Also keep in mind that your home computer is most likely a single-user, multitasking operating system. That means you only have one processor, but it can run many programs at once. In reality, the computer is switching between processes at extremely high speeds — so high, you don’t know it. While you’re under the illusion that your CPU and operating system have a hand in every pot, your programs are under the impression that they have complete control of the operating system at any given moment.

File Management:

A set of files and directories contained on a single drive. The raw data on the drive is translated to this abstract view of files and directories by the file system manager according to the specification of the file system standard. “Each file is a named collection of data stored in a device. The file manager implements this abstraction and provides directories for organizing files. It also provides a spectrum of commands to read and write the contents of a file, to set the file read/write position, to set and use the protection mechanism, to change the ownership, to list files in a directory, and to remove a file…The file manager provides a protection mechanism to allow machine users to administer how processes executing on behalf of different users can access the information in files. File protection is a fundamental property of files because it allows different people to store their information on a shared computer, with the confidence that the information can be kept confidential.”

Memory Management:

• Primary (Main) Memory

• Provides direct access storage for CPU

• Processes must be in main memory to execute

• OS must:

• Mechanics

• Keep track of memory in use

• Keep track of unused (“free”) memory

• Protect memory space

• Allocate, deallocate space for processes

• Swap processes: memory <–> disk

• Policies

• Decide when to load each process into memory

• Decide how much memory space to allocate each process

• Decide when a process should be removed from memory

Process control Management:

The microprocessor (or central processing unit (CPU), or just processor) is the central component of the computer, and is in one way or another involved in everything the computer does. A computer program consists of a series of machine code instructions which the processor executes one at a time. This means that, even in a multi-tasking environment, a computer system can, at any given moment, only execute as many program instructions as there are processors. In a single-processor system, therefore, only one program can be running at any one time. The fact that a modern desktop computer can be downloading files from the Internet, playing music files, and running various applications all at (apparently) the same time, is due to the fact that the processor can execute many millions of program instructions per second, allowing the operating system to allocate some processor time to each program in a transparent manner. The process control block (PCB) maintains information that the operating system needs in order to manage a process. PCBs typically include information such as the process ID, the current state of the process (e.g. running, ready, blocked, etc.), the number of the next program instruction to be executed, and the starting address of the process in memory. The PCB also stores the contents of various processor registers (the execution context), which are saved when a process leaves the running state and which are restored to the processor when the process returns to the running state. When a process makes the transition from one state to another, the operating system updates the information in its PCB. When the process is terminated, the operating system removes it from the process table and frees the memory and any other resources allocated to the process so that they become available to other processes.
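The PCB described above can be sketched as a simple record type; the field names here are illustrative, not taken from any real operating system:

```python
# A toy process control block (PCB) with the fields described above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                    # process ID
    state: str                  # "running", "ready", "blocked", ...
    program_counter: int        # number of the next instruction to execute
    base_address: int           # starting address of the process in memory
    registers: dict = field(default_factory=dict)  # saved execution context

pcb = PCB(pid=42, state="ready", program_counter=0, base_address=0x8000)
pcb.state = "running"           # on a state transition, the OS updates the PCB
print(pcb.pid, pcb.state)       # 42 running
```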

User Level Commands File Management:
COPY: Copies one or more files from source disk/drive to the specified disk/drive.
XCOPY: Copies files and directories, including lower-level directories if they exist.
DEL: Removes specified files from specified disk/drive.
REN: Changes the name of a file (renaming).
ATTRIB: Sets or shows file attributes (read, write, hidden, Archive).
BACKUP: Stores or back up one or more files/directories from source disk/drive to other destination disk/drive.
RESTORE: Restores files that were backed up using BACKUP command.
EDIT: Provides a full screen editor to create or edit a text file.
FORMAT: Formats a disk/drive for data storage and use.

d) Draw a flow chart of a program that adds odd numbers up to 100.

Solution:



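The flowchart itself is not reproduced here; the logic it would depict (initialize, test the loop condition, add, move to the next odd number, output) can be sketched as:

```python
# Add the odd numbers from 1 up to 100.
total = 0
n = 1
while n <= 100:   # decision box: is n still within range?
    total += n    # process box: add the current odd number
    n += 2        # process box: step to the next odd number
print(total)      # output box: 2500
```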

e) Explain the terms: variable, data type, one dimensional array and subroutine with the help of an example each.

Solution:
Variable:

The best way to demonstrate what a variable does is by way of an example. Take the calculation:

A = 3 + 4

A variable is used to store a value. It’s that simple. You can have a variable that stores any type of data, and you can have as many as you want. The following program shows you how the contents of the variable can be output to the screen:

A = 3 + 4

PRINT ( A )

Data Type:

We have established that statements are used to write a program. A statement can be broken up into a command and its data. The command is the operation, or task, you wish to perform. The data is that which must be used by the command to complete the operation; the data is also referred to as the parameter(s). There are many types of data you can use, including integer numbers, real numbers and strings. Each type of data holds a slightly different kind of value.

#include <stdio.h>

int main()
{
    printf("Storage size for int : %d \n", (int) sizeof(int));
    return 0;
}

One Dimensional Array

Arrays are going to be a very important part of your future programs. They allow you to store large amounts of data under a single name. You can then access the data by index rather than by name alone. If you had to write a program that stored each week's lottery numbers, typing out 52 unique variable names would be a lot of work, hard to maintain and quite unnecessary. Arrays allow you to create a special kind of variable that can store more than one item of data.

Syntax: data_type array_name[width];

Example: int roll [8];

In our example, int specifies the type of the variable, roll specifies the name of the variable, and the value in brackets, [8], may be new to a beginner. The brackets ([ ]) tell the compiler that this is an array, and the number mentioned in the brackets specifies how many elements (the values in an array are called elements) it can store. This number is called the dimension of the array. So, with respect to our example, we have declared an array of integer type named "roll" which can store the roll numbers of 8 students.

Subroutine:

To define your own subroutine, use the keyword sub, the name of the subroutine (without the ampersand), then the indented block of code (in curly braces) which makes up the body of the subroutine, something like this:

sub marine {
    $n += 1;    # Global variable $n
    print "Hello, sailor number $n!\n";
}

Subroutine definitions can be anywhere in your program text, but programmers who come from a background of languages like C or Pascal like to put them at the start of the file. Others may prefer to put them at the end of the file, so that the main part of the program appears at the beginning. It’s up to you. In any case, you don’t normally need any kind of forward declaration.

Subroutine definitions are global; without some powerful trickiness, there are no private subroutines. If you have two subroutine definitions with the same name, the later one overwrites the earlier one. That’s generally considered bad form, or the sign of a confused maintenance programmer.

f) Explain the uses and/or facilities provided by the following software:
(i) E-mail

Solution:

Electronic mail, the transmission of messages over communications networks. The messages can be notes entered from the keyboard or electronic files stored on disk. Most mainframes, minicomputers, and computer networks have an e-mail system. Some electronic-mail systems are confined to a single computer system or network, but others have gateways to other computer systems, enabling users to send electronic mail anywhere in the world. Companies that are fully computerized make extensive use of e-mail because it is fast, flexible, and reliable. Most e-mail systems include a rudimentary text editor for composing messages, but many allow you to edit your messages using any editor you want. You then send the message to the recipient by specifying the recipient’s address. You can also send the same message to several users at once. This is called broadcasting. Sent messages are stored in electronic mailboxes until the recipient fetches them. To see if you have any mail, you may have to check your electronic mailbox periodically, although many systems alert you when mail is received. After reading your mail, you can store it in a text file, forward it to other users, or delete it. Copies of memos can be printed out on a printer if you want a paper copy.

(ii) Database Management System

Solution:

A database management system (DBMS) is the software that allows a computer to perform database functions of storing, retrieving, adding, deleting and modifying data. Relational database management systems (RDBMS) implement the relational model of tables and relationships.

The following are examples of database applications:

1. Computerized library systems

2. Automated teller machines

3. Flight reservation systems

4. Computerized parts inventory systems

From a technical standpoint, DBMSs can differ widely. The terms relational, network, flat, and hierarchical all refer to the way a DBMS organizes information internally. The internal organization can affect how quickly and flexibly you can extract information.

(iii) Spreadsheet

Solution:

A spreadsheet application is a computer program such as Excel, OpenOffice Calc, or Google Docs Spreadsheets.

It has a number of built-in features and tools, such as functions, formulas, charts, and data analysis tools, that make it easier to work with large amounts of data.

The term is also used to refer to the computer file created by the above mentioned programs. In this sense, a spreadsheet is a file used to store various types of data. The basic storage unit for a spreadsheet file is a table. In a table, the data is arranged in rows and columns to make it easier to store, organize, and analyze the information. In Excel an individual spreadsheet file is referred to as a workbook. A term related to this is worksheet, which, in Excel, refers to a single page or sheet in a workbook. By default, Excel has three worksheets per workbook. So, to put it all together, a spreadsheet program, such as Excel, is used to create workbook files that contain one or more worksheets containing data.

(iv) Word Processing

Solution:

Word processing, use of a computer program or a dedicated hardware and software package to write, edit, format, and print a document. Text is most commonly entered using a keyboard similar to a typewriter’s, although handwritten input and audio input (as for dictation) devices have been introduced. Word processors have various functions that allow a person to revise text without retyping an entire document. As the text is entered or after it has been retrieved, sections ranging from words and sentences to paragraphs and pages can be moved, copied, deleted, altered, and added to while displayed. As word processors have become more sophisticated, such functions as word counting, spell checking, footnoting, and index generation have been added. In addition, a document’s format—type size, line spacing, margins, page length, and the like—usually can be easily altered. To aid in these alterations, the text is displayed as it will appear when printed, with indented paragraphs and lists, multiple columns, tables, etc.; this is called a what-you-see-is-what-you-get (WYSIWYG) display.

Word processors are distinguished from text editors and desktop publishing systems. Text editors are designed for creating and editing computer programs. While they have features found in simple word processors, such as search and replace, that make the entry and editing of words and numbers easier, text editors provide only the most primitive facilities for text formatting and printing. Desktop publishers may include only simple word processing features but provide enhanced formatting functions, such as routines for merging text and graphics into complex page layouts.

g) Define the following terms:
(i) Open Source

Solution:

Open source refers to a program in which the source code is available to the general public for use and/or modification from its original design, free of charge, i.e., open. Open source code is typically created as a collaborative effort in which programmers improve upon the code and share the changes within the community. Open source sprouted in the technological community as a response to proprietary software owned by corporations. The Open Source Initiative (OSI) issues a certification standard indicating that the source code of a computer program is made available free of charge to the general public. The rationale for this movement is that a larger group of programmers not concerned with proprietary ownership or financial gain will produce a more useful and bug-free product for everyone to use. The concept relies on peer review to find and eliminate bugs in the program code, a process which commercially developed and packaged programs do not utilize. Programmers on the Internet read, redistribute and modify the source code, forcing a rapid evolution of the product. Because information is shared throughout the open source community rather than originating in and channelling through a corporation’s research and development departments, bugs are eliminated and the software improved at a much quicker rate than through the traditional development channels of commercial software.

(ii) Open Source development model

Solution:

Open-source software development is the process by which open-source software (or similar software whose source code is publicly available) is often developed. These are software products “available with their source code and under an open-source license to study, change, and improve their design”. Examples of popular open-source software products are Mozilla Firefox, Chromium, Android and the Apache OpenOffice suite. In the past, the open-source software development method was very unstructured, because no clear development tools, phases, etc., had been defined, as they have been with development methods such as the dynamic systems development method. Instead, each project had its own phases. More recently, however, there has been much better progress, coordination, and communication within the open-source community.

(iii) System Software

Solution:

System software is a type of computer program that is designed to run a computer’s hardware and application programs.

If we think of the computer system as a layered model, the system software is the interface between the hardware and user applications. The operating system (OS) is the best-known example of system software. The OS manages all the other programs in a computer. System software and application programs are the two main types of computer software. Unlike system software, an application program (often just called an application or app) performs a particular function for the user. Examples (among many possibilities) include browsers, email clients, word processors and spreadsheets.

(iv) Compiler

Solution:

A compiler is a special program that processes statements written in a particular programming language and turns them into machine language, or “code”, that a computer’s processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements. When executing (running), the compiler first parses (analyzes) all of the language statements syntactically one after the other and then, in one or more successive stages or “passes”, builds the output code, making sure that statements that refer to other statements are referred to correctly in the final code. Traditionally, the output of the compilation has been called object code or sometimes an object module. (Note that the term “object” here is not related to object-oriented programming.) The object code is machine code that the processor can process or “execute” one instruction at a time.

(v) Device Driver

Solution:

A device driver is a program that controls a particular type of device that is attached to your computer. There are device drivers for printers, displays, CD-ROM readers, diskette drives, and so on. When you buy an operating system, many device drivers are built into the product. However, if you later buy a new type of device that the operating system didn’t anticipate, you’ll have to install the new device driver. A device driver essentially converts the more general input/output instructions of the operating system to messages that the device type can understand. In Windows operating systems, a device driver file usually has a file name suffix of DLL or EXE. A virtual device driver usually has the suffix VXD.

(vi) Linker

Solution:

To convert Source Code to Machine code takes two phases.

Compilation: This turns the Source Code into Object Code.
Linking: This collects all the various Object Code files and builds them into an EXE or DLL.

Linking is quite a technical process. The obj files generated by the compiler include extra information that the linker needs to ensure that function calls between different obj files are correctly “joined up”.

Also called link editor and binder, a linker is a program that combines object modules to form an executable program. Many programming languages allow you to write different pieces of code, called modules, separately. This simplifies the programming task because you can break a large program into small, more manageable pieces. Eventually, though, you need to put all the modules together. This is the job of the linker.

(vii) Anti-virus software

Solution:

Anti-virus software is a program or set of programs that are designed to prevent, search for, detect, and remove software viruses, and other malicious software like worms, Trojans, adware, and more.

These tools are critical to have installed and kept up to date because an unprotected computer can be infected within minutes of connecting to the internet. The bombardment is constant, and anti-virus companies update their detection tools continually to deal with the more than 60,000 new pieces of malware created daily.

There are several different companies that build and offer anti-virus software and what each offers can vary but all perform some basic functions:

• Scan specific files or directories for any malware or known malicious patterns

• Allow you to schedule scans to automatically run for you

• Allow you to initiate a scan of a specific file or of your computer, or of a CD or flash drive at any time.

• Remove any malicious code detected –sometimes you will be notified of an infection and asked if you want to clean the file, other programs will automatically do this behind the scenes.

• Show you the ‘health’ of your computer

Always be sure you have the best, up-to-date security software installed to protect your computers, laptops, tablets and smartphones.

(viii) Diagnostic program

Solution:

A diagnostic program is a program written for the express purpose of locating problems with the software, the hardware, or any combination thereof in a system, or a network of systems. Ideally, diagnostic programs also suggest solutions to help the user resolve the issues. Common categories include:

• Diagnostics that are run on demand when a user needs assistance, typically within the primary operating system of the computer (e.g. Windows)

• “Off-line diagnostics” that are run outside the primary operating system, typically to reduce the masking influence of software on hardware issues

• Background diagnostics that monitor the system for failures and marginal events, and provide statistical data for failure prediction, and root cause analysis of actual failure conditions

• Solutions-oriented diagnostics, that diagnose and resolve user-perceived issues with a computer system.

Question 3: (Covers Block 3)
(a) What is a data communication system? Explain the characteristics of various communication media.

Solution:

Data communication refers to the exchange of data between a source and a receiver. Data communication is said to be local if the communicating devices are in the same building or a similarly restricted geographical area.

Types of Transmission Media

Transmission media is broadly classified into two groups.

1. Wired or Guided Media or Bound Transmission Media

2. Wireless or Unguided Media or Unbound Transmission Media

Wired or Guided Media or Bound Transmission Media: Bound transmission media are the cables that are tangible or have physical existence and are limited by the physical geography. Popular bound transmission media in use are twisted pair cable, co-axial cable and fiber optical cable. Each of them has its own characteristics like transmission speed, effect of noise, physical appearance, cost etc.

Wireless or Unguided Media or Unbound Transmission Media: Unbound transmission media are ways of transmitting data without using any cables. These media are not bounded by physical geography. This type of transmission is called wireless communication. Nowadays wireless communication is becoming popular, and wireless LANs are being installed in offices and on college campuses. Popular unbound transmission media include microwave, radio wave and infrared.

The data transmission capabilities of the various media differ depending upon several factors. These factors are:

1. Bandwidth. It refers to the data carrying capacity of a channel or medium. Higher bandwidth communication channels support higher data rates.

2. Radiation. It refers to the leakage of signal from the medium due to undesirable electrical characteristics of the medium.

3. Noise Absorption. It refers to the susceptibility of the media to external electrical noise that can cause distortion of data signal.

4. Attenuation. It refers to the loss of signal energy as the signal propagates outwards. The amount of energy lost depends on the frequency. Radiation and the physical characteristics of the medium also contribute to attenuation.

(b) Compare and contrast the characteristics of LAN, MAN and WAN.

Solution:

A LAN connection is a high-speed connection to a LAN. On the IUB campus, most connections are either Ethernet (10 Mbps) or Fast Ethernet (100 Mbps), and a few locations have Gigabit Ethernet (1000 Mbps) connections.

A MAN (metropolitan area network) is a larger network that usually spans several buildings in the same city or town. The IUB network is an example of a MAN.

A WAN (wide area network), in comparison to a MAN, is not restricted to a geographical location, although it might be confined within the bounds of a state or country. A WAN connects several LANs, and may be limited to an enterprise (a corporation or an organization) or accessible to the public.
