Sunday, March 9, 2008
Assignment 5
Network topology is the study of the arrangement or mapping of the elements (links, nodes, etc.) of a network, especially the physical (real) and logical (virtual) interconnections between nodes.
A local area network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN will have one or more links to one or more other nodes in the network and the mapping of these links and nodes onto a graph results in a geometrical shape that determines the physical topology of the network. Likewise, the mapping of the flow of data between the nodes in the network determines the logical topology of the network. It is important to note that the physical and logical topologies might be identical in any particular network but they also may be different.
Any particular network topology is determined only by the graphical mapping of the configuration of physical and/or logical connections between nodes. LAN Network Topology is, therefore, technically a part of graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ in two networks and yet their topologies may be identical.
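To make the graph-theory point concrete, here is a minimal sketch in Python: two hypothetical LANs with different media and node names reduce to the same degree sequence, a coarse fingerprint of the underlying graph. All names and links below are invented for illustration.

```python
# Two hypothetical LANs: different cable types, lengths, and speeds,
# but the same pattern of node-to-node links.
lan_a = {("srv", "ws1"), ("srv", "ws2"), ("srv", "ws3")}   # short UTP runs
lan_b = {("hub", "pc1"), ("hub", "pc2"), ("hub", "pc3")}   # long fiber runs

def shape(edges):
    """Reduce a network to a coarse topology fingerprint: its degree sequence."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sorted(degree.values())

# Both reduce to one degree-3 hub and three degree-1 leaves: a star.
print(shape(lan_a) == shape(lan_b))  # True
```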
Primary difference between Bridge and Gateway
Gateway
(1) A node on a network that serves as an entrance to another network. In enterprises, the gateway is the computer that routes traffic from a workstation to the outside network that is serving the Web pages. In homes, the gateway is the device, typically supplied by the ISP, that connects the user to the internet.
In enterprises, the gateway node often acts as a proxy server and a firewall. The gateway is also associated with both a router, which uses headers and forwarding tables to determine where packets are sent, and a switch, which provides the actual path for the packet in and out of the gateway.
(2) A computer system located on earth that switches data signals and voice signals between satellites and terrestrial networks.
(3) An earlier term for router; this sense is now obsolete, as router is the common term.
Network bridge
A bridge for interconnecting data networks includes an adapter connected to each network and a central programmed processor. Each adapter includes receive and transmit FIFO storage whose capacity is smaller than the packets being transferred from one network to the other.
The control program generates Receive Buffer Descriptors, which include buffer pointers, buffer length fields, and pointers to the next descriptors. These descriptors are used by the adapters to buffer received packets that are directed to another network. When a packet is buffered, the control program generates Transmission Descriptors, which are used by the adapter to transfer the packet data to the other network.
The control program modifies a packet when needed by generating and storing in its memory only the modified portion, and by including in the Receive Buffer Descriptors pointers to the buffered information that is to be transmitted and the sequence in which to send it.
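As a rough illustration of the descriptor chains described above (not the patent's actual data layout; the class and field names are invented), the descriptors can be modeled as linked records:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RxDescriptor:
    """Hypothetical Receive Buffer Descriptor: where a received packet
    fragment is buffered, how long it is, and which descriptor comes next."""
    buffer_ptr: int                         # address of the buffer in memory
    buffer_len: int                         # length of the buffered data
    next: Optional["RxDescriptor"] = None   # pointer to the next descriptor

@dataclass
class TxDescriptor:
    """Hypothetical Transmission Descriptor: tells the outgoing adapter
    which buffered data to send, and in what sequence."""
    buffer_ptr: int
    buffer_len: int
    next: Optional["TxDescriptor"] = None

# A packet larger than one FIFO's worth is described by a chain of buffers:
head = RxDescriptor(0x1000, 512, RxDescriptor(0x1200, 512))
```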
STAR TOPOLOGY
This is a form of LAN architecture in which nodes on a network are connected to a common central hub or switch by dedicated links. The star topology is now the most common layout used in LAN design. Each workstation is connected point-to-point to a single central location.
BUS TOPOLOGY
In the bus topology the server is at one end, and the client PCs (devices) are connected at different points or positions along the network. All signals pass through each of the devices. Each device has a unique identity and can recognize the signals intended for it. It is simple to design and implement.
The mesh topology is a variation in which devices are connected directly to one another with multiple links, rather than in sequence to a single network cable; in a full mesh, every node links to every other node. Each node is capable of transmitting, receiving, and routing data.
LAN Ring Topology
This topology is a simple design and consists of a single cable that forms the main data path in the shape of a ring. Each device is connected to a closed loop of cable. Signals travel in one direction from one node to all other nodes around the loop.
LAN Tree Topology
The Tree topology is essentially a hybrid of the bus and star layouts. The basic topology is similar to that of a bus, with nodes connected in sequence to a linear central cable. But tree networks may have "branches" that contain multiple workstations that are connected point-to-point in a star-like pattern. Signals from a transmitting node travel the length of the medium and are received by all other nodes.
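The four layouts above can be written down as edge lists. The helper names and node labels below are invented for illustration, not from any networking library:

```python
def star(center, leaves):
    """Star: every workstation has a dedicated link to the central hub."""
    return [(center, leaf) for leaf in leaves]

def bus(nodes):
    """Bus: every device taps the same shared cable."""
    return [("cable", n) for n in nodes]

def ring(nodes):
    """Ring: a closed loop, so the last node links back to the first."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))]

def tree(backbone, branches):
    """Tree: a bus-like backbone plus star-shaped branches hanging off it."""
    edges = [(backbone[i], backbone[i + 1]) for i in range(len(backbone) - 1)]
    for root, leaves in branches.items():
        edges += star(root, leaves)
    return edges

print(ring(["a", "b", "c"]))  # [('a', 'b'), ('b', 'c'), ('c', 'a')]
```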
Tuesday, January 15, 2008
Assignment 4
The major difference between deadlock, starvation, and race is this: in a deadlock, the problem occurs while the jobs are being processed, with each job blocking the others from continuing. Starvation, however, is an allocation of resources that keeps preventing one job from being executed. A race occurs before the process has started, when the outcome depends on which process gets to a shared resource first.
2. Example of Deadlock:
When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.
Example of Starvation:
When you have borrowed a book and the owner wants it back.
Example of Race:
A two-car race for a prize.
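The train example maps directly onto two threads acquiring two locks in opposite order. This is a hedged sketch, not code from the assignment; the names are invented:

```python
import threading

track_a = threading.Lock()   # the crossing as seen by train 1
track_b = threading.Lock()   # the crossing as seen by train 2

def train_1():
    with track_a:        # train 1 stops on track A...
        with track_b:    # ...and waits for track B to clear
            pass

def train_2():
    with track_b:        # train 2 stops on track B...
        with track_a:    # ...and waits for track A to clear
            pass

# If both threads take their first lock before either reaches its second,
# each waits forever for the lock the other holds: a deadlock. (The threads
# are created but deliberately not started here, so the sketch terminates.)
t1 = threading.Thread(target=train_1)
t2 = threading.Thread(target=train_2)
```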
3. Four necessary conditions needed for the deadlock from exercise #2:
If there is only one terminal for the trains. If the two trains both need the passengers. If there is no alternative terminal available. If the two trains are not full.
4.
5.
a. Deadlock will not normally happen, because there are two traffic lights that control the traffic. But when some motorists don't follow the traffic lights, deadlock can occur, because there is only one bridge to drive through.
b. Deadlock can be detected when traffic backs up bumper to bumper and accidents begin to happen.
c. The solution to prevent deadlock is for the traffic lights to be accurate and for motorists to follow them, in order to drive smoothly across the bridge.
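One way to picture the "accurate traffic light" of answer (c) is a single mutex guarding the bridge, so only one car can be on it at a time. A toy sketch under that assumption, not a traffic-engineering model:

```python
import threading

bridge = threading.Lock()   # the one-lane bridge: only one holder at a time

def cross(car):
    with bridge:                   # each car waits until the bridge is free,
        print(f"{car} crossing")   # so opposing cars can never meet mid-span

cars = [threading.Thread(target=cross, args=(f"car-{i}",)) for i in range(4)]
for c in cars:
    c.start()
for c in cars:
    c.join()
```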
Monday, December 10, 2007
Assignment 3
For instance, if your computer has a slow disk drive and you are doing a lot of paging (using virtual memory) to switch from one program to another rapidly, then your disk drive will become a performance bottleneck and your computer will seem to have trouble keeping up with your commands. The computer, here, is "thrashing": spending all of its time trying to keep up. Imagine a person drowning: they are thrashing because they are spending all of their energy on the one thing that keeps them alive.
Q: What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to page fault continuously. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.
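That detection-and-elimination rule can be sketched as a simple heuristic: if CPU utilization falls while many processes are resident, suspect thrashing and lower the multiprogramming level. The thresholds below are invented for illustration, not values from any real kernel:

```python
def adjust_mpl(cpu_utilization, mpl):
    """Hypothetical thrashing control; thresholds are illustrative only."""
    if cpu_utilization < 0.4 and mpl > 1:
        return mpl - 1   # low CPU use despite many processes: likely thrashing
    if cpu_utilization > 0.9:
        return mpl + 1   # CPU is busy; there may be room for another process
    return mpl

print(adjust_mpl(0.25, 8))  # 7: swap out a process to relieve the paging load
```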
Q: Once thrashing is detected, what can the operating system do to eliminate it?
Operating system designers attempt to keep CPU utilization high by maintaining an optimal multiprogramming level (MPL). Although running more processes makes it less likely to leave the CPU idle, too many processes incur serious memory competition and can even introduce thrashing, which eventually lowers CPU utilization. A common practice to address the problem is to lower the MPL with the aid of process swap-out/swap-in operations. This approach is expensive and is only used when the system begins serious thrashing. The objective of our study is to provide highly responsive and cost-effective thrashing protection by adaptively conducting priority page replacement in a timely manner. We have designed a dynamic system Thrashing Protection Facility (TPF) in the system kernel. Once TPF detects system thrashing, one of the active processes is identified for protection. The identified process has a short period of privilege in which it does not contribute its least recently used (LRU) pages for removal, so that the process can quickly establish its working set, improving CPU utilization. With the support of TPF, thrashing can be eliminated in its early stage by adaptive page replacement, so that process swapping is avoided or delayed until it is truly necessary. We have implemented TPF in a current and representative Linux kernel running on an Intel Pentium machine. Compared with the original Linux page replacement, we show that TPF consistently and significantly reduces page faults and the execution time of each individual job in several groups of interacting SPEC CPU2000 programs. We also show that TPF introduces little additional overhead to program executions, and its implementation in Linux (or Unix) systems is straightforward.
1. Explain the following:
A. Multiprogramming. Why is it used?
Multiprogramming is a technique used to maximize CPU utilization by running multiple programs simultaneously. Execution begins with the first program and continues until an instruction that must wait for a peripheral is reached; the context of this program is then stored, and the second program in memory is given a chance to run. The process continues until all programs finish running. Multiprogramming offers no guarantee that a program will run in a timely manner.
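A toy sketch of the switch-on-peripheral-wait behavior described above; the program traces are invented and each step is reduced to a label:

```python
# Each program is a list of steps: "cpu" work or a "wait" on a peripheral.
programs = {"prog1": ["cpu", "wait", "cpu"], "prog2": ["cpu", "cpu"]}

ready = list(programs)
while ready:
    name = ready.pop(0)                     # give the CPU to the next program
    steps = programs[name]
    while steps and steps[0] == "cpu":
        steps.pop(0)                        # execute CPU work
    if steps:                               # reached an instruction that waits
        steps.pop(0)                        # on a peripheral: store its context
        ready.append(name)                  # and let another program run
    else:
        print(name, "finished")
```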
B. Internal Fragmentation. How does it occur?
Internal fragmentation occurs when a fixed partition is only partially used by a program: the remaining space within the partition is unavailable to any other job, so that space is wasted.
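A quick worked computation, under assumed partition and job sizes, shows how that leftover space adds up:

```python
partitions = [100, 100, 100]   # fixed partitions, in KB (assumed sizes)
jobs = [90, 55, 70]            # one job loaded into each partition

# The space left inside each occupied partition is internally fragmented:
wasted = [p - j for p, j in zip(partitions, jobs)]
print(wasted, sum(wasted))     # [10, 45, 30] -> 85 KB unusable by other jobs
```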
C. Compaction: Why is it needed?
Compaction is needed because it is the process of collecting fragments of available memory space into one contiguous block by moving programs and data in a computer's memory or disks; it is also known as garbage collection.
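A minimal sketch of that process: slide every allocated block toward low addresses so the free fragments merge into one contiguous block. The memory layout is invented:

```python
# Memory as (start, size, owner) blocks; owner None marks a free fragment.
memory = [(0, 30, "A"), (30, 20, None), (50, 40, "B"), (90, 10, None)]

def compact(blocks):
    """Move allocated blocks down; return them plus one merged free block."""
    addr, packed = 0, []
    for start, size, owner in blocks:
        if owner is not None:
            packed.append((addr, size, owner))   # relocate the program's data
            addr += size
    total = sum(size for _, size, _ in blocks)
    packed.append((addr, total - addr, None))    # one contiguous free block
    return packed

print(compact(memory))  # [(0, 30, 'A'), (30, 40, 'B'), (70, 30, None)]
```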
E. Relocation: How often should it be performed?
It depends on how address references are processed in the program.
2. Describe the Major Disadvantages for each of the four memory allocation schemes presented in the chapter.
The disadvantage of this memory allocation scheme is that it requires an overhead process, so that while compaction is being done everything else must wait.
3. Describe the Major Advantages for each of the memory allocation schemes presented in the chapter.
Programs could be divided into segments of variable size or into pages of equal size. Each page or segment could be stored wherever there was an empty block big enough to hold it.
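The "stored wherever there was an empty block big enough to hold it" rule is first-fit placement. A sketch with assumed block sizes:

```python
free_blocks = [8, 4, 16, 4]      # sizes of empty blocks, in KB (assumed)

def first_fit(free, segment):
    """Place a segment in the first empty block big enough to hold it."""
    for i, size in enumerate(free):
        if size >= segment:
            free[i] -= segment   # shrink the block by the space now in use
            return i
    return None                  # nothing fits; the segment must wait

print(first_fit(free_blocks, 6))  # 0: the 8 KB block holds the 6 KB segment
```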
Thursday, November 29, 2007
Assignment 2
How each implements virtual memory?
Virtual memory is one of the most important subsystems of any modern operating system. Virtual memory is deeply intertwined with user processes, protection between processes and protection of the kernel from user processes, efficient shared memory, communication with IO (DMA, etc.), paging, swapping, and countless other systems. Understanding the VM subsystem greatly helps in understanding how all other parts of the kernel work and interact. Because of this, "Understanding the Linux Virtual Memory Manager" is a great guide to better understanding and working with the entire kernel.
How each handles page sizes?
As computer system main memories get larger and processor cycles-per-instruction (CPIs) get smaller, the time spent in handling translation lookaside buffer (TLB) misses could become a performance bottleneck. We explore relieving this bottleneck by (a) increasing the page size and (b) supporting two page sizes. We discuss how to build a TLB to support two page sizes and examine both alternatives experimentally with a dozen uniprogrammed, user-mode traces for the SPARC architecture. Our results show that increasing the page size to 32KB causes both a significant increase in average working set size (e.g., 60%) and a significant reduction in the TLB's contribution to CPI (CPI_TLB), namely a factor of eight, compared to using 4KB pages. Results for using two page sizes, 4KB and 32KB, on the other hand, show a small increase in working set size (about 10%) and a variable decrease in CPI_TLB (from negligible to as good as found with the 32KB page size). CPI_TLB when using two page sizes is consistently better for fully associative TLBs than for set-associative ones. Our results are preliminary, however, since (a) our traces do not include multiprogramming or operating system behavior, and (b) our page-size assignment policy may not reflect a real operating system's policy.
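The two-page-size lookup the abstract studies can be sketched as probing the TLB once per supported page size (fully associative here for simplicity; the entries and sizes are invented):

```python
# Fully associative TLB: (virtual page number, page size) -> physical frame.
tlb = {(0x12, 4096): 0x7A, (0x03, 32768): 0x1F}

def translate(vaddr):
    """Probe once per supported page size: 4KB first, then 32KB."""
    for page_size in (4096, 32768):
        vpn = vaddr // page_size
        frame = tlb.get((vpn, page_size))
        if frame is not None:
            return frame * page_size + vaddr % page_size
    raise LookupError("TLB miss: fall back to walking the page table")

print(hex(translate(0x12010)))  # 4KB page 0x12, offset 0x10 -> 0x7a010
```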
How each handles page faults?
The chip uses this 32-bit number to look up values in a page table. The value in this page table is the page's physical address (or an indication that the page is not available) and the accessibility of the page (read/write, user/kernel). The physical address maps to real memory in the computer that contains the data being accessed. If the page is not available, a page fault occurs and the kernel either kills the process or loads the page from disk, depending on the value in the page table (which is up to the kernel to set). If the page is read-only and a write is being attempted, a page fault occurs and the kernel either kills the process or does other clever stuff (also depending on data in the entry or elsewhere). If the page belongs to the kernel and the processor is not in kernel mode, a fault occurs (either a page fault or a GPF) and the kernel again decides what to do with the process.
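The lookup and fault cases just described, sketched with a dictionary standing in for the hardware page table; the entries are invented and the returned strings stand in for kernel actions:

```python
# Page table: virtual page -> (physical frame or None, writable, kernel_only).
page_table = {0: (5, True, False), 1: (None, True, False), 2: (9, False, False)}

def access(page, write=False, kernel_mode=False):
    entry = page_table.get(page)
    if entry is None or entry[0] is None:
        return "page fault: kill the process or load the page from disk"
    frame, writable, kernel_only = entry
    if write and not writable:
        return "page fault: kill the process or do other clever stuff"
    if kernel_only and not kernel_mode:
        return "fault: kernel page touched from user mode"
    return f"physical frame {frame}"

print(access(1))              # not present -> page fault
print(access(2, write=True))  # read-only page, write attempted -> page fault
```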
How each handles working set?
No such concept. For all practical purposes, the app has virtually no control over its working set, unless the programmer has done something as fundamentally irresponsible as using VirtualLock, which almost always is a mistake, usually caused by fundamental misunderstanding of the programming problem. It is an API sufficiently obscure that it is hardly ever used anyway, and therefore it can usually be ignored as a possibility. If the app tops out at 32K files, it has exceeded some other limit, for example, some internal table that some programmer did a #define of 32768 (or some multiple thereof), or it is running some MS-DOS-based system, such as Win98, that has built-in limits on how many objects you can add to a control. It has absolutely nothing to do with "working set".
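For contrast with the rant above, the textbook working-set model is easy to state: the working set is the set of pages referenced in the last delta memory references. A sketch with an invented reference string:

```python
def working_set(references, delta):
    """Pages touched within the last `delta` references (the WS window)."""
    return set(references[-delta:])

refs = [1, 2, 1, 3, 4, 4, 3, 4]   # a hypothetical page-reference string
print(working_set(refs, 4))       # {3, 4}: the process's current locality
```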
How each reconciles thrashing issues?
Many interactive computing environments provide automatic storage reclamation and virtual memory to ease the burden of managing storage. Unfortunately, many storage reclamation algorithms impede interaction with distracting pauses. Generation Scavenging is a reclamation algorithm that has no noticeable pauses, eliminates page faults for transient objects, compacts objects without resorting to indirection, and reclaims circular structures, in one third the time of traditional approaches. We have incorporated Generation Scavenging in Berkeley Smalltalk (BS), our Smalltalk-80 implementation, and instrumented it to obtain performance data. We are also designing a microprocessor with hardware support for Generation Scavenging.
Wednesday, November 21, 2007
Assignment 1
I.
ComputerworldUK Operating Systems is your essential resource for all the latest news, analysis, case studies and reviews of Windows, Linux, Unix, Macintosh OS, Netware, open source operating systems, OS-390 and Solaris.
II.
There are three primary limits to performance at the supercomputer level: individual processor speed, the overhead involved in making large numbers of processors work together on a single task, and the input/output speed between processors and between processors and memory. Input/output speed between the data-storage medium and memory is also a problem, but no more so than in any other kind of computer, and, since supercomputers all have amazingly high RAM capacities, this problem can be largely solved with the liberal application of large amounts of money.
The speed of individual processors is increasing all the time, but at a great cost in research and development, and the reality is that we are beginning to reach the limits of silicon-based processors. Seymour Cray showed that gallium arsenide technology could be made to work, but it is very difficult to work with and very few companies know enough to make usable processors based on it. It was such a problem that Cray Computer was forced to acquire their own GaAs foundry so that they could do the work themselves.