In a system using the relocatable dynamic partitions scheme, given the following situation (using decimal form): Job Q is loaded into memory starting at memory location 42K. Calculate the exact starting address for Job Q in bytes. If the memory block has 3K of fragmentation, calculate the size of the memory block.
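The arithmetic for the starting address can be sketched in Python. The 42K load point and the 3K fragmentation figure are from the exercise; the job size needed to finish the block-size part is not given in the text, so it is left as a parameter here.

```python
# 1K = 1024 bytes, so a K-denominated address converts to bytes
# by multiplying by 1024.
def k_to_bytes(k):
    return k * 1024

start_q = k_to_bytes(42)
print(start_q)  # 43008

# Block size = job size + internal fragmentation (both in K).
# The job size is not given in the exercise, so it stays a parameter.
def block_size_k(job_size_k, fragmentation_k=3):
    return job_size_k + fragmentation_k
```

For example, a hypothetical 10K job in a block with 3K of fragmentation would occupy a 13K block.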
Is the resulting fragmentation internal or external? Explain your reasoning. ANS: Look for a fundamental understanding of the differences between internal and external fragmentation. In a system using the relocatable dynamic partitions scheme, given the following situation (using decimal form): Job W is loaded into memory starting at memory location K.
Calculate the exact starting address for Job W in bytes. If the relocation register holds the value , was the relocated job moved toward the lower or higher addressable end of main memory? ANS: lower addressable end of memory. By how many kilobytes was it moved? In a system using the fixed partitions memory allocation scheme, given the following situation and using decimal form: After Job J is loaded into a partition of size 50K, the resulting fragmentation is bytes: a. What is the size of Job J in bytes?
b. What type of fragmentation is caused? ANS: Internal fragmentation (not external fragmentation). Explain your answer. Students should show their calculations and explain how they know that the fragmentation is within the partition. Advanced Exercises: The relocation example presented in the chapter implies that compaction is done entirely in memory, without secondary storage.
Can all free sections of memory be merged into one contiguous block using this approach? Why or why not? ANS: This question can be answered using the following pseudocode and example, which show that all free sections of memory can be merged into one contiguous block without using secondary storage as a temporary holding area. It could generate a discussion on additional hardware components needed to implement this method of compaction.
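The pseudocode and memory diagram referred to in the answer are not reproduced in this text. A minimal sketch of in-memory compaction for relocatable dynamic partitions might look like the following; the job list, names, and memory size are illustrative, not from the exercise.

```python
# Slide every job toward address 0 in order of its current start
# address, leaving one contiguous free block at the top of memory.
# No secondary storage is needed: each job is moved within memory,
# and its relocation register would be set to (new start - old start).
def compact(jobs, memory_size):
    next_free = 0
    relocated = []
    for name, start, size in sorted(jobs, key=lambda j: j[1]):
        relocated.append((name, next_free, size))
        next_free += size
    free_block = (next_free, memory_size - next_free)  # (start, length)
    return relocated, free_block

jobs = [("J1", 10, 20), ("J2", 50, 15), ("J3", 80, 25)]
print(compact(jobs, 120))
```

Running the sketch merges the three scattered free areas into a single 60K block at the top of the 120K memory.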
Example: a main memory layout (starting at 0K) with free sections scattered between jobs; therefore, compaction is needed, and the pseudocode slides each job downward and merges the freed areas into one block.

WileyPLUS, including a test bank, self-check exercises, and a student solutions manual, is also part of the comprehensive support package. Learners examine operating system theory, installation, upgrading, configuring operating systems and hardware, file systems, virtualization, security, hardware options, storage, resource sharing, network connectivity, maintenance, and troubleshooting.
This edition helps readers understand the fundamental concepts of computer operating systems. In addition, general information introduces many other operating systems. Operating Systems Demystified describes the features common to most of today's popular operating systems and how they handle complex tasks. Written in a step-by-step format, this practical guide begins with an overview of what operating systems are and how they are designed. The book then offers in-depth coverage of the boot process; CPU management; deadlocks; memory, disk, and file management; network operating systems; and the essentials of system security.
Detailed examples and concise explanations make it easy to understand even the technical material, and end-of-chapter quizzes and a final exam help reinforce key concepts.
It's a no-brainer! You'll learn about: Fundamentals of operating system design Differences between menu- and command-driven user interfaces CPU scheduling and deadlocks Management of RAM and virtual memory Device management for hard drives, CDs, DVDs, and Blu-ray drives Networking basics, including wireless LANs and virtual private networks Key concepts of computer and data security Simple enough for a beginner, but challenging enough for an advanced student, Operating Systems Demystified helps you learn the essential elements of OS design and everyday use.
Fundamentals of Operating Systems: An operating system is probably the most important part of the body of software which goes with any modern computer system. Its importance is reflected in the large amount of manpower usually invested in its construction, and in the mystique by which it is often surrounded.
To the non-expert the design and construction of operating systems has often appeared an activity impenetrable to those who do not practise it. I hope this book will go some way toward dispelling the mystique, and encourage a greater general understanding of the principles on which operating systems are constructed. The material in the book is based on a course of lectures I have given for the past few years to undergraduate students of computer science.
It takes 8 milliseconds to service a page fault if an empty page is available or the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory access time is nanoseconds. Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than nanoseconds?
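The arithmetic can be sketched as follows. The 8 ms and 20 ms fault-service times and the 70 percent dirty-page rate are from the exercise; the memory access time and the effective-access-time target are missing from the text, so the 100 ns and 200 ns used below are assumed, typical textbook values.

```python
# Solve EAT = (1 - p) * mem + p * fault <= target for the maximum
# acceptable page-fault rate p.
def max_fault_rate(mem_ns=100, target_ns=200, clean_ms=8, dirty_ms=20,
                   dirty_frac=0.7):
    # Expected fault service time, weighted by how often the
    # replaced page is modified, converted from ms to ns.
    fault_ns = ((1 - dirty_frac) * clean_ms + dirty_frac * dirty_ms) * 1e6
    return (target_ns - mem_ns) / (fault_ns - mem_ns)

print(max_fault_rate())  # about 6.1e-06
```

With the assumed figures the expected fault service is 0.3 × 8 ms + 0.7 × 20 ms = 16.4 ms, and the fault rate must stay below roughly six faults per million accesses.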
Answer: 0. What can you say about the system if you notice the following behavior? a. It is most likely that during the period between the point at which the bit corresponding to a page is cleared and the point at which it is checked again, the page is accessed again and therefore cannot be replaced.
This results in more scanning of the pages before a victim page is found. If the pointer is moving slowly, then the virtual memory system is finding candidate pages for replacement extremely efficiently, indicating that many of the resident pages are not being accessed.
Also discuss under what circumstances the opposite holds. Answer: Consider the following sequence of memory accesses in a system that can hold four pages in memory. When page 5 is accessed, the least frequently used page-replacement algorithm would replace a page other than 1, and therefore would not incur a page fault when page 1 is accessed again.
Answer: Consider the sequence in a system that holds four pages in memory: 1 2 3 4 4 4 5 1. The most frequently used page-replacement algorithm evicts page 4 while fetching page 5, while the LRU algorithm evicts page 1. This is unlikely to happen much in practice. Assume that the free-frame pool is managed using the least recently used replacement policy.
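The LRU behavior claimed in these examples can be checked with a small simulator; this sketch counts LRU faults for the reference string 1 2 3 4 4 4 5 1 with four frames and confirms that LRU faults again on page 1 after evicting it to bring in page 5.

```python
from collections import OrderedDict

# Count page faults under LRU replacement. An OrderedDict keeps
# pages in recency order: the front entry is the least recently used.
def lru_faults(refs, frames):
    mem = OrderedDict()
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults

print(lru_faults([1, 2, 3, 4, 4, 4, 5, 1], 4))  # 6
```

Four compulsory faults load pages 1–4; page 5 evicts page 1 (fault 5), and the return to page 1 is fault 6. An MFU policy would instead evict page 4 and hit on page 1.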
Answer the following questions: a. If a page fault occurs and if the page does not exist in the free-frame pool, how is free space generated for the newly requested page? b. If a page fault occurs and if the page exists in the free-frame pool, how are the resident page set and the free-frame pool managed to make space for the requested page?
What does the system degenerate to if the number of resident pages is set to one? What does the system degenerate to if the number of pages in the free-frame pool is zero? The accessed page is then moved to the resident set.
Install a faster CPU. Install a bigger paging disk. Increase the degree of multiprogramming. Decrease the degree of multiprogramming. Install more main memory. Install a faster hard disk or multiple controllers with multiple hard disks. Add prepaging to the page fetch algorithms.
Increase the page size. Answer: The system obviously is spending most of its time paging, indicating over-allocation of memory. If the level of multiprogramming is reduced, resident processes would page fault less frequently and the CPU utilization would improve.
Another way to improve performance would be to get more physical memory or a faster paging drum. Get a faster CPU—No. Get a bigger paging drum—No. Increase the degree of multiprogramming—No. Decrease the degree of multiprogramming—Yes. e. Install more main memory—Likely to improve CPU utilization as more pages can remain resident and not require paging to or from the disks.
Install a faster hard disk, or multiple controllers with multiple hard disks—Also an improvement, for as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get more data more quickly.
Add prepaging to the page fetch algorithms—Again, the CPU will get more data faster, so it will be more in use. This is only the case if the paging action is amenable to prefetching, i.e., if accesses are largely sequential or otherwise predictable. Increase the page size—Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging action could ensue because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as it is to increase it.
What is the sequence of page faults incurred when all of the pages of a program are currently non-resident and the first instruction of the program is an indirect memory load operation? What happens when the operating system is using a per-process frame allocation technique and only two pages are allocated to this process?
Answer: The following page faults take place: a page fault to access the instruction, a page fault to access the memory location that contains a pointer to the target memory location, and a page fault when the target memory location is accessed. The operating system will generate three page faults, with the third page replacing the page containing the instruction. If the instruction needs to be fetched again to repeat the trapped instruction, then the sequence of page faults will continue indefinitely.
If the instruction is cached in a register, then it will be able to execute completely after the third page fault. What would you gain and what would you lose by using this policy rather than LRU or second-chance replacement?
Answer: Such an algorithm could be implemented with the use of a reference bit. After every examination, the bit is set to zero; set back to one if the page is referenced. The algorithm would then select an arbitrary page for replacement from the set of unused pages since the last examination. The advantage of this algorithm is its simplicity - nothing other than a reference bit need be maintained. The disadvantage of this algorithm is that it ignores locality by only using a short time frame for determining whether to evict a page or not.
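The periodic reference-bit policy just described can be sketched as follows; the class shape and page identifiers are illustrative, not from the text.

```python
import random

# Periodic reference-bit replacement: bits are set on access, a victim
# is chosen arbitrarily from the pages left unreferenced since the last
# examination, and all bits are cleared to start the next window.
class RefBitReplacer:
    def __init__(self, pages):
        self.ref = {p: 0 for p in pages}

    def access(self, page):
        self.ref[page] = 1

    def pick_victim_and_clear(self):
        unused = [p for p, bit in self.ref.items() if bit == 0]
        # Fall back to an arbitrary page if everything was referenced.
        victim = random.choice(unused) if unused else random.choice(list(self.ref))
        for p in self.ref:
            self.ref[p] = 0   # clear bits for the next examination window
        return victim
```

For example, if pages 1 and 2 are touched during a window but page 3 is not, page 3 is the only eviction candidate at the next examination.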
We can do this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames. We can associate with each page frame a counter of the number of pages that are associated with that frame.
Then, to replace a page, we search for the page frame with the smallest counter. Define a page-replacement algorithm using this basic idea. Specifically address the problems of (1) what the initial value of the counters is, (2) when counters are increased, (3) when counters are decreased, and (4) how the page to be replaced is selected.
How many page faults occur for your algorithm for the following reference string, for four page frames? What is the minimum number of page faults for an optimal page-replacement strategy for the reference string in part b with four page frames? Define a page-replacement algorithm addressing the problems of: 1.
Initial value of the counters—0. 2. Counters are increased—whenever a new page is associated with that frame. 3. Counters are decreased—whenever one of the pages associated with that frame is no longer required.
4. How the page to be replaced is selected—find a frame with the smallest counter. Use FIFO for breaking ties.

Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access. Thus, each memory reference through the page table takes two accesses.
To improve this time, we have added an associative memory that reduces access time to one memory reference if the page-table entry is in the associative memory. Assume that 80 percent of the accesses are in the associative memory and that, of the remaining, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
Answer: Thrashing is caused by underallocation of the minimum number of pages required by a process, forcing it to continuously page fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming. What would be the benefit of maintaining two working sets, one representing data and another representing code? As an example, the code being accessed by a process may retain the same working set for a long period of time.
However, the data the code accesses may change, thus reflecting a change in the working set for data accesses. This could result in a large number of page faults.
However, once a process is scheduled, it is unlikely to generate page faults since its resident set has been overestimated. Using Figure 9. The byte request is assigned a byte segment, the 60-byte request is assigned a 64-byte segment, and the byte request is assigned a byte segment. After the releases of memory, the only segment in use would be a byte segment containing bytes of data. What could be done to address this scalability issue?
Answer: This had long been a problem with the slab allocator: poor scalability with multiple CPUs. The issue comes from having to lock the global cache when it is being accessed.
This has the effect of serializing cache accesses on multiprocessor systems. Solaris has addressed this by introducing a per-CPU cache, rather than a single global cache. What are the advantages of such a paging scheme?
What modifications to the virtual memory system are needed to provide this functionality? Answer: The program could have a large code segment or use large-sized arrays as data. These portions of the program could be allocated to larger pages, thereby decreasing the memory overheads associated with a page table. The virtual memory system would then have to maintain multiple free lists of pages for the different sizes and would also need more complex address-translation code to take into account different page sizes.
First, generate a random page-reference string where page numbers range from . Apply the random page-reference string to each algorithm and record the number of page faults incurred by each algorithm. Implement the replacement algorithms such that the number of page frames can vary from . Assume that demand paging is used. Design two programs that communicate with shared memory using the Win32 API as outlined in Section 9.
The consumer process will then read and output the sequence from shared memory. In this instance, the producer process will be passed an integer parameter on the command line specifying the number of Catalan numbers to produce. Everything is typically stored in files: programs, data, output, etc. The student should learn what a file is to the operating system and what the problems are (providing naming conventions to allow files to be found by user programs, protection).
Two problems can crop up with this chapter. First, terminology may be different between your system and the book. This can be used to drive home the point that concepts are important and terms must be clearly defined when you get to a new system. Second, it may be difficult to motivate students to learn about directory structures that are not the ones on the system they are using. This can best be overcome if the students have two very different systems to consider, such as a single-user system for a microcomputer and a large, university time-shared system.
Projects might include a report about the details of the file system for the local system. It is also possible to write programs to implement a simple file system either in memory allocate a large block of memory that is used to simulate a disk or on top of an existing file system. In many cases, the design of a file system is an interesting project of its own.
Exercises

What problems may occur if a new file is created in the same storage area or with the same absolute path name? How can these problems be avoided? Answer: Let F1 be the old file and F2 be the new file. A user wishing to access F1 through an existing link will actually access F2. Note that the access protection for file F1 is used rather than the one associated with F2. This can be accomplished in several ways: a. Should the operating system maintain a separate table for each user, or just maintain one table that contains references to files that are being accessed by all users at the current time?
If the same file is being accessed by two different programs or users, should there be separate entries in the open-file table? Answer: By keeping a central open-file table, the operating system can perform the following operation that would be infeasible otherwise. Consider a file that is currently being accessed by one or more processes. If the file is deleted, then it should not be removed from the disk until all processes accessing the file have closed it.
This check could be performed only if there is centralized accounting of the number of processes accessing the file. On the other hand, if two processes are accessing the file, then separate state needs to be maintained to keep track of the current location within the file for each of the two processes.
This requires the operating system to maintain separate entries for the two processes. Answer: In many cases, separate programs might be willing to tolerate concurrent access to a file without requiring the need to obtain locks and thereby guaranteeing mutual exclusion to the files. Mutual exclusion could be guaranteed by other program structures such as memory locks or other forms of synchronization. In such situations, the mandatory locks would limit the flexibility in how files could be accessed and might also increase the overheads associated with accessing files.
Answer: By recording the name of the creating program, the operating system is able to implement features such as automatic program invocation when the file is accessed, based on this information. It does add overhead in the operating system and require space in the file descriptor, however. Answer: Automatic opening and closing of files relieves the user from the invocation of these functions, and thus makes it more convenient to the user; however, it requires more overhead than the case where explicit opening and closing is required.
Answer: When a block is accessed, the file system could prefetch the subsequent blocks in anticipation of future requests to these blocks. This prefetching optimization would reduce the waiting time experienced by the process for future requests. Answer: An application that maintains a database of entries could benefit from such support.
For instance, if a program is maintaining a student database, then accesses to the database cannot be modeled by any predetermined access pattern. Accesses to records are random, and locating records would be more efficient if the operating system were to provide some form of tree-based index.
Answer: The advantage is that there is greater transparency in the sense that the user does not need to be aware of mount points and create links in all scenarios.
The disadvantage, however, is that the filesystem containing the link might be mounted while the filesystem containing the target file might not be, and therefore one cannot provide transparent access to the file in such a scenario; the error condition would expose to the user that a link is a dead link and that the link does indeed cross filesystem boundaries.
Discuss the relative merits of each approach. Answer: With a single copy, several concurrent updates to a file may result in a user obtaining incorrect information and the file being left in an incorrect state. With multiple copies, there is storage waste, and the various copies may not be consistent with respect to each other.
Answer: The advantage is that the application can deal with the failure condition in a more intelligent manner if it realizes that it incurred an error while accessing a file stored in a remote filesystem. The disadvantage, however, is the lack of uniformity in failure semantics and the resulting complexity in application code. Answer: UNIX consistency semantics requires updates to a file to be immediately available to other processes.
Supporting such semantics for shared files on remote file systems could result in the following inefficiencies: all updates by a client have to be immediately reported to the fileserver instead of being batched (or even ignored if the updates are to a temporary file), and updates have to be communicated by the fileserver to clients caching the data immediately (again resulting in more communication).
The basic issues are device directory, free-space management, and space allocation on a disk. A file is a collection of extents, with each extent corresponding to a contiguous set of blocks. A key issue in such systems is the degree of variability in the size of the extents.
What are the advantages and disadvantages of the following schemes: a. All extents are of the same size, and the size is predetermined. b. Extents can be of any size and are allocated dynamically. c. Extents can be of a few fixed sizes, and these sizes are predetermined. Answer: If all extents are of the same size, and the size is predetermined, then it simplifies the block allocation scheme.
A simple bitmap or free list for extents would suffice. If the extents can be of any size and are allocated dynamically, then more complex allocation schemes are required. It might be difficult to find an extent of the appropriate size, and there might be external fragmentation. One could use the buddy-system allocator discussed in the previous chapters to design an appropriate allocator.
When the extents can be of a few fixed sizes, and these sizes are predetermined, one would have to maintain a separate bitmap or free list for each possible size. This scheme is of intermediate complexity and of intermediate flexibility in comparison to the earlier schemes.
Answer: The advantage is that while accessing a block that is stored at the middle of a file, its location can be determined by chasing the pointers stored in the FAT as opposed to accessing all of the individual blocks of the file in a sequential manner to find the pointer to the target block.
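Chasing the chain through the table rather than through the data blocks can be sketched as follows; the table contents and block numbers are illustrative.

```python
# Locate the nth block of a file by following its chain in the FAT.
# Only table entries are consulted, never the data blocks themselves,
# which is the advantage described above when the FAT is cached.
def nth_block(fat, start, n):
    block = start
    for _ in range(n):
        block = fat[block]   # follow the chain within the table
        if block is None:
            raise IndexError("file is shorter than n blocks")
    return block

# A file occupying disk blocks 2 -> 9 -> 5 -> 12 (None ends the chain).
fat = {2: 9, 9: 5, 5: 12, 12: None}
print(nth_block(fat, start=2, n=2))  # 5
```

With linked allocation and no FAT, finding block n would require reading the n preceding data blocks from disk to follow their embedded pointers.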
Typically, most of the FAT can be cached in memory and therefore the pointers can be determined with just memory accesses instead of having to access the disk blocks. Suppose that the pointer to the free-space list is lost. Can the system reconstruct the free-space list? Consider a file system similar to the one used by UNIX with indexed allocation.
Assume that none of the disk blocks is currently being cached. Suggest a scheme to ensure that the pointer is never lost as a result of memory failure. Those remaining unallocated pages could be relinked as the free-space list. The free-space list pointer could be stored on the disk, perhaps in several places. For instance, a file system could allocate 4 KB of disk space as a single 4-KB block or as eight 512-byte blocks.
How could we take advantage of this flexibility to improve performance? What modifications would have to be made to the free-space management scheme in order to support this feature? Answer: Such a scheme would decrease internal fragmentation. If a file is 5 KB, then it could be allocated a 4-KB block and two contiguous 512-byte blocks. In addition to maintaining a bitmap of free blocks, one would also have to maintain extra state regarding which of the subblocks are currently being used inside a block.
The allocator would then have to examine this extra state to allocate subblocks and coalesce the subblocks to obtain the larger block when all of the subblocks become free.
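The split for the 5 KB example can be sketched as follows; the 512-byte subblock size is inferred from the eight-per-block figure in the exercise (4 KB / 8 = 512 bytes).

```python
# Split a request into full 4 KB blocks plus 512-byte subblocks for
# the tail, minimizing internal fragmentation.
def split_request(size, big=4096, small=512):
    bigs, rest = divmod(size, big)
    smalls = -(-rest // small)   # ceiling division for the tail bytes
    return bigs, smalls

print(split_request(5 * 1024))   # (1, 2): one 4 KB block + two 512 B subblocks
```

A plain 4-KB-only allocator would instead hand the 5 KB file two full blocks and waste 3 KB to internal fragmentation.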
Answer: The primary difficulty that might arise is due to delayed updates of data and metadata. Updates could be delayed in the hope that the same data might be updated in the future or that the updated data might be temporary and might be deleted in the near future.
However, if the system were to crash without having committed the delayed updates, then the consistency of the file system is destroyed. Assume that the information about each file is already in memory.
For each of the three allocation strategies (contiguous, linked, and indexed), answer these questions: a. How is the logical-to-physical address mapping accomplished in this system? (For the indexed allocation, assume that a file is always less than blocks long.) b. If we are currently at logical block 10 (the last block accessed was block 10) and want to access logical block 4, how many physical blocks must be read from the disk?
Answer: Let Z be the starting file address (block number). Divide the logical address by the block size, with X and Y the resulting quotient and remainder, respectively. Add X to Z to obtain the physical block number; Y is the displacement into that block.
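The contiguous-allocation mapping just described can be checked numerically. The exercise omits the block size, so the 512 bytes used below is an assumption for illustration, as are the example addresses.

```python
# Contiguous allocation: logical address -> (physical block, offset).
# X = logical // block_size is added to the file's starting block Z;
# Y = logical % block_size is the displacement within that block.
def contiguous_map(logical, start_block, block_size=512):
    x, y = divmod(logical, block_size)
    return start_block + x, y

print(contiguous_map(1300, start_block=100))  # (102, 276)
```

Here logical byte 1300 falls 2 blocks past the start (2 × 512 = 1024), at offset 276 within physical block 102.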
Divide the logical address by the block size, with X and Y the resulting quotient and remainder, respectively. Get the index block into memory. The physical block address is contained in the index block at location X; Y is the displacement into the desired physical block. Typical disk devices do not have relocation or base registers (such as are used when memory is to be compacted), so how can we relocate files? Give three reasons why recompacting and relocation of files often are avoided.
Answer: Relocation of files on secondary storage involves considerable overhead: data blocks would have to be read into main memory and written back out to their new locations. Furthermore, relocation registers apply only to sequential files, and many disk files are not sequential. Answer: In cases where the user or system knows exactly what data is going to be needed.
Caches are algorithm-based, while a RAM disk is user-directed. Each client maintains a name cache that caches translations from file names to corresponding file handles. What issues should we take into account in implementing the name cache? Answer: One issue is maintaining consistency of the name cache.
If the cache entry becomes inconsistent, then it should be either updated or its inconsistency should be detected when it is used next. If the inconsistency is detected later, then there should be a fallback mechanism for determining the new translation for the name.
Also, another related issue is whether a name lookup is performed one element at a time for each subdirectory in the pathname or whether it is performed in a single shot at the server. If it is performed one element at a time, then the client might obtain more information regarding the translations for all of the intermediate directories.
On the other hand, it increases the network traffic as a single name lookup causes a sequence of partial name lookups. Answer: For a file system to be recoverable after a crash, it must be consistent or must be able to be made consistent. Therefore, we have to prove that logging metadata updates keeps the file system in a consistent or able-to-be-consistent state. For a file system to become inconsistent, the metadata must be written incompletely or in the wrong order to the file system data structures.
With metadata logging, the writes are made to a sequential log. The complete transaction is written there before it is moved to the file system structures. If the system crashes during file-system data updates, the updates can be completed based on the information in the log. Thus, logging ensures that file system changes are made completely, either before or after a crash. The order of the changes is guaranteed to be correct because of the sequential writes to the log.
If a change was made incompletely to the log, it is discarded, with no changes made to the file system structures. Therefore, the structures are either consistent or can be trivially made consistent via metadata logging replay. Copy to a backup medium all files from the disk. Copy to another medium all files changed since day 1. This contrasts with the schedule given in Section . What are the benefits of this system over the one in Section ? What are the drawbacks?
Are restore operations made easier or more difficult? Answer: Restores are easier because you can go to the last backup tape, rather than the full tape. No intermediate tapes need be read. More tape is used as more files change. We also discuss the lowest level of the file system: the secondary storage structure.
We first describe disk-head-scheduling algorithms. Next we discuss disk formatting and management of boot blocks, damaged blocks, and swap space.
We end with coverage of disk reliability and stable storage. Simulation may be the best way to involve the student with the algorithms (exercise ); see also the paper by Worthington et al. []. Be suspicious of the results of the disk scheduling papers from the s, such as Teorey and Pinkerton [], because they generally assume that the seek time function is linear, rather than a square root.
The paper by Lynch [b] shows the importance of keeping the overall system context in mind when choosing scheduling algorithms. Unfortunately, it is fairly difficult to find. Chapter 2 introduced the concept of primary, secondary, and tertiary storage. In this chapter, we discuss tertiary storage in more detail. First we describe the types of storage devices used for tertiary storage. Next, we discuss the issues that arise when an operating system uses tertiary storage.
Finally, we consider some performance aspects of tertiary storage systems. Explain why this assertion is true. Describe a way to modify algorithms such as SCAN to ensure fairness. Explain why fairness is an important goal in a time-sharing system. New requests for the track over which the head currently resides can theoretically arrive as quickly as these requests are being serviced.
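The SCAN ("elevator") service order behind this starvation argument can be sketched as follows; the request queue and head position are illustrative.

```python
# SCAN disk scheduling: service all requests in the current sweep
# direction, then reverse. A request just behind the head must wait
# for nearly a full sweep, which is the fairness concern noted above.
def scan_order(requests, head, direction="up"):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

queue = [95, 180, 34, 119, 11, 123, 62, 64]
print(scan_order(queue, head=50))
# [62, 64, 95, 119, 123, 180, 34, 11]
```

Cylinders 34 and 11, just below the head, are served last even though they are nearby, which is why fairness-oriented variants (or deadlines on requests) are proposed.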
To prevent unusually long response times. Paging and swapping should take priority over user requests.

A new design allows for easier navigation and enhances reader motivation.
Additional end-of-chapter exercises, review questions, and programming exercises help to further reinforce important concepts.
The book is therefore a suitable introduction to operating systems for students who have a basic grounding in computer science, or for people who have worked with computers for some time.
Ideally the reader should have a knowledge of programming and be familiar with general machine architecture, common data structures such as lists and trees, and the functions of system software such as compilers, loaders, and editors.
It will also be helpful if he has had some experience of using a large operating system, seeing it, as it were, from the outside. This in-depth book prepares you for any or all four exams, with full coverage of all exam objectives. Includes all chapter review questions and 8 total practice exams. Study anywhere, any time, and approach the exam with confidence. Visit www. This fully updated second edition also includes new material on virtual machine technologies such as VirtualBox, Vagrant, and the Linux container system Docker.