Linköping University 2011-05-14
Department of Computer and Information Science (IDA)
Concurrent programming, Operating systems and Real-time operating systems (TDDI04)

Storage management answers

1. A file is, from the view of a process, a sequentially numbered collection of bytes on secondary storage, no matter which order and representation it has on the physical disk. A file can be read sequentially, one byte after the other; reading or writing one byte automatically moves the position to the next. Some files can also move the read/write position to achieve random access.

2. a) A sequential device or file is read from start to end, one byte at a time. It is not possible to read any byte twice; once read, each byte is consumed. It may be possible to rewind some devices to the start.
b) A random access file is a sequential file that can also be read at any position by first selecting that position (see the sketch after answer 3).

3. a) Directories are not general files but special "file containers" in the file system, and are in general controlled by the OS or system software. Special protection mechanisms may apply.
b) Photos have no special relation to the OS and can be completely left to applications.
c) Executable program files are special in that they store not data but an "activity". They are normally specific to the execution environment (hardware and OS). It is thus motivated that the OS recognizes them as such and treats them specially, allowing special actions (execute), requiring a special format, and possibly adding special protection features (virus scanning). Alternatively it provides an interface that allows third-party software or system tools to do so.
d) Icon files are in some systems closely integrated in the user interface managed by the OS or system software. Keeping a format recognized by the OS is essential for them to be displayed correctly. In other systems this is left to applications.
e) Symbolic links are special files that should be transparent to applications, thus the OS/file system must know about them.
f) Archives may be provided by the OS for system backup and restore, but can in general be left to applications.
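The difference in answer 2 can be illustrated with a minimal POSIX sketch (my own example; the file name "data.bin" is hypothetical and not part of the original answers). Sequential reads consume bytes one after another, while lseek() moves the read/write position to give random access.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[16];
        int fd = open("data.bin", O_RDONLY);   /* hypothetical example file */
        if (fd < 0) { perror("open"); return 1; }

        /* Sequential access: each read() continues where the previous one ended. */
        read(fd, buf, sizeof buf);             /* bytes 0..15  */
        read(fd, buf, sizeof buf);             /* bytes 16..31 */

        /* Random access: move the position first, then read from there. */
        lseek(fd, 4096, SEEK_SET);             /* jump to byte 4096 */
        read(fd, buf, sizeof buf);             /* bytes 4096..4111  */

        close(fd);
        return 0;
    }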

4. a) Tar archives are suited for sequential backup and restore, as each file can be read and its metadata and content stored without the need to remember any other information. The store order is sequential, as is the restore order.
b) Tar archives are not suited for remove operations, as the entire archive must be scanned to find the position of the file in the archive. Adding files at the end is easy. Zip files provide an index that makes it quick, with few disk accesses, to find a specific file to extract or remove. Adding files requires the index to be moved to make space for the new file. (A sketch of the sequential record layout follows below.)
c) See b).

5. a) The directory structure stores the file and directory names coupled with the position on disk, the size, and other less essential attributes such as modification time and access rights. When a file is opened this information is cached for quick retrieval in the OS-central list of open files (inodes). This avoids scanning the directory tree each time the file is used. A counter keeps track of the number of users of the inode. Each process has a table of open files that enables the process to know which files it has open, and also enables several processes to have the same file open several times simultaneously. This per-process list stores, for each file, a link to the corresponding inode (its position in the OS-central list) and the current read/write position. (A sketch of these tables follows after the archive sketch below.)
b) The on-disk information provides permanent storage. An OS-central file list caches one copy of the disk information for all files currently in use; it knows where to find each file on disk. A per-process file list stores the information unique to each file instance opened by that process. Each open instance links to the common information in the OS table.
c) See a) and b).

6. It would involve scanning the directory tree to find the disk position of the file at each access, which would be too slow. It would also not provide any means of synchronization to avoid inconsistency when the file is read/written by several processes simultaneously. Further, the file names would be spread out over the disk, requiring a scan of a large part of the disk to find all names in a directory.
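As a rough sketch of the point in answer 4 (my own illustration, with made-up field sizes rather than the real tar format), a sequential archive is just header-content pairs laid out one after another; locating one member means walking the headers from the start, while appending only touches the end.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    struct header {           /* simplified archive member header, not real tar */
        char     name[32];
        uint32_t size;        /* length of the content that follows the header  */
    };

    /* Return the offset of the content of `wanted`, or -1 if it is not present.
     * Every preceding header must be read and its content skipped over. */
    long find_member(FILE *ar, const char *wanted)
    {
        struct header h;
        while (fread(&h, sizeof h, 1, ar) == 1) {
            long data_pos = ftell(ar);
            if (strncmp(h.name, wanted, sizeof h.name) == 0)
                return data_pos;
            fseek(ar, h.size, SEEK_CUR);   /* skip this member's content */
        }
        return -1;
    }

Removing a member additionally means rewriting everything stored after it, which is why a zip-style index of (name, offset) entries pays off for extract and remove operations.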

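The bookkeeping described in answer 5 can be sketched with a few structs (the names are my own, not a real OS API): one system-wide table of in-use inodes with a user counter, and one per-process table whose entries carry the private read/write position and point into the shared table.

    struct inode {                /* entry in the OS-central list of open files  */
        unsigned long disk_pos;   /* where the file's blocks are found on disk   */
        unsigned long size;       /* size, cached from the directory entry       */
        int           users;      /* number of open instances referring to it    */
    };

    struct open_file {            /* entry in a per-process open-file table      */
        struct inode *node;       /* shared, cached metadata                     */
        unsigned long offset;     /* this instance's current read/write position */
    };

    struct process {
        struct open_file files[16];   /* indexed by file descriptor, for example */
    };

Two processes opening the same file get separate open_file entries (and thus separate offsets) but share a single inode entry, whose users counter is then 2; only when it drops to 0 can the cached entry be discarded.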
7. a) A bitmap would require 4TB/1kB/8 = 512MB of memory. As this must be stored in RAM it is infeasible. Increasing the block size should be considered to make a bitmap feasible. A linked structure would be very long and tedious to set up. An alternative linked version, which also keeps track of the number of following consecutive blocks, would shorten this chain.
b) Knowing how the disk will be used allows us to make more intelligent choices. Clearly, storing only large files does not require a small block size. The internal fragmentation will be at most one block per file. For a 1MB block size and 350MB files this is 1MB/350MB = 0.29% of the disk space wasted at most. That can be deemed acceptable and will yield a bitmap of free sectors of 4TB/1MB/8 = 512kB of memory. For an average size of 5MB, however, 1MB blocks will not be acceptable, as the space wasted on internal fragmentation amounts to up to 20%. This calls for two volumes (partitions), one for movies and one for music, each with a different block size. The proportions are suggested to be the same as the expected space used by movies and music respectively. The block size for music could be for example 32kB, which would yield a free map of 1TB/32kB/8 = 4MB (max 0.64% waste). (The arithmetic is recomputed in the sketch below.)

8. Assuming the directory structure keeping the list of files is intact, one can scan the directory tree to find the sectors each file uses and build a list of occupied sectors. The blocks remaining after this procedure are free.

9. The fourth operation will fail, since it refers to the file "regular", which can no longer be found by that name. All other operations will succeed, also the seventh, as a file named "regular" exists at that point.

10. a) N operations to read the directory, 1 operation to write back the new name of the file. Moving a file in general only involves changing the name and/or place in the directory. The data on disk does not have to move.
b) In this case, as different partitions do not share disk blocks, the data must also be moved. N operations to read the source directory, P operations to read the destination directory, 1 operation to update each, 192 to read the file, and 192 to write it on the new partition (N+P+2+192+192 operations in total).

11. I assume the file is already opened, thus there is no need to read the directory.
a) 192 blocks must be read.
b) Assuming the given size of 192 blocks is correct, that position is after the end of the file, so no blocks are read (or 192 to find the end of the file). If the file size had been 1MB (2048 blocks), 1549 blocks must be read.
c) 192 blocks must be read and two written (three if less than 331 bytes in the last block are free).
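A small program re-checking the figures in answer 7 (my own sketch; it only repeats the arithmetic above, using binary units):

    #include <stdio.h>

    int main(void)
    {
        const double kB = 1024.0, MB = kB * 1024, TB = MB * 1024 * 1024;

        /* a) 4 TB of 1 kB blocks: one bit per block in the bitmap. */
        printf("bitmap, 1 kB blocks:  %.0f MB\n", 4 * TB / kB / 8 / MB);       /* 512 MB */

        /* b) 4 TB movie volume with 1 MB blocks. */
        printf("bitmap, 1 MB blocks:  %.0f kB\n", 4 * TB / MB / 8 / kB);       /* 512 kB */
        printf("waste, 350 MB movies: %.2f %%\n", 100.0 * MB / (350 * MB));    /* 0.29 % */

        /* 1 TB music volume with 32 kB blocks. */
        printf("bitmap, 32 kB blocks: %.0f MB\n", TB / (32 * kB) / 8 / MB);    /* 4 MB   */
        printf("waste, 5 MB songs:    %.2f %%\n", 100.0 * 32 * kB / (5 * MB)); /* ~0.6 %; 0.64 % if 5 MB is taken as 5000 kB */
        return 0;
    }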

12. I assume the file is already opened, thus there is no need to read the directory. I also assume the index blocks are cached after the first access, and I assume 32-bit pointers, thus one block can store 128 pointers.
a) 1 read of the direct index + 64 direct reads + 1 read of the indirect index + 128 indirect reads.
b) The maximum file size supported by the described setup is 64+128 blocks, at most 96kB. The mentioned position cannot exist in any file. Reading would require at most one access to the direct block to find the indirect block, one access to the indirect block to find the data block, and one access for the data block. Thus at most 3 accesses.
c) See b). In the case of space left, 3 accesses to find and read the last block, one write to add new blocks to the index, and one or two accesses to write the data (one if at least 331 bytes are free in the last block). Finding and updating the list of free blocks is not counted here.

13. Clearly, the file size limit with indexed allocation is not acceptable. A solution is to add double and triple indirect blocks. With 64 direct pointers, 128 indirect, and 128*128 doubly indirect pointers, the maximum file size is 8288kB. Adding a triple indirect block would add another 128*128*128*512B = 1024MB. (The sketch below reproduces this arithmetic.)

14. Neither linked nor indexed allocation suffers from external fragmentation, since any block can be allocated without regard to its location on disk. Both however suffer from internal fragmentation: linked allocation in the space not used in the last block at the end of the file, indexed allocation also in the index blocks that are not fully used.

15. For simplicity I assume we discuss only two drives. Two drives are enough to explain the points.
a) RAID 0 uses two disks to multiplex read and write operations. Losing one drive will lose all data on both drives. Data availability in case of a disk crash is thus low. Since the operations are multiplexed over all disks, the transfer of large data will at best be twice that of a single disk. Since data is multiplexed at a low level (less than a block), both disks must participate in each operation. Thus we can NOT handle one request on disk one while handling a second request on disk two. The performance when handling many requests would thus be comparable to that of a single disk.
b) RAID 1 uses two disks to store all information on both disks. Losing one drive will still leave one copy of all data. Availability is thus high. Writes must be done to both disks, thus the performance equals that of one disk. Reads can be interleaved over the two disks, thus at most double the performance compared to one disk. With many requests, each disk CAN handle different read requests individually and simultaneously, since the data is cloned on both disks. Thus at most double performance for reads. Writes still have the performance of one disk.
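The size limits in answers 12 and 13 follow directly from the pointer counts. A small sketch (my own, assuming as above 512-byte blocks, 4-byte block pointers and 64 direct pointers) reproduces the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        const long block  = 512;         /* bytes per block                    */
        const long ptrs   = block / 4;   /* 128 pointers per index block       */
        const long direct = 64;          /* direct pointers (direct index)     */

        long single = direct + ptrs;             /* direct + single indirect   */
        long dbl    = single + ptrs * ptrs;      /* + doubly indirect          */
        long triple = ptrs * ptrs * ptrs;        /* extra blocks from a triple
                                                    indirect block             */

        printf("direct + single indirect: %ld blocks = %ld kB\n",
               single, single * block / 1024);            /* 192 blocks = 96 kB      */
        printf("+ double indirect:        %ld blocks = %ld kB\n",
               dbl, dbl * block / 1024);                  /* 16576 blocks = 8288 kB  */
        printf("triple indirect adds:     %ld MB\n",
               triple * block / (1024 * 1024));           /* 1024 MB                 */
        return 0;
    }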
