Calculate the Total Time Required to Read a File of Sectors If the Access Is Done Sequentially

If the operating system knew that a certain application was going to access file data in a sequential manner, how could it exploit this information to improve performance?

Simple. Assume the file is on some rotating disk medium (or even a semiconductor disk, which has non-zero access times). The OS reads the file sequentially, ahead of the point where the application is currently reading it. Done right, the next part of the file required by the application has already arrived in the main memory of the computer just before the program needs it, reducing access times to the overhead of the OS calls, rather than the OS calls plus the time to read the file from its storage medium. Operating systems have been doing this kind of thing for a very long time.

IRIS, the Data General Nova timesharing system I designed and implemented at Educational Data Systems back in 1970, would allocate files sequentially along tracks on a classic disk drive. IRIS had a modest number of sector buffers, used in LRU fashion, to support optimized disk access. When an application issued sequential reads, the OS would queue a read of the requested sector and initiate the read of the next sequential sector ("read ahead") when the first one arrived. With suitable interleaving of sectors on a track, it could read every other sector in real time as the disk rotated, even on the Nova with instruction times of a few microseconds. This is a big improvement over just reading sectors when requested.

Extending this idea, working with a hospital client, we designed a custom disk controller with a full track buffer and corresponding driver support. The OS would issue sector reads, which the driver turned into track read operations. For random accesses this would read, on average, half the track before encountering the necessary sector, and would automatically continue reading the following sectors as they rotated beneath the head until the entire track was read.
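The read-ahead scheme described above is easy to demonstrate in modern code. Here is a minimal sketch (my own illustration, not IRIS code), using a background thread to fetch the next block of a file-like object while the consumer is still processing the current one:

```python
import io
import queue
import threading

def read_ahead(f, block_size=4096, depth=2):
    """Yield blocks of f, prefetched by a background reader thread.

    While the consumer processes block N, the reader is already fetching
    block N+1, so computation overlaps with I/O latency.
    """
    q = queue.Queue(maxsize=depth)  # bounded: reader stays only 'depth' ahead

    def reader():
        while True:
            block = f.read(block_size)
            q.put(block)            # blocks if the consumer falls behind
            if not block:           # empty bytes = EOF sentinel
                return

    threading.Thread(target=reader, daemon=True).start()
    while True:
        block = q.get()
        if not block:
            return
        yield block

# Usage: blocks arrive in order; the next one is usually already buffered.
data = io.BytesIO(b"x" * 10000)
total = sum(len(b) for b in read_ahead(data, block_size=4096))
```

The bounded queue plays the role of IRIS's small pool of sector buffers: the reader can run only a fixed distance ahead of the application.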
So sequential reads went at rotational disk speeds for (on average) half a track, and then with zero delay for the other half of the track. Sequential writes were accomplished by writing to the track buffer and letting the drive place dirty parts of the track buffer on the disk as the head flew over their corresponding sectors. So streaming sequential rates came pretty close to disk rotation rates. With the computer itself having instruction times measured in a few microseconds, we could keep it pretty busy.

I'm sure designers of other operating systems (e.g., OS/360, and likely Multics before that) did similar things. It is rather the obvious thing to do.
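The title of this page asks for the total time to read a file of sectors sequentially. The standard back-of-the-envelope model (with drive parameters I am assuming for illustration, not taken from the answer above) is one seek, plus half a rotation of latency, plus N back-to-back sector transfers:

```python
# Assumed drive parameters (illustrative only).
rpm = 7200
sectors_per_track = 500
avg_seek_ms = 9.0

rotation_ms = 60_000 / rpm                   # one full rotation ~8.33 ms
avg_latency_ms = rotation_ms / 2             # average wait for the first sector
sector_ms = rotation_ms / sectors_per_track  # time for one sector to pass the head

def sequential_read_ms(n_sectors):
    """Total time: one average seek, half a rotation, then n contiguous sectors."""
    return avg_seek_ms + avg_latency_ms + n_sectors * sector_ms

# A 2500-sector file (five tracks' worth, ignoring track-switch time):
t = sequential_read_ms(2500)   # ~54.8 ms
```

Note how the fixed costs (seek plus rotational latency) are paid once; that is exactly why sequential access, and read-ahead that preserves it, is so much cheaper than random access.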

What does a disk defragmenter do?

Basically it physically moves the information on your hard disk around so it's more efficient for the computer to get to, but all your stuff will still be in the same folders logically, so you don't have to worry about your icons and music. Disk defragmenting can speed up the computer. The tool should have a button that says something to the effect of "Do I need to defragment?", or "Analyze", to let you know how fragmented your files are now.
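A toy model makes the "physical move, logical no-op" point concrete. This sketch (my own illustration; real defragmenters work on filesystem structures in place) rewrites a block list so each file's blocks become contiguous, then updates the file table so every file still reads back identically:

```python
def defragment(disk, files):
    """Toy defragmenter.

    'disk' is a list of block contents; 'files' maps each filename to the
    list of disk indices holding its blocks, in logical order. Returns a
    new disk where each file occupies contiguous blocks, plus the updated
    file table. Logically nothing changes, only physical placement.
    """
    new_disk, new_files, cursor = [], {}, 0
    for name, indices in files.items():
        new_files[name] = list(range(cursor, cursor + len(indices)))
        new_disk.extend(disk[i] for i in indices)
        cursor += len(indices)
    return new_disk, new_files

# A fragmented layout: file "a" scattered at blocks 0 and 3; "b" at 2 and 1.
disk = ["a0", "b1", "b0", "a1"]
files = {"a": [0, 3], "b": [2, 1]}
disk2, files2 = defragment(disk, files)
# disk2 is ["a0", "a1", "b0", "b1"]; reading "a" or "b" yields the same data.
```

After compaction, a sequential read of either file no longer has to seek between scattered blocks, which is where the speedup comes from on a spinning disk.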

Why is an SSD faster than a conventional HDD?

For reads: parallelism and locality. For writes: well, it depends.

SSDs "stripe" data into arrays which can be accessed in parallel with each other, meaning that if a read operation spans data across multiple chips, the on-disk controller can issue parallel reads for the data. The latency of each of these fetches is likely to be constant and low, and in any case, they're parallel. This is in contrast with a spinning disk, which uses a rotating platter with an essentially serial access pattern. The disk controller figures out which block(s) the data is in, moves the head to the appropriate location, and waits for the disk to rotate the target sector under the head so that the read can begin. For serial reads (i.e., large files), subsequent blocks of data may not require moving the head, so they can be fast; but for many small non-sequential reads, an SSD will totally thrash a traditional HD thanks to parallelism and fixed, low latency.

SSDs haven't traditionally done much better than spinning disk in write-heavy applications, thanks to the need to re-write entire blocks at a time. Essentially, that means that when you want to update some data, you have to update *all* the data in a block, and that can be slow/expensive, particularly if it means a read, an erase, and then a write (all of which must be serial). Articles like this draw better analogies than I can: http://www.anandtech.com/show/27...

High-end drives have overcome this by under-subscribing the amount of available storage and using controller smarts to ensure that writes happen to "fresh" blocks and that garbage is collected behind the scenes, eliminating the synchronous erase step. When that's done in parallel, SSDs can compete and win on write speeds, but it pays to watch the specs for how much of the disk is actually "available" (lower tends to be better) and to look for reviews that discuss sustained read/write performance numbers.
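The parallelism argument can be simulated directly. This sketch (my own illustration, with a made-up fixed chip latency) compares issuing four fixed-latency "chip reads" one after another, HDD-style, against issuing them concurrently the way an SSD controller fans a striped read across its flash channels:

```python
import time
from concurrent.futures import ThreadPoolExecutor

CHIP_LATENCY_S = 0.05  # assumed fixed per-chip read latency (illustrative)

def chip_read(chip_id):
    """Simulate one flash-chip read: fixed, low latency, no head movement."""
    time.sleep(CHIP_LATENCY_S)
    return f"data-from-chip-{chip_id}"

chips = [0, 1, 2, 3]   # one read striped across four chips

# Serial access: each piece waits for the previous one (HDD-like).
t0 = time.perf_counter()
serial = [chip_read(c) for c in chips]
serial_s = time.perf_counter() - t0

# Parallel access: the controller issues all reads at once (SSD-like).
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(chips)) as pool:
    parallel = list(pool.map(chip_read, chips))
parallel_s = time.perf_counter() - t0

# parallel_s is roughly one chip latency; serial_s is roughly four.
```

The data returned is identical either way; only the elapsed time differs, which is the whole point of striping across independent channels.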
