H
HeyBub
Guest
Leythos wrote:
> In article , heybub@gmail.com
> says...
>>
>> Leythos wrote:
>>> In article , heybub@gmail.com
>>> says...
>>>>
>>>> Lisa wrote:
>>>>> I was told by a computer repairman that it's not necessary to
>>>>> defrag my laptop. If the hard drive gets full, remove files and
>>>>> always make sure I'm using a virus protection.
>>>>> What are your thoughts?
>>>>
>>>> I can envision a situation in a data center with hundreds of
>>>> thousands of transactions per minute where defragging may be of
>>>> some slight benefit (assuming an NTFS file system).
>>>>
>>>> I can also imagine a user devoted to daily defragging experiencing
>>>> a power interruption during a critical directory manipulation
>>>> process.
>>>
>>> On a small computer with many add/delete/grow/shrink operations,
>>> defrag can significantly impact file access times and can be very
>>> noticeable to users if their system was badly file fragmented before
>>> the defrag.
>>>
>>> White-Space fragmention is not normally an issue, but a file that is
>>> fragmented into 8000 parts will have an impact on system
>>> performance.
>>>
>>> This argument has gone on for decades, but it's the people that
>>> maintain systems across many areas that know the benefits of defrag.
>>
>> Ignorance can be fixed - hence the original question. It's knowing
>> something that is false that's the bigger problem.
>>
>> Considering your example of 8,000 segments, consider: A minimum
>> segment size of 4096 bytes implies a minimum of 32 meg file. A
>> FAT-32 system requires a minimum of 16,000 head movements to gather
>> all the pieces. In this case, with an average access time of 12msec,
>> you'll spend over six minutes just moving the head around. Factor in
>> rotational delay to bring the track marker under the head, then
>> rotational delay to find the sector, and so on, you're up to ten
>> minutes or so to read the file.
>>
>> An NTFS system will suck up the file with ONE head movement. You
>> still have the rotational delays and so forth, but NTFS will cut the
>> six minutes off the slurp-up time.
>>
>> De-fragging an NTFS system DOES have its uses: For those who dust
>> the inside covers of the books on their shelves and weekly scour the
>> inside of the toilet water tank, a sense of satisfaction infuses
>> their very being after a successful operation.
>>
>> I personally think Prozac is cheaper, but to each his own.
>
> Why do you even consider discussing FAT-32?
>
> You do know that the default cluster size for NTFS (anything modern)
> is 4K in most instances, right?
In a FAT-xx system, the head has to move back to the directory to discover
the next segment. This is not the case with NTFS; pieces are read as they
are encountered and reassembled in the proper order in RAM.
>
> How does that impact your math now?
It doesn't.
>
> You might want to start learning about drives, formats, RAID,
> clusters, etc... before you post again.
Heh! I'll wager I know more about the things you mentioned than you can ever
imagine. I started my career designing test suites for 2311 disk drives on
IBM mainframes and have, mostly, kept up.
> In article , heybub@gmail.com
> says...
>>
>> Leythos wrote:
>>> In article , heybub@gmail.com
>>> says...
>>>>
>>>> Lisa wrote:
>>>>> I was told by a computer repairman that it's not necessary to
>>>>> defrag my laptop. If the hard drive gets full, remove files and
>>>>> always make sure I'm using a virus protection.
>>>>> What are your thoughts?
>>>>
>>>> I can envision a situation in a data center with hundreds of
>>>> thousands of transactions per minute where defragging may be of
>>>> some slight benefit (assuming an NTFS file system).
>>>>
>>>> I can also imagine a user devoted to daily defragging experiencing
>>>> a power interruption during a critical directory manipulation
>>>> process.
>>>
>>> On a small computer with many add/delete/grow/shrink operations,
>>> defrag can significantly impact file access times and can be very
>>> noticeable to users if their system was badly file fragmented before
>>> the defrag.
>>>
>>> White-Space fragmention is not normally an issue, but a file that is
>>> fragmented into 8000 parts will have an impact on system
>>> performance.
>>>
>>> This argument has gone on for decades, but it's the people that
>>> maintain systems across many areas that know the benefits of defrag.
>>
>> Ignorance can be fixed - hence the original question. It's knowing
>> something that is false that's the bigger problem.
>>
>> Considering your example of 8,000 segments, consider: A minimum
>> segment size of 4096 bytes implies a minimum of 32 meg file. A
>> FAT-32 system requires a minimum of 16,000 head movements to gather
>> all the pieces. In this case, with an average access time of 12msec,
>> you'll spend over six minutes just moving the head around. Factor in
>> rotational delay to bring the track marker under the head, then
>> rotational delay to find the sector, and so on, you're up to ten
>> minutes or so to read the file.
>>
>> An NTFS system will suck up the file with ONE head movement. You
>> still have the rotational delays and so forth, but NTFS will cut the
>> six minutes off the slurp-up time.
>>
>> De-fragging an NTFS system DOES have its uses: For those who dust
>> the inside covers of the books on their shelves and weekly scour the
>> inside of the toilet water tank, a sense of satisfaction infuses
>> their very being after a successful operation.
>>
>> I personally think Prozac is cheaper, but to each his own.
>
> Why do you even consider discussing FAT-32?
>
> You do know that the default cluster size for NTFS (anything modern)
> is 4K in most instances, right?
In a FAT-xx system, the head has to move back to the directory to discover
the next segment. This is not the case with NTFS; pieces are read as they
are encountered and reassembled in the proper order in RAM.
>
> How does that impact your math now?
It doesn't.
>
> You might want to start learning about drives, formats, RAID,
> clusters, etc... before you post again.
Heh! I'll wager I know more about the things you mentioned than you can ever
imagine. I started my career designing test suites for 2311 disk drives on
IBM mainframes and have, mostly, kept up.