Is defragging necessary?

  • Thread starter: Lisa
Leythos wrote:

> In article , heybub@gmail.com

> says...

>>

>> Leythos wrote:

>>> In article , heybub@gmail.com

>>> says...

>>>>

>>>> Lisa wrote:

>>>>> I was told by a computer repairman that it's not necessary to

>>>>> defrag my laptop. If the hard drive gets full, remove files and

>>>>> always make sure I'm using virus protection.

>>>>> What are your thoughts?

>>>>

>>>> I can envision a situation in a data center with hundreds of

>>>> thousands of transactions per minute where defragging may be of

>>>> some slight benefit (assuming an NTFS file system).

>>>>

>>>> I can also imagine a user devoted to daily defragging experiencing

>>>> a power interruption during a critical directory manipulation

>>>> process.

>>>

>>> On a small computer with many add/delete/grow/shrink operations,

>>> defrag can significantly impact file access times and can be very

>>> noticeable to users if their system was badly file fragmented before

>>> the defrag.

>>>

>>> White-space fragmentation is not normally an issue, but a file that is

>>> fragmented into 8000 parts will have an impact on system

>>> performance.

>>>

>>> This argument has gone on for decades, but it's the people that

>>> maintain systems across many areas that know the benefits of defrag.


>>

>> Ignorance can be fixed - hence the original question. It's knowing

>> something that is false that's the bigger problem.

>>

>> Consider your example of 8,000 segments: A minimum

>> segment size of 4096 bytes implies a file of at least 32 MB. A

>> FAT-32 system requires a minimum of 16,000 head movements to gather

>> all the pieces. In this case, with an average access time of 12msec,

>> you'll spend over six minutes just moving the head around. Factor in

>> rotational delay to bring the track marker under the head, then

>> rotational delay to find the sector, and so on, you're up to ten

>> minutes or so to read the file.

>>

>> An NTFS system will suck up the file with ONE head movement. You

>> still have the rotational delays and so forth, but NTFS will cut the

>> six minutes off the slurp-up time.

>>

>> De-fragging an NTFS system DOES have its uses: For those who dust

>> the inside covers of the books on their shelves and weekly scour the

>> inside of the toilet water tank, a sense of satisfaction infuses

>> their very being after a successful operation.

>>

>> I personally think Prozac is cheaper, but to each his own.


>

> Why do you even consider discussing FAT-32?

>

> You do know that the default cluster size for NTFS (anything modern)

> is 4K in most instances, right?




In a FAT-xx system, the head has to move back to the directory to discover

the next segment. This is not the case with NTFS; pieces are read as they

are encountered and reassembled in the proper order in RAM.



>

> How does that impact your math now?




It doesn't.



>

> You might want to start learning about drives, formats, RAID,

> clusters, etc... before you post again.




Heh! I'll wager I know more about the things you mentioned than you can ever

imagine. I started my career designing test suites for 2311 disk drives on

IBM mainframes and have, mostly, kept up.
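
For anyone who wants to put rough numbers on the seek argument above, here is a
minimal back-of-the-envelope sketch in Python. The 8,000 fragments and the 12 ms
average seek are the figures quoted in the thread; the two-seeks-per-fragment
case models the "head goes back to the allocation table for every fragment"
premise, the one-seek-per-fragment case models the counter-argument, and the
rotational latency is an assumed extra. None of this is a measurement.

# Back-of-the-envelope model of the seek-time argument.
# All figures are illustrative assumptions, not measurements.

FRAGMENTS = 8_000           # fragment count quoted in the thread
AVG_SEEK_S = 0.012          # 12 ms average seek, as quoted
AVG_ROT_LATENCY_S = 0.004   # assumed ~half a revolution at 7200 rpm

def positioning_time(fragments, seeks_per_fragment):
    """Time spent moving/waiting for the head, ignoring data transfer."""
    seeks = fragments * seeks_per_fragment
    return seeks * (AVG_SEEK_S + AVG_ROT_LATENCY_S)

# Premise argued above: FAT needs an extra trip back to the allocation
# table for every fragment, i.e. two seeks per fragment.
fat_model = positioning_time(FRAGMENTS, seeks_per_fragment=2)

# Counter-argument: on any file system a non-adjacent fragment still
# costs at least one seek, NTFS included.
ntfs_model = positioning_time(FRAGMENTS, seeks_per_fragment=1)

# A defragmented (contiguous) file needs roughly one initial seek.
contiguous = positioning_time(1, seeks_per_fragment=1)

print(f"two seeks per fragment : {fat_model:7.1f} s")
print(f"one seek per fragment  : {ntfs_model:7.1f} s")
print(f"contiguous file        : {contiguous:7.3f} s")

Whatever assumptions you prefer, the sketch makes the same point either way:
positioning time scales with the number of fragments, which is exactly what
defragmenting reduces.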
 
HeyBub wrote:

> Leythos wrote:

>> In article , heybub@gmail.com

>> says...

>>>

>>> Leythos wrote:

>>>> In article , heybub@gmail.com

>>>> says...

>>>>>

>>>>> Lisa wrote:

>>>>>> I was told by a computer repairman that it's not necessary to

>>>>>> defrag my laptop. If the hard drive gets full, remove files and

>>>>>> always make sure I'm using virus protection.

>>>>>> What are your thoughts?

>>>>>

>>>>> I can envision a situation in a data center with hundreds of

>>>>> thousands of transactions per minute where defragging may be of

>>>>> some slight benefit (assuming an NTFS file system).

>>>>>

>>>>> I can also imagine a user devoted to daily defragging experiencing

>>>>> a power interruption during a critical directory manipulation

>>>>> process.

>>>>

>>>> On a small computer with many add/delete/grow/shrink operations,

>>>> defrag can significantly impact file access times and can be very

>>>> noticeable to users if their system was badly file fragmented before

>>>> the defrag.

>>>>

>>>> White-space fragmentation is not normally an issue, but a file that is

>>>> fragmented into 8000 parts will have an impact on system

>>>> performance.

>>>>

>>>> This argument has gone on for decades, but it's the people that

>>>> maintain systems across many areas that know the benefits of defrag.

>>>

>>> Ignorance can be fixed - hence the original question. It's knowing

>>> something that is false that's the bigger problem.

>>>

>>> Consider your example of 8,000 segments: A minimum

>>> segment size of 4096 bytes implies a file of at least 32 MB. A

>>> FAT-32 system requires a minimum of 16,000 head movements to gather

>>> all the pieces. In this case, with an average access time of 12msec,

>>> you'll spend over six minutes just moving the head around. Factor in

>>> rotational delay to bring the track marker under the head, then

>>> rotational delay to find the sector, and so on, you're up to ten

>>> minutes or so to read the file.

>>>

>>> An NTFS system will suck up the file with ONE head movement. You

>>> still have the rotational delays and so forth, but NTFS will cut the

>>> six minutes off the slurp-up time.

>>>

>>> De-fragging an NTFS system DOES have its uses: For those who dust

>>> the inside covers of the books on their shelves and weekly scour the

>>> inside of the toilet water tank, a sense of satisfaction infuses

>>> their very being after a successful operation.

>>>

>>> I personally think Prozac is cheaper, but to each his own.


>>

>> Why do you even consider discussing FAT-32?

>>

>> You do know that the default cluster size for NTFS (anything modern)

>> is 4K in most instances, right?


>

> In a FAT-xx system, the head has to move back to the directory to discover

> the next segment. This is not the case with NTFS; pieces are read as they

> are encountered and reassembled in the proper order in RAM.




But that's not quite the whole story: the bottom line is that a file's
fragments are scattered all over the hard drive, no matter what file system
you are using, so multiple seeks and accesses are needed to collect them
into RAM. If you've defragged the drive, the fragments end up in largely
contiguous sectors, so the number of widely scattered storage locations is
greatly reduced, and the total seek and access time drops with it.
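
One rough way to feel this effect on a real machine is to time a sequential
sweep of a file against reads of the same 4 KB pieces at shuffled offsets. This
is only a sketch: the file name and size are arbitrary, and an SSD or the OS
page cache (which will likely hold the whole file after the first pass) can
easily hide the difference, so treat the numbers as illustrative.

# Rough comparison of a contiguous sweep vs. scattered reads.
# Illustrative only: after the first pass the file may sit in the page
# cache, so use a file larger than RAM (or flush caches) for a fair
# comparison on a mechanical disk.
import os
import random
import time

PATH = "seek_test.bin"       # hypothetical scratch file
SIZE = 256 * 1024 * 1024     # 256 MB
CHUNK = 4096                 # read in 4 KB pieces, like one cluster

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

def timed_read(offsets):
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    return time.perf_counter() - start

sequential = list(range(0, SIZE, CHUNK))
scattered = sequential[:]
random.shuffle(scattered)

print(f"sequential sweep: {timed_read(sequential):.2f} s")
print(f"scattered reads : {timed_read(scattered):.2f} s")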
 
HeyBub wrote:

> Leythos wrote:

>> In article , heybub@gmail.com

>> says...

>>> Lisa wrote:

>>>> I was told by a computer repairman that it's not necessary to defrag

>>>> my laptop. If the hard drive gets full, remove files and always

>>>> make sure I'm using virus protection.

>>>> What are your thoughts?

>>> I can envision a situation in a data center with hundreds of

>>> thousands of transactions per minute where defragging may be of some

>>> slight benefit (assuming an NTFS file system).

>>>

>>> I can also imagine a user devoted to daily defragging experiencing a

>>> power interruption during a critical directory manipulation process.


>> On a small computer with many add/delete/grow/shrink operations,

>> defrag can significantly impact file access times and can be very

>> noticeable to users if their system was badly file fragmented before

>> the defrag.

>>

>> White-space fragmentation is not normally an issue, but a file that is

>> fragmented into 8000 parts will have an impact on system performance.

>>

>> This argument has gone on for decades, but it's the people that

>> maintain systems across many areas that know the benefits of defrag.


>

> Ignorance can be fixed - hence the original question. It's knowing something

> that is false that's the bigger problem.

>

> Consider your example of 8,000 segments: A minimum segment size

> of 4096 bytes implies a file of at least 32 MB. A FAT-32 system requires a

> minimum of 16,000 head movements to gather all the pieces. In this case,

> with an average access time of 12msec, you'll spend over six minutes just

> moving the head around. Factor in rotational delay to bring the track marker

> under the head, then rotational delay to find the sector, and so on, you're

> up to ten minutes or so to read the file.

>

> An NTFS system will suck up the file with ONE head movement. You still have

> the rotational delays and so forth, but NTFS will cut the six minutes off

> the slurp-up time.




Hi Heybub,



This is the second time I've heard you claim this.
How do you 'envision' the head(s) reading all the fragments in one go?

In your example there are 8000 fragments. If these are scattered all over the
place, the head has to visit a lot of different places before all the data is in.
Compare this to one continuous, sequential run of data, where the head
reads everything without extra seeking or skipping.

Also, and especially on systems that need a huge swapfile, filling up your
HD a few times can lead to a heavily fragmented swapfile. This carries a
performance penalty.

I have seen serious performance improvements (on both FAT32 and NTFS)
after defragging (including the system files, with
http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx).

Others claim the same. How do you explain that?



Erwin Moller







>

> De-fragging an NTFS system DOES have its uses: For those who dust the inside

> covers of the books on their shelves and weekly scour the inside of the

> toilet water tank, a sense of satisfaction infuses their very being after a

> successful operation.

>

> I personally think Prozac is cheaper, but to each his own.

>

>






--

"There are two ways of constructing a software design: One way is to

make it so simple that there are obviously no deficiencies, and the

other way is to make it so complicated that there are no obvious

deficiencies. The first method is far more difficult."

-- C.A.R. Hoare
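
For anyone who wants to check how fragmented a volume actually is before
deciding whether a defrag is worth the trouble, a minimal sketch along these
lines can be used. It assumes a reasonably modern Windows whose built-in
defrag.exe supports an analysis-only pass (/A), run from an elevated prompt;
the drive letter is only an example.

# Ask the built-in Windows defragmenter for an analysis report only;
# nothing on the disk is changed. Requires an elevated (administrator)
# prompt on Vista or later. "C:" is just an example volume.
import subprocess

result = subprocess.run(
    ["defrag", "C:", "/A"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)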
 
In article , heybub@gmail.com

says...

> > You do know that the default cluster size for NTFS (anything modern)

> > is 4K in most instances, right?


>

> In a FAT-xx system, the head has to move back to the directory to discover

> the next segment. This is not the case with NTFS; pieces are read as they

> are encountered and reassembled in the proper order in RAM.

>

> >

> > How does that impact your math now?


>

> It doesn't.

>

> >

> > You might want to start learning about drives, formats, RAID,

> > clusters, etc... before you post again.


>

> Heh! I'll wager I know more about the things you mentioned than you can ever

> imagine. I started my career designing test suites for 2311 disk drives on

> IBM mainframes and have, mostly, kept up.

>




And yet you don't seem to understand that on NTFS, file fragmentation

means that the heads still have to MOVE to reach the other fragments.



Try and keep up.



--

You can't trust your best friends, your five senses, only the little

voice inside you that most civilians don't even hear -- Listen to that.

Trust yourself.

spam999free@rrohio.com (remove 999 for proper email address)
 
In article ,

Since_humans_read_this_I_am_spammed_too_much@spamyourself.com says...

> I have seen serious performance improvements (on both FAT32 and NTFS)

> after defragging (also the systemfiles with

> http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

>

> Others claim the same. How do you explain that?

>




My guess is that he's either a troll or some kid in school that has no

friends so he has to pretend to know something here.



--

You can't trust your best friends, your five senses, only the little

voice inside you that most civilians don't even hear -- Listen to that.

Trust yourself.

spam999free@rrohio.com (remove 999 for proper email address)
 
In news:OJs07Hc9KHA.5476@TK2MSFTNGP06.phx.gbl,

Bob I typed:

> Brian V wrote:

>

>> What about defragmentation with a RAID system? Doesn't

>> this system eliminate file fragmentation? I am under the
>> impression that it keeps two copies of everything (one on
>> each drive), and that it is a faster (??more stable??)
>> and more reliable system?


>

> RAID 0 is nothing more than Mirrored Drives, it won't be

> faster or more stable, only provides a identical copy in

> the event a harddrive fails.




Jeez, quit guessing at what you "think" are the facts, dummy!



A RAID 0 (also known as a stripe set or striped volume) splits data evenly

across two or more disks (striped) with no parity information for

redundancy. It is important to note that RAID 0 was not one of the original

RAID levels and provides no data redundancy. RAID 0 is normally used to

increase performance, although it can also be used as a way to create a

small number of large virtual disks out of a large number of small physical

ones.
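
To make the distinction concrete, here is a toy sketch of the two layouts being
mixed up above: RAID 0 stripes consecutive chunks across the member disks with
no redundancy, while mirroring (RAID 1) puts a complete copy on every disk. The
chunk-level model is purely illustrative; real arrays stripe fixed-size blocks
at the controller level.

# Toy model of striping (RAID 0) vs. mirroring (RAID 1).
# Purely illustrative of the data layout, not of any real controller.

def raid0_layout(chunks, disks):
    """RAID 0: round-robin striping, chunk i lands on disk i % disks."""
    layout = {d: [] for d in range(disks)}
    for i, chunk in enumerate(chunks):
        layout[i % disks].append(chunk)
    return layout

def raid1_layout(chunks, disks):
    """RAID 1: every disk holds a full copy of every chunk."""
    return {d: list(chunks) for d in range(disks)}

data = [f"chunk{i}" for i in range(6)]
print("RAID 0:", raid0_layout(data, disks=2))   # no chunk stored twice
print("RAID 1:", raid1_layout(data, disks=2))   # everything stored twice

Losing one disk in the striped layout loses part of every file, which is why
RAID 0 buys performance rather than reliability.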
 
In news:4bf264cb$0$22917$e4fe514c@news.xs4all.nl,

Erwin Moller

typed:

....



>>

>> An NTFS system will suck up the file with ONE head

>> movement. You still have the rotational delays and so

>> forth, but NTFS will cut the six minutes off the slurp-up

>> time.


>

> Hi Heybub,

>

> This is the second time I hear you claiming this.

> How do you 'envision' the head(s) reading all fragments in

> one go?

> In your example: 8000 fragments. If these are scattered all

> over the place, the head has to read a lot of different places

> before all info is in. Compare this to one continuous

> sequential set of data where the head reads all without extra seeking

> and/or skipping parts.

>

> Also, and especially on systems that need a huge swapfile,

> filling up your HD a few times can lead to a heavily fragmented

> swapfile. This gives a performance penalty.

>

> I have seen serious performance improvements (on both FAT32

> and NTFS) after defragging (also the systemfiles with

> http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

>

> Others claim the same. How do you explain that?

>

> Erwin Moller

>

>


....



Remember, this is the guy who can suspend all laws of physics at his will.

There are a couple such people here in fact. It works for him because the

heads are "magnetic" and so are the data. But the head has a super-magnetic

mode: So, the head just comes down and sucks up all the data it needs from

the disk in one fell swoop. It can tell which ones to slurp up by the

arrangement of the magnetic field on the disk; so when the head goes

super-magnetic, it's only for those data parts that are of the right

polarity; the head just has to sit there until they all collect on it, and

then it moves them over to RAM to be used!

Sounds pretty simple to me! lol!



HTH,



Twayne`
 
In news:MPG.265c543da821451098a386@us.news.astraweb.com,

Leythos typed:

> In article ,

> Since_humans_read_this_I_am_spammed_too_much@spamyourself.com

> says...

>> I have seen serious performance improvements (on both

>> FAT32 and NTFS) after defragging (also the systemfiles with

>> http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

>>

>> Others claim the same. How do you explain that?

>>


>

> My guess is that he's either a troll or some kid in school

> that has no friends so he has to pretend to know something

> here.




You may be right, but recall also that there is always the "a little knowledge
is dangerous" thing too. E.g., if he was taught in school that RAID is used for
data redundancy, then he assumes RAID 0 is just another of those schemes. He may
not have noticed yet that this is a world of generalities, but of very specific
generalities that don't intuitively cover all cases.



HTH,



Twayne`
 
Twayne wrote:

> In news:4bf264cb$0$22917$e4fe514c@news.xs4all.nl,

> Erwin Moller

> typed:

> ...

>

>>> An NTFS system will suck up the file with ONE head

>>> movement. You still have the rotational delays and so

>>> forth, but NTFS will cut the six minutes off the slurp-up

>>> time.


>> Hi Heybub,

>>

>> This is the second time I hear you claiming this.

>> How do you 'envision' the head(s) reading all fragments in

>> one go?

>> In your example: 8000 fragments. If these are scattered all

>> over the place, the head has to read a lot of different places

>> before all info is in. Compare this to one continuous

>> sequential set of data where the head reads all without extra seeking

>> and/or skipping parts.

>>

>> Also, and especially on systems that need a huge swapfile,

>> filling up your HD a few times can lead to a heavily fragmented

>> swapfile. This gives a performance penalty.

>>

>> I have seen serious performance improvements (on both FAT32

>> and NTFS) after defragging (also the systemfiles with

>> http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

>>

>> Others claim the same. How do you explain that?

>>

>> Erwin Moller

>>

>>


> ...

>

> Remember, this is the guy who can suspend all laws of physics at his will.

> There are a couple such people here in fact. It works for him because the

> heads are "magnetic" and so are the data. But the head has a super-magnetic

> mode: So, the head just comes down and sucks up all the data it needs from

> the disk in one fell swoop. It can tell which ones to slurp up by the

> arrangement of the magnetic field on the disk; so when the head goes

> super-magnetic, it's only for those data parts that are of the right

> polarity; the head just has to sit there until they all collect on it, and

> then it moves them over to RAM to be used!

> Sounds pretty simple to me! lol!






LOL, thanks for that excellent explanation. ;-)



I always find it difficult to decide when to respond and when not to.
In cases where I see serious misinformation, like here with Heybub, I
feel sorry for the people who don't know better and subsequently take that
kind of advice seriously.



Ah well, that is how usenet was, is, and probably always will be. ;-)



Regards,

Erwin Moller



>

> HTH,

>

> Twayne`

>

>








--

"There are two ways of constructing a software design: One way is to

make it so simple that there are obviously no deficiencies, and the

other way is to make it so complicated that there are no obvious

deficiencies. The first method is far more difficult."

-- C.A.R. Hoare
 
In news:4bf2e578$0$22941$e4fe514c@news.xs4all.nl,

Erwin Moller

typed:

> Twayne wrote:

>> In news:4bf264cb$0$22917$e4fe514c@news.xs4all.nl,

>> Erwin Moller

>>

>> typed: ...

>>

>>>> An NTFS system will suck up the file with ONE head

>>>> movement. You still have the rotational delays and so

>>>> forth, but NTFS will cut the six minutes off the slurp-up

>>>> time.

>>> Hi Heybub,

>>>

>>> This is the second time I hear you claiming this.

>>> How do you 'envision' the head(s) reading all fragments in

>>> one go?

>>> In your example: 8000 fragments. If these are scattered

>>> all over the place, the head has to read a lot of different

>>> places before all info is in. Compare this to one

>>> continuous sequential set of data where the head reads all without

>>> extra seeking and/or skipping parts.

>>>

>>> Also, and especially on systems that need a huge swapfile,

>>> filling up your HD a few times can lead to a heavily

>>> fragmented swapfile. This gives a performance penalty.

>>>

>>> I have seen serious performance improvements (on both

>>> FAT32 and NTFS) after defragging (also the systemfiles with

>>> http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx)

>>>

>>> Others claim the same. How do you explain that?

>>>

>>> Erwin Moller

>>>

>>>


>> ...

>>

>> Remember, this is the guy who can suspend all laws of

>> physics at his will. There are a couple such people here

>> in fact. It works for him because the heads are "magnetic"

>> and so are the data. But the head has a super-magnetic

>> mode: So, the head just comes down and sucks up all the

>> data it needs from the disk in one fell swoop. It can tell

>> which ones to slurp up by the arrangement of the magnetic

>> field on the disk; so when the head goes super-magnetic,

>> it's only for those data parts that are of the right

>> polarity; the head just has to sit there until they all

>> collect on it, and then it moves them over to RAM to be

>> used! Sounds pretty simple to me! lol!


>

>

> LOL, thanks for that excellent explanation. ;-)

>

> I always find it difficult when to respond and when not.

> In cases I feel I see serious misinformation, like here

> with Heybub, I feel sorry for people who don't know that,

> and subsequently take that kind of advice seriously.

>

> Ah well, that is how usenet was, is, and probably always

> will be. ;-)

> Regards,

> Erwin Moller

>

>>

>> HTH,

>>

>> Twayne`




I know what you mean, Erwin. Sometimes there's an excuse for it, such as when

they just don't know better, but even then they have to be urged to pay

attention to the details.



Luck,



Twayne`
 