Posted to dev@trafficserver.apache.org by 张练 <wa...@gmail.com> on 2011/05/28 16:55:50 UTC

multi-frag in cache

I'm considering the multi-fragment issue. In iocore/cache/CacheWrite.cc, in
the function CacheVC::openWriteMain, there is this code:
  if (length > target_fragment_size() &&
      (length < target_fragment_size() + target_fragment_size() / 4))
    write_len = target_fragment_size();
  else
    write_len = length;

I want to know why it does this.
If an object's size is in [target_fragment_size() +
target_fragment_size() / 4, MAX_FRAG_SIZE), do multiple fragments
happen?

-- 
Best regards,
mohan_zl

Re: multi-frag in cache

Posted by John Plevyak <jp...@acm.org>.
This code is called periodically as new data is deposited in the buffer of
an ongoing stream.  When the stream ends, we will switch to the
openWriteClose path.

So, if we enter this code we might have already fragmented the file.  If
the amount of data is over 1MB (target_fragment_size()) then we are going
to write something, and because we are not on the XXXClose() path we are
not writing the header, so what we write will be a fragment, as the header
will be written separately.

If we arrive at this code with between 1MB and 1.25MB in the buffer
(regardless of how many fragments we might already have written) then we
write a 1MB fragment, under the assumption that we are expecting more data
and that handling 1MB freelist-allocated blocks is more efficient because
of reduced fragmentation and malloc overhead; otherwise we write whatever
we have.  In all cases, a write here results in a fragment.  Only if a
close has occurred is there a chance that the object will be unfragmented.
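The decision being discussed can be sketched as a standalone function (a
simplified sketch only; the real CacheVC::openWriteMain also handles the
header, the close path, and the gating of small writes, and the name
choose_write_len is hypothetical, not Traffic Server API):

```cpp
#include <cstdint>

// Simplified sketch of the write_len decision quoted in this thread.
// target stands in for target_fragment_size(), which defaults to ~1MB.
// choose_write_len is a hypothetical name, not part of Traffic Server.
static int64_t choose_write_len(int64_t length, int64_t target) {
  if (length > target && length < target + target / 4)
    return target;  // between 1x and 1.25x target: write one target-sized fragment
  return length;    // otherwise: write everything buffered so far
}
```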

On Sun, May 29, 2011 at 8:10 PM, 张练 <wa...@gmail.com> wrote:

> Can I understand it like this: if we have between 1MB and 1.25MB, then
> fragmentation happens, otherwise not?
> But I'm confused by the logic:

> If the first time through, the write size is larger than 1.25MB, then no
> fragmentation happens and we go to Lagain and try a second time, but in
> the next step write_len is between 1MB and 1.25MB, so multiple fragments
> happen.


If we are larger than 1.25MB we also write a fragment, just not a 1MB
fragment, and we will not have anything left in the buffer, true, but
because the header will be written separately it is a fragment nonetheless.

We do not hit Lagain unless we are not writing (or if ntodo() < 0 and we
have not called back the user, but that just gives the user an opportunity
to transfer more data into the buffer, in which case the amount we have can
never go down, so if we are over 1.25MB before, we will be after).


> So although we hoped to write as much as we can when we meet a large
> object, multiple fragments do in fact happen; is that what we wanted?
>

If we were over 1.25MB we will remain over 1.25MB, and we will write out as
much as we can.  In every case, writing at this point results in a fragment.
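The three regimes discussed in this thread can be summarized with a small
hypothetical classifier (write_action and the action strings are my own
labels, not Traffic Server code; in the real function the "wait" case for
sub-1MB data is gated elsewhere in openWriteMain):

```cpp
#include <cstdint>
#include <string>

// Hypothetical summary of the three regimes; not Traffic Server code.
// target stands in for target_fragment_size() (~1MB by default);
// buffered is the amount of stream data accumulated so far.
static std::string write_action(int64_t buffered, int64_t target) {
  if (buffered <= target)
    return "wait";  // hold the data until a close or more data arrives
  if (buffered < target + target / 4)
    return "write one target-size fragment";  // fast freelist path on read
  return "write everything";  // assume we are falling behind; drain buffer
}
```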


>
> On Sun, May 29, 2011 at 12:25 AM, John Plevyak <jp...@acm.org> wrote:
>
> > I assume that this is related to the SSD code handling multiple
> > fragments...
> >
> > The default target fragment size is 1MB.  This code says that if we have
> > between 1MB and 1.25MB then write only 1MB, as we will be able to use
> > the fast non-fragmenting buffer freelist to hold the 1MB on read.  Lower
> > than 1MB, it will not do the write until a close or more data arrives.
> > Greater than 1.25MB, we assume that we are falling behind on writing
> > (when the system is not overloaded this code will be called frequently
> > as data arrives) and we sacrifice some potential efficiency on read by
> > writing as much as we can.
> >
> > On Sat, May 28, 2011 at 7:55 AM, 张练 <wa...@gmail.com> wrote:
> >
> > > I'm considering the multi-fragment issue. In
> > > iocore/cache/CacheWrite.cc, in the function CacheVC::openWriteMain,
> > > there is this code:
> > >  if (length > target_fragment_size() &&
> > >      (length < target_fragment_size() + target_fragment_size() / 4))
> > >    write_len = target_fragment_size();
> > >  else
> > >    write_len = length;
> > >
> > > I want to know why it does this.
> > > If an object's size is in [target_fragment_size() +
> > > target_fragment_size() / 4, MAX_FRAG_SIZE), do multiple fragments
> > > happen?
> > >
> > > --
> > > Best regards,
> > > mohan_zl
> > >
> >
>
>
>
> --
> Best regards,
> Lian Zhang
>

Re: multi-frag in cache

Posted by 张练 <wa...@gmail.com>.
Can I understand it like this: if we have between 1MB and 1.25MB, then
fragmentation happens, otherwise not?
But I'm confused by the logic: if the first time through, the write size is
larger than 1.25MB, then no fragmentation happens and we go to Lagain and
try a second time, but in the next step write_len is between 1MB and
1.25MB, so multiple fragments happen. So although we hoped to write as much
as we can when we meet a large object, multiple fragments do in fact
happen; is that what we wanted?

On Sun, May 29, 2011 at 12:25 AM, John Plevyak <jp...@acm.org> wrote:

> I assume that this is related to the SSD code handling multiple
> fragments...
>
> The default target fragment size is 1MB.  This code says that if we have
> between 1MB and 1.25MB then write only 1MB, as we will be able to use
> the fast non-fragmenting buffer freelist to hold the 1MB on read.  Lower
> than 1MB, it will not do the write until a close or more data arrives.
> Greater than 1.25MB, we assume that we are falling behind on writing
> (when the system is not overloaded this code will be called frequently
> as data arrives) and we sacrifice some potential efficiency on read by
> writing as much as we can.
>
> On Sat, May 28, 2011 at 7:55 AM, 张练 <wa...@gmail.com> wrote:
>
> > I'm considering the multi-fragment issue. In iocore/cache/CacheWrite.cc,
> > in the function CacheVC::openWriteMain, there is this code:
> >  if (length > target_fragment_size() &&
> >      (length < target_fragment_size() + target_fragment_size() / 4))
> >    write_len = target_fragment_size();
> >  else
> >    write_len = length;
> >
> > I want to know why it does this.
> > If an object's size is in [target_fragment_size() +
> > target_fragment_size() / 4, MAX_FRAG_SIZE), do multiple fragments
> > happen?
> >
> > --
> > Best regards,
> > mohan_zl
> >
>



-- 
Best regards,
Lian Zhang

Re: multi-frag in cache

Posted by John Plevyak <jp...@acm.org>.
I assume that this is related to the SSD code handling multiple fragments...

The default target fragment size is 1MB.  This code says that if we have
between 1MB and 1.25MB then write only 1MB, as we will be able to use the
fast non-fragmenting buffer freelist to hold the 1MB on read.  Lower than
1MB, it will not do the write until a close or more data arrives.  Greater
than 1.25MB, we assume that we are falling behind on writing (when the
system is not overloaded this code will be called frequently as data
arrives) and we sacrifice some potential efficiency on read by writing as
much as we can.

On Sat, May 28, 2011 at 7:55 AM, 张练 <wa...@gmail.com> wrote:

> I'm considering the multi-fragment issue. In iocore/cache/CacheWrite.cc,
> in the function CacheVC::openWriteMain, there is this code:
>  if (length > target_fragment_size() &&
>      (length < target_fragment_size() + target_fragment_size() / 4))
>    write_len = target_fragment_size();
>  else
>    write_len = length;
>
> I want to know why it does this.
> If an object's size is in [target_fragment_size() +
> target_fragment_size() / 4, MAX_FRAG_SIZE), do multiple fragments
> happen?
>
> --
> Best regards,
> mohan_zl
>