Posted to dev@cocoon.apache.org by Niclas Hedhman <ni...@hedhman.org> on 2005/05/13 09:27:56 UTC

ImageOpReader [ was; Community health]

On Friday 13 May 2005 13:27, Bertrand Delacretaz wrote:
> Le 13 mai 05, à 07:19, Niclas Hedhman a écrit :
> > Can you explain this a bit further? Because I have no clue what you
> > think is
> > the actual problem.
>
> I think Vadim sees a potential denial of service attack, if your system
> allows one to generate images of a very large size.

Our tests show that:
 1. Image generation is in the sub-second range, even for really large images.
    We hit the server with 100 concurrent requests of sizes from 500-1500 px
    and couldn't register any particular load.

 2. No matter how big the images you generate are, the bandwidth the system
    is connected to will 'run out' way before the CPU gets bogged down.
    AFAIK, if I have a lot more bandwidth than you, I should be able to DoS
    your system anyway.
    
 3. If the image is too large, an OutOfMemoryError is the result (in 2 ms),
    which Tomcat recovers from.

In any event, I can't see how this is any different from any other
CPU-intensive webapp, or from a complex XSLT transform. But I would like to
hear from anyone who has such ideas.


Since this came up, I will introduce a "max-size" parameter, with a default in 
the 1000x1000 or so range.
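For illustration, the guard could look something like this; a minimal sketch only, with hypothetical class and method names (this is not the actual ImageOpReader code, and the default values are whatever the "max-size" parameter is configured to):

```java
// Hypothetical sketch of a "max-size" guard; names are illustrative,
// not the actual ImageOpReader code.
public final class MaxSizeGuard {
    private final int maxWidth;
    private final int maxHeight;

    public MaxSizeGuard(int maxWidth, int maxHeight) {
        this.maxWidth = maxWidth;
        this.maxHeight = maxHeight;
    }

    // Clamp a requested dimension so a hostile request cannot force an
    // arbitrarily large (and OOME-prone) image allocation.
    public int clampWidth(int requested) {
        return Math.min(requested, maxWidth);
    }

    public int clampHeight(int requested) {
        return Math.min(requested, maxHeight);
    }
}
```

With a 1000x1000 default, a request for a 4200 px image would simply be clamped to 1000 px instead of triggering an OutOfMemoryError.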


Cheers
Niclas

Re: ImageOpReader [ was; Community health]

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Friday 13 May 2005 17:48, Upayavira wrote:
> Does it have the 'don't enlarge' option that is in the current image
> reader? That seems quite sensible to me.

No. But I'll add that to the feature list as well. No biggie at all.

The heavy use of photos in a couple of sites we have done shows that the most
natural resize function we found was "fit within rectangle", which is what is
actually exposed in the URL space (not my call). That is a "site issue" and
has nothing to do with the Reader itself, which operates through parameters.

"Fit within" is IMHO the best of "keep-aspect" and "resize-to", since when
dealing with loads of photos you have no clue about the original sizes, but
you do know how much screen real estate you have available. And the concept
is easy to explain to 'non-geeks'.
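As a sketch of what "fit within rectangle" means, with the 'don't enlarge' option folded in; the class and method names here are hypothetical, not the reader's actual API:

```java
// Illustrative "fit within rectangle" resize: scale the image so it fits
// inside a boxW x boxH rectangle while keeping its aspect ratio.
// Names are hypothetical, not the actual reader code.
public final class FitWithin {

    // Returns {newWidth, newHeight}. If dontEnlarge is true, images that
    // already fit inside the box are returned unchanged.
    public static int[] fit(int w, int h, int boxW, int boxH, boolean dontEnlarge) {
        double scale = Math.min((double) boxW / w, (double) boxH / h);
        if (dontEnlarge && scale > 1.0) {
            scale = 1.0;
        }
        return new int[] { (int) Math.round(w * scale), (int) Math.round(h * scale) };
    }
}
```

So a 4000x3000 photo fits into a 1000x1000 box as 1000x750, while a 200x100 thumbnail is left alone when the don't-enlarge flag is set.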

Note: since the reader has not been taken out of Bugzilla, I have not updated
it there with these additional features; they are only sitting on my local
system.


Cheers
Niclas



Re: ImageOpReader [ was; Community health]

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Friday 13 May 2005 20:21, Vadim Gritsenko wrote:
> Niclas Hedhman wrote:
> > On Friday 13 May 2005 13:27, Bertrand Delacretaz wrote:
> >>Le 13 mai 05, à 07:19, Niclas Hedhman a écrit :
> >>>Can you explain this a bit further? Because I have no clue what you
> >>>think is the actual problem.
> >>
> >>I think Vadim sees a potential denial of service attack, if your system
> >>allows one to generate images of a very large size.
> >
> > Our tests show that:
> >  1. Image generation is in the sub-second range, even for really large
> > images. We hit the server with 100 concurrent requests of sizes from
> > 500-1500 px and couldn't register any particular load.
>
> I used 4096 :-) Not sure if it will accept a larger image size as well.

If you had tried 4200, it would have OOMEd :o)

> >  2. No matter how big the images you generate are, the bandwidth the
> > system is connected to will 'run out' way before the CPU gets bogged
> > down. AFAIK, if I have a lot more bandwidth than you, I should be able
> > to DoS your system anyway.
>
> DoS is not necessarily overloading CPU - overloading your channel is DoS
> too. If your channel has lots of bandwidth, then DDoS is the way to go :-)

But if I have more bandwidth than you, I can always sink your channel, right?
This is not really a URL issue at all. And it is not my problem :o)
The "fit within box" in the URL was a convenience.

> In your place, I personally would not accept arbitrary image sizes in the
> URL - even if I have it in the URL. I would limit access to only the image
> sizes I want to allow. This reduces the chance of abuse - and increases the
> chance of a cache hit (suppose you have a zoom control with 5 positions
> versus 1000 positions: the former has a higher probability of a cache hit,
> the latter a higher probability of a cache miss).

In reality, users will not hack URLs. Only geeks like you guys do that. ;o)
People in general click on the links available.

Another "hack" is that if you give it a different extension, you get a
different image format back as well, which also reduces the hit rate, by the
same reasoning. But I must say that things like this make Cocoon rock!

> > Since this came up, I will introduce a "max-size" parameter, with a
> > default in the 1000x1000 or so range.

> OT: Why square? Aren't photos ratio 4:3 or some such?

Ok, I'll make it 1280x1024...

Re: ImageOpReader [ was; Community health]

Posted by Upayavira <uv...@odoko.co.uk>.
Niclas Hedhman wrote:
> On Friday 13 May 2005 13:27, Bertrand Delacretaz wrote:
> 
>>Le 13 mai 05, à 07:19, Niclas Hedhman a écrit :
>>
>>>Can you explain this a bit further? Because I have no clue what you
>>>think is
>>>the actual problem.
>>
>>I think Vadim sees a potential denial of service attack, if your system
>>allows one to generate images of a very large size.
> 
> 
> Our tests show that:
>  1. Image generation is in the sub-second range, even for really large images.
>     We hit the server with 100 concurrent requests of sizes from 500-1500 px
>     and couldn't register any particular load.
> 
>  2. No matter how big the images you generate are, the bandwidth the system
>     is connected to will 'run out' way before the CPU gets bogged down.
>     AFAIK, if I have a lot more bandwidth than you, I should be able to DoS
>     your system anyway.
>     
>  3. If the image is too large, an OutOfMemoryError is the result (in 2 ms),
>     which Tomcat recovers from.
> 
> In any event, I can't see how this is any different from any other
> CPU-intensive webapp, or from a complex XSLT transform. But I would like to
> hear from anyone who has such ideas.
> 
> 
> Since this came up, I will introduce a "max-size" parameter, with a default in 
> the 1000x1000 or so range.

Does it have the 'don't enlarge' option that is in the current image 
reader? That seems quite sensible to me.

Regards, Upayavira

Re: ImageOpReader [ was; Community health]

Posted by Vadim Gritsenko <va...@reverycodes.com>.
Niclas Hedhman wrote:
> On Friday 13 May 2005 13:27, Bertrand Delacretaz wrote:
> 
>>Le 13 mai 05, à 07:19, Niclas Hedhman a écrit :
>>
>>>Can you explain this a bit further? Because I have no clue what you
>>>think is the actual problem.
>>
>>I think Vadim sees a potential denial of service attack, if your system
>>allows one to generate images of a very large size.
> 
> 
> Our tests show that:
>  1. Image generation is in the sub-second range, even for really large images.
>     We hit the server with 100 concurrent requests of sizes from 500-1500 px
>     and couldn't register any particular load.

I used 4096 :-) Not sure if it will accept a larger image size as well.


>  2. No matter how big the images you generate are, the bandwidth the system
>     is connected to will 'run out' way before the CPU gets bogged down.
>     AFAIK, if I have a lot more bandwidth than you, I should be able to DoS
>     your system anyway.

DoS is not necessarily overloading CPU - overloading your channel is DoS too. If 
your channel has lots of bandwidth, then DDoS is the way to go :-)


>  3. If the image is too large, an OutOfMemoryError is the result (in 2 ms),
>     which Tomcat recovers from.

So that means picking the right image size... Not sure how healthy large
doses of OOMEs are, though.


> In any event, I can't see how this is any different from any other
> CPU-intensive webapp, or from a complex XSLT transform. But I would like to
> hear from anyone who has such ideas.

XSLT transforms usually do not have bandwidth / computational dependencies on
the request parameters.

In your place, I personally would not accept arbitrary image sizes in the URL -
even if I have it in the URL. I would limit access to only the image sizes I
want to allow. This reduces the chance of abuse - and increases the chance of a
cache hit (suppose you have a zoom control with 5 positions versus 1000
positions: the former has a higher probability of a cache hit, the latter a
higher probability of a cache miss).
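One way to do that restriction, sketched with made-up names (the whitelist of sizes is a site decision, not anything in Cocoon):

```java
// Snap an arbitrary requested size onto a small whitelist of allowed
// sizes, so every request maps onto one of a few cacheable outputs.
// The ALLOWED values here are invented for illustration.
public final class AllowedSizes {
    private static final int[] ALLOWED = { 160, 320, 640, 1024 };

    // Returns the allowed size closest to the requested one.
    public static int snap(int requested) {
        int best = ALLOWED[0];
        for (int size : ALLOWED) {
            if (Math.abs(size - requested) < Math.abs(best - requested)) {
                best = size;
            }
        }
        return best;
    }
}
```

URL-hacked requests for odd sizes then collapse onto the whitelist (700 becomes 640, 90 becomes 160), which keeps the cache small and the hit rate high.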


> Since this came up, I will introduce a "max-size" parameter, with a default in 
> the 1000x1000 or so range.

Good idea.

OT: Why square? Aren't photos ratio 4:3 or some such?

Vadim