Posted to users@spamassassin.apache.org by Karsten Bräckelmann <gu...@rudersport.de> on 2011/03/23 21:58:47 UTC

Re: Spam Eating Monkey causing 100% false positives for large institutions

On Wed, 2011-03-23 at 10:18 -1000, Warren Togami Jr. wrote:
> On 3/23/2011 7:38 AM, Blaine Fleming wrote:
> > > In the recent sa-updates, the Spam Eating Monkey rules were
> > > inappropriately enabled.  [...]

> > As soon as the bug was reported on the dev list I disabled the
> > 127.0.0.255 response code to avoid any additional issues.  I will be
> > turning this functionality back on as soon as the SA rules are updated
> > which I assume will be soon.
> 
> I would recommend blackholing those IP addresses at the firewall of the 
> DNS server, especially those 300 million+ sites that are impossible to 
> contact.  They might finally notice they have a serious configuration 
> issue and stop querying if their mail delivery backs up.
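For background on the 127.0.0.255 code mentioned above: a DNSBL answers a
query by returning an A record inside 127.0.0.0/8, and SEM uses 127.0.0.255
to tell a client it has exceeded the free query limit. A client that treats
any answer at all as a listing will therefore start flagging every message
once it crosses that limit, which is the failure mode behind this thread.
A minimal illustrative sketch of the distinction, not SpamAssassin's actual
code:

```python
from ipaddress import ip_address, ip_network

# DNSBL answers live in 127.0.0.0/8; the exact value's meaning is
# defined by each individual list.
DNSBL_NET = ip_network("127.0.0.0/8")
OVER_LIMIT = ip_address("127.0.0.255")  # SEM: query limit exceeded, NOT a listing

def is_listed(answer):
    """Correct handling: only genuine listing codes count as a hit."""
    if answer is None:           # NXDOMAIN -> not listed
        return False
    addr = ip_address(answer)
    if addr == OVER_LIMIT:       # administrative reply, ignore it
        return False
    return addr in DNSBL_NET

def buggy_is_listed(answer):
    """The failure mode in this thread: any answer at all counts as a hit."""
    return answer is not None
```

With the buggy variant, a site over the query limit gets 127.0.0.255 back on
every lookup and so scores every message as listed, i.e. 100% false
positives.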

Ugh, nasty boy. ;)  You do realize they wouldn't be hammering the SEM
DNS servers if the test rules hadn't accidentally slipped out via
sa-update.

Personally, I'd much rather have this resolved by another manual rule
update, so the queries should die down within another 24-48 hours.
Obviously, these sites do use sa-update...
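On why another rule update would quiet the queries by itself: as I
understand it, sa-update compares a published rules version number against
the locally installed one and downloads only when the remote is newer, so
every site running it from cron picks up a fix on its next run. A rough
sketch of that compare-and-fetch cycle; the function names here are
illustrative, not sa-update's internals:

```python
def needs_update(local_version, remote_version):
    # sa-update-style check: fetch only when the published version
    # is strictly newer than what is already installed.
    if local_version is None:        # nothing installed yet
        return True
    return remote_version > local_version

def run_update_cycle(local_version, remote_version, fetch):
    """fetch() stands in for downloading and installing the new rules."""
    if needs_update(local_version, remote_version):
        fetch()
        return remote_version        # new local version after install
    return local_version             # up to date, nothing fetched
```

Once the corrected ruleset is published with a higher version number, each
cron-driven run replaces the bad rules automatically.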

Thanks and props to Blaine for temporarily disabling the limit and
sustaining the load for a while! :)
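For admins who were hit before a corrected update landed, the usual local
mitigation would be to zero the offending rules' scores in local.cf. A
hedged sketch; the rule names below are illustrative placeholders, so check
your installed ruleset for the exact names:

```
# /etc/mail/spamassassin/local.cf -- emergency mitigation
# (illustrative rule names; confirm against your installed ruleset)
score RCVD_IN_SEMBLACK  0
score URIBL_SEM         0
```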


-- 
char *t="\10pse\0r\0dtu\0.@ghno\x4e\xc8\x79\xf4\xab\x51\x8a\x10\xf4\xf4\xc4";
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;i<l;i++){ i%8? c<<=1:
(c=*++x); c&128 && (s+=h); if (!(h>>=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}


Re: Spam Eating Monkey causing 100% false positives for large institutions

Posted by Karsten Bräckelmann <gu...@rudersport.de>.
On Wed, 2011-03-23 at 11:08 -1000, Warren Togami Jr. wrote:
> On 3/23/2011 10:58 AM, Karsten Bräckelmann wrote:

> > Ugh, nasty boy. ;)  You do realize they wouldn't be hammering the SEM
> > DNS servers if the test rules hadn't accidentally slipped out via
> > sa-update.
> >
> > Personally, I'd much rather have this resolved by another manual rule
> > update, so the queries should die down within another 24-48 hours.
> > Obviously, these sites do use sa-update...
> >
> > Thanks and props to Blaine for temporarily disabling the limit and
> > sustaining the load for a while! :)
> 
> Agreed that would be the ideal solution.  Who knows the procedure?  Is 
> that procedure documented?

Not as much as I would like it to be, but it is documented. See some
of my posts to dev@ over the last few days...




Re: Spam Eating Monkey causing 100% false positives for large institutions

Posted by "Warren Togami Jr." <wt...@gmail.com>.
On 3/23/2011 10:58 AM, Karsten Bräckelmann wrote:
> On Wed, 2011-03-23 at 10:18 -1000, Warren Togami Jr. wrote:
>> On 3/23/2011 7:38 AM, Blaine Fleming wrote:
>>>> In the recent sa-updates, the Spam Eating Monkey rules were
>>>> inappropriately enabled.  [...]
>
>>> As soon as the bug was reported on the dev list I disabled the
>>> 127.0.0.255 response code to avoid any additional issues.  I will be
>>> turning this functionality back on as soon as the SA rules are updated
>>> which I assume will be soon.
>>
>> I would recommend blackholing those IP addresses at the firewall of the
>> DNS server, especially those 300 million+ sites that are impossible to
>> contact.  They might finally notice they have a serious configuration
>> issue and stop querying if their mail delivery backs up.
>
> Ugh, nasty boy. ;)  You do realize they wouldn't be hammering the SEM
> DNS servers if the test rules hadn't accidentally slipped out via
> sa-update.
>
> Personally, I'd much rather have this resolved by another manual rule
> update, so the queries should die down within another 24-48 hours.
> Obviously, these sites do use sa-update...
>
> Thanks and props to Blaine for temporarily disabling the limit and
> sustaining the load for a while! :)
>
>

Agreed that would be the ideal solution.  Who knows the procedure?  Is 
that procedure documented?

Warren