Posted to users@spamassassin.apache.org by Jason Haar <Ja...@trimble.co.nz> on 2005/06/03 01:56:36 UTC

Are the RBL scores high enough?

Hi there

I'm finding a fair chunk of spam gets past SA-3.0.3 with scores of 3-4 
out of 5 even though it got 2+ network test hits.

e.g.

spamd[18676]: result: .  3 - 
DNS_FROM_RFC_ABUSE,DNS_FROM_RFC_POST,FROM_HAS_MIXED_NUMS,RCVD_IN_NJABL_DUL,RCVD_IN_SORBS_DUL 
scantime=4.4,size=1435,mid=<60...@singnet.com.sg>,autolearn=disabled

This had a Subject line of "russian XXXXX unusably in action fervid" - 
so I'm guessing it was spam (;-) - even though it only got a score of 3/5.

Obviously the default values are set that way to convey a degree of 
"confidence" in what each hit means; it's just that I wonder if they need 
updating. I guess I'm referring to the scores in 50_scores.cf.

e.g. RCVD_IN_NJABL_PROXY has a value of 1.0 - and yet the FAQ on the 
NJABL web site (of course) tells you to set "score NJABL_PROXY 3.0" :-)
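
(If I did want to follow the NJABL advice locally, I believe the override 
simply goes in local.cf - something like

  # local.cf - site-wide override; 3.0 is only the NJABL FAQ's suggested value
  score RCVD_IN_NJABL_PROXY 3.0

 - but that's exactly the kind of tweak I'm unsure about, hence this question.)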

But the wonderful authors of SA know far more than I do - so are the 
current levels still deemed to be correct?

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: Are the RBL scores high enough?

Posted by Matt Kettler <mk...@evi-inc.com>.
Maurice Lucas wrote:
> 
> Now we have to wait for 3.0.4 before there will be any change in the
> static scores

I hate to say it, but 3.0.4 is unlikely to change any scores.

Usually there's a new score set at the beginning of a major release, and one
"tweak" score update somewhere in the middle. All other minor releases are just
bugfixes and don't have a new scoreset.

Take a look at this update history:

2.50 new scoreset and rules
2.51 no change
2.52 no change
2.53 no change
2.54 all scores re-evolved
2.55 no change

2.60 new scoreset and rules
2.61 no change
2.62 no change
2.63 no change
2.64 new scores for network tests,
	opm removed
	xbl added
	8 other rules backported from 3.x

3.0.0 new scoreset and rules
3.0.1 one rule disabled
3.0.2 no change
3.0.3 bayes scores hand-tweaked

Re: Are the RBL scores high enough?

Posted by Maurice Lucas <ms...@taos-it.nl>.
From: "Matt Kettler" <mk...@evi-inc.com>
Sent: Friday, June 03, 2005 9:30 PM


> Kevin Sullivan wrote:
>> On Jun 2, 2005, at 8:27 PM, Matt Kettler wrote:
>>
>>> If one's wrong, they are ALL wrong.
>>>
>>> SA's rule scores are evolved based on a real-world test of a
>>> hand-sorted corpus of fresh spam and ham. The whole scoreset is
>>> evolved simultaneously to optimize the placement pattern.
>>>
>>> Of course, one thing that can affect accuracy is spams accidentally
>>> misplaced into the ham pile; that can cause some heavy score biasing.
>>> A little bit of this is unavoidable, as human mistakes happen, but a
>>> lot of it will cause deflated scores and a lot of FNs.
>>
>>
>> The rule scores are optimized for the spam which was sent at the time
>> that version of SA was released (actually, at the time the rule scoreset
>> was calculated).  Since then, the static SA rules have become less
>> useful since spammers now write their messages to avoid them.  The only
>> rules which spammers cannot easily avoid are the dynamic ones:  bayes
>> and network checks (RBLs, URIBLs, razor, etc).
>>
>> On my systems, I raise the scores for the dynamic tests since they are
>> the only ones which hit a lot of today's spam.
>>
>
> Very true. Spammers quickly adapt to most of the static tests (i.e. body 
> rule sets like antidrug) after an SA release, so those rules lose some 
> effectiveness over time.


Maybe we have to make a separate version of the score file. So you could 
install an official SA 3.0.3 release and then download a score file, say 
version 3.0.3-date. And once every month there would be another official 
score file. Spammers can adjust their spam to pass the "static" tests, but 
after the next score-file change their adjustments would stop working again.

Now we have to wait for 3.0.4 before there will be any change in the static 
scores.
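
To be concrete, such a score file would just be ordinary "score" lines, for 
example (the file name and numbers here are made up, only to show the idea):

  # 99_scores-3.0.3-20050603.cf - hypothetical monthly score refresh
  score FROM_HAS_MIXED_NUMS 1.5
  score DNS_FROM_RFC_ABUSE  1.2

and as far as I know SA reads every *.cf file in the site config directory, 
so installing an update would just mean dropping the new file in next to 
local.cf.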

With kind regards,
Met vriendelijke groet,

Maurice Lucas
TAOS-IT



Re: Worst "Establishment" or "Household Name" Pseudo-Spammers

Posted by Craig Jackson <cj...@localsurface.com>.
Rob McEwen wrote:
> RE: Worst "Establishment" or "Household Name" Pseudo-Spammers
> 
> I've noticed that certain Fortune 500 (or similar "household"
> name) companies send an awful lot of e-mail which I can't imagine was signed
> up for. In particular, I see a lot of Overstock and Staples messages sent
> frequently to a variety of my clients. Hopefully, these have an unsubscribe
> link that works and is honored. (I haven't checked.) And I'm sure they don't
> use harvested addresses. But the sheer volume and scope is HUGE and,
> therefore, fishy.
> 
> Any comments on Overstock and Staples?
> 
> Has anyone else noticed others in this same category that send at this
> frequency?
> 
> (just want to compare notes)
> 
> Rob McEwen
> PowerView Systems
> 
There's a good chance your users are actually signing up for that stuff. 
Ours do.
Craig Jackson


Re: Worst "Establishment" or "Household Name" Pseudo-Spammers

Posted by Robert Menschel <Ro...@Menschel.net>.
Hello Rob,

Friday, June 3, 2005, 12:50:26 PM, you wrote:

RM> RE: Worst "Establishment" or "Household Name" Pseudo-Spammers

RM> Any comments on Overstock and Staples?

Lots of emails from Staples, and as far as I can tell every one has
been subscribed for.  Never seen any spam from them.

No emails from Overstock to my knowledge.  Therefore no spam.

Bob Menschel




Worst "Establishment" or "Household Name" Pseudo-Spammers

Posted by Rob McEwen <ro...@powerviewsystems.com>.
RE: Worst "Establishment" or "Household Name" Pseudo-Spammers

I've noticed that certain Fortune 500 (or similar "household"
name) companies send an awful lot of e-mail which I can't imagine was signed
up for. In particular, I see a lot of Overstock and Staples messages sent
frequently to a variety of my clients. Hopefully, these have an unsubscribe
link that works and is honored. (I haven't checked.) And I'm sure they don't
use harvested addresses. But the sheer volume and scope is HUGE and,
therefore, fishy.

Any comments on Overstock and Staples?

Has anyone else noticed others in this same category that send at this
frequency?

(just want to compare notes)

Rob McEwen
PowerView Systems


Re: Are the RBL scores high enough?

Posted by Matt Kettler <mk...@evi-inc.com>.
Kevin Sullivan wrote:
> On Jun 2, 2005, at 8:27 PM, Matt Kettler wrote:
> 
>> If one's wrong, they are ALL wrong.
>>
>> SA's rule scores are evolved based on a real-world test of a
>> hand-sorted corpus of fresh spam and ham. The whole scoreset is
>> evolved simultaneously to optimize the placement pattern.
>>
>> Of course, one thing that can affect accuracy is spams accidentally
>> misplaced into the ham pile; that can cause some heavy score biasing.
>> A little bit of this is unavoidable, as human mistakes happen, but a
>> lot of it will cause deflated scores and a lot of FNs.
> 
> 
> The rule scores are optimized for the spam which was sent at the time
> that version of SA was released (actually, at the time the rule scoreset
> was calculated).  Since then, the static SA rules have become less
> useful since spammers now write their messages to avoid them.  The only
> rules which spammers cannot easily avoid are the dynamic ones:  bayes
> and network checks (RBLs, URIBLs, razor, etc).
> 
> On my systems, I raise the scores for the dynamic tests since they are
> the only ones which hit a lot of today's spam.
> 

Very true. Spammers quickly adapt to most of the static tests (i.e. body rule
sets like antidrug) after an SA release, so those rules lose some
effectiveness over time.

However, some dynamic tests have too high an FP rate to have their scores
raised very much. Before raising a score, at least check the S/O ratio in the
STATISTICS*.txt files.

For example, RAZOR2_CHECK has an S/O somewhere near 98% (splitting the
difference between set1 at 97.6% and set3 at 98.2%). This means that about 2%
of the emails matched by this rule were in the nonspam pile.

That may not sound bad, but 98% (a 2% FP rate) is a factor of 20 worse than
99.9% (a 0.1% FP rate). (Compare the results for RCVD_IN_XBL or URIBL_OB_SURBL
to RAZOR2_CHECK, for example.)
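
If you do decide to bump things up, the usual place is local.cf; something
like this (the numbers here are purely illustrative, not a recommendation):

  # local.cf sketch - weight the low-FP network tests more than the noisy ones
  score RCVD_IN_XBL      3.0   # S/O around 99.9% in the STATISTICS files
  score URIBL_OB_SURBL   3.0   # similarly clean
  score RAZOR2_CHECK     1.5   # S/O only ~98%, so stay more conservative here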

Re: Are the RBL scores high enough?

Posted by Kevin Sullivan <ke...@klubkev.org>.
On Jun 2, 2005, at 8:27 PM, Matt Kettler wrote:
> If one's wrong, they are ALL wrong.
>
> SA's rule scores are evolved based on a real-world test of a 
> hand-sorted corpus of fresh spam and ham. The whole scoreset is 
> evolved simultaneously to optimize the placement pattern.
>
> Of course, one thing that can affect accuracy is spams accidentally 
> misplaced into the ham pile; that can cause some heavy score biasing. 
> A little bit of this is unavoidable, as human mistakes happen, but a 
> lot of it will cause deflated scores and a lot of FNs.

The rule scores are optimized for the spam which was sent at the time 
that version of SA was released (actually, at the time the rule 
scoreset was calculated).  Since then, the static SA rules have become 
less useful since spammers now write their messages to avoid them.  The 
only rules which spammers cannot easily avoid are the dynamic ones:  
bayes and network checks (RBLs, URIBLs, razor, etc).

On my systems, I raise the scores for the dynamic tests since they are 
the only ones which hit a lot of today's spam.
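
Nothing fancy - just local.cf overrides along these lines (these particular 
numbers are only examples; pick your own based on what you see):

  # local.cf sketch - give the dynamic tests more weight (example values)
  score BAYES_99    4.5
  score BAYES_95    3.0
  score RCVD_IN_XBL 3.0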

      -Kevin


Re: Are the RBL scores high enough?

Posted by Matt Kettler <mk...@comcast.net>.
At 08:41 PM 6/2/2005, Jason Haar wrote:
>>If one's wrong, they are ALL wrong.
>
>By that do you mean that a false positive in one RBL tends to show up in 
>them all? Probably too much sharing of data/same sources?

No, I mean if one score in the ruleset is wrong, every score in the ruleset 
is wrong. Since they are scored simultaneously, the score of one rule 
impacts the score of every other rule in the whole ruleset.  


Re: Are the RBL scores high enough?

Posted by Jason Haar <Ja...@trimble.co.nz>.
Matt Kettler wrote:

>>
>> e.g. RCVD_IN_NJABL_PROXY has a value of 1.0 - and yet the FAQ on the 
>> NJABL web site (of course) tells you to set "score NJABL_PROXY 3.0" :-)
>>
>> But the wonderful authors of SA know far more than I do - so are the 
>> current levels still deemed to be correct?
>
>
> If one's wrong, they are ALL wrong.
>

By that do you mean that a false positive in one RBL tends to show up in 
them all? Probably too much sharing of data/same sources?

> SA's rule scores are evolved based on a real-world test of a 
> hand-sorted corpus of fresh spam and ham. The whole scoreset is 
> evolved simultaneously to optimize the placement pattern.
>

...and that's why I asked :-)

Thanks!

-- 
Cheers

Jason Haar
Information Security Manager, Trimble Navigation Ltd.
Phone: +64 3 9635 377 Fax: +64 3 9635 417
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1


Re: Are the RBL scores high enough?

Posted by Matt Kettler <mk...@comcast.net>.
At 07:56 PM 6/2/2005, Jason Haar wrote:
>DNS_FROM_RFC_ABUSE,DNS_FROM_RFC_POST,FROM_HAS_MIXED_NUMS,RCVD_IN_NJABL_DUL,RCVD_IN_SORBS_DUL 
>scantime=4.4,size=1435,mid=<60...@singnet.com.sg>,autolearn=disabled
>
>This had a Subject line of "russian XXXXX unusably in action fervid" - so 
>I'm guessing it was spam (;-) - even though it only got a score of 3/5.
>
>Obviously the default values are set that way to convey a degree of 
>"confidence" in what each hit means; it's just that I wonder if they need 
>updating. I guess I'm referring to the scores in 50_scores.cf.
>
>e.g. RCVD_IN_NJABL_PROXY has a value of 1.0 - and yet the FAQ on the NJABL 
>web site (of course) tells you to set "score NJABL_PROXY 3.0" :-)
>
>But the wonderful authors of SA know far more than I do - so are the 
>current levels still deemed to be correct?

If one's wrong, they are ALL wrong.

SA's rule scores are evolved based on a real-world test of a hand-sorted 
corpus of fresh spam and ham. The whole scoreset is evolved simultaneously 
to optimize the placement pattern.

Of course, one thing that can affect accuracy is spams accidentally 
misplaced into the ham pile; that can cause some heavy score biasing. A 
little bit of this is unavoidable, as human mistakes happen, but a lot of it 
will cause deflated scores and a lot of FNs.