Posted to dev@shiro.apache.org by "Alan D. Cabrera" <li...@toolazydogs.com> on 2008/07/17 19:28:33 UTC

Crazy idea

I have a crazy idea that I've always wanted to try out just for fun.
If you don't want to participate, feel free to ignore this thread,
since this experiment will be non-binding.

This is not the first time that I've seen disagreement over feature  
sets and priorities.  I'm sure it's happened to all of us.  There's a  
technique that I use to make the issues plain.

When one evaluates solutions, one usually has a set of criteria to
evaluate them against, and each solution i gets a score S_i depending
on how well it meets each criterion j (C_j,i being the assessment of
solution i against criterion j):

S_i = sum_j C_j,i

But of course, there's usually no agreement on how well each solution
meets those criteria.  What has worked well in the past is to average
everyone's assessments, with p ranging over the participants:

C_j,i = average_p C_j,i,p

There's usually no agreement on which criteria are relevant either, so
we let everyone submit their own criteria and then weight each one by
the average of how relevant people think it is:

W_j = average_p W_j,p

So the solution gets a score of

S_i = sum_j W_j * C_j,i
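
To make the arithmetic concrete, here is a minimal Java sketch of the
whole computation.  It is only an illustration; the array layout
(assessments[p][j][i], weights[p][j]) and all class and method names
are made up for this example:

public final class SolutionScorer {

    // assessments[p][j][i]: person p's score for criterion j on solution i
    // weights[p][j]: person p's relevance weight for criterion j
    // Returns S_i = sum_j W_j * C_j,i, with C_j,i and W_j averaged over p.
    public static double[] score(double[][][] assessments, double[][] weights) {
        int people = assessments.length;
        int criteria = assessments[0].length;
        int solutions = assessments[0][0].length;

        // W_j = average_p W_j,p
        double[] w = new double[criteria];
        for (int j = 0; j < criteria; j++) {
            for (int p = 0; p < people; p++) {
                w[j] += weights[p][j];
            }
            w[j] /= people;
        }

        // C_j,i = average_p C_j,i,p, folded into S_i = sum_j W_j * C_j,i
        double[] s = new double[solutions];
        for (int i = 0; i < solutions; i++) {
            for (int j = 0; j < criteria; j++) {
                double c = 0.0;
                for (int p = 0; p < people; p++) {
                    c += assessments[p][j][i];
                }
                s[i] += w[j] * (c / people);
            }
        }
        return s;
    }
}

Collect everyone's numbers, run them through something like this, and
each solution comes back as a single comparable score.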

It would be interesting to see what we get with regard to logging.
Anyone care to try this experiment?


Regards,
Alan


Re: Crazy idea

Posted by Alex Karasulu <ak...@apache.org>.
On Thu, Jul 17, 2008 at 1:38 PM, Paul Fremantle <pz...@gmail.com> wrote:

> I'm keen to try it! Sounds interesting.
>

Ditto!


> [...]

-- 
Microsoft gives you Windows, Linux gives you the whole house ...

Re: Crazy idea

Posted by Paul Fremantle <pz...@gmail.com>.
I'm keen to try it! Sounds interesting.

Paul

On Thu, Jul 17, 2008 at 6:34 PM, Tim Veil <tj...@gmail.com> wrote:
> I'm up for it.
>
> [...]

-- 
Paul Fremantle
Co-Founder and CTO, WSO2
Apache Synapse PMC Chair
OASIS WS-RX TC Co-chair

blog: http://pzf.fremantle.org
paul@wso2.com

"Oxygenating the Web Service Platform", www.wso2.com

Re: Crazy idea

Posted by Tim Veil <tj...@gmail.com>.
I'm up for it.

On Thu, Jul 17, 2008 at 1:28 PM, Alan D. Cabrera <li...@toolazydogs.com>
wrote:

> [...]

Re: Crazy idea

Posted by Jeremy Haile <jh...@fastmail.fm>.
Sounds fun to me.  Let's give it a shot!


On Jul 17, 2008, at 1:28 PM, Alan D. Cabrera wrote:

> [...]


Re: Crazy idea

Posted by "Alan D. Cabrera" <li...@toolazydogs.com>.
Here are some of the criteria that I can think of:

- Use the SLF4J API as the logging API
- No added dependencies
- Graceful degradation if SLF4J is not on the classpath: check for SLF4J
and, if it is absent, default to java.util.logging; on a JDK older than
1.4, fall back to System.out (a sketch follows below)
- Remove the use of Commons Logging
- Future-proof the platform against shifting logging projects
- The ability for users to use multiple logging platforms simultaneously
in the same JVM

It's a start.  I'll put this in a wiki.  Please chime in with your own  
criteria.
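
Purely to illustrate that degradation path (a rough sketch, not actual
Shiro code), the check could look like the following.  Only the
org.slf4j.* and java.util.logging names are real APIs; the helper
class is hypothetical, and SLF4J needs to be on the compile classpath
only, since the reference to it is resolved lazily at runtime:

public final class FallbackLog {

    // Logs via SLF4J when it is on the classpath, otherwise via
    // java.util.logging, otherwise (pre-1.4 JDK) via System.out.
    public static void log(String message) {
        try {
            // Probe for the SLF4J API without failing hard when absent.
            Class.forName("org.slf4j.Logger");
            org.slf4j.LoggerFactory.getLogger(FallbackLog.class).info(message);
        } catch (ClassNotFoundException slf4jAbsent) {
            try {
                // java.util.logging has shipped with the JDK since 1.4.
                java.util.logging.Logger.getLogger("FallbackLog").info(message);
            } catch (NoClassDefFoundError preJdk14) {
                // Last resort on very old JDKs.
                System.out.println(message);
            }
        }
    }
}

A real implementation would run the probe once and cache the chosen
backend rather than checking on every call.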


Regards,
Alan

On Jul 17, 2008, at 10:28 AM, Alan D. Cabrera wrote:

> [...]