Posted to dev@netbeans.apache.org by Eric Bresie <eb...@gmail.com> on 2020/02/15 17:35:53 UTC

Re: Github Actions: Quality: Codacy Reports

I agree that a lot of those findings can be subjective in nature. I wonder
how configurable the checks are for that particular action.
We could maybe investigate others, like
https://github.com/marketplace/actions/sonarcloud-scan

That was just one of a wealth of available actions which I thought could
help improve things in general.

There seem to be other possible actions which could help, like
- code quality (https://github.com/marketplace?category=code-quality)
- test coverage checks (https://github.com/marketplace?category=testing)
- security checks (https://github.com/marketplace?category=security)
- localization (https://github.com/marketplace?category=localization), etc.

Not necessarily suggesting things like this should stop any releases or
anything...just that they could help improve quality throughout the codebase.

That raises a separate question: is there any kind of "netbeans coding
style guide" which might identify what sorts of "code checks" would be
helpful here (i.e. identify places where the code is not consistent with
the style guide)?

Eric Bresie
ebresie@gmail.com


On Sun, Jan 26, 2020 at 10:30 PM Tim Boudreau <ni...@gmail.com> wrote:

> Anything like that should be set up to use only a minimal subset of
> warnings that indicate genuine problems. Most of the ones I see on the
> linked page will be absurdly wrong at least some of the time.
>
> For example, the warning not to use fully qualified class names (which is
> harmless and in some situations more robust) will get triggered multiple
> times on hundreds of classes containing generated code from the form
> editor.
>
> The insistence on tests containing the tool’s idea of what an assertion is
> is just silly - ever write a test of an API that calls you back on a
> background thread? If you want to write that clearly, you use assertions
> normally, catch them on the background thread, wait on a countdown latch
> or similar in the test thread, and rethrow the assertion error - and as
> soon as you’ve done that more than once, you’ll wrap that in a nice little
> lambda-based helper so your test code is clear (but it’s very unlikely the
> tool will recognize your assertions in that scope).
>
> And so forth.
>
> It’s not bad to have these things, but their authors generally compete on
> how big a list of hints they support, so there will be a lot of garbage,
> and you don’t want to wind up distorting the way you code to please a tool
> rather than using common sense.
>
> -Tim
>
> --
> http://timboudreau.com
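
The countdown-latch pattern Tim describes above can be sketched roughly as
follows. This is a minimal illustration, not code from the NetBeans tree:
the helper name and the bare Thread standing in for a callback-based API
are hypothetical, and it assumes only java.util.concurrent with no test
framework.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Runs assertions on a background thread (simulating an API that calls you
// back off the test thread), captures any AssertionError there, and
// rethrows it on the test thread once a CountDownLatch is released.
public class AsyncAssert {

    static void assertOnBackgroundThread(Runnable assertions, long timeoutSeconds)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<AssertionError> failure = new AtomicReference<>();
        new Thread(() -> {
            try {
                assertions.run();      // assertions execute on the callback thread
            } catch (AssertionError err) {
                failure.set(err);      // capture instead of silently losing it
            } finally {
                latch.countDown();     // release the waiting test thread
            }
        }).start();
        if (!latch.await(timeoutSeconds, TimeUnit.SECONDS)) {
            throw new AssertionError("Timed out waiting for callback");
        }
        AssertionError err = failure.get();
        if (err != null) {
            throw err;                 // rethrow on the test thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Passing case: the assertion holds, so nothing is thrown.
        assertOnBackgroundThread(() -> {
            if (2 + 2 != 4) throw new AssertionError("math is broken");
        }, 5);

        // Failing case: the AssertionError surfaces on this thread.
        boolean rethrown = false;
        try {
            assertOnBackgroundThread(() -> {
                throw new AssertionError("expected failure");
            }, 5);
        } catch (AssertionError expected) {
            rethrown = true;
        }
        System.out.println(rethrown ? "rethrown" : "lost");
    }
}
```

As Tim notes, a static-analysis tool scanning for calls to a known
assertion method would likely report a test built this way as containing
no assertions at all, even though the failure is reliably propagated.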
>