Posted to apachecon-discuss@apache.org by Christofer Dutz <ch...@c-ware.de> on 2016/09/29 06:48:44 UTC

Some feedback on the new review mechanism

Hi guys,


I just wanted to also take the opportunity to give some feedback on the modified review process:


1. Seeing that I would have to make 30,000 decisions sort of turned me off right away (it's sort of ... "yeah, let me help" and then a huge pile of work gets dumped on my desk).


2. With that huge amount of possible work, I could see only little progress despite quite some time put into it ... 30,000 decisions would require reading 60,000 applications. If I assume 30 seconds per application, that's about 500 hours, which is about 20 days without doing anything else (worked out below). I sort of quit at about 400 decisions.


3. I noticed for myself that at first you read the applications carefully, but that accuracy goes down very fast as soon as you get a lot of the talks you reviewed earlier ... unfortunately, even ones you only think you have read before. I caught myself not reading some similar-looking applications and voting for one thinking it was the other. I don't know if this is desirable.
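
The arithmetic from point 2, spelled out as a quick sketch (Python, using the numbers assumed above):

comparisons = 30_000
applications_read = comparisons * 2     # each comparison means reading two applications
seconds = applications_read * 30        # assuming 30 seconds per application
hours = seconds / 3600                  # -> 500.0 hours
days = hours / 24                       # -> roughly 20.8 days of doing nothing else
print(hours, days)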


I liked the simple interface, however. So how about dropping the Deathmatch approach and just displaying one application, letting the user select how much they like it (ok ... this is just the way the old version worked, but as I said, I liked the UI ... just clicking once) ... perhaps the user could also add tags to the application and suggest tracks.


Looking forward to a great conference :-)


Chris

Re: Some feedback on the new review mechanism

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

>  That said, the 30 reviewers did a HUGE amount of work.

Given the number of reviewers I think that seeing your own talks is not actually an issue as there’s no way a single person could skew the results.

> And we do indeed see a lot of new faces in the speakers, which was a specific goal.

Good to know. Again, it certainly looks like it works better than other processes we have tried.

>  However, the Big Data event, in particular had a hard time
> attracting a reasonable number of reviewers. We need help with that next
> time.

Reviewing those talks was a lot harder; unless you know the subject matter very well it's difficult to compare them, as the submissions were a lot more even in quality.

>  I would then be left with a pool of 200 talks, all of which were rated 4 (this is only a slight exaggeration)

No fun at all, and it means the review process is not working.

> Thank you for your work. 

And thanks so much for your hard work and effort; ApacheCon would not exist in its current form without you.

Thanks,
Justin

Re: Some feedback on the new review mechanism

Posted by Rich Bowen <rb...@rcbowen.com>.

On 09/29/2016 02:48 AM, Christofer Dutz wrote:
> Hi guys,
> 
> 
> I just wanted to also take the opportunity to give some feedback on the modified review process:
> 
> 
> 1. Seeing that I would have to make 30,000 decisions sort of turned me off right away (it's sort of ... "yeah, let me help" and then a huge pile of work gets dumped on my desk).
> 

Yes, I expect that the UI could be presented in a way that is not quite
so overwhelming. That said, the 30 reviewers did a HUGE amount of work.
Looking at the "winning" abstracts, and comparing them to various random
"losing" abstracts, it's clear that the well-written abstracts did, in
fact, bubble to the top, and the one-liners, the confusing, and the
illiterate, did indeed sink to the bottom. And we do indeed see a lot of
new faces in the speakers, which was a specific goal.

But, yeah, seeing that note at the top that I only have 30,000 more
comparisons to go, was very disheartening.

> 
> 2. With that huge amount of possible work, I could see only little progress despite quite some time put into it ... 30,000 decisions would require reading 60,000 applications. If I assume 30 seconds per application, that's about 500 hours, which is about 20 days without doing anything else. I sort of quit at about 400 decisions.
> 

After 15 minutes or so, I began to recognize almost all of the
abstracts, and the comparisons sped up. But, yes, if we were indeed
expected to do 30,000 comparisons each, this would take years. So a goal
for the next time we use this tool (if we do in fact keep using this
tool) is to expand the reviewer pool a lot - perhaps reach out to
everyone that has attended past events?

The benefit of this system is that it can harness the time of 1000
people that have 5 minutes, rather than requiring 5 people to spend 1000
minutes. However, the Big Data event, in particular, had a hard time
attracting a reasonable number of reviewers. We need help with that next
time.

> 
> 3. I noticed for myself that at first you read the applications carefully, but that accuracy goes down very fast as soon as you get a lot of the talks you reviewed earlier ... unfortunately, even ones you only think you have read before. I caught myself not reading some similar-looking applications and voting for one thinking it was the other. I don't know if this is desirable.
> 
> 
> I liked the simple interface, however. So how about dropping the Deathmatch approach and just displaying one application, letting the user select how much they like it (ok ... this is just the way the old version worked, but as I said, I liked the UI ... just clicking once) ... perhaps the user could also add tags to the application and suggest tracks.


My biggest frustration with the "rate this from 1 to 5" technique that
we are moving away from is that I would be left with a pool of 200
talks, all of which were rated 4 (this is only a slight exaggeration),
that I then had to choose from. Usually in complete ignorance of the
subject material. So it really ended up being myself and 2 or 3 other
people choosing a schedule blind, and deriving almost no benefit from
your reviews.

The DeathMatch approach (I like that name!) makes people more brutal,
and, the evidence suggests, more honest. So abstracts that were real
clunkers did indeed sink to the bottom, and end up with large negative
scores.

Thank you for your work. And thank you for your comments on the
interface (everyone!). There are a lot of changes that I would also like to
see, once we've moved past the "O MY GOD I HATE CONFERENCE SCHEDULING"
phase. But, overall, this system was, for me anyways, a tiny fraction of
the stress that we go through every time we need to schedule one of
these things.


-- 
Rich Bowen - rbowen@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon

Re: Some feedback on the new review mechanism

Posted by Shane Curcuru <as...@shanecurcuru.org>.
Big lesson: next year's conference selection process will be awesome,
because we'll be using the tool(s) for the second time, and things will
work nicely!

- Have the daily ration be smaller.  Needs to be small enough to do
while eating lunch, or later (after you've read many abstracts) during a
long coffee break.  If people have more time, let them sign up for a
second round each day.

- If you can post slightly more statistics on how many rankings were
assigned to each talk and the distribution, it might help people feel more
comfortable getting away from the traditional 1-5 scale.  (Not that the
1-5 scale was useful, but that's what people are comfortable with.)

In particular, hearing there were 50+ scores for every talk was an
"ah-ha!" moment for me.  There really was a lot of input per talk.

I agree with not releasing the whole raw dataset; but posting a little
more on the methodology and approximate scoring of two bubbles vs. three
bubbles etc. would be helpful.

- I'd like to include the "why is this talk important for X audience"
data as well; at least on my own proposals I try to fill that in with
something useful for reviewers and organizers, so I think it adds some data.

Thanks!
- Shane

Re: Some feedback on the new review mechanism

Posted by Daniel Gruno <hu...@apache.org>.
On 09/29/2016 10:25 AM, Emmanuel Lécharny wrote:
> On 29/09/16 at 09:55, Daniel Gruno wrote:
>> I'll see if I can answer some of the critique here.
>> And I'll also note that critique - whether it be positive or negative -
>> is most welcome.
> 
> I just want to know if there is a way to get the score for accepted talks ?
> 
> Thanks !
> 

I don't believe we'll be sharing the data. Maybe in an anonymized
fashion, but certainly not the raw data. Rich may shed more light on this :)

With regards,
Daniel.

Re: Some feedback on the new review mechanism

Posted by Emmanuel Lécharny <el...@gmail.com>.
On 29/09/16 at 14:03, Rich Bowen wrote:
>
> On 09/29/2016 04:25 AM, Emmanuel Lécharny wrote:
>> On 29/09/16 at 09:55, Daniel Gruno wrote:
>>> I'll see if I can answer some of the critique here.
>>> And I'll also note that critique - whether it be positive or negative -
>>> is most welcome.
>> I just want to know if there is a way to get the score for accepted talks ?
> I'd really have to be persuaded that this is a good thing to do. The
> talks that were scheduled, on the ApacheCon EU side, represent the top
> 100 or so rated talks. So if it was scheduled, it scored > 0, and if it
> wasn't, it didn't. Approximately. Sharing exact scores doesn't seem
> like a lot of benefit to me, and, in the past, has led to heated, and
> sometimes hateful, email to me about why a particular talk was scheduled
> and another wasn't.
>
> So, no, I'm not keen on doing this. Perhaps you can persuade me
> otherwise, but at the moment I see only negatives.

I get it, and I agree.



Re: Some feedback on the new review mechanism

Posted by Rich Bowen <rb...@rcbowen.com>.

On 09/29/2016 04:25 AM, Emmanuel Lécharny wrote:
> On 29/09/16 at 09:55, Daniel Gruno wrote:
>> I'll see if I can answer some of the critique here.
>> And I'll also note that critique - whether it be positive or negative -
>> is most welcome.
> 
> I just want to know if there is a way to get the score for accepted talks ?

I'd really have to be persuaded that this is a good thing to do. The
talks that were scheduled, on the ApacheCon EU side, represent the top
100 or so rated talks. So if it was scheduled, it scored > 0, and if it
wasn't, it didn't. Approximately. Sharing exact scores doesn't seem
like a lot of benefit to me, and, in the past, has led to heated, and
sometimes hateful, email to me about why a particular talk was scheduled
and another wasn't.

So, no, I'm not keen on doing this. Perhaps you can persuade me
otherwise, but at the moment I see only negatives.


-- 
Rich Bowen - rbowen@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon

Re: Some feedback on the new review mechanism

Posted by Emmanuel Lécharny <el...@gmail.com>.
On 29/09/16 at 09:55, Daniel Gruno wrote:
> I'll see if I can answer some of the critique here.
> And I'll also note that critique - whether it be positive or negative -
> is most welcome.

I just want to know if there is a way to get the score for accepted talks ?

Thanks !

Re: Some feedback on the new review mechanism

Posted by Shawn McKinney <sm...@apache.org>.
> On Sep 29, 2016, at 7:04 AM, Rich Bowen <rb...@rcbowen.com> wrote:
> 
> Indeed. And as I reviewed, I became aware that I tend to tout my
> credentials, rather than the value of the talk, in my abstracts. This
> exercise will forever change the way I write abstracts.

+1, I’ll never write an abstract the same way again.  

I wondered about the pros/cons of keeping the speaker’s identity anonymous.  It’s certainly very good for eliminating biases and encouraging fresh faces.  But the experience of the speaker should be weighted somehow; e.g. maybe it was a great abstract, but the talk doesn’t go so well due to lack of experience.

Perhaps there could be a second group after the first that applies a filter based on speaker experience and proficiency.

Thanks,
Shawn

Re: Some feedback on the new review mechanism

Posted by Rich Bowen <rb...@rcbowen.com>.

On 09/29/2016 04:26 AM, Justin Mclean wrote:
> - Some talks gave away the speaker's name, but most were anonymous. Sometimes the name is important / most of the time it is not. Perhaps best to edit the occasional name out for the review process? But do you really want to miss out on, say, Jim's Apache Way talk because he only put in a one-line description?

Indeed. And as I reviewed, I became aware that I tend to tout my
credentials, rather than the value of the talk, in my abstracts. This
exercise will forever change the way I write abstracts.

-- 
Rich Bowen - rbowen@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon

Re: Some feedback on the new review mechanism

Posted by Daniel Gruno <hu...@apache.org>.
On 09/29/2016 10:26 AM, Justin Mclean wrote:
> Hi,
> 
>> We had 32,000 scores submitted (between 50 and 70 per talk)
> 
> Which seems better than the older systems we have tried.
> 
>> As for the 'daily batch' you had to go through, I'll admit that 720 or
>> whatever the number was, is a tad much. We can definitely lower that
>> number to make it more digestible.
> 
> +1 to that. 50 - 100 matches at most please, otherwise it tends to blur a bit after a while.
> 
> When reviewing I did run into a few minor issues:
> - Some talks gave away the speaker's name, but most were anonymous. Sometimes the name is important / most of the time it is not. Perhaps best to edit the occasional name out for the review process? But do you really want to miss out on, say, Jim's Apache Way talk because he only put in a one-line description?

Part of this was time constraints this time, part was using one system
for CFP and one for review, neither of which should be a factor next time.

We will be sure to make it clear to speakers that the abstract should
preferably be as anonymous as possible. AND we'll make sure to
state that you must provide a strong abstract if your talk is to be
accepted.

> - It's hard to compare totally unlike talks: how do you rate a talk on diversity vs a talk on the internals of the HTTP server?

I agree, and that's what the "I don't know" score was for. Perhaps we
should add a 'skip this review' button for when it's really difficult to
figure it out.

> - Similar-subject talks didn't come up for comparison with each other as often as I would have liked. There were 3 or 4 Apache Way talks, but I don't think I managed to compare them directly.

We can work on how random the review process is, so it becomes a bit
more evenly distributed, but this would need some discussion, as we
don't wanna create unintended mathematical bias somewhere.

> - I would really have liked to see the extra info the speaker submitted (not so much the name and bio, but why this talk is important to Apache and why people would attend). Seems useful to know that.

Yeah, again, two systems trying to talk to each other doesn't always
work :(. Next time it will be one system for everything, which should
make things smoother.

With regards,
Daniel.

> 
> Thanks,
> Justin
> 


Re: Some feedback on the new review mechanism

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> The question is, if these were scheduled next to each other, which one
> do you think people are more likely to attend, given the abstracts.

Honestly I didn’t see it that way - perhaps that wasn’t clear to me. I’ve been involved in a few conferences, and scheduling has always been a separate step from selection. When scheduling you have different criteria and want to try and minimise scheduling similar talks or popular talks against each other, but there’s never an ideal solution that suits all attendees.

For a couple of conferences I’ve been involved in, I’ve even randomly generated thousands of schedules to try and find a possibly optimal one that has minimal conflicts.
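
To illustrate the idea (a minimal hypothetical sketch only, not the actual tooling; the talks, topics and conflict rule below are made up):

import random

# Generate many random schedules and keep the one with the fewest conflicts,
# where a "conflict" is two talks on the same topic running in the same slot.
talks = [("Apache Way intro", "community"), ("HTTPD internals", "httpd"),
         ("Spark tuning", "bigdata"), ("Kafka 101", "bigdata"),
         ("Incubator update", "community"), ("mod_ssl deep dive", "httpd")]
SLOTS = 3  # number of parallel time slots

def conflicts(schedule):
    total = 0
    for slot in schedule:
        topics = [topic for _, topic in slot]
        total += len(topics) - len(set(topics))  # duplicated topics in one slot
    return total

best, best_score = None, float("inf")
for _ in range(10_000):                          # thousands of random schedules
    shuffled = random.sample(talks, len(talks))
    schedule = [shuffled[i::SLOTS] for i in range(SLOTS)]
    score = conflicts(schedule)
    if score < best_score:
        best, best_score = schedule, score

print(best_score, best)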

> Because we try to schedule "totally unlike talks" next to each other

Yep that generally works well.

Thanks,
Justin

Re: Some feedback on the new review mechanism

Posted by Rich Bowen <rb...@rcbowen.com>.

On 09/29/2016 04:26 AM, Justin Mclean wrote:
> - It's hard to compare totally unlike talks: how do you rate a talk on diversity vs a talk on the internals of the HTTP server?

The question is: if these were scheduled next to each other, which one
do you think people are more likely to attend, given the abstracts?
Because we try to schedule "totally unlike talks" next to each other,
this is the actual choice people will be making in real life - not
between two Apache Way talks.

-- 
Rich Bowen - rbowen@rcbowen.com - @rbowen
http://apachecon.com/ - @apachecon

Re: Some feedback on the new review mechanism

Posted by Richard Eckart de Castilho <re...@apache.org>.
On 29.09.2016, at 10:26, Justin Mclean <ju...@classsoftware.com> wrote:
> 
> - Some talks gave away the speaker's name, but most were anonymous. Sometimes the name is important / most of the time it is not. Perhaps best to edit the occasional name out for the review process? But do you really want to miss out on, say, Jim's Apache Way talk because he only put in a one-line description?

In that direction, it would also be good to consider allowing/requiring reviewers to state a "conflict of interest" on individual submissions.

Cheers,

-- Richard


Re: Some feedback on the new review mechanism

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> We had 32,000 scores submitted (between 50 and 70 per talk)

Which seems better than the older systems we have tried.

> As for the 'daily batch' you had to go through, I'll admit that 720 or
> whatever the number was, is a tad much. We can definitely lower that
> number to make it more digestible.

+1 to that. 50 - 100 matches at most please, otherwise it tends to blur a bit after a while.

When reviewing I did run into a few minor issues:
- Some talks gave away the speaker's name, but most were anonymous. Sometimes the name is important / most of the time it is not. Perhaps best to edit the occasional name out for the review process? But do you really want to miss out on, say, Jim's Apache Way talk because he only put in a one-line description?
- It's hard to compare totally unlike talks: how do you rate a talk on diversity vs a talk on the internals of the HTTP server?
- Similar-subject talks didn't come up for comparison with each other as often as I would have liked. There were 3 or 4 Apache Way talks, but I don't think I managed to compare them directly.
- I would really have liked to see the extra info the speaker submitted (not so much the name and bio, but why this talk is important to Apache and why people would attend). Seems useful to know that.

Thanks,
Justin

Re: Some feedback on the new review mechanism

Posted by Daniel Gruno <hu...@apache.org>.
I'll see if I can answer some of the critique here.
And I'll also note that critique - whether it be positive or negative -
is most welcome.

On 09/29/2016 09:13 AM, Jan Willem Janssen wrote:
> Hi,
> 
>> On 29 Sep 2016, at 08:48, Christofer Dutz <ch...@c-ware.de> wrote:
>> I just wanted to also take the opportunity to give some feedback on the modified review process:
>>
>>
>> 1. Seeing that I would have to make 30,000 decisions sort of turned me off right away (it's sort of ... "yeah, let me help" and then a huge pile of work gets dumped on my desk).
>>
>> 2. With that huge amount of possible work, I could see only little progress despite quite some time put into it ... 30,000 decisions would require reading 60,000 applications. If I assume 30 seconds per application, that's about 500 hours, which is about 20 days without doing anything else. I sort of quit at about 400 decisions.
>>
>> 3. I noticed for myself that at first you read the applications carefully, but that accuracy goes down very fast as soon as you get a lot of the talks you reviewed earlier ... unfortunately, even ones you only think you have read before. I caught myself not reading some similar-looking applications and voting for one thinking it was the other. I don't know if this is desirable.
>>
>> I liked the simple interface, however. So how about dropping the Deathmatch approach and just displaying one application, letting the user select how much they like it (ok ... this is just the way the old version worked, but as I said, I liked the UI ... just clicking once) ... perhaps the user could also add tags to the application and suggest tracks.

The comparison review was chosen over the one-by-one review after a
lengthy discussion.

No one is/was expected to do all 30k combinations; that would take ages.
The system was designed to randomize the combinations, so that, given
enough time and reviewers, all combinations will be tested on the whole.
We had 32,000 scores submitted (between 50 and 70 per talk), which is
more detailed than what we would have gotten if we had set out to review
each talk on its own (then we would have gotten some 10-15 absolute
scores instead of the 40-70 relative ones), AND it gives us some
statistical advantages.

With the old style review, the majority of talks were sadly all given
pretty much identical scores, which made it very difficult to sort them.
We essentially had no way of knowing "okay, but if you had to choose
between these two identically scoring talks, which would you attend??".
By doing a 'death match', as you say, we can better work out the
likelihood that talks are preferred over other talks, and not only sort
by their total average score, but also by the likelihood that talk A will
be rated better than talk B - even if that combination hasn't been
reviewed by you. This gives us additional data to sort and filter by. It
also helps refine scores. With absolute scoring, you have to score 1, 2, 3
or 4 pts for instance, but how you determine whether a talk gets 2 or 3
pts is a lot more random and improvised than people think, and it rarely
reflects a true difference in quality between the talk that got 3 pts
and the talk that got 2 pts.
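
For illustration only (not necessarily the exact math we use): one standard way to turn pairwise outcomes into a ranking is a Bradley-Terry-style fit, where each talk gets a "strength" and the estimated chance that talk A is preferred over talk B is strength[A] / (strength[A] + strength[B]), even for pairs nobody compared directly. A minimal sketch with made-up talk names:

from collections import defaultdict

# Minimal sketch: rank talks from pairwise "death match" results with a simple
# Bradley-Terry-style fixed-point iteration. Hypothetical data, not the real dataset.
results = [("talkA", "talkB"), ("talkA", "talkC"),   # (winner, loser) pairs
           ("talkB", "talkC"), ("talkB", "talkA"),
           ("talkA", "talkC")]

talks = {t for pair in results for t in pair}
wins = defaultdict(int)
opponents = defaultdict(list)
for winner, loser in results:
    wins[winner] += 1
    opponents[winner].append(loser)
    opponents[loser].append(winner)

strength = {t: 1.0 for t in talks}
for _ in range(200):                                  # iterate until strengths settle
    new = {}
    for t in talks:
        denom = sum(1.0 / (strength[t] + strength[o]) for o in opponents[t])
        new[t] = (wins[t] + 0.5) / denom              # +0.5 so winless talks keep a tiny strength
    norm = sum(new.values())
    strength = {t: s / norm for t, s in new.items()}  # normalise for numerical stability

def prefer_probability(a, b):
    # Estimated probability that talk a would beat talk b in a comparison.
    return strength[a] / (strength[a] + strength[b])

for talk in sorted(talks, key=lambda t: -strength[t]):
    print(talk, round(strength[talk], 3))
print("P(talkA beats talkC) ~", round(prefer_probability("talkA", "talkC"), 2))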

In the end, what we are looking for isn't talks that score a certain
random number of points, but talks that will be attended. By doing a
comparison-style review, it is our firm belief we get closer to that
than by doing a one-on-one.

I'll grant you and the others that we definitely need more reviewers in
general, and especially if we are to get a more true sense of which
talks are most likely to be attended. But that's more of a problem with
us not getting the word out efficiently.

As for the 'daily batch' you had to go through, I'll admit that 720 or
whatever the number was, is a tad much. We can definitely lower that
number to make it more digestible.

The system is by no means perfect, and we're working on improving it as
experiences/responses reach us.

We'll be having some informal discussions at ApacheCon in Seville to
work on the review process, and I hope people will attend :)

With regards,
Daniel.

> 
> I share this as well: given the large number of proposals, the decision making for all permutations is
> simply too much. Also, due to the relatively small number of reviewers and the “real” randomisation, I
> think there’s a large bias in the final decisions: I’ve come across the same “match” several times,
> which implies that the one talk that lost the battle has a negative bias.
> 
> I’ve tried to do as many “battles” as possible, but got only up to about 1000 before I was fed up and
> no longer could spend time on it due to other obligations. I’m not sure if I’ve seen all proposals
> (probably not), which is a pity, in my opinion...
> 
> --
> Met vriendelijke groeten | Kind regards
> 
> Jan Willem Janssen | Software Architect
> +31 631 765 814
> 
> 
> My world is something with Amdatu and Apache
> 
> Luminis Technologies
> Churchillplein 1
> 7314 BZ  Apeldoorn
> +31 88 586 46 00
> 
> https://www.luminis.eu
> 
> KvK (CoC) 09 16 28 93
> BTW (VAT) NL8170.94.441.B.01
> 


Re: Some feedback on the new review mechanism

Posted by Richard Eckart de Castilho <re...@apache.org>.
I second Jan's and Chris's opinions. 

This type of crowd-sourcing approach IMHO requires a huge number of reviewers to be successful, and it would also require individual reviewers to get a smaller load and more of a feeling of doing something useful. Even reducing the pairs per session to something like 10 or 25 would probably help. People could do it quickly during a lunch break or on the bus home, etc.

Also a button like "I don't want to see this submission never ever again!" would be nice - for it to be excluded from future pairs.

Cheers,

-- Richard


> On 29.09.2016, at 09:13, Jan Willem Janssen <ja...@luminis.eu> wrote:
> 
>> On 29 Sep 2016, at 08:48, Christofer Dutz <ch...@c-ware.de> wrote:
>> I just wanted to also take the opportunity to give some feedback on the modified review process:
>> 
>> 
>> 1. Seeing that I would have to make 30,000 decisions sort of turned me off right away (it's sort of ... "yeah, let me help" and then a huge pile of work gets dumped on my desk).
>> 
>> 2. With that huge amount of possible work, I could see only little progress despite quite some time put into it ... 30,000 decisions would require reading 60,000 applications. If I assume 30 seconds per application, that's about 500 hours, which is about 20 days without doing anything else. I sort of quit at about 400 decisions.
>> 
>> 3. I noticed for myself that at first you read the applications carefully, but that accuracy goes down very fast as soon as you get a lot of the talks you reviewed earlier ... unfortunately, even ones you only think you have read before. I caught myself not reading some similar-looking applications and voting for one thinking it was the other. I don't know if this is desirable.
>> 
>> I liked the simple interface, however. So how about dropping the Deathmatch approach and just displaying one application, letting the user select how much they like it (ok ... this is just the way the old version worked, but as I said, I liked the UI ... just clicking once) ... perhaps the user could also add tags to the application and suggest tracks.
> 
> I share this as well: given the large number of proposals, the decision making for all permutations is
> simply too much. Also, due to the relatively small number of reviewers and the “real” randomisation, I
> think there’s a large bias in the final decisions: I’ve come across the same “match” several times,
> which implies that the one talk that lost the battle has a negative bias.
> 
> I’ve tried to do as many “battles” as possible, but got only up to about 1000 before I was fed up and
> no longer could spend time on it due to other obligations. I’m not sure if I’ve seen all proposals
> (probably not), which is a pity, in my opinion...


Re: Some feedback on the new review mechanism

Posted by Jan Willem Janssen <ja...@luminis.eu>.
Hi,

> On 29 Sep 2016, at 08:48, Christofer Dutz <ch...@c-ware.de> wrote:
> I just wanted to also take the opportunity to give some feedback on the modified review process:
> 
> 
> 1. Seeing that I would have to make 30,000 decisions sort of turned me off right away (it's sort of ... "yeah, let me help" and then a huge pile of work gets dumped on my desk).
> 
> 2. With that huge amount of possible work, I could see only little progress despite quite some time put into it ... 30,000 decisions would require reading 60,000 applications. If I assume 30 seconds per application, that's about 500 hours, which is about 20 days without doing anything else. I sort of quit at about 400 decisions.
> 
> 3. I noticed for myself that at first you read the applications carefully, but that accuracy goes down very fast as soon as you get a lot of the talks you reviewed earlier ... unfortunately, even ones you only think you have read before. I caught myself not reading some similar-looking applications and voting for one thinking it was the other. I don't know if this is desirable.
> 
> I liked the simple interface, however. So how about dropping the Deathmatch approach and just displaying one application, letting the user select how much they like it (ok ... this is just the way the old version worked, but as I said, I liked the UI ... just clicking once) ... perhaps the user could also add tags to the application and suggest tracks.

I share this as well: given the large number of proposals, the decision making for all permutations is
simply too much. Also, due to the relatively small number of reviewers and the “real” randomisation, I
think there’s a large bias in the final decisions: I’ve come across the same “match” several times,
which implies that the one talk that lost the battle has a negative bias.

I’ve tried to do as many “battles” as possible, but got only up to about 1000 before I was fed up and
no longer could spend time on it due to other obligations. I’m not sure if I’ve seen all proposals
(probably not), which is a pity, in my opinion...

--
Met vriendelijke groeten | Kind regards

Jan Willem Janssen | Software Architect
+31 631 765 814


My world is something with Amdatu and Apache

Luminis Technologies
Churchillplein 1
7314 BZ  Apeldoorn
+31 88 586 46 00

https://www.luminis.eu

KvK (CoC) 09 16 28 93
BTW (VAT) NL8170.94.441.B.01