IJCAI bidding is upon me

and as last year, it promises to be annoying.

  1. The two highest-ranked papers for me were on Deep Learning. Ranking is supposed to help us bid and is supposedly based on a combination of keywords selected by PC members and analysis of papers that they upload.

    I have no knowledge of Deep Learning to speak of, I have never written a Deep Learning paper, and I didn’t select “Deep Learning” as a keyword!

  2. After entering my bids on the first 25 papers (which took time because the titles are not super-informative, so I read the abstracts), I wanted to move on to page 2, only to find out that the system had logged me out, losing all but three of my bids in the process.

But at least I’ve encountered a strong contender for the most buzz-wordy title of the year: Improved Kernel Density Estimation Self-organizing Incremental Neural Network to Perform Big Data Analysis!


I’m a (we’re) gatekeeper(s)

I’ve just finished reviewing and discussing for SDM 2018. It’s a good conference, and they give a smaller reviewing load than the dysfunctional ICDM, for instance. I had eight papers to review, and it turned out that they were all rejects, either because the ideas were a bit half-baked, or (in the majority of cases) because the authors were unaware of important related work and therefore didn’t discuss/compare against it.1

SDM doesn’t blind the reviewers to each other, and I noticed that for the seven papers where I could see others’ reviews, I knew between one and three (out of three) of my co-reviewers personally. In some cases, I knew the meta-reviewer as well. Now, I feel that our reviews and decisions were justified,2 but if we (as a group of researchers who know each other) simply didn’t like the direction of the work, for instance, we would have been in a position to block it.

In a sense, it’s unavoidable that a situation like this occurs in peer review, but it becomes more likely if the (sub)field is somewhat specialized and there’s only a limited number of researchers working in it at a high enough level to be invited as reviewers.

1 This is a side effect of the publish-or-perish mechanisms: we publish way too much in our field, which often makes it very hard to know all the relevant related work in the first place – especially when one is a PhD student. But letting such papers get published would only worsen the problem.

2 Although this introduces a chicken-and-egg problem: one of the reasons I trust the others’ reviews is that I know and respect them and their knowledge.

Great accuracy + forgetting to bet = slight losses

| Week | Naive Bayes (Avg+OAvg) | Naive Bayes (Avg) | ANN (Avg+OAvg) | ANN (Avg) | Neural Network (Adj) |
|---|---|---|---|---|---|
| Week 13 | 12/16 | 12/16 | 12/16 | 9/16 | 13/16 |
| Through week 13 | 112/175 | 115/175 | 107/175 | 100/175 | 105/175 |

Look at those accuracies! 75% for the classifier I use to bet, the same as for two others, and even 81.25% for the ANN with adjusted statistics. Yet I still lost ~50 euros, but this time that is mainly on me – I forgot to bet both the (correctly predicted) Seattle-over-Philadelphia upset and the MNF match (also correctly predicted). The biggest payout was Minnesota-over-Atlanta, btw, a match that the latter two ANN classifiers got wrong.
I never forgot to bet last year, probably because there was actually a chance of winning – this year, I am just trying to claw some money back, and feel stymied at every turn. 🙂
Finally, purely in accuracy terms, past trends show up again – the Naive Bayes classifiers stand head-and-shoulders above the rest: 64%/65.71% vs 61.14%/57.14%/60%.

The end’s getting closer

| Week | Naive Bayes (Avg+OAvg) | Naive Bayes (Avg) | ANN (Avg+OAvg) | ANN (Avg) | Neural Network (Adj) |
|---|---|---|---|---|---|
| Week 11 | 10/14 | 10/14 | 10/14 | 11/14 | 8/14 |
| Week 12 | 13/16 | 13/16 | 9/16 | 12/16 | 13/16 |
| Through week 12 | 100/159 | 103/159 | 95/159 | 91/159 | 92/159 |

Will you look at those accuracies for the NB models: 71% and 81.25%! And the latter won me exactly 19 euros! 😦 There were three matches I couldn’t bet on because the favorite’s odds were too low, I missed two upsets, and incorrectly predicted another one.

Apart from that, last week I tallied up the theoretical winnings through week 11, i.e. what I would have won if I’d bet US$ 100 per match, for four models (a sketch of the payout arithmetic follows below):

| Metric | Naive Bayes (Avg+OAvg) | Naive Bayes (Avg) | ANN (Avg+OAvg) | ANN (Avg) |
|---|---|---|---|---|
| Accuracy through week 11 (%) | 60.84 | 62.94 | 60.14 | 55.24 |
| Winnings through week 11 (US$) | 28.65 | 203.02 | 1018.23 | -1484.21 |
| Underdogs correct through week 11 | 13 | 12 | 22 | 15 |

I am not surprised by this; it’s absolutely in line with what I observed in the past two seasons. But damn, it hurts, and I still have no idea how to decide which model to pick at the beginning of the season.
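
For completeness, here’s a minimal sketch of how such theoretical winnings can be tallied from American money line odds with a flat US$ 100 stake per game. The helper names and the odds in the example are made up for illustration – this is not the exact script behind the table above.

```python
def moneyline_profit(odds, stake=100.0):
    """Profit on a winning bet at American money line odds.
    Positive odds (underdog): +220 pays 2.2 * stake.
    Negative odds (favorite): -180 pays stake * 100/180."""
    return stake * odds / 100.0 if odds > 0 else stake * 100.0 / abs(odds)

def theoretical_winnings(picks, stake=100.0):
    """picks: iterable of (odds_of_predicted_team, prediction_correct) pairs.
    A wrong pick loses the stake; a correct one earns the money line profit."""
    return sum(moneyline_profit(odds, stake) if correct else -stake
               for odds, correct in picks)

# Hypothetical slate: two favorites (one hit, one miss) and one correct underdog.
print(theoretical_winnings([(-180, True), (-140, False), (+220, True)]))
# 55.56 - 100.00 + 220.00 = 175.56
```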

Week 9, and this season is really off

| Week | Naive Bayes (Avg+OAvg) | Naive Bayes (Avg) | ANN (Avg+OAvg) | ANN (Avg) | Neural Network (Adj) |
|---|---|---|---|---|---|
| Week 9 | 10/13 | 9/13 | 8/13 | 10/13 | 9/13 |
| Through week 9 | 67/115 | 69/115 | 64/115 | 60/115 | 61/115 |

Again, decent accuracy, and the season-long accuracy is actually at the same level as during the last two seasons at this point, yet the payout very much is not. To give you an impression of how unusual this season is: in my betting paper, I used a baseline of Vegas money line predictions, assuming that one would always bet on the money line favorite and get all pick ’ems right. This ideal outcome led to high payouts over the course of the season for 2015/2016 and 2016/2017.

This season, however, the cumulative payout so far would be US$ -28.95, compared to US$ 854.21 in 2015.
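
To make that baseline concrete, here is a rough sketch under my own assumptions: stake US$ 100 on the Vegas money line favorite in every game, count pick ’ems as correctly predicted (priced here, arbitrarily, at -110), and accumulate the results. The games below are purely illustrative, not actual 2017 lines, and the function is my reconstruction of the idea rather than the exact baseline from the paper.

```python
def favorite_baseline_payout(games, stake=100.0):
    """games: iterable of (favorite_odds, favorite_won, is_pickem) tuples.
    Baseline: always bet the favorite; pick 'ems are assumed to be called
    correctly and are priced at -110 here (my assumption for illustration)."""
    total = 0.0
    for fav_odds, fav_won, is_pickem in games:
        if is_pickem:
            total += stake * 100.0 / 110.0   # assumed near-even line
        elif fav_won:
            total += stake * 100.0 / abs(fav_odds)
        else:
            total -= stake                   # favorite upset: stake lost
    return total

# Illustrative week: a -250 favorite wins, a -120 favorite loses, one pick 'em.
print(favorite_baseline_payout([(-250, True, False), (-120, False, False), (-110, True, True)]))
# 40.00 - 100.00 + 90.91 = 30.91
```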

Week 8…

| Week | Naive Bayes (Avg+OAvg) | Naive Bayes (Avg) | ANN (Avg+OAvg) | ANN (Avg) | Neural Network (Adj) |
|---|---|---|---|---|---|
| Week 8 | 10/13 | 11/13 | 10/13 | 10/13 | 8/13 |
| Through week 8 | 57/102 | 60/102 | 56/102 | 50/102 | 52/102 |

A good week in terms of accuracy. It barely made me any money, though, which partially has to do with the fact that I again cannot bet larg(ish) amounts. But yeah, it does look as if the NB with average stats is slowly pulling away.