
Clash Royale Challenge: How to Select Training Decks for Win-rate Prediction

Clash Royale Challenge is the sixth data mining competition organized in association with the Federated Conference on Computer Science and Information Systems (https://fedcsis.org/). This year, the task is related to the problem of selecting an optimal training data subset for learning how to predict win-rates of the most popular Clash Royale decks. The competition is kindly sponsored by eSensei, QED Software and Polish Information Processing Society (PTI).

Clash Royale is a popular video game which combines elements of the collectible card game and tower defense genres (https://clashroyale.com/). In this game, players build decks consisting of 8 cards that represent playable troops, buildings, and spells, which they use to attack the opponent's towers and defend against their cards. Using good decks is one of the critical abilities of successful Clash Royale players.

In this data mining challenge, we take on the problem of measuring and predicting a deck's effectiveness in 1v1 ladder games. In particular, we would like to find out whether it is possible to train an efficient win-rate prediction model on a relatively small subset of decks whose win-rates were estimated in the past. Such a task can also be considered in the context of active learning, as the selection of a data batch that should be labeled and used for training a win-rate prediction model.

More details regarding the task and a description of the challenge data can be found in the Task description section.

Special session at FedCSIS'19: As in previous years, a special session devoted to the competition will be held at the conference. We will invite authors of selected challenge reports to extend them for publication in the conference proceedings (after reviews by Organizing Committee members) and presentation at the conference. The publications will be treated as short papers and will be indexed by IEEE Digital Library and Web of Science. The invited teams will be chosen based on their final rank, innovativeness of their approach and quality of the submitted report. 

References:

  • Andrzej Janusz, Łukasz Grad, Dominik Ślęzak: Utilizing Hybrid Information Sources to Learn Representations of Cards in Collectible Card Video Games. ICDM Workshops 2018: 422-429
  • Andrzej Janusz, Dominik Ślęzak, Sebastian Stawicki, Krzysztof Stencel: SENSEI: An Intelligent Advisory System for the eSport Community and Casual Players. WI2018: 754-757
  • Andrzej Janusz, Tomasz Tajmajer, Maciej Swiechowski, Łukasz Grad, Jacek Puczniewski, Dominik Ślęzak: Toward an Intelligent HS Deck Advisor: Lessons Learned from AAIA'18 Data Mining Competition. FedCSIS 2018: 189-192
  • https://royaleapi.com/

Our Clash Royale Challenge: How to Select Training Decks for Win-rate Prediction has ended. We would like to thank all participants for their involvement and hard work! 

The competition attracted 115 teams, of which 43 shared a brief report describing their approach. In total, we received over 1200 submissions.

The official Winners:

  1. Dymitr Ruta, EBTIC, Khalifa University, UAE (team Dymitr)
  2. Ling Cen, EBTIC, Khalifa University, UAE, and Quang Hieu Vu, ZALORA (team amy)
  3. Cenru Liu, Ngee Ann Polytechnic, Singapore, and Jiahao Cen, Nanyang Polytechnic, Singapore (team ru)

Congratulations on your excellent results!

We will be sending invitations to other selected teams over the next couple of days.

Rank | Team Name | Score | Submission Date
1 | Dymitr | 0.255216 | 2019-06-09 13:15:08
2 | amy | 0.253017 | 2019-06-09 21:05:29
3 | ru | 0.225682 | 2019-06-09 23:54:01
4 | ms | 0.224135 | 2019-06-10 19:41:44
5 | -_- | 0.221517 | 2019-06-09 23:51:46
6 | ProfesorRapu | 0.220632 | 2019-06-06 21:29:15
7 | mmm | 0.206217 | 2019-06-09 06:42:01
8 | Magnaci i Czarodzieje | 0.200337 | 2019-06-09 08:11:36
9 | ludziej | 0.197034 | 2019-05-26 02:37:25
10 | DM course project | 0.187766 | 2019-06-04 00:10:58
11 | panda3 | 0.182201 | 2019-06-10 14:04:52
12 | Mis Amigos | 0.181692 | 2019-06-10 17:43:42
13 | Emememsy | 0.180416 | 2019-06-10 21:05:11
14 | Robert Benke | 0.168978 | 2019-06-09 07:48:16
15 | Tomasz Garbus | 0.166824 | 2019-06-05 21:02:30
16 | 3 sekundy max | 0.165840 | 2019-06-07 18:27:02
17 | Team | 0.159741 | 2019-06-10 18:39:04
18 | LegeArtis | 0.158371 | 2019-06-04 10:27:01
19 | baseline solution | 0.156461 | 2019-04-24 22:20:29
20 | Houdini | 0.153417 | 2019-06-08 02:16:21
21 | asdf | 0.151016 | 2019-05-22 17:09:05
22 | BigDarkClown | 0.148699 | 2019-06-07 17:59:27
23 | TheWinner | 0.147718 | 2019-06-10 10:10:18
24 | MIMUW E L I T E | 0.141170 | 2019-06-09 20:05:25
25 | ImJustSittingHereLookingAtMyValidationLoss | 0.138353 | 2019-05-29 08:46:54
26 | I_Support_the_Vector_Machines | 0.135137 | 2019-05-25 14:29:17
27 | maciek | 0.120610 | 2019-06-09 23:06:27
28 | szkawicz | 0.117453 | 2019-06-09 22:22:02
29 | tralala | 0.116705 | 2019-06-08 17:22:19
30 | pknut | 0.116524 | 2019-05-17 13:24:10
31 | typNiepokorny | 0.115865 | 2019-06-10 18:20:50
32 | Niebezpieczne Janusze | 0.109470 | 2019-06-08 20:29:37
33 | kbial | 0.106660 | 2019-06-09 17:20:48
34 | Wątka | 0.105679 | 2019-06-09 23:57:04
35 | piotrek | 0.102050 | 2019-06-09 20:59:42
36 | abc | 0.101135 | 2019-06-08 03:24:01
37 | pilusx | 0.099945 | 2019-06-09 20:54:38
38 | ludzie_bez_nadziei | 0.069547 | 2019-06-09 23:42:24
39 | 4_czerwca | 0.015196 | 2019-06-10 21:01:16
40 | Dymitr | 0.000000 | 2019-06-10 23:53:55
41 | 4_czerwca | 0.000000 | 2019-06-10 23:55:48
42 | serene_mestorf | No report file found or report rejected. | 2019-06-10 22:39:55
43 | Radosne Kurki | No report file found or report rejected. | 2019-05-15 13:53:00
44 | Royalty | No report file found or report rejected. | 2019-04-29 17:24:25
45 | IuriiM | No report file found or report rejected. | 2019-06-07 17:22:24
46 | piero | No report file found or report rejected. | 2019-06-04 15:47:24
47 | kk | No report file found or report rejected. | 2019-06-09 22:50:04
48 | Los Estribos | No report file found or report rejected. | 2019-06-09 22:38:24
49 | mathurin | No report file found or report rejected. | 2019-04-27 20:20:42
50 | DUCKTILE | No report file found or report rejected. | 2019-05-14 07:45:34
51 | Jan Omeljaniuk | No report file found or report rejected. | 2019-06-10 21:17:33
52 | GR V TMN | No report file found or report rejected. | 2019-05-05 11:52:10
53 | DEEVA | No report file found or report rejected. | 2019-05-23 08:29:40
54 | Relax | No report file found or report rejected. | 2019-06-09 17:25:11
55 | melanzana | No report file found or report rejected. | 2019-06-09 22:52:11
56 | APRB | No report file found or report rejected. | 2019-05-19 17:11:18
57 | Yarno Boelens | No report file found or report rejected. | 2019-06-10 22:34:30
58 | jj | No report file found or report rejected. | 2019-04-27 00:50:13
59 | VLADISLAV | No report file found or report rejected. | 2019-04-28 00:52:24
60 | kokoko | No report file found or report rejected. | 2019-04-30 15:50:44
61 | Bottom-Up | No report file found or report rejected. | 2019-04-30 16:40:29
62 | Alphapred | No report file found or report rejected. | 2019-05-01 23:37:30
63 | --- | No report file found or report rejected. | 2019-06-08 18:33:54
64 | Maju116 | No report file found or report rejected. | 2019-05-25 17:24:46
65 | Szczury | No report file found or report rejected. | 2019-06-09 23:29:33
66 | Szczury | No report file found or report rejected. | 2019-06-09 23:44:06
67 | Lure | No report file found or report rejected. | 2019-05-02 16:11:23
68 | Maju116 | No report file found or report rejected. | 2019-05-25 11:26:53

The task: Training data in this challenge consist of 100,000 Clash Royale decks that were most commonly used by players during three consecutive league seasons in 1v1 ladder games. Participants of this challenge are asked to indicate ten subsets of those decks (as lists of the corresponding row numbers) that allow constructing efficient win-rate prediction models. The quality of solutions is assessed by measuring the prediction performance of Support Vector Regression (SVR) models with radial kernels, trained on the indicated data subsets. The test set used for the evaluation consists of another collection of decks that were popular during the three game seasons following the training data period; it will not be revealed to participants before the end of the challenge. It is also worth noticing that the same decks can appear in both the training and evaluation data, but they are likely to have different win-rates. These differences arise because the game evolves over time: players adapt to new strategies, and the balance of individual cards (and their popularity) changes slightly from one season to another.

The values of the SVR hyper-parameters, namely epsilon, C, and gamma, should also be tuned by participants and submitted as a part of their solutions.
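As a rough illustration only (the organizers' exact evaluation pipeline is not published here), the scoring setup can be approximated with scikit-learn's SVR and an RBF kernel; the feature matrix, win-rates, and hyper-parameter values below are made-up placeholders, not data from the challenge.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative stand-ins: random binary "bag-of-cards" features and
# random win-rates in place of a real 600-deck training subset.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(600, 90)).astype(float)  # 600 decks, 90 distinct cards
y = rng.uniform(0.3, 0.7, size=600)                   # estimated win-rates

# epsilon, C, and gamma are the three hyper-parameters that must be tuned
# and submitted together with each subset (the values here are arbitrary).
model = SVR(kernel="rbf", epsilon=0.01, C=1.0, gamma=0.05)
model.fit(X, y)
predictions = model.predict(X)
```

In the actual evaluation the model is trained on the submitted row subset and scored on the hidden test decks.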

Data description and format: The data for this competition are provided in a tabular format, as two files, namely trainingData.csv and validationData.csv. They can be obtained from the Data files section. Each row in those tables corresponds to a Clash Royale deck and is described by four columns. The first one lists the eight cards that constitute the deck (the names of individual cards are separated by semicolons). The second and third columns show the number of games played with the deck and the number of players that were using it, respectively. These values were computed based on over 160,000,000 game results obtained using the RoyaleAPI service (https://royaleapi.com/). The last column gives an estimate of the deck's win-rate, calculated based on games played in the given time window.
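The semicolon-separated card lists in the first column can be encoded as binary card-indicator vectors (the bag-of-cards representation used during evaluation). A minimal sketch, with invented card names standing in for real decks:

```python
def bag_of_cards(decks):
    """Map each deck (a semicolon-separated card list) to a binary vector
    over the vocabulary of all cards seen across the given decks."""
    vocab = sorted({card for deck in decks for card in deck.split(";")})
    index = {card: i for i, card in enumerate(vocab)}
    matrix = []
    for deck in decks:
        row = [0] * len(vocab)
        for card in deck.split(";"):
            row[index[card]] = 1
        matrix.append(row)
    return vocab, matrix

# Two toy 8-card decks sharing four cards.
decks = [
    "Giant;Fireball;Musketeer;Zap;Knight;Archers;Minions;Cannon",
    "Golem;Lightning;Baby Dragon;Zap;Knight;Archers;Minions;Tornado",
]
vocab, X = bag_of_cards(decks)
```

Since every deck contains exactly eight cards, each row of the resulting matrix sums to 8.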

The validation data set consists of 6,000 decks played during the same period as the evaluation data. It is provided to participants to facilitate the evaluation of their solutions without the need for using the public Leaderboard. The evaluation set will not be revealed to participants before completion of the challenge.

The format of submissions: The participants of the competition are asked to indicate ten subsets of the training data, of increasing sizes, that allow training efficient SVR models (on bag-of-cards representations of the selected decks). The sizes of those subsets are fixed at 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, and 1500 decks. Along with each subset, participants should provide the values of the three hyper-parameters of the SVR model that will be used during the evaluation, namely epsilon, C, and gamma.

The submission file should have a textual format. It should contain ten lines corresponding to the consecutive subsets. Each line should start with three numbers separated by semicolons (the values of the hyper-parameters in the order: epsilon; C; gamma). Then, after a single semicolon, there should be a list of integers separated by commas, indicating the row numbers of the training data set that should be used for constructing the model. The length of this list in consecutive lines should match the corresponding subset sizes stated above (i.e., the first line should contain 600 integers, the second line 700 integers, and so on). The Data files section includes an example of a correctly formatted submission file.
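The line format above can be produced with a few lines of code. A sketch (the hyper-parameter values and the naive choice of the first 600 rows are placeholders, not a meaningful solution):

```python
def format_submission_lines(solutions):
    """Each solution is a tuple (epsilon, C, gamma, row_numbers);
    returns one submission line per subset."""
    lines = []
    for epsilon, c, gamma, rows in solutions:
        lines.append(f"{epsilon};{c};{gamma};" + ",".join(str(r) for r in rows))
    return "\n".join(lines)

# Toy example with only the first subset size (rows 1..600 chosen naively);
# a real submission would contain ten such lines of sizes 600, 700, ..., 1500.
text = format_submission_lines([(0.01, 1.0, 0.05, range(1, 601))])
```

Checking that each line splits into three hyper-parameters plus the expected number of row indices is a cheap way to catch formatting errors before submitting.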

Evaluation of results: The submitted solutions are evaluated online, and the preliminary results are published on the competition Leaderboard. The preliminary score is computed on a small subset of the test records, fixed for all participants. The final evaluation is performed after completion of the competition using the remaining part of the test data. Those results will also be published online. It is important to note that only teams which submit a report describing their approach before the end of the contest will qualify for the final evaluation. The winning teams will be officially announced during a special session devoted to this competition, which will be organized at the FedCSIS'19 conference. The evaluation system will become operational on April 25.

The assessment of solutions will be done using the R-squared metric. If we denote a prediction for a test instance i as $f_i$ and its reference win-rate as $y_i$, R-squared can be defined as: $$R^2 = 1 - \frac{RSS}{TSS},$$ where RSS is the residual sum of squares: $$RSS = \sum_i (y_i - f_i)^2,$$ and TSS is the total sum of squares: $$TSS =  \sum_i (y_i - \bar{y})^2,$$ and $$\bar{y} = \frac{1}{N}\sum_i y_i .$$ A value of this metric will be computed independently for predictions conducted by SVR models trained on each of the ten subsets included in the submitted solutions. The final score will be an average of the obtained results. 
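The metric defined above translates directly into code. A minimal sketch, with toy win-rates:

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - RSS/TSS, as defined in the evaluation rules."""
    y_bar = sum(y_true) / len(y_true)
    rss = sum((y - f) ** 2 for y, f in zip(y_true, y_pred))
    tss = sum((y - y_bar) ** 2 for y in y_true)
    return 1.0 - rss / tss

# Toy reference win-rates; perfect predictions give R^2 = 1,
# while always predicting the mean gives R^2 = 0.
y = [0.48, 0.52, 0.55, 0.45]
```

The final score is then the average of ten such R-squared values, one per submitted subset.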

Schedule:
  • April 24, 2019: start of the competition, data become available,
  • June 9, 2019 (23:59 GMT): deadline for submitting the solutions,
  • June 12, 2019 (23:59 GMT): deadline for sending the reports, end of the competition,
  • June 20, 2019: online publication of the final results, sending invitations for submitting papers for the special session at FedCSIS'19.

Authors of the top-ranked solutions (based on the final evaluation scores) will be awarded prizes funded by our sponsors:

  • First Prize: 1000 USD + one free FedCSIS'19 conference registration,
  • Second Prize: 500 USD + one free FedCSIS'19 conference registration,
  • Third Prize: one free FedCSIS'19 conference registration.

The award ceremony will take place during the FedCSIS'19 conference.

Organizing Committee:

  • Andrzej Janusz, University of Warsaw & eSensei
  • Łukasz Grad, eSensei
  • Marek Grzegorowski, University of Warsaw
  • Piotr Biczyk, QED Software
  • Krzysztof Stencel, University of Warsaw
  • Dominik Ślęzak, University of Warsaw & QED Software

In case of any questions, please post on the competition forum or send an email to contact {at} knowledgepit.ml

This forum is for all users to discuss matters related to the competition. Good manners apply!
Discussion | Author | Replies | Last post
extended deadline for submitting solutions | Andrzej | 0 | by Andrzej, Monday, June 10, 2019, 09:05:34
Why solution deadline was 2 hours earlier than planned? | Maciej | 5 | by Andrzej, Monday, June 10, 2019, 08:58:27
Hyperparameter limit | Jan Kanty | 1 | by Andrzej, Saturday, June 01, 2019, 11:59:35
Data representation used for evaluation | Wojciech | 1 | by Łukasz, Tuesday, May 21, 2019, 13:19:24
502 Bad Gateway | Paweł | 1 | by Andrzej, Thursday, May 16, 2019, 15:49:40
Submission error | Henry | 2 | by Andrzej, Monday, May 13, 2019, 10:09:35
Evaluation SVR | Jan Kanty | 8 | by Dymitr, Tuesday, May 07, 2019, 15:23:45
Baseline solution | Jan Kanty | 1 | by Andrzej, Saturday, April 27, 2019, 13:34:15
Welcome! | Andrzej | 0 | by Andrzej, Wednesday, April 24, 2019, 22:38:55