
Clash Royale Challenge: How to Select Training Decks for Win-rate Prediction

Clash Royale Challenge is the sixth data mining competition organized in association with the Federated Conference on Computer Science and Information Systems (https://fedcsis.org/). This year, the task is related to the problem of selecting an optimal training data subset for learning how to predict win-rates of the most popular Clash Royale decks. The competition is kindly sponsored by eSensei, QED Software and Polish Information Processing Society (PTI).

Clash Royale is a popular video game that combines elements of the collectible card game and tower defense genres (https://clashroyale.com/). In this game, players build decks of 8 cards representing playable troops, buildings, and spells, which they use to attack their opponent's towers and to defend against the opponent's cards. Using good decks is one of the critical abilities of successful Clash Royale players.

In this data mining challenge, we take on the problem of measuring and predicting a deck's effectiveness in 1v1 ladder games. In particular, we would like to find out whether it is possible to train an efficient win-rate prediction model on a relatively small subset of decks whose win-rates were estimated in the past. This task can also be viewed as an active learning problem: selecting a batch of data that should be labeled and used for training a win-rate prediction model.

More details regarding the task and a description of the challenge data can be found in the Task description section.

Special session at FedCSIS'19: As in previous years, a special session devoted to the competition will be held at the conference. We will invite authors of selected challenge reports to extend them for publication in the conference proceedings (after reviews by Organizing Committee members) and presentation at the conference. The publications will be treated as short papers and will be indexed by IEEE Digital Library and Web of Science. The invited teams will be chosen based on their final rank, innovativeness of their approach and quality of the submitted report. 

References:

  • Andrzej Janusz, Łukasz Grad, Dominik Ślęzak: Utilizing Hybrid Information Sources to Learn Representations of Cards in Collectible Card Video Games. ICDM Workshops 2018: 422-429
  • Andrzej Janusz, Dominik Ślęzak, Sebastian Stawicki, Krzysztof Stencel: SENSEI: An Intelligent Advisory System for the eSport Community and Casual Players. WI2018: 754-757
  • Andrzej Janusz, Tomasz Tajmajer, Maciej Swiechowski, Łukasz Grad, Jacek Puczniewski, Dominik Ślęzak: Toward an Intelligent HS Deck Advisor: Lessons Learned from AAIA'18 Data Mining Competition. FedCSIS 2018: 189-192
  • https://royaleapi.com/

Our Clash Royale Challenge: How to Select Training Decks for Win-rate Prediction has ended. We would like to thank all participants for their involvement and hard work! 

The competition attracted 115 teams, of which 43 shared a brief report describing their approach. In total, we received over 1200 submissions.

The official Winners:

  1. Dymitr Ruta, EBTIC, Khalifa University, UAE (team Dymitr)
  2. Ling Cen, EBTIC, Khalifa University, UAE and Quang Hieu Vu, ZALORA (team amy)
  3. Cenru Liu, Ngee Ann Polytechnic, Singapore and Jiahao Cen, Nanyang Polytechnic, Singapore (team ru)

Congratulations on your excellent results!

We will be sending invitations to other selected teams over the next couple of days.

Rank | Team Name | Report | Preliminary Score | Final Score | Submissions
1 | Dymitr | True | 0.2749 | 0.255216 | 144
2 | amy | True | 0.2703 | 0.253017 | 123
3 | ru | True | 0.2352 | 0.225682 | 25
4 | ms | True | 0.2360 | 0.224135 | 51
5 | -_- | True | 0.2147 | 0.221517 | 30
6 | ProfesorRapu | True | 0.2260 | 0.220632 | 41
7 | mmm | True | 0.2379 | 0.206217 | 26
8 | Magnaci i Czarodzieje | True | 0.2012 | 0.200337 | 20
9 | ludziej | True | 0.2243 | 0.197034 | 7
10 | DM course project | True | 0.1775 | 0.187766 | 18
11 | panda3 | True | 0.1524 | 0.182201 | 15
12 | Mis Amigos | True | 0.1906 | 0.181692 | 18
13 | Emememsy | True | 0.1839 | 0.180416 | 14
14 | Robert Benke | True | 0.1900 | 0.168978 | 30
15 | Tomasz Garbus | True | 0.1968 | 0.166824 | 42
16 | 3 sekundy max | True | 0.1760 | 0.165840 | 16
17 | Team | True | 0.1738 | 0.159741 | 11
18 | LegeArtis | True | 0.1682 | 0.158371 | 12
19 | baseline solution | True | 0.1783 | 0.156461 | 1
20 | Houdini | True | 0.2063 | 0.153417 | 27
21 | asdf | True | 0.1358 | 0.151016 | 3
22 | BigDarkClown | True | 0.1737 | 0.148699 | 19
23 | TheWinner | True | 0.1945 | 0.147718 | 3
24 | MIMUW E L I T E | True | 0.1564 | 0.141170 | 38
25 | ImJustSittingHereLookingAtMyValidationLoss | True | 0.1960 | 0.138353 | 62
26 | I_Support_the_Vector_Machines | True | 0.1817 | 0.135137 | 24
27 | maciek | True | 0.1510 | 0.120610 | 9
28 | szkawicz | True | 0.1698 | 0.117453 | 7
29 | tralala | True | 0.1407 | 0.116705 | 6
30 | pknut | True | 0.1547 | 0.116524 | 17
31 | typNiepokorny | True | 0.1336 | 0.115865 | 8
32 | Niebezpieczne Janusze | True | 0.1504 | 0.109470 | 23
33 | kbial | True | 0.1333 | 0.106660 | 5
34 | Wątka | True | 0.1082 | 0.105679 | 5
35 | piotrek | True | 0.1206 | 0.102050 | 4
36 | abc | True | 0.1490 | 0.101135 | 34
37 | pilusx | True | 0.1241 | 0.099945 | 7
38 | ludzie_bez_nadziei | True | 0.0788 | 0.069547 | 12
39 | 4_czerwca | True | 0.0200 | 0.015196 | 9
40 | Dymitr | True | 0.2749 | 0.000000 | 144
41 | 4_czerwca | True | 0.0200 | 0.000000 | 9
42 | serene_mestorf | False | 0.2306 | — | 27
43 | Royalty | False | 0.1990 | — | 16
44 | IuriiM | False | 0.1984 | — | 4
45 | Radosne Kurki | False | 0.1959 | — | 37
46 | kk | False | 0.1677 | — | 4
47 | DUCKTILE | False | 0.1613 | — | 4
48 | GR V TMN | False | 0.1594 | — | 11
49 | Jan Omeljaniuk | False | 0.1563 | — | 22
50 | Los Estribos | False | 0.1496 | — | 6
51 | piero | False | 0.1528 | — | 31
52 | mathurin | False | 0.1426 | — | 7
53 | DEEVA | False | 0.1309 | — | 34
54 | APRB | False | 0.1201 | — | 3
55 | melanzana | False | 0.1129 | — | 3
56 | Yarno Boelens | False | 0.1076 | — | 3
57 | jj | False | 0.1050 | — | 4
58 | VLADISLAV | False | 0.1050 | — | 3
59 | kokoko | False | 0.1050 | — | 2
60 | Bottom-Up | False | 0.1050 | — | 4
61 | Alphapred | False | 0.1050 | — | 5
62 | --- | False | 0.1050 | — | 2
63 | Relax | False | 0.0877 | — | 3
64 | Maju116 | False | -0.1191 | — | 2
65 | Szczury | False | -0.0610 | — | 2
66 | Szczury | False | -0.0610 | — | 2
67 | Lure | False | -0.0704 | — | 1
68 | Maju116 | False | -0.1191 | — | 2

A final score of "—" means that no report file was found or the report was rejected, so the team did not qualify for the final evaluation.

The task: The training data in this challenge consist of the 100,000 Clash Royale decks that were most commonly used by players during three consecutive league seasons of 1v1 ladder games. Participants are asked to indicate ten subsets of those decks (as lists of the corresponding row numbers) that allow constructing efficient win-rate prediction models. The quality of solutions is assessed by measuring the prediction performance of Support Vector Regression (SVR) models with radial kernels, trained on the indicated data subsets. The test set used for the evaluation consists of another collection of decks that were popular during the next three game seasons after the training data period; it will not be revealed to participants before the end of the challenge. It is also worth noting that the same decks can appear in both the training and evaluation data, but they are likely to have different win-rates. These differences arise because the game evolves over time: players adapt to new strategies, and the balance of individual cards (and their popularity) changes slightly from one season to another.

The values of the SVR hyper-parameters, namely epsilon, C, and gamma, should also be tuned by participants and submitted as part of the solutions.

Data description and format: The data for this competition are provided in a tabular format, as two files, namely trainingData.csv and validationData.csv. They can be obtained from the Data files section. Each row in those tables corresponds to a Clash Royale deck and is described by four columns. The first one lists the eight cards that constitute the deck (the names of individual cards are separated by semicolons). The second and third columns show the number of games played with the deck and the number of players that were using it, respectively. These values were computed based on over 160,000,000 game results obtained using the RoyaleAPI service (https://royaleapi.com/). The last column contains estimates of the decks' win-rates, calculated based on games played in the given time window.
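
For concreteness, here is a minimal sketch of how the files could be loaded and turned into the bag-of-cards representation used for evaluation (see the submission format below). It assumes the CSV files have a header row and accesses the columns by position, following the order described above; scikit-learn's CountVectorizer is just one of several ways to build the encoding:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Assumption: both files have a header row; columns are referenced by
# position in the order described above: deck, games, players, win-rate.
train = pd.read_csv("trainingData.csv")
valid = pd.read_csv("validationData.csv")

decks_train, y_train = train.iloc[:, 0], train.iloc[:, 3].to_numpy()
decks_valid, y_valid = valid.iloc[:, 0], valid.iloc[:, 3].to_numpy()

# Bag-of-cards: one binary feature per distinct card name.
vectorizer = CountVectorizer(
    tokenizer=lambda deck: deck.split(";"),
    token_pattern=None,   # disable the default regexp tokenizer
    lowercase=False,      # keep card names as-is
    binary=True,          # a deck contains each card at most once
)
X_train = vectorizer.fit_transform(decks_train)
X_valid = vectorizer.transform(decks_valid)
```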

The validation data set consists of 6,000 decks played during the same period as the evaluation data. It is provided to participants to facilitate the evaluation of their solutions without the need to use the public Leaderboard. The evaluation set itself will not be revealed to participants before the completion of the challenge.
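
Since the validation decks come from the same period as the evaluation data, they can serve as an offline proxy for the Leaderboard, both for scoring a candidate subset and for tuning epsilon, C, and gamma as required above. A minimal sketch, reusing X_train, y_train, X_valid, and y_valid from the previous snippet (the random subset and the grid values are placeholders, not recommendations):

```python
from itertools import product

import numpy as np
from sklearn.metrics import r2_score
from sklearn.svm import SVR

# Placeholder: a random 600-deck subset; the whole point of the challenge
# is to pick this subset more cleverly.
rng = np.random.default_rng(0)
subset = rng.choice(X_train.shape[0], size=600, replace=False)

best_score, best_params = -np.inf, None
for eps, C, gamma in product([0.001, 0.01, 0.1], [1.0, 10.0], [0.01, 0.1]):
    model = SVR(kernel="rbf", epsilon=eps, C=C, gamma=gamma)
    model.fit(X_train[subset], y_train[subset])
    score = r2_score(y_valid, model.predict(X_valid))
    if score > best_score:
        best_score, best_params = score, (eps, C, gamma)

print(f"validation R^2 = {best_score:.4f} for (epsilon, C, gamma) = {best_params}")
```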

The format of submissions: Participants are asked to indicate ten subsets of the training data, of increasing sizes, that allow training efficient SVR models (on bag-of-cards representations of the selected decks). The sizes of those subsets are fixed at 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, and 1500 decks. Along with each subset, participants should provide the values of the three hyper-parameters of the SVR model that will be used during the evaluation, namely epsilon, C, and gamma.

The submission file should have a textual format. It should contain ten lines corresponding to the consecutive subsets. Each line should start with three numbers separated by semicolons (the values of the hyper-parameters in the order: epsilon; C; gamma). Then, after a single semicolon, there should be a list of integers separated by commas, indicating the row numbers of the training data set that should be used for constructing the model. The length of this list in consecutive lines should match the corresponding subset sizes stated above (i.e., the first line should contain 600 integers, the second line should contain 700 integers, and so on). The Data files section includes an example of a correctly formatted submission file.
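
A small helper for producing that format might look as follows (a sketch; in particular, whether row numbers are 0-based or 1-based is not stated here, so check against the example file in the Data files section):

```python
def write_submission(path, solutions):
    """Write a submission file.

    `solutions` is a list of ten (epsilon, C, gamma, rows) tuples, where
    `rows` holds 600, 700, ..., 1500 training-set row numbers, in that order.
    """
    expected_sizes = range(600, 1501, 100)
    with open(path, "w") as f:
        for (epsilon, C, gamma, rows), size in zip(solutions, expected_sizes):
            assert len(rows) == size, f"expected {size} rows, got {len(rows)}"
            # epsilon;C;gamma; followed by the comma-separated row numbers
            f.write(f"{epsilon};{C};{gamma};")
            f.write(",".join(str(r) for r in rows) + "\n")
```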

Evaluation of results: The submitted solutions are evaluated online, and the preliminary results are published on the competition Leaderboard. The preliminary score is computed on a small subset of the test records, fixed for all participants. The final evaluation is performed after completion of the competition using the remaining part of the test data. Those results will also be published online. It is important to note that only teams that submit a report describing their approach before the end of the contest will qualify for the final evaluation. The winning teams will be officially announced during a special session devoted to this competition, which will be organized at the FedCSIS'19 conference. The evaluation system will become operational on April 25.

The assessment of solutions is done using the R-squared metric. If we denote the prediction for a test instance $i$ as $f_i$ and its reference win-rate as $y_i$, R-squared is defined as: $$R^2 = 1 - \frac{RSS}{TSS},$$ where RSS is the residual sum of squares: $$RSS = \sum_i (y_i - f_i)^2,$$ TSS is the total sum of squares: $$TSS = \sum_i (y_i - \bar{y})^2,$$ and $$\bar{y} = \frac{1}{N}\sum_i y_i .$$ The value of this metric is computed independently for the predictions of the SVR models trained on each of the ten subsets included in a submitted solution. The final score is the average of the ten results.
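
For reference, the metric and the averaging can be written down directly from the formulas above (a sketch with synthetic stand-in data; in practice $y$ would be the hidden test win-rates and each prediction vector the output of one of the ten SVR models):

```python
import numpy as np

def r_squared(y, f):
    """R^2 = 1 - RSS / TSS, exactly as defined above."""
    y, f = np.asarray(y), np.asarray(f)
    rss = np.sum((y - f) ** 2)
    tss = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - rss / tss

# Synthetic stand-in for the hidden test data and the ten models' predictions.
rng = np.random.default_rng(1)
y_test = rng.uniform(0.3, 0.7, size=5000)
predictions = [y_test + rng.normal(0.0, 0.05, size=5000) for _ in range(10)]

final_score = float(np.mean([r_squared(y_test, f) for f in predictions]))
print(f"final score (average R^2 over ten models): {final_score:.4f}")
```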

  • April 24, 2019: start of the competition, data become available,
  • June 9, 2019 (23:59 GMT): deadline for submitting the solutions,
  • June 12, 2019 (23:59 GMT): deadline for sending the reports, end of the competition,
  • June 20, 2019: online publication of the final results, sending invitations for submitting papers for the special session at FedCSIS'19.

Authors of the top-ranked solutions (based on the final evaluation scores) will be awarded prizes funded by our sponsors:

  • First Prize: 1000 USD + one free FedCSIS'19 conference registration,
  • Second Prize: 500 USD + one free FedCSIS'19 conference registration,
  • Third Prize: one free FedCSIS'19 conference registration.

The award ceremony will take place during the FedCSIS'19 conference.

Organizers:

  • Andrzej Janusz, University of Warsaw & eSensei
  • Łukasz Grad, eSensei
  • Marek Grzegorowski, University of Warsaw
  • Piotr Biczyk, QED Software
  • Krzysztof Stencel, University of Warsaw
  • Dominik Ślęzak, University of Warsaw & QED Software

In case of any questions, please post on the competition forum or write an email to contact {at} knowledgepit.ml.

This forum is for all users to discuss matters related to the competition. Good manners apply!
Discussion | Author | Replies | Last post
extended deadline for submitting solutions | Andrzej | 0 | by Andrzej, Monday, June 10, 2019, 09:05:34
Why solution deadline was 2 hours earlier than planned? | Maciej | 5 | by Andrzej, Monday, June 10, 2019, 08:58:27
Hyperparameter limit | Jan Kanty | 1 | by Andrzej, Saturday, June 01, 2019, 11:59:35
Data representation used for evaluation | Wojciech | 1 | by Łukasz, Tuesday, May 21, 2019, 13:19:24
502 Bad Gateway | Paweł | 1 | by Andrzej, Thursday, May 16, 2019, 15:49:40
Submission error | Henry | 2 | by Andrzej, Monday, May 13, 2019, 10:09:35
Evaluation SVR | Jan Kanty | 8 | by Dymitr, Tuesday, May 07, 2019, 15:23:45
Baseline solution | Jan Kanty | 1 | by Andrzej, Saturday, April 27, 2019, 13:34:15
Welcome! | Andrzej | 0 | by Andrzej, Wednesday, April 24, 2019, 22:38:55