
Second semester project for Data Mining 2022/2023 course

This is the second project for students enrolled in the Data Mining course 2022/2023 (and 2023/2024) at the Faculty of Mathematics, Informatics, and Mechanics at the University of Warsaw. The task is to predict topics of scientific publications based on their abstracts.

Overview

The task in this challenge is to predict topics of scientific articles from the ACM Digital Library. The topics correspond to classes from the ACM Computing Classification System (the old version from 1998). This is a multi-label classification problem, as each text can be assigned to one or more classes.

More details regarding the available data, submission format, and evaluation can be found in the Task description section.

Leaderboard

Rank  Team Name          Score   Submission Date
 1    Michał Orzyłowski  0.4372  2023-06-11 22:12:11
 2    basiekjusz         0.4347  2023-06-11 21:02:51
 3    Team Blue          0.4325  2023-05-31 21:51:17
 4    jkozlovvski        0.4309  2023-06-11 11:22:25
 5    baseline           0.4290  2023-05-10 21:58:57
 6    kzakrzewski        0.4216  2023-06-17 17:24:57
 7    Czarek             0.4207  2023-06-15 18:11:52
 8    M                  0.4123  2023-06-18 21:13:08
 9    AL                 0.4025  2023-06-18 22:21:12
10    Jan Wojtach        0.3986  2023-06-11 01:01:57
11    bros               0.3818  2023-06-17 02:11:24
12    szumiel            0.3707  2023-06-10 16:30:29
13    Jakub Panasiuk     0.3679  2023-06-11 22:28:48
14    Szymon Karpiński   0.3585  2023-06-09 00:00:24
15    Krystian           0.3325  2023-06-18 21:19:08
16    zecernia           0.3052  2023-06-18 21:26:36
17    Team Turtle        0.2552  2023-06-18 10:09:18
18    Younginn           0.2324  2023-06-07 11:13:40

Data for this project consists of two tables in tab-separated values (TSV) format. Each row in those files corresponds to an abstract of a scientific article from the ACM Digital Library that was assigned to one or more topics from the ACM Computing Classification System.

The training data (DM2023_training_docs_and_labels.tsv) has three columns: the first one is an identifier of a document, the second one stores the text of the abstract, and the third one contains a list of comma-separated topic labels.

The test data (DM2023_test_docs.tsv) has a similar format, but the labels in the third column are missing.
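The three-column layout described above can be parsed with the standard library alone. The snippet below is a minimal sketch; the two-row sample string is made up for illustration (the real file names come from the competition, but the document IDs, abstracts, and labels here are hypothetical):

```python
import csv
from io import StringIO

# Hypothetical sample mimicking DM2023_training_docs_and_labels.tsv:
# doc_id <TAB> abstract text <TAB> comma-separated labels
sample = (
    "doc_001\tWe study sorting networks of small depth.\tF.2.2,G.2.1\n"
    "doc_002\tA neural approach to dependency parsing.\tI.2.7\n"
)

def load_training(handle):
    """Parse rows into (doc_id, abstract, list-of-labels) tuples."""
    rows = []
    for doc_id, abstract, labels in csv.reader(handle, delimiter="\t"):
        rows.append((doc_id, abstract, labels.split(",")))
    return rows

rows = load_training(StringIO(sample))
print(rows[0][2])  # ['F.2.2', 'G.2.1']
```

For the real files, replace `StringIO(sample)` with `open("DM2023_training_docs_and_labels.tsv", encoding="utf-8")`; the test file parses the same way, minus the third column.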

The task and the format of submissions: your task is to predict the labels of documents from the test data and submit them to the evaluation system. A correctly formatted submission is a text file with exactly 100000 lines. Each line must correspond to a document from the test data set (the order matters!) and contain a list of one or more predicted labels, separated by commas.
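A submission file in the format above can be produced as follows. This is a sketch under the stated format assumptions; the `predictions` list and the output file name `submission.txt` are hypothetical (a real submission holds one line per test document, 100000 in total, in test-file order):

```python
# Hypothetical predictions: one list of labels per test document,
# in the same order as the rows of DM2023_test_docs.tsv.
predictions = [["I.2.7"], ["F.2.2", "G.2.1"], ["H.3.3"]]

def write_submission(preds, path):
    """Write one comma-joined label list per line, preserving order."""
    with open(path, "w", encoding="utf-8") as f:
        for labels in preds:
            f.write(",".join(labels) + "\n")

write_submission(predictions, "submission.txt")

with open("submission.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()
print(len(lines))  # 3 in this toy example; must be exactly 100000 for real
```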

Evaluation: the quality of submissions will be evaluated using the average F1-score measure, i.e., for each test document, the F1-score between the predicted and true labels will be computed, and the values obtained for all test cases will be averaged.
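The per-document F1-score and its average can be sketched directly from the definition above (treating predicted and true labels as sets; the toy labels in the example are made up):

```python
def f1(pred, true):
    """F1-score between predicted and true label sets for one document."""
    pred, true = set(pred), set(true)
    tp = len(pred & true)          # true positives: labels in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(true)
    return 2 * precision * recall / (precision + recall)

def average_f1(all_pred, all_true):
    """Mean of per-document F1-scores, as used in the evaluation."""
    return sum(f1(p, t) for p, t in zip(all_pred, all_true)) / len(all_true)

# Toy example with hypothetical labels: each document scores 2/3,
# so the average is also 2/3.
score = average_f1([["A.1", "B.2"], ["C.3"]],
                   [["A.1"], ["C.3", "D.4"]])
print(round(score, 4))  # 0.6667
```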

Solutions will be evaluated online, and preliminary results will be published on the public leaderboard. The preliminary score is computed on a small subset of the test data (10%), fixed for all participants. The final evaluation will be performed after the competition ends, using the remaining part of the test data, and those results will also be published online. It is important to note that only teams that submit a report describing their approach before the end of the challenge will qualify for the final evaluation. Participants may submit many solutions, but before the competition ends each team needs to indicate up to two final solutions that will undergo the final evaluation (on the remaining part of the test data).

In case of additional questions, please post them on the competition forum. 

In order to download competition files you need to be enrolled.

This forum is for all users to discuss matters related to the competition. Good manners apply!

There are no topics in this forum yet.