Semester Project for Data Mining 2019/2020 Course
This is the second project for students enrolled in the Data Mining course 2019/2020 at the Faculty of Mathematics, Informatics and Mechanics at the University of Warsaw.
The task in this challenge is to classify abstracts of scientific articles from ACM Digital Library into topics from the ACM Computing Classification System (the old version from 1998). This problem can be regarded as a multi-label classification of textual data.
More details regarding the available data, submission format, and evaluation can be found in the Task description section.
Participants of the challenge are obliged to follow the competition rules:
- This challenge is organized by Andrzej Janusz (the organizer) for students enrolled in the Data Mining course 2019/2020 at the Faculty of Mathematics, Informatics, and Mechanics at the University of Warsaw.
- Participants can work individually or in teams of at most two persons. Teams must be formed at the beginning of the challenge; participants cannot change their teams afterwards.
- Each team has a total limit of 100 submissions.
- The number of submissions per day is limited to 5.
- Participants can use the data that was made available in the challenge - using any external resources is possible only after receiving explicit consent from the organizer. Queries regarding external resources need to be issued through the competition forum.
- The deadline for submitting the solutions is June 21, 2020 (23:59 GMT). Late submissions will not be accepted.
- Each team is obliged to provide a short report describing their final solution. The report must contain the name of the team, the names of all team members, and a brief overview of the approach, explaining all data preprocessing and model construction steps. It should be submitted in PDF format through our submission system by June 21, 2020 (23:59 GMT).
- By enrolling in this competition, you grant the organizer the right to process your submissions and reports for the purpose of evaluation and post-competition research.
- The final project score will depend on the quality of the solution (the score obtained in the final evaluation), and on the quality of the submitted report.
The data for this project consist of two tables in tab-separated format. Each row in those files corresponds to an abstract of a scientific article from ACM Digital Library, which was assigned to one or more topics from the ACM Computing Classification System.
The training data (DM2020_training_docs_and_labels.csv) has three columns: the first one is an identifier of a document, the second one stores the text of the abstract, and the third one contains a list of comma-separated topic labels.
The test data (DM2020_test_docs.csv) has a similar format, but the labels in the third column are missing.
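The training format described above can be parsed with Python's standard library alone; below is a minimal sketch in which the document ids, abstract texts, and label codes are invented for illustration:

```python
import csv
import io

# Mocked rows in the training format described above:
# document id <TAB> abstract text <TAB> comma-separated topic labels
sample = (
    "doc_1\tWe study sorting algorithms on parallel machines.\tF.2.2,D.1.3\n"
    "doc_2\tA survey of database indexing techniques.\tH.2.4\n"
)

rows = []
for doc_id, abstract, label_field in csv.reader(io.StringIO(sample), delimiter="\t"):
    # Split the third column into a list of individual topic labels.
    rows.append((doc_id, abstract, label_field.split(",")))

print(rows[0])
```

For the real files, replace the `io.StringIO(sample)` wrapper with an open file handle for DM2020_training_docs_and_labels.csv; the test file parses the same way but with an empty label column.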
The task and the format of submissions: your task is to predict the labels of documents from the test data and submit them to the evaluation system. A correctly formatted submission is a text file with exactly 100000 lines. Each line should correspond to a document from the test data set (the order matters!) and contain a list of one or more predicted labels, separated by commas.
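A submission file can be assembled by joining each document's predicted labels with commas, one line per test document, in test-file order. A sketch with hypothetical predictions:

```python
# Hypothetical predicted labels, one list per test document, in test-file order.
predictions = [
    ["H.2.4"],
    ["F.2.2", "D.1.3"],
    ["I.2.6"],
]

# One line per document: its predicted labels joined by commas.
lines = [",".join(labels) for labels in predictions]
submission = "\n".join(lines) + "\n"

print(submission)
```

For the actual challenge, the `predictions` list must contain exactly 100000 entries, one for each test document.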
Evaluation: the quality of submissions will be evaluated using the average F1-score measure, i.e., for each test document, the F1-score between the predicted and true labels will be computed, and the values obtained for all test cases will be averaged.
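The per-document F1-score can be written down directly from its definition; the sketch below is an illustration of the measure, not the organizer's evaluation code:

```python
def doc_f1(predicted, true):
    """F1-score between the predicted and true label sets of one document."""
    p, t = set(predicted), set(true)
    overlap = len(p & t)
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(t)
    return 2 * precision * recall / (precision + recall)

def average_f1(all_predicted, all_true):
    """Average the per-document F1-scores over all test documents."""
    scores = [doc_f1(p, t) for p, t in zip(all_predicted, all_true)]
    return sum(scores) / len(scores)

# One exact match, one partial match (invented labels):
pred = [["A.1"], ["B.2", "C.3"]]
true = [["A.1"], ["B.2"]]
print(average_f1(pred, true))  # (1.0 + 2/3) / 2 ≈ 0.8333
```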
Solutions will be evaluated online, and the preliminary results will be published on the public leaderboard. The preliminary score will be computed on a small subset of the test documents (10%), fixed for all participants. The final evaluation will be performed after the completion of the competition using the remaining part of the test data, and those results will also be published online. Note that only teams that submit a report describing their approach before the end of the challenge will qualify for the final evaluation. Participants may submit many solutions, but before the competition ends, each team needs to indicate up to two final solutions that will undergo the final evaluation (on the remaining part of the test data).
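A common baseline for this kind of multi-label text task (not part of the competition materials) is TF-IDF features with one independent binary classifier per label. A sketch assuming scikit-learn is available, with an invented toy corpus in place of the real abstracts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Tiny invented corpus; the real abstracts and ACM labels come from the data files.
abstracts = [
    "parallel sorting algorithms on shared memory machines",
    "query optimization in relational database systems",
    "sorting networks and parallel complexity",
    "indexing structures for relational databases",
]
labels = [["F.2.2"], ["H.2.4"], ["F.2.2"], ["H.2.4"]]

# Binarize the label lists into a document-by-label indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# TF-IDF features over the abstracts.
vec = TfidfVectorizer()
X = vec.fit_transform(abstracts)

# One-vs-rest: an independent binary logistic regression per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

pred = clf.predict(vec.transform(["parallel sorting algorithms"]))
print(mlb.inverse_transform(pred))
```

Since every test document must receive at least one label, a practical refinement is to fall back to the highest-scoring label whenever no per-label decision fires.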
In case of additional questions, please post them on the competition forum.
Here you can find data for this challenge. To get the data, you need to be enrolled and logged in.
| Rank | Team Name | Is Report | Preliminary Score | Final Score | Submissions |
|------|-----------|-----------|-------------------|-------------|-------------|
| | python to make it harder | | | | |
| | Not so standard deviation | | | | |
| | Krety w Krainie Danych | False | 0.3832 | No report file found or report rejected. | 5 |
- May 15, 2020: start of the challenge; the data sets become available and the submission system opens
- June 21, 2020 (23:59:59 GMT): the submission system closes
- June 21, 2020 (23:59:59 GMT): deadline for submitting reports
This forum is for all users to discuss matters related to the competition. Good manners apply!
| Topic | Started by | Replies | Last post |
|-------|------------|---------|-----------|
| Can we use languages other than R? | Jakub | 1 | by Andrzej, Thursday, May 21, 2020, 17:51:48 |