This is the second-semester project for the Interactive Machine Learning 2022/2023 course at MIM UW. This time, the task is to assign correct labels to samples annotated by 'simulated' experts and to estimate the experts' true positive rates, for a prediction problem concerning the identification of firefighter activities from multiple sensor readings.
The goal of this project is therefore twofold: assign labels to a set of instances annotated by faulty labelers, and estimate each labeler's true positive rate for every prediction class.
Competition rules are given in Terms and Conditions.
A detailed description of the task, data, and evaluation metric is in the Task description section.
The deadline for submitting solutions is June 11, 2023.
Participants of the challenge are obliged to follow the competition rules:
- This challenge is organized by Andrzej Janusz and Daniel Kałuża (the Organizers) for students enrolled in the Interactive Machine Learning 2022/2023 course at the Faculty of Mathematics, Informatics, and Mechanics at the University of Warsaw.
- The provided data sets are the property of the Organizers and the KnowledgePit platform. It is forbidden to share or redistribute provided data sets to any third party without explicit consent from the Organizers.
- Each team in the competition may consist of only one person. Working in larger groups or sharing solutions with other teams is strictly forbidden.
- Each team has a limited number of submissions - the limit is set to 100.
- The number of submissions per day is limited to 10.
- Participants can only use data made available in the challenge - using any external resources is forbidden. Queries regarding external resources need to be issued through the competition forum.
- It is strictly forbidden to hack the provided data or to exploit any unfair data leak that can improve the solution score. All attempts at making predictions for any test instance using information extracted from other test instances will result in disqualification.
- The deadline for submitting the solutions is June 11, 2023 (23:59 GMT). Late submissions will not be accepted.
- Each team is obliged to provide a short report describing their final solution. The report must contain information such as the name of the team, the names of all team members, the source code of the final solution, and a brief overview of the used approach. It should be submitted in the KnowledgePit submission system by June 11, 2023 (23:59 GMT).
- By enrolling in this competition, you grant the Organizers the right to process your submissions and reports for the purpose of evaluation and post-competition research.
- The final project score will depend on the quality of the solution (the score obtained in the final evaluation), and on the quality of the submitted report and code.
The task in this project is twofold.
The first part is to estimate the probability of each object belonging to 5 classes based on noisy annotations from many imperfect experts.
The second part is to estimate the true positive rate in each class for every expert, which is one of the indicators of expert quality.
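As a naive illustration of the second part (not the intended solution), one could form majority-vote pseudo-labels from the annotations and then estimate each expert's per-class true positive rate against those pseudo-labels. The data below is synthetic; the real annotations array is described further down.

```python
import numpy as np

# Naive sketch: majority-vote pseudo-labels, then per-expert, per-class TPR
# measured against those pseudo-labels (synthetic data, not the real files).
rng = np.random.default_rng(0)
n_samples, n_classes, n_experts = 100, 5, 4
annotations = rng.integers(0, 2, size=(n_samples, n_classes, n_experts)).astype(float)

# Pseudo-label: class j is "on" for sample i if at least half of the
# annotating experts voted for it (NaN-safe mean over the expert axis).
pseudo = (np.nanmean(annotations, axis=2) >= 0.5).astype(float)

# TPR of expert k in class j = P(expert votes 1 | pseudo-label is 1).
tpr = np.empty((n_experts, n_classes))
for k in range(n_experts):
    for j in range(n_classes):
        votes = annotations[pseudo[:, j] == 1, j, k]
        votes = votes[~np.isnan(votes)]  # skip samples this expert did not label
        tpr[k, j] = votes.mean() if len(votes) else np.nan
```

Treating majority votes as ground truth biases the estimates toward agreeing experts, which is exactly what a better solution should try to avoid.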
For every sample, you are given both its representation, in a file named train_X, and the annotations assigned by experts, in the annotations file (the files are aligned by rows).
Both files are saved in numpy .npy format:
- the train_X array has shape (n_samples, n_features)
- the annotations array has shape (n_samples, n_classes, n_experts)
A 1 at position (i, j, k) of the annotations array indicates that the k-th expert assigned the j-th label to the i-th sample. Each expert may indicate that a sample belongs to zero, one, or multiple classes.
Please keep in mind that not every expert annotated each sample; a missing annotation is indicated by NaN in the appropriate slice of the array.
Moreover, as in a standard active learning scenario, some samples were not labeled by any expert at all.
Those samples will not be used for the evaluation, but they are kept in the data to model the problem more faithfully.
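The data layout above can be sketched as follows. The file names are taken from the description (train_X and annotations in .npy format); a small synthetic annotations array stands in for the real one so the snippet is self-contained, and the loading calls are shown in comments.

```python
import numpy as np

# In the real setting you would load the provided files:
#   train_X = np.load("train_X.npy")          # (n_samples, n_features)
#   annotations = np.load("annotations.npy")  # (n_samples, n_classes, n_experts)
# Synthetic stand-in with the same conventions (NaN = no annotation):
n_samples, n_classes, n_experts = 4, 5, 3
rng = np.random.default_rng(0)
annotations = rng.integers(0, 2, size=(n_samples, n_classes, n_experts)).astype(float)
annotations[0, :, :] = np.nan   # sample 0: annotated by no expert
annotations[1, :, 2] = np.nan   # sample 1: expert 2 skipped it

# Samples with at least one annotation -- only these are evaluated.
labeled_mask = ~np.all(np.isnan(annotations), axis=(1, 2))

# Naive per-class probability: fraction of annotating experts voting "yes",
# computed only over the labeled samples, ignoring NaNs.
class_probs = np.nanmean(annotations[labeled_mask], axis=2)  # (n_labeled, n_classes)
```

The NaN conventions matter: nanmean ignores experts who skipped a sample, while the all-NaN rows (never-annotated samples) are filtered out entirely before averaging.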
Format of submissions: solutions should be submitted as text files with 2 sections.
The first section has as many rows as there are annotated samples in the dataset. Each line should contain 5 floating-point numbers: the estimated probabilities that the sample belongs to each of the 5 classes.
(The samples should be submitted in the same order as in the annotations input file; keep in mind that estimates for unlabeled samples must not be submitted.)
The second section should contain n_experts lines describing the estimated true positive rates of the experts. Each line should contain exactly 5 comma-separated floats, denoting the estimation of the true positive rate of this expert in the corresponding classes in the same order as in the annotations file.
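The submission format above can be sketched as follows. The description does not state how the two sections are separated, so this sketch assumes they are simply concatenated (their lengths are known: the number of annotated samples, then n_experts); the file name and the number of decimal places are arbitrary choices.

```python
import numpy as np

# Sketch of writing a submission file with the two sections described above.
n_labeled, n_experts, n_classes = 3, 2, 5
rng = np.random.default_rng(1)
class_probs = rng.random((n_labeled, n_classes))  # per-class probabilities, labeled samples only
expert_tpr = rng.random((n_experts, n_classes))   # estimated TPR per expert and class

with open("submission.txt", "w") as f:
    # Section 1: one line per annotated sample, 5 comma-separated floats.
    for row in class_probs:
        f.write(",".join(f"{p:.6f}" for p in row) + "\n")
    # Section 2: one line per expert, 5 comma-separated floats (TPR per class).
    for row in expert_tpr:
        f.write(",".join(f"{t:.6f}" for t in row) + "\n")
```

Keeping the rows in the same order as the annotations file (and the experts in the same order as the last axis of that array) is essential, since the evaluator matches lines by position.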
Evaluation: the submitted class probabilities will be evaluated using the macro ROC AUC metric against the true labels of the samples. The estimated true positive rates will be evaluated using the Spearman rank correlation with the hidden real true positive rates of the experts. The final score will be the average of those two numbers.
During the challenge, your solutions will be evaluated on a fraction of the data set and a fraction of the experts' true-positive-rate estimates, and your best preliminary score will be displayed on the public Leaderboard. After the competition ends, the selected solutions will be evaluated on the remaining part of the data set, and that result will be used for the evaluation of the project.
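The scoring can be sketched with simple rank-based implementations of both metrics (the organizers' exact implementation may differ; the numbers below are synthetic).

```python
import numpy as np

def _ranks(x):
    """Average ranks (1-based) with tie handling."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average the ranks within each tied group
        tied = x == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def roc_auc(y_true, scores):
    """ROC AUC for one binary class via the Mann-Whitney U statistic."""
    r = _ranks(scores)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (r[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def spearman(a, b):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(_ranks(a), _ranks(b))[0, 1])

# Final score = average of macro ROC AUC (mean over classes) and Spearman rho.
y_true = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])                  # (n_samples, n_classes)
y_prob = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.6], [0.3, 0.1]])  # submitted probabilities
macro_auc = np.mean([roc_auc(y_true[:, c], y_prob[:, c]) for c in range(2)])
rho = spearman(np.array([0.9, 0.5, 0.7]), np.array([0.8, 0.4, 0.6]))  # true vs estimated TPRs
score = (macro_auc + rho) / 2
```

In this toy example the probabilities rank every class perfectly and the TPR estimates preserve the true ordering, so both components equal 1 and the combined score is 1.0. Note that Spearman correlation only depends on the ordering of the TPR estimates, not their absolute values.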