Please respect the rules of the challenge. The organization reserves the right not to consider submissions that fail to abide by the rules presented on this page.

Registration

Both individuals and teams are welcome to participate in the challenge. To ease the registration process, only the contact person needs to register. While doing so, please check that the name, institution/organization and contact information are filled in correctly. Failure to do so will invalidate participation in the challenge.

The contact information provided will not be used for any purposes other than those related to the challenge.

Once the registration is completed and accepted, participants will receive instructions on how to download the dataset.

Please note that each person can only belong to one team.

Participation

Participation in the challenge (and the corresponding eligibility for the prizes) is conditional on:

  1. proper registration;
  2. proper submission of the predictions and of the code that allows them to be replicated;
  3. submission and acceptance of a paper describing the developed method and the achieved results to the ICIAR 2018 conference;
  4. registration for and participation in the ICIAR 2018 conference.

Results will be made publicly available on the challenge website after the submission deadline. Challenge participants grant the challenge organization permission to use the results of their methods for other evaluations. Nevertheless, participating teams maintain full ownership of and rights to their method. The challenge organization does not claim any ownership of or rights to the developed works.

The best-performing methods will be invited to collaborate on a journal paper describing and summarizing the different approaches used and the respective results achieved in the challenge.

Submission

Submission is to be performed at two distinct time points, following the submission deadlines stated on the Home page of the challenge. Details on the submission process are described on the Submission page.

  1. The first submission is tied to the ICIAR 2018 paper acceptance deadline. At this stage, participants are required to upload the paper describing their results, as well as code that allows a new image to be tested. Note that the submission of the paper describing the developed method and achieved results to ICIAR 2018 must be done independently of the submission on the challenge website.
  2. The second submission corresponds to the delivery of the developed methods' predictions on the independent test set, to be released after the first submission deadline. This submission will be used to rank the methods for prize distribution.

Note that each team can only participate in the challenge once, i.e., each team can submit only one algorithm. If multiple submissions are made, only the last one will be considered.

At submission time, participants must indicate whether they are competing in both parts of the challenge, or only in part A or part B.

Prizes

The challenge prizes will be awarded during the ICIAR 2018 conference, to be held in Póvoa de Varzim, Portugal, June 27-29, 2018. Specifically, ICIAR 2018 has allocated a total of 1000€ (one thousand euros) in prizes, to be distributed among the four teams with the best performance. There will be two winners for each of the two parts of the challenge. The two winners of part A will receive 200€ each, and the two winners of part B will receive 300€ each. A given team can win both parts and thus receive a total of 500€.

Eligibility for the challenge's prizes is conditional on the quality of the method, acceptance of the submitted paper, its publication in the ICIAR 2018 proceedings, and participation at the conference. With that in mind, please check the Call for Papers section to ensure that you use the correct paper format.

Evaluation

The evaluation of the submitted methods considers both the correct classification of the microscopy images (part A) and the pixel-wise labeling of the whole-slide images (part B). Participants will be ranked separately for the two tasks of the challenge.

Performance on the microscopy images will be evaluated based on the overall prediction accuracy, i.e., the ratio between the number of correctly classified samples and the total number of evaluated images.
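As a minimal sketch (not the official evaluation code), the accuracy metric can be computed as follows, assuming pred and gt are arrays of per-image class labels; the function name accuracy is our own:

```python
import numpy as np

def accuracy(pred, gt):
    """Overall prediction accuracy for part A.

    Ratio between the number of correctly classified samples and the
    total number of evaluated images. (Hypothetical helper; pred and gt
    are array-likes of per-image class labels.)
    """
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float(np.mean(pred == gt))
```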

Performance on the whole-slide images will be evaluated based on the following score, s:

s = 1 - sum_{i=1..N} |pred_i - gt_i| / sum_{i=1..N} max(|gt_i - 0|, |gt_i - 3|) · (1 - (1 - bin(pred_i)) · (1 - bin(gt_i)))

where pred_i is the predicted class (0, 1, 2 or 3), gt_i is the ground-truth class, i is the linear index of a pixel in the image, N is the total number of pixels in the image, and bin is the binarized value, i.e., 0 if the label is 0 and 1 if the label is 1, 2 or 3.

This score is based on the accuracy metric, but penalizes predictions that are farther from the ground-truth value more heavily.

Note that, in the denominator, cases in which both the prediction and the ground truth are 0 (normal class) are not counted, since these can be seen as true negatives.

Pythonic version: s = 1 - np.sum(np.abs(pred - gt)) / np.sum(np.maximum(np.abs(gt - 0), np.abs(gt - 3)) * (1 - (1 - (pred > 0)) * (1 - (gt > 0)))), where np is the NumPy package and pred and gt are integer arrays.
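The one-liner above can be expanded into a small runnable sketch. This is our own illustrative implementation (the function name challenge_score is hypothetical, not part of the official code), assuming pred and gt are arrays of per-pixel integer labels in {0, 1, 2, 3}:

```python
import numpy as np

def challenge_score(pred, gt):
    """Part B score s, computed over per-pixel labels in {0, 1, 2, 3}.

    Hypothetical sketch of the published formula; pred and gt are
    array-likes with the predicted and ground-truth class per pixel.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # bin(x) is 0 for label 0 and 1 for labels 1-3; this mask is 1 only
    # where both prediction and ground truth are 0 (true negatives),
    # which are excluded from the denominator.
    both_zero = (1 - (pred > 0)) * (1 - (gt > 0))
    num = np.sum(np.abs(pred - gt))
    den = np.sum(np.maximum(np.abs(gt - 0), np.abs(gt - 3)) * (1 - both_zero))
    return 1 - num / den
```

A perfect prediction yields s = 1, and the max(|gt - 0|, |gt - 3|) term normalizes each pixel's error by the worst error possible for its ground-truth label.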
Data

The downloaded datasets, or any data derived from them, may not under any circumstances be given or redistributed to persons not belonging to the registered team.

The challenge dataset may not be used for other purposes in scientific studies, or for training or developing other algorithms, including but not limited to those used in commercial products, without prior participation in this challenge.

To ensure a fair comparison of the submitted methods, the usage of private datasets during the development of the methods is not allowed.

Attribution

Use of the challenge dataset in scientific publications (including journal publications, conference papers, technical reports, and presentations at conferences and meetings) and in industrial development or patenting requires citing the data source. At the moment, the citation should refer to the arXiv challenge publication.

Teams must notify the organisers of the challenge about any publication that is (partly) based on the results or data published on this site, so that the challenge organization can maintain a list of publications associated with the challenge.