Key dates

  • 21st of February: Beginning of the challenge.

  • 31st of March: The full training dataset and the evaluation platform will be available.

  • 28th of April: A Python script to compute the different scores will be available on the website.

  • 19th of May: The online evaluation platform will be available.

  • 19th of May: The full testing dataset will be available. Each participant will have up to 5 attempts to submit their results on the test data, to prevent tuning methods on the test set.

  • 12th of June: Deadline for the submission of the results on the online evaluation platform for the testing dataset.

  • 19th of June: Workshop at the FIMH conference: presentation of the results and award ceremony.

Teams that submit their results to the challenge will be invited to contribute to a journal paper summarizing the outcome of the challenge.