Here is a folder with a script that will be used for evaluation.

lse_eval.zip

Folder structure

The folder includes two Python scripts and “evaluation” data.

How to turn in the model

After you turn in the model, I will add the evaluation dataset, run validate.py, and get the results.
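
For reference, a minimal sketch of that step, assuming the evaluation dataset is placed in the folder and validate.py is run from the folder root (the exact invocation may differ):

```
python validate.py
```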

Metrics

The metrics are coded in the script; the most important number for the evaluation is the F1 score.

The F1 score is defined as:

$F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$, where

$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$ and $\text{Recall} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}}$.
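
For clarity, here is a minimal Python sketch of how these quantities are computed from raw counts; the function name and example numbers are illustrative, not taken from validate.py:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


# Example: 80 true positives, 20 false positives, 10 false negatives
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
# precision=0.800 recall=0.889 f1=0.842
```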