Here is a folder with a script that will be used for evaluation.
lse_eval.zip
Folder structure
The folder contains two Python scripts and “evaluation” data.
- **validate.py**: This script performs the evaluation of your detector.
- detector.py: A placeholder detector that you can use to run validate.py and understand the evaluation logic. Replace this script with your own, more sophisticated model; see the sketch after this list.
- validation_data: This does not yet contain the true validation dataset; there are just a few mseed files from the dataset you were already given. Feel free to add all of the data and test your model if you want to see how the metrics work.
- pictures: Results from the sample detector. Your model will hopefully perform better!
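
For orientation, here is a rough sketch of what a replacement for detector.py could look like, using a classic STA/LTA trigger from ObsPy. The `detect` function name, its parameters, and the returned list of pick times are assumptions made for illustration only; adapt them to whatever interface validate.py actually calls.

```python
# Hypothetical replacement for detector.py: a simple STA/LTA trigger.
# The function name and return format are assumptions -- match whatever
# interface validate.py expects before submitting.
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset


def detect(mseed_path, sta_sec=1.0, lta_sec=10.0, on=3.5, off=0.5):
    """Return a list of UTCDateTime picks for one mseed file."""
    stream = read(mseed_path)
    picks = []
    for trace in stream:
        sr = trace.stats.sampling_rate
        # Characteristic function: short-term / long-term average ratio.
        cft = classic_sta_lta(trace.data, int(sta_sec * sr), int(lta_sec * sr))
        # Each (onset, offset) sample pair marks one detection window.
        for onset, _ in trigger_onset(cft, on, off):
            picks.append(trace.stats.starttime + onset / sr)
    return picks
```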
How to turn in the model
- To prepare your model for submission, place it in the folder and change validate.py to use your detector.
- Please make sure that validate.py runs correctly with your model and gives reasonable results.
- Send the zipped folder to vaclav@grillo.io by March 14, 7 pm London time. If the folder is too large, upload it to your favourite cloud storage and send me a link.
After you turn in the model, I will add the evaluation dataset, run validate.py and get the results.
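
Purely as an illustration of where the numbers in the Metrics section come from (the authoritative scoring is whatever validate.py implements), a detection is usually counted as a true positive when it falls within some tolerance of a catalogued arrival time. A minimal sketch, in which the tolerance, the time format, and all names are assumptions:

```python
# Illustrative only: one common way to score picks against a catalog.
# The real scoring logic lives in validate.py; the tolerance value and
# variable names here are assumptions. Times may be floats in seconds
# or obspy UTCDateTime objects (their difference is a float in seconds).
def score_picks(detected, catalog, tolerance_sec=2.0):
    """Count TP/FP/FN by matching each catalog arrival to at most one pick."""
    detected = sorted(detected)
    used = [False] * len(detected)
    tp = 0
    for true_time in catalog:
        # Find an unused pick within the tolerance window.
        for i, pick in enumerate(detected):
            if not used[i] and abs(pick - true_time) <= tolerance_sec:
                used[i] = True
                tp += 1
                break
    fp = used.count(False)   # picks with no matching arrival
    fn = len(catalog) - tp   # arrivals with no matching pick
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```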
Metrics
The metrics are computed in the script; the most important number for the evaluation is the F1 score.
F1 score is defined as:
$F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$
where
$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$