Data are initially saved on the ethoscope in a MariaDB SQL database. After every tracking stop, a database dump is also created and stored on the SD card. In the background, the node interrogates the running SQL database and transfers the data into an SQLite3 db file, which can then be used by rethomics or pythomics for analysis.
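
For a quick sanity check of a transferred file, you can list the tables it contains with the sqlite3 command-line client. This is just a minimal sketch; the path below is a placeholder for wherever your node stores its results:

# list the tables contained in an ethoscope db file (path is only an example)
sqlite3 /mnt/data/results/path/to/your_file.db ".tables"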

To browse through the data:

  1. Identify and download the db file to your computer. If you installed the FTP server on the node, you can use FTP to access the data. In our lab, the FTP server is accessible through VPN at ftp://turing.lab.gilest.ro - Firefox can be used to navigate the site, or alternatively you can install a proper FTP client such as FileZilla.
  2. Make sure your computer has a working copy of the SQLite browser. For Windows, you can install it from here. On Arch Linux, install it with sudo pacman -S sqlitebrowser
  3. Open the db file you downloaded in step 1. Select the tab named "Browse Data" and then, from the dropdown underneath, pick the table you want to analyse. To check images, select the IMG_SNAPSHOTS table. To open an image, double click on its BLOB.
  4. The table METADATA contains information about the experiment settings: user, location, and any parameters passed to the interactor modules (used, for example, to control sleep deprivation).
  5. The table CSV_DAM_ACTIVITY contains a DAM data replica with human-readable times. For all other tables, times are counted in milliseconds from the start of the experiment; for example, a time of 5250666 corresponds to 5250.666 seconds, or about 87.5 minutes. A short example of reading these tables programmatically is shown after this list.
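
If you prefer to inspect the tables programmatically rather than through the browser, a few lines of Python will do. The sketch below is only an illustration: the filename my_experiment.db and the tracking table name ROI_1 (with a time column t in milliseconds) are assumptions, so adjust them to your own file.

import sqlite3
import pandas as pd

# connect to the db file (the filename here is just an example)
conn = sqlite3.connect("my_experiment.db")

# read the experiment settings and one tracking table
# (the tracking table is assumed to be called ROI_1, with a time column t in milliseconds)
metadata = pd.read_sql("SELECT * FROM METADATA", conn)
roi = pd.read_sql("SELECT * FROM ROI_1", conn)
conn.close()

# times are in milliseconds from the start of the experiment: convert to minutes
roi["t_minutes"] = roi["t"] / 1000 / 60

print(metadata)
print(roi.head())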

A full video tutorial of the SQLite browser is available in the video below:

https://www.youtube.com/watch?v=2pbL7gTHMRw

A convenient one-liner to interrogate the metadata on the computer where the SQLite3 db files are stored:

# move to where the db files are stored
cd /mnt/data/results

#Example that interrogates ethoscope 115 for date 2020-08-21
sqlite3 `find 115* -type f -name '*.db' | grep 2020-08-21` "select * from metadata;"

Make a video out of the IMG_SNAPSHOTS

The following code will do the job:

import os
import cv2
import sqlite3
import numpy as np

import argparse
from tqdm import tqdm

def connect_and_extract_video(db_name, resolution=(640, 480)):

    # Connect to the SQLite database
    conn = sqlite3.connect(db_name)
    cursor = conn.cursor()

    # Retrieve the image blobs from the IMG_SNAPSHOTS table
    cursor.execute("SELECT img FROM IMG_SNAPSHOTS")
    images = cursor.fetchall()

    # Convert each image blob to a numpy array
    images = [np.frombuffer(img[0], np.uint8) for img in images]

    # Create a video writer object
    base_name, _ = os.path.splitext(db_name)
    video_name = base_name + ".avi"

    video_writer = cv2.VideoWriter(video_name, cv2.VideoWriter_fourcc(*'XVID'), 30, resolution)
    count = 0

    # Iterate through the images and write them to the video file
    for img in tqdm(images):
        # Decode the image from the numpy array
        decoded_img = cv2.imdecode(img, cv2.IMREAD_COLOR)
        if decoded_img is None:
            # skip any blob that cannot be decoded as an image
            continue
        # Resize the image to the desired video size
        resized_img = cv2.resize(decoded_img, resolution)
        # Write the resized image to the video file
        video_writer.write(resized_img)
        count += 1

    # Release the video writer and close the database connection
    video_writer.release()
    conn.close()
    print ("Processed %s frames into file: %s" % (count, video_name))

if __name__ == '__main__':

    parser = argparse.ArgumentParser(description='Process a db file, extract images and make a video out of them.')
    parser.add_argument('filename', help='The name of the ethoscope sqlite3 file to process')
    args = parser.parse_args()

    connect_and_extract_video(args.filename)
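
Assuming you saved the script as, say, make_video.py (the name is arbitrary), you would run it like this; the resulting .avi file is written next to the db file, with the same base name:

python make_video.py /mnt/data/results/path/to/your_file.db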

Delete the IMG_SNAPSHOTS table

The IMG_SNAPSHOTS table contains the actual images saved during tracking. It takes up considerable space, so it may be useful to delete this table when sharing the db files with the external world.
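
A minimal sketch of how this could be done from the command line with the sqlite3 client (the filename is a placeholder, and it is safest to work on a copy of the db): DROP TABLE removes the images and VACUUM reclaims the freed space so the file actually shrinks.

# remove the snapshot images and compact the file (work on a copy of the db!)
sqlite3 your_file.db "DROP TABLE IF EXISTS IMG_SNAPSHOTS; VACUUM;"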