
<aside> 🔎 This page provides evidence for our key product claims. It's designed to be used as an off-the-shelf resource for business cases to introduce Filtered to your organisation. If you would like more tailored support with evidence, we're happy to help with that too. We will add to this evidence library as we go.

</aside>

Algorithmic tagging precision

<aside> 💡 We have demonstrated that algorithmic tagging produces results as pure and precise as those of skilled human curators.

</aside>

'Precision' is one standard measure of the quality of tagging. It represents how 'pure' tagged output is - what proportion of algorithmically tagged assets are tagged correctly.

$$
\text{precision} = \frac{N_{\text{correct tags}}}{N_{\text{total tags}}}
$$

(For comparison, another metric, 'recall', is the flip-side of tagging quality - how good an algorithm is at finding needles in haystacks. It's the proportion of all the assets in the database that are relevant to a skill that the algorithm actually finds.)
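
Expressed in the same form as the precision formula above:

$$
\text{recall} = \frac{N_{\text{relevant assets found}}}{N_{\text{relevant assets in database}}}
$$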

To measure precision we need a gold standard for tagging - we need to know when the tag the algorithm has applied is right. We use skilled human curators with expertise in the skill area to define this standard.

So when we talk about precision, we mean the proportion of the time a skilled curator agrees with the algorithmic decisions.
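
As a concrete illustration, here is a minimal Python sketch of that agreement calculation; the asset IDs and skill names are invented for the example and don't come from a real taxonomy.

```python
# Minimal sketch: precision as agreement between algorithmic tags and a
# curator-defined gold standard. Asset IDs and skill names are illustrative only.

algorithmic_tags = {
    ("asset-001", "Data Analysis"),
    ("asset-002", "Data Analysis"),
    ("asset-002", "Communication"),
    ("asset-003", "Leadership"),
}

# The tags a skilled human curator agrees are correct for those assets.
curator_gold_standard = {
    ("asset-001", "Data Analysis"),
    ("asset-002", "Data Analysis"),
    ("asset-003", "Leadership"),
}

correct_tags = algorithmic_tags & curator_gold_standard
precision = len(correct_tags) / len(algorithmic_tags)

print(f"Precision: {precision:.0%}")  # 3 of 4 algorithmic tags confirmed -> 75%
```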

We measure precision continually to ensure the quality of our CI products in real-world conditions. For example, with a global pharmaceutical company we conducted a controlled experiment using their skills taxonomy and real-world assets from their LMS, resulting in an overall precision of 83%.

How good is 83%, though? Again, we use human experts as our standard. When we measure the precision of one expert's tagging against another's, we achieve results in the range of 60-80%, depending on the skill. By that benchmark, algorithmic tagging is at least as precise as human expert tagging.

This report provides more detail on these findings.

Time saved through algorithmic tagging

<aside> 💡 For a library of 30,000 learning assets, and a framework of 40 skills, algorithmic tagging saves time equivalent to half a year's work for a skilled human curator.

</aside>

Algorithmic tagging is almost instant once tagging models have been configured, whereas human curation is, of course, time-consuming.

The time taken for a human curator to tag an asset increases with the size of the skills framework; it's harder to decide accurately which of 50 skills applies to an asset than which of 10.

In a two-week sprint in which we needed assets tagged well against a variety of skills frameworks, we measured the time it takes skilled human curators to tag content well, and used the results to build this calculator.
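
As an illustration of how a calculator like this works, the sketch below multiplies library size by a per-asset curation time. The per-asset minutes are a placeholder assumption chosen for the example, not the measured figures from our sprint.

```python
# Illustrative time-saved sketch. The per-asset tagging time is a placeholder
# assumption, not Filtered's measured figure; in practice it also grows with
# the size of the skills framework.

def human_curation_days(num_assets: int, minutes_per_asset: float,
                        hours_per_day: float = 7.5) -> float:
    """Working days a human curator would need to tag the whole library."""
    total_hours = num_assets * minutes_per_asset / 60
    return total_hours / hours_per_day

# Example: a 30,000-asset library, assuming ~2 minutes per asset for a
# 40-skill framework (placeholder values).
days = human_curation_days(num_assets=30_000, minutes_per_asset=2)
print(f"Approximate human curation effort: {days:.0f} working days")
```

With those placeholder inputs the output is roughly 130 working days, which is in the region of the half-year figure quoted above.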

Example output from time-saved model