1. Key Idea
Uses a transformer encoder together with an RNN.
→ to learn from the time-series input of student interactions.

- In the transformer encoder, only the last input is used as the query (sketched below).
→ This reduces the time complexity of the QK matrix multiplication in the encoder from O($L^2$) to O($L$).
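A minimal sketch of this last-query idea, assuming single-head scaled dot-product attention; the tensor shapes and weight names are illustrative, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def last_query_attention(x, w_q, w_k, w_v):
    """Scaled dot-product attention where only the last time step acts as the query.

    x: (batch, L, d_model) embedded interaction sequence (hypothetical shape).
    Returns a (batch, 1, d_model) context vector for the last position.
    """
    q = x[:, -1:, :] @ w_q              # (batch, 1, d) -- query built from the last input only
    k = x @ w_k                         # (batch, L, d)
    v = x @ w_v                         # (batch, L, d)
    d = q.size(-1)
    # QK^T is (batch, 1, L) instead of (batch, L, L): O(L) score entries instead of O(L^2)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    attn = F.softmax(scores, dim=-1)
    return attn @ v                     # (batch, 1, d)

# toy usage: batch of 8 students, L = 100 past interactions, d_model = 64
x = torch.randn(8, 100, 64)
w = lambda: torch.randn(64, 64) / 8
out = last_query_attention(x, w(), w(), w())  # shape (8, 1, 64)
```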
2. Introduction
- Knowledge tracing, modeling a student's knowledge state based on the history of their learning activities, is one of the rapidly emerging areas in AI research.
- Riiid Labs, an AI education solutions provider, released EdNet[1], the world's largest open database for AI education, containing more than 100 million student interactions.
- Using the dataset, they hosted a competition named 'Riiid! Answer Correctness Prediction'[2] on Kaggle, challenging participants to create algorithms that predict a student's answer correctness given their past learning activities.
- Submissions were evaluated on area under the ROC curve between the predicted probability and the observed target.
- This paper presents the winning model of the competition.
3. Methodology
1) Input features
- Five features are used in total: 'question id', 'question part', 'answer correctness', 'current question elapsed time', and 'timestamp difference'.
- 'Current question elapsed time': a feature derived by transforming the original database feature 'prior question elapsed time'.
- 'Timestamp difference': the difference between the previous timestamp and the current one, capped at a maximum of 3 days.
- Three features are embedded categorically: 'question id', 'question part', and 'answer correctness'.
- The last two time-related features are embedded continuously (see the sketch below).
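A minimal sketch of how the categorical and continuous embeddings might be combined into one vector per time step; the vocabulary sizes, d_model, and the simple summation are assumptions, since the note does not specify them:

```python
import torch
import torch.nn as nn

class InteractionEmbedding(nn.Module):
    """Embeds the 5 input features into a single d_model-dim vector per time step.

    n_questions, n_parts, and d_model are illustrative placeholders.
    """
    def __init__(self, n_questions, n_parts=7, d_model=64):
        super().__init__()
        # Categorical features: question id, question part, answer correctness
        self.question_emb = nn.Embedding(n_questions, d_model)
        self.part_emb = nn.Embedding(n_parts + 1, d_model)
        self.correct_emb = nn.Embedding(3, d_model)   # 0 = wrong, 1 = correct, 2 = padding/start
        # Continuous features: current question elapsed time, timestamp difference
        self.cont_proj = nn.Linear(2, d_model)

    def forward(self, q_id, part, correct, elapsed, ts_diff):
        # elapsed and ts_diff are assumed already normalized,
        # with ts_diff capped at 3 days as described above
        cont = torch.stack([elapsed, ts_diff], dim=-1)        # (batch, L, 2)
        return (self.question_emb(q_id) + self.part_emb(part)
                + self.correct_emb(correct) + self.cont_proj(cont))
```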
2) Validation
- Within each user ID, the first 95% of interactions are used for training and the remaining 5% for validation (a split sketch follows below).
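A minimal pandas sketch of such a per-user split, assuming a dataframe with hypothetical column names user_id and timestamp; this reflects one reading of the note, not the authors' exact procedure:

```python
import pandas as pd

def split_by_user(df: pd.DataFrame, train_frac: float = 0.95):
    """Put the earliest 95% of each user's interactions in train, the rest in validation."""
    df = df.sort_values(["user_id", "timestamp"])
    # Position of each interaction within its user's history, scaled by that user's length
    rank = df.groupby("user_id").cumcount()
    size = df.groupby("user_id")["user_id"].transform("size")
    is_train = (rank / size) < train_frac
    return df[is_train], df[~is_train]
```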