Movie-Recommendation WRAP-UP REPORT
program: general_model.py
method: bayes
metric:
  goal: maximize
  name: valid/recall@10
parameters:
  embedding_size:
    values: [32, 64, 128, 256]
  n_layers:
    values: [1, 2, 3, 4]
  # user_hidden_size_list:
  #   values: ["[128, 32]", "[64, 32]", "[128, 64, 32]", "[64, 32, 32]"]
  reg_weight:
    values: [1e-05, 1e-04, 1e-03, 1e-02]
  lr:
    values: [0.01, 0.005, 0.001, 0.0005, 0.0001]
  model:
    values:
      - LightGCN
    distribution: categorical
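The sweep above searches a fully discrete space, so its size is easy to sanity-check. A minimal sketch (plain Python, with the value lists copied from the config above) that enumerates every configuration the Bayesian search can propose:

```python
from itertools import product

# Discrete value lists from the LightGCN sweep above.
space = {
    "embedding_size": [32, 64, 128, 256],
    "n_layers": [1, 2, 3, 4],
    "reg_weight": [1e-05, 1e-04, 1e-03, 1e-02],
    "lr": [0.01, 0.005, 0.001, 0.0005, 0.0001],
}

# Every candidate configuration the sweep can draw from.
candidates = [dict(zip(space, combo)) for combo in product(*space.values())]
print(len(candidates))  # 320 = 4 * 4 * 4 * 5
```

With 320 candidates, a Bayesian sweep only needs to sample a fraction of the grid to cover the space reasonably well.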
program: context-aware_model.py
method: bayes
metric:
  goal: maximize
  name: valid/recall@10
parameters:
  dropout_prob:
    values:
      - 0.2
      - 0.4
      - 0.5
  embedding_size:
    values:
      - 10
      - 20
      - 50
  lr:
    distribution: uniform
    max: 0.01
    min: 0.001
  mlp_hidden_size:
    values:
      - - 512
        - 512
        - 512
      - - 256
        - 256
        - 256
      - - 128
        - 128
        - 128
      - - 64
        - 64
        - 64
      - - 32
        - 32
        - 32
  model:
    distribution: categorical
    values:
      - DeepFM
program: context-aware_model.py
method: bayes
metric:
  goal: maximize
  name: valid/recall@10
parameters:
  dropout_prob:
    values:
      - 0.2
      - 0.1
  embedding_size:
    values:
      - 70
      - 60
      - 50
  lr:
    value: 0.001
  mlp_hidden_size:
    values: ["[128,128,128]", "[128, 64, 128]", "[64, 64, 64]"]
  model:
    distribution: categorical
    values:
      - DeepFM
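In the second DeepFM sweep, `mlp_hidden_size` values are passed as strings rather than YAML lists. Assuming the training script needs actual Python lists, the string form can be converted back safely with `ast.literal_eval`; a minimal sketch (string values taken from the config above):

```python
import ast

# String-encoded layer sizes, as in the sweep config above.
raw_values = ["[128,128,128]", "[128, 64, 128]", "[64, 64, 64]"]

# Parse each string back into an actual list of ints.
parsed = [ast.literal_eval(v) for v in raw_values]
print(parsed[1])  # [128, 64, 128]
```

`ast.literal_eval` is preferable to `eval` here because it only accepts Python literals, so a malformed hyperparameter string raises an error instead of executing arbitrary code.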
program: sequential_model.py
method: bayes
metric:
  goal: maximize
  name: valid/recall@10
parameters:
  hidden_dropout_prob:
    values:
      - 0.2
      - 0.5
  initializer_range:
    values:
      - 0.01
      - 0.04
  attn_dropout_prob:
    values:
      - 0.2
      - 0.5
  layer_norm_eps:
    values:
      - 1e-12
      - 1e-13
  weight_decay:
    values:
      - 0.1
      - 0
  loss_type:
    values:
      - BPR
    distribution: categorical
  n_layers:
    max: 4
    min: 2
    distribution: int_uniform
  n_heads:
    max: 4
    min: 1
    distribution: int_uniform
  model:
    values:
      - SASRec
    distribution: categorical
Models experimented with:
- FISM
- EASE
  - Experiments were run while varying reg_weight.
- BPR
- ItemKNN
- NeuMF
- DMF
  - If the metrics are poor on valid, we observed the model also performs rather poorly on the leaderboard.
  - Sensitive to tuning.
- LightGCN
- DeepFM
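All sweeps above optimize valid/recall@10. For reference, a minimal sketch of a generic recall@k (set intersection over ground truth, with hypothetical item IDs); this is not RecBole's exact implementation, only the standard definition:

```python
def recall_at_k(ranked_items, relevant_items, k=10):
    """Fraction of relevant items that appear in the top-k recommendations."""
    top_k = set(ranked_items[:k])
    relevant = set(relevant_items)
    if not relevant:
        return 0.0
    return len(top_k & relevant) / len(relevant)

# Hypothetical example: 2 of the 4 relevant items appear in the top-10 list.
print(recall_at_k([3, 7, 11, 2, 9, 5, 8, 1, 4, 6], [7, 9, 20, 30]))  # 0.5
```

Because recall@10 only cares about set membership in the top 10, models are compared on how many held-out interactions they surface, not on the exact ranking within the list.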