Sparse linear regression

| # | Team | Method |
|---|------|--------|
| 1 | team_52 | Imputation with imputePCA; training and prediction with elastic net regression |
| 2 | team9_3 | Features reconstructed using JIVE multi-omics integration; trained on the 2020 + 2021 datasets with all four assays and subject information; training and prediction with ElasticNet |
| 3 | team9_4 | Features reconstructed using JIVE multi-omics integration; trained on the 2020 + 2021 datasets with all four assays and subject information; training and prediction with ElasticNetCV |
| 4 | team47 | SuperLearner ensemble |
| 5 | team48_1 | Features constructed using MOFA multi-omics integration, with final features hand-picked rather than selected solely by LASSO; trained on the 2021 dataset with all four assays and subject information; training and prediction with LASSO regression |
| 6 | team48_2 | Features constructed using MOFA multi-omics integration, with final features hand-picked rather than selected solely by LASSO; trained on the 2020 + 2021 datasets with all four assays and subject information; training and prediction with LASSO regression |
| 7 | team5 | Purpose-built models using multiple co-inertia analysis; features consist of the four omics, baseline values of tasks and … |
| 8 | team6 | Ensemble approach using SPEAR-constructed supervised multi-omics factors with demographic data |
| 9 | team9_2 | Multi-omics integration with JIVE and LASSO |
| 10 | team_40 | Various regression models on multi-omics data using features from baseline (day 0) |
| 11 | team25 | Semi-manual feature selection learned between the 2020 and 2021 datasets, followed by linear regression |
| 12 | team9_1 | Multi-omics integration with JIVE and basic linear regression |
| 13 | team49 | Dimension reduction through multiple co-inertia analysis, modeled with linear mixed effects |
| 14 | team32 | Semi-manual feature selection followed by dimensionality reduction and prediction of residuals from baseline |
| 15 | team50 | Semi-manual feature selection followed by dimensionality reduction and prediction of residuals from baseline |
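Most entries in this group fit a sparse linear model (elastic net or LASSO) on integrated multi-omics features. A minimal sketch of that final modeling step, using scikit-learn's `ElasticNetCV` on synthetic data (the feature matrix, dimensions, and `l1_ratio` below are illustrative assumptions, not any team's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

# Hypothetical stand-in for integrated multi-omics features
# (e.g. JIVE or MOFA factors plus subject covariates)
X_train = rng.normal(size=(60, 20))
coef = np.zeros(20)
coef[:3] = [1.5, -2.0, 1.0]                      # sparse ground truth
y_train = X_train @ coef + rng.normal(scale=0.1, size=60)

# ElasticNetCV chooses the regularization strength by cross-validation,
# as in the ElasticNet/ElasticNetCV entries above
model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_train, y_train)

X_test = rng.normal(size=(10, 20))
preds = model.predict(X_test)
print(preds.shape)  # (10,)
```

The L1 component of the penalty drives most coefficients to exactly zero, which is what makes these models "sparse" and interpretable at the feature level.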
Nonlinear regression (regression trees)

| # | Team | Method |
|---|------|--------|
| 16 | team_38 | CatBoost regression model trained on the 2020 training cohort |
| 17 | team_53 | CatBoost regression model trained on the 2021 training cohort |
| 18 | team_54 | CatBoost regression model trained on the 2020 + 2021 training cohorts |
| 19 | team45 | Model comparison to determine the best algorithm; manual feature selection; random forest regression |
| 20 | team46 | Block forest regression |
| 21 | team51 | Random forest classifier to simulate training individuals; XGBoost to determine the final ranking |
| 22 | team55 | Decision tree and random forest regressors |
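The tree-based entries (CatBoost, random forest, XGBoost) all learn nonlinear feature-response relationships without manual transformation. A small sketch with scikit-learn's `RandomForestRegressor` on synthetic data (the features and target are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

X_train = rng.normal(size=(80, 10))                  # stand-in assay features
# Nonlinear target that a linear model would fit poorly
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2

# An ensemble of decision trees averages many piecewise-constant fits,
# capturing the nonlinearity without specifying its form
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

preds = forest.predict(rng.normal(size=(5, 10)))
print(preds.shape)  # (5,)
```

Gradient-boosted variants such as CatBoost and XGBoost fit trees sequentially on residuals rather than averaging independent trees, but the interface is analogous.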
Others

| # | Team | Method |
|---|------|--------|
| 23 | team30 | Fully connected two-layer neural network with imputation |
| 24 | team34 | AutoML based on the most predictive assay or clinical data (trained on 2020 and tested on 2021) |
| 25 | team34 | AutoML based on the most predictive assay or clinical data (trained on 2020 and tested on 2021) |
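Entry 23 pairs imputation with a small fully connected network. A sketch of that combination using scikit-learn's `SimpleImputer` and `MLPRegressor` (the layer sizes, missingness rate, and data are illustrative assumptions, not team30's actual architecture):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

X = rng.normal(size=(100, 15))                   # stand-in assay/clinical features
y = X[:, :3].sum(axis=1)                         # target computed before masking
X[rng.random(X.shape) < 0.1] = np.nan            # simulate missing assay values

# Mean imputation feeding a fully connected network with two hidden layers
net = make_pipeline(
    SimpleImputer(strategy="mean"),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
net.fit(X, y)

preds = net.predict(X)
print(preds.shape)  # (100,)
```

Wrapping the imputer and network in one pipeline ensures the same imputation statistics learned on training data are reused at prediction time.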
Control models

| # | Team | Method |
|---|------|--------|
| 26 | | Age of study subject as predictor |
| 27 | | Baseline (pre-vaccination) state of the task as predictor |
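The two control models need no fitting at all: subjects are ranked directly by a single covariate (age, or the pre-vaccination baseline of the task). A sketch on synthetic data, with a hand-rolled Spearman correlation to score the resulting ranking (the data-generating model is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

age = rng.uniform(20, 60, size=30)               # model 26's predictor
baseline = rng.normal(size=30)                   # model 27's predictor
outcome = 0.8 * baseline + rng.normal(scale=0.5, size=30)

def rank(v):
    """0-based ranks of a vector of distinct values."""
    return v.argsort().argsort()

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks."""
    return np.corrcoef(rank(a).astype(float), rank(b).astype(float))[0, 1]

# Each control "prediction" is just the covariate's ranking of subjects
age_score = spearman(age, outcome)
baseline_score = spearman(baseline, outcome)
```

Because the simulated outcome depends on the baseline but not on age, the baseline control recovers a strong rank correlation here while the age control does not; real submissions are expected to beat both controls.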