| Algorithm A1: Reproducible Training Pipeline for SpineNeuroSym |
| Input: |
|     Raw DICOM spinal X-ray dataset D_raw |
|     Config files: |
|         dataset.yaml |
|         krd_stage1.yaml |
|         graph_stage2.yaml |
|         end2end_stage3.yaml |
| Output: |
|     Final trained model M* |
|     Evaluation metrics (Accuracy, Macro-F1, AUROC, …) |
| # Environment setup |
| CreateCondaEnv(name = "spineneurosym", python = 3.9) |
| InstallPyTorchWithCUDA() |
| InstallDependencies(requirements.txt) |
| SetRandomSeed(42)  # fix all RNG seeds before training, not only before evaluation |
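For the pipeline to be reproducible end to end, SetRandomSeed(42) has to take effect before Stage 1 begins, not only before evaluation. A minimal PyTorch sketch of such a helper; the cuDNN determinism flags go beyond what the algorithm states and are an assumption:

```python
# Hypothetical seeding helper; the algorithm only names SetRandomSeed(42).
import os
import random

import numpy as np
import torch


def set_random_seed(seed: int = 42) -> None:
    """Seed every RNG the pipeline touches so runs are repeatable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN kernels (assumption: the goal
    # is bit-exact repeatability, not just fixed data splits).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(seed)
```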
| # Dataset preparation |
| D_img ← ConvertDICOMtoPNG(D_raw, size = 512×512, mode = "16-bit grayscale") |
| Split D_img into D_train, D_val, D_test with ratio 70/15/15 (stratified by class label) |
| AnnotateSparseKeypoints(D_train, ratio = 0.15)  # keypoint labels on 15% of training images |
| EnsureAnnotationQuality(inter-annotator Cohen's kappa > 0.8) |
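A sketch of the conversion and split steps, assuming pydicom and Pillow for image I/O and scikit-learn for the stratified split; the intensity rescaling scheme and the helper names are illustrative, not prescribed by the algorithm:

```python
# Hypothetical ConvertDICOMtoPNG and stratified split; names are assumptions.
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image
from sklearn.model_selection import train_test_split


def dicom_to_png(src: Path, dst: Path, size: int = 512) -> None:
    """Convert one DICOM file to a size x size 16-bit grayscale PNG."""
    pixels = pydicom.dcmread(src).pixel_array.astype(np.float64)
    # Rescale intensities to the full 16-bit range so contrast is preserved.
    pixels -= pixels.min()
    pixels *= 65535.0 / max(pixels.max(), 1.0)
    # Resize in float, then quantize to uint16 (Pillow maps uint16 to I;16).
    resized = Image.fromarray(pixels.astype(np.float32), mode="F").resize(
        (size, size), Image.BILINEAR)
    Image.fromarray(np.asarray(resized).astype(np.uint16)).save(dst)


def stratified_split(files, labels, seed: int = 42):
    """70/15/15 split stratified by class label, per the algorithm."""
    train_f, rest_f, train_y, rest_y = train_test_split(
        files, labels, test_size=0.30, stratify=labels, random_state=seed)
    val_f, test_f, val_y, test_y = train_test_split(
        rest_f, rest_y, test_size=0.50, stratify=rest_y, random_state=seed)
    return (train_f, train_y), (val_f, val_y), (test_f, test_y)
```

Splitting 30% off first and then halving it yields the 70/15/15 ratio while keeping class proportions intact in all three subsets.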
| # Build dataloaders using dataset.yaml |
| ApplyTrainTransforms(D_train)  # rotation, flip, noise, normalize |
| CreateDataLoaders(D_train, D_val, D_test) |
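The transform comment (rotation, flip, noise, normalize) maps naturally onto torchvision. A sketch under that assumption; every magnitude, the batch size, and the worker count are placeholders:

```python
# Illustrative train-time transforms and loaders; all magnitudes are assumptions.
import torch
from torch.utils.data import DataLoader
from torchvision import transforms


def add_gaussian_noise(x: torch.Tensor, std: float = 0.01) -> torch.Tensor:
    """Additive Gaussian pixel noise; std is a placeholder magnitude."""
    return x + std * torch.randn_like(x)


train_transforms = transforms.Compose([
    transforms.RandomRotation(degrees=10),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
    transforms.Normalize(mean=[0.5], std=[0.25]),  # single-channel X-ray stats
])


def make_loaders(d_train, d_val, d_test, batch_size: int = 16):
    """Build the three loaders; batch size and workers are assumptions."""
    return (
        DataLoader(d_train, batch_size=batch_size, shuffle=True, num_workers=4),
        DataLoader(d_val, batch_size=batch_size, shuffle=False, num_workers=4),
        DataLoader(d_test, batch_size=batch_size, shuffle=False, num_workers=4),
    )
```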
| # ------------------------------ |
| # Stage 1 – KRD Training |
| # ------------------------------ |
| M_krd ← InitializeModel(config = "krd_stage1.yaml") |
| for epoch = 1 to 50 do |
|     for each batch B in D_train do |
|         y_hat ← M_krd.forward(B) |
|         L_krd ← ComputeKrdLoss(y_hat, B.labels) |
|         Backpropagate(L_krd) |
|         UpdateParameters(M_krd) |
|     end for |
|     ValidateOn(D_val) and update best checkpoint |
| end for |
| SaveBestCheckpoint(M_krd, path = "checkpoints/krd_best.ckpt") |
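Stages 1 through 3 share the same loop shape: forward pass, stage-specific loss, backward pass, parameter update, then validation with best-checkpoint tracking. A generic PyTorch rendering of one stage; the Adam optimizer, learning rate, and validation-loss selection criterion are assumptions the pseudocode leaves open:

```python
# Generic single-stage training loop; optimizer and metric are assumptions.
import torch


def train_stage(model, loss_fn, train_loader, val_loader, ckpt_path,
                epochs: int = 50, lr: float = 1e-4, device: str = "cuda"):
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
        # Validate and keep the best checkpoint, as the pseudocode prescribes.
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.to(device)
                val_loss += loss_fn(model(images), labels).item() * len(images)
                n += len(images)
        if val_loss / n < best_val:
            best_val = val_loss / n
            torch.save(model.state_dict(), ckpt_path)
    return best_val
```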
| # ------------------------------ |
| # Stage 2 – Graph Construction Training |
| # ------------------------------ |
| M_graph ← InitializeModel(config = "graph_stage2.yaml") |
| LoadCheckpoint(M_graph, "checkpoints/krd_best.ckpt")  # warm-start from Stage-1 weights |
| for epoch = 1 to 50 do |
|     for each batch B in D_train do |
|         y_hat ← M_graph.forward(B) |
|         L_graph ← ComputeGraphLoss(y_hat, B.labels) |
|         Backpropagate(L_graph) |
|         UpdateParameters(M_graph) |
|     end for |
|     ValidateOn(D_val) and update best checkpoint |
| end for |
| SaveBestCheckpoint(M_graph, path = "checkpoints/graph_best.ckpt") |
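Loading a KRD checkpoint into the differently-shaped graph model implies a tolerant, partial load. One way to express that warm start, assuming the shared backbone keeps the same parameter names across stages:

```python
# Warm-starting Stage 2 from the Stage-1 checkpoint; name overlap is an assumption.
import torch


def load_partial_checkpoint(model: torch.nn.Module, ckpt_path: str) -> None:
    """Copy only the tensors whose names and shapes match the new model."""
    state = torch.load(ckpt_path, map_location="cpu")
    own = model.state_dict()
    matched = {k: v for k, v in state.items()
               if k in own and v.shape == own[k].shape}
    own.update(matched)
    model.load_state_dict(own)  # strict load of the merged dict
    print(f"loaded {len(matched)}/{len(own)} tensors from {ckpt_path}")
```

Reporting the matched-tensor count makes silent name mismatches visible, which is the usual failure mode of this kind of warm start.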
| # ------------------------------ |
| # Stage 3 – End-to-End Optimization |
| # ------------------------------ |
| M_e2e ← InitializeModel(config = "end2end_stage3.yaml") |
| LoadCheckpoint(M_e2e, "checkpoints/graph_best.ckpt")  # warm-start from Stage-2 weights |
| for epoch = 1 to 50 do |
|     for each batch B in D_train do |
|         y_hat ← M_e2e.forward(B) |
|         L_krd ← ComputeKrdLoss(y_hat, B.labels) |
|         L_graph ← ComputeGraphLoss(y_hat, B.labels) |
|         L_sym ← ComputeSymbolicConsistencyLoss(y_hat) |
|         L_total ← WeightedSum(L_krd, L_graph, L_sym) |
|         Backpropagate(L_total) |
|         UpdateParameters(M_e2e) |
|     end for |
|     ValidateOn(D_val) and update best checkpoint |
| end for |
| SaveBestCheckpoint(M_e2e, path = "checkpoints/final_model.ckpt") |
| M* ← LoadModel("checkpoints/final_model.ckpt") |
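The pseudocode does not give the WeightedSum coefficients, so the lambda values below are placeholders that only show the intended composition of the Stage-3 objective:

```python
# WeightedSum of the three Stage-3 losses; the weights are placeholders.
import torch


def total_loss(l_krd: torch.Tensor, l_graph: torch.Tensor, l_sym: torch.Tensor,
               w_krd: float = 1.0, w_graph: float = 1.0, w_sym: float = 0.1):
    """L_total = w_krd * L_krd + w_graph * L_graph + w_sym * L_sym."""
    return w_krd * l_krd + w_graph * l_graph + w_sym * l_sym
```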
| # ------------------------------ |
| # Evaluation on test set |
| # ------------------------------ |
| SetRandomSeed(42)  # re-seed so evaluation is deterministic |
| predictions, labels ← EmptyLists() |
| for each batch B in D_test do |
|     with NoGradient(): |
|         y_hat ← M*.forward(B) |
|     Append(predictions, y_hat) |
|     Append(labels, B.labels) |
| end for |
| accuracy ← ComputeAccuracy(predictions, labels) |
| macro_f1 ← ComputeMacroF1(predictions, labels) |
| auroc ← ComputeAUROC(predictions, labels) |
| Report(accuracy, macro_f1, auroc) |
| return M*, {accuracy, macro_f1, auroc} |
| End Algorithm |
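The evaluation loop maps directly onto scikit-learn metrics. One detail the algorithm leaves open is how multi-class AUROC is averaged; the sketch below assumes one-vs-rest macro averaging over softmax probabilities:

```python
# Test-set evaluation; OvR macro AUROC is an assumption the algorithm leaves open.
import torch
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score


@torch.no_grad()
def evaluate(model, test_loader, device: str = "cuda"):
    """Collect predictions over D_test and compute the three reported metrics."""
    model.to(device).eval()
    all_probs, all_labels = [], []
    for images, labels in test_loader:
        logits = model(images.to(device))
        all_probs.append(torch.softmax(logits, dim=1).cpu())
        all_labels.append(labels)
    probs = torch.cat(all_probs).numpy()
    labels = torch.cat(all_labels).numpy()
    preds = probs.argmax(axis=1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "macro_f1": f1_score(labels, preds, average="macro"),
        "auroc": roc_auc_score(labels, probs, multi_class="ovr", average="macro"),
    }
```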