/* Open the data given in Supplementary Material 2 */
/* Here we assume it is saved in the C drive 'Work' folder */
use "C:\Work\thromb.dta", clear

* The columns are self-explanatory, with the exception of the _SAS columns;
* these are the empirical Bayes / predicted p or logit-p values for each
* group in each study, and were obtained from SAS when fitting the
* one-stage network meta-analysis to these data using the following code:
/*
proc nlmixed cov data=thromb qpoints=1;
	beta = beta1_1*study1 + beta1_2*study2 + beta1_3*study3 + beta1_4*study4
	     + beta1_5*study5 + beta1_6*study6 + beta1_7*study7 + beta1_8*study8
	     + beta1_9*study9 + beta1_10*study10 + beta1_11*study11 + beta1_12*study12
	     + beta1_13*study13 + beta1_14*study14 + beta1_15*study15 + beta1_16*study16
	     + beta1_17*study17 + beta1_18*study18 + beta1_19*study19 + beta1_20*study20
	     + beta1_21*study21 + beta1_22*study22 + beta1_23*study23 + beta1_24*study24
	     + beta1_25*study25 + beta1_26*study26 + beta1_27*study27 + beta1_28*study28
	     + BA*(beta2 + u2) + CA*(beta3 + u3) + DA*(beta4 + u4) + EA*(beta5 + u5)
	     + FA*(beta6 + u6) + GA*(beta7 + u7) + HA*(beta8 + u8);
	expbeta = exp(beta);
	p = expbeta/(1+expbeta);
	/* force tausq to be the same as in the two-stage approach */
	/* tausq = .01520341*.01520341 = .00023114 ; cov = 0.5*.00023114 = .00011557 */
	model r ~ binomial(n, p);
	/* the covariance matrix is specified as its lower triangle, row by row */
	random u2 u3 u4 u5 u6 u7 u8 ~ normal([0,0,0,0,0,0,0],
		[.00023114,
		 .00011557, .00023114,
		 .00011557, .00011557, .00023114,
		 .00011557, .00011557, .00011557, .00023114,
		 .00011557, .00011557, .00011557, .00011557, .00023114,
		 .00011557, .00011557, .00011557, .00011557, .00011557, .00023114,
		 .00011557, .00011557, .00011557, .00011557, .00011557, .00011557, .00023114])
		subject=study out=bayes;
	/* obtain predicted (empirical Bayes) values */
	predict beta out=richard;
	parms beta1_1 = -2.5399  beta1_2 = -2.146   beta1_3 = -2.7649  beta1_4 = -2.7987
	      beta1_5 = -3.022   beta1_6 = -2.3568  beta1_7 = -2.6782  beta1_8 = -2.6578
	      beta1_9 = -2.833   beta1_10 = -3.0031 beta1_11 = -2.2268 beta1_12 = -3.0235
	      beta1_13 = -3.0562 beta1_14 = -2.9274 beta1_15 = -3.1124 beta1_16 = -2.6796
	      beta1_17 = -2.7253 beta1_18 = -2.5594 beta1_19 = -2.7453 beta1_20 = -2.0477
	      beta1_21 = -3.0331 beta1_22 = -3.0037 beta1_23 = -2.93   beta1_24 = -2.6779
	      beta1_25 = -2.8793 beta1_26 = -2.9627 beta1_27 = -2.5837 beta1_28 = -2.6663
	      beta2 = -0.1711    beta3 = 0.003065   beta4 = -0.04684   beta5 = -0.166
	      beta6 = -0.121     beta7 = -0.1987    beta8 = 0.02265;
run;
*/

* Let us now derive percentage study weights for the one-stage network meta-analysis
* under the consistency assumption.

* Create the design matrix X for the fixed effects
mkmat study1-study28 BA CA DA EA FA GA HA, matrix(X)
mat XT = X'
mat list XT

* Need to define the within-study variance matrix R, built from matrices A and B.
* A is diagonal, with entries 1/n (the inverse of each group's binomial denominator n)
mkmat inv_n, matrix(invn)
mat A = diag(invn)

* B is also diagonal, with entries p(1-p), where p is the predicted p from the meta-analysis.
* These are given in the dataset in the _SAS columns;
* they are close, but not identical, to the observed values, as expected due to shrinkage.
gen b = predp_sas*(1-predp_sas)
mkmat b, matrix(b)
mat B = diag(b)
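
* As a quick illustration (a minimal sketch, not part of the original code):
* compare the observed proportions r/n with the predicted values in predp_sas;
* the small differences reflect the shrinkage mentioned above. This assumes r
* and n hold each group's events and denominator, as in the SAS model statement.
gen obs_p = r/n
list study obs_p predp_sas in 1/6, clean
drop obs_p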
* Create the design matrix Z for the random effects.
* There are 7 random effects (see the SAS code: u2 to u8),
* so we need a matrix that is block diagonal, with blocks defined by the studies.
* The size of each block is decided by the number of rows for that study:
* study 1 has 3 rows, so we need a block of 3 rows * 7 cols.
* The 7 cols are there because there are 7 potential random effects.
* Each block is defined by the 'treatment' indicator columns,
* so create a new column for each treatment effect (2 to 8), distinct for each study
forvalues i = 1/28 {
	use "C:\Work\thromb.dta", replace
	replace BA = 0 if study != `i'
	replace CA = 0 if study != `i'
	replace DA = 0 if study != `i'
	replace EA = 0 if study != `i'
	replace FA = 0 if study != `i'
	replace GA = 0 if study != `i'
	replace HA = 0 if study != `i'
	* make a matrix holding study i's block of columns
	mkmat BA CA DA EA FA GA HA, matrix(Z`i'_)
}

* Now turn the matrices back into data
forvalues k = 1/28 {
	svmat Z`k'_
}

* Now we create a new matrix that joins these matrices;
* this will create a 58 by (7*28 = 196) matrix
mkmat Z1_1-Z28_7, matrix(Z)

* Now we need to create matrix G.
* G is also block diagonal, with 7*28 = 196 rows and 196 columns.
* It is block diagonal, across the 28 studies, in a series of 7 by 7 matrices
* containing the tausq values and covariances:
* tausq is .00023114, and the covariances are 0.5*.00023114 = .00011557.
* Create a matrix per study where all entries are the covariances,
* and then swap the diagonal entries to tausq.
* We invoke Mata to use the blockdiag() function,
* then create the full 196 by 196 matrix from the blocks.
* (In one step, the Kronecker product I(28)#r would give the same G.)
mata
r = (.00023114, .00011557, .00011557, .00011557, .00011557, .00011557, .00011557 \ ///
     .00011557, .00023114, .00011557, .00011557, .00011557, .00011557, .00011557 \ ///
     .00011557, .00011557, .00023114, .00011557, .00011557, .00011557, .00011557 \ ///
     .00011557, .00011557, .00011557, .00023114, .00011557, .00011557, .00011557 \ ///
     .00011557, .00011557, .00011557, .00011557, .00023114, .00011557, .00011557 \ ///
     .00011557, .00011557, .00011557, .00011557, .00011557, .00023114, .00011557 \ ///
     .00011557, .00011557, .00011557, .00011557, .00011557, .00011557, .00023114 )
r
G2  = blockdiag(r, r)
G3  = blockdiag(r, G2)
G4  = blockdiag(r, G3)
G5  = blockdiag(r, G4)
G6  = blockdiag(r, G5)
G7  = blockdiag(r, G6)
G8  = blockdiag(r, G7)
G9  = blockdiag(r, G8)
G10 = blockdiag(r, G9)
G11 = blockdiag(r, G10)
G12 = blockdiag(r, G11)
G13 = blockdiag(r, G12)
G14 = blockdiag(r, G13)
G   = blockdiag(G14, G14)
G
end

* This converts the Mata matrix G to the Stata data;
* 'force' tells Stata not to worry about the matrix being longer than the dataset
getmata (G*)=G, force
mkmat G1-G196, matrix(G)

* V can now be derived, which needs to be a 58 by 58 matrix
mat V = (Z*G*Z') + (inv(B)*A)
mat invV = syminv(V)

* We can now derive Fisher's information matrix ...
mat fish = XT*invV*X

* ... and the variance-covariance matrix
mat varb = syminv(fish)

* Now we modify V for each study, so that, for all studies other than the
* study of interest, the variances are very large and the covariances are all zero
forvalues i = 1/28 {
	mat define Vpart`i'_ = V
	svmat Vpart`i'_
	forvalues k = 1/58 {
		* set to 0 all values in rows that are not for the study
		replace Vpart`i'_`k' = 0 if study != `i'
		* give the diagonal entries (variances) a very large value in rows
		* that are not for the study
		replace Vpart`i'_`k' = 10000000000000000 if study != `i' & _n == `k'
	}
}
* getmata with the 'force' option above stretched the dataset to 196 rows,
* so trim it back to the original 58
drop if _n > 58

* Now we can calculate Fisher's information matrix for study i
forvalues i = 1/28 {
	mkmat Vpart`i'_1-Vpart`i'_58, matrix(V`i')
	mat fish`i' = XT*inv(V`i')*X
	* the weight matrix (W) for study i can now be derived
	mat w`i' = varb*fish`i'*varb
}
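
* As an optional check (a minimal sketch, not part of the original code):
* because V is block diagonal across studies, the per-study information
* matrices fish1-fish28 should sum, to a very close approximation (given
* the large-variance trick above), to the full Fisher information matrix.
mat checkfish = fish1
forvalues i = 2/28 {
	mat checkfish = checkfish + fish`i'
}
* entries of the difference should all be approximately zero
mat diff = checkfish - fish
mat list diff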
* There are 35 parameters in the network model,
* and we could work out the percentage study weights for all of them.
* But the main focus is on the treatment contrasts BA to HA;
* these are parameters 29 to 35,
* so we apply equation (12) in the paper to obtain the weights for all of these
tempname weights
postfile `weights' study percwtBA percwtCA percwtDA percwtEA percwtFA percwtGA percwtHA ///
	using "C:\Work\weights", replace
forvalues i = 1/28 {
	local percwtBA = (100*w`i'[29,29])/varb[29,29]
	local percwtCA = (100*w`i'[30,30])/varb[30,30]
	local percwtDA = (100*w`i'[31,31])/varb[31,31]
	local percwtEA = (100*w`i'[32,32])/varb[32,32]
	local percwtFA = (100*w`i'[33,33])/varb[33,33]
	local percwtGA = (100*w`i'[34,34])/varb[34,34]
	local percwtHA = (100*w`i'[35,35])/varb[35,35]
	local study = `i'
	post `weights' (`study') (`percwtBA') (`percwtCA') (`percwtDA') (`percwtEA') (`percwtFA') (`percwtGA') (`percwtHA')
}
postclose `weights'

* The saved data give the percentage weights for each study toward each treatment effect
use "C:\Work\weights.dta", clear

* Note that percentage weights for a two-stage network meta-analysis can
* simply be derived using the 'wt' option within the user-written network command
use "C:\Work\thromb.dta", clear
keep study r n treat

* set up the data
network setup r n, studyvar(study) trtvar(treat)

* perform the network meta-analysis, displaying percentage weights
network meta c, wt
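
* As a final sanity check on the one-stage weights saved above (a sketch,
* not part of the original analysis): for each treatment contrast, the
* percentage weights should sum to approximately 100 across the 28 studies.
use "C:\Work\weights.dta", clear
tabstat percwtBA percwtCA percwtDA percwtEA percwtFA percwtGA percwtHA, statistics(sum)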