[Preprint]. 2024 Mar 15:2024.03.14.585055. [Version 1] doi: 10.1101/2024.03.14.585055

Table A1.

The main algorithm parameters are summarized below. Parameters such as the bounds and cost function are user inputs that define the optimization problem; technical parameters such as the gradient descent step and mesh size control the behavior of the algorithm itself.

Lower Bounds, Upper Bounds, Categorical Flags (l, u, c): Hard constraints on the range of each parameter and whether it takes discrete or continuous values.
Experimental Budget, Number of Samples per Iteration (M, n_j): The total experimental budget defines the number of iterations of the algorithm we can perform (i.e., the number of sets of data we can generate), and the number of samples per iteration dictates how many individual study designs we generate per round.
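The budget arithmetic above can be made concrete with a short sketch; the uniform per-round sample count used here is an assumption, since n_j may in principle vary by iteration.

```python
# Sketch: how the total experimental budget M and the per-iteration sample
# count n_j (symbols from Table A1) bound the number of optimization rounds.
# Assumes n_j is the same every round, which is a simplification.

def num_rounds(M, n_per_iter):
    """Number of full iterations affordable with a budget of M evaluations."""
    return M // n_per_iter

M = 100        # total experimental budget (study designs we can evaluate)
n_j = 8        # study designs generated per round
rounds = num_rounds(M, n_j)
leftover = M - rounds * n_j
print(rounds, leftover)   # 12 full rounds, 4 designs left unspent
```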
Gradient Descent Step, Perturbation Vector, Scaling Factor, Number of Gradient Descent Sample Points (α_gj, e_j, γ, n_gj): These parameters define the numerical estimation of the gradient and the size of each descent step. The perturbation vector and scaling factor enter a finite-difference approximation of the gradient at a given point, and a new point is generated from the current point and the gradient descent step. The number of gradient descent sample points dictates how many of the current best points have their gradients estimated for the next iteration.
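One plausible reading of this finite-difference scheme is a simultaneous-perturbation (SPSA-style) estimate; the exact scheme is not spelled out here, so the two-point form below, the toy loss, and the step sizes are all assumptions for illustration.

```python
import numpy as np

# Sketch of the gradient step: the perturbation vector e and scaling factor
# gamma give a two-point finite difference, and the step size alpha moves
# the current point downhill. SPSA-style form; an assumption, not the
# paper's exact scheme.

def spsa_gradient(loss, x, gamma, rng):
    e = rng.choice([-1.0, 1.0], size=x.shape)          # random +/-1 perturbation vector
    diff = loss(x + gamma * e) - loss(x - gamma * e)   # two loss evaluations
    return diff / (2.0 * gamma * e)                    # elementwise gradient estimate

def descent_step(loss, x, alpha, gamma, rng):
    g_hat = spsa_gradient(loss, x, gamma, rng)
    return x - alpha * g_hat                           # new candidate point

rng = np.random.default_rng(0)
loss = lambda x: float(np.sum(x ** 2))                 # toy smooth loss
x = np.array([2.0, -3.0])
for _ in range(50):
    x = descent_step(loss, x, alpha=0.1, gamma=1e-3, rng=rng)
print(loss(x))  # much smaller than the starting loss of 13
```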
Exploration Coefficient, Acquisition Variance Parameter (ε_j, α_vj): The exploration coefficient assigns a fraction of points to be chosen by the acquisition function derived from the Gaussian process, rather than by local search around already promising points. At large values, the acquisition variance parameter emphasizes the exploration of high-variance points.
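A minimal sketch of how a variance weight steers a Gaussian-process acquisition toward unexplored regions. The lower-confidence-bound form mu − α_v·σ, the RBF kernel, and the toy 1-D data are all assumptions; the paper's actual acquisition function is not reproduced here.

```python
import numpy as np

# Exact GP posterior on toy 1-D data, then an LCB-style acquisition
# mu - alpha_v * sigma (smaller is better for minimisation). Large alpha_v
# pushes the best acquisition value toward high-uncertainty regions.

def rbf(A, B, ls=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(rbf(Xq, Xq) - Ks.T @ v), 0.0, None)
    return mu, np.sqrt(var)

def lcb(mu, sigma, alpha_v):
    return mu - alpha_v * sigma

X = np.array([[0.0], [0.5], [1.0]])      # observed designs
y = np.array([1.0, -0.8, 0.9])           # observed losses
Xq = np.linspace(-1.0, 2.0, 61)[:, None] # candidate grid
mu, sigma = gp_posterior(X, y, Xq)

pick_exploit = Xq[np.argmin(lcb(mu, sigma, alpha_v=0.0))]  # near the observed minimum
pick_explore = Xq[np.argmin(lcb(mu, sigma, alpha_v=5.0))]  # far from the data
print(pick_exploit, pick_explore)
```

With α_v = 0 the acquisition just tracks the posterior mean and stays near the best observed point; with a large α_v the high-variance fringes of the grid win.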
Mesh Size, Directional Polling Set, Discrete Neighborhood (Δ_j, D_j, 𝒩_D): The mesh size sets the scale of the lattice search, while the directional polling set and discrete neighborhood definitions determine how local points around the mesh center are explored.
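A poll step on the mesh can be sketched as follows; the 2n coordinate directions used for the polling set are a standard choice and an assumption here, since the actual set D_j may differ.

```python
import numpy as np

# Sketch of a poll step: from the incumbent (mesh centre), trial points are
# generated along a positive spanning set of directions scaled by the mesh
# size Delta, then clipped to the bound constraints l, u from Table A1.

def poll_points(center, delta, lower, upper):
    n = len(center)
    D = np.vstack([np.eye(n), -np.eye(n)])   # polling set {+/- e_i} (an assumption)
    trials = center + delta * D              # one trial point per direction
    return np.clip(trials, lower, upper)     # respect the hard bounds

center = np.array([0.5, 0.5])
pts = poll_points(center, delta=0.25, lower=0.0, upper=1.0)
print(pts)   # four trial points around the centre
```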
Cost Function, Loss Function (g(x), L(x)): The cost function gives the cost of a single study design and can either be supplied to the optimization program in closed form or approximated from data by a surrogate function. The loss function measures the efficacy of a study design and must be computed through experiment or simulation.
Maximum Cost, Lambda Regularization Parameter (c, λ): The maximum cost parameter bounds the cost of a study design and implicitly defines a feasible region in the design space. The regularization parameter λ balances the trade-off between cost and loss in the overall optimization output (i.e., how much emphasis to place on gains in performance versus increases in cost).
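The interaction of c and λ can be sketched with a regularized objective; the additive form L(x) + λ·g(x) and the toy numbers are assumptions consistent with the descriptions above, not the paper's exact formulation.

```python
# Sketch: minimise loss(x) + lambda * cost(x), treating designs whose cost
# exceeds the maximum cost c as infeasible. Designs "A" (cheap, mediocre)
# and "B" (costly, accurate) are hypothetical.

loss = {"A": 0.30, "B": 0.10}   # hypothetical efficacy (lower is better)
cost = {"A": 1.0, "B": 8.0}     # hypothetical per-design cost

def objective(x, lam, c_max):
    g = cost[x]
    if g > c_max:
        return float("inf")     # outside the feasible region implied by c
    return loss[x] + lam * g

# A small lambda favours the accurate-but-costly design B; a larger lambda
# flips the preference to the cheap design A.
print(objective("A", 0.01, 10.0), objective("B", 0.01, 10.0))
print(objective("A", 0.10, 10.0), objective("B", 0.10, 10.0))
```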