
Figure 2. Example Neural Network.

A schematic of a neural network, where \(x\) is a vector of input data with \(d\) input features, \(x_{i,v}\) is the \(v\)th feature for the \(i\)th participant, and \(y_i\) is the network output for the \(i\)th participant. In the case of the continuous neural network, \(y_i\) is a continuous variable representing the network's prediction of the time of DLMO for the \(i\)th participant. In the case of the classification neural network, \(y_i\) falls between 0 and 1, and if \(y_i < 0.5\) the timepoint is classified as falling before DLMO. Nodes \(l_{m,n}\) represent hidden nodes in the network, where \(m\) is the layer of the node and \(n\) is the index of the node within each layer. The value of each node is determined by \(l_{m,n} = \tanh\!\left(\sum_{n=1}^{k_{m-1}} w_{m,n}\, l_{m-1,n} + b_n\right)\), where \(b_n\) is a fit constant. The number of nodes in each layer, \(k_m\), is a hyperparameter selected through cross-validation.
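
The per-node update in the caption can be written in matrix form, with each hidden layer computed as tanh(W h + b). The following is a minimal NumPy sketch of a network of this shape, assuming hypothetical layer sizes, Gaussian weight initialization, and a sigmoid squashing for the classification output, none of which are specified in the figure; it is an illustration of the structure, not the paper's trained model.

```python
import numpy as np

def init_layer(rng, n_in, n_out):
    """Randomly initialize a weight matrix and bias vector for one layer (assumed init)."""
    return rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out)

def forward(x, layers, classify=False):
    """
    Propagate one participant's feature vector x through the hidden layers of
    Figure 2: each hidden node is a weighted sum of the previous layer's nodes
    plus a bias, passed through tanh.
    """
    h = x
    for W, b in layers[:-1]:
        h = np.tanh(W @ h + b)              # l_{m,n} = tanh(sum_n w_{m,n} l_{m-1,n} + b_n)
    W_out, b_out = layers[-1]
    y = W_out @ h + b_out                   # output node
    if classify:
        # classification network: squash to (0, 1); y_i < 0.5 means "before DLMO"
        return 1.0 / (1.0 + np.exp(-y))
    return y                                # continuous network: predicted DLMO time

# Hypothetical layer sizes k_m; the paper selects these by cross-validation.
rng = np.random.default_rng(0)
sizes = [10, 8, 8, 1]                       # d = 10 input features, two hidden layers, one output
layers = [init_layer(rng, sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

x_i = rng.normal(size=sizes[0])             # one participant's feature vector
print(forward(x_i, layers))                 # continuous prediction
print(forward(x_i, layers, classify=True))  # classification output in (0, 1)
```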