Pérez-Escudero and de Polavieja. 10.1073/pnas.0703183104.

Supporting Information

Files in this Data Supplement:

SI Text
SI Figure 6
SI Figure 7
SI Figure 8
SI Figure 9
SI Figure 10
SI Figure 11
SI Figure 12
SI Figure 13
SI Figure 14
SI Table 1




SI Figure 6

Fig. 6. Clustering of sensory and motor neurons and of interneurons in an optimization of the full network differs from the actual clustering. (A) Average distance between neurons belonging to the same ganglion (diagonal elements) or to different ganglia (nondiagonal elements) for sensory and motor neuron positions when the complete network is optimized and a = b = 1/29.3. Clustering error is ec = 12.3%. (B) Same as A but for the actual nematode. (C) Same as A but for interneurons. Ganglion 7 has been omitted because it contains no interneurons. Clustering error is ec = 25.2%. (D) Same as C but for the actual nematode.





SI Figure 7

Fig. 7. Method to dissect networks with a noiseless optimal subnetwork and nonoptimal subnetworks with Gaussian noise. (A) Step 2 of the dissection method calculates the average position error in networks of decreasing size, eliminating the worst-located neurons in the order determined in step 1 of the method. Soma positions for the optimal subnetwork were obtained from an optimization of that subnetwork. Soma positions for the nonoptimal subnetwork were obtained from an optimization of the complete network, followed by the addition of Gaussian noise of std = 35% to the neurons selected as nonoptimal. The point marked with a plus sign is the point our algorithm finds separating the optimal and nonoptimal subnetworks. (B) Estimated size of the nonoptimal subnetwork (blue) and correctly identified nonoptimal neurons (green).





SI Figure 8

Fig. 8. Same as SI Fig. 7, but step 1 of the dissection method does not classify neurons using an iterative method.





SI Figure 9

Fig. 9. Method to dissect networks with a near-optimal subnetwork with low Gaussian noise (std = 5.5%) and nonoptimal subnetworks with larger Gaussian noise (std = 35%). (A) Step 2 of the dissection method calculates the average position error in networks of decreasing size, eliminating the worst-located neurons in the order determined in step 1 of the method. Blue line, network with near-optimal and nonoptimal subnetworks. Soma positions for the near-optimal subnetwork were obtained from an optimization of that subnetwork, followed by the addition of Gaussian noise of std = 5.5% (as obtained for the C. elegans near-optimal subnetwork). Soma positions for the nonoptimal subnetwork were obtained from an optimization of the complete network, followed by the addition of Gaussian noise of std = 35% to the neurons selected as nonoptimal. Red line, noisy network with neuron positions obtained by adding Gaussian noise of std = 8% to the optimal positions of all neurons. (B) Estimated size of the nonoptimal subnetwork (blue points) and correctly identified nonoptimal neurons (green points) for networks formed by a near-optimal subnetwork and nonoptimal subnetworks of different sizes. (C) Same as B but for noisy networks of different std. (D) Deviation of neuron positions from optimal positions in the actual nematode, consistent with a near-optimal subnetwork with Gaussian noise of std = 5.5% and a nonoptimal subnetwork with std = 35%.





SI Figure 10

Fig. 10. Encephalization in the actual network, in the optimal subnetwork, and in an optimization of the complete network. Histogram of the number of neurons along the body of the animal.





SI Figure 11

Fig. 11. Histogram of synapse locations along the animal, assuming that synapses lie at the midpoints between somas.





SI Figure 12

Fig. 12. Interaction matrices for nonoptimal and near-optimal neurons. (A) Change in position error of each of the near-optimal neurons (bottom) when removing one of the nonoptimal neurons (left) from the network. Neuron labels in the figure are positioned such that the most damaging nonoptimal neurons are at the bottom and the most damaged near-optimal neurons are to the left. Only the 12 × 24 submatrix at the bottom left of the complete 34 × 184 matrix (SI Fig. 13) is shown here. (B) Same as A but for the change in position error of the nonoptimal neurons. Shown is the 15 × 22 submatrix of the most damaging and most damaged neurons (see SI Fig. 14 for the complete 34 × 34 matrix).





SI Figure 13

Fig. 13. Interaction matrix for nonoptimal and near-optimal neurons. Shown is the change in position error of the near-optimal neurons (bottom) when one of the nonoptimal neurons (left) is removed from the complete network. Neuron labels in the figure are positioned such that the most damaging nonoptimal neurons are at the bottom and the most affected near-optimal neurons are to the left.





SI Figure 14

Fig. 14. Interaction matrix between nonoptimal neurons. Shown is the change in position error of the nonoptimal neurons (bottom) when one of the nonoptimal neurons (left) is removed. Neuron labels in the figure are positioned such that the most damaging nonoptimal neurons are at the bottom and the most affected nonoptimal neurons are to the left.





Table 1. List of the most nonoptimal neurons in order of decreasing nonoptimality

Index    Name     Type
160      PVQL     Interneuron
161      PVQR     Interneuron
156      PVNL     Motor
128      LUAL     Interneuron
163      PVT      Interneuron
172      RID      Motor
157      PVNR     Motor
158      PVPL     Interneuron
111      DVC      Interneuron
159      PVPR     Interneuron
64       AVG      Sensory
209      SDQL     Interneuron
129      LUAR     Interneuron
152      PVCR     Interneuron
151      PVCL     Interneuron
153      PVDL     Interneuron
154      PVDR     Interneuron
150      PQR      Sensory
165      PVWR     Interneuron
164      PVWL     Interneuron
109      DVA      Interneuron
162      PVR      Sensory
147      PLMR     Sensory
54       AVAL     Interneuron
55       AVAR     Interneuron
92       DA06     Motor
24       ALML     Sensory
72       AVM      Sensory
25       ALMR     Sensory
69       AVKL     Interneuron
210      SDQR     Interneuron
262      VC01     Motor
148      PLNL     Sensory
263      VC02     Motor





SI Text

Derivation of Formula for Positions Minimizing Wiring Cost, Eq. 2.

In the following we give for completeness a very explicit derivation of Eq. 2. Equivalent derivations based on heavier use of matrix properties can be found in refs. 1 and 2. When cost increases quadratically with wire length, the total cost is given by Eq. 1 with exponent \xi = 2,

E = \sum_{i,j=1}^{N} A_{ij} (x_i - x_j)^2 + a \sum_{i=1}^{N} \sum_{j=1}^{N_s} B^s_{ij} (x_i - s_j)^2 + b \sum_{i=1}^{N} \sum_{j=1}^{N_m} B^m_{ij} (x_i - m_j)^2,

where A is the neuron-neuron connectivity matrix, B^s and B^m are the sensor and muscle blocks of the neuron-organ connectivity matrix B, and s_j and m_j are the fixed positions of the N_s sensors and N_m muscles. This function has a minimum where the partial derivatives with respect to the positions of all neurons are zero, \partial E / \partial x_p = 0 for all p = 1, ..., N, with

\partial E / \partial x_p = 2 \sum_{i,j} A_{ij} (x_i - x_j)(\delta_{ip} - \delta_{jp}) + 2a \sum_{i,j} B^s_{ij} (x_i - s_j) \delta_{ip} + 2b \sum_{i,j} B^m_{ij} (x_i - m_j) \delta_{ip},

and \delta_{ij} the Kronecker delta (\delta_{ij} = 1 when i = j and 0 otherwise). Because of the Kronecker deltas, most terms vanish. However, we keep most of them for later convenience. After regrouping terms and removing some of the vanishing terms, we get

\sum_{j} A_{pj} (x_p - x_j) - \sum_{i} A_{ip} (x_i - x_p) + a \sum_{j} B^s_{pj} (x_p - s_j) + b \sum_{j} B^m_{pj} (x_p - m_j) = 0.

As the matrix A is symmetric, the first and second terms are identical, giving

2 \sum_{j} A_{pj} (x_p - x_j) + a \sum_{j} B^s_{pj} (x_p - s_j) + b \sum_{j} B^m_{pj} (x_p - m_j) = 0.

Regrouping terms we obtain

\left( 2 \sum_{j} A_{pj} + a \sum_{j} B^s_{pj} + b \sum_{j} B^m_{pj} \right) x_p - 2 \sum_{j} A_{pj} x_j = a \sum_{j} B^s_{pj} s_j + b \sum_{j} B^m_{pj} m_j,

where we have eliminated vanishing terms in the second term. Renaming the summation index j to i in the second term, and using the definition of Q in Eq. 2b,

Q_{pi} = \delta_{pi} \left( 2 \sum_{j} A_{pj} + a \sum_{j} B^s_{pj} + b \sum_{j} B^m_{pj} \right) - 2 A_{pi},

we can write

\sum_{i} Q_{pi} x_i = a \sum_{j} B^s_{pj} s_j + b \sum_{j} B^m_{pj} m_j.

This is a system of N equations with the soma positions x_i as the N unknowns. In matrix notation we can write it as Q x = a B^s s + b B^m m, where x, s, and m are column vectors that store the positions of all neurons, sensors, and muscles, respectively. Multiplying both members of the matrix equation from the left by Q^{-1}, we obtain the solution x = Q^{-1} (a B^s s + b B^m m), as given in Eq. 2a.
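As a concrete illustration, the following is a minimal numpy sketch of Eqs. 2a and 2b as reconstructed above. All matrices and sizes are hypothetical toy values for illustration only, not the actual C. elegans connectivity data:

    import numpy as np

    rng = np.random.default_rng(0)
    N, Ns, Nm = 5, 3, 4                        # toy numbers of neurons, sensors, muscles
    A = rng.random((N, N)); A = (A + A.T) / 2  # symmetric neuron-neuron connectivity (toy)
    Bs = rng.random((N, Ns))                   # neuron-sensor connectivity (toy)
    Bm = rng.random((N, Nm))                   # neuron-muscle connectivity (toy)
    s, m = rng.random(Ns), rng.random(Nm)      # fixed sensor and muscle positions (toy)
    a, b = 1 / 29.3, 1 / 29.3                  # organ-connection weights (SI Fig. 6 values)

    # Q (Eq. 2b): diagonal of total connection strengths minus the neuron-neuron part.
    Q = np.diag(2 * A.sum(1) + a * Bs.sum(1) + b * Bm.sum(1)) - 2 * A

    # Eq. 2a: solve Q x = a Bs s + b Bm m (solving is preferable to inverting Q).
    x = np.linalg.solve(Q, a * Bs @ s + b * Bm @ m)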

Derivation of the Center of Mass Formula, Eq. 4.

When only connections between neurons and organs (sensors and muscles) are considered, and quadratic cost per unit length is assumed, the total cost in Eq. 1 reduces to

E = a \sum_{i=1}^{N_{sm}} \sum_{j=1}^{N_s} B^s_{ij} (x_i - s_j)^2 + b \sum_{i=1}^{N_{sm}} \sum_{j=1}^{N_m} B^m_{ij} (x_i - m_j)^2,

where N_{sm} is the number of sensory and motor neurons. The minimum is characterized by vanishing partial derivatives,

\partial E / \partial x_p = 2a \sum_{j} B^s_{pj} (x_p - s_j) + 2b \sum_{j} B^m_{pj} (x_p - m_j) = 0.

Note that the neuron position x_p can be taken out of the sums, and Eq. 4 in the main text,

x_p = \frac{a \sum_{j} B^s_{pj} s_j + b \sum_{j} B^m_{pj} m_j}{a \sum_{j} B^s_{pj} + b \sum_{j} B^m_{pj}},

is then obtained directly.
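In code, Eq. 4 is a single weighted average per neuron. A minimal sketch, reusing the toy Bs, Bm, s, and m of the previous example:

    import numpy as np

    def center_of_mass(Bs, Bm, s, m, a=1.0, b=1.0):
        # Eq. 4: each neuron sits at the weighted center of mass of the
        # sensors and muscles it connects to.
        num = a * Bs @ s + b * Bm @ m
        den = a * Bs.sum(axis=1) + b * Bm.sum(axis=1)
        return num / den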

Robustness of Predictions Using the Center-of-Mass Formula, Eq. 4.

First, the prediction is almost independent of any parameters. Although the parameter b appears in Eq. 4, only 13 out of 199 neurons are both sensory and motor neurons, and for the remaining 186 neurons the equation further reduces to separate expressions for sensory and motor neurons, which are independent of b. In practice, predictions improve with increasing b but, in any case, the maximum difference is below 1% for the mean position error and below 0.3% for the clustering error. For different powers of the wire cost in Eq. 3, we found the differences in position and clustering errors to remain below these bounds as well (see the next section). Even when changing the connectivity matrix B to a matrix of 1s and 0s, the differences in mean position error and clustering error remained similarly small. The reason for this strong robustness is that, although some of the patches of sensors and muscles to which each ganglion is connected are relatively large, each neuron connects to a much smaller mini-patch of sensors or muscles. Fig. 2F in the main text gives the statistics of the size of the mini-patch a single neuron connects to, with a maximum at zero length (single connections) and another maximum at a length of 5% of the total length of the animal. Any sensible wiring-economy approach for this subnetwork would predict the location of each neuron somewhere within the length of the mini-patch it connects to. Therefore, any modification of the model's parameters should cause position differences below 5% of the length of the animal. As mentioned above, on average, these differences are below 1% and affect the clustering error by less than 0.3% (adding up all the contributions listed above, due to changing the parameter b, the wire-cost exponent, and the detailed form of matrix B).

Numerical Wire Cost Minimization for Nonquadratic Cost.

In general, the calculations performed in this article have been for cost increasing quadratically with wire length. However, the results obtained with quadratic cost appear to hold even when nonquadratic costs are used. For example, to test the robustness of the predictions of sensory and motor neuron positions using the center-of-mass calculation in Eq. 4, we computed the optimal layouts for costs with exponents \xi from 1.1 to 10 in intervals of 0.1. Note that the total cost of connections between neurons and organs in Eq. 3 can be written as

E = \sum_{i} E_i, with E_i = a \sum_{j} B^s_{ij} |x_i - s_j|^\xi + b \sum_{j} B^m_{ij} |x_i - m_j|^\xi

the cost associated with neuron i. Therefore, the optimization problem reduces to numerically obtaining the optimal position of each neuron separately by minimizing its individual cost. A Newton minimization algorithm was used for these calculations. This algorithm starts at position 0.5, estimates the position of the minimum from the first two derivatives at that point, and iterates until convergence. As the cost functions are convex for exponents greater than 1, convergence to the global minimum is guaranteed.
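The following is a minimal sketch of such a per-neuron Newton minimization, using central finite differences for the two derivatives; the weights, organ positions, and exponent are hypothetical toy values:

    import numpy as np

    def newton_min(cost, x0=0.5, h=1e-5, tol=1e-10, max_iter=100):
        # 1D Newton minimization: step by (first derivative)/(second derivative),
        # both estimated by central finite differences, until the step is tiny.
        x = x0
        for _ in range(max_iter):
            d1 = (cost(x + h) - cost(x - h)) / (2 * h)
            d2 = (cost(x + h) - 2 * cost(x) + cost(x - h)) / h ** 2
            step = d1 / d2
            x -= step
            if abs(step) < tol:
                return x
        return x

    w = np.array([1.0, 2.0, 1.5])   # toy connection weights of one neuron
    p = np.array([0.1, 0.6, 0.9])   # toy organ positions along the body (0 to 1)
    xi = 3.0                        # wire-cost exponent
    x_opt = newton_min(lambda x: np.sum(w * np.abs(x - p) ** xi))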

For the general optimization problem, we tested polynomial costs up to degree 4, restricted to be monotonically increasing and convex. Using a multidimensional Newton algorithm and a multidimensional steepest-gradient algorithm, we obtained no relevant differences compared with quadratic cost. Therefore, the predictions seem to be robust to the choice of cost function as long as it is monotonically increasing and convex, and the quadratic case seems to be a good representative.
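As an illustration of the general problem, the sketch below minimizes a convex, monotonically increasing polynomial cost (degree 4, illustrative coefficients) over all soma positions at once. It uses scipy's general-purpose minimizer rather than the Newton and steepest-gradient implementations used in the article, and reuses the toy matrices of the first sketch:

    import numpy as np
    from scipy.optimize import minimize

    def poly_cost(L, coeffs=(0.0, 1.0, 0.5, 0.2, 0.1)):
        # c(L) = sum_k coeffs[k] * L**k; nonnegative coefficients keep the cost
        # monotonically increasing and convex for L >= 0.
        return sum(ck * L ** k for k, ck in enumerate(coeffs))

    def total_cost(x, A, Bs, Bm, s, m, a, b):
        # Apply the polynomial cost to every wire length and sum with the weights.
        wires_nn = poly_cost(np.abs(x[:, None] - x[None, :]))
        wires_s = poly_cost(np.abs(x[:, None] - s[None, :]))
        wires_m = poly_cost(np.abs(x[:, None] - m[None, :]))
        return (A * wires_nn).sum() + a * (Bs * wires_s).sum() + b * (Bm * wires_m).sum()

    res = minimize(total_cost, x0=np.full(len(A), 0.5), args=(A, Bs, Bm, s, m, a, b))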

Detection of the Point Separating Optimal From Nonoptimal Subnetworks in e_p Plots.

In Fig. 5A and SI Fig. 9A, the point that one identifies visually as the separation point between the optimal and nonoptimal networks is characterized by separating two regimes of the position-error curve e_p: one with a very steep slope (to the right of the separation point) and one with a flatter slope (to the left). Algorithms based on the first and second derivatives were found to be too sensitive to small fluctuations of the function, even after smoothing. An alternative, more robust algorithm is detailed in the following. Start at the last point and estimate the derivative there. This estimate is obtained by averaging the numerical derivatives over the final 6, 5, 4, ..., 1 points and keeping the highest average value; in this way, the estimate is reliable for both small and large nonoptimal subnetworks. Then "roll down" until a derivative n times smaller is found. We found that n = 5 always performed well, and this value was used in all calculations presented here. This simple version can, however, get stuck on small bumps of the function, so we developed a variant based on the same idea that is robust to small fluctuations and whose results almost always match visual inspection. The algorithm is as follows:

    Estimate derivative at the end, using the last 6 points
    Set Point of Separation = Number of Neurons
    If Derivative at the end > 0
        Do
            For c = 1 To Point of Separation
                Slopes(c) = (e_p(Point of Separation) - e_p(c)) / (Point of Separation - c)
            End For
            If Max(Slopes) > Derivative at the end / n
                Point of Separation = Index(Max(Slopes))
            End If
        Loop While Max(Slopes) > Derivative at the end / n
    End If
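A direct Python transcription of this pseudocode, assuming e_p is stored from the smallest subnetwork to the full network (so the steep nonoptimal regime lies at the right end) and using 0-based indices:

    import numpy as np

    def separation_point(e_p, n=5):
        e_p = np.asarray(e_p, dtype=float)
        diffs = np.diff(e_p)
        # Derivative at the end: average the last k point-to-point derivatives
        # for k = 1..6 and keep the highest average.
        d_end = max(diffs[-k:].mean() for k in range(1, min(6, len(diffs)) + 1))
        sep = len(e_p) - 1
        if d_end <= 0:
            return sep
        while sep > 0:
            c = np.arange(sep)
            slopes = (e_p[sep] - e_p[c]) / (sep - c)  # chord slopes back to each point
            if slopes.max() > d_end / n:
                sep = int(np.argmax(slopes))          # "roll down" to the steepest chord
            else:
                break
        return sep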

References

1. Hall KM (1970) Management Sci 17:219–229.

2. Chklovskii DB (2004) Neural Comput 16:2067–2078.