Table 5. Likely strengths and weaknesses of DeepGANnel versus traditional synthesis methods.
| Feature | Traditional method | Traditional method characteristics | DeepGANnel |
|---|---|---|---|
| A priori assumptions | Stochastic simulation | Everything must be estimated or assumed: channel size, rate constants, open-channel noise, thermal noise levels, artefact frequency, etc. | None required. |
| Authenticity | Stochastic simulation | Depends entirely on the accuracy of the a priori assumptions; authenticity is difficult to achieve. | Highly authentic. |
| Speed | Stochastic simulation | Moderate speed. | Slow to train, fast to simulate thereafter. |
| GPU needed | Stochastic simulation | Typically not used, although future stochastic models may use them. | Realistically needed for training, although not for simulation itself. |
| | Vanilla GAN | Yes. | |
| Markov model | Stochastic simulation | Could include a Markovian model structure. | May include Markovian structure, but this is not guaranteed. |
| Need for seed data | Stochastic simulation | No; the data can be completely imaginary. | Yes. |
| Fully labelled data | Stochastic simulation | Yes. | Yes. |
| | Vanilla GAN | Cannot provide labels in parallel to raw data. | |
This table summarises the pros and cons of DeepGANnel discussed and justified in the text. By definition, such comparisons can only be subjective, because traditional methods vary (being entirely dependent on a priori assumptions that may be simple or complex), as do computing platforms.
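For context, the sketch below illustrates what the table means by a traditional stochastic simulation: a two-state Markov channel in which every parameter (rate constants, single-channel amplitude, open-channel and baseline noise) must be supplied a priori, but which produces a fully labelled record by construction. This is a minimal illustrative example, not code from DeepGANnel; all parameter values and variable names are assumptions chosen for demonstration.

```python
# Minimal sketch of a traditional stochastic (Markov) single-channel simulation.
# Every parameter below must be assumed a priori, as noted in the table;
# the values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

dt = 1e-4            # sample interval, s (10 kHz sampling) -- assumed
k_open = 50.0        # closed -> open rate constant, s^-1 -- assumed
k_close = 200.0      # open -> closed rate constant, s^-1 -- assumed
amplitude = 5.0      # single-channel current, pA -- assumed
baseline_sd = 0.3    # closed-state (thermal/baseline) noise, pA -- assumed
open_sd = 0.5        # additional open-channel noise, pA -- assumed
n_samples = 100_000

# Dwell-time simulation: exponentially distributed sojourns in each state.
labels = np.zeros(n_samples, dtype=np.int8)   # idealised (label) trace
state, i = 0, 0
while i < n_samples:
    rate = k_open if state == 0 else k_close
    dwell = rng.exponential(1.0 / rate)        # dwell time in current state, s
    n_dwell = max(1, int(round(dwell / dt)))   # dwell time in samples
    labels[i:i + n_dwell] = state
    i += n_dwell
    state = 1 - state                          # flip closed <-> open

# Raw trace = idealisation scaled by amplitude, plus state-dependent noise.
noise_sd = np.where(labels == 1, np.hypot(baseline_sd, open_sd), baseline_sd)
raw = labels * amplitude + rng.normal(0.0, noise_sd)

# 'raw' and 'labels' together form a fully labelled record (table row
# "Fully labelled data"), but only because every parameter was assumed upfront.
```

By contrast, DeepGANnel learns these characteristics from seed data rather than requiring them to be specified, which is the trade-off the table summarises.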