Author manuscript; available in PMC: 2023 Feb 1.
Published in final edited form as: Curr Protoc. 2022 Feb;2(2):e374. doi: 10.1002/cpz1.374

Creating Ion Channel Kinetic Models Using Cloud Computing

Kathryn E Mangold 1, Zhuodong Zhou 1, Max Schoening 1, Jonathan D Moreno 1,2, Jonathan R Silva 1,*
PMCID: PMC9006544  NIHMSID: NIHMS1777933  PMID: 35175690

Abstract

Computational modeling of ion channels provides key insight into experimental electrophysiology results and can be used to connect channel dynamics to emergent phenomena observed at the tissue and organ levels. However, creating these models requires substantial mathematical and computational background. This tutorial seeks to lower the barrier to creating these models by providing an automated pipeline for creating Markov models from an ion channel kinetics dataset. We start by detailing how to encode sample voltage-clamp protocols and experimental data into the program and how to implement it in a cloud computing environment. We guide the reader on how to build a containerized instance, push the machine image, and finally run the routine on cluster nodes. While providing open-source code has become more standard in computational studies, this tutorial provides unprecedented detail on the use of the program and the creation of channel models, starting from inputting the raw experimental data.

Basic Protocol 1: Creation of ion channel kinetic models with a cloud computing environment

Alternative Protocol 1: Instructions for use in a standard high-performance compute cluster

Keywords: Ion channels, kinetic models, optimization

INTRODUCTION:

Computational modeling of ion channels complements experimental electrophysiology through unique insights and predictions of channel dynamics at higher dimensions. Upon creation of a kinetic channel model, one may begin to understand how the channel impacts cellular and tissue electrophysiology through other published higher-dimensional models (Qu et al. 2014; Rudy and Silva 2006).

Creating these kinetic models, however, requires a mathematical and computational background and resources that may prevent many channel electrophysiologists from building them. Starting from the set of candidate model structures identified in a previous study (K. E. Mangold et al. 2021), we first guide the reader on how to input representative electrophysiological data to run this model-creation routine. We then explain the various program settings used to tailor the model search. This protocol is, to the best of our knowledge, the first comprehensive guide to creating an ion channel kinetic model, from gathering the data to analyzing the optimized results. While providing open-source code has become more routine in electrophysiological modeling, providing a detailed tutorial is unusual. Our hope is that this routine and the accompanying tutorial lower the “barrier to entry” of kinetic modeling for experimentalists with a basic understanding of computational modeling. In Basic Protocol 1, we detail how to set up a cloud computing environment to run a routine that suggests model structures for a given experimental dataset (Figure 1). As an alternative, running the program in a standard high-performance computing Linux environment is detailed in Alternative Protocol 1.

Figure 1.

Figure 1

Protocol Graphical Overview

BASIC PROTOCOL 1

Creation of ion channel kinetic models with a cloud computing environment

Using the AWS cloud computing environment, we guide the reader on how to input their own voltage-clamp protocols and associated data to run this containerized model-selection routine. We start by detailing how to create an Amazon Linux instance and set up the various security settings. We then walk the reader through encoding the sample voltage protocols and associated experimental data. Finally, we detail containerizing the program and pushing the instance to the AWS Batch servers. If the user instead has access to a private Linux high-performance compute environment, we present instructions for running the program in Alternative Protocol 1.

Materials:

Hardware

A computer running Windows, macOS, or Linux with an Internet connection

Protocol steps with step annotations:

Setting up Your AWS Environment

  • 1
    Create an AWS Free Tier account to get started on AWS at https://aws.amazon.com/free/.
    1. Once your account is established, login as a root user to access the AWS Management Console (Figure 2).

    To navigate to the other AWS services, click on “Services” in the upper left corner or search for the desired service in the top search bar.

  • 2
    Launch an EC2 virtual machine (Figure 3).
    1. Click “Launch a virtual machine” to bring up options for an Amazon Machine Image (AMI).
    2. Choose “Amazon Linux 2 AMI.”

    Amazon EC2 instances are virtual machines for performing computations in the cloud. Choosing the Amazon Linux AMI sets up this virtual machine with a specialized Linux environment.

  • 3
    Choose the t2.micro instance for the Free Tier as shown in Figure 4.
    1. Click “Review and Launch” to review your instance settings.
      Choosing the t2.micro instance starts a basic virtual machine with enough RAM and CPU power to containerize the optimization program before sending it to the AWS cloud. You will be prompted to create a private key pair to SSH into your new Amazon Linux instance in later steps.
  • 4
    Download the private key pair and then launch your instance (Figure 5).
    1. Once your instance is launched, you should see a screen like Figure 6.
      The private key pair will allow the user to log in remotely to the EC2 instance in the next section. For more detailed instructions, read https://docs.aws.amazon.com/quickstarts/latest/vmlaunch/step-1-launch-instance.html.
  • 5
    Navigate to Amazon S3 from the Management Console to make a bucket to store optimization results.
    1. Click “Create bucket,” enter a bucket name, and accept all default settings. Upon successful creation, your bucket should look like Figure 7.
      Amazon S3 is a cloud storage service. The bucket name will need to be entered into sampleData.csv later so that result files can be periodically saved and stored during the optimization.
  • 6
    From the AWS Management Console, navigate to the Amazon Elastic Container Registry.
    1. Click “create a repository.”
      Creating a repository in the Amazon Elastic Container Registry allows for eventual storage of the containerized application image to run on the AWS cloud.
  • 7

    Make sure your repository’s visibility setting is public and enter a name for your repository as in Figure 8.

Figure 2. AWS Management Console.

Figure 2.

The AWS Management Console is the home base where all AWS applications mentioned in this tutorial may be accessed. Note the Services menu in the upper left corner and top search bar.

Figure 3.

Figure 3.

The Amazon Linux 2 AMI.

Figure 4.

Figure 4.

Selection of ‘Instance Type’ for the EC2 instance.

Figure 5.

Figure 5.

Prompt to generate the SSH key pairs and the private key file for the EC2 instance.

Figure 6.

Figure 6.

Running EC2 instance with the Public IPv4 DNS.

Figure 7.

Figure 7.

Successful creation of the S3 bucket.

Figure 8.

Figure 8.

Creation of the Amazon Elastic Container Registry.

Setting Up Security Settings in AWS

  • 8

    Navigate to “My Security Credentials” to retrieve access key IDs and secret access keys as in Figure 9.

    Amazon access and secret key IDs will be used to authenticate saving results in the S3 storage bucket.

  • 9

    Click “Create New Access Key” and download the file (Figure 10).

    KEEP THESE KEYS IN A SECURE LOCATION. Be ready to enter these access key IDs and secret access keys later in the EC2-instance to test your AWS environment.

    For this beginning tutorial, all actions will be performed as the “root” user of the AWS account for ease of instruction. The best practice, however, is to specify users and their associated permissions under Identity and Access Management (IAM). For more information please read: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
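For readers who want to follow that IAM best practice programmatically rather than through the console, the following is a hedged boto3 sketch (assuming boto3 is installed and root credentials are configured); the user name is a placeholder, and the broad AmazonS3FullAccess managed policy is used only for illustration.

```python
# Hedged boto3 sketch of the IAM best practice mentioned above: create a
# dedicated user with S3 access instead of working as root. The user name is
# a placeholder, and AmazonS3FullAccess is broader than most workflows need;
# consult the IAM documentation linked above for scoped-down policies.
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="ionchannel-optimizer")          # placeholder name
iam.attach_user_policy(
    UserName="ionchannel-optimizer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)
keys = iam.create_access_key(UserName="ionchannel-optimizer")
print("Access key ID:", keys["AccessKey"]["AccessKeyId"])
# Store the returned SecretAccessKey securely, as emphasized above.
```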

Figure 9.

Figure 9.

Security credentials

Figure 10.

Figure 10.

Your security credentials

Testing your AWS Environment

  • 10

    SSH into your EC2 instance created in the previous section.

    The address to SSH to is shown in the field ‘Public IPv4 DNS’, populated with an address similar to ec2-XX-XXX-X-XXX.us-east-2.compute.amazonaws.com. The syntax to enter in your terminal will look like the following:

    ssh -i ~/pathtokey/yourprivatekeyname.pem ec2-user@ec2-XX-XXX-X-XXX.us-east-2.compute.amazonaws.com

    For more information, please see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html

  • 11

    Once authenticated you will see a screen like Figure 11. The remaining commands will be entered from this EC2-instance.

  • 12
    Run ‘aws configure’ (Figure 12):
    1. You will be prompted to enter your access key, secret key, region name, and output format. Input the same region name as assigned to your EC2 instance under ‘Availability Zone’ in Figure 6.
    2. Enter ‘json’ for the default output format.
      These credentials will be stored in the home directory under the path ~/.aws. Entering them will allow access to the S3 cloud storage from the EC2 instance in the next steps.
  • 13

    Run ‘aws s3 ls’ to display your previously created S3 bucket as in Figure 13.

  • 14

    Create a sample file and upload it to your S3 bucket as in Figure 14.

    The general syntax is ‘aws s3 sync your_local_path s3://your-bucket-name/’. Navigate back to the S3 bucket to verify the files are listed.

  • 15

    Download all source files from https://github.com/silvalab/AdvIonChannelMMOptimizer (branch Methods) and save all files to your EC2 Instance.

    We recommend using WinSCP on Windows to make uploading the files easier.

  • 16
    Run the following series of commands to install and start docker:
    1. ‘sudo yum update -y’
    2. ‘sudo amazon-linux-extras install docker’
    3. ‘sudo service docker start’
    4. ‘sudo usermod -a -G docker ec2-user’
    5. Exit and then reconnect your SSH session
      These commands install and start Docker on the EC2 instance and add ec2-user to the docker group. Docker will be used to containerize the application in later steps. For more information, please read https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
Figure 11.

Figure 11.

Home directory of the Amazon Linux 2 EC2-instance

Figure 12.

Figure 12.

‘aws configure’ output

Figure 13.

Figure 13.

‘aws s3 ls’ output

Figure 14. File creation practice.

Figure 14.

Illustration of creating a file called “test.txt” in our home directory. Then we show that we can upload this file and other hidden files in our home directory to our S3 bucket.

Specifying Optimization Settings and Inputting Training/Validation Voltage Clamp Protocols

All the following optimization settings and input training/validation data are entered into sampleData.csv (Figure 15).

Figure 15.

Figure 15.

Optimization Settings and headings in sampleData.csv
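One quick way to sanity-check your edits to sampleData.csv before containerizing is to print its first rows from Python; a minimal sketch is below (the csv module is used because the file mixes settings, protocol headings, and data, so rows may differ in length).

```python
# Quick sanity check of sampleData.csv: print the first rows so typos in the
# solver settings or protocol headings are easy to spot before containerizing.
# The csv module tolerates rows of different lengths, which a mixed
# settings/protocol/data file like this may contain.
import csv

with open("sampleData.csv", newline="") as f:
    for i, row in enumerate(csv.reader(f)):
        print(i, row)
        if i >= 20:   # only the first ~20 rows
            break
```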

  • 17

    Update the AWS S3 path with your S3 bucket name in the leftmost column under ‘Solver settings.’

    The rest of the solver settings are populated with default values, reproduced in Table 1. To customize your own optimization, change the appropriate field.

  • 18

    Review the listed protocols under the “protocols.lst” and “validation.lst” heading.

    The sample protocols already listed are Ito,f steady-state activation and steady-state inactivation protocols, called ‘WTgv’ and ‘WTinac’, in addition to a current trace at 20 mV, WTtrace20. In the validation column, an Ito,f current trace at 40 mV is specified (Johnson et al. 2018).

    Note: Instead of providing an entire voltage protocol and its associated data for validation, the user may wish to set aside a certain percentage of datapoints within a protocol for validation. Here, in addition to the 40 mV current trace, 20% of the experimental data from WTgv, WTinac, and WTtrace20 is randomly selected to serve as validation data. The experimental datapoints withheld from the training set serve as validation and are entered later in sampleData.csv; a minimal sketch of this random withholding appears after this list of steps. (Please see the Supplementary Materials for a thorough discussion of the encoding of the sample protocols and data.)

  • 19

    Update sampleData.csv with experimental training/validation data and voltage protocol as applicable.

    sampleData.csv comes populated with sample training/validation data and voltage protocols. Please see the Supplementary Materials for a detailed outline of the encoding of the sample data.

  • 20

    Review which topologies will be optimized.

    Follow the background under ‘Customizing the Lists of Topologies to Optimize’ in the Supplementary Materials to understand how to specify the various topologies to optimize in the list of arguments to job.sh.
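As referenced in the note to step 18, the following is a minimal Python sketch of randomly withholding 20% of a protocol’s datapoints for validation. The (voltage, normalized current) pairs are hypothetical placeholders; in the actual workflow the withheld points are entered in sampleData.csv.

```python
# Minimal sketch of randomly withholding 20% of a protocol's datapoints for
# validation, as described in step 18. The datapoints below are placeholders;
# in practice the withheld points are entered in sampleData.csv.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical (voltage in mV, normalized current) pairs for a steady-state
# activation curve.
datapoints = np.array([(-40, 0.02), (-30, 0.08), (-20, 0.22), (-10, 0.45),
                       (0, 0.70), (10, 0.87), (20, 0.95), (30, 0.98),
                       (40, 0.99), (50, 1.00)])

n = len(datapoints)
n_validation = int(round(0.2 * n))
validation_idx = rng.choice(n, size=n_validation, replace=False)

validation = datapoints[validation_idx]
training = np.delete(datapoints, validation_idx, axis=0)
print("training points:", len(training), "validation points:", len(validation))
```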

Table 1.

Customizable Solver Settings

Solver setting | Default value | Description
double update_mu_rates | 0 | Mean of the gaussian perturbation applied to the steady-state and kinetic rate parameters, rs and rk, during optimization. These rate parameters are listed in the optimization output files. The origins of rs and rk are described in Menon et al. (Menon, Spruston, and Kath 2009) and Teed et al. (Teed and Silva 2016).
double update_std_rates | 1 | Standard deviation of the gaussian perturbation applied to rs and rk.
double update_mu_args | 0 | Mean of the gaussian perturbation applied to the voltage-dependent arguments of the voltage-dependence function.
double update_std_args | 3 | Standard deviation of the gaussian perturbation applied to the voltage-dependent arguments.
double gamma | 0.001 | Part of the adaptive temperature control (Equation 2 in K. E. Mangold et al. 2021; see also Azizi and Zolfaghari 2004): the factor by which the temperature is increased for each worse solution encountered in simulated annealing.
double t0 | 0.00001 | Starting temperature.
double rate_min | -12 | Lower limit for initializing ln(rate); the natural log of the rate parameters in the rs and rk matrices can range between −12 and 12. See Menon et al. and Teed et al. for background.
double rate_max | 12 | Upper limit for initializing ln(rate).
double arg_min | -80 | Minimum of the voltage-dependent arguments, by default −80 mV (physiological voltages during the action potential). These arguments are updated with a gaussian perturbation using ‘update_mu_args’ and ‘update_std_args’ during simulated annealing.
double arg_max | 60 | Maximum of the voltage-dependent arguments, by default 60 mV.
int k_max | 150 | Number of iterations to run the optimization.
int step | 100 | Can be used to manipulate the temperature.
int display | 10 | How often the current best cost value is computed and displayed, along with overfitting metrics.
int n_chains | 10 | Number of noninteracting chains used in simulated annealing.
int restart | 1 |
string AWS_S3_path | "s3://my-bucket/" | “my-bucket” is the name of the bucket specified when setting up the AWS environment.
int snapshot | 50 | How often .model and .txt files are written to show progress.
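To make the interplay of these settings concrete, the following is a minimal, conceptual Python sketch of an adaptive-temperature simulated-annealing loop using the names t0, gamma, k_max, and the gaussian perturbation parameters from Table 1. The toy quadratic cost and the simple temperature update are illustrative assumptions only; the program’s actual implementation and its temperature schedule (Equation 2 of K. E. Mangold et al. 2021) differ.

```python
# Conceptual simulated-annealing loop illustrating how the solver settings in
# Table 1 interact. Toy quadratic cost; NOT the program's implementation, and
# the true adaptive temperature update follows Eq. 2 of Mangold et al. (2021).
import math
import random

t0 = 1e-5                          # starting temperature
gamma = 0.001                      # temperature factor per worse solution encountered
k_max = 150                        # number of iterations
update_mu, update_std = 0.0, 1.0   # gaussian perturbation of the parameters

def cost(params):
    # Placeholder cost; the real cost compares model output to the
    # voltage-clamp training data (see Critical Parameters).
    return sum(p * p for p in params)

params = [random.uniform(-1, 1) for _ in range(4)]
best_params, best_cost = list(params), cost(params)
temperature = t0
n_worse = 0

for k in range(k_max):
    candidate = [p + random.gauss(update_mu, update_std) for p in params]
    delta = cost(candidate) - cost(params)
    if delta > 0:
        n_worse += 1
        temperature = t0 * (1 + gamma * n_worse)  # illustrative warm-up only
    # Metropolis-style acceptance: always accept improvements, occasionally
    # accept worse solutions depending on the current temperature.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        params = candidate
    if cost(params) < best_cost:
        best_params, best_cost = list(params), cost(params)

print("best cost:", best_cost)
```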

Building and Pushing the Containerized Instance

  • 21
    Build the container.
    1. Run the following command in the same directory as all downloaded files: docker build -t test --build-arg AWS_ACCESS_KEY_ID_BUILD=<YOUR_ACCESS_KEY_HERE> --build-arg AWS_SECRET_ACCESS_KEY_BUILD=<YOUR_SECRET_ACCESS_KEY_HERE> .

    Running docker build invokes the containerization process for all files needed to run the optimization on the AWS cloud. The option ‘-t’ assigns a tag to the machine image. The trailing dot must be included, as it directs Docker to look for the Dockerfile in the current working directory. Once the build succeeds, calling ‘docker images’ will show your built image ready to push to the container registry.

  • 22
    Test that your docker image runs properly.
    1. Call ‘docker run -ti your_image_name /usr/src/dockerdeploy/file.sh 4 3 1 test’.

    The options “-ti” to docker run allow for terminal control. The arguments ‘/usr/src/dockerdeploy/file.sh’, ‘4’, ‘3’, ‘1’, and ‘test’ refer to the entry point of the container, the number of states to be optimized, the ID of the model to optimize, the number of starts to run, and the optimization name, respectively. (A sketch for looping such local test runs over several model IDs appears after this list of steps.)

  • 23

    Push the container to the Amazon Elastic Container Registry (ECR) (i.e., a repository for machine images to be run on the Amazon clusters) as previously created.

    Upon navigating to the repository in the AWS console, there will be sample commands to push your image from your EC2-instance to the ECR registry. Click ‘Show push commands.’ Copy the URL of the image for the next step.

  • 24
    The Amazon Linux AMI may not have the correct version of the awscli (AWS Command Line Interface) installed. To smoothly upload to your ECR, make sure awscli version 2 is installed by calling ‘aws --version’ (Figure 16).
    1. If your aws-cli version is version 1, run the following series of commands to update it:
    2. sudo yum install python2-pip
    3. sudo pip uninstall awscli
    4. curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" (downloads the version 2 installer archive used in the next step)
    5. unzip awscliv2.zip
    6. sudo ./aws/install
      The latest version of the command line interface needs to be installed to ensure successful transfer of the docker image to the registry.
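As referenced in step 22, the following hedged Python sketch loops the local docker run test over a few model IDs before anything is submitted to AWS Batch. The image name “test”, the entry point, and the argument order follow the commands above; the particular model IDs and run names are placeholders, and the interactive “-ti” flags are omitted because the loop is non-interactive.

```python
# Hypothetical helper: run each of a few model IDs locally in the container
# as a smoke test before a full AWS Batch submission. Image name, entry point,
# and argument order follow the docker build/run steps above; model IDs and
# run names below are only examples.
import subprocess

IMAGE = "test"
ENTRY = "/usr/src/dockerdeploy/file.sh"
N_STATES = "4"
N_STARTS = "1"

for model_id in ["1", "2", "3"]:
    run_name = f"smoke_model{model_id}"
    cmd = ["docker", "run", IMAGE, ENTRY, N_STATES, model_id, N_STARTS, run_name]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop the loop if any run fails
```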

Figure 16.

Figure 16.

Sample output from AWS CLI Version 2.

Submitting the Job to AWS Batch

  • 25
    Navigate to the ‘wizard’ under the AWS Batch to begin submitting a job (Figure 17).
    1. Select “On-demand” under “Instance configuration” to ensure your sample run will start promptly once submitted to the queue.
    2. Follow the prompts and accept the default values, except where directed below, to set up a compute environment and job queue.
      AWS Batch allows the user to request and acquire the computing resources necessary to run the program in the cloud.
  • 26

    Copy the image URL listed in the ECR into the field “Image” (Figure 18).

    The image URL specifies which Docker image AWS Batch needs to run on the requested computing resources.

  • 27
    Specify under “Command” the entry point of the container, “/usr/src/dockerdeploy/file.sh”, followed by the number of states in the topology you wish to optimize, the topology identification number, the number of optimization starts, and the name of the optimization.
    1. For example, to optimize the third 4-state model with 10 optimization starts and save the results in a folder “test”, run:
      /usr/src/dockerdeploy/file.sh 4 3 10 test
  • 28

    Follow the remaining prompts to submit the job.

    Navigating to ‘Dashboard’ allows one to track the progress of the job in the cloud as it moves from submitted, to runnable, to starting, and finally to running.

  • 29

    Navigate to the S3 bucket and look for a result directory, named as above, under the folder corresponding to the number of states (Figure 19).

    Based on the optimization settings set in sampleData.csv, the output will be saved periodically throughout the optimization.

    NOTE: We present the use of the AWS Batch wizard to allow for an easy start with AWS Batch. Once comfortable, however, one should transition to using the full console, where array jobs may be specified in the job definition. AWS offers array jobs in which a variable, AWS_BATCH_JOB_ARRAY_INDEX, automatically increments up to the size of your array job. In Figure 20, we illustrate the use of this special environment variable when calling the program in file.sh or job.sh. For more information, please read: https://docs.aws.amazon.com/batch/latest/userguide/array_jobs.html.

    Using the 4-state model space, with its 11 models (Figure 1), as an example, submitting an array job of size 11 will spin off 11 instances, each with the AWS_BATCH_JOB_ARRAY_INDEX variable ranging from 0 to 10. The entry point to the container, “file.sh”, may be modified to use AWS_BATCH_JOB_ARRAY_INDEX as the ID of the model to optimize.
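For readers who prefer to submit such an array job programmatically rather than through the console, the following is a hedged boto3 sketch. The job queue and job definition names are placeholders that must match resources created in your account, the region is only an example, and the command arguments mirror the steps above.

```python
# Hedged boto3 sketch: submit an array job of size 11 (one instance per
# 4-state topology). Job queue and job definition names are placeholders;
# the entry point and argument order follow the steps above.
import boto3

batch = boto3.client("batch", region_name="us-east-2")  # use your region

response = batch.submit_job(
    jobName="ionchannel-4state-array",
    jobQueue="my-job-queue",                 # placeholder: your AWS Batch queue
    jobDefinition="my-job-definition",       # placeholder: your job definition
    arrayProperties={"size": 11},            # AWS_BATCH_JOB_ARRAY_INDEX runs 0..10
    containerOverrides={
        # The model-ID argument ("0") is a placeholder here; with file.sh
        # modified as in Figure 20, the actual ID is taken from
        # AWS_BATCH_JOB_ARRAY_INDEX inside the container.
        "command": ["/usr/src/dockerdeploy/file.sh", "4", "0", "10", "test"]
    },
)
print("Submitted job:", response["jobId"])
```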

Figure 17.

Figure 17.

AWS Batch Wizard home screen.

Figure 18.

Figure 18.

Specification of the container to run on AWS Batch and passing in commands.

Figure 19. Sample Output Folder on S3.

Figure 19.

Output directory “test” in the State4 folder in the S3 bucket containing all optimization output files.

Figure 20. Specification of the $AWS_BATCH_JOB_ARRAY_INDEX environment variable.

Figure 20.

Snippet of “file.sh” with the environment variable $AWS_BATCH_JOB_ARRAY_INDEX + 1 replacing $2.

Alternative Protocol 1

While we highly recommend using the containerized version of the program on AWS as in Basic Protocol 1, below we outline how to run the program on a standard Linux high-performance compute cluster. We refer the reader to the sections “Specifying Optimization Settings and Inputting Training/Validation Voltage Clamp Protocols” and “Customizing the Lists of Topologies to Optimize” under Basic Protocol 1, as these instructions apply here as well.

Materials

A computer running Windows, macOS, or Linux with an Internet connection

An account on a high-performance computing cluster

Protocol steps with step annotations:

  1. Download all source files from https://github.com/silvalab/AdvIonChannelMMOptimizer (branch Methods) and save all files to your home directory.

  2. Install the Intel Math Kernel Library (MKL).

    We include sample commands for Linux distributions with yum package managers:

    yum-config-manager --add-repo https://yum.repos.intel.com/mkl/setup/intel-mkl.repo

    rpm --import https://yum.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

    yum install intel-mkl-2018.2-046

    Note: If another version of the MKL library is already installed on your cluster, be sure to update the include and linking paths in the Makefile.

  3. Install Python and load the Matplotlib and Pandas modules.

  4. Run ‘make.’

  5. Update applicable fields and work through the instructions under “Specifying Optimization Settings and Inputting Training/Validation Voltage Clamp Protocols” under Basic Protocol 1 to update the file sampleData.csv.

    Note: The field with AWS_S3_path may be left unchanged for this protocol. All results will be saved locally on your cluster instead.

  6. Run ‘python import_and_run.py sampleData.csv.’

  7. To run a job, make job.sh executable, then run “./job.sh 4 3 1 test.” The arguments ‘4’, ‘3’, ‘1’, and ‘test’ refer to the number of states to be optimized, the ID of the model to optimize, the number of starts to run, and the optimization name, respectively.

    Note: All optimization results will be stored in a directory ‘StateN’, where N is the number of states optimized. The results directory will be named after the fourth argument.

    Note: Installing dos2unix and running ‘dos2unix job.sh’ will resolve the following common error: ./job.sh: /bin/bash^M: bad interpreter: No such file or directory

  8. To optimize other topologies, follow the background under ‘Customizing the Lists of Topologies to Optimize’ in the Supplementary Materials to understand how to specify the various topologies to optimize in the list of arguments to job.sh.

COMMENTARY:

Background Information:

Ion channel kinetic modeling has a rich history dating back to the 1950s with the seminal work of Hodgkin and Huxley on the squid giant axon (Hodgkin and Huxley 1952). While the Hodgkin-Huxley formalism of probabilistic gating variables is still used today, modelers have moved to Markov (state-dependent) models to more accurately capture channel dynamics, drawing on understanding gained during decades of subsequent ion channel experimentation (K. Mangold and Silva 2020). These Markov state models can represent more complex independent and dependent kinetic processes through their discrete state structure and the varying connections among the states (the topology) (Rudy and Silva 2006).

How does one identify the Markov model structure to use when creating an ion channel model? Practically, more than one topology will fit an experimental dataset, so how should a modeler choose a “correct” topology, or series of topologies, for that dataset? Human intuition about channel function has principally informed model topology, through trial and error after viewing fits to training voltage-clamp protocols and rate-parameter optimization (K. E. Mangold et al. 2021). However, this process relies on the “experience” of the modeler and never supposes that a “better” model topology exists. Very complex structures that intuitively account for our current mechanistic understanding of channel gating may appear to fit the training data quite well, but these complex models carry a greater risk of poor parameter identifiability (Clerx et al. 2019; Fink and Noble 2009; Whittaker et al. 2020), which can manifest in practice as unexpected variability in excitability at the cellular and tissue level (K. E. Mangold et al. 2021). Thus, with the increasingly large sets of experimental data for a channel modeler to recapitulate, it is becoming difficult to identify promising candidate structures of appropriate complexity for modeling an experimental dataset.

This protocol sets up a simulated annealing optimization routine over the input voltage-clamp protocols and experimental data. While the details of numerical optimization are outside the scope of this tutorial, in brief, simulated annealing is inspired by metallurgy, in which temperature governs the ability of a metal to change its molecular configuration. The optimization is more permissive of large changes in parameter space early in the run (i.e., at a “hot” temperature), but as the simulation “cools”, the solution is constrained around a local minimum. The reader is referred to Azizi and Zolfaghari (2004) and Lee and Lee (1996) for additional information.

While we have focused this routine on training to voltage-clamp protocols, the systematic review of unique topologies will be useful for building models of other kinetic biological data. The cost function for the new biological system would of course have to be updated, and likely the overfitting criterion as well, but the systematic analysis of unique Markov structures is widely applicable.

We anticipate that some datasets will contain voltage-clamp protocols outside the scope of the canonical protocols presented here, such as steady-state activation, steady-state inactivation, and current traces. Encoding very complex protocols is possible given the various voltage-step keywords presented here (e.g., NONE, PEAK, TRACE). However, there will be protocols with “steps” that are not encodable in this version of the program. We hope that this tutorial presents the logic of the program to the reader so that the necessary tailoring of the source code and cost functions may be made.
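As a purely conceptual illustration of describing a protocol as discrete voltage steps (this is not the sampleData.csv encoding, which is documented in the Supplementary Materials; the durations and potentials below are hypothetical):

```python
# Conceptual representation of a steady-state activation (GV) protocol:
# hold, step to a range of test potentials, and record peak current at each.
# This is NOT the sampleData.csv encoding; it only illustrates the idea of
# describing a protocol as a sequence of voltage steps.
holding_potential_mV = -80.0
test_potentials_mV = list(range(-60, 61, 10))   # -60 mV to +60 mV in 10 mV steps

protocol = []
for v_test in test_potentials_mV:
    protocol.append([
        ("hold", 500.0, holding_potential_mV),   # ms at the holding potential
        ("test", 200.0, v_test),                 # test step; peak current measured here
        ("hold", 500.0, holding_potential_mV),   # return to the holding potential
    ])

print("number of sweeps encoded:", len(protocol))
```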

We hope that this tutorial will encourage users with varying computational backgrounds to make use of the routine for their own channel kinetic datasets. Further, our hope is that the suggested structures will inspire new understandings of channel dynamics given the experimental dataset and encourage additional experimentation to validate such unique structural options.

Critical Parameters:

The most critical parameter for the routine is the cost function, which encompasses the weighting of the individual voltage-clamp protocols. Here, we calculate cost as the sum of squared differences between the model and experimental data points, normalized to the experimental data point, as in equation 6 of K. E. Mangold et al. (2021). We have found this normalization works well for fitting summary curves of electrophysiological data (i.e., steady-state activation and inactivation curves), but the plain sum of squared differences between model and experiment is a sufficient alternative. The weighting of the voltage-clamp protocols depends on the individual dataset and the goals of the modeling study. We encourage users to start with equal weighting of the protocols, but expect users to test different weightings to explore other possible solutions.
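A minimal sketch of this weighted, normalized sum-of-squares cost is shown below, assuming model and experimental values are already available as arrays. The placeholder data, the eps guard against division by zero, and the dictionary layout are illustrative assumptions; the exact form used by the program is equation 6 of K. E. Mangold et al. (2021).

```python
# Minimal sketch of a weighted, normalized sum-of-squares cost across
# protocols, in the spirit of the description above (see Eq. 6 of Mangold
# et al. 2021 for the exact form used by the program). Data are placeholders.
import numpy as np

def protocol_cost(model_vals, exp_vals, eps=1e-12):
    model_vals = np.asarray(model_vals, dtype=float)
    exp_vals = np.asarray(exp_vals, dtype=float)
    # Squared difference normalized to each experimental datapoint.
    return float(np.sum(((model_vals - exp_vals) / (exp_vals + eps)) ** 2))

def total_cost(protocols, weights=None):
    # protocols: dict name -> (model_vals, exp_vals); equal weights by default.
    if weights is None:
        weights = {name: 1.0 for name in protocols}
    return sum(weights[name] * protocol_cost(m, e)
               for name, (m, e) in protocols.items())

# Hypothetical example with two protocols.
protocols = {
    "WTgv":   ([0.1, 0.5, 0.9], [0.12, 0.48, 0.95]),
    "WTinac": ([1.0, 0.6, 0.2], [0.98, 0.55, 0.25]),
}
print("total cost:", total_cost(protocols))
```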

Troubleshooting:

A troubleshooting guide is provided in Table 2.

Understanding Results:

Once the algorithm is running, expect text files to be periodically saved to Amazon S3 based on the specified snapshot and display iterations. For all topologies specified with the indicated number of states, directories that store the progress files are generated automatically. There are two types of progress files. The first, named “ModelX.txt”, displays the chosen optimization settings; below the solver settings, the progress of the optimization (the current cost at the associated iteration) is displayed every ‘display’ and ‘snapshot’ iterations (Figure 21).

Figure 21. Sample optimization progress.

Figure 21.

“ModelX.txt” with the solver settings preceding the cost display every “display” iterations.

The second type of progress file appears after the specified ‘snapshot’ iterations: under the directory corresponding to the number of the unique start (i.e., “Time1”), there will be a file titled “ModelXiter_Y.txt”, where Y is a multiple of the inputted ‘snapshot.’ Figure 22 shows a sample model topology encoding and its associated rate parameters.

Figure 22. Example model rate parameters after optimization.

Figure 22.

This topology has 4 states and 3 edges (6 directed connections), with the second state serving as the rooted open state (green). The ic matrix prescribes the connectivity of the model, where −1 marks the origin state and 1 the destination state. For example, in the first column, the first edge starts at state 1 and connects to state 3; the second column prescribes the edge in the opposite direction. The matrices rs, rk, and args are part of the rates with guaranteed microscopic reversibility, as outlined in Menon, Spruston, and Kath (2009) and Teed and Silva (2016). These matrices may be pasted into MATLAB to simulate the model during the training and validation voltage-clamp protocols. Various ventricular and atrial cellular models already written in MATLAB may be readily modified to run with the specified channel Markov model.
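For readers who prefer Python to MATLAB, the following hedged sketch shows one way an ic-style connectivity matrix and a set of per-edge rates could be assembled into a transition-rate matrix and integrated for a fixed voltage step. The connectivity and rate values are placeholders, and the voltage-dependent rs/rk/args rate construction of Menon et al. and Teed et al. is not reproduced here.

```python
# Sketch: assemble a transition-rate matrix Q from an "ic"-style connectivity
# matrix (columns are directed edges; -1 marks the origin state, 1 the
# destination) and integrate dP/dt = Q @ P for a fixed voltage step.
# Connectivity and rate values are placeholders; the voltage-dependent
# rs/rk/args construction (Menon et al.; Teed et al.) is not reproduced.
import numpy as np
from scipy.integrate import solve_ivp

# 4 states, 6 directed edges (3 bidirectional connections), one column per edge.
ic = np.array([
    [-1,  1, -1,  1,  0,  0],
    [ 0,  0,  1, -1, -1,  1],
    [ 1, -1,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  1, -1],
])
rates = np.array([0.5, 0.2, 1.0, 0.3, 0.8, 0.4])  # placeholder per-edge rates (1/ms)

n_states = ic.shape[0]
Q = np.zeros((n_states, n_states))
for edge in range(ic.shape[1]):
    origin = int(np.where(ic[:, edge] == -1)[0][0])
    dest = int(np.where(ic[:, edge] == 1)[0][0])
    Q[dest, origin] += rates[edge]    # flux into the destination state
    Q[origin, origin] -= rates[edge]  # probability leaving the origin state

p0 = np.zeros(n_states)
p0[0] = 1.0  # start with all channels in state 1

sol = solve_ivp(lambda t, p: Q @ p, t_span=(0.0, 50.0), y0=p0, max_step=0.5)
open_state = 1  # second state is the open state in the Figure 22 example
print("open probability at end of step:", sol.y[open_state, -1])
```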

Below the model topology encodings and rate-parameter matrices are statements of the model’s current total cost, enumerated with the cost associated with each training protocol (Figure 23).

Figure 23. Model cost display.

Figure 23.

Below each model’s optimized rate parameters are the total cost of the model and the associated costs of each of the training protocols. Below the training costs, the total validation cost is displayed along with the individual validation costs. Individual validation costs may be associated with withheld datapoints, as in “WTgv” and “WTinac”, or with an entire validation protocol, as in “WTtrace40.”

Below the training and validation costs are the individual protocol datapoints and the model fits. If a certain percentage of data is withheld for validation, those datapoints appear below the heading “VALIDATION”, as displayed in Figure 24.

Figure 24. Sample model fits.

Figure 24.

Model fits for the WTgv protocol with the rate parameters as in Figure 23.

Time Considerations:

Creating an AWS account and completing the environment setup should take approximately two hours for a new user. If the routine is run with the sample protocols and dataset provided in sampleData.csv, the optimization (as illustrated in Figure 21) will take approximately half an hour. When inputting user-specified data and voltage-clamp protocols, the model optimization time will depend on the experimental dataset and the number of iterations/starts of the optimization.

Supplementary Material

supinfo

Table 2.

Troubleshooting Guide

Problem | Possible cause | Solution
Model cost not decreasing to yield high-fidelity voltage-clamp fits | (1) Voltage-clamp data are entered incorrectly in sampleData.csv; (2) the adaptive-temperature simulated annealing parameters need adjustment; (3) the model topologies lack the complexity to recapitulate the dataset. | (1) Run ‘plot_voltage_protcols.py’ to visualize the protocols entered in sampleData.csv (see the Supplementary Materials for more information); (2) adjust the adaptive temperature parameters as outlined in Azizi and Zolfaghari (2004); (3) run the program with more complex topologies.
Error messages | The program emits general error messages when input is improperly formatted or unrecognized. | Locally call “docker run -ti your_image_name” from your EC2 instance to display program output and any error messages before running the routine on AWS Batch.
Optimization does not consistently complete the number of specified iterations | The overfitting penalty may be too “strict” for your dataset. By default, the program terminates early if there are three consecutive increases in the ratio of generalization loss to progress. | If optimizations never finish the maximum iterations indicated in the solver settings, adjust the overfitting criterion. The reader should consult Prechelt (1998) and K. E. Mangold et al. (2021) for thorough discussions of tuning the generalization-loss and progress parameters.
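The early-stopping behavior described in the last row of Table 2 can be illustrated with a minimal Python sketch in the spirit of Prechelt (1998). The generalization-loss and progress formulas below follow that reference, the error histories are placeholders, and the program’s exact definitions and parameters may differ.

```python
# Sketch of Prechelt-style early stopping: stop after three consecutive
# increases in the ratio of generalization loss (GL) to training progress (P).
# Error histories are placeholders; the program's exact definitions may differ.
import numpy as np

def generalization_loss(val_errors):
    # GL(t) = 100 * (E_va(t) / E_opt(t) - 1), E_opt = best validation error so far.
    e_opt = np.minimum.accumulate(val_errors)
    return 100.0 * (np.asarray(val_errors) / e_opt - 1.0)

def training_progress(train_errors, strip=5):
    # P_k(t) = 1000 * (sum over last strip / (strip * min over last strip) - 1)
    train_errors = np.asarray(train_errors, dtype=float)
    p = np.full(len(train_errors), np.nan)
    for t in range(strip - 1, len(train_errors)):
        window = train_errors[t - strip + 1: t + 1]
        p[t] = 1000.0 * (window.sum() / (strip * window.min()) - 1.0)
    return p

def should_stop(val_errors, train_errors, strip=5, consecutive=3):
    pq = generalization_loss(val_errors) / training_progress(train_errors, strip)
    increases = 0
    for prev, curr in zip(pq[strip - 1:], pq[strip:]):
        increases = increases + 1 if curr > prev else 0
        if increases >= consecutive:
            return True
    return False

# Hypothetical histories: training keeps improving while validation drifts up.
train = [1.0, 0.8, 0.65, 0.55, 0.5, 0.47, 0.45, 0.44, 0.43, 0.42, 0.41]
val   = [1.1, 0.9, 0.80, 0.75, 0.74, 0.76, 0.80, 0.85, 0.90, 0.96, 1.02]
print("stop early:", should_stop(val, train))
```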

ACKNOWLEDGEMENTS:

This work was supported by an Amazon Web Services Grant awarded to JDM and JRS. This work was also supported by National Institutes of Health NHLBI grants R01HL136553 and T32-HL134635 (KM).

Footnotes

CONFLICT OF INTEREST STATEMENT:

The authors received a researching computing grant from AWS. AWS did not have a role in the protocol design or data collection.

DATA AVAILABILITY STATEMENT:

The data that support the protocol are openly available at https://github.com/silvalab/AdvIonChannelMMOptimizer.

LITERATURE CITED:

  1. Azizi, Nader, and Saeed Zolfaghari. 2004. “Adaptive Temperature Control for Simulated Annealing: A Comparative Study.” Computers & Operations Research 31(14): 2439–51.
  2. Clerx, M., K. A. Beattie, D. J. Gavaghan, and G. R. Mirams. 2019. “Four Ways to Fit an Ion Channel Model.” Biophysical Journal.
  3. Fink, Martin, and Denis Noble. 2009. “Markov Models for Ion Channels: Versatility versus Identifiability and Speed.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367(1896): 2161–79.
  4. Hodgkin, A. L., and A. F. Huxley. 1952. “A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve.” The Journal of Physiology 117(4): 500–544.
  5. Johnson, Eric K., et al. 2018. “Differential Expression and Remodeling of Transient Outward Potassium Currents in Human Left Ventricles.” Circulation: Arrhythmia and Electrophysiology 11(1): e005914.
  6. Lee, Soo-Young, and Kyung Geun Lee. 1996. “Synchronous and Asynchronous Parallel Simulated Annealing with Multiple Markov Chains.” IEEE Transactions on Parallel and Distributed Systems 7(10): 993–1008.
  7. Mangold, Kathryn E., et al. 2021. “Identification of Structures for Ion Channel Kinetic Models.” PLOS Computational Biology 17(8): e1008932.
  8. Mangold, Kathryn, and Jonathan R. Silva. 2020. “Modeling the Molecular Details of Ion Channels.” Modeling and Simulating Cardiac Electrical Activity: 2–19.
  9. Menon, Vilas, Nelson Spruston, and William L. Kath. 2009. “A State-Mutating Genetic Algorithm to Design Ion-Channel Models.” Proceedings of the National Academy of Sciences 106(39): 16829–34.
  10. Prechelt, Lutz. 1998. “Early Stopping - but When?” In Neural Networks: Tricks of the Trade, Springer, 55–69.
  11. Qu, Zhilin, Gang Hu, Alan Garfinkel, and James N. Weiss. 2014. “Nonlinear and Stochastic Dynamics in the Heart.” Physics Reports 543(2): 61–162.
  12. Rudy, Yoram, and Jonathan R. Silva. 2006. “Computational Biology in the Study of Cardiac Ion Channels and Cell Electrophysiology.” Quarterly Reviews of Biophysics 39(1): 57–116.
  13. Teed, Zachary R., and Jonathan R. Silva. 2016. “A Computationally Efficient Algorithm for Fitting Ion Channel Parameters.” MethodsX 3: 577–88.
  14. Whittaker, Dominic G., et al. 2020. “Calibration of Ionic and Cellular Cardiac Electrophysiology Models.” WIREs Systems Biology and Medicine n/a(n/a): e1482.
