Front. Behav. Neurosci. 2017 May 12; 11:88. doi: 10.3389/fnbeh.2017.00088

Kinoscope: An Open-Source Computer Program for Behavioral Pharmacologists

Nikolaos Kokras 1,2, Dimitrios Baltas 1, Foivos Theocharis 1, Christina Dalla 1,*
PMCID: PMC5427106  PMID: 28553211

Abstract

Behavioral analysis in preclinical neuropsychopharmacology relies on the accurate measurement of animal behavior. Several excellent solutions for computer-assisted behavioral analysis are available to specialized behavioral laboratories willing to invest significant resources. Herein, we present a straightforward, open-source software solution that aims to integrate rapidly and easily into an experimental workflow and to improve the training of staff members toward better and more reproducible manual scoring of behavioral experiments through the use of visual aids (maps). The program currently supports the Forced Swim Test, the Novel Object Recognition test and the Elevated Plus Maze test, but with minor modifications it can be used to score virtually any behavioral test. Additional modules, with predefined templates and scoring parameters, are continuously being added. Importantly, the prominent use of visual maps has been shown to improve, in a student-engaging manner, the training and auditing of scoring in rodent behavioral experiments.

Keywords: computer program, behavioral pharmacology, forced swim test, elevated plus maze, novel object recognition, scoring behavior

Introduction

Behavioral analysis in preclinical neuropsychopharmacology relies on the accurate measurement of animal behavior (Kokras and Dalla, 2014; Kokras et al., 2015). Appropriate operating procedures and intensive experimenter training may influence or determine behavioral performance (Chesler et al., 2002; Sousa et al., 2006). Advances in computer science have allowed the development of elaborate software that records animal behavior, often with a high degree of automation, taking advantage of intelligent algorithms and image-tracking technologies (Noldus, 1991; Noldus et al., 2000, 2001; Zimmerman et al., 2009). However, such commercially available solutions have a high purchasing cost. In addition, automated algorithms may provide better scoring than humans in some cases (Desland et al., 2014), but they may also provide less accurate and detailed analysis than humans in others, as in the forced swim test (distinguishing fine transitions between swimming, climbing and immobility behaviors) and novel object recognition (distinguishing active interest toward the object from mere proximity of the animal's head).

Several attempts have been made over the last 20 years to develop open-source or freely available computer programs for scoring animal behavior (Moraes et al., 1997; Ottoni, 2000; Shih and Mok, 2000; Patel et al., 2006; Poirrier et al., 2006; Aguiar et al., 2007; Blumstein and Daniel, 2007; Otero et al., 2010; Crispim Junior et al., 2012; de Chaumont et al., 2012; Telonis and Margarity, 2015; Friard et al., 2016). Some of these attempts resulted in programs that are outdated and probably no longer under active development, some focused on specific models that could not easily be modified for other settings, and some resulted in elaborate solutions that required a significant investment in human resources to develop, adapt and operate. Large-scale behavioral laboratories routinely invest in high-cost commercial solutions and are also willing to invest human resources in developing bespoke in-house approaches. However, it is not uncommon for a research team to need a straightforward computer aid to perform a widely used behavioral test for a specific project. Additionally, in cases where human scoring is required or desired, it is difficult to train students and staff members to score animal behavior accurately and reproducibly. This is of paramount importance, as inaccurate scoring by improperly trained personnel may contribute to non-reproducible results. Few computer programs prioritize an interface that facilitates the correct training of students and staff. In this context, we have developed a versatile and expandable software package that aims to provide both a ready and easy-to-use platform for scoring behavioral analyses and a platform through which training in behavioral pharmacology scoring can be facilitated and controlled.

Description of the system

The program is developed in Visual C# and is released under the GNU General Public License, version 3 (GPL v3) (GNU, 2007). It is compatible with personal computers running MS Windows XP© or later operating system versions (through MS Windows 10©). There are no other minimum system requirements; hence, the program can run on a variety of computers, even outdated ones. To run the program, however, the computer must have the MS .NET Framework version 4.0© library (Platt, 2002) installed, which is freely available from the manufacturer and widely used by many software packages for MS Windows© operating systems. Data generated by the program are stored in a relational database compatible with different engines (SQLite, MySQL, MS-SQL, etc.).

For the time being, the program contains pre-installed templates for three popular behavioral tests, namely the Forced Swim Test (FST; Slattery and Cryan, 2012), Novel Object Recognition (NOR; Akkerman et al., 2012) and the Elevated Plus Maze (EPM; Walf and Frye, 2007). These can be adapted to suit the specific needs of each researcher with regard to the number and duration of sessions/trials. This is particularly useful when, for example, the FST is performed as a single or dual session, or when the NOR is performed in multiple trials per day across several days. In addition, experiments are organized and archived as "projects" (protocols): each project can be named accordingly and operated independently, contains any number of custom-tailored behavioral tests, and contains a database of subjects (experimental animals) that will undergo one or more behavioral tests according to the protocol. Depending on the details that researchers wish to include in the database and in the exported files, descriptors of experimental groups (vehicle, treatment, stress, etc.) and of subjects (e.g., sex, age, origin, etc.) can be defined. Defining experimental groups and subjects yields a more organized database and a cleaner output, facilitating auditing, archiving and later retrieval. The assignment to groups can also be done at a later stage, even after scoring, for protocols that require subjects to be assigned to groups not randomly but on the basis of behavioral or other criteria.

The scoring procedure relies on the researcher indicating the observed behavior with the appropriate keystrokes, either while observing the live animal or a video recording of its behavior. The program comes with a predefined set of key mappings, which can be modified, as is usual in similar software solutions (Blumstein and Daniel, 2007; Friard et al., 2016). A key strength of the software is the assignment of a color code to each behavioral element, providing a visual aid for the researcher while scoring. During scoring, a progress bar indicates the elapsed and remaining time along with the observed behaviors in their designated color codes. This allows trainees to see how an experienced observer scores and assists them in learning. Furthermore, visualizations representing the organization (time sequence) of the observed behaviors can be exported as images. These are exported as separate files for each animal (in PNG format), apart from the main results output (which is exported as a spreadsheet-compatible document).
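To make the keystroke-driven, color-coded scoring concrete, the following C# sketch shows one way such an event logger could be structured. It is a minimal illustration under assumed names (BehaviorEvent, ScoringSession, the default FST key map), not the actual Kinoscope source code:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

// Hypothetical sketch of keystroke-driven, color-coded event scoring.
public sealed class BehaviorEvent
{
    public string Behavior;       // e.g., "Immobility"
    public double OnsetSeconds;   // time since trial start
    public double DurationSeconds;
}

public sealed class ScoringSession
{
    // Default key map, user-modifiable as described in the text
    // (behavior names and colors here follow the FST example in Figure 2).
    private readonly Dictionary<char, (string Name, Color Code)> _keyMap =
        new Dictionary<char, (string, Color)>
        {
            { 'i', ("Immobility", Color.Blue) },
            { 's', ("Swimming",   Color.Red) },
            { 'c', ("Climbing",   Color.Black) },
            { 'h', ("HeadShake",  Color.Green) },
        };

    private readonly List<BehaviorEvent> _events = new List<BehaviorEvent>();
    private BehaviorEvent _current;

    // Called on each keystroke: closes the previous behavior bout
    // and opens a new one at the current trial time.
    public void OnKeyPress(char key, double elapsedSeconds)
    {
        if (!_keyMap.TryGetValue(key, out var behavior)) return;
        if (_current != null)
            _current.DurationSeconds = elapsedSeconds - _current.OnsetSeconds;
        _current = new BehaviorEvent { Behavior = behavior.Name, OnsetSeconds = elapsedSeconds };
        _events.Add(_current);
    }

    // Called when the trial timer expires, closing the last open bout.
    public IReadOnlyList<BehaviorEvent> Finish(double trialSeconds)
    {
        if (_current != null)
            _current.DurationSeconds = trialSeconds - _current.OnsetSeconds;
        return _events;
    }
}
```

The resulting list of timestamped, color-coded bouts is exactly what both the exported visual maps and the summary measures can be derived from.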
This functionality of producing “visual maps” is partially implemented in other programs as well (de Chaumont et al., 2012) and has proven particularly useful in two ways: first, differences in the organization of behaviors between animals can be easily highlighted; second, a trainer and a trainee can visually compare their scoring and discuss possible discrepancies (Figure 1). In our experience with rater-independent, blind scoring (Kokras et al., 2014, 2015, 2017), the intra- and inter-rater agreement of observers previously trained with Kinoscope reaches correlations well beyond r = 0.9, thus significantly increasing the validity and reproducibility of animal behavior data (Figures 2, 3). These correlation indices are higher than those previously observed in our research team when scoring was performed without Kinoscope. Upon completion of manual scoring, and according to the type of behavioral test, certain predefined measures are automatically calculated beyond the primary measurements (e.g., latencies for the FST, % time in open arms for the EPM, discrimination and preference indices for the NOR). Results are finally exported, either for the whole trial/session or for selected time segments, in CSV (comma-separated values) format, which can then be imported into most spreadsheet and statistical software packages. A simplified workflow of the entire use of the system is summarized in Figure 4, with references to a series of Supplemental Figures (S1–S6; screenshots).
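The derived measures named above follow widely used conventions; the sketch below shows how they could be computed from the scored events. The exact conventions Kinoscope applies (e.g., the denominator used for % open-arm time) are assumptions here, and the BehaviorEvent type is reused from the earlier sketch:

```csharp
using System.Linq;
using System.Collections.Generic;

// Illustrative formulas for the derived measures mentioned in the text.
public static class DerivedMeasures
{
    // FST: immobility latency = onset of the first immobility bout (NaN if none).
    public static double ImmobilityLatency(IEnumerable<BehaviorEvent> events) =>
        events.Where(e => e.Behavior == "Immobility")
              .Select(e => e.OnsetSeconds)
              .DefaultIfEmpty(double.NaN)
              .Min();

    // EPM: % open-arm time, here expressed relative to open + closed arm time
    // (one common convention; total test time is another).
    public static double PercentOpenArms(double openSeconds, double closedSeconds) =>
        100.0 * openSeconds / (openSeconds + closedSeconds);

    // NOR: discrimination index in [-1, 1]; 0 means no preference.
    public static double DiscriminationIndex(double novelSeconds, double familiarSeconds) =>
        (novelSeconds - familiarSeconds) / (novelSeconds + familiarSeconds);

    // NOR: preference index as the fraction of exploration spent on the novel object.
    public static double PreferenceIndex(double novelSeconds, double familiarSeconds) =>
        novelSeconds / (novelSeconds + familiarSeconds);
}
```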

Figure 1.

Representative visual maps produced after scoring a male (top) and a female (bottom) rat during the Forced Swim Test. The total length of the visual bar corresponds to the test duration (5 min), and each scored behavior is depicted with a designated color at its time of appearance and for its duration. Note that both animals have almost identical total durations of immobility, swimming and climbing, whereas the organization of the observed behaviors in time differs markedly between the male and the female rat. Also note the slight differences between the experienced scorer and the trainee: the trainee performs the scoring satisfactorily if only the total scores are examined, but still commits errors that become apparent when the visual maps are inspected. By comparing the produced visual maps and discussing the animal's performance, training can be facilitated in an engaging way and reproducibility can be enhanced.
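For illustration, a bar like the ones in this figure can be drawn by mapping each event's onset and duration to pixel coordinates along a fixed-width strip. The sketch below uses the standard System.Drawing API and the BehaviorEvent type from the earlier sketch; it is an assumed approach, not Kinoscope's actual rendering code:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;

// Renders one animal's scored events as a color-coded time bar and saves it as PNG.
public static class VisualMap
{
    public static void SavePng(IReadOnlyList<BehaviorEvent> events,
                               IReadOnlyDictionary<string, Color> colors,
                               double trialSeconds, string path,
                               int width = 800, int height = 40)
    {
        using (var bmp = new Bitmap(width, height))
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.White);
            foreach (var e in events)
            {
                // Scale onset and duration from seconds to pixels.
                int x = (int)(width * e.OnsetSeconds / trialSeconds);
                int w = (int)(width * e.DurationSeconds / trialSeconds);
                using (var brush = new SolidBrush(colors[e.Behavior]))
                    g.FillRectangle(brush, x, 0, Math.Max(w, 1), height);
            }
            bmp.Save(path, ImageFormat.Png);
        }
    }
}
```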

Figure 2.

Validation data on Forced Swim Test (FST) scoring. Two experienced raters, after training with the Kinoscope program, blindly and independently scored male and female rats in the 5-min second session of the two-session FST. Each animal is represented in a separate row, and each column shows the scoring pattern of one blind, independent rater. Correlation indices were 0.85 for the number of recorded behavioral events, 0.98 for immobility behavior (blue) and 0.90 for immobility latency, 0.89 for swimming (red), 0.97 for climbing behavior (black), and 0.95 for head-shaking frequency (green). All correlations were highly significant (p < 0.001), as indicated by Pearson's two-tailed test. Full data are published in Kokras et al. (2015). Raw images from Kinoscope were ordered and collated using ImageJ/Fiji (Schindelin et al., 2012; Schneider et al., 2012).
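For reference, the Pearson's r reported here and in Figure 3 is the standard sample correlation between the two raters' per-animal scores (e.g., immobility durations). A minimal sketch follows; it is illustrative, not Kinoscope code, and the significance test is omitted:

```csharp
using System;
using System.Linq;

// Pearson's sample correlation coefficient between two equal-length score vectors.
public static class InterRater
{
    public static double Pearson(double[] x, double[] y)
    {
        int n = x.Length;
        double mx = x.Average(), my = y.Average();
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++)
        {
            cov += (x[i] - mx) * (y[i] - my);  // co-deviation from the means
            vx  += (x[i] - mx) * (x[i] - mx);  // variance terms
            vy  += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.Sqrt(vx * vy);
    }
}
```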

Figure 3.

Validation data on Novel Object Recognition (NOR) scoring from an as-yet-unpublished experiment. Two novice student raters, after training with the Kinoscope program, blindly and independently scored male and female rats in the 5-min second trial of a two-trial NOR. Each animal is represented in a separate row, and each column shows the scoring pattern of one blind, independent rater. Correlation indices were 0.90 for the number of recorded behavioral events, 0.95 and 0.91 for Object A (red) time and frequency, respectively, and 0.94 and 0.87 for Object B (black) time and frequency, respectively. Time in the general area of the open field is depicted in blue. All correlations were highly significant (p < 0.001), as indicated by Pearson's two-tailed test. Raw images from Kinoscope were ordered and collated using ImageJ/Fiji (Schindelin et al., 2012; Schneider et al., 2012).

Figure 4.

Representative workflow of using Kinoscope, with references to Supplemental Figures (S1–S6; screenshots) explaining each step of the procedure.

Conclusion

Accurate behavioral analysis remains of paramount importance in preclinical psychopharmacology (Sousa et al., 2006). Several excellent computer solutions are available for specialized behavioral laboratories wishing to invest in infrastructure or in customizing open-source algorithms that are already available. Kinoscope, a freely available open-source program for behavioral pharmacologists, as well as for other neuroscientists performing behavioral experiments, provides a basic but viable alternative. In our experience, the adoption of this software tool imposes no burden on the day-to-day operations of a research team. Moreover, experienced staff members using Kinoscope can streamline and audit the training of new members, primarily through the visual maps, thus improving the consistency and reproducibility of scoring by novice researchers.

Recently, several concerns have been raised regarding the validity of experimental data (Steckler, 2015; Bespalov et al., 2016). Many factors should be taken into account in improving the quality of experimental studies (Kilkenny et al., 2009; McNutt, 2014; Macleod et al., 2015), and perhaps one overlooked factor is the quality of manual scoring of behavioral experiments, which may result in poor inter-rater agreement and, inevitably, low reproducibility. In our experience (Kokras et al., 2014, 2015, 2017), using Kinoscope's visual maps as visual aids, either during real-time scoring or in later offline auditing, has greatly enhanced, in an efficient and engaging way, the training of new student members and the troubleshooting of poor reproducibility. Positive feedback has also been received from other departments that have used the beta version of this program, and several groups have already used the program for their research (Castelhano-Carlos et al., 2014; Papazoglou et al., 2015; Wiersielis et al., 2016; Lopes et al., 2017; Caetano et al., in press). The program will remain under active development, with more behavioral templates scheduled for inclusion soon (Y-maze, Light/Dark, Tail Suspension Test). Additionally, as data transparency and data sharing have been proposed as a remedy for poor data reproducibility (Steckler et al., 2015), the ability to export, import and exchange results and raw data produced by Kinoscope will be added. In the same context, the open-source code is available for inspection and possible modification (e.g., the addition of more behavioral templates by other research groups) at github.com. The authors also welcome suggestions for future improvements. The latest version of the program is available from the SourceForge repository at https://sourceforge.net/projects/kinoscope, together with a training video and a manual.

Author contributions

NK and CD conceptualized, designed and led the development of the program. DB and FT wrote the software code. All authors contributed to the writing of the manuscript and approved the final version.

Funding

This study was funded by an IKY Fellowship of Excellence for Postgraduate Studies – Siemens Program to NK. The costs of this open-access publication are supported by the ECNP Network “Preclinical Data Forum” (https://www.ecnp.eu/projects-initiatives/ECNP-networks/List-ECNP-Networks/Preclinical-Data-Forum.aspx). The ECNP Network “Preclinical Data Forum” neither promotes nor endorses the use of the software tool reported in this publication.

Conflict of interest statement

NK has received honoraria and travel support from Janssen-Cilag, Lundbeck, Sanofi-Aventis, Medochemie Generics and Elpen S.A. CD has received honoraria from Janssen-Cilag and travel support from Boehringer Ingelheim. The other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors wish to acknowledge the valuable contribution of our students, V. Kafetzopoulos and T. Mavridis, who beta-tested the program and provided feedback.

Supplementary material

The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fnbeh.2017.00088/full#supplementary-material

Supplemental Figure S1

Screenshot of the computer dialog window for defining test templates.

Supplemental Figure S2

Screenshot of the computer dialog window for defining desired keystroke combinations.

Supplemental Figure S3

Screenshot of the computer dialog window for creating a new project.

Supplemental Figure S4

Screenshot of the computer dialog window for the input of experimental subjects.

Supplemental Figure S5

Screenshot of the computer dialog window for scoring behavior.

Supplemental Figure S6

Screenshot of the computer dialog window for exporting results.

References

  1. Aguiar P., Mendonça L., Galhardo V. (2007). OpenControl: a free opensource software for video tracking and automated control of behavioral mazes. J. Neurosci. Methods, 166, 66–72. 10.1016/j.jneumeth.2007.06.020 [DOI] [PubMed] [Google Scholar]
  2. Akkerman S., Blokland A., Reneerkens O., van Goethem N. P., Bollen E., Gijselaers H. J., et al. (2012). Object recognition testing: methodological considerations on exploration and discrimination measures. Behav. Brain Res., 232, 335–347. 10.1016/j.bbr.2012.03.022 [DOI] [PubMed] [Google Scholar]
  3. Bespalov A., Steckler T., Altevogt B., Koustova E., Skolnick P., Deaver D., et al. (2016). Failed trials for central nervous system disorders do not necessarily invalidate preclinical models and drug targets. Nat. Rev. Drug Discov. 15:516. 10.1038/nrd.2016.88 [DOI] [PubMed] [Google Scholar]
  4. Blumstein D. T., Daniel J. C. (2007). Quantifying Behavior the JWatcher Way. Sunderland, MA: Sinauer Associates Incorporated. [Google Scholar]
  5. Caetano L., Pinheiro H., Patrício P., Mateus-Pinheiro A., Alves N., Coimbra B., et al. (in press). Adenosine A2A receptor regulation of microglia morphological remodeling-gender bias in physiology in a model of chronic anxiety. Mol. Psychiatry. 10.1038/mp.2016.173 [DOI] [PubMed] [Google Scholar]
  6. Castelhano-Carlos M., Costa P. S., Russig H., Sousa N. (2014). PhenoWorld: a new paradigm to screen rodent behavior. Transl. Psychiatry 4, e399. 10.1038/tp.2014.40 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Chesler E. J., Wilson S. G., Lariviere W. R., Rodriguez-Zas S. L., Mogil J. S. (2002). Influences of laboratory environment on behavior. Nat. Neurosci. 5, 1101–1102. 10.1038/nn1102-1101 [DOI] [PubMed] [Google Scholar]
  8. Crispim Junior C. F., Pederiva C. N., Bose R. C., Garcia V. A., Lino-de-Oliveira C., Marino-Neto J. (2012). ETHOWATCHER: validation of a tool for behavioral and video-tracking analysis in laboratory animals. Comput. Biol. Med., 42, 257–264. 10.1016/j.compbiomed.2011.12.002 [DOI] [PubMed] [Google Scholar]
  9. de Chaumont F., Coura R. D., Serreau P., Cressant A., Chabout J., Granon S., et al. (2012). Computerized video analysis of social interactions in mice. Nat. Methods, 9, 410–417. 10.1038/nmeth.1924 [DOI] [PubMed] [Google Scholar]
  10. Desland F. A., Afzal A., Warraich Z., Mocco J. (2014). Manual versus automated rodent behavioral assessment: comparing efficacy and ease of bederson and garcia neurological deficit scores to an open field video-tracking system. J. Cent. Nerv. Syst. Dis. 6, 7–14. 10.4137/JCNSD.S13194 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Friard O., Gamba M., Fitzjohn R. (2016). BORIS: a free, versatile open-source event-logging software for video/audio coding and live observations. Methods Ecol. Evol. 7, 1325–1330. 10.1111/2041-210X.12584 [DOI] [Google Scholar]
  12. GNU (2007). General Public License, version 3. Available online at: http://www.gnu.org/copyleft/gpl.html
  13. Kilkenny C., Parsons N., Kadyszewski E., Festing M. F., Cuthill I. C., Fry D., et al. (2009). Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 4:e7824. 10.1371/journal.pone.0007824 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Kokras N., Dalla C. (2014). Sex differences in animal models of psychiatric disorders. Br. J. Pharmacol. 171, 4595–4619. 10.1111/bph.12710 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Kokras N., Antoniou K., Mikail H. G., Kafetzopoulos V., Papadopoulou-Daifoti Z., Dalla C. (2015). Forced swim test: What about females? Neuropharmacology 99, 408–421. 10.1016/j.neuropharm.2015.03.016 [DOI] [PubMed] [Google Scholar]
  16. Kokras N., Pastromas N., Porto T. H., Kafetzopoulos V., Mavridis T., Dalla C. (2014). Acute but not sustained aromatase inhibition displays antidepressant properties. Int. J. Neuropsychopharmacol. 17, 1307–1313. 10.1017/S1461145714000212 [DOI] [PubMed] [Google Scholar]
  17. Kokras N., Polissidis A., Antoniou K., Dalla C. (2017). Head shaking in the forced swim test: a robust but unexplored sex difference. Pharmacol. Biochem. Behav. 152, 90–96. 10.1016/j.pbb.2016.05.007 [DOI] [PubMed] [Google Scholar]
  18. Lopes S., Teplytska L., Vaz-Silva J., Dioli C., Trindade R., Morais M., et al. (2017). Tau deletion prevents stress-induced dendritic atrophy in prefrontal cortex: role of synaptic Mitochondria. Cereb. Cortex 27, 2580–2591. 10.1093/cercor/bhw057 [DOI] [PubMed] [Google Scholar]
  19. Macleod M. R., McLean A. L., Kyriakopoulou A., Serghiou S., de Wilde A., Sherratt N., et al. (2015). Risk of bias in reports of in vivo research: a focus for improvement. PLoS Biol. 13:e1002273 10.1371/journal.pbio.1002273 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. McNutt M. (2014). Journals unite for reproducibility. Science 346, 679. 10.1126/science.aaa1724 [DOI] [PubMed] [Google Scholar]
  21. Moraes M. F., Ferrarezi C., Mont'Alverne F. J., Garcia-Cairasco N. (1997). Low-cost automatic activity data recording system. Braz. J. Med. Biol. Res. 30, 1009–1016. [DOI] [PubMed] [Google Scholar]
  22. Noldus L. P. (1991). The observer: a software system for collection and analysis of observational data. Behav. Res. Methods Instrum. Comput. 23, 415–429. [Google Scholar]
  23. Noldus L. P. J. J., Spink A. J., Tegelenbosch R. A. J. (2001). EthoVision: a versatile video tracking system for automation of behavioral experiments. Behav. Res. Methods Instrum. Comput. 33, 398–414. 10.3758/BF03195394 [DOI] [PubMed] [Google Scholar]
  24. Noldus L. P., Trienes R. J., Hendriksen A. H., Jansen H., Jansen R. G. (2000). The Observer Video-Pro: new software for the collection, management, and presentation of time-structured data from videotapes and digital media files. Behav. Res. Methods Instrum. Comput. 32, 197–206. 10.3758/BF03200802 [DOI] [PubMed] [Google Scholar]
  25. Otero L., Zurita M., Aguayo C., Bonilla C., Rodriguez A., Vaquero J. (2010). Video-Tracking-Box linked to Smart software as a tool for evaluation of locomotor activity and orientation in brain-injured rats. J. Neurosci. Methods 188, 53–57. 10.1016/j.jneumeth.2010.01.036 [DOI] [PubMed] [Google Scholar]
  26. Ottoni E. B. (2000). EthoLog 2.2: a tool for the transcription and timing of behavior observation sessions. Behav. Res. Methods Instrum. Comput. 32, 446–449. 10.3758/BF03200814 [DOI] [PubMed] [Google Scholar]
  27. Papazoglou K., Jean A., Gertler A., Taouis M., Vacher C.-M. (2015). Hippocampal GSK3β as a molecular link between obesity and depression. Mol. Neurobiol. 52, 363–374. 10.1007/s12035-014-8863-x [DOI] [PubMed] [Google Scholar]
  28. Patel P. D., Seasholtz A. F., Patel P. D. (2006). Computer-assisted scoring of the elevated plus maze. BioTechniques 41, 700, 702, 704. 10.2144/000112318 [DOI] [PubMed] [Google Scholar]
  29. Platt D. S. (2002). Introducing Microsoft. NET. Redmond, WA: Microsoft Press. [Google Scholar]
  30. Poirrier J. E., Poirrier L., Leprince P., Maquet P. (2006). Gemvid, an open source, modular, automated activity recording system for rats using digital video. J. Circadian Rhythms 4:10. 10.1186/1740-3391-4-10 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Schindelin J., Arganda-Carreras I., Frise E., Kaynig V., Longair M., Pietzsch T., et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682. 10.1038/nmeth.2019 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Schneider C. A., Rasband W. S., Eliceiri K. W. (2012). NIH Image to ImageJ: 25 years of image analysis. Nat. Methods 9, 671–675. 10.1038/nmeth.2089 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Slattery D. A., Cryan J. F. (2012). Using the rat forced swim test to assess antidepressant-like activity in rodents. Nat. Protoc. 7, 1009–1014. 10.1038/nprot.2012.044 [DOI] [PubMed] [Google Scholar]
  34. Sousa N., Almeida O. F. X., Wotjak C. T. (2006). A hitchhiker's guide to behavioral analysis in laboratory rodents. Genes Brain Behav. 5, 5–24. 10.1111/j.1601-183X.2006.00228.x [DOI] [PubMed] [Google Scholar]
  35. Steckler T. (2015). Editorial: preclinical data reproducibility for R&D-the challenge for neuroscience. Springerplus 4:1. 10.1186/2193-1801-4-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Steckler T., Brose K., Haas M., Kas M. J., Koustova E., Bespalov A., et al. (2015). The preclinical data forum network: a new ECNP initiative to improve data quality and robustness for (preclinical) neuroscience. Eur. Neuropsychopharmacol, 25, 1803–1807. 10.1016/j.euroneuro.2015.05.011 [DOI] [PubMed] [Google Scholar]
  37. Shih H.-T., Mok H.-K. (2000). ETHOM: event-recording computer software for the study of animal behavior. Acta Zool. Taiwanica 11, 47–61. 10.6576/AZT.2000.11.(1).4 [DOI] [Google Scholar]
  38. Telonis G., Margarity M. (2015). Phobos: a novel software for recording rodents' behavior during the thigmotaxis and the elevated plus-maze test. Neurosci. Lett. 599, 81–85. 10.1016/j.neulet.2015.05.045 [DOI] [PubMed] [Google Scholar]
  39. Walf A., Frye C. A. (2007). The use of the elevated plus maze as an assay of anxiety-related behavior in rodents. Nat. Protoc. 2, 322–328. 10.1038/nprot.2007.44 [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Wiersielis K. R., Wicks B., Simko H., Cohen S. R., Khantsis S., Baksh N., et al. (2016). Sex differences in corticotropin releasing factor-evoked behavior and activated networks. Psychoneuroendocrinology 73, 204–216. 10.1016/j.psyneuen.2016.07.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Zimmerman P. H., Bolhuis J. E., Willemsen A., Meyer E. S., Noldus L. P. (2009). The observer XT: a tool for the integration and synchronization of multimodal signals. Behav. Res. Methods 41, 731–735. 10.3758/BRM.41.3.731 [DOI] [PubMed] [Google Scholar]
