Subject |
Biomedical Engineering and Computer Science |
Specific subject area |
Neonatal Intensive Care, Artificial Intelligence and Machine Learning Applications |
Type of data |
Video, audio, and medical information |
How data were acquired |
RGB camera (GoPro Hero), vital sign monitor (Philips MP-70), near-infrared spectroscopy (INVOS 5100C) |
Data format |
Raw |
Parameters for data collection |
Multimodal data including behavioral (video and audio), physiological (vital signs and cortical activity), contextual, and medical data. All data were scored by trained nurses using clinically validated pain scales. |
Description of data collection |
Data were collected from 58 neonates during procedural and postoperative observation in the NICU. |
Data source location |
Tampa General Hospital, Tampa, Florida, United States |
Data accessibility |
To advance research in clinical and automated pain assessment, this paper introduces a publicly available, well-annotated, multidimensional neonatal pain dataset. To access the dataset, interested researchers must send a request to the Principal Investigator (PI) of this project. The PI will return data-use and sharing agreement forms to be signed by an authorized person according to the provided instructions. After receiving the properly signed agreement, we will share the entire dataset via protected cloud storage. Further details can be found on the project website: https://rpal.cse.usf.edu/project_neonatal_pain/. |
Related research article |
Salekin, M.S., Zamzmi, G., Goldgof, D., Kasturi, R., Ho, T., Sun, Y., 2021. Multimodal spatio-temporal deep learning approach for neonatal postoperative pain assessment. Computers in Biology and Medicine 129, 104150. https://doi.org/10.1016/j.compbiomed.2020.104150 |