Journal of the American Medical Informatics Association (JAMIA). 2015 May 29;23(1):166–173. doi: 10.1093/jamia/ocv015

Smartphone-based diagnostic for preeclampsia: an mHealth solution for administering the Congo Red Dot (CRD) test in settings with limited resources

Stephan Michael Jonas 1,2, Thomas Martin Deserno 2, Catalin Sorin Buhimschi 3,4, Jennifer Makin 5, Michael Andrew Choma 1,6,7, Irina Alexandra Buhimschi 3,4,8
PMCID: PMC7814923  PMID: 26026158

Abstract

Objective Morbidity and mortality due to preeclampsia in settings with limited resources often result from delayed diagnosis. The Congo Red Dot (CRD) test, a simple modality to assess the presence of misfolded proteins in urine, shows promise as a diagnostic and prognostic tool for preeclampsia. We propose an innovative mobile health (mHealth) solution that enables the quantification of the CRD test as a batch laboratory test, with minimal cost and equipment.

Methods A smartphone application that guides the user through seven easy steps, and that can be used successfully by non-specialized personnel, was developed. After image acquisition, a robust analysis runs on a smartphone, quantifying the CRD test response without the need for an internet connection or additional hardware. In the first stage, the basic image processing algorithms and supporting test standardizations were developed using urine samples from 218 patients. In the second stage, the standardized procedure was evaluated on 328 urine specimens from 273 women. In the third stage, the application was tested for robustness using four different operators and 94 altered samples.

Results In the first stage, the image processing chain was set up with high correlation to manual analysis (z-test P < 0.001). In the second stage, a high agreement between manual and automated processing was calculated (Lin’s concordance coefficient ρc = 0.968). In the last stage, sources of error were identified and remedies were developed accordingly. Altered samples resulted in an acceptable concordance with the manual gold standard (Lin’s ρc = 0.914).

Conclusion Combining smartphone-based image analysis with molecular-specific disease features represents a cost-effective application of mHealth that has the potential to fill gaps in access to health care solutions that are critical to reducing adverse events in resource-poor settings.

Keywords: preeclampsia, mHealth, global health, Congo-red, low-resource, high-throughput

INTRODUCTION

Ubiquitous smartphone ownership has the potential to transform medical care by placing computing power, internet connectivity, and sophisticated sensors (eg, cameras, near-field communication) in the hands of both patients and practitioners. Just as cellphones have gained quick and extensive adoption in countries with rudimentary landline infrastructure, their ability to have a positive impact on medical practice could be greatest in countries and communities that currently have few personal computers and little widespread or reliable internet access. 1

In the past decade, the notion of mobile health (mHealth) has branched out from electronic health (eHealth) to broadly encompass the “use of mobile computing and communication technologies in health care and public health.” 2 Because smartphones are tools used by individuals, most mHealth applications developed thus far address health promotion, self-management, and communication. 3 Smartphones, however, come with a growing number of powerful embedded sensors that, despite their potential, have largely been neglected. 4 To that end, smartphones offer the possibility of analytical sensing when the instrumentation or devices typically needed to diagnose important diseases are scarce or not easily accessible.

So far, most mHealth studies use smartphones only as displays, 5 as readout tools for Lab-on-a-Chip microfluidics technology, 6 or as real-time feedback tools – for example, for electrocardiogram (ECG) signal processing, 7 as a sensor for pose estimation, 8 or to capture and transmit images. 9 Only a few approaches have made use of smartphones as autonomous diagnostic tools. One study used the cell phone’s camera for photoplethysmography, 10 and others used the accelerometer to collect data on tremor or gait characteristics. 11–13 Because of their untapped potential, it is important to develop novel approaches to medical diagnostics based around the native functionality of smartphones. When developing smartphone-based diagnostics, however, it is critical to not simply duplicate existing tests but, rather, to create new tests, using molecular characteristics of disease, that have the potential to exploit the constantly growing technological capabilities of smartphones.

Applications that can use the smartphone as a diagnostic tool without needing additional hardware for image acquisition and processing have been proposed for melanoma detection, so far without noticeable success. 14 In this paper, we propose a new approach to mobile diagnostics utilizing smartphones, without the need for additional sensory hardware, for the analysis of a molecular test for preeclampsia. Our smartphone application does not require an internet connection during use.

Preeclampsia is a pregnancy-related disease that continues to cause significant maternal and fetal morbidity and mortality in settings with limited medical resources. Traditionally, preeclampsia is defined as a clinical syndrome and diagnosed based on the symptoms of hypertension and proteinuria occurring in pregnancy after 20 weeks’ gestational age. Both symptoms are often non-specific and may occur in conditions other than preeclampsia. In developed countries, screening for preeclampsia is routine; developing countries, however, often lack the health care capacity and facilities for sophisticated preeclampsia testing. There is therefore a clear need for a new diagnostic testing paradigm specifically developed for resource-poor environments, which requires simplicity in both the diagnostic modality and its use.

A molecular test for preeclampsia, the Congo Red Dot (CRD) test, has recently been developed based on the ability of constituents in preeclamptic urine to bind the amyloidophilic dye Congo Red. 15 At the core of the test is the discovery that preeclamptic women eliminate misfolded proteins in their urine, a molecular feature that is proportional to disease severity. 15 The aim of this report was the development and testing of a standardized, easy-to-use testing routine that requires little specialized equipment and enables minimally trained personnel to diagnose preeclampsia in health care settings with limited resources. Our report includes a smartphone-based imaging and automated analytical tool for the CRD test that significantly shortens the processing time and provides an unbiased quantitative result. Although our work was motivated by improving preeclampsia care in resource-poor settings, our high-benefit, low-cost technology platform also has implications for molecular-specific disease testing in resource-rich settings.

METHODS AND RESULTS

Study Design and Specimens

As previously outlined, the CRD test has two parts. 15 The “wet part” of the test consists of the preparation of the urine-Congo Red mixture, spotting the mixture as dots on a nitrocellulose sheet (CRD sheet array), followed by acquisition and storage of two pictures (Pix1, captured before washing the sheet, and Pix2, captured after the hydrophobic wash). 15 The “dry part” of the test consists of processing the two images, followed by calculation of the CRD test result (percent Congo Red Retention [CRR]) for each dot individually and, for each subject, as the average of the duplicate dots on the array after subtraction of the CRR result of the Blank sample (BLK, which uses phosphate-buffered saline [PBS] instead of human urine). 15 This study was conducted in three stages, each designed to simplify, expedite, and improve both the “wet” and “dry” parts of the CRD test.

In Stage 1, we evaluated a preliminary version of our image processing software tool using stored images that had previously been processed using Adobe Photoshop (Adobe, San Jose, CA), in preparation for manual analysis using ImageJ software ( http://imagej.nih.gov/ij/ ). 15 The results of Stage 1 led us to develop a standardized template for consistent positioning of the sample dots during the “wet part” of the test as well as a mobile-phone-enabled image processing tool to aid in the optimization of the “dry part” of the test.

In Stage 2, we tested these improvements in real time on newly prepared standardized CRD arrays and analyzed the results for agreement, by comparing them with the manual protocol, and for test accuracy, by comparing them with a disease-relevant prognostic standard (medically indicated delivery for preeclampsia, or MIDPE), because preeclampsia is a progressive disease for which no acceptable gold standard is yet available. Similar to our prior studies, 15,16 we chose MIDPE (a preeclampsia-related near-miss event) as a reference rather than the clinical classification at enrollment, reasoning that: 1) an indication for mandated delivery is made by a clinical team, 2) it is the last management resort when all other strategies have failed, 3) its resulting outcome cannot be revoked, and, thus, 4) it is less subject to bias. The indications for MIDPE were consistent with the recommendations of the American Congress of Obstetricians and Gynecologists (ACOG) and the World Health Organization (WHO) for the management of preeclampsia/eclampsia. 17,18

In Stage 3, we analyzed the test results across four operators, including untrained personnel who received no instruction and had no prior knowledge of our system, to check for robustness and to improve error handling and system feedback. In addition, we further simplified the “wet part” of the protocol by systematically modifying and/or eliminating several steps, in order to achieve the maximum possible simplification without a loss of technical performance.

The urine specimens included in this study were all part of the samples analyzed in the study that reported the CRD principle. 15 During evaluation, the CRR calculated manually by a single expert (IAB) served as the technical gold standard. Statistical methods are summarized in the Supplementary Methods.

Evaluation of Algorithms for Dot Quantification

The Stage 1 dataset originated from previously acquired images (before and after wash, captured using a Nikon Coolpix 4500) that had been manually quantified as part of the initial study. This data set consisted of 18 arrays from a total of 218 subjects. Each array held duplicate spots from 12-15 subjects (Figure 1A). Because we were interested in the prominence of the red dots relative to that of the background sheet, we started by testing two algorithms that would theoretically enhance the dye color: the red channel pixel value divided by the green channel (ratio R/G, with R and G ranging from 0 to 255), to enhance redness information by reducing background color, and the green channel subtracted from the red channel (difference R-G), with the same reasoning. The green channel was selected over the blue channel because of the Bayer pattern, which makes digital camera sensors more sensitive to green than to blue, in order to match the heightened sensitivity of the human visual system to green. Lastly, we used the luminance conversion equation (see the Supplementary Methods) to retain the intensity of the colors while eliminating the color information itself (Figure 1B). The luminance (L) algorithm is a weighted average of the red, green, and blue color channels and is equal to the intensity of a pixel in a grayscale image; it resembles the method employed by the manual processing routine. Simulated calculations performed with a command-line interface of our application (the command-line version of our processing library was run on a ThinkPad T500 with a Linux Ubuntu operating system) determined that the R/G (Figure 1C) and L (Figure 1D) calculations returned CRR values that correlated significantly better with the manually derived CRR (z-test P < 0.001 for both comparisons) than R-G did (Figure 1E). In a comparative accuracy analysis (Figure 1F), there was no difference between the area under the receiver operating curve (AUC) of the manually derived CRR (AUC = 0.966; 95% CI: 0.932-0.985) and the CRR automatically calculated using the L conversion (0.962; 0.927-0.983; P = 0.579). There was a small, yet statistically significant, decrease in AUC when the CRR value was calculated as R/G (0.956; 0.920-0.979; P = 0.042) compared to manual integration. We attributed this difference to the subtle bathochromic shift (from red to purple) exhibited by select urine specimens, which impacted the R/G ratio significantly more than the L conversion. This, in addition to the generally shorter processing time needed for grayscale images, led us to choose the L-based algorithm for subsequent process development.
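For illustration, the three color-to-grayscale conversions compared here can be expressed in a few lines of code. The sketch below (Python/NumPy; not the authors' smartphone implementation) assumes an RGB image with channel values from 0 to 255 and uses the standard Rec. 601 luminance weights; the exact equation used in the study is given in the Supplementary Methods.

```python
import numpy as np

# Illustrative sketch of the three conversions compared in Stage 1.
def ratio_r_over_g(rgb):
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return r / np.maximum(g, 1.0)      # R/G, guarding against division by zero

def difference_r_minus_g(rgb):
    return rgb[..., 0].astype(float) - rgb[..., 1].astype(float)   # R-G

def luminance(rgb):
    # Weighted average of R, G, and B (standard Rec. 601 weights assumed here).
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return 0.299 * r + 0.587 * g + 0.114 * b
```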

Figure 1:

Comparison of automated CRR calculation algorithms during Stage 1 of the study. Layout of a representative Stage 1 test sheet photographed before (Pix1) and after wash (Pix2) and shown as (A) color and (B) grayscale/luminance image. The blank sample is phosphate-buffered saline (PBS) and completely disappears during the wash. U01-U12 represent duplicate dots of urine-Congo Red (CR) samples from different pregnant women, of which five ultimately required a medically indicated delivery for preeclampsia (MIDPE: boxed samples). Relationships of manually and automatically calculated Congo Red Retention (CRR) results for different color-to-grayscale conversion methods, as follows: (C) red channel divided by green channel (R/G); (D) luminance (L); and (E) green channel subtracted from red channel (R-G). The coefficient of correlation (r) and level of significance are shown for each graph. (F) Receiver operating characteristic (ROC) curves for the CRR of Stage 1 samples derived with the manual versus automatic image analysis and different color-to-grayscale conversion algorithms. The curves were plotted for their ability to discriminate between patients who required MIDPE (n = 59) and those who did not (n = 159).

Standardization of the CRD Array Size and Layout

For a fully automated routine, the need for automatic detection of the sheet, as well as of the positions of the dots on the acquired images, emerged during Stage 1. In addition, the dimensions and orientation of the sheet needed to be known. To achieve this, we standardized and revisited the layout of the CRD array as well as the modality in which Pix1 and Pix2 were acquired, as follows: 1) nitrocellulose sheets were cut to a standard size of 4.5-in wide by 6-in long, which is proportional to an iPhone’s screen size; 2) three of the four corners of the sheet were punched using a handheld craft paper punch; and 3) a sample positioning template (Figure 2) was printed on a sheet of plain paper, which was then placed inside a plastic sheet protector and under the nitrocellulose sheet. The true-to-size template is included for printout as part of the Supplementary Materials (as Supplementary Data). The template marks the sample positions for batch processing of up to 41 subjects (each specimen is spotted in duplicate in adjacent cells of two columns). The black center points in each cell are visible through the transparency and serve as a guide for sample placement to assure predictable dot positioning on the sheet. The sample dots corresponding to the blank (BLK or PBS) are placed in the upper left corner cell. Acquisition of both Pix1 and Pix2 is performed with the array sheet placed on a black surface, so that the punched holes act as position markers, without the need for ink (which might spread during the wash) or another mark that would increase the cost of the sheet. This standardized format of the CRD test array borrows elements from Quick Response (QR) Codes: the sheet has a known aspect ratio and three of its four corners are highlighted. The detection of the three markers at the left and upper corners allows for rotation, deskewing, and perfect superposition of each individual dot in Pix1 with its counterpart (or the position where the dye has been washed off) in Pix2. This last feature is achieved through an image processing sequence customized to run as an application on the same smartphone used to acquire Pix1 and Pix2 (an iPhone 4 device, in our case).

Figure 2:

Workflow of the CRD test and image processing routine. Schematic representation of the parts of the CRD test, which starts with the wet part (hydrophobic sheet wash and acquisition of the two images: Pix1 and Pix2), followed by the dry part, which has been programmed as an image processing chain (sheet detection and extraction, cell extraction, and dot detection and extraction) up to the calculation of the Congo Red Retention (CRR) result.

CRD Array Image Processing Sequence

The sequence chains seven image processing steps together, as follows: 1) acquisition of images as part of the wet part of the CRD test, 2) sheet detection, 3) sheet extraction, 4) cell extraction, 5) dot detection, 6) dot extraction, and 7) CRR calculation. The process workflow is schematically represented in Figure 2.

Image Acquisition

The images are acquired using the smartphone’s built-in camera. The resolution of the iPhone 4 camera is sufficient to capture each sample dot at a maximum of about 40 pixels in diameter, which corresponds to approximately 1,200 pixels per sample dot. For speed of processing and robustness, each image is converted into luminance grayscale for all of the following steps.

Sheet Detection

Because the exact location and deformation of the sheet are unknown, owing to variations in photographic angle, deskewing of the perspective projection is required first. To achieve this, we detect the four corners and then apply a simple interpolation between them to deskew the sheet into a rectangular image. To find the corners, the sheet itself, and then its edges, have to be located. We knew that 1) the two opposite sides of the sheet have the same length, 2) the corners have a 90-degree angle, and 3) the ratio of the neighboring sides is equal to the ratio of the standardized template (0.75). For automatic detection, the grayscale input image is binarized using the Otsu method. 19 Because the image background is black (the sheet is photographed on a black surface, which also shows through the punch holes) and the sheet is white, the Otsu threshold is calculated to separate the foreground (sheet) from the background. To further smooth the resulting binary image, all holes in the foreground are filled by applying mathematical morphology. Next, a gradient filter is applied to expose the sheet borders in the image. On this gradient image with enhanced borders, the corners are detected using the Hough line transform (a method of finding lines in an image). 20 We first calculate a rough Hough line transform to loosely find the four most prominent edges (the borders of the sheet). Next, the four intersections of these lines are extracted. Because these positions do not match the corners perfectly, a second, finer Hough line transform is performed separately on regions of interest around the previously found corners. In this second run, only the two main lines near each corner are extracted and intersected, which improves localization of the corner points while keeping memory usage to a minimum.
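The sequence can be sketched with OpenCV in Python as shown below; this is a desktop re-implementation for clarity, not the authors' iPhone code, and the thresholds, kernel sizes, and Hough parameters are assumptions chosen only to illustrate the steps (Otsu binarization, hole filling, gradient, coarse Hough line transform, corner intersection).

```python
import cv2
import numpy as np

def find_sheet_corners(gray):
    """Locate the four sheet corners in a grayscale photograph (illustrative)."""
    h, w = gray.shape
    # 1) Otsu binarization: white sheet (foreground) vs. black background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2) Morphological closing fills the punch holes inside the foreground.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
    filled = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # 3) A gradient filter exposes the sheet borders.
    edges = cv2.morphologyEx(filled, cv2.MORPH_GRADIENT,
                             cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    # 4) Coarse Hough line transform; keep the four most prominent, mutually
    #    distinct lines (OpenCV returns lines sorted by decreasing votes).
    lines = cv2.HoughLines(edges, 2, np.deg2rad(1), 150)
    if lines is None:
        return []
    borders = []
    for rho, theta in lines[:, 0, :]:
        if all(abs(theta - t) > 0.3 or abs(rho - r) > 50 for r, t in borders):
            borders.append((rho, theta))
        if len(borders) == 4:
            break
    # 5) Intersect pairs of border lines; only intersections inside the image
    #    are corners (opposite borders meet far outside, near a vanishing point).
    corners = []
    for i in range(len(borders)):
        for j in range(i + 1, len(borders)):
            p = intersect(borders[i], borders[j])
            if p is not None and 0 <= p[0] < w and 0 <= p[1] < h:
                corners.append(p)
    return corners   # refined by a second, local Hough pass in the full chain

def intersect(line1, line2):
    """Intersection of two lines in (rho, theta) form, or None if parallel."""
    (r1, t1), (r2, t2) = line1, line2
    a = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(a)) < 1e-6:
        return None
    x, y = np.linalg.solve(a, np.array([r1, r2]))
    return float(x), float(y)
```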

Sheet Extraction

From the four corner points, the positions of the longer and shorter edges are estimated and the sheet is perspectively transformed into a rectangular geometry using bilinear interpolation. 21 The geometric correction also reduces the number of pixels per dot to about 700 (downscaling). To account for possible rotated acquisition, the three position markers in the corners of the sheet are detected by calculating the average intensity in each corner and selecting the corner with the highest intensity, that is, the corner without a punch marker, as a reference point. The sheet is transposed accordingly, such that the markers are located on the upper and left corners. The normalized image now contains the sheet spanning the image’s four corners. This process is performed individually on Pix1 and Pix2.
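A corresponding sketch for this step, again as an assumed OpenCV re-implementation rather than the authors' code, is shown below; the target raster size and corner-patch size are arbitrary choices that only preserve the 0.75 aspect ratio of the template.

```python
import cv2
import numpy as np

TARGET_W, TARGET_H = 450, 600    # arbitrary raster preserving the 0.75 ratio

def extract_sheet(gray, corners):
    """Deskew the sheet into a rectangle and orient it by its punch markers."""
    # corners must be ordered: top-left, top-right, bottom-right, bottom-left.
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [TARGET_W - 1, 0],
                    [TARGET_W - 1, TARGET_H - 1], [0, TARGET_H - 1]],
                   dtype=np.float32)
    m = cv2.getPerspectiveTransform(src, dst)
    sheet = cv2.warpPerspective(gray, m, (TARGET_W, TARGET_H),
                                flags=cv2.INTER_LINEAR)   # bilinear interpolation
    # The unpunched corner is the brightest (no black background shows through).
    # If it is not at the bottom right, rotate 180 degrees so that the three
    # markers sit on the upper and left corners (a full implementation would
    # also handle 90-degree cases).
    s = 40
    means = [sheet[:s, :s].mean(), sheet[:s, -s:].mean(),
             sheet[-s:, -s:].mean(), sheet[-s:, :s].mean()]   # TL, TR, BR, BL
    if int(np.argmax(means)) != 2:
        sheet = cv2.rotate(sheet, cv2.ROTATE_180)
    return sheet
```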

Cell Extraction

The positions of the cells for each patient are extracted from the normalized image without further image processing, because the geometry of the array is well defined by the standardized sample application template. However, each cell contains two dots at as-yet-unknown positions. This process is performed individually on Pix1 and Pix2.
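Because the cell positions follow directly from the template geometry, this step reduces to index arithmetic, as in the minimal sketch below; the grid dimensions are parameters here rather than the exact layout of the published template.

```python
def extract_cells(sheet, n_rows, n_cols):
    """Cut the normalized sheet into its grid of sample cells (illustrative)."""
    h, w = sheet.shape
    cell_h, cell_w = h // n_rows, w // n_cols
    return {(r, c): sheet[r * cell_h:(r + 1) * cell_h,
                          c * cell_w:(c + 1) * cell_w]
            for r in range(n_rows) for c in range(n_cols)}
```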

Dot Detection

All possible sample dots are present in Pix1, but some might disappear during washing (negative testing samples). Thus, the complete dot detection can only be performed on Pix1. To determine the dot positions, a gradient filter is applied to each extracted cell on Pix1 to detect the dot edges. The radius of each dot is estimated. A Hough circle transform is then performed, and the two most prominent circular shapes in each cell are selected. 22 Because the relative position of each dot does not change during washing, the same position information from Pix1 is used for processing Pix2.
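A minimal sketch of this step using OpenCV's Hough circle transform is shown below; the blur, accumulator thresholds, and radius bounds are assumptions for illustration, not the authors' parameters.

```python
import cv2

def detect_dots(cell_pix1, expected_radius):
    """Find the two duplicate dots in one Pix1 cell (illustrative parameters)."""
    blurred = cv2.GaussianBlur(cell_pix1, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=int(1.5 * expected_radius),
                               param1=80, param2=20,
                               minRadius=int(0.7 * expected_radius),
                               maxRadius=int(1.3 * expected_radius))
    if circles is None:
        return []
    # Keep the two most prominent circles; their positions are reused for Pix2.
    return [(int(x), int(y), int(r)) for x, y, r in circles[0, :2]]
```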

Dot Extraction

Once the positions of the dots are known, the intensity of the corresponding pixels can be extracted and summed up. To account for white balance and mild illumination changes, a background subtraction is performed by subtracting the average luminance of the cell outside the dot from the average luminance within the dot (see Supplementary Materials Table 1 ).
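In code, the background-corrected dot intensity can be sketched as follows; this simplified version handles one dot per call and, for brevity, does not exclude the second duplicate dot from the background estimate.

```python
import numpy as np

def dot_intensity(cell, dot):
    """Mean luminance inside the dot minus mean luminance of the rest of the cell."""
    x, y, r = dot
    yy, xx = np.ogrid[:cell.shape[0], :cell.shape[1]]
    inside = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
    return float(cell[inside].mean() - cell[~inside].mean())
```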

CRR Calculation

Analogous to the manual formula, the test result (CRR) is calculated as the ratio of the average intensity of the dots on Pix1 to the average intensity of the dots on Pix2. The CRR value of the blank sample (the dots in the upper left cell position) is subtracted from all other calculated CRR values on the sheet.
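Expressed as code, this bookkeeping looks roughly as follows; the sketch follows the wording above, and the exact intensity convention and percentage scaling of the CRR are defined in the Supplementary Methods rather than here.

```python
def subject_crr(pix1_intensities, pix2_intensities, blank_crr):
    # Ratio of the average (background-corrected) dot intensity on Pix1 to that
    # on Pix2 for the duplicate dots of one subject, minus the CRR of the blank
    # (PBS) sample, as described in the text.
    ratio = (sum(pix1_intensities) / len(pix1_intensities)) / \
            (sum(pix2_intensities) / len(pix2_intensities))
    return ratio - blank_crr
```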

Validation and Equivalence/Non-Inferiority Testing

In Stage 2, images were acquired and processed with an iPhone 4 and an application running the above sequence. For repeatability, the iPhone camera was used for the acquisition of images, but the images were then transferred to a computer, on which processing was performed with an iPhone simulator. Eight CRD arrays containing 328 different urine specimens were analyzed as part of Stage 2. These arrays were prepared specifically for this study from aliquots maintained frozen at –80°C. The specimens originated from 273 different women (55 specimens were subsequent collections at a time later in pregnancy). All the specimens were consecutive with respect to specimen collection and storage. There was no overlap between these specimens and those analyzed as part of Stage 1. The prevalence of the outcome of interest (MIDPE) in the Stage 2 data set was 36% among specimens (118/328) and 40% among subjects (108/273).

Similar to Stage 1, there was a significant level of agreement between the manual and automated CRR measurements. Lin’s concordance coefficient (ρc) of 0.968 (95% CI: 0.961-0.974) was qualified as “substantial,” based on a Pearson precision coefficient of ρ = 0.973 and a bias-correction factor of Cb = 0.995 (Figure 3A). 23,24 The two one-sided tests (TOST) procedure determined that the smartphone-enabled CRR calculation was equivalent to the manual integration (Figure 3B). 25 This can be easily visualized through the overlapping 90% CIs of the CRRs calculated with the manual versus the automated procedure, irrespective of whether the groups were analyzed as a whole or separated by outcome. The margin of equivalence was 10% for the MIDPE group and 5% for the group without MIDPE and for the overall data set. An ROC analysis using the first specimen from each subject determined that there was no statistically significant difference in the AUC between the manual quantification of the CRR (0.911; 0.882-0.935) and the smartphone-enabled calculation (0.923; 0.986-0.945; P = 0.329) (Figure 3C).
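For readers unfamiliar with this agreement statistic, the decomposition of Lin's coefficient into Pearson's precision ρ and the bias-correction (accuracy) factor Cb can be sketched as follows; this is an illustrative computation, not the authors' statistics code.

```python
import numpy as np

def lins_concordance(x, y):
    """Return (rho_c, rho, C_b) for paired measurements x and y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances
    cov = ((x - mx) * (y - my)).mean()
    rho = cov / np.sqrt(vx * vy)                 # Pearson precision
    c_b = 2 * np.sqrt(vx * vy) / (vx + vy + (mx - my) ** 2)   # accuracy factor
    return rho * c_b, rho, c_b                   # rho_c = rho * C_b
```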

Figure 3:

Evaluation and equivalence testing of the smartphone-assisted CRR calculation in Stage 2 of the study. (A) Correlation analysis of manually and automatically calculated CRR using luminance conversion (L) for samples on standardized-format arrays (Stage 2 analysis). (B) Forest plot comparing manually and automatically calculated CRR in the entire group of samples and separated by clinical outcome (medically indicated delivery for preeclampsia: MIDPE); bars show 90% confidence intervals (90% CIs). (C) Receiver operating characteristic (ROC) curves of manually and automatically calculated CRR of all samples in the Stage 2 experiments, plotted for the ability to discriminate between patients who required MIDPE (n = 118) and those who did not (n = 210).

Processing Time

A screen-by-screen workflow of the iPhone application, illustrating the processing time, is shown in Figure 4 (pictures Pix1 and Pix2 have been previously acquired in the example shown). With our image processing tool, the time from the conclusion of the “wet part” of the CRD test array to the final result was reduced to approximately 2 minutes of processing on the smartphone.

Figure 4:

Screen-by-screen workflow of the CRD test array smartphone application. Pix1 and Pix2 can be acquired either using the gallery (saved images) or in real time, using the smartphone camera (Panels 1 and 3). Results of the sheet detection and extraction are shown on screen for operator verification purposes (Panels 2 and 4, original images as overlays). Extracted cells (blue rectangles) and urine dots (green circles) on Pix1 and Pix2 are displayed on the smartphone's screen for operator verification purposes (Panels 5 and 6). The automatically calculated CRR for each subject is displayed on the screen (Panel 7), with the option of sending the results via email, for sharing or archiving purposes (Panel 8).

Performance and Engineering Tolerance Analysis

To verify our algorithm and further improve the robustness of our imaging protocol in Stage 3 of our study, we acquired an additional data set of six standardized CRD arrays. The experiments were performed by an untrained person who was given no instruction on how to position the smartphone to acquire the images or how to avoid uneven illumination and shading (ie, the appropriate camera angle). This data set helped us evaluate possible sources of operator error. These handling issues are summarized in the Supplementary Material (Table 1), along with their impacts on the image processing chain and the remedies implemented in an updated version of the protocol. The most frequently observed handling error was excessive perspective distortion in the acquisition of Pix1 and Pix2. The majority of the issues were remedied by providing the user with instructions on how to reacquire the image so as to avoid each issue. The robustness of our imaging chain was further improved by setting a limit on the level of uneven illumination tolerated on Pix1 and Pix2. The user was prompted through the interface to retake the picture and move away from the light source when the shading exceeded the tolerance level (coefficient of variation >15%).
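The shading check can be sketched as a simple coefficient-of-variation test, as below; how the background region of the sheet is sampled here (pixels brighter than the median) is an assumption made only for illustration.

```python
import numpy as np

def too_unevenly_lit(sheet_gray, max_cv=0.15):
    """Flag an image whose sheet background varies by more than 15% (CV)."""
    background = sheet_gray[sheet_gray > np.median(sheet_gray)].astype(float)
    cv = background.std() / background.mean()
    return cv > max_cv
```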

Additional Optimizations of the Wet Part of the CRD Test

As part of Stage 3, we performed additional arrays on a consecutive set of 94 urine samples, comparing the previously validated “wet part” of the CRD test with two abbreviated versions: one omitting urine protein normalization and the other omitting both protein normalization and the 1-hour agitation with Congo Red. Omitting both urine normalization and agitation (samples mixed with Congo Red were placed immediately on the sheet) resulted in acceptable concordance (Lin’s ρc = 0.914; 0.873-0.942) with the original protocol. In multivariate linear regression, the degree of bias was determined solely by the CRR level and not by position on the sheet, operator proficiency, or urine protein concentration. Accuracy (Cb = 0.997) exceeded precision (Pearson’s ρ = 0.916), suggesting that although the numbers may vary slightly, omitting normalization will not significantly affect the disease classification. Other experiments were carried out to replace the methanol wash with alternatives that are easier to procure and dispose of. Through trial and error, we determined that pharmacy-grade isopropanol (90%) was more effective than methanol and shortened the washing time (measured from the start of washing until complete blank decolorization) to 7 minutes. Pharmacy-grade ethanol (70%) was not suitable as a methanol substitute: its denaturing agent (acetone, added in the United States to make it undrinkable) affected the pore size of the nitrocellulose sheet, resulting in an unacceptable loss of signal from positive samples.

DISCUSSION

In a recent study, Coskun et al. 26 described a smartphone-aided test for albuminuria in which a fluorescence reading device is attached to a smartphone. The smartphone camera then records the fluorescent image, which is used to calculate the albumin concentration. 26 Although this work highlights the importance of rapid testing for urine markers that are more specific than total proteinuria, this particular modality depends on the availability of additional electronic hardware. Our approach provides an inexpensive molecular test and an automated smartphone-based readout that can be performed as a batched laboratory test by modestly trained personnel in almost any environment, from an urban medical center to a lightly staffed field clinic. To accompany the new molecular test, we have created a data processing chain suitable for a smartphone’s processor and memory and have reduced the mHealth imaging system to the smartphone as a standalone device, without requiring internet connectivity. Thus, we have not only eliminated the need for a separate handheld imaging device or other hardware, but have also created a smartphone-based diagnostic tool that is independent of communication data rate, quality of service, and data transfer security. Recently, these issues have been emphasized as some of the main limitations of mHealth applications. 27 As applied to preeclampsia, our mHealth solution brings an objective element into the clinical work-up, which, especially in low-resource settings, relies heavily on the subjective interpretation of signs and symptoms by health care providers. Ease of use is enhanced because the test readout is a percentage that is proportional to disease severity. Hence, the proposed method is in line with cutting-edge technologies in mHealth.

Although our proposed mobile application was developed for the iPhone, almost all smartphones with a camera meet our processing application’s requirements. The iPhone 4 was chosen as the demonstration device because it is no longer a top-of-the-line device. Hence, our application could be used with older devices that have been donated, another factor of importance for third-world countries and other settings with limited resources, which may not have access to the latest or top-of-the-line smartphones.

Our approach was evaluated on over 300 specimens and displayed a high level of agreement with the gold standard of manual CRR quantification, which indicates technical equivalence. The automated readout requires only a fraction of the time needed for manual quantification of the same array, in keeping with the potentially disruptive nature of innovations using mHealth technologies. 28 Although our work focused on enabling rapid and reliable quantification for a new diagnostic modality for preeclampsia, the test will require additional targeted validation and, perhaps, additional design refinement before it can be deployed in a specific clinical setting. This future work will need to focus on improving specific maternal and/or fetal outcomes while optimizing the utilization of resources particular to the respective clinical setting (which may differ from the one used in this study).

According to the United Nations Children’s Fund (UNICEF), in the developing world, 80% of women receive antenatal care (ANC) from a skilled health provider at least once in the course of their pregnancy. 29 However, the quality and number of ANC visits remain suboptimal for effective detection of preeclampsia. Although ANC alone has proven ineffective at preventing preeclampsia, failure to measure blood pressure and proteinuria at each ANC visit represents a missed opportunity. 29 In fact, during ANC visits, more women seem to have their blood pressure measured than have their urine screened for proteins. Evidently, without assessment of proteinuria, proper screening, triage, and differential diagnosis of preeclampsia from the more benign pregnancy-related hypertensive conditions, such as chronic hypertension and gestational hypertension, cannot be achieved. Because of its simplicity and the low cost of the required materials, the CRD test has the potential to fill this gap for diagnosing preeclampsia in resource-poor settings. We do not suggest that the CRD test should replace the 24-hour urine protein collection as the “gold standard” for the assessment of proteinuria when such a test is practical. However, both of the currently used laboratory methods for the estimation of proteinuria are unable to provide information on the amount of misfolded proteins, which is the principle of the CRD test and may reflect a process more closely related to the pathophysiology of preeclampsia than total proteinuria. All of the above may explain why, in the previously published study, the CRD test was superior to the dipstick at predicting MIDPE and why some women with a 24-hour proteinuria level below the cut-off of 300 mg displayed a high CRR. 16

Tailoring the specific use of the smartphone-based CRD test is important, given that the diagnostic criteria for preeclampsia, the threshold for indicated delivery, and the incidence of adverse outcomes differ considerably between high- and low-resource clinical settings, because they are impacted by access to tertiary level health care facilities and to neonatal intensive care. 17,30,31


CONCLUSIONS

Our work represents the convergence of two important trends in medicine: mHealth and molecular medicine. Moreover, we demonstrate that molecular-based definitions of disease, when paired with targeted technology development, have the potential to streamline diagnosis and simplify clinical workflow. While improving prenatal care in resource-poor settings motivated the present work, we believe that our approach has implications for clinical diagnostics and health care delivery in resource-rich settings as well. Specifically, our approach has the potential to be a high-benefit, low-cost technology platform, suggesting that intensive research and technology development does not necessarily result in expensive implementations of health care solutions, an important consideration given ever-increasing concerns about technology as a driver of health care costs. 32,33

Funding

This work was supported by funds from Bill and Melinda Gates Foundation Round 5 Grand Challenges Explorations Phase I award entitled “Reducing preeclampsia morbidity through Congo red dot (CRD) test” (grant number OPP10250411, PI: Irina A. Buhimschi). MAC also acknowledges funding from Yale Medical School.

Competing Interests

A patent including parts of this work with the title “Methods and Compositions for Identifying Urine Congophilia” is pending. Authors involved in this article and the patent are SMJ, MAC, and IAB.

Contributors

SMJ: Designed, developed and implemented the image processing chain and applications, developed standardized template, refined the methods, analyzed the data, and drafted and revised the paper. TMD: Advised on and designed image processing and drafted and revised the paper. CSB: Advised on and designed wetlab part and methods development and drafted and revised the paper. JM: Acquired data in Stage 3 experiments, worked on methods refinement, and drafted the paper. MAC: Advised on and designed imaging protocol, developed standardized template, scientific oversight, drafted the paper. IAB: Scientific oversight, acquired data in Stages 1 and 2, methods development, developed standardized template, planned experiments, evaluated test results, and drafted and revised the paper. She is the guarantor.

REFERENCES

1. Lunde S. The mHealth Case in India. http://www.wipro.com/documents/the-mHealth-case-in-India.pdf
2. Free C, Phillips G, Felix L, et al. The effectiveness of M-health technologies for improving health and health services: a systematic review protocol. BMC Res Notes. 2010;3:250.
3. Fiordelli M, Diviani N, Schulz PJ. Mapping mHealth research: a decade of evolution. J Med Internet Res. 2013;15(5):e95.
4. Ozdalga E, Ozdalga A, Ahuja N. The smartphone in medicine: a review of current and potential use among physicians and students. J Med Internet Res. 2012;14(5):e128.
5. Hart A, Tallevi K, Wickland D, et al. A contact-free respiration monitor for smart bed and ambulatory monitoring applications. Conf Proc IEEE Eng Med Biol Soc. 2010;2010:927–930.
6. Ruano-Lopez JM, Agirregabiria M, Olabarria G, et al. The SmartBioPhone, a point of care vision under development through two European projects: OPTOLABCARD and LABONFOIL. Lab Chip. 2009;9(11):1495–1499.
7. Oresko JJ, Duschl H, Cheng AC. A wearable smartphone-based platform for real-time cardiovascular disease detection via electrocardiogram processing. IEEE Trans Inf Technol Biomed. 2010;14(3):734–740.
8. Boehret K. Device nags you to sit up straight. 2013. http://allthingsd.com/20130813/device-nags-you-to-sit-up-straight/. Accessed August 13, 2013.
9. Engel H, Huang JJ, Tsao CK, et al. Remote real-time monitoring of free flaps via smartphone photography and 3G wireless Internet: a prospective study evidencing diagnostic accuracy. Microsurgery. 2011;31(8):589–595.
10. Jonathan E, Leahy M. Investigating a smartphone imaging unit for photoplethysmography. Physiol Meas. 2010;31(11):N79–N83.
11. Joundi RA, Brittain JS, Jenkinson N, et al. Rapid tremor frequency assessment with the iPhone accelerometer. Parkinsonism Relat Disord. 2011;17(4):288–290.
12. Lemoyne R, Mastroianni T, Cozza M, et al. Implementation of an iPhone as a wireless accelerometer for quantifying gait characteristics. Conf Proc IEEE Eng Med Biol Soc. 2010;2010:3847–3851.
13. Lemoyne R, Mastroianni T, Cozza M, et al. Implementation of an iPhone for characterizing Parkinson's disease tremor through a wireless accelerometer application. Conf Proc IEEE Eng Med Biol Soc. 2010;2010:4954–4958.
14. Wolf JA, Moreau JF, Akilov O, et al. Diagnostic inaccuracy of smartphone applications for melanoma detection. JAMA Dermatol. 2013;149(4):422–426.
15. Buhimschi IA, Nayeri UA, Zhao G, et al. Protein misfolding, congophilia, oligomerization and defective amyloid processing in preeclampsia. Sci Transl Med. 2014;6(245):245ra92.
16. Buhimschi IA, Zhao G, Funai EF, et al. Proteomic profiling of urine identifies specific fragments of SERPINA1 and albumin as biomarkers of preeclampsia. Am J Obstet Gynecol. 2008;199(5):551.e1–e16.
17. ACOG practice bulletin. Diagnosis and management of preeclampsia and eclampsia. Number 33, January 2002. American College of Obstetricians and Gynecologists; 2002.
18. World Health Organization (WHO). WHO recommendations for prevention and treatment of pre-eclampsia and eclampsia. Geneva (Switzerland): WHO; 2011. 38 p.
19. Otsu N. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics. 1979;9(1):62–66.
20. Hough PVC. Machine analysis of bubble chamber pictures. Proc Int Conf High Energy Accelerators Instrument. 1959;C590914:554–558.
21. Lehmann TM, Gönner C, Spitzer K. Interpolation methods in medical image processing. IEEE Trans Med Imaging. 1999;18(11):1049–1075.
22. Fernandes LAF, Oliveira MM. Real-time line detection through an improved Hough transform voting scheme. Pattern Recogn. 2008;41(1):299–314.
23. Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45(1):255–268.
24. McBride GB. A proposal for strength-of-agreement criteria for Lin's Concordance Correlation Coefficient. NIWA Client Report 2005;HAM2005-062.
25. Walker E, Nowacki AS. Understanding equivalence and noninferiority testing. J Gen Intern Med. 2011;26(2):192–196.
26. Coskun AF, Nagi R, Sadeghi K, et al. Albumin testing in urine using a smart-phone. Lab Chip. 2013;13(21):4231–4238.
27. Adibi S. Mobile health (mHealth) biomedical imaging paradigm. Conf Proc IEEE Eng Med Biol Soc. 2013;2013:6453–6457.
28. Labrique A, Vasudevan L, Chang LW, et al. H_pe for mHealth: more “y” or “o” on the horizon? Int J Med Inform. 2013;82(5):467–469.
29. World Health Organization (WHO) & UNICEF. Antenatal care in developing countries. Promises, achievements and missed opportunities: an analysis of trends, levels and differentials. 2003.
30. Khan KS, Wojdyla D, Say L, et al. WHO analysis of causes of maternal death: a systematic review. Lancet. 2006;367(9516):1066–1074.
31. Guidotti RJ, Jobson D. Detecting Preeclampsia: A Practical Guide. World Health Organization; 2005. http://www.who.int/reproductive-health/publications/pre_eclampsia/detecting_pre_eclampsia.pdf. Accessed January 6, 2014.
32. Okunade AA, Murthy VN. Technology as a ‘major driver’ of health care costs: a cointegration analysis of the Newhouse conjecture. J Health Econ. 2002;21(1):147–159.
33. Hyde C, Thornton S. Does screening for pre-eclampsia make sense? BJOG. 2013;120(10):1168–1170.


Supplementary Materials

Supplementary Data

