Journal of Medical Signals and Sensors. 2013 Oct-Dec;3(4):209–215.

Assessment of Hypernasality for Children with Cleft Palate Based on Cepstrum Analysis

Ehsan Akafi 1, Mansour Vali 1, Negin Moradi 2, Kowsar Baghban 3
PMCID: PMC3967423  PMID: 24696798

Abstract

Hypernasality is a frequently occurring resonance disorder in children with cleft palate. In general, an operation is necessary to reduce hypernasality, so an assessment of hypernasality is imperative both to quantify the effect of the surgery and to design the speech therapy sessions that are crucial after surgery. In this paper, a new quantitative method is proposed to estimate hypernasality. The proposed method exploits the fact that an autoregressive (AR) model of the vocal tract system of a patient with hypernasal speech is not accurate, because zeros appear in the frequency response of the vocal tract system. Therefore, in our method hypernasality is estimated by a quantity computed from the distance between the sequences of cepstrum coefficients extracted from an AR model and from an autoregressive moving average (ARMA) model. K-means clustering and the Bayes theorem were used to classify the utterances of subjects by means of the proposed index. We achieved accuracies of up to 81.12% on utterances and 97.14% on subjects. Since the proposed method needs only computer processing of speech data, it provides a simpler evaluation of hypernasality than other clinical methods.

Keywords: Cepstrum, cleft palate, hypernasality, speech processing, speech therapy

INTRODUCTION

Hypernasality frequently occurs in children with cleft palate as excessive nasal resonance perceived during speech, because the oral cavity is not properly separated from the nasal cavity. In these cases, in addition to surgical interventions such as palatoplasty, patients should receive speech therapy. Therefore, assessment of nasality is necessary to evaluate the efficacy of the operation and to help the therapist manage the speech therapy sessions. Approaches for the assessment of hypernasality are classified into two categories: invasive and non-invasive techniques. The invasive techniques assess velopharyngeal function using invasive instruments, such as nasendoscopy and videofluoroscopy, in the clinical environment. Non-invasive techniques include clinical assessment and digital signal processing-based techniques. These approaches are described briefly below.

Invasive Techniques

Multi-view videofluoroscopy allows experts to observe the vocal tract structures during connected speech from several spatial planes.[1] Flexible fiber-optic nasendoscopy allows direct observation of velopharyngeal movements during connected speech. These methods cannot provide quantitative results and require expensive equipment.

Non-invasive Clinical Assessments

In many types of clinical equipment, such as the nasometer, pressure, vibration, and nasal airflow are used as quantities for assessing hypernasal speech. The nasometer uses two separate microphones: the first is placed in front of the mouth and the other in front of the nostrils. The microphones record the oral and nasal sound pressures, and the nasometer's index, called nasalance, is defined as the ratio of the nasal sound pressure to the total sound pressure measured at the nostrils and mouth. The device is widely used in the clinical field, but it is expensive and uncomfortable for children to use. Moreover, Horii and Lang introduced the Horii oral nasal coupling (HONC) index, which measures nasal coupling.[1] This index was derived from signals measured by an accelerometer attached to the outside of the nares and by a microphone placed in front of the mouth. To use these kinds of devices, subjects must attend clinics, which makes it hard to monitor the therapy process; moreover, the discomfort of using these devices may make the speech of children unnatural and the assessment results unreliable.
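As an illustration of the nasalance definition above, the following minimal Python sketch estimates nasalance from two simultaneously recorded frames. The function and signal names are our own hypothetical choices; a real nasometer operates on calibrated, band-limited pressure signals.

```python
import numpy as np

def nasalance(nasal: np.ndarray, oral: np.ndarray) -> float:
    """Nasalance: nasal sound pressure over total (nasal + oral) pressure.

    `nasal` and `oral` are hypothetical, simultaneously recorded frames
    from the nasal and oral microphones of a nasometer-like setup.
    """
    nasal_rms = np.sqrt(np.mean(nasal ** 2))  # RMS pressure, nasal channel
    oral_rms = np.sqrt(np.mean(oral ** 2))    # RMS pressure, oral channel
    return nasal_rms / (nasal_rms + oral_rms)
```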

Perceptual Judgment

Perceptual judgments are performed by having experienced or well-trained listeners score the quality of speech. Since hypernasal speech occurs in conjunction with abnormalities in pitch, loudness, and voice quality, and these coexisting features affect the perception of nasality, judgment results vary among listeners.[2] To overcome the problems of invasive methods and non-invasive clinical methods, and to avoid the disagreement of perceptual judgments, signal processing techniques have been proposed.

Signal Processing Techniques

In previous studies, researchers tried to detect hypernasality by analyzing the speech of subjects with cleft lip or palate, synthesized hypernasal speech, or nasalized vowels of normal speech.[2] In signal processing-based techniques for hypernasality detection, the assessment is usually carried out by finding the deviation of the spectrum of hypernasal speech from that of normal speech.[2,3,4,5] Researchers observed that nasalization increases the bandwidth and intensity of the first formant and also introduces nasal formants and antiformants.[3] They applied to speech samples both a low-pass filter with a cut-off frequency between the first and second formants and a band-pass filter passing only the first formant, and compared the outputs; they found a distinctive difference for nasalized vowels, whereas normal vowels did not show any remarkable difference. Another proposed method estimated hypernasality by comparing the distance between sequences of cepstrum coefficients extracted from low-order and high-order linear predictive (LP) models.[4] This work used the fact that an LP model of typical order for the human vocal tract system is not accurate when the vocal tract system has zeros in its frequency response. The approach was assessed by calculating the Pearson correlation coefficient between nasalance scores and the obtained distances. Lee et al. proposed a new index for detecting hypernasality called the voice low tone to high tone ratio (VLHR),[5] defined as the power of the low-frequency part of the voice spectrum divided by the power of the high-frequency part. They assessed the proposed index by calculating the correlation of nasalance scores with VLHR values on speech samples of subjects. Vijayalakshmi et al. introduced a new quantity based on acoustic analysis of nasalized vowels, which showed the existence of a new resonance in the low-frequency region (around 250 Hz) of the speech spectrum.[2] This fact was also used in another work by these authors, where an LP-based pole modification technique was introduced.[6] They used a higher-order LP spectrum to select and weaken the pole corresponding to the strongest peak in the low-frequency region and then resynthesized a new signal. The maximum of the cross-correlation between the original and resynthesized speech signals was taken as a measure for the detection of hypernasality.
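As a sketch of how a VLHR-style index could be computed, the following Python fragment divides the spectral power below a cutoff by the power above it. The fixed 600 Hz cutoff is an illustrative assumption, not the cutoff definition used by Lee et al.

```python
import numpy as np

def vlhr_db(frame: np.ndarray, fs: float, cutoff_hz: float = 600.0) -> float:
    """Voice low tone to high tone ratio (VLHR) of one voiced frame, in dB.

    The cutoff frequency here is an illustrative assumption; Lee et al.
    derive their cutoff from the voice spectrum itself.
    """
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    low = power[freqs < cutoff_hz].sum()    # low-frequency spectral power
    high = power[freqs >= cutoff_hz].sum()  # high-frequency spectral power
    return 10.0 * np.log10(low / high)
```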

Other works focused on extracting well-known speech features and employing different classification methods. Castellanos et al., Delgado-Trejos et al., and Maier et al. used sets of features including pitch, jitter, tone perturbation coefficient, harmonic-to-noise ratio, energy, zero crossings, linear predictive coefficients (LPC), mel-frequency cepstral coefficients, and wavelet transforms, and employed the Bayesian classifier, Gaussian mixture models, and support vector machines (SVM).[7,8,9] Orozco-Arroyave et al. and Arias-Londoño et al. concentrated on features related to the non-linear dynamics of speech, such as the correlation dimension, the largest Lyapunov exponent, and the Hurst exponent.[10,11,12]

In this paper, we use the fact that the vocal tract system of a patient with cleft palate has additional zeros in its frequency response. Since an autoregressive (AR) model of the frequency response of the vocal tract system of these patients is not accurate, our method estimates hypernasality by computing the distance between the sequence of cepstrum coefficients of an AR model and that of an autoregressive moving average (ARMA) model.

MATERIALS AND METHODS

During the pronunciation of a vowel, the excitation signal from the vocal cords is assumed to be an ideal impulse train. Under this assumption, the transfer function of the vocal tract system, which extends from the glottis to the lips and acts like a filter, is considered an all-pole system. Thus, AR models have been widely used in speech signal analysis. The AR coefficients a_k of an AR model of order M are defined by

$$s(n) = \sum_{k=1}^{M} a_k\, s(n-k) + G\, u(n) \qquad (1)$$

where s(n) is the speech signal and u(n) is the excitation signal produced by the glottis. G is the gain of the excitation signal and also the root mean square value of the residual error between the predicted and original speech signals.[4] Figure 1 represents a simplified model of the human vocal tract, assumed to be composed of several lossless acoustic tubes.[13] Nasality depends on how much and when the velum, which separates the nasal and oral cavities, is open during pronunciation. Thus, the pronunciation of a speaker with cleft lip or palate, or of a subject with a defective velopharyngeal mechanism, is hypernasal, either because the velum cannot separate the nasal and oral cavities appropriately or because of poor velopharyngeal timing. As mentioned in previous works, hypernasality is characterized by: (1) amplitude reduction of the first formant, (2) presence of zeros in the spectrum due to the coupling of the nasal and oral cavities, (3) presence of reinforced harmonics (nasal formants) resulting from sound resonance in the nasal cavity, and (4) shifts of formants.[4] Characteristic (2) was utilized to estimate hypernasality in this study. To model the vocal tract system of a normal speaker, an AR model of order 8-10 is typically used.[14] Since a hypernasal speech signal cannot be modeled accurately by an AR model, we can expect that an ARMA model with a typical number of poles and an appropriate number of zeros gives a more accurate representation of the hypernasal speech. Based on this fact, we propose an algorithm to estimate the hypernasality of speech using the AR and ARMA coefficients: in the case of hypernasal speech, there will be a significant difference between the cepstrum coefficients obtained from the AR model and those obtained from the ARMA model.
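For illustration, the following Python sketch fits the all-pole model of Eq. 1 to a windowed frame by the autocorrelation (Levinson-Durbin) method. This is a common substitute shown under our own assumptions; in this paper the coefficients are actually obtained with the prediction-error method described below. Note the sign convention: the returned polynomial is A(z) = 1 + a_1 z^{-1} + ..., so the predictor coefficients of Eq. 1 are the negatives of a_1, ..., a_M.

```python
import numpy as np

def levinson_durbin(r: np.ndarray, order: int) -> np.ndarray:
    """Levinson-Durbin recursion: returns [1, a_1, ..., a_order] of A(z)."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                    # prediction-error power
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                            # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a

def ar_model(frame: np.ndarray, order: int = 10) -> np.ndarray:
    """AR model of one windowed speech frame (autocorrelation method, Eq. 1)."""
    r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                  for k in range(order + 1)])     # biased autocorrelation
    return levinson_durbin(r, order)
```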

Figure 1. Simple model of the human vocal tract.

Methods

The speech signal first passes a pre-processing stage, which includes data normalization and pre-emphasis filtering. The pre-emphasis filter with coefficient α = 0.98 is defined as

$$P(z) = 1 - 0.98\, z^{-1} \qquad (2)$$

The reasons for employing a pre-emphasis filter are to eliminate the scattering effect introduced when the speech signal is transmitted from the lips through the air, and to remove the spectral component of the larynx from the speech signal.[15] After pre-processing, the signal passes a windowing stage: a Hamming window was applied to the speech data with a frame length of 30 ms, and the frame was shifted by 15 ms for 50% overlap. Each 30-ms frame can be assumed to be stationary.[16] The order of the AR model and the number of poles of the ARMA model were both set to 10. To choose the number of zeros for the ARMA model, we tested our method with different numbers of zeros. The general form of the ARMA model of the vocal tract is defined as

$$s(n) + \sum_{k=1}^{n_a} a_k\, s(n-k) = \sum_{k=0}^{n_b - 1} b_k\, u(n-k) \qquad (3)$$

where s(n) and u(n) are the speech signal and the excitation signal, respectively, n_a represents the number of poles, and n_b is the number of zeros plus one. The Z-transformed version of Eq. 3 is

$$H(z) = \frac{S(z)}{U(z)} = \frac{B(z)}{A(z)} \qquad (4)$$

where A(z) and B(z) are polynomials defined as

$$A(z) = 1 + \sum_{k=1}^{n_a} a_k\, z^{-k} \qquad (5)$$

$$B(z) = \sum_{k=0}^{n_b - 1} b_k\, z^{-k} \qquad (6)$$

In our method, the ARMA coefficients were calculated with a widely used iterative prediction-error method.[17] This method ensures that all poles and zeros lie inside the unit circle. The AR coefficients were calculated with the same method by setting n_b to zero. Since our proposed algorithm uses the difference between the spectra obtained from the AR and ARMA coefficients to estimate hypernasality, we needed a distance measure between the spectra. Therefore, we used the cepstrum coefficients of the AR and ARMA models to obtain two comparable sequences of equal length. The zero-pole form of the vocal tract filter in Eq. 4 is defined as

$$H(z) = G\, \frac{\prod_{i=1}^{n_b - 1} \left(1 - d_i z^{-1}\right)}{\prod_{i=1}^{n_a} \left(1 - p_i z^{-1}\right)} \qquad (7)$$

where p_i and d_i are the ith pole and ith zero of the vocal tract filter, respectively. All poles and zeros were considered to lie inside the unit circle. Since the system is minimum phase, its cepstrum can be uniquely determined as[18]

$$c(m) = \begin{cases} \ln G, & m = 0 \\[4pt] \displaystyle\sum_{i=1}^{n_a} \frac{p_i^{\,m}}{m} - \sum_{i=1}^{n_b - 1} \frac{d_i^{\,m}}{m}, & m > 0 \end{cases} \qquad (8)$$

It is easy to see from Eq. 8 that the cepstrum coefficients form a decaying sequence, which is why a finite number of coefficients is sufficient to approximate it; we can therefore refer to the truncated cepstrum as a cepstrum vector.[18]
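Equation 8 translates directly into code. The sketch below computes the model cepstrum from given poles, zeros, and gain; it assumes all roots lie strictly inside the unit circle, as stated above.

```python
import numpy as np

def model_cepstrum(poles, zeros, gain: float, m_max: int) -> np.ndarray:
    """Cepstrum c(0..m_max) of a minimum-phase rational model (Eq. 8).

    c(0) = ln G;  c(m) = sum_i p_i^m / m - sum_i d_i^m / m  for m >= 1.
    All poles and zeros must lie strictly inside the unit circle.
    """
    poles = np.asarray(poles, dtype=complex)
    zeros = np.asarray(zeros, dtype=complex)
    c = np.zeros(m_max + 1)
    c[0] = np.log(gain)
    for m in range(1, m_max + 1):
        # Conjugate root pairs make the imaginary parts cancel.
        c[m] = (np.sum(poles ** m) - np.sum(zeros ** m)).real / m
    return c
```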

Let c_AR(m) and c_ARMA(m) be the cepstral sequences of the AR and ARMA models, respectively. Then the geometric distance between the cepstral sequences is calculated by

$$DI = \sqrt{\sum_{m=1}^{M} \left[ c_{AR}(m) - c_{ARMA}(m) \right]^2} \qquad (9)$$

To compute the distance using Eq. 9, it has been reported that sufficient accuracy can be obtained if the parameter M is set to at least three times the order of the AR model.[19] In this paper, M was set to 40; we also evaluated our method using 120 cepstrum coefficients, beyond which the variations of the cepstrum coefficients become very small, and compared the two cases to verify the accuracy of our method. While calculating the distances using Eq. 9, we found that normalizing the AR or ARMA cepstrum coefficients over all frames of a speech sample may lead to more separable DI values for normal speakers and speakers with cleft palate, so we evaluated our method with both normalized and non-normalized cepstrum coefficients and compared the results. DI was calculated for each frame of the signal; the final decision for an utterance was made from the average of DI over all of its frames, called DI_average, and the final decision for a subject was obtained by computing the mean of DI_average over all of his or her utterances. Figure 2 shows a flow chart of our algorithm for the calculation of distances and the detection of hypernasality.
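The following sketch strings the steps together for one utterance: normalization and pre-emphasis (Eq. 2), Hamming framing, AR and ARMA fits, model cepstra (Eq. 8), and the frame-wise distance of Eq. 9 averaged into DI_average. As an assumption, we stand in statsmodels' maximum-likelihood ARIMA estimator for the iterative prediction-error method of [17] (it likewise constrains poles and zeros to the unit circle), reuse model_cepstrum from the previous sketch, and omit the optional cepstrum normalization.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # stand-in ARMA estimator (assumption)

def hamming_frames(x: np.ndarray, fs: float,
                   win_ms: float = 30.0, hop_ms: float = 15.0):
    """Yield 30-ms Hamming-windowed frames with 50% overlap."""
    win, hop = int(fs * win_ms / 1000), int(fs * hop_ms / 1000)
    window = np.hamming(win)
    for start in range(0, len(x) - win + 1, hop):
        yield x[start:start + win] * window

def di_average(speech: np.ndarray, fs: float,
               na: int = 10, nb: int = 2, m_max: int = 120) -> float:
    """Mean AR-vs-ARMA cepstral distance (Eq. 9) over all frames of an utterance."""
    x = speech / np.max(np.abs(speech))          # amplitude normalization
    x = np.append(x[0], x[1:] - 0.98 * x[:-1])   # pre-emphasis, Eq. 2
    distances = []
    for frame in hamming_frames(x, fs):
        ar = ARIMA(frame, order=(na, 0, 0)).fit()
        arma = ARIMA(frame, order=(na, 0, nb)).fit()
        p_ar = np.roots(np.r_[1.0, -ar.arparams])      # AR-model poles
        p_arma = np.roots(np.r_[1.0, -arma.arparams])  # ARMA poles
        z_arma = np.roots(np.r_[1.0, arma.maparams])   # ARMA zeros
        # Gain is irrelevant here: Eq. 9 sums from m = 1, so c(0) is skipped.
        c_ar = model_cepstrum(p_ar, [], 1.0, m_max)
        c_arma = model_cepstrum(p_arma, z_arma, 1.0, m_max)
        distances.append(np.sqrt(np.sum((c_ar[1:] - c_arma[1:]) ** 2)))
    return float(np.mean(distances))
```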

Figure 2. Flow chart of the hypernasality detection method.

Speech Samples

Oral consonants require velopharyngeal closure to separate the oral and nasal cavities. In contrast, nasal consonants involve a velopharyngeal opening that allows the propagation of sound energy into the nasal cavity. In children with cleft palate, early onset and delayed offset of velar movement before and after oral cavity occlusion cause the vowels preceding and following nasal consonants to be nasalized for certain durations.[20] Therefore, in this study the vowels (/a/) extracted from 392 utterances of the disyllable /pamap/, uttered by 22 normal subjects and 13 subjects with cleft palate, were used. The series of /p/ and /m/ before and after the vowels in the test word requires velopharyngeal closing and opening movements, so this context was considered useful for measuring the amount of nasalization. Because both the oral phoneme (/p/) and the nasal phoneme (/m/) are produced at the labial place of articulation, the influence of a change of articulation position on nasal resonance could be controlled.[20] The age range of the subjects was 4-12 years. The children with cleft palate had had the palate repaired through primary surgical correction and exhibited moderate or severe hypernasality. A high-quality microphone (Shure Beta 54, USA) was used to record the acoustic signals. The microphone was attached to a headset and positioned at a fixed distance of 3 cm from the right side of the subject's mouth. The signal-to-noise ratio of all recordings was more than 30 dB. The sampling rate was 44.1 kHz with 16 bits of resolution.

RESULTS AND DISCUSSION

To evaluate the proposed algorithm, we applied it to each frame of the utterances. We expected DI_average to be an appropriate index for detecting hypernasality, allowing normal and hypernasal samples to be separated simply by setting a threshold value for DI_average.

To find the optimum number of zeros for the ARMA model and the best upper limit M for the summation in Eq. 9, we performed a t-test. This test examines the null hypothesis that the data in the hypernasal and normal groups are random samples with equal means, against the alternative that the means are not equal.[21] The resulting P value can be used as a measure of how separable the data are with the proposed index; a large P value indicates that the calculated indices for the two groups do not differ significantly.
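The test itself is a two-sample t-test on the DI_average values of the two groups. A minimal sketch, with made-up values for illustration only:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical DI_average values, for illustration only.
di_normal = np.array([0.21, 0.25, 0.19, 0.23, 0.22])
di_hypernasal = np.array([0.48, 0.52, 0.45, 0.50, 0.47])

# Null hypothesis: the two groups are random samples with equal means.
stat, p_value = ttest_ind(di_hypernasal, di_normal)
print(f"t = {stat:.2f}, P = {p_value:.5f}")  # a small P suggests separable groups
```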

Hence, we calculated P values for the DI_average values of all utterances obtained with different numbers of zeros for the ARMA model and different values of M, and chose the cases with the smallest P values to continue our study. Table 1 shows the calculated P values for the different cases; clearly, using 120 normalized cepstrum coefficients gave the best result of all. For a better assessment of our method, Figure 3 shows boxplots of the distributions of the DI_average values of the normal and hypernasal groups for the best two cases selected from Table 1.

Table 1. P values for different parameters of our method.

Figure 3. Boxplot of DI_average for subjects with cleft palate and normal subjects, using 120 normalized cepstrum coefficients and an ARMA model with two zeros (left) and five zeros (right).

As mentioned above, we also calculated the mean of DI_average over all utterances of each subject and plotted the results for the best two cases in Figures 4 and 5. These figures imply that, for the proposed method, using two zeros in the ARMA model with 120 normalized cepstrum coefficients gives promising results. Figure 4 shows that, by setting an appropriate threshold value, the two groups of subjects can be separated simply. To find the proper threshold value, we applied two well-known classification methods, k-means and Bayes, and compared the results.

Figure 4. Mean of DI_average for each subject, using 120 normalized cepstrum coefficients and an ARMA model with two zeros.

Figure 5. Mean of DI_average for each subject, using 120 normalized cepstrum coefficients and an ARMA model with five zeros.

K-means partitions the data points into k clusters; this iterative partitioning minimizes, over all clusters, the sum of the within-cluster distances of points to the cluster centers. Since in our approach each data point is a one-dimensional value (the DI_average of an utterance), we take the average of the final cluster centers obtained by k-means (with k = 2) as the threshold value. The second approach to threshold estimation is based on the Bayes theorem: we fit a Gaussian distribution to each group and select the intersection of the two groups' density functions as the threshold value.
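Both threshold rules are easy to state in code. The sketch below takes the midpoint of the two k-means centers of the one-dimensional DI_average values, and, for the Bayes variant, solves for the point where the two fitted Gaussian densities are equal. Variable names and the use of scikit-learn are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_threshold(di_values: np.ndarray) -> float:
    """Average of the two k-means (k = 2) cluster centers of 1-D DI values."""
    km = KMeans(n_clusters=2, n_init=10).fit(di_values.reshape(-1, 1))
    return float(km.cluster_centers_.mean())

def bayes_threshold(di_normal: np.ndarray, di_hyper: np.ndarray) -> float:
    """Intersection of the two fitted Gaussian densities, between the means."""
    m1, s1 = di_normal.mean(), di_normal.std(ddof=1)
    m2, s2 = di_hyper.mean(), di_hyper.std(ddof=1)
    # N(x; m1, s1) = N(x; m2, s2) reduces to a quadratic in x.
    a = 1 / s1**2 - 1 / s2**2
    b = -2 * (m1 / s1**2 - m2 / s2**2)
    c = m1**2 / s1**2 - m2**2 / s2**2 - 2 * np.log(s2 / s1)
    roots = np.roots([a, b, c])
    lo, hi = sorted((m1, m2))
    between = [r.real for r in roots if lo <= r.real <= hi]
    return float(between[0])  # keep the root lying between the class means
```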

To evaluate these two classifiers, leave-one-out cross-validation was employed: for each of the 35 subjects in turn, the classifiers were trained on all data except that subject, and a prediction was made for that subject and the related utterances. In each step, k-means was trained on the utterances of 34 subjects and the average of the centers of the two resulting clusters was taken as the threshold value; the utterances of the held-out subject were then compared with this threshold. The same procedure was applied for the Bayes-based approach. The classifiers were trained on utterances, but the results are presented at two levels, utterances and subjects. Table 2 shows the classification results for all 392 utterances, of which 146 were hypernasal and 246 were normal according to expert judgment. The presented results are the sums of the results obtained in each step of the cross-validation test; the average of the threshold values over all steps is also shown in this table. In the case of subjects, we compared the mean of DI_average over the utterances of the held-out subject (the subject excluded from the training phase in the leave-one-out cross-validation) with the obtained threshold value and made the hypernasal-or-normal decision. The confusion matrix values for the case of subjects are presented in Table 3.
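A sketch of the leave-one-subject-out loop, reusing kmeans_threshold from the previous sketch; the data container is hypothetical. A subject is labeled hypernasal when the mean of DI_average over his or her utterances exceeds the threshold learned from the other 34 subjects.

```python
import numpy as np

def loso_predictions(di_by_subject: dict) -> dict:
    """Leave-one-subject-out predictions: subject id -> 0 (normal) / 1 (hypernasal).

    `di_by_subject` maps a subject id to the array of DI_average values of
    that subject's utterances (hypothetical container, for illustration).
    """
    predictions = {}
    for held_out in di_by_subject:
        train = np.concatenate(
            [v for s, v in di_by_subject.items() if s != held_out])
        threshold = kmeans_threshold(train)        # from the previous sketch
        subject_score = di_by_subject[held_out].mean()
        predictions[held_out] = int(subject_score > threshold)  # high DI -> hypernasal
    return predictions
```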

Table 2. Confusion matrix for utterance classification.

Table 3. Confusion matrix for subject classification.

For a better comparison of the two classifiers, sensitivity, specificity, and accuracy were calculated; they are presented in Table 4 for utterances and in Table 5 for subjects. Note that the balanced accuracy, which is the average of the sensitivity and specificity values, is not equal to the accuracy because the dataset is unbalanced between normal and hypernasal subjects. Furthermore, note that the values of sensitivity and specificity trade off against each other depending on the chosen threshold. These results lead us to propose a protocol for the assessment of hypernasality: record several utterances of a nasalized vowel for each subject, calculate DI_average for all utterances with the method proposed in this paper, and finally compare the mean of DI_average with the established threshold value to make the hypernasal-or-normal decision.
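For reference, the metrics reported in Tables 4 and 5 follow from the confusion-matrix counts as sketched below, with "positive" meaning hypernasal:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Sensitivity, specificity, accuracy, and balanced accuracy, in percent."""
    sensitivity = tp / (tp + fn)                # hypernasal samples caught
    specificity = tn / (tn + fp)                # normal samples kept
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    balanced = (sensitivity + specificity) / 2  # differs from accuracy when classes are unbalanced
    return {name: round(100 * value, 2) for name, value in
            [("sensitivity", sensitivity), ("specificity", specificity),
             ("accuracy", accuracy), ("balanced accuracy", balanced)]}
```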

Table 4. Results of the classification on utterances (given in %).

Table 5. Results of the classification on subjects (given in %).

To assess the ability of our approach, a comparison with previous works is useful. As mentioned in the previous sections, some works introduced quantitative indices for the assessment of hypernasality but reported their results as correlation coefficients between the proposed index and either nasalance scores or the results of perceptual judgments, which cannot be compared with our results. Other works reported classification results from classifiers such as SVMs trained on various acoustic features; since these works used different, inaccessible datasets of subjects with various degrees of hypernasality, we could not compare our results with theirs.

Therefore, to obtain comparable results, the algorithm proposed by Rah et al. was simulated and applied to our dataset; the results are presented in Tables 6 and 7. In this algorithm, after pre-processing and windowing, LPC coefficients of orders 10 and 36 are extracted, and the geometric distance between the cepstra of the high-order and low-order LP models is taken as an index of hypernasality. The better performance of our approach is evident from the tables.
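A sketch of our reading of that baseline, reusing levinson_durbin and model_cepstrum from the earlier sketches: the per-frame index is the cepstral distance between a 10th-order and a 36th-order LP model, with the truncation length M = 40 as an assumption carried over from our own setting.

```python
import numpy as np

def lpc_distance(frame: np.ndarray, low_order: int = 10,
                 high_order: int = 36, m_max: int = 40) -> float:
    """Cepstral distance between low- and high-order LP models of one frame."""
    def lp_cepstrum(order: int) -> np.ndarray:
        r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                      for k in range(order + 1)])
        a = levinson_durbin(r, order)        # [1, a_1, ..., a_order]
        poles = np.roots(a)                  # all-pole model: no zeros
        return model_cepstrum(poles, [], 1.0, m_max)

    c_low = lp_cepstrum(low_order)
    c_high = lp_cepstrum(high_order)
    return float(np.sqrt(np.sum((c_low[1:] - c_high[1:]) ** 2)))
```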

Table 6. Results of the classification on utterances, LPC method (given in %).

Table 7. Results of the classification on subjects, LPC method (given in %).

CONCLUSION

In this paper, we proposed a straightforward method to evaluate hypernasality that is much less troublesome than current clinical methods. Our method introduces an index proportional to the amount of hypernasality that can be used to evaluate the effects of surgery for velopharyngeal insufficiency and cleft palate and to help the therapist monitor the speech therapy process of these patients. We found that the best results were obtained by comparing 120 normalized cepstrum coefficients of an AR model with 10 poles to those of an ARMA model with two zeros and the same number of poles. In this configuration, we achieved classification accuracies of up to 81.12% for utterances and up to 97.14% for subjects. These results show a great improvement in classification accuracy compared with the results of a similar method based on LPC (74.49% for utterances and 77.14% for subjects). In addition, the DI_average index, or the sequences of distances introduced in this paper, could be used as a feature or representation vector with other classification methods.

Footnotes

Source of Support: Nil

Conflict of Interest: None declared

REFERENCES

1. Horii Y, Lang JE. Distributional analyses of an index of nasal coupling (HONC) in simulated hypernasal speech. Cleft Palate J. 1981;18:279–85.
2. Vijayalakshmi P, Reddy MR, O'Shaughnessy D. Acoustic analysis and detection of hypernasality using a group delay function. IEEE Trans Biomed Eng. 2007;54:621–9. doi:10.1109/TBME.2006.889191.
3. Cairns DA, Hansen JH, Riski JE. A noninvasive technique for detecting hypernasal speech using a nonlinear operator. IEEE Trans Biomed Eng. 1996;43:35–45. doi:10.1109/10.477699.
4. Rah DK, Ko YL, Lee C, Kim DW. A noninvasive estimation of hypernasality using a linear predictive model. Ann Biomed Eng. 2001;29:587–94. doi:10.1114/1.1380422.
5. Lee GS, Wang CP, Yang CC, Kuo TB. Voice low tone to high tone ratio: A potential quantitative index for vowel /a:/ and its nasalization. IEEE Trans Biomed Eng. 2006;53:1437–9. doi:10.1109/TBME.2006.873694.
6. Vijayalakshmi P, Nagarajan T, Jayanthan RV. Selective pole modification-based technique for the analysis and detection of hypernasality. In: Proc. IEEE Region 10 Conference (TENCON); 2009. pp. 1–5.
7. Castellanos G, Daza G, Sanchez L, Castrillón O, Suarez J. Acoustic speech analysis for hypernasality detection in children. In: Proc. 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS'06); 2006. pp. 5507–10.
8. Delgado-Trejos E, Sepúlveda-Sepúlveda FA, Castellanos-Domínguez G. Robustness improvement of hypernasal speech detection by acoustic analysis and the Rademacher complexity model. Advances in Biomedical Research. 2009:159–62.
9. Maier A, Reuß A, Hacker C, Schuster M, Nöth E. Analysis of hypernasal speech in children with cleft lip and palate. In: Text, Speech and Dialogue. Berlin, Heidelberg: Springer; 2008. pp. 389–96.
10. Orozco-Arroyave JR, Murillo-Rendón S, Álvarez-Meza AM, Arias-Londoño JD, Delgado-Trejos E, Vargas-Bonilla JF, et al. Automatic selection of acoustic and non-linear dynamic features in voice signals for hypernasality detection. In: Proceedings of Interspeech; Florence, Italy; 2011. pp. 529–32. Available from: http://www.ISCA.org
11. Arias-Londoño JD, Godino-Llorente JI, Sáenz-Lechón N, Osma-Ruiz V, Castellanos-Domínguez G. Automatic detection of pathological voices using complexity measures, noise parameters, and mel-cepstral coefficients. IEEE Trans Biomed Eng. 2011;58:370–9. doi:10.1109/TBME.2010.2089052.
12. Orozco-Arroyave JR, Vargas-Bonilla JF, Arias-Londoño JD, Murillo-Rendón S, Castellanos-Domínguez G, Garcés JF. Nonlinear dynamics for hypernasality detection in Spanish vowels and words. Cognit Comput. 2012;4:1–10.
13. Rabiner LR, Schafer RW. Digital Processing of Speech Signals. Englewood Cliffs, NJ: Prentice-Hall; 1978.
14. Deller J, Proakis J, Hansen JH. Discrete-Time Processing of Speech Signals. New York: John Wiley and Sons; 2000. pp. 266–342.
15. Gray A Jr, Markel J. A spectral-flatness measure for studying the autocorrelation method of linear prediction of speech analysis. IEEE Trans Acoust. 1974;22:207–17.
16. Mammone RJ, Zhang X, Ramachandran RP. Robust speaker recognition: A feature-based approach. IEEE Signal Process Mag. 1996;13:58.
17. Ljung L. System Identification: Theory for the User. Englewood Cliffs, NJ: Prentice-Hall; 1987.
18. Huang X, Acero A, Hon HW. Spoken Language Processing. Upper Saddle River, NJ: Prentice Hall PTR; 2001.
19. Gray A Jr, Markel J. Distance measures for speech processing. IEEE Trans Acoust. 1976;24:380–91.
20. Ha S, Sim H, Zhi M, Kuehn DP. An acoustic study of the temporal characteristics of nasalization in children with and without cleft palate. Cleft Palate Craniofac J. 2004;41:535–43. doi:10.1597/02-109.1.
21. Gibbons JD, Chakraborti S. Nonparametric Statistical Inference. New York: Marcel Dekker, Inc.; 2003. pp. 247–61.
