The Journal of the Acoustical Society of America
2016 Jun 23;139(6):EL209–EL215. doi: 10.1121/1.4954384

Modeling listener perception of speaker similarity in dysarthria

Kaitlin L Lansford 1, Visar Berisha 2, Rene L Utianski 3
PMCID: PMC6910020  PMID: 27369174

Abstract

The current investigation contributes to a perceptual similarity-based approach to dysarthria characterization by utilizing an innovative statistical approach, multinomial logistic regression with sparsity constraints, to identify acoustic features underlying each listener's impressions of speaker similarity. The data-driven approach also permitted an examination of the effect of clinical experience on listeners' impressions of similarity. Listeners, irrespective of level of clinical experience, were found to rely on similar acoustic features during the perceptual sorting task, known as free classification. Overall, the results support the continued advancement of a similarity-based approach to characterizing the communication disorders associated with dysarthria.

1. Introduction

While differential diagnosis of dysarthria subtype is a central goal of clinical practice in motor speech disorders, sub-classification alone does not directly inform intervention strategies (Darley et al., 1975; Duffy, 2013). The reasons are (1) speech symptoms within a given dysarthria subtype may vary along the severity dimension, (2) there is considerable overlap in speech symptoms among the dysarthria subtypes (e.g., imprecise consonants, slow rate), and (3) not all speakers with the same diagnosis exhibit the same speech symptoms. Thus, characterization of the communication disorders caused by dysarthria may require an alternative focus. Toward this end, a taxonomical framework for categorizing dysarthria has been advanced, wherein the central goal is to identify speech features that are common to most if not all speakers with dysarthria. This framework would permit systematic investigation of the perceptual challenges associated with the defining features of dysarthria (Weismer and Kim, 2010).

In a previous report, we demonstrated proof-of-concept of an alternative approach to dysarthria characterization, largely motivated by the taxonomical framework proposed by Weismer and Kim (2010). This approach posits that salient speech features contributing to the communication disorders associated with dysarthria may be revealed through the study of perceptual similarity (Lansford et al., 2014). In that original study, we collected speaker-similarity data from second-year speech-language pathology master's students using an unconstrained perceptual sorting task, known as auditory free classification. Briefly, we hypothesized that if listeners could identify clusters of similar-sounding speakers with dysarthria, they must have used perceptually salient speech features to accomplish the task. To the extent that such speech features contribute to the associated communication disorder, we suggested that they might be justifiable targets for treatment.

Although auditory free classification offered a suitable framework for an initial study of perceptual similarity in dysarthria, the analytic approach was limited. First, it required that the similarity data be pooled across a sufficient number of listeners to reveal a stable set of abstract dimensions underlying judgments of speaker similarity. Further, salient speech features underlying listeners' impressions of speaker similarity were not revealed directly. Rather, the abstract dimensions underlying speaker-similarity, revealed via multidimensional scaling (MDS), were interpreted through correlation analysis with available acoustic and perceptual measures. An alternative data handling technique that explores more complex relationships between acoustics and groupings of similar-sounding speakers may be better suited for identifying salient speech features associated with perceptual similarity judgments in dysarthria.

Herein, we make use of an innovative statistical approach, multinomial logistic regression with sparsity constraints, capable of identifying the salient acoustic features underlying each listener's groupings of similar-sounding speakers. In other words, this technique permits exploration of the strategies used by each listener to make speaker-similarity judgments. Beyond its obvious advantage of identifying salient acoustic features at the level of the listener, it also permits visualization (via MDS analysis) of how consistently (or inconsistently) listeners relied upon the same acoustic features to make their judgments. This capability affords an opportunity to evaluate the effect(s) of potential listener-related factors on judgments of speaker similarity in dysarthria. Experience with disordered speech is one such listener-related factor that may differentially affect similarity judgments in dysarthria. On the basis of extensive work implicating a role of clinical experience in the auditory-perceptual analysis of disordered speech (e.g., Helou et al., 2010; Kreiman et al., 1993; cf. Eadie and Kapsner-Smith, 2011), it follows that level of clinical experience could influence similarity judgments in dysarthria. This assumption is further supported by the findings of a free classification study that revealed a relationship between raters' linguistic experience and similarity judgments of speakers with a variety of regional American English dialects (Clopper and Pisoni, 2007). The assumption that clinical experience could influence ratings of speaker similarity motivated the methodological decision in the preliminary study to recruit graduate students with roughly the same level of clinical experience to complete the free classification task (Lansford et al., 2014). This decision allowed us to examine the paradigm of perceptual clustering without the potential influence of strong clinical or experiential bias, but at the cost of limiting ecological validity for clinical application.

The current investigation extends our previous work by utilizing an innovative statistical approach to select the acoustic features that are most predictive for classifying speakers into listener-defined clusters. If successful, the results of the current study would offer researchers another approach for studying the construct of perceptual similarity in other groups of speakers. The analytic approach also permits examination of the effect of clinical experience on listeners' groupings of similar-sounding speakers. Toward this end, the following groups of listeners, differing in experiential level, completed the auditory free classification task: (1) undergraduate students with little to no clinical experience, (2) second-year master's students, enrolled in a motor speech disorders class, with emerging clinical experience, and (3) practicing speech-language pathologists, currently working with individuals with motor speech disorders, with the greatest level of clinical experience. If listeners were found to use similar strategies to make their speaker-similarity judgments (i.e., to rely on the same speech features), irrespective of experiential level, the ecological validity of a similarity-based approach to studying the communication disorders associated with dysarthria would be supported.

2. Method

2.1. Speakers and stimuli

Productions from 33 speakers were selected from a larger corpus of recordings in the Arizona State University Motor Speech Disorders (ASU MSD) laboratory. The speakers selected for the present investigation are described in full detail in Lansford et al. (2014). Briefly, speakers were diagnosed with one of the following dysarthria subtypes by neurologists at the Mayo Clinic: ataxic dysarthria secondary to cerebellar degeneration (n = 11), mixed flaccid-spastic dysarthria secondary to amyotrophic lateral sclerosis (n = 10), hyperkinetic dysarthria secondary to Huntington's disease (n = 4), and hypokinetic dysarthria secondary to Parkinson's disease (n = 8). Two speech-language pathologists (SLPs) concurred that the dysarthria type was consistent with the underlying medical diagnosis. Dysarthria severity was rated as moderate to severe.

All speaker stimuli were previously recorded and edited for use in a larger study conducted in the ASU MSD laboratory. The speech sample recording procedure is described in full detail in Lansford et al. (2014). Consistent with the original study, the sentence “The standards committee met this afternoon in an open meeting” was selected for the free classification perceptual sorting task. While listeners were only exposed to a single sentence, acoustic analyses were conducted on a set of five sentences for each speaker, to maximize power in the multinomial logistic regression.

2.2. Listeners

Thirty undergraduate students (UG), all juniors in their first semester of the Communication Science and Disorders course sequence at Florida State University, were recruited to participate. It was assumed that these listeners had little to no clinical experience. In addition, 20 practicing speech-language pathologists (SLP) were recruited from the Phoenix and Tallahassee metropolitan areas for this project. The SLP listeners reported the number of years they had been practicing after obtaining their Certificate of Clinical Competence [CCCs; M = 12.63; standard deviation (SD) = 10.38]. Overall, the SLPs had the greatest and most varied levels of clinical experience. All listeners were native speakers of English with self-reported normal cognitive and hearing skills and were reimbursed with a $10 gift card for their participation. The similarity data collected from second-year master's students (MS) enrolled in a Motor Speech Disorders class, originally reported in Lansford et al. (2014), were included in the present set of analyses. It was assumed that these listeners were at roughly equivalent levels of emerging clinical experience.

2.3. Acoustic measurements

Three sets of acoustic metrics—hand-segmented rate and rhythm measures, automatically extracted envelope modulation spectra (EMS) measures, and automatically extracted long-term average spectra (LTAS) measures—were obtained for each speaker (using the methodology described in full detail in Lansford et al., 2014; see Table 1 for a brief description of each set). The three sets of features have been shown to be complementary: together, this robust set of metrics captures both spectral and temporal aspects of the speech signal, affording comprehensive acoustic quantification of the speech signal.

Table 1.

Description of the acoustic metrics.

Metric Category Description
Rhythm These temporal metrics, previously used to quantify the rate and rhythmic structure of dysarthric speech, were derived from the segmental durations of vocalic and intervocalic intervals of produced sentences and included: articulation rate, standard deviation of vocalic and intervocalic units, standard deviation of vocalic and intervocalic intervals divided by mean vocalic duration (rate normalized), standard deviation of vocalic + consonantal intervals divided by mean vocalic + consonantal duration (×100), percent of utterance duration composed of vocalic intervals, pairwise variability of vocalic and intervocalic units, pairwise variability index for vocalic and consonantal intervals, and normalized pairwise variability index for vocalic + consonantal intervals (n = 11; see Liss et al., 2009, for full description)
EMS Metrics The envelope modulation spectra (EMS) variables were obtained for the full signal and for each of the 7 octave bands with center frequencies ranging from 125 to 8000 Hz and included: Peak frequency, peak amplitude, E 3-6, Below 4, Above 4, Ratio 4 (n = 42, see Liss et al., 2010, for full description)
Long term average spectra (LTAS) The measures of long-term average spectra (LTAS) were normalized to the root-mean-square (RMS) energy of the entire signal and derived for seven octave bands with center frequencies ranging from 125 to 8000 Hz, and included: RMS energy, standard deviation (for 20 ms windows), range (for 20 ms windows), and pairwise variability (mean difference between successive 20 ms windows) (n = 28; see Lansford et al., 2014, for full description)
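To make the rhythm metrics in Table 1 concrete, the sketch below computes a handful of them (articulation rate, %V, the standard deviation of vocalic intervals, its rate-normalized variant, and the vocalic normalized pairwise variability index) from hand-segmented interval durations. The duration values and syllable count are invented for illustration, not taken from the study's recordings.

```python
# Sketch: a few Table 1 rhythm metrics from hand-segmented interval
# durations (in seconds). All values below are invented for illustration.
import numpy as np

vocalic = np.array([0.12, 0.09, 0.15, 0.11, 0.08, 0.14])   # vocalic intervals
intervocalic = np.array([0.07, 0.10, 0.06, 0.09, 0.08])    # intervocalic intervals
n_syllables = 6                                            # hypothetical count

total_dur = vocalic.sum() + intervocalic.sum()
articulation_rate = n_syllables / total_dur     # syllables per second
percent_V = 100 * vocalic.sum() / total_dur     # % of utterance that is vocalic
delta_V = vocalic.std()                         # SD of vocalic durations
varco_V = 100 * delta_V / vocalic.mean()        # rate-normalized variant

# normalized pairwise variability index over successive vocalic intervals
npvi_V = 100 * np.mean([abs(a - b) / ((a + b) / 2)
                        for a, b in zip(vocalic[:-1], vocalic[1:])])

print(f"rate={articulation_rate:.2f} syll/s, %V={percent_V:.1f}, VarcoV={varco_V:.1f}")
```

The EMS and LTAS measures would be computed analogously from the amplitude envelope and spectrum of each octave band; see Liss et al. (2010) and Lansford et al. (2014) for the full definitions.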

2.4. Procedures

An auditory free classification task, described in full detail in Lansford et al. (2014), was used to collect the similarity data. Briefly, free classification is a perceptual sorting task in which listeners are asked to group stimuli according to perceived similarity. Recruited listeners were seated in front of computers, equipped with sound-attenuating headphones, located in quiet listening cubicles. The free classification task was administered via PowerPoint. Listeners were instructed to listen to all of the speakers' sound files and to group them together, in a grid embedded in the PowerPoint slide, according to how similar they sounded. Listeners were informed that all of the speakers had dysarthria; however, underlying medical etiologies and dysarthria subtypes were not revealed. Listeners were not provided any other instruction regarding how to make their judgments of similarity and were free to make as many groups as they deemed appropriate. No time limit was imposed, and listeners were permitted to listen to and re-arrange the groupings as needed.

2.5. Data analysis

A sparse multinomial regression analysis was used to explore the relative saliency of select acoustic metrics to judgments of similarity within and across the listener groups. For each listener, we formulated a multinomial regression problem wherein we aimed to identify the acoustic features that best described that listener's groupings. Because the acoustic feature space is high-dimensional, we constrained the problem such that only a subset of features was selected (ten for each listener). Sparse multinomial regression jointly identifies the top ten features that describe the grouping by enforcing the constraint that most of the coefficients in the regression problem are set to 0, except for the coefficients of the selected features (Tibshirani, 1997). This approach is similar to ridge regression; however, instead of penalizing the cost function with the sum of the squares of the coefficients, the penalty is the sum of the absolute values of the coefficients. Tibshirani (1997) showed that this results in only a subset of the model coefficients being non-zero; hence, only a subset of the features is selected. The model includes an additional parameter that controls the sparsity level (the number of non-zero coefficients); we iteratively modified this parameter until only ten acoustic features were selected. The alternative is the suboptimal forward/backward multinomial regression, wherein acoustic features are greedily added to the selected set until a desired number is reached (Hesterberg et al., 2008). In contrast, the sparsity-constrained approach used here jointly identifies the features that best describe the clusters formed by each listener.
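As a sketch of this selection procedure, the snippet below fits an L1-penalized (lasso-type) multinomial logistic regression for one hypothetical listener and bisects the sparsity parameter (scikit-learn's inverse penalty C) until roughly ten features carry non-zero coefficients. The data, cluster labels, and feature count are synthetic placeholders, and scikit-learn's solver stands in for whatever implementation the study actually used.

```python
# Sketch: per-listener feature selection via L1-penalized multinomial
# logistic regression, tuning the sparsity parameter toward ten features.
# Data and cluster labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_speakers, n_features = 33, 81
X = StandardScaler().fit_transform(rng.normal(size=(n_speakers, n_features)))
y = rng.integers(0, 5, size=n_speakers)   # one listener's cluster labels

def select_features(X, y, target=10, c_lo=1e-3, c_hi=50.0, iters=25):
    """Bisect the inverse penalty C until ~`target` features are active."""
    best, best_gap = np.array([], dtype=int), np.inf
    for _ in range(iters):
        c = np.sqrt(c_lo * c_hi)              # bisection on a log scale
        clf = LogisticRegression(penalty="l1", solver="saga", C=c,
                                 max_iter=5000).fit(X, y)
        # a feature is "active" if any class gives it a non-zero coefficient
        active = np.flatnonzero(np.any(clf.coef_ != 0, axis=0))
        gap = abs(len(active) - target)
        if gap < best_gap:
            best, best_gap = active, gap
        if len(active) == target:
            break
        if len(active) < target:
            c_lo = c                          # penalty too strong: weaken it
        else:
            c_hi = c                          # too many features: strengthen it
    return best

selected = select_features(X, y)
print(len(selected), "features selected")
```

Because the active set changes discretely with the penalty, the bisection keeps the closest solution when exactly ten features cannot be hit.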

A similarity measure between listeners was developed using the selected acoustic features from the multinomial regression. This measure was used to populate a pairwise listener-similarity matrix, and the listeners were then embedded on a two-dimensional plot using MDS. The similarity measure is best described using two listeners as an example. Given two listeners and their respective top ten features, we measured the similarity between the two feature sets and used that as a proxy for listener similarity. Let X denote the complete feature set for listener 1 and X(i) the ith feature from this set; similarly, let Y and Y(i) denote the features for listener 2. For each feature X(i), we calculated the correlation coefficient with every feature in Y and took the maximum value; this identified the maximum similarity between X(i) and the features in Y. We calculated this value for all features in X and averaged the results, and then computed the same measure between the features in Y and those in X. Averaging the two directions yielded a symmetric similarity measure between listeners based on their selected acoustic features. We conducted this analysis on the pooled similarity data to evaluate within-group consistency and to make comparisons across groups.
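A sketch of this similarity computation, with classical (Torgerson) MDS implemented directly, is given below. The per-listener feature matrices are synthetic stand-ins for the ten selected acoustic features (columns) measured on the 33 speakers (rows), and taking the maximum over absolute rather than signed correlations is our assumption.

```python
# Sketch: symmetric listener-similarity from selected-feature correlations,
# followed by a classical (Torgerson) MDS embedding. Feature matrices are
# synthetic placeholders (rows = speakers, columns = selected features).
import numpy as np

rng = np.random.default_rng(1)
n_speakers, n_selected, n_listeners = 33, 10, 6
listeners = [rng.normal(size=(n_speakers, n_selected))
             for _ in range(n_listeners)]

def directed_sim(A, B):
    """Mean over A's features of the max |correlation| with any feature of B."""
    k = A.shape[1]
    C = np.corrcoef(A, B, rowvar=False)[:k, k:]   # cross-correlation block
    return np.abs(C).max(axis=1).mean()

def listener_similarity(A, B):
    return 0.5 * (directed_sim(A, B) + directed_sim(B, A))  # symmetrize

S = np.array([[listener_similarity(a, b) for b in listeners]
              for a in listeners])

def classical_mds(D, k=2):
    """Classical MDS: double-center squared dissimilarities, eigendecompose."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]            # top-k eigendirections
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

coords = classical_mds(1.0 - S)   # dissimilarity = 1 - similarity
print(coords.shape)
```

Each row of `coords` is one listener's position in the two-dimensional space, so listeners who selected correlated feature sets land near one another.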

3. Results

3.1. Descriptive statistics

The SLP listeners derived an average of 8.05 clusters of similar-sounding speakers (SD = 2.28), with an average of 4.52 speakers (SD = 1.8) per cluster. Undergraduate student listeners derived an average of 7.5 clusters (SD = 2.64), with an average of 4.97 speakers (SD = 1.86) per cluster. The descriptive results for the SLP and UG listeners were consistent with the previously reported MS results, in which the student listeners derived an average of 7.7 clusters (SD = 2.85) of similar-sounding speakers with an average of 4.96 speakers per cluster (SD = 2.06; Lansford et al., 2014).

3.2. Regression results

The multinomial logistic regression was carried out for each listener in order to identify the subset of ten acoustic features that best explained that listener's groupings. For the purposes of this analysis, we report the top features selected for each listener group. As depicted in Fig. 1, the same eight acoustic features were among the most frequently selected variables for each listener group. The overlapping variables included: peak_freq_all, articulation rate, Below4_8000, PV125, Ratio4_4000, Above4_1000, Ratio4_2000, and Above4_All (where the final number indicates the frequency band from which the measure was derived). Interestingly, the frequency with which each variable was selected by the model was similar across groups.

Fig. 1.

(a) and (b) Depicted here are the acoustic measures most frequently selected by the multinomial logistic regression as best characterizing the listener-derived clusters of similar sounding speakers. Of the top ten acoustic features included in the regression models, eight of them overlapped across the groups of listeners with different levels of clinical experience. Percentages along the y axis represent the proportion of listeners in each group for whom the acoustic feature was selected.

3.3. Multidimensional scaling results

Classical multidimensional scaling was used to evaluate how similarly listeners relied on the available acoustic information to make their similarity judgments. The listener groups were pooled and a two-dimensional model was derived to visualize how similarly, or consistently, the listeners grouped together speakers with dysarthria. As revealed in two-dimensional space (see Fig. 2), the listener groups were largely overlapping.

Fig. 2.

Results of a two-dimensional MDS analysis that examined the similarity of the individual logistic regression results obtained for the SLP, MS, and UG listeners.

A post hoc dispersion analysis was conducted to substantiate our subjective evaluation of the listener similarity data plotted in the two-dimensional MDS space. Dispersion from 0, defined as the Euclidean distance in the two-dimensional MDS space between a listener and the origin, was computed. For any given listener, the larger the dispersion value, the greater the distance between that listener and the center of the two-dimensional space. Because the MDS was conducted on the pooled data, this analysis has the potential to reveal between-group differences (UG, MS, and SLP) in the spread about 0. Mean dispersion from 0 for the UG, MS, and SLP listeners was 0.11 (SD = 0.06), 0.11 (SD = 0.05), and 0.14 (SD = 0.08), respectively. An analysis of variance (ANOVA) failed to reject the null hypothesis of equal group means [F(2, 69) = 1.647, p = 0.2]. Thus, the listeners' level of clinical experience did not affect the spread about the origin of the two-dimensional space.
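The dispersion analysis can be sketched as follows. The two-dimensional coordinates are synthetic placeholders, the group sizes (30 UG, 22 MS, 20 SLP) are inferred from the reported ANOVA degrees of freedom, and scipy's `f_oneway` stands in for whatever ANOVA routine was actually used.

```python
# Sketch: dispersion from the origin in the 2-D MDS space, compared across
# listener groups with a one-way ANOVA. Coordinates are synthetic.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
coords = {"UG": rng.normal(0, 0.07, size=(30, 2)),
          "MS": rng.normal(0, 0.07, size=(22, 2)),
          "SLP": rng.normal(0, 0.09, size=(20, 2))}

# Euclidean distance of each listener from the origin of the MDS space
dispersion = {g: np.linalg.norm(xy, axis=1) for g, xy in coords.items()}

F, p = f_oneway(dispersion["UG"], dispersion["MS"], dispersion["SLP"])
n_total = sum(len(d) for d in dispersion.values())
print(f"F(2, {n_total - 3}) = {F:.3f}, p = {p:.3f}")
```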

4. Discussion

The current investigation builds upon previous research on perceptual similarity in dysarthria (Lansford et al., 2014) by using an innovative statistical approach (multinomial regression with sparsity constraints) capable of identifying the acoustic features that best characterize each listener's groupings of similar-sounding speakers. Importantly, the current statistical approach replicated previous findings regarding the saliency to perceptions of speaker similarity of peak frequency (peak_freq_all), an envelope modulation spectra (EMS) metric that captures the duration of the predominant repeating amplitude pattern (Liss et al., 2010), and of articulation rate. Moreover, the current approach implicated acoustic features not previously identified as contributing to speaker-similarity ratings, for example, PV125, an LTAS measure in the frequency band centered around 125 Hz. We can largely interpret PV125 as a measure of nasality: persistent nasal resonance could manifest as atypical, consistent energy in the lower bands, thereby reducing the pairwise variability between successive frames. This relationship is prominently revealed by multinomial logistic regression but would be missed by other relational analyses, as hypernasality is likely to be either present in or absent from a cluster of similar-sounding speakers, and not linearly associated with the abstract dimensions underlying speaker similarity.

In the original paper, perceptual measures of speech, including estimates of severity, articulatory precision, vocal quality, and intelligibility, were included in the analyses. Several were revealed to be significantly correlated with the abstract dimensions underlying speaker similarity; however, none more strongly than intelligibility (Lansford et al., 2014). Perceptual measures, however, tend to be broad (and subjective) categories, which can be difficult to estimate from the speech signal alone. Thus, in this paper we opted to focus on speech acoustics, since they are objective and can be estimated directly from the speech signal. This is not to say, however, that intelligibility and the acoustic measures are unrelated. In fact, transcription intelligibility (i.e., percent words correct from a transcription task, included in the original study) was significantly correlated, in the expected directions, with three of the EMS measures selected by the current logistic regression: the Ratio4 measures in the 2000 and 4000 Hz bands (r = −0.65 and −0.47, respectively) and Above4 in the 1000 Hz band (r = 0.39). Thus, the results of the current study, which focused solely on acoustic measures, corroborated previous findings regarding the saliency of intelligibility, a perceptual metric, to listeners' impressions of speaker similarity.

The decision-making process associated with clustering together similar-sounding speakers is likely affected not only by the speech features represented within the cohort of speakers but also by a variety of listener-related factors (e.g., experience, attentiveness, motivation). Indeed, listeners' linguistic experience has been revealed to differentially affect perceptual similarity ratings of speakers with different regional American English accents (Clopper and Pisoni, 2006, 2007). Prior to the current work, however, it was unknown how a listener's level of clinical experience would affect similarity impressions of speakers with dysarthria. In the current study, the acoustic features selected by the multinomial regression as underlying the clusters made by both student groups were consistent with those selected for the SLP listeners, who had considerably more clinical experience. In fact, eight of the ten most frequently selected acoustic features were identical across the listener groups. These findings are consistent with previous studies in dysarthria that revealed no influence of clinical experience on the use and reliability of more structured levels of perceptual analysis (e.g., the identification and rating of perceptual features in dysarthric speech; Bunton et al., 2007; Zyski and Weisiger, 1987). Thus, based on the present findings, level of clinical experience does not appear to factor into the decision-making process associated with designating speakers with dysarthria as sounding similar. It is worth noting, however, that while the listener groups were largely overlapping in the two-dimensional space created by the MDS analysis, there was considerable spread across listeners, including a number of outliers (see Fig. 2). Thus, listener-related factors other than experiential level likely contributed, in some way, to listeners' impressions of speaker similarity. Future work should attempt to explicate potential speaker- and listener-related factors contributing to impressions of speaker similarity in dysarthria.

5. Conclusion

The current investigation assessed the use of an innovative statistical approach to model the decision-making process associated with speaker-similarity ratings in dysarthria. Specifically, the analytic procedure was capable of identifying the salient speech features associated with each listener's groupings of similar-sounding speakers with dysarthria. This level of analysis allowed for direct comparison of the relative saliency of speech features to impressions of perceptual similarity across listeners and listener groups. Thus, the statistical approach provided a suitable framework for exploring how experiential level affected the listeners' strategies for identifying similar-sounding speakers. In general, the results demonstrated that the relative saliency of the acoustic dimensions underlying speaker similarity was not influenced by the listener's level of clinical experience. Overall, the current results replicated and extended the findings from our proof-of-concept work (Lansford et al., 2014), thereby supporting the continued advancement of a similarity-based approach to dysarthria characterization.

Acknowledgments

This research was supported by grants from the National Institute on Deafness and other Communication Disorders (R01 DC006859, R21 DC012558, and F31 DC10093) and from the American Speech-Language-Hearing Foundation (2012 Speech Science award). We gratefully acknowledge Julie Liss and Steven Sandoval for their contributions to this research.

References and links

  • 1. Bunton, K. , Kent, R. D. , Duffy, J. R. , Rosenbek, J. C. , and Kent, J. F. (2007). “ Listener agreement for auditory-perceptual ratings of dysarthria,” J. Speech Lang. Hear. Res. 50, 1481–1495. 10.1044/1092-4388(2007/102) [DOI] [PubMed] [Google Scholar]
  • 2. Clopper, C. G. , and Pisoni, D. B. (2006). “ Effects of region of origin and geographic mobility on perceptual dialect categorization,” Lang. Var. Change 18, 193–221. 10.1017/S0954394506060091 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Clopper, C. G. , and Pisoni, D. B. (2007). “ Free classification of regional dialects of American English,” J. Phonetics 35, 421–438. 10.1016/j.wocn.2006.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Darley, F. , Aronson, A. , and Brown, J. (1975). Motor Speech Disorders ( Saunders, Philadelphia, PA: ). [Google Scholar]
  • 5. Duffy, J. (2013). Motor Speech Disorders, 3rd ed. ( Elsevier, St. Louis, MO: ). [Google Scholar]
  • 6. Eadie, T. L. , and Kapsner-Smith, M. (2011). “ The effect of listener experience and anchors on judgments of dysphonia,” J. Speech Lang. Hear. Res. 54, 430–447. 10.1044/1092-4388(2010/09-0205) [DOI] [PubMed] [Google Scholar]
  • 7. Helou, L. B. , Soloman, N. P. , Henry, L. R. , Coppit, G. L. , Howard, R. S. , and Stojadinovic, A. (2010). “ The role of listener experience on Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) ratings of postthyroidectomy voice,” Am. J. Speech Lang. Pat. 19, 248–258. 10.1044/1058-0360(2010/09-0012) [DOI] [PubMed] [Google Scholar]
  • 8. Hesterberg, T. , Choi, N. H. , Meier, L. , and Fraley, C. (2008). “ Least angle and L1 penalized regression: A review,” Statist. Sur. 2, 61–93. 10.1214/08-SS035 [DOI] [Google Scholar]
  • 9. Kreiman, J. , Gerratt, B. R. , Kempster, G. B. , Erman, A. , and Berke, G. S. (1993). “ Perceptual evaluation of voice quality: Review, tutorial, and a framework for future research,” J. Speech Lang. Hear. Res. 36, 21–40. 10.1044/jshr.3601.21 [DOI] [PubMed] [Google Scholar]
  • 11. Lansford, K. L. , Liss, J. M. , and Norton, R. E. (2014). “ Free classification of perceptually similar speakers with dysarthria,” J. Speech Lang. Hear. Res. 57, 2051–2064. 10.1044/2014_JSLHR-S-13-0177 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Liss, J. M. , Legendre, S. , and Lotto, A. J. (2010). “ Discriminating dysarthria type from envelope modulation spectra,” J. Speech Lang. Hear. Res. 53, 1246–1255. 10.1044/1092-4388(2010/09-0121) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Liss, J. M. , White, L. , Mattys, S. L. , Lansford, K. L. , Lotto, A. J. , Spitzer, S. M. , and Caviness, J. N. (2009). “ Quantifying speech rhythm abnormalities in the dysarthrias,” J. Speech Lang. Hear. Res. 52, 1334–1352. 10.1044/1092-4388(2009/08-0208) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Tibshirani, R. (1997). “ The lasso method for variable selection in the Cox model,” Stat. Med. 16, 385–395. 10.1002/(SICI)1097-0258(19970228)16:4<385::AID-SIM380>3.0.CO;2-3 [DOI] [PubMed] [Google Scholar]
  • 15. Weismer, G. , and Kim, Y. (2010). “ Classification and taxonomy of motor speech disorders: What are the issues?,” in Speech Motor Control: New Developments in Basic and Applied Research, edited by Maassen B. and van Lieshout P., 1st ed. ( Oxford University Press, Cambridge, UK: ), pp. 229–241. [Google Scholar]
  • 16. Zyski, B. J. , and Weisiger, B. E. (1987). “ Identification of dysarthria types based on perceptual analysis,” J. Commun. Disord. 20, 367–378. 10.1016/0021-9924(87)90025-6 [DOI] [PubMed] [Google Scholar]
