Fronto-Temporal Connectivity is Preserved During Sung but Not Spoken Word Listening, Across the Autism Spectrum




RESEARCH ARTICLE

Megha Sharda, Rashi Midha, Supriya Malik, Shaneel Mukerji, and Nandini C. Singh

Co-occurrence of preserved musical function with language and socio-communicative impairments is a common but understudied feature of Autism Spectrum Disorders (ASD). Given the significant overlap in the neural organization of these processes, investigating the brain mechanisms underlying speech and music may not only help dissociate the nature of these auditory processes in ASD but also provide a neurobiological basis for the development of interventions. Using a passive-listening functional magnetic resonance imaging paradigm with spoken words, sung words, and piano tones, we found that 22 children with ASD, with varying levels of functioning, activated bilateral temporal brain networks during sung-word perception, similarly to an age- and gender-matched control group. In contrast, spoken-word perception was right-lateralized in ASD and elicited reduced inferior frontal gyrus (IFG) activity, which varied as a function of language ability. Diffusion tensor imaging analysis reflected reduced integrity of the left hemisphere fronto-temporal tract in the ASD group and further showed that the hypoactivation in IFG was predicted by the integrity of this tract. Subsequent psychophysiological interaction analyses revealed that functional fronto-temporal connectivity, disrupted during spoken-word perception, was preserved during sung-word listening in ASD, suggesting alternate mechanisms of speech and music processing in ASD. Our results thus demonstrate the ability of song to overcome the structural deficit for speech across the autism spectrum and provide a mechanistic basis for the efficacy of song-based interventions in ASD. Autism Res 2014, ••: ••–••. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.
Keywords: functional MRI; diffusion tensor imaging (DTI); cognitive neuroscience; pediatrics

Introduction

The earliest reports on Autism Spectrum Disorder (ASD) documented the co-occurrence of superior musical function alongside language and socio-communicative disabilities as a common feature [Kanner, 1943]. However, research so far has concentrated on characterizing deficits in the speech and communication domain alone, with much of the evidence for preserved or enhanced musical abilities, such as superior pitch processing, coming from anecdotal accounts and behavioral studies [Heaton, 2009; Molnar-Szakacs & Heaton, 2012; Mottron, Peretz, & Ménard, 2000; Ouimet et al., 2012]. Researchers have also observed perfect pitch, good melodic memory, and a keen sensitivity to music in children with ASD [Heaton, Pring, & Hermelin, 2001; O'Connell, 1974; Sherwin, 1953], and it has been shown that when presented with musical and non-musical stimuli, children with autism show a slight preference for the more musical stimuli [Buday, 1995; Thaut, 1988]. At the same time, the last decade has seen increasing interest in the use of music for rehabilitation in various neuro-cognitive disorders [Lin et al., 2011; Schlaug et al., 2010; Schneider et al., 2007].

Beneficial effects of music have been demonstrated on a host of skills such as auditory attention, spatial awareness, and motor sequencing. Special emphasis has been placed on improving socio-communicative responsiveness in children with ASD through potential transfer effects arising from shared neural mechanisms for speech and music [Molnar-Szakacs & Heaton, 2012; Wigram & Gold, 2006]. The concurrent impairment of functional speech and preservation of musical function suggests that auditory processing in ASD may combine enhancements and deficits. It might therefore be possible to use the preserved or enhanced function in music to manage the deficits in speech, and to develop individualized and effective music-based interventions, both within and beyond the autism spectrum. Although spoken language impairments in ASD are widespread and pervasive, a commonly accepted model of this dysfunction is a disruption of the left fronto-temporal cortical pathway. Various lines of investigation, including structural and functional neuroimaging studies, have implicated this fronto-temporal circuitry, including the left inferior frontal gyrus (IFG) and the left superior and middle temporal regions (STG, MTG), in the development of normal speech and language function, with a role in both perceptual and linguistic aspects of speech processing [Eyler, Pierce, & Courchesne, 2012; Kana, Libero, & Moore, 2011; Lai et al., 2012; Wan et al., 2012]. While extensive research has implicated this pathway in the spoken language deficit in ASD, music and song processing, which share significant perceptual as well as neural resources with speech in typical populations [Koelsch, 2011; Schön et al., 2010], remain largely unexplored in autism. Evidence from developmental studies has demonstrated that infants show intense and sustained interest in singing-based exchanges, characterized by exaggerated pitch and repetitive structure, rather than in speech [Nakata & Trehub, 2004]. It has also been shown that sung exchanges between mothers and infants in early childhood facilitate later language development [Nakata & Trehub, 2004]. Although little is known about how these effects manifest in children with ASD, some studies have used singing or intoned vocalizations to induce spoken language in nonverbal children with autism [Lim, 2010; Simpson & Keen, 2011]. There are numerous anecdotal reports describing the unique and profound effect music has on children with autism [Sacks, 2008]. A number of behavioral studies have also documented the musical skills often possessed by these children, including superior pitch- and timbre-processing abilities [Heaton, 2005; Heaton et al., 2001; Molnar-Szakacs & Heaton, 2012], as well as intact emotional responsiveness to music [Allen, Hill, & Heaton, 2009; Caria, Venuti, & de Falco, 2011]. However, very few studies have systematically investigated the efficacy of music-based interventions in children with autism [Simpson & Keen, 2011] or attempted to identify the neurobiological correlates of preserved musical function in ASD.

From the Department of Cognitive Neuroscience and Neuroimaging, National Brain Research Centre, Gurgaon, India (M.S., R.M., N.C.S.); Southend Klinik-Nurturing Connections, New Delhi, India (S.M.); Creating Connections, Kolkata, India (S.M.); School of Psychology, University of Birmingham, Birmingham, UK (S.M.). Received June 10, 2014; accepted for publication October 1, 2014. Address for correspondence and reprints: Nandini C. Singh, Department of Cognitive Neuroscience and Neuroimaging, National Brain Research Centre, NH-8, Nainwal Mode, Manesar, Gurgaon 122051, India. E-mail: [email protected] Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/aur.1437. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.

INSAR    Autism Research ••: ••–••, 2014
Recently, an auditory-motor mapping-based intervention demonstrated significant expressive language improvements in nonverbal children with ASD [Wan et al., 2011]. More recently, the only neuroimaging study to have explored the effect of familiar speech and song in a group of low-functioning children with ASD reported decreased left IFG involvement in the speech condition compared to song [Lai et al., 2012]. Together, these findings suggest that auditory processing mechanisms in ASD are a combination of impaired as well as preserved functions. A number of theoretical frameworks have been proposed to explain spoken language dysfunction in ASD [Ouimet et al., 2012], but no existing perspective can currently account for the dichotomy between speech and music processing in autism. Some, such as the Enhanced Perceptual Functioning (EPF) model [Mottron et al., 2000, 2006], frame it as a disorder of enhanced low-level processing, with or without intact global processing. The EPF model also suggests that performance in ASD is inversely related to stimulus complexity, according to the neural complexity hypothesis


[Samson et al., 2011], further implicating bottom-up processes. The complexity hypothesis has recently been extended to the auditory domain based on findings of enhanced simple processing, such as pure-tone discrimination, but impaired complex processing, such as discriminating speech in noise [Ouimet et al., 2012]. The rapid temporal processing hypothesis posits that speech perception relies on higher temporal resolution of complex stimuli compared with musical sounds and is preferentially processed in the left hemisphere [Zatorre, Belin, & Penhune, 2002], making the deficit in ASD a timing problem in which processing of slower, repetitive sounds such as music, subserved by the right hemisphere, remains intact. Another explanation of impaired speech processing in autism is abnormal processing of the human voice [Abrams et al., 2013; Gervais et al., 2004], developed further as the social motivation hypothesis [Chevallier et al., 2012], which holds that social stimuli such as human voices and faces are not rewarding for children with ASD; this, however, fails to explain the intrinsic reward value that music with a human voice component might have. While some of these cognitive theories are compatible with the underconnectivity findings in ASD [Belmonte et al., 2004; Just et al., 2004; Uddin, Supekar, & Menon, 2013] and explain part of its phenotype, none of the above models alone unifies the simultaneous loss and gain of abilities in the structurally and functionally overlapping domains of speech and music.
While the inherent heterogeneity in the aetiology and presentation of symptoms in autism may account for the absence of a unifying theory, it might still be possible to formulate a developmentally emergent framework, combining the enhanced perceptual functioning model with a top-down motivational reward process associated with musical stimuli, to better explain decreased responsiveness to speech while accounting for the preservation of abilities in the musical domain [Karmiloff-Smith, 2010; López, 2013]. Towards this broad goal, the specific objective of the present study was to tease apart the neural mechanisms underlying speech and music perception in ASD and controls, using a multimodal imaging approach, and to build upon the findings of Lai et al. [2012]. We did this by using a controlled experimental design with carefully matched speech and music stimuli, to address stimulus and familiarity effects, and by recruiting an ASD sample with varying levels of functioning. Using a combination of spoken words, sung words, and piano tones in a passive listening task in a combined functional magnetic resonance imaging-diffusion tensor imaging (fMRI-DTI) experiment, we explored mechanisms of speech and music processing in a sample of children with ASD compared with a group of neurotypical (TYP) controls.

Sharda et al./Intact sung-word connectivity in ASD


Materials and Methods

Participants

Forty-four children aged 6–16 years participated in the study. Of these, 22 were diagnosed with an ASD (age 11.0 (3.4) years) and 22 were typically developing controls (age 10.4 (2.4) years). The ASD group was diagnosed using the Autism Diagnostic Observation Schedule-Generic (ADOS-G) [Lord et al., 2000], the Childhood Autism Rating Scale (CARS II), Diagnostic and Statistical Manual of Mental Disorders, 4th Edition criteria, and expert medical opinion. Although all children in the ASD group met the cut-off criteria on the ADOS-G, they belonged to different ends of the autism "spectrum": eleven children had high-functioning autism, two had pervasive developmental disorder-not otherwise specified, and nine had classical autism with significant limitations. As a result, the groups also differed significantly on IQ, both verbal and performance, as measured by the Wechsler Abbreviated Scale of Intelligence (WASI). Heterogeneity in socio-cognitive functioning is the norm rather than the exception in ASD; however, most neuroimaging studies in the past have recruited a high-functioning subgroup of autism for convenience of task implementation [Yerys et al., 2009]. Our objective was to understand general mechanisms of auditory processing during spoken- and sung-word perception, and to study their manifestation as a function of individual ability across the entire autism spectrum. As such, we did not exclude children with lower levels of functioning; instead, we restricted our experimental paradigm to a passive task in which no explicit response was needed from the child. Diagnostic measures from the CARS II and the ADOS-G were significantly related (Fig. S1; r = 0.56, P = 0.004).
In addition, the Verbal IQ (VIQ) subscore from the WASI significantly predicted communication ability (as measured by the Vineland Communication subscale, r = 0.79, P < 0.0001), symptom severity (as measured by the ADOS-G, r = −0.56, P = 0.004), and general cognitive ability (as measured by full-scale IQ from the WASI, r = 0.91, P < 0.0001) in the ASD group, thereby characterizing the "spectrum" nature of our ASD sample [Lai et al., 2013] and confirming that language abilities as measured by the WASI were indeed a measure of symptom severity and heterogeneity in our ASD cohort. To minimize the effects of differences in cognitive/language ability, as well as maturational influences, across our samples, we used both age and full-scale IQ as covariates of no interest in all statistical models in the fMRI, DTI, and connectivity analyses. Any known comorbid genetic or neurological condition (Fragile-X, epilepsy) was an exclusion criterion; comorbidity with a psychological/psychiatric condition (attention deficit, anxiety) or medication use was not. The Vineland Adaptive Behaviour Scale


(VABS) was used to assess communication and daily living skills in the ASD group. The TYP group was recruited via flyers and personal contacts and, based on parent reports, had no known psychological or neurological deficits and no first-degree relative on the autism spectrum. Participants from both groups were matched pairwise on age (minimum 79% and maximum 100% match, regression coefficient = 0.92, P < 0.001), gender (16 males, six females), and handedness, and had normal hearing and normal or corrected-to-normal vision. Music responsiveness was assessed in all participants using a custom-made questionnaire (Appendix S1). Detailed demographic information can be found in Table 1. Written informed consent was obtained from parents of all participants in accordance with guidelines of the Institutional Review Board of the National Brain Research Centre.

Stimuli

Stimuli used in the functional imaging paradigm consisted of 30 spoken words, 30 sung words, and 15 piano tones. All words were bisyllabic nouns or verbs familiar to children. The same set of words was spoken as well as sung. All stimuli were recorded by a female with formal musical training. The piano tones consisted of a pair of major notes in the middle-C octave recorded from a digital piano. The sung words were composed on the same notes as the piano tones, with each tone repeated twice (Table S1). Stimuli were matched on root mean square (RMS) power using Goldwave software (www.goldwave.com) and 16-bit digitized at a sampling rate of 22 kHz, with a mean duration of 2.5 (0.6) seconds. To ensure that any differences between conditions were not due to psychoacoustic factors, we measured the spectral and temporal complexity of the stimuli using spectral structure variance [Reddy et al., 2009] and mean peak temporal modulation rate [Rogalsky et al., 2011] (Fig. S2).

fMRI Paradigm

Our functional imaging experiment was a passive listening task.
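As an aside on the stimulus preparation described above, RMS-power matching of audio clips can be sketched as follows. This is a generic NumPy illustration of the principle, not the Goldwave procedure the authors used; the synthetic sine-wave "stimuli" are stand-ins for the recorded words:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a mono signal."""
    return np.sqrt(np.mean(np.square(x)))

def match_rms(x, target_rms):
    """Scale signal x so its RMS power equals target_rms."""
    return x * (target_rms / rms(x))

# Two synthetic 2.5-s 'stimuli' at the paper's 22 kHz-range sampling rate,
# equalized to a common RMS level
fs = 22050
t = np.arange(0, 2.5, 1 / fs)
spoken = 0.3 * np.sin(2 * np.pi * 220 * t)   # stand-in for a spoken word
sung = 0.8 * np.sin(2 * np.pi * 262 * t)     # stand-in for a sung word

sung_matched = match_rms(sung, rms(spoken))
assert np.isclose(rms(sung_matched), rms(spoken))
```

Matching on RMS rather than peak amplitude equalizes perceived energy across clips of different waveform shapes, which is why it is the usual choice for loudness-matching speech and music stimuli.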
Past work by Redcay [Redcay & Courchesne, 2008; Redcay, Kennedy, & Courchesne, 2007] has demonstrated that passive listening paradigms for speech are effective in evoking reliable brain activity in infants. Our study builds upon the same approach to explore brain mechanisms underlying sound processing. We adopted a sparse-sampling technique with a long repetition time of 10 sec to minimize acoustic interference from the scanner [Hall et al., 1999] and to keep participants as comfortable as possible. We used an event-related auditory paradigm with three kinds of stimuli as active events, intermixed in pseudo-randomized order with 15 null events. Each event consisted of a silent


Table 1. Participants' Demographic Information

| Measure | ASD (n = 22): Mean (SD) [Range] | TYP (n = 22): Mean (SD) [Range] | Significance testing |
| Age (in years) | 11.0 (3.4) [6–16] | 10.4 (2.4) [7–16] | T = 0.77, P = 0.45 |
| Gender (% male) | 72.7 | 72.7 | χ² = 0, P = 0.995 |
| Handedness (% right-handed) | 95.45 | 100 | χ² = 0.002, P > 0.95 |
| Full-scale IQ (WASI^a) | 83.14 (17.8) [52–129] | 100.27 (11.6) [79–120] | T = 3.78, P = 0.0005 |
| Verbal IQ | 84.32 (20.5) [55–129] | 102.36 (12.8) [79–127] | T = 3.5, P = 0.001 |
| Performance IQ | 84.27 (16.43) [55–123] | 97.9 (10.3) [82–119] | T = 3.3, P = 0.002 |
| ADOS-G^b Composite Score | 10.81 (3.47) [7–16] | – | – |
| ADOS-G Communication Subscore | 3.95 (2.06) [1–8] | – | – |
| ADOS-G Social Subscore | 6.86 (2.15) [4–12] | – | – |
| VABS^c II Composite Score | 66.25 (8.84) [41–80] | – | – |
| VABS II Communication Subscore | 71.42 (11.11) [38–93] | – | – |
| VABS II Daily Living Skills Subscore | 74.21 (11.77) [48–92] | – | – |
| VABS II Socialization Subscore | 63.26 (9.91) [42–79] | – | – |
| CARS^d II | 32.61 (2.98) [28–35] | – | – |

^a Wechsler's Abbreviated Scale of Intelligence. ^b Autism Diagnostic Observation Schedule-Generic. ^c Vineland Adaptive Behaviour Scale. ^d Childhood Autism Rating Scale.
period with a stimulus-onset asynchrony of 3.5–4.5 sec, a stimulus duration of 2–3 sec, and a silent period, adding up to a total of 10 sec. The paradigm consisted of a total of 90 volumes, with 30 volumes in each of three runs. During the fMRI scanning session, sound stimuli were delivered binaurally using E-Prime® software, v.1.1 (Psychological Software Tools, Pittsburgh, PA) and MRI-compatible headphones (Invivo Corp, Gainesville, FL). Supporting head cushions and sandbags were used to minimize participant movement.

Image Acquisition

MR images were acquired on a 3.0 T Philips Achieva scanner (Philips, Eindhoven, The Netherlands) with a standard 8-channel head coil. Forty-five axial slices (3.5 mm thick, no gap), parallel to the AC-PC plane (defined by the anterior commissure (AC) and posterior commissure (PC)) and covering the whole brain, were imaged using a T2*-weighted gradient-echo sequence (repetition time (TR) = 10 sec, echo time (TE) = 45 ms, flip angle = 90°). The field of view was 230 × 230 mm² with a matrix size of 64 × 64. The images were reconstructed as a 128 × 128 matrix with an in-plane resolution of 1.79 × 1.79 mm². A high-resolution 3D T1-weighted image (TR = 8.4 sec; TE = 3.7 ms; flip angle = 8°; 25 cm × 23 cm field of view; 150 slices; 252 × 223 matrix) was acquired for anatomical localization of the functional scans. Diffusion-weighted images were acquired using a single-shot spin-echo sequence with TR = 8,000 ms, number of


signals averaged (NSA) = 2, field of view = 24 cm², and matrix size = 96 × 96, reconstructed to obtain an in-plane resolution of 1.67 mm². The slice thickness was 2.5 mm, with 65 contiguous sections providing whole-brain coverage. The diffusion gradient encoding scheme consisted of 16 noncollinear directions with a b value of 1,000 s/mm², plus one nondiffusion-weighted image.

Preparation for Neuroimaging

Despite its potential as a noninvasive tool to study human behaviour, neuroimaging with children poses a number of practical and technical challenges [Kotsoni, Byrd, & Casey, 2006; Raschle et al., 2009], including problems of movement, responsiveness, motivation, and alertness. It has been suggested that taking care to acclimatize both parent and child to the procedure helps maximize outcomes while ensuring comfort [Slifer, Koontz, & Cataldo, 2002]. To this end, all neuroimaging experiments included a detailed orientation procedure for parents and children, as described in Appendix S1 (SI).

fMRI Data Analysis

The first two volumes were discarded to allow for signal equilibration. Functional MRI data were analyzed using SPM8 analysis software (http://www.fil.ion.ucl.ac.uk/spm). Functional images were realigned to correct for motion, spatially transformed to standard stereotaxic space (based on the Montreal Neurological Institute


[MNI] coordinate system), and smoothed with a 6-mm full-width at half-maximum Gaussian kernel to decrease spatial noise prior to statistical analysis. The MNI template, rather than a custom-made pediatric template, was used for normalization, since an earlier study in children reported no significant changes in the localization of activations [Kang et al., 2003]. Translational movement in millimeters (x, y, z) and rotational motion in degrees (pitch, roll, and yaw) were calculated from the SPM8 motion-correction parameters of the functional images for each participant. Outlier volumes (scan-to-scan motion > 3 mm and global intensity > 2 SD) were identified and repaired using the motion-correction algorithms of ArtRepair v.4.0 [Mazaika et al., 2009]. The motion parameters were examined for each participant, and datasets with more than one third of outlier volumes per run were discarded. After this quality-control check, data from 40 of the 44 participants (20 in each group) were used for further analyses. In addition, to minimize variability in our sample due to age and the cognitive/language ability measured by WASI full-scale IQ, all subsequent analyses were adjusted for these variables by including them as covariates of no interest. Data were high-pass filtered at 128 s and modeled using a canonical hemodynamic response function. Four conditions (spoken, sung, tones, and rest) were modeled in the general linear model framework and analyzed at the group level using a random-effects model. Contrasts were made for between-group as well as between-condition effects, and results were reported at a statistical threshold of P < 0.05, corrected for family-wise error (FWE) at cluster level. Between-group comparisons were made using a thresholded, additive mask of regions activated in all three conditions with respect to baseline in both groups.
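The outlier-volume criteria above (scan-to-scan motion > 3 mm, global intensity deviating > 2 SD) can be illustrated with a simple flagging pass over realignment parameters and global intensities. This sketch only mimics the stated thresholds; it is not ArtRepair's actual algorithm, and the run-discard rule at the end restates the one-third criterion from the text:

```python
import numpy as np

def flag_outlier_volumes(translations, global_intensity,
                         motion_thresh=3.0, intensity_sd=2.0):
    """Flag volumes whose scan-to-scan displacement exceeds motion_thresh (mm)
    or whose global intensity deviates more than intensity_sd SDs from the mean.

    translations: (n_volumes, 3) array of x, y, z realignment estimates in mm
    global_intensity: (n_volumes,) array of mean whole-brain intensities
    """
    # Euclidean scan-to-scan displacement from the (x, y, z) translations
    deltas = np.linalg.norm(np.diff(translations, axis=0), axis=1)
    motion_flags = np.r_[False, deltas > motion_thresh]  # first volume has no predecessor

    g = np.asarray(global_intensity, dtype=float)
    z = (g - g.mean()) / g.std()
    intensity_flags = np.abs(z) > intensity_sd

    return motion_flags | intensity_flags

# A run is discarded if more than one third of its volumes are outliers
flags = flag_outlier_volumes(np.zeros((30, 3)),
                             np.random.default_rng(0).normal(100, 1, 30))
discard_run = flags.mean() > 1 / 3
```

In real pipelines the flagged volumes would be interpolated or regressed out rather than simply counted, but the counting step above is what drives the keep-or-discard decision described in the text.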

Psychophysiological Interactions

Psychophysiological interactions (PPI) are used to identify task-related functional connectivity between a seed region and the rest of the brain in a given psychological context, over and above the main effects of the task [Friston et al., 1997]. The most common approach to defining this seed region is to select a group of voxels with the strongest task effect [O'Reilly et al., 2012]. Even though this seed region of interest (ROI) is selected based on a previous analysis, the problem of circularity is overcome by including the main effects of the task in the PPI model. As a result, the PPI model will only detect functional connectivity effects orthogonal to the main task effects. In the present study, we were interested in identifying the modulatory effect of sung words compared to spoken words, and vice versa, in the ASD and TYP groups. The seed region was defined as a 6-mm sphere in left IFG using the peak voxel (−38, 30, 2) from the between-group contrast TYP > ASD for spoken > rest. The physiological variable was the blood oxygen level dependent (BOLD) time-series, averaged over each voxel in the seed ROI and adjusted for confounds using an omnibus F-contrast. The sung–spoken and spoken–sung contrasts served as the psychological variables. Two different PPI models were estimated for all participants. Each PPI term was calculated as the interaction between the deconvolved physiological variable and the psychological variable. This entire procedure was performed using the generalized psychophysiological interactions (gPPI) framework for SPM8 [McLaren et al., 2012]. PPI images for all participants were then entered into a second-level model to determine group PPI maps for both the ASD and TYP groups. Results were reported at P < 0.01, corrected for multiple comparisons using an extent threshold of 75 contiguous voxels. In addition, we carried out between-group PPI analyses for the contrasts spoken > rest and sung > rest, where group (ASD > TYP and TYP > ASD) served as the "psychological" variable [O'Reilly et al., 2012]; analyses similar to those described above were carried out in the gPPI framework. Age and full-scale IQ were used as covariates of no interest in all analyses.

Diffusion Tensor Imaging Analysis

DTI data were obtained for 32 of the 44 children (16 in each group) after a rigorous quality-control protocol to exclude images with motion artifacts (SI). Diffusion-weighted images were analyzed using FSL 4.1 (http://www.fmrib.ox.ac.uk/fsl), and voxel-wise fractional anisotropy (FA) was calculated for both groups using tract-based spatial statistics (TBSS); details can be found in Appendix S1.

Results


Comparable Brain Networks for Sung-Word Perception in ASD and TYP

Within-group comparisons with respect to baseline (P < 0.05, FWE-corrected at cluster level) revealed a bilateral temporal network, consisting of the superior temporal sulcus (STS), superior temporal gyrus (STG), and middle temporal gyrus (MTG), for processing spoken words, with significant hemispheric asymmetry in both groups. The asymmetry for spoken words was leftward in TYP (laterality index (LI) = 0.35) and rightward in ASD (LI = −0.55) (SI text and Fig. S3). The sung-word network for both groups, however, was extensively bilateral, spanning the temporal and frontal lobes, with no hemispheric asymmetry in the topology of activations. It was also significantly more robust than the spoken-word network in degree of activation, demonstrating the salience of sung-word stimuli for both groups during our passive listening task. This salience was further


confirmed by a between-condition comparison for the groups (Fig. 1). While the spoken > sung contrast showed no suprathreshold voxels for the ASD group, a region of the left angular gyrus was activated in the TYP group. The sung > spoken contrast, on the other hand, activated the same bilateral temporal network seen in the sung > rest comparison, consisting of STS, STG, and MTG, in both groups. Tones activated a right-lateralized region of primary auditory cortex in both groups.

Figure 1. Spoken vs. sung-word perception. The top panel shows axial brain sections (z = 0) with activations for spoken–sung comparisons at P < 0.05, FWE-corrected at cluster level. The TYP group shows an activated cluster in the left angular gyrus, while there were no significant clusters for the ASD group. The bottom panel shows the reverse contrast, sung–spoken, at the same statistical threshold, where both groups, ASD and TYP, demonstrate activation of robust and extensive bilateral temporal clusters. The scale bar represents T-values.

Decreased Left IFG Activation During Spoken-Word Listening in ASD

We then performed between-group comparisons to identify regions differentially recruited by the two groups while processing the three kinds of stimuli. There were no significant differences between ASD and TYP for sung-word or tone perception. Spoken-word perception, however, revealed a cluster in the left IFG that was significantly more activated in TYP than in ASD (Fig. 2A, B). Additionally, this decreased activation in left IFG during spoken-word perception was significantly correlated with verbal ability in the ASD group, as measured by the VIQ from the WASI (Fig. 2C), but not in the TYP group, further highlighting the role of this region in the development of normal language abilities. This WASI VIQ measure was in turn related to communication ability as measured by the VABS as well as the ADOS (Fig. S1), demonstrating the heterogeneity of language ability in our sample, both behaviorally and at the level of brain activity. Interestingly, individuals with significantly higher VIQs showed higher activity in left IFG, whereas those with significantly lower VIQs showed lower-than-average activity in left IFG. Studies in the past have focused only on homogeneous high-functioning subgroups, and as such these relationships have not been explored across the entire autism spectrum. While our findings may be driven by a handful of individuals at either extreme of the sample, and are extremely preliminary, it would be both interesting and pertinent to explore these relationships in larger samples, where single outliers are less likely to compromise the reliability of the findings. We did not find any relation between the level of activation during sung-word listening and language abilities in either group.
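For reference, the laterality index (LI) reported above is conventionally computed from suprathreshold activation in homologous left and right regions; the exact procedure used here is given in the SI, so the formula and the voxel counts below are generic illustrations only, chosen to land near the reported values:

```python
def laterality_index(left, right):
    """Conventional laterality index, LI = (L - R) / (L + R):
    +1 = fully left-lateralized, -1 = fully right-lateralized, 0 = symmetric.
    Inputs are any non-negative activation summaries (e.g. suprathreshold
    voxel counts) for homologous left and right regions."""
    return (left - right) / (left + right)

# Illustrative counts only (not the study's data), near the reported
# LIs of 0.35 (TYP, leftward) and -0.55 (ASD, rightward)
print(round(laterality_index(675, 325), 2))   # 0.35
print(round(laterality_index(225, 775), 2))   # -0.55
```

Because the index is a normalized difference, it is insensitive to overall activation strength and only reflects the left–right balance, which is why it is the standard summary for hemispheric asymmetry.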


Concurrent White-Matter Deficits in Language Networks in ASD

A whole-brain TBSS analysis of diffusion-weighted images showed reduced FA in a posterior region of the left superior longitudinal fasciculus (SLF), connecting temporal and frontal regions, in the ASD group compared to TYP. A region-wise TBSS analysis of language tracts identified differences in FA in the dorsal as well as the temporal part of the left SLF (Fig. 3A), but not in any of the other language tracts where FA was measured, such as the uncinate fasciculus (UF), inferior longitudinal fasciculus (ILF), and right SLF (Fig. 3B). In addition, the decrease in FA in the left SLF in the ASD group significantly predicted the functional activation in left IFG during spoken-word listening (Fig. 3C), suggesting that the impairment in speech perception in ASD, evidenced by poor verbal abilities and hypoactivation in left IFG during spoken-word perception, might indeed be related to, and reflective of, the underlying neuroanatomical impairment in the left hemisphere white-matter pathway.

Preserved Fronto-Temporal Connectivity During Sung-Word Listening in ASD

Our findings from structural imaging using DTI reflect impairment in the left hemisphere language networks in


Figure 2. Decreased left IFG activation during spoken-word listening in the ASD group is related to verbal ability. (A) An axial brain section showing a cluster in the left IFG, with a peak at MNI coordinate −38, 30, 2, activated in the contrast TYP > ASD for spoken–rest at P < 0.05, FWE-corrected at cluster level. (B) Quantification of the parameter estimates in the left IFG cluster during the spoken–rest condition, showing significantly greater activity in TYP (error bars represent SEM). (C) The decreased activity in left IFG during spoken-word listening is significantly related to the language ability of the ASD sample as measured by verbal IQ (WASI) (r = 0.47, P = 0.042).

the ASD group. Findings from our functional activation study, on the other hand, show that only spoken-word processing was impaired in the ASD group, while sung-word processing remained intact and comparable to the TYP group. Given that speech and music processing share common resources [Patel, 2003], we wanted to explore whether alternate pathways might be driving the intact sung-word processing in ASD. To address this, and to further delineate mechanistic differences between spoken- and sung-word processing in ASD and TYP, we performed a PPI-based functional connectivity analysis using a left IFG seed ROI. In the sung > spoken context, left IFG showed increased connectivity with bilateral posterior temporal, right parieto-occipital, and right cerebellar regions (Fig. 4B) in the ASD group alone. The TYP group showed increased connectivity within the left inferior frontal cortex. This finding confirmed that fronto-temporal connectivity in the ASD group was indeed modulated by condition, with perception of sung words evoking such connectivity more than spoken words. The TYP group, on the other hand, did not show condition-dependent differences in fronto-temporal connectivity; instead, there was a difference in the extent to which the left IFG was recruited. In the spoken > sung contrast, there were no regions more functionally connected with the left IFG in either ASD or TYP. To explore these between-group differences in connectivity directly, we compared PPI maps during spoken > rest and sung > rest for TYP > ASD and ASD > TYP, respectively (Fig. 4C). While there were no

INSAR

regions which showed increased connectivity with the left IFG during spoken-word listening in ASD more than TYP, there was significantly increased connectivity of the left IFG with a left temporal region in the TYP > ASD contrast, underscoring previous findings of impaired fronto-temporal connectivity during speech perception in ASD. During sung-word perception, however, there were no fronto-temporal differences between groups. Instead, the ASD group showed increased connectivity with the left cerebellum, implicated as part of a fronto-cerebellar circuit involved in several domains of cognitive function, including language as well as song processing [Brown et al., 2004; Callan et al., 2007; Makris et al., 2005]. The TYP group, by contrast, showed increased connectivity with another peri-sylvian region, the right anterior insula, implicated in articulation as well as emotion-related processes.
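The interaction regressor at the heart of a PPI model is (roughly) the product of the task contrast and the seed time course, entered into the GLM alongside the task and seed regressors themselves. The sketch below is a simplified illustration with hypothetical function names; the gPPI approach used in practice additionally deconvolves the seed BOLD signal to the neural level before forming the product:

```python
def convolve(signal, kernel):
    """Discrete convolution, truncated to the length of `signal`."""
    out = []
    for t in range(len(signal)):
        out.append(sum(kernel[k] * signal[t - k]
                       for k in range(len(kernel)) if 0 <= t - k < len(signal)))
    return out

def ppi_regressors(task, seed, hrf):
    """Build psychological, physiological, and interaction regressors.

    task : condition contrast per scan (e.g. +1 sung, -1 spoken, 0 rest)
    seed : BOLD time course extracted from the seed ROI (here, left IFG)
    hrf  : sampled haemodynamic response function
    """
    mean_seed = sum(seed) / len(seed)
    centred_seed = [s - mean_seed for s in seed]        # physiological term
    psych = convolve(task, hrf)                         # psychological term
    inter = [t * s for t, s in zip(task, centred_seed)]
    ppi = convolve(inter, hrf)                          # interaction term
    return psych, centred_seed, ppi
```

A voxel whose time course loads on `ppi` beyond what `psych` and the seed term explain shows condition-dependent coupling with the seed, which is what the sung > spoken PPI contrast tests.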

Discussion

Our results show that speech processing, subserved by a classical fronto-temporal brain circuit, is impaired in ASD. However, song perception remains intact and is subserved by similar functional networks. Integrity of frontal and temporal cortices is considered essential for normal language development, yet both structural and functional MRI studies have shown impairments in these regions in individuals with ASD [Eyler et al., 2012; Lai et al., 2012], consistent with our findings. At the same time, singing is known to engage a bilateral

Sharda et al./Intact sung-word connectivity in ASD


Figure 3. Concurrent white-matter microstructure deficits in the ASD group. (A) Decreased FA in the left SLF (dorsal as well as temporal parts) (error bars represent SEM). (B) A schematic of the white-matter tracts involved in language processing and the regions they connect. (C) Correlation between FA of the left SLF and BOLD activity in the left IFG during spoken-word listening in the ASD group (r = 0.58, P = 0.018).

network comprising frontal and temporal regions, which overlaps with the language-related pathways delineated above [Wan et al., 2010]. The redistribution of activity within such a preexisting, overlapping network may serve as a central mechanism for functional compensation of the language system during song-based therapies; it may also explain the concurrence of spoken-language deficits and spared musical processing in ASD. Speech and song have been used extensively as prototypes for studying language and music organization in the brain [Fedorenko et al., 2009; Merrill et al., 2012; Rogalsky et al., 2011]. Convergent findings show extensive overlap between the brain regions underlying the processing of these two categories of sounds [Schön et al., 2010]. However, there are also fine-grained acoustic differences which have differential effects on perception, attention, and memory [Merrill


et al., 2012; Rogalsky et al., 2011]. As a result, preservation of function in one domain can co-occur with impairment in the other. A common example arises in individuals with congenital amusia, who have significant pitch-processing deficits in the musical domain but normal speech and prosody processing [Tillmann et al., 2011]. Evidence also indicates that such an organization of language and music processes in the brain might facilitate transfer of functions from one domain to the other [Patel, 2011, 2013]. The possibility of such transfer has significant implications for rehabilitation of speech and language abilities via music. Lastly, research in the last decade has demonstrated that music is a rich, motivating stimulus which can engage multimodal networks in the brain [Zatorre, Chen, & Penhune, 2007]. Given the mounting evidence for its therapeutic effects, music in the context of ASD might indeed have the potential to


Figure 4. Preserved fronto-temporal connectivity during sung-word listening in ASD. (A) Location of the seed in left IFG (LIFG) used for the PPI analyses. (B) Within-group PPI analysis for the sung > spoken context, showing increased connectivity of the left IFG seed with a posterior temporal region in the ASD group (red), as opposed to localized frontal connectivity in the TYP group (blue). (C) A direct between-group comparison of the PPI maps for each condition separately revealed increased fronto-temporal connectivity during spoken–rest for TYP > ASD (blue), but no differences in fronto-temporal connectivity for sung–rest in either ASD > TYP (red) or TYP > ASD (blue).

compensate for existing structural impairments by strengthening functional networks through a combination of bottom-up and top-down processes. In light of the developmentally emergent model suggested in the introduction, speech-related processes may be impaired by a combination of deficient processing of complex auditory information and decreased motivation to orient to social stimuli such as speech, whereas musical function, owing to its predictability, structure, and intrinsic reward value, seems to be processed in a relatively spared or sometimes superior fashion. Furthermore, this alternative compensation of communicative function via music appears possible because of the shared fronto-temporal circuitry for speech- and music-related processes elucidated earlier. Our finding of increased left frontal–left cerebellar connectivity during sung-word perception in ASD may also suggest a possible rewiring at the cortical level, whereby left-hemisphere language regions might still perform song-processing functions that compensate for speech, reflected in the left-lateralization of cerebellar–cortical connectivity.


In the context of this developmental model, two hypotheses had been suggested. The first was entrainment of speech and communicative functions through music via bottom-up processes, such as low-level processing of repetitive elements of music like rhythm and pitch drones, which could improve perceptual and fine motor skills and help in multimodal integration. The second, concerning the inherent motivational power of music, involved more top-down mechanisms with a positive effect on attention- and engagement-related processes as a whole. Future studies may attempt to understand and synergize these hypotheses from a neurobiological perspective, both to better understand the decreased responsiveness to speech in ASD and to best utilize the potential of music-based enhancement of communicative function, not just in autism spectrum disorders but also in other disorders of impaired fronto-temporal connectivity such as frontotemporal dementia and stroke-induced aphasia. In this study, we investigated brain responses to music vs. speech stimuli in a large group of children across the autism spectrum. As stated earlier, a previous study by Lai


et al. [2012] investigated the effects of preferred and familiar speech and song in a group of low-functioning children with ASD, many of whom were scanned under sedation. Their findings demonstrated decreased involvement of the left IFG during the speech but not the song condition in the ASD group. Although the results of Lai et al.'s [2012] study may have been confounded by the fact that participants were low-functioning and were scanned under propofol sedation, which may have impacted their ability to attend during auditory stimulation [Heinke et al., 2004], the study has significant clinical relevance. Our findings extend those of Lai et al. [2012] to awake, non-sedated children with ASD across a range of levels of functioning and show that sung-word perception, subserved by fronto-temporal circuitry, is intact in autism despite structural compromise of the main white-matter tract that normally underpins this circuitry. In addition, our experimental paradigm employs carefully matched spoken- and sung-word stimuli, which are very similar in their acoustic attributes (Fig. S2) yet evoke distinctly different activation networks in the brain, further confirming that the difference in processing of spoken and sung words is not purely driven by a deficit in information processing [Fabricius, 2012] but has a higher-level socio-cognitive component essential for communication. Our passive task with no active behavioral measure may present an interpretive limitation, but it was essential for attaining the generality of inference conferred by this large, heterogeneous group of subjects. To establish behavioral responses to music, we administered a music responsiveness questionnaire to parents of all children and found that responsiveness was indeed greater in the ASD group, consistent with their brain activity.
While the inherent variability of autism samples may lead to inconsistencies in the interpretation of data and require rethinking existing research methodology, this variability is an integral, characteristic feature of the ASD phenotype and a distinguishing aspect of this study. The autism cohort used here not only reflects this heterogeneity but also situates the implications of our findings in the context of this variability, an approach that is still uncommon in autism research. There is a need for interdisciplinary and integrative approaches that embrace the heterogeneity characterizing ASD and make it central to research practice. To account for statistical inconsistencies arising from a heterogeneous cohort, all measures were controlled for effects of age and cognitive ability by including them as covariates of no interest in our analyses. Our findings thus demonstrate that sung-word perception is indeed preserved across the entire autism spectrum, independent of symptom severity and language ability, and provide a neurobiological basis for the use of music-based therapies across the autism spectrum.


The implications of these findings are significant from both clinical and experimental perspectives. While most research on non-savant ASD has so far focused on deficits, on what might not be functional in the brain of an individual with autism, the findings of this study are among the first in a line of research that might actually capitalize on the potential of the autistic brain to compensate for its losses. As suggested by Barnhill [2013], the dual connectivity hypothesis could explain how "one form of connectivity might be able to make up for disruption in another form of connectivity" and how this phenomenon could have widespread implications for the development of rehabilitative strategies for neurocognitive disorders, particularly ASD.

Acknowledgments

We would like to thank the National Brain Research Centre for intramural funding, as well as the Department of Science and Technology for funding a grant on autism research. We also thank Mr. Jitender Ahlawat and Mr. T.R. Abhilash (National Brain Research Centre, India) for technical assistance in data acquisition, and all the parents and children who participated in our study. We would also like to acknowledge Dr. Kavita Arora, MD, CCST, and Dr. Amit Sen, MD, MRCPsych, at the Children First Mental Health Institute for offering invaluable comments and support for this study. The authors also thank two anonymous reviewers whose excellent comments improved the manuscript. None of the authors has any conflict of interest, financial or otherwise.

References

Abrams, D.A., Lynch, C.J., Cheng, K.M., Phillips, J., Supekar, K., et al. (2013). Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proceedings of the National Academy of Sciences of the United States of America, 110, 12060–12065.
Allen, R., Hill, E., & Heaton, P. (2009). The subjective experience of music in autism spectrum disorder. Annals of the New York Academy of Sciences, 1169, 326–331.
Barnhill, E. (2013). Neural connectivity, music, and movement: A response to Pat Amos. Frontiers in Integrative Neuroscience, 7(29), 1–3.
Belmonte, M.K., Allen, G., Beckel-Mitchener, A., Boulanger, L.M., Carper, R.A., & Webb, S.J. (2004). Autism and abnormal development of brain connectivity. Journal of Neuroscience, 24, 9228–9231.
Brown, S., Martinez, M.J., Hodges, D.A., Fox, P.T., & Parsons, L.M. (2004). The song system of the human brain. Brain Research. Cognitive Brain Research, 20, 363–375.
Buday, E.M. (1995). The effects of signed and spoken words taught with music on sign and speech imitation by children with autism. Journal of Music Therapy, 32, 189–202.


Callan, D.E., Kawato, M., Parsons, L., & Turner, R. (2007). Speech and song: The role of the cerebellum. Cerebellum (London, England), 6, 321–327.
Caria, A., Venuti, P., & de Falco, S. (2011). Functional and dysfunctional brain circuits underlying emotional processing of music in autism spectrum disorders. Cerebral Cortex, 21, 2838–2849.
Chevallier, C., Kohls, G., Troiani, V., Brodkin, E.S., & Schultz, R.T. (2012). The social motivation theory of autism. Trends in Cognitive Sciences, 16, 231–239.
Eyler, L.T., Pierce, K., & Courchesne, E. (2012). A failure of left temporal cortex to specialize for language is an early emerging and fundamental property of autism. Brain: A Journal of Neurology, 135(Pt 3), 949–960.
Fabricius, T. (2012). On neural systems for speech and song in autism. Brain: A Journal of Neurology, 135, e222.
Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1–9.
Friston, K.J., Buechel, C., Fink, G.R., Morris, J., Rolls, E., & Dolan, R.J. (1997). Psychophysiological and modulatory interactions in neuroimaging. Neuroimage, 6, 218–229.
Gervais, H., Belin, P., Boddaert, N., Leboyer, M., Coez, A., et al. (2004). Abnormal cortical voice processing in autism. Nature Neuroscience, 7, 801–802.
Hall, D.A., Haggard, M.P., Akeroyd, M.A., Palmer, A.R., Summerfield, A.Q., et al. (1999). "Sparse" temporal sampling in auditory fMRI. Human Brain Mapping, 7, 213–223.
Heaton, P. (2005). Interval and contour processing in autism. Journal of Autism and Developmental Disorders, 35, 787–793.
Heaton, P. (2009). Assessing musical skills in autistic children who are not savants. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364, 1443–1447.
Heaton, P., Pring, L., & Hermelin, B. (2001). Musical processing in high functioning children with autism. Annals of the New York Academy of Sciences, 930, 443–444.
Heinke, W., Kenntner, R., Gunter, T.C., Sammler, D., Olthoff, D., & Koelsch, S. (2004). Sequential effects of increasing propofol sedation on frontal and temporal cortices as indexed by auditory event-related potentials. Anesthesiology, 100, 617–625.
Just, M.A., Cherkassky, V.L., Keller, T.A., & Minshew, N.J. (2004). Cortical activation and synchronization during sentence comprehension in high-functioning autism: Evidence of underconnectivity. Brain: A Journal of Neurology, 127, 1811–1821.
Kana, R.K., Libero, L.E., & Moore, M.S. (2011). Disrupted cortical connectivity theory as an explanatory model for autism spectrum disorders. Physics of Life Reviews, 8, 410–437.
Kang, H.C., Burgund, E.D., Lugar, H.M., Petersen, S.E., & Schlaggar, B.L. (2003). Comparison of functional activation foci in children and adults using a common stereotactic space. Neuroimage, 19, 16–28.
Kanner, L. (1943). Autistic disturbances of affective contact. Nervous Child, 2, 217–250.
Karmiloff-Smith, A. (2010). Neuroimaging of the developing brain: Taking "developing" seriously. Human Brain Mapping, 31, 934–941.


Koelsch, S. (2011). Toward a neural basis of music perception—A review and updated model. Frontiers in Psychology, 2(110), 1–20.
Kotsoni, E., Byrd, D., & Casey, B.J. (2006). Special considerations for functional magnetic resonance imaging of pediatric populations. Journal of Magnetic Resonance Imaging, 23, 877–886.
Lai, G., Pantazatos, S.P., Schneider, H., & Hirsch, J. (2012). Neural systems for speech and song in autism. Brain: A Journal of Neurology, 135, 961–975.
Lai, M.-C., Lombardo, M.V., Chakrabarti, B., & Baron-Cohen, S. (2013). Subgrouping the autism "spectrum": Reflections on DSM-5. PLoS Biology, 11, e1001544.
Lim, H.A. (2010). Effect of "developmental speech and language training through music" on speech production in children with autism spectrum disorders. Journal of Music Therapy, 47, 2–26.
Lin, S.-T., Yang, P., Lai, C.-Y., Su, Y.-Y., Yeh, Y.-C., Huang, M.-F., & Chen, C.-C. (2011). Mental health implications of music: Insight from neuroscientific and clinical studies. Harvard Review of Psychiatry, 19, 34–46.
López, B. (2013). Beyond modularisation: The need of a socio-neuro-constructionist model of autism. Journal of Autism and Developmental Disorders (Epub ahead of print).
Lord, C., Risi, S., Lambrecht, L., Cook, E.H., Leventhal, B.L., et al. (2000). The Autism Diagnostic Observation Schedule-Generic: A standard measure of social and communication deficits associated with the spectrum of autism. Journal of Autism and Developmental Disorders, 30, 205–223.
Makris, N., Schlerf, J.E., Hodge, S.M., Haselgrove, C., Albaugh, M.D., et al. (2005). MRI-based surface-assisted parcellation of human cerebellar cortex: An anatomically specified method with estimate of reliability. Neuroimage, 25, 1146–1160.
Mazaika, P.K., Hoeft, F., Glover, G.H., & Reiss, A.L. (2009). Methods and software for fMRI analysis of clinical subjects. Neuroimage, 47, S58.
McLaren, D.G., Ries, M.L., Xu, G., & Johnson, S.C. (2012). A generalized form of context-dependent psychophysiological interactions (gPPI): A comparison to standard approaches. Neuroimage, 61, 1277–1286.
Merrill, J., Sammler, D., Bangert, M., Goldhahn, D., Lohmann, G., Turner, R., & Friederici, A.D. (2012). Perception of words and pitch patterns in song and speech. Frontiers in Psychology, 3(76), 1–13.
Molnar-Szakacs, I., & Heaton, P. (2012). Music: A unique window into the world of autism. Annals of the New York Academy of Sciences, 1252, 318–324.
Mottron, L., Peretz, I., & Ménard, E. (2000). Local and global processing of music in high-functioning persons with autism: Beyond central coherence? Journal of Child Psychology and Psychiatry, and Allied Disciplines, 41, 1057–1065.
Mottron, L., Dawson, M., Soulières, I., Hubert, B., & Burack, J. (2006). Enhanced perceptual functioning in autism: An update, and eight principles of autistic perception. Journal of Autism and Developmental Disorders, 36, 27–43.
Nakata, T., & Trehub, S.E. (2004). Infants' responsiveness to maternal speech and singing. Infant Behavior and Development, 27, 455–464.
O'Connell, T.S. (1974). The musical life of an autistic boy. Journal of Autism and Childhood Schizophrenia, 4, 223–229.


O'Reilly, J.X., Woolrich, M.W., Behrens, T.E.J., Smith, S.M., & Johansen-Berg, H. (2012). Tools of the trade: Psychophysiological interactions and functional connectivity. Social Cognitive and Affective Neuroscience, 7, 604–609.
Ouimet, T., Foster, N.E.V., Tryfon, A., & Hyde, K.L. (2012). Auditory-musical processing in autism spectrum disorders: A review of behavioral and brain imaging studies. Annals of the New York Academy of Sciences, 1252, 325–331.
Patel, A.D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674–681.
Patel, A.D. (2011). Why would musical training benefit the neural encoding of speech? The OPERA hypothesis. Frontiers in Psychology, 2(142), 1–14.
Patel, A.D. (2013). Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hearing Research, 308, 98–108.
Raschle, N.M., Lee, M., Buechler, R., Christodoulou, J.A., Chang, M., et al. (2009). Making MR imaging child's play—Pediatric neuroimaging protocol, guidelines and procedure. Journal of Visualized Experiments, 29, 1–5.
Redcay, E., & Courchesne, E. (2008). Deviant functional magnetic resonance imaging patterns of brain activity to speech in 2–3-year-old children with autism spectrum disorder. Biological Psychiatry, 64, 589–598.
Redcay, E., Kennedy, D.P., & Courchesne, E. (2007). fMRI during natural sleep as a method to study brain function during early childhood. Neuroimage, 38, 696–707.
Reddy, R.K., Ramachandra, V., Kumar, N., & Singh, N.C. (2009). Categorization of environmental sounds. Biological Cybernetics, 100, 299–306.
Rogalsky, C., Rong, F., Saberi, K., & Hickok, G. (2011). Functional anatomy of language and music perception: Temporal and structural factors investigated using functional magnetic resonance imaging. Journal of Neuroscience, 31, 3843–3852.
Sacks, O. (2008). Musicophilia: Tales of music and the brain. New York: Alfred A. Knopf.
Samson, F., Hyde, K.L., Bertone, A., Soulières, I., Mendrek, A., et al. (2011). Atypical processing of auditory temporal complexity in autistics. Neuropsychologia, 49, 546–555.
Schlaug, G., Norton, A., Marchina, S., Zipse, L., & Wan, C.Y. (2010). From singing to speaking: Facilitating recovery from nonfluent aphasia. Future Neurology, 5, 657–665.
Schneider, S., Schönle, P.W., Altenmüller, E., & Münte, T.F. (2007). Using musical instruments to improve motor skill recovery following a stroke. Journal of Neurology, 254, 1339–1346.
Schön, D., Gordon, R., Campagne, A., Magne, C., Astésano, C., Anton, J.-L., & Besson, M. (2010). Similar cerebral networks in language, music and song perception. Neuroimage, 51, 450–461.
Sherwin, A.C. (1953). Reactions to music of autistic (schizophrenic) children. American Journal of Psychiatry, 109, 823–831.
Simpson, K., & Keen, D. (2011). Music interventions for children with autism: Narrative review of the literature. Journal of Autism and Developmental Disorders, 41, 1507–1514.
Slifer, K.J., Koontz, K.L., & Cataldo, M.F. (2002). Operant-contingency-based preparation of children for functional magnetic resonance imaging. Journal of Applied Behavior Analysis, 35, 191–194.


Thaut, M.H. (1988). Measuring musical responsiveness in autistic children: A comparative analysis of improvised musical tone sequences of autistic, normal, and mentally retarded individuals. Journal of Autism and Developmental Disorders, 18, 561–571.
Tillmann, B., Rusconi, E., Traube, C., Butterworth, B., Umiltà, C., & Peretz, I. (2011). Fine-grained pitch processing of music and speech in congenital amusia. Journal of the Acoustical Society of America, 130, 4089–4096.
Uddin, L.Q., Supekar, K., & Menon, V. (2013). Reconceptualizing functional brain connectivity in autism from a developmental perspective. Frontiers in Human Neuroscience, 7(458), 1–11.
Wan, C.Y., Rüber, T., Hohmann, A., & Schlaug, G. (2010). The therapeutic effects of singing in neurological disorders. Music Perception, 27, 287–295.
Wan, C.Y., Bazen, L., Baars, R., Libenson, A., Zipse, L., et al. (2011). Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: A proof of concept study. PLoS ONE, 6, e25505.
Wan, C.Y., Marchina, S., Norton, A., & Schlaug, G. (2012). Atypical hemispheric asymmetry in the arcuate fasciculus of completely nonverbal children with autism. Annals of the New York Academy of Sciences, 1252, 332–337.
Wigram, T., & Gold, C. (2006). Music therapy in the assessment and treatment of autistic spectrum disorder: Clinical application and research evidence. Child: Care, Health and Development, 32, 535–542.
Yerys, B.E., Jankowski, K.F., Shook, D., Rosenberger, L.R., Barnes, K.A., et al. (2009). The fMRI success rate of children and adolescents: Typical development, epilepsy, attention deficit/hyperactivity disorder, and autism spectrum disorders. Human Brain Mapping, 30, 3426–3435.
Zatorre, R.J., Belin, P., & Penhune, V.B. (2002). Structure and function of auditory cortex: Music and speech. Trends in Cognitive Sciences, 6, 37–46.
Zatorre, R.J., Chen, J.L., & Penhune, V.B. (2007). When the brain plays music: Auditory-motor interactions in music perception and production. Nature Reviews Neuroscience, 8, 547–558.

Supporting Information

Additional Supporting Information may be found in the online version of this article at the publisher's web-site:

Figure S1. Cognitive characterization of the clinical sample. Panel (A) shows the correlation between the diagnostic measures, ADOS-G and CARS II, used for the ASD group. Panels (B) and (C) show the correlations of symptom severity with full-scale IQ and verbal IQ, respectively, as measured by the WASI. Panel (D) shows the relation between communication ability, as measured by the VABS, and verbal IQ. These graphs characterize the heterogeneity of our ASD sample in terms of symptom severity as well as language and communication ability, and graphically depict the relationship


between the different measures used to characterize our clinical ASD sample.

Figure S2. Stimulus characteristics. (A) Representative stimuli from the three categories (spoken words, sung words, and piano tones), with respective waveforms (top panel) and spectrograms (bottom panel). (B) Mean peak temporal modulation rate of the three kinds of stimuli, as a metric of the rate of change of information; a one-way ANOVA on ranks showed no significant differences (H(1,2) = 3.159, P = 0.2). (C) Mean spectral structure variance of the three stimuli was significantly different as measured by a one-way ANOVA (F(1,2) = 7.68, P = 0.001; post-hoc Tukey's test, spoken > sung, P < 0.001), highlighting the difference in spectral complexity between the stimulus categories (error bars represent SD).

Figure S3. Overlapping bilateral temporal activation for sung-word perception in ASD and TYP. The top panel shows axial brain slices with overlapping (purple)


activations for the ASD (red) and TYP (blue) groups for the three conditions (spoken–rest, sung–rest, and tones–rest) at P < 0.05, FWE-corrected at cluster level. The bottom panel shows the lateralization of temporal lobe activity during each condition for both groups. The sung–rest contrast is comparably bilateral in both groups, whereas spoken–rest is right-lateralized in ASD and left-lateralized in TYP, and tones–rest is right-lateralized in both groups.

Figure S4. Mean RMS displacement of DTI scans in the ASD and TYP groups. The box-and-whisker plot represents the mean RMS displacement of each DTI volume for participants in both groups, before and after eddy correction. There were no significant differences in RMS displacement between the groups (P = 0.837).

Table S1. List of stimuli.

Appendix S1. Supplementary Information (SI).
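The lateralization summarized for Figure S3 is conventionally quantified with a laterality index computed from left- and right-hemisphere activation measures. A minimal sketch, assuming suprathreshold voxel counts as the activation measure (the paper does not specify the exact measure, so this is illustrative):

```python
def laterality_index(left, right):
    """Standard laterality index in [-1, 1].

    Positive values indicate left-lateralized activity, negative values
    right-lateralized; 0 indicates perfectly bilateral activation.
    """
    if left + right == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / (left + right)

# e.g. 120 suprathreshold voxels left, 80 right -> LI = 0.2 (left-lateralized)
li = laterality_index(120, 80)
```

Under this convention, the sung–rest contrast would yield an LI near 0 in both groups, while spoken–rest would yield a negative LI in ASD and a positive LI in TYP.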

Sharda et al./Intact sung-word connectivity in ASD

13
