Spectral versus temporal features in dichotic listening


BRAIN AND LANGUAGE 7, 375-386 (1979)

PIERRE L. DIVENYI AND ROBERT EFRON*

Veterans Administration Medical Center, Martinez, California
Ear advantage for the processing of dichotic speech sounds can be separated into two components. One of these components is an ear advantage for those phonetic features that are based on spectral acoustic cues. This ear advantage follows the direction of a given individual’s ear dominance for the processing of spectral information in dichotic sounds, whether speech or nonspeech. The other factor represents a right-ear advantage for the processing of temporal information in dichotic sounds, whether speech or nonspeech. The present experiments were successful in dissociating these two factors. Since the results clearly show that ear advantage for speech is influenced by ear dominance for spectral information, a full understanding of the asymmetry in the perceptual salience of speech sounds in any individual will not be possible without knowing his ear dominance.

INTRODUCTION

When two different auditory signals are presented simultaneously, one to each ear, one of them is usually perceived as having a greater perceptual salience than the other. Two main types of such perceptual asymmetry have been reported. The first is observed when the two dichotic signals are speech sounds; the sound in the right ear (in about 80% of right-handed individuals) is identified more accurately than the one in the left. In the second type of asymmetry, observable when the two signals are pure tones, the pitch of the tone in either ear may dominate the binaural percept. The purpose of the experiments described in this report was to identify the interaction between these two types of asymmetry when speech sounds are used.

The first asymmetry has been called the right-ear advantage (REA) for speech (Kimura, 1961) and has been assumed to reflect a left-hemispheric dominance for the processing of speech sounds (see Liberman, 1975, for a review). Although the REA is small (typically a 5-15% difference between right- and left-ear recognition scores), it has been observed for a wide variety of both naturally occurring and synthetic speech sounds, words as well as nonsense syllables (Kimura, 1961, 1967), e.g., consonant-vowel (CV), vowel-consonant (VC) (Shankweiler & Studdert-Kennedy, 1967; Studdert-Kennedy & Shankweiler, 1970), consonant-vowel-consonant (CVC) (Haggard & Parkinson, 1971), and other combinations. Not all speech sounds, however, are better identified by the right ear: in particular, steady-state vowel sounds (Shankweiler & Studdert-Kennedy, 1967) and isolated fricatives (Darwin, 1975) yield no reliable REA. However, when vowels occur in the context of other speech sounds, as in CVC syllables, a REA reappears (Haggard, 1971; Weiss & House, 1973), suggesting that it is a consequence of an ear asymmetry in the processing of consonantal information. A REA may also accompany certain types of nonspeech signals, such as tonal sequences with frequency transitions (Halperin, Nachshon, & Carmon, 1973) or Morse-code patterns (Papcun, Krashen, Terbeek, Remington, & Harshman, 1974). These last findings, coupled with existing evidence that the left hemisphere, and in particular the left temporal lobe, plays a special role in the processing of temporal information (Efron, 1963), suggest that a REA may become manifest only when the subject is listening to auditory signals which vary in the time domain, a class of acoustic stimuli within which speech is only a special case (Berlin & Cullen, 1975).

Requests for reprints should be sent to Dr. Pierre L. Divenyi, Speech and Hearing Research Labs, VA Medical Center, 150 Muir Road, Martinez, CA 94553.
* Also: Department of Neurology, University of California, Davis.

0093-934X/79/030375-12$02.00/0 Copyright © 1979 by Academic Press, Inc. All rights of reproduction in any form reserved.

The second type of auditory perceptual asymmetry arises when the two dichotic signals are two tones relatively close in frequency (Efron & Yund, 1974, 1976). For almost every listener, the pitch mixture of such a dichotic complex is dominated, to a greater or lesser degree, by the tone presented either to the right or to the left ear; very few individuals experience a perfectly balanced dichotic pitch mixture. Unlike the ear advantage for speech, there are about as many right-ear dominant individuals as there are left-ear dominant ones. Ear dominance for pitch (ED) is independent of handedness as well as of the ear advantage observed with dichotic speech sounds (Yund & Efron, 1976). On the other hand, ED is correlated with a difference in the frequency resolving power of the two ears (Divenyi, Efron, & Yund, 1977). It thus seems reasonable to assume that ED is a consequence of an asymmetry in the processing of spectral information and is produced by a mechanism different from that responsible for the REA observed with time-varying auditory signals. However, since speech sounds do carry spectral information, we might expect the REA for speech, in subjects with right ED, to be confounded with this right ED for tones. Conversely, in subjects who are left-ear dominant for tones, any REA for speech sounds must be a consequence of some other, possibly time-related, asymmetry that is unique to speech processing. In the experiments described below, we attempted to uncover

those critical acoustic-phonetic features of the speech stimulus that might cause a subject with a left ED for the pitch of pure tones to display a REA for speech sounds.

METHODS

Experimental Procedures

The experimental paradigm employed for the measurement of ear asymmetry in the perception of dichotically presented speech sounds was identical to the one developed in our laboratory to assess the direction and the degree of ear dominance for the pitch of dichotic two-tone complexes (Efron & Yund, 1974, 1976). The paradigm, illustrated in Fig. 1, is a two-alternative forced-choice (2AFC) method. In any given block of trials, the stimulus consisted of two monaurally discriminable sounds, "A" and "B." In half of the trials (at random), the left ear was presented with sound "A" followed (after a 500-msec interval) by sound "B" while, simultaneously, the right ear received sound "B" followed by sound "A" (see upper half of Fig. 1). In the other half of the trials the order of presentation was reversed (see lower half of Fig. 1). The subjects' task was to indicate (by pressing one of two labeled keys) whether the dichotic succession they heard sounded more like "A"-"B" or "B"-"A." When the intensities of the right- and left-ear sounds are approximately equal, the two dichotic complexes of a trial will sound identical only to a subject who has no ear dominance for the particular stimulus attribute which differentiates sound "A" from sound "B": for this subject, the task will be impossible and he will respond at a chance level. On the other hand, when one of the ears is dominant for the given stimulus attribute, the subject will be likely to report the stimulus succession presented to that ear more often than that presented to the other. The degree of his ear dominance for a given pair of sounds "A" and "B" thus will be reflected in the proportion of his responses corresponding to the stimulus succession in, say, his left ear. With such a scoring system, a 100% score signifies a strong left-ear dominance, a 0% score a strong right-ear dominance, and a 50% score no ear dominance with respect to the pair of sounds "A" and "B" in question.
For the purpose of testing ear asymmetries in the perception of speech sounds, this paradigm offers significant advantages over other, more frequently used procedures. The main advantage of this method derives from the fact that it does not require explicit phonetic naming (Kimura, 1961), verbal identification (Shankweiler & Studdert-Kennedy, 1967), or rating (Repp, 1978) of the stimuli; rather, the subjects' task consists merely of making a simple binary choice. Since the two fused binaural percepts ("A" in the left together with "B" in the right ear vs. "B" in the left together with "A" in the right ear) will sound different for the subject who has a certain amount of ear dominance for the stimulus attribute in which "A" and "B" differ, the binary choice in question is related to discrimination, i.e., the simplest of all psychophysical tasks. Thus, the task simply requires the subject to indicate whether the two dichotic sounds of a trial correspond to his internal, self-determined labels "A"-"B" or "B"-"A." (It makes no difference to the results or interpretation precisely what the internal representation of a given pair of dichotic complexes may be for any given subject.) A further advantage of the paradigm is that it obviates the need to consider the issue of stimulus dominance (Repp, 1976): any such stimulus dominance is cancelled by this technique.
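The trial structure and scoring rule described above can be sketched in a few lines of code. This is an illustrative simulation only: the function and parameter names are invented here, and a listener's tendency to follow one ear is reduced to a single probability.

```python
import random

def run_block(n_trials, p_follow_left, rng):
    """Simulate one block of the 2AFC dichotic task.

    On each trial the left ear receives either the "A"-"B" or the
    "B"-"A" succession (at random); the right ear receives the reverse.
    p_follow_left is the probability that the response matches the
    succession presented to the LEFT ear (1.0 = complete left-ear
    dominance, 0.0 = complete right-ear dominance, 0.5 = none).
    """
    left_matches = 0
    for _ in range(n_trials):
        left_order = rng.choice(["AB", "BA"])
        if rng.random() < p_follow_left:      # percept follows the left ear...
            response = left_order
        else:                                  # ...or the right ear
            response = left_order[::-1]
        if response == left_order:
            left_matches += 1
    return 100.0 * left_matches / n_trials     # percent "left-ear" responses

rng = random.Random(0)
score = run_block(880, 0.82, rng)  # e.g., a moderately left-dominant listener
```

With this scoring, a block of trials yields directly the percentage used throughout the tables below: values near 100 indicate left-ear dominance, near 0 right-ear dominance, and near 50 none.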

Apparatus

All stimuli were generated prior to the experiment by a laboratory computer (DEC PDP 8/E) and were stored on a magnetic disk. During the experiment proper, these stimuli were retrieved from the disk and converted to analog form by a two-channel D/A converter. The rate of conversion was 50 kHz for pure-tone stimuli (Experiment I) and 12.5 kHz for synthesized speech and speech-like stimuli (Experiments II and III). Thus, the spectrum of the speech stimuli extended up to 6250 Hz. The output of the D/A converters was filtered (Krohn-Hite Model 3500R band-pass filters), amplified (Philbrick Model P85AU operational amplifiers), and attenuated (Hewlett-Packard Model 350D step attenuators) before reaching the subject's earphones (TDH-39 in MX-41/AR cushions). The subject was seated in a sound-attenuated room (IAC Model 401). Presentation of the stimuli, including control of timing and synchronization of the left and right channels, as well as collection and analysis of the subject's responses, were accomplished by use of the computer. The spectral structure of all stimuli was verified by spectral analysis (Spectral Dynamics Model 335 FFT analyzer). In addition, spectrographic recordings were made for all CV syllables.
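As a concrete illustration of the digital stimulus generation described above, the following sketch synthesizes one of the Experiment I tones (100-msec duration, 5-msec rise and fall, 50-kHz conversion rate). The function name and the raised-cosine shape of the onset/offset ramps are assumptions; the paper does not specify the gating function.

```python
import math

def make_tone(freq_hz, dur_ms=100.0, ramp_ms=5.0, fs_hz=50_000):
    """Generate a gated pure tone as a list of samples in [-1, 1].

    Durations and sample rate follow Experiment I; the raised-cosine
    ramp shape is an assumption, not taken from the paper.
    """
    n = int(fs_hz * dur_ms / 1000.0)
    n_ramp = int(fs_hz * ramp_ms / 1000.0)
    samples = []
    for i in range(n):
        amp = math.sin(2.0 * math.pi * freq_hz * i / fs_hz)
        if i < n_ramp:                        # 5-msec rise
            amp *= 0.5 * (1.0 - math.cos(math.pi * i / n_ramp))
        elif i >= n - n_ramp:                 # 5-msec fall
            amp *= 0.5 * (1.0 - math.cos(math.pi * (n - 1 - i) / n_ramp))
        samples.append(amp)
    return samples

tone_a = make_tone(1650.0)   # sound "A" of Experiment I
tone_b = make_tone(1750.0)   # sound "B" of Experiment I
```

Each dichotic trial would then route one such tone to each channel of the two-channel D/A converter, with the order reversed between the ears.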

Subjects

Three female and three male subjects, students and professionals, between 22 and 42 years of age and with clinically normal hearing, participated in the experiments. Five subjects were unequivocally right-handed and one (PD), one of the authors, is ambidextrous. All subjects had had previous experience in psychoacoustic experiments. They were selected from a larger group of listeners on the basis of their ear dominance for the pitch of dichotically presented pairs of tones. Since the objective of the study was to investigate ear advantage for selected acoustic-phonetic features in individuals who are left-ear dominant for spectral information, five subjects had moderate-to-strong left-ear dominance and one, the control, had a moderate right-ear dominance. Prior to the collection of the data reported below, each subject went through a training period of variable length. The objective of this training period was to make the subjects familiar with both the task and the percept of dichotic speech stimuli. Two criteria had to be reached before the training period was terminated: (i) monaural discrimination of the two sounds had to reach perfect (100% correct) performance, and (ii) the degree of ear advantage (i.e., the percentage of left-ear responses) obtained in two consecutive blocks had to differ by no more than one standard error of the mean.

FIG. 1. Schematic diagram of the dichotic stimulus pattern. "A" and "B" were two sounds having identical duration. The temporal separation of the two dichotic sounds was 500 msec. For any given trial, the configuration in the upper half of the figure and that in the lower half of the figure occurred with equal probability.

EXPERIMENT I

In Experiment I, sounds "A" and "B" of Fig. 1 were a pair of pure tones with frequencies of 1650 and 1750 Hz, respectively. They were presented at 80 dB SPL with a 100-msec duration and 5-msec rise and fall times. The object of this experiment was to establish the direction and degree of the six subjects' ear dominance for the pitch of a dichotic pair of tones. The subjects' task consisted of indicating, by pressing the appropriate key, whether the pitch succession they heard was "high-low" (A-B) or "low-high" (B-A). The obtained ear dominance estimates are summarized in Column 1 of Table 1 as the percentage of 880 responses that corresponded to the frequency succession in the left ear. Thus, one subject (CB) shows a moderate right-ear dominance, two subjects (PM and SP) a moderate left-ear dominance, and three subjects (AG, JW, and PD) a strong left-ear dominance. All table entries differ from the 50% (i.e., no-ear-dominance) point at the .001 level.

EXPERIMENT II

The Stimuli

In Experiment II, sounds "A" and "B" (Fig. 1) were synthesized speech or speech-like stimuli. They were produced by using a digital version of the Haskins Laboratories' parallel-resonance analog synthesizer.¹ The glottal-type periodic source had a fundamental frequency corresponding to that of a male speaker. The stimuli included one pair of single-formant pseudo-vowels, three pairs of four-formant vowels, and two pairs of CV syllables. The total duration of all stimuli was restricted to a maximum of 120 msec, for reasons of equipment limitations; this total duration included the rise and fall times (5 msec for vowels and 1 msec for CVs). The CV syllables were presented at a level that reached 90 dB SPL during the steady-state segment of the vowel portion of the sounds (about 80 dB SL). However, because the dichotic vowel pairs could not be made simultaneously identical in loudness as well as in intensity (loudness of vowels being a function of the compactness of the formant structure), one member of the pair would invariably have dominated the binaural percept. In order to eliminate discrimination cues caused by such stimulus dominance (Repp, 1976, 1978), a random intensity difference of ±4 and ±8 dB was introduced from trial to trial. Within each block, only about 85% of the trials were presented dichotically, whereas for about 15% of the trials only one ear received the sequence "A"-"B" or "B"-"A." These monaural trials were included for the purpose of monitoring the discriminability of the two sounds.

¹ The digital synthesis program was developed and tested by A. M. Engebretson and S. Garfield at Central Institute for the Deaf, St. Louis.
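The trial-to-trial intensity roving described above can be sketched as follows. This is an illustrative fragment: the function name is invented, and drawing the offset from the four discrete values ±4 and ±8 dB is one plausible reading of the procedure.

```python
import random

ROVE_STEPS_DB = (-8.0, -4.0, 4.0, 8.0)   # the ±4- and ±8-dB roving levels

def rove_level(samples, rng):
    """Scale a stimulus by a randomly drawn dB offset so that overall
    level cannot serve as a reliable discrimination cue across trials."""
    offset_db = rng.choice(ROVE_STEPS_DB)
    gain = 10.0 ** (offset_db / 20.0)    # convert dB to a linear amplitude factor
    return [s * gain for s in samples], offset_db

rng = random.Random(1)
scaled, offset = rove_level([0.5, -0.5, 0.25], rng)
```

Because the offset changes randomly from trial to trial, any residual loudness difference between the two members of a dichotic pair is uninformative about which ear received which sound.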

TABLE 1

EAR DOMINANCE AND EAR ADVANTAGE SCORES OF SIX SUBJECTS (PERCENTAGE OF RESPONSES CORRESPONDING TO STIMULUS IN LEFT EAR)

Stimulus pairs: (1) 1650-1750 Hz sine waves; (2) 1650-1750 Hz one-formant vowels; (3) [I]-[ae] vowels; (4) [I] with F1 changing; (5) [I] with F2 changing; (6) [ga]-[ka]; (7) [ba]-[ga].

Subject     (1)     (2)     (3)     (4)     (5)     (6)     (7)
CB          18**    20**    18**    23**    12**    20**    30**
PM          86**    90**    65**    85**    74**    78**    87**
AG         100**    75**    57*     55*     55**    41**    66**
SP          81**    61**    54*     54*     52 ns   32**    77**
JW          93**    95**    65**    59**    62**    45*     55*
PD         100**   100**    27**    16**    88**    31**    89**

* .001 < p < .05. ** p < .001. ns: not significant.

All conditions of Experiment II represent pairs of "A" and "B" which all subjects were able to discriminate monaurally 100% of the time, with both their right and their left ear.

The single-formant pseudo-vowels were periodic sounds produced by exciting a 50-Hz wide resonator with a glottal-type (quasi-triangular) waveform. The pitch contour and the amplitude envelope were constant for both stimuli of the pair. The center frequency of the resonator was either 1650 or 1750 Hz (Column 2, Table 1). The vowel sounds were produced by exciting a digital analog of a parallel network of four formant resonators with the same glottal-type waveform. Again, the pitch contour and the amplitude envelope were constant for all stimuli. The vowel set included two English vowels ([I] and [ae]) with formant structures corresponding to those specified by Peterson and Barney (1952) for a male speaker. In addition, two non-English vowels were also included. These stimuli were produced by changing the resonant frequency of either the first (F1) or the second (F2) formant of the vowel [I]. Specifically, the two vowels were [I] with a high F1 (+140 Hz) and [I] with a low F2 (-400 Hz). The two English and two non-English vowels were arranged into three pairs for dichotic presentation. One dichotic pair ([I]-[ae]) differed in three formants, i.e., with respect to the acoustic equivalent of the features of both tongue height and front-back position (Column 3, Table 1). The vowels in the second pair differed only in their first-formant characteristics ([I] with two different F1 values), i.e., the difference between "A" and "B" lay in the acoustic variable that accounts for a considerable portion of the feature [high] (Column 4, Table 1). In the remaining pair only the second-formant characteristics were different ([I] with two different F2 values), i.e., "A" and "B" differed only with regard to the acoustic parameter that underlies the feature [front] (Column 5, Table 1).
Finally, the list of dichotic stimuli included two CV syllables. One pair ([ba]-[ga], Column 7, Table 1) differed only in the feature of place of articulation ([bilabial] and [velar]), whereas the other ([ga]-[ka], Column 6, Table 1) differed only in the feature of voicing ([voiced]). The [ba]-[ga] pair differed only in the frequency and intensity characteristics of the formant transitions; the voice-onset time (11 msec) as well as the fundamental frequency contour and the amplitude envelope remained identical. There was no difference between the two CV tokens with respect to the rate of F1, F2, and F3 transitions. (To a minor extent, this latter constraint violated the rules for speech synthesis, since in natural speech the durations of velar CV transitions are generally longer than those of bilabial CV transitions. Nevertheless, the two tokens were judged by expert listeners as quite realistic.) The [ga]-[ka] pair differed only in voice-onset time (11 and 41 msec, respectively). The fundamental frequency contour, amplitude envelope, and formant trajectories were identical. (Because the

Haskins Laboratories parallel-resonance synthesis technique was used, the plosive burst was replaced by continuous, low-level noise exciting the second and third formant resonators, from the beginning of the token up to the onset of voicing.)

Results

Results of the experiment are shown in Columns 2-7 of Table 1. Again, the table entries represent the percentage of a particular subject's responses that corresponded to the stimulus succession in his left ear. Each entry is based on a minimum of 600 observations. The data indicate that the control subject (CB), with right-ear dominance for pure tones, displayed a REA for all speech sound pairs. It is also apparent that Subject PM represents the other extreme: she has a left-ear advantage (LEA) for all pairs of speech sounds and thus falls in the class that comprises the roughly 20% of right-handed individuals who demonstrate a LEA for the perception of speech (Kimura, 1961). Subjects AG, SP, and JW are of particular interest: they have a LEA for vowels and vowel-like sounds (Columns 2-5) as well as for the feature of place of articulation in CVs (Column 7), but acquire a REA only for the feature of voicing in CVs. The trend of Subject PD's data closely resembles that of these three subjects, with one exception: he also shows a REA in those dichotic vowel pairs which differ exclusively (F1 in [I]) or predominantly ([I]-[ae]) in the acoustic characteristics of low-frequency formants. Taking the standard error of the mean as an indicator of the expected variability (based on the underlying binomial distribution), all data points differ significantly from the no-ear-dominance (50%) point, with a single exception (Subject SP, vowel [I] with F2 changing).
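The significance criterion just mentioned, testing each entry against the 50% point via the standard error of the underlying binomial distribution, can be sketched as follows. The function name and the use of the normal approximation are assumptions; the paper does not spell out the exact test.

```python
import math

def differs_from_chance(percent_left, n_trials, z_crit=3.29):
    """Test whether an ear-advantage score differs from the 50% point,
    using the normal approximation to the binomial. z_crit = 3.29
    corresponds to two-tailed p < .001; the exact procedure used in
    the paper is not stated, so this is one plausible reading."""
    p = percent_left / 100.0
    se = math.sqrt(0.25 / n_trials)   # binomial SE of a proportion under H0: p = .5
    z = abs(p - 0.5) / se
    return z > z_crit

# Subject SP, vowel [I] with F2 changing: 52% of a minimum of 600 trials
sp_f2_significant = differs_from_chance(52.0, 600)
pd_f2_significant = differs_from_chance(88.0, 600)
```

Under these assumptions, a score of 52% over 600 trials indeed fails to reach significance, while the other entries (well away from 50%) easily do.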

EXPERIMENT III

Subject PD was left-ear dominant for the pitch of dichotic tone pairs (Experiment I) and exhibited a LEA for the single-formant pseudo-vowels as well as for vowel sounds differing only in F2, but acquired a REA for those vowel sounds in which the major part of spectral information was carried by F1 (Experiment II). In order to test the generality of this observation, this subject was tested in Experiment III for ear advantage with regard to four further vowel pairs. Some of these vowel pairs shared the [+high] and some the [-high] feature.

The Stimuli

The four dichotic vowel pairs used in this experiment were composed of four English and two non-English vowels. The English vowels were two sharing the [+high] feature ([u] and [U]) and two sharing the [-high] feature ([a] and [A]). The two non-English vowels were produced by changing the center frequency of one of the formants of the [A] vowel, i.e., either

by lowering the center frequency of F1 by 90 Hz or by raising that of F2 by 210 Hz. The four dichotic pairs were [u]-[U], [a]-[A], [A] with two different first formants, and [A] with two different second formants. Similarly to Experiment II, the vowels were presented at an intensity difference of ±4 and ±8 dB, in order to avoid cues based on stimulus dominance.

Results

Results of this experiment are shown in Columns 4-7 of Table 2. For the purpose of comparison, Subject PD's data for the vowel pairs of Experiment II are repeated in Columns 1-3. Each table entry is based on a minimum of 600 observations. The trend of ear advantage in all these vowel pairs is consistent: (i) in vowel pairs where F1 carries the bulk of spectral information a REA appears (Columns 1, 2, 4, 7, Table 2), whereas (ii) vowels discriminable mainly on the basis of F2 elicit a left-ear advantage (Columns 3, 5, 6, Table 2). All ear advantage scores in Table 2 differ from the 50% point at the .001 level.

DISCUSSION

Two main findings emerge from the results of the present experiments. First, a listener who is right-ear dominant for the processing of spectral information, i.e., for the pitch of dichotically presented tones, will also be likely to display a REA for any pair of dichotic speech sounds. Second, most listeners who are left-ear dominant for spectral information will retain this ear dominance for those dichotic speech sounds that differ only in their spectral structure, but will be likely to acquire a REA when the two speech sounds differ in a feature which is predominantly temporal, namely, voicing in CVs (VOT). These results concur with previously reported findings showing a REA for voicing in dichotic stop-vowel sounds (e.g., Studdert-Kennedy & Shankweiler, 1970; Haggard & Parkinson, 1971). The novelty in the present data is that, contrary to previous reports, the feature of place of

TABLE 2

EAR ADVANTAGE SCORES (SEE TABLE 1) OF SUBJECT PD FOR VARIOUS VOWEL SOUND PAIRS

Column   Stimulus pair              Score
(1)      [I]-[ae]                    27
(2)      [I] with F1 changing        16
(3)      [I] with F2 changing        88
(4)      [u]-[U]                     40
(5)      [a]-[A]                     92
(6)      [A] with F2 changing        98
(7)      [A] with F1 changing        39


articulation was found to yield a REA only when the subject was already right-ear dominant for spectral information. Since place of articulation is a feature based on predominantly spectral stimulus attributes (as are vowel features), the above results suggest that the REA consists of only one functional asymmetry: a right-ear superiority for the processing and ordering of temporal information. Such a right-ear superiority is not unique to the processing of speech sounds: it has been demonstrated to accompany the perception of temporally complex nonspeech sounds as well (Halperin et al., 1973; Papcun et al., 1974). These reports, together with the evidence provided by the present experiments, strongly suggest that the fundamental cause of the REA for speech sounds lies in the superior ability of the right ear and, by inference, the left hemisphere, to process the temporal structure of the speech signal. This view is consistent with the many reports of a profound deficit in the processing of temporal features of speech and nonspeech signals seen in patients with left temporal-lobe lesions (Blumstein, Baker, & Goodglass, 1977a; Blumstein, Cooper, Zurif, & Caramazza, 1977b; Chedru, Bastard, & Efron, 1978; Efron, 1963; Lackner & Teuber, 1973; Sasanuma, Tatsumi, Kiritani, & Fujisaki, 1973; Swisher & Hirsh, 1972; Tallal & Newcombe, 1978).² In other words, speech may be "special" only in the sense that it represents the temporally complex auditory stimulus to which man is most exposed and on which he must rely for his survival in a social environment. The idea that the REA consists of a right-ear superiority for temporal information processing may provide a basis for explaining the REA that Subject PD displayed for certain dichotic vowel pairs. Actually, the vowels for which this subject showed a REA were only those which could be distinguished on the basis of their low first-formant frequencies. Since these first-formant peaks generally lie below 500 Hz, and since the temporal structure of the waveform (i.e., periodicity) plays a far greater role in the encoding of low frequencies than does the spatial structure (i.e., spectrum; see Moore, 1973), it is quite possible that this subject's REA for low-F1 (i.e., [+high]) vowels may also reflect an ear advantage for temporal stimulus attributes. The reason why only this subject exhibited this special ear-advantage pattern for vowels could lie in his extensive musical training, which may have provided him with a better-than-average capacity to analyze the fine temporal structure of periodic waveforms. However, he was also different from the other subjects in that he was initially left-handed and is now ambidextrous. Nevertheless, the peculiar

² However, it has been shown in patients having left temporal-lobe lesions (Blumstein et al., 1977a) that deficits in the perception of place of articulation in CVs are at least as large as those found in the perception of voicing. Our conclusions, in light of these findings, point toward the likelihood that such patients have difficulty perceiving frequency transitions (rate and extent). Indeed, recently published data by Tallal and Newcombe (1978) indicate that aphasic patients are unable to follow formant transitions at a normal level.


nature of this subject's changing ear advantage for vowels does not undermine the main conclusions of this study. It should be stressed that the dissociation of spectral and temporal factors in the REA could be achieved only in subjects who were left-ear dominant for spectral information: an equivalent dissociation in subjects who are already right-ear dominant is not possible by the present methods.

ACKNOWLEDGMENTS

The authors wish to acknowledge the assistance of James T. Wright in the generation of the stimuli and the helpful comments of Catherine Baker and John Ohala on earlier versions of the manuscript. Portions of this article have been reported at the Fourth Annual Meeting of the Berkeley Linguistic Society (February 1978) and at the 95th Meeting of the Acoustical Society of America (Providence, RI, May 1978). The research has been supported by the Veterans Administration.

REFERENCES

Berlin, C. I., & Cullen, J. K. (1975). Dichotic signs of speech mode listening. In A. Cohen & S. G. Nooteboom (Eds.), Structure and process in speech perception. Berlin/New York: Springer-Verlag. Pp. 296-311.

Blumstein, S. E., Baker, E., & Goodglass, H. (1977a). Phonological factors in auditory comprehension in aphasia. Neuropsychologia, 15, 19-30.

Blumstein, S. E., Cooper, W. E., Zurif, E. B., & Caramazza, A. (1977b). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15, 371-383.

Chedru, F., Bastard, V., & Efron, R. (1978). Auditory micropattern discrimination in brain damaged subjects. Neuropsychologia, 16, 141-149.

Darwin, C. J. (1975). Ear differences and hemispheric specialization. In B. Milner (Ed.), Hemispheric specialization and interaction. Cambridge, MA: MIT Press. Pp. 57-63.

Divenyi, P. L., Efron, R., & Yund, E. W. (1977). Ear dominance in dichotic chords and ear superiority in frequency discrimination. Journal of the Acoustical Society of America, 62, 624-632.

Efron, R. (1963). Temporal perception, aphasia, and déjà vu. Brain, 86, 403-424.

Efron, R., & Yund, E. W. (1974). Dichotic competition of simultaneous tone bursts of different frequency. I. Dissociation of pitch from lateralization and loudness. Neuropsychologia, 12, 249-256.

Efron, R., & Yund, E. W. (1976). Ear dominance and intensity independence in the perception of dichotic chords. Journal of the Acoustical Society of America, 59, 889-898.

Haggard, M. P. (1971). Encoding and the REA for speech signals. Quarterly Journal of Experimental Psychology, 23, 34-45.

Haggard, M. P., & Parkinson, A. M. (1971). Stimulus and task factors as determinants of ear advantages. Quarterly Journal of Experimental Psychology, 23, 168-177.

Hall, J. L., & Goldstein, M. H. (1968). Representations of binaural stimuli by single units in primary auditory cortex of unanesthetized cats. Journal of the Acoustical Society of America, 43, 456-461.

Halperin, Y., Nachshon, I., & Carmon, A. (1973). Shift of ear superiority in dichotic listening to temporally patterned nonverbal stimuli. Journal of the Acoustical Society of America, 53, 46-50.

Kimura, D. (1961). Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology, 15, 166-171.

Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3, 163-178.

Lackner, J. R., & Teuber, H.-L. (1973). Alterations in auditory fusion thresholds after cerebral injury in man. Neuropsychologia, 11, 409-415.

Liberman, A. M. (1975). The specialization of the language hemisphere. In B. Milner (Ed.), Hemispheric specialization and interaction. Cambridge, MA: MIT Press. Pp. 43-56.

Milner, B. (1967). Brain mechanisms suggested by studies of temporal lobes. In F. L. Darley (Ed.), Brain mechanisms underlying speech and language. New York: Grune & Stratton. Pp. 122-145.

Moore, B. C. J. (1973). Frequency difference limens for short-duration tones. Journal of the Acoustical Society of America, 54, 610-619.

Papcun, G., Krashen, S., Terbeek, D., Remington, R., & Harshman, R. (1974). Is the left hemisphere specialized for speech, language, and/or something else? Journal of the Acoustical Society of America, 55, 319-327.

Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 174-184.

Repp, B. H. (1976). Identification of dichotic fusions. Journal of the Acoustical Society of America, 60, 456-469.

Repp, B. H. (1978). Stimulus dominance and ear dominance in the perception of dichotic voicing contrasts. Brain and Language, 5, 310-330.

Rosenzweig, M. R. (1951). Representations of the two ears at the auditory cortex. American Journal of Physiology, 167, 147-158.

Sasanuma, S., Tatsumi, I. F., Kiritani, S., & Fujisaki, H. (1973). Auditory perception of signal duration in aphasic patients. Annual Bulletin of the Research Institute for Logopedics and Phoniatrics, University of Tokyo, 7, 65-72.

Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Identification of consonants and vowels presented to left and right ears. Quarterly Journal of Experimental Psychology, 19, 59-63.

Studdert-Kennedy, M., & Shankweiler, D. P. (1970). Hemispheric specialization for speech perception. Journal of the Acoustical Society of America, 48, 579-594.

Swisher, L. P., & Hirsh, I. J. (1972). Brain damage and the ordering of two temporally successive stimuli. Neuropsychologia, 10, 137-152.

Tallal, P., & Newcombe, F. (1978). Impairment of auditory perception and language comprehension in dysphasia. Brain and Language, 5, 13-24.

Weiss, M. S., & House, A. S. (1973). Perception of dichotically presented vowels. Journal of the Acoustical Society of America, 53, 51-58.

Yund, E. W., & Efron, R. (1976). Dichotic competition of simultaneous tone bursts of different frequency. IV. Correlation with dichotic competition of speech signals. Brain and Language, 3, 246-254.
