Timing in talking: what is it used for, and how is it controlled?




Philos Trans R Soc Lond B Biol Sci. 2014 Dec 19; 369(1658): 20130395.

PMCID: PMC4240962

PMID: 25385773


Abstract

In the first part of the paper, we summarize the linguistic factors that shape speech timing patterns, including the prosodic structures which govern them, and suggest that speech timing patterns are used to aid utterance recognition. In the spirit of optimal control theory, we propose that recognition requirements are balanced against requirements such as rate of speech and style, as well as movement costs, to yield (near-)optimal planned surface timing patterns; additional factors may influence the implementation of that plan. In the second part of the paper, we discuss theories of timing control in models of speech production and motor control. We present three types of evidence that support models of speech production that involve extrinsic timing. These include (i) increasing variability with increases in interval duration, (ii) evidence that speakers refer to and plan surface durations, and (iii) independent timing of movement onsets and offsets.

Keywords: extrinsic speech timing, prosodic structure, speech production, smooth signal redundancy, optimal control theory, phonetic implementation

1. Introduction

Timing is an integral part of every aspect of speech production: individual movements of the rib cage, oral articulators and laryngeal structures; their coordinated motor activity and the speech sounds they produce. Understanding speech production therefore requires understanding timing: what it is used for, and how it is controlled. In this paper, we first review our current understanding of what speakers use timing for, and how this understanding was acquired by researchers, and then we focus on two different views of how timing is controlled: with and without an extrinsic timekeeping mechanism. We then present evidence that seems to require an extrinsic timekeeping mechanism. Space prevents us from detailing the methods involved in measuring timing, but see Turk et al. [1] for measurement methods based on acoustic landmarks [2] and Perkell et al. [3] for a method based on landmarks in movement traces.

2. What is speech timing used for?

The traditional way of determining what speakers use timing for is to conduct controlled experiments in which a factor of interest is systematically varied, keeping other factors constant. For example, in experiments testing whether vowel type has a systematic effect on duration, different vowels can be embedded in a constant carrier phrase, e.g. Say dad again versus Say did again. Such experiments have shown systematic differences between different speech sounds (e.g. [4]), which are therefore hypothesized to have a characteristic ‘intrinsic’ duration [5]. Analogously, experiments that vary higher-level prosodic structure have shown systematic effects of prominence and constituent boundaries on duration. For example, a comparison of dad in Say DAD again versus in SAY dad again shows that DAD is systematically longer when phrasally prominent (see [6] for a review). Moreover, depending on how the speaker chooses to prosodically produce a syntactic string, words before major constituent boundaries are often systematically longer than constituent-medial words, e.g. cousin is longer in Mary GEORGE's cousin] [baked the cake, where it is at the end of a phrase, when compared with cousin in Mary's cousin GEORGE] [baked the cake, where it is medial.

Experiments conducted from the 1950s through the 1980s established a long list of factors that appeared to affect speech timing. These include

  • (i) vowel and consonant type

  • (ii) contextual factors, e.g.

    • — prominence (word stress, phrasal stress)

    • — syntax

    • — predictability

    • — adjacent segment type

  • (iii) global factors, e.g.

    • — speech rate

    • — speech style (e.g. clear versus relaxed)

([7–10] inter alia; for reviews, see [4,6,11]). In addition, there are many other possible factors not yet integrated into current models that may also influence speech timing under special circumstances, such as speaking to an external beat.

However, since the late 1970s and 1980s (e.g. [12,13]), it has become clear that the view that each factor has a separate, direct effect on timing is problematic. Syntax is problematic because it has only an indirect influence on phonetic form, and predictability is problematic because many of its effects appear to be shared with other factors. In the following sections, we address these two problems and show how these factors relate to prosodic structure, which we see as a central aspect of the interface between language and speech. In our view, prosodic structure, segmental identity and segmental context are the factors that have a direct effect on the speaker's surface phonetic plan, including speech timing. When planning speech production, speakers balance these factors against non-grammatical factors such as speech rate and other stylistic requirements, clarity requirements and movement costs (e.g. energy, time) to yield a specification of the desired temporal patterns for a spoken utterance.

(a) The problem with syntax

Although it is clear that some syntactic manipulations have a measurable effect on duration (and other phonetic parameters), not all do. Consider for example

  • — Mary George's cousin]? ate a piece of cake

  • — Her cousin]? ate a piece of cake

  • — She]? ate a piece of cake.

In these examples, where ]? is used to indicate a possible site of boundary-related cues, the likelihood of these cues decreases for shorter subject noun phrases. That is, the longer subject noun phrase (Mary George's cousin) is more likely than the shorter ones (Her cousin and She) to show boundary-related phonetic cues such as pre-boundary lengthening and pause, even though they all share the same syntactic structure [14–17]. There are also some phonetic indicators of constituent boundaries that occur where syntax would not predict them, as in Sesame Street is brought to you by] … the Children's Television Workshop, where a break occurs within a prepositional phrase [18]. Finally, levels of embedding found in syntax are often absent in speech [19]: for example, the utterance above has a right branching syntactic structure (figure 1, top), whereas its spoken phrasing is flatter (figure 1, bottom).

Figure 1. Schematic diagram of the syntactic structure for ‘this is the cat that ate the rat that ate the cheese’ (top), and a possible prosodic structure for the same utterance (bottom).

(b) Prosodic structure as a solution

(i) Prosodic constituent structure

Along with other findings from segmental phonology [12] and intonational phonology [20], these findings suggest that a structure that is influenced by syntax, but not isomorphic to it, directly defines the groupings observed in speech. This structure, called prosodic constituent structure, is hierarchical, and includes constituents such as words and perhaps feet or syllables at lower levels, and phrases of various sizes at higher levels. Although there are debates about many aspects of the prosodic hierarchy, e.g. about the number of levels in the hierarchy, and about the name and definition of each constituent type, there is general agreement about its hierarchical nature, and about the fact that it is flatter and more symmetric than syntactic structure [12,21]. An example of prosodic structure is shown in figure 2.

Figure 2. An example prosodic structure for Mary's cousin George baked the cake. Pword, prosodic word.

Prosodic constituent structure is a likely linguistic universal, although different languages may elect different sets of levels from the universal hierarchy [22]. It has measurable effects on durational phenomena such as initial lengthening, final (or pre-boundary) lengthening, polysyllabic shortening (the shortening of syllables when more occur in a constituent), polysegmental shortening (the shortening of segments when more occur in a constituent) and pause (see [5,23] for reviews). Support for the universality of prosodic structure comes from the ubiquitous occurrence of final and initial lengthening patterns that reflect a structural hierarchy in languages of the world [24,25].

Phrasally related initial and final lengthening affect specific parts of initial and final words, respectively. Initial lengthening appears to be primarily localized on the initial C in phrase-initial CV and CCV sequences [26,27]. In final position, most of the lengthening occurs on the rhyme of the final word. Smaller, but significant amounts of lengthening have also been observed on lexically stressed syllable rhymes when the lexically stressed syllable is pre-final, as in Michigan or Trinidad (see [28] for Dutch and [29] for American English). Lengthening at other sites, e.g. the onset consonant of the phrase-final syllable rhyme, has also been observed, but these effects are sporadic in the sense that they appear to be study- or material-dependent, and may possibly be speaker-dependent. For both initial and final lengthening, the magnitude of the durational effects varies with boundary strength: stronger boundaries (e.g. phrases) are generally associated with greater degrees of lengthening [30,31] but interestingly not with a longer string in the domain of lengthening [28] (for discussions of polysyllabic shortening, see [32–34]).

Prosodic constituent structure also affects non-durational phonetic parameters, such as constituent-initial and final voice quality modifications [35–39], supralaryngeal articulatory modifications (e.g. phrase-initial strengthening, syllable-final lenition [25,40,41]), the use of word- or phrasal-prominence near the beginnings or ends of constituents [16,42], as well as intonational phenomena, e.g. phrase-final lowering, phrase-initial reset (cf. [20,43] among others).

(ii) Prosodic prominence structure

Prosodic structure also includes prosodic prominence structure, which describes different degrees of stress/accent found in words and phrases. For example, in one prosodification of the phrase Mary's cousin George, George is the most prominent word in the phrase, and is said to bear phrasal stress (also called sentence stress, or accent). In the words Mary and cousin, the word-initial syllables Ma(r)- and cou- are more prominent than the second syllables in these words, and are said to bear word- or lexical stress. Figure 3 shows a grid-like representation of prominence structure [44–46], illustrated for this phrase.

Figure 3. A grid-like representation of prominence structure for Mary's cousin George.

Like prosodic constituent structure, prosodic prominence structure is hierarchical, with word-stress near the bottom of the hierarchy, and phrasal stress at higher levels [47]. It also has measurable effects on duration, but the effects of prominence on duration appear to be different from those related to prosodic constituent boundaries [32,48–50]. For example, monosyllabic words show different effects of phrasal prominence versus final lengthening: prominence increases the nucleus duration most, followed by the syllable onset, then optionally the coda, whereas final lengthening increases the nucleus duration most, followed by the coda, then (optionally) the onset. Prosodic prominence structure not only affects duration, but also affects other articulatory parameters such as articulatory distinctiveness and voice quality, and their acoustic consequences (e.g. formant structure and spectral balance) [51,52].

(iii) Prosodic structure as the interface between language and speech

The proposal that prosodic structure serves as an interface between language on the one hand and speech on the other is illustrated in figure 4 (based on a similar figure in [53]; see also [54], inter alia). The figure illustrates the indirect effects of factors such as syntax, utterance length and focus on surface phonetics, via prosody. Prosodic structure has a direct influence on the phonetic plan. During speech planning, prosodic effects on phonetic parameters such as duration are balanced against the effects of segmental identity and context, as well as non-grammatical factors (e.g. rate and style of speech, clarity requirements, movement costs), on those same parameters.

Figure 4. Prosodic structure as the interface between language and speech. Based on a similar figure in Shattuck-Hufnagel & Turk [53], illustrating some of the factors that influence phonetic planning. This diagram is intended as a tool for identifying and thinking about factors that influence phonetic planning, and as a proposal for how they interact. (Online version in colour.)

Several aspects of figure 4 are worthy of comment. First, we assume that the non-grammatical factors have a direct influence on the plans for surface phonetic form, rather than influencing the phonological plan. Although factors such as rate and style of speech have been described as directly affecting aspects of prosody (e.g. fewer ‘breaks' at faster rates of speech, cf. [55]), our view is that a speaker plans the same prosodic structure (i.e. same relative prominence and relative boundary strength structure) for a given utterance at different rates of speech, but that the planned phonetic manifestation of this structure is different at different rates. This is because the rate-of-speech requirement must be balanced against the prosodic structure requirement in determining optimum surface phonetic characteristics that meet the competing demands. Second, the factors mentioned in figure 4 are intended to be a preliminary indicator of factors that might be at work, and may not be exhaustive. Related to this, there are other factors that are known to influence phonetics that remain to be investigated, for example, the adjustments that might be made in response to an interlocutor (possibly including non-speech input), a noisy environment or intense emotion. These adjustments might relate to phonological planning, e.g. choices of prosodic structure, or might be non-grammatical, e.g. reflected in specifications of rate or clarity, and would therefore be balanced against prosodic structure requirements in influencing the phonetic plan. And there are other candidate factors, such as cognitive processing costs and constraints, whose effects are not yet well-understood. Figure 4 is therefore intended as a tool for identifying and thinking about factors that influence phonetic planning and as a proposal for how they might interact.

(c) The problem with predictability

If we accept that prosodic structure has a measurable effect on duration, then another factor in the list becomes problematic: predictability. What we refer to as ‘predictability’ is the likelihood of a word given its context (linguistic and pragmatic/real-world) and frequency of use, i.e. the likelihood that a word can be guessed from its context. It has long been observed that more predictable words are produced with shorter durations than less predictable words [56–59]. For example, Lieberman [56] observed that more predictable words are shorter and less acoustically salient; he found that the word nine in A stitch in time saves nine (highly predictable context) was shorter than the word nine in The number that you will hear is nine.

The problem with predictability as a factor affecting duration is that it is unclear whether prosodic structure and predictability are both motivated as separate factors affecting duration. This is because prosodic structure and predictability are not independent. When predictability is low, syllables are more likely to be prosodically prominent, and words are more likely to be demarcated using prosodic boundary correlates such as initial- and final-lengthening and pause. For example, the word operas in the phrase health operas is more likely to bear phrasal stress than the word issues in the phrase health issues, possibly because issues in this context is more predictable [60,61]. In addition, the word nine may be longer in the phrase The number that you will hear is nine than in the phrase A stitch in time saves nine, because the nine in the former sentence is less predictable, and therefore the word boundary will be more saliently marked by lengthening on the word-initial /n/.

(i) Prosodic structure as the interface between predictability and acoustic salience: a solution to the predictability problem

Earlier studies [60,62] proposed that prosodic structure is the interface between predictability and acoustic salience, that is, prosodic structure is used to control acoustic salience in order to signal relative predictability [61]. Aylett [61] proposed that in this way prosodic structure makes all words in an utterance equally easy to recognize. This proposal was termed the smooth signal redundancy hypothesis (figure 5, based on a similar figure in [62]).

Figure 5. The complementary relationship between predictability (language redundancy) and acoustic salience yields smooth signal redundancy (equal recognition likelihood throughout an utterance). Based on a similar figure in [62]. (Online version in colour.)

In the sentence Who's the author?, Who's in its context (___ the author?) is more predictable than author in its (full) context (Who's the ___?); the is even more predictable (context: Who's ___ author?); and furthermore, the word-initial syllable au(th)- is relatively unpredictable compared with the second syllable -(th)or. The smooth signal redundancy hypothesis states that an utterance's predictability profile (also called language redundancy) is inversely reflected in the prosodic structure of the elements (e.g. syllables and words) in the utterance. Prosodic structure is used to control the acoustic salience of surface phonetics (through prosodic prominence and boundary strength), so that the recognition likelihood of each element in the utterance is approximately equal, i.e. signal redundancy is smooth. As discussed in Aylett & Turk [60], the smooth signal redundancy profile is advantageous because it increases the likelihood of recognizing all of the elements in the utterance. The p(recognition)1 of the entire sequence corresponds to the product of the p(recognition) of each element in the sequence, and will therefore be greater if p(recognition) is equal across elements than if it differs from element to element.
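This product argument holds on the implicit assumption that the speaker has a roughly fixed budget of salience to distribute, so that the average per-element recognition probability is held constant; on that reading (our gloss, not the authors') it is an instance of the AM–GM inequality:

\[
\prod_{i=1}^{n} p_i \;\le\; \left(\frac{1}{n}\sum_{i=1}^{n} p_i\right)^{n},
\]

with equality exactly when all the p_i are equal. For example, with two elements and an average recognition probability of 0.8, the balanced allocation gives 0.8 × 0.8 = 0.64, whereas 0.9 × 0.7 = 0.63 and 0.95 × 0.65 ≈ 0.62: any uneven spread lowers the probability of recognizing the whole sequence.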

As discussed in Turk [62], the idea that prosodic structure reflects predictability provides an explanation for the effect of utterance length on the likelihood of boundary occurrence and on boundary strength. This is because, all other things being equal, words are harder to guess (less predictable) in longer utterances. To understand why, consider a two-syllable utterance: all things being equal, it can be parsed in two possible ways, as a sequence of two monosyllabic words or as a single disyllabic word.

parsing option 1: [ syl ]word  [ syl ]word

parsing option 2: [ syl    syl ]word

For a three-syllable utterance, the number of possible parsings increases to four:

parsing option 1: [ syl ]word [ syl ]word [ syl ]word

parsing option 2: [ syl ]word [ syl   syl ]word

parsing option 3: [ syl    syl ]word  [ syl ]word

parsing option 4: [ syl    syl    syl ]word

And for a four-syllable utterance, the number of possible parsings is even larger, i.e. eight. However, when a phrase boundary is inserted anywhere in the utterance, the number of possible parsings is halved, because the phrase boundary forces a word boundary at that juncture and so fixes one of the binary grouping choices. As this example illustrates, when predictability is relatively low because an utterance is long, prosodic structure can be used to increase recognition likelihood by signalling constituent boundaries.
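The parsing counts above (2, 4, 8, i.e. 2^(n−1) for n syllables) and the halving effect of a known boundary can be checked with a short enumeration. The sketch below is our own illustration, with ‘syl' as a placeholder syllable:

```python
from itertools import product

def parsings(n, fixed=()):
    """All ways to group n syllables into words. Each of the n-1 junctures is a
    word boundary (1) or not (0); junctures listed in `fixed` are forced to be
    boundaries, e.g. because a phrase boundary is signalled there."""
    out = []
    for bits in product((0, 1), repeat=n - 1):
        if all(bits[j] == 1 for j in fixed):
            words, w = [], ["syl"]
            for b in bits:
                if b:                      # close the current word at this juncture
                    words.append(w)
                    w = ["syl"]
                else:
                    w.append("syl")
            words.append(w)
            out.append(words)
    return out

for n in (2, 3, 4):
    print(n, "syllables:", len(parsings(n)), "possible parsings")          # 2, 4, 8
print("4 syllables, boundary known after syllable 2:",
      len(parsings(4, fixed=(1,))), "parsings")                            # 4 (halved)
```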

Aylett & Turk [60] proposed that predictability is a composite factor that directly influences prosodic structure, and thereby indirectly controls acoustic salience [61]. That is, all of the factors at the top of figure 6 contribute to the predictability of elements in an utterance. For example, a word's lexical frequency, together with its syntactic and semantic context, its real-world context (pragmatics) and utterance length, combine to predict how likely a particular word would be (i.e. how easily a word could be guessed) in that particular context. Aylett [61] refers to this predictability as ‘language redundancy’. Our current hypothesis is that the predictability of each element in an utterance relates to its predictability on the basis of both preceding and following elements (i.e. the full context), as well as its frequency of use and likelihood on the basis of real-world context, but note that it is an important research question to determine exactly what contributes to an element's predictability/language redundancy. As discussed in Turk [62], the speaker can compute predictability (language redundancy) on the basis of his/her own language and real-world experience. The speaker can incorporate information about the listener's knowledge, but need not do so.

Figure 6. Factors that shape surface phonetics and their relationship to predictability, acoustic salience and recognition likelihood. Based on a similar figure in [60,62]. (Online version in colour.)

As noted above, our hypothesis is that language redundancy is used to plan prosodic structure in order to make the recognition likelihood of each element equal. This goal of even recognition likelihood (or smooth signal redundancy) is balanced against other goals, such as speaking clearly, quickly or in rhythm as well as movement costs (e.g. time, energy) when speakers plan the surface phonetic properties of a spoken utterance.

Aylett & Turk [60] provide supporting evidence for the view that prosodic prominence structure reflects predictability: both prosodic prominence structure and predictability (word frequency, syllable transitional probability and first versus second mention of a word) largely accounted for the same variance in syllable duration in a large corpus study of spontaneous speech [61]. Further supporting evidence includes findings that word durations are longer, and pauses and intonational boundaries more likely, in less predictable sequences [15,63], discussed in Turk [62].

(d) Summary of section 2

What is speech timing used for? We propose that one of its main purposes is to make utterances easier to recognize, by signalling the identity of individual speech sounds (e.g. did versus dad), and also signalling (and compensating for) the relative predictability of syllables and words in larger utterances. Because timing effects are implemented on very specific stretches of speech that relate to prosodic constituents (e.g. final lengthening occurs primarily on the rhyme of the final syllable; prominence-related lengthening occurs primarily on the stressed syllable nucleus and onset), it appears that predictability does not have a direct effect on surface phonetics (including timing), but rather its effects are mediated by prosodic structure (see other supporting arguments in [62]). We propose that the goal of making speech easier to recognize by smoothing signal redundancy is balanced against other goals and costs when planning surface durations in speech.

3. How is speech timing controlled?

Here, we address two different views of speech timing control: with and without an extrinsic timekeeper. Both approaches assume that surface timing patterns result from processes available for general non-speech motor control, but they propose very different mechanisms to generate those surface phenomena. Extrinsic timing approaches involve the use of a system-extrinsic timekeeper, which tracks, represents and specifies time in units that are not defined within the system (in the case of speech, the system would be the speech motor control system). By contrast, intrinsic timing systems do not involve system-extrinsic timekeepers. In such systems, all aspects of surface timing emerge from within-system characteristics. Any within-system timing specification is made in terms of within-system units, e.g. within-system oscillator periods or phasing. We note that we will call extrinsic any system that involves at least some timing computation by an extrinsic timing mechanism. However, we suspect that in many, if not all, extrinsic timing systems there may be aspects of surface timing that are emergent and do not need to be specified by the extrinsic timekeeper.

We first present the two approaches, and then three types of timing phenomena that suggest extrinsic timekeeper control.

(a) Timing with an extrinsic timekeeper

Extrinsic timekeepers can be used in motor control for a variety of functions, including tracking the passage of time, measuring time, representing time as well as specifying time as a parameter of movement. Theories of speech and non-speech motor control that assume an extrinsic timekeeper include Directions Into Velocities of Articulators (DIVA) [64,65], based on Vector Integration To Endpoint (VITE) [66], and many optimal control theory models (e.g. [67]). These models assume that desired movement durations can be specified as part of the plan for an utterance, and that the passage of time (and/or the time remaining) within a movement can be continuously tracked during the implementation of that plan. Within these models, state (e.g. spatial) information is also tracked continuously, and timing information is integrated with state information to generate appropriate movement velocities at each time point. For example, DIVA [64] and VITE [66] assume that at each point in time, a temporal GO signal is multiplied by the difference vector (distance remaining to the target assuming a straight line path) to give instantaneous movement speed. In Bullock & Grossberg [66], GO is a function of time that is proportional to 1 divided by the time-to-target-attainment at the current instantaneous movement speed (cf. Lee's tau [68]). Because the GO signal in Bullock & Grossberg [66] is an increasing function of time, and the distance to the target decreases as a function of time, multiplying GO by the distance remaining until the target at each point in time yields a bell-shaped velocity profile [69]. The same GO for two different movement distances leads to equal movement durations for both, with higher peak velocities for the movement involving a greater distance. A larger GO for a given movement distance will yield a faster speed and therefore a shorter movement duration.
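As a rough illustration of the GO-signal idea, the sketch below Euler-integrates a VITE-style reach, under the simplifying assumption of a smoothly increasing, saturating GO function (our choice of functional form, not the exact equations of DIVA or of Bullock & Grossberg):

```python
import numpy as np

def simulate_vite(distance, go_gain=8.0, dt=0.001, t_max=1.0):
    """Toy VITE-style reach: velocity = GO(t) * (target - position).
    GO is assumed here to be a simple increasing, saturating function of time."""
    t = np.arange(0.0, t_max, dt)
    go = go_gain * t**2 / (0.01 + t**2)          # increasing GO signal (assumed form)
    pos = np.zeros_like(t)
    vel = np.zeros_like(t)
    for i in range(1, len(t)):
        vel[i] = go[i] * (distance - pos[i - 1])  # difference vector scaled by GO
        pos[i] = pos[i - 1] + vel[i] * dt
    return t, pos, vel

for d in (1.0, 2.0):                              # two movement distances, same GO
    t, pos, vel = simulate_vite(d)
    t_done = t[np.argmax(pos >= 0.95 * d)]        # time to cover 95% of the distance
    print(f"distance={d}: 95% reached at {t_done:.3f} s, peak velocity={vel.max():.2f}")
```

With the same GO profile, both movements cover any given fraction of their remaining distance on the same schedule, so their durations match while peak velocity scales with distance; the velocity trace rises from zero and falls back towards zero, giving the roughly bell-shaped profile described above.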

Optimal control theory models assume that we generate movements that are optimal in the sense that they meet task requirements at minimum cost. Many models of motor control in the optimal control theory framework are like DIVA in that they assume that we continuously monitor the states of our effectors (e.g. their position and velocity) in relation to the task goals, continuously updating our motor commands on the basis of state information to accomplish goals in a near-optimal way (but see [70,71] for an exception). In these models, movements are generated via a control policy that determines the optimal movement from any current state given the task goals and costs of movement. The control policy (which can be a solution to a set of equations) is determined by minimizing a cost function defining the task goals, costs of movement and their relative weightings in the current situation. Cost function minimization leads to the specification of values for all of the parameters in the model.

Although optimal control theory models do not necessarily require the use of an extrinsic timekeeper, many models developed within this framework use time as a parameter of movement and/or as a cost, and therefore assume one [67,69,70]. In many optimal control theory models that use extrinsic timekeepers, cost function minimization leads to the specification of movement parameters, including movement duration, where the optimal movement duration is the one that best satisfies the task requirements and minimizes movement costs. This movement duration results from several aspects of the cost function, including the specification of time as a task requirement, the cost of time, the cost of temporal inaccuracy and the temporal consequences of other movement costs, e.g. spatial inaccuracy at the movement target, or endpoint [69,72–74]. The goals of a movement will determine whether all of these aspects are included in the cost function. For example, if a movement must be produced within a certain time (as in tasks with a periodic rhythm), time would be an explicit task requirement, and spatial inaccuracy would be included in the costs.
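The trade-off can be made concrete with a toy cost function in which a movement duration is chosen to balance the cost of time against speed-dependent endpoint error. This is our own illustrative stand-in (the quadratic error term is a crude proxy for signal-dependent noise), not any specific published cost function:

```python
import numpy as np

def optimal_duration(distance, w_time=1.0, w_error=0.5):
    """Pick the movement duration T that minimizes a toy cost:
    cost(T) = w_time * T  +  w_error * (distance / T)**2,
    i.e. time itself is costly, and endpoint error is assumed to grow
    with average movement speed (an illustrative assumption)."""
    T = np.linspace(0.05, 2.0, 2000)               # candidate durations (s)
    cost = w_time * T + w_error * (distance / T) ** 2
    return T[np.argmin(cost)]

for d in (0.1, 0.2, 0.4):
    print(f"distance {d}: optimal duration ~ {optimal_duration(d):.2f} s")
# Larger distances or heavier accuracy weighting favour longer durations;
# a heavier time cost favours shorter ones.
```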

In contrast to tasks that require a specified duration as a task goal, purely spatial tasks might not involve an explicit goal for movement duration, but there would be temporal consequences of other task requirements, e.g. of spatial accuracy at target achievement, because faster movements can be produced when there are less stringent spatial accuracy requirements. In addition, empirical findings show that movements are usually produced in the minimum time consistent with other task requirements, suggesting that time itself is a cost [73,74].

Why should time be a cost? One possibility is that longer movements have more temporal variability [75]. This could be explained by the view that the mechanism that meters out time is variable, and hence more variability is expected to accumulate for longer duration intervals. However, this would not explain minimized durations observed in tasks where temporal accuracy is not an issue. Shadmehr and co-workers [69,72], following Harris & Wolpert [73], offer an explanation that relates movement speed to reward. That is, moving fast is desirable, because we get to a rewarding state quickly; moving slowly is suboptimal because it delays the next desirable state. Evidence in the literature supports the view that getting to a rewarding state more quickly is preferred. For example, Jimura et al. [76] found that thirsty undergraduates preferred to receive a small amount of water now, rather than more later (see [69] for additional evidence).

The optimal control theory framework is particularly attractive for speech timing, which appears to involve the influence of many different prioritizable factors. It has been used successfully to model simple movements, and to model aspects of speech timing [70,71]. We note, however, that although many if not most optimal control theory models of motor control assume an extrinsic timekeeper, this theoretical framework is a theory of parameter value optimization, and can also be used in intrinsic timing models that do not use extrinsic timekeepers.

Simko & Cummins' embodied task dynamics model [70,71] is an interesting case: an example of a theory of speech motor control in which time is used only as a cost (where surface utterance duration is penalized), but not as a parameter of movement. In avoiding the use of time as a parameter of movement, this model is similar to the articulatory phonology/task dynamics (AP/TD) approach, discussed in more detail below. However, even though time is not a parameter of movement in this model, an extrinsic timekeeping mechanism is nevertheless required to specify and represent the utterance duration quantity that it penalizes. On the definition presented at the beginning of §3, we would therefore classify it as an extrinsic timing model, even though it makes less extensive use of an extrinsic timekeeper than other types of extrinsic models.

In summary, many models of motor control use extrinsic timekeepers and many of these are optimal control theory models. In §3b, we discuss a different approach, that is, timing without an extrinsic timekeeper in AP/TD. Although this model currently provides the most comprehensive account of timing effects in speech production, we believe extrinsic models should be considered, for reasons laid out in §3c below.

(b) Timing without an extrinsic timekeeper in articulatory phonology/task dynamics

The main theory of speech production that assumes that surface timing phenomena can be produced without an extrinsic timekeeper is AP/TD [77–83]. This theory is particularly important because it currently provides the most comprehensive account of timing phenomena observed in speech, and has led to a number of significant insights into the nature of speech production, such as the understanding that coarticulation between adjacent sounds is often a matter of articulatory overlap rather than of changes to the phonemic features that define the words. The model is based on oscillators; this key feature enables it to produce surface timing patterns without an extrinsic timekeeper.

AP/TD is unlike traditional phonological theories which assume that units of phonological contrast are symbolic, i.e. do not contain quantitative specifications for how articulatory movement should unfold. In AP/TD, units of phonological contrast are gestures, defined as equations of motion that determine how constrictions will be formed in the vocal tract; constriction releases are modelled as movement back to a neutral vocal tract position. In this framework, each dimension of gestural movement towards a constriction goal is modelled as movement towards an equilibrium position in a damped, mass-spring system (analogous to the movement of a mass attached to a spring). The gesture's starting position is analogous to the position to which the mass attached to the spring is stretched, and the equilibrium position is the target position that is approached by the mass after releasing the spring. Because the system is critically damped, the mass does not oscillate, but rather asymptotes towards (approaches, but never quite reaches) the equilibrium position. It can thus be described as having point-attractor dynamics. The time required to approximate a constriction target (gestural settling time) is intrinsic to the system because it is dictated by the parameters of the mass-spring oscillator, i.e. its stiffness, mass and damping coefficients.
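To make the point-attractor idea concrete, the sketch below simulates a critically damped mass-spring unit. The parameter values are arbitrary and the code is only meant to show that settling time falls out of the spring parameters rather than being specified directly; it is not the AP/TD implementation:

```python
import numpy as np

def gesture_settling_time(x0, target, k, m=1.0, dt=0.001, t_max=0.5):
    """Critically damped mass-spring 'gesture' (point-attractor dynamics):
    m*x'' + b*x' + k*(x - target) = 0 with b = 2*sqrt(m*k), so the articulator
    asymptotes to the target without overshooting. Returns the time needed to
    cover 95% of the distance to the target."""
    b = 2.0 * np.sqrt(m * k)                  # critical damping: no oscillation
    x, v = x0, 0.0
    for step, _ in enumerate(np.arange(0.0, t_max, dt)):
        a = (-b * v - k * (x - target)) / m
        v += a * dt
        x += v * dt
        if abs(x - x0) >= 0.95 * abs(target - x0):
            return step * dt
    return None                                # target not approximated within t_max

for k in (200.0, 800.0):                       # a stiffer gesture settles faster
    print(f"stiffness {k:.0f}: ~{1000 * gesture_settling_time(0.0, 1.0, k):.0f} ms to near the target")
```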

Other aspects of timing within AP/TD are also determined by oscillators. As we explain below, point-attractor oscillators are additionally used to adjust the timing of gestures at positions defined by prosodic structure, i.e. for final lengthening and prominence-related lengthening [81,82]. AP/TD also uses two types of freely oscillating oscillators (i.e. oscillators with limit cycle rather than point-attractor dynamics): (i) gestural planning oscillators, and (ii) a hierarchy of coupled suprasegmental planning oscillators (syllable, foot and phrase oscillators). These oscillators are used during utterance planning to determine (i) relative timing among gestures (intergestural coordination), (ii) the amount of time that each gesture shapes the vocal tract (gestural activation) and (iii) some aspects of timing attributed to suprasegmental (i.e. prosodic) structure.

In this framework, intergestural timing is determined by the relative phasing among gestural planning oscillators assigned to each gesture, and does not need to be specified by an extrinsic timekeeper. For example, if two gestural planning oscillators entrain in-phase during utterance planning, then the physical gestures that correspond to each planning oscillator will begin at the same time. Other phasing relationships are also possible, but the most stable entrainment patterns are predicted to be the most common, i.e. in-phase and anti-phase. For a more complete discussion of intergestural timing, see Nam et al. [83].
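Entrainment of this kind can be illustrated with a generic pair of coupled phase oscillators. The coupling function below is a standard Kuramoto-style choice made purely for illustration, not the specific coupling used in AP/TD:

```python
import numpy as np

def entrain(phase_diff0, coupling=5.0, dt=0.001, t_max=2.0):
    """Two identical phase oscillators with symmetric coupling:
    d(phi1)/dt = omega + K*sin(phi2 - phi1), and symmetrically for phi2.
    The phase difference relaxes towards 0 (in-phase entrainment)."""
    omega = 2 * np.pi * 4.0                    # 4 Hz planning oscillation (arbitrary)
    phi1, phi2 = 0.0, phase_diff0
    for _ in np.arange(0.0, t_max, dt):
        d1 = omega + coupling * np.sin(phi2 - phi1)
        d2 = omega + coupling * np.sin(phi1 - phi2)
        phi1 += d1 * dt
        phi2 += d2 * dt
    return (phi2 - phi1 + np.pi) % (2 * np.pi) - np.pi   # wrapped phase difference

for start in (0.3, 1.0, 2.0):
    print(f"initial phase difference {start:.1f} rad -> {entrain(start):+.3f} rad after 2 s")
# Differences shrink towards 0: gestures whose planning oscillators entrain
# in-phase begin at the same time.
```

In this toy model, reversing the sign of the coupling term makes the anti-phase state the stable one instead.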

The amount of time that each gesture is active (i.e. its activation interval) is derived from other parameters within the system and does not need to be specified extrinsically. Gestural activation intervals specify the amount of time that a gesture actively shapes the vocal tract. Gestures whose activation intervals are as long as their settling times will have enough time to approximate their targets. On the other hand, if gestural activation intervals are shorter than gestural settling times, targets will not be approximated and undershoot will occur. If gestural activation intervals are longer than gestural settling times, then gestures will continue to asymptote towards their targets for the length of the activation interval (and will thus appear to be in a quasi-steady state for the duration of the activation interval).

Gestural activation interval timing is intrinsic, because activation intervals are specified within the model as a fixed proportion of each planning oscillator's cycle. Because gestural planning oscillations are coupled to the oscillations of the suprasegmental hierarchy of syllable-, foot- and phrase-oscillations, the physical duration of activation will depend on the frequency of oscillation of this whole planning oscillator ensemble, i.e. on overall speech rate. When speech rate (i.e. planning oscillator ensemble frequency) increases, activation intervals will be physically shorter, and undershoot will be more likely, although gestural activation intervals will still correspond to the same gestural planning oscillator proportion. Likewise, when speech rate decreases, activation intervals will be longer, and more time will be spent asymptoting (getting closer and closer) to the gesture's target.
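A back-of-the-envelope illustration of the rate effect (the numbers are ours, not AP/TD parameter settings):

```python
# Activation interval = fixed proportion of one planning-oscillator cycle,
# so its physical duration shrinks as speech rate (cycle frequency) goes up.
settling_time_ms = 120.0          # time the gesture needs to get close to its target
activation_proportion = 0.6       # activation interval as a fraction of one cycle

for rate_hz in (4.0, 6.0, 8.0):   # planning-oscillator ensemble frequency
    cycle_ms = 1000.0 / rate_hz
    activation_ms = activation_proportion * cycle_ms
    outcome = "target approximated" if activation_ms >= settling_time_ms else "undershoot"
    print(f"{rate_hz:.0f} Hz: activation {activation_ms:.0f} ms -> {outcome}")
# 4 Hz gives 150 ms (>= 120 ms, target approximated); 8 Hz gives 75 ms (undershoot),
# even though the activation interval is the same proportion of the cycle in both cases.
```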

Temporal aspects of prosodic structure are also intrinsic to the system and do not need to be extrinsically specified. There are two aspects of prosodic timing in this framework: first, interactions among higher-level organizing oscillators (e.g. syllable, foot, phrase) specify the rates of syllable, foot and phrase production. These oscillation rates, in turn, affect the rates of planning oscillators for individual gestures, which determine gestural activation intervals, because each activation interval corresponds to a proportion of a planning oscillator cycle. The second aspect of prosodic timing has to do with adjustments that are made to all gestures that are concurrently active within a specified interval (mentioned briefly above). For example, the lengthenings that commonly occur at prosodically privileged positions in an utterance, e.g. boundary-related and prominence-related lengthening, are generated by proportionally stretching the activation intervals of boundary-adjacent or prominent gestures [81,82].

Global timing, i.e. overall speech rate, is specified by the utterance-specific oscillation rate of the ensemble of suprasegmental and gestural planning oscillators, but again does not involve the specification of surface duration [82].

In the current form of AP/TD, surface timing characteristics cannot be specified, nor is there a mechanism that can keep track of the output durations while they are being produced, or measure them after they are produced. These features are not required in the model, because once speakers have chosen a rate of speech and have imposed prosodic boundaries and prominences on an ordered sequence of gesturally specified words, surface timing patterns emerge from the interacting mechanisms of the system.

Simko & Cummins' embodied task dynamic model [70,71] is similar to AP/TD in that it uses mass-spring oscillator systems for gestures. In this model, some aspects of surface timing are emergent, i.e. they result from the stiffness specification of the mass-spring system, and other aspects result from the coordination of these oscillators in terms of their phasing. However, as discussed above, Simko & Cummins' model cannot be considered a strictly intrinsic timing model because it uses an extrinsic timekeeping mechanism to represent an utterance duration cost and therefore the surface duration of each utterance.

Although the use of intrinsic timing has the advantage of minimizing the planning required for each utterance, the three kinds of evidence we present in the next section are difficult to reconcile with the intrinsic timing approach adopted in AP/TD, and are suggestive of extrinsic timekeeper control.

(c) Evidence for extrinsic timing

In §3c(i),(ii) which follow, we provide evidence that challenges the intrinsic timing aspect of the AP/TD model, because it supports the use of an extrinsic timekeeping mechanism in speech and non-speech motor control. In §3c(iii), we present evidence which is difficult to explain in mass-spring systems, although it may be implementable. These lines of evidence motivate us to consider extrinsic timing models that include time as a parameter of movement.

(i) Increasing variability with increases in interval duration: evidence for an extrinsic timekeeper

Patterns of variability in the timing of intervals support an extrinsic timing mechanism. Many studies show more variability in interval duration for longer intervals defined by movement [84–91], and as explained in Schmidt et al. [85, p. 422], these findings are expected in extrinsic timing models: ‘the mechanism that meters out intervals of time … is variable, and the amount of variability is directly proportional to the length of the interval of time to be metered out.’ The relationship of variability to mean duration follows Weber's law, with an approximately constant coefficient of variation (standard deviation/mean) across a range of intervals (from tens of milliseconds to seconds and possibly longer), for both humans and animals, consistent with an extrinsic timing mechanism [83,84,89–93]. Support for the view that the same timekeeping mechanisms are used in perception comes from a Weber relationship between difference threshold and interval duration in perceptual discrimination tasks [87].
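One simple way to see how a noisy timekeeper yields Weber-like scaling is to assume that the timekeeper's rate varies multiplicatively from trial to trial. The following is a toy simulation of that assumption (ours, not a model from the cited studies):

```python
import numpy as np

rng = np.random.default_rng(0)

def produced_intervals(target_ms, n_trials=10000, clock_cv=0.05):
    """Toy scalar-timing simulation: the timekeeper's rate varies multiplicatively
    across trials, so produced durations scale with the target interval."""
    rate = rng.normal(1.0, clock_cv, n_trials)     # trial-to-trial clock-rate noise
    return target_ms * rate

for target in (200, 400, 800):
    t = produced_intervals(target)
    print(f"target {target} ms: mean {t.mean():.0f} ms, sd {t.std():.1f} ms, CV {t.std() / t.mean():.3f}")
# The standard deviation grows in proportion to the mean, so the coefficient of
# variation stays roughly constant (~0.05), as Weber's law describes.
```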

The Weber relationship between standard deviation and interval duration, suggestive of noise in a timing process and therefore of an extrinsic timing mechanism, is observed in many production tasks, including

  • 1.Single-timed interval production, where participants reproduce a single interval to match the duration of a model, using, e.g. taps [86,87,90,94], for intervals ranging from 0 to 1050 ms.

  • 2.Movements made to a metronome: for moving a stylus to and from a target, with interbeat intervals from 200 to 500 ms [85].

  • 3.Movements made to an internally recalled rhythm in a continuation paradigm [88,90,94] among others: participants first produce a movement (e.g. tapping) in synchrony with a metronome (pacing phase), and continue the rhythm after the metronome is turned off (continuation phase). Typically, interval duration measurements are made from the continuation phase; standard deviations and mean interval durations are computed over a series of trials. Ivry & Hazeltine [90] found patterns of increased variability for longer tapping interval duration for intervals ranging from 325 to 500 ms. Spencer & Zelaznik [91] observed increased variability for longer tapping and continuous circle drawing intervals, as well as back-and-forth line drawing intervals, for intervals ranging from 300 to 500 ms (see also [94]).

  • 4.Speech movements and intervals. Byrd & Saltzman [95] found that variability increased with movement duration for measured durations of lip aperture closings associated with a trans-boundary /m/-schwa-/m/ sequence. Movements of different durations were elicited in conditions designed to systematically vary the prosodic boundary strength before the second /m/. For example, the target sequence mam- in mommamia was described as having no word boundary before its second /m/, whereas in momma mimi the second /m/ was separated by a word boundary from the preceding vowel. In other cases, the second /m/ was separated from the preceding vowel by a stronger boundary, and was either phrase- or utterance-initial. Movement durations were generally longer for stronger boundaries, because of constituent-final lengthening, whose magnitude increases with boundary strength (cf. [30] for acoustic measures). Data from Turk & Shattuck-Hufnagel [29] show a similar pattern for phrase-final versus phrase-medial word-final syllable rhyme measures, based on landmarks in the acoustic signal. Rhyme duration means and standard deviations were considerably higher for phrase-final words when compared with phrase-medial words; that is, monosyllabic words (e.g. Tom) had phrase-final mean durations of 346 ms (82 ms s.d.) versus phrase-medial mean durations of 193 ms (47 ms s.d.).

In AP/TD, longer movement durations at phrase boundaries arise by stretching the activation intervals in the vicinity of the boundary, that is, by decreasing the oscillation rate of a planning oscillator ensemble in a specified interval, while leaving the number of oscillations the same. Within this framework, therefore, there are no additional ‘ticks’ of an utterance-specific clock that could be used to explain the source of the additional temporal variability. Thus, the substantial body of evidence supporting increased variability with longer-duration movements is inconsistent with the AP/TD model of motor timing.

(ii) Surface timing constraints and goal specifications: evidence that surface durations are part of the phonetic plan for an utterance

Within AP/TD, desired surface durations cannot be specified as part of the utterance plan. For example, gesture durations in phrase-final position reflect the settling-time of their mass-spring system, their gestural activation interval and an adjustment which lengthens the gestural activation intervals at the boundary [81]. But in AP/TD, the surface duration emerges from these mechanisms alone, and cannot be specified in the original utterance plan.

However, Nakai et al. [96] suggest that a constraint on surface durations of phonemically short vowels in phrase-final position may be required to preserve the short versus long phonemic contrast in Northern Finnish. In Northern Finnish disyllabic words with a phonemically short vowel in the word-final syllable (CVCV(C)), the final-syllable vowel is described as phonetically half-long because its duration is intermediate between that of the short vowel in other contexts and that of the contrasting long vowel (VV). The authors observed that the magnitude of final, accentual and combined lengthening on the half-long vowel was restricted (e.g. 17% combined accentual + final lengthening on the half-long vowel versus 68% on the long vowel in the same context). Support for a surface duration constraint also comes from observations that lengthening magnitudes were smaller for half-long vowels with longer phrase-medial durations; Nakai et al. [96] found a negative correlation between phrase-medial half-long vowel durations and the magnitude of phrase-final lengthening. These results are consistent with the view that the surface durations of the (phonemically short) half-long vowel are restricted in order to avoid endangering the phonemic short versus long vowel quantity contrast in this language. Although it is possible to implement this type of effect in AP/TD, the effect is difficult to explain within the theory, because surface durations cannot be measured, represented or referred to as motivating factors.

Additional support for the representation of surface durations can be found in studies of rate of speech effects and durational correlates of prosodic structure and quantity [97–99]. These studies find that there is considerable variability in the strategies that different speakers use to implement these factors, but that nevertheless speakers all achieve a common surface duration pattern of relatively long surface durations, e.g. in phrase-final position, at slow speech rates and for phonemically long vowels. These findings challenge intrinsic timing in AP/TD because they suggest the equivalence of different strategies that result in similar surface duration patterns, and therefore support the specification of surface duration goals.

In summary, the two types of evidence we presented in §3c(i),(ii) strongly support the use of extrinsic timekeepers to measure, represent and specify surface movement and/or interval durations in speech. This evidence therefore supports models like DIVA/VITE in which duration is a planned parameter of movement. Although duration is not a parameter of movement in Simko & Cummins' [70,71] model, this model could probably be modified to account for these data, because Simko & Cummins use an extrinsic timekeeper to specify an utterance duration cost. However, their model might need to be amended to measure, represent and specify durations of constituents smaller than the utterance. Currently, in their model, although whole-utterance duration cost specification requires an extrinsic timer, surface timing of constituents smaller than the utterance (e.g. syllables, individual gestures) arise from phasing relations among gestures and from gestural stiffness, and is not specified directly. In §3c(iii), we present evidence which challenges this approach as well as AP/TD's approach, because it is difficult to account for in mass-spring systems. This evidence therefore motivates the consideration of extrinsic timing models of speech production which include time as a parameter of movement (and not simply as a cost).

(iii) Independent planning of the timing of movement onset versus target attainment: evidence difficult to account for in mass-spring models

Lee commented [68] ‘it is frequently not critical when a movement starts—just so long as it does not start too late. For example, an experienced driver who knows the car and road conditions can start braking safely for an obstacle a bit later than an inexperienced driver…’ This type of example suggests that timing variability differs at target attainment versus movement onset, a pattern that is difficult to explain in mass-spring models such as AP/TD, but easier to explain in extrinsic timing models, because they can allow separate timing specification and prioritization for target attainment versus other parts of movement [100].

Several studies have confirmed the finding of differential variability in the timing of target attainment, compared with the timing of other movement events such as movement onset ([91,101–104], for non-speech motor activity; [105] for speech). For example, Bootsma & van Wieringen [102] showed that the timing of initiating forehand drives in table tennis was more than twice as variable as the timing of paddle contact with the ball. Forehand drives in this experiment had average movement times that ranged between 92 and 178 ms. Timing accuracy at paddle–ball contact was estimated on the basis of the ratio of standard deviation of the direction of travel of the paddle and its mean rate of change, and was calculated to be within 2–5 ms. By contrast, movement time standard deviations ranged from 5 to 21 ms, depending on the player, showing that movement initiation times were much more variable.

Perkell & Matthies [105] showed a similar pattern of timing variability for upper lip protrusion movements during spoken /i_u/ sequences, where the number of intervocalic consonants varied systematically. They observed lower variability in the timing of target attainment (maximum protrusion) relative to voicing onset for /u/, when compared with the timing of a point shortly after movement onset (maximum acceleration), relative to voicing onset for the same vowel. This pattern suggests a tighter temporal coordination of maximum lip protrusion with voicing onset than of lip protrusion movement onset with voicing onset. These findings suggest that target attainment timing is controlled independently of movement onset timing, and that target attainment timing takes higher priority. These findings are not predicted by mass-spring models in which the timing of movement onset is not independent from the timing of target achievement. That is, while AP/TD does provide a mechanism for separately adjusting the timing of the beginning and the end of an activation interval (by applying its prosodic ‘stretching’ mechanism to a proportion of the interval), it does not provide a mechanism by which these timings could be differently variable.

By contrast, an extrinsic timing mechanism can, in principle, (i) plan the timing of movement onset independently of the timing of target attainment, and (ii) account for the possibility of different degrees of variability in these two time points, as would be the case if the timing of target attainment has a higher priority than the timing of the movement onset, resulting in online adjustments to achieve high priority goals.

The separate control of different parts of a movement is also supported by evidence from spatial variability at target achievement versus other parts of movement. The first line of evidence for differential degrees of variability at different points in a movement trajectory comes from work by Todorov & Jordan [67], who found lower spatial variability at target achievement compared with elsewhere in movements, for a task in which participants moved a pointer through a series of circular targets on a flat table.2 When analysing their results, they sampled each movement trajectory at 100 equally spaced points along the path. They computed the average movement path, and determined spatial deviations from the average path at each of the 100 points. Results showed that spatial deviations from the average path were lowest at the circular targets, and higher in between. Paulignan et al. [106] report similar results for shorter-than-a-second reaching movements (variability greater for first half of reaching movement, compared with the second half as the hand approached the target), as do Liu & Todorov [107] for two reaching tasks. They [107] found that spatial variability was lowest at the beginning and end of each movement, and highest in between. Presumably the variability was low at the beginning of movement, because the movements started from a fixed point, and was low at the end of movement, because the end was the target.
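The analysis described for this task (resample each trajectory at 100 points spaced equally along its path, average across trials, then measure deviation from the average path at each point) can be sketched as follows. This is our reading of the procedure, not Todorov & Jordan's code, and the synthetic trajectories exist only so the functions can be run end to end:

```python
import numpy as np

def resample_along_path(traj, n_points=100):
    """Resample a 2D trajectory (array of shape [T, 2]) at n_points positions
    spaced equally along its arc length."""
    deltas = np.diff(traj, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.hypot(deltas[:, 0], deltas[:, 1]))])
    targets = np.linspace(0.0, arclen[-1], n_points)
    return np.column_stack([np.interp(targets, arclen, traj[:, 0]),
                            np.interp(targets, arclen, traj[:, 1])])

def pointwise_deviation(trials, n_points=100):
    """Mean spatial deviation from the average path at each resampled point."""
    resampled = np.stack([resample_along_path(t, n_points) for t in trials])
    mean_path = resampled.mean(axis=0)
    return np.linalg.norm(resampled - mean_path, axis=2).mean(axis=0)

# Synthetic trajectories, used only to exercise the functions.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)[:, None]
trials = [np.hstack([t, np.sin(np.pi * t)]) + rng.normal(0.0, 0.01, (200, 2))
          for _ in range(20)]
print(pointwise_deviation(trials).round(4))   # one value per point along the path
```

Applied to real reaching or pointing data, the per-point deviations computed this way are what were reported to be lowest at the circular targets and higher in between.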

These results suggest that actors are able to identify parts of a trajectory that relate most closely to task performance, and are able to prioritize spatial accuracy in these parts of the trajectory. The results are also consistent with the view that actors make use of a feedback-based error correction system to implement error correction in the parts of the trajectory whose accuracy has been prioritized. On this view, errors in planned movement trajectories (as evidenced by deviations from the mean) can be left uncorrected if they do not interfere with task performance. In addition, these data suggest that separate parts of movement are identified, so that spatial accuracy can be prioritized, something that would be straightforward if these same points were also identified for differential timing prioritization in an extrinsic timing model. Different degrees of spatial (as well as timing) variability at different parts of movement are difficult to explain in mass-spring models, though perhaps not impossible to implement.

(d) Summary of section 3

Here, we reviewed two types of timing control theory: (i) without an extrinsic timekeeper, exemplified by AP/TD and (ii) with an extrinsic timekeeper, exemplified by DIVA and many types of optimal control theory models. Several findings challenge models such as AP/TD that do not make use of an extrinsic timekeeper. These findings include greater timing variability for longer duration intervals when compared with shorter duration intervals, the apparent use of a durational constraint in Northern Finnish, as well as the use of different strategies to achieve the same duration patterns as a speech planning goal. Additionally, we presented evidence of differential timing variability at movement end when compared with movement onset. This evidence is difficult to explain in intrinsic timing (mass-spring) models, but is more straightforward to account for in extrinsic timing models. Models of speech timing control that involve an extrinsic timekeeper are therefore worth investigating, although they will require extensive development to account for the range of phenomena currently captured by AP/TD.

4. Conclusion

Understanding speech timing requires an understanding of both what timing is used for and how it is controlled. We propose that one goal of speech timing is to make speech understandable, and that this goal is balanced against others, such as speaking quickly, to give the surface timing properties of speech. This view is based on findings from controlled experiments, as well as on analyses of relationships among factors proposed to account for surface timing patterns. We also presented two alternative ways of modelling surface timing patterns: (i) as an emergent property of motor control, without the involvement of an extrinsic timekeeper (as in AP/TD), and (ii) as the result of desired durational specifications made possible by an extrinsic timekeeper (as in DIVA/VITE and many optimal control theory models), where those specifications are balanced against other task requirements and costs to generate (near-)optimal movements. Although the AP/TD framework currently exceeds other models in its ability to account for speech timing phenomena, several findings present challenges for it and raise the possibility that models of motor control involving an extrinsic timekeeper may ultimately provide a simpler and more comprehensive account of speech timing behaviour.

While aspects of what timing is used for, and of the structures that govern it, remain to be discovered, our current understanding of these two facets of speech timing is more advanced than our understanding of the mechanisms that control it. It is hoped that advances in experimentation, modelling and neuroscience will eventually lead to a better match between our understanding of speech timing patterns and our models of how those patterns arise.

Acknowledgements

We thank Jelena Krivokapic and two anonymous reviewers for useful comments on previous versions of this manuscript, Elliot Saltzman and Louis Goldstein for tutorial discussions on articulatory phonology/task dynamics, and Dave Lee for discussions of General Tau theory. Any errors are ours.

Endnotes

1. Note that, by definition, probability values never exceed 1.

2. Target-to-target movement durations were comparable to those observed in speech (i.e. approx. 100–400 ms).

Funding statement

This work was supported by an Arts and Humanities Research Council fellowship (AH/1002758/1) to the first author, and NIH R01-DC008780 to the second author.

References

1. Turk A, Nakai S, Sugahara M. 2006. Acoustic segment durations in prosodic research: a practical guide. In Methods in empirical prosody research (eds Sudhoff S, Lenertová D, Meyer R, Pappert S, Augurzky P, Mleinek I, Richter N, Schliesser J.), pp. 1–28. Berlin, Germany: De Gruyter.

2. Stevens KN. 2002. Toward a model for lexical access based on acoustic landmarks and distinctive features. J. Acoust. Soc. Am. 111, 1873–1891.

3. Perkell JS, Zandipour M, Matthies ML, Lane H. 2002. Economy of effort in different speaking conditions. I. A preliminary study of intersubject differences and modeling issues. J. Acoust. Soc. Am. 112, 1627–1641. (doi:10.1121/1.1506369)

4. Peterson G, Lehiste I. 1960. Duration of syllable nuclei in English. J. Acoust. Soc. Am. 32, 693–703. (doi:10.1121/1.1908183)

5. Klatt DH. 1976. Linguistic uses of segmental duration in English: acoustic and perceptual evidence. J. Acoust. Soc. Am. 59, 1208–1221. (doi:10.1121/1.380986)

6. Fletcher J. 2010. The prosody of speech: timing and rhythm. In The handbook of phonetic sciences, 2nd edn (eds Hardcastle WJ, Laver J, Gibbon FE.), pp. 521–602. London, UK: Blackwell.

7. Delattre PC. 1966. A comparison of syllable length conditioning among languages. Int. Rev. Appl. Linguist. 4, 183–198. (doi:10.1515/iral.1966.4.1-4.183)

8. Lindblom B. 1968. Temporal organization of syllable production. In Reports of the 6th Int. Congress of Acoustics, Tokyo, Japan (ed. Y Kohasi), pp. B29–B30. Tokyo, Japan: Maruzen.

9. Lehiste I. 1970. Suprasegmentals. Cambridge, MA: MIT Press.

10. Nooteboom S. 1972. Production and perception of vowel duration: a study of durational properties of vowels in Dutch. Unpublished PhD thesis, University of Utrecht, The Netherlands.

11. van Santen JPH. 1992. Contextual effects on vowel duration. Speech Commun. 11, 513–546. (doi:10.1016/0167-6393(92)90027-5)

12. Selkirk EO. 1978. On prosodic structure and its relation to syntactic structure. In Nordic prosody II (ed. Fretheim T.), pp. 111–140. Trondheim, Norway: TAPIR.

13. Nespor M, Vogel I. 1986. Prosodic phonology. Dordrecht, The Netherlands: Foris Publications.

14. Watson D, Gibson E. 2004. The relationship between intonational phrasing and syntactic structure in language production. Lang. Cogn. Process. 19, 713–755. (doi:10.1080/01690960444000070)

15. Watson D, Breen M, Gibson E. 2006. The role of syntactic obligatoriness in the production of intonational boundaries. J. Exp. Psychol. Learn. Mem. Cogn. 32, 1045–1056. (doi:10.1037/0278-7393.32.5.1045)

16. Astésano C, Bard EG, Turk A. 2007. Structural influences on initial accent placement in French. Lang. Speech 50, 423–446. (doi:10.1177/00238309070500030501)

17. Kainada E. 2010. Phonetic and phonological nature of prosodic boundaries: evidence from Modern Greek. Unpublished PhD thesis, University of Edinburgh, UK.

18. Jackendoff R. 1987. Consciousness and the computational mind. Explorations in cognitive science, no. 3, p. 329. Cambridge, MA: MIT Press.

19. Chomsky N, Halle M. 1968. The sound pattern of English. New York, NY: Harper & Row.

20. Beckman ME, Pierrehumbert JB. 1986. Intonational structure in Japanese and English. Phonol. Yearbook 3, 255–309. (doi:10.1017/S095267570000066X)

21. Gee JP, Grosjean F. 1983. Performance structures: a psycholinguistic and linguistic appraisal. Cogn. Psychol. 15, 411–458. (doi:10.1016/0010-0285(83)90014-2)

22. Jun S-A (ed.). 2005. Prosodic typology: the phonology of intonation and phrasing. Oxford, UK: Oxford University Press.

23. White L. 2002. English speech timing: a domain and locus approach. Unpublished PhD thesis, University of Edinburgh, UK.

24. Vaissière J. 1983. Language independent prosodic features. In Prosody: models and measurements (eds Cutler A, Ladd DR.), pp. 53–65. Berlin, Germany: Springer.

25. Keating P, Cho T, Fougeron C, Hsu C-S. 2003. Domain-initial strengthening in four languages. In Phonetic interpretation: papers in laboratory phonology VI (eds Local J, Ogden R, Temple R.), pp. 143–161. Cambridge, UK: Cambridge University Press.

26. Cho T, Keating P. 2009. Effects of initial position versus prominence in English. J. Phon. 37, 466–485. (doi:10.1016/j.wocn.2009.08.001)

27. Bombien L, Mooshammer C, Hoole P, Kühnert B. 2010. Prosodic and segmental effects on EPG contact patterns of word-initial German clusters. J. Phon. 38, 388–403. (doi:10.1016/j.wocn.2010.03.003)

28. Cambier-Langeveld T. 1997. The domain of final lengthening in the production of Dutch. In Linguistics in the Netherlands (eds Coerts J, Hoop HD.), pp. 13–24. Amsterdam, The Netherlands: John Benjamins.

29. Turk AE, Shattuck-Hufnagel S. 2007. Multiple targets of phrase-final lengthening in American English words. J. Phon. 35, 445–472. (doi:10.1016/j.wocn.2006.12.001)

30. Wightman CW, Shattuck-Hufnagel S, Ostendorf M, Price PJ. 1992. Segmental durations in the vicinity of prosodic phrase boundaries. J. Acoust. Soc. Am. 91, 1707–1717. (doi:10.1121/1.402450)

31. Keating P. 2006. Phonetic encoding of prosodic structure. In Speech production: models, phonetic processes, and techniques (eds Harrington J, Tabain M.), pp. 167–186. New York, NY: Psychology Press.

32. Turk AE, Shattuck-Hufnagel S. 2000. Word-boundary-related duration patterns in English. J. Phon. 28, 397–440. (doi:10.1006/jpho.2000.0123)

33. Turk A. 2012. The temporal implementation of prosodic structure. In The Oxford handbook of laboratory phonology (eds Cohn A, Fougeron C, Huffman M.), pp. 242–253. Oxford, UK: Oxford University Press.

34. Turk A, Shattuck-Hufnagel S. 2013. What is speech rhythm? A commentary inspired by Arvaniti and Rodriquez, Krivokapić, and Goswami and Leong. Lab. Phonol. 4, 93–118. (doi:10.1515/lp-2013-0005)

35. Pierrehumbert J, Talkin D. 1992. Lenition of /h/ and glottal stop. In Papers in laboratory phonology II: gesture, segment, prosody (eds Docherty GJ, Ladd DR.), pp. 90–117. Cambridge, UK: Cambridge University Press.

36. Ogden R. 2004. Non-modal voice quality and turn-taking in Finnish. In Sound patterns in interaction: cross-linguistic studies from conversation (eds Couper-Kuhlen E, Ford C.), pp. 29–62. Amsterdam, The Netherlands: John Benjamins.

37. Dilley L, Shattuck-Hufnagel S, Ostendorf M. 1996. Glottalization of word-initial vowels as a function of prosodic structure. J. Phon. 24, 423–444. (doi:10.1006/jpho.1996.0023)

38. Redi L, Shattuck-Hufnagel S. 2001. Variation in the realization of glottalization in normal speakers. J. Phon. 29, 407–429. (doi:10.1006/jpho.2001.0145)

39. Tanaka H. 2004. Prosody for marking transition-relevance places in Japanese conversation: the case of turns unmarked by utterance-final objects. In Sound patterns in interaction: cross-linguistic studies from conversation (eds Couper-Kuhlen E, Ford C.), pp. 63–96. Amsterdam, The Netherlands: John Benjamins.

40. Fougeron C, Keating P. 1997. Articulatory strengthening at edges of prosodic domains. J. Acoust. Soc. Am. 101, 3728–3740. (doi:10.1121/1.418332)

41. Lavoie L. 2001. Consonant strength: phonological patterns and phonetic manifestations. London, UK: Routledge.

42. Shattuck-Hufnagel S, Ostendorf M, Ross K. 1994. Stress shift and early pitch accent placement in lexical items in American English. J. Phon. 22, 357–388.

43. Ladd DR. 2008. Intonational phonology, 2nd edn. Cambridge, UK: Cambridge University Press.

44. Hayes B. 1983. A grid-based theory of English meter. Linguist. Inq. 14, 357–393.

45. Selkirk EO. 1984. Phonology and syntax: the relation between sound and structure. Cambridge, MA: MIT Press.

46. Halle M, Vergnaud J-R. 1987. An essay on stress. Cambridge, MA: MIT Press.

47. Beckman ME, Edwards J. 1994. Articulatory evidence for differentiating stress categories. In Papers in laboratory phonology III: phonological structure and phonetic form (ed. Keating P.), pp. 7–33. Cambridge, UK: Cambridge University Press.

48. Beckman ME, Edwards J. 1992. Intonational categories and the articulatory control of duration. In Speech perception, production and linguistic structure (eds Tohkura Y, Vatikiotis-Bateson E, Sagisaka Y.), pp. 359–376. Tokyo, Japan: OHM Publishing Co., Ltd.

49. Turk AE, Sawusch JR. 1997. The domain of accentual lengthening in American English. J. Phon. 25, 25–41. (doi:10.1006/jpho.1996.0032)

50. Mo Y, Cole J, Hasegawa-Johnson M. 2010. Prosodic effects on temporal structure of monosyllabic CVC words in American English. In Proc. 5th Speech Prosody Conf. 2010, Chicago, IL, 100208, pp. 1–4.

51. Heuven VJJP, Sluijter AMC. 1996. Notes on the phonetics of word prosody. In Stress patterns of the world, part 1: background (HIL Publications) (eds Goedemans R, Hulst HVD, Visch E.), pp. 233–269. The Hague, The Netherlands: Holland Academic Graphics.

52. Cho T. 2006. Manifestation of prosodic structure in articulation: evidence from lip kinematics in English. In Laboratory phonology, vol. 8 (eds Goldstein L, Whalen DH, Best CT.), pp. 519–548. Berlin, Germany: Mouton de Gruyter.

53. Shattuck-Hufnagel S, Turk A. 1996. A prosody tutorial for investigators of auditory sentence processing. J. Psycholinguist. Res. 25, 193–247. (doi:10.1007/BF01708572)

54. Keating P, Shattuck-Hufnagel S. 2002. A prosodic view of word form encoding for speech production. UCLA Work. Pap. Phon. 101, 112–156.

55. Caspers J. 1994. Pitch movements under time pressure: effects of speech rate on the melodic marking of accents and boundaries in Dutch. The Hague, The Netherlands: Holland Academic Graphics.

56. Lieberman P. 1963. Some effects of semantic and grammatical context on the production and perception of speech. Lang. Speech 6, 172–187.

57. Fowler C, Housum J. 1987. Talkers' signaling of 'new' and 'old' words in speech and listeners' perception and use of the distinction. J. Mem. Lang. 26, 489–504. (doi:10.1016/0749-596X(87)90136-7)

58. Jurafsky D, Bell A, Gregory M, Raymond W. 2001. Probabilistic relations between words: evidence from reduction in lexical production. In Frequency and the emergence of linguistic structure (eds Bybee J, Hopper P.), pp. 229–254. Amsterdam, The Netherlands: John Benjamins.

59. Bell A, Brenier J, Gregory M, Girand C, Jurafsky D. 2009. Predictability effects on durations of content and function words in conversational English. J. Mem. Lang. 60, 92–111. (doi:10.1016/j.jml.2008.06.003)

60. Aylett M, Turk A. 2004. The smooth signal redundancy hypothesis: a functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Lang. Speech 47, 31–56. (doi:10.1177/00238309040470010201)

61. Aylett MP. 2000. Stochastic suprasegmentals: relationships between redundancy, prosodic structure and care of articulation in spontaneous speech. Unpublished PhD thesis, University of Edinburgh, UK.

62. Turk A. 2010. Does prosodic constituency signal relative predictability? A smooth signal redundancy hypothesis. J. Lab. Phonol. 1, 227–262. (doi:10.1515/LABPHON.2010.012)

63. Gahl S, Garnsey S. 2004. Knowledge of grammar, knowledge of usage: syntactic probabilities affect pronunciation variation. Language 80, 748–775. (doi:10.1353/lan.2004.0185)

64. Guenther FH. 1995. Speech sound acquisition, coarticulation, and rate effects in a neural-network model of speech production. Psychol. Rev. 102, 594–621. (doi:10.1037/0033-295X.102.3.594)

65. Guenther F. 2006. Cortical interactions underlying the production of speech sounds. J. Commun. Disord. 39, 350–365. (doi:10.1016/j.jcomdis.2006.06.013)

66. Bullock D, Grossberg S. 1988. Neural dynamics of planned arm movements: emergent invariants and speed–accuracy properties during trajectory formation. Psychol. Rev. 95, 49–90. (doi:10.1037/0033-295X.95.1.49)

67. Todorov E, Jordan MI. 2002. Optimal feedback control as a theory of motor coordination. Nat. Neurosci. 5, 1226–1235. (doi:10.1038/nn963)

68. Lee DN. 1998. Guiding movement by coupling taus. Ecol. Psychol. 10, 221–250. (doi:10.1080/10407413.1998.9652683)

69. Shadmehr R, Mussa-Ivaldi S. 2012. Biological learning and control: how the brain builds representations, predicts events, and makes decisions. Cambridge, MA: MIT Press.

70. Simko J, Cummins F. 2010. Embodied task dynamics. Psychol. Rev. 117, 1229–1246. (doi:10.1037/a0020490)

71. Simko J, Cummins F. 2011. Sequencing and optimization within an embodied task dynamic model. Cogn. Sci. 35, 527–562. (doi:10.1111/j.1551-6709.2010.01159.x)

72. Shadmehr R, Orban de Xivry JJ, Xu-Wilson M, Shih T-Y. 2010. Temporal discounting of reward and the cost of time in motor control. J. Neurosci. 30, 10 507–10 516. (doi:10.1523/JNEUROSCI.1343-10.2010)

73. Harris CM, Wolpert DM. 2006. The main sequence of saccades optimizes speed–accuracy trade-off. Biol. Cybern. 95, 21–29. (doi:10.1007/s00422-006-0064-x)

74. Tanaka H, Krakauer JW, Qian N. 2006. An optimization principle for determining movement duration. J. Neurophysiol. 95, 3875–3886. (doi:10.1152/jn.00751.2005)

75. Hancock PA, Newell KM. 1985. The movement speed–accuracy relationship in space–time. In Motor behavior: programming, control, and acquisition (eds Heuer H, Kleinbeck U, Schmidt K-H.), pp. 153–185. Berlin, Germany: Springer.

76. Jimura K, Myerson J, Hilgard J, Braver TS, Green L. 2009. Are people really more patient than other animals? Evidence from human discounting of real liquid rewards. Psychon. Bull. Rev. 16, 1071–1075. (doi:10.3758/PBR.16.6.1071)

77. Browman CP, Goldstein L. 1985. Dynamic modeling of phonetic structure. In Phonetic linguistics (ed. Fromkin VA.), pp. 35–53. New York, NY: Academic Press.

78. Browman CP, Goldstein L. 1992. Articulatory phonology: an overview. Phonetica 49, 155–180. (doi:10.1159/000261913)

79. Saltzman E, Kelso JAS. 1987. Skilled actions: a task-dynamic approach. Psychol. Rev. 94, 84–106. (doi:10.1037/0033-295X.94.1.84)

80. Saltzman EL, Munhall K. 1989. A dynamical approach to gestural patterning in speech production. Ecol. Psychol. 1, 333–382. (doi:10.1207/s15326969eco0104_2)

81. Byrd D, Saltzman E. 2003. The elastic phrase: modeling the dynamics of boundary-adjacent lengthening. J. Phon. 31, 149–180. (doi:10.1016/S0095-4470(02)00085-2)

82. Saltzman E, Nam H, Krivokapic J, Goldstein L. 2008. A task-dynamic toolkit for modeling the effects of prosodic structure on articulation. In Proc. 4th Int. Conf. on Speech Prosody, Campinas, Brazil (eds PA Barbosa, S Madureira, C Reis).

83. Nam H, Goldstein L, Saltzman E. 2010. Self-organization of syllable structure: a coupled oscillator model. In Approaches to phonological complexity (eds Pellegrino F, Marisco E, Chitoran I.), pp. 299–328. Berlin, Germany: Mouton de Gruyter.

84. Treisman M. 1963. Temporal discrimination and the indifference interval: implications for a model of the internal clock. Psychol. Monogr. 77, 1–31. (doi:10.1037/h0093864)

85. Schmidt RA, Zelaznik H, Hawkins B, Frank JS, Quinn JT. 1979. Motor-output variability: a theory for the accuracy of rapid motor acts. Psychol. Rev. 86, 415–451. (doi:10.1037/0033-295X.86.5.415)

86. Rosenbaum DA, Patashnik O. 1980. A mental clock setting process revealed by reaction times. In Tutorials in motor behavior (eds Stelmach GE, Requin J.), pp. 487–499. Amsterdam, The Netherlands: North-Holland Publishing Company.

87. Rosenbaum DA, Patashnik O. 1980. Time to time in the human motor system. In Attention and performance, vol. VIII (ed. Nickerson RS.), pp. 93–106. Hillsdale, NJ: Erlbaum.

88. Wing AM. 1980. The long and short of timing in response sequences. In Tutorials in motor behavior (eds Stelmach GE, Requin J.), pp. 469–484. Amsterdam, The Netherlands: North-Holland Publishing Company.

89. Ivry R, Corcos DM. 1993. Slicing the variability pie: component analysis of coordination and motor dysfunction. In Variability in motor control (eds K Newell, D Corcos), pp. 415–447. Champaign, IL: Human Kinetics Publishers.

90. Ivry RB, Hazeltine RE. 1995. Perception and production of temporal intervals across a range of durations: evidence for a common timing mechanism. J. Exp. Psychol. Hum. Percept. Perform. 21, 3–18. (doi:10.1037/0096-1523.21.1.3)

91. Spencer RMC, Zelaznik HN. 2003. Weber (slope) analyses of timing variability in tapping and drawing tasks. J. Mot. Behav. 35, 371–381. (doi:10.1080/00222890309603157)

92. Gibbon J. 1977. Scalar expectancy theory and Weber's law in animal timing. Psychol. Rev. 84, 279–325. (doi:10.1037/0033-295X.84.3.279)

93. Gibbon J, Malapani C, Dale CL, Gallistel CR. 1997. Toward a neurobiology of temporal cognition: advances and challenges. Curr. Opin. Neurobiol. 7, 170–184. (doi:10.1016/S0959-4388(97)80005-0)

94. Merchant H, Zarco W, Bartolo R, Prado L. 2008. The context of temporal processing is represented in the multidimensional relationships between timing tasks. PLoS ONE 3, e3169. (doi:10.1371/journal.pone.0003169)

95. Byrd D, Saltzman E. 1998. Intragestural dynamics of multiple prosodic boundaries. J. Phon. 26, 173–199. (doi:10.1006/jpho.1998.0071)

96. Nakai S, Turk A, Suomi K, Granlund S, Ylitalo R, Kunnari S. 2012. Quantity and constraints on the temporal implementation of phrasal prosody in Northern Finnish. J. Phon. 40, 796–807. (doi:10.1016/j.wocn.2012.08.003)

97. Berry J. 2011. Speaking rate effects on normal aspects of articulation: outcomes and issues. Perspect. Speech Sci. Orofacial Disord. 21, 15–26. (doi:10.1044/ssod21.1.15)

98. Edwards J, Beckman ME, Fletcher J. 1991. The articulatory kinematics of final lengthening. J. Acoust. Soc. Am. 89, 369–382. (doi:10.1121/1.400674)

99. Hertrich I, Ackermann H. 1997. Articulatory control of phonological vowel length contrasts: kinematic analysis of labial gestures. J. Acoust. Soc. Am. 102, 523–536. (doi:10.1121/1.419725)

100. Shaffer LH. 1982. Rhythm and timing in skill. Psychol. Rev. 89, 109–122. (doi:10.1037/0033-295X.89.2.109)

101. Billon M, Semjen A, Stelmach GE. 1996. The timing effects of accent production in periodic finger-tapping sequences. J. Mot. Behav. 28, 198–210. (doi:10.1080/00222895.1996.9941745)

102. Bootsma R, van Wieringen PC. 1990. Timing an attacking forehand drive in table tennis. J. Exp. Psychol. Hum. Percept. Perform. 16, 21–29. (doi:10.1037/0096-1523.16.1.21)

103. Craig C, Pepping GJ, Grealy M. 2005. Intercepting beats in predesignated target zones. Exp. Brain Res. 165, 490–504. (doi:10.1007/s00221-005-2322-x)

104. Zelaznik HN, Rosenbaum DA. 2010. Timing processes are correlated when tasks share a salient event. J. Exp. Psychol. Hum. Percept. Perform. 36, 1565–1575. (doi:10.1037/a0020380)

105. Perkell JS, Matthies ML. 1992. Temporal measures of anticipatory labial coarticulation for the vowel /u/: within-subject and cross-subject variability. J. Acoust. Soc. Am. 91, 2911–2925. (doi:10.1121/1.403778)

106. Paulignan Y, MacKenzie C, Marteniuk R, Jeannerod M. 1991. Selective perturbation of visual input during prehension: 1. The effects of changing object position. Exp. Brain Res. 83, 502–512. (doi:10.1007/BF00229827)

107. Liu D, Todorov E. 2007. Evidence for the flexible sensorimotor strategies predicted by optimal feedback control. J. Neurosci. 27, 9354–9368. (doi:10.1523/JNEUROSCI.1110-06.2007)
