Brad Story, Ph.D.


Email: bstory@email.arizona.edu

Click here for CV

Brad Story is a Professor of Speech, Language, and Hearing Sciences at the University of Arizona. 

Our research is concerned with the acoustics, mechanics, and physiology of human sound production. In our laboratory we view the structure and function of the respiratory, laryngeal, and upper airway systems collectively as an instrument of communication that produces sound embedded with layers of information. Although most of this work is directed toward understanding the sound-producing mechanisms of speech, we also have interests in the singing voice, musical instruments, and how listeners decode the information in the acoustic speech signal. 

The approach we take consists primarily of developing computational, physically based models with which we attempt to simulate the observed behavior of specific components of the speech production system. Creating a model of speech production is somewhat like animating a short movie. A structure is first defined precisely, in mathematical terms, as a static object, and is then "moved" in time by changing the spatial positions of all of its parts. The result is a dynamic virtual object whose shape or other characteristics change over the time course of a specified action. Whereas the "object" in a cartoon or animated movie is the imaginary body of a character, in a model of speech production the objects are mathematical representations of the physical properties of the articulators (tongue, soft palate, larynx, jaw, lips), or some simplified form of them (e.g., the shape of the vocal tract). These virtual articulators can be moved by specifying spatial positions or postures that the model must attempt to achieve. Because the model is based on physical principles, the simulated movement must obey the laws of physics, which results in generally smooth, speech-like movements. 
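
As a rough illustration of the target-driven movement described above, the sketch below drives a single virtual articulator parameter toward a specified posture using a critically damped second-order system, which yields the kind of smooth, speech-like trajectory the paragraph refers to. This is only an assumed, simplified stand-in for the laboratory's actual models; the function name, parameter values, and the lip-aperture example are illustrative.

import numpy as np

def move_toward_target(x0, target, duration=0.3, dt=0.001, tau=0.05):
    # Drive a virtual articulator parameter from x0 toward a target posture
    # with a critically damped second-order system; the trajectory begins
    # and ends smoothly, much like natural articulatory movement.
    n = int(duration / dt)
    x, v = float(x0), 0.0
    trajectory = np.empty(n)
    for i in range(n):
        accel = (target - x) / tau**2 - 2.0 * v / tau   # spring toward target plus damping
        v += accel * dt
        x += v * dt
        trajectory[i] = x
    return trajectory

# Example: move a hypothetical lip-aperture parameter from 1.0 cm (open)
# toward 0.1 cm (nearly closed) over 300 ms.
lip_aperture = move_toward_target(x0=1.0, target=0.1)

In a full articulatory model, many such parameters would be driven toward successive postures simultaneously, producing the coordinated movements of the tongue, jaw, lips, and larynx.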

Simultaneous with the movement is the simulated generation of sound waves by vibration of the vocal folds, and their propagation through the pharynx and oral cavity. The result is a sound wave that is similar to a recording of speech: it can be listened to, analyzed, and compared to real speech. Our model is currently at a stage where it can be used to facilitate the study of problems such as how children acquire the ability to speak, how neurologic conditions such as Parkinson's disease or cerebral palsy affect speech production, and how listeners separate the spoken message from the sound qualities of the person speaking. There is still much to learn about how we actually execute the movements of the articulators to produce a stream of sound that is recognized as intelligible speech, as well as about how children develop the ability to speak while the sound-producing structures undergo nearly continuous growth and therefore continuously changing sound characteristics. We continue to pursue research in both of these areas.
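
To make the idea of a sound source shaped by the airway concrete, here is a minimal synthesis sketch: a crude glottal source (an impulse train standing in for vocal fold vibration) is passed through a cascade of second-order formant resonators that approximates the acoustic filtering of the pharynx and oral cavity. This is the classic simplified source-filter approximation, not the wave-propagation model used in our laboratory; the sampling rate, formant frequencies, and bandwidths are illustrative values near an adult /a/ vowel.

import numpy as np
from scipy.signal import lfilter

fs = 16000  # sampling rate in Hz (illustrative)

def glottal_source(f0=110.0, dur=0.5):
    # Crude stand-in for vocal fold vibration: an impulse train at the
    # fundamental frequency, smoothed to add spectral tilt.
    src = np.zeros(int(dur * fs))
    src[::int(fs / f0)] = 1.0
    for _ in range(2):                      # two leaky integrators soften the pulses
        src = lfilter([1.0], [1.0, -0.98], src)
    return src

def formant_resonator(freq, bw):
    # Second-order resonator approximating one vocal tract formant,
    # normalized for unity gain at DC.
    r = np.exp(-np.pi * bw / fs)
    c = -r * r
    b = 2.0 * r * np.cos(2.0 * np.pi * freq / fs)
    return [1.0 - b - c], [1.0, -b, -c]

def synthesize_vowel(formants=((730, 90), (1090, 110), (2440, 170))):
    # Pass the source through a cascade of formant resonators (values near /a/).
    x = glottal_source()
    for freq, bw in formants:
        num, den = formant_resonator(freq, bw)
        x = lfilter(num, den, x)
    return x / (np.abs(x).max() + 1e-12)    # normalize for listening

audio = synthesize_vowel()  # e.g., write to a .wav file and listen

In our own work the filtering is computed instead by simulating wave propagation through a time-varying vocal tract shape, but the sketch conveys the basic principle: a laryngeal source shaped by the resonances of the airway above it.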