
Communication Team Research Projects

Oral Dynamics Lab (ODL): Van Lieshout, Namasivayam, Ben-David

We study dynamic principles of motor control and feedback in adults with and without speech disorders, as well as the role of emotion in spoken language across the life span. We also investigate the impact of motor therapy on children with speech sound disorders, including childhood apraxia.

prismlab (Tom Chau)

We are working with children and youth with severe disabilities who do not have functional speech or gestures. We focus on novel access technologies (e.g., facial thermography, near-infrared spectroscopy-based brain-machine interfaces) that harness somatic and physiological pathways for communication. We have also developed a training protocol for students, teachers, parents and educational assistants to facilitate the use of communication technology in the classroom.

Jennifer Campos / Kathy Pichora-Fuller

There is now convincing evidence linking falls with hearing loss, yet the specific nature of this link is unclear. It is possible, for instance, that hearing loss causes problems with orienting because binaural cues are reduced, that hearing loss taxes cognitive resources in complex environments, or that there is a shared pathology of the auditory and vestibular systems. Our research uses realistic virtual reality technologies to better understand the link between hearing and balance in a way that will help identify those at risk of falling and inform methods of intervention.

VTV Lab: Yunusova, Baljko, Faloutsos (PIs)

We are developing a series of computer games for speech therapy in adults with conditions that affect the clarity of speech, such as Parkinson’s disease and stroke. The games centre on interactive visualizations of tongue movements and on learning to control those movements during speech. We are currently recruiting individuals with Parkinson’s disease for a pilot study.

Frank Rudzicz

Project 1: I am developing a robot that can understand and speak with older adults in everyday situations, assisting individuals with dementia in day-to-day tasks.

Project 2: I am developing an iPad app that elicits stories from older adults with dementia by having them look at meaningful artifacts (e.g., pictures).

Kathy McGilton

The number of people with Alzheimer’s disease (AD) is expected to rise to over 500,000 by 2031. Most people with severe AD live in long-term care (LTC) facilities. Caring for these residents can be challenging, as up to 90% of them have behavioural problems. This eventually leads to staff turnover, which has a negative effect on the quality of care delivered. One major problem is that nurses have trouble communicating with and understanding the residents, and residents are unable to express their needs to the staff. Most providers are unregulated, such as personal support workers or health care aides, and often receive no training on how to talk to residents with AD. The aim of this proposal is to provide workers with a Resident Centered Communication Intervention (RCCI). The intervention consists of three parts: (1) the development of an individualized communication care plan for each resident; (2) a one-day workshop for care workers; and (3) a care worker support system to provide help when they use the communication plans in practice. Data from all providers will be collected before the intervention, and at 1 month and 3 months after the intervention has been completed. The ultimate goal of this research is to improve the quality of life and care for residents with communication problems in LTC facilities. This research addresses the priority of quality care for residents with dementia by enhancing providers' communication and providing evidence of the link between the RCCI approach, everyday practice, and residents' well-being.

Craig Chambers

Research in my laboratory examines the mental processes involved in the ability to produce and comprehend spoken language in real time.  This work explores the communicative abilities of children, young adults, and older adults.

SMART Lab (Ryerson University) - Frank Russo

Drs. Frank Russo (PI), Steven Livingstone, Naresh Vempala, Paolo Ammirante.

Project 1: We are developing a new “singing therapy” for Parkinson’s disease that helps rehabilitate lost function in vocal-emotional and facial communication.

Project 2: We are investigating perception of music and emotional speech in individuals with hearing loss (with and without hearing aids).

Project 3: We have developed a massive audiovisual database of emotional speech and song samples (>7000 trials). The database is currently being normed in different populations.

Project 4: We are investigating the use of somatosensory input to support various aspects of auditory perception.

Project 5: We are using machine learning methods to determine mood on the basis of peripheral physiological sensors.

Project 6: We are developing listener-centered methods for facilitating mood induction through …

Elizabeth Rochon

Ongoing studies in Dr. Rochon’s lab are designed to better characterize the nature and extent of language comprehension and production impairments in Alzheimer's disease and primary progressive aphasia (PPA). In a longitudinal study investigating the nature of the connected speech impairment in PPA, Dr. Rochon and collaborators are also using magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) to examine typical patterns of progression of pathology in the brain and to further specify the abnormalities that characterize the syndromes.

Other work involves the development and assessment of new treatments for language impairments in patients who have had strokes. In addition, using functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and magnetoencephalography (MEG), Dr. Rochon and collaborators are researching the neural processing characteristics associated with changes in naming performance as a function of different treatment regimens for naming impairments in aphasia. In another study, the researchers are investigating the usability of remotely delivered treatment (i.e., over the internet) for naming impairments in patients post-stroke.

Gurjit Singh

Broadly speaking, my research focuses on auditory, cognitive, and social factors that lead to success with hearing instruments and audiological rehabilitation. Specific projects in the lab focus on teleaudiology, cognitive factors that contribute to speech understanding in complex listening environments, and the role of social support in aural rehabilitation.