Yes - you read that right. Artificial Intelligence (AI) is indeed for crybabies - or, more specifically, crying babies - thanks to a cell phone app called ChatterBaby™. AI has become so ubiquitous in our everyday lives that its presence is evident even in the formative years of human life. Gone are the days when a baby's first words are "Mama" or "Dada." A recent New York Post article shares the story of one baby whose first word was "Alexa," the name of Amazon's hands-free, voice-controlled virtual assistant. Before forming words, however, babies communicate through crying, and now AI is here to help with that too!
Powered by machine learning, the ChatterBaby app is designed to predict the reason behind a baby's cry [Figure 1]. The app works by listening to a baby's cry; then, using a combination of signal processing and machine learning algorithms, it computes thousands of acoustic features and compares them to the features of cries already in its database. The algorithm uses many of these features - including the energy of the cry, its duration, and the amount of silence within it - to predict which type of cry is occurring. Finally, ChatterBaby calculates the probability that the baby is in pain, fussy, or hungry. Currently, ChatterBaby predicts only these three states because they do not depend on the age of the baby and have consistent acoustic patterns in both newborns and older babies.
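The pipeline described above - frame the audio, compute acoustic features such as energy and silence, then compare against labeled cries - can be sketched in a few lines of Python. This is a minimal illustration, not ChatterBaby's actual code; the frame length, silence threshold, and feature names are assumptions made for the example:

```python
import numpy as np

def cry_features(signal, sr, frame_ms=25, silence_db=-30.0):
    """Compute a few illustrative per-cry acoustic features:
    duration, mean frame energy, and the fraction of near-silent frames."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames**2, axis=1))        # per-frame energy
    peak = max(rms.max(), 1e-12)
    level_db = 20 * np.log10(np.maximum(rms / peak, 1e-10))
    return {
        "duration_s": len(signal) / sr,
        "mean_energy": float(rms.mean()),
        "silence_fraction": float(np.mean(level_db < silence_db)),
    }

# Two synthetic "cries": a sustained tone (continuous, high energy)
# and a gated tone (bursts separated by silence, like a fussy cry).
sr = 8000
t = np.arange(2 * sr) / sr
sustained = 0.8 * np.sin(2 * np.pi * 440 * t)
gate = (np.floor(t * 4) % 2 == 0).astype(float)      # on/off every 250 ms
bursty = sustained * gate

f_sustained = cry_features(sustained, sr)
f_bursty = cry_features(bursty, sr)
```

In a real system, thousands of such features would be computed per recording and fed to a trained classifier; here the synthetic "bursty" cry shows a much larger silence fraction and lower mean energy than the sustained one, mirroring the fussy-versus-pain distinction described in the article.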
ChatterBaby's cry database consists of pain cries recorded from babies receiving vaccines or ear piercings, along with a collection of fussy, hungry, anxious, colicky, and scared cries. The cries were labeled by a panel of veteran mothers, and only those whose labels were unanimously agreed upon were used to train the ChatterBaby algorithm. In over 90% of cases, ChatterBaby has correctly identified pain cries from recordings of babies being vaccinated. More specifically, ChatterBaby's data show that babies in pain produce high-energy cries, whereas fussy cries contain more periods of silence.
Created by a team at the University of California, Los Angeles (UCLA), ChatterBaby was originally designed to detect and interpret infant sounds so that deaf parents could better understand their babies' cries and, as a result, respond more appropriately to their infants' and toddlers' vocal cues. ChatterBaby has since been expanded to help all caregivers understand their infants' cries, while also helping the ChatterBaby research team understand what cries reveal about the babies themselves. Researchers at UCLA are currently using data from consenting ChatterBaby users to investigate how a baby's cry may relate to their likelihood of early developmental conditions such as autism.
Previous studies [4-6] have found a link between autism and atypical vocal qualities and, moreover, concluded that vocal parameters may help differentiate children with, or at risk for, an autism diagnosis. A preliminary study at Brown University by Sheinkopf et al. investigated differences in acoustic characteristics between a group of babies at risk for autism and a low-risk comparison group. In this study, infant cries were categorized from vocal recordings as either pain-related or non-pain-related. The at-risk babies produced pain-related cries with a higher and more variable fundamental frequency (F0) than the low-risk infants, suggesting that disruptions in cry acoustics may contribute to the atypical vocal signature of autism at an early age.
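Fundamental frequency (F0) and its variability - the quantities Sheinkopf et al. examined - can be estimated from audio with a classic autocorrelation method. The sketch below is illustrative only; the frame size and the 200-800 Hz search range are assumptions for the example, not parameters from the study:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=200.0, fmax=800.0):
    """Estimate the fundamental frequency of one frame by autocorrelation:
    the lag of the strongest self-similarity peak within [fmin, fmax]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo : hi + 1]))
    return sr / lag

def f0_track(signal, sr, frame_ms=50):
    """Frame the signal and return the F0 estimate for each frame."""
    n = int(sr * frame_ms / 1000)
    return np.array([estimate_f0(signal[i : i + n], sr)
                     for i in range(0, len(signal) - n + 1, n)])

sr = 8000
t = np.arange(sr) / sr                               # 1 s of audio
steady = np.sin(2 * np.pi * 400 * t)                 # constant 400 Hz "cry"
# A "cry" whose pitch alternates between 350 and 450 Hz every 50 ms:
freqs = np.where(np.floor(t / 0.05) % 2 == 0, 350.0, 450.0)
variable = np.sin(2 * np.pi * np.cumsum(freqs) / sr)

steady_f0 = f0_track(steady, sr)
variable_f0 = f0_track(variable, sr)
```

Taking the standard deviation of the F0 track gives a simple measure of pitch variability: near zero for the steady tone, and much larger for the alternating-pitch signal, which is the kind of contrast the Brown study reported between at-risk and low-risk infants.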
The Brown team also developed a computer-aided software tool, Cry Analyzer, to further probe the acoustic differences between children at risk for autism and those not at risk. The Cry Analyzer, also powered by machine learning, analyzes cry audio for many parameters - including frequency characteristics, acoustic volume, and phonetics - and then uses an algorithm to select the characteristics most useful for distinguishing the two groups.
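The "select the most useful characteristics" step can be sketched as a simple filter-style feature ranking. The separation score below (absolute difference of group means over pooled spread) is a stand-in chosen for illustration, not the Cry Analyzer's actual selection criterion:

```python
import numpy as np

def rank_features(X, y):
    """Rank the feature columns of X by how well they separate the two
    groups labeled in y (0/1): |mean difference| / pooled spread."""
    a, b = X[y == 0], X[y == 1]
    score = np.abs(a.mean(axis=0) - b.mean(axis=0)) / (
        a.std(axis=0) + b.std(axis=0) + 1e-12)
    return np.argsort(score)[::-1], score

# Synthetic data: 3 acoustic features per cry; only feature 1 actually
# differs between the comparison group (y=0) and the at-risk group (y=1).
rng = np.random.default_rng(42)
X0 = rng.normal(0.0, 1.0, size=(60, 3))
X1 = rng.normal(0.0, 1.0, size=(60, 3))
X1[:, 1] += 4.0                                      # informative feature
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)

order, score = rank_features(X, y)
```

Ranking the columns this way surfaces the informative feature first; a tool like the Cry Analyzer would apply a comparable (if more sophisticated) criterion across hundreds of cry parameters.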
So is AI really for crybabies? Well, not quite, but we can see how an app like ChatterBaby could help parents and caregivers better understand their babies - one cry at a time. It is unclear where the future of AI-powered cry decoders is headed, but the audio data captured by these methods have the potential to yield some very interesting results and to strengthen already published findings. More importantly, analysis of these data could provide insight into early childhood brain development and, ultimately, help many children in the future.
Staff Writer, Signal to Noise Magazine
PhD Candidate, Physics and Biology in Medicine Graduate Program, UCLA
Gollayan, C. This baby's first word was 'Alexa.' The New York Post (2018, June 4). https://nypost.com/2018/06/04/this-babys-first-word-was-alexa/
Oller, D.K., Niyogi, P., Gray, S., Richards, J.A., Gilkerson, J., Xu, D., Yapanel, U. & Warren, S.F. Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development. Proc Natl Acad Sci. 107(30), 13354-13359 (2010).
Schoen, E., Paul, R. & Chawarska, K. Phonology and vocal behavior in toddlers with autism spectrum disorders. Autism Res. 4(3), 177-188 (2011).
Sheinkopf, S.J., Mundy, P., Oller, D.K. & Steffens, M. Vocal atypicalities of preverbal autistic children. J Autism Dev Disord. 30(4), 345-354 (2000).
Sheinkopf, S.J., Iverson, J.M., Rinaldi, M.L. & Lester, B.M. Atypical cry acoustics in 6-month-old infants at risk for autism spectrum disorder. Autism Res. 5(5), 331-339 (2012).
Reggiannini, B., Sheinkopf, S.J., Silverman, H.F., Li, X. & Lester, B.M. A flexible analysis tool for the quantitative acoustic assessment of infant cry. J Speech Lang Hear Res. 56(5), 1415-1428 (2013).