In many busy homes around the world, it’s not uncommon for kids to give instructions to Apple’s Siri or Amazon’s Alexa. They may make a game of asking the voice-activated personal assistant (VAPA) what time it is, or request a popular song. While this may seem like a mundane part of home life, there is a lot going on. VAPAs are continuously listening for, recording and processing acoustic events in a process known as “eavesmining,” a portmanteau of eavesdropping and datamining. This raises important concerns about privacy and surveillance, as well as issues of discrimination, as the sound traces of people’s lives are fed into and scrutinized by algorithms.
These concerns are compounded when we apply them to children. Their data accumulates over a lifetime, with far-reaching consequences that we have not yet begun to understand.
The adoption of VAPAs is growing at a staggering pace, as they are embedded in mobile phones, smart speakers and an ever-increasing number of Internet-connected products. These include children’s digital toys, home security systems that listen for break-ins, and smart doorbells that monitor doorsteps and sidewalks.
There are pressing issues arising from the collection, storage and analysis of sonic data as they pertain to parents, youth and children. Alarms have been raised in the past – in 2014, privacy advocates raised concerns about how much Amazon Echo was listening, what data was being collected and how the data would be used by Amazon’s recommendation engine.
And yet, despite these concerns, VAPA and other eavesmining systems have spread rapidly. Recent market research has predicted that by 2024, the number of voice-activated devices will exceed 8.4 billion.
Recording more than just speech
More is being collected than just spoken statements. VAPAs and other eavesmining systems listen to the individual characteristics of voices, which can unintentionally reveal biometric and behavioural information such as age, gender, health, addiction and personality.
Information about an acoustic environment (such as a noisy apartment) or specific sound events (such as breaking glass) can also be collected through “auditory scene analysis” and used to make inferences about what is happening in that environment.
Eavesmining systems already have a track record of collaborating with law enforcement agencies and being subpoenaed for data in criminal investigations. This raises concerns about other forms of surveillance and profiling of children and families.
For example, smart speaker data could be used to create profiles such as “noisy household,” “disciplinary parenting style” or “troubled youth.” This could, in the future, be used by governments to profile people dependent on social assistance or families in crisis, with potentially dire consequences.
There are also new eavesmining systems called “aggression detectors” that are marketed as a way to keep children safe. These technologies consist of microphone systems loaded with machine-learning software, and their makers claim, dubiously, that they can help anticipate incidents of violence by listening for rising volume and emotion in voices, and for other sounds such as glass breaking.
Aggression detectors are advertised in school safety magazines and at law enforcement conventions. They have been deployed in public spaces, hospitals and high schools under the premise that they can anticipate and detect mass shootings and other incidents of lethal violence.
But there are serious issues surrounding the efficacy and reliability of these systems. One brand of detector repeatedly misinterpreted children’s vocal cues, including coughing, screaming and cheering, as indicators of aggression. This raises the question of who is being protected and who will be made less safe by their design.
Some children and youth will be disproportionately harmed by this kind of securitized listening, and the interests of all families will not be protected or served equally. A recurrent criticism of voice-activated technology is that it reproduces cultural and racial biases by imposing vocal norms and misidentifying culturally diverse forms of speech with respect to language, accent, dialect and slang.
We can anticipate that the speech and voices of racialized children and youth will be disproportionately misinterpreted as aggressive. This disturbing prediction should come as no surprise, as it follows a deeply rooted colonial and white supremacist history that continually polices the “sonic color line.”
Sound policy
Eavesmining is a rich site of information and surveillance, as the sonic activities of children and families have become a valuable source of data that is collected, monitored, stored, analyzed and sold, without subjects’ knowledge, to thousands of third parties. These companies are profit-driven, with few ethical obligations to children or their data.
With no legal requirement to erase this data, it accumulates over a child’s lifetime and could exist forever. It is unknown how long and how far these digital traces will follow children as they age, how widely the data will be shared, or how extensively it will be cross-referenced with other data. These questions have serious implications for children’s lives, both now and as they grow up.
Eavesmining poses innumerable threats in terms of privacy, surveillance and discrimination. Individualized recommendations, such as informational privacy education and digital literacy training, would be ineffective in addressing these problems and would place an enormous responsibility on families to develop the literacy needed to combat eavesmining in public and private spaces.
We need to consider the development of a collective framework that addresses the unique risks and realities of eavesmining. Perhaps a set of fair listening practice principles – an auditory spin on “fair information practice principles” – would help in evaluating the platforms and processes that shape the sonic lives of children and families.