The uses of affective computing range from fairly “wholesome” applications to those that could be far more sinister, according to USask researcher William Buschert. (Photo: RawPixel)

Computers are learning to read our feelings from our faces. What are the risks?

If computers are now capable of reading our feelings from our faces, is our emotional privacy at risk, and what are the ethical consequences? asks University of Saskatchewan (USask) researcher William Buschert.

In a recent article in the National Post, William Buschert, philosophy lecturer and co-ordinator for applied ethics programming at USask, shares a number of ideas from a forthcoming paper on hypocrisy, privacy and affective computing that he will present at the annual conference of the Canadian Society for the Study of Practical Ethics, part of the Congress of the Humanities and Social Sciences taking place in Vancouver from June 1-7.

While the rise of affective computing has raised a number of concerns, including the exposure of hypocrisy across society, Buschert imagines an even more troubling scenario: without emotional privacy, we cannot be truly free.

“Having some emotional privacy is necessary to actually make up your own mind, to act autonomously,” said Buschert.

The uses of affective computing range from fairly “wholesome” applications, Buschert said, to those that could be far more sinister.

“Affective computing adds a massively deeper layer. Not only is your identity subject to surveillance, your movements subject to surveillance … what’s going on in your own head is potentially subject to surveillance, too.”

Read more about Buschert’s research at the National Post.
