
Wearables, Affective Computing, and Empatica's Life-Changing Technology: An Interview with Rosalind Picard

This week, Empatica announced their FDA-cleared consumer wearable and epilepsy monitoring system, EpiMonitor. We sat down with Rosalind Picard, Co-founder and Chief Scientist of Empatica and Director of Affective Computing Research at MIT, to discuss her journey and the groundbreaking work of Empatica.

Hanna: Can you share a bit about your journey and what inspired you to get involved at this intersection of science and technology?  

Rosalind Picard: In college, I became curious about how computers work and studied electrical engineering, with extra classes in computer engineering. After my bachelor's, I was excited to accept my first-choice job designing the newest, most advanced computer chips at AT&T Bell Labs. They agreed to pay for a year of graduate work at MIT, which was the #1 program. At MIT, my advisor said about one of my newfangled brain-inspired computer designs, "This is a cool architecture, but what is it optimized to run?" This spurred my interest in understanding algorithmically how the brain works – how its 'algorithms' or processes are optimized for its architecture. I started reading every paper and book I could find on computational vision and pattern recognition methods.

After pleasing my advisor and completing my master's thesis, I went back to work at Bell Labs to pay them back. I found a way to complete the chip-design project they gave me in months instead of the two years they predicted it would require. I soon returned to MIT for my PhD research, where I worked as a teaching assistant and wrote problems and solutions for professors' books while pursuing my research interests. Bell Labs also employed me as an ongoing consultant "whenever I could work," which helped pay the bills.

Near the end of my PhD, my advisor finally received research funding to pay me, but the funding was for HDTV research, and I would have had to switch my thesis focus to work on that. I don't watch TV, and I already knew a lot about algorithms for optimizing digital video as I'd designed chips for them, so I wasn't interested. Meanwhile, a new professor at the MIT Media Lab offered me funding to continue my current research if I would work with him – creating new algorithms merging ideas from statistical physics, pattern analysis, and brain science. So, I moved to the Media Lab to, ironically, avoid working on media. While I was completing my PhD, they hired me to join their faculty.
That was awesome! We worked to grow the field of “content-based retrieval” – advancing computer-vision algorithms to identify image and video content. This led to today’s image and video search tools.  

At a faculty retreat, we decided to start a new effort in "perceptual computing," and I undertook the challenge to integrate vision with other senses. Over break, I read a book about a man who tasted shapes. When he ate soup, he felt shapes in his hands. I pondered how such synesthesia worked in the brain. Richard Cytowic, a neurologist and the author of the book, found evidence that it involved deeper parts of the brain – areas ignored by the Harvard brain experts we were learning from. Those deeper brain regions were known for emotion, memory, and attention. As a woman trying to be taken seriously in science and engineering, I didn't want to get near the topic of emotion. However, memory and attention were important, so I started learning all I could about those brain regions. Alas, the more I learned, the more it showed what I did not want to find: emotion was important. Further, it was important not only for the perception work I was trying to model, but also likely for general intelligence – for everything I knew that AI had been trying to do. Emotion now looked like something that had to be studied.

Hanna: About 30 years ago, you coined the term "affective computing". Can you explain what affective computing is? How have you seen this field evolve?  

Rosalind Picard: I defined affective computing as computing that relates to, arises from, or deliberately influences emotion and other affective phenomena. I chose the word "affect," which is a higher-level and broader category than emotion. At that time, and to this day, there are more than a hundred definitions of emotion put forth by emotion theorists. I needed to build on top of that minefield.

“Affective” has the additional benefit of being nicely confused with “effective”.  My first work in the area focused on giving computers skills of emotional intelligence, but affective computing is even larger than that. Affective technologies also help us collect and smartly process objective data around many health and medical conditions – recognizing that emotion signals communicate between almost every organ. 

Sometimes people ask if this is the same as “Emotion AI.”  That is a subset of affective computing, where a lot of progress has been made.  That area has grown rapidly over the last decades as machine learning has enabled great improvements in the accuracy of tracking and labeling movements on faces, paralinguistic information in speech, nonverbal gestures, and inferring more about what somebody might be feeling in a particular situation. In the beginning, it was hard to build an algorithm to accurately find a face, much less track a raised lip-corner, but now tools like smile detection are highly accurate.   

Hanna: In your book and when we met at the MIT Media Lab, you discussed your initial struggle with being a female scientist who works on emotions. Could you share a bit about your experience with this? 

Rosalind Picard: As a student through ten years of college, I never had a class with a female professor. It was not uncommon at Bell Labs that I would be the only woman in a room filled with fifty or more men.  Being able to have serious scientific conversations with male colleagues meant a lot to me, and I worked hard to blend in and be taken seriously. I believed they would write me off as an “emotional female” if I called attention to a topic like emotion, which was perceived as irrational and non-scientific.   

I was never so nervous as when I first spoke about affective computing. In one of the first meetings, a highly respected senior AI researcher came up to me afterwards and said, "Emotion is just noise – why are you wasting time working on it?" I knew his research involved speech, so I asked him if pets responded to what their owner said or to how their owner said it. For example, if you yell at a dog, does it put its ears back and tail down because of the words you chose? I described studies showing that human infants respond to affective pitch contours even before they develop speech. Is it possible that emotional signals help guide attention and learning before we start differentiating semantics? He listened. It wasn't long before he started working on affective computing for speech.

At another conference where I was known for my computer vision work, I overheard a colleague point at me and say, "Can you believe what she's working on now? She used to do respectable research." My heart sank; I felt like my hard-earned respect was being trashed by people quick to judge without data. Years later, I smiled when this same researcher came up to me and said he had started working on affective computing. He had also discovered that getting quality data was a super hard process: could I share some of my data with him? I was happy to help. The field started to take off, especially in Europe.

Later, MIT gave me tenure, bundling my Affective Computing book with my serious mathematical papers in my dossier. I heard the letter writers wrote that my work brought rigor and "made it respectable to study emotion." Today affective computing is a global field, built by many international researchers, with an annual international conference, a global professional society, and a top-ranked journal, "IEEE Transactions on Affective Computing," produced by the world's largest technical professional society dedicated to advancing technology for the benefit of humanity (the IEEE). An estimated one-third of the Fortune Global 500 use affective computing today.

Hanna Edgren: You took a big risk in studying affective computing. How do you think about the complexity of this?

Rosalind Picard: I changed from a shy, risk-averse child to somebody who has taken a lot of risks, so maybe women need to be challenged. One of the most important early neuroscience studies of emotion was done in males – and showed that when the brain regions involved in emotion are damaged, men become less rational. We now know that a healthy emotional system helps both men and women adapt to complex, unpredictable inputs, operate efficiently, and convert intelligent reasoning into action.

Initially I was surprised that emotion performs many helpful functions for a human at a level below awareness, even without a person feeling or appearing emotional. I now think of emotion like weather: We always have weather.  Very rarely is it calling attention to itself and getting out of whack.  When it does, you have an extreme emotion event like an angry outburst, and that is like an extreme weather event, such as a tornado. Similarly, both weather and emotion are complex to measure, each having many continuously changing variables.   

Hanna: That is a great analogy. You founded the Affective Computing Research Group at the MIT Media Lab. What have you been recently working on and what are some exciting projects that have emerged? 

Rosalind Picard: Our research today focuses on advancing affective-cognitive technologies that people can use to improve their health and well-being. We gather objective data through wearables, smartphones, and other sources you are willing to opt into sharing, always honoring privacy and showing respect for people's feelings. We build new kinds of algorithms, both using existing AI models and developing new ones. With this work, we have been improving monitoring and creating the ability to forecast personal changes in health, which enables prevention.

As a group, we have spun out over a dozen companies, two of which I co-founded: Affectiva, which is now owned by Smart Eye AB, and Empatica.

Hanna: Empatica, a company you co-founded, has developed incredibly impactful wearable technology. Can you tell us about Empatica and what problems you are looking to address? 

Rosalind Picard: It's been hard for clinicians, researchers, and physicians to get quality physiological and behavioral data from patients during daily life. Consumer devices, while they can run 30 or more fun apps, randomly drop data and, even worse, have notoriously ruined studies by providing only processed data such as heart rate – and then changing how they process it in the middle of a multi-month study, making results within the study not comparable. Access to raw data can prevent such a nightmare, but most consumer devices don't prioritize it. Empatica prioritizes quality data and makes it easy to get, and they provide all the raw data. They further provide more than a hundred digital biomarkers requested in clinical studies, such as walk tests, sleep fragmentation, autonomic signals, and more. A growing number of their biomarkers are validated by the FDA (e.g., SpO2 level, sleep-wake, pulse-rate variability), with testing that includes diverse skin tones.

Hanna: How have you seen Empatica's technology create impact and support the lives of people?  

Rosalind Picard: I saw another story just this morning where someone said the Empatica smartwatch, Embrace, got them there in time to save a loved one's life. The Embrace today is the only smartwatch seizure monitor that has passed FDA's validation tests. It's also on early lists as one of the first AI-based products approved by FDA. Embrace alerts a designated caregiver to check on the wearer if it detects that they are likely having a grand mal seizure. Seizures can kill you, and the risk of death is five times higher if you are alone at the time of a seizure. I'm super proud of our team for staying focused through all the iterations it took to get this product into the marketplace. Please tell people you know who have seizures that it's important to line up a caregiver who can come to their aid quickly when a seizure might be happening.

Hanna: What has worked and what has been a challenge when translating technology from academia to commercialization? 

Rosalind Picard: In academia, we can chase almost any idea we want, and we only need a device and data good enough to answer our research questions. In commercialization, a product not only has to have a market to sustain it; a medical AI device also has to pass a growing number of FDA tests – not just clinical performance, biocompatibility, EMC, and bench testing, but now also cybersecurity, usability, and much more. It has to demonstrate accuracy across different skin colors, body types, ages, and health conditions, at rest and under a variety of kinds of motion. And all the processes associated with data privacy handling, adverse events, software development and security, updates, and more must follow standard operating procedures – and the FDA will drop in and audit those processes. Then there are boatloads of start-up business challenges on top of this. It's not for the timid. I'm reminded of the joke about how to know if you would like to own a boat: stand under a cold shower and rip up hundred-dollar bills. Well, if you want to build a medical wearable, the bills are higher, you're in that cold shower with a lot of people shredding them, and government officials audit how you tear each bill – then they change the rules on how the tearing should happen and make you run months, maybe years, of additional tests. That's the positive, simplified description. But, in the end, the results are far better than a fleet of boats: you have enabled a way to save lives. When you see a family celebrating their child's birthday instead of a funeral, it was all worthwhile.

Another challenge is the resistance in the business world to work in stigmatized areas. Epilepsy is underfunded because of a ridiculous stigma. If you share any message, please share that anybody with a brain can get epilepsy. It should not be stigmatized. Cameron Boyce, a famous Disney actor with epilepsy, died recently at the age of 20. Deaths like his are helping to call attention to how common this condition is: 1 in 26 people in America will get epilepsy at some point in their life, and it can kill you.

Hanna: How do you think about the importance of balancing innovation with ethical considerations?  

Rosalind Picard: In my book Affective Computing, I included a chapter on potential ethical concerns. When people saw it – the first book on the topic – some asked, "Why are you including that chapter? You could sabotage the whole area before anybody even starts to work on it." I wrote about concerns up front because I think the onus should be on innovators to think ahead not just about the good they're trying to do, but also about how it might be twisted for bad purposes. With the pace of technology innovation picking up, it's more important than ever that AI's makers – those of us who understand AI best – work to envision and prevent its potential misuse before we deploy it. I advocate to my students that our research papers should have a paragraph that states the limitations of the findings in plain English, as a step toward being transparent and responsible. And thesis documents should discuss potential ethical concerns – not just the great new innovations we are making, but where they might go wrong. We must shoulder this responsibility up front because we understand how AI works and can be shaped, while it can take regulators decades to figure it out, and by then the harm is done.

Hanna: What are you most excited and perhaps most concerned about in the field of affective computing, particularly with the rise of ML and AI applications in healthcare? 

Rosalind Picard: There are exciting achievements, but also massive overhype about AI, the latter voiced by those who stand to benefit financially from inflating valuations. I am concerned when I hear people who don't have solid evidence make proclamations such as "80% of people's jobs are going to be replaced by AI." Such AI speculations drive up the financial wealth of a few and drive down the mental health of the many. Increasingly, I've met good people who are depressed because they are told that they and their children's children will be made obsolete by AI. I urge leaders to make more responsible proclamations. Consider that having a mind, self-awareness, a conscience, knowledge of truth, the ability to take initiative and to identify and solve complex problems, and emotional intelligence and great teamwork are all highly prized by employers – and advanced machine-learning models don't do these things in any general sense. Nor is it simply a matter of training the existing models on more data to get them to do these things. I've been teaching, developing, applying, and extending advanced machine learning methods since the early 90s, and the models, while super powerful, also have fundamental limitations.

At the same time, I'm excited by advances in AI, the new efficiencies it brings, and especially wearables and what they can reveal when combined intelligently with well-designed long-term studies. We can use AI with wearables to move from today's "sick care" to true preventative care, learning what keeps people resilient and healthy. We are seeing wearable data enable advance forecasting of health events – from respiratory infections to seizures, changes in mental health, and more. I believe that in the future, you will know in advance when your risk increases for a migraine or seizure and be able to choose evidence-based personalized recommendations to prevent or mitigate it. Medicine is being transformed, and it's a revolution that is good for everyone. Further, these advances are creating new jobs.

Hanna: Lastly, is there anything you want to add or wish I asked?  

Rosalind Picard: Empatica hit an exciting milestone, collecting wearable data in Antarctica, so now we are on every continent. There was a wonderful surprise in the study. It began with super-fit participants undertaking an arduous trek across the continent, while their bodies were monitored to understand the impact of extreme conditions: cold, exercise, and limited diet. Under the most difficult conditions, it is typical to eject everything that is annoying or uncomfortable. We observed this in the early days of our MIT research, when a team of runners in the Boston Marathon ejected all our wearables as intolerable before reaching mile 19. We learned a lot from that failure, and from other deployments. The Antarctica study included lots of devices, one of which was the Empatica EmbracePlus. EmbracePlus made it the whole grueling journey, while several well-known sensors were ejected in the first days. While we never designed it for surviving such extremes, we did design it by learning from people with epilepsy, who deal with some of the toughest stressors. Our team worked hard to make it super-comfortable to wear continuously, because this could make a life-or-death difference for a person with unpredictable seizure timing. The success in Antarctica is one of many times I've seen that working hard to design a product to reduce a person's struggles, especially stigmatized ones, can lead to a leap in innovation that benefits many others. I feel privileged to have played a small role in these wearable advances and see them as just the beginning of something much greater.
