
I have a question: do you do trials in a controlled environment, where you actually have proper feedback and a distinct comparison between the self-described state and the machine analysis? Because in my opinion, systems like these are a modern-day version of astrology (at least when they are based only on vision and not on things like fMRI imaging or a proper psychological assessment). I know seriously depressed people who always had a smile on their face (maybe a social coping mechanism), as well as "angry"-looking coworkers who were in a very good mood most of the time. It is very easy to misinterpret a person's mood when the only "interaction" is looking at them and analyzing their facial features.

When these things are used outside a controlled environment, it could get even more complicated: unusual beards, squinting because of bright sunlight, reflective glasses, etc.



I see 2 separate issues here:

1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.

2. Correlation between the real and the detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less obvious states like depression behind a smiling face. This topic is definitely interesting and I'm pretty much sure it's possible to implement given enough training data, but right now we're concentrating on other things.
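
For concreteness, here is a very rough sketch of what I mean by those two steps: locate a face region despite lighting/occlusion problems, then classify the crop into one of the 6 basic emotions. The library choices (OpenCV, scikit-learn), the 48x48 crop size and the training-set format are just placeholders for illustration, not a description of our actual pipeline.

    # Illustrative sketch only: a Haar cascade stands in for a robust face
    # detector, and an SVM on raw pixel crops stands in for a real
    # expression classifier.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    # Step 1: face localisation. Illumination, occlusion and head rotation
    # show up here as detection failures or bad crops.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_features(image_path):
        """Return a flattened, normalised crop of the largest detected face, or None."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            return None
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None  # lighting, rotation or occlusion defeated the detector
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        return crop.flatten() / 255.0

    # Step 2: supervised classification into the 6 basic emotions.
    # `training_set` is assumed to be a list of (image_path, label) pairs.
    def train_classifier(training_set):
        X, y = [], []
        for path, label in training_set:
            feats = face_features(path)
            if feats is not None:
                X.append(feats)
                y.append(EMOTIONS.index(label))
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(np.array(X), np.array(y))
        return clf

A production system would obviously use proper landmark tracking and a trained deep model rather than raw pixel crops, but the separation into "find the face" and "classify the expression" stays the same.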


> I'm pretty much sure it's possible to implement given enough training data

No, the point of the comment you are replying to is that there are emotions that are impossible to detect using external information. We can hide our emotions very well. The question is to what extent external emotional information provides monetizable value.


> there are emotions that are impossible to detect using external information. We can hide our emotions very well.

This is an assumption which I'm not convinced holds true. Just because we can hide our emotions well enough to fool other people doesn't necessarily imply that they're impossible to detect using external information.


I've seen some pretty convincing expressions of emotion from actors who were obviously not, at the time, in love, in pain, in anger, etc. I'm pretty certain that any system that takes your facial appearance and no other information (e.g. that you are an actor, that you are currently on a movie set) would have no way to distinguish genuine from faked emotion.


If we are talking about professional actors trying to trick the tracker, then yes, it would be pretty hard to design software to overcome that. But most people aren't that good, and although they can mislead their friends or colleagues, they still leave clues that give away a faked emotion. If you are interested, Paul Ekman has quite a lot of literature on the topic, e.g. see [1].

[1]: http://www.ekmaninternational.com/paul-ekman-international-p...


But humans are notoriously bad at picking up on details, and things like music and scenery can have a big impact on our perceptions. I'm not saying that you're wrong; I'm just saying that in the absence of any evidence to the contrary I don't think we can just assume that you're right.



