It's a very worthwhile and interesting point you raise. My feeling, based on the limited research I have read on the subject, is that we 'read' expressions on humanoid robots quite differently to how we read them on human or near-human faces.

One example is C-3PO from Star Wars - even though his face is completely expressionless and his body language extremely limited, we identify with him through the ways in which he is similar to us: he has arms, legs, eyes, a voice, and so on. Apparently, though, once a human likeness reaches roughly 90% accuracy, we stop registering the similarities and focus on the differences. This is borne out by research that found people tended to react poorly to the human characters in the Final Fantasy movie, and also to the Jennifer Garner animations in the Alias XBox game. People apparently found themselves getting vaguely unsettled, as if they were watching animated corpses rather than real people, because key aspects of body language they unconsciously anticipated were absent.

With a humanoid robot that is still significantly different from a human, we automatically disregard the body-language cues we would expect from a human. We respond to a masked robot face differently to a masked human face because our unconscious expectations are different.

I read this research a couple of years ago, and I wish I could find the original article detailing it.