Would it be easy to wrap somebody with a 3D camera setup and push them into a recorded game? Honestly, I mean there are a few technical challenges, in the sense that there aren't any 3D-movie inside-videogame codecs, but it's nowhere near as grand as the headline makes it out to be. Many of the examples you've given don't cross over into "uncanny valley" territory - nobody would get stuffed animals confused with real animals.
Second, this isn't computer animation. It's just video processing. If you still need to do high resolution motion capture to produce your images, you haven't replaced the actor. You've merely edited their appearance in the performance. They didn't even bother to go so far as to take the captured motion and paste key bits of it together into the speech. They just had her sit there and say the whole thing, then "rendered" it.
I'd say it's past the uncanny valley. That's not to say that I can't tell it's fake. She looks a little fake. Something is wrong-- her face is too still or something. But she doesn't look like a zombie. She's not distractingly creepy. That's all they're really shooting for at the moment, right?

But there is an assumption about what is acceptable... what is the norm? At the moment, we're in a rapid transition phase. There are relatively few human-enough-like examples within our day-to-day existence. I would suggest that as these emulants (to coin a term) become more prevalent and pervasive, their familiarity will reduce the perception of their being bad.
We've come a long way in the 35+ years since I used an ASR-33 Teletype over a 110-baud modem to a time-shared 8KB minicomputer. That sounds like a long time, and in some respects, it is. Today's generation has seen rapid advancements in game consoles, and even now, the best still appear really good, but still unreal. My guess is that in 5-10-20 years, when the visuals become even better, AND THERE HAS BEEN AN EXTENDED PERIOD OF FAMILIARITY, there will be less of a gap to leap. Not just because the visuals got better, but because we have become more familiar with them.
An aside: Look into the eyes of a young baby. Watch how they make eye contact, and don't let go. Watch how intently they examine you. That's setting up neurons and patterns of what is safe, good, bad, and everything else. P.S. I wonder if the transition from the old black and white TVs to today's HDTV sets has run through a similar perception challenge?

I wish I'd somehow had a chance to view this before knowing that it was a computer animation... say, a side-by-side comparison of a real and an animated person and a challenge to guess which was animated.
To me, "Emily" did not look real and did look uncanny. Actually, it reminded me of nothing so much as one of those videos where they replace a baby's mouth with animation so that it appears to be talking like an adult. It seemed to me that the animation's "mouth" was not stably positioned on its "face;" when the head turned, I perceived a change in the position of the mouth relative to the face. Something about the skin didn't look right, either.
Would I have accepted it as real if I were expecting "real"? Yes. But that's not the same thing.

Some years back I took part in an experiment to gauge something about the necessary bit rates and algorithms to make synthesized speech sound real. What struck me forcibly was that, listening to the best synthesized speech in this experiment, if I'd had no standard of comparison I'd have said it was real. But when they switched to a real voice saying the same thing, there was the most amazing sensation, almost a tactile sensation of sound shaped by warmth and moisture. Only after you heard the real thing did the synthesized speech seem cold and mechanical.
The only thing new here is that the equipment required to do the motion capture has been reduced to a single video camera. The facial movements are not being generated by a computer, merely copied from an actor, so it's still nowhere near a believable simulation of a human face.

Image Metrics calls this "performance transfer technology". It's not really animation; it's more of a scheme for pasting face A onto actor B. Quite a bit of this already goes on; often, when you see a stunt performer's face on screen, the face of the principal has been transferred to the image of the stunt performer.