Saturday, September 19, 2015

Here is a rough version of our level.

This is our level. It matches the company's stage that we are working with. The simulation starts in the bottom middle and works around to the character by the car. We did this for the element of surprise, and so that the officer has to walk over to the person to initiate a conversation with him. The character will be kneeling down, facing away from you, and grumbling. When you get close he stands up in your face, and then the decision tree starts.
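To give a sense of how that conversation could branch (this is just an illustrative sketch, not our actual dialogue or code; the node structure, lines, and anger values are placeholders), a simple decision tree might look like this:

```python
# Hypothetical sketch of the conversation decision tree.
# Dialogue lines, responses, and anger deltas are placeholders for illustration only.

class DialogueNode:
    def __init__(self, prompt, anger_delta=0):
        self.prompt = prompt              # what the character says
        self.anger_delta = anger_delta    # how this exchange changes his anger level
        self.children = {}                # officer response -> next node

    def add_response(self, officer_line, next_node):
        self.children[officer_line] = next_node
        return next_node

# The tree begins the moment the character stands up in the officer's face.
root = DialogueNode("What do you want?")
root.add_response("Sir, I just need to ask you a few questions.",
                  DialogueNode("...fine. Make it quick.", anger_delta=-1))
root.add_response("Step away from the vehicle, now!",
                  DialogueNode("Don't tell me what to do!", anger_delta=+2))
```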


Script

Here is a rough draft of our script...

New Character

We need a character that can simultaneously support:

-Code-driven animation cycles baked from motion capture or hand animation.
-Code-driven phoneme animation sourced from recorded audio files.
-Code-driven expressions that fire depending on his anger level.

To have this level of control over the character, we need to bake the body animations and leave the anger morph targets unbaked so they can be driven by code.
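As a rough sketch of what "driven by code" means for the anger expressions (the set_morph_weight call and the morph target names below are placeholders, not our actual rig or engine API), a per-frame update along these lines is what we have in mind:

```python
# Hypothetical sketch of code-driven anger expressions via morph target weights.
# set_morph_weight() and the morph names are placeholders, not our actual engine calls.

ANGER_MORPHS = {
    "brow_furrow": lambda a: a,                           # scales linearly with anger
    "eye_narrow":  lambda a: max(0.0, a - 0.3) / 0.7,     # only kicks in past mild anger
    "lip_snarl":   lambda a: max(0.0, a - 0.6) / 0.4,     # reserved for high anger
}

def update_anger_expression(character, anger_level):
    """Map a 0..1 anger level onto the unbaked morph targets each frame."""
    anger = min(max(anger_level, 0.0), 1.0)
    for morph_name, curve in ANGER_MORPHS.items():
        character.set_morph_weight(morph_name, curve(anger))
```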

I was not willing to put the time into testing the existing products out there, mainly because I can build a character tailored to our needs and control how it is exported from Maya.
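For example, the kind of export control I mean could look roughly like this in Maya's Python API (the joint names, frame range, and output path are placeholders, it assumes the FBX plugin is loaded, and this is only a sketch, not our final export script):

```python
# Hypothetical Maya Python sketch: bake the body animation onto the skeleton,
# but leave the blend shape (morph target) weights unkeyed so code can drive them.
import maya.cmds as cmds

def bake_and_export(skeleton_root="character_root", start=1, end=240,
                    out_path="C:/exports/character_baked.fbx"):
    joints = cmds.listRelatives(skeleton_root, allDescendents=True, type="joint") or []
    joints.append(skeleton_root)

    # Bake mocap/hand-keyed animation onto the joints only (not the blend shapes).
    cmds.bakeResults(joints, time=(start, end), simulation=True)

    # Export the baked skeleton and skinned mesh; blend shapes ride along unbaked.
    cmds.select(skeleton_root, replace=True)
    cmds.file(out_path, force=True, type="FBX export", exportSelected=True,
              options="v=0;")
```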

This is our low-poly model, which is based on my face scan. I'm also showing a couple of the expressions we are using.


Start of a new semester...

Over the summer we met with a veteran UHP trainer and the supervisors at POST, and learned that the most beneficial additions to our simulator would be a VR experience with:

-voice tone recognition
-passive hand movement recognition
-monitored distance

They felt that these inputs were much more aligned with their training and would translate well to a VR environment. We opted to forgo the RealSense in scenario 2. We are building for a VR company here in Utah, which has a physical space that we are modeling our level to.
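To illustrate what the monitored-distance input could feed into the scenario (the zone names and thresholds here are placeholders I made up, not values from the trainers), a simple per-frame check might look like this:

```python
# Hypothetical sketch of the "monitored distance" input: classify how close the
# officer stands to the subject each frame and hand that to the scenario logic.
# Zone names and thresholds are placeholders, not trainer-specified values.
import math

DISTANCE_ZONES = [
    (1.0, "too_close"),       # inside ~1 m: crowding the subject
    (2.5, "conversational"),  # comfortable interview distance
    (6.0, "approach"),        # walking up from the patrol car
]

def classify_distance(officer_pos, subject_pos):
    """officer_pos and subject_pos are (x, y, z) tuples in meters."""
    dist = math.dist(officer_pos, subject_pos)
    for limit, zone in DISTANCE_ZONES:
        if dist <= limit:
            return dist, zone
    return dist, "out_of_range"

# Example: standing 0.7 m away registers as "too_close" and could raise
# the character's anger level in the decision tree.
print(classify_distance((0.0, 0.0, 0.0), (0.7, 0.0, 0.0)))
```

Voice tone and passive hand movement would feed the same scenario logic as additional signals alongside distance.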