Thursday 4 December 2014

Unity 4.6 - Two Birds with One Stone

I'm taking this weekend off to do a game jam with a team of awesome people I gathered up from different walks of life. I'm hoping to use this game jam to just have some fun and not forget why I'm doing this MSc in the first place - to make games!

But doing this game jam will also give me a chance to test out the new GUI system that Unity 4.6 has introduced. I'm hoping to apply what I learn in the game jam to help speed up the simulation's UI development, as this is not something any of us has thought about in depth.

I will post a link with more details on how my game jam is going on my personal blog here. For now, here's an example of something I made tonight (it also works with X360 controllers!):




P.S. You'll need the Unity Web Player plugin installed. Don't worry, it's safe :)

That's it for now,
Cheers,
Rob


Monday 1 December 2014

Infrared Game Mechanic v0.1

Alrighty,

Now that I've got my raycasting working in my other prototype, my next step was to get an Infrared mechanic up and running. We intend to use photography in our project (please see Luke's blog post about this here), and since we are planning to use it I had to come up with a way to implement it.

Akshayan made a very good point that we should implement it in a non-boilerplate way, to allow the Astronautics department to edit the mechanic if they so require. This was a very good idea and I kept it in the back of my mind. To implement this mechanic I wanted two major things:
  • Get input (via the X360 controller).
  • Change the texture (material really) of all objects that will be affected by the Infrared mechanic.
Before I bore you to death with my blog post, here is what the end result of this mechanic looks like (I attached it to the rotation prototype):



Please note that this is only v0.1 and the textures, along with most assets, are placeholders. Thanks Luke for the cool skybox :D

Adding the Scan-mechanic to the X360 controller
The first thing I had to do was bind an 'Infra' preset (I set this up in the input settings in the engine) to Axis 3, the axis the two trigger buttons are tied to. Once that was done I could start working with the triggers.

I wanted to know what floating point values were being assigned to each trigger, as both trigger buttons share the same axis. I could have looked this up in the API reference but I felt like experimenting tonight! I added a statement to check if the input axis value was greater than 0, just to see what happened.

This didn't work as I had hoped, as the left trigger read greater than zero, but so did the axis after letting go of both triggers.

So I thought I'd just output whatever values were being assigned to the axis depending on which trigger I held down, and what value was output when I had no trigger held. So I added some console prints:

 
Adding Debug Text to output Axis
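In case you can't make out the screenshot, the debug code boils down to something like this (a rough sketch - the class name is mine, and it assumes the axis preset is called 'Infra' in the input settings):

    using UnityEngine;
    using System.Collections;

    //Sketch of the axis-debugging script (the name AxisDebug is made up)
    public class AxisDebug : MonoBehaviour {

        // Update is called once per frame
        void Update () {
            //Read Axis 3 through the 'Infra' preset and print whatever value it currently holds
            float axisValue = Input.GetAxis ("Infra");
            Debug.Log ("Infra axis value: " + axisValue);
        }
    }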

And this showed me that the left trigger was being read as 'Infra' with a floating point value of 1, the right trigger had a floating point value of -1, and no trigger being held down output a 0.

Throwing out some debug text to see axis values

Now that I could read the trigger inputs, I could use them to do something cool - for example, change the material on a game object while I hold the trigger down, then change it back when I let go of the trigger.

I did this by first assigning three variables to store the default materials of the three planets. That way I could change the materials back to the originals when I let go of the triggers.

Next up I had a condition that checked whether the 'Infra' axis value was greater than 0.5. For some reason it instantiates at 0.014... so I couldn't use greater than 0. I'm not sure why it reads input like this and then shoots back to 0. This might just be my old X360 pad's wear and tear.

But for now, as a workaround, I just use the value 0.5. This means that if the left trigger is held down (Axis 3 reads 1) the condition is true.

My Trigger Down Code
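That screenshot boils down to something like this (a sketch - the exact debug text I used may differ):

    //Left trigger held down: the 'Infra' axis reads 1, so anything above the 0.5 threshold counts as a press
    if (Input.GetAxis ("Infra") > 0.5f) {
        Debug.Log ("Left trigger held - Infrared mode on");
    }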

Now I just had to replace the debug code with the actual Infrared feature. To do this I assigned a public Material variable and used the engine's drag-and-drop method of attaching a placeholder Infrared material I whipped up. I did not hardcode the Infrared material into the script, as I want the Astronautics department to be able to modify the material if they so choose - Akshayan's idea about a modifiable system was a very good one. So for now I did this quick and dirty.

So the final code is pretty simple: I swap the materials from the defaults to 'Infrared mode' while the left trigger is being held down (checked once per frame), and when I let go of the trigger (Axis 3 goes from 1 back to 0) the default mats are loaded in again.

Swap Mats Code
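Roughly, the script in that screenshot comes down to the following (a sketch with assumed names - in my actual script I used three separate variables for the planets' default materials, but an array shows the same idea more compactly):

    using UnityEngine;
    using System.Collections;

    //Sketch of the material-swapping script (class and variable names are assumed)
    public class InfraScan : MonoBehaviour {

        //Placeholder Infrared material, attached via drag-and-drop in the Inspector
        //(not hardcoded, so the Astronautics department can swap it out later)
        public Material infraredMat;

        //The renderers of the three planets, also assigned in the Inspector
        public Renderer[] planets;

        //The default materials, stored so we can restore them when the trigger is released
        private Material[] defaultMats;

        void Start () {
            defaultMats = new Material[planets.Length];
            for (int i = 0; i < planets.Length; i++) {
                defaultMats[i] = planets[i].material;
            }
        }

        // Checked once per frame
        void Update () {
            //Left trigger held: Axis 3 ('Infra') reads 1, so 0.5 is a safe threshold
            bool scanning = Input.GetAxis ("Infra") > 0.5f;
            for (int i = 0; i < planets.Length; i++) {
                planets[i].material = scanning ? infraredMat : defaultMats[i];
            }
        }
    }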

The end result is a material swapping mechanic that can give us a cool infrared scanning mode!


Normal Mode
Scan mode!
Even though all the assets are placeholders and could look better, the mechanic works!

Other tasks done today:

  • Gave Matias and Konrad the contact details of the Kingston College staff member. This way they can take point on this and gather the required information for their student feedback surveys. They will also need to make the amendments to the survey that the tutor asked for. I figured this was better than me being an unnecessary middleman.
  • Got in touch with the technicians again to try and get my hands on the Oculus Rift. I was given the details of the two people who currently have the two DK1s signed out, so my next step is to contact them and ask if I can snag one for a day or so. I'll probably put this on the backburner until I have something concrete to test, since I don't want to waste people's time.

Sunday 30 November 2014

Following up on Project 3rd-parties

Quick notes: I e-mailed the module leader at Kingston College to see if they have time to review the survey for the UX guys. Hoping to hear back sometime this week.

As a precaution we might have to start looking into other options. I won't have time to do this and develop, so one of the UX guys might have to take over here.

I also e-mailed the technicians about the Oculus Rift again, as I missed them on the 18th when I dropped by. I'm trying to organise access to the hardware sometime in the next few weeks to run my prototypes.

That's it for now, as it's very early and I'm still bleary-eyed.

'Look at Object' mechanic - using Raycasting

OK, so this one has a lot less math and a lot more fiddling around with the engine. The end result is a lot less code, as I depend on the engine to do the brunt of the work.

I had never used raycasting before, so I made use of the Unity video tutorial (found here) and, as always, the Unity API reference.

It seemed pretty straightforward, and it was. The most trouble I had was adjusting my raycast to work well with my world. I ran into the following hiccups:

  • My ray was attached to my camera, as this is what I was looking around with. The cube in front of my camera would get in the way and constantly block any other collision my ray might have encountered. The same issue happened with the respawn boundary I set up.

To fix this I added the planets to a separate layer and used the optional arguments to ignore everything outside of this layer. As an added precaution I instantiate the ray at the (camera's position)*2 to avoid the cube entirely, so the issue would be fixed regardless of layers.

My ray was defined as: 
Ray ray = new Ray(gameObject.transform.position*2, gameObject.transform.forward);

And my collision check was done as:
if(Physics.Raycast(ray, out hit, 500, 1 << 9)) [...] where 500 is the length of the ray and the last argument is a layer mask - 1 << 9 limits the cast to layer 9, the layer I put the planets on.

  • At first I was using Mathf.Infinity as the ray's distance, but found it tricky to use with the debug visual I had set up (to see the raycast in-engine while running).

As such I changed the distance from infinity to 500. This made the debugging code work and I could see the raycast properly.

Raycast being shown in-engine while running

Debug code:
Debug.DrawRay(transform.position, gameObject.transform.forward*500);

One final snag was that I was updating a value in the wrong method (not per frame), which meant the raycast wouldn't rotate with the camera. A minor oversight which I quickly fixed, and the final result was a new "Look at object" mechanic that uses raycasting.
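Putting the pieces together, the whole mechanic fits in one small script (a sketch with an assumed class name - the layer number and ray length are the real ones from above):

    using UnityEngine;
    using System.Collections;

    //Sketch of the raycast version of the 'Look at Object' mechanic
    public class LookAtRaycast : MonoBehaviour {

        // Update is called once per frame, so the ray rotates with the camera
        void Update () {
            //Instantiate the ray at (camera's position)*2, pointing along the camera's forward vector,
            //so the cube in front of the camera doesn't block the cast
            Ray ray = new Ray (gameObject.transform.position * 2, gameObject.transform.forward);
            RaycastHit hit;

            //Draw the ray in-engine while running (500 units, matching the cast below)
            Debug.DrawRay (transform.position, gameObject.transform.forward * 500);

            //The last argument is a layer mask: 1 << 9 restricts the cast to layer 9 (the planets' layer)
            if (Physics.Raycast (ray, out hit, 500, 1 << 9)) {
                Debug.Log ("Looking at " + hit.collider.name);
            }
        }
    }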

This method of working is a lot easier, as I use the engine to do a lot of the work for me. At the moment it's a bit too precise with the raycast, meaning I have to center the camera at exactly the right spot. I could fix this by tweaking the collision boxes.

However, as this is just a prototype and we have yet to build the actual assets, this won't require tweaking until later.

'Look at Object' mechanic - using the Dot Product

As I mentioned in the presentation, I used the dot product to multiply two vectors together, to find out at which angle I was viewing a game object (b) from a different game object (a).

I first found this solution by searching around on the internet. I found some pseudocode (which I will post at the end) which mentioned that the dot product would be ideal for this, so I started reading into the math some more. I did this as I feel I can learn a lot about how this mechanic works by delving into the details a bit. Also, I am not overly confident in my math skills, so I always try to sponge up as much as I can, when I can.

The dot product is defined as:

a · b = (ax · bx) + (ay · by)

We do not list (ax · by) or (bx · ay), as these cross terms equate to 0 (the x and y directions are perpendicular, and the dot product of perpendicular vectors is 0).

By rotating the vector b to a baseline (to 0, or the x-axis) we end up with:

a · b = |a| |b| cos(θ)

That is, the dot product of vector a and vector b is equal to the magnitude (length) of a, multiplied by the magnitude of b, multiplied by the cosine of the angle θ between the two vectors.

This can be calculated and results in a scalar value (a single number), which we then use in comparisons that I will describe in my code below.
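To sanity-check this with a quick example of my own (the numbers are mine, not from the sites I used): take a = (1, 0) and b = (0.5, 0.5). Then a · b = (1 × 0.5) + (0 × 0.5) = 0.5. The magnitudes are |a| = 1 and |b| ≈ 0.707, so cos(θ) = 0.5 / (1 × 0.707) ≈ 0.707, which gives θ = 45°. One scalar number really does encode the angle between the two vectors.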

I'm pretty happy I spent some time reading up on sin and cos, as I mentioned in a previous blog post. It seems they are used often in 3D calculations in games and simulations. From what I have seen over the last few weeks, I'm going to need to spend some more time brushing up on my 3D math skills. At least I learnt SohCahToa! So I know that we could calculate the cos in the formula above by dividing the Adjacent by the Hypotenuse of the triangle formed by the angle between the two vectors! Isn't it amazing what you can learn with Google?

Even though I understand the fundamentals of the math behind this, I would still like to spend some more time reading up on it. Unfortunately this is only one mechanic and there are many more I need to develop! So I will have to make a note of this and put it on the backburner for now. The sites I used to research the dot product were Wikipedia, Better Explained, Math is Fun and the Unity API reference.

Now that I had the dot product, I looked at which vectors I would need to use. According to the API reference I would use the .forward of the game object I wanted to have look around (so in this case the camera). Additionally, I would need to get the direction of the target object from my (camera's) current position.

This made sense when comparing it to the pseudocode I found online. The only difference was that the pseudocode normalized the direction vector (so its length is 1) after working out the direction of the viewed object from my camera. This keeps the dot product between -1 and 1, which makes the comparisons easy (greater than 0, greater than 0.5, etc.).

The pseudocode can be found here, and I will list it below as well:


  • Vec3 dirFromMeToObject = (objPos - myPos).normal
  • Vec3 myCurrentFacingDir = trans.forward
  • if (Vec3.Dot(dirFromMeToObject, myCurrentFacingDir) > 0)
  • //object is within a 180 degree arc in front of us
  • if (Vec3.Dot(dirFromMeToObject, myCurrentFacingDir) > 0.5)
  • //object is within a 90 degree arc in front of us
  • if (Vec3.Dot(dirFromMeToObject, myCurrentFacingDir) > 0.75)
  • //object is within a 45 degree arc in front of us

Understanding the pseudocode was trivial after learning what a dot product was. Now that I understood this solution I could start implementing it. It would require some set-up first, however: before manipulating the vectors (which would be Vector3s) I would need to get the coordinates from the transforms and store them in variables, then build the required Vector3s based on those coordinates.

So in closing, I came up with the following code solution:

    using UnityEngine;
    using System.Collections;

    /*This script will return output based on the angle an object is being viewed at from a different object (in this case the camera).
    It does this by using the dot product of the two game objects' Vector3s.
    Taken from the API reference: the dot product is a float value (a scalar value) equal to the magnitudes of the two vectors multiplied together,
    and then multiplied by the cosine of the angle between them. */


    public class LookAtObject : MonoBehaviour {

        //Initialising some vars to store the coordinates of the objects' vectors
        Vector3 myCurrentFront;
        Vector3 planet1Pos;
        Vector3 currentPos;
        Vector3 dirMeToObj;
        Transform tmpStore;
        Transform tmpMyVec;

        float tmpStoreX;
        float tmpStoreY;
        float tmpStoreZ;

        float tmpMyVecX;
        float tmpMyVecY;
        float tmpMyVecZ;

        //If we are viewing the secondary object from x angle this is toggled and we have output
        public bool IsActive;

        // Use this for initialization
        void Start () {
            IsActive = false;
            //Cache the planet's transform once - calling GameObject.Find every frame in Update would be wasteful
            tmpStore = GameObject.Find ("Planet1").transform;
        }
       
        // Update is called once per frame
        void Update () {
            calcLookAt();
        }

        void calcLookAt(){
            //We get the current facing of the primary game object
            //We use this to ensure the other game object is in front of us
            myCurrentFront = gameObject.transform.forward;

            //tmpStore holds the other object's transform (Planet1), cached in Start ()
            //Primary game object's (camera) transform is stored in here
            tmpMyVec = gameObject.transform;

            //Storing each separate coordinate for a vector, from both primary and secondary game objects
            //This is to build two Vector3's to be used in the Dot Product calculation
            tmpMyVecX = tmpMyVec.position.x;
            tmpMyVecY = tmpMyVec.position.y;
            tmpMyVecZ = tmpMyVec.position.z;
           
            tmpStoreX = tmpStore.position.x;
            tmpStoreY = tmpStore.position.y;
            tmpStoreZ = tmpStore.position.z;

            //Building the new Vectors to be used in the comparison calculation
            planet1Pos = new Vector3(tmpStoreX, tmpStoreY, tmpStoreZ);
            currentPos = new Vector3(tmpMyVecX, tmpMyVecY, tmpMyVecZ);

            //Gets the direction from the primary object to the secondary one and normalizes it (length 1)
            dirMeToObj = (planet1Pos - currentPos).normalized;

            //Dot greater than 0: the object is somewhere within the 180 degree arc in front of us - performs the Dot product between the direction vector and the primary object's forward vector
            if(Vector3.Dot (dirMeToObj, myCurrentFront) > 0){
                //Debug.Log ("Looking at Planet Area at 180 degrees");
            }

            //The pseudocode's '90 degree arc' - strictly, a dot above 0.5 means the angle is under 60 degrees
            if(Vector3.Dot (dirMeToObj, myCurrentFront) > 0.5){
                //Debug.Log ("Looking at Planet Area at 90 degrees");
            }

            //The pseudocode's '45 degree arc' - strictly, a dot above 0.75 means the angle is under roughly 41 degrees
            if(Vector3.Dot (dirMeToObj, myCurrentFront) > 0.75){
                Debug.Log ("Looking at Planet Area at 45 degrees");
                Debug.Log (Vector3.Dot (dirMeToObj, myCurrentFront)); //used for debugging
                IsActive = true;
            }else{
                IsActive = false;
            }
        }

        public bool getIsActive(){
            return IsActive;
        }

    }


This all works pretty well (as I demonstrated in the presentation). However, as a few developers informed me, it would be much easier to just use raycasting. With that in mind, I looked into it, and I'll show my findings in my next post.