Wednesday 25 March 2015

Astronautics Simulation - Revisited (Post Mortem)

Even though the module was over and I had received my grade, I spent a weekend fixing up this project to get it ready to demo on the 18th of March. I did the demonstration on Wednesday and things went well, which I am happy about. The members of the astronautics department seemed to like it a great deal and had many VR/Oculus questions. Below I'll summarise what I did to get the project finished and ready, and after that I will cover future work and improvements that could be made.

In my last Astronautics Simulation post, I covered briefly what worked and what did not work with the system. I used these points to fix the original issues. I rebuilt the simulation, reusing the assets and some of the code from the previous version. However, I removed the first minigame and designed three others. I also fixed the skybox bug in minigame 2 and completed it. I then added audio, a GUI and narration to the simulation, which I created myself using Photoshop and Audacity along with a voice changer. The system still follows a similar modular structure. I'll map these below:

Main Menu:



The main menu had issues due to constricted movement. Even though this didn't stop the user from using the system, most of the playtest feedback received asked for users to be able to move around the main menu with more freedom. Due to the nature of the Oculus Rift, many users wanted to explore every level, even the menu, by walking around and looking at things.

There was also a bug where the user could get turned around when booting into the menu, before putting on the rift. The tracking data would incorrectly set where the rift was pointing before the user had it on their head. I also found this issue occurred when loading from one level into another (a Unity Scene in this case). The tracking data would update before the level finished loading, causing the user's center view to be set off-center in-game. I fixed this issue by offering the users a way to reload each level by pressing a button on the pad or a key on the keyboard. This allows the level to reset and the new tracking position to be set correctly. This is more of a workaround than a bug fix, but it works well enough.
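
To give a rough idea of the workaround, the reload boils down to something like the following minimal sketch (this is not the exact script from the project; the key/button bindings are assumptions, and the Unity 4.x-era Application.LoadLevel call is used):

using UnityEngine;

// Minimal sketch of the "reload to re-center tracking" workaround.
// The key/button bindings below are illustrative, not the project's actual ones.
public class LevelReloader : MonoBehaviour
{
    void Update()
    {
        // Reloading the current scene forces the Rift tracking origin to be
        // re-applied, fixing the off-center view after a level load.
        if (Input.GetKeyDown(KeyCode.R) || Input.GetKeyDown(KeyCode.JoystickButton6))
        {
            Application.LoadLevel(Application.loadedLevel);
        }
    }
}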

The new main menu was reconstructed to be a level in itself, replacing the row menu panning mechanic used previously. This new menu allows the user to select a minigame from 4 different panels which hover above the floor. The user is also able to navigate around the room and look at the various satellites and other eye-candy dotted around the level. This allows for less constricted movement, which works better on the rift. The user selects a level the same way as before, by looking at the desired panel and pressing Enter or A on the gamepad.
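
For reference, the gaze-based selection can be sketched roughly like this (a minimal sketch, not the project's actual code; the MenuPanel component, level names and button mappings are all assumptions):

using UnityEngine;

// Sketch of gaze-based menu selection: cast a ray from the camera forward and,
// if a panel is under the gaze, confirm with Enter or the A button
// (JoystickButton0 is A on an Xbox pad on Windows).
public class GazeMenu : MonoBehaviour
{
    public Transform eyeAnchor;     // center-eye / camera transform
    public float maxDistance = 20f;

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(eyeAnchor.position, eyeAnchor.forward, out hit, maxDistance))
        {
            MenuPanel panel = hit.collider.GetComponent<MenuPanel>();
            if (panel != null &&
                (Input.GetKeyDown(KeyCode.Return) || Input.GetKeyDown(KeyCode.JoystickButton0)))
            {
                Application.LoadLevel(panel.levelName);   // load the selected minigame
            }
        }
    }
}

// Hypothetical component attached to each hovering menu panel.
public class MenuPanel : MonoBehaviour
{
    public string levelName;
}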

Minigame 1 (New): - Weather Analyser



The original minigame 1 was completely removed. The underlying design did not work well with the rift and as such I thought it best to start over. The new minigame has a similar look but deals with the planet Neptune rather than its moon Triton. The user can now look around unconstricted as they pan around Neptune along the X and Y axes, getting closer to and further from the planet as they enter and exit its dark side.

The objective of this minigame is to navigate around Neptune and spot various weather anomalies. The user has a simple GUI consisting of a crosshair and a context panel. Once the user spots a weather anomaly (by looking at it through the crosshair) a message pops up at the bottom in the form of a context GUI panel. Once this occurs a voice-over narration also starts to play, explaining the weather anomaly in more detail.
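
As a rough sketch of how this can be wired up (assuming each anomaly is a collider on the planet carrying its own description and narration clip; all the names here are hypothetical, and the GUIText is just a stand-in for the actual context panel):

using UnityEngine;

// Sketch: when the crosshair (camera forward) lands on an anomaly collider,
// show its context message and play its narration, once per anomaly.
public class AnomalySpotter : MonoBehaviour
{
    public Transform eyeAnchor;     // camera / center-eye transform
    public AudioSource narrator;    // shared narration audio source
    public GUIText contextPanel;    // stand-in for the context GUI panel

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(eyeAnchor.position, eyeAnchor.forward, out hit, 1000f))
        {
            WeatherAnomaly anomaly = hit.collider.GetComponent<WeatherAnomaly>();
            if (anomaly != null && !anomaly.found)
            {
                anomaly.found = true;
                contextPanel.text = anomaly.description;   // pop up the context message
                narrator.clip = anomaly.narrationClip;     // explain it in more detail
                narrator.Play();
            }
        }
    }
}

// Hypothetical per-anomaly data component.
public class WeatherAnomaly : MonoBehaviour
{
    public string description;
    public AudioClip narrationClip;
    [HideInInspector] public bool found;
}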

At this moment in time there are a total of 5 weather anomalies to find. There is no time limit, as the way the game is designed ensures the user always has a point of reference and as such motion sickness is not an issue. Camera pan speed was also slowed down to a comfortable level to aid in this. At any time the user can quit out of the game via the B button on the pad or the N key on the keyboard. This allows them to play as long as they feel comfortable before quitting out.

Minigame 2 (Reworked): - Infrared Spectrography



This minigame worked well but was not feature complete and had a nasty bug in it. In this version the game was finished off and the terrible skybox seams were fixed.

Apart from that, not much changed from the original concept. The user uses the RB/LB/RT/LT buttons on the pad or U/I/O/P on the keyboard to select an infrared mode. Based on the mode selected they will be able to identify different objects in the galaxy they see in front of them. The user can view the level in 360 degrees and is not constricted in any way in terms of camera panning along any axis.
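
The mode switching itself can be as simple as swapping the scene skybox, roughly like the sketch below (one material per wavelength is an assumption, and only the keyboard bindings are shown; the pad buttons would map the same way):

using UnityEngine;

// Sketch: swap the scene skybox based on the selected infrared mode.
public class InfraredModeSwitcher : MonoBehaviour
{
    public Material visibleSkybox;
    public Material nearInfraredSkybox;
    public Material midInfraredSkybox;
    public Material farInfraredSkybox;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.U)) RenderSettings.skybox = visibleSkybox;
        if (Input.GetKeyDown(KeyCode.I)) RenderSettings.skybox = nearInfraredSkybox;
        if (Input.GetKeyDown(KeyCode.O)) RenderSettings.skybox = midInfraredSkybox;
        if (Input.GetKeyDown(KeyCode.P)) RenderSettings.skybox = farInfraredSkybox;
    }
}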

Minigame 3 (New): - Probe/Orbiter/Satellite museum


This minigame is a simple room that the user can walk around inside. In this room the user will find a close-up model of both the Orbiter and the Probe. These two models were recreated based on the CAD files provided by our supervisor, with as much detail as possible put into the recreation. When the user walks up to one of these models and is close enough, the narrator starts narrating the background information about the Probe/Orbiter.

Along with these models the user can find 6 of Neptune's satellites, modeled by Begoña in great detail. When the user walks up to these the narrator provides information about each satellite, including its physical traits.
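
The proximity-triggered narration can be sketched as a simple per-exhibit distance check, something like the following (the trigger radius and all names are assumptions, not the project's actual values):

using UnityEngine;

// Sketch: play an exhibit's narration once the player walks close enough.
public class ExhibitNarration : MonoBehaviour
{
    public Transform player;          // the player/character controller transform
    public AudioSource narrator;      // shared narration source
    public AudioClip narrationClip;   // background info for this exhibit
    public float triggerRadius = 3f;  // assumed comfortable listening distance

    private bool played;

    void Update()
    {
        if (!played && Vector3.Distance(player.position, transform.position) < triggerRadius)
        {
            played = true;
            narrator.clip = narrationClip;
            narrator.Play();
        }
    }
}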

Minigame 4 (New): - Triton Orbit


This moon was created by one of our modelers, Begoña, and includes a normal-mapped texture taken from actual images of Triton. This minigame is pretty similar to minigame one; however, the objective here is to observe where the photo quality of the texture drops due to a lack of imaging data. This is to show how our understanding of Triton could be increased by additional visits and investigation.

Future Work

There is a lot of room for improvement. I'll cover this below in a general way and then per level.

Generally it would be a good addition to the simulation to add some particle effects or shaders that draw stars or interstellar gas as you orbit around things in the game. Even if this is a far off effect it would add to the immersion of the system.

It would be good to have more than one audio track in the game. This could easily be added; the difficulty would be in tracking down appropriate audio tracks to use, with permissions.

The narration could be improved upon. As it stands I use a voice generator and modify the output using a voice changer to raise the voice by 2 semitones. This makes the voice sound warmer and more human, but it is still not ideal. The best solution would be to have somebody read out the narration rather than use synthetic voice generation tools.

The objectives in most of the minigames are similar in nature, so it would be good to add additional gameplay elements. If this is done, however, it's important to keep in mind which mechanics would work well in VR and which would not. Due to time constraints I followed a similar path for all of the minigames, as I knew they would work well in VR. Additional research into designing for VR would be of great use here.

Main Menu improvements

The lighting in this level is not perfect yet. This is largely due to the lack of shaders around the planet Neptune. Because I did not have time to create these, I had to use a hacky way to make the planet look nicer, which I did by using lighting, but it's not ideal. It would be good to spend more time on the lighting and on improving the Neptune planet itself.

The floor is a very basic texture and is pretty bright. It also has some issues with level of detail when rendering on the rift. It would be better to get a nicer texture and maybe redesign the level to match it.

Mission 1 improvements

Having a planet Neptune with a much more detailed texture would be great here. Preferably one that shows different weather patterns on it (such as the dark spot). 

Additionally it would be nice to have a UI graphic that guides the user towards the different weather anomalies, as at the moment they are difficult to track down. It would also be good to have a GUI graphic appear on the planet where a weather anomaly was spotted, to inform the user which ones they have found and which they have not.

Mission 2 improvements

Add additional things to find. At the moment there are only 5-6 objects to find in this game. It would also benefit from GUI guidance and tagging of objects found.

Touching up the near and mid infrared skyboxes to be more realistic would also be a great improvement.

The only major bug left over in the system occurs in this minigame. For some unknown reason, movement controls will break game-wide (although not that often). The bug starts occurring when switching vision mode to the far infrared spectrum. There is no logical reason this should happen, but it does. Furthermore, reloading the level does not fix the issue; the only way to fix it is to reboot the game. I suspect something is going wrong with the OVR character controller. Further investigation will be required to see what is up here.

Mission 3 improvements

Adding a way to stop narration (by pressing a button for example) would be good. This would allow the user to stop the narrator if they accidentally triggered narration. Another solution could be having the user press a button to start the appropriate narration when looking at an object.
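
A minimal sketch of the first suggestion, assuming a single shared narration AudioSource and an arbitrary key/button choice, would be something like:

using UnityEngine;

// Sketch of the suggested improvement: let the user cut narration short.
public class NarrationSkip : MonoBehaviour
{
    public AudioSource narrator;

    void Update()
    {
        if (narrator.isPlaying &&
            (Input.GetKeyDown(KeyCode.X) || Input.GetKeyDown(KeyCode.JoystickButton2)))
        {
            narrator.Stop();
        }
    }
}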

Mission 4 improvements

This is a pretty basic game. Expanding on it in more detail would be good, perhaps by adding additional narration or objectives.

In closing, I'm pretty happy that I reworked the old simulation and got it to a point where it is feature complete and, bar the one bug noted above, bug free. If required and time permits, I'll take a look at building onto this in the hope that the system will be used for the task it was created for.


Monday 23 March 2015

Oculus Rift Side Project - Dolly zoom test

I did a test a few weeks back (quite a while ago now) to see how the Dolly Effect worked with cameras and their frustum. The idea was proposed by my supervisor as something that might be interesting to explore in my free time; we discussed this during one of my MSP module meetings.

I did not know about this 'Vertigo' effect by name until it was mentioned by my supervisor. Only after speaking with him did I realize that I have seen this effect in many movies. I thought it would be interesting to look into, and I was specifically interested in how it would apply to the Oculus Rift and what sort of effect it would have on users.

I started by looking into the Dolly Effect on Wikipedia (Wikipedia, 2015), as it gave a good overview with examples as well as the maths that went into creating the effect. 

After this I looked at how to implement it in a prototype and hook it up to the rift. The easiest way, I thought, would be to use Unity, since I had other projects already set up in that engine to work with the rift. On an off chance I looked at the Unity documentation and found that they had all the code written for a dolly zoom effect! (Unity Documentation, 2015) This meant the only modifications I had to make were to convert the JavaScript code into C# and then get the effect to apply to the Oculus Rift.

Converting the code into C# wasn't really required, but I thought it would help me better understand how they implemented the mathematics behind the concept. And it did; converting the code did not take long at all.

The next step was attempting to get the effect to work on the DK2. I couldn't simply attach my dolly effect script to each Oculus camera, as this broke the cameras at runtime. Instead I had to modify the OVRCamera script that came with the Unity integration package, by 'merging' my script with it. Most of the merging was pretty straightforward stuff: adding variable declarations to the top, and so on. The only really interesting part was that I had to make use of the LateUpdate() method rather than a normal Update() method, as the rift updates on a different cycle from the standard Unity Update(). Once this was in place, the code worked like a charm and the effect looked awesome! The code changes can be found below, summarized:
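
In essence the change boils down to roughly the following (a sketch based on the Unity documentation example converted to C#, not the exact merged OVRCamera code; in the project the same logic lived inside OVRCamera's LateUpdate()):

using UnityEngine;

// Dolly zoom sketch based on the Unity documentation example, converted to C#.
// In the project this logic was merged into the OVRCamera script and run from
// LateUpdate(), since the Rift cameras update after the normal Update() pass.
public class DollyZoom : MonoBehaviour
{
    public Transform focalObject;   // assigning different objects gives different effects
    private Camera cam;
    private float initialHeight;    // frustum height at the focal object at start

    float FrustumHeightAtDistance(float distance)
    {
        return 2.0f * distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
    }

    float FOVForHeightAndDistance(float height, float distance)
    {
        return 2.0f * Mathf.Atan(height * 0.5f / distance) * Mathf.Rad2Deg;
    }

    void Start()
    {
        cam = GetComponent<Camera>();
        float distance = Vector3.Distance(transform.position, focalObject.position);
        initialHeight = FrustumHeightAtDistance(distance);
    }

    void LateUpdate()
    {
        // Keep the focal object the same apparent size while the camera moves,
        // which warps everything around it (the classic vertigo effect).
        float distance = Vector3.Distance(transform.position, focalObject.position);
        cam.fieldOfView = FOVForHeightAndDistance(initialHeight, distance);
    }
}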




What was even neater was that I could attach the focal object in the scene via a public variable, and changing it would give a different effect: stretching towards the door (see the video below for an example), skewing the door when heading towards a teddy bear, or bursting through the door completely. It feels pretty trippy when this effect is used on the rift :D



Pretty fun stuff!

Wikipedia. (2015) Dolly zoom. [Online] Available from: http://en.wikipedia.org/wiki/Dolly_zoom [Accessed: 3 March 2015].
Unity Documentation. (2015) Dolly Zoom (AKA the “Trombone” Effect). [Online] Available from: http://docs.unity3d.com/Manual/DollyZoom.html [Accessed: 3 March 2015].
YouTube. (2015) Dolly Effect / Vertigo Effect on DK2 in Unity. [Online] Available from: https://www.youtube.com/watch?v=UfhAJR4cjzU [Accessed: 3 March 2015].

Sunday 8 February 2015

Astronautics Simulation - Continued

I spent last week testing the simulation on my Oculus Rift (DK2). I also had two other people try it out, one who has used an Oculus Rift (DK1) before and another who has never tried it. This was a happy coincidence really but allowed me to try and gauge motion sickness when using the system.

Please see my findings below:

Main Menu:

What works:

  • The menu panels work pretty well and look really cool! Because I use (a type of) raycasting, the user is able to select menu elements just by looking at them and pressing the A button on a pad or the Return key.
  • The moon orbiting the planet (one of the many Begoña made) really pops out from everything else in the scene, and every person who has played the simulation instinctively wanted to walk underneath it and look up.
  • Luke's placeholder skybox holds up quite well on the rift, with no seams or artifacts. Users of the system always end up looking around and behind them and feel like they are floating in space. I might leave this skybox in since it seems to be working well, but I'll have to do further testing on this once the other skyboxes are fixed.


What doesn't work:

  • The controls, or shall I say the lack thereof. All users of the system wanted to be able to move around freely. This was a recurring theme for most of the play sessions. It appears that constricting movement (which is listed as a bad idea in the best practices document) is not the way to go. All users, including myself, felt constricted when wanting to get a closer view of the menu's elements.


MiniGame 1:

What works:

  • Well... it doesn't crash? Apart from that, not much. The assets and skybox work well together and the Triton moon looks great.
  • The game also functions and does what it sets out to do. Only a few Oculus-specific bugs cropped up.

What doesn't work:
  • The gameplay, however, is not ideal. This is mostly a design issue, as I designed the game without ever using an Oculus Rift. The freedom of the camera controls (even when I try to constrict them) interferes with the gameplay to a large extent. Constricting camera movement causes disorientation and potential motion sickness after one game has been played through, and that playthrough was done by the person with the strongest stomach. This is what I was afraid of during development and what I wanted to test for.
  • There are a few bugs that occur due to locking the camera at certain times. For example, the user can get turned around and, due to the camera lock, get stuck looking at what is behind them. Even when this is not the case, the camera panning can make them feel as if they are looking behind them when they are just off-center to the moon. This is a pretty good simulation of the disorientation, and I suppose one could argue the game works too well! But it is not fun and needs to change.
  • The UI is far too large and blurry. These UI elements were placeholders anyhow and need to be changed for better ones.

I will have to redesign this game. The good news is that I have an idea in mind that would be similar, but would lean far more on the visual feedback that seems so pivotal when using the Oculus Rift.

MiniGame 2:


What works:


  • The good thing about this game is that less seems to be more. Everything works really well in this game. All the people who tried it appeared to be able to play it for a prolonged time, well past the time limit. This is probably due to the fact that the game is not very fast paced. The movement is also not constricted; the user is allowed to look around freely and is encouraged to do so.


  • The skyboxes also look really cool. The areas that have no seams are really immersive and draw the user's attention, which is exactly what is intended.


As a side-note debugging your game level by standing inside it is a surreal experience!


What doesn't work:


  • The seams! The damn seams in the skyboxes are a visual eyesore and need to be fixed. I have a new skybox ready to test and I hope that this will resolve the issue.
  • The only other issue is the lack of content to look at in terms of gameobjects. More of these need to be added. This shouldn't take too long, but it relies heavily on the skyboxes, so those need to be fixed first.
  • Again, the UI here (even though it is a placeholder) needs to be changed for better elements that are less blurry and off-center. It is tricky to view a stereoscopic UI in a 2D environment during development.

Conclusion:
At least now with some testing behind me and more to come in the near future I can make a start on fixing these issues.

Thursday 5 February 2015

Media Specialist Practice - Research

Deciding on a Specialization
As my next module, Media Specialist Practice, is in full swing, I started looking into a specialization that I would like to use in this module's project.

Due to my current employment, and after looking through work-related criteria (in terms of the skill sets employers are currently looking for in developers), I came to the conclusion that I would like to get some experience in graphics programming.

Having only briefly heard about graphics programming, I had no knowledge about what it really entailed, only that it was an important facet in the games industry and specifically in development. Most of the games I have developed were not in-depth enough to require any custom graphics programming. I tended to rely on ready made shaders available with off-the-shelf game engines.

I started reading up on it and saw that graphics programming is used to draw objects in 3D space. Rather than using the machine's CPU, the GPU on the graphics card can be used to draw elements or visual effects to the screen.

There are two main lines of graphics technology: OpenGL and DirectX. I started reading into these to see which would be best to look into, as I would not be able to cover both. After reading up on the internet there were pros and cons on both sides. DirectX is better maintained but only works on Windows, for example, whereas OpenGL is platform independent but follows more of an open source route and as such can be disorganized and chaotic at times. The deciding factor for me was asking developers at my job what they used in their day-to-day work. This was how I landed on OpenGL as my implementation of choice.

Research into OpenGL
Having absolutely no experience in OpenGL or graphics programming, I started researching the subject, starting with the basics. I found a lecture (Youtube, 2014) on YouTube posted by SIGGRAPH University titled "An Introduction to OpenGL Programming". This was a three-hour lecture, and I expected it to be quite in depth.

However, I was surprised to find that the 3 hours were jammed full with a basic overview of the OpenGL pipeline, including examples. This lecture gave me a good, basic, general overview of OpenGL, but quite a bit of it went over my head. This was due to three problems, as I saw them:

  • My basic knowledge of C++ made understanding the code a bit more of a challenge, though not impossible
  • My lack of knowledge of 3D maths, specifically linear algebra and matrices, made it difficult for me to understand the methodologies
  • My limited experience with shaders meant I did not know what polygons, vertices and fragment shaders were


Now that I knew where I was failing in terms of understanding OpenGL, I set out to improve on it. During the lecture above, one of the lecturers mentioned a book called the OpenGL Red Book. I made a note of this and looked it up; it is freely available on the net and is quite comprehensive. I have made a start at reading it and thus far have made it through the first chapter.

Additionally I googled around for more 3D maths based tutorials. I found the following tutorial, which seemed perfect for what I required, titled "Learning Modern 3D Graphics Programming" (McKesson, 2012).

With regards to improving my C++ knowledge, I looked into buying a book on the language. Personally, I find myself better able to learn when approaching a problem or field from the top and working my way down, as opposed to starting at the basics and working my way up. It is due to this that I was super happy to have found a book titled "Accelerated C++: Practical Programming by Example". I grabbed a copy off Amazon and have been working my way through it. It makes heavy use of the standard library before working its way down into the details. I find it easier to learn the details once I have a solid understanding of the overall structure and how all the components fit together.

Finally, I realized that I was getting lost in the OpenGL field; graphics programming was a much bigger field than I had first expected. I thought I'd start looking at practical ways to implement some of the OpenGL material I was reading up on, when the time came. I knew that I would be using Unity to some extent, as the coursework theme covered Unity. My line of thought was also that if I was going to use a game engine, I wanted to use one I knew well and understood, especially since I'd have my hands full with the graphics side of things.

I did some quick research on whether or not Unity could use OpenGL in the creation of shaders and I saw that it was a possibility. I still need to do more research into this, as I know Windows systems use D3D and not OpenGL, but unless I am mistaken I can force my program to make use of OpenGL. I will need to look into this some more.

After I knew I had a good chance of making use of Unity with OpenGL, I started reading up on shaders in Unity. I did this by making use of the Unity tutorials and scripting reference. This is currently ongoing, but I have thus far read up on all of Unity's built-in shaders, making use of the Built-in Shader Guide (Unity, 2015). I learnt about Normal, Transparent, Cutout, Reflective and Self-Illuminated shaders in detail.

In this guide I learnt a lot about what you can accomplish using shaders, and I found the normal mapping shader to be of particular interest. This shader adds depth to textures but is not a geometric shader; it only changes how lighting and shadows are used with the texture. I used normal mapping in my previous module and I was really impressed by it. I had a model with a great texture, but it appeared blurry and the pre-baked lighting looked off. After applying a normal map to the object, it suddenly sprang to life. I need to make a note about normal mapping / bump mapping as a potential topic for this module.
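
As a small illustration of how little is needed to use this from script, assigning a normal map to one of the built-in bumped shaders looks roughly like the sketch below (this assumes the Unity 4.x "Bumped Diffuse" shader and a texture imported as a normal map; it is not code from the project):

using UnityEngine;

// Sketch: switch a material to the built-in Bumped Diffuse shader and give it
// a normal map, so lighting reacts to surface detail without extra geometry.
public class ApplyNormalMap : MonoBehaviour
{
    public Texture2D normalMap;   // texture imported with its type set to "Normal map"

    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        mat.shader = Shader.Find("Bumped Diffuse");
        mat.SetTexture("_BumpMap", normalMap);   // _BumpMap is the normal map slot
    }
}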

As it stands now, I will continue to look into things and try to come up with a proper proposal to discuss with my lecturers this Tuesday.

That's all for now,
Rob

References:
Youtube. (2014) SIGGRAPH University : "An Introduction to OpenGL Programming". [Online] Available from: https://www.youtube.com/watch?v=6-9XFm7XAT8 [Accessed: 21st January 2015]. 
Unity. (2015) Built-in Shader Guide. [Online] Available from: http://docs.unity3d.com/Manual/Built-inShaderGuide.html [Accessed: 4th February 2015].
McKesson. (2012) Learning Modern 3D Graphics Programming. [Online]  Available from: http://www.arcsynthesis.org/gltut/index.html [Accessed: 19th January 2015].

Sunday 1 February 2015

The End ... ?

Well, my Digital Studio Practice module is done now. The system, however, is not, so I'm aiming to fix it up properly as time permits. It's important to keep in mind that while I would like to work on this and get it done, I am really busy at work during the day. My free-time game project is also continuing, largely thanks to my amazing team, and I can't keep expecting them to do all the work, so this is another matter I need to keep in mind.

Additionally my new module has already started and I have been looking into OpenGL stuff. There is so much to learn! And there is so much math! But I am chipping away at things as often as I can.

More on that though when I finalise my topic for my next module. With regards to the DSP system I am trying to finish up, I thought it'd be useful to keep a list of things I need to do:

Features:
  • Research topics for final two minigames
  • Implement final two minigames
  • Implement admin panel, if this is still a feature that will add to the system overall
  • Finish off minigame 2 by adding other level assets, add UI
  • Add audio to game
  • See if more detail can be added to Minigame 1 - Triton

Bugs to fix:
  • Fix camera panning bug on main menu -Fixed
  • Fix panning speed on menu load-in -Fixed
  • Change skybox on main menu to Visual from minigame 2
  • Fix skyboxes in minigame 2

That's it for now.
Rob

Saturday 17 January 2015

Rough work - UML diagrams for Minigame 2

Please find below the rough work done for mini game 2.

Minigame 2 use cases - draft

More to come to this section

Rough work - UML diagrams for Minigame 1

Below are various rough use cases I did for our first Balancing minigame, along with a class diagram:

Use case draft on Bottom Left Page

Use Case Diagram reworked

Class Diagram based on above usecase diagram

Class diagram followed by class dependencies
In the next part I'll be covering the second minigame.

Rough work - Use Cases and Class diagram for Main Menu

Please find below the rough use case diagrams, activity diagram and class diagram for our Main Menu system I sketched up:

Basic high level use case - on Right Page

Activity Diagram on Right Page. High level use case on top left

Class Diagram for Main Menu
Please find the rough work for minigame one in the next post.

Rough work - Brainstorming notes

Gearing up towards the final report(s)/deliverables I thought it prudent to post all the notes and technical designs I sketched out for the system I was working on. Please find these in the next few posts.

Note: Please note that I was working backwards in my notebook and as such the pages read from right to left.

This post covers some basic brainstorming I did on the first minigame, titled "Balancing minigame".

Page 1 - Minigame brainstorming

Page 2 - Minigame brainstorming

Page 3 - Minigame brainstorming

Page 4 - Minigame brainstorming

Page 5 - Minigame brainstorming
Page 5 also covers a basic use case diagram for the presentation segment of our overall system. The next section will cover the main menu UML diagrams.

Saturday 3 January 2015

Presentation update video

As I won't be in the country at the time of the presentation I made a small video update. A link can be found here:

https://www.youtube.com/watch?v=EeyZtM1HfWY&feature=youtu.be



That's it, small update tonight.
Rob

Friday 2 January 2015

Infrared Mini Game 2 - Research

I am running very short on time as I will be leaving the country for a few days soon. It is due to this that I took it upon myself to look into research for the second minigame. This minigame will be about the use of infrared in space exploration. This is not a complete literature review, as my role is to focus on the design/implementation of the minigames, but it should be enough for me to make a start on the design and implementation of my second infrared-based minigame.

Research

From what I've gathered on the Internet and by talking with Dr. Claus in one of our meetings, infrared is used to explore and see things in space that would otherwise not be viewable, or not as easily viewable, due to interference. It is a form of electromagnetic radiation. An example of this can be seen in the image here.

I started by looking at the research links Begoña posted regarding Beagle 2, after Dr. Claus mentioned to us that it might be worthwhile to look into it. I looked through a few of her links here (http://astronauticssimulation.blogspot.co.uk/2014/12/more-about-dr-malcolm-claus-meeting.html).

I stumbled across the Infrared Mineralogical Mapping Spectrometer (OMEGA) experiment documentation after viewing the NASA Beagle 2 experiments page. I spent an hour reading up on the documentation here and around the topic of spectrometers but this information was very technical and way over my head. This specific entry was also just a proposal and no actual data was collected. I also realized I needed something simpler and more eye catching for my mini game. But the mention of spectrometers gave me a starting point.

In our final team meeting we proposed looking into an "explore the galaxy" option, as it may break up the constant 'staring at planets and moons' theme. Rather than focusing on one single object in space, this minigame would be about a few different ones viewed from afar. I looked a bit deeper into this and into spectrometry using YouTube, and I found this video:

https://www.youtube.com/watch?v=faW_G3ctB8A - Exploring the Infrared Universe. This was a very useful video clip and it introduced me to infrared exploration in a way that I understood. In the video they mention, amongst other things, the near, mid and far infrared wavelengths and what each is used for.

The video goes further to make mention of the different space telescopes, IRAS / ISO / Akari and Spitzer. This information was just what I needed. I summarized the juicy bits below:

Near / mid / far infrared

As an object cools it will emit its radiation at progressively longer wavelengths and therefore further into the infrared.

  • Near infrared = radiation wavelengths that are longer than those in the visible spectrum (what we can see normally). Cooler red stars become more apparent and interstellar dust becomes transparent when viewed at near infrared wavelengths.
  • Mid infrared = the cool interstellar dust itself starts to shine. Interstellar dust can often be found around celestial objects such as red stars.
  • Far infrared = transmitted by very cold objects. Using this wavelength astronomers can observe the cold radiation of protostars and stare into the center of galaxies, including the Milky Way. This allows us to make observations of objects very, very far away, as we can bypass the plethora of noisy data that would otherwise appear.

Observing the infrared spectrum from Earth is difficult because molecules in the atmosphere interfere with the observation. Additionally, the Earth's own infrared radiation interferes with observation. It is due to this that the best way to measure the infrared spectrum is from space.

Spacecraft & Instruments

The video went on to briefly explain the different spacecraft used in infrared imaging. Again, a summary of what was said can be found below.

  • IRAS - The first ever space based observatory for Infrared wavelength measurement.
  • ISO - Detected water in the universe, in the atmospheres around planets in our galaxy.
    • - Dust and gas that fill the space between stars are called the interstellar medium. ISO found a carbon-rich material called polycyclic aromatic hydrocarbon. In space this material's presence is a strong indicator of organic chemistry and can be used in research into life on other planets.
    • - Discovered that there was a peak of star formation about 3 billion years ago. This discovery was achieved because ISO was able to use the infrared spectrum to view past the interference that normally surrounds galaxies.
    • - Andromeda (our neighboring galaxy) is considered to be a typical spiral galaxy. However, ISO discovered that it is made up of several concentric rings of very cold dust, around 13 Kelvin, far too cold to be viewable at visual wavelengths.
    • - In a nearby galaxy, fast-moving streams of plasma were observed being released from the center of the galaxy, but until the introduction of ISO we were unable to view through all the gas and dust to see into the center. Using infrared wavelengths, ISO revealed that the central object in this galaxy is a black hole.

  • Akari - Japanese infrared astronomy satellite, not much information on this.
  • Spitzer - NASA's infrared space observatory launched in 2003

Herschel Space Observatory

After those, the ESA's Herschel Space Observatory was built: the world's largest space telescope. It allows unparalleled exploration capabilities and lets us probe space in much more detail than before, using the infrared spectrum. Herschel consists of the following parts, which make up its payload.

  • PACS - Photoconductor array camera and spectrometer - Can study young galaxies and star-forming nebulae. It is the first spectrometer capable of obtaining the complete image of an object at once.
  • SPIRE - Spectral and photometric imaging receiver - Designed to exploit wavelengths that have never been studied before. Can be used to study the history of star formation in the universe.
  • HIFI - Heterodyne instrument for the far infrared - A high-resolution spectrometer also designed to observe unexploited wavelengths. It is able to identify individual molecular species. Used to study galaxy development and star formation.

The remaining parts of the payload consist of the shielding and cooling systems. All of these are found underneath the huge primary mirror (the largest of its kind in space).

After watching the aforementioned video I started looking at Herschel itself and found the following YouTube videos very interesting:


The lecture videos were very interesting and gave some very good photographic examples of star formation using Herschel, allowing viewers to see the otherwise unviewable 'extragalactic background'. This was achieved by using the SPIRE camera mentioned earlier.

The 'ESA Herschel Space Observatory: 1st year achievements and early science results part 2' video also gave examples of star formation in the Polaris (or Ursa Minor) and Aquila constellations. The video went on to demonstrate how one of these looked likely to form stars at some stage while the other did not. This was done via measurements of infrared wavelengths and scatter graphs.

They also made mention that the HIFI system was used to obtain 'the most complete spectrum of molecular gas at high spectral resolution ever'.

Again this is where graphs, unknown calculations and terminology started to come in but it was still useful to get an overview of just how the Herschel space observatory and more importantly Infrared wavelength measurements were being used in a practical way.

They also made mention of a Herschel Atlas program, which was the biggest undertaking in terms of sky-area measurement. This program might be worth looking into if I need some new ideas for minigames. At this moment in time, however, the team has agreed that I will be doing two minigames only and AK the other two.

Rough Concepts based off research

Thus far I have come up with two core concepts for my infrared mini game based on the research above:

  • Star Formation using Infrared - This could be something simple like the connect-the-dots game in Dragon Age or the constellation-style mastery mechanic in Skyrim.
  • Different Wavelengths - Using different types of wavelength to see and identify different things in a nebula (near/mid/far infrared wavelengths)


Either of these will work. I think I have enough research about infrared for now. My next objective will be to look into how to make a nice-looking nebula skybox to use. Regardless of which option I pick from the above, I will need to have this made and it will need to look as amazing as possible. I might even need three versions (one for each potential wavelength).

This is the next step I need to take.

References:

Infrared in Space Exploration Websites:

Youtube:

Wikipedia:

Reports/Journals:
<Add Neptune Report reference about payloads here>


Thursday 1 January 2015

Rollbacks and roll forwards - Life on the bleeding edge

I'm taking a moment away from my GUI shenanigans to write a small update.

In short, I might have jumped the gun with regards to my rollback. A few weeks ago I spent some time looking at how to use the new Unity 4.6 GUI with the Oculus Rift. I kept coming back to this topic here.

Without the hardware to test this out, I grew increasingly concerned that I would develop the simulation on a version of the engine that was not fully supported yet. From what I gathered, some people were having trouble getting their applications to output to the actual Rift device.

What I should have done was investigate this issue in more depth but as I was so distracted with the development (and a bit of team management) I didn't do this.

From the very beginning of the project, I have had to keep an eye on monthly (even weekly) updates from Oculus and Unity as they are very active on the development of their products.

For example when our module started Unity Free Oculus support didn't exist. A month later it was up and running and being used by many people, and updates have been whizzing by.

It is with this mentality in mind that I thought I would roll back to build 4.5.5. My thinking was that once 4.6 was fully ready, I would be able to update my project to 4.6 pretty quickly and just create a new UI.

However, if I created the simulation on 4.6 and it turned out not to work properly on the Oculus Rift, I would not be able to quickly roll back to 4.5. In fact, it took me a full day to get my 4.6 content running on 4.5, as I had to manually import all the assets and re-add them to scenes.

Without the hardware I was running on assumptions. That is, until I read deeper into things. From what I can tell, the user in the above post was using the direct-to-HMD display option on the Oculus Rift rather than extended mode. This is still an issue with the DK2 when developing and is a minor annoyance rather than a deal breaker.

On the 23rd of December the following post was made on the Unity blog, stating: "The Unity Free integration for Oculus gives you access to the exact same Oculus features as users of Unity Pro. You can use Unity 4.6 and the Oculus integration package to deploy any sort of VR content imaginable to the Rift!"

So what does this mean? Basically, I am now rolling all the work I did today in 4.5 forward to 4.6, and then I will merge it with the work I had already done in 4.6. A bit of time wasted, but it's my own fault for not spending more time reading up on things.

At least now I can get a UI up and running on the Rift. See my previous post titled "Minigame work cont. - Looking into new Unity 4.6 UI with the Oculus Rift" for more on this.

Cheers,
Rob