Update #4: Post Presentation

Presentation

In our presentation we introduced our game concept for the first time, focusing mainly on the game design, but also on how we decided to use the modalities we are now planning to use. We wanted to emphasize our plan to keep the use of modalities as natural as possible, so that all inputs and outputs feel like a natural result of the game design, rather than modalities used just to meet a quota. We also briefly mentioned our thoughts regarding the design of icons and figures in the game; however, this point is not completely decided yet.

We have just gotten started with the programming and so far it is going quite well, but you will hear more about it in our next post!

Update #3 – Project E

Hi guys,

today we want to give you a short update on our project. Basically, we just specified our idea further during the group work exercise.

In short, we want to implement some kind of “shooting game” (think of Space Invaders), which is not your usual kind of shooter. We want to make our multimodal inputs the key feature of the game – so in order to “win”, you have to master the use of the Wii Remote and the Kinect (in one hand you hold the Wii Remote, and the other is used for gestures). We still don’t have a name for our game, so if you have any interesting suggestions, tell us in the comments section. We thought about something like “Drag and Destroy” (working title). So in the end, we decided to implement only one game and not several minigames.

This game has several key features, which will be explained in more detail during our presentation on Wednesday. For now, we will only list some of them so that you get an idea of our concept:

(1) The game features different areas on the screen. Enemies can only be destroyed in specific areas, so the player has to drag and drop them into the “destroyable area”.

(2) It also features different kinds of enemies and therefore different ways of destroying them (pressing specific buttons, holding the enemy for several seconds).

(3) The player has a weapon for destroying enemies. It is possible to implement different types of weapons so that specific enemies require specific weapons. Changing weapons can be achieved with specific gestures or button presses.
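The rules above could be sketched roughly like this (a minimal Java sketch; the enemy types, action names and zone check are placeholders we made up for illustration, not final game code):

```java
// Sketch of the destroy rules: an enemy is only destroyed when it is
// inside the "destroyable area" AND the player's action matches its type.
// Enemy types and action names are hypothetical placeholders.
public class DestroyRules {

    enum EnemyType { BASIC, ARMORED, STICKY }

    // Which action destroys which enemy type.
    static String requiredAction(EnemyType type) {
        switch (type) {
            case BASIC:   return "PRESS_A";        // single button press
            case ARMORED: return "PRESS_A_AND_B";  // button combination
            case STICKY:  return "HOLD_3S";        // hold for several seconds
            default:      return "NONE";
        }
    }

    // An enemy dies only inside the destroyable area with the right action.
    static boolean isDestroyed(EnemyType type, String action,
                               boolean inDestroyableArea) {
        return inDestroyableArea && action.equals(requiredAction(type));
    }
}
```

The point of structuring it this way is that adding a new enemy type only means adding one new case to the rule table.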

This is it for now. For more information, just come to the presentation.

Until next week!

Update #2 Further Decisions

Hello Again!

In our last post we wrote a lot about our initial ideas and plans. However, we have now specified our ideas a bit further. We have decided to use the first-generation Microsoft Kinect, since it is easier to use than the second generation when working with Processing, and since we do not need the newer functionality that comes with the second generation. We have also, as previously mentioned, decided to use the Nintendo Wii Remote.

In our last post we also discussed the possibility of head tracking; however, we have now decided not to pursue this idea, and to focus instead on gesture recognition using the Microsoft Kinect. The Nintendo Wii Remote we mainly plan to use for tactile input, mapping its buttons to the desired effects. As output modalities we are mainly discussing visual effects as well as sound output.
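The button-mapping idea could look something like this (a rough Java sketch; the button labels and effect names are invented placeholders, not the actual Wii Remote library API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping Wii Remote buttons to in-game effects.
// Button labels and effect names are made-up placeholders.
public class ButtonMapping {

    private final Map<String, String> mapping = new HashMap<>();

    public ButtonMapping() {
        // One central place to change the control scheme later.
        mapping.put("A", "SHOOT");
        mapping.put("B", "GRAB");
        mapping.put("1", "SWITCH_WEAPON");
        mapping.put("2", "PAUSE");
    }

    // Look up the effect for a pressed button; unknown buttons do nothing.
    public String effectFor(String button) {
        return mapping.getOrDefault(button, "NO_EFFECT");
    }
}
```

Keeping the mapping in one table means we can remap buttons during playtesting without touching the game logic.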

In addition, we discussed which game would be best to implement. There were some worries regarding the difficulty of implementing several games given our time constraints, so we will most likely focus on a single game instead of several. Which one it is will probably become clearer after today’s exercise.

Until next week!

Update #1 – Idea and Modalities

Hey guys,

in this blog post we want to update you on a lot of what is happening in the project.

Since our first group meeting (basically the day we met and formed the group) we have been thinking about possible ideas we would like to work on this semester. Naturally, the idea has to be feasible to implement within the limits of our resources. On the one hand, we thought about modifying existing systems into multimodal versions by adding several input and output modalities (check our 2nd blog post for a more detailed description of the project work). For example, we thought about creating a multimodal version of a calculator by adding several different input as well as output possibilities. This was one possible way of taking on the project, but in the end, multimodality was not the key feature of these kinds of ideas. That is why we kept trying to come up with something more interesting and original. Then Linh came up with a great idea where we all instantly thought “this has to be the one”.

So basically, we want to implement several short, simple and challenging games, so-called minigames. The style of the minigames varies depending on the input modalities we choose for the system.

This is a work-in-progress idea, so we still haven’t finalized our decisions, but for now we would like to implement games based on simple touch/tactile input using a Nintendo Wii Remote (10 easily accessible buttons, probably the Plus version), possibly also making use of its accelerometer. In addition, the Microsoft Kinect would serve as a visual input. We still haven’t decided what kind of visual recognition we would like to implement (e.g. full-body recognition, face recognition, gaze tracking). Interestingly, we also thought about some kind of pulse meter as another optional input. One of our ideas was that as soon as your pulse rises, the difficulty or speed of the game rises too (but again, just an optional thought). The output will be visual and sonic: graphics generation (display) and sound generation (maybe some fancy things like 3D sound, and/or simply playing basic sounds).
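The optional pulse-meter idea could be sketched like this (plain Java; the baseline, scaling and cap are illustrative numbers we picked for the sketch, not tuned values):

```java
// Sketch of the optional pulse-meter idea: game speed scales with how far
// the player's heart rate rises above a resting baseline.
// The scaling constant and the cap are illustrative assumptions.
public class PulseDifficulty {

    // Returns a speed multiplier >= 1.0, capped so the game stays playable.
    static double speedFactor(int pulseBpm, int restingBpm) {
        if (pulseBpm <= restingBpm) {
            return 1.0; // at or below resting pulse: normal speed
        }
        double factor = 1.0 + (pulseBpm - restingBpm) / 100.0;
        return Math.min(factor, 2.0); // never more than double speed
    }
}
```

With these numbers, a pulse 50 bpm above resting would make the game run at 1.5× speed.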

In the following, you can see the working titles of the games on our “possible-to-implement list”:

Catch Me If You Can – Catching a moving point using certain buttons
Duck Hunt – Shooting objects, which are moving on the screen
Hit The Ball Back – Basically baseball, maybe with some kind of speed recognition
Don’t Touch The Objects – Evading moving objects
Draw Something – Drawing by gesturing
Different Sport Activities

There is still room for more ideas, but this is what we initially came up with. For now, we will try to get more familiar with our input devices and see whether they can do everything we want them to do, or rather how we would like them to function.

That’s all for today, see you next week!
Group E

The Rotating Snakes Illusion: An Attempted Explanation

Hey guys,

In this hand-in, we will discuss how the optical illusion “Rotating Snakes” by Kitaoka works.

This specific illusion belongs to the group of Peripheral Drift Illusions, which refers to illusions that create an anomalous impression of motion observable in peripheral vision. Earlier reports indicate that either fixation instabilities or ocular drifts are enough to make this drawing of snakes seemingly dance. However, recent studies specify more precisely that transient oculomotor events are the main factors creating this kind of illusion. In the following, we concentrate on the essential conditions driving this illusion.

Often, illusory motion can be observed in a direction going from a dark region to a lighter region, or from a lighter region to a dark one, but the effect is amplified when, as in the case of the Rotating Snakes, exactly four different luminance regions are used. This combination is said to create local motion signals in the visual system.

More specifically, O’Reilly’s paper introduces three mechanisms for Peripheral Drift Illusions: luminance, global contrast and local contrast. Here, luminance represents the intensity of the light on the retina from a given point in the illusion. Global contrast is the difference between the luminance at a particular point of the illusion and the average luminance. Local contrast is the difference between the luminance of a point and the luminance of the points around it.
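These two contrast measures can be illustrated with a tiny computation (a Java sketch over a 1-D strip of luminance values between 0.0 and 1.0; the strip and neighbourhood size are our own illustrative choices, not from the paper):

```java
// Sketch of the two contrast mechanisms on a 1-D strip of luminance
// values (0.0 = black, 1.0 = white). Illustrative, not from the paper.
public class ContrastModel {

    // Global contrast: a point's luminance minus the average over the image.
    static double globalContrast(double[] luminance, int i) {
        double sum = 0.0;
        for (double l : luminance) sum += l;
        return luminance[i] - sum / luminance.length;
    }

    // Local contrast: a point's luminance minus the average of its
    // immediate neighbours (edges use the point itself as a neighbour).
    static double localContrast(double[] luminance, int i) {
        double left  = (i > 0) ? luminance[i - 1] : luminance[i];
        double right = (i < luminance.length - 1) ? luminance[i + 1] : luminance[i];
        return luminance[i] - (left + right) / 2.0;
    }
}
```

For a strip like {0.0, 0.25, 1.0, 0.5} (four luminance regions, as in the illusion), the white point stands out strongly under both measures.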

A suggested explanation has to do with contrast-induced latency differences in neural responses. Areas of high global contrast are processed faster than areas of low global contrast, which results in local motion signals from the high-contrast areas relative to the lower-contrast areas. In the end, these latency differences trick the brain’s motion detectors.

This explanation focuses on the oculomotor events (saccades, microsaccades and blinks) that trigger the illusion. A saccade is a swift, simultaneous movement of both eyes, jumping from one fixation point to another. Microsaccades, on the other hand, are involuntary movements during continuous fixation. These ocular events refresh the retinal image, which in the end triggers the modified motion signal. Otero-Millan et al. found in their research trials a relation between these events and the perception of the illusion, which supports this assumption.

Sources:
http://www.ucl.ac.uk/~ucbpmor/docs/case_study3_mor_web.pdf

http://www.psy.ritsumei.ac.jp/~akitaoka/PDrift.pdf

http://www.neuralcorrelate.com/smc_lab/files/publications/martinez-conde_etal_nrn13.pdf

http://www.jneurosci.org/content/32/17/6043.full.pdf

Introduction

Hello everyone,

So we started our blog, and from now on we will try to keep you guys updated about everything related to our project, and to share our ideas and thoughts about multimodal interaction with respect to specific topics.

In the following 10 weeks our group has to build a multimodal system with the following restrictions:

1) With respect to input modalities, we have to choose at least two different ones
2) The same applies to the output modalities (2 different ones)

We are still in the process of brainstorming and discussing which kind of system we want to implement and how we want to do it. But don’t worry, we will keep you updated as promised.

And to wrap things up we want to introduce ourselves, so that you know who is in charge of this project:

Group E consists of:

Sven Lukanek (21), Computer Engineering B.Sc.
Linh Tran (22), Computer Science B.Sc.
Carl Nilsson (23), Industrial Engineering & Management M.Sc.
Nils Walter (24), Computer Engineering M.Sc.
Milad Takhasomi (25), Industrial Engineering M.Sc.

We are all from Germany, except for Carl – he is from Sweden 🙂

That’s it for now, until next week!

Group E