Mixed Reality

16.04.2018 ~ 04.05.2018

– Mixed Reality Overview

 

This project is about mixed reality. It was conducted with a visiting lecturer from London and lasted three weeks, using Unity and Vuforia.

[Unity logo]

[Vuforia logo]

Unity is a game engine that provides development environments for 3D and 2D video games, and an integrated authoring tool for creating interactive content such as 3D animation, architectural visualization, and virtual reality. Vuforia is a Unity plug-in that makes it easy to work with augmented reality.

Vuforia is an Augmented Reality Software Development Kit (SDK) for mobile devices that enables the creation of Augmented Reality applications. It uses Computer Vision technology to recognize and track planar images (Image Targets) and simple 3D objects, such as boxes, in real time. This image registration capability enables developers to position and orient virtual objects, such as 3D models and other media, in relation to real-world images when these are viewed through the camera of a mobile device. The virtual object then tracks the position and orientation of the image in real time, so that the viewer’s perspective on the object corresponds with their perspective on the Image Target, making it appear that the virtual object is part of the real-world scene.

On Monday, we briefly heard what we would do in the project and learned what mixed reality is.

VR, AR, and MR technologies that provide an immersive experience for users are collectively referred to as immersive media, and recently as Extended Reality (XR). Virtual reality is a technology that enables users to realistically experience situations that are difficult to experience in the real world, by extending and sharing the user’s sensory information within a computer-generated virtual environment. Augmented reality is a technology that provides users with a communication environment and information by compositing virtual content onto the real world and real objects.

If virtual reality lets you experience content disconnected from the real world through immersive devices, augmented reality is distinguished in that it presents content fused with the real world. Recently, as the blending of the virtual and real worlds has become more natural as an extension of augmented reality, technologies have been developed to maximize the user’s sense of immersion.

This immersive experience expands the scope of current, mainly visual, virtual information toward all five senses. And beyond today’s single-user-per-device environments, it is expected to be further amplified by multi-user environments in which several users can share the same virtual space and communicate regardless of distance.

The core technologies for implementing VR, AR, and MR include immersive visualization, realistic interaction, virtual environment creation, and simulation. In the case of augmented reality, sensing and tracking technologies, image registration, and real-time augmented reality interaction can also be cited, because real and virtual images must be combined and matched in three-dimensional real space while real-time interaction remains possible.

 

On Wednesday, there was a lecture on how to use a tool called Vuforia. First, you register a project on the Vuforia developer site and get a license key. Second, you upload the images or objects to be tracked. The tutor explained in more detail which image and object formats can be uploaded as tracking targets.

Later, I learned how to import the Vuforia plug-in into Unity, how to register the license key in my project, and how to import my uploaded files into Unity so that I could configure them as Vuforia Image Targets. Then a sample 3D model downloaded from the Internet was placed on the target image and the project was run. After that, through the webcam, we could see the model appear on top of the target image.
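The detect-and-show behaviour described above can be sketched as a Unity C# script attached to the Image Target, assuming the 2018-era Vuforia API (TrackableBehaviour / ITrackableEventHandler); the bundled DefaultTrackableEventHandler works along similar lines, but the model-toggling logic here is my own illustration.

```csharp
using UnityEngine;
using Vuforia;

// Sketch: show or hide the child 3D model when the Image Target
// is found or lost. Assumes the 2018-era Vuforia Unity API.
public class ModelToggleHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        // Attach this script to the ImageTarget GameObject in the scene.
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        // Treat detected, tracked, and extended-tracked states as "found".
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Enable or disable all renderers on the child model.
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}
```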

After that, we learned how to play a video on the target image and how to move a 3D object with a virtual controller. I also learned how to animate objects using Adobe’s ‘Mixamo’ tool, and how to build the project as an Android or iOS application.

There was also a short workshop on Thursday about how to implement virtual buttons in augmented reality applications. The workshop was hard to follow and understand, so I later recreated it at home.
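A virtual button of the kind covered in the workshop can be sketched as follows, again assuming the 2018-era Vuforia API (VirtualButtonBehaviour / IVirtualButtonEventHandler); the colour-change reaction is illustrative, not the workshop’s exact exercise.

```csharp
using UnityEngine;
using Vuforia;

// Sketch: react when a printed virtual button area on the Image
// Target is covered by the user's hand. Assumes a VirtualButton
// child exists under the ImageTarget in the scene.
public class VirtualButtonColor : MonoBehaviour, IVirtualButtonEventHandler
{
    public Renderer targetRenderer;   // assigned in the Inspector (assumption)

    void Start()
    {
        // Find the VirtualButton child and register for its events.
        VirtualButtonBehaviour vb = GetComponentInChildren<VirtualButtonBehaviour>();
        if (vb != null)
            vb.RegisterEventHandler(this);
    }

    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        // Fires when the button area is occluded (e.g. by a hand).
        if (targetRenderer != null)
            targetRenderer.material.color = Color.red;
    }

    public void OnButtonReleased(VirtualButtonBehaviour vb)
    {
        if (targetRenderer != null)
            targetRenderer.material.color = Color.white;
    }
}
```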

 

Week 2

 

There was a personal tutorial on Monday where we presented our ideas. Over the weekend I had researched what a ‘Virtual YouTuber’ is, and I introduced the concept to the tutors. I asked how this virtual character content could be brought fully into reality using augmented reality. In addition, I told the tutors that I would create a virtual stage as a location-based augmented reality application using GPS or real buildings and sculptures. But then the tutors asked, “Why?”. The idea I had presented came purely from my own interest, so I couldn’t answer; there was no critical meaning behind it.

As my tutor understood it, my idea was similar to the ‘control project’ I had worked on before, and the tutors recommended looking into mise-en-scène and combining it with the idea. They also said I could make a storyboard instead of a complete work.

After Monday’s tutorial, I worked on my personal work and research, and I had ‘Studio Support’ time on Friday and the following Monday. I asked the tutor about a new concept idea, a ‘Virtual Museum’, and got positive feedback.

In addition, I had been getting Android SDK and JDK errors when building Android applications since Wednesday. I asked the tutor for help, but it still didn’t work. Over the weekend I searched online communities and finally solved it by deleting and reinstalling everything.

 

 

Week 3

 

Over the weekend, I researched virtual reality exhibitions, because I judged that making an MMD character and a virtual stage would not lead to a critically positioned result. I decided to make a more critical work. So the idea became an MR art museum using VR painting and 3D assets. Drawing on my ten years of painting experience before coming to interaction design, I asked myself, “How can I bring more interest to the public?”

The outline is as follows. Museums and galleries are gradually changing from places for ‘seeing’ into places for ‘experiencing’, using projection mapping or mixed reality. From this point of view, I focused on mixed reality combined with landscape painting, rather than portraits or sculpture. I chose landscape painting because, in my mind, it can deliver more impression and emotion to viewers.

And if the landscape in a painting exists in reality, I can reconstruct the painting’s surroundings in mixed reality and show viewers where the composition was drawn and where the painter found inspiration. I thought this would be a great way for viewers to understand an artist’s perspective and taste.

This will ultimately consist of ‘augmented virtual reality’ based on the actual painting. In practice, using VR devices in places with heavy foot traffic, such as museums, is limited by time and cost; developing an AR application that uses the viewer’s own smartphone camera is a more practical way to introduce the experience. A QR code on the ticket, scanned before entering the museum, would provide an MR experience application matching the exhibition’s theme.

And if it were developed further in the future, viewers exploring the landscape around the painting could imagine what the artist saw. Viewers could also suggest new perspectives, different from the spot where the artist might have painted the landscape. Imagine then storing these individual perspectives in the virtual space and sharing them with others. I hoped everybody could share and discover fresh inspiration through unique perspectives, each with its own visual ‘beauty’ and ‘taste’.

 

Having the idea and making it are different matters. First, obtaining terrain data matching the places where the paintings were made was a problem, and producing it by any method would take a long time. So I decided to build a low-fidelity prototype, avoiding unfamiliar tools like Maya or ProBuilder. Instead, I reconstructed the artist’s painting in 3D using ‘Tilt Brush’, a VR painting (3D painting) tool developed by Google, and used the result as an asset for the augmented reality application.

For the landscape paintings, I chose works by my favorite Impressionist artists.

[Photos and screenshots: Tilt Brush 3D reconstructions of the paintings]

The prototype video is shown below. The phone used was a Galaxy S7; the application is probably not optimized for it, so there were a lot of frame drops in the Unity application.

Thank you for reading my blog 🙂

 

The Glasgow School of Art

Interaction Design Year 2

YongWon Choi

 
