
Andreia Gaita on .NET and Mono, Unity, VR
Interview with Andreia Gaita by Werner Schuster, recorded at Code Mesh 2015, London; published on Jan 23, 2016 (26:34)

Bio Andreia Gaita works at GitHub and is a C#/C++ developer and long-time OSS and Mono contributor who found herself in the world of game development and VR when she started working at Unity Technologies. For the past 16 years she has been involved in the development of cross-platform applications, services and libraries, creating bindings and making tools that help developers be successful.


   

1. We are here at Code Mesh 2015 in London. I am sitting here with Andreia Gaita. So, Andreia, who are you?

I am Andreia Gaita. I am a C# and C++ developer and I work at GitHub doing the Visual Studio extension there, and I was also a Mono core developer. I do not do that much Mono work nowadays, but I still hold it in my heart and smile. I worked at Xamarin and at Unity, doing their scripting. So I have been jumping around between industries.

   

2. At GitHub you are GitHub's official Visual Studio toolsmith?

Pretty much. Yes. That is me. The Visual Studio extension is very new. So, that is what I do.

   

3. What does the extension do? Is it mainly for interacting with GitHub?

It is a developer tool giving you integration with GitHub. Visual Studio already comes with Git source control tooling, but there are things that are specific to GitHub, like pull requests and issues, or even just authenticating to GitHub when you want to push and pull anything and you have two-factor authentication, or you have a GitHub Enterprise instance with your own authentication. All these little integrations have been really hard for developers up until now, because authentication is always hard, and having GitHub-specific features in Visual Studio is something that people have been wanting. So this is basically what I am doing. The first version of the extension gives you authentication and all the infrastructure, the things that make it work. We are working now on pull requests, which will be coming out soon, so people can do pull requests from inside Visual Studio and interact and hopefully see comments on their commits and all this stuff.

   

4. So you are writing this in C# or what do you people use to build Visual Studio extensions?

It is all C#. You can do other languages, but C# is my language of choice, and it is also the one with the most tooling inside Visual Studio. There is also a lot of C# knowledge at GitHub itself, because we already have the desktop client, which is also in C#. So, it is pretty much C#.

   

5. You mentioned Mono. What is the current state of Mono now that Core CLR and all the Microsoft Open Source .NET things are around? What do you think?

Well, Mono moved on to mobile a while ago. It did that when Novell imploded in 2011 and Xamarin was started, and it basically moved into the mobile space, which implies a lot of optimizations on the runtime and a lot of work that is different from building a desktop compiler and language. So when Core CLR came out, the obvious assumption was that we do not need Mono anymore, but it is different, because Core CLR is still a .NET that does not know how to do cross-platform decently and effectively. It is always about the tooling and integration with the systems. The code is obviously all there and it runs and cross-compiles, but from that to actually running on your phone in a decent way, or running across all of the platforms that Mono runs on (and Mono runs across a lot of platforms), it is not the same thing. There is a lot of stuff that you need to make that happen. So Core CLR and Mono are not converging, but they are complementing each other: there is a lot of very good code in Core CLR that can potentially be used in Mono to replace the stuff that was not that good, Core CLR can use the way Mono does things to improve its cross-platform support, and you can also use Core CLR as a way of experimenting with different things without worrying that you are going to break a huge segment of the market and the people who are using Mono for their applications right now. So they all exist in a space, and it is really, really amazing to see it come out and people starting to do that. And, funny enough, the reason why Core CLR runs on Mac is because Mono people made it run on Mac. So, you know, everything is useful.

   

6. Certainly. I think you mentioned Unity and Unity is using Mono. Would you explain what Unity is?

Unity is a game platform, a middleware. It is more than just a game engine; it is a game IDE, a game creation tool, everything that you need to create a game. It gives you an IDE so you can drag and drop cubes and put things in a view, almost like a Photoshop editor. It also gives you scripting via C#, so you can code your games in C#. It gives you a ton of libraries to do all sorts of things that you need in a game, from accessing sensor data on a phone, for an accelerometer, or for, you know, VR rendering, or anything that you need to do in networking and everything. So it is a complete package for developers, and it runs Mono for the scripting environment, because most of the engine is in C++, but you really do not want to hand that directly to the people making the game. You want to have a scripting environment, which all games have. It is very common: you have the engine running on whatever language it is running on, and then you provide a scripting environment so that people can interact with the APIs and be able to create their game. You want that scripting environment to be as simple and as isolated as possible, but yet have access to everything without a lot of hoops, because things need to be performant. So C# falls into a really nice niche of having a very efficient way of calling into native land and back. You can do millions of calls without performance loss, and you can have memory allocators in native land that you can access in C#. So it is a language that is very well integrated with native code. It gives you ways of having performance there and, at the same time, it is very cross-platform because of Mono, and Unity runs on 27 platforms now. The only way to do this with one code base is to have something that runs on 27 platforms.
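The native-interop point above can be illustrated with a minimal P/Invoke sketch. This is not Unity's actual binding layer; the native library name and entry points here are hypothetical, but they show the mechanism C# uses to call cheaply into a C++ core:

```csharp
using System;
using System.Runtime.InteropServices;

static class EngineBindings
{
    // Hypothetical native entry point; in a real engine this would live in
    // the C++ core (e.g. gameengine.dll / libgameengine.so).
    [DllImport("gameengine", EntryPoint = "engine_get_time")]
    public static extern double GetEngineTime();

    // Blittable structs cross the managed/native boundary without
    // marshalling cost, which is what makes millions of calls per frame
    // feasible.
    [StructLayout(LayoutKind.Sequential)]
    public struct Vector3 { public float x, y, z; }

    [DllImport("gameengine", EntryPoint = "engine_raycast")]
    public static extern bool Raycast(Vector3 origin, Vector3 direction,
                                      out Vector3 hit);
}
```

The key property is that blittable arguments (floats, plain structs) are passed directly with no copying or conversion, so the managed-to-native transition stays cheap enough for per-frame engine APIs.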

So that is Mono. It is the reason why Unity cannot upgrade the Mono version as often as they would like because they have to run on all of these platforms and they have to maintain all of this stuff, but they do get the amazing advantage of being able to run the same code on 27 platforms, which, if you think about it, most people do not even know that there exist 27 different platforms in the world that you can run code on, but there you go.

Werner: I think there's Linux, Mac and this other one... Microsoft.

Yes, exactly. And then there are PlayStation 4 and 3, there is Xbox and Xbox One, and then there are Samsungs and Tizens and BlackBerrys, and there are all these different types of Androids out there. They are customized. Plus the Universal Windows Apps and Windows Store Apps. There are a lot of platforms.

   

7. So Mono is basically used like Lua?

Yes, but there is a difference: it is used like Lua and it gives you a scripting advantage, but the entire IDE is also made in C#. When you are coding in Unity you can actually modify the IDE itself in the same project, so your game can also provide tools for other people to create levels inside Unity, for your games, with your tooling, and it is all running in exactly the same code. You are extending the editor at the same time you are building your game, and you can do this all in C#. So they have been pushing a lot of the limits of cross-platform development with C#.
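As a rough illustration of what "extending the editor in the same project" means, a C# script in the project can add windows and menu items to the Unity IDE itself. A minimal sketch (the menu path and the snap-to-grid tool are made up for the example):

```csharp
using UnityEngine;
using UnityEditor;

// Placed in an Editor/ folder, this class extends the Unity IDE itself:
// the same C# project that builds the game also builds tooling for it.
public class LevelToolsWindow : EditorWindow
{
    [MenuItem("Tools/Level Tools")]
    static void Open()
    {
        GetWindow<LevelToolsWindow>("Level Tools");
    }

    void OnGUI()
    {
        if (GUILayout.Button("Snap selection to grid"))
        {
            // Rounds the position of every selected object in the scene view.
            foreach (var t in Selection.transforms)
            {
                var p = t.position;
                t.position = new Vector3(Mathf.Round(p.x),
                                         Mathf.Round(p.y),
                                         Mathf.Round(p.z));
            }
        }
    }
}
```

This is the pattern that lets a game ship its own level-design tools: the window runs inside the editor, against the same scene data the game code uses.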

   

8. You gave a talk here at Code Mesh. Can you give us a quick overview of what you talked about?

The talk was VR Best Practices, and it was basically a rundown of the existing state of VR hardware as we have it now, and the best practices that we know right now, because it is a field we are still figuring out. But it is what we know now: what can make you sick, what you should avoid, what types of things you should do or not do when you are creating content, some pitfalls that other people have already experienced. It is mostly about simulation sickness, because that is the breaker for all these experiences: if you get sick, it is done, and you get sick very quickly, and some people are more susceptible to it than others. There are things that you should be doing or not doing to avoid this. There has been enough research and work over the last two or three years to determine “this is a trigger”, “these are things that you should be doing”. So the talk went through all of the most common things, something like “note: do not do this, do that”, and where we are right now basically.

Werner: So if I want to make people sick, what are the big things that I should do?

Just do this motion [sway side to side] in front of your face in VR – it is enough to make most people sick. Strafing and parallel movement trigger simulation sickness very, very quickly. Also, if you make a character go up the stairs instead of taking the elevator, and they are running up the stairs, they will see the floor in a diagonal parallel motion, especially if they look down. All of these motions easily trigger simulation sickness. If you want to simulate an earthquake, you go “shaky, shaky” with the camera – yes, that will do it. Most of the time, even in real life, that would probably make you feel somewhat sick. But it is mostly movement that is not triggered by the user, because when that happens, your inner sense of where you are – which you always have; you always know whether your body is moving or not – clashes against what you are seeing. Your eyes override a lot of the experience, enough that they can make you believe that you are there and moving, even though you know you are not, because you can see it. But if you just move the character around without the user triggering that action, very easily people will go “Wow, this is really bad!” So movement in general is a problem, which is annoying really, because you want to move, right? So it is a question of being very careful about the speed and how people move in the game. It should follow your movement, so it should be natural and never different from what you are expecting. That also leads to latency issues and frame rate issues, where when you move, the information that gets to the screen is too slow and does not match what you expect. So if you move your head around and the game is slow and there is a latency problem, the sensor data gets read and then eventually goes to the screen to render what you are seeing.
If this takes longer than 20 milliseconds, you will feel it and it will make you sick, because there is a discrepancy between what you see and what you expect. So basically it is very easy to make people sick. Bad experiences are very easy; making good experiences is hard. But then again, we are still learning. It is something that we are just learning now: what makes a good experience and what makes a bad one. There are some best practices that we know, others are still being discovered, and there is a lot of content that has not been created yet, things that nobody has ever done, right? So it is hard.
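To put the 20-millisecond figure in perspective, it helps to compare it with the frame time at common VR refresh rates. A back-of-the-envelope sketch (the 20 ms threshold is the commonly cited guideline, not a hardware spec):

```csharp
using System;

class LatencyBudget
{
    static void Main()
    {
        // Commonly cited motion-to-photon threshold for comfort.
        double budgetMs = 20.0;

        // Per-frame time at typical VR refresh rates.
        double frame90 = 1000.0 / 90.0;  // ~11.1 ms per frame at 90 Hz
        double frame60 = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 Hz

        // One frame of pipeline delay at 90 Hz leaves under 9 ms for
        // sensor reads, simulation and scan-out; a missed frame (two
        // frame periods, ~22 ms) already exceeds the budget.
        Console.WriteLine($"90 Hz frame: {frame90:F1} ms, slack: {budgetMs - frame90:F1} ms");
        Console.WriteLine($"60 Hz frame: {frame60:F1} ms, slack: {budgetMs - frame60:F1} ms");
    }
}
```

The arithmetic is the whole point: a single dropped frame eats the entire comfort budget, which is why VR rendering treats frame rate as a hard constraint rather than a quality slider.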

Werner: I think, John Carmack tweeted about research that says that it helps if you put the user’s nose into the field of vision – I don't know if that was a joke or not.

It is half and half. It is again about having a point of reference for motion, right? If you do not have a point of reference, if you see things moving around and your eyes do not have anything to focus on, you are going to be more affected by movement. It will not necessarily make you sick, but what movement does is clash against your sense of “I am not moving”. If you have something, like a nose or something, that your eyes can actually focus on, you will minimize that, especially if it is in the center. We have more motion detectors on the sides of our eyes, so the periphery is more in tune with motion, and the center less so. So any movement in the periphery is more likely to make you sick; it is more likely to be noticed, too, but it is also blurrier, partly because of the lenses. There is more focus in the front than on the sides. So a combination of the blurriness and motion on the sides can trigger simulation sickness as well. Something in the center is going to be in focus, and if it is fixed, like a nose, that is the minimizing factor. I do not think I want a nose there, but it is one more clue that all of this is related to fixed things versus moving things, and points of view, and all of this stuff. So it is just one more clue that “OK, that makes sense!”

Werner: It is interesting how early we are in this learning process of VR given that we have been talking about VR for 25 years, but it is only becoming real now. I guess what drew me to VR was that a few years ago Carmack was talking about how important latency is and even now it is hard to make graphics fast enough. We are still struggling. So it is interesting.

Yes. I got interested in VR reading “Snow Crash” and “Diamond Age”, because of the descriptions of how you could provide educational material for someone in a VR environment. “Diamond Age” is all about learning through a book that is VR, and having motion capture of someone playing a character in a play where everybody else is also in VR, and shared VR experiences with other people, which is fascinating and interesting. And yes, latency – they have been doing a lot of research on this. It is the problem with pushing all of those pixels. You would think we have [good enough] graphics cards, right? But the truth is that when you are rendering to a VR device, you are effectively rendering as if to a 4K monitor. That is the resolution that you have to push to be able to have both eyes seeing something real. You are wasting a lot of pixels basically; you are rendering almost twice as much as you actually want to display. And doing 4K – we just started with 4K monitors – you really, really have to have a lot of pixels being pushed and a lot of power behind it. The reason we have not been there yet is that to develop VR, you need a lot of money, and you need to develop the device before you have content, because it is such a different platform. How are you going to build content if you do not have a device? How are you going to build a device if you do not have content? Because if you build a device and there is no content, you are not going to sell it. So how are you going to do this? You have to have a lot of money. The reason why VR exists right now is that a lot of people with a lot of money thought it was a really cool idea: literally, it is Oculus with Facebook giving them money, and it is Valve because they have a lot of money and they like cool gadgets. This is how VR is being driven: by people who love gadgets and have a lot of money.
Because the devices are coming out and there is not going to be content for them, and you cannot sell things without content, right? It is really hard. But these companies have enough money that they can push past that hump: we have devices, now we have to build content, but we are not going to go bankrupt on this. So it is not that we could not do it before, because we could; it is just that you need all of these things together to actually make it work.

   

9. As you mentioned, there are lots of things to learn if you want to get into making your own VR. What kind of resources do you use? Books or blogs? Who do you follow? How do you learn about this stuff?

First and foremost, it is the documentation from the hardware vendors. Not the blogs, but the documentation. Oculus has best practices guidelines and how-tos for the first things; you should be looking at their demos as a starter. Then other vendors as well: Unity has best practices, and other vendors have their own, especially Leap Motion. Leap Motion is an interesting player in that they make sensors for detecting your hands in space, so you can attach this to the headset, have your hands tracked in space and augment your experiences with your hands. They are doing a lot of research on UI, on how to do UI properly and how to render your hands. While all the headset vendors are worried about the VR headset itself and not about how to build content – they have some ideas on what to do, but they are not building the content – Leap Motion is actually building a set of UI widgets for VR. So things that are interactable, things that show up and pop up, like menus, and experimenting with scroll bars versus drop-downs versus what works when you are in a VR environment and you want a HUD in front of you, which is actually satisfying, just having things that you can poke at. It is really nice. So they are a really nice blog to follow. They actually have a VR jam right now, so you can get their hardware at a discount – the little sensor – and then you can send in an experience and hopefully make a lot of money. But you know, it is a game jam. They do a lot of game jams. Then, there are no books. There are really no books.

Werner: It is a “wild west” right now basically.

It is. Nobody knows. You can see existing things; some things are working very well, like different types of experiences and games where you go “Oh, this is interesting and new. This would not work in any other medium, but it actually works here.” There are a lot of people experimenting with a lot of things. It is also hard because most of this hardware is prototypes, except for the Gear VR. So you might want to get your hands on a prototype and try it. We are at the end of the year right now and everything is going to be launched next year. It is not certain when, but they are going to announce everything in March 2016. All of them: Oculus and Valve and Sony. They are all going to announce the consumer products in March, and then they might hit the stores in June or July. Some of them have cut off sales and are no longer selling the headsets because of this, but there are a ton of headsets out there actually. There is not only Oculus; there are at least 10 or 12 different companies doing headsets. There is even OSVR, which is an open source headset. You can download the schematics and build your own headset if you want to do that, or buy their kit and build it. They have an open source SDK to plug into it. The headsets are not that expensive, because the headset itself is a simple piece of technology: it is a bunch of sensors and a screen, basically. Unity is free, so you can totally get Unity and a headset and start playing around, and then it is more a question of building experiences for this, especially for people who do not normally do games. I would actually suggest playing with UI, like playing with menus, floating menus and screens and things like that, or even trying different types of things that are not games, like rendering areas for architecture, or a different type of education, or at least playing with a lot of the game jams and the indie festival, the indie IGF games.
There is an IGF every year and there are almost 600 games submitted to it, and a lot of Oculus and VR games are submitted as well. Some of them are not games; some of them are experiences, like different ways of interacting with space. So it is just trying things out and seeing what works, and maybe coming up with something that nobody has ever done. It is honestly a new medium. Some people are discovering new ways of doing things; other people are going to discover more. So, again, fortunately, the headset is not that expensive. It is a matter of “do not get your hopes up right now, being all amazed at everything”, but it is a step. We are getting there. It is a few months away. So if you want to get ready, you can totally try it now and start getting ready for when all of these headsets come out, and then there is an opportunity for content, there is an opportunity for tools, there is an opportunity for all sorts of things to grow out of this new environment basically.

Werner: We have some exciting times ahead.

Yes.

Werner: I have been looking forward to basically having a VR headset and working like in “Minority Report” and building up my arms.

Yes. The arm thing, the controllers, are definitely interesting and exciting because of the interactivity. You put the headset on, and what do you do? How do you interact? Right now we are stuck with keyboards or Xbox controllers, but all of the hardware vendors are going to launch controllers. Each one of them is going to be different; literally everybody has a different approach. It is one of those things. And not only that, there are third-party accessories. They are also launching different ways of tracking you in space and tracking your hands. That is the fun bit! You are moving around in space and are actually being tracked, inside the space, so you do not have to use a controller. Or if you do use a controller, you can see your hands, or there can be something else, swords or whatever. So those are the fun parts.

Werner: Lots to look forward to. You have given us lots of information. Thank you, Andreia.

Thank you.
