It’s been quite a slow summer work-wise, so I’ve had plenty of time on my hands.
I’ve been toying with an idea for a “digital pop-up book”: a book that comes complete with a Google Cardboard headset and combines a story with Augmented and Virtual Reality.
I created the first version in AIR and did manage to get everything working. However, performance was not great. I was using four Stage3D proxies, which worked fine on my own phone, an Oppo Find 7, but on smaller phones some levels would disappear for no apparent reason; I suspect there wasn’t enough VRAM available. The Oppo has a dedicated graphics chip and can easily fool you into thinking everything’s pukka.
Just when I thought I would have to drop Flash in order to carry on working with web 3D, Stage3D came along, and I’ve rarely looked back since. But now it’s time to look forward.
Now, as then, when I open Unity I can’t help feeling I’ve gone back to the Flash MX days: attaching scripts to actual graphical objects the way we used to, each MovieClip having an onClipEvent(load) and an onClipEvent(enterFrame). I wonder if that was the inspiration for Unity’s Start() / Update() template?
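For anyone who never wrote AS3, a minimal Unity behaviour makes the parallel clear. This is just a sketch (the class name and rotation are my own invention):

```csharp
using UnityEngine;

// Roughly the Unity equivalent of a MovieClip carrying
// onClipEvent(load) and onClipEvent(enterFrame) handlers.
public class Spinner : MonoBehaviour
{
    // Runs once before the first frame — like onClipEvent(load).
    void Start()
    {
        Debug.Log("Spinner loaded");
    }

    // Runs every frame — like onClipEvent(enterFrame).
    void Update()
    {
        transform.Rotate(0f, 90f * Time.deltaTime, 0f);
    }
}
```

Drop the script onto any GameObject and it starts spinning — the engine calls Start() and Update() for you, just as Flash dispatched the clip events.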
In Unity, the MovieClip is loosely replaced by the GameObject. That naming is another thing that put me off Unity: it assumes you’re making a game, which I generally am not.
Even so, the transition has been completely painless: apart from slight differences in syntax, everything is quite familiar.
I’m used to sitting with pages of code in FlashDevelop and only seeing my 3D creations after compiling. I actually wanted a similar workflow in Unity, but now I’m getting used to the way Unity works, and to the ability to create structure without necessarily having a “Main” class. (I found out how to do that too, but I don’t need my comfort blanket quite as much any more. 🙂)
In Unity you do set a lot of variables and properties through the user interface, but that interface is generated from your own code, so if you work the way I do, you are essentially building a custom UI for editing and tweaking your application as you go along. This is actually a very cool approach; I just never got that far in my previous brief attempts to master Unity.
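Concretely, any public field on a script appears in the Inspector automatically, and private fields can be exposed with the [SerializeField] attribute — so the “custom UI” falls straight out of your class definitions. A minimal sketch (the class and field names are my own):

```csharp
using UnityEngine;

public class PopupPage : MonoBehaviour
{
    // Public fields show up in the Inspector automatically,
    // editable per-instance without touching code.
    public float pageTurnSpeed = 2f;

    // Private fields stay hidden from other scripts, but
    // [SerializeField] still exposes them in the Inspector.
    [SerializeField] private Texture pageTexture;
}
```

Tweak pageTurnSpeed in the editor while the scene runs, and you get the kind of live experimentation Flash never really offered.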
Once you discover the power of a full-blown 3D engine, it’s easy to go mad. As I work primarily on mobile and tablet, I’ve had to tone it down a bit, but it’s fantastic to be able to work in a WYSIWYG environment again, the way we used to in the early Flash years.
Visual Studio is a pleasure to code in. Once I got IntelliSense working, I was able to use it to discover all the properties and methods available on different types of objects, and I now feel just as at home as I did in FlashDevelop.
Unlike most Flashers, I’m not quitting AS3 to take up a new technology; I’m just adding to my toolbox.
Now that I’m feeling comfortable with C#, a whole new world has opened up, and I’m excited to see where it takes me.
My first experiments have been in “Mixed Reality”, that is, a combination of Virtual and Augmented Reality using the Vuforia library. Sometimes it’s so easy it feels like cheating. ActionScript has always been a challenging platform to master (and therefore very satisfying in itself when you get something to work!), but the combination of Unity and C# provides a powerhouse of new possibilities, because all the simple stuff is taken care of. I’m not used to anything being “under the hood” — even libraries like Away3D I’ve customised and modified for my own ends.
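The basic Vuforia pattern in Unity is a script that reacts when an image target is found or lost, along the lines of the DefaultTrackableEventHandler that ships with the samples. This is a sketch from memory of the older Vuforia Unity extension’s API — names and signatures may differ in other versions:

```csharp
using UnityEngine;
using Vuforia;

// Reacts when Vuforia detects or loses an image target,
// e.g. a page of the physical pop-up book.
public class PageTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED)
        {
            Debug.Log("Page found — show the virtual content");
        }
        else
        {
            Debug.Log("Page lost — hide the virtual content");
        }
    }
}
```

Attach it to an ImageTarget in the scene, and the AR side of the book is essentially those two Debug.Log branches swapped for showing and hiding your 3D content.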
Though I’ve had to go from Flash Hero to Unity Zero, I’m quickly clawing my way up to where I can call myself a Unity Developer.