Horizon Engine Part 1: Rendering

Hi, I’m Blake, the lead programmer on Memento. The other programmers (who will hopefully have entries soon) are Matt DeNovio, Torey Scheer, and Kyle Haas. Matt and I are working on the engine and gameplay, while Kyle and Torey are working on tools, gameplay, and other front-end work. As Dustin explained in an earlier post, we are developing a Flash game in ActionScript 3.0 (AS3). The reason for choosing this environment was twofold. First, our team is very familiar with the AS3 language and its surrounding framework, and we felt developing in it would lead to a quicker dev cycle (which we needed, since we hope to have the project done by the end of October). Second, we wanted the game to be playable on multiple platforms. Our primary target is Windows, but eventually we would like to port it to other OSes and potentially mobile, which the Adobe Integrated Runtime (AIR) makes fairly easy.

The other main technology we are using on this project (and really the core of the engine) is Stage3D. Stage3D is Adobe’s “low level” graphics API, which wraps the graphics language of the native device, allowing the programmer to write once for multiple platforms. Essentially, Stage3D is an API wrapper for technologies like DirectX and OpenGL that allows them to work in Flash. The downside to Stage3D is that, since Adobe wants to target so many platforms, it does not have all the functionality of some other graphics packages. Memento, while having its own visual style, does not really need many of the more advanced graphics features, so we felt it was a safe choice.

The main design goal behind the Horizon Engine was to build what we need. This engine is meant to be pretty specific to this project (though it can be used in future endeavors), so we are trying to make sure that we are making things the right level of “modular”. What I mean by that is that we are not necessarily trying to build an all-inclusive system like Starling, but further into development we do not want a lot of time lost because we need to change something and it causes everything else to break. Since “engine” is a bit of an umbrella term, I would like to outline more precisely what exactly our engine covers. The Horizon Engine’s components are: a Stage3D-powered renderer, a resource manager, string/XML handlers, an audio handler, and other miscellaneous utilities (like our keyboard manager). In this article, I would like to talk a bit about what it is like to use Stage3D, what we are doing with it, and some complications we have had so far.

To begin, Stage3D is pretty easy to use once you find some good articles on it. The two main resources for us have been the Adobe Stage3D article series and Jackson Dunstan’s blog. Matt and I have both had some experience with OpenGL, so the basic concepts of how a graphics package works transferred really easily to Stage3D. Our basic setup is actually very similar to the Adobe tutorial, with some changes to the shader and with classes representing the different rendered objects (called RenderObjects in our engine). In terms of what the fragment shader uses to display our objects, we currently support alpha on images, color information, and textures. We are planning on adding mipmapping eventually so that some of the further-back objects look more like they should.
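To give a sense of what a rendered-object class looks like in Stage3D terms, here is a minimal sketch. The class and member names are illustrative, not the actual Horizon Engine code; the vertex layout (position, UV, color) is an assumption based on what our fragment shader consumes.

```actionscript
package
{
    import flash.display3D.Context3D;
    import flash.display3D.IndexBuffer3D;
    import flash.display3D.VertexBuffer3D;
    import flash.display3D.textures.Texture;

    // A simplified RenderObject-style quad: one textured 2D plane.
    public class RenderObject
    {
        // 9 floats per vertex: x, y, z, u, v, r, g, b, a
        private static const DATA_PER_VERTEX:int = 9;

        public var vertexBuffer:VertexBuffer3D;
        public var indexBuffer:IndexBuffer3D;
        public var texture:Texture;

        public function RenderObject(context:Context3D, vertices:Vector.<Number>)
        {
            // Upload the four corner vertices of the quad.
            vertexBuffer = context.createVertexBuffer(4, DATA_PER_VERTEX);
            vertexBuffer.uploadFromVector(vertices, 0, 4);

            // Two triangles forming the quad.
            indexBuffer = context.createIndexBuffer(6);
            indexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2, 2, 3, 0]), 0, 6);
        }
    }
}
```

At draw time, each object’s buffers get bound with `setVertexBufferAt` and drawn with `drawTriangles`, which is essentially what the Adobe tutorial does in its render loop.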

The two main issues we have had so far have to do with the Stage3D shading language and with our camera. In terms of the shaders, Matt has been working on creating dynamic lighting for the game, but was having some trouble with it, mainly due to strange behavior in the shader. Stage3D uses its own language known as Adobe Graphics Assembly Language (AGAL), with the key word being assembly. AGAL uses a small set of opcodes to manipulate the input data. It is not so much complicated as it is a bit annoying and hard to debug. Here’s an example from our fragment shader:

"tex ft1, v0, fs0 <2d, nearest, mipnone>\n" + // sample the texture (fs0) at the interpolated UV (v0)
"sub ft0.x, ft1.a, fc0.x\n" +                 // subtract the alpha-threshold constant (fc0.x) from the sampled alpha
"kil ft0.x\n" +                               // discard the fragment if the result is negative
"add ft2, ft1, v1\n" +                        // add the interpolated vertex color (v1) to the sampled color
"mov oc, ft2"                                 // write the result to the output color

Matt will talk about some of the complications he has had in a later post, but we have definitely had issues figuring out how exactly the shader is working with the components of each vector/matrix, and why our results are not exactly what we expect.
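For context, AGAL source like the snippet above is not compiled by the Flash compiler; it is assembled at runtime into bytecode and uploaded to the GPU. A sketch of that step, using the AGALMiniAssembler utility from Adobe’s Stage3D samples (the vertex shader shown here is a plausible counterpart to our fragment shader, not our actual code):

```actionscript
import com.adobe.utils.AGALMiniAssembler;
import flash.display3D.Context3D;
import flash.display3D.Context3DProgramType;
import flash.display3D.Program3D;

function buildProgram(context:Context3D):Program3D
{
    // Illustrative vertex shader: transform position, pass UVs and color through.
    var vertexSource:String =
        "m44 op, va0, vc0\n" + // position * model-view-projection matrix (vc0-vc3)
        "mov v0, va1\n" +      // pass UV coordinates to the fragment shader
        "mov v1, va2";         // pass the color offset to the fragment shader

    var fragmentSource:String =
        "tex ft1, v0, fs0 <2d, nearest, mipnone>\n" +
        "sub ft0.x, ft1.a, fc0.x\n" +
        "kil ft0.x\n" +
        "add ft2, ft1, v1\n" +
        "mov oc, ft2";

    // Assemble each shader's AGAL source into bytecode.
    var vertexAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    vertexAssembler.assemble(Context3DProgramType.VERTEX, vertexSource);

    var fragmentAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    fragmentAssembler.assemble(Context3DProgramType.FRAGMENT, fragmentSource);

    // Upload both to the GPU as a single program.
    var program:Program3D = context.createProgram();
    program.upload(vertexAssembler.agalcode, fragmentAssembler.agalcode);
    return program;
}
```

Part of why AGAL is hard to debug is visible here: the source lives in strings, so typos only surface when `assemble` runs, and there is no stepping through a shader once it is on the GPU.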

Everything is a 2D plane, despite the background looking like it has depth.

The main problem that I have had thus far is figuring out how to position the camera (or view matrix). Memento is a 2D game simulating 3D space, so we are not using any 3D models. We felt the best way to model this in game and make it appear correctly was to represent everything using perspective projection. Everything in the game is a 2D plane that is angled properly with respect to the camera, making it look like it’s coming from the vanishing point on another 2D plane. What was a bit complex about this, though, was that unlike Unity (which we used for Energy Drive) we didn’t have a built-in camera to set up and fool around with. The way we solved this was to create a debug camera that allows the camera to move around (basically changing the view matrix), letting us make it appear like we are not just manipulating planes.
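Since Stage3D has no camera object, a “camera” is just a matrix you build yourself and upload as a shader constant. A minimal sketch of that idea, assuming the PerspectiveMatrix3D helper from Adobe’s Stage3D sample utilities (the field of view, aspect ratio, and camera position here are placeholder values, not our engine’s):

```actionscript
import com.adobe.utils.PerspectiveMatrix3D;
import flash.display3D.Context3D;
import flash.display3D.Context3DProgramType;
import flash.geom.Matrix3D;

function uploadCameraMatrix(context:Context3D, camX:Number, camY:Number, camZ:Number):void
{
    // View matrix: the inverse of the camera's transform. For a simple
    // debug camera that only moves, that is just a negated translation.
    var view:Matrix3D = new Matrix3D();
    view.appendTranslation(-camX, -camY, -camZ);

    // Projection matrix: perspective, so the angled 2D planes converge
    // toward a vanishing point.
    var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
    projection.perspectiveFieldOfViewRH(45.0 * Math.PI / 180.0, 4.0 / 3.0, 0.1, 1000.0);

    // Combine view and projection, then hand the result to the vertex
    // shader as constant registers vc0-vc3.
    var viewProjection:Matrix3D = new Matrix3D();
    viewProjection.append(view);
    viewProjection.append(projection);
    context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, viewProjection, true);
}
```

Moving the debug camera is then just calling this with new coordinates each frame, which is what lets us verify the scene reads as 3D rather than as a stack of planes.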

This concludes the first article on the engine. In the future we are going to talk about the resource manager and the resource management pipeline, the string and audio pipelines, and some more of the problems we have run into (and are going to run into) along the way.
