
2D Game Prototyping in Unity3D: Orthographic Projection

Having discussed Unity3D’s GUI class and the GUISpriteUI system as two different methods of creating 2D games in Unity3D, we’re now ready to discuss a third method: combining 3D graphics with orthographic projection.

For this to work, you’ll have to create a scene in 3D, and then set up the camera to use orthographic projection instead of perspective projection.

If all you want is to use 2D textures, you can simply create cubes and assign them materials with these textures.

The fact that you’re using a 3D engine to create your 2D graphics actually allows you to do more than that: the possibilities include using the physics engine, or applying 3D animation blending to your characters.
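Creating such a textured cube takes only a few lines. Here’s a minimal sketch, assuming a texture file named bunny exists in the Assets/Resources folder:

```csharp
using UnityEngine;

public class SpriteCube : MonoBehaviour {
	void Start() {
		// Create a cube and use a 2D texture as its material's main texture
		GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
		cube.GetComponent<Renderer>().material.mainTexture =
			(Texture2D)Resources.Load("bunny");
	}
}
```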

To set up your camera correctly, you have to set Projection to Orthographic, and you have to set the Orthographic Size.
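Besides using the Inspector, the same settings can be applied from a script. A minimal sketch, assuming it’s attached to the camera’s GameObject:

```csharp
using UnityEngine;

public class OrthoCameraSetup : MonoBehaviour {
	void Start() {
		Camera cam = GetComponent<Camera>();
		cam.orthographic = true;
		// Orthographic Size is half the vertical view height in world units:
		// with a size of 5, the camera sees 10 world units from top to bottom.
		cam.orthographicSize = 5.0f;
	}
}
```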

Especially in prototypes where the physics engine needs to be used, this can come in very handy. The following video gives you a peek behind the scenes of the Gremlin and Bayou Bird prototypes:

The fifth and last post in this series about achieving 2D in Unity3D will summarize the advantages and disadvantages of the different methods, so you’ll know which method to choose depending on your needs.

2D Game Prototyping in Unity3D: Sprite Manager Systems

As I explained in my previous post, using Unity3D‘s built-in GUI class can be a real performance killer on the iOS and Android platforms due to the high number of draw calls.

A sprite manager system, such as GUISpriteUI, can resolve this performance issue entirely.

To reduce the high number of draw calls, you could generate a 3D mesh on the fly, containing all separate rectangles for your 2D graphics. This mesh can then be sent to the GPU in a single draw call, along with one big texture atlas containing all your 2D images.

This is exactly what a system like GUISpriteUI is doing. It dramatically reduces the number of draw calls (perfect for iOS and Android), but requires some set-up effort.
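To illustrate the idea (a simplified sketch, not GUISpriteUI’s actual code): each sprite becomes a quad in one shared mesh, with UV coordinates selecting the sprite’s region within the atlas:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sketch of a sprite batcher: one quad per sprite, all in a single mesh
// rendered with a single atlas material, so everything is one draw call.
public class SpriteBatch {
	private List<Vector3> vertices = new List<Vector3>();
	private List<Vector2> uvs = new List<Vector2>();
	private List<int> triangles = new List<int>();

	// 'uvRect' selects the sprite's region within the atlas (0..1 coordinates)
	public void AddSprite(Vector3 pos, Vector2 size, Rect uvRect) {
		int i = vertices.Count;
		vertices.Add(pos);
		vertices.Add(pos + new Vector3(size.x, 0, 0));
		vertices.Add(pos + new Vector3(size.x, size.y, 0));
		vertices.Add(pos + new Vector3(0, size.y, 0));
		uvs.Add(new Vector2(uvRect.xMin, uvRect.yMin));
		uvs.Add(new Vector2(uvRect.xMax, uvRect.yMin));
		uvs.Add(new Vector2(uvRect.xMax, uvRect.yMax));
		uvs.Add(new Vector2(uvRect.xMin, uvRect.yMax));
		triangles.AddRange(new int[] { i, i + 2, i + 1, i, i + 3, i + 2 });
	}

	// Write the accumulated quads into a mesh, ready for rendering
	public void Apply(Mesh mesh) {
		mesh.Clear();
		mesh.vertices = vertices.ToArray();
		mesh.uv = uvs.ToArray();
		mesh.triangles = triangles.ToArray();
	}
}
```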

Unfortunately, every advantage has its disadvantage, and the GUISpriteUI system is no exception.

You’ll need some additional time to set it up, and you’ll have to create a sprite object for each image you want to show, which makes it slower to use. This may be no big deal when implementing a full game, but when prototyping, it slows you down.

We can conclude that this system is perfect for 2D UI in both 2D and 3D games, but that it’s less interesting to use for rapid prototyping.

2D Game Prototyping in Unity3D Using the GUI Class

Using the GUI class is probably the simplest way to create 2D game prototypes in Unity3D.

To draw an image to the screen, you only need a single line of code; no objects need to be instantiated whatsoever:

GUI.DrawTexture(new Rect(0, 0, 128, 128), (Texture2D)Resources.Load("bunny"));

For this to work, this line needs to be in the OnGUI method, and a file named bunny has to exist in the Assets/Resources folder (the actual file would be named bunny.png, bunny.psd, or whatever your favorite file format is).

For many uses, this is fast enough.

When you’re drawing many images every frame, there’s some room for optimization:

  • The OnGUI method is called multiple times per frame: once for each input event, and once for rendering. As we’re only drawing a texture here, it’s enough to draw it during the repaint event only.
  • Also, Resources.Load is called each time. This doesn’t mean our bunny is loaded from disk every frame; it’s only loaded the first time, and retrieved from cache on every subsequent call. Still, the cached data is looked up by string each time – a relatively slow operation.

So if your GUI code is getting slow, you could try the following:

// in your class declaration
Texture2D bunny;

// in Start()
bunny = (Texture2D)Resources.Load("bunny");

// in OnGUI()
if (Event.current.type == EventType.Repaint)
{
	GUI.DrawTexture(new Rect(0, 0, 128, 128), bunny);
}

Using the GUI class has a major drawback on iOS and Android: each image or text string is sent separately to the GPU, resulting in many draw calls. As these are really slow operations on mobile platforms, this is a real performance killer.

Another drawback is that you won’t be able to use the built-in physics engine. Depending on what you want to prototype, this may be an issue.

In the next post, I’ll discuss how sprite manager systems, such as GUISpriteUI, can be used to get faster 2D graphics on mobile platforms.

Prototyping 2D Games in Unity3D

For many of our prototypes, we’re using Unity3D, even if the gameplay is 2D.

There are several different ways to create 2D games and prototypes in Unity3D.
In the posts I’ll publish over the following days, I’ll be commenting on three possible solutions:

  • Using Unity3D’s GUI class
  • Using a sprite manager system, such as GUISpriteUI
  • Using 3D objects and orthographic projection

A peek behind the scenes of the Gremlin prototype

Unity Roadmap 2011: Three Features we’re Most Wild About

As you may have noticed, we’re using Unity3D quite a lot for our prototyping services.

Unity3D is a cross-platform game engine, allowing us to prototype for PC, Mac, iOS and Android without having to specialize in a separate game engine for each platform.

A few hours ago, the Unity Roadmap 2011 was posted to the Unity3D blog. This roadmap lists a number of features that will be added to the Unity engine, making it a very interesting read for Unity3D enthusiasts.

These are the three features we’re most excited about:

  1. Flash export: By using the upcoming Molehill API – which will allow proper real-time 3D in Flash – Unity will offer the option to export to Flash. Currently, only people who have the Unity plug-in installed can see Unity content in the browser. This new feature will make sure Unity3D content can easily be delivered to nearly anyone’s web browser, opening the doors for advergames developed in Unity3D.
  2. Crowd simulation: Normally only supported in high-end AAA game engines, this is one of those features that would take a huge amount of time to develop if you had to do it yourself. As it opens the door to prototyping crowd-based game mechanics, it’s a feature we’ll surely play around with.
  3. Microphone and webcam support: An additional means of input for games is always welcome, as it encourages developers to innovate. Microphone and webcam support could facilitate prototyping singing games and augmented reality games.

Ürban PAD

The last few days, I have been examining Ürban PAD, a tool created by Gamr7 that allows you to rapidly create a 3D city. This would be a perfect solution for prototyping open world games situated in a city.

By combining rules in a visual way, the tool allows you to create buildings, blocks, streets, and most interestingly: city layouts. One way to create a city layout is to start off with a big rectangle and subsequently split it into parts, making the street network denser and denser.
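The underlying splitting idea can be sketched in a few lines of hypothetical code (this is not Ürban PAD’s actual algorithm): recursively cut each rectangle until the blocks are small enough, with the cut lines forming the streets:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sketch: recursively split a rectangle into ever-smaller city blocks.
// The cut positions are randomized to avoid a perfectly uniform grid.
public static class BlockSplitter {
	public static void Split(Rect block, float minSize, List<Rect> result) {
		if (block.width < minSize && block.height < minSize) {
			result.Add(block); // small enough: this becomes a city block
			return;
		}
		if (block.width >= block.height) {
			// split vertically somewhere between 30% and 70% of the width
			float cut = block.width * Random.Range(0.3f, 0.7f);
			Split(new Rect(block.x, block.y, cut, block.height), minSize, result);
			Split(new Rect(block.x + cut, block.y, block.width - cut, block.height), minSize, result);
		} else {
			// split horizontally
			float cut = block.height * Random.Range(0.3f, 0.7f);
			Split(new Rect(block.x, block.y, block.width, cut), minSize, result);
			Split(new Rect(block.x, block.y + cut, block.width, block.height - cut), minSize, result);
		}
	}
}
```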

Steps in the process of creating a street network in Ürban Pad

For now, we’re able to create a static city in Unity3D, with materials and collision detection.
To be sure the 3D models created by Ürban PAD are fine for mobile use, we created a simple first-person checkpoint race game, running on a Samsung Galaxy Tab:

Currently, the only thing Ürban PAD exports is a static, textured mesh. I’m still trying to find out whether it’s possible to export additional data, which would make it easier to add pedestrian and car behavior, pathfinding, physics, dynamic traffic lights, etc.
This doesn’t seem to be supported by the software, but a work-around may be possible. We’re in contact with the people at Gamr7 to see whether this can be done.

I can conclude that Ürban PAD is certainly useful for rapidly creating a city for a prototype. For more visually rich games, it may not yet be ready, as little information is exported besides the mesh.
Either way, I would surely recommend keeping a close eye on it, as the software is under active development.

Color Collider Prototype Video

A new video has been uploaded to PreviewLabs’ YouTube channel.

It shows the prototype we did for Color Collider, a game developed by Crazy Monkey Studios and published by Capcom.

The prototype was used to test the core mechanics, to find out whether it was possible to create enough interesting puzzles with these mechanics, and to pitch the game concept to publishers.

Prototyped Game Released by Capcom

We’re proud to announce that a game we prototyped has been released by Capcom, a publisher known by the masses for properties such as Street Fighter and Resident Evil. The game is called Color Collider and was developed by Crazy Monkey Studios.

The concept was conceived at a brainstorm organized by PreviewLabs, and prototyped by us as well – along with two other concepts. Crazy Monkey Studios used the prototypes for pitching purposes at various trade shows, which resulted in a deal with Capcom.

This is how the concept is summarized in the review at 148Apps:

Do you remember the part of kindergarten where you learned about how to make colors? Hopefully you did, because Color Collider will be putting this to the test in a big way. The key mechanics of the game revolve around guiding colored marbles from the top of the stage into baskets of sorts, located at the bottom of the screen.

We’ll soon post a video of the prototype in action, so you’ll be able to see how the prototype developed in five days compares to the full game!

If you want us to develop a prototype and maximize your chances to land a deal with a major publisher, contact us!

Unity and PlayMaker: The best of both worlds

Following up on my brief review of PlayMaker, I’m back to explain how PreviewLabs may use PlayMaker in the future.

As I concluded before, the best part of PlayMaker is that it can give you a nice visual overview of a finite-state machine (FSM), a frequently used construct in rapid game prototyping.
This overview is something you surely don’t have when implementing an FSM in code.

However, from a programmer’s point of view, it’s quite clumsy to use PlayMaker to add behavior to the FSM’s states.

The solution is to take the best of both worlds: creating states and connecting them using PlayMaker, and writing the states’ behavior in your favorite Unity programming language.

To be able to do this, you have to add events in PlayMaker’s FSM Editor. These events can be used to add the state transitions in a visual way, and can be raised in your code.

The following is an example state class to use with PlayMaker:

using UnityEngine;
using HutongGames.PlayMaker;
 
public class Idle : MonoBehaviour {
 
	private PlayMakerFSM fsm;
 
	// Use this for initialization
	public virtual void Start () {
		// Cache the FSM component that PlayMaker added to this GameObject
		fsm = GetComponent<PlayMakerFSM>();
	}
 
	// Update code goes here
	public virtual void Update() {
		// Raise the event of your choice, e.g. when this state's work is done
		fsm.Fsm.Event("FINISHED");
	}
}

To add this code to a state, you need to add a ScriptControl Action in the FSM Editor, and then add a script.

Feel free to comment on this post and share your experiences!

Research: PlayMaker

As part of our technology research project, I have been investigating PlayMaker, a plug-in for Unity3D which allows you to create finite-state machines (FSMs) in a visual way.

Finite-state machines (FSMs) divide logic into a finite number of states. They are great for most prototypes, as they allow writing complex behaviors in a single class while keeping the code clean. Because PreviewLabs is using FSMs quite extensively, I thought it would be interesting to have a closer look at PlayMaker.
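As a point of comparison, this is roughly what a hand-written FSM looks like in code (a hypothetical enemy with two states):

```csharp
using UnityEngine;

// Minimal FSM sketch: all states live in one enum, and a single switch
// statement keeps each state's behavior readable in one class.
public class Enemy : MonoBehaviour {
	private enum State { Patrol, Chase }
	private State state = State.Patrol;

	public Transform player; // assumed to be assigned in the Inspector

	void Update() {
		float distance = Vector3.Distance(transform.position, player.position);
		switch (state) {
			case State.Patrol:
				// transition to Chase when the player gets close
				if (distance < 10.0f) state = State.Chase;
				break;
			case State.Chase:
				// transition back to Patrol when the player escapes
				if (distance > 15.0f) state = State.Patrol;
				break;
		}
	}
}
```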

The PlayMaker plug-in allows you to add an FSM component to any Unity3D GameObject, and to define the states and their logic in a visual way, using the FSM Editor.

On the left side of the editor, you can see the states of the FSM. At the right, you see the actions defined for the selected state. Most of these actions allow you to access the Unity API functionality, but you can also write your own actions.

Because FSMs are perfectly suited to implement AI, I used PlayMaker to make a simple game where the player has to avoid being shot by a cannon. PlayMaker allowed me to get a quick first result, but as the game evolved and got more complicated, the series of actions got more complicated too.

While you would expect visual programming to give you a better overview of your game logic, implementing the behavior for the states was a very confusing experience. To be honest, it’s a lot easier to write code to implement this – at least from a programmer’s point of view.

Is PlayMaker all bad for us, then? No: the one thing I really find interesting and useful is that it can give you a visual overview of an FSM and its transitions, while allowing you to change the FSM by linking states in a different way.

In my next blog post, I will explain more about how you can get the best of both worlds.