Friday 28 September 2012

Some more Unity editor scripting goodies

In my last post I showed you how to make some boring additions to the editor; this post I'll step it up to slightly less than boring.

As I mentioned in a previous post, our game has triggers, and when you hit a trigger a sound plays.  Normally you might make these triggers into a prefab, and every time you wanted a new one you'd create an instance of the prefab, attach your sound, place the trigger, and adjust the collider.  To simplify this process I created a drag & drop area: drop in a sound file and it sets everything up, so you need only adjust the horizontal position and radius of the trigger.


Let's walk through the example code for a drag and drop GUI in the inspector.  Luckily, it's not very complicated.  First, we create a new Rect, which will serve as our drag and drop area; in this example it's 50 units tall and expands to the width of the inspector.  Then we capture the current Event with Event.current, which tells us if the user is performing a drag operation.  There are a lot of Event types, but we're only concerned with DragPerform and DragUpdated.

At this point we check whether the user's drag is inside our Rect; otherwise we can ignore it.  This is done by calling Contains() on our Rect and passing it the mouse position.  Next I set DragAndDrop.visualMode to DragAndDropVisualMode.Copy; this little touch changes the mouse cursor, giving the user visual feedback that he's within the drag area bounds.  Finally, if the event is a DragPerform, as opposed to a DragUpdated, we call DragAndDrop.AcceptDrag(), which tells Unity we're actually accepting the dragged objects.
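A minimal sketch of that drop area, inside OnInspectorGUI() (the box label is just illustrative):

```csharp
// Reserve a 50-unit-tall rect that expands to the width of the inspector.
Rect dropArea = GUILayoutUtility.GetRect(0f, 50f, GUILayout.ExpandWidth(true));
GUI.Box(dropArea, "Drop an AudioClip here");

Event evt = Event.current;
switch (evt.type)
{
    case EventType.DragUpdated:
    case EventType.DragPerform:
        // Ignore drags that happen outside our drop area.
        if (!dropArea.Contains(evt.mousePosition))
            break;

        // Change the cursor to show the user he's inside the drop area.
        DragAndDrop.visualMode = DragAndDropVisualMode.Copy;

        if (evt.type == EventType.DragPerform)
        {
            // Take ownership of the dragged objects.
            DragAndDrop.AcceptDrag();
        }
        break;
}
```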

So now let's do something with our newly dragged object.

In this snippet I added a block of code to our switch statement.  The idea is that for each object the user dropped, we check if it's an audio clip.  If it is, we create a new Trigger, make it a child of our TriggerContainer, set its position to that of our container, and set its audio clip to the one dragged in.  To do this, first I load the prefab using AssetDatabase.LoadAssetAtPath(), passing it the path to our trigger prefab relative to the root of the project.  Then we simply Instantiate() a new one, set its position to that of our container, and set its parent to our container.  As you may notice, the parent/child relationship is set through the transform object, which isn't necessarily obvious.  Finally, I finish it off by grabbing the AudioSource component and setting its clip to the one that was dragged in.
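A sketch of that extended DragPerform case; the prefab path is an example, and I'm assuming the inspector's target is the TriggerContainer and the prefab carries an AudioSource:

```csharp
if (evt.type == EventType.DragPerform)
{
    DragAndDrop.AcceptDrag();

    // Assuming this editor inspects the TriggerContainer component.
    Transform container = ((Component)target).transform;

    foreach (Object dragged in DragAndDrop.objectReferences)
    {
        // Only audio clips interest us; skip anything else.
        AudioClip clip = dragged as AudioClip;
        if (clip == null)
            continue;

        // Load the trigger prefab; the path is relative to the project root.
        GameObject prefab = (GameObject)AssetDatabase.LoadAssetAtPath(
            "Assets/Prefabs/Trigger.prefab", typeof(GameObject));

        // Instantiate a new trigger at the container's position...
        GameObject trigger = (GameObject)Instantiate(prefab);
        trigger.transform.position = container.position;

        // ...and note that parenting goes through the transform.
        trigger.transform.parent = container;

        // Finally, hook the dragged clip up to the trigger's AudioSource.
        trigger.GetComponent<AudioSource>().clip = clip;
    }
}
```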

When you're done, it should look something like this:

and dragging in new AudioClips will create new Triggers:

So there it is.  I covered making a drag and drop GUI, loading assets from an editor script, and instantiating new parent/child objects.

Friday 21 September 2012

How to (start) writing a custom inspector in Unity

Unity's editor extension documentation is quite sparse, so I'm sharing some of what I did this week to help confused developers.

Here is more or less a template for starting a custom inspector script:
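Something along these lines, with Trigger standing in for your own component:

```csharp
using UnityEngine;
using UnityEditor;

// Editor scripts must live in a folder named "Editor".
[CustomEditor(typeof(Trigger))]
public class TriggerEditor : Editor
{
    void OnEnable()
    {
        // Called when the inspected object is selected,
        // much like Start on a normal MonoBehaviour.
    }

    public override void OnInspectorGUI()
    {
        // Inspector drawing code goes here.
    }
}
```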

What we have here is an inspector for a MonoBehaviour of type Trigger, as denoted by [CustomEditor(typeof(Trigger))], which you should adapt for your own objects.  Editor scripts inherit from the Editor class and have an OnEnable function, which works similarly to Start in a normal MonoBehaviour.

In the beginning


The first thing you'll notice is that I convert target into a SerializedObject.  The reason you want to do this is explained in this video, which you'll probably want to watch anyway to learn more about editor scripting.  After that you'll want access to the variables in your MonoBehaviour.  My triggers are round, so they have a public int called radius, and I get access to it by calling FindProperty("radius") and assigning the result to a SerializedProperty.
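A sketch of that setup; m_Object and m_Radius are just my names for the fields:

```csharp
SerializedObject m_Object;
SerializedProperty m_Radius;

void OnEnable()
{
    // Wrap the inspected component in a SerializedObject so edits
    // cooperate with undo, prefabs, and multi-object editing.
    m_Object = new SerializedObject(target);

    // "radius" must match the name of the public field on Trigger.
    m_Radius = m_Object.FindProperty("radius");
}
```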

Showing stuff


Now you can access all the properties of your object, so you're going to want to show them to the user. So let's take this party to the OnInspectorGUI() function.  First of all, you're going to want to call Update() on your serialized object.  Nobody thought to mention why anywhere, but it syncs the SerializedObject with the object's current values, so skipping it means you could be editing stale data.  Next I added a label for posterity, and a property field.  PropertyField displays the default inspector control for the given property type, which in this case is an int.  Finish by calling ApplyModifiedProperties() on your object.  This applies all the changes you made, and also gives you the ability to undo and other goodies, which is clear from the documentation, right? Anyway, at this point you've duplicated exactly what the default inspector view gives you.  Good work!  By the way, in case you actually wanted to draw the default inspector, you can call DrawDefaultInspector() as well.
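Putting that together, OnInspectorGUI() might look like this (the label text is arbitrary):

```csharp
public override void OnInspectorGUI()
{
    // Pull the object's current values into the serialized copy.
    m_Object.Update();

    EditorGUILayout.LabelField("Trigger Settings");

    // Draws the default control for the property's type (an int field here).
    EditorGUILayout.PropertyField(m_Radius);

    // Write any edits back to the object, with undo support included.
    m_Object.ApplyModifiedProperties();
}
```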

Showing cooler stuff


Let's make something more interesting, like a dropdown.  I gave my Triggers a priority variable, which is an int, but it's hard to remember whether a low number means low priority or high priority.  So instead I added a dropdown with the descriptive choices low, medium, and high.

The code is the same as before, but with an added property, a list of words to fill the dropdown, and the code to draw it.  The dropdown, or "Popup", sits inside a horizontal layout block, delimited by EditorGUILayout.BeginHorizontal() and EditorGUILayout.EndHorizontal().  All this does is lay the GUI elements inside the block out next to each other.  The Popup() function is pretty straightforward: give it the current index and an array of strings to fill itself with, and it returns the selected index.
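A sketch of that dropdown, assuming m_Priority was found with FindProperty("priority") in OnEnable:

```csharp
// Inside OnInspectorGUI, between Update() and ApplyModifiedProperties().
string[] priorityNames = { "Low", "Medium", "High" };

EditorGUILayout.BeginHorizontal();
EditorGUILayout.LabelField("Priority");

// Popup takes the current index plus the option names,
// and returns whichever index the user selects.
m_Priority.intValue = EditorGUILayout.Popup(m_Priority.intValue, priorityNames);

EditorGUILayout.EndHorizontal();
```

When you're done, it should look something like this: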


So now you've gotten a taste of building a custom inspector.  Later I'll post about how to make a drag and drop area and other goodies, but if you want to get ahead you should watch this video.

My experience with Unity Editor Scripting

Once upon a time I decided I was going to extend the Unity editor in order to make the most awesomest adventure game maker of all time.  After a few weeks of struggling through Unity's extremely sparse editor extension documentation, I was able to make a barely usable turd of a tool.  At that point I decided it would be easier to make an in-game editor that could save and load level data.  Admittedly, this was pretty cool because anybody could tweak the levels while playing, but part of the exercise was to explore the editor scripting functionality, and on that front it was a great big failure.

This week, however, I discovered the Unite sessions, including intro to editor scripting and advanced editor scripting, which are a blessing for a budding editor scripter.  Working with the Unity editor used to feel like this

But after watching those videos it feels like this

So, big thanks to Shawn White, Tim Cooper and Yilmaz Kiymaz.

Friday 14 September 2012

Intuitive flick gestures in Unity


One thing I implemented was flick gestures for iOS in Unity.  This turned out to be a little trickier than anticipated, because a flick gesture must be carefully designed to feel good.  In our project, flicking is the primary method of interaction, so it's crucial that it's comfortable.


The challenge


Our project has the player throwing objects from a first-person perspective.  We use flick gestures to control the throws, so we have to translate a 2D gesture into a 3D velocity. So to get started, we'll consider what we have to work with.

The 2D gesture has a start position and end position, both in screen coordinates, as well as the time it took to complete the gesture.  In other words, we have the distance, time, and direction of the swipe in screen space.


Getting velocity


The first thing I did was turn the screen space velocity into world space velocity.  To do this, I took the swipe distance divided by the swipe time and multiplied it by a fudge value.  Although this strikes me as a fairly naive approach, it turned out to be surprisingly effective.  To finish, I spent some time tweaking the fudge value until the velocity of the thrown stones aligned with my flick gesture in a way that felt comfortable and intuitive.
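A sketch of that calculation; speedFudge and its value are the part you tweak:

```csharp
// Hand-tuned constant; ours came from trial and error.
const float speedFudge = 0.1f;

// start and end are in screen coordinates; swipeTime is in seconds.
float FlickSpeed(Vector2 start, Vector2 end, float swipeTime)
{
    float distance = Vector2.Distance(start, end);
    return speedFudge * distance / swipeTime;
}
```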


Getting direction


First try


Translating the direction of the flick into a 3D direction took a little more effort.  The first thing I tried was getting the angle of the swipe, creating a rotation matrix, and applying it to the forward vector of the camera. At the time we were only concerned with the rotation of the stone around y, in other words the yaw.  This approach felt terrible: as a player it was difficult to know where you'd throw your stone based on your flick gesture, so it was completely unintuitive and deemed no good.
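For reference, that first attempt looked something like this, with a quaternion standing in for the rotation matrix:

```csharp
// Yaw the camera's forward vector by the angle of the swipe.
Vector3 FirstTryDirection(Vector2 start, Vector2 end)
{
    Vector2 swipe = end - start;

    // Angle of the swipe relative to straight up, in degrees.
    float angle = Mathf.Atan2(swipe.x, swipe.y) * Mathf.Rad2Deg;

    // Rotate around y only, i.e. the yaw.
    return Quaternion.Euler(0f, angle, 0f) * Camera.main.transform.forward;
}
```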


Second try


The next attempt was to throw the stone in the direction that the player's finger stopped.  Luckily, Unity's camera has a built-in ScreenPointToRay function, so all I had to do was take the final position of the touch and convert it into a ray.  When I throw the stone, I take the direction of the ray multiplied by the velocity we calculated earlier.
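A sketch of that approach:

```csharp
// Convert the final touch position into a ray through the world,
// then scale its direction by the flick speed computed earlier.
Vector3 ThrowVelocity(Vector2 touchEnd, float speed)
{
    Ray ray = Camera.main.ScreenPointToRay(touchEnd);
    return ray.direction * speed;
}
```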

Initially I was concerned that this approach would feel awkward.  The user could potentially swipe from the bottom left towards the top right, but if she didn't cross the centre of the screen the stone would still fly leftwards.  It turns out I was dead wrong for two reasons.  First, moves like these are incredibly unlikely.  For example, a user who wants to throw a stone right isn't likely to start from the bottom left of the screen and move only halfway across.  He's much more likely to start from the center of the screen and move right.  Second, control is so good that the player is likely to understand it immediately.  Thus he'll never make some sort of awkward flick and be surprised by the result.

So, as you may have guessed, this is the scheme we stuck with.  In fact, it felt so good that I decided to revisit the velocity equation.  I was curious whether using world space distance would feel better than screen space distance for determining the velocity of the stone.  Although it turned out badly due to the perspective of our camera, I'll explain how it's done for those who want to try it.


Getting world space distance of the flick


Using our trusty ScreenPointToRay function, we can calculate the exact distance that the player's finger travelled in the game world.  To do this, turn the starting position and ending position of the flick into rays.  Then perform a raycast onto your terrain, or other relevant collider, using each of these rays.  If a raycast hits, you can extract the world space coordinates of the collision from the RaycastHit object.  Finally, subtract one set of coordinates from the other and you've got a vector that tells you the distance travelled in the game world.
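A sketch of that calculation:

```csharp
// Raycast both ends of the flick onto the scene's colliders and return
// the world-space vector between the two hit points.
bool WorldFlickVector(Vector2 screenStart, Vector2 screenEnd, out Vector3 worldFlick)
{
    worldFlick = Vector3.zero;

    Ray startRay = Camera.main.ScreenPointToRay(screenStart);
    Ray endRay = Camera.main.ScreenPointToRay(screenEnd);

    RaycastHit startHit, endHit;
    if (Physics.Raycast(startRay, out startHit) && Physics.Raycast(endRay, out endHit))
    {
        // hit.point holds the world-space coordinates of the collision.
        worldFlick = endHit.point - startHit.point;
        return true;
    }
    return false;
}
```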


Finishing touches


As a finishing touch you'll probably want to add some flick gesture recognition, for example checking the time it took the player to perform the gesture.  This lets you differentiate between a flick and a slow swipe. If the direction of your flick matters, you may want to check that as well; in our game we only want flicks that travel from bottom to top.  To determine the direction of the flick, use atan2 to get the angle of the flick vector and check that it falls within your range of acceptable values.
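A sketch of such a check; the time and angle limits are tuning values, not gospel:

```csharp
bool IsValidFlick(Vector2 start, Vector2 end, float swipeTime)
{
    const float maxFlickTime = 0.5f;   // any slower is a drag, not a flick
    const float maxAngleFromUp = 45f;  // we only accept bottom-to-top flicks

    if (swipeTime > maxFlickTime)
        return false;

    Vector2 swipe = end - start;

    // Angle of the flick relative to straight up, in degrees.
    float angle = Mathf.Atan2(swipe.x, swipe.y) * Mathf.Rad2Deg;

    return Mathf.Abs(angle) <= maxAngleFromUp;
}
```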

The code I used while prototyping can be found here: https://gist.github.com/3723838.  Please leave any questions or suggestions in the comments.