Monday, October 27, 2014

Debugging Features

This past week I had two main jobs: design an overall architecture for the game and implement basic in-game debugging features. Neither should have been that hard, but the way Unity implements things made the process much harder than it should be.

I implemented two features for debugging: a debug menu and debug attributes. The debug menu lets the programmer open a menu and change the state of the game; it is essentially an in-game console with a graphical interface. The simplest thing the menu should do is toggle debug lines, which can visualize almost anything from player trajectories to collision boxes. The idea was that when debug lines were activated, the collision box for each entity would be drawn.

But Unity being Unity, it made what should be a simple task next to impossible. Unity has no built-in way to draw a line in the game view. I could use Debug.DrawLine() or Gizmos.DrawLine(), and both do what I want, but the lines only show up in the scene tab, not the game tab. That's a problem because I need to see these lines while debugging in the game tab; I won't have access to the scene tab unless I pause the game. That's why I need a method that works from the OnGUI event or the render loop. Even drawing a thin box proved to be problematic. I could use Unity's low-level GL class to draw the lines myself, but that is overkill for what I need and defeats the purpose of using an engine in the first place. Still, I may need to create a GL Utils static class just to cover basic functionality that Unity is too lazy to implement (if there is a reason it's missing, I haven't found it).
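A GL Utils class along those lines might look like the sketch below. The class name, the rectangle-drawing helper, and the use of the built-in Hidden/Internal-Colored shader are my assumptions; GL.Begin, GL.Vertex3, and GL.End are Unity's actual immediate-mode API, and they do render into the game view when called from a camera's render callback.

```csharp
using UnityEngine;

// Hypothetical GLUtils sketch: draws lines visible in the *game* view,
// which Debug.DrawLine and Gizmos.DrawLine cannot do.
public class GLUtils : MonoBehaviour
{
    static Material lineMaterial;

    static void CreateLineMaterial()
    {
        if (lineMaterial == null)
        {
            // Built-in shader that simply outputs vertex colors.
            Shader shader = Shader.Find("Hidden/Internal-Colored");
            lineMaterial = new Material(shader);
            lineMaterial.hideFlags = HideFlags.HideAndDontSave;
        }
    }

    // Example: draw an axis-aligned collision box as four line segments.
    public static void DrawRect(Vector2 min, Vector2 max, Color color)
    {
        CreateLineMaterial();
        lineMaterial.SetPass(0);

        GL.Begin(GL.LINES);
        GL.Color(color);
        GL.Vertex3(min.x, min.y, 0); GL.Vertex3(max.x, min.y, 0); // bottom
        GL.Vertex3(max.x, min.y, 0); GL.Vertex3(max.x, max.y, 0); // right
        GL.Vertex3(max.x, max.y, 0); GL.Vertex3(min.x, max.y, 0); // top
        GL.Vertex3(min.x, max.y, 0); GL.Vertex3(min.x, min.y, 0); // left
        GL.End();
    }
}
```

Calling DrawRect for each entity from OnPostRender (or OnRenderObject) of a script attached to the camera would draw the collision boxes in the game tab.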

Example of Debug Menu

The second feature I implemented was debug attributes. This lets the programmer cycle through the entities and display the selected entity's attributes on screen for easy debugging, without spamming Debug.Log for every entity every frame and cluttering the console. This was much easier to implement; however, what I had hoped to add was that if debug lines were activated, the collision box belonging to the selected entity would highlight in a different color, so the programmer could tell which entity was which.
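The cycling logic itself is short. Here is a minimal sketch; the class name, the entity list, the bracket-key bindings, and the choice of attributes to display are all my assumptions, not the actual implementation:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical debug-attributes sketch: cycle through entities with the
// bracket keys and display the selected entity's state via OnGUI.
public class DebugAttributes : MonoBehaviour
{
    public List<GameObject> entities = new List<GameObject>();
    int selected = 0;

    void Update()
    {
        if (entities.Count == 0) return;
        if (Input.GetKeyDown(KeyCode.RightBracket))
            selected = (selected + 1) % entities.Count;
        if (Input.GetKeyDown(KeyCode.LeftBracket))
            selected = (selected + entities.Count - 1) % entities.Count;
    }

    void OnGUI()
    {
        if (entities.Count == 0) return;
        GameObject entity = entities[selected];
        // Show whatever attributes matter; name and position are the obvious ones.
        GUI.Label(new Rect(10, 10, 300, 60),
            string.Format("{0}\nposition: {1}",
                entity.name, entity.transform.position));
    }
}
```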


Wednesday, October 15, 2014

Benchmarking

One of the most common things done in game dev is looping over objects to do repetitive tasks every frame. It is usually here that the bottlenecks of a game become evident. This is all the more true for Will of the Wisp, a game whose main mechanic is based on controlling swarms. The wrong algorithm for a minion, or allocating more memory than necessary (increasing the garbage collector's workload), could lead to huge performance spikes that devastate playability. To help swarm performance, two generic aspects of the swarm code were tested: how C# allocates memory when vectors are manipulated with operators versus inline methods, and how a for loop versus a foreach loop affects overall performance and whether the iterator adds too much overhead.

C# Memory Allocation

There are two ways to manipulate a vector. The first is to use operators, which are static overloads that take two vectors and return a new vector. With all this creating of new vectors, it seems they may allocate more memory on the heap than necessary, which would create a performance spike when the garbage collector deallocates the extraneous memory. An alternative is to use methods that update the vectors in place rather than going through the operators. This would stop the extraneous allocation, which should reduce both the overhead of initializing new memory and the cost of the garbage collector.

Test that uses Vector operators:
var object1Pos = new Vector2(10, 5);
var object2Pos = new Vector2(-5, 3);
Vector2 object1Trajectory = (object2Pos - object1Pos).normalized;
object1Trajectory *= 8;
object1Pos += object1Trajectory;

Test that uses Vector methods:
var object1Pos = new Vector2(10, 5);
var object2Pos = new Vector2(-5, 3);
var object1Trajectory = object2Pos;                 // struct copy, no 'new'
object1Trajectory.Set(object1Trajectory.x - object1Pos.x,
                      object1Trajectory.y - object1Pos.y);
object1Trajectory.Normalize();
object1Trajectory.Scale(new Vector2(8, 8));
object1Pos.Set(object1Pos.x + object1Trajectory.x,
               object1Pos.y + object1Trajectory.y);

The code above was used in benchmarking tests that measured the average computational time while eliminating the test harness overhead and the initial cost of loading libraries. The benchmarking tests also measured how much memory was allocated on the heap and how much would be collected by the garbage collector.
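A harness along those lines can be built from System.Diagnostics.Stopwatch and GC.GetTotalMemory. This is a sketch of the idea, not the exact harness used; the iteration count and the single warm-up call are my assumptions:

```csharp
using System;
using System.Diagnostics;

// Hypothetical benchmark harness sketch: warm up once to absorb JIT and
// library-loading costs, then time many iterations and report the average.
static class Benchmark
{
    public static void Run(string name, Action body, int iterations = 1000000)
    {
        body();                                   // warm-up: JIT + library load
        GC.Collect();                             // start from a clean heap
        long bytesBefore = GC.GetTotalMemory(true);

        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            body();
        watch.Stop();

        long bytesAfter = GC.GetTotalMemory(false);
        Console.WriteLine("{0}: {1:F6} ms avg, {2} KB allocated",
            name,
            watch.Elapsed.TotalMilliseconds / iterations,
            (bytesAfter - bytesBefore) / 1024);
    }
}
```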


The memory is in kilobytes and the time is in milliseconds. As the tests show, using methods is about 28% faster than operators, though the improvement isn't that significant. This was expected, since the methods don't have to deal with the overhead of allocation. What went against my initial hypothesis was that the amount of memory allocated on the heap was the same for both the operator and method versions. Looking deeper into how C# compiles, the use of 'new' doesn't always allocate the object on the heap: Vector2 is a struct, a value type, so local instances live on the stack, where deallocation isn't a concern. The takeaway is that even though using methods to update a vector provides some increase in performance, the garbage collector does the same amount of work either way, and the collector is more of a bottleneck than the overhead of stack allocation. The operator code is also more readable than the method code, so for the meantime it is better to use vector operators over methods. If profiling later shows that vector operations really are the bottleneck, the code can be changed to use methods.
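The stack-versus-heap point can be demonstrated directly outside Unity: 'new' on a struct produces no garbage, while 'new' on a class allocates on the heap every time. This is a standalone sketch with made-up type names, measuring with GC.GetTotalMemory:

```csharp
using System;

struct Vec2                  // value type, like Unity's Vector2
{
    public float x, y;
    public Vec2(float x, float y) { this.x = x; this.y = y; }
}

class Boxed                  // reference type, for contrast
{
    public float x, y;
}

static class AllocationDemo
{
    // Returns heap bytes allocated while constructing 'count' structs.
    public static long StructAllocations(int count)
    {
        long before = GC.GetTotalMemory(true);
        for (int i = 0; i < count; i++)
        {
            var v = new Vec2(i, i);   // 'new' on a struct: stack only, no garbage
        }
        return GC.GetTotalMemory(false) - before;
    }

    // Returns heap bytes allocated while constructing 'count' class instances.
    public static long ClassAllocations(int count)
    {
        long before = GC.GetTotalMemory(true);
        Boxed last = null;
        for (int i = 0; i < count; i++)
        {
            last = new Boxed();       // 'new' on a class: one heap allocation each
        }
        GC.KeepAlive(last);
        return GC.GetTotalMemory(false) - before;
    }
}
```

On a typical run the struct loop reports essentially zero bytes while the class loop reports roughly count times the object size, modulo any collections the runtime squeezes in mid-loop.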

For vs. Foreach

When iterating over lists or arrays, there are two options: a for loop or a foreach loop. The foreach loop is faster to write and more readable; however, at first glance there is the overhead cost of creating and advancing an iterator, which requires an IEnumerable. Foreach loops also have restrictions on deleting or replacing the object being iterated. That's why it seems that a for loop written to access memory only once per iteration would be the better choice in regards to performance.
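The iterator overhead comes from what the compiler expands a foreach into: a GetEnumerator() call up front, plus one MoveNext() call and one Current access per iteration. Roughly (a hand-written sketch of the expansion, not actual compiler output):

```csharp
using System.Collections.Generic;

static class ForeachExpansion
{
    // foreach (int value in list) total += value;
    // is roughly equivalent to the explicit enumerator loop below.
    public static int Sum(List<int> list)
    {
        int total = 0;
        List<int>.Enumerator enumerator = list.GetEnumerator();
        while (enumerator.MoveNext())       // one call per iteration
        {
            int value = enumerator.Current; // one property access per iteration
            total += value;
        }
        return total;
    }
}
```

Notably, List&lt;T&gt;.Enumerator is itself a struct, so the expansion allocates nothing on the heap; the per-iteration cost is just the extra calls.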

There are five tests, each iterating over a list of vectors and an array of vectors. The simple loop accesses the object once and makes a single method call. The complex loop makes several accesses to the object and does more advanced calculations. The complex for loop comes in two variants: the first grabs the object from the collection on every access, while the second saves the object in a temp variable while it is manipulated.

Simple For Loop:
for (int index = 0; index < list.Count; index++)
{
    list[index].Normalize();
}


Simple For Each Loop:
foreach (Vector2 vector in list)
{
    vector.Normalize();
}


Complex For Loop (1):
Vector2 position = new Vector2(10, 10);
for (int index = 0; index < list.Count; index++)
{
    list[index].Normalize();
    list[index] *= 10;
    position += list[index];
}


Complex For Loop (2):
Vector2 position = new Vector2(10, 10);
for (int index = 0; index < list.Count; index++)
{
    Vector2 vector = list[index];
    vector.Normalize();
    vector *= 10;
    position += vector;
}


Complex For Each Loop:
Vector2 position = new Vector2(10, 10);
foreach (Vector2 vector in list)
{
    vector.Normalize();
    vector.Scale(new Vector2(10, 10));
    position += vector;
}


Each loop was tested twice, once over a List and once over an array. The collection size doubled each run until the final test iterated over one million elements.

The time it takes to iterate each test for the list.

The time it takes to iterate each test for the array.

As seen above, there is no discernible difference between a for loop and a foreach loop in the simple case; however, looping over an array is slightly faster than looping over a list. The for loop also has a slight advantage over the foreach loop, because the foreach loop's overhead outweighs the cost of the basic operation. These overhead costs are not significant enough to be a bottleneck, so for a simple loop the foreach version wins on clarity and readability.

The surprise is in the complex loop. Unsurprisingly, the first complex for loop did much worse than the second, because the first has to make more reads from memory, which does impact performance. The surprise is that the foreach loop did better than the for loop over the long run. The foreach overhead didn't cause the loss in performance predicted by my original hypothesis and by the observations of other programmers online discussing performance in Unity. The foreach loop also has a slight advantage when iterating over an array rather than a List. I would have loved to run the same measurements on a GameObject to get a more accurate view of how the loops would behave in game.

Based on these measurements, the foreach loop is the better choice for both performance and readability. Where it loses on performance, the loss isn't significant enough to be a bottleneck.








Tuesday, October 7, 2014

Prototype Pitch and Shadow of Mordor

My team pitched the prototype to an industry panel. Since the original inception of the pitch, the game has changed to the point that it is something else entirely. The mechanic still focuses on controlling a swarm, but the twin-stick shooting was taken out; instead, the player uses the swarm as their primary weapon. It's been hard to keep up with each change the designers want to make to the swarm, and each change takes away from the properties of the swarm to the point that it feels more like micromanaging. Swarms should never be micromanaged; instead, the player should give general orders to the swarm under their control, and the minions should execute the algorithm for whatever swarm state they are in. Even other people whom I trust to give a frank evaluation wanted the swarm to be more macromanaged than what was presented. They also wanted the twin-stick shooting back, as the right stick is now worthless.

After the pitch from the industry panel, we felt good about the presentation and felt that the game has a good chance of making it through and being selected to be made to production when Spring hits. I'm not stressing much and just planned to relax during the weekend. When I got home after the presentation, I was ecstatic to find that Amazon finally delivered Shadows of Mordor, a game that I was really excited to play. The last game I played was Assassin's Creed 4: Black Flag, which was 2 months ago. Playing through Shadows of Mordor, I definitely could tell what all the comparisons to Assassin's Creed was. For all intents and purposes, Shadows of Mordor is exactly like the Assassin's Creed series: from stealth, to finding viewpoints, the overworld map, even to assassinations. However, despite all its similarities, Shadows of Mordor is a really fun game and I feel that it does stealth better than the Assassin's Creed series. Exploring Udun and Nurnen is really fun and reading the encyclopedia (what Assassin's Creed has for places, events, and people) about the world of Middle Earth was really fascinating. I love Tolkien novels, including the Similarion, and seeing this expanded universe in the game form was something, I've been looking forward to since getting tired of the same old rehash game of the Lord of the Rings trilogy. I haven't finished the game yet, but everything I played makes me excited to finish the game and hope with anticipation what Monolith comes out with next.