Monday, December 15, 2014

Alpha Trailer

Now that Eye of the Swarm has entered Alpha, we have a trailer to show off and build excitement for the final product. Eye of the Swarm will be released in Spring 2015, hopefully on Xbox One.



Friday, December 12, 2014

EAE Open House


Today was the EAE Open House. Among the other capstone projects and the master's studio thesis games, we displayed the alpha build of Eye of the Swarm to students, professors, and industry professionals. We were all excited to see how our game would perform in front of fresh eyes that had not played it in its earlier stages. We felt the game would do well, but were quite surprised at how well it was received. Not only did the game get universal praise, but the industry professionals went to the main professors, Robert Kessler and Roger Altizer, to praise our game to them. We received plenty of feedback on ways to improve the game, and as we go through the alpha stage we will implement the feedback that we feel will improve the game for Beta. Overall, I am very happy with the stage our game is in, and the positive feedback has relieved the many months of stress and heartache it took to get our game to a fun state. 



Alpha



It is the end of the semester and Eye of the Swarm has now officially entered Alpha. Alpha means the game is feature complete: no new mechanics should be implemented. The swarm moves in a fluid, pleasing way and we have a boss fight that feels great. During Alpha, we will implement at least two new bosses and polish them for the final product. Other than that, programming will consist of fixing bugs and fine-tuning mechanics to get the right feel for the game. This past sprint we were rushing to get the game to Alpha and ready for EAE Open House (presenting the alpha build to our peers).


My main priority was to fix bugs in the swarm and several UI mechanics. I added a credits screen, a splash screen that shows the team's and EAE's logos, and a win screen that teases the next boss. 



When we presented the Alpha build to the class, the game was well received by students and professors alike. Despite all the criticism we've gotten in the past months, and changing the direction of the game almost every sprint, we have a game that is fun to play and that we are proud to present. The team and I are really excited to show this game off at EAE Open House and see what feedback we get. 



Sunday, November 30, 2014

Swarm Force

As part of the new direction of only fighting bosses in arenas, attacking a boss depends on how hard the overall swarm hits its weakpoint. In order to do damage, the swarm needs to build up enough force to pack a punch past a threshold. That is hard to judge just by watching the swarm's rate of change. Part of my task was to create a way to show how much damage the swarm can do when it strikes, independent of any enemy weakpoint. My first approach was to change the color of the swarm based on its speed; however, when the swarm takes damage, it also changes color to signify it has been hurt. So I changed it to a trail behind the swarm that indicates how much force it would impart on impact. I thought a particle effect emitting behind them would create a nice effect. Part of the problem was that the force direction changed so much that the emitted particles flew off in all different directions. It was also hard to distinguish the particle effect from the swarm itself. Looking for another way to draw a stream behind the swarm, I decided to try the trail renderer that I happened to see while selecting the particle system.

The trail renderer draws a line based on its change in position. While it was possible to change the trail's length based on velocity (or force), I decided it was better to change the color of the line along a gradient using the velocity (or force). Originally, I had the trail renderer follow the geometric center of the swarm. Based on the advice of my teammates, I added a trail renderer to each minion in the swarm.
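The mapping from speed to trail color is just a clamped interpolation. Here is a rough sketch of the idea (in Python for brevity, not our actual C#); the speed range and the two gradient colors are invented placeholder values, not our real tuning:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def trail_color(speed, min_speed=0.0, max_speed=20.0):
    """Map a minion's speed onto a cool-to-hot gradient.

    Colors are (r, g, b) in [0, 1]; blue means drifting, red means
    enough force to damage a weakpoint. The speed range and colors
    are made-up values for illustration.
    """
    # clamp and normalize the speed into [0, 1]
    t = max(0.0, min(1.0, (speed - min_speed) / (max_speed - min_speed)))
    cool, hot = (0.2, 0.4, 1.0), (1.0, 0.2, 0.1)
    return tuple(lerp(c, h, t) for c, h in zip(cool, hot))
```

In Unity this would feed the evaluated color into the trail renderer's material each frame.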

The Trail Renderer normally

The Trail Renderer when the swarm has enough force

Based on the aesthetic value of the swarm emitting a trail as it orbits the orb, we all decided this was the best way to show off the force of the swarm and how it will affect the weakpoint of a boss. This does create some performance issues, which will be addressed; however, it is not the bottleneck: the collision between the swarm and an enemy weakpoint is where the biggest performance spike is. 

Friday, November 21, 2014

Gravity Well

video

During class on Tuesday, when we presented the sprint build of the game, it did not get the best reception. The main complaint was that the mechanics of the game felt too boring. Over the past two days, we thought about ways of either saving the mechanics we have now or creating a new mechanic that could be completed in the two weeks before we present at EAE Open House. There were two mechanics we were split on at the meeting: using a gravity well, or splitting the swarm between the two joysticks. The two-joystick split uses the same mechanics as before, except you get rid of the central character and manipulate the swarm the same way. In its current iteration, each minion of the swarm moves toward the joystick point it is closest to and uses boid rules to swarm around that central point. The player can then use the two points to direct the swarm, attacking the enemy at two points and slamming each group into the enemy at its weak point. My idea was to use a gravity well that the swarm would be attracted to. The player could then use the right joystick to add direction to the swarm and create a directional attack. If the swarm gets too wild, the player can create a dampening field that increases the well's gravitational pull and slows the swarm caught in it to an insignificant speed. I was tasked with prototyping this so that the team could play with both and get a better idea of which mechanic would be more fun.
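The core of the gravity-well prototype is one integration step per minion per frame: accelerate toward the well, add the joystick offset, and optionally dampen. A minimal sketch (in Python rather than our Unity C#); the pull strength and the dampening factor are placeholder tuning constants:

```python
import math

def gravity_well_step(pos, vel, well, dt,
                      pull=50.0, joystick=(0.0, 0.0), dampening=False):
    """One Euler-integration step for a minion attracted to a well.

    pos, vel, well, and joystick are (x, y) tuples. pull and the
    dampening factor are made-up tuning constants, not real physics.
    """
    dx, dy = well[0] - pos[0], well[1] - pos[1]
    dist = math.hypot(dx, dy) or 1e-6  # avoid divide-by-zero at the well
    # acceleration toward the well, plus the player's joystick offset
    ax = pull * dx / dist + joystick[0]
    ay = pull * dy / dist + joystick[1]
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    if dampening:
        vx, vy = vx * 0.1, vy * 0.1  # slow the caught swarm to a crawl
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

In the real prototype the same step runs per minion alongside the boid rules, with the constants exposed for fine-tuning.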

The above video shows that implementation (though it still needs fine-tuning). When it starts, it just shows the swarm orbiting the orb. At about 15 seconds in, I move the right joystick to add the offset to the swarm and give it direction. At about 36 seconds, I use the dampening field and show its effects.

In video games, there are two main ways to tackle a physics problem. The first is to fake the physics and use animations and scripted events to imitate it. This gives the programmer and player more control over the object; the downside is that it looks stilted and stiff and doesn't feel right. The second way is a more dynamic approach: essentially build a simulator that calculates the next positions using known physics equations. This creates a more natural feeling in the object's movement, but it takes control out of the hands of the player and is damn difficult to get right. The difficult part is all the constants and variables. It would be a miracle if I could just use the universe's gravitational constant to find the force of gravity, but it is such a tiny number (on the order of 10^-11) that at game scale objects would barely move. Use a number too large and the objects move too fast and spin out of control. There are many things that can go wrong with the dynamic approach, but fine-tuned correctly it creates beautiful motion and animation with an amazing feel. That's why I prefer the dynamic approach over the static, scripted approach that many of my friends have tried to get me to use. But I must accept the fact that if we can't get the feeling we want, we don't have the time to play with it and get it right in under three weeks. We may have to use the scripted approach or go with another mechanic.

Wednesday, November 19, 2014

Globals Maybe Evil

95% of the time, global variables are more evil than goto. Good software design demands that a variable not be more accessible than it needs to be. If other objects need the variable, there are ways to pass the information without making it available to everyone and without checking for misuse. The only time a global should be public is if it's a static const (or just a const), since there is no possible misuse when other objects can only read the data. In game development, static consts are how properties of objects, such as velocity in certain states or the values a conditional needs to pass, are kept and defined. The main complaint with this approach is that the value cannot be reassigned, so the game must be recompiled to try a different value. This simply isn't true: with some knowledge of debugging, a static const can be changed in memory to reflect a different value without recompiling. Instead of these tools, Unity offers public globals that can be edited from the Inspector. These values can even be changed while the game is running.

To many, this option seems quite attractive. When I started in Unity three years ago, it seemed attractive to me too. Now that I am a much better software engineer than I was then, I know better. The only value of this approach is passing in game object prefabs and textures, rather than loading them at runtime via Resources.Load(), which performs worse than a public variable. Even then, there is no reason to make the variable public for all to read and change without testing the data. Data should always be tested, no matter who's giving it. By using a single point of entry for writing data to a variable, such as a setter, it can be safely assumed that the variable doesn't need additional testing from then on. However, if the data can be changed from anywhere at any time, then it will constantly need to be tested and vetted for correctness. This is a horrible practice and requires more code than is necessary.
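The single-point-of-entry idea looks roughly like this (a Python sketch; in our C# it would be a property with a validating setter, and the class name and speed range here are invented for illustration):

```python
class Minion:
    """Sketch of validating data at a single entry point."""

    MAX_SPEED = 25.0  # effectively a static const; read-only by convention

    def __init__(self, speed=0.0):
        self._speed = 0.0
        self.speed = speed  # route construction through the same setter

    @property
    def speed(self):
        return self._speed

    @speed.setter
    def speed(self, value):
        # validate once, here, instead of re-checking at every use site
        if not 0.0 <= value <= self.MAX_SPEED:
            raise ValueError("speed %r outside [0, %r]" % (value, self.MAX_SPEED))
        self._speed = value
```

Every write funnels through one checked path, so code that reads the value can trust it without re-testing.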

In Unity, every game object is split into two: the global prefab and a specific instance in the game. Public values can be changed in both, with the prefab affecting all instances, but changing the value of an instance will not affect the prefab or any other instance. This seems to make sense on the surface, until you realize what these values are supposed to represent. These values are supposed to replace static consts that can be changed to find the right value. The speed of a game object should not be exposed in the Inspector to be changed arbitrarily; instead, it should have const values that affect the true speed, such as acceleration, min/max speeds, or speeds per state. Why would these values need to be changed arbitrarily per instance? Why should one instance have a larger max speed than another? If such a difference needs to be established, then the variable should never have been a static const in the first place. That value should be established in the object itself, or established in the object that creates the entity and passed to the newly created entity.

If a public variable can be set in two different places, that creates many subtle bugs that can be hard to track down. When testing a game object, values are generally changed in the instance's Inspector, not the prefab's. When the right values are found, more often than not those changes aren't saved back into the prefab. Sometimes they are, and sometimes the values are edited directly in the prefab; however, there have been many times that I saved a value in the prefab and it didn't percolate into all the instances of the game object.

Beyond everything I've stated, there remains the biggest reason not to use a public variable for what should be a static const: anyone can access it. I already showed that when anyone can access it, you need to test the data constantly; on top of that, this approach gives the variable more scope than it needs. The only thing that should access the variable is the Inspector. No other game object, nor the Unity engine, needs access to this variable, and the fact that they have it is a very scary thing. A public variable gives far more power and control to everything else than what is gained from the convenience of updating the variable at runtime. Such globals have created many subtle bugs for me that were hard to track down and fix.

At some point, the team at Unity decided this was one of their worst ideas, so they implemented measures that let you use the Inspector while keeping scope. By making a variable private but serializable, the Inspector can still access the variable, and the variable has the correct scope so that no other object can touch it. Serialization also opens the door to many other options for the Inspector, such as creating structs or classes that encapsulate another object or make a field more readable. For example:

[System.Serializable]
struct Range
{
    // private fields only serialize (and appear in the Inspector)
    // when marked with [SerializeField]
    [SerializeField] private float min;
    [SerializeField] private float max;
}

This creates greater readability and makes the Inspector much nicer to read, without needing two separate variables for min and max. Despite these improvements, there are still problems. These values are still meant to represent static consts, but are still stored in private variables. Only the Inspector and the object they belong to can update them, but that means the object can still update them. That is not the behavior we want, so we have to make a contract with the programmer that they will not change these variables in the object. Even though I wish these values could be made static const and still accessed from the Inspector, I understand the difficulty in doing so and why it may be nonexistent on Unity's list of priorities when there is so much else that hasn't been incorporated (such as a draw-line method in the GUI). You can't win every battle, and I will take compromise where I can. A private serialized value that can be edited is a hell of a lot better than a public global.

Are globals evil: yes, yes they are. But, there are many times in which they are a necessary evil.

Friday, November 7, 2014

Level Design

For this current sprint, my task was to create a pipeline and tools for designing levels so that the designers could then build their own levels for the game. While the task itself wasn't that hard, there are many design decisions I had to make with the team to ensure the process of building a level is as efficient as it can be. The very first thing I wanted solved was borders. This is an underwater game, so borders aren't that simple. The bottom of the sea floor is obvious, but there isn't an upper limit. Where does the player stop: at a certain depth below sea level, or can they surface and that's the border? While there isn't a good solution for the game yet, we decided on a soft border: if the player gets too close to the surface, then some force (water pressure, or too much light bothering them) will push them down. That soft border carries its own design decisions. How will it force the player down? Does it just add a vector onto the player's trajectory so great that they are forced down, or does it take control out of the player's hands and play an animation as the player swims back down to the correct depth? I chose the latter, as I feel it has the most aesthetic value and gives a more logical reason why the player can't swim too close to the surface. I also chose it because it gave me the opportunity to change the architecture of the Entity types (Player, Swarm, and Enemies) to be more closely related in terms of assignment and state control. I may yet need to switch to the more dynamic approach of forcing the player down without taking control away from them. Fortunately, I've added support for multiple ways of moving the player down, so it is easy to switch based on the designers' choice or some other type of soft boundary.

Hard boundaries (moving into a wall) were a lot easier, but I still had to experiment with ways to create them efficiently. My initial approach used a series of rectangles acting as the collision wall: a series, because they can approximate a curved wall, since Unity does not have curved colliders and it would be too computationally heavy to do collision detection against a spline. While this method did work and is the standard way of doing it, making the rectangles took way too long, it was hell trying to scale, rotate, and move each one, and a gap in the wall could create hard-to-find bugs later. I learned of an alternative using Unity's edge collider. It lets the designer create a line that can be subdivided into many segments to approximate the curve. It is the same principle as the rectangles, but much faster to build, since the designer only has to select a midpoint between two endpoints and move it where they need it. The only problem is with collisions where the object is moving at great speed: the edge collider has a much smaller margin for error than a rectangle, so a fast object can pass through it. This doesn't come up too frequently, and if it does, there is the possibility of either capping the speed or using a predictive collision check in discrete steps to determine whether the object will cross the edge. That would be much easier than the added buffer rectangles give, especially since the edge collider cuts down the time it takes to set up the walls and is much easier to fine-tune and fix later on.
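The predictive check could be as simple as testing the segment the object will sweep this frame against each wall segment before it tunnels through. A sketch of that idea (in Python; in Unity you would more likely use a raycast, and the function names here are my own):

```python
def segments_intersect(p1, p2, q1, q2):
    """Return True if segment p1->p2 properly crosses segment q1->q2.

    Points are (x, y) tuples. Collinear touches are ignored, which is
    fine for a coarse "will we tunnel through the wall?" test.
    """
    def cross(o, a, b):
        # z-component of the cross product of (a - o) and (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    # proper crossing: the endpoints of each segment straddle the other
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def will_hit_wall(pos, vel, dt, wall_a, wall_b):
    """Predict a hit by sweeping pos forward one frame's worth of motion."""
    next_pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return segments_intersect(pos, next_pos, wall_a, wall_b)
```

Run this only for the fastest objects; for everything else the edge collider's normal discrete check is enough.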

The placement of enemies is very important. An enemy lying in wait in a corner of the level far away from the player will not only eat up CPU but, depending on its AI, may chase the player from across the level. One of my fixes for this is an enemy trigger. When the designer places an enemy in the level, they can hook it up to a trigger that, when activated, spawns the enemies offscreen and allows them to attack the player, saving CPU. The main problem is the placement of the trigger, especially in a more open world. The trigger can only be so big, it is a rectangle, and creating more than one for the same group of enemies can open up potential bugs. It is great in some cases, but not a catch-all. The other solution is to define the enemies' behavior by their distance from the player. If an enemy can't see the player, it runs some default state such as patrolling or fighting its neighbors, and when the player gets close enough it switches into its attack state. While this solves the enemy chasing the player across the level, it leaves a bunch of enemies that aren't useful taking up CPU. A fix would be to disable enemies beyond a certain distance so they cost nothing. This is probably the best approach: when the player is far enough away, the enemies do nothing; when the player is close, but not close enough to attack, they activate and run their default behavior. While it would be nice for enemies to run their default behavior no matter where the player is, that just isn't feasible in terms of performance. The best bet is to use the trigger where possible and, if that doesn't work, fall back to running the default behavior only when the player is close enough.
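The distance-based approach boils down to three bands around the player. A minimal sketch (Python; the radii are invented tuning values, not the game's actual numbers):

```python
import math

def enemy_state(enemy_pos, player_pos,
                attack_radius=15.0, active_radius=40.0):
    """Pick an enemy's state from its distance to the player.

    Returns "disabled" (too far to matter, costs no CPU),
    "default" (patrol or fight each other), or "attack".
    Positions are (x, y) tuples; the radii are made-up constants.
    """
    dist = math.hypot(enemy_pos[0] - player_pos[0],
                      enemy_pos[1] - player_pos[1])
    if dist > active_radius:
        return "disabled"
    if dist > attack_radius:
        return "default"
    return "attack"
```

The trigger-volume approach replaces the outer band with a designer-placed rectangle, but the inner two bands work the same way.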

Those are the most basic things I've covered so far. There are still some aspects I haven't discussed with anyone else, such as what happens when a level ends, how the game decides which level to go to next, and where the designer can specify that. The level design is very iterative and will change as we get a better idea of how we want our levels to look and feel.

Monday, October 27, 2014

Debugging Features

This past week I had two main jobs: design an overall architecture for the game, and implement basic in-game debugging features. Neither should have been that hard, but the way Unity implements things makes the process much harder than it should be.

I implemented two features for debugging: a debug menu and debug attributes. The debug menu allows the programmer to open up a menu and change the state of the game. It is exactly like an in-game console, but with a graphical interface. The simplest thing the menu should do is draw debug lines. Debug lines can represent almost anything, from player trajectories to collision boxes. The idea was that, when drawing debug lines was activated, the collision box for each entity would be drawn. But Unity being Unity, it made what should be a simple task next to impossible. Unity does not have a built-in function to draw a line in the game view. To draw a line, I could use either Debug.DrawLine() or Gizmos.DrawLine(). Both do what I want, but the lines only show up in the scene tab, not the game tab. This is problematic because I need to see these lines while I debug the game in the game tab; I won't have access to the scene tab unless I pause the game. That's why I need a method that works in the OnGUI event. Even drawing a thin box proved problematic. I could use Unity's low-level GL interface to draw the line, but this feels like overkill for what I need and defeats the purpose of using an engine in the first place. Still, I may need to create a GL utils static class just to cover basic functionality that Unity is too lazy to implement (if there is a reason it's missing, I haven't found it).

Example of Debug Menu

The second feature I implemented was debug attributes. This allows the programmer to cycle through each entity and display its attributes on the screen for easy debugging, without relying on Debug.Log printing every entity every frame and cluttering the console. This was much easier to implement; however, what I had hoped to add was that, if debug lines were activated, the collision box belonging to the selected entity would be highlighted in a different color so the programmer could tell which entity was which. 


Wednesday, October 15, 2014

Benchmarking

One of the most common things done in game dev is looping over objects to do repetitive tasks every frame. It is usually here that the bottlenecks of a game become evident. This is even more the case with Will of the Wisp, a game whose main mechanic is based on controlling swarms. The wrong algorithm for a minion, or allocating more memory than necessary (increasing the garbage collector's work), could lead to huge performance spikes that devastate playability. To help swarm performance, two generic aspects were tested: how C# allocates memory when using operators versus inline methods to manipulate vectors, and how a for loop versus a foreach loop affects overall performance, including whether the iterator adds too much overhead.

C# Memory Allocation

There are two ways to manipulate a vector. The first is to use operators, which are static overloads that take both vectors and return a new vector. With all this creation of new vectors, it seems it may allocate more memory on the heap than necessary, creating a performance spike when the garbage collector deallocates the extraneous memory. An alternative is to use methods that update the vectors in place instead of the operators. This would stop the extraneous allocation, which should reduce both the overhead of initializing new memory and the cost of the garbage collector. 

Test that uses Vector operators:
var object1Pos = new Vector2(10, 5); 
var object2Pos = new Vector2(-5, 3); 
Vector2 object1Trajectory = (object2Pos - object1Pos).normalized; 
object1Trajectory *= 8; 
object1Pos += object1Trajectory;

Test that uses Vector methods:
var object1Pos = new Vector2(10, 5); 
var object2Pos = new Vector2(-5, 3); 
var object1Trajectory = object2Pos; // Vector2 is a struct, so this copies 
object1Trajectory.Set(object1Trajectory.x - object1Pos.x, 
                      object1Trajectory.y - object1Pos.y); 
object1Trajectory.Normalize(); 
object1Trajectory.Scale(new Vector2(8, 8)); 
object1Pos.Set(object1Pos.x + object1Trajectory.x, 
               object1Pos.y + object1Trajectory.y);

The code above was used in benchmarking tests that measured the average computational time while eliminating the test overhead and the initial cost of loading libraries. The benchmarking also measured how much memory was allocated on the heap and how much would be collected by the garbage collector.


The memory is in kilobytes and the time is in milliseconds. As the tests show, using methods is 28% more efficient than operators, though the improvement isn't dramatic. This was to be expected, since the methods don't have to deal with allocation overhead. What went against my initial hypothesis was that the amount of memory allocated on the heap was the same for both the operator and method versions. Looking deeper into how C# compiles, the use of 'new' doesn't always allocate the object on the heap: Vector2 is a struct, so local instances are allocated on the stack, where deallocation isn't a concern. The takeaway is that even though using methods provides some increase in performance, the garbage collector does the same amount of work either way, and it is more of a bottleneck than the overhead of stack allocation. The operator version is also more readable than the method version, so for the meantime it is better to use vector operators over methods. If profiling shows that vector operators really are the bottleneck, the code can be changed to use methods instead.

For vs. Foreach

When iterating over lists or arrays, there are two options: a for loop or a foreach loop. The foreach loop is faster to write and more readable; however, at first glance, there is the overhead of creating and advancing an iterator, which requires an IEnumerable. Foreach loops also restrict deleting or replacing the object being iterated. That's why it seems a for loop, properly optimized to access memory only once per iteration, would be the better choice with regard to performance.

There are five tests that iterate over a list of vectors and an array of vectors. The simple loops access the object once while doing a simple method call. The complex loops make several calls to the object and do more advanced calculations. The for loop tests two kinds of complex loop: the first grabs the object from the collection on every access, while the second saves the object in a temp variable while it is manipulated.

Simple For Loop:
for (int index = 0; index < list.Count; index++)
{
    list[index].Normalize();
}


Simple For Each Loop:
foreach (Vector2 vector in list)
{
    vector.Normalize();
}


Complex For Loop (1):
Vector2 position = new Vector2(10, 10);
for (int index = 0; index < list.Count; index++)
{
    list[index].Normalize();
    list[index] *= 10;
    position += list[index];
}


Complex For Loop (2):
Vector2 position = new Vector2(10, 10);
for (int index = 0; index < list.Count; index++)
{
    Vector2 vector = list[index];
    vector.Normalize();
    vector *= 10;
    position += vector;
}


Complex For Each Loop:
Vector2 position = new Vector2(10, 10);
foreach (Vector2 vector in list)
{
    vector.Normalize();
    vector.Scale(10);
    position += vector;
}


Each test was run twice, once with a List and once with an array. The size grew by a factor of two until the final loop ran over a collection of one million elements.
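The harness behind these numbers was C#, but the methodology is easy to sketch (here in Python): time each loop style over doubling sizes and average several runs. The repeat count and starting size are arbitrary choices for illustration:

```python
import time

def bench(fn, data, repeats=3):
    """Average wall-clock seconds for fn(data) over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def index_loop(data):
    """Index-based loop, analogous to the C# for loop."""
    total = 0.0
    for i in range(len(data)):
        total += data[i]
    return total

def iterator_loop(data):
    """Iterator-based loop, analogous to the C# foreach loop."""
    total = 0.0
    for x in data:
        total += x
    return total

# double the size each round, as in the tests above
size = 1024
while size <= 1_000_000:
    data = [1.0] * size
    bench(index_loop, data)
    bench(iterator_loop, data)
    size *= 2
```

The C# results below are what matter for Unity; a harness like this only shows the shape of the experiment, since iterator overhead differs per language.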

The time it takes to iterate each test for the list.

The time it takes to iterate each test for the array

As seen above, there is no measurable difference in the simple loop between a for loop and a foreach loop; however, there is a slight performance improvement when looping over an array rather than a list. The for loop also has a slight advantage over the foreach loop, because the foreach loop carries an overhead cost that outweighs the cost of the basic operation. These overheads are not significant enough to be a bottleneck, so for a simple loop the foreach version wins on clarity and readability.

The surprise is in the complex loop. Unsurprisingly, the first complex for loop did much worse than the second, because the first has to make more reads from memory, which does impact performance. The surprise is that the foreach loop did better than the for loop over the long run. The foreach overhead didn't cause the performance loss predicted by my original hypothesis and by the observations of other programmers online discussing performance in Unity. The foreach loop also has a slight advantage when iterating over an array rather than a List. I would have loved to run the same measurements on a GameObject to get a more accurate view of how the loops would behave in-game. 

Based on the calculations, the for each loop would be the better loop to go with for both performance and readability. Where it loses in performance, the loss isn't significant enough to be a bottleneck. 








Tuesday, October 7, 2014

Prototype Pitch and Shadows of Mordor

My team pitched the prototype to an industry panel. Since the original inception of the pitch, the game has changed to the point that it is something else entirely. The mechanic still focuses on controlling a swarm, but the twin-stick shooter was taken out; instead, the player uses the swarm as their primary weapon. It's been hard to keep up with each change the designers want from the swarm, and each change takes away from the properties of the swarm to the point that it feels more like micromanaging. Swarms should never be micromanaged; instead, the player should give general orders to the swarm under their control, and the minions execute the algorithm for whatever swarm state they are in. Even other people who I trust to give a frank evaluation wanted the swarm to be more macromanaged than what was presented. They also wanted the twin-stick shooting back, as the right stick is now worthless.

After the pitch to the industry panel, we felt good about the presentation and felt the game has a good chance of being selected for production when Spring hits. I'm not stressing much and just planned to relax over the weekend. When I got home after the presentation, I was ecstatic to find that Amazon had finally delivered Shadow of Mordor, a game I was really excited to play. The last game I played was Assassin's Creed 4: Black Flag, two months ago. Playing through Shadow of Mordor, I could definitely tell what all the comparisons to Assassin's Creed were about. For all intents and purposes, Shadow of Mordor is exactly like the Assassin's Creed series: from stealth, to finding viewpoints, to the overworld map, even to assassinations. Despite all the similarities, though, Shadow of Mordor is a really fun game, and I feel it does stealth better than the Assassin's Creed series. Exploring Udun and Nurnen is really fun, and reading the encyclopedia (what Assassin's Creed has for places, events, and people) about the world of Middle-earth was fascinating. I love Tolkien's novels, including the Silmarillion, and seeing this expanded universe in game form is something I've been looking forward to since getting tired of the same old rehashes of the Lord of the Rings trilogy. I haven't finished the game yet, but everything I've played makes me excited to finish it and eagerly anticipate whatever Monolith comes out with next.

Tuesday, September 23, 2014

Swarming

New feature: the player has a swarm that acts as a shield and can also be used to attack the enemy. I've been tasked with programming this swarm behavior, which has two states: protect and attack. In protect, the swarm moves around the player, acting as a shield. The swarm is still vulnerable to fire from both the enemy and the player, so if the player wishes to fire at the enemy, they risk destroying their own shield. In attack, each member of the swarm latches onto the nearest enemy and attacks it.

To implement this, I needed to brush up on my boid skills. Boids are a simulation of a group of objects that act as a swarm or flock. They follow three simple rules: separation, alignment, and cohesion. Separation ensures that each member of the swarm doesn't collide with another. Alignment means they steer toward their target's heading (which can be a leader or the average heading of their nearest flockmates). Cohesion means they steer toward the average position of the swarm. For this algorithm, the swarm doesn't act alone; the members base their movements off a target, creating a leader-follower paradigm. Each rule produces a vector of where a boid should move, and the sum of those vectors produces its true trajectory. Boids are incredibly difficult to get right and require a lot of fine-tuning. When creating the boids for this game, I ran into problems getting them to move in a fluid manner.
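The three rules can be sketched roughly like this. This is a hypothetical illustration in Python, not our actual prototype code; the function names, the weight values, and the simple tuple-based vectors are all my own choices for the sketch.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def length(v): return math.hypot(v[0], v[1])

def separation(pos, neighbors, min_dist=1.0):
    # Steer away from any neighbor that is too close, weighted by closeness.
    steer = (0.0, 0.0)
    for n in neighbors:
        offset = sub(pos, n)
        d = length(offset)
        if 0 < d < min_dist:
            steer = add(steer, scale(offset, 1.0 / d))
    return steer

def alignment(vel, neighbor_vels):
    # Steer toward the average heading of the neighbors.
    if not neighbor_vels:
        return (0.0, 0.0)
    avg = (sum(v[0] for v in neighbor_vels) / len(neighbor_vels),
           sum(v[1] for v in neighbor_vels) / len(neighbor_vels))
    return sub(avg, vel)

def cohesion(pos, neighbors):
    # Steer toward the average position of the neighbors.
    if not neighbors:
        return (0.0, 0.0)
    center = (sum(p[0] for p in neighbors) / len(neighbors),
              sum(p[1] for p in neighbors) / len(neighbors))
    return sub(center, pos)

def boid_steer(pos, vel, neighbors, neighbor_vels, weights=(1.5, 1.0, 1.0)):
    # The per-rule weights are where all the fine-tuning lives.
    ws, wa, wc = weights
    return add(add(scale(separation(pos, neighbors), ws),
                   scale(alignment(vel, neighbor_vels), wa)),
               scale(cohesion(pos, neighbors), wc))
```

The final trajectory is just the weighted sum of the three rule vectors, which is why tuning the weights changes the whole feel of the flock.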

The big problem I ran into was that when the swarm gets close to the player, it should circle around the player, acting as a shield. To get a trajectory that circles the player, I figured it would just be the tangent of the circle they would trace: the tangent is the line perpendicular to the radius vector. The radius vector is easily obtained by taking the difference between the two objects' position vectors, and rotating it 90 degrees gives the tangent. When I ran the code, the swarm would form a circle around the player, but they wouldn't move. Only by playing with the degrees of rotation did I find that rotating the radius vector by 45 degrees makes them move in an elliptical manner around the player, which makes sense. This gets me the results I'm after, but I'm still confused why rotating 90 degrees doesn't get me the result I wanted. For the prototype, though, it's good enough, and that's what I'll take. I need to move on to creating enemy classes (which I'll cover next week).
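The rotation itself is simple 2D vector math; here's a sketch with made-up example positions (not our project's code). One possible explanation for the 45-degree result, though I haven't verified it against our steering code: the 45-degree rotation blends the tangential direction with an outward radial component, so the boid both orbits and holds distance, while the pure 90-degree tangent may get cancelled out by the other steering forces.

```python
import math

def rotate(v, degrees):
    # Rotate a 2D vector counter-clockwise by the given angle.
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return (v[0] * c - v[1] * s, v[0] * s + v[1] * c)

# Hypothetical positions, just to illustrate the math.
player_pos = (0.0, 0.0)
swarm_pos = (3.0, 0.0)

# Radius vector from the player to one swarm member.
radius = (swarm_pos[0] - player_pos[0], swarm_pos[1] - player_pos[1])

tangent_dir = rotate(radius, 90)  # pure tangential direction
orbit_dir = rotate(radius, 45)    # tangent blended with an outward radial part
```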

Tuesday, September 16, 2014

Human Body Defense Prototype

I've been assigned to the team Human Body Defense, a top-down shooter where you must destroy various diseases to heal the body. Our prototype will focus on the main mechanic: using your enemies' abilities to adapt to your environment. This means that when you destroy an enemy, you gain some of its abilities, such as partial immunity to its attacks, as well as its weapon to upgrade your own. Thus far, we've programmed a player and an enemy that can shoot at each other, move around, and die. Since A.I. is my favorite thing, I took responsibility for implementing basic enemy behavior: mostly stalking, correcting course every half second while firing every second. The enemy stalks the player, meaning it follows the player up until a minimum distance plus some epsilon. If it gets within that minimum distance, it retreats while firing at the player. Obviously this isn't very exciting, but it does demonstrate the concept of an enemy that attacks the player and, when it dies, drops its weapon for the player to take and adapt to.
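The stalking logic boils down to a distance band around the player. A minimal sketch of the idea, assuming made-up values for the minimum distance, epsilon, and speed (not our actual tuning):

```python
import math

def stalk(enemy_pos, player_pos, min_dist=5.0, epsilon=0.5, speed=1.0):
    # Follow the player until within min_dist + epsilon; back off if closer
    # than min_dist; otherwise hold position inside the band (while firing).
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return (0.0, 0.0)
    toward = (dx / d * speed, dy / d * speed)
    if d > min_dist + epsilon:
        return toward                    # advance on the player
    if d < min_dist:
        return (-toward[0], -toward[1])  # retreat while firing
    return (0.0, 0.0)                    # hold in the comfort band
```

The epsilon band keeps the enemy from jittering between advance and retreat every frame when it sits right at the minimum distance.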

Monday, September 8, 2014

Phone Screen with Naughty Dog

I had the opportunity to interview with Naughty Dog for a Game Programming position, and it was completely different from what I thought it would be. I had done two technical interviews for internships at EA and Amazon, and they consisted of the standard questions: "Tell me about yourself," talking about game projects I made, and technical questions about data structures, algorithms, and C++. That's what I was studying when preparing for this interview, specifically talking points about myself, my abilities, and my experiences at EA Tiburon. When the phone interview came, I did not realize that a technical phone screen was completely different from a technical phone interview. After a quick introduction, he went straight into the technical questions. This threw me off a bit, as I was expecting the "tell me about yourself" question first.

Despite being thrown off, I figured I would still be fine: I wouldn't have to sell myself and could stick to the technical side, which is my strength. A question about the dot product came up, which I knew; then he asked for an alternative equation for the dot product. That is where I started to freeze up: I didn't know it. Then he asked me how to find the angle between two vectors, and again I struggled until I could say that it involved the dot product. This is where I started to tense up and stress out. The rest of the linear algebra questions only went downhill. When asked how to find the normal vector given three points, I had completely forgotten about the cross product and could only answer once he gave me the equation. When pressed for the name of the matrix, I couldn't answer. I'd forgotten the cross product, one of the simplest concepts of linear algebra. My only relief came when he started asking about data structures, which I felt good about.
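For my own future reference, here are the identities he was after, sketched in Python (the interview itself was pure math, so the code is only illustrative): the alternative dot product equation is a · b = |a||b|cos θ, the angle comes from rearranging it, and the normal of three points is the cross product of two edge vectors.

```python
import math

def dot(a, b):
    # a . b = sum of componentwise products; also equals |a||b|cos(theta).
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

def angle_between(a, b):
    # Rearranged alternative form: theta = acos((a . b) / (|a| |b|)).
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

def cross(a, b):
    # 3D cross product, the piece I blanked on.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normal_from_points(p0, p1, p2):
    # Normal of the plane through three points: cross the two edge vectors,
    # then normalize to unit length.
    u = tuple(b - a for a, b in zip(p0, p1))
    v = tuple(b - a for a, b in zip(p0, p2))
    n = cross(u, v)
    length = norm(n)
    return tuple(c / length for c in n)
```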

It has been a little less than a year since I last needed to do linear algebra, and it didn't cross my mind to refresh my knowledge of it. Linear algebra was my favorite subject in math, and I consider myself really good at it; after the interview, I looked up the concepts and could then answer every question. My failure to refresh myself on basic linear algebra cost me the opportunity to prove my knowledge and, possibly, to continue in the interview process at Naughty Dog. While I haven't heard anything yet about my application and the next steps, I have little hope that I will move forward. All I can do is keep putting in applications to other game studios, and when I get the opportunity to interview again, I'll know what to expect from a phone screen and will add linear algebra to the list of subjects to study and refresh.

Monday, September 1, 2014

German Expressionism

I'm finally taking the capstone class for the EAE program, where I have to create a video game with a group that will be published. Right now, I'm coming up with pitches. Part of my problem is that the ideas I have for games are not appropriate for this class because of their size and the time available. While the class runs two semesters, about 8 months, the actual time to work on the game will be closer to 5-6 months, because planning and getting approval to publish the game will eat up a few of those months. I'm very interested in making large-scale RPGs that would take a AAA studio several years to make. Trying to scale down has not been easy.

Starting out thinking of pitches, my ideas have not been very good. I've been coming up with stock platformers and caper games, even going as far as to rip ideas straight from movies because I'm so unoriginal. Then I started to think about old German expressionist films from the 1920s: films like Nosferatu, Metropolis, and The Cabinet of Dr. Caligari. All these films share a distinct style of chiaroscuro lighting, a stark color scheme, and a distorted view of reality that ventures into the dreamlike.

 Still from The Cabinet of Dr. Caligari (source: http://www.filmsquish.com/guts/files/images/caligari12.jpg)

That made me wonder which video games make use of this visual style. It turns out the style is rarely used in AAA games and seldom in indie games. I would have thought German expressionism would be more common in indie or student games, given how film students overuse the style to show an imitation of creativity. The closest I could find was Limbo.

Still from Limbo (source: http://i.telegraph.co.uk/multimedia/archive/01683/limbo-game1_1683129c.jpg)

Limbo certainly shares an artistic style with Dr. Caligari and the other black-and-white expressionist films through its lighting and color scheme. While the game certainly has a dreamlike quality, it doesn't match Caligari's use of distorted perspectives and shapes. Then again, an emphasis on distorted reality isn't the decisive factor in whether a film is expressionist; Nosferatu relied more heavily on lighting than on a distorted reality.

Many games are more inspired by film noir, which was itself inspired by German expressionism and features chiaroscuro in a more subdued reality. Contrast is a game that seems inspired by the early film noir of the '30s and '40s, but looking at its style, it is more reminiscent of Nosferatu.

still from Nosferatu (source: http://www.derek-turner.com/wp-content/uploads/2013/10/nosferatu-4.jpeg)
still from Contrast (source: http://4.images.gametrailers.com/image_root/vid_thumbs/2013/11_nov_2013/nov_11/gt_contrast_review_em_11-13_6am.jpg?)

In many respects, Contrast could be considered expressionist. The game is set in an alternate-reality 1920s where Albert Einstein's theory of relativity is used as justification for an extremely distorted reality in which the player can shift into the shadow they project and move around on that plane of existence.

still from Contrast (source: http://cdn.destructoid.com//ul/265659-C1.jpg)

When playing it, though, the game feels more like a film noir, such as M, than a German expressionist film like Nosferatu. This is the problem with art styles: there are so many interpretations that they cannot easily be defined or categorized. There are films that have elements of a style but could not be considered that style, just as many film noirs can't be considered German expressionism.

That is why I want to make a game that is truly based on German expressionism rather than film noir: a game with heavy use of light and shadow and a heavily distorted view of reality. One of the games I will be pitching is based on that style. A game of hyperbolic emotion with little to no grounding in reality. A game where the player has to travel through many points in time to right a lifetime of evil. Each time point would be a heavily distorted view of the reality that actually was, reflecting how the main character saw the world. The hub where the player traverses to the different time points would be like an M.C. Escher drawing, all distorted perspectives.

source: https://thefalloutgirl.files.wordpress.com/2011/10/escher-big.jpg

It'll probably turn out that I'm the only one passionate about this idea, and it won't make it past the first selection process; however, I do want to make a German expressionist game, and now I have both an art style and an idea in my back pocket for when I could potentially make one. A concrete idea is more than what most people have, even if it does suck.