December 27, 2016

Looking back at 2016

I figure with the end of 2016 coming closer it is time to look back at the year. One of the major differences this year has been my ongoing project. Not only has it taken up a large portion of my time, but it has also been a huge learning experience and a refresher on things I had forgotten from my undergraduate engineering courses. I have also come to realize that frequent updates to this blog are just not going to happen. It isn't because I don't want to post updates; I simply prefer working on my project to posting about it.

Near the end of 2015, I wanted to get back into working on my own games. Shortly after finishing my master's program I was mentally and psychologically exhausted. I had spent a significant amount of time playing games with a few friends and started to get an itch for making my own game. I was originally thinking of making a mobile game I would want to play while taking transit: something I could easily pause, but also complicated enough to require a good amount of problem-solving. At the time I had spent many hours playing Factorio, so I wanted to create a tower defense which required you to collect resources, manage power/communications, and fend off attacks from a moderately intelligent adversary.

I got a very basic prototype done after a few weeks, but I started to envision a much bigger project. I was also no longer taking transit, simply because driving took about 1/5 the time. Together those factors made me change my mind. I began looking into technologies and libraries I could use to make it happen. Most of my development had focused on 2D, but I had done some basic 3D work. In the end, I decided it would be worth my time to evaluate Unreal 4 against Unity 5, as my experience with both engines had shown they had extremely robust rendering engines that would easily outdo anything I could create. They also had the nice feature of supporting multiple platforms. The end result was my decision to go with Unreal 4, as its performance as my rendering engine exceeded what Unity 5 could deliver.

After a few weeks of hacking together the general concept for what I wanted to create, I threw together a basic prototype in April 2016, which you can take a look at below:

It was very simple but gave me the foundation for what I was going to build. As time went on I slowly built up a robust engine which delegated the rendering to Unreal. I still use that approach today, and surprisingly it allows for rapid iteration and excellent utilization of the hardware.

I had a short and bitter fight with Unreal's GUI frameworks in May of 2016 and decided not to bother fighting with them. Instead, I stuck a web overlay on top of the game canvas and used web technologies for my interface. On the surface it seems more complicated, but it turns out to be significantly simpler and more productive. It also means I can create really solid-looking user interfaces.

Everything in my game is built to be networked and scalable. Over the summer I worked on pushing towards planetary-sized worlds: optimizing and reworking terrain generation, and pushing the limits of what can be done with today's technology. Eventually I got terrain generation to a point where I could very quickly create terrain on the fly and travel at high speeds with fairly minimal lag. This is when I started to notice the limitations of my physics engine; more on that later.
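On-the-fly generation of this kind usually boils down to generating fixed-size chunks lazily and caching them as the camera moves, so fast travel only pays for terrain it hasn't seen before. A minimal sketch of the idea in Kotlin (the chunk size, the stand-in noise function, and the class names are my own illustration, not the actual engine code):

```kotlin
import kotlin.math.sin

// Hypothetical chunked terrain: heights are produced lazily per chunk and
// cached, so revisiting an area costs a map lookup, not a regeneration.
const val CHUNK_SIZE = 32

class TerrainChunk(val cx: Int, val cz: Int) {
    // Cheap deterministic function standing in for a real noise pipeline.
    val heights = DoubleArray(CHUNK_SIZE * CHUNK_SIZE) { i ->
        val x = cx * CHUNK_SIZE + i % CHUNK_SIZE
        val z = cz * CHUNK_SIZE + i / CHUNK_SIZE
        sin(x * 0.1) * 10.0 + sin(z * 0.13) * 7.0
    }
}

class TerrainCache {
    private val chunks = HashMap<Pair<Int, Int>, TerrainChunk>()
    var generated = 0
        private set

    // Return the cached chunk, generating it on first access only.
    fun chunkAt(cx: Int, cz: Int): TerrainChunk =
        chunks.getOrPut(cx to cz) { generated++; TerrainChunk(cx, cz) }
}
```

The real work is of course in the generation itself and in evicting far-away chunks, but the lazy-cache shape is the part that makes high-speed travel tolerable.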

I then started working on making the terrain modifiable in September of 2016. I was originally planning to use an embedded database engine, but I quickly came to realize that a database just isn't fast enough. That isn't to say the databases I tested were not high quality, or that they couldn't scale; I needed to store and retrieve data with extremely low latency. I also didn't need to do a significant amount of querying, so most of the advantages of a database were not helpful. I ended up writing my own storage system, which let me handle high levels of concurrency with low latency by keeping fragments of data in memory and using an asynchronous, event-based approach. It worked really well and was straightforward when it came to implementing replication across clients. I only partially implemented that, enough to prove it would work, before something else caught my attention.
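The gist of that approach can be sketched in a few lines: reads are served straight from in-memory fragments, while persistence happens asynchronously on a dedicated writer thread so it never sits on the hot path. This is only an illustration of the pattern, with a map standing in for disk; the names and API are mine, not the real system's:

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Sketch of a low-latency fragment store: reads never block on I/O, and
// writes become visible immediately while persistence trails behind.
class FragmentStore {
    private val fragments = ConcurrentHashMap<Long, ByteArray>()
    private val writer = Executors.newSingleThreadExecutor()
    val persisted = ConcurrentHashMap<Long, ByteArray>() // stand-in for disk

    fun read(id: Long): ByteArray? = fragments[id]

    fun write(id: Long, data: ByteArray) {
        fragments[id] = data                     // visible to readers at once
        writer.execute { persisted[id] = data }  // flushed off the hot path
    }

    fun close() {
        writer.shutdown()
        writer.awaitTermination(5, TimeUnit.SECONDS)
    }
}
```

The same event queue that feeds the writer is also a natural place to hang replication to other clients, which is why that part turned out to be straightforward.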

The physics engine was causing most of my headaches. Again, not the fault of the physics engine; it was doing a great job for what it was designed for. It just wasn't scaling well, and it added a massive amount of complexity to my engine because I had to translate my game structures into something the physics engine could work with. I was also suffering from issues at high speeds and the occasional terrible collision response. On top of that, I had to write my own 'fix' for tunneling because the solutions provided by the physics engine only worked most of the time; I could consistently break them.

I had written my own physics engine for a 2D game many years ago. I was able to handle 17,000 objects colliding at 60 fps using a few strategies I developed (I only discovered the names of some of those strategies a few years later). It sort of hit me: there are no 2D physics engines that could handle that many objects, and I have doubts any will be able to in the near future. Part of the reason is that they are generalized solutions and can't make certain classes of optimizations in the problem space. Their generality makes them inefficient, which isn't a bad thing; it just means they are limited to micro-optimizations, such as using SIMD instructions, GPU acceleration, newer hierarchy structures, or newer algorithms. If I wrote my own, I could tailor it to my application and make the physics engine do only the work it actually needs to. I would also be able to deal with high-speed collisions and tunneling without having to fight against the physics engine.
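To make that concrete, here is the kind of problem-space assumption a general-purpose engine cannot bake in: if you know all your objects are roughly the same size, a uniform-grid broad phase is simple and very fast, because only objects in the same or adjacent cells can possibly touch. A rough sketch (the cell size and API are illustrative, not from my engine):

```kotlin
import kotlin.math.floor

// Illustrative uniform-grid broad phase. It works precisely because it
// assumes every object fits within one cell neighbourhood, which is an
// assumption a generalized engine cannot make for arbitrary games.
class GridBroadPhase(private val cellSize: Double) {
    private val cells = HashMap<Pair<Int, Int>, MutableList<Int>>()

    private fun cellOf(x: Double, y: Double) =
        floor(x / cellSize).toInt() to floor(y / cellSize).toInt()

    fun insert(id: Int, x: Double, y: Double) {
        cells.getOrPut(cellOf(x, y)) { mutableListOf() }.add(id)
    }

    // Candidate pairs for the narrow phase: same cell or an adjacent cell.
    fun candidatePairs(): Set<Pair<Int, Int>> {
        val pairs = mutableSetOf<Pair<Int, Int>>()
        for ((cell, ids) in cells) {
            for (dx in -1..1) for (dy in -1..1) {
                val others = cells[cell.first + dx to cell.second + dy] ?: continue
                for (a in ids) for (b in others)
                    if (a < b) pairs.add(a to b)
            }
        }
        return pairs
    }
}
```

A general engine has to fall back on hierarchies that handle wildly mixed object sizes; the grid trades that generality for near-constant insertion and query cost, which is how object counts in the tens of thousands become feasible.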

After reading a number of books on the subject, a few research papers, a thesis, and the source of various open source physics engines, I started thinking about how I could integrate a custom physics engine into my game engine. It was now December of 2016, but I wasn't ready to hook it up and swap out my existing physics engine interface; I wanted to start with a prototype. Very quickly I realized that I would need a visual debugger to help me develop the engine and come up with test cases to verify correctness as I switch from naive implementations to heavily optimized solutions. I slapped together a quick visual debugger and was able to see my simulations, and I made sure I could easily step forward through time to confirm everything was behaving correctly.
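The stepping itself is nothing fancy: instead of advancing the simulation from a real-time clock, the debugger drives it with a fixed timestep on demand, so every frame is reproducible. A toy illustration (the state and the integration are placeholders, not my actual engine):

```kotlin
// Debugger-driven stepping: the simulation advances only when step() is
// called, always by the same fixed dt, so any frame can be reproduced
// exactly by replaying the same number of steps.
class SteppableSim(var x: Double, var v: Double, val dt: Double = 1.0 / 60.0) {
    var tick = 0
        private set

    fun step() {
        x += v * dt  // placeholder integration for a single body
        tick++
    }
}
```

Determinism is the whole point: a failing scenario in the visual debugger can be turned directly into a regression test by asserting on the state after N steps.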

It sort of bothers me. I originally wanted to use a bunch of frameworks and libraries to make my life easier and let me develop the game faster. Instead, I kept having to deal with their limitations and find workarounds, and my productivity decreased. I do have the fortunate benefit of not having a deadline, so I can take the hit of writing my own solutions where appropriate.

This year has been a huge learning process. A year ago I would never have dreamed of writing my own physics engine, or of writing my own storage instead of using a database. I certainly would have thought that by this point I'd have something playable instead of a sort of early tech demo. I guess that is the problem when you have to build your own technologies to solve problems others have yet to solve.

Well, enough about the past; where am I going next?
Short-term (Q1 2017)

  • [Physics] Expand capabilities of the physics engine, such as supporting more shapes
  • [Physics] Deal with high-speed interactions and tunnelling
  • [Engine] Switch from existing physics engine to custom physics engine
  • [Graphics] Finish implementing client-side terrain updates

Medium-term (Q2-Q3 2017)

  • [Graphics] Determine what type of art style to use.
  • [Physics] Optimize the custom physics engine, including game-specific special handling
  • [Gameplay] Start working on core gameplay
  • [Interface] Revisit UI with updated technologies

Long-term (Q4 2017+)

  • [Gameplay] Get playable demo
  • [Graphics] Polish rough areas

September 11, 2016

Summer 2016 and some thoughts

It feels like it was only a few weeks ago that I posted something here. However, the calendar tells me it has been more than two months; so much for my biweekly posting plan. I've been pretty busy making more progress on my project, but I've also been feeling a bit of burnout and needing some time to relax. Most of the last two months has been dedicated to solving some of the harder problems in my project. Some of the challenges involved sketching out pages of design for weeks at a time without writing any code. Others were solved by writing tests or small prototypes.

I had hit a complexity wall. Sometimes you get to a stage in a project where you know what you want to do but cannot wrap your head around the complexity of it. On paper it seemed straightforward, but once I started to code it I had to fight to make sure it was doing exactly what I wanted. I was entering territory where writing tests was simply not sufficient: as the design evolved and changed, any tests I wrote quickly became obsolete and meaningless.

It felt like I was back to when I was first learning programming and trying to create a text-based game. I was about 10 years old and had been coding for only a few months, but I had created a game that was about 1,000 lines long. It was my first experience of the complexity wall and my first realization that writing software is hard and time-consuming. Years later I hit it again while creating a fairly complex game inside of Warcraft 3; I was getting close to 10k lines of JASS2 (the scripting language for Warcraft 3).

There was a realization: software is developed on a spectrum. On one end you can write code which is fragile and small but gets the job done quickly. On the other you can write code which is resilient and large but takes a long time to get the job done. I've noticed over the years how fragile, small code is a common theme among programming contests and many successful projects. It is really attractive because it takes the least amount of time to get something done. In programming contests it gives you more time to think about solving the problem and lets you finish in less time. In successful projects, it allows you to get the product out and start making money sooner.

I've also seen the dark side of fragile, small code, one that is often ignored. I've seen many games start out well, push out features quickly, and start making money. Then a few years go by and the project ends up abandoned because progress slows to a grinding halt, bugs constantly plague the game, and the money runs dry, forcing the developer to stop supporting it. You frequently hear the words 'rewriting', 'bug fixing', 'new systems', etc. If you don't believe me, look at Steam and the vast majority of indie games: the ones you only heard small things about and the ones that haven't turned into wildly popular games (the ones that after a few years are still under 100k copies sold). It makes me wonder if we are going to have a repeat of the video game crash of 1983; mobile apps have not helped with this. Writing resilient, large code is not ideal when it comes to games either. It simply takes too long to get the product out the door; many game projects tend to take a few years or less from start to 'release'. Early access has helped, but fragile, small software can also take advantage of early access.

So what is ideal? Well, the short answer is nobody really knows. The long answer depends on what type of software you are producing. If you are writing flight software for an aircraft, or writing software for autonomous vehicles, or safety systems to support industrial applications, then writing resilient software with minimal bugs and high reliability is critical. If you are writing a mobile game, or a simple app for a device where bugs are not critical and the market is constantly shifting to the next hot thing, then getting your app out as quickly as possible is critical. You cannot escape the project management triangle:

Indie developers really are stuck with 'Cheap' so there are really only 2 best case options:
- Cheap & Fast, but not Good
- Cheap & Good, but not Fast
When I say 'good' I am not referring to the quality of the game, but the quality of the software produced for the game. When I say 'fast' I'm not referring to the game running well on low-end machines or getting high frame rates, but to the game getting released sooner rather than later. Those with a keen eye will realize that when a game gets delayed, it is because the developers are trying to go from Cheap & Fast to Cheap & Good. As far as I know, in software there is no feasible way to do that. You can go from Cheap & Fast to Fast & Good, but it seems to kill most indie games.

So where am I going with all this? With my current project I find Cheap & Fast unacceptable. I generally stick to Cheap & Good, which typically means it takes a long time for me to write a project. It is also what turned me off from coding contests and competitions: the realization that they promote writing terribly hard-to-maintain code and reward those who can make something work quickly even if the code itself is throwaway. The problem is that few want to invest the time in a contest for building a robust system which scales and has high quality. However, the real world is that kind of competition, and the success of a project almost never rests on the quality of the software itself, but on the quality of the overall product in the market. The catch is that low-quality products can become popular, and high-quality products can end up never being profitable. Still, when I think about it, high-quality products which do become successful tend to stay popular over the long term, and I cannot think of any low-quality products which have stood the test of time. I believe that is why I tend to prefer the idea of a Cheap & Good project over a Cheap & Fast one.

I've found a sort of way to cheat on the fast part with two simple things: technology and architecture. My speed is limited by the technologies I choose, which is why I'm continuously looking for better frameworks/libraries/languages to use. If I don't, then my development speed stays constant, which really limits the types of projects I can take on. The other part is architecture, something you must learn and experience to really understand. Part of the reason I am always working on side projects is that architecture takes a long time to improve if you don't keep building an understanding of the ways you can solve problems. It doesn't mean I am breaking the project management triangle; it just means I'm improving my overall resource capabilities. A diagram to visualize:
Essentially, you can never get into the fast area, but you can certainly get closer to it by improving your total resource capabilities. In the diagram this is going from the black circle to the white circle. Obviously reaching the white circle is never going to happen, but moving away from the black one happens every time you improve your tool-set, gain knowledge which helps you solve a problem, or find a better approach to the overall problem. Yes, every time you do one of those you pay a little in the cost area, but the gains are higher productivity while maintaining high quality and keeping the overall cost of the project low.

A real-world example would be game engines. You could write your own, or you could use Unity, Unreal, or another one. Writing your own has a cost: the time it will take you to write it. Unity and Unreal usually come with a price tag or take a slice out of your profits. The decision you need to make is whether the cost is worth the benefits. Sometimes using a framework adds development cost with no long-term benefits. Other times a very popular and well-maintained open source framework comes with minimal development costs and significant long-term benefits. It is a hard problem knowing when to write your own, when to stop using a framework, and when to pick up a new one. The same applies to other aspects of software development, such as languages, design tools, development environments, deployment tools, build tools, source control, etc.

Sometimes I write my own, but it is usually for smaller components, and I typically look into how others before me have solved the problem. Building on the knowledge and lessons learned from others tends to be better than pretending you are an expert and writing your own solution in isolation. You can learn from mistakes without having to experience them first hand. Don't reinvent the wheel, but when you need to make some special custom wheel to solve your exact problem, make sure you understand why the wheel shouldn't be square. Don't try to use a hammer on a screw. Picking a framework or library just because you are familiar with it doesn't mean it will help you solve your problem any faster. Maybe during the early stages you will see lots of benefits, but over time you will start to notice the pains of the decision. You, or the people who continue the project, will have to live with it.

After a summer of low tangible productivity on my project, I still feel a very strong sense of accomplishment. I am working towards fleshing out some of the bigger and more complex features of my project, and I am writing them in a way that makes it very unlikely I will ever have to rewrite them. A fairly bold statement, but one I strongly believe in. I feel that if you spend the time understanding the problem and prototyping various solutions, you will eventually arrive at a solution which does everything you want and is extremely easy to expand or reuse. It doesn't need to be generalized, but it does need to be flexible enough that there is a straightforward way to get to your end goal(s). I think that is the advantage I have in being both the designer and the developer of my project. I know what I want, and I am the one working towards making it a reality.

A minor side note: it is a bit of a scary thought that a single large, complex feature in my project took about two months to complete. Maybe a couple of weeks less than that if I subtract the time I took for vacation and general downtime.

June 27, 2016

Playing with Kotlin

Today there are many JVM languages, and to be fair I have a strong opinion on most of them. My first non-Java JVM language was Scala, which I still have mixed feelings about: it has many powerful features, but when I tried to make it my main language I was constantly plagued with IDE tooling issues. Groovy did not impress me because I simply cannot see the value in dynamic languages beyond smaller projects. I prefer a compiler that catches an entire category of bugs, and I'd rather not have to consult documentation to learn what type a third-party library is returning or what types the parameters should be. It is why I really enjoy using auto for locals and lambdas in C++ and var in C#. Sometimes the type doesn't matter to me, or it is so blatantly obvious I don't see why I would need to declare it.

Unfortunately, C# (at least to my current knowledge) does not have the concept of constant local variables. I like knowing up front whether a variable is going to change its value over the scope of the function; it means I have fewer things to think about as I'm evaluating code. Java is only just starting to dip its toe into type inference, and as a whole the language is terribly verbose. On top of that, a large part of the Java community believes everything should be abstracted to the point of meaninglessness. If I need to look at more than a file or two to know what I am working with, then there is simply too much abstraction. As they say, less is more, and being concise while maintaining clarity is a heck of a lot easier to maintain than abstracting away all the details. Naturally, if you are writing a public interface you should make it abstract, though the extent really depends on the language and the need to maintain an ABI or API.
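For what it's worth, Kotlin covers both wishes at once: val declares a read-only local and var a mutable one, and both infer their types. A trivial illustration:

```kotlin
// `val` is a read-only local, `var` a mutable one; both infer their types.
fun demo(): Pair<String, Int> {
    val greeting = "hello"  // inferred as String; reassigning it won't compile
    var counter = 0         // inferred as Int
    counter += 1
    return greeting to counter
}
```

Writing `greeting = "other"` inside demo is a compile error, which is exactly the constant-local behaviour I wish C# had.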

I like functional features, but I'm still not sold on using a functional language, which is why I don't use Clojure (which again is dynamic, so another point against it). Functional languages are good at solving some problems but can be overly verbose at solving others, where a loop and state would just be simpler. I played with Xtend for a while, and it seemed to fit right where I would like a language to fit. However, I got frustrated by how fragile the compiler was; it was decently supported, but I couldn't put faith behind it or trust that the project would be maintained for a long time. While I was using it I heard about Kotlin and read a few things about it, but realized it hadn't been around for very long and the developers were not maintaining backwards compatibility. That all changed recently when they prepared their release candidates for version 1.0, at which point it would be backwards compatible. By then I was using Intellij after being a long-time user of Eclipse, and to be fair, the ability to set Eclipse key bindings made my transition to JetBrains' tool set very straightforward.

I used Eclipse for almost 10 years, and over the years I have tried other IDEs for many different languages. I used WebStorm for Javascript a few years back and was quite impressed by it. PhpStorm I used more recently, and it continued to impress me with how solid the Intellij derivatives are. After giving Intellij another shot recently, I was simply blown away by the quality and performance of the IDE, especially how easy it was to manage builds and libraries. Then I tried Kotlin. I was expecting Kotlin + Intellij to be similar to Xtend + Eclipse... and I couldn't have been more wrong. The experience after using it for about 8 months has been nothing but solid, with the odd minor hiccup that was extremely easy to work around or resolve. Watching the issue tracker for JetBrains' products, seeing the responses and the fixes land, has been very impressive.

The only thing that really drives me nuts from JetBrains these days is Resharper++ and Visual Studio... However, I feel most of the problems come from working with very large solutions in Visual Studio and not from the Resharper++ plugin itself. If I could use CLion with the Visual Studio tool chain, I can only dream of how much more productive I would be in C++. I have debated trying to make everything work with QtCreator, which is a decent IDE, but I would rather spend the time coding than trying to make my tools work.

Kotlin brings a number of things to the table that I have a hard time finding in most other languages. The first is Java interoperability treated as a first-class citizen. This means using Java libraries is extremely simple and straightforward. It also means using the JDK isn't frowned upon or difficult, and I can reuse my knowledge from Java without having to learn yet another standard library. Kotlin supports extension functions and properties, which means the JDK has been well extended by Kotlin to offer very useful and powerful features that you simply don't have in other JVM languages. An example is converting between lists and arrays: dead simple in Kotlin, painful in Java, and annoying in other languages.
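The list/array conversion, concretely; these are stdlib extension functions, one call in each direction:

```kotlin
// Converting between collections and primitive arrays with a single stdlib
// call each way; no manual copy loops as in Java.
val list: List<Int> = listOf(1, 2, 3)
val array: IntArray = list.toIntArray()  // List<Int> to IntArray
val back: List<Int> = array.toList()     // IntArray back to List<Int>
```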

Type inference and null checking are my favorite features, but next on the list would be method and property extensions. I think an example is more useful here so let there be some example code:

fun Double.abs(): Double = Math.abs(this)

I will assume basic Java knowledge when talking about the Kotlin example code. The above is a function definition which extends the double type (notice I'm talking about double and not Double) to have a method called "abs". This method is defined as calling Math.abs on itself and returning the value. A few examples:

println((3.0).abs()) // Prints 3.0
println((-2.0).abs()) // Prints 2.0
var x = 2.0 / -1
println(x.abs()) // Prints 2.0

Now of course those familiar with Java will be freaking out about auto-boxing. My response is: there isn't any. The Kotlin compiler is pretty smart about that and will simply compile the code to look roughly like:

System.out.println(Math.abs(3.0)); // Prints 3.0
System.out.println(Math.abs(-2.0)); // Prints 2.0
double x = 2.0 / -1;
System.out.println(Math.abs(x)); // Prints 2.0

Coding in Kotlin really feels like Java but with a ton of syntactic sugar and a very smart compiler that will make intelligent choices for you. The above example is extremely simple, so let's see a more complicated example:

inline fun <reified T : Lock, reified R> T.useLock(task: () -> R): R {
    try {
        this.lock()
        return task()
    } finally {
        this.unlock()
    }
}

Unfortunately, I don't have a good syntax highlighter for Kotlin, so you will have to accept pre-tagged code. The above probably looks pretty strange to a Java developer, though you can probably guess it is a function definition. It has the "inline" keyword on it, which means the Kotlin compiler will inline the function. The generic signature has two parts: first, "reified T : Lock", which means capture the type of T, a subtype of Lock (in this case "java.util.concurrent.locks.Lock"); second, "reified R", which means capture the type of R. Unlike regular Java generics, you have full type information for the generic parameters in this function; they are not erased. If you are a Java developer, your jaw should drop at that statement.

The function extends all types of type T, which means anything that is or extends "java.util.concurrent.locks.Lock". You will need to import the definition to make use of it, but Intellij will automatically import it for you when you need it. The function is called "useLock". It takes a lambda that has no parameters and returns a value of type R, and the function itself returns a value of type R. The body should be fairly straightforward: it calls the lock and unlock methods of the object it extends, invokes the lambda in the critical section, then returns the value from the invoked lambda. Long story short, you now have a method that applies automatic lock management to any lock you might use. So how would you make use of this? Here is an example:

fun requestInputKey(key: String): Int {
    return readLock.useLock({
        inputKeyIdMap[key]
    }) ?: writeLock.useLock({
        val value = inputKeyIdMap[key] ?: currentKeyId++
        inputKeyIdMap[key] = value
        value
    })
}

I will leave it as an exercise to the reader to fully understand the details of the above code. However, it basically provides a read-write lock around a map. If the key is defined in the map, it acquires the read lock and returns the integer the key maps to. If the key is not found, it releases the read lock and acquires the write lock. It then rechecks the value, and if it still doesn't exist it generates a new key value, assigns it to the map, and returns it. A comparable example in Java can be found on the JDK8 docs page for ReentrantReadWriteLock; just look at the first example usage and compare it with the above code. The Kotlin version has significantly less code, and given the inlining and reified types it doesn't incur any overhead over the Java code. Moreover, I find it significantly easier to understand, as I don't need to look at all the try/finally blocks or worry about null values. Nulls have to be explicitly specified and checked in Kotlin, so say goodbye to the famous NullPointerExceptions.

There are a number of other useful extensions that can be written, such as automatic resource closing (keep in mind Kotlin runs on Java 6), automatic memory management for objects which need to be explicitly deleted, suppression of exceptions, method reference parameters, and more. If you want to give it a try, you can run it in the browser (did I mention they are polishing their Kotlin to JavaScript compiler?). Months and thousands of lines of code later, I continue to be impressed by the language and can't wait to see what it brings to the table in the future.
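As a final taste, the automatic resource closing mentioned above takes only a few lines as an extension. Kotlin's standard library ships an equivalent "use" function for Closeable; this sketch just shows how little code the pattern takes (the TrackedResource class is only there to demonstrate it):

```kotlin
import java.io.Closeable

// A use-style extension: run the block, then close the receiver whether or
// not the block throws. The stdlib provides an equivalent for Closeable;
// the point is how cheaply the pattern can be expressed.
inline fun <T : Closeable, R> T.useIt(block: (T) -> R): R {
    try {
        return block(this)
    } finally {
        close()
    }
}

// Toy resource that records whether close() was called.
class TrackedResource : Closeable {
    var closed = false
        private set
    override fun close() { closed = true }
}
```

As with useLock, the inlining means this costs no more than writing the try/finally by hand.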