December 27, 2016

Looking back at 2016

I figure with the end of 2016 coming closer it is time to look back at the year. One of the major differences this year has been my ongoing project. Not only has it taken up a large portion of my time, but it has also been a huge learning experience and a refresher on things I had forgotten from my undergraduate engineering courses. I have also come to realize that frequent updates to this blog are just not going to happen. It isn't that I don't want to post updates; I simply prefer working on my project to posting about it.

Near the end of 2015, I wanted to get back into working on my own games. Shortly after finishing my master's program I was mentally and psychologically exhausted. I had spent a significant amount of time playing games with a few friends and started to get an itch for making my own game. I was originally thinking of making a mobile game that I would want to play while taking transit. Something I could easily pause, but also something complicated enough to require a good amount of problem-solving. At the time I had spent many hours playing Factorio, so I wanted to create a tower defense which required you to collect resources, manage power/communications, and fend off attacks from a moderately intelligent adversary.

I got a very basic prototype done after a few weeks, but I started to envision a much bigger project. I was no longer taking transit, simply because driving took about 1/5 the time. Together those two factors made me change my mind. I began looking into technologies and libraries I could use to make it happen. Most of my development had focused on 2D, but I had done some basic 3D work. In the end, I decided it would be worth my time to look into Unreal 4 vs Unity 5, as my experience with both engines had shown they had extremely robust rendering engines that would easily outdo anything I could create. They also had the nice feature of supporting multiple platforms. The end result was my decision to go with Unreal 4, as its performance as my rendering engine exceeded what Unity 5 could deliver.

After a few weeks of hacking together the general concept for what I wanted to create, I threw together a basic prototype in April 2016, which you can take a look at below:

It was very simple but gave me the foundation for what I was going to build. As time went on I slowly built up a robust engine which delegated the rendering to Unreal. I still use this approach today, and surprisingly it allows for rapid iteration and excellent utilization of the hardware.

I had a short and bitter fight with Unreal's GUI frameworks in May of 2016 and decided to not bother fighting with them. Instead, I decided to stick a web overlay on top of the game canvas and use web technologies for my interface. On the surface it seems more complicated, but it turns out to be significantly simpler and more productive. It also means I can create really good, solid-looking user interfaces.

Everything in my game is built to be networked and scalable. Over the summer I worked on pushing towards planetary-sized worlds: optimizing and reworking terrain generation, and pushing the limits of what can be done with today's technology. Eventually, I got terrain generation to a point where I could create terrain on the fly very quickly and travel at high speeds with fairly minimal lag. This is when I started to notice the limitations of my physics engine; more on that later.

I then started working on making the terrain modifiable in September of 2016. I was originally planning to make use of an embedded database engine, but I quickly came to realize that a database just isn't fast enough. That isn't to say the databases I tested were not high quality, or that they couldn't scale. It was mostly that I wanted them to store and retrieve data with extremely low latency. I also didn't need to do a significant amount of querying, so most of the advantages of a database were not helpful. I ended up writing my own storage system, which allowed me to handle high levels of concurrency with low latency by keeping fragments of data in memory and using an asynchronous event-based approach. It worked really well and was straightforward when it came to implementing replication across clients. That part I only partially implemented, enough to prove it would work, but then something else caught my attention.
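
The post doesn't describe the storage system's internals, so the sketch below only illustrates the general idea: fragments kept in memory, with all access serialized through a single event loop so no locks are needed. The class and method names are invented for illustration.

```kotlin
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Executors

// A minimal event-based fragment store: all reads and writes are funneled
// through one executor thread, so the in-memory map needs no locking.
// Callers get futures back instead of blocking, which is what keeps the
// latency of the calling threads low.
class FragmentStore {
    private val loop = Executors.newSingleThreadExecutor()
    private val fragments = HashMap<Long, ByteArray>()

    // Mutations are events queued onto the loop.
    fun put(id: Long, data: ByteArray): CompletableFuture<Void> =
        CompletableFuture.runAsync({ fragments[id] = data }, loop)

    fun get(id: Long): CompletableFuture<ByteArray?> =
        CompletableFuture.supplyAsync({ fragments[id] }, loop)

    fun close() {
        loop.shutdown()
    }
}

fun main() {
    val store = FragmentStore()
    store.put(42L, byteArrayOf(1, 2, 3)).join()
    println(store.get(42L).join()?.size) // Prints 3
    store.close()
}
```

A real implementation would also evict cold fragments to disk and publish change events for replication, but the single-writer event loop is the core of the concurrency story.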

The physics engine was causing most of my headaches. Again, not the fault of the physics engine; it was doing a great job at what it was designed for. It just wasn't scaling well, and it introduced a massive amount of complexity into my engine because I had to translate my game structures into something the physics engine could work with. I was also suffering from issues with high speeds and the occasional terrible collision response handling. I even had to write my own 'fix' for issues with tunneling, because the solutions provided by the physics engine only worked most of the time. I could consistently break it.

I had written my own physics engine for a 2D game many years ago. I was able to handle 17,000 objects colliding at 60 fps using a few strategies I developed; I only discovered the names of some of those strategies a few years later. Then it sort of hit me: there are no 2D physics engines that could handle that many objects, and I have doubts any will be able to in the near future. Part of the reason is that they are generalized solutions and can't make certain classes of optimizations in the problem space. Their generality makes them inefficient, which isn't a bad thing; it just means they are limited to micro-optimizations, such as using SIMD instructions, GPU acceleration, newer hierarchy structures, or newer algorithms. Whereas if I wrote my own, I could tailor it to my application and make the physics engine do only the work it actually needs to. I would also be able to deal with high-speed collisions and tunneling without having to fight against the physics engine.
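
The post never names those strategies, so as a stand-in here is a sketch of one classic broad-phase technique that fits the description, a uniform spatial hash: instead of testing all n² pairs, objects are binned into grid cells and only objects sharing a cell get a narrow-phase test. The circles and cell size are illustrative, not the actual engine.

```kotlin
import kotlin.math.floor

// A toy 2D circle for broad-phase demonstration purposes.
data class Circle(val id: Int, val x: Double, val y: Double, val r: Double)

// Returns the set of colliding id pairs (smaller id first). Cells should be
// sized around the largest object so each circle touches only a few cells.
fun collidingPairs(circles: List<Circle>, cellSize: Double): Set<Pair<Int, Int>> {
    val grid = HashMap<Pair<Int, Int>, MutableList<Circle>>()
    for (c in circles) {
        // Insert the circle into every cell its bounding box overlaps.
        val x0 = floor((c.x - c.r) / cellSize).toInt()
        val x1 = floor((c.x + c.r) / cellSize).toInt()
        val y0 = floor((c.y - c.r) / cellSize).toInt()
        val y1 = floor((c.y + c.r) / cellSize).toInt()
        for (gx in x0..x1) for (gy in y0..y1)
            grid.getOrPut(gx to gy) { mutableListOf() }.add(c)
    }
    val pairs = HashSet<Pair<Int, Int>>()
    for (bucket in grid.values)
        for (i in bucket.indices) for (j in i + 1 until bucket.size) {
            // Narrow phase: exact circle-circle overlap test.
            val (a, b) = bucket[i] to bucket[j]
            val dx = a.x - b.x
            val dy = a.y - b.y
            val rr = a.r + b.r
            if (dx * dx + dy * dy <= rr * rr)
                pairs.add(minOf(a.id, b.id) to maxOf(a.id, b.id))
        }
    return pairs
}
```

The application-specific win comes from tuning cell size and layout to your world; a generalized engine has to guess.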

After reading a number of books on the subject, a few research papers, a thesis, and looking at various open source physics engines, I started thinking about how I could integrate a custom physics engine into my game engine. It was now December of 2016, and I wasn't ready to hook it up and swap out my existing physics engine interface; I wanted to start by building a prototype physics engine. Very quickly I realized that I would need a visual debugger to help me develop the engine and come up with test cases I could use to verify correctness as I switch from naive implementations to heavily optimized solutions. I slapped together a quick visual debugger and was able to see my simulations. I also made sure I could easily step forward through time to verify everything was behaving correctly.
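
The debugger itself is hard to show in a snippet, but the stepping it relies on is easy to sketch: a simulation advanced one fixed tick at a time is deterministic, which is what makes single-stepping and comparing states against expected values possible. The toy 1D falling-body world below is invented for illustration; the post doesn't describe the prototype's contents.

```kotlin
// A toy fixed-timestep simulation that can be advanced one tick at a time.
// A fixed dt keeps runs deterministic, so a visual debugger can replay and
// single-step a simulation and always observe the same states.
class SteppedSim(private val dt: Double = 1.0 / 60.0) {
    var ticks = 0
        private set
    var y = 0.0 // Height of a single falling body (toy example).
        private set
    var vy = 0.0
        private set

    // Semi-implicit Euler: integrate velocity first, then position.
    fun step() {
        vy += -9.81 * dt
        y += vy * dt
        ticks++
    }

    fun stepMany(n: Int) = repeat(n) { step() }
}
```

Two runs with the same inputs land on bit-identical states, so any divergence between a naive and an optimized implementation shows up as a concrete failing tick.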

It sort of bothers me. I originally wanted to use a bunch of frameworks and libraries to make my life easier and allow me to develop the game faster. Instead, I kept having to deal with limitations and find workarounds for them and noticed my productivity decreased. I do have the fortunate benefit of not having a deadline and so I can take the hit of writing my own solutions where appropriate.

This year has been a huge learning process. A year ago I would never have dreamed of writing my own physics engine, or of writing my own storage system instead of using a database. I certainly would have thought that by this point I'd have something playable instead of a sort of early tech demo. I guess that is the problem when you have to build your own technologies to solve problems others have yet to solve.

Well enough about the past, where am I going next?
Short-term (Q1 2017)

  • [Physics] Expand capabilities of the physics engine, such as supporting more shapes
  • [Physics] Deal with high-speed interactions and tunneling
  • [Engine] Switch from existing physics engine to custom physics engine
  • [Graphics] Finish implementing client-side terrain updates

Medium-term (Q2-Q3 2017)

  • [Graphics] Determine what type of art style to use.
  • [Physics] Optimize custom physics engine, including game-specific special handling
  • [Gameplay] Start working on core gameplay
  • [Interface] Revisit UI with updated technologies

Long-term (Q4 2017+)

  • [Gameplay] Get playable demo
  • [Graphics] Polish rough areas

September 11, 2016

Summer 2016 and some thoughts

It feels like only a few weeks ago that I posted something here. However, the calendar tells me it has been more than two months; so much for my biweekly posting plan. I've been pretty busy making more progress on my project, but I've also been feeling a bit of burnout and needing some time to relax. Most of the last two months have been dedicated to solving some of the harder problems in my project. Some of the challenges involved sketching out pages of design for weeks at a time without writing any code. Others were solved by writing tests or small prototypes.

I had hit a complexity wall. Sometimes you get to a stage in a project where you know what you want to do but cannot wrap your head around the complexity of it. On paper it seemed straightforward, but once I started to code it I had to fight with making sure it was doing exactly what I wanted. I was entering territory where writing tests was simply not sufficient. As the design evolved and changed, any tests I wrote quickly became obsolete and meaningless.

It felt like I was back to when I was first learning programming and trying to create a text-based game. I was about 10 years old and had been coding for only a few months, but I had created a game that was about 1,000 lines long. It was my first experience of the complexity wall and my first realization that writing software is hard and time-consuming. Years later I hit it again; at the time I was working on creating a fairly complex game inside of Warcraft 3, and I was getting close to 10k lines of JASS2 (the scripting language for Warcraft 3).

There was a realization: software is developed on a spectrum. On one end you can write code which is fragile and small, but gets the job done quickly. Or you can write code which is resilient and large, but takes a long time to get the job done. I noticed over the years how fragile and small code is a common theme among programming contests and many successful projects. It is really attractive because it takes the least amount of time to get something done. In the case of programming contests it gives you more time to think about solving the problem and lets you finish the contest in less time. In the case of successful projects, it allows you to get the product out and start making money sooner.

I've also seen the dark side of fragile and small code, one that is often ignored. I've seen many games start out good, push out features quickly, and start making money. Then a few years go by and the project ends up abandoned because progress slows to a grinding halt, bugs constantly plague the game, and the money runs dry, which forces the developer to stop supporting it. You frequently hear the words 'rewriting', 'bug fixing', 'new systems', etc. If you don't believe me, look at Steam and the vast majority of indie games... the ones you only heard small things about and the ones that haven't turned into wildly popular games (the ones that after a few years are still < 100k copies sold). It makes me wonder if we are going to have a repeat of the 'video game crash of 1983'; mobile apps have not helped with this. Writing resilient and large code is not ideal when it comes to games either. It simply takes too long to get the product out the door; most game projects from start to 'release' tend to take a few years or less. Early access has helped, but fragile and small software can take advantage of early access too.

So what is ideal? Well, the short answer is nobody really knows. The long answer depends on what type of software you are producing. If you are writing flight software for an aircraft, or writing software for autonomous vehicles, or safety systems to support industrial applications, then writing resilient software with minimal bugs and high reliability is critical. If you are writing a mobile game, or a simple app for a device where bugs are not critical and the market is constantly shifting to the next hot thing, then getting your app out as quickly as possible is critical. You cannot escape the project management triangle:

Indie developers are really stuck with 'Cheap', so there are only two best-case options:
- Cheap & Fast, but not Good
- Cheap & Good, but not Fast
When I say 'good' I am not referring to the quality of the game, but to the quality of the software produced for the game. When I say 'fast' I'm not referring to the game running well on low-end machines or getting high frame rates, but to the game getting released sooner rather than later. Those with a keen eye will realize that if a game is getting delayed, it is because the developers are trying to go from Cheap & Fast to Cheap & Good. As far as I know, in software there is no feasible way to do that. You can go from Cheap & Fast to Fast & Good, but that seems to kill most indie games.

So where am I going with all this? Well, with my current project I find Cheap & Fast to be unacceptable. Actually, I generally stick to Cheap & Good, which typically means it takes a long time for me to write a project. It is also what turned me off from coding contests and competitions: the realization that they promote writing code that is terrible to maintain and reward those who can make something work quickly, even if the code itself is throwaway. The problem is few want to invest the time in a contest or competition built around creating a robust system which scales and has high quality. However, the real world is that kind of competition, and the success of a project is almost never determined by the quality of the software itself, but by the quality of the overall product in the market. The issue is, low-quality products can become popular and high-quality products can end up never being profitable. However, when I think about it, high-quality products which do become successful tend to stay popular over the long term. I cannot think of any low-quality products which have stood the test of time. I believe that is why I tend to prefer the idea of a Cheap & Good project over a Cheap & Fast project.

I've found a sort of way to cheat on the fast part, with two simple things: technology and architecture. It basically means my speed is limited by the technologies I choose, which is why I'm continuously looking for better frameworks/libraries/languages to use. If I don't, then my development speed stays constant, which really limits the types of projects I can take on. The other part is architecture, something you must learn and experience to really understand. Part of the reason I am always working on projects on the side is that architecture takes a long time to improve if you don't build an understanding of the ways you can solve problems and expand your knowledge. It doesn't mean I am breaking the project management triangle; it just means I'm improving my overall resource capabilities. A diagram to visualize:
Essentially, you can never get into the fast area, but you can certainly get closer to it by improving your total resource capabilities. In the diagram this is going from the black circle to the white circle. Obviously getting to the white circle is never going to happen, but moving away from the black one happens every time you improve your toolset, gain knowledge which helps you solve a problem, or find a better approach to the overall problem. Yes, every time you do one of those you pay a little in the cost area, but the gains are higher productivity while maintaining high quality and keeping the overall cost of the project low.

A real-world example would be game engines. You could write your own, or you could use Unity, Unreal, or another one. Writing your own has a cost: the time it will take you to write it. Unity and Unreal usually come with a price tag or take a slice out of your profits. You need to decide whether the cost is worth the benefits. Sometimes using a framework adds development cost with no long-term benefits. Other times a very popular and well-maintained open source framework comes with minimal development costs and significant long-term benefits. It is a hard problem knowing when to write your own, when to stop using a framework, and when to pick up a new one. The same applies to other aspects of software development, such as languages, design tools, development environments, deployment tools, build tools, source control, etc.

Sometimes I write my own, but usually for smaller components, and typically I look into how others before me have solved the problem. Building on the knowledge and lessons learned from others tends to be better than pretending you are an expert and writing your own solution in isolation. You can learn from mistakes without having to experience them first hand. Don't reinvent the wheel, but when you need to make some special custom wheel to solve your exact problem, make sure you understand why the wheel shouldn't be square. Don't try to use a hammer on a screw. Picking a framework or library just because you are familiar with it doesn't mean it will help you solve your problem any faster. Maybe during the early stages you will see lots of benefits, but over time you will start to notice the pains of the decision. You, or the people who continue with the project, will have to live with it.

After a summer of low tangible productivity on my project, I still feel a very strong sense of accomplishment. I am working towards fleshing out some of the bigger and more complex features of my project, and I am writing them in a way that makes it very unlikely I will ever have to rewrite them. A fairly bold statement, but one I strongly believe in. I feel that if you spend the time understanding the problem and prototyping various solutions, you will eventually arrive at a solution which does everything you want and is extremely easy to expand or reuse. It doesn't need to be generalized, but it does need to be flexible enough that there is a straightforward way to get to your end goal(s). I think that is the advantage of being both the designer and the developer of my project: I know what I want, and I am the one working towards making it a reality.

A minor side note: it is a bit of a scary thought that a single large, complex feature in my project took about two months to complete. Maybe a couple of weeks less than that if I subtract the time I took for vacation and general downtime.

June 27, 2016

Playing with Kotlin

Today there are many JVM languages, and to be fair I have a strong opinion on most of them. My first non-Java JVM language was Scala, which I still have mixed feelings about. I was turned off by the language: it has many powerful features, but when I was trying to make it my main language I was constantly plagued with IDE tooling issues. Groovy did not impress me because I simply cannot see the value in dynamic languages beyond smaller projects. I prefer a compiler catching an entire category of bugs, rather than having to look at documentation to tell me what type a third-party library is returning or what types the parameters should be. It is why I really enjoy using auto for locals and lambdas in C++ and var in C#. Sometimes the type doesn't matter to me, or it is so blatantly obvious I don't see why I would need to declare it.

Unfortunately, C# (at least to my current knowledge) does not have the concept of constant local variables. I like knowing up front if a variable is going to change its value over the scope of the function; it means I have fewer things to think about as I'm evaluating code. Java is only just starting to dip its toe into type inference, and as a whole the language is terribly verbose. Then on top of that you have a large part of the Java community that believes everything should be abstracted to the point of meaninglessness. If I need to look at more than a file or two to know what I am working with, then there is simply too much abstraction. As they say, less is more, and being concise while maintaining clarity is a heck of a lot easier to maintain than abstracting away all the details. Naturally, if you are writing a public interface you should make it abstract, though the extent really depends on the language and the need to maintain an ABI or API.

I like functional features, but I'm still not sold on using a functional language, which is why I don't use Clojure (which, again, is dynamic, so another point against it). Functional languages are good at solving some problems but can be overly verbose at solving others, where a loop and some state would just be simpler. I played with Xtend for a while, and it seemed to fit right where I would like a language to fit. However, I got frustrated by how fragile the compiler was; while it was decently supported, I couldn't put faith behind it or trust that the project would be maintained for a long time. Around then I heard about Kotlin and read a few things about it, but realized it hadn't been around for very long and the developers were not maintaining backwards compatibility. That all changed recently when they were preparing their release candidates for version 1.0, at which point the language would become backwards compatible. I was using IntelliJ after being a long-time user of Eclipse, and to be fair, having the ability to set Eclipse key bindings made my transition to JetBrains' tool set very straightforward.

I used Eclipse for almost 10 years, and over the years I have tried other IDEs for many different languages. I used WebStorm for JavaScript a few years back and was quite impressed by it. I used PhpStorm more recently and continued to be impressed by how solid the IntelliJ derivatives were. After giving IntelliJ another shot recently, I was simply blown away by the quality and performance of the IDE, especially how easy it was to manage builds and libraries. Then I tried Kotlin. I was expecting to find Kotlin + IntelliJ similar to Xtend + Eclipse... and I couldn't have been more wrong. The experience after using it for about 8 months has been nothing but solid, with the odd minor hiccup that was extremely easy to work around or resolve. Watching issues on JetBrains' tracker get responses and fixes has been very impressive.

The only thing that really drives me nuts from JetBrains these days is ReSharper C++ and Visual Studio... However, I feel most of the problems come from working on very large solutions in Visual Studio, not from the ReSharper C++ plugin itself. If I could use CLion to build with the Visual Studio tool chain, I can only dream of how much more productive I would be in C++. I have debated trying to make everything work with QtCreator, which is a decent IDE, but I would rather spend the time coding than trying to make my tools work.

Kotlin brings a number of things to the table that I have a hard time finding in most other languages. The first is Java interoperability being treated as a first-class citizen. This means using Java libraries is extremely simple and straightforward. It also means using the JDK isn't frowned upon or difficult; I can reuse my knowledge from Java without having to learn yet another standard library. Kotlin supports extension functions and properties, which means the JDK has been well extended by Kotlin to offer very useful and powerful features that you simply don't have in other JVM languages. An example is converting between lists and arrays: dead simple in Kotlin, but painful in Java and annoying in other languages.
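
For instance, the list/array conversions mentioned above are a single standard-library extension call in each direction:

```kotlin
// List<Int> -> IntArray (a primitive int[] on the JVM) and back again,
// using Kotlin's standard-library extensions.
fun roundTrip(): Boolean {
    val list = listOf(3, 1, 2)
    val array: IntArray = list.toIntArray() // one call to a primitive array
    val back: List<Int> = array.toList()    // one call back to a list
    return back == list
}
```

Compare that with Java, where going to a primitive array means a manual loop or a stream with boxing gymnastics.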

Type inference and null checking are my favorite features, but next on the list would be method and property extensions. I think an example is more useful here so let there be some example code:

fun Double.abs(): Double = Math.abs(this)

I will assume basic Java knowledge when talking about the Kotlin example code. The above is a function definition which extends the double type (notice I'm talking about double and not Double) with a method called "abs". The method is defined as calling Math.abs on itself and returning the value. A few examples:

println((3.0).abs()) // Prints 3.0
println((-2.0).abs()) // Prints 2.0
var x = 2.0 / -1
println(x.abs()) // Prints 2.0

Now of course those familiar with Java will be freaking out about auto-boxing. My response is: there isn't any. The Kotlin compiler is pretty smart about that, and will simply compile the code to look roughly like:

System.out.println(Math.abs(3.0)); // Prints 3.0
System.out.println(Math.abs(-2.0)); // Prints 2.0
double x = 2.0 / -1;
System.out.println(Math.abs(x)); // Prints 2.0

Coding in Kotlin really feels like Java but with a ton of syntactic sugar and a very smart compiler that will make intelligent choices for you. The above example is extremely simple, so let's see a more complicated example:

inline fun <reified T : Lock, reified R> T.useLock(task: () -> R): R {
    try {
        this.lock()
        return task()
    } finally {
        this.unlock()
    }
}

Unfortunately, I don't have a good syntax highlighter for Kotlin, so you will have to accept pre-tagged code. The above probably looks pretty strange to a Java developer. You can probably guess it is a function definition. However, it has the "inline" keyword on it, which means the Kotlin compiler will inline the function at each call site. The generic signature has two parts: first, "reified T : Lock", which means capture the type of T, which must be a subtype of Lock, in this case "java.util.concurrent.locks.Lock"; second, "reified R", which means capture the type of R. Unlike regular Java generics, you have the full type information of the generic parameters inside this function; they are not erased. If you are a Java developer, your jaw should drop at that statement.

The function extends any type T, which means anything that is or implements "java.util.concurrent.locks.Lock". You will need to import the definition to make use of it, but IntelliJ will automatically import it for you when you need it. The function is called "useLock". It takes a lambda that has no parameters and returns a value of type R, and the function itself returns a value of type R. The body should be fairly straightforward: it calls the lock and unlock methods of the object it extends, invokes the lambda in the critical section, and returns the value from the invoked lambda. Long story short, you now have a method that applies automatic lock management to any lock you might use. So how would you make use of this? Here is an example:

fun requestInputKey(key: String): Int {
    return readLock.useLock({
        inputKeyIdMap[key]
    }) ?: writeLock.useLock({
        val value = inputKeyIdMap[key] ?: currentKeyId++
        inputKeyIdMap[key] = value
        value
    })
}

I will leave it as an exercise to the reader to fully understand the details of the above code. However, it basically provides a read-write lock around a map. If the key is defined in the map, it acquires the read lock and returns the integer it maps to. If the key is not found, it releases the read lock and acquires the write lock. It then rechecks the value, and if it still doesn't exist, it generates a new key value, assigns it to the map, and returns the value. A comparable example in Java can be found on the JDK 8 docs page for ReentrantReadWriteLock; just look at the first example usage and compare it with the above code. The above example has significantly less code, and given the inlining and reified types it doesn't incur any overhead over the Java version. Moreover, I find it significantly easier to understand, as I don't need to look at all the try/finally blocks or worry about null values. Nulls have to be explicitly specified and checked in Kotlin, so say goodbye to the famous NullPointerException.

There are a number of other useful extensions that can be written, such as automatic resource closing (keep in mind Kotlin runs on Java 6), automatic memory management for objects which need to be explicitly deleted, suppression of exceptions, method reference parameters, and more. If you want to give it a try, you can run it in the browser (did I mention they are polishing their Kotlin to JavaScript compiler?). Months and thousands of lines of code later, I continue to be impressed by the language and can't wait to see what it will bring to the table in the future.
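
As an illustration of how little code the resource-closing extension takes, here is a sketch in the same spirit as useLock. The name useAndClose is mine, chosen to avoid clashing with the standard library's own Closeable.use:

```kotlin
import java.io.Closeable

// Extends any Closeable with automatic closing around a block of work,
// mirroring the useLock pattern above. Closeable (rather than Java 7's
// AutoCloseable) keeps this compatible with Java 6.
inline fun <T : Closeable, R> T.useAndClose(task: (T) -> R): R {
    try {
        return task(this)
    } finally {
        this.close() // Always runs, even if task throws.
    }
}

fun main() {
    // readText() is a standard-library extension on Reader.
    val text = java.io.StringReader("hello").useAndClose { it.readText() }
    println(text) // Prints hello
}
```

With inlining there is no lambda allocation at the call site, so this costs the same as hand-written try/finally.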

June 6, 2016

Multi-threaded, Networked, and Expanding

Since my last post a significant amount of my time has been spent working through some of the fun and exciting challenges of working with Unreal 4 and fleshing out some of the larger architectural parts of the project. At some point I'm going to have to put a name on the project, and while I have a working internal name that I currently use, I still haven't decided if I want to stick with it. Naming is hard. I sometimes find giving something a good name is harder than actually building the thing you are trying to name. So for now, I'm going to call it 'the project'.

I can never stress enough how important architecture is when it comes to building an application. Throwing software together and expecting a good result is like shaking a box full of puzzle pieces and expecting the puzzle to assemble itself. Sure, over the lifetime of the universe it might happen, but let's be honest, nobody has time for that. I've seen many games go from single-player to multi-player, from single-threaded to multi-threaded, or from running well on a single platform to running well on multiple platforms. It is possible, but doing it after the fact is a huge investment compared to setting things up correctly the first time. I believe the term is technical debt, and when it comes knocking on the door for payment it has a tendency to stick around. One of the many design decisions I make when starting most projects is to figure out what my options are, and then, more importantly, how to make it happen with the least amount of effort from myself over the long term. If I look only at the short term, it is very easy to slap together a prototype and get something practically unmaintainable out the door. Sadly, I see it happen far too often; a project goes out before it is ready. If you want some good examples, take a look at some of the early access games on Steam.

I have been under the strong belief over the last 10 years or so that any demanding project which isn't making use of multiple threads is just not being designed well. I do see the value of simplicity in writing a single-threaded application, but I also know it isn't hard to multi-thread an application if you go with an event- or task-based approach. (It isn't a coincidence that we are seeing task-based features being added to modern languages like C#, C++, and JavaScript.) One advantage of being event-based is the ability to easily scale the application. Like everything, it comes with a cost; in particular, it makes tracing the flow of logic a bit more complicated, and reasoning about the system is slightly more challenging. However, it forces you to make clean separations of concerns and keep coupling extremely low. I'm not a huge fan of actor-based systems, but I really like the general idea behind actors and make use of it. Below is an example of why I like this approach.
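
A minimal sketch of that event/task-based idea (names and structure invented for illustration, not taken from the project): each component owns a mailbox whose tasks are drained on a shared pool, so one component's tasks run sequentially without locks while the pool spreads different components across all available cores.

```kotlin
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.ExecutorService
import java.util.concurrent.atomic.AtomicBoolean

// An actor-like mailbox: tasks posted here execute one at a time, in order,
// on a shared thread pool. Components communicate by posting tasks to each
// other's mailboxes instead of sharing mutable state.
class Mailbox(private val pool: ExecutorService) {
    private val queue = ConcurrentLinkedQueue<() -> Unit>()
    private val scheduled = AtomicBoolean(false)

    fun post(task: () -> Unit) {
        queue.add(task)
        // Schedule a drain only if one isn't already running, so tasks for
        // this mailbox never execute concurrently.
        if (scheduled.compareAndSet(false, true)) pool.execute(::drain)
    }

    private fun drain() {
        var task = queue.poll()
        while (task != null) {
            task()
            task = queue.poll()
        }
        scheduled.set(false)
        // Re-schedule if a task raced in after the drain loop emptied out.
        if (queue.isNotEmpty() && scheduled.compareAndSet(false, true)) pool.execute(::drain)
    }
}
```

Scaling then falls out naturally: more components means more mailboxes, and the pool keeps every core busy without any component needing its own locking.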

In the above image you can see the CPU activity of my computer when running the project I am working on under a moderate artificial workload. Now, I say artificial because I am still in the process of connecting everything together. Okay, that is a little bit of a lie: it is all connected, I have just been cleaning up the communication layers between different areas of the project. Nevertheless, the above image is an example of an application which will scale with processing power. Moreover, half of the above load would vanish if I added a second computer. With my given architecture it is possible to have the application run on a server cluster without a huge investment of my time. I am not planning for it, but the option is there if I ever see the need to spend my time expanding the system. The reason I can do this is that I designed the architecture of the system to operate this way.

Again, if I just wanted to get a working prototype done and out the door, that could have easily happened over a month ago. In fact, I did have a basic working prototype in Unreal 4 a while ago. However, my concerns with Unreal's architecture led me to a hybrid approach where I am using Unreal 4 without building most of the project inside of it. In the next few weeks I will be able to start focusing on fleshing out the project instead of getting all the parts working together. Essentially, I'm on the edge of getting into the fun part of the project, but until then the shroud of mystery around the project will remain.

May 15, 2016

An interesting start

Now, I did say a biweekly schedule for updates here, but I've been focusing so heavily on the project that I didn't really want to stop last week just to post a new update.

I think I am starting to feel the burnout of working on my project on top of working full time. The last time I really felt this way was when I was working on Combatics Evolution. I really enjoyed that time even though it was extremely exhausting. I had a minor setback earlier this week when one of the drives that houses a large portion of the project decided to start dying on me. Luckily, I've moved my data onto a new drive and created some online backups to keep my mind at ease. However, it still cost me about two days and a late night to get things set up again. I will be looking into a proper mirrored RAID setup for my system in the near future, but it isn't a pressing concern.

Unfortunately, a lot of my time over the last 3 weeks has been all about prototyping and proving architectural decisions. I was able to finish that last week, and I am very happy to report that each component of the system I am building will work together nicely. Now, I am still intentionally not going to talk about exactly what I am working on, but I will talk about a few of the things I was prototyping and some of the logic behind them.

I'm going to be using Unreal 4 as my graphics engine. Notice I said graphics engine and not game engine. Part of my decision was that I really want to make use of the power of Unreal 4 when it comes to rendering and handling user input. I continued to spend a good amount of time digging into some of its internals as I figured out how I wanted to do the user interface. Unreal 4's GUI system is terrible, just terrible, and poorly documented. I find most of Unreal 4 is poorly documented, and it is usually easier to read the source code than to ask Google or consult the documentation. This is one of the big downsides of Unreal 4: there is a lack of community, and as a result a lack of good documentation and tutorials. Most of the tutorials I find tend to be all about blueprints... which are basically useless to me, as I don't see the value in putting complex logic into a graphical programming language. I think Unreal 4 was really targeted at non-developers, and thus most of the guides and help talk about blueprints. Very few guides exist for 'how do I do X in C++', which has made working with Unreal 4 a painfully slow, grinding experience. I do see how things will become noticeably better once I am more fluent with the engine and know how I should be using it. The downside is that every hour I spend figuring out how to do something is an hour I am not actually doing it.

The result is a painful development experience, and don't even get me started on how rough using C++ in Visual Studio is. It reminds me of when I was working at Autodesk, where about half my time was spent waiting on the compiler and binary loading just to run a quick check of the application. My development style is very closely tied to a rapid PDCA (plan-do-check-act) cycle, which means anything that slows down the cycle also slows down how I develop code. From my experience, most people who code follow a similar pattern: you develop the code in pieces over time instead of trying to write a large chunk of code without testing it. This is why I intend to use Unreal 4 only as a graphics engine. I want to harness the power of the engine, but I don't want to be slowed down by the lack of documentation and an engine architecture that is inappropriate for my project. If I stuck with Unreal 4 for the entire project, I would end up fighting the engine at every turn to make it do what I need it to do. I don't want to fight it, I just want to use it.

So how do I plan to escape the engine, and won't that be a lot more work? Well, yes, it will be a lot more work upfront, as I need to come up with the complete system and clearly define communication interfaces between the Unreal engine and the rest of the system. This isn't a knee-jerk reaction; I've been planning this and prototyping some aspects of it for the last 2 months. About 5 weeks ago I decided to take a deep dive into Unreal and see what the engine had to offer: evaluate how it works, see what it can provide me, and build a basic prototype of the project I am planning to create in the end. My conclusion was that I want to use the engine only as a graphics engine and an interface for the user. For everything else I plan to make use of other frameworks and libraries in a modular architecture. Essentially, the way I am architecting the system will allow me to completely drop Unreal and replace it with something else, such as Unity. I won't do that unless the need arises, but it will always be an option.

I am now at a point where all the technologies and components are tested, and for most of the non-trivial ones I have worked out how to integrate them together. I plan to make use of a large number of libraries and avoid writing code that isn't directly related to the project or to connecting the libraries together. My current task is getting the full networking stack up and running, and performing some real-world testing of it to make sure the whole architecture is sound and practical. Once that task is complete, I then need to finish getting automated testing working for it all. I will also likely spend a small amount of time putting together the basics of a build system that will package everything up nicely. Once that is all complete, I can start looking into actually building the project. I know I will need to brush up on my Blender knowledge in the next few weeks, but I'm going to defer that for as long as possible because I really would like to get the project moving forward and out of the prototyping stage. Once I'm out of my initial prototyping stage and have the full stack functional, I will start to talk about the details of the project, but until then I will continue to keep it light on the details.

April 24, 2016

Now where was I...

It would appear that things haven't changed around here in a while. I'm going to shake it up a bit.

When I last posted I was in graduate studies with very little spare time, and what spare time I had went to just trying to relax and have some fun. It has been almost a year since I finished my master's degree. In that time I've been doing what I feel like and catching up on things I missed while in school.

One thing I have been doing a lot more since I graduated is working on my hobby projects. I decided around December that I wanted to get back to working on things and spend less time playing games. Before that point I had poked at a few things here and there and kept myself somewhat familiar with what was going on, such as the arrival of a free version of the Unreal 4 engine, updates to libGDX, and news about what to expect in Java 9.

I've been working full time in a very heavy web development role, which has brought my web skills back to the modern age; I've come to learn more PHP, JavaScript, and SQL than I ever knew before. It was nice to get really sharp with my web skills, as going forward it will be helpful if I ever need to crank out a website or use a database engine as a storage mechanism. It is sort of strange that until a year ago I never really valued the power of a database, as most of what I do with games has really been: keep it fast, keep it in memory, and make it run well with multiple threads. When it comes to databases, you sort of get all that for free(ish), plus networking, data storage, persistence, redundancy if you scale it across multiple systems, and so on.

I have also come to understand different types of database systems: SQL vs NoSQL, graph, and geospatial. That has led me to start thinking about how I could turn them into powerful back-ends for games. Maybe not what you typically think of when it comes to games, but I really feel they would be of use.

There have also been two 'recent' languages that have caught my attention: Kotlin and Rust. Kotlin I feel very much at home in because of my extensive experience in the JVM ecosystem. Rust I haven't found the time to dig into, but it is on my list of things to try out; I suspect it will be a very pleasant low-level language. Another language I poked at a bit during my graduate studies was D, which I have to say is a decent language if you come from a C++ background, as it sort of melds together C++ and C#.

I have gotten myself very familiar with Kotlin and have been using it for a couple of projects I have on the go right now. One of them was a survival-based tower defense (think of a more complicated version of Plants vs. Zombies), which I developed to an almost playable state before deciding to drop it for something else I've been itching to work on. Using Kotlin, I realized I have finally found a language which I really like working with. While I have used Java for years, part of me really hated its verbosity, so I started playing around with Scala, Groovy, and Xtend. Each is decent, but I wasn't satisfied with the tool or library support of any of them. In my opinion the quality of tools and libraries for a language outweighs everything else about it. Kotlin sort of fixes that issue. Not only is it a non-verbose version of Java, but it is so tightly integrated with Java that making use of Java libraries is natural and straightforward. Xtend was very much the same way, but the tool support was fairly buggy and the syntax seemed more hacked together than well thought out.

Anyway, I mentioned I have a new project I am working on... but for now I'm not going to say too much about it. That's because I am prototyping it right now just to see if the tools I want to use will play nicely together. So far it has been good and I've been making progress, but I feel it's far too early to go into the details of what I am working on. Which goes back to my first point about shaking things up. Someone I know mentioned I hadn't updated my blog in a while... and it isn't because I haven't been up to much... more the opposite, really. Now that things have settled down in my life and are a lot more stable, I feel like updating this blog more often would be useful. I'm thinking of trying to stick to a biweekly schedule... but I'm sure even with my best intentions I'll miss the odd week here or there.