Friday, 13 June 2014

More dungeon layout experiments

After my simulated annealing experiment, I had a few more ideas for dungeon level generation.

Fake force-directed graph layout


Force-directed graph layout is a neat way to lay out graphs so that they look nice when rendered. Here are some very nifty realtime implementations in Javascript:

d3
https://github.com/mbostock/d3/wiki/Force-Layout

Springy
http://getspringy.com/demo.html

These types of layout algorithms work by treating the nodes of the graph as charged particles which repel each other, and the edges as springs connecting them. The spring forces oppose the repulsive forces. The algorithm is essentially a two-dimensional physical simulation. After the forces reach equilibrium, or after some number of iterations, the nodes and edges naturally tend to be laid out in a visually pleasing way.
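
As a rough sketch of the mechanics (not the code behind d3 or Springy; the constants and data layout here are made up for illustration), one iteration of such a simulation might look like this in Python:

    import math

    def layout_step(positions, edges, repulsion=1000.0, spring_k=0.05,
                    rest_length=50.0, step=0.1):
        # positions: dict of node id -> [x, y]; edges: list of (a, b) pairs.
        forces = {n: [0.0, 0.0] for n in positions}
        nodes = list(positions.items())
        # Repulsion: every pair of nodes pushes apart, like charged particles.
        for i, (a, pa) in enumerate(nodes):
            for b, pb in nodes[i + 1:]:
                dx, dy = pa[0] - pb[0], pa[1] - pb[1]
                dist = math.hypot(dx, dy) or 0.01
                f = repulsion / (dist * dist)
                forces[a][0] += f * dx / dist; forces[a][1] += f * dy / dist
                forces[b][0] -= f * dx / dist; forces[b][1] -= f * dy / dist
        # Springs: each edge pulls its endpoints toward a rest length (Hooke's law).
        for a, b in edges:
            dx, dy = positions[b][0] - positions[a][0], positions[b][1] - positions[a][1]
            dist = math.hypot(dx, dy) or 0.01
            f = spring_k * (dist - rest_length)
            forces[a][0] += f * dx / dist; forces[a][1] += f * dy / dist
            forces[b][0] -= f * dx / dist; forces[b][1] -= f * dy / dist
        # Nudge every node along its net force; repeat until things settle.
        for n, p in positions.items():
            p[0] += step * forces[n][0]
            p[1] += step * forces[n][1]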

I tried a similar technique for generating and laying out a dungeon map. I start by generating a random tree of a limited depth. Next, I assign a randomly sized rectangle to each tree node. The rectangles are spread out in 2D space in a way that guarantees they will not overlap (using a simple minimum bounding box).

Now, the edges of the tree which connect the rectangles are treated like springs. Unlike in the graph layout algorithm, there are no repulsive forces between the rooms. I wanted them all to glob together under the forces of the springs.

Rather than implementing a full physics simulator, I just created a cheap hack which translates each rectangle towards its parent in the tree until there is a collision. The algorithm iterates until it reaches a logjam, where no rectangle can move any closer to its parent.
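
In pseudo-Python the hack looks roughly like this (a sketch of the idea, not my exact code; the tree, rectangle and collides() helpers stand in for the geometry classes mentioned below):

    def sign(x):
        return (x > 0) - (x < 0)

    def settle(tree):
        # Keep nudging every room one grid cell towards its parent room,
        # skipping any nudge that would make two rooms overlap.
        # Stops once no room can move any further (the logjam).
        moved = True
        while moved:
            moved = False
            for node in tree.walk():
                if node.parent is None:
                    continue
                dx = sign(node.parent.rect.centre.x - node.rect.centre.x)
                dy = sign(node.parent.rect.centre.y - node.rect.centre.y)
                candidate = node.rect.translated(dx, dy)
                others = [n.rect for n in tree.walk() if n is not node]
                if not collides(candidate, others):
                    node.rect = candidate
                    moved = True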

The results are not too bad, but the method tends to leave large disconnected sections. I also construct a search tree at the end to find the connected rooms.

In the process, I knocked together some Python classes for representing and rendering 2D geometry, including a 2D vector, rectangle, minimum bounding box, grid-based collision detection, and a general purpose tree. I'm sure there are plenty of better implementations out there already, but it was fun.


Roomspray


After finishing the force-directed implementation, I had a simpler idea based on its initial layout step, which used an expanding minimum bounding box to spread the rooms out in a disconnected way.

Instead, I wanted a way to track exactly where the edges of the current rooms were, and connect new rooms directly onto the existing rooms.

Since I already had vectors, rectangles, collision detection etc from the force-directed layout algorithm, I just needed an efficient data structure for keeping track of the edges of the current map.

I decided to use a list of vectors sorted by angle, pointing from the origin at empty spaces on the grid adjacent to existing rooms. Rooms are added sequentially at random vectors. Each time a room is added, the vector list is updated to reflect the boundaries including the new room.

I like this method because it's simple, and I think it could be easily modified to generate effectively infinite maps procedurally "just in time" as a player explores the map by shifting the origin to follow the player.

In implementing this I learned that Python doesn't have an efficient sorted list implementation built in. It does have the bisect module for binary searching a sorted list, which is fine for looking up entries in a static list. However, inserting and deleting at an arbitrary index in a Python list has O(n) performance according to the documentation.

Generally, a sorted list is backed by a balanced tree, giving O(log n) performance for inserts, deletes etc. I used the blist package for this.
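
For what it's worth, here's roughly how the frontier can be kept in a blist.sortedlist. The (angle, cell) tuple encoding is my assumption of a reasonable representation, not necessarily exactly what I described above:

    import math
    import random
    from blist import sortedlist  # pip install blist

    # Empty grid cells adjacent to existing rooms, kept sorted by their
    # angle from the origin.
    frontier = sortedlist()

    def add_frontier_cell(x, y):
        frontier.add((math.atan2(y, x), (x, y)))  # O(log n) insert

    def pop_random_frontier_cell(rng=random):
        i = rng.randrange(len(frontier))
        angle, cell = frontier[i]
        del frontier[i]                           # O(log n) delete
        return cell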

Tuesday, 20 May 2014

Native looking web apps for iPhone and Android

Developing mobile applications can be a frustrating exercise.

In particular, if you want to develop a native app for an Apple device, you have one choice of programming language (Objective-C), one choice of IDE (Xcode), and one choice of operating system (OS X). Before you can write a single line of code, you need to purchase a Macintosh computer and register as an Apple Developer. Testing must be done in an emulator or on the native device. After you finish your work, it needs to go through the somewhat arbitrary App Store approval process. Finally, if you want to port it to another platform, you can expect to rewrite large amounts of code.

Fortunately, there is another way which allows you to use your existing development tools and skills. Every mobile device has a browser. You can create a web application using any combination of web development tools and techniques you're already familiar with.

You might be concerned that this approach won't look and feel like a native app. Maybe you're worried about whether you can use the hardware peripherals in the device, such as the GPS? Or perhaps you're unsure about the performance of a user interface driven by Javascript?

I'll address these concerns, but first, here's an example. This is a rapid prototype I developed for our sales team. Our company designs and manufactures rack-mounted radio base stations. Our distributor wanted to demonstrate the monitoring capabilities to potential customers using an iPad.



Native look and feel


As you can see from the video, it's possible to very closely replicate the native iOS look and feel in a web application. I used a framework called iWebKit, which is a painstakingly crafted and maintained set of CSS styles. Applying the styles to your app is as simple as assigning the correct classes to elements in your UI design. There are similar options for Android too, like NativeDroid.

Apart from fonts, colours etc., there are a few other small considerations which are also easily taken care of. Firstly, the viewport can be configured using a Safari-specific meta tag (or the equivalent for Android). This allows you to hide the address bar and navigation buttons, set the screen orientation, and fix the UI dimensions to the native resolution of the device. These all go a long way towards making the app feel native.

Finally, it's easy and recommended to supply home screen icons for your app, so that it looks like any other app installed on the phone. Here's how to do it on iOS and Android.


Hardware peripherals


With HTML5, there are many powerful APIs available to applications in the browser: geolocation (GPS), audio, hardware-accelerated 3D graphics, battery monitoring and offline storage, just to name a few. Unless your app requires unusual functionality, chances are that a web application will be able to access the hardware functions required.


Performance


Modern mobile devices are very powerful. The ARM-based processors in smartphones can clock over 1 GHz, have multiple cores and hardware-accelerated graphics, and can address gigabytes of RAM. Not so long ago, those kinds of specs were found in new laptop computers.

Javascript has also come a long way. Google in particular has a major vested interest in Javascript performance and developed the V8 Javascript engine.

Taken together, these advances mean that almost any browser-based app will perform snappily on a mobile device.

Monday, 19 May 2014

Dungeon layout with simulated annealing

This started with the idea of generating a tile-based dungeon. Rather than reading up on how others have done it, I wanted to see what I could come up with independently.

I remembered pen and paper games from my childhood like Hero Quest and Space Crusade. Space Crusade had rectangular room tiles which could be arranged in many different ways to create ship layouts. This seemed like a good starting point.

I wrote a short program which shuffles rectangular tiles on a grid to create a dungeon map using a technique called Simulated Annealing. Here's a video of the program arranging 100 rooms into a dungeon layout. It finds a solution after 254 moves. The program runs faster than the video shows - it's slowed down to make the process more visible.


The code was written in Python, using numpy matrices to represent the grid. The results are rendered using OpenGL, and saved to disk with PIL (the Python Imaging Library).

Real Annealing


In the real world, annealing is a heat-treatment process used on glass and metal. My understanding of real annealing is very rough, but here it goes. The material is heated up to the point where the internal structure can change more easily, but not so hot that it becomes liquid or changes shape. This allows the structure to settle naturally, which can improve the material - relieving internal stresses, making it less brittle, and so on. The material is then cooled in a controlled way, leaving it with the new and improved structure.


Heuristics


But what does annealing have to do with software? Before I answer that, a quick note on heuristics.

When there is no method for solving a problem directly, and too many possible solutions to try them all in a reasonable amount of time, how can we search for an answer? Taking the dungeon layout problem as an example, the number of possible arrangements of 100 tiles on a grid is astronomically large (it grows exponentially with the number of tiles), so we can't just try random layouts until we stumble on a valid one.

Instead we can use a heuristic. Essentially, a "rough" method which isn't particularly clever, and may not solve the problem perfectly, but generally produces a reasonable solution. Simulated annealing is a heuristic search.

Simulated Annealing


Let's start with a definition from Wikipedia:

Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space.

More simply, Simulated Annealing is a method for searching for a reasonable solution when there are too many possibilities to try them all. The name "annealing" comes from the heat-treatment process, which apparently inspired the simulated annealing technique. Simulated annealing gradually improves the quality of a solution, similar to the way real annealing works on materials, and uses a "cooling" effect.

To apply simulated annealing, we first need a measure of the quality of a solution.

Next, we make small random changes to gradually improve a bad solution into a good one. If a small change would improve the quality measure of the solution, we make the change and continue. Generally, if a change would reduce the quality measure, we don't make that change.

However, for many problems it is possible to get stuck on a solution which has low quality, but is still better than all nearby solutions: a "local optimum". Any small change from there would reduce the quality measure, so the process cannot progress through the solution space toward a good solution.



To get around this issue, simulated annealing uses an exponential cooling process. Initially, the solution has low quality and the temperature is high. While the system is hot, there is a high probability that we accept a chosen move even if it reduces the quality measure. This allows backtracking away from local optima. As the quality increases and the system cools, the chance of accepting a "backwards" move is reduced, until eventually only moves which improve quality are made.
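
In code, the core loop is only a few lines. This is a generic sketch (the cost function, move generator and cooling constants are placeholders, not the exact values I used); here the solution is scored by a cost (how bad it is), so lower is better - the mirror image of the quality measure described above:

    import math
    import random

    def anneal(state, cost, random_move,
               t_start=10.0, t_min=0.01, cooling=0.995, max_iters=100000):
        temperature = t_start
        current_cost = cost(state)
        for _ in range(max_iters):
            if current_cost == 0 or temperature < t_min:
                break
            candidate = random_move(state)
            candidate_cost = cost(candidate)
            delta = candidate_cost - current_cost
            # Always accept improvements; accept "backwards" moves with a
            # probability that shrinks as the system cools.
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                state, current_cost = candidate, candidate_cost
            temperature *= cooling  # exponential cooling schedule
        return state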

Dungeon Annealing


First, a quality measure. A good dungeon layout has:

  1. every room connected to at least one other room, and
  2. no two rooms overlapping each other.

I used two measurements which I called "disconnectiness" and "overlappage". I'm sure they have real mathematical names, but I don't know them. Disconnectiness and overlappage are bad, so a good layout has low values - ideally zero.

We start with a random layout of rectangles on a grid. Next, we pick a change to the layout: move one room to a random location. If the changed layout would have lower disconnectiness and overlappage, it is better. We continue annealing until there is no disconnectiness or overlappage, or until an iteration limit is reached.
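
Sketched in the same style as the loop above (the Room helpers like overlap_area(), adjacent_to(), copy() and move_to() are hypothetical stand-ins for my geometry code):

    import random

    def overlappage(rooms):
        # Total number of grid cells shared by more than one room.
        return sum(rooms[i].overlap_area(rooms[j])
                   for i in range(len(rooms))
                   for j in range(i + 1, len(rooms)))

    def disconnectiness(rooms):
        # Number of rooms that don't touch any other room.
        return sum(1 for r in rooms
                   if not any(r.adjacent_to(other) for other in rooms if other is not r))

    def cost(rooms):
        return overlappage(rooms) + disconnectiness(rooms)

    def random_move(rooms, grid_w=80, grid_h=80):
        # Copy the layout, then teleport one room to a random grid position.
        new_rooms = [r.copy() for r in rooms]
        random.choice(new_rooms).move_to(random.randrange(grid_w), random.randrange(grid_h))
        return new_rooms

These plug straight into the anneal() sketch above: layout = anneal(rooms, cost, random_move).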

Further Reading


I originally read about Simulated Annealing in The Algorithm Design Manual by Steve Skiena. He mentions the application of PCB (Printed Circuit Board) layout, which seemed like a similar (but of course more complex) problem to laying out dungeon tiles. I've worked with engineers who use CAD tools to lay out circuit boards, and heard their opinions on automatically generated layouts (terrible and bizarre), so I was curious to see what kind of wacky dungeons simulated annealing might come up with.

Thursday, 15 May 2014

Detecting similar documents with "shingling"

The Problem


I looked into this based on an example interview question. Imagine that you're implementing a meta-search engine for job advertisements. The search engine queries a number of job databases, and combines the results.

One issue is that the same job is often advertised in more than one database. How would you detect duplicate listings across the sources? Keep in mind that the listings may not be completely identical. There may be slight differences in the wording, order of text, punctuation etc.

The question boils down to how to measure the similarity of two documents. Near-duplicates are documents with a high similarity measure. There is a simple way to measure this which is also quite fast on relatively small sets of short documents.

Jaccard Similarity


There is a mathematical measure of similarity known as the Jaccard similarity. Without getting too theoretical, Jaccard similarity is a value from 0 to 1 defined for two sets. It describes how much sets A and B overlap. It is calculated as: 

|A ∩ B| / |A ∪ B|

[Venn diagram of sets A and B]

In the Venn diagram, the Jaccard similarity is the size of the dark blue overlapping area divided by the total area covered by A and B together.

Sets with Jaccard similarity 1.0 are identical. Sets with Jaccard similarity 0.0 are completely distinct. Sets with Jaccard similarity 0.95 are highly similar, but not identical.
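
In Python this is a one-liner over two sets (a trivial sketch):

    def jaccard(a, b):
        """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
        if not a and not b:
            return 1.0  # treat two empty sets as identical
        return len(a & b) / float(len(a | b))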

Shingling


How can we turn job descriptions into sets to calculate the Jaccard similarity?

A straightforward idea is to use the set of words in the text. For example "How much wood would a woodchuck chuck if a woodchuck could chuck wood?" contains 9 unique words:

a, chuck, could, how, if, much, wood, woodchuck, would

Seems reasonable, but descriptions for similar jobs at different companies probably use very similar words. They may have a high Jaccard similarity even though they are different advertisements.

A better approach which includes information about word order is "shingling". A shingle is just a small slice of text from the document. Typically, shingles are 5 to 9 characters long, and can span multiple words. Using the previous example and shingles of length 7, the first three shingles would be:

"How muc", "ow much", "w much "

The entire document can be shingled this way, and the shingles used as the document set.
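
A character-level shingling function is just as short (a sketch, using k=7 to match the example above):

    def shingles(text, k=7):
        """Return the set of all k-character substrings of the text."""
        return {text[i:i + k] for i in range(len(text) - k + 1)}

    # shingles("How much wood would a woodchuck chuck ...", k=7)
    # => set containing "How muc", "ow much", "w much ", ...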

Putting it together


The basic algorithm is:

1) Extract shingle sets for each document.
2) For every pair of documents:
    a) Calculate size of intersection of shingle sets.
    b) Calculate size of union of shingle sets.
    c) If a ÷ b > c, flag the pair as duplicates.

The value c is a similarity threshold. I chose 0.5.
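
Putting the jaccard() and shingles() sketches from above together, the whole naive pipeline fits in a few lines of Python (my actual practice implementation was in Java, so this is just an illustration):

    from itertools import combinations

    def find_duplicates(documents, k=7, threshold=0.5):
        # documents: dict mapping a document id to its text.
        shingle_sets = {doc_id: shingles(text, k) for doc_id, text in documents.items()}
        duplicates = []
        for (id_a, set_a), (id_b, set_b) in combinations(shingle_sets.items(), 2):
            if jaccard(set_a, set_b) > threshold:
                duplicates.append((id_a, id_b))
        return duplicates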

I implemented this basic algorithm in Java as a practice exercise. I ran the quick-and-dirty implementation over articles downloaded from news websites, some of which were very similar. It seemed able to identify articles on the same topic, or articles with large sections of identical text, depending on the similarity threshold used.

But how efficient is this naive implementation?

Time


Time complexity is dominated by step 2). Using a hash-based set data structure (such as HashSet in Java) it is:

O(d² · n)

Where d is the number of documents and n is the document length in characters, which is approximately the same as the number of shingles. This is reasonable for relatively small collections of moderately long documents.

Space


Space complexity is not great, but again not an issue on a small scale. The uncompressed shingle sets extracted in step 1) use:

O(d · n · k)

For shingles of size k. In other words, k times as much memory as the documents themselves!

Further reading


I learned about shingling and Jaccard similarity in "Mining of Massive Datasets" by Rajaraman and Ullman (2013), which describes a more elegant and scalable technique for working with longer documents called "minhash signatures". It is much more efficient in both time and space, but still relatively straightforward.