Not sure why, but a rather straightforward attempt to port PathFinder to C# has led me on a spree of attempts to improve it. It has woken the inner beast in me that craves challenge and problem solving. I've spent more than two weeks on it now.
The good thing is, when PathFinder v1.1 hits, it's going to be better than ever. That is, if all my theories for optimising and improving the current version work out. For a start, I've managed to improve the path smoothing by a great deal. It's basically what's mentioned here in this post, only better.
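For anyone curious, here's a minimal sketch of one common way to smooth a grid path: walk the raw waypoint list and drop any waypoint that a straight, obstacle-free segment can skip over. This is only an illustration and assumes a hypothetical hasLineOfSight check; it isn't necessarily the exact approach the new PathFinder uses.

```csharp
using System;
using System.Collections.Generic;

public struct Point { public int X, Y; public Point(int x, int y) { X = x; Y = y; } }

public static class PathSmoother
{
    // 'hasLineOfSight' is a hypothetical delegate: it should return true when
    // the straight segment between two points crosses no blocked cells.
    public static List<Point> Smooth(IList<Point> rawPath, Func<Point, Point, bool> hasLineOfSight)
    {
        var smoothed = new List<Point>();
        if (rawPath.Count == 0) return smoothed;

        int anchor = 0;
        smoothed.Add(rawPath[0]);
        if (rawPath.Count == 1) return smoothed;

        for (int i = 2; i < rawPath.Count; i++)
        {
            // If waypoint i is no longer visible from the anchor, the previous
            // waypoint has to stay in the path and becomes the new anchor.
            if (!hasLineOfSight(rawPath[anchor], rawPath[i]))
            {
                anchor = i - 1;
                smoothed.Add(rawPath[anchor]);
            }
        }

        smoothed.Add(rawPath[rawPath.Count - 1]);
        return smoothed;
    }
}
```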
Apart from that, I've added a binary heap to the algorithm, thanks to this well-written article here. It's basically a sorting mechanism that eliminates the need to search through the long list of open nodes looking for the one with the lowest cost. It's supposed to make the algorithm at least 2-3 times faster on a longer route or larger map, and it has certainly done that. But I'm still not entirely sure how useful it is, for a very good reason.
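To give a feel for it, here's a minimal sketch of a binary min-heap used as the A* open list, assuming each node exposes an F cost (g plus heuristic). Pulling the cheapest node becomes O(log n) instead of a full scan of the list. This is a generic illustration, not the code from the article linked above.

```csharp
using System;
using System.Collections.Generic;

public class BinaryHeap<T>
{
    private readonly List<T> items = new List<T>();
    private readonly Comparison<T> compare;

    public BinaryHeap(Comparison<T> compare) { this.compare = compare; }

    public int Count { get { return items.Count; } }

    public void Push(T item)
    {
        items.Add(item);
        int i = items.Count - 1;
        // Bubble the new item up while it is cheaper than its parent.
        while (i > 0)
        {
            int parent = (i - 1) / 2;
            if (compare(items[i], items[parent]) >= 0) break;
            Swap(i, parent);
            i = parent;
        }
    }

    public T Pop()
    {
        // Remove and return the cheapest item (assumes the heap is not empty).
        T root = items[0];
        int last = items.Count - 1;
        items[0] = items[last];
        items.RemoveAt(last);

        // Sift the moved item down until both children are more expensive.
        int i = 0;
        while (true)
        {
            int left = 2 * i + 1, right = left + 1, smallest = i;
            if (left < items.Count && compare(items[left], items[smallest]) < 0) smallest = left;
            if (right < items.Count && compare(items[right], items[smallest]) < 0) smallest = right;
            if (smallest == i) break;
            Swap(i, smallest);
            i = smallest;
        }
        return root;
    }

    private void Swap(int a, int b) { T t = items[a]; items[a] = items[b]; items[b] = t; }
}

// Usage (assuming a Node type with an F cost):
// var open = new BinaryHeap<Node>((a, b) => a.F.CompareTo(b.F));
```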
That reason being that I've been tinkering with various ideas to make the path-finding component go faster. One solution that seems very promising is clustering smaller nodes, each a single square, into a bigger rectangle or square. A bigger square covers more area, of course, so the map ends up with far fewer nodes, significantly reducing the number of nodes that need searching. In other words, instead of scanning through the graph one small node at a time, the algorithm can skip across a large area that has no obstacle within it. And if the number of nodes to search is that much smaller, I do wonder how useful the binary heap will be. After all, it has its own overhead in the sorting mechanism.
Standard grid-based nodes
Merge nodes within a clear area into a single node
Back to the clustering of smaller nodes into bigger ones: it's a rather simple concept. Take the two images above as an example. Searching for a path using the small, evenly placed nodes in the first image could easily need hundreds, if not thousands, of iterations through individual nodes. But in the second image, a path that originally traversed hundreds of nodes is simplified down to a relatively small number of them. Obviously, this is just a variant of two-tier path-finding, and to an extent it is much like a navigation-mesh solution. It's not something revolutionary. But hey, whatever solution works best, right? A rough sketch of how such merging might look is below.
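Here's a minimal sketch of one greedy way to merge clear cells into bigger rectangular nodes: scan the grid, grow each unclaimed walkable cell into the widest and tallest clear rectangle possible, then claim those cells. It's only an illustration of the idea, not PathFinder's actual implementation.

```csharp
using System.Collections.Generic;

public struct Rect { public int X, Y, Width, Height; }

public static class NodeClustering
{
    public static List<Rect> Merge(bool[,] walkable)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var claimed = new bool[w, h];
        var rects = new List<Rect>();

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (!walkable[x, y] || claimed[x, y]) continue;

            // Grow the rectangle to the right as far as the row stays clear.
            int width = 1;
            while (x + width < w && walkable[x + width, y] && !claimed[x + width, y]) width++;

            // Grow downward, one full row strip at a time.
            int height = 1;
            while (y + height < h && RowClear(walkable, claimed, x, y + height, width)) height++;

            // Claim every cell the rectangle covers so it becomes one big node.
            for (int dy = 0; dy < height; dy++)
                for (int dx = 0; dx < width; dx++)
                    claimed[x + dx, y + dy] = true;

            rects.Add(new Rect { X = x, Y = y, Width = width, Height = height });
        }
        return rects;
    }

    private static bool RowClear(bool[,] walkable, bool[,] claimed, int x, int y, int width)
    {
        for (int dx = 0; dx < width; dx++)
            if (!walkable[x + dx, y] || claimed[x + dx, y]) return false;
        return true;
    }
}
```

The resulting rectangles can then serve as the coarse, upper-tier nodes, with their shared edges acting as the connections the search walks between.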