Results

The experiments have been conducted, the conclusions have been drawn, and the report has been written and submitted.

The binary tree part of the BSP algorithm was indeed a more efficient way to process triangles in the scene than brute force. Simple scenes saw a speedup of about 1.10-1.15x in rendering.

However, since my BSP implementation splits some triangles into smaller ones (so that each piece can be placed on one side of the splitting planes that partition the space), the scene as represented by the BSP tree is more complex than the original. In the more complex scenes, where there were both more triangles to split and more planes to split against, the triangle count of the BSP tree grew dramatically. This counteracted the gain from the tree traversal, to the point that rendering with BSP took longer than brute force.

One particular scene, despite being the most complex of the test scenes, avoided this problem by having only axis-aligned faces and all of its vertices aligned to a 3D grid. However, adapting the scene to the renderer, rather than the other way around, is not a real solution. I suppose this is the reason why most BSP-based ray tracers use other partitioning approaches.

Based on the results, I suspect that a BSP algorithm that partitions the space with geometry-independent axis-aligned splitting planes would perform better. I think that my implementation, while it does work in some sense, would not be beneficial in a real-world application.

Optimizations

In the last post I wrote about how rendering with my BSP algorithm was slower than using brute force. The algorithm has since been optimized in two ways:

1. Smarter BSP Tree Traversal

There are now additional cases in which branches are not traversed. For instance, if an intersection is found in the branch that is closer to the camera, the other branch cannot contain a closer intersection and is therefore skipped. This is an essential strategy in BSP tree traversal, and implementing it made rendering roughly twice as fast.
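In code, the idea looks roughly like this (a simplified sketch rather than my actual implementation; Node, Ray, Hit and the intersection helpers are placeholders):

```cpp
// Sketch of BSP traversal with the "skip the far branch" early exit.
// Node, Ray, Hit, intersectTriangles() and signedDistance() are stand-ins.
struct Hit { float t; bool valid; };

Hit traverse(const Node* node, const Ray& ray)
{
    if (node == nullptr)
        return Hit{0.0f, false};

    // Test the triangles stored in this node (those lying in its plane).
    Hit best = intersectTriangles(node->triangles, ray);

    // Pick the child on the same side of the splitting plane as the ray origin.
    bool originInFront = signedDistance(node->plane, ray.origin) >= 0.0f;
    const Node* nearChild = originInFront ? node->front : node->back;
    const Node* farChild  = originInFront ? node->back  : node->front;

    Hit nearHit = traverse(nearChild, ray);
    if (nearHit.valid && (!best.valid || nearHit.t < best.t))
        best = nearHit;

    // Early exit: every triangle in the far branch lies beyond the splitting
    // plane, so once something has been hit on the near side (or in the plane
    // itself), the far branch cannot contain a closer intersection.
    if (best.valid)
        return best;

    return traverse(farChild, ray);
}
```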

2. Shuffled Triangle List

This has to do with the compilation of the BSP tree, rather than the traversal. Previously, when creating a node in the tree, the node’s splitting plane was based on the first triangle in the given triangle list. This meant that if the input data was somewhat spatially ordered (which I believe it is, when it is exported from Blender), the tree would become very unbalanced.

Shuffling the triangle list makes the tree “randomly balanced”, which in most cases is better than “consistently unbalanced”. A side effect of this is that the amount of work required for rendering is nondeterministic for a given input.
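The change itself amounts to something like the following sketch (the names are illustrative, not my actual code):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Sketch: shuffle the triangle list once before building the tree, so that
// the triangle providing each node's splitting plane is effectively random.
// Triangle, Node and buildNode() are placeholders for the real types.
Node* buildTree(std::vector<Triangle> triangles)
{
    std::mt19937 rng(std::random_device{}());
    std::shuffle(triangles.begin(), triangles.end(), rng);

    // The builder still picks the first triangle of the (now shuffled)
    // list as the splitting plane for each node.
    return buildNode(triangles);
}
```

Seeding the generator with a fixed value instead would make the render times deterministic again, at the cost of always getting the same (possibly unlucky) tree for a given scene.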

The bar for improving the compiler further is relatively low; it just needs to be better than guessing. If I can figure out a simple way to do it, I will.

Performance Impact

Based on very limited testing, these two changes seem to have brought the BSP render times roughly on par with brute force, on average. Previously, rendering with BSP could take more than three times as long.

BSP Implemented

After reading more about BSP in scientific articles, I realized that the BSP tree is supposed to be traversed anew for each ray from the camera, as opposed to being traversed once before the actual rendering.
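In other words, the renderer now looks roughly like this (a sketch with placeholder names, not the actual code):

```cpp
// Sketch: the BSP tree is queried once per primary ray, instead of being
// flattened into a single triangle list before rendering starts.
// Camera, Image, Node, traverse(), shade() and background() are placeholders.
void render(const Camera& camera, const Node* bspRoot, Image& image)
{
    for (int y = 0; y < image.height(); ++y) {
        for (int x = 0; x < image.width(); ++x) {
            Ray ray = camera.generateRay(x, y);
            Hit hit = traverse(bspRoot, ray);   // per-ray tree traversal
            image.setPixel(x, y, hit.valid ? shade(hit) : background());
        }
    }
}
```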

The BSP algorithm is now implemented. It does have some minor glitches, but for the purposes of this study, fixing them is not worth the effort.

Slightly glitchy BSP render

The particular BSP algorithm that I have implemented partitions the scene based on the planes in which the triangles reside. Triangles that intersect these planes are split. From what I can tell so far, this method seems quite horrible for most scenes, because the original triangles are split into so many smaller pieces when the BSP tree is compiled. One of my test scenes consists of 132 triangles, but the resulting BSP tree contains 1902 triangles, and the scene takes significantly longer to render with BSP than with brute force.
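For clarity, the compilation step looks roughly like this (a simplified sketch; splitTriangle, classify and the plane helpers are placeholders for the real code):

```cpp
// Sketch of building one node in a polygon-aligned BSP tree. The splitting
// plane is taken from a triangle in the scene itself, and triangles that
// straddle the plane are split into smaller ones, which is why the triangle
// count of the tree can grow far beyond the input.
Node* buildNode(const std::vector<Triangle>& input)
{
    if (input.empty())
        return nullptr;

    Node* node = new Node;
    node->plane = planeOf(input[0]);          // plane of the first triangle
    node->triangles.push_back(input[0]);

    std::vector<Triangle> front, back;
    for (size_t i = 1; i < input.size(); ++i) {
        switch (classify(input[i], node->plane)) {
        case Side::Front:    front.push_back(input[i]); break;
        case Side::Back:     back.push_back(input[i]);  break;
        case Side::Coplanar: node->triangles.push_back(input[i]); break;
        case Side::Spanning:
            // Split into pieces lying entirely on one side of the plane;
            // a single triangle may become up to three.
            splitTriangle(input[i], node->plane, front, back);
            break;
        }
    }

    node->front = buildNode(front);
    node->back  = buildNode(back);
    return node;
}
```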

I am currently in the process of creating scenes for the study, and among them I will attempt to construct a scene that benefits from this kind of BSP. I believe such a scene would be one in which triangles generally are aligned to a 3D grid. This would mimic the other BSP approaches which partition the space into predetermined cubes, regardless of where the scene’s triangles are. In hindsight, such an algorithm is probably much more efficient in most cases.

BSP: Work in Progress

Over the last few days I have been implementing the BSP tree compiler, as well as the traverser. Until now there has been nothing new to show except for segfaults. I can now report that my implementation works, in the sense that the program does not crash and a subset of the triangles is passed to the renderer.

However, the tree traversal algorithm, which determines the set of triangles that are rendered, is currently wrong. As an example, here is a scene with BSP disabled. That is, all polygons are considered for rendering.

An overview of a large (roofless) room

If the BSP algorithm were correct, the render would look exactly the same with BSP enabled, yet the following image is produced instead.

A partially rendered overview of a large (roofless) room

The course’s lecture materials and Wikipedia have conflicting information about what to actually do with the BSP tree. The lecture slides suggest that BSP is about disregarding polygons outside of the view of the camera (such as those that are behind the camera).

Wikipedia’s description, however, suggests that BSP is about disregarding polygons that are obstructed by others, only taking into account the position of the camera, not its rotation. Perhaps there are several different ways to utilize the tree.
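If I read it correctly, the Wikipedia approach boils down to visiting the tree in an order decided only by which side of each splitting plane the camera position is on; that ordering is then what makes it possible to deal with obstructed polygons. A rough sketch of a back-to-front visit (placeholder names, and not what I have implemented):

```cpp
// Sketch of the traversal I believe Wikipedia describes: visit the half of
// the scene on the far side of each plane first, then the polygons in the
// plane, then the near half. Only the camera *position* matters.
void visitBackToFront(const Node* node, const Vec3& cameraPos,
                      std::vector<Triangle>& ordered)
{
    if (node == nullptr)
        return;

    bool cameraInFront = signedDistance(node->plane, cameraPos) >= 0.0f;
    const Node* nearChild = cameraInFront ? node->front : node->back;
    const Node* farChild  = cameraInFront ? node->back  : node->front;

    visitBackToFront(farChild, cameraPos, ordered);
    ordered.insert(ordered.end(),
                   node->triangles.begin(), node->triangles.end());
    visitBackToFront(nearChild, cameraPos, ordered);
}
```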

My current implementation is based on the lecture slides, but I don't grasp how the logic is supposed to work, even for the example given in the slides. And even if that logic is correct, it is very likely that my implementation of it contains errors; in the image above, all polygons are within the view of the camera and should therefore be rendered in any case.

My work continues with figuring all of this out.

Materials and Direct Lighting

Latest developments:

A red box rendered with absolutely no shading

A red box rendered with direct light shading, lit with two point lights

The Beginnings of a Ray Tracer

Development of the ray tracer began a couple of days ago. Its design is similar to that of the ray tracer we wrote as a lab exercise on the course, and it borrows some pieces of code from it. Still, as far as my work effort is concerned, I am more or less rewriting it from scratch, making some diverging design decisions along the way.

For instance, there is no need for any kind of real-time interactivity, so such things will not be implemented in the first place (come to think of it, there is a main loop as of now, but that is going out the window very soon). The most notable genuinely new functionality at this very early stage, however, is the .obj file parser. It allows me to render any scene exported from Blender, provided that all faces have been triangulated.
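The parser only needs to understand a small subset of the format, roughly along these lines (a simplified sketch; it assumes triangulated faces and ignores normals, texture coordinates and materials, and Vec3/Triangle are placeholder types):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Minimal sketch of a .obj reader: only "v" (vertex) and "f" (face) lines,
// with faces assumed to be triangles.
std::vector<Triangle> loadObj(const std::string& path)
{
    std::vector<Vec3> vertices;
    std::vector<Triangle> triangles;

    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream in(line);
        std::string tag;
        in >> tag;

        if (tag == "v") {
            Vec3 v;
            in >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {
            // Face indices are 1-based and may look like "5", "5/1" or "5/1/3";
            // only the vertex index before the first '/' is used here.
            int idx[3];
            for (int i = 0; i < 3; ++i) {
                std::string token;
                in >> token;
                idx[i] = std::stoi(token.substr(0, token.find('/'))) - 1;
            }
            triangles.push_back(Triangle{vertices[idx[0]],
                                         vertices[idx[1]],
                                         vertices[idx[2]]});
        }
    }
    return triangles;
}
```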

Next up on my todo list is to conceptually swap some of the spatial axes around to match how they are in Blender, and then implement some lighting shaders. I should then be well on my way to start implementing the BSP algorithm.

Early test with arrows pointing along the spatial axes
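About the axis swap mentioned above: as far as I know, Blender uses a right-handed coordinate system with Z pointing up, so the conversion should amount to something along these lines (assuming the renderer ends up with Y as its up axis, which is not decided yet; Vec3 is a placeholder type):

```cpp
// Sketch: convert a point from Blender's right-handed, Z-up convention to a
// right-handed, Y-up convention. The target convention is an assumption.
Vec3 fromBlender(const Vec3& p)
{
    return Vec3{p.x, p.z, -p.y};
}
```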