Path Tracing vs Ray Tracing

Path tracing is all the rage in the offline rendering space these days. From Cycles to SuperFly (based on the open source Cycles) to Octane, most new rendering engines seem to be using this technology. Sometimes referred to as “unbiased, physically correct rendering”, what is path tracing, how is it different to ray tracing, and is it the future of high quality offline rendering? I will be looking to answer all of those questions in this blog post for anyone confused by the changing landscape of rendering engines (note that I will be talking about offline rendering here, as opposed to realtime rendering).

So first up: what is path tracing? Unfortunately the name fails to be terribly descriptive, and when I first heard about it I thought it was simply a different name for ray tracing. In fact, perhaps the easiest way to explain path tracing is to compare it to the more familiar ray tracing. In ray tracing, a ray is sent out from the virtual camera into the scene and traced until it intersects with a solid body. At that point a ray is cast towards each light source in the scene to calculate illumination, and surface shading is calculated for the intersection point. If the surface is transparent the ray is sent out further into the scene, possibly at an angle to simulate refraction. If the surface is reflective a ray is sent out at an angle away from the object. Now, in online discussions (usually on realtime rendering for games) I often see ray tracing touted as a magic fix, as if it somehow provides physically accurate results. Well, it doesn’t. It comes closer than triangle rasterization (the technology employed in almost all games, and what graphics cards are optimized for), but it is no simulation of reality. It gives us reflections and refractions virtually for free, and it gives very nice hard shadows (unfortunately, in the real world shadows are rarely if ever perfectly sharp). So just as rasterization engines have to cheat to achieve reflections and refractions (pay close attention to reflective surfaces in games: they either reflect only a static scene, are very blurry, or reflect only objects that are on screen), a ray tracer has to cheat to get soft shadows, caustics, and global illumination, to name a few of the effects required to achieve photorealism.
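
To make the classic ray tracing loop concrete, here is a tiny self-contained Python toy: one hard-coded sphere, one point light, and a 20x10 ASCII grid standing in for the image. It is a sketch of the idea described above, not any particular engine’s implementation (no reflection or refraction here, just primary rays and a light direction test):

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def hit_sphere(centre, radius, origin, direction):
        # Solve the ray/sphere quadratic; direction must be normalized.
        oc = sub(origin, centre)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None                      # ray misses the sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    sphere_centre, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_pos = (2.0, 2.0, 0.0)

    for y in range(10):                      # a tiny 20x10 ASCII "image"
        row = ""
        for x in range(20):
            # Primary ray from a pinhole camera sitting at the origin.
            d = normalize((x / 10.0 - 1.0, 1.0 - y / 5.0, -1.0))
            t = hit_sphere(sphere_centre, sphere_radius, (0.0, 0.0, 0.0), d)
            if t is None:
                row += "."                               # background
            else:
                p = tuple(t * c for c in d)              # intersection point
                n = normalize(sub(p, sphere_centre))     # surface normal
                l = normalize(sub(light_pos, p))         # direction of the shadow ray
                row += "#" if dot(n, l) > 0 else ":"     # facing the light or not
        print(row)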

A path tracer is like a ray tracer on steroids. Instead of sending out one ray per pixel it sends out tens, hundreds or even thousands. When a ray hits a surface the renderer doesn’t trace a path to every light source; instead it bounces the ray off the surface and keeps bouncing it until it hits a light source or exhausts some bounce limit. It then calculates the amount of light transferred all the way back to the pixel, including any colour information gathered from surfaces along the way, and averages the values calculated from all the paths traced into the scene to get the final pixel colour. If that sounds like a rather brute force approach to you, you are right. It requires a ton of computing power, and if you don’t send out enough rays per pixel, or don’t trace the paths far enough into the scene, you end up with a very spotty image as many pixels fail to find any light sources with their rays. It also requires light sources to have actual sizes, a bit of a departure from traditional point lights, which have a position but are treated as an infinitely small point in space (this works fine for ray tracing and rasterization, because they only care about where the light is, but a path tracer needs to be able to intersect the light source). In return, path tracing gives us all of the things that ray tracing doesn’t give us out of the box: soft shadows, caustics and global illumination. You should still not confuse it with a true simulation of the real world, however, since it doesn’t fully simulate complex surfaces like skin, relying instead on shader tricks like subsurface scattering to fake them. There is also a practical limit to the number of paths you can trace from each pixel and how far you can trace them before giving up. To truly simulate photons you would have to cast billions of paths and trace them almost indefinitely (at least until they leave the area you are rendering), in an environment modelled down to an atomic scale. That’s not practical, so I think we will always be stuck with an approximation; after all, we just need to create images that look real to humans, which is a much lower bar than “simulate reality completely”.
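
The core of that brute force averaging is easy to caricature. In the Python toy below the entire scene is abstracted into two made-up constants, so only the estimator itself remains: each path either finds the light or exhausts the bounce limit, and the pixel value is the average over all paths. Run it a few times and you can watch the low sample counts jump around (that jumping is exactly the grain) while the high ones settle:

    import random

    LIGHT_HIT_CHANCE = 0.2    # chance a random bounce direction reaches the light
    SURFACE_ALBEDO = 0.7      # fraction of energy kept at each bounce
    MAX_BOUNCES = 8           # the bounce limit mentioned above

    def trace_one_path():
        throughput = 1.0
        for _ in range(MAX_BOUNCES):
            if random.random() < LIGHT_HIT_CHANCE:
                return throughput             # the path found a light source
            throughput *= SURFACE_ALBEDO      # bounce: absorb some energy, carry on
        return 0.0                            # limit exhausted: contributes nothing

    def render_pixel(samples_per_pixel):
        # The pixel value is simply the average over all traced paths.
        return sum(trace_one_path() for _ in range(samples_per_pixel)) / samples_per_pixel

    for spp in (4, 64, 1024):
        print(spp, "samples:", round(render_pixel(spp), 3))

A real path tracer replaces those two constants with actual geometry intersection and material sampling, but the averaging, and the way the noise fades as samples increase, behave just like this.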

So is path tracing the future of high quality rendering? I do think it is. As computers (and in particular GPUs) continue to scale up in speed, path tracing becomes more and more practical, and it requires less cheating than ray tracing (and far less than rasterization). This means less work for artists to achieve stunning photoreal results, and that’s always a good thing (anyone who has worked with a high end ray tracer like MentalRay can appreciate just how tricky it can be to tune the myriad options to achieve the result you want). That said, at present I would not recommend a path tracer in all circumstances. In the end it is all about choosing the right tool for the job. For single images, where you want as much quality as possible and don’t mind the render potentially taking hours, path tracing is great. However, if you need to render a large number of images (for a comic or an animation) path tracing may not be the right choice (especially if you are a solo artist without a render farm). In that case tweaking a ray tracer can give you almost as nice a result in a fraction of the render time.

The crux of the problem is that with a path tracer you are locked into an all-or-nothing approach. If you turn the quality down too much you get a grainy image, which is fine for previews but wholly unsuitable for production use. So to get a usable image you have to tweak the quality settings until you are just at the point where most of the grain is gone, or use progressive refinement and let it run until it looks good (which is a great feature, by the way). In contrast, with a ray tracer you can generally turn off most of the expensive features (global illumination, mostly) and get a very high quality result rendered very quickly. And losing GI is often not a big deal: most competent artists can quickly fake most of what GI gives you by tweaking ambient lighting and popping in a few extra weak lights in places that don’t actually have any light sources (for example, to fake light bouncing off a red wall you might place a weak red area light on the wall, which is good enough to fool just about anyone looking at the resulting image). Of course faking always requires more time and skill on the part of the artist, so eventually this won’t be needed; but until path tracing times are measured in minutes per frame, as opposed to the hours or days they are now, ray tracing (or rasterization, especially micropolygon rasterizers like the one powering RenderMan) remains the better option for many classes of rendering tasks.

My Latest Project

Just a quick update on what I’m working on at the moment. Right now my time is devoted almost entirely to an indie game I’m developing called Voxity, a voxel-based city builder. I’m blogging about its development over at the official site: www.voxitygame.com. So come on over and follow what’s happening.

What Happened to TVRage?

For over two weeks now, trying to load TVRage.com has resulted in an error message from CloudFlare and a cached copy of the site. It looks like the site has gone down, perhaps permanently. This is bad news: it was my go-to source for information on what shows are running and, worse, its API was what I used in TVShowRenamer, an app I created for myself to automatically rename shows to match the format I use. This is making me very sad. I’ve been looking at TVDB to cover the API needs, but it looks like their API isn’t as user friendly as the TVRage one was for my purposes (it appears, for example, that you have to download all episode information for a show at once, rather than being able to search for a single particular episode).

Going back to manually renaming every downloaded episode to the format that I use, as well as manually looking up episode names, simply doesn’t sound appealing, so I’ll have to come up with a solution. I will probably have to take another good look at the TVDB API and make it work somehow, but that means a significant amount of work. All because TVRage decided to go under out of the blue. What a nuisance.
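
For what it’s worth, the renaming half of the problem needs no API at all. Something like the following Python sketch covers it, with the episode-title lookup (the part TVRage used to provide) left as a pluggable stub. The filename convention and output format here are just illustrative examples, not the exact format I use:

    import re
    from pathlib import Path

    # Assumed input convention: "Show.Name.S01E02.mkv" (the common SxxEyy pattern).
    EPISODE_PATTERN = re.compile(
        r"^(?P<show>.+?)[. _-]+S(?P<season>\d{2})E(?P<episode>\d{2})", re.IGNORECASE)

    def rename_episode(path, lookup_title):
        match = EPISODE_PATTERN.match(path.stem)
        if not match:
            raise ValueError("unrecognized filename: " + path.name)
        show = match.group("show").replace(".", " ").strip()
        season = int(match.group("season"))
        episode = int(match.group("episode"))
        # The lookup is where a TVDB (or other) query would eventually go.
        title = lookup_title(show, season, episode)
        new_name = "{} - {}x{:02d} - {}{}".format(
            show, season, episode, title, path.suffix)
        return path.rename(path.with_name(new_name))    # renames the file on disk

    # Usage, with a dummy stand-in for the missing title lookup:
    # rename_episode(Path("Some.Show.S01E02.mkv"), lambda s, se, ep: "Some Title")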

Drawing on the Surface – SketchBook vs Photoshop vs MangaStudio

So when I got my Surface Pro 3 a few months back, one of the key reasons for choosing it over other options was that it comes with a pressure-sensitive pen, which makes it a great tool for drawing. Having used a Wacom tablet in the past, I was pretty excited by the idea of being able to draw directly on the screen (while you do get used to the tablet after a while, there is always a bit of a weird disconnect in drawing in one place while the actual output appears in another). A Wacom tablet is also not the most portable thing; sure, you can hook it up to a laptop and draw on the go, but that is still pretty unwieldy. Being able to draw anywhere is a huge plus.

Ok, so I’ve sold you on the reasons why you might want to draw on the Surface. For the first few months, though, I was too busy writing code to get much drawing done, so I didn’t really give it much thought. Fast forward to a couple of weeks ago: with a bit of time on my hands I decided to give drawing on the Surface another chance.

First I tried Autodesk SketchBook, initially the desktop version and then the tablet-optimized one. Both work pretty well for basic sketching. The tablet version in particular is nice. It has even fewer features than the desktop version, but in a way that’s a plus: it just gets out of the way and gives you a canvas to work on. The only feature I would like to see added to the tablet edition is the ability to create different canvas sizes; it gives you one size and that’s it. I haven’t yet tried creating a different-sized canvas in the desktop version and then opening it with the tablet version, although that should probably work. Still, that’s more hassle than it should be just to get a bigger canvas. Overall, for quick sketching the tablet version of SketchBook is fantastic: simple, clean, and elegant. You can focus entirely on what you are drawing without worrying about the technical minutiae.

Next I tried out the staple of digital artists everywhere: Photoshop. I first tried one of the older versions (CS6), and that proved a catastrophe. The UI was tiny because of the high-DPI screen, and pen pressure did not work (apparently you can enable it with a special driver, but I did not try this). After some searching on the web I found that newer versions are more Surface friendly, so I proceeded to install CC 2015. This was immediately a far better experience. The UI is scaled by default so you can actually use it, and pen pressure works out of the box. I did not spend a great deal of time drawing in Photoshop, but over the two hours or so I spent on a sketch it worked very well. If I wanted to do any serious painting I think this is still what I would use, and it remains one of the best tools for adding that final bit of polish to an image. However, I wouldn’t use it for sketching; it just feels too heavy in the end. Of course that’s a matter of preference: if you really like Photoshop you certainly can use it for sketching and just about anything else.

Lastly I tried Manga Studio 5. I had previously heard about this software and honestly dismissed it out of hand because of the name; after all, I’m not really interested in drawing manga. But several comments on the web mentioned that Manga Studio works particularly well with the Surface, so I decided to try it too. My first impression wasn’t the best: OMG, what is that interface? There are about a million buttons and toolbars everywhere, and it can take a little getting used to. So I watched a couple of videos on YouTube to get a basic run-down of the interface, and that helped. Once you get the basic hang of things it actually works rather well. The interface takes a little time to adjust to, but it isn’t anywhere near as bad as my first impression had me believe. I still haven’t spent a huge amount of time with it, but I suspect that for any sort of comics work I’ll be using it from now on. The brush engine is fantastic, and it is super easy to add nice speech balloons, panels and other comic-y stuff.

Overall, all three software packages work very well with the Surface. They all use finger input for moving/zooming/rotating and only draw when the pen is used, all support the pressure-sensitive Surface pen out of the box, and all are fast and responsive on the Surface hardware (those Intel integrated graphics chipsets sure have come a long way). Which to use really depends on what you want to do and on personal preference. SketchBook is great for unencumbered sketching, Photoshop for painting and putting that final polish on an image, and Manga Studio for any sort of comics work. A workflow could realistically incorporate all three, perhaps like this: SketchBook -> MangaStudio -> Photoshop. If you are planning on using just one and want an answer to the question “which should I choose?”, then I suggest giving them all a try (all have some sort of free trial available) and seeing for yourself, because at the end of the day it really is highly subjective (though personally I would recommend getting SketchBook in addition to either of the other packages; it is cheap enough that it shouldn’t be a problem, and it is a great tool for just playing around with ideas).

Making Noise

So I recently made use of the brilliant LibNoise library to create some complex noise to use as a heightmap. LibNoise makes it easy to create various noise modules (Perlin, Voronoi, multifractal, etc.) and transform/combine them in various ways. It’s great except for one thing: visualizing exactly what is happening when you tweak parameters can be hard. For the project I was working on I just blindly tuned things until the final product looked about right. This worked OK, but it often felt like fumbling in the dark, and a lot of time was wasted changing some numbers and running the program again.
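
To give a flavour of what that module graph looks like in code, here is a small Python sketch of the same idea, using the `noise` package from PyPI as a stand-in for LibNoise. The class names mimic LibNoise’s generator/transform/combiner structure but are illustrative, not its real bindings:

    import noise    # the `noise` package from PyPI, standing in for LibNoise

    class Perlin:
        # Generator module: produces coherent noise at a given frequency.
        def __init__(self, frequency=1.0, octaves=4):
            self.frequency, self.octaves = frequency, octaves
        def get(self, x, y):
            return noise.pnoise2(x * self.frequency, y * self.frequency,
                                 octaves=self.octaves)

    class ScaleBias:
        # Transform module: out = source * scale + bias.
        def __init__(self, source, scale=1.0, bias=0.0):
            self.source, self.scale, self.bias = source, scale, bias
        def get(self, x, y):
            return self.source.get(x, y) * self.scale + self.bias

    class Add:
        # Combiner module: sums the outputs of two other modules.
        def __init__(self, a, b):
            self.a, self.b = a, b
        def get(self, x, y):
            return self.a.get(x, y) + self.b.get(x, y)

    # Broad rolling terrain plus faint high-frequency detail as a heightmap.
    terrain = Add(ScaleBias(Perlin(frequency=0.5, octaves=6)),
                  ScaleBias(Perlin(frequency=4.0, octaves=2), scale=0.1))
    heightmap = [[terrain.get(x / 64.0, y / 64.0) for x in range(64)]
                 for y in range(64)]

Tuning those frequency/scale numbers blind is exactly the fumbling I mean: every change means re-running the program and squinting at the output.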

With some spare time on my hands I figured there might be a better way. So I sat down and built a graphical node editor on top of LibNoise. With the magic of WPF (when it works it is amazing, though when it doesn’t it’s like pulling teeth) I created NoiseMaker, a screenshot of which can be seen below:

[Screenshot: the NoiseMaker node editor]

You can drop in various modules, link them together, and tweak their parameters. A small image shows a preview of the result at every stage, so you can see exactly what is going on (many tools work similarly, such as Substance Designer or the Hypershade editor in Maya). Nodes are coloured according to type (generator, transform, utility) and can be moved around freely, and the work area can be zoomed and scrolled.
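
For the curious, the evaluation behind those per-node previews can work roughly like the following Python sketch (a hypothetical stand-in for NoiseMaker’s actual WPF/C# internals): each preview just samples its own node, and a per-sample cache keeps shared upstream modules from being recomputed.

    import math

    class Node:
        def __init__(self, name, func, inputs=()):
            self.name = name        # label shown in the editor
            self.func = func        # operation: (input_values, x, y) -> value
            self.inputs = inputs    # upstream nodes connected to this one

    def evaluate(node, x, y, cache=None):
        # Walk the graph once per sample point; the cache ensures a node shared
        # by several downstream connections is only computed once per sample.
        if cache is None:
            cache = {}
        if node not in cache:
            values = [evaluate(n, x, y, cache) for n in node.inputs]
            cache[node] = node.func(values, x, y)
        return cache[node]

    gen = Node("Sine", lambda v, x, y: math.sin(x))
    boost = Node("Scale", lambda v, x, y: v[0] * 2.0, inputs=(gen,))
    result = Node("Add", lambda v, x, y: v[0] + v[1], inputs=(gen, boost))

    print(evaluate(result, 0.5, 0.0))    # 3 * sin(0.5); "Sine" ran only once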

It mostly works, although a number of the LibNoise modules still need to be implemented. There is also currently no way to save or load a node network once it is created, and while connections between nodes can be broken, the nodes themselves cannot yet be deleted. With a little more work it could be quite a useful little application, though.

The general idea of a graph editor like this can be useful for a wide range of projects. Now that I have the basics working I could easily adapt it to any number of other projects, and that’s perhaps the most important reason for doing it in the first place.

The Crossroads to the Future of the Personal Computer

The Personal Computer (PC) is about to go through another radical evolution. We’ve already moved from the typical beige boxes of the late ’80s and the ’90s to a world where most people use a laptop as their primary computer. Of course the desktop has not disappeared; it continues to rule where the absolute bleeding edge in power is required, i.e. for high-end workstations (for CGI work or other graphics and multimedia applications) and for gaming (although this is becoming less true; even mid-range laptops perform admirably in all but the most demanding games these days). For the vast majority of users the reality is simply that computers have been fast enough for what they want to do for the last decade, and technological progress has continued to make that same computing power cheaper and smaller than ever before.

We are quickly approaching the point where people will have all the computing power they need in their pocket, in the form of a smartphone. Arguably we are already there for many people; in fact, it is becoming increasingly common for young people to see no need for any computer other than their smartphone. My prediction is that over the next decade we will see a dramatic shift towards the smartphone becoming the primary computing platform for everyone who doesn’t need bleeding-edge performance.

The obvious counterargument is that the phone form factor is absolutely dreadful for any sort of real productivity work; trying to do anything much more complicated than writing a simple e-mail can be an exercise in frustration. For this reason I believe we will see a new sort of hardware ecosystem evolve around expanding the capabilities of the smartphone while you are working at a desk. Technically it can be done today by using a Bluetooth keyboard and mouse and hooking up a large monitor over HDMI. Ideally the monitor would be connected wirelessly as well (perhaps using Miracast or a similar technology), but we aren’t quite there yet. The advantage of such a setup should be immediately obvious: you could have dumb terminals (consisting of mouse, keyboard and screen) at home and at work while you carry the “brains” around in your pocket. You get all the advantages of full-size peripherals when you need them, yet you still have your full computer with you wherever you go. Additionally, I can imagine future tablets morphing into ultra-thin touchscreens with no built-in processing capabilities that simply connect to your smartphone as well. There simply won’t be a compelling reason to have more than one CPU per person, except in rare cases.

So now that I’ve laid out my vision of the future of personal computing, I can tackle the heart of this blog post: who will make this vision happen? Right now we are seeing convergence from two sides. On one side is the mobile space, with Android running on ARM processors. On the other is the PC (desktop or laptop), running Windows on Intel (or AMD) chips. Intel is already aggressively pursuing the ultra-low-power market and has made significant inroads with cheap, small tablets running full versions of Windows 8.1. The next step is smartphones with x86 processors running the desktop version of Windows, something I fully expect to see within the next year or two. In the meantime Android continues to grow up: Google keeps adding features to its mobile OS, and there are already a number of netbooks available running Android.

While there is always the possibility that Android and Windows will reach some kind of stable equilibrium, history suggests that one or the other will eventually come away with the lion’s share, the loser becoming a niche player or disappearing entirely. It will be exciting to see which of these two titans, Google or Microsoft, ends up dominating the personal computing space of the future. Or will it be an entirely different third player yet to come? I can’t wait to find out.

21st Century Energy Policy

It is truly appalling to think that we have all the technological tools available to create a truly modern energy ecosystem and avert the massive ecological disaster looming from climate change, and that the only thing lacking is political will. Not only that, but we could achieve this without destroying the economic engines of the industrial world (a common complaint from right-wing conservatives when the subject of climate change is broached). To accomplish it we would need to move away from fossil fuels entirely (something that should be possible within a 20-year time frame, by my estimation), which would have the added benefit of removing much of the financial base of fundamentalist Islamist terror groups such as ISIS. So what do we need to do?

The most obvious part of the solution is continued investment in renewable energy, primarily in the form of wind and solar. This part is actually well on track, with huge investments being made worldwide (particularly in Europe, with Germany spearheading the charge). Wind and solar are already very competitive on price (see Apple’s recent announcement of a big investment in solar energy for their buildings, bringing not only good PR but an estimated saving over using grid electricity).

However, renewables are no silver bullet. Both wind and solar suffer from reliability problems and a certain amount of geographic dependency (solar works great near the equator but is nigh useless near the poles), and large-scale energy storage is as yet an unsolved problem. Eventually we may have a world-wide smart grid with a huge superconducting backbone, possibly with space-based solar and/or fusion power covering the base load, but right now that is sci-fi rather than reality. So what should we be doing with what we have? In parallel with continued investment in renewables we should be overhauling the electric grid to handle much larger loads from electric vehicles (more on that later) and building a fully smart grid. With a smart grid the utility company can turn devices such as fridges and air conditioners on and off as needed to balance power distribution, improving efficiency and, crucially, compensating for some of the reliability problems of wind and solar.

Even with all of this there is still going to be a need for base-load power generation, and for that we should be investing heavily in nuclear fission, while working to shut down all fossil fuel based power plants, starting with the abomination that is coal. Instead we are seeing the opposite trend, with supposedly environmentally conscious Germany moving to tear down its existing nuclear infrastructure and building new coal plants to replace it. Similarly, France and Japan have both announced reductions in their reliance on nuclear power. The inevitable result will be increased fossil fuel consumption to fill the holes in electricity generation. This would almost be comical if it didn’t put our entire planet at risk by prolonging the CO2 pollution of some of the world’s biggest industrial powers.

At this point a large number of you will complain that nuclear is the worst possible option, and let’s not sugar-coat it: there are real dangers associated with nuclear power, and there is no such thing as a 100% safe nuclear plant, no matter what some proponents like to say. However, the dangers are well understood; nuclear fission is not some uncontrollable magic, it is science. It is already safer than burning coal (and releases far less radioactivity into the environment), and when you take into account the danger that climate change poses to our civilization, the equation comes out even better. The waste can be stored or reprocessed (note that the radioactive material used is mined from the earth in the first place, not magically created in the reactor). There is also enough fuel available to drive our civilization for centuries to come, especially if you add thorium to the equation (a massively abundant radioactive element that can be used in advanced reactor designs), and it only needs to tide us over until fusion and/or space-based solar become viable.

The final piece of the energy puzzle is the move to all-electric transportation. To this end, governments around the world should be investing heavily in creating cheap electric vehicles (working with established auto makers), slowly raising petrol taxes, and building recharging infrastructure. This also requires upgrading electric grids and installing more generating capacity to handle the influx of demand (which should also drop electricity prices, driving economic growth and making electric vehicles an even more attractive prospect). We should be working to phase out the internal combustion engine within the next 20 years (which will also reduce air and noise pollution in our cities as an added benefit).

Overall, this rather simple plan, if enacted by all major industrial powers, would be a massive boon to society. We would eliminate the sword of Damocles that is global warming, drive economic growth, reduce pollution, and clip the wings of the pathological fundamentalist Islamist movements in the Middle East. It is a win-win-win situation; all that is needed is the political will to pull it off, and an end to the irrational fear of nuclear fission held by much of the public (thank you, media, for playing up the comparatively minor damage caused by the Fukushima meltdown while almost ignoring the massive loss of life and destruction of infrastructure caused by the earthquake/tsunami combo all across Japan). Until next time, remember that science is the engine that drives prosperity.