WebGL and Unity in 2019

I’ve recently updated a few of my games to Unity 2019.1 (at least one from as far back as Unity 5.5), and made some improvements to the web page that they are embedded in – including adding custom loading graphics, and disabling the Unity mobile warning. I thought I would share my experience in this blog post and provide some snippets of code that might help others working with the Unity WebGL exporter. If you want to check out my games in action you can find them here: Quantum Foam Games.

Perhaps the biggest change when it comes to WebGL export in Unity 2019.1 is that WebAssembly (WASM) is now the only choice you have. There are workarounds available for anyone who also needs asm.js versions of their game exported (if you must support ancient versions of Chrome, or – ugh! – IE 11). I personally opted to just use WASM on my site going forward. It works well on all up-to-date versions of Chrome, Firefox, Safari and Edge, including the mobile versions. According to caniuse.com the coverage for WASM is at 86%, which is good enough for me. As for performance, I don’t have any measurements, but I subjectively compared an old asm.js build to a new WASM one in Chrome, loading from local disk: the WASM build feels like it loads instantly, while the old asm.js build hangs for several seconds as it compiles the multi-megabyte JavaScript file. So the startup performance benefit is certainly there (others, including the official Unity blog, have posted numbers that back this up).
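
If you do still need to cater for browsers without WebAssembly, a simple feature check before loading the game lets you show a friendly message instead of a broken page. A minimal sketch (the element id here is just an example, matching the container used later in this post):

// Bail out early on browsers without WebAssembly support (old Chrome, IE 11, etc.).
if (typeof WebAssembly !== 'object') {
    document.getElementById('gameContainer').innerHTML =
        'Sorry, this game requires WebAssembly, which your browser does not support.';
}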

The other big change (and this must be recent, since the Unity documentation doesn’t actually mention it yet) is that you no longer have to specify how much memory your game will need (making use of the new dynamic heap growth tech that was added to Emscripten). This is a big deal, especially for small indie devs who don’t really want to spend a huge amount of time profiling their game. Previously you had to specify the maximum heap size for your game, which usually just meant picking some plausible-sounding number like 256MB and calling it a day, but then you might be wasting a bunch of the player’s RAM for no reason, or players might find that the game crashes after playing for a while because it suddenly runs out of memory. So the onus was on the developer to profile the game, figure out how much memory it would typically use, and test it extensively to find the right number. Well, no more! Unity will now grow the heap dynamically, and as long as the user has enough RAM available your game will keep on running. Fantastic!

The bad news is that not much else has really changed in the last two years or so. WebGL builds are still huge (5-10MB for a small game). There are still issues with graphical fidelity compared to other build targets. You still can’t use linear color space and some other features, because then you are locked to WebGL 2.0, and in all this time Apple still hasn’t got that working in Safari (and completely giving up on the iOS market is just too big a sacrifice for me, and I think for most other devs too). Now WebGPU is in the process of being created as a next-gen replacement for WebGL, which might improve things at some point, but we are still over a year away from that becoming a reality. The other cool thing on the horizon is Project Tiny from Unity, which promises truly web-first builds from within Unity, with tiny download sizes (just a few hundred KB for a small game). Unfortunately, right now it only supports JavaScript (a big reason for using Unity rather than something like Phaser is that I get to use C# and not JavaScript), and it only supports 2D for now. However, C# support and 3D capabilities are planned for Project Tiny in the future, and once those start to arrive I’ll be sure to take another look at it as an alternative to the WebGL export option.

OK, now onto some code snippets to help anyone else working on Unity WebGL builds. First, let’s look at removing the mobile warning. I know the Unity team means well with this, but really there should be an option to disable it somewhere in the WebGL build options, especially in 2019, when most mid-to-high-end mobile devices will actually run Unity WebGL content very well. Even when testing on the 2018 budget iPad (which only has an anemic 2GB of RAM) my games ran without a hitch (well, except for the missing WebGL 2.0 support in Safari – get that fixed please, Apple!). With most devices from the last 2-3 years having at least 3GB of RAM, most Unity WebGL games should run pretty well on most mobile devices. Anyway, disabling the check is quite easy. Simply add the following two lines of JS code immediately before the code that instantiates the game (the line looks like “var unityInstance = UnityLoader.instantiate…”) in index.html:

UnityLoader.Error.Handler = function() {};
UnityLoader.compatibilityCheck = function(e,t,r) { t(); };

And that’s it! You may want to comment out the line that sets the error handler to an empty function while you are testing your game, but once the game is live I would leave it in, to avoid showing your players confusing error pop-ups that they can’t do much about anyway.
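
If you prefer to keep some visibility into errors during development, rather than commenting the line out you could temporarily log them to the browser console instead, something along these lines:

// Development only: log Unity errors to the console instead of swallowing them.
UnityLoader.Error.Handler = function(e) { console.error('Unity error:', e); };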

Next, you will probably want to create a custom loader for your game(s). The default one is rather drab and includes the big Unity logo branding. Again, look for the line of code that instantiates the Unity instance. Now change the part that says {onProgress: UnityProgress} to point to your own JavaScript function (e.g. {onProgress: MyProgress}). Mine looks like this (note that I use jQuery since I already have it included on my pages for other tasks; if you don’t want to include it you can easily modify the code to work without it):

function MyProgress( unityInstance, progress ) {
    // Unity calls this repeatedly while the game loads.
    if( !unityInstance.Module ) {
        return;
    }
    if( progress == 1 ) {
        // Fully loaded: hide the loading indicator element.
        $('.loader').hide();
    }
    // Show the loading percentage (rounded for display).
    $('#progress').text( Math.round( progress * 100 ) + '%' );
}

The code should be self-explanatory: the progress value passed into the function is a number from 0 to 1 that represents how far the game has loaded. You can use this number to animate a progress bar or just show it as a percentage to the user, as I have done. Once progress hits 1 the game is loaded and you can hide the element that you have used as the loading indicator (probably a <div> with some text and/or images in it, in my case a <div> with the “loader” class on it).
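
If you would rather animate a progress bar than just print the number, the same value can drive the width of a bar element instead; for example (the #progressBar element here is hypothetical, use whatever your page provides):

$('#progressBar').css( 'width', (progress*100)+'%' );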

Next, you will likely want your game to fill out the entire window, or at least to adapt based on the size of some element on the page. The default Unity WebGL template just forces your game to sit in the middle of the page at a fixed size. While that does make it a little easier since you don’t have to code your game to adapt to the screen size, it is less than ideal. Once you’ve made sure your game works at variable window sizes you can add the following JavaScript to your page:

function setDimensions() {
    var gameContainer = $('#gameContainer');
    var unityCanvas = $('canvas');

    unityCanvas.width( gameContainer.innerWidth() );
    unityCanvas.height( gameContainer.innerHeight() );
}

var resizeTimer;

$(window).on( 'resize', function(e) {
    clearTimeout( resizeTimer );
    resizeTimer = setTimeout( setDimensions, 150 );
});

This code again assumes that you have jQuery loaded on your page, and that you have a <div> element with its id set to “gameContainer”, styled to respond to the size of the window (on my site I used some FlexBox CSS magic to make the <div> cover the entire page, except for a thin header bar along the top). The code above takes care of resizing the canvas, and ensures that it only actually resizes the Unity canvas once the user has stopped changing the size of the window, as this is an expensive operation.
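
One thing to note is that the code above only reacts to resize events, so depending on your page you may also want to size the canvas once up front. A minimal way to do that (assuming the Unity canvas already exists at that point; otherwise you can call setDimensions() from your onProgress callback once progress reaches 1):

// Set the initial canvas size once the page has loaded.
$(window).on( 'load', setDimensions );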

Finally, I coded up a simple PHP template that can load up any WebGL game, so that I can easily maintain a consistent look across all of my games. I also enabled “Name files as hashes” and “Data caching” in the WebGL publishing settings. This is a good idea as it allows the game to be cached on the user’s machine so it loads super fast if they return to it, but also ensures that if anything changes it gets re-downloaded (if you don’t hash the file names you are at the mercy of the browser deciding whether the game data is out of date, meaning the player may not get the latest version, or worse yet may end up with files from multiple different versions, resulting in the game crashing). I didn’t want to have to edit my PHP template each time I uploaded a new version though, and I wanted it to work with all of my games without modification, so I added the following code to find the JSON configuration file and the JavaScript file that contains the loader (UnityLoader.js when you don’t output hashed file names):

$unityloaderfile = basename(glob( 'Build/*.js' )[0]);
$webgljsonfile = basename(glob( 'Build/*.json' )[0]);

$unity_cfg = json_decode( file_get_contents( "Build/$webgljsonfile" ), true );

I also load the JSON configuration file and use that to get the name and version number of the game to display in my header bar. You can then include the loader like so:

<script src="Build/<?=$unityloaderfile;?>"></script>

And instantiate the UnityInstance like this:

var unityInstance = UnityLoader.instantiate("gameContainer", "Build/<?=$webgljsonfile;?>", {onProgress: MyProgress});

Easy. This way you don’t have to mess around with creating custom HTML templates for Unity to use in the build process, and when you update your game all you have to do is replace the contents of your Build directory with the new files (just be sure to remember to delete the old files first!).

If you are wondering how to style the CSS to fill the window, just load up any of the games on my site (https://www.quantumfoamgames.com) and inspect the pages, it’s all right there in game.css. Hopefully this helps someone else when trying to figure out the best way to work with the Unity WebGL exporter. I intend to play around with it some more, and to work on some more games soon, and I will try to document my findings in some future blog post(s).

A Graphics Programmer’s Thoughts on RTX

As a programmer and computer graphics enthusiast it is always an exciting time when a new line of graphics cards is announced. This is certainly true about Nvidia’s launch of their new line of RTX branded cards. Much has been written about the launch from the perspective of gamers, and frankly if you just look at today’s games the picture doesn’t look so good. But in this post I’m going to look a bit more at the future and the promise that the new RTX architecture holds for games yet to come.

First we need to talk about the price. These cards are expensive. I know some people try to shrug it off as Nvidia re-aligning their naming convention or some claptrap like that. Sure, a 2070 has similar performance to a 1080 for about the same price, but in every previous generation the new cards provided better performance at the same price point. So what happened? Well, it looks like a perfect storm of events. One obvious issue is that AMD is still not remotely competitive with Nvidia at the high end, meaning these cards are mostly competing against the 10xx generation GTX cards. A lack of competition is never good for pricing (at least not in the eyes of the consumer; I’m sure Nvidia is laughing all the way to the bank). The other issue is that these new chips are huge. The extra features for realtime raytracing and the tensor cores for machine learning come at a hefty price in terms of silicon real estate. They offer exciting possibilities for the future, and I will talk about that more later, but right now all of those transistors are doing very little to speed up current games. The massive die sizes lead me to believe that Nvidia was initially counting on a process shrink to 7nm when they planned the RTX chips. Then, when it became obvious that the 7nm node wouldn’t be ready in time for RTX, they decided to go ahead with the raytracing and tensor core features anyway rather than push them out another generation. Thus we are left with huge (754mm² for the 2080Ti), and therefore expensive, chips. Of course all of that is speculation on my part, but it would explain a lot about the pricing of the RTX cards.

Now let’s talk about what makes RTX exciting: all the new features. I’m actually going to start with the tensor cores. I’m somewhat surprised that they left these in the consumer cards at all, rather than reserving them for the Titan line and up. I don’t actually think these will do much for gaming graphics directly. I know Nvidia has made a big deal out of their deep learning anti-aliasing solution, but frankly it looks a bit gimmicky to me, and the fact that you need to train it specifically for every game is not great. However, the tensor cores are fantastic news for anyone playing around with deep learning. In fact the 2080Ti provides almost identical deep learning performance to the Titan V at half the price. This should go a long way towards making machine learning accessible to many more people, and that will further accelerate progress in what is already a fast-moving field. Fantastic news for students and hobbyists who want to experiment with deep learning, but also perhaps a bit of wasted silicon for what is still the biggest market for these cards: gamers. Only time will tell if game developers find interesting uses for the tensor cores in their games, but my hunch is that they won’t, at least not before the current cards are long obsolete.

Now on to the real meat of RTX: raytracing. This is where this new generation of chips promises a true revolution in computer graphics. It really is exciting, and I say this as someone who is actually rather skeptical about the value of raytracing. It is often billed as a magic bullet for producing realistic computer graphics, and it really isn’t (something I talk about in my previous post comparing raytracing and pathtracing here). I’m actually really glad that the realtime raytracing solutions (DXR, RTX) focus on a hybrid approach rather than trying to push pure raytracing. Why? Because pure raytracing would require radical changes in the tooling and development process for making games, as well as sidelining a lot of silicon (which would need to be retained to support all the non-raytracing games anyway) and years of R&D that have pushed the limits of rasterization.

So what can we expect from raytracing? Going by the likes of Battlefield 5 it looks like it will be mostly fancy reflections to begin with, which makes sense since that’s something that raytracing excels at (and something that requires a ton of nasty hacks to simulate in a rasterization-based engine). This also mirrors the way shaders were first used when they became a standard feature: we initially saw a wave of games that added a single effect to an otherwise fixed-function pipeline (such as the shiny water effect in Morrowind). So that’s the majority of what we can expect from raytracing in games for the next few years: nicer reflections and fancier shadows, since those are relatively simple to execute. Far more interesting, I think, is the possibility of greatly improved realtime global illumination (something that the new Metro game promises for RTX owners), as well as other creative uses for the raytracing capability, because casting rays into a scene has all sorts of interesting applications. Consider for example parallax occlusion mapping. This technique creates a very convincing 3D offset effect using a 2D texture and ray-casting. It is quite expensive to do using current shader hardware, but it might be possible to accelerate it vastly with the raytracing capabilities of RTX. Another possibility is to use the raytracing hardware to render voxel scenes without the need for rasterization at all. Again, this is possible using current shader technology, but it is taxing (see for example the GigaVoxels rendering library). An RTX-optimised version of this might be significantly faster. Outside of games there is also the possibility of accelerating GPU-based offline raytracers/pathtracers such as OTOY’s Octane Render, or the GPU mode of Cycles, making these much faster in the future.

So where does that leave us? Well, I think for now we can mostly expect gimmicks like nicer reflections and/or shadows (at a significant framerate cost), and right now the RTX cards are mostly of interest to graphics developers looking to experiment with the next generation of graphics techniques. Adoption will also depend on AMD and Intel supporting DXR in their upcoming graphics chips (AMD has already hinted that they are working on some sort of raytracing support for their future chips). As the hardware becomes more common outside of just the very high-end enthusiast crowd I think we’ll start to see some really exciting things done with it, but I would expect this to take at least a couple of years (at which point we should also see the second or maybe even third generation of raytracing chips from Nvidia). The future of graphics continues to be interesting; I can’t wait to see it.

Path Tracing vs Ray Tracing

Path tracing is all the rage in the offline rendering space these days. From Cycles to SuperFly (based on the open source Cycles) to Octane, most new rendering engines seem to be using this technology. It is sometimes referred to as “unbiased, physically correct rendering”, but what exactly is path tracing, how is it different from ray tracing, and is it the future of high quality offline rendering? I will be looking to answer all of those questions in this blog post for anyone confused by the changing landscape of rendering engines (note that I will be talking about offline rendering here, as opposed to realtime rendering).

So first up the question: what is path tracing? Unfortunately the name fails to be terribly descriptive, and when I first heard about it I thought it was simply a different name for ray tracing. In fact, perhaps the easiest way to explain path tracing is to compare it to the more familiar ray tracing. In ray tracing, a ray is sent out from the virtual camera into the scene and traced until it intersects with a solid body. At this point a ray is cast to each of the light sources in the scene to calculate illumination, and surface shading is calculated for the intersection point. If the surface is transparent the ray is sent out further into the scene, possibly at an angle to simulate refraction. If the surface is reflective a ray is sent back out at the mirror angle away from the object. Now, I often see ray tracing touted in online discussions as a magic fix for rendering (usually in discussions on realtime rendering for games), as if ray tracing somehow provides physically accurate results. Well, it doesn’t. It comes closer than triangle rasterization (the technology employed in almost all games, and what graphics cards are optimized for) but it’s no simulation of reality. It gives us reflections and refractions virtually for free, and it gives very nice hard shadows (unfortunately in the real world shadows are rarely if ever perfectly sharp). So just like rasterization engines have to cheat to achieve reflections and refractions (pay close attention to reflective surfaces in games: they either reflect only a static scene, or are very blurry, or reflect only objects that are on screen), a ray tracer has to cheat to get soft shadows, caustics and global illumination, to name a few of the effects required to achieve photorealism.

Now, a path tracer is like a ray tracer on steroids. Instead of sending out one ray per pixel it sends out tens, hundreds or even thousands of rays for each pixel to be rendered. When a ray hits a surface it doesn’t trace a path to every light source; instead it bounces the ray off the surface and keeps bouncing it until it hits a light source or exhausts some bounce limit. It then calculates the amount of light transferred all the way back to the pixel, including any colour information gathered from surfaces along the way, and averages the values calculated from all the paths that were traced into the scene to get the final pixel colour value. If that sounds like a rather brute force approach to you then you are right. It requires a ton of computing power, and if you don’t send out enough rays per pixel or don’t trace the paths far enough into the scene you end up with a very spotty image, as many pixels fail to find any light sources with their rays. It also requires light sources to have actual sizes, a bit of a departure from traditional point light sources that have a position but are treated as an infinitely small point in space (which works fine for ray tracing and rasterization because they only care about where the light is in space, but a path tracer needs to be able to intersect the light source). Now, path tracing gives us all of the things that ray tracing doesn’t give us out of the box: soft shadows, caustics and global illumination. You should still not confuse it with being a true simulation of the real world, however, since it still doesn’t fully simulate complex surfaces like skin, instead relying on shader tricks like subsurface scattering to fake these. There is also a practical limit to the number of paths you can trace from each pixel and how far you can trace them before giving up. If you were to simulate photons in the real world you would have to cast billions of paths and trace them almost infinitely (well, at least until they leave the area you are rendering), and you would have to do this in an environment modelled down to an atomic scale. That’s not practical, so I think we will always be stuck with an approximation; after all, we just need to create images that look real to humans, which is a much lower bar than “simulate reality completely”.
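
To make the difference concrete, here is a minimal sketch (in JavaScript, to match the other snippets on this blog) of the per-pixel estimator a path tracer uses. The scene, camera and sample count are made-up placeholders, and a real renderer would add importance sampling, next-event estimation, Russian roulette and so on; this only illustrates the “bounce until you hit a light, then average” idea described above.

// Tiny vector helpers (3-component arrays).
function vadd(a, b) { return [a[0]+b[0], a[1]+b[1], a[2]+b[2]]; }
function vsub(a, b) { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; }
function vmul(a, b) { return [a[0]*b[0], a[1]*b[1], a[2]*b[2]]; }
function vscale(a, s) { return [a[0]*s, a[1]*s, a[2]*s]; }
function vdot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function vnorm(a) { return vscale(a, 1 / Math.sqrt(vdot(a, a))); }

// A made-up scene of three spheres. Note that the light is a sphere with an
// actual size: a path tracer has to be able to hit the emitter with a bounced ray.
var scene = [
    { center: [0, 5, 0],    radius: 1.5, colour: [0, 0, 0],       emission: [12, 12, 12] }, // area light
    { center: [0, 0, 0],    radius: 1,   colour: [0.7, 0.2, 0.2], emission: [0, 0, 0] },    // red ball
    { center: [0, -101, 0], radius: 100, colour: [0.8, 0.8, 0.8], emission: [0, 0, 0] }     // "ground"
];

// Standard ray/sphere intersection: returns the distance along the ray, or Infinity on a miss.
function intersect(orig, dir, s) {
    var oc = vsub(orig, s.center);
    var b = vdot(oc, dir);
    var disc = b*b - (vdot(oc, oc) - s.radius*s.radius);
    if (disc < 0) return Infinity;
    var sq = Math.sqrt(disc);
    if (-b - sq > 1e-4) return -b - sq;
    if (-b + sq > 1e-4) return -b + sq;
    return Infinity;
}

// Uniform random direction on the hemisphere around the surface normal n.
function randomHemisphere(n) {
    var d;
    do {
        d = [Math.random()*2 - 1, Math.random()*2 - 1, Math.random()*2 - 1];
    } while (vdot(d, d) > 1 || vdot(d, d) === 0);
    d = vnorm(d);
    return vdot(d, n) < 0 ? vscale(d, -1) : d;
}

// Follow one path: keep bouncing until we hit the light or exhaust the bounce
// limit, instead of casting explicit shadow rays to every light source
// (which is what the ray tracer described earlier does).
function radiance(orig, dir, depth) {
    if (depth > 4) return [0, 0, 0];                 // bounce limit reached
    var nearest = Infinity, hit = null;
    for (var i = 0; i < scene.length; i++) {
        var t = intersect(orig, dir, scene[i]);
        if (t < nearest) { nearest = t; hit = scene[i]; }
    }
    if (!hit) return [0, 0, 0];                      // ray escaped the scene
    var p = vadd(orig, vscale(dir, nearest));        // hit point
    var n = vnorm(vsub(p, hit.center));              // surface normal
    var bounceDir = randomHemisphere(n);
    // Light arriving at this pixel = whatever this surface emits, plus the light
    // gathered by the bounce, tinted by the surface colour. The 2*cos factor is
    // the weight for uniform hemisphere sampling of a diffuse surface.
    var indirect = radiance(p, bounceDir, depth + 1);
    return vadd(hit.emission, vscale(vmul(hit.colour, indirect), 2 * vdot(bounceDir, n)));
}

// Average many paths per pixel; with too few samples you get the grainy,
// spotty image mentioned above.
function renderPixel(camPos, pixelDir, samples) {
    var c = [0, 0, 0];
    for (var i = 0; i < samples; i++) {
        c = vadd(c, radiance(camPos, vnorm(pixelDir), 0));
    }
    return vscale(c, 1 / samples);
}

console.log(renderPixel([0, 1, -6], [0, -0.1, 1], 256)); // RGB estimate for one pixel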

So is path tracing the future of high quality rendering? I do think it is. As computers (and in particular GPUs) continue to scale up in speed, path tracing becomes more and more practical, and it requires less cheating than ray tracing (and far less than rasterization). This means less work for the artists to achieve stunning photoreal results, and that’s always a good thing (I think anyone who has worked with a high end ray tracer like MentalRay can appreciate just how tricky it can be to tune the myriad options to achieve the result you want). That being said, I would not at present recommend using a path tracer in all circumstances. In the end it is all about choosing the right tool for the job. For single images where you want as much quality as possible and don’t mind the render potentially taking hours, path tracing is great. However, if you need to render a large number of images (for a comic or an animation) path tracing may not be the right choice (especially if you are a solo artist without a render farm). In that case tweaking a ray tracer can give you almost as nice a result in a fraction of the render time.

The crux of the problem is that with a path tracer you are locked into an all-or-nothing approach. If you turn down the quality too much you get a grainy image, which is fine for previews but wholly unsuitable for production use. So to get a usable image you have to tweak the quality settings until you are just at the point where most of the grain is gone, or use progressive refinement and let it run until it looks good (which is a great feature, by the way). In contrast, with a ray tracer you can generally turn off most of the expensive features (global illumination mostly) and get a very high quality result rendered very quickly. And losing GI is often not a big deal; most competent artists can quickly fake most of what you get from GI by tweaking ambient lighting and popping in a few extra weak lights in places that don’t actually have any light sources (for example, to fake light bouncing off a red wall you might just place a weak red area light on the wall, which is good enough to fool just about anyone looking at the resulting image). Of course faking always requires more time and skill on the part of the artist, so eventually it won’t be needed, but until path tracing times are measured in minutes per frame, as opposed to the hours or days they are now, ray tracing (or rasterization, especially micropolygon rasterizers like the one powering RenderMan) remains the better option for many classes of rendering tasks.

What Happened to TVRage?

For over two weeks now, trying to load TVRage.com has resulted in an error message from CloudFlare and a cached copy being returned. It looks like the site has gone down, perhaps permanently. This is bad news, as it was my go-to for information on what shows are running and, worse, its API was what I used in my TVShowRenamer app, which I created for myself to automatically rename shows to match the format that I use. This is making me very sad. I’ve been looking at TVDB to cover the API needs, but it looks like their API isn’t as user friendly as the TVRage one was for my purposes (it appears, for example, that you have to download all episode information for a show at once, rather than being able to search for a single particular episode).

Going back to manually renaming every downloaded episode to the format that I use, as well as manually looking up the episode names, simply doesn’t sound appealing to me, so I’ll have to come up with a solution. I will probably have to have another good look at the TVDB API and make it work somehow, but that means a significant amount of work. All because TVRage decided to go under without warning. What a nuisance.

Drawing on the Surface – SketchBook vs Photoshop vs MangaStudio

So when I got my Surface Pro 3 a few months back, one of the key reasons for choosing it over other options was that it comes with a pressure-sensitive pen that makes it a great tool for drawing. Having used a Wacom tablet in the past, I was pretty excited by the idea of being able to draw directly on the screen (while you do get used to the tablet after a while, there is still always a bit of a weird disconnect where you are drawing in one place and the actual output appears in another). A Wacom tablet is also not the most portable thing; sure, you can hook it up to a laptop and draw on the go, but that is still pretty unwieldy. Being able to draw anywhere is a huge plus.

OK, so I’ve covered the reasons why you might want to draw on the Surface. Anyway, for the first few months I was too busy writing code to get much drawing done, so I didn’t really give it much thought. Fast forward to a couple of weeks ago: with a bit of time on my hands I decided to give drawing on the Surface another chance.

First I tried out Autodesk SketchBook. Initially I tried the desktop version and then I gave the tablet-optimized version a try. Both work pretty well for basic sketching. The tablet version in particular is nice. It has even fewer features than the desktop version, but in a way that’s a plus: it just gets out of the way and gives you a canvas to work on. The only feature I would like to see added to the tablet edition is the ability to create different canvas sizes. It just gives you one size and that’s it. I haven’t yet tried creating a different-sized canvas in the desktop version and then opening it with the tablet version, although that should probably work. That’s more hassle than it really should be just to get a bigger canvas, though. Overall, for some quick sketching the tablet version of SketchBook is fantastic: simple, clean, and elegant. You can focus entirely on what you are drawing without worrying about the technical minutiae.

Next I tried out the staple of digital artists everywhere: Photoshop. I first tried one of the older versions (CS6) and that proved a catastrophe. The UI was tiny because of the high DPI screen, and pen pressure did not work (apparently you can enable it with a special driver, but I did not try this). After some searching on the web I found out that newer versions are more Surface friendly, so I proceeded to install CC 2015. This was a far better experience immediately. The UI is scaled by default so you can actually use it, and pen pressure works out of the box. I did not spend a great deal of time drawing in Photoshop, but from the roughly two hours I spent on a sketch it worked very well. If I wanted to do any serious painting I think this is still what I would use. It is also still one of the best tools when you want to add that final bit of polish to an image. However, I wouldn’t use it for sketching; it just feels too heavy in the end. Of course that’s a matter of preference. If you really like Photoshop you certainly can use it for sketching and just about anything else.

Lastly I tried MangaStudio5. I had previously heard about this software and honestly dismissed it out of hand because of the name, after all I’m not really interested in drawing Manga. But several comments on the web mentioned that MangaStudio works particularly well with the Surface so I decided to try it out too. My first impression wasn’t the best: OMG what is that interface? There are about a million buttons and toolbars everywhere and it can take a little getting used to. So I decided to watch a couple of videos on YouTube to get a basic run-down of the interface and that helped. Once you get the basic hang of things it actually does work rather well. The interface takes a little time to adjust to but it isn’t actually anywhere near as bad as my first impressions had me believe. I still haven’t spent a huge amount of time with it but I suspect for any sort of comics work I’ll probably be using it from now on. The brush engine is fantastic and it is super easy to add nice speech-balloons, panels and other comic-y stuff.

Overall, all three software packages actually work very well with the Surface. They all use finger input for moving/zooming/rotating and only draw when the pen is used. All work well with the pressure sensitive Surface pen out of the box. They are all fast and responsive on the Surface hardware (those Intel integrated graphics chipsets sure have come a long way). Which to use really depends on what you want to do and personal preference. SketchBook is great for unencumbered sketching. Photoshop is great for painting and putting that final polish on an image. MangaStudio is great for any sort of comics work. A workflow could realistically incorporate all three, perhaps like this: SketchBook -> MangaStudio -> Photoshop. If you are planning on just using one and want the answer to the question: “which should I choose?”, then I would suggest giving them all a try (all have some sort of free trial available) and seeing for yourself because at the end of the day it really is highly subjective (though personally I would recommend getting SketchBook in addition to either of the other software packages, it is cheap enough that it shouldn’t be a problem and it is a great tool to just play around with ideas in).

Making Noise

So I recently made use of the brilliant LibNoise library to create some complex noise to use as a heightmap. LibNoise makes it easy to create various noise modules (Perlin, Voronoi, multifractal, etc.) and transform/combine them in various ways. It’s great except for one thing: visualizing exactly what is happening when you tweak parameters can be hard. For the project I was working on I just blindly tuned things until the final product looked about right. This worked OK, but it often felt like fumbling in the dark, and a lot of time was wasted changing some numbers and running the program again.

With some spare time on my hands I decided there might be a better way. So I sat down and built a graphical node editor that uses LibNoise. With the magic of WPF (when it works it is amazing, though when it doesn’t it’s like pulling teeth) I created NoiseMaker, a screenshot of which can be seen below:

[Screenshot: the NoiseMaker node editor]

You can drop in various modules, link them together and tweak their parameters. A small image shows a preview of the result at every stage so you can see exactly what is going on (there are many tools that work similarly to this, such as Substance Designer or the Hypershade editor in Maya). Nodes are coloured according to type (generator, transform, utility) and can be moved around freely. The work area can be zoomed in and out, and scrolled around.

It mostly works although a number of the LibNoise modules still need to be implemented. There is also currently no way to save/load a node network once it is created. Also, right now connections between nodes can be broken, but nodes themselves cannot be deleted. With a little more work it could be quite a useful little application though.

The general idea of a graph editor like this can be useful for a wide range of projects. Now that I have the basics working I could easily adapt it to any number of other projects, and that’s perhaps the most important reason for doing it in the first place.

The Crossroads to the Future of the Personal Computer

The Personal Computer (PC) is about to go through another radical evolution. We’ve already moved from the typical beige boxes of the late ’80s and ’90s to a world where most people use a laptop as their primary computer. Of course the desktop has not disappeared; it continues to rule when the absolute bleeding edge in power is required, i.e. for high-end workstations (for example for CGI work or other graphics and multimedia applications) and for gaming (although this is becoming less true; even mid-range laptops perform admirably in all but the most demanding games these days). For the vast majority of users the reality is simply that computers have been fast enough for what they want to do for the last decade, which means that technological progress has continued to make that same computing power cheaper and smaller than ever before.

So now we are quickly approaching the point where people will have all the computing power they need in their pocket, in the form of a smartphone. Arguably we are already there for many people, and in fact it is becoming increasingly common for young people to see no need for any computer other than their smartphone. It is my prediction that over the next decade we will see a dramatic shift to the smartphone becoming the primary computing platform for everyone who doesn’t need bleeding-edge performance.

The obvious counterargument is that the phone form factor is absolutely dreadful for any sort of real productivity work. Trying to do anything much more complicated than writing a simple E-mail can be an exercise in frustration. For this reason I believe we will see a new sort of hardware ecosystem evolve around expanding the capabilities of the smartphone while you are working at a desk. Technically it can be done today by using a Bluetooth keyboard and mouse, and hooking up a large monitor with HDMI. Ideally the monitor would be connected wirelessly as well (perhaps using Miracast or similar technology), but we aren’t quite there yet. The advantage of such a setup should be immediately obvious. You could have dumb terminals (consisting of mouse, keyboard and screen) at home and at work while you carry the “brains” around in your pocket. You get all the advantages of full-size peripherals when you need them, but you still have your full computer with you wherever you go. Additionally, I can imagine future tablets morphing into ultra-thin touchscreens with no built-in processing capabilities that simply connect to your smartphone as well. There simply won’t be a compelling reason to have more than one CPU per person except in rare cases.

So now that I’ve laid out my vision of the future of personal computing, I can tackle the heart of this blog post: who will make this vision happen? Right now we are seeing a convergence from two sides. On one side is the mobile space, with Android running on ARM processors. On the other side is the PC (desktop or laptop) running Windows on Intel (or AMD) chips. Intel is already aggressively pursuing the ultra-low-power market and has made significant inroads with cheap and small tablets running full versions of Windows 8.1. The next step is smartphones with x86 processors running the desktop version of Windows, something I fully expect to see within the next year or two. In the meantime Android continues to grow up. Google keeps adding features to its mobile OS and there are already a number of netbooks available running Android.

While there is always the possibility that Android and Windows will reach some kind of stable equilibrium, history seems to suggest that one or the other will eventually come out with the lion’s share, with the loser becoming a niche player or disappearing entirely. It will be exciting to see which of these two titans, Google and Microsoft, will end up dominating the personal computing space of the future, or will it be an entirely different third player yet to come? I can’t wait to find out.

21st Century Energy Policy

It is truly appalling to think that we have all the technological tools available to create a truly modern energy ecosystem and avert the massive ecological disaster looming from climate change, and the only thing lacking is political will. Not only that, but we can achieve this without destroying the economic engines of the industrial world (a common complaint from right-wing conservatives when the subject of climate change is broached). To accomplish this we would need to move away from fossil fuels entirely (something that should be possible within a 20-year time frame by my estimation), which would have the added benefit of removing much of the financial base of fundamentalist Islamist terror groups such as ISIS. So what do we need to do?

The most obvious part of the solution is to continue to invest in renewable energy, primarily in the form of wind and solar. This is the part that is actually well on track, with huge investments being made worldwide (and particularly in Europe, with Germany spearheading the charge). Wind and solar are already very competitive on price (see Apple’s recent announcement that it will invest heavily in solar energy for its buildings, bringing not only good PR but also an estimated saving over using grid electricity).

However, they are no silver bullet for the energy problem. Both wind and solar suffer from reliability problems and have a certain amount of geographic dependency (solar works great near the equator but is nigh useless near the poles). Large scale energy storage is as yet an unsolved problem. Eventually we will have a worldwide smart grid with a huge superconducting backbone, possibly with space-based solar and/or fusion power covering the base load. Right now that is sci-fi, not reality, so what should we be doing with what we have available? In parallel with continued investment in renewables we should be overhauling the electric grid to handle much larger loads from electric vehicles (more on that later) and building a fully smart grid. With a smart grid the utility company is able to turn devices such as fridges and air conditioning on and off as needed to balance the power distribution, improving efficiency and, crucially, compensating for some of the reliability problems of wind and solar.

With all of this there is still going to be a need for base load power generation, and for this we should be investing heavily in nuclear fission to cover the last bit, while working to shut down all fossil fuel based power plants, starting with the abomination that is coal. Instead, we are seeing the opposite trend right now, with supposedly environmentally conscious Germany moving to tear down its existing nuclear infrastructure and building new coal plants to replace it. Similarly, France and Japan have both announced a reduction in their reliance on nuclear power. Once again, the inevitable result will be increased fossil fuel consumption to fill the holes in electricity generation. This would almost be comical if it didn’t put our entire planet at risk by prolonging the CO2 pollution of some of the world’s biggest industrial powers.

At this point a large number of you will complain that nuclear is the worst possible option, and let’s not sugar-coat it: there are real dangers associated with nuclear power, and there is no such thing as a 100% safe nuclear plant, no matter what some proponents like to say. However, the dangers are well understood, and nuclear fission is not some uncontrollable magic; it is science. It is already safer than burning coal (and releases far less radioactivity into the environment). When you take into account the danger that climate change poses to our civilization the equation comes out even better. The waste can be stored or reprocessed (note that the radioactive material used is mined from the earth in the first place and not magically created in the reactor). There is also enough fuel available to drive our civilization for centuries to come, especially if you add thorium to the equation (a massively abundant radioactive material that can be used in advanced reactor designs), and it only needs to tide us over until fusion and/or space-based solar become viable options.

The final piece of the energy puzzle is the move to all-electric transportation. To this end, governments around the world should be investing heavily in creating cheap electric vehicles (working with established auto makers), slowly raising petrol taxes, and building recharging infrastructure. This also requires us to upgrade electric grids and install more generating capacity to handle the influx of demand (which should also drop electricity prices, driving economic growth and making electric vehicles an even more attractive prospect). We should be working to phase out the internal combustion engine within the next 20 years (which will also reduce air and noise pollution in our cities as an added benefit).

Overall, this rather simple plan, if enacted by all major industrial powers, would be a massive boon to society. We eliminate the sword of Damocles that is global warming, drive economic growth, reduce pollution and clip the wings of the pathological fundamentalist Islamist movements in the Middle East. It is a win-win-win situation; all that is needed is the political will to pull it off, and an end to the irrational fear of nuclear fission that much of the public holds (thank you, media, for playing up the comparatively minor damage caused by the Fukushima meltdown while almost ignoring the massive loss of life and destruction of infrastructure caused by the earthquake/tsunami combo all across Japan). Until next time, remember that science is the engine that drives prosperity.