C++11 Smart Pointers are Not Just For Memory

AN IMPORTANT UPDATE TO THIS POST

I compiled all of this code using MS Visual Studio and it worked as I expected it to. However, when I tried using this approach with g++ and libstdc++ 3.3.6, the compilation failed. It turned out that the g++ implementation of unique_ptr compares the internal pointer to nullptr somewhere in its destructor.

At first I thought that maybe this was a bug in libstdc++. However, I should have read the C++ standard more attentively: the pointer type managed by the smart pointers has to satisfy the NullablePointer requirement, which includes comparability to nullptr, so this is not a bug at all.

There is a workaround for that, which involves wrapping the handle in a type that satisfies the NullablePointer requirement.

I found this StackOverflow question asked by someone who ran into the exact same problem. The answer provides a good explanation as well as the workaround.
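For reference, here is a rough sketch of such a wrapper for the OpenGL shader handle used later in this post; the name GLShaderHandle is made up for illustration, GLuint comes from the OpenGL headers, and std::nullptr_t lives in <cstddef>:

struct GLShaderHandle {
    GLuint id;

    GLShaderHandle(std::nullptr_t = nullptr) : id(0) {}   // constructible (and assignable) from nullptr
    GLShaderHandle(GLuint id_) : id(id_) {}

    explicit operator bool() const { return id != 0; }

    friend bool operator==(GLShaderHandle a, GLShaderHandle b) { return a.id == b.id; }
    friend bool operator!=(GLShaderHandle a, GLShaderHandle b) { return a.id != b.id; }
};

The deleter's pointer typedef (introduced below) would then name this wrapper instead of a raw GLuint, and the comparison against nullptr inside libstdc++ compiles fine.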

Moral: 1) library developers are smarter than you think they are, and certainly smarter than you; 2) RTFM



RAII is a useful idiom. It helps prevent resource leaks by making sure that resources are freed when they are no longer needed, and as such, provides a simple, deterministic form of automatic resource management.

The C++11 standard library provides a number of class templates (shared_ptr, unique_ptr, weak_ptr) to use this idiom for memory management.

However, memory is not the only resource that we have to manage. Sometimes we have to make sure a handle (e.g. a file descriptor, an OpenGL texture object, an HWND, etc.) returned by some library is freed properly.

There is a useful trick that allows you to use the standard smart pointer templates to manage such handles as well.

To accomplish this, we need to do two things:

  • Specify how the resource should be freed;
  • Make sure that, internally, the smart pointer stores a handle, not a pointer.

The std::unique_ptr template has an additional template parameter that allows us to specify a custom deleter type (std::shared_ptr accepts a deleter as a constructor argument instead). A deleter type is basically just a class that has an overloaded operator() which accepts a pointer (or a handle, in our case) and frees the resource associated with it.

Let’s show this with a simple example. I’ll write a custom deleter for an OpenGL shader:
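Something along these lines does the job; the class name here is arbitrary, and glDeleteShader is the OpenGL call that frees a shader object:

struct GLShaderDeleter
{
    void operator()(GLuint shader) const
    {
        // release the shader object behind this handle
        glDeleteShader(shader);
    }
};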

We can then use it with a smart pointer to replace the default deleter:
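Concretely (the ShaderPtr name is just a convenience alias introduced here):

typedef std::unique_ptr<GLuint, GLShaderDeleter> ShaderPtr;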

But we’re not done yet. This unique_ptr will store a pointer to GLuint internally, but we really want it to just store a GLuint, our handle. So how do we accomplish that?

The internal “dumb” pointer of std::unique_ptr<T, D> doesn’t just have the type “T*”. Its type is std::unique_ptr<T,D>::pointer, which is defined by the standard to be std::remove_reference<D>::type::pointer, or, if such type does not exist, T*.

In other words, if you add a typedef named “pointer” to your custom deleter, it will be used by the smart pointer internally. So, to get the desired behavior we just have to do this:
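Sticking with the shader example, only the pointer typedef is new compared to the deleter above:

struct GLShaderDeleter
{
    typedef GLuint pointer;   // unique_ptr will now store a GLuint instead of a GLuint*

    void operator()(GLuint shader) const
    {
        glDeleteShader(shader);
    }
};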

So, now our trick works! We create a new OpenGL shader during initialization and it will be deleted once the smart pointer goes out of scope!
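In code, that looks roughly like this (GL_VERTEX_SHADER is just an example shader type):

{
    ShaderPtr smart_shader(glCreateShader(GL_VERTEX_SHADER));

    // ... compile it, attach it to a program, and so on ...

}   // glDeleteShader is called for us here

(Keep in mind the update at the top of this post: with g++/libstdc++ you will also need a NullablePointer-style wrapper around the handle.)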

There still is one caveat. The indirection operator won’t work as expected with this pointer. Writing something like:

glAttachShader(p, *smart_shader);

yields a somewhat cryptic compilation error in Visual Studio 2012. You should use the get() member function instead, like so:

glAttachShader(p, smart_shader.get());

It’s pretty easy to understand why this is happening by taking a look at the implementation of operator* in unique_ptr:
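Simplified, it boils down to this (implementations differ, but the standard pins down the signature):

typename std::add_lvalue_reference<T>::type operator*() const
{
    return *get();   // dereferences the stored "pointer" -- a GLuint in our case
}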

 

It’s trying to apply the indirection operator to a GLuint. Clearly, this should fail.

So why does it work when we don’t invoke the indirection operator on the smart pointer? That’s because the compiler won’t even attempt to generate code for a non-virtual member function of a class template if it’s never used (this behavior is, in fact, enforced by the language standard). Incidentally, code for virtual member functions will ALWAYS be generated (I’ll let you figure out why :) )
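A tiny illustration of the same effect, with made-up names:

template <typename T>
struct Holder
{
    T value;
    void deref() { *value; }   // ill-formed for T = int, but only if it gets instantiated
};

Holder<int> h = { 42 };   // fine: deref() is never used, so it is never compiled
// h.deref();             // uncommenting this line triggers the error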

 

Brave Ball

Finishing projects feels good. Even if it’s a simple, modest project, like this little game of mine.

It began in summer 2012, during a 12-hour layover in Moscow. After wandering around the airport for about an hour I got bored, so I broke out my laptop and started coding. I wanted to write something that had interactive graphics in it – I still remember how fun it was to follow NeHe’s tutorials years ago, and I wanted to recapture that feeling.

Anyway, I ended up with a blob of messy C code that displayed a bunch of moving colored rectangles. I left it untouched for several months, but later decided to come back to it and actually finish what I had started. Ultimately I decided to rewrite it from scratch, this time in C++. I tried to spend an hour or two after work writing code, drawing sprites and experimenting. Had I actually worked on it every day, I would have finished in about a month, but the process took longer than that because I’m a lazy bastard.

The game has a sad excuse for a plot: the happy colorful world of Weird Little Creatures has been invaded by dark evil Meanies. You have to run and save all the defenseless younglings. Basically you just jump from platform to platform, avoid contact with the enemies and save the little creatures. In the end, there is an exit portal that you must step into.

I thought the game was pretty easy, but a couple of people I showed it to told me it was really hard. I don’t know, maybe I have played it so much that I got used to it. You can download it and see for yourself!

Some random stuff:

  • I used MS Visual Studio to write code, SDL/OpenGL for graphics, DevIL to load images, BASS to play sounds and music, mtPaint and GIMP to draw sprites.
  • I thought I’d eventually have to write collision detection between various geometric shapes, but it turns out you can do pretty well with rectangles too, if your art lends itself well to it.
  • You can actually draw sprites for your game, even if you’re not an artist. Just pick a simple visual style. I had never drawn sprites before, so I tried to use as little detail as possible and compensate for the lack of detail with motion: the antennae on the characters move, the “protagonist” starts looking around when it stops, and so on.
  • The mountains in the background are loaded from an image, but that image was generated procedurally (thanks to this great tutorial). Fractals can help you a great deal with creating art.
  • I rolled my own “packed file” format to store assets in a single file, but I’m probably going to use PhysFS in my next project.

The source code for the game (including assets) is on my GitHub. I’m not particularly happy with the “engine”, but hey, it’s the first version. Anything you make for the first (and even second) time is gonna suck.

P.S.: Thanks to __twc for generously providing that awesome in-game music!

The Book That Every Programmer Should Read

No, it’s not Knuth’s “The Art of Computer Programming”. I’m talking about quite an easy-to-read (compared to TAOCP) book which, in fact, does not require any engineering or mathematical background from the reader.

I am talking about C. Petzold’s “CODE”. It is a truly remarkable book about how computers work. Let me explain why I think this book is so awesome.

The book starts from the very beginning, explaining what code is and giving several examples, like Morse code and Braille. It then goes on to explain how electricity works and how it can be used to represent information with binary codes.

In later chapters, Boolean algebra is explained, and the author shows us how to build the basic components of circuits (logic gates) out of a battery, wires and simple relays. These are later used to build an 8-bit adder circuit and a RAM array. Eventually, the author describes a computer with a simple instruction set and random-access memory.

This book is so amazing because it shows how simple principles can be combined to create the basis of complex modern technology. You should read it to (at least approximately) understand what really goes on behind all the loops, pointers, variables, jumps and complex data structures. It won’t give you insights into the inner workings of Intel’s processors, but you will understand the basic things which make that pile of plastic and metal do math, display pretty moving pictures and download pr0n from the internets. Which is very important.

P.S.

During my second year at the university, we were taught a course called “Computer Architecture”. What we really did during that course can be summarized as (painful) memorization of the 8086’s instructions and writing a few programs in freaking Turbo Assembler (the year was 2008, by the way).

If only, instead of all that crap, we had been taught what is written in Petzold’s book, it would’ve been one of the most useful courses ever, because:

  • It would really have been about computer architecture and not about memorizing instructions or writing programs for a painfully outdated assembler;
  • I would actually have learned something.

That is all.

Goto is Not Evil, Okay?

First, this comic, before anyone posts it in the comments:

Now let’s get down to business.

If you’re a beginner at programming, you should probably read this.

It’s one of the first things they teach you in programming class. The goto statement is evil, they say. You should not use it in your programs. You will create messy, ugly, unmaintainable code. Students memorize this, and as a result, some of them will go apeshit about any code containing a single measly goto. I used to be like that, too.

The opinion that goto is bad is usually based on an infamous paper by E. Dijkstra, “Go To Statement Considered Harmful”. Come to think of it, I wonder how many of the students who are told this actually go and read that paper. If you haven’t, you should go read it now, by the way.

Anyway, the problem with goto is that it allegedly leads to the creation of unmaintainable code. In this post I will show by example how avoiding goto can lead to code that is even less maintainable.

Let’s suppose we’re writing some code in C, and we have to do all the memory management by hand. We’re writing a function that internally allocates a bunch of buffers. There are several situations in which the function will fail. We want to handle all of them properly, by returning a corresponding error code or whatever. We also want to avoid any memory leaks, so we must free all the allocated buffers before returning. How should we free those buffers? The obvious solution is to call free() for each of them before exiting. Assume we have code that looks like this:
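Here is a sketch of that kind of code; read_record and parse_record are just placeholders for whatever real work the function does:

#include <stdlib.h>

int read_record(char *buf, int size);          /* placeholder for real work */
int parse_record(const char *in, char *out);   /* placeholder for real work */

int process_data(void)
{
    char *buf1 = (char *) malloc(1024);
    if (buf1 == NULL)
        return -1;

    char *buf2 = (char *) malloc(2048);
    if (buf2 == NULL) {
        free(buf1);
        return -1;
    }

    if (read_record(buf1, 1024) != 0) {
        /* the cleanup is duplicated at every exit point */
        free(buf2);
        free(buf1);
        return -2;
    }

    if (parse_record(buf1, buf2) != 0) {
        free(buf2);
        free(buf1);
        return -3;
    }

    /* ... the actual work ... */

    free(buf2);
    free(buf1);
    return 0;
}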

There is a big problem with this approach, and it becomes more obvious if we imagine that there is more than one place where the function may fail after allocating the buffers. We will have multiple exit points, and the cleanup code will be duplicated at each of them, which invites memory leaks: if we decide that we need another buffer, we might forget to free it at one of the exit points. It spins out of control quickly if we need even more cleanup actions.

The solution to this problem is to keep the cleanup code in a single place, but how do we do that?

One might suggest creating a special “cleanup” procedure that receives pointers to the buffers that need to be deallocated and calls free() for each of them. I consider this solution highly inconvenient and butt-ugly. I hope you agree with me on this one.

The simple and effective solution is to use the dreaded goto:
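Reworking the same sketch (with the same placeholder helpers as above):

int process_data(void)
{
    int result = -1;
    char *buf1 = NULL;
    char *buf2 = NULL;

    buf1 = (char *) malloc(1024);
    if (buf1 == NULL)
        goto cleanup;

    buf2 = (char *) malloc(2048);
    if (buf2 == NULL)
        goto cleanup;

    if (read_record(buf1, 1024) != 0) {
        result = -2;
        goto cleanup;
    }

    if (parse_record(buf1, buf2) != 0) {
        result = -3;
        goto cleanup;
    }

    /* ... the actual work ... */
    result = 0;

cleanup:
    /* single exit point: all the cleanup lives here; free(NULL) is a no-op */
    free(buf2);
    free(buf1);
    return result;
}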

Here, we have a single exit point. All the cleanup actions are grouped nicely in a single place, and they are not duplicated. You don’t need to be Albert Einstein to see that this piece of code is cleaner and more maintainable than the initial one. Gotos don’t automatically make your code somehow “tainted”.

Yet another example of where goto might be useful is breaking out of deeply nested loops. One might argue that lots of deeply nested loops are a sign that something is seriously wrong with your design. I think that this is true in most cases, but whatever, there still might be situations in which four nested loops are the right thing to do. And in such cases, breaking out of all of them from the innermost loop without using goto is a major PITA.
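For instance, something like this sketch, where matches() and n stand in for real code:

int i, j, k, l;

for (i = 0; i < n; ++i)
    for (j = 0; j < n; ++j)
        for (k = 0; k < n; ++k)
            for (l = 0; l < n; ++l)
                if (matches(i, j, k, l))
                    goto done;   /* leave all four loops at once */
done:
    ;   /* if a match was found, i, j, k and l identify it here */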

Now I’m not saying you should go and sprinkle your current project with unconditional jumps. You really don’t want to do that in most cases.

But the point of this post is not that goto is your friend. It is that cargo-cult programming is bad.

The Crisis of Linux on Desktop

I have been a Linux user for roughly six years. I started in 2005 by trying Knoppix from a live CD, then dual-booting with Windows and then running Linux exclusively on my two desktop computers (home and work) and laptop.

During that time I tried several distros and switched between several desktop environments. My very first desktop environment was KDE. I liked it (no wonder – before that, I had been using freaking twm!). Versions 3.2 and 3.5 were pretty good – in fact, I could run them perfectly fine on my old PC with a 700 MHz processor and just 128 megabytes of RAM.

Then along came KDE 4 – at first I was happy with it because it had some pretty neat features that were quite new and exciting at the time. But I quickly became disappointed with it. It was a resource hog, but that didn’t matter that much, since I already had a shiny new PC. What irritated me most was its instability: something would crash unexpectedly every now and then. I decided that the KDE devs needed more time to brush up Plasma and whatnot, so I switched to GNOME.

I found GNOME to be an improvement over KDE in terms of resource usage and usability, so I stuck with it for quite a while. It did have its own problems, but whatever, nothing in this world is perfect. And then there was GNOME 3.

Of course, I respect the work that the GNOME devs put into this release, but I find it just horrible. I will not try to explain the reasons behind this opinion here; you can google it yourself, it’s all over the internet. Few people are happy with GNOME 3.

After being scared off by GNOME 3, I tried running back to KDE. Tough luck – the new KDE turned out to be just about as scary. I’m not sure what happened to it, but out of the box, the “plasma netbook” user interface was eating away at my laptop’s dual-core CPU so intensively that I could hear the loud noise of the CPU fan. And the whole thing was not too responsive either. Remember, it’s a “netbook interface”. How is this supposed to run on a cheap netbook if it makes a dual-core CPU sweat?

Anyway, what I realized was that KDE had become unusable (for me) and GNOME was going to become unusable in the near future unless its developers came to their senses.

As of now, I think that the world of Linux on desktop is facing a crisis of sorts due to the lack of an adequate desktop environment. GNOME 2 was the default environment in many popular distros, and if it is going to be replaced by GNOME 3, things are not going to be good. And I don’t think that KDE will work for people who are used to GNOME 2.

There are a couple of ways out of this situation for an average user:

  • Just stick with whatever you have and get used to the changes;
  • Man up and throw away desktop environments altogether, install a naked window manager and be in charge of your desktop experience;
  • Switch to some other, less popular DE, like XFCE or LXDE.

For most people (1) is not an option (no pun intended). The new GNOME seriously hinders productivity.

I personally chose option (2), but I understand that the majority of users do not have the desire or the time to go through all the trouble of whipping up their own improvised “desktop environment”.

So what remains is switching to some less popular DE. The problem is, XFCE seems to be the only viable choice here. LXDE doesn’t seem mature enough, but XFCE feels just about right. Plus, if more people flock to XFCE, there will be more motivation for its developers to work on their product and improve it.

In fact, XFCE has already welcomed many refugees from the GNOME land, including Linus Torvalds himself.