We have met the enemy and he is us: Why current DRM is doomed from birth

Despite the best efforts of a lot of intelligent people in the content distribution business, pirates continue to break new DRM schemes. The debate over whether DRM is a good thing has been heated and long-running, but there is a more fundamental problem with basically all DRM schemes, and it is this problem that allows them to be broken: DRM is based on the principle of encrypting the protected content in a way that ensures that only legitimate consumers can access it.


Cryptography is “…the practice and study of techniques for secure communication in the presence of third parties…” (Wikipedia). In common cryptography, there are three parties: a sender, a receiver and an adversary. The first two parties want to exchange a message without the third party being able to read it. But in DRM, there are only two parties: a content rights owner and a content consumer. The rights owner can be thought of as the sender in the cryptographic world: they want to transmit a piece of content to a consumer in a way that ensures that no one but the consumer is able to access it. The consumer can be thought of as the receiver: they want to access the content. Incidentally, most consumers don’t really care about keeping the content secret from everyone else.

By using cryptography to ensure that the content cannot be shared, the content rights owner must also make sure that the intended receiver can access the content. This means that the consumer must be given the means to decrypt the message. In some DRM schemes, this is achieved through dedicated hardware, such as Amazon’s Kindle ebook reader or even early DVD players (before the DRM on DVDs was broken, that is). In other schemes, it is merely a piece of software, such as Apple’s iTunes.

But we’re missing the adversary from the cryptographic world. The problem is that the content consumer may act as the adversary. In a perfect world, the consumer would respect the rights of the content rights owner, but in reality, the consumer might want to share the content with a close friend (or everyone on the internet). This is what DRM is intended to prevent, but it is in this scenario that the critical flaw of DRM is revealed: the adversary is now the consumer, and consequently knows how to decrypt the message. And while this is certainly a flaw in the protocol, it is there by design.

Breaking DRM

Even in situations where a direct digital copy of the content isn’t immediately available, anyone (born before 2000) could still copy a piece of DRM-protected music through the use of a cassette tape recorder. Such a copy will not have the same digital quality as the source, and some would argue that the DRM isn’t really broken, as a perfect copy hasn’t been made, but the fact remains that the content sought to be copy-protected has been copied.

But even if we only consider perfect digital copies to be a breach of DRM, there is still a giant problem. Cryptography used to keep something secret relies on a shared secret between the sender and the receiver, which must be unavailable to the adversary. The moment the adversary obtains this shared secret, the message can no longer be considered secret either. This means that the only task left for a pirate is to extract the shared secret and write a utility that decrypts the content and saves it in decrypted form, and the DRM scheme is officially broken.
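The argument can be made concrete with a minimal Python sketch. This is a toy illustration, not a real DRM scheme: a simple XOR stream cipher stands in for the content encryption, and all names are made up. The point is only that the licensed player and the pirate run the exact same decryption with the exact same key.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption': applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

content = b"A protected song"
key = b"shared-secret"   # the key every legitimate player must hold

# What the rights owner distributes.
ciphertext = xor_crypt(content, key)

# A licensed player decrypts in order to render the content...
played = xor_crypt(ciphertext, key)

# ...but a pirate running the same player code (and thus holding the
# same key) can just as easily write the plaintext to disk instead.
ripped = xor_crypt(ciphertext, key)

assert played == ripped == content
```

Nothing distinguishes the two decryptions; the scheme can only try to make the key hard to find, not impossible to use.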

So the battle between DRM makers and pirates is really a matter of making it hard to extract the shared secret, but since it must still be possible for legitimate consumers to access the content, the DRM makers are severely limited in this endeavor.

The battle is lost in advance for software-based DRM, as the means of decryption is readily available to the pirate. For hardware-based DRM, the pirate’s job is a bit harder, but since there are a lot of people dedicated to breaking DRM, it is usually only a matter of time, and the fundamental problem remains that the content must be accessible to the consumer.

Online DRM

So far, the only approach to DRM that seems invulnerable is a scheme that requires the consumer to be online while accessing the content, which allows the content rights owner to tightly control access in real time. This has been implemented for computer games, but it has other flaws. The content rights owner now forces the consumer to be online in order to access the content. Transferred to music, this would mean that it would be impossible to load the music onto a portable music player for a hiking trip to an area without internet coverage, which most hikers would probably not like. And even worse: it means that the content will become unavailable if the server that controls access ever goes offline for any reason.

Whether this is a problem depends entirely on the terms of the agreement between the rights owner and the consumer. As we currently understand a purchase, this is entirely unacceptable, but changes to property law and new kinds of rights transfer may change that. That is a task for the future, but right now, DRM, as we know it, is broken.


Learning LaTeX will ruin your career in IT

This may be a rather shocking statement to make, and to be fair maybe a bit melodramatic. However, there is still some truth to it.

Let me get the assumptions out of the way: You are

  • Attending college with a scientific major, most likely Comp. Sci.
  • Expecting to get a job doing IT work (i.e. not in academia).

In this case, save yourself the future agony, and stay away from LaTeX. It might seem like a good idea at the time. Hell, it might even seem like a great idea. But it isn’t!

The sad truth is that while you may produce nicely formatted papers during your college years, and may have the chance to impress a nice-looking literature major, you will be utterly doomed once you enter the real world outside the walls of academia.


The song of the mockingbird

I have a new favourite drink: Mockito!

For my Computer Science thesis, I worked with the annotation processing framework of the Java compiler. It is actually a pretty cool framework that allows you to extend the compiler by processing annotations at compile time, and it can be set up to work simply by including a jar on the classpath. The API is pretty nice, using fancy stuff like mirror-based reflection and whatnot. But it was a pain in the ass to unit-test. As my components lived in the middle of a compiler pipeline, they assumed the existence of a huge environment of objects, and while the courses I took on test-driven development taught me to use test stubs to factor out components that I wasn’t testing, that seemed like an overwhelming task in this case. So I created a lot of Java source code and wrote some JUnit-fu to turn it into test cases for my compiler extension. It wasn’t pretty, but it got the job done.

Now, imagine my joy, and the feeling of facepalm, when I started my new job, opened up the relatively new Java codebase and discovered the concept of a mocking framework! The ability to hot-swap objects with mocks that just return whatever you tell them to does not sound like a big deal until you’ve tried to test small components in a big world, but boy, does it change the way testing works.

To those who unit-test and haven’t yet tried a mocking framework, I urge you to try one out. It works particularly well with some sort of dependency injection scheme, as you can mock the dependency injection and through that inject your mock objects wherever you want them. As for frameworks, I have only personally tried Mockito, but there are several out there, each with strengths and weaknesses.
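Mockito itself is a Java library, but the idea is the same across languages, so here is a sketch using Python’s standard-library unittest.mock instead. The class and method names (Checkout, PriceService-style price_of) are invented for illustration; the pattern shown is the one described above: inject the dependency, swap in a mock, tell it what to return.

```python
from unittest.mock import Mock

class Checkout:
    """Component under test. Its dependency is injected through the
    constructor, so a mock can be swapped in without a real service."""
    def __init__(self, price_service):
        self.price_service = price_service

    def total(self, items):
        return sum(self.price_service.price_of(item) for item in items)

# The mock simply returns whatever we tell it to; no big world needed.
prices = Mock()
prices.price_of.return_value = 10

checkout = Checkout(prices)
assert checkout.total(["book", "pen"]) == 20

# We can also verify how the component used its dependency.
assert prices.price_of.call_count == 2
```

In Mockito the equivalent stubbing reads much the same way (`when(...).thenReturn(...)`), which is what makes the technique feel so lightweight compared with hand-written stubs.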

Lego Millennium Falcon

Box containing the Lego Millennium Falcon set

It has been some time since the last post. The reasons are many, most of them bad, but I just haven’t really had anything to write about.

That has changed now, though. I would like to share my summer holiday project: building a scale model of the Millennium Falcon in Lego!

The box

I am no genius. The model is a pre-packaged building set from Lego, complete with an assembly manual. But it is a rather large set, as the size of the box indicates.

This is no set for beginners. Lego has rated it as 16+, and it contains about 4,500 pieces! It is officially the largest kit to be produced by Lego.

As for the model, its final dimensions are 82 cm × 57 cm × 23 cm, which is huge. The set also comes with five Lego figures (Luke, Leia, Han, Chewbacca and Obi-Wan). They are ordinary Lego figures, and they are almost to scale with the model, which should also give an idea of its size.

The assembly

Structural skeleton of the model

Assembling the model took more than a week. That was, however, not concentrated building time, but real time. In total, I guess I spent about three or four days building the model.

In the usual Lego way, the first thing built is the skeleton of the model. This is nothing fancy, just a sturdy cross to attach all the other stuff onto. After that, all the plating goes on, which gives the model its looks.

The falcon about halfway assembled, with some plates attached

At this stage, I realized that I had made a small error a lot of steps back. That resulted in having to carefully tear the model apart. The Lego building style has a lot of locks that ensure that the pieces stay together, and any modification requires a small amount of carefully applied violence to the model. Luckily, Lego is a rather forgiving material, allowing for slight bending, so the error was corrected without problems.

Except for that one mistake, the building process went smoothly, and plate by plate, the model started to take shape. It was really fun to watch how small Lego bricks combined to give the illusion of intricate mechanical equipment on the surface of the model. And it was a really great feeling when the last plate was added.

The only thing remaining at this stage was to find a place for it in my room, which is a bit cramped, but having completed the assembly, that doesn’t seem that bad.

Final model


I took a lot of images during the process. Instead of inserting them all into this blog post, I will just link to Picasa, where they are all on display. I apologize for the quality of the images. They were taken with my phone, which doesn’t have the greatest camera in the world.


Building this model has been a really great experience. It is also the masterpiece of my collection of Lego Star Wars models, which also includes 1:12 scale models of an X-wing fighter and Darth Vader’s TIE fighter (X1 prototype). On the other hand, it is a bit sad, as I will probably never find anything to top this when it comes to Lego, but I would still recommend it to anyone who is interested in building Lego models. That is, if they can get their hands on the set, as it appears to no longer be on sale.

Living with Ubuntu 10.04

In my last post, I documented the upgrade of my laptop from 9.10 to 10.04. That was almost three weeks ago. In the meantime, I’ve been living with my new system and using it daily for my work, so here are my thoughts and post-upgrade modifications of the clean system.

First, I must apologize for not being completely thorough in my last post. The installation of the rotation support required a bit more work than documented.

Specifically, the daemon failed to start automatically, and instead of fixing it, I simply added the daemon to my startup programs in Gnome. This meant that I had to grant my own user sudo access to setkeycodes instead of the video group, as the daemon was now running with my user information. Other than that, no problems.

The modifications

I’m in a constant war with the small (12.1-inch) monitor on my laptop, and the fact that its maximum resolution is a mighty 1024 x 768 doesn’t make things any better, so I’m always on the lookout for new ways to get the most out of my screen. Especially vertical space.

We finally have a magic combination of X, compiz and Gnome that seems to handle a rotating screen without any major problems, and my computer can actually run compiz without any major lag, which means that I want a dock. Sure, docks hail from the evil depths of Apple, but just because they were used somewhere else first doesn’t make them bad.


I tried several docks, and consulted Google a lot on the matter (it seems to be a hot topic on several blogs), but finally settled on Avant Window Navigator, or awn for short. For reference, I also tried Gnome-Do in dock mode (Docky) and Cairo dock.

Docky was actually quite nice, especially the fact that it was still Gnome-Do, just with dock features, but the lack of configuration options bugged me. I’m still using Gnome-Do, but with the classic skin. More about that later. Cairo dock was also really nice, but paradoxically, it had too many configuration options. Not that I don’t like options, but I kept getting stuck trying to get what I wanted. Finally, awn seems a bit more stable than Cairo dock, or at least handles crashes better. Also, awn is the only dock that has a good “stack” plugin for showing the contents of a folder directly in the dock.

So, when I had the dock configured as I wanted it, placed at the bottom of my screen and containing the main application menu, a desktop switcher, a list of running programs, a clock and a systray, I realized that the gnome panel at the top of my screen was just sitting there, taking up 24 precious pixels of my vertical screen estate, without any function.

Getting rid of gnome-panel

It turns out that the panel has grown more tenacious over the years. Just a few versions of Gnome ago, it was possible to get rid of the panels by right-clicking and selecting the “remove this panel” option, but that function has been curtailed. It is now impossible to remove the last panel.

Or, not impossible, but absolutely not easy. In order to get rid of it, you need to open


navigate to /desktop/gnome/session/required_components and clear the value for “panel”. Then you need to log out and in again for the change to take effect, but you should be greeted by a nice, panel-free, desktop.

The only caveat of disabling gnome-panel this way is that the run dialog previously available at Alt+F2 goes with it, which can be seen as a loss if you use it much.

Finishing touches

Even though my dock has a main menu, the same as the gnome-panel had, I really like other ways of launching applications. First off, there is the now-disabled Alt+F2 run dialog. This has been completely replaced by Gnome-Do. There is a slight problem with infrequent crashes, which makes me thankful for the previously mentioned menu, but other than that, it just works! Running applications, opening files, taking quick notes, looking up word definitions, connecting to ssh hosts and controlling VirtualBox VMs are just some of the things that Gnome-Do makes easier.

But when you want fine-grained control, there really isn’t anything like a command line. In that department it doesn’t hurt to have a terminal hotkeyed, but the best thing is really an always-active terminal that can be summoned and dismissed at the press of a key. I give you Tilda!

It is most often described as a Gnome version of YaKuake, which itself is just a ripoff of the terminal in Quake. It is a terminal emulator that is always running and slides down from the top of your screen at the press of a key (F12 in my case; it can be customized). And when you don’t need it anymore, the same key will make it disappear (but not close), meaning that you can start a program in the terminal, dismiss the terminal, and the program will continue to run. And just like Gnome-terminal (or just about any other terminal emulator out there), Tilda has tabs, which means that you can have as many terminals as you want hidden away at the top of your screen.

Oh, and while I’m on the topic of terminals, do yourself a favor: install the xfonts-terminus package and use that as the default font in your terminal. Your eyes will thank you later.


I’m one of those geeky persons who will probably never stop customizing their desktop environment, but for now, it seems like I’ve found something that I can actually just use without messing around with it all the time. It doesn’t get in the way of my work, doesn’t take up any screen space at all, and I have easy access to everything. This one is a winner.

Ubuntu 10.04 on Lenovo X61t: Success

Despite my earlier misgivings about the whole upgrade procedure, it seems that it went really well. I used the following procedure:

  1. Make a full backup of my home dir. This includes all hidden (dotted) files
  2. Reinstall from live-cd (With new partitions, the old ones were ext-3)
  3. Copy back documents and other important files, like ssh keys
  4. Install a minimum of programs in order to feel comfortable
  5. Make the tablet work again
  6. Make rotation work

Making the tablet and stylus work

This was a real pain, as the configuration method has been changed again. In order to configure the stylus, we now have to edit files in


specifically the 10-wacom.conf file. The good news is that the familiar syntax from xorg.conf is back, and better yet, it seems to be staying.

For more information about the configuration, look at thinkwiki.org. The page is about trackpoint configuration, something you would want to do anyway, but the section about xorg.conf.d is the one that gives a hint about the process.

My specific configuration needs were:

"TPCButton" "on"
"Button2" "3"
"Button3" "3"

in order to prevent the stylus from sending clicks when the tip isn’t touching the screen and map the single barrel button to a right-click (X button 3).

Rotation support

The tablet can be made to rotate automatically when the screen is swiveled. In order to do this, I needed to fetch the sources from the tablet-screen-rotation-support branch of the Tabuntu project.

In order to compile the source on my pristine system I had to install


And when compilation and installation were done (as per the INSTALL file), I had to manually create the ACPI event listeners. The specific event strings for the swivel events can be found with

sudo acpi_listen

And for both events, you need to run


With that done, all that remains is to bind the rotate button to the


program that was also installed. Fortunately, this can be done with the gnome keyboard shortcuts manager.

…It even looks like compiz behaves well when rotating the screen 🙂

All in all, the most satisfying clean upgrade I’ve done in years.

sudo apt-get dist-upgrade

It’s that time of the year again. Time to upgrade my Ubuntu installation to the
newest version.

In the past, this has been a mixed pleasure.
I’ve been running Ubuntu on my laptop since 6.06, and while
dist-upgrading has become a lot easier, I’m still not completely relaxed about it.

In the early days, I had to re-install my system when doing an upgrade, as the automatic upgrade scripts would always break something.
Now, however, those problems are mostly gone, and the last upgrade went without a hitch.

This time, though, I’m contemplating doing it the old-fashioned way, mostly to get rid of all the cruft that has gathered on my hard drive during the last year.
Old versions of programs that I no longer use, and stuff like that.
The real problem with that is remembering my system configuration in order to get my computer back the way it was.
Like most users, I like to personalize my system to fit my needs, which has resulted in a lot of small alterations, most of which I can no longer remember.
The modifications I’m most worried about are the ones for tablet support (I have a Lenovo X61t), which are a pain to get right, even with the help of Google, but also smaller configurations, like the Gnome theme I’m using and stuff like that.
Most of those settings are saved in my home directory, but I plan on cleaning that one out too, in order to get some fresh config-files in my system.

Another solution might be to just save myself the trouble and keep using my computer as it is… but who am I kidding. I want new! I want shiny!

I will try to document the process here, if I survive…