New design

A few days ago, I decided to give my blog a new look. Consequently, I wanted to upgrade nanoblogger from version 3.3 to at least 3.4 or even 3.5. Thinking about the upgrade procedure, however, gave me a headache - there are just too many small fixes and workarounds I had tinkered into nanoblogger's source code. After spending some time searching for alternatives, I eventually ended up with Tinkerer, a Python-based static blog compiler. Apart from being actively developed, Tinkerer has two advantages over nanoblogger that I especially want to emphasize.

First of all, Tinkerer is fast. Completely rebuilding my blog takes just about 2 seconds - nanoblogger needs over 3 minutes for the same task. Secondly, Tinkerer offers source code highlighting for many programming and markup languages by using Pygments.

Additionally, transferring the old blog postings from nanoblogger was easier than expected. I wrote a small shell script that converts the *.txt files inside nanoblogger's data directory into a format Tinkerer understands. Of course, this just automates some steps of the process and can't spare you the work of manually fixing errors and warnings Tinkerer might report. Still, it saved me a lot of work.
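
The script itself is specific to my setup, but its general shape was something like the following sketch. The field markers (TITLE:, BODY:, END) and the output layout are assumptions about nanoblogger's data format - check them against your nanoblogger version before relying on this:

```shell
#!/bin/sh
# Sketch of a nanoblogger -> Tinkerer converter. The TITLE:/BODY:/END
# markers are assumptions about nanoblogger's entry format; adjust as needed.
convert_entries() {
    mkdir -p converted
    for f in data/*.txt; do
        [ -e "$f" ] || continue
        title=$(sed -n 's/^TITLE: //p' "$f" | head -n 1)
        out="converted/$(basename "$f" .txt).rst"
        {
            # reST title plus an underline of matching length
            printf '%s\n' "$title"
            printf '%s\n' "$title" | sed 's/./=/g'
            echo
            # everything between the BODY: marker and the END marker
            sed -n '/^BODY:/,/^END/p' "$f" | sed '1d;$d'
        } > "$out"
    done
}
convert_entries
```

The resulting .rst files still have to be hooked into Tinkerer's own directory layout, and they will usually trigger some of the errors and warnings mentioned above.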

On the downside, I already stumbled on some bugs - if you plan to use Tinkerer for your own blog and you're repeatedly getting unexplainable UnicodeErrors, this might save you a lot of trouble.

Hardly known

Most Linux and some Ubuntu users know a certain set of command-line programs for interactive shell usage. Most importantly, there are the standard tools from the GNU core utilities, which cover many aspects of everyday work. You'll find these tools preinstalled on almost every Linux-based desktop or server system (embedded systems often tend to use all-in-one tools like BusyBox as a replacement for the core utilities). Additionally, some of the commonly used tools like grep or strings come in separate packages, which are also available on most systems.

That's why these programs are already thoroughly discussed in many books, blogs and internet forums. Yet there are some hardly known but useful shell programs that even seasoned Linux users might not be aware of. This blog post will introduce two of these tools that I consider quite convenient for their special purpose.

The first one, iprint, might be one of the smallest pieces of software available in the Debian repositories. The source code of this handy utility consists of 23 lines of C code, and the compiled ELF executable occupies about 6KB of precious disk space on my system. Still, if you're a programmer, you might consider these 23 lines useful for your work: i <arg> shows the decimal, hexadecimal, octal and binary representation of arg. If the value of arg corresponds to a printable ASCII character, the respective character is printed as well. If you precede arg with 0, 0x or 0b, arg is interpreted as an octal, hexadecimal or binary value. Of course, you may pass multiple values to iprint in one call.

The second tool is somewhat related to grep. One thing I especially like about grep is its ability to highlight matches in most terminals using ANSI escape codes. For some purposes, however, I wanted a tool that highlights specific keywords without filtering the input text for those keywords. A quick internet search showed that histring might be exactly what I was looking for. Unfortunately, many hyperlinks pointing to project pages for histring were no longer available. The GRML repository, however, not only had a compiled version of histring available, it also provided the source code. So after compiling, you may invoke histring more or less just like grep - support for case-insensitive matching and regular expressions is already included. ;)
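
If you can't track down histring at all, plain GNU grep can approximate the highlight-without-filtering behaviour: adding an empty alternative $ makes the pattern match every line, so no line is dropped, while the real matches still get colored. A small sketch:

```shell
# Print every line, but highlight occurrences of WARN: the alternative '$'
# matches (zero-width) at the end of each line, so nothing is filtered out,
# and GNU grep only colors the non-empty matches.
printf 'ok\nWARN disk full\nok\n' | grep --color=always -E 'WARN|$'
```

Unlike histring, this relies on GNU grep's --color implementation, and the highlighting only survives further piping if you force --color=always.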

Default parameters

Recently I came across an interesting snippet of C++ code:

#include <iostream>
#include <string>

class Base {
public:
    virtual void message1(){ std::cout << "Base message1" << std::endl; }
    virtual void message2(std::string param = "Base message2"){ std::cout << param << std::endl; }
};

class Derived : public Base {
public:
    virtual void message1(){ std::cout << "Derived message1" << std::endl; }
    virtual void message2(std::string param = "Derived message2"){ std::cout << param << std::endl; }
};

int main(){
    Derived d;
    Base *base = &d;
    base->message1();
    base->message2();
    return 0;
}

If you compile this with g++ and run the produced binary you'll get the following output:

Derived message1
Base message2

At first glance this might look a little confusing. It seems like the correct overriding function is only called for message1 but not for message2. However, if you change the function bodies as follows, you can see that the correct function is called both times:

class Base {
public:
    virtual void message1(){ std::cout << "Base::message1 Base message1" << std::endl; }
    virtual void message2(std::string param = "Base message2"){ std::cout << "Base::message2 " << param << std::endl; }
};

class Derived : public Base {
public:
    virtual void message1(){ std::cout << "Derived::message1 Derived message1" << std::endl; }
    virtual void message2(std::string param = "Derived message2"){ std::cout << "Derived::message2 " << param << std::endl; }
};

Derived::message1 Derived message1
Derived::message2 Base message2

As you can see, the overriding functions of the derived class are called in both cases. The problem stems from the default parameter of message2: default arguments in C++ are resolved at compile time based on the static type (here: Base), whereas the function to call is determined at run time from the dynamic type (here: Derived). As a rule of thumb, you should never change the values of default parameters in overridden functions. Although it's legal and the result is well defined, doing so will only lead to confusion and subtle errors. Another way to avoid this issue is to abstain from using default parameters for virtual functions altogether. If you want to know more about this and many other quirks of C++, you should take a look at Scott Meyers's book Effective C++.

Security through obscurity

From the variety of available email clients, I found Claws Mail to be my favorite (maybe 'cause after 6 years of Linux, I still haven't found the time to configure mutt...). Anyway, in today's posting I will not praise the advantages of Claws Mail, but rant a little about one of its "security" features.

Like most programs, Claws Mail stores its configuration in a separate directory in the user's home folder. This folder contains, among other things, all account information. Since Claws Mail doesn't offer any kind of password manager or "master password", one would think that the passwords for the mail accounts are stored in plain text. However, the accountrc file contains base64-encoded strings of DES-encrypted passwords. At this point, one should wonder how the program can encrypt the passwords without asking the user for a password. The solution is simple - the password is hardcoded into the binary. With this knowledge it's obvious that this approach is a clear case of security through obscurity. Given the accountrc file and the binary, anyone can easily decrypt the passwords, e.g. with this standalone C program.

If you're asking for more security than restrictive file permissions for your home folder can provide, you still have several options: patch Claws Mail's source code to use a real password safe for storing the passwords, use file encryption (either for your complete home folder, or just for ~/.claws-mail, e.g. with encfs), or switch to another email client.
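
The underlying weakness is easy to demonstrate with OpenSSL. The sketch below uses a made-up hardcoded key and AES instead of Claws Mail's DES (both purely for illustration - the real key and cipher details live in the Claws Mail source): whoever extracts the key from the binary can simply replay the decryption step.

```shell
# Hypothetical hardcoded key, standing in for the one embedded in the binary.
KEY="hardcoded-key"

# What the program does: encrypt the account password, then base64-encode it.
enc=$(printf '%s' 'hunter2' | openssl enc -aes-128-cbc -pbkdf2 -pass pass:"$KEY" -base64 -A)
echo "stored in the config file: $enc"

# What an attacker with the binary does: decrypt with the very same key.
printf '%s' "$enc" | openssl enc -d -aes-128-cbc -pbkdf2 -pass pass:"$KEY" -base64 -A
echo
```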

Toggle SSL

To switch easily between the HTTP and HTTPS version of a website, I wrote a small plugin for Vimperator that can be found here. Save it into ~/.vimperator/plugins/ and restart Firefox. You should now be able to switch between the HTTP and HTTPS version of a website by pressing \h.

Advanced I/O redirection

Recently I had to commit a bunch of changes via SVN. Since it's really recommended to review all changes made in the working directory before actually committing the data, I issued svn status | grep ^M to see all files that have been modified since the last commit. The result was a fairly long list of files, and I wanted to check which changes were actually made to each individual file. Of course, every SVN user knows about svn diff, or even better svn diff | less, which gives a complete diff of all modified files. However, I don't really like this: it just glues diff after diff together, and if you scroll too fast, you will miss one or more small but important changes. That's why I wanted a mechanism that shows the diff for one file at a time until I explicitly proceed to the next file. My first approach was a simple one-liner:

svn status | grep ^M | awk '{print $2}' | while read l; do echo "****** $l ******"; svn diff "$l" ; read tmp; done

As you will notice, this doesn't really work - the two read commands take turns reading from the same pipe, so read tmp swallows lines that were meant for the loop. One elegant solution involves the shell builtin exec:

exec 3<&0
svn status | grep ^M | awk '{print $2}' | while read l; do echo "****** $l ******"; svn diff "$l" | less; read tmp <&3; done

The first line creates a copy of the current stdin (file descriptor 0) and assigns it to a new file descriptor 3, i.e. both 0 and 3 will read from the keyboard, which is the default in a newly created shell. In the next line, file descriptor 0 is redirected several times (remember: a | b redirects the stdout of a into the stdin of b), so the first read gets its lines from the awk command. The second read, however, reads its input from file descriptor 3, which still refers to what file descriptor 0 pointed to at the beginning, i.e. it reads the keyboard input. (I also piped svn diff through less, but that's just a small enhancement unrelated to the main problem.) This is just a simple example of the power of bash's redirection - more complex ones do exist ;)
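
The mechanics are easier to see in a self-contained toy example. Here file descriptor 3 is opened on a temporary file instead of being duplicated from the terminal's stdin, so the snippet runs non-interactively, but the principle is the same: the loop's stdin is the pipe, while fd 3 delivers the "keyboard" answers.

```shell
# Answers that would normally come from the keyboard.
answers=$(mktemp)
printf 'yes\nno\n' > "$answers"

# fd 3 now reads from $answers, completely independent of fd 0.
exec 3< "$answers"

# Inside the loop, fd 0 is the pipe; fd 3 still delivers the answers.
printf 'file1\nfile2\n' | while read -r item; do
    read -r answer <&3
    echo "$item: $answer"
done

exec 3<&-          # close fd 3 again
rm -f "$answers"
```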

Flashplayer issues

While older versions of Adobe's Flashplayer for Linux made content like Youtube videos accessible via the /tmp filesystem, the latest versions hide these files from the user by exploiting a feature of unlink:

If the name was the last link to a file but any processes
still have the file open the file will remain in existence until
the last file descriptor referring to it is closed.

In other words, the flashplayer creates a new file in /tmp, deletes it right away with unlink, but keeps the file descriptor open, so the flashplayer process may still access the data. This, however, may lead to confusion: df reveals that the free space on /tmp is shrinking, while du doesn't show any growing files at all. One way to fix this issue is simple - use library preloading to override the original unlink function used by firefox: download the tgz archive, unpack it and run make. If the previous steps were successful, you should now have a shared library file available. The last step is to tell firefox (or, more precisely, the dynamic linker) to use the unlink function from this library rather than the one from your C standard library:

LD_PRELOAD=/path/to/ firefox

The LD_PRELOAD environment variable tells the dynamic linker to load the listed libraries before all others, so their symbols take precedence - in this case, our library's unlink wins over the one from the C standard library. You might want to add an alias like the following to your environment, but for obvious reasons you shouldn't globally export LD_PRELOAD.

alias ff="LD_PRELOAD=/path/to/ firefox"

Yet there is one drawback with this solution: even if you close firefox, the files in /tmp will persist, so you may want to delete them manually from time to time...
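
By the way, the unlink behaviour quoted above is easy to reproduce in the shell, and it also explains why du can't see the space that df reports as missing:

```shell
tmp=$(mktemp)
echo "still here" > "$tmp"

exec 4< "$tmp"     # keep a file descriptor open on the file
rm "$tmp"          # unlink: the name is gone...

ls "$tmp" 2>/dev/null || echo "name is gone"
cat <&4            # ...but the data is still there: prints "still here"

exec 4<&-          # closing the last descriptor finally frees the space
```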