This post is about the reasons for splitting part of GNOME Calculator in two and rewriting it, as I see them (without absolute knowledge of the project, so if you disagree with anything or have any ideas, just let me know), and to raise awareness of the process (possibly late, but not too late).

The problem

GNOME Calculator is a handy little application. Long story short, it is a calculator application for GNOME, as you all know. Written (and rewritten) for GNOME, it includes a lexer, a parser, and evaluation of expressions, plus a GTK+-based user interface to access the features of the calculation engine. This "engine" currently lives inside the project tree's lib folder and is used as a static library by the application. The library seemed fairly well split from the user interface, but it turned out there is a dependency on GtkTextView, because the mathematical equation subclasses Gtk.SourceBuffer (from the gtksourceview library) for easier handling, which in turn is a subclass of Gtk.TextBuffer (from gtk+). So gtk+ is a transitive dependency of the calculator library. Moreover, the library also has a direct dependency on gtk+ due to having a reference to a Gtk.TextTag for "marking" the answer part of the equation, to be able to reuse it and/or find it programmatically in the text view, or visually by marking it with bold characters.

With all this stated, you can see that if you would like to build a simple console application for evaluating expressions using this library, you would have to pull in gtk+ and gtksourceview as dependencies, which might not be the best thing to do.

The solution

It would be great to have the calculation engine without a dependency on any display toolkit, so that it can be reused by other projects. The library source could live inside the gnome-calculator source tree, but it could also live separately. I would go with the first option for now (it looks easier to me), unless you have a good reason to move it to a separate git repo. And if you have one, please share it.

The library license

Now the harder question is: if we want the library to be used by other projects, what license should it use? Currently, as it lives inside the gnome-calculator tree, it is licensed under the GPL (v3), but that has certain restrictions, and some people argue that the LGPL is usually a better fit for libraries. I'm always wary of licensing questions and always ask for help in these matters, so when the person asking for the library split also asked for a re-licensing of the library, I got a bit worried. Fortunately, this will not be the first relicensing in history, so there is quite some information on how to do it. It involves contacting ALL authors to check whether they are OK with the re-licensing. And there are quite a few authors, ranging from easy-to-find people with e-mail addresses, to people without e-mail addresses, to companies without a contact (e.g. Sun Microsystems, acquired by Oracle quite some time ago).

The process

Daniel Espinoza Ortiz has been working hard for quite some time to make this happen, with a couple of people jumping in with ideas and feedback on his MR or the related bug report.
Daniel has already contacted all the authors he could (with some level of success, and some responses accepting the relicensing), and he is working hard on re-implementing the parts of the code authored by people/entities who could not be contacted or who didn't respond.

Feedback is important

I can see the obvious benefit of these changes for the project and am willing to continue (translated: to review and accept the related MR) with the re-licensing and library split, as the benefits outweigh the disadvantages for me. But I might have missed something you know about, or have already experienced while doing something similar, so it would be great if you could help Daniel and us finish this transition in any of the following ways, after checking the discussion in the MR or in the issue:
  • if you are an author/contributor of any of the files in gnome-calculator/lib folder and you didn't receive anything related to re-licensing, please state if you accept or refuse the re-licensing from GPL to LGPL
  • if you have any suggestions on the process (either re-licensing or splitting out a backend library of a project) and what to look out for, just share them in a quick comment here, in the MR, or in the issue
  • if you have any suggestions on the MR, just join the discussion

So, Google Summer of Code came to an end. I did (try my best to) mentor Ruxandra on her quest to modernize Five or More.

Some quick thoughts on the reasoning behind using Vala as the language. No flame wars intended here; these are just my personal thoughts:
* GObject with C requires too much boilerplate code
* Vala source is translated to C
* Most of the games under the GNOME umbrella are in Vala (they have been ported some time ago), thus potential new contributors can get involved easier
* there is a very comprehensive API reference, easy to browse (although similar ones exist for each advertised language - see JavaScript, C++, Python, C - with some of them easier to browse than others) - I am hoping that the new developer portal will solve this and make API references browsable, with a programming-language combobox for toggling between all of the listed ones, regardless of the technology stack chosen for it

There are some arguments against Vala (not that efficient, syntax not OK for everyone, Vala being internally developed by GNOME), and I even agree with some of them (I'm not sure whether maintaining a language takes more development effort than finding a language and maintaining proper bindings for it - I saw interest in GTK development with Rust, but other than librsvg - from the GNOME world - no one seemed to really jump on it).

So, given the above, Ruxandra successfully ported the old game from C to Vala, carefully rewriting it piece by piece, mostly independently from the existing code (although some parts have been ported line by line) and from the existing Vala port (used as a reference in some places). The only thing which didn't get in (and honestly, I think I can say fortunately) is raster-based themes for the game. I think that these days, when we are transitioning to having application icons only in scalable vector format (SVG), it's not worth investing in raster-based themes, which either look horrible at huge resolutions, or need huge resource files at a huge resolution to look fine on big/HiDPI screens while downscaling on non-HiDPI screens.

I am sorry if anyone feels offended by the removal of the two raster-based themes (dots and gumball); please let me know, and I will personally handle them being replaced by similar vector-based art. Until then, if you somehow preferred using any of those themes, after the next release you will fall back to the default theme, which is the color balls vector theme.

So, what next?
* I'm waiting until the 3.30 release in mid-September
* the code changes Ruxandra proposed will get in after that release; feedback should probably be incorporated by then - thanks to everyone who did and/or will post feedback on the merge request
* the new vector theme I have created (as a replacement for the removed dots and gumball themes) will get in after adding the possibility of themes with different animation frame counts; currently all animations have only 4 frames, and the one I am proposing needs at least 5 to look smooth (see it in action/animated here). Design and artist feedback on it is more than welcome (I'm a developer, so constructive criticism only)

So, thanks to Ruxandra, Five or More 3.32 will be a completely new application, with a new theme. And ... ssh... also planning to support mobile (Librem 5).

In a previous post I speculated about adding tagged unions to Vala. Let’s do that again.

In order to support the C usecase, Vala would need to include 2 types of tagged unions, a legacy one, and an idiomatic one. The legacy should look like the C version of a tagged union:

struct tagged {
    short tag;
    union variants {
        string text;
        int num;
    } variants;
} tagged;

This is needed to bind the C libraries out there. The type of the tag should be something that can be easily compared, IMO (numeric data types, C enums, and strings). The binding would then look like this:

[CCode (cname = "tagged", union_tag_field = "tag", union_tag_type = "short", union_field = "variants")]
union Tagged {
    [CCode (cname = "text", union_tag_id = "1")]
    TEXT {string text},
    [CCode (cname = "num", union_tag_id = "2")]
    NUM {int num}
}

The idiomatic ones, however, would actually look like a Rust struct, so if we declare:

public union OrderStatus {
	ACCEPTED,
	CANCELLED {string reason, string cancelled_by},
	REJECTED {string reason},
	COMPLETED {DateTime completed_at},
	ON_HOLD {string reason, DateTime until}
}

We should get:

enum OrderStatusTag {
    ACCEPTED,
    CANCELLED,
    REJECTED,
    COMPLETED,
    ON_HOLD
} OrderStatusTag;

union order_status {
    struct accepted {OrderStatusTag tag};
    struct cancelled {OrderStatusTag tag; string reason; string cancelled_by};
    struct rejected {OrderStatusTag tag; string reason};
    struct completed {OrderStatusTag tag; GDateTime completed_at};
    struct on_hold {OrderStatusTag tag; string reason; GDateTime until};
} OrderStatus;

Fun things to support: GVariant variant types (unboxing, serialization, etc.), GValue, JSON representations.

The fresh new tooling used for development in the GNOME project (GitLab, Meson, Docker, Flatpak) has a lot of potential.

Some applications provide nightly flatpaks using GitLab CI, and while that is nice, I wanted to start with something simpler, like taking a screenshot of an application built with GitLab CI + Meson. I asked on #gnome-hackers, and several hackers jumped right in with ideas, but it turned out no one had done it recently.

Thanks to the suggestions and a couple of DuckDuckGo searches, I implemented taking screenshots of gnome-calculator (which had initial GitLab CI pipelines thanks to Robert Ancell); anyone can check out the GitLab CI config on the ci-screenshots branch.

Screenshot of Calculator from Gitlab CI
The idea is fairly simple (inspired by a GitHub repo):
* Build the application using meson
* Start Xvfb, a virtual framebuffer X Server
* Start the application, in this case Calculator
* Take a screenshot using the import tool from ImageMagick (using the window name you want to screenshot or the whole desktop)
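The steps above can be sketched as a `.gitlab-ci.yml` job. This is only an illustrative assumption of how such a job could look (the job name, image, paths, display number, and window title are mine, not the actual branch config):

```yaml
screenshot:
  image: fedora:latest
  script:
    # build the application with meson + ninja
    - meson _build
    - ninja -C _build
    # start a virtual framebuffer X server on display :99
    - Xvfb :99 -screen 0 1280x1024x24 &
    - export DISPLAY=:99
    # launch the freshly built calculator and give it time to show up
    - ./_build/src/gnome-calculator &
    - sleep 10
    # grab the window by name with ImageMagick's import tool
    - import -window "Calculator" screenshot.png
  artifacts:
    paths:
      - screenshot.png
```

The artifact makes the resulting PNG downloadable from the pipeline page.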

This seems to work, with some glitches:
* the fallback appmenu mode is used, which means the application icon is shown in the headerbar and reveals the appmenu on click (this is something I use, but it is not the default)
* the titlebar contains all three window controls, while on default Fedora there's only the close button
* the fonts also do not look like the default Cantarell to me
* a strange border appears all around the window

All of these issues might be caused by the fact that there's no real user with default GTK/GNOME and session settings, so GitLab CI is using some sort of Fedora system defaults (I have only added the screenshot stuff to the Fedora build for now). If you happen to have an idea on how to fix/improve any of the above, just let me know on #gnome-hackers (evfool) or in a comment.

I’ve started to use Kotlin professionally, and I keep an eye on Rust. Both offer a lot of niceties that I wish we could adopt in Vala, but there’s one that keeps popping up in my mind every time: pattern matching.

The simplest pattern matching we have is C unions, and a lot of C libraries use them. Unfortunately, the current handling of unions in Vala is a disgrace, and there’s no alternative to it. But I believe we can import some syntax from both Kotlin and Rust. Here is my proposal for how unions should work in Vala:

//Opening a bracket defines an "anonymous struct"
public union OrderStatus {
	ACCEPTED,
	CANCELLED {string reason, string cancelled_by},
	REJECTED {string reason},
	COMPLETED {DateTime completed_at},
	ON_HOLD {string reason, DateTime until}
}

match (order.status) {
	ACCEPTED -> info("Cool!");
	CANCELLED -> debug(@"Not okay, it was cancelled because of $(it.reason) by $(it.cancelled_by)");
	REJECTED as that -> info (@"Rejected: $(that.reason)");
	default -> error("What is this?? There's no implicit \"it\" here because it's a catch-all!")
}

public union NestedUnion {
	SIMPLE,
	union COMPLEX {
		SOFT,
		HARD {string reason}
	}
}

//The additional field belongs to the wrapping struct

public union ComplexUnion {
	FIRST,
	SECOND,
	THIRD {string reason};
	//parent field, children cannot have a field with the same name
	uint32 timestamp;
}

//Maybe this is not a good idea

public union VeryComplex {
	FIRST {
		override void run() {
			debug ("First!");
		}
	},
	SECOND {
		override void run() {
			debug ("Second!");
		}
	},
	THIRD {
		override void run() {
			debug ("Third!");
		}
		override void do() {
			debug ("Yay!");
		}
	};
	//They are all required to implement it!
	abstract void run();
	//Optionally overriden
	virtual void do() {
	}
	//Can't touch this
	public void execute() {
		run();
		do();
	}
}

//In this case, they reuse existing datatypes

public union ExternalUnion {
	STRING {string value}
}

public void method () {
	var order = new Order(OrderStatus.ON_HOLD("reason", new DateTime.now_local()));
	var other_order = new Order(OrderStatus.CANCELLED(cancelled_by = "desiderantes", reason = "who knows!")); 
	var nested = NestedUnion.COMPLEX.HARD(reason = "no reason at all");
	//'match' can return a value, but all branches should return the same type
	//this 'match' in particular is exhaustive, so no default needed, but if you return a value from 'match', you have to either
	//cover all cases or have a default branch
	NestedUnion another_nested = get_from_network();
	var reason = match (another_nested) {
		SIMPLE -> "Just because";
		COMPLEX -> match (it) {
			SOFT -> "Really easy";
			//if not renamed, then you'll lose access to the 'it' from the outer context, as it'll be shadowed
			HARD as that -> that.reason;
		};
	};
	//This errors
	var complex = ComplexUnion(123456789);
	var complex = ComplexUnion();
	var complex = ComplexUnion.FIRST();

	//This should work
	var complex = ComplexUnion.THIRD(123456789, "I can");
	var complex = ComplexUnion.THIRD(reason = "Just because", timestamp = 321654987);
	match (complex) {
		//properties from the parent are only accessible from the parent reference, no implicit parent var
		FIRST -> debug(@"$(complex.timestamp)");
		SECOND -> debug ("Oops");
		THIRD -> debug @("$(complex.timestamp) by $(it.reason)");
	var external = ExternalUnion.STRING("this string is required");

The internal structure (C-wise) of my proposed tagged union is nothing new; it has been done a lot before in C land (here is an explanation from the Rust viewpoint).

Five or More

This year I proposed a Google Summer of Code idea (we are in the student application period) for modernizing Five or More, a game left out of the last games modernization round, when most of the games were ported to Vala.


Several people asked what this project was, and some of them started contributing, so I got into trouble finding tasks that would not be made obsolete by the full Vala rewrite. I started filing tasks, and they started solving them at a pace I had a hard time following with reviews. Here are the most important updates so far, already on master:
  • migration from intltool to gettext
  • migration from autotools to meson
  • split the monolithic five-or-more.c file into several components
  • migration from custom games score tracking to libgnome-games-support
There still are some more tasks open, Flatpak packaging is being worked on, and let's see if someone really applies for the modernization; then we will need some help from a designer, and casual gamers will simply have to wait for the next stable release to enjoy a modern application (modernized inside-out: from the git hosting, through the build infrastructure and the programming language, all the way up to an improved UI/UX). But that is the future.


The current default theme (32x32 images), created by jimmac in 2000
I have been working a bit recently on Atomix (it being my first project migrated to GNOME GitLab - congratulations to everyone involved on that front). Other than the internal modernization (using gettext, using Meson), I have been experimenting with graphics changes, to overcome the barely visible connections between atoms and to modernize the looks a bit. I have been experimenting with Kenney's public-domain (CC0) images for now, and I have to say I'm quite pleased with the results (they still need some fine-tuning). As a picture is worth one thousand words, I'm posting pictures worth three thousand words: a screenshot of the current interface, a screenshot with my preferred theme, and one with the adjustments suggested by my wife.
Initially I was thinking about using "this many" colors, but... we are talking about a game; it cannot and probably should not strictly follow the color scheme used by applications. In my wildest dreams, games would also use different theming for their components, something more playful, colorful, relaxing.

Light theme, darker background, lighter walls, more visible connections
Dark theme, lighter background, thinner walls, visible connections
What do you think? Which one of the three do you prefer and why?

Consider this a call for help: if you are a designer, or just want to have fun doing some artwork for a small game, contact me to fine-tune the current theme towards a more modern look (the old one is already grown up, at 18 years old).
I have been working a bit on various projects; I will try to summarize and ask for help, as any of these fronts could use it. Feel free to contact me if you are interested.

Let's start with the netstats (hard)work @antares has done (still under review for merging into libgtop master, merge request #1 on the libgtop GitLab): she investigated a lot to find the best way to get per-process network statistics into libgtop, something both Usage and System Monitor should benefit from. This is currently implemented as a root daemon using libpcap for capturing packets and summing their sizes, exposing a D-Bus interface. Congratulate her for the great job and the tremendous patience she has shown enduring all my reviews and nitpicking comments.
In the long term I would also like to support the gtop daemon used on *BSDs on Linux, which we couldn't get to work, but Ben Dejean has already come up with a solution to our problems, and with his help I'm sure we will have a libgtop Linux daemon. For the internal network statistics calculation we are investigating alternatives to libpcap (an eBPF-based one - suggested by libgtop senior Ben Dejean, mostly running in-kernel, but also needing root - and another suggestion from Philip Withnall using cgroups + iptables + netfilter, which could get away with fewer privileges). Any ideas, other options or help on that front would be welcome; these ideas are only sketched, so replacing the currently working libpcap-based implementation will take some work.

System Monitor using libdazzle cpu chart
On another front, I have tried improving System Monitor's charting, namely transitioning from the custom charts implementation to the graphs from libdazzle, which are also used in Builder. This is a work in progress; anyone can check out the wip-charts branch of System Monitor. Currently only the CPU chart has been migrated, still using the dazzle CPU chart implementation, meaning the colors are not customizable using the color pickers and there are no labels on the axes, nor labels with the current usage for each CPU (see the screenshot). It would be great to finish the implementation with a custom GObject class allowing customization of the colors; then I would be happy to drop the custom graphing implementation. Thanks go to Christian Hergert for being helpful, and for his timely reviews and suggestions on libdazzle-related issues.

We are getting close to the 3.28 release, we are in the freeze, so it's time for a quick summary of what happened this cycle with the projects I occasionally contribute to.

Calculator was the major player this cycle (well, lately, to be more precise), with a quick bug cleanup (both on GNOME Bugzilla and Ubuntu Launchpad), the merging of older patches, and new bugs created by merging old patches; here are a few of the most relevant changes:
* The Meson port got into the calculator repository - thanks to Robert Ancell and Niels de Graef for the patches, and to the people reporting bugs since that happened; I am trying to keep up with the bugs as they come in. Thankfully, Meson is not only faster, but also a lot easier for me to understand and use (and better documented), so the fixes do not (always) start with me shouting for help. Please go ahead and try the Meson build, and if you find anything to complain about, just do it on the issue tracker (Bugzilla, or hopefully GitLab soon).
* If you used the ans variable, be aware that it was replaced with the _ variable (as in Python), to avoid it being confused with a time unit in some languages. This was quite a big failure of mine in handling the issue, as it popped up late in the cycle, and I didn't know how to handle it, since it would have required a freeze exception, translation updates, etc., something which I wasn't ready for (the first one is the hardest). Instead, I chose the way easiest for me, which meant lots of headache for some people (not being able to use results of previous calculations with the given locale) and more headache for maintainers in various distributions dealing with the related bugs and patches. Well, that is something I need to get better at: not choosing the easy way out and postponing to the next cycle when bugs are found late in the cycle.
* Calculator is resizable again. This is a somewhat controversial move (just like the one making it non-resizable); hopefully people will forgive me for it. For now it is freely resizable, with the history view (the view showing previous calculations) expanding vertically and the buttons remaining fixed-height. The problem is that both the history and the buttons area expand horizontally, and the buttons expanding horizontally can result in very wide buttons, which is not ideal. Thankfully, Allan Day already has mockups ready for how the calculator should resize, and Emmanuele Bassi has already built emeus, a constraint-based GTK container, because I haven't found a way to describe Allan's mockups in current GTK+ CSS terms.

System Monitor is not dead yet, and will not die anytime soon. That is a statement which was not clear until now, with Usage being in development. We had several discussions with the designers about how to make one application out of the two, merging them, but we agreed that the target audiences are probably different: Usage is for simple use cases, easy to use (and fairly beautiful, I have to admit), while System Monitor is for monitoring your system and your processes. Usage handles applications, with as few details as possible (e.g. network traffic, disk usage, CPU usage), while System Monitor monitors processes - their statuses, uptimes, cgroups, open files, etc. - for advanced users.
On the development front there are only a couple of changes worth mentioning there:
* Dark theme support for charts - if you use a dark theme, I'm sure you've already been blinded by the hardcoded white background of the Resources charts. Thanks to Voldemar Khramtsov this is fixed; please check the implementation with your themes and report bugs. I have experimented with ~15 different themes and made sure to have theme-based colors for the background and the grid of the charts, but it is really hard to make the non-theme-dependent colors of the charts visible on a theme-dependent background, and we might need some more tweaking. Ideas are welcome.
* Multiple-term search in the process list - you can filter the process list by multiple words, separated by ' ' or '|', e.g. "foo bar" or "foo|bar" to show only processes matching foo or bar. There was a discussion about making this search regexp-based, but I didn't see a use case for it. Let me know if you would use regexp filtering in the process tree, and explain why (a real use case), and I will reconsider the decision taken in the bug.

Other recent work:
* My first Meson port was swell-foop; as with everything I do, please check it, and if you see anything wrong, just let me know.
* I did some gettext migrations, mostly on games; most of them have already been merged and have already received some fixes (e.g. appdata was not installed, as I thought that removing the APPSTREAM_XML automake rules was part of the migration).
* I finally got to play a bit with flatpak and flatpak-builder, which greatly simplify building and distributing apps. I intend to do some more exercises with the games I maintain, as the games from the old GNOME Games suite (not to be confused with the new GNOME Games application for playing emulator games) are not present on Flathub.
* I rediscovered my old project eMines, written as an elementary minesweeper using py-gtk (almost 7 years ago), just pulled it and it is still working.

Uh-oh, and I almost forgot, I proposed a GSoC idea for modernizing Five-Or-More, aka GNOME lines a bit, as that game didn't get the quick make-over most other games received a couple of years ago. Let's see whether it will happen or not.
I feel like I have failed as a maintainer of GNOME modules: I have been busy lately with other tasks and could not really handle my maintainer duties and bugfixing. But it is November again, Bug Squash Month for GNOME. I will do my best to take the challenge and do the 5-a-day (5 bugs triaged per day) for GNOME this month.

Today I had a couple of comments and fixes on System Monitor and Calculator, and probably I will continue tomorrow on these two, and jump to the games afterwards. If you have any annoyances, would like me to prioritize certain bugs (preferably from libgtop, system-monitor, gnome-calculator, swell-foop, lightsoff, five-or-more, atomix, gnome-mines), just let me know, and I will do my best.

I needed a way to emit the notify signal of one of my objects from another place, and Vala didn’t show me a straightforward way to do it. If you need it for some reason, here’s a code snippet showing you how:

using GLib;

public class TestClass : GLib.Object {
	public string test1 {get;set;default = "test1";}
	//This one won't emit on assignment
	[CCode (notify = false)]
	public string test2 {get;set;default = "test2";}

	public static void main (string[] args) {
		var test = new TestClass ();
		test.notify["test1"].connect (() => GLib.print ("test1 notification\n"));
		test.notify["test2"].connect (() => GLib.print ("test2 notification\n"));
		test.test1 = "Ahoy";
		test.test2 = "Újale";
		test.test2 = "Ajúa";
		ParamSpec pspec = ((ObjectClass) typeof (TestClass).class_ref ()).find_property ("test2");
		GLib.print ("First try, will not work\n");
		test.notify (pspec);
		GLib.print ("Second one, this is how it works\n");
		test.notify["test2"] (pspec);
	}
}

Where I work, we often deal with large datasets from which we copy the relevant entries into program memory. However, doing so typically incurs very large memory usage, which can lead to memory-bound parallelism if multiple instances are launched.

Memory-bound parallelism arises when a system cannot execute more tasks due to a lack of available memory. It essentially wastes all the other available resources, such as CPU time.

To address this kind of issue, I’ll describe in this post a strategy using memory-mapped files and on-demand processing of a very common data format in bioinformatics: FASTA. The use case is pretty simple: we want to query small, arbitrary subsequences without having to precondition them in allocated memory.

About Virtual Memory Space

The virtual address space is large - very large. Think of all the address values a 64-bit pointer can take: that’s about 18 quintillion addressable bytes, which is enough to never be bothered by it.

Understandably, no computer can hold that much memory. Instead, the operating system partitions the virtual memory into pages and the physical memory into frames. It uses a cache algorithm and loads addressed pages into physical frames. Unused pages are stored on disk, in the available swap partitions, or compressed into physical memory if you use Zswap1.

The mmap2 system call establishes a correspondence between a file and pages in virtual memory. Addressing the memory where the file has been mapped results in the kernel fetching its content dynamically. Moreover, if multiple processes map the same file, the same frames (i.e. physical memory) will be used across all of them.

void * mmap (void *addr,
             size_t length,
             int prot,
             int flags,
             int fd,
             off_t offset);

Here addr hints the operating system at a memory location, length indicates the size of the mapping, prot indicates permissions on the region, flags holds various options, fd is a file descriptor, and offset is a byte offset into the file content. The returned value is the mapped address.

We can use this feature to our advantage by loading our data once and transparently sharing it across all instances of our program.

I’m using GLib, a portable C library, and the GMappedFile wrapper it provides, which carefully wraps mmap with reference counting.

g_autoptr (GMappedFile) fasta_map = g_mapped_file_new ("hg38.fa",
                                                       FALSE, /* not writable */
                                                       NULL); /* no GError */

Our Use Case

To be more specific, our use case only requires viewing small windows (~7 nucleotides) of the sequence at once. If we assume 80 nucleotides per line, we have 80 possible windows, of which 73 are free of newlines. The probability of a random subsequence of length 7 landing on a newline is thus approximately 8.75%.

For the great majority of cases, assuming uniformly distributed subsequence requests, we can simply return the address from the mapped memory.

From now on, we assume that the memory-mapped document has already been indexed by bookkeeping the beginning of each sequence, which can easily be done with memchr3. The sequence pointer points to the start of a sequence and sequence_len indicates its length before the next one.

To work efficiently, it is worth indexing the newlines. For this purpose, we use a GPtrArray, a simple pointer-array implementation, which we populate with the addresses of the newlines in the mapped buffer.

const gchar *sequence = "ACTG\nACTG";
gsize sequence_len    = 9;

g_autoptr (GPtrArray) sequence_skips =
    g_ptr_array_sized_new (sequence_len / 80); // line feed every 80 characters

const gchar *seq = sequence;
while ((seq = memchr (seq, '\n', sequence_len - (seq - sequence))))
{
    g_ptr_array_add (sequence_skips, (gpointer) seq);
    seq++; // jump right after the line feed
}

A newline can either precede, follow, or land within the subsequence.

  • all those preceding the desired subsequence shift it to the right
  • all those within the subsequence must be stripped
  • the remaining newlines can be safely ignored

If only the first or last condition applies, we’re in the 91.25% of cases where we can simply return the corresponding memory address.

gsize subsequence_offset = 1;
gsize subsequence_len = 7;

We first position our subsequence at its initial location.

const gchar *subsequence = sequence + subsequence_offset;

We need some bookkeeping for filling a fixed-width buffer in case a newline lands within our subsequence.

static gchar subsequence_buffer[64];
gsize subsequence_buffer_offset = 0;

Now, for each linefeed we’ve collected, we test our three conditions and either move the subsequence right or fill the static buffer.

The second condition requires some work. Using the indexed newlines, we basically trim the sequence into a static buffer that is returned. Although we lose thread safety working this way, this is mitigated by process-level parallelism.

guint i;
for (i = 0; i < sequence_skips->len; i++)
{
    const gchar *linefeed = g_ptr_array_index (sequence_skips, i);

    if (linefeed <= subsequence)
    {
        subsequence++; // move the subsequence right
    }
    else if (linefeed < subsequence + subsequence_len)
    {
        // length until the next linefeed
        gsize len_to_copy = linefeed - subsequence;

        memcpy (subsequence_buffer + subsequence_buffer_offset,
                subsequence,
                len_to_copy);

        subsequence_buffer_offset += len_to_copy;
        subsequence += len_to_copy + 1; // jump right after the linefeed
    }
    else
    {
        break; // linefeed supersedes the subsequence
    }
}

Lastly, we check whether we’ve used the static buffer, in which case we copy any trailing sequence.

if (subsequence_buffer_offset > 0)
{
    if (subsequence_buffer_offset < subsequence_len)
    {
        memcpy (subsequence_buffer + subsequence_buffer_offset,
                subsequence,
                subsequence_len - subsequence_buffer_offset);
    }

    return subsequence_buffer;
}

return subsequence;

It’s possible to use a binary search strategy to obtain the range of newlines affecting the position of the requested subsequence, but since the number of newlines is considerably small, I ignored this optimization so far.
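As a sketch of that optimization (a hypothetical helper in plain C, without GLib), a lower-bound binary search over the sorted newline offsets yields the number of newlines preceding a given position:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: count how many newline offsets (sorted ascending)
 * fall strictly before position `pos`, using a lower-bound binary search. */
static size_t
newlines_before (const size_t *offsets, size_t n, size_t pos)
{
    size_t lo = 0, hi = n;
    while (lo < hi)
    {
        size_t mid = lo + (hi - lo) / 2;
        if (offsets[mid] < pos)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo; // number of offsets < pos
}
```

The same search, run at both ends of the requested range, would bound the newlines that can possibly affect the subsequence.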

Here we are with our zero-copy FASTA parser that efficiently looks up small subsequences.
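For reference, the end-to-end behaviour can be condensed into a plain C sketch (a hypothetical helper with no GLib, and a linear scan instead of the newline index, so it is not zero-copy):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical condensed version: copy `len` characters of `seq` starting
 * at logical offset `off` (newlines excluded from the logical coordinates)
 * into `buf`, NUL-terminating it. */
static const char *
subsequence (const char *seq, size_t seq_len, size_t off, size_t len,
             char *buf)
{
    size_t copied = 0;
    size_t logical = 0;
    for (size_t i = 0; i < seq_len && copied < len; i++)
    {
        if (seq[i] == '\n')
            continue; // newlines never count toward the subsequence
        if (logical++ >= off)
            buf[copied++] = seq[i];
    }
    buf[copied] = '\0';
    return buf;
}
```

With the post’s example ("ACTG\nACTG", offset 1, length 7), this yields "CTGACTG".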

P.S.: This technique has been used for the C rewrite of miRBooking I’ve been working on these past weeks.

The rewrite in Vala using Valum has been completed and should eventually be deployed by the elementary OS team (see pull #40). There’s some interesting stuff there too:

  • experimental search API using JSON via the /search endpoint
  • GLruCache now has Vala bindings and an improved API
  • an eventual GMysql wrapper around the C client API if extracting the classes I wrote is worth it

In the meantime, you can test it at and report any regression on the pull-request.

Valum 0.3 has been patched and improved while I have been working on the 0.4 feature set. There’s a work-in-progress WebSocket middleware, and VSGI 1.0 and PyGObject support are planned.

If everything goes as planned, I should finish the AJP backend and maybe consider Lwan.

On top of that, there’s Windows support coming, although the most difficult part is testing it. I might need some help setting up the AppVeyor CI.

I’m aware of the harsh discussions about the state of Vala and whether it will just end in an abysmal void. I would advocate inertia here: the current state of the language still makes it an excellent candidate for writing GNOME-related software, and this is not expected to change.

The first release candidate for Valum 0.3 has been launched today!

Get it, test it and be the first to find a bug! The final release will come shortly after along with various Linux distributions packages.

This post reviews the changes and features introduced since 0.2. There’s been a lot of work, so take a comfortable seat and brew yourself a strong coffee.

The most significant change has probably been the introduction of Meson as a build system and all the new deployment strategy it now makes possible.

If you prefer avoiding a full install, it’s now possible to use it as a subproject. Subprojects are defined as subdirectories of subprojects, which you can conveniently track using git submodules.

project('', 'c', 'vala')

glib = dependency('glib-2.0')
gobject = dependency('gobject-2.0')
gio = dependency('gio-2.0')
soup = dependency('libsoup-2.4')
vsgi = subproject('valum').get_variable('vsgi')
valum = subproject('valum').get_variable('valum')

executable('app', 'app.vala',
           dependencies: [glib, gobject, gio, soup, vsgi, valum])

Once installed, however, all that is needed is to pass --pkg=valum-0.3 to the Vala compiler.

vala --pkg=valum-0.3 app.vala

In app.vala,

using Valum;
using VSGI;

public int main (string[] args) {
    var app = new Router ();

    app.get ("/", (req, res) => {
        return res.expand_utf8 ("Hello world!");
    });

    return Server.@new ("http", handler: app)
                 .run (args);
}
There’s been a lot of new features and I hope I won’t miss any!

There’s a new url_for utility in Router that comes with named routes. It basically allows one to reverse URL patterns defined with rules and raw paths.

All that is needed is to pass a name to rule, path or any method helper.

I discovered the : notation for named variadic arguments, when they alternate between strings and values. This is typically used to initialize GLib.Object.

using Valum;
using VSGI;

var app = new Router ();

app.get ("/", (req, res) => {
    return res.expand_utf8 ("<a href=\"%s\">View profile of %s</a>".printf (
        app.url_for ("user", id: "5"), "John Doe"));
});
app.get ("/users/<int:id>", (req, res, next, ctx) => {
    var id = ctx["id"].get_string ();
    return res.expand_utf8 ("Hello %s!".printf (id));
}, "user");

In Router, we also have:

  • asterisk to handle * URI
  • once for performing initialization
  • path for a path-based route
  • rule to replace method
  • register_type rather than a GLib.HashTable<string, Regex>

Another significant change is that the previous state stack has been replaced by a context tree with recursive key resolution. It pretty much maps strings to GLib.Value in a non-destructive way.
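A toy model of such recursive resolution (plain C with hypothetical types; the real implementation maps strings to GLib.Value) could look like this: a lookup first checks the current context, then climbs toward the root.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct Context Context;

/* Hypothetical context node: a few string entries and a parent pointer. */
struct Context
{
    const char *keys[4];
    const char *values[4];
    size_t      len;
    Context    *parent;
};

/* Resolve `key` in `ctx`, falling back to ancestors when it is missing. */
static const char *
context_lookup (const Context *ctx, const char *key)
{
    for (; ctx != NULL; ctx = ctx->parent)
    {
        for (size_t i = 0; i < ctx->len; i++)
            if (strcmp (ctx->keys[i], key) == 0)
                return ctx->values[i];
    }
    return NULL; // not found anywhere up the tree
}
```

Writes only ever touch the current node, which is what makes the resolution non-destructive for parent contexts.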

In terms of new middlewares, you’ll be glad to see all the built-in functionalities we now support:

  • authentication with support for the Basic scheme via authenticate
  • content negotiation via negotiate, accept and more!
  • static resource delivery from GLib.File and GLib.Resource bundles
  • basic to strip the Router responsibilities
  • subdomain
  • basepath to prefix URLs
  • cache_control to set the Cache-Control header
  • branch on raised status codes
  • perform work safely and don’t let any error leak!
  • stream events with stream_events

Now, which one to cover?

The basepath middleware is my personal favourite, because it allows one to create prefix-agnostic routers.

var app = new Router ();
var api = new Router ();

// matches '/api/v1/'
api.get ("/", (req, res) => {
    return res.expand_utf8 ("Hello world!");
});

app.use (basepath ("/api/v1", api.handle));

The only missing feature is to retranslate URLs directly from the body. I think we could use some GLib.Converter here.

The negotiate middleware and its derivatives are really handy for declaring the available representations of a resource.

app.get ("/", accept ("text/html; text/plain", (req, res, next, ctx, ct) => {
    switch (ct) {
        case "text/html":
            return res.expand_utf8 ("");
        case "text/plain":
            return res.expand_utf8 ("Hello world!");
        default:
            assert_not_reached ();
    }
}));
There’s a lot of stuff happening in each of them so refer to the docs!

A quick review of Request and Response: we now have the following helpers:

  • lookup_query to fetch a query item and deal with its null case
  • lookup_cookie and lookup_signed_cookie to fetch a cookie
  • cookies to get cookies from a request and response
  • convert to apply a GLib.Converter
  • append to append a chunk into the response body
  • expand to write a buffer into the response body
  • expand_stream to pipe a stream
  • expand_file to pipe a file
  • end to end a response properly
  • tee to tee the response body into an additional stream

All the utilities for writing the body come in _bytes and _utf8 variants. The latter properly sets the content charset when applicable.

Back to Server, implementations have been modularized with GLib.Module and are now dynamically loaded. What used to be a VSGI.<server> namespace has now become simply Server.@new ("<name>"). Implementations are installed in ${prefix}/${libdir}/vsgi-0.3/servers, which can be overridden with the VSGI_SERVER_PATH environment variable.

The VSGI specification is not yet 1.0, so please, don’t write a custom server for now or if you do so, please submit it for inclusion. There’s some work-in-progress for Lwan and AJP as I speak if you have some time to spend.

Options have been moved into GLib.Object properties and a new listen API based on GLib.SocketAddress makes it more convenient than ever.

using VSGI;

var tls_cert = new TlsCertificate.from_files ("localhost.cert",
                                              "localhost.key");
var http_server = Server.@new ("http", https: true,
                               tls_certificate: tls_cert);

http_server.set_application_callback ((req, res) => {
    return res.expand_utf8 ("Hello world!");
});

http_server.listen (new InetSocketAddress (new InetAddress.loopback (SocketFamily.IPV4), 3003));

new MainLoop ().run ();

The GLib.Application code has been extracted into the new VSGI.Application cushion used when calling run. It parses the CLI, sets up the logger and turns SIGTERM into a graceful shutdown.

Server can also fork to scale on multicore architectures. I’ve backtracked on the Worker class for dealing with IPC communication, but if anyone is interested in building a nice clustering system, I would be glad to look into it.

That wraps it up, the rest can be discovered in the updated docs. The API docs should be available shortly via

I managed to cover this exhaustively with abidiff, a really nice tool to diff two ELF files.

Long-term notes

Here’s some long-term notes for things I couldn’t put into this release or that I plan at a much longer term.

  • multipart streams
  • digest authentication
  • async delegates
  • epoll and kqueue with wip/pollcore
  • schedule future release with the GNOME project
  • GIR introspection and typelibs for PyGObject and Gjs

The GIR and typelibs stuff might not be suitable for Valum, but VSGI could have a bright future with Python or JavaScript bindings.

Coming releases will be much less time-consuming, as the big step needed to get something actually usable has now been made. Maybe every 6 months or so.

Actually, this post should have been a part (the last one) of my "Need for PC" blog posts (1, 2, 3), but it also deserves a separate blog post of its own.

So, I have a fresh install on a fresh PC, what do I do next (and why)? Here's a list of the GNOME Shell extensions I use, and the (highly opinionated) motivation for me using them.
• dash to dock - I need an always-visible or intelligent-autohide icon-only window list to be able to see my open windows all the time, and to launch my favorites. That is an old habit of mine, but I simply can't live without it. I usually set dash to dock to expand vertically on the left side, as I come from a Unity world, and this made the transition easier, but with the settings available you can make yourself comfortable even if you're transitioning from MacOS X or Windows 7-8-10, with a couple of clicks.
• alternatetab - I need window switching with Alt-Tab to any of my running application windows. I don't want to think about "which workspace is this window on?" or "do I want to switch to another instance of the same app or another app?". It helps me tidy up my window list from time to time, and keeps me productive. coverflow alt-tab is another option here, for people who like eye-candy; for me the animations and reflections are a bit too much, but if you like that, it's also a good replacement for the default tabbing behaviour.
• applications menu - I rarely use it, as I mostly got used to searching for apps in GNOME Shell, but the Activities button is not for me: I access that using the META key, and removing the Activities button leaves an empty space in the top left corner. It's the perfect place for a "Start menu". The applications menu is a good option, installed by default for the GNOME classic session, but if you need a more complex menu, with search, recents, web bookmarks, places, and a lot more (resembling the Start menu, but without ads ;) ), then gno-menu is the way to go.
• pump up/down the volume - I think this habit of mine also comes from Unity: I middle-click the sound icon to mute, and I would also like to see visual feedback when adjusting my volume by scrolling over the sound icon. A small tooltip, which I have to stare at to read, doesn't count here. Better volume indicator does exactly what I need, no less, no more. Just perfect. I just wish it were the default GNOME Shell behaviour.
• selecting sound output device - I usually have multiple possible output devices (speakers and headphones) and multiple possible input devices (webcam microphone, jack microphone, etc), and I need to switch between these: switch to speakers/headphones fast, or switch the microphone when receiving a call. Opening the sound settings and selecting the input and output devices would take too much time, but "there's an app for that" (understand: extension), called Sound output device chooser, which can also choose the sound input device, and it's nicely integrated with the sound menu. Perfect for the job.
• monitoring the system - information at a glance about my computer, CPU usage. I prefer to have a chart in the top bar, so there's only one option. This plugin has lots of settings, the preferences are kind of chaotic, but once you set it up, it just works. I only have a 200 px wide CPU chart in my top bar; that's all I need to see if something is misbehaving (firefox/flash/gnome-shell/some others happen to use 50%+ CPU just because they can).
• tray area - although tray icons were "deprecated" quite some time ago, some applications can not/will not forget them. The most notable ones are Skype and Dropbox. The fallback notification area (bottom left corner) kind of conflicts with my left-side expanded dash to dock extension, so I use topIcons plus to move the icons back to the right corner.
• top bar dropdown arrows - with Application menu/Gno-Menu, an application and a keyboard layout switcher, the number of small triangles eating up space in the top bar goes up to 4. I understand that I have to know that the menu, the application name (appmenu), the keyboard layout switcher and the power/sound/network menu are clickable and will expand on click, but the triangles are too much. So, I remove the dropdown arrows.

These tend to be the most important ones. A short list of other extensions I use, but which are not a bare necessity:

• Freon - for keeping an eye on the temperatures/fan speeds of your PC
• Switcher - keyboard-only application launcher/switcher
• Dynamic panel transparency - for making the top bar transparent without full-screen apps, but making it solid if an app is maximized. Eye-candy, but looks nice (ssssht, secret - it might become the default behavior). It would be even nicer if it could also affect dash to dock.

With these tweaks, I can use GNOME Shell and be fairly productive. How about you? Which extensions are you using? What would you change in GNOME Shell?
As promised, after a long wait, here's some details about the operating system and software I have installed from day-0. This is a shortlist I usually install on each of my computers, so I will also provide a short why for each bullet.

A side-note: although I tend to use the command-line a lot, the setup contains (only) a single cut-and-paste terminal command, the rest is entirely done using the

1. Base system: Fedora (latest release of Workstation - 24 at installation time).
   Reasons for choosing Fedora:
   • user-friendly and developer-friendly
   • includes the latest stable GNOME stack - contains the latest bugfixes and features - relevant from both a user and a GNOME developer perspective
   • most developer tools I use are bundled by default
2. Fedy: a simple tool for configuring Fedora and installing the proprietary software I need to use.
   The items I always install from Fedy:
   • Archive formats - support for RAR and the likes, not installed by default
   • Multimedia codecs - support for audio and video formats, MP3 and the likes
   • Steam - for the child inside me
   • Adobe Flash - I wish this wasn't necessary, but sometimes it is
   • Better font rendering - this could also be default, and may become obsolete in the near future
   • Disk I/O scheduler - advertised as a performance boost for SSDs and HDDs
3. Media players
   • Kodi - the media player I install on all my devices, be it tablet, PC, laptop, or Raspberry PI - extensible, supports library management, sharing on the local network, remote control, and an "Ambilight" clone for driving RGB LEDs behind my TV
   • VLC - for one-shot video playback - Kodi is the best, but too heavy for basic video playback
   • Audacious - for one-shot audio playback and playing sets of songs - I grew up with WinAmp, and Audacious has support for Classic WinAmp skins, but also a standard GTK interface
4. Graphics
   • GIMP - photo editing and post-processing
   • Inkscape - vector graphics editor
   • Inkscape-sozi - extension for Inkscape presentation editing - whenever I need a good presentation, I create a vector-graphics presentation with inkscape+sozi, because it's so much better than a plain libreoffice (powerpoint) presentation - more like prezi

With these installed, my system is ready to be used. Time for tweaking the user interface a bit, so next up is customizing GNOME Shell with extensions.
As promised, back with the final build pictures of the PowerMac G5 ATX mod, as the PC is already complete and working. Actually, I have built the GNOME 3.22.0 release tarballs for several modules using this machine (and have tested building other stuff, plus a bit of gaming, to check the temperatures - they are OK). Measured power consumption is almost all the time (even with all cores at 100%) below what my old PC used while idling (this one idles at ~35W and stays at 65-70W under load or in-game).

With every component mounted
Intake fans in front
Rear exhaust fans
CPU cover mounted
Plastic cover in-place, before mounting the sidepanel

In a future post, I'll summarize the software setup, including the GNOME Shell extensions I can't live without, of course with some screenshots.

I discovered Meson a couple of years back and have since used it for most of my projects written in Vala. This post is an attempt at describing the good, bad and ugly of the build system.

So, what is Meson?

• a build system
• portable (see Python portability)
• a Ninja generator
• use case oriented
• fast
• opinionated

What is it not?

• a general purpose build system
• a Turing-complete language
• extensible (only in Python)

It handles 80% of the cases nicely and elegantly.

Since it is use case oriented, features are introduced on need. It keeps a tight balance between conciseness, generality and features.

It mixes the configure and build steps so that the build essentially becomes one big tree. The build system then determines what goes into the configuration and what goes into the build.
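For instance, a value determined at configure time can be propagated into the build tree via `configuration_data` and `configure_file`. A hypothetical fragment, assuming a `config.vala.in` template:

```meson
conf = configuration_data()
conf.set('VERSION', '0.3')

# generated into the build tree during the configure step
configure_file(input: 'config.vala.in',
               output: 'config.vala',
               configuration: conf)
```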

The cognitive load is very low, which means it’s very easy to learn the basics and make actual use of it. This is critical, because the time spent setting up the build hardly contributes to the project goal.

The following is a basic build that checks for dependencies (using pkg-config) and builds an executable:

project('Meson Example', 'c', 'vala')

glib = dependency('glib-2.0')
gobject = dependency('gobject-2.0')

executable('app', 'app.vala', dependencies: [glib, gobject])

Building becomes a piece of cake:

mkdir build && cd build
meson ..
ninja

Only a few keywords are sufficient for most builds:

• executable
• library, with shared_library and static_library
• dependency
• declare_dependency

Benchmarks and tests are built in: just pass the executable to either benchmark or test.
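A hypothetical fragment, assuming a `tests/app_test.vala` source and the dependency objects from the example above:

```meson
test_exe = executable('app-test', 'tests/app_test.vala',
                      dependencies: [glib, gobject])

test('app unit tests', test_exe)
benchmark('app benchmark', test_exe)
```

Running `ninja test` (or `mesontest` on newer versions) then executes the registered tests.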

The main downside is that if what you want to do is not supported, you either have to hack things around or wait until the feature gets into the build system.

The system is very opinionated. That’s both a good and a bad thing: good, since you don’t need to write a lot to get most jobs done; bad, because you might eventually hit a wall.

There’s also the Python question: it requires at least Python 3.4. This is becoming less of a problem as old distributions progressively die out, but it can still hold you back today. Here are a few ideas to remedy this:

• build a dependency-free zipball (see issue #588)
• backport Meson to older Python versions

Meson is getting better over time and so far has managed to become the best build system for Vala. This is why I highly recommend it.

When I say everything torn apart, I mean it

Preparing the case

Choosing a non-mATX-compatible case to start with gave me major headaches, but simply put, I found no mATX case with a similar look. I had to work quite a bit to make the G5 case work with an mATX motherboard.

During shipping, as is usual for these computers, the outer case stands had been bent, resulting in a less pleasant look. To fix this, I had to rip the whole thing apart, meaning taking out the inner case to be able to "bend" the outer case stands back into their original position.

I did not expect to have to do this, but as I already had the case torn apart, I decided to apply a new paint job. It is not perfect, but it's OK for me: the outer case, with grey base-paint and metallic grey paint applied over it, looks similar to the original (except for the Apple logo being mostly gone). The inner case was painted matt black, and it looks fine. However, when mounting the inner case back into the outer case, the black paint fell off in some places, so I had to reapply it.

I also had to cut the back IO plate as close to the side as possible to fit an mATX IO plate: the mATX standard specifies roughly 45x158 mm, but the standard G5 backplate is somewhere around 40x190 mm.

G5 PSU internals replaced

Modding the PSU

• Remove the PSU internals
• Get an ATX PSU with a 120mm fan on top (in my case a Seasonic SS330HB)
• Disassemble it completely (remove the cooling fan from the top and the case)
• Mount the internals of the power supply in the G5 power supply case
• Create or buy a longer cable with a Y-splitter with 2-pin male plugs for the fans
• Mount the new 60mm fans (I have used Scythe Mini Kaze 60mm)
• The resulting PSU
• Assemble the whole thing again

Preparing the mATX motherboard mount

• Use an old mATX motherboard as a template
• Break the mounts standing in the way of the motherboard
• Mark the mounting holes
• Use (a part of) the original cable organizer for the SATA power cable going to the HDD cage/optical drive
• Mount the old mATX motherboard with glue applied to the stands, so that they stick to the case (I did not go with the new one at first, as I had to push hard for the stand-offs to stick, and I did not want to damage the new one)
• Test the wiring of the power button and the power LED with the old mATX motherboard (I used a different LED, a red one, to match the motherboard LEDs)
• Wire up USB and audio
• Remove the old mATX motherboard
• Mount the new mATX motherboard in place

The complete PC part list for the build is:

PCPartPicker part list / Price breakdown by merchant

• CPU: Intel Core i7-6700T 2.8GHz Quad-Core OEM/Tray Processor - $366.42
• CPU Cooler: ARCTIC Alpine 11 Plus Fluid Dynamic Bearing CPU Cooler - $12.17
• Motherboard: MSI B150M MORTAR Micro ATX LGA1151 Motherboard - $85.70
• Memory: Kingston HyperX Fury Black 16GB (2 x 8GB) DDR4-2133 Memory - $89.88
• Storage: Kingston SSDNow V300 Series 120GB 2.5" Solid State Drive - $50.00
• Storage: Toshiba 1TB 3.5" 7200RPM Internal Hard Drive - $50.00
• Video Card: XFX Radeon HD 4550 1GB Video Card - $25.00
• Case Fan: ARCTIC Arctic F8 PWM 31.0 CFM 80mm Fan - $4.30
• Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.17
• Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.17
• Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.50
• Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.50
• Other: PowerMac G5 - $25

Prices include shipping, taxes, rebates, and discounts.
Total: $725.81
Generated by PCPartPicker 2016-08-11 09:17 EDT-0400

Bill of additional materials used so far:

• 1x gray basepaint - $6
• 1x matt black paint - $6
• 1x metallic silver paint - $3
• 1x matt black paint - $3
• motherboard template - $2.5
• motherboard stands - ~$2.5
• power supply - ~$23
• 2x Scythe Mini Kaze fans for the PSU - $14
• 1x Bracket adapter 2x2.5 HDD/SSD to 3.5 bay for mounting SSD - $4

I realized quite some time ago that my PC was struggling to keep up with the pace, so I decided it was time for an upgrade (after almost 6 years with my Dell Inspiron 560 minitower with a C2D Q8300 quad-core).

I "upgraded" the video card a couple of months ago because the old one did not support the OpenGL 3.2 needed by GtkGLArea. First I went with an ATI Radeon HD6770 I received from my gamer brother, but it was loud, and given its high cost (108W TDP, which bumped the consumption of the idle PC by 30-40W, from 70-80W to 110-120W) I did not use it as much as it was worth. So I traded it for another one: a low-consumption (passively cooled, 25W TDP) ATI Radeon HD4550, working well with Linux and all my Steam games whenever I am gaming (casual gamer). Consumption went back to 90-100W.

After that came the power supply: I replaced the Dell-provided 300W supply with a more efficient one, a 330W Seasonic SS330HB. This resulted in another 20W drop in power consumption, idling below 70W.

The processor is fairly old, with a 95W TDP but performance way below today's i7 processors of the same TDP, so it is worth upgrading. That means a motherboard + CPU + cooler + memory upgrade; as I have the rest of the components, I will reuse them, and add a new (old) case to the equation: a PowerMac G5 from around 2004.

So here's the basic plan:

• Case - PowerMac G5 modded for mATX compatibility, and repainted - metallic silver for the outer case, matt black for the inner case - inspired by Mike 7060's G5 Mod
• CPU - Intel Core i7 6700T - 35W TDP
• Cooler - Arctic Alpine 11 Plus - silent, bigger brother of the fanless Arctic Alpine 11 Passive (rated for up to 35W TDP; the i7 6700T is right at the edge, and I did not want to risk it)
• Motherboard - 1151 socket, DDR4, USB3, 4-pin CPU and case fan controller sockets, HDMI and DVI video outs being the requirements - I chose the MSI B150M Mortar because of guaranteed Linux compatibility (thanks Phoronix) and its 2 onboard PWM case fan controllers + PWM-controlled CPU fan
• Memory - 2x8GB DDR4 kit - Kingston HyperX Fury
• PSU - Seasonic SS-330HB mounted inside the G5 PSU case, the original G5 PSU fans replaced with 2x 60mm Scythe Mini Kaze for silent operation
• Case cooling - Front 2x 92mm Arctic F9 PWM PST in the original mounts
• Video card - onboard Intel, or optionally the ATI Radeon HD4550 if (probably will not happen) the onboard one is not enough
• Optical drive (not sure if it is required) - start with the existing DVD-RW drive
• Storage - 120 GB Kingston V300 + 1TB HDD - existing

Plans for later:

• (later/optional) update the optical drive to a Blu-Ray drive
• (later/optional) Arctic F9 PWM PST in the original G5 intake mounts, or 120 mm Arctic F12 PWM PST in new intake mounts

I'll soon be back with details on preparing the case, probably the hardest part of the whole build. The new parts are already ordered (the CPU was pretty hard to find in stock, and will be delivered in a week or so instead of the usual 1-2 days).

Valum now supports dynamically loadable server implementations with GModule!

Servers are typically looked up in /usr/lib64/vsgi/servers with the libvsgi-<name>.so pattern (although this is highly system-dependent).

This works by setting the RPATH of the VSGI shared library to $ORIGIN/vsgi/servers, so that it looks into that folder first.

The VSGI_SERVER_PATH environment variable can be set as well to explicitly provide a directory containing implementations.

To implement a compliant VSGI server, all you need is a server_init symbol complying with the ServerInitFunc delegate, like the following:

public Type server_init (TypeModule type_module) {
    return typeof (VSGI.Custom.Server);
}

public class VSGI.Custom.Server : VSGI.Server {
    // ...
}

It has to return a type that is derived from VSGI.Server and instantiable with GLib.Object.new. The Vala compiler will automatically generate the code that registers the class and interfaces into the type_module parameter.

Some code from CGI has been moved into VSGI to provide uniform handling of its environment variables. If the protocol you want to support complies with that, just subclass (or directly use) VSGI.CGI.Request and it will perform all the required initialization.

public class VSGI.Custom.Request : VSGI.CGI.Request {
    public Request (IOStream connection, string[] environment) {
        base (connection, environment);
    }
}

For more flexibility, servers can be loaded with ServerModule directly, allowing one to specify an explicit lookup directory and to control when the module should be loaded or unloaded.

var cgi_module = new ServerModule (null, "cgi");

if (!cgi_module.load ()) {
    assert_not_reached ();
}

var server = Object.@new (cgi_module.server_type);

I received very useful support from Nirbheek Chauhan and Tim-Philipp Müller in setting up the necessary build configuration for that feature.

                I recently finished and merged support for content negotiation.

                The implementation is really simple: one provide a header, a string describing expecations and a callback invoked with the negotiated representation. If no expectation is met, a 406 Not Acceptable is raised.

                app.get ("/", negotiate ("Accept", "text/xml; application/json",
                                         (req, res, next, ctx, content_type) => {
                    // produce according to 'content_type'
                }));

                Content negotiation is a nice feature of the HTTP protocol allowing a client and a server to negotiate the representation (eg. content type, language, encoding) of a resource.

                One very nice part allows the user agent to state a preference and the server to express a quality for a given representation. This is done by specifying the q parameter, and the negotiation process attempts to maximize the product of both values.

                The following example expresses that the XML version is of poor quality, which is typically the case when it’s not the source document. JSON would be favoured (implicitly q=1) if the client does not state any particular preference.

                accept ("text/xml; q=0.1, application/json", () => {
                    // ...
                });
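
                As a rough sketch of that maximization, outside of Valum entirely: given the client preferences from the header above and hypothetical server-side quality values, the negotiated representation is the one whose product of both q values is highest.

                ```c
                #include <stdio.h>

                struct rep { const char *type; double client_q; double server_q; };

                int main (void) {
                    /* Client header: "text/xml; q=0.1, application/json" (implicit q=1).
                       The server-side qualities below are hypothetical. */
                    struct rep reps[] = {
                        { "text/xml",         0.1, 1.0 },
                        { "application/json", 1.0, 0.9 },
                    };

                    /* Pick the representation maximizing client_q * server_q. */
                    struct rep *best = &reps[0];
                    for (int i = 1; i < 2; i++)
                        if (reps[i].client_q * reps[i].server_q >
                            best->client_q * best->server_q)
                            best = &reps[i];

                    printf ("%s\n", best->type);
                    return 0;
                }
                ```

                Here application/json wins with a score of 0.9 against 0.1 for the XML version, matching the intent of the header.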

                Mounted as a top-level middleware, it provides a nice way of setting a Content-Type: text/html; charset=UTF-8 header and filtering out non-compliant clients.

                using Tmpl;
                using Valum;

                var app = new Router ();

                app.use (accept ("text/html", () => {
                    return next ();
                }));

                app.use (accept_charset ("UTF-8", () => {
                    return next ();
                }));

                var home = new Template.from_path ("templates/home.html");

                app.get ("/", (req, res) => {
                    home.expand (res.body, null);
                });

                This is another step toward the 0.3 release!

                Ever heard of fork?

                using GLib;
                using VSGI.HTTP;

                var server = new Server ("", (req, res) => {
                    return res.expand_utf8 ("Hello world!");
                });

                server.listen (new VariantDict ().end ());
                server.fork ();

                new MainLoop ().run ();

                Yeah, there’s a new API for listening and forking with custom options…

                The fork system call will actually copy the whole process into a new process, running the exact same program.

                Although memory is not shared, file descriptors are, so you can have workers listening on common interfaces.
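
                A minimal POSIX sketch of that property (plain C, not Valum code): a listening socket created before fork() is inherited by the child, so the worker can accept connections on the descriptor the parent opened. The port is chosen by the kernel here; any names below are illustrative.

                ```c
                #include <arpa/inet.h>
                #include <netinet/in.h>
                #include <stdio.h>
                #include <sys/socket.h>
                #include <sys/wait.h>
                #include <unistd.h>

                int main (void) {
                    /* Create the listening socket before forking: the
                       descriptor is shared with every worker. */
                    int lfd = socket (AF_INET, SOCK_STREAM, 0);
                    struct sockaddr_in addr = {0};
                    addr.sin_family      = AF_INET;
                    addr.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
                    addr.sin_port        = 0;  /* kernel picks a free port */
                    bind (lfd, (struct sockaddr *) &addr, sizeof addr);
                    listen (lfd, 10);

                    socklen_t len = sizeof addr;
                    getsockname (lfd, (struct sockaddr *) &addr, &len);

                    pid_t pid = fork ();
                    if (pid == 0) {
                        /* Worker: accepts on the inherited descriptor. */
                        int cfd = accept (lfd, NULL, NULL);
                        const char msg[] = "Hello world!";
                        write (cfd, msg, sizeof msg - 1);
                        close (cfd);
                        _exit (0);
                    }

                    /* Parent: acts as a client against the worker. */
                    int s = socket (AF_INET, SOCK_STREAM, 0);
                    connect (s, (struct sockaddr *) &addr, sizeof addr);
                    char buf[64] = {0};
                    read (s, buf, sizeof buf - 1);
                    printf ("%s\n", buf);
                    close (s);
                    waitpid (pid, NULL, 0);
                    return 0;
                }
                ```

                The same mechanism, repeated for 63 forks, is what lets all 64 workers serve the same interface.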

                I notably tested the whole thing on our cluster at IRIC. It’s a 64-core Xeon setup.

                wrk -c 1024 -t 32

                With a single worker:

                Running 10s test @
                  32 threads and 1024 connections
                  Thread Stats   Avg      Stdev     Max   +/- Stdev
                    Latency    54.35ms   95.96ms   1.93s    98.78%
                    Req/Sec   165.81    228.28     2.04k    86.08%
                  41741 requests in 10.10s, 5.89MB read
                  Socket errors: connect 35, read 0, write 0, timeout 13
                Requests/sec:   4132.53
                Transfer/sec:    597.28KB

                With 63 forks (64 workers):

                Running 10s test @
                  32 threads and 1024 connections
                  Thread Stats   Avg      Stdev     Max   +/- Stdev
                    Latency    60.83ms  210.70ms   2.00s    93.58%
                    Req/Sec     2.99k   797.97     7.44k    70.33%
                  956577 requests in 10.10s, 135.02MB read
                  Socket errors: connect 35, read 0, write 0, timeout 17
                Requests/sec:  94720.20
                Transfer/sec:     13.37MB

                It’s about 1500 req/sec per worker and a speedup by a factor of 23. The latency is almost unaffected.

                The past few days, I’ve been working on a really nice libmemcached GLib wrapper.

                • main loop integration
                • fully asynchronous API
                • error handling

                The whole code is available under the LGPLv3 from arteymix/libmemcached-glib.

                It should reach 1.0 very quickly; only a few features are missing:

                • a couple of function wrappers
                • integration for libmemcachedutil
                • async I/O improvements

                Once released, it might be interesting to build a GTK UI for Memcached upon that work. Meanwhile, it will be a very useful tool to build fast web applications with Valum.

                This post describes a feature I will attempt to implement this summer.

                Declaring an async delegate simply extends a traditional delegate declaration with the async trait.

                public async delegate void AsyncDelegate (GLib.OutputStream @out);

                The syntax for the callback is the same. It’s not necessary to add anything, since the async trait is inferred from the type of the variable holding it.

                AsyncDelegate d = (@out) => {
                    yield @out.write_all_async ("Hello world!".data, null);
                };

                Just like regular callbacks, asynchronous callbacks are first-class citizens.

                public async void test_async (AsyncDelegate callback,
                                              OutputStream  @out) {
                    yield callback (@out);
                }

                It’s also possible to pass an asynchronous function which is type-compatible with the delegate signature:

                public async void hello_world_async (OutputStream @out) {
                    yield @out.write_all_async ("Hello world!".data);
                }

                yield test_async (hello_world_async, @out);


                I still need to figure out how to handle chaining for async lambdas. Here are a few ideas:

                • refer to the callback using this (weird..)
                • introduce a callback keyword
                AsyncDelegate d = (@out) => {
                    Idle.add (this.callback);
                };

                AsyncDelegate d = (@out) => {
                    Idle.add (callback);
                };

                How it would end up in Valum

                Most of the framework could be revamped with the async trait in ApplicationCallback, HandlerCallback and NextCallback.

                app.@get ("/me", (req, res, next) => {
                    if (req.lookup_signed_cookies ("session") == null) {
                        return yield next (req, res);
                    }
                    return yield res.extend_utf8_async ("Hello world!".data);
                });

                The semantics of the return value would simply state whether the request has been handled, instead of whether it will eventually be handled.

                As you might already know, GNOME 3.20 has been released, with a number of improvements, fixes, future-proofing changes, and preparations for Wayland prime time.

                Here's a short list of my favourite features from Delhi:
                • Files search improvements (see here)
                • Photos has basic photo editing support - crop and filters (see here)
                • Control center mouse panel revamped (see here)
                • Keyboard shortcuts window for some apps (see here) - although I have not managed to do this for any of the apps I maintain, I plan to do it for 3.22, as I consider it a useful feature in the sea of keyboard shortcuts

                I will shortly summarize what happened in some of the games from GNOME:
                • Mines got keyboard navigation updates and fixes, thanks to Isaac Lenton
                • Atomix 
                  • has a gameplay tip starting window
                  • has updated artwork to match the GNOME 3 world, thanks to Jakub Steiner
                • Five or more got a new hires icon, thanks to Jakub Steiner
                All in all, congrats for everyone contributing to GNOME 3.20, keep up the good work.

                  I have recently introduced a basepath middleware and I thought it would be relevant to describe it further.

                  It’s been possible for a while to compose routers using subrouting. This is very important for writing modular applications.

                  var app = new Router ();
                  var user = new Router ();

                  user.get ("/user/<int:id>", (req, res, next, ctx) => {
                      var id = ctx["id"] as string;
                      var user = new User.from_id (id);
                      res.extend_utf8 ("Welcome %s".printf (user.username));
                  });

                  app.rule ("/user", user.handle);

                  Now, using basepath, it’s possible to design the user router without specifying the /user prefix on rules.

                  This is very important, because we want to be able to design the user router as if it were the root and rebase it on need upon any prefix.

                  var app = new Router ();
                  var user = new Router ();

                  user.get ("/<int:id>", (req, res, next, ctx) => {
                      res.extend_utf8 ("Welcome %s".printf (ctx["id"].get_string ()));
                  });

                  app.use (basepath ("/user", user.handle));

                  How it works

                  When passing through the basepath middleware, requests whose path matches the base path prefix are stripped of it and forwarded.
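
                  The path rewriting itself can be sketched in a few lines of plain C (this is only an illustration of the idea, not the actual Valum implementation; the rebase helper is hypothetical):

                  ```c
                  #include <stdio.h>
                  #include <string.h>

                  /* If the request path has the base as a prefix, strip it and
                     return the remainder to forward; otherwise return NULL to
                     signal that this middleware does not handle the request. */
                  static const char *rebase (const char *path, const char *base) {
                      size_t n = strlen (base);
                      if (strncmp (path, base, n) == 0)
                          return path[n] ? path + n : "/";
                      return NULL;
                  }

                  int main (void) {
                      printf ("%s\n", rebase ("/user/5", "/user")); /* forwarded as '/5' */
                      printf ("%s\n", rebase ("/user",   "/user")); /* forwarded as '/'  */
                      return 0;
                  }
                  ```

                  An exact match is rebased onto '/', so the subrouter still sees a valid root path.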

                  But there’s more!

                  The middleware also handles errors from the Success.CREATED and Redirection.* domains that set the Location header.

                  user.get ("/", (req, res) => {
                      throw new Success.CREATED ("/%d", 5); // rewritten as '/user/5'
                  });

                  It also rewrites the Location header if it was set directly.

                  user.get ("/", (req, res) => {
                      res.status = Soup.Status.CREATED;
                      res.headers.replace ("Location", "/%d".printf (5));
                  });

                  Rewriting the Location header is only applied to absolute paths, those starting with a leading slash /.

                  It can easily be combined with the subdomain middleware to provide a path-based fallback:

                  app.subdomain ("api", api.handle);
                  app.use (basepath ("/api/v1", api.handle));

                  There was a time when Vala was really popular, and a plethora of Vala apps spawned. A lot of them are dead right now, and since I don’t have time to revive any of them, I’ll publish this list of interesting projects in case someone is interested in bringing one of them back:

                  I often profile Valum’s performance with wrk to ensure that no regression hit the stable release.

                  It helped me identify a couple of mistakes in various implementations.

                  Anyway, I’m glad to announce that I have reached 6.3k req/sec on a small payload, all relative to my very low-grade Acer C720.

                  The improvements are available in the 0.2.14 release.

                  • wrk with 2 threads and 256 connections running for one minute
                  • Lighttpd spawning 4 SCGI instances

                  Build Valum with examples and run the SCGI sample:

                  ./waf configure build --enable-examples
                  lighttpd -D -f examples/scgi/lighttpd.conf

                  Start wrk

                  wrk -c 256


                  Running 1m test @
                    2 threads and 256 connections
                    Thread Stats   Avg      Stdev     Max   +/- Stdev
                      Latency    40.26ms   11.38ms 152.48ms   71.01%
                      Req/Sec     3.20k   366.11     4.47k    73.67%
                    381906 requests in 1.00m, 54.31MB read
                  Requests/sec:   6360.45
                  Transfer/sec:      0.90MB

                  There are still a few things to get done:

                  • hanging connections benchmark
                  • throughput benchmark
                  • logarithmic routing #144

                  The trunk buffers SCGI requests asynchronously, which should improve the concurrency with blocking clients.

                  Lighttpd is not really suited for throughput tests because it buffers the whole response. Sending a lot of data is problematic and uses up a lot of memory.

                  Valum is designed with streaming in mind, so it has a very low (if not negligible) memory footprint.

                  I reached 6.5k req/sec at some point, but since I could not reliably reproduce it, I preferred posting these results.

                  I have just backported important fixes from the latest developments in this hotfix release.

                  • fix blocking accept call
                  • async I/O with FastCGI with UnixInputStream and UnixOutputStream
                  • backlog defaults to 10

                  The blocking accept call was a real pain to work around, but I finally ended up with an elegant solution:

                  • use a threaded loop for accepting a new request
                  • delegate the processing into the main context

                  FastCGI multiplexes multiple requests over a single connection, which makes efficient asynchronous I/O hard. The only thing we can do is poll the single file descriptor we have, and to do that correctly, why not reuse gio-unix-2.0?

                  The streams are reimplemented by deriving UnixInputStream and UnixOutputStream and overriding read and write to write a record instead of the raw data. That’s it!

                  I have also been working on SCGI: the netstring processing is now fully asynchronous. I couldn’t backport it as it was depending on other breaking changes.

                  First of all, happy new year to you all (yes, I know we are already in February)!

                  Long time no post, I've been very busy with work, new projects, new clients, new technologies, preparing the move to a new home, the second child, and lot more, on the personal side.
                  Handling all of the above at the same time resulted in a severe drop in my open-source contributions, so I haven't been able to do anything more than code reviews and minor fixes, plus the releases of the GNOME modules I am responsible for (GNOME Games rule!).
                  During the winter break, between Christmas and New Year's Eve I have managed to work a bit on AppMenu integration for Atomix (which is not completely ready, as the appmenu is not displayed, in spite of being there, when checking with gtkInspector)
                  In the meantime lots of good things have happened, e.g. the Fedora 23 release, which is (again) the best Fedora release of all time, thanks to everyone contributing.

                  All in all, I just wanted to share that I'm not dead yet, just been very busy, but hoping that I can get back to the normal life with a couple more contributions to open-source, and sharing some more experiences with gadgets, e.g. the Android+Lubuntu dual boot open-source TV box I got for Christmas.

                  I’m using the thunderbird conversations add-on and am generally quite happy with it. One pain point however is that its quick reply feature has a really small text area for replying. This is especially annoying if you want to reply in-line and have to scroll to relevant parts of the e-mail.

                  A quick fix for this:

                  1. Install the Stylish thunderbird add-on
                  2. Add the following style snippet:
                    .quickReply .textarea.selected {
                      height: 400px !important;
                    }

                  Adjust height as preferred.

                  Since the new design of GNOME Mines has been implemented, several people have complained about the lack of colors and the performance issues.

                  The lack of colors has been tackled last cycle with the introduction of the theming support, and including the classic theme with the same colored numbers as we all know from the old days of GNOME Mines.

                  Now, on to the performance issues, which in most cases are not real performance issues but rather playability issues for hardcore miners aiming for a sub-10-second time: the reveal transition time is set to 0.4 seconds, which adds up to a few seconds during a game and might push the total over 10 seconds. To overcome this limitation, I have implemented a disable-animations option in the Appearance settings, allowing users to turn off the transitions completely and achieve the best scores they can. This can also come in handy in the rare cases when the transitions cause real performance issues. The next step would be to count the number of manually revealed tiles, multiply it by the transition time when animations are enabled, and subtract this from the total time at the end of the game, so that timing is roughly the same for players with and without animations.
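                  The proposed timer correction is just a bit of arithmetic; here it is sketched with hypothetical numbers (12 reveals in a 14.8-second game):

                  ```c
                  #include <stdio.h>

                  int main (void) {
                      /* Hypothetical values illustrating the proposed correction. */
                      const double transition = 0.4;  /* seconds per reveal animation */
                      int    revealed  = 12;          /* manually revealed tiles      */
                      double raw_time  = 14.8;        /* measured game time, seconds  */

                      /* Subtract the accumulated animation time from the clock. */
                      double adjusted = raw_time - revealed * transition;
                      printf ("%.1f\n", adjusted);
                      return 0;
                  }
                  ```

                  With these numbers the 4.8 seconds spent in animations are removed, giving an adjusted time of 10.0 seconds, comparable to a game played without animations.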

                  Feedback, ideas, comments are always welcome: are you a hardcore miner? will you disable the eye-candy animations to get better scores? Which theme are you using when you are playing GNOME Mines?
                  I've been fairly busy recently, so all my colleagues upgraded to F22 before I did, even though usually I was the one installing systems in beta or release-candidate state. After seeing two fairly successful upgrades I decided to take an hour to upgrade my system, hoping that it would fix an annoying gdm issue I've seen recently. Each day after unlocking the system (I cold-boot each day, so after my first break) one of my three displays doesn't turn on, and I have to go to display settings, change something, click apply and then revert to get all my displays back. Subsequent screen unlocks work correctly; I only get this once a day at the first unlock.

                  After updating 3000+ packages in about an hour, I rebooted, got to the login screen, typed my password, the login screen disappeared, the grey texture appeared, and the system hung.
                  The steps to recover to a usable computer:
                  • Switching to another VT revealed that everything was running, including gnome shell, gdm status was ok.
                  • Tried restarting gdm, but it didn't help.
                  • Checking the common issues for Fedora 22 gave me a hint that gdm running with Wayland could be the culprit, so I changed to the X11-based gdm, but that didn't help either.
                  • The GNOME on Wayland session managed to log in, but froze when I pressed the Meta key to access the applications.
                  • Settings from the top right corner did work however, so I managed to create another user, which could log in.
                  • That led me to the conclusion that there was a problem with my configuration. I'm still not sure, and I will never find out; as the computer to be upgraded was my work PC and I needed to get stuff done, I decided to reset my configuration. As I couldn't find a way to reset all dconf settings to defaults, I backed up and deleted the following folders: .gnome, .gnome2, and some other ones I can't remember, but they should be easy to find with a search for "resetting all gnome shell settings". That did the job: I had to reconfigure my GNOME Shell extensions and settings, but at least I managed to log in. All in all, it wasn't the best upgrade experience I ever had.
                  The result however is pretty good (though one of my displays still turns off at the first unlock), and it was definitely worth working on (I knew it would be; on my home computer I've been running F22 since the Alpha ;) )
                  Thanks for everyone who contributed to this release, your work is welcome and appreciated.
                    Recently I've been thinking about the real value of my contributions to free software and open-source software.

                    I've realized that I'm mostly a "seasonal" open-source contributor: I choose a project, do some bug triaging and bug-fixing, and when I'm "stuck" with the project (i.e. when the remaining bugs/features would require serious effort and quite some time to implement) I jump on to the next project, do the same there, and repeat this over and over again. Of course, in the meantime I get attached to some projects and "maintain" them, so I keep track of the new bugs and fix them whenever I can, review the patches, and make releases, but I don't really consider myself an active contributor.
                    I've had a "season" for Ubuntu software-management related contributions (software-center, update-manager, synaptic), a System Monitor season, an elementary software season, and a GNOME Games season (and this one's not over yet). I also had some minor contributions (just for fun) to projects like LibreOffice, or recently Eclipse (in the context of the GreatFix initiative - which was a really interesting and rewarding experience).

                    I am not sure whether all this is a good thing or a bad thing. I enjoy hacking on open-source projects, for fun, for profit, for experience, for whatever. The most useful skill I've gained is that of easily finding my way around large codebases for bugfixing. But what can be seen from the outside (e.g. from the point of view of a company looking for a developer) is: this guy keeps jumping from one project to another, and he never got really deep into any of the projects he worked on (my longest "streak" of working on a single project was one year). Fortunately OpenHub has a chart for contributions to GNOME as a whole, and it shows that I'm contributing to GNOME constantly, even if only with a few commits per month.

                    Another thing about my contributions is the programming language I use: at work I'm a Java developer, but that cannot be seen at all from my contributions-by-language chart at OpenHub, as the only Java contributions it shows are a few commits to a friend's project implementing Java bindings for a Go library. This will change a bit in the near future, as the Eclipse project should appear there soon with a few commits, but still, it shows that I'm most experienced with C++, which I'm not :)

                    I've started to realize that the dream-job I'm looking for would make use of all these: working primarily on open-source software in Java, but still giving me the freedom to occasionally work on other open-source software. Does that job exist? Unfortunately, not in my country. I saw a job posting recently with a Job description which would probably fit into my dream-job category, but I'm a bit afraid I wouldn't be a good candidate, as it does list some nice-to-have skills, which I don't have, due to the area I did work on in Java until now (server-side Java done with Spring vs J2EE).

                    Does your company value open-source contributions when employing? If yes, which is preferred: in-depth knowledge of one project, or could shifting between projects also be useful? Is being open-minded and language-agnostic better, or is knowing one language to its guts better?

                    A while back I started working on a project called Squash, and today I’m pleased to announce the first release, version 0.5.

                    Squash is an abstraction layer for general-purpose data compression (zlib, LZMA, LZ4, etc.).  It is based on dynamically loaded plugins, and there are a lot of them (currently 25 plugins to support 42 different codecs, though 2 plugins are currently disabled pending bug fixes from their respective compression libraries), covering a wide range of compression codecs with vastly different performance characteristics.

                    The API isn’t final yet (hence version 0.5 instead of 1.0), but I don’t think it will change much.  I’m rolling out a release now in the hope that it encourages people to give it a try, since I don’t want to commit to API stability until a few people have given it a try. There is currently support for C and Vala, but I’m hopeful more languages will be added soon.

                    So, why should you be interested in Squash?  Well, because it allows you to support a lot of different compression codecs without changing your code, which lets you swap codecs with virtually no effort.  Different algorithms perform very differently with different data and on different platforms, and make different trade-offs between compression speed, decompression speed, compression ratio, memory usage, etc.

                    One of the coolest things about Squash is that it makes it very easy to benchmark tons of different codecs and configurations with your data, on whatever platform you’re running.  To give you an idea of what settings might be interesting to you I also created the Squash Benchmark, which tests lots of standard datasets with every codec Squash supports (except those which are disabled right now) at every preset level on a bunch of different machines.  Currently that is 28 datasets with 39 codecs in 178 different configurations on 8 different machines (and I’m adding more soon), for a total of 39,872 different data points. This will grow as more machines are added (some are already in progress) and more plugins are added to Squash.

                    There is a complete list of plugins on the Squash web site, but even with the benchmark there is a pretty decent amount of data to sift through, so here are some of the plugins I think are interesting (in alphabetical order):

                    libbsc targets very high compression ratios, achieving ratios similar to ZPAQ at medium levels, but it is much faster than ZPAQ. If you mostly care about compression ratio, libbsc could be a great choice for you.

                    DENSITY is fast. For text on x86_64 it is much faster than anything else at both compression and decompression. For binary data decompression speed is similar to LZ4, but compression is faster. That said, the compression ratio is relatively low. If you are on x86_64 and mostly care about speed DENSITY could be a great choice, especially if you’re working with text.

                    You have probably heard of LZ4, and for good reason. It has a pretty good compression ratio, fast compression, and very fast decompression. It’s a very strong codec if you mostly care about speed, but still want decent compression.

                    LZHAM compresses similarly to LZMA, both in terms of ratio and speed, but with faster decompression.

                    Snappy is another codec you’ve probably heard of. Overall, performance is pretty similar to LZ4—it seems to be a bit faster at compressing than LZ4 on ARM, but a bit slower on x86_64. For compressing small pieces of data (like fields.c from the benchmark) nothing really comes close. Decompression speed isn’t as strong, but it’s still pretty good. If you have a write-heavy application, especially on ARM or with small pieces of data, Snappy may be the way to go.

                    If you’re like me, when you download a project and want to build it the first thing you do is look for a configure script (or maybe ./ if you are building from git).  Lots of times I don’t bother reading the INSTALL file, or even the README.  Most of the time this works out well, but sometimes there is no such file. When that happens, more often than not there is a CMakeLists.txt, which means the project uses CMake for its build system.

                    The realization that the project uses CMake is, at least for me, quickly followed by a sense of disappointment.  It’s not that I mind that a project is using CMake instead of Autotools; they both suck, as do all the other build systems I’m aware of.  Mostly it’s just that CMake is different and, for someone who just wants to build the project, not in a good way.

                    First you have to remember what arguments to pass to CMake. For people who haven’t built many projects with CMake before this often involves having to actually RTFM (the horrors!), or a consultation with Google. Of course, the project may or may not have good documentation, and there is much less consistency regarding which flags you need to pass to CMake than with Autotools, so this step can be a bit more cumbersome than one might expect, even for those familiar with CMake.

                    After you figure out what arguments you need to type, you need to actually type them. CMake has you define variables using -DVAR=VAL for everything, so you end up with things like -DCMAKE_INSTALL_PREFIX=/opt/gnome instead of --prefix=/opt/gnome. Sure, it’s not the worst thing imaginable, but let’s be honest—it’s ugly, and awkward to type.

                    Enter configure-cmake, a bash script that you drop into your project (as configure) which takes most of the arguments configure scripts typically accept, converts them to CMake’s particular style of insanity, and invokes CMake for you.  For example,

                    ./configure --prefix=/opt/gnome CC=clang CFLAGS="-fno-omit-frame-pointer -fsanitize=address"

                    Will be converted to

                    cmake . -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/gnome -DCMAKE_INSTALL_LIBDIR=/opt/gnome/lib -DCMAKE_C_COMPILER=clang -DCMAKE_C_FLAGS="-fno-omit-frame-pointer -fsanitize=address"

                    Note that it assumes you’re including the GNUInstallDirs module (which ships with CMake, and which you should probably be using).  Other than that, the only thing which may be somewhat contentious is that it adds -DCMAKE_BUILD_TYPE=Debug; Autotools usually builds with debugging symbols enabled and lets the package manager take care of stripping them, but CMake doesn’t.  Unfortunately some projects use the build type to determine other things (like defining NDEBUG), so you can get configure-cmake to pass "Release" for the build type by passing it --disable-debug, one of two arguments that don’t mirror something from Autotools.

                    Sometimes you’ll want to be able to pass non-standard arguments to CMake, which is where the other argument that doesn’t mirror something from Autotools comes in: --pass-thru (--pass-through, --passthru, and --passthrough also work), which just tells configure-cmake to pass all subsequent arguments to CMake untouched.  For example:

                    ./configure --prefix=/opt/gnome --pass-thru -DENABLE_AWESOMENESS=yes

                    Of course none of this replaces anything CMake is doing, so people who want to keep calling cmake directly can.

                    So, if you maintain a CMake project, please consider dropping the configure script from configure-cmake into your project.  Or write your own, or hack what I’ve done into pieces and use that, or really anything other than asking people to type those horrible CMake invocations manually.

                    I have a Pirelli P.VU2000 IPTV set-top box which I don't use, but would like to put that to a good use. It runs Linux, has an HDMI, stereo RCA audio output, 2x USB 2.0 and IR receiver + remote, so it'd be nice to have this play internet radios if that's possible (theoretically it is an IPTV receiver + media center, so it should be able to play media). And of course, let's not forget the advantage of learning new things, as I am aware that I could get similar media players fairly cheaply :)

                    Unfortunately I'm not too good at hacking, and I haven't found a way to access a root console on it yet (after two days of googling/duck-duck-going and reading several Russian and Greek forum posts translated with Google Translate), so if anyone's up to the challenge of helping me break into it (to access a root shell) in the spirit of knowledge-sharing, I'd be grateful for any kind of help.

                    I've already spent a few days on this, with the following results:
                    • the device boots, gets an IP from my router, but then errors out with "wrong DHCP answer", likely caused by me not being in the same subnet the IPTV provider expects; still, accessing the media player functionality without IPTV access would be nice
                    • after opening the box, I managed to get a serial console with some minimal output; I guess this is the bootloader logging to the serial console:
                      #xos2P4a-99 (sfla 128kbytes. subid 0x99/99) [serial#a225d]
                      #stepxmb 0xac                                            
                      #DRAM0 Window  :    0x# (20)                             
                      #DRAM1 Window  :    0x# (15)                             
                      #step6 *** zxenv has been customized compared to build ***
                    • scanning the ports with nmap reveals the following:
                      Nmap scan report for
                      Host is up (0.00043s latency).
                      Not shown: 65534 closed ports
                      PORT     STATE SERVICE VERSION
                      2396/tcp open  ssh     Dropbear sshd 0.52 (protocol 2.0)
                      | ssh-hostkey:
                      |   1024 70:ff:b6:6b:94:f4:4e:19:14:40:7d:40:de:07:b9:ac (DSA)
                      |_  1040 c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a (RSA)
                      Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

                      Service detection performed. Please report any incorrect results at .
                      Nmap done: 1 IP address (1 host up) scanned in 13.52 seconds
                    • telnet to the port found with nmap works, but no prompt comes up:
                      telnet 2396
                      Connected to
                      Escape character is '^]'.
                    • ssh into the STB as root fails, as only public-key authentication seems to be enabled:
                      ssh root@ -p2396
                      The authenticity of host '[]:2396 ([]:2396)' can't be established.
                      RSA key fingerprint is c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a.
                      Are you sure you want to continue connecting (yes/no)? yes
                      Warning: Permanently added '[]:2396' (RSA) to the list of known hosts.
                      Permission denied (publickey).
                    • checked for possible dropbear 0.52 exploits and vulnerabilities, but haven't found anything I could use
                    So if you have any other ideas what I could try, feel free to suggest them in the comments.
                    The new development version 0.27.1 of the Vala programming language contains a lot of enhancements and bug fixes.

                    Release notes are the following:


                    • Print compiler messages in color.
                    • Add clutter-gdk-1.0 bindings.
                    • Add clutter-gst-3.0 bindings.
                    • Add clutter-x11-1.0 bindings.
                    • Add rest-extras-0.7 bindings.
                    • Bug fixes and binding updates.
                    However, I'd like to say a bit more:
                    • The compiler now checks for unknown attributes.
                    • More checks in the compiler about invalid semantics.
                    • XOR now works with booleans, like bitwise OR and AND do
                    • A new attribute, [ConcreteAccessor], mostly useful for bindings. In C it's common for interface properties to have concrete accessors instead of abstract ones; before this, you had to put [NoAccessorMethod] on top of such properties.
                    • Some projects use a property named "type", which Vala could not support. Now, if the binding has [NoAccessorMethod], you can use it.
                    • We now infer generics recursively in method calls, so less typing for you.
                    • And more.
                    Have fun.
                    Last cycle GNOME Mines went through a major rewrite and redesign, bringing it to the GNOME 3 era. However, not everyone was happy with the new look, and several people mentioned the lack of colors on the numbers as the reason.

                    The problem

                    The numbers on the fields communicate the danger clearly. But you have to read them. Several people have reported using the colors as the primary clue for sensing the danger around the current field. With the new design we don't have colored numbers, so these players would have to change the way they play Minesweeper. Some did, and mentioned that in spite of their initial complaints about the missing colors they are happy with the result and no longer need them. But what about the others?
                    While the lack of colors was the number one complaint, some people also mentioned the flatness of all the icons as an issue; others complained about the small difference between exploded and non-exploded mines and the lack of an explosion, which might be an accessibility issue for the visually impaired.

                    The options

                    In bug #729250, several G+ posts, and blog entries I have read different suggestions (from designers, casual users, and minesweeping junkies alike) on how to bring back this additional level of visual feedback showing the danger when you're clicking around mines.

                    Here are some of the options we have discussed (feel free to comment your pros/cons for any of the solutions, and I will expand the list):
                    • Colored numbers, as we had in the old version
                      • Pros
                        • Potentially less unsatisfied users
                        • similar in looks to the minesweepers other platforms have
                      • Cons
                        • Readability issues
                        • User interface using many colors might look out of place on the GNOME desktop
                    • Subtle background color change based on level of danger
                      • Pros
                        • Color feedback
                        • If the colors are subtle enough, readability shouldn't be affected
                      •  Cons
                        • User interface using many colors might look out of place on the GNOME desktop
                    • Symbolic pips instead of the numbers
                      • Pros
                        • no reading required
                        • with well-spaced pips no counting would be required
                      • Cons
                        • ???

                    The proposed solution

                    GNOME games try to be as simple as possible, with the number of options reduced to the bare minimum. I consider this a good thing. But still, several games have options for changing the "theme", the look of the game board: e.g. lightsoff, five-or-more, and quadrapassel all have a theme selection option in their preferences. We could do the same in Mines.
                    • people can change the theme if they are not satisfied with the default one
                    • a theme selector has to be added
                    • a preferences menu item has to be added, as Mines doesn't have a preferences window at the moment; options are accessible from the appmenu

                    The status

                    Fortunately, the minefield is styled with CSS and the images are provided as SVG files, so a theme is simply a collection of files: a theme.css file describing the styles and several SVG files with the images to use.
                    I have implemented a theme switcher (branch wip/theming-support) with the following features:
                    The current look of the theme switcher
                    • it loads the above files from a given directory to display the minefield, so a theme is a directory
                    • the theme name is the name of its directory, but it is irrelevant, as users shouldn't see it anywhere: the theme switcher is a carousel-style switcher that doesn't show the name
                    • the theme switcher is a live preview widget: you can play a game in it (the minefield is prefilled to show all the numbers and the flagged and non-flagged states, and you can also click the unrevealed tiles to see how the mines look)
                    I have added three themes (currently these differ only in the CSS style) for now:
                    • classic - using the flat icons, but the old colored numbers
                    • default - the monochrome numbers and the flat icons
                    • colored backgrounds - flat icons, and the numbers using colored backgrounds
                    If this gets into the master repository, I wouldn't want to have more than five themes there. However, if you don't like any of them and you are privileged enough (you have write access to the themes directory of Mines), you can create your own, and the theme switcher will pick it up after an application restart.

                    The missing pieces

                    • do we need a theme switcher at all, or can we create a single theme that fits everyone? (I doubt it, but if it's possible, I'll happily throw the whole theme switcher implementation away)
                    • design input on the theme switcher would be welcome
                      • theme switcher navigation button styling
                      • theme switcher window title
                      • theme switcher menu item (currently it opens by clicking Appmenu/Preferences)
                    • input on the themes
                      • suggestions for the existing themes
                      • suggestions for new themes (with SVG images provided)


                    It's hard to please everyone, but we can try to do our best :)
                    Happy new year everyone!

                    As we have started a brand new year, it's time for reviewing last year and planning for this one.


                    Last year was a great one for me, professionally. Although I still didn't get my dream job working full time on open-source and free software, I am still proud of what I have accomplished.

                    • I have successfully landed a major rewrite in gnome-mines, with both welcome and criticized changes (colored vs monochrome numbers anyone?) :)
                    • The company I work for has successfully migrated all SVN repositories to git, and my colleagues have mostly gotten used to it. We still make some mistakes, but we can usually handle them without too much trouble
                    • I have migrated our issue tracking system to Redmine and customized it, learning some Ruby and reporting some issues on GitHub projects along the way
                    • I have removed our in-repository shared libraries and implemented dependency management on top of our current ant-based build scripts, using Ivy
                    • Contributed more time to reviews than I did before (usually for the awesome elementary projects), along with some fixes
                    • Contributing to open-source (GNOME and elementary) projects helped me get a new laptop through Bountysource (thanks to Bountysource for providing the platform, and to the people supporting elementary and GNOME with bounties), which I am grateful for
                    • 237 commits to various open-source projects (according to my OpenHub stats); although some of them are only release commits, it's still a good number for me, even if lower than the previous year's
                    • I interviewed for a job that seemed like my dream job, but unfortunately it turned out not to be, for various reasons. I still don't know why I got rejected at the last phase, and while talking with the interviewers it turned out that the marketing which motivated me to go for the interview was indeed only marketing (and a very successful one), but nothing more (at least that's what I concluded from the answers of the several people working there whom I managed to talk to)
                    • I held three talks about open-source at the university I graduated from: the first and second were the same generic introduction to open-source for students, and the last one was about contributing for computer scientists, with bugfixes, code reviews, and the like. It was a great experience and I enjoyed my talks a lot, but I didn't see much enthusiasm around the topic, so I'm seriously thinking about what to do next: I like talking about open-source, but it seems I haven't found the right audience yet
                    • I seriously wanted to attend the Open Source Open Mind conference held annually in our city; I even had a ticket, but unfortunately I fell ill the night before the conference (my longest illness yet, lasting almost a month), so I skipped it, with regrets
                    • In the land of open-source I intend to have more contributions this year, at least one commit and/or bugfix each day.
                    • I would like to get to GUADEC this year, as I've never been, and it seems like the event I actually have a chance of reaching: it's held in Europe, this year in Gothenburg, Sweden, so I need no visa (if I did need one, I would have to travel 900 km to get it). Unfortunately we intend to buy a house, so I might not get the chance
                    That's it. No big plans other than these (at least not programming-related). As personal goals I have some more ambitious ones, like reading some books and buying a house, but I hope I will be able to keep up the contributions, which breathe some more life into me.
                    Since glib 2.41.2, the mutex/cond implementation on Linux has changed. Code compiled with Vala < 0.26 that targets glib 2.32 or later (with --target-glib 2.32) will suffer from deadlocks.

                    Your options are either:
                    • Do not use --target-glib 2.32
                    • Update Vala to at least 0.25.2
                    • Instead of upgrading Vala, pick the bindings for Mutex and Cond from the new glib-2.0.vapi
                    • Downgrade glib
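If you go with the second option, you can check whether an installed Vala is new enough with a quick version comparison; a minimal sketch (the `version_ok` helper is hypothetical, and the 0.25.2 threshold is the one from this post — in practice you'd feed it the output of `valac --version`):

```shell
# Sketch: decide whether a Vala version is >= 0.25.2, the first release
# with the fixed Mutex/Cond bindings, using version-aware sort.
version_ok() {
    # succeeds if $1 >= 0.25.2 in version order
    [ "$(printf '%s\n0.25.2\n' "$1" | sort -V | head -n1)" = "0.25.2" ]
}

version_ok 0.26.0 && echo "safe to use --target-glib 2.32"
# prints: safe to use --target-glib 2.32
```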
                    To clarify, it's not a glib bug. It's an old valac bug in the glib-2.0.vapi bindings of Mutex and Cond that has now become critical after the glib implementation change.

                    The relevant Vala bug can be found here:
                    We don't need to create a window that lists directories just to pick files: Gtk does it for us with FileChooserDialog...

                    valac -o "archivos" *.gs --pkg gtk+-3.0

                    uses Gtk

                    init
                        Gtk.init (ref args)               // initialize gtk
                        var prueba = new ventana ()       // create the window object
                        prueba.show_all ()                // show everything
                        Gtk.main ()                       // start the main loop

                    class ventana : Window                // our window class
                        init
                            title = "Ventana de prueba"              // set the title
                            default_height = 250                     // height
                            default_width = 250                      // width
                            window_position = WindowPosition.CENTER  // position
                            // create a button with the following label
                            var button = new Button.with_label ("Pulsa este botón")
                            // connect the button's click signal to the pulsado method
                            button.clicked.connect (pulsado)
                            // quit the main loop when the window's close button is clicked
                            destroy.connect (Gtk.main_quit)
                            // add the button to the window
                            add (button)

                        def pulsado (btn : Button)
                            var FC = new FileChooserDialog ("Elige un archivo para abrir", this, Gtk.FileChooserAction.OPEN,
                                                            "Cancelar", Gtk.ResponseType.CANCEL,
                                                            "Abrir", Gtk.ResponseType.ACCEPT)
                            FC.select_multiple = false
                            case FC.run ()
                                when Gtk.ResponseType.CANCEL
                                    pass
                                when Gtk.ResponseType.ACCEPT
                                    var direccion = FC.get_filename ()
                                    print direccion
                            FC.destroy ()
                    I use the terminal a lot, usually with bash or fish shell, and I always wanted some kind of notification on command completion, especially for long-running greps or other commands.

                    The guys working on elementary OS have already implemented job completion notification for zsh shell in their pantheon-terminal project, but I wanted something more generic, working everywhere, even on the servers I am running commands through SSH.

                    The terminal bell sound is something I usually don't like, but it seemed like a good fit for a quick heads-up, so the Bell character came to the rescue.
                    As the bash prompt is fairly customizable, you can easily set a prompt which includes the magic BELL character.

                    In order to do this:
                    • open a Terminal (surprise :))
                    • run the command echo PS1=\$\'\x07\'\'$PS1\'
                    • paste the output of the command into ~/.bashrc
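The resulting line in ~/.bashrc boils down to prepending the BELL control character to your prompt; a minimal sketch (the placeholder prompt is illustrative — yours will differ):

```shell
# "\u@\h:\w\$ " stands in for whatever your current PS1 is.
PS1="\u@\h:\w\$ "

# Prepend the BELL control character (0x07) so the terminal beeps every
# time a command finishes and the prompt is redrawn.
PS1=$'\x07'"$PS1"

# Sanity check: the first byte of the prompt is now 0x07.
printf '%s' "$PS1" | head -c1 | od -An -tx1 | tr -d ' '
# prints: 07
```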
                    Of course, this is not perfect, as it beeps for short commands too, not only long-running ones, but it works for me; maybe it will help you.
                    A quick update on my new ultrabook running Fedora:
                    • After watching kernel development closely to see if anything related to the built-in touchpad came in, and nothing did, I decided to try some workarounds. If it can't work as a touchpad, at least it should work as a mouse. This can be accomplished by adding psmouse.proto=imps to the kernel parameters. The worst part is that there's neither two-finger scrolling nor edge scrolling, but I can live with that, as I also have a wireless mouse.
                    • Unfortunately I couldn't do anything with the wireless card. I downloaded the kernel driver for the 3.13 and 3.14 kernels and changed the source to build against the 3.17 kernel (the one in Fedora Workstation dailies), but unfortunately it fails to connect to my WPA2-PSK network. So, until I get a mini PCIe wifi card with an Intel or Atheros chip (which are confirmed to have proper Linux support), I will use the laptop with a USB WLAN interface.
                    • Optimus graphics card switching still doesn't seem trivial to install and set up properly. However, I don't need more than the Intel graphics card, so I just wanted to switch the NVIDIA card off completely. I installed bumblebee and bbswitch based on the instructions on the Fedora wiki, and turned the discrete card off.
                    • Battery usage is at about 8 W; estimated usage on battery is 7.5 hours with standard internet browsing on a standard 9-cell battery, so I'm pretty satisfied with that.
                    • I have formatted both the 24 GB SSD and the 1.5 TB HDD (cleaned up from sh*t like Windows and a McAfee 30-day trial), and installed Fedora 21 with a custom partitioning layout.
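For reference, the psmouse.proto=imps workaround from the first bullet can be made persistent by appending it to the kernel command line. Below is a sketch operating on a throwaway sample file; the real file on Fedora is /etc/default/grub (after editing it you would regenerate the grub config), or you can skip the editing entirely with grubby --update-kernel=ALL --args="psmouse.proto=imps":

```shell
# Sketch: add psmouse.proto=imps to the kernel command line in a sample
# grub defaults file (a temp copy; the real file is /etc/default/grub).
f=$(mktemp)
cat > "$f" <<'EOF'
GRUB_CMDLINE_LINUX="rhgb quiet"
EOF
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 psmouse.proto=imps"/' "$f"
cat "$f"
# prints: GRUB_CMDLINE_LINUX="rhgb quiet psmouse.proto=imps"
```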
                    All in all, at last I have a mostly working laptop (though there's room for improvement) with a battery life above six hours of constant browsing, so I'm satisfied.

                      We have been hard at work since the last announcement. Thanks to help from people testing out the previous release, we found a number of issues (some not even OS X related) and managed to fix most of them. The most significant issues that are resolved are related to focus/scrolling issues in gtk+/gdk, rendering of window border shadows and context menus. We now also ship the terminal plugin, had fixes pushed in pygobject to make multiedit not crash and fixed the commander and multiedit plugin rendering. For people running OS X, please try out the latest release [1] which includes all these fixes.

                      Can’t see the video? Watch it on YouTube:


                      Redmine issue editing is quite a complex task, with a fairly complex, huge, two-column form to edit (we also have several custom fields, which make the issue even worse).

                      In our Trac instance customized for ubinam, after adding our custom workflow, at the end of the page we had some options augmenting the workflow to ease status updates — simple tasks like reassignment, quick-fix, and starting work on an issue, which left most of the ticket fields untouched and only juggled resolution, status, and assignee.

                      The status-button Redmine plugin provided a great base: after the description and primary fields of the ticket, it shows links for quick status transitions. With it, you don't have to click to edit the issue, find the status field on the form, click to open it, select the status, and click submit to save the changes; instead, you change the status with a single click. In our Trac-originated workflow we had a status with multiple resolutions (fixed, invalid, duplicate, wontfix, worksforme), which is a more complex transition, as you have to update two fields; and the assigned status usually goes along with a new assignee, so that is not easy either.

                      After checking the source and learning a bit of Ruby on Rails, I managed to change the links into Bootstrap buttons on the form, and added an assignee combobox (with a nice look, using the same data as the one on the edit form, thus no additional requests) with a built-in search box, thanks to the awesome Select2 component.
                      Of course, some status transitions also need a reason why you switched to that status: I could have chosen a dropdown with a text entry, but as the form already had a nice way to scroll to the comment form, why not use it? The rest of the form is not really helpful in this context, so with a bit of jQuery I have hidden it. Now, clicking a quick-status button either changes the status and submits the form (if no comment is required, like "test released") or changes the status and jumps to the comment form to give you a chance to comment. Obviously, you could still use the traditional edit button, but why would you?

                      But a picture is worth a thousand words, so here you go, instead of three thousand words:

                      The overall look of a ticket with the plugin, see the quick-status buttons
                      A complex status transition, setting the status and the resolution, and requiring a comment
                      Changing the assignee is easy and fast, select the user, and click reassign...
                      Again, this is a heavily customized version, but if there's enough interest, I will share the plugin, or even develop a more generic one not strictly tied to our workflow. So, let me see your +1s/comments/shares: if I get 30 of those, I'll share it in a GitHub repo.

                      After sharing my experiences of migrating from Trac 1.0.1 to Redmine, some people have asked me to share the script I used.

                      Do you need the script?
                      (Public domain image)
                      I would prefer to share the migration script by getting it into the Redmine source tree. I am willing to spend some more of my spare time getting the migration script into shape (currently it's too personalized for our project to be shared), but I'm not sure how many people would use it, so to find out, I need you to +1/comment/share this post to express your interest. Even if this might look like shameless self-promotion, you'll have to believe me that it is only a way to find out in what form to share the script. If I see at least 30 people interested, I will do my best to share the migration script as soon as possible and get it into the Redmine source tree. If there are fewer than 30 people interested in the script, I will still share it with them, but as a raw script in a public GitHub repo/gist, without proper testing and review from the Redmine team.

                      I have already asked the Redmine devs on IRC about the way they would prefer (and hopefully accept) a patch, they answered that they will accept the script, better in a separate migration script (the current one in the tree is probably for Trac 0.12 and Trac 1.0 has changed a lot), to avoid breaking the old script for the ones who could use it. This is the easiest way, as it reduces the number of checks in the migration script for Trac version.

                      The Redmine developers have also asked me to get a sample Trac DB dump, but my company's database is not public. If you would be interested in the migration script, and want to help, and have a public Trac database at hand (preferably with less than 1000 tickets), please share it. I have looked at the Trac users page for open-source projects, but only a few of them are using Trac 1.0.1. The database dump would be helpful to test the migration script, and write some unit tests, to make sure everything works well.

                      Stay tuned, in my next post I will present the personalizations I have used to ease Redmine ticket updates without using the complex edit form, and if there's enough interest, I will share the plugin I customized with the people interested.

                      As some of you might already know, the company I work for just migrated from Trac to Redmine (migration is mostly complete). I'm a developer, but in lack of DevOps people, I was responsible for the migration. It went fairly well, some more notes:
                      Fixing everything openclipart
                      • the migration didn't migrate the estimated time attribute for tickets, as I forgot it; but since I did write the part that migrates estimated time changes in the journal, I took a guess and set the attribute for each ticket to the maximum value found in the ticket's history (usually that's the correct one, except maybe for a few)
                      • never allow your users to choose their theme: I installed a plugin to let users choose their Redmine theme, along with seven themes; unfortunately each has its advantages and disadvantages, and everyone has their preferred theme, so we can't choose a default theme everyone would agree with (maybe I will be the bad guy in the story, remove the plugin, and force them to use what most people like)
                      • all in all, the feedback has been mostly positive so far; in spite of my promise to send a mail when everything is complete (which has not happened yet), most people are already using it, so it seems to be fairly intuitive (at least for people used to Bugzilla and Trac)

                      Commit messages in issue history

                      A major complaint was that commit messages do not appear among the ticket comments in Redmine, but off to the side, making it hard to see which commit came after which comment. The issue-repo-history-merge plugin had some issues and did not fit our needs, so I started looking for another solution: modifying the Redmine source or writing my own plugin. Checking the Redmine source, I found that a changeset link is added for the fixing keywords defined in the Redmine settings (which we already used for changing the status of tickets on commits). So I just added a fixing keyword with the usual "Refs #xxxx" style already used in Redmine to associate a commit with a ticket, to also set the ticket's status to Accepted and, inherently, add a ticket history entry with "Applied in changeset:xxxxx". This was still missing the commit comment, but I added that in the Redmine source, that being the fastest solution for now.
                      Later on, a plugin might be more appropriate, if needed, to reduce the number of changes in the Redmine source, in case a reinstall/Redmine update is needed.
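The fixing-keyword convention described above looks like this in practice (the ticket number 1234 is made up, and the sed pattern is only a sketch of the kind of matching a keyword scanner might do, not Redmine's actual implementation):

```shell
# A commit message carrying a Redmine fixing keyword (hypothetical ticket):
msg="Fix NPE in report export. Refs #1234"

# Extract the referenced ticket id roughly the way a keyword matcher might:
ticket=$(printf '%s\n' "$msg" | sed -n 's/.*Refs #\([0-9][0-9]*\).*/\1/p')
echo "$ticket"
# prints: 1234
```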

                      This post was going to be a rather long one, but I decided to split it in three, as the other two topics need their own posts for objective reasons. If you're interested in the migration script itself or a Redmine workflow helper, check back later.

                      If you’re reading this through planet GNOME, you’ll probably remember Ignacio talking about gedit 3 for windows. The windows port has always been difficult to maintain, especially due to gedit and its dependencies being a fast moving target, as well as the harsh build environment. Having seen his awesome work on such a difficult platform, I felt pretty bad about the general state of the OS X port of gedit.

                      The last released version for OS X was gedit 3.4, which is already pretty old by now. Even though developing on OS X (it being Unix/BSD based) is easier than Windows (for gedit), there is still a lot of work involved in getting an application like gedit to build. Things have definitely improved over the years though, GtkApplication has great support for OS X and things like the global menu and handling NSApp events are more integrated than they were before (we used the excellent GtkosxApplication from gtk-mac-integration though, so things were not all bad).

                      I spent most of the time on two things, the build environment and OS X integration.

                      Build environment

                      We are still using jhbuild as before, but have automated all of the previously manual steps (such as installing and configuring jhbuild). There is a single entry point (osx/build/build) which is basically a wrapper around jhbuild (and some more). The build script downloads and installs jhbuild (if needed), configures it with the right environment for gedit, bootstraps and finally builds gedit. All of the individual phases are commands which can be invoked by build separately if needed. Importantly, whereas before we would use a jhbuild already setup by the user, we now install and configure jhbuild entirely in-tree and independently of existing jhbuild installations. This makes the entire build more reliable, independent and reproducible. We now also distribute our complete jhbuild moduleset in-tree so that we no longer rely on a possibly moving external moduleset source. This too improves build reproducibility by fixing all dependencies to specific versions. To make updating and maintaining the moduleset easier, we now have a tool which:

                      1. Takes the gtk-osx stable modulesets.
                      2. Applies our own specific overrides and additional modules from a separate overrides file. For modules that already exist, a diff is shown and the user is asked whether or not to update the module from the overrides file. This makes it easy to spot whether a given override is now out of date, or needs to be updated (for example with additional patches).
                      3. For all GNOME modules, checks if there are newer versions available (stable or unstable), and asks whether or not to update modules that are out of date.
                      4. Merges all modules into two moduleset files (bootstrap.modules and gedit.modules). Only dependencies required for gedit are included and the resulting files are written to disk.
                      5. Downloads and copies all required patches for each required module in-tree so building does not rely on external sources.

                      If we are satisfied with the end modulesets, we copy the new ones in-tree and commit them (including the patches), so we have a single self-contained build setup (see modulesets/).

                      All it takes now is to run

                      osx/build/build all

                      and all of gedit and its dependencies are built from a pristine checkout, without any user intervention. Of course, this being OS X, there are always possibilities for things to go wrong, so you might still need some jhbuild juju to get it working on your system. If you try and run into problems, please report them back. Running the build script without any commands should give you an overview of available commands.

                      Similar to the build script, we’ve now also unified the creation of the final app bundle and dmg. The entry point for this is osx/bundle/bundle, and it works in a similar way to the build script. The bundle script creates the final bundle using gtk-mac-bundler, which gets automatically installed when needed, and obtains the required files from the standard in-tree build directory (i.e. you’ll have to run build first).

                      OS X Integration

                      Although GtkApplication takes care of most of the OS X integration these days (the most important piece being the global menu), there were still quite a few small issues left to fix. Some of these were in gtk+ (like the menu not showing [1], DND issues [2], font anti-aliasing issues [3] and support for the openFiles Apple event [4]), some of which have already been fixed upstream (others are pending). We’ve also pushed support for native 10.7 fullscreen windows into gtk+ [5] and enabled this in gedit (see screenshot). Others we fixed inside gedit itself. For example, we now use native file open/save dialogs to better integrate with the file system, have better support for multiple workspaces, improved support for keeping the application running without windows, made enchant (for the spell checker) relocatable, added an Apple Spell backend, and made other small improvements.

                      Besides all of these, you of course also get all the “normal” improvements that have gone into gedit, gtk+ etc. over the years! I think that all in all this will be the best release for OS X yet, but I’ll let someone else be the judge of that.

                      gedit 3.13.91 on OS X

                      We are doing our best to release gedit 3.14 for OS X at the same time as it will be released for linux, which is in a little bit less than a month. You can download and try out gedit 3.13.91 now at:


                      It would be really great to have people who own a Mac try this out and report bugs back to us so we can fix them (hopefully) in time for the final release. Note that gedit 3.14 will require OS X 10.7+; we no longer support OS X 10.6.

                      [1] [Bug 735122] GtkApplication: fix global menubar on Mac OS
                      [2] [Bug 658722] Drag and Drop sometimes stops working
                      [3] [Bug 735316] Default font antialiasing results in wrong behavior on OS X
                      [4] [Bug 722476] GtkApplication mac os tracker
                      [5] [Bug 735283] gdkwindow-quartz: Support native fullscreen mode

                      The change

                      In January, after a long time with SVN, we (the development team) decided to make the move to git to speed up the development of the project we're working on, Tracking Live.
                      The switch has greatly improved our development speed (although some people are still not happy with it, because of occasional relatively large merge conflicts) and deployment rate (with Jenkins and a relatively good branching strategy, we can release daily if we want).

                      The problem

                      We use Trac for bug tracking, with a post-commit hook that leaves a comment on the referenced ticket after each commit. This was introduced in the SVN days and migrated to git too; unfortunately, Trac with git is awfully slow (tickets without git commits load in less than 5 seconds, tickets with one git commit take 40+ seconds, and the time goes up with the number of related commits). We updated our Trac instance from 0.12 to 1.0.1, which didn't help, and tried several tweaks and additional package installs to speed up Trac+git, but none of those helped either. The Trac developers also consider their Git plugin sub-optimal at the time of this writing.

                      The solution

                      40+ seconds to open a ticket and leave a comment looked like a huge waste of time, so we started looking for alternatives. Redmine looked promising: inspired by Trac but completely rewritten in Ruby on the much-advertised Rails framework, and its default interface looked familiar to the colleagues used to Trac.

                      Migration script updates

                      Redmine provided a migration script for migrating all tickets from Trac. A good start. After the first import (6+ hours for ten thousand tickets) Redmine didn't start at all. Bad news. So here are the changes I made to the migration script in order to get a complete migration (I picked up the Ruby syntax easily; the changes took two days with testing, and I migrated only 200 tickets in each test until I was sure the migration script worked, as I didn't fancy the 6+ hours for a full migration):
                      • updated the date conversion, as the migration script targets Trac 0.12 and the datatype used to store dates in the Trac database has changed since then; after this, Redmine did start
                      • added migration for CC's to Redmine watchers
                      • updated attachments migration to work with Trac 1.0.1, as the attachment paths have changed
                      • added migration of total hours, estimated hours and hours spent, stored as custom fields in Trac, to Redmine's time management plugin entries
                      • added comments for custom field changes, as the custom fields themselves had been migrated (so the current value of each custom field was correct) but their change history had not
                      • added parent ticket relationship migration, as we had several beautiful ticket hierarchies for grouping featuresets (until we migrated to a more agile sprint-alike milestone-based grouping) in Trac
                      • added custom ticket states and priorities mapping (we have a custom set of these defined to help us in our workflow)
                      • added custom user mappings (for each of our users - 64 in the complete Trac history) to create only one Redmine user for each person who used Trac with multiple email addresses (one for Trac comments, another for git commits, where these differ)
                      • added migration for ticket comment links
                      If you are interested in any of the above changes, feel free to ask and I will provide the migration script (unfortunately the changes do not seem to make it into Redmine trunk; lots of patches I have applied have been waiting in the Redmine tracker for years - they apply cleanly, but have not been pushed to trunk).
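                      The date-conversion fix above comes down to the newer Trac schema storing timestamps with sub-second precision. As a rough illustration of the kind of conversion needed (in Python rather than the migration script's Ruby, and the microseconds-since-the-epoch detail is my assumption, not taken from the script):

```python
from datetime import datetime, timezone

def trac_time_to_datetime(value):
    """Convert a Trac timestamp to a datetime.

    Older Trac schemas stored seconds since the Unix epoch; the newer
    schema is assumed here to store microseconds, so implausibly large
    values are scaled down before conversion."""
    if value > 10**11:                # far too large to be seconds
        value = value / 1_000_000     # microseconds -> seconds
    return datetime.fromtimestamp(value, tz=timezone.utc)
```

                      Without a conversion like this, every date imported from the new schema lands thousands of years in the future, which plausibly explains why Redmine refused to start after the first import.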

                        The plugins

                        After all these steps, I had a good dataset to start with, but the functionality of Redmine was still not on par with Trac. The long Redmine plugin list (and additional GitHub searches for 'redmine plugin') came in handy here: I checked the list, tested the plugins I found interesting, and here's the final list (all tested and working with Redmine 2.5.2):
                        • PixelCookers theme - the most complete and modern redmine theme with lots of customization options
                        • redmine_auto_watchers_from_groups - everyone from the assigned group should be cc'd for each mail, that's what we used Trac default cc's for (not perfect, reported the 1st issue for the project)
                        • redmine_auto_watchers - to add the persons commenting as watcher, bugzilla style
                        • redmine_category_tree - useful for component grouping in our project, as we have one project with lots of components and subcomponents and sub-sub-components
                        • redmine_custom_css and redmine_custom_js - for customizing the last bits without having to create a custom theme
                        • redmine_didyoumean - for auto duplicate search before reporting a ticket (current trunk is broken, but last stable works)
                        • redmine_custom_workflows - for additional updates on ticket changes
                        • redmine_image_clipboard_paste - makes bug reporting for a website so much easier with a screenshot
                        • redmine_issue_status_colors - we use a color for each status to help us visualize the current status of a milestone
                        • redmine_landing_page - we only have one project, so we always want to land on the project page after login
                        • redmine_open_search - no more custom html pages building custom links for accessing a ticket, just type the number in the searchbar of the browser
                        • redmine_revision_diff - expands the diff by default (and with a bit of customization and custom code it shows the branches a given commit appears on, something my colleagues missed when taking a first look at Redmine)
                        • redmine_subtasks_inherited_fields - subtasks usually have most of the attributes inherited from the parent, so let's ease bug reporting
                        • redmine_default_version - we have a generic issue collector pool, management prioritizes bugs from there into scheduled milestones, let's use that collector as default target version
                        • redmine_tags - use tagging for bugs and wiki pages, something we used in Trac (although data not migrated)
                        • redmine_wiki_extensions, wiking, redmine_wiki_lists - additional wiki extensions, custom macros, e.g. for embedding a ticket list inside a wiki page
                        • redmine_wiki_toc - to have a table of contents for our wiki, which is kind of messy right now (we had a wiki page looking something like a ToC, but we occasionally forgot to update it)
                        • status_button - for quickly changing the status without having to open the combo and select the one to use and click update, just shows all statuses as links
                        • redmine_jenkins - awesome jenkins integration, can show build history, or even start jenkins builds from the redmine interface, no need to open jenkins anymore

                        What's missing

                        After all this setup, there are two Trac features left without complete matches:
                        • TicketQuery macro results have not been migrated, as there's no 100% match for this feature either in default Redmine or in the plugins. Depending on how much we need it, we will either create custom queries for the most important TicketQueries or (the more time-consuming option) extend the redmine_wiki_lists plugin with additional query attributes to make it as powerful as TicketQuery is in Trac.
                        • The Trac roadmap had a progress indicator for each milestone, which we could colorize based on ticket status. Redmine's progress indicator can only distinguish Open/InProgress/Closed, so there's no progress bar colorized by per-status ticket counts. However, the ticket list is shown after the progress bar (Trac doesn't show the list), and that list is something we can colorize, so we still have a visual clue of how the milestone stands.


                        All in all, it looks to me like the migration is prepared: the test migration worked, preliminary tests look promising, the speed is incomparable, the featureset is OK, and the look and feel is updated and awesome.
                        Hopefully we'll see it in action sometime soon (to the relief of me and some colleagues who got sick of waiting for Trac pages to load), with sub-5-second page loading times. So Redmine, here we come...
                        Recently my 5-year-old laptop (HP ProBook 4710s) started behaving badly (shutting down multiple times, even after a full interior cleaning), so I started looking for a replacement. This time I wanted something a bit more portable (less than 17 inch) but still OK for development (13.3 and 14 inch seemed a bit too small), so I opted for a 15.6 inch.
                        Choosing the right one was a tough decision, my requirements were:
                        • 15.6 inch with FullHD resolution (1920x1080)
                        • good battery life (4+ hours) involving an ultralow-voltage CPU (i5 42xxU or i7 45xxU)
                        • 8 GB memory
                        • SSD being a plus
                        My favourite was the Dell Inspiron 15 (7000 series), but the price was a bit higher than I wanted to pay, so I hesitated a lot, until every e-shop sold out its stock. While hunting for this laptop one day, I found the much cheaper, brand-new ASUS TransformerBook TP500 (LA/LN) series, which met almost all my requirements (Google turned up nothing on Linux compatibility), so I decided to order the i5 version (24 GB SSD + 1 TB HDD) on a Saturday. On Monday the shop informed me that unfortunately there had been a mistake in their stock calculation and they were out of stock, so I opted for an upgrade to the i7 model. That shipped in one day (sadly with an OEM install of Win 8.1).

                        After a quick first-time setup (OK, quick might be an exaggeration) of Win 8.1 and a quick start of Internet Explorer to download Firefox, I made some quick tests to see if everything was OK. The touchscreen worked, the keyboard is amazing, the resolution is OK, and the colors look wonderful; sadly, the volume down button didn't work (volume up works, so it's likely a hardware issue). I decided to return it for a replacement (hopefully a fully functional one this time), but not before checking the Linux compatibility.
                        After disabling Secure Boot and creating an EFI Fedora 20 live USB, I booted Fedora on it in a few seconds. Here's a summary:
                        • Resolution is ok, video cards (HD4400 and GeForce GT840) work
                        • Touchpad works
                        • Touchscreen works (haven't tried multitouch; I've seen reports that on its smaller sister, the TP300, only single-point touch works right now)
                        • Keyboard works
                        • Wifi did not work out of the box (with the 3.11 kernel). The Wifi+Bluetooth card is a MediaTek (Ralink) 7630. Googling revealed that ASUS X550C and HP 450-470 G1 owners also have this card; there are several requests to add support, but it's just not there yet. Fortunately MediaTek provides Linux drivers, so it might be "only" a matter of compiling the kernel driver, which means it might get into the kernel soon.
                        • Card reader did not work (again with the 3.11 kernel), but a quick google revealed that support has been added in 3.13, so it should work if Fedora is updated (hopefully ethernet works, haven't had the chance to try it) - currently with Fedora updates installed I'm using the 3.15 kernel
                        • With GTK+ 3.10, CSD windows cannot be moved by dragging the titlebar with touch; you have to use the touchpad for that. People have confirmed that this is not the case with 3.12+, which is strange (as bug 708431 is still open), but good news.
                        All in all, the experience was not perfect, but not frustrating either.

                        I'll be back with a more in-depth review including battery life and other info after I get the replacement. I'm looking forward to having fun experimenting with GNOME on touch displays and implementing touch screen support for GNOME Mines :)
                        This new release of the Vala programming language brings a new set of features together with several bug fixes.


                        • Support explicit interface method implementation.
                        • Support (unowned type)[] syntax.
                        • Support non-literal length in fixed-size arrays.
                        • Mark regular expression literals as stable.
                        • GIR parser updates.
                        • Add webkit2gtk-3.0 bindings.
                        • Add gstreamer-allocators-1.0 and gstreamer-riff-1.0 bindings.
                        • Bug fixes and binding updates.
                        The explicit interface method implementation makes it possible to implement two interfaces that have methods (not properties) with the same name. Example:

                        interface Foo {
                            public abstract int m();
                        }

                        interface Bar {
                            public abstract string m();
                        }

                        class Cls: Foo, Bar {
                            public int Foo.m() {
                                return 10;
                            }

                            public string Bar.m() {
                                return "bar";
                            }
                        }

                        void main () {
                            var cls = new Cls ();
                            message ("%d %s", ((Foo) cls).m(), ((Bar) cls).m());
                        }
                        This will output 10 bar.

                        The new (unowned type)[] syntax makes it possible to represent "transfer container" arrays. Whereas it was already possible to do List<unowned type>, now the same is possible with Vala arrays.
                        Beware that doing var arr = transfer_container_array; will not correctly reference the elements. This is a bug that will eventually get fixed. It's better to always specify (unowned type)[] arr = transfer_container_array;
                        Note that inside the parentheses only the unowned keyword is currently allowed.

                        The non-literal length in fixed-size arrays still has a bug (we lost track of it) which, if not fixed, may end up getting the feature reverted. So we advise not to use it yet.

                        Thanks to Florian for always making the documentation shine, to Evan and Rico for constantly keeping the bindings up to date with the bleeding edge, and to all the other contributors.

                        More information and download at the Vala homepage.

                        I have been a bit more quiet on this blog (and in the community) lately, but for somewhat good reasons. I’ve recently finished my PhD thesis titled On the dynamics of human locomotion and the co-design of lower limb assistive devices, and am now looking for new opportunities outside of pure academics. As such, I’m looking for a new job and I thought I would post this here in case I overlook some possibilities. I’m interested mainly in working around the Neuchâtel (Switzerland) area or working remotely. Please don’t hesitate to drop me a message.

                        My CV

                        Public service announcement: if you’re a bindings author, or are otherwise interested in the development of GIR annotations, the GIR format or typelib format, please subscribe to the gir-devel-list mailing list. It’s shiny and new, and will hopefully serve as a useful way to announce and discuss changes to GIR so that they’re suitable for all bindings.

                        Currently under discussion (mostly in bug #719966): changes to the default nullability of gpointer nodes, and the addition of a (never-null) annotation to complement (nullable).

                        I just learned of another automated build system for Vala. It's called bake. It looks pretty nice. It's written in Vala and appears to support a wide variety of languages. From what I can tell by looking at the source code, bake will write out old-school Makefiles for you.

                        The other build system, which I also have never used, is called autovala. Unlike bake, autovala appears to be Vala-specific. autovala is nice, though, in that it builds out CMake files for your project. I'm already very familiar with CMake, so that's a big plus for me.

                        I plan to check out both very soon.

                        A few days ago Atom, the hackable text editor, was completely open-sourced under the MIT license (parts of it had been open-sourced some time ago; now they have completed it by open-sourcing the core).

                        Unfortunately it is currently only available for download for Mac OS, with no Windows or Linux binaries available yet, but thanks to the nature of open source, you can simply grab the sources, download and compile Node.js (npm 1.4.4 is required, and neither Fedora 20 nor Ubuntu 14.04 provided it from their repos; they only had npm 1.3.x) and build yourself an executable. It's not always trivial: I had some issues building it both for Ubuntu 14.04 and Fedora 20, but quick DuckDuckGo searches found the solutions, and I was able to test it.
                        Update: the folks at webupd8 have created a PPA for 64-bit Ubuntu 14.04, so you might be able to try it out without the hassle of building it yourself.
                        As a first impression, it is a clean and extensible text editor, for people like me who are too lazy to learn vim or emacs.

                        It took me some time to configure Atom for using it as an IDE. The default build has support for some languages already, some plugins and themes, but there are plenty of additional packages to choose from. Here are my favourites (if these didn't exist, I would've already stopped using Atom):
                        • Word Jumper, with its default Ctrl+Alt+Left/Right reconfigured to Ctrl+Left/Right, for jumping between words - something provided by almost every product dealing with writing and navigating text
                        • Terminal Status, showing a terminal below your editor with Shift+Enter, useful for make commands or for git hackery not provided by the default git plugin. Unfortunately user input doesn't work yet (the console doesn't get the focus), so it's not perfect.
                        I checked the available packages; language support was available for most of the languages I usually work with (C, C++, Python, Java, Bash shell, GitHub Markdown, LaTeX), but unfortunately there is no support for Vala yet.

                        The GitHub folks did a wonderful job at providing documentation for everything for the community to quickly build a powerful ecosystem around the Atom core. They have links to their important guides from their main Documentation page, including a guide on how to convert a TextMate bundle. As TextMate already has a huge package ecosystem, including a Vala bundle, I have followed their guide, converted the TextMate bundle, created a github repo and published a language-vala atom package.

                        All in all, initial Vala support including syntax highlighting and code completion (and maybe some other features I am not aware of yet) is available for those eager to develop Vala code in Atom, after building it from source or after the GitHub folks provide binaries for other OSes too.

                        After a couple of discussions at the DX hackfest about cross-platform-ness and deployment of GLib, I started wondering: we often talk about how GNOME developers work at all levels of the stack, but how much of that actually qualifies as ‘core’ work which is used in web servers, in cross-platform desktop software [1], or commonly in embedded systems, and which is security critical?

                        On desktop systems (taking my Fedora 19 installation as representative), we can compare GLib usage to other packages, taking GLib as the lowest layer of the GNOME stack:

                        Package Reverse dependencies Recursive reverse dependencies
                        glib2 4001
                        qt 2003
                        libcurl 628
                        boost-system 375
                        gnutls 345
                        openssl 101 1022

                        (Found with repoquery --whatrequires [--recursive] [package name] | wc -l. Some values omitted because they took too long to query, so can be assumed to be close to the entire universe of packages.)

                        Obviously GLib is depended on by many more packages here than OpenSSL, which is definitely a core piece of software. However, those packages may not be widely used or good attack targets. Higher layers of the GNOME stack see widespread use too:

                        Package Reverse dependencies
                        cairo 2348
                        gdk-pixbuf2 2301
                        pango 2294
                        gtk3 801
                        libsoup 280
                        gstreamer 193
                        librsvg2 155
                        gstreamer1 136
                        clutter 90

                        (Found with repoquery --whatrequires [package name] | wc -l.)

                        Widely-used cross-platform software which interfaces with servers [2] includes PuTTY and Wireshark, both of which use GTK+ [3]. However, other major cross-platform FOSS projects such as Firefox and LibreOffice, which are arguably more ‘core’, only use GNOME libraries on Linux.

                        How about on embedded systems? It’s hard to produce exact numbers here, since as far as I know there’s no recent survey of open source software use on embedded products. However, some examples:

                        So there are some sample points which suggest moderately widespread usage of GNOME technologies in open-source-oriented embedded systems. For more proprietary embedded systems it’s hard to tell. If they use Qt for their UI, they may well use GLib’s main loop implementation. I tried sampling GPL firmware releases from and, but both are quite out of date. There seem to be a few releases there which use GLib, and a lot which don’t (though in many cases they’re just kernel releases).

                        Servers are probably the largest attack surface for core infrastructure. How do GNOME technologies fare there? On my CentOS server:

                        • GLib is used by the popular web server lighttpd (via gamin),
                        • the widespread logging daemon syslog-ng,
                        • all MySQL load balancing via mysql-proxy, and
                        • also by QEMU.
                        • VMware ESXi seems to use GLib (both versions 2.22 and 2.24!), as determined from looking at its licencing file. This is quite significant — ESXi is used much more widely than QEMU/KVM.
                        • The Amanda backup server uses GLib extensively,
                        • as do the clustering solutions Heartbeat and Pacemaker.

                        I can’t find much evidence of other GNOME libraries in use, though, since there isn’t much call for them in a non-graphical server environment. That said, there has been heavy development of server-grade features in the NetworkManager stack, which will apparently be in RHEL 7 (thanks Jon).

                        So it looks like GLib, if not other GNOME technologies, is a plausible candidate for being core infrastructure. Why haven’t other GNOME libraries seen more widespread usage? Possibly they have, and it’s too hard to measure. Or perhaps they fulfill a niche which is too small. Most server technology was written before GNOME came along and its libraries matured, so any functionality which could be provided by them has already been implemented in other ways. Embedded systems seem to shun desktop libraries for being too big and slow. The cross-platform support in most GNOME libraries is poorly maintained or non-existent, limiting them to use on UNIX systems only, and not the large OS X or Windows markets. At the really low levels, though, there’s solid evidence that GNOME has produced core infrastructure in the form of GLib.

                        [1] As much as 2014 is the year of Linux on the desktop, Windows and Mac still have a much larger market share.

                        [2] And hence is security critical.

                        [3] Though Wireshark is switching to Qt.

                        In the weekend, after playing around with a Flappy Bird clone on a phone, I got curious how much time it would take me to implement a desktop version. After a G+ idea I have named the project Flappy Gnome, and implemented a playable clone in Vala with a GtkArrow jumping between GtkButtons in a few hours and less than 150 lines (including empty lines and stuff).

                        Here's a quick preview of the first version:

                        A bit about the tech details: it's basically a dynamically expanding GtkScrolledWindow scrolling to the right as you progress, which creates the effect of the moving pipes; the player is moved from inside a tick callback added to the container GtkLayout.
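                        The per-tick player movement can be sketched toolkit-agnostically. In the real game this logic lives in a GTK+ tick callback and is written in Vala; the constants and class below are made up purely for illustration:

```python
# Toolkit-agnostic sketch of the per-frame player update that would run
# inside a tick callback (constants are invented for illustration).
GRAVITY = 0.5        # downward acceleration added each frame
JUMP_SPEED = -8.0    # velocity applied on jump (y grows downward)

class Player:
    def __init__(self, y=100.0):
        self.y = y
        self.velocity = 0.0

    def jump(self):
        # A jump simply resets the vertical velocity upward.
        self.velocity = JUMP_SPEED

    def tick(self):
        """Advance one frame: apply gravity, then move the player."""
        self.velocity += GRAVITY
        self.y += self.velocity
```

                        Calling tick() every frame gives the characteristic Flappy Bird arc: constant downward acceleration, with each jump restarting the upward phase.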

                        Given that this is my second Vala project written from scratch (after Valawhole) and I learned a lot from it, it seemed like a good idea to develop it further into a tutorial (for beginners); maybe someone else will find it useful too. I did start over twice to get a better code design and well-separated steps (1 commit per step), and have finally pushed it to GitHub, along with a description of each step. The resulting code is a bit longer (almost twice as long) than the initial version, but it also has more features, including CSS styling, a Restart button, better design, and so on...

                        The end result of the tutorial in its current state.

                        I'm thinking of adding a Help screen to explain the complicated controls (F2 restarts the game, Space to start the game/jump) and maybe a Game Over screen, so the tutorial might not be completely ready, but it's in a good shape.

                        I could have done better in grouping related functionality in commits, or in commenting code, and I am sure there's a better way to implement/improve this using GTK+, but it's good for a start, with some known issues:
                        In its current state it runs choppily on relatively modern dual- and quad-core CPUs with ATI cards using the open-source radeon driver (I'm not sure what else to blame), but works enjoyably on a PC with an Intel HD. Unfortunately I don't have an NVIDIA card to test with, but I'm really curious whether it works on NVIDIA with nouveau, and I would also be interested in results with the binary blob drivers (both NVIDIA and ATI), to see if they make a difference. If you have any of these and have a few minutes, please try it and comment with your findings.
                        Update 1: Feedback from people running the game on nouveau is positive, so the game seems to run smoothly on NVIDIA with the open-source driver.