When I say everything was torn apart, I mean it

Preparing the case

Choosing a non-mATX-compatible case to start with gave me major headaches but, simply put, I found no mATX case with a similar look. I had to work quite a bit to make the G5 case accept an mATX motherboard.
During shipping, as usual for these computers, the outer case stands got bent, resulting in a less pleasant look. To fix it, I had to rip the whole thing apart, meaning taking out the inner case to be able to "bend" the outer case stands back into their original position.
I did not expect that I would have to do this, but as I already had the case torn apart, I decided to apply a new paint job. It is not perfect, but it's OK for me: the outer case, with grey base paint and metallic grey paint applied over it, looks similar to the original (except for the Apple logo, which is mostly gone). The inner case was painted matt black, and it looks fine. However, when mounting the inner case back into the outer case, the black paint came off in some places, so I had to reapply it.
I also had to cut the back IO plate opening as close to the side as possible to fit an mATX IO plate: the standard specifies 45x158 mm, but the stock G5 backplate is somewhere around 40x190 mm.

G5 PSU internals replaced

Modding the PSU

  • Remove PSU internals
  • Get an ATX PSU with a 120mm fan on top (in my case a Seasonic SS330HB)
  • Disassemble it completely (remove the cooling fan from the top and the case)
  • Mount the internals of the power supply in the G5 power supply case
  • Make or buy a longer cable with a Y-splitter with 2-pin male plugs for the fans
  • Mount the new 60mm fans (I have used Scythe Mini Kaze 60mm)
  • The resulting PSU
  • Assemble the whole thing again

Preparing mATX motherboard mount

  • Use an old mATX motherboard as a template
  • Break the mounts standing in the way of the motherboard
  • Mark the mounting holes
  • Use (a part of) the original cable organizer for the SATA power cable going to the HDD cage/optical drive
  • Mount the old mATX motherboard with glue applied to the stand-offs, so that they stick to the case (I did not use the new board at first, as I had to push hard for the stand-offs to stick, and I did not want to damage the new one)
  • Test the wiring of the power button and the power LED with the old mATX motherboard (I used a different LED, a red one, to match the motherboard LEDs)
  • Wire USB and audio
  • Remove the old mATX motherboard
  • Mount the new mATX motherboard in place

The complete PC part list for the build is:
PCPartPicker part list / Price breakdown by merchant

CPU: Intel Core i7-6700T 2.8GHz Quad-Core OEM/Tray Processor - $366.42
CPU Cooler: ARCTIC Alpine 11 Plus Fluid Dynamic Bearing CPU Cooler - $12.17
Motherboard: MSI B150M MORTAR Micro ATX LGA1151 Motherboard - $85.70
Memory: Kingston HyperX Fury Black 16GB (2 x 8GB) DDR4-2133 Memory - $89.88
Storage: Kingston SSDNow V300 Series 120GB 2.5" Solid State Drive - $50.00
Storage: Toshiba 1TB 3.5" 7200RPM Internal Hard Drive - $50.00
Video Card: XFX Radeon HD 4550 1GB Video Card - $25.00
Case Fan: ARCTIC Arctic F8 PWM 31.0 CFM 80mm Fan - $4.30
Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.17
Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.17
Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.50
Case Fan: ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan - $4.50
Other: PowerMac G5 - $25.00
Prices include shipping, taxes, rebates, and discounts
Total: $725.81
Generated by PCPartPicker 2016-08-11 09:17 EDT-0400


Bill of additional materials used so far:
1x grey base paint - $6
1x matt black paint - $6
1x metallic silver paint - $3
1x matt black paint - $3

motherboard template - $2.5
motherboard stands ~ $2.5
power supply ~ $23
2x Scythe Mini Kaze fans for the PSU - $14
1x Bracket adapter 2x2.5 HDD/SSD to 3.5 bay for mounting SSD - $4

I realized quite some time ago that my PC was struggling to keep up with the pace, so I decided it was time for an upgrade (after almost 6 years with my Dell Inspiron 560 minitower with a C2D Q8300 quad-core).

I "upgraded" the video card a couple of months ago because the old one did not support the OpenGL 3.2 needed by GtkGLArea. First I went with an ATI Radeon HD6770 I received from my gamer brother, but it was loud and I did not use it enough to justify its cost (108W TDP, which bumped the idle PC's consumption by 30-40W, from 70-80W to 110-120W), so I traded it for another one: a low-consumption (passively cooled, 25W TDP) ATI Radeon HD4550 that works well with Linux and all my Steam games whenever I play (I'm a casual gamer). Consumption went back to 90-100W.

After that came the power supply, replacing the Dell-provided 300W supply with a more efficient one, a 330W Seasonic SS330HB. This resulted in another 20W drop in power consumption, idling below 70W.

The processor is fairly old, has a 95W TDP, and performs way below today's i7 processors with the same TDP, so it is worth upgrading. That means a motherboard + CPU + cooler + memory upgrade; as I have the rest of the components, I will reuse them and add a new (old) case to the equation, a PowerMac G5 from around 2004.

So here's the basic plan:
Case - PowerMac G5 modded for mATX compatibility and repainted - metallic silver for the outer case, matt black for the inner case - inspired by Mike 7060's G5 Mod
CPU - Intel core i7 6700T - 35W TDP
Cooler - Arctic Alpine 11 Plus - the silent, bigger brother of the fanless Arctic Alpine 11 Passive (rated for up to 35 W TDP; the i7 6700T is right at the edge, and I did not want to risk it)
Motherboard - socket 1151, DDR4, USB 3, 4-pin CPU and case fan headers, HDMI and DVI video outputs were the requirements - I chose the MSI B150M Mortar because of guaranteed Linux compatibility (thanks, Phoronix), 2 onboard PWM case fan headers + a PWM-controlled CPU fan
Memory - 2x8GB DDR4 Kit - Kingston Hyperx Fury
PSU - Seasonic SS-330HB mounted inside the G5 PSU case, original G5 PSU fans replaced with 2x 60mm Scythe Mini Kaze for silent operation
Case Cooling - Front 2x 92mm - Arctic F9 PWM PST in the original mounts

Video card - onboard Intel, or optionally the ATI Radeon HD4550 if the onboard graphics turn out not to be enough (probably will not happen)
Optical drive (not sure if it is required) - start with existing DVD-RW drive
Storage - 120 GB Kingston V300 + 1TB HDD - existing

Plans for later
(later/optional) update optical drive to a Blu-Ray drive
(later/optional) Arctic F9 PWM PST fans in the original G5 intake mounts, or 120 mm Arctic F12 PWM PST in new intake mounts.

I'll soon be back with details on preparing the case, probably the hardest part of the whole build. The new parts are already ordered (the CPU was pretty hard to find in stock, and will be delivered in a week or so instead of the usual 1-2 days).

Valum now supports dynamically loadable server implementations with GModule!

Servers are typically looked up in /usr/lib64/vsgi/servers following the libvsgi-<name>.so pattern (although this is highly system-dependent).

This works by setting the RPATH of the VSGI shared library to $ORIGIN/vsgi/servers so that it looks into that folder first.

The VSGI_SERVER_PATH environment variable can be set as well to explicitly provide a directory containing implementations.
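
For illustration, here is roughly how that lookup can be reproduced by hand. This is a sketch only: the fallback directory and the little helper program are assumptions made for the example, not VSGI's actual resolution code.

int main () {
    var name = "scgi";
    // VSGI_SERVER_PATH wins over the system default mentioned above.
    var dir = Environment.get_variable ("VSGI_SERVER_PATH") ?? "/usr/lib64/vsgi/servers";
    // On Linux, Module.build_path yields something like '.../libvsgi-scgi.so'.
    stdout.printf ("%s\n", Module.build_path (dir, "vsgi-" + name));
    return 0;
}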

To implement a compliant VSGI server, all you need is a server_init symbol which complies with the ServerInitFunc delegate, like the following:

[ModuleInit]
public Type server_init (TypeModule type_module) {
    return typeof (VSGI.Custom.Server);
}

public class VSGI.Custom.Server : VSGI.Server {
    // ...
}

It has to return a type that is derived from VSGI.Server and instantiable with GLib.Object.new. The Vala compiler will automatically generate the code to register classes and interfaces into the type_module parameter.

Some code from CGI has been moved into VSGI to provide uniform handling of its environment variables. If the protocol you want complies with that, just subclass (or directly use) VSGI.CGI.Request and it will perform all the required initialization.

public class VSGI.Custom.Request : VSGI.CGI.Request {
    public Request (IOStream connection, string[] environment) {
        base (connection, environment);
    }
}

For more flexibility, servers can be loaded with ServerModule directly, allowing one to specify an explicit lookup directory and control when the module should be loaded or unloaded.

var cgi_module = new ServerModule (null, "cgi");

if (!cgi_module.load ()) {
    assert_not_reached ();
}

var server = Object.new (cgi_module.server_type);

I received very useful support from Nirbheek Chauhan and Tim-Philipp Müller for setting up the necessary build configuration for that feature.

I recently finished and merged support for content negotiation.

The implementation is really simple: one provides a header, a string describing expectations, and a callback invoked with the negotiated representation. If no expectation is met, a 406 Not Acceptable is raised.

app.get ("/", negotiate ("Accept", "text/xml; application/json",
                         (req, res, next, ctx, content_type) => {
    // produce according to 'content_type'
}));

Content negotiation is a nice feature of the HTTP protocol allowing a client and a server to negotiate the representation (eg. content type, language, encoding) of a resource.

One very nice part allows the user agent to state a preference and the server to express quality for a given representation. This is done by specifying the q parameter, and the negotiation process attempts to maximize the product of both values.

The following example expresses that the XML version is poor quality, which is typically the case when it’s not the source document. JSON would be favoured (implicitly q=1) if the client does not state any particular preference.

accept ("text/xml; q=0.1, application/json", () => {

});
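
To make the product rule concrete with made-up client preferences: if the client sends Accept: text/xml; q=0.8, application/json; q=0.4 against the qualities above, the products are 0.8 × 0.1 = 0.08 for XML and 0.4 × 1 = 0.4 for JSON, so JSON is selected; XML would only win if the client valued it more than ten times as much as JSON.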

Mounted as a top-level middleware, it provides a nice way of setting a Content-Type: text/html; charset=UTF-8 header and filtering out non-compliant clients.

using Tmpl;
using Valum;

var app = new Router ();

app.use (accept ("text/html", () => {
    return next ();
}));

app.use (accept_charset ("UTF-8", () => {
    return next ();
}));

var home = new Template.from_path ("templates/home.html");

app.get ("/", (req, res) => {
    home.expand (res.body, null);
});

This is another step toward a 0.3 release!

Ever heard of fork?

using GLib;
using VSGI.HTTP;

var server = new Server ("", (req, res) => {
    return res.expand_utf8 ("Hello world!");
});

server.listen (new VariantDict ().end ());
server.fork ();

new MainLoop ().run ();

Yeah, there’s a new API for listening and forking with custom options…

The fork system call will actually copy the whole process into a new process, running the exact same program.

Although memory is not shared, file descriptors are, so you can have workers listening on common interfaces.

I notably tested the whole thing on our cluster at IRIC. It’s a 64-core Xeon setup.

wrk -c 1024 -t 32 http://0.0.0.0:3003/hello

With a single worker:

Running 10s test @ http://0.0.0.0:3003/hello
  32 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    54.35ms   95.96ms   1.93s    98.78%
    Req/Sec   165.81    228.28     2.04k    86.08%
  41741 requests in 10.10s, 5.89MB read
  Socket errors: connect 35, read 0, write 0, timeout 13
Requests/sec:   4132.53
Transfer/sec:    597.28KB

With 63 forks (64 workers):

Running 10s test @ http://0.0.0.0:3003/hello
  32 threads and 1024 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    60.83ms  210.70ms   2.00s    93.58%
    Req/Sec     2.99k   797.97     7.44k    70.33%
  956577 requests in 10.10s, 135.02MB read
  Socket errors: connect 35, read 0, write 0, timeout 17
Requests/sec:  94720.20
Transfer/sec:     13.37MB

It’s about 1,500 req/sec per worker and a speedup by a factor of 23. The latency is almost unaffected.
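
(Sanity check on those numbers: 94,720 ÷ 4,133 ≈ 22.9 for the speedup, and 956,577 requests over 10.1 seconds split across 64 workers is roughly 1,480 req/sec each.)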

For the past few days, I’ve been working on a really nice libmemcached GLib wrapper.

  • main loop integration
  • fully asynchronous API
  • error handling

The whole code is available under the LGPLv3 from arteymix/libmemcached-glib.

It should reach 1.0 very quickly; only a few features are missing:

  • a couple of function wrappers
  • integration for libmemcachedutil
  • async I/O improvements

Once released, it might be interesting to build a GTK UI for Memcached upon that work. Meanwhile, it will be a very useful tool to build fast web applications with Valum.

Well, it turns out that managing a Rock Band is more time consuming than I first thought, especially if they're getting ready to release their first album. It also turns out that building Debian packages is hard as well, particularly if you're trying to set up a Jenkins CI system to automate the process. Despite all this, I'm only a few weeks behind my initially projected release date and I'm pretty excited to announce that the day has finally come and Version 1.0 of Valadate is now ready for public consumption!

I'll go through the full feature set (warts and all) shortly, but for those who can't wait to dive in, here's how you can install it...

From Source

For the adventurous, you can download the source and build and install it yourself. You will need to have the automake toolchain set up on your system and the development libraries for the following installed:

  • glib-2.0
  • libxml-2.0
  • libxslt
  • json-glib-1.0

You'll also need Gtk-Doc and Valadoc if you want to build the API documentation.

Grab the source:

git clone https://github.com/chebizarro/valadate.git

In the source directory run:

./autogen.sh
make

You can pass the --enable-docs flag to autogen.sh if you have Valadoc and Gtk-Doc installed and it will build the API documentation in the docs directory.

To install, you then just need to run the following with root privileges:

make install

And that's it, you should be ready to roll. Of course, you'll need to go through this process every time there's a new release, so it might be easier to just install it using your system's package manager. Depending on what that is, you can do the following:

Debian

Add the repository's key

curl https://www.valadate.org/jenkins@valadate.org.gpg.key | sudo apt-key add -

Add the following to your Software Sources:

deb https://www.valadate.org/repos/debian valadate main

Then you can install Valadate with:

sudo apt-get update
sudo apt-get install valadate

Fedora 23

Add the following to /etc/yum.repos.d/valadate.repo

[valadate]
name=valadate
baseurl=http://www.valadate.org/repos/fedora/$releasever/$basearch
repo_gpgcheck=1
gpgcheck=1
enabled=1
gpgkey=http://www.valadate.org/jenkins@valadate.org.gpg.key

Then run with root privileges:

dnf update
dnf install valadate

Those are the distributions that are available so far, but there's a Homebrew package for Mac OS X that's more or less ready to push. Given the way Valadate works, a Windows release will probably be a little while off as there are a few platform-specific issues to be worked through. If you have a favourite platform that you would like to see packaged, submit an issue on GitHub and I'll see what I can do.

So now you've got Valadate, how do you use it?

The easiest way is to create a subclass of the TestCase abstract class and add test methods to it: any method that starts with test_, has no parameters and returns void. These methods will then be detected and executed automatically at runtime.

namespace MyTest {
    public class BookTest : Valadate.Framework.TestCase {

        public void test_construct_book() {

            // Arrange ...

            // Act ...

            // Assert ...
        }
    }
}

To compile, pass the following flags and parameters where mytest-0.vala is the source code file containing the above test.

$ valac --library mytest-0 --gir mytest-0.gir --pkg valadate-1.0 -X -pie -X -fPIE mytest-0.vala

In order for everything to work correctly, the name of the output binary needs to exactly match that of the .gir file (less the file extension). This will then generate an executable which can be run on the Command Line:

$ ./mytest-0

/LibraryBookTest/construct_book: ** Message: mytest-0.vala:15: running

OK

To run the test binary with TAP output pass the --tap flag:

$ ./mytest-0 --tap

# random seed: R02Sddf35dad90ff6d1b6603ccb68028a4f0

1..1

# Start of LibraryBookTest tests

** Message: mytest-0.vala:15: running

ok 1 /LibraryBookTest/construct_book

# End of LibraryBookTest tests

The [Test] annotation and parameters are also available for giving test classes and methods more readable names and for supporting asynchronous tests.

namespace MyTest {
    [Test (name="Annotated TestCase with name")]
    public class MyTest : Valadate.Framework.TestCase {

        [Test (name="Annotated Method With Name")]
        public void annotated_test_with_name () {
            assert_true(true);
        }


        [Test (name="Asynchronous Test", timeout=1000)]
        public async void test_async () {
            assert_true(true);
        }

        [Test (skip="yes")]
        public void skip_test () {
            assert_true(false);
        }
    }
}

$ ./mytest-0 --tap

1..3
# Start of Annotated TestCase with name tests
ok 1 /Annotated TestCase with name/Annotated Method With Name
ok 2 /Annotated TestCase with name/Asynchronous Test
ok 3 /Annotated TestCase with name/skip_test # SKIP Skipping Test skip_test
# End of Annotated TestCase with name tests

Testing Gtk applications

If you want to test Gtk based applications you will need to use the valadate-gtk package (available in the same repository). Its usage is almost identical:

$ valac --library mytest-0 --gir mytest-0.gir --pkg valadate-gtk-1.0 -X -pie -X -fPIE mytest-0.vala

The valadate-gtk package makes sure the Gtk Test environment is properly loaded and configured, otherwise you will get all sorts of funky errors.

RTFM

The Wiki is pretty scant at the moment but will eventually have detailed instructions on installing and setting up your toolchain with Valadate as well as integrating it with Continuous Integration systems.

There are a number of sample projects available here which showcase Valadate's features and how to use it with different toolchains and platforms. This will be continuously updated as new features are added.

The API reference for Vala can be found here and for C here. These documents are automatically generated by Jenkins whenever a new release is made so should always be up-to-date.

Next steps...

Obviously (hopefully), there will be a tsunami of bug reports once people start using it and finding them. I've tested it on a large array of platforms but there's no saying what will happen once it's in the wild. Aside from that, I am very much keen to get to work on adding BDD support via Gherkin and gradually replacing some of the crustier and more unwieldy elements of GTest under the hood. This will have to come in the time I can find between my regular consulting work which has recently taken off in a big way, and managing a Rock band that's just about to put an album out. Good times!

We salute you

This post describes a feature I will attempt to implement this summer.

Declaring an async delegate simply extends a traditional delegate with the async trait.

public async delegate void AsyncDelegate (GLib.OutputStream @out);

The syntax of a callback is the same. It’s not necessary to add anything, since the async trait is inferred from the type of the variable holding it.

AsyncDelegate d = (@out) => {
    yield @out.write_all_async ("Hello world!".data, null);
};

Just like regular callbacks, asynchronous callbacks are first-class citizens.

public async void test_async (AsyncDelegate callback,
                              OutputStream  @out) {
    yield callback (@out);
}

It’s also possible to pass an asynchronous function which is type-compatible with the delegate signature:

public async void hello_world_async (OutputStream @out)
{
    yield @out.write_all_async ("Hello world!".data);
}

yield test_async (hello_world_async, @out);

Chaining

I still need to figure out how to handle chaining for async lambdas. Here are a few ideas:

  • refer to the callback using this (weird..)
  • introduce a callback keyword
AsyncDelegate d = (@out) => {
    Idle.add (this.callback);
    yield;
};

AsyncDelegate d = (@out) => {
    Idle.add (callback);
    yield;
};

How it would end up for Valum

Most of the framework could be revamped with the async trait in ApplicationCallback, HandlerCallback and NextCallback.

app.@get ("/me", (req, res, next) => {
    if (req.lookup_signed_cookies ("session") == null) {
        return yield next (req, res);
    }
    return yield res.extend_utf8_async ("Hello world!");
});

The semantics of the return value would simply state whether the request has been handled, instead of whether it will eventually be handled.

As you might already know, GNOME 3.20 has been released, with a number of improvements, fixes, future-proofing changes and preparations for Wayland prime time.



Here's a short list of my favourite features from Delhi:
  • Files search improvements (see here)
  • Photos has basic photo editing support - crop and filters (see here)
  • Control center mouse panel revamped (see here)
  • Keyboard shortcuts window for some apps (see here) - although I have not managed to do this for any of the apps I maintain, I plan to do it for 3.22, as I consider it a useful feature in the sea of keyboard shortcuts
 I will shortly summarize what happened in some of the games from GNOME:
  • Mines got keyboard navigation updates and fixes, thanks to Isaac Lenton
  • Atomix 
    • got a starting window showing a gameplay tip
    • has updated artwork to match the GNOME 3 world, thanks to Jakub Steiner
  • Five or more got a new hires icon, thanks to Jakub Steiner
All in all, congrats to everyone who contributed to GNOME 3.20, keep up the good work.

    I have recently introduced a basepath middleware and I thought it would be relevant to describe it further.

    It’s been possible for a while to compose routers using subrouting. This is very important for writing modular applications.

    var app = new Router ();
    var user = new Router ();
    
    user.get ("/user/<int:id>", (req, res, next, ctx) => {
        var id = ctx["id"] as string;
        var u  = new User.from_id (id);
        res.extend_utf8 ("Welcome %s".printf (u.username));
    });
    
    app.rule ("/user", user.handle);
    

    Now, using basepath, it’s possible to design the user router without specifying the /user prefix on rules.

    This is very important, because we want to be able to design the user router as if it were the root and rebase it on need upon any prefix.

    var app = new Router ();
    var user = new Router ();
    
    user.get ("/<int:id>", (req, res, next, ctx) => {
        res.extend_utf8 ("Welcome %s".printf (ctx["id"].get_string ()));
    });
    
    app.use (basepath ("/user", user.handle));
    

    How it works

    When passing through the basepath middleware, requests whose path has a prefix match with the basepath have that prefix stripped and are forwarded.
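
    To make that concrete, here is a tiny standalone sketch of the path rewriting involved; rebase_path is a hypothetical helper written for this post, not Valum’s implementation.

    string? rebase_path (string basepath, string path) {
        if (path == basepath)
            return "/";
        if (path.has_prefix (basepath + "/"))
            return path.substring (basepath.length);
        return null; // no prefix match: the request goes to the next middleware untouched
    }

    void main () {
        assert (rebase_path ("/user", "/user/5") == "/5");
        assert (rebase_path ("/api/v1", "/api/v1/objects") == "/objects");
    }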

    But there’s more!

    That’s not all! The middleware also handles errors from the Success.CREATED and Redirection.* domains that set the Location header.

    user.post ("/", (req, res) => {
        throw new Success.CREATED ("/%d", 5); // rewritten as '/user/5'
    });
    

    It also rewrites the Location header if it was set directly.

    user.post ("/", (req, res) => {
        res.status = Soup.Status.CREATED;
        res.headers.replace ("Location", "/%d".printf (5));
    });
    

    Rewriting the Location header is only applied to absolute paths starting with a leading slash /.

    It can easily be combined with the subdomain middleware to provide a path-based fallback:

    app.subdomain ("api", api.handle);
    app.use (basepath ("/api/v1", api.handle));
    

    I often profile Valum’s performance with wrk to ensure that no regressions hit the stable release.

    It helped me identify a couple of mistakes in various implementations.

    Anyway, I’m glad to announce that I have reached 6.3k req/sec on a small payload, all on my very low-grade Acer C720.

    The improvements are available in the 0.2.14 release.

    • wrk with 2 threads and 256 connections running for one minute
    • Lighttpd spawning 4 SCGI instances

    Build Valum with examples and run the SCGI sample:

    ./waf configure build --enable-examples
    lighttpd -D -f examples/scgi/lighttpd.conf
    

    Start wrk

    wrk -c 256 -t 2 -d 1m http://127.0.0.1:3003/
    

    Enjoy!

    Running 1m test @ http://127.0.0.1:3003/
      2 threads and 256 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    40.26ms   11.38ms 152.48ms   71.01%
        Req/Sec     3.20k   366.11     4.47k    73.67%
      381906 requests in 1.00m, 54.31MB read
    Requests/sec:   6360.45
    Transfer/sec:      0.90MB
    

    There are still a few things to get done:

    • hanging connections benchmark
    • throughput benchmark
    • logarithmic routing #144

    The trunk buffers SCGI requests asynchronously, which should improve the concurrency with blocking clients.

    Lighttpd is not really suited for throughput because it buffers the whole response. Sending a lot of data is problematic and uses up a lot of memory.

    Valum is designed with streaming in mind, so it has a very low (if not negligible) memory footprint.

    I reached 6.5k req/sec, but since I could not reliably reproduce it, I preferred posting these results.

    Things have really been moving quickly since I last posted, with the development branch starting to take shape. When I sat down to look at the list of requirements, I decided that the best place to start for a first release would be to at least replicate the feature set of the original. To recap, those were:

    • Automatic test discovery like JUnit or .NET testing framework.
    • Running tests for all parameters from specific set.
    • Utility functions for waiting in a main loop until specified event or timeout occurs.
    • Support for asynchronous tests. Methods declared async in Vala will be automatically run under a main loop until completion or a configurable timeout.
    • Utility functions providing temporary directory to tests.

    These have been translated into GitHub issues and the Waffle board, along with a few additional features that I thought should make the first cut.

    These have all been added to the Version 1.0.0 milestone and well, I'm pleased to say that after a little under two weeks of concerted effort, I have (re)implemented almost all of the above features! Based on the level of effort so far, I am now envisaging an initial release as early as the 1st of March.

    I'm actually pretty excited about what has come out of the process so far. One of the original itches I set out to scratch was the verbosity of unit tests in Vala and through the voodoo of xml/xslt/json and GModule I believe I have achieved that. While the implementation details are frankly a little scary, the resulting user facing API hides them quite nicely.

    With a correctly configured build script, using Valadate is as easy as declaring a subclass of TestCase and adding annotated instance methods like so:

    That's it. No main function required, no need to add tests in the TestCase constructor. Clean and simple, the way it should be. The code snippet above is a real live test from the Valadate framework (the actual test of the test, so to speak) and it runs beautifully, producing TAP output both to file and to the terminal -

    Love that green!

    Astute readers will notice that it is still GLib.Test running things under the hood, although it is sufficiently encapsulated to allow its gradual replacement without affecting the way end users write their tests. It should now be possible to add things like events and notifications without breaking users' code.

    The TestRunner class handles test discovery via a GIR file generated when the test is compiled. This was a key concept of the original Valadate but I took it a step further, combining it with GModule to create a kind of "poor person's" Introspection module. The test binary needs to be compiled as a Position Independent Executable (PIE) for this to work, which is presently only supported on Linux and Mac OSX, although the fundamentals should apply to executable DLLs on Windows as well.

    The TestRunner currently supports [Test], [AsyncTest] and [SkipTest] with parameters. Although it is trivial to add new annotations, I am going to keep them to a minimum and move to a plugin based model which will allow plugins to decorate and control how test methods are run.

    Of course, if all of this is a little too funky for you, you can still do things the old way by adding each test method in the TestCase's constructor:
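
    (The snippet from the original post is not preserved here; the following reconstruction shows the general idea. The add_test helper and its signature follow the common Vala TestCase pattern and are assumptions, not necessarily Valadate's exact API.)

    public class BookTest : Valadate.Framework.TestCase {

        public BookTest () {
            // assumed registration helper, in the spirit of the usual Vala
            // TestCase pattern; the real Valadate method may differ
            add_test ("construct_book", test_construct_book);
        }

        public void test_construct_book () {
            assert_true (true);
        }
    }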

    and providing your own main entry point like so:
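
    (Again, the original snippet is not reproduced; a minimal sketch using the plain GLib.Test API, reusing the BookTest class from the earlier examples, would look like this.)

    int main (string[] args) {
        Test.init (ref args);
        // register each test by hand, mirroring the plain GLib.Test approach
        Test.add_func ("/LibraryBookTest/construct_book", () => {
            var test = new MyTest.BookTest ();
            test.test_construct_book ();
        });
        return Test.run ();
    }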

    in this case, you don't need to compile it as a PIE binary or add the Method annotations (they won't work anyway). You can still use all of Valadate's other awesome features such as asynchronous tests, you'll just have more redundant code to manage.

    With this feature now implemented and on the way to being solidly tested, I feel it's time to merge the development branch and roll a release. That way I can start getting feedback (and bug reports) on what's been done so far before implementing the meatier features like Gherkin integration and a GUI.

    It goes without saying that the only way anybody will be able to use Valadate is if there is clear documentation and working examples, so to this end there are now several example projects and a Wiki. I've also added support for building Valadoc and GtkDoc docs from the project source tree. There's still a bit of work to do before the first release, but the infrastructure is now in place (and I can close issue #1!).

    So that's all I'm going to go into in this post, so I can get back to documenting the work I've done and getting the release ready for deployment. The next post on Valadate will be about the release, so now's as good a time as any to jump in and let me know what you think, in the comments below or in the "usual" places. Thanks for watching!

    My up key stopped working, so I’m kind of forced into vim motions.

    All warnings have been fixed and I’m looking forward enforcing --experimental-non-null as well.

    The response head is automatically written on disposal and the body is not closed explicitly when a status is thrown.

    In the meantime, I managed to backport write-on-disposal into the 0.2.12 hotfix.

    I have written a formal grammar and am working on an implementation that will be used to reverse rules.

    VSGI Redesign

    There’s some work on a slight redesign of VSGI where we would allow the ApplicationCallback to return a value. It would simplify the call of a next continuation:

    app.get ("", (req, res, next) => {
        if (some_condition)
            return next (req, res);
        return res.body.write_all ("Hello world!".data, null);
    });
    

    In short, the returned boolean tells whether the request is or will eventually be handled.

    The only thing left is to decide what the server will do about unhandled requests.

    Update (Feb 21, 2016)

    This work has been merged and it’s really great because it provides major improvements:

    • no more 404 at the bottom of perform_routing, we use the return value to determine if any route has matched
    • OPTIONS work even if no route has matched: a 404 Not Found would be thrown with the previous approach

    Yeah, we are even handling OPTIONS! It produces a 0-length body with the Allow header set to the list of available methods for the resource.

    Expand!

    The Response class now has expand and expand_utf8 methods that work similarly to flatten for Request.

    app.get ("", (req, res) => {
        return res.expand_utf8 ("Hello world!");
    });
    

    It will deal with writing the head, piping the passed buffer and closing the response stream properly.

    The asynchronous versions are provided if gio (>=2.44) is available during the build.

    SCGI improvements

    Everything is no longer buffered in a single step: the buffer is resized on demand if the request body happens not to fit in the default 4 kiB buffer.

    I noticed that set_buffer_size literally allocates and copies over data, so we avoid that!

    I have also worked on some defensive programming to cover more cases of failure with the SCGI protocol:

    • encoded lengths are parsed with int64.try_parse, which prevented SEGFAULT
    • a missing CONTENT_LENGTH environment variable is properly handled

    I noticed that SocketListener also listens on IPv6 if available, so the SCGI implementation has a touch of modernity! This is not available (yet) for FastCGI.

    Right now, I’m working on supporting UNIX domain sockets for the SCGI and libsoup-2.4 implementations.

    It’s rolling at 6k req/sec behind Lighttpd on my shitty Acer C720, so enjoy!

    I have also fixed errors with the FastCGI implementation: they came down to a rather major issue in the Vala language. In fact, it’s not possible to return a code and throw an exception simultaneously, which led to an inconsistent return value in OutputStream.write.

    To temporarily fix that, I had to suppress the error and return -1. I’ll have to hack this out eventually.

    In short, I managed to make VSGI more reliable under heavy load, which is a very good thing.

    After a short break to work on one of my other projects (a Rock 'n Roll band) and finish setting up Jenkins, I'm back at work on the project now officially known as Valadate.

    As I've mentioned before, there were some initial attempts at developing a TDD framework for Vala, the most extensive of them being Valadate. After some consideration, and a review of the existing codebase, I decided that the most practical approach would be to assume maintainership of it and refactor/rewrite as necessary to meet the new requirements that have been gathered.

    Presently, the existing Valadate package provides a number of utility classes for such things as asynchronous tests and temporary directories as well as a command line Test Runner. The procedure for writing tests is to create a concrete implementation of the Valadate Fixture interface with each unit test being a method whose name starts with test_. The test is then compiled into a binary (shared library) which is run by the Test Runner. Test discovery is done by loading the .vapi and .gir files generated by Vala when the binary is compiled. The build system is Waf, but for the purposes of reviewing the code, I ported it to autotools, a build system I am more comfortable with.

    The code compiles, but it has suffered from some bitrot, with quite a number of deprecation warnings, especially the asynchronous tests. The actual framework is quite lean and uses the GLib Test and TestSuite classes to group and run the tests it finds in the binary. In total there probably isn't more than 1000 SLOC in the whole project. While I see some interesting ideas in the current code, I have decided that the best approach is to start again from scratch and incorporate whatever is useful and send the remainder to binary heaven || hell.

    So now that I have the repository for Valadate set up and updated to build with autotools, I will use this as the master from which we will derive the various development branches, using the widely practiced "GitHub Flow", a repository management process which embodies the principles of Continuous Integration. In a nutshell, it involves six discrete steps:

    1. Create a branch for developing a new feature
    2. Add commits to the branch
    3. Open pull requests
    4. Discuss and review the code
    5. Deploy
    6. Merge

    The underlying principle (or "one rule" as GitHub calls it) is that the master branch is always deployable - which in the case of a tool like Valadate means it can be pulled, compiled and run at any time. So while the existing master branch of Valadate is not exactly production ready, it is in the state where the Yorba Foundation stopped maintaining it. This at least gives us a baseline from which to start and some continuity with the original project, if only giving credit to the original developers for their hard work.

    We're ready to branch our new version, so what do we call it? The most commonly used system is Semantic Versioning which follows the MAJOR.MINOR.PATCH convention:

    • MAJOR version when you make incompatible API changes,
    • MINOR version when you add functionality in a backwards-compatible manner, and
    • PATCH version when you make backwards-compatible bug fixes.

    The last release of Valadate was 0.1.1 and it's not entirely clear if they were strictly following the Semantic Versioning scheme. There are separate API and SO version numbers which may not be applicable in our first release. So for simplicity, I will use the original version number as the starting point. As we are going to make some fairly substantial changes that would break the hell out of the 0 API, we should probably increment that to 1. Since we are starting from scratch, the MINOR version will revert to 0 as well. So the branch name that we will begin working on our new implementation under will be 1.0.0.

    Sweet. Let's dial up those digits:

    $ git checkout -b version-1.0.0

    The local repository is now a new branch called version-1.0.0, which will allow us to start really overhauling the code without affecting the "deployable" master branch. Since we're going to break more things than a stoner in a bong shop, we may as well reorganise the file layout to something more conventional and dispose with the Waf build system altogether.

    Our new repository directory structure looks like this:

    • valadate
      • libvaladate
      • src
      • tests
        • libvaladate
        • src

    This structure is a fairly commonly used pattern in developing medium to large size projects: you essentially replicate the source tree within the tests folder. This makes it easier to locate individual tests and means your integration tests will follow the same basic pattern as the main source tree does at compile time. With smaller projects, you could just get away with a simple tests directory - with the relatively small SLOC that Valadate has now it could probably all reside within a single source file! Given that we expect the project to grow significantly though, especially when we start adding complex features like BDD tests and a GUI as well as several layers of tests of tests, we should probably start with a more scalable structure.

    OK, now we're finally ready to start writing tests. Given that this is a Testing Framework, we're facing a potential chicken and egg situation - what framework do we use to test our framework? In this case, the solution is pretty straightforward, we have the GLib Test suite at our disposal which we can use to write the base tests that will guide the design of the framework. Once these tests all pass, we can move on to using Valadate to test itself when adding more complex testing features like Gherkin/Cucumber. Finally, we can use those features for even more complex testing such as user acceptance and integration tests for the project as a whole. The process is iterative and cascading, meaning that as features at one level are sufficiently tested they will become available for the next successive layer of tests. You could think of it like an Onion, if you like, or a series of waterfalls but my mental image at the moment is more like this:

    But that's just me. Use whatever metaphor you like, it's your head after all.

    So we begin using the basic or 'naked' (as I like to call it) GLib Testing Framework. Now the GLib Testing Framework is actually pretty powerful and was originally designed according to the xUnit interface. It's fairly straightforward to use, as this example from the Gnome Vala Wiki shows:

    void add_foo_tests () {
        Test.add_func ("/vala/test", () => {
            assert ("foo" + "bar" == "foobar");
        });
    }
    
    void main (string[] args) {
        Test.init (ref args);
        add_foo_tests ();
        Test.run ();
    }
    

    It also has the gtester and gtester-report utilities which are well integrated with existing toolchains and are able to output test results in a variety of formats.

    The main drawbacks of the GLib Testing Framework, and hence the need for Valadate at all, are:

    • It is not particularly Object Oriented - the base classes are all [Compact] classes and do not inherit from a common Test base class. This makes extending them in Vala difficult.
    • The test report functions need a lot of configuration to produce usable output, including several 'drivers' or shell scripts for postprocessing.
    • It is not particularly well documented
    • It doesn't scale very well to large projects or for Behavior Driven Design.
    • It's verbose and difficult to read.

    Most of these limitations are solvable in one form or another, so it should serve as a sufficient base to get started. If we follow the principles of Test Driven Design it should become obvious when we need to build something more powerful or flexible.

    Which tests and features do we write first? Well, that's determined by the requirements we've gathered and how we've prioritised them. One of the many great things of having a wife who is a CTO for a foundation developing open source land tenure software is that I get to vicariously experience how she manages her team's workflow and the tools they use to do that. One of the recent tools that they have started using for project management is Waffle, which integrates seamlessly with GitHub Issues and Pull Requests. Waffle is the next step beyond the Trello board that I was using to initially gather the requirements for Valadate. Waffle allows anyone to add a feature request or file a bug to the Backlog either through the Waffle board for the project or by simply creating a new issue on the GitHub page. The latter is the most straightforward as you don't need to log into Waffle at all.

    One of my wife's philosophies of Open Source is that it's not enough to just release your source code. A true Open Source project is also developed in the open, meaning that the history behind why certain design decisions were made, and by who, is recorded and all issues and pull requests are reviewed and where they meet the project's (i.e. enduser's) requirements, are fixed or merged, regardless of the source. Public repositories are, at the very least mirrors if not the working versions of the current master and branches, not just static snapshots of a final release.

    Taking an Open from the Start approach is also something that is essential in building a strong, diverse community of users around your product. Sarah Sharp, a long-time Linux Kernel contributor, has written extensively about this on her blog. One of the things that I'm going to take the opportunity to lock down now is a Code of Conduct for contributors. I'm not going to go into the pros and cons of having a Code of Conduct - as I don't see any cons in the first place! So, as Sarah says on her blog -

    We don’t write legal agreements without expert help. We don’t write our own open source licenses. We don’t roll our own cryptography without expert advice. We shouldn’t roll our own Code of Conduct.1

    With that in mind, I've signed the project on to the Open Code of Conduct, which is used by GitHub and is inspired by the codes of conduct and diversity statements of projects like Django, Python and Ubuntu. It's worth a read, even if it's your bread and butter, but here's my summary - "don't be an asshat" - and you can tweet me on that.

    So that's all for this post, join me again soon for Part 5 where I will outline the product roadmap for the first release and delve into when we know we've tested enough with coverage reports. Thanks for reading and please feel free to join the conversation if you have something to say!

    I have just backported important fixes from the latest developments in this hotfix release.

    • fix blocking accept call
    • async I/O with FastCGI with UnixInputStream and UnixOutputStream
    • backlog defaults to 10

    The blocking accept call was a real pain to work around, but I finally ended up with an elegant solution:

    • use a threaded loop for accepting a new request
    • delegate the processing into the main context

    FastCGI multiplexes multiple requests on a single connection and thus it’s hard to perform efficient asynchronous I/O. The only thing we can do is poll the single file descriptor we have, and to do it correctly, why not reuse gio-unix-2.0?

    The streams are reimplemented by deriving UnixInputStream and UnixOutputStream and overriding read and write to write a record instead of the raw data. That’s it!

    I have also been working on SCGI: the netstring processing is now fully asynchronous. I couldn’t backport it as it depends on other breaking changes.

    First of all, happy new year to you all (yes, I know we are already in February)!

    Long time no post, I've been very busy with work, new projects, new clients, new technologies, preparing the move to a new home, the second child, and a lot more on the personal side.
    Handling all of the above at the same time severely cut into my open-source contributions, so I haven't been able to do much beyond code reviews and minor fixes, plus the releases of the GNOME modules I am responsible for (GNOME Games rule!).
    During the winter break, between Christmas and New Year's Eve, I managed to work a bit on AppMenu integration for Atomix (which is not completely ready, as the appmenu is not displayed, despite being there when checking with GtkInspector).
    In the meantime lots of good things have happened, e.g. the Fedora 23 release, which is (again) the best Fedora release of all time, thanks to everyone contributing.

    All in all, I just wanted to share that I'm not dead yet, just very busy, but I'm hoping to get back to normal life with a couple more contributions to open source, and to share some more experiences with gadgets, e.g. the Android+Lubuntu dual-boot open-source TV box I got for Christmas.

    Continuous Integration or CI is widely used in Test Driven Design for keeping the project's codebase tight, reducing errors and making sure there is always a working build available for deployment. It provides a means to automate the whole build and testing process, so developers can focus on writing their tests and the code that passes them. By setting up a system that builds and tests the software on its supported platforms, deployment issues can be identified early and distribution of new releases automated.

    Since one of the objectives of Valadate is to integrate with existing toolchains, and wanting to leverage the numerous benefits of CI for the project itself, I took a short DevOps break to set up a Jenkins based system on my local network. Jenkins is a widely used open source Continuous Integration server written in Java, so it can pretty much run anywhere, providing the system has enough juice. Taking this to its extreme, I decided to install it on a spare Raspberry Pi 2 I had lying around. So why Jenkins and why on a Raspberry Pi?

    Firstly, Jenkins is a robust and well maintained platform that is widely used. It has a plethora of plugins that integrate it tightly with Git, Docker, TAP and numerous other CI tools and protocols. It works on the master-slave model, where the master server directs the build operations of any number of slaves. A slave can be any other computer on the network that Jenkins can communicate with, either directly through SSH or with a plugin. It is highly configurable and it just works. It seemed like a good choice to start with.

    The Jenkins web interface

    Secondly, the Raspberry Pi. One of my considerations when setting up the CI system was that the master server should be internet accessible and available 24-7. Given that when it isn't running jobs the server is mostly idle, using a full-powered computer would be a waste of electricity and CO2. It occurred to me that one of my spare Raspberry Pis could do the job, so after a quick Google to confirm that it was possible, I proceeded with the install. The one comprehensive guide I had found suggested a lot of mucking about with downloading source packages, but since it was for the previous version of Raspbian I tried sudo apt-get install jenkins and whaddya know, it just worked.

    With the Jenkins server up and running, I added my recent port of Gherkin as a test job and set up a machine running Fedora 23 as a slave and in 5 minutes it had checked out, compiled and run the unit tests on it and...

    Build Status

    \O/ \O/ \O/

    Despite being relatively low-powered, the Raspberry Pi seems up to the task, as nothing is actually being built on it. Some configuration pages take a while to load, but for ordinary usage it's quite snappy. Not only that, but you can do cool things with it as well.

    Emboldened by my initial success, I moved on to setting up a Docker slave. For this setup, I revived an old server that had been mothballed, with the idea that as a build slave it doesn't need to be online all the time, and with Wake On LAN (WOL) I can have Jenkins wake the server up when it needs to do a build and put it back to sleep when it's done. This is still on the to-do list, but seems fairly straightforward.

    In this configuration, the slave is a Docker host that starts up and runs a container built from a Dockerfile in the repository's root. It is this container that runs the build, not the host, so it is possible to test your software on pretty much any platform that can be dockerized. Cool eh? So I set up an Ubuntu container and...

    Build Status

    Huh?!? I looked at the log and...

    ./.libs/libgherkin3.so: undefined reference to `g_value_init_from_instance'
    

    Dammit! In my rush to port Gherkin, I had done it on my new Fedora 23 box and hadn't actually tested it on Ubuntu at all. I checked the docs and sure enough, GLib.Value.init_from_instance() is only available from GLib 2.42 onwards, and Ubuntu 15.04 ships with 2.40. D'oh! So now I either have to refactor the code or declare GLib 2.42 a prerequisite.
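
    For plain GObject instances the refactor is fairly mechanical; here is a sketch of the GLib 2.40-friendly route (a hypothetical helper, not necessarily what the Gherkin port ended up using):

    // initialise the GValue from the instance's GType and set the object
    // explicitly, instead of relying on Value.init_from_instance ()
    GLib.Value value_from_object (GLib.Object instance) {
        var val = GLib.Value (instance.get_type ());
        val.set_object (instance);
        return val;
    }

    void main () {
        var obj = new GLib.Object ();
        var val = value_from_object (obj);
        assert (val.get_object () == obj);
    }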

    This particular case is a really good example of the benefits of Continuous Integration. If I had had the Jenkins server set up before I ported the code, I would have noticed the incompatibility almost immediately and would have been able to deal with it then, rather than refactoring later.

    As nice as it would be to ignore the existence of other operating systems, the sad truth is that not everyone uses Linux as their primary desktop, including many people who might want to use my software. With this harsh reality in mind, I decided to set up Windows and Mac OSX slaves to test the cross platform compatibility of my projects.

    For the Windows slave, I set up a new Windows 7 VM in VirtualBox, running on the same server as the Docker host. For the build toolchain, I installed MinGW64 and MSYS2 and all of the necessary libraries and voila! Well, not quite voila, the MinGW linker is soooo sloooow that it took quite some time to debug but is now working just fine. The process isn't quite fully automated - I still need to manually spin it up and shut it down. There is a VirtualBox plugin to do this, but it doesn't presently support version 5. I also learned the hard way that you need to disable automatic updating for Windows, otherwise it will get stuck at the failed boot recovery screen. I am also thinking that for speed, I will cross compile the Windows binaries in a Docker container and run the tests in the Windows VM to make sure they work.

    Now, if you've been to any major Linux conference in the last few years, you'd be forgiven for thinking you were at WWDC with all the Apple hardware being toted about. Heck, my wife, an Open Source guru, was a long time MacBook Air user until she got a Microsoft Surface. And it's true, it is some of the coolest, most expensive hardware you can run a Linux Virtual Machine on. Don't get me wrong, I have one on my desk, I just mostly use it for email, IRC and the occasional Photoshop session (at least until Gimp gets better tablet support). Unfortunately, it's been a little neglected so it needs a bit of a clean up before it can be pressed into service, which will hopefully be by the start of next week.

    Along the way I also discovered that our crappy Comcast provided Cable Modem doesn't support hairpin DNS resolutions when I forwarded the Jenkins server ports. I tried to solve this by setting up dnsmasq on the Raspberry Pi but it still required manually editing the resolv.conf files on each machine. In the end I just put the Comcast Modem into bridge mode and set up a trusty old WRT-54GL running DD-WRT as the new Gateway/Router. It still has some problems with IPv6 DHCP but otherwise is running just fine.

    So there you have it, a working cross-platform Continuous Integration system building Vala based projects. It's live on the internet now, so you can check it out here (Github login required).

    OK, now we're ready to start building Valadate! Tune in again soon for Part 4. Who tests the tester?

    Less than a week ago I posted a call for input on my proposal to build a Test Driven Development Framework for Vala and feedback has been slowly trickling in. You can see a summary here which has also been distilled into a Trello board which will become the Product Backlog and Product Roadmap. The list is looking fairly complete so far, so I figure I'm just about ready to close it off and work on a Release Plan. Then I can finally start writing code! Phew.

    The requirements gathered so far are pretty much in line with other testing frameworks, but here's a good time to review our Product Vision again and see if we're heading in the right direction. I've highlighted the parts of the statement which correspond to features so we can compare.

    For Vala developers who need to test their code, < insert cool tool name > is a powerful testing framework that provides behavioral, functional and unit testing features to help them write great Open Source software. Unlike other testing frameworks, < insert cool tool name > is designed especially for Vala while integrating seamlessly into existing toolchains.

    Let's look at the Requirements we've gathered so far and see if these features would meet this vision:

    Product Backlog

    provides behavioral, functional and unit testing features

    • Test discovery
    • Async tests
    • Test Runner
    • Support for Gherkin
    • Asserts
    • Test protected behavior
    • Abstract Tests

    designed especially for Vala

    • Genie support

    integrating seamlessly into existing toolchains

    • Output TAP
    • Compatible with gtester
    • CLI and standalone GUI
    • PIE binaries
    • Integrate with CI tools like Jenkins
    • Tests can compile and run without framework installed

    So far so good! Of course, this is an Agile project, so this list is not exhaustive or final and we can expect some features to be added and others modified or removed altogether. The important thing is that our features align with our vision. The result of this prioritization process will be the Product Roadmap and the Product Backlog, which will guide sprints and daily development efforts and inform the release schedule. Before we do that though, we need some guidance on how to break these features up into functional areas which will determine how we structure our code base and where to start writing our tests. To do this we need a System Architecture.

    The System Architecture and TDD

    One of the misconceptions that newcomers to TDD have is that you don't write any code until you've written a test for it. This leaves many people new to the concept scratching their heads about where to start, as even creating a simple command line application requires a certain amount of boilerplate code to be written before you can even start processing the user's input. At this point, a lot of beginners may inadvertently write reams of redundant tests, start reinventing already well tested wheels or just give up on TDD altogether. There are very few times when your code will be executing without any dependencies (if only libc) so you will almost always be coding within an existing framework, if only loosely. Most of these interactions with other frameworks should be encapsulated in integration tests which are developed in parallel with the unit tests. The tests which inform our system design are those which test its unique features. Our System Architecture defines these interactions and boundaries and gives us a basic skeleton upon which to start laying down a codebase. Once this is in place, we can start writing actual tests.

    With a project like this we already have the advantage of several examples of prior art, chief amongst these the xUnit architecture. xUnit is a loose family of frameworks, including JUnit and NUnit, which stipulates that any implementation share a common architecture, as shown in the diagram below:

    xUnit Class Diagram

    From this diagram we can already begin to see how we will structure the code. At a minimum we will be creating separate files and tests for Test, TestRunner, TestSuite, TestCase, TestFixture and TestResult. Yep, tests for tests. I may have said this would get interesting... This will give us the minimum we need to set up a toolchain, create a repository and start pushing to it. Hooray, we're about to start writing code! Except that it still doesn't have a name...
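    To make this concrete, here is a minimal sketch of what such a skeleton might look like in Vala. Only the class names come from the xUnit diagram; the method names and everything else are my own assumptions and will almost certainly change once real tests start driving the design:

      // Hypothetical skeleton following the xUnit class diagram.
      // Everything beyond the class names is an assumption, not a final API.
      public abstract class Test : GLib.Object {
          public abstract void run (TestResult result);
      }

      public class TestResult : GLib.Object {
          public int failures { get; private set; default = 0; }

          public void add_failure (string message) {
              failures++;
              stderr.printf ("FAIL: %s\n", message);
          }
      }

      public class TestSuite : Test {
          private List<Test> tests = new List<Test> ();

          public void add_test (Test test) {
              tests.append (test);
          }

          public override void run (TestResult result) {
              foreach (var test in tests) {
                  test.run (result);
              }
          }
      }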

    What's in a name? That which we call a rose by any other name would smell as sweet.

    William Shakespeare

    Thanks Bill. I'm still not 100% sold on Valadate, even though it does reflect the Product Vision of being made especially for Vala and not being strictly limited to unit testing. Calling it VUnit would reflect its xUnit origins, but it's not like there's any rigid API to conform to. Technically it doesn't matter at this stage of development, but I would like to avoid having to refactor the code later just to change the name. There's still some more work that can be done before laying down any code, so I'll let it percolate for a day or two longer before making a firm decision. Now's as good a time as any to speak up if you feel passionately one way or the other.

    But at least it's got a logo! Let me know what you think...

    A stylized solar system seen at an oblique angle

    The base was designed by misirlou and I added the nice colors. It's meant to symbolize the eponymous asteroid that gives Vala its name.

    That's all for now, tune in again soon when I discuss the Roadmap and Backlog as well as how I set up Jenkins CI on a Raspberry Pi.

    I was looking at the asteroid 131 Vala, the origin of the programming language's name, on the JPL Small Object Database when I heard the sad news of David Bowie's passing. Like many of my age, I grew up not knowing a world without space travel, the threat of nuclear war or the Thin White Duke. No other artist captured that sense of both wonder and fear of a species walking a tightrope towards its destiny. Would we stumble and fall into oblivion or would we make it to the stars? Were all our heroes like Major Tom? Brittle and flawed yet compellingly courageous.

    I was thinking these things and more while browsing the JPL website and I noticed that the venerable old Orbit Viewer applet was no longer working. I wanted to watch some celestial bodies move that morning to the sounds of David Bowie so I downloaded the applet source and ported it to Vala. For the music, I added a small gstreamer player that loads and plays a midi file from midiworld.com.

    Porting Java code to Vala is relatively easy, especially when it is older code and doesn't have too many of the odd little workarounds that have crept into the language over the years. The quickest part was the library of functions for calculating the orbits, as this is pretty much pure math. The trickiest bits were the interface, which I recreated in Glade, and the drawing routines for animating the whole thing. I have been working on a port of the Box2D physics engine, so I had already solved most of these problems before. The end result is what you see above.

    It still needs some work before it's complete - only the play button works and there's no way to adjust the viewport yet, but these are fairly trivial to implement. If anyone is interested in the code, I'll post a link to a Github repo - both the physics and animation routines are particularly interesting if you're starting out, even with the lack of comments.

    Thanks for watching, and thanks Starman, for all those Golden Years...

    I first came across Vala when scoping out the possibility of updating a venerable old Open Source program from GTK+2 to GTK+3. I wasn't quite sure what to make of Vala at first - it was an Object Oriented Programming language based on the GObject type system that used C as an intermediary language for the gcc compiler. I looked through a bunch of the samples, and was struck by the elegance and parsimony of the syntax and how instantly understandable it was from my familiarity with Java and C#. I played with a few example apps and I was surprised at how much fun it was to use as well. It was GObject without the endless reams of boilerplate code. Properties actually made sense now and using signals is a snap. IDE support was about as good as any other language, especially in Geany, my tool of choice. I was hooked.

    There was only one problem. I'm a big fan of TDD and BDD, and after many hours of intense Google-Fu I was able to find precious little on the topic with regard to Vala. What there was boiled down to using the GLib Test library and a nice little adapter class to group individual test cases into a test suite. The end result was run through gtester on the command line, usually as part of a toolchain like Autotools. This was straightforward enough for simple applications with limited user interactions, but it doesn't really scale for BDD. Some work had been done on a framework called Valadate but it was abandoned by its maintainers a few years ago. This was a real blocker for me going forward. My philosophy is that you can occasionally write great software in moments of furious creativity, but it takes boring old testing to consistently produce good software.
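    For context, this is roughly what that status quo looks like: a minimal GLib Test example in Vala (the adapter class mentioned above just wraps functions like these into a test suite), typically run through gtester:

      // Minimal example of the existing GLib Test approach in Vala.
      void test_addition () {
          assert (1 + 1 == 2);
      }

      int main (string[] args) {
          GLib.Test.init (ref args);
          GLib.Test.add_func ("/math/addition", test_addition);
          return GLib.Test.run ();
      }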

    Feel the hate flow through you

    The thing with Free and Open Source Software is that it's only free at the cashier. Once you get it home you have to pay an ongoing maintenance cost in time if you want to keep using it. That time could be spent making minor contributions like filing bug reports, through helping new users in forums or translating apps, all the way to implementing features yourself. I see real potential in Vala but I feel that its usability is being hampered by this missing feature. The developers of the language have given the world a great gift and their time is better spent maintaining it. The current solution is mostly good enough, but it generates a lot of extra code to be maintained and has no support for BDD. "Somebody should do something about it" is a phrase that makes me groan whenever I hear it, because I usually think that that someone should be the person saying it. Well, someone should do something about it.

    So this blog post is an effort to get the ball rolling on that something. Although I have some free time now, it's not an endless wellspring. I also don't want to start a vaporware or abandonedware project that gets added to the list of good ideas people had at one point in time. I would like to build something that is sustainable, that evolves with its users and that could be easily passed on to other maintainers should I no longer have enough time to devote to it. I imagine this has been the manifesto of a thousand failed Open Source projects, but it's better than nothing, so here goes...

    Getting the ball rolling

    Since this is a project to bring TDD and BDD to Vala, I would like to use Agile techniques to plan and develop it. The first steps in this case are setting up a Product Vision and Requirements Gathering. I'll take a stab at the first one (quoted, because VISION STATEMENT).

    For Vala developers who need to test their code, < insert cool tool name > is a powerful testing framework that provides behavioral, functional and unit testing features to help them write great Open Source software. Unlike other testing frameworks, < insert cool tool name > is designed especially for Vala while integrating seamlessly into existing toolchains.

    I guess that makes me the Product Owner as well. I don't really care what it's called; Valadate is as good a name as any, but I'm open to suggestions. If there are enough ideas we might have a poll.

    The next step will be the Requirements Gathering, one I have a number of ideas about already but I would really like to hear from the potential end users. I've started a Trello Board to that effect and if you would like to suggest a feature or comment on one that's already there, head on over and make yourself heard. If that's not your medium, you can ping me on Twitter or hit me up on the Vala IRC channel (irc.gimp.org #vala), as bizarro. A tool like this will live or die on its fitness for purpose, so please don't hold back.

    That's all for now, in the next post I'll summarize the requirements that have been gathered so far and lay out the options for the system architecture as well as a provisional schedule for the first release. Thanks for tuning in and don't forget to join the conversation if you have something to add.

    I've had some spare time recently to work on some pet projects and get them into a decent enough shape that they could be subjected to the withering gaze of the Panopticon. One in particular is a port of the Gherkin language to Vala. So what is Gherkin exactly and why should you care?

    From the Gherkin wiki:

    Gherkin is the language that Cucumber understands. It is a Business Readable, Domain Specific Language that lets you describe software’s behaviour without detailing how that behaviour is implemented.

    Gherkin is available for a host of languages and is tightly integrated into JUnit for example. Its syntax is pretty straightforward and designed to be intelligible by non-technical people:

      Feature: Some terse yet descriptive text of what is desired
      Textual description of the business value of this feature
      Business rules that govern the scope of the feature
       Any additional information that will make the feature easier to understand
    
       Scenario: Some determinable business situation
         Given some precondition
           And some other precondition
          When some action by the actor
           And some other action
           And yet another action
          Then some testable outcome is achieved
           And something else we can check happens too
    
       Scenario: A different situation
           ...
    

    The Gherkin parser for Vala, which you can get here, reads in Feature files and builds a tree of elements that can then either be manipulated directly or output as JSON.
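    Just to give a flavour of how it could be used, here is a hypothetical sketch; the actual class and method names in the repository may differ, so treat Gherkin.Parser, parse_file and to_json as placeholders:

      // Hypothetical usage sketch; the real API of the Vala Gherkin parser
      // may use different names for the parser, the parse call and the JSON output.
      int main (string[] args) {
          var parser = new Gherkin.Parser ();
          var feature = parser.parse_file ("features/login.feature");
          stdout.printf ("%s\n", feature.to_json ());
          return 0;
      }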

    The parser by itself is not tremendously useful, but is one of the building blocks for a comprehensive testing framework for Vala and by extension, GObject that I am presently scoping. If this is something you're interested in, and I assume it is since you've read this far, then I'd encourage you to join the conversation.

    I’m using the thunderbird conversations add-on and am generally quite happy with it. One pain point however is that its quick reply feature has a really small text area for replying. This is especially annoying if you want to reply in-line and have to scroll to relevant parts of the e-mail.

    A quick fix for this:

    1. Install the Stylish thunderbird add-on
    2. Add the following style snippet:
      .quickReply .textarea.selected {
        height: 400px !important;
      }

    Adjust height as preferred.

    Since the new design of GNOME Mines has been implemented, several people have complained about the lack of colors and the performance issues.

    The lack of colors was tackled last cycle with the introduction of theming support, including a classic theme with the same colored numbers we all know from the old days of GNOME Mines.

    Now, on to the performance issues. In most cases these are not real performance issues but rather playability issues for hardcore miners aiming for a sub-10-second time: the reveal transition time is set to 0.4 seconds, which adds up to a few seconds during a game and can push a run over the 10-second mark. To overcome this limitation, I have implemented a disable animations option in the Appearance settings, allowing users to turn off the transitions completely and chase the best scores they can. This can also come in handy in the rare cases when the transitions cause real performance issues. The next step would be to count the number of manually revealed tiles, multiply it by the transition time when animations are enabled, and subtract this from the total time at the end of the game, so that timing is roughly the same for players with and without animations.
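    A rough sketch of that compensation in Vala; the 0.4 second constant comes from the post, while the function and parameter names are just placeholders for illustration:

      // Sketch only: subtract the animation overhead from the final time.
      // The names here are placeholders, not the actual gnome-mines code.
      const double REVEAL_TRANSITION_TIME = 0.4; // seconds, as mentioned above

      double adjusted_game_time (double total_time, uint manually_revealed_tiles, bool animations_enabled) {
          if (!animations_enabled) {
              return total_time;
          }
          return total_time - manually_revealed_tiles * REVEAL_TRANSITION_TIME;
      }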

    Feedback, ideas and comments are always welcome: are you a hardcore miner? Will you disable the eye-candy animations to get better scores? Which theme are you using when you play GNOME Mines?
    I've been fairly busy recently, so all my colleagues upgraded to F22 before I did, even though I was usually the one installing systems in beta or release candidate state. After seeing two fairly successful upgrades I decided to take an hour to upgrade my system, hoping that it would fix an annoying gdm issue I've seen recently. Each day after unlocking the system (I cold-boot each day, so after my first break) one of my three displays doesn't turn on, and I have to go to display settings, change something, click apply and then revert to get all my displays back. Subsequent screen unlocks work correctly; I only get this once a day at the first unlock.

    After updating 3000+ packages in about an hour, I rebooted, got to the login screen, typed my password, the login screen disappeared, the grey texture appeared, and the system hung.
    The steps to get back to a usable computer:
    • Switching to another VT revealed that everything was running, including gnome-shell; gdm status was OK.
    • Tried restarting gdm, but it didn't help.
    • Checking the common issues for Fedora 22 gave me a hint that gdm running with Wayland could be the culprit, so I changed to X11-based gdm, but that didn't help either.
    • The GNOME on Wayland session managed to log in, but froze when I pressed the meta key to access the applications.
    • Settings from the top right corner did work however, so I managed to create another user, which could log in.
    • That led me to the conclusion that there was a problem with my configuration. I'm still not sure what it was, and I will never find out: as the computer to be upgraded was my work PC and I needed to get stuff done, I decided to reset my configuration. As I couldn't find a way to reset all dconf settings to their defaults, I backed up and deleted the following folders: .gnome, .gnome2, and some other ones I can't remember, but they should be easy to find with a search for "resetting all gnome shell settings". That did the job: I had to reconfigure my gnome shell extensions and settings, but at least I managed to log in. All in all, it wasn't the best upgrade experience I ever had.
    The result however is pretty good (though one of my displays still turns off at the first unlock), and it was definitely worth working on it (I knew it would be; on my home computer I've been running F22 since the Alpha ;) )
    Thanks to everyone who contributed to this release, your work is welcome and appreciated.
      Recently I've been thinking about the real value of my contributions to free software and open-source software.

      I've realized that I'm mostly a "seasonal" open-source contributor: I choose a project, do some bug triaging and bug-fixing, and when I'm "stuck" with the project (i.e. the remaining bugs/features would require serious effort and quite some time to implement) I jump to the next project, do the same there, and repeat this over and over again. Of course, in the meantime I get attached to some projects and "maintain" them, so I keep track of new bugs and fix them whenever I can, review patches and make releases, but I don't really consider myself an active contributor.
      I've had a "season" for Ubuntu software-management related contributions (software-center, update-manager, synaptic), a System Monitor season, an elementary software season and a GNOME Games season (and that one's not over yet). I also made some minor contributions (just for fun) to projects like LibreOffice, or recently Eclipse (in the context of the GreatFix initiative, which was a really interesting and rewarding experience).

      I am not sure whether all this is a good thing or a bad thing. I enjoy hacking on open-source projects, for fun, for profit, for experience, for whatever. The most useful skill I've gained is easily finding my way around large codebases for bugfixing. But here is what can be seen from the outside (e.g. from the point of view of a company looking for a developer): this guy keeps jumping from one project to another and never got really deep into any of the projects he worked on (my longest "streak" of working on a single project was one year). Fortunately OpenHub has a chart for contributions to GNOME as a whole, and it shows that I'm contributing to GNOME constantly, even if only with a few commits per month.

      Another thing about my contributions is the programming language I use: at work I'm a Java developer, but that cannot be seen at all from my contributions-by-language chart on OpenHub, as the only Java contributions it shows are a few commits to a friend's project implementing Java bindings for a Go library. This will change a bit in the near future, as the Eclipse project should appear there soon with a few commits, but it still shows that I'm most experienced with C++, which I'm not :)

      I've started to realize that the dream job I'm looking for would make use of all of this: working primarily on open-source software in Java, while still giving me the freedom to occasionally work on other open-source software. Does that job exist? Unfortunately, not in my country. I saw a job posting recently with a description which would probably fit into my dream-job category, but I'm a bit afraid I wouldn't be a good candidate, as it lists some nice-to-have skills which I don't have, due to the area I have worked on in Java until now (server-side Java done with Spring vs J2EE).

      Does your company value open-source contributions when hiring? If yes, which is preferred: in-depth knowledge of one project, or could shifting between projects also be useful? Is being open-minded and language-agnostic better, or is knowing one language to its guts better?

      A while back I started working on a project called Squash, and today I’m pleased to announce the first release, version 0.5.

      Squash is an abstraction layer for general-purpose data compression (zlib, LZMA, LZ4, etc.).  It is based on dynamically loaded plugins, and there are a lot of them (currently 25 plugins to support 42 different codecs, though 2 plugins are currently disabled pending bug fixes from their respective compression libraries), covering a wide range of compression codecs with vastly different performance characteristics.

      The API isn’t final yet (hence version 0.5 instead of 1.0), but I don’t think it will change much.  I’m rolling out a release now in the hope that it encourages people to give it a try, since I don’t want to commit to API stability until a few people have given it a try. There is currently support for C and Vala, but I’m hopeful more languages will be added soon.

      So, why should you be interested in Squash?  Well, because it allows you to support a lot of different compression codecs without changing your code, which lets you swap codecs with virtually no effort.  Different algorithms perform very differently with different data and on different platforms, and make different trade-offs between compression speed, decompression speed, compression ratio, memory usage, etc.

      One of the coolest things about Squash is that it makes it very easy to benchmark tons of different codecs and configurations with your data, on whatever platform you’re running.  To give you an idea of what settings might be interesting to you I also created the Squash Benchmark, which tests lots of standard datasets with every codec Squash supports (except those which are disabled right now) at every preset level on a bunch of different machines.  Currently that is 28 datasets with 39 codecs in 178 different configurations on 8 different machines (and I’m adding more soon), for a total of 39,872 different data points. This will grow as more machines are added (some are already in progress) and more plugins are added to Squash.

      There is a complete list of plugins on the Squash web site, but even with the benchmark there is a pretty decent amount of data to sift through, so here are some of the plugins I think are interesting (in alphabetical order):

      bsc
      libbsc targets very high compression ratios, achieving ratios similar to ZPAQ at medium levels, but it is much faster than ZPAQ. If you mostly care about compression ratio, libbsc could be a great choice for you.

      DENSITY
      DENSITY is fast. For text on x86_64 it is much faster than anything else at both compression and decompression. For binary data decompression speed is similar to LZ4, but compression is faster. That said, the compression ratio is relatively low. If you are on x86_64 and mostly care about speed DENSITY could be a great choice, especially if you’re working with text.

      LZ4
      You have probably heard of LZ4, and for good reason. It has a pretty good compression ratio, fast compression, and very fast decompression. It’s a very strong codec if you mostly care about speed, but still want decent compression.

      LZHAM
      LZHAM compresses similarly to LZMA, both in terms of ratio and speed, but with faster decompression.

      Snappy
      Snappy is another codec you’ve probably heard of. Overall, performance is pretty similar to LZ4—it seems to be a bit faster at compressing than LZ4 on ARM, but a bit slower on x86_64. For compressing small pieces of data (like fields.c from the benchmark) nothing really comes close. Decompression speed isn’t as strong, but it’s still pretty good. If you have a write-heavy application, especially on ARM or with small pieces of data, Snappy may be the way to go.

      If you’re like me, when you download a project and want to build it the first thing you do is look for a configure script (or maybe ./autogen.sh if you are building from git).  Lots of times I don’t bother reading the INSTALL file, or even the README.  Most of the time this works out well, but sometimes there is no such file. When that happens, more often than not there is a CMakeLists.txt, which means the project uses CMake for its build system.

      The realization that the project uses CMake is, at least for me, quickly followed by a sense of disappointment.  It’s not that I mind that a project is using CMake instead of Autotools; they both suck, as do all the other build systems I’m aware of.  Mostly it’s just that CMake is different and, for someone who just wants to build the project, not in a good way.

      First you have to remember what arguments to pass to CMake. For people who haven’t built many projects with CMake before this often involves having to actually RTFM (the horrors!), or a consultation with Google. Of course, the project may or may not have good documentation, and there is much less consistency regarding which flags you need to pass to CMake than with Autotools, so this step can be a bit more cumbersome than one might expect, even for those familiar with CMake.

      After you figure out what arguments you need to type, you need to actually type them. CMake has you define variables using -DVAR=VAL for everything, so you end up with things like -DCMAKE_INSTALL_PREFIX=/opt/gnome instead of --prefix=/opt/gnome. Sure, it’s not the worst thing imaginable, but let’s be honest—it’s ugly, and awkward to type.

      Enter configure-cmake, a bash script that you drop into your project (as configure) which takes most of the arguments configure scripts typically accept, converts them to CMake’s particular style of insanity, and invokes CMake for you.  For example,

      ./configure --prefix=/opt/gnome CC=clang CFLAGS="-fno-omit-frame-pointer -fsanitize=address"

      will be converted to

      cmake . -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/gnome -DCMAKE_INSTALL_LIBDIR=/opt/gnome/lib -DCMAKE_C_COMPILER=clang -DCMAKE_C_FLAGS="-fno-omit-frame-pointer -fsanitize=address"

      Note that it assumes you’re including the GNUInstallDirs module (which ships with CMake, and you should probably be using).  Other than that, the only thing which may be somewhat contentious is that it adds -DCMAKE_BUILD_TYPE=Debug; Autotools usually builds with debugging symbols enabled and lets the package manager take care of stripping them, but CMake doesn’t.  Unfortunately some projects use the build type to determine other things (like defining NDEBUG), so you can get configure-cmake to pass "Release" for the build type by passing it --disable-debug, one of two arguments that don’t mirror something from Autotools.

      Sometimes you’ll want to be able to pass non-standard arguments to CMake, which is where the other argument that doesn’t mirror something from Autotools comes in: --pass-thru (--pass-through, --passthru, and --passthrough also work), which just tells configure-cmake to pass all subsequent arguments to CMake untouched.  For example:

      ./configure --prefix=/opt/gnome --pass-thru -DENABLE_AWESOMENESS=yes

      Of course none of this replaces anything CMake is doing, so people who want to keep calling cmake directly can.

      So, if you maintain a CMake project, please consider dropping the configure script from configure-cmake into your project.  Or write your own, or hack what I’ve done into pieces and use that, or really anything other than asking people to type those horrible CMake invocations manually.


      I have a Pirelli P.VU2000 IPTV set-top box which I don't use, but would like to put to good use. It runs Linux, has HDMI, stereo RCA audio output, 2x USB 2.0 and an IR receiver + remote, so it'd be nice to have it play internet radio if that's possible (theoretically it is an IPTV receiver + media center, so it should be able to play media). And of course, let's not forget the advantage of learning new things, as I am aware that I could get similar media players fairly cheaply :)

      Unfortunately I'm not too good at hacking, and I haven't found a way to access a root console on it yet (after two days of googling/duck-duck-going and reading several Russian and Greek forum posts translated with Google Translate), so if anyone's up to the challenge of helping me break into it (to access a root shell) in the spirit of knowledge-sharing, I'd be grateful for any kind of help.

      I've already spent a few days on this, with the following results:
      • the device boots, gets an IP from my router, but then errors out with "wrong DHCP answer", likely caused by my network not being in the subnet the IPTV provider expects; still, accessing the media player functionality without IPTV access would be nice
      • after opening the box I managed to get a serial console with some minimal output; I guess this is the bootloader logging to the serial console:
        39idxfsef2f712148b75194ab1d3c691b55bd4d3a5e956dS         
                                                                  
        #xos2P4a-99 (sfla 128kbytes. subid 0x99/99) [serial#a225d]
        #stepxmb 0xac                                            
        #DRAM0 Window  :    0x# (20)                             
        #DRAM1 Window  :    0x# (15)                             
        #step6 *** zxenv has been customized compared to build ***
        #step22                                                  
        #ei
      • scanning the ports with nmap reveals the following:
        Nmap scan report for 192.168.2.100
        Host is up (0.00043s latency).
        Not shown: 65534 closed ports
        PORT     STATE SERVICE VERSION
        2396/tcp open  ssh     Dropbear sshd 0.52 (protocol 2.0)
        | ssh-hostkey:
        |   1024 70:ff:b6:6b:94:f4:4e:19:14:40:7d:40:de:07:b9:ac (DSA)
        |_  1040 c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a (RSA)
        Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

        Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 13.52 seconds
      • telnet to the port found with nmap works, but no prompt comes up:
        telnet 192.168.2.100 2396
        Trying 192.168.2.100...
        Connected to 192.168.2.100.
        Escape character is '^]'.
        SSH-2.0-dropbear_0.52
      • ssh into the STB with root fails, as only publickey authentication seems to be enabled:
        ssh root@192.168.2.100 -p2396
        The authenticity of host '[192.168.2.100]:2396 ([192.168.2.100]:2396)' can't be established.
        RSA key fingerprint is c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '[192.168.2.100]:2396' (RSA) to the list of known hosts.
        Permission denied (publickey).
      • checked for possible dropbear 0.52 exploits and vulnerabilities, but haven't found anything I could use
      So if you have any other ideas what I could try, feel free to suggest them in the comments.
      The new development version 0.27.1 of the Vala programming language contains a lot of enhancements and bug fixes.

      Release notes are the following:

      Changes

      • Print compiler messages in color.
      • Add clutter-gdk-1.0 bindings.
      • Add clutter-gst-3.0 bindings.
      • Add clutter-x11-1.0 bindings.
      • Add rest-extras-0.7 bindings.
      • Bug fixes and binding updates.
      However, I'd like to tell you a bit more:
      • The compiler now checks for unknown attributes.
      • More checks in the compiler about invalid semantics.
      • XOR now works with booleans, just like bitwise OR and AND (see the short example after this list)
      • A new attribute, [ConcreteAccessor], mostly used for bindings. It's common in C for interface properties to have concrete accessors instead of abstract accessors. Before this, you had to put [NoAccessorMethod] on top of such properties.
      • Sometimes the "type" property is used in projects, but Vala could not support it. Now, if the bindings have [NoAccessorMethod] on it, you can use it.
      • We now infer generics recursively in method calls, so less typing for you.
      • And more.
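      A quick illustration of the boolean XOR change (a minimal sketch; the [ConcreteAccessor] and generics improvements mostly affect bindings and are harder to show in a few lines):

        // Boolean XOR now behaves like bitwise OR and AND on bools.
        void main () {
            bool a = true;
            bool b = false;
            bool c = a ^ b;   // true
            bool d = a ^ a;   // false
            stdout.printf ("%s %s\n", c.to_string (), d.to_string ());
        }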
      Have fun.
      Last cycle GNOME Mines went through a major rewrite and redesign, bringing it into the GNOME 3 era. However, not everyone was happy with the new look, and several people mentioned the lack of colors on the numbers as the reason for this.

      The problem

      The numbers on the fields communicate the danger clearly, but you have to read them. Several people have reported using the colors as the primary clue for sensing the danger around the current field. With the new design we don't have colored numbers, so they would have to change the way they play Minesweeper to get used to this. Some people did, and mentioned that in spite of their initial complaints about missing colors they are happy with the result and will never need the colors. But what about the others?
      The lack of colors is the number one complaint, but some people also mentioned the flatness of all the icons as an issue, and others complained about the small difference between exploded and non-exploded mines and the lack of an explosion, which might be an accessibility issue for the visually impaired.

      The options

      In bug #729250, several G+ posts and blog entries I have read different suggestions (from designers, casual users, and minesweeping junkies alike) on how to bring back this additional level of visual feedback showing the danger when you're clicking around mines.

      Here are some of the options we have discussed (feel free to comment your pros/cons for any of the solutions, and I will expand the list):
      • Colored numbers, as we had in the old version
        • Pros
          • Potentially less unsatisfied users
          • similar in looks to the minesweepers other platforms have
        • Cons
          • Readability issues
          • User interface using many colors might look out of place on the GNOME desktop
      • Subtle background color change based on level of danger
        • Pros
          • Color feedback
          • If the colors are subtle enough, readability shouldn't be affected
        •  Cons
          • User interface using many colors might look out of place on the GNOME desktop
      • Symbolic pips instead of the numbers
        • Pros
          • no reading required
          • with well-spaced pips no counting would be required
        • Cons
          • ???

      The proposed solution

      GNOME games try to be as simple as possible, with the number of options reduced to the bare minimum. I consider this a good thing. But still, several games have options for changing the "theme", the look of the game board: e.g. lightsoff, five-or-more and quadrapassel all have a theme selection option in their preferences. We could do the same in Mines.
      Pros
      • people can change the theme if they are not satisfied with the default one
      Cons
      • a theme selector has to be added
      • a preferences menu item has to be added, as Mines doesn't have a preferences window at the moment; options are accessible in the appmenu

      The status

      Fortunately, the minefield is styled with CSS and the images are provided as SVG files, so a theme consists of a collection of files: a theme.css file describing the styles, and several SVG files, the images to use.
      I have implemented a theme switcher (branch wip/theming-support) with the following features
      The current look of the theme switcher
      • this loads the above files from a given directory to display the minefield, so a theme is a directory
      • the theme name is the name of the directory, but it is irrelevant, as users shouldn't see it anywhere; the theme switcher is a carousel-style switcher that doesn't show the name
      • the theme switcher is a live preview widget: you can play a game in it (the minefield is prefilled to show you all the numbers, flagged and non-flagged states, and you can also click the unrevealed tiles to see how the mines look)
      I have added three themes (currently these differ only in the CSS style) for now:
      • classic - using the flat icons, but the old colored numbers
      • default - the monochrome numbers and the flat icons
      • colored backgrounds - flat icons, and the numbers using colored backgrounds
      If this gets into the master repository, I wouldn't want to have more than five themes there. However, if you don't like any of them, and you are privileged enough (you have write access to the themes directory of Mines), you can create your own, and the theme switcher will pick it up after an application restart.
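      To illustrate the directory-per-theme idea, here is a hedged sketch of how theme discovery could work; this is not the actual wip/theming-support code, and the function name and exact checks are assumptions:

        // Sketch: each subdirectory containing a theme.css is treated as a theme,
        // and the directory name is the theme name. Not the actual branch code.
        string[] discover_themes (string themes_path) throws FileError {
            string[] themes = {};
            var dir = Dir.open (themes_path);
            string? name;
            while ((name = dir.read_name ()) != null) {
                var css = Path.build_filename (themes_path, name, "theme.css");
                if (FileUtils.test (css, FileTest.EXISTS)) {
                    themes += name;
                }
            }
            return themes;
        }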

      The missing pieces

      • do we need a theme switcher at all, or can we create a single theme that fits everyone? (I doubt it, but if it's possible, I'll happily throw the whole theme switcher implementation away)
      • design input on the theme switcher would be welcome
        • theme switcher navigation button styling
        • theme switcher window title
        • theme switcher menu item (currently it opens by clicking Appmenu/Preferences)
      • input on the themes
        • suggestions for the existing themes
        • suggestions for new themes (with SVG images provided)

      Conclusion

      It's hard to please everyone, but we can try to do our best :)
      Happy new year everyone!

      As we have started a brand new year, it's time for reviewing last year and planning for this one.

      2014

      Last year was a great one for me, professionally. Although I still didn't get my dream job of working full time on open-source and free software, I am still proud of what I accomplished.

      Development
      • I have successfully landed a major rewrite in gnome-mines, with both welcome and criticized changes (colored vs monochrome numbers anyone?) :)
      • The company I work for has successfully migrated all SVN repositories to git, and my colleagues have mostly gotten used to it. We still make some mistakes, but we can usually handle them without too much trouble
      • I have migrated our issue tracking system to Redmine and customized it, learning some Ruby and reporting some issues on GitHub projects along the way
      • I have removed our in-repository shared libraries and implemented dependency management on top of our current ant-based build scripts, using Ivy
      • Contributed more time to reviews than I did before (usually for the awesome elementary projects), along with some fixes
      • Contributing to open-source (GNOME and elementary) projects helped me get a new laptop through Bountysource (thanks to Bountysource for providing the platform, and to the people supporting elementary and GNOME with bounties), which I am grateful for.
      • 237 commits to various open-source projects (according to my OpenHub stats); although some of them are only release commits, it's still a good number for me, even if lower than the previous year
      • I interviewed for a job that seemed like my dream job, but unfortunately it turned out not to be, for various reasons. I still don't know why I got rejected in the last phase, and unfortunately while talking with the interviewers it turned out that the marketing which motivated me to go for the interview was indeed only marketing (and very successful marketing at that) but nothing more (at least that's what I concluded based on the answers from the several people working there who I managed to talk to)
      Talks
      • I held three talks at the university I graduated from, about open-source: the first and second were the same, a generic introduction to open-source for students, and the last one was about contributing for computer scientists, with bugfixes, code reviews, and so on. It was a great experience and I enjoyed my talks a lot, but I didn't see any enthusiasm around the topic, so I'm seriously thinking about what to do next, as I like talking about open-source but it seems that I haven't found the right audience
      • I seriously wanted to attend the Open Source Open Mind conference held annually in our city, and I even had a ticket, but unfortunately I became ill the night before the conference (my longest illness, lasting almost a month), so I skipped it, with regrets
      2015
      • In the land of open-source I intend to have more contributions this year, at least one commit and/or bugfix each day.
      • I would like to get to GUADEC this year, as I've never been, and it seems like the event I have the best chance of getting to: it's held in Europe, this year in Gothenburg, Sweden, so I need no visa (if I did need one, I would have to travel 900 km for it). Unfortunately we intend to buy a house, so I might not get the chance because of this.
      That's it. No big plans other than these (at least not programming-related). As personal goals I have some more ambitious ones, like reading some books and buying a house, but I hope I will be able to keep up the contributions, which breathe some more life into me.
      Since glib 2.41.2, the mutex/cond implementation on Linux has changed. Code compiled with Vala < 0.26 that targets at least glib 2.32 (with --target-glib 2.32) will suffer from deadlocks.

      Your options are either:
      • Do not use --target-glib 2.32
      • Update Vala to at least 0.25.2
      • Instead of upgrading Vala, pick the bindings for Mutex and Cond from the new glib-2.0.vapi
      • Downgrade glib
      To clarify, it's not a glib bug. It's an old valac bug in the glib-2.0.vapi bindings of Mutex and Cond that has now become critical after the glib implementation change.

      The relevant Vala bug can be found here: https://bugzilla.gnome.org/show_bug.cgi?id=733500
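      For reference, this is the kind of construct affected; a minimal sketch using the GLib Mutex/Cond bindings, compiled with something like valac --target-glib 2.32 example.vala:

        // Minimal sketch of code using the affected Mutex/Cond bindings.
        Mutex mutex = Mutex ();
        Cond cond = Cond ();
        bool ready = false;

        void wait_until_ready () {
            mutex.lock ();
            while (!ready) {
                cond.wait (mutex);
            }
            mutex.unlock ();
        }

        void signal_ready () {
            mutex.lock ();
            ready = true;
            cond.signal ();
            mutex.unlock ();
        }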
      We don't need to create a window that shows directories just to pick files afterwards. Gtk does it for us with FileChooserDialog...


      valac -o "archivos" *.gs --pkg gtk+-3.0 


      [indent=4]
      uses Gtk
      init
          Gtk.init (ref args)               // initialize gtk
          var prueba = new ventana ()       // create the test object
          prueba.show_all ()                // show everything
          Gtk.main ();                      // start the main loop

      class ventana : Window             // defines a window class
          init
              title = "Ventana de prueba"              // set the title
              default_height = 250                     // height
              default_width = 250                      // width
              window_position = WindowPosition.CENTER  // position

              // create a button with the following label
              var button = new Button.with_label ("Pulsa este botón")
              // connect the button's click signal to the pulsado handler
              button.clicked.connect (pulsado)

              // quit the main loop when the window's close button is pressed
              destroy.connect(Gtk.main_quit)

              // add the button to the window
              add(button)

          def pulsado (btn : Button)
              var FC = new FileChooserDialog ("Elige un archivo para abrir", this, Gtk.FileChooserAction.OPEN,
                  "_Abrir", Gtk.ResponseType.ACCEPT,
                  "_Cerrar", Gtk.ResponseType.CANCEL);
              FC.select_multiple = false;
              FC.set_modal(true)
              case FC.run ()
                  when Gtk.ResponseType.CANCEL
                      FC.hide()
                      FC.close()
                  when Gtk.ResponseType.ACCEPT
                      FC.hide()
                      var direccion = FC.get_filename ();
                      print direccion
      I use the terminal a lot, usually with bash or fish shell, and I always wanted some kind of notification on command completion, especially for long-running greps or other commands.

      The guys working on elementary OS have already implemented job completion notifications for the zsh shell in their pantheon-terminal project, but I wanted something more generic that works everywhere, even on the servers where I run commands through SSH.

      The terminal bell sound is something I usually don't like, but it seemed like a good fit for a quick heads-up, so the Bell character came to the rescue.
      As the bash prompt is fairly customizable, you can easily set a prompt which includes the magic BELL character.

      In order to do this:
      • open a Terminal (surprise :))
      • run the command echo PS1=\$\'\x07\'\'$PS1\'
      • paste the output of the command into ~/.bashrc
      Of course, this is not perfect, as it beeps for short commands too, not only long-running ones, but it works for me and maybe it will help you.
      A quick update on my new ultrabook running Fedora:
      • After watching kernel development closely to see if anything related to the built-in touchpad came in, and nothing came, I decided to try some workarounds. If it can't work as a touchpad, at least it should work as a mouse. This can be accomplished by adding psmouse.proto=imps to the kernel parameters. The downside is that there's neither two-finger scrolling nor edge scrolling, but I can live with that, as I also have a wireless mouse.
      • Unfortunately I couldn't do anything with the wireless card. I downloaded the kernel driver for the 3.13 and 3.14 kernels and changed the source to work with the 3.17 kernel (the one in Fedora Workstation dailies), but unfortunately it fails to connect to my WPA2-PSK network. So, until I get a mini PCIe wifi card with an Intel or Atheros chip (which are confirmed to have proper Linux support), I will use the laptop with a USB WLAN interface.
      • Optimus graphics card switching still didn't seem trivial to install and set up properly. However, I don't need more than the Intel graphics card, so I just wanted to switch the NVidia card off completely. I installed bumblebee and bbswitch based on the instructions on the Fedora wiki, and turned the discrete card off.
      • Battery usage is at about 8W, and estimated usage on battery is 7.5 hours with standard internet browsing on the standard 9-cell battery, so I'm pretty satisfied with that.
      • I have formatted both the 24 GB SSD and the 1.5 TB HDD (cleaned of junk like Windows and a McAfee 30-day trial), and installed Fedora 21 with a custom partitioning layout.
      All in all, I finally have a mostly working laptop (there's room for improvement though) with a battery life above six hours with constant browsing, so I'm satisfied.

        We have been hard at work since the last announcement. Thanks to help from people testing out the previous release, we found a number of issues (some not even OS X related) and managed to fix most of them. The most significant issues that are resolved are related to focus/scrolling issues in gtk+/gdk, rendering of window border shadows and context menus. We now also ship the terminal plugin, had fixes pushed in pygobject to make multiedit not crash and fixed the commander and multiedit plugin rendering. For people running OS X, please try out the latest release [1] which includes all these fixes.

        Can’t see the video? Watch it on youtube: https://www.youtube.com/watch?v=ZgwGGu7PYjY

        [1] ftp://ftp.gnome.org/pub/GNOME/binaries/mac/gedit/beta/Gedit-3.13.91-dbg-2.dmg

        Redmine issue editing is quite a complex task, with a fairly complex, huge, two-column form to edit (we also have several custom fields, which make the issue even worse).

        In our Trac instance customized for ubinam, after adding our custom workflow, we had some options at the end of the page for augmenting the workflow and easing status updates: reassignment, quick-fix, starting work on an issue, and other simple tasks which left most of the ticket fields untouched and only juggled the resolution, status, and assignee.

        The status-button Redmine plugin provided a great base: after the description and primary fields of the ticket, it shows links for quick status transitions. With it, you don't have to click edit, find the status field on the form, open it, select the status, and click submit to save the changes; instead, you change the status with one click. In our Trac-originated workflow we had a status with multiple resolutions (fixed, invalid, duplicate, wontfix, worksforme), which is a more complex transition, as you have to update two fields, and usually the assigned status goes along with a new assignee, so that is not that easy either.

        After checking the source and learning a bit of Ruby on Rails, I managed to update the form to change the links into Bootstrap buttons, and added an assignee combobox (with a nice look, using the same data as the one on the edit form, thus no additional requests) with a built-in search box, thanks to the awesome Select2 component.
        Of course, some status transitions also need a reason why you switched to that status: I could have added a dropdown with a text entry, but as the form already had a nice way to scroll to the comment form, why not use it? The rest of the form is not really helpful in this context, so with a bit of jQuery I have hidden it. Now, clicking a quick-status button either changes the status and submits the form (if no comment is required, like "test released") or changes the status and jumps to the comment form to give you a chance to comment. Obviously, you could still use the traditional edit button, but why would you?

        But a picture is worth a thousand words, so here you go, instead of three thousand words:

        The overall look of a ticket with the plugin, see the quick-status buttons
        A complex status transition, setting the status and the resolution, and requiring a comment
        Changing the assignee is easy and fast, select the user, and click reassign...
        Again, this is a heavily customized version, but if there's enough interest, I will share the plugin, or even develop a more generic one not strictly tied to our workflow. So, let me see your +1s/comments/shares; if I get 30 of those, I'll share it in a github repo.

        After sharing my experiences of migrating from Trac 1.0.1 to Redmine, some people have asked me to share the script I used.

        Do you need the script?
        Share/+1/comment!
        (Public domain image)
        I would prefer to share the migration script by getting it into the Redmine source tree. I am willing to spend some more of my spare time on getting the migration script into shape (currently it's too personalized for our project to be shared), but I'm not sure how many people would use it, so to find out, I need you to +1/comment/share this post to express your interest in it. Even if this act might look like shameless self-promotion, you'll have to believe me that it is only a way to find out in what form to share the script. If I see at least 30 people interested in it, I will do my best to share the migration script as soon as possible, and get it into the Redmine source tree. If there are fewer than 30 people interested in the script, I will still share it with them, but as a raw script in a public github repo/gist, without proper testing and review from the Redmine team.

        I have already asked the Redmine devs on IRC about the way they would prefer (and hopefully accept) a patch. They answered that they will accept the script, preferably as a separate migration script (the current one in the tree targets Trac 0.12, and Trac 1.0 has changed a lot), to avoid breaking the old script for the people who can still use it. This is also the easiest way, as it reduces the number of Trac version checks in the migration script.

        The Redmine developers have also asked me for a sample Trac DB dump, but my company's database is not public. If you are interested in the migration script, want to help, and have a public Trac database at hand (preferably with less than 1000 tickets), please share it. I have looked at the Trac users page for open-source projects, but only a few of them are using Trac 1.0.1. The database dump would be helpful to test the migration script and write some unit tests, to make sure everything works well.

        Stay tuned, in my next post I will present the personalizations I have used to ease Redmine ticket updates without using the complex edit form, and if there's enough interest, I will share the plugin I customized with the people interested.

        As some of you might already know, the company I work for has just migrated from Trac to Redmine (the migration is mostly complete). I'm a developer, but in the absence of DevOps people, I was responsible for the migration. It went fairly well; some more notes:
        Fixing everything (image: openclipart)
        • the migration didn't migrate the estimated time attribute for tickets, as I forgot it, but I did write the part that migrates the estimated time changes in the journal, so I took a wild guess and set the attribute for each ticket to the max value found in the ticket's history (usually that's the correct one, except maybe for a few)
        • never allow your users to choose their theme: I installed a plugin to let users choose their Redmine theme and installed seven themes; unfortunately each has its advantages and disadvantages, and everyone has their preferred theme, so we can't choose a default theme everyone would agree with (maybe I will be the bad guy in the story and remove the plugin, forcing them to use what most people like)
        • all in all, the feedback has been mostly positive so far; in spite of my promise to send a mail when everything is complete (which has not happened yet), most people are already using it, so it seems to be fairly intuitive (for people used to Bugzilla and Trac at least)

        Commit messages in issue history

        A major complaint was that the commit messages do not appear in Redmine in the ticket comments, but off to the side, making it hard to see which commit came after which comment. The issue-repo-history-merge plugin had some issues and did not fit our needs, so I started looking for another solution: modifying the Redmine source or writing my own plugin. After checking the Redmine source I found that a changeset link is added for the fixing keywords defined in the Redmine settings (which we already used for changing the status of tickets on commits), so I just added a fixing keyword with the usual "Refs #xxxx" style already defined in Redmine to associate the commit with a ticket, to also set the status of the ticket to Accepted, and inherently add a ticket history entry with "Applied in changeset:xxxxx". This was still missing the commit comment, but I have added that in the Redmine source, that being the fastest solution for now.
        Later on, a plugin might be more appropriate, if needed, to reduce the number of changes in the Redmine source in case a reinstall or Redmine update is needed.

        This post was going to be a rather long one, but I decided to split it in three, as the other two topics need their own posts for objective reasons. If you're interested in the migration script itself or a Redmine workflow helper, check back later.

        If you’re reading this through planet GNOME, you’ll probably remember Ignacio talking about gedit 3 for windows. The windows port has always been difficult to maintain, especially due to gedit and its dependencies being a fast moving target, as well as the harsh build environment. Having seen his awesome work on such a difficult platform, I felt pretty bad about the general state of the OS X port of gedit.

        The last released version for OS X was gedit 3.4, which is already pretty old by now. Even though developing on OS X (it being Unix/BSD based) is easier than Windows (for gedit), there is still a lot of work involved in getting an application like gedit to build. Things have definitely improved over the years though, GtkApplication has great support for OS X and things like the global menu and handling NSApp events are more integrated than they were before (we used the excellent GtkosxApplication from gtk-mac-integration though, so things were not all bad).

        I spent most of the time on two things, the build environment and OS X integration.

        Build environment

        We are still using jhbuild as before, but have automated all of the previously manual steps (such as installing and configuring jhbuild). There is a single entry point (osx/build/build) which is basically a wrapper around jhbuild (and some more). The build script downloads and installs jhbuild (if needed), configures it with the right environment for gedit, bootstraps and finally builds gedit. All of the individual phases are commands which can be invoked by build separately if needed. Importantly, whereas before we would use a jhbuild already setup by the user, we now install and configure jhbuild entirely in-tree and independently of existing jhbuild installations. This makes the entire build more reliable, independent and reproducible. We now also distribute our complete jhbuild moduleset in-tree so that we no longer rely on a possibly moving external moduleset source. This too improves build reproducibility by fixing all dependencies to specific versions. To make updating and maintaining the moduleset easier, we now have a tool which:

        1. Takes the gtk-osx stable modulesets.
        2. Applies our own specific overrides and additional modules from a separate overrides file. For modules that already exist, a diff is shown and the user is asked whether or not to update the module from the overrides file. This makes it easy to spot whether a given override is now out of date, or needs to be updated (for example with additional patches).
        3. For all GNOME modules, checks if there are newer versions available (stable or unstable), and asks whether or not to update modules that are out of date.
        4. Merges all modules into two moduleset files (bootstrap.modules and gedit.modules). Only dependencies required for gedit are included and the resulting files are written to disk.
        5. Downloads and copies all required patches for each required module in-tree so building does not rely on external sources.

        If we are satisfied with the end modulesets, we copy the new ones in-tree and commit them (including the patches), so we have a single self-contained build setup (see modulesets/).

        All it takes now is to run

        osx/build/build all

        and all of gedit and its dependencies are built from a pristine checkout, without any user intervention. Of course, this being OS X, there are always possibilities for things to go wrong, so you might still need some jhbuild juju to get it working on your system. If you try it and run into problems, please report them back. Running the build script without any commands should give you an overview of the available commands.

        Similar to the build script, we’ve now also unified the creation of the final app bundle and dmg. The entry point for this is osx/bundle/bundle, and it works in a similar way to the build script. The bundle script creates the final bundle using gtk-mac-bundler, which is automatically installed when needed, and obtains the required files from the standard in-tree build directory (i.e. you’ll have to run build first).

        OS X Integration

        Although GtkApplication takes care of most of the OS X integration these days (the most important being the global menu), there were still quite a few little issues left to fix. Some of these were in gtk+ (like the menu not showing [1], DND issues [2], font anti-aliasing issues [3] and support for the openFiles Apple event [4]), of which some have already been fixed upstream (others are pending). We’ve also pushed support for native 10.7 fullscreen windows into gtk+ [5] and enabled this in gedit (see screenshot). Others we fixed inside gedit itself. For example, we now use native file open/save dialogs to better integrate with the file system, have better support for multiple workspaces, improved support for keeping the application running without windows, made enchant (for the spell checker) relocatable, added an Apple Spell backend, and made other small improvements.

        Besides all of these, you of course also get all the “normal” improvements that have gone into gedit, gtk+ etc. over the years! I think that all in all this will be the best release for OS X yet, but let it not be me to be the judge of that.

        gedit 3.13.91 on OS X

        We are doing our best to release gedit 3.14 for OS X at the same time as it will be released for linux, which is in a little bit less than a month. You can download and try out gedit 3.13.91 now at:

        ftp://ftp.gnome.org/pub/GNOME/binaries/mac/gedit/beta/Gedit-3.13.91-dbg-1.dmg

        It would be really great to have people owning a Mac try this out and report bugs back to us so we can fix them (hopefully) in time for the final release. Note that gedit 3.14 will require OS X 10.7+; we no longer support OS X 10.6.

        [1] [Bug 735122] GtkApplication: fix global menubar on Mac OS
        [2] [Bug 658722] Drag and Drop sometimes stops working
        [3] [Bug 735316] Default font antialiasing results in wrong behavior on OS X
        [4] [Bug 722476] GtkApplication mac os tracker
        [5] [Bug 735283] gdkwindow-quartz: Support native fullscreen mode

        The change

        In January, after a long time with SVN, we (the development team) decided to make the move to git to speed up the development of the project we're working on, Tracking Live.
        The switch has greatly improved our development speed (although some people are still not happy with it, because of occasional relatively large merge conflicts) and deployment rate (with Jenkins and a relatively good branching strategy, we can release daily if we want).

        The problem

        We use Trac for bug tracking, with a post-commit hook to leave a comment on the relevant referenced ticket after each commit. This was introduced in SVN times and migrated to git too; unfortunately, Trac with git is awfully slow (tickets without a git commit load in less than 5 seconds, tickets with one git commit load in 40+ seconds, and the time goes up with the number of related commits). We updated our Trac instance from 0.12 to 1.0.1, which didn't help, and tried several tweaks and additional package installs to speed up Trac+git, but none of those helped either. The Trac developers also consider their Git plugin sub-optimal as of this writing.

        The solution

        40+ seconds for opening a ticket to leave a comment looked like a huge waste of time, so we started looking for alternatives. Redmine looked promising: inspired by Trac, but completely rewritten in Ruby (instead of Python) on the much-advertised Rails framework, and its default interface looked familiar to the colleagues used to Trac.

        Migration script updates

        Redmine provided a migration script for migrating all tickets from Trac. Good start. After the first import (6+ hours for ten thousand tickets) Redmine didn't start at all. Bad news. So here are the changes I made to the migration script in order to have a complete migration (I picked up the Ruby syntax easily; the changes took 2 days with testing, and I migrated only 200 tickets in each test run until I was sure the script worked OK, as I didn't like waiting 6+ hours for a full migration):
        • updated the date conversion, as the migration script targets Trac 0.12 and the datatype used to store dates in the Trac database has changed since then; after this Redmine did start
        • added migration for CC's to Redmine watchers
        • updated attachments migration to work with Trac 1.0.1, as the attachment paths have changed
        • added migration of total hours, estimated hours and hours spent, stored as custom fields in Trac, to Redmine's time management plugin entries
        • added comments for custom field changes, as custom fields have been migrated (meaning the current value of the custom field being correct), but their changes have not been migrated
        • added parent ticket relationship migration, as we had several beautiful ticket hierarchies for grouping featuresets (until we migrated to a more agile sprint-alike milestone-based grouping) in Trac
        • added custom ticket states and priorities mapping (we have a custom set of these defined to help us in our workflow)
        • added custom user mappings (for each of our users - 64 in the complete Trac history) to create a single Redmine user for anyone who used Trac with multiple email addresses (one for Trac comments, another for git commits, where these differ)
        • added migration for ticket comment links
        If you are interested in any of the above changes, feel free to ask and I will provide the migration script (unfortunately the changes do not seem to make it into Redmine trunk; lots of patches I have applied have been waiting in the Redmine tracker for years, and they apply cleanly, but have not been pushed to trunk).

          The plugins

          After all these steps, I had a good dataset to start with, but the functionality of Redmine was still not on par with Trac. The long Redmine plugin list (and additional GitHub searches for 'redmine plugin') came in handy here; I checked the list, tested the plugins I found interesting, and here's the final list (all tested and working with Redmine 2.5.2):
          • PixelCookers theme - the most complete and modern redmine theme with lots of customization options
          • redmine_auto_watchers_from_groups - everyone from the assigned group should be CC'd on each mail, which is what we used Trac's default CCs for (not perfect; I reported the first issue for the project)
          • redmine_auto_watchers - to add the persons commenting as watcher, bugzilla style
          • redmine_category_tree - useful for component grouping in our project, as we have one project with lots of components and subcomponents and sub-sub-components
          • redmine_custom_css and redmine_custom_js - for customizing the last bits without having to create a custom theme
          • redmine_didyoumean - for auto duplicate search before reporting a ticket (current trunk is broken, but last stable works)
          • redmine_custom_workflows - for additional updates on ticket changes
          • redmine_image_clipboard_paste - makes bug reporting for a website so much easier with a screenshot
          • redmine_issue_status_colors - we use a color for each status to help us visualize the current status of a milestone
          • redmine_landing_page - we only have one project, so we always want to land on the project page after login
          • redmine_open_search - no more custom html pages building custom links for accessing a ticket, just type the number in the searchbar of the browser
          • redmine_revision_diff - expand the diff by default (with a bit of customization and custom code it shows the branches a given commit appears on, something my colleagues missed when taking a first look at Redmine)
          • redmine_subtasks_inherited_fields - subtasks usually have most of the attributes inherited from the parent, so let's ease bug reporting
          • redmine_default_version - we have a generic issue collector pool, management prioritizes bugs from there into scheduled milestones, let's use that collector as default target version
          • redmine_tags - use tagging for bugs and wiki pages, something we used in Trac (although data not migrated)
          • redmine_wiki_extensions, wiking, redmine_wiki_lists - additional wiki extensions, custom macros, e.g. for embedding a ticket list inside a wiki page
          • redmine_wiki_toc - to have a table of contents for our wiki, which is kind of messy right now (we had a wiki page looking something like a ToC, but we occasionally forgot to update it)
          • status_button - for quickly changing the status without having to open the combo and select the one to use and click update, just shows all statuses as links
          • redmine_jenkins - awesome jenkins integration, can show build history, or even start jenkins builds from the redmine interface, no need to open jenkins anymore

          What's missing

          After all this setup, I've got two features of Trac without complete matches:
          • TicketQuery macro results have not been migrated, as there's no 100% match for this feature either in default Redmine or in the plugins. Depending on how much we need it, we will either create custom queries for the most important TicketQueries or (the more time-consuming option) extend the redmine_wiki_lists plugin with additional query attributes to make it as powerful as TicketQuery is in Trac
          • Trac's roadmap had a progress indicator for each milestone, which we could colorize based on status. Redmine's progress indicator can only colorize Open/In Progress/Closed, so there is no progress bar colorized based on per-status ticket counts. However, the ticket list is shown after the progress bar (Trac doesn't show the list), and that is something we can colorize, so we still have a visual clue of how the milestone stands.

          Conclusion

          Redmine
          Trac
          All in all, it looks to me like the migration is prepared: the test migration worked, preliminary tests look promising, the speed is incomparable, the featureset is OK, and the look and feel is updated and awesome.
          Hopefully we'll see it in action sometime soon (to my relief, and that of some colleagues who got sick of waiting for Trac pages to load), with sub-5-second page loading times. So Redmine, here we come...
          Recently my (5-year-old) laptop (HP ProBook 4710s) started behaving badly (shutting down multiple times, even after a full interior cleaning), so I started looking for a replacement. This time I wanted something a bit more portable (less than 17 inches) but still OK for development (13.3 and 14 inches seemed a bit too small), so I opted for a 15.6-inch one.
          Choosing the right one was a tough decision, my requirements were:
          • 15.6 inch with FullHD resolution (1920x1080)
          • good battery life (4+ hours) involving an ultralow-voltage CPU (i5 42xxU or i7 45xxU)
          • 8 GB memory
          • SSD being a plus
          My favourite was the Dell Inspiron 15 (7000 series), but the price was a bit higher than I wanted to pay, so I hesitated a lot, until every e-shop sold out its stock. While hunting for this laptop one day, I found the much cheaper, brand-new ASUS TransformerBook TP500 (LA/LN) series, which met almost all my requirements (although Google turned up nothing on Linux compatibility), so I decided to order the i5 version (24 GB SSD + 1 TB HDD) on a Saturday. The shop informed me on Monday that unfortunately there had been a mistake in the stock calculation and they were out of stock, so I opted for an upgrade to the i7 one. That shipped in one day (sadly with an OEM install of Win 8.1).

          After a quick first-time setup of Win 8.1 (OK, quick might be an exaggeration) and a quick start of Internet Explorer to download Firefox, I ran some quick tests to see if everything was OK. The touchscreen worked, the keyboard is amazing, the resolution is OK and the colors look wonderful; sadly the volume down button didn't work (volume up works, so it's likely a hardware issue). I decided to return it for a replacement (hopefully a fully functional one this time), but not before checking the Linux compatibility.
          After disabling Secure Boot and creating an EFI Fedora 20 live USB, I booted Fedora on it in a few seconds; here's a summary:
          • Resolution is ok, video cards (HD4400 and GeForce GT840) work
          • Touchpad works
          • Touchscreen works (haven't tried multitouch; I've seen reports that on its smaller sister, the TP300, only single-point touch works right now)
          • Keyboard works
          • Wifi did not work out of the box (with the 3.11 kernel). The Wifi+Bluetooth card is a Mediatek (RaLink) 7630. Googling revealed that ASUS x550C and HP 450-470 G1 owners also have this card, there are several requests to add support, but it's just not there yet. Fortunately MediaTek provides Linux drivers, so it might be "only" a matter of compiling the kernel driver, which means it might get in the kernel soon.
          • Card reader did not work (again with the 3.11 kernel), but a quick google revealed that support has been added in 3.13, so it should work if Fedora is updated (hopefully ethernet works, haven't had the chance to try it) - currently with Fedora updates installed I'm using the 3.15 kernel
          • With GTK+ 3.10, CSD windows cannot be moved by dragging the titlebar with touch; you have to use the touchpad for that. People have confirmed that this is not the case with 3.12+, which is strange (given that bug 708431 is still open), but good news.
          All in all, the experience was not perfect, but not frustrating either.

          Will be back with a more in-depth review with battery life and other info after I get the replacement. I'm looking forward to having fun with experimenting with GNOME on touch displays and implementing GNOME Mines touch screen support :)
          This new release of the Vala programming language brings a new set of features together with several bug fixes.

          Changes

          • Support explicit interface method implementation.
          • Support (unowned type)[] syntax.
          • Support non-literal length in fixed-size arrays.
          • Mark regular expression literals as stable.
          • GIR parser updates.
          • Add webkit2gtk-3.0 bindings.
          • Add gstreamer-allocators-1.0 and gstreamer-riff-1.0 bindings.
          • Bug fixes and binding updates.
          Explicit interface method implementation makes it possible to implement two interfaces that have methods (not properties) with the same name. Example:

          interface Foo {
              public abstract int m();
          }

          interface Bar {
              public abstract string m();
          }

          class Cls: Foo, Bar {
              public int Foo.m() {
                  return 10;
              }

              public string Bar.m() {
                  return "bar";
              }
          }

          void main () {
              var cls = new Cls ();
              message ("%d %s", ((Foo) cls).m(), ((Bar) cls).m());
          }

          Will output 10 bar.

          The new (unowned type)[] syntax makes it possible to represent "transfer container" arrays. Whereas it was already possible to write List<unowned type>, the same is now also possible with Vala arrays.
          Beware that writing var arr = transfer_container_array; will not correctly reference the elements. This is a bug that will eventually get fixed; it's better to always specify (unowned type)[] arr = transfer_container_array;
          Note that inside the parentheses only the unowned keyword is currently allowed.
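
          A minimal sketch of the new syntax (the array below is built from string literals purely for illustration; in real code a "transfer container" array would typically come from a binding annotated that way):

          void main () {
              // The array container is owned here, but the string elements are not
              // (they are unowned references); this is the same ownership shape as
              // a (transfer container) return value from a binding.
              (unowned string)[] names = { "alpha", "beta", "gamma" };

              // Keep the element type unowned when iterating as well.
              foreach (unowned string name in names) {
                  print ("%s\n", name);
              }
          }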

          The non-literal length in fixed-size arrays still has a bug (I lost track of which one), and if it is not fixed the feature may end up being reverted. So we advise not to use it yet.

          Thanks to our Florian for always making the documentation shine, Evan and Rico for constantly keeping the bindings up-to-date to the bleeding edge, and all other contributors.

          More information and download at the Vala homepage.

          I have been a bit more quiet on this blog (and in the community) lately, but for somewhat good reasons. I’ve recently finished my PhD thesis titled On the dynamics of human locomotion and the co-design of lower limb assistive devices, and am now looking for new opportunities outside of pure academics. As such, I’m looking for a new job and I thought I would post this here in case I overlook some possibilities. I’m interested mainly in working around the Neuchâtel (Switzerland) area or working remotely. Please don’t hesitate to drop me a message.

          My CV

          Public service announcement: if you’re a bindings author, or are otherwise interested in the development of GIR annotations, the GIR format or typelib format, please subscribe to the gir-devel-list mailing list. It’s shiny and new, and will hopefully serve as a useful way to announce and discuss changes to GIR so that they’re suitable for all bindings.

          Currently under discussion (mostly in bug #719966): changes to the default nullability of gpointer nodes, and the addition of a (never-null) annotation to complement (nullable).

          I just learned of another automated build system for vala. It’s called bake. It looks pretty nice. It’s written in vala and appears to support a wide variety of languages. From what I can tell looking at the source code, bake will write out old school make files for you.

          The other build system that I also have never used is called autovala. autovala is Vala-specific, unlike bake, which appears not to be. autovala is nice, though, in that it builds out CMake files for your project. I'm already very familiar with CMake, so that's a big plus for me.

          I plan to check out both very soon.

          A few days ago Atom, the hackable text editor, was completely open-sourced under the MIT license (parts of it were open-sourced some time ago; now they have completed the job by open-sourcing the core).

          Unfortunately it is currently only available for download for Mac OS, with no Windows or Linux binaries available yet, but due to the nature of open source you can simply grab the sources, download and compile Node.js (npm 1.4.4 is required, and neither Fedora 20 nor Ubuntu 14.04 provided it from the repos; they only had npm 1.3.x) and build yourself an executable. It's not always trivial: I had some issues building it both for Ubuntu 14.04 and Fedora 20, but quick DuckDuckGo searches found the solutions and I was able to test it.
          Update: the folks at webupd8 have created a PPA for 64-bit Ubuntu 14.04, so you might be able to try it out without the hassle to build it for yourself.
          As a first impression, it is a clean and extensible text editor, for people like me who are too lazy to learn vim or emacs.

          It took me some time to configure Atom to use it as an IDE. The default build already has support for some languages, some plugins and themes, but there are plenty of additional packages to choose from. Here are my favourites (if these didn't exist, I would've already stopped using Atom):
          • Word Jumper with its default Ctrl+Alt+Left/Right reconfigured to Ctrl+Left/Right for jumping between words, something provided by almost every product dealing with writing and navigating text
          • Terminal Status showing a terminal below your editor with Shift+Enter, useful for make commands or git hackery for stuff not provided by the default git plugin. Unfortunately user input doesn't work yet (the console doesn't get the focus), so it's not perfect.
          I checked the available packages; language support was available for most of the languages I usually work with (C, C++, Python, Java, Bash shell, GitHub Markdown, LaTeX), but unfortunately there was no support for Vala yet.

          The GitHub folks did a wonderful job at providing documentation for everything for the community to quickly build a powerful ecosystem around the Atom core. They have links to their important guides from their main Documentation page, including a guide on how to convert a TextMate bundle. As TextMate already has a huge package ecosystem, including a Vala bundle, I have followed their guide, converted the TextMate bundle, created a github repo and published a language-vala atom package.

          All in all, initial Vala support including syntax highlighting and code completion (and maybe some other features I am not aware of yet) is available for the ones eager to develop Vala code in Atom, after building it from source or after the GitHub folks provide binaries for other OSs too.

          After a couple of discussions at the DX hackfest about cross-platform-ness and deployment of GLib, I started wondering: we often talk about how GNOME developers work at all levels of the stack, but how much of that actually qualifies as ‘core’ work which is used in web servers, in cross-platform desktop software1, or commonly in embedded systems, and which is security critical?

          On desktop systems (taking my Fedora 19 installation as representative), we can compare GLib usage to other packages, taking GLib as the lowest layer of the GNOME stack:

          Package Reverse dependencies Recursive reverse dependencies
          glib2 4001
          qt 2003
          libcurl 628
          boost-system 375
          gnutls 345
          openssl 101 1022

          (Found with repoquery --whatrequires [--recursive] [package name] | wc -l. Some values omitted because they took too long to query, so can be assumed to be close to the entire universe of packages.)

          Obviously GLib is depended on by many more packages here than OpenSSL, which is definitely a core piece of software. However, those packages may not be widely used or good attack targets. Higher layers of the GNOME stack see widespread use too:

          Package Reverse dependencies
          cairo 2348
          gdk-pixbuf2 2301
          pango 2294
          gtk3 801
          libsoup 280
          gstreamer 193
          librsvg2 155
          gstreamer1 136
          clutter 90

          (Found with repoquery --whatrequires [package name] | wc -l.)

          Widely-used cross-platform software which interfaces with servers2 includes PuTTY and Wireshark, both of which use GTK+3. However, other major cross-platform FOSS projects such as Firefox and LibreOffice, which are arguably more ‘core’, only use GNOME libraries on Linux.

          How about on embedded systems? It’s hard to produce exact numbers here, since as far as I know there’s no recent survey of open source software use on embedded products. However, some examples:

          So there are some sample points which suggest moderately widespread usage of GNOME technologies in open-source-oriented embedded systems. For more proprietary embedded systems it’s hard to tell. If they use Qt for their UI, they may well use GLib’s main loop implementation. I tried sampling GPL firmware releases from gpl-devices.org and gpl.nas-central.org, but both are quite out of date. There seem to be a few releases there which use GLib, and a lot which don’t (though in many cases they’re just kernel releases).

          Servers are probably the largest attack surface for core infrastructure. How do GNOME technologies fare there? On my CentOS server:

          • GLib is used by the popular web server lighttpd (via gamin),
          • the widespread logging daemon syslog-ng,
          • all MySQL load balancing via mysql-proxy, and
          • also by QEMU.
          • VMware ESXi seems to use GLib (both versions 2.22 and 2.24!), as determined from looking at its licencing file. This is quite significant — ESXi is used much more widely than QEMU/KVM.
          • The Amanda backup server uses GLib extensively,
          • as do the clustering solutions Heartbeat and Pacemaker.

          I can’t find much evidence of other GNOME libraries in use, though, since there isn’t much call for them in a non-graphical server environment. That said, there has been heavy development of server-grade features in the NetworkManager stack, which will apparently be in RHEL 7 (thanks Jon).

          So it looks like GLib, if not other GNOME technologies, is a plausible candidate for being core infrastructure. Why haven’t other GNOME libraries seen more widespread usage? Possibly they have, and it’s too hard to measure. Or perhaps they fulfill a niche which is too small. Most server technology was written before GNOME came along and its libraries matured, so any functionality which could be provided by them has already been implemented in other ways. Embedded systems seem to shun desktop libraries for being too big and slow. The cross-platform support in most GNOME libraries is poorly maintained or non-existent, limiting them to use on UNIX systems only, and not the large OS X or Windows markets. At the really low levels, though, there’s solid evidence that GNOME has produced core infrastructure in the form of GLib.


          1. As much as 2014 is the year of Linux on the desktop, Windows and Mac still have a much larger market share. 

          2. And hence is security critical. 

          3. Though Wireshark is switching to Qt. 

          In the weekend, after playing around with a Flappy Bird clone on a phone, I got curious how much time it would take me to implement a desktop version. After a G+ idea I have named the project Flappy Gnome, and implemented a playable clone in Vala with a GtkArrow jumping between GtkButtons in a few hours and less than 150 lines (including empty lines and stuff).

          Here's a quick preview of the first version:


          A bit about the tech details: it's basically a dynamically expanding GtkScrolledWindow scrolling to the right as you progress, which creates the effect of the moving pipes, and the player is moved from inside a tick callback added to the container GtkLayout.
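
          For illustration, here is a minimal, stripped-down sketch of that mechanism (not the actual Flappy Gnome code; the widget choices and numbers are made up): a label "falls" inside a GtkLayout, moved from a tick callback driven by the frame clock.

          int main (string[] args) {
              Gtk.init (ref args);

              var window = new Gtk.Window ();
              var layout = new Gtk.Layout ();
              var player = new Gtk.Label ("o");
              layout.put (player, 20, 20);
              window.add (layout);

              double player_y = 20;
              layout.add_tick_callback ((widget, frame_clock) => {
                  player_y += 2;                          // crude "gravity"
                  layout.move (player, 20, (int) player_y);
                  return true;                            // keep the callback running every frame
              });

              window.set_default_size (300, 300);
              window.destroy.connect (Gtk.main_quit);
              window.show_all ();
              Gtk.main ();
              return 0;
          }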

          Given that this is my second Vala project written from scratch (after Valawhole) and I learned a lot from it, it seemed like a good idea to develop it further into a tutorial (for beginners); maybe someone else will find it useful too. I did start over twice to have a better code design and well-separated steps (1 commit/step), and have finally pushed it to GitHub, along with a description of each step. The resulting code is a bit longer (almost twice as long) than the initial version, but it also has more features, including CSS styling, a Restart button, better design, and so on...

          The end result of the tutorial in its current state.

          I'm thinking of adding a Help screen to explain the complicated controls (F2 restarts the game, Space to start the game/jump) and maybe a Game Over screen, so the tutorial might not be completely ready, but it's in a good shape.

          I could have done better in grouping related functionality in commits, or in commenting code, and I am sure there's a better way to implement/improve this using GTK+, but it's good for a start, with some known issues:
          In its current state it runs choppily on relatively modern dual- and quad-core CPUs with ATI cards using the open-source radeon driver (I'm not sure what else I could blame), but works enjoyably on a PC with an Intel HD. Unfortunately I don't have an NVIDIA card to test with, but I'm really curious whether it works on NVIDIA with nouveau, and I would also be interested in results with the binary blob drivers (both NVIDIA and ATI), to see if they make a difference. If you have any of these and a few minutes, please try it and comment with your findings.
          Update 1: Feedback from people running the game on Nouveau is positive, so the game seems to run smoothly on Nvidia with the open-source driver.

          Last week I was in Berlin at the GNOME DX hackfest. My goal for the hackfest was to do further work on the fledgling gnome-clang, and work out ways of integrating it into GNOME. There were several really fruitful discussions about GIR, static analysis, Clang ASTs, and integration into Builder which have really helped flesh out my plans for gnome-clang.

          The idea we have settled on is to use static analysis more pervasively in the GNOME build process. I will be looking into setting up a build bot to do static analysis on all GNOME modules, with the dual aims of catching bugs and improving the static analyser. Eventually I hope the analysis will become fast enough and accurate enough to be enabled on developers’ machines — but that’s a while away yet.

          (For those who have no idea what gnome-clang is: it’s a plugin for the Clang static analyser I’ve been working on, which adds GLib- and GObject-specific checks to the static analysis process.)

          One key feature I was working on throughout the hackfest was support for GVariant format string checking, which has now landed in git master. This will automatically check variadic parameters against a static GVariant format string in calls to g_variant_new(), g_variant_get() and other similar methods.

          For example, this can statically catch when you forget to add one of the elements:

          /*
           * Expected a GVariant variadic argument of type ‘int’ but there wasn’t one.
           *         floating_variant = g_variant_new ("(si)", "blah");
           *                                           ^
           */
          {
          	floating_variant = g_variant_new ("(si)", "blah");
          }

          Or the inevitable time you forget the tuple brackets:

          /*
           * Unexpected GVariant format strings ‘i’ with unpaired arguments. If using multiple format strings, they should be enclosed in brackets to create a tuple (e.g. ‘(si)’).
           *         floating_variant = g_variant_new ("si", "blah", 56);
           *                                           ^
           */
          {
          	floating_variant = g_variant_new ("si", "blah", 56);
          }

          After Zeeshan did some smoketesting of it (and I fixed the bugs he found), I think gnome-clang is ready for slightly wider usage. If you’re interested, please install it and try it out! Instructions are on its home page. Let me know if you have any problems getting it running — I want it to be as easy to use as possible.

          Another topic I discussed with Ryan and Christian at the hackfest was the idea of a GMainContext visualiser and debugger. I’ve got some ideas for this, and will hopefully find time to work on them in the near future.

          Huge thanks to Chris Kühl and Endocode for the use of their offices and their unrivalled hospitality. Thanks to the GNOME Foundation for kindly sponsoring my accommodation; and thanks to my employer, Collabora, for letting me take community days to attend the hackfest.

          Here in sunny Berlin, progress is being made on documentation, developer tools, and Club Mate. I’ve heard rumours of plans for an updated GTK+ data model and widgets. The documentationists are busy alternating between massaging and culling documentation pages. There are excited discussions about the possibilities created by Builder.

          I’ve been keeping working on gnome-clang, and have reached a milestone with GVariant error checking:

          gvariant-test.c:10:61: error: [gnome]: Expected a GVariant variadic argument of type ‘char *’ but saw one of type ‘guint’.
                  some_variant = g_variant_new ("(sss)", "hello", my_string, a_little_int_short_and_stout);
                                                                             ^

          More details coming soon, once I’ve tidied it all up and committed it.

          A GNOME Foundation sponsorship badge.

          Mines 3.13.1 is out with a refreshed look and feel.

          You have to see it for yourself. But until you do that, here's a comparison of an in-game and an end-game screen-shot from before (3.12.1) and after (3.13.1) the changes.
          Mines 3.12.1 (left) vs. Mines 3.13.1 (right)
          The real beauty of the new Mines lies within the details, the updated look is much more than using new colours and new images:
          • The old version drew the whole minefield to a DrawingArea using cairo calls, while the updated version contains no custom drawing code, only standard GTK+ widgets (GtkButtons within a GtkGrid) styled with CSS, inside a GtkOverlay so that it can be hidden behind a Paused label. This means that if you don't like the current look or colours of the minefield, or you would like to use some other images (like flowers instead of mines in a game called Minesweeper), you only have to provide the new image files and update the CSS file, without touching the code at all (a minimal sketch of this CSS-provider approach follows the list).
          • The user interface of the old version was built from code, so if you wanted to change something, you had to write the code for that. The user interface of the new version is built from Glade UI files, so you can fix user interface, layout, padding issues using Glade, without touching the code.
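
          As a rough illustration of that approach (a minimal sketch, not the actual gnome-mines code; the CSS class names and colours are made up), a grid of plain GtkButtons can be restyled entirely through a GtkCssProvider:

          int main (string[] args) {
              Gtk.init (ref args);

              // Load a stylesheet at runtime; changing colours or images later
              // only means editing this CSS, not the application code.
              var css = new Gtk.CssProvider ();
              try {
                  css.load_from_data ("""
                      .minefield GtkButton { background-image: none; background-color: #729fcf; }
                      .minefield GtkButton:hover { background-color: #3465a4; }
                  """);
              } catch (Error e) {
                  warning ("Could not load CSS: %s", e.message);
              }
              Gtk.StyleContext.add_provider_for_screen (Gdk.Screen.get_default (), css,
                                                        Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION);

              var window = new Gtk.Window ();
              var grid = new Gtk.Grid ();
              grid.get_style_context ().add_class ("minefield");
              grid.row_homogeneous = true;
              grid.column_homogeneous = true;
              for (int i = 0; i < 16; i++) {
                  grid.attach (new Gtk.Button (), i % 4, i / 4, 1, 1);
              }
              window.add (grid);
              window.destroy.connect (Gtk.main_quit);
              window.show_all ();
              Gtk.main ();
              return 0;
          }
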
          Thanks to all the people helping to make this release awesome, especially to Michael Catanzaro for the countless patch reviews and trust, and to Allan Day for the designs and the multiple iterations for the CSS.

          Download. Play. Enjoy. Comment. There's still a lot to improve for the 3.14.0 release.

          I think I’m not the only one who dreads visiting the hog that is bugzilla. It is very aptly named, but a real pain to work with at times. Mostly, what I really don’t like about bugzilla is that it’s 1) really slow to load and in particular search, 2) has a very cluttered interface with all kinds of distracting information that I don’t care about. Every time I think to quickly look up a bug, or search something specific, get all bugs related to some feature in gedit or even open just all bugs in a certain product, bugzilla just gets in the way.

          So I introduce bugzini (https://github.com/jessevdk/bugzini), the light-weight bugzilla front-end which runs entirely in the local browser, using the bugzilla XML-RPC API, a simple local webservice implemented in Go and a JavaScript application running in the browser using IndexedDB to store bugs offline.

          bugzini-index

          Screenshot of the main bug index listing

          It’s currently at a state where I think it could be useful for other people as well, and it’s running reasonably well (although there are certainly still some small issues to work out). There are several useful features in bugzini currently which makes it much nicer to work with than bugzilla.

          1. Search as you type, both for products as well as bug reports. This is great because you get instantaneous results when looking for a particular bug. A simple query language enables searching for specific fields and creating simple AND/OR style queries as shown in the screenshot (see the README for more details)
          2. Products in which you are interested can be starred and results are shown for all starred products through a special selection (All Starred in the screenshot)
          3. Searches can be bookmarked and are shown in the sidebar so that you can easily retrieve them. In the screenshot one such bookmark is shown (named file browser) which shows all bugs which contain the terms file and browser
          4. bugzini keeps track of which bugs contain new changes since your last visit and marks them (bold) similar to e-mail viewers. This makes it easy to see which bugs have changed without having to track this in bugzilla e-mails instead
          Viewing a bug

          Viewing a bug

          Try it out

          To try out bugzini, simply do the following from a terminal:

          git clone https://github.com/jessevdk/bugzini.git
          make
          ./bugzini -l

          Please don’t forget to file issues if you find any.

          After working a bit in Vala on gnome-mines and swell-foop I thought I'd give it a try, and I also wanted to try some more GTK+ CSS styling ideas, so I have developed the simplest game ever, a 15 puzzle.
          After a bit of development in Vala, I can say I'm pretty comfortable with it. www.valadoc.org is a great website; each language should have such a reference with all the available functions. Sometimes the explanations are not enough, but in that case I can simply fall back to DevHelp, and after some time one gets used to mapping the C names through the Vala namespace+class+member rules.
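
          A concrete instance of that mapping (my own illustrative example, not taken from valadoc): the C function gtk_window_set_title(), as documented in DevHelp, becomes the set_title() method of the Window class in the Gtk namespace, or simply the title property:

          int main (string[] args) {
              Gtk.init (ref args);
              var window = new Gtk.Window ();
              window.title = "15 puzzle";    // equivalent to gtk_window_set_title () in C
              window.destroy.connect (Gtk.main_quit);
              window.show_all ();
              Gtk.main ();
              return 0;
          }
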
          GtkOverlay for start screen

          Back to Valawhole: it's a simple 15-puzzle, available on github already. Technically, I did experiment with some ideas:
          • CSS stylable UI, just like the one I did for gnome-mines, but taken one step further, as the puzzle blocks are styled using on-the-fly generated CSS to be able to set the size (3x3 or 4x4 grid)
          • transparent start screen overlay, using GtkOverlay
          • game logic separated from the view, as I did like the clear separation in Swell Foop and Mines, and wanted to practice doing that
          4 x 4 puzzle with kenney's graphics
          The game graphics are from kenney.nl, as he has the most awesome portfolio of game graphics on opengameart.org, each graphic perfectly matching my taste: colorful, cartoony, professional.

          All in all, I think I will develop my upcoming games in Vala, as it takes away the burden of memory management and GObject boilerplate while still preserving native speed, being compiled to C. Awesome.


          I have finished the stylable Mines implementation for GNOME Mines, which I mentioned in my last post:
          • using the old scalable images
          • with a rough paused overlay
          • game logic including keyboard and mouse control just as before
          • other improvements, 
          I think it's ready to be taken over by a designer for some CSS wizardry, as my CSS is a plain one, using some background-colors from the Tango color palette.
          It is available for testing in the wip/cssstyling branch of gnome-mines, and if you manage to test it, please report any issues you may find in a comment in bug 728483.

          Now, let the screenshots do the talking (beware, ugly CSS ahead)

          Loser

          Starting a new game


          Paused
          Winner

          Who knows GNOME Mines? You know, the GNOME version of that good old puzzle game (way older than I would've thought, its origins dating back to '60-'70s).

          Allan Day has created a new design for that game as part of the GNOME Games modernization, ready to be implemented. I started working on it, and implementing the UI layout wasn't hard: I just had to rearrange some buttons and do a bit of tweaking, and there you go, we have an updated layout. But the mockup isn't only about the layout, it's about the theme too, which uses the dark variant of GNOME's Adwaita theme. However, just toggling the dark variant setting isn't enough in this case, as drawing the minefield grid is almost completely hard-coded (the minefield borders do use the button style, which can be styled with CSS, but that's it). Allan asked me where he could find some CSS for styling, but unfortunately Mines is not very customizable. However, implementing this seemed like a great idea, for multiple reasons:
          • If you have tried resizing the window, you might have also noticed the CPU usage going up and some flickering. That is caused by the implementation redrawing the full board, and relayouting the full board to be centered.
          • The application would be easily styleable by designers, using CSS files only.
          I thought about it a bit; it seemed like a good idea, and even though I had already failed once at implementing a Minesweeper clone good enough for my taste and for public release, I wanted to do it. So here are the steps I proposed:
          • Separate the layouting code out from the minefield, as an aspectframe can do that perfectly
          • Reimplement the minefield using standard GTK+ components, a (row- and column-) homogeneous grid for layouting the minefield, and buttons for representing the fields
            • strangely, with the Adwaita theme on Fedora 20 + GNOME 3.12, a mouse click on the 30x16 button grid takes ages (I tried to find the bottleneck with Callgrind; if anyone's interested in the results I can share them, but I don't understand them), but with clean CSS (no fancy rounded rectangles, gradients or background images for buttons) it's fast, and on Ubuntu 14.04 + GNOME 3.12 preinstalled from a PPA, with the same version of Adwaita, it's fast by default, without any CSS juggling
          • Use an overlay for the paused screen to hide the minefield while paused
          • Ask Allan to provide the new images and a CSS for styling, as I am bad at these :)
          I don't know who wrote the Mines Vala code, but even though it had some inefficiencies, it turned out to be a masterpiece design-wise (not UI design, but object-oriented design), as the minefield view was almost completely separated from the game logic, and it was well-commented (not over-commented; it had exactly the amount of code comments required to understand what the code does and how).
          Separating the layouting code out was only a matter of minutes. I also managed to replace the custom view with a standard grid quite fast, in a matter of hours.
          So here's the "new" GNOME Mines (with the CSS style I could come up with, using the Tango color palette for now, while waiting for Allan to come up with a better CSS).
          The game is playable, with text-only buttons for now, the pause overlay is missing, but that should be the easy part. Can't wait to see it finished :)

          And by the way, due to the recent work I've done, I have been asked and gladly accepted to become the maintainer of Mines, so feel free to file bugs/patches/feature requests for discussion, I will be happy to take this lil' project one step further :)

          Continuing in this fledgling series of examining GLib’s GMainContext, this post looks at ensuring that functions are called in the right main context when programming with multiple threads.

          tl;dr: Use g_main_context_invoke_full() or GTask. See the end of the post for some guidelines about multi-threaded programming using GLib and main contexts.

          To begin with, what is ‘the right context’? Taking a multi-threaded GLib program, let’s assume that each thread has a single GMainContext running in a main loop — this is the thread default main context.((Why use main contexts? A main context effectively provides a work or message queue for a thread — something which the thread can periodically check to determine if there is work pending from another thread. It’s not possible to pre-empt a thread’s execution without using hideous POSIX signalling). I’m ignoring the case of non-default contexts, but their use is similar.)) So ‘the right context’ is the one in the thread you want a function to execute in. For example, if I’m doing a long and CPU-intensive computation I will want to schedule this in a background thread so that it doesn’t block UI updates from the main thread. The results from this computation, however, might need to be displayed in the UI, so some UI update function has to be called in the main thread once the computation’s complete. Furthermore, if I can limit a function to being executed in a single thread, it becomes easy to eliminate the need for locking a lot of the data it accesses((Assuming that other threads are implemented similarly and hence most data is accessed by a single thread, with threads communicating by message passing, allowing each thread to update its data at its leisure.)), which makes multi-threaded programming a whole lot simpler.

          For some functions, I might not care which context they’re executed in, perhaps because they’re asynchronous and hence do not block the context. However, it still pays to be explicit about which context is used, since those functions may emit signals or invoke callbacks, and for reasons of thread safety it’s necessary to know which threads those signal handlers or callbacks are going to be invoked in. For example, the progress callback in g_file_copy_async() is documented as being called in the thread default main context at the time of the initial call.

          The core principle of invoking a function in a specific context is simple, and I’ll walk through it as an example before demonstrating the convenience methods which should actually be used in practice. A GSource has to be added to the specified GMainContext, which will invoke the function when it’s dispatched. This GSource should almost always be an idle source created with g_idle_source_new(), but this doesn’t have to be the case. It could be a timeout source so that the function is executed after a delay, for example.

          As described previously, this GSource will be added to the specified GMainContext and dispatched as soon as it’s ready((In the case of an idle source, this will be as soon as all sources at a higher priority have been dispatched — this can be tweaked using the idle source’s priority parameter with g_source_set_priority(). I’m assuming the specified GMainContext is being run in a GMainLoop all the time, which should be the case for the default context in a thread.)), calling the function on the thread’s stack. The source will typically then be destroyed so the function is only executed once (though again, this doesn’t have to be the case).

          Data can be passed between threads in this manner in the form of the user_data passed to the GSource’s callback. This is set on the source using g_source_set_callback(), along with the callback function to invoke. Only a single pointer is provided, so if multiple bits of data need passing, they must be packaged up in a custom structure first.

          Here’s an example. Note that this is to demonstrate the underlying principles, and there are convenience methods explained below which make this simpler.

          /* Main function for the background thread, thread1. */
          static gpointer
          thread1_main (gpointer user_data)
          {
          	GMainContext *thread1_main_context = user_data;
          	GMainLoop *main_loop;
          
          	/* Set up the thread’s context and run it forever. */
          	g_main_context_push_thread_default (thread1_main_context);
          
          	main_loop = g_main_loop_new (thread1_main_context, FALSE);
          	g_main_loop_run (main_loop);
          	g_main_loop_unref (main_loop);
          
          	g_main_context_pop_thread_default (thread1_main_context);
          	g_main_context_unref (thread1_main_context);
          
          	return NULL;
          }
          
          /* A data closure structure to carry multiple variables between
           * threads. */
          typedef struct {
          	gchar *some_string;  /* owned */
          	guint some_int;
          	GObject *some_object;  /* owned */
          } MyFuncData;
          
          static void
          my_func_data_free (MyFuncData *data)
          {
          	g_free (data->some_string);
          	g_clear_object (&data->some_object);
          	g_slice_free (MyFuncData, data);
          }
          
          static void
          my_func (const gchar *some_string, guint some_int,
                   GObject *some_object)
          {
          	/* Do something long and CPU intensive! */
          }
          
          /* Convert an idle callback into a call to my_func(). */
          static gboolean
          my_func_idle (gpointer user_data)
          {
          	MyFuncData *data = user_data;
          
          	my_func (data->some_string, data->some_int, data->some_object);
          
          	return G_SOURCE_REMOVE;
          }
          
          /* Function to be called in the main thread to schedule a call to
           * my_func() in thread1, passing the given parameters along. */
          static void
          invoke_my_func (GMainContext *thread1_main_context,
                          const gchar *some_string, guint some_int,
                          GObject *some_object)
          {
          	GSource *idle_source;
          	MyFuncData *data;
          
          	/* Create a data closure to pass all the desired variables
          	 * between threads. */
          	data = g_slice_new0 (MyFuncData);
          	data->some_string = g_strdup (some_string);
          	data->some_int = some_int;
          	data->some_object = g_object_ref (some_object);
          
          	/* Create a new idle source, set my_func() as the callback with
          	 * some data to be passed between threads, bump up the priority
          	 * and schedule it by attaching it to thread1’s context. */
          	idle_source = g_idle_source_new ();
          	g_source_set_callback (idle_source, my_func_idle, data,
          	                       (GDestroyNotify) my_func_data_free);
          	g_source_set_priority (idle_source, G_PRIORITY_DEFAULT);
          	g_source_attach (idle_source, thread1_main_context);
          	g_source_unref (idle_source);
          }
          
          /* Main function for the main thread. */
          static void
          main (void)
          {
          	GThread *thread1;
          	GMainContext *thread1_main_context;
          
          	/* Spawn a background thread and pass it a reference to its
          	 * GMainContext. Retain a reference for use in this thread
          	 * too. */
          	thread1_main_context = g_main_context_new ();
          	g_thread_new ("thread1", thread1_main,
          	              g_main_context_ref (thread1_main_context));
          
          	/* Maybe set up your UI here, for example. */
          
          	/* Invoke my_func() in the other thread. */
          	invoke_my_func (thread1_main_context,
          	                "some data which needs passing between threads",
          	                123456, some_object);
          
          	/* Continue doing other work. */
          }

          That’s a lot of code, and it doesn’t look fun. There are several points of note here:

          • This invocation is uni-directional: it calls my_func() in thread1, but there’s no way to get a return value back to the main thread. To do that, the same principle needs to be used again, invoking a callback function in the main thread. It’s a straightforward extension which isn’t covered here.
          • Thread safety: This is a vast topic, but the key principle is that data which is potentially accessed by multiple threads must have mutual exclusion enforced on those accesses using a mutex. What data is potentially accessed by multiple threads here? thread1_main_context, which is passed in the fork call to thread1_main; and some_object, a reference to which is passed in the data closure. Critically, GLib guarantees that GMainContext is thread safe, so sharing thread1_main_context between threads is fine. The other code in this example must ensure that some_object is thread safe too, but that’s a topic for another blog post. Note that some_string and some_int cannot be accessed from both threads, because copies of them are passed to thread1, rather than the originals. This is a standard technique for making cross-thread calls thread safe without requiring locking. It also avoids the problem of synchronising freeing some_string. Similarly, a reference to some_object is transferred to thread1, which works around the issue of synchronising destruction of the object.
          • Specificity: g_idle_source_new() was used rather than the simpler g_idle_add() so that the GMainContext the GSource is attached to could be specified.

          With those principles and mechanisms in mind, let’s take a look at a convenience method which makes this a whole lot easier: g_main_context_invoke_full().((Why not g_main_context_invoke()? It doesn’t allow a GDestroyNotify function for the user data to be specified, limiting its use in the common case of passing data between threads.)) As stated in its documentation, it invokes a callback so that the specified GMainContext is owned during the invocation. In almost all cases, the context being owned is equivalent to it being run, and hence the function must be being invoked in the thread for which the specified context is the thread default.

          Modifying the earlier example, the invoke_my_func() function can be replaced by the following:

          static void
          invoke_my_func (GMainContext *thread1_main_context,
                          const gchar *some_string, guint some_int,
                          GObject *some_object)
          {
          	MyFuncData *data;
          
          	/* Create a data closure to pass all the desired variables
          	 * between threads. */
          	data = g_slice_new0 (MyFuncData);
          	data->some_string = g_strdup (some_string);
          	data->some_int = some_int;
          	data->some_object = g_object_ref (some_object);
          
          	/* Invoke the function. */
          	g_main_context_invoke_full (thread1_main_context,
          	                            G_PRIORITY_DEFAULT, my_func_idle,
          	                            data,
          	                            (GDestroyNotify) my_func_data_free);
          }

          That’s a bit simpler. Let’s consider what happens if invoke_my_func() were to be called from thread1, rather than from the main thread. With the original implementation, the idle source would be added to thread1’s context and dispatched on the context’s next iteration (assuming no pending dispatches with higher priorities). With the improved implementation, g_main_context_invoke_full() will notice that the specified context is already owned by the thread (or can be acquired by it), and will call my_func_idle() directly, rather than attaching a source to the context and delaying the invocation to the next context iteration. This subtle behaviour difference doesn’t matter in most cases, but is worth bearing in mind since it can affect blocking behaviour (i.e. invoke_my_func() would go from taking negligible time, to taking the same amount of time as my_func() before returning).

          How can I be sure a function is always executed in the thread I expect? Since I’m now thinking about which thread each function could be called in, it would be useful to document this in the form of an assertion:

          g_assert (g_main_context_is_owner (expected_main_context));

          If that’s put at the top of each function, any assertion failure will highlight a case where a function has been called directly from the wrong thread. This technique was invaluable to me recently when writing code which used upwards of four threads with function invocations between all of them. It’s a whole lot easier to put the assertions in when initially writing the code than it is to debug the race conditions which easily result from a function being called in the wrong thread.

          This can also be applied to signal emissions and callbacks. As well as documenting which contexts a signal or callback will be emitted in, assertions can be added to ensure that this is always the case. For example, instead of using the following when emitting a signal:

          guint param1;  /* arbitrary example parameters */
          gchar *param2;
          guint retval = 0;
          
          g_signal_emit_by_name (my_object, "some-signal",
                                 param1, param2, &retval);

          it would be better to use the following:

          static guint
          emit_some_signal (GObject *my_object, guint param1,
                            const gchar *param2)
          {
          	guint retval = 0;
          
          	g_assert (g_main_context_is_owner (expected_main_context));
          
          	g_signal_emit_by_name (my_object, "some-signal",
          	                       param1, param2, &retval);
          
          	return retval;
          }

          As well as asserting emission happens in the right context, this improves type safety. Bonus! Note that signal emission via g_signal_emit() is synchronous, and doesn’t involve a main context at all. As signals are a more advanced version of callbacks, this approach can be applied to those as well.
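
          For example, here’s a minimal sketch of an async callback carrying the same assertion; file_read_cb() is a hypothetical callback passed to g_file_read_async(), and expected_main_context is defined as above:

          static void
          file_read_cb (GObject *source_object, GAsyncResult *result,
                        gpointer user_data)
          {
          	GFileInputStream *stream;
          	GError *error = NULL;
          
          	/* Document (and enforce) the context this callback is expected
          	 * to be dispatched in. */
          	g_assert (g_main_context_is_owner (expected_main_context));
          
          	stream = g_file_read_finish (G_FILE (source_object), result,
          	                             &error);
          
          	if (stream == NULL) {
          		g_warning ("Error reading file: %s", error->message);
          		g_error_free (error);
          		return;
          	}
          
          	/* Use the stream, then drop the reference. */
          	g_object_unref (stream);
          }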

          Before finishing, it’s worth mentioning GTask. This provides a slightly different approach to invoking functions in other threads, which is more suited to the case where you want your function to be executed in some background thread, but don’t care exactly which one. GTask takes a data closure and a function to execute, provides ways to return the result from that function, and then handles everything necessary to run the function in a thread belonging to a thread pool internal to GLib. However, by combining g_main_context_invoke_full() and GTask, it should be possible to run a task in a specific context and effortlessly return its result to the current context:

          /* This will be invoked in thread1. */
          static gboolean
          my_func_idle (gpointer user_data)
          {
          	GTask *task = G_TASK (user_data);
          	MyFuncData *data;
          	gboolean retval;
          
          	/* Call my_func() and propagate its returned boolean to
          	 * the main thread. */
          	data = g_task_get_task_data (task);
          	retval = my_func (data->some_string, data->some_int,
          	                  data->some_object);
          	g_task_return_boolean (task, retval);
          
          	return G_SOURCE_REMOVE;
          }
          
          /* Whichever thread this is invoked in, @callback will be invoked
           * in that thread's thread-default main context once my_func() has
           * finished and returned a result. */
          static void
          invoke_my_func_with_result (GMainContext *thread1_main_context,
                                      const gchar *some_string, guint some_int,
                                      GObject *some_object,
                                      GAsyncReadyCallback callback,
                                      gpointer user_data)
          {
          	MyFuncData *data;
          	GTask *task;
          
          	/* Create a data closure to pass all the desired variables
          	 * between threads. */
          	data = g_slice_new0 (MyFuncData);
          	data->some_string = g_strdup (some_string);
          	data->some_int = some_int;
          	data->some_object = g_object_ref (some_object);
          
          	/* Create a GTask to handle returning the result to the current
          	 * thread default main context. */
          	task = g_task_new (NULL, NULL, callback, user_data);
          	g_task_set_task_data (task, data,
          	                      (GDestroyNotify) my_func_data_free);
          
          	/* Invoke the function. */
          	g_main_context_invoke_full (thread1_main_context,
          	                            G_PRIORITY_DEFAULT, my_func_idle,
          	                            task,
          	                            (GDestroyNotify) g_object_unref);
          }
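
          To show how the result arrives back in the calling thread, here’s a hedged sketch of the calling side; my_func_cb() and the literal arguments are made up, while g_task_propagate_boolean() is the GTask counterpart for retrieving the value passed to g_task_return_boolean():

          /* Runs in the calling thread's thread-default main context once
           * my_func() has finished in thread1. */
          static void
          my_func_cb (GObject *source_object, GAsyncResult *result,
                      gpointer user_data)
          {
          	GError *error = NULL;
          	gboolean retval;
          
          	retval = g_task_propagate_boolean (G_TASK (result), &error);
          	g_assert_no_error (error);
          
          	g_message ("my_func() returned %s", retval ? "TRUE" : "FALSE");
          }
          
          /* Somewhere in the main thread: */
          invoke_my_func_with_result (thread1_main_context, "some string", 123,
                                      some_object, my_func_cb, NULL);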

          So in summary:

          • Use g_main_context_invoke_full() to invoke functions in other threads, under the assumption that every thread has a thread default main context which runs throughout the lifetime of that thread.
          • Use GTask if you only want to run a function in the background and don’t care about the specifics of which thread is used.
          • In any case, liberally use assertions to check which context is executing a function, and do this right from the start of a project.
          • Explicitly document contexts a function is expected to be called in, a callback will be invoked in, or a signal will be emitted in.
          • Beware of g_idle_add() and similar functions, which always use the global default main context (a short contrast is sketched below).
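
          To illustrate that last point, a minimal sketch, assuming a MyFuncData *data closure and the my_func_idle() callback from the examples above; g_main_context_invoke() is the simpler variant (without a GDestroyNotify) mentioned in the footnote earlier:

          /* Attaches an idle source to the *global* default main context,
           * regardless of which thread calls it: */
          g_idle_add (my_func_idle, data);
          
          /* Targets a specific thread's context explicitly: */
          g_main_context_invoke (thread1_main_context, my_func_idle, data);
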
          System Monitor needs an update, and it's not gonna be easy.

          Background

          System Monitor is a mostly stable piece of software and part of the GNOME project. It is (or should be) the application to
          • monitor your system with
          • find the application/process that is
            • slowing down your system or using your network bandwidth
            • getting your laptop hot by using all/some of your cores at 100%
            • making your laptop battery last only 1 hour
          After you have identified the problem, you should use the same application to recover:
          • by killing the process
          • by setting a CPU/memory/bandwidth usage limit for the process
          There are some tasks System Monitor excels at: I personally love the process list filtering + multiple selection + kill feature, which works better for me than the killall terminal command, and that's a good thing.

          The fact is that the interface of System Monitor looks a bit outdated. Thanks to the help of several drive-by contributors, some elements of the user interface have been updated to match the rest of the GNOME 3 applications, but the running application is still the same old rusty one.

          System Monitor also consumes more resources than it probably should. I might be the one to blame here, as I could have done something about it while working on the project, but it just isn't that easy. Several people have filed bugs against either system-monitor or libgtop with patches fixing bottlenecks, memory leaks, and limitations, and where I saw it appropriate I have reviewed and committed them. However, I am not experienced enough with either of these projects to know all the implications of the suggested changes, and the person writing the patch probably isn't either, so we might see an improvement in one place but introduce another bottleneck somewhere else.

          The plan

          GNOME designer Allan Day has come up with some new designs for a system monitoring application (Usage), and after some suggestions and feedback (yes, feedback is always welcome) he has updated them with an even better sidebar-oriented design, which I like a lot.

          The progress

          Stefano Facchini has implemented a proof-of-concept application based on the first mockups from Allan, but it needs updating and a lot of work afterwards.

          The dreams

          "the dreams that you dare to dream really do come true"
          (Lyman Frank Baum)
          I am dreaming of a fully updated Usage application for GNOME 4.0 :) I don't think it can be done properly in the timeframe of the next GNOME release, 3.14, but hopefully it can be done by the time GNOME 4.0 comes out, whenever that will happen.

          And by updated I mean an application I can use to do the tasks System Monitor does, but a bit better, faster, and cleaner. And I am not speaking only about what's on the surface: I have been thinking about building a D-Bus wrapper around libgtop, with a lot more options to request only what you need, to be as fast as possible.

          Yes, I am dreaming of the interface designed by Allan, with some twists, like extensibility: separate "plugins" for power usage, CPU usage and memory usage monitoring, with the option to turn any of these off, or even better, to automatically turn off the ones you don't need (for example, no power usage monitoring on desktop PCs).

          Call for help

          Even the easiest option (whichever that would be: updating the System Monitor interface or implementing from scratch) would need more manpower AND experience than I have, so I am asking for your help:
          • if you are a developer and you would like to contribute to this goal, let me know in the comments
          • if you are a regular user and have any comments on the design, your workflow, or what you would like to see, your comments are welcome
          • if you would be willing to test the application before it gets out, let me know in the comments