I feel like I have failed as a maintainer of GNOME modules: I have been busy with other tasks lately and could not really keep up with my maintainer duties, bugfixing included. But it is November again, Bug Squash Month for GNOME, so I will do my best to take the challenge and do the 5-a-day (5 bugs triaged per day) this month.


Today I had a couple of comments and fixes on System Monitor and Calculator, and I will probably continue with these two tomorrow, then jump to the games. If you have any annoyances, or would like me to prioritize certain bugs (preferably from libgtop, system-monitor, gnome-calculator, swell-foop, lightsoff, five-or-more, atomix or gnome-mines), just let me know, and I will do my best.

Where I work, we often deal with large datasets from which we copy the relevant entries into program memory. However, doing so typically incurs very high memory usage, which can lead to memory-bound parallelism if multiple instances are launched.

Memory-bound parallelism arises when a system cannot execute more tasks due to a lack of available memory. It essentially wastes all the other available resources, such as CPU time.

To address this kind of issue, I’ll describe in this post a strategy using memory-mapped files and on-demand processing over a very common data format in bioinformatics: FASTA. The use case is pretty simple: we want to query small and arbitrary subsequences without having to precondition them in allocated memory.

About Virtual Memory Space

The virtual address space is large. Very large. Think of all the address values a 64-bit pointer can take: that's about 18 quintillion addressable bytes, which is enough to never be bothered by it.

Understandably, no computer can hold that much memory. Instead, the operating system partitions virtual memory into pages and physical memory into frames, and uses a caching algorithm to load addressed pages into physical frames. Unused pages are stored on disk in the available swap partitions, or compressed into physical memory if you use Zswap1.

The mmap2 system call establishes a correspondence between a file and pages in virtual memory. Addressing the memory where the file has been mapped results in the kernel fetching its content on demand. Moreover, if multiple processes map the same file, the same frames (i.e. physical memory) are shared across all of them.

void * mmap (void *addr,
             size_t length,
             int prot,
             int flags,
             int fd,
             off_t offset);

Here addr hints the operating system at a memory location, length indicates the size of the mapping, prot sets the permissions on the region, flags holds various options, fd is a file descriptor and offset is a byte offset into the file. The returned value is the mapped address.
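To make the call concrete, here is a minimal sketch (mine, not from the original post) of mapping a file read-only; the helper name map_file_readonly is made up for the example:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map `path` read-only and return its address, storing the file size
 * in `length`; returns NULL on failure. The caller must munmap it. */
static char *
map_file_readonly (const char *path, size_t *length)
{
    int fd = open (path, O_RDONLY);
    if (fd == -1)
        return NULL;

    struct stat st;
    if (fstat (fd, &st) == -1)
      {
        close (fd);
        return NULL;
      }

    /* MAP_SHARED lets every process mapping this file reuse the same
     * physical frames. */
    char *addr = mmap (NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close (fd); /* the mapping keeps the file referenced */

    if (addr == MAP_FAILED)
        return NULL;

    *length = (size_t) st.st_size;
    return addr;
}
```

Note that the file descriptor can be closed right after mmap succeeds: the mapping holds its own reference to the file.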

We can use this feature to our advantage by loading our data once and transparently sharing it across all the instances of our program.

I’m using GLib, a portable C library, and the GMappedFile it provides to carefully wrap mmap with reference counting.

g_autoptr (GMappedFile) fasta_map = g_mapped_file_new ("hg38.fa",
                                                       FALSE,
                                                       NULL);

Our Use Case

To be more specific, our use case only requires viewing small windows (~7 nucleotides) of the sequence at once. If we assume 80 nucleotides per line, there are 80 possible windows per line, of which 73 are free of newlines. The probability of a random subsequence of length 7 landing on a newline is thus approximately 8.75%.
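As a quick sanity check of that figure (my own arithmetic, using the post's simplification of 80 window positions per line):

```c
/* A window straddles the newline in `window_len` of the `line_len`
 * possible starting positions per line, so the hit probability is
 * simply window_len / line_len. */
static double
newline_hit_probability (int line_len, int window_len)
{
    return (double) window_len / (double) line_len;
}
```

With line_len = 80 and window_len = 7 this yields 0.0875, i.e. the 8.75% quoted above.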

For the great majority of cases, assuming uniformly distributed subsequence requests, we can simply return the address from the mapped memory.

From now on, we assume that the memory-mapped document has already been indexed by bookkeeping the beginning of each sequence, which can easily be done with memchr3. The sequence pointer points to the start of a sequence and sequence_len indicates its length before the next one.

To work efficiently, it is worth indexing the newlines as well. For this purpose, we use a GPtrArray, a simple pointer array implementation, which we populate with the addresses of the newlines in the mapped buffer.

const gchar *sequence = "ACTG\nACTG";
gsize sequence_len    = 9;

g_autoptr (GPtrArray) sequence_skips =
    g_ptr_array_sized_new (sequence_len / 80); // line feed every 80 characters

const gchar* seq = sequence;
while ((seq = memchr (seq, '\n', sequence_len - (seq - sequence))))
{
    g_ptr_array_add (sequence_skips, (gpointer) seq);
    seq++; // jump right after the line feed
}

A newline can either precede, follow or land within the subsequence.

  • all those preceding the desired subsequence shift it to the right
  • all those within the subsequence must be stripped
  • the remaining newlines can be safely ignored

If only the first or last condition applies, we’re in the ~91.25% of cases where we can simply return the corresponding memory address.

gsize subsequence_offset = 1;
gsize subsequence_len = 7;

We first position our subsequence at its initial location.

const gchar *subsequence = sequence + subsequence_offset;

We need some bookkeeping to fill a fixed-width buffer if a newline lands within our subsequence.

static gchar subsequence_buffer[64];
gsize subsequence_buffer_offset = 0;

Now, for each linefeed we’ve collected, we’re going to test our three conditions and either move the subsequence right or fill the static buffer.

The second condition requires some work. Using the indexed newlines, we basically trim the sequence into a static buffer that is returned. Although we lose thread safety working this way, this is mitigated by process-level parallelism.

guint i;
for (i = 0; i < sequence_skips->len; i++)
{
    const gchar *linefeed = g_ptr_array_index (sequence_skips, i);
    if (linefeed <= subsequence)
    {
        subsequence++; // move the subsequence right
    }
    else if (linefeed < subsequence + subsequence_len)
    {
        // length until the next linefeed
        gsize len_to_copy = linefeed - subsequence;

        memcpy (subsequence_buffer + subsequence_buffer_offset,
                subsequence,
                len_to_copy);

        subsequence_buffer_offset += len_to_copy;
        subsequence += len_to_copy + 1; // jump right after the linefeed
    }
    else
    {
        break; // this linefeed comes after the subsequence
    }
}

Lastly, we check whether we’ve used the static buffer, in which case we copy any trailing sequence.

if (subsequence_buffer_offset > 0)
{
    if (subsequence_buffer_offset < subsequence_len)
    {
        memcpy (subsequence_buffer + subsequence_buffer_offset,
                subsequence,
                subsequence_len - subsequence_buffer_offset);
    }

    return subsequence_buffer;
}
else
{
    return subsequence;
}

It’s possible to use a binary search to obtain the range of newlines affecting the position of the requested subsequence, but since the number of newlines is considerably small, I have ignored this optimization so far.
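For reference, such a lower-bound search could look like the following; I'm sketching it over a plain sorted array of newline addresses rather than the GPtrArray, so the helper and its signature are assumptions:

```c
#include <stddef.h>

/* Lower bound over the sorted newline index: returns the index of the
 * first recorded linefeed whose address is >= position, or n if none.
 * All pointers must point into the same mapped buffer, so comparing
 * them is well-defined. */
static size_t
first_linefeed_at_or_after (const char **linefeeds, size_t n,
                            const char *position)
{
    size_t lo = 0, hi = n;
    while (lo < hi)
      {
        size_t mid = lo + (hi - lo) / 2;
        if (linefeeds[mid] < position)
            lo = mid + 1;
        else
            hi = mid;
      }
    return lo;
}
```

The loop above could then start at the returned index instead of scanning every recorded newline from the beginning.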

Here we are with our zero-copy FASTA parser that efficiently looks up small subsequences.

P.S.: This technique has been used for the C rewrite of miRBooking4 I’ve been working on these past weeks.

The rewrite of valadoc.org in Vala using Valum has been completed and should eventually be deployed by the elementary OS team (see pull #40). There’s a couple of interesting things there too:

  • experimental search API using JSON via the /search endpoint
  • GLruCache now has Vala bindings and an improved API
  • an eventual GMysql wrapper around the C client API if extracting the classes I wrote is worth it

In the meantime, you can test it at valadoc2.elementary.io and report any regression on the pull-request.

Valum 0.3 has been patched and improved while I have been working on the 0.4 feature set. There’s a work-in-progress WebSocket middleware, VSGI 1.0 and support for PyGObject planned.

If everything goes as planned, I should finish the AJP backend and maybe consider Lwan.

On top of that, Windows support is coming, although the most difficult part is testing it. I might need some help there to set up AppVeyor CI.

I’m aware of the harsh discussions about the state of Vala and whether or not it will just end in an abysmal void. I would advocate inertia here: the current state of the language still makes it an excellent candidate for writing GNOME-related software, and this is not expected to change.

The first release candidate for Valum 0.3 has been launched today!

Get it, test it and be the first to find a bug! The final release will come shortly after along with various Linux distributions packages.

This post reviews the changes and features that have been introduced since 0.2. There’s been a lot of work, so take a comfortable seat and brew yourself a strong coffee.

The most significant change has probably been the introduction of Meson as a build system and all the new deployment strategies it now makes possible.

If you prefer avoiding a full install, it’s now possible to use Valum as a subproject. Subprojects are defined as subdirectories of subprojects, which you can conveniently track using git submodules.

project('', 'c', 'vala')

glib = dependency('glib-2.0')
gobject = dependency('gobject-2.0')
gio = dependency('gio-2.0')
soup = dependency('libsoup-2.4')
vsgi = subproject('valum').get_variable('vsgi')
valum = subproject('valum').get_variable('valum')

executable('app', 'app.vala',
           dependencies: [glib, gobject, gio, soup, vsgi, valum])

Once installed, however, all that is needed is to pass --pkg=valum-0.3 to the Vala compiler.

vala --pkg=valum-0.3 app.vala

In app.vala,

using Valum;
using VSGI;

public int main (string[] args) {
    var app = new Router ();

    app.get ("/", (req, res) => {
        return res.expand_utf8 ("Hello world!");
    });

    return Server.@new ("http", handler: app)
                 .run (args);
}

There’s been a lot of new features and I hope I won’t miss any!

There’s a new url_for utility in Router that comes with named routes. It basically allows one to reverse URL patterns defined with rules and raw paths.

All that is needed is to pass a name to rule, path or any method helper.

I discovered the : notation for named variadic arguments, which alternate between strings and values. This is typically used to initialize GLib.Object.

using Valum;
using VSGI;

var app = new Router ();

app.get ("/", (req, res) => {
    return "<a href=\"%s\">View profile of %s</a>".printf (
        app.url_for ("user", id: "5"), "John Doe");
});

app.get ("/users/<int:id>", (req, res, next, ctx) => {
    var id = ctx["id"].get_string ();
    return res.expand_utf8 ("Hello %s!".printf (id));
}, "user");

In Router, we also have:

  • asterisk to handle * URI
  • once for performing initialization
  • path for a path-based route
  • rule to replace method
  • register_type rather than a GLib.HashTable<string, Regex>

Another significant change is that the previous state stack has been replaced by a context tree with recursive key resolution. It pretty much maps strings to GLib.Value in a non-destructive way.

In terms of new middlewares, you’ll be glad to see all the built-in functionality we now support:

  • authentication with support for the Basic scheme via authenticate
  • content negotiation via negotiate, accept and more!
  • static resource delivery from GLib.File and GLib.Resource bundles
  • basic to strip the Router responsibilities
  • subdomain
  • basepath to prefix URLs
  • cache_control to set the Cache-Control header
  • branch on raised status codes
  • perform work safely and don’t let any error leak!
  • stream events with stream_events

Now, which one to cover?

The basepath middleware is my personal favourite, because it allows one to create prefix-agnostic routers.

var app = new Router ();
var api = new Router ();

// matches '/api/v1/'
api.get ("/", (req, res) => {
    return res.expand_utf8 ("Hello world!");
});

app.use (basepath ("/api/v1", api.handle));

The only missing feature is to retranslate URLs directly from the body. I think we could use some GLib.Converter here.

The negotiate middleware and its derivatives are really handy for declaring the available representations of a resource.

app.get ("/", accept ("text/html; text/plain", (req, res, next, ctx, ct) => {
    switch (ct) {
        case "text/html":
            return res.expand_utf8 ("");
        case "text/plain":
            return "Hello world!";
        default:
            assert_not_reached ();
    }
}))

There’s a lot of stuff happening in each of them so refer to the docs!

A quick review of Request and Response: we now have the following helpers:

  • lookup_query to fetch a query item and deal with its null case
  • lookup_cookie and lookup_signed_cookie to fetch a cookie
  • cookies to get cookies from a request and response
  • convert to apply a GLib.Converter
  • append to append a chunk into the response body
  • expand to write a buffer into the response body
  • expand_stream to pipe a stream
  • expand_file to pipe a file
  • end to end a response properly
  • tee to tee the response body into an additional stream

All the utilities that write the body come in _bytes and _utf8 variants. The latter properly sets the content charset when applicable.

Back to Server: implementations have been modularized with GLib.Module and are now dynamically loaded. What used to be a VSGI.<server> namespace has now become simply Server.new ("<name>"). Implementations are installed in ${prefix}/${libdir}/vsgi-0.3/servers, which can be overridden with the VSGI_SERVER_PATH environment variable.

The VSGI specification is not yet 1.0, so please don’t write a custom server for now, or if you do, please submit it for inclusion. There’s some work in progress for Lwan and AJP as we speak, if you have some time to spend.

Options have been moved into GLib.Object properties and a new listen API based on GLib.SocketAddress makes it more convenient than ever.

using VSGI;

var tls_cert = new TlsCertificate.from_files ("localhost.cert",
                                              "localhost.key");
var http_server = Server.new ("http", https: true,
                                      tls_certificate: tls_cert);

http_server.set_application_callback ((req, res) => {
    return res.expand_utf8 ("Hello world!");
});

http_server.listen (new InetSocketAddress (new InetAddress.loopback (SocketFamily.IPV4), 3003));

new MainLoop ().run ();

The GLib.Application code has been extracted into the new VSGI.Application cushion used when calling run. It parses the CLI, sets up the logger and turns SIGTERM into a graceful shutdown.

Server can also fork to scale on multicore architectures. I’ve backtracked on the Worker class for IPC communication, but if anyone is interested in building a nice clustering system, I would be glad to look into it.

That wraps it up, the rest can be discovered in the updated docs. The API docs should be available shortly via valadoc.org.

I managed to cover this exhaustively with abidiff, a really nice tool for diffing two ELF files.

Long-term notes

Here’s some long-term notes for things I couldn’t put into this release or that I plan at a much longer term.

  • multipart streams
  • digest authentication
  • async delegates
  • epoll and kqueue with wip/pollcore
  • schedule future release with the GNOME project
  • GIR introspection and typelibs for PyGObject and Gjs

The GIR and typelibs stuff might not be suitable for Valum, but VSGI could have a bright future with Python or JavaScript bindings.

Coming releases will be much less time-consuming, as there was a big step to take to get something actually usable. Maybe every 6 months or so.

Actually, this post should have been a part (the last one) of my "Need for PC" blog posts (1, 2, 3), but this also deserves a separate blog post on its own.

So, I have a fresh install on a fresh PC, what do I do next (and why)? Here's a list of the GNOME Shell extensions I use, and the (highly opinionated) motivation for me using them.
    • dash to dock - I need an always-visible or intelligent-autohide icon-only window list to see my open windows all the time and to launch my favorites. That is an old habit of mine, but I simply can't live without it. I usually set dash to dock to expand vertically on the left side, as I come from a Unity world, and this made the transition easier; but with the settings available you can make yourself comfortable even if you're transitioning from MacOS X or Windows 7-8-10, with a couple of clicks.
    • alternatetab - I need window switching with Alt-Tab to any of my running application windows. I don't want to think about "which workspace is this window on?" or "do I want to switch to another instance of the same app or another app?". It helps me tidy up my window list from time to time, and keeps me productive. coverflow alt-tab is another option here, for people who like eye-candy; for me the animations and reflections are a bit too much, but if you like that, it's also a good replacement for the default tabbing behaviour.
    • applications menu - I rarely use it, as I mostly got used to searching for apps in GNOME Shell, but the Activities button is not for me: I access the overview using the META key, and removing the Activities button leaves an empty space in the top left corner. It's the perfect place for a "Start menu". The applications menu is a good option, installed by default for the GNOME classic session, but if you need a more complex menu with search, recents, web bookmarks, places and a lot more (resembling the Start menu, but without ads ;) ), then gno-menu is the way to go.
    • pump up/down the volume - I think this habit of mine also comes from Unity: I middle-click the sound icon to mute, and I would also like visual feedback when adjusting the volume by scrolling over the sound icon. A small tooltip which I have to stare at to read doesn't count here. Better volume indicator does exactly what I need, no less, no more. Just perfect. I just wish it were the default GNOME Shell behaviour.
    • selecting sound output device - I usually have multiple possible output devices (speakers and headphones) and multiple possible input devices (webcam microphone, jack microphone, etc.), and I need to switch between these fast: switch to speakers/headphones when receiving a call, switch the microphone. Opening the sound settings and selecting the input and output devices would take too much time, but "there's an app for that" (understand: extension), called Sound output device chooser, which can also choose the sound input device and is nicely integrated with the sound menu. Perfect for the job.
    • monitoring the system - information at a glance about my computer, CPU usage. I prefer to have a chart in the top bar, so there's only one option. This plugin has lots of settings, and the preferences are kind of chaotic, but once you set it up, it just works. I only have a 200 px wide CPU chart in my top bar; that's all I need to see if something is misbehaving (firefox/flash/gnome-shell/some others happen to use 50%+ CPU just because they can).
    • tray area - although tray icons were "deprecated" quite some time ago, there are some applications which cannot/will not forget them. The most notable ones are Skype and Dropbox. The fallback notification area (bottom left corner) kind of conflicts with my left-side expanded dash to dock, so I use topIcons plus to move the icons back to the right corner.
    • top bar dropdown arrows - with Application menu/Gno-Menu, an application and a keyboard layout switcher, the number of small triangles eating up space in the top bar goes up to 4. I understand that I have to know that the menu, the application name (appmenu), the keyboard layout switcher and the power/sound/network menu are clickable and will expand on click, but the triangles are too much. So, I remove the dropdown arrows.

These tend to be the most important ones. A short list of other extensions I use, but which are not a bare necessity:

    • Freon - for keeping an eye on the temperatures/fan speeds of your PC
    • Switcher - keyboard-only application launcher/switcher
    • Dynamic panel transparency - for making the top bar transparent without full-screen apps, but solid if an app is maximized. Eye-candy, but it looks nice (ssssht, secret - it might become the default behavior). It would be even nicer if it could also affect dash to dock.

With these tweaks, I can use GNOME Shell and be fairly productive. How about you? Which extensions are you using? What would you change in GNOME Shell?
As promised, after a long wait, here are some details about the operating system and software I have installed from day 0. This is the shortlist I usually install on each of my computers, so I will also provide a short why for each bullet.
A side note is that, although I tend to use the command line a lot, the setup contains (only) a single cut-and-paste terminal command; the rest is entirely done using the
                1. Base system: Fedora (latest release of Workstation - 24 at installation time).
                  Reasons for choosing Fedora:
                  • user-friendly and developer-friendly
                  • includes latest stable GNOME stack - contains latest bugfixes and latest features - relevant also from both user and GNOME developer perspective
                  • most developer tools I use are bundled by default
                2. Fedy: a simple tool for configuring Fedora and installing the proprietary software I need to use.
                  The items I always install from Fedy:
                  • Archive formats - support for RAR and the likes, not installed by default
                  • Multimedia codecs - support for audio and video formats, MP3 and the likes
                  • Steam - for the child inside me
                  • Adobe Flash - I wish this wasn't necessary, but sometimes it is
                  • Better font rendering - this could also be default, and may become obsolete in the near future
                  • Disk I/O scheduler - advertised as a performance boost for SSDs and HDDs
                3. Media players
                  • Kodi - the media player I install on all my devices, be it tablet, PC,
                    laptop, Raspberry PI - extensible, supports library management, sharing on the local network, remote control, "Ambilight" clone for driving RGB leds behind my TV
                  • VLC - for one-shot video playback - Kodi is the best, but too heavy for basic video playback
                  • Audacious - for one-shot audio playback and playing sets of songs - I grew up with WinAmp, and Audacious supports classic WinAmp skins as well as a standard GTK interface
                4. Graphics
                  • GIMP - photo editing and post-processing
                  • Inkscape - vector graphics editor
                  • Inkscape-sozi - Inkscape extension for presentation editing - whenever I need a good presentation, I create a vector-graphics presentation with Inkscape+Sozi, because it's so much better than a plain LibreOffice (PowerPoint) presentation - more like Prezi
                With these installed, my system is ready to be used. Time for tweaking the user interface a bit, so next up is customizing GNOME Shell with extensions.
                As promised, I'm back with the final build pictures of the PowerMac G5 ATX mod, as the PC is already complete and working. In fact, I have built the GNOME 3.22.0 release tarballs for several modules using it (and have tested building other stuff, plus a bit of gaming to check the temperatures; they are OK). Measured power consumption is almost always (even with all cores at 100%) below my old PC's idle draw (this one idles at ~35W and draws 65-70W under load or in-game).

                With every component mounted
                Intake fans in front
                Rear exhaust fans
                CPU cover mounted
                Plastic cover in-place, before mounting the sidepanel

                In a future post, I'll summarize the software setup, including GNOME Shell extensions I can't live without, of course, with some screenshots.

                I discovered Meson a couple of years back and have used it since for most of my projects written in Vala. This post is an attempt at describing the good, the bad and the ugly of this build system.

                So, what is Meson?

                • a build system
                • portable (see Python portability)
                • a Ninja generator
                • use case oriented
                • fast
                • opinionated

                What is it not?

                • a general purpose build system
                • a Turing-complete language
                • extensible (only in Python)

                It handles 80% of the cases nicely and elegantly.

                Since it is use case oriented, features are introduced on need. It keeps a tight balance between conciseness, generality and features.

                It mixes the configure and build steps so that the build essentially becomes one big tree. The build system then determines what goes into the configuration and what goes into the build.

                The cognitive load is very low, which means it’s very easy to learn the basics and make actual use of it. This is critical, because all the time spent on setting up the build hardly contributes to the project goal.

                The following is a basic build that checks for dependencies (using pkg-config) and builds an executable:

                project('Meson Example', 'c', 'vala')
                
                glib = dependency('glib-2.0')
                gobject = dependency('gobject-2.0')
                
                executable('app', 'app.vala', dependencies: [glib, gobject])
                

                Building becomes a piece of cake:

                mkdir build && cd build
                meson ..
                ninja
                

                Only a few keywords are sufficient for most builds:

                • executable
                • library with shared_library and static_library
                • dependency
                • declare_dependency

                Built-in benchmarks and tests, just pass the executable to either benchmark or test.

                The main downside is that if what you want to do is not supported, you either have to hack around it or wait until the feature gets into the build system.

                The system is very opinionated. That's both a good and a bad thing: good, since you don’t need to write a lot to get most jobs done; bad, because you might eventually hit a wall.

                There’s also the Python question. Meson requires at least Python 3.4. This is becoming less of a problem as old distributions progressively die out, but it can still block you today. Here are a few ideas to remedy this:

                • build a dependency-free zipball (see issue #588)
                • backport Meson to older Python version

                Meson is getting better over time and so far has managed to become the best build system for Vala. This is why I highly recommend it.


                When I say everything torn apart, I mean it

                Preparing the case

                Choosing a non-mATX-compatible case to start with gave me major headaches; simply put, I have found no mATX case with a similar look. I had to work quite a bit to make the G5 case work with an mATX motherboard.
                During shipping, as usual for these computers, the outer case stands had been bent, resulting in a less pleasant look. To fix this, I had to rip the whole thing apart, taking out the inner case to be able to "bend" the outer case stands back into their original position.
                I did not expect to have to do this, but as I already had the case torn apart, I decided to apply a new paint job. It is not perfect, but it's OK for me: the outer case, with grey base paint and metallic grey paint applied over it, looks similar to the original (except for the Apple logo being mostly gone). The inner case was painted matte black, and it looks fine. However, when mounting the inner case back into the outer case, the black paint fell off in some places, so I had to reapply it.
                I also had to cut the back IO plate as close to the side as possible to fit an mATX IO plate: the standard specifies 45x158 mm, but the stock G5 backplate is somewhere around 40x190 mm.

                G5 PSU internals replaced

                Modding the PSU

                • Remove PSU internals
                • Get an ATX PSU with a 120mm fan on top (in my case a Seasonic SS330HB)
                • Disassemble it completely (remove the cooling fan from the top and the case)
                • Mount the internals of the power supply in the G5 power supply case
                • Create or buy a longer cable with a Y-splitter with 2-pin male plugs for the fans
                • Mount the new 60mm fans (I have used Scythe Mini Kaze 60mm)
                • The resulting PSU
                • Assemble the whole thing again

                Preparing mATX motherboard mount

                • Use an old mATX motherboard as a template
                • Break the mounts standing in the way of the motherboard
                • Mark the mounting holes
                • Use (a part of) the original cable organizer for the SATA power cable going to the HDD cage/optical drive
                • Mount old mATX motherboard with glue applied to the stands, so that they stick to the case (I did not go with the new one at first, as I had to push it hard for the stand-offs to stick, and I did not want to damage the new one)
                • Test wiring of the power button and the power led with an old mATX motherboard (I have used a different led, a red one to match the motherboard leds)
                • Wire USB and audio
                • Remove the old mATX motherboard
                • Mount the new mATX motherboard in place

                The complete PC part list for the build is:
                PCPartPicker part list / Price breakdown by merchant
                Type Item Price
                CPU Intel Core i7-6700T 2.8GHz Quad-Core OEM/Tray Processor Purchased For $366.42
                CPU Cooler ARCTIC Alpine 11 Plus Fluid Dynamic Bearing CPU Cooler Purchased For $12.17
                Motherboard MSI B150M MORTAR Micro ATX LGA1151 Motherboard Purchased For $85.70
                Memory Kingston HyperX Fury Black 16GB (2 x 8GB) DDR4-2133 Memory Purchased For $89.88
                Storage Kingston SSDNow V300 Series 120GB 2.5" Solid State Drive Purchased For $50.00
                Storage Toshiba 1TB 3.5" 7200RPM Internal Hard Drive Purchased For $50.00
                Video Card XFX Radeon HD 4550 1GB Video Card Purchased For $25.00
                Case Fan ARCTIC Arctic F8 PWM 31.0 CFM 80mm Fan Purchased For $4.30
                Case Fan ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan Purchased For $4.17
                Case Fan ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan Purchased For $4.17
                Case Fan ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan Purchased For $4.50
                Case Fan ARCTIC F9 PWM Rev. 2 43.0 CFM 92mm Fan Purchased For $4.50
                Other PowerMac G5 Purchased For $25
                Prices include shipping, taxes, rebates, and discounts
                Total $725.81
                Generated by PCPartPicker 2016-08-11 09:17 EDT-0400


                Bill of additional materials used so far:
                1x gray basepaint - $6
                1x matt black paint - $6
                1x metallic silver paint - $3
                1x matt black paint - $3

                motherboard template - $2.5
                motherboard stands ~ $2.5
                power supply ~ $23
                2x Scythe Mini Kaze fans for the PSU - $14
                1x Bracket adapter 2x2.5 HDD/SSD to 3.5 bay for mounting SSD - $4

                I have realized quite some time ago that my PC is struggling to keep up with the pace, so I have decided that it is time for an upgrade (after almost 6 years with my Dell Inspiron 560 minitower with C2D Q8300 quad-core).

                I "upgraded" the video card a couple of months ago because the old one did not support the OpenGL 3.2 needed by GtkGLArea. First I went with an ATI Radeon HD6770 I received from my gamer brother, but it was loud and I did not use it enough to justify its high cost (108W TDP, which bumped the idle PC's consumption by 30-40W, from 70-80W to 110-120W), so I traded it for another one: a low-consumption (passively cooled, 25W TDP) ATI Radeon HD4550 that works well with Linux and all my Steam games whenever I am gaming (I'm a casual gamer). Consumption went back to 90-100W.

                After that came the power supply, replacing the Dell-Provided 300W supply with a more efficient one, a 330W Seasonic SS330HB. This resulted in another 20W drop in power consumption, idling below 70W.

                The processor is fairly old and has a 95W TDP, with performance way below today's i7 processors at the same TDP, so it is worth upgrading. That means a motherboard + CPU + cooler + memory upgrade; as I have the rest of the components, I will reuse them, and add a new (old) case to the equation: a PowerMac G5 from around 2004.

                So here's the basic plan:
                Case - PowerMac G5 modded for mATX compatibility, and repainted - metallic silver the outer case, matt black the inner case - inspired by Mike 7060's G5 Mod
                CPU - Intel core i7 6700T - 35W TDP
                Cooler - Arctic Alpine 11 Plus - silent, bigger brother of the fanless Arctic Alpine 11 Passive (for up to 35 W TDP, the i7 6700T being right at the edge, I did not want to risk)
                Motherboard - LGA1151 socket, DDR4, USB3, 4-pin CPU and case fan headers, HDMI and DVI video outs being the requirements - I chose the MSI B150M Mortar because of guaranteed Linux compatibility (thanks, Phoronix), 2 onboard PWM case fan headers + PWM-controlled CPU fan
                Memory - 2x8GB DDR4 Kit - Kingston Hyperx Fury
                PSU - Seasonic SS-330HB mounted inside the G5 PSU case, original G5 PSU fans replaced with 2x 60mm Scythe Mini Kaze for silent operation
                Case Cooling - Front 2x 92mm - Arctic F9 PWM PST in the original mounts

                Video card - Onboard Intel or optional ATI Radeon HD4550 if (probably will not happen) the onboard will not be enough
                Optical drive (not sure if it is required) - start with existing DVD-RW drive
                Storage - 120 GB Kingston V300 + 1TB HDD - existing

                Plans for later
                (later/optional) update optical drive to a Blu-Ray drive
                (later/optional) additional Arctic F9 PWM PST fans in the original G5 intake mounts, or 120mm Arctic F12 PWM PST in new intake mounts.

                I'll soon be back with details on preparing the case, probably the hardest part of the whole build. The new parts are already ordered (the CPU was pretty hard to find on stock, and will be delivered in a week or so instead of the usual 1-2 days).

                Valum now supports dynamically loadable server implementations with GModule!

                Servers are typically looked up in /usr/lib64/vsgi/servers with the libvsgi-<name>.so pattern (although this is highly system-dependent).

                This works by setting the RPATH of the VSGI shared library to $ORIGIN/vsgi/servers so that it looks into that folder first.

                The VSGI_SERVER_PATH environment variable can be set as well to explicitly provide a directory containing implementations.

                To implement a compliant VSGI server, all you need is a server_init symbol which complies with the ServerInitFunc delegate, like the following:

                [ModuleInit]
                public Type server_init (TypeModule type_module) {
                    return typeof (VSGI.Custom.Server);
                }
                
                public class VSGI.Custom.Server : VSGI.Server {
                    // ...
                }
                

                It has to return a type derived from VSGI.Server and instantiable with GLib.Object.new. The Vala compiler will automatically generate the code to register the class and interfaces into the type_module parameter.

                Some code from CGI has been moved into VSGI to provide uniform handling of its environment variables. If the protocol you want complies with that, just subclass (or directly use) VSGI.CGI.Request and it will perform all the required initialization.

                public class VSGI.Custom.Request : VSGI.CGI.Request {
                    public Request (IOStream connection, string[] environment) {
                        base (connection, environment);
                    }
                }
                

                For more flexibility, servers can be loaded with ServerModule directly, allowing one to specify an explicit lookup directory and control when the module should be loaded or unloaded.

                var cgi_module = new ServerModule (null, "cgi");
                
                if (!cgi_module.load ()) {
                    assert_not_reached ();
                }
                
                var server = Object.new (cgi_module.server_type);
                

                I received very useful support from Nirbheek Chauhan and Tim-Philipp Müller for setting the necessary build configuration for that feature.

                I recently finished and merged support for content negotiation.

                The implementation is really simple: one provides a header, a string describing expectations, and a callback invoked with the negotiated representation. If no expectation is met, a 406 Not Acceptable error is raised.

                app.get ("/", negotiate ("Accept", "text/xml; application/json",
                                         (req, res, next, ctx, content_type) => {
                    // produce according to 'content_type'
                }));
                

                Content negotiation is a nice feature of the HTTP protocol allowing a client and a server to negotiate the representation (e.g. content type, language, encoding) of a resource.

                One very nice part allows the user agent to state a preference and the server to express the quality of a given representation. This is done by specifying the q parameter, and the negotiation process attempts to maximize the product of both values.

                The following example expresses that the XML version is of poor quality, which is typically the case when it’s not the source document. JSON would be favoured – implicitly q=1 – if the client does not state any particular preference.

                accept ("text/xml; q=0.1, application/json", () => {
                
                });
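To make the q-maximization concrete, here is a toy Python sketch of the negotiation logic (my own illustration with hypothetical names, not Valum’s implementation):

```python
def parse_qualities(header):
    """Parse 'type; q=0.5, type2' into {type: q}; q defaults to 1."""
    out = {}
    for part in header.split(','):
        fields = [f.strip() for f in part.split(';')]
        q = 1.0
        for f in fields[1:]:
            key, _, value = f.partition('=')
            if key.strip() == 'q':
                q = float(value)
        out[fields[0]] = q
    return out

def negotiate(client_header, server_header):
    """Pick the representation maximizing client q x server q."""
    client = parse_qualities(client_header)
    server = parse_qualities(server_header)
    best = None
    for mime, server_q in server.items():
        if mime in client:
            score = client[mime] * server_q
            if best is None or score > best[0]:
                best = (score, mime)
    return best[1] if best else None  # None would map to 406 Not Acceptable
```

With the example above, a client sending "text/xml, application/json" against the server offer "text/xml; q=0.1, application/json" gets JSON, since 1 × 1 beats 1 × 0.1.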
                

                Mounted as a top-level middleware, it provides a nice way of setting a Content-Type: text/html; charset=UTF-8 header and filtering out non-compliant clients.

                using Tmpl;
                using Valum;
                
                var app = new Router ();
                
                app.use (accept ("text/html", () => {
                    return next ();
                }));
                
                app.use (accept_charset ("UTF-8", () => {
                    return next ();
                }));
                
                var home = new Template.from_path ("templates/home.html");
                
                app.get ("/", (req, res) => {
                    home.expand (res.body, null);
                });
                

                This is another step toward a 0.3 release!

                Ever heard of fork?

                using GLib;
                using VSGI.HTTP;
                
                var server = new Server ("", (req, res) => {
                    return res.expand_utf8 ("Hello world!");
                });
                
                server.listen (new VariantDict ().end ());
                server.fork ();
                
                new MainLoop ().run ();
                

                Yeah, there’s a new API for listening and forking with custom options…

                The fork system call will actually copy the whole process into a new process, running the exact same program.

                Although memory is not shared, file descriptors are, so you can have workers listening on common interfaces.
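The descriptor sharing that makes this work can be shown with a few lines of standalone Python (a toy sketch with a pipe, unrelated to VSGI’s API):

```python
import os

r, w = os.pipe()          # one pair of file descriptors, created before forking

pid = os.fork()
if pid == 0:
    # Child: inherits the same descriptors and can write to the pipe.
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)

# Parent: reads what the child wrote through the shared descriptor.
os.close(w)
data = os.read(r, 1024)
os.waitpid(pid, 0)
print(data.decode())  # hello from child
```

A listening socket behaves the same way: bound once before fork, every worker can accept connections on it.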

                I notably tested the whole thing on our cluster at IRIC. It’s a 64-core Xeon setup.

                wrk -c 1024 -t 32 http://0.0.0.0:3003/hello
                

                With a single worker:

                Running 10s test @ http://0.0.0.0:3003/hello
                  32 threads and 1024 connections
                  Thread Stats   Avg      Stdev     Max   +/- Stdev
                    Latency    54.35ms   95.96ms   1.93s    98.78%
                    Req/Sec   165.81    228.28     2.04k    86.08%
                  41741 requests in 10.10s, 5.89MB read
                  Socket errors: connect 35, read 0, write 0, timeout 13
                Requests/sec:   4132.53
                Transfer/sec:    597.28KB
                

                With 63 forks (64 workers):

                Running 10s test @ http://0.0.0.0:3003/hello
                  32 threads and 1024 connections
                  Thread Stats   Avg      Stdev     Max   +/- Stdev
                    Latency    60.83ms  210.70ms   2.00s    93.58%
                    Req/Sec     2.99k   797.97     7.44k    70.33%
                  956577 requests in 10.10s, 135.02MB read
                  Socket errors: connect 35, read 0, write 0, timeout 17
                Requests/sec:  94720.20
                Transfer/sec:     13.37MB
                

                It’s about 1500 req/sec per worker and a speedup by a factor of 23. The latency is almost unaffected.

                The past few days, I’ve been working on a really nice libmemcached GLib wrapper.

                • main loop integration
                • fully asynchronous API
                • error handling

                The whole code is available under the LGPLv3 from arteymix/libmemcached-glib.

                It should reach 1.0 very quickly, only a few features are missing:

                • a couple of function wrappers
                • integration for libmemcachedutil
                • async I/O improvements

                Once released, it might be interesting to build a GTK UI for Memcached upon that work. Meanwhile, it will be a very useful tool to build fast web applications with Valum.

                This post describes a feature I will attempt to implement this summer.

                An async delegate is declared by simply extending a traditional delegate with the async trait.

                public async delegate void AsyncDelegate (GLib.OutputStream @out);
                

                The syntax for callbacks is the same. It’s not necessary to add anything, since the async trait is inferred from the type of the variable holding it.

                AsyncDelegate d = (@out) => {
                    yield @out.write_all_async ("Hello world!".data, null);
                };
                

                Just like regular callbacks, asynchronous callbacks are first-class citizens.

                public async void test_async (AsyncDelegate callback,
                                              OutputStream  @out) {
                    yield callback (@out);
                }
                

                It’s also possible to pass an asynchronous function which is type-compatible with the delegate signature:

                public async void hello_world_async (OutputStream @out)
                {
                    yield @out.write_all_async ("Hello world!".data);
                }
                
                yield test_async (hello_world_async, @out);
                

                Chaining

                I still need to figure out how to handle chaining for async lambdas. Here are a few ideas:

                • refer to the callback using this (weird..)
                • introduce a callback keyword
                AsyncDelegate d = (@out) => {
                    Idle.add (this.callback);
                    yield;
                };
                
                AsyncDelegate d = (@out) => {
                    Idle.add (callback);
                    yield;
                };
                

                How it would end up for Valum

                Most of the framework could be revamped with the async trait in ApplicationCallback, HandlerCallback and NextCallback.

                app.@get ("/me", (req, res, next) => {
                    if (req.lookup_signed_cookies ("session") == null) {
                        return yield next (req, res);
                    }
                    return yield res.extend_utf8_async ("Hello world!".data);
                });
                

                The semantics of the return value would simply state whether the request has been handled, instead of being eventually handled.

                As you might already know, GNOME 3.20 has been released, with a number of improvements, fixes, future-proofing changes, and preparations for Wayland prime time.



                Here's a short list of my favourite features from Delhi:
                • Files search improvements (see here)
                • Photos has basic photo editing support - crop and filters (see here)
                • Control center mouse panel revamped (see here)
                • Keyboard shortcuts window for some apps (see here) - although I have not managed to do this for any of the apps I maintain, I plan to do it for 3.22, as I consider it a useful feature in the sea of keyboard shortcuts
                I will shortly summarize what happened in some of the GNOME games:
                • Mines got keyboard navigation updates and fixes, thanks to Isaac Lenton
                • Atomix 
                  • has a gameplay tip starting window
                  • has updated artwork to match the GNOME 3 world, thanks to Jakub Steiner
                • Five or more got a new hires icon, thanks to Jakub Steiner
                All in all, congrats for everyone contributing to GNOME 3.20, keep up the good work.

                  I have recently introduced a basepath middleware and I thought it would be relevant to describe it further.

                  It’s been possible for a while to compose routers using subrouting. This is very important for writing modular applications.

                  var app = new Router ();
                  var user = new Router ();
                  
                  user.get ("/user/<int:id>", (req, res, next, ctx) => {
                      var id = ctx["id"].get_string ();
                      var user = new User.from_id (id);
                      res.extend_utf8 ("Welcome %s".printf (user.username));
                  });
                  
                  app.rule ("/user", user.handle);
                  

                  Now, using basepath, it’s possible to design the user router without specifying the /user prefix on rules.

                  This is very important, because we want to be able to design the user router as if it were the root and rebase it on need upon any prefix.

                  var app = new Router ();
                  var user = new Router ();
                  
                  user.get ("/<int:id>", (req, res, next, ctx) => {
                      res.extend_utf8 ("Welcome %s".printf (ctx["id"].get_string ()));
                  });
                  
                  app.use (basepath ("/user", user.handle));
                  

                  How it works

                  When passing through the basepath middleware, requests whose path matches the basepath prefix have that prefix stripped and are forwarded.
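The prefix matching and stripping can be sketched like this in Python (hypothetical names, not Valum’s actual middleware):

```python
def basepath(prefix, handler):
    """Toy middleware: forward only requests matching 'prefix',
    with the prefix stripped from the path."""
    def middleware(path, next_handler):
        if path == prefix or path.startswith(prefix + '/'):
            stripped = path[len(prefix):] or '/'  # '/user' alone maps to '/'
            return handler(stripped)
        return next_handler(path)  # no prefix match: pass through untouched
    return middleware
```

For example, a handler mounted at '/user' would see '/user/5' as '/5', while '/other' falls through to the next middleware.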

                  But there’s more!

                  That’s not all! The middleware also handles errors that set the Location header, from the Success.CREATED and Redirection.* domains.

                  user.post ("/", (req, res) => {
                      throw new Success.CREATED ("/%d", 5); // rewritten as '/user/5'
                  });
                  

                  It also rewrites the Location header if it was set directly.

                  user.post ("/", (req, res) => {
                      res.status = Soup.Status.CREATED;
                      res.headers.replace ("Location", "/%d".printf (5));
                  });
                  

                  Rewriting the Location header is exclusively applied to absolute paths starting with a leading slash /.
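The rewriting rule boils down to a simple check, sketched here in Python (a toy with hypothetical names, not the actual implementation):

```python
def rewrite_location(headers, prefix):
    """Toy sketch: prepend the basepath to an absolute Location header;
    leave relative paths and full URLs untouched."""
    location = headers.get('Location')
    if location is not None and location.startswith('/'):
        headers['Location'] = prefix + location
    return headers
```

So a handler under '/user' setting Location to '/5' ends up advertising '/user/5', while 'http://example.com/5' is left as-is.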

                  It can easily be combined with the subdomain middleware to provide a path-based fallback:

                  app.subdomain ("api", api.handle);
                  app.use (basepath ("/api/v1", api.handle));
                  

                  I often profile Valum’s performance with wrk to ensure that no regressions hit the stable release.

                  It helped me identify a couple of mistakes in various implementations.

                  Anyway, I’m glad to announce that I have reached 6.3k req/sec on a small payload, all relative to my very low-grade Acer C720.

                  The improvements are available in the 0.2.14 release.

                  • wrk with 2 threads and 256 connections running for one minute
                  • Lighttpd spawning 4 SCGI instances

                  Build Valum with examples and run the SCGI sample:

                  ./waf configure build --enable-examples
                  lighttpd -D -f examples/scgi/lighttpd.conf
                  

                  Start wrk

                  wrk -c 256 http://127.0.0.1:3003/
                  

                  Enjoy!

                  Running 1m test @ http://127.0.0.1:3003/
                    2 threads and 256 connections
                    Thread Stats   Avg      Stdev     Max   +/- Stdev
                      Latency    40.26ms   11.38ms 152.48ms   71.01%
                      Req/Sec     3.20k   366.11     4.47k    73.67%
                    381906 requests in 1.00m, 54.31MB read
                  Requests/sec:   6360.45
                  Transfer/sec:      0.90MB
                  

                  There are still a few things to get done:

                  • hanging connections benchmark
                  • throughput benchmark
                  • logarithmic routing #144

                  The trunk buffers SCGI requests asynchronously, which should improve the concurrency with blocking clients.

                  Lighttpd is not really suited for throughput tests because it buffers the whole response. Sending a lot of data is problematic and uses up a lot of memory.

                  Valum is designed with streaming in mind, so it has a very low (if not negligible) memory footprint.

                  I reached 6.5k req/sec, but since I could not reliably reproduce it, I preferred posting these results.

                  I have just backported important fixes from the latest developments in this hotfix release.

                  • fix blocking accept call
                  • async I/O with FastCGI with UnixInputStream and UnixOutputStream
                  • backlog defaults to 10

                  The blocking accept call was a real pain to work around, but I finally ended up with an elegant solution:

                  • use a threaded loop for accepting a new request
                  • delegate the processing into the main context

                  FastCGI multiplexes multiple requests on a single connection and thus it’s hard to perform efficient asynchronous I/O. The only thing we can do is poll the unique file descriptor we have, and to do it correctly, why not reuse gio-unix-2.0?

                  The streams are reimplemented by deriving UnixInputStream and UnixOutputStream and overriding read and write to write a record instead of the raw data. That’s it!
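The record-framing idea can be sketched in Python (using a made-up length-prefixed format, not the real FastCGI record layout):

```python
import io
import struct

class RecordOutputStream(io.RawIOBase):
    """Toy stream that frames each write() as a length-prefixed record,
    standing in for an overridden write() that emits FastCGI records."""

    def __init__(self, sink):
        self.sink = sink

    def writable(self):
        return True

    def write(self, data):
        payload = bytes(data)
        # Each chunk becomes one record: 4-byte big-endian length + payload.
        self.sink.write(struct.pack('>I', len(payload)) + payload)
        return len(payload)
```

The calling code keeps writing plain bytes; the framing happens transparently underneath, which is exactly why overriding the stream primitives is enough.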

                  I have also been working on SCGI: the netstring processing is now fully asynchronous. I couldn’t backport it, as it depends on other breaking changes.
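For reference, SCGI frames its headers as netstrings; a minimal (synchronous) parser sketch, which the asynchronous version essentially resumes whenever it returns “need more data”:

```python
def parse_netstring(buf):
    """Parse one netstring b'<len>:<payload>,'; return (payload, rest)
    or None when more bytes are needed."""
    head, sep, rest = buf.partition(b':')
    if not sep:
        return None  # length prefix not complete yet
    length = int(head)
    if len(rest) < length + 1:
        return None  # payload (plus trailing comma) not complete yet
    if rest[length:length + 1] != b',':
        raise ValueError('malformed netstring')
    return rest[:length], rest[length + 1:]
```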

                  First of all, happy new year to you all (yes, I know we are already in February)!

                  Long time no post, I've been very busy with work, new projects, new clients, new technologies, preparing the move to a new home, the second child, and lot more, on the personal side.
                  Handling all of the above at the same time resulted in a severe drop in my open-source contributions, so I haven't been able to do anything more than code reviews and minor fixes, plus the releases of the GNOME modules I am responsible for (GNOME Games rule!).
                  During the winter break, between Christmas and New Year's Eve, I managed to work a bit on AppMenu integration for Atomix (which is not completely ready, as the app menu is not displayed, despite being there when checking with GtkInspector).
                  In the meantime lots of good things have happened, e.g. the Fedora 23 release, which is (again) the best Fedora release of all time, thanks to everyone contributing.

                  All in all, I just wanted to share that I'm not dead yet, just very busy, hoping that I can get back to normal with a couple more open-source contributions, and share some more experiences with gadgets, e.g. the Android+Lubuntu dual-boot open-source TV box I got for Christmas.

                  I’m using the Thunderbird Conversations add-on and am generally quite happy with it. One pain point, however, is that its quick reply feature has a really small text area for replying. This is especially annoying if you want to reply in-line and have to scroll to the relevant parts of the e-mail.

                  A quick fix for this:

                  1. Install the Stylish thunderbird add-on
                  2. Add the following style snippet:
                    .quickReply .textarea.selected {
                      height: 400px !important;
                    }

                  Adjust height as preferred.

                  Since the new design of GNOME Mines has been implemented, several people have complained about the lack of colors and the performance issues.

                  The lack of colors was tackled last cycle with the introduction of theming support, including a classic theme with the same colored numbers we all know from the old days of GNOME Mines.

                  Now, on to the performance issues. In most cases these are not real performance issues but rather playability issues for hardcore miners aiming for sub-10-second times: the reveal transition time is set to 0.4 seconds, which adds up to a few seconds during a game and can push the total past 10 seconds. To overcome this limitation, I have implemented a disable-animations option in the Appearance settings, allowing users to turn off the transitions completely and achieve the best scores they can. This can also come in handy in the rare cases where the transitions cause real performance issues. The next step would be to count the number of manually revealed tiles, multiply it by the transition time when animations are enabled, and subtract this from the total time at the end of the game, to make sure timing is roughly the same for players with and without animations.
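The proposed compensation boils down to simple arithmetic; a hypothetical sketch (the 0.4s transition time comes from the post, the function itself is my illustration, not GNOME Mines code):

```python
REVEAL_TRANSITION = 0.4  # seconds per manually revealed tile, per the post

def adjusted_time(total_seconds, revealed_tiles, animations_enabled):
    """Subtract the accumulated reveal-animation time so that scores
    with and without animations are roughly comparable."""
    if not animations_enabled:
        return total_seconds
    return total_seconds - revealed_tiles * REVEAL_TRANSITION
```

For instance, a 12-second game with 10 manually revealed tiles and animations on would be scored as 8 seconds.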

                  Feedback, ideas, comments are always welcome: are you a hardcore miner? will you disable the eye-candy animations to get better scores? Which theme are you using when you are playing GNOME Mines?
                  I've been fairly busy recently, so all my colleagues upgraded to F22 before I did, even though usually I was the one installing systems in beta or release-candidate state. After seeing two fairly successful upgrades, I decided to take an hour to upgrade my system, hoping that it would fix an annoying GDM issue I've seen recently: each day after unlocking the system (I cold-boot each day, so after my first break) one of my three displays doesn't turn on, and I have to go to display settings, change something, click apply and then revert to have all my displays again. Subsequent screen unlocks work correctly; I only get this once a day at the first unlock.

                  After updating 3000+ packages in about an hour, I rebooted, got to the login screen, typed my password, the login screen disappeared, the grey texture appeared, and the system hung.
                  The steps to recover to a usable computer:
                  • Switching to another VT revealed that everything was running, including GNOME Shell; GDM's status was OK.
                  • Tried restarting GDM, but it didn't help.
                  • Checking the common issues for Fedora 22 gave me a hint that GDM running with Wayland could be the culprit, so I changed to X11-based GDM, but that didn't help either.
                  • The GNOME on Wayland session managed to log in, but froze when I pressed the Meta key to access the applications.
                  • Settings from the top right corner did work, however, so I managed to create another user, which could log in.
                  • That led me to the conclusion that there was a problem with my configuration. I'm still not sure, and I will never find out: as the computer to be upgraded was my work PC and I needed to get stuff done, I decided to reset my configuration. As I couldn't find a way to reset all dconf settings to their defaults, I backed up and deleted the following folders: .gnome, .gnome2, and some other ones I can't remember, but they should be easy to find with a search for "resetting all gnome shell settings". That did the job: I had to reconfigure my GNOME Shell extensions and settings, but at least I managed to log in. All in all, it wasn't the best upgrade experience I ever had.
                  The result, however, is pretty good (though one of my displays still turns off at the first unlock), and it was definitely worth working on (I knew it would be; on my home computer I've been running F22 since the Alpha ;) )
                  Thanks to everyone who contributed to this release; your work is welcome and appreciated.
                    Recently I've been thinking about the real value of my contributions to free and open-source software.

                    I've realized that I'm mostly a "seasonal" open-source contributor: I choose a project, do some bug triaging and bug-fixing, and when I'm "stuck" with the project (i.e. the rest of the bugs/features would require serious effort and quite some time to implement) I jump to the next project, do the same there, and repeat this over and over again. Of course, in the meantime I get attached to some projects and "maintain" them, so I keep track of new bugs and fix them whenever I can, review the patches, and make releases, but I don't really consider myself an active contributor.
                    I've had a "season" of Ubuntu software-management-related contributions (software-center, update-manager, synaptic), a System Monitor season, an elementary software season and a GNOME Games season (and this one's not over yet). I've also had some minor contributions (just for fun) to projects like LibreOffice, or recently Eclipse (in the context of the GreatFix initiative, which was a really interesting and rewarding experience).

                    I am not sure whether all this is a good thing or a bad thing. I enjoy hacking on open-source projects: for fun, for profit, for experience, for whatever. The most useful skill I've gained is easily finding my way around large codebases for bugfixing. But here's what can be seen from the outside (e.g. from the point of view of a company looking for a developer): this guy keeps jumping from one project to another and never got really deep into any of the projects he worked on (my longest "streak" of working on a single project was one year). Fortunately OpenHub has a chart for contributions to GNOME as a whole, and it shows that I'm contributing to GNOME constantly, even if only with a few commits per month.

                    Another thing about my contributions is the programming language I use: at work I'm a Java developer, but that cannot be seen at all in my contributions-by-language chart on OpenHub, as the only Java contributions it shows are a few commits to a friend's project implementing Java bindings for a Go library. This will change a bit in the near future, as the Eclipse project should appear there soon with a few commits, but still, it shows that I'm most experienced with C++, which I'm not :)

                    I've started to realize that the dream job I'm looking for would make use of all of this: working primarily on open-source software in Java, while still giving me the freedom to occasionally work on other open-source software. Does that job exist? Unfortunately, not in my country. I saw a job posting recently with a description which would probably fit my dream-job category, but I'm a bit afraid I wouldn't be a good candidate, as it lists some nice-to-have skills which I don't have, due to the area of Java I've worked in until now (server-side Java done with Spring vs J2EE).

                    Does your company value open-source contributions when hiring? If so, which is preferred: in-depth knowledge of one project, or could shifting between projects also be useful? Is being open-minded and language-agnostic better, or is knowing one language to its guts better?

                    A while back I started working on a project called Squash, and today I’m pleased to announce the first release, version 0.5.

                    Squash is an abstraction layer for general-purpose data compression (zlib, LZMA, LZ4, etc.).  It is based on dynamically loaded plugins, and there are a lot of them (currently 25 plugins to support 42 different codecs, though 2 plugins are currently disabled pending bug fixes from their respective compression libraries), covering a wide range of compression codecs with vastly different performance characteristics.

                    The API isn’t final yet (hence version 0.5 instead of 1.0), but I don’t think it will change much.  I’m rolling out a release now in the hope that it encourages people to give it a try, since I don’t want to commit to API stability until a few people have given it a try. There is currently support for C and Vala, but I’m hopeful more languages will be added soon.

                    So, why should you be interested in Squash?  Well, because it allows you to support a lot of different compression codecs without changing your code, which lets you swap codecs with virtually no effort.  Different algorithms perform very differently with different data and on different platforms, and make different trade-offs between compression speed, decompression speed, compression ratio, memory usage, etc.
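The codec-swapping idea can be illustrated with a tiny Python analogue, with standard-library codecs standing in for Squash's plugins (this is not Squash's actual C API):

```python
import lzma
import zlib

# Each "plugin" exposes the same (compress, decompress) interface.
CODECS = {
    'zlib': (zlib.compress, zlib.decompress),
    'lzma': (lzma.compress, lzma.decompress),
}

def compress(codec, data):
    return CODECS[codec][0](data)

def decompress(codec, data):
    return CODECS[codec][1](data)

# Swapping codecs is a one-string change; the calling code stays the same.
```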

                    One of the coolest things about Squash is that it makes it very easy to benchmark tons of different codecs and configurations with your data, on whatever platform you’re running.  To give you an idea of what settings might be interesting to you I also created the Squash Benchmark, which tests lots of standard datasets with every codec Squash supports (except those which are disabled right now) at every preset level on a bunch of different machines.  Currently that is 28 datasets with 39 codecs in 178 different configurations on 8 different machines (and I’m adding more soon), for a total of 39,872 different data points. This will grow as more machines are added (some are already in progress) and more plugins are added to Squash.

                    There is a complete list of plugins on the Squash web site, but even with the benchmark there is a pretty decent amount of data to sift through, so here are some of the plugins I think are interesting (in alphabetical order):

                    bsc
                    libbsc targets very high compression ratios, achieving ratios similar to ZPAQ at medium levels, but it is much faster than ZPAQ. If you mostly care about compression ratio, libbsc could be a great choice for you.

                    DENSITY
                    DENSITY is fast. For text on x86_64 it is much faster than anything else at both compression and decompression. For binary data decompression speed is similar to LZ4, but compression is faster. That said, the compression ratio is relatively low. If you are on x86_64 and mostly care about speed DENSITY could be a great choice, especially if you’re working with text.

                    LZ4
                    You have probably heard of LZ4, and for good reason. It has a pretty good compression ratio, fast compression, and very fast decompression. It’s a very strong codec if you mostly care about speed, but still want decent compression.

                    LZHAM
                    LZHAM compresses similarly to LZMA, both in terms of ratio and speed, but with faster decompression.

                    Snappy
                    Snappy is another codec you’ve probably heard of. Overall, performance is pretty similar to LZ4—it seems to be a bit faster at compressing than LZ4 on ARM, but a bit slower on x86_64. For compressing small pieces of data (like fields.c from the benchmark) nothing really comes close. Decompression speed isn’t as strong, but it’s still pretty good. If you have a write-heavy application, especially on ARM or with small pieces of data, Snappy may be the way to go.

                    If you’re like me, when you download a project and want to build it the first thing you do is look for a configure script (or maybe ./autogen.sh if you are building from git).  Lots of times I don’t bother reading the INSTALL file, or even the README.  Most of the time this works out well, but sometimes there is no such file. When that happens, more often than not there is a CMakeLists.txt, which means the project uses CMake for its build system.

                    The realization that the project uses CMake is, at least for me, quickly followed by a sense of disappointment.  It’s not that I mind that a project is using CMake instead of Autotools; they both suck, as do all the other build systems I’m aware of.  Mostly it’s just that CMake is different and, for someone who just wants to build the project, not in a good way.

                    First you have to remember what arguments to pass to CMake. For people who haven’t built many projects with CMake before, this often involves having to actually RTFM (the horrors!), or a consultation with Google. Of course, the project may or may not have good documentation, and there is much less consistency regarding which flags you need to pass to CMake than with Autotools, so this step can be a bit more cumbersome than one might expect, even for those familiar with CMake.

                    After you figure out what arguments you need to type, you need to actually type them. CMake has you define variables using -DVAR=VAL for everything, so you end up with things like -DCMAKE_INSTALL_PREFIX=/opt/gnome instead of --prefix=/opt/gnome. Sure, it’s not the worst thing imaginable, but let’s be honest—it’s ugly, and awkward to type.

                    Enter configure-cmake, a bash script that you drop into your project (as configure) which takes most of the arguments configure scripts typically accept, converts them to CMake’s particular style of insanity, and invokes CMake for you.  For example,

                    ./configure --prefix=/opt/gnome CC=clang CFLAGS="-fno-omit-frame-pointer -fsanitize=address"

                    will be converted to

                    cmake . -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/opt/gnome -DCMAKE_INSTALL_LIBDIR=/opt/gnome/lib -DCMAKE_C_COMPILER=clang -DCMAKE_C_FLAGS="-fno-omit-frame-pointer -fsanitize=address"

                    Note that it assumes you’re including the GNUInstallDirs module (which ships with CMake, and which you should probably be using).  Other than that, the only thing which may be somewhat contentious is that it adds -DCMAKE_BUILD_TYPE=Debug—Autotools usually builds with debugging symbols enabled and lets the package manager take care of stripping them, but CMake doesn’t.  Unfortunately some projects use the build type to determine other things (like defining NDEBUG), so you can get configure-cmake to pass “Release” for the build type instead by passing it --disable-debug, one of the two arguments that don’t mirror something from Autotools.
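                    For reference, opting into GNUInstallDirs in a project’s CMakeLists.txt is a one-liner (a minimal sketch; the mylib target name is a placeholder):

```cmake
# GNUInstallDirs ships with CMake and defines CMAKE_INSTALL_LIBDIR,
# CMAKE_INSTALL_BINDIR, etc. with GNU-style, user-overridable defaults.
include(GNUInstallDirs)

# "mylib" is a placeholder target for this example.
add_library(mylib SHARED mylib.c)

# Install into the standard GNU-style libdir instead of a hard-coded path.
install(TARGETS mylib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR})
```

                    With this in place, the -DCMAKE_INSTALL_LIBDIR value configure-cmake passes is actually honored.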

                    Sometimes you’ll want to be able to pass non-standard arguments to CMake, which is where the other argument that doesn’t mirror something from Autotools comes in: --pass-thru (--pass-through, --passthru, and --passthrough also work), which simply tells configure-cmake to pass all subsequent arguments to CMake untouched.  For example:

                    ./configure --prefix=/opt/gnome --pass-thru -DENABLE_AWESOMENESS=yes

                    Of course none of this replaces anything CMake is doing, so people who want to keep calling cmake directly can.

                    So, if you maintain a CMake project, please consider dropping the configure script from configure-cmake into your project.  Or write your own, or hack what I’ve done into pieces and use that, or really anything other than asking people to type those horrible CMake invocations manually.


                    I have a Pirelli P.VU2000 IPTV set-top box which I don't use, but would like to put to good use. It runs Linux, has HDMI, stereo RCA audio output, 2x USB 2.0, and an IR receiver + remote, so it'd be nice to have it play internet radio if that's possible (theoretically it is an IPTV receiver + media center, so it should be able to play media). And of course, let's not forget the advantage of learning new things, as I am aware that I could get a similar media player fairly cheaply :)

                    Unfortunately I'm not too good at hacking, and I haven't found a way to access a root console on it yet (after two days of googling/duck-duck-going and reading several Russian and Greek forum posts translated with Google Translate), so if anyone's up to the challenge of helping me break into it (to get a root shell) in the spirit of knowledge-sharing, I'd be grateful for any kind of help.

                    I've already spent a few days on this, with the following results:
                    • the device boots and gets an IP from my router, but then errors out with "wrong DHCP answer", likely because I'm not on the subnet the IPTV provider expects; still, accessing the media player functionality without IPTV access would be nice
                    • after opening the box, I managed to get a serial console with some minimal output; I guess this is the bootloader logging to the serial console:
                      39idxfsef2f712148b75194ab1d3c691b55bd4d3a5e956dS         
                                                                                
                      #xos2P4a-99 (sfla 128kbytes. subid 0x99/99) [serial#a225d]
                      #stepxmb 0xac                                            
                      #DRAM0 Window  :    0x# (20)                             
                      #DRAM1 Window  :    0x# (15)                             
                      #step6 *** zxenv has been customized compared to build ***
                      #step22                                                  
                      #ei
                    • scanning the ports with nmap reveals the following:
                      Nmap scan report for 192.168.2.100
                      Host is up (0.00043s latency).
                      Not shown: 65534 closed ports
                      PORT     STATE SERVICE VERSION
                      2396/tcp open  ssh     Dropbear sshd 0.52 (protocol 2.0)
                      | ssh-hostkey:
                      |   1024 70:ff:b6:6b:94:f4:4e:19:14:40:7d:40:de:07:b9:ac (DSA)
                      |_  1040 c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a (RSA)
                      Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

                      Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
                      Nmap done: 1 IP address (1 host up) scanned in 13.52 seconds
                    • telnet to the port found with nmap works, but no prompt comes up:
                      telnet 192.168.2.100 2396
                      Trying 192.168.2.100...
                      Connected to 192.168.2.100.
                      Escape character is '^]'.
                      SSH-2.0-dropbear_0.52
                    • ssh into the STB with root fails, as only publickey authentication seems to be enabled:
                      ssh root@192.168.2.100 -p2396
                      The authenticity of host '[192.168.2.100]:2396 ([192.168.2.100]:2396)' can't be established.
                      RSA key fingerprint is c4:52:0f:c9:e5:0f:fe:a8:a3:28:e6:d7:e1:02:23:0a.
                      Are you sure you want to continue connecting (yes/no)? yes
                      Warning: Permanently added '[192.168.2.100]:2396' (RSA) to the list of known hosts.
                      Permission denied (publickey).
                    • checked for possible dropbear 0.52 exploits and vulnerabilities, but haven't found anything I could use
                    So if you have any other ideas what I could try, feel free to suggest them in the comments.
                    The new development version 0.27.1 of the Vala programming language contains a lot of enhancements and bug fixes.

                    Release notes are the following:

                    Changes

                    • Print compiler messages in color.
                    • Add clutter-gdk-1.0 bindings.
                    • Add clutter-gst-3.0 bindings.
                    • Add clutter-x11-1.0 bindings.
                    • Add rest-extras-0.7 bindings.
                    • Bug fixes and binding updates.
                    However, I'd like to tell you a bit more:
                    • The compiler now checks for unknown attributes.
                    • More checks in the compiler about invalid semantics.
                    • XOR now works with booleans, just like bitwise OR and AND.
                    • A new attribute, [ConcreteAccessor], mostly used for bindings. In C it's common for interface properties to have concrete accessors instead of abstract ones. Before this, you had to put [NoAccessorMethod] on top of such properties.
                    • The "type" property is sometimes used in projects, but Vala could not support it. Now, if the bindings mark it with [NoAccessorMethod], you can use it.
                    • We now infer generics recursively in method calls, so less typing for you.
                    • And more.
                    Have fun.
                    Last cycle GNOME Mines went through a major rewrite and redesign, bringing it into the GNOME 3 era. However, not everyone was happy with the new look, and several people mentioned the lack of colors on the numbers as the reason.

                    The problem

                    The numbers on the fields communicate the danger clearly, but you have to read them. Several people have reported using the colors as the primary clue for sensing the danger around the current field. With the new design we don't have colored numbers, so they would have to change the way they play Minesweeper to get used to this. Some people did, and mentioned that in spite of their initial complaints about the missing colors, they are happy with the result and will never need the colors again. But what about the others?
                    While the lack of colors was the number one complaint, some people also mentioned the flatness of all the icons as an issue, and others complained about the small visual difference between exploded and non-exploded mines (no explosion shown), which might be an accessibility issue for the visually impaired.

                    The options

                    In bug #729250 and in several G+ posts and blog entries, I have read different suggestions (from designers, casual users, and minesweeping junkies alike) on how to bring back this additional level of visual feedback showing the danger when you're clicking around mines.

                    Here are some of the options we have discussed (feel free to comment your pros/cons for any of the solutions, and I will expand the list):
                    • Colored numbers, as we had in the old version
                      • Pros
                        • Potentially less unsatisfied users
                        • Similar in looks to the Minesweeper implementations on other platforms
                      • Cons
                        • Readability issues
                        • User interface using many colors might look out of place on the GNOME desktop
                    • Subtle background color change based on level of danger
                      • Pros
                        • Color feedback
                        • If the colors are subtle enough, readability shouldn't be affected
                      • Cons
                        • User interface using many colors might look out of place on the GNOME desktop
                    • Symbolic pips instead of the numbers
                      • Pros
                        • No reading required
                        • With well-spaced pips, no counting would be required
                      • Cons
                        • ???

                    The proposed solution

                    GNOME games try to be as simple as possible, with the number of options reduced to the bare minimum. I consider this a good thing. Still, several games have options for changing the "theme", the look of the game board: e.g. lightsoff, five-or-more, and quadrapassel all have a theme selection option in their preferences. We could do the same in Mines.
                    Pros
                    • people can change the theme if they are not satisfied with the default one
                    Cons
                    • a theme selector has to be added
                    • a preferences menu item has to be added, as Mines doesn't have a preferences window at the moment; options are accessible in the appmenu

                    The status

                    Fortunately, the minefield is styled with CSS and the images are provided as SVG files, so a theme is simply a collection of files: a theme.css describing the styles and several SVG images to use.
                    I have implemented a theme switcher (branch wip/theming-support) with the following features:
                    (Screenshot: the current look of the theme switcher)
                    • it loads the above files from a given directory to display the minefield, so a theme is a directory
                    • the theme name is the name of the directory, but it is irrelevant, as users shouldn't see it anywhere: the theme switcher is a carousel-style switcher and doesn't show the name
                    • the theme switcher is a live preview widget: you can play a game in it (the minefield is prefilled to show all the numbers and the flagged and unflagged states, and you can also click the unrevealed tiles to see how the mines look)
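                    To illustrate the idea, a theme's theme.css could look something like this (the selector names below are made up for the example; the real ones are defined by the minefield widget in the Mines source):

```css
/* Hypothetical selectors -- illustration only, not the real gnome-mines ones */
.tile {
    background-color: #d3d7cf;   /* unrevealed tile */
    border: 1px solid #888a85;
}

.tile.revealed {
    background-color: #eeeeec;   /* revealed, empty or numbered tile */
}
```

                    Swapping a theme then just means pointing the loader at a different directory with its own theme.css and SVGs.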
                    I have added three themes (currently these differ only in the CSS style) for now:
                    • classic - using the flat icons, but the old colored numbers
                    • default - the monochrome numbers and the flat icons
                    • colored backgrounds - flat icons, and the numbers using colored backgrounds
                    If this gets into the master repository, I wouldn't want to have more than five themes there. However, if you don't like any of them and you are privileged enough (you have write access to the themes directory of Mines), you can create your own, and the theme switcher will pick it up after an application restart.

                    The missing pieces

                    • do we need a theme switcher at all, or can we create a single theme that fits everyone? (I doubt it, but if it's possible, I'll happily throw the whole theme switcher implementation away)
                    • design input on the theme switcher would be welcome
                      • theme switcher navigation button styling
                      • theme switcher window title
                      • theme switcher menu item (currently it opens by clicking Appmenu/Preferences)
                    • input on the themes
                      • suggestions for the existing themes
                      • suggestions for new themes (with SVG images provided)

                    Conclusion

                    It's hard to please everyone, but we can try to do our best :)
                    Happy new year everyone!

                    As we have started a brand new year, it's time to review last year and plan for this one.

                    2014

                    Last year was a great one for me, professionally. Although I still didn't get my dream job of working full-time on open-source and free software, I am still proud of what I have accomplished.

                    Development
                    • I have successfully landed a major rewrite in gnome-mines, with both welcome and criticized changes (colored vs monochrome numbers anyone?) :)
                    • The company I work for has successfully migrated all SVN repositories to git, and my colleagues have mostly gotten used to it. We still make some mistakes, but we can usually handle them without too much trouble
                    • I have migrated our issue tracking system to Redmine and customized it, learning some Ruby and reporting some issues on GitHub projects in the meantime
                    • I have removed our in-repository shared libraries and implemented dependency management on top of our current ant-based build scripts, using Ivy
                    • Contributed more time to reviews than I did before (usually for the awesome elementary projects), along with some fixes
                    • Contributing to open-source (GNOME and elementary) projects helped me get a new laptop through Bountysource (thanks to Bountysource for providing the platform, and to the people supporting elementary and GNOME with bounties), which I am grateful for.
                    • 237 commits to various open-source projects (according to my OpenHub stats); although some of them are only release commits, it's still a good number for me, even if lower than the previous year
                    • I interviewed for a job that seemed like my dream job, but unfortunately it turned out not to be, for various reasons. I still don't know why I was rejected at the last phase, and while talking with the interviewers it turned out that the marketing that motivated me to go for the interview was indeed only marketing (and a very successful one), but nothing more (at least that's what I concluded from the answers of the several people working there who I managed to talk to)
                    Talks
                    • I held three talks about open-source at the university I graduated from: the first and second were the same, a generic introduction to open-source for students, and the last one was about contributing for computer scientists, with bugfixes, code reviews, and such. It was a great experience and I enjoyed my talks a lot, but I didn't see any enthusiasm around the topic, so I'm seriously thinking about what to do next, as I like talking about open-source but it seems I haven't found the right audience yet
                    • I seriously wanted to attend the Open Source Open Mind conference held annually in our city; I even had a ticket, but unfortunately I fell ill the night before the conference (my longest illness ever, lasting almost a month), so I skipped it, with regrets
                    2015
                    • In the land of open-source I intend to have more contributions this year, at least one commit and/or bugfix each day.
                    • I would like to get to GUADEC this year, as I've never been. It seems like an event I might actually manage to attend: it's held in Europe, this year in Gothenburg, Sweden, so I need no visa (if I did, I would have to travel 900 km to get one). Unfortunately, we intend to buy a house, so I might not get the chance because of this.
                    That's it. No big plans other than these (at least not programming-related). As personal goals I have some more ambitious ones, like reading some books and buying a house, but I hope I will be able to keep up the contributions, which breathe some more life into me.
                    Since glib 2.41.2, the mutex/cond implementation on Linux has changed. Code compiled with Vala < 0.26 that targets at least glib 2.32 (via --target-glib 2.32) will suffer from deadlocks.

                    Your options are either:
                    • Do not use --target-glib 2.32
                    • Update Vala to at least 0.25.2
                    • Instead of upgrading Vala, pick the bindings for Mutex and Cond from the new glib-2.0.vapi
                    • Downgrade glib
                    To clarify, it's not a glib bug. It's an old valac bug in the glib-2.0.vapi bindings of Mutex and Cond that has become critical after the glib implementation change.

                    The relevant Vala bug can be found here: https://bugzilla.gnome.org/show_bug.cgi?id=733500
                    We don't need to create our own window that shows directories for picking files; Gtk does it for us with FileChooserDialog...


                    valac -o "archivos" *.gs --pkg gtk+-3.0 


                    [indent=4]
                    uses Gtk
                    init
                        Gtk.init (ref args)               // initialize gtk
                        var prueba = new ventana ()       // create the window object
                        prueba.show_all ()                // show everything
                        Gtk.main ()                       // start the main loop

                    class ventana : Window             // our window class
                        init
                            title = "Test window"                    // set the title
                            default_height = 250                     // height
                            default_width = 250                      // width
                            window_position = WindowPosition.CENTER  // position

                            // create a button with the following label
                            var button = new Button.with_label ("Press this button")
                            // connect the button's clicked signal to the pulsado handler
                            button.clicked.connect (pulsado)

                            // quit the main loop when the window's close button is clicked
                            destroy.connect (Gtk.main_quit)

                            // add the button to the window
                            add (button)

                        def pulsado (btn : Button)
                            var FC = new FileChooserDialog ("Choose a file to open", this, Gtk.FileChooserAction.OPEN,
                                "_Open", Gtk.ResponseType.ACCEPT,
                                "_Close", Gtk.ResponseType.CANCEL)
                            FC.select_multiple = false
                            FC.set_modal (true)
                            case FC.run ()
                                when Gtk.ResponseType.CANCEL
                                    FC.hide ()
                                    FC.close ()
                                when Gtk.ResponseType.ACCEPT
                                    var direccion = FC.get_filename ()
                                    FC.hide ()
                                    FC.close ()
                                    print direccion
                    I use the terminal a lot, usually with bash or fish shell, and I always wanted some kind of notification on command completion, especially for long-running greps or other commands.

                    The guys working on elementary OS have already implemented job completion notifications for the zsh shell in their pantheon-terminal project, but I wanted something more generic that works everywhere, even on the servers I run commands on through SSH.

                    The terminal bell sound is something I usually don't like, but it seemed like a good fit for a quick heads-up, so the BEL character came to the rescue.
                    As the bash prompt is fairly customizable, you can easily set a prompt which includes the magic BEL character.

                    In order to do this:
                    • open a terminal (surprise :))
                    • run the command echo PS1=\$\'\x07\'\'$PS1\'
                    • paste the output of the command into ~/.bashrc
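                    As an alternative to pasting the generated line, you can achieve the same effect with bash's own prompt escapes (a minimal sketch; \a is bash's escape for the BEL character, and \[ \] mark it as non-printing so readline's line editing stays correct):

```shell
# Prepend the ASCII BEL (0x07) to the existing prompt, so the terminal
# beeps every time the prompt is redrawn, i.e. whenever a command finishes.
PS1="\[\a\]$PS1"
```

                    Put this line at the end of ~/.bashrc and it will also survive prompt customizations made earlier in the file.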
                    Of course, this is not perfect, as it beeps for short commands too, not only long-running ones, but it works for me, and maybe it will help you too.
                    A quick update on my new ultrabook running Fedora:
                    • After watching kernel development closely to see if anything related to the built-in touchpad would land, and nothing did, I decided to try some workarounds. If it can't work as a touchpad, at least it should work as a mouse. This can be accomplished by adding psmouse.proto=imps to the kernel parameters. The worst part is that there's neither two-finger scrolling nor edge scrolling, but I can live with that, as I also have a wireless mouse.
                    • Unfortunately I couldn't do anything with the wireless card. I downloaded the kernel driver for the 3.13 and 3.14 kernels and changed the source to work with the 3.17 kernel (the one in Fedora Workstation dailies), but unfortunately it fails to connect to my WPA2-PSK network. So, until I get a mini PCIe wifi card with an Intel or Atheros chip (which are confirmed to have proper Linux support), I will use the laptop with a USB WLAN interface.
                    • Optimus graphics card switching still didn't seem trivial to install and set up properly. However, I don't need more than the Intel graphics card, so I just wanted to switch the NVIDIA card off completely. So I installed bumblebee and bbswitch based on the instructions on the Fedora wiki, and turned the discrete card off.
                    • Battery usage is at about 8 W, and estimated usage on battery is 7.5 hours with standard internet browsing on a standard 9-cell battery, so I'm pretty satisfied with that.
                    • I have formatted both the 24 GB SSD and the 1.5 TB HDD (cleaned up from sh*t like windows and a McAfee 30-day trial), and installed Fedora 21 with a custom partitioning layout.
                    All in all, I finally have a mostly working laptop (there's room for improvement, though) with a battery life above six hours of constant browsing, so I'm satisfied.

                      We have been hard at work since the last announcement. Thanks to help from people testing out the previous release, we found a number of issues (some not even OS X related) and managed to fix most of them. The most significant issues that are resolved are related to focus/scrolling issues in gtk+/gdk, rendering of window border shadows and context menus. We now also ship the terminal plugin, had fixes pushed in pygobject to make multiedit not crash and fixed the commander and multiedit plugin rendering. For people running OS X, please try out the latest release [1] which includes all these fixes.

                      Can’t see the video? Watch it on YouTube: https://www.youtube.com/watch?v=ZgwGGu7PYjY

                      [1] ftp://ftp.gnome.org/pub/GNOME/binaries/mac/gedit/beta/Gedit-3.13.91-dbg-2.dmg

                      Redmine issue editing is quite a complex task, with a fairly complex, huge, two-column form to fill in (we also have several custom fields, which make this even worse).

                      In our Trac instance customized for ubinam, after adding our custom workflow, we had some options at the end of the page for augmenting the workflow and easing status updates: reassignment, quick-fix, starting work on an issue, and other simple tasks, which left most of the ticket fields untouched and only juggled resolution, status, and assignee.

                      The status-button Redmine plugin provided a great base: after the description and primary fields of the ticket, it shows links for quick status transitions. With it, you don't have to click to edit the issue, find the status field on the form, click to open it, select the status, and click submit to save the changes; instead, you change the status with one click. In our Trac-originated workflow we had a status with multiple resolutions (fixed, invalid, duplicate, wontfix, worksforme), which is a more complex transition, as you have to update two fields; and the assigned status usually goes along with a new assignee, so that is not that easy either.

                      After checking the source and learning a bit of Ruby on Rails, I managed to update the form to turn the links into Bootstrap buttons, and added an assignee combobox (with a nice look, using the same data as the one on the edit form, thus requiring no additional requests) with a built-in search box, thanks to the awesome Select2 component.
                      Of course, some status transitions also need a reason for why you switched to that status: I could have added a dropdown with a text entry, but as the form already had a nice way to scroll to the comment form, why not use it? The rest of the form is not really helpful in this context, so I hid it with a bit of jQuery. Now, clicking a quick-status button either changes the status and submits the form (if no comment is required, as for "test released") or changes the status and jumps to the comment form to give you the option to comment. Obviously, you could still use the traditional edit button, but why would you?

                      But a picture is worth a thousand words, so here you go, instead of three thousand words:

                      The overall look of a ticket with the plugin, see the quick-status buttons
                      A complex status transition, setting the status and the resolution, and requiring a comment
                      Changing the assignee is easy and fast, select the user, and click reassign...
                      Again, this is a heavily customized version, but if there's enough interest, I will share the plugin, or even develop a more generic one not strictly tied to our workflow. So, let me see your +1s/comments/shares: if I get 30 of those, I'll share it in a GitHub repo.

                      After sharing my experiences of migrating from Trac 1.0.1→Redmine some people have asked me to share the script I have used.

                      Do you need the script?
                      Share/+1/comment!
                      (Public domain image)
                      I would prefer sharing the migration script by getting it into the Redmine source tree. I am willing to spend some more of my spare time getting the migration script into shape (currently it's too personalized for our project to be shared), but I'm not sure how many people would use it, so to find out, I need you to +1/comment/share this post to express your interest in it. Even if this might look like shameless self-promotion, you'll have to believe me that it is only a way to find out in what form to share the script. If I see at least 30 people interested, I will do my best to share the migration script as soon as possible and get it into the Redmine source tree. If fewer than 30 people are interested, I will still share the script with them, but as a raw script in a public GitHub repo/gist, without proper testing and review from the Redmine team.

                      I have already asked the Redmine devs on IRC how they would prefer (and hopefully accept) a patch; they answered that they will accept the script, preferably as a separate migration script (the current one in the tree targets Trac 0.12, and Trac 1.0 has changed a lot), to avoid breaking the old script for the ones who could still use it. This is the easiest way, as it reduces the number of Trac version checks in the migration script.

                      The Redmine developers have also asked me for a sample Trac DB dump, but my company's database is not public. If you are interested in the migration script, want to help, and have a public Trac database at hand (preferably with fewer than 1000 tickets), please share it. I have looked at the Trac users page for open-source projects, but only a few of them are using Trac 1.0.1. The database dump would be helpful for testing the migration script and writing some unit tests, to make sure everything works well.

                      Stay tuned, in my next post I will present the personalizations I have used to ease Redmine ticket updates without using the complex edit form, and if there's enough interest, I will share the plugin I customized with the people interested.

                      As some of you might already know, the company I work for has just migrated from Trac to Redmine (the migration is mostly complete). I'm a developer, but for lack of DevOps people, I was responsible for the migration. It went fairly well; some more notes:
                      "Fixing everything" (openclipart image)
                      • the migration didn't migrate the estimated time attribute for tickets, as I forgot it, but I wrote the part to migrate the estimated time changes in the journal, so I took a wild guess and set the attribute for each ticket to the max value I found in the ticket's history (usually that's the correct one, except for maybe a few)
                      • never allow your users to choose their theme: I installed a plugin to let the users choose their redmine theme, and installed seven themes, unfortunately each has their advantages and disadvantages, and everyone has their preferred theme, so we can't choose a default theme everyone would agree with (maybe I will be the bad guy in the story, and remove the plugin and force them to use what most people like)
                      • all in all, the feedback was mostly positive so far; even though I promised to send a mail when everything is complete (which has not happened yet), most people are already using it, so it seems to be fairly intuitive (for people used to Bugzilla and Trac at least)

                      Commit messages in issue history

                      A major complaint was that the commit messages do not appear in Redmine in the ticket comments, but off to their own side, making it hard to see which commit came after which comment. The issue-repo-history-merge plugin had some issues and did not fit our needs, so I started looking for another solution: modifying the Redmine source or writing my own plugin. After checking the Redmine source I found, however, that a changeset link is added for the fixing keywords defined in the Redmine settings (which we already used for changing the status of tickets on commits), so I just added a fixing keyword with the usual "Refs #xxxx" style already defined in Redmine to associate the commit with a ticket, to also set the status of the ticket to Accepted, and inherently add a ticket history entry with "Applied in changeset:xxxxx". This was still missing the commit comment, but I added that in the Redmine source, that being the fastest solution for now.
                      Later on, a plugin might be more appropriate, if needed, to reduce the number of changes in the Redmine source, in case a reinstall/Redmine update is needed.
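To give an idea of the mechanism: Redmine scans commit messages for its configured referencing/fixing keywords followed by an issue number. A minimal Ruby sketch of that kind of matching (the method name, keyword list and pattern are mine, not Redmine's actual implementation):

```ruby
# Hypothetical sketch of keyword-based issue matching; Redmine's real
# parser lives in its repository-scanning code and is more involved.
def referenced_issue_ids(message, keywords = %w[refs fixes closes])
  # Match e.g. "Refs #1234" or "fixes #42", case-insensitively.
  pattern = /\b(?:#{keywords.join('|')})\s+#(\d+)/i
  message.scan(pattern).flatten.map(&:to_i)
end
```

Each matched issue id can then be used to add the "Applied in changeset" journal entry and, as described above, flip the status.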

                      This post was going to be a rather long one, but decided to split it in three, as the other two topics need their own posts for objective reasons. If you're interested in the migration script itself or a redmine workflow helper, check back later.

                      If you’re reading this through planet GNOME, you’ll probably remember Ignacio talking about gedit 3 for windows. The windows port has always been difficult to maintain, especially due to gedit and its dependencies being a fast moving target, as well as the harsh build environment. Having seen his awesome work on such a difficult platform, I felt pretty bad about the general state of the OS X port of gedit.

                      The last released version for OS X was gedit 3.4, which is already pretty old by now. Even though developing on OS X (it being Unix/BSD based) is easier than on Windows (for gedit), there is still a lot of work involved in getting an application like gedit to build. Things have definitely improved over the years though: GtkApplication has great support for OS X, and things like the global menu and handling NSApp events are more integrated than they were before (we used the excellent GtkosxApplication from gtk-mac-integration though, so things were not all bad).

                      I spent most of the time on two things, the build environment and OS X integration.

                      Build environment

                      We are still using jhbuild as before, but have automated all of the previously manual steps (such as installing and configuring jhbuild). There is a single entry point (osx/build/build) which is basically a wrapper around jhbuild (and some more). The build script downloads and installs jhbuild (if needed), configures it with the right environment for gedit, bootstraps and finally builds gedit. All of the individual phases are commands which can be invoked by build separately if needed. Importantly, whereas before we would use a jhbuild already setup by the user, we now install and configure jhbuild entirely in-tree and independently of existing jhbuild installations. This makes the entire build more reliable, independent and reproducible. We now also distribute our complete jhbuild moduleset in-tree so that we no longer rely on a possibly moving external moduleset source. This too improves build reproducibility by fixing all dependencies to specific versions. To make updating and maintaining the moduleset easier, we now have a tool which:

                      1. Takes the gtk-osx stable modulesets.
                      2. Applies our own specific overrides and additional modules from a separate overrides file. For modules that already exist, a diff is shown and the user is asked whether or not to update the module from the overrides file. This makes it easy to spot whether a given override is now out of date, or needs to be updated (for example with additional patches).
                      3. For all GNOME modules, checks if there are newer versions available (stable or unstable), and asks whether or not to update modules that are out of date.
                      4. Merges all modules into two moduleset files (bootstrap.modules and gedit.modules). Only dependencies required for gedit are included and the resulting files are written to disk.
                      5. Downloads and copies all required patches for each required module in-tree so building does not rely on external sources.

                      If we are satisfied with the end modulesets, we copy the new ones in-tree and commit them (including the patches), so we have a single self-contained build setup (see modulesets/).
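The override-merging step (step 2 above) can be pictured with a toy sketch: given a base moduleset and an overrides moduleset, replace any module that appears in both and append the rest. This is an illustration only, not the actual gedit tool, which additionally shows diffs, checks for newer GNOME versions and prunes the set to gedit's dependencies:

```ruby
require 'rexml/document'

# Toy sketch of the override step: modules are matched by their "id"
# attribute, and an entry from the overrides file wins over the base entry.
def merge_modulesets(base_xml, overrides_xml)
  base = REXML::Document.new(base_xml)
  overrides = REXML::Document.new(overrides_xml)
  overrides.root.each_element do |override|
    id = override.attributes['id']
    existing = base.root.elements["*[@id='#{id}']"]
    base.root.delete_element(existing) if existing
    base.root.add_element(override.deep_clone)
  end
  base
end
```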

                      All it takes now is to run

                      osx/build/build all

                      and all of gedit and its dependencies are built from a pristine checkout, without any user intervention. Of course, this being OS X, there are always possibilities for things to go wrong, so you might still need some jhbuild juju to get it working on your system. If you try and run into problems, please report them back. Running the build script without any commands should give you an overview of the available commands.

                      Similar to the build script, we’ve now also unified the creation of the final app bundle and dmg. The entry point for this is osx/bundle/bundle, and it works in a similar way to the build script. The bundle script creates the final bundle using gtk-mac-bundler, which gets automatically installed when needed, and obtains the required files from the standard in-tree build directory (i.e. you’ll have to run build first).

                      OS X Integration

                      Although GtkApplication takes care of most of the OS X integration these days (the most important part being the global menu), there were still quite some little issues left to fix. Some of these were in gtk+ (like the menu not showing [1], DND issues [2], font anti-aliasing issues [3] and support for the openFiles Apple event [4]), of which some have already been fixed upstream (others are pending). We’ve also pushed support for native 10.7 fullscreen windows into gtk+ [5] and enabled this in gedit (see screenshot). Others we fixed inside gedit itself. For example, we now use native file open/save dialogs to better integrate with the file system, have better support for multiple workspaces, improved support for keeping the application running without windows, made enchant (for the spell checker) relocatable, added an Apple Spell backend, and made other small improvements.

                      Besides all of these, you of course also get all the “normal” improvements that have gone into gedit, gtk+ etc. over the years! I think that all in all this will be the best release for OS X yet, but let me not be the judge of that.

                      gedit 3.13.91 on OS X

                      We are doing our best to release gedit 3.14 for OS X at the same time as it will be released for linux, which is in a little bit less than a month. You can download and try out gedit 3.13.91 now at:

                      ftp://ftp.gnome.org/pub/GNOME/binaries/mac/gedit/beta/Gedit-3.13.91-dbg-1.dmg

                      It would be really great to have people who own a Mac try this out and report bugs back to us so we can fix them (hopefully) in time for the final release. Note that gedit 3.14 will require OS X 10.7+; we no longer support OS X 10.6.

                      [1] [Bug 735122] GtkApplication: fix global menubar on Mac OS
                      [2] [Bug 658722] Drag and Drop sometimes stops working
                      [3] [Bug 735316] Default font antialiasing results in wrong behavior on OS X
                      [4] [Bug 722476] GtkApplication mac os tracker
                      [5] [Bug 735283] gdkwindow-quartz: Support native fullscreen mode

                      The change

                      In January, after a long time with SVN, we (the development team) decided to make the move to git, to speed up the development of the project we're working on, Tracking Live.
                      The switch has greatly improved our development speed (although some people are still not happy with it, because of occasional relatively large merge conflicts) and deployment rate (with Jenkins and a relatively good branching strategy, we can release daily if we want).

                      The problem

                      We use Trac for bug tracking, with a post-commit hook that leaves a comment on the referenced ticket after each commit. This was introduced in SVN times and migrated to git too; unfortunately, Trac with git is somehow awfully slow (tickets without a git commit load in less than 5 seconds, tickets with one git commit load in 40+ seconds, and the time goes up with the number of related commits). We updated our Trac instance from 0.12 to 1.0.1, which didn't help, and tried several tweaks and additional package installs to speed up Trac+git, but none of those helped either. The Trac developers also consider their git plugin sub-optimal at the time of this writing.

                      The solution

                      40+ seconds for opening a ticket to leave a comment looked like a huge waste of time, so we started looking for alternatives. Redmine looked promising: modeled on Trac, but completely rewritten in Ruby with the much-advertised Rails framework instead of Python, and its default interface looked familiar to the colleagues used to Trac.

                      Migration script updates

                      Redmine provides a migration script for migrating all tickets from Trac. Good start. After the first import (6+ hours for ten thousand tickets) Redmine didn't start at all. Bad news. So here are the changes I made to the migration script in order to have a complete migration (I picked up the Ruby syntax easily; the changes took 2 days with testing, and I migrated only 200 tickets in each test until I was sure the migration script worked OK, as I didn't like the 6+ hours for a full migration):
                      • as the migration script targets Trac 0.12 and the datatype used to store dates in the Trac database has changed since, I updated the date conversion; after this, Redmine did start
                      • added migration for CC's to Redmine watchers
                      • updated attachments migration to work with Trac 1.0.1, as the attachment paths have changed
                      • added migration of total hours, estimated hours and hours spent, stored as custom fields in Trac, to Redmine's time management plugin entries
                      • added comments for custom field changes, as custom fields have been migrated (meaning the current value of the custom field being correct), but their changes have not been migrated
                      • added parent ticket relationship migration, as we had several beautiful ticket hierarchies for grouping featuresets (until we migrated to a more agile sprint-alike milestone-based grouping) in Trac
                      • added custom ticket states and priorities mapping (we have a custom set defined of these to help us in our workflow)
                      • added custom user mappings (for each of our users - 64 in the complete trac history) to create one user only for the same user using Trac with multiple email addresses (one for trac comments, another for git commits where these differ)
                      • added migration for ticket comment links
                      If you are interested in any of the above changes, feel free to ask and I will provide the migration script (unfortunately the changes do not seem to make it into Redmine trunk; lots of patches I have applied have been waiting in the Redmine tracker for years - they apply cleanly, but have not been pushed to trunk)
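As an illustration of the date-conversion bullet above: Trac at some point moved from storing timestamps as seconds since the epoch to microseconds, which is the kind of schema change an old migration script trips over. A hedged Ruby sketch of such a conversion (the helper name and the magnitude cutoff are my own, not the migration script's actual code):

```ruby
# Hypothetical helper: convert a Trac timestamp to a Ruby Time, whether it
# is stored in seconds (old schema) or microseconds (newer schema).
def trac_timestamp_to_time(value)
  # Microsecond timestamps are around 10^15, second timestamps around 10^9,
  # so anything above this cutoff is assumed to be in microseconds.
  value /= 1_000_000 if value > 100_000_000_000
  Time.at(value).utc
end
```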

                        The plugins

                        After all these steps, I had a good dataset to start with, but the functionality of Redmine was still not on par with Trac. The long Redmine plugin list (and additional GitHub searches for 'redmine plugin') came in handy here: I checked the list, tested the plugins I found interesting, and here's the final list (all tested and working with Redmine 2.5.2):
                        • PixelCookers theme - the most complete and modern redmine theme with lots of customization options
                        • redmine_auto_watchers_from_groups - everyone from the assigned group should be cc'd for each mail, that's what we used Trac default cc's for (not perfect, reported the 1st issue for the project)
                        • redmine_auto_watchers - to add the persons commenting as watcher, bugzilla style
                        • redmine_category_tree - useful for component grouping in our project, as we have one project with lots of components and subcomponents and sub-sub-components
                        • redmine_custom_css and redmine_custom_js - for customizing the last bits without having to create a custom theme
                        • redmine_didyoumean - for auto duplicate search before reporting a ticket (current trunk is broken, but last stable works)
                        • redmine_custom_workflows - for additional updates on ticket changes
                        • redmine_image_clipboard_paste - makes bug reporting for a website so much easier with a screenshot
                        • redmine_issue_status_colors - we use a color for each status to help us visualize the current status of a milestone
                        • redmine_landing_page - we only have one project, so we always want to land on the project page after login
                        • redmine_open_search - no more custom html pages building custom links for accessing a ticket, just type the number in the searchbar of the browser
                        • redmine_revision_diff - expands the diff by default (and with a bit of customization and custom code it shows the branches a given commit appears on, something my colleagues missed when taking a first look at Redmine)
                        • redmine_subtasks_inherited_fields - subtasks usually have most of the attributes inherited from the parent, so let's ease bug reporting
                        • redmine_default_version - we have a generic issue collector pool, management prioritizes bugs from there into scheduled milestones, let's use that collector as default target version
                        • redmine_tags - use tagging for bugs and wiki pages, something we used in Trac (although data not migrated)
                        • redmine_wiki_extensions, wiking, redmine_wiki_lists - additional wiki extensions, custom macros, e.g. for embedding a ticket list inside a wiki page
                        • redmine_wiki_toc - to have a table of contents for our wiki, which is kind of messy right now (we had a wiki page looking something like a ToC, but we occasionally forgot to update it)
                        • status_button - for quickly changing the status without having to open the combo and select the one to use and click update, just shows all statuses as links
                        • redmine_jenkins - awesome jenkins integration, can show build history, or even start jenkins builds from the redmine interface, no need to open jenkins anymore

                        What's missing

                        After all this setup, I've got two features of Trac without complete matches:
                        • TicketQuery macro results have not been migrated, as there's no 100% match for this feature either in default Redmine or in the plugins. Depending on how much we need it, we will either create custom queries for the most important TicketQueries or (the more time-consuming option) extend the redmine_wiki_lists plugin with additional query attributes to make it as powerful as TicketQuery is in Trac
                        • Trac roadmap had a progress indicator for each Milestone, which we could colorize based on the status. Redmine progress indicator can only colorize Open/InProgress/Closed, so no progressbar colorized based on per-status ticket count. However the ticket list is shown after the progressbar (Trac doesn't show the list), which is something we can colorize, so we still have a visual clue of how the milestone stands.

                        Conclusion

                        Redmine
                        Trac
                        All in all, it looks to me like the migration is prepared: the test migration worked, preliminary tests look promising, the speed is incomparable, the featureset is OK, and the look and feel is updated and awesome.
                        Hopefully we'll see it in action sometime soon (to my and some colleagues' relief, who got sick of waiting for Trac pages to load), with sub-5-second page loading times. So Redmine, here we come...
                        Recently my 5-year-old laptop (HP ProBook 4710s) started behaving badly (shutting down multiple times, even after a full interior cleaning), so I started looking for a replacement. This time I wanted something a bit more portable (less than 17 inch) but still OK for development (13.3 and 14 inch seemed a bit too small), so I've opted for a 15.6 inch.
                        Choosing the right one was a tough decision, my requirements were:
                        • 15.6 inch with FullHD resolution (1920x1080)
                        • good battery life (4+ hours) involving an ultralow-voltage CPU (i5 42xxU or i7 45xxU)
                        • 8 GB memory
                        • SSD being a plus
                        My favourite was the Dell Inspiron 15 (7000 series), but the price was a bit higher than I wanted to pay, so I hesitated a lot, until every e-shop sold out its stock. Hunting for this laptop, one day I found the much cheaper, brand-new ASUS TransformerBook TP500 (LA/LN) series, which met almost all my requirements (nothing on Google about Linux compatibility), so I decided to order the i5 version (24 GB SSD + 1 TB HDD) on a Saturday. The shop informed me on Monday that, unfortunately, there had been a mistake in the stock calculation and they were out of stock, so I opted for an upgrade to the i7 one. That shipped in one day (with an OEM install of Win 8.1, sadly).

                        After a quick first-time setup (OK, quick might be an exaggeration) of Win 8.1 and a quick start of Internet Explorer to download Firefox, I made some quick tests to see if everything was OK. The touchscreen worked, the keyboard is amazing, the resolution is OK, the colors look wonderful; sadly the volume down button didn't work (volume up works, so it's likely a hardware issue). I decided to return it for a replacement (hopefully a fully functional one this time), but not before checking the Linux compatibility.
                        After disabling Secure Boot and creating an EFI Fedora 20 live USB, I booted Fedora on it in a few seconds. Here's a summary:
                        • Resolution is ok, video cards (HD4400 and GeForce GT840) work
                        • Touchpad works
                        • Touchscreen works (haven't tried multitouch; I've seen some reports that on its smaller sister, the TP300, only single-point touch is working right now)
                        • Keyboard works
                        • Wifi did not work out of the box (with the 3.11 kernel). The Wifi+Bluetooth card is a Mediatek (RaLink) 7630. Googling revealed that ASUS x550C and HP 450-470 G1 owners also have this card, there are several requests to add support, but it's just not there yet. Fortunately MediaTek provides Linux drivers, so it might be "only" a matter of compiling the kernel driver, which means it might get in the kernel soon.
                        • Card reader did not work (again with the 3.11 kernel), but a quick google revealed that support has been added in 3.13, so it should work if Fedora is updated (hopefully ethernet works, haven't had the chance to try it) - currently with Fedora updates installed I'm using the 3.15 kernel
                        • With GTK+ 3.10, CSD windows cannot be moved by dragging the titlebar with touch; you have to use the touchpad for that. People have confirmed that this is not the case with 3.12+, which is strange (as bug 708431 is still open), but good news.
                        All in all, the experience was not perfect, but not frustrating either.

                        Will be back with a more in-depth review with battery life and other info after I get the replacement. I'm looking forward to having fun with experimenting with GNOME on touch displays and implementing GNOME Mines touch screen support :)
                        This new release of the Vala programming language brings a new set of features together with several bug fixes.

                        Changes

                        • Support explicit interface method implementation.
                        • Support (unowned type)[] syntax.
                        • Support non-literal length in fixed-size arrays.
                        • Mark regular expression literals as stable.
                        • GIR parser updates.
                        • Add webkit2gtk-3.0 bindings.
                        • Add gstreamer-allocators-1.0 and gstreamer-riff-1.0 bindings.
                        • Bug fixes and binding updates.
                        Explicit interface method implementation makes it possible to implement two interfaces that have methods (not properties) with the same name. Example:

                        interface Foo {
                            public abstract int m();
                        }

                        interface Bar {
                            public abstract string m();
                        }

                        class Cls: Foo, Bar {
                            public int Foo.m() {
                                return 10;
                            }

                            public string Bar.m() {
                                return "bar";
                            }
                        }

                        void main () {
                            var cls = new Cls ();
                            message ("%d %s", ((Foo) cls).m(), ((Bar) cls).m());
                        }

                        Will output 10 bar.

                        The new (unowned type)[] syntax makes it possible to represent "transfer container" arrays. Whereas it was already possible to do List<unowned type>, now the same is possible with Vala arrays.
                        Beware that doing var arr = transfer_container_array; will not correctly reference the elements. This is a bug that will eventually get fixed. It's better to always write (unowned type)[] arr = transfer_container_array;
                        Note that inside the parentheses only the unowned keyword is currently allowed.

                        Non-literal length in fixed-size arrays still has a bug (I've lost track of it) which, if not fixed, may end up getting the feature reverted. So we advise against using it yet.

                        Thanks to our Florian for always making the documentation shine, Evan and Rico for constantly keeping the bindings up-to-date to the bleeding edge, and all other contributors.

                        More information and download at the Vala homepage.

                        I have been a bit more quiet on this blog (and in the community) lately, but for somewhat good reasons. I’ve recently finished my PhD thesis titled On the dynamics of human locomotion and the co-design of lower limb assistive devices, and am now looking for new opportunities outside of pure academics. As such, I’m looking for a new job and I thought I would post this here in case I overlook some possibilities. I’m interested mainly in working around the Neuchâtel (Switzerland) area or working remotely. Please don’t hesitate to drop me a message.

                        My CV

                        Public service announcement: if you’re a bindings author, or are otherwise interested in the development of GIR annotations, the GIR format or typelib format, please subscribe to the gir-devel-list mailing list. It’s shiny and new, and will hopefully serve as a useful way to announce and discuss changes to GIR so that they’re suitable for all bindings.

                        Currently under discussion (mostly in bug #719966): changes to the default nullability of gpointer nodes, and the addition of a (never-null) annotation to complement (nullable).

                        I just learned of another automated build system for Vala. It's called bake, and it looks pretty nice. It's written in Vala and appears to support a wide variety of languages. From what I can tell looking at the source code, bake will write out old-school Makefiles for you.

                        The other build system that I also have never used is called autovala. autovala is Vala-specific, unlike bake. autovala is nice, though, in that it builds out CMake files for your project. I'm already very familiar with CMake, so that's a big plus for me.

                        I plan to check out both very soon.

                        A few days ago Atom, the hackable text editor has been completely open-sourced under the MIT license (parts of it have been open-sourced some time ago, now they have completed it by open-sourcing the core).

                        Unfortunately, it is currently only available for download on Mac OS; no Windows or Linux binaries are available yet. But due to the nature of open source, you can simply grab the sources, download and compile Node.js (npm 1.4.4 is required, and neither Fedora 20 nor Ubuntu 14.04 provided it from the repos; they only had npm 1.3.x) and build yourself an executable. It's not always trivial - I had some issues building it both for Ubuntu 14.04 and Fedora 20 - but with quick DuckDuckGo searches I found the solutions and was able to test it.
                        Update: the folks at webupd8 have created a PPA for 64-bit Ubuntu 14.04, so you might be able to try it out without the hassle to build it for yourself.
                        As a first impression, it is a clean and extensible text editor, for people like me who are too lazy to learn vim or emacs.

                        It took me some time to configure Atom for using it as an IDE. The default build has support for some languages already, some plugins and themes, but there are plenty of additional packages to choose from. Here are my favourites (if these didn't exist, I would've already stopped using Atom):
                        • Word Jumper with its default Ctrl+Alt+Left/Right reconfigured to Ctrl+Left/Right for jumping between words, something provided by almost every product dealing with writing and navigating text
                        • Terminal Status, showing a terminal below your editor with Shift+Enter, useful for make commands or git hackery for stuff not provided by the default git plugin. Unfortunately user input doesn't work yet - the console doesn't get the focus - so it's not perfect.
                        I have checked the available packages; language support was available for most of the languages I usually work with (C, C++, Python, Java, Bash, GitHub Markdown, LaTeX), but unfortunately there is no support for Vala yet.

                        The GitHub folks did a wonderful job at providing documentation for everything for the community to quickly build a powerful ecosystem around the Atom core. They have links to their important guides from their main Documentation page, including a guide on how to convert a TextMate bundle. As TextMate already has a huge package ecosystem, including a Vala bundle, I have followed their guide, converted the TextMate bundle, created a github repo and published a language-vala atom package.

                        All in all, initial Vala support including syntax highlighting and code completion (and maybe some other features I am not aware of yet) is available for the ones eager to develop Vala code in Atom, after building it from source or after the GitHub folks provide binaries for other OSs too.

                        After a couple of discussions at the DX hackfest about cross-platform-ness and deployment of GLib, I started wondering: we often talk about how GNOME developers work at all levels of the stack, but how much of that actually qualifies as ‘core’ work which is used in web servers, in cross-platform desktop software1, or commonly in embedded systems, and which is security critical?

                        On desktop systems (taking my Fedora 19 installation as representative), we can compare GLib usage to other packages, taking GLib as the lowest layer of the GNOME stack:

                        Package        Reverse dependencies    Recursive reverse dependencies
                        glib2          4001                    -
                        qt             2003                    -
                        libcurl        628                     -
                        boost-system   375                     -
                        gnutls         345                     -
                        openssl        101                     1022

                        (Found with repoquery --whatrequires [--recursive] [package name] | wc -l. Some values omitted because they took too long to query, so can be assumed to be close to the entire universe of packages.)

                        Obviously GLib is depended on by many more packages here than OpenSSL, which is definitely a core piece of software. However, those packages may not be widely used or good attack targets. Higher layers of the GNOME stack see widespread use too:

                        Package      Reverse dependencies
                        cairo        2348
                        gdk-pixbuf2  2301
                        pango        2294
                        gtk3         801
                        libsoup      280
                        gstreamer    193
                        librsvg2     155
                        gstreamer1   136
                        clutter      90

                        (Found with repoquery --whatrequires [package name] | wc -l.)

                        Widely-used cross-platform software which interfaces with servers2 includes PuTTY and Wireshark, both of which use GTK+3. However, other major cross-platform FOSS projects such as Firefox and LibreOffice, which are arguably more ‘core’, only use GNOME libraries on Linux.

                        How about on embedded systems? It’s hard to produce exact numbers here, since as far as I know there’s no recent survey of open source software use on embedded products. However, some examples:

                        So there are some sample points which suggest moderately widespread usage of GNOME technologies in open-source-oriented embedded systems. For more proprietary embedded systems it’s hard to tell. If they use Qt for their UI, they may well use GLib’s main loop implementation. I tried sampling GPL firmware releases from gpl-devices.org and gpl.nas-central.org, but both are quite out of date. There seem to be a few releases there which use GLib, and a lot which don’t (though in many cases they’re just kernel releases).

                        Servers are probably the largest attack surface for core infrastructure. How do GNOME technologies fare there? On my CentOS server:

                        • GLib is used by the popular web server lighttpd (via gamin),
                        • by the widespread logging daemon syslog-ng,
                        • by mysql-proxy for all MySQL load balancing,
                        • by QEMU,
                        • by VMware ESXi (both versions 2.22 and 2.24!), as determined from looking at its licencing file. This is quite significant, as ESXi is used much more widely than QEMU/KVM,
                        • extensively by the Amanda backup server, and
                        • by the clustering solutions Heartbeat and Pacemaker.

                        I can’t find much evidence of other GNOME libraries in use, though, since there isn’t much call for them in a non-graphical server environment. That said, there has been heavy development of server-grade features in the NetworkManager stack, which will apparently be in RHEL 7 (thanks Jon).

                        So it looks like GLib, if not other GNOME technologies, is a plausible candidate for being core infrastructure. Why haven’t other GNOME libraries seen more widespread usage? Possibly they have, and it’s too hard to measure. Or perhaps they fulfill a niche which is too small. Most server technology was written before GNOME came along and its libraries matured, so any functionality which could be provided by them has already been implemented in other ways. Embedded systems seem to shun desktop libraries for being too big and slow. The cross-platform support in most GNOME libraries is poorly maintained or non-existent, limiting them to use on UNIX systems only, and not the large OS X or Windows markets. At the really low levels, though, there’s solid evidence that GNOME has produced core infrastructure in the form of GLib.


                        1. As much as 2014 is the year of Linux on the desktop, Windows and Mac still have a much larger market share. 

                        2. And hence is security critical. 

                        3. Though Wireshark is switching to Qt. 

                        In the weekend, after playing around with a Flappy Bird clone on a phone, I got curious how much time it would take me to implement a desktop version. After a G+ idea I have named the project Flappy Gnome, and implemented a playable clone in Vala with a GtkArrow jumping between GtkButtons in a few hours and less than 150 lines (including empty lines and stuff).

                        Here's a quick preview of the first version:


                        A bit about the tech details: it's basically a dynamically expanding GtkScrolledWindow scrolling to the right as you progress, which creates the effect of the moving pipes; the player is moved from inside a tick callback added to the container GtkLayout.
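                        The movement inside that tick callback boils down to simple velocity integration. Here is a minimal sketch of what such a per-frame update might look like; the constants, struct, and function names are all invented for illustration (the actual game's values and structure may differ), and in the real game this logic would run inside a callback registered with gtk_widget_add_tick_callback() and reposition the player widget inside the GtkLayout:

```c
#include <assert.h>

/* Invented physics constants; the real game's values may differ. */
#define GRAVITY    2400.0  /* px/s^2, downward */
#define JUMP_SPEED -800.0  /* px/s; negative is up in GTK+ coordinates */

typedef struct {
    double y;   /* vertical position in pixels */
    double vy;  /* vertical velocity in px/s */
} Player;

/* Pressing Space gives the player an upward impulse. */
static void
player_jump (Player *p)
{
    p->vy = JUMP_SPEED;
}

/* One frame of the tick callback: integrate velocity, then position.
 * dt is the frame time in seconds (about 1/60 on most displays). */
static void
player_tick (Player *p, double dt)
{
    p->vy += GRAVITY * dt;
    p->y  += p->vy * dt;
}
```

                        Each frame, the same callback would also scroll the GtkScrolledWindow's horizontal adjustment a few pixels to the right to create the moving-pipes effect.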

                        Given that this is my second Vala project written from scratch (after Valawhole), and I learned a lot from it, it seemed like a good idea to develop it further into a tutorial (for beginners); maybe someone else will find it useful too. I did start over twice to get a better code design and well-separated steps (1 commit/step), and have finally pushed it to GitHub, along with a description of each step. The resulting code is a bit longer (almost twice as long) than the initial version, but it also has more features, including CSS styling, a Restart button, better design, and so on...

                        The end result of the tutorial in its current state.

                        I'm thinking of adding a Help screen to explain the complicated controls (F2 restarts the game, Space to start the game/jump) and maybe a Game Over screen, so the tutorial might not be completely ready, but it's in a good shape.

                        I could have done better in grouping related functionality in commits, or in commenting code, and I am sure there's a better way to implement/improve this using GTK+, but it's good for a start, with some known issues:
                        In its current state it runs choppily on relatively modern dual- and quad-core CPUs with ATI cards using the open source radeon driver (I'm not sure what else I could blame), but works enjoyably on a PC with an Intel HD. Unfortunately I don't have an NVIDIA card to test with, but I'm really curious whether it works on NVIDIA with nouveau, and I would also be interested in results with the binary blob drivers (both NVIDIA and ATI), to see if they make a difference. If you have any of these and have a few minutes, please try it and comment with your findings.
                        Update 1: Feedback from people running the game on nouveau is positive, so the game seems to run smoothly on NVIDIA with the open-source driver.

                        Last week I was in Berlin at the GNOME DX hackfest. My goal for the hackfest was to do further work on the fledgling gnome-clang, and work out ways of integrating it into GNOME. There were several really fruitful discussions about GIR, static analysis, Clang ASTs, and integration into Builder which have really helped flesh out my plans for gnome-clang.

                        The idea we have settled on is to use static analysis more pervasively in the GNOME build process. I will be looking into setting up a build bot to do static analysis on all GNOME modules, with the dual aims of catching bugs and improving the static analyser. Eventually I hope the analysis will become fast enough and accurate enough to be enabled on developers’ machines — but that’s a while away yet.

                        (For those who have no idea what gnome-clang is: it’s a plugin for the Clang static analyser I’ve been working on, which adds GLib- and GObject-specific checks to the static analysis process.)

                        One key feature I was working on throughout the hackfest was support for GVariant format string checking, which has now landed in git master. This will automatically check variadic parameters against a static GVariant format string in calls to g_variant_new(), g_variant_get() and other similar methods.

                        For example, this can statically catch when you forget to add one of the elements:

                        /*
                         * Expected a GVariant variadic argument of type ‘int’ but there wasn’t one.
                         *         floating_variant = g_variant_new ("(si)", "blah");
                         *                                           ^
                         */
                        {
                        	floating_variant = g_variant_new ("(si)", "blah");
                        }

                        Or the inevitable time you forget the tuple brackets:

                        /*
                         * Unexpected GVariant format strings ‘i’ with unpaired arguments. If using multiple format strings, they should be enclosed in brackets to create a tuple (e.g. ‘(si)’).
                         *         floating_variant = g_variant_new ("si", "blah", 56);
                         *                                           ^
                         */
                        {
                        	floating_variant = g_variant_new ("si", "blah", 56);
                        }
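                        To give a feel for the kind of counting such a checker performs, here is a toy sketch that tallies how many variadic arguments a (heavily simplified) GVariant format string expects. This is purely illustrative: it handles only basic type codes and tuple brackets, whereas gnome-clang's real checker works on Clang ASTs and supports the full GVariant format grammar.

```c
#include <assert.h>

/* Toy illustration only: count how many variadic arguments a simplified
 * GVariant format string expects. Tuple brackets group values but do not
 * consume an argument themselves; every other character is treated as a
 * basic type code consuming one argument. */
static int
count_expected_args (const char *format)
{
    int count = 0;
    const char *p;

    for (p = format; *p != '\0'; p++) {
        switch (*p) {
        case '(':
        case ')':
            break;      /* tuple brackets take no argument */
        default:
            count++;    /* basic type code: one argument */
        }
    }

    return count;
}
```

                        With this, a call like g_variant_new ("(si)", "blah") supplies one argument where two are expected, which is exactly the mismatch reported in the first example above.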

                        After Zeeshan did some smoketesting of it (and I fixed the bugs he found), I think gnome-clang is ready for slightly wider usage. If you’re interested, please install it and try it out! Instructions are on its home page. Let me know if you have any problems getting it running — I want it to be as easy to use as possible.

                        Another topic I discussed with Ryan and Christian at the hackfest was the idea of a GMainContext visualiser and debugger. I’ve got some ideas for this, and will hopefully find time to work on them in the near future.

                        Huge thanks to Chris Kühl and Endocode for the use of their offices and their unrivalled hospitality. Thanks to the GNOME Foundation for kindly sponsoring my accommodation; and thanks to my employer, Collabora, for letting me take community days to attend the hackfest.

                        Here in sunny Berlin, progress is being made on documentation, developer tools, and Club Mate. I’ve heard rumours of plans for an updated GTK+ data model and widgets. The documentationists are busy alternating between massaging and culling documentation pages. There are excited discussions about the possibilities created by Builder.

                        I’ve been keeping working on gnome-clang, and have reached a milestone with GVariant error checking:

                        gvariant-test.c:10:61: error: [gnome]: Expected a GVariant variadic argument of type ‘char *’ but saw one of type ‘guint’.
                                some_variant = g_variant_new ("(sss)", "hello", my_string, a_little_int_short_and_stout);
                                                                                           ^

                        More details coming soon, once I’ve tidied it all up and committed it.

                        A GNOME Foundation sponsorship badge.

                        Mines 3.13.1 is out with a refreshed look and feel.

                        You have to see it for yourself. But until you do, here's a comparison of an in-game and an end-game screenshot from before (3.12.1) and after (3.13.1) the changes.
                        Mines 3.12.1 (left) vs. Mines 3.13.1 (right)
                        The real beauty of the new Mines lies in the details; the updated look is about much more than new colours and new images:
                        • The old version drew the whole minefield to a DrawingArea using cairo calls, while the updated version contains no custom drawing code, only standard GTK+ widgets (GtkButtons within a GtkGrid) styled with CSS, inside a GtkOverlay so the field can be hidden behind a Paused label. This means that if you don't like the current look or colours of the minefield, or you would like to use some other images (like flowers instead of mines in a game called Minesweeper), you only have to provide the new image files and update the CSS file, without touching the code at all.
                        • The user interface of the old version was built from code, so if you wanted to change something, you had to write the code for that. The user interface of the new version is built from Glade UI files, so you can fix user interface, layout, and padding issues using Glade, without touching the code.
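                        To give a feel for what this enables, here is a hypothetical CSS fragment in the spirit of what such a theme file might contain. The selectors and class names are invented for illustration; the real ones are in the CSS file shipped with Mines:

```css
/* Invented selectors for illustration; see the CSS file shipped
 * with Mines for the real ones. */
.minefield button {
    background-image: none;
    background-color: #729fcf;  /* Tango sky blue */
    border-radius: 0;
}

.minefield button.flagged {
    background-image: url("flag.svg");
}

.minefield button.exploded {
    background-color: #cc0000;  /* Tango scarlet red */
}
```

                        Swapping in flowers instead of mines, or an entirely different palette, is then just a matter of editing this file and the referenced image assets.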
                        Thanks to all the people helping to make this release awesome, especially to Michael Catanzaro for the countless patch reviews and trust, and to Allan Day for the designs and the multiple iterations for the CSS.

                        Download. Play. Enjoy. Comment. There's still a lot to improve for the 3.14.0 release.

                        I think I’m not the only one who dreads visiting the hog that is bugzilla. It is very aptly named, but a real pain to work with at times. Mostly, what I really don’t like about bugzilla is that it 1) is really slow to load and, in particular, to search, and 2) has a very cluttered interface with all kinds of distracting information that I don’t care about. Every time I want to quickly look up a bug, search for something specific, get all bugs related to some feature in gedit, or even just open all bugs in a certain product, bugzilla gets in the way.

                        So I introduce bugzini (https://github.com/jessevdk/bugzini), the light-weight bugzilla front-end which runs entirely in the local browser, using the bugzilla XML-RPC API, a simple local webservice implemented in Go and a JavaScript application running in the browser using IndexedDB to store bugs offline.


                        Screenshot of the main bug index listing

                        It’s currently at a state where I think it could be useful for other people as well, and it’s running reasonably well (although there are certainly still some small issues to work out). There are several useful features in bugzini which make it much nicer to work with than bugzilla.

                        1. Search as you type, both for products as well as bug reports. This is great because you get instantaneous results when looking for a particular bug. A simple query language enables searching for specific fields and creating simple AND/OR style queries as shown in the screenshot (see the README for more details)
                        2. Products in which you are interested can be starred and results are shown for all starred products through a special selection (All Starred in the screenshot)
                        3. Searches can be bookmarked and are shown in the sidebar so that you can easily retrieve them. In the screenshot one such bookmark is shown (named file browser) which shows all bugs which contain the terms file and browser
                        4. bugzini keeps track of which bugs contain new changes since your last visit and marks them (bold) similar to e-mail viewers. This makes it easy to see which bugs have changed without having to track this in bugzilla e-mails instead
                        Viewing a bug


                        Try it out

                        To try out bugzini, simply do the following from a terminal:

                        git clone https://github.com/jessevdk/bugzini.git
                        make
                        ./bugzini -l

                        Please don’t forget to file issues if you find any.

                        After working a bit in Vala on gnome-mines and swell-foop I thought I'd give writing a game from scratch a try, and I also wanted to try out some more GTK+ CSS styling ideas, so I have developed the simplest game ever, a 15 puzzle.
                        After a bit of development in Vala, I can say I'm pretty comfortable with it. www.valadoc.org is a great website; every language should have such a reference with all the available functions. Sometimes the explanations are not enough, but in that case I can simply fall back to DevHelp; after some time one gets used to the rules for mapping C names to Vala namespaces, classes, and fields.
                        GtkOverlay for start screen

                        Back to Valawhole: it's a simple 15-puzzle, available on github already. Technically, I did experiment with some ideas:
                        • CSS stylable UI, just like the one I did for gnome-mines, but taken one step further, as the puzzle blocks are styled using on-the-fly generated CSS to be able to set the size (3x3 or 4x4 grid)
                        • transparent start screen overlay, using GtkOverlay
                        • game logic separated from the view, as I did like the clear separation in Swell Foop and Mines, and wanted to practice doing that
                        4 x 4 puzzle with kenney's graphics
                        The game graphics are from kenney.nl, as he has the most awesome portfolio of game graphics on opengameart.org, each one perfectly matching my taste: colorful, cartoony, professional.
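                        The on-the-fly CSS generation mentioned above boils down to string formatting. Here is a hypothetical helper (the function name, selector, and sizes are all invented for illustration); in the real game the resulting string would be handed to GTK+ via something like a GtkCssProvider:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Build a CSS fragment sizing the puzzle tiles so an NxN grid fills a
 * fixed board. The ".puzzle-tile" selector is invented for illustration;
 * in the real game the generated string would be loaded into a
 * GtkCssProvider and attached to the widgets' style contexts. */
static int
build_tile_css (char *buf, size_t len, int grid_size, int board_px)
{
    int tile_px = board_px / grid_size;

    return snprintf (buf, len,
                     ".puzzle-tile { min-width: %dpx; min-height: %dpx; }",
                     tile_px, tile_px);
}
```

                        Regenerating and reloading this string when switching between the 3x3 and 4x4 grids resizes every tile without touching the widget code.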

                        All in all, I think I will develop my upcoming games in Vala, as it takes away the burden of memory management and GObject boilerplate while still preserving native speed, being compiled to C. Awesome.


                        I have finished the stylable Mines implementation for GNOME Mines, which I mentioned in my last post:
                        • using the old scalable images
                        • with a rough paused overlay
                        • game logic including keyboard and mouse control just as before
                        • other improvements.
                        I think it's ready to be taken over by a designer for some CSS wizardry, as my CSS is a plain one, using some background-colors from the Tango color palette.
                        It is available for testing in the wip/cssstyling branch of gnome-mines, and if you manage to test it, please report any issues you find in a comment on bug 728483.

                        Now, let the screenshots do the talking (beware, ugly CSS ahead)

                        Loser

                        Starting a new game


                        Paused
                        Winner

                        Who knows GNOME Mines? You know, the GNOME version of that good old puzzle game (way older than I would've thought, its origins dating back to the '60s and '70s).

                        Allan Day has created a new design for the game as part of the GNOME Games modernization, ready to be implemented. I started working on it, and implementing the UI layout wasn't hard: I just had to rearrange some buttons with a bit of tweaking, and there you go, we have an updated layout. But the mockup isn't only about the layout, it's about the theme too, which uses the dark variant of GNOME's Adwaita theme. However, just toggling the dark variant setting isn't enough in this case, as drawing the minefield grid is almost completely hard-coded (the minefield borders do use the button style, which can be styled with CSS, but that's it). Allan asked me where he could find some CSS for styling, but unfortunately Mines is not very customizable. However, implementing this seemed like a great idea, for multiple reasons:
                        • If you have tried resizing the window, you might have also noticed the CPU usage going up and some flickering. That is caused by the implementation redrawing the full board and re-laying out the full board to keep it centered.
                        • The application would be easily stylable by designers, using CSS files only.
                        I have been thinking a bit about it, it seemed like a good idea, and even though I have already failed once at implementing a Minesweeper clone good enough for my taste and public release, I wanted to do it. So the steps I proposed:
                        • Separate the layouting code out from the minefield, as a GtkAspectFrame can do that perfectly
                        • Reimplement the minefield using standard GTK+ components, a (row- and column-) homogeneous grid for layouting the minefield, and buttons for representing the fields
                          • strangely, with the Adwaita theme on Fedora 20 + GNOME 3.12, mouse clicks on the 30x16 button grid take ages (I tried to find the bottleneck with Callgrind; if anyone's interested in the results, I can share them, but I don't understand them). With clean CSS (no fancy rounded rectangles, gradients, or background images for buttons) it's fast, and on Ubuntu 14.04 + GNOME 3.12 preinstalled from a PPA with the same version of Adwaita it's fast by default, without any CSS juggling
                        • Use an overlay for the paused screen to hide the minefield while paused
                        • Ask Allan to provide the new images and a CSS for styling, as I am bad at these :)
                        I don't know who wrote the Mines Vala code, but even though it had some inefficiencies, it turned out to be a masterpiece design-wise (not UI design, but object-oriented design), as the minefield view was almost completely separated from the game logic, and well-commented (not over-commented; it had just the amount of comments needed to understand what the code does and how).
                        Separating the layouting code out was only a matter of minutes. I also managed to replace the custom view with a standard grid quite fast, in a matter of hours.
                        So here's the "new" GNOME Mines (with a CSS style I could come up with, using the Tango color palette for now, waiting for Allan to come up with a better CSS).
                        The game is playable, with text-only buttons for now, the pause overlay is missing, but that should be the easy part. Can't wait to see it finished :)

                        And by the way, due to the recent work I've done, I have been asked and gladly accepted to become the maintainer of Mines, so feel free to file bugs/patches/feature requests for discussion, I will be happy to take this lil' project one step further :)

                        Continuing in this fledgling series of examining GLib’s GMainContext, this post looks at ensuring that functions are called in the right main context when programming with multiple threads.

                        tl;dr: Use g_main_context_invoke_full() or GTask. See the end of the post for some guidelines about multi-threaded programming using GLib and main contexts.

                        To begin with, what is ‘the right context’? Taking a multi-threaded GLib program, let’s assume that each thread has a single GMainContext running in a main loop — this is the thread default main context.((Why use main contexts? A main context effectively provides a work or message queue for a thread — something which the thread can periodically check to determine if there is work pending from another thread. It’s not possible to pre-empt a thread’s execution without using hideous POSIX signalling. I’m ignoring the case of non-default contexts, but their use is similar.)) So ‘the right context’ is the one in the thread you want a function to execute in. For example, if I’m doing a long and CPU-intensive computation I will want to schedule this in a background thread so that it doesn’t block UI updates from the main thread. The results from this computation, however, might need to be displayed in the UI, so some UI update function has to be called in the main thread once the computation’s complete. Furthermore, if I can limit a function to being executed in a single thread, it becomes easy to eliminate the need for locking a lot of the data it accesses((Assuming that other threads are implemented similarly and hence most data is accessed by a single thread, with threads communicating by message passing, allowing each thread to update its data at its leisure.)), which makes multi-threaded programming a whole lot simpler.

                        For some functions, I might not care which context they’re executed in, perhaps because they’re asynchronous and hence do not block the context. However, it still pays to be explicit about which context is used, since those functions may emit signals or invoke callbacks, and for reasons of thread safety it’s necessary to know which threads those signal handlers or callbacks are going to be invoked in. For example, the progress callback in g_file_copy_async() is documented as being called in the thread default main context at the time of the initial call.

                        The core principle of invoking a function in a specific context is simple, and I’ll walk through it as an example before demonstrating the convenience methods which should actually be used in practice. A GSource has to be added to the specified GMainContext, which will invoke the function when it’s dispatched. This GSource should almost always be an idle source created with g_idle_source_new(), but this doesn’t have to be the case. It could be a timeout source so that the function is executed after a delay, for example.

                        As described previously, this GSource will be added to the specified GMainContext and dispatched as soon as it’s ready((In the case of an idle source, this will be as soon as all sources at a higher priority have been dispatched — this can be tweaked using the idle source’s priority parameter with g_source_set_priority(). I’m assuming the specified GMainContext is being run in a GMainLoop all the time, which should be the case for the default context in a thread.)), calling the function on the thread’s stack. The source will typically then be destroyed so the function is only executed once (though again, this doesn’t have to be the case).

                        Data can be passed between threads in this manner in the form of the user_data passed to the GSource’s callback. This is set on the source using g_source_set_callback(), along with the callback function to invoke. Only a single pointer is provided, so if multiple bits of data need passing, they must be packaged up in a custom structure first.

                        Here’s an example. Note that this is to demonstrate the underlying principles, and there are convenience methods explained below which make this simpler.

                        /* Main function for the background thread, thread1. */
                        static gpointer
                        thread1_main (gpointer user_data)
                        {
                        	GMainContext *thread1_main_context = user_data;
                        	GMainLoop *main_loop;
                        
                        	/* Set up the thread’s context and run it forever. */
                        	g_main_context_push_thread_default (thread1_main_context);
                        
                        	main_loop = g_main_loop_new (thread1_main_context, FALSE);
                        	g_main_loop_run (main_loop);
                        	g_main_loop_unref (main_loop);
                        
                        	g_main_context_pop_thread_default (thread1_main_context);
                        	g_main_context_unref (thread1_main_context);
                        
                        	return NULL;
                        }
                        
                        /* A data closure structure to carry multiple variables between
                         * threads. */
                        typedef struct {
                        	gchar *some_string;  /* owned */
                        	guint some_int;
                        	GObject *some_object;  /* owned */
                        } MyFuncData;
                        
                        static void
                        my_func_data_free (MyFuncData *data)
                        {
                        	g_free (data->some_string);
                        	g_clear_object (&data->some_object);
                        	g_slice_free (MyFuncData, data);
                        }
                        
                        static void
                        my_func (const gchar *some_string, guint some_int,
                                 GObject *some_object)
                        {
                        	/* Do something long and CPU intensive! */
                        }
                        
                        /* Convert an idle callback into a call to my_func(). */
                        static gboolean
                        my_func_idle (gpointer user_data)
                        {
                        	MyFuncData *data = user_data;
                        
                        	my_func (data->some_string, data->some_int, data->some_object);
                        
                        	return G_SOURCE_REMOVE;
                        }
                        
                        /* Function to be called in the main thread to schedule a call to
                         * my_func() in thread1, passing the given parameters along. */
                        static void
                        invoke_my_func (GMainContext *thread1_main_context,
                                        const gchar *some_string, guint some_int,
                                        GObject *some_object)
                        {
                        	GSource *idle_source;
                        	MyFuncData *data;
                        
                        	/* Create a data closure to pass all the desired variables
                        	 * between threads. */
                        	data = g_slice_new0 (MyFuncData);
                        	data->some_string = g_strdup (some_string);
                        	data->some_int = some_int;
                        	data->some_object = g_object_ref (some_object);
                        
                        	/* Create a new idle source, set my_func() as the callback with
                        	 * some data to be passed between threads, bump up the priority
                        	 * and schedule it by attaching it to thread1’s context. */
                        	idle_source = g_idle_source_new ();
                        	g_source_set_callback (idle_source, my_func_idle, data,
                        	                       (GDestroyNotify) my_func_data_free);
                        	g_source_set_priority (idle_source, G_PRIORITY_DEFAULT);
                        	g_source_attach (idle_source, thread1_main_context);
                        	g_source_unref (idle_source);
                        }
                        
                        /* Main function for the main thread. */
                        int
                        main (void)
                        {
                        	GThread *thread1;
                        	GMainContext *thread1_main_context;
                        
                        	/* Spawn a background thread and pass it a reference to its
                        	 * GMainContext. Retain a reference for use in this thread
                        	 * too. */
                        	thread1_main_context = g_main_context_new ();
                        	thread1 = g_thread_new ("thread1", thread1_main,
                        	                        g_main_context_ref (thread1_main_context));
                        
                        	/* Maybe set up your UI here, for example. */
                        
                        	/* Invoke my_func() in the other thread. */
                        	invoke_my_func (thread1_main_context,
                        	                "some data which needs passing between threads",
                        	                123456, some_object);
                        
                        	/* Continue doing other work. */
                        }

                        That’s a lot of code, and it doesn’t look fun. There are several points of note here:

                        • This invocation is uni-directional: it calls my_func() in thread1, but there’s no way to get a return value back to the main thread. To do that, the same principle needs to be used again, invoking a callback function in the main thread. It’s a straightforward extension which isn’t covered here.
                        • Thread safety: This is a vast topic, but the key principle is that data which is potentially accessed by multiple threads must have mutual exclusion enforced on those accesses using a mutex. What data is potentially accessed by multiple threads here? thread1_main_context, which is passed in the fork call to thread1_main; and some_object, a reference to which is passed in the data closure. Critically, GLib guarantees that GMainContext is thread safe, so sharing thread1_main_context between threads is fine. The other code in this example must ensure that some_object is thread safe too, but that’s a topic for another blog post. Note that some_string and some_int cannot be accessed from both threads, because copies of them are passed to thread1, rather than the originals. This is a standard technique for making cross-thread calls thread safe without requiring locking. It also avoids the problem of synchronising freeing some_string. Similarly, a reference to some_object is transferred to thread1, which works around the issue of synchronising destruction of the object.
                        • Specificity: g_idle_source_new() was used rather than the simpler g_idle_add() so that the GMainContext the GSource is attached to could be specified.

                        With those principles and mechanisms in mind, let’s take a look at a convenience method which makes this a whole lot easier: g_main_context_invoke_full().((Why not g_main_context_invoke()? It doesn’t allow a GDestroyNotify function for the user data to be specified, limiting its use in the common case of passing data between threads.)) As stated in its documentation, it invokes a callback so that the specified GMainContext is owned during the invocation. In almost all cases, the context being owned is equivalent to it being run, and hence the function ends up being invoked in the thread for which the specified context is the thread default.

                        Modifying the earlier example, the invoke_my_func() function can be replaced by the following:

                        static void
                        invoke_my_func (GMainContext *thread1_main_context,
                                        const gchar *some_string, guint some_int,
                                        GObject *some_object)
                        {
                        	MyFuncData *data;
                        
                        	/* Create a data closure to pass all the desired variables
                        	 * between threads. */
                        	data = g_slice_new0 (MyFuncData);
                        	data->some_string = g_strdup (some_string);
                        	data->some_int = some_int;
                        	data->some_object = g_object_ref (some_object);
                        
                        	/* Invoke the function. */
                        	g_main_context_invoke_full (thread1_main_context,
                        	                            G_PRIORITY_DEFAULT, my_func_idle,
                        	                            data,
                        	                            (GDestroyNotify) my_func_data_free);
                        }

                        That’s a bit simpler. Let’s consider what happens if invoke_my_func() were to be called from thread1, rather than from the main thread. With the original implementation, the idle source would be added to thread1’s context and dispatched on the context’s next iteration (assuming no pending dispatches with higher priorities). With the improved implementation, g_main_context_invoke_full() will notice that the specified context is already owned by the thread (or can be acquired by it), and will call my_func_idle() directly, rather than attaching a source to the context and delaying the invocation to the next context iteration. This subtle behaviour difference doesn’t matter in most cases, but is worth bearing in mind since it can affect blocking behaviour (i.e. invoke_my_func() would go from taking negligible time, to taking the same amount of time as my_func() before returning).

                        How can I be sure a function is always executed in the thread I expect? Since I’m now thinking about which thread each function could be called in, it would be useful to document this in the form of an assertion:

                        g_assert (g_main_context_is_owner (expected_main_context));

                        If that’s put at the top of each function, any assertion failure will highlight a case where a function has been called directly from the wrong thread. This technique was invaluable to me recently when writing code which used upwards of four threads with function invocations between all of them. It’s a whole lot easier to put the assertions in when initially writing the code than it is to debug the race conditions which easily result from a function being called in the wrong thread.

                        This can also be applied to signal emissions and callbacks. As well as documenting which contexts a signal or callback will be emitted in, assertions can be added to ensure that this is always the case. For example, instead of using the following when emitting a signal:

                        guint param1;  /* arbitrary example parameters */
                        gchar *param2;
                        guint retval = 0;
                        
                        g_signal_emit_by_name (my_object, "some-signal",
                                               param1, param2, &retval);

                        it would be better to use the following:

                        static guint
                        emit_some_signal (GObject *my_object, guint param1,
                                          const gchar *param2)
                        {
                        	guint retval = 0;
                        
                        	g_assert (g_main_context_is_owner (expected_main_context));
                        
                        	g_signal_emit_by_name (my_object, "some-signal",
                        	                       param1, param2, &retval);
                        
                        	return retval;
                        }

                        As well as asserting emission happens in the right context, this improves type safety. Bonus! Note that signal emission via g_signal_emit() is synchronous, and doesn’t involve a main context at all. As signals are a more advanced version of callbacks, this approach can be applied to those as well.

                        Before finishing, it’s worth mentioning GTask. This provides a slightly different approach to invoking functions in other threads, which is more suited to the case where you want your function to be executed in some background thread, but don’t care exactly which one. GTask will take a data closure, a function to execute, and provide ways to return the result from this function; and will then handle everything necessary to run that function in a thread belonging to some thread pool internal to GLib. That said, by combining g_main_context_invoke_full() and GTask, it should be possible to run a task in a specific context and effortlessly return its result to the current context:

                        /* This will be invoked in thread1. */
                        static gboolean
                        my_func_idle (gpointer user_data)
                        {
                        	GTask *task = G_TASK (user_data);
                        	MyFuncData *data;
                        	gboolean retval;
                        
                        	/* Call my_func() and propagate its returned boolean to
                        	 * the main thread. */
                        	data = g_task_get_task_data (task);
                        	retval = my_func (data->some_string, data->some_int,
                        	                  data->some_object);
                        	g_task_return_boolean (task, retval);
                        
                        	return G_SOURCE_REMOVE;
                        }
                        
                        /* Whichever thread this is invoked in, @callback will be invoked in
                         * that thread’s default main context once my_func() has finished
                         * and returned a result. */
                        static void
                        invoke_my_func_with_result (GMainContext *thread1_main_context,
                                                    const gchar *some_string, guint some_int,
                                                    GObject *some_object,
                                                    GAsyncReadyCallback callback,
                                                    gpointer user_data)
                        {
                        	MyFuncData *data;
                        	GTask *task;
                        
                        	/* Create a data closure to pass all the desired variables
                        	 * between threads. */
                        	data = g_slice_new0 (MyFuncData);
                        	data->some_string = g_strdup (some_string);
                        	data->some_int = some_int;
                        	data->some_object = g_object_ref (some_object);
                        
                        	/* Create a GTask to handle returning the result to the current
                        	 * thread default main context. */
                        	task = g_task_new (NULL, NULL, callback, user_data);
                        	g_task_set_task_data (task, data,
                        	                      (GDestroyNotify) my_func_data_free);
                        
                        	/* Invoke the function. */
                        	g_main_context_invoke_full (thread1_main_context,
                        	                            G_PRIORITY_DEFAULT, my_func_idle,
                        	                            task,
                        	                            (GDestroyNotify) g_object_unref);
                        }

                        So in summary:

                        • Use g_main_context_invoke_full() to invoke functions in other threads, under the assumption that every thread has a thread default main context which runs throughout the lifetime of that thread.
                        • Use GTask if you only want to run a function in the background and don’t care about the specifics of which thread is used.
                        • In any case, liberally use assertions to check which context is executing a function, and do this right from the start of a project.
                        • Explicitly document contexts a function is expected to be called in, a callback will be invoked in, or a signal will be emitted in.
                        • Beware of g_idle_add() and similar functions which use the global default main context.

                        System Monitor needs an update, and it's not gonna be easy.

                        Background

                        System Monitor is a mostly stable piece of software and part of the GNOME project. It is (or should be) the application to
                        • monitor your system with
                        • find the application/process that is
                          • slowing down your system or using up your network bandwidth
                          • getting your laptop hot by running some or all of your cores at 100%
                          • draining your laptop battery in only an hour
                        After you have identified the problem, you should be able to use the same application to recover:
                        • by killing the process
                        • by setting a CPU usage, memory usage, or bandwidth usage limit for the process
                        There are some tasks System Monitor excels at. I personally love the process list filtering + multiple selection + kill feature; it works better for me than the killall terminal command, and that's something good.

                        Fact is, the interface of System Monitor looks a bit outdated. Thanks to the help of several drive-by contributors, some elements of the user interface have been updated to match the rest of the GNOME 3 applications, but underneath it is still the same old rusty application.

                        System Monitor also consumes more resources than it probably should. I might be the one to blame here, as I could have done something in the time I have been working on it, but it just isn't that easy. Several people have reported bugs against either system-monitor or libgtop with patches fixing bottlenecks, memory leaks, and limitations, and where I saw it appropriate, I have reviewed and committed those patches. However, I am not experienced enough with either of these projects to know all the implications of the suggested changes, and the patch author probably isn't either, so while we might see an improvement, we might also introduce another bottleneck somewhere else.

                        The plan

                        GNOME designer Allan Day has come up with some new designs for a system monitoring application (Usage), and after some suggestions and feedback (yes, feedback is always welcome) he has updated them with an even better sidebar-oriented design, which I like a lot.

                        The progress

                        Stefano Facchini has implemented a proof of concept application based on the first mockups from Allan, but it needs updating, and a lot of work afterwards.

                        The dreams

                        "the dreams that you dare to dream really do come true"
                        (Lyman Frank Baum)
                        I am dreaming of a fully updated Usage application for GNOME 4.0 :) I don't think it can be done properly in the timeframe of the next GNOME release, 3.14, but hopefully it can be done by the time GNOME 4.0 comes out, whenever that will happen.

                        And by updated I mean an application I can use to do the tasks System Monitor does, but a bit better, faster, and cleaner. And I am not speaking only about what's on the surface: I have been thinking about building a D-Bus wrapper around libgtop, with many more options to request only what you need, to be as fast as possible.

                        Yes, I am dreaming of the interface designed by Allan, with some twists, like extensibility: separate "plugins" for power usage, CPU usage, and memory usage monitoring, with the option to turn any of these off, or even better, to have the ones you don't need turned off automatically (for example, no power usage monitoring on desktop PCs).

                        Call for help

                        Even the easiest option (whichever that would be, updating the System Monitor interface or implementing Usage from scratch) would need more manpower AND experience than I have, so I am asking for your help:
                        • if you are a developer and you would like to contribute to this goal, let me know in the comments
                        • if you are simply a user and have any comments on the design, your workflow, or what you would like to see, your comments are welcome
                        • if you would be willing to test the application before it gets out, let me know in the comments

                        Long time, no post, but here it comes:

                        • I had my first presentation for promoting the FLOSS culture in a (computer science) student camp, I had fun, the audience seemed interested.
                          • I have prepared my best presentation so far, without words (only the title and my name in it, and a few numbers)
                          • Presentation prepared with Inkscape and Sozi resulting in a beautiful scalable Prezi-like presentation with lots of Public Domain vector graphics from OpenClipart and clker, and icons of several open-source projects
                          • Not shared anywhere yet, because
                            • a presentation without words won't work without a presenter, so I'm thinking of adding a keyword to each slide
                            • lots of FLOSS software projects have beautiful scalable icons, BUT they have huge icon usage guidelines restricting not only the sizes the icons can be used at, but the background colors too, so I still have to review some of those to make sure I'm not breaking any rules I should be conforming to
                        • Attended several local hackathons for FLOSS software, worked mostly on System Monitor, unfortunately these have been temporarily cancelled
                        • Found bountysource.com, a great way to get some cash for open-source contributions
                          • I'm NOT saying that it's a must, just that it was a good motivation for me to get involved (even if only for one patch) in several projects (most of them involving the elementary project, being a very welcoming community with very enthusiastic goals of developing a full set of properly designed applications bundled into a distribution based on Ubuntu)
                          • I have managed to collect bounties worth $100, and I still have some patches waiting for review, worth another few hundred dollars
                        • Glad to see GNOME 3.12 out; I have had it installed on my work PC from rhughes' COPR repo since it first appeared (I had some trouble installing it, as multilib wasn't discussed in the installation instructions back then, but now it is, so it should be a breeze)
                        • I am thinking about the next generation of System Monitor, but I will need some manpower there to get it done, so I think I'll cover that in another post
                        • I have started game development again, developing simple games for GNOME with GTK+ only, and I am very satisfied with the results (a relatively new function, about a year old, helps a lot with animations). I'll blog about that in a later post too; I'm in the mood for posting a lot now, as I have a lot to catch up on