Thursday, August 2, 2012

Disks improvements in GNOME 3.6

Just a short exposition of new features and changes in the Disks and udisks releases that will ship with GNOME 3.6:
  • There are now configuration dialogs for power management for ATA disks - see these two screenshots. Implementation-wise, it's backed by udisks configuration files - see the udisks man page for details. In the future we may add more disk and block device settings (for example, for controlling write-back/write-through caching)
  • The Disks application will now show a Zzz icon if a disk is in standby mode. It looks like this. You can also manually put a disk into standby mode and wake it up again
  • There is now a way to erase a disk by filling it with zeroes. If the disk supports the ATA SECURE ERASE command (which all modern ATA hard disks from the last decade or so do), we support that as well. The erase type is selected from a combo box. In the erase confirmation dialog we now also offer guidance for all three cases since each of them has interesting implications (quick format, overwrite, ATA secure erase). The link in the last warning points the user to the ATA Secure Erase page on the Linux ATA Wiki. Hopefully this guidance is useful for people donating/selling old disks/devices (e.g. by reminding them to wipe first)
  • Long-running jobs are now displayed in the Disks user interface, along with a way to cancel them
  • There's a new "Disk Image Mounter" application for attaching (you guessed it) disk images. It comes with MIME type associations and in Fedora 18 it is the default application for ISO files. This means that if you double-click an ISO file in Files, this application is invoked and it simply sets up the loop device.
  • Combined with this GVfs change, the desktop bits will Do The Right Thing(tm) when the user sets up a loop device (don't worry, we don't do anything automatic if your shell script calls losetup(8) - it only affects loop devices set up via the udisks D-Bus interface). For example, if the image contains an installable OS, the desktop will go on to prompt the user to install the OS in the disk image. Note that the machinery is exactly the same as for physical media/devices - e.g. it works with partitioned disk images, the user is prompted to unlock LUKS volumes in the disk image etc. For example, for an ISO image of a Video DVD, the desktop will suggest opening whatever app is registered for x-content/video-dvd (normally Totem but it could also be a 3rd party DVD player claiming to support that MIME type)
  • The GVfs bits will also toggle the autoclear flag for the loop device when it mounts / unlocks the loop device (or a partition on the loop device) - this is so the disk image is detached when the last user stops using it. This flag can also be toggled from the Disks application, see this screenshot
  • After a good and healthy discussion, the desktop shell now offers to save LUKS passphrases in the keyring again (removed in 3.4, back in 3.6!). So it was a natural thing to make the Disks application read them when unlocking the volume
  • Speaking of passphrases, Disks is now using libpwquality and GtkLevelBar. It looks like this - a small sketch of how the two fit together follows this list
  • By popular demand, the benchmark feature that was lost in the Disks/udisks rewrite that landed in GNOME 3.4 is now back - it now looks like this. Not only is it back, it's back in style: write benchmarks are now non-destructive (just use ECC memory, please) and the graph is updated while the benchmark is underway. You can also configure how many samples to take etc. Also, benchmarking isn't constrained to whole disks anymore - you can now benchmark any volume or block device (for example this RAID-6 array). It's a handy little feature
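
Here is the promised sketch of how libpwquality and GtkLevelBar can fit together - this is not the actual Disks code, just a minimal illustration, where the callback is assumed to be connected to the passphrase entry's "changed" signal:

    /* assumes GTK+ >= 3.6 (for GtkLevelBar) and libpwquality */
    #include <gtk/gtk.h>
    #include <pwquality.h>

    static void
    on_passphrase_changed (GtkEditable *editable, gpointer user_data)
    {
      GtkLevelBar *bar = GTK_LEVEL_BAR (user_data);
      const gchar *text = gtk_entry_get_text (GTK_ENTRY (editable));
      pwquality_settings_t *pwq = pwquality_default_settings ();
      void *auxerror = NULL;
      /* returns a score in [0, 100] or a negative error code */
      int score = pwquality_check (pwq, text, NULL, NULL, &auxerror);
      /* GtkLevelBar's default range is [0, 1] */
      gtk_level_bar_set_value (bar, score > 0 ? score / 100.0 : 0.0);
      pwquality_free_settings (pwq);
    }
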
Of course, the GNOME Disks changes are just the user-visible bits. The "lower half" of the Disks application (basically invisible to end users) is called udisks and happens to be shared with other user interfaces, including KDE's as well as the other user interfaces built on top of GLib/GTK+. For udisks changes, see its git repository for details.

Looking forward to GNOME 3.8

Some of the things I'm planning to work on in the future (hopefully some of it will land in GNOME 3.8) include:
  • Filesystem/partition resizing - not that interesting a feature on its own, but incredibly useful in an installer setup where you want to dual-boot with another OS. It shouldn't be that hard to implement since the functionality more or less exists and the main job here is a) adding a couple of methods to the Filesystem and Partition D-Bus interfaces (a rough sketch follows this list); and b) designing and implementing the UI in Disks. I'm planning to collaborate on the udisks bits with the KDE developer who filed the bug
  • Linux MD RAID support in udisks and Disks. We actually had somewhat decent support for this in the old version of udisks/Disks (see these screenshots for example) but it was removed in the rewrite as it had been implemented in a hurry without a lot of attention to design. Turns out there's still quite a bit of interest from several parties in adding back this functionality, as well as supporting external metadata and firmware/platform features in a meaningful way. To keep complexity down, we'll probably only allow creating arrays from whole disks as this is the recommended way to do RAID (users can still use mdadm(8) to create arrays on partitions if they so desire)
  • Simple iSCSI initiator support - there are already git branches for this in the gnome-disk-utility and udisks repos but they are probably somewhat out of date by now. Anyway, the idea is really simple - just provide a simple front-end to the most common features provided by Linux Open-iSCSI. Some old screenshots showing the work I've already done are here and here
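
As for the resizing item above, here's a rough sketch of what the new D-Bus method could look like - the method and argument names are guesses on my part, not a final API:

    <!-- hypothetical addition to the Filesystem interface -->
    <interface name="org.freedesktop.UDisks2.Filesystem">
      <method name="Resize">
        <arg name="size" type="t" direction="in"/>      <!-- new size in bytes -->
        <arg name="options" type="a{sv}" direction="in"/>
      </method>
    </interface>
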
Now an obvious question comes to mind: why should Disks support complicated stuff like MD RAID and iSCSI? Honestly, for the last couple of years, I've been bouncing back and forth between wanting / not wanting complicated and advanced features like this. However, recent discussions in the GNOME community have put me on a trajectory where I'm more and more convinced that GNOME should try harder to cater to, uh, workstation users (for want of a better word).

If workstations are part of GNOME's mission statement then it follows that you want to enable users to, say, easily set up a beefy workstation with eight disks in a RAID configuration so rebuilding the OS goes faster, or easily connect to the lab SAN if you're into science, and so on. No matter how good, user-friendly and well-documented commands and configuration files are, I don't think saying "just use the command-line or edit this or that file in /etc" is a super-satisfying answer.

Monday, June 4, 2012

Authorization Rules in polkit

For the past couple of weeks, I've been working on rewriting the part of polkit that actually makes the authorization decision: the polkit authority.

Some history

In its current implementation, the so-called polkit local authority ended up being one of those things that never really worked well for the (relatively small subset of) users with a need to configure it. First of all, the local authority was never really supposed to be the main polkit authority... back then, we envisioned that in an Enterprise setting you'd have something like FreeIPA to handle permissions and authorization, and this would in turn provide a polkit backend responsible for answering authorization questions from applications (sadly this hasn't materialized yet, but with this new work we could be a lot closer).

Second of all, the local authority had some serious usability problems - in fact so many that I early on re-purposed a bug for the rewrite. I basically concluded that it's just too hard to do even simple things, and that the key-file-based format and priority rule scheme of the .pkla file format weren't really helping. It's also really hard to test .pkla files.

Defining the problem

After thinking about this problem on and off for a while (while working on other things, mostly udisks/GNOME Disks and all the GDBus stuff), I identified the following requirements:

  1. There is no one-size-fits-all - for example, some admins want to allow all users in a group to do the action xyz, while other admins want the opposite (forbid users in a group from doing the action xyz). Similarly, some admins want black-lists ("allow anything but action xyz") and some want white-lists ("only allow actions xyz, abc, ..."). Some admins even want to have the result depend on the time ("don't allow xyz on school nights").
  2. It would be good to have more information than just the action available when making a decision. For example, for mounting or formatting a disk, it would be nice to have the device file or serial number or name and e.g. make the decision depend on this. For connecting to a wireless access point, having access to its ESSID would be great (to only allow connections to ESSIDs in a whitelist) and so on.
  3. Some admins may want to use external programs, so there should be some facility available for this - although it can't be the primary interface because forking a process every time someone calls CheckAuthorization() is a recipe for disaster [1].
  4. Ideally, it should be easy to test authorization rules - after all, this has to do with security, so being able to easily (and automatically) test that your rules Do The Right Thing(tm) would be, uhm, great

A lot of these requirements are actually similar to the requirements you have for udev rules. And a lot of the constraints are similar as well, especially the fork-fest part.

Another thing that's important to identify is who is going to end up using this. The answer here is mostly "Enterprise Admin", although in reality a lot of hobbyists end up using it because, well, they like to tinker around. Some users of Linux distributions with broken defaults (see below) may also need to undo the damage done by the distribution so they can actually use their computer. All in all, one can probably assume that the target user here is relatively skilled, i.e. can read the provided documentation and is at least capable of copy-pasting some snippet from a website into a file in /etc as root and checking that it has an effect.

[1] : History lesson: before we had udevd(8), the kernel forked /sbin/hotplug for every hotplug event. And /sbin/hotplug, being a shell script and all, itself forked another ten shell scripts or so in /etc/hotplug.d/ and these forked other shell scripts and... the result was that hotplugging a USB hub full of devices could easily take minutes because tens of thousands of /bin/sh instances were forked. Awesometown. Today udevd(8) does the same in less than a second without forking a lot of extraneous processes.

Just embed JavaScript

The first requirement clearly indicates that we want some kind of programming language. However, inventing your own programming language is rarely a good idea so I decided to just embed a JavaScript interpreter (specifically SpiderMonkey) and try that out.
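
To give a flavor of what this ended up looking like, here's a short example in the spirit of the new rules format - the action ID below is made up for illustration, and the authoritative documentation is the AUTHORIZATION RULES section of polkit(8) mentioned below:

    polkit.addRule(function(action, subject) {
        // allow members of the wheel group to perform the (made-up)
        // com.example.frobnicate action without authentication
        if (action.id == "com.example.frobnicate" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
        // returning nothing means "no opinion" - evaluation falls
        // through to the next rule
    });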

For the second requirement, I just exposed information we already have to the rules engine. I also rewrote the "Writing polkit applications" chapter to mention that mechanisms should use this feature, along with a ton of other advice. [2]

For the third requirement, spawning programs, I added a simple polkit.spawn() method.

The fourth requirement, testing, is fulfilled by observing that the polkit authorization rules are JavaScript files which the user can test any way they want - e.g. by trivially mocking the Polkit, Action and Subject types via duck typing in their favorite server-side JavaScript environment (gjs, node.js, seed etc.).

After a couple of prototyping attempts, I ended up with something that isn't too awful - see for yourself in the polkit(8) man page in the AUTHORIZATION RULES section. I also ended up adding a number of tests for this, including a test to ensure that even runaway scripts are terminated. Overall, I'm very satisfied with the result.

[2] : I also included common-sense advice like "don't ask for a root password for adding printer queues" in this section

Wait, isn't embedding a JS interpreter inherently dangerous?

Embedding a JS interpreter is actually perfectly safe. First of all, it all runs inside the polkitd(8) system daemon (which runs without privileges, see below). Second of all, all data passed to the rules is either trusted or from a process that is trusted (except where designated untrusted, e.g. pkexec(1)'s command_line variable). Third, since it's an interpreted language, we can actually sensibly terminate runaway scripts.

Hmm, OK, but you are bloating Linux anyway. You suck.

The only new dependency here is libmozjs185.so.1, which in turn depends on the C++ runtime and NSPR (which NSS also depends on, and most Linux installs have these libraries). Note that you already need a JS interpreter anyway for proxy server auto-configuration. It's also possible (or will be, at least) to simply not install polkitd(8) at all (or disable/mask it using systemd) while still having the client-side libraries installed.

Other features

Apart from the new authorization rules engine, I also made the polkitd(8) system daemon run as the unprivileged polkitd user instead of root (much safer for obvious reasons). The main reason this hadn't been done before is that polkitd(8) was loading backends via an extension system and we didn't really know if some future backend would have to run as root. Since I decided to nuke the extension system, we no longer need to make such assumptions, so changing it was straightforward.

A while ago, I also added the pkttyagent(1) command on request from Lennart for optional use in systemctl(1). Optional here means that it's a soft dependency - systemctl(1) works fine even when polkit is not installed.

Next steps

I've been doing all this work on the wip/js-rule-files branch and today I merged this branch to the master branch. I plan to do a new 0.106 release shortly and put it in what will end up being Fedora 18.

Once that's available, I plan to file bugs against the most important mechanisms (such as NetworkManager) so they can start exporting variables to be used in authorization rules and also properly document this.

Wednesday, March 7, 2012

Simpler, Faster, Better

For the past year or so, I've been working on and off on the next version of udisks and what is now known as the Disks GNOME application (née palimpsest):

Hello Shiny!

This code will ship in the upcoming GNOME 3.4 and Fedora 17 releases.

The motivation for rewriting udisks and palimpsest didn't really have anything to do with the user experience per se - the main motivation was to port the udisks/palimpsest code from dbus-glib to all the D-Bus work I've been doing over the last couple of years.

Since everything had to be rewritten to use new interfaces, I figured that while I was at it, I'd redesign the UI part as well. A big thanks goes to Jon McCann for helping me design the application and enduring my complicated descriptions of how storage works on Linux.

Cleanups

There are a couple of new user-visible features in the code, but before delving into specifics I want to mention that we've also removed a couple of features. Why? Because the old interface was quickly becoming a mess and it certainly wasn't very GNOME3-ish, neither in the way it looked nor in the simplicity you'd expect. If there is one mantra that has driven this effort, it may very well be "don't try to expose everything in a desktop user interface" and I think this is something that extends to the greater GNOME 3 effort as well.

For example, the extent to which Disks supports LVM (and MD RAID for that matter) is now that we only show the mapped devices (if the VG / array is running) and, except for e.g. showing the user-friendly (!) name /dev/vg_x61/lv_root instead of /dev/dm-2, that's pretty much it. You can still, of course, manipulate the device (such as changing the file system label) and its contents as if it were any other device (see below), but we no longer provide any UI to configure LVM or MD RAID itself - you are expected to use the command-line tools to do so yourself.

Another important goal of Disks that hasn't changed is the realization that we're only one tool among many - we don't pretend to own your machine or to be the only tool you'll ever use. That's why you'll e.g. see device names in the user interface (without them being primary identifiers) so you can copy/paste them to the terminal and use command-line tools on the device. Additionally, thanks to the notifications from udev, the Disks application will update its UI in response to changes you make from e.g. the command-line or other applications - for example, if you mkfs(8) a new filesystem, or add a new partition using parted(8).

Common operations

Fundamentally, the Disks end-user interface hasn't changed much (it's still basically a tree-view on the left and a volume grid on the right); however, instead of lots of buttons, most operations are initiated from a popup menu for either the volume or the drive in question. Common operations like formatting a device, changing the passphrase of an encrypted device, creating a new partition, formatting a disk or checking ATA SMART data are pretty much the same, although the UI has been cleaned up some. Pretty basic stuff, really, no big changes here.

Mount and encryption options

A common theme since I started working on storage in desktop Linux (some eight years ago) is that some people are extreme control-freaks about where a device is mounted and with what options. Most of the time you won't need this (the desktop shell will automount the device somewhere in /media) but there are a couple of important uses for it. One common example is people using their computer to serve media files to their TV or media jukebox over the LAN. For this to work you often need to make sure the device is mounted in, say, /mnt/foo (because /mnt/foo is what you added to the config file for the media server) and that this happens before the media server is started.

We've been trying all sorts of things over the years (the most complex being the now-deprecated gnome-mount program reading mount options from e.g. GConf) and while some of these things have worked great in theory, they were just way too complex; our users just didn't have the time or the inclination to figure out how to use our software... and I don't blame them... my experience is that if the effort is more complex than editing a single text file with emacs(1) or vi(1) you've lost most people.

(In retrospect, realizing when your software is too complex for people to use takes much longer than you think - worse, I suspect a lot of developers don't realize just how over-engineered their software is until it's too late.)

So, the way it has worked for the past few years in GNOME is that if you want specific configuration for mounting a specific disk, I've been telling people to just add an entry to /etc/fstab and use one of the symlinks in /dev/disk. Works great. So it was only natural to actually add some UI for this and the result is this dialog

Editing an entry in the /etc/fstab file

which basically corresponds to the fields in the /etc/fstab file.

A nice touch here (I think) is that the "Identify As" combo box allows you to select any of the /dev/disk symlinks that currently point to the device. For example, you can make the given mount options apply to any disk connected to the given USB port (actually, partition 1 of said disk). This is of course nothing new - you've always been able to use any /dev/disk symlink in the /etc/fstab file - but I bet most people don't have the time or inclination to find out exactly what link to use, so most just end up using e.g. /dev/sdb1 because that works for them right now. Again, the good old "if it's more complex than editing a single file" line of thought.
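
To tie this to the media-server use case from earlier, the resulting /etc/fstab entry could look something like this (the by-id symlink and mount point are made up for illustration):

    # mount partition 1 of a specific USB disk at /mnt/foo
    /dev/disk/by-id/usb-WD_My_Passport_0123-part1  /mnt/foo  ext4  nosuid,nodev,nofail  0  2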

(My secret hope here is that even old Unix neck-beards will want to use such UI dialogs instead of editing the /etc/fstab file manually ... or at the very least realize how such a combo box adds value before they flame the tool to death :-) ...)

As for /etc/fstab, we also have similar UI for the /etc/crypttab file, except that this is a bit more complex insofar as we allow managing passphrase files to e.g. automatically unlock a device on boot-up... you may need an encrypted /etc for this to be secure - this of course depends on your threat model, but it could be that you have provisions for securely shredding all content in /etc/luks-keys when needed etc. etc., I don't know :-)
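
For reference, such an /etc/crypttab entry could look something like this (the name, UUID and key file path are made up for illustration):

    # unlock the LUKS device at boot using a passphrase file
    luks-data  UUID=01234567-89ab-cdef-0123-456789abcdef  /etc/luks-keys/data  luks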

Anyway, the nice thing about using standardized files such as /etc/fstab and /etc/crypttab for this is interoperability with the rest of the OS - in particular, you can edit the mount options for e.g. the boot file-system from the Disks application. Additionally, things like systemd will actually mount devices referenced in the /etc/fstab file at start-up (unless the noauto option is used), which is something you really want for the media server use-case mentioned above.

Disk images and loop devices

Another new feature has to do with disk images - the Disks application now allows you to create and restore disk images (including showing the speed and estimated time left), which is especially handy e.g. from a rescue live CD (it's basically just a GUI for dd(1)). Additionally, there is UI for attaching disk images and using them and, thanks to my friend Kay Sievers, it even works with partitioned disk images

Loop device, with partitions, oh my!

which I think is pretty handy.

Desktop integration

Another important part of the equation is what the user sees when it comes to storage devices in general (i.e. not only when using the Disks application). This is largely the domain of the Desktop Shell, the Files application (née Nautilus) and the file chooser. These pieces all rely on the GLib Volume Monitor APIs, a high-level abstraction with a stable API/ABI that applications can use (among other things) to figure out what volumes to present in their user interfaces.
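
For example, an application can use these APIs to enumerate the user-visible volumes with just a few calls - a minimal sketch:

    #include <gio/gio.h>

    /* print the names of the volumes a file manager would show */
    static void
    list_volumes (void)
    {
      GVolumeMonitor *monitor = g_volume_monitor_get ();
      GList *volumes = g_volume_monitor_get_volumes (monitor);
      for (GList *l = volumes; l != NULL; l = l->next)
        {
          gchar *name = g_volume_get_name (G_VOLUME (l->data));
          g_print ("%s\n", name);
          g_free (name);
        }
      g_list_free_full (volumes, g_object_unref);
      g_object_unref (monitor);
    }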

In a nutshell, the volumes to show are a complex mixture of physical storage devices (say, mountable filesystems residing on a plugged-in USB stick), devices backed by GVfs backends (say, digital cameras and iPods that are not block devices), GVfs network backends (e.g. smb:// network mounts) and non-storage mounts such as NFS mounts, FUSE mounts and fstab entries representing things that can be mounted but may not already be mounted.

Now, the way GVfs works is pretty complex (there are multiple volume monitor daemons and storage backend instances) but the important point is that the volume monitor responsible for device and fstab mounts has been updated to use udisks2, the same storage daemon used by the Disks application. In addition to returning GDrive, GVolume and GMount objects for storage devices, this volume monitor also returns GVolume entries for /etc/fstab entries so you can e.g. mount an NFS server by clicking an icon in Files (again, a common user request), like this

Desktop Integration. Bringing it all together

Note that the name, icon and desktop visibility of the GVolume instance representing the NFS mount are controlled by mount options in the /etc/fstab file - see this write-up for more information.

Thursday, September 22, 2011

New D-Bus features in GLib 2.30

For the upcoming GLib 2.30 release, there's a couple of new features to make it even easier to use D-Bus.

C Code Generator

GLib 2.30 ships with a command called gdbus-codegen(1) which can be used to generate C code. The command is similar in spirit to the dbus-binding-tool(1) command but it targets the D-Bus implementation added in GLib 2.26 (often informally referred to as GDBus). The command's manual page and migration documentation are the authoritative sources of documentation but, from a 50,000-foot view, what the tool does is simply map a D-Bus interface (described by XML) to a couple of GObject-based types (including all D-Bus methods, signals and properties). It of course supports the PropertiesChanged signal added in D-Bus spec 0.14.
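
As a quick illustration, given an interface description like this (the net.example.Frobnicator interface is made up for the example)

    <node>
      <interface name="net.example.Frobnicator">
        <method name="Frobnicate">
          <arg name="flags" type="u" direction="in"/>
          <arg name="result" type="s" direction="out"/>
        </method>
        <property name="Verbose" type="b" access="readwrite"/>
      </interface>
    </node>

a command along the lines of

    gdbus-codegen --interface-prefix net.example. \
                  --c-namespace Example           \
                  --generate-c-code generated     \
                  net.example.Frobnicator.xml

generates generated.[ch] containing an ExampleFrobnicator interface type plus proxy and skeleton types, including example_frobnicator_call_frobnicate() and friends.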

Even though the tool is targeting C programmers (most higher-level languages are a lot more dynamic than C so objects can be exported via e.g. language meta-data such as annotations), it's useful to note that the generated code is 100% annotated so it can be used from e.g. JavaScript through GObject Introspection - for example the GNOME Documents application (written in JavaScript) is consuming the GNOME Online Accounts client-library which is 99% generated by the gdbus-codegen(1) command.

In addition to just generating code, the gdbus-codegen(1) command can also generate very nice Docbook documentation (example 1, example 2) for D-Bus interfaces - in this respect it's useful even for non-GNOME applications in the same way a lot of libraries like libblkid and libudev are already using gtk-doc for their C library documentation.

Object Manager

Another part of GLib 2.30 is support for the org.freedesktop.DBus.ObjectManager D-Bus interface - in fact, the implementation in GLib, that is, the newly added GDBusObject{Proxy,Skeleton} and GDBusObjectManager{Client,Server} types, is actually what drove me to propose this as a standard interface instead of just doing a GLib-only thing (with shoutouts to my homeboys smcv and walters for excellent review and feedback).

In a nutshell, the org.freedesktop.DBus.ObjectManager interface is basically a formalization of what each and every non-trivial D-Bus service is already doing: offering some kind of GetAll() method (to return all objects) and ::Foo{Added,Removed} signals (to convey changes) on its Manager interface in its own special way (example 1, example 2). At first it sounds weird to standardize such a simple thing as object enumeration and change signals, but if you think about all the possible edge cases and race conditions, using a well-tested implementation just makes everything so much easier. Additionally, with the way the interface is defined and the newly added path_namespace match rule, two method invocations and a single round-trip are all it takes for a client to grab the state from the service - a huge win compared to existing services that typically first retrieve a list of object paths and then get properties for each object path (and only if you are lucky is the latter done in parallel).
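
On the client side, consuming such a service looks roughly like this (the bus name and object path are made up for the example):

    #include <gio/gio.h>

    static void
    print_managed_objects (void)
    {
      GError *error = NULL;
      GDBusObjectManager *manager;
      GList *objects, *l;

      /* a single call fetches all objects, interfaces and properties */
      manager = g_dbus_object_manager_client_new_for_bus_sync (
          G_BUS_TYPE_SYSTEM,
          G_DBUS_OBJECT_MANAGER_CLIENT_FLAGS_NONE,
          "net.example.Service",    /* bus name (made up) */
          "/net/example/Service",   /* object path (made up) */
          NULL, NULL, NULL,         /* use plain GDBusProxy instances */
          NULL, &error);
      if (manager == NULL)
        {
          g_warning ("error: %s", error->message);
          g_error_free (error);
          return;
        }

      objects = g_dbus_object_manager_get_objects (manager);
      for (l = objects; l != NULL; l = l->next)
        g_print ("%s\n",
                 g_dbus_object_get_object_path (G_DBUS_OBJECT (l->data)));
      g_list_free_full (objects, g_object_unref);
      g_object_unref (manager);
    }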

When combined with the gdbus-codegen(1) command (which can also generate specialized GDBusObject types for use with GDBusObjectManager) you can start writing service-specific code right away and not worry about horrible implementation details like marshaling, too many round trips or race conditions. It just works out of the box as you'd expect it to.

Wednesday, July 6, 2011

Writing a C library, intro, conclusion and errata

This is a series of blog-posts about best practices for writing C libraries. See below for each part and the topics covered.

Table of contents

The entire series about best practices for writing C libraries covered 15 topics and was written over five parts posted over the course of approximately one week. Feel free to hotlink directly to each topic but please keep in mind that the content (like any other content on this blog) is copyrighted by its author and may not be reproduced without his consent (if you are friendly towards free software, like e.g. LWN, just ask and I will probably give you permission):

Topics not covered

Some topics relevant to writing a C library aren't (yet?) covered in this series, either because I'm not an expert on the topic, because the topic is still in development, or for other reasons:
  • Networking
    You would think IP networking is easy but it's really not, and the low-level APIs that are part of POSIX (e.g. BSD Sockets) are not really that helpful since they only do part of what you need. Difficult things here include name resolution, service resolution, proxy server handling, dual-stack addressing and transport security (including handling certificates for authentication).

    If you are using modern GLib networking primitives (such as GSocketClient or GSocketService) all of these problems are taken care of for you without you having to do much work; if not, well, talking to (or at least reading the blogs of) people such as Dan Winship, Dan Williams or Lennart Poettering is probably your best bet.

  • Build systems
    This is a topic that continues to make me sad so I decided not to really cover it in the series, because the best guidance I can give is to just copy/paste whatever other projects are doing - see e.g. the GLib source tree for how to nicely integrate unit testing (see Makefile.decl) and documentation (see the docs/reference sub-directories) into the build system.

    Ideally we would have a single great IDE for developing Linux libraries and applications (integrating testing, documentation, distribution, package building and so on - see e.g. Sami's libhover video) but even if we did, most existing Linux programmers probably wouldn't use it because they are so used to e.g. emacs or vi (if you build it, they will come?). There's a couple of initiatives in this area including Eclipse CDT, Anjuta, KDevelop and MonoDevelop.

  • Bundling libraries/resources
    The traditional way of distributing applications on Linux is through so-called Linux distributions - the four most well-known being Debian, Fedora, openSUSE and Ubuntu (in alphabetical order!). These guys, basically, take your source code, compile it against some version of other software it depends on (usually a different version than you, the developer, used), and then ship binary packages to users using dynamic linking.

    There are a couple of problems with this legacy model of distributing software (this list is not exhaustive): a) it can take up to one or two distribution release cycles (6-12 months) before your software is available to end users; and b) user X can't give a copy of the software to user Y - he can only tell him where to get it (it might not be available on user Y's distro); and c) it's all a hodgepodge of version skew - the final product that your users are using is, most likely, using different versions of different libraries, so who knows if it works; and d) the software is sometimes changed in ways that you, the original author, weren't expecting or do not approve of (for example, by removing credits); and e) the distribution might not forward you bug reports or may forward you bug reports that are caused by downstream patches; and f) there's peer pressure to not depend on too-new libraries because distributions want to ship your software in old versions of their OS - for example, Mozilla wants to be able to run on a system with just GTK+ 2.8 installed (and hence won't use features in GTK+ 2.10 or later except via dlopen() techniques), and similarly for e.g. Google Chrome (maybe with a newer GTK+ version though). These problems are virtually unknown to developers on other platforms such as Microsoft Windows, Mac OS X or even some of the smartphone platforms such as iOS or Android - they all have fancy tools that bundle things up nicely so the developers won't have to worry about such things.

    There are a couple of interesting initiatives in this area - see e.g. bockbuild, glick and the proposal to add a resource system to GLib. Note that it's very, very hard to do this properly since it depends not only on fixing a lot of libraries so they are relocatable, but also on identifying exactly what kind of run-time requirements each library in question has. The latter includes the kernel/udev version, the libc version (unless bundled or statically linked), the X11 server version (and that of its extensions such as e.g. RENDER), the presence of one or more message buses and so on. With modern techniques such as direct rendering this becomes even harder if you want to take advantage of hardware acceleration, since you must assume that the host OS provides recent enough versions of e.g. the OpenGL or cairo libraries (you don't want to bundle hardware drivers). And even after all this, you still need to deal with how each distribution patches core components. In some circumstances it might end up being easier to just ship a kernel+runtime along with the application, virtualized.

The series is set up so it can be extended at a later point - if there is demand for one or more popular topics about writing a C library, I might write another blog entry and add it to this page, as it's considered the canonical location for the entire series.

Errata

Please send me feedback and I will fix up the section in question and credit you here (I already have a couple of corrections lined up that I will add later).

Tuesday, July 5, 2011

Writing a C library, part 5

This is part five in a series of blog-posts about best practices for writing C libraries. Previous installments: part one, part two, part three, part four.

API design

A C library is, almost by definition, something that offers an API that is used in applications. Often an API can't be changed in incompatible ways (it can, however, be extended), so it is usually important to get it right the first time - if you don't, you and your users will have to live with your mistakes for a long time.

This section is not a full-blown guide to API design as there's a lot of literature, courses and presentations available on the subject - see e.g. Designing a library that's easy to use - but we will mention the most important principles and a couple of examples of good and bad API design.

The main goal when it comes to API design is, of course, to make the API easy to use - this includes choosing good names for types, functions and constants. Be careful with abbreviations - atof might be quick to type but it's not exactly clear that the function parses a C string and returns a double (no, not a float as the name suggests). Typically, nouns are used for types while verbs are used for methods.

Another thing to keep in mind is the number of function arguments - ideally each function should take only a few arguments so it's easy to remember how to use it. For example, probably no-one ever remembers exactly what arguments to pass to g_spawn_async_with_pipes(), so programmers end up looking up the docs, breaking the rhythm. A better approach (which is yet to be implemented in GLib) would be to create a new type, let's call it GProcess, with methods to set what you'd otherwise pass as arguments and then a method to spawn the actual program. Not only is this easier to use, it is also extensible, as adding a method to a type doesn't break API while adding an argument to an existing function/method does. An example of such an API is libudev's udev_enumerate API - for example, when udev started dealing with device tags, the udev_enumerate type gained the add_match_tag() method.
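
To make the idea concrete, here's how the hypothetical GProcess might be used - to be clear, this type does not exist in GLib and all the names are made up:

    GError *error = NULL;
    GProcess *process;

    /* each setter replaces what would otherwise be yet another
     * positional argument to g_spawn_async_with_pipes() */
    process = g_process_new ("/usr/bin/my-program");
    g_process_set_working_directory (process, "/tmp");
    g_process_set_search_path (process, TRUE);
    if (!g_process_spawn (process, &error))
      g_warning ("failed to spawn: %s", error->message);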

If using constants, it is often useful to use the C enum type since the compiler can warn if a switch statement isn't handling all cases. Generally avoid boolean types in functions and use flag enumerations instead - this has two advantages: first, it's sometimes easier to read foo_do_stuff(foo, FOO_FLAGS_FROBNICATOR) than foo_do_stuff(foo, TRUE) since the reader does not have to expend mental energy on remembering whether TRUE means the frobnicator is to be used or not. Second, it means that several boolean arguments can be passed in one parameter, so hard-to-use functions like e.g. gtk_box_pack_start() can be avoided (most programmers can't remember if the expand or fill boolean comes first). Additionally, this technique allows adding new flags without breaking API.
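
In code, the flags approach looks like this (Foo and the flag values are made up):

    typedef enum
    {
      FOO_FLAGS_NONE        = 0,
      FOO_FLAGS_FROBNICATOR = (1 << 0),  /* use the frobnicator */
      FOO_FLAGS_VERBOSE     = (1 << 1)   /* print diagnostics */
    } FooFlags;

    void foo_do_stuff (Foo *foo, FooFlags flags);

    /* reads better than foo_do_stuff (foo, TRUE, FALSE) - and new
     * flags can be added later without breaking API or ABI */
    foo_do_stuff (foo, FOO_FLAGS_FROBNICATOR | FOO_FLAGS_VERBOSE);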

Often the compiler can help - for example, C functions can be annotated with all kinds of gcc-specific annotations that will cause warnings if the user is not using the function correctly. If using GLib, some of these annotations are available as macros prefixed with G_GNUC, the most important ones being G_GNUC_CONST, G_GNUC_PURE, G_GNUC_MALLOC, G_GNUC_DEPRECATED_FOR, G_GNUC_PRINTF and G_GNUC_NULL_TERMINATED.
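
For example (again with made-up foo functions):

    /* the compiler checks the format string against the arguments
     * (the format is argument 1, the varargs start at argument 2) */
    gchar *foo_sprintf (const gchar *format, ...) G_GNUC_PRINTF (1, 2);

    /* the compiler warns if the argument list isn't NULL-terminated */
    void foo_set_attributes (Foo *foo,
                             const gchar *first_attribute,
                             ...) G_GNUC_NULL_TERMINATED;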

Checklist

  • Choose good type and function names (favor expressiveness over length).
  • Keep the number of arguments to functions down (consider introducing helper types).
  • Use the type system / compiler to your advantage instead of fighting it (enums, flags, compiler annotations).

Documentation

If your library is very simple, the best documentation might just be a nicely formatted C header file with inline comments. Often it's not that simple and people using your library might expect richer and cross-referenced documentation complete with code samples.

Many C libraries, including those in GLib and GNOME itself, use inline documentation tags that can be read by tools such as gtk-doc or Doxygen. Note that gtk-doc works just fine even on low-level non-GLib-using libraries - see e.g. libudev and libblkid API documentation.

If used with a GLib library, gtk-doc uses the GLib type system to draw type hierarchies and show type-specific things like properties and signals. gtk-doc can also easily integrate with any tool producing Docbook documentation such as manual pages or e.g. gdbus-codegen(1) when used to generate docs describing D-Bus interfaces (example with C API docs, D-Bus docs and man pages).

Checklist

  • Decide what level of documentation is needed (HTML, pdf, man pages, etc.).
  • Try to use standard tools such as Doxygen or gtk-doc.
  • If shipping commands/daemons/helpers (e.g. anything showing up in ps(1) output), consider shipping man pages for those as well.

Language bindings

C libraries are increasingly used from higher-level languages such as Python or JavaScript through a so-called language binding - for example, this is what allows the Desktop Shell in GNOME 3 to be written entirely in JavaScript while still using C libraries such as GLib, Clutter and Mutter underneath.

It's outside the scope of this article to go into detail on language bindings (however a lot of the advice given in this series does apply - see also: Writing Bindable APIs), but it's worth pointing out that the GObject Introspection project (which is what is used in GNOME's Shell) is aiming for 100% coverage of GLib libraries, assuming the library is properly annotated. For example, the GUdev library (a thin wrapper on top of the libudev library) can be used from any language that supports GObject Introspection (JS example).

GObject Introspection is interesting because if someone adds GObject Introspection support to a new language X, then the GNOME platform (and a lot of the underlying Linux plumbing as well, cf. GUdev) is suddenly available from that language without any extra work.

Checklist

  • Make sure your API is easily bindable (avoid C-isms such as variadic functions).
  • If using GLib, set up GObject Introspection and ship GIR/typelibs (notes).
  • If writing a complicated application, consider writing parts of it in C and parts of it in a higher-level language.

ABI, API and versioning

While the API of a library describes how the programmer uses it, the ABI describes how the API is mapped onto the target machine the library is running on. Roughly, a (shared) library is said to be compatible with a previous version if a recompile is not needed. The ABI involves a lot of factors, including data alignment rules, calling conventions, file formats and other things that are not suitable to cover in this series; the important thing to know about when writing C libraries is how (and if) the ABI changes when the API changes. Specifically, since some changes (such as adding a new function) are backwards compatible, the interesting question is what kind of API changes result in non-backwards-compatible ABI changes.

Assuming all other factors like calling convention are constant, the rule of thumb about compatibility on the ABI level basically boils down to a very short list of allowed API changes:
  • you may add new C functions; and
  • you may add parameters to a function only if it doesn't cause a memory/resource leak; and
  • you may add a return value to a function returning void only if it doesn't cause a memory leak; and
  • modifiers such as const may be added / removed at will since they are not part of the ABI in C
The latter is an example of a change that breaks the API (causing compiler warnings when compiling existing programs that used to compile without warnings) but preserves the ABI (still allowing any previously compiled program to run) - see e.g. this GLib commit for a concrete example (note that this can't be done in C++ because of how name mangling works).

In general, you may not extend C structs that the user can allocate on the stack or embed in another C structure, which is why opaque data types are often used - they can be extended without the user knowing. In case the data type is not opaque, an often-used technique is to add padding to structs (example) and use it when adding a new virtual method or signal function pointer (example). Other types, such as enumeration types, may normally be extended with new constants but existing constants may not be changed unless explicitly allowed.
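
The padding technique looks like this (the Foo class struct is made up for illustration):

    struct _FooClass
    {
      GObjectClass parent_class;

      /* vfuncs / default signal handlers */
      void (*changed) (Foo *foo);

      /* padding: each vfunc added in a later release consumes one
       * slot, so the struct size - and hence the ABI - stays stable */
      gpointer padding[8];
    };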

The semantics of a function, e.g. its side effects, are usually considered part of the ABI. For example, if the purpose of a function is to print diagnostics on standard output and it stops doing so in a later version of the library, one could argue it's an ABI break even when existing programs are able to call the function and return to the caller just fine, possibly even getting the same return value.

On Linux, shared libraries (similar to DLLs on Windows) use the so-called soname to maintain and provide backwards-compatibility as well as to allow having multiple incompatible run-time versions installed at the same time. The latter is achieved by increasing the major version number of a library every time a backwards-incompatible change is made. Additionally, the other fields of the soname have other (complex) rules associated with them (more info).

One solution for managing non-backwards-compatible ABI changes without bumping the so-number is symbol versioning - however, apart from being hard to use, it only applies to functions and not to higher-level run-time constructs such as signals, properties and types registered with the GLib type system.

It is often desirable to have multiple incompatible versions of libraries and their associated development tools installed at the same time (and in the same prefix) - for example, both version 2 and 3 of GTK+. To easily achieve this, many libraries (including GLib and GTK+) include the major version number (which is what is bumped exactly when non-backwards-compatible changes are made) in the library name as well as in the names of tools and so on - see the Parallel Installation essay for more information.

Some libraries, especially when they are in their early stages of development, specifically give no ABI guarantees (and thus do not manage their soname when incompatible changes are made). Often, to better manage expectations, such unstable libraries require that the user define a macro acknowledging this (example). Once the library is baked, this requirement is removed and the normal ABI stability rules start applying (example).

Related to versioning, it's important to mention that in order for your library to be easy to use, it is absolutely crucial that it includes pkg-config files along with the header files and other development files (more information).
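
For reference, a minimal .pc file could look like this (names and paths are illustrative) - note how the major version is baked into the library name, tying in with the parallel installation advice above:

    # foo-1.pc - installed into $(libdir)/pkgconfig
    prefix=/usr
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: foo
    Description: Library for frobnicating things
    Version: 1.0.0
    Requires: glib-2.0 >= 2.28
    Libs: -L${libdir} -lfoo-1
    Cflags: -I${includedir}/foo-1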

Checklist

  • Decide what ABI guarantees to give if any (and when)
  • Make sure your users understand the ABI guarantees (being explicit is good)
  • If possible, make it possible to have multiple incompatible versions of your library and tools installed at the same time (e.g. include the major version number in the library name)

Friday, July 1, 2011

Writing a C library, part 4

This is part four in a series of blog-posts about best practices for writing C libraries. Previous installments: part one, part two, part three.

Helpers and daemons

Occasionally it's useful for a program or library to call upon an external process to do its bidding. There are many reasons for doing this - for example, the code you want to use
  • might not be easily used from C - it could be written in say, python or, gosh, bash; or
  • could mess with signal handlers or other global process state; or
  • is not thread-safe or leaking or just bloated; or
  • its error handling is incompatible with how your library does things; or
  • the code needs elevated privileges; or
  • you have a bad feeling about the library but it's not worthwhile (or (politically) feasible) to re-implement the functionality yourself.
There are three main ways of doing this.

The first one is to just call fork(2) and start using the new library in the child process - this usually doesn't work because chances are that you are already using libraries that cannot be reliably used after the fork() call, as discussed previously (additionally, a lot of unnecessary COW might happen if the parent process has a lot of writable pages mapped). If portability to Windows is a concern, this is also a non-starter as Windows does not have fork() or any meaningful equivalent that is as efficient.

The second way is to write a small helper program and distribute the helper along with your library. This also uses fork() but the difference is that one of the exec(3) functions is called immediately in the child process, so all previous process state is cleaned up when the process image is replaced (except for file descriptors, as they are inherited across exec(), so be wary of undesired leaks). If using GLib, there are a couple of useful (and portable) utility functions to do this (including support for automatically closing file descriptors).
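
For example, running a helper synchronously with one of those utility functions (the helper path is made up):

    #include <glib.h>

    static void
    run_helper (void)
    {
      GError *error = NULL;
      gint exit_status = 0;
      gchar *argv[] = { "/usr/libexec/foo-helper", "--frobnicate", NULL };

      /* fork() + exec() + waitpid() in one call; stdout/stderr are
       * not captured here */
      if (!g_spawn_sync (NULL,        /* inherit working directory */
                         argv,
                         NULL,        /* inherit environment */
                         0,           /* GSpawnFlags */
                         NULL, NULL,  /* no child-setup function */
                         NULL, NULL,  /* standard_output, standard_error */
                         &exit_status,
                         &error))
        {
          g_warning ("failed to run helper: %s", error->message);
          g_error_free (error);
        }
    }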

The third way is to have your process communicate with a long-lived helper process (a so-called daemon or background process). The helper daemon can be launched either by dbus-daemon(1) (if you are using D-Bus as the IPC mechanism), by systemd if you are using e.g. Unix domain sockets, by an init script (uuidd(8) used to do this - wasteful if your library is not going to get used) or by the library itself.

Helper daemons usually serve multiple instances of library users; however, it is sometimes desirable to have one helper daemon instance per library user instance. Note that having a library spawn a long-lived process by itself is usually a bad idea because the environment and other inherited process state might be wrong (or even insecure) - see Rethinking PID 1 for more details on why a good, known, minimal and secure working environment is desirable. Another thing that is horribly difficult to get right (or, rather, horribly easy to get wrong) is uniqueness - e.g. you want at most one instance of your helper daemon - see Colin's notes for details on how D-Bus can be used, and note that things like GApplication have built-in support for uniqueness. Also, in a system-level daemon, note that you might need to set things like the loginuid (example of how to do this) so things like auditing work when rendering service for a client (this is related to the Windows concept known as impersonation).

As an example, GLib's libproxy-based GProxy implementation uses a helper daemon because dealing with proxy servers involves interpreting JavaScript (!) and initializing a JS interpreter in every process wanting to make a connection is too much overhead, not to mention the pollution caused (source, D-Bus activation file - also note how the helper daemon is activated by simply creating a D-Bus proxy).

If the helper needs to run with elevated privileges, a framework like PolicyKit is convenient to use (for checking whether the process using your library is authorized) since it nicely integrates with the desktop shell (and also console/ssh logins). If your library is just using a short-lived helper program, it's even simpler: just use the pkexec(1) command to launch your helper (example, policy file).

As an aside (since this write-up is about C libraries, not software architecture), many subsystems in today's Linux desktop are implemented as system-level daemons (often running privileged) with the primary API being a D-Bus API (example), and with a C library to access the functionality either not existing at all (applications then use generic D-Bus libraries or tools like gdbus(1) or dbus-send(1)) or mostly generated from the IDL-like D-Bus XML definition files (example). It's useful to contrast this approach with libraries using helpers since one is more or less upside down compared to the other.

Checklist

  • Identify when a helper program or helper daemon is needed
  • If possible, use D-Bus (or similar) for activation / uniqueness of helper daemons.
  • Communicating with a helper via the D-Bus protocol (instead of using a custom binary protocol) adds a layer of safety because message contents are checked.
  • Using D-Bus through a message bus router (instead of peer-to-peer connections) adds yet another layer of safety since the two processes are connected through an intermediate router process (a dbus-daemon(1) instance) which will also validate messages and disconnects processes sending garbage.
  • Hence, if the helper is privileged (meaning that it must a) treat the unprivileged application/library using it as untrusted and potentially compromised; and b) validate all data passed to it - see Wheeler's Secure Programming notes for details), activating a helper daemon on the D-Bus system bus is often a better idea than using a setuid root helper program spawned yourself.
  • If possible, in particular if you are writing code that is used on the Linux desktop, use PolicyKit (or similar) in privileged code to check if unprivileged code is authorized to carry out the requested operation.

Testing

A sign of maturity is when a library or application comes with a test suite; a good test suite is also incredibly useful for ensuring mostly bug-free releases and, more importantly, ensuring that the maintainer is comfortable putting releases out without losing too much sleep or sanity. Discussing specifics of testing is out of scope for a series on writing C libraries, but it's worth pointing to the GLib test framework, how it's used (example, example and example) and how this is used by e.g. the GNOME buildbots.
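
A minimal test program using the GLib test framework looks like this (foo_add() is a stand-in for a function from the library under test):

    #include <glib.h>

    /* stand-in for a function from the library under test */
    static int
    foo_add (int a, int b)
    {
      return a + b;
    }

    static void
    test_foo_add (void)
    {
      g_assert_cmpint (foo_add (2, 2), ==, 4);
    }

    int
    main (int argc, char **argv)
    {
      g_test_init (&argc, &argv, NULL);
      g_test_add_func ("/foo/add", test_foo_add);
      return g_test_run ();
    }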

One metric for measuring how good a test suite is (or at least how extensive it is) is determining how much of the code it covers - for this, the gcov tool can be used - see notes on how this is used in D-Bus. Specifically, if the test suite does not cover some edge case, the code paths for handling said edge case will appear as never being executed. Or if the code base handles OOM but the test suite isn't set up to exercise it (for example, by failing each allocation), the code paths for handling OOM will appear as untested.

Innovative approaches to testing can often help - for example, Mozilla employs a technique known as reftests (see also: notes on GTK+ reftests) while the Dracut test suite employs VMs for both client and server to test that booting from iSCSI works.

Checklist

  • Start writing a test suite as early as possible.
  • Use tools like gcov to ascertain how good the test suite is.
  • Run the test suite often - ideally integrate it into the build system ('make check'), release procedures, version control etc.