
Wednesday, July 15, 2015

Using git-notes for marking test suite successes

The libinput test suite takes somewhere around 35 minutes now for a full run. That's annoying, especially as I'm running it for every commit before pushing. I've tried optimising things, but attempts at making it parallel have mostly failed so far (almost all tests need a uinput device created) and too many tests rely on specific timeouts to check for behaviours. Containers aren't an option when you have to create uinput devices, so I ended up farming the test runs out to VMs.

Ideally, the test suite should run against multiple commits (on multiple VMs) at the same time while I'm working on some other branch and then accumulate the results. And that's where git notes come in. They're a bit odd to use and quite the opposite of what I expected. But in short: a git note is an object that can be associated with a commit, without changing the commit itself. Sort-of like a post-it note attached to the commit. But there are plenty of limitations, for example you can only have one note (per namespace) and merge conflicts are quite easy to trigger. Look at any git notes tutorial to find out more, there's plenty out there.

Anyway, dealing with merge conflicts is a no-go for me here. So after a bit of playing around, I found something that seems to work out well. A script to run make check and add notes to the commit, combined with a repository setup to fetch those notes and display them automatically. The core of the script is this:

make check
rc=$?
if [ $rc -eq 0 ]; then
    status="SUCCESS"
else
    status="FAIL"
fi

sha=$(git rev-parse HEAD)   # the commit being tested
if [ -n "$sha" ]; then
    git notes --ref "test-$HOSTNAME" append \
        -m "$status: $HOSTNAME: make check `date`" HEAD
fi
exit $rc
Then in my main repository, I add each VM as a remote, adding a fetch path for the notes:
[remote "f22-libinput1"]
        url = f22-libinput1.local:/home/whot/code/libinput
        fetch = +refs/heads/*:refs/remotes/f22-libinput1/*
        fetch = +refs/notes/*:refs/notes/f22-libinput1/*
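The same remote setup can be done from the command line instead of editing .git/config by hand; the equivalent commands should be:
git remote add f22-libinput1 f22-libinput1.local:/home/whot/code/libinput
git config --add remote.f22-libinput1.fetch "+refs/notes/*:refs/notes/f22-libinput1/*"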
Finally, in the main repository, I extended the glob that displays notes to 'everything':
$ git config notes.displayRef "*" 
Now git log (and by extension tig) displays all notes attached to a commit automatically. All that's needed is a git fetch --all to fetch everything, and it's clear in the logs which commit failed and which one succeeded.
:: whot@jelly:~/code/libinput (master)> git log
commit 6896bfd3f5c3791e249a0573d089b7a897c0dd9f
Author: Peter Hutterer 
Date:   Tue Jul 14 14:19:25 2015 +1000

    test: check for fcntl() return value
    
    Mostly to silence coverity complaints.
    
    Signed-off-by: Peter Hutterer 

Notes (f22-jelly/test-f22-jelly):
    SUCCESS: f22-jelly: make check Tue Jul 14 00:20:14 EDT 2015

Whenever I look at the log now, I immediately see which commits passed the test suite and which ones didn't (or haven't had it run yet). The only annoyance is that since a note is attached to a commit, amending the commit message or rebasing makes the note "go away". I've copied notes manually after this, but it'd be nice to find a solution to that.
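In the meantime the manual copy is only a few commands. A rough sketch, using the notes ref from the log output above (grab the old sha before amending):
old=$(git rev-parse HEAD)
git commit --amend
git notes --ref f22-jelly/test-f22-jelly copy "$old" HEAD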

Everything else has been working great so far, but it's quite new so there'll be a bit of polishing happening over the next few weeks. Any suggestions to improve this are welcome.

Monday, May 19, 2014

Introducing tellme, a text-to-speech notifier

I've been hacking on a little tool over the last couple of days and I think it's ready for others to look at it and provide suggestions to improve it. Or possibly even tell me that it already exists, in which case I'll save a lot of time. "tellme" is a simple tool that uses text-to-speech to let me know when a command finished. This is useful for commands that run for a couple of minutes - you can go off and read something and the computer tells you when it's done instead of you polling every couple of seconds to check. A simple example:

tellme sudo yum update
runs yum update, and eventually says in a beautiful totally-not-computer-sounding voice "finished yum update successfully".

That first incarnation was a shell script; I've since started putting a few more features in (it's now in Python) and it supports per-command configuration and a couple of other semi-smart things. For example:

whot@yabbi:~/xorg/xserver/Xi> tellme make
eventually says "finished xserver make successfully". With the default make configuration, it runs up the tree to search for a .git directory and then uses that as basename for the voice output. Which is useful when you rebuild all drivers simultaneously and the box tells you which ones finished and whether there was an error.

I put it up on github: https://github.com/whot/tellme. It's still quite rough, but workable. Have a play with it and feel free to send me suggestions.

Monday, March 10, 2014

Using git - the next level

There are a million tutorials out there on how to learn git. This isn't one of them. I'm going to assume that you learned git a while ago, you've been using it a bit and you're generally familiar with its principles. I'm going to show you a couple of things that improved my workflow. Chances are, they will improve yours too. This isn't a tutorial though. I'm just pointing you in the direction of things; you'll have to learn how to use them yourself.

Use tig

Seriously. Don't tell me you use gitk or git log is good enough for you. Use tig. tig is to git log what mutt is to mail(1). It has been the source of the biggest efficiency increase for me. Screenshots don't do it justice because the selling point is that it is interactive. But anyway, here are some official screenshots: tig blame shows you the file and the commits, you just need to select the line, hit enter and you see the actual commit. The main view by default shows you tags, branch names, remote branch names, etc. So not only do you immediately know which branch you're on, you will see local branches that have been merged, tags that have been applied, etc. It gives you an awareness that git log doesn't. Do yourself a favour, install it, use it for a day or two and I'm pretty sure you won't go back.

tig also supports custom configurations. Here is my $HOME/.tigrc:

bind generic X !git cherry-pick -x %(commit)
bind generic C !git cherry-pick %(commit)
bind generic R !git revert %(commit)
bind generic E !git format-patch -1 %(commit)
bind generic 0 !git checkout %(commit)
bind generic 9 !git checkout %(commit)~
bind generic A !git commit --amend -s
bind generic S !git show %(commit)
So with a couple of key strokes I can cherry-pick, export patches, revert, check out a specific commit, etc. Cherry-picking especially is extremely efficient: check out the target branch, run "tig master", then simply select each commit, hit "C" or "X" and done.

Use branches

Anytime it takes you more than 5 minutes to fix an issue, create a new branch. I'm getting torn between multiple things all the time. I may spend a day or two on one bug, then it's back to another, unrelated issue. With the review requirements on some projects I may have multiple patches waiting for feedback, but I can't push them yet. Hence - a branch for each feature/bugfix. master is reserved for patches that can be pushed immediately.

This approach becomes particularly useful for fixes that may need some extra refactoring. You start on a feature-based branch, but halfway through realise you need a few extra patches to refactor things. Those are easy to review so you send them out to gather reviews, then cherry-pick them to master and push. Back to your feature branch, rebase and you're done - you've managed two separate streams of fixes without interference. And most importantly, you got rid of a few patches that you'd otherwise have to carry in your feature branch.

Of course, it takes a while to get used to this and it takes discipline. It took me a few tries before I really managed to always work like this but the general rule for me is now: if I'm hacking on the master branch, something is off. Remember: there's no real limit to how many branches you can create - just make sure you clean them up when you're done to keep things easy for your brain.

Use the branch names to help you. You can rename branches (git branch -m), so I tend to name anything that's a bigger rewrite "wip/somefeature" whereas normal bug fixes go on branches with normal names. And because I rebase local feature branches it doesn't matter what I name them anyway; the branches are deleted once I merge them. Branches where I do care about the branch history (i.e. those I pull into master with a merge commit) I rename before pulling to get rid of the "wip" prefix.
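For example, to drop the prefix before merging (branch name made up):
git branch -m wip/tablet-support tablet-support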

Use branch descriptions

Hands up if you have a "devel" branch from 4 months ago. Hands up if you still remember what the purpose of that branch was. Right, I didn't think so. git branch --edit-description fires up an editor and lets you add a description for the branch. Sometimes a single sentence is enough to refresh your memory. Most importantly: when you task-switch to a different feature, edit the description to note where you left off, what the plan was, etc. This reduces the time to get back to work. git config branch.<branchname>.description shows you the description for the matching branch.

I even have a git hook to nag me when I check out a branch without a description. Note that branch descriptions are local only, they are not pushed to the remote.
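Such a nagging hook can be quite short. A sketch of the idea (not the actual hook from git-branch-tools):
#!/bin/bash
# .git/hooks/post-checkout - nag about missing branch descriptions
# the third argument is 1 for a branch checkout, 0 for a file checkout
[ "$3" = "1" ] || exit 0
branch=$(git rev-parse --abbrev-ref HEAD)
if [ -z "$(git config "branch.$branch.description")" ]; then
    echo "Branch '$branch' has no description yet, run git branch --edit-description"
fi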

Amend and rebase until the cows come home

The general rule: what is committed, doesn't get lost. At least not easily, it is still in the git reflog. So commit when you think you're done. Then review, test, add, and git commit --amend. That typo you made in line 4 - edit and amend. I have shell aliases for amend, rbs (git rebase -i) and rbc (git rebase --continue), and almost every commit goes through at least 3 amends (usually one because I missed something, one for that typo, one for commit log message editing). Importantly: it doesn't matter how often you amend. Really. This is local only, no-one cares. The important thing is that you get to a good patch set, not that you get there with one commit.
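The aliases themselves are nothing fancy, roughly:
alias amend='git commit --amend'
alias rbs='git rebase -i'
alias rbc='git rebase --continue'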

git commit --amend only modifies the last commit; to go back and edit the past, you need to rebase. So, you need to

Learn how to rebase

Not just the normal git rebase, the tutorials cover that. Make sure you know how to use git rebase --interactive. Make sure you know how to change the ordering of commits, how to delete commits, how to abort a rebase. Make sure you know how to squash two commits together and what the difference is between squash and fixup. I'm not going to write a tutorial on that, because the documentation is easy enough to find. Simply take this as a hint that the time you spend learning how to rebase pays off. Also, you may find git squash interesting.
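As a quick reminder of that difference: in the interactive rebase todo list, squash melds a commit into the one above and lets you edit the combined commit message, fixup does the same but throws the fixup commit's message away. A made-up todo list could look like this:
pick 1fc6c95 input: add tablet support
squash f7f3f6d input: fix typos in the tablet docs
pick 310154e dix: refactor event delivery
fixup a5f4a0d dix: silence a compiler warning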

And remember: even if a rebase goes bad, the previous state is still in the reflog. Which brings me to:

Learn how to use the reflog

The git reflog is the list of changes in reverse chronological order of how they were applied to the repository, regardless of which branch you're on. So HEAD@{0} is always "whatever we have now", HEAD@{1} is always "the repository before the last command". This doesn't just mean commits, it remembers any change. So if you switch from branch A to branch B, commit something, then switch to branch C, HEAD@{3} is A. git reflog helpfully annotates everything with the type, so you know what actually happened. So for example, if you accidentally dropped a patch during a rebase, you can look at the reflog and figure out when the rebase started. Then you either reset to that commit, or you just tig it and cherry-pick the missing commits back onto the current branch. Set yourself up with a test git repository and learn how to do exactly that now; it'll save you some time in the future.
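The recovery itself is only a few commands. A sketch, with the placeholders obviously made up:
git reflog                          # find the entry just before the rebase started
git log --oneline HEAD@{5}          # inspect that state, assuming HEAD@{5} is the one
git cherry-pick <sha-of-dropped-commit>   # re-apply the dropped commit onto the current branch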

Note that the reflog is local only. And remember, if it hasn't been committed, it's not in the reflog.

Use a git push hook

Repeat after me: echo make > .git/hooks/pre-push. And no more embarrassment for pushing patches that don't compile. I've made that mistake too many times, so now I even use my own git patch-set command that will run a hook for me when I'm generating a patch set to send to a list. You might want to make the hooks executable btw.
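Spelled out, including the executable bit:
echo make > .git/hooks/pre-push
chmod +x .git/hooks/pre-push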

Monday, February 10, 2014

Making sense of backtraces with addr2line

When the X server crashes it prints a backtrace to the log file. This backtrace looks something like this:

(EE) Backtrace:
(EE) 0: /usr/bin/Xorg (OsLookupColor+0x129) [0x473759]
(EE) 1: /lib64/libpthread.so.0 (__restore_rt+0x0) [0x3cd140f74f]
(EE) 2: /lib64/libc.so.6 (__select_nocancel+0xa) [0x3cd08ec78a]
(EE) 3: /usr/bin/Xorg (WaitForSomething+0x1ac) [0x46a8fc]
(EE) 4: /usr/bin/Xorg (SendErrorToClient+0x111) [0x43a091]
(EE) 5: /usr/bin/Xorg (_init+0x3b0a) [0x42c00a]
(EE) 6: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 7: /usr/bin/Xorg (_start+0x29) [0x428c35]
(EE) 8: ? (?+0x29) [0x29]
This is a forced backtrace from the current F20 X Server package, generated by killall -11 Xorg. There is not a lot of human-readable information but you can see the call stack, and you can even recognise some internal functions. Now, in Fedora we compile with libunwind which gives us relatively good backtraces. Without libunwind, your backtrace may look like this:
(EE) 
(EE) Backtrace:
(EE) 0: /opt/xorg/bin/Xorg (xorg_backtrace+0xb5) [0x484989]
(EE) 1: /opt/xorg/bin/Xorg (0x400000+0x8d1a4) [0x48d1a4]
(EE) 2: /lib64/libpthread.so.0 (0x3cd1400000+0xf750) [0x3cd140f750]
(EE) 3: /lib64/libc.so.6 (__select+0x33) [0x3cd08ec7b3]
(EE) 4: /opt/xorg/bin/Xorg (WaitForSomething+0x3dd) [0x491a45]
(EE) 5: /opt/xorg/bin/Xorg (0x400000+0x3561b) [0x43561b]
(EE) 6: /opt/xorg/bin/Xorg (0x400000+0x43761) [0x443761]
(EE) 7: /opt/xorg/bin/Xorg (0x400000+0x9baa8) [0x49baa8]
(EE) 8: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 9: /opt/xorg/bin/Xorg (0x400000+0x25df9) [0x425df9]
So, even less information and it certainly makes it hard to figure out where to even get started. Luckily there is a tool to get some useful info out of that: eu-addr2line. All you need is to install the debuginfo package for the crashing program. Then it's just a matter of copying addresses.
$ eu-addr2line -e /opt/xorg/bin/Xorg 0x48d1a4
/home/whot/xorg/xserver/os/osinit.c:132
Alright, this is useful now, I can download the source package and check where it actually goes wrong. But wait - it gets even better. Let's say you have a driver module in the callstack:
(EE) Backtrace:
(EE) 0: /opt/xorg/bin/Xorg (xorg_backtrace+0xb5) [0x484989]
(EE) 1: /opt/xorg/bin/Xorg (0x400000+0x8d1a4) [0x48d1a4]
(EE) 2: /lib64/libpthread.so.0 (0x3cd1400000+0xf750) [0x3cd140f750]
(EE) 3: /opt/xorg/lib/libinput.so.0 (libinput_dispatch+0x19) [0x7ffff1e51593]
(EE) 4: /opt/xorg/lib/xorg/modules/input/libinput_drv.so (0x7ffff205b000+0x2a12) [0x7ffff205da12]
(EE) 5: /opt/xorg/bin/Xorg (xf86Wakeup+0x1b1) [0x4af069]
(EE) 6: /opt/xorg/bin/Xorg (WakeupHandler+0x83) [0x444483]
(EE) 7: /opt/xorg/bin/Xorg (WaitForSomething+0x3fe) [0x491a66]
(EE) 8: /opt/xorg/bin/Xorg (0x400000+0x3561b) [0x43561b]
(EE) 9: /opt/xorg/bin/Xorg (0x400000+0x43761) [0x443761]
(EE) 10: /opt/xorg/bin/Xorg (0x400000+0x9baa8) [0x49baa8]
(EE) 11: /lib64/libc.so.6 (__libc_start_main+0xf5) [0x3cd0821d65]
(EE) 12: /opt/xorg/bin/Xorg (0x400000+0x25df9) [0x425df9]
You can see that we have an xf86libinput driver (libinput_drv.so) which in turn loads libinput.so. You can debug the crash the same way now, just change the addr2line argument:
$ eu-addr2line -e /opt/xorg/lib/libinput.so.0 libinput_dispatch+0x19 
/home/whot/code/libinput/src/libinput.c:603
Having this information of course doesn't mean you can fix any bug. But when you're reporting a bug it can be invaluable. If I have access to the same rpms that you're running it's possible to look up the context of the crash in the source. Or, even better, since you already have access to those you can make debugging a lot easier by attaching the required bits and pieces to a bug report. A bug report where the reporter has already narrowed down where it crashes is a lot easier to deal with than guessing from hex numbers what went wrong.
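The address extraction is easy enough to script if you have many frames to resolve. A rough sketch that assumes the log format shown above and only resolves the frames that come from the Xorg binary itself (frames from other libraries need their own -e argument):
#!/bin/bash
# resolve Xorg frames from a backtrace fed in on stdin, e.g.
#   ./resolve-backtrace.sh < /var/log/Xorg.0.log
grep /opt/xorg/bin/Xorg | grep -o '\[0x[0-9a-f]*\]' | tr -d '[]' | \
    while read addr; do
        eu-addr2line -e /opt/xorg/bin/Xorg "$addr"
    done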

Friday, September 13, 2013

git-branch-tools: creating patch sets

git-branch-tools is my little repo for git scripts to make a few things easier. I first talked about it here. The repository is available on https://github.com/whot/git-branch-tools, the latest addition is git patch-set. I used to create git patch sets with just git format-patch, but too often I found some minor change on the last review and had to re-generate it. So I ended up with multiple patch files in the directory, or worse, a combination of old and new ones in danger of being sent by git send-email later. git patch-set fixes this for me:
$> git patch-set HEAD~2
patches/patches-201309130933-HEAD~2/0001-test-provide-wrapper-for-fetching-the-devnode-from-a.patch
patches/patches-201309130933-HEAD~2/0002-wrap-EVIOCSCLOCKID-into-an-API-call.patch
So my patches are in the $GIT_DIR/patches/ directory, named after the current date + time and the refs used for the list. This makes them identifiable and sortable (to some degree anyway). And, to make things easier, $GIT_DIR/patches/latest is a symlink to the latest patch set, so usually the workflow is
$> git patch-set HEAD~2
patches/patches-201309130933-HEAD~2/0001-test-provide-wrapper-for-fetching-the-devnode-from-a.patch
patches/patches-201309130933-HEAD~2/0002-wrap-EVIOCSCLOCKID-into-an-API-call.patch
$> git send-email patches/latest/*.patch
That's not all though. I've added two hooks, pre-patch-set and post-patch-set to be run before/after the actual patch generation.
$> cat .git/hooks/pre-patch-set
#!/bin/bash -e
echo "running make check"
make check
$> git patch-set HEAD~2
running make check
Making check in doc
doxygen libevdev.doxygen
Making check in libevdev
make  check-am
make[2]: Nothing to be done for `check-am'.
Making check in tools
make[1]: Nothing to be done for `check'.
Making check in test
make  check-TESTS check-local
PASS: test-libevdev
make[4]: Nothing to be done for `all'.
============================================================================
Testsuite summary for libevdev 0.3
============================================================================
# TOTAL: 1
# PASS:  1
# SKIP:  0
# XFAIL: 0
# FAIL:  0
# XPASS: 0
# ERROR: 0
============================================================================
  GEN      gcov-report.txt
========== coverage report ========
libevdev-uinput.c: total lines: 172 not tested: 28 (83%)
libevdev.c: total lines: 689 not tested: 78 (88%)
========== =============== ========
patches/patches-201309130933-HEAD~2/0001-test-provide-wrapper-for-fetching-the-devnode-from-a.patch
patches/patches-201309130933-HEAD~2/0002-wrap-EVIOCSCLOCKID-into-an-API-call.patch
I've been using that script for quite a while now and it did make sending patch sets a bit easier. Plus, now I'm not in danger of sending out patch sets that don't pass make check :)

Monday, March 4, 2013

git branch-tools: some helpers for managing git branches

I'm using a lot of branches. Almost one per feature or bug, and they add up quickly. Why I'm doing this doesn't matter for this post, but I found it to be a good workflow. The problem with that is of course that after a while I forget which branch was for what, or what branch I worked on three weeks ago. So I started hacking up some git helpers.

I pushed them to https://github.com/whot/git-branch-tools today, feel free to use them or improve on them.

Archiving branches

Some branches are not actively developed anymore but should still be preserved for posterity. These branches are clogging up the branch view.

git archive-branch mybranch
moves mybranch to archive/2013/mybranch and tags the current top commit with a message about the branch history. An example git branch output would look like this now:
  ...
  archive/2013/touch-test-libtool-linker-issues
  archive/2013/two-screen-coordinates
  archive/2013/wrong-signal-logging-merge
  archive/2013/xi2-protocol-tests
  archive/2013/xi21-confine-to
  archive/2013/xorg-conf-init-cleanup
  attic
  bugfix/xts-segfault
  devel
  fedora-17-branch
  fedora-rawhide-branch
  for-keith
  high-keycodes
  master
  memleak
* next
  ...

Showing recent branches

Working on many branches can mean you forget which branch you worked on last week, or the week before.

git recent-branches
lists the various branches checked out over the history, including when each was last checked out and the date of the last commit on that branch. Example:
next                                 4 hours ago    last commit 6 days ago
server-1.13-branch                   4 hours ago    last commit 2 weeks ago
touch-grab-race-condition-56578-v2   3 days ago     last commit 3 days ago
touch-grab-race-condition-56578      4 days ago     last commit 6 days ago      †
bug/xts-segfaults                    6 days ago     last commit 6 days ago      †
master                               6 days ago     last commit 3 weeks ago
for-keith                            10 days ago    last commit 2 weeks ago
memleak                              13 days ago    last commit 2 weeks ago
The output above shows the branch name, last time that branch was checked out, last commit time and a marker that shows up if this branch doesn't exist anymore. There are a few more flags you can pass in too, including git log flags, so play around with it.

Better branch descriptions

Can't remember what branch "fix-race-condition" was? Me neither. That's what

git branch-description [branchname] [upstream]
will tell you. If upstream is given, it'll also show you what has been merged into upstream already (by patch, not by commit). Example again:
:: whot@yabbi:~/xorg/xserver (next)> git-branch-description touch-grab-race-condition-56578-v2
Branch       touch-grab-race-condition-56578-v2
Branched:    Thu Feb 14 11:05:48 2013 -0800
Last commit: Fri Mar 1 16:37:49 2013 +1000

Fixes for https://bugs.freedesktop.org/show_bug.cgi?id=56578, second attempt

============================ Unmerged commits =============================
Commits on touch-grab-race-condition-56578-v2 not in next:
68b937046f278d53de14b586dbf7fd5aa7367f59 Xi: return !Success from DeliverTouchEmulatedEvent if we didn't deliver
f8baab8ac32e5abb31bcd1bb4f74e82d40208221 Xi: use a temp variable for the new listener
9cbb956765c7b4f1572ab2100f46504bf6313330 dix: don't set non-exisiting flags on touch events
2a5b3f2f2293f4a428142fffdb1b6e8ffbbb5db0 dix: fix a comment
76e8756545951d7f13ca84a4bd24fe5f367c5de2 Xi: compress two if statements with the same body
61b06226a43839ed75126f9c54d47bc440285e21 dix: update coords for touch events in PlayReleasedEvents
bd1a5423bbb02a349991a52f4997e830a0dc1992 Xi: add a comment to make a condition a bit clearer
78b26498085a7589e1f4d9ac3c21b69dc3227f87 Xi: not having an ownership mask does not mean automatic acceptance
c7271c7e05cdbeb35a3558223f9c2d6544504c4c dix: don't prepend an activated passive grab to the listeners
71ee72c97e459ef76984e6da64e5dab0ce6e4465 Xi: if we delivered a TouchEnd to a passive grab, end it
9b6966187fd0e6fb7ad3c2c1073456d96e3adab0 Xi: if a pointer grabbing listener gets the touch end, the touch is over
5afef18196ce70faec3e94379c3e6d3767660c4a FIXME: Xi: fix lookup in ActivateEarlyAccept
3784283be1f482a0f039f2eb790c0c8c2cc4bedb Xi: update the core listener state if we delivered the event
3570ef1244c87aef92db97df6e2b921529ffb75a Xi: if a passive async grab is activated from an emulated touch, accept
9bef901d8e28d48f43da3167219b02ad1dba27d8 Xi: save state for early acceptance
7d51022becd5af124896817030a10eedf7f1783a Xi: when punting to a new owner, always create TouchEnd events
4775cdb0d9a2513edcf27a9c4c1916e8213c397b Xi: use public.processInputProc to replay the touch history
431b128b9138af7a208b63d4eb5b917d94c08129 Xi: Don't emit a TouchEnd event to a frozen device
4126d64f6a40d5568b2d1412d519325c02786c9a dix: AllowSome is equivalent to TouchAccept
33421e91a52be91d7121c7c2146ff7bb53bea638 dix: move EmitTouchEnd to touch.c
54f8884aef275b15f2c42e3350e2b4968124af01 dix: XAllowEvents() on a touch event means accepting it

Commits on touch-grab-race-condition-56578-v2 already merged to next:

================================= Activity =================================
e7b4b83 HEAD@{5 hours ago}: checkout: moving from touch-grab-race-condition-56578-v2 to server-1.13-branch
9cbb956 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 9cbb956
d58ddeb HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to d58ddeb
7c3968b HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 7c3968b
a354dd8 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to a354dd8
fdf4869 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to fdf4869
82be6b2 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 82be6b2
82be6b2 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 82be6b2
68b9370 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 68b9370
151eff1 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 151eff1
57fa0b9 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 57fa0b9
b43e866 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to b43e866
ef6a120 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to ef6a120
9064294 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 90642948cc78834d95f7a3bddaac7ff77b68ed7e
9064294 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 90642948cc78834d95f7a3bddaac7ff77b68ed7e
6513e0e HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 6513e0e
0d60ba6 HEAD@{3 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 0d60ba6
dd23302 HEAD@{4 days ago}: checkout: moving from f21354da571dcd39ae1423388298d5c61d3e736d to touch-grab-race-condition-56578-v2
f21354d HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to f21354da571dcd39ae1423388298d5c61d3e736d
0d60ba6 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 0d60ba6
ae2cac9 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to ae2cac99a75917d6c4d34b8aa4aeaec0b5d32da7
c2b3d37 HEAD@{4 days ago}: checkout: moving from d75925b9fb8b24c8134b5082294e82abf83294af to touch-grab-race-condition-56578-v2
0d60ba6 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 0d60ba683e7e95049c01ac5dba48a2f5fd80d9b9
c2b3d37 HEAD@{4 days ago}: checkout: moving from 0d60ba683e7e95049c01ac5dba48a2f5fd80d9b9 to touch-grab-race-condition-56578-v2
0d60ba6 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 0d60ba683e7e95049c01ac5dba48a2f5fd80d9b9
0d60ba6 HEAD@{4 days ago}: checkout: moving from 90642948cc78834d95f7a3bddaac7ff77b68ed7e to touch-grab-race-condition-56578-v2
9064294 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 90642948cc78834d95f7a3bddaac7ff77b68ed7e
8e5adb4 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578-v2 to 8e5adb4ef8bafa2a3188e69409e2908f80288311
033b932 HEAD@{4 days ago}: checkout: moving from touch-grab-race-condition-56578 to touch-grab-race-condition-56578-v2

And install the git-post-checkout-nagging-hook as your .git/hooks/post-checkout to make sure you get reminded to set the branch description.

Monday, August 20, 2012

screen configuration for automatic session names

screen is an immensely useful program, but once you start running multiple sessions at the same time, it gets tricky to find the right one again.  For example, which one was offlineimap again?


 :: whot@salty:~> screen -ls
There are screens on:
    5238.pts-3.salty    (Detached)
    5229.pts-3.salty    (Detached)

The -S argument allows you to specify a session name, but I couldn't find a configuration option to set this automatically. So I've added this little script:


$> cat $HOME/scripts/screen
#!/bin/bash
/usr/bin/screen $([[ ($1 != -*) && ($# > 0) ]] && echo "-S $1") "$@"


Explanation: if called with 1 or more parameters and the first parameter does not start with a dash, replace a call in the form of screen command ... with screen -S command command ...

This results in screen sessions having useful names, allowing for more effective reattaching. And the obligatory, ahem, screenshots: 
:: whot@yabbi:~> screen foo
[detached from 17755.foo]
:: whot@yabbi:~> screen bar
[detached from 17766.bar]
:: whot@yabbi:~> screen -ls
There are screens on:
    17766.bar    (Detached)
    17755.foo    (Detached)
2 Sockets in /var/run/screen/S-whot.

Wednesday, July 11, 2012

Daily tmp directory

My $HOME/tmp directory got a bit messy, especially with building test rpms or testing tarballs. My solution was to make the directory change automatically every day. Scripts below:
:: whot@yabbi:~> cat scripts/tmp-today 
#!/bin/bash
date=`date +%Y-%m-%d-%a`

tmpdir=$HOME/tmp/$date
tmplink=$HOME/tmp/today

if [ -e "$tmpdir" ]; then
    exit 0
fi

mkdir $tmpdir
ln -sfn $tmpdir $tmplink   # -n replaces the "today" symlink itself instead of creating a link inside it
And the crontab entries to run this script:
0 2 * * * /home/whot/scripts/tmp-today
@reboot /home/whot/scripts/tmp-today
So run the thing at 2.00 am and on every reboot in case the box was shut off overnight. I had it on midnight first, but I think 2 am is better. If I'm still up and working at 2, then mentally I'm still on the day before and I don't want files to end up in different directories just because midnight clocked over. And because the laptop may be suspended overnight, we run the script on resume as well:
:: whot@yabbi:~> cat /etc/pm/sleep.d/00-tmp-dir 
#!/bin/bash

case "$1" in
 thaw|resume)
  su - whot -c /home/whot/scripts/tmp-today
  ;;
 *)
  ;;
esac
This obviously works for other directories as well, e.g. your daily download directory.

Tuesday, June 5, 2012

git branch-info

I have too many branches. And sometimes I forget which one is which. So yesterday I sat down and wrote a git alias to tell me:
branch-info = "!sh -c 'git branch --list --no-color | \
    sed -e \"s/*/ /\" | \
    while read branch; do \
    git log -1 --format=format:\"%Cred$branch:%Cblue %s %Cgreen%h%Creset (%ar)\" $branch; \
    done'"
Gives me a nice output like this:
:: whot@yabbi:~/xorg/xserver (for-keith)> git branch-info
dtrace-input-abi:  dix: add dtrace probes to input API  c0b0a9b (3 months ago)
extmod-changes:  DRI2: Remove prototype for DRI2DestroyDrawable  8ba2980 (11 months ago)
fedora-17-branch:  os: make timers signal-safe 7089841 (6 weeks ago)
master:  Xext: include dix-config.h 594b4a4(12 days ago)
mt-devel: Switch to new listener handling for touch events 3ee84fc (6 months ago)
mt-devel2: dix: conditionally update the cursor sprite 9edb3fd(6 months ago)
multitouch: blah 8ecec2e(4 months ago)
...
Or, the same thing aligned in columns, though without colours (column doesn't handle the colour escape sequences, so with colours the columns end up misaligned).
branch-info = "!sh -c 'git branch --list --no-color | \
    sed -e \"s/*/ /\" | \
    while read branch; do \
    git log -1 --format=format:\"$branch:|%s|%h|%ar\n\" $branch; done | \
    column -t -s\"|\"'"

Thursday, May 10, 2012

Testing X servers from git

Every so-often I get asked the question of how to test the X server (or drivers) from git. The setup I have is quite simple: I have a full tree in /opt/xorg, next to the system-installed binaries in /usr. A symlink and some environment variables allow me to switch between git versions of the server and clients and the system-installed ones.

Installing the tree

Getting that setup is quite easy these days:
mkdir -p xorg/util
git clone git://anongit.freedesktop.org/git/xorg/util/modular xorg/util/modular
cd xorg
./util/modular/build.sh --clone --autoresume built.modules /opt/xorg
That takes a while but if any component fails to build (usually due to missing dependencies) just re-run the last command. The built.modules file contains all successfully built modules and the script will simply continue from the last component. Despite the name, build.sh will also install each component into the specified prefix.

You get everything here, including a shiny new copy of xeyes. Yes, just what you always wanted, I know.

Note that build.sh is just a shell script, so you can make changes to it. Disable the parts you don't want (fonts, for example) by commenting them out. Or alternatively, generate a list of all modules, remove the ones you don't want or need and build with that set only:

./util/modular/build.sh -L > module_list
vim module_list # you can skip fonts, apps (except xkbcomp) and exotic drivers
./util/modular/build.sh --clone --autoresume built.modules --modfile module_list /opt/xorg

Either way, you end up with /opt/xorg/bin/Xorg, the X server binary. I just move the system-installed one aside and then symlink the new one.

sudo mv /usr/bin/Xorg /usr/bin/Xorg_old
sudo ln -s /opt/xorg/bin/Xorg /usr/bin/Xorg
Next time when gdm starts the server, it'll start the one from git. You can now update modules from git one-by-one as you need to and just run make install in all of them. Alternatively, running the build.sh script again without the --clone parameter will simply git pull in each module.

Setting up the environment

What I then define is a few environment variables. In my .zshrc I have
alias mpx=". $HOME/.exportrc.xorg"
and that file contains
export PKG_CONFIG_PATH=/opt/xorg/lib/pkgconfig:/opt/xorg/share/pkgconfig
export LD_LIBRARY_PATH=/opt/xorg/lib/
export PATH=/opt/xorg/bin:$PATH
export ACLOCAL="aclocal -I /opt/xorg/share/aclocal"
export MANPATH=/opt/xorg/share/man/
So after running "mpx", I start the git versions of the clients, link against the git versions of the libraries, and build against the git versions of the protocols.

Why this setup?

The biggest advantage of this setup is simple: the system install doesn't get touched at all, and if the git X server breaks, changing the symlink back to /usr/bin/Xorg_old gives me a working X again. And it's equally quick to test Fedora rpms, just flick the symlink back and restart the server. I have similar trees for gnome, wayland, and a few other large projects.

It also makes it simple to test if a specific bug is a distribution bug or an upstream bug. Install the matching X server branch instead of master and with a bit of symlink flicking you can check if the bug reproduces in both. For example, only a few weeks ago I noticed that xinput got BadAtom errors when run from /usr/bin but not when run from /opt/xorg/bin. Turns out it was an issue fixed in upstream libXi but not yet backported to Fedora.

The drawback of this setup is that whenever the xorg-x11-server-Xorg module is updated, I need to move and symlink again. That could be automated with a script but so far I've just been too lazy to do it.

[Update 11.05.12: typo and minor fixes, explain build.sh -L]

Wednesday, May 9, 2012

vimdir for project-specific vim settings

Sometimes it feels that each project I work on has different indentation settings. Not quite true but still annoying. I don't know of a way to tell vim to auto-detect the indentation settings based on the current file (which, for X.Org projects wouldn't work anyway) but what has been incredibly useful is the vimdir script. It simply scans the directory tree upwards to find a .vimdir file and loads the settings from there. So I keep files like this around:
setlocal noexpandtab shiftwidth=8 tabstop=8
The alternative is to add a snippet to the file itself but not every maintainer is happy with that.
/* vim: set noexpandtab tabstop=8 shiftwidth=8: */

Friday, May 4, 2012

Copy/paste of code is not ok

Much too often, I see patches that add code copied from other sections of the same repository. The usual excuse is that, well, we know that block of code works, it's easy to copy and we immediately get the result we need.

This is rather short-sighted. Whenever code is copied, the two instances will go and live separate lives. Code is never static, over time that copy becomes a partial reimplementation of the original.

There are a few conditions when copy-paste is acceptable:

  • You can guarantee that the original code does not have any bugs and thus the copy does not have any bugs, now or in the future.

  • You can guarantee that anyone making changes to this code in the future is aware of the copy and the original and their respective contexts.

  • You can guarantee that the context of the original and the copy never changes in a different manner.

  • You are happy to reimburse testers and developers for the time wasted tracking down bugs caused by ignoring any of the three above.
If the above are true, copying code is ok. And you probably get some prize for having found an impossible piece of code. In all other cases, write a helper function and share the code. If the helper function is too unwieldy, maybe it's time to think about the design and refactor some things.

Tuesday, May 1, 2012

Drive-by learning through patch reviews

This came up on the linuxwacom-devel list today and I think it warrants further spread through this post.

Different projects have different patch review requirements but the biggest difference is pre-review and post-review. That is, do patches get reviewed before or after they hit the repositories. Not too long ago, the X server employed a post-review process. Everyone with access pushed and bugs would get discovered by those reading commit logs. Patches that ended up on the list were mainly from those that didn't have commit access. Beginning with server 1.8 we introduced a hard review requirement and every patch to make it into the repos ended up on the list, so we switched from post-review to pre-review.

Aside from enforcing that someone gives the formal ACK for a patch, a side-effect is to allow for passive "drive-by" learning of the code base. Rather than having to explicitly look up commit logs, patches are delivered into one's inbox, outlining where the development currently happens, what it is about and - perhaps most importantly - issues that would have been caused by bugs in rejected patches. Ideally that is then archived, so you can link to that discussion later.

The example I linked to from above is automake's INCLUDES versus AM_CPPFLAGS. I wouldn't have known about them if it wasn't for skimming through Gaetan's patches to various X.Org-related projects. That again allowed me to contribute a (in this case admittedly minor) patch to another project. And since that patch ended up on another list, the knowledge can spread on.

Next time when you think of committing directly to a repo, consider sending the patches out. Not just for review, but also to make learning easier for others.

Thursday, March 15, 2012

ssh and shared connections

My job requires me to ssh into various boxes often, with several connections to the target host. Some people use screen on the target host but I work better if I have multiple terminal windows. But re-connecting to the same host can be annoying: establishing a connection takes time and it should feel instantaneous. Luckily, SSHv2 allows sharing a connection, making reconnection a lot faster. Also, if you have a password-authenticated connection instead of a key-based one, you won't have to type the password for each new connection (but really, you should be using keys anyway). The few lines you'll need in your $HOME/.ssh/config:
ControlMaster auto
ControlPath ~/.ssh/sockets/ssh_mux_%h_%p_%r
ControlPersist 60
All three are extensively described in the ssh_config(5) man page, but here's a summary:
  • ControlMaster auto will create a new ssh connection when no matching socket exists. Otherwise, it will just use the existing connection.
  • ControlPath is simply the path to the control socket, with %h, %p and %r being replaced with target host, port and username to keep the socket name unique. Having this in a user-specific location instead of /tmp is generally a good idea.
  • ControlPersist defines how long the master connection should be kept open after exit. You can specify "yes" for indefinite or a number of idle seconds. If you reconnect within that idle time, it will again re-use the existing connection. Note that if you do not have ControlPersist and you quit the master connection, you will terminate all other connections too! ControlPersist was added in OpenSSH 5.6.
You can provide these options globally or inside a Host section of the config, depending on your needs. A few final notes: since you essentially only have one connection now, you can only forward one X11 display, one ssh agent, etc. at a time. If you need a separate connection for an otherwise shared host, use "ssh -S none". Also, if you're doing heavy data transfer on laggy connections you're probably better off having separate connections.
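One small caveat: ssh does not create the socket directory for you, so create it once before the first connection:
mkdir -p ~/.ssh/sockets
chmod 700 ~/.ssh/sockets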

Friday, December 2, 2011

Improving code readability through temporary variables

We don't always have the luxury of using library interfaces that are sensibly designed and self-explanatory (see this presentation). A fictional function may look like this:

extern void foo(struct foo *f,
                Bool check_device,
                int max_devices,
                Bool emulate);

But calls often end up like this:

foo(mystruct, TRUE, 0, FALSE);

Or, even worse, the function call could be:

foo(mystruct, !dev->invalid, 0, list_is_first_entry(dev->list));

The above is virtually unreadable and to understand it one needs to look at the function definition and the caller at the same time. The simple use of a temporary variable can do wonders here:

Bool check_device = !dev->invalid;
Bool emulate_pointer = list_is_first_entry(dev->list);

foo(mystruct, check_device, 0, emulate_pointer);

It adds a few lines of code (though I suspect the compiler will mostly optimise the difference away anyway) but it improves readability. Especially in the cases where the source field name is vastly different from the implied effect. In the second example, it's immediately obvious that pointer emulation should happen for the first entry.

Wednesday, November 2, 2011

The bugzilla attention span

Some bugs are less important than others, and there's always a certain background noise of bugs that aren't complete deal-breakers. And there are always more of those to fix than I have time for.

Every once in a while, I sweep through a list of open bugs and try to address as many as possible. Given the time constraints, I have a limited attention span for each bug. Anything that wastes time reduces the chance of a bug getting fixed. The bugs that tend to get fixed (or at least considered) are the ones with attachments: an xorg.conf, Xorg.log and whichever xorg.conf.d snippets were manually added. Always helpful is an evtest log of the device that's a problem.

If it takes me 10 comments to get a useful log from a bug reporter because they insist on philosophical discussions, ego-stroking, blame games, etc., chances are I've used up the allocated time for that bug before doing anything productive. And it's happened often enough that reporters eventually attached useful information but I just never went back to that bug. Such is life.

So, in the interest of getting a bug fixed: attach the logs and stay on-topic. If something is interesting or important enough to have a philosophical discussion about it, then we can always start one later.

Thursday, September 29, 2011

Mediawiki for personal notes

For years, I've had the problem of how to store my notes but finally found something that is usable.

I've tried a few variations over time, but none suited my use-case sufficiently. Text files are too limited and become unmanageable quickly. Org-Mode requires Emacs (Emacs and I disagree on how much pain I'm willing to accept in my hands), Zim wasn't available on RHEL and didn't scale well after a while. TomBoy and Gnote require GNOME which doesn't work on the Mac*, aside from me always running into synchronization issues using it on three machines. I toyed with ideas like Evernote but keeping notes in the cloud makes me slightly nervous and doesn't work for confidential stuff anyway. Other wikis I tried over the last few years were dokuwiki, ikiwiki and tiddlywiki.

Many moons ago, Aaron Stafford showed me how he used MediaWiki. At first that didn't seem viable since it didn't cover a few things I wanted: portability, git-based backups that I can take to work and back and use on any machine I care about. Anyway, I've used this for well over a year now and it works for me.

The ingredients:

  • one usb stick

  • truecrypt (realcrypt in Fedora's rpmfusion)

  • mediawiki



(Note: I'm not giving step-by-step instructions because if you're setting this up, you should understand what you're doing instead of just copying commands from a random website)

Preparing the USB stick is easy enough. Format it with FAT32 (if you're not suffering from having a Mac, you can use a better file system). Create a massive file with truecrypt, encrypt it. You can encrypt the entire stick but having the in-file option allows multiple encrypted files on the same stick.

The filesystem for the encrypted file should be FAT32 again. Then install mediawiki in a directory, point apache to it and run through the mediawiki install process. Set the mediawiki up to use an sqlite database. Finally, set up a cron job that essentially runs git add * && git commit -am "Autocommit $date" every hour. Backup is simply done by running git push remote.
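For reference, a hypothetical crontab entry for that autocommit, assuming the wiki lives in /mnt/wiki (% is special in crontab, hence the backslashes):
0 * * * * cd /mnt/wiki && git add -A && git commit -q -m "Autocommit $(date +\%Y-\%m-\%d)"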

I recommend writing a few scripts that automount the stick once you plug it in. Once that's done and on all machines you care about, you just need to plug the stick in, start httpd and off you go. Of course, if you trust your hosting provider enough you could also set it up somewhere on the web and you can skip the USB stick madness.

Now, is this setup perfect? No, by far not. Issues that I see with it:

  • no text-based backend. Now that my X server is decidedly more stable that doesn't matter that much anymore

  • interface decidedly Web 1.0 (TiddlyWiki is much nicer here)

  • no real database merging, essentially requiring a single install instead of several synchronising ones

  • the mediawiki syntax is random at best, and chaotic at worst. As you get used to it, it becomes less of a problem

  • search hardly works. Not sure if that's a sqlite issue or a broken setup



Lessons learned



Forgetting the USB stick at work means you can't take notes at home for that evening or weekend. I now have a repeating event in my calendar to remind me to take it home.

Databases don't merge and git won't help with a binary file. For a while, I kept two wikis, one for work notes, one for private stuff. But then you have something that overlaps both (e.g. a computer setup that you use at work and at home). Eventually I ended up dumping the smaller database and importing it into the other one before I had too much overlap.

Categories are awesome. They're essentially tags, add [[Category:Foobar]] to any page and the matching Category page lists all pages in that category (plus, that page can also be a page with other info). Categories can be nested.

Everything must be written down. This is something of a golden rule for any note-taking attempt. Unless you take notes, your notes won't be useful.

I still use a normal pencil and paper notebook. Especially when debugging I use pen and paper and then transfer the final result over to the mediawiki.

Install Lazarus. It has saved me a few times.

Redirects help finding pages. Whenever I search for a page with a certain title and it's not there, I add a redirect from that title to the real page. So next time, I'm sorted.

I'm now re-learning how to navigate pages instead of just searching through them. This has one big benefit: on the path to a page you may encounter another page you've forgotten about and that you can now deal with again.

Keep a "Log" (or "Diary") page, with short comments of what you did on each day. I've set this to my home page now and it consists of entries like "Looked at [[Fedora 16 Blocker Bugs]]". It's a quick way to jot down what you're doing, pointing to the pages that contain the actual information.

As said above, I've been using this setup for quite a while now. It's not perfect but software hardly ever is. The final ingredient though I only found last week: the LACIE MosKeyTo, a USB stick that sticks out by only a few mm. Before that I was always worried about breaking off the stick when I carried the laptop around.

So if you're looking for note-taking software, I can't say mediawiki is perfect. But it is the most usable one I've found for me.



* My Mac is essentially a netbook and I don't really install anything on it. It keeps me from working too much at home.

Friday, September 2, 2011

Making git reset less dangerous

If you've ever fat-fingered a git reset HEAD~2 into a git reset HEAD~23 then this may be of interest to you. If you have, add this to your .gitconfig:

[alias]
r = !sh -c \"git log -1 --format=oneline\" && git reset

Using git r instead of git reset now prints the current HEAD before resetting.

$ git r --hard HEAD~23
9d09ffc3ba1a65fc7feefd21abd5adacf3274628 dix: NewCurrentScreen must work on pointers where possible
HEAD is now at 5083b56 Revert "mi: move switching screen on dequeue to separate helper function"

Whoops. Fat-fingered. A git r --hard 9d09ffc3ba1a65fc7feefd21abd5adacf3274628 will undo the above and get you back to your previous HEAD.

Note: git fsck --lost-found will also help you to find the commit again, but it'll take more time and effort.

Update: as mjg59 pointed out to me, git reflog records git resets, thus making it easy to find the previous HEAD too. Wish I'd known that command sooner ;)

Thursday, August 4, 2011

Starting a terminal in GNOME3

After GNOME3 came out, one of the common criticisms I heard/read was that a click on the terminal icon doesn't start a new terminal - it brings the current one to the foreground.

I wouldn't have noticed, I can't remember the last time I clicked on that icon. GNOME has for years supported setting a keyboard short-cut to fire up a terminal. GNOME3 still supports that short-cut. Go to System Settings → Keyboard → Shortcuts, click on "Launch Terminal". Assign a shortcut (e.g. Ctrl+Alt+T). And that's it. Now you can fill your screen with new terminals faster than you could ever do it by clicking.

On that note, I recommend using "Terminator" as your terminal emulator, assign shortcuts to "maximize window" and "maximize window horizontally" and with a few shortcuts you essentially have a tiling window manager for terminals, inside GNOME.

Friday, July 22, 2011

Disabling ssh host key checking

Note: this post describes how to disable a security feature. Unless you know why you're doing this, better don't do it.

The testboxes I have need to boot a variety of different operating systems and OS versions, usually from USB disks. Each OS has different ssh host keys, so all-too-frequently I got this error when trying to ssh in:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
[...]

Deleting the old key from $HOME/.ssh/known_hosts and re-trying fixes the issue - until the next reboot into a different system.

Now, while I'm sure there's a way to share host keys between installs, some of those systems are very short-lived. So I needed something on my main box (the one I ssh from).

A while ago I found the magic recipe. Pop this into your $HOME/.ssh/config:


Host 192.168.0.100
UserKnownHostsFile /dev/null
StrictHostKeyChecking no


And what that does is nicely explained in ssh_config(5):

UserKnownHostsFile
Specifies a file to use for the user host key database instead of ~/.ssh/known_hosts.

StrictHostKeyChecking
[...]
If this flag is set to “no”, ssh will automatically add new
host keys to the user known hosts files. If this flag is set
to “ask”, new host keys will be added to the user known host
files only after the user has confirmed that is what they
really want to do, and ssh will refuse to connect to hosts
whose host key has changed. The host keys of known hosts will
be verified automatically in all cases. The argument must be
“yes”, “no”, or “ask”. The default is “ask”.



The combination of /dev/null and "no" for the key checks means that ssh will automatically add new hosts. But the host key is saved in /dev/null, so the next connection again looks to ssh like a new, unknown host.
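The same thing works as a one-off on the command line if you'd rather not touch the config:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no 192.168.0.100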

Problem solved, though in your own interest you should keep the Host definition as narrow as possible. Host key checking exists for a reason.