
Re: obnam v1.19.1 forgets clients and or generations after a while


From: Markus Dauberschmidt <daubsi@bigigloo.de>
Date: Wed, 5 Oct 2016 22:16:55 +0200

   Dear Adam,
   
   sorry if this reply messes up the thread. I was not subscribed to the list
   at first, so I do not know whether my copy/paste reply of the contents of
   the web page at http://distix.obnam.org/obnam-support/c5bf405aa2f8430baf12051f605e9067.html
   will be processed properly. I have now subscribed, so future replies should
   hopefully be fine.
   
   
   I'm not sure where you see the 10-minute gap in the log file? Please note
   that the last entries from the copied log might be 9 minutes apart - but
   also 18 hours ;-)
   The extract should show that everything seemed to run fine at night but
   started to mess up when I manually queried the repo using "obnam generations".
   
   Currently, I still encounter the "ERROR R1CA00X Client bigigloo does not exist” when I issue "obnam generations".
   A call to fsck gives me
   
   root@bigigloo:/home/daubsi# obnam fsck
   Checking #/1: ERROR: larch forest metadata missing "key_size"
   
   Doesn't sound too promising, does it? :-(
   
   What really astonishes me is that the nightly run seemed to work without problems again:
   
   2016-10-05 02:49:22 INFO Backing up /var/www/owncloud/COPYING-AGPL
   2016-10-05 02:49:22 INFO Backing up /var/www/owncloud/console.php
   2016-10-05 02:49:22 INFO Backing up /var/www/owncloud/AUTHORS
   2016-10-05 02:49:22 INFO Backing up /var/www/owncloud
   2016-10-05 02:50:23 INFO Unlocking client bigigloo
   2016-10-05 02:50:50 INFO Locking client bigigloo
   2016-10-05 02:50:51 INFO Unlocking client bigigloo
   2016-10-05 02:50:51 INFO Backup performance statistics:
   2016-10-05 02:50:51 INFO * files found: 113922
   2016-10-05 02:50:51 INFO * files backed up: 113922
   2016-10-05 02:50:51 INFO * uploaded chunk data: 22582949 bytes (21 MiB)
   2016-10-05 02:50:51 INFO * total uploaded data (incl. metadata): 185391281 bytes (176 MiB)
   2016-10-05 02:50:51 INFO * total downloaded data (incl. metadata): 10592808621 bytes (9 GiB)
   2016-10-05 02:50:51 INFO * transfer overhead: 10755616953 bytes (10 GiB)
   2016-10-05 02:50:51 INFO * duration: 3029.65273118 s (50m30s)
   2016-10-05 02:50:51 INFO * average speed: 7.27927029585 KiB/s
   2016-10-05 02:50:51 INFO Backup finished.
   2016-10-05 02:50:51 INFO obnam version 1.19.1 ends normally
   2016-10-05 02:50:52 INFO obnam version 1.19.1 starts
   2016-10-05 02:50:52 INFO Forcing lock
   2016-10-05 02:50:52 INFO Repository: /nas/storage/Backups/bigigloo
   2016-10-05 02:50:52 INFO Opening repository: /nas/storage/Backups/bigigloo
   2016-10-05 02:50:52 INFO Forcing client lock open for bigigloo
   2016-10-05 02:50:52 INFO obnam version 1.19.1 ends normally
   2016-10-05 02:50:52 INFO obnam version 1.19.1 starts
   2016-10-05 02:50:52 INFO Opening repository: /nas/storage/Backups/bigigloo
   2016-10-05 02:50:52 INFO Locking client bigigloo
   2016-10-05 02:50:53 INFO Unlocking client bigigloo
   2016-10-05 02:50:53 INFO obnam version 1.19.1 ends normally
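As a cross-check of the statistics above (an inference from the figures themselves, not a documented obnam formula), the overhead and speed lines are internally consistent: overhead equals total uploaded plus total downloaded minus the useful chunk data, and the average speed is chunk data divided by duration:

```python
# Sanity check on the reported backup statistics. The formulas are
# inferred from the numbers above, not taken from obnam documentation.
chunk_data = 22582949          # uploaded chunk data (bytes)
uploaded = 185391281           # total uploaded incl. metadata (bytes)
downloaded = 10592808621      # total downloaded incl. metadata (bytes)
duration = 3029.65273118       # seconds

overhead = uploaded + downloaded - chunk_data
speed_kib = chunk_data / duration / 1024

print(overhead)   # → 10755616953, matching the "transfer overhead" line
print(speed_kib)  # ≈ 7.2793, matching the "average speed" line in KiB/s
```

So the ~10 GiB "overhead" is almost entirely download traffic, which fits the 50-minute duration for only 21 MiB of new data.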
   
   
   As I do not seem to have a way to verify this at the moment: judging from
   the line "total uploaded data … 176 MiB", it sounds as if the backup
   process correctly detected that there were hardly any changes since the
   last backup, right? But why does it give me errors when I access the repo
   manually?
   
   For completeness, here are the next lines from when I issued "obnam generations" a couple of minutes ago:
   
   2016-10-05 21:56:17 INFO obnam version 1.19.1 starts
   2016-10-05 21:56:17 INFO Opening repository: /nas/storage/Backups/bigigloo
   2016-10-05 21:56:17 CRITICAL R1CA00X: Client bigigloo does not exist in repository /nas/storage/Backups/bigigloo
   Traceback (most recent call last):
     File "/usr/lib/python2.7/dist-packages/obnamlib/app.py", line 208, in process_args
       cliapp.Application.process_args(self, args)
     File "/usr/lib/python2.7/dist-packages/cliapp/app.py", line 589, in process_args
       method(args[1:])
     File "/usr/lib/python2.7/dist-packages/obnamlib/plugins/show_plugin.py", line 106, in generations
       self.open_repository()
     File "/usr/lib/python2.7/dist-packages/obnamlib/plugins/show_plugin.py", line 95, in open_repository
       client=client, repo=self.app.settings['repository'])
   ClientDoesNotExistError: R1CA00X: Client bigigloo does not exist in repository /nas/storage/Backups/bigigloo
   
   
   You write that always forcing the lock could also be problematic. OK, but
   when I did not force the lock at first, back when I started using obnam,
   the repo got corrupted as well - so what is the golden rule for when to
   use it and when not?
   
   Could it be an incompatibility in a Python module caused by a recent Ubuntu package update? How could I check whether that is the case?
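One hedged way to start checking is to record which versions of obnam's Python dependencies are importable; the module names below (obnamlib, larch, cliapp) are assumptions taken from the tracebacks in this thread, and the idea is to compare the output before and after a package update:

```python
# Sketch: report the importable version of each suspected dependency.
# Module names are assumptions based on the tracebacks above.
import importlib

def module_version(name):
    """Return the module's reported version string, "unknown" if it has
    no __version__, or None if it cannot be imported at all."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return None
    return getattr(mod, "__version__", "unknown")

for name in ("obnamlib", "larch", "cliapp"):
    print(name, module_version(name))
```

On an Ubuntu system, comparing this against `dpkg -l` output for the corresponding packages over time would show whether an update coincided with the breakage.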
   
   Still the question remains: will there be any chance to access the
   backed-up data in the repo in case my hard disks in the server die
   tonight, or is the repo wrecked beyond repair?
   
   With best regards,
   Markus
   
   
   ticket-id: c5bf405aa2f8430baf12051f605e9067
   title: obnam v1.19.1 forgets clients and or generations after a while
   From: Markus Dauberschmidt <daubsi@bigigloo.de>
   Date: Mon, 3 Oct 2016 21:07:50 +0200
   
      Hi,
      
      I have severe problems with obnam v1.19.1 trying to backup my home 
      directories to a mounted NFS share on a daily basis. obnam is installed 
      from the Ubuntu 16.04 LTS package repository.
      
      The NFS share is mounted as
      
      192.168.0.25:/volume1/storage on /nas/storage type nfs 
      (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.25,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.0.25)
      
      The share and server is always on. There is no downtime and the share is 
      in use around the clock.
      
      obnam works perfectly for a while, but after some weeks it somehow
      forgot all generations, showing nothing for "obnam generations"
      anymore. I had about 20 generations according to my "forget" policy
      and now they are all gone :-(
      
      Lately obnam gives me this error when I execute "obnam generations":
      ERROR: R1CA00X: Client bigigloo does not exist in repository 
      /nas/storage/Backups/bigigloo
      
      Thank God, the repository itself still seems to be on the NAS, with
      ~23 GB of disk space used below that directory tree.
      
      My config looks like this:
      
      /etc/obnam.conf:
      ---
      [config]
      repository = /nas/storage/Backups/bigigloo
      log = /var/log/obnam.log
      root = /root, /home/abc, /home/def, /var/www/owncloud
      one-file-system = no
      client-name = bigigloo
      keep = 7d,15w,12m,1y
      ---
      and I call the following script every night at 2 a.m.
      
      /root/obnam_backup:
      ---
      #!/bin/bash
      
      /usr/bin/obnam force-lock --config=/etc/obnam.conf
      /usr/bin/obnam backup --config=/etc/obnam.conf
      /usr/bin/obnam force-lock --config=/etc/obnam.conf
      /usr/bin/obnam forget --config=/etc/obnam.conf
      ---
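An alternative to forcing the lock unconditionally on every run would be to force it only after a failed backup and retry once. This is only a sketch of that idea, not something obnam documents; the OBNAM variable defaults to a harmless stub so the sketch can be run as-is, and should point at /usr/bin/obnam for real use:

```shell
#!/bin/sh
# Sketch: force-lock only after a failed run, then retry once.
OBNAM=${OBNAM:-"echo obnam"}   # stub: prints commands instead of running obnam
CONF=${CONF:-/etc/obnam.conf}
ACTIONS=""

run() {
    ACTIONS="$ACTIONS $1"
    $OBNAM "$1" --config="$CONF"
}

if ! run backup; then
    run force-lock   # a failed run may have left a stale lock: clear it
    run backup       # single retry after clearing the lock
fi
run forget
```

The point of the guard is that on a normal night the lock is never forced, so a concurrently running obnam cannot have its lock yanked away.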
      
      These are the last lines from the log, showing the successful backup 
      creation this night:
      
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud/db_structure.xml
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud/cron.php
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud/COPYING-AGPL
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud/console.php
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud/AUTHORS
      2016-10-03 02:44:12 INFO Backing up /var/www/owncloud
      2016-10-03 02:45:12 INFO Unlocking client bigigloo
      2016-10-03 02:45:38 INFO Locking client bigigloo
      2016-10-03 02:45:38 INFO Unlocking client bigigloo
      2016-10-03 02:45:38 INFO Backup performance statistics:
      2016-10-03 02:45:38 INFO * files found: 113815
      2016-10-03 02:45:38 INFO * files backed up: 113815
      2016-10-03 02:45:38 INFO * uploaded chunk data: 7830114 bytes (7 MiB)
      2016-10-03 02:45:38 INFO * total uploaded data (incl. metadata): 164711002 bytes (157 MiB)
      2016-10-03 02:45:38 INFO * total downloaded data (incl. metadata): 9917031993 bytes (9 GiB)
      2016-10-03 02:45:38 INFO * transfer overhead: 10073912881 bytes (9 GiB)
      2016-10-03 02:45:38 INFO * duration: 2717.00568295 s (45m17s)
      2016-10-03 02:45:38 INFO * average speed: 2.81434659895 KiB/s
      2016-10-03 02:45:38 INFO Backup finished.
      2016-10-03 02:45:38 INFO obnam version 1.19.1 ends normally
      2016-10-03 02:45:39 INFO obnam version 1.19.1 starts
      2016-10-03 02:45:39 INFO Forcing lock
      2016-10-03 02:45:39 INFO Repository: /nas/storage/Backups/bigigloo
      2016-10-03 02:45:39 INFO Opening repository: /nas/storage/Backups/bigigloo
      2016-10-03 02:45:39 INFO Forcing client lock open for bigigloo
      2016-10-03 02:45:39 INFO obnam version 1.19.1 ends normally
      2016-10-03 02:45:39 INFO obnam version 1.19.1 starts
      2016-10-03 02:45:39 INFO Opening repository: /nas/storage/Backups/bigigloo
      2016-10-03 02:45:39 INFO Locking client bigigloo
      2016-10-03 02:45:40 INFO Unlocking client bigigloo
      2016-10-03 02:45:40 INFO obnam version 1.19.1 ends normally
      2016-10-03 20:54:58 INFO obnam version 1.19.1 starts
      2016-10-03 20:54:58 INFO Opening repository: /nas/storage/Backups/bigigloo
      2016-10-03 20:54:59 CRITICAL R1CA00X: Client bigigloo does not exist in repository /nas/storage/Backups/bigigloo
      Traceback (most recent call last):
        File "/usr/lib/python2.7/dist-packages/obnamlib/app.py", line 208, in process_args
          cliapp.Application.process_args(self, args)
        File "/usr/lib/python2.7/dist-packages/cliapp/app.py", line 589, in process_args
          method(args[1:])
        File "/usr/lib/python2.7/dist-packages/obnamlib/plugins/show_plugin.py", line 106, in generations
          self.open_repository()
        File "/usr/lib/python2.7/dist-packages/obnamlib/plugins/show_plugin.py", line 95, in open_repository
          client=client, repo=self.app.settings['repository'])
      ClientDoesNotExistError: R1CA00X: Client bigigloo does not exist in repository /nas/storage/Backups/bigigloo
      
      
      I added the calls to "force-lock" in the backup script:
      /usr/bin/obnam force-lock --config=/etc/obnam.conf
      
      because it seemed to make things run more stably. The "lost
      generations" were gone for a while, but then the problems
      reappeared.
      
      What can I do to get hold of my repository again?
      
      Thanks
      Markus
      
      
      
      _______________________________________________
      obnam-support mailing list
      obnam-support@obnam.org
      http://listmaster.pepperfish.net/cgi-bin/mailman/listinfo/obnam-support-obnam.org
   From: Adam Porter <adam@alphapapa.net>
   Date: Tue, 04 Oct 2016 00:38:27 -0500
   
      Markus Dauberschmidt <daubsi@bigigloo.de> writes:
      
      Hi Markus,
      
      > 2016-10-03 02:45:39 INFO obnam version 1.19.1 ends normally
      > 2016-10-03 02:45:39 INFO obnam version 1.19.1 starts
      > 2016-10-03 02:45:39 INFO Opening repository: /nas/storage/Backups/bigigloo
      > 2016-10-03 02:45:39 INFO Locking client bigigloo
      > 2016-10-03 02:45:40 INFO Unlocking client bigigloo
      > 2016-10-03 02:45:40 INFO obnam version 1.19.1 ends normally
      > 2016-10-03 20:54:58 INFO obnam version 1.19.1 starts
      > 2016-10-03 20:54:58 INFO Opening repository: /nas/storage/Backups/bigigloo
      > 2016-10-03 20:54:59 CRITICAL R1CA00X: Client bigigloo does not exist
      
      This might be completely irrelevant, but I wonder why there was a nearly
      10 minute gap between the second force-lock and the forget command in
      the log file.  Since the commands are in the script in sequence, it
      seems like there should have been no delay between them.  This might be
      a clue to something abnormal going on.
      
      You're right that sometimes force-lock is needed. For example, I
      back up to an old netbook that I keep running as a little server,
      but its wifi driver is slightly buggy, and sometimes it drops off
      the network randomly. When that happens during a backup, the repo
      is left locked, and the backup jobs fail until I notice and
      force-lock it.
      
      So it makes sense to do that, but at the same time, I wonder if there
      could be an issue with doing it every time, no matter what.  For
      example, if the backup run gets interrupted, then the force-lock runs,
      and then the forget runs...  I guess Obnam should be able to handle that
      all right, but bugs like that can be really obscure.
      
      Anyway, have you tried running "obnam fsck"?  You might want to run that
      from the NAS if possible, rather than over the network, because the man
      page says that it can be slow.  But if there are any problems with the
      repo, it should report them.
      
      
From: Lars Wirzenius <liw@liw.fi>
Date: Thu, 6 Oct 2016 21:58:39 +0300

   On Wed, Oct 05, 2016 at 10:16:55PM +0200, Markus Dauberschmidt wrote:
   > root@bigigloo:/home/daubsi# obnam fsck
   > Checking #/1: ERROR: larch forest metadata missing "key_size"
   
   Do you have a file called metadata/format in the repository? What does
   it contain?
   
   Do you have a file called clientlist/metadata in the repository? What
   does it contain?
   
   So far it looks to me like one of the following:
   
   * Your backup repository gets extremely corrupted almost
     instantaneously after you finish a backup. I suspect something
     outside of Obnam, but if you can track down what in Obnam causes
     it, I'm happy to fix it. If you can find a way to reproduce it,
     preferably with a script, that would help in fixing it.
   
   * You're somehow using different repository URLs for the backup and
     other commands.
From: Markus Dauberschmidt <daubsi@bigigloo.de>
Date: Fri, 7 Oct 2016 18:12:29 +0200

   Dear Lars,
   
   > Do you have a file called metadata/format in the repository? What does
   > it contain?
   
   Yes I have:
   
   root@bigigloo:/nas/storage/Backups/bigigloo/metadata# cat format
   6
   
   > Do you have a file called clientlist/metadata in the repository? What
   > does it contain?
   
   Yes, I have:
   root@bigigloo:/nas/storage/Backups/bigigloo/clientlist# cat metadata
   [metadata]
   format = 1/1
   last_id = 3
   root_ids = 1
   key_size = 25
   node_size = 262144
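Notably, that file parses cleanly as INI and does contain the key fsck complained about. A small sketch to double-check (the content is copied verbatim from the output above; the interpretation at the end is an assumption):

```python
# Inspect the clientlist/metadata content shown above with configparser
# and check for the key fsck reported as missing.
import configparser

content = """\
[metadata]
format = 1/1
last_id = 3
root_ids = 1
key_size = 25
node_size = 262144
"""

parser = configparser.ConfigParser()
parser.read_string(content)
meta = parser["metadata"]

# key_size is present in this copy, so the earlier "larch forest metadata
# missing key_size" error would suggest obnam saw a different or stale
# version of this file at that moment (e.g. via a caching filesystem).
print("key_size" in meta, meta.get("key_size"))
```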
   
   New behavior:
   
   I just issued an “obnam fsck” and now I get a proper check progress message:
   
   root@bigigloo:/nas/storage/Backups/bigigloo/clientlist# obnam fsck
   forest clientlist: node 3: refcount is 17 but should be 1
   Checking 1569/2231: file bigigloo:('bigigloo', 2):/home/daubsi/Maildir/.Markus.2005/cur/1255682915.M16218P13585V0000000000000902I019BC085_87.bigigloo,S=3126:2,RS
   (constantly changing)
   
   And many, many minutes later:
   
   [..thousands of lines later..]
   chunk 504585954714841833 not used by anyone
   chunk 504585954714841835 not used by anyone
   chunk 504585954714841838 not used by anyone
   chunk 504585954714841839 not used by anyone
   chunk 504585954714841840 not used by anyone
   chunk 504585954714841841 not used by anyone
   chunk 504585954714841842 not used by anyone
   chunk 504585954714841843 not used by anyone
   chunk 504585954714841844 not used by anyone
   chunk 504585954714841845 not used by anyone
   chunk 504585954714841848 not used by anyone
   chunk 504585954714841849 not used by anyone
   chunk 504585954714841850 not used by anyone
   chunk 504585954714841855 not used by anyone
   chunk 504585954714841856 not used by anyone
   chunk 504585954714841857 not used by anyone
   chunk 504585954714841858 not used by anyone
   chunk 504585954714841859 not used by anyone
   Checking 338767/338767: extra chunks
   
   
   The same command gave me an error yesterday:
   Checking #/1: ERROR: larch forest metadata missing "key_size"
   
   Later this morning I issued an “obnam generations”, and this time I got 
   back no error:
   
   root@bigigloo:/home/daubsi# obnam generations
   2       2016-10-06 02:00:22 +0100 .. 2016-10-06 02:46:28 +0100 (113965 files, 12107407342 bytes)
   root@bigigloo:/home/daubsi#
   
   But, if I understand it correctly, there is only this one generation
   left? So basically all backups are gone and I start again from the
   beginning? (Does the line "forest clientlist: node 3: refcount is 17
   but should be 1" from the fsck output have something to do with this?
   I think I had around 17 generations when it still worked.)
   
   Currently, "obnam generations" still gives me the same output as above.
   
   Could it be an issue with the mount options for the NFS share?
   192.168.0.25:/volume1/video on /nas/video type nfs 
   (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.25,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.0.25)
   
   However, I’ve not encountered the slightest problem so far with any 
   other application/use case.
   
   BR
   Markus
   
   
From: Markus Dauberschmidt <daubsi@bigigloo.de>
Date: Mon, 17 Oct 2016 22:30:35 +0200

   Hi all,
   
   as I have already written, I mounted the NAS share via NFS at /nas/storage.
   For testing purposes I have now mounted the share a second time via CIFS
   at /nas/storage_cifs and reconfigured obnam to use
   /nas/storage_cifs/Backups/bigigloo as the repository's path…
   
   Guess what: it has been running for four days now without a single problem!
   
   For your reference:
   
   These are the mount details for the NFS share:
   
   192.168.0.25:/volume1/storage on /nas/storage type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.25,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.0.25)
   
   and these are the details for the CIFS share:
   //192.168.0.25/storage on /nas/storage_cifs type cifs (rw,relatime,vers=1.0,cache=strict,username=daubsi,domain=local,uid=1000,forceuid,gid=100,forcegid,addr=192.168.0.25,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=1048576,actimeo=1)
   
   Any idea which of the NFS mount options could cause the problem? They all seem to have been set automatically; I have not set anything special myself.
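For what it is worth, NFS client-side attribute caching (see nfs(5)) can serve stale file metadata for several seconds after a file is rewritten, while the working CIFS mount above uses actimeo=1, i.e. a one-second cache. A hedged sketch of options one could try on the NFS mount to rule this out - these are standard nfs(5) options, not something verified against this NAS:

```text
# /etc/fstab sketch -- disable or shorten NFS attribute caching (nfs(5)):
#   noac              turn off attribute caching entirely (slower, safest)
#   actimeo=1         cap all attribute cache timeouts at 1 second
#   lookupcache=none  do not cache directory lookups
192.168.0.25:/volume1/storage  /nas/storage  nfs  rw,hard,proto=tcp,vers=3,actimeo=1,lookupcache=none  0  0
```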
   
   BR
   Markus
   
   