
[Bug 1509747] Re: Intermittent lxc failures on wily, juju-template-restart.service race condition

 

I read up on juju-local, and tried this in a clean wily amd64 cloud
image:

sudo apt install -y juju-local
juju init
juju switch local
juju bootstrap

I'm not entirely sure what I'm supposed to do now; there is a "juju
machine add" command, but it doesn't accept custom user data. It sounds
like the user data above will already be created by juju itself, though?

So I ran

  sudo juju machine add

and got:

$ juju status
environment: local
available-version: 1.24.7
machines:
  "0":
    agent-state: started
    agent-version: 1.24.6.1
    dns-name: localhost
    instance-id: localhost
    series: wily
    state-server-member-status: has-vote
  "1":
    instance-id: pending
    series: trusty
services: {}

But there's no new machine anywhere, just the template from
bootstrapping:


$ sudo lxc-ls -f
NAME                      STATE    IPV4  IPV6  GROUPS  AUTOSTART  
----------------------------------------------------------------
juju-trusty-lxc-template  STOPPED  -     -     -       NO         


... and now I realize that this is trusty, not wily. So this should at least work, no? In trusty we don't have systemd yet.
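
For what it's worth, a quick way to check the state of that template
container is plain lxc-info on the name from the lxc-ls output above (it
reports the container state, and a PID/IP when running):

  sudo lxc-info --name juju-trusty-lxc-template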

/var/log/juju-ubuntu-local/all-machines.log just spits out a series of

machine-0: 2015-10-28 08:27:55 ERROR juju.worker.diskmanager lsblk.go:116 error checking if "fd0" is in use: open /dev/fd0: no such device or address

but doesn't otherwise seem to do anything.
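
To check whether anything besides the fd0 noise ever shows up, a plain
grep filter on the same log works (nothing juju-specific here):

  sudo tail -f /var/log/juju-ubuntu-local/all-machines.log | grep -v fd0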

For completeness I retried the above steps with wily, setting
"default-series: wily" in ~/.juju/environments.yaml before bootstrap.
But now it doesn't even build a template container after bootstrap:
/var/lib/lxc/ stays empty, and "sudo tail -f /var/log/juju-ubuntu-local/*"
again just shows a series of these /dev/fd0 checks. I gave up after 10
minutes.
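
For reference, the local section of my ~/.juju/environments.yaml looked
roughly like this (the default-series line being the only change from
what "juju init" generates):

  environments:
    local:
      type: local
      default-series: wily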

While these look like bugs, they seem unrelated to this issue. Can you
please tell me how to reproduce this bug starting from a fresh wily
install?
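
If the failed container from the description is still around, the journal
for that unit might show why the utmp update fails; something along these
lines (using the container name from the description below):

  sudo lxc-attach --name juju-wily-lxc-template -- \
    journalctl -u systemd-update-utmp-runlevel.service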


** Changed in: systemd (Ubuntu)
       Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1509747

Title:
  Intermittent lxc failures on wily, juju-template-restart.service race
  condition

Status in juju-core:
  Confirmed
Status in systemd package in Ubuntu:
  Incomplete

Bug description:
  Frequently, when creating an lxc container on wily (either through
  --to lxc:#, or using the local provider on wily), the template never
  stops and errors out here:

  [ 2300.885573] cloud-init[2758]: Cloud-init v. 0.7.7 running 'modules:final' at Sun, 25 Oct 2015 00:28:57 +0000. Up 182 seconds.
  [ 2300.886101] cloud-init[2758]: Cloud-init v. 0.7.7 finished at Sun, 25 Oct 2015 00:29:03 +0000. Datasource DataSourceNoCloudNet [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 189 seconds
  [  OK  ] Started Execute cloud user/final scripts.
  [  OK  ] Reached target Multi-User System.
  [  OK  ] Reached target Graphical Interface.
           Starting Update UTMP about System Runlevel Changes...
  [  OK  ] Started /dev/initctl Compatibility Daemon.
  [FAILED] Failed to start Update UTMP about System Runlevel Changes.
  See 'systemctl status systemd-update-utmp-runlevel.service' for details.

  Attaching to the container and running the above command yields:

  ubuntu@cherylj-wily-local-lxc:~$ sudo lxc-attach --name juju-wily-lxc-template
  root@juju-wily-lxc-template:~# systemctl status systemd-update-utmp-runlevel.service
  ● systemd-update-utmp-runlevel.service - Update UTMP about System Runlevel Changes
     Loaded: loaded (/lib/systemd/system/systemd-update-utmp-runlevel.service; static; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sun 2015-10-25 00:30:29 UTC; 2h 23min ago
       Docs: man:systemd-update-utmp.service(8)
             man:utmp(5)
    Process: 3963 ExecStart=/lib/systemd/systemd-update-utmp runlevel (code=exited, status=1/FAILURE)
   Main PID: 3963 (code=exited, status=1/FAILURE)

  Oct 25 00:29:46 juju-wily-lxc-template systemd[1]: Starting Update UTMP about System Runlevel Changes...
  Oct 25 00:30:29 juju-wily-lxc-template systemd[1]: systemd-update-utmp-runlevel.service: Main process exited, code=exited, status=1/FAILURE
  Oct 25 00:30:30 juju-wily-lxc-template systemd[1]: Failed to start Update UTMP about System Runlevel Changes.
  Oct 25 00:30:30 juju-wily-lxc-template systemd[1]: systemd-update-utmp-runlevel.service: Unit entered failed state.
  Oct 25 00:30:30 juju-wily-lxc-template systemd[1]: systemd-update-utmp-runlevel.service: Failed with result 'exit-code'.

  
  I have seen this on ec2 and in canonistack.  The canonistack machine is available for further debugging.  Ping me for access.

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1509747/+subscriptions