
touch-packages team mailing list archive

[Bug 1522026] Re: armhf lxd container does not start on arm64 system

 

Ah, right. Then I'm afraid I can't test the patch on Power, as I only
have access to LE machines.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to lxc in Ubuntu.
https://bugs.launchpad.net/bugs/1522026

Title:
  armhf lxd container does not start on arm64 system

Status in Auto Package Testing:
  New
Status in lxc package in Ubuntu:
  Fix Committed

Bug description:
  I'm trying to set up armhf testing on an arm64 host, as that's what we
  have in Scalingstack (no armhf images yet). The host is Ubuntu 15.10,
  with lxd 0.20-0ubuntu4.1 (no PPA).

  $ uname -a
  Linux arm64-lxd-test 4.2.0-18-generic #22-Ubuntu SMP Fri Nov 6 19:56:51 UTC 2015 aarch64 aarch64 aarch64 GNU/Linux

  $ lxc image list | grep arm
  | ubuntu/xenial/armhf | a406edc85653 | no     | ubuntu xenial armv7l (default) (20151202_04:37) | armv7l | 63.68MB | Dec 2, 2015 at 1:23pm (UTC) |

  $ lxc launch ubuntu/xenial/armhf x1

  Starting the container throws no error, and with debugging I don't see
  anything bad:

  $ lxc start x1 --debug --verbose
  DBUG[12-02|13:36:56] Fingering the daemon
  DBUG[12-02|13:36:56] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"api_compat":1,"auth":"trusted","config":{"core.https_address":"10.43.41.223","images.remote_cache_expiry":"10"},"environment":{"addresses":["10.43.41.223"],"architectures":[4,3],"driver":"lxc","driver_version":"1.1.4","kernel":"Linux","kernel_architecture":"aarch64","kernel_version":"4.2.0-18-generic","server":"lxd","server_pid":1339,"server_version":"0.20","storage":"dir","storage_version":""}}}

  DBUG[12-02|13:36:56] Pong received
  DBUG[12-02|13:36:56] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"architecture":0,"config":{"volatile.base_image":"a406edc85653e7b3232ea1ae77e35b67dd42574cb4c7335e9b586a6b4ad6223c","volatile.eth0.hwaddr":"00:16:3e:38:aa:2c","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"},"devices":{},"ephemeral":false,"expanded_config":{"volatile.base_image":"a406edc85653e7b3232ea1ae77e35b67dd42574cb4c7335e9b586a6b4ad6223c","volatile.eth0.hwaddr":"00:16:3e:38:aa:2c","volatile.last_state.idmap":"[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]"},"expanded_devices":{"eth0":{"hwaddr":"00:16:3e:38:aa:2c","nictype":"bridged","parent":"lxcbr0","type":"nic"}},"name":"x1","profiles":["default"],"status":{"status":"Stopped","status_code":102,"init":0,"ips":null}}}

  DBUG[12-02|13:36:56] Putting {"action":"start","force":false,"timeout":-1} to http://unix.socket/1.0/containers/x1/state
  DBUG[12-02|13:36:56] Raw response: {"type":"async","status":"OK","status_code":100,"operation":"/1.0/operations/f17b8722-1573-4af8-a365-bc450bce6654","resources":null,"metadata":null}

  DBUG[12-02|13:36:56] 1.0/operations/f17b8722-1573-4af8-a365-bc450bce6654/wait
  DBUG[12-02|13:36:57] Raw response: {"type":"sync","status":"Success","status_code":200,"metadata":{"created_at":"2015-12-02T13:36:56.76183Z","updated_at":"2015-12-02T13:36:57.059047Z","status":"Success","status_code":200,"resources":null,"metadata":null,"may_cancel":false}}
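
  For reference, the two REST calls visible in the debug output above can
  be reproduced by hand against the LXD unix socket, e.g. with curl's
  --unix-socket support (curl >= 7.40). The socket path shown is an
  assumption for lxd 0.20 on Ubuntu 15.10:

  ```shell
  # Assumed socket location for this lxd version:
  SOCK=/var/lib/lxd/unix.socket

  # Ask the daemon to start the container; this returns an async
  # operation URL, matching the "type":"async" response in the log:
  curl -s --unix-socket "$SOCK" -X PUT \
    -d '{"action":"start","force":false,"timeout":-1}' \
    http://unix.socket/1.0/containers/x1/state

  # Then block until that operation finishes, substituting the UUID
  # returned by the previous call:
  # curl -s --unix-socket "$SOCK" http://unix.socket/1.0/operations/<uuid>/wait
  ```

  Note the wait call reports "Success" even though the container's init
  exits immediately afterwards, which is why no error surfaces on the
  client side.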

  But the container is not running afterwards. I'm attaching
  /var/log/lxd/x1/lxc.log; the most interesting bits are several
  occurrences of
    WARN     lxc_cgmanager - cgmanager.c:cgm_get:993 - do_cgm_get exited with error

  and

             NOTICE   lxc_start - start.c:post_start:1265 - '/sbin/init' started with pid '2028'
             WARN     lxc_start - start.c:signal_handler:310 - invalid pid for SIGCHLD
             DEBUG    lxc_commands - commands.c:lxc_cmd_handler:893 - peer has disconnected
             DEBUG    lxc_commands - commands.c:lxc_cmd_handler:893 - peer has disconnected
             DEBUG    lxc_commands - commands.c:lxc_cmd_get_state:579 - 'x1' is in 'RUNNING' state
             DEBUG    lxc_start - start.c:signal_handler:314 - container init process exited

  cgmanager.service itself is active and running, though.

  Is there some way to get a console for this, like we used to have with
  "lxc-start -n foo -F"?
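
  An untested workaround sketch: lxd keeps its container configurations
  under /var/lib/lxd/containers, so pointing the classic lxc-start tool
  at that path with its standard -P (lxcpath) and -F (foreground) flags
  may give a foreground console. Whether an lxd-managed config starts
  cleanly under the classic tools is an assumption, not something this
  bug confirms:

  ```shell
  # -P selects the container path, -F keeps the container in the
  # foreground with its console attached to the current terminal.
  sudo lxc-start -P /var/lib/lxd/containers -n x1 -F
  ```

  Even if the container fails the same way, running it in the foreground
  should at least show where /sbin/init dies.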

To manage notifications about this bug go to:
https://bugs.launchpad.net/auto-package-testing/+bug/1522026/+subscriptions