
hugin-devs team mailing list archive

[Bug 883208] Re: cpfind mode for gigapixel panoramas.


[Expired for Hugin because there has been no activity for 60 days.]

** Changed in: hugin
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Hugin
Developers, which is subscribed to Hugin.
https://bugs.launchpad.net/bugs/883208

Title:
  cpfind mode for gigapixel panoramas.

Status in Hugin - Panorama Tools GUI:
  Expired

Bug description:
  I have built my own "gigapan" hardware. I made a "test shot" with only
  200 images.

  The camera swept across the FOV snake-mode left-to-right, up one row
  and then right-to-left.

  cpfind found matches between most adjacent images. Whenever the camera
  went up, the control points are mostly in the upper half of the
  lower-numbered image and in the lower half of the higher-numbered
  image.

  Once this happens, the sum of the two image numbers stays constant for
  the vertical pairs across that row boundary, so for up to thirty more
  images (my horizontal sweeps are 30 images long) every pair with that
  same sum should be tested.

  For example, when images 30 and 31 match vertically, the sum is 61
  and we should try to match 32-29, 33-28, 34-27 ... 60-1. Next we get
  a vertical match between 60 and 61, the sum becomes 121, and we have
  to match 62-59, 63-58 ... 90-31.

  I am now stuck with a choice between "fast" mode, which only finds
  (part of) the "serpent" control points (some images are too dark and
  featureless), and an "exponentially expensive" mode which I expect to
  run for days at least...

  Adjacent is now:

  for (i = 1; i < nimages; i++)
      process_pair(i, i - 1);

  while I think it should be:

  for (i = 0; i < nrules; i++) {
      for (j = 0; j < rules[i].len; j++) {
          process_pair(j + rules[i].offset,
                       rules[i].mode == OFFSET ? j + rules[i].number
                                               : rules[i].number - j);
      }
  }
  The first rule should then be initialized to: len = nimages - 1, offset = 0, number = 1, mode = OFFSET
  (len must be nimages - 1, not nimages, or the last pair runs past the final image).
  My case should then have the "rule": len = 30, offset = 0, number = 1, mode = DIFFERENCE.
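  The proposed rule table could be fleshed out along these lines (a
  sketch under my own naming assumptions: struct rule, the OFFSET /
  DIFFERENCE enum, and a process_pair stub that just records pairs are
  all illustrative, not cpfind internals):

```c
enum rule_mode { OFFSET, DIFFERENCE };

struct rule {
    int len;              /* number of pairs this rule generates      */
    int offset;           /* added to the loop index: first image     */
    int number;           /* partner offset (OFFSET) or base for the  */
                          /* descending partner index (DIFFERENCE)    */
    enum rule_mode mode;
};

static int pairs[1024][2];
static int npairs;

static void process_pair(int a, int b) {   /* stub: record the pair */
    pairs[npairs][0] = a;
    pairs[npairs][1] = b;
    npairs++;
}

void run_rules(const struct rule *rules, int nrules) {
    for (int i = 0; i < nrules; i++)
        for (int j = 0; j < rules[i].len; j++)
            process_pair(j + rules[i].offset,
                         rules[i].mode == OFFSET ? j + rules[i].number
                                                 : rules[i].number - j);
}
```

  With this reading of the fields, an OFFSET rule {len = nimages - 1,
  offset = 0, number = 1} reproduces the adjacent pairs, and the
  sum-61 example would correspond to a DIFFERENCE rule {len = 29,
  offset = 32, number = 29}, producing 32-29 through 60-1.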

  A multirow panorama where each row is taken left-to-right would then
  have rules of the OFFSET type (in fact one rule would suffice).
  It would be a further enhancement to detect rules automatically: with
  rule 0 already present, the check "if the matches are perpendicular to
  the normal match direction, add a sum rule with the current total of
  the two image numbers" is almost free in terms of computational
  requirements.

  To detect the multirow left-to-right pano, you would find that e.g.
  there is no match between 30 and 31, no match between 60 and 61, and
  so on. However, with some images almost featureless, I would think
  this is difficult to detect automatically.

  That would leave us with an O(N) detection algorithm:
    choose a sample image (nimages/2) with matches on both sides;
    try to match this image against all other images.

  Whenever you get a match, add the rule for that same offset.

  I currently take shots with a linear 50% overlap. So I'd get offsets
  linesize-1, linesize, linesize+1 for the images below-left, below,
  below-right.
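  The O(N) detection could be sketched as follows (again illustrative:
  images_match is a hypothetical stand-in for a real pairwise matcher,
  here hard-wired to the 50%-overlap layout described above):

```c
#include <stdlib.h>

static int row_len = 30;   /* images per horizontal sweep (assumed) */

/* Stand-in for a real matcher; returns nonzero on a match. With 50%
 * linear overlap an image matches its horizontal neighbours and the
 * below-left / below / below-right images in the adjacent rows. */
static int images_match(int a, int b) {
    int d = abs(a - b);
    return d == 1 || d == row_len - 1 || d == row_len || d == row_len + 1;
}

/* O(N) offset detection: match one sample image (chosen from the
 * middle so it has neighbours on both sides) against every later
 * image and collect the offsets at which matches occur. Each offset
 * found would then seed an OFFSET-type rule. Returns the count. */
int detect_offsets(int nimages, int *offsets) {
    int sample = nimages / 2;
    int n = 0;
    for (int other = sample + 1; other < nimages; other++)
        if (images_match(sample, other))
            offsets[n++] = other - sample;
    return n;
}
```

  For a 200-image shoot with 30-image rows this finds the offsets 1,
  29, 30 and 31, matching the below-left, below, below-right reasoning
  above.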

  I can't think of a "cheap" autodetect mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/hugin/+bug/883208/+subscriptions

