
c2c-oerpscenario team mailing list archive

Re: [Bug 709575] Re: [6.0.1][stock] very very slow (22 minutes!!!) to process a large picking

 

Olivier,

I agree that the name_get slowness is negligible compared to the
picking-processing bug. Still, it is a bug in its own right (even an
inconsistency of the osv.memory), and the main impact of the name_get calls
is actually on small pickings, say under 20 moves, over a network with some
latency: it makes the GTK client look sluggish.
No time to investigate further yet. What I can tell you:
in our specific case, our customer also misused OpenERP by entering many
one-line moves instead of grouping them; they could actually redo the
315-move picking as around 120 moves. Also, this doesn't sound like a
top-priority bug (at least to us, say compared to the API consistency
issues, which impact third-party development cycles), even though it's
important to get it fixed one day, especially before attempting larger
deployments of OpenERP.


On Wed, Feb 2, 2011 at 1:10 PM, Olivier Dony (OpenERP) <
709575@xxxxxxxxxxxxxxxxxx> wrote:

> Just for info, here are some timings found out while investigating
> related bug 709567:
>
> | #   | A  | B  | C |  D  |
> | 64  | 7  | 5  | 2 | 15  |
> | 128 | 14 | 11 | 3 | 56  |
> | 256 | 28 | 21 | 4 | 232 |
>
> with:
> #: lines in purchase.order
> A: time to confirm po (seconds)
> B: time to duplicate po (seconds)
> C: time to open picking processing wizard (seconds)
> D: time to process picking with wizard (seconds)
>
> This not only confirms that step D has a total time in O(n²), which is
> too much (this is the current bug), but also shows that the name_get()
> calls from the client, counted in column C, are almost negligible, even
> if YMMV depending on the network speed (this is bug 709567).
>
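A quick sanity check on column D above: n doubles between rows, so an O(n²)
step should roughly quadruple each time. A minimal Python sketch over the
quoted figures:

```python
# Column D from the table above: picking lines -> wizard processing time (s)
timings = {64: 15, 128: 56, 256: 232}

ns = sorted(timings)
for small, big in zip(ns, ns[1:]):
    ratio = timings[big] / timings[small]
    print(f"{small} -> {big} lines: time x{ratio:.2f}")

# Ratios near 4 for each doubling of n are consistent with O(n^2);
# an O(n) step would only double per row.
```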
> --
> You received this bug notification because you are a direct subscriber
> of the bug.
> https://bugs.launchpad.net/bugs/709575
>
> Title:
>  [6.0.1][stock] very very slow (22 minutes!!!) to process a large
>  picking
>
> Status in OpenERP Modules (addons):
>   Confirmed
>
> Bug description:
>  Hey guys,
>
>   <content randomly removed due to foul language, please respect the
> Etiquette!> full description + screenshot here (new bug because different
> concern):
>   https://bugs.launchpad.net/openobject-addons/+bug/709559
>
>  During that time, my server CPU usage shows roughly:
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  32290 postgres  20   0 54388  38m  32m R   71  1.2   1:47.89 postgres
>  32284 akretion  20   0  186m 113m  23m S   21  3.5   0:49.00 python
>
>  So an SQL log analyzer will help us to find if there is some slow
>  query. Fortunately here the case is almost daily, so we will be able
>  to log the queries. But if you have an idea, be my guest, cause this
>  would impact large companies very badly.
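On the SQL log analyzer idea above: one low-effort starting point (a sketch;
the config path and log location are assumptions, adjust for your install) is
PostgreSQL's built-in slow-statement logging:

```shell
# In postgresql.conf (on Debian/Ubuntu typically under
# /etc/postgresql/<version>/main/ -- the exact path is an assumption):
#
#   log_min_duration_statement = 200   # log every statement slower than 200 ms
#   log_line_prefix = '%t [%p] '       # timestamp + backend pid, to correlate
#
# Then reload the server and watch the log for the slow queries:
sudo /etc/init.d/postgresql reload
tail -f /var/log/postgresql/postgresql-*.log
```

The resulting log can then be fed to an analyzer to rank queries by total time.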
>
>  BTW, this is our hardware, the rest of the perf is pretty good:
>
>  root@audiolivroserver:~# cat /proc/cpuinfo
>  processor     : 0
>  vendor_id     : GenuineIntel
>  cpu family    : 6
>  model         : 23
>  model name    : Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz
>  stepping      : 10
>  cpu MHz               : 1603.000
>  cache size    : 3072 KB
>  physical id   : 0
>  siblings      : 2
>  core id               : 0
>  cpu cores     : 2
>  apicid                : 0
>  initial apicid        : 0
>  fdiv_bug      : no
>  hlt_bug               : no
>  f00f_bug      : no
>  coma_bug      : no
>  fpu           : yes
>  fpu_exception : yes
>  cpuid level   : 13
>  wp            : yes
>  flags         : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov
> pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc
> arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3
> cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
>  bogomips      : 5852.43
>  clflush size  : 64
>  cache_alignment       : 64
>  address sizes : 36 bits physical, 48 bits virtual
>  power management:
>
>  processor     : 1
>  vendor_id     : GenuineIntel
>  cpu family    : 6
>  model         : 23
>  model name    : Intel(R) Core(TM)2 Duo CPU     E7500  @ 2.93GHz
>  stepping      : 10
>  cpu MHz               : 1603.000
>  cache size    : 3072 KB
>  physical id   : 0
>  siblings      : 2
>  core id               : 1
>  cpu cores     : 2
>  apicid                : 1
>  initial apicid        : 1
>  fdiv_bug      : no
>  hlt_bug               : no
>  f00f_bug      : no
>  coma_bug      : no
>  fpu           : yes
>  fpu_exception : yes
>  cpuid level   : 13
>  wp            : yes
>  flags         : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov
> pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe lm constant_tsc
> arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3
> cx16 xtpr pdcm sse4_1 xsave lahf_lm tpr_shadow vnmi flexpriority
>  bogomips      : 5851.96
>  clflush size  : 64
>  cache_alignment       : 64
>  address sizes : 36 bits physical, 48 bits virtual
>  power management:
>
> To unsubscribe from this bug, go to:
> https://bugs.launchpad.net/openobject-addons/+bug/709575/+subscribe
>

-- 
You received this bug notification because you are a member of C2C
OERPScenario, which is subscribed to the OpenERP Project Group.
https://bugs.launchpad.net/bugs/709575
