
kernel-packages team mailing list archive

[Bug 1072472] Re: e1000 with Intel Pro1000GT shows strange ICMP behaviour

 

[Expired for linux (Ubuntu) because there has been no activity for 60
days.]

** Changed in: linux (Ubuntu)
       Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1072472

Title:
  e1000 with Intel Pro1000GT shows strange ICMP behaviour

Status in linux package in Ubuntu:
  Expired

Bug description:
  Hi, I have been sitting here for five days configuring four brand-new Lenovo TS130 servers with 12.10 Server, each having two additional Intel Pro1000GT cards. Syslog shows that the e1000 driver is used. The interface comes up normally but shows strange ICMP echo_request (ping) behaviour.
  Onboard NIC (em1):
  Oct 28 20:20:31 cnode4 kernel: [    0.916024] e1000e: Intel(R) PRO/1000 Network Driver - 2.0.0-k
  Oct 28 20:20:31 cnode4 kernel: [    0.916031] e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
  Oct 28 20:20:31 cnode4 kernel: [    0.916078] e1000e 0000:00:19.0: >setting latency timer to 64
  Oct 28 20:20:31 cnode4 kernel: [    0.916144] e1000e 0000:00:19.0: >Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode

  One of the Pro1000GT cards (ethX):
  Oct 28 20:20:31 cnode4 kernel: [    0.916187] e1000e 0000:00:19.0: >irq 43 for MSI/MSI-X
  Oct 28 20:20:31 cnode4 kernel: [    0.917102] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
  Oct 28 20:20:31 cnode4 kernel: [    0.917106] e1000: Copyright (c) 1999-2006 Intel Corporation.
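
  To double-check which driver is bound to which interface (a quick sanity
  check, assuming ethtool is installed and ethX is one of the card ports):

     ethtool -i em1    # onboard port, should report "driver: e1000e"
     ethtool -i ethX   # Pro1000GT port, should report "driver: e1000"
     lspci -k          # lists the kernel driver in use for each PCI device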

  
  When I run "ping -v -i 0.2 172.16.0.3", the ICMP round-trip time starts at about 100 ms, decreases by roughly 1 ms with every packet until it reaches about 1 ms, and then jumps back up to the maximum... scary and strange:
  PING 172.16.0.3 (172.16.0.3) 56(84) bytes of data.
  64 bytes from 172.16.0.3: icmp_req=1 ttl=64 time=11.2 ms
  64 bytes from 172.16.0.3: icmp_req=2 ttl=64 time=10.7 ms
  64 bytes from 172.16.0.3: icmp_req=3 ttl=64 time=9.73 ms
  64 bytes from 172.16.0.3: icmp_req=4 ttl=64 time=8.71 ms
  64 bytes from 172.16.0.3: icmp_req=5 ttl=64 time=7.73 ms
  64 bytes from 172.16.0.3: icmp_req=6 ttl=64 time=6.73 ms
  64 bytes from 172.16.0.3: icmp_req=7 ttl=64 time=105 ms
  64 bytes from 172.16.0.3: icmp_req=8 ttl=64 time=104 ms
  64 bytes from 172.16.0.3: icmp_req=9 ttl=64 time=103 ms
  64 bytes from 172.16.0.3: icmp_req=10 ttl=64 time=102 ms
  64 bytes from 172.16.0.3: icmp_req=11 ttl=64 time=101 ms
  64 bytes from 172.16.0.3: icmp_req=12 ttl=64 time=101 ms
  64 bytes from 172.16.0.3: icmp_req=13 ttl=64 time=100 ms

  Am I seeing some kind of counter loop here?
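
  The sawtooth pattern looks more like the driver's dynamic interrupt
  throttling than like anything on the wire. One way to test that theory
  (a diagnostic sketch, assuming eth1 is one of the Pro1000GT ports and
  that the driver exposes its throttle rate through ethtool) would be:

     ethtool -c eth1              # show current interrupt coalescing settings
     ethtool -C eth1 rx-usecs 0   # turn interrupt moderation off, if supported
     ping -i 0.2 172.16.0.3       # repeat the test

  If the round-trip times then stay flat near 1 ms, the throttling logic
  is the likely culprit.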

  The network performance is bad too, no wonder.
  I changed the NIC, the cable, the switch port, and the switch... still the same. In addition, a direct crosslink does not work at all; I tried both a normal and a crossover cable. In all tests, mii-diag and mii-tool report that everything is fine.
  The onboard NIC uses e1000e and works as expected, with ping round-trip times far below 1 ms.

  Would you say that this is a bug, or is it a "new" feature I have never
  heard about, and how can I get rid of it?
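
  In case it is the throttling, a possible workaround (a sketch based on the
  documented e1000 module parameters, untested on these boxes) would be to
  pin the throttle rate when the module loads:

     # check that the parameter exists for the loaded driver
     modinfo e1000 | grep -i InterruptThrottleRate

     # /etc/modprobe.d/e1000.conf -- one value per port, 0 = throttling off
     options e1000 InterruptThrottleRate=0,0

  and then reload the module (or reboot) before retesting.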

  Thanks for any answers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1072472/+subscriptions