SocketCAN CAN bus Arbit-lost error increments once program started - python

I'm doing a project that will connect multiple subsystems (sensors, controller, etc) via CAN bus. I'm using SocketCAN and have settings as below:
root@ngtianxun-desktop:~# ip -details -statistic link show can0
5: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP mode DEFAULT group default qlen 10
    link/can promiscuity 0 minmtu 0 maxmtu 0
    can state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 10
    bitrate 500000 sample-point 0.600
    tq 100 prop-seg 3 phase-seg1 8 phase-seg2 8 sjw 4
    RDC_CAN: tseg1 5..16 tseg2 3..8 sjw 1..4 brp 2..131072 brp-inc 2
    clock 20000000
    re-started bus-errors arbit-lost error-warn error-pass bus-off
    0          0          337        0          0          0       numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    RX: bytes  packets  errors  dropped overrun mcast
    27841616   3498429  0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    9504120    2357958  0       0       0       0
root@ngtianxun-desktop:~#
Python scripts have been written to constantly monitor the subsystems and to write to them on request from the Python side.
My question here is: why am I seeing arbit-lost incrementing by 1 roughly every ~5 minutes once my Python program runs? Does it indicate any serious issue? Does it mean the data frame is lost? Is there any concern if I just let it be like this? Anyone who could help to answer and explain would be appreciated!
Worth noting: the programs have been running for about ~3 days. Only arbit-lost is observed; there is no re-started, bus-errors, error-warn, error-pass, or bus-off count, and there are no errors, dropped, overrun, mcast, carrier, or collsns errors in the TX/RX fields.

Usually you can safely ignore arbitration-lost errors. They just mean that a message lost arbitration to a higher-priority message being transmitted at the same time. CAN is designed to be robust against this: the controller automatically retransmits the frame that lost arbitration, so no data is lost.
I recommend the following reads:
Wikipedia
SocketCAN
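If you want to keep an eye on the counter from Python instead of re-running the command by hand, here is a minimal sketch (assuming the ip -details -statistics output format shown in the question) that polls it periodically:
import re
import subprocess
import time

# Minimal sketch: poll the arbit-lost counter from the ip(8) statistics
# output shown above. The counter values appear on the line following the
# "re-started bus-errors arbit-lost ..." header; arbit-lost is the third value.
def arbit_lost(iface='can0'):
    out = subprocess.check_output(
        ['ip', '-details', '-statistics', 'link', 'show', iface]).decode()
    m = re.search(r're-started bus-errors arbit-lost.*?\n\s*(\d+)\s+(\d+)\s+(\d+)', out)
    return int(m.group(3))

while True:
    print('arbit-lost:', arbit_lost('can0'))
    time.sleep(60)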

Related

Guarantee that a C program starting at exactly the same time as a Python program will have the same integer time value

I am currently writing code involving piping data from a C program to a Python program. This requires that they both have exactly the same time value as an integer. My method of getting the time is:
time(0) for C
int(time.time()) for Python
However, I am getting inconsistencies in output leading me to believe that this is not resulting in the same value. The C program takes < 0.001s to run, while:
time ./cprog | python pythonprog.py
gives times typically looking like this:
real 0m0.043s
user 0m0.052s
sys 0m0.149s
Approximately one in every 5 runs results in the expected output. Can I make this more consistent?
Not a solution - but an explanation.
When starting Python (or another interpreted/VM language), there is usually a startup cost associated with reading and parsing the many modules that are needed. Even a tiny Python program like 'print 5' will perform a large number of I/O operations.
The startup cost will delay the initial lookup for the current time.
From the strace output below, invoking a Python script results in >200 openat calls, ~190 (f)stat calls, >20 mmap calls, etc.
strace -c python prog.py
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 31.34    0.000293           1       244       178 openat
  9.84    0.000092           1       100           fstat
  9.20    0.000086           1        90        60 stat
  8.98    0.000084           1        68           rt_sigaction
  7.81    0.000073           1        66           close
  2.14    0.000020           2         9           brk
  1.82    0.000017           9         2           munmap
  1.39    0.000013           1        26           mmap
  1.18    0.000011           2         5           lstat
...
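To get a feel for the size of that delay, here is a rough, hypothetical measurement: compare the parent's clock reading taken just before launching a fresh interpreter with the time that interpreter reads once it is finally up:
import subprocess
import time

# Rough illustration of the interpreter startup cost: the difference between
# the parent's clock reading and the child's first clock reading is the
# startup delay described above.
before = time.time()
out = subprocess.check_output(['python3', '-c', 'import time; print(time.time())'])
inside = float(out)
print('interpreter startup delay: %.3f s' % (inside - before))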

cisco interface counters, multi string values

I want to extract the RX and TX counters separately. Is there any Python example to print the counters in the following way from the string output below?
RX_unicast_packets = 2735118
RX_multicast_packets = 703555
TX_unicast_packets = 3983205
TX_multicast_packets = 1916649
RX
2735118 unicast packets 703555 multicast packets 677 broadcast packets
3439365 input packets 3803190483 bytes
1867301 jumbo packets 0 storm suppression bytes
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 291 input discard
15 Rx pause
TX
3983205 unicast packets 1916649 multicast packets 340 broadcast packets
5900194 output packets 3546311266 bytes
1702539 jumbo packets
0 output errors 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 0 output discard
0 Tx pause
"
I believe a regular expression could be valuable here. I'm not sure in what form you receive the above output, but I will assume that it's a string type.
To parse the RX and TX values:
string = 'RX 2735118 unicast packets 703555 multicast packets 677 broadcast packets 3439365 input packets 3803190483 bytes 1867301 jumbo packets 0 storm suppression bytes 0 runts 0 giants 0 CRC 0 no buffer 0 input error 0 short frame 0 overrun 0 underrun 0 ignored 0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop 0 input with dribble 291 input discard 15 Rx pause TX 3983205 unicast packets 1916649 multicast packets 340 broadcast packets 5900194 output packets 3546311266 bytes 1702539 jumbo packets 0 output errors 0 collision 0 deferred 0 late collision 0 lost carrier 0 no carrier 0 babble 0 output discard 0 Tx pause'
re.findall(r'RX \d+\ ', string)  # to get the matching RX values
re.findall(r'TX \d+\ ', string)  # to get the matching TX values
If you want to match specific values, you can make use of re.search groups:
RX_unicast_packets = re.search(r'RX (\d+)\ \w+\ \w+\ (\d+)\ ', string).group(1)
RX_multicast_packets = re.search(r'RX (\d+)\ \w+\ \w+\ (\d+)\ ', string).group(2)
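For the exact variable names requested in the question, a fuller sketch (reusing the string variable defined above and assuming that layout) could split the output at the TX marker and pull the first two counters out of each half:
import re

# Split the output into the RX half and the TX half, then grab the
# unicast/multicast packet counts from each half.
rx_part, tx_part = string.split(' TX ', 1)

RX_unicast_packets, RX_multicast_packets = [
    int(n) for n in re.findall(r'(\d+) (?:unicast|multicast) packets', rx_part)]
TX_unicast_packets, TX_multicast_packets = [
    int(n) for n in re.findall(r'(\d+) (?:unicast|multicast) packets', tx_part)]

print('RX_unicast_packets =', RX_unicast_packets)
print('RX_multicast_packets =', RX_multicast_packets)
print('TX_unicast_packets =', TX_unicast_packets)
print('TX_multicast_packets =', TX_multicast_packets)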
Let me know if you need any further help, and I would be happy to help.

parsing a text file for integers on a given line with python

I've got this test suite that outputs test results as a text file.
Here is a sample:
File Opened: Tuesday, June 26, 2016, 10:17:13 AM
File Opened: Tuesday, June 26, 2016, 10:17:29 AM
Radio Test BER LOOP BACK successful
Radio Test PAUSE successful
Radio Test BER LOOP BACK successful
File Opened: Tuesday, June 28, 2016, 10:18:11 AM
Bits received 10152
Bits in error 117
Access code bit errors 0
Packets received 49
Packets expected 2707
Packets w/ header error 0
Packets w/ CRC error 0
Packets w/ uncorr errors 0
Sync timeouts 3
==================================
Bits received 10368
Bits in error 85
Access code bit errors 0
Packets received 52
Packets expected 2758
Packets w/ header error 0
Packets w/ CRC error 0
Packets w/ uncorr errors 0
Sync timeouts 1
==================================
Bits received 10152
Bits in error 93
Access code bit errors 0
Packets received 49
Packets expected 2707
Packets w/ header error 0
Packets w/ CRC error 0
Packets w/ uncorr errors 0
Sync timeouts 3
I'm trying to extract the number after Bits received and Bits in error, and divide them to get a percentage.
Then, I'd like to plot those as a scatter plot with matplotlib.pyplot.
I'm having a hard time getting those numbers out of this file, however...I'm messing up something with how I'm parsing this.
Either way, I'm just feeling my way through this, and I'm sure I'm not doing this the most elegant way possible. This seems like such a simple task for Python and I'm surely making it much harder than it needs to be.
How would you handle this?
Thanks
Create two lists, one for the received data and one for the error data, then just loop through the file and parse:
receivedData = []
errorData = []
with open("data.txt") as f:
    for line in f:
        if line.startswith("Bits received"):
            receivedData.append(int(line.split()[-1]))
        elif line.startswith("Bits in error"):
            errorData.append(int(line.split()[-1]))
        else:
            # do normal stuff with other lines
            pass
Another easy way would be to use the regex library re. (https://docs.python.org/2/library/re.html)
import re
pattern1 = re.compile(r'Bits received\s+(\d+)')  # \d means any digit character
pattern2 = re.compile(r'Bits in error\s+(\d+)')
with open('path/file.txt', 'r') as f:
    text = f.read()
received = int(pattern1.search(text).group(1))  # search(), not match(), since these lines are not at the start of the file
in_error = int(pattern2.search(text).group(1))
value_of_interest = float(in_error) / received  # float() so the division also works under Python 2
This approach assumes that every input file has those two lines. If that assumption can't be made, break up the matches to check for their presence:
match1 = pattern1.search(text)       # a match object if the pattern is found
if match1:                           # None if it's not found
    received = int(match1.group(1))  # group(1) is the first parenthesized group
match2 = pattern2.search(text)
if match2:
    in_error = int(match2.group(1))
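Since the file contains several of those blocks and the goal is a scatter plot, here is a short sketch combining both ideas (assuming the file is called data.txt as above):
import re
import matplotlib.pyplot as plt

# Collect every "Bits received" / "Bits in error" value, pair them up
# per block, and plot the bit error rate of each block.
with open('data.txt') as f:
    text = f.read()

received = [int(n) for n in re.findall(r'Bits received\s+(\d+)', text)]
errors = [int(n) for n in re.findall(r'Bits in error\s+(\d+)', text)]

percentages = [100.0 * e / r for r, e in zip(received, errors)]
plt.scatter(range(len(percentages)), percentages)
plt.xlabel('test block')
plt.ylabel('bit error rate (%)')
plt.show()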

python getting upload/download speeds

I want to monitor the upload and download speeds on my computer. A program called conky already does it with the following in its conky conf:
Connection quality: $alignr ${wireless_link_qual_perc wlan0}%
${downspeedgraph wlan0}
DLS:${downspeed wlan0} kb/s $alignr total: ${totaldown wlan0}
and it shows me the speeds in almost real time while I browse. I want to be able to access the same information using Python.
You can calculate the speed yourself based on the rx_bytes and tx_bytes counters for the device, polling those values over an interval.
Here is a very simplistic solution I hacked together using Python 3:
#!/usr/bin/python3
import time

def get_bytes(t, iface='wlan0'):
    # Read the cumulative byte counter from sysfs, e.g. /sys/class/net/wlan0/statistics/rx_bytes
    with open('/sys/class/net/' + iface + '/statistics/' + t + '_bytes', 'r') as f:
        data = f.read()
    return int(data)

if __name__ == '__main__':
    (tx_prev, rx_prev) = (0, 0)
    while True:
        tx = get_bytes('tx')
        rx = get_bytes('rx')
        if tx_prev > 0:
            tx_speed = tx - tx_prev
            print('TX:', tx_speed, 'bytes/s')
        if rx_prev > 0:
            rx_speed = rx - rx_prev
            print('RX:', rx_speed, 'bytes/s')
        time.sleep(1)
        tx_prev = tx
        rx_prev = rx
I would look into the psutil module for Python.
Here is a short snippet which prints out the number of bytes sent since you booted your machine:
import psutil
iostat = psutil.net_io_counters(pernic=False)
print iostat[0] #upload only
You could easily expand this to grab the value at a constant interval and diff the two values to determine the number of bytes sent and/or received over that period of time.
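A quick sketch of that idea, sampling the counters one second apart:
import time
import psutil

# Sample the system-wide counters twice, one second apart, and diff them
# to get an approximate upload/download rate in bytes per second.
before = psutil.net_io_counters()
time.sleep(1)
after = psutil.net_io_counters()

print('upload:  ', after.bytes_sent - before.bytes_sent, 'bytes/s')
print('download:', after.bytes_recv - before.bytes_recv, 'bytes/s')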
In order to get interface-specific statistics, the methods already proposed would work just fine.
I'll try instead to suggest a solution for your second request:
It would also be very helpful to know which program was using that
bandwidth, but so far I haven't seen anything that can do that.
As already suggested, nethogs prints process-specific statistics. To my knowledge, there's no easy way to access these values under /proc, so I will explain how nethogs achieves this.
Considering one process with pid PID, nethogs first retrieves a list of all the sockets opened by the process listing the content of /proc/PID/fd:
➜ ~ [1] at 23:59:31 [Sat 15] $ ls -la /proc/21841/fd
total 0
dr-x------ 2 marco marco 0 Nov 15 23:41 .
dr-xr-xr-x 8 marco marco 0 Nov 15 23:41 ..
lrwx------ 1 marco marco 64 Nov 15 23:42 0 -> /dev/pts/15
l-wx------ 1 marco marco 64 Nov 15 23:42 1 -> /dev/null
lrwx------ 1 marco marco 64 Nov 15 23:41 2 -> /dev/pts/15
lrwx------ 1 marco marco 64 Nov 15 23:42 4 -> socket:[177472]
Here we have just one socket, and 177472 is the inode number. We will find all kinds of sockets here: TCPv4, TCPv6, UDP, netlink. In this case I will consider only TCPv4.
Once all the inode numbers are collected, each socket is assigned a unique identifier, namely (IP_SRC, PORT_SRC, IP_DEST, PORT_DEST). And of course the pairing with the PID is stored as well. The tuple (IP_SRC, PORT_SRC, IP_DEST, PORT_DEST) can be retrieved by reading /proc/net/tcp (for TCPv4). In this case:
➜ ~ [1] at 0:06:05 [Sun 16] $ cat /proc/net/tcp | grep 177472
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
38: 1D00A8C0:1F90 0400A8C0:A093 01 00000000:00000000 00:00000000 00000000 1000 0 177472 1 f6fae080 21 4 0 10 5
Addresses are expressed as IP:PORT, with the IP represented as a 4-byte little-endian hexadecimal number. You can then build a key->value structure, where the key is (IP_SRC, PORT_SRC, IP_DEST, PORT_DEST) and the value is the PID.
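As an illustration, here is a small sketch that decodes one of those address fields (the local_address 1D00A8C0:1F90 shown above):
import socket

# Decode a /proc/net/tcp address field: the IP is 4 hex bytes stored
# little-endian, the port is plain hex.
def decode_address(addr):
    ip_hex, port_hex = addr.split(':')
    ip = socket.inet_ntoa(bytes.fromhex(ip_hex)[::-1])  # reverse the little-endian bytes
    port = int(port_hex, 16)
    return ip, port

print(decode_address('1D00A8C0:1F90'))  # -> ('192.168.0.29', 8080)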
At this point, nethogs captures all the network traffic with libpcap. When it detects a TCP packet, it tries to match the tuple (IP_SRC_PACKET, PORT_SRC_PACKET, IP_DEST_PACKET, PORT_DEST_PACKET) against all the connections in the table. Of course it must also try with SRC and DEST swapped, since the packet could be incoming (DL) or outgoing (UL). If it matches a connection, it retrieves the PID of the process the connection belongs to and adds the size of the TCP payload to the TX or RX counter. With the byte counts updated on every captured packet, the transfer speed for each process can be easily calculated.
This, in theory, can be implemented in python with pypcap, even though it needs a bit of work. I have tried to implement something, but it's painfully slow and it requires much more work to be usable. I was monitoring just one PID, with one connection, not updating the connections table, but beyond 3MB/s my script could not keep up with the network traffic.
As you can see it's not exactly trivial. Parsing the output of a tool already available might lead to a better solution and might save you a lot of work.
You could do something dodgy like call conky -i 1 and parse the output:
import subprocess
conky=subprocess.check_output("conky -i 1", shell=True)
lines=conky.splitlines()
print lines[11].split()[1::3]
resulting in:
['1234B', '5678B']
my config looks like:
${scroll 16 $nodename - $sysname $kernel on $machine | }
Uptime: $uptime
Frequency (in MHz): $freq
Frequency (in GHz): $freq_g
RAM Usage: $mem/$memmax - $memperc% ${membar 4}
Swap Usage: $swap/$swapmax - $swapperc% ${swapbar 4}
CPU Usage: $cpu% ${cpubar 4}
Processes: $processes Running: $running_processes
File systems:
/ ${fs_used /}/${fs_size /} ${fs_bar 6 /}
Networking:
Up: ${upspeed eth0} - Down: ${downspeed eth0}
Name PID CPU% MEM%
${top name 1} ${top pid 1} ${top cpu 1} ${top mem 1}
${top name 2} ${top pid 2} ${top cpu 2} ${top mem 2}
${top name 3} ${top pid 3} ${top cpu 3} ${top mem 3}
${top name 4} ${top pid 4} ${top cpu 4} ${top mem 4}

heapy reports memory usage << top

NB: This is my first foray into memory profiling with Python, so perhaps I'm asking the wrong question here. Advice re improving the question appreciated.
I'm working on some code where I need to store a few million small strings in a set. This, according to top, is using ~3x the amount of memory reported by heapy. I'm not clear what all this extra memory is used for and how I can go about figuring out whether I can - and if so how to - reduce the footprint.
memtest.py:
from guppy import hpy
import gc
hp = hpy()
# do setup here - open files & init the class that holds the data
print 'gc', gc.collect()
hp.setrelheap()
raw_input('relheap set - enter to continue') # top shows 14MB resident for python
# load data from files into the class
print 'gc', gc.collect()
h = hp.heap()
print h
raw_input('enter to quit') # top shows 743MB resident for python
The output is:
$ python memtest.py
gc 5
relheap set - enter to continue
gc 2
Partition of a set of 3197065 objects. Total size = 263570944 bytes.
Index    Count    %       Size    %  Cumulative    %  Kind (class / dict of class)
    0  3197061  100  263570168  100   263570168  100  str
    1        1    0        448    0   263570616  100  types.FrameType
    2        1    0        280    0   263570896  100  dict (no owner)
    3        1    0         24    0   263570920  100  float
    4        1    0         24    0   263570944  100  int
So in summary, heapy shows 264MB while top shows 743MB. What's using the extra 500MB?
Update:
I'm running 64 bit python on Ubuntu 12.04 in VirtualBox in Windows 7.
I installed guppy as per the answer here:
sudo pip install https://guppy-pe.svn.sourceforge.net/svnroot/guppy-pe/trunk/guppy
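If it helps with the comparison, here is a small Linux-only sketch (not part of the original script) that reads the resident set size top reports from /proc/self/status, so it can be printed at the same checkpoints as heapy's totals:
import re

# Hypothetical helper (Linux only): report this process's resident set
# size, i.e. the figure top shows, next to heapy's numbers.
def rss_kb():
    with open('/proc/self/status') as f:
        return int(re.search(r'VmRSS:\s+(\d+) kB', f.read()).group(1))

print('resident set size: %d kB' % rss_kb())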
