python interaction with BPF maps

I'm wondering if there is an easy way to initialize BPF maps from Python userspace. For my project, I'll have a scary-looking NxN 2D array of floats for each process. For simplicity's sake, let's assume N is constant across processes (say 5). To achieve kernel support for this data, I could do something like:
b = BPF(text = """
typedef struct
{
float transMat[5][5];
} trans_struct;
BPF_HASH(trans_mapping, char[16], trans_struct);
.....
""")
I'm wondering if there's an easy way to initialize this map from Python. Something like:
for ele in someDictionary:
    # assume someDictionary has mapping (comm -> 5x5 float matrix)
    b["trans_mapping"].insert(ele, someDictionary[ele])
I suppose the crux of my confusion is: 1) are all map methods available to the user, and 2) how do we ensure type consistency when going from Python objects to C structures?

Solution based on pchaigno's comment -- The key things to note are the use of ctypes to ensure type consistency across environments, and extracting the table by indexing the BPF program object. Because maps can be retrieved by indexing, the get_table() function is now considered deprecated. This shows the general structure for loading data into a map from the Python front end, but doesn't completely conform to the specifics of my question.
from time import sleep, strftime
from bcc import BPF
from bcc.utils import printb
from bcc.syscall import syscall_name, syscalls
from ctypes import *

b = BPF(text = """
BPF_HASH(start, u32, u64);

TRACEPOINT_PROBE(raw_syscalls, sys_exit)
{
    u32 syscall_id = args->id;
    u32 key = 1;
    u64 *val;

    u32 uid = bpf_get_current_uid_gid();

    if (uid == 0)
    {
        val = start.lookup(&key);  // find value associated with key 1
        if (val)
            bpf_trace_printk("Hello world, I have value %d!\\n", *val);
    }
    return 0;
}
""")

thisStart = b["start"]
thisStart[c_int(1)] = c_int(9)  # insert key-value pair 1->9

while 1:
    try:
        (task, pid, cpu, flags, ts, msg) = b.trace_fields()
    except KeyboardInterrupt:
        print("Detaching")
        exit()
    print("%-18.9f %-16s %-6d %s" % (ts, task, pid, msg))

Related

How to use BAC0 readRange in Python

Hi everyone, I'm trying to use the BAC0 package in Python 3 to get the values of multiple points on a BACnet network.
I use something like the following:
bacnet = BAC0.lite(ip=x.x.x.x)
tmp_points = bacnet.readRange("11:2 analogInput 0 presentValue")
but it doesn't seem to work :(
The error is:
BAC0.core.io.IOExceptions.NoResponseFromController: APDU Abort Reason : unrecognizedService
And in the documentation I can only find:
def readRange(
    self,
    args,
    range_params=None,
    arr_index=None,
    vendor_id=0,
    bacoid=None,
    timeout=10,
):
    """
    Build a ReadProperty request, wait for the answer and return the value
    :param args: String with <addr> <type> <inst> <prop> [ <indx> ]
    :returns: data read from device (str representing data like 10 or True)
    *Example*::
        import BAC0
        myIPAddr = '192.168.1.10/24'
        bacnet = BAC0.connect(ip = myIPAddr)
        bacnet.read('2:5 analogInput 1 presentValue')
    Requests the controller at (Network 2, address 5) for the presentValue of
    its analog input 1 (AI:1).
    """
To read multiple properties from a device object, you must use readMultiple.
readRange will read from a property acting like an array (e.g. TrendLog objects implement their records as an array; we use readRange to read them in chunks of records).
Details on how to use readMultiple can be found here: https://bac0.readthedocs.io/en/latest/read.html#read-multiple
A simple example would be
bacnet = BAC0.lite()
tmp_points = bacnet.readMultiple("11:2 analogInput 0 presentValue description")
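If you actually need readRange itself (for array-like properties such as TrendLog records), a rough, untested sketch could look like the following; the device address and object instance ("11:2 trendLog 1 logBuffer") are placeholders and must match your network:
import BAC0

bacnet = BAC0.lite(ip='x.x.x.x/24')   # your local interface address

# readRange targets properties that behave like arrays, e.g. a TrendLog's
# record buffer; addressing follows the "<addr> <type> <inst> <prop>" format.
records = bacnet.readRange("11:2 trendLog 1 logBuffer")
print(records)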

Python: Data-structure and processing of GPS points and properties

I'm trying to read data from a CSV and then process it in different ways (for starters, just the average).
Data
(OneDrive) https://1drv.ms/u/s!ArLDiUd-U5dtg0teQoKGguBA1qt9?e=6wlpko
The data looks like this:
ID; Property1; Property2; Property3...
1; ....
1; ...
1; ...
2; ...
2; ...
3; ...
...
Every line is a GPS point. All points with the same ID (for example 1) together form one route. The routes are not all the same length, and some IDs are skipped, so the IDs are not a seamless increasing sequence.
I should add that the points are ALWAYS the same number of meters apart from each other, and that I don't need the XY information currently.
Wanted Result
In the end I want something like this:
[ID, AVG_Property1, AVG_Property2, ...]
[1, 1.00595, 2.9595, ...]
[2, 1.50606, 1.5959, ...]
What I got so far
import os
import numpy
import pandas as pd

data = pd.read_csv(os.path.join('C:\\data', 'data.csv'), sep=';')

# [id, len, prop1, prop2, ...]
routes = numpy.zeros((data.size, 10))  # 10 properties
sums = numpy.zeros(8)
nr_of_entries = 0
current_id = 1

for index, row in data.iterrows():
    if int(row['id']) != current_id:  # after the last point of the route
        routes[current_id-1][0] = current_id
        routes[current_id-1][1] = nr_of_entries  # how many points are in this route?
        routes[current_id-1][2] = sums[0] / nr_of_entries
        routes[current_id-1][3] = sums[1] / nr_of_entries
        routes[current_id-1][4] = sums[2] / nr_of_entries
        routes[current_id-1][5] = sums[3] / nr_of_entries
        routes[current_id-1][6] = sums[4] / nr_of_entries
        routes[current_id-1][7] = sums[5] / nr_of_entries
        routes[current_id-1][8] = sums[6] / nr_of_entries
        routes[current_id-1][9] = sums[7] / nr_of_entries
        current_id = int(row['id'])
        sums = numpy.zeros(8)
        nr_of_entries = 0
    sums[0] += row[3]
    sums[1] += row[4]
    sums[2] += row[5]
    sums[3] += row[6]
    sums[4] += row[7]
    sums[5] += row[8]
    sums[6] += row[9]
    sums[7] += row[10]
    nr_of_entries = nr_of_entries + 1

routes
My problem
1.) The way I did it, I have to copy-paste the same code for every other processing approach, since as stated I need to process the data in multiple different ways. Average is just an example.
2.) The reading of the data is clumsy and fails when IDs are missing.
3.) I'm a C# developer, so my approach would be to create a class 'Route' which holds all the points and then provides methods like 'calculate average for prop 1'. That way I could also tweak the data if needed (extreme values, for example). But I have no idea how this would be done in Python, or whether it is a reasonable approach in this language.
4.) Is there a more elegant way to iterate through the original CSV, getting Route ID 1, then Route ID 2, and so on? Maybe something like LINQ queries in C#?
Thanks for any help.
Here is a solution and some ideas you can use. The example features multiple options for the same issue, so you have to choose which fits your purpose best. It is written for Python 3.7; you didn't specify a version, so I hope this works.
class Route(object):
    """description of class"""

    def __init__(self, id, rawdata):  # on startup
        self.id = id
        self.rawdata = rawdata
        self.avg_Prop1 = self.calculate_average('Prop1')
        self.sum_Prop4 = None

    def calculate_average(self, Prop_Name):  # self reference for first argument in class method
        return self.rawdata[Prop_Name].mean()

    def give_Prop_data(self, Prop_Name):  # return the Prop data as a list
        return self.rawdata[Prop_Name].tolist()

    def any_function(self, my_function, Prop_Name):  # not sure what dataframes support so turning it into a list first
        return my_function(self.rawdata[Prop_Name].tolist())
# end of class definition

data = pd.read_csv('testdata.csv', sep=';')
# [id, len, prop1, prop2, ...]

route_list = []  # list of all the objects created from the Route class
for i in data.id.unique():
    print('Current id:', i, ' with ', len(data[data['id'] == i]), 'entries')
    route_list.append(Route(i, data[data['id'] == i]))

# created the Prop1 average in initialization of Route, so just accessing the attribute
print(route_list[1].avg_Prop1)

for current_route in route_list:
    print('Route ', current_route.id, ' Properties :')
    for i in current_route.rawdata.columns[1:]:  # for all except the first (id)
        print(i, ' has average ', current_route.calculate_average(i))  # i is the string of the column, not just an id

# or pass any function that you want
route_list[1].sum_Prop4 = route_list[1].any_function(sum, 'Prop4')
print(route_list[1].sum_Prop4)

# which is equivalent to
print(sum(route_list[1].rawdata['Prop4']))
To address your individual problems out of order:
For 2. and 4.) Looping only over the existing IDs (data.id.unique()) solves the problem. I have no idea what LINQ queries are, but I assume they are similar. In general, Python has a great way of looping over objects (like for current_route in route_list), which is worth looking into if you want to use it a little more; see also the groupby sketch below.
For 1. and 3.) Again, looping solves the issue. I created a class in the example, mostly to show the syntax for classes. The benefits and drawbacks of using classes should be the same in Python as in C#.
As it is right now the class probably isn't great, but this depends on how you want to use it. If the class should just be a practical way of storing and accessing data, it shouldn't have the methods, because you don't need an individual average method for each route. Then you can just access its data and use it in a function, as in sum(route_list[1].rawdata['Prop4']). If, however, different calculations are necessary depending on the data (the number of rows, for example), it might come in handy to use the method calculate_average and differentiate in there.
Another example would be the use of the attributes. If you need the average for Prop1 every time, creating it at initialization seems like a good idea; otherwise I wouldn't bother always calculating it.
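As an aside, if you only ever need the per-route averages and no per-route tweaking, pandas can do the aggregation in one step. A minimal sketch using the same file and separator as above (column names assumed to match the CSV header):
import pandas as pd

data = pd.read_csv('testdata.csv', sep=';')

# One row per route id, averaging every other (numeric) column;
# missing ids are simply absent instead of leaving empty rows.
route_averages = data.groupby('id').mean().reset_index()
print(route_averages)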
I hope this helps!

Python's construct - .sizeof() for construct depending on its parent

This post is about Python's Construct library
THE CODE
These are the definitions of my constructs:
from construct import *
AttributeHandleValuePair = "attribute_handle_value_pair" / Struct(
    "handle" / Int16ul,
    "value" / Bytes(this._.length - 2)
)

AttReadByTypeResponse = "read_by_type_response" / Struct(
    "length" / Int8ul,  # The size in bytes of each handle/value pair
    "attribute_data_list" / AttributeHandleValuePair[2]
)
THE PROBLEM
Trying to run the following command:
AttReadByTypeResponse.sizeof(dict(length=4, attribute_data_list=[dict(handle=1, value=2), dict(handle=3, value=4)]))
I receive the following error:
SizeofError: cannot calculate size, key not found in context
sizeof -> read_by_type_response -> attribute_data_list -> attribute_handle_value_pair -> value
WHAT I FOUND OUT
The size of the value field for each attribute_handle_value_pair is derived from the length field of its parent. I think that the sizeof() method is trying to calculate the size of attribute_handle_value_pair first, while the length field of read_by_type_response is still undefined, therefore it cannot calculate its size.
I tried changing the length of the value field to a static value, and it worked well.
MY QUESTION
How can I calculate the sizeof() for a construct that depends on its parent construct?
Should I redesign the way this protocol is modeled? If so then how?
This is currently still an issue in Construct 2.9/2.10 (2.8 appears to be fine).
As a workaround, you can compute the size with a given context by summing the size of the subcons and passing in the length directly.
sum(sc.sizeof(length=4) for sc in AttReadByTypeResponse.subcons)
If you use the new compiled struct feature, you will need to access the original struct using defersubcon.
compiled_struct = AttReadByTypeResponse.compile()
sum(sc.sizeof(length=4) for sc in compiled_struct.defersubcon.subcons)
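Putting the definitions and the workaround together, a small sketch; the expected total assumes the workaround resolves this._.length as described above:
from construct import *

AttributeHandleValuePair = "attribute_handle_value_pair" / Struct(
    "handle" / Int16ul,
    "value" / Bytes(this._.length - 2)
)

AttReadByTypeResponse = "read_by_type_response" / Struct(
    "length" / Int8ul,
    "attribute_data_list" / AttributeHandleValuePair[2]
)

# With length=4, each handle/value pair is 2 + (4 - 2) = 4 bytes,
# so the expected total is 1 + 2 * 4 = 9 bytes.
total = sum(sc.sizeof(length=4) for sc in AttReadByTypeResponse.subcons)
print(total)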

pyopencl copy_if(): is it possible to minimize the return buffer size?

Here's a simple pyopencl copy_if() example.
First, let's create a large set (2^25) of random ints, and query those below the 500,000 threshold:
import pyopencl as cl
import numpy as np
import my_pyopencl_algorithm
import time
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
from pyopencl.clrandom import rand as clrand
random_gpu = clrand(queue, (2**25,), dtype=np.int32, a=0, b=10**6)
start = time.time()
final_gpu, count_gpu, evt = my_pyopencl_algorithm.copy_if(random_gpu, "ary[i] < 500000", queue = queue)
final = final_gpu.get()
count = int(count_gpu.get())
print '\ncopy_if():\nresults=',final[:count], '\nfound=', count, '\ntime=', (time.time()-start), '\n========\n'
You may have noticed that I'm not calling pyopencl's copy_if, but a fork of it (my_pyopencl_algorithm.copy_if). The fork of pyopencl.algorithm.py can be found here.
The beauty of copy_if is that you get a ready-made count of the desired output, with the matching elements compacted in order from gid=0 upward. What doesn't seem optimal is that it allocates and returns (from the GPU) the entire buffer, with only the first count entries being meaningful. So in my fork of pyopencl.algorithm.py I'm trying to minimize the size of the returned buffer, and I've got this:
def sparse_copy_if(ary, predicate, extra_args=[], preamble="", queue=None, wait_for=None):
    """Copy the elements of *ary* satisfying *predicate* to an output array.
    :arg predicate: a C expression evaluating to a `bool`, represented as a string.
        The value to test is available as `ary[i]`, and if the expression evaluates
        to `true`, then this value ends up in the output.
    :arg extra_args: |scan_extra_args|
    :arg preamble: |preamble|
    :arg wait_for: |explain-waitfor|
    :returns: a tuple *(out, count, event)* where *out* is the output array, *count*
        is an on-device scalar (fetch to host with `count.get()`) indicating
        how many elements satisfied *predicate*, and *event* is a
        :class:`pyopencl.Event` for dependency management. *out* is allocated
        to the same length as *ary*, but only the first *count* entries carry
        meaning.
    .. versionadded:: 2013.1
    """
    if len(ary) > np.iinfo(np.int32).max:
        scan_dtype = np.int64
    else:
        scan_dtype = np.int32

    extra_args_types, extra_args_values = extract_extra_args_types_values(extra_args)

    knl = _copy_if_template.build(ary.context,
            type_aliases=(("scan_t", scan_dtype), ("item_t", ary.dtype)),
            var_values=(("predicate", predicate),),
            more_preamble=preamble, more_arguments=extra_args_types)

    out = cl.array.empty_like(ary)
    count = ary._new_with_changes(data=None, offset=0,
            shape=(), strides=(), dtype=scan_dtype)

    # **dict is a Py2.5 workaround
    evt = knl(ary, out, count, *extra_args_values,
            **dict(queue=queue, wait_for=wait_for))

    # Now copy the first num_results values from out to final_gpu
    # (whose buffer size is minimized).
    prg = cl.Program(ary.context, """
        __kernel void copy_final_results(__global int *final_gpu, __global int *out_gpu)
        {
            __private uint gid;

            gid = get_global_id(0);
            final_gpu[gid] = out_gpu[gid];
        }
        """).build()

    num_results = int(count.get())
    final_gpu = pyopencl.array.zeros(queue, (num_results,), dtype=scan_dtype)

    prg.copy_final_results(queue, (num_results,), None, final_gpu.data, out.data).wait()

    return final_gpu, evt
    #return out, count, evt
That is, I'm creating a final_gpu buffer exactly the size of the output, then copying the meaningful entries to it, and returning it.
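As a hedged aside: if your pyopencl version supports slicing device arrays, a similar effect might be achievable host-side without a dedicated copy kernel, by transferring only the meaningful prefix of the unmodified copy_if() output:
# Sketch only: assumes pyopencl.array supports 1-D slicing and that .get()
# on the slice transfers just those elements.
out, count, evt = my_pyopencl_algorithm.copy_if(random_gpu, "ary[i] < 500000", queue=queue)
n = int(count.get())
final = out[:n].get()   # copy only the first n entries to the host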
If I now run:
start = time.time()
final_gpu, evt = my_pyopencl_algorithm.sparse_copy_if(random_gpu, "ary[i] < 500000", queue = queue)
final = final_gpu.get()
print '\ncopy_if_2():\nresults=', final, '\nfound=', count, '\ntime=', (time.time()-start)
... this seems to yield orders of magnitude improvements in speed. The more sparse results are, the faster it becomes, as the buffer size to be transferred (with high latency) is minimized.
My question is: is there a reason we are returning a full-sized buffer? In other words, am I introducing any bugs, or should I just submit a patch?

Is python uuid1 sequential as timestamps?

The Python docs state that uuid1 uses the current time to form the UUID value, but I could not find a reference that ensures UUID1 is sequential.
>>> import uuid
>>> u1 = uuid.uuid1()
>>> u2 = uuid.uuid1()
>>> u1 < u2
True
>>>
But not always:
>>> def test(n):
...     old = uuid.uuid1()
...     print old
...     for x in range(n):
...         new = uuid.uuid1()
...         if old >= new:
...             print "OOops"
...             break
...         old = new
...     print new
>>> test(1000000)
fd4ae687-3619-11e1-8801-c82a1450e52f
OOops
00000035-361a-11e1-bc9f-c82a1450e52f
UUIDs Not Sequential
No, standard UUIDs are not meant to be sequential.
Apparently some attempts were made with GUIDs (Microsoft's twist on UUIDs) to make them sequential to help with performance in certain database scenarios. But being sequential is not the intent of UUIDs.
http://en.wikipedia.org/wiki/Globally_unique_identifier
MAC Is Last, Not First
No, in standard UUIDs, the MAC address is not the first component. The MAC address is the last component in a Version 1 UUID.
http://en.wikipedia.org/wiki/Universally_unique_identifier
Do Not Assume Which Type Of UUID
The various versions of UUIDs are meant to be compatible with each other. So it may be unreasonable to expect that you always have Version 1 UUIDs. Other programmers may use other versions.
Specification
Read the UUID spec, RFC 4122, by the IETF. Only a dozen pages long.
From the python UUID docs:
Generate a UUID from a host ID, sequence number, and the current time. If node is not given, getnode() is used to obtain the hardware address. If clock_seq is given, it is used as the sequence number; otherwise a random 14-bit sequence number is chosen.
From this, I infer that the MAC address is first, then a (possibly random) sequence number, then the current time. So I would not expect these to be guaranteed to be monotonically increasing, even for UUIDs generated by the same machine/process.
I stumbled upon a probable answer in Cassandra/Python from http://doanduyhai.wordpress.com/2012/07/05/apache-cassandra-tricks-and-traps/
Lexicographic TimeUUID ordering
Cassandra provides, among all the primitive types, support for UUID values of type 1 (time and server based) and type 4 (random).
The primary use of UUID (Unique Universal IDentifier) is to obtain a really unique identifier in a potentially distributed environment.
Cassandra does support version 1 UUIDs. It gives you a unique identifier by combining the computer's MAC address and the number of 100-nanosecond intervals since the beginning of the Gregorian calendar.
As you can see the precision is only 100 nanoseconds, but fortunately it is mixed with a clock sequence to add randomness. Furthermore the MAC address is also used to compute the UUID, so it's very unlikely that you face a collision on one cluster of machines, unless you need to process a really really huge volume of data (don't forget, not everyone is Twitter or Facebook).
One of the most relevant use cases for UUIDs, and especially TimeUUIDs, is to use them as column keys. Since Cassandra column keys are sorted, we can take advantage of this feature to have a natural ordering for our column families.
The problem with the default com.eaio.uuid.UUID provided by the Hector client is that it's not easy to work with. As an ID you may need to bring this value from the server up to the view layer, and that's the gotcha.
Basically, com.eaio.uuid.UUID overrides toString() to give a String representation of the UUID. However this String formatting cannot be sorted lexicographically…
Below are some TimeUUID generated consecutively:
8e4cab00-c481-11e1-983b-20cf309ff6dc at some t1
2b6e3160-c482-11e1-addf-20cf309ff6dc at some t2 with t2 > t1
“2b6e3160-c482-11e1-addf-20cf309ff6dc”.compareTo(“8e4cab00-c481-11e1-983b-20cf309ff6dc”) gives -6 meaning that “2b6e3160-c482-11e1-addf-20cf309ff6dc” is less/before “8e4cab00-c481-11e1-983b-20cf309ff6dc” which is incorrect.
The current textual display of a TimeUUID is split as follows:
time_low – time_mid – time_high_and_version – variant_and_sequence – node
If we re-order it starting with time_high_and_version, we can then sort it lexicographically:
time_high_and_version – time_mid – time_low – variant_and_sequence – node
The utility class is given below:
public static String reorderTimeUUId(String originalTimeUUID)
{
    StringTokenizer tokens = new StringTokenizer(originalTimeUUID, "-");
    if (tokens.countTokens() == 5)
    {
        String time_low = tokens.nextToken();
        String time_mid = tokens.nextToken();
        String time_high_and_version = tokens.nextToken();
        String variant_and_sequence = tokens.nextToken();
        String node = tokens.nextToken();

        return time_high_and_version + '-' + time_mid + '-' + time_low + '-' + variant_and_sequence + '-' + node;
    }
    return originalTimeUUID;
}
The TimeUUIDs become:
11e1-c481-8e4cab00-983b-20cf309ff6dc
11e1-c482-2b6e3160-addf-20cf309ff6dc
Now we get:
"11e1-c481-8e4cab00-983b-20cf309ff6dc".compareTo("11e1-c482-2b6e3160-addf-20cf309ff6dc") = -1
Argumentless use of uuid.uuid1() gives non-sequential results (see the answer by Basil Bourque above), but it can easily be made sequential if you set the clock_seq or node arguments (because in that case uuid1 uses the pure-Python implementation, which guarantees a unique and monotonically increasing timestamp part of the UUID within the current process):
import time
from uuid import uuid1, getnode
from random import getrandbits

_my_clock_seq = getrandbits(14)
_my_node = getnode()

def sequential_uuid(node=None):
    return uuid1(node=node, clock_seq=_my_clock_seq)

def alt_sequential_uuid(clock_seq=None):
    return uuid1(node=_my_node, clock_seq=clock_seq)


if __name__ == '__main__':
    from itertools import count

    old_n = uuid1()           # "Native"
    old_s = sequential_uuid() # Sequential

    native_conflict_index = None
    t_0 = time.time()
    for x in count():
        new_n = uuid1()
        new_s = sequential_uuid()

        if old_n > new_n and not native_conflict_index:
            native_conflict_index = x

        if old_s >= new_s:
            print("OOops: non-sequential results for `sequential_uuid()`")
            break

        if (x >= 10*0x3fff and time.time() - t_0 > 30) or (native_conflict_index and x > 2*native_conflict_index):
            print('No issues for `sequential_uuid()`')
            break

        old_n = new_n
        old_s = new_s

    print(f'Conflicts for `uuid.uuid1()`: {bool(native_conflict_index)}')
    print(f"Tries: {x}")
Multiple-process issues
BUT if you are running several parallel processes on the same machine, then:
node, which defaults to uuid.getnode(), will be the same for all the processes;
clock_seq has a small chance of being the same for some processes (a chance of 1 in 16384).
That might lead to conflicts! This is a general concern when using uuid.uuid1 in parallel processes on the same machine, unless you have access to SafeUUID from Python 3.7.
If you make sure to also set node to a unique value for each parallel process that runs this code, then conflicts should not happen.
Even if you are using SafeUUID and set a unique node, it's still possible to get non-sequential IDs if they are generated in different processes.
If some lock-related overhead is acceptable, then you can store clock_seq in some external atomic storage (for example, in a "locked" file) and increment it with each call: this allows having the same node value on all parallel processes and also makes the IDs sequential. For the case where all parallel processes are subprocesses created using multiprocessing, clock_seq can be shared using multiprocessing.Value, as in the sketch below.
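A minimal sketch of that last idea, assuming all workers are multiprocessing children of one parent so the shared Value is inherited; the 14-bit mask keeps clock_seq in its valid range:
import multiprocessing
from uuid import uuid1, getnode

_shared_clock_seq = multiprocessing.Value('i', 0)   # created in the parent process

def shared_sequential_uuid():
    # Same node everywhere, and a clock_seq that is atomically incremented
    # on every call across all worker processes.
    with _shared_clock_seq.get_lock():
        _shared_clock_seq.value = (_shared_clock_seq.value + 1) & 0x3FFF
        return uuid1(node=getnode(), clock_seq=_shared_clock_seq.value)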
