Combining Garmin 16-bit and 32-bit timestamps in Python

I own a Garmin watch, and to report statistics Garmin provides an SDK.
In this SDK, timestamps come in two formats:
one is a full 32-bit timestamp,
the other is only the lower 16 bits, which must be combined with the first.
I don't know how to code this in Python. Can somebody help me?
Here is their explanation and the formula:
*timestamp_16 is a 16 bit version of the timestamp field (which is 32 bit) that represents the lower 16 bits of the timestamp.
This field is meant to be used in combination with an earlier timestamp field that is used as a reference for the upper 16 bits.
The proper way to deal with this field is summarized as follows:
mesgTimestamp += ( timestamp_16 - ( mesgTimestamp & 0xFFFF ) ) & 0xFFFF;*
My problem is not obtaining the two timestamps, but combining them in Python.
Thanks

I'm not sure of the result, but I followed their explanation literally:
I shifted the 32-bit timestamp 16 bit positions to the left,
then shifted it 16 places back to the right, and did a bitwise OR with the 16-bit timestamp.
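For what it's worth, the SDK formula quoted above translates almost verbatim to Python. Unlike a plain shift-and-OR, the `& 0xFFFF` on the difference also handles the case where the 16-bit value has rolled over past the reference timestamp. A minimal sketch (the function name is my own):

```python
def merge_timestamp(mesg_timestamp, timestamp_16):
    """Fold a 16-bit timestamp into a 32-bit reference timestamp.

    Direct translation of the SDK formula; the final mask keeps the
    result within 32 bits, as it would be in C.
    """
    mesg_timestamp += (timestamp_16 - (mesg_timestamp & 0xFFFF)) & 0xFFFF
    return mesg_timestamp & 0xFFFFFFFF

# 16-bit value rolled over past the reference: 0x0001FFF0 -> 0x00020005
print(hex(merge_timestamp(0x0001FFF0, 0x0005)))  # 0x20005
```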

Related

python runtime 3x deviation for 32 vs 34 char IDs

I am running an aggregation script, which heavily relies on aggregating / grouping on an identifier column. Each identifier in this column is 32 characters long as a result of a hashing function.
so my ID column which will be used in pandas groupby has something like
e667sad2345...1238a
as an entry.
I tried to add a prefix "ID" to some of the samples, for easier separation afterwards. Thus, I had some identifiers with 34 characters and others still with 32 characters.
e667sad2345...1238a
IDf7901ase323...1344b
Now the aggregation script takes 3 times as long (6000 vs 2000 seconds). And the change in the ID column (adding the prefix) is the only thing which happened. Also note, that I generate data separately and save a pickle file which is read in by my aggregation script as input. So the prefix addition is not part of the runtime I am talking about.
So now I am stunned, why this particular change made such a huge impact. Can someone elaborate?
EDIT: I replaced the prefix with suffix so now it is
e667sad2345...1238a
f7901ase323...1344bID
and now it runs again in 2000 seconds. Does groupby use a binary search or something, so that all the IDs starting with the character 'I' are overrepresented?
Ok, I had a revelation what is going on.
My entries are sorted using quicksort, which has an expected runtime of O(n log n). In the worst case, however, quicksort runs in O(n*n). By making my entries imbalanced (20% of the data starts with "I", the other 80% is randomly distributed over alphanumeric characters) I pushed the data closer to a worst case for quicksort.
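The fix described in the EDIT (moving the marker from prefix to suffix, so the leading characters of the keys stay evenly distributed) can be sketched as a hypothetical helper; the function name and `marker` parameter are my own:

```python
def move_marker_to_suffix(identifier, marker="ID"):
    # Relocate the marker to the end of the key: the prefix made ~20% of
    # keys start with "I", skewing the sort; as a suffix the leading
    # characters remain uniformly distributed.
    if identifier.startswith(marker):
        return identifier[len(marker):] + marker
    return identifier

print(move_marker_to_suffix("IDf7901ase323"))  # f7901ase323ID
print(move_marker_to_suffix("e667sad2345"))    # e667sad2345
```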

Converting binary string to 32bit floats with Python sometimes shifts the decimal place

I'm reading out my sensor values via a protocol closely resembling Modbus RTU using Python. Sometimes, two consecutive values have their decimal point shifted by one place, say I read a voltage of 538.154 in one packet and on the next packet it shows 53.963, which I know is off by a factor of ten as the voltage is stable. I'm not really good at programming and am not sure if it is my python code or a problem in the sensor.
The above two values originate from these packets (only first few bytes given, voltage is the first value after id, function code) which I store in a variable called answ:
1: b'\x04\x03 \xc58D\x06\x89\xd9>\x1d\x00\x00\x00\x00n~<\xd9\x11\xb4K [....]'
(correct voltage)
2: b'\x04\x03 \xa1\xf4BW\xda(>\x95\x00\x00\x00\x00\xf6\x0f#\x16\x01.D [....]'
(off by factor of 10)
(The voltage values described are the first values in the 'payload' of both packets.)
I remove the header information via
data = answ[5:37]
first and then try to get the floating point values by
import struct

helper = 0
for i in range(0, len(data), 4):
    try:
        value = struct.unpack(">f", data[i:i + 4])[0]
        global_vars.sensor_vals[helper] = round(value, 3)
        helper = helper + 1
    except struct.error:
        break
Please excuse my crude Python code.
I also tried different approaches like bytes.fromhex(...) instead of bytearray, but with the same result.
Am I doing something wrong or does the sensor show me wrong values?
Best regards & Thank you!
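As a quick sanity check, the first payload float of each packet quoted above (bytes truncated to the first nine) can be decoded directly. Both readings come out exactly as described, which suggests the factor-of-ten discrepancy is already present in the transmitted bytes rather than introduced by the unpacking code:

```python
import struct

# First nine bytes of each packet, as given in the question.
packet1 = b'\x04\x03 \xc58D\x06\x89\xd9'
packet2 = b'\x04\x03 \xa1\xf4BW\xda('

# The question slices the payload with answ[5:], so the first big-endian
# float sits at bytes 5..8.
v1 = struct.unpack(">f", packet1[5:9])[0]
v2 = struct.unpack(">f", packet2[5:9])[0]
print(round(v1, 3), round(v2, 3))  # 538.154 53.963
```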

RFID Unique serial number conversion

My USB-RFID reader receives a 24 bit ID from the card. I can convert this ID into 8 bit/16bit format. But how can I convert it into 40 bit format?
Example for one tag:
24 bit decimal format (as I get from the reader): 0005966009
8,16 bit binary format (converted via Python): 01011011, 0000100010111001
8,16 bit decimal format (converted via Python): 91, 2233
40 bit decimal format (provided by the manufacturer): 455272499385
How can I get that 40 bit number from the 24 bit number?
Tag standard: unique, 125 kHz
Screenshot from manufacturer's system:
No, in general it's impossible to turn a 24 bit number into a 40 bit number. Such a conversion would imply that you add 16 bits of extra information to the 24 bit value. This extra information won't just magically appear out of nowhere.
In your case, the numbers are
24 bit format: 0005966009 (dec) = 5B08B9 (hex)
8+16 bit format: 091 (dec) & 02233 (dec) = 5B (hex) & 08B9 (hex)
40 bit format: 455272499385 (dec) = 6A005B08B9 (hex)
Thus, all three numbers contain 24 common bits (5B08B9 hex). The 40 bit number has an additional prefix value of 6A00 (hex).
Since you did not reveal what RFID system and what tags you are using (other than that they operate at 125 kHz), it's impossible to tell if that prefix is some standard prefix that is the same for all tags or if that prefix changes for every tag (or every manufacturer, customer, etc.)
If the prefix is the same for all your tags, you could easily use the value 6A00hex as the upper 16 bits of each tag serial number. However, if the prefix may be different for different tags, then there is no other way than to get a reader that reads that full 40 bit serial number.
Nevertheless, if all your readers only read the 24 bit value, I don't see why you would even want to use the whole 40 bit value. Even if you already have a database that contains the whole 40 bit value, you could easily trim that value to a 24 bit value (as long as the lower 24 bits are (sufficiently) unique across all your tags).
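Following the breakdown above, and assuming the 6A00 (hex) prefix really is constant for your tags, the 40-bit value can be reconstructed from the 24-bit ID with a simple shift-and-OR (the function name is my own):

```python
PREFIX = 0x6A00  # upper 16 bits taken from the example tag; may be site-specific

def to_40bit(serial_24bit, prefix=PREFIX):
    # Place the 16-bit prefix above the 24-bit serial number.
    return (prefix << 24) | (serial_24bit & 0xFFFFFF)

print(to_40bit(5966009))  # 455272499385, the manufacturer-provided value
```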

ARINC429 Word Construction

Just as a preamble I am using python 3 and the bitstring library.
So Arinc429 words are 32 bit data words.
Bits 1-8 are used to store the label. Say, for example, I want the word to set the latitude; according to the label docs, set latitude uses the octal label
041
I can model this in python by doing:
label = BitArray(oct='041')
print(label.bin)
>> 000100001
The next two bits can be used to send a source, or extend the label by giving an equipment ID. Equipment IDs are given in hex, the one I wish to use is
002
So again, I add it to a new BitArray object and convert it to binary
>> 000000010
Next comes the data field which spans from bits 11-29. Say I want to set the latitude to the general area of London (51.5072). This is where I'm getting stuck as floats can only be 32/64 bits long.
There are 2 other parts of the word, but before I go there I am just wondering, if I am going along the right track, or way off how you would construct such a word?
Thanks.
I think you're on the right track, but you need to either know or decide the format for your data field.
If the 19 bits you want to represent a float are documented somewhere as being a float then look how that conversion is done (as that's not at all a standard number of bits for a floating point number). If those bits are free-form and you can choose both the encode and decode then just pick something appropriate.
There is a standard for 16-bit floats which is occasionally used, but if you only want to represent a latitude I'd go for something simpler. As it can only go from 0 to 360, just scale that to an integer from 0 to 2^19 and store the integer.
So 51.5072 becomes int(51.5072/360*(2**19)) = 75012
Then store this as an unsigned integer
> latitude = BitArray(uint=75012, length=19)
This gives you a resolution of about 0.0007 degrees, which is the best you can hope for. To convert back:
> latitude.uint*360.0/2**19
51.50665283203125

Query for bit based values

I have a problem getting values from PyTables. The values are bit-based, but stored as integers.
One column in my table is an Int32Column() named 'Value'. In this column I store integer values where every bit has a different meaning. So, if I want the information for some bit, I take the value from the table and do some bit manipulation. I don't know how to write a query that selects rows by specific bits.
For example, I want all values in the Value column where the first bit == 1 and the third bit == 1.
How do I write that query?
I'm trying with mask:
[ x['Value'] for x in table.where('((Value & mask) == mask)')]
but, I'm getting exception:
NotImplementedError: unsupported operand types for *and*: int, int
Query processing must be very fast because of the large number of rows expected in the future. One restriction is that the values must be stored as ints in the table, because I receive them from the server in int format. I hope that someone has a better solution.
For future reference.
I had a similar problem and solved it in the following way. As the usual bitwise shift operators (<<, >>) are not available and the & and | operators are logical rather than bitwise, one has to improvise.
To check whether a value VAL has the n-th bit set, we can shift the interesting bit down to position 0, where it determines the parity of the number (2**0). The parity can then be checked using the modulus operator.
So, one can do something like the following to check whether, for example, bit 25 is set and bit 16 is unset.
table.where("((VAL/(2**25))%2==1) & ((VAL/(2**16))%2==0)")
Not elegant, but working for now.
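The same parity trick can be checked in plain Python, here using floor division to mirror the integer division the where() expression relies on (the function name is my own):

```python
def bits_match(value, set_bit, clear_bit):
    # Plain-Python version of the where() condition above: a bit is moved
    # down to position 0 by integer division by 2**n, then tested via
    # parity (mod 2).
    return (value // 2**set_bit) % 2 == 1 and (value // 2**clear_bit) % 2 == 0

print(bits_match((1 << 25) | (1 << 3), 25, 16))   # True: bit 25 set, 16 unset
print(bits_match((1 << 25) | (1 << 16), 25, 16))  # False: bit 16 is set
```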
