Does anyone know how to convert this Python snippet to Node.js:
return "".join(reversed(struct.pack('I',data)))
I tried to do the same in Node.js using Buffer, like this:
var buff = new Buffer(4).fill(0);
buff.writeInt16LE(data, 0);
return new Buffer(buff.reverse().toString('hex'),'hex');
But it does not work exactly like the Python snippet; some data makes my program get stuck with this error:
buffer.js:830
throw new TypeError('value is out of bounds');
^
buff.writeInt16LE(data, 0) requires data to be a valid 16-bit signed integer, that is, an integer from -32,768 to 32,767; anything outside that range throws the "value is out of bounds" error.
However, the 'I' in Python's struct.pack() is for unsigned 32-bit integers, so what you should be using instead is buff.writeUInt32LE(data, 0).
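For reference, here is a minimal sketch (Python 2, since the original snippet builds a str; 0x12345678 is just a sample value) of what the source expression does: it packs the value in native byte order, which is little-endian on most machines, then reverses the bytes, leaving the 4-byte big-endian representation:

import struct

data = 0x12345678
packed = struct.pack('I', data)        # native-order unsigned 32-bit: '\x78\x56\x34\x12' on a little-endian machine
swapped = "".join(reversed(packed))    # bytes reversed: '\x12\x34\x56\x78', i.e. big-endian
print(swapped.encode('hex'))           # 12345678

So the Node side needs a full 32-bit write: buff.writeUInt32LE(data, 0) followed by your existing buff.reverse() produces the same big-endian bytes (writing big-endian directly with buff.writeUInt32BE(data, 0) should work as well).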
The story: I am using a piece of hardware that can be controlled automatically through an Objective-C framework. The framework is already used by many colleagues, so I treat it as a "fixed" library. I would like to use it via Python, and with PyObjC I can already connect to the device, but I fail when sending data to it.
The Objective-C method in the header looks like this:
- (BOOL) executeabcCommand:(NSString*)commandabc
withArgs:(uint32_t)args
withData:(uint8_t*)data
writeLength:(NSUInteger)writeLength
readLength:(NSUInteger)readLength
timeoutMilliseconds:(NSUInteger)timeoutMilliseconds
error:(NSError **) error;
From my Python code, data is an argument that can contain 256 bytes of data such as 0x00, 0x01, 0xFF. My Python code looks like this:
senddata=Device.alloc().initWithCommunicationInterface_(tcpInterface)
command = 'ABCw'
args= 0x00
writelength = 0x100
readlength = 0x100
data = '\x50\x40'
timeout = 500
success, error = senddata.executeabcCommand_withArgs_withData_writeLength_readLength_timeoutMilliseconds_error_(command, args, data, writelength, readlength, timeout, None)
Whatever I send into it, it always shows this:
ValueError: depythonifying 'char', got 'str'
I tried to dig in a little bit, but failed to find anything about converting a string or list to char with PyObjC.
Objective-C follows the rules that apply to C.
So in Objective-C, as in C, a uint8_t* is in fact the very same thing as a char* in memory. A string differs only in the convention that the last character is \0, indicating where the char* block that we call a string ends. So char* blocks end with \0 because, well, it's a string.
What do we do in C to find out the length of a character block?
We iterate over the whole block until we find \0, usually with a while loop, breaking out when we find it; the counter inside the loop tells you the length if it was not given to you some other way.
It is up to you to interpret the data in the desired format.
That is why it is sometimes easier to cast from void*, or indeed to take a char* block that is then cast to and declared as uint8_t data inside the function that makes use of it. That is the nice part of C: you are free to define that as you wish, so use the power that was given to you.
So to make your life easier, you could define a length parameter like so
-withData:(uint8_t*)data andLength:(uint64_t)len;
to avoid parsing the character stream again, since you already know it is (or should be) 256 characters long. The one thing you want to avoid at all costs in C is a read attempt at an out-of-bounds index, which throws a BAD_ACCESS exception.
But this basic information should enable you to find a way to pass your char* block of uint8_t data, addressed by its very first pointer (which also points at the first uint8_t character of the block), as a str with a specific length, or one that runs up to the first occurrence of \0.
Sidenote:
Objective-C's @"someNSString" corresponds to Python's u"pythonstring".
PS: it is not clear from your question who throws that error message.
Python? Because it could not interpret the data it received?
PyObjC? Because mixing Python syntax with Objective-C can be hell?
The Objective-C runtime? Because it follows the strict rules of C as well?
Python has always been very forgiving about shoe-horning one type into another, but Python 3 uses Unicode strings by default, and these need to be converted into byte strings before being passed to PyObjC methods.
Try specifying the strings as bytes objects, e.g. b'this'.
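For example, a minimal sketch of the call from the question with data passed as bytes (all names are taken from the question; untested, since I don't have the framework at hand):

command = 'ABCw'    # NSString* parameter: a normal str is fine here
args = 0x00
data = b'\x50\x40'  # uint8_t* parameter: pass bytes, not str
writelength = 0x100
readlength = 0x100
timeout = 500
success, error = senddata.executeabcCommand_withArgs_withData_writeLength_readLength_timeoutMilliseconds_error_(
    command, args, data, writelength, readlength, timeout, None)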
I was hitting the same error trying to use IOKit:
import objc
from Foundation import NSBundle
IOKit = NSBundle.bundleWithIdentifier_('com.apple.framework.IOKit')
functions = [("IOServiceGetMatchingService", b"II#"), ("IOServiceMatching", b"#*"),]
objc.loadBundleFunctions(IOKit, globals(), functions)
The problem arose when I tried to call the function like so:
IOServiceMatching('AppleSmartBattery')
Receiving
Traceback (most recent call last):
File "<pyshell#53>", line 1, in <module>
IOServiceMatching('AppleSmartBattery')
ValueError: depythonifying 'charptr', got 'str'
Whereas with a bytes object I get:
IOServiceMatching(b'AppleSmartBattery')
{
IOProviderClass = AppleSmartBattery;
}
I need to build a Python script that exports data to a custom file format. This file is then read by a C++ program (I have the source code, but cannot compile it).
Custom file format spec:
The file must be little endian.
The float must be 4 bytes long.
I'm failing at exporting Python floats to bytes, which crashes the C++ app without any error trace. If I fill all the floats with 0 it loads perfectly fine, but if I try anything else it crashes.
This is how the C++ app reads the float:
double Eds2ImporterFromMemory::read4BytesAsFloat(long offset) const
{
    // Read data.
    float i = 0;
    memcpy(&i, &_data[offset], 4);
    return i;
}
And I try to export the Python float as follows:
def write_float(self, num):
    # pack float
    self.f.write(struct.pack('<f', float(num)))
And also like this, as some people suggested to me:
def write_float(self, num):
    # unpack as an integer the number previously packed as a float
    float_hex = struct.unpack('<I', struct.pack('<f', num))[0]
    self.f.write(float_hex.to_bytes(4, byteorder='little'))
But it fails every time. I'm not a C++ guy, as you can see. Do you have an idea why my Python script is not working?
Thanks in advance.
Please excuse me for being really bad at Python, but I tried to produce your desired result and it works fine for me.
The python code:
import struct
value = 13.37
ba = bytearray(struct.pack("f", value))
for b in ba:
    print("%02x" % b)
I imagine that if you just concatenate that (basically write those bytes to the file in that order), it would work just fine.
Either way, it will output
85
eb
55
41
I put these bytes in an array of unsigned chars and used memcpy in the same way your C++ code does:
#include <cstring>
#include <iostream>

int main() {
    unsigned char data[] = {0x85, 0xeb, 0x55, 0x41};
    float f;
    memcpy(&f, data, sizeof(float));
    std::cout << f << std::endl;
    std::cin.get();
}
This will output 13.37, as you would expect. If you cannot reproduce the correct result this way, I suspect that the error occurs somewhere else, in which case it would be helpful to see how the format is written by Python and how it is read by C++.
Also, keep in mind that there are several ways to represent the byte array, like
\x85eb5541, or 0x85, 0xeb, 0x55, 0x41, or 85eb5541, and so on. To me it seems very likely that you just aren't outputting the correct format, and thus the "parser" crashes.
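If it helps, here is a minimal sketch of writing a float to a binary file as 4 raw little-endian bytes, as your spec requires (the file name is just an example):

import struct

value = 13.37
with open('out.bin', 'wb') as f:       # binary mode matters; text mode would mangle the bytes
    f.write(struct.pack('<f', value))  # 4 bytes, little-endian IEEE 754 single precision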
I'm trying to work with:
softScheck/tplink-smartplug
I'm stuck in a loop of errors: the fix for the first causes the other, and the fix for the other causes the first. The code is all found in tplink-smartplug.py at the Git link.
cmd = "{\"system\":{\"set_relay_state\":{\"state\":0}}}"
sock_tcp.send(encrypt(cmd))
def encrypt(string):
    key = 171
    result = "\0\0\0\0"
    for i in string:
        a = key ^ ord(i)
        key = a
        result += chr(a)
    return result
As it is, result = 'Ðòøÿ÷Õï¶Å Ôùðè·Ä°Ñ¥ÀâØ£òçöÔîÞ£Þ' and I get the following error (on line 92 in the original file, sock_tcp.send(encrypt(cmd))):
a bytes-like object is required, not 'str'
so I change the function call to:
sock_tcp.send(encrypt(cmd.encode('utf-8')))
and my error changes to:
ord() expected string of length 1, but int found
I understand what ord() is trying to do, and I understand the encoding. But what I don't understand is: how am I supposed to send this encrypted message to my smart plug if I can't give the interpreter what it wants? Is there a workaround? I'm pretty sure the original repo was written in Python 2 or earlier, so maybe I'm not converting to Python 3 correctly?
Thanks for reading, I appreciate any help.
In Python 2, the result of encode is a str byte-string, which is a sequence of 1-byte str values. So, when you do for i in string:, each i is a str, and you have to call ord(i) to turn it into a number from 0 to 255.
In Python 3, the result of encode is a bytes byte-string, which is a sequence of 1-byte integers. So when you do for i in string:, each i is already an int from 0 to 255, so you don't have to do anything to convert it. (And, if you try to do it anyway, you'll get the TypeError you're seeing.)
Meanwhile, you're building result up as a str. In Python 2, that's fine, but in Python 3, that means it's Unicode, not bytes. Hence the other TypeError you're seeing.
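A quick illustration of the difference when iterating over an encoded string:

s = 'abc'.encode('utf-8')
for i in s:
    print(repr(i))
# Python 2 prints 'a', 'b', 'c'  (1-byte str values, so ord(i) is needed)
# Python 3 prints 97, 98, 99     (already ints, so ord(i) raises the TypeError you saw)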
For more details on how to port Python 2 string-handling code to Python 3, you should read the Porting Guide, especially Text versus binary data, and maybe the Unicode HOWTO if you need more background.
One way you can write the code to work the same way for both Python 2 and 3 is to use a bytearray for both values:
def encrypt(string):
    key = 171
    result = bytearray(b"\0\0\0\0")
    for i in bytearray(string):
        a = key ^ i
        key = a
        result.append(a)
    return result
cmd = u"{\"system\":{\"set_relay_state\":{\"state\":0}}}"
sock_tcp.send(encrypt(cmd.encode('utf-8')))
Notice the u prefix on cmd, which makes sure it's Unicode even in Python 2, and the b prefix on result, which makes sure it's bytes even in Python 3. Although since you know cmd is pure ASCII, it might be simpler to just do this:
cmd = b"{\"system\":{\"set_relay_state\":{\"state\":0}}}"
sock_tcp.send(encrypt(cmd))
If you don't care about Python 2, you can just do for i in string: without converting it to a bytearray, but you still probably want to use one for result. (Being able to append an int directly to it makes for simpler code, and it's even more efficient, as a nice bonus.)
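As a quick sanity check (Python 3, using the bytes literal for cmd shown above), the result is now a bytes-like object that sock_tcp.send() will accept:

cmd = b"{\"system\":{\"set_relay_state\":{\"state\":0}}}"
payload = encrypt(cmd)
print(type(payload))  # <class 'bytearray'>, which socket.send() accepts
print(len(payload))   # the 4 leading null bytes plus one byte per input byte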
I'm using Python 2.6's gdb module while debugging a C program, and would like to convert a gdb.Value instance into a Python numeric object (variable) based on the instance's .type.
E.g. turn my C program's SomeStruct->some_float_val = 1./6; into a Python gdb.Value via sfv = gdb.parse_and_eval('SomeStruct->some_double_val'), but THEN turn this into a double-precision floating-point Python variable, knowing that str(sfv.type.strip_typedefs()) == 'double' and its size is 8 bytes, WITHOUT converting through a string using dbl = float(str(sfv)) or Value.string(), but rather something like unpacking the bytes using struct to get the correct double value.
Every link returned by my searches points to https://sourceware.org/gdb/onlinedocs/gdb/Values-From-Inferior.html#Values-From-Inferior, but I can't see how to convert a Value instance into a Python variable cleanly. Say the Value wasn't even in C memory but represented a gdb.Value.address (so I can't use Inferior.read_memory()); how would one turn this into a Python int without going through string values?
You can convert it directly from the Value using int or float:
(gdb) python print int(gdb.Value(0))
0
(gdb) python print float(gdb.Value(0.0))
0.0
There seems to be at least one glitch in the system, though, as float(gdb.Value(0)) does not work.
I stumbled upon this while trying to figure out how to do bitwise operations on a pointer. In my particular use case, I needed to calculate a page-alignment offset. Python did not want to cast a pointer Value to int; however, the following worked:
int(ptr.cast(gdb.lookup_type("unsigned long long")))
We first make gdb cast our pointer to unsigned long long, and then the resulting gdb.Value can be cast to Python int.
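Applied to the names from the question, that would look something like this (a sketch, not tested against that exact program):

# inside gdb's embedded Python interpreter
sfv = gdb.parse_and_eval('SomeStruct->some_double_val')
dbl = float(sfv)  # a plain Python float, no round trip through str()

ptr = gdb.parse_and_eval('SomeStruct')
addr = int(ptr.cast(gdb.lookup_type("unsigned long long")))  # the pointer value as a Python int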
Python 2.6 on Red Hat 6.3.
I have a device that saves a 32-bit floating-point value across two memory registers, split into a most significant word and a least significant word.
I need to convert this to a float.
I have been using the following code, found on SO; it is similar to code I have seen elsewhere:
#!/usr/bin/env python
import sys
from ctypes import *
first = sys.argv[1]
second = sys.argv[2]
reading_1 = str(hex(int(first)).lstrip("0x"))
reading_2 = str(hex(int(second)).lstrip("0x"))
sample = reading_1 + reading_2
def convert(s):
    i = int(s, 16)                    # convert from hex to a Python int
    cp = pointer(c_int(i))            # make this into a c integer
    fp = cast(cp, POINTER(c_float))   # cast the int pointer to a float pointer
    return fp.contents.value          # dereference the pointer, get the float
print convert(sample)
An example of the register values would be:
register-1: 16282, register-2: 60597
This produces the resulting float of
1.21034872532
A perfectly cromulent number. However, sometimes the memory values are something like:
register-1: 16282, register-2: 1147
which, using this function, results in a float of
1.46726675314e-36
which is a fantastically small number and not a number that seems correct. This device should be producing readings around the 1.2 to 1.3 range.
What I am trying to work out is whether the device is producing bogus values, or whether the values I am getting are correct but the function I am using is not able to convert them properly.
Also, is there a better way to do this, for example with numpy or something of that nature?
I will hold my hand up and say that I have just copied this code from examples online and I have very little understanding of how it works; however, it seemed to work in the test cases I had available to me at the time.
Thank you.
If you have the raw bytes (e.g. read from memory, from file, over the network, ...) you can use struct for this:
>>> import struct
>>> struct.unpack('>f', '\x3f\x9a\xec\xb5')[0]
1.2103487253189087
Here, \x3f\x9a\xec\xb5 are your input registers, 16282 (hex 0x3f9a) and 60597 (hex 0xecb5), expressed as bytes in a string. The > is the byte-order specifier (big-endian).
So depending on how you get the register values, you may be able to use this method (e.g. by converting your input integers to byte strings). You can use struct for this, too; this is your second example:
>>> raw = struct.pack('>HH', 16282, 1147) # from two unsigned shorts
>>> struct.unpack('>f', raw)[0] # to one float
1.2032617330551147
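Putting the two steps together as a small helper (assuming, as in your examples, that register 1 holds the most significant word):

import struct

def registers_to_float(msw, lsw):
    # two unsigned 16-bit words, most significant first, reinterpreted as one big-endian float
    raw = struct.pack('>HH', msw, lsw)
    return struct.unpack('>f', raw)[0]

print registers_to_float(16282, 60597)  # 1.21034872532
print registers_to_float(16282, 1147)   # 1.20326173306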
The way you're converting the two ints makes implicit assumptions about endianness that I believe are wrong.
So, let's back up a step. You know that the first argument is the most significant word, and the second is the least significant word. So, rather than try to figure out how to combine them into a hex string in the appropriate way, let's just do this:
import struct
import sys
first = sys.argv[1]
second = sys.argv[2]
sample = int(first) << 16 | int(second)
Now we can just convert like this:
def convert(i):
    s = struct.pack('=i', i)
    return struct.unpack('=f', s)[0]
And if I try it on your inputs:
$ python floatify.py 16282 60597
1.21034872532
$ python floatify.py 16282 1147
1.20326173306