zeromq: unicode not allowed, use send_unicode - python

I am getting an error on ZeroMQ in Python while sending strings through a ROUTER socket. String-type messages are received successfully, but sometimes a unicode message throws the exception "TypeError: unicode not allowed, use send_unicode". I have been trying to use msg.encode('utf-8'), but I can't figure out a way to get past it.
I am on Python 2.7.3. I am not using pyzmq (import zmq only). Looking forward to your suggestions :) Thanks
if backend in sockets:
    request = backend.recv_multipart()
    # print("Backend Thread is ready")
    worker_id, client_id = request[:2]
    if client_id != b"READY" and len(request) > 3:
        # print(len(request))
        empty2, reply = request[2:]
        router_socket.send_multipart([client_id, reply.encode('utf-8')])

The problem was resolved. The only thing I needed to do was convert the unicode strings back to ASCII with string.encode('ascii').

I got the same error. My erroneous code was:
socket.send("Server message to client3")
You must convert the message to bytes to solve it. To do so, just add b like this:
socket.send(b"Server message to client3")
Is it better to convert strings to bytes before sending data through the network, and then convert the bytes back to strings on the receiving end, and why?

So, because PyZMQ is actually a good library, they have docs:
https://pyzmq.readthedocs.io/en/latest/unicode.html
What they explain is that the str object changed its nature over the course of Python's evolution.
In Python 3, str is a collection of characters, while in Python 2 it is a thin wrapper (with some sugar) around the char* we know from C :).
The docs explain why the people behind PyZMQ chose to make the difference explicit: performance is the answer.
To send strings in Python 3 you should use the right method, which is send_string; it is probably the other way around for Python 2 (to send unicode you should use send_unicode).
It is, however, recommended to stick to bytes and explicitly encode and decode where needed.
Also, you are using pyzmq: the module named "zmq" comes from the pyzmq library/package.
To confirm this, run: pip list | grep zmq (or pip list | select-string zmq on Windows).
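As an illustration, here is a minimal sketch of sending text explicitly as bytes (the PUB socket type and the tcp://127.0.0.1:5556 endpoint are placeholders chosen so the snippet runs standalone, not details from the question):
import zmq

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.PUB)            # PUB simply drops frames if nobody is subscribed
sock.bind("tcp://127.0.0.1:5556")     # hypothetical endpoint

message = u"Server message to client3"   # text: unicode in Python 2, str in Python 3

sock.send(message.encode('utf-8'))    # explicit bytes; works the same on Python 2 and 3
# sock.send_string(message)           # PyZMQ helper that does the utf-8 encode for you

sock.close()
ctx.term()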


Current idiom for removing 'surrogateescape' characters from a decoded string

Armin Ronacher, http://lucumr.pocoo.org/2013/7/2/the-updated-guide-to-unicode/
If you for instance pass [the result of os.fsdecode() or equivalent] to a template engine you [sometimes get a UnicodeEncodeError] somewhere else entirely, and because the encoding happens at a much later stage you no longer know why the string was incorrect. If you detect that error when it happens, the issue becomes much easier to debug.
Armin suggests a function
def remove_surrogate_escaping(s, method='ignore'):
    assert method in ('ignore', 'replace'), 'invalid removal method'
    return s.encode('utf-8', method).decode('utf-8')
Nick Coghlan, 2014, [Python-Dev] Cleaning up surrogate escaped strings
The current proposal on the issue tracker is to ... take advantage of
the existing error handlers:
def convert_surrogateescape(data, errors='replace'):
    return data.encode('utf-8', 'surrogateescape').decode('utf-8', errors)
That code is short, but semantically dense - it took a few iterations to
come up with that version. (Added bonus: once you're alerted to the
possibility, it's trivial to write your own version for existing Python 3
versions. The standard name just makes it easier to look up when you come
across it in a piece of code, and provides the option of optimising it
later if it ever seems worth the extra work)
The functions are slightly different. The second was written with knowledge of the first.
Since Python 3.5, the backslashreplace error handler works on decoding as well as encoding. The first approach is not designed to take advantage of backslashreplace: for example, a byte 0xff that failed to decode would get printed as "\udcff". The second approach is designed to handle this; it would print "\xff".
If you did not need backslashreplace, you might prefer the first version if you had the misfortune to be supporting Python < 3.5 (including polyglot 2/3 code, ouch).
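As a quick illustration of the second idiom on Python 3.5+ (a minimal sketch; the b"caf\xe9" input is made up):
raw = b"caf\xe9"                               # latin-1 bytes, not valid UTF-8
text = raw.decode('utf-8', 'surrogateescape')  # 'caf\udce9' - lone surrogate smuggled in

def convert_surrogateescape(data, errors='replace'):
    return data.encode('utf-8', 'surrogateescape').decode('utf-8', errors)

print(convert_surrogateescape(text))                      # 'caf\ufffd' (U+FFFD replacement character)
print(convert_surrogateescape(text, 'backslashreplace'))  # prints caf\xe9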
Question
Is there a better idiom for this purpose yet? Or do we still use this drop-in function?
Nick referred to an issue for adding such a function to the codecs module. As of 2019 the function has not been added, and the ticket remains open.
The latest comment says
msg314682 Nick Coghlan, 2018
A recent discussion on python-ideas also introduced me to the third party library, "ftfy", which offers a wide range of tools for cleaning up improperly decoded data.
That includes a lone surrogate fixer: ftfy.fixes.fix_surrogates(text)
...
I do not find the function in ftfy appealing. The documentation does not say so, but it appears to be designed to handle both surrogateescape and ... be part of a workaround for CESU-8, or something like that?
Replace 16-bit surrogate codepoints with the characters they represent (when properly paired), or with � otherwise.

pyserial Python2/3 string headache

I'm using pyserial to communicate with some sensors which use the Modbus protocol. In Python 2.7, this works perfectly:
import serial
c = serial.Serial('port/address') # connect to sensor
msg = "\xFE\x44\x00\x08\x02\x9F\x25" # 7 hex bytes(?)
c.write(msg) # send signal
r = c.read(7) # read 7 hex bytes (?).
In Python 3, this does not work. I know it has something to do with differences in how Python 2/3 handle binary vs. unicode strings. I've found numerous other threads suggesting the solution should be to simply prepend a b to my message (msg = b"\xFE\x44\x00\x08\x02\x9F\x25") to specify it as a byte string, but this does not work in my case.
Any insights? What should I be sending in Python 3 so the sensor receives the same signal? I'm at my wit's end...
I should add that I'm totally new to serial connections (well... 1 week old), and (despite reading around quite a bit) I struggle with understanding different character/string formats... Hence question marks in comments above. Please pitch answers appropriately :).
Thanks in advance!
write expects argument to be str, not bytes, so passing b"\xFE\x44\x00\x08\x02\x9F\x25" directly to it won't work. You need to convert bytes to str first: c.write(b"\xFE\x44\x00\x08\x02\x9F\x25".decode()) should work.
Solution
It turned out that specifying the input as a byte string (msg = b"\xFE\x44\x00\x08\x02\x9F\x25") did work. The initial error was caused by a typo in the msg string...
Secondary errors arose from how the outputs were handled: in Python 2, ord() had to be applied to the indexed output to get integers; in Python 3, indexing the returned bytes yields integers directly (no ord() necessary).
Hope this might help someone in the future...
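For anyone hitting the same wall later, here is a minimal Python 3 sketch of the working version. The port name and the positions of the data bytes in the reply are assumptions for illustration, not taken from the sensor's datasheet:
import serial

c = serial.Serial('/dev/ttyUSB0')        # hypothetical port; e.g. 'COM3' on Windows
msg = b"\xFE\x44\x00\x08\x02\x9F\x25"    # 7 raw bytes written as a bytes literal
c.write(msg)                             # pyserial expects bytes on Python 3
r = c.read(7)                            # r is a bytes object
high, low = r[3], r[4]                   # indexing bytes gives ints directly - no ord()
value = (high << 8) | low                # assumed big-endian 16-bit reading (illustrative)
print(value)
c.close()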

Python: base64.b64decode() vs .decode?

The Code Furies have turned their baleful glares upon me, and it's fallen to me to implement "Secure Transport" as defined by The Direct Project. Whether or not we internally use DNS rather than LDAP for sharing certificates, I'm obviously going to need to set up the former to test against, and that's what's got me stuck. Apparently, an X509 cert needs some massaging to be used in a CERT record, and I'm trying to work out how that's done.
The clearest thing I've found is a script on Videntity's blog, but not being versed in python, I'm hitting a stumbling block. Specifically, this line crashes:
decoded_clean_pk = clean_pk.decode('base64', strict)
since it doesn't seem to like (or rather, to know) whatever 'strict' is supposed to represent. I'm making the semi-educated guess that the line is supposed to decode the base64 data, but I learned from the Debian OpenSSL debacle some years back that blindly diddling with crypto-related code is a Bad Thing(TM).
So I turn the illustrious python wonks on SO to ask if that line might be replaced by this one (with the appropriate import added):
decoded_clean_pk = base64.b64decode(clean_pk)
The script runs after that change, and produces correct-looking output, but I've got enough instinct to know that I can't necessarily trust my instincts here. :)
That line would have worked if you had called it like this:
decoded_clean_pk = clean_pk.decode('base64', 'strict')
Notice that strict has to be a string; otherwise the Python interpreter tries to look up a variable named strict, and if it doesn't find one (or it holds a value other than 'strict', 'ignore', or 'replace') it will complain about it.
Take a look at this code:
>>> import base64
>>> b = base64.b64encode('hello world')
>>> b.decode('base64')
'hello world'
>>> base64.b64decode(b)
'hello world'
Both decode and b64decode work the same when .decode is passed the 'base64' argument string.
The difference is that str.decode takes a string of bytes as its input and returns its Unicode representation according to the encoding you pass as the first parameter. In this case you're telling it to handle a base64 string, so it does that fine.
To answer your question, both work the same here, although b64decode/b64encode are meant to work only with base64 encodings, whereas str.decode can handle as many encodings as the library is aware of.
For further information, have a read of both doc sections: decode and b64decode.
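For reference, here is the same round trip using only the base64 module functions; written this way it also runs unchanged on Python 3, where the text/bytes boundary must be explicit:
import base64

data = u"hello world".encode('utf-8')   # text -> bytes
encoded = base64.b64encode(data)        # b'aGVsbG8gd29ybGQ='
decoded = base64.b64decode(encoded)     # b'hello world'
print(decoded.decode('utf-8'))          # hello world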
UPDATE: Actually, and this is the most important example I guess :), take a look at the source code for encodings/base64_codec.py, which is what decode() uses:
def base64_decode(input, errors='strict'):
    """ Decodes the object input and returns a tuple (output
        object, length consumed).

        input must be an object which provides the bf_getreadbuf
        buffer slot. Python strings, buffer objects and memory
        mapped files are examples of objects providing this slot.

        errors defines the error handling to apply. It defaults to
        'strict' handling which is the only currently supported
        error handling for this codec.
    """
    assert errors == 'strict'
    output = base64.decodestring(input)
    return (output, len(input))
As you can see, it actually uses the base64 module to do it :)
Hope this clarifies your question in some way.

How do I handle Python unicode strings with null-bytes the 'right' way?

Question
It seems that PyWin32 is comfortable with giving null-terminated unicode strings as return values. I would like to deal with these strings the 'right' way.
Let's say I'm getting a string like: u'C:\\Users\\Guest\\MyFile.asy\x00\x00sy'. This appears to be a C-style null-terminated string hanging out in a Python unicode object. I want to trim this bad boy down to a regular ol' string of characters that I could, for example, display in a window title bar.
Is trimming the string off at the first null byte the right way to deal with it?
I didn't expect to get a return value like this, so I wonder if I'm missing something important about how Python, Win32, and unicode play together... or if this is just a PyWin32 bug.
Background
I'm using the Win32 file chooser function GetOpenFileNameW from the PyWin32 package. According to the documentation, this function returns a tuple containing the full filename path as a Python unicode object.
When I open the dialog with an existing path and filename set, I get a strange return value.
For example I had the default set to: C:\\Users\\Guest\\MyFileIsReallyReallyReallyAwesome.asy
In the dialog I changed the name to MyFile.asy and clicked save.
The full path part of the return value was: u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'
I expected it to be: u'C:\\Users\\Guest\\MyFile.asy'
The function is returning a recycled buffer without trimming off the terminating bytes. Needless to say, the rest of my code wasn't set up for handling a C-style null-terminated string.
Demo Code
The following code demonstrates null-terminated string in return value from GetSaveFileNameW.
Directions: In the dialog change the filename to 'MyFile.asy' then click Save. Observe what is printed to the console. The output I get is u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'.
import win32gui, win32con

if __name__ == "__main__":
    initial_dir = 'C:\\Users\\Guest'
    initial_file = 'MyFileIsReallyReallyReallyAwesome.asy'
    filter_string = 'All Files\0*.*\0'
    (filename, customfilter, flags) = \
        win32gui.GetSaveFileNameW(InitialDir=initial_dir,
                                  Flags=win32con.OFN_EXPLORER, File=initial_file,
                                  DefExt='txt', Title="Save As", Filter=filter_string,
                                  FilterIndex=0)
    print repr(filename)
Note: If you don't shorten the filename enough (for example, if you try MyFileIsReally.asy) the string will be complete without a null byte.
Environment
Windows 7 Professional 64-bit (no service pack), Python 2.7.1, PyWin32 Build 216
UPDATE: PyWin32 Tracker Artifact
Based on the comments and answers I have received so far, this is likely a pywin32 bug so I filed a tracker artifact.
UPDATE 2: Fixed!
Mark Hammond reported in the tracker artifact that this is indeed a bug. A fix was checked in to rev f3fdaae5e93d, so hopefully that will make the next release.
I think Aleksi Torhamo's answer below is the best solution for versions of PyWin32 before the fix.
I'd say it's a bug. The right way to deal with it would probably be fixing pywin32, but in case you aren't feeling adventurous enough, just trim it.
You can get everything before the first '\x00' with filename.split('\x00', 1)[0].
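Applied to the value from the question, that looks like this:
filename = u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'   # value returned by the buggy call
clean = filename.split('\x00', 1)[0]
print(repr(clean))                                         # u'C:\\Users\\Guest\\MyFile.asy'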
This doesn't happen on the version of PyWin32/Windows/Python I tested; I don't get any nulls in the returned string even if it's very short. You might investigate if a newer version of one of the above fixes the bug.
ISTR that I had this issue some years ago; back then I discovered that such Win32 filename-dialog-related functions return a sequence of the form 'filename1\0filename2\0...filenameN\0\0', possibly followed by garbage characters depending on the buffer that Windows allocated.
Now, you might prefer a list instead of the raw return value, but that would be a RFE, not a bug.
PS When I had this issue, I quite understood why one would expect GetOpenFileName to possibly return a list of filenames, while I couldn't imagine why GetSaveFileName would. Perhaps this is considered as API uniformity. Who am I to know, anyway?

Are PyArg_ParseTuple() "s" format specifiers useful in Python 3.x C API?

I'm trying to write a Python C extension that processes byte strings, and I have something basically working for Python 2.x and Python 3.x.
For the Python 2.x code, near the start of my function, I currently have a line:
if (!PyArg_ParseTuple(args, "s#:in_bytes", &src_ptr, &src_len))
...
I notice that the s# format specifier accepts both Unicode strings and byte strings. I really just want it to accept byte strings and reject Unicode. For Python 2.x, this might be "good enough"--the standard hashlib seems to do the same, accepting Unicode as well as byte strings. However, Python 3.x is meant to clean up the Unicode/byte string mess and not let the two be interchangeable.
So, I'm surprised to find that in Python 3.x, the s format specifiers for PyArg_ParseTuple() still seem to accept Unicode and provide a "default encoded string version" of the Unicode. This seems to go against the principles of Python 3.x, making the s format specifiers unusable in practice. Is my analysis correct, or am I missing something?
Looking at the implementation for hashlib for Python 3.x (e.g. see md5module.c, function MD5_update() and its use of GET_BUFFER_VIEW_OR_ERROUT() macro) I see that it avoids the s format specifiers, and just takes a generic object (O specifier) and then does various explicit type checks using the GET_BUFFER_VIEW_OR_ERROUT() macro. Is this what we have to do?
I agree with you -- it's one of several spots where the C API migration for Python 3 was clearly not designed as carefully and thoroughly as the Python-coder-visible parts. I also agree that probably the best workaround for now is to focus on "buffer views", per that macro -- until and unless something better gets designed into a future Python C API (don't hold your breath waiting for that to happen, though ;-).
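As a quick Python-level illustration of the same design decision (a sketch, not from the original answer): Python 3's hashlib, which takes the buffer-view route internally, accepts bytes but rejects str outright:
import hashlib

h = hashlib.md5()
h.update(b"payload")          # bytes are accepted
try:
    h.update(u"payload")      # str is rejected on Python 3
except TypeError as exc:
    print(exc)                # e.g. "Unicode-objects must be encoded before hashing"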
