Storing HTML in Redis and keeping the quotes - Python

Is there a way to store a mix of quote characters as a value in Redis? My specific need is to store PHP and HTML strings, and keep the mix of quote types.
In Python:
import redis
db = redis.Redis('localhost')
phpfile = '/root/test.php'
with open(phpfile) as f:
    bar = f.read()
db.set('foo', bar)
If I try to pass the string any other direct way, it fails, of course, because of all the competing quotes.
Edit:
I'm going to close this question. I cannot get the behavior to repeat in Python using the code above, so it must be an issue with another layer in my code and not an issue with Redis.

Redis does not care about the structure of the data you store. You can throw random bytes at Redis, or strings with double or single quotes.
Simply store the string and be happy.
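For example, a minimal sketch with redis-py (the key name and HTML snippet are just made up for illustration):
import redis

db = redis.Redis('localhost')
snippet = '''<a href="index.php?id=1" onclick='alert("hi")'>link</a>'''
db.set('html:snippet', snippet)   # stored byte-for-byte, both quote styles intact
print(db.get('html:snippet'))     # the same bytes come back out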

Related

How to deserialize split JSON data

I stream data via Server-Sent Events and get about 500,000 datasets, but instead of getting one JSON document I get this (an example of 2 of the 500,000 datasets; this is how it looks when opened in gedit, where all quotation marks appear as \" and all new lines as \n):
data:{\"data\":[\"Kendrick\",\"Lamar\"]}\n\ndata:{\"data\":[\"David\",\"Bowie\"]}\n\n
My goal is to get this into a database. I initially thought I would put this into a dictionary, then create a pandas DataFrame from it, and from there get it into a database. But this turned out to be quite cumbersome. I ended up with something like this:
c1 = data_json[1:-1]
c2 = c1.replace('{data:{', '{\"data\":{')
c3 = c2.replace('}data:{', ', ')
c4 = '{' + c3 + '}'
but even here I have some problems, since I have to account for the \n\n between records. As soon as I change c3 to c2.replace('}\n\ndata:{', ', ') I get Process finished with exit code 137 (interrupted by signal 9: SIGKILL). Coming from .NET I could handle this quite easily with a deserializer, and I am wondering if there is a similar way to deserialize the data.
I get the data via sseclient and would be able to store it as bytes instead of a string if that would help, just FYI.
Any suggestions?
Juggling with replaces is of course a convoluted path -
the language does have parsers for this kind of escaping built in -
the simplest of which would be passing the string that contains JSON through an eval call. But eval is seldom needed and should be avoided in most cases as "not elegant" - if not outright unsafe (being unsafe really only applies when you have no control over the input data - and even then, ast.literal_eval instead of plain eval can mitigate that). Anyway, there are other problems with the format that will prevent eval from working outright - the missing quotes around the outermost data:, for example.
Random rants aside, if your file content is actually:
data:{\"data\":[\"Kendrick\",\"Lamar\"]}\n\ndata:{\"data\":[\"David\",\"Bowie\"]}\n\n
It has two problems: "under-quoting" of the outermost data and "over-escaping" of the inner data.
On an interactive Python session, using the "raw string" marker I can input your example line as it will be read from a file:
In [263]: a = r"""data:{\"data\":[\"Kendrick\",\"Lamar\"]}\n\ndata:{\"data\":[\"David\",\"Bowie\"]}\n\n"""
In [264]: print(a)
data:{\"data\":[\"Kendrick\",\"Lamar\"]}\n\ndata:{\"data\":[\"David\",\"Bowie\"]}\n\n
So, on to removing one level of backslashes - Python has a "unicode_escape" text encoding, but it only works on bytes objects. We therefore resort to the "latin1" encoding, as it provides a byte-for-byte conversion of the unicode string in "a" to bytes, and then apply "unicode_escape" to remove the extra "\":
In [266]: b = a.encode("latin1").decode("unicode_escape")
In [267]: print(b, "\n", repr(b))
data:{"data":["Kendrick","Lamar"]}
data:{"data":["David","Bowie"]}
'data:{"data":["Kendrick","Lamar"]}\n\ndata:{"data":["David","Bowie"]}\n\n'
Now it is easy to parse:
We split the resulting string at "\n\n" and get a list with one record
(those you are calling "datasets") per element. Then we resort to string
manipulation to get rid of the leading "data:" and finally, json.loads can work on the remaining part.
so:
import json
raw_data = open("mystrangefile.pseudo_json").read()
# undo one level of backslash escaping, as shown above
data = raw_data.encode("latin1").decode("unicode_escape")
# split into records, drop the leading "data:" prefix, and parse each one as JSON
records = [json.loads(record.split(":", 1)[-1])
           for record in data.split("\n\n") if record.strip()]
And "records" now should contain well behaved Python objects dictionaries, you can put in a database. (Unless Pandas can provide automatic mapping of the columns to a databas, it seems to be an uneeded step - a raw connection.executemany(""" INSERT ...""", records) with a proper open DB connection should suffice.
Also, on a side note, you mentioned that you could handle this easily with a .NET deserializer: that is only true if your files are not as broken as you have shown us - no standard deserializer could know how to handle such a specific data format out of the box. But if you actually are more proficient in another language/technology, you could resort to writing just a converter from the broken input to a properly encoded file, and use that as an intermediate step.
I'm not completely sure if I understood the format in which you get the string correctly, so please correct me if I'm wrong here:
data_json = 'data:{\\"data\\":[\\"Kendrick\\",\\"Lamar\\"]}\\n\\ndata:{\\"data\\":[\\"David\\",\\"Bowie\\"]}\\n\\n'
Your first line seems to strip the first and last character, which I don't see. Are there any additional characters you are stripping away here?
The two following substring replacements seem to have no effect as the substrings are not present in the initial string (if I got it correctly in the first place).
And finally in the last line you are wrapping your result with { and } which is not correct for lists in json. It should be [...]
I can't really tell why you would get a SIGKILL here, though. It does not throw any errors for me, it just does not do what you want it to do. Maybe you're running out of memory with all the 500k examples?
However, this would be a working solution (again, given that I got the initial string correctly):
c1 = data_json.replace('\\n\\n', '')  # removing escaped newlines
c2 = c1.replace('data:', ',')         # replacing the extra 'data:' with the json delimiter ','
c3 = c2.replace('\\', '')             # removing artificial escapes
c4 = c3[1:]                           # removing the leading ',' introduced in c2
c5 = '[' + c4 + ']'                   # wrapping as a list
Now you should be able to json.loads(c5) or whatever you need to do with that string.
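For the two example datasets above (and assuming the data_json literal is exactly as shown), that gives:
>>> import json
>>> json.loads(c5)
[{'data': ['Kendrick', 'Lamar']}, {'data': ['David', 'Bowie']}]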

Writing unicode symbols to files (as opposed to unicode escape codes)

I'm new to python and unicode is starting to give me headaches.
Currently I write to file like this:
my_string = "马/馬"
f = codecs.open(local_filepath, encoding='utf-8', mode='w+')
f.write(my_string)
f.close()
And when I open file with i.e. Gedit, I can see something like this:
\u9a6c/\u99ac\tm\u01ce
While I'd like to see exactly what I've written:
马/馬
I've tried a few different variations, like writing my_string.decode() or my_string.encode('utf-8') instead of just my_string. I know those two methods are opposites, but I was not sure which one I needed. Neither worked anyway.
If I manually write these symbols to a text file, then with Python read the file, re-write what I've just read back to the same file and save, the symbols get turned into codes like \u9a6c. Not sure if this is important; figured I'd just mention it to help identify the problem.
Edit: the strings came from a SQLAlchemy object's repr method, which turned out to be where the problem lay. I didn't mention it because it just didn't occur to me that it could be related to the problem. Thanks again for your help!
From the comments it is now clear you are using either the repr() function or calling the object.__repr__() method directly.
Don't do that. You are writing debugging information to your file:
>>> my_string = u"马/馬"
>>> print repr(my_string)
u'\u9a6c/\u99ac'
The value produced is meant to be pastable back into a Python session so you can re-produce the exact same value, and as such it is ASCII-safe (so it can be used in Python 2 source code without encoding issues).
From the repr() documentation:
For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object.
Write the Unicode objects to your file directly instead; codecs.open() handles encoding to UTF-8 correctly if you do.
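A minimal sketch of the difference (Python 2, reusing the names from the question):
# -*- coding: utf-8 -*-
import codecs

my_string = u"马/馬"
f = codecs.open(local_filepath, encoding='utf-8', mode='w+')
f.write(my_string)           # writes the actual characters: 马/馬
# f.write(repr(my_string))   # would write the debugging form: u'\u9a6c/\u99ac'
f.close()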

How do you convert string to bytes

I am using python v2.7.3 and am trying to get a conversion to work but am having some issues.
This is code that works the way I would like it to:
testString = "\x00\x13\xA2\x00\x40\xAA\x15\x47"
print 'Test String:',testString
This produces the following result
TestString: ¢#ªG
Now I load the same string as above along with some other data:
\x00\x13\xA2\x00\x40\xAA\x15\x47123456
into a SQLite3 database and then pull it from the database as such:
cur.execute('select datafield from databasetable')
rows = cur.fetchall()
if len(rows) == 0:
    print 'Sorry Found Nothing!'
else:
    for row in rows:
        print row[0][:32]
This however produces the following result:
\x00\x13\xA2\x00\x40\xAA\x15\x47
I cannot figure out how to convert the string stored in the database into the byte string, if that is what it is, that the first snippet of code produces. I actually need it loaded into a variable in that format so I can pass it to a function for further processing.
The following I have tried:
print "My Addy:",bytes(row[0][:32])
print '{0}'.format(row[0][:32])
...
They all produce the same results...
Please:
First, can anyone tell me what format the first result is in? I think it's a bytes format, but am not sure.
Second, how can I convert the text stored in the database into that format?
Any help and I would be eternally grateful.
Thanks in advance,
Ed
The problem is that you're not storing the value in the database properly. You want to store a sequence of bytes, but you're storing an escaped version of those bytes instead.
When entering string literals into a programming language, you can use escape codes in your source code to access non-printing characters. That's what you've done in your first example:
testString = "\x00\x13\xA2\x00\x40\xAA\x15\x47"
print 'Test String:',testString
But this is processing done by the Python interpreter as it's reading through your program and executing it.
Change the database column to a binary blob instead of a string, then go back to the code you're using to store the bytes in SQLite3, and have it store the actual bytes ('ABC', 3 bytes) instead of an escaped string ('\x41\x42\x43', 12 characters).
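A minimal sketch of that storage step (Python 2, assuming the table and column from the question, with the column declared as a BLOB):
import sqlite3

con = sqlite3.connect('mydatabase.db')        # hypothetical database file
raw = "\x00\x13\xA2\x00\x40\xAA\x15\x47" + "123456"
# wrap the raw bytes so sqlite3 stores them as a BLOB, not as escaped text
con.execute('insert into databasetable (datafield) values (?)',
            (sqlite3.Binary(raw),))
con.commit()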
If you truly need to store the escaped string in SQLite3 and convert it at run-time, you might be able to use ast.literal_eval() to evaluate it as a string literal.
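A rough sketch of that fallback (assuming the column really holds the 32-character escaped text and contains no quote characters of its own):
import ast

escaped = row[0][:32]                      # the text \x00\x13\xA2... as 32 literal characters
raw = ast.literal_eval('"%s"' % escaped)   # evaluate it as a Python string literal
print 'My Addy:', raw                      # now prints the real bytes, like the first snippet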

How to convert a list into a string and then convert it back in python?

I want to send a list through UDP/TCP, but since they only support sending strings, I need to convert the list into a string and then convert it back.
My list is like
['S1','S2','H1','C1','D8']
I know I can use
string_ = ''.join(list_)
to convert it into string.
But how to convert it back?
Or there is another way I can use UDP/TCP to send a list?
A custom format would depend on assumptions about the list items' format, so json looks like the safest way to go:
>>> import json
>>> data = json.dumps(['S1','S2','H1','C1','D8'])
>>> data
'["S1", "S2", "H1", "C1", "D8"]'
>>> json.loads(data)
[u'S1', u'S2', u'H1', u'C1', u'D8']
Use a separator:
string_ = ';'.join(list_)
list_ = string_.split(';')
You need to make sure the separator character can't be within your string. If it is, you might need encoding.
If you have Python on both ends of the network communication, you can use the dumps and loads functions from the pickle module, designed especially for serializing Python objects:
import pickle
a = ['S1','S2','H1','C1','D8']
string_ = pickle.dumps(a)
...
a = pickle.loads(string_)
Otherwise the solution proposed by @bereal is the better one, because there are json libraries for most programming languages. But it will demand some processing for data types not supported by json.
EDIT
As @bereal noticed, there can be a security problem with pickle because it's actually an executable language.
Maybe you could insert a separator between the list elements and then call split on it to get the list back.
EDIT:
As @Eric Fortin mentioned in his answer, the separator should be something that cannot occur in your strings. One possibility is -- as he suggested -- to use encoding. Another possibility is to send elements one by one, but that would obviously increase the communication overhead.
Note that your separator may be a sequence, it does not need to be one single character.
str_list = separator.join(list_)
new_list = str_list.split(separator)
If you know the format of your list elements, you may even go without using separators!
str_list = "".join(list_)
re.split(reg_exp, str_list)
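For instance, with the two-character codes from the question (one letter followed by one digit), a sketch using re.findall (arguably simpler than re.split here):
import re

list_ = ['S1', 'S2', 'H1', 'C1', 'D8']
str_list = "".join(list_)                    # 'S1S2H1C1D8'
new_list = re.findall(r'[A-Z]\d', str_list)  # back to ['S1', 'S2', 'H1', 'C1', 'D8']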

Why is escaping of single quotes inconsistent on file read in Python?

Given two nearly identical text files (plain text, created in MacVim), I get different results when reading them into a variable in Python. I want to know why this is and how I can produce consistent behavior.
For example, f1.txt looks like this:
This isn't a great example, but it works.
And f2.txt looks like this:
This isn't a great example, but it wasn't meant to be.
"But doesn't it demonstrate the problem?," she said.
When I read these files in, using something like the following:
f = open("f1.txt","r")
x = f.read()
I get the following when I look at the variables in the console. f1.txt:
>>> x
"This isn't a great example, but it works.\n\n"
And f2.txt:
>>> y
'This isn\'t a great example, but it wasn\'t meant to be. \n"But doesn\'t it demonstrate the problem?," she said.\n\n'
In other words, f1 comes in with only escaped newlines, while f2 also has its single quotes escaped.
repr() shows what's going on. First, for f1:
>>> repr(x)
'"This isn\'t a great example, but it works.\\n\\n"'
And f2:
>>> repr(y)
'\'This isn\\\'t a great example, but it wasn\\\'t meant to be. \\n"But doesn\\\'t it demonstrate the problem?," she said.\\n\\n\''
This kind of behavior is driving me crazy. What's going on and how do I make it consistent? If it matters, I'm trying to read in plain text, manipulate it, and eventually write it out so that it shows the properly escaped characters (for pasting into Javascript code).
Python is giving you a string literal which, if you gave it back to Python, would result in the same string. This is known as the repr() (short for "representation") of the string. This may not (probably won't, in fact) match the string as it was originally specified, since there are so many ways to do that, and Python does not record anything about how it was originally specified.
It uses double quotes around your first example, which works fine because it doesn't contain any double quotes. The second string contains double quotes, so it can't use double quotes as a delimiter. Instead it uses single quotes and uses backslashes to escape the single quotes in the string (it doesn't have to escape the double quotes this way, and there are more of them than there are single quotes). This keeps the representation as short as possible.
There is no reason for this behavior to drive you crazy and no need to try to make it consistent. You only get the repr() of a string when you are peeking at values in Python's interactive mode. When you actually print or otherwise use the string, you get the string itself, not a reconstituted string literal.
If you want to get a JavaScript string literal, the easiest way is to use the json module:
import json
print json.dumps('I said, "Hello, world!"')
Both f1 and f2 contain perfectly normal, unescaped single quotes.
The fact that their repr looks different is meaningless.
There are a variety of different ways to represent the same string. For example, these are all equivalent literals:
"abc'def'ghi"
'abc\'def\'ghi'
'''abc'def'ghi'''
r"abc'def'ghi"
The repr function on a string always just generates some literal that is a valid representation of that string, but you shouldn't depend on exactly which one it generates. (In fact, you should rarely use it for anything but debugging purposes in the first place.)
Since the language doesn't define anywhere what algorithm it uses to generate a repr, it could be different for each version of each implementation.
Most of them will try to be clever, using single or double quotes to avoid as many escaped internal quotes as possible, but even that isn't guaranteed. If you really want to know the algorithm for a particular implementation and version, you pretty much have to look at the source. For example, in CPython 3.3, inside unicode_repr, it counts the number of quotes of each type; then if there are single quotes but no double quotes, it uses " instead of '.
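For example (output from a recent CPython; the exact choice is an implementation detail):
>>> print(repr("isn't"))           # no double quotes inside, so repr picks "..."
"isn't"
>>> print(repr('a "b" isn\'t c'))  # both kinds present, so repr picks '...' and escapes the '
'a "b" isn\'t c'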
If you want "the" representation of a string, you're out of luck, because there is no such thing. But if you want some particular representation of a string, that's no problem. You just have to know what format you want; most formats, someone's already written the code, and often it's in the standard library. You can make C literal strings, JSON-encoded strings, strings that can fit into ASCII RFC822 headers… But all of those formats have different rules from each other (and from Python literals), so you have to use the right function for the job.
