I have a text file and I want to replace the following pattern:
\"
with:
"
The initial version of what I'm looking at looks like:
{"latestProfileVersion":51,
"scannerAccess":true,
"productRatings":"[{\"7H65018000\":{\"reviewCount\":0,\"avgRating\":0}}
So someone embedded a JSON string inside a JSON response.
This is what I have currently:
rawAuthResponseTextFile = open(rawAuthResponseFilename, 'r')
formattedAuthResponse = open('formattedAuthResponse.txt', 'w')
try:
    stringVersionOfAuthResponse = rawAuthResponseTextFile.read().replace('\n', '')
    cleanedStringVersionOfAuthResponse = re.sub(r'\"', '"', stringVersionOfAuthResponse)
    jsonVersionOfAuthResponse = json.dumps(cleanedStringVersionOfAuthResponse)
    formattedAuthResponse.write(jsonVersionOfAuthResponse)
finally:
    rawAuthResponseTextFile.close()
    formattedAuthResponse.close()
Using http://pythex.org/ I have found that r'\"' should match only \", but this is not the case when I look at the output which appears to be adding additional escape characters.
I know I am doing something wrong because I cannot get the quotes around the embedded string to look like the quotes in the regular JSON no matter how much I tweak it, escape characters or no.
You need to use this regex:
\\"
You need to escape the backslash \ with another \.
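Something like this (a minimal sketch using a sample line from the question, with a shortened variable name) shows the difference:

import re

# Sample fragment from the question; the \" pairs are two literal characters here.
stringVersionOfAuthResponse = r'"productRatings":"[{\"7H65018000\":{\"reviewCount\":0,\"avgRating\":0}}'

# r'\"' is just an escaped quote and matches only ", so it replaces quotes with quotes.
# r'\\"' matches a literal backslash followed by a quote, which is what you want to remove.
cleaned = re.sub(r'\\"', '"', stringVersionOfAuthResponse)
print(cleaned)  # "productRatings":"[{"7H65018000":{"reviewCount":0,"avgRating":0}}

Also note that json.dumps on the cleaned string will escape the quotes all over again; if you just want the cleaned text in the output file, write the string directly.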
I need a regex that can reliably find a " that occurs before a "", even though there are a lot of other " characters before it as well.
For example:
{"Field":"String data "Other String Data""}
I need to fix an error I'm getting in the raw JSON string. I need to turn that "" into " and remove the extra " inside the value pair. If I don't remove these, I can't turn the string into an object that I can iterate through.
I am importing this string into Python.
I have tried to figure out some lookbacks and lookarounds but they don't seem to be working.
For example, I tried this: (?=(?=(")).*"")
Have you tried just finding all "" and replacing them with "?
re.sub('""', '"', s)
Though this will work for your example, it can cause issues if a doubled double quote is intended in a string.
You could use re.split to break down your string into parts that are between quotes, then replace the non-escaped inside quotes with properly escaped ones.
To break the string apart, you can use an expression that finds quoted character sequences followed by one of the JSON delimiters that can appear after a closing quote (i.e. : , ] }):
import re

s = '{"Field":"String data "Other String Data""}'
parts = re.split(r'(".*?"(?=[:,}\]]))', s)
fixed = "".join(re.sub(r'(?<!^)"(?!$)', r'\"', p) for p in parts)
print(parts) # ['{', '"Field"', ':', '"String data "Other String Data""', '}']
print(fixed) # {"Field":"String data \"Other String Data\""}
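As a quick sanity check (a small addition, not part of the original answer), the repaired string should now load with the standard json module:

import json

fixed = '{"Field":"String data \\"Other String Data\\""}'  # the repaired string from above
print(json.loads(fixed))  # {'Field': 'String data "Other String Data"'}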
Obviously this will not cover all possible edge cases (otherwise JSON wouldn't need to escape quotes as it does) but, depending on your data it may be sufficient.
I am getting a filename from an API in this format, containing a mix of / and \.
infilename = 'c:/mydir1/mydir2\mydir3\mydir4\123xyz.csv'
When I try to parse the directory structure, a \ followed by a character is converted into a single character.
Is there a way around to get each component correctly?
What I already tried:
os.path.normpath didn't help.
infilename = 'c:/mydir1/mydir2\mydir3\mydir4\123xyz.csv'
os.path.normpath(infilename)
out:
'c:\\mydir1\\mydir2\\mydir3\\mydir4Sxyz.csv'
Use r before the string to process it as a raw string (i.e. backslash escape sequences are not interpreted).
e.g.
infilename = r'C:/blah/blah/blah.csv'
More details here:
https://docs.python.org/3.6/reference/lexical_analysis.html#string-and-bytes-literals
That's not visible in your example, but writing this:
infilename = 'c:/mydir1/mydir2\mydir3\mydir4\123xyz.csv'
isn't a good idea, because some of the lowercase (and a few uppercase) letters are interpreted as escape sequences when they follow a backslash. Notorious examples are \t and \b; there are others. For instance:
infilename = 'c:/mydir1/mydir2\thedir3\bigdir4\123xyz.csv'
fails twice, because two characters are interpreted as "tab" and "backspace".
When dealing with literal Windows-style paths (or regexes), you have to use the raw prefix, and better yet, normalize your path to get rid of the forward slashes:
infilename = os.path.normpath(r'c:/mydir1/mydir2\mydir3\mydir4\123xyz.csv')
However, the raw prefix only applies to literals. If the returned string appears, when printing repr(string), as 'the\terrible\\dir', then tab chars have already been put in the string, and there's nothing you can do except a lousy post-processing.
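As a complement (a small sketch, not from the answers above): if the goal is simply to get the path components no matter which separator was used, you can split on either kind of slash, which also avoids the platform-dependent value of os.sep:

import re

infilename = r'c:/mydir1/mydir2\mydir3\mydir4\123xyz.csv'  # raw string, so the backslashes survive
components = re.split(r'[\\/]+', infilename)
print(components)  # ['c:', 'mydir1', 'mydir2', 'mydir3', 'mydir4', '123xyz.csv']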
Instead of splitting on \, try splitting on \\. The \ character usually has to be escaped with another \, so a literal backslash is written as \\.
I'm trying to use pyparsing to parse quoted strings under the following conditions:
The quoted string might contain internal quotes.
I want to use backslashes to escape internal quotes.
The quoted string might end with a backslash.
I'm struggling to define a successful parser. Also, I'm starting to wonder whether the regular expression used by pyparsing for quoted strings of this kind is correct (see my alternative regular expression below).
Am I using pyparsing incorrectly (most likely) or is there a bug in pyparsing?
Here's a script that demonstrates the problem (Note: ignore this script; please focus instead on the Update below.):
import pyparsing as pp
import re
# A single-quoted string having:
# - Internal escaped quote.
# - A backslash as the last character before the final quote.
txt = r"'ab\'cd\'"
# Parse with pyparsing.
# Does not work as expected: grabs only first 3 characters.
parser = pp.QuotedString(quoteChar = "'", escChar = '\\', escQuote = '\\')
toks = parser.parseString(txt)
print
print 'txt: ', txt
print 'pattern:', parser.pattern
print 'toks: ', toks
# Parse with a regex just like the pyparsing pattern, but with
# the last two groups flipped -- which seems more correct to me.
# This works.
rgx = re.compile(r"\'(?:[^'\n\r\\]|(?:\\.)|(?:\\))*\'")
print
print rgx.search(txt).group(0)
Output:
txt: 'ab\'cd\'
pattern: \'(?:[^'\n\r\\]|(?:\\)|(?:\\.))*\'
toks: ["ab'"]
'ab\'cd\'
Update
Thanks for the replies. I suspect that I've confused things by framing my question badly, so let me try again.
Let's say we are trying to parse a language that uses quoting rules generally like Python's. We want users to be able to define strings that can include internal quotes (protected by backslashes) and we want those strings to be able to end with a backslash. Here's an example file in our language. Note that the file would also parse as valid Python syntax, and if we printed foo (in Python), the output would be the literal value: ab'cd\
# demo.txt
foo = 'ab\'cd\\'
My goal is to use pyparsing to parse such a language. Is there a way to do it? The question above is basically where I ended up after several failed attempts. Below is my initial attempt. It fails because there are two backslashes at the end, rather than just one.
with open('demo.txt') as fh:
txt = fh.read().split()[-1].strip()
parser = pp.QuotedString(quoteChar = "'", escChar = '\\')
toks = parser.parseString(txt)
print
print 'txt: ', txt
print 'pattern:', parser.pattern
print 'toks: ', toks # ["ab'cd\\\\"]
I guess the problem is that QuotedString treats the backslash only as a quote-escape whereas Python treats a backslash as a more general-purpose escape.
Is there a simple way to do this that I'm overlooking? One workaround that occurs to me is to use .setParseAction(...) to handle the double-backslashes after the fact -- perhaps like this, which seems to work:
qHandler = lambda s,l,t: [ t[0].replace('\\\\', '\\') ]
parser = pp.QuotedString(quoteChar = "'", escChar = '\\').setParseAction(qHandler)
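For what it's worth, here's that workaround applied to the demo string (a quick check; the exact intermediate token depends on the pyparsing version, and the replace is simply a no-op if QuotedString has already collapsed the double backslash):

import pyparsing as pp

qHandler = lambda s, l, t: [t[0].replace('\\\\', '\\')]
parser = pp.QuotedString(quoteChar="'", escChar='\\').setParseAction(qHandler)

toks = parser.parseString(r"'ab\'cd\\'")
print(toks[0])  # ab'cd\   (the intended literal value, with a single trailing backslash)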
I think you're misunderstanding the use of escQuote. According to the docs:
escQuote - special quote sequence to escape an embedded quote string (such as SQL's "" to escape an embedded ") (default=None)
So escQuote is for specifying a complete sequence that is parsed as a literal quote. In the example given in the docs, for instance, you would specify escQuote='""' and it would be parsed as ". By specifying a backslash as escQuote, you are causing a single backslash to be interpreted as a quotation mark. You don't see this in your example because you don't escape anything but quotes. However, if you try to escape something else, you'll see it won't work:
>>> txt = r"'a\Bc'"
>>> parser = pyp.QuotedString(quoteChar = "'", escChar = '\\', escQuote = "\\")
>>> parser.parseString(txt)
(["a'Bc"], {})
Notice that the backslash was replaced with '.
As for your alternative, I think the reason that pyparsing (and many other parsers) don't do this is that it involves special-casing one position within the string. In your regex, a single backslash is an escape character everywhere except as the last character in the string, in which position it is treated literally. This means that you cannot tell "locally" whether a given quote is really the end of the string or not --- even if it has a backslash, it might not be the end if there is one later on without a backslash. This can lead to parse ambiguities and surprising parsing behavior. For instance, consider these examples:
>>> txt = r"'ab\'xxxxxxx"
>>> print rgx.search(txt).group(0)
'ab\'
>>> txt = r"'ab\'xxxxxxx'"
>>> print rgx.search(txt).group(0)
'ab\'xxxxxxx'
By adding an apostrophe at the end of the string, I suddenly caused the earlier apostrophe to no longer be the end, and added all the xs to the string at once. In a real-usage context, this can lead to confusing situations in which mismatched quotes silently result in a reparsing of the string rather than a parse error.
Although I can't come up with an example at the moment, I also suspect that this has the possibility to cause "catastrophic backtracking" if you actually try to parse a sizable document containing multiple strings of this type. (This was my point about the "100MB of other text".) Because the parser can't know whether a given \' is the end of the string without parsing further, it might potentially have to go all the way to the end of the file just to make sure there are no more quote marks out there. If that remaining portion of the file contains additional strings of this type, it may become complicated to figure out which quotes are delimiting which strings. For instance, if the input contains something like
'one string \' 'or two'
we can't tell whether this is two valid strings (one string \ and or two) or one with invalid material after it (one string \' and the non-string tokens or two followed by an unmatched quote). This kind of situation is not desirable in many parsing contexts; you want the decisions about where strings begin and end to be locally determinable, and not depend on the occurrence of other tokens much later in the document.
What is it about this code that is not working for you?
from pyparsing import *
s = r"foo = 'ab\'cd\\'" # <--- IMPORTANT - use a raw string literal here
ident = Word(alphas)
strValue = QuotedString("'", escChar='\\')
strAssign = ident + '=' + strValue
results = strAssign.parseString(s)
print results.asList() # displays repr form of each element
for r in results:
print r # displays str form of each element
# count the backslashes
backslash = '\\'
print results[-1].count(backslash)
prints:
['foo', '=', "ab'cd\\\\"]
foo
=
ab'cd\\
2
EDIT:
So "\'" becomes just "'", but "\" is parsed but stays as "\" instead of being an escaped "\". Looks like a bug in QuotedString. For now you can add this workaround:
import re
strValue.setParseAction(lambda t: re.sub(r'\\(.)', r'\g<1>', t[0]))
Which will take every escaped character sequence and just give back the escaped character alone, without the leading '\'.
I'll add this in the next patch release of pyparsing.
PyParsing's QuotedString parser does not handle quoted strings that end with backslashes. This is a fundamental limitation, that doesn't have any easy workaround that I can see. If you want to support that kind of string, you'll need to use something other than QuotedString.
This is not an uncommon limitation either. Python itself does not allow an odd number of backslashes at the end of a "raw" string literal. Try it: r"foo\" will raise an exception, while r"bar\\" will include both backslashes in the output.
The reason you are getting truncated output (rather than an exception) from your current code is because you're passing a backslash as the escQuote parameter. I think that is intended to be an alternative to specifying an escape character, rather than a supplement. What is happening is that the first backslash is being interpreted as an internal quote (which it unescapes), and since it's followed by an actual quote character, the parser thinks it's reached the end of the quoted string. Thus you get ab' as your result.
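If you do need strings that can end in a backslash, one possible route (a sketch, not an official pyparsing recipe; the pattern and the unescaping parse action are my own assumptions) is to build the element from pyparsing's Regex class, treating a backslash strictly as an escape character:

import re
import pyparsing as pp

# A quote, then any mix of non-quote/non-backslash characters or backslash-escaped
# characters, then a closing quote. A trailing \\ inside the string is consumed as an
# escaped backslash, so a string like 'ab\'cd\\' parses cleanly.
sglQuoted = pp.Regex(r"'(?:[^'\\\n\r]|\\.)*'")
# Strip the surrounding quotes and reduce every \x to x.
sglQuoted.setParseAction(lambda t: re.sub(r'\\(.)', r'\g<1>', t[0][1:-1]))

print(sglQuoted.parseString(r"'ab\'cd\\'"))  # ["ab'cd\\"] i.e. the literal value ab'cd\

With this pattern, an unterminated input such as r"'ab\'cd\'" raises a ParseException instead of silently re-matching later text, which sidesteps the ambiguity discussed above.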
I'm trying to figure out how to remove \r's, \n's, and "\ from JSON retrieved from a URL, but every time I try, the output keeps getting cut off. The data contains:
\r\n\r\n
\n\n
\n
\r
"\wordhere"\
If you can help me I would appreciate it.
Use strict=False when loading; see the Python json docs:
>>> s
'\n{\n\r\n\r\n\n\n\n\n\n\n\r\n"wordhere": 0}\n'
>>> json.loads(s, strict=False)
{u'wordhere': 0}
You don't need a regex for this.
You could use the string's replace method.
string = 'abc\r\n\r\n\\\\'
string = string.replace('\r', '')
string = string.replace('\n', '')
string = string.replace('\\', '')
But if you really want to use regex, a possible approach would be:
string = re.sub('\\r*\\n*\\\\*', '', string)
When matching special characters, they need to be escaped with a backslash. When matching a literal backslash in a non-raw pattern string, though, you need four backslashes: the string literal reduces '\\\\' to two characters, which the regex engine then reads as a single escaped backslash.
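Equivalently (a small aside, not part of the original answer), a raw string literal keeps the pattern readable, because only the regex-level escaping remains:

import re

string = 'abc\r\n\r\n\\\\'
string = re.sub(r'\r*\n*\\*', '', string)  # same pattern as above, written as a raw string
print(repr(string))  # 'abc'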
I am trying to split one big file into individual entries. Each entry ends with the characters "//". So I tried to use:
#!/usr/bin/python
import sys,os
uniprotFile=open("UNIPROT-data.txt") #read original alignment file
uniprotFileContent=uniprotFile.read()
uniprotFileList=uniprotFileContent.split("//")
for items in uniprotFileList:
    seqInfoFile=open('%s.dat'%items[5:14],'w')
    seqInfoFile.write(str(items))
But I realised that there is another string containing "//" (http://www.uniprot.org/terms), hence it splits there as well and I don't get the result I want. I tried using a regex but was not able to figure it out.
Use a regex that only splits on // when it is not preceded by a colon:
import re
myre = re.compile("(?<!:)//")
uniprotFileList = myre.split(uniprotFileContent)
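For instance (a quick illustration with made-up entry content, not the actual UniProt data):

import re

myre = re.compile("(?<!:)//")
sample = "ID ONE\nsee http://www.uniprot.org/terms\n//\nID TWO\n//\n"
print(myre.split(sample))
# ['ID ONE\nsee http://www.uniprot.org/terms\n', '\nID TWO\n', '\n']

The // inside the URL is skipped because it is preceded by a colon, while the entry terminators still split.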
I used your code with a modified split pattern and it works fine for me:
#!/usr/bin/python
import sys,os
uniprotFile = open("UNIPROT-data.txt")
uniprotFileContent = uniprotFile.read()
uniprotFileList = uniprotFileContent.split("//\n")
for items in uniprotFileList:
    seqInfoFile = open('%s.dat' % items[5:17], 'w')
    seqInfoFile.write(str(items))
You're confusing \ (backslash) and / (slash). You don't need to escape a slash, just use "/". For a backslash, you do need to escape it, so use "\\".
Secondly, if you split with a backslash it will not split on a slash or vice-versa.
Split using a regular expression that doesn't permit the "http:" part before your // marker.
For example: "([^:])\/\/"
You appear to be splitting on the wrong characters. Based on your question, you should split on "\\", not "//". Open a prompt and inspect the strings you're using. You'll see something like:
>>> "\\"
'\\'
>>> "\"
SyntaxError
>>> r"\\"
'\\'
>>> "//"
'//'
So, you can use "\\" or r"\\" (I recommend r"\\" for clarity in splitting and regex operations).