I have a .txt file (scraped as pre-formatted text from a website) where the data looks like this:
B, NICKOLAS        CT144531X    D1026    JUDGE ANNIE WHITE JOHNSON
ANDREWS VS BALL    JA-15-0050   D0015    JUDGE EDWARD A ROBERTS
I'd like to remove all extra spaces (they're actually varying numbers of spaces, not tabs) in between the columns. I'd also then like to replace them with some delimiter (tab or pipe, since there are commas within the data), like so:
ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
Looked around and found that the best options are using regex or shlex to split. Two similar scenarios:
Python Regular expression must strip whitespace except between quotes,
Remove white spaces from dict : Python.
You can apply the regex '\s{2,}' (two or more whitespace characters) to each line and substitute the matches with a single '|' character.
>>> import re
>>> line = 'ANDREWS VS BALL    JA-15-0050   D0015    JUDGE EDWARD A ROBERTS  '
>>> re.sub(r'\s{2,}', '|', line.strip())
'ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS'
Stripping any leading and trailing whitespace from the line before applying re.sub ensures that you won't get '|' characters at the start and end of the line.
Your actual code should look similar to this:
import re

with open(filename) as f:
    for line in f:
        subbed = re.sub(r'\s{2,}', '|', line.strip())
        # do something here
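Putting the pieces together, here is a minimal end-to-end sketch; the helper name and the commented file names are illustrative, not from the question:

```python
import re

def pipe_delimit(lines):
    # Collapse each run of 2+ whitespace characters into a single pipe,
    # after trimming leading/trailing whitespace from the line.
    return [re.sub(r'\s{2,}', '|', line.strip()) for line in lines]

# Hypothetical usage against files on disk:
# with open('input.txt') as f, open('output.txt', 'w') as out:
#     for row in pipe_delimit(f):
#         out.write(row + '\n')
```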
What about this?
your_string = 'ANDREWS VS BALL    JA-15-0050   D0015    JUDGE EDWARD A ROBERTS'
print(re.sub(r'\s{2,}', '|', your_string.strip()))
Output:
ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
Explanation:
I've used re.sub(), which takes three parameters: a pattern, the replacement string, and the string you want to work on.
The pattern matches any run of at least two consecutive spaces, and each run is replaced with a single | in your string.
s = """B, NICKOLAS    CT144531X   D1026    JUDGE ANNIE WHITE JOHNSON
ANDREWS VS BALL    JA-15-0050   D0015    JUDGE EDWARD A ROBERTS
"""

In [71]: print(re.sub(r"(\S) {2,}(\S)(\n?)", r"\1|\2\3", s))
B, NICKOLAS|CT144531X|D1026|JUDGE ANNIE WHITE JOHNSON
ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
Considering there are at least two spaces separating the columns, you can use this:
lines = [
    'B, NICKOLAS    CT144531X   D1026    JUDGE ANNIE WHITE JOHNSON  ',
    'ANDREWS VS BALL    JA-15-0050   D0015    JUDGE EDWARD A ROBERTS  '
]

for line in lines:
    parts = []
    for part in line.split('  '):  # split on two consecutive spaces
        part = part.strip()
        if part:  # checking if stripped part is a non-empty string
            parts.append(part)
    print('|'.join(parts))
Output for your input:
B, NICKOLAS|CT144531X|D1026|JUDGE ANNIE WHITE JOHNSON
ANDREWS VS BALL|JA-15-0050|D0015|JUDGE EDWARD A ROBERTS
It looks like your data is in a "text-table" format.
I recommend using the first row to figure out the start point and length of each column (either by hand, or with a script that uses a regex to find the likely column boundaries), then writing a script that iterates the rows of the file, slices each row into column segments, and applies strip to each segment.
If you use a regex, you must keep track of the number of columns and raise an error if any given row has more than the expected number of columns (or a different number than the rest). Splitting on two-or-more spaces will break if a column's value has two-or-more spaces, which is not just entirely possible, but also likely. Text-tables like this aren't designed to be split on a regex, they're designed to be split on the column index positions.
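Here is a minimal sketch of that column-count check, assuming four columns per row; the function name, constant, and error message are illustrative:

```python
import re

EXPECTED_COLUMNS = 4  # assumption: every row has exactly four columns

def split_row(line, lineno=0):
    # Split on runs of two-or-more whitespace characters and verify the
    # column count; a value containing internal double spaces lands here.
    parts = re.split(r'\s{2,}', line.strip())
    if len(parts) != EXPECTED_COLUMNS:
        raise ValueError('line %d: expected %d columns, got %d'
                         % (lineno, EXPECTED_COLUMNS, len(parts)))
    return parts
```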
In terms of saving the data, you can use the csv module to write/read into a csv file. That will let you handle quoting and escaping characters better than specifying a delimiter. If one of your columns has a | character as a value, unless you're encoding the data with a strategy that handles escapes or quoted literals, your output will break on read.
Parsing the text above would look something like this (I nested one list comprehension inside another, rather than writing a single flattened comprehension, so it's easier to understand):
cols = ((0, 34),
        (34, 50),
        (50, 59),
        (59, None),
        )

for line in lines:
    cleaned = [i.strip() for i in [line[s:e] for (s, e) in cols]]
    print(cleaned)
then you can write it with something like:
import csv

with open('output.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter='|',
                            quotechar='"', quoting=csv.QUOTE_MINIMAL)
    for line in lines:
        spamwriter.writerow([line[col_start:col_end].strip()
                             for (col_start, col_end) in cols])
Looks like this library can solve this quite nicely:
http://docs.astropy.org/en/stable/io/ascii/fixed_width_gallery.html#fixed-width-gallery
Impressive...
Related
I have a txt file, single COLUMN, taken from excel, of the following type:
AMANDA (LOUDLY SPEAKING)
JEFF
STEVEN (TEASINGLY)
AMANDA
DOC BRIAN GREEN
As output I want:
AMANDA
JEFF
STEVEN
AMANDA
DOC BRIAN GREEN
I tried with a for loop over the whole column and then:
if str[i] == '(':
    return str.split('(')
but it's clearly not working.
Do you have any possible solution? I would then need an output file as my original txt, so with each name for each line in a single column.
Thanks everyone!
(I am using PyCharm 3.2)
I'd use a regex in this situation: \( matches a literal opening parenthesis, .*? matches any characters non-greedily, and \) matches the closing parenthesis, so everything between parentheses (inclusive) gets removed.
import re

with open("mytext.txt", "r") as fi, open("out.txt", "w") as fo:
    for line in fi:
        fo.write(re.sub(r"\(.*?\)", "", line))
You can split the string into a list using a regular expression that matches either anything in parentheses or a full word, remove the elements that contain parentheses, and then join the list back into a string. The advantage is that there will be no double spaces in the result string where a word in parentheses was removed.
import re

text = "AMANDA (LOUDLY SPEAKING) JEFF STEVEN (TEASINGLY) AMANDA DOC BRIAN GREEN"
words = re.findall(r"\(.*?\)|[^\s]+", text)
print(" ".join([x for x in words if "(" not in x]))
Good morning,
I found multiple threads dealing with splitting strings with multiple delimiters, but not with one delimiter and multiple conditions.
I want to split the following strings by sentences:
desc = "Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry. She speaks both English and Polish."
If I do:
[t.split('. ') for t in desc]
I get:
['Dr', 'Anna Pytlik is an expert in conservative and aesthetic dentistry', 'She speaks both English and Polish.']
I don't want to split on the first dot, after 'Dr'. How can I supply a list of substrings after which .split('. ') should not apply?
Thank you!
You could use re.split with a negative lookbehind:
>>> desc = "Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry. She speaks both English and Polish."
>>> re.split(r"(?<!Dr|Mr)\. ", desc)
['Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry',
'She speaks both English and Polish.']
Just add more "exceptions", delimited with |.
Update: Seems like negative lookbehind requires all the alternatives to have the same length, so this does not work with both "Dr." and "Prof." One workaround might be to pad the pattern with ., e.g. (?<!..Dr|..Mr|Prof). You could easily write a helper method to pad each title with as many . as needed. However, this may break if the very first word of the text is Dr., as the .. will not be matched.
Another workaround might be to first replace all the titles with some placeholders, e.g. "Dr." -> "{DR}" and "Prof." -> "{PROF}", then split, then swap the original titles back in. This way you don't even need regular expressions.
pairs = (("Dr.", "{DR}"), ("Prof.", "{PROF}"))  # and some more

def subst_titles(s, reverse=False):
    for x, y in pairs:
        s = s.replace(*((x, y) if not reverse else (y, x)))
    return s
Example:
>>> text = "Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry. Prof. Miller speaks both English and Polish."
>>> [subst_titles(s, True) for s in subst_titles(text).split(". ")]
['Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry', 'Prof. Miller speaks both English and Polish.']
You could split first, then join the Dr/Mr/... fragments back together.
That needs no complicated regexes and could be faster (you should benchmark it to choose the best option).
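A sketch of that split-then-rejoin idea; the TITLES tuple is an assumed list of abbreviations to protect, not something from the question:

```python
TITLES = ('Dr', 'Mr', 'Prof')  # assumed abbreviations to protect

def split_sentences(text):
    merged = []
    for part in text.split('. '):
        # If the previous chunk ends with a known title, the split was
        # spurious -- glue this chunk back onto it.
        if merged and merged[-1].rsplit(' ', 1)[-1] in TITLES:
            merged[-1] += '. ' + part
        else:
            merged.append(part)
    return merged
```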
I am working with a text file (620KB) that has a list of ID#s followed by full names separated by a comma.
The working regex I've used for this is
^([A-Z]{3}\d+)\s+([^,\s]+)
I want to also capture the first name and middle initial (space delimiter between first and MI).
I tried this by doing:
^([A-Z]{3}\d+)\s+([^,\s]+([\D])+)
This works, but I want to remove the newline break that is generated in the output file (I will be importing the two output files into a database, possibly Access, and I don't want to capture the newline breaks). Also, is there a better way of writing the regex?
Full code:
import re

source = open('source.txt')
ticket_list = open('ticket_list.txt', 'w')
id_list = open('id_list.txt', 'w')

for lines in source:
    m = re.search(r'^([A-Z]{3}\d+)\s+([^\s]+([\D+])+)', lines)
    if m:
        x = m.group()
        print('Ticket: ' + x)
        ticket_list.write(x + "\n")

ticket_list = open('First.txt', 'r')
for lines in ticket_list:
    y = re.search(r'^(\d+)\s+([^\s]+([\D+])+)', lines)
    if y:
        z = y.group()
        print('ID: ' + z)
        id_list.write(z + "\n")

source.close()
ticket_list.close()
id_list.close()
Sample Data:
Source:
ABC1000033830 SMITH, Z
100000012 Davis, Franl R
200000655 Gest, Baalio
DEF4528942681 PACO, BETH
300000233 Theo, David Alex
400000012 Torres, Francisco B.
ABC1200045682 Mo, AHMED
DEF1000006753 LUGO, G TO
ABC1200123123 de la Rosa, Maria E.
Depending on what kind of linebreak you're dealing with, a simple positive lookahead may remedy your pattern capturing the linebreak in the result. This was generated by RegexBuddy 4.2.0, and worked with all your test data.
if re.search(r"^([A-Z]{3}\d+)\s+([^,\s]+([\D])+)(?=$)", subject, re.IGNORECASE | re.MULTILINE):
    pass  # Successful match
else:
    pass  # Match attempt failed
Basically, the positive lookahead makes sure that there is a linebreak (in this case, end of line) character directly after the pattern ends. It will match, but not capture the actual end of line.
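For instance, here is a small sketch running that pattern over two of the sample rows; the helper name is mine, not from the answer:

```python
import re

# Pattern and flags taken verbatim from the answer above.
pattern = re.compile(r"^([A-Z]{3}\d+)\s+([^,\s]+([\D])+)(?=$)",
                     re.IGNORECASE | re.MULTILINE)

def extract(text):
    # group(1) is the ticket ID, group(2) the full name; the lookahead
    # keeps the line break out of the captured text.
    return [(m.group(1), m.group(2)) for m in pattern.finditer(text)]

sample = "ABC1000033830   SMITH, Z\nDEF4528942681   PACO, BETH"
```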
I have multiple strings (>1000) of the form:
\r\nSenor Sisig\nThe Chairman\nCupkates\nLittle Green Cyclo\nSanguchon\nSeoul on Wheels\nKasa Indian\n\nGo Streatery\nWhip Out!\nLiba Falafel\nGrilled Cheese Bandits\r\n
The strings may have a whitespace before the '\n'
How do I split these strings (in an efficient way) so as to avoid getting any empty or duplicate (the whitespace case) elements?
I was using:
re.split(r'\r|\n', str)
EDIT:
some more examples:
\r\nThe Creme Brulee Cart \r\nCurry Up Now\r\nKoJa Kitchen\r\nAn the Go\r\nPacific Puffs\r\nEbbett's Good to Go\r\nFiveten Burger\r\nGo Streatery\r\nHiyaaa\r\nSAJJ\r\nKinder's Truck\r\nBlue Saigon\r
\r\nThe Chairman\r\nSanguchon\r\nSeoul on Wheels\r\nGo Streatery\r\nStreet Dog Truck\r\nKinder's Truck\r\nYummi BBQ\r\nLexie's Frozen Custard\r\nDrewski's Hot Rod Kitchen\r
\n An the Go \n Cheese Gone Wild \n Cupkates \n Curry Up Now \n Fins on the Hoof\n KoJa Kitchen\n Lobsta Truck \n Oui Chef \n Sanguchon\n Senor Sisig \n The Chairman \n The Rib Whip
thanks!
Your example doesn't show any "whitespace before the \n" except for a single optional \r.
If that's all you're trying to handle, instead of splitting on either \r or \n, split on a possible \r and a definite \n:
re.split(r"\r?\n", s)
Of course that's assuming you don't have any bare \r without \n to handle. If you need to handle \r, \r\n, and \n all equally (similar to Python's universal newline support…):
re.split(r"\r\n|\r|\n", s)
Or, more simply:
re.split(r"(?:\r|\n)+", s)
If you want to remove leading spaces, tabs, multiple \r, etc., you could do that in the regexp, or just call lstrip on each result:
map(str.lstrip, re.split(r"\r|\n", s))
… but that can leave you with empty elements. You could filter those out, but it's probably better to just split on any run of whitespace that ends with a \n instead:
re.split(r"\s*\n", s)
That will still leave empty elements at the start and end, because your string starts and ends with newlines, and that's what re.split is supposed to do. If you want to eliminate them, you can either strip the string before parsing, or toss the end values after parsing:
re.split(r"\s*\n", s.strip())
re.split(r"\s*\n", s)[1:-1]
I think one of these last two is exactly what you want… but that's really just a guess based on the limited information you gave. If not, then one of the others (along with its explanation) should hopefully be enough for you to write what you really want.
From your new examples, it looks like what you really want to split on is any run of whitespace that includes at least one \n. And your input may or may not have newlines at the start and end (your first example has both, your second has \r\n at the start but nothing at the end…), and you want to ignore them if it does. So:
re.split(r"\s*\n\s*", s.strip())
However, at this point, it might be worth asking why you're trying to parse this as a string instead of as a text file. Assuming you got these from some file or file-like object, instead of this:
with open(path, 'r') as f:
s = f.read()
results = re.split(regexpr, s.strip())
… something like this might be a lot more readable, and more than fast enough (maybe not as fast as the optimal regexp, but still so fast that any wasted string-processing time is swamped by the actual file reading time anyway):
with open(path, 'r') as f:
results = filter(None, map(str.strip, f))
Especially if you just want to iterate over this list once, in which case (assuming either Python 3.x, or using ifilter and imap from itertools if 2.x) this version doesn't have to read the whole file into memory and process it before you start doing your actual work.
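A sketch of that lazy version in Python 3, with io.StringIO standing in for the open file:

```python
import io

def nonempty_lines(fileobj):
    # Lazily yield stripped, non-empty lines from any file-like object;
    # filter(None, ...) drops the lines that strip down to ''.
    return filter(None, (line.strip() for line in fileobj))

fake_file = io.StringIO("\r\nSenor Sisig\nThe Chairman\n\nCupkates\r\n")
```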
re.split(r'\s*[\r\n]+\s*', your_string.strip())
>>> s = "\r\nSenor Sisig\nThe Chairman\nCupkates\nLittle Green Cyclo\nSanguchon\nSeoul on Wheels\nKasa Indian\n\nGo Streatery\nWhip Out!\nLiba Falafel\nGrilled Cheese Bandits\r\n"
>>> [x for x in s.strip("\r\n").split("\n") if x]
['Senor Sisig', 'The Chairman', 'Cupkates', 'Little Green Cyclo', 'Sanguchon', 'Seoul on Wheels', 'Kasa Indian', 'Go Streatery', 'Whip Out!', 'Liba Falafel', 'Grilled Cheese Bandits']
If you insist on regex
>>> import re
>>> re.split(r"[\r\n]+", s.strip("\r\n"))
['Senor Sisig', 'The Chairman', 'Cupkates', 'Little Green Cyclo', 'Sanguchon', 'Seoul on Wheels', 'Kasa Indian', 'Go Streatery', 'Whip Out!', 'Liba Falafel', 'Grilled Cheese Bandits']
Just filter out the empty values
list(filter(None, re.split(r"\r|\n", your_string)))
Python's regular expressions offer the \s character class, which matches any whitespace in [ \t\n\r\f\v] (unless the UNICODE flag is set, in which case it depends on the character database in use).
As mentioned in the other answers (#abarnert), your regex could be \s*\n, which is zero or more whitespace characters ending with a \n. Below is an example.
In [1]: import re
In [2]: from itertools import ifilter
In [3]: my_string = """\r\nSenor Sisig \nThe Chairman\nCupkates\nLittle Green Cyclo\nSanguchon\nSeoul on Wheels\nKasa Indian\n\nGo Streatery\nWhip Out!\nLiba Falafel\nGrilled Cheese Bandits\r\n"""
In [4]: list(ifilter(None, re.split(r"\s*\n", my_string)))
Out[4]:
['Senor Sisig',
'The Chairman',
'Cupkates',
'Little Green Cyclo',
'Sanguchon',
'Seoul on Wheels',
'Kasa Indian',
'Go Streatery',
'Whip Out!',
'Liba Falafel',
'Grilled Cheese Bandits']
Note that I'm using ifilter from the itertools package. You could use filter or a list comp.
Like so:
[x for x in re.split("\s*\n", my_string) if x]
Given a list of actors, with their character name in brackets, separated by either a semi-colon (;) or comma (,):
Shelley Winters [Ruby]; Millicent Martin [Siddie]; Julia Foster [Gilda];
Jane Asher [Annie]; Shirley Ann Field [Carla]; Vivien Merchant [Lily];
Eleanor Bron [Woman Doctor], Denholm Elliott [Mr. Smith; abortionist];
Alfie Bass [Harry]
How would I parse this into a list of two-tuples in the form of [(actor, character), ...]?
--> [('Shelley Winters', 'Ruby'), ('Millicent Martin', 'Siddie'),
('Denholm Elliott', 'Mr. Smith; abortionist')]
I originally had:
actors = [item.strip().rstrip(']') for item in re.split(r'\[|,|;', data['actors'])]
data['actors'] = [(actors[i], actors[i + 1]) for i in range(0, len(actors), 2)]
But this doesn't quite work, as it also splits up items within brackets.
You can go with something like:
>>> re.findall(r'(\w[\w\s\.]+?)\s*\[([\w\s;\.,]+)\][,;\s$]*', s)
[('Shelley Winters', 'Ruby'),
('Millicent Martin', 'Siddie'),
('Julia Foster', 'Gilda'),
('Jane Asher', 'Annie'),
('Shirley Ann Field', 'Carla'),
('Vivien Merchant', 'Lily'),
('Eleanor Bron', 'Woman Doctor'),
('Denholm Elliott', 'Mr. Smith; abortionist'),
('Alfie Bass', 'Harry')]
One can also simplify some things with .*?:
re.findall(r'(\w.*?)\s*\[(.*?)\][,;\s$]*', s)
actorList = []
dataList = []

inputData = inputData.replace("];", "\n")
inputData = inputData.replace("],", "\n")
inputData = inputData[:-1]

for line in inputData.split("\n"):
    actorList.append(line.partition("[")[0])
    dataList.append(line.partition("[")[2])

togetherList = zip(actorList, dataList)
This is a bit of a hack, and I'm sure you can clean it up from here. I'll walk through this approach just to make sure you understand what I'm doing.
I am replacing both ]; and ], with a newline, which I later use to split every pair onto its own line. Assuming your content doesn't contain stray ]; or ], sequences, this should work. However, you'll notice the last line ends with a ] because it didn't need a comma or semi-colon after it, so the third line slices it off.
Then, just using the partition function on each line that we created within your input string, we assign the left part to the actor list, the right part to the data list and ignore the bracket (which is at position 1).
After that, Python's very useful zip function should finish the job for us by associating the ith element of each list together into a list of matched tuples.
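A condensed sketch of the same replace/partition idea; the function name is mine, not the poster's:

```python
import re

def parse_actors(data):
    pairs = []
    # "];" or "]," ends each actor/role pair; strip any trailing "]" so
    # every chunk looks like "Name [Role", then partition on the bracket.
    for chunk in re.split(r'\]\s*[;,]\s*', data.strip().rstrip(']')):
        if not chunk.strip():
            continue
        actor, _, role = chunk.partition('[')
        pairs.append((actor.strip(), role.strip()))
    return pairs
```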