Trying to solve old Google Code Jam problems for practice - python

I'm working on some of the old Google Code Jam problems as practice in Python, since we don't use this language at my school. The current problem I'm working on is supposed to simply reverse the word order of a string.
Here is my code:
import sys
f = open("B-small-practice.in", 'r')
T = int(f.readline().strip())
array = []
for i in range(T):
    array.append(f.readline().strip().split(' '))
for ar in array:
    if len(ar) > 1:
        count = len(ar) - 1
        while count > -1:
            print ar[count]
            count -= 1
The problem is that instead of printing:
test a is this
My code prints:
test
a
is
this
Please let me know how to format my loop so that it prints all in one line. Also, I've had a bit of a learning curve when it comes to reading input from a .txt file and manipulating it, so if you have any advice on different methods to do so for such problems, it would be much appreciated!

print by default appends a newline. If you don't want that behavior, tack a comma on the end.
while count > -1:
    print ar[count],
    count -= 1
Note that a far easier way to reverse a list is to slice it with a step of -1 (and join the result into a string):
for ar in array:
    print ' '.join(ar[::-1])  # this replaces the body of your "for ar in array" loop

There are multiple ways you can alter your output format. Here are a few:
One way is by adding a comma at the end of your print statement:
print ar[count],
This makes print suppress the trailing newline, so successive prints continue on the same line, separated by spaces.
Another way is by building up a string, then printing it after the loop:
for ar in array:
    if len(ar) > 1:
        reversed_words = ''
        count = len(ar) - 1
        while count > -1:
            reversed_words += ar[count] + ' '
            count -= 1
        print reversed_words
A third way is by rethinking your approach, and using some useful built-in Python functions and features to simplify your solution:
for ar in array:
    print ' '.join(ar[::-1])
(See str.join and slicing).
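To make those two pieces concrete, here is a quick interactive session (the sample words are illustrative):
>>> words = ['this', 'is', 'a', 'test']
>>> words[::-1]            # a step of -1 walks the list backwards
['test', 'a', 'is', 'this']
>>> ' '.join(words[::-1])  # join glues the words together with single spaces
'test a is this'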
Finally, you asked about reading from a file. You're already doing that just fine, but you forgot to close the file descriptor after you were done reading, using f.close(). Even better, with Python 2.5+, you can use the with keyword:
array = list()
with open(filename) as f:
    T = int(f.readline().strip())
    for i in range(T):
        array.append(f.readline().strip().split(' '))
# The rest of your code here
This will close the file descriptor f automatically after the with block, even if something goes wrong while reading the file.

T = int(raw_input())
for t in range(T):
    print "Case #%d:" % (t+1),
    print ' '.join(raw_input().split(' ')[::-1])


Censoring a txt file with python

Hi, I could really use some help on a Python project I'm working on. Basically, I have a list of banned words, and I must go through a .txt file, search for these specific words, and change each of them from its original form to asterisks (***).
text_file = open('filename.txt','r')
text_file_read = text_file.readlines()
banned_words = ['is','activity', 'one']
words = []
i = 0
while i < len(text_file_read):
    words.append(text_file_read[i].strip().lower().split())
    i += 1
i = 0
while i < len(words):
    if words[i] in banned_words:
        words[i] = '*'*len(words[i])
    i += 1
i = 0
text_file_write = open('filename.txt', 'w')
while i < len(text_file_read):
    print(' '.join(words[i]), file = text_file_write)
    i += 1
The expected output would be:
This **
********
***?
However, it's:
This is
activity
one?
Any help is greatly appreciated! I'm also trying to avoid using external libraries.
I cannot solve this for you (haven't touched Python in a while), but the best debugging tip I can offer is: print everything. Take the first loop and print every iteration, or print what words is afterwards. It will give you insight into what's going wrong, and once you know what is working in an unexpected way, you can search for how to fix it.
Also, if you're just starting out, avoid chaining methods. It ends up a bit unreadable, and you can't see what each method is doing. In my opinion, at least, it's better to have 30 lines of readable and easy-to-understand code than 5 that take some brain power to understand.
Good luck!
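For instance, here is a minimal sketch of that print-everything approach applied to the question's first loop (the sample lines stand in for the file's contents):
banned_words = ['is', 'activity', 'one']
text_file_read = ['This is\n', 'activity\n', 'one?\n']  # stand-in for readlines()
words = []
for line in text_file_read:
    words.append(line.strip().lower().split())
print(words)
# prints [['this', 'is'], ['activity'], ['one?']]
# Each element is a whole list of words, not a single word, so a later
# membership test like words[i] in banned_words can never be true.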
A simpler way, if you just need to print it:
banned_words = ['is','activity', 'one']
output = ""
f = open('filename.txt','r')
for line in f:
    for word in line.split():
        if word not in banned_words:
            output += word + " "
        else:
            output += "*"*len(word) + " "
    output += "\n"
print(output)

Breaking single line data to multi-line data

I'm working on a project where I'm given raw log data and need to parse it into a readable state. I know enough Python to strip off all the unneeded parts, and I'm left with raw data that needs to be split and formatted, but I can't figure out a way to break it apart when multiple records are put on the same line, which does not always happen.
This is the string value I am getting so far:
*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000*190206*01,2050,0100550,01,4999,0000000,,
I need to break it apart so that each line starts with the starred number value. Even though I can assume there will only be one or two of them, I can't think of a way to do it, and the number of comma-separated values after each one varies, so I can't go by length either. This is what I am looking to get, so I can do further operations with the data from the above example:
*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000
*190206*01,2050,0100550,01,4999,0000000,,
txt = "*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000*190206*01,2050,0100550,01,4999,0000000,,"
output = list()
i = 0
x = txt.split("*")
while i < len(x):
if len(x[i]) == 0:
i += 1
continue
print ("*{0}*{1}".format(x[i],x[i+1]))
output.append("*{0}*{1}".format(x[i],x[i+1]))
i += 2
Use split to tokenize the fields between the * characters.
Print two consecutive tokens at a time.
You can use regex:
([*][0-9]*[*])
You can catch the header part with this and then split according to it.
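Here is a minimal sketch of that regex idea using re.split on a shortened sample of the string (the variable names are illustrative). The capturing group makes re.split keep the headers in the result, so consecutive pairs can be glued back together:
import re
txt = "*190205*12,6000,0000000*190206*01,2050,0100550"  # shortened sample
parts = re.split(r'([*][0-9]*[*])', txt)
# parts == ['', '*190205*', '12,6000,0000000', '*190206*', '01,2050,0100550']
records = [parts[i] + parts[i + 1] for i in range(1, len(parts) - 1, 2)]
for record in records:
    print(record)
# *190205*12,6000,0000000
# *190206*01,2050,0100550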
Same approach as @mujiga's answer, but I thought a dict might be better for further operations:
txt = "*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000*190206*01,2050,0100550,01,4999,0000000,,"
datadict=dict()
i=0
x=txt.split("*")
while i < len(x):
if len(x[i]) == 0:
i += 1
continue
datadict[x[i]]=x[i+1]
i += 2
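Once the dict is built, a record can be pulled straight out by its header; note the keys carry no stars because split consumed them (illustrative):
print(datadict["190205"])  # the full comma-separated payload for that header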
Adding on to @Ali Nuri Seker's suggestion to use regex, here's a simple one lacking lookarounds (which might actually hurt it in this case):
>>> import re
>>> string = '''*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000*190206*01,2050,0100550,01,4999,0000000,,'''
>>> print(re.sub(r'([\*][0-9,]+[\*]+[0-9,]+)', r'\n\1', string))
#Output
*190205*12,6000,0000000,12,6000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000,13,2590,0000000,13,7000,0000000,13,7000,0000000
*190206*01,2050,0100550,01,4999,0000000,,

Python - Splitting a large string by number of delimiter occurrences

I'm still learning Python, and I have a question I haven't been able to solve. I have a very long string (millions of lines long) which I would like to split into smaller strings based on a specified number of occurrences of a delimiter.
For instance:
ABCDEF
//
GHIJKLMN
//
OPQ
//
RSTLN
//
OPQR
//
STUVW
//
XYZ
//
In this case I would want to split based on "//" and return a string of all lines before the nth occurrence of the delimiter.
So an input of splitting the string by // by 1 would return:
ABCDEF
an input of splitting the string by // by 2 would return:
ABCDEF
//
GHIJKLMN
an input of splitting the string by // by 3 would return:
ABCDEF
//
GHIJKLMN
//
OPQ
And so on... However, the length of the original 2-million-line string appeared to be a problem when I simply tried to split the entire string by "//" and work with the individual indexes (I was getting a memory error). Perhaps Python can't handle so many lines in one split? So I can't do that.
I'm looking for a way that doesn't split the entire string into hundreds of thousands of indexes when I may only need 100, but instead starts from the beginning, goes until a certain point, stops, and returns everything before it, which I assume may also be faster. I hope my question is as clear as possible.
Is there a simple or elegant way to achieve this? Thanks!
If you want to work with files instead of strings in memory, here is another answer.
This version is written as a function that reads lines and immediately prints them out until the specified number of delimiters have been found (no extra memory needed to store the entire string).
def file_split(file_name, delimiter, n=1):
    with open(file_name) as fh:
        for line in fh:
            line = line.rstrip()  # use .rstrip("\n") to only strip newlines
            if line == delimiter:
                n -= 1
                if n <= 0:
                    return
            print line
file_split('data.txt', '//', 3)
You can use this to write the output to a new file like this:
python split.py > newfile.txt
With a little extra work, you can use argparse to pass parameters to the program.
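A minimal sketch of that argparse idea (assuming the function above is saved in split.py; the option names are illustrative):
import argparse
parser = argparse.ArgumentParser(description="Print a file up to the nth delimiter line.")
parser.add_argument("file_name", help="input file to read")
parser.add_argument("delimiter", help="the delimiter line, e.g. //")
parser.add_argument("-n", type=int, default=1, help="stop after this many delimiter lines")
args = parser.parse_args()
file_split(args.file_name, args.delimiter, args.n)
With that in place, the earlier example becomes python split.py data.txt // -n 3 > newfile.txt.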
A more efficient way is to read only the first N sections separated by your delimiter. If you are sure that content lines and delimiter lines alternate, you can use itertools.islice to do the job:
from itertools import islice
N = 3  # hypothetical: the number of sections you want
with open('filename') as f:
    lines = islice(f, 0, 2*N - 1)
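Note that islice returns a lazy iterator tied to the open file, so consume it inside the with block; a usage sketch under the same assumption about N:
from itertools import islice
N = 3  # hypothetical: the number of sections you want
with open('filename') as f:
    for line in islice(f, 0, 2*N - 1):
        print(line, end='')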
The method that comes to my mind when I read your question uses a for loop, where you cut the string up into chunks (for example, the 100 you mentioned) and iterate through each substring. The placeholders in the sketch below mark where your own condition and handling go.
thestring = "" #your string
steps = 100 #length of the strings you are going to use for iteration
log = 0
substring = thestring[:log+steps] #this is the string you will split and iterate through
thelist = substring.split("//")
for element in thelist:
if(element you want):
#do your thing with the line
else:
log = log+steps
# and go again from the start only with this offset
Now you can go through all the elements without working through the whole 2 million(!) line string at once.
The best thing to do here is actually to make a recursive function from this (if that is what you want):
thestring = "" #your string
steps = 100 #length of the strings you are going to use for iteration
def iterateThroughHugeString(beginning):
substring = thestring[:beginning+steps] #this is the string you will split and iterate through
thelist = substring.split("//")
for element in thelist:
if(element you want):
#do your thing with the line
else:
iterateThroughHugeString(beginning+steps)
# and go again from the start only with this offset
For instance:
# delimiter and max_split are assumed to be defined
i = 0
s = ""
fd = open("...")
for l in fd:
    if l[:-1] == delimiter:  # skip last '\n'
        i += 1
        if i >= max_split:
            break
    s += l
fd.close()
Since you are learning Python, it would be a good challenge to model a complete dynamic solution. Here's a notion of how you can model one.
Note: the following code snippet only works for files in the given format (see the 'For instance' in the question); hence, it is a static solution.
num = int(input("Enter the number of delimiters: ")) * 2
with open("./data.txt") as myfile:
    print([next(myfile) for x in range(num - 1)])
Now that you have the idea, you can use pattern matching and so on.

Counting words in a dictionary (Python)

I have this code, which is meant to open a specified file, count every while loop in it, and finally output the total number of while loops in that file. I decided to convert the input file to a dictionary, then create a for loop that adds 1 to WHILE_ every time the word while followed by a space is seen, before finally printing WHILE_ at the end.
However this did not seem to work, and I am at a loss as to why. Any help fixing this would be much appreciated.
This is the code I have at the moment:
WHILE_ = 0
INPUT_ = input("Enter file or directory: ")
OPEN_ = open(INPUT_)
READLINES_ = OPEN_.readlines()
STRING_ = (str(READLINES_))
STRIP_ = STRING_.strip()
input_str1 = STRIP_.lower()
dic = dict()
for w in input_str1.split():
    if w in dic.keys():
        dic[w] = dic[w]+1
    else:
        dic[w] = 1
DICT_ = (dic)
for LINE_ in DICT_:
    if ("while\\n',") in LINE_:
        WHILE_ += 1
    elif ('while\\n",') in LINE_:
        WHILE_ += 1
    elif ('while ') in LINE_:
        WHILE_ += 1
print ("while_loops {0:>12}".format((WHILE_)))
This is the input file I was working from:
'''A trivial test of metrics
Author: Angus McGurkinshaw
Date: May 7 2013
'''
def silly_function(blah):
    '''A silly docstring for a silly function'''
    def nested():
        pass
    print('Hello world', blah + 36 * 14)
    tot = 0  # This isn't a for statement
    for i in range(10):
        tot = tot + i
    if_im_done = false  # Nor is this an if
    print(tot)
blah = 3
while blah > 0:
    silly_function(blah)
    blah -= 1
while True:
    if blah < 1000:
        break
The output should be 2, but my code at the moment prints 0.
This is an incredibly bizarre design. You're calling readlines to get a list of strings, then calling str on that list, which will join the whole thing up into one big string with the quoted repr of each line joined by commas and surrounded by square brackets, then splitting the result on spaces. I have no idea why you'd ever do such a thing.
Your bizarre variable names, extra useless lines of code like DICT_ = (dic), etc. only serve to obfuscate things further.
But I can explain why it doesn't work. Try printing out DICT_ after you do all that silliness, and you'll see that the only keys that include while are while and 'while. Since neither of these match any of the patterns you're looking for, your count ends up as 0.
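You can see the mangling in a quick interactive session (using a shortened stand-in for the file's lines):
>>> lines = ['blah = 3\n', 'while blah > 0:\n', '    while True:\n']
>>> str(lines).lower().split()
["['blah", '=', "3\\n',", "'while", 'blah', '>', "0:\\n',", "'", 'while', "true:\\n']"]
Both 'while and while show up as tokens, but none of them contain while followed by a space or the quote-and-backslash patterns the question's code checks for.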
It's also worth noting that you only add 1 to WHILE_ even if there are multiple instances of the pattern, so your whole dict of counts is useless.
This will be a lot easier if you don't obfuscate your strings in the first place, only to try to recover them and then match the incorrectly-recovered versions. Just do it directly.
While I'm at it, I'm also going to fix some other problems so that your code is readable, and simpler, and doesn't leak files, and so on. Here's a complete implementation of the logic you were trying to hack up by hand:
import collections
filename = input("Enter file: ")
counts = collections.Counter()
with open(filename) as f:
    for line in f:
        counts.update(line.strip().lower().split())
print('while_loops {0:>12}'.format(counts['while']))
When you run this on your sample input, you correctly get 2. And extending it to handle if and for is trivial and obvious.
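For instance, the extension is just two more lookups against the same counter (a sketch; the column widths mirror the ast example below):
print('for_loops {0:>14}'.format(counts['for']))
print('if_statements {0:>10}'.format(counts['if']))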
However, note that there's a serious problem in your logic: anything that looks like a keyword but is in the middle of a comment or string will still get picked up. Without writing some kind of code to strip out comments and strings, there's no way around that, which means you're going to overcount if and for by 1 here. The obvious way of stripping (line.partition('#')[0], and similarly for quotes) won't work. First, it's perfectly valid to have a string before an if keyword, as in "foo" if x else "bar". Second, you can't handle multiline strings this way.
These problems, and others like them, are why you almost certainly want a real parser. If you're just trying to parse Python code, the ast module in the standard library is the obvious way to do this. If you want to be write quick&dirty parsers for a variety of different languages, try pyparsing, which is very nice, and comes with some great examples.
Here's a simple example:
import ast
filename = input("Enter file: ")
with open(filename) as f:
    tree = ast.parse(f.read())
while_loops = sum(1 for node in ast.walk(tree) if isinstance(node, ast.While))
print('while_loops {0:>12}'.format(while_loops))
Or, more flexibly:
import ast
import collections
filename = input("Enter file: ")
with open(filename) as f:
    tree = ast.parse(f.read())
counts = collections.Counter(type(node).__name__ for node in ast.walk(tree))
print('while_loops {0:>12}'.format(counts['While']))
print('for_loops {0:>14}'.format(counts['For']))
print('if_statements {0:>10}'.format(counts['If']))

Confusion about string find?

I have a list of data that I want to search through. This new list of data is structured like so.
name, address, dob, family members, age, height, etc.
I want to search through the lines of data so that I stop the search at the ',' that appears after the name to optimize the search. I believe I want to use this command:
str.find(sub[, start[, end]])
I'm having trouble writing the code in this structure though. Any tips on how to make string find work for me?
Here is some sample data:
Bennet, John, 17054099","5","156323558","-","0", 714 //
Menendez, Juan,7730126","5","158662525" 11844 //
Brown, Jamal,"9","22966592","+","0",,"4432 //
The idea is that I want my program to search only up to the first ',' and not through the rest of each long line.
EDIT. So here is my code.
I want to search the lines in completedataset only until the first comma. I'm still confused as to how I should implement these suggestions in my existing code.
counter = 1
for line in completedataset:
    print counter
    counter += 1
    for t in matchedLines:
        if t in line:
            smallerdataset.write(line)
You can do it quite directly:
s = 'Bennet, John, 17054099","5","156323558","-","0", 714 //'
print s.find('Bennet', 0, s.index(','))  # search only up to the index of the first ','
If I understand your specs correctly,
for thestring in listdata:
    firstcomma = thestring.find(',')
    havename = thestring.find(name, 0, firstcomma)
    if havename >= 0:
        print "found name:", thestring[:firstcomma]
Edit: given the OP's edit of the Q, this would become something like:
counter = 1
for line in completedataset:
    print counter
    counter += 1
    firstcomma = line.find(',')
    for t in matchedLines:  # restoring the inner loop from the question's code
        havename = line.find(t, 0, firstcomma)
        if havename >= 0:
            smallerdataset.write(line)
Of course, the use of counter is unPythonically low-level, and a better equivalent would be:
for counter, line in enumerate(completedataset):
    print counter + 1
    firstcomma = line.find(',')
    for t in matchedLines:
        havename = line.find(t, 0, firstcomma)
        if havename >= 0:
            smallerdataset.write(line)
but that doesn't affect the question as asked.
You will probably search in each line, so you can just split it on ', ' and then search the first element:
for line in file:
    name = line.split(', ')[0]
    if name.find('smth') != -1:  # find returns -1 when the substring is absent
        break
Any reason why you have to use find?
Why not just do something like:
if str.split(",", 1)[0] == search_string:
...
Edit:
Just thought I'd point out that I was testing this, and the split approach seems just as fast as (if not faster than) find. Test the performance of both approaches using the timeit module and see what you get.
Try:
python -m timeit -n 10000 -s "a='''Bennet, John, 17054099','5','156323558','-','0', 714'''" "a.split(',',1)[0] == 'Bennet'"
then compare with:
python -m timeit -n 10000 -s "a='''Bennet, John, 17054099','5','156323558','-','0', 714'''" "a.find('Bennet', 0, a.find(','))"
Make the name longer (e.g. "BennetBennetBennetBennetBennetBennet") and you'll see that find suffers more than split.
Note: I am using split with the maxsplit option.
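To see what maxsplit buys, note that the line is split only once, at the first comma, rather than at every comma:
>>> 'Bennet, John, 17054099'.split(',', 1)
['Bennet', ' John, 17054099']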
If you're checking a lot of names against each line, it seems like the biggest optimization might be only processing each line for commas once!
for line in completedataset:
    i = line.index(',')
    first_field = line[:i]
    for name in matchedNames:
        if name in first_field:
            smalldataset.append(name)
