concision in a for loop without list comprehension - python

Please don't laugh. I'm trying to write a simple script that will replace the hostname and IP of a base VM. I have a working version of this, but I'm trying to make it more readable and concise. I'm getting a syntax error with the code below. I was trying to turn these loops into list comprehensions, but since I'm working with file objects, that won't work. Thanks in advance.
try:
    old_network = open("/etc/sysconfig/network", "r+")
    new_network = open("/tmp/network", "w+")
    replacement = "HOSTNAME=" + str(sys.argv[1]) + "\n"
    shutil.copyfile('/etc/sysconfig/network', '/etc/sysconfig/network.setup_bak')
    for line in old_network: new_network.write(line) if not re.match(("HOSTNAME"), line)
    for line in old_network: new_network.write(replacement) if re.match(("HOSTNAME"), line)
    os.rename("/tmp/network","/etc/sysconfig/network")
    print 'Hostname set to', str(sys.argv[1])
except IOError, e:
    print "Error %s" % e
    pass

You are using some odd syntax here:
for line in old_network: new_network.write(line) if not re.match(("HOSTNAME"), line)
for line in old_network: new_network.write(replacement) if re.match(("HOSTNAME"), line)
You need to reverse the if statement there (put the if before the call, or use a full conditional expression with an else), and you have to combine the two loops into one, which lets you simplify the if tests too:
for line in old_network:
    if not re.match("HOSTNAME", line):
        new_network.write(line)
    else:
        new_network.write(replacement)
You cannot really loop over an input file twice (your second loop wouldn't do anything as the file has already been read in full).
Next, you want to use the open file objects as context managers (using with) to make sure they are closed properly, whatever happens. You can drop the + from the file modes, since you are not using the files in mixed read/write mode, and the backup copy is best made first, before opening anything for reading or writing.
There is no need to use a regular expression here; you are testing for the presence of a straightforward simple string, 'HOSTNAME' in line will do, or perhaps line.strip().startswith('HOSTNAME') to make sure the line starts with HOSTNAME.
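For instance, a quick sketch of the two plain-string tests (the sample line is made up):
line = "HOSTNAME=oldhost.example.com\n"
if line.strip().startswith('HOSTNAME'):  # matches only when the line begins with HOSTNAME
    print 'found the HOSTNAME line'
if 'HOSTNAME' in line:  # looser test: matches HOSTNAME anywhere in the line
    print 'HOSTNAME appears somewhere in this line'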
Use the tempfile module to create a temporary file with a name that won't conflict:
from tempfile import NamedTemporaryFile
shutil.copyfile('/etc/sysconfig/network', '/etc/sysconfig/network.setup_bak')
replacement = "HOSTNAME={}\n".format(sys.argv[1])
new_network = NamedTemporaryFile(mode='w', delete=False)
with open("/etc/sysconfig/network", "r") as old_network, new_network:
    for line in old_network:
        if line.lstrip().startswith('HOSTNAME'):
            line = replacement
        new_network.write(line)
os.rename(new_network.name, "/etc/sysconfig/network")
print 'Hostname set to {}'.format(sys.argv[1])
You can simplify this even further by using the fileinput module, which lets you replace a file's contents by simply printing; it supports creating a backup file natively:
import fileinput
import sys
replacement = "HOSTNAME={}\n".format(sys.argv[1])
for line in fileinput.input('/etc/sysconfig/network', inplace=True, backup='.setup_bak'):
    if line.lstrip().startswith('HOSTNAME'):
        line = replacement
    sys.stdout.write(line)
print 'Hostname set to {}'.format(sys.argv[1])
That's 6 lines of code (not counting imports) versus your 12. We can squash this down to just 4 by using a conditional expression, but I am not sure if that makes things more readable:
for line in fileinput.input('/etc/sysconfig/network', inplace=True, backup='.setup_bak'):
    sys.stdout.write(replacement if line.lstrip().startswith('HOSTNAME') else line)


Python failing to read lines properly

I'm supposed to open a file, read it line by line and display the lines.
Here's the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import re
in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"
csv_read_line = open(in_path, "rb").read().split("\n")
line_number = 0
for line in csv_read_line:
    line_number += 1
    print str(line_number) + line
Here's the contents of the input file:
12345^67890^abcedefg
random^test^subject
this^sucks^crap
And here's the result:
this^sucks^crapjectfg
Some weird combo of all three. In addition to this, the line_number output is missing. Printing len(csv_read_line) outputs 1, for some reason, no matter how many lines are in the input file. Changing the split character from \n to ^ gives the expected output, though, so I'm assuming the problem is with the input file.
I'm using a Mac, and did both the python code and the input file (on Sublime Text) on the Mac itself.
Am I missing something?
You seem to be splitting on "\n" which isn't necessary, and could be incorrect depending on the line terminators used in the input file. Python includes functionality to iterate over the lines of a file one at a time. The advantages are that it will worry about processing line terminators in a portable way, as well as not requiring the entire file to be held in memory at once.
Further, note that you are opening the file in binary mode (the b character in your mode string) when you actually intend to read the file as text. This can cause problems similar to the one you are experiencing.
Also, you do not close the file when you are done with it. In this case that isn't a problem, but you should get in the habit of using with blocks when possible to make sure the file gets closed at the earliest possible time.
Try this:
with open(in_path, "r") as f:
    line_number = 0
    for line in f:
        line_number += 1
        print str(line_number) + line.rstrip('\r\n')
So your example just works for me. But then, I just copied your text into a text editor on Linux and did it that way, so any carriage returns will have been wiped out.
Try this code though:
import os
in_path = "input.txt"
with open(in_path, "rb") as inputFile:
    for lineNumber, line in enumerate(inputFile):
        print lineNumber, line.strip()
It's a little cleaner, and the for line in file style deals with line breaks for you in a system-independent way - Python 2's open supports universal newlines via the 'U' mode flag.
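If the input file really does use bare carriage returns (old Mac line endings), which would explain both len(csv_read_line) == 1 and the overprinted output, universal-newline mode normalizes them. A minimal sketch, reusing the in_path from the question:
in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
with open(in_path, 'rU') as f:  # 'rU': \r, \n and \r\n all become '\n'
    for line_number, line in enumerate(f, start=1):
        print str(line_number) + line.rstrip('\n')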
I'd try the following Pythonic code:
#!/usr/bin/env python
in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"
with open(in_path, 'rb') as f:
    for i, line in enumerate(f):
        print(str(i) + line)
There are several improvements that can be made here to make it more idiomatic python.
import csv
in_path = "../vas_output/Glyph/20140623-FLYOUT_mins_cleaned.csv"
out_path = "../vas_gender/Glyph/"
#Let's open the file and make sure that it closes when we unindent
with open(in_path, "rb") as input_file:
    #Create a csv reader object that will parse the input for us
    reader = csv.reader(input_file, delimiter="^")
    #Enumerate over the rows (these will be lists of strings) and keep track
    #of the line number using python's built-in enumerate function
    for line_num, row in enumerate(reader):
        #You can process whatever you would like here. But for now we will just
        #print out what you were originally printing
        print str(line_num) + "^".join(row)

Write strings to another file

The Problem - Update:
I could get the script to print out, but I had a hard time figuring out how to send the output to a file instead of the screen. The script below worked for printing results to the screen. I posted the solution right after this code; scroll to the [ solution ] at the bottom.
First post:
I'm using Python 2.7.3. I am trying to extract the last words of a text file after the colon (:) and write them into another txt file. So far I am able to print the results on the screen and it works perfectly, but when I try to write the results to a new file it gives me str has no attribute write/writeline. Here is the code snippet:
# the txt file I'm trying to extract last words from and write strings into a file
#Hello:there:buddy
#How:areyou:doing
#I:amFine:thanks
#thats:good:I:guess
x = raw_input("Enter the full path + file name + file extension you wish to use: ")

def ripple(x):
    with open(x) as file:
        for line in file:
            for word in line.split():
                if ':' in word:
                    try:
                        print word.split(':')[-1]
                    except (IndexError):
                        pass

ripple(x)
The code above works perfectly when printing to the screen. However I have spent hours reading Python's documentation and can't seem to find a way to have the results written to a file. I know how to open a file and write to it with writeline, readline, etc, but it doesn't seem to work with strings.
Any suggestions on how to achieve this?
PS: I didn't add the code that caused the write error, because I figured this would be easier to look at.
End of First Post
The Solution - Update:
Managed to get Python to extract and save the results into another file with the code below.
The Code:
inputFile = open('c:/folder/Thefile.txt', 'r')
outputFile = open('c:/folder/ExtractedFile.txt', 'w')
tempStore = outputFile

for line in inputFile:
    for word in line.split():
        if ':' in word:
            splitting = word.split(':')[-1]
            tempStore.writelines(splitting + '\n')
            print splitting

inputFile.close()
outputFile.close()
Update:
Check out droogans's code over mine; it's more efficient.
Try this:
with open('workfile', 'w') as f:
    f.write(word.split(':')[-1] + '\n')
If you really want to use print for this, you can:
from __future__ import print_function
print("hi there", file=f)
according to Correct way to write line to file in Python. You should add the __future__ import if you are using Python 2; if you are using Python 3, it's already there.
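Combining the two snippets above into one self-contained sketch (the word and the 'workfile' name are just the examples already used here):
from __future__ import print_function

word = "Hello:there:buddy"
with open('workfile', 'w') as f:
    print(word.split(':')[-1], file=f)  # writes "buddy" plus a newline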
I think your question is good, and when you're done, you should head over to code review and get your code looked at for other things I've noticed:
# the txt file I'm trying to extract last words from and write strings into a file
#Hello:there:buddy
#How:areyou:doing
#I:amFine:thanks
#thats:good:I:guess
First off, thanks for putting example file contents at the top of your question.
x = raw_input("Enter the full path + file name + file extension you wish to use: ")
I don't think this part is necessary. You can just give ripple a better parameter name than x. I think file_loc is a pretty standard one.
def ripple(x):
    with open(x) as file:
With open, you are able to make the operation happening to the file explicit. I also like to name my file object according to its job. In other words, with open(file_loc, 'r') as r: reminds me that r is going to be my file that is being read from.
        for line in file:
            for word in line.split():
                if ':' in word:
First off, your for word in line.split() statement does nothing but put the "Hello:there:buddy" string into a list: ["Hello:there:buddy"]. A better idea would be to pass split an argument, which does more or less what you're trying to do here. For example, "Hello:there:buddy".split(":") would output ['Hello', 'there', 'buddy'], making your search for colons an accomplished task.
                    try:
                        print word.split(':')[-1]
                    except (IndexError):
                        pass
Another advantage is that you won't need to check for an IndexError: split always returns at least one element (even an empty string splits into ['']), so taking [-1] can't fail. In the worst case it just writes an empty string for that line.
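A quick interpreter check of why the IndexError guard is unnecessary:
print "Hello:there:buddy".split(':')[-1]  # -> buddy
print "nocolonhere".split(':')[-1]        # -> nocolonhere (the whole word)
print "".split(':')[-1]                   # -> '' (an empty string, never an IndexError)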
ripple(x)
For ripple(x), you would instead call ripple('/home/user/sometext.txt').
So, try looking over this, and explore code review. There's a guy named Winston who does really awesome work with Python and self-described newbies. I always pick up new tricks from that guy.
Here is my take on it, re-written out:
import os #for renaming the output file

def ripple(file_loc='/typical/location/while/developing.txt'):
    outfile = "output.".join(os.path.basename(file_loc).split('.'))
    with open(outfile, 'w') as w:
        lines = open(file_loc, 'r').readlines() #everything is one giant list
        w.write('\n'.join([line.split(':')[-1] for line in lines]))

ripple()
Try breaking this down, line by line, and changing things around. It's pretty condensed, but once you pick up comprehensions and using lists, it'll be more natural to read code this way.
You are trying to call .write() on a string object.
You either got your arguments mixed up (you'll need to call fileobject.write(yourdata), not yourdata.write(fileobject)) or you accidentally re-used the same variable for both your open destination file object and storing a string.
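A minimal sketch of the right orientation (the file name here is illustrative, not from the question):
line = "Hello:there:buddy"
out_file = open('/tmp/last_words.txt', 'w')
out_file.write(line.split(':')[-1] + '\n')  # call .write() on the file object...
out_file.close()                            # ...never on the string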

Python Overwriting files after parsing

I'm new to Python, and I need to do a parsing exercise. I got a file, and I need to parse it (just the headers), but after the process I need to keep the file in the same format, with the same extension, and at the same place on disk, only with the new headers.
I tried this code...
for line in open('/home/name/db/str/dir/numbers/str.phy'):
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        print linepars
...and it does the job, but I don't know how to "overwrite" the file with the new parsing.
The easiest way, though by far not the most efficient (especially for long files), would be to rewrite the complete file.
You could do this by opening a second file handle and rewriting each line, except in the case of the header, you'd write the parsed header. For example,
fr = open('/home/name/db/str/dir/numbers/str.phy')
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w') # Name this whatever makes sense

for line in fr:
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        fw.write(linepars)
    else:
        fw.write(line)

fw.close()
fr.close()
EDIT: Note that this does not use readlines(), so it's more memory efficient. It also does not store every output line, but only one at a time, writing it to file immediately.
Just as a cool trick, you could use the with statement on the input file to avoid having to close it (Python 2.5+):
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w') # Name this whatever makes sense

with open('/home/name/db/str/dir/numbers/str.phy') as fr:
    for line in fr:
        if line.startswith('ENS'):
            linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
            fw.write(linepars)
        else:
            fw.write(line)

fw.close()
P.S. Welcome :-)
As others are saying here, you want to open a file and use that file object's .write() method.
The best approach would be to open an additional file for writing:
import os
current_cfg = open(...)
parsed_cfg = open(..., 'w')
for line in current_cfg:
    new_line = parse(line)
    print new_line
    parsed_cfg.write(new_line + '\n')
current_cfg.close()
parsed_cfg.close()
os.rename(....) # Rename old file to backup name
os.rename(....) # Rename new file into place
Additionally, I'd suggest looking at the tempfile module and using one of its methods for either naming your new file or opening/creating it. Personally I'd favor putting the new file in the same directory as the existing file to ensure that os.rename will work atomically (the configuration file name will be guaranteed to point at either the old file or the new file; in no case would it point at a partially written/copied file).
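A hedged sketch of that tempfile-plus-rename approach, reusing the path and regex from the question (the .bak name is just an example):
import os
import re
import tempfile

target = '/home/name/db/str/dir/numbers/str.phy'
fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(target))  # same directory, so rename stays atomic
with os.fdopen(fd, 'w') as fw:
    with open(target) as fr:
        for line in fr:
            if line.startswith('ENS'):
                line = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
            fw.write(line)
os.rename(target, target + '.bak')  # keep the original as a backup
os.rename(tmp_path, target)         # move the new file into place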
The following code DOES the job.
I mean it DOES overwrite the file IN PLACE; that's what the OP asked for. That's possible because the transformations only remove characters, so the write pointer of fo always stays BEHIND the read pointer of fi.
import re
regx = re.compile('\AENS([A-Z]+)0+([0-9]{6})')
with open('bomo.phy','rb+') as fi, open('bomo.phy','rb+') as fo:
    fo.writelines(regx.sub('\\1\\2', line) for line in fi)
I think the writing isn't performed by the operating system one line at a time but through a buffer, so several lines are read before a batch of transformed lines is written.
import re

newlines = []
for line in open('/home/name/db/str/dir/numbers/str.phy').readlines():
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        newlines.append(linepars)
    else:
        newlines.append(line)

open('/home/name/db/str/dir/numbers/str.phy', 'w').write(''.join(newlines))
(sidenote: Of course if you are working with large files, you should be aware that the level of optimization required may depend on your situation. Python by nature is very non-lazily-evaluated. The following solution is not a good choice if you are parsing large files, such as database dumps or logs, but a few tweaks such as nesting the with clauses and using lazy generators or a line-by-line algorithm can allow O(1)-memory behavior.)
import re

targetFile = '/home/name/db/str/dir/numbers/str.phy'

def replaceIfHeader(line):
    if line.startswith('ENS'):
        return re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    else:
        return line

with open(targetFile, 'r') as f:
    newText = ''.join(replaceIfHeader(line) for line in f)

try:
    # make backup of targetFile
    with open(targetFile, 'w') as f:
        f.write(newText)
except:
    # error encountered, do something to inform user where backup of targetFile is
    raise
edit: thanks to Jeff for suggestion

How do you read a file into a list in Python? [duplicate]

This question already has answers here:
How to read a file line-by-line into a list?
(28 answers)
Closed 8 years ago.
I want to prompt a user for a number of random numbers to be generated and saved to a file. He gave us that part. The part we have to do is to open that file, convert the numbers into a list, then find the mean, standard deviation, etc. without using the easy built-in Python tools.
I've tried using open but it gives me invalid syntax (the file name I chose was "numbers" and it saved into "My Documents" automatically, so I tried open(numbers, 'r') and open(C:\name\MyDocuments\numbers, 'r') and neither one worked).
with open('C:/path/numbers.txt') as f:
    lines = f.read().splitlines()
this will give you a list of values (strings) you had in your file, with newlines stripped.
Also, watch your backslashes in Windows path names, as those are also escape characters in strings. You can use forward slashes or doubled backslashes instead.
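For example, all three of these spell the same (made-up) Windows path safely:
p1 = 'C:/name/MyDocuments/numbers'     # forward slashes work fine on Windows
p2 = 'C:\\name\\MyDocuments\\numbers'  # doubled backslashes
p3 = r'C:\name\MyDocuments\numbers'    # raw string: backslashes are not escapes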
Two ways to read a file into a list in Python (note these are not mutually exclusive):
use of with - supported from Python 2.5 and above
use of list comprehensions
1. use of with
This is the Pythonic way of opening and reading files.
#Sample 1 - elucidating each step but not memory efficient
lines = []
with open(r"C:\name\MyDocuments\numbers") as file:
    for line in file:
        line = line.strip() #or some other preprocessing
        lines.append(line) #storing everything in memory!

#Sample 2 - a more pythonic and idiomatic way but still not memory efficient
with open(r"C:\name\MyDocuments\numbers") as file:
    lines = [line.strip() for line in file]

#Sample 3 - a more pythonic way with efficient memory usage. Proper usage of with and file iterators.
with open(r"C:\name\MyDocuments\numbers") as file:
    for line in file:
        line = line.strip() #preprocess line
        doSomethingWithThisLine(line) #take action on line instead of storing in a list. more memory efficient at the cost of execution speed.
The .strip() is used on each line of the file to remove the \n newline character that each line might have. When the with block ends, the file will be closed automatically for you. This is true even if an exception is raised inside it.
2. use of list comprehension
This could be considered inefficient, as the file descriptor might not be closed immediately. It could become an issue when this is called inside a function that opens thousands of files.
data = [line.strip() for line in open("C:/name/MyDocuments/numbers", 'r')]
Note that file closing is implementation dependent. Normally, unused variables are garbage collected by the Python interpreter. In CPython (the regular interpreter from python.org), it will happen immediately, since its garbage collector works by reference counting. In another interpreter, like Jython or IronPython, there may be a delay.
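If deterministic closing matters, the with form from section 1 gives the same list and closes the file promptly on any interpreter:
with open("C:/name/MyDocuments/numbers", 'r') as f:
    data = [line.strip() for line in f]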
f = open("file.txt")
lines = f.readlines()
Look over here. readlines() returns a list containing one line per element. Note that these lines contain the \n (newline character) at the end of the line. You can strip off this newline character by using the strip() method, i.e. call lines[index].strip() in order to get the string without the newline character.
As joaquin noted, do not forget to f.close() the file.
Converting strings to integers is easy: int("12").
The Pythonic way to read a file and put every line in a list:
from __future__ import with_statement #for python 2.5
with open('C:/path/numbers.txt', 'r') as f:
    lines = f.readlines()
Then, assuming that each line contains a number,
numbers =[int(e.strip()) for e in lines]
You need to pass a filename string to open. There's an extra complication when the string has \ in it, because that's a special string escape character to Python. You can fix this by doubling up each as \\ or by putting an r in front of the string as follows: r'C:\name\MyDocuments\numbers'.
Edit: The edits to the question make it completely different from the original, and since none of them was from the original poster I'm not sure they're warranted. However it does point out one obvious thing that might have been overlooked, and that's how to add "My Documents" to a filename.
In an English version of Windows XP, My Documents is actually C:\Documents and Settings\name\My Documents. This means the open call should look like:
open(r"C:\Documents and Settings\name\My Documents\numbers", 'r')
I presume you're using XP because you call it My Documents - it changed in Vista and Windows 7. I don't know if there's an easy way to look this up automatically in Python.
hdl = open("C:/name/MyDocuments/numbers", 'r')
milist = hdl.readlines()
hdl.close()
To summarize a bit from what people have been saying:
f=open('data.txt', 'w') # will make a new file or erase a file of that name if it is present
f=open('data.txt', 'r') # will open a file as read-only
f=open('data.txt', 'a') # will open a file for appending (appended data goes to the end of the file)
If you wish to have something in place similar to a try/catch:
with open('data.txt') as f:
    for line in f:
        print line
I think #movieyoda's code is probably what you should use, however.
If you have multiple numbers per line and you have multiple lines, you can read them in like this:
#!/usr/bin/env python
from os.path import dirname
with open(dirname(__file__) + '/data/path/filename.txt') as input_data:
    input_list = [map(int, num.split()) for num in input_data.readlines()]

Replace string in a specific line using python

I'm writing a python script to replace strings from a each text file in a directory with a specific extension (.seq). The strings replaced should only be from the second line of each file, and the output is a new subdirectory (call it clean) with the same file names as the original files, but with a *.clean suffix. The output file contains exactly the same text as the original, but with the strings replaced. I need to replace all these strings: 'K','Y','W','M','R','S' with 'N'.
This is what I've come up with after googling. It's very messy (2nd week of programming), and it stops at copying the files into the clean directory without replacing anything. I'd really appreciate any help.
Thanks before!
import os, shutil

os.mkdir('clean')
for file in os.listdir(os.getcwd()):
    if file.find('.seq') != -1:
        shutil.copy(file, 'clean')

os.chdir('clean')
for subdir, dirs, files in os.walk(os.getcwd()):
    for file in files:
        f = open(file, 'r')
        for line in f.read():
            if line.__contains__('>'): #indicator for the first line. the first line always starts with '>'. It's a FASTA file, if you've worked with dna/protein before.
                pass
            else:
                line.replace('M', 'N')
                line.replace('K', 'N')
                line.replace('Y', 'N')
                line.replace('W', 'N')
                line.replace('R', 'N')
                line.replace('S', 'N')
Some notes:
string.replace and re.sub are not in-place, so you should assign the return value back to your variable.
glob.glob is better for finding files in a directory matching a defined pattern...
maybe you should check whether the directory already exists before creating it (I just assumed this; it might not be your desired behavior)
the with statement takes care of closing the file in a safe way. If you don't want to use it you have to use try/finally.
in your example you were forgetting to put the suffix *.clean ;)
you were not actually writing the files; you could do it like I did in my example or use the fileinput module (which until today I did not know)
here's my example:
import re
import os
import glob

source_dir = os.getcwd()
target_dir = "clean"
source_files = [fname for fname in glob.glob(os.path.join(source_dir, "*.seq"))]

# check if target directory exists... if not, create it.
if not os.path.exists(target_dir):
    os.makedirs(target_dir)

for source_file in source_files:
    target_file = os.path.join(target_dir, os.path.basename(source_file) + ".clean")
    with open(source_file, 'r') as sfile:
        with open(target_file, 'w') as tfile:
            lines = sfile.readlines()
            # do the replacement in the second line.
            # (remember that arrays are zero indexed)
            lines[1] = re.sub("K|Y|W|M|R|S", 'N', lines[1])
            tfile.writelines(lines)

print "DONE"
hope it helps.
You should replace line.replace('M', 'N') with line=line.replace('M', 'N'). replace returns a copy of the original string with the relevant substrings replaced.
An even better way (IMO) is to use re.
import re
line="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
line=re.sub("K|Y|W|M|R|S",'N',line)
print line
Here are some general hints:
Don't use find for checking the file extension (e.g., this would also match "file1.seqdata.xls"). At least use file.endswith('seq'), or, better yet, os.path.splitext(file)[1]
Actually, don't do that altogether. This is what you want:
import glob
seq_files = glob.glob("*.seq")
Don't copy the files, it's much easier to use just one loop:
for filename in seq_files:
    in_file = open(filename)
    out_file = open(os.path.join("clean", filename), "w")
    # now read lines from in_file and write lines to out_file
Don't use line.__contains__('>'). What you mean is
if '>' in line:
(which will call __contains__ internally). But actually, you want to know whether the line starts with a ">", not if there's one somewhere within the line, be it at the beginning or not. So the better way would be this:
if line.startswith(">"):
I'm not familiar with your file type; if the ">" check really is just for determining the first line, there are better ways to do that (see the sketch below).
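A rough sketch of one such way (the file names are placeholders): decide by position with enumerate instead of inspecting the line's content.
import os

filename = 'example.seq'  # placeholder name
with open(filename) as in_file:
    with open(os.path.join('clean', filename), 'w') as out_file:
        for line_number, line in enumerate(in_file):
            if line_number == 0:
                out_file.write(line)  # first line: copy it through unchanged
            else:
                for ch in 'KYWMRS':   # the replacements from the question
                    line = line.replace(ch, 'N')
                out_file.write(line)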
You don't need the if block (you just pass). It's cleaner to write
if not something:
    do_things()
other_stuff()
instead of
if something:
    pass
else:
    do_things()
other_stuff()
Have fun learning Python!
You need to assign the result of the replacement back to the line variable:
line=line.replace('M', 'N')
You can also use the fileinput module for in-place editing:
import os, shutil, fileinput

if not os.path.exists('clean'):
    os.mkdir('clean')

for file in os.listdir("."):
    if file.endswith(".seq"):
        shutil.copy(file, 'clean')

os.chdir('clean')

for subdir, dirs, files in os.walk("."):
    for file in files:
        f = fileinput.FileInput(file, inplace=0)
        for n, line in enumerate(f):
            if line.lstrip().startswith('>'):
                pass
            elif n == 1: #replace 2nd line
                for repl in ["M", "K", "Y", "W", "R", "S"]:
                    line = line.replace(repl, 'N')
            print line.rstrip()
        f.close()
change inplace=0 to inplace=1 for in place editing of your files.
line.replace is not a mutator, it leaves the original string unchanged and returns a new string with the replacements made. You'll need to change your code to line = line.replace('R', 'N'), etc.
I think you also want to add a break statement at the end of your else clause, so that you don't iterate over the entire file, but stop after having processed line 2.
Lastly, you'll need to actually write the file out containing your changes. So far, you are just reading the file and updating the line in your program variable 'line'. You need to actually create an output file as well, to which you will write the modified lines.
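A rough sketch that puts the first and third points together for a single file (the file names are illustrative):
in_f = open('some.seq')
out_f = open('clean/some.seq.clean', 'w')
for n, line in enumerate(in_f):
    if n == 1:  # the second line is the one to clean up
        for ch in 'KYWMRS':
            line = line.replace(ch, 'N')  # replace() returns a new string
    out_f.write(line)
in_f.close()
out_f.close()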
