Replace string in a specific line using Python

I'm writing a Python script to replace strings in each text file in a directory with a specific extension (.seq). Only strings in the second line of each file should be replaced, and the output is a new subdirectory (call it clean) with the same file names as the original files, but with a *.clean suffix. Each output file contains exactly the same text as the original, but with the strings replaced. I need to replace all of these strings: 'K', 'Y', 'W', 'M', 'R', 'S' with 'N'.
This is what I've come up with after googling. It's very messy (2nd week of programming), and it stops at copying the files into the clean directory without replacing anything. I'd really appreciate any help.
Thanks in advance!
import os, shutil
os.mkdir('clean')
for file in os.listdir(os.getcwd()):
    if file.find('.seq') != -1:
        shutil.copy(file, 'clean')
os.chdir('clean')
for subdir, dirs, files in os.walk(os.getcwd()):
    for file in files:
        f = open(file, 'r')
        for line in f.read():
            if line.__contains__('>'): # indicator for the first line. the first line always starts with '>'. It's a FASTA file, if you've worked with dna/protein before.
                pass
            else:
                line.replace('M', 'N')
                line.replace('K', 'N')
                line.replace('Y', 'N')
                line.replace('W', 'N')
                line.replace('R', 'N')
                line.replace('S', 'N')

Some notes:
string.replace and re.sub are not in-place, so you should be assigning the return value back to your variable.
glob.glob is better for finding files in a directory matching a defined pattern.
Maybe you should be checking whether the directory already exists before creating it (I just assumed this; it might not be your desired behavior).
The with statement takes care of closing the file in a safe way; if you don't want to use it, you have to use try/finally (see the sketch after the example below).
In your example you were forgetting to put the suffix *.clean ;)
You were not actually writing the files; you could do it like I did in my example, or use the fileinput module (which until today I did not know).
Here's my example:
import re
import os
import glob

source_dir = os.getcwd()
target_dir = "clean"
source_files = [fname for fname in glob.glob(os.path.join(source_dir, "*.seq"))]

# check if target directory exists... if not, create it.
if not os.path.exists(target_dir):
    os.makedirs(target_dir)

for source_file in source_files:
    target_file = os.path.join(target_dir, os.path.basename(source_file) + ".clean")
    with open(source_file, 'r') as sfile:
        with open(target_file, 'w') as tfile:
            lines = sfile.readlines()
            # do the replacement in the second line.
            # (remember that arrays are zero indexed)
            lines[1] = re.sub("K|Y|W|M|R|S", 'N', lines[1])
            tfile.writelines(lines)

print "DONE"
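For reference, the try/finally equivalent of the two with blocks above would look something like this (same variables as in the example):
sfile = open(source_file, 'r')
try:
    tfile = open(target_file, 'w')
    try:
        lines = sfile.readlines()
        lines[1] = re.sub("K|Y|W|M|R|S", 'N', lines[1])
        tfile.writelines(lines)
    finally:
        tfile.close()  # runs even if the replacement raises
finally:
    sfile.close()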
hope it helps.

You should replace line.replace('M', 'N') with line = line.replace('M', 'N'); replace returns a copy of the original string with the relevant substrings replaced.
An even better way (IMO) is to use re.
import re
line="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
line=re.sub("K|Y|W|M|R|S",'N',line)
print line
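For what it's worth, a character class is an equivalent, slightly more compact way to write that alternation:
import re
line = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
line = re.sub("[KYWMRS]", 'N', line)  # same result as "K|Y|W|M|R|S"
print line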

Here are some general hints:
Don't use find for checking the file extension (e.g., this would also match "file1.seqdata.xls"). At least use file.endswith('seq'), or, better yet, os.path.splitext(file)[1]
Actually, don't do that altogether. This is what you want:
import glob
seq_files = glob.glob("*.seq")
Don't copy the files, it's much easier to use just one loop:
for filename in seq_files:
    in_file = open(filename)
    out_file = open(os.path.join("clean", filename), "w")
    # now read lines from in_file and write lines to out_file
Don't use line.__contains__('>'). What you mean is
if '>' in line:
(which will call __contains__ internally). But actually, you want to know whether the line starts with a ">", not if there's one somewhere within the line, be it at the beginning or not. So the better way would be this:
if line.startswith(">"):
I'm not familiar with your file type; if the ">" check really is just for determining the first line, there are better ways to do that.
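For example, if the check is really just "leave the first line alone and fix the second", enumerate makes that explicit. A minimal sketch, reusing filename and the clean directory from above:
import os
import re
with open(filename) as in_file:
    with open(os.path.join("clean", filename + ".clean"), "w") as out_file:
        for i, line in enumerate(in_file):
            if i == 1:  # the second line; enumerate counts from 0
                line = re.sub("[KYWMRS]", "N", line)
            out_file.write(line)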
You don't need the if block (you just pass). It's cleaner to write
if not something:
    do_things()
other_stuff()
instead of
if something:
    pass
else:
    do_things()
other_stuff()
Have fun learning Python!

You need to assign the result of the replacement back to the line variable:
line = line.replace('M', 'N')
You can also use the fileinput module for in-place editing:
import os, shutil, fileinput
if not os.path.exists('clean'):
    os.mkdir('clean')
for file in os.listdir("."):
    if file.endswith(".seq"):
        shutil.copy(file, 'clean')
os.chdir('clean')
for subdir, dirs, files in os.walk("."):
    for file in files:
        f = fileinput.FileInput(file, inplace=0)
        for n, line in enumerate(f):
            if line.lstrip().startswith('>'):
                pass
            elif n == 1:  # replace the 2nd line
                for repl in ["M", "K", "Y", "W", "R", "S"]:
                    line = line.replace(repl, 'N')
            print line.rstrip()
        f.close()
change inplace=0 to inplace=1 for in place editing of your files.
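With inplace=1, fileinput redirects standard output into the file being read, so each print statement emits a line of the new file. A standalone sketch (example.seq is a placeholder name):
import fileinput
for n, line in enumerate(fileinput.input('example.seq', inplace=1)):
    if n == 1:  # second line only
        for repl in ["M", "K", "Y", "W", "R", "S"]:
            line = line.replace(repl, 'N')
    print line.rstrip()  # goes into example.seq, not to the screen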

line.replace is not a mutator; it leaves the original string unchanged and returns a new string with the replacements made. You'll need to change your code to line = line.replace('R', 'N'), etc.
I think you also want to add a break statement at the end of your else clause, so that you don't iterate over the entire file, but stop after having processed line 2.
Lastly, you'll need to actually write the file out containing your changes. So far, you are just reading the file and updating the line in your program variable 'line'. You need to actually create an output file as well, to which you will write the modified lines.
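Putting those three points together, a minimal sketch (assuming, as in the question, that only the second line needs the substitutions; filename stands for the .seq file being processed, and readlines takes the place of the loop-plus-break):
import os
with open(filename) as f:
    lines = f.readlines()
for ch in 'KYWMRS':
    lines[1] = lines[1].replace(ch, 'N')  # reassign: replace returns a new string
with open(os.path.join('clean', filename + '.clean'), 'w') as out:
    out.writelines(lines)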

Related

finding matches between a) all the files in a directory and b) a txt list of files not working with fnmatch

So I've got the code below, and when I run tests to spit out all the files in A1_dir and A2_list, all of the files show up, but when I try to get the fnmatch to work, I get no results.
For background, in case it's helpful: I am trying to comb through a directory of files and take an action (duplicate the file) only IF it matches a file name on the newoutput.txt list. I'm sure there's a better way to do all of this lol, so if you have that I'd love to hear it too!
import fnmatch
import os
A1_dir = ('C:/users/alexd/kobe')
A2_list = open('C:/users/alexd/kobe/newoutput.txt')
Lines = A2_list.readlines()
A2_list.close()
for file in (os.listdir(A1_dir)):
    for line in Lines:
        if fnmatch.fnmatch(file, line):
            print("got one:{file}")
readline returns a single line and readlines returns all the lines as a list (see the docs). However, in both cases, the lines keep their trailing \n, i.e. the newline character.
A simple fix here would be to change
Lines = A2_list.readlines()
to
Lines = [i.strip() for i in A2_list.readlines()]
Since you asked for a better way, you could take a look at set operations.
Since the lines are exactly what you want the file names to be (and not patterns), save A2_list as a set instead of a list.
Next, save all the files from os.listdir also as a set.
Finally, perform a set intersection
import os
with open('C:/users/alexd/kobe/newoutput.txt') as fp:
    myfiles = set(i.strip() for i in fp.readlines())
all_files = set(os.listdir('C:/users/alexd/kobe'))
for f in all_files.intersection(myfiles):
    print(f"got one:{f}")
You cannot use fnmatch.fnmatch to compare two different filenames: as the official documentation shows, fnmatch.fnmatch only accepts two parameters, filename and pattern respectively.
Possible solution:
I don't think you need any function to compare two strings here. Both os.listdir() and .readlines() return lists of strings.
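So a plain string comparison (reusing A1_dir and Lines from the question, stripping the newline first) is enough:
import os
for file in os.listdir(A1_dir):
    for line in Lines:
        if file == line.strip():  # strip the trailing newline before comparing
            print(f"got one:{file}")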

python: content cleared when writing back to the same file

I am a newbie to Python. I have code in which I must write the contents back to the same file, but when I do, it clears my content. Please help me fix it.
How should I modify my code so that the contents are written back to the same file?
My code:
import re
numbers = {}
with open('1.txt') as f, open('11.txt', 'w') as f1:
    for line in f:
        row = re.split(r'(\d+)', line.strip())
        words = tuple(row[::2])
        if words not in numbers:
            numbers[words] = [int(n) for n in row[1::2]]
        numbers[words] = [n+1 for n in numbers[words]]
        row[1::2] = map(str, numbers[words])
        indentation = re.match(r"\s*", line).group()
        print (indentation + ''.join(row))
        f1.write(indentation + ''.join(row) + '\n')
In general, it's a bad idea to write over a file you're still processing (or change a data structure over which you are iterating). It can be done...but it requires much care, and there is little safety or restart-ability should something go wrong in the middle (an error, a power failure, etc.)
A better approach is to write a clean new file, then rename it to the old name. For example:
import re
import os

filename = '1.txt'
tempname = "temp{0}_{1}".format(os.getpid(), filename)
numbers = {}
with open(filename) as f, open(tempname, 'w') as f1:
    pass  # ... file processing as before
os.rename(tempname, filename)
Here I've dropped filenames (both original and temporary) into variables, so they can be easily referred to multiple times or changed. This also prepares for the moment when you hoist this code into a function (as part of a larger program), as opposed to making it the main line of your program.
You don't strictly need the temporary name to embed the process id, but it's a standard way of making sure the temp file is uniquely named (temp32939_1.txt vs temp_1.txt or tempfile.txt, say).
It may also be helpful to create backups of the files as they were before processing. In which case, before the os.rename(tempname, filename) you can drop in code to move the original data to a safer location or a backup name. E.g.:
backupname = filename + ".bak"
os.rename(filename, backupname)
os.rename(tempname, filename)
While beyond the scope of this question, if you used a read-process-overwrite strategy frequently, it would be possible to create a separate module that abstracts these file-handling details away from your processing code.
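Here is a sketch of what such a helper might look like (the overwrite name and interface are invented for illustration):
from contextlib import contextmanager
import os

@contextmanager
def overwrite(filename):
    # yields (input, output) handles; on success, the temp file
    # atomically replaces the original, as in the code above
    tempname = "temp{0}_{1}".format(os.getpid(), filename)
    with open(filename) as fin, open(tempname, 'w') as fout:
        yield fin, fout
    os.rename(tempname, filename)

# usage: the processing code no longer worries about naming or renaming
with overwrite('1.txt') as (f, f1):
    for line in f:
        f1.write(line)  # ... transform each line as needed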
Use
open('11.txt', 'a')
to append to the file, instead of 'w', which writes a new file (overwriting any existing content).
If you want to read and modify the file in one go, use 'r+' mode:
f = open('/path/to/file.txt', 'r+')
content = f.read()
content = content.replace('oldstring', 'newstring')  # for example, change some substring in the whole file
f.seek(0)      # move to the beginning of the file
f.write(content)
f.truncate()   # clear the file content "tail" left on disk if the new content is shorter than the old
f.close()

Python Overwriting files after parsing

I'm new to Python, and I need to do a parsing exercise. I got a file, and I need to parse it (just the headers), but after the process, I need to keep the file in the same format, with the same extension, and in the same place on disk, with only the headers changed.
I tried this code...
for line in open('/home/name/db/str/dir/numbers/str.phy'):
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        print linepars
...and it does the job, but I don't know how to "overwrite" the file with the new parsing.
The easiest way, though not the most efficient (by far, and especially for long files), would be to rewrite the complete file.
You could do this by opening a second file handle and rewriting each line, except in the case of the header, you'd write the parsed header. For example,
import re

fr = open('/home/name/db/str/dir/numbers/str.phy')
fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w') # Name this whatever makes sense
for line in fr:
    if line.startswith('ENS'):
        linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
        fw.write(linepars)
    else:
        fw.write(line)
fw.close()
fr.close()
EDIT: Note that this does not use readlines(), so it's more memory efficient. It also does not store every output line, but only one at a time, writing each to file immediately.
Just as a cool trick, you could use the with statement on the input file to avoid having to close it (Python 2.5+):
import re

fw = open('/home/name/db/str/dir/numbers/str.phy.parsed', 'w') # Name this whatever makes sense
with open('/home/name/db/str/dir/numbers/str.phy') as fr:
    for line in fr:
        if line.startswith('ENS'):
            linepars = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
            fw.write(linepars)
        else:
            fw.write(line)
fw.close()
P.S. Welcome :-)
As others are saying here, you want to open a file and use that file object's .write() method.
The best approach would be to open an additional file for writing:
import os
current_cfg = open(...)
parsed_cfg = open(..., 'w')
for line in current_cfg:
    new_line = parse(line)
    print new_line
    parsed_cfg.write(new_line + '\n')
current_cfg.close()
parsed_cfg.close()
os.rename(....) # Rename old file to backup name
os.rename(....) # Rename new file into place
Additionally, I'd suggest looking at the tempfile module and using one of its methods for either naming your new file or opening/creating it. Personally, I'd favor putting the new file in the same directory as the existing file to ensure that os.rename will work atomically (the configuration file name will be guaranteed to point at either the old file or the new file; in no case would it point at a partially written/copied file).
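For instance, a sketch using tempfile.NamedTemporaryFile in the same directory as the original (parse() is the hypothetical per-line function from the code above):
import os
import tempfile

src = '/path/to/config.cfg'  # placeholder path
# create the temp file next to the original so os.rename stays atomic
fw = tempfile.NamedTemporaryFile('w', dir=os.path.dirname(src) or '.', delete=False)
with open(src) as fr:
    for line in fr:
        fw.write(parse(line))
fw.close()
os.rename(fw.name, src)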
The following code DOES the job.
I mean it DOES overwrite the file ON ITSELF; that's what the OP asked for. That's possible because the transformations only remove characters, so the file pointer fo that writes always stays BEHIND the file pointer fi that reads.
import re
regx = re.compile('\AENS([A-Z]+)0+([0-9]{6})')
with open('bomo.phy', 'rb+') as fi, open('bomo.phy', 'rb+') as fo:
    fo.writelines(regx.sub('\\1\\2', line) for line in fi)
    fo.truncate()  # drop the leftover tail, since the new content is shorter
I think the writing isn't performed by the operating system one line at a time but through a buffer, so several lines are read before a pool of transformed lines is written. That's what I think.
import re

newlines = []
for line in open('/home/name/db/str/dir/numbers/str.phy').readlines():
    if line.startswith('ENS'):
        line = re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    newlines.append(line.rstrip('\n'))
open('/home/name/db/str/dir/numbers/str.phy', 'w').write('\n'.join(newlines))
(sidenote: Of course if you are working with large files, you should be aware that the level of optimization required may depend on your situation. Python by nature is very non-lazily-evaluated. The following solution is not a good choice if you are parsing large files, such as database dumps or logs, but a few tweaks such as nesting the with clauses and using lazy generators or a line-by-line algorithm can allow O(1)-memory behavior.)
import re

targetFile = '/home/name/db/str/dir/numbers/str.phy'

def replaceIfHeader(line):
    if line.startswith('ENS'):
        return re.sub('ENS([A-Z]+)0+([0-9]{6})', '\\1\\2', line)
    else:
        return line

with open(targetFile, 'r') as f:
    newText = ''.join(replaceIfHeader(line) for line in f)
try:
    # make backup of targetFile
    with open(targetFile, 'w') as f:
        f.write(newText)
except:
    # error encountered, do something to inform user where backup of targetFile is
    raise
edit: thanks to Jeff for suggestion

f.read coming up empty

I'm doing all this in the interpreter:
loc1 = '/council/council1'
file1 = open(loc1, 'r')
At this point I can do file1.read() and it prints the file's contents as a string to standard output.
But if I add this:
string1 = file1.read()
string1 comes back empty. I have no idea what I could be doing wrong; this seems like the most basic thing!
If I go on to type file1.read() again, the output to standard output is just an empty string. So, somehow I am losing my file when I try to create a string with file1.read().
You can only read a file once. After that, the current read-position is at the end of the file.
If you add file1.seek(0) before you re-read it, you should be able to read the contents again. A better approach, however, is to read into a string the first time and then keep it in memory:
loc1 = '/council/council1'
file1 = open(loc1, 'r')
string1 = file1.read()
print string1
You do not lose it; you just move the offset pointer to the end of the file and try to read some more data. Since it is the end of the file, no more data is available and you get an empty string. Try reopening the file or seeking back to position zero:
f.read()
f.seek(0)
f.read()
Using with is the best approach, because it closes the file for you after use (since Python 2.5):
with open('/council/council1', 'r') as input_file:
    text = input_file.read()
print(text)
To quote the official documentation on read():
To read a file's contents, call f.read(size). When size is omitted or negative, the entire contents of the file will be read and returned.
And the most relevant part:
If the end of the file has been reached, f.read() will return an empty string ('').
Which means that if you use read() twice consecutively, it is expected that the second time you'll get an empty string. Either store the contents the first time or use f.seek(0) to go back to the start. Together, read() and seek() provide a lower-level API that gives you greater control.
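The behavior is easy to reproduce in the interpreter ('...contents...' stands in for whatever your file holds):
>>> f = open('/council/council1')
>>> f.read()   # the first read returns the whole file
'...contents...'
>>> f.read()   # the position is now at end-of-file, so this returns ''
''
>>> f.seek(0)  # rewind
>>> f.read()   # the contents come back
'...contents...'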
Besides using a context manager to automatically open and close the file, there's another way to read a whole text file, using pathlib, example below:
#!/usr/bin/env python3
from pathlib import Path

txt_file = Path("myfile.txt")
try:
    content = txt_file.read_text()
except FileNotFoundError:
    print("Could not find file")
else:
    print(f"The content is: {content}")
    print(f"I can also read again: {txt_file.read_text()}")
As you can see, you can call read_text() several times and you'll get the full content, no surprises. Of course you wouldn't want to do that in production code since read_text() opens and closes the file each time, it's still best to store it. I could recommend pathlib highly when dealing with files and file paths.
It's outside the scope, but it may be worth noting a difference when reading line by line. Unlike the file object obtained by open(), PosixPath returned by Path() is not iterable. The equivalent of:
with open('file.txt') as f:
    for line in f:
        print(line)
Would be something like:
for line in Path('file.txt').read_text().split('\n'):
    print(line)
One advantage of the first approach, with open, is that the entire file is not read into memory at once.
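If you want pathlib ergonomics and lazy, line-by-line reading at the same time, note that Path objects also provide an open() method:
from pathlib import Path

with Path('file.txt').open() as f:
    for line in f:  # reads lazily, like the plain open() version
        print(line)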
Make sure your location is correct. Do you actually have a directory called /council under your root directory (/)? Also, use os.path.join() to create your path:
loc1 = os.path.join("/path", "dir1", "dir2")

Python: How do I split the file?

I have this txt file, which is the output of ls -R on the etc directory of a Linux system. Example file:
etc:
ArchiveSEL
xinetd.d
etc/cmm:
CMM_5085.bin
cmm_sel
storage.cfg
etc/crontabs:
root
etc/pam.d:
ftp
rsh
etc/rc.d:
eth.set.sh
rc.sysinit
etc/rc.d/init.d:
cmm
functions
userScripts
etc/security:
access.conf
console.apps
time.conf
etc/security/console.apps:
kbdrate
etc/ssh:
ssh_host_dsa_key
sshd_config
etc/var:
setUser
snmpd.conf
etc/xinetd.d:
irsh
wu-ftpd
I would like to split it by subdirectory into several files. Example file names would be: etc.txt, etcCmm.txt, etcCrontabs.txt, etcPamd.txt, ...
Can someone give me some Python code that can do that?
Notice that the subdirectory lines end with ':', but I'm just not smart enough to write the code. Some examples would be appreciated.
Thank you :)
Maybe something like this? re.M makes ^ match at the beginning of every line, so the pattern can pick out each directory block; the last part just iterates over the matches and creates the files...
import re
data = '<your input data as above>'  # or open('data.txt').read()
results = map(lambda m: (m[0], m[1].strip().splitlines()),
              re.findall('^([^\n]+):\n((?:[^\n]+\n)*)\n', data, re.M))
for dirname, files in results:
    f = open(dirname.replace('/', '') + '.txt', 'w')
    for line in files:
        f.write(line + '\n')
    f.close()
You will need to do it line by line. If a line.endswith(":"), then you are in a new subdirectory. From then on, each line is a new entry in your subdirectory, until another line ends with ':'.
From my understanding, you just want to split one text file into several, ambiguously named, text files.
So you'd check whether a line ends with ':'; then you open a new text file, like etcCmm.txt, and every line that you read from the source text, from that point on, you write into etcCmm.txt. When you encounter another line that ends in ':', you close the previously opened file, create a new one, and continue.
I'm leaving a few things for you to do yourself, such as figuring out what to call each text file, reading a file line by line, etc.
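Here is a rough sketch of that approach (ls_output.txt is a placeholder input name, and the output names are simply the paths with ':' and '/' stripped, rather than the camelCase names in the question):
out = None
for line in open('ls_output.txt'):
    line = line.rstrip('\n')
    if line.endswith(':'):  # a new subdirectory block starts here
        if out:
            out.close()
        out = open(line[:-1].replace('/', '') + '.txt', 'w')
    elif line and out:      # a file entry; skip blank separator lines
        out.write(line + '\n')
if out:
    out.close()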
Use a regexp like '.*:'.
Use file.readline().
Use loops.
If Python is not a must, you can use this awk one-liner:
awk '/:$/{gsub(/:|\//,"");fn=$0}{print $0 > fn".txt"}' file
Here's what I would do:
Read the file into memory (myfile = open(filename).read() should do).
Then split the file along the delimiters:
import re
myregex = re.compile(r"^(.*):[ \t]*$", re.MULTILINE)
arr = myregex.split(myfile)[1:] # dropping everything before the first directory entry
Then convert the array to a dict, removing unwanted characters along the way:
mydict = dict([(re.sub(r"\W+","",k), v.strip()) for (k,v) in zip(arr[::2], arr[1::2])])
Then write the files:
for name, content in mydict.iteritems():
    output = open(name + ".txt", "w")
    output.write(content)
    output.close()
