So I have a lot of data files, which have a name similar to this:
lvh_GTV_TwoField-3-401-86.txt
The things that change from file to file are the number 86 and GTV.
I'm trying to use this code to distinguish between files:
f.split('-')[3]
This, if I'm not mistaken, should split the filename at each -, and then take the element at index 3, which is 86. In my case I would really like to use int(f.split('-')[3]) because I need to compare it against another number. However, the output is actually '86.txt', so I can't convert it to an integer.
So my question is: how do I split the filename so that I only get the value 86, without the .txt extension?
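Just to illustrate what I'm seeing:
f = 'lvh_GTV_TwoField-3-401-86.txt'
f.split('-')[3]       # gives '86.txt'
int(f.split('-')[3])  # ValueError: invalid literal for int() with base 10: '86.txt'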
Thanks in advance.
You may also use the os.path.splitext function to remove the extension:
import os
os.path.splitext(f)[0].split('-')[3]
Or, more verbosely,
base, ext = os.path.splitext(f)
base.split('-')[3]
Given that the format is very controlled, you could simply slice the resulting string, something like:
f.split('-')[3][:-4] # '86', take all chars except the last 4 (.txt)
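So, assuming the extension is always exactly four characters, you could get the integer directly:
int(f.split('-')[3][:-4])  # 86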
Using PyPI package parse:
from parse import parse
parse("lvh_{}_TwoField-3-401-{:d}.txt", "lvh_GTV_TwoField-3-401-86.txt")[1]
# => 86 (as an int)
Using Python's built-in regular expression module re:
import re
m = re.match(
    r"lvh_.+_TwoField-3-401-(?P<the_number>\d+)\.txt",
    "lvh_GTV_TwoField-3-401-86.txt"
)
the_number = int(m.group('the_number'))
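Note that re.match returns None when a filename does not fit the pattern, so if that can happen you may want a small guard, something like:
if m is not None:
    the_number = int(m.group('the_number'))
else:
    the_number = None  # or skip / log the offending filename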
I wrote a small script in Python to concatenate some lines from different files into one file. But somehow it doesn't print what I want it to print from the function I wrote. I tried to spot the problem, but after one evening and one morning, I still can't find it. Could somebody help me please? Thanks a lot!
So I have a folder with a few thousand .fa files. In each .fa file, I would like to extract the lines starting with ">", and also make some changes to extract the information I like. In the end, I would like to combine all the information extracted from one file into one line in a new file, and then concatenate the information from all the .fa files into one .txt file.
So the folder:
% ls
EstimatedSpeciesTree.nwk HOG9998.fa concatenate_gene_list_HOG.py
HOG9997.fa HOG9999.fa output
One .fa file, for example:
>BnaCnng71140D [BRANA]
MTSSFKLSDLEEVTTNAEKIQNDLLKEILTLNAKTEYLRQFLHGSSDKTFFKKHVPVVSYEDMKPYIERVADGEPSEIIS
GGPITKFLRRYSF
>Cadbaweo98702.t [THATH]
MTSSFKLSDLEEVTTNAEKIQNDLLKEILTLNAKTEYLRQFLHGSSDKTFFKKHVPVVSYEDMKPYIERVADGEPSEIIS
GGPITKFLRRYSF
What I would like to have is one file like this:
HOG9997.fa BnaCnng71140D:BRANA Cadbaweo98702.t:THATH
HOG9998.fa Bkjfnglks098:BSFFE dsgdrgg09769.t
HOG9999.fa Dsdfdfgs1937:XDGBG Cadbaweo23425.t:THATH Dkkjewe098.t:THUGB
# NOTE: the number of lines in each .fa file is uncertain. Also, some lines have [ ], but some do not.
So my code is:
#!/usr/bin/env python3

import os
import re
import csv

def concat_hogs(a_file):
    output = []
    for row in a_file:  # select the gene names in each HOG fasta file
        if row.startswith(">"):
            trans = row.partition(" ")[0].partition(">")[2]
            if row.partition(" ")[2].startswith("["):
                species = re.search(r"\[(.*?)\]", row).group(1)
            else:
                species = ""
            output.append(trans + ":" + species)
    return '\t'.join(output)

folder = "/tmp/Fasta/"
print("Concatenate names in " + folder)
for file in os.listdir(folder):
    if file.endswith('.fa'):
        reader = csv.reader(file, delimiter="\t")
        print(file + concat_hogs(reader))
But the output only prints the file name, without the part that should be generated by the function concat_hogs(file). I don't understand why.
The error comes from you passing the name of the file to your concat_hogs function instead of an iterable file handle. You are missing the actual opening of the file for reading purposes.
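A minimal fix along those lines (just a sketch; the csv.reader is not needed at all, since a plain file handle already iterates line by line):
for file in os.listdir(folder):
    if file.endswith('.fa'):
        with open(os.path.join(folder, file)) as fh:
            print(file + concat_hogs(fh))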
I agree with Jay M that your code can be simplified drastically, not least by using regular expressions more efficiently. Also pathlib is awesome.
But I think it can be even more concise and expressive. Here is my suggestion:
#!/usr/bin/env python3

import re
from pathlib import Path

GENE_PATTERN = re.compile(
    r"^>(?P<trans>[\w.]+)\s+(?:\[(?P<species>\w+)])?"
)

def extract_gene(string: str) -> str:
    match = re.search(GENE_PATTERN, string)
    return ":".join(match.groups(default=""))

def concat_hogs(file_path: Path) -> str:
    with file_path.open("r") as file:
        return '\t'.join(
            extract_gene(row)
            for row in file
            if row.startswith(">")
        )

def main() -> None:
    folder = Path("/tmp/Fasta/")
    print("Concatenate names in", folder)
    for element in folder.iterdir():
        if element.is_file() and element.suffix == ".fa":
            print(element.name, concat_hogs(element))

if __name__ == '__main__':
    main()
I am using named capturing groups for the regular expression because I prefer it for readability and usability later on.
Also, I assume that the first group can only contain letters, digits and dots. Adjust the pattern if there are more options.
PS
Just to add a few additional explanations:
The pathlib module is a great tool for any basic filesystem-related task. Among a few other useful methods you can look up there, I use the Path.iterdir method, which just iterates over elements in that directory instead of creating an entire list of them in memory first the way os.listdir does.
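For example, a tiny sketch using the folder from the question:
from pathlib import Path

for p in Path("/tmp/Fasta/").iterdir():  # yields Path objects lazily, one at a time
    print(p.name, p.suffix)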
The RegEx Match.groups method returns a tuple of the matched groups, the default parameter allows setting the value when a group was not matched. I put an empty string there, so that I can always simply str.join the groups, even if the species-group was not found. Note that this .groups call will result in an AttributeError, if no match was found because then the match variable will be set to None. It may or may not be useful for you to catch this error.
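A quick illustration with the pattern from above, using a header line without a bracketed species tag:
m = re.search(GENE_PATTERN, ">BnaCnng71140D\n")
m.groups()             # ('BnaCnng71140D', None)
m.groups(default="")   # ('BnaCnng71140D', '')
":".join(m.groups(default=""))  # 'BnaCnng71140D:'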
For a few additional pointers about using regular expressions in Python, there is a great How-To-Guide in the docs. In addition I can only agree with Jay M about how useful regex101.com is, regardless of language specifics. Also, I think I would recommend using his approach of reading the entire file into memory as a single string first and then using re.findall on it to grab all matches at once. That is probably much more efficient than going line-by-line, unless you are dealing with gigantic files.
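A rough sketch of that whole-file variant, assuming the same GENE_PATTERN and file_path as above (note the pattern then needs the re.MULTILINE flag, so that ^ matches at the start of every line, not just the start of the string):
pattern = re.compile(GENE_PATTERN.pattern, re.MULTILINE)
text = file_path.read_text()
print('\t'.join(':'.join(match) for match in pattern.findall(text)))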
In concat_hogs I pass a generator expression to str.join. This is more efficient than first creating a list and passing that to join because no additional memory needs to be allocated. This is possible because str.join accepts any iterable of strings and that generator expression (... for ... in ...) returns a Generator, which inherits from Iterator and thus from Iterable. For more insight about the container inheritance structures I always refer to the collections.abc docs.
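For instance:
'\t'.join(str(n) for n in range(3))  # '0\t1\t2'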
Use standard Python libraries
In this case
regex (use a site such as regex101 to test your regex)
pathlib to encapsulate paths in a platform independent way
collections.namedtuple to make data more structured
A breakdown of the regex used here:
>([a-z0-9A-Z\.]+?)\s*(\n|\[([A-Z]+)\]?\n)
> The start of block character
(regex1) First matching block
\s* Any amount of whitespace (i.e. zero space is ok)
(regex2|regex3) A choice of two possible regexes
regex1: [a-z0-9A-Z\.]+? = one or more characters from the class: a to z, A to Z, 0 to 9 or a dot (non-greedy)
regex2: \n = a newline that immediately follows the whitespace
regex3: \[([A-Z]+)\]? = one or more upper-case letters inside square brackets (the closing bracket optional), followed by a newline
Note1: The parentheses create capture groups, which we later use to split out the fields.
Note2: The regex allows zero or more whitespace characters between the first and second part of the text line, which makes it more resilient.
import re
from collections import namedtuple
from pathlib import Path
import os

class HOG(namedtuple('HOG', ['filepath', 'filename', 'key', 'text'], defaults=[None])):
    __slots__ = ()

    def __str__(self):
        return f"{self.key}:{self.text}"

regex = re.compile(r">([a-z0-9A-Z\.]+?)\s*(\n|\[([A-Z]+)\]?\n)")

path = Path(os.path.abspath("."))
wildcard = "*.fa"
files = list(path.glob(wildcard))
print(f"Searching {path}/{wildcard} => found {len(files)} files")

data = {}
for file in files:
    print(f"Processing {file}")
    with file.open() as hf:
        text = hf.read()
    matches = regex.findall(text)
    for match in matches:
        key = match[0].strip()
        text = match[-1].strip()
        if file.name not in data:
            data[file.name] = []
        data[file.name].append(HOG(path, file.name, key, text))

print("Now you have the data you can process it as you like")
for file, entries in data.items():
    line = "\t".join(str(e) for e in entries)
    print(file, line)

# e.g. Write the output as desired
with Path("output").open("w") as fh:
    for file, entries in data.items():
        line = "\t".join(str(e) for e in entries)
        fh.write(f"{file}\t{line}\n")
So I've got the code below. When I run tests to spit out all the files in A1_dir and A2_list, all of the files show up, but when I try to get fnmatch to work, I get no results.
For background, in case it's helpful: I am trying to comb through a directory of files and take an action (duplicate the file) only IF it matches a file name on the newoutput.txt list. I'm sure there's a better way to do all of this lol, so if you have that I'd love to hear it too!
import fnmatch
import os

A1_dir = 'C:/users/alexd/kobe'
A2_list = open('C:/users/alexd/kobe/newoutput.txt')
Lines = A2_list.readlines()
A2_list.close()

for file in os.listdir(A1_dir):
    for line in Lines:
        if fnmatch.fnmatch(file, line):
            print(f"got one:{file}")
readline returns a single line and readlines returns all the lines as a list (see the docs). In both cases, however, each line keeps its trailing \n, i.e. the newline character.
A simple fix here would be to change
Lines = A2_list.readlines()
to
Lines = [i.strip() for i in A2_list.readlines()]
Since you asked for a better way, you could take a look at set operations.
Since the lines are exactly what you want the file names to be (and not patterns), save A2_list as a set instead of a list.
Next, save all the files from os.listdir also as a set.
Finally, perform a set intersection
import os

with open('C:/users/alexd/kobe/newoutput.txt') as fp:
    myfiles = set(i.strip() for i in fp.readlines())

all_files = set(os.listdir('C:/users/alexd/kobe'))

for f in all_files.intersection(myfiles):
    print(f"got one:{f}")
You cannot use fnmatch.fnmatch to compare two different filenames: as described in the official documentation, fnmatch.fnmatch takes two parameters, filename and pattern respectively.
Possible Solution:
I don't think you have to use any special function to compare two strings. Both os.listdir() and .readlines() return lists of strings.
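So a plain membership test is enough - a sketch along those lines (the strip is needed because of the trailing newlines mentioned in the other answer):
import os

with open('C:/users/alexd/kobe/newoutput.txt') as fp:
    wanted = [line.strip() for line in fp]

for file in os.listdir('C:/users/alexd/kobe'):
    if file in wanted:
        print(f"got one:{file}")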
I'm currently working with tkinter in Python (beginner), and I'm writing a small applet that requires one of the labels to dynamically change based on what the name of a selected .csv file is, without the '.csv' tag.
I can currently get the filepath to the .csv file using askopenfilename(), which returns a string that looks something like "User/Folder1/.../filename.csv". I need some way to extract "filename" from this filepath string, and I'm a bit stuck on how to do it. Is this simply a regex problem? Or is there a way to do this using string indices? Which is the "better" way to do it? Any help would be great. Thank you.
EDIT: The reason I was wondering if regex is the right way to do it is because there could be duplicates, e.g. if the user had something like "User/Folder1/hello/hello.csv". That's why I was thinking maybe just use string indices, since the file name I need will always end at [:-4]. Am I thinking about this the right way?
Solution:
import os
file = open('/some/path/to/a/test.csv')
fname = os.path.splitext(file.name)[0].split('/')[-1]
print(fname)
# test
If you get the file path and name as a string, then:
import os
file = "User/Folder1/test/filename.csv"
fname = os.path.splitext(file)[0].split('/')[-1]
print(fname)
# filename
Explanation of how it works:
Pay attention that the command is os.path.splitEXT, not os.path.splitTEXT - a very common mistake.
The command takes an argument of type string, so if we use file = open(...), we need to pass os.path.splitext the path string, which the file object exposes as its .name attribute. Therefore in our first scenario we use:
file.name
Now, this command splits the complete "path + name" string into two parts:
os.path.splitext(file.name)
# result:
('/some/path/to/a/test', '.csv')
In our case we only need the first part, so we take it by specifying the index:
os.path.splitext(file.name)[0]
# result:
'/some/path/to/a/test'
Now, since we only need the file name and not the whole path, we split it by /:
os.path.splitext(file.name)[0].split('/')
# result:
['', 'some', 'path', 'to', 'a', 'test']
And out of this we only need the last element, or in other words, the first from the end:
os.path.splitext(file.name)[0].split('/')[-1]
Hope this helps.
Check for more here: Extract file name from path, no matter what the os/path format
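As a side note, pathlib (Python 3.4+) gets you the same result via Path.stem, and it handles the "User/Folder1/hello/hello.csv" case from the question just as well:
from pathlib import Path

Path("User/Folder1/hello/hello.csv").stem
# 'hello'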
I'm having issues with .replace(). My XML parser does not like '&', but will accept '&amp;'. I'd like to use .replace('&', '&amp;') but this does not seem to be working. I keep getting the error:
lxml.etree.XMLSyntaxError: xmlParseEntityRef: no name, line 51, column 41
So far I have tried just a straightforward file = file.replace('&', '&amp;'), but this doesn't work. I've also tried:
xml_file = infile
file = xml_file.readlines()
for line in file:
    for char in line:
        char.replace('&', '&amp;')
infile = open('a', 'w')
file = '\n'.join(file)
infile.write(file)
infile.close()
infile = open('a', 'r')
xml_file = infile
What would be the best way to fix my issue?
str.replace creates and returns a new string. It can't alter strings in-place - they're immutable. Try replacing:
file=xml_file.readlines()
with
file = [line.replace('&', '&amp;') for line in xml_file]
This uses a list comprehension to build a list equivalent to .readlines() but with the replacement already made.
As pointed out in the comments, if there were already &amp;s in the string, they'd be turned into &amp;amp;, likely not what you want. To avoid that, you could use a negative lookahead in a regular expression to replace only the ampersands not already followed by amp;:
import re

file = [re.sub("&(?!amp;)", "&amp;", line) for line in xml_file]
str.replace() returns a new string object with the change made. It does not change the data in-place. You are ignoring the return value.
You want to apply it to each line instead:
file = [line.replace('&', '&amp;') for line in file]
You could use the fileinput module to do the transformation, and have it handle replacing the original file (a backup will be made):
import fileinput
import sys

for line in fileinput.input('filename', inplace=True):
    sys.stdout.write(line.replace('&', '&amp;'))
Oh...
You need to decode HTML notation for special symbols. Python has a module to deal with it - HTMLParser (see its docs).
Here is example:
import HTMLParser

htmlparser = HTMLParser.HTMLParser()  # the instance that provides .unescape()
out_file = ....
file = xml_file.readlines()
parsed_lines = []
for line in file:
    parsed_lines.append(htmlparser.unescape(line))
Slightly off topic, but it might be good to use some escaping?
I often use urllib's quote and unquote, which put the escaping in and out (as percent-escapes):
import urllib

result = urllib.quote("filename&fileextension")
# result: 'filename%26fileextension'
urllib.unquote(result)
# 'filename&fileextension'
Might help for consistency?
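In the same spirit, the standard library also has xml.sax.saxutils.escape, which escapes &, < and > specifically for XML (though, like a plain replace, it would double-escape an ampersand that is already part of an entity):
from xml.sax.saxutils import escape

escape('Ben & Jerry')
# 'Ben &amp; Jerry'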
Using Python, I'm trying to rename a series of .txt files in a directory according to a specific phrase in each given text file. Put differently and more specifically, I have a few hundred text files with arbitrary names but within each file is a unique phrase (something like No. 85-2156). I would like to replace the arbitrary file name with that given phrase for every text file. The phrase is not always on the same line (though it doesn't deviate that much) but it always is in the same format and with the No. prefix.
I've looked at the os module and I understand how
os.listdir
os.path.join
os.rename
could be useful but I don't understand how to combine those functions with intratext manipulation functions like linecache or general line reading functions.
I've thought through many ways of accomplishing this task, but it seems like the easiest and most efficient way would be to create a loop that finds the unique phrase in a file, assigns it to a variable and uses that variable to rename the file before moving on to the next one.
This seems like it should be easy, so much so that I feel silly writing this question. I've spent the last few hours reading documentation and parsing through StackOverflow, but it doesn't seem like anyone has had quite this issue before -- or at least they haven't asked about it.
Can anyone point me in the right direction?
EDIT 1: When I create the regex pattern using this website, it creates bulky but seemingly workable code:
import re
txt='No. 09-1159'
re1='(No)' # Word 1
re2='(\\.)' # Any Single Character 1
re3='( )' # White Space 1
re4='(\\d)' # Any Single Digit 1
re5='(\\d)' # Any Single Digit 2
re6='(-)' # Any Single Character 2
re7='(\\d)' # Any Single Digit 3
re8='(\\d)' # Any Single Digit 4
re9='(\\d)' # Any Single Digit 5
re10='(\\d)' # Any Single Digit 6
rg = re.compile(re1+re2+re3+re4+re5+re6+re7+re8+re9+re10,re.IGNORECASE|re.DOTALL)
m = rg.search(txt)
name = m.group(0)
print name
When I manipulate that to fit the glob.glob structure, and make it like this:
import glob
import os
import re

re1='(No)' # Word 1
re2='(\\.)' # Any Single Character 1
re3='( )' # White Space 1
re4='(\\d)' # Any Single Digit 1
re5='(\\d)' # Any Single Digit 2
re6='(-)' # Any Single Character 2
re7='(\\d)' # Any Single Digit 3
re8='(\\d)' # Any Single Digit 4
re9='(\\d)' # Any Single Digit 5
re10='(\\d)' # Any Single Digit 6
rg = re.compile(re1+re2+re3+re4+re5+re6+re7+re8+re9+re10,re.IGNORECASE|re.DOTALL)

for fname in glob.glob("\file\structure\here\*.txt"):
    with open(fname) as f:
        contents = f.read()
    tname = rg.search(contents)
    print tname
Then this prints out the byte location of the pattern -- signifying that the regex pattern is correct. However, when I add in the nname = tname.group(0) line after the original tname = rg.search(contents) and change the print function to reflect the change, it gives me the following error: AttributeError: 'NoneType' object has no attribute 'group'. When I tried copying and pasting @joaquin's code line for line, it came up with the same error. I was going to post this as a comment to the @spatz answer, but I wanted to include so much code that this seemed a better way to express the `new' problem. Thank you all for the help so far.
Edit 2: This is for the @joaquin answer below:
import glob
import os
import re

for fname in glob.glob("/directory/structure/here/*.txt"):
    with open(fname) as f:
        contents = f.read()
    tname = re.search('No\. (\d\d\-\d\d\d\d)', contents)
    nname = tname.group(1)
    print nname
Last Edit: I got it to work using mostly the code as written. What was happening is that some files didn't contain the regex pattern, and I had assumed Python would just skip them. Silly me. So I spent three days learning to write two lines of code (I know the lesson is more than that). I also used the error catching method recommended here. I wish I could accept all of your answers, but I bothered @joaquin the most, so I gave it to him. This was a great learning experience. Thank you all for being so generous with your time. The final code is below.
import os
import re

pat3 = "No\. (\d\d-\d\d\d\d)"
ext = '.txt'
mydir = '/directory/files/here'

for arch in os.listdir(mydir):
    archpath = os.path.join(mydir, arch)
    with open(archpath) as f:
        txt = f.read()
    s = re.search(pat3, txt)
    if s is None:
        continue
    name = s.group(1)
    newpath = os.path.join(mydir, name + ext)
    if not os.path.exists(newpath):
        os.rename(archpath, newpath)
    else:
        print '{} already exists, passing'.format(newpath)
Instead of providing you with some code which you will simply copy-paste without understanding, I'd like to walk you through the solution so that you will be able to write it yourself, and more importantly gain enough knowledge to be able to do it alone next time.
The code which does what you need is made up of three main parts:
Getting a list of all filenames you need to iterate
For each file, extract the information you need to generate a new name for the file
Rename the file from its old name to the new one you just generated
Getting a list of filenames
This is best achieved with the glob module. This module allows you to specify shell-like wildcards and it will expand them. This means that in order to get a list of the .txt files in a given directory, you will need to call the function glob.iglob("/path/to/directory/*.txt") and iterate over its result (for filename in ...:).
Generate new name
Once we have our filename, we need to open() it, read its contents using read() and store it in a variable where we can search for what we need. That would look something like this:
with open(filename) as f:
    contents = f.read()
Now that we have the contents, we need to look for the unique phrase. This can be done using regular expressions. Store the new filename you want in a variable, say newfilename.
Rename
Now that we have both the old and the new filenames, we need to simply rename the file, and that is done using os.rename(filename, newfilename).
If you want to move the files to a different directory, use os.rename(filename, os.path.join("/path/to/new/dir", newfilename)). Note that we need os.path.join here to construct the new path for the file from a directory path and newfilename.
There is no checking or protection for failures (checking if archpath is a file, if newpath already exists, if the search is successful, etc.), but this should work:
import os
import re

pat = "No\. (\d\d\-\d\d\d\d)"
mydir = 'mydir'

for arch in os.listdir(mydir):
    archpath = os.path.join(mydir, arch)
    with open(archpath) as f:
        txt = f.read()
    s = re.search(pat, txt)
    name = s.group(1)
    newpath = os.path.join(mydir, name)
    os.rename(archpath, newpath)
Edit: I tested the regex to show how it works:
>>> import re
>>> pat = "No\. (\d\d\-\d\d\d\d)"
>>> txt='nothing here or whatever No. 09-1159 you want, does not matter'
>>> s = re.search(pat, txt)
>>> s.group(1)
'09-1159'
>>>
The regex is very simple:
\. -> a dot
\d -> a decimal digit
\- -> a dash
So, it says: search for the string "No. " followed by 2+4 decimal digits separated by a dash.
The parentheses are to create a group that I can recover with s.group(1) and that contains the code number.
The text of the test files one.txt, two.txt and three.txt is always the same, only the number changes:
this is the first
file with a number
nothing here or whatever No. 09-1159 you want, does not matter
the number is
Create a backup of your files, then try something like this:
import glob
import os

def your_function_to_dig_out_filename(lines):
    import re
    # i'll let you attempt this yourself

for fn in glob.glob('/path/to/your/dir/*.txt'):
    with open(fn) as f:
        spam = f.readlines()
    new_fn = your_function_to_dig_out_filename(spam)
    if not os.path.exists(new_fn):
        os.rename(fn, new_fn)
    else:
        print '{} already exists, passing'.format(new_fn)