Select files from a path - Python

I have files in a particular path and need to select them one by one based on the filename (yyyymmdd.faifb1p16m2.nc), where yyyy is the year, mm is the month, and dd is the day. I made code like this:
import os

base_dir = 'C:/DATA2013'
os.chdir(base_dir)
# a single listdir() pass is enough; no outer loop needed
results = [each for each in os.listdir(base_dir)
           if each.endswith('.faifb1p16m2.nc')]
What should I do next if I want to select only the files for January, then February, and so on? Thank you.

You can do:
x = [i for i in results if i[4:6] == '01']
It will list all file names for January, assuming that all your files follow the format you described in the question.
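If you need every month rather than just January, the same slice can key a dictionary in one pass (a sketch, assuming every name starts with the yyyymmdd prefix):
from collections import defaultdict

by_month = defaultdict(list)
for name in results:
    by_month[name[4:6]].append(name)

print(by_month['01'])  # January files
print(by_month['02'])  # February files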

Two regexes:
\d{4}(?:\d?|\d{2})(?:\d?|\d{2})\.faifb1p16m2\.nc
\d{8}\.faifb1p16m2\.nc
Sample data:
20140131.faifb1p16m2.nc
2014131.faifb1p16m2.nc
201412.faifb1p16m2.nc
201411.faifb1p16m2.nc
20141212.faifb1p16m2.nc
2014121.faifb1p16m2.nc
201411.faifb1p16m2.nc
The first regex will match all 7 of those entries. The second regex will match only entries 1 and 5. I probably made the regexes more complicated than I needed to.
You're going to want the second regex, but I'm just listing the first as an example.
from glob import glob
import re
re1 = r'\d{4}(?:\d?|\d{2})(?:\d?|\d{2})\.faifb1p16m2\.nc'
re2 = r'\d{8}\.faifb1p16m2\.nc'
l = [f for f in glob('*.faifb1p16m2.nc') if re.search(re1, f)]
m = [f for f in glob('*.faifb1p16m2.nc') if re.search(re2, f)]
print(l)
print()
print(m)
# Then, suppose you want to filter and select everything with '12' in the list m
print(list(filter(lambda x: x[4:6] == '12', m)))
As another similar solution shows, you can ditch glob for os.listdir(), so:
l = [f for f in glob('*.faifb1p16m2.nc') if re.search(re1, f)]
Becomes:
import os
l = [f for f in os.listdir() if re.search(re1, f)]
And then the rest of the code is great. One of the great things about using glob is that you can use iglob, which works just like glob but returns an iterator; that can help with performance when going through a directory with lots of files.
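For illustration, a minimal sketch of the iglob variant (reusing re2 from above):
from glob import iglob
import re

re2 = r'\d{8}\.faifb1p16m2\.nc'
# iglob yields one matching name at a time instead of building the full list
for f in iglob('*.faifb1p16m2.nc'):
    if re.search(re2, f):
        print(f)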
One more thing: here's another Stack Overflow post with an overview of Python's infamous lambda feature. It's often used with the functions map, reduce, filter, and so on.
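As a toy illustration of lambda with filter and map (unrelated to the file-matching problem itself):
evens = list(filter(lambda x: x % 2 == 0, range(10)))  # [0, 2, 4, 6, 8]
squares = list(map(lambda x: x * x, evens))            # [0, 4, 16, 36, 64]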

To validate filenames, you could use the datetime.strptime() method:
#!/usr/bin/env python
import os
from datetime import datetime
from glob import glob
suffix = '.faifb1p16m2.nc'
def parse_date(path):
    try:
        return datetime.strptime(os.path.basename(path), '%Y%m%d' + suffix)
    except ValueError:
        return None  # failed to parse

paths_by_month = [[] for _ in range(12 + 1)]
for path in glob(r'C:\DATA2013\*' + suffix):  # for each nc-file in the directory
    date = parse_date(path)
    paths_by_month[date and date.month or 0].append(path)
print(paths_by_month[2]) # February paths
print(paths_by_month[0]) # paths with unrecognized date

Try this:
import os

base_dir = 'C:/local'
for f in os.listdir(base_dir):
    if '.faifb1p16m2.nc' in f and f[4:6] == '01':  # '01' is the month; change it to select another
        print(f)

Related

Python grab substring between two specific characters

I have a folder with hundreds of files named like:
"2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
Convention:
year_month_ID_zone_date_0_L2A_B01.tif ("_0_L2A_B01.tif", and "zone" never change)
What I need is to iterate through every file and build a path based on its name in order to download them.
For example:
name = "2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
path = "2017/5/S2B_7VEG_20170528_0_L2A/B01.tif"
The path convention needs to be: path = year/month/ID_zone_date_0_L2A/B01.tif
I thought of making a loop which would "cut" my string into several parts at every "_" character, then stitch the parts back together in the right order to create my path name.
I tried this, but it didn't work:
import re

filename = "2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
try:
    found = re.search('_(.+?)_', filename).group(1)
except AttributeError:
    # '_' not found in the original string
    found = ''  # apply your error handling
How could I achieve that in Python?
Since you only have one separator character, you may as well simply use Python's built-in split function:
import os

items = filename.split('_')
year, month = items[:2]
subdir = '_'.join(items[2:-1])  # 'S2B_7VEG_20170528_0_L2A'
path = os.path.join(year, month, subdir, items[-1])
# => '2017/05/S2B_7VEG_20170528_0_L2A/B01.tif'
Try the following code snippet:
import re

filename = "2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
found = re.sub(r'(\d+)_(\d+)_(.*)_(.*)\.tif', r'\1/\2/\3/\4.tif', filename)
print(found)  # prints 2017/05/S2B_7VEG_20170528_0_L2A/B01.tif
No need for a regex -- you can just use split().
filename = "2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
parts = filename.split("_")
year = parts[0]
month = parts[1]
Maybe you can do it like this:
from os import listdir, mkdir
from os.path import isfile, join, isdir

my_path = 'your_source_dir'
files_name = [f for f in listdir(my_path) if isfile(join(my_path, f))]

def create_dir(files_name):
    for file in files_name:
        year = file.split('_', 2)[0]   # maxsplit must be an int, not a string
        month = file.split('_', 2)[1]
        if not isdir(join(my_path, year)):
            mkdir(join(my_path, year))
        if not isdir(join(my_path, year, month)):
            mkdir(join(my_path, year, month))
        ### your download code
filename = "2017_05_S2B_7VEG_20170528_0_L2A_B01.tif"
temp = filename.split('_')
result = "/".join(temp)
print(result)
The result is:
2017/05/S2B/7VEG/20170528/0/L2A/B01.tif

Iterating through specific files in a folder with names matching a pattern in Python

I have a folder with a lot of csv files with different names.
I want to work only with the files whose names are made up of numbers only,
though I have no information about the range of the numbers in the file names.
for example, I have
['123.csv', 'not.csv', '75839.csv', '2.csv', 'bad.csv', '23bad8.csv']
and I would like to only work with ['123.csv', '75839.csv', '2.csv']
I tried the following code:
for f in file_list:
    if f.startswith('1' or '2' or '3' ..... or '9'):
        # do something
but this does not solve the problem if the file name starts with a number but still includes letters or other symbols later.
You can use a regex to do the following:
import re

lst_of_files = ['temo1.csv', '12321.csv', '123123.csv', 'fdao123.csv', '12312asdv.csv', '123otk123.csv', '123.txt']
pattern = re.compile(r'^[0-9]+\.csv$')  # escape the dot and anchor both ends
newlst = [filename for filename in lst_of_files if pattern.match(filename)]
print(newlst)
You can do it this way:
file_list = ["123.csv", "not.csv", "75839.csv", "2.csv", "bad.csv", "23bad8.csv"]
for f in file_list:
    name, ext = f.rsplit(".", 1)  # split at the rightmost dot
    if name.isnumeric():
        print(f)
Output is
123.csv
75839.csv
2.csv
One of the approaches:
import re

lst_of_files = ['temo1.csv', '12321.csv', '123123.csv', 'fdao123.csv', '12312asdv.csv', '123otk123.csv', '123.txt', '876.csv']
for f in lst_of_files:
    if re.search(r'^[0-9]+\.csv$', f):
        print(f)
Output:
12321.csv
123123.csv
876.csv
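A pathlib variant of the same idea, for comparison (a sketch, assuming the files sit in the current directory; str.isdecimal() accepts only the characters 0-9):
from pathlib import Path

numeric_csvs = [p.name for p in Path('.').glob('*.csv') if p.stem.isdecimal()]
print(numeric_csvs)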

Specify multiple file extensions with glob python [duplicate]

Is there a better way to use glob.glob in python to get a list of multiple file types such as .txt, .mdown, and .markdown? Right now I have something like this:
projectFiles1 = glob.glob( os.path.join(projectDir, '*.txt') )
projectFiles2 = glob.glob( os.path.join(projectDir, '*.mdown') )
projectFiles3 = glob.glob( os.path.join(projectDir, '*.markdown') )
Maybe there is a better way, but how about:
import glob
types = ('*.pdf', '*.cpp') # the tuple of file types
files_grabbed = []
for files in types:
    files_grabbed.extend(glob.glob(files))
# files_grabbed is the list of pdf and cpp files
Perhaps there is another way, so wait in case someone else comes up with a better answer.
glob returns a list: why not just run it multiple times and concatenate the results?
from glob import glob
project_files = glob('*.txt') + glob('*.mdown') + glob('*.markdown')
So many answers suggest globbing as many times as you have extensions; I'd prefer globbing just once instead:
from pathlib import Path
files = (p.resolve() for p in Path(path).glob("**/*") if p.suffix in {".c", ".cc", ".cpp", ".hxx", ".h"})
from glob import glob
files = glob('*.gif')
files.extend(glob('*.png'))
files.extend(glob('*.jpg'))
print(files)
If you need to specify a path, loop over match patterns and keep the join inside the loop for simplicity:
from os.path import join
from glob import glob
files = []
for ext in ('*.gif', '*.png', '*.jpg'):
    files.extend(glob(join("path/to/dir", ext)))
print(files)
Chain the results:
import itertools as it, glob
def multiple_file_types(*patterns):
    return it.chain.from_iterable(glob.iglob(pattern) for pattern in patterns)
Then:
for filename in multiple_file_types("*.txt", "*.sql", "*.log"):
    ...  # do stuff with each filename
For example, for *.mp3 and *.flac on multiple folders, you can do:
mask = r'music/*/*.[mf][pl][3a]*'
glob.glob(mask)
The idea can be extended to more file extensions, but you have to check that the combinations won't match any other unwanted file extension you may have on those folders. So, be careful with this.
To automatically combine an arbitrary list of extensions into a single glob pattern, you can do the following:
def multi_extension_glob_mask(mask_base, *extensions):
    mask_ext = ['[{}]'.format(''.join(set(c))) for c in zip(*extensions)]
    if not mask_ext or len(set(len(e) for e in extensions)) > 1:
        mask_ext.append('*')
    return mask_base + ''.join(mask_ext)
mask = multi_extension_glob_mask('music/*/*.', 'mp3', 'flac', 'wma')
print(mask) # music/*/*.[mfw][pml][a3]*
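To see the false-positive risk mentioned above, note that the generated character classes also accept mixed combinations; a quick check with fnmatch (the same matching rules glob uses):
from fnmatch import fnmatch

# '.wp3' is none of mp3/flac/wma, yet the mask still accepts it
print(fnmatch('song.wp3', '*.[mfw][pml][a3]*'))  # True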
With glob alone it is not possible. You can only use:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any character not in seq
Use os.listdir and a regexp to check patterns:
import os, re

for x in os.listdir('.'):
    if re.match(r'.*\.txt|.*\.sql', x):
        print(x)
While Python's default glob doesn't really follow Bash's globbing, you can do this with other libraries. For example, we can enable brace expansion in wcmatch's glob.
>>> from wcmatch import glob
>>> glob.glob('*.{md,ini}', flags=glob.BRACE)
['LICENSE.md', 'README.md', 'tox.ini']
You can even use extended glob patterns if that is your preference:
>>> from wcmatch import glob
>>> glob.glob('*.#(md|ini)', flags=glob.EXTGLOB)
['LICENSE.md', 'README.md', 'tox.ini']
Same answer as #BPL (which is computationally efficient), but one that can handle any glob pattern rather than just extensions:
import os
from fnmatch import fnmatch

folder = "path/to/folder/"
patterns = ("*.txt", "*.md", "*.markdown")
files = [f.path for f in os.scandir(folder) if any(fnmatch(f.name, p) for p in patterns)]
This solution is both efficient and convenient. It also closely matches the behavior of glob (see the documentation).
Note that this is simpler with the built-in package pathlib:
from pathlib import Path
folder = Path("/path/to/folder")
patterns = ("*.txt", "*.md", "*.markdown")
files = [f for f in folder.iterdir() if any(f.match(p) for p in patterns)]
Here is a one-line list-comprehension variant of Pat's answer (which also globs in the specific project directory you wanted):
import os, glob
exts = ['*.txt', '*.mdown', '*.markdown']
files = [f for ext in exts for f in glob.glob(os.path.join(project_dir, ext))]
You loop over the extensions (for ext in exts), and then for each extension you take each file matching the glob pattern (for f in glob.glob(os.path.join(project_dir, ext))).
This solution is short, and without any unnecessary for-loops, nested list-comprehensions, or functions to clutter the code. Just pure, expressive, pythonic Zen.
This solution allows you to have a custom list of exts that can be changed without having to update your code. (This is always a good practice!)
The list-comprehension is the same used in Laurent's solution (which I've voted for). But I would argue that it is usually unnecessary to factor out a single line to a separate function, which is why I'm providing this as an alternative solution.
Bonus:
If you need to search not just a single directory, but also all sub-directories, you can pass recursive=True and use the multi-directory glob symbol ** 1:
files = [f for ext in exts
         for f in glob.glob(os.path.join(project_dir, '**', ext), recursive=True)]
This will invoke glob.glob('<project_dir>/**/*.txt', recursive=True) and so on for each extension.
1 Technically, the ** glob symbol simply matches one or more characters including forward-slash / (unlike the singular * glob symbol). In practice, you just need to remember that as long as you surround ** with forward slashes (path separators), it matches zero or more directories.
Python 3
We can use pathlib; .glob still doesn't support globbing multiple arguments or within braces (as in POSIX shells) but we can easily filter the result.
For example, where you might ideally like to do:
# NOT VALID
Path(config_dir).glob("*.{ini,toml}")
# NOR IS
Path(config_dir).glob("*.ini", "*.toml")
you can do:
filter(lambda p: p.suffix in {".ini", ".toml"}, Path(config_dir).glob("*"))
which isn't too much worse.
A one-liner, just for the hell of it:
folder = "C:\\multi_pattern_glob_one_liner"
files = [item for sublist in [glob.glob(folder + ext) for ext in ["/*.txt", "/*.bat"]] for item in sublist]
output:
['C:\\multi_pattern_glob_one_liner\\dummy_txt.txt', 'C:\\multi_pattern_glob_one_liner\\dummy_bat.bat']
files = glob.glob('*.txt')
files.extend(glob.glob('*.dat'))
From the empirical tests I've run, it turns out that glob.glob isn't the best way to filter files by their extensions. Some of the reasons are:
The globbing "language" does not allow a precise specification of multiple extensions.
As a consequence of the former point, it can return incorrect results depending on the file extensions.
The globbing method is empirically slower than most other methods.
Strange as it may seem, other filesystem objects can have "extensions" too, folders included.
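A quick demonstration of the last point (a sketch using a hypothetical demo directory):
import os, glob

os.makedirs('demo/fake.py', exist_ok=True)  # a *directory* whose name ends in .py
print(glob.glob('demo/*.py'))               # ['demo/fake.py'] -- a folder, not a file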
I've tested (for correctness and efficiency in time) the following 4 different methods to filter out files by extensions and put them in a list:
from glob import glob, iglob
from re import compile, findall
from os import walk
from os.path import join as path_join  # needed by the walk-based variants

def glob_with_storage(args):
    elements = ''.join([f'[{i}]' for i in args.extensions])
    globs = f'{args.target}/**/*{elements}'
    results = glob(globs, recursive=True)
    return results

def glob_with_iteration(args):
    elements = ''.join([f'[{i}]' for i in args.extensions])
    globs = f'{args.target}/**/*{elements}'
    results = [i for i in iglob(globs, recursive=True)]
    return results

def walk_with_suffixes(args):
    results = []
    for r, d, f in walk(args.target):
        for ff in f:
            for e in args.extensions:
                if ff.endswith(e):
                    results.append(path_join(r, ff))
                    break
    return results

def walk_with_regs(args):
    reg = compile('|'.join([f'{i}$' for i in args.extensions]))
    results = []
    for r, d, f in walk(args.target):
        for ff in f:
            if len(findall(reg, ff)):
                results.append(path_join(r, ff))
    return results
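The timing harness itself isn't shown above; a minimal sketch (assuming an args object carrying the target and extensions attributes the four functions expect, which is my assumption here) could look like this:
import timeit
from types import SimpleNamespace

# hypothetical stand-in for the original argparse namespace
args = SimpleNamespace(target='.', extensions=['py', 'pyc'])

for fn in (glob_with_storage, glob_with_iteration,
           walk_with_suffixes, walk_with_regs):
    elapsed = timeit.timeit(lambda: fn(args), number=7)
    print(f"Elapsed time for '7 times {fn.__name__}()': {elapsed:f} seconds.")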
By running the code above on my laptop I obtained the following self-explanatory results.
Elapsed time for '7 times glob_with_storage()': 0.365023 seconds.
mean : 0.05214614
median : 0.051861
stdev : 0.001492152
min : 0.050864
max : 0.054853
Elapsed time for '7 times glob_with_iteration()': 0.360037 seconds.
mean : 0.05143386
median : 0.050864
stdev : 0.0007847381
min : 0.050864
max : 0.052859
Elapsed time for '7 times walk_with_suffixes()': 0.26529 seconds.
mean : 0.03789857
median : 0.037899
stdev : 0.0005759071
min : 0.036901
max : 0.038896
Elapsed time for '7 times walk_with_regs()': 0.290223 seconds.
mean : 0.04146043
median : 0.040891
stdev : 0.0007846776
min : 0.04089
max : 0.042885
Results sizes:
0 2451
1 2451
2 2446
3 2446
Differences between glob() and walk():
0 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Includes\numpy
1 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Utility\CppSupport.cpp
2 E:\x\y\z\venv\lib\python3.7\site-packages\future\moves\xmlrpc
3 E:\x\y\z\venv\lib\python3.7\site-packages\Cython\Includes\libcpp
4 E:\x\y\z\venv\lib\python3.7\site-packages\future\backports\xmlrpc
Elapsed time for 'main': 1.317424 seconds.
The fastest way to filter files by extension turns out to be the ugliest one: nested for loops and string comparison using the endswith() method.
Moreover, as you can see, the globbing algorithms (with the pattern E:\x\y\z\**/*[py][pyc]), even with only 2 extensions given (py and pyc), also return incorrect results.
I have released Formic which implements multiple includes in a similar way to Apache Ant's FileSet and Globs.
The search can be implemented:
import formic
patterns = ["*.txt", "*.markdown", "*.mdown"]
fileset = formic.FileSet(directory=projectDir, include=patterns)
for file_name in fileset.qualified_files():
# Do something with file_name
Because the full Ant glob is implemented, you can include different directories with each pattern, so you could choose only those .txt files in one subdirectory, and the .markdown in another, for example:
patterns = [ "/unformatted/**/*.txt", "/formatted/**/*.mdown" ]
I hope this helps.
This is a Python 3.4+ pathlib solution:
import os
import pathlib

exts = ".pdf", ".doc", ".xls", ".csv", ".ppt"
# src is the directory being listed
filelist = (str(i) for i in map(pathlib.Path, os.listdir(src))
            if i.suffix.lower() in exts and not i.stem.startswith("~"))
It also ignores all file names starting with ~.
After coming here for help, I made my own solution and wanted to share it. It's based on user2363986's answer, but I think this is more scalable. Meaning that if you have 1000 extensions, the code will still look somewhat elegant.
from glob import glob
directoryPath = "C:\\temp\\*."
fileExtensions = [ "jpg", "jpeg", "png", "bmp", "gif" ]
listOfFiles = []
for extension in fileExtensions:
    listOfFiles.extend(glob(directoryPath + extension))

for file in listOfFiles:
    print(file)  # Or do other stuff
Not glob, but here's another way using a list comprehension:
import os

extensions = 'txt mdown markdown'.split()
projectFiles = [f for f in os.listdir(projectDir)
                if os.path.splitext(f)[1][1:] in extensions]
The following function _glob globs for multiple file extensions.
import glob
import os
def _glob(path, *exts):
    """Glob for multiple file extensions.

    Parameters
    ----------
    path : str
        A file name without extension, or directory name.
    exts : tuple
        File extensions to glob for.

    Returns
    -------
    files : list
        List of files matching extensions in exts in path.
    """
    path = os.path.join(path, "*") if os.path.isdir(path) else path + "*"
    return [f for files in [glob.glob(path + ext) for ext in exts] for f in files]
files = _glob(projectDir, ".txt", ".mdown", ".markdown")
From a previous answer:
glob('*.jpg') + glob('*.png')
Here is a shorter one:
from glob import glob
extensions = ['jpg', 'png'] # to find these filename extensions
# Method 1: loop one by one and extend to the output list
output = []
[output.extend(glob(f'*.{name}')) for name in extensions]
print(output)
# Method 2: even shorter
# loop filename extension to glob() it and flatten it to a list
output = [p for p2 in [glob(f'*.{name}') for name in extensions] for p in p2]
print(output)
You can make a manual list, comparing the extensions of existing files with those you require:
import glob

ext_list = ['gif', 'jpg', 'jpeg', 'png']
file_list = []
for file in glob.glob('*.*'):
    if file.rsplit('.', 1)[1] in ext_list:
        file_list.append(file)
import os
import glob
import operator
from functools import reduce
types = ('*.jpg', '*.png', '*.jpeg')
lazy_paths = (glob.glob(os.path.join('my_path', t)) for t in types)
paths = reduce(operator.add, lazy_paths, [])
https://docs.python.org/3.5/library/functools.html#functools.reduce
https://docs.python.org/3.5/library/operator.html#operator.add
To glob multiple file types, you need to call the glob() function several times in a loop. Since the function returns a list, you need to concatenate the lists.
For instance, this function does the job:
import glob
import os
def glob_filetypes(root_dir, *patterns):
    return [path
            for pattern in patterns
            for path in glob.glob(os.path.join(root_dir, pattern))]
Simple usage:
project_dir = "path/to/project/dir"
for path in sorted(glob_filetypes(project_dir, '*.txt', '*.mdown', '*.markdown')):
    print(path)
You can also use glob.iglob() to have an iterator:
Return an iterator which yields the same values as glob() without actually storing them all simultaneously.
def iglob_filetypes(root_dir, *patterns):
    return (path
            for pattern in patterns
            for path in glob.iglob(os.path.join(root_dir, pattern)))
One glob, many extensions... but imperfect solution (might match other files).
filetypes = ['tif', 'jpg']
filetypes = zip(*[list(ft) for ft in filetypes])
filetypes = ["".join(ch) for ch in filetypes]
filetypes = ["[%s]" % ch for ch in filetypes]
filetypes = "".join(filetypes) + "*"
print(filetypes)
# => [tj][ip][fg]*
glob.glob("/path/to/*.%s" % filetypes)
I had the same issue and this is what I came up with:
import os, re

# without glob
src_dir = '/mnt/mypics/'
src_pics = []
ext = re.compile(r'.*\.({})$'.format('|'.join(['png', 'jpeg', 'jpg'])))
for root, dirnames, filenames in os.walk(src_dir):
    for filename in filter(lambda name: ext.search(name), filenames):
        src_pics.append(os.path.join(root, filename))
Use a list of extensions and iterate through it:
from os.path import join
from glob import glob
files = []
extensions = ['*.gif', '*.png', '*.jpg']
for ext in extensions:
    files.extend(glob(join("path/to/dir", ext)))
print(files)
You could use filter:
import os
import glob

projectFiles = filter(
    lambda x: os.path.splitext(x)[1] in [".txt", ".mdown", ".markdown"],
    glob.glob(os.path.join(projectDir, "*"))
)
You could also use reduce() like so:
import glob
from functools import reduce  # on Python 3, reduce lives in functools

file_types = ['*.txt', '*.mdown', '*.markdown']
project_files = reduce(lambda list1, list2: list1 + list2, (glob.glob(t) for t in file_types))
this creates a list from glob.glob() for each pattern and reduces them to a single list.
Yet another solution (use glob to get paths using multiple match patterns and combine all paths into a single list using reduce and add):
import functools, glob, operator
paths = functools.reduce(operator.add, [glob.glob(pattern) for pattern in [
    "path1/*.ext1",
    "path2/*.ext2"]])
If you use pathlib, try this:
import pathlib
extensions = ['.py', '.txt']
root_dir = './test/'
files = filter(lambda p: p.suffix in extensions, pathlib.Path(root_dir).glob('**/*'))
print(list(files))

Verify the format of a filename in Python

Every week I get two files with the following pattern.
EMEA_{sample}_Tracker_{year}_KW{week}
E.g.
EMEA_G_Tracker_2019_KW52.xlsx
EMEA_BC_Tracker_2019_KW52.xlsx
The next files would look like this:
EMEA_G_Tracker_2020_KW1.xlsx
EMEA_BC_Tracker_2020_KW1.xlsx
Placeholders:
sample = G or BC
year = current year [YYYY]
week = calendar week [0 - ~52]
The only changes are in the placeholders; everything else stays the same.
How can I extract these values from the filename and check if the filename has this format?
Right now I only read all the files using os.walk():
from os import walk

path_files = "Files/"
files = []
for (_, _, filenames) in walk(path_files):
    files.extend(filenames)
    break
If filename is the name of the file you've got:
import re
result = re.match(r'EMEA_(.*?)_Tracker_(\d+)_KW(\d+)', filename)
sample, year, week = result.groups()
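Note that re.match() returns None when the name doesn't fit, so result.groups() would raise AttributeError on a stray file; a guarded variant (a sketch with a hypothetical helper name) also converts year and week to integers:
import re

def parse_tracker_name(filename):
    m = re.fullmatch(r'EMEA_(G|BC)_Tracker_(\d{4})_KW(\d{1,2})\.xlsx', filename)
    if m is None:
        return None  # filename does not follow the pattern
    sample, year, week = m.groups()
    return sample, int(year), int(week)

print(parse_tracker_name('EMEA_G_Tracker_2020_KW1.xlsx'))  # ('G', 2020, 1)
print(parse_tracker_name('unrelated.xlsx'))                # None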
Here is an example of how to collect all files matching your pattern into a list using regex and list comprehension. Then you can use the list as you wish in later code.
import os
import re
# Compile the regular expression pattern.
re_emea = re.compile(r'^EMEA_(G|BC)_Tracker_20\d{2}_KW\d{1,2}\.xlsx$')
# Set path to be searched.
path = '/home/username/Desktop/so/emea_files'
# Collect all filenames matching the pattern into a list.
files = [f for f in os.listdir(path) if re_emea.match(f)]
# View the results.
print(files)
All files in the directory:
['EMEA_G_Tracker_2020_KW2.xlsx',
'other_file_3.txt',
'EMEA_G_Tracker_2020_KW1.xlsx',
'other_file_2.txt',
'other_file_5.txt',
'other_file_4.txt',
'EMEA_BC_Tracker_2019_KW52.xlsx',
'other_file_1.txt',
'EMEA_G_Tracker_2019_KW52.xlsx',
'EMEA_BC_Tracker_2020_KW2.xlsx',
'EMEA_BC_Tracker_2020_KW1.xlsx']
The results from pattern matching:
['EMEA_G_Tracker_2020_KW2.xlsx',
'EMEA_G_Tracker_2020_KW1.xlsx',
'EMEA_BC_Tracker_2019_KW52.xlsx',
'EMEA_G_Tracker_2019_KW52.xlsx',
'EMEA_BC_Tracker_2020_KW2.xlsx',
'EMEA_BC_Tracker_2020_KW1.xlsx']
Hope this helps! If not, just give me a shout.

Filename string comparison in list search fails [Python]

I am trying to associate some filepaths from 2 lists in Python. The files have part of their name in common, while the extension and some extra words differ.
That is, the extension of the file, the extra characters, and their location can all differ. The files are in different folders, hence their filepaths differ. What is exactly equal is their numbering index: 0033 or 0061, for example.
Example code:
original_files = ['C:/0001.jpg',
'C:/0033.jpg',
'C:/0061.jpg',
'C:/0080.jpg',
'C:/0204.jpg',
'C:/0241.jpg']
related_files = ['C:/0001_PM.png',
'C:/0033_PMA.png',
'C:/0033_NM.png',
'C:/0061_PMLTS.png',
'C:/0080_PM.png',
'C:/0080_RS.png',
'C:/0204_PM.png']
for idx, filename in enumerate(original_files):
    related_filename = [s for s in related_files if filename.rsplit('/', 1)[1][:-4] in s]
    print(related_filename)
For filename = 'C:/0241.jpg' it should return [], but instead it returns all the filenames from related_files.
For privacy reasons I didn't post the entire filepaths, just the names of the files. In this example the comparison works, but with the full filepaths it fails.
I suppose my comparison condition is not correct but I don't know how to write it.
Note: I am looking for something that does this in as few lines of code as possible.
I suggest something along the lines of:
from collections import defaultdict
original_files = ['C:/0001.jpg',
'C:/0033.jpg',
'C:/0061.jpg',
'C:/0080.jpg',
'C:/0204.jpg',
'C:/0241.jpg']
related_files = ['C:/0001_PM.png',
'C:/0033_PMA.png',
'C:/0033_NM.png',
'C:/0061_PMLTS.png',
'C:/0080_PM.png',
'C:/0080_RS.png',
'C:/0204_PM.png']
def key1(filename):
    return filename.rsplit('/', 1)[-1].rsplit('.', 1)[0]

def key2(filename):
    return key1(filename).split('_', 1)[0]

d = defaultdict(list)
for x in related_files:
    d[key2(x)].append(x)

for x in original_files:
    related = d.get(key1(x), [])
    print(x, '->', related)
In key1() and key2() you could alternatively use os.path functions or pathlib.Path methods.
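For example, the same keys written with pathlib (a sketch; PurePath.stem drops the directory part and the final extension):
from pathlib import PurePath

def key1(filename):
    return PurePath(filename).stem           # 'C:/0033_PMA.png' -> '0033_PMA'

def key2(filename):
    return key1(filename).split('_', 1)[0]   # '0033_PMA' -> '0033'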
Here's a solution that returns only the matched related_files:
import os, re

def get_index(filename):
    m = re.match('([0-9]+)', os.path.split(filename)[1])
    return m.group(1) if m else False

# materialize as a set so membership tests don't exhaust an iterator
indexes = set(filter(bool, map(get_index, original_files)))
[f for f in related_files if get_index(f) in indexes]
Make use of defaultdict:
import os, re
from collections import defaultdict

stragglers = []
grouped_files = defaultdict(list)
file_index = re.compile('([0-9]+)')
for f in original_files + related_files:
    m = file_index.match(os.path.split(f)[1])
    if m:
        grouped_files[m.group(1)].append(f)
    else:
        stragglers.append(f)
You now have grouped_files, a dict (or dictionary-like object) of key-value pairs where the key is the regex matched part of the filename and the value is a list of matching filenames.
for x in grouped_files.items():
    print(x)
# ('0204', ['C:/0204.jpg', 'C:/0204_PM.png'])
# ('0001', ['C:/0001.jpg', 'C:/0001_PM.png'])
# ('0033', ['C:/0033.jpg', 'C:/0033_PMA.png', 'C:/0033_NM.png'])
# ('0061', ['C:/0061.jpg', 'C:/0061_PMLTS.png'])
# ('0241', ['C:/0241.jpg'])
# ('0080', ['C:/0080.jpg', 'C:/0080_PM.png', 'C:/0080_RS.png'])
In stragglers you have any filenames that didn't match your regex.
print(stragglers)
# []
For Python 3.x you can try this:
for origfiles in original_files:
    for relfiles in related_files:
        if origfiles[3:7] == relfiles[3:7]:  # [3:7] is the four-digit index after 'C:/'
            print(origfiles)
