I have many folders in a master folder. Each folder contains a .JPG file. I would like to extract all these files and store them in the master folder.
My present code:
import os
import glob

os.chdir('Master folder')
extension = 'JPG'
jpg_files = [i for i in glob.glob('*.{}'.format(extension))]
This did not work.
To find the images in your tree, I would use os.walk. Below is a complete 'find and move' example that moves all matching files to the given path and creates a new filename for duplicates.

The find-and-move function uses add_index_to_filepath to check whether a file already exists and, if so, appends an index (n) to the path. For example: if image.jpg already exists, the next one becomes image (1).jpg, the one after that image (2).jpg, and so on.
import os
import re
import shutil


def add_index_to_filepath(path):
    '''
    Check if a file exists, and append '(n)' if true.
    '''
    # If the path exists, adjust it
    if os.path.exists(path):
        # Pull apart the path and filename
        folder, file = os.path.split(path)
        filename, extension = os.path.splitext(file)
        # Discover the current index, and correct the filename
        try:
            regex = re.compile(r'\(([0-9]*)\)$')
            findex = regex.findall(filename)[0]
            filename = regex.sub('({})'.format(int(findex) + 1), filename)
        except IndexError:
            filename = filename + ' (1)'
        # Glue the path back together
        new_path = os.path.join(folder, '{}{}'.format(filename, extension))
        # Recursively call the function to keep verifying whether it exists
        return add_index_to_filepath(new_path)
    return path


def find_and_move_files(path, extension_list):
    '''
    Walk through a given path and move the files from the sub-dirs to the path.
    Upper- and lower-case are ignored. Duplicates get a new filename.
    '''
    files_moved = []
    # Walk through the path to list all files
    for root, dirs, files in os.walk(path, topdown=False):
        for file in files:
            # Is this extension wanted?
            extension = os.path.splitext(file)[-1].lower()
            if extension in extension_list:
                # Prepare the old and new paths, and move
                old_path = os.path.join(root, file)
                new_path = add_index_to_filepath(os.path.join(path, file))
                shutil.move(old_path, new_path)
                # Keep track of what was moved, to return it in the end
                files_moved.append(new_path)
    return files_moved


path = '.'  # your filepath for the master folder
extensions = ['.jpg', '.jpeg']  # there are some variations of the jpeg file extension
found_files = find_and_move_files(path, extensions)
I have a parent directory that contains a lot of subdirectories. I want to create a script that loops through all of the subdirectories and removes any key words that I have specified in the list variable.
I am not entirely sure how to achieve this.
Currently I have this:
import os

directory = next(os.walk('.'))[1]
stringstoremove = ['string1', 'string2', 'string3', 'string4', 'string5']

for folders in directory:
    os.rename
And maybe this type of logic to check to see if the string exists within the subdirectory name:
if any(words in inputstring for words in stringstoremove):
    print("TRUE")
else:
    print("FALSE")
Trying my best to deconstruct the task, but I'm going round in circles now.
Thanks guys
Starting from your existing code:
import os

directory = next(os.walk('.'))[1]
stringstoremove = ['string1', 'string2', 'string3', 'string4', 'string5']

for folder in directory:
    new_folder = folder
    for r in stringstoremove:
        new_folder = new_folder.replace(r, '')
    if folder != new_folder:  # don't rename if it's the same
        os.rename(folder, new_folder)
If you want to rename those subdirectories that match a string in your stringstoremove list, then the following script will be helpful.
import os
import re

path = "./"  # parent directory path
sub_dirs = os.listdir(path)
stringstoremove = ['string1', 'string2', 'string3', 'string4', 'string5']

for directory_name in sub_dirs:
    if os.path.isdir(os.path.join(path, directory_name)):
        for string in stringstoremove:
            if string in directory_name:
                try:
                    new_name = re.sub(string, "", directory_name)
                    # Rename this directory
                    os.rename(os.path.join(path, directory_name),
                              os.path.join(path, new_name))
                    directory_name = new_name  # keep the name current for further matches
                except Exception as e:
                    print(e)
I am trying to remove all empty files in a folder, and there are folders within the folder so it needs to check inside those folders too:
e.g. remove all empty files within C:\folder1\folder1 and C:\folder1\folder2, etc.
import sys
import os


def main():
    getemptyfiles(sys.argv[1])


def getemptyfiles(rootdir):
    for root, dirs, files in os.walk(rootdir):
        for d in ['RECYCLER', 'RECYCLED']:
            if d in dirs:
                dirs.remove(d)
        for f in files:
            fullname = os.path.join(root, f)
            try:
                if os.path.getsize(fullname) == 0:
                    print(fullname)
                    os.remove(fullname)
            except WindowsError:
                continue
This will work with a bit of adjusting:
The os.remove() statement could fail so you might want to wrap it with try...except as well. WindowsError is platform specific. Filtering the traversed directories is not strictly necessary but helpful.
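As a sketch of that adjustment (the function name is my own), catching the cross-platform OSError, of which WindowsError is a subclass, keeps the walk going on any platform:

```python
import os

def remove_empty_files(rootdir):
    """Delete zero-byte files under rootdir; skip anything we can't touch."""
    for root, dirs, files in os.walk(rootdir):
        for name in files:
            fullname = os.path.join(root, name)
            try:
                if os.path.getsize(fullname) == 0:
                    os.remove(fullname)
            except OSError:
                # Covers WindowsError on Windows, PermissionError etc. elsewhere
                continue
```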
The for loop uses dir to find all files, but not directories, in the current directory and all subfolders recursively. Then the second line checks to see if the length of each file is less than 1 byte before deleting it.
cd /d C:\folder1
for /F "usebackq" %%A in (`dir/b/s/a-d`) do (
    if %%~zA LSS 1 del %%A
)
import os

while True:
    path = input("Enter the path: ")
    if os.path.isdir(path):
        break
    else:
        print("Entered path is wrong!")

for root, dirs, files in os.walk(path):
    for name in files:
        filename = os.path.join(root, name)
        if os.stat(filename).st_size == 0:
            print("Removing", filename)
            os.remove(filename)
I first remove the empty files, and afterwards, by following this answer (https://stackoverflow.com/a/6215421/2402577), I remove the empty folders.

In addition, I pass topdown=False to os.walk() to walk from leaf to root, since the default behavior of os.walk() is to walk from root to leaf. That way, empty folders that contain only empty folders or files are removed as well.
import os


def remove_empty_files_and_folders(dir_path) -> None:
    for root, dirnames, files in os.walk(dir_path, topdown=False):
        for f in files:
            full_name = os.path.join(root, f)
            if os.path.getsize(full_name) == 0:
                os.remove(full_name)
        for dirname in dirnames:
            full_path = os.path.realpath(os.path.join(root, dirname))
            if not os.listdir(full_path):
                os.rmdir(full_path)
I hope this can help you
# encoding = utf-8
import os


def listDoc(path):
    docList = os.listdir(path)
    for doc in docList:
        docPath = os.path.join(path, doc)
        if os.path.isfile(docPath):
            if os.path.getsize(docPath) == 0:
                os.remove(docPath)
        if os.path.isdir(docPath):
            listDoc(docPath)


listDoc(r'C:\folder1')
I'm writing a program that renames files and directories by taking out a certain pattern.

My renaming function works well for files, since os.walk() targets all files, but not so much for directories.
for root, dirs, files in os.walk(path):  # Listing the files
    for i, foldername in enumerate(dirs):
        output = foldername.replace(pattern, "")  # Taking out pattern
        if output != foldername:
            os.rename(  # Renaming
                os.path.join(path, foldername),
                os.path.join(path, output))
        else:
            pass
Could someone suggest a solution to target ALL directories and not only first level ones?
Setting topdown=False in os.walk does the trick:
for root, dirs, files in os.walk(path, topdown=False):  # Listing the files
    for i, name in enumerate(dirs):
        output = name.replace(pattern, "")  # Taking out pattern
        if output != name:
            os.rename(  # Renaming
                os.path.join(root, name),
                os.path.join(root, output))
Thanks to J.F. Sebastian!
This will do the trick (pymillsutils.getFiles()):
import os
import re
from os.path import isfile


def getFiles(root, pattern=".*", tests=[isfile], **kwargs):
    """getFiles(root, pattern=".*", tests=[isfile], **kwargs) -> list of files

    Return a list of files in the specified path (root),
    applying the predicates listed in tests and returning
    only the files that match the pattern. Some optional
    kwargs can be specified:

    * full=True (Return full paths)
    * recursive=True (Recursive mode)
    """

    def test(file, tests):
        for test in tests:
            if not test(file):
                return False
        return True

    full = kwargs.get("full", False)
    recursive = kwargs.get("recursive", False)

    files = []
    for file in os.listdir(root):
        path = os.path.abspath(os.path.join(root, file))
        if os.path.isdir(path):
            if recursive:
                files.extend(getFiles(path, pattern, tests, **kwargs))
        elif test(path, tests) and re.match(pattern, path):
            if full:
                files.append(path)
            else:
                files.append(file)
    return files
Usage (note that pattern is a regular expression matched against the path, not a shell glob):

getFiles(".", pattern=r".*\.txt$", recursive=True)

To list just directories:

from os.path import isdir
getFiles(".", tests=[isdir], recursive=True)
There is also a nice OOP-style library for path manipulation and traversal called py which has really nice API(s) that I quite like and use in all my projects.
I have a C++/Obj-C background and I am just discovering Python (been writing it for about an hour).
I am writing a script to recursively read the contents of text files in a folder structure.
The problem I have is the code I have written will only work for one folder deep. I can see why in the code (see #hardcoded path), I just don't know how I can move forward with Python since my experience with it is only brand new.
Python Code:
import os
import sys

rootdir = sys.argv[1]

for root, subFolders, files in os.walk(rootdir):
    for folder in subFolders:
        outfileName = rootdir + "/" + folder + "/py-outfile.txt"  # hardcoded path
        folderOut = open(outfileName, 'w')
        print "outfileName is " + outfileName
        for file in files:
            filePath = rootdir + '/' + file
            f = open(filePath, 'r')
            toWrite = f.read()
            print "Writing '" + toWrite + "' to" + filePath
            folderOut.write(toWrite)
            f.close()
        folderOut.close()
Make sure you understand the three return values of os.walk:
for root, subdirs, files in os.walk(rootdir):
has the following meaning:
root: the current path which is "walked through"
subdirs: entries in root of type directory
files: entries in root (not in subdirs) of type other than directory

And please use os.path.join instead of concatenating with a slash! Your problem is filePath = rootdir + '/' + file - you must concatenate the currently "walked" folder instead of the topmost folder. So that must be filePath = os.path.join(root, file). BTW, "file" is a builtin (in Python 2), so you don't normally use it as a variable name.
Another problem are your loops, which should be like this, for example:
import os
import sys

walk_dir = sys.argv[1]

print('walk_dir = ' + walk_dir)

# If your current working directory may change during script execution, it's recommended to
# immediately convert program arguments to an absolute path. Then the variable root below will
# be an absolute path as well. Example:
# walk_dir = os.path.abspath(walk_dir)
print('walk_dir (absolute) = ' + os.path.abspath(walk_dir))

for root, subdirs, files in os.walk(walk_dir):
    print('--\nroot = ' + root)
    list_file_path = os.path.join(root, 'my-directory-list.txt')
    print('list_file_path = ' + list_file_path)

    with open(list_file_path, 'wb') as list_file:
        for subdir in subdirs:
            print('\t- subdirectory ' + subdir)

        for filename in files:
            file_path = os.path.join(root, filename)
            print('\t- file %s (full path: %s)' % (filename, file_path))

            with open(file_path, 'rb') as f:
                f_content = f.read()
                list_file.write(('The file %s contains:\n' % filename).encode('utf-8'))
                list_file.write(f_content)
                list_file.write(b'\n')
If you didn't know, the with statement for files is a shorthand:
with open('filename', 'rb') as f:
    dosomething()

# is effectively the same as

f = open('filename', 'rb')
try:
    dosomething()
finally:
    f.close()
If you are using Python 3.5 or above, you can get this done in 1 line.
import glob

# root_dir needs a trailing slash (i.e. /root/dir/)
for filename in glob.iglob(root_dir + '**/*.txt', recursive=True):
    print(filename)
As mentioned in the documentation
If recursive is true, the pattern '**' will match any files and zero or more directories and subdirectories.
If you want every file, you can use
import glob

for filename in glob.iglob(root_dir + '**/**', recursive=True):
    print(filename)
I agree with Dave Webb: os.walk will yield an item for each directory in the tree. The fact is, you just don't have to care about subFolders.
Code like this should work:
import os
import sys

rootdir = sys.argv[1]

for folder, subs, files in os.walk(rootdir):
    with open(os.path.join(folder, 'python-outfile.txt'), 'w') as dest:
        for filename in files:
            with open(os.path.join(folder, filename), 'r') as src:
                dest.write(src.read())
TL;DR: This is the equivalent to find -type f to go over all files in all folders below and including the current one:
for currentpath, folders, files in os.walk('.'):
    for file in files:
        print(os.path.join(currentpath, file))
As already mentioned in other answers, os.walk() is the answer, but it could be explained better. It's quite simple! Let's walk through this tree:
docs/
└── doc1.odt
pics/
todo.txt
With this code:
for currentpath, folders, files in os.walk('.'):
    print(currentpath)
The currentpath is the current folder it is looking at. This will output:
.
./docs
./pics
So it loops three times, because there are three folders: the current one, docs, and pics. In every loop, it fills the variables folders and files with all folders and files. Let's show them:
for currentpath, folders, files in os.walk('.'):
    print(currentpath, folders, files)
This shows us:
# currentpath folders files
. ['pics', 'docs'] ['todo.txt']
./pics [] []
./docs [] ['doc1.odt']
So in the first line, we see that we are in folder ., that it contains two folders namely pics and docs, and that there is one file, namely todo.txt. You don't have to do anything to recurse into those folders, because as you see, it recurses automatically and just gives you the files in any subfolders. And any subfolders of that (though we don't have those in the example).
If you just want to loop through all files, the equivalent of find -type f, you can do this:
for currentpath, folders, files in os.walk('.'):
    for file in files:
        print(os.path.join(currentpath, file))
This outputs:
./todo.txt
./docs/doc1.odt
The pathlib library is really great for working with files. You can do a recursive glob on a Path object like so.
from pathlib import Path

for elem in Path('/path/to/my/files').rglob('*.*'):
    print(elem)
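One caveat: the '*.*' pattern only matches names that contain a dot, so files without an extension are skipped. A variant that catches every file (the helper name is my own) could look like this:

```python
from pathlib import Path

def all_files(base):
    # rglob('*') matches every entry; is_file() drops the directories
    return [p for p in Path(base).rglob('*') if p.is_file()]
```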
import glob
import os

root_dir = <root_dir_here>
for filename in glob.iglob(root_dir + '**/**', recursive=True):
    if os.path.isfile(filename):
        with open(filename, 'r') as file:
            print(file.read())
**/** is used to get all files recursively, including directories.

if os.path.isfile(filename) checks whether the filename variable refers to a file or a directory; if it is a file, then we can read it. Here I am printing the file's contents.
If you want a flat list of all paths under a given dir (like find . in the shell):
files = [
    os.path.join(parent, name)
    for (parent, subdirs, files) in os.walk(YOUR_DIRECTORY)
    for name in files + subdirs
]
To only include full paths to files under the base dir, leave out + subdirs.
I've found the following to be the easiest
from glob import glob
import os

files = [f for f in glob('rootdir/**', recursive=True) if os.path.isfile(f)]

Using glob('some/path/**', recursive=True) gets all files, but also includes directory names. Adding the if os.path.isfile(f) condition filters the list to existing files only.
For my taste, os.walk() is a little too complicated and verbose. You can do what the accepted answer does more cleanly with:
import pathlib

all_files = [str(f) for f in pathlib.Path(dir_path).glob("**/*") if f.is_file()]

with open(outfile, 'wb') as fout:
    for f in all_files:
        with open(f, 'rb') as fin:
            fout.write(fin.read())
        fout.write(b'\n')
Use os.path.join() to construct your paths - it's neater:

import os
import sys

rootdir = sys.argv[1]

for root, subFolders, files in os.walk(rootdir):
    for folder in subFolders:
        outfileName = os.path.join(root, folder, "py-outfile.txt")
        folderOut = open(outfileName, 'w')
        print("outfileName is " + outfileName)
        for file in files:
            filePath = os.path.join(root, file)
            toWrite = open(filePath).read()
            print("Writing '" + toWrite + "' to" + filePath)
            folderOut.write(toWrite)
        folderOut.close()
os.walk does a recursive walk by default. For each directory, starting from the root, it yields a 3-tuple (dirpath, dirnames, filenames).
from os import walk
from os.path import splitext, join


def select_files(root, files):
    """
    Simple logic here to filter out interesting files
    (.py files in this example).
    """
    selected_files = []

    for file in files:
        # Do concatenation here to get the full path
        full_path = join(root, file)
        ext = splitext(file)[1]

        if ext == ".py":
            selected_files.append(full_path)

    return selected_files


def build_recursive_dir_tree(path):
    """
    path - where to begin the folder scan
    """
    selected_files = []

    for root, dirs, files in walk(path):
        selected_files += select_files(root, files)

    return selected_files
I think the problem is that you're not processing the output of os.walk correctly.
Firstly, change:
filePath = rootdir + '/' + file
to:
filePath = root + '/' + file
rootdir is your fixed starting directory; root is a directory returned by os.walk.
Secondly, you don't need to indent your file processing loop, as it makes no sense to run this for each subdirectory. You'll get root set to each subdirectory. You don't need to process the subdirectories by hand unless you want to do something with the directories themselves.
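Putting both fixes together, a sketch of the corrected script could look like this (the function name and the choice to write one output file per walked directory are my own, not the original code):

```python
import os
import sys

def concat_tree(rootdir):
    """Walk rootdir and write a py-outfile.txt in each directory,
    containing the concatenated contents of that directory's files."""
    for root, subFolders, files in os.walk(rootdir):
        # One output file per directory actually being walked
        outfileName = os.path.join(root, 'py-outfile.txt')
        with open(outfileName, 'w') as folderOut:
            for name in files:
                filePath = os.path.join(root, name)  # use root, not rootdir
                with open(filePath, 'r') as f:
                    folderOut.write(f.read())

if __name__ == '__main__' and len(sys.argv) > 1:
    concat_tree(sys.argv[1])
```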
Try this:
import os
import sys

path = sys.argv[1]  # the directory to walk

for root, subdirs, files in os.walk(path):
    for file in os.listdir(root):
        filePath = os.path.join(root, file)
        if os.path.isdir(filePath):
            pass
        else:
            f = open(filePath, 'r')
            # Do stuff
If you prefer an (almost) one-liner:
from pathlib import Path

lookuppath = '.'  # use your path
filelist = [str(item) for item in Path(lookuppath).glob("**/*") if Path(item).is_file()]
In this case you will get a list with just the paths of all files located recursively under lookuppath.
Without str(), each entry would be a PosixPath object rather than a plain string.
This worked for me:
import glob

root_dir = "C:\\Users\\Scott\\"  # Don't forget the trailing (last) slashes

for filename in glob.iglob(root_dir + '**/*.jpg', recursive=True):
    print(filename)
    # do stuff
If just the file names are not enough, it's easy to implement a Depth-first search on top of os.scandir():
import os

stack = ['.']
files = []
total_size = 0

while stack:
    dirname = stack.pop()
    with os.scandir(dirname) as it:
        for e in it:
            if e.is_dir():
                stack.append(e.path)
            else:
                size = e.stat().st_size
                files.append((e.path, size))
                total_size += size
The docs have this to say:
The scandir() function returns directory entries along with file attribute information, giving better performance for many common use cases.
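If it's useful, the same stack-based walk can be wrapped into a small reusable generator (the function name scantree is my own):

```python
import os

def scantree(path='.'):
    """Yield (path, size) for every regular file below path, depth-first."""
    stack = [path]
    while stack:
        dirname = stack.pop()
        with os.scandir(dirname) as it:
            for e in it:
                if e.is_dir(follow_symlinks=False):
                    stack.append(e.path)
                else:
                    yield e.path, e.stat().st_size
```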