how to use exclude option with pep8.py - python

I have a directory structure like this
/path/to/dir/a/foo
/path/to/dir/b/foo
and want to run pep8 on the directory /path/to/dir/ excluding /path/to/dir/a/foo
pep8 --exclude='/path/to/dir/a/foo' /path/to/dir
The expected behaviour is that pep8 should not check the files under /a/foo/, but pep8 is checking the files inside /a/foo/ as well.
When I do this
pep8 --exclude='foo' /path/to/dir
it excludes the files from both a/foo and b/foo.
What pattern should be given to the pep8 exclude option so that it excludes the files only from /a/foo/ but not from /b/foo/?

You can try something like this:
pep8 --exclude='*/a/foo*' /path/to/dir
The exclude portion uses fnmatch to match against the path as seen in the source code.
def excluded(filename):
    """
    Check if options.exclude contains a pattern that matches filename.
    """
    basename = os.path.basename(filename)
    for pattern in options.exclude:
        if fnmatch(basename, pattern):
            # print basename, 'excluded because it matches', pattern
            return True
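To see why the patterns behave this way, you can experiment with fnmatch directly. A quick illustration with made-up paths:
from fnmatch import fnmatch

# 'foo' matches the bare directory name, so both a/foo and b/foo get excluded
print(fnmatch('foo', 'foo'))                           # True

# an absolute-path pattern never matches the basename 'foo'
print(fnmatch('foo', '/path/to/dir/a/foo'))            # False

# '*/a/foo*' distinguishes the two directories when matched against full paths
print(fnmatch('/path/to/dir/a/foo/x.py', '*/a/foo*'))  # True
print(fnmatch('/path/to/dir/b/foo/x.py', '*/a/foo*'))  # False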

I'm sure I'm reinventing the wheel here, but I have also been unable to get the API's exclude handling working, so I filter the file list myself:
import os
import re
from pep8 import StyleGuide

def get_pyfiles(directory=None, exclusions=None, ftype='.py'):
    '''generator of all ftype files in all subdirectories.
    if directory is None, will look in current directory.
    exclusions should be a regular expression.
    '''
    if directory is None:
        directory = os.getcwd()
    pyfiles = (os.path.join(dpath, fname)
               for dpath, dnames, fnames in os.walk(directory)
               for fname in [f for f in fnames
                             if f.endswith(ftype)])
    if exclusions is not None:
        c = re.compile(exclusions)
        pyfiles = (fname for fname in pyfiles if c.match(fname) is None)
    return pyfiles

def get_pep8_counter(directory=None, exclusions=None):
    if directory is None:
        directory = os.getcwd()
    paths = list(get_pyfiles(directory=directory, exclusions=exclusions))
    # I am only interested in counters (but you could do something else)
    return StyleGuide(paths=paths).check_files().counters

counter = get_pep8_counter(exclusions='.*src.*|.*doc.*')

Related

Find absolute path of a file in python when knowing the last part of the path and the base directory?

Using python, I have the last parts of paths to existing files, like that:
sub_folder1/file1.txt
sub_folder2/file120.txt
sub_folder78/file99.txt
Note that these paths are not relative paths to the current folder I am working in; e.g., pandas.read_csv('sub_folder1/file1.txt') would throw a non-existing-file error. Nevertheless, I know all the files have the same base directory base_dir, but I don't know the absolute path. This means a file could be located like this:
base_dir/inter_folder1/sub_folder1/file1.txt
Or like this:
base_dir/inter_folder7/inter_folder4/.../sub_folder1/file1.txt
Is there a function that returns the absolute path when given the last part of the path and the base directory of a file (or, equivalently, finds the intermediate folders)? It should look something like this:
absolute_path = some_func(end_path='bla/bla.txt', base_dir='BLAH')
I thought pathlib might have a solution, but couldn't find anything there. Thanks
I need this to do something like the below:
for end_path in list_of_paths:
    full_path = some_func(end_path=end_path, base_dir='base_dir')
    image = cv2.imread(full_path)
This should be fairly easy to implement with pathlib:
from pathlib import Path

def find(end_path: str, base_dir: str):
    for file in Path(base_dir).rglob("*"):
        if str(file).endswith(end_path):
            yield file
This is a generator, to match the pathlib interface; as such it will yield pathlib.PosixPath objects. It will also find all matching files, for example:
[str(f) for f in find(end_path="a.txt", base_dir="my_dir")]
# creates:
# ['my_dir/a.txt', 'my_dir/sub_dir/a.txt']
If you only want the first value, you can return the first match:
def find_first(end_path: str, base_dir: str):
    for file in Path(base_dir).rglob("*"):
        if str(file).endswith(end_path):
            return str(file)

abs_path = find_first(end_path="a.txt", base_dir="my_dir")
A better function that speeds up the lookup by globbing on a filename pattern first:
from pathlib import Path

def find(pattern, suffixes, base_dir):
    for file in Path(base_dir).rglob(pattern):
        if any(str(file).endswith(suffix) for suffix in suffixes):
            yield str(file)

base_dir = "base_directory"
suffixes = [
    'sub_folder1/file1.txt',
    'sub_folder2/file120.txt',
    'sub_folder78/file99.txt',
]
for full_path in find(pattern="*.txt", suffixes=suffixes, base_dir=base_dir):
    image = cv2.imread(full_path)
You need to search for the sub-folder within the base folder, e.g.:
import os

for dirpath, dirnames, files in os.walk(os.path.abspath(base_dir)):
    if dirpath.endswith("sub_folder1"):
        print(dirpath)
You might want to also make sure that the file exists in that folder using:
if dirpath.endswith("sub_folder1") and "file1.txt" in files:
    print(dirpath)

python: How to get latest file in a directory with certain pattern

I want the latest file in a directory with a certain pattern. I can find the latest file but don't know how to include the pattern. Please propose a solution involving the os library only.
def newest(DIR_PATH):
    files = os.listdir(DIR_PATH)
    FILE_LIST = [os.path.join(DIR_PATH, BASENAME) for BASENAME in files]
    return max(FILE_LIST, key=os.path.getctime)
The directory contains many kinds of files. For example, consider the two kinds of files below.
xyz-2019-11-17_01-25-14.json
xyz-2019-11-17_01-25-14-trimmed.json
I want to get the latest file that does not end with '-trimmed.json'. Please suggest.
You could simply go like this:
def newest(DIR_PATH):
    files = os.listdir(DIR_PATH)
    FILE_LIST = [os.path.join(DIR_PATH, BASENAME) for BASENAME in files
                 if not BASENAME.endswith("trimmed.json")]
    return max(FILE_LIST, key=os.path.getctime)
you could probably use
import os
from pathlib import Path as makePath

def find_youngest(path, pattern, n=1):
    """
    find the file that matches a pattern and has the highest modification
    timestamp if there are multiple files that match the pattern.
    input:
        path, string or pathlib.Path, where to look for the file(s)
        pattern, string, pattern to look for in filename
        n, integer, how many to return. defaults to 1
    returns
        filename(s) of youngest file(s), including path.
        None if no file
    """
    assert n >= 1, "n must be greater equal 1."
    path = makePath(path)
    files = [makePath(f) for f in path.glob(pattern) if os.path.isfile(f)]
    sortfiles = sorted(files, key=lambda x: os.path.getmtime(x), reverse=True)
    if sortfiles:
        return sortfiles[:n]
    return None
Note: pathlib.Path.glob accepts glob-style wildcard patterns, not regular expressions; if you need regex matching, filter the glob results with the re module.
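For example, a minimal sketch of regex filtering layered on top of Path.glob (the filename pattern is illustrative):
import re
from pathlib import Path

# keep the dated json files but skip the -trimmed variants
rx = re.compile(r'xyz-.*(?<!-trimmed)\.json$')
candidates = [f for f in Path('.').glob('*.json') if rx.search(f.name)]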
A simple way to select files based on the occurrence of a specific filename ending could be
files = ['xyz-2019-11-17_01-25-14.json',
         'xyz-2019-11-17_01-25-14-trimmed.json']
select = [f for f in files if not f.endswith('-trimmed.json')]
# select
# Out[35]: ['xyz-2019-11-17_01-25-14.json']
There's a library called glob. Check it out.
https://docs.python.org/3/library/glob.html
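For instance, a minimal sketch combining glob with the newest-file logic from the question (the directory path is illustrative):
import glob
import os

# glob for the dated json files, then drop the -trimmed variants
candidates = [f for f in glob.glob('/path/to/dir/xyz-*.json')
              if not f.endswith('-trimmed.json')]
latest = max(candidates, key=os.path.getctime)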
This solution is almost the same as the one given by "Florian H", with one minor difference: it also covers the case where the pattern is somewhere in the middle of the filename and endswith is not relevant.
def newest(DIR_PATH):
    files = os.listdir(DIR_PATH)
    FILE_LIST = [os.path.join(DIR_PATH, BASENAME) for BASENAME in files
                 if "trimmed" not in BASENAME]
    return max(FILE_LIST, key=os.path.getctime)

Opening files with a "wildcard" in python [duplicate]

This is what I have:
glob(os.path.join('src','*.c'))
but I want to search the subfolders of src. Something like this would work:
glob(os.path.join('src','*.c'))
glob(os.path.join('src','*','*.c'))
glob(os.path.join('src','*','*','*.c'))
glob(os.path.join('src','*','*','*','*.c'))
But this is obviously limited and clunky.
pathlib.Path.rglob
Use pathlib.Path.rglob from the pathlib module, which was introduced in Python 3.5.
from pathlib import Path

for path in Path('src').rglob('*.c'):
    print(path.name)
If you don't want to use pathlib, you can use glob.glob('**/*.c'), but don't forget to pass the recursive keyword parameter; also note that it can take an inordinate amount of time on large directories.
For cases where you need to match files beginning with a dot (.), such as files in the current directory or hidden files on Unix-based systems, use the os.walk solution below.
os.walk
For older Python versions, use os.walk to recursively walk a directory and fnmatch.filter to match against a simple expression:
import fnmatch
import os

matches = []
for root, dirnames, filenames in os.walk('src'):
    for filename in fnmatch.filter(filenames, '*.c'):
        matches.append(os.path.join(root, filename))
For python >= 3.5 you can use **, recursive=True :
import glob

for f in glob.glob('/path/**/*.c', recursive=True):
    print(f)
If recursive is True (default is False), the pattern ** will match any files and zero or more directories and subdirectories. If the pattern is followed by an os.sep, only directories and subdirectories match.
Similar to other solutions, but using fnmatch.fnmatch instead of glob, since os.walk already listed the filenames:
import os, fnmatch

def find_files(directory, pattern):
    for root, dirs, files in os.walk(directory):
        for basename in files:
            if fnmatch.fnmatch(basename, pattern):
                filename = os.path.join(root, basename)
                yield filename

for filename in find_files('src', '*.c'):
    print('Found C source:', filename)
Also, using a generator allows you to process each file as it is found, instead of finding all the files and then processing them.
I've modified the glob module to support ** for recursive globbing, e.g:
>>> import glob2
>>> all_header_files = glob2.glob('src/**/*.c')
https://github.com/miracle2k/python-glob2/
Useful when you want to provide your users with the ability to use the ** syntax, and thus os.walk() alone is not good enough.
Starting with Python 3.4, one can use the glob() method of one of the Path classes in the new pathlib module, which supports ** wildcards. For example:
from pathlib import Path

for file_path in Path('src').glob('**/*.c'):
    print(file_path)  # do whatever you need with these files
Update:
Starting with Python 3.5, the same syntax is also supported by glob.glob().
import os
import fnmatch

def recursive_glob(treeroot, pattern):
    results = []
    for base, dirs, files in os.walk(treeroot):
        goodfiles = fnmatch.filter(files, pattern)
        results.extend(os.path.join(base, f) for f in goodfiles)
    return results
fnmatch gives you exactly the same patterns as glob, so this is really an excellent replacement for glob.glob with very close semantics. An iterative version (e.g. a generator), IOW a replacement for glob.iglob, is a trivial adaptation (just yield the intermediate results as you go, instead of extending a single results list to return at the end).
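Following that suggestion, a sketch of the generator (iglob-style) version:
import os
import fnmatch

def recursive_iglob(treeroot, pattern):
    # yield each match as the walk proceeds instead of building one big list
    for base, dirs, files in os.walk(treeroot):
        for f in fnmatch.filter(files, pattern):
            yield os.path.join(base, f)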
You'll want to use os.walk to collect filenames that match your criteria. For example:
import os

cfiles = []
for root, dirs, files in os.walk('src'):
    for file in files:
        if file.endswith('.c'):
            cfiles.append(os.path.join(root, file))
Here's a solution with nested list comprehensions, os.walk and simple suffix matching instead of glob:
import os

cfiles = [os.path.join(root, filename)
          for root, dirnames, filenames in os.walk('src')
          for filename in filenames if filename.endswith('.c')]
It can be compressed to a one-liner:
import os;cfiles=[os.path.join(r,f) for r,d,fs in os.walk('src') for f in fs if f.endswith('.c')]
or generalized as a function:
import os

def recursive_glob(rootdir='.', suffix=''):
    return [os.path.join(looproot, filename)
            for looproot, _, filenames in os.walk(rootdir)
            for filename in filenames if filename.endswith(suffix)]

cfiles = recursive_glob('src', '.c')
If you do need full glob style patterns, you can follow Alex's and Bruno's example and use fnmatch:
import fnmatch
import os

def recursive_glob(rootdir='.', pattern='*'):
    return [os.path.join(looproot, filename)
            for looproot, _, filenames in os.walk(rootdir)
            for filename in filenames
            if fnmatch.fnmatch(filename, pattern)]

cfiles = recursive_glob('src', '*.c')
Consider pathlib.rglob().
This is like calling Path.glob() with "**/" added in front of the given relative pattern:
import pathlib

for p in pathlib.Path("src").rglob("*.c"):
    print(p)
import os, glob

for each in glob.glob('path/**/*.c', recursive=True):
    print(f'Name with path: {each} \nName without path: {os.path.basename(each)}')
glob.glob('*.c') : matches all files ending in .c in the current directory
glob.glob('*/*.c') : matches all files ending in .c in the immediate subdirectories only, but not in the current directory
glob.glob('**/*.c') : same as 2, because without recursive=True the ** behaves like a plain *
glob.glob('*.c', recursive=True) : same as 1
glob.glob('*/*.c', recursive=True) : same as 2
glob.glob('**/*.c', recursive=True) : matches all files ending in .c in the current directory and in all subdirectories
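A quick way to check these claims yourself, assuming a made-up layout with src/main.c and src/sub/util.c:
import glob
import os

os.chdir('src')
print(glob.glob('*.c'))                     # ['main.c']
print(glob.glob('*/*.c'))                   # ['sub/util.c']
print(glob.glob('**/*.c', recursive=True))  # ['main.c', 'sub/util.c'] (order may vary)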
In case this may interest anyone, I've profiled the top three proposed methods.
I have about ~500K files in the globbed folder (in total), and 2K files that match the desired pattern.
here's the (very basic) code
import glob
import json
import fnmatch
import os
from pathlib import Path
from time import time

def find_files_iglob():
    return glob.iglob("./data/**/data.json", recursive=True)

def find_files_oswalk():
    for root, dirnames, filenames in os.walk('data'):
        for filename in fnmatch.filter(filenames, 'data.json'):
            yield os.path.join(root, filename)

def find_files_rglob():
    return Path('data').rglob('data.json')

t0 = time()
for f in find_files_oswalk(): pass
t1 = time()
for f in find_files_rglob(): pass
t2 = time()
for f in find_files_iglob(): pass
t3 = time()
print(t1-t0, t2-t1, t3-t2)
And the results I got were:
os_walk: ~3.6sec
rglob ~14.5sec
iglob: ~16.9sec
The platform: Ubuntu 16.04, x86_64 (Core i7).
Recently I had to recover my pictures with the extension .jpg. I ran photorec and recovered 4579 directories with 2.2 million files within, having a tremendous variety of extensions. With the script below I was able to select 50133 files having the .jpg extension within minutes:
#!/usr/bin/env python2.7
import glob
import shutil
import os

src_dir = "/home/mustafa/Masaüstü/yedek"
dst_dir = "/home/mustafa/Genel/media"
for mediafile in glob.iglob(os.path.join(src_dir, "*", "*.jpg")):  # "*" is for subdirectory
    shutil.copy(mediafile, dst_dir)
Based on other answers, this is my current working implementation, which retrieves nested xml files in a root directory:
files = []
for root, dirnames, filenames in os.walk(myDir):
    files.extend(glob.glob(root + "/*.xml"))
I'm really having fun with python :)
For python 3.5 and later
import glob

# file_names_array = glob.glob('path/*.c', recursive=True)
# above works for files directly at path/ as guided by NeStack

# updated version
file_names_array = glob.glob('path/**/*.c', recursive=True)
further you might need
for full_path_in_src in file_names_array:
    print(full_path_in_src)  # be like 'abc/xyz.c'
    # Full system path of this would be like => 'path till src/abc/xyz.c'
Johan and Bruno provide excellent solutions on the minimal requirement as stated. I have just released Formic which implements Ant FileSet and Globs which can handle this and more complicated scenarios. An implementation of your requirement is:
import formic

fileset = formic.FileSet(include="/src/**/*.c")
for file_name in fileset.qualified_files():
    print(file_name)
Another way to do it using just the glob module. Just seed the rglob method with a starting base directory and a pattern to match and it will return a list of matching file names.
import glob
import os

def _getDirs(base):
    return [x for x in glob.iglob(os.path.join(base, '*')) if os.path.isdir(x)]

def rglob(base, pattern):
    results = []
    results.extend(glob.glob(os.path.join(base, pattern)))
    for d in _getDirs(base):
        # d already includes base, so recurse into it directly
        results.extend(rglob(d, pattern))
    return results
Or with a list comprehension:
>>> base = r"c:\User\xtofl"
>>> binfiles = [os.path.join(root, f)
...             for root, _, files in os.walk(base)
...             for f in files if f.endswith(".jpg")]
If the files are on a remote file system or inside an archive, you can use an implementation of the fsspec AbstractFileSystem class. For example, to list all the files in a zipfile:
from fsspec.implementations.zip import ZipFileSystem
fs = ZipFileSystem("/tmp/test.zip")
fs.glob("/**") # equivalent: fs.find("/")
or to list all the files in a publicly available S3 bucket:
from s3fs import S3FileSystem
fs_s3 = S3FileSystem(anon=True)
fs_s3.glob("noaa-goes16/ABI-L1b-RadF/2020/045/**") # or use fs_s3.find
You can also use it for a local filesystem, which may be interesting if your implementation should be filesystem-agnostic:
from fsspec.implementations.local import LocalFileSystem
fs = LocalFileSystem()
fs.glob("/tmp/test/**")
Other implementations include Google Cloud, Github, SFTP/SSH, Dropbox, and Azure. For details, see the fsspec API documentation.
Just made this... it will print files and directories in a hierarchical way.
But I didn't use fnmatch or walk.
#!/usr/bin/python
import os, glob, sys

def dirlist(path, c=1):
    for i in glob.glob(os.path.join(path, "*")):
        if os.path.isfile(i):
            filepath, filename = os.path.split(i)
            print('----' * c + filename)
        elif os.path.isdir(i):
            dirname = os.path.basename(i)
            print('----' * c + dirname)
            c += 1
            dirlist(i, c)
            c -= 1

path = os.path.normpath(sys.argv[1])
print(os.path.basename(path))
dirlist(path)
This one uses fnmatch or a regular expression:
import fnmatch, os

def filepaths(directory, pattern):
    for root, dirs, files in os.walk(directory):
        for basename in files:
            try:
                matched = pattern.match(basename)
            except AttributeError:
                matched = fnmatch.fnmatch(basename, pattern)
            if matched:
                yield os.path.join(root, basename)

# usage
if __name__ == '__main__':
    from pprint import pprint as pp
    import re
    path = r'/Users/hipertracker/app/myapp'
    pp([x for x in filepaths(path, re.compile(r'.*\.py$'))])
    pp([x for x in filepaths(path, '*.py')])
In addition to the suggested answers, you can do this with some lazy generation and list comprehension magic:
import os, glob, itertools

results = itertools.chain.from_iterable(
    glob.iglob(os.path.join(root, '*.c'))
    for root, dirs, files in os.walk('src'))

for f in results:
    print(f)
Besides fitting in one line and avoiding unnecessary lists in memory, this also has the nice side effect, that you can use it in a way similar to the ** operator, e.g., you could use os.path.join(root, 'some/path/*.c') in order to get all .c files in all sub directories of src that have this structure.
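For instance, a minimal sketch of that variation (the some/path subdirectory structure is made up for illustration):
import os, glob, itertools

# collect .c files only from directories matching <root>/some/path under src
results = itertools.chain.from_iterable(
    glob.iglob(os.path.join(root, 'some/path/*.c'))
    for root, dirs, files in os.walk('src'))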
This is working code for Python 2.7. As part of my devops work, I was required to write a script which would move the config files marked with live-appName.properties to appName.properties. There could be other extension files as well, like live-appName.xml.
Below is working code for this, which finds the files in the given directories (nested levels) and then renames (moves) them to the required filename:
import os
import fnmatch
import shutil

def flipProperties(searchDir):
    print "Flipping properties to point to live DB"
    for root, dirnames, filenames in os.walk(searchDir):
        for filename in fnmatch.filter(filenames, 'live-*.*'):
            targetFileName = os.path.join(root, filename.split("live-")[1])
            print "File " + os.path.join(root, filename) + " will be moved to " + targetFileName
            shutil.move(os.path.join(root, filename), targetFileName)
This function is called from a main script
flipProperties(searchDir)
Hope this helps someone struggling with similar issues.
Simplified version of Johan Dahlin's answer, without fnmatch.
import os

matches = []
for root, dirnames, filenames in os.walk('src'):
    matches += [os.path.join(root, f) for f in filenames if f[-2:] == '.c']
Here is my solution using list comprehension to search for multiple file extensions recursively in a directory and all subdirectories:
import os, glob

def _globrec(path, *exts):
    """ Glob recursively a directory and all subdirectories for multiple file extensions
    Note: Glob is case-insensitive, i. e. for '\*.jpg' you will get files ending
    with .jpg and .JPG

    Parameters
    ----------
    path : str
        A directory name
    exts : tuple
        File extensions to glob for

    Returns
    -------
    files : list
        list of files matching extensions in exts in path and subfolders
    """
    dirs = [a[0] for a in os.walk(path)]
    f_filter = [d + e for d in dirs for e in exts]
    return [f for files in [glob.iglob(files) for files in f_filter] for f in files]

my_pictures = _globrec(r'C:\Temp', '\*.jpg', '\*.bmp', '\*.png', '\*.gif')
for f in my_pictures:
    print(f)
import sys, os, glob

dir_list = ["c:\\books\\heap"]
while len(dir_list) > 0:
    cur_dir = dir_list[0]
    del dir_list[0]
    list_of_files = glob.glob(cur_dir + '\\*')
    for book in list_of_files:
        if os.path.isfile(book):
            print(book)
        else:
            dir_list.append(book)
I modified the top answer in this posting... and recently created this script, which will loop through all files in a given directory (searchdir) and the sub-directories under it... and print filename, rootdir, modified/creation date, and size.
Hope this helps someone... and that they can walk the directory and get fileinfo.
import time
import fnmatch
import os

def fileinfo(file):
    filename = os.path.basename(file)
    rootdir = os.path.dirname(file)
    lastmod = time.ctime(os.path.getmtime(file))
    creation = time.ctime(os.path.getctime(file))
    filesize = os.path.getsize(file)
    print("%s**\t%s\t%s\t%s\t%s" % (rootdir, filename, lastmod, creation, filesize))

searchdir = r'D:\Your\Directory\Root'
matches = []
for root, dirnames, filenames in os.walk(searchdir):
    ## for filename in fnmatch.filter(filenames, '*.c'):
    for filename in filenames:
        ## matches.append(os.path.join(root, filename))
        ## print matches
        fileinfo(os.path.join(root, filename))
Here is a solution that will match the pattern against the full path and not just the base filename.
It uses fnmatch.translate to convert a glob-style pattern into a regular expression, which is then matched against the full path of each file found while walking the directory.
re.IGNORECASE is optional, but desirable on Windows since the file system itself is not case-sensitive. (I didn't bother compiling the regex because docs indicate it should be cached internally.)
import fnmatch
import os
import re

def findfiles(dir, pattern):
    patternregex = fnmatch.translate(pattern)
    for root, dirs, files in os.walk(dir):
        for basename in files:
            filename = os.path.join(root, basename)
            if re.search(patternregex, filename, re.IGNORECASE):
                yield filename
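A hypothetical usage example (the directory and pattern are made up):
# match against the full path, e.g. every .c file under any tests directory inside src
for filename in findfiles('src', '*/tests/*.c'):
    print(filename)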
I needed a solution for python 2.x that works fast on large directories.
I ended up with this:
import subprocess

foundfiles = subprocess.check_output("ls src/*.c src/**/*.c", shell=True)
for foundfile in foundfiles.splitlines():
    print foundfile
Note that you might need some exception handling in case ls doesn't find any matching file.
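For instance, a minimal sketch of that exception handling; note that ** in the shell pattern also assumes a shell with globstar support, such as zsh or bash with globstar enabled:
import subprocess

try:
    foundfiles = subprocess.check_output("ls src/*.c src/**/*.c", shell=True)
except subprocess.CalledProcessError:
    foundfiles = b''  # ls exits non-zero when nothing matches
for foundfile in foundfiles.splitlines():
    print(foundfile)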

excluding some files from Build in Scons

Since I am a newbie in Scons, I am finding it difficult to migrate my existing Makefile to Scons.
Background:
I have 50 files in a directory. I want to filter only the files with a *.cxx extension, and of those, only the filenames without the string "win32".
Can somebody suggest an implementation for this logic in Scons?
Makefile Implementation :
WIN32FILTER = $(wildcard *win32*)
CXXOBJS = $(patsubst %.cxx,%.o,$(filter-out $(WIN32FILTER),$(wildcard *.cxx)))
In Scons, I am trying something like this:
moduleSources = ''
for root, dirs, files in os.walk('./'):
    for filename in fnmatch.filter(files, '*.cxx'):
        if "win32" not in filename:
            moduleSources += ' ' + filename
env.StaticLibrary("support_host", moduleSources)
moduleSources here should contain the list of all *.cxx files (excluding those with the win32 string), which will be used to make a static library.
Any help is appreciated.
You create a single string with spaces in it to describe the set of source code files. This doesn't do what you hoped.
Instead, create a list of filenames. The following SConstruct does what you want:
import os
import fnmatch

env = Environment()
moduleSources = []
for root, dirs, files in os.walk('./'):
    for filename in fnmatch.filter(files, '*.cxx'):
        if "win32" not in filename:
            moduleSources.append(os.path.join(root, filename))
env.StaticLibrary("support_host", moduleSources)

How to find all files with a particular extension? [duplicate]

I am trying to find all the .c files in a directory using Python.
I wrote this, but it is just returning me all files - not just .c files:
import os
import re

results = []
for folder in gamefolders:
    for f in os.listdir(folder):
        if re.search('.c', f):
            results += [f]
print results
How can I just get the .c files?
try changing the inner loop to something like this
results += [each for each in os.listdir(folder) if each.endswith('.c')]
Try "glob":
>>> import glob
>>> glob.glob('./[0-9].*')
['./1.gif', './2.txt']
>>> glob.glob('*.gif')
['1.gif', 'card.gif']
>>> glob.glob('?.gif')
['1.gif']
KISS
# KISS
import os

results = []
for folder in gamefolders:
    for f in os.listdir(folder):
        if f.endswith('.c'):
            results.append(f)
print results
There is a better solution than directly using regular expressions: the standard library's fnmatch module for dealing with file name patterns. (See also the glob module.)
Write a helper function:
import fnmatch
import os

def listdir(dirname, pattern="*"):
    return fnmatch.filter(os.listdir(dirname), pattern)
and use it as follows:
result = listdir("./sources", "*.c")
for _, _, filenames in os.walk(folder):
    for file in filenames:
        fileExt = os.path.splitext(file)[-1]
        if fileExt == '.c':
            results.append(file)
For another alternative you could use fnmatch
import fnmatch
import os

results = []
for root, dirs, files in os.walk(path):
    for _file in files:
        if fnmatch.fnmatch(_file, '*.c'):
            results.append(os.path.join(root, _file))
print results
or with a list comprehension:
for root, dirs, files in os.walk(path):
    [results.append(os.path.join(root, _file))
     for _file in files
     if fnmatch.fnmatch(_file, '*.c')]
or using filter:
for root, dirs, files in os.walk(path):
    [results.append(os.path.join(root, _file))
     for _file in fnmatch.filter(files, '*.c')]
Change the directory to the given path, so that you can search files within directory. If you don't change the directory then this code will search files in your present directory location:
import os    # importing os library
import glob  # importing glob library

path = raw_input()  # input from the user
os.chdir(path)
filedata = glob.glob('*.c')  # all files with .c extensions stored in filedata
print filedata
import os, re

cfile = re.compile("^.*?\.c$")
results = []
for name in os.listdir(directory):
    if cfile.match(name):
        results.append(name)
The implementation of shutil.copytree is in the docs. I modified it to take a list of extensions to INCLUDE.
import os
from shutil import copy2, copystat, Error

def my_copytree(src, dst, symlinks=False, *extentions):
    """ I modified the 2.7 implementation of shutils.copytree
    to take a list of extentions to INCLUDE, instead of an ignore list.
    """
    names = os.listdir(src)
    os.makedirs(dst)
    errors = []
    for name in names:
        srcname = os.path.join(src, name)
        dstname = os.path.join(dst, name)
        try:
            if symlinks and os.path.islink(srcname):
                linkto = os.readlink(srcname)
                os.symlink(linkto, dstname)
            elif os.path.isdir(srcname):
                my_copytree(srcname, dstname, symlinks, *extentions)
            else:
                ext = os.path.splitext(srcname)[1]
                if not ext in extentions:
                    # skip the file
                    continue
                copy2(srcname, dstname)
            # XXX What about devices, sockets etc.?
        except (IOError, os.error), why:
            errors.append((srcname, dstname, str(why)))
        # catch the Error from the recursive copytree so that we can
        # continue with other files
        except Error, err:
            errors.extend(err.args[0])
    try:
        copystat(src, dst)
    # except WindowsError: # cant copy file access times on Windows
    #     pass
    except OSError, why:
        errors.extend((src, dst, str(why)))
    if errors:
        raise Error(errors)
Usage: For example, to copy only .config and .bat files....
my_copytree(source, targ, '.config', '.bat')
This is pretty clean. The commands come from the os library.
This code will search through the current working directory and list only the specified file type. You can change this by replacing 'os.getcwd()' with your target directory, and choose the file type by replacing '(ext)'. os.fsdecode is there so you don't get a bytewise error from .endswith(). This also sorts alphabetically; you can remove sorted() for the raw list.
import os

filenames = sorted([os.fsdecode(file) for file in os.listdir(os.getcwd())
                    if os.fsdecode(file).endswith(".(ext)")])
Here's yet another solution, using pathlib (and Python 3):
from pathlib import Path

gamefolder = "path/to/dir"
result = sorted(Path(gamefolder).glob("**/*.c"))
Notice the double asterisk (**) in the glob() argument. This will search the gamefolder as well as its subdirectories. If you only want to search the gamefolder, use a single * in the pattern: "*.c". For more details, see the documentation.
If you replace '.c' with '[.]c$', you're searching for files that contain .c as the last two characters of the name, rather than all files that contain a c, with at least one character before it.
Edit: Alternatively, match f[-2:] with '.c', this MAY be computationally cheaper than pulling out a regexp match.
Just to be clear, if you wanted the dot character in your search term, you could've escaped it too:
'.*\.c' would give you what you needed, plus you would need to use something like:
results.append(f), instead of what you had listed as results += [f]
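A small sketch of the difference (the filenames are made up):
import re

names = ['main.c', 'docs.txt', 'notes.cpp']
print([f for f in names if re.search('.c', f)])     # all three match: '.' matches any character
print([f for f in names if re.search('[.]c$', f)])  # ['main.c']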
This function returns a list of all file names with the specified extension that live in the specified directory:
import os

def listFiles(path, extension):
    return [f for f in os.listdir(path) if f.endswith(extension)]

print(listFiles('/Path/to/directory/with/files', '.txt'))
If you want to list all files with the specified extension in a certain directory and its subdirectories you could do:
import os

def filterFiles(path, extension):
    return [file for root, dirs, files in os.walk(path)
            for file in files if file.endswith(extension)]

print(filterFiles('/Path/to/directory/with/files', '.txt'))
You can actually do this with just os.listdir
import os

results = [f for folder in gamefolders
           for f in os.listdir(folder) if f.endswith('.c')]
