How to get the newest directory in Python

I'm looking for a way to find the newest directory created inside another directory.
The only method I know is os.listdir(), but it shows all files and directories inside. How can I list only directories, and how can I access a directory's attributes to find the newest one?
Thanks

import os
# keep only the directories inside the current directory
dirs = [d for d in os.listdir('.') if os.path.isdir(d)]
# sort newest-first by creation time and take the first element
sorted(dirs, key=lambda x: os.path.getctime(x), reverse=True)[:1]
Update:
Maybe some more explanation:
[d for d in os.listdir('.') if os.path.isdir(d)]
is a list comprehension. You can read more about them in the Python tutorial.
The code does the same as
dirs = []
for d in os.listdir('.'):
    if os.path.isdir(d):
        dirs.append(d)
would do, but the list comprehension is considered more readable.
sorted() is a built-in function; the Python documentation has examples.
The code I showed sorts all elements within dirs by os.path.getctime(ELEMENT), in reverse.
The result is again a list, which can of course be accessed using the [index] syntax and slicing.
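As a toy illustration of sorted() with a key and slicing (not tied to the directory code, just the mechanics):
# sort by length, longest first, then slice off the first element
names = ['bb', 'a', 'cccc']
print(sorted(names, key=len, reverse=True)[:1])  # ['cccc']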

Here's a little function I wrote to return the name of the newest directory:
#!/usr/bin/env python
import os
import glob
import operator

def findNewestDir(directory):
    os.chdir(directory)
    dirs = {}
    # map each subdirectory name to its creation time
    for d in glob.glob('*'):
        if os.path.isdir(d):
            dirs[d] = os.path.getctime(d)
    # sort the (name, ctime) pairs by ctime; the last entry is the newest
    lister = sorted(dirs.items(), key=operator.itemgetter(1))
    return lister[-1][0]

print("The newest directory is", findNewestDir('/Users/YOURUSERNAME/Sites'))

The following Python code should solve your problem.
import os
import glob

for d in glob.glob('*'):
    if os.path.isdir(d):
        print(d, ":", os.path.getctime(d))
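To go one step further and pick the newest directory from that listing, a small sketch building on the snippet above:
import os
import glob

# the subdirectory with the largest ctime is the newest
newest = max((d for d in glob.glob('*') if os.path.isdir(d)),
             key=os.path.getctime)
print(newest)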

Check out os.walk and the examples in the docs for an easy way to get directories.
root, dirs, files = next(os.walk('/your/path'))
Then check out os.path.getctime which depending on your os may be creation or modification time. If you are not already familiar with it, you will also want to read up on os.path.join.
os.path.getctime(path) Return the system’s ctime which, on some systems (like Unix) is the time of the last change, and, on others (like Windows), is the creation time for path.
max((os.path.getctime(os.path.join(root, f)), f) for f in dirs)
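Pieced together, a minimal runnable sketch of that approach (the path is a placeholder):
import os

base = '/your/path'  # placeholder
root, dirs, files = next(os.walk(base))
# (ctime, name) tuples compare by ctime first, so max() picks the newest
ctime, newest = max((os.path.getctime(os.path.join(root, d)), d) for d in dirs)
print(newest)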

Related

Deleting the useless output files using Python

After I execute a Python script from a particular directory, I get many output files, but apart from 5-6 files I want to delete the rest from that directory. What I have done is put those 5-6 useful files in a list and delete all the other files in the directory that are not in that list. Below is my code:
list1=['prog_1.py', 'prog_2.py', 'prog_3.py'] #Extend
import os
dir = '/home/dev/codes' #Change accordingly
for f in os.listdir(dir):
    if f not in list1:
        os.remove(os.path.join(dir, f))
Now here I just want to add one more thing, if the output files start with output_of_final, then I don't want them to be deleted. How can I do it? Should I use regex?
You could use Regex, but that's overkill here. Just use the str.startswith method.
Also, it's bad practice to use the names of built-in types and functions as variable names; dir() is a built-in function, so I have renamed dir to directory. (https://docs.python.org/3/library/functions.html#dir)
list1 = ['prog_1.py', 'prog_2.py', 'prog_3.py'] # Extend
import os
directory = '/home/dev/codes' # Change accordingly
for f in os.listdir(directory):
    if f not in list1 and not f.startswith('output_of_final'):
        os.remove(os.path.join(directory, f))
Yes, a regex would work here, but there are easier options, like using the startswith method for strings:
list1=['prog_1.py', 'prog_2.py', 'prog_3.py'] #Extend
import os
dir = '/home/dev/codes' #Change accordingly
for f in os.listdir(dir):
    if (f not in list1) and (not f.startswith('output_of_final')):
        os.remove(os.path.join(dir, f))
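For what it's worth, a pathlib variant of the same cleanup; a sketch assuming the same directory layout as in the question:
from pathlib import Path

keep = {'prog_1.py', 'prog_2.py', 'prog_3.py'}  # Extend
directory = Path('/home/dev/codes')  # Change accordingly
for p in directory.iterdir():
    # delete only regular files that are neither kept nor final outputs
    if p.is_file() and p.name not in keep and not p.name.startswith('output_of_final'):
        p.unlink()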

Variable interpolation appears not to work

I am trying to write a simple function that returns a list of files in a directory and its subdirectories. I shamelessly stole the majority of this function from another SO poster. I am using Python 2.6.4.
def getFiles(Asite):
    # returns a list of config files
    from os import listdir
    from os.path import isfile, join
    mypath = '/etc/config/' + Asite
    print mypath
    files = [f for f in listdir(mypath) if isfile(join(mypath, f))]
    return files
The function simply returns an empty list, []. It appears that the mypath variable is not being interpolated by the listdir() and isfile() functions. Before anyone asks, yes, I have verified that there are in fact files located at mypath. Why is my files array empty?
Thanks all for your helpful comments. It turns out that the directory I was searching in only had subdirectories in it, but no files. So os.listdir() didn't work for searching subdirectories, as it only goes one level deep (thank you abhishekgarg). I ended up using the code below, which worked great (I also found it on SO and modified it a bit).
import os

def getFiles(Asite):
    # returns a list of config files
    files = []
    mypath = '/share/profile/base/sol-10-sparc-base/config/' + Asite
    for dirname, dirnames, filenames in os.walk(mypath):
        for filename in filenames:
            pfile = os.path.join(dirname, filename)
            # keep the part of the path relative to mypath
            files.append(pfile[len(mypath):])
    return files
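On Python 3.5+, a pathlib-based equivalent would be a few lines shorter; a sketch, assuming the same base path (the function name is hypothetical):
from pathlib import Path

def getFilesPathlib(asite):
    # recursive variant of getFiles using Path.rglob
    base = Path('/share/profile/base/sol-10-sparc-base/config') / asite
    return [str(p.relative_to(base)) for p in base.rglob('*') if p.is_file()]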

How to find files and skip directories in os.listdir

I use os.listdir and it works fine, but I get sub-directories in the list also, which is not what I want: I need only files.
What function do I need to use for that?
I looked also at os.walk and it seems to be what I want, but I'm not sure of how it works.
You need to filter out directories; os.listdir() lists all names in a given path. You can use os.path.isdir() for this:
import os

basepath = '/path/to/directory'
for fname in os.listdir(basepath):
    path = os.path.join(basepath, fname)
    if os.path.isdir(path):
        # skip directories
        continue
Note that this only filters out directories after following symlinks: fname is not necessarily a regular file; it could also be a symlink to a file. If you need to filter out symlinks as well, you'd also want to test with os.path.islink().
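For example, a short sketch that skips both directories and symlinks (same placeholder path as above):
import os

basepath = '/path/to/directory'
for fname in os.listdir(basepath):
    path = os.path.join(basepath, fname)
    # skip directories and symlinks alike
    if os.path.isdir(path) or os.path.islink(path):
        continue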
On a modern Python version (3.5 or newer), an even better option is to use the os.scandir() function; this produces DirEntry() instances. In the common case this is faster, as the DirEntry objects already cache enough information to determine whether an entry is a directory or not:
import os

basepath = '/path/to/directory'
for entry in os.scandir(basepath):
    if entry.is_dir():
        # skip directories
        continue
    # use entry.path to get the full path of this entry, or use
    # entry.name for the base filename
You can use entry.is_file(follow_symlinks=False) if only regular files (and not symlinks) are needed.
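For instance, to collect only regular files, a short sketch (path is a placeholder):
import os

basepath = '/path/to/directory'
regular_files = [entry.name for entry in os.scandir(basepath)
                 if entry.is_file(follow_symlinks=False)]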
os.walk() does the same work under the hood; unless you need to recurse down subdirectories, you don't need to use os.walk() here.
Here is a nice little one-liner in the form of a list comprehension:
[f for f in os.listdir(your_directory) if os.path.isfile(os.path.join(your_directory, f))]
This will return a list of filenames within the specified your_directory.
import os

directoryOfChoice = "C:\\" # Replace with a directory of choice!!!
# join each name with the directory; otherwise isfile() would test the
# names against the current working directory instead
files = list(filter(lambda f: os.path.isfile(os.path.join(directoryOfChoice, f)),
                    os.listdir(directoryOfChoice)))
P.S: os.getcwd() returns the current directory.
for fname in os.listdir('.'):
    if os.path.isdir(fname):
        pass  # do your stuff here for directory
    else:
        pass  # do your stuff here for regular file
The solution with os.walk() would be:
import os

for r, d, f in os.walk('path/to/dir'):
    for name in f:
        # f lists all the files in directory r
        print(os.path.join(r, name))
Even though this is an older post, let me add, for the sake of completeness, the pathlib library introduced in Python 3.4, which provides an OOP style of handling directories and files. To get all files in a directory, you can use
from pathlib import Path

def get_list_of_files_in_dir(directory: str, file_types: str = '*') -> list:
    return [f for f in Path(directory).glob(file_types) if f.is_file()]
Following your example, you could use it like this:
mypath = '/path/to/directory'
files = get_list_of_files_in_dir(mypath)
If you only want a subset of files depending on the file extension (e.g. "only csv files"), you can use:
files = get_list_of_files_in_dir(mypath, '*.csv')
Note that per PEP 471, the DirEntry object's method signature is is_dir(*, follow_symlinks=True),
so...
from os import scandir

folder = '/home/myfolder/'
for entry in scandir(folder):
    if entry.is_dir():
        # do code or skip
        continue
    myfile = folder + entry.name
    # do something with myfile

Simplest way to get the equivalent of "find ." in python?

What is the simplest way to get the full recursive list of files inside a folder with Python? I know about os.walk(), but it seems like overkill for just getting the unfiltered list of all files. Is it really the only option?
There's nothing preventing you from creating your own function:
import os

def listfiles(folder):
    for root, folders, files in os.walk(folder):
        for filename in folders + files:
            yield os.path.join(root, filename)
You can use it like so:
for filename in listfiles('/etc/'):
    print(filename)
os.walk() is not overkill by any means. It can generate your list of files and directories in a jiffy:
files = [os.path.join(dirpath, filename)
         for (dirpath, dirs, files) in os.walk('.')
         for filename in (dirs + files)]
You can turn this into a generator, to process only one path at a time and save on memory.
You could also use the find program itself from Python by using sh
import sh
text_files = sh.find(".", "-iname", "*.txt")
Either that, or you can recurse manually with isdir()/isfile() and listdir(), or you can use subprocess.check_output() and call find .. Basically, os.walk() is the highest-level approach, a semi-manual solution based on listdir() is slightly lower level, and if you want the same output find . would give you, you can make a system call with subprocess.
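A minimal subprocess sketch, assuming a Unix-like system with find(1) on the PATH:
import subprocess

# equivalent of running `find .` in a shell
output = subprocess.check_output(['find', '.'], text=True)
paths = output.splitlines()
print(paths)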
pathlib.Path.rglob is pretty simple. It lists the entire directory tree
(The argument is a filepath search pattern. "*" means list everything)
import pathlib

for path in pathlib.Path("directory_to_list/").rglob("*"):
    print(path)
os.walk() is hard to use, just kick it and use pathlib instead.
Here is a Python function mimicking R's list.files function.
import pathlib

def list_files(path, pattern, full_names=False, recursive=True):
    if recursive:
        files = pathlib.Path(path).rglob(pattern)
    else:
        files = pathlib.Path(path).glob(pattern)
    if full_names:
        files = [str(f) for f in files]
    else:
        files = [f.name for f in files]
    return files
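Usage mirrors R's list.files; for example (the directory and pattern here are hypothetical):
# full paths of all .py files under /tmp, searched recursively
py_files = list_files('/tmp', '*.py', full_names=True)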
import os

path = "path/to/your/dir"
for (path, dirs, files) in os.walk(path):
    print(files)
Is this overkill, or am I missing something?

How do you get a directory listing sorted by creation date in python?

What is the best way to get a list of all files in a directory, sorted by date [created | modified], using Python, on a Windows machine?
I've done this in the past for a Python script to determine the last updated files in a directory:
import glob
import os
search_dir = "/mydir/"
# remove anything from the list that is not a file (directories, symlinks)
# thanks to J.F. Sebastian for pointing out that the requirement was a list
# of files (presumably not including directories)
files = list(filter(os.path.isfile, glob.glob(search_dir + "*")))
files.sort(key=lambda x: os.path.getmtime(x))
That should do what you're looking for based on file mtime.
EDIT: Note that you can also use os.listdir() in place of glob.glob() if desired; the reason I used glob in my original code was that I wanted to search only for files with a particular set of file extensions, which glob() was better suited to. To use listdir, here's what it would look like:
import os
search_dir = "/mydir/"
os.chdir(search_dir)
files = filter(os.path.isfile, os.listdir(search_dir))
files = [os.path.join(search_dir, f) for f in files] # add path to each file
files.sort(key=lambda x: os.path.getmtime(x))
Update: to sort dirpath's entries by modification date in Python 3:
import os
from pathlib import Path
paths = sorted(Path(dirpath).iterdir(), key=os.path.getmtime)
(put #Pygirl's answer here for greater visibility)
If you already have a list of filenames files, then to sort it in place by creation time on Windows (make sure the list contains absolute paths):
files.sort(key=os.path.getctime)
The list of files you could get, for example, using glob as shown in #Jay's answer.
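An end-to-end sketch combining the two (the directory here is a placeholder):
import glob
import os

files = glob.glob(r'C:\mydir\*')  # absolute paths, placeholder directory
files = [f for f in files if os.path.isfile(f)]
files.sort(key=os.path.getctime)  # creation time on Windows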
old answer
Here's a more verbose version of @Greg Hewgill's answer. It conforms most closely to the question's requirements: it makes a distinction between creation and modification dates (at least on Windows).
#!/usr/bin/env python
from stat import S_ISREG, ST_CTIME, ST_MODE
import os, sys, time

# path to the directory (relative or absolute)
dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.'

# get all entries in the directory w/ stats
entries = (os.path.join(dirpath, fn) for fn in os.listdir(dirpath))
entries = ((os.stat(path), path) for path in entries)

# leave only regular files, insert creation date
entries = ((stat[ST_CTIME], path)
           for stat, path in entries if S_ISREG(stat[ST_MODE]))
# NOTE: on Windows `ST_CTIME` is a creation date
#  but on Unix it could be something else
# NOTE: use `ST_MTIME` to sort by a modification date

for cdate, path in sorted(entries):
    print(time.ctime(cdate), os.path.basename(path))
Example:
$ python stat_creation_date.py
Thu Feb 11 13:31:07 2009 stat_creation_date.py
There is an os.path.getmtime function that gives the modification time in seconds since the epoch; it is a convenient shorthand for going through os.stat.
import os
os.chdir(directory)
sorted(filter(os.path.isfile, os.listdir('.')), key=os.path.getmtime)
Here's my version:
import os

def getfiles(dirpath):
    a = [s for s in os.listdir(dirpath)
         if os.path.isfile(os.path.join(dirpath, s))]
    a.sort(key=lambda s: os.path.getmtime(os.path.join(dirpath, s)))
    return a
First, we build a list of the file names. isfile() is used to skip directories; it can be omitted if directories should be included. Then we sort the list in place, using the modification date as the key.
Here's a one-liner:
import os
import time
from pprint import pprint
pprint([(x[0], time.ctime(x[1].st_ctime)) for x in sorted([(fn, os.stat(fn)) for fn in os.listdir(".")], key = lambda x: x[1].st_ctime)])
This calls os.listdir() to get a list of the filenames, then calls os.stat() for each one to get the creation time, then sorts against the creation time.
Note that this method only calls os.stat() once for each file, which will be more efficient than calling it for each comparison in a sort.
In Python 3.5+:
from pathlib import Path
sorted(Path('.').iterdir(), key=lambda f: f.stat().st_mtime)
Without changing directory:
import os

path = '/path/to/files/'
name_list = os.listdir(path)
full_list = [os.path.join(path, i) for i in name_list]
time_sorted_list = sorted(full_list, key=os.path.getmtime)
print(time_sorted_list)

# if you want just the filenames sorted, simply remove the dir from each
sorted_filename_list = [os.path.basename(i) for i in time_sorted_list]
print(sorted_filename_list)
from pathlib import Path
import os
sorted(Path('./').iterdir(), key=lambda t: t.stat().st_mtime)
or
sorted(Path('./').iterdir(), key=os.path.getmtime)
or
sorted(os.scandir('./'), key=lambda t: t.stat().st_mtime)
where mtime is the modification time.
Here's my answer using glob without filter if you want to read files with a certain extension in date order (Python 3).
import glob
import os

dataset_path = '/mydir/'
files = glob.glob(dataset_path + "/morepath/*.extension")
files.sort(key=os.path.getmtime)
# *** the shortest and best way ***
# getmtime --> sort by modified time
# getctime --> sort by created time
import glob,os
lst_files = glob.glob("*.txt")
lst_files.sort(key=os.path.getmtime)
print("\n".join(lst_files))
sorted(filter(os.path.isfile, os.listdir('.')),
       key=lambda p: os.stat(p).st_mtime)
You could use next(os.walk('.'))[-1] instead of filtering with os.path.isfile, but that leaves dead symlinks in the list, and os.stat will fail on them.
For completeness, with os.scandir (about 2x faster than pathlib):
import os
sorted(os.scandir('/tmp/test'), key=lambda d: d.stat().st_mtime)
Here is a basic step-by-step example to learn from:
import os, sys, time

dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.'
listdir = os.listdir(dirpath)
for i in listdir:
    os.chdir(dirpath)
    data_001 = os.path.realpath(i)
    listdir_stat1 = os.stat(data_001)
    print(time.ctime(listdir_stat1.st_ctime), data_001)
Alex Coventry's answer will produce an exception if the file is a symlink to a nonexistent file; the following code corrects that answer:
import os
import time
from datetime import datetime

sorted(filter(os.path.isfile, os.listdir('.')),
       key=lambda p: os.path.exists(p) and os.stat(p).st_mtime
       or time.mktime(datetime.now().timetuple()))
When the file doesn't exist, now() is used, and the symlink will go at the very end of the list.
This was my version:
import os

folder_path = r'D:\Movies\extra\new\dramas'  # your path
os.chdir(folder_path)  # make the path active
x = sorted(os.listdir(), key=os.path.getctime)  # sorted using creation time

for name in x:
    print(name)  # print every entry inside folder_path
Here is a simple couple of lines that filters by extension and provides a sort option:
import os
import re

def get_sorted_files(src_dir, regex_ext='.*', sort_reverse=False):
    # regex_ext is a regular-expression fragment matching the extension,
    # e.g. 'txt' or 'txt|csv'; the default '.*' matches any extension
    files_to_evaluate = [os.path.join(src_dir, f) for f in os.listdir(src_dir)
                         if re.search(r'.*\.({})$'.format(regex_ext), f)]
    files_to_evaluate.sort(key=os.path.getmtime, reverse=sort_reverse)
    return files_to_evaluate
Add your directory/folder to path, and if you want a specific file type add the file extension; you then get the file names in chronological order.
This works for me.
import glob, os

# file_location and date_file are defined elsewhere in the original post
path = os.path.expanduser(file_location + "/" + date_file)
os.chdir(path)
saved_file = glob.glob('*.xlsx')
saved_file.sort(key=os.path.getmtime)
print(saved_file)
On my system, os.listdir() happened to return entries sorted by last modified, but in reverse, so I could do:
import os
last_modified = os.listdir()[::-1]
Note, however, that os.listdir() returns entries in arbitrary order, so this is not guaranteed to hold elsewhere.
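A safer version that sorts explicitly; a sketch assuming the current working directory is the one of interest:
import os

# newest first, sorted explicitly by modification time
last_modified = sorted(os.listdir(), key=os.path.getmtime, reverse=True)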
Maybe you should use shell commands. In Unix/Linux, find piped with sort will probably be able to do what you want.
