I'm working on text-file processing in Python.
I have a text file (ctl_Files.txt) with content like the following:
------------------------
Changeset: 143
User: Sarfaraz
Date: Tuesday, April 05, 2011 5:34:54 PM
Comment:
Initial add, all objects.
Items:
add $/Systems/DB/Expences/Loader
add $/Systems/DB/Expences/Loader/AAA.txt
add $/Systems/DB/Expences/Loader/BBB.txt
add $/Systems/DB/Expences/Loader/CCC.txt
Check-in Notes:
Code Reviewer:
Performance Reviewer:
Reviewer:
Security Reviewer:
------------------------
Changeset: 145
User: Sarfaraz
Date: Thursday, April 07, 2011 5:34:54 PM
Comment:
edited objects.
Items:
edit $/Systems/DB/Expences/Loader
edit $/Systems/DB/Expences/Loader/AAA.txt
edit $/Systems/DB/Expences/Loader/AAB.txt
Check-in Notes:
Code Reviewer:
Performance Reviewer:
Reviewer:
Security Reviewer:
------------------------
Changeset: 147
User: Sarfaraz
Date: Wednesday, April 06, 2011 5:34:54 PM
Comment:
Initial add, all objects.
Items:
delete, source rename $/Systems/DB/Expences/Loader/AAA.txt;X34892
rename $/Systems/DB/Expences/Loader/AAC.txt.
Check-in Notes:
Code Reviewer:
Performance Reviewer:
Reviewer:
Security Reviewer:
------------------------
To process this file I wrote the following code:
#Tags - used for splitting the information
tag1 = 'Changeset:'
tag2 = 'User:'
tag3 = 'Date:'
tag4 = 'Comment:'
tag5 = 'Items:'
tag6 = 'Check-in Notes:'

#opening and reading the input file
#backslashes in the path must be escaped ('\\')
with open("C:\\Users\\md_sarfaraz\\Desktop\\ctl_Files.txt", "r") as myfile:
    val = myfile.read().replace('\n', ' ')

#counting the occurrences of any one of the above tags
#(the count is the same for all the tags)
occurrence = val.count(tag1)

#initializing the row variable
row = ""

#looping once per changeset
for count in range(1, occurrence + 1):
    row += ((val.split(tag1)[count].split(tag2)[0]).strip() + '|'
            + (val.split(tag2)[count].split(tag3)[0]).strip() + '|'
            + (val.split(tag3)[count].split(tag4)[0]).strip() + '|'
            + (val.split(tag4)[count].split(tag5)[0]).strip() + '|'
            + (val.split(tag5)[count].split(tag6)[0]).strip() + '\n')

#opening and writing the output file
with open("C:\\Users\\md_sarfaraz\\Desktop\\processed_ctl_Files.txt", "w+") as outfile:
    outfile.write(row)
and got the following result (processed_ctl_Files.txt):
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader add $/Systems/DB/Expences/Loader/AAA.txt add $/Systems/DB/Expences/Loader/BBB.txt add $/Systems/DB/Expences/Loader/CCC.txt
145|Sarfaraz|Thursday, April 07, 2011 5:34:54 PM|edited objects.|edit $/Systems/DB/Expences/Loader edit $/Systems/DB/Expences/Loader/AAA.txt edit $/Systems/DB/Expences/Loader/AAB.txt
147|Sarfaraz|Wednesday, April 06, 2011 5:34:54 PM|Initial add, all objects.|delete, source rename $/Systems/DB/Rascal/Expences/AAA.txt;X34892 rename $/Systems/DB/Rascal/Expences/AAC.txt.
But, I want the result like this:
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader
add $/Systems/DB/Expences/Loader/AAA.txt
add $/Systems/DB/Expences/Loader/BBB.txt
add $/Systems/DB/Expences/Loader/CCC.txt
145|Sarfaraz|Thursday, April 07, 2011 5:34:54 PM|edited objects.|edit $/Systems/DB/Expences/Loader
edit $/Systems/DB/Expences/Loader/AAA.txt
edit $/Systems/DB/Expences/Loader/AAB.txt
147|Sarfaraz|Wednesday, April 06, 2011 5:34:54 PM|Initial add, all objects.|delete, source rename $/Systems/DB/Rascal/Expences/AAA.txt;X34892
rename $/Systems/DB/Rascal/Expences/AAC.txt.
or, even better, results like this:
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader/AAA.txt
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader/BBB.txt
143|Sarfaraz|Tuesday, April 05, 2011 5:34:54 PM|Initial add, all objects.|add $/Systems/DB/Expences/Loader/CCC.txt
145|Sarfaraz|Thursday, April 07, 2011 5:34:54 PM|edited objects.|edit $/Systems/DB/Expences/Loader
145|Sarfaraz|Thursday, April 07, 2011 5:34:54 PM|edited objects.|edit $/Systems/DB/Expences/Loader/AAA.txt
145|Sarfaraz|Thursday, April 07, 2011 5:34:54 PM|edited objects.|edit $/Systems/DB/Expences/Loader/AAB.txt
147|Sarfaraz|Wednesday, April 06, 2011 5:34:54 PM|Initial add, all objects.|delete, source rename $/Systems/DB/Rascal/Expences/AAA.txt;X34892
147|Sarfaraz|Wednesday, April 06, 2011 5:34:54 PM|Initial add, all objects.|rename $/Systems/DB/Rascal/Expences/AAC.txt.
Let me know how I can do this. I'm also very new to Python, so please excuse any lousy or redundant code, and help me improve it.
This solution is not as short, and probably not as effective, as the answer using regular expressions, but it should be quite easy to understand. It also makes the parsed data easier to use, because each section's data is stored in a dictionary.
ctl_file = "ctl_Files.txt"  # path of source file
processed_ctl_file = "processed_ctl_Files.txt"  # path of destination file

#Tags - used for splitting the information
changeset_tag = 'Changeset:'
user_tag = 'User:'
date_tag = 'Date:'
comment_tag = 'Comment:'
items_tag = 'Items:'
checkin_tag = 'Check-in Notes:'
section_separator = "------------------------"

changesets = []

#open and read the input file
with open(ctl_file, 'r') as read_file:
    first_section = True
    changeset_dict = {}
    items = []
    comment_stage = False
    items_stage = False
    # Read one line at a time
    for line in read_file:
        # Check which tag matches the current line and store the data
        # under the matching key in the dictionary
        if changeset_tag in line:
            # split(":", 1) splits only on the first colon, so values
            # that themselves contain colons (such as the date) stay intact
            changeset_dict[changeset_tag] = line.split(":", 1)[1].strip()
        elif user_tag in line:
            changeset_dict[user_tag] = line.split(":", 1)[1].strip()
        elif date_tag in line:
            changeset_dict[date_tag] = line.split(":", 1)[1].strip()
        elif comment_tag in line:
            comment_stage = True
        elif items_tag in line:
            comment_stage = False
            items_stage = True
        elif checkin_tag in line:
            # The notes themselves are not parsed because the example file
            # contains no data for them, but reaching this tag means the
            # items list is complete
            if items_stage:
                changeset_dict[items_tag] = items
                items_stage = False
        elif section_separator in line:  # new section
            if first_section:
                first_section = False
                continue
            if items_stage:
                changeset_dict[items_tag] = items
                items_stage = False
            changesets.append(changeset_dict)
            changeset_dict = {}
            items = []
            comment_stage = False
        elif not line.strip():  # empty line
            if items_stage:
                changeset_dict[items_tag] = items
                items_stage = False
            comment_stage = False
        else:
            if comment_stage:
                changeset_dict[comment_tag] = line.strip()  # only works for a one-line comment
            elif items_stage:
                items.append(line.strip())

# Append the final section in case the file does not end with a separator
if changeset_dict:
    if items_stage:
        changeset_dict[items_tag] = items
    changesets.append(changeset_dict)

#open and write to the output file
with open(processed_ctl_file, 'w') as write_file:
    for changeset in changesets:
        row = "{0}|{1}|{2}|{3}|".format(changeset[changeset_tag], changeset[user_tag],
                                        changeset[date_tag], changeset[comment_tag])
        distance = len(row)
        items = changeset[items_tag]
        join_string = "\n" + distance * " "
        row += join_string.join(items) + "\n"
        write_file.write(row)
Also, try to use variable names that describe their content. Names like tag1, tag2, etc. say little about what the variables hold, which makes the code difficult to read, especially as scripts get longer. Readability might seem unimportant at first, but when revisiting old code it takes much longer to understand what it does when the variables are non-descriptive.
I would start by extracting the values into variables, then create a prefix from the first few tags. You can count the number of characters in the prefix and use that for the padding. When you get to the items, append the first one to the prefix; every other item is appended after padding made of that many spaces.
# keywords used in the "Items:" section
keywords = ['add', 'delete', 'edit', 'source', 'rename']

row = ""
# loop over the changesets (everything after each occurrence of tag1)
for cs in val.split(tag1)[1:]:
    changeset = cs.split(tag2)[0].strip()
    user = cs.split(tag2)[1].split(tag3)[0].strip()
    date = cs.split(tag3)[1].split(tag4)[0].strip()
    comment = cs.split(tag4)[1].split(tag5)[0].strip()
    items = cs.split(tag5)[1].split(tag6)[0].strip().split()
    notes = cs.split(tag6)
    prefix = '{0}|{1}|{2}|{3}'.format(changeset, user, date, comment)
    space_count = len(prefix)
    i = 0
    while i < len(items):
        # if we are printing the first item, add it to the other text
        if i == 0:
            pref = prefix
        # otherwise create padding from spaces
        else:
            pref = ' ' * space_count
        # collect the run of leading keywords
        words = ''
        for j in range(i, len(items)):
            if items[j] in keywords:
                words += ' ' + items[j]
            else:
                break
        row += '{0}|{1} {2}\n'.format(pref, words, items[j])
        i = j + 1  # skip past the keywords and the parameter
This seems to do what you want, but I am not sure if this is the best solution. Maybe it is better to process the file line by line and print the values straight to the stream?
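That line-by-line idea can be sketched as follows (my own sketch, not code from the question; it emits the second desired format by repeating the prefix for every item, and the tag strings are the ones in the sample file):

```python
# Sketch: stream the log section by section and emit one row per item,
# repeating the changeset/user/date/comment prefix each time.
def process(lines):
    rows = []
    changeset = user = date = comment = ""
    mode = None  # which multi-line block we are currently inside
    for raw in lines:
        line = raw.strip()
        if line.startswith("Changeset:"):
            changeset = line.split(":", 1)[1].strip()
        elif line.startswith("User:"):
            user = line.split(":", 1)[1].strip()
        elif line.startswith("Date:"):
            # split on the first colon only; the time contains colons too
            date = line.split(":", 1)[1].strip()
        elif line == "Comment:":
            mode = "comment"
        elif line == "Items:":
            mode = "items"
        elif line == "Check-in Notes:" or line.startswith("---"):
            mode = None
        elif mode == "comment" and line:
            comment = line
        elif mode == "items" and line:
            rows.append("|".join([changeset, user, date, comment, line]))
    return rows

sample = [
    "------------------------",
    "Changeset: 143",
    "User: Sarfaraz",
    "Date: Tuesday, April 05, 2011 5:34:54 PM",
    "Comment:",
    "Initial add, all objects.",
    "Items:",
    "add $/Systems/DB/Expences/Loader",
    "add $/Systems/DB/Expences/Loader/AAA.txt",
    "Check-in Notes:",
]
for row in process(sample):
    print(row)
```

To run it over the real file, pass the open file object straight to process, e.g. `with open("ctl_Files.txt") as f: rows = process(f)`.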
You can use a regular expression to search for 'add', 'edit' etc.
import re

#Tags - used for splitting the information
tag1 = 'Changeset:'
tag2 = 'User:'
tag3 = 'Date:'
tag4 = 'Comment:'
tag5 = 'Items:'
tag6 = 'Check-in Notes:'

#opening and reading the input file
with open("wibble.txt", "r") as myfile:
    val = myfile.read().replace('\n', ' ')

#counting the occurrences of any one of the above tags
#(the count is the same for all the tags)
occurrence = val.count(tag1)

#initializing the row variable
row = ""
prevlen = 0

#looping once per changeset
for count in range(1, occurrence + 1):
    row += ((val.split(tag1)[count].split(tag2)[0]).strip() + '|'
            + (val.split(tag2)[count].split(tag3)[0]).strip() + '|'
            + (val.split(tag3)[count].split(tag4)[0]).strip() + '|'
            + (val.split(tag4)[count].split(tag5)[0]).strip() + '|')
    distance = len(row) - prevlen
    row += re.sub(r"\s\s+(edit|add|delete|rename)",
                  "\n" + " " * distance + r"\1",
                  val.split(tag5)[count].split(tag6)[0]) + '\n'
    prevlen = len(row)

#opening and writing the output file
with open("wobble.txt", "w+") as outfile:
    outfile.write(row)
I'd like to use Python to read in a list of directories and store data in variables based on a template such as /home/user/Music/%artist%/[%year%] %album%.
An example would be:
artist, year, album = None, None, None
template = "/home/user/Music/%artist%/[%year%] %album%"
path = "/home/user/Music/3 Doors Down/[2002] Away From The Sun"

if text == "%artist%":
    artist = key
if text == "%year%":
    year = key
if text == "%album%":
    album = key

print(artist)
# 3 Doors Down
print(year)
# 2002
print(album)
# Away From The Sun
I can do the reverse easily enough with str.replace("%artist%", artist), but how can I extract the data?
If your folder-structure template is reliable, the following should work without the need for regular expressions.
path = "/home/user/Music/3 Doors Down/[2002] Away From The Sun"
path_parts = path.split("/") # divide up the path into array by slashes
print(path_parts)
artist = path_parts[4] # get element of array at index 4
year = path_parts[5][1:5] # get characters at indexes 1-4 (inside the brackets) of the element at index 5
album = path_parts[5][7:] # everything after "[2002] " (the first 7 characters)
print(artist)
# 3 Doors Down
print(year)
# 2002
print(album)
# Away From The Sun
# to put the path back together again using an F-string (No need for str.replace)
reconstructed_path = f"/home/user/Music/{artist}/[{year}] {album}"
print(reconstructed_path)
output:
['', 'home', 'user', 'Music', '3 Doors Down', '[2002] Away From The Sun']
3 Doors Down
2002
Away From The Sun
/home/user/Music/3 Doors Down/[2002] Away From The Sun
The following works for me:
from difflib import SequenceMatcher

def extract(template, text):
    seq = SequenceMatcher(None, template, text, True)
    return [text[c:d] for tag, a, b, c, d in seq.get_opcodes() if tag == 'replace']
template = "home/user/Music/%/[%] %"
path = "home/user/Music/3 Doors Down/[2002] Away From The Sun"
artist, year, album = extract(template, path)
print(artist)
print(year)
print(album)
Output:
3 Doors Down
2002
Away From The Sun
Each template placeholder can be any single character as long as the character is not present in the value to be returned.
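For comparison, here is a regex-based sketch of the same idea (my addition, assuming Python 3.7+, where re.escape leaves % unescaped): turn each %name% placeholder into a named capture group and match the path against the result.

```python
import re

def template_to_regex(template):
    # Escape the literal parts of the template, then turn each
    # %name% placeholder into a greedy named capture group.
    escaped = re.escape(template)
    pattern = re.sub(r"%(\w+)%", r"(?P<\1>.+)", escaped)
    return re.compile(pattern + r"$")

template = "/home/user/Music/%artist%/[%year%] %album%"
path = "/home/user/Music/3 Doors Down/[2002] Away From The Sun"
match = template_to_regex(template).match(path)
print(match.group("artist"))  # 3 Doors Down
print(match.group("year"))    # 2002
print(match.group("album"))   # Away From The Sun
```

Unlike the SequenceMatcher version, the placeholders keep their names, so the values can be looked up by name instead of by position.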
I'm making a website. I write all the data I get from the database as a list. I want to make a filter that returns only the data from the last hour.
@app.route('/task/list/birsaat', methods=['GET'])
def get_birsaat():
    birsaat_tasks = tasks_collection.find({"zaman": "zaman.utcnow()-timedelta(hours=1)"})
    task_list_birsaat = []
    for rss_collection in birsaat_tasks:
        task_list_birsaat.append({'baslik': rss_collection['baslik'],
                                  'kisa_bilgi': rss_collection['kisa_bilgi'],
                                  'link': rss_collection['link'],
                                  'zaman': rss_collection['zaman'],
                                  'saglayici': rss_collection['saglayici']})
    response_birsaat = jsonify(task_list_birsaat)
    response_birsaat.headers.add('Access-Control-Allow-Origin', '*')
    return response_birsaat
zaman means time in Turkish. My database data:
_id:5eff873b4f9b5e349c14bc91
baslik:"KKTC’ye gelen tüm yolculara 1 gün karantina şartı getirildi"
kisa_bilgi:"haberler"
zaman:"Fri, 03 Jul 2020 21:25:00 +0300"
saglayici:"sabah"
If you want to find results from the last hour until now, you should use $gte in your query. If zaman is a string, you should convert your target value to the same format you saved it in. To get a value like Fri, 03 Jul 2020 21:25:00 +0300 you can start from this:
(datetime.utcnow() - timedelta(hours=1)).ctime()
the output is:
'Sat Jul  4 06:39:30 2020'
To add the comma to the above result you can use this:
dt = (datetime.utcnow() - timedelta(hours=1)).ctime()
spl = dt.split(' ')
res = spl[0] + ',' + ' ' + spl[3] + spl[2] + ' ' + spl[-1] + ' ' + spl[4] + ' ' + '+0300'
the result is:
'Sat, 4 2020 06:39:30 +0300'
Then you should filter MongoDB by this value.
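A more direct way to build such a threshold string (my addition, a sketch): datetime.strftime can produce the stored layout in one call, with the +0300 offset assumed fixed, as in the sample document. Note that comparing these strings with $gte is still lexicographic, not chronological ('Fri...' sorts before 'Sat...'), so the more robust fix is to store real datetime values and query with {'zaman': {'$gte': some_datetime}}.

```python
from datetime import datetime, timedelta, timezone

def last_hour_threshold(now=None):
    # Produce the same layout as the stored strings, e.g.
    # "Fri, 03 Jul 2020 21:25:00 +0300", one hour in the past.
    tz = timezone(timedelta(hours=3))  # assumed fixed +0300 offset
    if now is None:
        now = datetime.now(tz)
    return (now - timedelta(hours=1)).strftime("%a, %d %b %Y %H:%M:%S %z")

print(last_hour_threshold())
```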
I'm trying to parse a string from a network-communication log, which looks like this:
2019 Jun 30 15:40:17.561 NETWORK_MESSAGE
Direction = UE_TO_NETWORK
From: <1106994972>
To: <3626301680>
and here is my code:
import re

log = '2019 Jun 30 15:40:17.561 NETWORK_MESSAGE\r\nDirection = UE_TO_NETWORK\r\nFrom: <1106994972>\r\nTo: <3626301680>\r\n'
PATTERN = re.compile(
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}).*'  # Time
    r'Direction = (?P<Direction>\S+).*'      # Direction
    r'From: <(?P<From>\S+)>.*'               # From
    r'To: <(?P<To>\S+)>',                    # To
    re.DOTALL)
results = PATTERN.search(log)
print(results.group('From'))
However, I found that sometimes the positions of "From" and "To" are reversed, like the following:
2019 Jun 30 15:40:16.548 NETWORK_MESSAGE
Direction = NETWORK_TO_UE
To: <3626301680>
From: <1106994972>
Is it possible I can solve this with only one pattern?
Here is a solution that uses (From|To) to match either From or To and then explicitly checks which of the two places matched From:
import re

log1 = '2019 Jun 30 15:40:17.561 NETWORK_MESSAGE\r\nDirection = UE_TO_NETWORK\r\nFrom: <1106994972>\r\nTo: <3626301680>\r\n'
log2 = '2019 Jun 30 15:40:17.561 NETWORK_MESSAGE\r\nDirection = UE_TO_NETWORK\r\nTo: <3626301680>\r\nFrom: <1106994972>\r\n'
PATTERN = re.compile(
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}).*'  # Time
    r'Direction = (?P<Direction>\S+).*'      # Direction
    r'(?P<tag1>From|To): <(?P<val1>\S+)>.*'  # From or To
    r'(?P<tag2>From|To): <(?P<val2>\S+)>',   # From or To
    re.DOTALL)
for log in [log1, log2]:
    results = PATTERN.search(log)
    if results.group('tag1') == 'From':
        print(results.group('val1'))
    elif results.group('tag2') == 'From':
        print(results.group('val2'))
This matches your lines but does not make sure there is exactly one From and one To.
I also considered this pattern
PATTERN = re.compile(
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}).*'                  # Time
    r'Direction = (?P<Direction>\S+).*'                      # Direction
    r'(?P<FromTo>(?P<tag1>From|To): <(?P<val1>\S+)>.*){2}',  # From or To, twice
    re.DOTALL)
but this will only capture the last match in From and To (according to the docs "If a group is contained in a part of the pattern that matched multiple times, the last match is returned."). So if the two fields appear in the wrong order then you will not be able to get the value for From.
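The limitation the docs describe is easy to demonstrate in isolation (my own minimal example):

```python
import re

# A group inside a repeated sub-pattern only keeps its final match,
# so after the {2} repetitions, tag/val hold the second pair.
m = re.search(r"(?:(?P<tag>From|To): <(?P<val>\d+)>\s*){2}",
              "To: <3626301680> From: <1106994972>")
print(m.group("tag"), m.group("val"))  # From 1106994972
```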
If things get more complicated you may have more readable code by using more than one pattern.
log1 = "2019 Jun 30 15:40:17.561 NETWORK_MESSAGE\r\nDirection = UE_TO_NETWORK\r\nFrom: <1106994972>\r\nTo: <3626301680>\r\n"
log2 = "2019 Jun 30 15:40:16.548 NETWORK_MESSAGE\r\nDirection = NETWORK_TO_UE\r\nTo: <3626301680>\r\nFrom: <1106994972>\r\n"
PATTERN = re.compile(
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}).*'  # Time
    r'Direction = (?P<Direction>\S+).*'      # Direction
    r'(From|To): <(?P<X>\S+)>.*'
    r'(To|From): <(?P<Y>\S+)>',
    re.DOTALL)
print(re.findall(PATTERN, log1))
print(re.findall(PATTERN, log2))
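Taking the more-than-one-pattern idea literally (my sketch): because the From and To lines are independent, two small searches sidestep the ordering problem entirely.

```python
import re

FROM_RE = re.compile(r"From: <(\S+)>")
TO_RE = re.compile(r"To: <(\S+)>")

def parse_from_to(log):
    # Each field is searched for independently, so the order of
    # the From/To lines in the log no longer matters.
    return FROM_RE.search(log).group(1), TO_RE.search(log).group(1)

log1 = '2019 Jun 30 15:40:17.561 NETWORK_MESSAGE\r\nDirection = UE_TO_NETWORK\r\nFrom: <1106994972>\r\nTo: <3626301680>\r\n'
log2 = '2019 Jun 30 15:40:16.548 NETWORK_MESSAGE\r\nDirection = NETWORK_TO_UE\r\nTo: <3626301680>\r\nFrom: <1106994972>\r\n'
for log in (log1, log2):
    print(parse_from_to(log))  # ('1106994972', '3626301680') both times
```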
I have a google spreadsheet and it has 31 tabs(the 31 days). What I want to do is to use my code to reformat the data (which I have solved), but I can't figure out how to use a for loop to apply the code to all 31 tabs/days. Since each tab is one day of the month, I want the code to go to the first tab, apply the code, and then jump to the next tab and apply the same code. I want this process to go on until it finishes with all 31 tabs.
Below is the code that I have tried, but it doesn't seem to work. I also have tried selecting multiple sheets and trying to select the google sheets' tab as the day, but this doesn't seem to be possible.
Jan = gc.open_by_url('with held for privacy reasons')
Jan = Jan.worksheet('01')
#for worksheet in Jan.worksheet:
#while Jan.worksheet is not 31:
if Jan.worksheet != 31:
    Jan = get_as_dataframe(Jan)
    Jan = pd.DataFrame(Jan)
    day_month = Jan.worksheet
    new_header = Jan.iloc[0]
    Jan = Jan[1:]
    Jan.columns = new_header
    col_list = ['Time', 'Roof(in)', 'East(in)', 'West(in)', 'North(in)', 'Roof(out)', 'East(out)', 'West(out)', 'North(out)']
    Jan = Jan[col_list]
    Jan = Jan.dropna(axis=0, how='all')
    Jan = Jan[:-2]
    Jan.columns = ['DateTime', 'Business_Location_In', 'East_Location_In', 'West_Location_In', 'North_Location_In',
                   'Business_Location_Out', 'East_Location_Out', 'West_Location_Out', 'North_Location_Out']
    Jan['DateTime'] = Jan['DateTime'].str.slice(6)
    Jan['DateTime'] = pd.to_datetime('2019-01- ' + worksheet + Jan['DateTime'])
    for filename in Jan:
        Jan['Jan' + day_month] = filenames
    while Jan.worksheet() < 31:
        Jan = Jan.worksheet(day_month + 1)
elif Jan.worksheet == 31:
    Jan = get_as_dataframe(Jan)
    Jan = pd.DataFrame(Jan)
    day_month = Jan.worksheet
    new_header = Jan.iloc[0]
    Jan = Jan[1:]
    Jan.columns = new_header
    col_list = ['Time', 'Roof(in)', 'East(in)', 'West(in)', 'North(in)', 'Roof(out)', 'East(out)', 'West(out)', 'North(out)']
    Jan = Jan[col_list]
    Jan = Jan.dropna(axis=0, how='all')
    Jan = Jan[:-2]
    Jan.columns = ['DateTime', 'Business_Location_In', 'East_Location_In', 'West_Location_In', 'North_Location_In',
                   'Business_Location_Out', 'East_Location_Out', 'West_Location_Out', 'North_Location_Out']
    Jan['DateTime'] = Jan['DateTime'].str.slice(6)
    Jan['DateTime'] = pd.to_datetime('2019-01- ' + worksheet + Jan['DateTime'])
    for filename in Jan:  # this sets the file name to Jan and the day of month
        Jan['Jan' + day_month] = filenames
print(filenames)
One error that I received is: AttributeError: 'Worksheet' object has no attribute 'worksheet'. I don't know what this means. Overall, I just can't figure out how to apply the code to all tabs and then get a list of all tab names. This code doesn't need to stay the same; if someone can get this to work by rewriting all of it, I'm all for that. The date column should end up as year-month-day hour(military):minute:second.
import os, sys
import os.path, time

path = os.getcwd()

def file_info(directory):
    file_list = []
    for i in os.listdir(directory):
        a = os.stat(os.path.join(directory, i))
        file_list.append([i, time.ctime(a.st_atime), time.ctime(a.st_ctime)])  # [file, most_recent_access, created]
    return file_list

print file_info(path)
Problem:
How can I show each list item on a new line, in a nice format?
How can I sort the file/directory list by last-modified date?
How can I sort the file/directory list by creation date?
Here is the program with some nice printing using the format function:
import os
import time

path = os.getcwd()

def file_info(directory):
    file_list = []
    for i in os.listdir(directory):
        a = os.stat(os.path.join(directory, i))
        file_list.append([i, time.ctime(a.st_atime), time.ctime(a.st_ctime)])  # [file, most_recent_access, created]
    return file_list

file_list = file_info(path)
for item in file_list:
    line = "Name: {:<20} | Last Accessed: {:>20} | Date Created: {:>20}".format(item[0], item[1], item[2])
    print(line)
Here is some code that builds the list already ordered by access time. The code is not optimized, but it is very readable and you should be able to understand it.
import os
import time

path = os.getcwd()

def file_info(directory, sortLastModifiedOrNaw=False):
    file_list = []
    sort_keys = []  # access times, kept in the same order as file_list
    for i in os.listdir(directory):
        a = os.stat(os.path.join(directory, i))
        entry = [i, time.ctime(a.st_atime), time.ctime(a.st_ctime)]  # [file, most_recent_access, created]
        if sortLastModifiedOrNaw:  # If you would like to sort
            # Walk to the first position holding a larger access time and
            # insert before it, so the list stays ordered (an insertion sort)
            pos = 0
            while pos < len(sort_keys) and sort_keys[pos] <= a.st_atime:
                pos += 1
            sort_keys.insert(pos, a.st_atime)
            file_list.insert(pos, entry)
        else:  # If you would not like to sort
            file_list.append(entry)
    return file_list

file_list = file_info(path)
print("Unsorted Example")
for item in file_list:
    line = "Name: {:<20} | Date Last Accessed: {:>20} | Date Created: {:>20}".format(item[0], item[1], item[2])
    print(line)

print("\nSorted example using last modified time")
file_list = file_info(path, sortLastModifiedOrNaw=True)
for item in file_list:
    line = "Name: {:<20} | Date Last Accessed: {:>20} | Date Created: {:>20}".format(item[0], item[1], item[2])
    print(line)
Sample output:
Unsorted Example
Name: .idea | Date Last Accessed: Sun Jan 3 21:13:45 2016 | Date Created: Sun Jan 3 21:13:14 2016
Name: blahblah.py | Date Last Accessed: Sun Jan 3 21:13:48 2016 | Date Created: Sun Jan 3 21:13:48 2016
Name: testhoe1.py | Date Last Accessed: Sun Jan 3 19:09:57 2016 | Date Created: Sun Jan 3 18:52:06 2016
Sorted example using last modified time
Name: testhoe1.py | Date Last Accessed: Sun Jan 3 19:09:57 2016 | Date Created: Sun Jan 3 18:52:06 2016
Name: .idea | Date Last Accessed: Sun Jan 3 21:13:45 2016 | Date Created: Sun Jan 3 21:13:14 2016
Name: blahblah.py | Date Last Accessed: Sun Jan 3 21:13:48 2016 | Date Created: Sun Jan 3 21:13:48 2016
Happy optimizing! If you change st_atime to st_ctime in the sorting branch, it will sort based on create time; st_mtime gives the last-modified time.
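For completeness, a more idiomatic sketch (my addition) lets list.sort do the ordering via a key function; it also answers the last-modified question directly, since st_mtime is the modification time (st_atime above is the access time):

```python
import os
import time

def file_info_sorted(directory, stat_attr="st_mtime"):
    # Collect (name, stat) pairs once, then sort by the requested stat
    # attribute: st_mtime = modified, st_ctime = created/changed,
    # st_atime = accessed.
    entries = [(name, os.stat(os.path.join(directory, name)))
               for name in os.listdir(directory)]
    entries.sort(key=lambda pair: getattr(pair[1], stat_attr))
    return [[name, time.ctime(st.st_atime), time.ctime(st.st_ctime)]
            for name, st in entries]

for item in file_info_sorted(os.getcwd()):
    print("Name: {:<20} | Last Accessed: {:>24} | Created: {:>24}".format(*item))
```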