Someone has challenged me to create a program that sorts their pictures into folders based on the month they were taken, and I want to do it in one line (I know, it's inefficient and unreadable, but I still want to do it because one-liners are cool)
I needed a for loop to accomplish this, and the only way I know of to fit a for loop on one line is a list comprehension, so that's what I did. But it creates an empty list, and doesn't print anything from the list.
What I'm doing is renaming the file to be the month created + original filename (ex: bacon.jpg --> May\bacon.jpg)
Here is my code (Python 3.7.3):
import time
import os.path
[os.rename(str(os.fspath(f)), str(time.ctime(os.path.getctime(str(os.fspath(f))))).split()[1] + '\\' + str(os.fspath(f))) for f in os.listdir() if f.endswith('.jpg')]
and the more readable, non-list-comprehension version:
import time
import os.path
for f in os.listdir():
    fn = str(os.fspath(f))
    dateCreated = str(time.ctime(os.path.getctime(fn)))
    monthCreated = dateCreated.split()[1]
    os.rename(fn, monthCreated + '\\' + fn)
Is list comprehension a bad way to do it? Also, is there a reason why, if I print the list, it's [] instead of [None, None, None, None, None, (continuing "None"s for every image moved)]?
Please note: I understand that it's inefficient and bad practice. If I were doing this for purposes other than just for fun to see if I could do it, I would obviously not try to do it in one line.
This is bad in two immediate respects:
You're using a list comprehension when you're not actually interested in constructing a list -- you ignore the object you just constructed.
Your construction has an ugly side effect in the OS.
Your purpose appears to be renaming a sequence of files, not constructing a list. The Python facility you want is, I believe, the map function. Write a function to change one file name, and then use map on a list of file names -- or tuples of old, new file names -- to run through the sequence of desired changes.
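For instance, a minimal sketch of that idea (move_to_month is a hypothetical helper name, not from the question; note that map is lazy in Python 3, so the list() call is what actually forces the renames to run):

import os
import time

def move_to_month(fname):
    # Determine the creation month and move the file into a folder named after it.
    month = time.ctime(os.path.getctime(fname)).split()[1]
    os.makedirs(month, exist_ok=True)  # the target folder must exist before renaming
    os.rename(fname, os.path.join(month, fname))

# map() is lazy in Python 3; wrapping it in list() forces every rename to execute.
list(map(move_to_month, [f for f in os.listdir() if f.endswith('.jpg')]))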
Is list comprehension a bad way to do it?
YES. But if you want to do it in one line, it is either that or using ";". For instance:
for x in range(5): print(x);print(x+2)
And, by the way, just renaming a file including a slash will not create a folder. You have to use os.mkdir('foldername').
In the end, if you really want to do that, I would just recommend doing it normally in many lines and then separating it with semicolons in a single line.
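A sketch of that semicolon idea (the imports still need their own lines; a for suite can hold semicolon-separated simple statements, and this version also creates each month folder before renaming):

import os
import time
for f in [f for f in os.listdir() if f.endswith('.jpg')]: m = time.ctime(os.path.getctime(f)).split()[1]; os.makedirs(m, exist_ok=True); os.rename(f, os.path.join(m, f))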
Related
Why am I seeing extra ] characters in the output of a list construction that should contain just a list of lists? Is this a terminal problem (using CoCalc's terminal)?
In particular, the output should have just two levels of lists: the outer list and the sublists inside it.
But when I read through the output of the data in a Python interpreter in CoCalc's terminal, I see this kind of thing:
Notice the extra ], as if there were inner lists that should not exist. Also notice the numbering, which seems out of order even though it is ordered in the data.
What's happening here?
To reconstruct the problem:
Download the dorothea_valid.data file from here:
https://archive.ics.uci.edu/ml/machine-learning-databases/dorothea/DOROTHEA/
Then create a project in CoCalc (https://cocalc.com/). Upload dorothea_valid.data to that project.
Start a Linux terminal in CoCalc, and make sure you know the path/working directory so that you can find dorothea_valid.data from Python. In the Linux terminal start the Python interpreter by writing python.
Paste the following function meant for reading a file with sequences of integer values separated by "\n" to the interpreter:
def read_datafile(fname):
    data = list()
    with open(fname, 'r') as file:
        for line in file:
            data.append([int(i) for i in line.split()])
    return data
# and then call print(read_datafile(fname)) to get the output.
Then call read_datafile() on dorothea_valid.data and print the resulting object as suggested in the comment above. The screen-captured lines appear when scrolling right to the bottom, but the problem may be visible in other parts of the output as well.
EDIT:
It's now 10/08/2022 and I'm unable to see the problem. Maybe it has been fixed in CoCalc.
You are creating inner lists. You're using one list comprehension per line of the file so it's making one list of integers per line. If you want it all as one list, use extend rather than append:
for line in file:
    data.extend(int(i) for i in line.split())
Notice I'm using a generator expression here rather than a list comprehension. A list comprehension would be wasteful because it creates the whole list in memory only to be read through once and then discarded.
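A quick illustration of the difference between the two:

data = []
data.append([1, 2])
data.append([3, 4])
print(data)   # [[1, 2], [3, 4]] -- nested lists, one per call

data = []
data.extend([1, 2])
data.extend([3, 4])
print(data)   # [1, 2, 3, 4] -- one flat list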
Right now, I'm basically running through an excel sheet.
I have about 20 names and then I have 50k total values that match to one of those 20 names, so the excel sheet is 50k rows long, column B showing any random value, and column A showing one of the 20 names.
I'm trying to get a string for each of the names that show all of the values.
Name A: 123,244,123,523,123,5523,12505,142... etc etc.
Name B: 123,244,123,523,123,5523,12505,142... etc etc.
Right now, I created a dictionary that runs through the excel sheet and checks if the name is already in the dictionary; if it is, it does a
strA = strA + "," + foundValue
Then it inserts strA back into the dictionary for that particular name. If the name doesn't exist, it creates that dictionary key and then adds that value to it.
Now, this was working well at first, but it's been about 15 or 20 minutes and it has only added 5k values to the dictionary so far, and it seems to get slower as it keeps running.
I wonder if there is a better or faster way to do this. I was thinking of building a new dictionary every 1k values and then combining them all at the end, but that would be 50 dictionaries total and it sounds complicated (although maybe not; I'm not sure). In any case, the current approach is not working.
I DO need the string that shows each value with a comma between each value. That is why I am doing the string thing right now.
There are a number of things that are likely causing your program to run slowly.
String concatenation in python can be extremely inefficient when used with large strings.
Strings in Python are immutable. This fact frequently sneaks up and bites novice Python programmers on the rump. Immutability confers some advantages and disadvantages. In the plus column, strings can be used as keys in dictionaries and individual copies can be shared among multiple variable bindings. (Python automatically shares one- and two-character strings.) In the minus column, you can't say something like, "change all the 'a's to 'b's" in any given string. Instead, you have to create a new string with the desired properties. This continual copying can lead to significant inefficiencies in Python programs.
Considering each string in your example could contain thousands of characters, each time you do a concatenation, python has to copy that giant string into memory to create a new object.
This would be much more efficient:
strings = []
strings.append('string')
strings.append('other_string')
...
','.join(strings)
In your case, instead of each dictionary key storing a massive string, it should store a list, and you would just append each match to the list, and only at the very end would you do a string concatenation using str.join.
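A minimal sketch of that list-then-join approach (the names and sample rows are hypothetical stand-ins for the real excel data):

from collections import defaultdict

# 'rows' stands in for however you iterate over the sheet's (name, value) pairs.
rows = [('Name A', '123'), ('Name B', '244'), ('Name A', '523')]

values_by_name = defaultdict(list)
for name, value in rows:
    values_by_name[name].append(value)   # appending to a list is cheap

# Join each list exactly once, at the very end.
result = {name: ','.join(values) for name, values in values_by_name.items()}
print(result)   # {'Name A': '123,523', 'Name B': '244'}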
In addition, printing to stdout is also notoriously slow. If you're printing to stdout on each iteration of your massive 50,000 item loop, each iteration is being held up by the unbuffered write to stdout. Consider only printing every nth iteration, or perhaps writing to a file instead (file writes are normally buffered) and then tailing the file from another terminal.
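For example, a sketch of the every-nth-iteration idea (the loop body is a placeholder for the real work):

rows = range(50000)          # stand-in for the real 50k rows

for i, row in enumerate(rows):
    # ... do the real per-row work here ...
    if i % 1000 == 0:
        print('processed', i, 'rows')   # report progress once per 1000 iterations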
This answer is based on OP's answer to my comment. I asked what he would do with the dict, suggesting that maybe he doesn't need to build it in the first place. @simon replies:
I add it to an excel sheet, so I take the KEY, which is the name, and put it in A1; then I take the VALUE, which is 1345,345,135,346,3451,35.. etc etc, and put that into A2. Then I do the rest of my programming with that information... but I need those values separated by commas and accessible inside that excel sheet like that!
So it looks like the dict doesn't have to be built after all. Here is an alternative: for each name, create a file, and store those files in a dict:
files = {}
name = 'John' # let's say
if name not in files:
    files[name] = open(name, 'w')
Then when you loop over the 50k-row excel, you do something like this (pseudo-code):
for row in 50k_rows:
    name, value_string = row.split() # or whatever
    file = files[name]
    file.write(value_string + ',') # if it already ends with ',', no need to add one
Since your value_string is already comma separated, your file will be csv-like without any further tweaking on your part (except maybe you want to strip the last trailing comma after you're done). Then when you need the values, say, of John, just value = open('John').read().
Now I've never worked with 50k-row excels, but I'd be very surprised if this is not quite a bit faster than what you currently have. Having persistent data is also (well, maybe) a plus.
EDIT:
The above is a file-based, memory-friendly solution. Writing to files is much slower than appending to lists (but probably still faster than recreating many large strings). But if the lists are huge (which seems likely) and you run into a memory problem (not saying you will), you can try the file approach.
An alternative, similar to lists in performance (at least for the toy test I tried), is to use StringIO:
from io import StringIO  # Python 2: from StringIO import StringIO
string_ios = {'John': StringIO()} # a dict to store StringIO objects
for value in ['ab', 'cd', 'ef']:
    string_ios['John'].write(value + ',')
print(string_ios['John'].getvalue())
This will output 'ab,cd,ef,'
Instead of building a string that looks like a list, use an actual list and make the string representation you want out of it when you are done.
The proper way is to collect in lists and join at the end, but if for some reason you want to use strings, you can speed up the string extensions: pop the string out of the dict so that there's only one reference to it, which lets CPython's in-place string concatenation optimization kick in.
Demo:
>>> timeit('s = d.pop(k); s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
0.8417842664330237
>>> timeit('s = d[k]; s = s + "y"; d[k] = s', 'k = "x"; d = {k: ""}')
294.2475278390723
It depends on how you have read the excel file, but let's say that lines are read as delimiter-separated tuples or something:
d = {}
for name, foundValue in line_tuples:
    try:
        d[name].append(foundValue)
    except KeyError:
        d[name] = [foundValue]
d = {k: ",".join(v) for k, v in d.items()}
Alternatively using pandas:
import pandas as pd
df = pd.read_excel("some_excel_file.xlsx")
d = df.groupby("A")["B"].apply(lambda x: ",".join(x)).to_dict()
I have inherited some Python scripts and I'm working to understand them. I am a beginner-level Python programmer but very experienced in several other scripting languages.
The following Python code snippet generates a file list which is then used in a later code block. I would like to understand exactly how it is doing it. I understand that os.path.isfile is a test for filetype and os.path.join combines the arguments in to a filepath string. Could someone help me understand the rest?
flist = [file for file in whls if os.path.isfile(os.path.join(whdir, i, file))]
whls is an iterable of some kind.
For each element in whls, it checks if os.path.join(whdir, i, that_element) is a file.
(os.path.join("C:\\", "users", "adsmith") on Windows is "C:\\users\\adsmith"; note the drive letter needs its own separator, since os.path.join("C:", "users") gives "C:users")
If so, it includes it in that list.
As @jonsharpe posted in the comments, this is an example of a list comprehension, which is well worth your time to master.
The list comprehension means that python will iterate over each member of whls (this is maybe a tuple/list?), and for each item, it will test whether os.path.join(whdir, i, file) is a file (as opposed to a directory etc). It will return a list containing only the elements from whls that pass this condition check.
This list comprehension is equivalent to the following loop:
flist = []
for file in whls:
    if os.path.isfile(os.path.join(whdir, i, file)):
        flist.append(file)
The list comprehension is more compact. Performance-wise, they are similar, with the list comprehension being a little faster because it doesn't have to look up and call the append() method on each iteration.
I have two files, say source and target. I compare each element in source to check if it also exists in target. If it does not exist in target, I print it ( the end goal is to have 0 difference). Here is the code I have written.
def finddefaulters(source, target):
    f = open(source, 'r')
    g = open(target, 'r')
    reference = f.readlines()
    done = g.readlines()
    for i in reference:
        if i not in done:
            print i,
I need help with:
1) How would this code be rated on a scale of 1-10?
2) How can I make it better and optimal if the file sizes are huge?
Another question: when I read all the lines as list elements, they are interpreted as 'element\n', so for a correct comparison I have to add a newline at the end of each file. Is there a way to strip the newlines so I do not have to add a newline at the end of the files? I tried rstrip, but it did not work.
Thanks in advance.
Regarding efficiency: the method you show has an asymptotic runtime complexity of O(m*n), where m and n are the number of elements in reference and done, i.e. if you double the size of both lists, the algorithm will run 4 times longer (times a fixed constant that is uninteresting to theoretical computer scientists). If m and n are very large, you will probably want to choose a faster algorithm, e.g. sort the two lists first using .sort() (runtime complexity: O(n * log(n))) and then go through them in a single pass (runtime complexity: O(n)). That algorithm has a worst-case runtime complexity of O(n * log(n)), which is already a big improvement. However, you trade readability and simplicity of the code for efficiency, so I would only advise you to do this if absolutely necessary.
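If you do go that route, here is a minimal sketch of the sort-then-scan idea (in Python 3 syntax, assuming reference and done are lists of comparable items; find_missing is a hypothetical name):

def find_missing(reference, done):
    # Sort both lists: O(m log m) + O(n log n).
    reference = sorted(reference)
    done = sorted(done)
    missing = []
    j = 0
    for item in reference:
        # Advance through 'done' until we reach or pass 'item': O(m + n) overall.
        while j < len(done) and done[j] < item:
            j += 1
        if j == len(done) or done[j] != item:
            missing.append(item)
    return missing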
Regarding coding style: you do not .close() the file handles, which you should. Instead of opening and closing the file handles manually, you could use Python's with construct. Also, if you like the functional style, you could replace the for loop with a list comprehension:
for i in reference:
    if i not in done:
        print i,
then becomes:
items = [i.strip() for i in reference if i not in done]
print ' '.join(items)
However, this way you will not see any progress while the list is being composed.
As joaquin already mentions, you can loop over f directly instead of f.readlines() as file handles support the iterator protocol.
Some ideas:
1) Use with to open files safely:
with open(source) as f:
.............
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2) you can iterate over the lines of a file instead of using readlines:
for line in f:
..........
3) Although for this short snippet it could be enough, try to use more informative names for your variables. One-letter names are not recommended.
4) If you want to take advantage of the Python standard library, try the functions in the difflib module. For example:
make_file(fromlines, tolines[, fromdesc][, todesc][, context][, numlines])
Compares fromlines and tolines (lists of strings) and returns a string which is a complete HTML file containing a table showing line by line differences with inter-line and intra-line changes highlighted.
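A minimal sketch of how that might look for the two files in the question (the file names are placeholders; make_file lives on the difflib.HtmlDiff class):

import difflib

with open('source') as f, open('target') as g:
    fromlines = f.readlines()
    tolines = g.readlines()

# make_file returns a complete HTML page showing the line-by-line differences.
html = difflib.HtmlDiff().make_file(fromlines, tolines, fromdesc='source', todesc='target')
with open('diff.html', 'w') as out:
    out.write(html)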
So let's say I'm using Python's ftplib to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) inside a list? See the link above for example output.
Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names.
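A minimal sketch (the host and login details are hypothetical):

from ftplib import FTP

ftp = FTP('ftp.example.com')  # hypothetical server
ftp.login()                   # anonymous login
filenames = ftp.nlst()        # returns a plain list of file names
print(filenames)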
The best answer:
You may want to use ftp.nlst() instead of ftp.retrlines(). It will give you exactly what you want.
If you can't, read the following:
Generators for sysadmin processes
In his now-famous tutorial, Generator Tricks For Systems Programmers: An Introduction, David M. Beazley gives a lot of recipes for solving this kind of data problem with quick and reusable code.
E.g.:
# empty list that will receive all the log entries
log = []
# we pass log.append as a callback so retrlines collects each line instead of printing it;
# we do that only because we cannot use something better than retrlines
ftp.retrlines('LIST', callback=log.append)
# we use rsplit because it is more efficient in our case: we only need the last field
files = (line.rsplit(None, 1)[1] for line in log)
# get your file list
files_list = list(files)
Why don't we generate the list immediately?
Well, because doing it this way offers you much more flexibility: you can apply any intermediate generator to filter the files before turning them into files_list. It's just like a pipe: add a line and you add a processing step, without overhead (since these are generators). And if you get rid of retrlines, it still works, and it's even better, because you never store the whole list even once.
EDIT: well, I read the comment on the other answer, and it says that this won't work if there is any space in the name.
Cool, this will illustrate why this method is handy. If you want to change something in the process, you just change one line. Swap:
files = (line.rsplit(None, 1)[1] for line in log)
and
# split the line, take every field from the 9th onward, then join them back with spaces
files = (' '.join(line.split()[8:]) for line in log)
OK, this may not be obvious here, but for huge batch-processing scripts, it's nice :-)
And a slightly less-optimal method, by the way, if you're stuck using retrlines() for some reason, is to pass a function as the second argument to retrlines(); it'll be called for each item in the list. So something like this (assuming you have an FTP object named 'ftp') would work as well:
filenames = []
ftp.retrlines('LIST', lambda line: filenames.append(line.split()[-1]))
The list 'filenames' will then be a list of the file names.
Is there any reason why ftplib.FTP.nlst() won't work for you? I just checked and it returns only names of the files in a given directory.
Since every filename in the output starts at the same column, all you have to do is get the position of the dot on the first line:
drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 .
Then slice the filename out of the other lines using the position of that dot as the starting index.
Since the dot is the last character on the line, you can use the length of the line minus 1 as the index. So the final code is something like this:
lines = []
ftp.retrlines('LIST', lines.append)  # collect each line via a callback; retrlines prints to stdout by default
filename_index = len(lines[0]) - 1
files = []
for line in lines:
    files.append(line[filename_index:])
If the FTP server supports the MLSD command, then please see section “single directory case” from that answer.
Use an instance (say ftpd) of the FTPDirectory class, call its .getdata method with a connected ftplib.FTP instance in the correct folder, and then you can:
directory_filenames= [ftpfile.name for ftpfile in ftpd.files]
I believe it should work for you.
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
NOTES -
Here I am making an assumption that you want the data in the program (as a list), not on the console.
each_file_detail is each line that is being produced by the program.
' '.join(each_file_detail.split())
replaces multiple spaces with a single space.