problem with moving files using os.rename - python

I have this block of code where I try to move all the files in a folder to a different folder.
import os
from os import listdir
from os.path import isfile, join

def run():
    print("Do you want to convert 1 file (0) or do you want to convert all the files in a folder(1)")
    oneortwo = input("")
    if oneortwo == "0":
        filepathonefile = input("what is the filepath of your file?")
        filepathonefilewithoutfullpath = os.path.basename(filepathonefile)
        newfolder = "C:/Users/EL127032/Documents/fileconvertion/files/" + filepathonefilewithoutfullpath
        os.rename(filepathonefile, newfolder)
    if oneortwo == "1":
        filepathdirectory = input("what is the filepath of your folder?")
        filesindirectory = [f for f in listdir(filepathdirectory) if isfile(join(filepathdirectory, f))]
        numberoffiles = len(filesindirectory)
        handlingfilenumber = 0
        while numberoffiles > handlingfilenumber:
            currenthandlingfile = filesindirectory[handlingfilenumber]
            oldpathcurrenthandling = filepathdirectory + "/" + currenthandlingfile
            futurepathcurrenhandlingfile = "C:/Users/EL127032/Documents/fileconvertion/files/" + currenthandlingfile
            os.rename(oldpathcurrenthandling, futurepathcurrenhandlingfile)
But when I run this it gives:
os.rename(oldpathcurrenthandling, futurepathcurrenhandlingfile)
FileNotFoundError: [WinError 2] System couldn't find the file: 'C:\Users\EL127032\Documents\Eligant - kopie\Klas 1\Stermodules\Basisbiologie/lopen (1).odt' -> 'C:/Users/EL127032/Documents/fileconvertion/files/lopen (1).odt'
Can someone help me please?

You are trying to move the same file twice.
The bug is in this part:
numberoffiles = len(filesindirectory)
handlingfilenumber = 0
while numberoffiles > handlingfilenumber:
    currenthandlingfile = filesindirectory[handlingfilenumber]
    oldpathcurrenthandling = filepathdirectory + "/" + currenthandlingfile
    futurepathcurrenhandlingfile = "C:/Users/EL127032/Documents/fileconvertion/files/" + currenthandlingfile
    os.rename(oldpathcurrenthandling, futurepathcurrenhandlingfile)
The first time you loop, handlingfilenumber will be 0, so you will move the 0-th file from your filesindirectory list.
Then you loop again, handlingfilenumber is still 0, so you try to move it again, but it is not there anymore (you moved it already on the first turn).
You forgot to increment handlingfilenumber. Add handlingfilenumber += 1 on a line after os.rename (still inside the loop) and you will be fine.
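For example (a minimal sketch reusing the names from your question, with only the increment added):

while numberoffiles > handlingfilenumber:
    currenthandlingfile = filesindirectory[handlingfilenumber]
    oldpathcurrenthandling = filepathdirectory + "/" + currenthandlingfile
    futurepathcurrenhandlingfile = "C:/Users/EL127032/Documents/fileconvertion/files/" + currenthandlingfile
    os.rename(oldpathcurrenthandling, futurepathcurrenhandlingfile)
    handlingfilenumber += 1  # move on to the next file so the same one is never moved twice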
while loops are more error-prone than simpler for loops, so I recommend using for loops when appropriate.
Here, you want to move each file, so a for loop suffices:
for filename in filesindirectory:
    oldpathcurrenthandling = filepathdirectory + "/" + filename
    futurepathcurrenhandlingfile = "C:/Users/EL127032/Documents/fileconvertion/files/" + filename
    os.rename(oldpathcurrenthandling, futurepathcurrenhandlingfile)
No need to use len, initialize a counter, increment it, get the n-th element, and so on. And it is fewer lines.
Three other things:
You could have found the cause of the problem yourself using debugging; there are plenty of resources online explaining how to do it. Just printing the name of the file about to be moved (oldpathcurrenthandling) would have shown it twice and let you notice the problem causing the OS error (see the short sketch after these points).
Your variable names are not very readable. Consider following the standard style guide for variable names (PEP 8) and standard jargon; for example, filepathonefilewithoutfullpath becomes filename, oldpathcurrenthandling becomes source_file_path (following the source/destination convention), and so on.
When you have an error, include the stack trace that Python gives you. It would have pointed directly at the second os.rename call; the first one (when you move only one file) does not contribute to the problem. It also helps in building a Minimal Reproducible Example.
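For instance (a sketch of the debugging suggestion in the first point above), adding a single print just before the rename inside the loop,

print("about to move:", oldpathcurrenthandling)

would have shown the same path on every iteration, making the stuck counter obvious.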

Related

How to convert a function to a recursive function

Hey guys, I don't know if I can ask this, but I'm working on the original files in Google Colab and I wrote a function that sums the sizes of all the files:
import os

def recls_rtsize(argpath):
    sumsize = 0
    for entry in os.scandir(argpath):
        path = argpath + '/' + entry.name
        size = os.path.getsize(path)
        sumsize += size
    return sumsize

print("total:", recls_rtsize('/var/log'))
But I need a way to make this function recursive, or some kind of formula or idea to convert a non-recursive function into a recursive one.
A recursive function is a function that calls itself. For example, if you are trying to calculate the total size of all files inside some directory, you can just loop through the files of that directory and sum up their sizes. If the directory you are checking has subdirectories, then you can add a condition: if an entry is a directory, call the function itself for that subdirectory.
In your case:
import os

def recls_rtsize(argpath):
    sumsize = 0
    for entry in os.scandir(argpath):
        if entry.is_dir():
            # this entry is a subdirectory, so call the function itself on it
            size = recls_rtsize(entry.path)
        else:
            path = argpath + '/' + entry.name
            size = os.path.getsize(path)
        sumsize += size
    return sumsize

print("total:", recls_rtsize('/var/log'))
For example, you could write a helper function to process it recursively, although I don't quite see the purpose:
import os

def recls_rtsize(argpath):
    def helper(dirs):
        if not dirs:
            return 0
        path = argpath + '/' + dirs[0].name
        size = os.path.getsize(path)
        return size + helper(dirs[1:])
    return helper(list(os.scandir(argpath)))

print("total:", recls_rtsize('testing_package'))
print("total:", recls_rtsize('testing_package'))
Explanation:
Let's say argpath contains several files:
argpath = [file1, file2, file3]
Then the function calls would be:
size(file1) + helper([file2, file3])   # we pass everything after the first element
size(file1) + size(file2) + helper([file3])
size(file1) + size(file2) + size(file3) + helper([])
There are no elements left, and we return 0 and start backtracking
size(file1) + size(file2) + size(file3) + 0
size(file1) + size(file2) + (size(file3) + 0)
size(file1) + (size(file2) + (size(file3) + 0))
(size(file1) + (size(file2) + (size(file3) + 0))) # our result
I hope it makes sense.
To iterate over files in sub-folders (I assume that this is your goal here) you can use os.walk().
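A minimal sketch of that approach, assuming the goal is still the total size of every file in the tree (the name walk_rtsize is just for illustration):

import os

def walk_rtsize(argpath):
    # os.walk does the recursion for us: it yields (directory path, subdirectory
    # names, file names) for every directory under argpath
    sumsize = 0
    for dirpath, dirnames, filenames in os.walk(argpath):
        for name in filenames:
            sumsize += os.path.getsize(os.path.join(dirpath, name))
    return sumsize

print("total:", walk_rtsize('/var/log'))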

Opening files from directory in specific order

I have a folder that contains around 500 images that I am rotating by a random angle from 0 to 360. The files are named 00i.jpeg where i = 0, then i = 1, and so on. For example I have an image named 009.jpeg, one named 0052.jpeg, and another one named 00333.jpeg. My code below works in the sense that it does rotate the images, but the files are not being read in the correct order.
I would think I would need some sort of stepping code chunk that starts at 0 and adds one each time, but I'm not sure where I would put that. os.listdir doesn't allow me to do that because (from my understanding) it just lists the files out. I tried using os.walk but then I cannot use cv2.imread; I receive SystemError: <built-in function imread> returned NULL without setting an error.
Any suggestions?
import cv2
import imutils
from random import randrange
import os

os.chdir("C:\\Users\\name\\Desktop\\training\\JPEG")
j = 0
for infile in os.listdir("C:\\Users\\name\\Desktop\\training\\JPEG"):
    filename = 'testing' + str(j) + '.jpeg'
    i = randrange(360)
    image = cv2.imread(infile)
    rotation_output = imutils.rotate_bound(image, angle=i)
    os.chdir("C:\\Users\\name\\Desktop\\rotate_test")
    cv2.imwrite("C:\\Users\\name\\Desktop\\rotate_test\\" + filename, rotation_output)
    os.chdir("C:\\Users\\name\\Desktop\\training\\JPEG")
    j = j + 1
print(infile)
000.jpeg
001.jpeg
0010.jpeg
00100.jpeg
...
Needs to be:
print(infile)
000.jpeg
001.jpeg
002.jpeg
003.jpeg
...
Get a list of files first, then use sort with key where the key is an integer version of the file name without extension.
files = os.listdir("C:\\Users\\name\\Desktop\\training\\JPEG")
files.sort(key=lambda x: int(x.split('.')[0]))
for infile in files:
    ...
Practical example:
files = ['003.jpeg','000.jpeg','001.jpeg','0010.jpeg','00100.jpeg','002.jpeg']
files.sort(key=lambda x:int(x.split('.')[0]))
print(files)
Output
['000.jpeg', '001.jpeg', '002.jpeg', '003.jpeg', '0010.jpeg', '00100.jpeg']
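Why this works: the key converts each name to a plain integer, so leading zeros are ignored and the comparison is numeric rather than alphabetical, for example:

int('003.jpeg'.split('.')[0])     # 3
int('0010.jpeg'.split('.')[0])    # 10
int('00100.jpeg'.split('.')[0])   # 100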

loop np.load until file-index exceeds index of available files

I wish to read in all files from a folder using np.load without specifying the total number of files in advance. Currently, after a few loops the index will run out of the range of available files, and the code will terminate.
index = 0
while True:
    a = np.load(file=filepath + 'c_l' + pc_output_layer + '_s0_p' + str(index) + '.npy')
    layer = np.append(layer, a)
    index += 1
How can I keep loading until an error occurs and then continue running the rest of the script? Thank you!
You could catch the exception and break out of the loop that way, but a more 'pythonic' way would be to loop over the filenames themselves, rather than using an index.
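For completeness, the exception-catching version would look roughly like this (a sketch reusing the variables from your loop, and assuming the failure surfaces as a FileNotFoundError):

index = 0
while True:
    try:
        a = np.load(file=filepath + 'c_l' + pc_output_layer + '_s0_p' + str(index) + '.npy')
    except FileNotFoundError:
        # no file with this index exists: stop loading and carry on with the script
        break
    layer = np.append(layer, a)
    index += 1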
The glob library allows you to find files matching a given pattern and return a list you can then iterate over.
E.g.:
import glob

files = glob.glob(filepath + 'c_l*.npy')
for f in files:
    a = np.load(file=f)
    layer = np.append(layer, a)
You could also simplify it further by creating the layers directly using a list comprehension.
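A rough sketch of that simplification (reusing filepath from the question; the .ravel() call mirrors what np.append without an axis argument was doing, i.e. collecting everything into one flat 1-D array):

import glob
import numpy as np

files = glob.glob(filepath + 'c_l*.npy')
# load each file once, then join all the arrays in a single step
layer = np.concatenate([np.load(f).ravel() for f in files])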

Python: Reading Data from Multiple CSV Files to Lists

I'm using Python 3.5 to move through directories and subdirectories to access csv files and fill arrays with data from those files. The first csv file the code encounters looks like this:
The code I have is below:
import matplotlib.pyplot as plt
import numpy as np
import os, csv, datetime, time, glob

gpheight = []
RH = []
dewpt = []
temp = []
windspd = []
winddir = []

dirpath, dirnames, filenames = next(os.walk('/strm1/serino/DATA'))

count2 = 0
for dirname in dirnames:
    if len(dirname) >= 8:
        try:
            dt = datetime.datetime.strptime(dirname[:8], '%m%d%Y')
            csv_folder = os.path.join(dirpath, dirname)
            for csv_file2 in glob.glob(os.path.join(csv_folder, 'figs', '0*.csv')):
                if os.stat(csv_file2).st_size == 0:
                    continue
                # create new arrays for each case
                gpheight.append([])
                RH.append([])
                temp.append([])
                dewpt.append([])
                windspd.append([])
                winddir.append([])
                with open(csv_file2, newline='') as f2_input:
                    csv_input2 = csv.reader(f2_input, delimiter=' ')
                    for j, row2 in enumerate(csv_input2):
                        if j == 0:
                            continue  # skip header row
                        # fill arrays created above
                        windspd[count2].append(float(row2[5]))
                        winddir[count2].append(float(row2[6]))
                        gpheight[count2].append(float(row2[1]))
                        RH[count2].append(float(row2[4]))
                        temp[count2].append(float(row2[2]))
                        dewpt[count2].append(float(row2[3]))
                count2 = count2 + 1
        except ValueError as e:
            pass
I have it set up to create a new array for each new csv file. However, when I print the third (temperature) column,
for n in range(0, len(temp)):
    print(temp[0][n])
it only partially prints that column of data:
-70.949997
-68.149994
-60.449997
-63.649994
-57.449997
-51.049988
-45.349991
-40.249985
-35.549988
-31.249985
-27.149994
-24.549988
-22.149994
-19.449997
-16.349976
-13.25
-11.049988
-8.949982
-6.75
-4.449982
-2.25
-0.049988
In addition, I believe a related problem is that when I simply do,
print(temp)
it prints the full structure (screenshot not reproduced here); the highlighted section is the part that belongs to this one csv file and should therefore be in one array. There are also additional empty arrays at the end that should not be there.
I have (not shown) a section of code before this that does the same thing but with different csv files, and that works as expected, separating each file's data into a new array, with no empty arrays. I appreciate any help!
The issue was my use of try and pass. All the files that matched my criteria were found, but some of those files had issues with how their contents were read, which caused the errors I was seeing later in the code. For anyone looking to use try and pass: make sure that you can safely pass on any exception that block of code may raise. Otherwise, it can cause problems later. You may still get an error if you don't pass on it, but that will force you to fix it appropriately instead of ignoring it.
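As a small illustration of that advice (a sketch only, not the original code; process_folder is a hypothetical stand-in for the csv parsing inside the loop), reporting the exception instead of silently passing makes the problematic folders visible right away:

for dirname in dirnames:
    if len(dirname) >= 8:
        try:
            process_folder(dirname)  # hypothetical stand-in for the parsing code above
        except ValueError as e:
            # report what failed instead of hiding it with a bare pass
            print("skipping", dirname, "->", e)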

Python - Can glob be used multiple times?

I want the user to process files in 2 different folders. The user does this by selecting a folder for First_Directory and another folder for Second_Directory. Each of these is defined, has its own algorithm, and works fine if only one directory is selected at a time. If the user selects both, only First_Directory is processed.
Both also use the glob module, as shown in the simplified code below, which is where I think the problem lies. My question is: can the glob module be used multiple times, and if not, is there an alternative?
##Test=name
##First_Directory=folder
##Second_Directory=folder

path_1 = First_Directory
path_2 = Second_Directory
path = path_1 or path_2
os.chdir(path)

def First(path_1):
    output_1 = glob.glob('./*.shp')
    #Do some processing

def Second(path_2):
    output_2 = glob.glob('./*.shp')
    #Do some other processing

if path_1 and path_2:
    First(path_1)
    Second(path_2)
elif path_1:
    First(path_1)
elif path_2:
    Second(path_2)
else:
    pass
You can modify your function to only look for .shp files in the path of interest. Then you can use that function for one path or both.
def globFolder(path):
    output = glob.glob(path + '\\*.shp')
    return output

path1 = "C:\\folder\\data1"
path2 = "C:\\folder\\data2"
Then you can use that generic function:
totalResults = globFolder(path1) + globFolder(path2)
This will combine both lists.
I think by restructuring your code you can achieve your goal:
def First(path, check):
    if check:
        output = glob.glob(path + '/*.shp')
        #Do some processing
    else:
        output = glob.glob(path + '/*.shp')
        #Do some other processing

#
#
#
First(path_1, True)
First(path_2, False)
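If you still want the original selection logic, the restructured function slots straight into it (a sketch reusing path_1 and path_2 from the question); it also shows that glob itself can be called as many times as you like:

if path_1 and path_2:
    First(path_1, True)
    First(path_2, False)
elif path_1:
    First(path_1, True)
elif path_2:
    First(path_2, False)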
