Can't close temporary files made with the Python tempfile module - python

I am using an ffmpeg-based module called pydub to edit an audio file. I am trying to use the tempfile module, but for some reason I can't get the temporary files cleaned up (they aren't deleted). When I use a context manager instead, it throws a PermissionError.
What I tried:
This code works as intended, except that it does not delete the tempfile(s); no error is thrown.
temp1, temp2 = tempfile.NamedTemporaryFile(dir='data/temp', delete=False), None
# save something to temp1
seg = AudioSegment.from_file_using_temporary_files(temp1)
if options['overlay']:
    # create new seg
    if overlay_seg:
        temp2 = tempfile.NamedTemporaryFile(dir='data/temp', delete=False)
        # save something to temp2
        overlay_seg = AudioSegment.from_file_using_temporary_files(temp2)
        seg = seg.overlay(overlay_seg)
# edit the audio more here..
final = 'data//temp//edited.mp3'
seg.export(final, bitrate=options['bitrate'], format='mp3')
await ctx.send(file=discord.File(final))  # sends the file to a Discord chat
temp1.close()
if temp2:
    temp2.close()
My second attempt, using context managers, creates the first temp file and then throws this error:
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\discord\ext\commands\core.py", line 85, in wrapped
    ret = await coro(*args, **kwargs)
  File "<myfile>", line 153, in editaudio
    await attachment.save(temp1.name)
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\site-packages\discord\message.py", line 155, in save
    with open(fp, 'wb') as f:
PermissionError: [Errno 13] Permission denied: '<base>data\\temp\\tmpnf2r68qe'
Here is that second attempt:
with tempfile.NamedTemporaryFile(dir='data/temp') as temp1:
    await attachment.save(temp1.name)
    seg = AudioSegment.from_file_using_temporary_files(temp1)
    if options['overlay']:
        # create new seg
        if overlay_seg:
            with tempfile.NamedTemporaryFile(dir='data/temp') as temp2:
                # save something to temp2
                overlay_seg = AudioSegment.from_file_using_temporary_files(temp2)
                seg = seg.overlay(overlay_seg)
    # edit the audio more here..
    final = 'data//temp//edited.mp3'
    seg.export(final, bitrate=options['bitrate'], format='mp3')
    await ctx.send(file=discord.File(final))  # sends the file to a Discord chat
I am not sure what I am missing here

Well, your temp files do not get deleted because you create them with the option delete=False. This is self-explanatory, and it is also mentioned in the docs.
Your second approach is hard to debug since you did not provide a minimal reproducible example. Presumably the problem arises because the temp file is already open, held by the context manager, when attachment.save() tries to open it again; it apparently expects a path, but I could not find documentation for it online.
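For what it's worth, on Windows a NamedTemporaryFile that is still open usually cannot be opened a second time by name, which matches the PermissionError raised inside attachment.save() when it writes to temp1.name. A common workaround, sketched here without being tested against the poster's bot, is to create the file with delete=False, close it immediately, work only with its path, and remove it yourself when done:
import os
import tempfile

tmp = tempfile.NamedTemporaryFile(dir='data/temp', delete=False)
tmp.close()  # close the handle so other code can reopen the file by name on Windows
try:
    # await attachment.save(tmp.name)             # write into it by path (inside the bot's coroutine)
    # seg = AudioSegment.from_file(tmp.name)      # read it back by path with pydub
    pass
finally:
    os.remove(tmp.name)  # delete=False means we are responsible for cleanup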

Related

Opening a file, if not existent, create it

I'm using two JSON files, one for storing and loading device variables and another for MQTT info. I use a load_config function to open the correct file and parse it as JSON. When the file exists it works without any problem, but when the file does not exist it throws a FileNotFoundError, obviously. My function contains an except block that should handle this by creating the file, but it is never reached. Here's my code for the function:
def load_config(config_path):
    with open(config_path) as f:  # Config
        try:
            return json.load(f)
        except OSError:
            print("file not there, creating it")
            open(config_path, "w")
        except json.JSONDecodeError:
            return {}
        f.close()
I call that function like this:
DEVICE_PATH = 'config.json'
MQTT_PATH = 'mqtt.json'
conf = load_config(DEVICE_PATH) #load device config
mqtt_conf = load_config(MQTT_PATH) #load mqtt config
mqtt_broker_ip = mqtt_conf['ip'] #setup mqtt
mqtt_broker_port = mqtt_conf['port']
mqtt_user = mqtt_conf['username']
mqtt_pass = mqtt_conf['password']
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set(mqtt_user, password=mqtt_pass)
client.connect(mqtt_broker_ip, mqtt_broker_port, keepalive = 60, bind_address="" )
What am I doing wrong? When I open the file directly in load_config with with open(config_path, "a") as f:, everything in it gets deleted; with "x" it just throws an exception if the file exists, and with "w" it gets overwritten as well.
What you are trying to accomplish is built into open() already.
Just skip the whole file-existence check and load the JSON in w+ mode:
with open("file.json", "w+") as f:
try:
data = json.load(f)
except JSONDecodeError:
data = {}
w+ opens the file for reading and writing and creates it if it doesn't exist.
Keep in mind that this mode truncates the file as soon as it is opened, so any existing content is lost.
As a side note, it may be worth brushing up on the basics of file handling and modes, so you don't get stuck on a similar issue again soon.
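If you need to keep existing content, a variant not suggested in this answer is the 'a+' mode, which also creates the file when it is missing but does not truncate it; seek(0) then rewinds to the start before reading. A sketch:
import json

with open("file.json", "a+") as f:
    f.seek(0)  # 'a+' starts at the end of the file, so rewind before reading
    try:
        data = json.load(f)
    except json.JSONDecodeError:
        data = {}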
Since I had a logic error, the OSError exception could never be raised: the open() call sits outside the try block, so the FileNotFoundError is thrown before the try is even entered. Now I simply check beforehand whether the file exists, and create it if it doesn't.
def load_config(config_path):
    if not os.path.isfile(config_path):
        open(config_path, "w+").close()  # create the file and release the handle right away
    with open(config_path) as f:  # Config
        try:
            return json.load(f)
        except json.JSONDecodeError:
            return {}

Pytsk - Sending files to a server from a disk image

I am trying to send each file from a disk image to a remote server using paramiko.
class Server:
    def __init__(self):
        self.ssh = paramiko.SSHClient()
        self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.ssh.connect('xxx', username='xxx', password='xxx')

    def send_file(self, i_node, name):
        sftp = self.ssh.open_sftp()
        serverpath = '/home/paul/Testing/'
        try:
            sftp.chdir(serverpath)
        except IOError:
            sftp.mkdir(serverpath)
            sftp.chdir(serverpath)
        serverpath = '/home/Testing/' + name
        sftp.putfo(fs.open_meta(inode=i_node), serverpath)
However when I run this I get an error saying that "pytsk.File has no attribute read".
Is there any other way of sending this file to the server?
After a quick investigation I think I found your problem. Paramiko's sftp.putfo expects a Python file object as its first parameter. A pytsk3 file object is a completely different thing: paramiko tries to call read() on it, but the pytsk3 File class has no read() method, hence the error.
You could in theory try extending the pytsk3.File class and adding such a method, but I would not hold my breath that it actually works.
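A lighter-weight variant of that idea, not part of this answer, is a small wrapper object that exposes read() on top of read_random(), so it can be handed straight to putfo(). A rough sketch, assuming the entry has valid metadata:
class TskFileWrapper:
    # Minimal file-like adapter around a pytsk3 File object.
    def __init__(self, tsk_file):
        self.tsk_file = tsk_file
        self.size = tsk_file.info.meta.size
        self.offset = 0

    def read(self, length=65536):
        if self.offset >= self.size:
            return b''  # putfo stops once read() returns empty bytes
        data = self.tsk_file.read_random(self.offset, min(length, self.size - self.offset))
        self.offset += len(data)
        return data

# usage would then be: sftp.putfo(TskFileWrapper(fs.open_meta(inode=i_node)), serverpath)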
I would just read the file out to a temporary file and send that. Something like this (you would need to handle the temp file name more cleverly and delete the file afterwards, but you will get the idea):
serverpath = '/home/Testing/' + name
tmp_path = "/tmp/xyzzy"
file_obj = fs.open_meta(inode = i_node)
# Add here tests to confirm this is actually a file, not a directory
tha = open(tmp_path, "wb")
tha.write(file_obj.read_random(0, file_obj.info.meta.size))
tha.close()
rha = open(tmp_path, "rb")
sftp.putfo(rha, serverpath)
rha.close()
# Delete temp file here
Hope this helps. Note that this reads the whole file from the filesystem image into memory before writing it to the temp file, so if the file is massive you could run out of memory.
To work around that, read the file in chunks by looping over read_random with a suitable chunk size (its parameters are the start offset and the amount of data to read), building up the temp file a couple of megabytes at a time, for example.
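A rough sketch of that chunked copy, reusing fs, i_node and tmp_path from the snippet above with an arbitrary 2 MB chunk size:
CHUNK_SIZE = 2 * 1024 * 1024  # 2 MB per read, adjust to taste

file_obj = fs.open_meta(inode=i_node)
total = file_obj.info.meta.size

with open(tmp_path, "wb") as out:
    offset = 0
    while offset < total:
        chunk = file_obj.read_random(offset, min(CHUNK_SIZE, total - offset))
        out.write(chunk)
        offset += len(chunk)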
This is just a simple example to illustrate your problem.
Hannu

Delete a file after reading

In my code, the user uploads a file which is saved on the server and then read from that server path. I'm trying to delete the file from that path after I'm done reading it, but I get the following error instead:
An error occurred while reading file. [WinError 32] The process cannot access the file because it is being used by another process
I'm reading the file using with, and I've tried f.close() and also f.closed, but it's the same error every time.
This is my code:
f = open(filePath)
with f:
    line = f.readline().strip()
    tempLst = line.split(fileSeparator)
    if(len(lstHeader) != len(tempLst)):
        headerErrorMsg = "invalid headers"
        hjsonObj["Line No."] = 1
        hjsonObj["Error Detail"] = headerErrorMsg
        data['lstErrorData'].append(hjsonObj)
        data["status"] = True
        f.closed
        return data
f.closed
After this code I call the remove function:
os.remove(filePath)
Edit: using with open(filePath) as f: and then trying to remove the file gives the same error.
Instead of:
f.closed
You need to say:
f.close()
closed is just a boolean property on the file object to indicate if the file is actually closed.
close() is method on the file object that actually closes the file.
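A tiny illustration of the difference:
f = open(filePath)
print(f.closed)  # False: 'closed' only reports the state, it does not close anything
f.close()        # this is what actually releases the file handle
print(f.closed)  # True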
Side note: attempting a file delete after closing a file handle is not 100% reliable. The file might still be getting scanned by the virus scanner or indexer. Or some other system hook is holding on to the file reference, etc... If the delete fails, wait a second and try again.
Use the code below:
import os
os.startfile('your_file.py')
To delete after completion:
os.remove('your_file.py')
This should work:
import os

path = 'path/to/file'
with open(path) as f:
    for l in f:
        print(l, end='')
os.remove(path)
The with statement will automatically close the file after the nested block of code.
If it still fails, the file could be in use by some external process; in that case you can use a retry pattern:
import time

while True:
    try:
        os.remove(path)
        break
    except PermissionError:
        time.sleep(1)
There is probably another application holding the file open; find and close that application before executing your code:
os.remove(file_path)
Only delete files that are not in use by another application.

Why do I get "Pickle - EOFError: Ran out of input" reading an empty file?

I am getting an interesting error while trying to use Unpickler.load(). Here is the source code:
open(target, 'a').close()
scores = {};
with open(target, "rb") as file:
    unpickler = pickle.Unpickler(file);
    scores = unpickler.load();
    if not isinstance(scores, dict):
        scores = {};
Here is the traceback:
Traceback (most recent call last):
  File "G:\python\pendu\user_test.py", line 3, in <module>:
    save_user_points("Magix", 30);
  File "G:\python\pendu\user.py", line 22, in save_user_points:
    scores = unpickler.load();
EOFError: Ran out of input
The file I am trying to read is empty.
How can I avoid getting this error, and get an empty variable instead?
Most of the answers here have dealt with how to manage EOFError exceptions, which is really handy if you're unsure whether the pickled object is empty or not.
However, if you're surprised that the pickle file is empty, it could be because you opened it in 'wb' or some other mode that overwrote the file.
For example:
filename = 'cd.pkl'
with open(filename, 'wb') as f:
    classification_dict = pickle.load(f)
This will overwrite the pickled file. You might have done this by mistake before using:
...
open(filename, 'rb') as f:
And then got the EOFError because the previous block of code overwrote the cd.pkl file.
When working in Jupyter or in the console (Spyder), I usually write a wrapper around the reading/writing code and then call that wrapper. This avoids common read/write mistakes and saves a bit of time if you're going to be reading the same file multiple times.
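Such a pair of wrappers could look roughly like this (the names are illustrative, not from any library):
import pickle

def save_pickle(obj, path):
    with open(path, "wb") as f:  # 'wb' lives only here, so a read can never truncate the file
        pickle.dump(obj, f)

def load_pickle(path):
    with open(path, "rb") as f:  # 'rb' lives only here
        return pickle.load(f)
This way each mode string is written exactly once, in the one place it belongs.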
I would check that the file is not empty first:
import os

scores = {}  # scores is an empty dict already
if os.path.getsize(target) > 0:
    with open(target, "rb") as f:
        unpickler = pickle.Unpickler(f)
        # if the file is not empty, scores will be equal
        # to the unpickled value
        scores = unpickler.load()
Also, open(target, 'a').close() is not doing anything useful in your code, and you don't need the trailing semicolons.
It is very likely that the pickled file is empty.
It is surprisingly easy to overwrite a pickle file if you're copying and pasting code.
For example the following writes a pickle file:
pickle.dump(df,open('df.p','wb'))
And if you copied this code to reopen it, but forgot to change 'wb' to 'rb' then you would overwrite the file:
df=pickle.load(open('df.p','wb'))
The correct syntax is:
df=pickle.load(open('df.p','rb'))
As you can see, that's actually a natural error.
A typical construct for reading from an Unpickler object looks like this:
try:
    data = unpickler.load()
except EOFError:
    data = list()  # or whatever you want
EOFError is raised simply because the unpickler was reading an empty file; it just means End Of File.
You can catch that exception and return whatever you want from there.
open(target, 'a').close()
scores = {};
try:
    with open(target, "rb") as file:
        unpickler = pickle.Unpickler(file);
        scores = unpickler.load();
        if not isinstance(scores, dict):
            scores = {};
except EOFError:
    return {}
if path.exists(Score_file):
    try:
        with open(Score_file, "rb") as prev_Scr:
            return Unpickler(prev_Scr).load()
    except EOFError:
        return dict()
I had the same issue. It turned out that when writing to my pickle file I had not called file.close(). I added that line and the error was gone.
I have encountered this error many times, and it always occurs because I didn't close the file after writing into it. If the file isn't closed, the content stays in the buffer and the file on disk stays empty.
To get the content saved into the file, the file either has to be closed or the file object has to go out of scope.
That's why loading it raises the "Ran out of input" error: the file is still empty. So you have two options:
file_object.close()
file_object.flush(): if you don't want to close the file in the middle of your program, flush() will force the buffered content out to the file.
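A minimal illustration of the two options:
import pickle

f = open("scores.pkl", "wb")
pickle.dump({"Magix": 30}, f)
f.flush()  # option 2: push the buffered bytes to disk without closing the handle
# ... keep using f if needed ...
f.close()  # option 1: closing flushes the buffer as well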
This error occurs when your pickle file is empty (0 bytes). Check the size of your pickle file first; that was the scenario in my case. Hope this helps!
Note that opening the file in a mode that includes 'a' (append) can also cause this error, because the file position then starts at the end of the file, so load() immediately runs out of input:
pointer = open('makeaafile.txt', 'ab+')
tes = pickle.load(pointer, encoding='utf-8')
temp_model = os.path.join(models_dir, train_type + '_' + part + '_' + str(pc))
# print(type(temp_model))  # <class 'str'>
filehandler = open(temp_model, "rb")
# print(type(filehandler))  # <class '_io.BufferedReader'>
try:
    pdm_temp = pickle.load(filehandler)
except UnicodeDecodeError:
    filehandler.seek(0)  # rewind before retrying with a different encoding
    pdm_temp = pickle.load(filehandler, fix_imports=True, encoding="latin1")
from os.path import getsize as size
from pickle import *
if size(target) > 0:
    with open(target, 'rb') as f:
        scores = {i: j for i, j in enumerate(load(f))}
else: scores = {}
Line 1: we import the getsize function from the os.path module and rename it with as for a shorter style of writing. The important point here is that we load only the single function we need, not the whole library.
Line 2: same idea, but when we don't know at the beginning which names we will use in the code, we can import everything from a module with *.
Line 3: a conditional statement: if the size of your file is greater than 0 (meaning the object is not empty). target is a variable that should be defined a bit earlier, for example: target = (r'd:\dir1\dir.2..\YourDataFile.bin')
Line 4: with open(target) as f: is the standard construct for opening any file; you then don't need to call file.close(), and it helps avoid some typical errors such as "Ran out of input" or permission problems. The 'rb' mode means "read binary": you can only read (load) data from your binary file, you can't modify or rewrite it.
Line 5: a dict comprehension applied to the loaded data.
Line 6: in case your data file is empty, no error is raised; you just get an empty dictionary.

weird no such file or directory in python

I'm new to Python and the following piece of code is driving me crazy. It lists the files in a directory and, for each file, does some stuff. I get an IOError: [Errno 2] No such file or directory: my_file_that_is_actually_there!
def loadFile(aFile):
    f_gz = gzip.open(aFile, 'rb')
    data = f_gz.read()
    # do some stuff...
    f_gz.close()
    return data

def main():
    inputFolder = '../myFolder/'
    for aFile in os.listdir(inputFolder):
        data = loadFile(aFile)
        # do some more stuff
The file exists and it's not corrupted. I do not understand how it's possible that Python first finds the file when it lists the contents of myFolder and then cannot find it anymore... This happens only on the second iteration of my for loop, whatever files are in the folder.
NOTE: Why does this exception happen ONLY on the second iteration of the loop? The first file in the folder is found and opened without any issues...
This is because open receives only the bare filename returned by os.listdir. It doesn't know that it should look in ../myFolder, so the relative name is resolved against the current working directory. To fix it, try:
data = loadFile(os.path.join(inputFolder, aFile))
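Put together, the loop from the question would then look like this:
import os

def main():
    inputFolder = '../myFolder/'
    for aFile in os.listdir(inputFolder):
        # join the folder with the bare name so gzip.open gets a path it can resolve
        data = loadFile(os.path.join(inputFolder, aFile))
        # do some more stuff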
