I'm using two JSON files: one for storing and loading device variables, and another for MQTT info. I use a load_config function to pick the correct file and load it as JSON. When the file exists, it works without any problem, but when the file doesn't exist it throws a file-not-found error, obviously. My function contains an exception block that should handle this by creating the file, but it never gets called. Here's my code for the function:
def load_config(config_path):
    with open(config_path) as f:  # Config
        try:
            return json.load(f)
        except OSError:
            print("file not there, creating it")
            open(config_path, "w")
        except json.JSONDecodeError:
            return {}
    f.close()
I call that function like this:
DEVICE_PATH = 'config.json'
MQTT_PATH = 'mqtt.json'
conf = load_config(DEVICE_PATH) #load device config
mqtt_conf = load_config(MQTT_PATH) #load mqtt config
mqtt_broker_ip = mqtt_conf['ip'] #setup mqtt
mqtt_broker_port = mqtt_conf['port']
mqtt_user = mqtt_conf['username']
mqtt_pass = mqtt_conf['password']
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.username_pw_set(mqtt_user, password=mqtt_pass)
client.connect(mqtt_broker_ip, mqtt_broker_port, keepalive=60, bind_address="")
What am I doing wrong? When I open the file directly in load_config via with open(config_path, "a") as f: everything in it gets deleted, with "x" it just throws an exception if the file exists, and with "w" it also gets overwritten.
What you are trying to accomplish is built-in functionality of open().
Just skip the whole file existence check and load the JSON in w+ mode:
with open("file.json", "w+") as f:
try:
data = json.load(f)
except JSONDecodeError:
data = {}
w+ opens the file in read and write mode and creates it if it doesn't exist.
Keep in mind that opening a file in this mode truncates it immediately, so any existing content is discarded as soon as the file is opened, and dumping data to it later will likewise overwrite whatever was there.
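A quick sketch (the file name is just for illustration) showing that the truncation happens at open time, not when you dump data:
with open("example.json", "w") as f:
    f.write('{"key": "value"}')

with open("example.json", "w+") as f:  # "w+" truncates immediately on open
    print(repr(f.read()))              # prints '' because the previous content is already gone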
As a side note, you might want to brush up on the basics of file handling in Python, to avoid getting stuck on a similar issue again soon.
It turned out I had a logic error: the IOError could never be raised, because the open() call that fails is outside the try block; inside it I only tried to load the already-opened file as JSON. Now I simply check beforehand whether the file exists, and create it if it doesn't.
import json
import os

def load_config(config_path):
    # Create an empty config file first if it doesn't exist yet
    if not os.path.isfile(config_path):
        open(config_path, "w+").close()
    with open(config_path) as f:  # Config
        try:
            return json.load(f)
        except json.JSONDecodeError:
            return {}
Related
I have a FastAPI endpoint that receives a file, uploads it to S3, and then processes it. Everything works fine except for the processing, which fails with this message:
File "/usr/local/lib/python3.9/site-packages/starlette/datastructures.py", line 441, in read
return self.file.read(size)
File "/usr/local/lib/python3.9/tempfile.py", line 735, in read
return self._file.read(*args)
ValueError: I/O operation on closed file.
My simplified code looks like this:
async def process(file: UploadFile):
    reader = csv.reader(iterdecode(file.file.read(), "utf-8"), dialect="excel")  # This fails!
    datarows = []
    for row in reader:
        datarows.append(row)
    return datarows
How can I read the contents of the uploaded file?
UPDATE
I managed to isolate the problem a bit more. Here's my simplified endpoint:
import boto3
from loguru import logger
from botocore.exceptions import ClientError

UPLOAD = True

@router.post("/")
async def upload(file: UploadFile = File(...)):
    if UPLOAD:
        # Upload the file
        s3_client = boto3.client("s3", endpoint_url="http://localstack:4566")
        try:
            s3_client.upload_fileobj(file.file, "local", "myfile.txt")
        except ClientError as e:
            logger.error(e)
    contents = await file.read()
    return JSONResponse({"message": "Success!"})
If UPLOAD is True, I get the error. If it's not, everything works fine. It seems boto3 is closing the file after uploading it. Is there any way I can reopen the file? Or send a copy to upload_fileobj?
FastAPI's (actually Starlette's) UploadFile (see Starlette's documentation as well) uses Python's SpooledTemporaryFile, a "file stored in memory up to a maximum size limit, and after passing this limit it will be stored in disk.". It "operates exactly as TemporaryFile", which "is destroyed as soon as it is closed (including an implicit close when the object is garbage collected)". Hence, it seems that once the contents of the file have been read by boto3, the file gets closed, which, in turn, causes the file to be deleted.
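A minimal sketch of that behavior, using Python's tempfile module directly rather than FastAPI:
from tempfile import SpooledTemporaryFile

f = SpooledTemporaryFile(max_size=1024)
f.write(b"hello")
f.close()   # the temporary file is destroyed as soon as it is closed
f.read()    # raises ValueError: I/O operation on closed file.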
Option 1
If the server supports it, you could read the file contents, using contents = file.file.read() as shown in this answer (or, for async reading/writing, see here), and then upload these contents (i.e., bytes) to your server directly.
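With boto3, for instance, one way to send raw bytes is put_object (reusing the bucket and key names from the question):
contents = file.file.read()
s3_client.put_object(Bucket="local", Key="myfile.txt", Body=contents)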
Otherwise, you can again read the contents and then move the file's reference point back to the beginning. A file object keeps an internal "cursor" (or "file pointer") denoting the position from which the contents will be read (or written); calling read() moves it all the way to the end of the buffer, leaving zero bytes beyond the cursor. Thus, one can use the seek() method to set the current position of the cursor back to 0 (i.e., rewind it to the start of the file), which allows you to pass the file object (i.e., upload_fileobj(file.file), see this answer) after reading the file contents.
As per FastAPI's documentation:
seek(offset): Goes to the byte position offset (int) in the file.
E.g., await myfile.seek(0) would go to the start of the file.
This is especially useful if you run await myfile.read() once and then need to read the contents again.
Example
from fastapi import File, UploadFile, HTTPException

@app.post('/')
def upload(file: UploadFile = File(...)):
    try:
        contents = file.file.read()
        file.file.seek(0)
        # Upload the file to your S3 service
        s3_client.upload_fileobj(file.file, 'local', 'myfile.txt')
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        file.file.close()

    print(contents)  # Handle file contents as desired
    return {"filename": file.filename}
Option 2
Copy the contents of the file into a NamedTemporaryFile, which, unlike TemporaryFile, "has a visible name in the file system" that "can be used to open the file" (that name can be retrieved from the .name attribute). Additionally, it can remain accessible after it is closed by setting the delete argument to False, which allows the file to be reopened when needed. Once you are done with it, you can delete it using os.remove() or os.unlink(). Below is a working example (inspired by this answer):
from fastapi import FastAPI, File, UploadFile, HTTPException
from tempfile import NamedTemporaryFile
import os

app = FastAPI()

@app.post("/upload")
def upload_file(file: UploadFile = File(...)):
    temp = NamedTemporaryFile(delete=False)
    try:
        try:
            contents = file.file.read()
            with temp as f:
                f.write(contents)
        except Exception:
            raise HTTPException(status_code=500, detail='Error on uploading the file')
        finally:
            file.file.close()

        # Upload the file to your S3 service using `temp.name`
        s3_client.upload_file(temp.name, 'local', 'myfile.txt')
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        # temp.close()  # the `with` statement above takes care of closing the file
        os.remove(temp.name)  # Delete the temp file

    print(contents)  # Handle file contents as desired
    return {"filename": file.filename}
Option 3
You could even keep the bytes in an in-memory buffer (BytesIO), use it to upload the contents to the S3 bucket, and finally close it ("The buffer is discarded when the close() method is called."). Remember to call the seek(0) method to reset the cursor back to the beginning of the buffer after you finish writing to the BytesIO stream.
import io

contents = file.file.read()
temp_file = io.BytesIO()
temp_file.write(contents)
temp_file.seek(0)
s3_client.upload_fileobj(temp_file, "local", "myfile.txt")
temp_file.close()
From the FastAPI File documentation:
Import File and UploadFile from fastapi:
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/files/")
async def create_file(file: bytes = File(...)):
    return {"file_size": len(file)}

@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
    return {"filename": file.filename}
From the FastAPI UploadFile documentation:
For example, inside of an async path operation function you can get the
contents with:
contents = await myfile.read()
With your code, you should have something like this:
async def process(file: UploadFile = File(...)):
    content = await file.read()
    reader = csv.reader(iterdecode(content, "utf-8"), dialect="excel")
    datarows = []
    for row in reader:
        datarows.append(row)
    return datarows
In my code, the user uploads a file which is saved on the server and read using the server path. I'm trying to delete the file from that path after I'm done reading it, but it gives me the following error instead:
An error occurred while reading file. [WinError 32] The process cannot access the file because it is being used by another process
I'm reading the file using with, and I've tried f.close() and also f.closed, but it's the same error every time.
This is my code:
f = open(filePath)
with f:
    line = f.readline().strip()
    tempLst = line.split(fileSeparator)
    if len(lstHeader) != len(tempLst):
        headerErrorMsg = "invalid headers"
        hjsonObj["Line No."] = 1
        hjsonObj["Error Detail"] = headerErrorMsg
        data['lstErrorData'].append(hjsonObj)
        data["status"] = True
        f.closed
        return data
    f.closed
After this code I call the remove function:
os.remove(filePath)
Edit: using with open(filePath) as f: and then trying to remove the file gives the same error.
Instead of:
f.closed
You need to say:
f.close()
closed is just a boolean property on the file object to indicate if the file is actually closed.
close() is a method on the file object that actually closes the file.
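A tiny illustration (the file name is just a placeholder):
f = open("some_file.txt", "w")
print(f.closed)  # False; the expression only reports the state, it doesn't close anything
f.close()        # this actually closes the file
print(f.closed)  # True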
Side note: attempting a file delete after closing a file handle is not 100% reliable. The file might still be getting scanned by the virus scanner or indexer. Or some other system hook is holding on to the file reference, etc... If the delete fails, wait a second and try again.
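A rough sketch of that retry idea, reusing filePath from the question (the attempt count and delay are arbitrary):
import os
import time

for attempt in range(5):
    try:
        os.remove(filePath)
        break
    except PermissionError:   # WinError 32 surfaces as PermissionError on Windows
        time.sleep(1)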
Use the code below:
import os
os.startfile('your_file.py')
To delete after completion:
os.remove('your_file.py')
This
import os

path = 'path/to/file'
with open(path) as f:
    for l in f:
        print(l, end='')
os.remove(path)
should work; the with statement will automatically close the file after the nested block of code.
If it fails, the file could still be in use by some external process; you can use a retry pattern:
import time

while True:
    try:
        os.remove(path)
        break
    except OSError:  # PermissionError (WinError 32) is a subclass of OSError
        time.sleep(1)
There is probably another application that has the file open; check and close that application before executing your code:
os.remove(file_path)
You can only delete files that are not in use by another application.
I have written a script that parses a web page and saves the data of interest in a CSV file. Before I open the data and use it in a second script, I check whether the data file exists; if not, I run the parser script first. The odd behaviour of the second script is that it detects that there is no file, the file then gets created, but when it is read for the first time it comes back empty (the else branch). I tried to add some delay using time.sleep(), but it does not help. The file explorer clearly shows that the file is not empty, but on the first run the script sees it as empty. On subsequent runs the script clearly sees the file and reads its content properly.
Maybe you have some explanation for this behaviour.
def open_file():
    # TARGET_DIR and URL are global variables.
    all_lines = []
    try:
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    except FileNotFoundError:
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
        open_file()
    else:
        time.sleep(10)
        data = csv.reader(current_file, delimiter=';')
        for row in data:
            all_lines.append(row)
        current_file.close()
        return all_lines
You've got some recursion going on there: when the file is missing, open_file() calls itself but never returns that result.
Another way to do it—assuming I understand correctly—is this:
import os

def open_file():
    # TARGET_DIR and URL are global variables.
    all_lines = []
    # If the file is not there, make it.
    if not os.path.isfile(TARGET_DIR):
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
    # Here I am assuming the file has been created.
    current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    data = csv.reader(current_file, delimiter=';')
    for row in data:
        all_lines.append(row)
    current_file.close()
    return all_lines
You should return the result of your internal open_file() call, or just open the file in your except block:
def open_file():
    # TARGET_DIR and URL are hopefully constants
    try:
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    except FileNotFoundError:
        procesed_data = parse_site(URL)
        save_parsed(procesed_data)
        compare_parsed()
        current_file = codecs.open(TARGET_DIR, 'r', 'utf-8')
    data = csv.reader(current_file, delimiter=';')
    all_lines = list(data)
    current_file.close()
    return all_lines
I have a function that opens a file and returns an opened file object.
def read_any():
    try:
        opened = gzip.open(fname, 'r')
    except IOError:
        opened = open(fname, 'r')
    return opened
When I attempt to run this function on a non-gzipped file, the except clause does not get triggered and the function crashes with the message: IOError: Not a gzipped file.
OK, now I try to do the same with a with statement:
def read_any2():
    try:
        with gzip.open(fname, 'r') as f:
            return f.read()
    except IOError:
        with open(fname, 'r') as f:
            return f.read()
Now, if I run it on the same file, the function works as intended.
Can you explain why the except clause doesn't get triggered in the first version?
To see what's going on, test it in a REPL:
>>> import gzip
>>> f = gzip.open('some_nongzipped_file', 'r')
You will see that this doesn't raise an error. However, once you read from the object:
>>> f.read()
... (snip)
OSError: Not a gzipped file
it raises the error.
In short: Simply creating the file object doesn't read anything from the file yet, and thus doesn't know if it should fail or not.
Since in the first example you just return the file object, when you try to read from it later it will raise the exception there (outside your raise-except block). In your second example you return f.read() which reads and therefore raises the exception. It has nothing to do with the with block, as you can see if you remove it:
def read_any_mod():
    try:
        opened = gzip.open(fname, 'r')
        return opened.read()
    except IOError:
        opened = open(fname, 'r')
        return opened.read()
I have a dilemma. I'm uploading files to both the Scribd store and the blobstore, using tipfy as the framework.
I have a web form whose action is not created by blobstore.create_upload_url (I'm just using url_for('myhandler')). I did it this way because if I use the blobstore handler, the POST body is already parsed and I cannot use the normal python-scribd API to upload the file into the Scribd store.
Now I have a working Scribd saver:
class UploadScribdHandler(RequestHandler, BlobstoreUploadMixin):
    def post(self):
        uploaded_file = self.request.files.get('upload_file')
        fname = uploaded_file.filename.strip()
        try:
            self.post_to_scribd(uploaded_file, fname)
        except Exception, e:
            # ... get the exception message and do something with it
            msg = e.message
            # ...
        # reset the stream to zero (beginning) so the file can be read again
        uploaded_file.seek(0)
        # removed try-except to see debug info in browser window
        # Create the file
        file_name = files.blobstore.create(_blobinfo_uploaded_filename=fname)
        # Open the file and write to it
        with files.open(file_name, 'a') as f:
            f.write(uploaded_file.read())
        # Finalize the file. Do this before attempting to read it.
        files.finalize(file_name)
        # Get the file's blob key
        blob_key = files.blobstore.get_blob_key(file_name)
        return Response('done')

    def post_to_scribd(self, uploaded_file, fname):
        errmsg = ''
        uploaded_file = self.request.files.get('upload_file')
        fname = uploaded_file.filename.strip()
        fext = fname[fname.rfind('.') + 1:].lower()
        if fext not in ALLOWED_EXTENSION:
            raise Exception('This file type does not allowed to be uploaded\n')
        if SCRIBD_ENABLED:
            doc_title = self.request.form.get('title')
            doc_description = self.request.form.get('description')
            doc_tags = self.request.form.get('tags')
            try:
                document = scribd.api_user.upload(uploaded_file, fname, access='private')
                # while document.get_conversion_status() != 'DONE':
                #     time.sleep(2)
                if not doc_title:
                    document.title = fname[:fname.rfind('.')]
                else:
                    document.title = doc_title
                if not doc_description:
                    document.description = 'This document was uploaded at ' + str(datetime.datetime.now()) + '\n'
                else:
                    document.description = doc_description
                document.tags = doc_tags
                document.save()
            except scribd.ResponseError, err:
                raise Exception('Scribd failed: error code: %d, error message: %s\n' % (err.errno, err.strerror))
            except scribd.NotReadyError, err:
                raise Exception('Scribd failed: error code: %d, error message: %s\n' % (err.errno, err.strerror))
            except:
                raise Exception('something wrong exception')
As you can see, it also saves the file into the blobstore. But if I'm uploading a big file (e.g. 5 MB) I'm receiving:
RequestTooLargeError: The request to API call file.Append() was too large.
Request: docs.upload(access='private', doc_type='pdf', file=('PK\x03\x04\n\x00\x00\x00\x00\x00"\x01\x10=\x00\x00(...)', 'test.pdf'))
How can I fix it?
Thanks!
You need to make multiple, smaller calls to the file API, for instance like this:
with files.open(file_name, 'a') as f:
    data = uploaded_file.read(65536)
    while data:
        f.write(data)
        data = uploaded_file.read(65536)
Note that the payload size limit on regular requests to App Engine apps is 10MB; if you want to upload larger files, you'll need to use the regular blobstore upload mechanism.
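For reference, a rough sketch of that regular blobstore upload mechanism, shown with webapp-style handlers purely as an illustration (the question uses tipfy, and the '/upload_handler' and '/done' paths are placeholders):
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

# Generate a one-off upload URL and point the web form's action at it
upload_url = blobstore.create_upload_url('/upload_handler')

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # get_uploads() returns the BlobInfo records for the uploaded files
        blob_info = self.get_uploads('upload_file')[0]
        self.redirect('/done')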
Finally I found a solution.
Nick Johnson's answer raised an AttributeError because uploaded_file is treated as a string, and a string doesn't have a read() method.
Since a string has no read() method, I split the file string into chunks and wrote them just like he described.
class UploadRankingHandler(webapp.RequestHandler):
    def post(self):
        fish_image_file = self.request.get('file')
        file_name = files.blobstore.create(mime_type='image/png', _blobinfo_uploaded_filename="testfilename.png")
        file_str_list = splitCount(fish_image_file, 65520)
        with files.open(file_name, 'a') as f:
            for line in file_str_list:
                f.write(line)
You can check how splitCount() works here: http://www.bdhwan.com/entry/gaewritebigfile
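The splitCount() helper isn't shown in the answer itself; a minimal guess at what it might look like, based only on how it is called above:
def splitCount(s, count):
    # Split `s` into consecutive chunks of at most `count` characters/bytes.
    return [s[i:i + count] for i in range(0, len(s), count)]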