Open and read latest json file one time only - python

SO members... how can I read the latest JSON file in a directory one time only (and print something if there is no new file)? So far I can only read the latest file. The sample script below (run every 45 minutes) opens and reads the latest JSON file in a directory; a new JSON file is created every 30 minutes, so in this case the latest file is file3.json. The problem: if file4 is not created for some reason (for example, the server fails to create a new JSON file) and the script runs again, it will still read the same file3.
Files in the directory:
file1.json
file2.json
file3.json
The script below is able to open and read the latest JSON file created in the directory.
import glob
import json
import os.path

listFiles = glob.iglob('logFile/*.json')
latestFile = max(listFiles, key=os.path.getctime)
with open(latestFile, 'r') as f:
    mydata = json.load(f)
    print(mydata)
To ensure the script only reads the newest file, and reads it one time only, I expect something like the below:
listFiles = glob.iglob('logFile/*.json')
latestFile = max(listFiles, key=os.path.getctime)
if latestFile is newer than the previously read file:  # Not sure how to compare the latest file with the previous one.
    with open(latestFile, 'r') as f:
        mydata = json.load(f)
        print(mydata)
else:
    print("no new file created")
Thank you for your help; an example solution would be good to share. I can't figure out the solution... it seems simple, but a few days of trial and error brought no luck. The requirements:
(1) Make sure the latest file in the directory is read.
(2) Make sure files that may have been missed are read (for example, because the script failed to run).
(3) Read every file only once, and warn if there is no new file.
Thank you.
After the SO discussion and suggestions, I got a few methods to resolve, or at least accommodate, some of the requirements. I simply move files that have been processed, as sketched below. If no file is created, the script does nothing, and if the script fails, then once things normalize it will run and read all the related files still available. I think it's good for now. Thank you guys...
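A minimal sketch of that move-after-processing idea, assuming the logFile/ layout from the question; the processed/ subdirectory is an assumption, not part of the original code:

import glob
import json
import os
import shutil

PROCESSED_DIR = 'logFile/processed'  # hypothetical archive directory
os.makedirs(PROCESSED_DIR, exist_ok=True)

newFiles = sorted(glob.glob('logFile/*.json'), key=os.path.getctime)
if not newFiles:
    print("no new file created")
for path in newFiles:
    with open(path, 'r') as f:
        mydata = json.load(f)
        print(mydata)
    # Move the file out of the way so it is never read twice
    shutil.move(path, os.path.join(PROCESSED_DIR, os.path.basename(path)))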

Below is not so much an answer as an approach I would like to propose:
The idea is as follows:
Every log file written to the directory can carry a key-value pair such as "creation_time": timestamp (inside each fileX.json that gets stored on the server). Your script runs every 45 minutes to pick up the file that was dumped into the directory. In the normal case you read the file, and when the script exits you store the last-read filename and the creation_time taken from that fileX.json into a logger.json.
An example for a logger.json is as follows:
{
    "creation_time": "03520201330",
    "file_name": "file3.json"
}
Whenever the server fails or any delay occurs, a fileX.json could have been rewritten, or new fileX.json files could have been created in the directory. In those situations, you would first open logger.json and obtain both the timestamp and the last filename, as shown in the example above. Using the last filename, you can compare the old timestamp stored in the logger with the new timestamp in that fileX.json. If they match, nothing changed: you only read the files ahead of it and rewrite the logger.
If they do not match, you re-read the last fileX.json again and then proceed to read the other files ahead of it.
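A minimal sketch of that approach; the logger.json path and the load_state helper are assumptions, and os.path.getctime() is used here as a stand-in for the embedded "creation_time" key described above:

import glob
import json
import os

LOGGER_PATH = 'logger.json'  # assumed location of the state file

def load_state():
    # Return the last-read state, or None on the very first run
    if not os.path.exists(LOGGER_PATH):
        return None
    with open(LOGGER_PATH, 'r') as f:
        return json.load(f)

state = load_state()
listFiles = sorted(glob.glob('logFile/*.json'), key=os.path.getctime)

if state is None:
    unread = listFiles  # first run: everything is new
else:
    # Only keep files created after the one recorded in the logger
    unread = [p for p in listFiles
              if os.path.getctime(p) > state['creation_time']]

if not unread:
    print("no new file created")
for path in unread:
    with open(path, 'r') as f:
        print(json.load(f))

if unread:
    # Remember the newest file we processed for the next run
    newest = unread[-1]
    with open(LOGGER_PATH, 'w') as f:
        json.dump({'file_name': os.path.basename(newest),
                   'creation_time': os.path.getctime(newest)}, f)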

Archive files directly from memory in Python

I'm writing a program where I get a number of files, then zip them with encryption using pyzipper; I'm also using io.BytesIO() to write these files to, so I keep them in memory. Now, after some other additions, I want to take all of these in-memory files and zip them together into a single encrypted zip file, again using pyzipper.
The code looks something like this:
import pyzipper
from io import BytesIO

# Create the in-memory file object
in_memory = BytesIO()
# Create the zip file and open in write mode
with pyzipper.AESZipFile(in_memory, "w", compression=pyzipper.ZIP_LZMA, encryption=pyzipper.WZ_AES) as zip_file:
    # Set password
    zip_file.setpassword(b"password")
    # Save "data" with file_name
    zip_file.writestr(file_name, data)
# Go back to the beginning of the buffer
in_memory.seek(0)
# Read the zip file data
data = in_memory.read()
# Add the data to a list
files.append(data)
So, as you may guess, the files list is an attribute of a class, and the whole thing above is a function that does this a number of times, after which you have the full files list. For simplicity's sake, I removed most of the irrelevant parts.
I get no errors up to this point, but when I try to write all the files into a new zip file, I get an error. Here's the code:
with pyzipper.AESZipFile(test_name, "w", compression=pyzipper.ZIP_LZMA, encryption=pyzipper.WZ_AES) as zfile:
    zfile.setpassword(b"pass")
    for file in files:
        zfile.write(file)
I get a ValueError because of os.stat:
File "C:\Users\vulka\AppData\Local\Programs\Python\Python310\lib\site-packages\pyzipper\zipfile.py", line 820, in from_file
st = os.stat(filename)
ValueError: stat: embedded null character in path
[WHAT I TRIED]
So, I tried using mmap for this purpose, but I don't think it can help me, and even if it can, I have no idea how to make it work.
I also tried using fs.memoryfs.MemoryFS to temporarily create a virtual filesystem in memory to store all the files, then get them back to zip everything together and save it to disk. Again: failed. I got tons of different errors in my tests and, to be honest, there's very little information out there on this fs method; even if what I'm trying to do is possible, I couldn't figure it out.
P.S.: I don't know if pyzipper (almost 1:1 with zipfile, with the addition of encryption) supports nested zip files at all. That could be the problem I'm facing, but if it isn't supported, I'm open to suggestions for a new approach. Also, I don't want to rely on third-party software, even if it is open source! (I'm talking about using 7zip to do all the archiving and encryption, even though it shouldn't be possible to use it without saving the files to disk first, which is the main thing I'm trying to avoid.)
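For what it's worth, the ValueError comes from passing raw bytes where write() expects a filesystem path (hence os.stat choking on the zip data). A minimal sketch of the bytes-friendly alternative, using writestr(), which pyzipper inherits from zipfile; the inner archive names here are hypothetical:

import pyzipper

# files is assumed to be the list of in-memory zip archives (bytes) built above
with pyzipper.AESZipFile("test.zip", "w", compression=pyzipper.ZIP_LZMA,
                         encryption=pyzipper.WZ_AES) as zfile:
    zfile.setpassword(b"pass")
    for i, data in enumerate(files):
        # writestr() takes an archive name and the bytes to store,
        # so nothing touches the disk before the outer zip does
        zfile.writestr("inner_{0}.zip".format(i), data)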

Get the last written file from the series of the sub-folders

I have tried this solution:
How to get the latest file in a folder using python
The code I tried is:
import glob
import os
list_of_files = glob.glob('/path/to/folder/**/*.csv')
latest_file = max(list_of_files, key=os.path.getctime)
print (latest_file)
The output I received reflected the Windows timestamps for the files.
But I maintain a separate log of the files written into each sub-folder, and when I opened that log I saw that the last updated file was not the one the Python code reported.
I was shocked, as my complete process depends on the last file written.
Kindly let me know what I can do to get the last updated file through Python.
I want to read the file that was updated last, but as Windows is not prioritizing the file's last-modified time, I am not seeing any other way out.
Does anyone have another way to look at it?
On Linux, os.path.getctime() returns the last metadata-change time (which often coincides with the last modification), but on Windows it returns the creation time. You need to use os.path.getmtime() to get the modification time on Windows.
import glob
import os
# recursive=True is needed for ** to match files in sub-folders at any depth
list_of_files = glob.glob('/path/to/folder/**/*.csv', recursive=True)
latest_file = max(list_of_files, key=os.path.getmtime)
print(latest_file)
This code should work for you.
os.path.getctime is the creation time of the file (on Windows) - it seems you want os.path.getmtime, which is the modification time of the file, so try:
latest_file = max(list_of_files, key=os.path.getmtime)
and see if that does what you want.

Why Does a Strange File Show Up in the Directory When Using os.walk()?

The project is written in Pycharm on Windows 10.
I wrote a program that grabs .docx files from a directory and searches them for information. At the end of the list of file names I get this file: "~$640188.docx"
I get this error when it hits this file:
raise BadZipfile, "File is not a zip file"
zipfile.BadZipfile: File is not a zip file
The error happens when I pass the file '~$640188.docx' to the docx2txt.process method:
text = docx2txt.process(r'C:\path\to\folder\~$640188.docx')
From what I can see, this file does not exist in the directory I'm searching, nor anywhere else on my computer. The other strange part is that yesterday I wasn't getting this error.
I know there are sometimes "hidden" files in directories; I ran into those before on my Mac (specifically '.DS_Store'), but this one is a .docx file.
My current solution is ugly: "don't run the code if you run into '~$640188.docx'". My concern is that this will become more of a problem when I dump 11,000 files into the directory.
Where does this file come from?
Below is the code for reference
import docx2txt
import os

check_files = []
for dir, subdir, files in os.walk(r'C:\path\to\folder'):
    for file in files:
        check_files.append(file)
for file in check_files:
    print("file: {0}".format(file))
    text = docx2txt.process(r'C:\path\to\folder\{0}'.format(file))
Hidden .docx files whose names start with ~$ are simply temporary files created by Word while a document is open and being edited; the first two characters of the parent file's name are replaced with ~$. They are usually deleted once you save and close a document, but sometimes they manage to stick around after you quit anyway. Since they are designed as temporary complements to a proper .docx file, they do not necessarily have a valid zip package structure at all times.
You will do well to skip them. Checking whether the file name starts with '~' should be good enough. Just add the following filtering:
check_files2 = [fl for fl in check_files if fl[0] != '~']
for file in check_files2:
    text = docx2txt.process(r'C:\path\to\folder\{0}'.format(file))
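Putting the pieces together, a minimal sketch that also handles documents in sub-folders; the os.path.join handling is an addition, since the question's original code only resolved files in the top folder:

import os
import docx2txt

check_files = []
for dirpath, subdirs, files in os.walk(r'C:\path\to\folder'):
    for fname in files:
        # Skip Word's ~$ lock/temp files and keep full paths so
        # documents in sub-folders resolve correctly
        if not fname.startswith('~'):
            check_files.append(os.path.join(dirpath, fname))

for path in check_files:
    print("file: {0}".format(path))
    text = docx2txt.process(path)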

Permission denied when pandas dataframe to tempfile csv

I'm trying to store a pandas dataframe to a tempfile in CSV format (on Windows), but am being hit by:
[Errno 13] Permission denied: 'C:\Users\Username\AppData\Local\Temp\tmpweymbkye'
import tempfile
import pandas
with tempfile.NamedTemporaryFile() as temp:
    df.to_csv(temp.name)
where df is the dataframe. I've also tried changing the temp directory to one I am sure I have write permissions for:
tempfile.tempdir='D:/Username/Temp/'
This gives me the same error message
Edit:
The tempfile appears to be locked for editing, as when I change the code to:
with tempfile.NamedTemporaryFile() as temp:
    df.to_csv(temp.name + '.csv')
I can write the file in the temp directory, but then it is not automatically deleted at the end of the with block, as it is no longer the temp file itself.
However, if I change the code to:
with tempfile.NamedTemporaryFile(suffix='.csv') as temp:
    training_data.to_csv(temp.name)
I get the same error message as before. The file is not open anywhere else.
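As far as I can tell, the root cause on Windows is that NamedTemporaryFile keeps the file open, and Windows does not allow the file to be opened a second time by name while it is still held open, which is exactly what df.to_csv(temp.name) attempts. A minimal sketch of the common workaround, using delete=False and cleaning up manually (the stand-in DataFrame below is illustrative):

import os
import tempfile
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})  # stand-in for the question's df

# delete=False lets us close the handle so pandas can reopen the path
temp = tempfile.NamedTemporaryFile(suffix=".csv", delete=False)
try:
    temp.close()
    df.to_csv(temp.name, index=False)
    print(pd.read_csv(temp.name))  # use the file while it exists
finally:
    os.unlink(temp.name)  # manual cleanup replaces the automatic delete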
I encountered the same error message, and the issue was resolved after adding "/df.csv" to the file path:
df.to_csv('C:/Users/../df.csv', index=False)
Check your permissions and, according to this post, you can also run your program as an administrator (right click, "Run as administrator").
We can use the to_csv command to export a DataFrame in CSV format. Note that the code below will by default save the data into the current working directory. We can save it to a different folder by adding the folder name and a slash to the file name:
verticalStack.to_csv('foldername/out.csv')
Check your working directory to make sure the CSV wrote out properly and that you can open it! If you want, try to bring it back into Python to make sure it imports properly:
newOutput = pd.read_csv('out.csv', keep_default_na=False, na_values=[""])
Unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it.
mktemp(), on the other hand, may introduce a security hole into your program: by the time you get around to doing anything with the file name it returns, someone else may have beaten you to the punch. mktemp() usage can easily be replaced with NamedTemporaryFile(), passing it the delete=False parameter.
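A minimal sketch of the mkstemp() route under those rules, where the manual delete is on us; the stand-in DataFrame is illustrative:

import os
import tempfile
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})  # stand-in for the question's dataframe

# mkstemp returns an OS-level file descriptor plus the path; the file already exists
fd, path = tempfile.mkstemp(suffix=".csv")
try:
    os.close(fd)  # release the handle so pandas can reopen the path on Windows
    df.to_csv(path, index=False)
finally:
    os.remove(path)  # mkstemp leaves deletion to the caller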
After exporting to CSV you can close your file with temp.close():
with tempfile.NamedTemporaryFile(delete=False) as temp:
    df.to_csv(temp.name + '.csv')
    temp.close()
Note that with delete=False, neither the temporary file nor the extra .csv written next to it is removed automatically.
Sometimes you need to check that you actually have permission to read and write at the file path, especially when you use a relative path:
xxx.to_csv('%s/file.csv' % (file_path), index=False)
Sometimes it gives that error simply because there is another file with the same name, and Python has no permission to delete the earlier file and replace it with the new one.
So either name the file differently while saving it,
or,
if you are working in a Jupyter Notebook or a similar environment, delete the file after executing the cell that reads it into memory, so that when you execute the cell that writes it to disk, no other file with that name exists.
I encountered the same error. I simply had not yet saved my Python file. Once I saved it in VS Code as "insertyourfilenamehere".py to Documents (which is in my path), I ran my code again and was able to save my dataframe as a CSV file.
As far as I know, this error pops up when one attempts to save a file that has already been saved and is currently open in the background.
You may try closing those files first and then rerunning the code.
Just give a valid path and a file name,
e.g. (note the raw string, so the backslashes are not treated as escape sequences):
final_df.to_csv(r'D:\Study\Data Science\data sets\MNIST\sample.csv')

Python - Sort files in directory and use latest file in code

Long-time reader, first-time poster. I am very new to Python, and I will try to ask my question properly.
I have posted a snippet of the .py code I am using below. I am attempting to get the most recently modified file in the current directory and then pass it along later in the code.
This is the error I get in my log file when I attempt to run the file:
WindowsError: [Error 2] The system cannot find the file specified: '05-30-2012_1500.wav'
So it appears that it is in fact pulling a file from the directory, but that's about it. And actually, the file it pulls up is not the most recently modified file in that directory.
latest_page = max(os.listdir("/"), key=os.path.getmtime)
cause = channel.FilePlayer.play(latest_page)
os.listdir returns the names of files, not full paths to those files. Generally, when you use os.listdir(SOME_DIR), you then need os.path.join(SOME_DIR, fname) to get a path you can use to work with the file.
This might work for you:
files = [os.path.join("/", fname) for fname in os.listdir("/")]
latest = max(files, key=os.path.getmtime)
cause = channel.FilePlayer.play(latest)
