This code was working before today.
I am using this line to encrypt a file and save it:
gpg.encrypt_file(f, recipients=encrypt_key, output=encrypted_file)
Here is the code:
import os
import gnupg

gpg = gnupg.GPG()
path = 'secure_data'  # ********** Please create the secure_data folder in utils before running
os.chdir(path)
files = sorted(os.listdir(os.getcwd()), key=os.path.getmtime)
latest_file = files[-1]

if encrypt_key is not None:
    with open(latest_file, 'rb') as f:
        encrypted_file = latest_file + ".gpg"
        gpg.encrypt_file(f, recipients=encrypt_key, output=encrypted_file)
The code throws no errors, but no file is created. It worked previously up until today.
Any thoughts?
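One thing that might narrow it down (a diagnostic sketch, assuming python-gnupg): encrypt_file returns a result object whose ok, status, and stderr attributes usually say why no output file was written, for example a missing or untrusted recipient key.
result = gpg.encrypt_file(f, recipients=encrypt_key, output=encrypted_file)
if not result.ok:
    # status is a short summary, stderr carries the full gpg output
    print("Encryption failed:", result.status)
    print(result.stderr)
    # if the key is valid but not fully trusted, passing always_trust=True often helps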
I'm trying to upload a file with my Slack bot. I can send a file with an exact path and name without any problem, for example "C:/Program Files/file.txt". But in this case I want to upload the LATEST (newest) file from a given directory (with X .txt files), because these .txt files are automatically generated and have a date and time added to their names, for example "file 06-02-2023 10:23.txt", so I can't upload them easily!
I used this Python code to get the latest file from the directory:
import os
import glob
dir_path = 'C:/Users/MyUser/Desktop/Files'
list_of_files = glob.glob(os.path.join(dir_path, '*.txt'))
latest_file = max(list_of_files, key=os.path.getctime)
print(latest_file)
I then tried to use this variable as the file (path + name), as shown in the code below:
import os
import glob
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
dir_path = 'C:/Users/MyUser/Desktop/Files'
list_of_files = glob.glob(os.path.join(dir_path, '*.txt'))
latest_file = max(list_of_files, key=os.path.getctime)
client = WebClient(token="xoxb-MY-BOT-TOKEN")
client.files_upload(channels="my-channel", file=latest_file, title='Test', filetype='.txt')
but I get the error: {"ok":false,"error":"no_file_data"}
I've tried various versions of the code above (with small changes), for example:
latest_file = max(list_of_files, key=os.path.getctime)
just_file = os.path.basename(latest_file)
client.files_upload(channels="my-channel", file=f"C:/Users/MyUser/Desktop/Files"+just_file, title='Test', filetype='.txt')
No change to the file path or name made a difference; it's just "error":"no_file_data" all the time...
Is it possible to do this at all? Or maybe such code isn't allowed and you have to pass just the exact full file path?
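For what it's worth, here is what I would try (a sketch, assuming the bot token and channel are valid): open the file yourself and pass the file object plus an explicit filename to files_upload, rather than relying on the path string alone. Note that filetype expects a type name such as "text" rather than an extension, so it is simply omitted here.
import glob
import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

dir_path = 'C:/Users/MyUser/Desktop/Files'
latest_file = max(glob.glob(os.path.join(dir_path, '*.txt')), key=os.path.getctime)

client = WebClient(token="xoxb-MY-BOT-TOKEN")
try:
    with open(latest_file, 'rb') as fh:
        client.files_upload(
            channels="my-channel",
            file=fh,                                   # file object, not just the path string
            filename=os.path.basename(latest_file),    # name shown in Slack
            title='Test',
        )
except SlackApiError as e:
    print(e.response['error'])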
I am currently trying to create a loop that goes through a folder, converts every file from .zst to JSON, and then puts the result in a new folder. I have encountered the error above once it gets to the second file in the directory.
import os
import pathlib
import json
import zstandard

directory = pathlib.Path(r"D:\data")          # raw strings so the backslashes are not treated as escapes
output_dir = pathlib.Path(r"D:\New\Folder")

for file_name in os.listdir(directory):
    if file_name.endswith(".zst"):
        input_file = directory / file_name    # full path, not just the bare file name
        output_path = output_dir / input_file.stem
        with open(input_file, 'rb') as compressed:
            decomp = zstandard.ZstdDecompressor()
            with open(output_path, 'wb') as destination:
                decomp.copy_stream(compressed, destination)
This is my current code, as I am still trying to figure out how to have it output JSON instead of an extensionless file. Any guidance would be greatly appreciated.
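A sketch of one way to end up with .json output (assuming each .zst archive contains newline-delimited JSON, as Reddit-style dumps do): stream-decompress the file, parse each line, and write the result back out with a .json suffix.
import io
import json
import pathlib
import zstandard

input_file = pathlib.Path(r"D:\data\example.zst")   # hypothetical input file
output_path = pathlib.Path(r"D:\New\Folder") / (input_file.stem + ".json")

with open(input_file, 'rb') as compressed, open(output_path, 'w', encoding='utf-8') as destination:
    reader = zstandard.ZstdDecompressor().stream_reader(compressed)
    text = io.TextIOWrapper(reader, encoding='utf-8')
    for line in text:                    # assumes one JSON object per line
        obj = json.loads(line)           # fails loudly if the data is not valid JSON
        destination.write(json.dumps(obj) + "\n")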
I'm trying to remove all of the Outlook .ost and .nst files from the user's folder on a network PC, and to write which files were removed into a CSV file.
I'm able to get it to find all the files in the directory and write them to a CSV file, but when I try to remove the files with os.remove it doesn't seem to run, so I've commented it out for the time being.
I added the try/except to skip the files that are in use.
import os
import sys

sys.stdout = open("output_file.csv", "w")

try:
    for rootDir, subdir, files in os.walk("//network_pc_name/c$/Users"):
        for filenames in files:
            if filenames.endswith((".nst", ".ost")):
                foundfiles = os.path.join(rootDir, filenames)
                #os.remove(os.path.join(rootDir, filenames))
                print(foundfiles)
except:
    pass

sys.stdout.close()
I made some changes to the script as suggested and it appears to run a lot quicker; however, I can't seem to figure out how to ignore files which are in use.
I switched the file extensions to .xlsx and .txt to simulate the .xlsx file being open (and receiving the permission error) and to see if the script would continue to run and remove the .txt file.
I got the following error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: '//DESKTOP-HRLS19N/c$/globtest\Book1.xlsx'
import glob
import os

files = [i for i in glob.glob("//DESKTOP-HRLS19N/c$/globtest/**", recursive=True) if i.endswith((".xlsx", ".txt"))]
[os.remove(f) for f in files]

with open("output_file.csv", "w") as f:
    f.writelines("\n".join(files))
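One way to skip files that are currently open is to catch PermissionError around each individual os.remove call, so a locked file is just logged and the loop moves on (a minimal sketch using the test paths above):
import glob
import os

files = [i for i in glob.glob("//DESKTOP-HRLS19N/c$/globtest/**", recursive=True) if i.endswith((".xlsx", ".txt"))]

removed, skipped = [], []
for path in files:
    try:
        os.remove(path)
        removed.append(path)
    except PermissionError:
        # the file is open in another process (WinError 32); skip it
        skipped.append(path)

with open("output_file.csv", "w") as f:
    f.writelines("\n".join(removed))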
In my experience glob is much easier:
print([i for i in glob.glob("//network_pc_name/c$/Users/**", recursive=True) if i.endswith((".nst", ".ost"))])
Assuming that prints out the files you're expecting:
files = [i for i in glob.glob("//network_pc_name/c$/Users/**", recursive=True) if i.endswith((".nst", ".ost"))]

removed_files = []
for file in files:
    try:
        size = os.path.getsize(file)
        os.remove(file)
        removed_files.append(file + " Bytes: " + str(size))
    except Exception as e:
        print("Could not remove file: " + file)

with open("output_file.csv", "w") as f:
    f.writelines("\n".join(removed_files))
I have a folder that stores a JSON file in my Django application folder, i.e. test_data/data.json.
In my tests.py, I am trying to read this file using the following code:
with open('/test_data/data.json', 'r') as f:
    self.response_data = json.load(f)
However, I keep on getting the following error:
FileNotFoundError: [Errno 2] No such file or directory: '/test_data/data.json'
What am I doing wrong? Thanks.
Edit: I tried removing the leading slash, yet I still get the same error.
Try this:
import os

with open(os.getcwd() + '/test_data/data.json', 'r') as f:
    self.response_data = json.load(f)
If you're opening files in directories close to where your code is, it is common to place
import os
DIRNAME = os.path.dirname(__file__) # the directory of this file
at the top of the file.
Then you can open files in a test_data subdirectory with
with open(os.path.join(DIRNAME, 'test_data', 'data.json'), 'rb') as fp:
    self.response_data = json.load(fp)
You probably want to open JSON files, which should be UTF-8 encoded, in 'rb' (read-binary) mode.
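An equivalent using pathlib, if you prefer it (same idea: resolve the path relative to this file rather than the working directory):
import json
from pathlib import Path

TEST_DATA = Path(__file__).resolve().parent / 'test_data' / 'data.json'

with TEST_DATA.open('rb') as fp:
    self.response_data = json.load(fp)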
I want to move a large number of files from a Windows system to a Unix FTP server using Python. I have a CSV which has the current full path and filename and the new base path to send it to (see here for an example dataset).
I have a script using os.renames to do the transfer and directory creation on Windows, but I can't figure out a way to easily do it via FTP.
import os, glob, arcpy, csv, sys, shutil, datetime, ftplib

top = os.getcwd()
RootOutput = top
startpath = top

FileList = csv.reader(open('FileList.csv'))
filecount = 0
successcount = 0
errorcount = 0

# Copy/Move to FTP when required
ftp = ftplib.FTP('xxxxxx')
ftp.login('xxxx', 'xxxx')
directory = '/TransferredData'
ftp.cwd(directory)

##f = open(RootOutput+'\\Success_LOG.txt', 'a')
##f.write("Log of files successfully processed. RESULT of process run #:"+str(datetime.datetime.now())+"\n")
##f.close()
##
for File in FileList:
    infile = File[0]
    # local network ver
    #outfile = RootOutput + File[4]
    #os.renames(infile, outfile)

    # ftp network ver
    #outfile = RootOutput + File[4]
    #ftp.mkd(directory)
    print(infile, outfile)
I tried the process in http://forums.arcgis.com/threads/17047-Upload-file-to-FTP-using-Python-ftplib, but that is for moving all files in a directory; I have the old and new full file names and just need it to create the intermediate directories.
Thanks,
The following might work (untested):
def mkpath(ftp, path):
    path = path.rsplit('/', 1)[0]  # parent directory
    if not path:
        return
    try:
        ftp.cwd(path)
    except ftplib.error_perm:
        mkpath(ftp, path)
        ftp.mkd(path)
ftp = FTP(...)
directory = '/TransferredData/'

for File in FileList:
    infile = File[0]
    outfile = File[4].split('\\')  # need forward slashes in FTP
    outfile = directory + '/'.join(outfile)
    mkpath(ftp, outfile)
    ftp.storbinary('STOR ' + outfile, open(infile, 'rb'))
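A slightly tidied variant of that loop, closing the local file after each upload and the connection at the end (same assumptions about the CSV layout: source path in column 0, destination path in column 4):
import csv
import ftplib

ftp = ftplib.FTP('xxxxxx')
ftp.login('xxxx', 'xxxx')
directory = '/TransferredData'

with open('FileList.csv', newline='') as csv_file:
    for row in csv.reader(csv_file):
        infile = row[0]
        # convert the Windows path to forward slashes and anchor it under the FTP directory
        outfile = directory + '/' + '/'.join(row[4].split('\\')).lstrip('/')
        mkpath(ftp, outfile)                      # mkpath as defined above
        with open(infile, 'rb') as fh:            # close the local file after each upload
            ftp.storbinary('STOR ' + outfile, fh)

ftp.quit()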