How to generate a unique name for each uploaded file? - python

I am building a Flask website, and I want to save the path of a file to my sqlite database.
I have a "create" view, where the user uploads an image and it gets stored in a folder:
@app.route('/create', methods=["GET", "POST"])
@login_required
def create():
    if request.method == "POST":
        file = request.files['file']
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
    return render_template('create.html')
I want to store a 'path' to each image in my sqlite database. To do this, I think, every filename should be unique. So how should I generate a unique name for each uploaded file?

An easy way to generate a unique file name would be to just use a numbering system (first file being 1, then increasing by 1). Like so:
counter = 0  # put this at the beginning of your script
then, when creating the file name:
counter += 1
filename = str(counter)
If you do not want your files to be named just numbers, you could hash the number to generate a unique code, like so (this requires import hashlib):
counter += 1
filename = str(counter).encode("utf-8")
filename = hashlib.sha224(filename).hexdigest()
If you do the latter, you will want to save this hash somewhere associated with the user so you can find the file again.
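For completeness, a rough sketch of how the hashed counter could be wired into the create view from the question (assuming the allowed_file helper, secure_filename import, and UPLOAD_FOLDER config shown above; the extension handling and the module-level counter are additions for illustration, and the counter resets when the app restarts):

import hashlib
import os

upload_counter = 0  # simple in-memory counter; resets on every app restart

@app.route('/create', methods=["GET", "POST"])
@login_required
def create():
    global upload_counter
    if request.method == "POST":
        file = request.files['file']
        if file and allowed_file(file.filename):
            upload_counter += 1
            # hash the counter so the stored name is not just a bare number
            unique = hashlib.sha224(str(upload_counter).encode("utf-8")).hexdigest()
            # keep the original extension so the file stays recognisable
            ext = os.path.splitext(secure_filename(file.filename))[1]
            filename = unique + ext
            path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
            file.save(path)
            # store `path` (or just `filename`) in the sqlite row here
    return render_template('create.html')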

Related

How to read a file from a directory and remove the extension when printing it in Django?

I'm building a web app on Django and I've implemented two functions: one saves csv files to a directory on my PC, and the other reads those files to display them on the website.
My problem is that I want to read the csv files from the directory and display them without the .csv extension, so only their names are visible, but I keep getting a FileNotFoundError.
This is the function that saves the files to a directory:
def openDataset(request):
    if request.method == "GET":
        return render(request, 'blog/upload_csv_ag.html')
    if request.FILES.get("file2") is not None:
        csv_file = request.FILES['file2']
        if not csv_file.name.endswith('.csv'):
            message = 'The uploaded file has to be CSV. Please try again.'
            return render(request, 'blog/upload_csv_ag.html', {'message': message})
        else:
            save_path = 'C:/Users/user/Desktop/Fault Detection App/Uploaded_Datasets/'
            file_name = csv_file.name
            fs = FileSystemStorage(location=save_path)
            file = fs.save(file_name, csv_file)
    else:
        message = 'no file is uploaded'
        return render(request, 'blog/upload_csv_ag.html', {'message': message})
    return render(request, 'blog/upload_csv_ag.html', {'message': 'Dataset Uploaded'})
and the function that reads the files from the directory
def read_datasets(request):
    path = r"C:/Users/user/Desktop/Fault Detection App/Uploaded_Datasets/"
    test = os.listdir(path)
    path1, dirs, files = next(os.walk(path))
    file_count = len(files)
    print(file_count)
    dataframes_list_html = []
    file_names = []
    index = []
    for i in range(file_count):
        temp_df = pd.read_csv(path + files[i])
        print(files[i])
        dataframes_list_html.append(temp_df.to_html(index=False))
        index.append(i)
        for item in test:
            if item.endswith(".csv"):
                os.remove(os.path.join(path, item))
        file_names.append(files[i])
    return render(request, 'blog/view_datasets.html', {'files': file_names})
Extracting the names of the files
Step 1: iterate through the directory
A simpler way would be to just do:
for file in os.listdir(base_path):
Step 2: remove the extension
You can use the method that evergreen suggested.
Step 3: store the processed string
Just append to your file_names list as you are already doing, and return the list in the response context.
Reading and displaying the CSVs' content
Actually reading and returning the content of the CSVs is slightly more involved, but your current approach of using pandas to read the files and converting the dataframes to HTML tables should work just fine. Just remember to return the dataframes_list_html list in the context as well so that you can access it in the template, as sketched below.
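Putting the three steps together, a minimal sketch (the base_path value and the 'tables' context key are assumptions for illustration; os.path.splitext does the extension stripping):

import os
import pandas as pd
from django.shortcuts import render

def read_datasets(request):
    base_path = 'C:/Users/user/Desktop/Fault Detection App/Uploaded_Datasets/'
    file_names = []
    dataframes_list_html = []
    for file in os.listdir(base_path):            # Step 1: iterate through the directory
        if not file.endswith('.csv'):
            continue
        name, _ext = os.path.splitext(file)       # Step 2: remove the extension
        file_names.append(name)                   # Step 3: store the processed string
        temp_df = pd.read_csv(os.path.join(base_path, file))
        dataframes_list_html.append(temp_df.to_html(index=False))
    return render(request, 'blog/view_datasets.html',
                  {'files': file_names, 'tables': dataframes_list_html})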

Upload files to GCS, skipping files that already exist, using Python

I have a GCS bucket called my-gcs with inconsistent subfolders such as:
parent-path/path1/path2/*
parent-path/path3/path4/path5/*
parent-path/path6/*
The files can be parquet/csv or other formats.
This is my function to copy the entire folder from local to GCS:
def upload_local_directory_to_gcs(src_path, dest_path, data_backup, file_name):
    """
    Upload the whole directory to GCS
    """
    logger.debug("Uploading directory...")
    storage_client = storage.Client.from_service_account_json(key_path)
    bucket = storage_client.get_bucket(GCS_BUCKET)
    if os.path.isfile(src_path):
        blob = bucket.blob(os.path.join(dest_path, os.path.basename(src_path)))
        blob.upload_from_filename(src_path)
        return
    for item in glob.glob(src_path + '/*'):
        file_exist = check_file_exist(data_backup, file_name)
        if os.path.isfile(item):
            print(item)
            if file_exist is False:
                blob = bucket.blob(os.path.join(dest_path, os.path.basename(item)),
                                   chunk_size=10485760)
                blob.upload_from_filename(item)
            else:
                logger.warning("Skipping upload. File already existed")
        else:
            if file_exist is False:
                upload_local_directory_to_gcs(item, os.path.join(dest_path, os.path.basename(item)),
                                              data_backup, file_name)
            else:
                logger.warning("Skipping upload. File already existed")
This is the function to check if specific file exist in the directory & sub-directory:
def check_file_exist(dataset, file_name):
    """
    Check if files existed
    """
    storage_client = storage.Client.from_service_account_json(key_path)
    bucket = storage_client.bucket(GCS_BUCKET)
    logger.debug("Checking if file already existed in GCS to skip upload...")
    blobs = bucket.list_blobs(prefix=f'parent-path{dataset}/')
    check_files = [blob.name for blob in blobs if file_name in blob.name]  # if '.' in blob.name
    return bool(len(check_files))
However, the code is not behaving correctly. Say the path parent-path/path1/path2/* already contains a file called first_file.csv. The code skips uploading the existing file in that path, but as soon as it encounters a file that does not yet exist, it uploads it and overwrites the other files in all directories as well.
I expected it to upload only the specific files that do not exist yet, without overwriting the others.
I tried my best to explain... please help.
If you have a look at the documentation, you can see this on the Name property of the blob:
The name of the blob. This corresponds to the unique path of the object in the bucket.
That means the value is not only the file name, but the fully qualified path plus the name: path/to/file.csv
In your loop, you check whether a file name (file.csv for example) is included in the blob path. Consider this case:
path/to/file.csv
path/to/to/file.csv
If you test whether file.csv exists, both blobs will return true.
To fix your issue, you need to
Either compare the strict equality of the target_path + file_name and the blob.name
Or include an additional condition in your "if" to check the bucket path in addition to the file name.
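A minimal sketch of the strict-equality option, assuming the target path inside the bucket is passed in alongside the file name (the target_path parameter and the expected_name layout are illustrative assumptions):

def check_file_exist(dataset, target_path, file_name):
    """
    Return True only if the exact blob path already exists in the bucket.
    """
    storage_client = storage.Client.from_service_account_json(key_path)
    bucket = storage_client.bucket(GCS_BUCKET)
    expected_name = f'parent-path{dataset}/{target_path}/{file_name}'
    blobs = bucket.list_blobs(prefix=f'parent-path{dataset}/')
    # strict equality on the full blob name instead of substring matching
    return any(blob.name == expected_name for blob in blobs)

Alternatively, bucket.blob(expected_name).exists() checks a single object without listing the prefix at all.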

Generating zip file in Flask

I am generating multiple .csv files using a script that is plugged into my Flask app. Below is the section of my routes file where the required inputs are taken and passed to the script.
@app.route('/summary_report', methods=['GET', 'POST'])
def summary_report():
    """
    Showing page for generating daily report
    :return:
    """
    form = frm.SummaryReportForm()
    if form.validate_on_submit():
        from_date = form.from_date.data
        is_active = form.is_active.data
        reports = current_summary_report.fetch_report(from_date=from_date, status=is_active)  # Go to the script
        return send_file(reports, as_attachment=True, attachment_filename="reports.zip")  # Download attachment
    return render_template('pages/reports/current/summary.html', form=form)
And my script file is running a loop and creating multiple csv files.
def fetch_report(from_date=from_date, status=is_active):
    for record in records:
        ....
        f = open(file_name + '-' + str(time) + '.csv', 'w+')
        f.write(csv)
        f.close()
What changes should I make here to bundle all the csv files into a zip file and make it downloadable?
Python allows you to quickly create zip/tar archives.
The following command will zip an entire directory:
shutil.make_archive(output_filename, 'zip', dir_name)
The following call gives you control over the files you want to archive:
ZipFile.write(filename)
Code explanation:
Import make_archive from the shutil module.
Use os.path.split to separate the directory and the file name from the path to the file you want to archive.
Then call shutil.make_archive(output_filename, "zip", root_dir) to create the archive file, which will be in zip format.
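As a rough sketch, under the assumption that fetch_report can write into a temporary directory and that a hypothetical build_csv(record) helper produces each file's text, the loop from the question could be bundled into a zip like this:

import os
import shutil
import tempfile

def fetch_report(from_date, status):
    # write the per-record csv files into a scratch directory
    work_dir = tempfile.mkdtemp()
    for i, record in enumerate(records):        # `records` as in the question
        csv_text = build_csv(record)            # hypothetical helper producing the csv text
        out_path = os.path.join(work_dir, 'report-%s-%s.csv' % (i, from_date))
        with open(out_path, 'w') as f:
            f.write(csv_text)
    # shutil.make_archive appends the .zip extension itself and returns the archive path
    archive_base = os.path.join(tempfile.gettempdir(), 'reports')
    return shutil.make_archive(archive_base, 'zip', work_dir)

The route can then pass the returned path straight to send_file(reports, as_attachment=True, attachment_filename="reports.zip"), exactly as it already does.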

Can I have two different sets of app configurations?

I have made a Flask REST API server that handles an upload to a specific folder.
The upload must have a specific file extension and must go into a specific folder.
So I made these two app.config entries:
app.config['UPLOAD_EXTENSIONS'] = ['.stp', '.step']
app.config['UPLOAD_PATH'] = 'uploads'
However, I want to make a new route for uploading files with a different extension to another folder.
So can I have two sets of app.config['UPLOAD_EXTENSIONS'] and app.config['UPLOAD_PATH']?
One set would be for extension1 in folder1 and the other set for extension2 in folder2.
Try using the Flask-Uploads extension.
Alternatively, based on the file format, choose a subdirectory inside your upload path:
import os

def select_directory(filename: str) -> str:
    file_format = filename.split('.')[1]
    your_path1 = 'path1'
    your_path2 = 'path2'
    if file_format in ('your format1', 'your format2'):
        full_path = os.path.join(app.config['UPLOAD_FOLDER'], your_path1, filename)
    else:
        full_path = os.path.join(app.config['UPLOAD_FOLDER'], your_path2, filename)
    return full_path
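A short usage sketch, assuming the usual Flask request and werkzeug secure_filename imports and a hypothetical /upload2 route (the target subdirectories are assumed to exist):

@app.route('/upload2', methods=['POST'])
def upload2():
    uploaded = request.files['file']
    filename = secure_filename(uploaded.filename)
    # select_directory picks folder1 or folder2 from the file's extension
    uploaded.save(select_directory(filename))
    return 'Uploaded', 200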

Pyramid: How can I make a static view to some absolute path, and then let users upload files to that path?

In my view callable, I want users to be able to create a new file called filename like so:
@view_config(route_name='home_page', renderer='templates/edit.pt')
def home_page(request):
    if 'form.submitted' in request.params:
        name = request.params['name']
        input_file = request.POST['stl'].filename
        vertices, normals = [], []
        for line in input_file:
            parts = line.split()
            if parts[0] == 'vertex':
                vertices.append(map(float, parts[1:4]))
            elif parts[0] == 'facet':
                normals.append(map(float, parts[2:5]))
        ordering = []
        N = len(normals)
        # ...parsing data...
        data = [vertices, ordering]
        jsdata = json.dumps(data)
        renderer_dict = dict(name=name, data=jsdata)
        app_dir = request.registry.settings['upload_dir']
        filename = "%s/%s" % (app_dir, name)
        html_string = render('tutorial:templates/view.pt', renderer_dict, request=request)
        with open(filename, 'w') as file:
            file.write(html_string)
        return HTTPFound(location=request.static_url('tutorial:pages/%(pagename)s.html' % {'pagename': name}))
    return {}
Right now, when I attempt to upload a file, I get this error message: IOError: [Errno 2] No such file or directory: u'/path/pages/one' (one is the name variable). I believe this is because I am incorrectly defining the app_dir variable. I want filename to be the URL of the new file that is being created with the name variable defined above (so that it can be accessed at www.domain.com/pages/name). Here is the file structure of my app:
env
    tutorial
        tutorial
            templates
                home.pt
            static
            pages
                (name1)
                (name2)
                (name3)
                ....
            views.py
            __init__.py
In my __init__.py I have:
config.add_static_view(name='path/pages/', path=config.registry.settings['upload_dir'])
In my development.ini file I have:
[app:main]
use = egg:tutorial
upload_dir = /path/pages
Edit: If anyone has an idea on why this question isn't getting much attention, I would love to hear it.
While I suspect you have a misunderstanding of how to serve up user-generated content, I will show you a way to do what you're asking. Generally user-generated content is not uploaded into your source tree; instead you provide some configurable spot outside it to place it, as I show below.
Make the path configurable via your INI file:
[app:main]
use = egg:tutorial
upload_dir = /path/to/writable/upload/directory
Add a static view that can serve up files under that directory.
config.add_static_view(name='/url/to/user_uploads', path=config.registry.settings['upload_dir'])
In your upload view you can get your app_dir via
app_dir = request.registry.settings['upload_dir']
Copy the data there, and from then on it'll be available at /url/to/user_uploads/filename.
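A minimal sketch of that copy step, assuming the 'stl' field name from the question and the upload_dir setting above; shutil.copyfileobj streams the uploaded file object to disk:

import os
import shutil

from pyramid.httpexceptions import HTTPFound
from pyramid.view import view_config

@view_config(route_name='home_page', renderer='templates/edit.pt')
def home_page(request):
    if 'form.submitted' in request.params:
        name = request.params['name']
        upload = request.POST['stl']            # FieldStorage-like object with .file and .filename
        app_dir = request.registry.settings['upload_dir']
        filename = os.path.join(app_dir, name)
        # copy the uploaded bytes into the configured upload directory
        with open(filename, 'wb') as out:
            shutil.copyfileobj(upload.file, out)
        # the file is now served by the static view added above
        return HTTPFound(location='/url/to/user_uploads/' + name)
    return {}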
