I have a Django model, and all I need to do is upload an image file into a new database record. Is there any way I can automate this, since I have a lot of images to upload? All the images come from a folder on my computer, and each picture should be added as a new database record. All help is appreciated. Thanks
Just run a simple script to save the files stored in a particular folder:
from django.core.files import File

class MyModel(models.Model):
    picture = models.ImageField()

# save() must be called on a model instance, and the source file
# should be opened in binary mode
instance = MyModel()
instance.picture.save('abc.png', File(open('/tmp/pic.png', 'rb')))
To do this for all files in a directory -
import os

BASE_PATH = '/home/somefolder'

for file_name in os.listdir(BASE_PATH):
    instance = MyModel()
    instance.picture.save(file_name, File(open(os.path.join(BASE_PATH, file_name), 'rb')))
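One caveat with that loop: os.listdir returns every directory entry, including subfolders, so open() will fail on anything that isn't a regular file. A small stdlib filter helps (helper name and the extension set are my assumptions, not part of the original answer):

```python
import os

# extensions to accept -- adjust to whatever your folder contains
IMAGE_EXTENSIONS = {'.png', '.jpg', '.jpeg', '.gif'}

def image_files(base_path):
    """Yield names of regular files in base_path that look like images."""
    for name in sorted(os.listdir(base_path)):
        full = os.path.join(base_path, name)
        ext = os.path.splitext(name)[1].lower()
        if os.path.isfile(full) and ext in IMAGE_EXTENSIONS:
            yield name
```

Then the loop becomes `for file_name in image_files(BASE_PATH): ...` and skips subdirectories and stray non-image files.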
Hi guys, I need to generate a zip file from a bunch of images and attach it as a file on a database record.
This is the code that generates the file:
import os
import shutil
import urllib.request

def create_zip_folder(asin):
    print('creating zip folder for asin: ' + asin)
    asin_object = Asin.objects.get(asin=asin)

    # create folder
    output_dir = f"/tmp/{asin}"
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    # download images
    for img in asin_object.asin_images.all():
        urllib.request.urlretrieve(img.url, f"{output_dir}/{img.id}.jpg")

    # zip all files in output_dir
    zip_file = shutil.make_archive(asin, 'zip', output_dir)
    asin_object.zip_file = zip_file
    asin_object.has_zip = True
    asin_object.save()

    # delete folder
    shutil.rmtree(output_dir)
    return True
This all works and I can see the files generated in my editor, but when I try to access it in the template via asin.zip_file.url I get this error:
SuspiciousOperation at /history/
Attempted access to '/workspace/B08MFR2DRS.zip' denied.
Why is this happening? I thought the file would be uploaded to the file storage through the model, but apparently it ends up in a restricted folder. This happens both in development (with local file storage) and in production (with an S3 bucket as the file storage).
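One detail worth noting about the code above: when shutil.make_archive is given a relative base name, it writes the archive into the process's current working directory (here apparently /workspace), not into MEDIA_ROOT, and the FileField ends up storing that raw path. A stdlib-only sketch of that behaviour (all paths and the "B000TEST" name are throwaway examples):

```python
import os
import shutil
import tempfile

# build a scratch "image" folder to archive
source = tempfile.mkdtemp()
open(os.path.join(source, "1.jpg"), "wb").close()

workdir = tempfile.mkdtemp()
os.chdir(workdir)  # stand-in for the app's working directory

# a relative base name makes the archive land in the CWD, not MEDIA_ROOT
archive = shutil.make_archive("B000TEST", "zip", source)
# the zip now exists at <workdir>/B000TEST.zip
```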
I have developed a Django app where I upload a file and do some processing using a project folder named media.
Process:
The user uploads a CSV file, and the Python code processes the CSV data by creating temp folders in the media folder. After processing is complete, these temp folders are deleted and the processed data is downloaded through the browser.
I am using the lines of code below to create and delete the temp folder after processing:
temp = 'media/temp3'
os.mkdir(temp)
shutil.copyfile('media/' + file_name, temp + '/' + file_name)
shutil.rmtree(temp, ignore_errors=True)
To set the media root, I use the lines below in settings.py, and I am sure I am not using them in other parts of the code:
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
MEDIA_URL = "/media/"
Everything works fine when I run the app on localhost, but as soon as I deployed it to Heroku it seems these folders are not created / cannot be found.
I am looking for either a solution to create, read, and delete folders/files at runtime on Heroku, or a better way to manage files/folders at runtime.
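On the "better way to manage files/folders at runtime" part: the Heroku dyno filesystem is ephemeral but writable, so short-lived scratch space still works; a hedged sketch using the stdlib tempfile module instead of a hand-managed media/temp3 folder (the function name is hypothetical, and src_path stands for the uploaded file's location):

```python
import os
import shutil
import tempfile

def process_with_scratch_dir(src_path):
    """Copy the uploaded file into a throwaway directory, process it,
    and let the context manager remove the directory afterwards."""
    with tempfile.TemporaryDirectory() as temp:
        work_copy = os.path.join(temp, os.path.basename(src_path))
        shutil.copyfile(src_path, work_copy)
        # ... process work_copy here ...
        return os.path.exists(work_copy)
    # the directory and its contents are gone once the block exits
```

TemporaryDirectory picks an OS-appropriate location and cleans up even if the processing raises, which avoids leftover temp3 folders entirely.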
I'm working on a news blog where you can attach as many files to a news post as you want. To store the files I'm using Amazon S3 and django-storages. But after I added a news-update view, I ran into some problems with file management.
As you can see, here is my files model:
class FileStorage(models.Model):
    file = models.FileField(upload_to=uploadFile)
    upload_path = models.TextField(blank=True, default='files/')

    def __str__(self):
        return f'File: {self.file.name.split("/")[-1]}'
The main problem is how to update the FileField after moving the file into another directory.
Here is my file-moving script:
bucket = S3Boto3Storage()
from_path = bucket._normalize_name(bucket._clean_name(self.instance.file.name))
to_path = bucket._normalize_name(bucket._clean_name(self.cleaned_data['upload_path']))
result = bucket.connection.meta.client.copy_object(
    Bucket=bucket.bucket_name,
    CopySource=bucket.bucket_name + "/" + from_path,
    Key=to_path,
)
bucket.delete(from_path)
Everything works, but only for the object on S3: the file in the FileField still stores the old path. How can I update it too?
If you want this, just change the file's name attribute, which holds the storage-relative path:
self.instance.file.name = to_path
self.instance.save()
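One wrinkle: upload_path in the model is a directory prefix ('files/'), so the new name needs the original filename joined onto it. A minimal sketch of building that key with pure string handling (the helper name is hypothetical):

```python
import posixpath

def build_new_name(upload_path, old_name):
    """Join the target directory with the original filename;
    S3 keys always use forward slashes."""
    return posixpath.join(upload_path.strip('/'), posixpath.basename(old_name))

new_name = build_new_name('files/2023/', 'files/old/report.pdf')
# new_name -> 'files/2023/report.pdf'
```

The resulting key is what goes into file.name before saving the model, assuming the object was copied to that same key on S3.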
I am trying to upload a file using Flask and the boto3 module to one of my Amazon S3 buckets. My code does not just upload the file; it also uploads the folder where that file is stored. Can somebody please help me with that? I am already providing the path of the file in the code, which you can see below. How does the upload work from the HTML button?
@app.route("/upload", methods=['POST'])
def upload():
    if request.method == "POST":
        f = request.files['file']
        f.save(os.path.join(UPLOAD_FOLDER, f.filename))
        upload_file(f"readme/{f.filename}", BUCKET)
    return redirect("/storage")
Folders do not actually exist in Amazon S3. If you upload a file to a folder, the folder will magically 'appear'. Later, if you delete all files in the (pretend) folder, then the folder will disappear.
If you use the "Create Folder" button in the S3 management console, it actually creates a zero-length object with the same name as the folder. This 'forces' the folder to appear because it contains an object (but that object isn't displayed).
So, when you say "it also uploads the folder where that file is stored", you are probably just seeing the folder name 'appear'. It probably only uploaded one file.
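This flat keyspace can be illustrated without S3 at all: a bucket is essentially a mapping from keys to objects, and "folders" are just key prefixes derived for display (a plain-Python sketch, not the boto3 API):

```python
# A bucket is a flat key -> object mapping; there is no directory tree.
bucket = {
    "readme/notes.txt": b"hello",
    "readme/pics/cat.jpg": b"\xff\xd8",
}

def visible_folders(keys):
    """Derive the 'folders' a console would display, purely from key prefixes."""
    folders = set()
    for key in keys:
        parts = key.split("/")[:-1]  # drop the object's own name
        for i in range(1, len(parts) + 1):
            folders.add("/".join(parts[:i]) + "/")
    return folders

print(sorted(visible_folders(bucket)))  # -> ['readme/', 'readme/pics/']
```

Delete both objects and visible_folders returns an empty set, which is exactly the "folder disappears" behaviour described above.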
I want to find the original path of the uploaded image, but 'image_path' gives me the location of the project. Is it possible to save the path where the uploaded image was located on the uploader's machine?
def image_data():
    data = {
        'full_image': request.files['image'],
        'image_name': request.files['image'].filename,
        'image_path': os.path.realpath(request.files['image'].filename),
    }
    return data
As in the path on the machine of the user who uploaded the file? Only the file is uploaded, not its path on the user's side, and you wouldn't get much value from that information anyway.
The path of the image is where it is currently stored for Flask, inside the project, which is where the uploaded image file probably is.
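The reason image_path points at the project is mechanical: a browser upload carries only the bare filename, and os.path.realpath resolves a bare filename against the server process's current working directory. A minimal stdlib sketch ("cat.jpg" is an example filename):

```python
import os

# A browser upload supplies only the filename, e.g. "cat.jpg",
# never the client's directory. realpath() on a bare filename
# resolves it relative to the server's CWD -- the project folder.
filename = "cat.jpg"
resolved = os.path.realpath(filename)

print(resolved)  # e.g. /path/to/your/project/cat.jpg
```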