Hope all Stack members are doing well. I am able to fetch the binary data of a product image using this code:
p_ids = self.env.context.get('active_ids')
product_templates = self.env['product.template']
for p_id in p_ids:
    binary_data = product_templates.search([('id', '=', p_id)]).image
    data = base64.b64decode(binary_data)
    file_path = "marketplaces/rakuten_ftp/static/imageToSave_" + str(p_id) + ".png"
    with open(file_path, "wb") as img_file:
        img_file.write(data)
The above code creates files from the binary data, but I failed to apply a condition based on the mimetype, because when I query the ir_attachment table with product IDs it returns nothing:
for p_id in p_ids:
    attachments = self.env['ir.attachment']
    mimetype = attachments.search([('res_id', '=', p_id)])
I am treating res_id as the product ID, but Odoo fails to find any record for that ID. So if anybody has an idea of how I can get the mimetype for my product ID, please help.
Your code looks good! But as per the ir.attachment model, binary data is stored in the datas field, so you can use that field to decode the binary data into an image file.
I already tried the code below in Odoo v11, and it works: it creates a new image file from the binary data stored in the datas field.
product_image = self.env['ir.attachment']
product_images = product_image.search([('id', 'in', p_ids)])
for rec in product_images:
    with open("imageToSave.jpg", "wb") as img_file:
        img_file.write(base64.b64decode(rec.datas))
You can also add a condition on the mimetype: since p_ids can contain multiple IDs, take only those records whose mimetype is image/jpeg or image/png.
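For example, a minimal sketch of that search domain (assuming, as above, that the IDs in p_ids are ir.attachment record IDs):
# Sketch: filter the attachments by mimetype directly in the search domain.
attachments = self.env['ir.attachment'].search([
    ('id', 'in', p_ids),
    ('mimetype', 'in', ['image/jpeg', 'image/png']),
])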
EDIT #1
The code snippet below has already been checked against Odoo v11.0:
import base64
from odoo.tools.mimetypes import guess_mimetype

p_ids = [16, 18, 11, 38, 39, 40]  # taking random ids of product.template
product_templates = self.env['product.template']
for p_id in p_ids:
    binary_data = product_templates.search([('id', '=', p_id)]).image
    mimetype = guess_mimetype(base64.b64decode(binary_data))
    file_path = ""
    if mimetype == 'image/png':
        file_path = "/home/Downloads/" + str(p_id) + ".png"
    elif mimetype == 'image/jpeg':
        file_path = "/home/Downloads/" + str(p_id) + ".jpeg"
    if file_path:
        with open(file_path, "wb") as img_file:
            img_file.write(base64.b64decode(binary_data))
Product images aren't saved as records of ir.attachment. OK, maybe that has changed, but I didn't find anything quickly.
What you can do is use ir.attachment's method _compute_mimetype().
The following example wasn't tested:
def get_mimetype_of_product_image(self, product_id):
    product = self.env['product.product'].browse(product_id)
    mimetype = self.env['ir.attachment']._compute_mimetype(
        {'values': product.image})
    return mimetype
So our DB was designed very badly: there are no foreign keys linking the tables.
I need to fetch the complete information and export it to CSV. The challenge is that the information needs to be queried from multiple tables (e.g., the user table only stores a sectionid; to get the section detail, I have to query the section table and match it against the sectionid acquired from the user table).
So I did this using a serializer, because there are multiple such fields.
The problem with my current method is that it is very slow, because it runs a query per object (queryset) to match against the other tables using uuid/userid/any id.
This is my view:
class FileDownloaderSerializer(APIView):
    def get(self, request, **kwargs):
        filename = "All-users.csv"
        f = open(filename, 'w')
        datas = Userstable.objects.using(dbname).all()
        serializer = UserSerializer(datas, context={'sector': sector}, many=True)
        df = pd.DataFrame(serializer.data)  # serializer.data is a list of dicts; wrap it so to_csv works
        df.to_csv(f, index=False, header=False)
        f.close()
        wrapper = FileWrapper(open(filename))
        response = HttpResponse(wrapper, content_type='text/csv')
        response['Content-Length'] = os.path.getsize(filename)
        response['Content-Disposition'] = "attachment; filename=%s" % filename
        return response
So note that I need a single exported file, which is a .csv.
This is my serializer:
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = Userstable
        fields = '__all__'

    section = serializers.SerializerMethodField()

    def get_section(self, obj):
        return section.objects.using(dbname).get(pk=obj.sectionid).sectionname

    department = serializers.SerializerMethodField()

    def get_department(self, obj):
        return section.objects.using(dbname).get(pk=obj.deptid).deptname
I'm showing only two tables here, but in my code I have a total of 5 different tables.
I tried limiting it to 100 rows and it succeeded; I tried fetching 300,000 rows and it took 3 hours to download the CSV. Certainly not efficient. How can I solve this?
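One way to attack this, sketched below purely as an illustration (the model names come from the question; the field names passed to values_list are assumptions), is to build each lookup table once as an in-memory dict and stream the rows to CSV, so you run one query per table instead of one per row:
# Sketch only: one query per lookup table, then dict lookups in memory.
# Mirrors the question's serializer, which resolves both IDs via `section`.
import csv

def export_users(dbname, path="All-users.csv"):
    section_names = dict(
        section.objects.using(dbname).values_list('pk', 'sectionname'))
    dept_names = dict(
        section.objects.using(dbname).values_list('pk', 'deptname'))
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        rows = Userstable.objects.using(dbname).values_list(
            'id', 'sectionid', 'deptid').iterator()
        for user_id, sectionid, deptid in rows:
            writer.writerow([user_id,
                             section_names.get(sectionid),
                             dept_names.get(deptid)])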
I'm trying to read a PDF form with Django. In another view in my views.py I've succeeded in doing it using PyPDF2 and its PdfFileReader.getFields() method.
Now the problem is that the reading is not working properly: I've checked with Adobe Acrobat and the file is still a form with actual fields, so I don't really have any idea what the problem could be.
I'm attaching the relevant portion of the code here:
if request.method == "POST":
form = Form(request.POST, request.FILES) # the form refer to a model called 'New Request'
if form.is_valid():
form.save()
File = request.FILES['File'].name
full_filename = os.path.join(BASE_DIR, 'media/media', File)
f = PdfFileReader(full_filename)
fields = f.getFields()
fdfinfo = dict((k, v.get('/V', '')) for k, v in fields.items())
k = creare_from_pdf2(request, fdfinfo, pk) # this is a custom function
nr = NewRequest.objects.all() #I'm deleting the object uploaded because it won't be useful anymore
nr.delete()
os.remove(full_filename)
If I print(fdfinfo) it actually shows {}. This of course leads to an error when fdfinfo is passed into the creare_from_pdf2 function. I don't really know what the problem could be, especially because in another view I did exactly the same thing and it works:
if request.method == 'POST':
    form = Form(request.POST, request.FILES)
    if form.is_valid():
        form.save()
        uploaded_filename = request.FILES['File'].name
        full_filename = os.path.join(BASE_DIR, 'media/media', uploaded_filename)
        f = PdfFileReader(full_filename)
        fields = f.getFields()
        fdfinfo = dict((k, v.get('/V', '')) for k, v in fields.items())
        k = create_from_pdf1(request, fdfinfo)
        if k == 1:
            return HttpResponse('<html><body>Something went wrong</body></html>')
        nr = NewRequest.objects.all()
        nr.delete()
        os.remove(full_filename)
Is there maybe a way to display the errors PdfFileReader encounters?
UPDATE
The new file that I'm trying to read was first modified: some (BUT NOT ALL!) of its fields were filled in with PdfFileWriter, and the filled ones were then set to read-only. Could this operation have affected PdfFileReader? I'm attaching the corresponding view:
att = MAIN.objects.get(id=pk)
file_path = os.path.join(BASE_DIR, 'nuova_form.pdf')
input_stream = open(file_path, "rb")
pdf_reader = PdfFileReader(input_stream, strict=False)
if "/AcroForm" in pdf_reader.trailer["/Root"]:
    pdf_reader.trailer["/Root"]["/AcroForm"].update(
        {NameObject("/NeedAppearances"): BooleanObject(True)})
pdf_writer = PdfFileWriter()
set_need_appearances_writer(pdf_writer)
if "/AcroForm" in pdf_writer._root_object:
    # AcroForm holds the form fields; set NeedAppearances to fix printing issues
    pdf_writer._root_object["/AcroForm"].update(
        {NameObject("/NeedAppearances"): BooleanObject(True)})
data_dict1 = {
    # my text fields
}
data_dict2 = {
    # my boolean checkbox fields
}
for i in range(0, 6):  # the PDF file has 6 pages
    pdf_writer.addPage(pdf_reader.getPage(i))
    page = pdf_writer.getPage(i)
    # update text form fields
    pdf_writer.updatePageFormFieldValues(page, data_dict1)
    for j in range(0, len(page['/Annots'])):
        writer_annot = page['/Annots'][j].getObject()
        for field in data_dict1:
            if writer_annot.get('/T') == field:
                writer_annot.update({
                    NameObject("/Ff"): NumberObject(1)  # make ReadOnly
                })
    # update checkbox fields
    updateCheckboxValues(page, data_dict2)
output_stream = BytesIO()
pdf_writer.write(output_stream)
return output_stream
def updateCheckboxValues(page, fields):
    for j in range(0, len(page['/Annots'])):
        writer_annot = page['/Annots'][j].getObject()
        for field in fields:
            if writer_annot.get('/T') == field:
                writer_annot.update({
                    NameObject("/V"): NameObject(fields[field]),
                    NameObject("/AS"): NameObject(fields[field])
                })
I got similar results when trying to do a straightforward read of a PDF form using Python and PyPDF2. The PDF form had been created using Libre Writer and was a single page with about 50 text fields on it. When I ran the getFields() method on the reader object I was getting the same issue -- it was returning an empty dict object.
I thought there might be a limitation on the number of fields and tried removing some for testing, but got the same result. Then, looking more closely, I noticed the field names were all pretty long: txtLabMemberFirstName01, txtLabMemberLastName01, txtPrincipalInvestigatorFirstName, etc.
I shortened all the fields' names (e.g., "txtLMFN01") and PyPDF2 started working again as expected.
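If you want to check whether you're hitting the same thing, a quick diagnostic sketch (PyPDF2 1.x API, as used in the question; the file path is a placeholder) lists every field name along with its length:
# Diagnostic sketch: print each form field name so overly long ones stand out.
from PyPDF2 import PdfFileReader

reader = PdfFileReader("your_form.pdf", strict=False)  # placeholder path
fields = reader.getFields() or {}  # getFields() returns None when no fields are found
for name in fields:
    print(len(name), name)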
Good Morning All!
I'm trying to have a routine iterate through a table list. The code below works on the single table 'contact'. I want to iterate through all of the tables listed in my tablelist.csv. I bolded the parts below that would need to be dynamically modified in the code. My brain is pretty fried at this point from working through two nights, and I'm fully prepared for the internet to tell me that this is in chapter two of Intro to Python, but I could use the help just to get over this hurdle.
import pandas as pd
import boto3
from simple_salesforce import Salesforce

li = pd.read_csv('tablelist.csv', header=None)
desc = sf.**Contact**.describe()
field_names = [field['name'] for field in desc['fields']]
soql = "SELECT {} FROM **Contact**".format(','.join(field_names))
results = sf.query_all(soql)
sf_df = pd.DataFrame(results['records']).drop(columns='attributes')
sf_df.to_csv('**contact**.csv')
s3 = boto3.client('s3')
s3.upload_file('contact.csv', 'mybucket', 'Ops/20201027/contact.csv')
It would help if you could provide a sample of the tablelist file, but here's a stab at it... you really just need to get the list of tables and loop through it.
# assuming the table names are in the first column of the file
df_tablelist = pd.read_csv('tablelist.csv', header=None)
for Contact in df_tablelist[0].tolist():  # header=None means columns are numbered
    desc = getattr(sf, Contact).describe()  # dynamic attribute access, since Contact is a string
    field_names = [field['name'] for field in desc['fields']]
    soql = "SELECT {} FROM {}".format(','.join(field_names), Contact)
    results = sf.query_all(soql)
    sf_df = pd.DataFrame(results['records']).drop(columns='attributes')
    sf_df.to_csv(Contact + '.csv')
    s3 = boto3.client('s3')
    s3.upload_file(Contact + '.csv', 'mybucket', 'Ops/20201027/' + Contact + '.csv')
How do I save the file generated from a pd.DataFrame to a certain database record?
This is the view:
@csrf_exempt
def Data_Communication(request):
    if request.method == 'POST':
        data_sets_number = len(request.POST) - 1
        Data_Sets_asNestedList = []
        Data_set_id = request.POST.get('id')
        Data_instance = Data_Sets.objects.get(pk=Data_set_id)
        for x in range(data_sets_number):
            # collect Data1, Data2, ... DataN from the POST payload
            Data_Sets_asNestedList.append(request.POST.getlist('Data' + str(x + 1)))
        pd.DataFrame(Data_Sets_asNestedList).to_excel('output.xlsx', header=False, index=False)
        print(Data_Sets_asNestedList)
        return HttpResponse('1')
If you're looking to associate the generated Excel file with the model Data_Sets, then you'd probably want to add a FileField to that model:
class Data_Sets(models.Model):
    excel_file = models.FileField()
Once you've created the Excel file in your view, you can then associate it with the new field:
from django.core.files import File

@csrf_exempt
def Data_Communication(request):
    if request.method == 'POST':
        data_sets_number = len(request.POST) - 1
        Data_Sets_asNestedList = []
        Data_set_id = request.POST.get('id')
        Data_instance = Data_Sets.objects.get(pk=Data_set_id)
        for x in range(data_sets_number):
            Data_Sets_asNestedList.append(request.POST.getlist('Data' + str(x + 1)))
        pd.DataFrame(Data_Sets_asNestedList).to_excel('output.xlsx', header=False, index=False)
        # Associate the Excel file with the model instance
        with open('output.xlsx', 'rb') as excel:
            Data_instance.excel_file.save('output.xlsx', File(excel))
        print(Data_Sets_asNestedList)
        return HttpResponse('1')
The Excel file itself will be saved into the folder specified by the MEDIA_ROOT setting in your settings.py, and the model will point to that file via the excel_file attribute.
Note that you may want to generate a unique filename instead of output.xlsx, to prevent concurrent requests from treading on each other.
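For example (a sketch only; uuid4 is just one collision-resistant option):
# Hypothetical variant: derive a unique per-request filename.
import uuid

unique_name = 'output-{}.xlsx'.format(uuid.uuid4().hex)
pd.DataFrame(Data_Sets_asNestedList).to_excel(unique_name, header=False, index=False)
with open(unique_name, 'rb') as excel:
    Data_instance.excel_file.save(unique_name, File(excel))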
Additional info on saving a file can be found here.
Don't insert your data into the database blindly; use Django's validation system to validate your data first.
Check the bulk_create API to store large batches of records.
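A minimal sketch of that combination, assuming a hypothetical Row model with a single value field:
# Sketch: validate each instance first, then insert in batches.
rows = [Row(value=v) for v in values]  # Row and values are hypothetical
for row in rows:
    row.full_clean()  # raises ValidationError if the data is invalid
Row.objects.bulk_create(rows, batch_size=500)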
I have an Archetypes content type with a field called file, which is a MultiFileField (from archetypes.multifile.MultiFileField). The schema is something like:
MultiFileField('file',
    primary=True,
    languageIndependent=True,
    widget=MultiFileWidget(
        label="File Uploads",
        show_content_type=False,
    ),
)
And I have a Dexterity content type with the same field name, file. I want to write a script that extracts the stored uploaded objects from the Archetypes content and passes them to the custom Dexterity content type. The schema for the custom Dexterity content type is:
form.widget(file=MultiFileFieldWidget)
file = schema.List(
    title=_(u"File Attachment"),
    required=False,
    value_type=NamedFile(),
)
I observed that Archetypes' MultiFileField stores the uploaded object as OFS.Image.Pdata, whereas the latter stores it as a plone.namedfile.file.NamedFile object. Is there a way to convert the OFS object into a NamedFile object?
Update:
I have found a solution, but I am not sure if it's the right approach:
for field in prev_obj.Schema().fields():
    key = field.getName()
    objects_list = []
    value = field.getRaw(prev_obj)
    for f in value:
        data = str(f['file'].data)
        filename = unicode(f['filename'])
        contentType = f['content_type']
        fileData = NamedFile(data=data, contentType=contentType, filename=filename)
        objects_list.append(fileData)
    new_obj.file = copy.copy(objects_list)
First off, you may want to use NamedBlobFile instead.
Then, have you tried something like this to convert the data?
from plone.namedfile.file import NamedBlobFile

new_obj.file = [
    NamedBlobFile(str(fdata), contentType=fdata.content_type, filename=fdata.filename)
    for fdata in previous_obj.getFile()
]
Assuming you have both previous_obj and new_obj available.