I have an Archetypes content type with a field called file, which is a MultiFileField (from archetypes.multifile.MultiFileField). The schema looks something like this:
MultiFileField('file',
    primary=True,
    languageIndependent=True,
    widget=MultiFileWidget(
        label="File Uploads",
        show_content_type=False,
    ),
)
I also have a Dexterity content type with a field of the same name, file, and I want to write a script that extracts the stored uploaded objects from the Archetypes content and passes them on to the custom Dexterity content type. The schema for the Dexterity content type is:
form.widget(file=MultiFileFieldWidget)
file = schema.List(
    title=_(u"File Attachment"),
    required=False,
    value_type=NamedFile(),
)
I observed that Archetypes' MultiFileField stores each uploaded object as OFS.Image Pdata, while the Dexterity field stores plone.namedfile.file.NamedFile objects. Is there a way to convert the OFS object into a NamedFile object?
Update:
I have found a solution, but I am not sure if it is the right approach.
import copy

from plone.namedfile.file import NamedFile

for field in prev_obj.Schema().fields():
    key = field.getName()
    objects_list = []
    value = field.getRaw(prev_obj)
    for f in value:
        data = str(f['file'].data)
        filename = unicode(f['filename'])
        contentType = f['content_type']
        fileData = NamedFile(data=data, contentType=contentType, filename=filename)
        objects_list.append(fileData)
    new_obj.file = copy.copy(objects_list)
First off, you may want to use NamedBlobFile instead.
Then, have you tried something like this to convert the data?
from plone.namedfile.file import NamedBlobFile

new_obj.file = [
    NamedBlobFile(str(fdata), contentType=fdata.content_type, filename=fdata.filename)
    for fdata in previous_obj.getFile()
]
Assuming you have both previous_obj and new_obj available.
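If you would rather keep the field-by-field loop from your update, the change is only the class used; here is a rough sketch, assuming the raw Archetypes value is the same list of dicts with 'file', 'filename' and 'content_type' keys as in your update:

from plone.namedfile.file import NamedBlobFile

objects_list = []
for f in field.getRaw(prev_obj):
    objects_list.append(
        NamedBlobFile(
            data=str(f['file'].data),          # Pdata chain -> raw bytes
            contentType=f['content_type'],
            filename=unicode(f['filename']),   # filenames must be unicode
        )
    )
new_obj.file = objects_list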
Related
I'm using Django and I want to send some data from my database to a Word document. I'm using python-docx for creating Word documents, via the class ExportDocx. It can generate a static Word file, but I want to place some dynamic data (e.g. product id = 5, name = "…"), basically all the details of the "product", into the document.
import io

from django.http import StreamingHttpResponse
from docx import Document
from docx.shared import Inches
from rest_framework.views import APIView


class ExportDocx(APIView):
    def get(self, request, *args, **kwargs):
        queryset = Products.objects.all()
        # create an empty document object
        document = Document()
        document = self.build_document()
        # save document info
        buffer = io.BytesIO()
        document.save(buffer)  # save your memory stream
        buffer.seek(0)  # rewind the stream
        # put them to streaming content response
        # within docx content_type
        response = StreamingHttpResponse(
            streaming_content=buffer,  # use the stream's content
            content_type='application/vnd.openxmlformats-officedocument.wordprocessingml.document'
        )
        response['Content-Disposition'] = 'attachment;filename=Test.docx'
        response["Content-Encoding"] = 'UTF-8'
        return response

    def build_document(self, *args, **kwargs):
        document = Document()
        sections = document.sections
        for section in sections:
            section.top_margin = Inches(0.95)
            section.bottom_margin = Inches(0.95)
            section.left_margin = Inches(0.79)
            section.right_margin = Inches(0.79)

        # add a header
        document.add_heading("This is a header")
        # add a paragraph
        document.add_paragraph("This is a normal style paragraph")
        # add a paragraph with an italic run, then a line break
        paragraph = document.add_paragraph()
        run = paragraph.add_run()
        run.italic = True
        run.add_text("text will have italic style")
        run.add_break()
        return document
This is the urls.py entry for the view:
path('<int:pk>/testt/', ExportDocx.as_view() , name='generate-testt'),
How can I generate it, though? I think I need to convert the data to strings so it can work with python-docx.
The python-docx documentation: http://python-docx.readthedocs.io/
For a product record like record = {"product_id": 5, "name": "Foobar"}, you can add it to the document in your build_document() method like:
document.add_paragraph(
    "Product id: %d, Product name: %s"
    % (record["product_id"], record["name"])
)
There are other more modern methods for interpolating strings, although this sprintf style works just fine for most cases. This resource is maybe not a bad place to start.
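For example, the same paragraph written with an f-string (Python 3.6+) would be:

document.add_paragraph(
    f"Product id: {record['product_id']}, Product name: {record['name']}"
)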
So I found out that I need to pass the model; I was doing that in another version of the code and forgot to add it. Basically, I just had to add these lines of code. Hope this helps whoever is reading this.
def get(self, request, pk, *args, **kwargs):
    # create an empty document object
    document = Document()
    product = Product.objects.get(id=pk)
    document = self.build_document(product)
And in build_document() we just need to stringify the values, simply by using f'{queryset.xxxx}':
def build_document(self, queryset):
    document = Document()
    document.add_heading(f'{queryset.first_name}')
    return document
Hope all Stack members are doing well. I am able to fetch the binary data of a product image using this code:
import base64

p_ids = self.env.context.get('active_ids')
produtc_templates = self.env['product.template']
for p_id in p_ids:
    binaryData = produtc_templates.search([('id', '=', p_id)]).image
    data = base64.b64decode(binaryData)
    file = "marketplaces/rakuten_ftp/static/imageToSave_" + str(p_id) + ".png"
    with open(file, "wb") as imgFile:
        imgFile.write(data)
The above code creates files from the binary data, but I have failed to apply a condition based on mimetype, because when I query the ir_attachment table with product ids it returns False.
for p_id in p_ids:
    attachments = self.env['ir.attachment']
    mimetype = attachments.search([('res_id', '=', p_id)])
I am treating res_id as the product id, but Odoo fails to find any record for that id. So if anybody has an idea of how I can get the mimetype for my product id, please help me.
Your code looks good! But in the ir.attachment object, binary data is stored in the datas field, so you can use that data to decode the binary data to an image file.
I already tried the code below in Odoo v11, and it works: it creates a new image file from the binary data stored in the datas field.
product_image = self.env['ir.attachment']
product_images = product_image.search([('id', 'in', p_ids)])
for rec in product_images:
    with open("imageToSave.jpg", "wb") as imgFile:
        imgFile.write(base64.b64decode(rec.datas))
You can also add a condition for the mimetype: p_ids can contain multiple ids, so take only those records whose mimetype is image/jpeg or image/png.
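A rough sketch of that extra condition, assuming your Odoo version exposes the standard mimetype field on ir.attachment:

product_images = product_image.search([
    ('id', 'in', p_ids),
    ('mimetype', 'in', ['image/jpeg', 'image/png']),
])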
EDIT #1
The code snippet below was already checked with Odoo v11.0:
import base64

from odoo.tools.mimetypes import guess_mimetype

p_ids = [16, 18, 11, 38, 39, 40]  # Taking random ids of product.template
produtc_templates = self.env['product.template']
for p_id in p_ids:
    binary_data = produtc_templates.search([('id', '=', p_id)]).image
    mimetype = guess_mimetype(base64.b64decode(binary_data))
    file_path = ""
    if mimetype == 'image/png':
        file_path = "/home/Downloads/" + str(p_id) + ".png"
    elif mimetype == 'image/jpeg':
        file_path = "/home/Downloads/" + str(p_id) + ".jpeg"
    if file_path:
        with open(file_path, "wb") as imgFile:
            imgFile.write(base64.b64decode(binary_data))
Product images aren't saved as records of ir.attachment. OK, maybe that has changed, but I didn't find anything that quickly.
What you can do is use ir.attachment's method _compute_mimetype().
The following example wasn't tested:
def get_mimetype_of_product_image(self, product_id):
    product = self.env['product.product'].browse(product_id)
    mimetype = self.env['ir.attachment']._compute_mimetype(
        {'values': product.image})
    return mimetype
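Usage might then look like this (the product id 42 is just a placeholder):

mimetype = self.get_mimetype_of_product_image(42)  # hypothetical product id
if mimetype in ('image/png', 'image/jpeg'):
    # it's an image type we want to write to disk
    pass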
I have a simple REST/JSON API. I am trying to update a mongoengine model Listing using a PUT call. I fetch one from MongoDB and update it using my own deserialize method with the incoming JSON. The update does not work, as the JSON has a few DBRefs and EmbeddedDocuments. The object does get updated with the correct values before the save is executed. There is no error, but the object does not get saved. Any thoughts?
obj = Listing.objects.get(pk=id)
obj.deserialize(**request.json)
obj.save()
obj.reload()
return obj
class Listing(db.Document):  # presumably a db.Document subclass, since Listing.objects is used above
    name = db.StringField()
    l_type = db.StringField(choices=listing_const.L_TYPE.choices())
    expiry = db.ComplexDateTimeField(auto_now=False, auto_now_add=False)
    a_data = db.EmbeddedDocumentField(Media)
    lcrr = db.ReferenceField('LCRR', reverse_delete_rule=3, dbref=False)
    meta = {
        'db_alias': config.get_config()['MONGODB_SETTINGS']['alias'],
        'cascade': True
    }
The code below is part of the Python Quickbase module, which has not been updated in quite a while. The help text for the function shown below is not clear on how to pass the parameters to upload a file (the value of which is actually base64 encoded).
def add_record(self, fields, named=False, database=None, ignore_error=True, uploads=None):
    """Add new record. "fields" is a dict of name:value pairs
    (if named is True) or fid:value pairs (if named is False). Return the new record's RID.
    """
    request = {}
    if ignore_error:
        request['ignoreError'] = '1'
    attr = 'name' if named else 'fid'
    request['field'] = []
    for field, value in fields.iteritems():
        request_field = ({attr: to_xml_name(field) if named else field}, value)
        request['field'].append(request_field)
    if uploads:
        for upload in uploads:
            request_field = (
                {attr: (to_xml_name(upload['field']) if named else upload['field']),
                 'filename': upload['filename']}, upload['value'])
            request['field'].append(request_field)
    response = self.request('AddRecord', database or self.database, request, required=['rid'])
    return int(response['rid'])
Can someone help me with how I should pass the parameters to add a record?
Based on the definition you provided, it appears that you need to pass, for the uploads parameter, a list of dictionaries that each provide the field name/id, the filename, and the base64 encoding of the file. So, if I had a table where I record the name of a color in the field named "color" (field id 19) and a sample image in the field named "sample image" (field id 21), I believe my method call would be something like:
my_color_file = ...  # base64 encoding of your file
my_fields = {'19': 'Seafoam Green'}
my_uploads = [{'field': 21, 'filename': 'seafoam_green.png', 'value': my_color_file}]
client.add_record(fields=my_fields, uploads=my_uploads)
Or, if you're using field names:
my_color_file = ...  # base64 encoding of your file
my_fields = {'color': 'Seafoam Green'}
my_uploads = [{'field': 'sample_image', 'filename': 'seafoam_green.png', 'value': my_color_file}]
client.add_record(fields=my_fields, named=True, uploads=my_uploads)
client is just the object you instantiated earlier using whatever constructor this module has.
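If it helps, the base64 value itself can be produced from a local file like this (the filename is just an example):

import base64

with open('seafoam_green.png', 'rb') as fh:
    my_color_file = base64.b64encode(fh.read())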
I have a database of artists and paintings, and I want to query based on artist name and painting title. The titles are in a JSON file (the artist name comes from AJAX), so I tried a loop.
import json

from django.core import serializers
from django.http import HttpResponse

def rest(request):
    data = json.loads(request.body)
    artistname = data['artiste']
    with open('/static/top_paintings.json', 'r') as fb:
        top_paintings_dict = json.load(fb)
    response_data = []
    for painting in top_paintings_dict[artistname]:
        filterargs = {'artist__contains': artistname, 'title__contains': painting}
        response_data.append(serializers.serialize('json', Art.objects.filter(**filterargs)))
    return HttpResponse(json.dumps(response_data), content_type="application/json")
It does not return a list of objects like I need, just some ugly double-serialized json data that does no good for anyone.
["[{\"fields\": {\"artist\": \"Leonardo da Vinci\", \"link\": \"https://trove2.storage.googleapis.com/leonardo-da-vinci/the-madonna-of-the-carnation.jpg\", \"title\": \"The Madonna of the Carnation\"}, \"model\": \"serve.art\", \"pk\": 63091}]",
This handler works and returns every painting I have for an artist.
def rest(request):
    data = json.loads(request.body)
    artistname = data['artiste']
    response_data = serializers.serialize("json", Art.objects.filter(artist__contains=artistname))
    return HttpResponse(json.dumps(response_data), content_type="application/json")
I just need to filter my query by title as well as by artist.
Your problem is that you are serializing the data to JSON twice: once with serializers.serialize and then once more with json.dumps.
I don't know the specifics of your application, but you can chain filters in Django. So I would go with your second approach and just replace the line
response_data = serializers.serialize("json", Art.objects.filter(artist__contains=artistname))
with
response_data = serializers.serialize("json", Art.objects.filter(artist__contains=artistname).filter(title__in=paintings))
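Here paintings is assumed to be the list of titles for that artist, loaded from the same JSON file as in your first handler, e.g.:

with open('/static/top_paintings.json', 'r') as fb:
    paintings = json.load(fb)[artistname]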
Check the queryset documentation.
The most efficient way to do this for a __contains search on painting title would be to use Q objects to OR together all your possible painting names:
from functools import reduce
from operator import or_

from django.db.models import Q

def rest(request):
    data = json.loads(request.body)
    artist_name = data['artiste']
    with open('/static/top_paintings.json', 'r') as fb:
        top_paintings_dict = json.load(fb)
    title_filters = reduce(or_, (Q(title__contains=painting) for painting in top_paintings_dict[artist_name]))
    paintings = Art.objects.filter(title_filters, artist__contains=artist_name)
That'll get you a queryset of paintings. I suspect your double serialization is not correct, but it seems you're happy with it in the single artist name case so I'll leave that up to you.
The reduce call here is a way to build up the result of |ing together multiple Q objects - operator.or_ is a functional handle for |, and then I'm using a generator expression to create a Q object for each painting name.
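As a tiny illustration, for two titles (the titles here are just examples) the reduce produces the same thing as writing:

from django.db.models import Q

title_filters = Q(title__contains="Mona Lisa") | Q(title__contains="The Last Supper")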