I used this code to store xlsx attachments from a specific email address in Outlook, but now I would like to store these files in a SQL Server database rather than in a folder on my laptop. Do you have any idea how to store these files directly in a database? Many thanks.
outputDir = r"C:\Users\CMhalla\Desktop\Hellmann_attachment"
i=0
for m in messages:
if m.SenderEmailAddress == 'adress#outlook.com':
body_content=m.Body
for attachment in m.Attachments:
i=i+1
attachment.SaveAsFile(os.path.join(outputDir,attachment.FileName + str(i)+'.xlsx'))
The Outlook object model doesn't provide any property or method for saving attachments directly to a database. You need to save the file to disk first and then add it to the DB in any convenient way.
However, you may be interested in reading the byte array of the attached item in Outlook. In that case you can write the byte array directly to the DB without touching the file system, which would otherwise slow down overall performance. The PR_ATTACH_DATA_BIN property contains the binary attachment data, typically accessed through the Object Linking and Embedding (OLE) IStream interface. This property holds the attachment when the value of the PR_ATTACH_METHOD property is ATTACH_BY_VALUE, which is the usual attachment method and the only one required to be supported.
The Outlook object model cannot retrieve large binary or string MAPI properties using PropertyAccessor.GetProperty. At the low level (Extended MAPI), the IMAPIProp::GetProps() method does not work for large PT_STRING8 / PT_UNICODE / PT_BINARY properties. They must be opened as IStream in the following way: IMAPIProp::OpenProperty(PR_ATTACH_DATA_BIN, IID_IStream, ...). See "PropertyAccessor.GetProperty(PR_ATTACH_DATA_BIN) fails for outlook attachment" for more information.
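If saving to disk first is acceptable, a minimal sketch of that approach (Outlook -> temporary file -> SQL Server via pyodbc) could look like the following. The dbo.EmailAttachments table with a FileName NVARCHAR column and a Content VARBINARY(MAX) column, the connection string, and the use of the Inbox folder are assumptions to adjust to your environment:

import os
import tempfile
import pyodbc
import win32com.client

# Hypothetical connection string and table; change both to match your server and schema.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
messages = outlook.GetDefaultFolder(6).Items  # 6 = olFolderInbox

with tempfile.TemporaryDirectory() as tmp_dir:
    for m in messages:
        if m.SenderEmailAddress == 'adress#outlook.com':
            for attachment in m.Attachments:
                tmp_path = os.path.join(tmp_dir, attachment.FileName)
                attachment.SaveAsFile(tmp_path)  # Outlook can only save to a file path
                with open(tmp_path, "rb") as f:
                    data = f.read()
                cursor.execute(
                    "INSERT INTO dbo.EmailAttachments (FileName, Content) VALUES (?, ?)",
                    attachment.FileName, data,  # bytes parameter maps to VARBINARY(MAX)
                )
conn.commit()

The temporary directory is removed automatically when the with block exits, so nothing is left on disk.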
You can use Microsoft Power Automate to save the attachment to a drive and then load the file into the Python environment.
I am relatively new to web development and very new to using Web2py. The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. Any suggestions pointing me in the right direction are appreciated!
I was having this error "TypeError: expected string or Unicode object, NoneType found" and I had to store just a link in the session to the uploaded document in the db or maybe the upload folder in your case. I would store it to upload to proceed normally, and then clear out the values and the file if not 'approved'?
In similar circumstances, if the information is not confidential, I write the temporary files directly under /tmp.
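For keeping the intermediate files separate per session, a minimal sketch along those lines (the base directory name is an assumption, and the web2py-specific part appears only in the trailing comment) could be:

import os
import tempfile

def session_workdir(session_id):
    """Return a scratch directory dedicated to one session, creating it on first use."""
    base = os.path.join(tempfile.gettempdir(), "csv_to_pdf")  # hypothetical app-specific base dir
    path = os.path.join(base, session_id)
    os.makedirs(path, exist_ok=True)
    return path

# Inside a web2py controller, response.session_id identifies the current session:
# workdir = session_workdir(response.session_id)

Everything written into that directory can simply be deleted when the session ends.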
I need to save Outlook mails with their attachments as .msg files in Python. Currently, working with win32com.client, I use message.SaveAs(path + name), which gives me a nice .msg file, but it does not include the attachments (if any exist). Attached files are visible via message.Attachments.Count and message.Attachments, but how can I create a .msg file with the attachments included, stored as one file, the way messages are exported straight from Outlook?
how can I create a .msg file with the attachments included, stored as one file, the way messages are exported straight from Outlook?
The Outlook object model doesn't provide anything for that. Potentially, the best you could do is save the attached files along with your mail items (.msg). Use the Attachment.SaveAsFile method, which saves the attachment to the specified path.
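A hedged sketch of that workaround, saving each message as a .msg file next to its attachments in one folder per message (the export root C:\mail_export and the use of the Inbox are assumptions), could look like this:

import os
import win32com.client

OL_SAVE_AS_MSG = 3  # OlSaveAsType.olMSG

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6)  # 6 = olFolderInbox

for message in inbox.Items:
    # One folder per message; EntryID is unique, so exports do not collide.
    target = os.path.join(r"C:\mail_export", message.EntryID)
    os.makedirs(target, exist_ok=True)
    message.SaveAs(os.path.join(target, "message.msg"), OL_SAVE_AS_MSG)
    for attachment in message.Attachments:
        attachment.SaveAsFile(os.path.join(target, attachment.FileName))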
I have a couple hundred daily Excel attachments in email that I want to pull appropriate data from and save into a database. I want to avoid saving each attachment to disk only to re-open from disk to read, since I'll never need the files saved to disk ever again. For this project, sure, I could just do it and delete them, but there ought to be a better way.
Here's what I'm doing so far
outlook = Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders[blah].Folders[blahblah]
for item in folder.items:
for att in item.Attachments:
att.SaveAsFile(???) # This is where I need something cool, like stream or bytes or something that I don't understand
# do something with the file, either read with pandas or openpyxl
If I can get around even doing the save and have pandas / openpyxl read it without saving, that would be great, but neither of them can read the att directly.
Outlook Object Model won't let you do that: Attachment.SaveAsFile only allows you to specify a valid file name.
On the Extended MAPI level (C++ or Delphi only), the one and only way to access attachment data (Extended MAPI does not know anything about files) is to open the PR_ATTACH_DATA_BIN MAPI property as an IStream interface: IAttach::OpenProperty(PR_ATTACH_DATA_BIN, IID_IStream, ...). You can then retrieve the data directly from the IStream interface.
If using Redemption (any language, I am its author) is an option, it exposes RDOAttachment.AsStream / AsArray / AsText properties that allow to access raw attachment data without saving it as file first.
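If Redemption is available, a minimal sketch of reading an attachment into pandas without ever writing it to disk might look like the following; the folder names are hypothetical, and the method names should be checked against the Redemption documentation for your version:

import io
import pandas as pd
from win32com.client import Dispatch

outlook = Dispatch("Outlook.Application").GetNamespace("MAPI")
session = Dispatch("Redemption.RDOSession")
session.MAPIOBJECT = outlook.MAPIOBJECT  # reuse Outlook's MAPI session

folder = outlook.Folders["Mailbox"].Folders["Reports"]  # hypothetical folder names
for item in folder.Items:
    rdo_item = session.GetMessageFromID(item.EntryID)  # same message through Redemption
    for att in rdo_item.Attachments:
        data = bytes(att.AsArray)  # raw attachment bytes, no temporary file
        df = pd.read_excel(io.BytesIO(data))  # pandas reads straight from memory
        # ... pull the appropriate data from df and write it to the database ...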
I currently have a process of reading from SQL, using pandas and pd.ExcelWriter to format the data, and emailing it out. I want my function to read from SQL (no problem) and write to a blob, then from that blob (using the SendGrid binding) attach that file and send it out.
My question is: do I need both an in (attaching for email) and an out (archiving to the blob) binding for that blob? Additionally, is this the simplest way to do this? It'd be nice to send it and write to the blob as two unconnected operations instead of sequentially.
It also appears that with the binding, I have to hard-code the name of the file in the blob path? That seems a little ridiculous; does anyone know a workaround, or perhaps I have misunderstood?
do I need both an in (attaching for email) and an out (archiving to the blob) binding for that blob?
Firstly, I don't think you can bind the blob for both in and out simultaneously if the blob does not exist; if you try, you will find it returns an error. Also, I suppose you could send the mail directly with the content from SQL and write it to the blob at the same time; you don't need to read the content back from the blob.
I have to hard code the name of the file in the blob-path?
If you can accept a GUID or datetime blob name, you can bind the path with {rand-guid} or {DateTime} (you can format the time).
If you cannot accept this binding, you can pass the blob path in the trigger body as JSON data, as shown below. If you use another trigger, such as a queue trigger, you can also pass the path value in the JSON data.
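A hedged sketch of how this can look in the Azure Functions Python v2 programming model follows; the route name, container, and the blobname field in the request body are assumptions, and the classic function.json approach uses the same {blobname} expression in the path:

import azure.functions as func

app = func.FunctionApp()

@app.route(route="send_report", methods=[func.HttpMethod.POST])
@app.blob_output(arg_name="outputblob",
                 path="reports/{blobname}",  # {blobname} is resolved from the JSON request body
                 connection="AzureWebJobsStorage")
def send_report(req: func.HttpRequest, outputblob: func.Out[bytes]) -> func.HttpResponse:
    # Build the Excel content here (e.g. with pandas / pd.ExcelWriter); a literal
    # placeholder keeps this sketch self-contained.
    excel_bytes = b"...excel content..."
    outputblob.set(excel_bytes)  # archive to the blob named by the caller
    # The same bytes can be attached to the SendGrid message directly,
    # so the blob never has to be read back.
    return func.HttpResponse("sent and archived", status_code=200)

Calling it with a body such as {"blobname": "report-2023-01-31.xlsx"} writes the archive copy to reports/report-2023-01-31.xlsx without hard-coding the name in the binding.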
I am writing a bit of code that will take a CSV file input and perform an operation based on its contents. In the admin panel I am designing, the admin should be able to select a CSV file on their local system which my application will then read. The application does not need to store the CSV file, just read from it for a one-time operation.
Any ideas on how to best handle this in Pyramid?
What you want is essentially a file upload, followed by additional processing on the uploaded data. You can create input elements of type "file" in HTML forms to allow uploading of files.
Refer to the cookbook in the Pyramid documentation on file uploads for how to handle the uploaded data on the server side (in summary: use the file-like object request.POST[field_name].file).
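A minimal sketch of such a view, assuming a form field named csv_file posted with enctype="multipart/form-data" (the route name and field name are assumptions):

import csv
import io

from pyramid.view import view_config

@view_config(route_name="csv_upload", request_method="POST", renderer="json")
def csv_upload(request):
    upload = request.POST["csv_file"]  # FieldStorage-like object from the upload
    text = io.TextIOWrapper(upload.file, encoding="utf-8", newline="")
    rows = list(csv.reader(text))  # one-time, in-memory read; nothing is stored
    # ... perform the operation based on `rows` ...
    return {"rows_processed": len(rows)}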