Uploading Attachments to Salesforce API via Beatbox, Python

I'm uploading documents to Salesforce using beatbox and Python. The files attach correctly, but the data contained within them gets completely corrupted.
def Send_File():
    import beatbox
    svc = beatbox.Client()  # instantiate the object
    svc.login(login1, pw1)  # log in using your sf credentials
    update_dict = {
        'type': 'Attachment',
        'ParentId': accountid,
        'Name': 'untitled.txt',
        'body': '/Users/My_Files/untitled.txt',
    }
    results2 = svc.create(update_dict)
    print(results2)
The output is:
00Pi0000005ek6gEAAtrue
So things seem to be coming through well, but when I go to the Salesforce record 00Pi0000005ek6gEAA and view the file, its contents are:
˝KÆœ  Wøä ï‡Îä˜øHÅCj÷øaÎ0j∑ø∫{b∂Wù
I have no clue what's causing the issue, and I can't find any situations where this has happened to other people.
Link to SFDC Documentation on uploads

The 'body' value in the dictionary should be the base64-encoded contents of the file, not the file name. You need to read and encode the file contents yourself, e.g.:
with open("/Users/My_Files/untitled.txt", "rb") as f:
    body = f.read().encode("base64")

update_dict = {
    'type': 'Attachment',
    'ParentId': accountid,
    'Name': 'untitled.txt',
    'Body': body,
}
...
Docs about Attachment
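Note that the str.encode("base64") codec in the answer exists only on Python 2; on Python 3 the equivalent is base64.b64encode on the raw bytes. A minimal runnable sketch, writing a temporary file in place of the path from the question:

```python
import base64
import tempfile

# Write a sample file so the sketch is self-contained, then read it back
# in binary mode and base64-encode it, as the Attachment body requires.
with tempfile.NamedTemporaryFile("wb", suffix=".txt", delete=False) as f:
    f.write(b"hello attachment")
    path = f.name

with open(path, "rb") as f:
    body = base64.b64encode(f.read()).decode("ascii")

print(body)  # aGVsbG8gYXR0YWNobWVudA==
```

The resulting string is what goes into the 'Body' field of the create call.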

Related

Get a Json file that represents device(object) from a specific folder and generating kafka event

I need to make a tool in Python that takes a JSON file (device) from a folder (devices) and generates Kafka events for the device.events topic. Before generating an event, it has to check whether the device exists in the folder.
I wrote this, but I still don't know how to open a file inside the folder.
import json
test_devices = r'''
{
"data": ["{\"mac_address\": null, \"serial_number\": \"BRUCE03\", \"device_type\": \"STORAGE\", \"part_number\": \"STCHPRTffOM01\", \"extra_attributes\": [], \"platform_customer_id\": \"a44a5c7c65ae11eba916dc31c0b885\", \"application_customer_id\": \"4b232de87dcf11ebbe01ca32d32b6b77\"}"],
"id": "b139937e-5107-4125-b9b0-d05d17ad2bea",
"source":"CCS",
"specversion":"1.0",
"time": "2021-01-13T14:44:18.972181+00:00",
"type":"` `",
"topic":"` `"
}
'''
data = json.loads(test_devices)
print(type(test_devices))
print(data)
Kafka is an implementation detail and doesn't relate to your question.
how to open file inside the folder
import json

with open("/path/to/device.json") as f:
    content = json.load(f)
print(content)
has to check if device exists in folder
import os.path
print("Exists? :", os.path.exists("/path/to/device.json"))
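Putting the two pieces together, a sketch of a lookup helper that checks for the device file and loads it in one step. The folder layout and the device-name-to-filename mapping are assumptions; the demo uses a temporary directory so it runs as-is:

```python
import json
import os
import tempfile

def load_device(devices_dir, device_name):
    """Return the parsed JSON for a device, or None if it is not in the folder."""
    path = os.path.join(devices_dir, device_name + ".json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

# Demo: a temporary devices folder holding one device file.
devices_dir = tempfile.mkdtemp()
with open(os.path.join(devices_dir, "BRUCE03.json"), "w") as f:
    json.dump({"serial_number": "BRUCE03", "device_type": "STORAGE"}, f)

print(load_device(devices_dir, "BRUCE03"))  # the parsed dict
print(load_device(devices_dir, "MISSING"))  # None
```

If the lookup returns a dict rather than None, you know the device exists and can go on to build the Kafka event from it.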

I am looking to create an API endpoint route that returns txt in a json format -Python

I'm new to developing, and my question involves creating an API endpoint in our route. The API will be used for a POST from a Vuetify UI. Data will come from our MongoDB. We will be getting a .txt file for our shell script, but it will have to be POSTed as JSON. I think these are the steps for converting the text file:
1) create a list for the lines of the .txt
2) add each line to the list
3) join the list elements into a string
4) create a dictionary with the file/file content and convert it to JSON
This is my current code for the steps:
import json

### something.txt: an example of the shell script ###
f = open("something.txt")

# create a list to put the lines of the file in
file_output = []

# add each line of the file to the list
for line in f:
    file_output.append(line)

# mashes all of the list elements together into one string
fileoutput2 = ''.join(file_output)
print(fileoutput2)

# create a dict with file and file content and then convert to JSON
json_object = {"file": fileoutput2}
json_response = json.dumps(json_object)
print(json_response)
{"file": "Hello\n\nSomething\n\nGoodbye"}
I have the following code for my baseline below, which I execute on a button press in the UI:
@bp_customer.route('/install-setup/<string:customer_id>', methods=['POST'])
def install_setup(customer_id):
    cust = Customer()
    customer = cust.get_customer(customer_id)

    ### example of a series of lines with newline characters between them ###
    script_string = "Beginning\nof\nscript\n"
    json_object = {"file": script_string}
    json_response = json.dumps(json_object)

    # get the install shell script content
    # replace the values (somebody has already done this)
    # attempt to return the below example json_response
    return make_response(jsonify(json_response), 200)
My current Vuetify button-press code is here; I just have to amend it to a POST and the new route once this is established:
onClickScript() {
  console.log("clicked");
  axios
    .get("https://sword-gc-eadsusl5rq-uc.a.run.app/install-setup/")
    .then((resp) => {
      console.log("resp: ", resp.data);
      this.scriptData = resp.data;
    });
},
I'm having a hard time combining these 2 concepts in the correct way. Any input as to whether I'm on the right path? Insight from anyone who's much more experienced than me?
You're on the right path, but needlessly complicating things a bit. For example, the first bit could be just:
import json

with open("something.txt") as f:
    json_response = json.dumps({'file': f.read()})
print(json_response)
And since you're looking to pass everything through jsonify anyway, even this would suffice:
with open("something.txt") as f:
    data = {'file': f.read()}
Where you can pass data directly through jsonify. The rest of it isn't sufficiently complete to offer any concrete comments, but the basic idea is OK.
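One detail worth flagging in the route above: json.dumps followed by jsonify serializes twice, so the client receives a JSON string containing JSON rather than a JSON object. A stdlib-only sketch of the difference:

```python
import json

data = {"file": "Hello\n\nSomething\n\nGoodbye"}

# Serializing once produces the JSON object you want to return:
once = json.dumps(data)
print(once)

# Serializing the already-serialized string again (which is effectively what
# jsonify(json_response) does) wraps it in an extra layer of quoting:
twice = json.dumps(once)
print(twice)

# The doubly-encoded payload decodes back to a string, not a dict:
print(type(json.loads(twice)))  # <class 'str'>
```

This is why passing the plain dict straight to jsonify is the simpler and more correct option.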
If you have a working whole, you could go to https://codereview.stackexchange.com/ to ask for some reviews, you should limit questions on StackOverflow to actual questions about getting something to work.

Upload binary files using python-gitlab API

I'm tasked with migrating repos to GitLab, and I decided to automate the process using python-gitlab. Everything works fine except for binary or considered-binary files like compiled object files (.o) or .zip files. (I know that repositories are not the place for binaries. I work with what I got and what I'm told to do.)
I'm able to upload them using:
import base64
import gitlab

project = gitlab.Gitlab("git_adress", "TOKEN")
bin_content = base64.b64encode(open("my_file.o", 'rb').read()).decode()
and then:
data = {'branch': 'main',
        'commit_message': 'go away',
        'actions': [{'action': 'create',
                     'file_path': 'my_file.o',
                     'content': bin_content,
                     'encode': 'base64'}]}
project.commits.create(data)
Problem is that content of such files inside gitlab repository is something like:
f0VMRgIBAQAAAAAAAAAAAAEAPgABAAAAAAAAAAAAA....
Which is not what I want.
If I don't .decode() I get error saying:
TypeError: Object of type bytes is not JSON serializable
Which is expected since I sent file opened in binary mode and encoded with base64.
I'd like to have such files uploaded/stored like when I upload them using web GUI "upload file" option.
Is it possible to achieve this using python-gitlab API ? If so, how?
The problem is that Python's base64.b64encode function will provide you with a bytes object, but REST APIs (specifically, JSON serialization) want strings. Also, the argument you want is encoding, not encode.
Here's the full example to use:
from base64 import b64encode
import gitlab

GITLAB_HOST = 'https://gitlab.com'
TOKEN = 'YOUR API KEY'
PROJECT_ID = 123  # your project ID

gl = gitlab.Gitlab(GITLAB_HOST, private_token=TOKEN)
project = gl.projects.get(PROJECT_ID)

with open('myfile.o', 'rb') as f:
    bin_content = f.read()

b64_content = b64encode(bin_content).decode('utf-8')
# b64_content must be a string!
f = project.files.create({'file_path': 'my_file.o',
                          'branch': 'main',
                          'content': b64_content,
                          'author_email': 'test@example.com',
                          'author_name': 'yourname',
                          'encoding': 'base64',  # important!
                          'commit_message': 'Create testfile'})
Then in the UI, you will see GitLab has properly recognized the contents as binary, rather than text.
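For the commits-based approach from the question, the same two fixes apply: the key is 'encoding', not 'encode', and the content must be a decoded string. A sketch of the corrected payload, with an inline byte string standing in for the contents of my_file.o so the example is self-contained:

```python
import base64

# Sample binary payload standing in for the contents of my_file.o.
bin_content = b"\x7fELF\x02\x01\x01\x00"
b64_content = base64.b64encode(bin_content).decode("utf-8")  # str, not bytes

data = {
    "branch": "main",
    "commit_message": "add binary file",
    "actions": [{
        "action": "create",
        "file_path": "my_file.o",
        "content": b64_content,
        "encoding": "base64",  # 'encoding', not 'encode'
    }],
}
# project.commits.create(data) would then store the file as binary.
```

With 'encoding' spelled correctly, GitLab decodes the base64 on its side instead of committing the encoded text verbatim.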

How to test feeding a PDF to a lambda function

I have written a lambda function whose goal is to accept a PDF, parse it using PyPDF2, and return specific text fields as a payload.
import PyPDF2 as pypdf

def lambda_handler(event, context):
    file_path = event['body']
    pdfdoc = open(file_path, 'rb')
    pdfreader = pypdf.PdfFileReader(pdfdoc)
    # dictionary of extracted text fields
    dic = pdfreader.getFormTextFields()
    # fields we want to send in our payload
    firstname = dic['name'].split(' ')[0]
    lastname = dic['name[0]'].split(' ')[1]
    payload = {"FirstName": firstname,
               "LastName": lastname}
    return jsonify(payload)
The problem is, I don't know if this is going to work, and I want to test it by feeding it a PDF to see what errors it spits out. How can I manually feed my function a PDF without tying it to API Gateway or another app? I've tested the code locally, but I've never passed a PDF to a lambda function before, so this is the part that is throwing me off.
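Since a Lambda handler is just a Python function, one way to test it without API Gateway is to import the module and call lambda_handler yourself with a hand-built event dict; the context argument can be None for a local run. A sketch with a stub handler standing in for the PyPDF2 one above, so the example runs without a real PDF:

```python
# Stub with the same signature as the real handler; the real one would open
# event['body'] with PyPDF2 and extract the form fields. The names here are
# placeholders for illustration only.
def lambda_handler(event, context):
    file_path = event["body"]
    return {"FirstName": "Jane", "LastName": "Doe", "Source": file_path}

# Build the event by hand, shaped the way your handler expects it:
event = {"body": "/tmp/sample.pdf"}
result = lambda_handler(event, None)  # context can be None for a local test
print(result)
```

With the real handler imported instead of the stub, pointing "body" at a local PDF path lets you see exactly which exceptions PyPDF2 raises before deploying anything.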

Office365-REST-Python-Client 401 on File Update

I finally got over the hurdle of uploading files into SharePoint which enabled me to answer my own question here:
Office365-REST-Python-Client Access Token issue
However, the whole point of my project was to add metadata to the files being uploaded to make it possible to filter on them. For the avoidance of doubt, I am talking about column information in SharePoint's Document Libraries.
Ideally, I would like to do this when I upload the files in the first place but my understanding of the rest API is that you have to upload first and then use a PUT request to update its metadata.
The link to the Git Hub for Office365-REST-Python-Client:
https://github.com/vgrem/Office365-REST-Python-Client
This library seems to be the answer, but the closest I can find to documentation is under the examples folder. Sadly, the example for updating file metadata does not exist. I think part of the reason for this stems from the only option being to use a PUT request on a list item.
According to the REST API documentation, which this library is built on, an item's metadata must be operated on as part of a list.
REST API Documentation for file upload:
https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-folders-and-files-with-rest#working-with-files-by-using-rest
REST API Documentation for updating list metadata:
https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-lists-and-list-items-with-rest#update-list-item
There is an example for updating a list item:
'https://github.com/vgrem/Office365-REST-Python-Client/blob/master/examples/sharepoint/listitems_operations_alt.py' but it returns a 401. If you look at my answer to my own question in the link up top, you will see that I granted this app full control. So an unauthorized response has stopped me dead in my tracks, wondering what to do next.
So after all that, my question is:
How do I upload a file to a SharePoint Document Library and add metadata to its column information using Office365-REST-Python-Client?
Kind Regards
Rich
Upload endpoint request
url: http://site url/_api/web/GetFolderByServerRelativeUrl('/Shared Documents')/Files/Add(url='file name', overwrite=true)
method: POST
body: contents of binary file
headers:
Authorization: "Bearer " + accessToken
X-RequestDigest: form digest value
content-type: "application/json;odata=verbose"
content-length:length of post body
could be converted to the following Python example:
ctx = ClientContext(url, ctx_auth)
file_info = FileCreationInformation()
file_info.content = file_content
file_info.url = os.path.basename(path)
file_info.overwrite = True
target_file = ctx.web.get_folder_by_server_relative_url("Shared Documents").files.add(file_info)
ctx.execute_query()
Once the file is uploaded, its metadata can be set like this:
list_item = target_file.listitem_allfields # get associated list item
list_item.set_property("Title", "New title")
list_item.update()
ctx.execute_query()
I'm glad I stumbled upon this post and Office365-REST-Python-Client in general. However, I'm currently stuck trying to update a file's metadata; I keep receiving:
'File' object has no attribute 'listitem_allfields'
Any help is greatly appreciated. Note: I also updated this module to v2.3.1.
Here's my code:
list_title = "Documents"
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
target_file = target_folder.upload_file(filename, filecontents)
ctx.execute_query()
list_item = target_file.listitem_allfields
I've also tried:
library_root = ctx.web.get_folder_by_server_relative_url('Shared Documents')
file_info = FileCreationInformation()
file_info.overwrite = True
file_info.content = filecontent
file_info.url = filename
upload_file = library_root.files.add(file_info)
ctx.load(upload_file)
ctx.execute_query()
list_item = upload_file.listitem_allfields
I've also tried to get the uploaded file item directly with the same result:
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
target_file = target_folder.upload_file(filename, filecontent)
ctx.execute_query()
uploaded_file = ctx.web.get_file_by_server_relative_url(target_file.serverRelativeUrl)
print(uploaded_file.__dict__)
list_item = uploaded_file.listitem_allfields
All variations return:
'File' object has no attribute 'listitem_allfields'
What am I missing? How do I add metadata to a new SPO file/list item uploaded via Python/Office365-REST-Python-Client?
Update:
The problem was I was looking for the wrong property of the uploaded file. The correct attribute is:
uploaded_file.listItemAllFields
Note the correct casing. Hopefully my question/answer will help someone else who is as unaware as I was of attribute casing.
