DoubleClick Bid Manager API not updating the line items - python

I am downloading line items from DBM, modifying them, and uploading them again to DBM. Once they are uploaded, if I download and view them again, I cannot see the modifications. There is no error in the code, and the API is not returning any errorStatus either.
Code to upload line items:
service = build('doubleclickbidmanager', config.Version, http=credentials.authorize(httplib2.Http()))
request = service.lineitems().uploadlineitems(body=BODY)
response = request.execute()
if 'uploadStatus' in response and 'errors' in response['uploadStatus']:
    for error in response['uploadStatus']['errors']:
        logging.error(error)
Code to download line items:
service = build('doubleclickbidmanager', config.Version, http=credentials.authorize(httplib2.Http()))
request = service.lineitems().downloadlineitems(body=body)
print "Downloading Line Items.."
logging.info("function: Downloading Line Items..")
# Execute the request and save the response contents.
with open(file_path, 'wb') as handler:
    # Call the API, getting the (optionally filtered) list of line items,
    # then write the contents of the response to a CSV file.
    lidata = request.execute()['lineItems'].encode('utf-8')
    logging.info("function: request.execute succeeded.")
    handler.write(lidata)
print 'Download Completed.'
Is this the proper way to check whether the line item is modified, or am I doing something wrong? Is there any other way to check it?

Please note that the API will update a line item only in the following cases:
-If the flag is set for a row, that row will be updated; not all of the rows will get updated.
-Only those columns whose "Writeable" value is "yes" can be updated.
Refer to this link: https://developers.google.com/bid-manager/guides/entity-write/format
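One way to verify whether an upload actually changed anything is to re-download the line items afterwards and diff the CSV against what was sent. Below is a minimal sketch built on the calls already shown in the question; the 'format': 'CSV' field in the upload body is an assumption about the request shape, so check it against your config.Version:

service = build('doubleclickbidmanager', config.Version, http=credentials.authorize(httplib2.Http()))

def download_csv(service, query_body):
    # Same downloadlineitems call as above; returns the raw CSV text.
    return service.lineitems().downloadlineitems(body=query_body).execute()['lineItems']

before_csv = download_csv(service, body)

# Edit only columns marked as writeable in the entity write file format,
# and keep the row flags set for the rows you want updated.
modified_csv = before_csv  # placeholder: apply your edits here

service.lineitems().uploadlineitems(body={'lineItems': modified_csv, 'format': 'CSV'}).execute()

after_csv = download_csv(service, body)
if after_csv == before_csv:
    print('No visible changes - check that the edited columns are writeable.')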


I am looking to create an API endpoint route that returns txt in a JSON format - Python

I'm new to developing and my question involves creating an API endpoint in our route. The API will be used for a POST from a Vuetify UI. Data will come from our MongoDB. We will be getting a .txt file for our shell script, but it will have to be POSTed as JSON. I think these are the steps for converting the text file:
1) create a list for the lines of the .txt
2) add each line to the list
3) join the list elements into a string
4) create a dictionary with the file/file content and convert it to JSON
This is my current code for the steps:
import json

### something.txt: an example of the shell script ###
f = open("something.txt")

# create a list to put the lines of the file in
file_output = []

# add each line of the file to the list
for line in f:
    file_output.append(line)

# mashes all of the list elements together into one string
fileoutput2 = ''.join(file_output)
print(fileoutput2)

# create a dict with file and file content and then convert to JSON
json_object = {"file": fileoutput2}
json_response = json.dumps(json_object)
print(json_response)
{"file": "Hello\n\nSomething\n\nGoodbye"}
I have the following code for my baseline below, which I execute on the button press in the UI:
@bp_customer.route('/install-setup/<string:customer_id>', methods=['POST'])
def install_setup(customer_id):
    cust = Customer()
    customer = cust.get_customer(customer_id)
    ### example of a series of lines with newline characters between them
    script_string = "Beginning\nof\nscript\n"
    json_object = {"file": script_string}
    json_response = json.dumps(json_object)
    # get the install shell script content
    # replace the values (somebody has already done this)
    # attempt to return the below example json_response
    return make_response(jsonify(json_response), 200)
My current Vuetify button press code is here; I just have to amend it to a POST and the new route once this is established:
onClickScript() {
  console.log("clicked");
  axios
    .get("https://sword-gc-eadsusl5rq-uc.a.run.app/install-setup/")
    .then((resp) => {
      console.log("resp: ", resp.data);
      this.scriptData = resp.data;
    });
},
I'm having a hard time combining these 2 concepts in the correct way. Any input as to whether I'm on the right path? Insight from anyone who's much more experienced than me?
You're on the right path, but needlessly complicating things a bit. For example, the first bit could be just:
import json

with open("something.txt") as f:
    json_response = json.dumps({'file': f.read()})
print(json_response)
And since you're looking to pass everything through jsonify anyway, even this would suffice:
with open("something.txt") as f:
    data = {'file': f.read()}
Where you can pass data directly through jsonify, as in the sketch below. The rest of it isn't sufficiently complete to offer any concrete comments, but the basic idea is OK.
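For illustration, a minimal Flask route sketch that hands the dict straight to jsonify (the get_script_for_customer helper is hypothetical, standing in for the Customer/MongoDB lookup):

from flask import jsonify, make_response

@bp_customer.route('/install-setup/<string:customer_id>', methods=['POST'])
def install_setup(customer_id):
    # Hypothetical helper: fetch the customer's shell-script text from MongoDB.
    script_string = get_script_for_customer(customer_id)
    # jsonify serializes the dict itself, so no json.dumps is needed here.
    return make_response(jsonify({'file': script_string}), 200)

This also avoids double-encoding: passing a json.dumps string through jsonify would wrap the JSON text inside another JSON string.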
If you have a working whole, you could go to https://codereview.stackexchange.com/ to ask for a review; questions on Stack Overflow should be limited to actual problems with getting something to work.

How does django really handle multiple requests on development server?

I am making a little Django app to serve translations for my React frontend. The way it works is as follows:
The frontend tries to find a translation using a key.
If the translation for that key is not found, it sends a request to the backend with the missing key.
On the backend, the missing key is appended to a JSON file.
Everything works just fine when the requests are sent one at a time (when one finishes, the next is sent). But when multiple requests are sent at the same time, everything breaks: the JSON file gets corrupted. It's as if all the requests are changing the file at the same time, which causes this to happen. I am not sure that's the case, because I think the file cannot be edited by two processes at the same time (correct me if I am wrong), but I don't receive such an error, which indicates that the requests are handled one at a time according to this and this.
Also, I tried something which, to my surprise, worked: adding time.sleep(1) to the top of my API view. When I did this, everything worked as expected.
What is going on?
Here is the code, just in case it matters:
@api_view(['POST'])
def save_missing_translation_keys(request, lng, ns):
    time.sleep(1)
    missing_trans_path = MISSING_TRANS_DIR / f'{lng}.json'
    # Read lng file and get current missing keys for given ns
    try:
        with open(missing_trans_path, 'r', encoding='utf-8') as missing_trans_file:
            if is_file_empty(missing_trans_path):
                missing_keys_dict = {}
            else:
                missing_keys_dict = json.load(missing_trans_file)
    except FileNotFoundError:
        missing_keys_dict = {}
    except Exception as e:
        # Even if the file is not empty, we might not be able to parse it for some reason,
        # so we log any errors in the log file
        with open(MISSING_LOG_FILE, 'a', encoding='utf-8') as logFile:
            logFile.write(
                f'could not save missing keys {str(list(request.data.keys()))}\nnamespace {lng}/{ns} file can not be parsed because\n{str(e)}\n\n\n')
        raise e
    # Add new missing keys to the list above.
    ns_missing_keys = missing_keys_dict.get(ns, [])
    for missing_key in request.data.keys():
        if missing_key and isinstance(missing_key, str):
            ns_missing_keys.append(missing_key)
        else:
            raise ValueError('Missing key not allowed')
    missing_keys_dict.update({ns: list(set(ns_missing_keys))})
    # Write new missing keys to the file
    with open(missing_trans_path, 'w', encoding='utf-8') as missing_trans_file:
        json.dump(missing_keys_dict, missing_trans_file, ensure_ascii=False)
    return Response()
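As an aside on the corruption described above: if several worker processes really do run the read-modify-write at the same time, one way to rule that out is to hold an exclusive lock around the whole cycle. A minimal, Unix-only sketch using fcntl (the separate .lock file is an assumption, not something from the question):

import fcntl
import json

def append_missing_keys(path, ns, keys):
    # Serialize the whole read-modify-write cycle with an exclusive advisory lock.
    with open(str(path) + '.lock', 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            with open(path, 'r', encoding='utf-8') as f:
                data = json.load(f)
        except (FileNotFoundError, json.JSONDecodeError):
            data = {}
        data[ns] = sorted(set(data.get(ns, [])) | set(keys))
        with open(path, 'w', encoding='utf-8') as f:
            json.dump(data, f, ensure_ascii=False)
        # The lock is released when lock_file is closed at the end of the with block.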

"Syntax error" in lambda function but tabbed properly and working in test case?

I have a Lambda setup that is utilized to grab a user-uploaded file via a React web app and submit it to an S3 bucket; a Lambda function written in Python then grabs said image from the bucket, translates it, and submits a translated version of that file back into the bucket.
The issue is that this program is only working on the test cases/in theory. To elaborate, I bring in the file name as the "event" for the Lambda function and carry it to various other functions like so:
def get_kv_map(event):
    filePath = event
    fileExt = filePath.get('body')
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('myBucket')
    obj = bucket.Object(bucket)
    client = boto3.client('textract')  # We utilize boto3's textract
    response = client.analyze_document(Document={'S3Object': {'Bucket': 'myBucket', 'Name': fileExt}}, FeatureTypes=['FORMS'])

    # Get the text blocks
    blocks = response['Blocks']  # We make a blocks variable that will be the blocks we find in the document

    # get key and value maps
    key_map = {}
    value_map = {}
    block_map = {}
    for block in blocks:  # Traverse the blocks found in the document
        block_id = block['Id']  # Set variable for blockId to the Id's found on that block location
        block_map[block_id] = block  # Make the block map at that ID be the block variable
        if block['BlockType'] == "KEY_VALUE_SET":  # If the block is a key/value set, check whether it is a key; if not, it is a value. Send it to the respective map.
            if 'KEY' in block['EntityTypes']:
                key_map[block_id] = block
            else:
                value_map[block_id] = block
    return key_map, value_map, block_map  ####### LINE WITH ERROR ######
Why is this line the one causing an error, and more importantly, why is it only happening on the website? When I use the program in cloud9 with my test cases, everything is perfectly fine. But the moment I try and have the website submit the data for the function to "do its work" it seems like it halts right after it uploads the file and gets to this line of all lines. I've tried checking tabs/spaces and everything. I'm very perplexed.
Thank you for any help, I've been pulling my hair out today.
So, I tried to analyze what I'm bringing in (the event) and what it contains. It seems that it's a permission error, but I cannot understand why; I gave myself full access to various features in AWS. Below is the error I get when I utilize the test case as the event being brought in.
"errorType": "InvalidS3ObjectException",
"errorMessage": "An error occurred (InvalidS3ObjectException) when calling the AnalyzeDocument operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.",
"stackTrace": [
" File \"/var/task/scrapeShow/lambda_function.py\", line 133, in main\n key_map, value_map, block_map = get_kv_map(event) #Take map variables in to get the key and value map we need.\n",
" File \"/var/task/scrapeShow/lambda_function.py\", line 39, in get_kv_map\n response = client.analyze_document(Document={'S3Object': {'Bucket': 'myBucket', 'Name': fileExt}}, FeatureTypes=['FORMS'])\n",
" File \"/var/runtime/botocore/client.py\", line 316, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/runtime/botocore/client.py\", line 626, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
]}
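The error message points at the object key, region, or permissions rather than at the Python syntax. One quick way to narrow it down (a diagnostic sketch, not part of the original code; bucket and key names are placeholders) is to check from inside the Lambda whether the key you pass to Textract is actually readable:

import boto3
from botocore.exceptions import ClientError

def check_object(bucket_name, key):
    s3 = boto3.client('s3')
    try:
        # head_object fails with 404/403 if the key is wrong or the role lacks s3:GetObject.
        meta = s3.head_object(Bucket=bucket_name, Key=key)
        print('Object found, size:', meta['ContentLength'])
    except ClientError as e:
        print('Cannot read object:', e.response['Error']['Code'])
    # Textract also needs to run in the same region as the bucket when given an S3Object reference.
    print('Bucket region:', s3.get_bucket_location(Bucket=bucket_name)['LocationConstraint'])

If head_object fails here, the likely culprit is the value of fileExt (for example, the raw request body instead of the file name) or the Lambda role's S3 permissions, not the return statement.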

Office365-REST-Python-Client 401 on File Update

I finally got over the hurdle of uploading files into SharePoint which enabled me to answer my own question here:
Office365-REST-Python-Client Access Token issue
However, the whole point of my project was to add metadata to the files being uploaded to make it possible to filter on them. For the avoidance of doubt, I am talking about column information in SharePoint's Document Libraries.
Ideally, I would like to do this when I upload the files in the first place, but my understanding of the REST API is that you have to upload first and then use a PUT request to update the metadata.
The link to the GitHub repository for Office365-REST-Python-Client:
https://github.com/vgrem/Office365-REST-Python-Client
This library seems to be the answer, but the closest thing I can find to documentation is under the examples folder. Sadly, an example for updating file metadata does not exist. I think part of the reason for this stems from the only option being to use a PUT request on a list item.
According to the REST API documentation, which this library is built on, an item's metadata must be operated on as part of a list.
REST API Documentation for file upload:
https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-folders-and-files-with-rest#working-with-files-by-using-rest
REST API Documentation for updating list metadata:
https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/working-with-lists-and-list-items-with-rest#update-list-item
There is an example for updating a list item:
https://github.com/vgrem/Office365-REST-Python-Client/blob/master/examples/sharepoint/listitems_operations_alt.py, but it returns a 401. If you look at my answer to my own question in the link up top, you will see that I granted this app full control, so an unauthorized response has stopped me dead in my tracks, wondering what to do next.
So after all that, my question is:
How do I upload a file to a SharePoint Document Library and add metadata to its column information using Office365-REST-Python-Client?
Kind Regards
Rich
Upload endpoint request
url: http://site url/_api/web/GetFolderByServerRelativeUrl('/Shared Documents')/Files/Add(url='file name', overwrite=true)
method: POST
body: contents of binary file
headers:
    Authorization: "Bearer " + accessToken
    X-RequestDigest: form digest value
    content-type: "application/json;odata=verbose"
    content-length: length of post body
could be converted to the following Python example:
ctx = ClientContext(url, ctx_auth)
file_info = FileCreationInformation()
file_info.content = file_content
file_info.url = os.path.basename(path)
file_info.overwrite = True
target_file = ctx.web.get_folder_by_server_relative_url("Shared Documents").files.add(file_info)
ctx.execute_query()
Once the file is uploaded, its metadata can be set like this:
list_item = target_file.listitem_allfields # get associated list item
list_item.set_property("Title", "New title")
list_item.update()
ctx.execute_query()
I'm glad I stumbled upon this post and Office365-REST-Python-Client in general. However, I'm currently stuck trying to update a file's metadata; I keep receiving:
'File' object has no attribute 'listitem_allfields'
Any help is greatly appreciated. Note: I also updated this module to v2.3.1.
Here's my code:
list_title = "Documents"
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
target_file = target_folder.upload_file(filename, filecontents)
ctx.execute_query()
list_item = target_file.listitem_allfields
I've also tried:
library_root = ctx.web.get_folder_by_server_relative_url('Shared Documents')
file_info = FileCreationInformation()
file_info.overwrite = True
file_info.content = filecontent
file_info.url = filename
upload_file = library_root.files.add(file_info)
ctx.load(upload_file)
ctx.execute_query()
list_item = upload_file.listitem_allfields
I've also tried to get the uploaded file item directly with the same result:
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
target_file = target_folder.upload_file(filename, filecontent)
ctx.execute_query()
uploaded_file = ctx.web.get_file_by_server_relative_url(target_file.serverRelativeUrl)
print(uploaded_file.__dict__)
list_item = uploaded_file.listitem_allfields
All variations return:
'File' object has no attribute 'listitem_allfields'
What am I missing? How do I add metadata to a new SPO file/list item uploaded via Python/Office365-REST-Python-Client?
Update:
The problem was I was looking for the wrong property of the uploaded file. The correct attribute is:
uploaded_file.listItemAllFields
Note the correct casing. Hopefully my question/answer will help someone else who is as unaware as I was of attribute/object casing.
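For completeness, a short sketch of the corrected flow under the casing noted above (the attribute name differs between library versions, and the "Title" column is just an example field):

list_title = "Documents"
target_folder = ctx.web.lists.get_by_title(list_title).root_folder
target_file = target_folder.upload_file(filename, filecontents)
ctx.execute_query()

# Note the camelCase attribute: listItemAllFields, not listitem_allfields (in this library version).
list_item = target_file.listItemAllFields
list_item.set_property("Title", "Uploaded via Office365-REST-Python-Client")
list_item.update()
ctx.execute_query()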

How to serve pdf (or non-image) files through google app engine with python flask? [duplicate]

This question already has an answer here:
serving blobs from GAE blobstore in flask
(1 answer)
Closed 7 years ago.
Here is the code that I currently use to upload the file that I get:
header = file.headers['Content-Type']
parsed_header = parse_options_header(header)
blob_key = parsed_header[1]['blob-key']
UserObject(file = blobstore.BlobKey(blob_key)).put()
On the serving side, I tried doing the same thing that I do for images. That is, I get the UserObject and then just do
photoUrl = images.get_serving_url(str(userObject.file)).
To my surprise, this hack worked perfectly on the local server. However, in production, it raises an exception:
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/images/__init__.py", line 1794, in get_serving_url
return rpc.get_result()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/images/__init__.py", line 1892, in get_serving_url_hook
raise _ToImagesError(e, readable_blob_key)
TransformationError
What is a proper way to store/serve non-image files such as pdf?
Don't use the images API, that won't work since (at least in production) it performs image content/format checks and you're not serving images.
You can have your own request path namespace with a handler, just like any other handler in your app, maybe something along these lines:
def pdfServingUrl(request, blob_key):
    from urlparse import urlsplit, urlunsplit
    split = list(urlsplit(request.url))
    split[2] = '/PDF?key=%s' % blob_key.urlsafe()  # rewrite the url 'path'
    return urlunsplit(tuple(split))
And in the handler for /PDF*:
blob_key = self.request.get('key')
self.response.write(blobstore.BlobReader(blob_key).read())
If you use the Blobstore API on top of Google Cloud Storage you can also use direct GCS access paths, as mentioned in this answer: https://stackoverflow.com/a/34023661/4495081.
Note: the above code is a barebones Python suggestion just for accessing the blob data; additional stuff like headers, etc. is not addressed.
Per @NicolaOtasevic's comment, for Flask the more complete code might look like this:
blob_info = blobstore.get(request.args.get('key'))
response = make_response(blob_info.open().read())
response.headers['Content-Type'] = blob_info.content_type
response.headers['Content-Disposition'] = 'filename="%s"' % blob_info.filename
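For completeness, a sketch of how that snippet might sit inside a full Flask route; the '/pdf' route name and 'key' query parameter are assumptions mirroring the handler described above:

from flask import Flask, request, make_response
from google.appengine.ext import blobstore

app = Flask(__name__)

@app.route('/pdf')
def serve_pdf():
    # Look up the blob by the key passed in the query string and stream its bytes back.
    blob_info = blobstore.get(request.args.get('key'))
    response = make_response(blob_info.open().read())
    response.headers['Content-Type'] = blob_info.content_type
    response.headers['Content-Disposition'] = 'filename="%s"' % blob_info.filename
    return response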
