I've been able to read sheets, rows, columns, and cells using the Python SDK for Smartsheet, but I haven't been able to actually change/write/update a cell value. I've simplified my code quite a bit and I'm left with this:
import smartsheet
MySS = smartsheet.Smartsheet(MyApiKey)
single_row = MySS.Sheets.get_row(SHEET_ID, ROW_ID)
destination_cell = single_row.get_column(DST_COLUMN_ID)
destination_cell.value = "new value"
single_row.set_column(destination_cell.column_id, destination_cell)
MySS.Sheets.update_rows(SHEET_ID, ROW_ID)
I get the following error when I run this code:
Traceback (most recent call last):
File "C:/Users/XXXXXX/Python/Smartsheet/test.py", line 24, in <module>
MySS.Sheets.update_rows(SHEET_ID, ROW_ID)
File "C:\Users\XXXXXX\Python\virtualenv PC Smartsheet\lib\site-packages\smartsheet\sheets.py", line 961, in update_rows
for item in _op['json']:
TypeError: 'long' object is not iterable
I have tried passing the ROW_ID in the last line of code as ROW_ID, [ROW_ID], and [ROW_ID,], but I get the same error each time.
I'm using this as my reference: http://smartsheet-platform.github.io/api-docs/?python#update-row(s)
What am I doing wrong?
You're so close! Rather than passing the ROW_ID to update_rows(), you actually want to pass the row object in a list.
So, in your case, you would just want to change your last line to:
MySS.Sheets.update_rows(SHEET_ID, [single_row])
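For context, a minimal sketch of the full corrected flow (assuming MyApiKey, SHEET_ID, ROW_ID, and DST_COLUMN_ID are defined as in your snippet) would look something like this:
import smartsheet

# Sketch only: MyApiKey, SHEET_ID, ROW_ID and DST_COLUMN_ID are the
# placeholders from the question, not real values.
MySS = smartsheet.Smartsheet(MyApiKey)

single_row = MySS.Sheets.get_row(SHEET_ID, ROW_ID)        # fetch the existing row
destination_cell = single_row.get_column(DST_COLUMN_ID)   # pick the cell to change
destination_cell.value = "new value"
single_row.set_column(destination_cell.column_id, destination_cell)

# update_rows() expects a list of Row objects, not a row id
MySS.Sheets.update_rows(SHEET_ID, [single_row])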
I have written the Python function below (a snippet of the full code) to run in AWS Lambda. Its purpose is to take a GeoJSON file from an S3 bucket and parse it accordingly.
Once parsed, the data is placed back into JSON format (data) and should then be inserted into the specified database using:
try:
    bulk_item['uuid'] = str(uuid.uuid4())
    bulk_item['name'] = feature_name
    bulk_item['type'] = feature_type
    bulk_item['info'] = obj
    bulk_item['created'] = epoch_time
    bulk_item['scope'] = 2

    data = json.dumps(bulk_item)
    print(data)

    self.database.upsert_record(self.organisation, json_doc=data)
except Exception as e:
    print(f'Exception: {e.__class__.__name__}({e})')
The db_access file that the code above relates to is another Python script. The upsert_record function is defined as follows:
def upsert_record(self, organisation, json_doc={}):
My code works perfectly until I try to upsert into the database. Once execution reaches this line, it throws the error:
Traceback (most recent call last):
File "/var/task/s3_asset_handler.py", line 187, in process_incoming_file
self.database.upsert_record(self.organisation, json_doc=data)
File "/opt/python/database_access.py", line 1218, in upsert_record
new_uuid = json_doc['uuid']
TypeError: string indices must be integers
I can't seem to figure out the issue at all.
You are trying to get an element from a JSON object, but you are passing a string.
The line
data = json.dumps(bulk_item)
creates a string representing the object, not a dict.
Try using bulk_item on its own.
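A minimal sketch of the fix, assuming upsert_record expects a dict (which the json_doc['uuid'] lookup implies):
# json.dumps() returns a str, and indexing a str with 'uuid' is what raises
# "TypeError: string indices must be integers", so pass the dict itself:
self.database.upsert_record(self.organisation, json_doc=bulk_item)

# Alternatively, if upsert_record really must receive a JSON string,
# decode it back into a dict inside upsert_record before indexing:
# new_uuid = json.loads(json_doc)['uuid']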
[Screenshot: the API snippet I am working on.]
Here is the code snippet that fetches and parses the API data shown above:
for i in parse_json["lines"]:
list = []
if (i["chaosValue"] > chaos_ex_ratio):
value = (i["variant"],i["baseType"], i["exaltedValue"], "Exalted Orb", i["levelRequired"])
else:
value = (i["variant"],i["baseType"], i["exaltedValue"], "Chaos Orb", i["levelRequired"])
list.append(value)
If I remove i["variant"] everything works perfectly, but i["variant"] throws a KeyError even though "variant": "Shaper/Redeemer" exists in the JSON file. Thanks for any help.
Error code:
Traceback (most recent call last):
File "C:\Users\emosc\PycharmProjects\GithubPushs\psychescape_price_fetcher\psychescape_price_fetcher\main.py", line 265, in <module>
BaseType_Values()
File "C:\Users\emosc\PycharmProjects\GithubPushs\psychescape_price_fetcher\psychescape_price_fetcher\main.py", line 219, in BaseType_Values
value = (i["variant"],i["baseType"], i["exaltedValue"], "Exalted Orb", i["levelRequired"])
KeyError: 'variant'
You're showing us a screenshot where the first line has a variant, but your code will fail unless every line has a variant.
What is the output of this snippet?
for l in parse_json["lines"]:
if "variant" not in l:
print("Line {} does not have a 'variant'".format(l))
If it outputs any lines, there's your problem. If it doesn't, something deeper is going on. I would try to catch the KeyError exception and put a breakpoint in the handler to continue investigating.
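If some lines really do lack the key, one option (a sketch, not part of your original code) is to fall back to a default with dict.get so a missing "variant" no longer raises:
for i in parse_json["lines"]:
    # dict.get returns the supplied default instead of raising KeyError
    variant = i.get("variant", "N/A")
    currency = "Exalted Orb" if i["chaosValue"] > chaos_ex_ratio else "Chaos Orb"
    value = (variant, i["baseType"], i["exaltedValue"], currency, i["levelRequired"])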
I am reading a JSON file into a dictionary of keys and values, but I am struggling to use a variable as the lookup key when searching the loaded data.
x = value_cloud = "%s%s%s" % (["L1_METADATA_FILE"], ["IMAGE_ATTRIBUTES"], ["CLOUD_COVER"])

for meta in filelist(dir):
    with open(meta) as data_file:
        data = json.load(data_file)
        cloud = str(data[x])
The error I get is:
Traceback (most recent call last):
File "E:\SAMPLE\Sample_Script_AWS\L8_TOA_using_gdal_rasterio.py", line 96, in <module>
cloud = str(data[x])
KeyError: "['L1_METADATA_FILE']['IMAGE_ATTRIBUTES']['CLOUD_COVER']"
What I actually want is to search the json file for the key in the variable...
The keys do exist in the JSON file, because when I run the following I get the correct output:
cloud = str(data["L1_METADATA_FILE"]["IMAGE_ATTRIBUTES"]["CLOUD_COVER"])
print cloud
My knowledge of Python is sketchy; I am passing the variable through as a string rather than an expression or object, which is why I get that error. What is the correct way to create the variable and call the keys that I want?
Thanks in advance!
Your key ends up including the brackets in the string, which is where the error comes from. If you use each key in its own variable, like this:
x, y, z = "L1_METADATA_FILE", "IMAGE_ATTRIBUTES", "CLOUD_COVER"
and then:
cloud = str(data[x][y][z])
it should avoid any errors.
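If you would rather keep the whole lookup path in a single variable, a sketch of one way to do it (the keys list here mirrors your example) is to store the keys in a list and walk the nested dicts in a loop:
keys = ["L1_METADATA_FILE", "IMAGE_ATTRIBUTES", "CLOUD_COVER"]

value = data
for key in keys:
    # descend one level per key instead of building a single string key
    value = value[key]
cloud = str(value)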
I keep getting a FieldMissingError on a field ('severa_id') that I am sure exists. I've checked ver_33.py, which shows that the exception is raised if the field is not in self._meta.fields.
table._meta.fields shows the field being there:
print(table._meta.fields)
>>>
['proj_code', 'severa_id', 'rec_id', 'ext_key']
>>>
This is the code I'm trying:
table = dbf.Table(path_to_dbf)
table.open()

for row in dbf.Process(table):
    for project in projects:
        if str(row.proj_code)[0:4] == project["ProjectNumber"]:
            row.write_record(severa_id=project["GUID"])
I've also tried these methods of setting the field:
row.severa_id = project["ProjectNumber"]
#row.write()
row.write_record()
Lastly, I've also tried setting each of the other fields (with a random string) which results in the same error.
EDIT: I am using the dbf module (https://pypi.python.org/pypi/dbf/0.96.005)
EDIT: The original traceback:
Traceback (most recent call last):
File "<string>", line 420, in run_nodebug
File "C:\Users\Alexander\Documents\update_projecten\update_dbf.py", line 58, in <module>
row.write()
File "C:\Users\Alexander\Envs\vfp\lib\site-packages\dbf\ver_33.py", line 2451, in __getattr__
raise FieldMissingError(name)
dbf.ver_33.FieldMissingError: 'write: no such field in table'
EDIT: The final version of the script that worked. Note that it does not use Process, and that the row and field to be written are passed to dbf.write:
table = dbf.Table(path_to_dbf)
table.open()

for row in table:
    for project in projects:
        if str(row.proj_code)[0:4] == project["ProjectNumber"]:
            dbf.write(row, severa_id=project["GUID"])
write_record no longer exists as a row method, so the error you are seeing is probably stating that write_record is not a field.
Instead, try:
dbf.write(row, severa_id=project['GUID'])
I'm having some trouble with Salesforce; I've never used it before, so I'm not entirely sure what is going wrong here. I am using the simple_salesforce Python module. I have successfully pulled data from Salesforce standard objects, but this custom object is giving me trouble. My query is
result = sf.query("Select Name from Call_Records__c")
which produces this error:
Traceback (most recent call last):
File "simple.py", line 15, in <module>
result = sf.query("Select Name from Call_Records__c")
File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 276, in query
_exception_handler(result)
File "/usr/local/lib/python2.7/dist-packages/simple_salesforce/api.py", line 634, in _exception_handler
raise exc_cls(result.url, result.status_code, name, response_content)
simple_salesforce.api.SalesforceMalformedRequest: Malformed request https://sandbox.company.com/services/data/v29.0/query/?q=Select+Name+from+Call_Records__c. Response content: [{u'errorCode': u'INVALID_TYPE', u'message': u"\nSelect Name from Call_Records__c\n ^\nERROR at Row:1:Column:18\nsObject type 'Call_Records__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name.
Please reference your WSDL or the describe call for the appropriate names."}]
I've tried it with and without the __c for both the table name and the field name, but I still can't figure this out. Is anything blatantly wrong?
Make sure the API name of your custom object is actually Call_Records__c and not CallRecords__c, then query with whichever one it is:
result = sf.query("Select Name from Call_Records__c")
Or
result = sf.query("Select Name from CallRecords__c")
Try using:
result = sf.query("Select Name from CallRecords__c")
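To verify the exact API name (a sketch that builds on the describe call the error message mentions), you can list the sObjects visible to your user with simple_salesforce:
# Print every sObject whose API name looks like the custom object,
# then run the query with the name that actually appears.
for sobject in sf.describe()["sobjects"]:
    if "call" in sobject["name"].lower():
        print("{} ({})".format(sobject["name"], sobject["label"]))

result = sf.query("Select Name from Call_Records__c")  # use the name printed above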