Listing the Document ID from TinyDB - python

I am trying to list out the contents of db.json from this GitHub repo (https://github.com/syntaxsmurf/todo/blob/main/main.py), on line 40 in main.py. Specifically this line:
for item in db:
    table.add_row(item["task"], item["completed_by"], item["status"])  # need to find the right command for pulling data out of TinyDB into these example strings
As you can see, I can pull out and list the fields whose names I defined myself, for example with item["task"].
Here is an example entry from db.json, in case you don't want to look at GitHub.
{
    "_default": {
        "1": {
            "completed_by": "Today",
            "status": "Pending",
            "task": "Shopping"
        }
    }
}
What I am missing is how to pull out the auto-generated ID "1" and list it as well. I want to use it so the user can remove the entry later.
Thank you, I hope the question makes sense!

From reddit user azzal07: item.doc_id is the correct way to do this.
for item in db:
    table.add_row(str(item.doc_id), item["task"], item["completed_by"], item["status"])
The str() call is there for Rich's table function; it does not seem to work if the value is an int.
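To build on that answer, here is a minimal sketch of using the same doc_id to delete an entry later. The prompt and variable names are illustrative, not code from the linked repo:
from tinydb import TinyDB

db = TinyDB("db.json")

# List every entry together with its auto-generated document ID
for item in db:
    print(item.doc_id, item["task"], item["completed_by"], item["status"])

# Remove an entry by its document ID, e.g. one the user typed in
chosen_id = int(input("ID to remove: "))  # hypothetical prompt
db.remove(doc_ids=[chosen_id])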

Related

Binance API Python - How to use a specific output

When I let my bot place an order, it gives me something like the following output:
[{
    "symbol": "BNBBTC",
    "orderId": 3301945,
    "clientOrderId": "6gCrw2kRUAF9CvJDGP16IP",
    "transactTime": 1507725176595,
    "price": "0.00000000",
    "origQty": "10.00000000",
    "executedQty": "10.00000000",
    "status": "FILLED",
    "timeInForce": "GTC",
    "type": "LIMIT",
    "side": "SELL"
}]
I want my bot to automatically fetch the orderId so that it can keep working with it by itself, without me manually typing in the ID.
For example, if I want to cancel that order:
result = client.cancel_order(
    symbol='BNBBTC',
    orderId='orderId')
I'd need to ask for the ID first, replace that 'orderId', and run again to be able to cancel the order. There has to be a way to automate this, right?
I suggest looking at some basic tutorials on dictionaries; getting the value of a key is one of the first things you learn.
In your case the dictionary structure is very plain, so to get the value of orderId you can just use your_dictionary.get("orderId").
Note that I use .get() instead of dict[key]: that way, if there is no orderId in your dictionary, you simply get None back, whereas dict[key] would raise a KeyError for a missing key.
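A minimal sketch of wiring this together, assuming client is an already-configured python-binance client and that your order call returns the list-of-one-dict shown in the question:
# 'response' stands in for the output shown in the question (a list containing one dict)
response = [{
    "symbol": "BNBBTC",
    "orderId": 3301945,
    # ... remaining fields omitted
}]

order_id = response[0].get("orderId")  # 3301945, or None if the key is missing
if order_id is not None:
    result = client.cancel_order(symbol="BNBBTC", orderId=order_id)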

Success Factors - updating custom Picklists

Using Postman Version 7.34.0 (7.34.0)
Technically I am using Django/Python in my application, but I am testing with Postman; I will use the same payload when making the call to the SuccessFactors ATS.
The documentation for SuccessFactors: link
I am trying to update a Candidate entity, in particular a custom Picklist, with custom PicklistOptions on that Candidate entity.
I was given a list of values for each Picklist, like this one:
Field ID: "myCustomPicklist" (used as name field in the payload)
Field label: Some Label (irrelevant - used for UI display)
Field Type: Picklist
And for each option of the Picklist:
...
PicklistOption value: <Str, "some_value"> (value string displayed in the UI)
PicklistOption external code: <Str, "picklistOption_external_code">
PicklistOption external ID: <Int, picklistOption_id >
...
I was able to update the Picklist field on the candidate using the PicklistOption external IDs, this way:
POST: https://<subdomain>.successfactors.eu/odata/v2/upsert
Payload:
{
    "__metadata": {"uri": "Candidate(<candidate_id>)"},
    "firstName": "some_name",
    "lastName": "some_last_name",
    ...
    "myCustomPicklist": {
        "__metadata": {"uri": "PicklistOption('<picklistOption_id>')"},
        "optionValue": "<picklistOption_id>"
    }
}
Response:
...
<d:status>OK</d:status>
<d:editStatus>UPDATED</d:editStatus>
<d:message>Candidate has been updated successfully</d:message>
<d:index m:type="Edm.Int32">0</d:index>
<d:httpCode m:type="Edm.Int32">204</d:httpCode>
...
The problem:
I understand that the PicklistOption external ID is the ID from the database, which means I would have to use a different ID in each environment; that would force me to maintain a special mapping, and I would like to avoid that.
My question:
How can I use the picklistOption_external_code instead?
I am looking for the right syntax, as I was not able to find it in the documentation.
That would allow me to have a single map of fields, as the code (naming) will not change between environments, while the IDs will change.
Thank you!
EDIT 1
This works:
"myCustomPicklist": {
"__metadata": {"uri": "PicklistOption('<picklistOption_id>')"},
"externalCode":"<picklistOption_external_code>"
}
However, I cannot find the syntax for replacing the remaining picklistOption_id in the __metadata URI.
EDIT 2
This also works. Well, almost:
"myCustomPicklist": {
"__metadata": {"uri": "PicklistOption('<PicklistOption_value>')"},
"externalCode":"<picklistOption_external_code>"
}
This seems to pass the validation:
<d:message>Candidate has been updated successfully</d:message>
The PicklistOption_value is a legitimate descriptor for the PicklistOption, but the SuccessFactors UI then seems to display the field as a null value.
I verified that if I use a fake PicklistOption_value I see an error:
<d:message>Candidate upsert failed: myCustomPicklist invalid, with the index 0</d:message>
For any given SAP SuccessFactors entity (table), whether a picklist has to be addressed by OptionID or by externalCode is predefined by SAP in the product.
Use the OData Dictionary for the entity and, for the field's navigation data, check the Type column: it will be either PicklistOption or PickListValueV2.
See SAP KBA 2773713: https://launchpad.support.sap.com/#/notes/2773713
As you noted, if OptionID is used, your solution must account for different numeric values in different SuccessFactors instances.
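Since the question mentions eventually making the same call from Django/Python, here is a minimal sketch of the upsert using the requests library; the subdomain, credentials, and placeholder values are assumptions that simply mirror the Postman payload above, and the authentication should be adjusted to your setup:
import requests

url = "https://<subdomain>.successfactors.eu/odata/v2/upsert"  # same endpoint as above
payload = {
    "__metadata": {"uri": "Candidate(<candidate_id>)"},
    "myCustomPicklist": {
        "__metadata": {"uri": "PicklistOption('<picklistOption_id>')"},
        "externalCode": "<picklistOption_external_code>"
    }
}

response = requests.post(
    url,
    json=payload,
    auth=("<api_user>", "<password>"),  # placeholder credentials
    headers={"Accept": "application/json"},
)
print(response.status_code, response.text)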

Google Docs API programmatically adding a table of content

I have a python script which does some analysis and outputs the results as text (paragraphs) in a Google Doc. I know how to insert text and update paragraph and text style through batchUpdate:
doc_service.documents().batchUpdate(documentId=<ID>, body={'requests': <my_request>}).execute()
where, for instance, "my_request" takes the form of something like:
request = [
    {
        "insertText": {
            "location": {
                "index": <index_position>,
                "segmentId": <id>
            },
            "text": <text>
        }
    },
    {
        "updateParagraphStyle": {
            "paragraphStyle": {
                "namedStyleType": <paragraph_type>
            },
            "range": {
                "segmentId": <id>,
                "startIndex": <index_position>,
                "endIndex": <index_position>
            },
            "fields": "namedStyleType"
        }
    },
]
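For reference, a filled-in version of such a request might look like the sketch below; the text, index values, and document ID are purely illustrative, and doc_service is assumed to be an authorized Docs API client:
requests_body = [
    {
        "insertText": {
            "location": {"index": 1},  # start of the document body
            "text": "Results\n"
        }
    },
    {
        "updateParagraphStyle": {
            "paragraphStyle": {"namedStyleType": "HEADING_1"},
            "range": {"startIndex": 1, "endIndex": 9},  # covers "Results\n"
            "fields": "namedStyleType"
        }
    },
]

doc_service.documents().batchUpdate(
    documentId="<ID>",
    body={"requests": requests_body}
).execute()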
Once the script is done with its updates, it would be fantastic if a table of contents could be added at the top of the document.
However, I am very new to the Google Docs API and I am not entirely sure how to do that. I know I should use "TableOfContents" as a StructuralElement. I also know this element currently does not update automatically after each modification to the document (which is why I would like to create it AFTER the document has finished updating, and place it at the top).
How can I do this with Python? I am unclear where to put "TableOfContents" in my request.
Thank you so very much!
After your comment, I was able to better understand what you are trying to do, but I came across these two Issue Tracker posts:
Add the ability to generate and update the TOC of a doc.
Getting a link to a heading paragraph.
These are well-known feature requests that unfortunately haven't been implemented yet. You can hit the ☆ next to the issue number in the top left of those pages; it lets Google know that more people are affected, so the requests are more likely to be addressed sooner.
Therefore, it's not possible to insert/update a table of contents programmatically.

Trying to convert a CSV into JSON in python for posting to REST API

I've got the following data in a CSV file (a few hundred lines) that I'm trying to massage into sensible JSON to post to a REST API.
I've gone with the bare minimum fields required, but here's what I've got:
dateAsked,author,title,body,answers.author,answers.body,topics.name,answers.accepted
13-Jan-16,Ben,Cant set a channel ,"Has anyone had any issues setting channels. it stays at '0'. It actually tells me there are '0' files.",Silvio,"I'm not sure. I think you can leave the cable out, because the control works. But you could try and switch two port and see if problem follows the serial port. maybe 'extended' clip names over 32 characters.
Please let me know if you find out!
Best regards.",club_k,TRUE
Here's a sample of JSON that is roughly like where I need to get to:
json_test = """{
    "title": "Can I answer a question?",
    "body": "Some text for the question",
    "author": "Silvio",
    "topics": [
        {
            "name": "club_k"
        }
    ],
    "answers": [
        {
            "author": "john",
            "body": "I\'m not sure. I think you can leave the cable out. Please let me know if you find out! Best regards.",
            "accepted": "true"
        }
    ]
}"""
Pandas seems to import it into a DataFrame okay(ish), but keeps telling me it can't serialize it to JSON. I also need to clean and sanitise the data, but that should be fairly easy to do within the script.
There must be a way to do this in Pandas, but I'm beating my head against a wall here, as the columns for answers and topics can't easily be merged into a dict or a list in Python.
You can use a csv.DictReader to process the CSV file as a dictionary for each row. Using the field names as keys, a new dictionary can be constructed that groups common keys into a nested dictionary, keyed by the part of the field name after the dot. The nested dictionary is held within a list, although it is unclear whether that is really necessary; the nested dictionary could probably be placed immediately under the top level without requiring a list. Here's the code to do it:
import csv
import json

json_data = []
for row in csv.DictReader(open('/tmp/data.csv')):
    data = {}
    for field in row:
        key, _, sub_key = field.partition('.')
        if not sub_key:
            data[key] = row[field]
        else:
            if key not in data:
                data[key] = [{}]
            data[key][0][sub_key] = row[field]
    # print(json.dumps(data, indent=True))
    # print('---------------------------')
    json_data.append(json.dumps(data))
For your data, with the print() statements enabled, the output would be:
{
    "body": "Has anyone had any issues setting channels. it stays at '0'. It actually tells me there are '0' files.",
    "author": "Ben",
    "topics": [
        {
            "name": "club_k"
        }
    ],
    "title": "Cant set a channel ",
    "answers": [
        {
            "body": "I'm not sure. I think you can leave the cable out, because the control works. But you could try and switch two port and see if problem follows the serial port. maybe 'extended' clip names over 32 characters. \nPlease let me know if you find out!\n Best regards.",
            "accepted ": "TRUE",
            "author": "Silvio"
        }
    ],
    "dateAsked": "13-Jan-16"
}
---------------------------
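To finish the round trip to the REST API, a brief sketch of posting each converted record with the requests library; the endpoint URL is a placeholder, and the loop assumes json_data was built as above (each entry is already a JSON string, so it is sent as the raw request body):
import requests

API_URL = "https://example.com/api/questions"  # placeholder endpoint

for record in json_data:
    # each record is already JSON-encoded, so pass it as the body with an explicit content type
    resp = requests.post(API_URL, data=record, headers={"Content-Type": "application/json"})
    resp.raise_for_status()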

MongoDB PyMongo Listing all keys in a document

I have a question about how to manipulate a document in PyMongo to have it list all of its current keys, and I'm not quite sure how to do it. For example, if I had a document that looked like this:
{
    "_id": ObjectId("..."),
    "name": "ABCD",
    "info": {
        "description": "XYZ",
        "type": "QPR"
    }
}
and I had a variable "document" that had this current document as its value, how could I write code to print the three keys:
"_id"
"name"
"info"
I don't want it to list the values, simply the names. The motivation for this is that the user would type one of the names and my program would do additional things after that.
As mentioned in the documentation:
In PyMongo we use dictionaries to represent documents.
So you can get all keys using .keys():
print(document.keys())
Using Python, we can fetch all the documents into a variable, say mydoc:
mydoc = collections.find()
for x in mydoc:
    l = list(x.keys())
    print(l)
This gives us all the keys of each document as a list, which we can then use for whatever the user needs next.
The document is a Python dictionary, so you can just print its keys, e.g.:
document = db.collection_name.find_one()
for k in document:
    print(k)
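Building on the question's motivation (the user types one of the key names and the program acts on it), here is a minimal sketch; the connection string and database/collection names are assumptions:
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
collection = client["mydb"]["mycollection"]        # assumed database/collection names

document = collection.find_one()
print("Available keys:", list(document.keys()))

chosen = input("Type one of the key names: ")
if chosen in document:
    print(chosen, "->", document[chosen])
else:
    print("No such key in this document")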
