I have a Python program that loads an order into a Treeview; the order is stored as documents in a Firestore collection in Firebase. When I click the order that I want to load, I call the load function (loadPedido below) with the id needed to filter them. But for some reason they don't load.
This is my code:
def loadPedido(idTarget):
    # Fetch the slot documents whose slotId matches the selected order
    docs = db.collection(u'slots').where(u'slotId', u'==', idTarget).stream()
    for doc in docs:
        docu = doc.to_dict()
        nombre = docu.get('SlotName')
        entero = docu.get('entero')
        valor = docu.get('slotPrecio')
        print(f'{doc.id} => {nombre}')
        # Insert one row per matching document into the Treeview
        trvPedido.insert("", 'end', iid=doc.id, values=(doc.id, nombre, entero, valor))
idTarget is the id to filter by, and I checked with a print that it arrives correctly.
Here is what I tried:
If I write the value of the variable directly in the code, it loads correctly, like so:
...
docs = db.collection(u'slots').where(u'slotId', u'==', u"2996gHQ32CNFMp5vyieu").stream()
...
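Since the hard-coded id works but the variable does not, one thing worth checking (a debugging sketch, not a confirmed fix) is whether the runtime value really matches the literal. repr() exposes stray whitespace, a trailing newline, or a non-string type that a plain print hides:

# hypothetical check, comparing against the id that is known to work
print(repr(idTarget))
print(idTarget == u"2996gHQ32CNFMp5vyieu")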
I am defining a method that fetches all account IDs from an organization.
If I use get_paginator('list_accounts'), am I okay if I do not check the NextToken?
Code to get the list of all AWS account IDs in the organization:
import boto3

def get_all_account_ids():
    org_client = boto3.client('organizations')
    paginator = org_client.get_paginator('list_accounts')
    page_iterator = paginator.paginate()
    account_ids = []
    for page in page_iterator:
        for acct in page['Accounts']:
            print(acct['Id'])  # print the account id
            # add to account_ids list
            account_ids.append(acct['Id'])
    return account_ids
I have seen examples using either a get_paginator() call or a while loop checking for NextToken, but I have not seen an example using both a paginator and NextToken.
No, you don't have to check NextToken. That's the point of paginators:
Paginators are a feature of boto3 that act as an abstraction over the process of iterating over an entire result set of a truncated API operation.
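For contrast, here is roughly what the paginator does for you under the hood (a sketch of the equivalent manual NextToken loop; the paginator version above is the preferred form):

import boto3

def get_all_account_ids_manually():
    # keep calling list_accounts until no NextToken is returned
    org_client = boto3.client('organizations')
    account_ids = []
    response = org_client.list_accounts()
    account_ids.extend(acct['Id'] for acct in response['Accounts'])
    while 'NextToken' in response:
        response = org_client.list_accounts(NextToken=response['NextToken'])
        account_ids.extend(acct['Id'] for acct in response['Accounts'])
    return account_ids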
I am using the Twitter API's StreamingClient from the Python module Tweepy. I am currently running a short stream where I collect tweets, save the entire ID and text from each tweet in a JSON object, and write it to a file.
My goal is to collect the Twitter handle from each tweet and save it to a JSON file (and preferably print it in the terminal as well).
This is what the current code looks like:
KEY_FILE = './keys/bearer_token'
DURATION = 10

def on_data(json_data):
    json_obj = json.loads(json_data.decode())
    #print('Received tweet:', json_obj)
    print(f'Tweet Screen Name: {json_obj.user.screen_name}')
    with open('./collected_tweets/tweets.json', 'a') as out:
        json.dump(json_obj, out)

bearer_token = open(KEY_FILE).read().strip()
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_data = on_data
streaming_client.sample(threaded=True)
time.sleep(DURATION)
streaming_client.disconnect()
And I have no idea how to do this; the only thing I found is that someone did this:
json_obj.user.screen_name
However, this did not work at all, and I am completely stuck.
So, a couple of things.
Firstly, I'd recommend using on_response rather than on_data, because StreamingClient already defines an on_data function that parses the JSON for you (and then fires on_tweet, on_response, on_error, etc.).
Secondly, json_obj.user.screen_name is part of API v1, I believe, which is why it doesn't work.
To get extra data using Twitter API v2, you'll want to use Expansions and Fields (see the Tweepy documentation and the Twitter documentation).
For your case, you'll probably want "username", which is under the user_fields.
def on_response(response: tweepy.StreamResponse):
    tweet: tweepy.Tweet = response.data
    users: list = response.includes.get("users")
    # response.includes is a dictionary representing all the fields (user_fields, media_fields, etc)
    # response.includes["users"] is a list of `tweepy.User`
    # the first user in the list is the author (at least from what I've tested)
    # the rest of the users in that list are anyone who is mentioned in the tweet
    author_username = users and users[0].username
    print(tweet.text, author_username)

streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_response = on_response
streaming_client.sample(threaded=True, user_fields=["id", "name", "username"])  # using user fields
time.sleep(DURATION)
streaming_client.disconnect()
Hope this helped. (Also, the Tweepy documentation definitely needs more examples for API v2.)
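For completeness, the original on_data approach can also surface the username once the sample request asks for the author expansion, since the raw JSON payload then carries the author's user object under includes: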
KEY_FILE = './keys/bearer_token'
DURATION = 10

def on_data(json_data):
    json_obj = json.loads(json_data.decode())
    print('Received tweet:', json_obj)
    with open('./collected_tweets/tweets.json', 'a') as out:
        json.dump(json_obj, out)

def on_finish(response):
    # minimal handler for on_closed; this was not defined in the original snippet
    print('Stream closed:', response)

bearer_token = open(KEY_FILE).read().strip()
streaming_client = tweepy.StreamingClient(bearer_token)
streaming_client.on_data = on_data
streaming_client.on_closed = on_finish
streaming_client.sample(threaded=True, expansions="author_id", user_fields="username", tweet_fields="created_at")
time.sleep(DURATION)
streaming_client.disconnect()
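With those request parameters, the handle can be read straight out of the parsed payload inside on_data. A small sketch, assuming the standard v2 response shape where the author is the first entry under includes.users:

# with expansions="author_id" and user_fields="username", the v2 payload
# nests user objects under includes -> users
users = json_obj.get('includes', {}).get('users', [])
if users:
    print('Tweet Screen Name:', users[0]['username'])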
I am using PyMongo and I am trying to loop through an entire collection and display each ObjectId on my Flask web page. However, when I write my method I keep getting the error "ObjectId('5efbe85b4aeb5d21e56fa81f') is not a valid ObjectId".
The following is the code I am running:
def get_class_names(self):
    temp = list()
    print("1")
    for document_ in db.classes.find():
        tempstr = document_.get("_id")
        tempobjectid = ObjectId(tempstr)
        temp.append(repr(tempobjectid))
    print("2")
    classes = list()
    for class_ in temp:
        classes.append((class_, Classes.get_by_id(class_).name))  # append an (id, name) pair
    return classes
How do I fix this?
Note: get_by_id just takes in an ObjectId and finds it in the database.
The line
tempstr = document_.get("_id")
already retrieves an ObjectId. You then wrap it in another ObjectId and call repr on that, which produces the string "ObjectId('...')"; that string is what later fails to parse as an ObjectId. If you print(type(tempstr)), you'll see that it is already an ObjectId.
Just do temp.append(tempstr).
By the way, you should rename the variable tempstr to tempId or something more appropriate, since it does not hold a string.
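Putting that together, a minimal sketch of the corrected method (assuming, as stated in the question, that Classes.get_by_id accepts an ObjectId):

def get_class_names(self):
    # _id values returned by find() are already bson.ObjectId instances,
    # so they can be passed to get_by_id directly
    ids = [document_["_id"] for document_ in db.classes.find()]
    return [(id_, Classes.get_by_id(id_).name) for id_ in ids]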
I have a JSON document in my database that I want to modify frequently from my Python program, once every 25 seconds. I know how to upload a document to the database and read a document from it, but I do not know how to modify/replace one.
This link shows the functions offered by the Python module. I see the ReplaceDocument function, but it takes a document link. How can I get the document link? Where am I supposed to look for this information?
Thanks.
It sounds like you have already resolved it. Just as a summary, here is the code:
# Query a document
query = { 'query': 'SELECT * FROM <collection name> ....'}
docs = client.QueryDocuments(coll_link, query)
doc = list(docs)[0]
# Get the document link from attribute `_self`
doc_link = doc['_self']
# Modify the document
.....
# Replace the document via document link
client.ReplaceDocument(doc_link, doc)
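(The _self link is available on any queried document because it is one of the system properties Cosmos DB adds automatically, alongside _rid, _etag, and _ts.)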
April 2020
If you are reading MS Azure's Quickstart guide and following a supporting git repo, note that there might be some differences.
For example:
from azure.cosmos import exceptions, CosmosClient, PartitionKey

endpoint = 'endpoint'
key = 'key'
db_name = 'cosmos-db-name'
container_name = 'container-name'

client = CosmosClient(endpoint, key)
db = client.create_database_if_not_exists(id=db_name)
container = db.create_container_if_not_exists(
    id=container_name,
    partition_key=PartitionKey(path="/.."),
    offer_throughput=456
)
...
# Replace item
container.replace_item(doc_link, doc)
When it comes to doc_link and doc in the above case, I encountered an error when I used doc['_self']. Using the primary key of the doc instead, the doc was updated.
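In other words, with the newer SDK you address the item by its id (plus its partition-key value when reading), rather than by a _self link. A minimal sketch, using hypothetical id and partition-key values:

# hypothetical id and partition-key value, for illustration only
doc = container.read_item(item='my-item-id', partition_key='my-pk-value')
doc['some_field'] = 'new value'
# replace_item accepts the item's id (or the item dict itself) plus the new body
container.replace_item(item=doc['id'], body=doc)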
I have two BigQuery projects and I want to copy a view from Project 1 to Project 2:
from google.cloud import bigquery
proj_1 = bigquery.Client.from_service_account_json(<path>, project='Project 1')
dataset_1 = proj_1.dataset(<dataset_name>)
view_1 = dataset_1.table(<view_name>) # View to copy, already existing
proj_2 = bigquery.Client.from_service_account_json(<path>, project='Project 2')
dataset_2 = proj_2.dataset(<dataset_name>)
view_2 = dataset_2.table(<view_name>) # Destination for copied view
# Start copy job like Google says
# https://cloud.google.com/bigquery/docs/tables#copyingtable
I get the following error:
RuntimeError: [{'message': 'Using table <project>:<dataset>.<view_name> is not allowed for this operation because of its type. Try using a different table that is of type TABLE.', 'reason': 'invalid'}]
I already know that if I set the attribute view_query, view_2 will be recognized as a view. If I set it manually, it works. But the second (automated) solution does not, because the attribute view_1.view_query is always None.
view_2.view_query = 'SELECT * FROM ...' # works
view_2.view_query = view_1.view_query # Won't work, because view_1.view_query is always None
How can I access the query of view_1?
A call to view_1.reload() loads the view_query attribute.
See https://googlecloudplatform.github.io/google-cloud-python/latest/bigquery-usage.html
So
view_1.reload()
view_2.view_query = view_1.view_query
view_2.create() # No need for a copy job, because there is no data copied
does the trick now.