I am in the process of writing a Cloud Function for Firebase using the Python runtime. I am interested in Firebase Realtime Database triggers; in other words, I want to listen to events that happen in my Realtime Database.
The Python environment provides the following signature for handling Realtime Database triggers:
def handleEvent(data, context):
    # Triggered by a change to a Firebase RTDB reference.
    # Args:
    #     data (dict): The event payload.
    #     context (google.cloud.functions.Context): Metadata for the event.
This is looking good. The data parameter provides two dictionaries: 'data' holding the values before the change and 'delta' holding the changed bits.
The confusion kicks in when comparing this signature with the Node.js environment. Here is a similar signature from the Node.js world:
exports.handleEvent = functions.database.ref('/path/{objectId}/').onWrite((change, context) => {});
In this signature, the change parameter is pretty powerful: it wraps firebase.database.DataSnapshot objects (change.before and change.after) that have nice helper methods such as hasChild() or numChildren(), which provide information about the changed object.
The question is: does the Python environment have a similar DataSnapshot object? With Python, do I have to query the database to get, for example, the number of children? It really isn't clear what the Python environment can and can't do.
Related API/Reference/Documentation:
Firebase Realtime DB Triggers: https://cloud.google.com/functions/docs/calling/realtime-database
DataSnapshot Reference: https://firebase.google.com/docs/reference/js/firebase.database.DataSnapshot
The Python runtime currently doesn't have a similar object structure. The firebase-functions SDK is actually doing a lot of work for you in creating objects that are easy to consume; nothing similar is happening in the Python environment. You are essentially getting a pretty raw view of the payload of data contained in the event that triggered your function.
If you write Realtime Database triggers for Node without using the firebase-functions SDK, it is a similar situation: you get a really basic object with properties similar to the Python dictionary.
This is why firebase-functions together with the Firebase Admin SDK is the preferred environment for writing triggers for Firebase products: the developer experience is superior because it does a bunch of convenient work for you. The downside is that you pay the cost of loading and initializing the Firebase Admin SDK on cold start.
Note that it might be possible for you to parse the event and create your own convenience objects using the Firebase Admin SDK for Python.
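For example, here is a minimal sketch of that approach. It assumes the firebase-admin package is installed; the database URL is a placeholder, and the re-query via db.reference() is my own illustration of how to recover snapshot-like information, not something the runtime does for you:

import firebase_admin
from firebase_admin import db

# Initialize once per instance; the databaseURL below is a placeholder.
firebase_admin.initialize_app(options={'databaseURL': 'https://<your-project>.firebaseio.com'})

def handle_event(data, context):
    before = data.get('data')    # value before the change
    delta = data.get('delta')    # the changed bits

    # context.resource looks like "projects/_/instances/<instance>/refs/<path>";
    # extract the ref path so the node can be re-read.
    ref_path = context.resource.split('/refs/', 1)[1]

    # Re-read the node with the Admin SDK to get snapshot-like information,
    # e.g. the number of children.
    snapshot = db.reference(ref_path).get()
    num_children = len(snapshot) if isinstance(snapshot, dict) else 0
    print('children after change:', num_children)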
I'm creating a few GCP Cloud Armor policies across multiple projects using the Python client library and attaching them to several backend services with the .set_security_policy() method. Now I need to remove (detach) those policies from the backend services.
I know you can do this using the console / gcloud, but I need to automate it in Python.
I've tried the .update() method in google-cloud-compute, but that did not work out:
from google.cloud import compute, compute_v1

client = compute.BackendServicesClient()
backend_service_resource = compute_v1.types.BackendService(security_policy="")
client.update(project='project_id',
              backend_service='backend_service',
              backend_service_resource=backend_service_resource)
The error I got when running the above code is
google.api_core.exceptions.BadRequest: 400 PUT https://compute.googleapis.com/compute/v1/projects/<project-id>/global/backendServices/<backend-name>: Invalid value for field 'resource.loadBalancingScheme': 'INVALID_LOAD_BALANCING_SCHEME'. Cannot change load balancing scheme.
When I specify loadBalancingScheme, the same error occurs for another resource field. At runtime I would not have all the metadata of the backend service, and some metadata might not be initialized in the first place.
This is for anyone who has similar issues in the future. I was originally going to call the gcloud commands from Python using os.system() as @giles-roberts recommended, but then I stumbled across a proper way to do this using the client libraries.
You simply use the same .set_security_policy() method that sets the security policy in the first place, but this time pass the policy as None. This is not quite obvious, since the documentation says the security policy name has to be a string, and it does not accept an empty string either.
from google.cloud import compute, compute_v1

client = compute.BackendServicesClient()
resource = compute_v1.types.SecurityPolicyReference(security_policy=None)
error = client.set_security_policy(project='<project_id>',
                                   backend_service='<backend_service>',
                                   security_policy_reference_resource=resource)
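For completeness, re-attaching a policy later is the same call with a reference to the policy instead of None; this is a sketch, and the policy path below is a placeholder:

resource = compute_v1.types.SecurityPolicyReference(
    security_policy='projects/<project_id>/global/securityPolicies/<policy_name>')  # placeholder
operation = client.set_security_policy(project='<project_id>',
                                       backend_service='<backend_service>',
                                       security_policy_reference_resource=resource)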
I want to get a stream from the Azure paging iterator ItemPaged (ItemPaged[TableEntity]) in Python. Is it possible?
https://learn.microsoft.com/en-us/python/api/azure-core/azure.core.paging.itempaged?view=azure-python
Update 11.08.2021:
I have an implementation that backs up Azure Tables to Azure Blob (see "Current process to backup Azure Tables"). I want to improve this process and am considering different options. I am trying to get a stream from Azure Tables so I can use create_blob_from_stream.
I assume you want to stream bytes from the HTTP response, and not use the iterator of objects you receive.
Each API in the SDK supports a keyword argument called raw_response_hook that gives you access to the HTTP response object and lets you use a stream download API if you want to. Note that since the payload is considered to represent objects, it will be pre-loaded in memory no matter what, but you can still use a streaming syntax nonetheless.
The callback simply takes one parameter:
def response_callback(response):
    # Do something with the response
    requests_response = response.internal_response
    # Use the "requests" API now
    for chunk in requests_response.iter_content():
        work_with_chunk(chunk)
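For example, with the Tables client the hook is passed as a per-call keyword argument; this is a sketch assuming the azure-data-tables package, with a placeholder connection string and table name:

from azure.data.tables import TableClient

conn_str = '<your-storage-connection-string>'  # placeholder
table_client = TableClient.from_connection_string(conn_str, table_name='mytable')

# The hook fires for each HTTP response in the pipeline; entities are still
# deserialized and returned as usual.
for entity in table_client.list_entities(raw_response_hook=response_callback):
    pass  # consume the iterator so the requests are actually made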
Note that this is pretty advanced; you may encounter difficulties, and it might not fit what you want precisely. We are working on a new pattern in the SDK to simplify complex scenarios like this, but it hasn't shipped yet. You would be able to send and receive raw requests using a send_request method, which gives you absolute control over every aspect of the query, such as stating that you just want to stream (no pre-load in memory) or disabling deserialization by default.
Feel free to open an issue on the Azure SDK for Python repo if you have additional questions or need clarification: https://github.com/Azure/azure-sdk-for-python/issues
Edit with new suggestions: TableEntity is a dict-like class, so you can json.dumps it as a string, or json.dump it to a stream, while iterating the ItemPaged[TableEntity]. If the JSON dump raises an exception, you can try our JSON encoder in azure.core.serialization.AzureJSONEncoder: https://github.com/Azure/azure-sdk-for-python/blob/1ffb583d57347257159638ae5f71fa85d14c2366/sdk/core/azure-core/tests/test_serialization.py#L83
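A minimal sketch of that suggestion, again assuming azure-data-tables and placeholder names; note that, as mentioned above, the entities are loaded in memory before being written out:

import json
from azure.core.serialization import AzureJSONEncoder
from azure.data.tables import TableClient

table_client = TableClient.from_connection_string('<your-storage-connection-string>',
                                                  table_name='mytable')
entities = table_client.list_entities()  # ItemPaged[TableEntity]

# AzureJSONEncoder handles types like datetime and bytes that the default
# JSON encoder rejects.
with open('backup.json', 'w') as stream:
    json.dump(list(entities), stream, cls=AzureJSONEncoder)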
(I work at MS in the Azure SDK for Python team.)
Ref:
https://docs.python-requests.org/en/master/api/#requests.Response.iter_content
https://azuresdkdocs.blob.core.windows.net/$web/python/azure-core/1.17.0/azure.core.pipeline.policies.html#azure.core.pipeline.policies.CustomHookPolicy
I'm brand new to using the Elastic Stack, so excuse my lack of knowledge on the subject. I'm running the Elastic Stack on a Windows 10 corporate work computer. I have Git Bash installed for a bash CLI, and I can successfully launch the entire Elastic Stack. My task is to take log data that is stored in one of our databases and display it on a Kibana dashboard.
From what my team and I have reasoned, I don't need to use Logstash, because the database that the logs are sent to is effectively our 'log stash', so using the Logstash service would be redundant. I found a nifty diagram on freeCodeCamp, and from what I gather, Logstash is just the intermediary for log retrieval from different services. So instead of using Logstash, since the log data is already in a database, I could just do something like this:
USER ---> KIBANA <---> ELASTICSEARCH <--- My Python Script <--- [DATABASE]
My Python script successfully calls our database and retrieves the data, and a function molds the data into a dict object (as I understand it, Elasticsearch takes data in JSON format).
Now I want to insert all of that data into Elasticsearch - I've been reading the Elastic docs, and there's a lot of talk about indexing that isn't really indexing, and I haven't found any API calls I can use to plug the data right into Elasticsearch. All of the documentation I've found so far concerns the use of Logstash, but since I'm not using Logstash, I'm kind of at a loss here.
If there's anyone who can help me out and point me in the right direction I'd appreciate it. Thanks
-Dan
You ingest data into Elasticsearch using the Index API; it is basically a request using the PUT method.
To do that with Python you can use elasticsearch-py, the official Python client for Elasticsearch.
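A minimal sketch with elasticsearch-py; the host, index name, and document are placeholders, and older client versions use body= instead of document=:

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch('http://localhost:9200')  # placeholder host

doc = {'timestamp': '2021-01-01T00:00:00', 'level': 'INFO', 'message': 'example log line'}

# Index a single document; Elasticsearch creates the index if it doesn't exist.
es.index(index='my-logs', document=doc)

# For many documents, the bulk helper is much faster than one call per document.
bulk(es, ({'_index': 'my-logs', '_source': d} for d in [doc]))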
But sometimes what you need is easier to do with Logstash, since it can extract the data from your database, format it using its many filters, and send it to Elasticsearch.
I have an app that is meant to integrate with third-party apps. These apps should be able to trigger a function when data changes.
The way I was envisioning this, I would use a Node function to safely prepare data for the third parties, and get the URL to call from the app's configuration in Firestore. I would call that URL from the Node function and wait for it to return, updating results as necessary (actually, triggering a push notification). These third-party functions would tend to be Python functions, so my demo should be in Python.
I have the initial Node function and Firestore set up, so I am currently triggering an ECONNREFUSED, because I don't know how to set up the third-party function.
Let's say this is the function I need to trigger:
def hello_world(request):
    request_json = request.get_json()
    if request_json and 'name' in request_json:
        name = request_json['name']
    else:
        name = 'World'
    return 'Hello, {}!\n'.format(name)
Do I need to set up a separate gcloud account to host this function, or can I include it alongside my Firestore functions? If so, how do I deploy it? Typically with my Node functions, I run firebase deploy and it automagically finds my functions in my index.js file.
If you're asking whether Cloud Functions that are triggered by Cloud Firestore can co-exist in a project with Cloud Functions that are triggered by HTTP(S) requests, then the answer is "yes they can". There is no need to set up a separate (Firebase or Cloud) project for each function type.
However: when you deploy your Cloud Functions through the Firebase CLI with firebase deploy, it will remove any functions it finds in the project that are not in the code being deployed. If you have functions in both Python and Node.js, there is never a single codebase that contains both, so a blanket deploy would always delete some of your functions. In that case you should use the granular deploy option of the Firebase CLI, as shown below.
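For example, to deploy only specific functions without touching the others (the function names here are placeholders):

firebase deploy --only functions:helloWorld
firebase deploy --only functions:funcOne,functions:funcTwo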
I am new to Python. I need to get the usage details using the Python SDK. I am able to do this using the Usage Details REST API, but I am unable to do so using the SDK.
I am trying to use the azure.mgmt.consumption.operations.UsageDetailsOperations class. The official docs for UsageDetailsOperations
https://learn.microsoft.com/en-us/python/api/azure-mgmt-consumption/azure.mgmt.consumption.operations.usage_details_operations.usagedetailsoperations?view=azure-python#list-by-billing-period
specifies four parameters for creating the object: client (client for service requests), config (configuration of the service client), serializer (an object model serializer), and deserializer (an object model deserializer).
Out of these parameters I only have the client.
I need help understanding how to get the other three parameters, or whether there is another way to create the UsageDetailsOperations object.
Or is there another approach to get the usage details altogether?
Thanks!
This class is not designed to be created manually. You need to create a consumption client, which exposes the class in question, already instantiated correctly, as an attribute (usage_details).
There are unfortunately no samples for consumption yet, but creating the client is similar to creating any other client (see Network client creation for instance).
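A minimal sketch of creating the client and listing usage details; it assumes azure-identity for credentials, the subscription ID and scope string are placeholders, and in current releases of azure-mgmt-consumption the operations attribute is named usage_details:

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

subscription_id = '<subscription_id>'  # placeholder
client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

# UsageDetailsOperations is already instantiated as an attribute of the client,
# so there is no need to construct it yourself.
scope = f'/subscriptions/{subscription_id}'  # placeholder scope
for usage in client.usage_details.list(scope):
    print(usage.as_dict())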
For consumption, what might help are the tests, since they give some idea of the scenarios:
https://github.com/Azure/azure-sdk-for-python/blob/fd643a0/sdk/consumption/azure-mgmt-consumption/tests/test_mgmt_consumption.py
If you're new to Azure and Python, you might want to do this quickstart:
https://learn.microsoft.com/en-us/azure/python/python-sdk-azure-get-started
Feel free to open an issue in the main Python repo, asking for more documentation about this client (this will help prioritize it):
https://github.com/Azure/azure-sdk-for-python/issues
(I work at Microsoft on the Python SDK team.)