Python way of polling longrunning operations from operation name in Google Cloud? - python

I'm calling a Google Cloud Function that returns an Operation object implementing the google.longrunning.Operations interface. I want to poll this operation from another Python process that will only receive the operation name (will not have access to the operation object itself). So I need something like:
operation = getOperation(operationName)
isdone = operation.done()
AFAIK, you can't do the first step above. I haven't found it here: https://google-cloud-python.readthedocs.io/en/stable/core/operation.html
I would like to do what is explained in the docs about the google.longrunning interface (https://cloud.google.com/speech-to-text/docs/reference/rpc/google.longrunning#google.longrunning.Operations.GetOperation):
rpc GetOperation(GetOperationRequest) returns (Operation)
Where the GetOperationRequest simply requires the operation name. Is there a way to "re-create" an operation using functions from the google-cloud-python library?

Update for more recent clients: you need to refresh the operation using the OperationsClient.
To update an existing operation you will need to pass the gRPC channel across to the OperationsClient.
For example, backing up a Firestore datastore.
from google.cloud import firestore_admin_v1
from google.api_core import operations_v1, grpc_helpers
import time

def main():
    client = firestore_admin_v1.FirestoreAdminClient()
    channel = grpc_helpers.create_channel(client.SERVICE_ADDRESS)
    api = operations_v1.OperationsClient(channel)
    db_path = client.database_path('myproject', 'mydb')
    operation = client.export_documents(db_path)

    current_status = api.get_operation(operation.name)
    while not current_status.done:
        time.sleep(5)
        current_status = api.get_operation(operation.name)
        print('waiting to complete')
    print('operation done')

In my case, the AutoML TablesClient didn't have SERVICE_ADDRESS or SCOPE properties, so I couldn't create a new gRPC channel.
But using the existing one in the client seems to work!
from google.api_core import operations_v1
from google.cloud.automl_v1beta1 import TablesClient
automl_tables_client = TablesClient(
    credentials=...,
    project=...,
    region=...,
)
operation_name = ""
grpc_channel = automl_tables_client.auto_ml_client.transport._channel
api_client = operations_v1.OperationsClient(grpc_channel)
response = api_client.get_operation(operation_name)

You can use the get_operation method of the "Long-Running Operations Client":
from google.api_core import operations_v1
api = operations_v1.OperationsClient()
name = ...
response = api.get_operation(name)
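Note: depending on your google-api-core version, OperationsClient may need to be constructed from a gRPC channel (as in the Firestore and AutoML examples above) rather than with no arguments. A minimal sketch under that assumption, with a placeholder service address and operation name:
from google.api_core import operations_v1, grpc_helpers

channel = grpc_helpers.create_channel("firestore.googleapis.com")  # assumed service address for your API
api = operations_v1.OperationsClient(channel)
name = ...  # the full operation resource name you received
response = api.get_operation(name)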

Related

Python InfluxDB2 - write_api.write(...) How to check for success?

I need to write historic data into InfluxDB (I'm using Python, which is not a must in this case, so I may be willing to accept non-Python solutions). I set up the write API like this:
write_api = client.write_api(write_options=ASYNCHRONOUS)
The data comes from a DataFrame with a timestamp as the key, so I write it to the database like this:
result = write_api.write(bucket=bucket, data_frame_measurement_name=field_key, record=a_data_frame)
This call does not throw an exception, even if the InfluxDB server is down. In the debugger, result has a protected attribute _success that is a boolean, but I cannot access it from the code.
How do I check if the write was a success?
If you use background batching, you can add custom success, error and retry callbacks.
from influxdb_client import InfluxDBClient

def success_cb(details, data):
    url, token, org = details
    print(url, token, org)
    data = data.decode('utf-8').split('\n')
    print('Total Rows Inserted:', len(data))

def error_cb(details, data, exception):
    print(exception)

def retry_cb(details, data, exception):
    print('Retrying because of an exception:', exception)

with InfluxDBClient(url, token, org) as client:
    with client.write_api(success_callback=success_cb,
                          error_callback=error_cb,
                          retry_callback=retry_cb) as write_api:
        write_api.write(...)
If you are eager to test all the callbacks and don't want to wait until all retries are finished, you can override the interval and number of retries.
from influxdb_client import InfluxDBClient, WriteOptions

with InfluxDBClient(url, token, org) as client:
    with client.write_api(success_callback=success_cb,
                          error_callback=error_cb,
                          retry_callback=retry_cb,
                          write_options=WriteOptions(retry_interval=60,
                                                     max_retries=2),
                          ) as write_api:
        ...
If you want to write data into the database immediately, use the SYNCHRONOUS version of write_api - https://github.com/influxdata/influxdb-client-python/blob/58343322678dd20c642fdf9d0a9b68bc2c09add9/examples/example.py#L12
The asynchronous write should be "triggered" by calling .get() on the returned result - https://github.com/influxdata/influxdb-client-python#asynchronous-client
Regards
write_api.write() returns a multiprocessing.pool.AsyncResult (an alias of multiprocessing.pool.ApplyResult, so both are the same).
With this return object you can check on the asynchronous request in a couple of ways. See here: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult
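For example, a minimal sketch (with assumed url/token/org/bucket placeholders) that checks and then blocks on an asynchronous write, surfacing any error:
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import ASYNCHRONOUS

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org") as client:
    write_api = client.write_api(write_options=ASYNCHRONOUS)
    result = write_api.write(bucket="my-bucket", record="my_measurement temperature=25.3")
    print(result.ready())  # True once the request has finished
    result.get()           # blocks until done; raises an exception if the write failed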
If you can use a blocking request, then write_api = client.write_api(write_options=SYNCHRONOUS) can be used.
from datetime import datetime
from influxdb_client import WritePrecision, InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org", debug=False) as client:
    p = Point("my_measurement") \
        .tag("location", "Prague") \
        .field("temperature", 25.3) \
        .time(datetime.utcnow(), WritePrecision.MS)
    try:
        client.write_api(write_options=SYNCHRONOUS).write(bucket="my-bucket", record=p)
        reboot = False
    except Exception as e:
        reboot = True
    print(f"Reboot? {reboot}")

Make a python web API run only one at a time?

I'd like to make a Python Azure Function App (web API) to process a queue of tasks. I've already set up a trigger that calls this API whenever a task is inserted into the queue. Since this API will process all of the current tasks in the queue, I would like to prevent it from executing while another execution of the same API is in progress, to avoid processing conflicts.
I thought of using a database locking mechanism, but it doesn't look very elegant. Is there any singleton design pattern that can be used in a Python Azure Function App for this purpose? Thanks.
I found a way to solve this problem using Azure Durable Functions. There are three types of functions in an Azure Durable Functions app: the orchestration client function, the orchestrator function, and activity functions. I just need to add some checking steps in the orchestration client function, as in the following example:
# This function is an HTTP starter function for Durable Functions.
import logging
import azure.functions as func
import azure.durable_functions as df

def is_finished(runtime_status: df.models.OrchestrationRuntimeStatus):
    result = False
    if runtime_status is None or \
       runtime_status in [df.OrchestrationRuntimeStatus.Canceled,
                          df.OrchestrationRuntimeStatus.Completed,
                          df.OrchestrationRuntimeStatus.Failed,
                          df.OrchestrationRuntimeStatus.Terminated]:
        result = True
    return result

async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)
    # general azure function url : http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
    # function.json -> "route": "orchestrators/{functionName}/{instanceId}"
    orchestrator_instance_id = req.route_params['instanceId']
    function_name = req.route_params['functionName']
    INVENSYNC_ORCHESTRATOR_INSTANCE_ID = '117610EF-BC37-4E31-BFA4-205EBB3CC54E'  # just select any key
    if orchestrator_instance_id == INVENSYNC_ORCHESTRATOR_INSTANCE_ID:
        existing_instance_status = await client.get_status(orchestrator_instance_id)
        logging.info(f"InventorySyncHttpStart() - existing_instance_status = '{existing_instance_status}'.")
        if existing_instance_status is None or \
           is_finished(existing_instance_status.runtime_status):
            logging.info(f"InventorySyncHttpStart() - existing_instance_status.runtime_status = '{existing_instance_status.runtime_status}'.")
            orchestrator_instance_id = await client.start_new(function_name, orchestrator_instance_id)
            logging.info(f"Started orchestration with ID = '{orchestrator_instance_id}'.")
            result = client.create_check_status_response(req, orchestrator_instance_id)
        else:
            result = func.HttpResponse(status_code=409, body=f"An instance with ID '{orchestrator_instance_id}' already exists")
    else:
        result = func.HttpResponse(status_code=406, body=f"Invalid Instance ID '{orchestrator_instance_id}' in URL")
    return result
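For completeness, a minimal sketch of an orchestrator that the starter above could launch; the activity name 'ProcessTaskQueue' is a hypothetical placeholder, not part of the original answer:
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Drain the whole task queue inside a single orchestration instance.
    result = yield context.call_activity('ProcessTaskQueue', None)
    return result

main = df.Orchestrator.create(orchestrator_function)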

python aiosmtpd server with basic authentication

I'm trying to create an aiosmtpd server to process received emails.
It works great without authentication, yet I simply cannot figure out how to set up authentication.
I have gone through the documentation and searched for examples on this.
A sample of how I'm currently using it:
from aiosmtpd.controller import Controller

class CustomHandler:
    async def handle_DATA(self, server, session, envelope):
        peer = session.peer
        mail_from = envelope.mail_from
        rcpt_tos = envelope.rcpt_tos
        data = envelope.content  # type: bytes
        # Process message data...
        print('peer:' + str(peer))
        print('mail_from:' + str(mail_from))
        print('rcpt_tos:' + str(rcpt_tos))
        print('data:' + str(data))
        return '250 OK'

if __name__ == '__main__':
    handler = CustomHandler()
    controller = Controller(handler, hostname='192.168.8.125', port=10025)
    # Run the event loop in a separate thread.
    controller.start()
    # Wait for the user to press Return.
    input('SMTP server running. Press Return to stop server and exit.')
    controller.stop()
which is the basic method from the documentation.
Could someone please provide me with an example of how to do simple authentication?
Alright, since you're using version 1.3.0, you can follow the documentation for Authentication.
A quick way to start is to create an "authenticator function" (can be a method in your handler class, can be standalone) that follows the Authenticator Callback guidelines.
A simple example:
from aiosmtpd.smtp import AuthResult, LoginPassword

auth_db = {
    b"user1": b"password1",
    b"user2": b"password2",
    b"user3": b"password3",
}

# Name can actually be anything
def authenticator_func(server, session, envelope, mechanism, auth_data):
    # For this simple example, we'll ignore other parameters
    assert isinstance(auth_data, LoginPassword)
    username = auth_data.login
    password = auth_data.password
    # If we're using a set containing tuples of (username, password),
    # we can simply use `auth_data in auth_set`.
    # Or you can get fancy and use a full-fledged database to perform
    # a query :-)
    if auth_db.get(username) == password:
        return AuthResult(success=True)
    else:
        return AuthResult(success=False, handled=False)
Then, when you're creating the controller, create it like so:
controller = Controller(
    handler,
    hostname='192.168.8.125',
    port=10025,
    authenticator=authenticator_func,  # i.e., the name of your authenticator function
    auth_required=True,  # Depending on your needs
)
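One caveat worth checking in the aiosmtpd docs for your version (this is an assumption, not part of the original answer): AUTH is normally only allowed over an encrypted connection, so for plain-text local testing you may also need to relax that requirement, for example:
controller = Controller(
    handler,
    hostname='192.168.8.125',
    port=10025,
    authenticator=authenticator_func,
    auth_required=True,
    auth_require_tls=False,  # assumed flag; only for local testing without TLS
)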

How to create Gcp Memory-store using python

I am trying to automate GCP Memorystore creation but didn't find a way to create it using Python. Please help.
You can use the Python Client for Google Cloud Memorystore for Redis API in order to create it.
You can use the create_instance method of the Python client library, which creates a Redis instance based on the specified tier and memory size:
async create_instance(request: google.cloud.redis_v1.types.cloud_redis.CreateInstanceRequest = None, *,
                      parent: str = None, instance_id: str = None,
                      instance: google.cloud.redis_v1.types.cloud_redis.Instance = None,
                      retry: google.api_core.retry.Retry = <object object>,
                      timeout: float = None, metadata: Sequence[Tuple[str, str]] = ())
from google.cloud import redis_v1beta1
from google.cloud.redis_v1beta1 import enums

client = redis_v1beta1.CloudRedisClient()

parent = client.location_path('<project>', '<location>')
instance_id = 'test-instancee'
tier = enums.Instance.Tier.BASIC
memory_size_gb = 1
instance = {'tier': tier, 'memory_size_gb': memory_size_gb}

response = client.create_instance(parent, instance_id, instance)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
# metadata = response.metadata()
print "Created"
This code works fine, but only on Python 2. If there is a way to use it in Python 3, please mention it.
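A minimal Python 3 sketch, assuming the newer google-cloud-redis (redis_v1) client whose create_instance signature is quoted above; the project, location, and instance names are placeholders:
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

parent = "projects/<project>/locations/<location>"
instance = redis_v1.Instance(
    tier=redis_v1.Instance.Tier.BASIC,
    memory_size_gb=1,
)

# create_instance returns a long-running operation; result() blocks until it finishes.
operation = client.create_instance(
    parent=parent,
    instance_id="test-instance",
    instance=instance,
)
print("Created:", operation.result().name)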

python firebase realtime listener

Hi there, I'm new to Python.
I would like to implement a listener on my Firebase DB.
When I change one or more parameters on the DB, my Python code has to do something.
How can I do it?
Thanks a lot.
My DB is a simple list of data from 001 to 200:
"remote-controller"
001 -> 000
002 -> 020
003 -> 230
My code is:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://remote-controller.firebaseio.com/', None)
result = firebase.get('003', None)
print(result)
It looks like this is supported now (October 2018): although it's not documented in the 'Retrieving Data' guide, you can find the needed functionality in the API reference. I tested it and it works like this:
from firebase_admin import db

# (assumes firebase_admin.initialize_app(...) was already called with a databaseURL)
def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

db.reference('my/data/path').listen(listener)
As Peter Haddad suggested, you should use Pyrebase for achieving something like that given that the python SDK still does not support realtime event listeners.
import pyrebase

config = {
    "apiKey": "apiKey",
    "authDomain": "projectId.firebaseapp.com",
    "databaseURL": "https://databaseName.firebaseio.com",
    "storageBucket": "projectId.appspot.com"
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

def stream_handler(message):
    print(message["event"])  # put
    print(message["path"])   # /-K7yGTTEp7O549EzTYtI
    print(message["data"])   # {'title': 'Pyrebase', "body": "etc..."}

my_stream = db.child("posts").stream(stream_handler)
If anybody wants to create multiple listeners using the same listener function and get more info about the triggered node, they can do it like this.
A normal listener function gets an Event object that carries only the data, the node name, and the event type. If you add multiple listeners and want to differentiate between the data changes, you can write your own class and attach some extra info to it when creating the object.
class ListenerClass:
    def __init__(self, appname):
        self.appname = appname

    def listener(self, event):
        print(event.event_type)  # can be 'put' or 'patch'
        print(event.path)        # relative to the reference, it seems
        print(event.data)        # new data at /reference/event.path. None if deleted
        print(self.appname)      # extra info related to the change; add your own member variables
Creating Objects:
listenerObject = ListenerClass(my_app_name + '1')
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
Full Code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

# Initialising the database with credentials
json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

# Create your objects here. You can use loops to create many listeners, but each listener
# runs in its own thread, so don't create irrelevant listeners; it won't work on a machine
# with a thread constraint.
listenerObject = ListenerClass(my_app_name + '1')  # decide your own parameters for differentiating the listeners
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
As you can see on the per-language feature chart on the Firebase Admin SDK home page, Python and Go currently don't have realtime event listeners. If you need that on your backend, you'll have to use the node.js or Java SDKs.
You can use Pyrebase, which is a python wrapper for the Firebase API.
more info here:
https://github.com/thisbejim/Pyrebase
To retrieve data you need to use val(), example:
users = db.child("users").get()
print(users.val())
Python Firebase Realtime Listener Full Code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

db.reference('PatientMonitoring', app=obj).listen(listener)
Output:
put
/
{'n0': '40', 'n1': '71'} # the first time, it fetches the data at the path whether it has changed or not
put # On data changed
/n1
725
put # On data changed
/n0
401
