How to create GCP Memorystore using Python

I am trying to automate GCP Memorystore creation but haven't found a way to create it using Python. Please help.

You can use the Python Client for Google Cloud Memorystore for Redis in order to create it.
Use the client library's create_instance method, which creates a Redis instance with the specified tier and memory size:
async create_instance(request: google.cloud.redis_v1.types.cloud_redis.CreateInstanceRequest = None, *,
                      parent: str = None, instance_id: str = None,
                      instance: google.cloud.redis_v1.types.cloud_redis.Instance = None,
                      retry: google.api_core.retry.Retry = <object object>,
                      timeout: float = None, metadata: Sequence[Tuple[str, str]] = ())

from google.cloud import redis_v1beta1
from google.cloud.redis_v1beta1 import enums

client = redis_v1beta1.CloudRedisClient()
parent = client.location_path('<project>', '<location>')
instance_id = 'test-instancee'
tier = enums.Instance.Tier.BASIC
memory_size_gb = 1
instance = {'tier': tier, 'memory_size_gb': memory_size_gb}

response = client.create_instance(parent, instance_id, instance)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
# metadata = response.metadata()

print "Created"
This code works fine, but only on Python 2. If there is a way to use it in Python 3, please mention it.
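On Python 3 you can use the current google-cloud-redis client (redis_v1), which drops the enums module and returns a long-running operation you can wait on. A minimal sketch, with placeholder project, location and instance ID, and assuming the Instance only needs tier and memory_size_gb for creation:

from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()
parent = "projects/<project>/locations/<location>"

instance = redis_v1.Instance(
    tier=redis_v1.Instance.Tier.BASIC,
    memory_size_gb=1,
)

# create_instance returns a long-running operation; result() blocks until it finishes.
operation = client.create_instance(
    parent=parent,
    instance_id="test-instance",
    instance=instance,
)
result = operation.result()
print("Created", result.name)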

Related

Make a Python web API run only one instance at a time?

I'd like to make a Python Azure Function App (web API) process a queue of tasks. I have already set up a trigger that calls this API whenever a task is inserted into the queue. Since this API processes all of the current tasks in the queue, I would like to prevent it from executing while another execution of the same API is in progress, to avoid processing conflicts.
I thought of using a database locking mechanism, but it doesn't look very elegant. Is there any singleton design pattern that can be used in a Python Azure Function App for this purpose? Thanks.
I found a way to solve this problem using Azure Durable Functions. There are three types of functions in an Azure Durable Functions app: the Orchestration Client function, the Orchestrator function, and Activity functions. I just need to add some checking steps in the Orchestration Client function, as in the following example:
# This function is an HTTP starter function for Durable Functions.
import logging

import azure.functions as func
import azure.durable_functions as df


def is_finished(runtime_status: df.models.OrchestrationRuntimeStatus):
    result = False
    if runtime_status is None or \
       runtime_status in [df.OrchestrationRuntimeStatus.Canceled,
                          df.OrchestrationRuntimeStatus.Completed,
                          df.OrchestrationRuntimeStatus.Failed,
                          df.OrchestrationRuntimeStatus.Terminated]:
        result = True
    return result


async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)

    # general azure function url : http://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>
    # function.json -> "route": "orchestrators/{functionName}/{instanceId}"
    orchestrator_instance_id = req.route_params['instanceId']
    function_name = req.route_params['functionName']
    INVENSYNC_ORCHESTRATOR_INSTANCE_ID = '117610EF-BC37-4E31-BFA4-205EBB3CC54E'  # just select any key

    if orchestrator_instance_id == INVENSYNC_ORCHESTRATOR_INSTANCE_ID:
        existing_instance_status = await client.get_status(orchestrator_instance_id)
        logging.info(f"InventorySyncHttpStart() - existing_instance_status = '{existing_instance_status}'.")

        if existing_instance_status is None or \
           is_finished(existing_instance_status.runtime_status):
            # Guard the attribute access so the very first run (status is None) doesn't raise.
            runtime_status = existing_instance_status.runtime_status if existing_instance_status else None
            logging.info(f"InventorySyncHttpStart() - existing_instance_status.runtime_status = '{runtime_status}'.")

            orchestrator_instance_id = await client.start_new(function_name, orchestrator_instance_id)
            logging.info(f"Started orchestration with ID = '{orchestrator_instance_id}'.")
            result = client.create_check_status_response(req, orchestrator_instance_id)
        else:
            result = func.HttpResponse(status_code=409,
                                       body=f"An instance with ID '{orchestrator_instance_id}' already exists")
    else:
        result = func.HttpResponse(status_code=406,
                                   body=f"Invalid Instance ID '{orchestrator_instance_id}' in URL")

    return result

Setting Elasticsearch client object as instance variable in Apache Beam transform causes serialization error in Python

In one of my Apache Beam transforms I write data to Elasticsearch, which is running locally in a Docker container. This is done by creating an Elasticsearch client, and passing it to the transform. In the transform I have an __init__ function that sets the Elasticsearch client as an instance variable: self.es_client = es_client, which is then used by the process function to write data to Elasticsearch.
The problem is that I can't do this. Whenever I set the value of an instance variable in this transform to the client object, I receive the error "TypeError: Cannot serialize socket object".
My best guess at what's happening is that Apache Beam automatically serializes any instance variables in a transform, and it's unable to serialize this Elasticsearch client object for some reason.
The closest thing I've found online is this issue. I'm quite confused about why this is happening, but would appreciate any insights!
File that creates Elasticsearch client and passes it into the Beam transform:
es_client = Elasticsearch([
    {
        'host': "0.0.0.0", 'network.host': "0.0.0.0", 'network.publish_host': "0.0.0.0", 'http.port': 9200,
        'timeout': 30, 'retry_on_timeout': True, 'max_retries': 10
    }
])
....
# Line where I call the transform (as part of larger pipeline)
"Insert sessions into Elasticsearch" >> beam.ParDo(transforms.WriteDataToElasticsearch("sessions", es_client))
File with the transform
class WriteDataToElasticsearch(beam.DoFn):
    def __init__(self, index_name, es_client):
        # What index to write to
        self.index_name = index_name
        self.es_client = es_client

    def process(self, element):
        # Doesn't even get to this line - error seems to be thrown at conclusion of __init__ method
        index_exists = self.es_client.indices.exists(index=self.index_name)
        if not index_exists:
            print('Creating index {i}'.format(i=self.index_name))
            self.es_client.indices.create(index=self.index_name)
        print('Writing to {i} index'.format(i=self.index_name))
        res = self.es_client.index(index=self.index_name, body=element)
        print(res)
        yield
I solved it for myself; hopefully it works for others as well. I needed to use the statsd client in Apache Beam on Google Dataflow. The trick is to create the client in start_bundle, which runs on the worker, instead of passing it into __init__, so the client never has to be pickled.
import statsd
import apache_beam as beam

class SendToStatsD(beam.DoFn):
    def start_bundle(self):
        self.statsd_client = statsd.StatsClient(<ip>, <port>)

    def process(self, element):
        # use self.statsd_client
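The same idea applies to the Elasticsearch client from the question: keep only picklable configuration on the DoFn and build the client lazily on the worker. A minimal sketch, assuming an elasticsearch-py version that accepts host/port dicts and a Beam version recent enough to have DoFn.setup (otherwise start_bundle works the same way); the host and port values are illustrative:

import apache_beam as beam
from elasticsearch import Elasticsearch


class WriteDataToElasticsearch(beam.DoFn):
    def __init__(self, index_name, es_host='localhost', es_port=9200):
        # Only picklable configuration is stored on the instance.
        self.index_name = index_name
        self.es_host = es_host
        self.es_port = es_port
        self.es_client = None

    def setup(self):
        # Runs on the worker after deserialization, so the socket-backed
        # client itself is never pickled.
        self.es_client = Elasticsearch([{'host': self.es_host, 'port': self.es_port}])

    def process(self, element):
        if not self.es_client.indices.exists(index=self.index_name):
            self.es_client.indices.create(index=self.index_name)
        yield self.es_client.index(index=self.index_name, body=element)

The transform is then constructed with plain values, e.g. beam.ParDo(WriteDataToElasticsearch("sessions")), instead of a live client object.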

How to control a SparkApplication programmatically using the python kubernetes client?

I'd like to submit a SparkApplication to a Kubernetes cluster programmatically from Python.
A job definition job.yaml like this:
apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
  name: my-test
  namespace: default
spec:
  sparkVersion: "2.4.0"
  type: Python
  ...
runs without problems using kubectl apply -f job.yaml, but I cannot figure out whether and how I can use the kubernetes-client to start this job programmatically.
Does anyone know how to do this?
Here is an example of how to create a third-party (custom) resource on Kubernetes using the Kubernetes Python client:
https://github.com/kubernetes-client/python/blob/master/examples/create_thirdparty_resource.md
Hope this helps.
This is probably what you are looking for:
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

# Configure API key authorization: BearerToken
configuration = kubernetes.client.Configuration()
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'

# create an instance of the API class
api_instance = kubernetes.client.CustomObjectsApi(kubernetes.client.ApiClient(configuration))
group = 'group_example'  # str | The custom resource's group name
version = 'version_example'  # str | The custom resource's version
namespace = 'namespace_example'  # str | The custom resource's namespace
plural = 'plural_example'  # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
body = NULL  # object | The JSON schema of the Resource to create.
pretty = 'pretty_example'  # str | If 'true', then the output is pretty printed. (optional)

try:
    api_response = api_instance.create_namespaced_custom_object(group, version, namespace, plural, body, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
source https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#create_namespaced_custom_object
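Filled in for the SparkApplication above, the placeholders map directly onto the Spark operator's CRD. A minimal sketch, assuming kubeconfig-based authentication and that the operator registers the CRD with the plural name sparkapplications:

import yaml
from kubernetes import client, config

# Authenticate the same way kubectl does (~/.kube/config).
config.load_kube_config()
api = client.CustomObjectsApi()

# Load the manifest that already works with `kubectl apply -f job.yaml`.
with open("job.yaml") as f:
    body = yaml.safe_load(f)

# group/version come from the manifest's apiVersion (sparkoperator.k8s.io/v1beta1).
api.create_namespaced_custom_object(
    group="sparkoperator.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="sparkapplications",
    body=body,
)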

Python way of polling longrunning operations from operation name in Google Cloud?

I'm calling a Google Cloud Function that returns an Operation object implementing the google.longrunning.Operations interface. I want to poll this operation from another Python process that will only receive the operation name (will not have access to the operation object itself). So I need something like:
operation = getOperation(operationName)
isdone = operation.done()
AFAIK, you can't do the first step above. I haven't found it here: https://google-cloud-python.readthedocs.io/en/stable/core/operation.html
I would like to do what is explained in the docs about the google.longrunning interface (https://cloud.google.com/speech-to-text/docs/reference/rpc/google.longrunning#google.longrunning.Operations.GetOperation):
rpc GetOperation(GetOperationRequest) returns (Operation)
Where the GetOperationRequest simply requires the operation name. Is there a way to "re-create" an operation using functions from the google-cloud-python library?
Update for more recent clients: you need to refresh the operation using the OperationsClient.
To poll an existing operation you will need to pass the client's channel across to the OperationsClient.
For example, backing up a Firestore database:
from google.cloud import firestore_admin_v1
from google.api_core import operations_v1, grpc_helpers
import time

def main():
    client = firestore_admin_v1.FirestoreAdminClient()
    channel = grpc_helpers.create_channel(client.SERVICE_ADDRESS)
    api = operations_v1.OperationsClient(channel)

    db_path = client.database_path('myproject', 'mydb')
    operation = client.export_documents(db_path)

    current_status = api.get_operation(operation.name)
    while not current_status.done:
        time.sleep(5)
        current_status = api.get_operation(operation.name)
        print('waiting to complete')
    print('operation done')
In my case, the AutoML Tables client didn't have SERVICE_ADDRESS or SCOPE properties, so I couldn't create a new gRPC channel.
But using the existing one from the client seems to work!
from google.api_core import operations_v1
from google.cloud.automl_v1beta1 import TablesClient

automl_tables_client = TablesClient(
    credentials=...,
    project=...,
    region=...,
)
operation_name = ""

grpc_channel = automl_tables_client.auto_ml_client.transport._channel
api_client = operations_v1.OperationsClient(grpc_channel)
response = api_client.get_operation(operation_name)
You can use the get_operation method of the "Long-Running Operations Client":
from google.api_core import operations_v1
api = operations_v1.OperationsClient()
name = ...
response = api.get_operation(name)
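If all you have is the operation name returned by the Cloud Functions API, the same pattern works with a channel built for that service; a minimal sketch, assuming cloudfunctions.googleapis.com is the right service address and that application default credentials are available:

from google.api_core import grpc_helpers, operations_v1

operation_name = ...  # e.g. "operations/..." received from the other process

# Channel for the service that owns the operation (assumed: Cloud Functions API).
channel = grpc_helpers.create_channel("cloudfunctions.googleapis.com")
api = operations_v1.OperationsClient(channel)

operation = api.get_operation(operation_name)
print(operation.done)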

python firebase realtime listener

Hi there, I'm new to Python.
I would like to implement a listener on my Firebase DB.
When I change one or more parameters in the DB, my Python code has to do something.
How can I do it?
Thanks a lot.
My DB is a simple list of data from 001 to 200:
"remote-controller"
001 -> 000
002 -> 020
003 -> 230
my code is:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://remote-controller.firebaseio.com/', None)
result = firebase.get('003', None)
print result
It looks like this is supported now (October 2018): although it's not documented in the 'Retrieving Data' guide, you can find the needed functionality in the API reference. I tested it and it works like this:
def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

firebase_admin.db.reference('my/data/path').listen(listener)
As Peter Haddad suggested, you should use Pyrebase to achieve something like that, given that the Python SDK still does not support realtime event listeners.
import pyrebase

config = {
    "apiKey": "apiKey",
    "authDomain": "projectId.firebaseapp.com",
    "databaseURL": "https://databaseName.firebaseio.com",
    "storageBucket": "projectId.appspot.com"
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

def stream_handler(message):
    print(message["event"])  # put
    print(message["path"])   # /-K7yGTTEp7O549EzTYtI
    print(message["data"])   # {'title': 'Pyrebase', "body": "etc..."}

my_stream = db.child("posts").stream(stream_handler)
If anybody wants to create multiple listeners using the same listener function and get more info about the triggered node, you can do it like this.
A normal listener function receives an Event object that has only the data, node name, and event type. If you add multiple listeners and want to differentiate between the data changes, you can write your own class and attach some extra info to it when creating the object.
class ListenerClass:
    def __init__(self, appname):
        self.appname = appname

    def listener(self, event):
        print(event.event_type)  # can be 'put' or 'patch'
        print(event.path)        # relative to the reference, it seems
        print(event.data)        # new data at /reference/event.path. None if deleted
        print(self.appname)      # Extra data related to the change; add your own member variables
Creating objects:
listenerObject = ListenerClass(my_app_name + '1')
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
Full code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

# Initialising the database with credentials
json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

# Create the objects here. You can use loops to create many listeners, but each listener
# runs on its own thread, so don't create irrelevant ones; it won't work on a machine with
# thread constraints.
listenerObject = ListenerClass(my_app_name + '1')  # decide your own parameters for differentiating; it's up to you
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
As you can see on the per-language feature chart on the Firebase Admin SDK home page, Python and Go currently don't have realtime event listeners. If you need that on your backend, you'll have to use the node.js or Java SDKs.
You can use Pyrebase, which is a python wrapper for the Firebase API.
more info here:
https://github.com/thisbejim/Pyrebase
To retrieve data you need to use val(), example:
users = db.child("users").get()
print(users.val())
Python Firebase realtime listener, full code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

db.reference('PatientMonitoring', app=obj).listen(listener)
Output:
put
/
{'n0': '40', 'n1': '71'} # the first time, it fetches the data at the path whether it has changed or not
put # On data changed
/n1
725
put # On data changed
/n0
401
