I'd like to submit a SparkApplication to a Kubernetes cluster programmatically from Python.
A job definition job.yaml like this
apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
  name: my-test
  namespace: default
spec:
  sparkVersion: "2.4.0"
  type: Python
  ...
runs without problems using kubectl apply -f job.yaml, but I cannot figure out whether and how I can use the kubernetes-client to start this job programmatically.
Does anyone know how to do this?
Here is an example of how to create a third-party (custom) resource on Kubernetes using the Kubernetes Python client:
https://github.com/kubernetes-client/python/blob/master/examples/create_thirdparty_resource.md
Hope this helps.
This is probably what you are looking for:
from __future__ import print_function
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

# Configure API key authorization: BearerToken
configuration = kubernetes.client.Configuration()
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to set up a prefix (e.g. Bearer) for the API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'

# create an instance of the API class
api_instance = kubernetes.client.CustomObjectsApi(kubernetes.client.ApiClient(configuration))
group = 'group_example'  # str | The custom resource's group name
version = 'version_example'  # str | The custom resource's version
namespace = 'namespace_example'  # str | The custom resource's namespace
plural = 'plural_example'  # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
body = {}  # dict | The manifest of the resource to create, as a dict (not NULL)
pretty = 'true'  # str | If 'true', then the output is pretty printed. (optional)

try:
    api_response = api_instance.create_namespaced_custom_object(group, version, namespace, plural, body, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
source https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#create_namespaced_custom_object
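Applied to the SparkApplication from the question, a minimal sketch could look like this (it assumes kubeconfig-based auth; group and version come from the apiVersion in job.yaml, and the operator's CRD is assumed to use the conventional lowercase plural sparkapplications):
import yaml
from kubernetes import client, config

# Load credentials from ~/.kube/config (use config.load_incluster_config() inside a pod)
config.load_kube_config()

with open("job.yaml") as f:
    job_manifest = yaml.safe_load(f)

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="sparkoperator.k8s.io",   # from apiVersion in job.yaml
    version="v1beta1",              # from apiVersion in job.yaml
    namespace="default",            # from metadata.namespace
    plural="sparkapplications",     # lowercase plural of the CRD kind
    body=job_manifest,
)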
I am creating a Lambda function that fetches the SSM parameter for the EKS-optimized AMI ID. About the EKS-optimized AMI: it is the default AMI provided by EKS if no AMI is specified explicitly, and it differs per region and Kubernetes version. I am working on upgrading this AMI on the node groups, and I am getting the AMI ID here for Kubernetes version 1.21. I want to pass this Kubernetes version ${EKS_VERSION} to get_parameters() as
ami_id = ssm.get_parameters(Names=["/aws/service/eks/optimized-ami/${EKS_VERSION}/amazon-linux-2/recommended/image_id"])
Can you help me do this in boto3, and if yes, how?
Thanks in advance!
Maybe I am missing the point of the question, but it is pretty straightforward, as you already have the request in your question. If you put the following code into your lambda, it should get you the AMI ID for the version you want in that region.
For something like this, you may want to use a lambda env variable with a default, and overwrite it when you want something different.
import os
import boto3

# get an ssm client
ssm_client = boto3.client('ssm')

# you need to pass the var somehow; here we assume an environment variable on the lambda.
# You could use some other system to trigger and pass the information to your lambda, e.g. SNS
eks_version = os.getenv('EKS_VERSION')

# set the parameter name you want to read; note the f-string interpolating the variable
param_name = f"/aws/service/eks/optimized-ami/{eks_version}/amazon-linux-2/recommended/image_id"

# get_parameters
response = ssm_client.get_parameters(Names=[param_name])

# print / return response
print(response)
For overriding the parameter, you could use SNS or CloudWatch with Lambda if you are building some kind of automation, but you would need to parse the input from them.
For example, a simple JSON payload in SNS:
{
  "eks_version": "1.21"
}
and in your code you can make a small adjustment once you have parsed the SNS payload, e.g.
import json
import os

# if the lambda was triggered via SNS, read the version from the message payload
if event.get('Records') and 'Sns' in event['Records'][0]:
    sns_eks_version = json.loads(event['Records'][0]['Sns']['Message'])['eks_version']
else:
    sns_eks_version = None

eks_version = sns_eks_version or os.getenv('EKS_VERSION')
This is how I did it:
import logging
import boto3

ssm_client = boto3.client('ssm')
eks_client = boto3.client('eks')
eksClusterName = 'dev-infra2-eks'

def lambda_handler(event, context):
    # Get the current EKS version
    response = eks_client.describe_cluster(
        name=eksClusterName
    )
    eksVersion = response['cluster']['version']
    aws_eks_ami_ssm_param = "/aws/service/eks/optimized-ami/" + eksVersion + "/amazon-linux-2/recommended/image_id"

    # Get the SSM param for the AMI ID
    try:
        eks_ssm_ami = ssm_client.get_parameter(Name=aws_eks_ami_ssm_param)
        latest_ami_id = eks_ssm_ami['Parameter']['Value']
        return latest_ami_id
    except ssm_client.exceptions.ParameterNotFound:
        logging.error("Parameter Not Found")
I need to disable Airflow DAGs from AWS Lambda or some other way. Can I use Python code to do this? Thank you in advance.
You can pause/unpause a DAG with the Airflow REST API.
The relevant endpoint is update a DAG:
https://airflow.apache.org/api/v1/dags/{dag_id}
With:
{
  "is_paused": true
}
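For example, calling the endpoint directly with the requests library (a minimal sketch; the host, port, DAG ID and basic-auth credentials are placeholders, and it assumes basic auth is enabled on the API):
import requests

# PATCH /api/v1/dags/{dag_id} with {"is_paused": true} pauses the DAG
response = requests.patch(
    "http://localhost:8080/api/v1/dags/my_dag_id",
    json={"is_paused": True},
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),
)
response.raise_for_status()
print(response.json())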
You also have the official Airflow Python client that you can use to interact with the API. Example:
import airflow_client.client as client
from airflow_client.client.api import dag_api
from airflow_client.client.model.dag import DAG
from pprint import pprint

# Configure the host and HTTP basic authorization: Basic
configuration = client.Configuration(
    host="http://localhost/api/v1",
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD'
)

with client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = dag_api.DAGApi(api_client)
    dag_id = "dag_id_example"  # str | The DAG ID.
    dag = DAG(
        is_paused=True,
    )
    try:
        # Update a DAG
        api_response = api_instance.patch_dag(dag_id, dag)
        pprint(api_response)
    except client.ApiException as e:
        print("Exception when calling DAGApi->patch_dag: %s\n" % e)
You can see the full example in the client doc.
I am trying to automate GCP Memorystore creation, but I didn't find a way to create it using Python. Please help.
You can use the Python Client for the Google Cloud Memorystore for Redis API in order to create it.
Use the create_instance method of the client library, which creates a Redis instance with the specified tier and memory size:
async create_instance(request: google.cloud.redis_v1.types.cloud_redis.CreateInstanceRequest = None, *,
                      parent: str = None, instance_id: str = None,
                      instance: google.cloud.redis_v1.types.cloud_redis.Instance = None,
                      retry: google.api_core.retry.Retry = <object object>,
                      timeout: float = None, metadata: Sequence[Tuple[str, str]] = ())
from google.cloud import redis_v1beta1
from google.cloud.redis_v1beta1 import enums

client = redis_v1beta1.CloudRedisClient()
parent = client.location_path('<project>', '<location>')
instance_id = 'test-instancee'
tier = enums.Instance.Tier.BASIC
memory_size_gb = 1
instance = {'tier': tier, 'memory_size_gb': memory_size_gb}
response = client.create_instance(parent, instance_id, instance)

def callback(operation_future):
    # Handle result.
    result = operation_future.result()

response.add_done_callback(callback)

# Handle metadata.
# metadata = response.metadata()
print "Created"  # note: Python 2 print statement
This code works fine, but only on Python 2. Is there any way to use it in Python 3? Please mention it.
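For Python 3, a minimal sketch with the newer google-cloud-redis client could look like this (the project and location are placeholders, and the v1 API surface is assumed):
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()
parent = "projects/<project>/locations/<location>"
instance = redis_v1.Instance(
    tier=redis_v1.Instance.Tier.BASIC,
    memory_size_gb=1,
)

# create_instance returns a long-running operation
operation = client.create_instance(
    parent=parent,
    instance_id="test-instance",
    instance=instance,
)
result = operation.result()  # block until the operation completes
print("Created", result.name)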
I am trying to use the Python DNS module (dnspython) to create (add) a new DNS record.
The documentation shows how to create an update (http://www.dnspython.org/examples.html):
import dns.tsigkeyring
import dns.update
import sys
keyring = dns.tsigkeyring.from_text({
'host-example.' : 'XXXXXXXXXXXXXXXXXXXXXX=='
})
update = dns.update.Update('dyn.test.example', keyring=keyring)
update.replace('host', 300, 'a', sys.argv[1])
But it does not specify how to actually generate the keyring string that can be passed to the dns.tsigkeyring.from_text() method in the first place.
What is the correct way to generate the key? I am using krb5 at my organization.
Server is running on Microsoft AD DNS with GSS-TSIG.
TSIG and GSS-TSIG are different beasts – the former uses a static preshared key that can be simply copied from the server, but the latter uses Kerberos (GSSAPI) to negotiate a session key for every transaction.
At the time when this thread was originally posted, dnspython 1.x did not have any support for GSS-TSIG whatsoever.
(The handshake does not result in a static key that could be converted to a regular TSIG keyring; instead the GSSAPI library itself must be called to build an authenticator – dnspython 1.x could not do that, although dnspython 2.1 finally can.)
If you are trying to update an Active Directory DNS server, BIND's nsupdate command-line tool supports GSS-TSIG (and sometimes it even works). You should be able to run it through subprocess and simply feed the necessary updates via stdin.
import subprocess

# dyn_zone, fqdn and challenge are placeholders for your zone, record name and record data
cmds = [f'zone {dyn_zone}\n',
        f'del {fqdn}\n',
        f'add {fqdn} 60 TXT "{challenge}"\n',
        f'send\n']

subprocess.run(["nsupdate", "-g"],
               input="".join(cmds).encode(),
               check=True)
As with most Kerberos client applications, nsupdate expects the credentials to be already present in the environment (that is, you need to have already obtained a TGT using kinit beforehand; or alternatively, if a recent version of MIT Krb5 is used, you can point $KRB5_CLIENT_KTNAME to the keytab containing the client credentials).
Update: dnspython 2.1 finally has the necessary pieces for GSS-TSIG, but creating the keyring is currently a very manual process – you have to call the GSSAPI library and process the TKEY negotiation yourself. The code for doing so is included at the bottom.
(The Python code below can be passed a custom gssapi.Credentials object, but otherwise it looks for credentials in the environment just like nsupdate does.)
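(A sketch of building such a credentials object explicitly, assuming python-gssapi with the MIT Kerberos credential-store extension and a hypothetical keytab path:)
import gssapi

# acquire initiator credentials from a keytab instead of the default ccache
creds = gssapi.Credentials(usage="initiate",
                           store={"client_keytab": "/path/to/client.keytab"})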
import socket
import time
import uuid

import dns.message
import dns.name
import dns.query
import dns.rcode
import dns.rdataclass
import dns.rdatatype
import dns.rdtypes.ANY.TKEY
import dns.resolver
import dns.tsig
import dns.update
import gssapi


def _build_tkey_query(token, key_ring, key_name):
    inception_time = int(time.time())
    tkey = dns.rdtypes.ANY.TKEY.TKEY(dns.rdataclass.ANY,
                                     dns.rdatatype.TKEY,
                                     dns.tsig.GSS_TSIG,
                                     inception_time,
                                     inception_time,
                                     3,
                                     dns.rcode.NOERROR,
                                     token,
                                     b"")
    query = dns.message.make_query(key_name,
                                   dns.rdatatype.TKEY,
                                   dns.rdataclass.ANY)
    query.keyring = key_ring
    query.find_rrset(dns.message.ADDITIONAL,
                     key_name,
                     dns.rdataclass.ANY,
                     dns.rdatatype.TKEY,
                     create=True).add(tkey)
    return query


def _probe_server(server_name, zone):
    gai = socket.getaddrinfo(str(server_name),
                             "domain",
                             socket.AF_UNSPEC,
                             socket.SOCK_DGRAM)
    for af, sf, pt, cname, sa in gai:
        query = dns.message.make_query(zone, "SOA")
        res = dns.query.udp(query, sa[0], timeout=2)
        return sa[0]


def gss_tsig_negotiate(server_name, server_addr, creds=None):
    # Acquire GSSAPI credentials
    gss_name = gssapi.Name(f"DNS@{server_name}",
                           gssapi.NameType.hostbased_service)
    gss_ctx = gssapi.SecurityContext(name=gss_name,
                                     creds=creds,
                                     usage="initiate")

    # Name generation tips: https://tools.ietf.org/html/rfc2930#section-2.1
    key_name = dns.name.from_text(f"{uuid.uuid4()}.{server_name}")
    tsig_key = dns.tsig.Key(key_name, gss_ctx, dns.tsig.GSS_TSIG)
    key_ring = {key_name: tsig_key}
    key_ring = dns.tsig.GSSTSigAdapter(key_ring)

    token = gss_ctx.step()
    while not gss_ctx.complete:
        tkey_query = _build_tkey_query(token, key_ring, key_name)
        response = dns.query.tcp(tkey_query, server_addr, timeout=5)
        if not gss_ctx.complete:
            # Original comment:
            # https://github.com/rthalley/dnspython/pull/530#issuecomment-658959755
            # "this if statement is a bit redundant, but if the final token comes
            # back with TSIG attached the patch to message.py will automatically step
            # the security context. We dont want to excessively step the context."
            token = gss_ctx.step(response.answer[0][0].key)
    return key_name, key_ring


def gss_tsig_update(zone, update_msg, creds=None):
    # Find the SOA of our zone
    answer = dns.resolver.resolve(zone, "SOA")
    soa_server = answer.rrset[0].mname
    server_addr = _probe_server(soa_server, zone)

    # Get the GSS-TSIG key
    key_name, key_ring = gss_tsig_negotiate(soa_server, server_addr, creds)

    # Dispatch the update
    update_msg.use_tsig(keyring=key_ring,
                        keyname=key_name,
                        algorithm=dns.tsig.GSS_TSIG)
    response = dns.query.tcp(update_msg, server_addr)
    return response
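A hypothetical usage example, assuming a TGT has already been obtained with kinit and the zone accepts GSS-TSIG-secured dynamic updates (the zone name and record data are placeholders):
import dns.rcode
import dns.update

update = dns.update.Update("dyn.example.com")
update.replace("host", 300, "A", "203.0.113.5")

response = gss_tsig_update("dyn.example.com", update)
print(dns.rcode.to_text(response.rcode()))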
Hi there, I'm new to Python.
I would like to implement a listener on my Firebase DB.
When I change one or more parameters on the DB, my Python code has to do something.
How can I do it?
Thanks a lot.
My DB is a simple list of data from 001 to 200:
"remote-controller"
001 -> 000
002 -> 020
003 -> 230
My code is:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://remote-controller.firebaseio.com/', None)
result = firebase.get('003', None)
print(result)
It looks like this is supported now (October 2018): although it's not documented in the 'Retrieving Data' guide, you can find the needed functionality in the API reference. I tested it and it works like this:
import firebase_admin
from firebase_admin import db

# assumes firebase_admin.initialize_app() was already called with a databaseURL option

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

db.reference('my/data/path').listen(listener)
As Peter Haddad suggested, you should use Pyrebase for achieving something like that, given that the Python SDK still does not support realtime event listeners.
import pyrebase

config = {
    "apiKey": "apiKey",
    "authDomain": "projectId.firebaseapp.com",
    "databaseURL": "https://databaseName.firebaseio.com",
    "storageBucket": "projectId.appspot.com"
}

firebase = pyrebase.initialize_app(config)
db = firebase.database()

def stream_handler(message):
    print(message["event"])  # put
    print(message["path"])   # /-K7yGTTEp7O549EzTYtI
    print(message["data"])   # {'title': 'Pyrebase', "body": "etc..."}

my_stream = db.child("posts").stream(stream_handler)
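If you need to stop listening at some point, Pyrebase streams can be closed:
my_stream.close()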
If anybody wants to create multiple listeners using the same listener function and wants more info about the triggered node, one can do it like this.
A normal listener function gets an Event object that only carries the data, node name and event type. If you add multiple listeners and want to differentiate between the data changes, you can write your own class and attach some extra info to it when creating the object.
class ListenerClass:
    def __init__(self, appname):
        self.appname = appname

    def listener(self, event):
        print(event.event_type)  # can be 'put' or 'patch'
        print(event.path)        # relative to the reference, it seems
        print(event.data)        # new data at /reference/event.path. None if deleted
        print(self.appname)      # extra data related to the change; add your own member variables
Creating Objects:
listenerObject = ListenerClass(my_app_name + '1')
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
Full Code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

# Initialising the app with credentials
json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

# Create your objects here. You can use loops to create many listeners, but note that
# each listener spawns its own thread, so don't create irrelevant listeners; it won't
# work on a machine with a thread constraint.
listenerObject = ListenerClass(my_app_name + '1')  # decide your own parameters and how you want to differentiate
db.reference('PatientMonitoring', app=obj).listen(listenerObject.listener)

listenerObject = ListenerClass(my_app_name + '2')
db.reference('SomeOtherPath', app=obj).listen(listenerObject.listener)
As you can see on the per-language feature chart on the Firebase Admin SDK home page, Python and Go currently don't have realtime event listeners. If you need that on your backend, you'll have to use the node.js or Java SDKs.
You can use Pyrebase, which is a Python wrapper for the Firebase API.
more info here:
https://github.com/thisbejim/Pyrebase
To retrieve data you need to use val(), example:
users = db.child("users").get()
print(users.val())
Python Firebase realtime listener, full code:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

def listener(event):
    print(event.event_type)  # can be 'put' or 'patch'
    print(event.path)        # relative to the reference, it seems
    print(event.data)        # new data at /reference/event.path. None if deleted

json_path = r'E:\Projectz\FYP\FreshOnes\Python\PastLocations\fyp-healthapp-project-firebase-adminsdk-40qfo-f8fc938674.json'
my_app_name = 'fyp-healthapp-project'
xyz = {'databaseURL': 'https://{}.firebaseio.com'.format(my_app_name), 'storageBucket': '{}.appspot.com'.format(my_app_name)}
cred = credentials.Certificate(json_path)
obj = firebase_admin.initialize_app(cred, xyz, name=my_app_name)

db.reference('PatientMonitoring', app=obj).listen(listener)
Output:
put
/
{'n0': '40', 'n1': '71'} # the first call fetches the data at the path, whether it has changed or not
put # on data change
/n1
725
put # on data change
/n0
401