Kubernetes REST API call using Python

I'm looking for a way to find a pod by name and run a REST call against it using Python. I thought of using port forwarding and the Kubernetes client.
Can someone share a code sample or any other way to do it?
Here is what I started to do:
from kubernetes import client, config

config.load_kube_config(config_file="my file")
v1 = client.CoreV1Api()  # avoid naming the instance "client": it shadows the imported module
pods = v1.list_namespaced_pod(namespace="my namespace")
# loop over pods to find my pod
Next I thought of using:
stream(client.connect_get_namespaced_pod_portforward_with_http_info ...
With the kubectl command line tool I do the following:
1. List pods
2. Open a port forward
3. Use curl to perform the REST call
I want to do the same in Python.

List all the pods:
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Then, in the for loop above, you can check each pod's name; if it matches, return that pod.
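For the port-forward + REST part, the client also exposes port forwarding through kubernetes.stream. Below is a minimal sketch, assuming a pod whose name starts with my-pod in my-namespace serving HTTP on port 8080 (all placeholders); it speaks raw HTTP over the forwarded socket, along the lines of the client's own pod_portforward example:

from kubernetes import client, config
from kubernetes.stream import portforward

config.load_kube_config()
v1 = client.CoreV1Api()

# find the pod by name prefix (placeholder values)
pod = next(p for p in v1.list_namespaced_pod(namespace="my-namespace").items
           if p.metadata.name.startswith("my-pod"))

# open a port forward and get a socket-like object bound to the pod's port
pf = portforward(v1.connect_get_namespaced_pod_portforward,
                 pod.metadata.name, "my-namespace", ports="8080")
sock = pf.socket(8080)
sock.setblocking(True)

# issue a plain HTTP request over the forwarded socket
sock.sendall(b"GET /health HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
response = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    response += data
sock.close()
print(response.decode())

Alternatively, exec-ing curl inside the pod via stream(v1.connect_get_namespaced_pod_exec, ...) avoids the socket handling, at the cost of requiring curl in the image.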

Calling your pod through the Kubernetes API is very un-Kubernetes- and un-container-like: you are coupling a microservice (or service) to a deployment technology. You should configure a Service and call it using a standard Python REST call.
If you are calling from inside the cluster, use the Service name as the URL domain. If you are calling from outside the cluster, use the cluster IP (e.g. with Docker, http://localhost:8000/) with the same code and a different argument, and make sure you configure the Service to expose the port outside correctly,
like so:
#!/usr/bin/env python
import sys
import json
import requests

def call_service(outside_call=True, protocol='http', domain='localhost', service_name='whisperer', service_port='8002', index='hello', payload=None, headers=None):
    if outside_call:
        url = f'{protocol}://{domain}:{service_port}/{index}'
    else:
        url = f'{protocol}://{service_name}:{service_port}/{index}'

    try:
        g = requests.get(url=url)
        print(f'a is: {g}')
        r = requests.post(f'{url}', data=json.dumps(payload), headers=headers)
        print(f'The text returned from the server: {r.text}')
        return r.text
        # return json.loads(r.content)
    except Exception as e:
        raise Exception(f"Error occurred while trying to call service: {e}")

if __name__ == "__main__":
    args = sys.argv
    l = len(args)
    if l > 1:
        outside_call = True if args[1] == 'true' else False
    else:
        outside_call = False
    a = call_service(payload={'key': 'value'}, outside_call=outside_call)
    print(f'a is: {a}')
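On the "expose the port" point, if you manage the Service from Python as well, here is a hedged sketch using the official client; the name, selector, and ports (whisperer, 8002, node port 30002) are assumptions echoing the example above:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# NodePort Service so the pods are reachable from outside the cluster
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="whisperer"),
    spec=client.V1ServiceSpec(
        selector={"app": "whisperer"},  # must match the pod labels
        ports=[client.V1ServicePort(port=8002, target_port=8002, node_port=30002)],
        type="NodePort",
    ),
)
v1.create_namespaced_service(namespace="default", body=service)

The same Service can of course be created with kubectl or a YAML manifest instead.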

Related

How to fix missing port issue with GCP Cloud Function (Gen2) when deployed?

I am trying to deploy a Cloud Function (gen2) in GCP but keep running into the same issue: every deploy fails with this error when Cloud Functions sets up Cloud Run:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
MAIN.PY
from google.cloud import pubsub_v1
from google.cloud import firestore
import requests
import json
from firebase_admin import firestore
import google.auth

credentials, project = google.auth.default()

# API INFO
Base_url = 'https://xxxxxxxx.net/v1/feeds/sportsbookv2'
Sport_id = 'xxxxxxxx'
AppID = 'xxxxxxxx'
AppKey = 'xxxxxxxx'
Country = 'en_AU'
Site = 'www.xxxxxxxx.com'

project_id = "xxxxxxxx"
subscription_id = "xxxxxxxx-basketball-nba-events"
timeout = 5.0

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)
db = firestore.Client(project='xxxxxxxx')

def winodds(message: pubsub_v1.subscriber.message.Message) -> None:
    events = json.loads(message.data)
    event_ids = events['event_ids']
    url = f"{Base_url}/betoffer/event/{','.join(map(str, event_ids))}.json?app_id={AppID}&app_key={AppKey}&local={Country}&site={Site}"
    print(url)
    windata = requests.get(url).text
    windata = json.loads(windata)
    for odds_data in windata['betOffers']:
        if odds_data['betOfferType']['name'] == 'Head to Head' and 'MAIN' in odds_data['tags']:
            event_id = odds_data['eventId']
            home_team = odds_data['outcomes'][0]['participant']
            home_team_win_odds = odds_data['outcomes'][0]['odds']
            away_team = odds_data['outcomes'][1]['participant']
            away_team_win_odds = odds_data['outcomes'][1]['odds']
            print(f'{event_id} {home_team} {home_team_win_odds} {away_team} {away_team_win_odds}')

            # WRITE TO FIRESTORE
            doc_ref = db.collection(u'xxxxxxxx').document(u'basketball_nba').collection(u'win_odds').document(f'{event_id}')
            doc_ref.set({
                u'event_id': event_id,
                u'home_team': home_team,
                u'home_team_win_odds': home_team_win_odds,
                u'away_team': away_team,
                u'away_team_win_odds': away_team_win_odds,
                u'timestamp': firestore.SERVER_TIMESTAMP,
            })

streaming_pull_future = subscriber.subscribe(subscription_path, callback=winodds)
print(f"Listening for messages on {subscription_path}..\n")

# Wrap subscriber in a 'with' block to automatically call close() when done.
with subscriber:
    try:
        # When `timeout` is not set, result() will block indefinitely,
        # unless an exception is encountered first.
        streaming_pull_future.result()
    except TimeoutError:
        streaming_pull_future.cancel()  # Trigger the shutdown.
        streaming_pull_future.result()  # Block until the shutdown is complete.

if __name__ == "__main__":
    winodds()
DOCKERFILE
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.10
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
ENV GOOGLE_APPLICATION_CREDENTIALS /app/xxxxx-key.json
ENV PORT 8080
# Install production dependencies.
RUN pip install functions-framework
RUN pip install -r requirements.txt
# Run the web service on container startup.
CMD exec functions-framework --target=winodds --debug --port=$PORT
I am using PyCharm, and everything works locally when I run it via Docker, main.py, and Cloud Run locally. But as soon as I deploy, I get the error straight away.
Can someone please point me in the right direction? Where do I need to edit the port so my Cloud Function deploys successfully?
The above error is usually caused by a configuration issue with the listener port, i.e. a mismatch in the user-defined settings.
You may check and verify the following pointers to understand the probable cause of the error and rectify it (a Python sketch follows the Node.js example below):
1. Check that your service listens on all network interfaces, commonly denoted as 0.0.0.0 (see the Cloud Run troubleshooting docs)
2. Configure the PORT following Google best practices
3. Configure the PORT in your application as described in the "Deploy a Python service to Cloud Run" guide
You may first run the following simple example (Node.js, as in the Cloud Run docs) to check that the port wiring works:
const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`helloworld: listening on port ${port}`);
});
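Since the question is about Python, here is a hedged equivalent sketch using Flask (the app name and route are assumptions); Cloud Run injects PORT, and the server must bind 0.0.0.0 on that port:

import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # trivial handler so Cloud Run's startup probe gets a response
    return "ok"

if __name__ == "__main__":
    # PORT is injected by Cloud Run; default to 8080 for local runs
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

One thing worth checking in the posted main.py: the with subscriber: block runs at import time and blocks indefinitely on streaming_pull_future.result(), which would prevent the functions-framework server from ever binding $PORT.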

Azure HTTP Function works locally but not on Azure / Azure URL says website can't be found

I'm working on an Azure HTTP function. What I'm trying to achieve is:
an Azure function based on Python that, after being called via URL, connects to a Linux VPS, executes a command, and returns the response from the VPS.
It does exactly that when run on localhost via Visual Studio Code.
The exact same code is then uploaded via an Azure Pipeline, which runs without issues.
However, calling the function via the Azure URL gives a 404 error.
The function IS enabled, and the code is uploaded successfully and can be seen in the 'Code + Test' section.
import logging
import sys

import azure.functions as func
import paramiko

def execute_command_on_remote_machine(ip_addr, command):
    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(str(ip_addr), username='username', password='password')
        chan = client.get_transport().open_session()
        #chan.get_pty()
        # exec_command returns a (stdin, stdout, stderr) tuple
        stdin, stdout, stderr = client.exec_command(command, get_pty=True)
        #err_list = [line for line in stderr.read().splitlines()]
        out_list = [line for line in stdout.read().splitlines()]
        client.close()
        return out_list
    except Exception as e:
        print(str(e))
        sys.exit(1)

def main(req: func.HttpRequest) -> func.HttpResponse:
    output = execute_command_on_remote_machine("IP", "sh /home/script.sh")
    logging.info(str(output))
    return func.HttpResponse(str(output))

Using mitmproxy inside a Python script to connect to upstream proxy (with user & password)

I am trying to use mitmproxy behind a company proxy that requires a user/password login.
The setup is:
Local PC's browser -> mitmproxy (on local PC) -> company proxy -> internet.
Based on this SO thread, this is how you use mitmproxy within a Python program. This example works fine when there's no proxy.
from mitmproxy.options import Options
from mitmproxy.proxy.config import ProxyConfig
from mitmproxy.proxy.server import ProxyServer
from mitmproxy.tools.dump import DumpMaster

class Addon(object):
    def __init__(self):
        pass

    def request(self, flow):
        # examine request here
        pass

    def response(self, flow):
        # examine response here
        pass

if __name__ == "__main__":
    options = Options(listen_host='0.0.0.0', listen_port=8080, http2=True)
    m = DumpMaster(options, with_termlog=False, with_dumper=False)
    config = ProxyConfig(options)
    m.server = ProxyServer(config)
    m.addons.add(Addon())

    try:
        print('starting mitmproxy')
        m.run()
    except KeyboardInterrupt:
        m.shutdown()
Assuming the company proxy is at IP "1.2.3.4", port 3128, and requires a login USER and PASSWORD, how can I change this script to have mitmproxy use that proxy instead of going to the internet directly?
Additional info: I am not running this script via mitmdump's script parameter. The goal is to run it from Python 3.8 with a pip-installed mitmproxy.
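For what it's worth, mitmproxy has an upstream mode plus an upstream_auth option, so a hedged sketch (assuming the mitmproxy 5.x API used above and an upstream proxy that accepts HTTP Basic auth) would only change the Options construction:

options = Options(
    listen_host='0.0.0.0',
    listen_port=8080,
    http2=True,
    mode='upstream:http://1.2.3.4:3128',  # route all traffic through the company proxy
    upstream_auth='USER:PASSWORD',        # sent as a Proxy-Authorization header
)

The rest of the script stays as posted.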

Running python script via AWS lambda on EC2

I am using the paramiko package in Lambda to run a Python script on EC2 and get the output back in Lambda. I was under the impression that when I run the Python script from Lambda, the script gets executed on EC2 and returns the output to Lambda. But this is not happening. I installed pandas on my EC2 instance and ran a simple Python script with import pandas. Lambda gives me an error that the pandas module is not found. But why does Lambda need the pandas module? Shouldn't it just take the output from EC2?
Below is my lambda function
import json

import boto3
import paramiko

def lambda_handler(event, context):
    # boto3 clients
    client = boto3.client('ec2')
    s3_client = boto3.client('s3')

    # getting instance information
    describeInstance = client.describe_instances()
    hostPublicIP = ["59.53.239.242"]

    # fetching public IP address of the running instances
    # for i in describeInstance['Reservations']:
    #     for instance in i['Instances']:
    #         if instance["State"]["Name"] == "running":
    #             hostPublicIP.append(instance['PublicIpAddress'])
    print(hostPublicIP)

    # downloading pem file from S3
    s3_client.download_file('paramiko', 'ec2key.pem', '/tmp/file.pem')

    # reading pem file and creating key object
    key = paramiko.RSAKey.from_private_key_file("/tmp/file.pem")

    # an instance of the Paramiko.SSHClient
    ssh_client = paramiko.SSHClient()

    # setting policy to connect to unknown host
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    host = hostPublicIP[0]
    print("Connecting to : " + host)

    # connecting to server
    ssh_client.connect(hostname=host, username="ubuntu", pkey=key)
    print("Connected to :" + host)

    # command list
    commands = ["python3 test.py"]

    # executing list of commands within server
    for command in commands:
        print(f"Executing {command}")
        stdin, stdout, stderr = ssh_client.exec_command(command)
        print(stdout.read())
        print(stderr.read())

    return {
        'statusCode': 200,
        'body': json.dumps('Thanks!')
    }
Below is my test.py
import pandas
result = 2+2
print(result)

Kubernetes API server

So I have just started using the Kubernetes API server, and I tried this example:
from kubernetes import client, config

def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

if __name__ == '__main__':
    main()
This worked, but it returned the pods on my local minikube. I want to get the pods from the Kubernetes server here:
http://192.168.237.115:8080
How do I do that?
When I do kubectl config view, I get this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/piyush/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/piyush/.minikube/apiserver.crt
    client-key: /home/piyush/.minikube/apiserver.key
I know this is for the local cluster I set up. I want to know how to modify it to make API requests to the Kubernetes server at http://192.168.237.115:8080.
You can actually create a simple API wrapper. This way you can pass in different YAML configuration files, which I imagine may have different hosts:
import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config

class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        with open(self.configuration_yaml, 'r') as f:
            if self._configuration_yaml is None:
                self._configuration_yaml = yaml.safe_load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()

# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
Also, another neat reference is libcloud: https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py
Good luck! Hope this helps! :)
I have two solutions for you:
1. [preferred] Configure your kubectl (i.e. ~/.kube/config) file. After kubectl works with your cluster, the Python client should automatically work with load_kube_config. See here for configuring kubectl: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/
2. Configure the Python client directly. For a complete list of configuration values, look at: https://github.com/kubernetes-client/python-base/blob/8704ce39c241f3f184d01833dcbaf1d1fb14a2dc/configuration.py#L48
You may need to set some of those values for your client to connect to your cluster. For example, if you don't have any certificate or SSL enabled:
from kubernetes import client, configuration

def main():
    configuration.host = "http://192.168.237.115:8080"
    configuration.api_key_prefix['authorization'] = "Bearer"
    configuration.api_key['authorization'] = "YOUR_TOKEN"

    v1 = client.CoreV1Api()
    ...
You may need to set other configuration values such as username, api_key, etc. That's why I think the first solution would be easier.
config.load_kube_config() takes context as a parameter. If passed None (the default) then the current context will be used. Your current context is probably your minikube.
See here:
https://github.com/kubernetes-incubator/client-python/blob/436351b027df2673869ee00e0ff5589e6b3e2b7d/kubernetes/config/kube_config.py#L283
config.load_kube_config(context='some context')
If you are not familiar with Kubernetes contexts:
Kubernetes stores your configuration under ~/.kube/config (the default location). In it you will find a context definition for every cluster you may have access to. A field called current-context defines your current context.
You can issue the following commands:
kubectl config current-context to see the current context
kubectl config view to view all the configuration
Can you show me the file ~/.kube/config?
If you update the API server in it, the kubernetes Python module will automatically pick up the new API server you nominated:
- cluster:
    certificate-authority: [Update real ca.crt here]
    server: http://192.168.237.115:8080
There are other changes in ~/.kube/config as well; you'd better get the config from the remote Kubernetes server directly.
After successfully configuring the remote Kubernetes API server, you should be able to run kubectl and get the deployments, daemons, etc.
Then you should be fine to run the Python Kubernetes SDK.
