I have just started using the Kubernetes API server, and I tried this example:
from kubernetes import client, config

def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

if __name__ == '__main__':
    main()
This worked, but it returned the pods on my local minikube. I want to get the pods from the Kubernetes server here:
http://192.168.237.115:8080
How do I do that?
When I run kubectl config view, I get this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/piyush/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/piyush/.minikube/apiserver.crt
    client-key: /home/piyush/.minikube/apiserver.key
I know this is for the local cluster I set up. I want to know how to modify this to make API requests to the Kubernetes server at http://192.168.237.115:8080.
You can actually create a simple API wrapper. This way you can pass in different YAML configuration files, which I imagine may point at different hosts.
import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config

class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        if self._configuration_yaml is None:
            with open(self.configuration_yaml, 'r') as f:
                self._configuration_yaml = yaml.safe_load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()

# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
There is also another neat reference in libcloud: https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py
Good luck! Hope this helps! :)
I have two solutions for you:
[preferred] Configure your kubectl (i.e. ~/.kube/config) file. Once kubectl works with your cluster, the Python client should automatically work with load_kube_config. See here for configuring kubectl: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/
You can configure python client directly. For a complete list of configurations, look at: https://github.com/kubernetes-client/python-base/blob/8704ce39c241f3f184d01833dcbaf1d1fb14a2dc/configuration.py#L48
You may need to set some of those configuration for your client to connect to your cluster. For example, if you don't have any certificate or SSL enabled:
from kubernetes import client, configuration

def main():
    configuration.host = "http://192.168.237.115:8080"
    configuration.api_key_prefix['authorization'] = "Bearer"
    configuration.api_key['authorization'] = "YOUR_TOKEN"
    v1 = client.CoreV1Api()
    ...
You may need to set other configurations such as username, api_key, etc. That's why I think it would be easier if you follow the first solution.
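With newer versions of the Python client, the second approach can also be expressed by building a Configuration object explicitly and passing it to an ApiClient. A minimal sketch, assuming an unauthenticated HTTP endpoint (the host is the one from the question; the token lines are only needed if your API server expects a bearer token):

from kubernetes import client

def main():
    cfg = client.Configuration()
    cfg.host = "http://192.168.237.115:8080"  # remote API server from the question
    cfg.verify_ssl = False                    # plain HTTP, so no TLS verification
    # If the server expects a bearer token, uncomment the next two lines:
    # cfg.api_key_prefix['authorization'] = "Bearer"
    # cfg.api_key['authorization'] = "YOUR_TOKEN"

    api_client = client.ApiClient(configuration=cfg)
    v1 = client.CoreV1Api(api_client)
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name)

if __name__ == '__main__':
    main()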
config.load_kube_config() takes context as a parameter. If passed None (the default) then the current context will be used. Your current context is probably your minikube.
See here:
https://github.com/kubernetes-incubator/client-python/blob/436351b027df2673869ee00e0ff5589e6b3e2b7d/kubernetes/config/kube_config.py#L283
config.load_kube_config(context='some context')
If you are not familiar with Kubernetes contexts: Kubernetes stores your configuration under ~/.kube/config (the default location). In it you will find a context definition for every cluster you may have access to. A field called current-context defines your current context.
You can issue the following commands:
kubectl config current-context to see the current context
kubectl config view to view all the configuration
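If you want to do the same from Python, the client exposes helpers for listing contexts and loading a specific one. A minimal sketch (the context name 'my-remote-cluster' is only a placeholder for whichever context points at your remote server):

from kubernetes import client, config

contexts, active_context = config.list_kube_config_contexts()
print("Available contexts:", [c['name'] for c in contexts])
print("Current context:", active_context['name'])

# Load a specific context instead of the current one
config.load_kube_config(context='my-remote-cluster')
v1 = client.CoreV1Api()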
Can you show me the file ~/.kube/config?
If you update the API server in it, the Python kubernetes module will automatically pick up the new API server you nominated.
- cluster:
    certificate-authority: [Update real ca.crt here]
    server: http://192.168.237.115:8080
There are other changes needed in ~/.kube/config as well, so you are better off getting the config from the remote Kubernetes server directly.
After successfully configuring access to the remote Kubernetes API server, you should be able to run kubectl and list the deployments, daemons, etc.
Then you should be fine to run the Python Kubernetes SDK as well.
I am trying to deploy a Cloud Function (gen2) in GCP, but I keep running into the same issue and get this error with each deploy when Cloud Functions sets up Cloud Run:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
MAIN.PY
from google.cloud import pubsub_v1
from google.cloud import firestore
import requests
import json
from firebase_admin import firestore
import google.auth

credentials, project = google.auth.default()

# API INFO
Base_url = 'https://xxxxxxxx.net/v1/feeds/sportsbookv2'
Sport_id = 'xxxxxxxx'
AppID = 'xxxxxxxx'
AppKey = 'xxxxxxxx'
Country = 'en_AU'
Site = 'www.xxxxxxxx.com'

project_id = "xxxxxxxx"
subscription_id = "xxxxxxxx-basketball-nba-events"
timeout = 5.0

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_id)

db = firestore.Client(project='xxxxxxxx')

def winodds(message: pubsub_v1.subscriber.message.Message) -> None:
    events = json.loads(message.data)
    event_ids = events['event_ids']
    url = f"{Base_url}/betoffer/event/{','.join(map(str, event_ids))}.json?app_id={AppID}&app_key={AppKey}&local={Country}&site={Site}"
    print(url)
    windata = requests.get(url).text
    windata = json.loads(windata)
    for odds_data in windata['betOffers']:
        if odds_data['betOfferType']['name'] == 'Head to Head' and 'MAIN' in odds_data['tags']:
            event_id = odds_data['eventId']
            home_team = odds_data['outcomes'][0]['participant']
            home_team_win_odds = odds_data['outcomes'][0]['odds']
            away_team = odds_data['outcomes'][1]['participant']
            away_team_win_odds = odds_data['outcomes'][1]['odds']
            print(f'{event_id} {home_team} {home_team_win_odds} {away_team} {away_team_win_odds}')

            # WRITE TO FIRESTORE
            doc_ref = db.collection(u'xxxxxxxx').document(u'basketball_nba').collection(u'win_odds').document(f'{event_id}')
            doc_ref.set({
                u'event_id': event_id,
                u'home_team': home_team,
                u'home_team_win_odds': home_team_win_odds,
                u'away_team': away_team,
                u'away_team_win_odds': away_team_win_odds,
                u'timestamp': firestore.SERVER_TIMESTAMP,
            })

streaming_pull_future = subscriber.subscribe(subscription_path, callback=winodds)
print(f"Listening for messages on {subscription_path}..\n")

# Wrap subscriber in a 'with' block to automatically call close() when done.
with subscriber:
    try:
        # When `timeout` is not set, result() will block indefinitely,
        # unless an exception is encountered first.
        streaming_pull_future.result()
    except TimeoutError:
        streaming_pull_future.cancel()  # Trigger the shutdown.
        streaming_pull_future.result()  # Block until the shutdown is complete.

if __name__ == "__main__":
    winodds()
DOCKER FILE
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.10
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
ENV GOOGLE_APPLICATION_CREDENTIALS /app/xxxxx-key.json
ENV PORT 8080
# Install production dependencies.
RUN pip install functions-framework
RUN pip install -r requirements.txt
# Run the web service on container startup.
CMD exec functions-framework --target=winodds --debug --port=$PORT
I am using PyCharm, and it all seems to work when I run it locally via Docker, main.py, and Cloud Run. But as soon as I deploy, I get an error straight away.
Can someone please point me in the right direction? Where do I need to edit the port number so my cloud function will deploy successfully?
The above error is usually caused by a configuration issue with the listener port, such as a mismatch in the user-defined settings.
You can check and verify the following pointers to understand the probable cause of the error and rectify them to eliminate the issue:
Check that you configured your service to listen on all network interfaces, commonly denoted as 0.0.0.0 (see "Troubleshooting issues" in the Cloud Run documentation).
Check that you configured the PORT following Google best practices.
Check that you configured the PORT in your application as per the "Deploy a Python service to Cloud Run" guide.
You can try the following simple example first to check whether these are working properly (a Python equivalent follows below):
const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`helloworld: listening on port ${port}`);
});
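Since the question is in Python, here is the same idea as a Python sketch. It assumes a plain Flask app rather than functions-framework (which binds to --port on its own); the point is only that the container must listen on 0.0.0.0 and on the PORT value that Cloud Run injects:

import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Cloud Run sets PORT; fall back to 8080 for local runs.
    port = int(os.environ.get("PORT", 8080))
    app.run(host="0.0.0.0", port=port)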
I have code like this
import sys

Unix = 'linux'
Mac = 'darwin'

if sys.platform == Unix:
    # do this
    ...
elif sys.platform == Mac:
    # do this
    ...
I have this check for sys.platform because AWS is Unix-based, so if sys.platform is 'darwin' (Mac), then I am running locally.
I'm running into trouble when I try to dockerize this application, because the dockerized build is Linux-based, so this if-else statement will run the first branch even though I'm building the Docker container locally.
Is it possible to set the sys.platform in a dockerfile?
edit:
Problem statement:
I am trying to dockerize an AWS Lambda function. To do so, I need to test the Lambda function locally.
My Lambda function composition looks like this:
app
  lambda_function1
    database.py
    helper.py
    functions
      lambda_function1.py
The main purpose of this Lambda function is to read data from the production database and then predict some value based on the data.
database.py
import helper
...

class DB:
    def __init__(self):
        self.secrets = helper.get_secrets()
        self.db_name = self.secrets.get('DB', '')
        self.db_host = self.secrets.get('Host', '')
        self.db_password = self.secrets.get('Password', '')
    ...
helper.py
import sys
import boto3
....

def get_secrets():
    secrets = {}
    if sys.platform == constants.MAC_PLATFORM:
        secrets = local_secrets()
        return secrets

    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager',
                            region_name='us-west-2')
    secrets = get_aws_secrets()
As you can see, if sys.platform is 'darwin', then the secrets will be local secrets. If sys.platform is 'linux', then the secrets will be pulled from AWS.
For some reason, I am unable to connect to the database with the AWS secrets in my local Docker build, due to a TCP/IP error. I think this is caused by some configuration that AWS has and my local environment doesn't, so I would like to start by working with the local database in Docker and use the get_local_secrets method to obtain secrets.
Any ideas?
I'd use the environment variable suggestion and run locally by setting some values at runtime:
import os

if os.environ.get('LOCAL_TEST', 'false').lower().strip() == 'true':
    secrets = local_secrets()
else:
    # use aws secrets
    secrets = get_aws_secrets()
And run your container like:
docker run -e LOCAL_TEST=true your_image
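Applied to the helper.py from the question, the check might look like the sketch below. local_secrets and get_aws_secrets are the question's own helpers, and LOCAL_TEST is the variable passed in via docker run above:

import os
import boto3

def get_secrets():
    # Decide based on an explicit flag instead of sys.platform,
    # so a local Linux container can still use local secrets.
    if os.environ.get('LOCAL_TEST', 'false').lower().strip() == 'true':
        return local_secrets()

    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager',
                            region_name='us-west-2')
    return get_aws_secrets()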
I'm looking for a way to find a pod by name and run a REST call against it using Python. I thought of using port forwarding and the Kubernetes client.
Can someone share a code sample or any other way to do it?
Here is what I started to do:
from kubernetes import client, config

config.load_kube_config(config_file="my file")
client = client.CoreV1Api()
pods = client.list_namespaced_pod(namespace="my namespace")
# loop pods to find my pod
Next I thought of using:
stream(client.connect_get_namespaced_pod_portforward_with_http_info ...
In kubectl command line tool I do the following:
1. List pods
2. Open port forward
3. Use curl to perform the REST call
I want to do the same in python
List all the pods:
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Then, in the for loop above, you can check the pod name and return the pod when it matches.
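For example, a small helper along those lines (the namespace and the name fragment are placeholders):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def find_pod(name_part, namespace="default"):
    # Return the first pod whose name contains name_part, or None.
    for pod in v1.list_namespaced_pod(namespace=namespace).items:
        if name_part in pod.metadata.name:
            return pod
    return None

pod = find_pod("my-app")
if pod is not None:
    print(pod.metadata.name, pod.status.pod_ip)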
Calling your pod through the Kubernetes API is very un-Kubernetes (and un-container) like: you are coupling a microservice with a deployment technology. You should instead configure a Service and call it using a standard Python REST call.
If you are calling from inside the cluster, use the service name as the URL domain. If you are calling from outside the cluster, use the cluster IP (e.g. with Docker, http://localhost:8000/), with the same code and a different argument, and make sure you configure the Service to expose the port correctly.
Like so:
#!/usr/bin/env python
import sys
import json
import requests

def call_service(outside_call=True, protocol='http', domain='localhost', service_name='whisperer', service_port='8002', index='hello', payload=None, headers=None):
    if outside_call:
        url = f'{protocol}://{domain}:{service_port}/{index}'
    else:
        url = f'{protocol}://{service_name}:{service_port}/{index}'

    try:
        g = requests.get(url=url)
        print(f'a is: {g}')
        r = requests.post(f'{url}', data=json.dumps(payload), headers=headers)
        print(f'The text returned from the server: {r.text}')
        return r.text
        # return json.loads(r.content)
    except Exception as e:
        raise Exception(f"Error occurred while trying to call service: {e}")

if __name__ == "__main__":
    args = sys.argv
    l = len(args)
    if l > 1:
        outside_call = True if args[1] == 'true' else False
    else:
        outside_call = False

    a = call_service(payload={'key': 'value'}, outside_call=outside_call)
    print(f'a is: {a}')
Here is my script:
from fabric2 import Connection

c = Connection('127.0.0.1')
with c.cd('/home/bussiere/'):
    c.run('ls -l')
But I have this error:
paramiko.ssh_exception.AuthenticationException: Authentication failed.
So how do I run a command on localhost?
In Fabric 2, the Connection object has a local() method.
Have a look at this object's documentation here.
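A minimal sketch of that approach, using the path from the question (local() runs the command on the machine executing the script, so no SSH authentication is involved):

from fabric2 import Connection

c = Connection('127.0.0.1')
# local() executes on the machine running Fabric itself, bypassing SSH.
result = c.local('ls -l /home/bussiere/')
print(result.stdout)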
As of July 2020, with fabric2, if you don't pass a hosts argument to your task decorator, by default you are on the local machine.
For example, the following will run on your local machine (localhost):
Example 1: only on local
# python3
# fabfile.py
from fabric import task, Connection

c = Connection('remote_user@remote_server.com')

@task
def DetailList(c):
    c.run('ls -l')  # will run on the local machine because the @task decorator does not contain the hosts parameter
You then would run this on your machine with
fab DetailList
If you want to mix code that should run on the remote server with code that should run locally, you should pass the hosts to the @task decorator as a parameter.
Example 2: on local and on remote (but different functions)
# python3
# fabfile.py

# imports
from fabric import task, Connection

# variables
list_of_hosts = ['user@yourserver.com']  # you should already have configured the ssh access
c = Connection(list_of_hosts[0])
working_dir = '/var/www/yourproject'

# will run on remote
@task(hosts=list_of_hosts)
def Update(c):
    c.run('sudo apt-get update')  # will run on the remote server because hosts are passed to the task decorator
    c.run(f'cd {working_dir} && git pull')  # will run on the remote server because hosts are passed to the task decorator
    c.run('sudo service apache2 restart')  # will run on the remote server because hosts are passed to the task decorator

# will run on local because you do not specify a host
@task
def DetailsList(c):
    c.run('ls -l')  # will run on the local machine because hosts are NOT passed to the task decorator
As mentioned by Ismaïl, there is also a 'local' method that can be used when the hosts parameter is passed: the 'local' method will run on localhost even though you specified hosts in the task decorator. Be careful though: you cannot use the 'local' method if you didn't specify any hosts parameter; use run instead, as shown in examples 1 and 2.
Example 3: use both remote and local servers in the same task. Note that we are not decorating the functions that are called inside the UpdateAndRestart function.
# python3
# fabfile.py

# imports
from fabric import task, Connection

# variables
list_of_hosts = ['www.yourserver.com']  # you should already have configured the ssh access
c = Connection(list_of_hosts[0])
working_dir = '/var/www/yourproject'

def UpdateServer(c):
    c.run('sudo apt-get update')  # will run on the remote server because hosts are passed to the task decorator
    c.local('echo the remote server is now updated')  # will run on the local machine because you used the local method while hosts are passed to the decorator

def PullFromGit(c):
    c.run(f'cd {working_dir} && git pull')  # will run on the remote server because hosts are passed to the task decorator
    c.local('echo Git repo is now pulled')  # will run on the local machine because you used the local method while hosts are passed to the decorator

def RestartServer(c):
    c.run('sudo service apache2 restart')  # will run on the remote server because hosts are passed to the task decorator
    c.local('echo Apache2 is now restarted')  # will run on the local machine because you used the local method while hosts are passed to the decorator

@task(hosts=list_of_hosts)
def UpdateAndRestart(c):
    UpdateServer(c)
    PullFromGit(c)
    RestartServer(c)
    c.local('echo you have updated, pulled and restarted Apache2')  # will run on the local machine because you used the local method while hosts are passed to the decorator
You will be able to run the entire stack with:
fab UpdateAndRestart
I want to make a task use a different set of hosts (role) depending on which network I'm currently in. If I'm in the same network as my servers, I don't need to go through the gateway.
Here's a snippet from my fabfile.py:
env.use_ssh_config = True
env.roledefs = {
    'rack_machines': ['rack4', 'rack5', 'rack6', 'rack7'],
    'external_rack_machines': ['erack4', 'erack5', 'erack6', 'erack7']
}
@roles('rack_machines')
def host_type():
    run('uname -s')
So, for my task host_type(), I'd like its role to be rack_machines if I'm in the same network as rack4, rack5, etc. Otherwise, I'd like its role to be external_rack_machines, therefore going through the gateway to access those same machines.
Maybe there's a way to do this with ssh config alone. Here's a snippet of my ssh_config file as well:
Host erack4
    HostName company-gw.foo.bar.com
    Port 2261
    User my_user

Host rack4
    HostName 10.43.21.61
    Port 22
    User my_user
Role definitions are taken into account after the module has been imported, so you can place some code in your fabfile that executes on import, detects the network, and sets the appropriate roledefs.
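For instance, a sketch of that first approach: probe whether one of the rack machines is directly reachable and pick the role definition accordingly. The probe host and port are taken from the ssh_config in the question, and the TCP reachability test is only one possible way to define "same network":

import socket

from fabric.api import env

def _rack_reachable(host='10.43.21.61', port=22, timeout=1):
    # True if we can open a TCP connection to the rack directly.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except socket.error:
        return False

# Executed at import time, before any task runs.
if _rack_reachable():
    env.roledefs = {'rack_machines': ['rack4', 'rack5', 'rack6', 'rack7']}
else:
    env.roledefs = {'rack_machines': ['erack4', 'erack5', 'erack6', 'erack7']}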
A second way to achieve the goal is to use a "flag task": a task which does nothing but set the appropriate roledefs. I.e.:
hosts = {
    "rack": ["rack1", "rack2"],
    "external_rack": ["external_rack1", "external_rack2"]
}

env.roledefs = {"rack_machines": hosts["rack"]}

@task
def set_hosts(hostset="rack"):
    if hostset in hosts:
        env.roledefs["rack_machines"] = hosts[hostset]
    else:
        print("Invalid hostset")

@roles("rack_machines")
def business():
    pass
And invoke it this way: fab set_hosts:external_rack business