Simple way to delete existing pods from Python

I have end-to-end tests written in pytest, running on a Kubernetes cluster in Namespace foo. Now I want to add simple chaos engineering to the tests to check the resilience of my services. For this, I only need to delete specific pods within foo -- since K8s restarts the corresponding service, this simulates the service being temporarily unreachable.
What is a simple way to delete a specific pod in the current namespace with Python?
What I have tried so far:
Since I did not find a suitable example in https://github.com/kubernetes-client/python/tree/master/examples, and the ones that use pods looked quite complex, I looked into kubetest, which seems very simple and elegant.
I wanted to use the kube fixture and then do something like this:
pods = kube.get_pods()
for pod in pods:
    if can_be_temporarily_unreachable(pod):
        kube.delete(pod)
I thought calling pytest with the parameter --in-cluster would tell kubetest to use the current cluster setup and not create new K8s resources.
However, kubetest wants to create a new namespace for each test case that uses the kube fixture, which I do not want.
Is there a way to tell kubetest not to create new namespaces but do everything in the current namespace?
Though kubetest looks very simple and elegant otherwise, I am happy to use another solution, too.
A simple solution that requires little time and maintenance and does not complicate (reading) the tests would be awesome.

You can use delete_namespaced_pod (from CoreV1Api) to delete a specific pod within a namespace.
Here is an example:
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_incluster_config()  # or config.load_kube_config() when running outside the cluster

with client.ApiClient() as api_client:
    api_instance = client.CoreV1Api(api_client)
    namespace = 'kube-system'  # str | see Max Lobur's answer on how to get this
    name = 'kindnet-mpkvf'  # str | Pod name, e.g. via api_instance.list_namespaced_pod(namespace)

    try:
        api_response = api_instance.delete_namespaced_pod(name, namespace)
        print(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->delete_namespaced_pod: %s\n" % e)

A continuation of the existing answer by Tibebes. M:
How do I find out the namespace that the code itself is running in?
There are two ways to do this:
Use the Downward API to pass the pod namespace to a pod environment variable:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api (see metadata.namespace)
Use your template engine / Helm to additionally pass a namespace environment variable to the pod. Example:
env:
  - name: CURRENT_NAMESPACE
    value: "{{ .Values.namespace }}"

Related

Calling multiple Python scripts from Python with a predefined environment

Probably related to: "globals and locals in python exec()", "Python 2: How to debug code injected by the exec block", and "How to get local variables updated, when using the `exec` call?"
I am trying to develop a test framework for our desktop applications which uses click-bot-like functions.
My goal was to enable non-programmers to write small scripts which could work as test scripts. So my idea is to structure the test scripts in files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also wanted to have fixtures which define functions, classes, and variables and make them available to the different scripts without the scripts having to import that fixture explicitly. If that works, I could also use that approach for two or more directory levels.
I could get it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure


def run_test_sequence(test_dir: str):
    error_occurred = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in the test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture'
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occurred = True

    return error_occurred
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions and classes from fixture.py available to each test-script.
A test script could then just be arbitrary code that will be executed and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does
from my_module.test_sequence import run_test_sequence
if __name__ == '__main__':
    exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves weirdly. When I define a function in fixture.py like:
from my_hello_module import print_hello

def printer():
    print_hello()
I will receive a message during execution that print_hello is undefined because the variables/modules/etc. in the scope surrounding printer are lost.
Stack traces are useless. On failure it shows the stack trace, but of course it only shows my line that says exec(...) and the internals of that call, none of the code that has been loaded.
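(A minimal sketch of one mitigation, assuming the scripts are read from disk as above: compiling the source with its real filename before exec() makes tracebacks, and breakpoints set by file:line, point at the test script instead of at "<string>".)
# Sketch: give exec() a code object carrying the real filename.
source = test_file.read_text()
code_obj = compile(source, str(test_file), 'exec')
exec(code_obj, fixture_copy, fixture_copy)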
I am sure there are other drawbacks that I have not found yet, but these are the most annoying ones.
I also tried to find a solution through __import__ but I couldn't get it to inject my custom locals or globals into the imported script.
Is there a solution that I am too inexperienced to find, or another built-in Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and the file/directory names itself? I want those scripts to have as few dependencies and as little Python-y code as possible. Ideally they are just:
from my_module.test import *

click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')

if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but it is not available in python3 environments.

Creating state in a Flask web application

I'm building a web application with the Flask framework in Python. On the server I would like to preserve some state. I think the following code example makes my goal clear (and was also my initial idea):
name = ""
#app.route('/<input_name>')
def home(input_name):
global name
name = input_name
return "name set"
#app.route('/getname')
def getname():
global name
return name
However, when I deployed my website, the response to a /getname request behaved inconsistently, because there are multiple thread/process instances of the code (I could be wrong). I have some plausible solutions to overcome this problem, but I wonder if there is a cleaner one:
Solution 1: read and write the name from a database (a database seems like overkill if I only want to store one variable).
Solution 2: store the value of name in a file and set up a locking mechanism so that only one process/thread can write to the file at a time.
Goal: when client 'A' requests www.website.com/sven and after that client 'B' requests www.website.com/getname I want the response for client B to be 'sven'
Any suggestions?
Your example should not be done with global state; it will not work for the reason you mentioned: requests might land in different processes that have different global values.
You can handle storing global values by using a key-value cache, such as memcached or Redis, or a file-system based cache. Check the Flask-Caching package, and in particular the built-in cache backends: https://flask-caching.readthedocs.io/en/latest/#built-in-cache-backends
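For example, a minimal sketch with Flask-Caching and a filesystem backend (the cache type strings vary across Flask-Caching versions, so treat this as an outline rather than a drop-in config):
from flask import Flask
from flask_caching import Cache  # pip install Flask-Caching

app = Flask(__name__)
# A filesystem cache is shared by all worker processes on one host;
# with several hosts you would point this at Redis or memcached instead.
cache = Cache(app, config={'CACHE_TYPE': 'FileSystemCache',  # 'filesystem' in older versions
                           'CACHE_DIR': '/tmp/flask-cache'})

@app.route('/<input_name>')
def home(input_name):
    cache.set('name', input_name)
    return "name set"

@app.route('/getname')
def getname():
    return cache.get('name') or ""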

Using the paste "call" scheme when starting a Pyramid application

I have a Pyramid application which I can start using pserve some.ini. The ini file contains the usual paste configuration and everything works fine. In production, I use uwsgi, having a paste = config:/path/to/some.ini entry, which works fine too.
But instead of reading my configuration from a static ini file, I want to retrieve it from some external key-value store. Reading the paste documentation and source code, I figured out that there is a call scheme, which calls a Python function to retrieve the "settings".
I implemented a get_conf method and tried to start my application using pserve call:my.module:get_conf. If the module/method does not exist, I get an appropriate error, so the method seems to be used. But whatever I return from the method, I end up with this error message:
AssertionError: Protocol None unknown
I have no idea what return value of the method is expected and how to implement it. I tried to find documentation or examples, but without success. How do I have to implement this method?
While not the answer to your exact question, I think this is the answer to what you want to do. When Pyramid starts, the vars from the ini file just get parsed into the settings object that is set on your registry, and you access them through the registry from the rest of your app. So if you want to get settings from somewhere else (say env vars, or some other third-party source), all you need to do is build yourself a factory component for getting them, and use that in the server start-up method that typically lives in your base __init__.py file.
You don't need to get anything from the ini file if that's not convenient, and if you don't, it doesn't matter how you deploy it. The rest of your app doesn't need to know where the settings came from.
Here's an example of how I do this for getting settings from env vars, because I have a distributed app with three separate processes and I don't want to be mucking about with three sets of ini files (instead I have one file of env vars that doesn't go in git and gets sourced before anything gets turned on):
import os

from pyramid.config import Configurator


# The method that runs on server startup, no matter how you deploy.
def main(global_config, **settings):
    """This function returns a Pyramid WSGI application."""
    # settings has your values from the ini file;
    # go ahead and stick things in it from any source you'd like
    settings['db_url'] = os.environ.get('DB_URL')

    config = Configurator(
        settings=settings,
        # your other configurator args
    )

    # You can also just stick things directly on the registry
    # for other components to use, as everyone has access to
    # request.registry.
    # Here we look in an env var and fall back to the ini file.
    amqp_url = os.environ.get('AMQP_URL', settings['amqp.url'])
    config.registry.bus = MessageClient(amqp_url=amqp_url)

    # rest of your server start-up code below

Python Fabric as a Library, execute and environments?

I've tried really hard to find this but no luck. I'm sure it's possible; I just can't find an example or figure out the syntax for myself.
I want to use fabric as a library
I want 2 sets of hosts
I want to reuse the same functions for these different sets of hosts (and so cannot use the @roles decorator on said functions)
So I think I need:
from fabric.api import execute, run, env

NODES = ['192.168.56.141', '192.168.56.152']

env.roledefs = {'head': ['localhost'], 'nodes': NODES}
env.user('r4space')

def testfunc():
    run('touch ./fred.txt')

execute(testfunc(), <somehow specify 'head' or 'nodes' as my hosts list and user>)
I've tried a whole range of syntax (hosts=NODES, -H NODES, user='r4space', and much more), but I either get a syntax error or the prompt host_string = raw_input("No hosts found. Please specify (single) ...").
If it makes a difference, ultimately my function defs would be in a separate file that I import into main where hosts etc are defined and execute is called.
Thanks for any help!
You have some errors in your code.
env.user('r4space') is wrong. Should be env.user = 'r4space'
When you use execute, the first parameter should be a callable. You have used the return value of the function testfunc.
I think if you fix the last line, it will work:
execute(testfunc, hosts=NODES)
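For reference, a minimal sketch of the corrected calls, assuming Fabric 1.x and the same host sets as above:
from fabric.api import execute, run, env

env.user = 'r4space'
env.roledefs = {'head': ['localhost'], 'nodes': ['192.168.56.141', '192.168.56.152']}

def testfunc():
    run('touch ./fred.txt')

# The same undecorated task can be run against either host set:
execute(testfunc, hosts=env.roledefs['nodes'])  # explicit host list
execute(testfunc, roles=['head'])               # or by role name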

Using Python Boto with AWS Support API

I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has different connect methods for each AWS service? For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes 1 argument - an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes 1 argument - an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in its __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
The last item of interest is that, with my keys defined in a .boto file in my home directory, and running the Python interpreter from the command line, I can make a direct import and call SupportConnection() (no arguments) and it works. It is clearly picking up my keys from the .boto file and consuming them, but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (the bulk of the material that exists via Google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support or boto.ec2 or boto.s3 or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
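Once connected, the SupportConnection methods in boto.support.layer1 cover the use cases mentioned above; a hedged sketch (method names taken from boto 2's layer1 module, so double-check them against your installed version):
import boto.support

conn = boto.support.connect_to_region('us-east-1')

# Open support cases (arguments are optional filters).
cases = conn.describe_cases()

# Trusted Advisor checks require a language code.
checks = conn.describe_trusted_advisor_checks('en')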
