Change remote user in custom module - python

I'm developing a custom network module for Ansible 2.5.2 for MikroTik RouterOS.
I want to change the remote username under the hood, e.g. if in my inventory I have ansible_user: ansible, I would like to change it to ansible_user: ansible+cet.
I have seen modules that do this in action plugins, like here: CFSworks/ansible-routeros.
However, it seems that in Ansible 2.5 with the network_cli connection type this no longer works. When I add the following sample action plugin to the plugins/action directory, nothing happens – I never see 'routeros action plugin is called' in the debug output:
from ansible.plugins.action.normal import ActionModule as _ActionModule
from ansible.utils.display import Display

display = Display()

class ActionModule(_ActionModule):
    def run(self, tmp=None, task_vars=None):
        display.debug('routeros action plugin is called')
        result = super(ActionModule, self).run(task_vars=task_vars)
        return result
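(For reference, the pattern those action plugins use is roughly the following. This is only a sketch of the idea using the stock _play_context attribute, not code taken from that repository:)

class ActionModule(_ActionModule):
    def run(self, tmp=None, task_vars=None):
        # Sketch: append '+cet' to the connection user before the module runs
        user = self._play_context.remote_user
        if user and not user.endswith('+cet'):
            self._play_context.remote_user = user + '+cet'
        return super(ActionModule, self).run(task_vars=task_vars)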
This is my playbook:
- hosts: routeros
  gather_facts: no
  connection: network_cli
  tasks:
    - ...
And this is my inventory:
routeros:
  hosts:
    example:
      ansible_host: 192.168.88.1
      ansible_user: ansible
      # ansible_user: ansible+cet  <-- I want to add '+cet' on the fly
      ansible_ssh_pass: ansible
      ansible_network_os: routeros
What is the proper way to replace the connection user in Ansible 2.5 with network_cli connection type?

Related

Is there a way to test an Airflow connection from the CLI or by a custom DAG

We are using Airflow 2.1 and have multiple connections (Linux and non-Linux hosts) defined in Airflow.
We created a DAG by exporting all the values from the connection metadata table and running a dummy bash command:
import airflow
from airflow.models import Connection

session = airflow.settings.Session()
t1 = session.query(Connection)
But because of the different operating systems we are facing some errors. Is there a generic way, through the CLI or otherwise, to create a DAG that tests all the connections and notifies us if one fails?
Airflow has the ability to test a connection, assuming the hook implements the test_connection function (see docs). You can test the connection from the UI in Admin -> Connections -> choose the specific connection:
Or by using Python code:
hook = YourHook(conn_id='your_conn_id')
status, msg = hook.test_connection()
assert status is True
assert msg == 'Connection successfully tested'
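To cover the "test all connections and get notified" part, here is a rough sketch (assuming your Airflow version's hooks implement test_connection; a failure raises an exception so the usual task-failure alerting fires):

from airflow import settings
from airflow.models import Connection

def test_all_connections():
    session = settings.Session()
    failures = []
    for conn in session.query(Connection).all():
        try:
            # test_connection() returns (bool, message); not every hook implements it
            status, msg = conn.get_hook().test_connection()
        except Exception as exc:
            status, msg = False, str(exc)
        if not status:
            failures.append((conn.conn_id, msg))
    if failures:
        # Raising fails the task, so on-failure emails/callbacks are triggered
        raise RuntimeError("Connection checks failed: %s" % failures)

You could wrap this in a PythonOperator inside a DAG so the normal on-failure notification mechanisms apply.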

How to wait for a shell script reboot in Fabric 2

I'm using Fabric 2 and trying to run a shell script on a number of hosts sequentially. The script configures a few settings and reboots the host. When I run my task, however, it ends after the script has run on the first host (I'm guessing because the SSH connection terminates due to the reboot). I tried looking into setting warn_only to True, but I don't see where to set this value in Fabric 2.
Adding:
with settings(warn_only=True):
throws a "NameError: global name 'settings' is not defined" error.
Is there a correct format to warn_only? If not possible yet in Fabric 2, is there a way to continue running my task regardless of this reboot?
My script:
from fabric import *
import time
import csv

@task
def test(ctx):
    hosts = ['1.1.1.1', '2.2.2.2']
    for host in hosts:
        c = Connection(host=host, user="user", connect_kwargs={"password": "password"})
        c.run("./shell_script.sh")
        configured = False
        while not configured:
            result = c.run("cat /etc/hostname")
            if result.stdout.strip() != "default":
                configured = True
            time.sleep(10)
So a workaround has been to run the task with the -w flag, which enables warn_only and gives me the desired functionality. It would be preferable to be able to set this property in the code, though.
Looks like you can use the config argument to the Connection class instead of the with settings() construction, and warn_only has been renamed to warn:
with Connection(host=host, user="user", connect_kwargs={"password": "password"}) as c:
    c.run("./shell_script.sh", warn=True)
More generally, you can get upgrade documentation at https://www.fabfile.org/upgrading.html#run
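If you also want the task to wait for the host to come back after the reboot, one possible approach (a rough sketch, not tested against your hosts; the "default" hostname check is taken from the question's script) is to run the script with warn=True, close the connection, and poll until SSH answers again:

import time
from fabric import Connection

def reboot_and_wait(host, user, password, timeout=300):
    c = Connection(host=host, user=user, connect_kwargs={"password": password})
    c.run("./shell_script.sh", warn=True)  # warn=True: don't raise when the reboot kills the session
    c.close()
    deadline = time.time() + timeout
    while time.time() < deadline:
        time.sleep(10)
        try:
            c = Connection(host=host, user=user, connect_kwargs={"password": password})
            if c.run("cat /etc/hostname", hide=True).stdout.strip() != "default":
                return c  # host is back up and configured
        except Exception:
            pass  # SSH not reachable yet, keep polling
    raise TimeoutError("host did not come back within {} seconds".format(timeout))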

How to launch command on localhost with fabric2?

Here is my script:
from fabric2 import Connection
c = Connection('127.0.0.1')
with c.cd('/home/bussiere/'):
    c.run('ls -l')
But I have this error:
paramiko.ssh_exception.AuthenticationException: Authentication failed.
So how do I run a command on localhost?
In Fabric 2, the Connection object has a local() method.
Have a look at this object's documentation here.
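For example, a minimal sketch (the path is just an illustration):

from fabric2 import Connection

c = Connection('127.0.0.1')
c.local('ls -l /home/bussiere/')  # runs on the local machine; no SSH authentication involved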
As of July 2020, with fabric2, if you don't pass a hosts argument to your task decorator, you are on the local machine by default.
For example, the following will run on your local machine (localhost):
Example 1: local only
# python3
# fabfile.py
from fabric import task, Connection

c = Connection('remote_user@remote_server.com')

@task
def DetailList(c):
    c.run('ls -l')  # will run locally because the @task decorator does not contain the hosts parameter
You then would run this on your machine with
fab DetailList
If you want to mix code that should run on the remote server with code that should run locally, you should pass the hosts to the @task decorator as a parameter.
Example 2: on local and on remote (but in different functions)
# python3
# fabfile.py
# imports
from fabric import task, Connection

# variables
list_of_hosts = ['user@yourserver.com']  # you should already have configured SSH access
c = Connection(list_of_hosts[0])
working_dir = '/var/www/yourproject'

# will run on remote
@task(hosts=list_of_hosts)
def Update(c):
    c.run('sudo apt-get update')            # runs on the remote server because hosts are passed to the task decorator
    c.run(f'cd {working_dir} && git pull')  # runs on the remote server because hosts are passed to the task decorator
    c.run('sudo service apache2 restart')   # runs on the remote server because hosts are passed to the task decorator

# will run on local because you do not specify a host
@task
def DetailsList(c):
    c.run('ls -l')  # runs locally because hosts are NOT passed to the task decorator
As mentioned by Ismaïl, there is also a local method that can be used when the hosts parameter is passed: local will run on the localhost even though you have specified the hosts parameter in the task decorator. Be careful though: you cannot use the local method if you didn't specify any hosts parameter; use run instead, as shown in examples 1 and 2.
Example 3: use both the remote and the local server within the same run; note that we are not decorating the functions that are called from the UpdateAndRestart task.
# python3
# fabfile.py
# imports
from fabric import task, Connection

# variables
list_of_hosts = ['www.yourserver.com']  # you should already have configured SSH access
c = Connection(list_of_hosts[0])
working_dir = '/var/www/yourproject'

def UpdateServer(c):
    c.run('sudo apt-get update')                      # runs on the remote server because hosts are passed to the task decorator
    c.local('echo the remote server is now updated')  # runs locally because the local method is used

def PullFromGit(c):
    c.run(f'cd {working_dir} && git pull')  # runs on the remote server
    c.local('echo Git repo is now pulled')  # runs locally

def RestartServer(c):
    c.run('sudo service apache2 restart')     # runs on the remote server
    c.local('echo Apache2 is now restarted')  # runs locally

@task(hosts=list_of_hosts)
def UpdateAndRestart(c):
    UpdateServer(c)
    PullFromGit(c)
    RestartServer(c)
    c.local('echo you have updated, pulled and restarted Apache2')  # runs locally
You will be able to run the entire stack with :
fab UpdateAndRestart

Kubernetes API server

So I have just started using the Kubernetes API server and I tried this example:
from kubernetes import client, config

def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

if __name__ == '__main__':
    main()
This worked, but it returned the pods on my local minikube. I want to get the pods from the Kubernetes server here:
http://192.168.237.115:8080
How do I do that?
When I do kubectl config view, I get this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/piyush/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/piyush/.minikube/apiserver.crt
    client-key: /home/piyush/.minikube/apiserver.key
I know this is for the local cluster I set up. I want to know how to modify this to make API requests to the Kubernetes server at http://192.168.237.115:8080
You can actually create a simple API wrapper. This way you can pass in different YAML configuration files, which I imagine may have different hosts:
import os

import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config

class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        if self._configuration_yaml is None:
            with open(os.path.expanduser(self.configuration_yaml), 'r') as f:
                self._configuration_yaml = yaml.safe_load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()

# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
There is also another neat reference in libcloud: https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py
Good luck! Hope this helps! :)
I have two solutions for you:
[preferred] Configure your kubectl file (i.e. ~/.kube/config). Once kubectl works with your cluster, the Python client should automatically work with load_kube_config. See here for configuring kubectl: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/
You can configure the Python client directly. For a complete list of configuration options, look at: https://github.com/kubernetes-client/python-base/blob/8704ce39c241f3f184d01833dcbaf1d1fb14a2dc/configuration.py#L48
You may need to set some of those configuration options for your client to connect to your cluster. For example, if you don't have any certificate or SSL enabled:
from kubernetes import client, configuration

def main():
    configuration.host = "http://192.168.237.115:8080"
    configuration.api_key_prefix['authorization'] = "Bearer"
    configuration.api_key['authorization'] = "YOUR_TOKEN"

    v1 = client.CoreV1Api()
    ...
You may need to set other configuration values such as username, api_key, etc. That's why I think it would be easier if you follow the first solution.
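A minimal sketch of the first solution, assuming your kubeconfig's current-context already points at the remote cluster:

from kubernetes import client, config

# Reads ~/.kube/config by default; pass config_file=... to use another kubeconfig
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.status.pod_ip, pod.metadata.namespace, pod.metadata.name)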
config.load_kube_config() takes context as a parameter. If passed None (the default) then the current context will be used. Your current context is probably your minikube.
See here:
https://github.com/kubernetes-incubator/client-python/blob/436351b027df2673869ee00e0ff5589e6b3e2b7d/kubernetes/config/kube_config.py#L283
config.load_kube_config(context='some context')
If you are not familiar with Kubernetes contexts:
Kubernetes stores your configuration under ~/.kube/config (the default location). In it you will find a context definition for every cluster you may have access to. A field called current-context defines your current context.
You can issue the following commands:
kubectl config current-context to see the current context
kubectl config view to view all the configuration
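The same information is available from the Python client; for example (the context name below is only a placeholder):

from kubernetes import client, config

# List all contexts from ~/.kube/config and show the active one
contexts, active = config.list_kube_config_contexts()
print([c['name'] for c in contexts], 'active:', active['name'])

# Load a specific context instead of the current one
config.load_kube_config(context='my-remote-cluster')  # placeholder context name
v1 = client.CoreV1Api()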
Can you show me the file ~/.kube/config?
If you update the API server in it, the Python kubernetes module will automatically pick up the new API server you nominated.
- cluster:
    certificate-authority: [Update real ca.crt here]
    server: http://192.168.237.115:8080
There are other changes needed in ~/.kube/config as well; you'd be better off getting the config from the remote Kubernetes server directly.
After successfully configuring it for the remote Kubernetes API server, you should be able to run kubectl and get the deployments, daemons, etc.
Then you should also be fine running the Python kubernetes SDK.

Python/Fabric misunderstanding [duplicate]

This question already has answers here:
How can I properly set the `env.hosts` in a function in my Python Fabric `fabfile.py`?
(5 answers)
Closed 9 years ago.
I'm cutting my teeth on Python as I work with Fabric. Looks like I have a basic misunderstanding of how Python and/or Fabric works. Take a look at my two scripts:
AppDeploy.py
from fabric.api import *

class AppDeploy:

    # Environment configuration, all in a dictionary
    environments = {
        'dev' : {
            'hosts' : ['localhost'],
        },
    }

    # Fabric environment
    env = None

    # Take the fabric environment as a constructor argument
    def __init__(self, env):
        self.env = env

    # Configure the fabric environment
    def configure_env(self, environment):
        self.env.hosts.extend(self.environments[environment]['hosts'])
fabfile.py
from fabric.api import *
from AppDeploy import AppDeploy

# Instantiate the backend class with
# all the real configuration and logic
deployer = AppDeploy(env)

# Wrapper functions to select an environment
@task
def env_dev():
    deployer.configure_env('dev')

@task
def hello():
    run('echo hello')

@task
def dev_hello():
    deployer.configure_env('dev')
    run('echo hello')
Chaining the first two tasks works:
$ fab env_dev hello
[localhost] Executing task 'hello'
[localhost] run: echo hello
[localhost] out: hello
Done.
Disconnecting from localhost... done.
However, when running the last task, which aims to configure the environment and do something in a single task, it appears Fabric does not have the environment configured:
$ fab dev_hello
No hosts found. Please specify (single) host string for connection:
I'm pretty lost though, because if I tweak that task like so:
@task
def dev_hello():
    deployer.configure_env('dev')
    print(env.hosts)
    run('echo hello')
it looks like env.hosts is set, but still, fabric is acting like it isn't:
$ fab dev_hello
['localhost']
No hosts found. Please specify (single) host string for connection:
What's going on here?
I'm not sure what you're trying to do, but...
If you're losing info on the shell/environment -- Fabric runs each command in a separate shell statement, so you need to either manually chain the commands or use the prefix context manager.
See http://docs.fabfile.org/en/1.8/faq.html#my-cd-workon-export-etc-calls-don-t-seem-to-work
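For example, in Fabric 1 this looks roughly like the following (the paths and commands are placeholders):

from fabric.api import run, prefix

# Each run() gets its own shell, so `cd`, exports, virtualenv activation, etc. do not persist
run('cd /srv/app && ls')                     # chain commands manually in one run()...
with prefix('export PATH=/opt/bin:$PATH'):   # ...or prepend something to every run() in the block
    run('mytool --version')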
If you're losing info within Python, it might be tied to this bug/behavior that I ran into recently [ https://github.com/fabric/fabric/issues/1004 ], where the shell I entered Fabric with seems to be obliterated.
