I need to run some tests with a traffic generator that has different client and server commands. I would like to roll this into a fabric2 script which executes the traffic generation commands while cd'd into /root.
I have public-key authentication on the iperf machines. How can I run this traffic generation test under fabric2?
This was a little interesting to get running because the fabric2 docs don't include much information about run() arguments... you need to look at the invoke Runner.run() documentation to see all fabric run() keywords.
The key to making iperf work in this case was setting pty=True and asynchronous=True when running the iperf server command. If the iperf server is not run asynchronously, it blocks execution of the iperf client command.
# Save this script as run_iperf.py and run with "python run_iperf.py"
from getpass import getuser
import os
#from fabric import Config, SerialGroup, ThreadingGroup, exceptions, runners
#from fabric.exceptions import GroupException
from fabric import Connection
server_vm = "10.1.0.1"
client_vm = "10.2.0.1"
# This matters because my user's .ssh/id_rsa.pub is authorized on the remote systems
assert getuser()=="mpenning"
hosts = list()
conn1 = Connection(host=client_vm, user="root",
    connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
conn2 = Connection(host=server_vm, user="root",
    connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
hosts.append(conn1)
hosts.append(conn2)
iperf_udp_client_cmd = "nice -19 iperf3 --plus-more-client-commands"
iperf_udp_server_cmd = "nice -19 iperf3 --plus-more-server-commands"
# ThreadingGroup is optional for this use case, but the iperf commands
# definitely require pty and asynchronous (server-side)...
# ThreadingGroup() is required for concurrent fabric commands.
#
# Uncomment below to use ThreadingGroup()...
# t_hosts = ThreadingGroup.from_connections(hosts)
#
# also ref invoke Runner.run() for more run() args:
# -> https://github.com/pyinvoke/invoke/blob/master/invoke/runners.py
with conn2.cd("/root"):
    conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, disown=False, echo=True)
with conn1.cd("/root"):
    conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, asynchronous=False, echo=True)
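As an aside, when asynchronous=True is set, run() returns a promise-like object rather than a finished result. A minimal sketch of capturing it and waiting on it after the client finishes (this assumes the variables from the script above, and that the server command eventually exits, e.g. iperf3 started with its -1 / --one-off flag):
with conn2.cd("/root"):
    server_promise = conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, echo=True)
with conn1.cd("/root"):
    conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, echo=True)
# join() blocks until the backgrounded server command finishes and returns its result
server_result = server_promise.join()
print(server_result.exited)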
This script was loosely based on this answer:
https://stackoverflow.com/a/53763786/667301
I am following the Google Cloud Functions python testing example here:
https://cloud.google.com/functions/docs/testing/test-http
import os
import subprocess
import uuid
import requests
from requests.packages.urllib3.util.retry import Retry
def test_args():
    name = str(uuid.uuid4())
    port = os.getenv('PORT', 8080)  # Each functions framework instance needs a unique port
    process = subprocess.Popen(
        [
            'functions-framework',
            '--target', 'hello_http',
            '--port', str(port)
        ],
        cwd=os.path.dirname(__file__),
        stdout=subprocess.PIPE
    )
    # Send HTTP request simulating Pub/Sub message
    # (GCF translates Pub/Sub messages to HTTP requests internally)
    BASE_URL = f'http://localhost:{port}'
    retry_policy = Retry(total=6, backoff_factor=1)
    retry_adapter = requests.adapters.HTTPAdapter(
        max_retries=retry_policy)
    session = requests.Session()
    session.mount(BASE_URL, retry_adapter)
    name = str(uuid.uuid4())
    res = session.post(
        BASE_URL,
        json={'name': name}
    )
    assert res.text == 'Hello {}!'.format(name)
    # Stop the functions framework process
    process.kill()
    process.wait()
After running pytest against this test file, the test passes and the code exits nearly immediately, but it doesn't appear to have killed the process:
± ps -ef | grep functions
thoraxe 11661 1985 49 20:30 pts/0 00:00:02 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/python3.9 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/functions-framework --target train --port 8080
thoraxe 11684 11661 0 20:30 pts/0 00:00:00 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/python3.9 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/functions-framework --target train --port 8080
I'm using Python 3.9.13 in a virtual environment on Fedora.
As this is sample code from Google, I'd expect it to work, but something is definitely not working here. Can someone suggest what I might be doing wrong?
When a Python assertion fails, it raises AssertionError and the rest of the test function never runs. The kill/wait calls are only executed when the test succeeds. This is a major bummer, because the functions framework keeps running in the background and new code changes apparently aren't picked up on subsequent pytest runs.
Using a different wrapper framework like https://github.com/okken/pytest-check solves the problem because all steps will be performed, even if there are failures.
However, note that legitimate Python failures/errors/explosions will still result in the functions framework not properly exiting.
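One way to make the cleanup unconditional, sketched below under the assumption that the rest of the test stays as in the Google sample, is to wrap the request and assertion in try/finally so the kill/wait always run, even when the assertion (or anything else) blows up:
import os
import subprocess
import uuid
import requests
from requests.packages.urllib3.util.retry import Retry
def test_args():
    port = os.getenv('PORT', 8080)
    process = subprocess.Popen(
        ['functions-framework', '--target', 'hello_http', '--port', str(port)],
        cwd=os.path.dirname(__file__),
        stdout=subprocess.PIPE
    )
    try:
        base_url = f'http://localhost:{port}'
        retry_policy = Retry(total=6, backoff_factor=1)
        session = requests.Session()
        session.mount(base_url, requests.adapters.HTTPAdapter(max_retries=retry_policy))
        name = str(uuid.uuid4())
        res = session.post(base_url, json={'name': name})
        assert res.text == 'Hello {}!'.format(name)
    finally:
        # stop the functions framework whether the assertion passed or not
        process.kill()
        process.wait()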
I have a mariadb running in a container. On 'docker run', an import script (from a db dump) is run by mariadb, which creates users, builds schema, etc.
As the size of that dump script grows, the time to do the import increases. At this point it's about 8-10 seconds, but I expect the amount of data to increase substantially, and the import time will become more difficult to predict.
I'd like to be able to send a signal from the container to the host, to let it know that the data has been loaded and that the db is ready to be used. So far I have found info on how to send a signal from one container to another container, but there's no information on how to send a signal from a container to the host. Also, I need to be able to do this programmatically, as creating the container is part of a larger pipeline.
Ideally, I'd like to be able to do something like this:
client = docker.from_env()
db_c = client.containers.run('my_db_image', ....)
# check for signal from db_c container
# do other things
Thank you!
AFAIK you cannot send signals from the container to a process running on the host but there are other ways to know when the import has finished. I think the easiest is to start the container in detached mode and wait until a specific line gets logged. The following script for example waits until the line done is logged:
import docker
client = docker.from_env()
container = client.containers.run('ubuntu:latest', 'bash -c "for i in {1..10}; do sleep 1; echo working; done; echo done"', detach=True)
print('container started')
for line in container.logs(stream=True):
    # logs(stream=True) yields bytes, so decode before comparing
    line = line.strip().decode()
    print(line)
    if line == 'done':
        break
print('continue....')
If the output of the import script goes to stdout it could contain a simple print at the end:
select 'The import has finished' AS '';
Wait for this string in the python script.
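For example, a minimal variation of the log-streaming loop above, keyed on that string (the image name is a placeholder for whatever your setup uses):
import docker
client = docker.from_env()
container = client.containers.run('my_db_image', detach=True)
for line in container.logs(stream=True):
    # logs(stream=True) yields bytes
    if b'The import has finished' in line:
        break
print('db is ready')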
Another approach is to use some other form of inter-process communication. An example using named pipes:
import os
import docker
import errno
client = docker.from_env()
FIFO = '/tmp/apipe'
# create the pipe
try:
    os.mkfifo(FIFO)
except OSError as oe:
    if oe.errno != errno.EEXIST:
        raise
# start the container sharing the pipe
container = client.containers.run('ubuntu:latest', 'bash -c "sleep 5; echo done > /tmp/apipe"', volumes={FIFO: {'bind': FIFO, 'mode': 'rw'}}, detach=True)
print("container started")
with open(FIFO) as fifo:
    print("FIFO opened")
    while True:
        data = fifo.read()
        if len(data) == 0:
            print("Writer closed")
            break
        print('Read: "{0}"'.format(data))
print("continue...")
The host shares the named pipe with the container. In the python script the read call to the FIFO is blocked until some data is available in the pipe.
In the container, the import script writes to the pipe, notifying the program that the data has been loaded. The mysql client's system command \! (which executes an external shell command) might come in handy in this situation. You could simply add this to the end of the script:
\! echo done > /tmp/apipe
In a similar way you could use IPC sockets (aka Unix sockets) or shared memory but things get a bit more complicated.
Yet another solution is to add a health-check to the container. The health status can be polled on the host by inspecting the container. See How to wait until docker start is finished?
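A rough sketch of that polling with docker-py (this assumes the image, or the run() call, defines a HEALTHCHECK; the 'Health' key only exists when a healthcheck is configured):
import time
import docker
client = docker.from_env()
container = client.containers.run('my_db_image', detach=True)
while True:
    container.reload()  # refresh container.attrs from the daemon
    if container.attrs['State'].get('Health', {}).get('Status') == 'healthy':
        break
    time.sleep(2)
print('database is ready')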
Edited:
The above approaches assume the container is initialized and accepting connections. If the script is executed as part of the initialization process (Initializing a fresh instance), which seems to be the case here, the database is not yet ready and accepting connections when the import completes. During initialization the server is temporarily started with --skip_networking (allowing only local clients), and only after the initialization completes is it restarted and made available remotely.
You can add this code to check whether the db is ready to accept connections:
import MySQLdb
import time
db = None
while db is None:
    try:
        db = MySQLdb.connect(host='MYHost', user='MYNAME', passwd='PASS', db='MYDB')
    except MySQLdb.OperationalError:
        # connection refused or the DB is still initializing
        print("Still waiting for the DB")
        time.sleep(10)
I'm using Fabric 2 and I'm trying to run a shell script on a number of hosts sequentially. In the script, it configures a few settings and reboots that host. When I run my task however it ends after the script has run on the first host (I'm guessing because the SSH connection terminates due to the reboot). I tried looking into setting 'warn_only' to True, but I don't see where to set this value on Fabric 2.
Adding:
with settings(warn_only=True):
throws a "NameError: global name 'settings' is not defined" error.
Is there a correct format to warn_only? If not possible yet in Fabric 2, is there a way to continue running my task regardless of this reboot?
My script:
from fabric import *
import time
import csv
@task
def test(ctx):
    hosts = ['1.1.1.1', '2.2.2.2']
    for host in hosts:
        c = Connection(host=host, user="user", connect_kwargs={"password": "password"})
        c.run("./shell_script.sh")
        configured = False
        while not configured:
            result = c.run("cat /etc/hostname")
            if result.stdout.strip() != "default": configured = True
            time.sleep(10)
So a workaround has been to run the task with the -w flag which will enable warn_only and give me the desired functionality. It would be more preferable to be able to set this property in the code though.
Looks like you can use the config argument to the Connection class instead of the with settings() construction, and warn_only has been renamed to warn:
with Connection(host=host, user="user", connect_kwargs={"password": "password"}) as c:
    c.run("./shell_script.sh", warn=True)
More generally, you can get upgrade documentation at https://www.fabfile.org/upgrading.html#run
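If you would rather set this once through the config argument mentioned above instead of passing warn=True to every run() call, a minimal sketch (assuming Fabric 2.x):
from fabric import Config, Connection
# override the default run() behaviour so failed commands warn instead of raising
warn_config = Config(overrides={'run': {'warn': True}})
with Connection(host="1.1.1.1", user="user",
                connect_kwargs={"password": "password"},
                config=warn_config) as c:
    c.run("./shell_script.sh")  # a non-zero exit status no longer aborts the task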
How can I log in to multiple servers using pywinauto? I am using the program below to open PuTTY and run a command. It works for a single server when I define it like this:
app = Application().Start(cmd_line='C:\Program Files\PuTTY\putty.exe -ssh user@10.55.167.4')
But when I pass the servers through a for loop to do the same task on another server, it does not work.
from pywinauto.application import Application
import time
server = ["10.55.167.4", "10.56.127.23"]
for inser in server:
    print(inser)
    app = Application().Start(cmd_line='C:\Program Files\PuTTY\putty.exe -ssh user@inser')
    putty = app.PuTTY
    putty.Wait('ready')
    time.sleep(1)
    putty.TypeKeys("aq@wer")
    putty.TypeKeys("{ENTER}")
    putty.TypeKeys("ls")
    putty.TypeKeys("{ENTER}")
pywinauto is a Python module to automate graphical user interface (GUI) operations, for example to simulate mouse clicks and everything else humans do on GUIs to interact with the computer. SSH is a remote access protocol designed primarily for the command line, with excellent programmatic support in Python. PuTTY is a little GUI tool to manage SSH and Telnet connections. Although, based on a quick check, I think it is possible to read the console output (you mean in cmd.exe?) with pywinauto, your approach is unnecessarily complicated: you don't need a GUI tool designed for human interaction to access a protocol with excellent command line and programmatic library support. I suggest you use paramiko, which gives a very simple and convenient interface to SSH:
import paramiko
def ssh_connect(server, **kwargs):
    client = paramiko.client.SSHClient()
    client.load_system_host_keys()
    # note: here you can give other parameters
    # like user, port, password, key file, etc.
    # also you can wrap the call below into
    # try/except; you can expect many kinds of
    # errors here, if the address is unreachable,
    # times out, authentication fails, etc.
    client.connect(server, **kwargs)
    return client
servers = ['10.55.167.4', '10.56.127.23']
# connect to all servers
clients = dict((server, ssh_connect(server)) for server in servers)
# execute commands on the servers by the clients, 2 examples:
stdin, stdout, stderr = clients['10.55.167.4'].exec_command('pwd')
print(stdout.read())
# b'/home/denes\n'
stdin, stdout, stderr = clients['10.56.127.23'].exec_command('rm foobar')
print(stderr.read())
# b"rm: cannot remove 'foobar': No such file or directory\n"
# once you finished close the connections
_ = [cl.close() for cl in clients.values()]
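As the comment in ssh_connect() suggests, you will probably want to wrap the connect call in try/except. A rough sketch of what that could look like (the exception classes are paramiko's; returning None on failure is just one possible choice):
import socket
import paramiko
def ssh_connect_safe(server, **kwargs):
    client = paramiko.client.SSHClient()
    client.load_system_host_keys()
    try:
        client.connect(server, **kwargs)
    except paramiko.AuthenticationException:
        print('authentication failed for %s' % server)
        return None
    except (paramiko.SSHException, socket.error) as exc:
        print('could not connect to %s: %s' % (server, exc))
        return None
    return client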
Flow of the program is:
Connect to OpenSSH server on Linux machine using Paramiko library
Open X11 session
Run xterm executable
Run some other program (e.g. Firefox) by typing executable name in the terminal and running it.
I would be grateful if someone could explain how to make some executable run in the terminal that was opened using the following code, and provide sample source code (source):
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()
channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []
def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)
session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler = x11_handler)
session.exec_command('xterm')
transport.accept()
while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()
print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I was thinking about running the program in child thread, while the thread running the xterm is waiting for the child to terminate.
Well, this is a bit of a hack, but hey.
What you can do on the remote end is the following: inside the xterm, you run netcat, listen for any data coming in on some port, and pipe whatever you get into bash. It's not quite the same as typing it into xterm directly, but it's almost as good as typing it into bash directly, so I hope it'll get you a bit closer to your goal. If you really want to interact with xterm directly, you might want to read this.
For example:
terminal 1:
% nc -l 3333 | bash
terminal 2 (type echo hi here):
% nc localhost 3333
echo hi
Now you should see hi pop out of the first terminal. Now try it with xterm&. It worked for me.
Here's how you can automate this in Python. You may want to add some code that enables the server to tell the client when it's ready, rather than using the silly time.sleeps.
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# for connecting to netcat running remotely
from multiprocessing import Process
import time
# data
import getpass
SSHServerPort=22
SSHServerIP = "localhost"
# get username/password interactively, or use some other method..
user = getpass.getuser()
pwd = getpass.getpass("enter pw for '" + user + "': ")
NETCAT_PORT = 3333
FIREFOX_CMD="/path/to/firefox &"
#FIREFOX_CMD="xclock&"#or this :)
def run_stuff_in_xterm():
    time.sleep(5)
    s = socket.socket(socket.AF_INET6 if ":" in SSHServerIP else socket.AF_INET, socket.SOCK_STREAM)
    s.connect((SSHServerIP, NETCAT_PORT))
    s.send("echo \"Hello there! Are you watching?\"\n")
    s.send(FIREFOX_CMD + "\n")
    time.sleep(30)
    s.send("echo bye bye\n")
    time.sleep(2)
    s.close()
# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()
channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []
def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)
session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler = x11_handler)
session.exec_command("xterm -e \"nc -l 0.0.0.0 %d | /bin/bash\"" % NETCAT_PORT)
p = Process(target=run_stuff_in_xterm)
transport.accept()
p.start()
while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()
p.join()
print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I tested this on a Mac, so I commented out the XmingProc bits and used /Applications/Firefox.app/Contents/MacOS/firefox as FIREFOX_CMD (and xclock).
The above isn't exactly a secure setup, as anyone connecting to the port at the right time could run arbitrary code on your remote server, but it sounds like you're planning to use this for testing purposes anyway. If you want to improve the security, you could make netcat bind to 127.0.0.1 rather than 0.0.0.0, set up an ssh tunnel (run ssh -L3333:localhost:3333 username@remote-host.com to tunnel all traffic received locally on port 3333 to remote-host.com:3333), and let Python connect to ("localhost", 3333).
Now you can combine this with selenium for browser automation:
Follow the instructions from this page, i.e. download the selenium standalone server jar file, put it into /path/to/some/place (on the server), and pip install -U selenium (again, on the server).
Next, put the following code into selenium-example.py in /path/to/some/place:
#!/usr/bin/env python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox() # Get local session of firefox
browser.get("http://www.yahoo.com") # Load page
assert "Yahoo" in browser.title
elem = browser.find_element_by_name("p") # Find the query box
elem.send_keys("seleniumhq" + Keys.RETURN)
time.sleep(0.2) # Let the page load, will be added to the API
try:
    browser.find_element_by_xpath("//a[contains(@href,'http://docs.seleniumhq.org')]")
except NoSuchElementException:
    assert 0, "can't find seleniumhq"
browser.close()
and change the firefox command:
FIREFOX_CMD="cd /path/to/some/place && python selenium-example.py"
And watch firefox do a Yahoo search. You might also want to increase the time.sleep.
If you want to run more programs, you can do things like this before or after running firefox:
# start up xclock, wait for some time to pass, kill it.
s.send("xclock&\n")
time.sleep(1)
s.send("XCLOCK_PID=$!\n") # stash away the process id (into a bash variable)
time.sleep(30)
s.send("echo \"killing $XCLOCK_PID\"\n")
s.send("kill $XCLOCK_PID\n\n")
time.sleep(5)
If you want to perform general X11 application control, I think you might need to write similar "driver applications", albeit using different libraries. You might want to search for "x11 send {mouse|keyboard} events" to find more general approaches. That brings up these questions, but I'm sure there's lots more.
If the remote end isn't responding instantaneously, you might want to sniff your network traffic in Wireshark, and check whether or not TCP is batching up the data, rather than sending it line by line (the \n seems to help here, but I guess there's no guarantee). If this is the case, you might be out of luck, but nothing is impossible. I hope you don't need to go that far though ;-)
One more note: if you need to communicate with CLI programs' STDIN/STDOUT, you might want to look at expect scripting (e.g. using pexpect, or for simple cases you might be able to use subprocess.Popen.communicate, http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate).
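For instance, a minimal Popen.communicate() sketch (bc is just an illustrative stand-in for whatever CLI program you need to drive):
import subprocess
# start a CLI program, feed it some input on STDIN and collect STDOUT/STDERR
p = subprocess.Popen(['bc'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate(b'2 + 2\n')
print(out)  # b'4\n'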