I'm using django-selenium to add Selenium testing functionality to existing unittests.
My Selenium tests rely on a web server running on my machine, which is started by running our django app like so: main.py -a
So the first thing I want to do in my Selenium test is start this server, which I set up like so:
def start_server():
    path = os.path.join(os.getcwd(), 'main.py -a')
    server_running = is_server_running()
    if server_running is False:
        server = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE,
                                  stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = server.communicate(input='%s\n' % path)
        print 'Server error:\n{0}\n'.format(stderr)
        server_running = is_server_running()
    return server_running
However, when I do this the web server takes over execution of the django test process in the command line. I assume the way I should be doing this is to launch the command prompt in a separate process and then trigger the main.py -a command in that process.
Is this the right idea, and if so, how can I modify that function to spawn a new process and launch my command? I was trying to run 'cmd.exe' using Process(target=path) but I couldn't get it to work. Thanks :)
The way I have gone with this is a much simpler launch method:
startServer.py
def run():
    path = os.path.join(os.getcwd(), 'main.py')
    server_running = is_server_running()
    if server_running is False:
        subprocess.Popen(['python', path, '-a'])

if __name__ == '__main__':
    run()
Which I can then start and stop in my tests' setUp & tearDown like so:
def setUp(self):
    self.server = Process(target=startServer.run)
    self.server.start()

def test(self):
    # run test process
    ...

def tearDown(self):
    utils.closeBrowser(self.ff)
There may well be a better way of doing things & something here may not be 'as it should be', but it works (albeit with a socket forcibly closed error) :)
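For what it's worth, the forcibly closed socket is probably because nothing ever stops the spawned server. A minimal sketch of one way around that, assuming main.py exits cleanly on SIGTERM: have run() hand back the Popen object instead of wrapping everything in a multiprocessing.Process, then terminate it in tearDown (startServer, is_server_running and utils are the names from the post above):

import os
import subprocess
import unittest

def run():
    # Return the process handle so the caller can stop the server later
    path = os.path.join(os.getcwd(), 'main.py')
    if not is_server_running():
        return subprocess.Popen(['python', path, '-a'])
    return None

class MyTest(unittest.TestCase):
    def setUp(self):
        self.server = startServer.run()

    def tearDown(self):
        utils.closeBrowser(self.ff)
        if self.server is not None:
            self.server.terminate()  # ask the server process to exit...
            self.server.wait()       # ...and reap it so nothing lingers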
My only outstanding issue is the tests starting before the database tables have been created :(
I've spent the last hour and a half trying and failing to debug this test and I am utterly stumped. To simplify the process of testing the Flask server I am building, I have made a relatively simple script which starts the server, then runs pytest, kills the server, writes the outputs to files, and exits with Pytest's exit code. This code was working perfectly until today, and I haven't modified it since (aside from debugging this issue).
Here's the problem: when it gets to a certain point in the tests, it hangs. The weird thing is that this does not happen if I run my tests in any other way.
- Debugging my server in VS Code, and running tests in the terminal: works
- Running my server using the same code used in the test script and running pytest manually: works
- Running pytest using the test script and running the server through the start server script (which uses the same code for running the server as the test script does) in a second terminal: works
Here's the other interesting thing: the tests always hang in the same place, part way through the setup fixture. It sends the clear command, and an echo request to the server (which prints the name of the current test). The database clears successfully, and the server echoes the correct information, but the echo route never exits - my tests never get a response. This echo route behaves perfectly for the 50 or so tests that happen before this point. If I comment out the test that is causing it to fail, it fails on the next test. If I comment out the call to the echo then it hangs on a later test on a completely different request to a different route. When it hangs, the server cannot be killed using a SIGTERM, but instead requires a SIGKILL.
Here is my echo route:
@debug.get('/echo')
def echo() -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    try:
        value = request.args['value']
    except KeyError:
        raise http_errors.BadRequest('echo route requires a `value` argument')
    to_print = f'{Fore.MAGENTA}[ECHO]\t\t{value}{Fore.RESET}'
    # Print it to both stdout and stderr to ensure it is seen across all logs
    # Otherwise it could be more difficult to figure out what's up with server
    # output
    print(to_print)
    print(to_print, file=sys.stderr)
    return {'value': value}
And here is my code that sends the requests:
def get(token: JWT | None, url: str, params: dict) -> dict:
    """
    Returns the response to a GET web request

    This also parses the response to help with error checking

    ### Args:
    * `url` (`str`): URL to request to
    * `params` (`dict`): parameters to send

    ### Returns:
    * `dict`: response data
    """
    return handle_response(requests.get(
        url,
        params=params,
        headers=encode_headers(token),
        timeout=3
    ))
def echo(value: str) -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    return cast(IEcho, get(None, f"{URL}/echo", {"value": value}))
@pytest.fixture(autouse=True)
def before_each(request: pytest.FixtureRequest):
    """Clear the database between tests"""
    clear()
    echo(f"{request.module.__name__}.{request.function.__name__}")
    print("After echo")  # This never prints
Here is my code for running Pytest in my test script
def pytest():
    pytest = subprocess.Popen(
        [sys.executable, '-u', '-m', 'pytest', '-v', '-s'],
    )
    # Wait for tests to finish
    print("🔨 Running tests...")
    try:
        ret = pytest.wait()
    except KeyboardInterrupt:
        print("❗ Testing cancelled")
        pytest.terminate()
        # write_outputs(pytest, None)
        # write_outputs(pytest, "pytest")
        raise
    # write_outputs(pytest, "pytest")
    if ret == 0:
        print("✅ It works!")
    else:
        print("❌ Tests failed")
    return bool(ret)
And here is my code for running my server in my test script:
def backend(debug=False, live_output=False):
    env = os.environ.copy()
    if debug:
        env.update({"ENSEMBLE_DEBUG": "TRUE"})
        debug_flag = ["--debug"]
    else:
        debug_flag = []

    if live_output is False:
        outputs = subprocess.PIPE
    else:
        outputs = None

    flask = subprocess.Popen(
        [sys.executable, '-u', '-m', 'flask'] + debug_flag + ['run'],
        env=env,
        stderr=outputs,
        stdout=outputs,
    )
    if outputs is not None and (flask.stderr is None or flask.stdout is None):
        print("❗ Can't read flask output", file=sys.stderr)
        flask.kill()
        sys.exit(1)

    # Request until we get a success, but crash if we failed to start in 10
    # seconds
    start_time = time.time()
    started = False
    while time.time() - start_time < 10:
        try:
            requests.get(
                f'http://localhost:{os.getenv("FLASK_RUN_PORT")}/debug/echo',
                params={'value': 'Test script startup...'},
            )
        except requests.ConnectionError:
            continue
        started = True
        break

    if not started:
        print("❗ Server failed to start in time")
        flask.kill()
        if outputs is not None:
            write_outputs(flask, None)
        sys.exit(1)
    else:
        if flask.poll() is not None:
            print("❗ Server crashed during startup")
            if outputs is not None:
                write_outputs(flask, None)
            sys.exit(1)

    print("✅ Server started")
    return flask
So in summary, does anyone have any idea what on earth is happening? The fact that it freezes on such a simple route makes me very concerned. I think I may have found some crazy bug in Flask or in the requests library or something.
Even if you don't know what's happening with this, it'd be really helpful to have any ideas as to how I can debug this further, as I have absolutely no idea what is going on.
It turns out that my server output was filling up all the buffer space in the pipe, meaning that it would wait for the buffer to empty. The issue is that my test script was waiting for the tests to exit, and the tests could not progress unless the server was active. As such, the code reached a three-way deadlock. I fixed it by redirecting my output through a file (where limited buffer size wasn't a problem).
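For illustration, a minimal sketch of that fix (the file names are my choice): hand the server real file objects instead of subprocess.PIPE, so its writes can never block on a full pipe buffer:

import subprocess
import sys

# A file has no fixed-size buffer for the writer to fill, so the server
# can keep logging even while the test script is blocked in pytest.wait()
out = open('flask.stdout.log', 'wb')
err = open('flask.stderr.log', 'wb')
flask = subprocess.Popen(
    [sys.executable, '-u', '-m', 'flask', 'run'],
    stdout=out,
    stderr=err,
)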
I am trying to automate a small task that requires several steps and some of the steps would be identical for all the "devices":
- ssh login
- run a command
- clean after itself
I have a script that uses pexpect, but for every function (task) I have to establish an SSH connection, which is lame.
What I am trying to do is kind of like this:
A function that would create a session, and other functions that would use the same "child".
def ssh_login(device):
    child = pexpect.spawn("ssh root@" + device)
    child.expect("password:")
    child.sendline(password)
    child.expect("#")
Another function that would use the session and run some command, like:
def run_command():
    # run some command here
    child.sendline("some_command")
    child.expect("#")
And a cleanup function:
def cleanup():
    child.sendline("cleanup_command")  # whatever the cleanup command is
    child.expect("#")
    child.sendline("exit")
    child.interact()
Any ideas?
Like this:
pexpect.spawn('ssh', ['-o', 'ControlMaster=auto',
                      '-o', 'ControlPath=~/.ssh/master-%r@%h:%p',
                      username + "@" + hostname])
(Note that the options go before the hostname; ssh treats anything after the host as the remote command to run.)
And you will find the session socket:
ls ~/.ssh/master-yourname@hostname:22
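To sketch how that helps (host, user and password are placeholders): once the master connection has authenticated, any later ssh that points at the same ControlPath is multiplexed over the existing socket, so it skips the login entirely:

import pexpect

host, user, password = 'mydevice', 'root', 'secret'  # placeholders
opts = ['-o', 'ControlMaster=auto',
        '-o', 'ControlPath=~/.ssh/master-%r@%h:%p',
        '-o', 'ControlPersist=yes']  # keep the master alive in the background

# The first connection authenticates and leaves the master socket behind
master = pexpect.spawn('ssh', opts + ['%s@%s' % (user, host), 'true'])
master.expect('password:')
master.sendline(password)
master.expect(pexpect.EOF)

# Later commands reuse the socket: no password prompt, no greeting
child = pexpect.spawn('ssh', opts + ['%s@%s' % (user, host), 'some_command'])
child.expect(pexpect.EOF)
print(child.before.decode())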
I've done something similar. All you have to do is return the child from the ssh_login function and pass it along as an input to your other functions.
def ssh_login(device):
    child = pexpect.spawn("ssh root@" + device)
    # Do your usual login
    return child
When you call it, save the child in a variable.
session = ssh_login(my_device)
run_command(session)
cleanup(session)
Of course you will need to change the other functions to accept a session input:
def run_command(session):
    # run some command here
    session.sendline("some_command")
    session.expect("#")

def cleanup(session):
    session.sendline("cleanup_command")  # whatever the cleanup command is
    session.expect("#")
    session.sendline("exit")
    session.interact()
When I do anything with Python and SSH, I use Paramiko; it's a really solid module. Here's my "starter code" for any project that uses it. I've added argument parsing and some comments. You'll probably want to generate a list of the servers you want to run the command on and loop through it. I would recommend, though, if you need to frequently run commands on a lot of servers, consider getting something like SaltStack or Ansible; they make it very easy to manage servers regularly.
https://saltstack.com/
https://www.ansible.com/
#!/usr/bin/env python
import paramiko


def run_ssh_cmd(remote_server, connect_user, identity, cmd=None):
    """ create an ssh connection to the remote server and retrieve
    information"""
    # kludge to make ssh work - add 'your_domain.com' to the remote_server
    remote_server += '.your_domain.com'
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(remote_server, username=connect_user, key_filename=identity)
    command_str = cmd
    stdin, stdout, stderr = client.exec_command(command_str)
    print stdout.readlines()
    client.close()


if __name__ == '__main__':
    import sys
    import argparse
    import datetime

    parser = argparse.ArgumentParser()
    parser.add_argument("-s", "--server", action="store", required=True,
                        dest="server", help="Server to query")
    parser.add_argument("-u", "--user", action="store", required=True,
                        dest="user", help="User ID for remote server connection")
    parser.add_argument("-i", "--identity", action="store", required=True,
                        dest="id_file", help="SSH key file")
    args = parser.parse_args()

    run_ssh_cmd(args.server, args.user, args.id_file, "hostname;date")
I have the following script, which just boots up a web server serving a dynamically created website. In order to get dynamic data, the script opens a file to read from.
My concern is how to catch CTRL-C killing the python script, so I can close the file before the script's thread is killed.
I tried the following couple of things, but neither works:
from flask import Flask, render_template
import time

# Initialize the Flask application
app = Flask(__name__)

fileNames = {}
fileDesc = {}
for idx in range(1, 4):
    fileNames["name{}".format(idx)] = "./name" + str(idx) + ".txt"
    fileDesc["name{}".format(idx)] = open(fileNames["name{}".format(idx)], 'r')

try:
    @app.route('/')
    def index():
        # code for reading data from files
        return render_template('index.html', var1=var1)

    @app.errorhandler(Exception)
    def all_exception_handler(error):
        print("Closing")
        for key, value in fileDesc.items():
            value.close()
        print("Files closed")

    if __name__ == '__main__':
        app.run(
            host="192.168.0.166",
            port=int("8080"),
            debug=True
        )
except KeyboardInterrupt:
    print("Closing")
    for key, value in fileDesc.items():
        value.close()
    print("Files closed")
Thanks in advance.
I am struggling with the same thing in my project. Something that did work for me was using signal to capture CTRL-C.
import sys
import signal

def handler(signal, frame):
    print('CTRL-C pressed!')
    sys.exit(0)

signal.signal(signal.SIGINT, handler)
signal.pause()
When this piece of code is put in the script that is running the Flask app, CTRL-C can be captured. As it stands, you have to press CTRL-C twice before the handler is executed, though. I'll investigate further and edit the answer if I find something new.
Edit 1
Okay I've done some more research and came up with some other methods, as the above is quite hack 'n slash.
In production, clean-up code such as closing databases or files is done via the @app.teardown_appcontext decorator. See this part of the tutorial.
When using the simple server, you can shut it down via exposing the werkzeug shutdown function. See this post.
Edit 2
I've tested the Werkzeug shutdown function, and it also works together with the teardown_appcontext functions. So I suggest to write your teardown functions using the decorator and writing a simple function that just does the shutdown of the werkzeug server. That way production and development code are the same.
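A minimal sketch of that combination (the route name is my own choice; note that the werkzeug.server.shutdown hook shown here was removed in later Werkzeug releases, so treat this as a dev-server-era sketch):

from flask import Flask, request

app = Flask(__name__)

@app.teardown_appcontext
def close_files(error=None):
    # Clean-up (closing files, database connections, ...) goes here;
    # Flask calls this at the end of every application context.
    pass

@app.route('/shutdown', methods=['POST'])
def shutdown():
    # The Werkzeug dev server injects a shutdown callable into the WSGI
    # environ; calling it stops the serving loop cleanly.
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running the Werkzeug dev server')
    func()
    return 'Shutting down...'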
Use atexit to handle this, from: https://stackoverflow.com/a/30739397/5782985
import atexit

# defining function to run on shutdown
def close_running_threads():
    for thread in the_threads:
        thread.join()
    print "Threads complete, ready to finish"

# Register the function to be called on exit
atexit.register(close_running_threads)

# start your process
app.run()
I'm writing a python debugging library which opens a flask server in a new thread and serves information about the program it's running in. This works fine when the program being debugged isn't a web server itself. However if I try to run it concurrently with another flask server that's running in debug mode, things break. When I try to access the second server, the result alternates between the two servers.
Here's an example:
from flask.app import Flask
from threading import Thread

# app1 represents my debugging library
app1 = Flask('app1')

@app1.route('/')
def foo():
    return '1'

Thread(target=lambda: app1.run(port=5001)).start()

# Cannot change code after here as I'm not the one writing it

app2 = Flask('app2')

@app2.route('/')
def bar():
    return '2'

app2.run(debug=True, port=5002)
Now when I visit http://localhost:5002/ in my browser, the result may either be 1 or 2 instead of consistently being 2.
Using multiprocessing.Process instead of Thread has the same result.
How does this happen, and how can I avoid it? Is it unavoidable with flask/werkzeug/WSGI? I like flask for its simplicity and ideally would like to continue using it. If that's not possible, what's the simplest library/framework that I can use that won't interfere with any other web servers running at the same time? I'd also like to use threads instead of processes if possible.
The reloader of werkzeug (which is used in debug mode by default) creates a new process using subprocess.call; simplified, it does something like:
new_environ = os.environ.copy()
new_environ['WERKZEUG_RUN_MAIN'] = 'true'
subprocess.call([sys.executable] + sys.argv, env=new_environ, close_fds=False)
This means that your script is re-executed. That is usually fine if all it contains is an app.run(), but in your case it would restart both app1 and app2. Both end up on the same port because, if the OS supports it, the listening socket is opened in the parent process, inherited by the child, and used there directly when the environment variable WERKZEUG_SERVER_FD is set.
So now you have two different apps somehow using the same socket.
You can see this better if you add some output, e.g.:
from flask.app import Flask
from threading import Thread
import os

app1 = Flask('app1')

@app1.route('/')
def foo():
    return '1'

def start_app1():
    print("starting app1")
    app1.run(port=5001)

app2 = Flask('app2')

@app2.route('/')
def bar():
    return '2'

def start_app2():
    print("starting app2")
    app2.run(port=5002, debug=True)

if __name__ == '__main__':
    print("PID:", os.getpid())
    print("Werkzeug subprocess:", os.environ.get("WERKZEUG_RUN_MAIN"))
    print("Inherited FD:", os.environ.get("WERKZEUG_SERVER_FD"))
    Thread(target=start_app1).start()
    start_app2()
This prints for example:
PID: 18860
Werkzeug subprocess: None
Inherited FD: None
starting app1
starting app2
* Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)
* Running on http://127.0.0.1:5002/ (Press CTRL+C to quit)
* Restarting with inotify reloader
PID: 18864
Werkzeug subprocess: true
Inherited FD: 4
starting app1
starting app2
* Debugger is active!
If you change the startup code to
if __name__ == '__main__':
    if os.environ.get("WERKZEUG_RUN_MAIN") != 'true':
        Thread(target=start_app1).start()
    start_app2()
then it should work correctly: only app2 is reloaded by the reloader. However, it runs in a separate process, not in a different thread; that is implied by using the debug mode.
A hack to avoid this would be to use:
if __name__ == '__main__':
    os.environ["WERKZEUG_RUN_MAIN"] = 'true'
    Thread(target=start_app1).start()
    start_app2()
Now the reloader thinks it's already running in the subprocess and doesn't start a new one, everything runs in the same process. Reloading won't work and I don't know what other side effects that may have.
I'm using a simple pexpect script to ssh to a remote machine and grab a value returned by a command.
Is there any way, pexpect- or ssh-wise, to ignore the unix greeting?
That is, from
child = pexpect.spawn('/usr/bin/ssh %s@%s' % (rem_user, host))
child.expect('[pP]assword: ', timeout=5)
child.sendline(spass)
child.expect([pexpect.TIMEOUT, prompt])
child.before = '0'
child.sendline('%s' % cmd2exec)
child.expect([pexpect.EOF, prompt])

# Collected data processing
result = child.before

# logon to the machine returns a lot of garbage; the executed command's
# output is at the 57th position
print result.split('\r\n')[57]
result = result.split('\r\n')[57]
How can I simply get the returned value, ignoring the "Last successful login" and "(c) Copyright" stuff, and without having to worry about the value's exact position?
Thanks!
If you have access to the server to which you are logging in, you can try creating a file named .hushlogin in the home directory. The presence of this file silences the standard MOTD greeting and similar stuff.
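For reference, creating it takes one line on the remote machine (the Python below is just the equivalent of touch ~/.hushlogin):

import os

# The mere presence of an (empty) ~/.hushlogin suppresses the MOTD
open(os.path.expanduser('~/.hushlogin'), 'a').close()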
Alternatively, try ssh -T, which will disable terminal allocation entirely; you won't get a shell prompt, but you may still issue commands and read the response.
There is also a similar thread on ServerFault which may be of some use to you.
If the command isn't interactive, you can just run ssh HOST COMMAND to run the command without all the login excitement happening at all. If the command is interactive, you can frequently use the ssh -t option (ssh -t HOST COMMAND) to force pseudo-tty allocation and trick the remote process to think that it's running attached to a TTY.
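Applied to the snippet in the question, that could look like this sketch (the values below are placeholders standing in for the question's rem_user, host, spass and cmd2exec):

import pexpect

rem_user, host = 'user', 'example.com'  # placeholders
spass = 'secret'
cmd2exec = 'uname -a'

# Running the command directly skips the interactive shell entirely:
# no MOTD, no prompt, just the command's output followed by EOF
child = pexpect.spawn('/usr/bin/ssh %s@%s %s' % (rem_user, host, cmd2exec))
child.expect('[pP]assword: ', timeout=5)
child.sendline(spass)
child.expect(pexpect.EOF)
result = child.before.strip()  # everything the command printed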
I have used paramiko to automate SSH connections, and I have found it useful. It can deal with greetings and silent execution.
http://www.lag.net/paramiko/
Hey there, you can kill all that noise by using the sys module and a small class:
import logging
import sys

class StreamToLogger(object):
    """
    Fake file-like stream object that redirects writes to a logger instance.
    """
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.log_level, line.rstrip())

# Make stdout and stderr go through the loggers
stdout_logger = logging.getLogger('STDOUT')
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl

stderr_logger = logging.getLogger('STDERR')
sl = StreamToLogger(stderr_logger, logging.ERROR)
sys.stderr = sl
Can't remember where I found that snippet, but it works for me :)