How to check results of wget/urllib2 in Python?

I am brand new to Python and am trying to write a monitor to determine whether a Java web app (WAR file) running on localhost (hosted by Apache Tomcat) is running or not. I had earlier devised a script that ran:
ps -aef | grep myWebApp
and then inspected the grep output to see whether any matching process IDs came back.
But it turns out that the host OS only sees the Tomcat process, not any web apps Tomcat is hosting. I next tried to see if Tomcat came with any kind of CLI that I could hit from the terminal, and it looks like the answer is no.
Now, I'm thinking of using wget or maybe even urllib2 to determine if my web app is running by having them hit http://localhost:8080/myWebApp and checking the results. Here is my best attempt with wget:
wgetCmd = "wget http://localhost:8080/myWebApp"
wgetResults = subprocess.check_output([wgetCmd], shell=True, stderr=subprocess.STDOUT)
for line in wgetResults.strip().split('\n'):
    if 'failed' in line:
        print "\nError: myWebApp is not running."
        sys.exit()
My thinking here is that, if the web app isn't running, wget's output should always contain the word "failed" (at least, in my experience). Unfortunately, when I run this, I get the following error:
Traceback (most recent call last):
  File "/home/myUser/mywebapp-mon.py", line 52, in <module>
    main()
  File "/home/myUser/mywebapp-mon.py", line 21, in main
    wgetResults = subprocess.check_output([wgetCmd], shell=True, stderr=subprocess.STDOUT)
  File "/usr/lib/python2.7/subprocess.py", line 544, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['wget http://localhost:8080/myWebApp']' returned non-zero exit status 4
Any thoughts as to what's going on here (what the error is)? Also, and more importantly, am I going about this the wrong way? Thanks in advance!
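As an aside on the traceback: check_output raises CalledProcessError whenever the command exits non-zero, and wget documents exit status 4 as a network failure (8 would be a server error response such as a 404), so the non-zero status is itself the "app is down" signal. Here is a hedged sketch that checks the exit status instead of grepping the output:
# a minimal sketch: rely on wget's documented exit status
# (0 = success, 4 = network failure, 8 = server error response)
import subprocess

rc = subprocess.call(["wget", "-q", "--spider", "http://localhost:8080/myWebApp"])
if rc != 0:
    print "\nError: myWebApp is not running (wget exit status %d)." % rc
The --spider option makes wget check the URL without saving anything to disk.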

I suggest you try the Requests module. It is much more user-friendly than wget or urllib. Try something like this:
>>> import requests
>>> r = requests.get('http://localhost:8080/myWebApp')
>>> r.status_code
200
>>> r.text
'Some text of your webapp'
EDIT: installation instructions: http://docs.python-requests.org/en/latest/user/install/
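Building on that answer, a hedged sketch of a complete check for the monitor (the function name and timeout are mine, not from the answer):
import requests

def webapp_is_running(url):
    # treat connection failures and non-200 responses as "not running"
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.exceptions.RequestException:
        return False

print webapp_is_running('http://localhost:8080/myWebApp')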

To check a URL:
import urllib2

def check(url):
    try:
        urllib2.urlopen(url).read()
    except EnvironmentError:
        return False
    else:
        return True
To investigate what kind of an error occurred you could look at the exception instance.
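For example, a sketch that distinguishes "Tomcat is down" from "Tomcat is up but the app is missing" (the function name and messages are mine):
import urllib2

def diagnose(url):
    # HTTPError (a URLError subclass) means the server answered with an
    # error status such as 404; a plain URLError means the connection failed
    try:
        urllib2.urlopen(url).read()
        return "webapp responded"
    except urllib2.HTTPError as e:
        return "Tomcat is up but the app returned HTTP %d" % e.code
    except urllib2.URLError as e:
        return "could not connect: %s" % e.reason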

You can also build on urllib2/requests and interact with Tomcat's manager application, if it's installed. By using the list method you can receive the following information:
OK - Listed applications for virtual host localhost
/webdav:running:0
/examples:running:0
/manager:running:0
/:running:0
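A hedged sketch of querying that list over HTTP (the path is /manager/text/list on Tomcat 7+, plain /manager/list on older versions; the credentials and app name here are placeholders, and the account needs the manager-script role):
import requests

r = requests.get('http://localhost:8080/manager/text/list',
                 auth=('admin', 'secret'))
for line in r.text.splitlines():
    if line.startswith('/myWebApp:'):
        print line  # e.g. "/myWebApp:running:0:myWebApp"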

Related

Execute a program with python, then send API commands to that program

I'm trying to write a python script that can launch DaVinci Resolve in headless mode, then send it some commands via its API, then close it.
What I'm looking for would look something like
Open resolve.exe with argument --nogui
Do stuff with the API here
Terminate this instance of Resolve
I've managed to launch an instance of Resolve in headless. But it always ends up being a subprocess of something else. While it's running as a subprocess, I can't get the API to communicate with it.
Here's the code I've tried:
import subprocess
args = [r"C:\Program Files\Blackmagic Design\DaVinci Resolve\Resolve.exe", '--nogui']
resolve_headless = subprocess.Popen(args)
from python_get_resolve import GetResolve
resolve = GetResolve()
This should return an object of Resolve, but it always fails.
I believe this is because it's running as a subprocess of my IDE.
I've also tried this
from subprocess import call
dir = r"C:\Program Files\Blackmagic Design\DaVinci Resolve"
cmdline = "Resolve.exe --nogui"
rc = call("start cmd /K " + cmdline, cwd=dir, shell=True)
This just has the same problem: Resolve ends up running as a subprocess of the Windows Command Processor.
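No answer is recorded in this thread, but one direction worth trying (my assumption, untested against the Resolve API) is to launch the process detached on Windows, so the child is not tied to the IDE's console:
# untested sketch: detach the child with Windows creationflags
import subprocess

DETACHED_PROCESS = 0x00000008  # value of subprocess.DETACHED_PROCESS (named in Python 3.7+)

proc = subprocess.Popen(
    [r"C:\Program Files\Blackmagic Design\DaVinci Resolve\Resolve.exe", "--nogui"],
    creationflags=DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP,
)
Whether GetResolve() can then attach to the detached instance is something this thread does not confirm.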

Script to capture everything on screen

So I have this python3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team; however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script', but I cannot find an automated way to start script and then execute my test script under it.
I have a command in /usr/bin/
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is a redirection of the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
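If changing the script itself is acceptable, the same tee effect can be had from inside Python; a minimal sketch (the class name and log path are mine, not from the answer):
import sys

class Tee(object):
    # duplicate everything written to a stream into a log file as well
    def __init__(self, stream, path):
        self.stream = stream
        self.log = open(path, 'a')
    def write(self, data):
        self.stream.write(data)
        self.log.write(data)
    def flush(self):
        self.stream.flush()
        self.log.flush()

sys.stdout = Tee(sys.stdout, 'output.log')
print("this line goes to both the terminal and output.log")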
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
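For reference, driving ttyrec from Python with pexpect might look like this (a sketch; it assumes ttyrec is on the PATH, and the output filename is a placeholder):
import pexpect

# record the whole interactive session to session.tty
child = pexpect.spawn('ttyrec -e "python3.5 /home/centos/scripts/test.py" session.tty')
child.interact()  # hand the tty to the user so interactive prompts still work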
I actually managed to do it in python3, took a lot of work, but here is the python solution:
from subprocess import Popen, PIPE

# LOG_RUN_OUTPUT (the log file path) and ssh (a connected paramiko.SSHClient)
# are defined elsewhere in the full script
def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except IOError:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    # run the command, capture stdout and stderr, then print and log both
    proc = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except Exception:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except Exception:
            input("Paused, because " + command + " failed to run.\n Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            # exec_command returns byte streams, so decode before formatting
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
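For completeness, a hypothetical call to the execute helper above (the command is a placeholder):
execute("ls -la /home/centos/scripts")  # runs the command, then prints and logs its output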

How to 'pipe' password to remote.update() with gitPython

I am trying to write a script (probably python) that needs to fetch a remote repo (housed with Stash via git) and check out a specific commit (based on the hash). The problem is that this needs to happen 'blindly' to the user, but it pauses for a password. I need to figure out a way to pipe (or proc.communicate(), or something) the password to the repo.fetch() or origin.update() call.
Currently, I have code that's something like this:
remoteUrl = 'http://uname@build:7990'
commitId = '9a5af460615'
pw = 'MyPassword'
repo = git.Repo('C:\\Myfolder\\MyRepo')
proc = subprocess.Popen('echo pw | repo.git.fetch()', shell=True, stdin = PIPE)
but it doesn't seem to work
If I don't use the echo/pipe, but instead follow repo.git.fetch() with proc.communicate(pw), I get this error:
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 941, in communicate
    self.stdin.write(input)
TypeError: 'str' does not support the buffer interface
Finally, I've also tried adding:
o = repo.remotes.origin
proc = subprocess.Popen('o.update()', shell=True, stdin = PIPE)
proc.communicate(pw)
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 941, in communicate
    self.stdin.write(input)
TypeError: 'str' does not support the buffer interface
But to no avail, as you can see from the error.
I think I'm over-complicating this; it seems gitpython probably has a good way to send the pw to o.update() or repo.git.fetch() without using subprocess?
EDIT:
I'm hoping for code something along these lines:
remoteUrl = 'http://uname@build:7990'
commitId = '9a5af460615'
pw = 'MyPassword'
repo = git.Repo('C:\\Myfolder\\MyRepo')
repo.fetch(remoteUrl)
repo.checkout(commitId) # or maybe repo.pull, or something?
That is more pseudocode than anything else, but it may help you see what I'm hoping for. Also, I want to force through any 'hiccups': I don't want detached-head warnings or anything; I want to completely replace the local working copy with the remote at the specified commit.
Instead of using http, use ssh.
Once you set up your ssh keys, authentication is done "silently" with the keys and there is no need to ever enter a password again.
Generating ssh keys:
ssh-keygen -t rsa -C 'email@address'
Once you create your keys, add them to the central machine or to your central git software, and update your fetch URL to use ssh instead of http.
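As an aside on the TypeError in the tracebacks above: on Python 3, communicate() expects bytes, not str, so proc.communicate(pw.encode('utf-8')) would silence that particular symptom. It would not help overall, though, because strings like 'o.update()' are handed to the shell literally rather than executed as Python.
With ssh keys in place, a hedged sketch of the fetch-and-checkout flow in GitPython (reusing the commitId from the question; force=True matches the "force through any hiccups" requirement):
import git

repo = git.Repo(r'C:\Myfolder\MyRepo')
repo.remotes.origin.fetch()                   # no password prompt once ssh keys are set up
repo.git.checkout('9a5af460615', force=True)  # runs `git checkout --force 9a5af460615`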

how to prevent failure of subprocess stopping the main process in python

I wrote a python script to run a command called "gtdownload" on a bunch of files with multiprocessing. The function "download" is where I am having trouble.
#!/usr/bin/env python
import os, sys, subprocess
from multiprocessing import Pool

def cghub_dnld_file(file1, file2, file3, np):
    <open files>
    <read in lines>
    p = Pool(int(np))
    map_args = [(line.rstrip(),name_lines[i].rstrip(),bar_lines[i].rstrip()) for i, line in enumerate(id_lines)]
    p.map(download_wrapper,map_args)

def download(id, name, bar):
    <check if file has been downloaded, if not download>
    <.....>
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    subprocess.call(dnld_cmd,shell=True)

def download_wrapper(args):
    return download(*args)

def main():
    <read in arguments>
    <...>
    cghub_dnld_file(file1,file2,file3,threads)

if __name__ == "__main__":
    main()
If a file does not exist in the database, gtdownload quits, which also kills my python job, with the following error:
Traceback (most recent call last):
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 102, in <module>
    main()
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 98, in main
    cghub_dnld_file(file1,file2,file3,threads)
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 22, in cghub_dnld_file
    p.map(download_wrapper,map_args)
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 250, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 554, in get
    raise self._value
OSError: [Errno 2] No such file or directory
The actual error message from gtdownload:
Welcome to gtdownload-3.8.5a.
Ready to download
Communicating with GT Executive ...
Headers received from the client: 'HTTP/1.1 100 Continue
HTTP/1.1 404 Not Found
Date: Tue, 29 Jul 2014 18:49:57 GMT
Server: Apache/2.2.15 (Red Hat and CGHub)
Strict-Transport-Security: max-age=31536000
X-Powered-By: PHP/5.3.3
Content-Length: 669
Connection: close
Content-Type: text/xml
'
Error: You have requested to download a uuid which either does not exist within the system, or has not yet reached the 'live' state. The requested action will not be performed. Please double check the supplied uuid or contact the helpdesk for further assistance.
I would like the script to skip the one that does not exist and start gtdownload on the next one. I tried piping the stderr of subprocess.call and checking it for the "error" keyword, but execution seems to stop at the subprocess.call line itself. Same thing with os.system.
I made an MCVE without the multiprocessing, and subprocess did not kill the main process at all. It looks like multiprocessing messes things up, although I had it run with 1 thread just for testing.
#!/usr/bin/env python
import subprocess
#THis is the id that gtdownload had problem with
id = "df1e073f-4485-4d5f-8659-cd8eaac17329"
link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
print dlnd_cmd
subprocess.call(dlnd_cmd,shell=True)
print "done"
Clearly multiprocessing conflicts with subprocess.call, but it is not clear to me why.
What is the best way to keep a failing subprocess from killing the main process?
Handle the exception in some appropriate way and move on, of course.
try:
    subprocess.call(dlnd_cmd)
except OSError as e:
    print 'failed to download: {!r}'.format(e)
However, this may not be appropriate here. The kinds of exceptions that subprocess.call raises are usually not transient things that you can just log and work around; if it's not working now, it will continue to not work forever until you fix the underlying problem (a bug in your script, or gtdownload not being installed right, or whatever).
For example, if the code you showed us is your actual code:
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
subprocess.call(dlnd_cmd)
… then this is guaranteed to raise an OSError for the reason explained in dano's answer: call (without shell=True) will try to take that entire string—spaces, shell-redirection, etc.—as the name of an executable program to find on your $PATH. And there is no such program. So it will raise an OSError(errno.ENOENT). (Which is exactly what you're seeing.) Just logging that doesn't do you any good; it's a good thing that your entire process is exiting, so you can debug that problem.
subprocess.call should not kill the main process. Something else must be wrong with your script, or your conclusion about the script's behaviour is wrong. Did you try printing some trace output after the subprocess call?
You have to use shell=True to use subprocess.call with a string for an argument (and with shell redirection):
subprocess.call(dlnd_cmd, shell=True)
Without shell=True, subprocess tries to treat your entire command string like a single executable name, which of course doesn't exist, and leads to the No such file or directory exception.
See this answer for more info on when to use a string vs. when to use a sequence with subprocess.
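Putting the two answers together, a hedged rewrite of the download function (the command string is the one from the question; returning a flag lets each pool worker skip a failed uuid and move on):
import subprocess

def download(id, name, bar):
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    rc = subprocess.call(dnld_cmd, shell=True)  # shell=True so the redirections work
    if rc != 0:
        print "skipping %s: gtdownload exited with status %d" % (id, rc)
        return False
    return True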

Is it possible to stream output from a python subprocess to a webpage in real time?

Thanks in advance for any help. I am fairly new to python and even newer to html.
I have been trying the last few days to create a web page with buttons to perform tasks on a home server.
At the moment I have a python script that generates a page with buttons:
(See the simplified example below; code removed to clean up the post.)
Then a python script which runs said command and outputs to an iframe on the page:
(See the simplified example below; code removed to clean up the post.)
This does output the entire finished output, but only after the command is finished. I have tried adding the -u option to the python script to run it unbuffered, and I have also tried using the Python subprocess module. If it helps, the types of commands I am running are apt-get update and other Python scripts for moving files and fixing folder permissions.
When run from a normal Ubuntu server terminal, it runs fine and outputs in real time; from my research, it should be outputting as the command runs.
Can anyone tell me where I am going wrong? Should I be using a different language to perform this function?
EDIT Simplified example:
initial page:
#runcmd.html
<head>
<title>Admin Tasks</title>
</head>
<center>
<iframe src="/scripts/python/test/createbutton.py" width="650" height="800" frameborder="0" ALLOWTRANSPARENCY="true"></iframe>
<iframe width="650" height="800" frameborder="0" ALLOWTRANSPARENCY="true" name="display"></iframe>
</center>
script that creates button:
cmd_page = '<form action="/scripts/python/test/runcmd.py" method="post" target="display" >' + '<label for="run_update">run updates</label><br>' + '<input align="Left" type="submit" value="runupdate" name="update" title="run_update">' + "</form><br>" + "\n"
print ("Content-type: text/html")
print ''
print cmd_page
script that should run command:
# runcmd.py:
import os
import pexpect
import cgi
import cgitb
import sys

cgitb.enable()
fs = cgi.FieldStorage()
sc_command = fs.getvalue("update")
if sc_command == "runupdate":
    cmd = "/usr/bin/sudo apt-get update"
    pd = pexpect.spawn(cmd, timeout=None, logfile=sys.stdout)
    print ("Content-type: text/html")
    print ''
    print "<pre>"
    line = pd.readline()
    while line:
        line = pd.readline()
I haven't tested the above simplified example, so I'm unsure if it's functional.
EDIT:
Simplified example should work now.
Edit:
With Imran's code below, if I open a browser to ip:8000 it displays the output just as if it were running in a terminal, which is exactly what I want. Except I am using an Apache server for my website and an iframe to display the output. How do I do that with Apache?
edit:
I now have the output going to the iframe using Imran's example below, but it still seems to buffer. For example:
If I have it run apt-get update (through the web server, using curl ip:8000), it runs fine in a terminal, but when outputting to the web page it seems to buffer a couple of lines => output => buffer => output until the command is done.
But running other python scripts the same way buffers and then outputs everything at once, even with the -u flag, while again running curl ip:8000 in a terminal outputs like normal.
Is that just how it is supposed to work?
EDIT 19-03-2014:
Any bash/shell command I run using Imran's way seems to output to the iframe in near real time, but if I run any kind of python script through it, the output is buffered and then sent to the iframe all at once.
Do I possibly need to PIPE the output of the python script that is run by the script that runs the web server?
You need to use HTTP chunked transfer encoding to stream unbuffered command line output. CherryPy's wsgiserver module has built-in support for chunked transfer encoding. WSGI applications can be either functions that return list of strings, or generators that produces strings. If you use a generator as WSGI application, CherryPy will use chunked transfer automatically.
Let's assume this is the program whose output will be streamed.
# slowprint.py
import sys
import time

for i in xrange(5):
    print i
    sys.stdout.flush()  # flush explicitly: stdout is block-buffered when it is a pipe
    time.sleep(1)
This is our web server.
2014 Version (older CherryPy version)
# webserver.py
import subprocess
from cherrypy import wsgiserver

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    proc = subprocess.Popen(['python', 'slowprint.py'], stdout=subprocess.PIPE)
    line = proc.stdout.readline()
    while line:
        yield line
        line = proc.stdout.readline()

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8000), application)
server.start()
2018 Version
#!/usr/bin/env python2
# webserver.py
import subprocess
import cherrypy

class Root(object):
    def index(self):
        def content():
            proc = subprocess.Popen(['python', 'slowprint.py'], stdout=subprocess.PIPE)
            line = proc.stdout.readline()
            while line:
                yield line
                line = proc.stdout.readline()
        return content()
    index.exposed = True
    index._cp_config = {'response.stream': True}

cherrypy.quickstart(Root())
Start the server with python webserver.py, then in another terminal make a request with curl and watch the output being printed line by line:
curl 'http://localhost:8000'
