Executing Code in Python Web - python

I'm creating a webpage that will show me the SSIDs available in my network.
For this I use this command:
nm-tool | grep "Infra" | cut -d " " -f5 > /home/nunukene/SSID3.txt
I'm saving the output into a file called SSID3.txt, to later open it using open(), read() and str.split().
My problem is that the command won't get executed from the page; the file SSID3.txt won't be created.
This is my website code so far:
#!/usr/bin/python
import os
import subprocess
import cgitb
cgitb.enable()
a=os.system("""nm-tool | grep "Infra" | cut -d " " -f5 > /home/nunukene/SSID3.txt""")
#SSIDStr = subprocess.check_output('nm-tool | grep "Infra" | cut -d " " -f5-6', shell=True)
#SSIDArray = str.split(SSIDStr)
ID = subprocess.check_output('ls', shell=True)
a='devilman'
print "Content-type:text/html\r\n\r\n"
print "<!DOCTYPE html>"
print "<html>"
print "<title> Not Hacking lol</title>"
print "<body>"
print "<h1> Join %s One of this networks <h1>" %(a)
print "</body>"
print "</html>"
I don't know how to get this process working before the rest!

I highly suggest using the logging module to help you diagnose where your problem is.
One reason your subprocess call didn't work: a shell pipeline can't simply be split into an argument list, because the "|" is interpreted by the shell, not by nm-tool. Either keep the whole pipeline in a single string and pass shell=True (as in your commented-out line), or chain the processes yourself with subprocess.Popen.
SSIDStr = subprocess.check_output('nm-tool | grep "Infra" | cut -d " " -f5', shell=True)
Using the subprocess call this way avoids the text file and the permission problems the web server user may be having when writing files.
You were also overwriting the "a" variable, and you weren't using the text file contents in the output.
And I hope the cgitb.enable() line wasn't the problem; I haven't seen that before. Have you thought of using Flask?
If you are using Python 3, the print statements need to be function calls.
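If the shell feels like a liability inside a CGI script, here is a minimal sketch of the chained-Popen alternative mentioned above; it assumes nm-tool, grep and cut are on the web server's PATH and that the Apache user is allowed to run them:
import subprocess
# connect the three commands so each one's stdout feeds the next one's stdin
nm = subprocess.Popen(['nm-tool'], stdout=subprocess.PIPE)
grep = subprocess.Popen(['grep', 'Infra'], stdin=nm.stdout, stdout=subprocess.PIPE)
cut = subprocess.Popen(['cut', '-d', ' ', '-f5'], stdin=grep.stdout, stdout=subprocess.PIPE)
nm.stdout.close()    # so nm-tool gets SIGPIPE if grep exits early
grep.stdout.close()
SSIDStr = cut.communicate()[0]
SSIDArray = SSIDStr.split()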

Related

CGI script end of script before headers

I have a VPS with Apache on it, and I want to execute a simple CGI script. I created a Python script, saved it as a .cgi file and placed it in the cgi-bin folder, but it only displays the error message: "End of script output before headers".
However, when I saved this script as a .py file and did not place it into the cgi-bin folder, it worked, but whenever there was an error it did not show any error message, just a server error. cgitb.enable() did not show any error either.
I tried giving the file 755 permissions, but that still did not solve my problem.
Where could the problem be?
Source code:
#!/usr/local/bin/python3
print("Content-type:text/html")
print("")
print ("<html><head><title>CGI</title></head>")
print ("<body>")
print ("hello cgi")
print ("</body>")
print ("</html>")
Thank you for your answers.
It might be more useful to use sys.stdout.write and add \n so that you know exactly what gets written to standard output:
#!/usr/bin/env python
import sys
import os
sys.stdout.write('Status: 200 OK\n')
sys.stdout.write('Content-Type: text/html; charset=utf-8\n\n')
sys.stdout.write('<html><body>Hello, world!</body></html>\n')
sys.exit(os.EX_OK)
Run the script with python script.py | cat -e so that you can verify line endings.
Make sure you aren't sending any more HTTP headers after you start sending content.

shell multipipe broken with multiple python scripts

I am trying to get the stdout of a python script to be shell-piped in as stdin to another python script like so:
find ~/test -name "*.txt" | python my_multitail.py | python line_parser.py
It should print some output, but nothing comes out of it.
Please note that this works:
find ~/test -name "*.txt" | python my_multitail.py | cat
And this works too:
echo "bla" | python line_parser.py
my_multitail.py prints out the new content of the .txt files:
from multitail import multitail
import sys
filenames = sys.stdin.readlines()
# we get rid of the trailing '\n'
for index, filename in enumerate(filenames):
    filenames[index] = filename.rstrip('\n')
for fn, line in multitail(filenames):
    print '%s: %s' % (fn, line),
    sys.stdout.flush()
When a new line is added to the .txt file ("hehe") then my_multitail.py prints:
/home/me/test2.txt: hehe
line_parser.py simply prints out what it gets on stdin:
import sys
for line in sys.stdin:
    print "line=", line
There is something I must be missing. Please community help me :)
There's a hint if you run your line_parser.py interactively:
$ python line_parser.py
a
b
c
line= a
line= b
line= c
Note that I hit ctrl+D to provoke an EOF after entering the 'c'. You can see that it's slurping up all the input before it starts iterating over the lines. Since this is a pipeline and you're continuously sending output through to it, this doesn't happen and it never starts processing. You'll need to choose a different way of iterating over stdin, for example:
import sys
line = sys.stdin.readline()
while line:
    print "line=", line
    line = sys.stdin.readline()
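If you would rather keep a for loop, a small sketch in the same Python 2 style: wrapping readline in iter() with an empty-string sentinel gives the same line-at-a-time behaviour without the read-ahead buffering that plain "for line in sys.stdin" uses:
import sys
# iter() calls readline() until it returns '' at EOF, so each line is
# processed as soon as it arrives instead of after all input has been read
for line in iter(sys.stdin.readline, ''):
    print "line=", line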

Script to capture everything on screen

So I have this Python 3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to SSH to a remote host for a separate test.
Eventually I would like to hand this script over to the rest of my team; however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script', but I cannot find an automated way of starting script and then executing my Python script.
I have a command in /usr/bin/
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is a redirection of the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
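For what it's worth, a rough sketch of that pexpect-plus-ttyrec combination might look like the following; the recording file name session.tty is made up here, and the test command is the one from the question:
import pexpect
# launch ttyrec under pexpect so the whole test session, including the
# interactive parts, is recorded to session.tty for later playback
child = pexpect.spawn('ttyrec', ['-e', 'python3.5 /home/centos/scripts/test.py', 'session.tty'])
child.interact()   # attach the real terminal so a user can answer the prompts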
I actually managed to do it in Python 3; it took a lot of work, but here is the Python solution:
from subprocess import Popen, PIPE
# LOG_RUN_OUTPUT (the log file path) and ssh (a paramiko.SSHClient) are
# assumed to be defined at module level in the original script
def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)
def execute(cmd, store=True):
    proc = Popen(cmd.encode("utf8"), shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output
# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except:
            input("Paused, because " + command + " failed to run.\n Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            # decode the byte streams so they format cleanly into the template
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)

Subprocess won't take the output of a DXL script

I'm using Python's subprocess module to run a DXL script. My problem is that when I try to capture the output of my DXL script (in this example a print statement or an error message), it is shown in the command prompt, but when I try to catch it with stdout=subprocess.PIPE or subprocess.check_output it always returns an empty string. Is there a way to capture the output, or how could I get the error messages from DOORS?
It's important that you don't see the GUI of DOORS.
Here is a quick example that shows my problem:
test.dxl
print "Hello World"
test.py
import subprocess
doorsPath = "C:\\Program Files (x86)\\IBM\\Rational\\DOORS\\9.5\\bin\\doors.exe"
userInfo = ' -user dude -password 1234 -d 127.0.0.1 -batch ".\\test.dxl"'
dxl = " -W"
output = subprocess.check_output(doorsPath+dxl+userInfo)
print(output)
Edit: Using Windows 7, DOORS 9.5 and Python 2.7
I know this post is pretty old, but the solution to the problem is to use
cout << ... instead of print. You can override print as shown here:
DOORS Print Redirect Tutorial for print, cout and logfiles
I'm feeling lucky here:
change print "Hello World" to cout << "Hello World",
and userInfo = ' -user dude -password 1234 -d 127.0.0.1 -batch ".\\test.dxl > D:\\output.txt"', since from the cmd prompt the text can be redirected straight into a text file.
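Putting those two suggestions together, the Python side might look roughly like this; the path D:\output.txt and the login details are just the values from the question, and it assumes DOORS honours the redirection inside the -batch argument:
import subprocess
doors_path = "C:\\Program Files (x86)\\IBM\\Rational\\DOORS\\9.5\\bin\\doors.exe"
# run the DXL script in batch mode; its cout output is redirected by the
# command line into D:\output.txt instead of appearing on stdout
cmd = doors_path + ' -W -user dude -password 1234 -d 127.0.0.1 -batch ".\\test.dxl > D:\\output.txt"'
subprocess.check_call(cmd)
with open("D:\\output.txt") as log:
    print(log.read())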
Your script has many errors; try this link for a subprocess example,
and try this:
import subprocess
path = "C:\\Program Files (x86)\\IBM\\Rational\\DOORS\\9.5\\bin\\doors.exe"
userInfo = ["-user", "dude", "-password", "1234", "-d", "127.0.0.1", "-batch", ".\\test.dxl"]
proc = subprocess.Popen([path] + userInfo + ["-W"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out)
print(err)
I hope it works on your system!

Compare over directory and text file using Python

My goal is to compare two sets of data, one from a text file and one from a directory listing, and after comparing them, notify or display in the console which entries were not found, for example:
ls: /var/patchbundle/rpms/:squid-2.6.STABLE21-7.el5_10.x86_64.rpm NOT FOUND!
ls: /var/patchbundle/rpms/:tzdata-2014j-1.el5.x86_64.rpm
ls: /var/patchbundle/rpms/:tzdata-java-2014j-1.el5.x86_64.rpm
ls: /var/patchbundle/rpms/:wireshark-1.0.15-7.el5_11.x86_64.rpm
ls: /var/patchbundle/rpms/:wireshark-gnome-1.0.15-7.el5_11.x86_64.rpm
ls: /var/patchbundle/rpms/:yum-updatesd-0.9-6.el5_10.noarch.rpm NOT FOUND
It must look like that. So here's my Python code.
import package, sys, os, subprocess
path = '/var/tools/tools/newrpms.txt'
newrpms = open(path, "r")
fds = newrpms.readline()
def checkrc(rc):
    if(rc != 0):
        sys.exit(rc)
cmd = package.Errata()
for i in newrpms:
    rc = cmd.execute("ls /var/patchbundle/rpms/ | grep %newrpms ")
    if ( != 0):
        cmd.logprint ("%s not found !" % i)
checkrc(rc)
sys.exit(0)
newrpms.close
Please see the shell script below. This script executes fine, but I want to use another language, which is why I'm trying Python.
retval=0
for i in $(cat /var/tools/tools/newrpms.txt)
do
    ls /var/patchbundle/rpms/ | grep $i
    if [ $? != 0 ]
    then
        echo "$i NOT FOUND!"
        retval=255
    fi
done
exit $retval
Please see my Python code above. What is wrong? It is not executing the way the shell script does.
You don't say what the content of "newrpms.txt" is; you say the script is not executing how you want - but you don't say what it is doing; I don't know what package or package.Errata are, so I'm playing guess-the-problem; but lots of things are wrong.
if ( != 0): is a syntax error. If {empty space} is not equal to zero?
cmd.execute("ls /var/patchbundle/rpms/ | grep %newrpms ") is probably not doing what you want. You can't put a variable in a string in Python like that, and if you could newrpms is the file handle not the current line. That should probably be ...grep %s" % (i,)) ?
The control flow is doing:
Look in this folder, try to find files
Call checkrc()
Only quit with an error if the last file was not found
newrpms.close isn't doing anything, it would need to be newrpms.close() to call the close method.
You're writing shell-script-in-Python. How about:
import os, sys
retval=0
for line in open('/var/tools/tools/newrpms.txt'):
    rpm_path = '/var/patchbundle/rpms/' + line.strip()
    if not os.path.exists(rpm_path):
        print rpm_path, "NOT FOUND"
        retval = 255
    else:
        print rpm_path
sys.exit(retval)
Edited code slightly, and an explanation:
The code is almost a direct copy of the shell script into Python. It loops over every line in the text file, and calls line.strip() to get rid of the newline character at the end. It builds rpm_path which will be something like "/var/patchbundle/rpms/:tzdata-2014j-1.el5.x86_64.rpm".
Then it uses os.path.exists(), which tests whether a file exists and returns True if it does and False if it does not, and uses that test to set the error value and print the results the way the shell script prints them. This replaces the "ls ... | grep" part of your code for checking whether a file exists.
