I'm new to Python, and I'm trying to run a simple script (on a Mac, if that's important).
Now, this code gives me an Internal Server Error:
#!/usr/bin/python
print 'hi'
But this one works like a charm (the only difference is an extra print statement):
#!/usr/bin/python
print
print 'hi'
Any explanation? Thanks!
Update:
When I run this script from the Terminal everything is fine. But when I run it from the browser:
http://localhost/cgi-bin/test.py
I get this error (and again, only if I don't add the extra print statement).
I'm using an Apache server, of course.
It looks like you're running your script as a CGI script (your edit confirms that you're using CGI), and the initial (empty) print is required to signify the end of the headers.
Check Apache's error log (probably /var/log/apache2/error.log) to see if it says 'Premature end of script headers' (more info here).
EDIT: a bit more explanation:
A CGI script in Apache is responsible for generating its own HTTP response.
An HTTP response consists of a header block, an empty line, and the so-called body contents. Even though you should generate some headers, it's not mandatory to do so. However, you do need to output the empty line; Apache expects it to be there, and if it's not (or if you only output a body which can't be parsed as headers), Apache will generate an error.
That's why your first version didn't work, but your second did: adding the empty print added the required empty line that Apache was expecting.
This will also work:
#!/usr/bin/env python
print "Content-Type: text/html" # header block
print "Vary: *" # also part of the header block
print "X-Test: hello world" # this too is a header, but a made-up one
print # empty line
print "hi" # body
Related
I have a bit of code where I'm trying to capture the stdout:
import subprocess

def MediaInfo():
    cmd = ['MediaInfo.exe', 'videofile.mkv']
    test = subprocess.run(cmd, capture_output=True)
    info = test.stdout.decode("utf-8")
    print(info)
When using print or writing it to file, it looks fine. But when I use selenium to fill it into a message box:
techinfo = driver.find_element(By.NAME, "techinfo").send_keys(info)
there is an additional empty line between every line. Originally I had an issue where the stdout was a byte literal; it looked like b"This is the first line.\r\nThis is the second line.\r\n". Adding .decode("utf-8") fixed that, but I am wondering if, in certain instances, something is interpreting \r\n as two line breaks. I'm just not sure if the issue is with Selenium, subprocess, or something else. The webpage element Selenium is writing to doesn't seem to have an issue: it looks correct if I copy and paste it from the text file. Meaning, it's not just the way it's displayed; there are actually twice as many line feeds. Any ideas? I don't want to just loop through and delete the extra lines; that's too kludgy. I'm guessing this is an issue with Python 3, from what I've read.
send_keys() will send each key individually, which means "\r\n" is sent as two key presses. Replacing "\r\n" with "\n" before sending the text to the element should do the trick.
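A minimal sketch of that normalization (the sample string mirrors the byte literal from the question; the Selenium call is left as a comment since it needs a live driver):

```python
# MediaInfo's stdout arrives with Windows line endings; normalize them
# so send_keys() doesn't type "\r" and "\n" as two separate key presses
raw = "This is the first line.\r\nThis is the second line.\r\n"
info = raw.replace("\r\n", "\n")
print(repr(info))
# then: driver.find_element(By.NAME, "techinfo").send_keys(info)
```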
I'm using python to run "drush scr" (drupal 7 command line) with Popen. Typical usage is "drush scr file.php", where file.php is a file containing php code. I use my python code to generate the php code as a string subsequently sent to file.php, and I don't want to have to create file.php just to then immediately send it to "drush scr file.php".
I've tried
sub = subprocess.Popen(['drush', 'scr'], stdin=subprocess.PIPE)
ret = sub.communicate(input=createPHP())
where createPHP() creates the code that would otherwise be written to file.php and then used as subprocess.Popen(['drush', 'scr', 'file.php']) to mimic the command $ drush scr file.php
import subprocess

def createPHP():
    # dynamically create the PHP code as a multi-line string
    return phpString

sub = subprocess.Popen(['drush', 'scr'], stdin=subprocess.PIPE)
ret = sub.communicate(input=createPHP())
print(ret)
This code throws an error from drush but also returns (None,None) from the print command.
The following code correctly executes the drush command:
f = open("file.php", "w+")
f.write(createPHP())
f.close()
sub = subprocess.Popen(['drush', 'scr', 'file.php'])
ret = sub.communicate()
print(ret)
and also returns (None, None). The latter code definitely works, but I would prefer not to create and close files if possible, and I feel like I am just not using Popen correctly.
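For what it's worth, the no-temp-file pattern itself can be sketched like this. Here cat stands in for drush scr (since drush isn't assumed to be installed), and text=True lets communicate() accept a str instead of bytes, which is one common reason the piped version fails under Python 3:

```python
import subprocess

def createPHP():
    # stand-in for the real generator: build the PHP source as a string
    return "<?php print 'hello'; ?>\n"

# 'cat' echoes stdin back, standing in for ['drush', 'scr'];
# stdout=PIPE captures the output so communicate() returns it
sub = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE, text=True)
out, err = sub.communicate(input=createPHP())
print(out)
```

Note that communicate() returns (None, None) exactly when stdout and stderr are not piped, which is why both versions in the question print that.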
I'm passing a URL to a python script using cgi.FieldStorage():
http://localhost/cgi-bin/test.py?file=http://localhost/test.xml
test.py just contains
#!/usr/bin/env python
import cgi
print "Access-Control-Allow-Origin: *"
print "Content-Type: text/plain; charset=x-user-defined"
print "Accept-Ranges: bytes"
print
print cgi.FieldStorage()
and the result is
FieldStorage(None, None, [MiniFieldStorage('file', 'http:/localhost/test.xml')])
Note that the URL only contains http:/localhost - how do I pass the full encoded URI so that file is the whole URI? I've tried encoding the file parameter (http%3A%2F%2Flocalhost%2ftext.xml) but this also doesn't work
The screenshot shows that the output on the webpage isn't what is expected, but that the encoded URL is correct.
Your CGI script works fine for me using Apache 2.4.10 and Firefox (curl also). What web server and browser are you using?
My guess is that you are using Python's CGIHTTPServer, or something based on it. This exhibits the problem you identify: CGIHTTPServer assumes that it is being provided with a path to a CGI script, so it collapses the path without regard to any query string that might be present. Collapsing the path removes duplicate forward slashes as well as relative path elements such as "..".
If you are using this web server, I don't see any obvious way around it by changing the URL. You won't be using it in production anyway, so perhaps look at another web server such as Apache, nginx, or lighttpd.
The problem is with your query parameters; you should be encoding them:
>>> from urllib import urlencode
>>> urlencode({'file': 'http://localhost/test.xml', 'other': 'this/has/forward/slashes'})
'other=this%2Fhas%2Fforward%2Fslashes&file=http%3A%2F%2Flocalhost%2Ftest.xml'
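(The snippet above is Python 2; on Python 3 the same helper lives in urllib.parse:)

```python
from urllib.parse import urlencode

# the forward slashes and colon are percent-encoded, so the query
# string survives path collapsing intact
params = urlencode({'file': 'http://localhost/test.xml'})
print(params)  # file=http%3A%2F%2Flocalhost%2Ftest.xml
```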
I am trying to write a program that reads a webpage looking for file links, which it then attempts to download using curl/libcurl/pycurl. I have everything up to the pycurl correctly working, and when I use a curl command in the terminal, I can get the file to download. The curl command looks like the following:
curl -LO https://archive.org/download/TheThreeStooges/TheThreeStooges-001-WomanHaters1934moeLarryCurleydivxdabaron19m20s.mp4
This results in one redirect (a file that reads as all 0s on the output) and then it correctly downloads the file. When I remove the -L flag (so the command is just -O) it only reaches the first line, where it doesn't find a file, and stops.
But when I try to do the same operation using pycurl in a Python script, I am unable to successfully set FOLLOWLOCATION to 1 on the Curl object, which is supposed to be the equivalent of the -L flag. The Python code looks like the following:
c = pycurl.Curl()  # get a Curl object
fp = open(file_name, 'wb')
c.setopt(c.URL, full_url)  # set the url
c.setopt(c.FOLLOWLOCATION, 1)
c.setopt(c.WRITEDATA, fp)
c.perform()
When this runs, it gets to c.perform() and shows the following:
python2.7: src/pycurl.c:272: get_thread_state: Assertion `self->ob_type == p_Curl_Type' failed.
Is it missing the redirect, or am I missing something else earlier because I am relatively new to cURL?
When I enabled verbose output for the c.perform() step, I was able to uncover what I believe was the underlying problem. The first line of the verbose output indicated that an open connection was being reused.
I had originally packaged the code into an object-oriented setup, as opposed to a script, so the Curl object had been reused without being closed. Therefore, after the first connection attempt, which failed because I didn't set the options correctly, it was reusing the connection to the website/server (which presumably had the wrong connection settings).
The problem was resolved by having the script close any existing Curl objects and create a new one before the file download.
I can't seem to get the python module cgitb to output the stack trace in a browser. I have no problems in a shell environment. I'm running Centos 6 with python 2.6.
Here is an example simple code that I am using:
import cgitb; cgitb.enable()
print "Content-type: text/html"
print
print 1/0
I get an Internal Server error instead of the printed detailed report. I have tried different error types, different browsers, etc.
When I don't have an error, of course Python works fine, and it will print the error fine in a shell. The point of cgitb is to print a detailed report instead of returning an "Internal Server Error" in the browser for most exceptions. Basically, I'm just trying to get cgitb to work in a browser environment.
Any Suggestions?
Okay, I got my problem fixed, and the OP brought me to it: even though cgitb will output HTML by default, it will not output a header! And Apache does not like that and might give you some stupid error like:
<...blablabla>: Response header name '<!--' contains invalid characters, aborting request
It indicates that Apache was still working its way through the headers when it encountered HTML. Look at what the OP prints before the error is triggered: that is a header, and you need it, including the empty line.
I will just quote the docs:
Make sure that your script is readable and executable by "others"; the Unix file mode should be 0755 octal (use chmod 0755 filename).
Make sure that the first line of the script contains #! starting in column 1 followed by the pathname of the Python interpreter, for instance:
#!/usr/local/bin/python
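Put together, those two checks look like this on an actual script (test.py is the filename from the first question; the script body is the two-print version that worked):

```shell
# create a minimal CGI script, then apply the mode the docs call for
printf '#!/usr/bin/python\nprint\nprint "hi"\n' > test.py
chmod 0755 test.py
ls -l test.py    # mode column should read -rwxr-xr-x
head -1 test.py  # the #! line must be the very first line, column 1
```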