I have an issue I can't resolve on my own, and none of the advice I could find elsewhere was helpful at all.
I'm using the http.server and socketserver packages and I want to pass variables from my main .py file to HTML pages. How do I do that? I've even found mentions of POST methods, but even then there was no answer as to how to actually get those variables into the HTML.
Thanks for any advice.
So I have created a game (of sorts) in Python that I want to embed into an HTML web page so I can add UI features. I have used the <py-script> tags to do this, but I am having issues with importing packages, and it also clutters up the code. So is there a way to link to the Python file instead, like I would a JS or CSS file?
I apologise in advance for any ambiguity or poor phrasing in my question; I am new to programming and don't really have anyone to turn to when I need help, so I have to use SO for even the most minor errors.
If you’re using a <py-script> tag, you can use the src attribute to reference a URL where the relevant Python code is located. In this case, any code written within the tag itself (that is, in the HTML page) is ignored. For example:
<py-script src="some/url/with/code.py"></py-script>
Note that the attribute is a URL, not a local file path, so you’ll likely want to use a small server program to make the Python file available on the network. Running python -m http.server from the command line will do.
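A minimal page, assuming PyScript's hosted assets and a game.py served locally on port 8000 (the file name and port are just for illustration), might look roughly like this:

    <!DOCTYPE html>
    <html>
      <head>
        <link rel="stylesheet" href="https://pyscript.net/latest/pyscript.css" />
        <script defer src="https://pyscript.net/latest/pyscript.js"></script>
      </head>
      <body>
        <!-- game.py is served separately, e.g. by running "python -m http.server 8000" -->
        <py-script src="http://localhost:8000/game.py"></py-script>
      </body>
    </html>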
I am having a problem posting map data to PostGIS via Apache2 --> GeoServer --> OpenLayers on Ubuntu 12.04.
I am receiving data from geoserver just fine but unable to post new data back.
The post error is:
    XML Parsing Error: not well-formed
    Location: moz-nullprincipal:
    #!/usr/bin/env python
    -^
What I get for a response is the text of the proxy.cgi script provided by OpenLayers. I have edited this script to include all sources found in the XML formed by the request, to make sure that I have included all URLs.
I have Python, Python 2 and Python 2.7 available, but using any of these produces the same result. All includes appear to be loading correctly.
I have read numerous posts related to this issue, but none have provided a solution. I used to be able to bypass the same-domain issue by creating an index.html outside the apache-tomcat directory that would define an iframe calling my actual site.html residing in /geoserver/www. This no longer appears to work, hence my proxy problem. This project is on hold until this issue is solved.
Any help would be greatly appreciated.
Thanks, Larry
I found another way to do it. Rather than using the OpenLayers proxy, I found this blog:
http://bikerjared.wordpress.com/2012/10/18/ubuntu-12-04-mod-proxy-install-and-configuration/
which provides a very good tutorial on setting up an Apache2 proxy; that fixed my problem nicely.
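For anyone else in the same spot, the gist of that kind of setup is roughly the following (the module names are standard Apache ones, but the port and path assume GeoServer running on a local Tomcat at 8080):

    # enable the proxy modules once:
    #   sudo a2enmod proxy proxy_http
    # then, inside the relevant VirtualHost, forward /geoserver to Tomcat:
    ProxyPass        /geoserver http://localhost:8080/geoserver
    ProxyPassReverse /geoserver http://localhost:8080/geoserver

With the map pages and GeoServer served from the same Apache origin, the OpenLayers proxy.cgi workaround is no longer needed.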
I want to make a Python script available as a service on the net. The script, which is my first 'proper' Python program, takes a txt file as an argument and writes an image into the working directory. So:
How difficult is it for somebody who is new to Python and web development?
How much work is it?
Do I need a framework (Django, cherryPy, web2py)?
Are there good tutorials?
How do I prevent the server from being compromised?
What are my next steps?
==> What is the easiest way?
In the end it is enough if it is a white page with some text and a button which, when clicked, opens a file dialog. After the txt file is processed, the server should just return the image that was written to the hard drive. I already have access, through a friend, to a server which has Ubuntu installed.
[update]
Thanks for all your answers. After reading them I want to stress again, that I want to have it as minimal as possible. Srikar's suggestion sounds like the easiest one:
Put it in executable directory of your OS (commonly known as CGI path). Provide a simple HTML form & upon form submission hit this script which executes & returns back the image you want to display.
Any objections or comments? Do you know any tutorials for that?
[update 2]
I found this SO answer: File Sharing Site in Python. Is this a sensible approach?
It's not too difficult. Actually, it sounds like a good first project.
That's too subjective to answer. Anywhere from an hour to days.
No, you don't need one, but I'd use one if I were you. They abstract away some of the stuff you really don't care about, and you'll learn a tool you can use again in the future.
Plenty. If you want a real rundown of how Python works for the web, read the HOWTO from Python.org. If you just want to learn how to do this one project, pick a framework and do their tutorial.
This question is so broad and complex that I'm not going to try to answer it. Search this site, or Google, for questions like that.
Your next step should be to pick a framework; I've used Django successfully. Just download it, follow the installation instructions, and work your way through their tutorial; it should tell you everything you need to know to do what you want. If you still have questions once you've learned how to do the basics, come back and ask again!
Edit: The answer to that other question will certainly work for you. There, they just receive a GET request and respond with data from a Python file. You need to receive a GET request, respond with an HTML page (easy enough), then respond to a POST request that includes an uploaded file (slightly more complicated), run your Python routine on the uploaded file, and then respond with the created image (or a link to it).
Take a look at this page which includes a simple Python script to do file uploads. You should easily be able to modify it to do what you want.
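As a rough sketch of that flow on the CGI side (the field name "txtfile" and the make_image() routine are placeholders for your own form field and processing code, not anything from the linked page):

    #!/usr/bin/env python
    # Minimal CGI sketch: accept an uploaded txt file, run the processing routine,
    # and send back the resulting image. Field and function names are illustrative.
    import sys
    import cgi
    import cgitb; cgitb.enable()        # show tracebacks in the browser while debugging

    from myscript import make_image     # hypothetical: your existing routine

    form = cgi.FieldStorage()
    fileitem = form["txtfile"]          # matches <input type="file" name="txtfile">

    if fileitem.filename:
        with open("/tmp/input.txt", "wb") as f:
            f.write(fileitem.file.read())
        image_path = make_image("/tmp/input.txt")   # writes e.g. a PNG and returns its path

        print("Content-Type: image/png")
        print("")                                   # blank line ends the HTTP headers
        sys.stdout.flush()
        with open(image_path, "rb") as img:
            sys.stdout.buffer.write(img.read())     # Python 3; on Python 2 use sys.stdout.write
    else:
        print("Content-Type: text/plain")
        print("")
        print("No file was uploaded.")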
How difficult is it for somebody who is new to Python and web development?
Depends on your level of knowledge.
How much work is it?
Depends on which method you choose to solve the problem.
Do I need a framework (Django, cherryPy, web2py)?
Not necessarily - you could get started with the cgi module (http://docs.python.org/library/cgi.html); a minimal sketch follows this list of answers.
Are there good tutorials?
Yes, there are plenty. The Python docs are an excellent place to start.
How do I prevent the server from being compromised?
Again, depends on the method you choose to solve the problem, although there are commonalities.
What are my next steps?
Dare I say it again, choose a method, read the docs, have a play!
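For instance, a bare-bones CGI script (the file name and the "name" parameter are made up for illustration) can be as small as:

    #!/usr/bin/env python
    # hello.py - bare-bones CGI sketch: drop it into Apache's cgi-bin and make it executable.
    import cgi

    form = cgi.FieldStorage()
    name = form.getvalue("name", "world")   # hypothetical query-string/form parameter

    print("Content-Type: text/html")
    print("")                               # blank line ends the HTTP headers
    print("<html><body><h1>Hello, %s!</h1></body></html>" % name)

From there it is a matter of swapping the greeting for your own processing and output.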
If it's just as simple as you have described, then you might not even need Django. You could simply use CGI scripting. All of these design decisions depend on questions like:
Do you need (or foresee) SQL storage?
Or a content management system?
Will you need multiple-user support?
Do you need tight security?
Do you need different privileges for different users?
Do you need an Admin to manage your site?
If the answer to at least 60% of the above questions is yes, then you might consider Django; otherwise, just write a Python script. Put it in an executable directory of your OS (commonly known as the CGI path). Provide a simple HTML form and, upon form submission, hit this script, which executes and returns the image you want to display. So, it all depends on the features you need...
In the end, I created what I needed with Flask.
They have a well-documented pattern/tutorial on Uploading Files. The tutorial is understandable even for people with little Python and web experience.
It took me about two hours to get a first working version, and the resulting code was only 50 lines. This includes starting the web server, having an HTML file/form with a file upload, and serving a file back to the user.
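For reference, a minimal sketch along those lines (the upload folder, file names and the process() routine are illustrative, not the exact code I wrote):

    # app.py - minimal Flask upload sketch; paths, field names and process() are placeholders.
    import os
    from flask import Flask, request, send_file

    UPLOAD_FOLDER = "/tmp/uploads"
    os.makedirs(UPLOAD_FOLDER, exist_ok=True)

    app = Flask(__name__)

    FORM = """
    <!doctype html>
    <title>Upload a txt file</title>
    <form method="post" enctype="multipart/form-data">
      <input type="file" name="file">
      <input type="submit" value="Upload">
    </form>
    """

    def process(txt_path):
        # placeholder: run your real routine here and return the path of the image it writes
        return os.path.join(UPLOAD_FOLDER, "output.png")

    @app.route("/", methods=["GET", "POST"])
    def upload():
        if request.method == "POST":
            f = request.files["file"]                          # the uploaded txt file
            txt_path = os.path.join(UPLOAD_FOLDER, "input.txt")
            f.save(txt_path)
            return send_file(process(txt_path), mimetype="image/png")
        return FORM

    if __name__ == "__main__":
        app.run(debug=True)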
If I add a feed URL to Google Reader or to a desktop feed aggregator, I receive nice results. The URL is:
http://estaticos03.marca.com/rss/futbol_1adivision.xml
But when I fetch the same URL from a script (a Python script, using the feedparser library), I am getting slightly different content for the same feed (the title of each entry, for example, is different and all in uppercase).
I believe something is done on the server side to try to discourage people like me from parsing the content for their own projects (the feed is from a popular football newspaper), but I am not sure about it. I tried to pass some user agents (like the Google Reader one) but still no luck, so maybe they check the IP as well? I am really confused.
Any idea why this is happening?
Thanks!
AFAIK Google Reader does some "magic" in the content to beautify it. They strip some tags and styles to avoid breaking their interface.
Can you provide more details on the differences?
Did you change the user agent of your script? Try to mimic Firefox and see what happens.
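If not, feedparser lets you pass one directly via the agent argument (the UA string below is just an example Firefox string):

    import feedparser

    url = "http://estaticos03.marca.com/rss/futbol_1adivision.xml"
    ua = "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0"

    feed = feedparser.parse(url, agent=ua)   # feedparser.parse() accepts an `agent` keyword
    print(feed.entries[0].title)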
All right folks, I found it. I analyzed the source XML received (as @TryPyPy suggested). I had been trusting the feedparser library too much. The latest official version (4.1) has a bug where it mistakenly takes the title tag from the media namespace instead of the original one:
http://code.google.com/p/feedparser/issues/detail?id=76
So I reinstalled from trunk and now everything is OK. Thanks for helping anyway!
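For anyone hitting the same thing, a quick way to spot the mismatch is to compare what feedparser returns against the raw XML (a small sketch; on Python 2 use urllib2 instead of urllib.request):

    import urllib.request
    import feedparser

    url = "http://estaticos03.marca.com/rss/futbol_1adivision.xml"

    raw = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    parsed = feedparser.parse(url)

    # With the buggy 4.1 release, entry.title came from <media:title> (the uppercase one)
    # rather than from the feed's own <title> element you can see in `raw`.
    print(parsed.entries[0].title)
    print(raw[:500])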
So I've just started learning Python on WAMP. I've got the results of an HTML form using cgi, and successfully performed a database search with MySQLdb. I can return the results to a page that ends with .py by using print statements in the Python CGI code, but I want to create a web page that's .html and have that returned to the user, and/or keep them on the same web address when the database search results return.
Thanks,
Paul
Edit: To clarify: on my local machine I see localhost/search.html in the address bar when I submit the HTML form, and I receive a results page at localhost/cgi-bin/searchresults.py. I want to see the results on localhost/results.html or localhost/search.html. If this was on a public server I'm ASSUMING it would return .../cgi-bin/searchresults.py; the last time I saw /cgi-bin/ directories in a URL was in the 90s. I've glanced at AddHandler, as David suggested, but I'm not sure if that's what I want.
Edit: Thanks, all of you, for your input. Yep, without using frameworks, mod_rewrite seems the way to go, but having looked at that, I decided to save myself the trouble and go with Django on mod_wsgi, mainly because of the size of its user base and the amount of documentation. I might switch to a lighter/more customisable framework once I've got the basics.
First, I'd suggest that you remember that URLs are URLs and that file extensions don't matter, and that you should just leave it as it is.
If that isn't enough, then remember that URLs are URLs and that file extensions don't matter, and configure Apache to use a different rule for determining what is a CGI program rather than a static file to be served up as-is. You can use AddHandler to add a handler for files on the hard disk with a .html extension.
Alternatively, you could use mod_rewrite to tell Apache that …/foo.html means …/foo.py.
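In Apache configuration terms, the two approaches look roughly like this (a sketch only; it assumes CGI execution is already allowed for the directory in question, and the rewrite target is illustrative):

    # Option 1: treat .html files in this directory as CGI programs
    AddHandler cgi-script .html
    Options +ExecCGI

    # Option 2: quietly rewrite requests for foo.html to the real foo.py script
    RewriteEngine On
    RewriteRule ^(.*)\.html$ $1.py [L]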
Finally, I'd suggest that if you do muck around with what URLs look like, you remove any sign of something that looks like a file extension (so that …/foo is requested rather than …/foo.anything).
As for keeping the user on the same address for results as for the request … that is just a matter of having the program output the basic page, without results, if it doesn't get the query string parameters that indicate a search term has been passed.
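A rough sketch of that idea in a single CGI script (the parameter name "q" and run_search() are placeholders for your own parameter name and MySQLdb lookup):

    #!/usr/bin/env python
    # searchresults.py - one script that shows the search form when no query string is
    # present and shows results when a search term is passed. "q" and run_search() are
    # placeholders for your own parameter name and database code.
    import cgi

    from mydb import run_search   # hypothetical module wrapping your MySQLdb search

    form = cgi.FieldStorage()
    term = form.getvalue("q")

    print("Content-Type: text/html")
    print("")                     # blank line ends the HTTP headers

    if term is None:
        # No search term yet: output the basic page with a form pointing back at this script.
        print('<form action="" method="get">'
              '<input name="q"> <input type="submit" value="Search"></form>')
    else:
        print("<ul>")
        for row in run_search(term):
            print("<li>%s</li>" % row)
        print("</ul>")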