Convert Python to HTML using Django - python

I have a Python program for customers to query prices. Each time, a customer inputs some necessary information, and the program calculates and returns the price. Note: during the calculation, the program also needs to query a third-party map service web API (such as the Google Maps API or a similar service) to get some information.
I have a website developed using web development tools such as Wix and Strikingly. They offer the ability to customize a web page by simply pasting in a block of HTML code. So, I want to study the possibility of using Django to convert my Python program into HTML (including adding some user interface elements such as a text box and a button), which could then be pasted into the website to form a unique webpage.
I am not sure whether this is doable, especially the part that connects to the third-party map service API. Would Django be able to convert this part to HTML automatically as well? (How would it deal with the API key and the connection?)

Python itself runs on the server (or console) and is meant to be the backend in site development, whereas HTML is only the frontend: it does no calculation or data fetching on its own. Wix is a frontend tool with some content management; it offers customization, but still only on the frontend (HTML/CSS), and there's nothing more you can do with its content management beyond the built-in table-like features. Trying to reuse the HTML generated by Wix will be painful because of its CSS name optimization, which makes it quite unscalable.
If you don't wish to learn frontend development at all, you could look for another HTML generator tool to produce the frontend code. From there, Django itself is capable of building the entire website, using the HTML you generated as a template and passing the data you've computed into that template. That's what Django is meant to do. In this case you would need to learn Django itself, which I would recommend if you intend to showcase your project as an interactive program rather than just console logs.
Another alternative is converting your Python code into JavaScript, which is capable of doing calculations and fetching from APIs, and which you can include directly in HTML with the `<script>` tag.
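To make the Django route concrete, here is a minimal sketch of a view that takes the customer's input from a form, calls the third-party map API server-side (so the API key never appears in the page's HTML), and renders the price into a template. The endpoint URL, field names, and pricing formula are hypothetical placeholders, not your actual program:

```python
# views.py - minimal sketch; the map-service URL, field names, and
# pricing formula below are placeholders, not a real API.
import requests
from django.shortcuts import render

MAP_API_KEY = "..."  # keep this server-side, e.g. in settings or an env var

def price_quote(request):
    price = None
    if request.method == "POST":
        origin = request.POST.get("origin", "")
        destination = request.POST.get("destination", "")
        # Query the third-party map service from the server, so the
        # API key is never exposed to the browser.
        resp = requests.get(
            "https://maps.example.com/distance",  # placeholder endpoint
            params={"from": origin, "to": destination, "key": MAP_API_KEY},
        )
        distance_km = resp.json().get("distance_km", 0)
        price = 2.50 + 1.20 * distance_km         # placeholder formula
    return render(request, "quote.html", {"price": price})
```

The quote.html template would hold the text boxes and button; Django fills in the computed price when it renders the page. This is also the answer to the API-key question: the key stays on the server, and only the finished HTML goes to the visitor.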

Related

The right API/scrape method to extract tables from a QlikView app

I'm trying to get some tables with specific filters from this QlikView page, for future analysis: http://transferenciasabertas.planejamento.gov.br/QvAJAXZfc/opendoc.htm?document=painelcidadao.qvw&lang=en-US&host=QVS%40srvbsaiasprd01&anonymous=true
I don't want to do it manually (downloading tables for every filter). Therefore, I searched for Python APIs on the QlikView website, but only found Qlik Sense APIs for SSE (like this https://github.com/qlik-oss/server-side-extension).
Is there any chance I could automate the retrieval process I described, using Python?
Server-side extensions are used for something else: they extend Qlik's functionality to process data (for example, running statistical functions on top of the displayed data when such functions do not exist in Qlik natively).
Interestingly, the portal link (http://transferenciasabertas.planejamento.gov.br) is a QlikView app that later redirects to one or more Qlik Sense apps. It seems that anonymous users are allowed on the platform, which makes automating data retrieval easier.
Qlik Sense communicates with the browser via WebSockets. So the answer to your question is yes: you can use Python to connect to the underlying Qlik Sense Engine, make some selections, and get the data back.
The less good news is that I don't think there is a dedicated Python library, so you'll have to send the raw WebSocket requests yourself. The documentation for the Engine API can be found on Qlik's help site.
If you are open to a JS solution, you can use Qlik's enigma.js library for Engine communication.
The WebSocket traffic can be monitored from the browser (to see what data is being sent/received and its format).
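As a starting point, here is a minimal sketch of sending a raw Engine API request from Python with the websocket-client package. The Engine API speaks JSON-RPC over a WebSocket; the host, the app path, and whether anonymous access permits this kind of connection are assumptions you would need to verify against the traffic you capture in the browser's dev tools:

```python
# Minimal sketch: talk JSON-RPC to the Qlik Engine over a WebSocket.
# The server host and app path are placeholders; confirm the real ones
# by watching the WebSocket traffic in the browser's dev tools.
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection(
    "wss://example-qlik-server/app/painelcidadao.qvw"  # placeholder URL
)
print(ws.recv())  # the Engine greets new sessions with a notification

# OpenDoc is a documented Engine API method on the Global object (handle -1).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "handle": -1,
    "method": "OpenDoc",
    "params": ["painelcidadao.qvw"],
}
ws.send(json.dumps(request))
print(ws.recv())  # the response contains a handle for follow-up requests
ws.close()
```

From there, each response hands back an object handle you use in the next request (to get a table object, apply selections, and page out the data), mirroring what enigma.js does for you in JS.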

Can't scrape non-HTML elements

I'm trying to scrape search results from a number of websites. The problem is that not all of these sites return their search results as plain HTML text; a lot of it is dynamically generated with JS, AJAX, etc. However, I can see exactly what I need by looking at the page with the Firefox inspector, since by then the scripts have all run and modified the HTML.
My question is: is there a way for me to download a webpage AFTER allowing the scripts to run, or at least to get them to run locally? That way, I'd get the final HTML.
For reference, I'm using Python.
Possible duplicate, although in that case the question concerns PHP and JS.
Sure. You have to provide some environment for the scripts (JS) to run in, and often to return a test value to the target server. That's not easy for server-side languages, so today we mostly leverage the browser-driving or browser-imitating tools mentioned there; see the sketch after the list below.
I've found for you the Python analog of the v8js PHP plugin: PyV8.
PyV8 is a Python wrapper for the Google V8 engine; it acts as a bridge between Python and JavaScript objects, and supports hosting Google's V8 engine in a Python script.
If properly configured, your scraper:
Gets the site's JS
Evaluates this JS through the given plugin
Gets access to the target HTML for further parsing.
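PyV8 on its own only runs JavaScript; it does not give you a DOM, so wiring it up for real pages takes work. A simpler illustration of the browser-driving approach is Selenium, which loads the page in a real browser, lets the scripts run, and hands you the final HTML. This sketch assumes Chrome and a matching chromedriver are installed; the URL is a placeholder:

```python
# Minimal sketch: fetch a page's HTML *after* its JavaScript has run,
# by driving a real (headless) Chrome browser with Selenium.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")               # no visible browser window
driver = webdriver.Chrome(options=options)

driver.get("https://example.com/search?q=term")  # placeholder URL
time.sleep(5)                # crude wait so JS/AJAX can finish; a real
                             # scraper would wait for a specific element
final_html = driver.page_source                  # the DOM after scripts ran
driver.quit()

print(final_html[:500])
```

`page_source` gives you the same post-script markup you see in the Firefox inspector, ready for your usual HTML parser.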

How to combine HTML + CSS code with a Python function?

I have zero experience with website development, but I am working on a group project and am wondering whether it would be possible to create an interaction between a simple HTML/CSS website and my Python function.
Required functionality:
I have to take a simple string input from a text box on the website and pass it into my Python function, which gives me a single list of strings as output. This list of strings is then passed back to the website. I would just like a basic tutorial for a website that achieves this. Please do not give me a link to the CGI Python docs, as I have already read them and would like a more basic and descriptive view of how this is done. I would really appreciate your help.
First you will need to understand HTTP. It is a text-based protocol.
I assume by "web site" you mean a User-Agent, like Firefox.
Now, you're talking about an input box; that means an HTTP request for your content has already been handled. In most web applications this would have been several requests (one for the dynamically generated application HTML, and more for the static CSS and JS files).
CGI is the most basic way to programmatically inspect already parsed HTTP requests and create HTTP responses from objects you've set.
Your application is simple enough that you could probably do all the HTTP parsing yourself to gain a basic understanding of what's going on, but you would still need to understand how to write a server that listens on a socket.
To avoid all that, just find a Python application server that has already implemented all of the above and much more. There are many Python application servers to choose from; use one with a small learning curve for something as simple as this. Some in this genre are labeled "micro-frameworks". A sketch with one of them follows.
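As an illustration, here is a minimal sketch with Flask (one such micro-framework). It serves a page with a text box, passes the submitted string to a placeholder Python function, and renders the resulting list of strings back into the page. The function body is a stand-in for yours:

```python
# Minimal Flask sketch: text box in -> Python function -> list of strings out.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """
<form method="post">
  <input type="text" name="query">
  <button type="submit">Run</button>
</form>
<ul>{% for item in results %}<li>{{ item }}</li>{% endfor %}</ul>
"""

def my_function(text):
    # Placeholder for your real function.
    return [word.upper() for word in text.split()]

@app.route("/", methods=["GET", "POST"])
def index():
    results = []
    if request.method == "POST":
        results = my_function(request.form["query"])
    return render_template_string(PAGE, results=results)

if __name__ == "__main__":
    app.run(debug=True)
```

Run it, open http://127.0.0.1:5000/, type a string, and the page redraws with the list your function returned. Flask handles all the HTTP parsing and socket listening described above.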
Have you considered creating an app on Google App Engine (https://developers.google.com/appengine/)?
Their Handling Forms tutorial seems to describe your problem:
https://developers.google.com/appengine/docs/python/gettingstartedpython27/handlingforms

Python 3 - way to interact with a web page

I have experience with reading and extracting HTML source 'as given' (via urllib.request), but now I would like to perform browser-like actions (such as filling in a form or selecting a value from an option menu) and then, of course, read the resulting HTML code as usual. I came across some modules that seemed promising, but they turned out not to support Python 3.
So I'm asking for the name of a library/module that does this, or a pointer to a solution within the standard library if one exists and I failed to see it.
Many websites (like Twitter, Facebook, or Wikipedia) provide APIs to let developers hook into their app and perform activities programmatically. For whatever website you wish to act on through code, first look for API support.
In case you need to do web scraping, you can use Scrapy, though at the moment it only supports Python 2.7.x. Alternatively, you can use requests as the HTTP client and Beautiful Soup for HTML parsing; see the sketch below.
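For example, "filling a form" usually boils down to sending the same POST request the browser would send. Here is a minimal sketch with requests and Beautiful Soup; the URL, field names, and CSS selector are placeholders you would read off the target form's HTML:

```python
# Minimal sketch: submit a form by POSTing its fields, then parse the result.
# URL, field names, and selector are placeholders; inspect the real <form>
# in the page source to find the actual values.
import requests
from bs4 import BeautifulSoup

response = requests.post(
    "https://example.com/search",                # placeholder form action
    data={"q": "python", "category": "books"},   # placeholder input names
)

soup = BeautifulSoup(response.text, "html.parser")
for link in soup.select("a.result"):             # placeholder CSS selector
    print(link.get_text(), link["href"])
```

Both requests and Beautiful Soup fully support Python 3, which addresses the compatibility problem in the question.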

Should I use Screen Scrapers or API to read data from websites

I am building a web application as a college project (using Python) where I need to read content from websites. It could be any website on the internet.
At first I thought of using screen scrapers like BeautifulSoup or lxml to read the content (the text written by authors), but I am unable to extract content with a single piece of logic, since every website is built to different standards.
Then I thought of using RSS/Atom (via Universal Feed Parser), but I could only get the content summary, and I want the full content, not just the summary.
So, is there a way to have one piece of logic by which we can read a website's content using libraries like BeautifulSoup, lxml, etc.?
Or should I use the APIs provided by the websites?
My job becomes easy if it's a Blogger blog, since I can use the Google Data API, but the trouble is: would I need to write code for every different API to do the same job?
What is the best solution?
Using the website's public API, when it exists, is by far the best solution. That is precisely why the API exists: it is the way the website administrators say "use our content". Scraping may work one day and break the next, and it does not imply the website administrators' consent to have their content reused.
You could look into content-extraction libraries; I've used Full Text RSS (PHP) and Boilerpipe (Java). Both have web services available, so you can easily test whether they meet your requirements. You can also download and run them yourself and further modify their behavior on individual sites. A sketch of this workflow follows.
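To make the workflow concrete: a common pattern is to take each entry's link from the feed (which, as the asker found, only carries the summary) and fetch the full page for a content-extraction step. A minimal sketch with Universal Feed Parser and requests; the feed URL is a placeholder, and the extraction step is left to whichever library or web service you choose:

```python
# Minimal sketch: feeds give you summaries plus links; fetch each link
# to get the full page, then hand it to a content extractor.
import feedparser  # Universal Feed Parser
import requests

feed = feedparser.parse("https://example.com/feed.xml")  # placeholder feed

for entry in feed.entries:
    print(entry.title)
    print(entry.summary[:100])    # the feed itself only has the summary
    full_page = requests.get(entry.link).text
    # full_page now holds the complete article HTML; pass it to a
    # content-extraction library/service (e.g. Boilerpipe) for clean text.
```

This gives you one piece of logic for any site with a feed, while the extraction library absorbs the per-site layout differences.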
