I have an HTML page displayed using:
cherrypy.quickstart(ShowHTML(htmlfile), config=configfile)
Once the page is loaded (e.g. initiated via the command 'python mypage.py'), I would like to automatically launch the browser to display the page (e.g. via http://localhost:8000). Is there any way I can achieve this (e.g. via a hook within CherryPy), or do I have to launch the browser manually (e.g. by double-clicking an icon)?
You can either hook your web browser into the engine start/stop lifecycle:
import webbrowser
import cherrypy

def browse():
    webbrowser.open("http://127.0.0.1:8080")

cherrypy.engine.subscribe('start', browse, priority=90)  # after the HTTP server starts (priority 75)
Or, unpack quickstart:
import webbrowser

from cherrypy import config, engine, tree

config.update(configfile)
tree.mount(ShowHTML(htmlfile), '/', configfile)

if hasattr(engine, "signal_handler"):
    engine.signal_handler.subscribe()
if hasattr(engine, "console_control_handler"):
    engine.console_control_handler.subscribe()

engine.start()
webbrowser.open("http://127.0.0.1:8080")
engine.block()
Related
So I'm creating a website and I was wondering if I could create a button/link to open a file, not another website, but a .py file.
HTML has an object tag. Create a button and an object element; when the button is clicked, change the data attribute value of the object tag.
It will work.
Use any lightweight Python web framework, such as Flask.
Then use a script like this to run it on a website:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # do your things here
    return "It works!"

if __name__ == "__main__":
    app.run()
Then run the server:
python script.py
Now the server is running on port 5000.
So when you visit the website (e.g. http://rasp_pi_ip:5000/), your script will run.
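One caveat worth adding (my note, based on Flask's documented default): app.run() binds to 127.0.0.1, so the server is only reachable from the machine it runs on. To reach it at http://rasp_pi_ip:5000/ from another machine, bind to all interfaces:

if __name__ == "__main__":
    # 0.0.0.0 listens on all interfaces, so other machines can connect
    app.run(host="0.0.0.0", port=5000)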
I am working with Robot Framework and am stuck at the login screen because my application uses self-signed certificates. How can I set the flag to accept all certificates programmatically? I tried to create a Firefox profile and set
fp.set_preference("webdriver_accept_untrusted_certs", True)
but it didn't work.
I know there is a workaround where you accept the certificates manually, but I don't want to do that by hand.
My Python code looks like this:
def create_profile(path):
    from selenium import webdriver
    fp = webdriver.FirefoxProfile(profile_directory=path)
    fp.set_preference("browser.download.folderList", 2)
    fp.set_preference("browser.download.manager.showWhenStarting", False)
    fp.set_preference("browser.helperApps.alwaysAsk.force", False)
    fp.set_preference("webdriver_accept_untrusted_certs", True)
    fp.set_preference("webdriver_assume_untrusted_issuer", False)
    fp.update_preferences()
    return fp.path
And the Robot test case looks like this:
Launch Infoscale Access
    ${random_string} =    Generate Random String    8
    ${path}    Catenate    SEPARATOR=\\    ${EXECDIR}    ${random_string}
    Create Directory    ${path}
    ${profile} =    create_profile    ${path}
    Log    ${path}
    Open Browser    ${LOGIN URL}    ${BROWSER}    ff_profile_dir=${profile}
Below are the versions of the libraries I am using:
Python 2.7
Selenium 2.32.0
Robot Framework 2.6.0
geckodriver-v0.11.1-win64
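Not an answer from this thread, but a minimal sketch of something worth trying: Selenium's FirefoxProfile exposes accept_untrusted_certs and assume_untrusted_cert_issuer properties that set those same preferences through the supported path (assuming a Selenium version that has them):

from selenium import webdriver

def create_profile(path):
    fp = webdriver.FirefoxProfile(profile_directory=path)
    fp.accept_untrusted_certs = True         # wraps webdriver_accept_untrusted_certs
    fp.assume_untrusted_cert_issuer = False  # wraps webdriver_assume_untrusted_issuer
    fp.update_preferences()
    return fp.path

Also note that Selenium 2.x predates geckodriver entirely (geckodriver support arrived with Selenium 3, which uses the W3C acceptInsecureCerts capability for this), so the version mix listed above may itself be the problem.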
I am trying to use spynner for web scraping. Below I use www.google.com as an example: I want to search for "Barack Obama" automatically using spynner. However, the browser window created by spynner keeps not responding, and the search string ("Barack Obama") is never filled into the search box (you will see this when you run the code below yourself).
import spynner
browser = spynner.Browser()
browser.show()
browser.load("https://www.google.com")
browser.wait_page_load()
browser.fill("input[name=q]", "Barack Obama")
browser.click("input[name=btnK]")
The input fields are identified correctly in my code (you can check for yourself), so why is this not working?
Try this code snippet. I used Qt:
import spynner
from PyQt4.QtCore import Qt

b = spynner.Browser()
b.show()
b.load("http://www.google.com")
b.wk_fill('input[name=q]', 'soup')           # fill the search box
b.sendKeys("input[name=q]", [Qt.Key_Enter])  # submit with Enter instead of clicking the button
b.browse()                                   # let the user interact with the page
I'm trying to log in to https://accounts.coursera.org/ using twill for Python.
I tried this piece of code:
import twill

b = twill.get_browser()
b.go("https://accounts.coursera.org/")
b.showforms()
twill doesn't detect the form on the page, and the showforms method doesn't show anything!
Is this an internal issue in the twill package, or am I missing something?
import twill
import webbrowser

b = twill.get_browser()
b.go("https://accounts.coursera.org/")
page = b.result.get_page()

tmp_page = "tmp.html"
with open(tmp_page, "w") as f:
    f.write(page)
webbrowser.open(tmp_page)
# b.showforms()
I get a page that says..
Please use a modern browser with JavaScript enabled to use Coursera.
So I suspect that twill doesn't include a JavaScript interpreter.
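That's right, as far as I can tell: twill fetches and parses the raw HTML and has no JavaScript engine, so a form that is built client-side never exists in what twill sees. A minimal sketch of an alternative (my example, not part of the original answer), assuming Selenium and Firefox are installed:

from selenium import webdriver

driver = webdriver.Firefox()                 # any JavaScript-capable driver works
driver.get("https://accounts.coursera.org/")
html = driver.page_source                    # the HTML after JavaScript has run
driver.quit()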
I am writing a Python script to scrape a webpage. I have created a WebKit WebView object and used the open method to load the URL, but I want to load the URL through a proxy.
How can I do this? How do I integrate WebKit with a proxy? Which WebKit class supports proxies?
Try the code snippet below (adapted from an external reference).
import gtk
import webkit
import ctypes

# Load the underlying C libraries so we can set the proxy on libsoup's session
libgobject = ctypes.CDLL('/usr/lib/libgobject-2.0.so.0')
libsoup = ctypes.CDLL('/usr/lib/libsoup-2.4.so.1')
libwebkit = ctypes.CDLL('/usr/lib/libwebkit-1.0.so')

proxy_uri = libsoup.soup_uri_new('http://127.0.0.1:8000')  # set your proxy URL
session = libwebkit.webkit_get_default_session()
libgobject.g_object_set(session, "proxy-uri", proxy_uri, None)

w = gtk.Window()
s = gtk.ScrolledWindow()
v = webkit.WebView()
s.add(v)
w.add(s)
w.show_all()
v.open('http://www.google.com')
gtk.main()  # run the GTK main loop so the window stays open
Hope it helps.
If you're on PyQt, you can set an application-wide proxy with QNetworkProxy.setApplicationProxy; if you're using PyGI, use this snippet:
from gi.repository import WebKit
from gi.repository import Soup

proxy_uri = Soup.URI.new("http://127.0.0.1:8080")
session = WebKit.get_default_session()
session.set_property("proxy-uri", proxy_uri)
How about a solution that's already made?
PyPhantomJS is a minimalistic, headless, WebKit-based, JavaScript-driven tool. It is written in PyQt4 and Python. It runs on Linux, Windows, and Mac OS X.
It gives you access to a full headless WebKit browser, controllable via scripts written in JavaScript, with the ability to do various things, among which are screen scraping and proxy support. It is driven from the command line.
You can see the API here.
* When I say screen scraping, I mean you can scrape page content or even save page renders to a file. There's even a screen-scraping JS library already written here.