I'd like to use a standalone instance of CherryPy to serve several domains from a single server. I'd like each domain to be served from a completely separate CherryPy application, each with its own configuration file.
I played with cherrypy.dispatch.VirtualHost, but it seems like separate configuration files aren't possible.
A similar question (here) suggests that this is quite difficult, but doesn't explain why; that impression may simply be because no one answered the question.
This CherryPy recipe for multiple apps shows how to load multiple sandboxed apps with separate configuration files, but it looks like they are all being served from the same domain.
I can understand that the answer might be, "use CherryPy as a WSGI server behind Nginx or Apache," but I'd rather only deal with CherryPy on this particular server.
In the same repo there's a vhost recipe, but it uses a shared app. I don't see a way to get cherrypy.dispatch.VirtualHost working with separately mounted apps, because cherrypy.serving.request.app is set before the dispatcher is invoked. Say you have the following:
hostmap = {
    'api.domain.com': '/app1',
    'www.domain.com': '/app2',
}

cherrypy.tree.mount(App1(), '/app1', appConfig1)
cherrypy.tree.mount(App2(), '/app2', appConfig2)
All cherrypy.dispatch.VirtualHost does is prepend a domain prefix to the current URL; e.g. requesting http://www.domain.com/foo results in the internal path /app2/foo, which is passed to the next dispatcher, usually cherrypy.dispatch.Dispatcher. The latter, however, looks up the page handler on the current cherrypy.serving.request.app, which is set to an empty app because nothing in the CherryPy tree corresponds to the path /foo. So it finds nothing.
All you need here is to replace the prefixing with a change of the current app, that is, change that line to this:
cherrypy.serving.request.app = cherrypy.tree.apps[prefix]
But because cherrypy.dispatch.VirtualHost is pretty small, you can easily rewrite it in your own code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import cherrypy
from cherrypy._cpdispatch import Dispatcher

config = {
    'global': {
        'server.socket_host': '127.0.0.1',
        'server.socket_port': 80,
        'server.thread_pool': 8
    },
    'hostmap': {
        'api.domain.com': '/app1',
        'www.domain.com': '/app2'
    }
}

appConfig1 = {
    '/': {
        'tools.json_out.on': True
    }
}

appConfig2 = {
    '/': {
        'tools.encode.encoding': 'utf-8'
    }
}

def VirtualHost(nextDispatcher=Dispatcher(), useXForwardedHost=True, **domains):
    def dispatch(pathInfo):
        request = cherrypy.serving.request
        domain = request.headers.get('Host', '')
        if useXForwardedHost:
            domain = request.headers.get('X-Forwarded-Host', domain)

        prefix = domains.get(domain, '')
        if prefix:
            # Switch to the mounted app instead of prepending the prefix
            request.app = cherrypy.tree.apps[prefix]

        result = nextDispatcher(pathInfo)

        # Touch up staticdir config. See
        # https://bitbucket.org/cherrypy/cherrypy/issue/614.
        section = request.config.get('tools.staticdir.section')
        if section:
            section = section[len(prefix):]
            request.config['tools.staticdir.section'] = section

        return result

    return dispatch

class App1:

    @cherrypy.expose
    def index(self):
        return {'bar': 42}

class App2:

    @cherrypy.expose
    def index(self):
        return '<em>foo</em>'

if __name__ == '__main__':
    config['/'] = {'request.dispatch': VirtualHost(**config['hostmap'])}
    cherrypy.tree.mount(App1(), '/app1', appConfig1)
    cherrypy.tree.mount(App2(), '/app2', appConfig2)
    cherrypy.quickstart(config=config)
TLDR: azurerm_function_app_function will work fine on Terraform Apply, but disappears from Azure Portal afterwards.
I am trying to deploy an Azure Function via Terraform for months now and have not had any luck with it.
The terraform apply runs fine. I then go into the Azure Portal and look at the function app's functions, and this function is there. However, when I refresh the blade, the function disappears. I have made the same function and deployed it via VS Code with no issues, but with Terraform there is no luck.
resource "azurerm_linux_function_app" "main" {
  name                       = "tf-linux-app"
  location                   = azurerm_resource_group.main.location
  resource_group_name        = azurerm_resource_group.main.name
  service_plan_id            = azurerm_service_plan.main.id
  storage_account_name       = azurerm_storage_account.main.name
  storage_account_access_key = azurerm_storage_account.main.primary_access_key

  site_config {
    app_scale_limit          = 200
    elastic_instance_minimum = 0

    application_stack {
      python_version = "3.9"
    }
  }

  app_settings = {
    "${azurerm_storage_account.main.name}_STORAGE" = azurerm_storage_account.main.primary_connection_string
  }

  client_certificate_mode = "Required"

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_function_app_function" "main" {
  name            = "tf-BlobTrigger"
  function_app_id = azurerm_linux_function_app.main.id
  language        = "Python"

  file {
    name    = "__init__.py"
    content = file("__init__.py")
  }

  test_data = "${azurerm_storage_container.container1.name}/{name}"

  config_json = jsonencode({
    "scriptFile" : "__init__.py",
    "disabled"   : false,
    "bindings" : [
      {
        "name"       : "myblob",
        "type"       : "blobTrigger",
        "direction"  : "in",
        "path"       : "${azurerm_storage_container.container1.name}/{name}",
        "connection" : "${azurerm_storage_container.container1.name}_STORAGE"
      }
    ]
  })
}
As far as the Python script goes, I'm literally just trying the demo found here that Azure provides.
__init__.py:
import logging

import azure.functions as func

def main(myblob: func.InputStream):
    logging.info('Python Blob trigger function processed %s', myblob.name)
I tried running terraform apply and expected the function to appear and stay there, but it appears and then disappears. I also tried deploying a C# function to a Windows app; that worked as expected, but I need the script in Python.
I'm trying to enable authentication in Apache SuperSet through Oauth2.
It should be straightforward, since Superset is built upon Flask AppBuilder, which supports OAuth and is extremely easy to set up and use.
I managed to make both of the following examples work seamlessly with a Twitter OAuth configuration:
FAB OAuth example
flask-oauthlib examples
Now I'm trying to apply the same configuration to SuperSet.
Docker
As I can't build the project manually because of several mysterious Python errors (tried on Windows 7/Ubuntu Linux and with Python versions 2.7 and 3.6), I decided to use this Superset docker image (which installs and works fine) and inject my configuration as suggested by the docs:
Follow the instructions provided by Apache Superset for writing your own superset_config.py. Place this file in a local directory and mount this directory to /home/superset/.superset inside the container.
I added a superset_config.py (in a folder and alone) and mounted it by adding to the Dockerfile the following:
ADD config .superset/config
(config is the name of the folder) or (for the single file):
COPY superset_config.py .superset
In both cases the files end up in the right place in the container (I checked with docker exec and /bin/bash), but the web application shows no difference: no trace of Twitter authentication.
Can somebody figure out what I am doing wrong?
You have to change superset_config.py. Look at this example config; it works for me.
import os

from flask_appbuilder.security.manager import (
    AUTH_OID, AUTH_REMOTE_USER, AUTH_DB, AUTH_LDAP, AUTH_OAUTH
)

basedir = os.path.abspath(os.path.dirname(__file__))

ROW_LIMIT = 5000
SUPERSET_WORKERS = 4

SECRET_KEY = 'a long and random secret key'
SQLALCHEMY_DATABASE_URI = 'postgresql://username:pass@host:port/dbname'
CSRF_ENABLED = True

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Public"

OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'whitelist': ['@company.com'],
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'base_url': 'https://www.googleapis.com/oauth2/v2/',
            'request_token_params': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
            'consumer_key': 'GOOGLE_AUTH_KEY',
            'consumer_secret': 'GOOGLE_AUTH_SECRET'
        }
    }
]
2021 update: The FAB OAuth provider schema seems to have changed a bit since this answer was written. If you're trying to do this with Superset >= 1.1.0, try this instead:
OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'client_id': 'GOOGLE_KEY',
            'client_secret': 'GOOGLE_SECRET',
            'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
            'client_kwargs': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth'
        }
    }
]
Of course, sub out GOOGLE_KEY and GOOGLE_SECRET. The rest should be fine. This was cribbed from the FAB security docs, which are the place to check the next time there is drift.
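As an aside (not part of the original answer): rather than hard-coding the credentials in superset_config.py, one common pattern is to pull them from environment variables. This is a sketch under the assumption that you export variables named GOOGLE_KEY and GOOGLE_SECRET; those names are made up for illustration:

```python
import os

# Sketch: read OAuth credentials from the environment instead of
# hard-coding them. GOOGLE_KEY / GOOGLE_SECRET are assumed names.
OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            'client_id': os.environ.get('GOOGLE_KEY', ''),
            'client_secret': os.environ.get('GOOGLE_SECRET', ''),
            'api_base_url': 'https://www.googleapis.com/oauth2/v2/',
            'client_kwargs': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth'
        }
    }
]
```

This keeps secrets out of version control and makes the same config file usable across environments.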
I have a CherryPy server that dispenses a few static HTML/JS/etc. files to /foosball, plus some JSON through a REST API to /.
import cherrypy

config = {
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': # my port here,
        'tools.json_in.on': True
    },
    '/foosball': {
        'tools.staticdir.on': True,
        'tools.staticdir.root': '/var/www/html',
        'tools.staticdir.dir': 'foosball',
        'tools.staticdir.index': 'index.html'
    }
}

@cherrypy.popargs('player_id')
class RESTClient_Player(object):
    # stuff

class RESTClient_Game(object):
    # stuff

class RESTClient:
    players = RESTClient_Player()
    games = RESTClient_Game()

    @cherrypy.expose
    def index(self):
        http_method = getattr(self, cherrypy.request.method)
        return (http_method)()

cherrypy.quickstart(RESTClient(), '/', config)
I also want to keep these pages protected by a basic access restriction scheme, so I've been examining the excellent tutorial CherryPy provides.
Trouble is, the documentation is geared towards authenticating non-static pages, the kind explicitly declared with def statements. I tried to adapt it to the files in /foosball, but without success: /foosball always loads without any authentication request.
What can I add to give static files some access restriction ability?
Thanks!
EDIT: I got pointed towards auth_tool. With the config block below, I was able to lock the REST API portion behind a login screen, but all static files in /foosball are still openly accessible:
def check_login_and_password(login, password):
    cherrypy.log(login)
    cherrypy.log(password)
    return

config = {
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': # my port here,
        'tools.json_in.on': True,
        'tools.sessions.on': True,
        'tools.session_auth.on': True,
        'tools.session_auth.check_username_and_password': check_login_and_password
    },
    '/foosball': {
        'tools.staticdir.on': True,
        'tools.staticdir.root': '/var/www/html',
        'tools.staticdir.dir': 'foosball',
        'tools.staticdir.index': 'index.html',
        'tools.sessions.on': True,
        'tools.session_auth.on': True,
        'tools.session_auth.check_username_and_password': check_login_and_password
    }
}
Instead of using staticdir in your config, you can create a method in your class that returns the static files. If you do that, you can wrap authentication around that method.
import os

import cherrypy
from cherrypy.lib.static import serve_file

class Hello(object):

    @cherrypy.expose
    def index(self):
        return "Hello World"

    @cherrypy.expose
    def static(self, page):
        return serve_file(os.path.join(current_dir, 'static', page),
                          content_type='text/html')

if __name__ == '__main__':
    current_dir = os.path.dirname(os.path.abspath(__file__))
    cherrypy.quickstart(Hello())
I'm trying to upload my Python script to authorize the user for the Spotify iOS SDK. Honestly, I don't know what I'm doing, and the documentation is really poor. I'm using Heroku as the web server, but when I use foreman start I only get this on localhost:5000:
Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
spotify_token_swap.py looks as follows:
import os

import cherrypy
from cherrypy import tools
import requests
import simplejson as json
from flask import Flask

app = Flask(__name__)

# CHANGE these values to your own
k_client_id = "spotify-ios-sdk-beta"
k_client_secret = "ba95c775e4b39b8d60b27bcfced57ba473c10046"
k_client_callback_url = "spotify-ios-sdk-beta://callback"
verbose = True

class SpotifyTokenSwap(object):

    @cherrypy.expose
    @tools.json_out()
    def swap(self, code=None):
        params = {
            'grant_type': 'authorization_code',
            'client_id': k_client_id,
            'client_secret': k_client_secret,
            'redirect_uri': k_client_callback_url,
            'code': code
        }
        r = requests.post('https://ws.spotify.com/oauth/token', params)
        cherrypy.response.status = r.status_code

        if verbose:
            print()
            print(code)
            print(r.status_code)
            print(r.text)
            print()

        return r.json()

def CORS():
    cherrypy.response.headers["Access-Control-Allow-Origin"] = "*"

if __name__ == '__main__':
    cherrypy.tools.CORS = cherrypy.Tool('before_handler', CORS)
    root = SpotifyTokenSwap()
    config = {
        'global': {
            'server.socket_host': '0.0.0.0',
            'server.socket_port': 5000,
            'server.thread_pool': 10,
            # 'environment' : 'production',
        },
        '/': {
            'tools.CORS.on': True,
        }
    }
    cherrypy.quickstart(root, '/', config=config)
and I start the foreman webserver using this in my Procfile:
web: gunicorn spotify_token_swap:app
I'm pretty sure you are pointing to the wrong WSGI application. Pointing to app in the Procfile means Flask serves the page; you registered and built everything with CherryPy and did not include any routes in Flask, so the Flask app object has no routes, i.e. no '/'. You need to switch to serving the CherryPy app.
Since you're removing the Flask app part, you should remove the if __name__ == '__main__': line and change the rest to:
config = {
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': 5000,
        'server.thread_pool': 10,
        # 'environment' : 'production',
    },
    '/': {
        'tools.CORS.on': True,
    }
}

wsgiapp = cherrypy.Application(SpotifyTokenSwap(), '/', config=config)
And then use this in the Procfile:
web: gunicorn spotify_token_swap:wsgiapp
I'm not used to Foreman or cherrypy, but I think this is what you need to do.
You can use this Python service instead:
1. Download Google App Engine here
2. Install the launcher
3. Go to ChirsmLarssons GitHub and download the project; it will have everything you need
4. In Google App Engine Launcher, press "Add Existing Project"
5. Go to the Google App Engine website and create a project; here you will get an app-id
6. In app.yaml, replace spotifyauth with the app-id
7. Press Deploy
8. Done, you can now access it on the web at app-id.appspot.com/swap
Before I got to this solution, I'd spent hours in the jungle of Python and Ruby. Cheers!
I have tried Qooxdoo and I made a simple Python server with SimpleXMLRPCServer. With a Python test client I get the data without problems, but can I get this data from Qooxdoo? I'm lost; I've searched for 3 days without finding a solution.
I tried this:
var JSON_lista_empresas = 1000;

button1.addListener("execute", function(e)
{
    var rpc = new qx.io.remote.Rpc();
    rpc.setServiceName("get_data");
    //rpc.setCrossDomain(true);
    rpc.setUrl("http://192.168.1.54:46000");
    rpc.addListener("completed", function(event)
    {
        console.log(event.getData());
    });
    rpc.callAsync(JSON_lista_empresas, '');
});
And I tried other options but got nothing :(
The link to files:
http://mieresdelcamin.es/owncloud/public.php?service=files&dir=%2Fjesus%2Ffiles%2FQooxdoo
I tried and read all of qooxdoo-contrib.
Well, RpcPython --> OK, and in class/qooxdoo -> test.py.
I run the server [start-server.py] and query from the web browser:
http://127.0.0.1:8000//?_ScriptTransport_id=1&nocache=1366909868006&_ScriptTransport_data={%22service%22%3A%22qooxdoo.test%22%2C%22method%22%3A%22echo%22%2C%22id%22%3A1%2C%22params%22%3A[%22Por%20fin%22]}
and the reply in the web browser is:
qx.io.remote.ScriptTransport._requestFinished(1,{"error": null, "id": 1, "result": "Client said: [ Por fin ]"});
but if I query from qooxdoo, the reply is the error shown in [error.png].
The code for qooxdoo:
var rpc = new qx.io.remote.Rpc( "http://127.0.0.1:8000/");
rpc.setCrossDomain( true);
rpc.setServiceName( 'qooxdoo.test');
// asynchronous call
var handler = function(result, exc) {
if (exc == null) {
alert("Result of async call: " + result);
} else {
alert("Exception during async call: " + exc+ result);
}
};
rpc.callAsync(handler, "echo", "Por fin");
I'm lost :((
Files in:
http://mieresdelcamin.es/owncloud/public.php?service=files&dir=%2Fjesus%2Ffiles%2FQooxdoo
Well, with Firebug I can see that the error in ownCloud involves qx.io.remote.ScriptTransport... ¿?
Best Regards.
I'm guessing you're confusing XML-RPC with JSON-RPC; qooxdoo only supports the latter. The protocols are similar but the data interchange format differs (XML vs. JSON). Instead of SimpleXMLRPCServer you could use "RpcPython" on the server side, which is a qooxdoo contrib project.
See:
http://qooxdoo.org/contrib/project/rpcpython
http://sourceforge.net/p/qooxdoo-contrib/code/HEAD/tree/trunk/qooxdoo-contrib/RpcPython/
Once you have this server up and running you should be able to test it:
http://manual.qooxdoo.org/2.1.1/pages/communication/rpc_server_writer_guide.html#testing-a-new-server
http://sourceforge.net/p/qooxdoo-contrib/code/HEAD/tree/trunk/qooxdoo-contrib/RpcPython/trunk/services/class/qooxdoo/test.py
After that your qooxdoo (client) code hopefully works also. :)
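To make the protocol mismatch concrete, here is a standard-library-only sketch (not from the original answer) of what the same echo call looks like on the wire in each protocol. The JSON-RPC shape matches the fields visible in the _ScriptTransport_data URL parameter above, while SimpleXMLRPCServer would only understand the XML body:

```python
import json
import xmlrpc.client

# The JSON-RPC style request body that qooxdoo's qx.io.remote.Rpc sends
# (the same fields as in the _ScriptTransport_data parameter above).
jsonrpc_body = json.dumps({
    "service": "qooxdoo.test",
    "method": "echo",
    "id": 1,
    "params": ["Por fin"],
})

# The XML-RPC body SimpleXMLRPCServer expects for the equivalent call.
xmlrpc_body = xmlrpc.client.dumps(("Por fin",), methodname="echo")

print(jsonrpc_body)
print(xmlrpc_body)
```

Same call, completely different envelopes, which is why the XML-RPC server never understands what the qooxdoo client sends.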
Ok, in the file http.py of the qxjsonrc module, at line 66, change
response='qx.io.remote.ScriptTransport._requestFinished(%s,%s);'%(scriptTransportID,response)
to
response='qx.io.remote.transport.Script._requestFinished(%s,%s);'%(scriptTransportID,response)
and it runs fine :))
This link is for the modified package:
http://mieresdelcamin.es/owncloud/public.php?service=files&dir=%2Fjesus%2Ffiles%2FQooxdoo
Best Regards and thanks!!!
As Richard already pointed out, Qooxdoo only supports its own flavor of JSON-RPC.
I maintain a fork of the original rpcpython called QooxdooCherrypyJsonRpc. The main goal was to hand the transport protocol over to a robust framework and keep only the JSON-RPC stuff. CherryPy, obviously a robust framework, allows HTTP, WSGI and FastCGI deployment. The code was refactored and covered with tests. Later I added upload/download support and consistent timezone-aware datetime interchange.
At the very minimum, your Python backend may look like this (call it test.py):
import cherrypy
import qxcpjsonrpc as rpc

class Test(rpc.Service):

    @rpc.public
    def add(self, x, y):
        return x + y

config = {
    '/service': {
        'tools.jsonrpc.on': True
    },
    '/resource': {
        'tools.staticdir.on': True,
        'tools.staticdir.dir': '/path/to/your/built/qooxdoo/app'
    }
}

cherrypy.tools.jsonrpc = rpc.ServerTool()

if __name__ == '__main__':
    cherrypy.quickstart(config=config)
Then you can do in your qooxdoo code as follows:
var rpc = new qx.io.remote.Rpc();
rpc.setServiceName('test.Test');
rpc.setUrl('http://127.0.0.1:8080/service');
rpc.setCrossDomain(true); // you need this for opening the app from file://
rpc.addListener("completed", function(event)
{
    console.log(event.getData());
});
rpc.callAsyncListeners(this, 'add', 5, 7);
Or open the link directly:
http://127.0.0.1:8080/service?_ScriptTransport_id=1&_ScriptTransport_data=%7B%22params%22%3A+%5B12%2C+13%5D%2C+%22id%22%3A+1%2C+%22service%22%3A+%22test.Test%22%2C+%22method%22%3A+%22add%22%7D
For more info take a look at the package page I posted above.
Richard Sternagel wrote about rpcpython. That version of rpcpython doesn't work with the present version of simplejson, because json.py has incorrect imports:
from simplejson.decoder import ANYTHING
from simplejson.scanner import Scanner, pattern
Improve rpcpython or use another server, for example CherryPy.