I deployed my Falcon app to Azure Functions using Bitbucket, but I cannot see any of the files in the function app. I also tried pulling the repo into the Azure Functions folder, but that didn't work either because my routes didn't work as expected. I use an MVC architecture in my app. My run.py looks like this:
import falcon
from wsgiref import simple_server

from project.routes import *

if __name__ == "__main__":
    host = '127.0.0.1'
    port = 5000
    httpd = simple_server.make_server(host, port, app)
    print("Serving on %s:%s" % (host, port))
    httpd.serve_forever()
Is there any way to deploy my app as it is, or should I change the structure?
Current folder structure
Azure Functions is not an appropriate place to simply drop a web application and expect it to run as normal. It is a "serverless" framework, so the fact that your app uses an MVC architecture is a sign that it is currently not a good fit for Azure Functions. In its current state, your application is better suited to an Azure Web App.
Azure Functions apps should be built around small functions that are called in response to events. Making your application a better fit for Azure Functions would involve refactoring it into individual functions that are triggered by events such as HTTP requests, timers, and the many other triggers listed in the Azure Functions documentation.
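For reference, a single HTTP-triggered function in Python looks roughly like this. This is a minimal sketch using the decorator-based azure.functions programming model; the route name, function name and response payload are illustrative only, not taken from your project:

import json

import azure.functions as func

app = func.FunctionApp()

# One small function per route/event, instead of a full MVC app
@app.route(route="items", auth_level=func.AuthLevel.ANONYMOUS)
def list_items(req: func.HttpRequest) -> func.HttpResponse:
    items = [{"id": 1, "name": "example"}]  # placeholder data
    return func.HttpResponse(
        json.dumps(items),
        mimetype="application/json",
        status_code=200,
    )

Each route or event your MVC app currently handles would become its own function like this, deployed together in one function app.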
I have been scouring the web for the best way to do this and haven't found a proper answer.
I want to create a single Flask web app that contains multiple dashboard pages. The app needs to run on a different subdomain for each user, the user being a different business, e.g. client1.myapp.com. The functionality will be largely shared across the different clients and thus subdomains. However, I want to define a config file that will look something like this:
client1 = {"show_graph1":True, "show_graph2":False}
client2 = {"show_graph1":False, "show_graph2":True}
So the app would be hosted on a single AWS Elastic Beanstalk instance and serve all these subdomains. The flow would be:
Client1 goes on unique url client1.myapp.com
Client1 logs in to myapp
Myapp recognises that it is on the subdomain for client1, fetches the configuration from the config file and configures the dashboard pages accordingly.
I have looked into Flask blueprints, and from what I've understood this would be the best way to set this up, but I am not clear on how I would dynamically fetch and apply the configuration, nor on how Flask would serve all the subdomains simultaneously.
What would be the best application structure to set up this use case with Flask?
Any help would be much appreciated.
If your Flask app is listening for all connections, you can point as many domains at it as you like. Then, in your dashboard views, or globally if you prefer, you can load your configuration based on the domain the app was requested through.
For example:
from flask import g, request

@app.before_request
def before_request_func():
    # request.host holds the domain the request came in on,
    # e.g. "client1.myapp.com"
    domain = request.host
    g.client_config = get_client_config(domain)
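get_client_config isn't defined in the answer; as a hypothetical sketch mirroring the per-client dicts from the question, it could be a plain dictionary lookup, and a dashboard view can then read g.client_config:

from flask import g, render_template

CLIENT_CONFIGS = {
    "client1.myapp.com": {"show_graph1": True, "show_graph2": False},
    "client2.myapp.com": {"show_graph1": False, "show_graph2": True},
}

def get_client_config(domain):
    # Unknown subdomains fall back to an empty config
    return CLIENT_CONFIGS.get(domain, {})

@app.route("/dashboard")
def dashboard():
    # Each client sees only the graphs its config enables
    return render_template("dashboard.html", config=g.client_config)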
I've developed a Python Flask back-end app which allows me to make HTTP requests against a JSON file (a sort of database), such as GET (to see a list of items) or POST (to create a new item in the JSON database). Until now, I used Postman to test my app and all worked well. However, I'd like to develop a Python Flask front-end app in order to create a graphical interface (in a web browser, with Jinja templates) to make the same requests. The problem is, I don't know how to begin the new app; I've googled all my questions but found no answer...
How can we "link" the front-end and back-end apps in order to get the information from the web browser, make the requests via the back-end and then send the response with the front-end?
Using RESTful API.
An infrastructure solution (a classic one) could be:
Your app server listening on 5000. It uses the REST architectural style.
Your front-end server listening on 80/443. It makes requests to your app server to get the data and fills your HTML template with this data. The requests can be made with a library like requests. The requests library is one of the simplest, but you can choose another one to make HTTP requests.
Workflow:
Client <-HTTP requests-> Frontend <-HTTP requests made with requests-> App Server <--> Database
Advantage:
One advantage of this architecture is that you can easily duplicate your servers or have multiple app servers responsible for different tasks. They can run on the same machine or on separate machines.
Serving static files:
If you are talking about serving static files for the frontend (html/css/js files), then you should use an existing web server like Nginx.
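As a rough sketch of the front-end side (the /items endpoint, the template name and the ports are assumptions for illustration, not part of the answer), a view can fetch the data from the app server with requests and render it with a Jinja template:

import requests
from flask import Flask, render_template

app = Flask(__name__)

API_BASE = "http://localhost:5000"  # where the back-end app server listens

@app.route("/items")
def items_page():
    # Ask the back-end for the data over HTTP ...
    resp = requests.get(API_BASE + "/items")
    resp.raise_for_status()
    # ... then fill an HTML (Jinja) template with it
    return render_template("items.html", items=resp.json())

if __name__ == "__main__":
    # In production this would sit behind Nginx on 80/443
    app.run(port=8080)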
I've created a shopping site with a backend and a frontend.
The backend is python (3.6.5) with Flask.
I want to deploy my site to Google App Engine (gae).
When in development, everything works fine.
When deployed (in production) each RPC gets its own 'thread' and everything is a mess.
I tried slapping gunicorn on it with sync and gevent worker class, but to no avail.
In deployment, how can I make each connection/session remember its own 'instance of the backend', instead of GAE/Flask/gunicorn serving a new instance of the backend for each request?
I need each user connection to be consistent and 'its own'/'private'.
It's not possible to do this. App Engine will spread the request load to your application across all instances, regardless of which one previously handled a request from a specific IP. A specific instance may also come online or go offline due to load or underlying changes to App Engine (e.g., a data center needs maintenance).
If you need to maintain session state between multiple requests to your app, you have a couple options depending on the architecture:
Keep session state in cookies with Flask's session object
Keep session state in server-side storage with Memorystore
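For the cookie option, a minimal sketch (the secret key, route and cart payload are placeholders) shows why it survives instance switches: the state travels with each request in a signed cookie rather than living inside one backend instance:

from flask import Flask, jsonify, request, session

app = Flask(__name__)
# Needed to sign the session cookie; use a real secret in production
app.secret_key = "replace-me"

@app.route("/cart/add", methods=["POST"])
def add_to_cart():
    cart = session.get("cart", [])
    cart.append(request.json["item_id"])
    # Writing back to session stores the data in the signed cookie,
    # so any App Engine instance can read it on the next request
    session["cart"] = cart
    return jsonify(cart=cart)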
Background
I am trying to create a simple REST API using the Flask-RESTful extension. This API will primarily manage CRUD operations and authentication for the users of a simple service.
I am also trying to create a few web sockets using the Flask-SocketIO extension that these users will be able to connect to and see real-time updates for some data related to other people using the service. As such, I need to know that these users are authenticated and authorized to connect to certain sockets.
Problem
However, I'm having a bit of trouble getting set up. It seems that I am not able to have these two components (the REST API and the SocketIO server) work together on the same Flask instance. The reason I say this is that when I run the following, either the REST API or the SocketIO server will work, but not both:
from flask import Flask
from flask_restful import Api
from flask.ext.socketio import SocketIO

app = Flask(__name__)
api = Api(app)
socketio = SocketIO(app)

# some test resources for the API and
# a test emitter statement for the SocketIO server
# are added here

if __name__ == '__main__':
    app.run(port=5000)
    socketio.run(app, port=5005)
Question
Is the typical solution for this type of setup to have two distinct instances of Flask going at the same time? For instance, would my SocketIO server have to make requests to my REST API in order to check to see that a specific user is authenticated/authorized to connect to a specific socket?
You just want to run socketio.run(app, port=5005) and hit the REST API on port 5005.
The reason this works is that, under the hood, Flask-SocketIO runs an evented web server based on gevent (or, with the 1.0 release, also eventlet). This server handles the websocket requests directly (using the handlers you register via the socketio.on decorator) and passes the non-websocket requests on to Flask.
The reason your code wasn't working is that both app.run and socketio.run are blocking operations. Whichever one ran first was looping, waiting for connections, and never allowing the second to start. If you really needed to run your websocket connections on a different port, you'd need to spawn either the socketio or the app run call in a different process.
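Putting that together, a minimal single-port sketch (the Ping resource and the connect handler are placeholders, and the import uses the current flask_socketio package name) could look like this:

from flask import Flask
from flask_restful import Api, Resource
from flask_socketio import SocketIO, emit

app = Flask(__name__)
api = Api(app)
socketio = SocketIO(app)

class Ping(Resource):
    def get(self):
        return {"status": "ok"}

api.add_resource(Ping, "/ping")

@socketio.on("connect")
def handle_connect():
    # Greets the client that just connected over the websocket
    emit("server_message", {"data": "connected"})

if __name__ == "__main__":
    # One blocking call serves both the REST API and the websockets on 5005
    socketio.run(app, port=5005)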
I'm starting to develop an application using App Engine (Python) that will be integrated into the Google Apps platform and sold in the Marketplace. I've implemented Single Sign-On OpenID authentication, and it works fine when deployed, but doesn't work locally at all.
How can I do that locally? user.federated_identity() apparently does not work on localhost.
--edit--
Precisely, what I need is to be able to run this tutorial on App Engine's dev server.
On localhost there's no point in verifying that the email matches the federated_id domain, so you should just add this to check_email:
def check_email(self, user):
    # Requires `import os` at the top of the module.
    # Skip the domain check when running on the local dev server.
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'):
        return True
    # ... the existing production check continues below
It looks like, if your consumer secret and key are set up correctly, everything else should work and it will return the first entry from the calendar feed.