I have created an API using the Bottle library in Python. Its endpoints look like this:
@app.get('/api/users/<id>', name='user')
def get_user(model, id):
    user = model.get_user(id)
    if not user:
        return HTTPError(404, 'User not found')
    else:
        response.content_type = "application/json"
        return json.dumps(user)
I want to call the API from other functions within the same app:
@app.route('/users/<id>')
def users(id=1):
    user = requests.get('http://localhost:8001/api/users/%s' % id)
    return template('user', user=user)
However, this shows no results; the request times out every time.
So my question is: how do I call a Bottle API from within that same app, using the Requests library or by any other means?
Are you running Bottle in single-threaded mode (the default)? If so, then your internal get request will hang forever. This is because your server can serve only one request at a time, and you are asking it to handle two at once: the first call to /users/<id>, and then the second call to /api/users/<id>.
A band-aid fix would be to run the server in asynchronous mode. Try this method and see if your timeouts go away:
run(host='0.0.0.0', port=YOUR_PORT_NUMBER, server='gevent')
However: You shouldn't be designing your application this way in the first place. Instead, refactor your code so that both methods can call a function that returns the JSON representation of a user. Then one api call can return that raw json object, while the other api call can present it as HTML. NOTE: that's not how I would design this API, but it's the answer that's the shortest distance from how you've structured your application so far.
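A minimal sketch of that refactor, with a hypothetical shared helper (get_user_record is a made-up name, and the Bottle decorators are shown as comments so the snippet stands alone):

```python
import json

# Hypothetical shared helper: both handlers call this directly instead of
# one handler making an HTTP request to the other.
def get_user_record(model, user_id):
    """Return the user as a plain dict, or None if not found."""
    return model.get_user(user_id)

# @app.get('/api/users/<id>', name='user')   # Bottle decorator in the real app
def api_user(model, id):
    user = get_user_record(model, id)
    if user is None:
        return None  # real app: return HTTPError(404, 'User not found')
    return json.dumps(user)  # real app also sets response.content_type

# @app.route('/users/<id>')                  # Bottle decorator in the real app
def html_user(model, id):
    user = get_user_record(model, id)
    return user  # real app: return template('user', user=user)
```

Both routes now share one in-process function call, so no internal HTTP request (and no single-threaded deadlock) is involved.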
I am upgrading my Twilio-enabled app from the old SDK to the new Twilio Programmable Voice (beta 5) but have run into several problems. Chief among them is poor audio quality on outgoing calls; it can only be described as what lost packets must sound like. The problem exists even when I run the Quickstart demo app, leading me to conclude the problem lies in my TwiML. I've followed the instructions to a "T" with respect to setting the appropriate capabilities, entitlements, provisioning profile, and uploading the VoIP push credential, but with little documentation on the new SDK or for Python versions of the server, I'm left scratching my head.
The only modifications to the demo app I've made are to include the "to" and "from" parameters in my call request like so:
NSDictionary *params = @{@"To" : self.phoneTextField.text, @"From": @"+16462332222"};
[[VoiceClient sharedInstance] configureAudioSession];
self.outgoingCall = [[VoiceClient sharedInstance] call:[self fetchAccessToken] params:params delegate:self];
The call goes out to my Twiml server (a python deployment on Heroku) at the appropriate endpoint as seen here:
import os
from flask import Flask, request
from twilio.jwt.access_token import AccessToken, VoiceGrant
from twilio.rest import Client
import twilio.twiml
ACCOUNT_SID = 'ACblahblahblahblahblahblah'
API_KEY = 'SKblahblahblahblahblahblah'
API_KEY_SECRET = 'blahblahblahblahblahblah'
PUSH_CREDENTIAL_SID = 'CRblahblahblahblahblahblah'
APP_SID = 'APblahblahblahblahblahblah'
IDENTITY = 'My_App'
CALLER_ID = '+15551111' # my actual number
app = Flask(__name__)
@app.route('/makeTheDamnCall', methods=['GET', 'POST'])
def makeTheDamnCall():
    account_sid = os.environ.get("ACCOUNT_SID", ACCOUNT_SID)
    api_key = os.environ.get("API_KEY", API_KEY)
    api_key_secret = os.environ.get("API_KEY_SECRET", API_KEY_SECRET)
    CALLER_ID = request.values.get('From')
    IDENTITY = request.values.get('To')
    client = Client(api_key, api_key_secret, account_sid)
    call = client.calls.create(url=request.url_root, to='client:' + IDENTITY, from_='client:' + CALLER_ID)
    return str(call.sid)
The console outputs outgoingCall:didFailWithError: Twilio Services Error and the call logs show a completed client call. An inspection of the debugger shows TwilioRestException: HTTP 400 error: Unable to create record. As you can see, the url I include in the request might be problematic as it just goes to the root but there is no way to leave the url blank (that I have found). I will eventually change this to a url=request.url_root + 'handleRecording' for call recording purposes but am taking things one step at a time for now.
My solution so far has been to ditch the call = client.calls.create in favor of the dial verb like so:
resp = twilio.twiml.Response()
resp.dial(number = IDENTITY, callerId = CALLER_ID)
return str(resp)
This makes calls, but the quality is so poor as to render it useless. (10+ seconds of silence followed by intermittent spurts of hearing the other party). Using the dial verb in this way is also unacceptable because of its inefficiency as I'm now billed for two calls each time.
The other major problem, which I'm not sure is connected or not, is the fact that I haven't yet been able to receive any incoming calls, though I suspect I may need to ask that question separately.
How can I get this line to work? I'm looking at you, #philnash. Help me make my app great again. :)
call = client.calls.create(url=request.url_root, to='client:' + IDENTITY, from_='client:' + CALLER_ID)
Sorry it's taken me a while to get back to your question.
Firstly, the correct way to make the onward connection from your Programmable Voice SDK call is with TwiML <Dial>. You were creating a call using the REST API; however, you will already have created the first leg of the call in the SDK, and the TwiML forwards it on to the second leg, the person you dialled. Notably, you are billed for each leg of the call, not for two calls (legs can be of different lengths; for example, you could put the original caller through a menu system before dialling on to the recipient).
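A sketch of what the server could return instead of calling client.calls.create. This builds the <Dial> TwiML as a raw XML string so the snippet stands alone; the function name is hypothetical, and the real Twilio helper library provides TwiML builders for this:

```python
# Hypothetical sketch: respond to Twilio's webhook with <Dial> TwiML so the
# SDK call (first leg) is bridged to the dialled client (second leg).
def dial_twiml(caller_id, identity):
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<Response><Dial callerId="{0}">'
        '<Client>{1}</Client>'
        '</Dial></Response>'
    ).format(caller_id, identity)
```

In the Flask route, `return dial_twiml(CALLER_ID, IDENTITY)` would then replace the REST API call.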
Secondly, regarding poor call quality, that's not something I can help with on Stack Overflow. The best thing to do in this situation is to get in touch with Twilio support and provide some Call SIDs for affected calls. If you can record an example call that would help too.
Finally, I haven't seen if you've asked another question about incoming calls yet, but please do and I'll do my best to help there. That probably is a code question that we can cover on SO.
I'm working on a Flask app which retrieves the user's XML from the myanimelist.net API (sample), processes it, and returns some data. The data returned can be different depending on the Flask page being viewed by the user, but the initial process (retrieve the XML, create a User object, etc.) done before each request is always the same.
Currently, retrieving the XML from myanimelist.net is the bottleneck for my app's performance and adds on a good 500-1000ms to each request. Since all of the app's requests are to the myanimelist server, I'd like to know if there's a way to persist the http connection so that once the first request is made, subsequent requests will not take as long to load. I don't want to cache the entire XML because the data is subject to frequent change.
Here's the general overview of my app:
from flask import Flask
from functools import wraps
import requests
app = Flask(__name__)
def get_xml(f):
    @wraps(f)
    def wrap():
        # Get the XML before each app function
        r = requests.get('page_from_MAL')  # Current bottleneck
        user = User(data_from_r)  # User object
        response = f(user)
        return response
    return wrap
@app.route('/one')
@get_xml
def page_one(user_object):
    return 'some data from user_object'

@app.route('/two')
@get_xml
def page_two(user_object):
    return 'some other data from user_object'

if __name__ == '__main__':
    app.run()
So is there a way to persist the connection like I mentioned? Please let me know if I'm approaching this from the right direction.
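For reference, connection reuse on its own can be had from the Requests library's Session object, which pools keep-alive connections to the same host. A sketch of the decorator above using one (the URL is a placeholder, and the User-object handling is elided as in the question):

```python
from functools import wraps

import requests

# One module-level Session: subsequent requests to the same host reuse the
# pooled TCP connection instead of opening a new one each time.
session = requests.Session()

def get_xml(f):
    @wraps(f)
    def wrap():
        # Same structure as the question's decorator, but via the Session
        r = session.get('http://myanimelist.net/page_from_MAL')  # placeholder URL
        return f(r.text)  # real app: build the User object from r first
    return wrap
```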
I think you aren't approaching this from the right direction, because you are placing your app too much in the role of a proxy for myanimelist.net.
What happens when you have 2000 users? Your app ends up making tons of requests to myanimelist.net, and a malicious user could easily DoS your app (or use it to DoS myanimelist.net).
IMHO, this is a much cleaner way:
Server side:
Create a websocket server (ex: https://github.com/aaugustin/websockets/blob/master/example/server.py)
When a user connects to the websocket server, add the client to a list; remove it from the list on disconnect.
For every connected user, frequently check myanimelist.net to get the associated XML (maybe lower the frequency as more users come online).
For every XML document, make a diff against your server's local version, and send that diff to the client over the websocket channel (assuming there is a diff).
Client side:
On receiving a diff: update the local XML with the differences.
Disconnect from the websocket after n seconds of inactivity; when disconnected, add a button to the interface to reconnect.
I doubt you can do anything much better assuming myanimelist.net doesn't provide a "push" API.
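The diff step above could be sketched with the stdlib difflib, treating the XML as plain text (the function name is made up):

```python
import difflib

# Hypothetical sketch: compare the server's cached copy with a freshly
# fetched one; an empty result means there is nothing to push.
def xml_diff(cached_xml, fresh_xml):
    diff = difflib.unified_diff(
        cached_xml.splitlines(keepends=True),
        fresh_xml.splitlines(keepends=True),
        fromfile='cached', tofile='fresh',
    )
    return ''.join(diff)
```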
I have some code I want to run for every request that comes into Flask-- specifically adding some analytics information. I know I could do this with a decorator, but I'd rather not waste the extra lines of code for each of my views. Is there a way to just write this code in a catch all that will be applied before or after each view?
Flask has dedicated hooks that run before and after each request. Unsurprisingly, they are called:
Flask.before_request()
Flask.after_request()
Both are decorators:
@app.before_request
def do_something_whenever_a_request_comes_in():
    # the request object is available here
    ...

@app.after_request
def do_something_whenever_a_request_has_been_handled(response):
    # we have a response to manipulate; always return one
    return response
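Tying this back to the analytics use case, a minimal sketch that times every request (the X-Elapsed header is made up for illustration; in practice you would hand the number to your analytics backend):

```python
import time

from flask import Flask, g

app = Flask(__name__)

@app.before_request
def start_timer():
    # Stash the start time on flask.g, which lives for one request
    g.start = time.perf_counter()

@app.after_request
def record_timing(response):
    elapsed = time.perf_counter() - g.start
    # Illustrative only: report the duration on a response header
    response.headers['X-Elapsed'] = '%.6f' % elapsed
    return response

@app.route('/')
def index():
    return 'hello'
```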
I am new to Flask.
I have a public api, call it api.example.com.
@app.route('/api')
def api():
    name = request.args.get('name')
    ...
    return jsonify({'address': '100 Main'})
I am building an app on top of my public api (call it www.coolapp.com), so in another app I have:
@app.route('/make_request')
def index():
    params = {'name': 'Fred'}
    r = requests.get('http://api.example.com', params=params)
    return render_template('really_cool.jinja2', address=r.text)
Both api.example.com and www.coolapp.com are hosted on the same server. It seems inefficient the way I have it (hitting the http server when I could access the api directly). Is there a more efficient way for coolapp to access the api and still be able to pass in the params that api needs?
Ultimately, with an API-powered system, it's best to hit the API because:
It exercises the API the way your users do (even though you're the user here, it's the same path others access);
You can then scale easily: put a pool of API boxes behind a load balancer if you get big.
However, if you're developing on the same box you could make a virtual server that listens on localhost on a random port (1982) and then forwards all traffic to your api code.
To make this easier I'd abstract the API_URL into a setting in your settings.py (or whatever you are loading in to Flask) and use:
r = requests.get(app.config['API_URL'], params=params)
This will allow you to make a single change if you find using this localhost method isn't for you or you have to move off one box.
Edit
Looking at your comments you are hoping to hit the Python function directly. I don't recommend doing this (for the reasons above - using the API itself is better). I can also see an issue if you did want to do this.
First of all we have to make sure the api package is in your PYTHONPATH. Easy to do, especially if you're using virtualenvs.
We do from api import views and replace our code with r = views.api() so that it calls our api() function.
Our api() function will fail for a couple of reasons:
It uses flask.request to extract the GET arg 'name'. Because we haven't made a request through Flask's WSGI layer, there is no request context to use.
Even if we did manage to pass the request from the front end through to the API, the second problem is the use of jsonify({'address':'100 Main'}). This returns a Response object with the content type set to JSON (not just the JSON itself).
You would have to completely rewrite your function to take into account the Response object and handle it correctly. A real pain if you do decide to go back to an API system again...
Depending on how you structure your code, your database access, and your functions, you can simply turn the other app into a package, import the relevant modules, and call the functions directly.
You can find more information on modules and packages here.
Please note that, as Ewan mentioned, there's some advantages to using the API. I would advise you to use requests until you actually need faster requests (this is probably premature optimization).
Another idea that might be worth considering, depending on your particular code, is creating a library that is used by both applications.
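A sketch of that shared-library idea; lookup_address and the address data are hypothetical stand-ins for the real lookup logic:

```python
# Hypothetical shared module (say, addressbook.py) that both apps import,
# so www.coolapp.com never needs to HTTP-call api.example.com.
def lookup_address(name):
    addresses = {'Fred': '100 Main'}  # stand-in for the real data source
    return addresses.get(name)

# api.example.com:  return jsonify({'address': lookup_address(name)})
# www.coolapp.com:  render_template('really_cool.jinja2', address=lookup_address('Fred'))
```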
I am new to Python. I am using Flask to create a web service which makes lots of API calls to LinkedIn. The problem is that getting the final result set takes a lot of time, and the frontend remains idle during that time. I was thinking of returning the partial results found up to that point and continuing the API calls on the server side. Is there any way to do this in Python? Thanks.
Flask has the ability to stream data back to the client. Sometimes this requires javascript modifications to do what you want but it is possible to send content to a user in chunks using flask and jinja2. It requires some wrangling but it's doable.
A view that uses a generator to break up content could look like this (though the linked SO answer is much more comprehensive).
from flask import Response

@app.route('/image')
def generate_large_image():
    def generate():
        while True:
            if not processing_finished():
                yield ""
            else:
                yield get_image()
                break
    return Response(generate(), mimetype='image/jpeg')
There are a few ways to do this. The simplest would be to return the initial request via flask immediately and then use Javascript on the page you returned to make an additional request to another URL and load that when it comes back. Maybe displaying a loading indicator or something.
The additional URL would look like this
@app.route("/linkedin-data")
def linkedin():
    # make some call to the LinkedIn API which returns "data", probably in JSON
    return flask.jsonify(**data)
Fundamentally, no: you can't return a partial response and then add to it later, so you have to break your requests up into smaller units. You can stream data using websockets, but you would still be sending back an initial response, which would then create a websocket connection using Javascript, which would then start streaming data back to the user.
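For the "smaller units" route, one sketch is to stream each result as its own JSON line as soon as it is ready. The /partial endpoint and the fetch_results stub are made up; the slow LinkedIn calls would go where the stub loop is:

```python
import json

from flask import Flask, Response

app = Flask(__name__)

def fetch_results():
    # Stand-in for a sequence of slow LinkedIn API calls
    for i in range(3):
        yield {'item': i}

@app.route('/partial')
def partial():
    def generate():
        # Each completed result is flushed to the client immediately
        for result in fetch_results():
            yield json.dumps(result) + '\n'
    return Response(generate(), mimetype='application/x-ndjson')
```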