How to use Flask to Display Data From Print - python

I have this Python code, and I am trying to display its data on a static page using Flask instead of printing it to the console.
from __future__ import print_function
from private import private, private, private
optimizeit = get_optimizeit(web.site, private.private)
optimizeit.load_data_from_CSV("/path.../to..cvs.csv")
data = optimizeit.get_data_by_name('somename')
data = optimizeit.data[0]
data.max_exposure = 0.5
generatedata = optimizeit.optimizeit(4)
for datafield in generatedata:
    print(datafield)
Where it prints, I want to output the data to a simple Flask page instead. I tried a few things, but I just can't work out the best way of doing it.
EDIT: What I tried
from __future__ import print_function
import flask
from private import private, private, private
import time
app = flask.Flask(__name__)
@app.route('/sitea')
def index():
    def inner():
        optimizeit = get_optimizeit(website.site12, private.someprivate)
        optimizeit.load_players_from_CSV("/mypath to csv.../.csv")  # import csv
        data = optimizeit.datas[0]  # optimize that data
        data.max_exposure = 0.5  # set some exposure to that data
        datalive_generator = optimizeit.optimizeit(4)
        for datalive in datalive_generator:
            return datalive
    return flask.Response(inner(), mimetype='text/html')  # text/html is required for most browsers to show the partial page immediately

app.run(debug=True)
EDIT 2: THIS WORKED!
from __future__ import print_function
import flask
from private import private, private, private
import time
app = flask.Flask(__name__)
@app.route('/sitea')
def index():
    def inner():
        optimizeit = get_optimizeit(website.site12, private.someprivate)
        optimizeit.load_players_from_CSV("/mypath to csv.../.csv")  # import csv
        data = optimizeit.datas[0]  # optimize that data
        data.max_exposure = 0.5  # set some exposure to that data
        datalive_generator = optimizeit.optimizeit(4)
        for datalive in datalive_generator:
            yield '%s<br/>\n' % datalive
    return flask.Response(inner(), mimetype='text/html')  # text/html is required for most browsers to show the partial page immediately

Here
for datalive in datalive_generator:
    return datalive
this just returns the first item in datalive_generator and then exits the function for good. You probably meant yield datalive; that way the function keeps streaming output to the response. In the meantime, you'll want to do some reading on the difference between generators and normal functions in Python.
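To see the difference in a runnable form, here is a minimal, self-contained sketch (the route name and the example items are made up for illustration, not taken from the code above): a plain return would stop at the first item, while yield turns inner() into a generator that Flask can stream to the browser item by item.

import flask

app = flask.Flask(__name__)

@app.route('/stream')
def stream():
    def inner():
        # A generator: each yield hands one more chunk to the client.
        # If this were `return item`, only 'first' would ever be sent.
        for item in ['first', 'second', 'third']:
            yield '%s<br/>\n' % item
    return flask.Response(inner(), mimetype='text/html')

app.run(debug=True)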

Related

Flask: How to use url_for() outside the app context?

I'm writing a script to collect the emails of users who didn't receive a confirmation email and to resend it to them. The script obviously runs outside of the Flask app context. I would like to use url_for(), but I can't get it right.
def resend(self, csv_path):
    self.ctx.push()
    with open(csv_path) as csv_file:
        csv_reader = csv.reader(csv_file)
        for row in csv_reader:
            email = row[0]
            url_token = AccountAdmin.generate_confirmation_token(email)
            confirm_url = url_for('confirm_email', token=url_token, _external=True)
            ...
    self.ctx.pop()
The first thing I had to do was to set SERVER_NAME in config. But then I get this error message:
werkzeug.routing.BuildError: Could not build url for endpoint
'confirm_email' with values ['token']. Did you mean 'static' instead?
This is how the endpoint is defined, but I don't think it can even be found, because it isn't registered when the code is run as a script:
app.add_url_rule('/v5/confirm_email/<token>', view_func=ConfirmEmailV5.as_view('confirm_email'))
Is there a way to salvage url_for() or do I have to build my own url?
Thanks
It is much easier, and more correct, to get the URL from within the application context.
You can either import the application and manually push context with app_context
https://flask.palletsprojects.com/en/2.0.x/appcontext/#manually-push-a-context
from flask import url_for
from whereyoudefineapp import application
application.config['SERVER_NAME'] = 'example.org'
with application.app_context():
    url_for('yourblueprint.yourpage')
Or you can redefine your application and register the wanted blueprint.
from flask import Flask, url_for
from whereyoudefineyourblueprint import myblueprint
application = Flask(__name__)
application.config['SERVER_NAME'] = 'example.org'
application.register_blueprint(myblueprint)
with application.app_context():
    url_for('myblueprint.mypage')
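Another option that still relies on the application object is a test request context; url_for() works inside it just as it does during a real request. This is only a sketch, reusing the same assumed module and blueprint names as the examples above.

from flask import Flask, url_for
from whereyoudefineyourblueprint import myblueprint

application = Flask(__name__)
application.config['SERVER_NAME'] = 'example.org'
application.register_blueprint(myblueprint)

# A test request context behaves like a real request for URL building.
with application.test_request_context():
    print(url_for('myblueprint.mypage', _external=True))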
We can also imagine ways to do it without the application, but I don't see any adequate or proper solution there. Despite that, I will still suggest the following dirty workaround.
Let's say you have the following blueprint with the following routes inside routes.py.
from flask import Blueprint

frontend = Blueprint('frontend', __name__)

@frontend.route('/mypage')
def mypage():
    return 'Hello'

@frontend.route('/some/other/page')
def someotherpage():
    return 'Hi'

@frontend.route('/wow/<a>')
def wow(a):
    return f'Hi {a}'
You could use the inspect library to get the source code of a view function and then parse it in order to build the URL.
import inspect
import re

BASE_URL = "https://example.org"

class FailToGetUrlException(Exception):
    pass

def get_url(function, complete_url=True):
    source = inspect.getsource(function)
    lines = source.split("\n")
    for line in lines:
        r = re.match(r'^@[a-zA-Z]+\.route\((["\'])([^\'"]+)\1', line)
        if r:
            if complete_url:
                return BASE_URL + r.group(2)
            else:
                return r.group(2)
    raise FailToGetUrlException

from routes import *

print(get_url(mypage))
print(get_url(someotherpage))
print(get_url(wow).replace('<a>', '456'))
Output:
https://example.org/mypage
https://example.org/some/other/page
https://example.org/wow/456

Optimise python function fetching multi-level json attributes

I have a three-level JSON file, and I am fetching the values of some attributes from each of the three levels. At the moment the execution time of my code is pathetic: it takes about 2-3 minutes to get the results onto my web page, and I will have a much larger JSON file to deal with in production.
I am new to Python and Flask and haven't done much web programming. Please suggest ways I could optimise the code below. Thanks for the help, much appreciated.
import json
import urllib2
import flask
from flask import request
def Backend():
    url = 'http://localhost:8080/surveillance/api/v1/cameras/'
    response = urllib2.urlopen(url).read()
    response = json.loads(response)
    components = list(response['children'])
    urlComponentChild = []
    for component in components:
        urlComponent = str(url + component + '/')
        responseChild = urllib2.urlopen(urlComponent).read()
        responseChild = json.loads(responseChild)
        camID = str(responseChild['id'])
        camName = str(responseChild['name'])
        compChildren = responseChild['children']
        compChildrenName = list(compChildren)
        for compChild in compChildrenName:
            href = str(compChildren[compChild]['href'])
            ID = str(compChildren[compChild]['id'])
            urlComponentChild.append([href, ID])
    myList = []
    for each in urlComponentChild:
        response = urllib2.urlopen(each[0]).read()
        response = json.loads(response)
        url = each[0] + '/recorder'
        responseRecorder = urllib2.urlopen(url).read()
        responseRecorder = json.loads(responseRecorder)
        username = str(response['subItems']['surveillance:config']['properties']['username'])
        password = str(response['subItems']['surveillance:config']['properties']['password'])
        manufacturer = str(response['properties']['Manufacturer'])
        model = str(response['properties']['Model'])
        status = responseRecorder['recording']
        myList.append([each[1], username, password, manufacturer, model, status])
    return myList

APP = flask.Flask(__name__)

@APP.route('/', methods=['GET', 'POST'])
def index():
    """ Displays the index page accessible at '/' """
    if request.method == 'GET':
        return flask.render_template('index.html', response=Backend())

if __name__ == '__main__':
    APP.debug = True
    APP.run(port=62000)
OK, caching. What we're going to do is return values to the user instantly based on data we already have, rather than generating new data every time. This means the user might get slightly less up-to-date data than is theoretically possible, but the data they do receive arrives as quickly as possible given the system you're using.
So we'll keep your backend function as it is. Like I said, you could certainly speed it up with multithreading (if you're still interested in that, the ten-second version is that I would use grequests to fetch data from a list of URLs asynchronously).
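As a concrete illustration of that suggestion, here is a minimal sketch of fetching a list of URLs concurrently with grequests; the fetch_all helper and its JSON handling are my own illustration, not part of your code, so the parsing would need adapting to whatever structure Backend() actually expects.

import json
import grequests

def fetch_all(urls):
    # Build unsent requests, then send them all concurrently.
    pending = (grequests.get(u) for u in urls)
    responses = grequests.map(pending)
    # Parse each JSON body; a failed request comes back as None.
    return [json.loads(r.content) for r in responses if r is not None]

Inside Backend() you could build the full list of child URLs first and then replace the per-URL urllib2.urlopen calls with a single fetch_all(urls) round trip.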
But rather than calling it every time a user requests data, we'll just call it routinely every once in a while. This is almost certainly something you'd want to do eventually anyway, because it means you don't have to generate brand-new data for each user, which is extremely wasteful. We'll just keep some data on hand in a variable, update that variable as often as we can, and return whatever's in that variable every time we get a new request.
from threading import Thread
from time import sleep
data = None
def Backend():
    .....

def main_loop():
    global data
    while True:
        sleep(LOOP_DELAY_TIME_SECONDS)
        data = Backend()

APP = flask.Flask(__name__)

@APP.route('/', methods=['GET', 'POST'])
def index():
    """ Displays the index page accessible at '/' """
    if request.method == 'GET':
        # Return whatever data we currently have cached
        return flask.render_template('index.html', response=data)

if __name__ == '__main__':
    data = Backend()  # Need to make sure we grab data before we start the server so we never return None to the user
    Thread(target=main_loop).start()  # Loop and grab new data at every loop
    APP.debug = True
    APP.run(port=62000)
DISCLAIMER: I've used Flask and threading before for a few projects, but I am by no means an expert on them or on web development in general. Test this code before using it for anything important (or better yet, find someone who knows what they're doing before using it for anything important).
Edit: data will have to be a global, sorry about that - hence the disclaimer

How to retrieve an image from redis in python flask

I am trying to store an image in Redis, retrieve it, and send it to an HTML template. I am able to cache the image, but I don't know how to retrieve it and send it to the HTML template. This is the part of my code that does the caching and retrieving.
from urllib2 import Request, urlopen
import json
import redis
import urlparse
import os
from StringIO import StringIO
import requests
from PIL import Image
from flask import send_file
REDIS_URL = urlparse.urlparse(os.environ.get('REDISCLOUD_URL', 'redis://:#localhost:6379/'))
r = redis.StrictRedis(
    host=REDIS_URL.hostname, port=REDIS_URL.port,
    password=REDIS_URL.password)

class MovieInfo(object):
    def __init__(self, movie):
        self.movie_name = movie.replace(" ", "+")

    def get_movie_info(self):
        url = 'http://www.omdbapi.com/?t=' + self.movie_name + '&y=&plot=short&r=json'
        result = Request(url)
        response = urlopen(result)
        infoFromJson = json.loads(response.read())
        self._cache_image(infoFromJson)
        return infoFromJson

    def _cache_image(self, infoFromJson):
        key = "{}".format(infoFromJson['Title'])
        # Open redis.
        cached = r.get(key)
        if not cached:
            response = requests.get(infoFromJson['Poster'])
            image = Image.open(StringIO(response.content))
            r.setex(key, (60*60*5), image)
        return True

    def get_image(self, key):
        cached = r.get(key)
        if cached:
            image = StringIO(cached)
            image.seek(0)
            return send_file(image, mimetype='image/jpg')

if __name__ == '__main__':
    M = MovieInfo("Furious 7")
    M.get_movie_info()
    M.get_image("Furious 7")
Any help on the retrieving part would be appreciated. Also, what's the best way to send the image file from the cache to an HTML template in Flask?
What you saved in Redis is a string,
something like: '<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x475 at 0x4874090>'.
response.content is the raw data; use Image.frombytes() to get an Image object back.
Check here: Doc
You can't create nested structures in Redis, meaning you can't (for example) store a native redis list inside a native redis hash-map. If you really need nested structures, you might want to just store a JSON-blob (or something similar) instead. Another option is to store an "id"/key to a different redis object as the value of the map key, but that requires multiple calls to the server to get the full object.
Try this:
response = requests.get(infoFromJson['Poster'])
# Create a string buffer, then store its raw data in Redis.
output = StringIO(response.content)
r.setex(key, (60*60*5), output.getvalue())
output.close()
see: how-to-store-a-complex-object-in-redis-using-redis-py
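To cover the retrieval half of the question as well, here is a minimal sketch of a Flask route that pulls the cached bytes back out of Redis and serves them to the browser. The route name, Redis connection details, and key scheme are assumptions for illustration, and it presumes the poster was stored as raw bytes as in the snippet above.

import redis
from StringIO import StringIO
from flask import Flask, send_file

app = Flask(__name__)
r = redis.StrictRedis(host='localhost', port=6379)

@app.route('/poster/<title>')
def poster(title):
    # The key is the movie title, matching the caching example above.
    cached = r.get(title)
    if cached is None:
        return 'Not cached', 404
    # Wrap the raw bytes in a file-like object so send_file can stream them.
    return send_file(StringIO(cached), mimetype='image/jpeg')

A template can then reference the image with an ordinary <img> tag, e.g. <img src="{{ url_for('poster', title=movie_title) }}">.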

Unable to parse Yahoo Finance API JSON Data? Code included (Python Flask)

I'm a bit new to JSON... Does anyone know how to properly iterate through the response and grab the symbol and change, for example? I've tried wrapping everything in json.loads and using strings, but I keep getting errors about tuples. FYI, I normally use ticker inside the string, but I changed it to YHOO for this question for the convenience of anyone trying to run the same code.
from flask import Flask
from flask.ext.compress import Compress
from flask import render_template
from httplib2 import Http
import json
http = Http()
app = Flask(__name__)
Compress(app)
app.config['DEBUG'] = True
app.config['TESTING'] = True
@app.route('/<ticker>', methods=['GET'])
def check(ticker):
    yahoo_api = http.request("http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20IN%20(%22YHOO%22)&format=json&env=http://datatables.org/alltables.env")
    return yahoo_api[1]

if __name__ == '__main__':
    app.run()
yahoo_api[1] is a string; use json.loads to parse the JSON.
import json
from httplib2 import Http
yahoo_api = Http().request('http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20IN%20(%22YHOO%22)&format=json&env=http://datatables.org/alltables.env')
yahoo_json = json.loads(yahoo_api[1])
change = yahoo_json['query']['results']['quote']['Change']
symbol = yahoo_json['query']['results']['quote']['symbol']
Another way is to use requests; it is easy to use and you don't have to worry about decoding the JSON yourself.
import requests
r = requests.get('http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20IN%20%28%22YHOO%22%29&format=json&env=http://datatables.org/alltables.env')
change = r.json()['query']['results']['quote']['Change']
symbol = r.json()['query']['results']['quote']['symbol']
I'd think you might have forgotten to take the second part of the tuple (the content), although that seems unlikely since you do exactly that in the return statement. Or maybe you forgot the UTF-8 decode?
import json
import pprint
from httplib2 import Http
http = Http()
url = "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%20IN%20(%22YHOO%22)&format=json&env=http://datatables.org/alltables.env"
yahoo_api = http.request(url)
result = json.loads(yahoo_api[1].decode('utf-8'))
pprint.pprint(result)

Retrieving Twitter data on the fly

Our company is trying to read in live streams of data entered by random users, e.g., a random user sends off a tweet saying "ABC company".
Seeing as you can use a Twitter client to search for that text, I assume it's possible to aggregate all such tweets without using a client, i.e., to stream them live to a file without relying on hashtags.
What's the best way to do this? And if you've done this before, could you share your script? I reckon the simplest way would be a Ruby/Python script left running, but my understanding of Ruby/Python is limited at best.
Kindly help?
Here's a bare minimum:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import twitter
from threading import *
from os import _exit, urandom
from time import sleep
from logger import *
import unicodedata
## Based on: https://github.com/sixohsix/twitter
class twitt(Thread):
    def __init__(self, tags=None, *args, **kwargs):
        self.consumer_key = '...'
        self.consumer_secret = '...'
        self.access_key = '...'
        self.access_secret = '...'
        self.encoding = 'iso-8859-15'
        self.args = args
        self.kwargs = kwargs
        self.searchapi = twitter.Twitter(domain="search.twitter.com").search
        Thread.__init__(self)
        self.start()

    def search(self, tag):
        try:
            return self.searchapi(q=tag)['results']
        except:
            return {}

    def run(self):
        while 1:
            sleep(3)
To use it, do something like:
if __name__ == "__main__":
t = twitt()
print t.search('#DHSupport')
t.alive = False
Note: the only reason this is threaded is that it's a piece of code I had lying around from other projects; it gives you an idea of how to work with the API and perhaps build a background service to fetch search results on Twitter.
There's a lot of crap in my original code so the structure might look a bit odd.
Note that you don't really need the consumer keys etc. for just a search, but you will need an OAuth login for more features such as posting or checking messages.
The only two things you really need are:
import twitter
print twitter.Twitter(domain="search.twitter.com").search(q='#hashtag')['results']
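Since the question is specifically about streaming tweets live rather than polling search, here is a rough sketch using the same sixohsix twitter library's streaming interface. The credentials are placeholders, the track phrase is just the example from the question, and Twitter's streaming endpoints have changed over the years, so treat this as a starting point rather than a drop-in solution.

from twitter import TwitterStream, OAuth

# OAuth credentials are required for the streaming API (placeholders here).
auth = OAuth('access_key', 'access_secret', 'consumer_key', 'consumer_secret')
stream = TwitterStream(auth=auth)

# Follow the live stream for tweets containing the phrase, no hashtag needed.
for tweet in stream.statuses.filter(track='ABC company'):
    print tweet.get('text')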
