Please, I need help with my Django project. The code I have here takes screenshots and saves them in my media folder, and I want to display the pictures on my HTML page, but it's giving me issues; I think I'm missing a lot.
Here is the Python code that takes the screenshots:
from django.http import HttpResponse
from django.shortcuts import render
from django.conf import settings
from django.db import models
import time, random, os
import pyautogui
from shots.models import pictures
from shots.forms.forms import DocumentForm

def button(request):
    return render(request, 'index.html')

def output(request):
    while True:
        myScreenshot = pyautogui.screenshot()
        name = random.randrange(1, 1000)
        full_name = str(name) + ".jpg"
        full_path = os.path.join(settings.MEDIA_ROOT, 'shots', full_name)
        myScreenshot.save(full_path)
        # wait a random time between 3 and 6 seconds before the next shot
        random_time = random.randrange(3, 6)
        time.sleep(random_time)
    # unreachable: the loop above never breaks
    return render(request, 'task.html')

def stopshot(request):
    os.system("pause")
The Python code runs on the server, and the user connects to the server from a client. If you want to take screenshots, that should be done by the client, not the server, since the server is not the user's computer.
Check out this question to see how you can take screenshots from the client using JavaScript.
I have this Python code, and I am trying to display its output on a static page using Flask instead of the console.
from __future__ import print_function
from private import private, private, private

optimizeit = get_optimizeit(web.site, private.private)
optimizeit.load_data_from_CSV("/path.../to..cvs.csv")
data = optimizeit.get_data_by_name('somename')
data = optimizeit.data[0]
data.max_exposure = 0.5
generatedata = optimizeit.optimizeit(4)
for datafield in generatedata:
    print(datafield)
Instead of printing, I want to display this on a simple Flask page. I tried a few things and just can't figure out the best way to do it.
EDIT: What I tried
from __future__ import print_function
import flask
from private import private, private, private
import time

app = flask.Flask(__name__)

@app.route('/sitea')
def index():
    def inner():
        optimizeit = get_optimizit(website.site12, private.someprivate)
        optimizer.load_players_from_CSV("/mypath to csv.../.csv")  # import csv
        data = optimizeit.datas[0]  # optimize that data
        data.max_exposure = 0.5  # set some exposure to that data
        data_generator = optimizeit.optimizeit(4)
        for datalive in datalive_generator:
            return datalive
    return flask.Response(inner(), mimetype='text/html')  # text/html is required for most browsers to show the partial page immediately

app.run(debug=True)
EDIT 2: THIS WORKED!
from __future__ import print_function
import flask
from private import private, private, private
import time

app = flask.Flask(__name__)

@app.route('/sitea')
def index():
    def inner():
        optimizeit = get_optimizit(website.site12, private.someprivate)
        optimizer.load_players_from_CSV("/mypath to csv.../.csv")  # import csv
        data = optimizeit.datas[0]  # optimize that data
        data.max_exposure = 0.5  # set some exposure to that data
        data_generator = optimizeit.optimizeit(4)
        for datalive in datalive_generator:
            yield '%s<br/>\n' % datalive
    return flask.Response(inner(), mimetype='text/html')  # text/html is required for most browsers to show the partial page immediately
Here
    for datalive in datalive_generator:
        return datalive
this just returns the first item of datalive_generator and then exits the function, never to return again. You probably meant yield datalive; that way it will keep streaming output to the response. In the meantime, you'll want to read up on the difference between generators and normal functions in Python.
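The difference can be seen with a minimal, self-contained sketch (the names below are illustrative, not from the original code):

```python
def first_only(items):
    for item in items:
        return item  # exits the function on the very first iteration


def stream_all(items):
    for item in items:
        yield '%s<br/>\n' % item  # produces a value, then resumes here


values = ['a', 'b', 'c']
print(first_only(values))        # only 'a' comes back
print(list(stream_all(values)))  # every item, formatted, one at a time
```

Because `stream_all` is a generator, Flask can iterate over it and push each chunk to the browser as it is produced, which is exactly what the streaming-response pattern relies on.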
I'm pretty new to Pandas and Flask, and I'm trying to use them to output a summarised version of a CSV containing survey feedback that I can email to users periodically.
As a standalone function it works as long as I give it a specified input file (e.g. 'users/sample.csv') and outfile, but when it runs as part of the application, on a file uploaded through the HTML form, it fails with:
TypeError: csuppfb() takes at least 2 arguments (0 given)
Essentially I want to pass the uploaded file to the function and have Pandas do its thing, but it never gets that far. Below is the code:
import re, os
import beatbox
import pandas as pd
import numpy as np
import argparse
from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML
from os.path import isfile, join
from flask import Flask, request, redirect, url_for, render_template, json as fjson, send_from_directory
from werkzeug import secure_filename
from mapping import Autotagging, Manualtagging
from defs import *

UPLOAD_FOLDER = './uploads'
PIVOT_FOLDER = './pivot'
ALLOWED_EXTENSIONS = set(['csv'])

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config['PIVOT_FOLDER'] = PIVOT_FOLDER

@app.route('/feedback', methods=['GET', 'POST'])
def feedback():
    if request.method == 'POST':
        file = request.files['file']
        if file and allowed_file(file.filename):
            filename = randomword(6) + '_' + secure_filename(file.filename)
            file.save(os.path.join(app.config['PIVOT_FOLDER'], filename))
            return redirect(url_for('csuppfb', df=filename))
    return render_template('mappingtest.html')

@app.route('/csuppfb', methods=['POST', 'GET'])
def csuppfb(df, infile, index_list=["Case Owner", "Case Number", "Support Survey - Service rating"], value_list=["Age (Hours)"]):
    """
    Creating a pivot table from the raw dataframe and returning it as a dataframe
    """
    table = pd.pivot_table(df, index=index_list, values=value_list,
                           aggfunc=[np.sum, np.mean], fill_value=0)
    return table

def get_summary_stats(df, product):
    """
    Get a stats summary
    """
    results.append(df[df["Support Survey - Service rating"] == product]["Closed"].mean())
    results.append(df[df["Support Survey - Service rating"] == product]["Age (Hours)"].mean())
    return results

def dataform(df):
    """
    Take the dataframe and output it in html to output a pdf report or display on a web page
    """
    df = pd.read_csv(filename)
    csuppreport = pivot_table(df, filename)
    agent_df = []
    for agent in csuppreport.index.get_level_values(0).unique():
        agent_df.append([agent, csuppreport.xs(agent, level=0).to_html()])
    env = Environment(loader=FileSystemLoader('.'))
    template = env.get_template("csupp.html")
    template_vars = {"title": "CSUPP FB REPORT",
                     "Excellent": get_summary_stats(df, "Excellent"),
                     "Good": get_summary_stats(df, "Good"),
                     "csupp_pivot_table": csuppreport.to_html(),
                     "agent_detail": agent_df}
    html_out = template.render(template_vars)
    HTML(string=html_out).write_pdf(args.outfile.name, stylesheets=["style.css"])
    return render_template('csupp.html')
What's the best way to have the file I've uploaded be used as the dataframe argument in def csuppfb(df, infile, ...)?
Any advice would be very much appreciated. I've a feeling it's something glaringly obvious I'm missing.
You need to use the args object from request, which contains all the URL parameters:
http://flask.pocoo.org/docs/0.10/quickstart/#the-request-object
See this basic example:
@app.route('/csuppfb', methods=['POST', 'GET'])
def csuppfb():
    if request.args['df']:
        df = request.args['df']
        # Do your pandas stuff here, instead of returning the filename :D
        return str(df)
    else:
        return 'nofile'
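Note that request.args['df'] is only the filename that feedback() redirected with, not a DataFrame. To get an actual DataFrame, you'd rebuild the path where feedback() saved the upload and read it from disk. A small sketch of that step (resolve_upload and its validation logic are illustrative helpers, not from the original code):

```python
import os

ALLOWED_EXTENSIONS = {'csv'}


def resolve_upload(name, folder):
    """Map an uploaded filename back to its saved path.

    Returns None when the name is empty or not an allowed CSV, mirroring
    the allowed_file() check used at upload time. This helper is a sketch,
    not part of the original application.
    """
    if not name or '.' not in name:
        return None
    if name.rsplit('.', 1)[1].lower() not in ALLOWED_EXTENSIONS:
        return None
    return os.path.join(folder, name)


# Inside the view you would then do something like:
#   path = resolve_upload(request.args['df'], app.config['PIVOT_FOLDER'])
#   df = pd.read_csv(path)  # now df really is a DataFrame, not a filename
#   table = pd.pivot_table(df, ...)
```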
Let's say I created a basic web app for converting YouTube videos to GIFs.
This is my Django views code:
from django.shortcuts import render
from django.http import HttpResponse
from django.template import RequestContext
from django.shortcuts import render_to_response
import pafy
from PIL import Image
from moviepy.editor import *
import os

def generate(request):  # the view's def line was missing from the post
    context = RequestContext(request)
    if request.GET:
        link = request.GET.get('link', '')
        url = link
        try:
            video = pafy.new(url)
        except:
            return render_to_response("generated/invalid.html", context)
        video = video.getbest()
        if os.path.isfile(os.getcwd() + '\\static\\' + video.title + '.gif'):
            gifpath = video.title + '.gif'
            context_dict = {'staticpath': gifpath}
            return render_to_response("generated/generated.html", context_dict, context)
        video.download(quiet=True)
        clip = VideoFileClip(os.getcwd() + "\\" + video.title + '.' + video.extension).resize(0.4)
        clip.write_gif(os.getcwd() + '\\static\\' + video.title + '.gif', fps=9, opt='optimizeplus', loop=0)
        gifpath = video.title + '.gif'
        context_dict = {'staticpath': gifpath}
        return render_to_response("generated/generated.html", context_dict, context)
It works fine, but after I convert 2-5 videos (the number is random) it raises an AttributeError; from the looks of it, the cause is in the VideoFileClip call.
It works again if I restart the Django server, which is strange!
In the Django debugger, the exception is [WinError 6] The handle is invalid.
I tried it on Windows 8.1 64-bit and Windows 7 32-bit.
UPDATE: I think the GIF converter is causing this. After I turned on the download status, pafy confirmed that the video was downloaded (look at the topmost and bottom requests).
What makes it strange is that the error pops up after a random number of runs. Do you think it's my code's fault or the library's?
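One common cause of "handle is invalid" errors on Windows is file handles that are never released: each VideoFileClip opens a reader subprocess, and the view above never closes it. A hedged sketch of the fix is to guarantee cleanup in a try/finally wrapper (newer moviepy releases provide clip.close(); on older versions you may need clip.reader.close() instead; verify against your moviepy version):

```python
def write_gif_and_close(clip, write):
    """Run the conversion, then always release the clip's file handles.

    `clip` is assumed to expose a close() method (as moviepy's
    VideoFileClip does in later releases). `write` is whatever callable
    performs the conversion. This wrapper is a sketch of the cleanup
    pattern, not moviepy API.
    """
    try:
        return write(clip)
    finally:
        clip.close()  # handles are released even if write_gif raises


# Usage in the view would look roughly like:
#   clip = VideoFileClip(path).resize(0.4)
#   write_gif_and_close(clip, lambda c: c.write_gif(gif_path, fps=9))
```

With the handles closed after every request, the leak that eventually exhausts Windows file handles should stop accumulating.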
I use cv.CaptureFromCAM in a Django app, but my script blocks at this command. Without Django it works, and I can see my webcam turn on.
Here's my script:
import cv, Image

def takePhoto():
    """Return a PIL img"""
    print "Taking photo"
    cv_img = cv.QueryFrame(cv.CaptureFromCAM(0))
    pil_img = Image.fromstring("L", cv.GetSize(cv_img), cv_img.tostring())
    return pil_img
Does someone know why I can't use a method like cv.CaptureFromCAM in Django's scripts?
PS: I already tried decomposing it into several lines...
Resolved:
I put the cv.CaptureFromCAM result in a variable in settings.py so that it launches at website start-up, then access that variable to take a photo. For example:
In settings.py:
import cv
CAM = cv.CaptureFromCAM(0)
In views.py:
from django.http import HttpResponse
from django.conf import settings
import cv, Image

def instantPhoto(request):
    cv_img = cv.QueryFrame(settings.CAM)
    pil_img = Image.fromstring("RGB", cv.GetSize(cv_img), cv_img.tostring())
    response = HttpResponse(mimetype="image/png")
    pil_img.save(response, "PNG")
    return response
What is the best way to remove all of the blobs from the Blobstore? I'm using Python.
I have quite a lot of blobs and I'd like to delete them all. I'm currently doing the following:
class deleteBlobs(webapp.RequestHandler):
    def get(self):
        all = blobstore.BlobInfo.all()
        more = (all.count() > 0)
        blobstore.delete(all)
        if more:
            taskqueue.add(url='/deleteBlobs', method='GET')
This seems to use tons of CPU and, as far as I can tell, do nothing useful.
I use this approach:
import datetime
import logging
import re
import urllib

from google.appengine.ext import blobstore
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.ext.webapp import util
from google.appengine.ext.webapp import template
from google.appengine.api import taskqueue
from google.appengine.api import users

class IndexHandler(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello. Blobstore is being purged.\n\n')
        try:
            query = blobstore.BlobInfo.all()
            index = 0
            to_delete = []
            blobs = query.fetch(400)
            if len(blobs) > 0:
                for blob in blobs:
                    blob.delete()
                    index += 1
                hour = datetime.datetime.now().time().hour
                minute = datetime.datetime.now().time().minute
                second = datetime.datetime.now().time().second
                self.response.out.write(str(index) + ' items deleted at ' + str(hour) + ':' + str(minute) + ':' + str(second))
                if index == 400:
                    self.redirect("/purge")
        except Exception, e:
            self.response.out.write('Error is: ' + repr(e) + '\n')
            pass

APP = webapp.WSGIApplication(
    [
        ('/purge', IndexHandler),
    ],
    debug=True)

def main():
    util.run_wsgi_app(APP)

if __name__ == '__main__':
    main()
My experience is that deleting more than 400 blobs at once fails, so I let it reload after every 400. I tried blobstore.delete(query.fetch(400)), but I think there's a bug right now: nothing happened at all, and nothing was deleted.
You're passing the query object to the delete method, which will iterate over it fetching it in batches, then submit a single enormous delete. This is inefficient because it requires multiple fetches, and won't work if you have more results than you can fetch in the available time or with the available memory. The task will either complete once and not require chaining at all, or more likely, fail repeatedly, since it can't fetch every blob at once.
Also, calling count executes the query just to determine the count, which is a waste of time since you're going to try fetching the results anyway.
Instead, you should fetch results in batches using fetch, and delete each batch. Use cursors to set the next batch and avoid the need for the query to iterate over all the 'tombstoned' records before finding the first live one, and ideally, delete multiple batches per task, using a timer to determine when you should stop and chain the next task.
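The batch-until-deadline control flow described above can be sketched generically. In the sketch below, fetch_batch, delete_batch, and chain_task are stand-ins for the App Engine calls (roughly query.fetch with a cursor, blobstore.delete, and taskqueue.add); this shows only the looping logic, not the real API:

```python
import time


def purge_in_batches(fetch_batch, delete_batch, chain_task,
                     batch_size=400, deadline_secs=25):
    """Delete results in batches until none remain or the deadline nears.

    fetch_batch(n) returns the next batch of up to n keys (resuming from
    a cursor internally), delete_batch(keys) deletes one batch, and
    chain_task() re-enqueues this handler to finish the job in a fresh
    task. All three are hypothetical stand-ins for App Engine calls.
    """
    start = time.time()
    deleted = 0
    while True:
        keys = fetch_batch(batch_size)
        if not keys:
            return deleted          # nothing left; no need to chain
        delete_batch(keys)
        deleted += len(keys)
        if time.time() - start > deadline_secs:
            chain_task()            # hand the remainder to the next task
            return deleted
```

Deleting several batches per task amortizes the task-queue overhead, and chaining only when the deadline approaches means a small blobstore is purged in a single task while a huge one still finishes eventually.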