I am using Flask and MySQL, and I have an issue with updated data not showing up after a query executes.
Currently, I delete the row and redirect back to the admin page so that a refreshed version of the site is rendered. However, I still get old entries showing up in the front-end table; after refreshing manually, everything works normally. Sometimes the data is also not sent to the front end at all, which I assume happens because the template is rendered before the MySQL execution finishes and an empty list is sent forward.
On the Python side I have:
#app.route("/admin")
def admin_main():
query = "SELECT * FROM categories"
conn.executing = True
results = conn.exec_query(query)
return render_template("src/admin.html", categories = results)
@app.route('/delete_category', methods=['POST'])
def delete_category():
    id = request.form['category_id']
    query = "DELETE FROM categories WHERE cid = '{}'".format(id)
    conn.delete_query(query)
    return redirect("admin", code=302)
admin_main is the main page. I tried adding some sort of "semaphore" system, only executing once "conn.executing" became false, but that did not work out either. I also tried playing around with async and await, but no luck ("The view function did not return a valid response").
I am out of options at this point and do not really know how to approach the problem. Any help is appreciated!
I figured out that the problem was not with the data not being read properly, but with the page not being refreshed. Though the console prints a GET request for the page, the browser does not navigate since it is already on that same page.
The only workaround I am currently working on is a Socket.IO implementation to update the content dynamically.
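For reference, a hedged sketch of that workaround using Flask-SocketIO (an assumption; the question does not say which Socket.IO library is used): after the delete, the server broadcasts an event so client-side JavaScript can refresh the table without a full page reload.

from flask import redirect, request, url_for
from flask_socketio import SocketIO

socketio = SocketIO(app)  # app is the Flask app from the question

@app.route('/delete_category', methods=['POST'])
def delete_category():
    cid = request.form['category_id']
    # conn is the question's database wrapper; the format() call is still injection-prone
    conn.delete_query("DELETE FROM categories WHERE cid = '{}'".format(cid))
    # Broadcast to connected clients so their category table can be re-rendered in JS
    socketio.emit('categories_changed', {'deleted': cid})
    return redirect(url_for('admin_main'), code=302)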
I'd like to mark an email as read from my python code. I'm using
from exchangelib import Credentials, Account
my_account = Account(...)
credentials = Credentials(...)
to get access to the account. This part works great. I then get into my desired folder using this
var1 = my_account.root / 'branch1' / 'desiredFolder'
Again, this works great. The following is where marking it as read seems to fail:
item = var1.filter(is_read=False).values('body')
for i, body in enumerate(item):
    # Code doing stuff
    var1.filter(is_read=False)[i].is_read = True
    var1.filter(is_read=False)[i].save(update_fields=['is_read'])
I've tried the tips and answers from this post Mark email as read with exchangelib, but the emails still show as unread. What am I doing wrong?
I think the problem is with the last line, where you call save(): after you set is_read to True on an element, that element no longer appears in var1.filter(is_read=False)[i], so the object you call save() on is not the one you actually modified.
I think this will work.
for msg in my_account.inbox.filter(is_read=False):
    msg.is_read = True
    msg.save(update_fields=['is_read'])
To expand on @thangnd's answer, the reason your code doesn't work is that you are calling .save() on a different Python object than the object you are setting the is_read flag on. Every time you call var1.filter(is_read=False)[i], a new query is sent to the server, and a new Python object is created.
Additionally, since you didn't specify a sort order (and new emails may be incoming while the code is running), you may not even be hitting the same email each time you call var1.filter(is_read=False)[i].
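A minimal sketch that addresses both points, assuming var1 from the question: fetch the unread items once into a list, with an explicit sort order, and then mark those exact objects as read.

unread = list(var1.filter(is_read=False).order_by('-datetime_received'))
for msg in unread:
    # ... do something with msg.body ...
    msg.is_read = True
    msg.save(update_fields=['is_read'])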
I am trying to retrieve a tree graph of parent/child issues (epic -> story -> task) from JIRA, using Python 3.10 and jira 3.1.1.
For a project with 400 issues it takes minutes to retrieve the result.
Is there a way to improve this code for better performance?
The result is displayed in a Flask front-end, and a response time of two minutes is not pleasant for users.
jira = JIRA(options={'server': self.url, 'verify': False}, basic_auth=(self.username, self.password))
for singleIssue in jira.search_issues(jql_str=querystring, maxResults=False):
    issue = jira.issue(singleIssue.key)
    links = issue.fields.issuelinks
    for link in links:
        if hasattr(link, 'inwardIssue'):
            item['inward_issue'] = link.inwardIssue.key
        if hasattr(link, 'outwardIssue'):
            item['outward_issue'] = link.outwardIssue.key
        item['typename'] = link.type.name
I realised this is more or less a matter of handling long queries in Flask.
It should be solved either by asynchronous calls or queuing.
Limit the fields you are requesting via the "fields=[ 'key' ]" parameter to include only those which you need.
In your example you are fetching the whole issue twice: once from the .search_issues() iterator every time round the loop (as expected), and (unnecessarily) again in the .issue() call on the following line.
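A hedged sketch of both suggestions applied to the loop from the question (reusing the jira and querystring names from there): ask search_issues() for only the issuelinks field, and use the issues it returns directly instead of re-fetching each one with jira.issue().

for issue in jira.search_issues(jql_str=querystring, fields=['issuelinks'], maxResults=False):
    for link in issue.fields.issuelinks:
        item = {}  # stand-in for however the question builds its result records
        if hasattr(link, 'inwardIssue'):
            item['inward_issue'] = link.inwardIssue.key
        if hasattr(link, 'outwardIssue'):
            item['outward_issue'] = link.outwardIssue.key
        item['typename'] = link.type.name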
Below is my current code. It connects successfully to the organization. How can I fetch the results of a query in Azure like they have here? I know this was solved but there isn't an explanation and there's quite a big gap on what they're doing.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql
personal_access_token = 'xxx'
organization_url = 'zzz'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()
results = wit_client.query_by_id("my query ID here")
P.S. Please don't link me to the github or documentation. I've looked at both extensively for days and it hasn't helped.
Edit: I've added the results line that successfully gets the query. However, it returns a WorkItemQueryResult class which is not exactly what is needed. I need a way to view the column and results of the query for that column.
So I've figured this out in probably the most inefficient way possible, but hope it helps someone else and they find a way to improve it.
The issue with the WorkItemQueryResult class stored in variable "result" is that it doesn't allow the contents of the work item to be shown.
So the goal is to use the get_work_item method, which requires the id field; you can get that (in a rather roundabout way) through item.target.id from the results' work_item_relations. The code below builds on the code above.
for item in results.work_item_relations:
    id = item.target.id
    work_item = wit_client.get_work_item(id)
    fields = work_item.fields
This gets the id from every work item in your result class and then grants access to the fields of that work item, which you can access by fields.get("System.Title"), etc.
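For example, a small sketch building on that loop (reusing results and wit_client from above) that pulls a single column out of every work item in the query result:

titles = []
for item in results.work_item_relations:
    work_item = wit_client.get_work_item(item.target.id)
    # "System.Title" is one of the standard field reference names
    titles.append(work_item.fields.get("System.Title"))
print(titles)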
How do you properly set cookies in Django?
I have tried this:
re=HttpResponse('Hello world')
re.set_cookie('key','value')
and also this:
request.COOKIES['key']='value'
None of these are working and I have yet to figure out why.
Edit 1
Here is what my code looks like so far:
lang = UserData.objects.get(user_id=request.user.id)
lang.pref_language = request.POST.get('lang', '')
re = HttpResponse('Hello world')
re.set_cookie('dddd', request.POST.get('lang', ''))
request.COOKIES['ffff'] = request.POST.get('lang', '')
lang.save()
return HttpResponse('Updated')
So the language is being saved every time the function runs, but the cookies are not being set properly.
I finally figured out that the reason the cookies were not set is that I was not returning my response object. Here is the final version:
re = HttpResponse('/')
re.set_cookie('language', request.POST.get('lang', ''))
return re
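A minimal sketch of the whole view with that fix applied, keeping the UserData model and the 'lang' POST field from the question; the view name update_language is hypothetical. The key point is that set_cookie() is called on the same response object that is returned.

from django.http import HttpResponse

def update_language(request):
    lang = UserData.objects.get(user_id=request.user.id)  # model from the question
    lang.pref_language = request.POST.get('lang', '')
    lang.save()

    response = HttpResponse('Updated')
    # The cookie is attached to this response object, and this same object is returned
    response.set_cookie('language', request.POST.get('lang', ''))
    return response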
I want a search feature on my website. On the output page, I am getting all the results on a single page; however, I want to distribute them across many pages (i.e. 100 results per page). For that, I am passing a default result range to url_for, but it isn't working. I know I am making a small error, but I am not catching it.
Here is my code below:
@app.route('/', methods=['GET', 'POST'])
def doSearch():
    entries = None
    error = None
    if request.method == 'POST':
        if request.form['labelname']:
            return redirect(url_for('show_results', results1='0-100', labelname=request.form['labelname']))
        else:
            error = 'Please enter any label to do search'
    return render_template('index.html', entries=entries, error=error)

@app.route('/my_search/<labelname>')
def show_results(labelname=None, resultcount=None, results1=None):
    if not session.get('user_id'):
        flash('You need to log-in to do any search!')
        return redirect(url_for('login'))
    else:
        time1 = time()
        if resultcount is None:
            total_count = g.db.execute(query_builder_count(tablename='my_data', nametomatch=labelname, isextension=True)).fetchall()[0][0]
        limit_factor = " limit %s ,%s" % (results1.split('-')[0], results1.split('-')[1])
        nk1 = g.db.execute(query_builder(tablename='my_data', nametomatch=labelname, isextension=True) + limit_factor)
        time2 = time()
        entries = []
        maxx_count = None
        for rows in nk1:
            if maxx_count is None:
                maxx_count = int(rows[0])
            entries.append({"xmlname": rows[1], 'xmlid': rows[2], "labeltext": rows[12]})
        return render_template('output.html', labelname=labelname, entries=entries, resultcount=total_count, time1=time2-time1, current_output=len(entries))
Here I want the output at a URL like "http://127.0.0.1:5000/my_search/assets?results1=0-100".
Also, if I edit the URL in the browser because I want the next 100 results, I should get them at "http://127.0.0.1:5000/my_search/assets?results1=100-100".
Note: I am using SQLite as the backend, so I use "limit_factor" in my queries to limit the results. "query_builder" and "query_builder_count" are just simple functions that generate complex SQL queries.
But the error I am getting is "'NoneType' object has no attribute 'split'", and it stops at the "limit_factor" line.
Here limit_factor is just one filter that I have applied, but I want to apply more filters; for example, I want to search by location: "http://127.0.0.1:5000/my_search/assets?results1=0-100&location=asia".
Function parameters are mapped only to the route variables. That means in your case, the show_results function should have only one parameter and that's labelname. You don't even have to default it to None, because it always has to be set (otherwise the route won't match).
In order to get the query parameters, use flask.request.args:
from flask import request

@app.route('/my_search/<labelname>')
def show_results(labelname=None):
    results1 = request.args.get('results1', '0-100')
    ...
By the way, you had better not construct your SQL the way you do; use placeholders and variables instead. Your code is vulnerable to SQL injection, and you can't trust any input that comes from the user.
The correct way to do this depends on the actual database, but for example if you use MySQL, you would do this (note that I'm not using the % operator):
sql = ".... LIMIT %s, %s"
g.db.execute(sql, (limit_offset, limit_count))
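Since the question mentions SQLite as the backend, here is a hedged sketch of the same idea with SQLite's ? placeholders, reusing query_builder, results1 and labelname from the question:

offset, count = results1.split('-')
sql = query_builder(tablename='my_data', nametomatch=labelname, isextension=True) + " LIMIT ?, ?"
# SQLite's "LIMIT offset, count" form accepts bound parameters like any other expression
nk1 = g.db.execute(sql, (int(offset), int(count)))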