I'm using Django to create a website, and in one part of it I need two GET parameters in the URL: one called "search" and one called "page". I've tried the following:
return HttpResponseRedirect('/explore/?search=test/?page=1')
However, '/?page=1' is being included as part of the search value, and this is messing it up. Is there any way to keep them as two separate GET parameters, or will I have to fuse them into one?
Do it like this:
return HttpResponseRedirect('/explore/?search=test&page=1')
and in the view you can get both parameters with:
search = request.GET.get('search')
page = request.GET.get('page')
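For completeness, a minimal sketch of such a view (the view body here is hypothetical):

from django.http import HttpResponse

def explore(request):
    # Both values arrive in the same query string: /explore/?search=test&page=1
    search = request.GET.get('search', '')  # '' when the parameter is absent
    page = request.GET.get('page', '1')     # query-string values are strings
    return HttpResponse('search=%s, page=%s' % (search, page))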
I am trying to create a search page with buttons that can be clicked to filter the posts, like on this page: Splice Sounds (I think you need an account to view it, so I'll add screenshots).
To do this I think I need to pass a list so that I can filter by that list, but I can't find a way to do this.
Having a GET form for each genre (which is created by a for loop) would let me filter by one genre at a time, but I want to filter by multiple genres at once, so that won't work.
On the site that I linked to, they pass the genres/tags in the URL, so how could I do this in Django?
Also, I could make separate URL paths and link to those, but then I would have to do this for every combination of genres/tags, which would be far too many, so that's not an option.
The linked site passes tags through the URL like this: https://splice.com/sounds/search?sound_type=sample&tag=drums,kicks
here is some relevant code:
This is how I want to filter, which is why I need to pass a list of args:
for arg in args:
    Posts = Posts.filter(genres=arg)
urls
urlpatterns = [
    path('', views.find, name='find'),
    path('searchgenres=<genres_selected>', views.find_search, name='find_search'),
]
EDIT: I have tried this in many ways, such as using AJAX, but I couldn't get that to work well.
EDIT 2: I have changed the question to "How To Pass Only Selected Arguments Through URL".
To pass a list into a request you could:
Use HTML checkboxes and aggregate them into a list in the view
Use a single textbox and parse it in the view
If you obtain the genres from the request as a list, you can use Post.objects.filter(genre__in=genres).
It might also be helpful to know that Django allows complex lookups with Q objects (from django.db.models import Q). The | character represents OR, which enables complex filtering. For instance:
Posts.objects.filter(Q(genre='Pop') | Q(genre='Rock') | Q(genre='Jazz'))
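Putting the pieces together, a hypothetical view (the checkbox name, template, and query parameters are assumptions based on your snippets) could look like:

from django.shortcuts import render

def find_search(request):
    # /find/?genre=Pop&genre=Rock -> getlist() returns ['Pop', 'Rock'];
    # checkboxes sharing the same name produce exactly this shape.
    genres = request.GET.getlist('genre')
    posts = Posts.objects.all()
    if genres:
        posts = posts.filter(genres__in=genres)
    return render(request, 'find.html', {'posts': posts})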
I'm using Py-StackExchange to get a list of questions from Cross Validated. I need to filter by the titles of questions that include the word "keras".
This is my code. It takes a very long time to execute and ultimately returns nothing.
cv = stackexchange.Site(stackexchange.CrossValidated, app_key=user_api_key, impose_throttling=True)
cv.be_inclusive()

for q in cv.questions(pagesize=100):
    if "keras" in q.title:
        print('--- %s ---' % q.title)
        print(q.creation_date)
I checked the same query manually with a search and obtained the list of questions very quickly.
How can I do the same using Py-StackExchange?
You have two options:
Use this SEDE query. This will give you all questions which contain keras in their title on Cross Validated. However, note that SEDE is updated weekly.
Use the Stack Exchange API's /search/advanced method. This method has a title parameter which accepts:
text which must appear in returned questions' titles.
I haven't used Py-StackExchange before, so I don't know how it works. Therefore, in this example I'm going to use the StackAPI library (docs):
from stackapi import StackAPI

q_filter = '!4(L6lo9D9ItRz4WBh'
word_to_search = 'keras'

SITE = StackAPI('stats')
keras_qs = SITE.fetch('search/advanced',
                      filter=q_filter,
                      title=word_to_search)

print(keras_qs['items'])
print(f"Found {len(keras_qs['items'])} questions.")
The filter I'm using here is !4(L6lo9D9ItRz4WBh; you can change it or not provide one at all. There's no need to provide an API key (the library uses one), unless you have a reason to do so.
I'm quite a newbie in Python, especially in using the Gmaps API to get place details.
I want to search for places with these parameters:
places_result = gmaps.places_nearby(location=' -6.880270,107.60794', radius = 300, type = 'cafe')
But I actually want to get as much data as I can for the specific lat/lng and radius. So I tried the new parameter that the Google API provides: page_token. This is the relevant part of the documentation:
pagetoken — Returns up to 20 results from a previously run search. Setting a pagetoken parameter will execute a search with the same parameters used previously — all parameters other than pagetoken will be ignored.
https://developers.google.com/places/web-service/search
So I tried to get more data (the next page of data) with this function:
places_result = gmaps.places_nearby(location=' -6.880270,107.60794', radius = 300, type = 'cafe')
time.sleep(5)
place_result = gmaps.places_nearby(page_token = places_result['next_page_token'])
And this is my whole output function:
for place in places_result['results']:
    my_place_id = place['place_id']
    my_fields = ['name', 'formatted_address', 'business_status', 'rating',
                 'user_ratings_total', 'formatted_phone_number']
    places_details = gmaps.place(place_id=my_place_id, fields=my_fields)
    pprint.pprint(places_details['result'])
But unfortunately, when I run it I only get 20 place details (the max). I don't know whether my use of the page token parameter is correct, because the output never exceeds 20 results.
I'd appreciate any advice on how to solve this. Thank you very much :)
As stated in the documentation here:
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages.
So basically, what you are currently experiencing is an intended behavior. There is no way for you to get more than 20 results in a single nearby search query.
If a next_page_token was returned in your first nearby search response, it means that a second page of results is available.
To access this second page, just like you did, send another nearby search request, but this time use the pagetoken parameter and set its value to the next_page_token you got from the first response.
And if the response to your second nearby search query also contains a next_page_token, then the third (and last) page of results is available as well; you can access it the same way you accessed the second page.
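Putting that together, a rough sketch of the paging loop, reusing the same places_nearby calls from your code (the helper name is made up; the sleep gives the token a moment to become valid):

import time

def fetch_all_pages(gmaps, **search_params):
    # First page: up to 20 results
    response = gmaps.places_nearby(**search_params)
    results = list(response['results'])
    # Follow next_page_token while present (at most two more pages, 60 results total)
    while 'next_page_token' in response:
        time.sleep(2)  # the token is not accepted immediately after being issued
        response = gmaps.places_nearby(page_token=response['next_page_token'])
        results.extend(response['results'])
    return results

results = fetch_all_pages(gmaps, location='-6.880270,107.60794', radius=300, type='cafe')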
Going back to your query: I tried the parameters you specified, but I could only get around 9 results. Did you intend to set the radius parameter to only 300 meters?
I can get the filler variable from the URL below just fine.
url(r'^production/(?P<filler>\w{7})/$', views.Filler.as_view()),
In my view I can retrieve the filler as expected. However, if I try to use a URL like the one below:
url(r'^production/(?P<filler>)\w{7}/(?P<day>).*/$', views.CasesByDay.as_view()),
Both variables (filler, day) are blank.
You need to include the entire parameter in parentheses. It looks like you did that in your first example but not in the second.
Try:
url(r'^production/(?P<filler>\w{7})/(?P<day>.*)/$', views.CasesByDay.as_view()),
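Both named groups are then passed to the view as keyword arguments; a minimal sketch, assuming CasesByDay is a class-based view as your as_view() call suggests:

from django.http import HttpResponse
from django.views import View

class CasesByDay(View):
    def get(self, request, filler, day):
        # Named groups from the URL pattern arrive as keyword arguments
        return HttpResponse('filler=%s, day=%s' % (filler, day))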
See the official documentation for more information and examples: URL dispatcher
I have a Flask application. On a particular view, I show a table with about 100k rows in total. It's understandably taking a long time for the page to load, and I'm looking for ways to improve it. So far I've determined that I query the database and get the result fairly quickly; I think the problem lies in rendering the actual page. I found this page on streaming and am trying to work with that, but keep running into problems. I've tried the stream_template solution provided there with this code:
@app.route('/thing/matches', methods=['GET', 'POST'])
@roles_accepted('admin', 'team')
def r_matches():
    matches = Match.query.filter(
        Match.name == g.name).order_by(Match.name).all()
    return Response(stream_template('/retailer/matches.html',
                                    dashboard_title=g.name,
                                    match_show_option=True,
                                    match_form=form,
                                    matches=matches))

def stream_template(template_name, **context):
    app.update_template_context(context)
    t = app.jinja_env.get_template(template_name)
    rv = t.stream(context)
    rv.enable_buffering(5)
    return rv
The Match query is the one that returns 100k+ items. However, whenever I run this the page just shows up blank with nothing there. I've also tried the solution with streaming the data to a json and loading it via ajax, but nothing seems to be in the json file either! Here's what that solution looks like:
@app.route('/large.json')
def generate_large_json():
    def generate():
        app.logger.info("Generating JSON")
        matches = Product.query.join(Category).filter(
            Product.retailer == g.retailer,
            Product.match != None).order_by(Product.name)
        for match in matches:
            yield json.dumps(match)
    app.logger.info("Sending file response")
    return Response(stream_with_context(generate()))
Another solution I was looking at was pagination. That solution works well, except I need to be able to sort the entire dataset by column headers, and I couldn't find a way to do that without rendering the whole dataset in the table and then using jQuery for sorting/pagination.
The file I get by going to /large.json is always empty. Please help or recommend another way to display such a large data set!
Edit: I got the generate() part to work and updated the code.
The problem in both cases is almost certainly that you are hanging on building 100K+ Match items and storing them in memory. You will want to stream the results from the DB as well, using yield_per. However, only Postgres + psycopg2 support the necessary stream_results option (here's a way to do it with MySQL):
# Stream ten results at a time
matches = Match.query.filter(
    Match.name == g.name).order_by(Match.name).yield_per(10)
An alternative
If you are using Flask-SQLAlchemy you can make use of its Pagination class to paginate your query server-side and not load all 100K+ entries into the browser. This has the added advantage of not requiring the browser to manage all of the DOM entries (assuming you are doing the HTML streaming option).
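A minimal sketch of that approach (per_page and the template context are assumptions; paginate() is Flask-SQLAlchemy's query helper):

from flask import request, render_template

@app.route('/thing/matches')
def r_matches():
    page = request.args.get('page', 1, type=int)
    pagination = Match.query.filter(
        Match.name == g.name).order_by(Match.name).paginate(
        page=page, per_page=100, error_out=False)
    return render_template('retailer/matches.html',
                           matches=pagination.items,
                           pagination=pagination)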
See also
SQLAlchemy: Scan huge tables using ORM?
How to Use SQLAlchemy Magic to Cut Peak Memory and Server Costs in Half