I'm building an application in both Bottle and Flask to see which I am more comfortable with, as Django is too much 'batteries included'.
I have read through the routing documentation of both, which is very clear and understandable, but I am struggling to find a way of dealing with an unknown, possibly unlimited number of URL segments, e.g.:
http://www.example.com/seg1/seg2/seg3/seg4/seg5.....
I was looking at using something like @app.route('/<path:fullurl>'), then using a regex to remove unwanted characters and splitting the fullurl string into a list the same length as the number of segments, but this seems incredibly inefficient.
Most PHP frameworks seem to have a method of building an array of the segment variables regardless of their number, but none of Flask, Bottle or Django seems to have a similar option: I seem to need to specify an exact number of segments to capture variables. A couple of PHP CMSes seem to collect the first nine segments immediately as variables, and anything longer gets passed as a full path which is then broken down in the way I mentioned above.
Am I not understanding the way URL routing works? Is the string-splitting method really inefficient, or is it the best way to do it? Or is there a way of collecting an unknown number of segments straight into variables in Flask?
I'm pretty new to Python frameworks, so a five-year-old's explanation would help. Many thanks.
I'm fairly new to Flask myself, but from what I've worked out so far, I'm pretty sure that the idea is that you have lots of small route/view methods, rather than one massive great switching beast.
For example, if you have urls like this:
http://example.com/unit/57/
http://example.com/unit/57/page/23/
http://example.com/unit/57/page/23/edit
You would route it like this:
@app.route('/unit/<int:unit_number>/')
def display_unit(unit_number):
    ...

@app.route('/unit/<int:unit_number>/page/<int:page_number>/')
def display_page(unit_number, page_number):
    ...

@app.route('/unit/<int:unit_number>/page/<int:page_number>/edit')
def page_editor(unit_number, page_number):
    ...
Doing it this way helps to keep some kind of structure in your application and relies on the framework to route stuff, rather than grabbing the URL and doing all the routing yourself. You could then also make use of blueprints to deal with the different functions.
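As a rough sketch of the blueprint idea (the names here are just placeholders I made up), the unit routes above could be grouped like this:

from flask import Blueprint, Flask

# Group all unit-related views under a common /unit prefix
units = Blueprint('units', __name__, url_prefix='/unit')

@units.route('/<int:unit_number>/')
def display_unit(unit_number):
    ...

@units.route('/<int:unit_number>/page/<int:page_number>/')
def display_page(unit_number, page_number):
    ...

app = Flask(__name__)
app.register_blueprint(units)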
I'll admit though, I'm struggling to think of a situation where you would need a possibly unlimited number of sections in the URL?
Splitting the string doesn't introduce any inefficiency to your program. Performance-wise, it's a negligible addition to the URL processing done by the framework. It also fits in a single line of code.
@app.route('/<path:fullurl>')
def my_view(fullurl):
    params = fullurl.split('/')
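If you want the PHP-style behaviour from the question, a minimal sketch (assuming a /key1/val1/key2/val2 convention, which is my assumption rather than anything Flask imposes) is to pair up the split segments:

from flask import Flask

app = Flask(__name__)

@app.route('/<path:fullurl>')
def catch_all(fullurl):
    segments = fullurl.strip('/').split('/')
    # Pair consecutive segments: /colour/red/size/10 -> {'colour': 'red', 'size': '10'}
    params = dict(zip(segments[::2], segments[1::2]))
    return str(params)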
This works:
#app.route("/login/<user>/<password>")
def login(user, password):
app.logger.error('An error occurred')
app.logger.error(password)
return "user : %s password : %s" % (user, password)
then:
http://example.com:5000/login/jack/hi
output:
user : jack password : hi
I have these 2 routes:
api.add_resource(X, "/<string:stage>/api/sales/by-type")
api.add_resource(Y, "/<string:stage>/api/sales/filters/by-type")
Should it be /by_type or /by-type? /by/type would be weird because /by would be a route by itself, which makes no sense. Can't find any docs about it.
@nitul was right, it's about API design in general, but hyphens are commonly used in URLs; even though it's not an official standard, it's seen as best practice, it's SEO friendly, and the URLs are more elegant and pretty.
On the other hand, I would like to draw your attention to particular/extra parameters in URLs like filters, sorting and pagination: it makes more sense to pass them as extra arguments, e.g. ?type=TYPE, along with your base/canonical URL /<string:stage>/api/sales, because the two routes you mentioned are logically the same in the end. Have a look at this good post, https://www.moesif.com/blog/technical/api-design/REST-API-Design-Filtering-Sorting-and-Pagination/, which elaborates on the topic with good patterns to adopt. That said, you'll only need one route:
api.add_resource(X, "/<string:stage>/api/sales")
Then, depending on extra arguments in the URL (e.g. ?type=TYPE), you return the appropriate set of objects. This way your API is more compact (you avoid redundancy), maintainable and extensible.
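A minimal sketch of that single-route version, assuming Flask-RESTful (the resource name and the filtering logic are placeholders):

from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class Sales(Resource):
    def get(self, stage):
        # Optional filter, e.g. /prod/api/sales?type=TYPE
        sale_type = request.args.get('type')
        # ...query the data for `stage`, filtered by sale_type if it was supplied...
        return {'stage': stage, 'type': sale_type}

api.add_resource(Sales, '/<string:stage>/api/sales')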
As a bonus, since you are using Flask, and depending on your needs (if any), think about a custom URL converter (this topic will help you: https://exploreflask.com/en/latest/views.html#custom-converters).
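For completeness, a small sketch of a custom converter along the lines of that page (the '+'-separated list format is just an example I picked):

from flask import Flask
from werkzeug.routing import BaseConverter

class ListConverter(BaseConverter):
    # Maps /red+green+blue to ['red', 'green', 'blue'] and back
    def to_python(self, value):
        return value.split('+')

    def to_url(self, values):
        return '+'.join(super().to_url(value) for value in values)

app = Flask(__name__)
app.url_map.converters['list'] = ListConverter

@app.route('/colours/<list:colours>')
def show_colours(colours):
    return ', '.join(colours)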
I have some text fields in my Django model that are filled by a script, with values in English (the list of values is known).
But the app is actually made for Russian clients only. I'd like to translate those fields into Russian, and here comes a little question. These values are taken from an API response, which means I should check the value to translate it. What's faster: to check and translate fields in template or to make extra fields and translate strings in the Python script?
The problem is the overhead of compiling templates when rendering. The more complicated the template gets (method calls etc.), the slower it tends to be (similar to how .py files are compiled to .pyc). Django has template caching, but that is also limited (I don't know by how much). I have faced performance issues because of having a lot of logic in templates. Plus, it's always good to have a dumb client (the template). I would prefer the Python approach because of the idea of keeping the client thin, not because of the performance gap. Also, if tomorrow you need to add one more language, changing templates is always going to be harder than changing the server side.
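A minimal sketch of the Python-side approach, assuming the set of English API values is known (the field name and translations below are only placeholders):

# Known English values from the API, mapped to Russian once, server-side
STATUS_RU = {
    'active': 'активный',
    'inactive': 'неактивный',
    'pending': 'в ожидании',
}

def translate_status(value):
    # Fall back to the raw value if the API returns something unexpected
    return STATUS_RU.get(value, value)

# In the import script, before saving the model instance:
# obj.status = translate_status(api_data['status'])
# obj.save()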
I've included a search form in my web2py application, in the following form:
myapp/controller/search?query=myquery
However, for security reasons web2py automatically replaces spaces and non-alphanumeric characters with underscores, which is okay for English-only sites but
an impediment for languages that use accent marks. For example, searching for "áéíóú" returns five underscores.
This could be solved by using POST instead of GET for the search form, but then the users wouldn't be able to bookmark the results.
Is there any option to solve this?
Thanks in advance.
Here's an idea that I've used in the past:
1. Use POST to submit the query.
2. Generate a unique string (e.g. YouTube: https://www.youtube.com/watch?v=jX3DuS2Ak3g).
3. Associate the query with that string and store it as a key/value pair in session/app state/DB (depending on how long you want it to live).
4. Redirect the user to that URL.
If you don't want to occupy extra memory/space (these mappings tend to grow a lot in some cases), you can substitute steps 2-3 with encrypting the string into something you can decrypt afterwards. You can do this in a middleware class so that it's transparent to your app's logic.
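A small sketch of that encode/decode variant using itsdangerous (not web2py-specific, and strictly speaking it signs rather than encrypts, which is usually enough here; the secret key is a placeholder):

from itsdangerous import URLSafeSerializer

serializer = URLSafeSerializer('replace-with-a-real-secret')

token = serializer.dumps('áéíóú')   # short-ish, URL-safe, bookmarkable
query = serializer.loads(token)     # -> 'áéíóú'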
This is a general problem people face while handling urls.
You can use the quote/quote_plus functions in urllib (urllib.parse in Python 3) to normalize the strings.
For example, with the string you suggested:
>>> from urllib.parse import quote, unquote
>>> print(quote('éíóú'))
%C3%A9%C3%AD%C3%B3%C3%BA
>>> print(unquote('%C3%A9%C3%AD%C3%B3%C3%BA'))
éíóú
You will have to perform the unquote when you retrieve it on the backend from the request.
There are also some other posts which might be helpful: urlencode implementation and unicode ready urls.
I'm trying to reduce the size of a string like this:
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE0NDU0OTk3NDUsImQiOnsiYXV0aF9kYXRhIjoiZm9vIiwib3RoZXJfYXV0aF9kYXRhIjoiYmFyIiwidWlkIjoidW5pcXVlSWQxIn0sInYiOjB9.h6LV3boj0ka2PsyOjZJb8Q48ugiHlEkNksusRGtcUBk'
to something that someone could type in less than 30 seconds, like this:
'aF9kYX'
and be able to turn it back to the original string too. How could I achieve that?
EDIT: I guess I'm not being clear; first of all, I don't know if what I want is even possible.
So, I have my app which asks for a token to log in, which is that JWT. But it is way too long for someone to type manually. So I supposed there was an algorithm to make this string smaller (compress it) so that it would be easier and faster to type. An example that comes to my mind of how I would use such an algorithm is:
short_to_big(small_string) //Returns the original JWT
big_to_short(JWT_string) //Returns the smaller string
Stupid simple answer: use a dict to store the short string as key and the long one as value. Then you just have to generate the short string the way you like and make sure it's not already in the dict. If you need to persist the key/value, you can use almost any kind of database (sql, key:value, document, or even a csv file FWIW).
Oh and if that doesn't solve your problem then you may want to consider giving more context ;)
You need more constraints. A 200-character string contains a lot more information than a 6-character string, so you either need to know a lot more about the original strings (e.g. that they come from some known set of strings, or use a limited character set), or you need to store the original strings somewhere and use the string the user types as a key into a map or similar.
There are lossless compression algorithms, but these depend on knowing some probabilistic information about the string (e.g. that repeated characters are likely) and will typically expand the strings if the probabilities are wrong.
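As a quick illustration of that point, running a general-purpose compressor over the token from the question barely shrinks it, let alone down to six characters:

import zlib

token = ('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.'
         'eyJpYXQiOjE0NDU0OTk3NDUsImQiOnsiYXV0aF9kYXRhIjoiZm9vIiwib3RoZXJfYXV0aF9kYXRhIjoiYmFyIiwidWlkIjoidW5pcXVlSWQxIn0sInYiOjB9.'
         'h6LV3boj0ka2PsyOjZJb8Q48ugiHlEkNksusRGtcUBk')

compressed = zlib.compress(token.encode())
print(len(token), len(compressed))  # the compressed form is still nowhere near 6 characters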
UPDATE (After question clarification and comments suggestion)
You could implement an algorithm that uniquely maps this big string to a short representation and store the mapping in a dictionary. The following algorithm does not guarantee uniqueness, but it should give you a path to follow.
import random
import string

def long_string_to_short(original_string, length=10):
    # Seed the PRNG with the original string so the same input always yields the same short string
    random.seed(original_string)
    filling_values = string.digits + string.ascii_letters
    short_string = ''.join(random.choice(filling_values) for _ in range(length))
    return short_string
When calling the function you can specify an appropriate length for the short string.
Then you could:
my_mapping_dict = {}
my_long_string = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE0NDU0OTk3NDUsImQiOnsiYXV0aF9kYXRhIjoiZm9vIiwib3RoZXJfYXV0aF9kYXRhIjoiYmFyIiwidWlkIjoidW5pcXVlSWQxIn0sInYiOjB9.h6LV3boj0ka2PsyOjZJb8Q48ugiHlEkNksusRGtcUBk'
short_string = long_string_to_short(my_long_string)
my_mapping_dict[short_string] = my_long_string
OK, so, because I couldn't find a solution for shrinking the string, I took a different approach and found a solution.
Now to clarify why I wanted to log in with the token, I'm going to write what I want to do with my app:
In Firebase anyone can create an account, but I don't want that, so I made a group of users that are the only ones that can write or read the data.
So, in order to create an account, the user has to request a register code (which in reality is a JWT generated from Firebase, so that you have permission to add a user to the group I was talking about).
This app is for local use, meaning that only people who live here are going to use it. So, back to the original question: the token is too big for someone to type (as I have said many times), and I wanted to know if and how I could shrink it. Having had no success with that, I tried a different approach: generate the token (from a different program), encrypt it with a random code, and upload it to Firebase. That way I can give the random code to people; users type it into the app, which retrieves and decrypts the token and authenticates with it, so that the user ends up with an account that has the privilege to read or write data.
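A rough sketch of that encrypt-with-a-code idea, using the cryptography package (the library choice, the key-derivation parameters and the example values are my assumptions, not part of Firebase or the original setup):

import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_code(code, salt):
    # Derive a 32-byte Fernet key from the short register code
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=390_000)
    return base64.urlsafe_b64encode(kdf.derive(code.encode()))

salt = os.urandom(16)
code = 'aF9kYX'                          # the short code handed out to the user
jwt_token = 'eyJhbGciOiJIUzI1NiIs...'    # the long register token (truncated here)

ciphertext = Fernet(key_from_code(code, salt)).encrypt(jwt_token.encode())
# Store (salt, ciphertext) in Firebase; the app re-derives the key from the
# typed-in code and calls Fernet(...).decrypt(ciphertext) to get the token back.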
Thanks for your responses and sorry if I wasted your time.
I have a Django application where the URLs to be handled have a specific pattern like /servername/alpha/beta/2/delta/10/pie/1
Now I will need the parameters contained in the URL and will persist them to the database whenever a URL beginning with /servername/ is called. So I have two ways of doing it:
1. Pass the parameters along with the request to the relevant view. In this case my regex would ensure that I have param1 through param7 holding the values alpha, beta, 2, delta, 10, pie and 1 respectively.
2. Pass only the request, without the parameters. I will either parse request.path_info with a regex, or split request.path_info on "/" and obtain the relevant entries.
Which of these two methods should be preferred for better performance in terms of CPU and memory (or other factors I am not aware of)?
I believe one could compare the two using timing functions, but I don't think that would present an accurate picture. Theoretically, which approach should be preferred, and why?
Option two is inherently slower, as your view would need to do this parsing on each request, whereas Django's standard URL parser works off compiled regular expressions. (The urlpatterns in urls.py are compiled once on first run.)
However, the speed difference between the two approaches is pretty negligible. This will never be the bottleneck of your application; focus on things like your database, its queries, and any I/O operations in your app (anything that reads or writes extensively to the hard drive). Those are where apps get slowed down. Otherwise, you're talking about saving a millisecond here or there, which is pointless.
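For reference, a sketch of option one using modern Django path() converters (older Django would use url() with a raw regex; the view name is hypothetical):

# urls.py
from django.urls import path

from . import views

urlpatterns = [
    # Matches e.g. /servername/alpha/beta/2/delta/10/pie/1
    path(
        'servername/<str:param1>/<str:param2>/<int:param3>/'
        '<str:param4>/<int:param5>/<str:param6>/<int:param7>',
        views.record_call,  # hypothetical view that persists the captured values
    ),
]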