Flask SQLAlchemy Contains/Ilike producing different results? - python

I am trying to query a column from a database with contains/ilike, but they are producing different results. Any idea why?
My current code:
search = 'nel'
find = Clients.query.filter(Clients.lastName.ilike(search)).all()
# THE ABOVE LINE PRODUCES 0 RESULTS
find = Clients.query.filter(Clients.lastName.contains(search)).all()
# THE ABOVE LINE PRODUCES THE DESIRED RESULTS
for row in find:
    print(row.lastName)
My concern is: am I missing something? I have read that contains does not always work either. Is there a better way to do what I am doing?

For ilike and like, you need to include wildcards in your search like this:
Clients.lastName.ilike(r"%{}%".format(search))
As the Postgres docs say:
LIKE pattern matching always covers the entire string. Therefore, to match a sequence anywhere within a string, the pattern must start and end with a percent sign.
The other difference is that contains is case-sensitive, while ilike is case-insensitive.
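For example, a minimal sketch reusing the Clients model and search term from the question (assuming the question's Flask-SQLAlchemy setup):
search = 'nel'

# With wildcards, ilike matches the substring case-insensitively and
# returns the same rows as contains(), plus any differently-cased matches.
find = Clients.query.filter(Clients.lastName.ilike("%{}%".format(search))).all()
for row in find:
    print(row.lastName)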

Related

How to find all cells matching a regex with gspread?

So I am very new to programming and I am using the Python gspread module to use a Google Sheet as a database.
That module has a function called sheet.findall(query, row, column), and this is great, but there's one issue: the query parameter will only look for an exact match, meaning that if I write "DDG", it will not get me the info from a cell with the value "DDG-87".
After reading the documentation, I found out that you can use Python regular expressions for the query parameter, so I did that, but there's a problem: the second parameter of re.findall is WHERE to look, but the whole variable is the act of searching, as shown below:
search = sheet.findall(re.findall("[DDG]", The where to search goes here))
As you can see, the whole variable (search) is the search call itself, and therefore I cannot specify where to search.
I have tried to set the second parameter of the regex to (search), but obviously it won't work.
Any idea or clue on how I can set the second parameter of re.findall(), or what I can do so that the function doesn't search for an exact match but instead matches cells that contain the text?
Thank you.
From the gspread docs:
Find all cells matching a regexp:
criteria_re = re.compile(r'(Small|Room-tiering) rug')
cell_list = worksheet.findall(criteria_re)
So the following should work in your case:
criteria_re = re.compile(r'DDG.*')
search = sheet.findall(criteria_re)
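If it helps, here is a fuller sketch of how that fits together; the authentication call and spreadsheet name are placeholders, not from the question:
import re
import gspread

# Hypothetical setup: adjust the auth method and spreadsheet name to your own.
gc = gspread.service_account()
sheet = gc.open("my-database").sheet1

criteria_re = re.compile(r'DDG.*')
for cell in sheet.findall(criteria_re):
    print(cell.row, cell.col, cell.value)  # each match is a Cell object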

Check if string is in certain format in Python

I have a string as below.
/customer/v1/123456789/account/
The id in the url is dynamic.
What I want to check is: given that string, how can I be sure that it matches the structure below?
/customer/v1/<customer_id>/account
What I have done so far is this; however, I want to check whether the endpoint matches the structure completely or not.
endpoint_structure = '/customer/v1/'
endpoint = '/customer/v1/123456789/account/'
if endpoint_structure in endpoint:
    return True
return False
Endpoint structure might change as well.
For example: /customer/v1/<customer_id>/documents/<document_id>/, where I will again be given an endpoint and need to check whether it fits the structure.
You can use a regular expression:
import re
return re.match(r'^/customer/v1/\d+/account/$', endpoint) is not None
or you can examine the beginning and the end:
return endpoint.startswith('/customer/v1/') and endpoint.endswith('/account/')
... though this doesn't attempt to verify that the stuff between the beginning and the end is numeric.
You can solve this using a regular expression:
^(/customer/v1/)(\d)+(/account/)$
Also, if you want to specify a minimum length for the customer_id in /customer/v1/<customer_id>/account, then use the following regexp:
^(/customer/v1/)(\d){5,}(/account/)$
Here the customer_id is expected to be at least 5 digits long.
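Since the structure itself can change (e.g. /customer/v1/<customer_id>/documents/<document_id>/), one option is to build the regex from the structure string. A minimal sketch, assuming every <placeholder> stands for a numeric id (the helper name is made up for illustration):
import re

def structure_to_regex(structure):
    # Keep each <placeholder> as a digits-only pattern and escape everything else.
    parts = re.split(r'(<[^>]+>)', structure)
    pattern = ''.join(r'\d+' if p.startswith('<') else re.escape(p) for p in parts)
    return re.compile('^' + pattern + '$')

structure = '/customer/v1/<customer_id>/documents/<document_id>/'
endpoint = '/customer/v1/123456789/documents/42/'
print(bool(structure_to_regex(structure).match(endpoint)))  # True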

Elasticsearch results exactly as parameter

I'm trying to filter logs based on the domain name. For example I only want the results of domain: bh250.example.com.
When I use the following query:
http://localhost:9200/_search?pretty&size=150&q=domainname=bh250.example.com
the first 3 results have the domain name bh250.example.com, while the 4th has bh500.example.com.
I have read several pieces of documentation on how to query Elasticsearch, but I seem to be missing something. I only want results that match the parameter 100%.
UPDATE!! After a question from Val:
queryFilter = Q("match", domainname="bh250.example.com")
search=Search(using=dev_client, index="logstash-2016.09.21").query("bool", filter=queryFilter)[0:20]
You're almost there, you just need to make a small change:
http://localhost:9200/_search?pretty&size=150&q=domainname:"bh250.example.com"
That is, use a colon instead of an equals sign after domainname, and wrap the value in double quotes.
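If you want to do the same thing from Python with elasticsearch-dsl (as in your update), here is a rough sketch using a query_string query; the client settings and index name are copied from your update and may need adjusting:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

dev_client = Elasticsearch()  # adjust hosts/auth to your setup

search = Search(using=dev_client, index="logstash-2016.09.21") \
    .query("query_string", query='domainname:"bh250.example.com"')[0:20]

for hit in search.execute():
    print(hit.domainname)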

Django MySQL Exact Word Match With iRegex Search

I am trying to match a column that contains an exact word, say yax and not yaxx, but I keep getting both rows whichever one I search for. I want only yax when I search for yax, regardless of case.
I have tried:
key = 'yax'
query = Model.objects.filter(content__iregex=r"[[:<:]]*{0}*[[:>:]]".format(key))
Remove the * from your regex.
query = Model.objects.filter(content__iregex=r"[[:<:]]{0}[[:>:]]".format(key))
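Note that [[:<:]] and [[:>:]] are the word-boundary markers for MySQL before 8.0; MySQL 8.0+ switched to ICU regular expressions, where the equivalent is \b. A sketch of the same filter for MySQL 8.0+, reusing the Model and field from the question:
key = 'yax'

# ICU regex (MySQL 8.0+): \b marks a word boundary.
query = Model.objects.filter(content__iregex=r"\b{0}\b".format(key))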

Is it possible to use "find" method with "javascript" query with pymongo?

With the Mongo shell, the following is OK:
> db.posts.find("this.text.indexOf('Hello') > 0")
But with pymongo, when executing the following:
for post in db.posts.find("this.text.indexOf('Hello') > 0"):
    print post['text']
an error occurs.
I think Full Text Search in Mongo is a better way in this example, but is it possible to use the "find" method with a "javascript" query with pymongo?
You are correct: you do this with server-side JavaScript by using the $where clause [1]:
db.posts.find({"$where": "this.text.indexOf('Hello') > 0"})
This will work on everything but sharded setups, but the cost is considered prohibitive, since you will be inspecting every document in the collection, which is why it's generally not considered a great idea.
You could also do a regular expression search:
db.posts.find({'text':{'$regex':'Hello'}})
This will also do a full collection scan because the regular expression isn't anchored (if you anchor a regular expression, for example to check whether a field begins with a value, and you have an index on that field, the index can be used).
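For completeness, here is roughly how both of those queries look from pymongo (the connection and database name are placeholders):
from pymongo import MongoClient

db = MongoClient()["mydb"]  # hypothetical connection and database name

# $where with server-side JavaScript: evaluated for every document (slow).
for post in db.posts.find({"$where": "this.text.indexOf('Hello') > 0"}):
    print(post["text"])

# Unanchored regex: also a full collection scan, but no JavaScript needed.
for post in db.posts.find({"text": {"$regex": "Hello"}}):
    print(post["text"])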
Given that those two approaches are expensive and won't perform or scale well, what is the best approach?
Well, the full text search approach described in the link you gave [2] works well. Create a _keywords field that stores the keywords in lowercase in an array, index that field, and then you can query like so:
db.posts.find({"_keywords": {"$in": "hello"});
That will scale and utilises an index so will be performant.
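Continuing the pymongo sketch above, maintaining and querying such a _keywords field might look like this (the keyword extraction is illustrative, not from the linked article):
import re

# Store lowercase keywords alongside the document and index them.
text = "Hello world"
keywords = [w.lower() for w in re.findall(r"\w+", text)]
db.posts.insert_one({"text": text, "_keywords": keywords})
db.posts.create_index("_keywords")

# The $in lookup on the indexed array field is fast.
for post in db.posts.find({"_keywords": {"$in": ["hello"]}}):
    print(post["text"])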
[1] http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-JavascriptExpressionsand%7B%7B%24where%7D%7D
[2] http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo
