In Python, elasticsearch_dsl.query provides a helper function Q that builds DSL queries. However, I do not understand what this query is trying to say in some code I found:
ES_dsl.Q('match', path=path_to_file)
What exactly is Q('match', path=path_to_file) doing, where path_to_file is a valid path to a file indexed in the system?
Isn't path only used in nested queries? There is no path parameter in 'match' queries. I'm guessing it is to detokenize path_to_file to find an exact match? An explanation of what is happening would be appreciated.
The approach it takes is the query type as the first argument, then the field and value you want to query as a keyword argument. So that is saying:
run a match query - https://www.elastic.co/guide/en/elasticsearch/reference/7.15/query-dsl-match-query.html
use the path field and search for the value path_to_file
Note that path here is just the name of a field in your index; it is not the path parameter of a nested query.
So, mapping that back to the docs page above, it'd look like this in raw DSL:
GET /_search
{
    "query": {
        "match": {
            "path": {
                "query": "path_to_file"
            }
        }
    }
}
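You can verify this from Python itself: the Q helper just builds that same dictionary. A minimal sketch using the library's to_dict() (the outer "query" wrapper is added by the Search object, not by Q):

from elasticsearch_dsl.query import Q

q = Q('match', path='path_to_file')
print(q.to_dict())
# {'match': {'path': 'path_to_file'}}

Also note that a match query analyzes (tokenizes) its input, so this is not an exact match on the whole path; for an exact match you would typically use a term query against a keyword field.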
I've been scouring the web for some good Python documentation for Elasticsearch. I've got a query term that I know returns the information I need, but I'm struggling to convert the raw string into something Python can interpret.
This will return a list of all unique 'VALUE's in the dataset:
{"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
I took this from a dashboarding tool that accesses the data, but I don't seem to be able to convert it into correct Python.
I've tried this:
body_test = {"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
es = Elasticsearch(SETUP CONNECTION)
es.search(
    index="INDEX_NAME",
    body=body_test
)
but it doesn't like the find value. I can't find anything in the documentation about find.
RequestError: RequestError(400, 'parsing_exception', 'Unknown key for
a VALUE_STRING in [find].')
The only way I've got it to partially work is with:
es_search = (
    Search(
        using=es,
        index=db_index
    ).source(['hierarchy1.hierarchy2.VALUE'])
)
But I think this is pulling the entire dataset and then filtering (which I obviously don't want to be doing each time I run this code). This needs to be done through Python, so I can't simply POST the query I know works.
I am completely new to ES and so this is all a little confusing. Thanks in advance!
So it turns out that the find in this case was specific to Grafana (the dashboarding tool I took the query from).
In the end I used the code from this site. It's a LOT more complicated than I thought it was going to be, but it works very quickly and doesn't put a strain on the database (which my alternative method was doing).
In case the link dies in future years, here's the code I used:
from elasticsearch import Elasticsearch
es = Elasticsearch()
def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
            fieldname: {
                "terms": {
                    "field": fieldname
                }
            }
        }]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field to fetch the next page
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = \
                result["aggregations"]["values"]["after_key"]
        else:  # Finished!
            break

# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
    print(result)  # e.g. {'key': {'pattern': 'mypattern'}, 'doc_count': 315}
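For comparison, if the field only has a manageable number of distinct values, a plain terms aggregation is a simpler sketch of the same idea, assuming the field is mapped as a keyword (field and index names taken from the question):

body = {
    "size": 0,  # no hits, just the aggregation
    "aggs": {
        "values": {
            "terms": {
                "field": "hierarchy1.hierarchy2.VALUE",
                "size": 1000  # upper bound on distinct values returned
            }
        }
    }
}
result = es.search(index="INDEX_NAME", body=body)
for bucket in result["aggregations"]["values"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])

The composite aggregation above becomes the right tool once the number of distinct values grows beyond what a single terms response can reasonably hold.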
I would like to pretty-print a JSON file in a way that lets me see the array IDs. I'm working on a Cisco Nexus switch with NX-OS that runs Python (2.7.11). Looking at the following code:
import json
from cli import clid  # NX-OS helper that returns CLI output as JSON

cmd = 'show interface Eth1/1 counters'
out = json.loads(clid(cmd))
print(json.dumps(out, sort_keys=True, indent=4))
This gives me:
{
    "TABLE_rx_counters": {
        "ROW_rx_counters": [
            {
                "eth_inbytes": "442370508663",
                "eth_inucast": "76618907",
                "interface_rx": "Ethernet1/1"
            },
            {
                "eth_inbcast": "4269",
                "eth_inmcast": "49144",
                "interface_rx": "Ethernet1/1"
            }
        ]
    },
    "TABLE_tx_counters": {
        "ROW_tx_counters": [
            {
                "eth_outbytes": "217868085254",
                "eth_outucast": "66635610",
                "interface_tx": "Ethernet1/1"
            },
            {
                "eth_outbcast": "1137",
                "eth_outmcast": "557815",
                "interface_tx": "Ethernet1/1"
            }
        ]
    }
}
But I need to access the fields like this:
rxuc = int(out['TABLE_rx_counters']['ROW_rx_counters'][0]['eth_inucast'])
rxmc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inmcast'])
rxbc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inbcast'])
txuc = int(out['TABLE_tx_counters']['ROW_tx_counters'][0]['eth_outucast'])
txmc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outmcast'])
txbc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outbcast'])
So I need to know the array ID (in this example, the zeros and ones) to access the information for this interface. It seems pretty easy with only 2 arrays, but imagine 500. Right now, I always copy the JSON output to jsoneditoronline.org, where I can see the IDs.
Is there an easy way to make the IDs visible within python itself?
What you posted is valid JSON.
The image is from a tool that takes the data from the JSON and displays it. You can display it any way you want, but the contents of the file will need to be valid JSON.
If you do not need to load the JSON again later, you can do whatever you like with it, but json.dumps() will only ever give you JSON.
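If the goal is just to see the list indices from Python itself, a small recursive walker can print every leaf value together with its full access path, including the array IDs. A minimal sketch (plain stdlib, runs on the switch's Python 2.7 as well as Python 3):

def print_paths(obj, path=''):
    # Recursively print every leaf value with its full access path
    if isinstance(obj, dict):
        for key in sorted(obj):
            print_paths(obj[key], "%s['%s']" % (path, key))
    elif isinstance(obj, list):
        for index, item in enumerate(obj):
            print_paths(item, '%s[%d]' % (path, index))
    else:
        print('out%s = %s' % (path, obj))

print_paths(out)
# e.g. out['TABLE_rx_counters']['ROW_rx_counters'][0]['eth_inucast'] = 76618907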
Here's a simplified version of the JSON I am working with:
{
    "libraries": [
        {
            "library-1": {
                "file": {
                    "url": "foobar.com/.../library-1.bin"
                }
            }
        },
        {
            "library-2": {
                "application": {
                    "url": "barfoo.com/.../library-2.exe"
                }
            }
        }
    ]
}
Using json, I can json.loads() this file. I need to be able to find the 'url', download it, and save it to a local folder called libraries. In this case, I'd create two folders within libraries/, one called library-1, the other library-2. Within these folders would be whatever was downloaded from the URL.
The issue, however, is being able to get to the url:
my_json = json.loads(...)  # get the json
for library in my_json['libraries']:
    file.download(library['file']['url'])  # doesn't access ['application']['url']
Since the JSON I am using has a variety of accessors, sometimes 'file', other times 'dll', etc., I can't use one specific dictionary key. How can I use multiple? Is there a modular way to do this?
Edit: There are numerous accessors, 'file', 'application' and 'dll' are only some examples.
You can just iterate through each level of the dictionary and download the files if you find a url.
urls = []
for library in my_json['libraries']:
    for lib_name, lib_data in library.items():
        for module_name, module_data in lib_data.items():
            url = module_data.get('url')
            if url is not None:
                # create local directory with lib_name
                # download files from url to local directory
                urls.append(url)
# urls = ['foobar.com/.../library-1.bin', 'barfoo.com/.../library-2.exe']
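The two commented steps could look something like this (a sketch, assuming Python 3, urllib.request, and URLs that include a scheme such as https://):

import os
import urllib.request

def download_to_library(lib_name, url, root='libraries'):
    # Create libraries/<lib_name>/ if it does not exist yet
    target_dir = os.path.join(root, lib_name)
    os.makedirs(target_dir, exist_ok=True)
    # Save the file under its basename, e.g. library-1.bin
    filename = os.path.basename(url)
    urllib.request.urlretrieve(url, os.path.join(target_dir, filename))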
This should work:
for library in my_json['libraries']:
    for value in library.values():
        for url in value.values():
            file.download(url)
I would suggest doing it like this:
for library in my_json['libraries']:
    library_data = library.popitem()[1].popitem()[1]
    file.download(library_data['url'])
Try this
for library in my_json['libraries']:
    if 'file' in library:
        file.download(library['file']['url'])
    elif 'dll' in library:
        file.download(library['dll']['url'])
It just checks whether your dict (created by parsing the JSON) has a key named 'file'. If so, it uses the 'url' of the dict corresponding to the 'file' key. If not, it tries the same with the 'dll' key.
Edit: If you don't know the key to access the dict containing the url, try this.
for library in my_json['libraries']:
    for key in library:
        if 'url' in library[key]:
            file.download(library[key]['url'])
This iterates over all the keys in your library and downloads from whichever entry contains a 'url'.
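If the nesting depth itself can vary, a recursive walk is a modular way to collect every 'url' no matter which keys wrap it. A sketch (file.download is the downloader assumed in the question):

def find_urls(obj):
    # Yield every value stored under a 'url' key, at any depth
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == 'url':
                yield value
            else:
                for url in find_urls(value):
                    yield url
    elif isinstance(obj, list):
        for item in obj:
            for url in find_urls(item):
                yield url

for url in find_urls(my_json['libraries']):
    file.download(url)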
I am trying to write a Django app that uses Elasticsearch via the elasticsearch-dsl library for Python. I don't want to create a pile of switch-case statements and then pass search queries and filters accordingly.
I want a function that does the parsing by itself.
For example, if I pass "some text url:github.com tags:es,es-dsl,django", the function should output the corresponding query.
I searched for it in elasticsearch-dsl documentation and found a function that does the parsing.
https://github.com/elastic/elasticsearch-dsl-py/search?utf8=%E2%9C%93&q=simplequerystring&type=
However, I don't know how to use it.
I tried s = Search(using=client).query.SimpleQueryString("1st|ldnkjsdb"), but it gives me a parsing error.
Can anyone help me out?
You can just plug SimpleQueryString into the Search object; instead of a dictionary, send the elements as parameters of the object.
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
from elasticsearch_dsl.query import SimpleQueryString

client = Elasticsearch()

_search = Search(using=client, index='INDEX_NAME')
_search = _search.filter(SimpleQueryString(
    query="this + (that | thus) -those",
    fields=["field_to_search"],
    default_operator="and"
))
Much of elasticsearch_dsl simply turns the dictionary representation into classes and functions that make the code look Pythonic and avoid hard-to-read Elasticsearch JSON.
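To see the JSON these objects generate, and to actually run the search, something like this should work (a sketch; the available hit fields depend on your mapping):

print(_search.to_dict())  # the raw query the DSL objects produce

response = _search.execute()
for hit in response:
    print(hit.meta.id, hit.to_dict())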
I'm guessing you are asking how to use elasticsearch-dsl with a query string, the way you would make a request with JSON data against the Elasticsearch API. If that's the case, this is how you use elasticsearch-dsl.
Assume you have the query in a query variable like this:
{
    "query": {
        "query_string": {
            "default_field": "content",
            "query": "this AND that OR thus"
        }
    }
}
and now do this:
es = Elasticsearch(
    host=settings.ELASTICSEARCH_HOST_IP,    # Put your ES host IP
    port=settings.ELASTICSEARCH_HOST_PORT,  # Put your ES host port
)
index = settings.MY_INDEX  # Put your index name here
result = es.search(index=index, body=query)
I am trying to write a call to db.command() using PyMongo that performs a geoNear search, and I would like to exclude fields. Neither the documentation for db.runCommand on the Mongo site nor the PyMongo documentation explains how to accomplish this.
I understand how to do this using db.collection.find():
response = collection.find_one(
    filter={"PostalCode": postal_code},
    projection={'_id': False}
)
However, I cannot find any example anywhere of how to accomplish this when performing a geoNear search utilizing db.command():
params = {
    "near": {
        "type": "Point",
        "coordinates": [longitude, latitude]
    },
    "spherical": True,
    "limit": 1,
}
response = self.db.command("geoNear", value=self._collection_name, **params)
Can anyone provide insight into how one excludes fields when using db.command?
The geoNear command does not have a "projection" feature. It always returns entire documents. See the geoNear command reference for its options:
https://docs.mongodb.com/manual/reference/command/geoNear/
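If you do need to trim fields from the results, one workaround is the aggregation pipeline: a $geoNear stage (which must come first and requires a distanceField) followed by a $project stage. A sketch, reusing the coordinates and collection name from the question:

pipeline = [
    {
        "$geoNear": {
            "near": {"type": "Point", "coordinates": [longitude, latitude]},
            "distanceField": "dist",  # required by the $geoNear stage
            "spherical": True,
        }
    },
    {"$limit": 1},
    {"$project": {"_id": False}},  # exclude fields here
]
response = list(self.db[self._collection_name].aggregate(pipeline))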