I've been scouring the web for good Python documentation for Elasticsearch. I have a query that I know returns the information I need, but I'm struggling to convert the raw string into something Python can interpret.
This will return a list of all unique 'VALUE's in the dataset.
{"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
I took this from a dashboarding tool that accesses this data, but I don't seem to be able to convert it into correct Python.
I've tried this:
body_test = {"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
es = Elasticsearch(SETUP CONNECTION)
es.search(
index="INDEX_NAME",
body = body_test
)
but it doesn't like the find key. I can't find anything in the documentation about find.
RequestError: RequestError(400, 'parsing_exception', 'Unknown key for a VALUE_STRING in [find].')
The only way I've gotten it to partially work is with:
es_search = (
    Search(
        using=es,
        index=db_index
    ).source(['hierarchy1.hierarchy2.VALUE'])
)
But I think this is pulling the entire dataset and then filtering it (which I obviously don't want to do each time I run this code). This needs to be done through Python, so I can't simply POST the query I know works.
I am completely new to ES and so this is all a little confusing. Thanks in advance!
So it turns out that the find in this case was specific to Grafana (the dashboarding tool I took the query from).
In the end I used the code from this site. It's a LOT more complicated than I thought it was going to be, but it works very quickly and doesn't put a strain on the database (which my alternative method was doing).
In case the link dies in future years, here's the code I used:
from elasticsearch import Elasticsearch
es = Elasticsearch()
def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
            fieldname: {
                "terms": {
                    "field": fieldname
                }
            }
        }]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = \
                result["aggregations"]["values"]["after_key"]
        else:  # Finished!
            break
# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
print(result) # e.g. {'key': {'pattern': 'mypattern'}, 'doc_count': 315}
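For reference, the Grafana-style {"find": "terms"} query corresponds to a plain terms aggregation in Elasticsearch. Here is a minimal one-shot sketch without paging (values beyond the size limit are dropped, and the size of 250 is an arbitrary choice of mine):

# One-shot terms aggregation: the Elasticsearch equivalent of Grafana's "find": "terms".
# Fine for low-cardinality fields; use the composite approach above otherwise.
body = {
    "size": 0,  # skip the document hits, return only the aggregation
    "aggs": {
        "values": {
            "terms": {"field": "hierarchy1.hierarchy2.VALUE", "size": 250}
        }
    }
}
result = es.search(index="INDEX_NAME", body=body)
for bucket in result["aggregations"]["values"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])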
In Python, elasticsearch_dsl.query provides a helper function Q that builds DSL queries. However, I do not understand what this query is trying to say in some code I found:
ES_dsl.Q('match', path=path_to_file)
What exactly is Q('match', path = path_to_file) doing?
Here, path_to_file is a valid path to a file in the index.
Isn't path only used in nested queries? There is no path parameter in 'match' queries, is there? I'm guessing it is meant to detokenize path_to_file to find an exact match? An explanation of what is happening would be appreciated.
The approach it takes is the query type as the first argument, then the field you want to query against. So that is saying:
run a match query - https://www.elastic.co/guide/en/elasticsearch/reference/7.15/query-dsl-match-query.html
use the path field and search for the value path_to_file
Matching that back to the docs page above, it'd look like this in direct DSL:
GET /_search
{
    "query": {
        "match": {
            "path": {
                "query": "path_to_file"
            }
        }
    }
}
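You can check this from Python by inspecting the query object Q builds; its to_dict() method shows the raw DSL it will send (the path value here is just a placeholder):

from elasticsearch_dsl import Q

# Q('match', path=...) builds the same {"match": {"path": ...}} clause shown above
q = Q('match', path='/var/log/app.log')
print(q.to_dict())  # {'match': {'path': '/var/log/app.log'}}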
I would like to pretty-print a JSON file where I can see the array IDs. I'm working on a Cisco Nexus switch with NX-OS that runs Python (2.7.11). Looking at the following code:
import json
from cli import clid  # NX-OS on-box Python module

cmd = 'show interface Eth1/1 counters'
out = json.loads(clid(cmd))
print(json.dumps(out, sort_keys=True, indent=4))
This gives me:
{
    "TABLE_rx_counters": {
        "ROW_rx_counters": [
            {
                "eth_inbytes": "442370508663",
                "eth_inucast": "76618907",
                "interface_rx": "Ethernet1/1"
            },
            {
                "eth_inbcast": "4269",
                "eth_inmcast": "49144",
                "interface_rx": "Ethernet1/1"
            }
        ]
    },
    "TABLE_tx_counters": {
        "ROW_tx_counters": [
            {
                "eth_outbytes": "217868085254",
                "eth_outucast": "66635610",
                "interface_tx": "Ethernet1/1"
            },
            {
                "eth_outbcast": "1137",
                "eth_outmcast": "557815",
                "interface_tx": "Ethernet1/1"
            }
        ]
    }
}
But I need to access the fields like this:
rxuc = int(out['TABLE_rx_counters']['ROW_rx_counters'][0]['eth_inucast'])
rxmc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inmcast'])
rxbc = int(out['TABLE_rx_counters']['ROW_rx_counters'][1]['eth_inbcast'])
txuc = int(out['TABLE_tx_counters']['ROW_tx_counters'][0]['eth_outucast'])
txmc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outmcast'])
txbc = int(out['TABLE_tx_counters']['ROW_tx_counters'][1]['eth_outbcast'])
So I need to know the array ID (in this example, the zeros and ones) to access the information for this interface. It seems pretty easy with only 2 entries, but imagine 500. Right now, I always copy the JSON to jsoneditoronline.org, where I can see the IDs.
Is there an easy way to make the IDs visible within Python itself?
What you posted is valid JSON.
The image is from a tool that takes the JSON data and displays it. You can display it in any way you want, but the contents of the file will need to remain valid JSON.
If you do not need to load the JSON later, you can do with it whatever you like, but json.dumps() will only ever give you JSON.
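For example, here is a minimal sketch of a recursive printer that labels every list element with its index (the function name and output format are my own invention, and it sticks to Python 2.7-compatible syntax):

def dump_with_indices(obj, indent=0):
    """Pretty-print parsed JSON, prefixing each list item with its index."""
    pad = "    " * indent
    if isinstance(obj, dict):
        for key in sorted(obj):
            print("%s%s:" % (pad, key))
            dump_with_indices(obj[key], indent + 1)
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            print("%s[%d]:" % (pad, i))  # the array ID you are after
            dump_with_indices(value, indent + 1)
    else:
        print("%s%r" % (pad, obj))

dump_with_indices(out)  # 'out' is the parsed dict from the question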
I am trying to change the font of an entire Google Doc using the API. The purpose is to let users of our application export documents with their company’s font.
This is what I am currently doing:
from googleapiclient.discovery import build

doc_service = build("docs", "v1")
document = doc_service.documents().get(documentId="[Document ID]").execute()
requests = []
for element in document["body"]["content"]:
    if "sectionBreak" in element:
        continue  # Trying to change the font of a section break causes an error
    requests.append(
        {
            "updateTextStyle": {
                "range": {
                    "startIndex": element["startIndex"],
                    "endIndex": element["endIndex"],
                },
                "textStyle": {
                    "weightedFontFamily": {
                        "fontFamily": "[Font name]"
                    },
                },
                "fields": "weightedFontFamily",
            }
        }
    )
doc_service.documents().batchUpdate(
    documentId=self.copy_id, body={"requests": requests}
).execute()
The code above changes the font, but it also removes any bold text formatting because it overrides the entire style of an element. Some options I have looked into:
DocumentStyle
Documents have a DocumentStyle property, but it does not contain any font information.
NamedStyles
Documents also have a NamedStyles property. It contains styles like NORMAL_TEXT and HEADING_1. I could loop through all these and change their textStyle.weightedFontFamily. This would be the ideal solution, because it would keep style information where it belongs. But I have not found a way to change NamedStyles using the API.
Deeper loop
I could continue with my current approach, looping through the elements list on each element, keeping everything but the font from textStyle (which contains things like bold: true). However, our current approach already takes too long to execute, and such an approach would be both slower and more brittle, so I would like to avoid this.
Answer:
Extract the textStyle out of the current element and only change/add the weightedFontFamily/fontFamily object.
Code Example:
requests = []
for element in document["body"]["content"]:
    if "sectionBreak" in element:
        continue  # Trying to change the font of a section break causes an error
    textStyle = element["paragraph"]["elements"][0]["textStyle"]
    # Replace only the font family; bold, italics, etc. stay untouched.
    # Assigning the whole sub-object avoids a KeyError when the element
    # has no weightedFontFamily yet.
    textStyle["weightedFontFamily"] = {"fontFamily": "[Font name]"}
    requests.append(
        {
            "updateTextStyle": {
                "range": {
                    "startIndex": element["startIndex"],
                    "endIndex": element["endIndex"],
                },
                "textStyle": textStyle,
                "fields": "weightedFontFamily",
            }
        }
    )
doc_service.documents().batchUpdate(
    documentId=self.copy_id, body={"requests": requests}
).execute()
This seems to work for me, even with section breaks in between and at the end of the document. You might want to explore more corner cases.
This basically tries to mimic the Select All option:
document = service.documents().get(documentId=doc_id).execute()
# Find the largest endIndex in the document body
endIndex = sorted(
    document["body"]["content"], key=lambda x: x["endIndex"], reverse=True
)[0]["endIndex"]
service.documents().batchUpdate(
    documentId=doc_id,
    body={
        "requests": [
            {
                "updateTextStyle": {
                    "range": {
                        "endIndex": endIndex,
                        "startIndex": 1,
                    },
                    "fields": "fontSize",
                    "textStyle": {"fontSize": {"magnitude": 100, "unit": "PT"}},
                }
            }
        ]
    },
).execute()
The same should work for other fields too.
However, if you are just going to share a DOCX file with all the clients, you could keep a local copy of the PDF/DOCX and modify those. It is fairly easy to work with the styles in a DOCX (it is a bunch of XML files).
Use this to explore and update DOCX files: OOXML Tools Chrome Extension
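As an illustration of the local-copy route, here is a minimal sketch using the python-docx package (the package choice, file names, and font name are my assumptions, not part of the Google Docs API):

from docx import Document  # pip install python-docx

doc = Document("template.docx")  # hypothetical local copy
for paragraph in doc.paragraphs:
    for run in paragraph.runs:
        run.font.name = "Company Font"  # placeholder font name
doc.save("branded.docx")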
Similarly, PDFs are key-value pairs stored as records. Check this: ReportLab
I have a DynamoDB table that is filled from different services/sources. The table has the following schema:
{
    "Id": 14782,
    "ExtId": 1478240974, // pay attention: it is a Number
    "Name": "name1"
}
Some time after the services started working, I found that one service sends data in an incorrect format. It looks like:
{
    "Id": 14782,
    "ExtId": "1478240974", // pay attention: it is a String
    "Name": "name1"
}
DynamoDB is a NoSQL database, so now I have millions of mixed records that are difficult to query or scan. I understand that my main fault was the missing validation.
Now I have to go through all the records and, if one has the inappropriate type, remove it and re-add the same data in the correct format. Is it possible to do this in a more graceful way?
So it was pretty easy. It is possible to do with the attribute_type method.
First of all, I added imports:
from boto3.dynamodb.conditions import Attr
import boto3
And my code:
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('TABLE_NAME')  # placeholder table name

# Scan for records whose ExtId was stored as a String ('S')
attr = Attr('ExtId').attribute_type('S')
response = table.scan(FilterExpression=attr)
items = response['Items']
# scan() is paginated: keep fetching until LastEvaluatedKey disappears
while 'LastEvaluatedKey' in response:
    response = table.scan(FilterExpression=attr, ExclusiveStartKey=response['LastEvaluatedKey'])
    items.extend(response['Items'])
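To actually fix the offending records, here is a minimal sketch of the rewrite step (this assumes Id is the table's only key attribute, so a put simply overwrites the old item):

# Rewrite each scanned item with ExtId converted from String to Number;
# batch_writer groups the writes into efficient batches.
with table.batch_writer() as batch:
    for item in items:
        item['ExtId'] = int(item['ExtId'])  # "1478240974" -> 1478240974
        batch.put_item(Item=item)  # overwrites the existing record by key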
More condition customizations can be found in this article: DynamoDB Customization Reference
I am trying to write a call to db.command() using PyMongo that performs a geoNear search, and I would like to exclude fields. Neither the documentation for db.runCommand on the MongoDB site nor the PyMongo documentation explains how to accomplish this.
I understand how to do this using db.collection.find():
response = collection.find_one(
    filter={"PostalCode": postal_code},
    projection={'_id': False}
)
However, I cannot find any example anywhere of how to accomplish this when performing a geoNear search utilizing db.command():
params = {
    "near": {
        "type": "Point",
        "coordinates": [longitude, latitude]
    },
    "spherical": True,
    "limit": 1,
}
response = self.db.command("geoNear", value=self._collection_name, **params)
Can anyone provide insight into how one excludes fields when using db.command?
The geoNear command does not have a "projection" feature. It always returns entire documents. See the geoNear command reference for its options:
https://docs.mongodb.com/manual/reference/command/geoNear/
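If switching from the command to the aggregation framework is an option, a possible workaround (my suggestion, not a feature of the geoNear command) is a $geoNear stage followed by $project, which can exclude fields. This assumes a 2dsphere index on the collection:

# $geoNear must be the first stage in the pipeline; distanceField is required.
pipeline = [
    {
        "$geoNear": {
            "near": {"type": "Point", "coordinates": [longitude, latitude]},
            "distanceField": "dist",
            "spherical": True,
        }
    },
    {"$limit": 1},
    {"$project": {"_id": False}},  # exclude fields, like find()'s projection
]
response = list(collection.aggregate(pipeline))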