Python mongoengine time series collection support

I was looking for a solution for storing and retrieving time series data.
As I already have MongoDB set up in my project, I searched for a solution using MongoDB and mongoengine (instead of pymongo).
So I wonder whether there is a similar solution to this for mongoengine, or, if there isn't one, how to develop it.
{
    "_id" : ObjectId("60c0d44894c10494260da31e"),
    "source" : { sensorId: 123, region: "americas" },
    "airPressure" : 99,
    "windSpeed" : 22,
    "temp" : {
        "degreesF": 39,
        "degreesC": 3.8
    },
    "ts" : ISODate("2021-05-20T10:24:51.303Z")
}
db.createCollection("weather", {
    timeseries: {
        timeField: "ts",
        metaField: "source",
        granularity: "minutes"
    },
    expireAfterSeconds: 9000
});
The sample code is taken from MongoDB's New Time Series Collections, where a pymongo-based solution is described, but I want to do it with mongoengine. Is that possible?

Try creating your time-series collection with pymongo like this:
import pymongo

connection = pymongo.MongoClient('mongodb://localhost')
db = connection['<dbName>']
db.create_collection('<tsCollectionName>',
                     timeseries={'timeField': '<timeField>',
                                 'metaField': '<metaField>',
                                 'granularity': '<granularity>'})
You can replace every value between <> with your own values.
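If your version of mongoengine doesn't expose time-series options in a Document's meta, one workaround is to create the collection through the underlying pymongo handle that mongoengine exposes, and then point a Document at that collection. A rough sketch under those assumptions (the database name weather_db and the field layout are illustrative, taken from the sample document above):

from mongoengine import connect, Document, DateTimeField, DictField, IntField
from mongoengine.connection import get_db

connect('weather_db', host='mongodb://localhost')   # hypothetical database name
db = get_db()                                        # underlying pymongo Database

# Create the time-series collection via pymongo if it doesn't exist yet.
if 'weather' not in db.list_collection_names():
    db.create_collection(
        'weather',
        timeseries={'timeField': 'ts', 'metaField': 'source', 'granularity': 'minutes'},
        expireAfterSeconds=9000,
    )

class Weather(Document):
    meta = {'collection': 'weather'}   # reuse the collection created above
    source = DictField()
    airPressure = IntField()
    windSpeed = IntField()
    ts = DateTimeField(required=True)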

Related

Find all unique values for field in Elasticsearch through python

I've been scouring the web for some good python documentation for Elasticsearch. I've got a query term that I know returns the information I need, but I'm struggling to convert the raw string into something Python can interpret.
This will return a list of all unique 'VALUE's in the dataset.
{"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}
Which I have taken from a dashboarding tool which accesses this data.
But I don't seem to be able to convert this into correct python.
I've tried this:
body_test = {"find": "terms", "field": "hierarchy1.hierarchy2.VALUE"}

es = Elasticsearch(SETUP CONNECTION)
es.search(
    index="INDEX_NAME",
    body=body_test
)
but it doesn't like the find value. I can't find anything in the documentation about find.
RequestError: RequestError(400, 'parsing_exception', 'Unknown key for
a VALUE_STRING in [find].')
The only way I've got it to slightly work is with
es_search = (
Search(
using=es,
index=db_index
).source(['hierarchy1.hierarchy2.VALUE'])
)
But I think this is pulling the entire dataset and then filtering (which I obviously don't want to be doing each time I run this code). This needs to be done through python and so I cannot simply POST the query I know works.
I am completely new to ES and so this is all a little confusing. Thanks in advance!
So it turns out that the find in this case was specific to Grafana (the dashboarding tool I took the query from).
In the end I used this site and took the code from there. It's a LOT more complicated than I thought it was going to be, but it works very quickly and doesn't put a strain on the database (which my alternative method was doing).
In case the link dies in future years, here's the code I used:
from elasticsearch import Elasticsearch

es = Elasticsearch()

def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
            fieldname: {
                "terms": {
                    "field": fieldname
                }
            }
        }]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = result["aggregations"]["values"]["after_key"]
        else:  # Finished!
            break

# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
    print(result)  # e.g. {'key': {'pattern': 'mypattern'}, 'doc_count': 315}
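If the field only has a modest number of distinct values, a plain terms aggregation (without the composite pagination above) may already be enough. A minimal sketch, assuming the index and field names from the question and a cap of 1000 buckets:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# size=0 skips returning documents; only the aggregation buckets come back.
response = es.search(
    index="INDEX_NAME",
    body={
        "size": 0,
        "aggs": {
            "unique_values": {
                "terms": {"field": "hierarchy1.hierarchy2.VALUE", "size": 1000}
            }
        }
    }
)
for bucket in response["aggregations"]["unique_values"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])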

Updating a collection based on the value extracted from another collection

I'm revising my previous question. I have a collection named FileCollection with the following document:
{
    "_id": {
        "$oid": "5e791a53185fbb070378660a"
    },
    "selectedfiles": [{
        "inputfile": "https://localhost/_HAC-154_1584994899979.jpg",
        "Selectedby": "Joe"
    }]
}
I need to read the value of selectedfiles.inputfile as a string variable. I'm trying to do this in Python using this code:
from pymongo import MongoClient

mydb = MongoClient(mongodbConnection)
myCollection = mydb.FileCollection
myValue = myCollection.selectedfile[0].inputfile.value
print(myValue)
client.close
The output is a JSON document rather than the actual value of inputfile. Please help.
Thanks
Isn't it just because you're missing an s?
You had:
myValue=myCollection.selectedfile[0].inputfile.value
instead of:
myValue=myCollection.selectedfiles[0].inputfile.value
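As a side note (not part of the original answer): attribute access on a pymongo collection object doesn't fetch documents, so even with the extra s you would normally query the document first and then index into the resulting dict. A minimal sketch, with a hypothetical database name:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # replace with your connection string
db = client["mydatabase"]                           # hypothetical database name
doc = db.FileCollection.find_one()                  # or find_one({"_id": ObjectId("...")})
myValue = doc["selectedfiles"][0]["inputfile"]
print(myValue)   # https://localhost/_HAC-154_1584994899979.jpg
client.close()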

Examine and tweak a given analyzer?

I'm using the French analyzer.
Having examined the output from IndexClient.analyze(...) for this analyzer I'm a little unhappy with some of the stopwords (e.g. the expression 'ayant-cause' comes out as 'caus', because 'ayant' is a stopword: French stopwords).
How do I go about examining these stopwords and then tweaking them? Do I have to create a custom analyzer based on the existing French one? Or can I directly tweak the French one?
NB I am using the Python elasticsearch module ("thin client"), but an answer in terms of REST commands would be fine.
Yes, you can tweak the existing analyzer and examine the results using the Analyze API of Elasticsearch.
Ultimately an analyzer is made up of three things: char filters, a tokenizer and token filters, and you can create your own combination of these to build a custom analyzer and test it with the REST API.
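For example (my own illustration, not from the answer), the Python client exposes the Analyze API via es.indices.analyze, so you can see exactly which tokens the built-in french analyzer produces for a phrase:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Run the built-in French analyzer over a sample phrase and print the resulting tokens.
result = es.indices.analyze(body={"analyzer": "french", "text": "ayant-cause"})
for token in result["tokens"]:
    print(token["token"])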
I spent quite a bit of time figuring out at least a workaround arrangement.
Having downloaded that French stop-words file from GitHub, I then edited it (e.g. to exclude "ayant"). It currently resides in the "config" directory of my installed ES setup (although you can set an absolute path).
Then I made my settings/mappings object like this:
{
    'settings': {
        'analysis': {
            'analyzer': {
                'tweaked_french': {
                    'type': 'french',
                    # NB W10, config path currently D:\apps\ElasticSearch\elasticsearch-7.10.2\config
                    'stopwords_path': 'tweaked_french_stop.txt',
                },
            },
        },
    },
    'mappings': {
        'dynamic': 'strict',
        'properties': {
            'my_french_field': {
                'type': 'text',
                'term_vector': 'with_positions_offsets',
                'fields': {
                    'french': {
                        'type': 'text',
                        'analyzer': 'tweaked_french',
                        'term_vector': 'with_positions_offsets',
                    },
                },
            },
        },
    },
}
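For completeness, one way (not shown above) to install that settings/mappings object is via the Python client's indices.create; the index name my_index is illustrative and the dict below is a condensed version of the one above:

from elasticsearch import Elasticsearch

es = Elasticsearch()

settings_and_mappings = {
    'settings': {'analysis': {'analyzer': {'tweaked_french': {
        'type': 'french', 'stopwords_path': 'tweaked_french_stop.txt'}}}},
    'mappings': {'properties': {'my_french_field': {
        'type': 'text',
        'fields': {'french': {'type': 'text', 'analyzer': 'tweaked_french'}}}}},
}

# Recreate the index with the tweaked analyzer and mapping.
if es.indices.exists(index="my_index"):
    es.indices.delete(index="my_index")
es.indices.create(index="my_index", body=settings_and_mappings)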
What is then rather wonderful is that, according to my experiments, you can get a query object to find and use that custom-built analyser (i.e. it's there and available, in the installed index). So your query object is relatively simple:
{
    'query': {
        'simple_query_string': {
            'query': query_text,
            'fields': [
                'my_french_field.french',
            ],
            'analyzer': 'tweaked_french',
        },
    },
    'highlight': {
        'fields': {
            'my_french_field.french': {
                'type': 'fvh',
                ...
            },
        },
        'number_of_fragments': 0
    }
}
After that you can query in French: your query gets stemmed and the result is used for the search. If "ayant" is a word in your query string, it will now return hits including "ayant-cause", proving that both the query and the mapping spec are using the tweaked stop-word list.
I'd still like to know whether a way exists that doesn't involve an external file, i.e. editing on the fly what is already there (or just seeing what is already there...).
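One possibility that avoids the external file (a sketch I haven't verified here): the built-in language analyzers also accept a stopwords parameter directly, so you may be able to supply the edited word list inline in the index settings instead of using stopwords_path:

'analysis': {
    'analyzer': {
        'tweaked_french': {
            'type': 'french',
            # Inline stop-word list (illustrative and heavily truncated), with 'ayant' left out.
            'stopwords': ['au', 'aux', 'avec', 'ce', 'ces', 'dans', 'de', 'des', 'du'],
        },
    },
},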

Using Bad Json in Python

I have a JSON file which I want to access in my Python code. The JSON file looks like this:
{
    "fc1" : {
        region : "Delhi",
        marketplace : "IN"
    },
    "fc2" : {
        region : "Rajasthan",
        marketplace : "IN"
    }
}
I want to use the above JSON in my Python code and access the values by their keys ("fc1", "fc2").
Since this is not actual valid JSON, I am having difficulty accessing the values.
Is there any way in Python to access this type of JSON?
Thanks.
I agree with the comment that, if you generated that file, then you should put quotes around region and marketplace when generating it (or have the person who generated it do the same). However, if this absolutely isn't an option for whatever reason, the following approach might work:
import json

data_string = """
{
    "fc1": {
        region: "Delhi",
        marketplace: "IN"
    },
    "fc2": {
        region: "Rajasthan",
        marketplace: "IN"
    }
}
"""

data = json.loads(data_string.replace('region', '"region"').replace('marketplace', '"marketplace"'))

>>> data
{'fc1': {'region': 'Delhi', 'marketplace': 'IN'},
 'fc2': {'region': 'Rajasthan', 'marketplace': 'IN'}}
Note that you would have to do the same for any unquoted key.
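If there are many different unquoted keys, a regular expression can quote them all in one pass instead of chaining replace calls. A rough generalization of the idea above (mine, not from the answer); it works for this particular structure but would mangle string values that contain colons, such as URLs:

import json
import re

data_string = """
{
    "fc1": { region: "Delhi", marketplace: "IN" },
    "fc2": { region: "Rajasthan", marketplace: "IN" }
}
"""

# Wrap any bare identifier that is directly followed by a colon in double quotes.
fixed = re.sub(r'(?<!")\b([A-Za-z_]\w*)\b\s*:', r'"\1":', data_string)
data = json.loads(fixed)
print(data)  # {'fc1': {'region': 'Delhi', 'marketplace': 'IN'}, 'fc2': {'region': 'Rajasthan', 'marketplace': 'IN'}}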
There is a module called dirtyjson which can read this kind of incorrect JSON.
import dirtyjson

data_string = """
{
    "fc1": {
        region: "Delhi",
        marketplace: "IN"
    },
    "fc2": {
        region: "Rajasthan",
        marketplace: "IN"
    }
}
"""

data = dirtyjson.loads(data_string)
print(data)
print(data['fc1'])
print(data['fc2'])

How does the Python JSON library deal with time?

So I'm currently learning MongoDB and I'm using PyMongo rather than MongoDB shell.
When I started trying the basic CRUD operations, I found it hard to load the bios data using PyMongo, since the original data posted on the website uses a strange ISODate format for time.
The standard Python json library doesn't seem to support this, and mongoimport doesn't seem to support it either (not sure). But I found that after modifying it into {$date:"2017-04-01T05:00:00Z"}, mongoimport worked.
Right now I'm using subprocess to call an external command to import the data. So my question is: how can I correctly read the JSON data in Python and insert it with PyMongo?
Details
The bios data in the MongoDB documentation looks like this:
{
    "_id" : 1,
    "name" : {
        "first" : "John",
        "last" : "Backus"
    },
    "birth" : ISODate("1924-12-03T05:00:00Z"),
    "death" : ISODate("2007-03-17T04:00:00Z"),
    "contribs" : [
        "Fortran",
        "ALGOL",
        "Backus-Naur Form",
        "FP"
    ],
    "awards" : [
        {
            "award" : "W.W. McDowell Award",
            "year" : 1967,
            "by" : "IEEE Computer Society"
        },
        {
            "award" : "National Medal of Science",
            "year" : 1975,
            "by" : "National Science Foundation"
        },
        {
            "award" : "Turing Award",
            "year" : 1977,
            "by" : "ACM"
        },
        {
            "award" : "Draper Prize",
            "year" : 1993,
            "by" : "National Academy of Engineering"
        }
    ]
}
When I try to parse it with Python's json library, I get a json.decoder.JSONDecodeError because of the line "birth" : ISODate("1924-12-03T05:00:00Z"), and mongoimport cannot parse it for the same reason.
When I modified
"birth" : ISODate("1924-12-03T05:00:00Z") into
"birth" : {$date:"2017-04-01T05:00:00Z"}
mongoimport worked, but Python still wasn't able to parse it.
What I am asking for is a way to deal with this problem within Python and PyMongo rather than by calling external commands.
The example that you're looking at was probably intended to be used within the mongo shell, where the ISODate bson type can be parsed as shown.
Outside of that, we have the challenge that JSON does not have a date datatype, nor does it have a standard way of representing dates. To deal with this challenge, MongoDB created something called Extended JSON, which can encode dates in JSON similar to how you have shown with $date.
In order to work with Extended JSON in Python / PyMongo, you could use json_util.
Here's a brief example:
from bson.json_util import loads
from pymongo import MongoClient

json = '''
{
    "_id" : 1,
    "name" : {
        "first" : "John",
        "last" : "Backus"
    },
    "birth" : {"$date": "2017-04-01T05:00:00.000Z"},
    "death" : {"$date": "2017-04-01T05:00:00.000Z"}
}
'''

bson = loads(json)
print(str(bson))

db = MongoClient().test
collection = db.bios
collection.insert_one(bson)  # insert() was removed in newer PyMongo; insert_one() is the current API
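And to load a whole file of such documents without shelling out to mongoimport, one approach (a sketch, assuming a hypothetical bios.json file that contains a JSON array of documents in Extended JSON form) is to parse the file with json_util and insert everything with insert_many:

from bson.json_util import loads
from pymongo import MongoClient

# bios.json is assumed to hold a JSON array of documents using {"$date": ...} fields.
with open("bios.json", "r") as f:
    docs = loads(f.read())

db = MongoClient().test
db.bios.insert_many(docs)
print(db.bios.count_documents({}))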
