Query builder for InfluxDB? - Python

What is the preferred way of properly building InfluxDB queries in Python?
(I have been looking for a proper InfluxDB query builder. The ones I've found are either outdated or can't generate valid InfluxDB queries.)

You might be able to use Pinform, which is a kind of ORM/OSTM (object time series mapping) for InfluxDB.
It can help with designing the schema and building normal or aggregation queries.
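For context, a measurement schema in Pinform looks roughly like the sketch below; the class and field names (Measurement, FloatField, Tag) are recalled from the project README and should be verified there:
from pinform import Measurement
from pinform.fields import FloatField
from pinform.tags import Tag

class OHLC(Measurement):
    class Meta:
        measurement_name = 'ohlc'

    # Fields hold the time series values; tags are indexed metadata.
    close = FloatField(null=False)
    symbol = Tag(null=False)
An aggregation query over that measurement can then be built through a client instance (cli below):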
cli.get_fields_as_series(OHLC,
                         field_aggregations={'close': [AggregationMode.MEAN]},
                         tags={'symbol': 'AAPL'},
                         group_by_time_interval='10d')
Disclaimer: I am the author of this library.

Is there a way to define a MongoDB schema using Motor?

There is a way to define a MongoDB collection schema using Mongoose in Node.js. Mongoose verifies the schema at the time of running the queries.
I have been unable to find a similar thing for Motor in Python/Tornado. Is there a way to achieve a similar effect in Motor, or is there a package which can do that for me?
No, there isn't. Motor is a MongoDB driver; it does basic operations but doesn't provide many conveniences. An Object Document Mapper (ODM) library like MongoTor, built on Motor, provides higher-level features like schema validation.
I don't vouch for MongoTor; proceed with caution. Consider whether you really need an ODM: MongoDB's raw data format is close enough to Python types that most applications don't need a layer between their code and the driver.
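For example, working with Motor directly keeps you close to plain dicts. A minimal sketch using Motor's Tornado integration (the URL, database, and collection are placeholders):
from motor.motor_tornado import MotorClient

client = MotorClient('mongodb://localhost:27017')
db = client.test_database

async def save_and_load():
    # Plain dicts map directly to BSON documents; no schema layer required.
    await db.users.insert_one({'name': 'Ada', 'age': 36})
    return await db.users.find_one({'name': 'Ada'})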
Currently (2019), the uMongo project https://github.com/Scille/umongo seems the most active and useful if you need a sync/async Python MongoDB ODM. It works with multiple drivers, like PyMongo, or Motor for async.
Docs are here: http://umongo.readthedocs.io
You can also use ODMantic, as it has the best documentation and its engine supports the Motor client.
Currently (2023), Beanie [Github Link] is the best ODM I have ever used. It works beautifully with FastAPI and is deeply integrated with Pydantic, which makes defining data models very easy.
They have very nice documentation as well.
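A minimal Beanie sketch (the database name and model are illustrative; Beanie documents are Pydantic models under the hood):
import asyncio
from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient

class Product(Document):
    # Fields are plain Pydantic-style annotations.
    name: str
    price: float

async def main():
    client = AsyncIOMotorClient('mongodb://localhost:27017')
    await init_beanie(database=client.shop, document_models=[Product])
    await Product(name='tea', price=3.5).insert()
    cheap = await Product.find(Product.price < 5).to_list()

asyncio.run(main())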

Building a DSL query language

I'm working on a project (written in Django) which has only a few entities, but many rows for each entity.
My application has several static "reports", written directly in plain SQL. The users can also search the database via a generic filter form. Since the target audience is really tech-savvy and at some point the filter no longer fits their needs, I'm thinking about creating a query language for my database, like YQL or Jira's advanced search.
I found http://sourceforge.net/projects/littletable/ and http://www.quicksort.co.uk/DeeDoc.html, but it seems that they only operate on in-memory objects. Since the database can be too large to hold in memory, I would prefer that the query be translated into SQL (or better, a Django query) before doing the actual work.
Are there any libraries or best practices for doing this?
Writing such a DSL is actually surprisingly easy with PLY, and what ho, there's already an example available for doing just what you want in Django. You see, Django has this fancy thing called a Q object which makes the Django querying side of things fairly easy.
At DjangoCon EU 2012, Matthieu Amiguet gave a session entitled Implementing Domain-specific Languages in Django Applications in which he went through the process, right down to implementing such a DSL as you desire. His slides, which include all you need, are available on his website. The final code (linked to from the last slide, anyway) is available at http://www.matthieuamiguet.ch/media/misc/djangocon2012/resources/compiler.html.
Reinout van Rees also produced some good comments on that session. (He normally does!) These cover a little of the missing context.
In the examples given, you'll see something very similar to YQL and JQL:
groups__name="XXX" AND NOT groups__name="YYY"
(modified > 1/4/2011 OR NOT state__name="OK") AND groups__name="XXX"
It can also be tweaked very easily; for example, you might want to use groups.name rather than groups__name (I would). This modification could be made fairly trivially (allow . in the FIELD token by modifying t_FIELD, and then replace . with __ before constructing the Q object in p_expression_ID).
So, that satisfies simple querying; it also gives you a good starting point should you wish to make a more complex DSL.
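To make the target concrete, the DSL lines above would compile down to ordinary Django Q objects, roughly like this (MyModel is a placeholder for one of your own models):
from django.db.models import Q

# groups__name="XXX" AND NOT groups__name="YYY"
query = Q(groups__name="XXX") & ~Q(groups__name="YYY")
results = MyModel.objects.filter(query)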
I've faced exactly this problem: a large database which needs searching. I made some static reports and several fancy filters using Django (very easy with Django), just like you have.
However, the power users were clamouring for more. I decided that there was already a DSL that they all knew: SQL. The question was how to make it secure enough.
So I used Django permissions to give the power users permission to save SQL queries in a new table. I then made a view for the not-quite-so-power users to run these queries. I made the queries take optional parameters. The queries were run using Python's lower-level DB-API, which Django uses under the hood for its ORM anyway.
The real trick was opening a read-only database connection to run these queries, just to make sure that no updates were ever run. I made a read-only connection by creating a different user in the database with lower permissions and opening a specific connection for that user in the view.
TL;DR - SQL is the way to go!
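A minimal sketch of that pattern using Django's multi-database support (the 'reports_readonly' alias is hypothetical: an entry in settings.DATABASES pointing at a database user with SELECT-only privileges):
from django.db import connections

def run_saved_query(sql, params=None):
    # The read-only alias makes stray UPDATE/DELETE statements fail at the DB level.
    with connections['reports_readonly'].cursor() as cursor:
        cursor.execute(sql, params or [])
        columns = [col[0] for col in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]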
Depending on the form of your data, the types of queries your users need to use, and the frequency that your data is updated, an alternative to the pure SQL solution suggested by Nick Craig-Wood is to index your data in Solr and then run queries against it.
Solr is an added layer of complexity (configuration, data synchronization) but it is super-fast, can handle large datasets, and provides a (relatively) intuitive query language.
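If you go that route, querying Solr from Python is straightforward with a client such as pysolr (a sketch; the core name and fields are placeholders):
import pysolr

solr = pysolr.Solr('http://localhost:8983/solr/mycore', timeout=10)
# Solr's query syntax is reasonably close to what power users expect.
results = solr.search('state:OK AND NOT group:YYY', rows=20)
for doc in results:
    print(doc)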
You could actually write your own SQL-ish language using pyparsing. There is even a pretty verbose example you could extend.
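A tiny sketch of the pyparsing idea, parsing just a single field="value" condition (a real grammar would add boolean and comparison operators):
from pyparsing import Word, alphanums, QuotedString, Suppress

field = Word(alphanums + '_')
value = QuotedString('"')
condition = field('field') + Suppress('=') + value('val')

parsed = condition.parseString('groups__name="XXX"')
print(parsed['field'], parsed['val'])  # -> groups__name XXX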

ORM with Graph-Databases like Neo4j in Python

I wonder whether there is a solution for (or a need for) an ORM with a graph database (e.g. Neo4j). I'm tracking relationships (A is related to B, which is related to A via C, etc., thus constructing a large graph) of entities (including additional attributes for those entities) and need to store them in a DB, and I think a graph database would fit this task perfectly.
Now, with SQL-like DBs, I use SQLAlchemy's ORM to store my objects, especially because I can retrieve objects from the DB and work with them in a Pythonic style (use their methods, etc.).
Is there any object-mapping solution for Neo4j or another graph DB, so that I can store and retrieve Python objects into and from the graph DB and work with them easily?
Or would you write some functions or adapters, as in the Python sqlite3 documentation (http://docs.python.org/library/sqlite3.html#letting-your-object-adapt-itself), to retrieve and store objects?
Shameless plug... there is also my own ORM which you may want to check out: https://github.com/robinedwards/neomodel
It's built on top of py2neo, using Cypher and REST API calls under the hood, i.e. no dependency on Gremlin.
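A minimal neomodel sketch (assumes a local Neo4j instance; the classes shown are from neomodel's documented API):
from neomodel import StructuredNode, StringProperty, RelationshipTo

class Person(StructuredNode):
    name = StringProperty(unique_index=True)
    knows = RelationshipTo('Person', 'KNOWS')

# Instances behave like ordinary Python objects backed by graph nodes.
ada = Person(name='Ada').save()
bob = Person(name='Bob').save()
ada.knows.connect(bob)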
There are a couple of choices in Python out there right now, based on databases' REST interfaces.
As I mentioned in the link @Peter provided, we're working on neo4django, which updates the old Neo4j/Django integration. It's a good choice if you need complex queries and want an ORM that will manage node indexing as well, or if you're already using Django. It works very similarly to the native Django ORM. Find it on PyPi or GitHub.
There's also a more general solution called Bulbflow that is supposed to work with any graph database supported by Blueprints. I haven't used it, but from what I've seen it focuses on domain modeling. Bulbflow already has working relationship models, for example, which we're still working on, but it doesn't much support complex querying (as we do with Django querysets + index use). It also lets you work a bit closer to the graph.
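For flavor, a neo4django model reads much like a native Django model. A rough sketch (the property and relationship class names are recalled from the project docs and should be verified there):
from neo4django.db import models

class Person(models.NodeModel):
    name = models.StringProperty()
    # Relationships are declared on the model, much like Django ForeignKeys.
    friends = models.Relationship('Person', rel_type='friends_with')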
Maybe you could take a look at Bulbflow, which allows you to create models in Django, Flask or Pyramid. However, it works over a REST client instead of the Python binding provided by Neo4j, so perhaps it's not as fast as the native binding.

Setting up Pyramid to use MySQL raw instead of SQLAlchemy

We're trying to set up a Pyramid project that will use raw MySQL queries instead of SQLAlchemy.
My experience with Pyramid/Python is limited, so I was hoping to find a guide online. Unfortunately, I haven't been able to find anything to push us in the right direction. Most search results were for people trying to use raw SQL/MySQL commands with SQLAlchemy (many were re-posted links).
Anyone have a useful tutorial on this?
Pyramid at its base does not assume that you will use any one specific library to help you with your persistence. In order to make things easier for people who DO wish to use libraries such as SQLAlchemy, the Pyramid library contains scaffolding, which is essentially some auto-generated code for a basic site, with some additions to set up items like SQLAlchemy or a specific routing strategy. The Pyramid documentation should be able to lead you through creating a new project using the "pyramid_starter" scaffold, which sets up a basic site without SQLAlchemy.
This will give you the basics you need to set up your views, but next you will need to add code to allow you to connect to a database. Luckily, since your site is just Python code, learning how to use MySQL in Pyramid is simply learning how to use MySQL in Python, and then doing the exact same steps within your Pyramid project.
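For instance, a view using the PyMySQL driver directly might look roughly like this (the driver choice, route name, and credentials are all placeholders):
import pymysql
from pyramid.view import view_config

@view_config(route_name='users', renderer='json')
def users_view(request):
    conn = pymysql.connect(host='localhost', user='app',
                           password='secret', database='mydb')
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            # Parameterised queries guard against SQL injection.
            cur.execute("SELECT id, name FROM users WHERE active = %s", (1,))
            return {'users': cur.fetchall()}
    finally:
        conn.close()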
Keep in mind that even if you'd rather use raw SQL queries, you might still find some usefulness in SQLAlchemy. At its base level, SQLAlchemy simply wraps around the DBAPI calls and adds useful features like connection pooling. The ORM functionality is actually a large addition on top of the tight, lower-level SQLAlchemy toolset.
SQLAlchemy does not make any assumption that you will be using its ORM. If you wish to use plain SQL, you can do so with nothing more than what SQLAlchemy already provides. For instance, if you followed the recipe in the cookbook, you would have access to the SQLAlchemy session object as request.db, and your handler would look something like this:
def someHandler(request):
    rows = request.db.execute("SELECT * FROM foo").fetchall()
The Quick Tutorial shows a Pyramid application that uses SQL but not SQLAlchemy. It uses SQLite, but should be reasonably easy to adapt for MySQL.

Lightweight Object->Database in Python

I am in need of a lightweight way to store dictionaries of data in a database. What I need is something that:
Creates a database table from a simple type description (int, float, datetime, etc.)
Takes a dictionary object and inserts it into the database (including handling datetime objects!)
If possible: can handle basic references, so the dictionary can reference other tables
I would prefer something that doesn't do a lot of magic. I just need an easy way to set up and get data into an SQL database.
What would you suggest? There seems to be a lot of ORM software around, but I find it hard to evaluate them.
SQLAlchemy's SQL expression layer can easily cover the first two requirements. If you also want reference handling then you'll need to use the ORM, but this might fail your lightweight requirement depending on your definition of lightweight.
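A sketch of those first two requirements with the SQL expression layer (the table and columns are illustrative):
from datetime import datetime
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, Float, String, DateTime)

engine = create_engine('sqlite:///data.db')  # or your MySQL/PostgreSQL URL
metadata = MetaData()

events = Table('events', metadata,
               Column('id', Integer, primary_key=True),
               Column('name', String(50)),
               Column('value', Float),
               Column('created', DateTime))
metadata.create_all(engine)

# Insert a plain dict; the DateTime column accepts datetime objects directly.
with engine.begin() as conn:
    conn.execute(events.insert(), {'name': 'sample', 'value': 1.5,
                                   'created': datetime.utcnow()})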
SQLAlchemy offers an ORM much like Django's, but does not require that you work within a web framework.
From its description, Axiom is perhaps a Pythonic tool for this.
Seeing as you have mentioned SQL, Python and ORM in your tags, are you looking for Django? Of all the web frameworks I've tried, I like this one the best. You'd be looking at models, specifically. This could be too fancy for your needs, perhaps, but that shouldn't stop you from looking at the code of Django itself and learning from it.
