How to use multiple tables in Python sqlitedict in the same file?

In the sqlitedict README on GitHub, it says that one of the features is:
Support for multiple tables (=dicts) living in the same database file.
https://github.com/RaRe-Technologies/sqlitedict#features
How can I do that? I have researched this, but all I found were workarounds, e.g. nested dictionaries.

The source code shows that the SqliteDict class takes a tablename argument, so the answer is to use that argument. Kinda simple, but it wasn't covered in the docs.
https://github.com/RaRe-Technologies/sqlitedict/blob/master/sqlitedict.py#L108
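A minimal sketch of how that looks in practice (file and table names here are made up for illustration); two SqliteDict instances pointing at the same file but different tablename values behave as two independent dicts:
from sqlitedict import SqliteDict
# Two dict-like tables living in the same database file,
# separated by the tablename argument.
users = SqliteDict('./example.sqlite', tablename='users', autocommit=True)
orders = SqliteDict('./example.sqlite', tablename='orders', autocommit=True)
users['alice'] = {'age': 30}
orders['order-1'] = {'user': 'alice', 'total': 9.99}
print(users['alice'])    # {'age': 30}
users.close()
orders.close()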

Node-level documentation for sqlglot

Using the Python library sqlglot, where can I find documentation that explains:
Which attributes should I expect to find on which expression node types? (Which arg types do Join, Table, Select, etc. have?)
What overall structure should I expect the AST to have for various kinds of SQL statements? (E.g. that a Select has a "joins" child, which in turn has a list of tables.) And what "arg" name do I use to access each of these?
For example, what documentation could I look at to know that code like the below (from here) will find the names of the tables within the joins? How would I know to request "joins" from node.args? What does "this" refer to?
import sqlglot
from sqlglot import exp
node = sqlglot.parse_one(sql)  # sql is the SQL string being parsed
for join in node.args["joins"]:
    table = join.find(exp.Table).text("this")  # "this" holds the table's identifier
My use case is to parse a bunch of Hive SQL scripts in order to find FROM, INSERT, ADD/DROP TABLE statements/clauses within the scripts, for analyzing which statements interact with which tables. So I am using sqlglot as a general-purpose SQL parser/AST, rather than as a SQL translator.
I have generated a copy of the pdocs locally, but it only tells me which Python API methods are available on the Expression nodes. It does not seem to answer the questions above, unless I am looking in the wrong place.
You can look in the expressions.py file:
https://github.com/tobymao/sqlglot/blob/main/sqlglot/expressions.py
Every expression type has an arg_types attribute listing the children it accepts.
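As a quick way to answer "which args does this node take" interactively, you can print arg_types for the node classes you care about (the exact contents vary by sqlglot version):
from sqlglot import expressions as exp
# Each Expression subclass declares its expected children in arg_types,
# a dict mapping arg names to whether the arg is required.
print(exp.Select.arg_types)  # "joins" appears among Select's args
print(exp.Join.arg_types)
print(exp.Table.arg_types)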

Insert row into database with unknown schema using Python module peewee

I am building a database interface using Python's peewee module. I am trying to figure out how to insert data into an existing database where I do not know the schema.
My idea is to use playhouse.reflection.Introspector to find out the database schema, then use that information to create class objects which can then be inserted into the existing database.
So far I've gotten to:
from playhouse.reflection import Introspector

introspector = Introspector.from_database(database)
models = introspector.generate_models()
I don't know where to go from there.
1) Can I create database objects in this manner? What is the next step?
2) Is there an easier way to do this?
peewee includes an introspection tool called pwiz that can (basically) introspect a database and produce model definitions. It is run as a command-line script and dumps the model definitions to stdout, so invocation is like any other unix tool. Here is an example from the docs:
python -m pwiz -e postgresql my_postgres_db > mymodels.py
From there edit mymodels.py to get what you need.
You could do this on the fly, but it would require a few steps and is hackish (not to mention pointless if you really don't know anything about the schema); a rough sketch follows the list below:
Run pwiz as an os command
Read it to pick out the model names
Import whatever you find
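As a hedged sketch of those steps (the engine flag and database name are assumptions; adjust for your setup):
import importlib
import subprocess
# 1) Run pwiz as an os command, dumping the model definitions to a file.
with open("mymodels.py", "w") as fh:
    subprocess.run(["python", "-m", "pwiz", "-e", "sqlite", "app.db"],
                   stdout=fh, check=True)
# 2) Import whatever it generated and pick out the model names.
mymodels = importlib.import_module("mymodels")
print([name for name in dir(mymodels) if not name.startswith("_")])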
BUT
If you really don't know the schema to start with then you have no idea what the semantics of the database are anyway, which means whatever you find is literally meaningless. Unless you at least know some schema/table/column names you are hunting for (in which case you do know something about the schema) there isn't really much you can do with regard to inserting data (not in a sane way), though you could certainly dump data from the db. But if you just wanted a database dump then pg_dump would have been easier.
I suspect this is actually an X-Y problem. What problem is it you are trying to solve by using this technique? What effect is it supposed to achieve within the context of your system?
If you want to create a GUI, check out the sqlite_web project. It uses Peewee to create a web-based SQLite database manager.

Loading data from a (MySQL) database into Django without models

This might sound like a bit of an odd question - but is it possible to load data from a (in this case MySQL) table to be used in Django without the need for a model to be present?
I realise this isn't really the Django way, but given my current scenario, I don't really know how better to solve the problem.
I'm working on a site which, for one aspect, makes use of a table of data bought from a third party. The columns of interest are likely to remain stable; however, the structure of the table could change with subsequent updates to the data set. The table is also massive (in terms of columns), so I'm not keen on typing out each field in the model one by one. I'd also like to leave the table intact, so coming up with a model which represents only the set of columns I am interested in is not really an ideal solution.
Ideally, I want to have this table in a database somewhere (possibly separate to the main site database) and access its contents directly using SQL.
You can always execute raw SQL directly against the database: see the docs.
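For instance, a minimal sketch with a plain cursor (the table and column names here are hypothetical):
from django.db import connection
def fetch_vendor_rows(name):
    # Raw SQL with no model involved; vendor_data is a made-up table name.
    with connection.cursor() as cursor:
        cursor.execute("SELECT id, name FROM vendor_data WHERE name = %s", [name])
        columns = [col[0] for col in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]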
There is a feature called inspectdb in Django. For legacy databases like MySQL, it creates models automatically by inspecting your db tables and writes them into your app's models.py, so you don't need to type every column manually. But read the documentation carefully before creating the models, since the generated models usually need manual adjustment before use. I hope this will be useful for you.
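A typical invocation, run from the project root, writes the generated models to a file:
python manage.py inspectdb > models.py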
I guess you can use any SQL library available for Python. For example: http://www.sqlalchemy.org/
You then just have to connect to your database, perform your query and use the data as you wish. I think you can't use Django's ORM without its model system, but nothing prevents you from using another library for this in parallel.
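A hedged sketch of that approach with SQLAlchemy Core, reflecting the table rather than typing out its columns (the connection string and table name are assumptions):
from sqlalchemy import MetaData, Table, create_engine, select
engine = create_engine("mysql+pymysql://user:password@localhost/thirdparty")
metadata = MetaData()
# Reflect the (massive) table's columns from the live database.
vendor_data = Table("vendor_data", metadata, autoload_with=engine)
with engine.connect() as conn:
    for row in conn.execute(select(vendor_data).limit(5)):
        print(row)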

Create sqlite virtual table in Python

I want to create a SQL-like interface for a special data source which I can query using Python. That is, I have a data source with a number of named iterable containers of entities, and I would like to be able to use SQL to filter, join, sort and preferably update/insert/delete as well.
To my understanding, sqlite3's virtual table functionality is quite suitable for this task. Is it possible to create the necessary bindings in Python? I understand that the glue has to be C-like, but my hope is that someone has already written a Python wrapper in C or using ctypes.
I will also accept an answer for a better/easier way of doing this.
You can do this by registering a virtual table in SQLite with the APSW Python bindings.
An example for talking to CouchDB using APSW.
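A condensed, read-only sketch in the spirit of the virtual table example in the APSW documentation (the module name and hard-coded rows are made up; older APSW releases spell create_module as createmodule):
import apsw
ROWS = [("alpha", 1), ("beta", 2)]  # stand-in for your iterable data source
class Source:
    def Create(self, db, modulename, dbname, tablename, *args):
        # Declare the virtual table's shape and hand back the table object.
        return "CREATE TABLE x(name, value)", Table()
    Connect = Create
class Table:
    def BestIndex(self, constraints, orderbys):
        return None  # no index optimization in this sketch
    def Open(self):
        return Cursor()
    def Disconnect(self):
        pass
    Destroy = Disconnect
class Cursor:
    def Filter(self, indexnum, indexname, constraintargs):
        self.pos = 0
    def Eof(self):
        return self.pos >= len(ROWS)
    def Rowid(self):
        return self.pos
    def Column(self, col):
        return ROWS[self.pos][col]
    def Next(self):
        self.pos += 1
    def Close(self):
        pass
connection = apsw.Connection(":memory:")
connection.create_module("pysource", Source())
cursor = connection.cursor()
cursor.execute("CREATE VIRTUAL TABLE demo USING pysource()")
print(list(cursor.execute("SELECT * FROM demo")))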
There's a similar capability for Perl, namely:
Create SQLite Virtual Table extensions in Perl
Finally, if you want to make a Python-based virtual table in PostgreSQL 9.1, check out http://multicorn.org/.
Sounds like you can use SQLAlchemy to persist these objects to sqlite3, possibly to an in-memory (:memory:) db, and issue both object-level and table-level (raw SQL) queries. You can also update/insert/delete them easily.
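A small sketch of that idea, with a hypothetical Entity class standing in for one of your containers' entity types:
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.orm import Session, declarative_base
Base = declarative_base()
class Entity(Base):
    __tablename__ = "entities"
    id = Column(Integer, primary_key=True)
    name = Column(String)
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Entity(name="example"))
    session.commit()
    # Object-level query...
    print(session.query(Entity).filter_by(name="example").all())
    # ...and a table-level (raw SQL) query against the same data.
    print(session.execute(text("SELECT * FROM entities")).all())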

What should I use for a Python program to do stuff with tables of data?

I want to be able to add daily info to each object, and I want the ability to easily delete info more than x days old. With the tables I need to look at the trends and do stuff like selecting objects which match some criteria.
Edit: I asked this because I can't think of a way to implement deleting old data easily, since you cannot delete tables in sqlite.
Using sqlite would be the best option: it is file based, easy to use, you can do lookups with SQL, and it's built into Python, so you don't need to install anything.
→ http://docs.python.org/library/sqlite3.html
If your question means that you are just going to be using "table like data" but not bound to a db, look into using this Python module: Module for table-like syntax
If you are going to be binding to a back end, and not distributing your data among computers, then SQLite is the way to go.
A "proper" database would probably be the way to go. If your application only runs on one computer and the database doesn't get to big, sqlite is good and easy to use with python (standard module sqlite3, see the Library Reference for more information)
Take a look at the sqlite3 module. It lets you create a single-file database (no server to set up) that will let you perform SQL queries. It's part of the standard library in Python, so you don't need to install anything additional.
http://docs.python.org/library/sqlite3.html
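A hedged sketch of the add-daily/delete-old pattern (table and column names are made up; note that removing old rows only takes a DELETE, not dropping tables):
import sqlite3
conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (object TEXT, value REAL, day DATE)")
# Add today's info for an object.
conn.execute("INSERT INTO readings VALUES (?, ?, DATE('now'))", ("sensor-1", 42.0))
# Delete info more than x days old (here x = 30).
conn.execute("DELETE FROM readings WHERE day < DATE('now', '-30 days')")
conn.commit()
conn.close()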
