Execute text in Postgresql database as Python code - python

If I have text that is saved in a Postgresql database is there any way to execute that text as Python code and potentially have it update the same database?

That sounds terrifying.
Yes, plpy
PL/Python is an extension for Postgres that lets you write functions in Python. You may have to install the extension from your package manager, or it may already have been bundled in when you installed Postgres (this depends on how you installed it; on Debian it is apt-get install postgresql-plpython-9.1).
To enable the extension in your database first use psql to run:
CREATE EXTENSION plpythonu;
Now you can specify functions with python so you could write a function to execute that code like:
CREATE FUNCTION eval_python(code text) RETURNS integer AS $$
# eval handles a single expression; use exec(code) instead if the stored
# text contains full statements
eval(code)
return 1
$$ LANGUAGE plpythonu;
And execute it for every code field in my_table like:
SELECT eval_python(code) FROM my_table;
Read the docs on PL/Python for more details on how to interact with the database from Python.
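For example, since the plpy module is automatically available inside any PL/Python function, a stored snippet can reach back into the same database when it runs. A minimal sketch of what such a stored code value might look like (the executed column is a hypothetical example):
# hypothetical text stored in my_table.code -- it runs inside the PL/Python
# interpreter, so it can call back into the database through plpy
plpy.execute("UPDATE my_table SET executed = true")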

let me see if I understand what you are trying to accomplish:
store ad-hoc user code in a varchar field on a database
read and execute said code
allow said code to affect the database in question, say drop table ...
Assuming that I've got it, you could write something that
reads the table holding the code (use pyodbc or something)
runs an eval on what was pulled from the db - this will let you execute ANY code, including self updating code
are you sure this is what you want to do?
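If you do go that route, here is a minimal sketch of the read-and-execute steps, assuming psycopg2 instead of pyodbc and reusing the my_table/code names from the first answer (exec is used rather than eval so that full statements work):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")   # hypothetical connection string
cur = conn.cursor()
cur.execute("SELECT code FROM my_table")
for (code,) in cur.fetchall():
    # exec will happily run anything, including statements that modify this very database
    exec(code)
conn.commit()
conn.close()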

Related

Insert row into database with unknown schema using Python module peewee

I am building a database interface using Python's peewee module. I am trying to figure out how to insert data into an existing database where I do not know the schema.
My idea is to use playhouse.reflection.Introspector to find out the database schema, then use that information to create class objects which can then be inserted into the existing database.
So far I've gotten to:
introspector = Introspector.from_database(database)
models = introspector.generate_models()
I don't know where to go from there.
1) Can I create database objects in this manner? What is the next step?
2) Is there an easier way to do this?
peewee includes an introspection tool called pwiz that can (basically) introspect a database and produce model definitions. It is run as a command-line script and dumps the model definitions to stdout, so invocation is like any other unix tool. Here is an example from the docs:
python -m pwiz -e postgresql my_postgres_db > mymodels.py
From there edit mymodels.py to get what you need.
You could do this on the fly, but it would require a few steps and is hackish (not to mention pointless if you really don't know anything about the schema):
Run pwiz as an os command
Read it to pick out the model names
Import whatever you find
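Alternatively, if you want to stay in code with the Introspector the question already uses, generate_models() hands back model classes you can insert with directly. A minimal sketch, where the connection details, the "user" table, and its columns are all assumptions:
from peewee import PostgresqlDatabase
from playhouse.reflection import Introspector

database = PostgresqlDatabase('my_postgres_db', user='postgres')  # hypothetical connection details
introspector = Introspector.from_database(database)
models = introspector.generate_models()   # dict mapping table names to generated Model classes

User = models['user']                     # assumes a table named "user" exists
User.create(username='alice', email='alice@example.com')   # column names are assumptions too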
BUT
If you really don't know the schema to start with then you have no idea what the semantics of the database are anyway, which means whatever you find is literally meaningless. Unless you at least know some schema/table/column names you are hunting for (in which case you do know something about the schema) there isn't really much you can do with regard to inserting data (not in a sane way), though you could certainly dump data from the db. But if you just wanted a database dump then pg_dump would have been easier.
I suspect this is actually an X-Y problem. What problem is it you are trying to solve by using this technique? What effect is it supposed to achieve within the context of your system?
If you want to create a GUI, check out the sqlite_web project. It uses Peewee to create a web-based SQLite database manager.

sqlite3 insert using python and python cgi

In db.py I have a function (insert) that inserts data into SQLite correctly.
Now I want to insert data into SQLite through python-fastcgi; in the FastCGI script (named post.py) I can get the request data correctly, but when I call db.insert it gives me an internal server error.
I already did chmod 777 sqlite.db. Does anyone know what the problem is?
Finally I found the answer:
The sqlite3 library needs write permission on the directory that contains the database as well, probably because it needs to create a lock file.
That is why inserting data from the command line worked fine, but doing it through the web (CGI, FastCGI, etc.) produced an error.
Just add write permission to the directory.
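A quick way to verify this from the script itself is to check both the file and its directory for write access. A minimal sketch (the path is a hypothetical example, and os.access reports on whatever user the script runs as, so run it as the web server user):
import os

db_path = '/var/www/data/sqlite.db'     # hypothetical location of the database file
db_dir = os.path.dirname(db_path)

# sqlite3 creates a journal/lock file next to the database,
# so the directory must be writable as well as the file
print('database writable: ', os.access(db_path, os.W_OK))
print('directory writable:', os.access(db_dir, os.W_OK))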

Importing postgres data to mysql

I have a requirement where I need to insert the Postgres data into MySQL. Suppose I have a user table in Postgres, and a user table in MySQL as well. I tried to do something like this:
gts = 'cd ' + js_browse[0].js_path   # js_path holds the folder name, e.g. /usr/local/myfolder_name
os.system(gts)
gts_home = 'export GTS_HOME=' + js_browse[0].js_path
os.system(gts_home)
tt = gts + ' && sh bin/admin.sh User --input-dir /tmp/import'
# /tmp/import holds the exported postgres user table data
# bin is the folder inside myfolder_name
In mysql if I use the command it works perfectly fine:
cd /usr/local/myfolder_name
bin/admin.sh User -account=1 user=hamid -create
I am unable to store data inside mysql this way. Any help shall be appreciated.
You don't really give us much information. And why would you go from Postgres to MySQL?
But you can use one of these tools - I have seen people speak well of them:
pg2mysql or pgs2sql
Hope it works out.
PostgreSQL provides the ability to dump data in CSV format using the COPY command.
The easiest path for you will be to spend the time once to copy the schema objects from PostgreSQL to MySQL; you can use pg_dump -s for this on the PostgreSQL side. IMHO, properly moving the schemas will be the biggest challenge.
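For the export side, COPY can be driven from Python as well; a minimal sketch with psycopg2, where the connection details and the user table name are assumptions:
import psycopg2

conn = psycopg2.connect(dbname='mydb', user='postgres')   # hypothetical connection details
with open('/tmp/import/user.csv', 'w') as f:
    cur = conn.cursor()
    # COPY ... TO STDOUT streams the table contents as CSV straight into the file
    cur.copy_expert('COPY "user" TO STDOUT WITH CSV HEADER', f)
conn.close()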
Then you should import the CSV-formatted data dumps into MySQL; check this for reference. Scrolling down to the comments you'll find recipes for Windows as well. Something like this should do the trick (adjust parameters accordingly):
LOAD DATA LOCAL INFILE 'C:\\test.csv'
INTO TABLE tbl_temp_data
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

how to generate various database dumps

I have a CSV file and want to generate dumps of the data for sqlite, mysql, postgres, oracle, and mssql.
Is there a common API (ideally Python based) to do this?
I could use an ORM to insert the data into each database and then export dumps, however that would require installing each database. It also seems a waste of resources - these CSV files are BIG.
I am wary of trying to craft the SQL myself because of the variations with each database. Ideally someone has already done this hard work, but I haven't found it yet.
SQLAlchemy is a database library that (as well as ORM functionality) supports SQL generation in the dialects of all the different databases you mention (and more).
In normal use, you could create a SQL expression / instruction (using a schema.Table object), create a database engine, and then bind the instruction to the engine, to generate the SQL.
However, the engine is not strictly necessary; the dialects each have a compiler that can generate the SQL without a connection; the only caveat being that you need to stop it from generating bind parameters as it does by default:
from sqlalchemy.sql import expression, compiler
from sqlalchemy import schema, types
import csv

# example for mssql
from sqlalchemy.dialects.mssql import base
dialect = base.dialect()
compiler_cls = dialect.statement_compiler

class NonBindingSQLCompiler(compiler_cls):
    def _create_crud_bind_param(self, col, value, required=False):
        # don't create a bind parameter; render the literal value instead
        return self.render_literal_value(value, col.type)

recipe_table = schema.Table(
    "recipe", schema.MetaData(),
    schema.Column("name", types.String(50), primary_key=True),
    schema.Column("culture", types.String(50)))

for row in [{"name": "fudge", "culture": "america"}]:  # csv.DictReader(open("x.csv", "r")):
    insert = expression.insert(recipe_table, row, inline=True)
    c = NonBindingSQLCompiler(dialect, insert)
    c.compile()
    sql = str(c)
    print(sql)
The above example actually works; it assumes you know the target database table schema; it should be easily adaptable to import from a CSV and generate SQL for multiple target database dialects.
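On newer SQLAlchemy versions (0.9 and later, if I recall correctly) you may not need the compiler subclass at all; the literal_binds compile option renders the values inline for you. A minimal sketch of that variant:
from sqlalchemy import MetaData, Table, Column, String
from sqlalchemy.dialects.mssql import base

recipe_table = Table("recipe", MetaData(),
                     Column("name", String(50), primary_key=True),
                     Column("culture", String(50)))

stmt = recipe_table.insert().values(name="fudge", culture="america")
# literal_binds inlines the values instead of emitting bind parameters
print(stmt.compile(dialect=base.dialect(),
                   compile_kwargs={"literal_binds": True}))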
I am no database wizard, but AFAIK there is no common API in Python that does out of the box what you ask for. There is PEP 249, which defines an API that should be used by modules accessing databases and which AFAIK is used at least by the MySQL and PostgreSQL Python modules (here and here); that could perhaps be a starting point.
The road I would attempt to follow myself - however - would be another one:
Import the CSV into MySQL (this is just because MySQL is the one I know best and there is tons of material on the net, as for example this very easy recipe, but you could do the same procedure starting from another database).
Generate the MySQL dump.
Process the MySQL dump file in order to modify it to meet SQLite (and others) syntax.
The scripts for processing the dump file could be very compact, although they might be somewhat tricky if you use regexes to parse the lines. Here's an example script, MySQL → SQLite, that I simply pasted from this page:
#!/bin/sh
mysqldump --compact --compatible=ansi --default-character-set=binary mydbname |
grep -v ' KEY "' |
grep -v ' UNIQUE KEY "' |
perl -e 'local $/;$_=<>;s/,\n\)/\n\)/gs;print "begin;\n";print;print "commit;\n"' |
perl -pe '
if (/^(INSERT.+?)\(/) {
$a=$1;
s/\\'\''/'\'\''/g;
s/\\n/\n/g;
s/\),\(/\);\n$a\(/g;
}
' |
sqlite3 output.db
You could write your script in Python (in which case you should have a look at re.compile for performance).
The rationale behind my choice would be:
I get the heavy lifting [importing, and therefore data-consistency checks, plus generating the starting SQL file] done for me by MySQL.
I only have to have one database installed.
I have full control on what is happening and the possibility to fine-tune the process.
I can structure my script in such a way that it will be very easy to extend it for other databases (basically I would structure it like a parser that recognises individual fields + a set of grammars - one for each database - that I can select via command-line option)
There is much more documentation on the differences between SQL flavours than on single DB import/export libraries.
EDIT: A template-based approach
If for any reason you don't feel confident enough to write the SQL yourself, you could use a sort of template-based script. Here's how I would do it:
Import the data and generate a dump of the table in each of the DBs you are planning to use.
For each DB, save the initial part of the dump (with the schema declaration and all the rest) and a single insert instruction.
Write a python script that - for each DB export - will output the "header" of the dump plus that same "saved line", into which you programmatically substitute the values from each line of your CSV file.
The obvious drawback of this approach is that your "template" will only work for one table. Its strongest point is that writing such a script would be extremely easy and quick.
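A minimal sketch of such a template script, where the header, the insert template, and the recipe columns are all hypothetical stand-ins for whatever your own dumps contain:
import csv

# hypothetical header and "saved line" captured from a dump of one target database;
# repeat with the matching header/template for each of the other databases
header = 'CREATE TABLE recipe (name VARCHAR(50), culture VARCHAR(50));\n'
insert_template = "INSERT INTO recipe (name, culture) VALUES ('%s', '%s');\n"

def quote(value):
    # naive escaping -- fine for a throwaway dump script, not for untrusted input
    return str(value).replace("'", "''")

with open('recipes.csv') as src, open('recipes_sqlite.sql', 'w') as out:
    out.write(header)
    for row in csv.DictReader(src):
        out.write(insert_template % (quote(row['name']), quote(row['culture'])))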
HTH at least a bit!
You could do this - Create SQL tables from CSV files
or Generate Insert Statements from CSV file
or try this Generate .sql from .csv python
Of course you might need to tweak the scripts mentioned to suit your needs.

Using SQLite in a Python program

I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production.
What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better).
I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn).
Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler.
Do the following.
Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that.
CREATE TABLE REVISION(
RELEASE_NUMBER CHAR(20)
);
In your application, connect to your database normally.
Execute a simple query against the revision table. Here's what can happen.
The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it.
The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this.
The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
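A minimal sketch of that check with sqlite3, where the release number, the build_schema helper, and the migrate helper are all hypothetical placeholders for your own CREATE/ALTER scripts:
import sqlite3

EXPECTED_RELEASE = '2'      # hypothetical current schema release number

def build_schema(con):
    # hypothetical: the full series of CREATE statements would go here
    con.execute('CREATE TABLE REVISION (RELEASE_NUMBER CHAR(20))')
    con.execute('INSERT INTO REVISION VALUES (?)', (EXPECTED_RELEASE,))

def migrate(con, found):
    # hypothetical: DROP/CREATE/ALTER statements to move from `found` to the current release
    pass

con = sqlite3.connect('app.db')
try:
    row = con.execute('SELECT RELEASE_NUMBER FROM REVISION').fetchone()
except sqlite3.OperationalError:
    build_schema(con)                        # query failed: the database doesn't exist yet
else:
    if row is None or row[0] < EXPECTED_RELEASE:
        migrate(con, row)                    # database exists but is out of date
con.commit()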
AFAIK an SQLite database is just a file.
To check if the database exists, check for the file's existence.
When you open an SQLite database it will automatically create one if the backing file is not in place.
If you try and open a file as a sqlite3 database that is NOT a database, you will get this:
"sqlite3.DatabaseError: file is encrypted or is not a database"
so check to see if the file exists and also make sure to try and catch the exception in case the file is not a sqlite3 database
SQLite automatically creates the database file the first time you try to use it. The SQL statements for creating tables can use IF NOT EXISTS to make the commands only take effect if the table has not been created yet. This way you don't need to check for the database's existence beforehand: SQLite can take care of that for you.
The main thing I would still be worried about is that executing CREATE TABLE IF NOT EXISTS for every web transaction (say) would be inefficient; you can avoid that by having the program keep an (in-memory) flag recording whether it has already created the tables, so it runs the CREATE TABLE script only once per run. This would still allow you to delete the database and start over during debugging.
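A minimal sketch of that pattern (the stocks table is just an example, matching the one used in the answer below):
import sqlite3

con = sqlite3.connect('stocks.db')     # creates the file on first use
# IF NOT EXISTS makes this safe to run on every startup
con.execute('''CREATE TABLE IF NOT EXISTS stocks
               (date TEXT, trans TEXT, symbol TEXT, qty REAL, price REAL)''')
con.commit()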
As #diciu pointed out, the database file will be created by sqlite3.connect.
If you want to take a special action when the file is not there, you'll have to explicitly check for its existence:
import os
import sqlite3
if not os.path.exists(mydb_path):
    # create new DB, create table stocks
    con = sqlite3.connect(mydb_path)
    con.execute('''create table stocks
                   (date text, trans text, symbol text, qty real, price real)''')
else:
    # use existing DB
    con = sqlite3.connect(mydb_path)
...
SQLite doesn't throw an exception if you create a new database with the same name, it will just connect to it. Since SQLite is a file-based database, I suggest you just check for the existence of the file.
About your second problem, to check whether a table has already been created, just catch the exception. An exception "sqlite3.OperationalError: table TEST already exists" is thrown if the table already exists.
import sqlite3
import os

database_name = "newdb.db"
if os.path.isfile(database_name):
    print("the database already exists")

db_connection = sqlite3.connect(database_name)
db_cursor = db_connection.cursor()
try:
    db_cursor.execute('CREATE TABLE TEST (a INTEGER);')
except sqlite3.OperationalError as msg:
    print(msg)
Writing raw SQL is painful in every language I've picked up. SQLAlchemy has turned out to be the easiest to use, because querying and committing with it are so clean and trouble-free.
Here are some basic steps for actually using SQLAlchemy in your app; better details can be found in the documentation.
provide table definitions and create ORM-mappings
load database
ask it to create tables from the definitions (won't do so if they exist)
create session maker (optional)
create session
After creating a session, you can commit and query from the database.
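A minimal sketch of those steps, assuming SQLAlchemy 1.4+ and a hypothetical stocks table:
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Stock(Base):                               # table definition / ORM mapping (hypothetical)
    __tablename__ = 'stocks'
    id = Column(Integer, primary_key=True)
    symbol = Column(String)

engine = create_engine('sqlite:///stocks.db')    # load (or create) the database file
Base.metadata.create_all(engine)                 # create tables from the definitions (skips existing ones)
Session = sessionmaker(bind=engine)              # create the session maker

session = Session()                              # create a session
session.add(Stock(symbol='GOOG'))
session.commit()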
See this solution at SourceForge which covers your question in a tutorial manner, with instructive source code :
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
Yes, I was overcomplicating the problem. All I needed to do was check for the file and catch the IOError if it didn't exist.
Thanks for all the other answers. They may come in handy in the future.
