I have a Python script I'm creating that will replace a set of SQL Server stored procedures to make the process more efficient. However, I have 20-30 queries I need to execute at different points. To keep the main script simpler, I organized them into a dictionary in a separate file and created a function to pull the query to be executed.
My question is: is there a better way to organize them? One idea I had was to put them into a table on the SQL Server. Is my current method best, or is there another, better method? Below is an example of what I'm doing now:
queryDict = {
    "dbQuery1": """
        TRUNCATE TABLE MyTable;
        INSERT MyTable (Column1, Column2)
        SELECT Col1, Col2 FROM myTable2;
    """,
    "dbQuery2": "SELECT MAX(val) FROM MyTable3;",
}

def queryRequest(query):
    return queryDict[query]
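For context, the lookup is then consumed roughly like this (the pyodbc driver and connection string are assumptions; any DB-API connection works the same way):
import pyodbc  # assumed driver for SQL Server; substitute whatever you already use

conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes")  # placeholder connection string
cursor = conn.cursor()
cursor.execute(queryRequest("dbQuery1"))  # pull the stored text and run it
conn.commit()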
I am calling on fake data within the SQL query, which I am using in place of real tables until I get access, so I can move on with my development. I am trying to run this query from Python, but every library I see seems to need to connect to a database and have a defined connection object. Since all my fake data is in the query itself, I am unsure how to call that SQL query from Python without that object. What do I need to do to make that work?
Also, I am pretty new to SQL, so I may be misunderstanding something here. Thank you.
Here is the fake SQL data:
WITH fake_data AS (
    SELECT
        f1.username,
        f1.date,
        f1.date1,
        f1.num,
        f1.num1,
        f1.num2
    FROM (VALUES
        ('user.name1', '06/15/20', '07/2/20',  '298', '0.17446838', '0.2541086'),
        ('user.name2', '05/5/19',  '03/4/20', '1401', '0.305338',   '0.40653'),
        ('user.name3', '10/24/16', '12/3/18',  '350', '0.09938',    '0.1463432')
    ) f1 (username, date, date1, num, num1, num2)
)
SELECT
    username,
    date,
    date1,
    num,
    num1,
    num2
FROM fake_data
ORDER BY username ASC;
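One way to run a query like this from Python without any real database is an in-memory SQLite database: sqlite3.connect(":memory:") gives you the connection object every library expects, but it touches no server or file. A minimal sketch, with the column list moved onto the CTE itself (SQLite's spelling of a VALUES table):
import sqlite3

conn = sqlite3.connect(":memory:")  # no server, no file, nothing to get access to

query = """
WITH fake_data (username, date, date1, num, num1, num2) AS (
    VALUES
        ('user.name1', '06/15/20', '07/2/20',  '298', '0.17446838', '0.2541086'),
        ('user.name2', '05/5/19',  '03/4/20', '1401', '0.305338',   '0.40653'),
        ('user.name3', '10/24/16', '12/3/18',  '350', '0.09938',    '0.1463432')
)
SELECT username, date, date1, num, num1, num2
FROM fake_data
ORDER BY username;
"""

for row in conn.execute(query):
    print(row)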
What is the best way to make raw SQL queries in Django?
I have to search a table for the mode of another table. I could not find a way to solve this in Django's ORM, so I turned to raw SQL queries.
Yet building all these very long queries in Python is very unreadable and does not feel like a proper way to do this. Is there a way to save these queries in a neat format, perhaps in the database?
I have to join three separate tables and compute the mode of a few columns on the last table. The queries are getting very long, and the code that builds them is becoming very hard to read. An example query would be:
SELECT * FROM "core_assembly" INNER JOIN (SELECT * FROM "core_taxonomy" INNER JOIN(SELECT "core_phenotypic"."taxonomy_id" , \
array_agg("core_phenotypic"."isolation_host_NCBI_tax_id") FILTER (WHERE "core_phenotypic"."isolation_host_NCBI_tax_id" IS NOT NULL) \
AS super_set_isolation_host_NCBI_tax_ids FROM core_phenotypic GROUP BY "core_phenotypic"."taxonomy_id") "mode_table" ON \
"core_taxonomy"."id"="mode_table"."taxonomy_id") "tax_mode" ON "core_assembly"."taxonomy_id"="tax_mode"."id" WHERE ( 404=ANY(super_set_isolation_host_NCBI_tax_ids));
where I would have a very big parse function to build all the WHERE clauses based on user input.
You can try this:
from django.db import connection
cursor = connection.cursor()
raw_query = "write your query here"
cursor.execute(raw_query)
You can also run raw queries for models, e.g. MyModel.objects.raw('my query').
See "Performing raw SQL queries" in the Django documentation for more.
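If the long WHERE clauses are built from user input, pass the values as parameters instead of formatting them into the string; a minimal sketch against one of the tables from the question (the filter value here is hypothetical):
from django.db import connection

host_id = 404  # hypothetical value taken from user input

with connection.cursor() as cursor:
    # %s placeholders are filled in by the database driver, not by Python string formatting
    cursor.execute(
        "SELECT id, taxonomy_id FROM core_assembly WHERE taxonomy_id = %s",
        [host_id],
    )
    rows = cursor.fetchall()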
I'm attempting to use Python with SQLAlchemy to download some data, create a temporary staging table on a Teradata server, then MERGE that table into another table I've created to permanently store the data. I'm using sql = sqlalchemy.text(merge) and td_engine.execute(sql), where merge is a string similar to the below:
MERGE INTO perm_table as p
USING temp_table as t
ON p.Id = t.Id
WHEN MATCHED THEN
UPDATE
SET col1 = t.col1,
col2 = t.col2,
...
col50 = t.col50
WHEN NOT MATCHED THEN
INSERT (col1,
col2,
...
col50)
VALUES (t.col1,
t.col2,
...
t.col50)
The script runs all the way to the end without error, and the SQL executes properly through Teradata Studio, but for some reason the table won't update when I execute it through SQLAlchemy. However, I've also run different SQL expressions, like the INSERT that populated perm_table, from the same Python script and they worked fine. Maybe there's something specific to the MERGE and SQLAlchemy combination?
Since you're using the engine directly, without using a transaction, you're probably (barring unseen configuration on your part) relying on SQLAlchemy's version of autocommit, which works by detecting data changing operations such as INSERTs etc. Possibly MERGE is not one of the detected operations. Try
sql = sqlalchemy.text(merge).execution_options(autocommit=True)
td_engine.execute(sql)
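If that doesn't help, an explicit transaction removes the guesswork about which statements autocommit detects; a sketch using the engine's begin() context manager, which commits when the block exits cleanly:
import sqlalchemy

with td_engine.begin() as conn:
    # committed automatically when the block exits without an exception
    conn.execute(sqlalchemy.text(merge))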
I would like to know how to read a CSV file using SQL. I would like to use GROUP BY and join other CSV files together. How would I go about this in Python?
example:
select * from csvfile.csv where name LIKE 'name%'
SQL code is executed by a database engine. Python does not directly understand or execute SQL statements.
While some SQL databases store their data in CSV-like files, almost all of them use more complicated file structures. Therefore, you're required to import each CSV file into a separate table in the SQL database engine. You can then use Python to connect to the SQL engine and send it SQL statements (such as SELECT). The engine will perform the SQL, extract the results from its data files, and return them to your Python program.
The most common lightweight engine is SQLite.
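A minimal sketch of that import-then-query workflow with the standard library, using an in-memory SQLite database (the file name and column names are made up for illustration):
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, city TEXT)")

# load the CSV into the table (assumes a header row with these column names)
with open("csvfile.csv", newline="") as f:
    rows = [(r["name"], r["city"]) for r in csv.DictReader(f)]
conn.executemany("INSERT INTO people (name, city) VALUES (?, ?)", rows)

# ordinary SQL now works, including GROUP BY and joins against other imported files
for row in conn.execute("SELECT * FROM people WHERE name LIKE 'name%'"):
    print(row)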
littletable is a Python module I wrote for working with lists of objects as if they were database tables, but using a relational-like API, not actual SQL select statements. Tables in littletable can easily read and write from CSV files. One of the features I especially like is that every query from a littletable Table returns a new Table, so you don't have to learn different interfaces for Table vs. RecordSet, for instance. Tables are iterable like lists, but they can also be selected, indexed, joined, and pivoted - see the opening page of the docs.
# print a particular customer name
# (unique indexes will return a single item; non-unique
# indexes will return a Table of all matching items)
print(customers.by.id["0030"].name)
print(len(customers.by.zipcode["12345"]))
# print all items sold by the pound
for item in catalog.where(unitofmeas="LB"):
    print(item.sku, item.descr)
# print all items that cost more than 10
for item in catalog.where(lambda o: o.unitprice > 10):
    print(item.sku, item.descr, item.unitprice)
# join tables to create queryable wishlists collection
wishlists = customers.join_on("id") + wishitems.join_on("custid") + catalog.join_on("sku")
# print all wishlist items with price > 10
bigticketitems = wishlists().where(lambda ob : ob.unitprice > 10)
for item in bigticketitems:
    print(item)
Columns of Tables are inferred from the attributes of the objects added to the table. namedtuples work well too, as do types.SimpleNamespace instances. You can insert dicts into a Table, and they will be converted to SimpleNamespaces.
littletable takes a little getting used to, but it sounds like you are already thinking along a similar line.
You can easily query an SQL database using a PHP script. PHP runs server-side, so all your code will have to be on a webserver (the one with the database). You could make a function to connect to the database like this:
$con = mysqli_connect($hostname, $username, $password)
    or die("An error has occurred");
Then use the $con to accomplish other tasks such as looping through data and creating a table, or even adding rows and columns to an existing table.
EDIT: I noticed you said .CSV file. You can upload a CSV file into an SQL database and create a table out of it. If you are using a control panel service such as phpMyAdmin, you can simply import the CSV file into your database through its import feature.
If you are looking for a free web host to test your SQL and PHP files on, check out x10 hosting.
I am about to write a python script to help me migrate data between different versions of the same application.
Before I get started, I would like to know if there is a script or module that does something similar, which I could either use directly or use as a starting point for rolling my own. The idea is to diff the data between specific tables, and then to store the diff as SQL INSERT statements to be applied to the earlier-version database.
Note: This script is not robust in the face of schema changes
Generally the logic would be something along the lines of
def diff_table(table1, table2):
    # return all rows in table2 that are not in table1
    pass

def persist_rows_tofile(rows, tablename):
    # save rows to file
    pass

dbnames = ('db.v1', 'db.v2')
tables_to_process = ('foo', 'foobar')
for table in tables_to_process:
    table1 = dbnames[0] + '.' + table
    table2 = dbnames[1] + '.' + table
    rows = diff_table(table1, table2)
    if len(rows):
        persist_rows_tofile(rows, table)
Is this a good way to write such a script, or could it be improved? I suspect it could be improved by caching database connections, etc. (which I have left out because I am not too familiar with SQLAlchemy).
Any tips on how to add SQLAlchemy and generally improve such a script?
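For the SQLAlchemy part, one way to fill in diff_table is to reflect both versions of the table and let the database compute the difference with EXCEPT. This is only a sketch: the connection URL is a placeholder, and it assumes both versions share the same columns and live as separate schemas reachable from one connection.
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:pass@localhost/mydb")  # placeholder URL
meta = sa.MetaData()

def diff_table(schema_old, schema_new, table):
    # reflect both versions of the table, then return rows that exist only in the
    # newer schema (requires a database that supports EXCEPT)
    t_old = sa.Table(table, meta, schema=schema_old, autoload_with=engine)
    t_new = sa.Table(table, meta, schema=schema_new, autoload_with=engine)
    stmt = sa.select(t_new).except_(sa.select(t_old))
    with engine.connect() as conn:
        return conn.execute(stmt).fetchall()

for table in ('foo', 'foobar'):
    rows = diff_table('db.v1', 'db.v2', table)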
To move data between two databases I use pg_comparator. It's like diff and patch for SQL! You can use it to swap the order of columns, but if you need to split or merge columns you need to use something else.
I also use it to duplicate a database asynchronously. A cron job runs every five minutes and pushes all changes on the "master" database to the "slave" databases. It is especially handy if you only need to distribute a single table, or not all columns per table, etc.