I am using a big nested SQL query in Python and need to run that query using different dates (date is a field name used in the query).
I also need to change the table name (assume that there are various tables where I need to update the date or insert new records).
This is getting a bit clunky, so I want to move the query into a config (or .ini) file, partly so that a user can easily change the query without opening the code.
I am able to read the SQL from the file, but Python doesn't substitute the variables inside the query string.
For example, in the .ini file the SQL is stored under [SQL]:
p_insert_query = insert into + tbl_p + <...nested sql>
I read this in Python, and tbl_p is already defined as 'My_tbl' there, but the query string does not pick up the table name.
Is there any other way to do this?
You could store a .sql or .txt file containing a "parameterized query".
If you use the psycopg2 library you can do it this way (as stated in the docs: http://initd.org/psycopg/docs/sql.html):
import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=mydb")  # adjust connection details as needed
cur = conn.cursor()
my_query_text = "insert into {} values (%s, %s)"  # just load this str from a .sql or .txt file instead
tbl_p = "my_table_name"
cur.execute(sql.SQL(my_query_text).format(sql.Identifier(tbl_p)), [10, 20])  # [10, 20] are sample values
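Since the question mentions an .ini file, loading the query text with Python's built-in configparser before formatting it as above might look like this (a minimal sketch; the file name query.ini is an assumption, while the [SQL] section and p_insert_query key are taken from the question's example):
import configparser

# disable interpolation so the literal %s placeholders survive in the value
config = configparser.ConfigParser(interpolation=None)
config.read("query.ini")  # assumed to contain a [SQL] section with p_insert_query = insert into {} values (%s, %s)
my_query_text = config["SQL"]["p_insert_query"]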
I have a JSON file. How do I store the JSON file in MS SQL, and read the file data back after storing it in the database?
I'm using a Python script to interact with SQL Server.
Note: I don't want to store key-value pairs as individual records in the DB; I want to store the whole file in the DB using Python.
There is no specific data type for JSON in SQL Server, unlike, say, XML, which has the xml data type.
If you are storing JSON data in SQL Server, however, then you will want to use nvarchar(MAX). If you are on SQL Server 2016+, I also recommend adding a CHECK CONSTRAINT to the column to ensure that the JSON is valid, as otherwise parsing it (in SQL) will be impossible. You can check whether a value is valid JSON using ISJSON. For example, if you were adding the column to an existing table:
ALTER TABLE dbo.YourTable ADD YourJSON nvarchar(MAX) NULL;
GO
ALTER TABLE dbo.YourTable ADD CONSTRAINT chk_YourTable_ValidJSON CHECK (ISJSON(YourJSON) = 1 OR YourJSON IS NULL);
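Since you are working from a Python script, writing the whole file into that column with pyodbc might look like this (a minimal sketch; the connection string, driver name, and id column are assumptions for illustration):
import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=mydb;Trusted_Connection=yes"  # adjust for your environment
)
cursor = conn.cursor()

# round-tripping through json.load/json.dumps also validates the file
# before it ever reaches the database
with open("data.json", encoding="utf-8") as f:
    doc = json.dumps(json.load(f))

cursor.execute("UPDATE dbo.YourTable SET YourJSON = ? WHERE id = ?", doc, 1)
conn.commit()

# reading it back is just a SELECT on the same column
row = cursor.execute("SELECT YourJSON FROM dbo.YourTable WHERE id = ?", 1).fetchone()
data = json.loads(row.YourJSON)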
SQL Server does not have a dedicated JSON data type, so you can just store it as a string with VARCHAR or TEXT.
This article reckons NVARCHAR(MAX) is the answer for documents greater than 8 KB; for documents under that, you can use NVARCHAR(4000), which apparently has better performance.
I have a 250GB sqlite database file on an SSD drive and need to search through this file and search for a specific value in a table.
I wrote a script to perform the lookup in Python; here is an SQL statement similar to the one I wrote:
SELECT table FROM database WHERE table like X'003485FAd480'.
I am looking to compare hex values stored in a table against a given hex value. I am using the Anaconda command prompt and am not sure if this is the best route.
My question is about possible recommendations or tools to help speed up the lookup?
Thanks!
LIKE converts both operands into strings, so it might not work correctly if a value contains zero bytes or bytes that are not valid in the UTF-8 encoding.
To compare for equality, use =:
SELECT ... FROM MyTable WHERE MyColumn = x'003485FAD480';
This search can be sped up with an index on the lookup column; if you do not already have a primary key or unique constraint on this column, you can create an index manually:
CREATE INDEX MyLittleIndex ON MyTable(MyColumn);
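From Python, the same equality lookup with the built-in sqlite3 module looks something like this (a minimal sketch; the file, table, and column names are placeholders):
import sqlite3

conn = sqlite3.connect("huge.db")  # path to your 250GB database file
cur = conn.cursor()

# bytes.fromhex produces the same BLOB value as the x'003485FAD480' literal,
# and the ? placeholder binds it without any string conversion
needle = bytes.fromhex("003485FAD480")
cur.execute("SELECT MyColumn FROM MyTable WHERE MyColumn = ?", (needle,))
rows = cur.fetchall()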
I don't know if this is what you're looking for, but you mentioned using Python. If you're searching for different values from Python, have you thought about writing two functions, one to search the database and one to compare the results and do something with them?
import pyodbc

def queryFunction():
    cnxn = pyodbc.connect('DRIVER={SQLite3 ODBC Driver};SERVER=localhost;DATABASE=test.db;Trusted_connection=yes')  # for production use only
    cursor = cnxn.cursor()
    cursor.execute("SELECT table FROM database")  # placeholder table/column names from the question
    for row in cursor.fetchall():
        yield str(row.table)

def compareFunction(row):
    search = '003485FAd480'
    if row == search:
        print('Yes')
    else:
        print('No')
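Wiring the two together is then just a loop over the generator:
for value in queryFunction():
    compareFunction(value)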
I would like to know how to read a CSV file using SQL. I would like to use GROUP BY and to join other CSV files together. How would I go about this in Python?
example:
select * from csvfile.csv where name LIKE 'name%'
SQL code is executed by a database engine. Python does not directly understand or execute SQL statements.
While some SQL databases store their data in CSV-like files, almost all of them use more complicated file structures. Therefore, you're required to import each CSV file into a separate table in the SQL database engine. You can then use Python to connect to the SQL engine and send it SQL statements (such as SELECT). The engine will perform the SQL, extract the results from its data files, and return them to your Python program.
The most common lightweight engine is SQLite.
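For example, a minimal sketch with the built-in sqlite3 and csv modules (the file name, column names, and query are assumptions for illustration):
import csv
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path instead to persist the data
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, age INTEGER)")

# import the CSV file into the table
with open("csvfile.csv", newline="") as f:
    rows = [(r["name"], r["age"]) for r in csv.DictReader(f)]
cur.executemany("INSERT INTO people VALUES (?, ?)", rows)

# ordinary SQL now works, including GROUP BY and joins across imported tables
for row in cur.execute("SELECT * FROM people WHERE name LIKE 'name%'"):
    print(row)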
littletable is a Python module I wrote for working with lists of objects as if they were database tables, but using a relational-like API, not actual SQL select statements. Tables in littletable can easily read and write from CSV files. One of the features I especially like is that every query from a littletable Table returns a new Table, so you don't have to learn different interfaces for Table vs. RecordSet, for instance. Tables are iterable like lists, but they can also be selected, indexed, joined, and pivoted - see the opening page of the docs.
# print a particular customer name
# (unique indexes will return a single item; non-unique
# indexes will return a Table of all matching items)
print(customers.by.id["0030"].name)
print(len(customers.by.zipcode["12345"]))

# print all items sold by the pound
for item in catalog.where(unitofmeas="LB"):
    print(item.sku, item.descr)

# print all items that cost more than 10
for item in catalog.where(lambda o: o.unitprice > 10):
    print(item.sku, item.descr, item.unitprice)

# join tables to create queryable wishlists collection
wishlists = customers.join_on("id") + wishitems.join_on("custid") + catalog.join_on("sku")

# print all wishlist items with price > 10
bigticketitems = wishlists().where(lambda ob: ob.unitprice > 10)
for item in bigticketitems:
    print(item)
Columns of Tables are inferred from the attributes of the objects added to the table. namedtuples work well, as does types.SimpleNamespace. You can insert dicts into a Table, and they will be converted to SimpleNamespaces.
littletable takes a little getting used to, but it sounds like you are already thinking along a similar line.
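For instance, loading one of your CSV files might look something like this (a minimal sketch; the file and table names are assumptions, and csv_import is the CSV loader described in the littletable docs):
import littletable as lt

customers = lt.Table("customers")
customers.csv_import("customers.csv")  # each row becomes an attribute-accessible record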
You can easily query a SQL database using a PHP script. PHP runs server-side, so all your code will have to be on a webserver (the one with the database). You could make a function to connect to the database like this:
$con = mysql_connect($hostname, $username, $password)
    or die("An error has occurred");
Then use $con to accomplish other tasks such as looping through data and creating a table, or even adding rows and columns to an existing table.
EDIT: I noticed you said .CSV file. You can upload a CSV file into a SQL database and create a table out of it. If you are using a control panel service such as phpMyAdmin, you can simply import a CSV file into your database through its import screen.
If you are looking for a free web host to test your SQL and PHP files on, check out x10 hosting.
I'm trying to figure out if it's possible to replace record values in a Microsoft Access (either .accdb or .mdb) database using pyodbc. I've pored over the documentation and noted where it says that "Row Values Can Be Replaced", but I have not been able to make it work.
More specifically, I'm attempting to replace a row value from a python variable. I've tried:
setting the connection autocommit to "True"
made sure that it's not a data type issue
Here is a snippet of the code where I execute a SQL query, use fetchone() to grab just one record (I know the query only returns one record with this script), grab the existing value for a field (the field position integer is stored in the z variable), and then get the new value I want to write to the field from an existing Python dictionary created in the script.
pSQL = "SELECT * FROM %s WHERE %s = '%s'" % (reviewTBL, newID, basinID)
cursor.execute(pSQL)
record = cursor.fetchone()
if record:
oldVal = record[z]
val = codeCrosswalk[oldVal]
record[z] = val
I've tried everything I can think of but cannot get it to work. Am I just misunderstanding the help documentation?
The script runs successfully, but the newly assigned value never seems to commit. I even tried putting print str(record[z]) after the record[z] = val line to see if the field had the new value, and the new value printed as if it had worked...but if I check the table after the script has finished, the old values are still in the field.
Much appreciate any insight into this...I was hoping this would work like VBA in MS Access databases, where you can use an ADO Recordset to loop through records in a table and assign values to a field from a variable.
thanks,
Tom
The "Row values can be replaced" from the pyodbc documentation refers to the fact that you can modify the values on the returned row objects, for example to perform some cleanup or conversion before you start using them. It does not mean that these changes will automatically be persisted in the database. You will have to use sql UPDATE statements for that.
I am looking for a syntax definition, example, sample code, wiki, etc. for executing a LOAD DATA LOCAL INFILE command from Python.
I believe I can use mysqlimport as well if that is available, so any feedback (and code snippets) on which is the better route is welcome. A Google search is not turning up much in the way of current info.
The goal in either case is the same: Automate loading hundreds of files with a known naming convention & date structure, into a single MySQL table.
David
Well, using Python's MySQLdb, I use this:
connection = MySQLdb.Connect(host='**', user='**', passwd='**', db='**')
cursor = connection.cursor()
query = "LOAD DATA INFILE '/path/to/my/file' INTO TABLE sometable FIELDS TERMINATED BY ';' ENCLOSED BY '\"' ESCAPED BY '\\\\'"
cursor.execute( query )
connection.commit()
replacing the host/user/passwd/db as appropriate for your needs. This is based on the MySQL docs; the exact LOAD DATA INFILE statement will depend on your specific requirements (note that the FIELDS TERMINATED BY, ENCLOSED BY, and ESCAPED BY clauses will be specific to the type of file you are trying to read in).
You can also get the status information for the import by adding the following line after your query:
results = connection.info()
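To automate loading the hundreds of files mentioned in the question, you could loop over a file-name pattern; something like this (a minimal sketch; the path pattern and naming convention are assumptions, and note that LOCAL requires local_infile=1 on the connection):
import glob
import MySQLdb

# local_infile=1 allows LOAD DATA LOCAL INFILE from the client side
connection = MySQLdb.Connect(host='**', user='**', passwd='**', db='**', local_infile=1)
cursor = connection.cursor()

# hypothetical naming convention: one file per day, e.g. data_2009-01-31.csv
for path in sorted(glob.glob('/path/to/files/data_*.csv')):
    query = ("LOAD DATA LOCAL INFILE '%s' INTO TABLE sometable "
             "FIELDS TERMINATED BY ';' ENCLOSED BY '\"' ESCAPED BY '\\\\'" % path)
    cursor.execute(query)
    print(path, connection.info())
connection.commit()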