SQL INSERT statement not displaying anything - Python

I'm trying to insert a value from a Python script. I'm not getting any errors, but the row isn't showing up in the database file, and I can't find any documentation that helps me understand why.
Here is the code:
from sqlite3 import connect

# Connect the db to this Python script
top10_connection = connect(database='top_ten.db')
# Get a cursor for the database, which allows you to build
# and execute SQL queries in this script
top_ten_db = top10_connection.cursor()
top_ten_db.execute("INSERT INTO Top_Ten (Rank) VALUES (1)")
top10_connection.close()

If you aren't getting any traceback errors, you are most likely forgetting to commit your changes to the database.
Add top10_connection.commit() before you close the connection:
from sqlite3 import connect

top10_connection = connect(database='top_ten.db')
# Get a cursor for the database, which allows you to build
# and execute SQL queries in this script
top_ten_db = top10_connection.cursor()
top_ten_db.execute("INSERT INTO Top_Ten (Rank) VALUES (1)")
top10_connection.commit()
top10_connection.close()
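As an aside, a minimal sketch (reusing the table and file name from the question) that makes forgetting commit() harder by using the connection as a context manager:
import sqlite3

top10_connection = sqlite3.connect('top_ten.db')
# Used as a context manager, the connection commits on a clean exit and
# rolls back if the block raises an exception; note that it does NOT
# close itself, so close() is still needed.
with top10_connection:
    top10_connection.execute("INSERT INTO Top_Ten (Rank) VALUES (1)")
top10_connection.close()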

Related

Read_sql returning results even though SQL table not present

I think I've lost my mind. I have created a Python script to read a temp table in SQL Server through SSMS. While testing, we found that Python is able to query and read the table even when it's no longer there/queryable in SSMS. I believe the DataFrame is being cached or something, but let me break the problem down into steps:
Starting point: the temp table is present in SSMS. MAIN_DF = pd.read_sql('SELECT statement', conn) stores the result in a DataFrame, which is saved to an Excel file (using ExcelWriter).
We delete the temp table in SQL Server, then run the Python script again. To double-check, we run THE SAME SELECT statement from the Python script in SSMS, and it displays 'Invalid object name', which is correct because the table has been dropped. BUT when I run the Python script again, it can still query the table and gets the same results as before! It should be throwing the same error as SSMS because the table isn't there. Why isn't Python starting from scratch when I run it? It seems to be holding information over from the initial run. How do I ensure I am starting from scratch every time?
I have tried many things, including starting the script with blank DataFrames so nothing should be held over (MAIN_DF = pd.DataFrame()) and deleting the DataFrames at the end (del MAIN_DF).
I don't understand what is happening.
import pyodbc
import pandas as pd
from datetime import datetime
try:
    conn = pyodbc.connect(r'Driver={SQL Server};Server=GenericServername;Database=testdb;Trusted_Connection=yes;')
    print('Connected to SQL: ' + str(datetime.now()))
    MAIN_DF = pd.read_sql('SELECT statement', conn)
    print('Queried Main DF: ' + str(datetime.now()))
except pyodbc.Error as e:
    print('Connection failed:', e)
It's because I didn't close the connection with conn.close(). The open connection kept the SQL Server session alive, and a temp table lives for the lifetime of the session that created it, so the table was never actually dropped and the query kept returning the old results.
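A sketch of the fix, reusing the connection string and placeholder query from the question: closing the connection ends the session, and the session-scoped temp table is dropped with it.
import pyodbc
import pandas as pd

conn = pyodbc.connect(r'Driver={SQL Server};Server=GenericServername;'
                      r'Database=testdb;Trusted_Connection=yes;')
try:
    MAIN_DF = pd.read_sql('SELECT statement', conn)  # placeholder query
finally:
    conn.close()  # ends the session; the temp table dies with it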

Setting file name as variable with pyodbc, then running program as executable in data storage program instead of through Python

I have created a program that pulls a specific data set from Microsoft Access, applies some small changes, and then creates a csv file.
However, the file name currently has to be typed directly into the code. When I try to create a variable and pass it where pyodbc asks for the file name, the program returns an error.
import pyodbc
import pandas as pd

# conn is the pyodbc connection created earlier in the program
sql_query = pd.read_sql_query('SELECT * FROM BMDL_SFAM_Final', conn)
df = pd.DataFrame(sql_query)
# Creating the cursor that allows us to select data with pyodbc
cur = conn.cursor()
cur.execute('SELECT * FROM BMDL_SFAM_Final')
My first question is: is there a way to set the file name as a variable at the start of the program, and then just call that variable each time the program asks for the file name? I have tried setting a variable and also tried using .format(), each time getting the same error:
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Microsoft Access Driver] Syntax error (missing operator) in query expression '* BDML_SFAM_Final'. (-3100) (SQLExecDirectW)")
My next step is to create an executable that can be called from the program where I store data, so I don't have to open my Python program to input the file name. Is there a way to have the file name selected when I select the file in the data storage software? The data storage software is called Element.
I found an answer to my own question.
You set your variable to the value you need for your database, query, etc.
Then, when you use the variable in the connection string or the query string, you close the string with a quotation mark and splice the variable in with plus signs, for example: '+database+'.
My code looks like this:
import pyodbc
import pandas as pd

database = r"C:\Element\Temp\outCCT.mdb"
query = "BMDL_SFAM_Final"
conn_str = (r'Driver={Microsoft Access Driver (*.mdb)};'
            r'Server=(local);'
            r'DBQ=' + database + ';')
try:
    conn = pyodbc.connect(conn_str)
except pyodbc.Error as e:
    print("Error in Connection", e)
# Reading the query and putting it into a pandas DataFrame
sql_query = pd.read_sql_query('SELECT * FROM ' + query, conn)
df = pd.DataFrame(sql_query)
I found this help on the Microsoft website:
https://learn.microsoft.com/en-us/sql/connect/python/pyodbc/step-3-proof-of-concept-connecting-to-sql-using-pyodbc?view=sql-server-ver15#next-steps
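As a side note, on Python 3.6+ an f-string reads a bit more cleanly than '+' concatenation. A sketch using the same path and table name (with the caveat that ODBC query parameters (?) only work for values, not for table or column names, so only substitute identifiers you control):
import pyodbc
import pandas as pd

database = r"C:\Element\Temp\outCCT.mdb"
query = "BMDL_SFAM_Final"

conn_str = (r"Driver={Microsoft Access Driver (*.mdb)};"
            rf"DBQ={database};")
conn = pyodbc.connect(conn_str)
# Only substitute table names you control; user input spliced into SQL
# this way is an injection risk.
df = pd.read_sql_query(f"SELECT * FROM {query}", conn)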

Azure Timer Function API Call to Azure SQL

I am VERY new to Azure and Azure functions, so be gentle. :-)
I am trying to write an Azure timer function (using Python) that will take the results returned from an API call and insert the results into a table in Azure SQL.
I am virtually clueless. If someone would be willing to handhold me through the process, it would be MOST appreciated.
I have the API call already written, so that part is done. What I totally don't get is how to get the results from what is returned into Azure SQL.
The result set I am returning is in the form of a Pandas dataframe.
Again, any and all assistance would be AMAZING!
Thanks!!!!
Here is an example that writes a pandas DataFrame to a SQL table:
import pyodbc
import pandas as pd

# Insert data from a csv file into a DataFrame.
# Working directory for the csv file: type "pwd" in Azure Data Studio or Linux;
# in Windows it is c:\users\username
df = pd.read_csv(r"c:\users\username\department.csv")

# Some other example server values are
# server = 'localhost\sqlexpress' # for a named instance
# server = 'myserver,port' # to specify an alternate port
server = 'yourservername'
database = 'AdventureWorks'
username = 'username'
password = 'yourpassword'
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=' + server + ';DATABASE=' + database +
                      ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()

# Insert the DataFrame into SQL Server row by row:
for index, row in df.iterrows():
    cursor.execute("INSERT INTO HumanResources.DepartmentTest (DepartmentID, Name, GroupName) values(?,?,?)",
                   row.DepartmentID, row.Name, row.GroupName)
cnxn.commit()
cursor.close()
To make it work for your case you need to:
replace the read from the csv file with your API call, and
change the INSERT statement to match the structure of your SQL table.
For more details see: https://learn.microsoft.com/en-us/sql/machine-learning/data-exploration/python-dataframe-sql-server?view=sql-server-ver15
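Since the question's result set is already a pandas DataFrame, a shorter alternative is DataFrame.to_sql. A sketch assuming a SQLAlchemy engine and placeholder server, database, and credentials (this is not part of the linked walkthrough):
import pandas as pd
import sqlalchemy

# Placeholder connection details; the driver name must match an ODBC
# driver installed where the function runs.
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://username:yourpassword@yourservername/AdventureWorks"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

df = pd.DataFrame({"DepartmentID": [1], "Name": ["Sales"],
                   "GroupName": ["SalesGroup"]})
# Append the rows to the existing table instead of replacing it
df.to_sql("DepartmentTest", engine, schema="HumanResources",
          if_exists="append", index=False)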

Why is connecting MySQL with Python failing?

I have written the following short and simple script to connect to a MySQL database called mybase from Python. I have already populated the users table of the database with data in MySQL Workbench. The problem is that when I run the script, I see no results printed in the console, only my cmd window opening for a second and closing automatically. Could someone help me find out what I am doing wrong? This is my script:
import mysql.connector as mysql

db = mysql.connect(host='localhost',
                   database='mybase',
                   user='root',
                   password='xxx',
                   port=3306)
cursor = db.cursor()
q = "SELECT*FROM users"
cursor.execute(q)
for row in cursor.fetchall():
    print(row[0])
I appreciate any help you can provide!
Maybe it's a syntax error in your query. It should be written like this to work:
q = "SELECT * FROM users"
If that does not fix it, what I find helpful is to test queries first in a client against a local database and then copy them into my code.
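If the cmd window closes before you can read anything, a sketch like the following (same placeholder credentials, plus the corrected query; the try/except and the final input() are additions for debugging, not part of the original answer) surfaces any connector error and waits for a keypress before exiting:
import mysql.connector as mysql

try:
    db = mysql.connect(host='localhost', database='mybase',
                       user='root', password='xxx', port=3306)
    cursor = db.cursor()
    cursor.execute("SELECT * FROM users")
    for row in cursor.fetchall():
        print(row[0])
    db.close()
except mysql.Error as e:
    # Print the real error instead of letting the window vanish
    print("MySQL error:", e)
input("Press Enter to exit...")  # keep the cmd window open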

SQLite - why do I need the commit command?

I use Python (my IDE is PyCharm) and am new to SQLite. I read that I must use commit in order to save data or changes, otherwise none of them will be saved to the table. I used a simple piece of code to create a table in a database without calling commit, define the headers, and close the database file. Using DB Browser I then open the file and see it is updated with what I have just made. So my question is: why do I need the commit command?
import sqlite3
from sqlite3 import Error

# Connecting SQLite to the database
def create_connection(db_file):
    """ create a database connection to a SQLite database """
    try:
        # Creates or opens a file called mydb with a SQLite3 DB
        db = sqlite3.connect(db_file)
        # Get a cursor object
        cursor = db.cursor()
        # Check if table users does not exist and create it
        cursor.execute('''CREATE TABLE IF NOT EXISTS
            users(id INTEGER PRIMARY KEY, name TEXT, phone TEXT,
                  email TEXT unique, password TEXT)''')
    except Error as e:
        # Roll back any change if something goes wrong
        db.rollback()
        raise e
    finally:
        # Close the db connection
        db.close()

fname = "mydb.db"
create_connection(fname)
From the sqlite3 documentation on commit():
This method commits the current transaction. If you don't call this method, anything you did since the last call to commit() is not visible from other database connections. If you wonder why you don't see the data you've written to the database, please check you didn't forget to call this method.
https://docs.python.org/2/library/sqlite3.html
Kindly go through the documentation; you'll find the answer 90% of the time.
Apparently, from this link, by default SQLite itself is in auto-commit mode. The sqlite3 module only opens an implicit transaction before data-modifying statements (INSERT, UPDATE, DELETE, REPLACE), so a script that only runs a CREATE TABLE, like the one above, never opens a transaction and the change is written immediately; that is why the table appears without an explicit commit().
Thanks to Richard for pointing to this link.
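A small sketch of the difference, assuming the users table created above already exists in mydb.db:
import sqlite3

# Default mode: the INSERT opens an implicit transaction, so without
# commit() the row would be discarded when the connection closes.
db = sqlite3.connect("mydb.db")
db.execute("INSERT INTO users(name) VALUES ('alice')")
db.commit()  # required here
db.close()

# isolation_level=None puts the connection in autocommit mode: every
# statement takes effect immediately, like the CREATE TABLE above.
db = sqlite3.connect("mydb.db", isolation_level=None)
db.execute("INSERT INTO users(name) VALUES ('bob')")  # already saved
db.close()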
