Hi, below is the code for what I am trying to do. The connection string is fine; when I run it, no error pops up, but it isn't updating the DB either.
Can you please help me identify the error?
I have two sheets inside the Excel file; the data is the same, except the second sheet has one extra column named gender.
A sample DataFrame that I want to insert into the DB is:
NAME AGE
0 x 21
1 y 22
2 z 23
3 m 24
4 t 35
import pandas as pd
import cx_Oracle
from sqlalchemy import create_engine

engine = create_engine('oracle+cx_oracle://<Connection Detail>')
DB_Connection = cx_Oracle.connect('<Connection Detail>')
mycursor = DB_Connection.cursor()
sample_excel = pd.ExcelFile(r'<file Path>')
sample_excel_sheet = sample_excel.sheet_names
for i in sample_excel_sheet:
    sheet_data = pd.read_excel(r'<file Path>', sheet_name=i)
    sheet_namee = i.upper()
    with engine.connect() as connection:
        sheet_data.to_sql(name='sheet_namee', con=engine, if_exists='replace')
DB_Connection.commit()
You only need to change
name='sheet_namee'
to
name=sheet_namee
since sheet_namee = i.upper() already holds the sheet name as a string. With the quotes, pandas writes every sheet to a single table literally named sheet_namee, each iteration replacing the previous one, instead of creating one table per sheet.
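For reference, a minimal sketch of the corrected loop (the connection string and file path stay as placeholders, as above; index=False is an optional extra that keeps the DataFrame index out of the table):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('oracle+cx_oracle://<Connection Detail>')
sample_excel = pd.ExcelFile(r'<file Path>')

for i in sample_excel.sheet_names:
    sheet_data = pd.read_excel(sample_excel, sheet_name=i)
    # Pass the variable, not a string literal, so each sheet gets its own table
    sheet_data.to_sql(name=i.upper(), con=engine, if_exists='replace', index=False)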
Related
Let's say I have two simple .csv files which I want to import into a SQLite database:
id  state
1   State1
2   State2
3   State3

id  district   values  state_fk
1   District1  123     1
2   District2  456     2
3   District3  789     3
I receive an updated version of the district file every two weeks. I simply read the CSV files and import/update them via df.to_sql('table', con, if_exists='replace').
Is there a way in pandas to set a PK-FK relation between states.id and districts.state_fk?
I would like to regularly update the district table with new values without having to set the relation manually again after each update.
I don't think it's possible to create foreign keys with pandas; that is not what the module is for.
Instead of replacing the database table each time, could you not first empty it using the sqlite3 module and then import your data with the if_exists='append' option? It would look something like this:
import sqlite3

con = sqlite3.connect('example.db')
cur = con.cursor()
cur.execute('''DELETE FROM "table"''')  # SQLite has no TRUNCATE; DELETE FROM empties the table
con.commit()
df.to_sql('table', con, if_exists='append')
con.close()
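If the foreign key matters, one workaround is to define the schema once with raw SQL and let pandas only append rows afterwards. A minimal sketch, assuming table names states and districts with the columns shown above, and a hypothetical districts.csv file (note that SQLite only enforces foreign keys when the pragma is enabled):

import sqlite3
import pandas as pd

con = sqlite3.connect('example.db')
con.execute('PRAGMA foreign_keys = ON')  # SQLite checks FKs only when this is enabled
con.execute('''CREATE TABLE IF NOT EXISTS states
               (id INTEGER PRIMARY KEY, state TEXT)''')
con.execute('''CREATE TABLE IF NOT EXISTS districts
               (id INTEGER PRIMARY KEY, district TEXT, "values" INTEGER,
                state_fk INTEGER REFERENCES states(id))''')
con.commit()

# Refresh the districts without dropping the table (dropping would lose the FK)
con.execute('DELETE FROM districts')
df = pd.read_csv('districts.csv')  # hypothetical file name
df.to_sql('districts', con, if_exists='append', index=False)
con.commit()
con.close()

Because if_exists='append' never recreates the table, the relation defined in the CREATE TABLE statement survives each refresh.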
I have written the following code, and I would like to ask the user how many new records they want and then fill in those records column by column.
import MySQLdb
import mysql.connector
mydb = mysql.connector.connect(
host="localhost",
user="root",
passwd="Adam!977",
database="testdb1"
)
cur = mydb.cursor()
get_tables_statement = """SHOW TABLES"""
cur.execute(get_tables_statement)
tables = cur.fetchall()
table = tables(gene)
x=input("How many records you desire: ")
x
print "Please enter the data you would like to insert into table %s" %(table)
columns = []
values = []
for j in xrange(0, len(gene)):
    column = gene[j][0]
    value = raw_input("Value to insert for column '%s'?" % (gene[j][0]))
    columns.append(str(column))
    values.append('"' + str(value) + '"')
columns = ','.join(columns)
values = ','.join(values)
print columns
print values
The error that I get is about the table gene (the table exists in the SQL database):
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\π.py", line 25, in
table = tables(gene)
NameError: name 'gene' is not defined
Also, I don't even know if the code is working properly. Please, I need help. Thank you.
The error being returned by Python is down to the lack of definition of a variable gene. In the following line you reference gene without it existing:
table = tables(gene)
In the documentation for the Python MySQL connector, under cursor.fetchall(), you'll notice that this method returns either a list of tuples or an empty list. It is therefore somewhat puzzling that you call tables as a function and attempt to pass a parameter to it; that is not correct syntax for accessing a list or a tuple.
At the beginning of your code example you fetch a list of all of the tables in your database, despite knowing that you only want to update a specific table. It would make more sense to simply reference the name of the table in your SQL query, rather than querying all of the tables that exist and then selecting one in Python. For example, the following query would give you 10 records from the table gene:
SELECT * FROM gene LIMIT 10
Below is an attempt to correct your code:
import mysql.connector
mydb = mysql.connector.connect(
host="localhost",
user="root",
passwd="Adam!977",
database="testdb1"
)
x=input("How many records you desire: ")
cur = mydb.cursor()
get_rows_statement = """SELECT * FROM gene"""
cur.execute(get_rows_statement)
results = cur.fetchall()
This should give you all of the rows within the table.
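To then collect the user's values and write them, a parameterized INSERT is safer than concatenating quoted strings. A sketch building on the connection above (Python 3 syntax; the gene table is taken from the question, and LIMIT 0 is just a cheap way to read the column names from the cursor metadata):

# Column names come from the cursor metadata of an empty result set
cur.execute("SELECT * FROM gene LIMIT 0")
columns = [desc[0] for desc in cur.description]
cur.fetchall()  # drain the (empty) result set

n = int(input("How many records do you desire: "))
placeholders = ", ".join(["%s"] * len(columns))
insert_sql = "INSERT INTO gene ({}) VALUES ({})".format(", ".join(columns), placeholders)

for _ in range(n):
    row = [input("Value to insert for column '{}'? ".format(col)) for col in columns]
    cur.execute(insert_sql, row)  # the driver escapes the values

mydb.commit()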
I have a SQL Server stored procedure that returns 3 separate tables.
How can I store each of these tables in a different DataFrame using pandas?
Something like:
df1 - first table
df2 - second table
df3 - third table
Where should I start looking?
Thank you
import pandas as pd
import pyodbc
from datetime import datetime
param = datetime(year=2019,month=7,day=31)
query = """EXECUTE [dbo].PythonTest_USIC_TreatyYear_ReportingPackage #AsOFDate = '{0}'""".format(param)
conn = pyodbc.connect('DRIVER={SQL Server};server=myserver;DATABASE=mydatabase;Trusted_Connection=yes;')
df = pd.read_sql_query(query, conn)
print(df.head())
You should be able to just iterate through the result sets, convert them to DataFrames, and append those DataFrames to a list. For example, given the stored procedure
CREATE PROCEDURE dbo.MultiResultSP
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT 1 AS [UserID], N'Gord' AS [UserName]
UNION ALL
SELECT 2 AS [UserID], N'Elaine' AS [UserName];
SELECT N'pi' AS [Constant], 3.14 AS [Value]
UNION ALL
SELECT N'sqrt_2' AS [Constant], 1.41 AS [Value]
END
the Python code would look something like this:
data_frames = []
crsr = conn.cursor()  # reuse the pyodbc connection from above
crsr.execute("EXEC dbo.MultiResultSP")
result = crsr.fetchall()
while result:
col_names = [x[0] for x in crsr.description]
data = [tuple(x) for x in result] # convert pyodbc.Row objects to tuples
data_frames.append(pd.DataFrame(data, columns=col_names))
if crsr.nextset():
result = crsr.fetchall()
else:
result = None
# check results
for df in data_frames:
print(df)
print()
""" console output:
UserID UserName
0 1 Gord
1 2 Elaine
Constant Value
0 pi 3.14
1 sqrt_2 1.41
"""
The code below works fine when I run it, but it sometimes returns an empty DataFrame. The table contains 2 million rows.
conn = sqlite3.connect("somedatabase.db")
exp = input("Enter Date")
df = pd.read_sql_query("SELECT SYMBOL,TIME_STAMP,OPTION_TYP,STRIKE_PR,OPEN_INT,CONTRACTS,EXPIRY_DT FROM datatable WHERE (SYMBOL =? AND EXPIRY_DT =?);", conn,params = {sym,exp})
print(df.head())
What can I do to rectify this?
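One thing worth noting about the snippet: params is passed as a set literal ({sym, exp}), which has no guaranteed order, and sym is never assigned before use, so the two ? placeholders can receive the values in either order. A sketch with an ordered list instead (same database and table assumed):

import sqlite3
import pandas as pd

conn = sqlite3.connect("somedatabase.db")
sym = input("Enter Symbol: ")
exp = input("Enter Date: ")
df = pd.read_sql_query(
    "SELECT SYMBOL, TIME_STAMP, OPTION_TYP, STRIKE_PR, OPEN_INT, CONTRACTS, EXPIRY_DT "
    "FROM datatable WHERE SYMBOL = ? AND EXPIRY_DT = ?;",
    conn, params=[sym, exp])  # an ordered sequence keeps values aligned with the placeholders
print(df.head())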
I have a SQLite database created with SQLAlchemy which has the format below
Code Names Amt1 Amt2 Amt3 Amt4
502 Real Management 11.4 1.3 - -
5TG 80Commercial 85.8 152 4.845 4.12%
AZG Equipment Love 11.6 117.1 - -
But when I try to read this into a pandas DataFrame using
pandas.read_sql('sqlite_table', con=engine)
it returns an error: ValueError: could not convert string to float: '-'
I get that pandas can't parse - as a number in the DataFrame, but how can I work around this? Is it possible to read - as 0 or something?
Updating all rows in Amt3 that contain - (assuming you have set up your login auths and defined a cursor) would look something like this:
cur.execute("UPDATE sqlite_table SET Amt3 = 0 WHERE Amt3 = '-'")
This seems to work fine for me, even with -. What is the type of your Atm3?
import pandas as pd
import sqlite3

con = sqlite3.connect(r"/Users/hugohonorem/Desktop/mytable.db")  # create/connect to the database
c = con.cursor()
c.execute('''CREATE TABLE mytable
             (Code text, Names text, Atm1 integer, Atm2 integer, Atm3 integer, Atm4 integer)''')
c.execute("INSERT INTO mytable VALUES ('502', 'Real Management', '11.4', '1.3', '-', '-')")
con.commit()
So we have replicated your table; we can now remove your -:
c.execute("UPDATE mytable SET Atm3 = NULL WHERE Atm3 = '-'") #set it to null
df = pd.read_sql("SELECT * from mytable", con)
print df
This will give us the output:
Code Names Atm1 Atm2 Atm3 Atm4
502 Real Management 11.4 1.3 None -
However, as you can see, I can still retrieve the table with Atm4 remaining -.
I know this question is old, but I ran into the same problem recently and this was the first result on Google. What solved my problem was simply changing pandas.read_sql('sqlite_table', con=engine) to the full query pandas.read_sql('SELECT * FROM sqlite_table', con=engine), as that seems to bypass the conversion attempt, and there was no need to edit the table.