Trying to DELETE FROM using SQLAlchemy. Error is sqlalchemy.exc.ArgumentError: subject table for an INSERT, UPDATE or DELETE expected, got 'organization'.
organization is the name of the table I am trying to delete from.
This is for a utility that takes a list of tables from one database instance and copies the rows to another database instance, but I want the target table to be empty first. The logged-in user is not an admin, so it can't run a TRUNCATE. If there is a better way, I'm open. Thanks.
Calling code stub:
for item in table_list:
    process_tbl(engine_in, engine_out, item["table"])
called function:
def process_tbl(engine_in, engine_out, table):
    pd = pandas.read_sql(table, engine_in)
    print(pd.columns)
    stmt = delete(table)
    print(stmt)
    rows = pd.to_sql(table, engine_out, if_exists="append", index=False)
    print("Table: " + table + " inserted rows: " + str(rows))
Well, I could not get the delete(table) call to work, but because it was a simple delete statement, I was able to use:
command = "DELETE FROM " + table
engine_out.execute(command)
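For reference, the original error happens because delete() expects a Table object, not a string. A minimal sketch (assuming SQLAlchemy 1.4+, with an in-memory SQLite database and a demo `organization` table standing in for the real target) that reflects the table by name and then deletes from it:

```python
from sqlalchemy import MetaData, Table, create_engine, delete

# Hypothetical target engine; substitute your real connection URL.
engine_out = create_engine("sqlite://")

# Demo table so the sketch runs end to end.
with engine_out.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE organization (id INTEGER)")
    conn.exec_driver_sql("INSERT INTO organization VALUES (1), (2)")

# delete() needs a Table object, so reflect the table by its string name.
tbl = Table("organization", MetaData(), autoload_with=engine_out)
with engine_out.begin() as conn:
    result = conn.execute(delete(tbl))

print(result.rowcount)  # number of rows removed
```

Like the raw "DELETE FROM" string, this is a plain DELETE rather than TRUNCATE, so it works for a non-admin user.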
I created the database and table like this:
# Amazon Timestream
self.database = timestream.CfnDatabase(scope=self, id="MyDatabase")
self.table = timestream.CfnTable(
    scope=self,
    id="MyTable",
    database_name=self.database.ref,
)
But I cannot build a valid query string, because I cannot get the correct table name.
query = "".join(
    (
        "SELECT * ",
        f'FROM "DATABASE.TABLE" ',
    )
)
If I use self.database.ref it gives me the correct database name. So far so good.
But how do I get the correct table name?
What I tried so far:
- Giving the table a name.
- Using self.table.ref, but it gives me "DATABASE|TABLE".
- f'"{self.table.ref.replace("|", '"."')}"', but the nested quotes don't parse.
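One workaround, assuming the behavior shown above (the Ref of a Timestream CfnTable resolves to the database name and table name joined by "|"), is to split the ref and quote the two parts separately:

```python
# Hypothetical value: stands in for self.table.ref, which the question
# shows coming back as "DATABASE|TABLE".
table_ref = "MyDatabase|MyTable"

# Split on the "|" separator and quote each identifier for Timestream SQL.
database_name, table_name = table_ref.split("|")
query = f'SELECT * FROM "{database_name}"."{table_name}"'
print(query)  # → SELECT * FROM "MyDatabase"."MyTable"
```

Note that at synth time the ref is a CloudFormation token, so in CDK code the split has to happen with Fn.select/Fn.split rather than Python's str.split; the sketch above shows the string logic only.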
I'm trying to make a program in Python that requests an input and if the table in the DB exists, writes to it, and if it doesn't, creates it.
Here is the existing code:
import sqlite3

connection = sqlite3.connect('AnimeScheduleSub.db')
cursor = connection.cursor()

anime_id = input('enter server id')
discord_user_id = int(input('Enter token'))

try:
    cursor.execute("SELECT * FROM {}".format(anime_id))
    results = cursor.fetchall()
    print(results)
except:
    command1 = f"""CREATE TABLE IF NOT EXISTS
        {anime_id}(discord_user_id INTEGER)"""
    cursor.execute(command1)
Basically, what it's doing (or what I'm trying to achieve) is: the try block is meant to check whether the anime_id table exists, and the except block is meant to create the table if the SELECT failed.
But it doesn't work, and I have no idea why. Any help would be much appreciated.
command1 = f"""CREATE TABLE IF NOT EXISTS
    A{anime_id}(discord_user_id INTEGER)"""
Table names consisting only of digits are not supported by SQL.
Start the name with a letter, and then use the numbers.
You should "ask" the DB whether the table exists, rather than relying on an exception. With the anime_id the user entered substituted in, the query looks like:
SELECT name FROM sqlite_master WHERE type='table' AND name='{anime_id}';
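A runnable sketch of that check (an in-memory database and a hard-coded anime_id stand in for the real inputs); the existence lookup itself is parameterized, so only the CREATE TABLE needs the name interpolated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for 'AnimeScheduleSub.db'
cursor = conn.cursor()

anime_id = "A123"  # stands in for input('enter server id')

def table_exists(name):
    # Parameterized query: no string formatting into the SQL for the check.
    cursor.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (name,),
    )
    return cursor.fetchone() is not None

if not table_exists(anime_id):
    # The table name must be interpolated here; since it comes from user
    # input, validate it first (e.g. name.isidentifier()) in real code.
    cursor.execute(f"CREATE TABLE {anime_id} (discord_user_id INTEGER)")

print(table_exists(anime_id))  # → True
```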
I want to use sqlite3 to handle data with Python on Ubuntu, but I keep failing and getting errors. The database-related code is as follows:
sqlite = "%s.db" % name
#connnect to the database
conn = sqlite3.connect(sqlite)
print "Opened database successfully"
c = conn.cursor()
#set default separator to "\t" in database
c.execute(".separator "\t"")
print "Set separator of database successfully"
#create table data_node
c.execute('''create table data_node(Time int,Node Text,CurSize int,SizeVar int,VarRate real,Evil int);''')
print "Table data_node created successfully"
node_info = "%s%s.txt" % (name,'-PIT-node')
c.execute(".import %\"s\" data_node") % node_info
print "Import to data_node successfully"
#create table data_face
data_info = "%s%s.txt" % (name,'-PIT-face')
c.execute('''create table data_face(Time int,Node Text,TotalEntry real,FaceId int,FaceEntry real,Evil int);''')
c.execute(".import \"%s\" data_face") % face_info
#get the final table : PIT_node
c.execute('''create table node_temp as select FIRST.Time,FIRST.Node,ROUND(FIRST.PacketsRaw/SECOND.PacketsRaw,4) as SatisRatio from tracer_temp FIRST,tracer_temp SECOND WHERE FIRST.Time=SECOND.Time AND FIRST.Node=SECOND.Node AND FIRST.Type='InData' AND SECOND.Type='OutInterests';''')
c.execute('''create table PIT_node as select A.Time,A.Node,B.SatisRatio,A.CurSize,A.SizeVar,A.VarRate,A.Evil from data_node A,node_temp B WHERE A.Time=B.Time AND A.Node=B.Node;''')
#get the final table : PIT_face
c.execute('''create table face_temp as select FIRST.Time,FIRST.Node,FIRST.FaceId,ROUND(FIRST.PacketsRaw/SECOND.PacketsRaw,4) as SatisRatio,SECOND.Packets from data_tracer FIRST,data_tracer SECOND WHERE FIRST.Time=SECOND.Time AND FIRST.Node=SECOND.Node AND FIRST.FaceId=SECOND.FaceId AND FIRST.Type='OutData' AND SECOND.Type='InInterests';''')
c.execute('''create table PIT_face as select A.Time,A.Node,A.FaceId,B.SatisRatio,B.Packets,ROUND(A.FaceEntry/A.TotalEntry,4),A.Evil from data_face as A,face_temp as B WHERE A.Time=B.Time AND A.Node=B.Node AND A.FaceId = B.FaceId;''')
conn.commit()
conn.close()
These SQL commands are right. When I run the code, it always shows sqlite3.OperationalError: near ".": syntax error. How should I change my code, and are there other errors in the other commands, such as the create table statements?
You have many problems in your code as posted, but the one you're asking about is:
c.execute(".separator "\t"")
This isn't valid Python syntax. But, even if you fix that, it's not valid SQL.
The "dot-commands" are special commands to the sqlite3 command line shell. It intercepts them and uses them to configure itself. They mean nothing to the actual database, and cannot be used from Python.
And most of them don't make any sense outside that shell anyway. For example, you're trying to set the column separator here. But the database doesn't return strings, it returns row objects—similar to lists. There is nowhere for a separator to be used. If you want to print the rows out with tab separators, you have to do that in your own print statements.
So, the simple fix is to remove all of those dot-commands.
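For example, tab-separated output belongs in your own print call, not in the database configuration; the rows come back as tuples and you join them yourself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE t (a int, b text)")  # hypothetical demo table
c.execute("INSERT INTO t VALUES (1, 'x')")

# The database hands back row tuples; the separator lives in the print.
for row in c.execute("SELECT * FROM t"):
    print("\t".join(str(col) for col in row))
```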
However, there is a problem—at least one of those dot-commands actually does something:
c.execute(".import %\"s\" data_node") % node_info
You will have to replace that with valid calls to the library that do the same thing as the .import dot-command. Read what it does, and it should be easy to understand. (You basically want to open the file, parse the columns for each row, and do an executemany on an INSERT with the rows.)
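A sketch of that replacement, with an in-memory database and a StringIO standing in for the real "<name>-PIT-node.txt" file (which the question's format string suggests is tab-separated):

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE data_node (Time int, Node text, CurSize int, "
          "SizeVar int, VarRate real, Evil int)")

# Stand-in for open(node_info) on the real tab-separated file.
node_file = io.StringIO("1\tA\t10\t2\t0.5\t0\n2\tB\t11\t3\t0.6\t1\n")

# Replacement for the ".import" dot-command: parse each row with the csv
# module, then bulk-insert all rows with executemany.
rows = list(csv.reader(node_file, delimiter="\t"))
c.executemany("INSERT INTO data_node VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.commit()

print(c.execute("SELECT COUNT(*) FROM data_node").fetchone()[0])  # → 2
```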
Now I am writing an API using AWS Lambda with Python, but I am having a problem.
Below is the flow:
- use a cursor to insert data
- before committing, get the id of the inserted row (1) to insert into other tables
Here is my code:
sqlPjInsert = "INSERT INTO PROJECT(PRJ_NAME,TEST_TGT,DESCRIPTION,PRJ_STATUS) VALUES (%s,%s,%s,%s);"
cursor.execute(sqlPjInsert, (p_name, p_url, p_desc, p_stt))
affectedProject = cursor.rowcount

if affectedProject == 1:
    # conn.commit()
    logger.info("PROCESS: get project")
    sqlPjGet = "SELECT * FROM PROJECT WHERE PRJ_NAME='" + p_name + "' ORDER BY INS_DTTM DESC LIMIT 1;"
    cursor.execute(sqlPjGet)
    getInsertedPJ = cursor.fetchall()
    for row in getInsertedPJ:
        h_answer_prj_id = row["PRJ_ID"]
        logger.info("PROCESS: get project id " + str(h_answer_prj_id))
    # Using h_answer_prj_id for inserts into other tables
    ...
conn.commit()
More detail: I have two tables:
1. Project
2. Answer
In my API, after inserting a Project I get the Project ID from the inserted row, and I use that Project ID to insert new rows into the Answer table. After all the inserts I commit; if there is any exception, I call rollback.
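Instead of re-selecting by name, most DB-API drivers (including PyMySQL and mysqlclient for MySQL AUTO_INCREMENT keys) expose the new row's id as cursor.lastrowid immediately after the INSERT, before the commit. A sketch, with sqlite3 and two toy tables standing in for the MySQL connection so it runs anywhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE PROJECT (PRJ_ID INTEGER PRIMARY KEY, PRJ_NAME TEXT)")
cursor.execute("CREATE TABLE ANSWER (PRJ_ID INTEGER, BODY TEXT)")

cursor.execute("INSERT INTO PROJECT (PRJ_NAME) VALUES (?)", ("demo",))
h_answer_prj_id = cursor.lastrowid  # id of the row just inserted, pre-commit

# Use the captured id for the dependent insert, then commit once at the end
# (or conn.rollback() in the except branch).
cursor.execute("INSERT INTO ANSWER (PRJ_ID, BODY) VALUES (?, ?)",
               (h_answer_prj_id, "a1"))
conn.commit()
print(h_answer_prj_id)  # → 1
```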
I have a task to read a CSV file line by line and insert the rows into a database.
The CSV file contains about 1.7 million lines.
I use Python with the SQLAlchemy ORM (the merge function) to do this.
But it takes over five hours.
Is that caused by Python's slow performance, by SQLAlchemy, or by my approach?
Or would using Go give an obviously better performance? (I have no experience with Go, and this job needs to be scheduled every month.)
Hope you guys can give any suggestions, thanks!
Update: the database is MySQL.
For such a task you don't want to insert data line by line :) Basically, you have two options:
1. Ensure that SQLAlchemy does not run queries one by one; use a batch INSERT (see How to do a batch insert in MySQL) instead.
2. Massage your data the way you need, output it into a temporary CSV file, and then run LOAD DATA [LOCAL] INFILE. If you don't need to preprocess your data, just feed the CSV straight to the database (I assume it's MySQL).
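A sketch of option 1 using the DB-API's executemany with 1,000-row batches (sqlite3 stands in for the MySQL connection here, and a generator stands in for csv.reader over the real 1.7M-line file; with PyMySQL, executemany additionally rewrites the INSERT into a multi-row statement):

```python
import itertools
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (a TEXT, b TEXT)")  # hypothetical table

# Stand-in for csv.reader over the real file.
rows = ([str(i), str(i * 2)] for i in range(10_000))

BATCH = 1000
while True:
    batch = list(itertools.islice(rows, BATCH))
    if not batch:
        break
    # One executemany call per 1,000 rows instead of one execute per row.
    conn.executemany("INSERT INTO measurements VALUES (?, ?)", batch)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])  # → 10000
```

Skipping the ORM's merge() for a plain INSERT also avoids a per-row SELECT, which is usually a large part of the five hours.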
Follow the three steps below:
1. Save the CSV file with the name of the table you want to load it into.
2. Execute the Python script below to create the table dynamically (update the CSV filename and DB parameters).
3. Execute mysqlimport --ignore-lines=1 --fields-terminated-by=, --local -u dbuser -p db_name dbtable_name.csv
PYTHON CODE:
import numpy as np
import pandas as pd
from mysql.connector import connect

csv_file = 'dbtable_name.csv'
df = pd.read_csv(csv_file)
table_name = csv_file.split('.')

query = "CREATE TABLE " + table_name[0] + "( \n"
for count in np.arange(df.columns.values.size):
    query += df.columns.values[count]
    if df.dtypes[count] == 'int64':
        query += "\t\t int(11) NOT NULL"
    elif df.dtypes[count] == 'object':
        query += "\t\t varchar(64) NOT NULL"
    elif df.dtypes[count] == 'float64':
        query += "\t\t float(10,2) NOT NULL"
    if count == 0:
        query += " PRIMARY KEY"
    if count < df.columns.values.size - 1:
        query += ",\n"
query += " );"
# print(query)

database = connect(host='localhost',   # your host
                   user='username',    # username
                   passwd='password',  # password
                   db='dbname')        # dbname
curs = database.cursor(dictionary=True)
curs.execute(query)