I created the database and table like this:
# Amazon Timestream
self.database = timestream.CfnDatabase(scope=self, id="MyDatabase")
self.table = timestream.CfnTable(
    scope=self,
    id="MyTable",
    database_name=self.database.ref,
)
But somehow I cannot build a valid query string, because I cannot get the correct table name.
query = "".join(
    (
        "SELECT * ",
        f'FROM "DATABASE.TABLE" ',
    )
)
If I use self.database.ref it gives me the correct database name. So far so good.
But how do I get the correct table name?
What I tried so far:
- Giving the table a name.
- Using database.table.ref, but it gives me "DATABASE|TABLE".
- Rebuilding a quoted name with '"' + database.table.ref.replace("|", '"."') + '"' (my f-string version of this would not even parse because of the nested quotes).
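One workaround, assuming CfnTable.ref really does return the pipe-separated "DATABASE|TABLE" string described above: split it and quote the two parts yourself. This is a sketch; timestream_from_ref is a helper name invented for it, and the "MyDatabase|MyTable" value stands in for what self.table.ref would supply in the stack.

```python
def timestream_from_ref(ref: str) -> str:
    """Turn a CfnTable ref like 'DATABASE|TABLE' into a quoted
    Timestream identifier: "DATABASE"."TABLE"."""
    database_name, table_name = ref.split("|")
    return f'"{database_name}"."{table_name}"'

# In the stack, self.table.ref would supply the pipe-separated value:
query = f"SELECT * FROM {timestream_from_ref('MyDatabase|MyTable')}"
print(query)  # SELECT * FROM "MyDatabase"."MyTable"
```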
I'm fairly new to SQL in general, and currently trying to bolster my understanding of how to pass commands via cursor.execute(). Right now I'm trying to grab a column from a table and rename it to something different.
import mysql.connector
user = 'root'
pw = 'test!*'
host = 'localhost'
db = 'test1'
conn = mysql.connector.connect(user=user, password=pw, host=host, database=db)
cursor = conn.cursor(prepared=True)
new_name = 'Company Name'
query = f'SELECT company_name AS {new_name} from company_directory'
cursor.execute(query)
fetch = cursor.fetchall()
I've also tried it like this:
query = 'SELECT company_name AS %s from company_directory'
cursor.execute(query, ('Company Name'),)
fetch = cursor.fetchall()
but that returns the following error:
stmt = self._cmysql.stmt_prepare(statement)
_mysql_connector.MySQLInterfaceError: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '? from company_directory' at line 1
I'm using Python and MySQL. I keep reading about SQL injection and not using string concatenation, but every time I try to use %s I get an error similar to the one above. I've tried switching to ? syntax but I get the same error.
If someone could ELI5 what the difference is, what exactly SQL injection is, and whether what I'm doing in the first attempt counts as the string concatenation I should be avoiding, that would be great.
Thank you so much!
If a column name or alias contains spaces, you need to put it in backticks.
query = f'SELECT company_name AS `{new_name}` from company_directory'
You can't use a placeholder for identifiers like table and column names or aliases, only where expressions are allowed.
You can't make a query parameter in place of a column alias. The rules for column aliases are the same as column identifiers, and they must be fixed in the query before you pass the query string.
So you could do this:
query = f"SELECT company_name AS `{'Company Name'}` from company_directory"
cursor.execute(query)
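To make the value-vs-identifier distinction concrete, here is a small runnable illustration using sqlite3 so it works anywhere (sqlite uses ? placeholders where mysql.connector uses %s, and double quotes instead of backticks for identifiers; the table contents are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company_directory (company_name TEXT)")
conn.execute("INSERT INTO company_directory VALUES ('Acme')")

# Values go in placeholders: the driver passes them separately from the
# SQL text, so user input can never rewrite the query. Letting input
# rewrite the query is exactly what SQL injection is.
rows = conn.execute(
    "SELECT company_name FROM company_directory WHERE company_name = ?",
    ("Acme",),  # note: a one-element tuple, not a bare string
).fetchall()

# Identifiers (table/column names, aliases) cannot be placeholders; they
# must be fixed in the query text before execute() is called.
alias = "Company Name"
cur = conn.execute(f'SELECT company_name AS "{alias}" FROM company_directory')
print(cur.description[0][0])  # Company Name
```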
Trying to DELETE FROM using SQLAlchemy. The error is sqlalchemy.exc.ArgumentError: subject table for an INSERT, UPDATE or DELETE expected, got 'organization'.
organization is the name of the table I am trying to delete from.
This is for a utility that takes a list of tables from one database instance and copies the rows to another instance, but I want the target table to be empty first. The logged-in user is not an admin, so it can't do a TRUNCATE. If there is a better way, I'm open to it. Thanks.
Calling code stub:
for item in table_list:
    process_tbl(engine_in, engine_out, item["table"])
called function:
def process_tbl(engine_in, engine_out, table):
    pd = pandas.read_sql(table, engine_in)
    print(pd.columns)
    stmt = delete(table)
    print(stmt)
    rows = pd.to_sql(table, engine_out, if_exists="append", index=False)
    print("Table: " + table + " inserted rows: " + str(rows))
Well, I could not get the delete(table) call to work, but because it was a simple delete statement, I was able to use:
command = "DELETE FROM " + table
engine_out.execute(command)
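For reference, the ArgumentError happens because delete() needs a Table object (or a mapped class), not a plain string. A sketch of the reflection approach, using an in-memory SQLite engine as a stand-in for the real target and the SQLAlchemy 1.4+ API:

```python
import sqlalchemy as sa

engine_out = sa.create_engine("sqlite:///:memory:")  # stand-in for the real target
with engine_out.begin() as conn:
    conn.execute(sa.text("CREATE TABLE organization (id INTEGER)"))
    conn.execute(sa.text("INSERT INTO organization VALUES (1), (2)"))

# Reflect the table by name so delete() gets a real Table object.
metadata = sa.MetaData()
org = sa.Table("organization", metadata, autoload_with=engine_out)

with engine_out.begin() as conn:
    conn.execute(sa.delete(org))  # emits: DELETE FROM organization
```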
I have around 70 tables in one S3 bucket and I would like to move them to Redshift using Glue. I could move only a few tables; the rest have data type issues, because Redshift does not accept some of the data types. I resolved the issue with this script, which moves tables one by one:
table1 = glueContext.create_dynamic_frame.from_catalog(
    database="db1_g", table_name="table1"
)
table1 = table1.resolveChoice(
    specs=[
        ("column1", "cast:char"),
        ("column2", "cast:varchar"),
        ("column3", "cast:varchar"),
    ]
)
table1 = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=table1,
    catalog_connection="redshift",
    connection_options={"dbtable": "schema1.table1", "database": "db1"},
    redshift_tmp_dir=args["TempDir"],
    transformation_ctx="table1",
)
The same script is used for every other table with a data type issue.
But since I would like to automate this, I wrote a looping script that iterates through all the tables and writes them to Redshift. I have 2 issues with it:
1. Unable to move the tables to their respective schemas in Redshift.
2. Unable to add an if condition in the loop for the tables that need a data type change.
client = boto3.client("glue", region_name="us-east-1")
databaseName = "db1_g"
Tables = client.get_tables(DatabaseName=databaseName)
tableList = Tables["TableList"]
for table in tableList:
    tableName = table["Name"]
    datasource0 = glueContext.create_dynamic_frame.from_catalog(
        database="db1_g", table_name=tableName, transformation_ctx="datasource0"
    )
    datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(
        frame=datasource0,
        catalog_connection="redshift",
        connection_options={
            "dbtable": tableName,
            "database": "schema1.db1",
        },
        redshift_tmp_dir=args["TempDir"],
        transformation_ctx="datasink4",
    )
job.commit()
Mentioning the Redshift schema name along with tableName, like schema1.tableName, throws an error saying schema1 is not defined.
Can anybody help with changing the data type for all tables that require it, inside the looping script itself?
So the first problem is fixed rather easily. The schema belongs in the dbtable attribute, not the database, like this:
connection_options={
    "dbtable": f"schema1.{tableName}",
    "database": "db1",
}
Your second problem is that you want to call resolveChoice inside the for loop, correct? What kind of error occurs there? Why doesn't it work?
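One way to get the per-table if condition is to key the cast specs by table name and apply resolveChoice only when an entry exists. The sketch below stubs Glue's DynamicFrame with a tiny fake class so the control flow can run anywhere; in the real job you would keep the glueContext create/write calls from the loop above, and the column names in cast_specs are just the placeholders from the question.

```python
# Per-table cast specs; only tables listed here get resolveChoice applied.
cast_specs = {
    "table1": [
        ("column1", "cast:char"),
        ("column2", "cast:varchar"),
        ("column3", "cast:varchar"),
    ],
}

class FakeFrame:
    """Minimal stand-in for a Glue DynamicFrame."""
    def __init__(self, name):
        self.name = name
        self.casts_applied = False

    def resolveChoice(self, specs):
        self.casts_applied = True
        return self

def process_table(tableName):
    frame = FakeFrame(tableName)  # glueContext.create_dynamic_frame.from_catalog(...)
    if tableName in cast_specs:   # the "if condition" asked about
        frame = frame.resolveChoice(specs=cast_specs[tableName])
    # glueContext.write_dynamic_frame.from_jdbc_conf(frame=frame,
    #     connection_options={"dbtable": f"schema1.{tableName}", "database": "db1"}, ...)
    return frame
```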
I am using Sqlalchemy 1.3 to connect to a PostgreSQL 9.6 database (through Psycopg).
I have a very, very raw SQL string formatted using psycopg2 syntax which I cannot modify because of some legacy issues:
statement_str = "SELECT * FROM users WHERE user_id=%(user_id)s"
Notice the %(user_id)s.
I can happily execute that using a sqlalchemy connection just by doing:
conn = sqlalch_engine.connect()
rows = conn.execute(statement_str, user_id=self.user_id)
And it works fine. I get my user and all is nice and good.
Now, for debugging purposes I'd like to get the actual query with the %(user_id)s argument expanded to the actual value. For instance: If user_id = "foo", then get SELECT * FROM users WHERE user_id = 'foo'
I've seen tons of examples using sqlalchemy.text(...) to produce a statement and then get a compiled version, and thanks to other answers I have been able to produce a decent str when I have a SqlAlchemy query.
However, in this particular case, since I'm using the more cursor-specific %(user_id)s syntax, I can't do that. If I try:
text(statement_str).bindparams(user_id="foo")
I get:
This text() construct doesn't define a bound parameter named 'user_id'
So I guess what I'm looking for would be something like
conn.compile(statement_str, user_id=self.user_id)
But I haven't been able to get that.
Not sure if this what you want but here goes.
Assuming statement_str is actually a string:
import sqlalchemy as sa
statement_str = "SELECT * FROM users WHERE user_id=%(user_id)s"
params = {'user_id': 'foo'}
query_text = sa.text(statement_str % params)
# str(query_text) prints "SELECT * FROM users WHERE user_id=foo"
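It is worth seeing what plain %-formatting actually produces: string values come out without quotes, so the result is useful for eyeballing but is not valid executable SQL. A quick check:

```python
# Plain %-substitution drops the quoting a driver would add.
statement_str = "SELECT * FROM users WHERE user_id=%(user_id)s"
debug_sql = statement_str % {"user_id": "foo"}
print(debug_sql)  # SELECT * FROM users WHERE user_id=foo  (no quotes around foo)
```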
Ok I think I got it.
The combination of SqlAlchemy's raw_connection + Psycopg's mogrify seems to be the answer.
import re

conn = sqlalch_engine.raw_connection()
try:
    cursor = conn.cursor()
    s_str = cursor.mogrify(statement_str, {'user_id': self.user_id})
    s_str = s_str.decode("utf-8")  # mogrify returns bytes
    # Some cleanup for niceness:
    s_str = s_str.replace('\n', ' ')
    s_str = re.sub(r'\s{2,}', ' ', s_str)
finally:
    conn.close()
I hope someone else finds this helpful.
So I'm not so experienced in Python, but I really enjoy making stuff with it.
I decided to start using Python to interact with MySQL in one of my projects.
I would like to write a function that takes the username as input and returns the password as output.
Here is what I've tried to do:
def get_passwd(user_name):
    user_passwd = mycursor.execute("SELECT passwd FROM users WHERE name = '%s'", (user_name))
    print(user_passwd)

get_passwd("Jacob")
But it just prints out "None".
My table looks like this:
Instead of
user_passwd = mycursor.execute("SELECT passwd FROM users WHERE name = '%s'", (user_name))
use something like:
mycursor.execute("SELECT passwd FROM users WHERE name = %s", (user_name,))
row = mycursor.fetchone()
user_passwd = row[0]
In mysql.connector, execute() does not return the rows; you have to fetch them. Also note the placeholder is an unquoted %s and the parameter is a one-element tuple.
It is unclear which package you are using to access your database.
Assuming it is sqlalchemy, what you are missing is the fetch call. So you should do:

def get_passwd(user_name):
    user_passwd = mycursor.execute("SELECT passwd FROM users WHERE name = %s", (user_name,))
    user_actual_passwd = user_passwd.fetchone()
    print(user_actual_passwd)

get_passwd("Jacob")
See more here
** Update as the question was updated **
I would make sure the query string is what you are expecting. Do:

query = "SELECT passwd FROM users WHERE name = '%s'" % (user_name,)
print(query)

If the query is what you are expecting, try running it directly on the db and see whether you get any result.
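Putting the fixes together, here is a self-contained sketch with sqlite3 standing in for mysql.connector (sqlite uses ? placeholders where mysql.connector uses %s; the table rows are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, passwd TEXT)")
conn.execute("INSERT INTO users VALUES ('Jacob', 'hunter2')")

def get_passwd(user_name):
    # Pass the value as a one-element tuple; never quote the placeholder.
    cur = conn.execute("SELECT passwd FROM users WHERE name = ?", (user_name,))
    row = cur.fetchone()  # execute() alone never returns the rows
    return row[0] if row else None

print(get_passwd("Jacob"))  # hunter2
```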