I'm using Ansible with community.mysql.mysql_user to automate database user creation on AWS Aurora. So far all the grants have been working fine; however, a new requirement for "LOAD FROM S3", a privilege specific to MySQL on AWS, does not show up after it is issued.
I've reproduced this with only PyMySQL (see below), which the Ansible module uses, and I get the same result. I see no errors on the database, and the rest of the grants show up as expected.
PyMySQL 1.0.2
CPython 3.9.7
docker: python:3.9.7-slim-buster
If anyone can provide a fix, shed some light, or suggest alternatives, please let me know; otherwise I'll keep digging.
import pymysql.cursors

# Connect to the database
connection = pymysql.connect(host='some_aurora_mysql_5.7_host',
                             user='some_user',
                             password='redacted',
                             database='redacted',
                             cursorclass=pymysql.cursors.DictCursor,
                             ssl={'ssl': {'activate': True}})

with connection:
    with connection.cursor() as cursor:
        # Issue the grant
        sql = "GRANT SELECT, LOAD FROM S3 ON `some_table`.* TO 'some_user'@'%'"
        cursor.execute(sql)
    connection.commit()

    with connection.cursor() as cursor:
        # Read the grants back
        sql = "SHOW GRANTS FOR some_user"
        cursor.execute(sql)
        result = cursor.fetchone()
        print(result)
As it turns out, LOAD FROM S3 is a privilege on the whole database server/cluster, not on individual databases.
So:
GRANT LOAD FROM S3 ON *.* TO 'test_user'@'%' works fine.
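For completeness, a hedged sketch of issuing the cluster-wide grant through PyMySQL, mirroring the reproduction above (host and credentials are placeholders):

import pymysql.cursors

connection = pymysql.connect(host='some_aurora_mysql_5.7_host',
                             user='some_admin_user',
                             password='redacted',
                             cursorclass=pymysql.cursors.DictCursor)

with connection:
    with connection.cursor() as cursor:
        # Grant at the cluster level (*.*); LOAD FROM S3 is not a per-database privilege
        cursor.execute("GRANT LOAD FROM S3 ON *.* TO 'test_user'@'%'")
    connection.commit()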
Related
I have a Google Cloud Postgres database, and I can query it with no problem in a Jupyter notebook. I created a connection to the database using psycopg2. However, I am building a web app with Flask, and it seems that I cannot use psycopg2 in the gcloud run deploy. I've read the documentation on pg8000, but I am very confused. I am very new to this.
What is the simplest function for creating a connection to the database so I can query the data?
This is what I was using with psycopg2:
# establishing the connection
conn = psycopg2.connect(
    database='postgres', user='postgres', password='password',
    host='Public IP Address', port='5432')
Then I could fetch data using:
cursor = conn.cursor()
cursor.execute('SELECT STMT')
result = cursor.fetchall()
Any help or direction is much appreciated.
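For reference, a minimal pg8000 sketch mirroring the psycopg2 call above, assuming a recent pg8000 (which exposes its DB-API module as pg8000.dbapi); the connection details are the same placeholders:

import pg8000.dbapi

conn = pg8000.dbapi.connect(
    database='postgres', user='postgres', password='password',
    host='Public IP Address', port=5432)

cursor = conn.cursor()
cursor.execute('SELECT STMT')
result = cursor.fetchall()
conn.close()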
I have a simple test script to ensure I can connect to and read/write using GCF to Google Cloud SQL instance.
def update_db(request):
    import sqlalchemy

    # connect to GCP SQL db
    user = 'root'
    pw = 'XXX'
    conn_name = 'hb-project-319121:us-east4:hb-data'
    db_name = 'Homebase'
    url = 'mysql+pymysql://{}:{}@/{}?unix_socket=/cloudsql/{}'.format(user, pw, db_name, conn_name)
    engine = sqlalchemy.create_engine(url, pool_size=1, max_overflow=0)

    return 'Success'
This is successful but when I try to add a connection:
conn = engine.connect()
I get the following error:
pymysql.err.OperationalError: (1045, "Access denied for user 'root'@'cloudsqlproxy~' (using password: YES)")
It seems odd that the engine can be created but no connection can be made with it? Worth noting that any kind of execute using the engine, e.g. engine.execute('select * from table limit 3'), will also lead to the error above.
Any thoughts are appreciated. Thanks for your time.
Cheers,
Dave
When create_engine is called, it creates the necessary objects but does not actually attempt to connect to the database.
Per the SQLAlchemy documentation:
[create_engine] creates a Dialect object tailored towards [the database in the connection string], as well as a Pool object which will establish a DBAPI connection at [host]:[port] when a connection request is first received.
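A quick way to see that lazy behaviour (the connection string here is a deliberately bogus placeholder):

import sqlalchemy

# create_engine succeeds even with bad credentials, because nothing connects yet
engine = sqlalchemy.create_engine('mysql+pymysql://nobody:wrong@/nodb?unix_socket=/cloudsql/bad')

# the first real connection attempt is where the OperationalError surfaces
conn = engine.connect()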
The error message you receive from MySQL is likely due to the fact that the special user account for the Cloud SQL Auth proxy has not yet been created.
You need to create a user in Cloud SQL (MySQL): '[USER_NAME]'@'cloudsqlproxy~%' or '[USER_NAME]'@'cloudsqlproxy~[IP_ADDRESS]'.
In Amazon Redshift's Getting Started Guide, it's mentioned that you can utilize SQL client tools that are compatible with PostgreSQL to connect to your Amazon Redshift Cluster.
In the tutorial, they use the SQL Workbench/J client, but I'd like to use Python (in particular SQLAlchemy). I've found a related question, but the issue is that it does not go into detail about the Python script that connects to the Redshift cluster.
I've been able to connect to the cluster via SQL Workbench/J, since I have the JDBC URL, as well as my username and password, but I'm not sure how to connect with SQLAlchemy.
Based on this documentation, I've tried the following:
from sqlalchemy import create_engine
engine = create_engine('jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
ERROR:
Could not parse rfc1738 URL from string 'jdbc:redshift://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy'
I don't think SQLAlchemy "natively" knows about Redshift. You need to change the JDBC "URL" string into a URL SQLAlchemy can parse, using the postgres scheme:
postgres://shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy
Alternatively, you may want to try using sqlalchemy-redshift using the instructions they provide.
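If you go that route, sqlalchemy-redshift registers a redshift+psycopg2:// URL scheme once installed; a hedged sketch (LOGIN and PASSWORD are placeholders):

from sqlalchemy import create_engine

# the redshift+psycopg2 dialect comes from the sqlalchemy-redshift package
engine = create_engine(
    'redshift+psycopg2://LOGIN:PASSWORD@shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')

conn = engine.connect()
print(conn.execute('select 1').scalar())
conn.close()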
I was running into the exact same issue, and then I remembered to include my Redshift credentials:
eng = create_engine('postgresql://[LOGIN]:[PASSWORD]@shippy.cx6x1vnxlk55.us-west-2.redshift.amazonaws.com:5439/shippy')
sqlalchemy-redshift works for me, but only after a few days of research.
Packages (Python 3.4):
SQLAlchemy==1.0.14
sqlalchemy-redshift==0.5.0
psycopg2==2.6.2
First of all, I checked that my query works in SQL Workbench (http://www.sql-workbench.net); then I got it working in SQLAlchemy (this answer, https://stackoverflow.com/a/33438115/2837890, helped me learn that autocommit or session.commit() is required):
from sqlalchemy import create_engine, text

# config holds the Redshift connection parameters
db_credentials = (
    'redshift+psycopg2://{p[redshift_user]}:{p[redshift_password]}'
    '@{p[redshift_host]}:{p[redshift_port]}/{p[redshift_database]}'
    .format(p=config['Amazon_Redshift_parameters']))
engine = create_engine(db_credentials, connect_args={'sslmode': 'prefer'})
connection = engine.connect()

# COPY needs autocommit (see the linked answer above)
result = connection.execute(text(
    "COPY assets FROM 's3://xx/xx/hello.csv' WITH CREDENTIALS "
    "'aws_access_key_id=xxx_id;aws_secret_access_key=xxx'"
    " FORMAT csv DELIMITER ',' IGNOREHEADER 1 ENCODING UTF8;").execution_options(autocommit=True))

result = connection.execute("select * from assets;")
print(result, type(result))
print(result.rowcount)
connection.close()
After that, I got sqlalchemy_redshift's CopyCommand to work as well, though perhaps in a bad way; it looks a little tricky:
import sqlalchemy as sa
# CopyCommand lived in the dialect module in sqlalchemy-redshift 0.5.0;
# in recent versions it is in sqlalchemy_redshift.commands
import sqlalchemy_redshift.dialect as dialect_rs
from sqlalchemy_redshift.dialect import RedshiftDialect

tbl2 = sa.Table('assets', sa.MetaData())
copy = dialect_rs.CopyCommand(
    tbl2,
    data_location='s3://xx/xx/hello.csv',
    access_key_id=access_key_id,
    secret_access_key=secret_access_key,
    truncate_columns=True,
    delimiter=',',
    format='CSV',
    ignore_header=1,
    # empty_as_null=True,
    # blanks_as_null=True,
)
print(str(copy.compile(dialect=RedshiftDialect(), compile_kwargs={'literal_binds': True})))
print(dir(copy))

connection = engine.connect()
connection.execute(copy.execution_options(autocommit=True))
connection.close()
This does just what I did with plain SQLAlchemy (execute the query), except that the query is composed by CopyCommand. I haven't seen much benefit :(.
The following works for me with Databricks on all kinds of SQL statements:
import sqlalchemy as SA
import psycopg2

host = 'your_host_url'
username = 'your_user'
password = 'your_passw'
db = 'your_db'
port = 5439

url = "{d}+{driver}://{u}:{p}@{h}:{port}/{db}".format(
    d="redshift",
    driver='psycopg2',
    u=username,
    p=password,
    h=host,
    port=port,
    db=db)

engine = SA.create_engine(url)
cnn = engine.connect()

strSQL = "your_SQL ..."
try:
    cnn.execute(strSQL)
except:
    raise
import sqlalchemy as db
engine = db.create_engine('postgres://username:password@url:5439/db_name')
This worked for me.
I have an application accessing a remote MySQL database using PyMySQL and the following function:
import pymysql

def db_query(username, password, host, db, query):
    try:
        db = pymysql.connect(host=host, user=username, passwd=password, db=db)
    except pymysql.err.OperationalError as e:
        errorLog = e
        return [0, errorLog]
    # you must create a Cursor object; it will let you execute all the queries you need
    cur = db.cursor()
    cur.execute(query)
    result = cur.fetchall()
    cur.close()
    db.close()
    return result
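A hypothetical call, with every argument a placeholder:

rows = db_query('some_user', 'secret', 'db.example.com', 'some_db', 'SELECT * FROM some_table')
print(rows)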
I have recently updated Django from 1.7.1 to 1.7.3. Before the update this code worked. Now, only in production using WSGI (not on the local dev server, nor on the server's dev server), I get a (2003, "Can't connect to MySQL server on [server.address] ([Errno 13] Permission denied)").
I have updated pymysql to the latest version available on pip.
I see that 1.7.3 corrects some security issues in WSGI. I tried to downgrade to 1.7.2 (only by doing pip install django==1.7.2, not sure this is the right way) and the issue is still present.
Any ideas on what I could try or check?
Thank you in advance for your help.
I think this is related to Apache's permission to access the MySQL database. Can you try this command:
setsebool -P httpd_can_network_connect_db 1
Using the -P option makes the change persistent. Without this option, the boolean value would be reset to 0 at reboot.
I have a LAMP server, and then I installed MySQLdb for my Python scripts. Now I can't access the MySQL (from LAMP) from the Python scripts because they aren't connecting to it, and I also can't access the MySQLdb with phpMyAdmin as root/root: I get a "#2002 Cannot log in to the MySQL server" error. Is it possible to connect to one database with both Python and phpMyAdmin?
Here is my Python code, which can't connect to the LAMP MySQL but can connect to the MySQLdb:
import MySQLdb

db = MySQLdb.connect(host="localhost", port=3303, user="root", passwd="rootroot", db="test")
cursor = db.cursor()
sql = "CREATE TABLE TT(ID int NOT NULL AUTO_INCREMENT PRIMARY KEY)"
cursor.execute(sql)
db.commit()
db.close()
If you're getting "#2002 Cannot log in to the MySQL server" when logging in to phpMyAdmin, then edit the phpmyadmin/config.inc.php file and change:
$cfg['Servers'][$i]['host'] = 'localhost';
to:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
You might want to visit this link:
http://blog.ryantremaine.com/2011/03/2002-cannot-log-in-to-mysql-server.html
UPDATE:
You may not have configured your php.ini well, so PHP cannot connect to the MySQL server.
Wrong path for mysql.sock:
mysql.sock is the socket file of the MySQL instance, so first you have to find where it is.
You may find it at "/tmp/mysql.sock" or "/var/mysql/mysql.sock".
Go to your php.ini and make sure the values for "pdo_mysql.default_socket", "mysql.default_socket" and "mysqli.default_socket" point to the right path.
Then restart your web server and try again.
ELSE
Try this:
Go to config.inc.php and check for the following lines:
$cfg['Servers'][$i]['user'] = 'YOUR USER NAME IS HERE';
$cfg['Servers'][$i]['password'] = 'AND YOU PASSWORD IS HERE';
Check whether the user name and password you gave are present or not.
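Incidentally, the same localhost-vs-socket distinction applies on the Python side. A hedged MySQLdb sketch of both connection styles (the port, socket path and credentials are the question's placeholders):

import MySQLdb

# 127.0.0.1 forces a TCP connection on the given port
db_tcp = MySQLdb.connect(host="127.0.0.1", port=3303, user="root",
                         passwd="rootroot", db="test")

# "localhost" makes the client library use the Unix socket instead, so the
# socket path must be right (e.g. /tmp/mysql.sock or /var/mysql/mysql.sock)
db_sock = MySQLdb.connect(host="localhost", unix_socket="/tmp/mysql.sock",
                          user="root", passwd="rootroot", db="test")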