Pushing a dataframe to Postgres using SQLAlchemy and psycopg2 - Python

I am trying to write dataframes to Postgres. The DBAPI used for that is psycopg2.
localconn = 'postgresql+psycopg2://postgres/PSWD@localhost:5432/localPG'
localEng = engine.create_engine(localconn)
df.to_sql("DUMMY", localEng)
But it's throwing this error: (psycopg2.OperationalError) could not translate host name "postgres" to address: Name or service not known
localPG is the database name.
Where am I going wrong?

The format you have written is wrong; use the following:
localEng = create_engine('postgresql+psycopg2://[user]:[pass]@[host]:[port]/[schema]', echo=False)
and of course, you should replace every parameter between the brackets with the equivalent database credentials.
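For example, here is a minimal sketch of the corrected setup (the password PSWD and database name localPG are taken from the question; the dataframe is a placeholder):

from sqlalchemy import create_engine
import pandas as pd

# Corrected URL: user and password separated by ':', credentials and host by '@'
localEng = create_engine('postgresql+psycopg2://postgres:PSWD@localhost:5432/localPG')
df = pd.DataFrame({'col': [1, 2, 3]})  # placeholder dataframe
df.to_sql("DUMMY", localEng, if_exists='replace', index=False)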

Related

Errors trying to migrate tables from MariaDB to local MySQL database

I am new to SQLAlchemy and I am getting an error when I try to copy some tables from this online database (https://relational.fit.cvut.cz/dataset/ConsumerExpenditures):
srcEngine = create_engine('mariadb+mariadbconnector://guest:relational@relational.fit.cvut.cz:3306/ConsumerExpenditures')
conn = srcEngine.connect()
srcEngine._metadata = MetaData(bind=conn, reflect=True)
srcEngine._metadata.reflect(srcEngine)  # get columns from existing table
srcTable = Table('EXPENDITURES', srcEngine._metadata)
I am getting an error when execution gets to the MetaData(bind=conn...) call. Also, when I look up the srcEngine object in the debugger, it has no ._metadata attribute. Is it possible I don't have access to the MetaData from this online database?
As I am new to this, I am not sure how to resolve this problem. What I want to do is read the 3 tables from this database and copy them (same structure and data) over to my local MySQL database.
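For what it's worth, the MetaData(bind=..., reflect=True) style was removed in recent SQLAlchemy versions; a minimal sketch of explicit reflection, reusing the connection details from the question, might look like this:

from sqlalchemy import create_engine, MetaData

srcEngine = create_engine('mariadb+mariadbconnector://guest:relational@relational.fit.cvut.cz:3306/ConsumerExpenditures')
metadata = MetaData()
metadata.reflect(bind=srcEngine, only=['EXPENDITURES'])  # reflect just the table we need
srcTable = metadata.tables['EXPENDITURES']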

Snowflake connection string provided by Azure Key Vault

Here is my Snowflake connection in Python:
import snowflake.connector
ctx = snowflake.connector.connect(user='someuser@somedomain.com',
                                  password='somepassword',
                                  account='someaccount',
                                  warehouse='somewarehouse',
                                  database='somedb',
                                  schema='someschema',
                                  authenticator='someauth')
It works fine, but now I need to store my connection details in Azure Key Vault and, as far as I understand, they will come back as a string, which I will need to feed into snowflake.connector.connect().
So I tried to convert connection parameters into string:
connection_string = "user=someuser@somedomain.com;password=somepassword;account=someaccount;authenticator=someauth;warehouse=somewarehouse;database=somedb"
ctx = snowflake.connector.connect(connection_string)
but got back error message:
TypeError Traceback (most recent call last)
<ipython-input-19-ca89ef96ad7d> in <module>
----> 1 ctx = snowflake.connector.connect(connection_string)
TypeError: Connect() takes 0 positional arguments but 1 was given
I also tried extracting a Python dictionary from the string with the ast library and feeding it into snowflake.connector.connect(), but got back the same error.
So is there a way to solve it? Am I missing something conceptually?
Please check if the given references can help:
The Snowflake connector variables are separate, so you may need to split the connection string apart with the split method, e.g. Conn = connection_string.split(';'), and use the resulting pieces in the Snowflake connector.
Note: for the ACCOUNT parameter, use your account identifier. The account identifier does not include the snowflakecomputing.com domain name/suffix; Snowflake automatically appends it when creating the connection, so the shorter form should work for the Snowflake connector.
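For example, a minimal sketch of the split approach (assuming the Key Vault string keeps the key=value;key=value layout from the question, and that the keys match the connect() parameter names):

import snowflake.connector

connection_string = "user=someuser@somedomain.com;password=somepassword;account=someaccount;authenticator=someauth;warehouse=somewarehouse;database=somedb"
# Split into key=value pairs, then unpack the dict as keyword arguments
params = dict(part.split('=', 1) for part in connection_string.split(';'))
ctx = snowflake.connector.connect(**params)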
So try assigning the connection variables first and then using those variables in the Snowflake connector and connection string. Alternatively, try including the driver and server in the connection string by saving the complete connection string (account name, username, password, database, and warehouse details) in Azure Key Vault.
Sample connection string format:
"jdbc:snowflake://<accountname>.snowflakecomputing.com/?user=<username>&db=<database>&warehouse=<warehouse>&role=<myRole>"
References:
Configuring the JDBC Driver — Snowflake Documentation
ODBC connection string to Snowflake for Access Pass Thru Query - Stack Overflow
azure-docs/connector-snowflake.md at main · MicrosoftDocs/azure-docs · GitHub
connect-snowflake-using-python-pyodbc-odbc-driver

How to address MongoDB database with hyphen in name in Python?

I need to remove an element from a Mongo database, and I have two databases:
mydatabase
data-test-second
With the first database this isn't a problem; I use MongoClient:
self.db = self.client.mydatabase
result = self.db.test.delete_one({"name": 'testelement'})
But when I use this for the second database:
self.db = self.client.data-test-second
my editor underlines the database name. How can I write this? Or can't I use this solution for the second name?
In the case that your database name is not valid as an object name in Python, you need to address the database differently:
self.db = self.client["data-test-second"]
In general, it is probably advisable to always use this pattern.
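For example, the delete from the question can then be written the same way (a standalone sketch assuming a reachable local MongoDB instance):

from pymongo import MongoClient

client = MongoClient()  # assumes a local MongoDB instance
db = client["data-test-second"]  # bracket access works for any database name
result = db.test.delete_one({"name": "testelement"})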
For more information, you can refer to the documentation.

How to insert Python logs into a PostgreSQL table?

I need to insert the logs from my test case into a table in a PostgreSQL database.
I was able to connect to the DB, but I can't figure out how to insert the log line into the table. I have tried the code below, but it doesn't work:
import logging
import psycopg2
from io import StringIO
from config import config

params = config()
conn = psycopg2.connect(**params)
print(conn)
curr = conn.cursor()
try:
    if not hw.has_connection():
        logging.error('Failure: Unable to reach website! ==> ' + str(driver.find_element_by_xpath('//span[@jsselect="heading" and @jsvalues=".innerHTML:msg"]').text))
        return
    elif hw.is_websiteReachable():
        logging.info("Success: website is reachable!")
        curr.execute("""INSERT INTO db_name(logs) VALUES (%s)""", ("Success: website is reachable!"))
        conn.commit()
except:
    logging.error("Failure: Unable to reach website!")
    return
I am a total beginner at this. I have searched, but I couldn't find a clear example or guide about it. The above code throws the exception even though the website is reachable. Sorry if I sound dumb.
It looks like you're incorrectly constructing your SQL statement. Instead of INSERT INTO db_name(table_name) ... it should be INSERT INTO table_name(column_name) .... If you've correctly connected to the appropriate database in your connection settings, you usually don't have to specify the database name each time you write your SQL.
Therefore I would recommend the following modification (assuming your table is called logs and it has a column named message):
# ...
sql = 'INSERT INTO logs(message) VALUES (%s);'
msg = 'Success: website is reachable!'
curr.execute(sql, (msg,))
conn.commit()
You can read the psycopg2 docs here for more information as well, which may help with passing named parameters to your SQL queries in Python.
You can check a good solution that I personally use in my own server projects. You just need to give a connection string to the CRUD object and everything else will be handled. For Postgres you can use:
'postgresql+psycopg2://username:password@host:port/database'
or
'postgresql+pg8000://username:password@host:port/database'
For more details, check SQLAlchemy Engine Configuration.
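A minimal sketch of that SQLAlchemy route, reusing the logs table and message column from the first answer (the credentials are placeholders):

from sqlalchemy import create_engine, text

eng = create_engine('postgresql+psycopg2://username:password@localhost:5432/database')
with eng.begin() as conn:  # begin() commits automatically on success
    conn.execute(text("INSERT INTO logs(message) VALUES (:msg)"),
                 {"msg": "Success: website is reachable!"})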

PyMySQL query string builder

So I'm using the PyMySQL package to query a remote database and I am finding it quite annoying to have to use the .execute() method.
I am looking for a wrapper that would be able to construct the raw MySQL queries in a more friendly fashion. Does anyone know of a package that would do this?
I have tried using pypika, but the queries it builds ('SELECT "id","username" FROM "users"') throw an error:
pymysql.err.ProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \'"users"\' at line 1')
EDIT: this is a remote DB whose structure I have no control over, and it is likely that the structure could change, so I don't think I can use SQLAlchemy.
EDIT 2: Here is the code that I used when I got the error above:
from pypika import Query, Table, Field
import pymysql

users = Table('users')
q = Query.from_(users).select(users.id, users.username)
db = pymysql.connect(host=host,
                     port=3306,
                     user=user,
                     passwd=password,
                     db=db,
                     cursorclass=pymysql.cursors.DictCursor)
cursor = db.cursor()
cursor.execute(str(q))
It looks like it's generating a query using double-quotes as identifier delimiters.
But MySQL uses back-ticks as the default identifier delimiter, so the query should look like this:
SELECT `id`,`username` FROM `users`
You can configure MySQL to use double-quotes, which is the proper identifier delimiter to comply with the ANSI SQL standard. To do this, you have to change the sql_mode to include the modes ANSI or ANSI_QUOTES.
Read https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html for full documentation on sql_mode behavior, and https://dev.mysql.com/doc/refman/8.0/en/identifiers.html for documentation on identifier delimiters.
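For instance, if you would rather keep pypika's default double quotes, one option (a sketch, assuming db is the pymysql connection from the question) is to switch the session to ANSI_QUOTES before executing:

cursor = db.cursor()
cursor.execute("SET SESSION sql_mode = 'ANSI_QUOTES'")
cursor.execute(str(q))  # double-quoted identifiers are now accepted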
I searched the pypika site and the docs are pretty sparse, but there's apparently a MySQLQuery class that sets its own QUOTE_CHAR to `, which is what I would expect.
Are you using the proper query builder for MySQL?
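If that class is available in your pypika version, the fix may be as small as swapping the builder; a quick sketch:

from pypika import MySQLQuery, Table

users = Table('users')
q = MySQLQuery.from_(users).select(users.id, users.username)
print(str(q))  # SELECT `id`,`username` FROM `users`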
You can use peewee, a simple and small ORM.
http://docs.peewee-orm.com/en/latest/peewee/quickstart.html#quickstart
