psycopg2.OperationalError: FATAL: role does not exist - python

I'm trying to update my Heroku DB from a Python script on my computer. I set up my app on Heroku with NodeJS (because I just like JavaScript for that sort of thing), and I'm not sure I can add in a Python script to manage everything. I was able to populate the DB once with the script, with no hangups. Now when I try to update it, I get the following in my console:
Traceback (most recent call last):
  File "/home/alan/dev/python/smog_usage_stats/scripts/DBManager.py", line 17, in <module>
    CONN = pg2.connect(
  File "/home/alan/dev/python/smog_usage_stats/venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: role "alan" does not exist
and this is my script:
# DBManager.py
import os
import zipfile
import psycopg2 as pg2
from os.path import join, dirname
from dotenv import load_dotenv

# -------------------------------
# Connection variables
# -------------------------------
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)

# -------------------------------
# Connection to database
# -------------------------------
# Server connection
CONN = pg2.connect(
    database=os.environ.get('PG_DATABASE'),
    user=os.environ.get('PG_USER'),
    password=os.environ.get('PG_PASSWORD'),
    host=os.environ.get('PG_HOST'),
    port=os.environ.get('PG_PORT')
)

# Local connection
# CONN = pg2.connect(
#     database=os.environ.get('LOCAL_DATABASE'),
#     user=os.environ.get('LOCAL_USER'),
#     password=os.environ.get('LOCAL_PASSWORD'),
#     host=os.environ.get('LOCAL_HOST'),
#     port=os.environ.get('LOCAL_PORT')
# )

print("Connected to POSTGRES!")

global CUR
CUR = CONN.cursor()

# -------------------------------
# Database manager class
# -------------------------------
class DB_Manager:
    def __init__(self):
        self.table_name = "smogon_usage_stats"
        try:
            self.__FILE = os.path.join(
                os.getcwd(),
                "data/statsmaster.csv"
            )
        except:
            print("you haven't downloaded any stats")

    # -------------------------------
    # Create the tables for the database
    # -------------------------------
    def construct_tables(self):
        master_file = open(self.__FILE)
        columns = master_file.readline().strip().split(",")
        sql_cmd = "DROP TABLE IF EXISTS " + self.table_name + ";\n"
        sql_cmd += "CREATE TABLE " + self.table_name + " (\n"
        sql_cmd += (
            "id_ SERIAL PRIMARY KEY,\n"
            + columns[0] + " INTEGER,\n"
            + columns[1] + " VARCHAR(50),\n"
            + columns[2] + " FLOAT,\n"
            + columns[3] + " INTEGER,\n"
            + columns[4] + " FLOAT,\n"
            + columns[5] + " INTEGER,\n"
            + columns[6] + " FLOAT,\n"
            + columns[7] + " INTEGER,\n"
            + columns[8] + " VARCHAR(10),\n"
            + columns[9] + " VARCHAR(50));"
        )
        CUR.execute(sql_cmd)
        CONN.commit()

    # -------------------------------
    # Copy data from CSV files created in smogon_pull.py into database
    # -------------------------------
    def fill_tables(self):
        master_file = open(self.__FILE, "r")
        columns = tuple(master_file.readline().strip().split(","))
        CUR.copy_from(
            master_file,
            self.table_name,
            columns=columns,
            sep=","
        )
        CONN.commit()

    # -------------------------------
    # Disconnect from database.
    # -------------------------------
    def close_db(self):
        CUR.close()
        print("Cursor closed.")
        CONN.close()
        print("Connection to server closed.")

if __name__ == "__main__":
    manager = DB_Manager()
    print("connected")
    manager.construct_tables()
    print("table made")
    manager.fill_tables()
    print("filled")
As I said, everything worked fine before, but now I'm getting this unexpected error and I'm not sure how to trace it back. The name "alan" is not in any of my credentials, which is what confuses me.
I'm not running the script via the CLI, but through my text editor (in this case VS Code).

So the reason this didn't work is that I was pointing to the wrong directory for my .env file: dotenv_path = join(dirname(__file__), '.env') needs to "walk" up one more level to find my .env. (Because the .env was never loaded, the PG_* variables all came back as None, and libpq then falls back to the operating-system username, which is why the role "alan" appeared even though it isn't in my credentials.) I changed it to dotenv_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '.env')) and it worked. Just in case someone else has a similar issue, that might be something to check!
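In code, the corrected loading looks like this (a minimal sketch, assuming the .env file sits one directory above the script):

import os
from dotenv import load_dotenv

# .env lives one level above the scripts/ directory, so walk up once:
dotenv_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '.env'))
load_dotenv(dotenv_path)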

Might be unrelated, but double check your ports if using multiple instances: I also got psycopg2.OperationalError: FATAL: role "myUser" does not exist when I wanted to log in to one PostgreSQL database running on (default) port 5432 with credentials which I had set up in another instance running on port 5433...
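A quick way to rule that out (illustrative values, not from the posts above) is to pass the port explicitly, so you know exactly which instance you are hitting:

import psycopg2 as pg2

# The role was created on the instance listening on 5433,
# so connect there instead of the default 5432.
conn = pg2.connect(
    database="mydb",
    user="myUser",
    password="secret",
    host="localhost",
    port="5433"
)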

Related

How to ignore (_log) tables while taking a backup of a MySQL database using Python 3?

My question is: I have 10 databases (a, b, c, d, etc.) and every database has 2 to 4 log tables (a_admin_log, a_status_log, a_api_log).
I want to take a backup of a single DB, chosen by user input (like a), and have the backup skip all the log tables. The code is working fine, but how do I implement ignoring the _log tables? Please help.
import mysql.connector as m
import os
import time
import datetime
import pipes
import array

connection = m.connect(host='localhost', user='root', password='')
cursor = connection.cursor()
data = ("show databases")
cursor.execute(data)
dblist = []
for data in cursor:
    st = (str(data).lstrip("('")).rstrip("',)")
    dblist.append(st)
print(dblist)

def check_user_input(input):
    try:
        val = input
        if val in dblist:
            print("Given Database " + val + " has matched")
            dumpcmd = "mysqldump -h localhost -u root -p " + val + " > " + val + ".sql"
            os.system(dumpcmd)
            print("Backup has been completed")
        else:
            print("Given Database Name " + val + " has not matched")
    except NameError:
        print("Something else went wrong")

input1 = input("Enter DB Name for Export:")
check_user_input(input1)

def check_user_input(input2):
    try:
        val2 = (input2)
        dumpcm = "mysql -hlocalhost -uroot -p " + val2 + " < " + val2 + ".sql"
        os.system(dumpcm)
        print("Restoration has been completed")
    except ValueError:
        print("nottt")

input2 = input("Enter DB Name for Import:")
check_user_input(input2)
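For what it's worth, mysqldump supports an --ignore-table=db.table flag, so one approach (a sketch, assuming every log table name ends in _log) is to list the tables of the chosen database first and build one flag per log table:

# Sketch: build --ignore-table flags for every *_log table in the chosen DB.
cursor.execute("SHOW TABLES FROM " + val)
log_tables = [row[0] for row in cursor.fetchall() if row[0].endswith("_log")]
flags = " ".join("--ignore-table=" + val + "." + t for t in log_tables)
dumpcmd = "mysqldump -h localhost -u root -p " + flags + " " + val + " > " + val + ".sql"
os.system(dumpcmd)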

Error closing postgres connection between each function

I have two Python functions that handle an event in a Lambda function and are essentially the same thing. When checking the logs in AWS I get the following error:
{
  "errorMessage": "local variable 'post_connection' referenced before assignment",
  "errorType": "UnboundLocalError",
  "stackTrace": [
    " File \"/var/task/etl_python/handler.py\", line 11, in handle\n EtlCall.run_bc(event)\n",
    " File \"/var/task/etl_python/elt_call.py\", line 153, in run_bc\n if post_connection:\n"
  ]
}
My code looks like this:
def run_bo(event):
    s3_resource = boto3.resource('s3')
    idv_endpoint = os.getenv('DB_ENDPOINT')
    idv_database = os.getenv("DB_NAME")
    filename = 'staging/aml_bo'
    bucket = os.getenv('BILLING_ETL')
    if 'resources' in event and "psql_billing" in event['resources'][0]:
        try:
            config = VaultService()
            s3_resource = boto3.resource('s3')
            idv_endpoint = os.getenv('DB_ENDPOINT')
            idv_database = os.getenv("DB_NAME")
            filename = 'staging/billing_bo'
            bucket = os.getenv('BILLING_ETL')
            idv_username = config.get_postgres_username()
            idv_password = config.get_postgres_password()
            post_connection = psycopg2.connect(user=idv_username,
                                               password=idv_password,
                                               host=idv_endpoint,
                                               port="5432",
                                               database=idv_database)
            cursor = post_connection.cursor()
            bo_qry = "SELECT uuid\
                ,first_name, middle_initial, last_name, date(date_of_birth) \
                mailing_address, public.bo"
            # Might not need the next two lines but this should work.
            query = """COPY ({}) TO STDIN WITH (FORMAT csv, DELIMITER '|', QUOTE '"', HEADER TRUE)""".format(bo_qry)
            file = StringIO()
            cursor.copy_expert(query, file)
            s3_resource.Object(bucket, f'{filename}.csv').put(Body=file.getvalue())
            cursor.close()
        except (Exception, psycopg2.Error) as error:
            print("Error connecting to postgres instance", error)
        finally:
            if post_connection:
                cursor.close()
                post_connection.close()
        # return "SUCCESS"
    else:
        # Unknown notification
        # raise Exception(f'Unexpected event notification: {event}')
        print("Cannot make a solid connection to psql instance. Please check code configuration")

def run_bc(event):
    if 'resources' in event and "psql_billing" in event['resources'][0]:
        try:
            config = VaultService()
            s3_resource = boto3.resource('s3')
            idv_endpoint = os.getenv('DB_ENDPOINT')
            idv_database = os.getenv("DB_NAME")
            filename = 'staging/billing_bc'
            bucket = os.getenv('BILLING_ETL')
            idv_username = config.get_postgres_username()
            idv_password = config.get_postgres_password()
            post_connection = psycopg2.connect(user=idv_username,
                                               password=idv_password,
                                               host=idv_endpoint,
                                               port="5432",
                                               database=idv_database)
            cursor = post_connection.cursor()
            bc_qry = "select id, uuid, document_type, image_id, \
                document_id \
                from public.bc"
            # Might not need the next two lines but this should work.
            query = """COPY ({}) TO STDIN WITH (FORMAT csv, DELIMITER '|', QUOTE '"', HEADER TRUE)""".format(bc_qry)
            file = StringIO()
            cursor.copy_expert(query, file)
            s3_resource.Object(bucket, f'{filename}.csv').put(Body=file.getvalue())
            cursor.close()
        except (Exception, psycopg2.Error) as error:
            print("Error connecting to postgres instance", error)
        finally:
            if post_connection:
                cursor.close()
                post_connection.close()
        # return "SUCCESS"
    else:
        # Unknown notification
        # raise Exception(f'Unexpected event notification: {event}')
        print("Cannot make a solid connection to psql instance. Please check code configuration")
I don't understand how my connection can be unbound if I close the cursor and the connection at the end of each function and then reopen them in the next. I close them once the data is dumped to my file, and then create a new connection in the next function.
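For what it's worth, the traceback points at the finally block: if psycopg2.connect() (or anything before it) raises, post_connection is never assigned, so the if post_connection: test in finally references an unbound local. A minimal sketch of the usual fix (the function name and the SELECT 1 probe are illustrative, not from the original code) is to bind the names before the try:

import psycopg2

def run_query_safely(user, password, host, database):
    # Bind the names up front so the finally block can always test them.
    post_connection = None
    cursor = None
    try:
        post_connection = psycopg2.connect(user=user, password=password,
                                           host=host, port="5432",
                                           database=database)
        cursor = post_connection.cursor()
        cursor.execute("SELECT 1")
    except (Exception, psycopg2.Error) as error:
        print("Error connecting to postgres instance", error)
    finally:
        if cursor is not None:
            cursor.close()
        if post_connection is not None:
            post_connection.close()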

Using the ftputil package for FTP backup from Odoo

I'm trying to take a database backup from Odoo ERP using ftputil. This is the code:
# -*- coding: utf-8 -*-
from odoo import models, fields, api, tools, _
from odoo.exceptions import Warning
import odoo
from odoo.http import content_disposition
import logging
_logger = logging.getLogger(__name__)
import os
import datetime
try:
    from xmlrpc import client as xmlrpclib
except ImportError:
    import xmlrpclib
import time
import base64
import socket
try:
    import ftputil
except ImportError:
    raise ImportError(
        'This module needs ftputil to automatically write backups to the FTP through ftp. Please install ftputil on your system. (sudo pip3 install ftputil)')

def execute(connector, method, *args):
    res = False
    try:
        res = getattr(connector, method)(*args)
    except socket.error as error:
        _logger.critical('Error while executing the method "execute". Error: ' + str(error))
        raise error
    return res

class db_backup(models.Model):
    _name = 'db.backup'

    #api.multi
    def get_db_list(self, host, port, context={}):
        uri = 'http://' + host + ':' + port
        conn = xmlrpclib.ServerProxy(uri + '/xmlrpc/db')
        db_list = execute(conn, 'list')
        return db_list

    #api.multi
    def _get_db_name(self):
        dbName = self._cr.dbname
        return dbName

    # Columns for local server configuration
    host = fields.Char('Host', required=True, default='localhost')
    port = fields.Char('Port', required=True, default=8069)
    name = fields.Char('Database', required=True, help='Database you want to schedule backups for',
                       default=_get_db_name)
    folder = fields.Char('Backup Directory', help='Absolute path for storing the backups', required='True',
                         default='/odoo/backups')
    backup_type = fields.Selection([('zip', 'Zip'), ('dump', 'Dump')], 'Backup Type', required=True, default='zip')
    autoremove = fields.Boolean('Auto. Remove Backups',
                                help='If you check this option you can choose to automatically remove the backup after xx days')
    days_to_keep = fields.Integer('Remove after x days',
                                  help="Choose after how many days the backup should be deleted. For example:\nIf you fill in 5 the backups will be removed after 5 days.",
                                  required=True)

    # Columns for external server (SFTP)
    sftp_write = fields.Boolean('Write to external server with sftp',
                                help="If you check this option you can specify the details needed to write to a remote server with SFTP.")
    sftp_path = fields.Char('Path external server',
                            help='The location to the folder where the dumps should be written to. For example /odoo/backups/.\nFiles will then be written to /odoo/backups/ on your remote server.')
    sftp_host = fields.Char('IP Address SFTP Server',
                            help='The IP address from your remote server. For example 192.168.0.1')
    sftp_port = fields.Integer('SFTP Port', help='The port on the FTP server that accepts SSH/SFTP calls.', default=22)
    sftp_user = fields.Char('Username SFTP Server',
                            help='The username where the SFTP connection should be made with. This is the user on the external server.')
    sftp_password = fields.Char('Password User SFTP Server',
                                help='The password from the user where the SFTP connection should be made with. This is the password from the user on the external server.')
    days_to_keep_sftp = fields.Integer('Remove SFTP after x days',
                                       help='Choose after how many days the backup should be deleted from the FTP server. For example:\nIf you fill in 5 the backups will be removed after 5 days from the FTP server.',
                                       default=30)
    send_mail_sftp_fail = fields.Boolean('Auto. E-mail on backup fail',
                                         help='If you check this option you can choose to automatically get e-mailed when the backup to the external server failed.')
    email_to_notify = fields.Char('E-mail to notify',
                                  help='Fill in the e-mail where you want to be notified that the backup failed on the FTP.')

    #api.multi
    def _check_db_exist(self):
        self.ensure_one()
        db_list = self.get_db_list(self.host, self.port)
        if self.name in db_list:
            return True
        return False

    _constraints = [(_check_db_exist, _('Error ! No such database exists!'), [])]

    #api.multi
    def test_sftp_connection(self, context=None):
        self.ensure_one()
        # Check if there is a success or fail and write messages
        messageTitle = ""
        messageContent = ""
        error = ""
        has_failed = False
        for rec in self:
            db_list = self.get_db_list(rec.host, rec.port)
            pathToWriteTo = rec.sftp_path
            ipHost = rec.sftp_host
            portHost = rec.sftp_port
            usernameLogin = rec.sftp_user
            passwordLogin = rec.sftp_password
            # Connect with external server over SFTP, so we know for sure that everything works.
            try:
                with ftputil.FTPHost(ipHost, usernameLogin, passwordLogin) as s:
                    messageTitle = _("Connection Test Succeeded!\nEverything seems properly set up for FTP back-ups!")
            except Exception as e:
                _logger.critical('There was a problem connecting to the remote ftp: ' + str(e))
                error += str(e)
                has_failed = True
                messageTitle = _("Connection Test Failed!")
                if len(rec.sftp_host) < 8:
                    messageContent += "\nYour IP address seems to be too short.\n"
                messageContent += _("Here is what we got instead:\n")
            finally:
                if s:
                    s.close()
        if has_failed:
            raise Warning(messageTitle + '\n\n' + messageContent + "%s" % str(error))
        else:
            raise Warning(messageTitle + '\n\n' + messageContent)

    #api.model
    def schedule_backup(self):
        conf_ids = self.search([])
        for rec in conf_ids:
            db_list = self.get_db_list(rec.host, rec.port)
            if rec.name in db_list:
                try:
                    if not os.path.isdir(rec.folder):
                        os.makedirs(rec.folder)
                except:
                    raise
                # Create name for dumpfile.
                bkp_file = '%s_%s.%s' % (time.strftime('%Y_%m_%d_%H_%M_%S'), rec.name, rec.backup_type)
                file_path = os.path.join(rec.folder, bkp_file)
                uri = 'http://' + rec.host + ':' + rec.port
                conn = xmlrpclib.ServerProxy(uri + '/xmlrpc/db')
                bkp = ''
                try:
                    # try to backup database and write it away
                    fp = open(file_path, 'wb')
                    odoo.service.db.dump_db(rec.name, fp, rec.backup_type)
                    fp.close()
                except Exception as error:
                    _logger.debug(
                        "Couldn't backup database %s. Bad database administrator password for server running at http://%s:%s" % (
                            rec.name, rec.host, rec.port))
                    _logger.debug("Exact error from the exception: " + str(error))
                    continue
            else:
                _logger.debug("database %s doesn't exist on http://%s:%s" % (rec.name, rec.host, rec.port))

            # Check if user wants to write to SFTP or not.
            if rec.sftp_write is True:
                try:
                    # Store all values in variables
                    dir = rec.folder
                    pathToWriteTo = rec.sftp_path
                    ipHost = rec.sftp_host
                    portHost = rec.sftp_port
                    usernameLogin = rec.sftp_user
                    passwordLogin = rec.sftp_password
                    _logger.debug('sftp remote path: %s' % pathToWriteTo)
                    try:
                        with ftputil.FTPHost(ipHost, usernameLogin, passwordLogin) as sftp:
                            pass
                    except Exception as error:
                        _logger.critical('Error connecting to remote server! Error: ' + str(error))
                    try:
                        sftp.chdir(pathToWriteTo)
                    except IOError:
                        # Create directory and subdirs if they do not exist.
                        currentDir = ''
                        for dirElement in pathToWriteTo.split('/'):
                            currentDir += dirElement + '/'
                            try:
                                sftp.chdir(currentDir)
                            except:
                                _logger.info('(Part of the) path didn\'t exist. Creating it now at ' + currentDir)
                                # Make directory and then navigate into it
                                sftp.mkdir(currentDir, 777)
                                sftp.chdir(currentDir)
                                pass
                        sftp.chdir(pathToWriteTo)
                    # Loop over all files in the directory.
                    for f in os.listdir(dir):
                        if rec.name in f:
                            fullpath = os.path.join(dir, f)
                            if os.path.isfile(fullpath):
                                try:
                                    sftp.StatResult(os.path.join(pathToWriteTo, f))
                                    _logger.debug(
                                        'File %s already exists on the remote FTP Server ------ skipped' % fullpath)
                                # This means the file does not exist (remote) yet!
                                except IOError:
                                    try:
                                        # sftp.put(fullpath, pathToWriteTo)
                                        sftp.upload(fullpath, os.path.join(pathToWriteTo, f))
                                        _logger.info('Copying File %s ------ success' % fullpath)
                                    except Exception as err:
                                        _logger.critical(
                                            'We couldn\'t write the file to the remote server. Error: ' + str(err))
                    # Navigate in to the correct folder.
                    sftp.chdir(pathToWriteTo)
                    # Loop over all files in the directory from the back-ups.
                    # We will check the creation date of every back-up.
                    for file in sftp.listdir(pathToWriteTo):
                        if rec.name in file:
                            # Get the full path
                            fullpath = os.path.join(pathToWriteTo, file)
                            # Get the timestamp from the file on the external server
                            timestamp = sftp.StatResult(fullpath).st_atime
                            createtime = datetime.datetime.fromtimestamp(timestamp)
                            now = datetime.datetime.now()
                            delta = now - createtime
                            # If the file is older than days_to_keep_sftp (the days to keep that the
                            # user filled in on the Odoo form) it will be removed.
                            if delta.days >= rec.days_to_keep_sftp:
                                # Only delete files, no directories!
                                if sftp.isfile(fullpath) and (".dump" in file or '.zip' in file):
                                    _logger.info("Delete too old file from SFTP servers: " + file)
                                    sftp.unlink(file)
                    # Close the SFTP session.
                    sftp.close()
                except Exception as e:
                    _logger.debug('Exception! We couldn\'t back up to the FTP server..')
                    # At this point the SFTP backup failed. We will now check if the user wants
                    # an e-mail notification about this.
                    if rec.send_mail_sftp_fail:
                        try:
                            ir_mail_server = self.env['ir.mail_server']
                            message = "Dear,\n\nThe backup for the server " + rec.host + " (IP: " + rec.sftp_host + ") failed. Please check the following details:\n\nIP address SFTP server: " + rec.sftp_host + "\nUsername: " + rec.sftp_user + "\nPassword: " + rec.sftp_password + "\n\nError details: " + tools.ustr(e) + "\n\nWith kind regards"
                            msg = ir_mail_server.build_email("auto_backup#" + rec.name + ".com", [rec.email_to_notify],
                                                             "Backup from " + rec.host + "(" + rec.sftp_host + ") failed",
                                                             message)
                            ir_mail_server.send_email(self._cr, self._uid, msg)
                        except Exception:
                            pass

            """
            Remove all old files (on local server) in case this is configured..
            """
            if rec.autoremove:
                dir = rec.folder
                # Loop over all files in the directory.
                for f in os.listdir(dir):
                    fullpath = os.path.join(dir, f)
                    # Only delete the ones which are from the current database
                    # (Makes it possible to save different databases in the same folder)
                    if rec.name in fullpath:
                        timestamp = os.stat(fullpath).st_ctime
                        createtime = datetime.datetime.fromtimestamp(timestamp)
                        now = datetime.datetime.now()
                        delta = now - createtime
                        if delta.days >= rec.days_to_keep:
                            # Only delete files (which are .dump and .zip), no directories.
                            if os.path.isfile(fullpath) and (".dump" in f or '.zip' in f):
                                _logger.info("Delete local out-of-date file: " + fullpath)
                                os.remove(fullpath)
I can't get past this logger: _logger.critical('We couldn\'t write the file to the remote server. Error: ' + str(err))
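One thing that stands out (an observation from reading the code, not confirmed in the thread): the connection is opened with "with ftputil.FTPHost(...) as sftp: pass", and that with block closes the session as soon as it exits, so every later sftp call, including the upload, runs against a closed host and lands in that critical logger. A minimal sketch that keeps the session open for the whole transfer, reusing the question's variable names and ftputil's documented path checks:

import os
import ftputil

# Sketch: keep the FTP session open for the whole upload.
with ftputil.FTPHost(ipHost, usernameLogin, passwordLogin) as sftp:
    sftp.chdir(pathToWriteTo)
    for f in os.listdir(dir):
        fullpath = os.path.join(dir, f)
        if not os.path.isfile(fullpath):
            continue
        remote = pathToWriteTo.rstrip('/') + '/' + f
        if sftp.path.isfile(remote):
            _logger.debug('File %s already exists on the remote server ------ skipped' % fullpath)
        else:
            sftp.upload(fullpath, remote)
            _logger.info('Copying file %s ------ success' % fullpath)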

Reading Apache Airflow active connections programmatically

I have set up the connection below in Apache Airflow under Admin --> Connections.
How do I read its values programmatically inside my DAG?
def check_email_requests():
    conn = Connection(conn_id="artnpics_api_calls")
    print(conn)
    hostname = conn.host
    login_name = conn.login
    login_password = conn.password
    port_number = conn.port
    print("hostname = " + hostname + "; Login name: " + login_name + "; password = " + login_password + " ; port number = " + port_number)
    request_api = hostname + ":" + port_number
    print("request api " + request_api)
    result = requests.get(request_api, auth=(login_name, login_password)).json()
    print(result)
    print("done with check_email_requests")
    return False
The above obviously did not work, and I couldn't find any information on how to read from the connections (there are numerous articles on how to create one programmatically). My objective is to read the API connection and authentication information programmatically and invoke the call, rather than hard-coding it.
Rhonald
You can do:
from airflow.hooks.base import BaseHook
conn = BaseHook.get_connection("artnpics_api_calls")
hostname = conn.host
login_name = conn.login
login_password = conn.password
port_number = conn.port
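Applied to the function in the question, that looks roughly like this (a sketch; note that conn.port comes back as an integer, so format it into the URL rather than concatenating strings):

from airflow.hooks.base import BaseHook
import requests

def check_email_requests():
    conn = BaseHook.get_connection("artnpics_api_calls")
    # Build the endpoint from the stored connection; conn.port is an int.
    request_api = f"{conn.host}:{conn.port}"
    result = requests.get(request_api, auth=(conn.login, conn.password)).json()
    print(result)
    return False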

Python database connection in a function

I'm trying to create a function for the database connection in Python, but it is not working.
Here is my code for the definition.
def connect():
    dsn = cx_Oracle.makedsn(host='MYHOST', sid='DEVPRON', port=1521)
    conn = cx_Oracle.connect(user='root', password='***', dsn=dsn)
    cur = conn.cursor()
    return [cur, conn]
I return conn and cur every time I call the connect function. Here is my code when I am calling the function:
connect()[0].execute("insert into tbluser (fullname,nickname) values ('" + fname + "', '" + nname + "') ")
connect()[1].commit()
When I run this, no error occurs, but when I check the database, there is no inserted row. Please help. Thanks.
Each time you call your connect function you are creating a new connection to the database server. So, your first call executes a query. The second call gives you a new connection. You're committing with this new connection, but there have been no changes. Try this instead:
def connect():
    dsn = cx_Oracle.makedsn(host='MYHOST', sid='DEVPRON', port=1521)
    conn = cx_Oracle.connect(user='root', password='***', dsn=dsn)
    cur = conn.cursor()
    return cur, conn

cur, conn = connect()
cur.execute("insert into tbluser (fullname,nickname) values ('" + fname + "', '" + nname + "') ")
conn.commit()
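As an aside (not part of the original answer): cx_Oracle also supports named bind variables, which avoids the quoting problems of building the INSERT by string concatenation:

cur, conn = connect()
# Named bind variables; the driver handles quoting and escaping.
cur.execute(
    "insert into tbluser (fullname, nickname) values (:fullname, :nickname)",
    {"fullname": fname, "nickname": nname},
)
conn.commit()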
You can also connect to the database from Python using SQLAlchemy. Here is the code:
from sqlalchemy import create_engine
engine = create_engine('oracle://host:port/database', echo=True)
conn = engine.connect()
result = conn.execute(query)
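Filled in with the values from the question, the Oracle URL would look roughly like this (illustrative; SQLAlchemy's oracle:// dialect expects user:password@host:port/sid):

from sqlalchemy import create_engine, text

# Credentials, host, and SID taken from the question; replace with real values.
engine = create_engine('oracle://root:***@MYHOST:1521/DEVPRON', echo=True)
with engine.connect() as conn:
    result = conn.execute(text("select fullname, nickname from tbluser"))
    for row in result:
        print(row)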
