Connecting to MongoHQ from heroku console (heroku run python) - python

I'm getting a 'need to login' error when trying to interact with my MongoHQ database through python console on heroku:
...
File "/app/.heroku/venv/lib/python2.7/site-packages/pymongo/helpers.py", line 128, in _check_command_response
raise OperationFailure(msg % response["errmsg"])
pymongo.errors.OperationFailure: command SON([('listDatabases', 1)]) failed: need to login
My applicable code
app/__init__.py:
from mongoengine import connect
import settings
db = connect(settings.DB, host=settings.DB_HOST, port=settings.DB_PORT, username=settings.DB_USER, password=settings.DB_PASS)
app/settings.py:
import os
from urlparse import urlparse

if 'MONGOHQ_URL' in os.environ:
    url = urlparse(os.environ['MONGOHQ_URL'])
    DB = url.path[1:]
    DB_HOST = url.hostname
    DB_PORT = url.port
    DB_USER = url.username
    DB_PASS = url.password
os.environ['MONGOHQ_URL'] looks like:
'mongodb://[username]:[password]@[host]:[port]/[db-name]'
This code works (connects and can read and write to mongodb) both locally and from the heroku web server.
According to the docs (http://www.mongodb.org/display/DOCS/Connections), it should at least make a 'login' attempt on connection to the server, as long as the username and password params are passed to Connection or are parseable from the URI. I couldn't think of a way to check whether the login attempt was being made and failing silently.
I've tried bypassing mongoengine and using pymongo.Connection directly and got the same result. I tried all of the several patterns of using the Connection method. I created a new database user, different from the one MongoHQ creates for Heroku's production access, with the same result.
It's a Flask app, but I don't think any app code is being touched.
Update
I found a solution, but it will cause some headaches. I can manually connect and authenticate to the database with:
conn = connect(settings.DB, host=settings.DB_HOST, port=settings.DB_PORT, username=settings.DB_USER, password=settings.DB_PASS)
db = conn[settings.DB]
db.authenticate(settings.DB_USER, settings.DB_PASS)
Update #2
Mongolab just worked out of the box.

Please use the URI method for connecting and pass the information via the host kwarg, e.g.:
connect("testdb_uri", host='mongodb://username:password@localhost/mongoenginetest')
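On Heroku that means you can hand the MONGOHQ_URL config var to connect unchanged. A minimal sketch, assuming the var is set and the database name is the path component of the URI (Python 2, matching the traceback above):
import os
from urlparse import urlparse

from mongoengine import connect

# with the URI form, mongoengine passes the credentials through to pymongo,
# which should authenticate on connect
mongo_uri = os.environ['MONGOHQ_URL']
db_name = urlparse(mongo_uri).path.lstrip('/')
db = connect(db_name, host=mongo_uri)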

The MongoHQ add-on uses password hashes, not actual passwords, and that's perhaps the source of the error.
You should change the environment variable MONGOHQ_URL to a real password with the following command:
heroku config:set MONGOHQ_URL=mongodb://...
Once set, you may need to restart your applications (heroku apps) so the change gets picked up. If you're in the directory of the failing application, running config:set on the config var will restart the application for you.

Related

How to authenticate a local PostgreSQL server to access Google Cloud Storage

I am new to the cloud and to data engineering as well.
I have a large csv file stored in a GCS bucket. I would like to write a python script to bulk-insert the data into a postgresql database on my local machine using a COPY statement. I cannot figure out the authentication though.
I would like to do something like this:
import psycopg2

conn = psycopg2.connect(database=database,
                        user=user,
                        password=password,
                        host=host,
                        port=port)
cursor = conn.cursor()

file = 'https://storage.cloud.google.com/<my_project>/<my_file.csv>'
sql_query = f"COPY <MY_TABLE> FROM {file} WITH CSV"
cursor.execute(sql_query)
conn.commit()
conn.close()
I get this error message:
psycopg2.errors.UndefinedFile: could not open file "https://storage.cloud.google.com/<my_project>/<my_file.csv>" for reading: No such file or directory
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
The same happens when I run the query in psql.
I assume the problem is in authentication. I have set up Application Default Credentials with the Google Cloud CLI, and when acting as the authenticated user I can easily download the file using wget. When I switch to the postgres user, I get an "access denied" error.
ADC seems to work only with client libraries and command-line tools.
I use Ubuntu 22.04.1 LTS.
Thanks for any help.
This is not going to work for you. The file needs to be in a location readable by the server process, and it can't be fetched over HTTP (COPY expects a local file path).
You can supply a program/script that fetches the file and prints it to STDOUT, which the server can consume (COPY ... FROM PROGRAM).
Or do what the error message suggests and handle it client-side with psycopg2's COPY support.
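A minimal sketch of that client-side route, assuming the google-cloud-storage client library is installed and your ADC can read the bucket; the bucket, object, and table names are placeholders, not taken from the question:
import io

import psycopg2
from google.cloud import storage

gcs = storage.Client()  # picks up Application Default Credentials
blob = gcs.bucket('my-bucket').blob('my_file.csv')

buf = io.BytesIO()
blob.download_to_file(buf)  # for very large files, download to a temp file instead
buf.seek(0)

# same connection parameters as in the question
conn = psycopg2.connect(database=database, user=user, password=password,
                        host=host, port=port)
with conn, conn.cursor() as cursor:
    # COPY ... FROM STDIN streams the data over the client connection,
    # so the server never needs filesystem or GCS access
    cursor.copy_expert("COPY my_table FROM STDIN WITH CSV", buf)
conn.close()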

PyMongo authentication with docker

I'm having some problems authenticating a newly created user in MongoDB. My setup is MongoDB 4.4.2 in a container and Python 3.8.
I have created a user as follows:
from pymongo import MongoClient
host = "mongodb://root_user:root_password@172.20.0.3:27017"
DB_NAME = "test"
client = MongoClient(host)
test_db = client[DB_NAME]
test_db.command("createUser", "TestUser", pwd="TestPwd", roles=["readWrite"])
So far, so good: I simply added TestUser to the test database, and when I query the collection with client.system.users.find({'user': 'TestUser'}), I get the test user with db: test.
Now if I want to test this user connection with
host = "mongodb://TestUser:testPwd@172.20.0.3:27017"
it shows an authentication failure: pymongo.errors.OperationFailure: Authentication failed.
I can connect via the shell inside the container, but not via pymongo. I already tried connecting with the authentication mechanism and the authentication database specified, and neither worked so far.
Any hints would be much appreciated!
Two issues.
As the commenter notes, you are creating the user in the test database; by default MongoDB will look for credentials in the admin database if authSource is not specified. Therefore you will need to append /<optional database name>?authSource=test to your connection string.
You created your account with password TestPwd, but in the connection string you have testPwd, so this won't authenticate.
So, assuming your password is definitely TestPwd, your connection string should be:
mongodb://TestUser:TestPwd@172.20.0.3:27017/test?authSource=test
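A quick sketch of verifying the fix from Python; the ping and collection listing are just illustrative checks, not from the question:
from pymongo import MongoClient

# authSource=test tells the driver to authenticate against the database
# where the user was actually created
client = MongoClient("mongodb://TestUser:TestPwd@172.20.0.3:27017/test?authSource=test")

client.admin.command("ping")  # force a round trip so auth errors surface immediately
print(client["test"].list_collection_names())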

How do you find the parameters for conn = psycopg2.connect(dbname=, user=, password=, host = )

I pushed a Python / Django app to Heroku. The app uses a Postgres database. I now want to access this database from a Raspberry Pi running a simple Python program using
conn = psycopg2.connect(dbname=, user=, password=, host = )
The app used SQLite originally until pushed to Heroku.
How do you find the parameters to use for dbname=, user=, password=, host =?
Heroku will set an environment variable DATABASE_URL:
As part of the provisioning process, a DATABASE_URL config var is added to your app’s configuration. This contains the URL your app uses to access the database.
You should use that to connect, since your credentials can change without notice. psycopg2 can use this value directly, e.g.
import os
import psycopg2

database_url = os.getenv(
    'DATABASE_URL',
    default='postgres://localhost/postgres',  # e.g., for local dev
)

connection = psycopg2.connect(database_url)
Edit: Your Raspberry Pi application won't have access to this directly, but you can use the Heroku CLI to query your config vars:
heroku config:get DATABASE_URL --app your-heroku-app-name
Something like this should work from recent versions of Python:
import subprocess

database_url = subprocess.run(
    ['heroku', 'config:get', 'DATABASE_URL', '--app', 'your-heroku-app-name'],
    stdout=subprocess.PIPE,
    text=True,  # decode stdout to str (Python 3.7+)
).stdout.strip()
Navigate to https://data.heroku.com/
Select your database.
Select the Settings tab.
Click View Credentials.
You should be able to see host, database name, user, port, and password.
See also: Heroku Postgres Credentials
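If you specifically need the separate dbname/user/password/host/port values for psycopg2.connect, a minimal sketch of splitting DATABASE_URL yourself (nothing here beyond the standard library and psycopg2):
import os
from urllib.parse import urlparse

import psycopg2

url = urlparse(os.environ['DATABASE_URL'])

conn = psycopg2.connect(
    dbname=url.path.lstrip('/'),
    user=url.username,
    password=url.password,
    host=url.hostname,
    port=url.port,
)
Keep in mind the credentials can rotate, so prefer re-reading DATABASE_URL over hard-coding the pieces.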

How to fix Firebird error 'Database already opened with engine instance, incompatible with current'

I have a Flask app using flask_sqlalchemy:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config.from_pyfile(filename='settings.py', silent=True)
db = SQLAlchemy(app=app)
I want to connect to the same database from a daemon. In the daemon I just import db and use db.engine.execute for SQLAlchemy queries.
But when the daemon starts, the main app can't connect to the database.
In the log I see this:
fdb.fbcore.DatabaseError: ('Error while connecting to database:\n- SQLCODE:
-902\n- I/O error during "lock" operation for file "main.fdb"\n- Database
already opened with engine instance, incompatible with current', -902,
335544344)
I tried using an isolation level:
from fdb.fbcore import ISOLATION_LEVEL_READ_COMMITED_LEGACY

class TPBAlchemy(SQLAlchemy):
    def apply_driver_hacks(self, app_, info, options):
        if 'isolation_level' not in options:
            options['isolation_level'] = ISOLATION_LEVEL_READ_COMMITED_LEGACY
        return super(TPBAlchemy, self).apply_driver_hacks(app_, info, options)
And replace this:
db = SQLAlchemy()
To:
db = TPBAlchemy()
But this only produces another error:
TypeError: Invalid argument(s) 'isolation_level' sent to create_engine(),
using configuration FBDialect_fdb/QueuePool/Engine. Please check that the
keyword arguments are appropriate for this combination of components.
I would appreciate the full example to address my issue.
Your connection string is for an embedded database. You're only allowed to have one 'connection' to an embedded database at a time.
If you have the Loopback provider enabled you can change your connection string to something like:
localhost:/var/www/main.fdb
or, if you have the Remote provider enabled, you will have to access your database as a remote node; assuming your Firebird server lives on 192.168.1.100, change your connection string to
192.168.1.100:/var/www/main.fdb
If you're intending to use the Engine12 provider (the embedded provider), then you have to stop whatever is already connected to that database, because you just can't have two simultaneous users with this provider.
Also, try to set up some database aliases so you aren't specifying a database explicitly like that. In Firebird 3.0.3 check out databases.conf, where you can do something like:
mydatabasealias=/var/www/main.fdb
and your connection string would now be mydatabasealias instead of the whole path.
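As an illustration only, a sketch of what the Flask-SQLAlchemy setting might then look like; the credentials are the stock Firebird defaults, not values from the question:
# settings.py -- alias resolved via databases.conf, Loopback provider enabled
SQLALCHEMY_DATABASE_URI = 'firebird+fdb://SYSDBA:masterkey@localhost/mydatabasealias'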

flask-restless with mod_wsgi can't connect to MySQL server

I am trying to run a flask-restless app in apache using mod_wsgi. This works fine with the development server. I have read everything I can find and none of the answers I have seen seem to work for me. The app handles non-database requests properly but gives the following error when I try to access a url that requires a database access:
OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 13] Permission denied)") None None
I have whittled it down to basically the flask-restless quick-start, with my config and my flask-sqlalchemy models imported (from app import models). Here is my Python code:
import flask
import flask.ext.sqlalchemy
import flask.ext.restless
import sys
sys.path.insert(0, '/proper/path/to/application')
application = flask.Flask(__name__, static_url_path = "")
application.debug=True
application.config.from_object('config')
db = flask.ext.sqlalchemy.SQLAlchemy(application)
from app import models
# Create the Flask-Restless API manager.
manager = flask.ext.restless.APIManager(application, flask_sqlalchemy_db=db)
# Create API endpoints, which will be available at /api/<tablename> by
# default. Allowed HTTP methods can be specified as well.
manager.create_api(models.Asset, methods=['GET'])
# start the flask loop
if __name__ == '__main__':
    application.run()
I assume that mod_wsgi isn't having a problem finding the config file that contains the database access details, since I don't get an error when reading the config, and I also don't get an error on from app import models.
My research so far has led me to believe that this has something to do with the sql-alchemy db connection existing in the wrong scope or context and possibly complicated by the flask-restless API manager. I can't seem to wrap my head around it.
Your code under Apache/mod_wsgi will run as a special Apache user. That user likely doesn't have the privileges required to connect to the database.
Even though it says 'localhost', which you might think implies a normal TCP connection, some database clients see 'localhost' and automatically try to use the database's UNIX socket instead. The Apache user may not have access to that UNIX socket.
Alternatively, when going through the UNIX socket the database validates whether the Apache user has access; if the database hasn't been set up to allow the Apache user in, the connection will fail.
Consider using daemon mode of mod_wsgi and configure daemon mode to run as a different user to the Apache user and one you know has access to the database.
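A hedged sketch of such a daemon-mode configuration; the process name, user/group, and paths are placeholders, not from the question:
# Apache virtual host snippet: run the app in its own daemon process
# as an account that is allowed to connect to MySQL
WSGIDaemonProcess flaskapp user=appuser group=appgroup processes=2 threads=15
WSGIProcessGroup flaskapp
WSGIScriptAlias / /proper/path/to/application/app.wsgi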
