There is a Postgres database that I connect to with SQLAlchemy.
I currently have the database's connection parameters (database name, host, port, username, password) all hard coded in the Python file. I want to change that.
I read here that one should store these parameters in environment variables. Of the five connection parameters, what should I store in environment variables?
Obviously I will store the password, but should I additionally store the username and host? What is the convention here?
Putting settings in environment variables isn't just about security. It's also about flexibility. Anything that's likely to change between environments is a good candidate to be put in environment variables.
Consider your database. Is it likely that the host, user name, and database name might differ between environments? I suspect so. Many projects use a database on localhost, or on a Docker container called db in docker-compose.yml, in development, and a dedicated database server or hosted database in production.
A common pattern is to encode your entire database connection string in a single environment variable DATABASE_URL. The format¹ is something like
<engine>://<user>:<password>@<host>:<port>/<database>
For example, you might use something like
postgresql://db_user:password@localhost/app_db
Many database libraries, including SQLAlchemy, can connect to databases using this single string directly. (Note that SQLAlchemy requires the postgresql:// scheme; it no longer accepts the older postgres:// alias.)
¹This is a specialization of regular URL syntax.
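If DATABASE_URL is set in the environment, connecting becomes a one-liner. A minimal sketch (the variable name and URL value are illustrative):

import os
from sqlalchemy import create_engine

# e.g. DATABASE_URL=postgresql://db_user:password@localhost:5432/app_db
engine = create_engine(os.environ["DATABASE_URL"])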
Why hardcode anything? Just move all of these parameters to environment variables.
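For example, a minimal sketch that reads all five parameters from the environment and builds the SQLAlchemy URL (the variable names are illustrative; URL.create requires SQLAlchemy 1.4+):

import os
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

url = URL.create(
    drivername="postgresql",
    username=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    host=os.environ["DB_HOST"],
    port=int(os.environ.get("DB_PORT", 5432)),
    database=os.environ["DB_NAME"],
)
engine = create_engine(url)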
One way to do this, from a security point of view, is as follows.
Assume we classify the password as sensitive data and want to encrypt only the password. The rest of the information can live in environment variables or in config files.
1) Have a random, server-specific salt generated when the encryption program is invoked. This value is saved to a file; let's call it salt.bin.
2) Change the permissions of the salt.bin file so that it is readable only by the operating system user that will run your program.
3) Have security personnel/an entrusted person enter the password into the encryption program and save the encrypted value to a file; let's call it db_config.bin.
4) Change the permissions of the db_config.bin file so that it is readable only by the operating system user that will run your program.
Now, at program execution time, the program reads the salt.bin and db_config.bin files and decrypts db_config.bin using salt.bin (the salt effectively acts as the decryption key). The program then uses this password, along with the host, port, and other details from the config files, to connect to the database.
All of the above can be accomplished with Python. See here.
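A minimal sketch of steps 1-4 using the cryptography package's Fernet (an assumption; the answer names no library). Note that the "salt" here serves as the encryption key:

import os
from cryptography.fernet import Fernet

# One-time setup, run by the entrusted person (steps 1-4):
key = Fernet.generate_key()            # server-specific secret, i.e. salt.bin
with open("salt.bin", "wb") as f:
    f.write(key)
os.chmod("salt.bin", 0o400)            # readable only by the service user

password = input("Enter DB password: ").encode()
with open("db_config.bin", "wb") as f:
    f.write(Fernet(key).encrypt(password))
os.chmod("db_config.bin", 0o400)

# At program start-up:
with open("salt.bin", "rb") as f:
    key = f.read()
with open("db_config.bin", "rb") as f:
    db_password = Fernet(key).decrypt(f.read()).decode()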
A Django settings file includes sensitive information such as the secret_key, the password for database access, etc., which is unsafe to keep hard-coded in the settings file. I have come across various suggestions as to how this information can be stored more securely, including putting it into environment variables or separate configuration files. The bottom line seems to be that this keeps the keys out of version control (in addition to the added convenience of using different values in different environments), but that on a compromised system this information can still be accessed by a hacker.
Is there any extra benefit from a security perspective if sensitive settings are kept in a data vault / password manager and then retrieved at run-time when settings are loaded?
For example, to include in the settings.py file (when using pass):
import subprocess

# retrieve the secret from the `pass` password store at import time
SECRET_KEY = subprocess.check_output("pass SECRET_KEY", shell=True).strip().decode("utf-8")
This spawns a new shell process and returns output to Django. Is this more secure than setting through environment variables?
I think a data vault/password manager solution just transfers responsibility; the risk is still there. When deploying Django in production, the server should be treated as carefully as a data vault: a firewall, fail2ban, an up-to-date OS, and so on must all be in place. Then, in my opinion, there is nothing wrong or less secure about having a settings.py file with a config parser reading a config.ini file (declared in your .gitignore!) where all your sensitive information lives.
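For instance, a minimal sketch of that settings.py + config.ini pattern, assuming a [database] section in config.ini (the file name and keys are illustrative):

import configparser

config = configparser.ConfigParser()
config.read("config.ini")
db = config["database"]

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": db["name"],
        "USER": db["user"],
        "PASSWORD": db["password"],
        "HOST": db["host"],
        "PORT": db["port"],
    }
}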
I would like some help with the parameters passed in the connection string of a .py file that tries to connect to my Oracle APEX workspace database:
connection = cx_Oracle.connect("user", "password", "dbhost.example.com/dbinstance", encoding="UTF-8")
On the login page at "apex.oracle.com", we have to provide the following information: WORKSPACE, USERNAME, and PASSWORD.
Can I assume that the "user" parameter is equal to the USERNAME info, the "password" parameter is equal to the PASSWORD info and the "dbinstance" parameter is equal to the WORKSPACE info?
And what about the hostname? What value is expected for that parameter? How do I find it?
Thank you very much for any support.
Those parameters are not equivalent. An APEX workspace is a logical construct that exists only within APEX; it does not correspond to a physical database instance. Username and password do not necessarily correspond to database users, as APEX is capable of multiple methods of authentication.
APEX itself runs entirely within a single physical database. An APEX instance supports multiple logical workspaces, each of which may have its own independent APEX user accounts that often (usually) do not correspond to database users at all. APEX-based apps may have entirely separate authentication methods of their own, too, and generally do not use the same users defined for the APEX workspaces.
When an APEX application does connect to a database to run, it connects as a proxy user using an otherwise unprivileged database account like APEX_PUBLIC_USER.
If you want to connect Python to APEX, you would have to connect like you would any other web app: through the URL using whatever credentials are appropriate to the user interface and then parsing the HTML output, or through an APEX/ORDS REST API (that you would have to first build and deploy).
If you want to connect to the database behind APEX, then you would need an appropriately provisioned database (not APEX) account, credentials and connectivity information provided by the database administrator.
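For that second case, a minimal sketch with cx_Oracle, assuming the DBA has provided the account and connection details (the host, port, service name, and credentials below are illustrative):

import cx_Oracle

# build a DSN from the connectivity information supplied by the DBA
dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="dbinstance")
connection = cx_Oracle.connect("db_user", "db_password", dsn, encoding="UTF-8")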
I am trying to remotely connect to a MongoDB database but don't want to store the password for the database in plaintext in the code. What's a good method for encrypting/decrypting the password so it's not available to anyone with the source code? The source code will be on GitHub.
I'm working with Python and PyMongo to connect to the database. The database has authentication enabled in the mongod.conf file. The database is hosted on an Ubuntu 18.04 instance running in AWS.
It would also be nice to have the IP address of the server encrypted, as I've had security issues before with people accessing the database because the code was available on GitHub and presumably scraped by bots.
My current URI looks like this:
URI = "mongo serverip --username mongo --authenticationDatabase admin -p"
I would like the IP address and password to be encrypted in some way so that the password and IP aren't publicly available in the source code.
There is only one simple way:
If you don't want the password and the server name to be included in your public repository, don't write them into a file that is pushed to that repository.
One way to do so would be to create a config file for the secret data and add it to the .gitignore file. At run time, open the config file, read the secret data from it, and use it in your script (see the sketch after this answer).
Another way would be to provide the secret data (password and server name) as command-line parameters to your script.
Any other way that "encrypts" (obfuscates) the password is insecure as long as the repository also contains the key, obvious or hidden. That can be decoded with a little effort.
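A minimal sketch of the config-file approach with PyMongo, assuming a secrets.ini listed in .gitignore; the file name, section, and keys are illustrative:

import configparser
from pymongo import MongoClient

config = configparser.ConfigParser()
config.read("secrets.ini")
mongo = config["mongodb"]

# neither the server address nor the password appears in the source
client = MongoClient(
    host=mongo["host"],
    username=mongo["username"],
    password=mongo["password"],
    authSource="admin",
)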
All the options provided by Robert make complete sense. However, I would like to add one more:
You can store the username and password as environment variables in your .bash_profile and access the corresponding variables in Python.
Example:
In .bash_profile:
export USRNM='myname'
export PASS='password'
In Python:
import os
username = os.environ.get('USRNM')
password = os.environ.get('PASS')
This way, the username and password will not be present in your project directory and can't be accessed by looking at the source code.
PS: Further encryption can be added to the password string stored in .bash_profile.
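Tying this back to the original question, a minimal sketch that builds the MongoDB connection from those variables (MONGO_HOST is an additional illustrative variable for the server address, so it stays out of the source too):

import os
from pymongo import MongoClient

client = MongoClient(
    host=os.environ.get("MONGO_HOST"),
    username=os.environ.get("USRNM"),
    password=os.environ.get("PASS"),
    authSource="admin",
)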
I'd like to create a Jenkins job that backs up and deploys certain databases to a remote MongoDB instance. I'd like this build to be parameterized so that at build time the user first chooses from a list of valid MongoDB hostnames; once the user selects a hostname, a second choice parameter is dynamically populated with all valid database names on that host. Once the user has selected the DB name, it is stored in a parameter "DB" that can be passed to an "Execute Shell" build step to do the actual work.
My problem is that I need a way for the Jenkins Dynamic Parameter (Cascading) Plug-in to run a shell (or, ideally, Python) script that returns a list of valid DB names on the selected host. I'm not able to get the Groovy script portion of the plugin to execute shell commands on the local OS (like the "Execute Shell" build step does).
Ideally I'd like to run something like this, where "MONGOHOST" is the first parameter chosen by the user:
#!/usr/bin/env python
import os
from pymongo import MongoClient

# assumes MONGOHOST is made available as an environment variable
client = MongoClient(f"mongodb://{os.environ['MONGOHOST']}:27017/")
choicelist = client.list_database_names()  # database_names() is deprecated in newer PyMongo
client.close()
I'd then like "choicelist" to be presented in such a way as they become populated as the available choices for a "DB" parameter.
How can I achieve this, especially since the Dynamic Choice parameter only accepts Groovy script and not native Python?
Usually the dynamic parameter plugin just loads the options from simple ini files. So, if you want to update the list of available options, you just have to update these files on the Jenkins instance.
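A minimal sketch of keeping such an options file current, assuming the plugin reads its choices from /var/lib/jenkins/db_choices.txt (an illustrative path) and that this script runs periodically, e.g. from cron, on the Jenkins host:

import os
from pymongo import MongoClient

host = os.environ.get("MONGOHOST", "localhost")
client = MongoClient(f"mongodb://{host}:27017/")
names = client.list_database_names()
client.close()

# rewrite the options file that the parameter plugin reads from
with open("/var/lib/jenkins/db_choices.txt", "w") as f:
    f.write("\n".join(names))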
BTW, if you are trying to implement a self-service portal, you may want to have a look at RunDeck, which I discovered recently; it seems considerably more user-friendly than Jenkins.
The question is this: imagine that I want to create a deploy script using the 'fabric' deploy library, which has to specify the FTP credentials of the server you want to deploy to. The idea is that I would like to store this script on our testing server and, from that server, deploy remotely to other servers. I would like to create a user account for each developer, but I don't want to share the FTP credentials with them; rather, I want to give them only the executable. If I create a Python executable and add it to /usr/bin, for instance, they will be able to execute it, but with a 'which mycommand' they can also find the source, which contains the credentials. What can I do to avoid this?
Thanks!!
If you care about security, you probably should be using scp or sftp instead. These can be set up to not require any keystrokes, while still having decent security. For more see: http://www.debian-administration.org/articles/152
However, if you really want/need to use FTP, you probably should put the credentials in a file and chmod it to mode 400: r-------- or perhaps 440: r--r-----. Embedding credentials in a script isn't a great idea.
Put the credentials in a file to which the individual developers have no access.
Create an account that DOES have access to that file but DOES NOT allow interactive logons.
Create your FTP submission program and make it runnable by this second account.
Put all the developers in a group (e.g. "Devs").
Add an entry in the sudoers file to allow members of the "Devs" group to run the FTP program without additional authentication. This will be something like:
%Devs ALL=(ALL) NOPASSWD: /path/to/FTPscript
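A minimal sketch of what the FTP submission program itself might look like, assuming the credentials live in /etc/deploy/ftp.ini, readable only by the second account (the path, section, keys, and file name are illustrative):

import configparser
from ftplib import FTP

# read credentials from a file the developers themselves cannot open
config = configparser.ConfigParser()
config.read("/etc/deploy/ftp.ini")
creds = config["ftp"]

ftp = FTP(creds["host"])
ftp.login(creds["user"], creds["password"])
with open("release.tar.gz", "rb") as f:
    ftp.storbinary("STOR release.tar.gz", f)
ftp.quit()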