I have a script that connects to a third-party API using a public key and a secret key. The connection is written in Python and works in PowerBI Desktop as well as in the web app in production.
However, the keys are hard-coded into the script, which doesn't feel like best practice. Is there a way to use environment variables in PowerBI so I can remove the keys from the script?
I was just working on this today! I was able to store my credentials as environment variables on my machine and then read them in my Python script using os.getenv("SECRET_KEY"), etc.
I did have to restart PowerBI Desktop after saving them.
Additional information:
os is a Python standard-library module for interfacing with your local machine. os.getenv reads the environment variables stored on your system. Windows users can create environment variables through System Properties > Environment Variables; Mac users typically set them in the terminal, e.g. with export lines in their shell profile.
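For example, a minimal sketch of the pattern (the variable names PUBLIC_KEY and SECRET_KEY are assumptions; use whatever names you created on your system):

    import os

    # Read credentials from the environment instead of hard-coding them.
    # os.getenv returns None if the variable isn't set, so fail loudly.
    public_key = os.getenv("PUBLIC_KEY")
    secret_key = os.getenv("SECRET_KEY")
    if public_key is None or secret_key is None:
        raise RuntimeError("PUBLIC_KEY/SECRET_KEY environment variables not set")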
Related
As the title says, I have a Python script I wrote that I would like to allow others to use.
The script is an API aggregator, and it requires a client_id and secret to access the API. As of now I have a .env file which stores these values, and I'm able to read the values from it.
My question is: now that I have finished the script locally, how do I deploy it so others can use it, given that the environment variables are required?
Sorry if this is a simple question - new to writing scripts for others to use.
The only thing I could think of was including the .env file when I push to GitHub, but I'm not sure that's good practice since my client_id and secret are stored there.
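For reference, the local setup described above looks roughly like this — a sketch assuming python-dotenv and hypothetical variable names:

    # .env (keep this file out of source control via .gitignore):
    #   CLIENT_ID=...
    #   CLIENT_SECRET=...
    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    load_dotenv()  # copies key=value pairs from .env into os.environ
    client_id = os.getenv("CLIENT_ID")
    client_secret = os.getenv("CLIENT_SECRET")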
I am developing an app with Ansible to generate SSL certificates that other users will be using. I currently have a private key and an API key exposed in plain text, so I used Ansible Vault to encrypt the strings. Now I am trying to run the playbook without giving the vault password in plain text, and I came across passing it dynamically with a Python script. Is there a way for the Python script to pull the vault password from the environment and hide it from other users on the server who use the playbook?
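One common approach, sketched below, is an executable password script passed via --vault-password-file; ansible-vault runs it and reads the password from its stdout. The ANSIBLE_VAULT_PASS variable name is an assumption — export whatever name you like:

    #!/usr/bin/env python3
    # vault-pass.py -- use as:
    #   ansible-playbook --vault-password-file ./vault-pass.py site.yml
    # The environment variable name below is a placeholder.
    import os
    import sys

    password = os.environ.get("ANSIBLE_VAULT_PASS")
    if not password:
        sys.exit("ANSIBLE_VAULT_PASS is not set")
    print(password)

Note the script must be marked executable; otherwise Ansible treats the file as a plain text file containing the password itself.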
tl;dr: Does Ansible have a variable containing the current Python interpreter?
As part of my playbook, I am creating a Python script on the controller (to be run by another command), and I want that script to be run by the Python interpreter being used by Ansible. To do this I am trying to set the interpreter in the shebang of the script.
If I were to set the interpreter manually, I could use the ansible_python_interpreter variable (and I have had it working that way). If I don't set the interpreter manually, then Ansible will auto-discover an interpreter, but I can no longer use the ansible_python_interpreter variable because it is not set.
From looking through the documentation I have been unable to find any way to see which interpreter Ansible has auto-detected. Is there something I've missed?
(Ansible version 2.9.10, Python 3.6)
The complete situation:
I am running Ansible on AWX (open-source Ansible Tower), using a custom virtual environment as the runner. I use Hashicorp Vault as a secret management system, rather than keeping secrets in AWX. For access to Vault I use short-lived access tokens, which doesn't work well with AWX's built-in support for pulling secrets from Vault, so instead I do it manually (so that I can supply a Vault token at job launch time). That works well for me, generally.
In this particular case, I am running ansible-vault (yes, there are too many things called 'vault') on the controller to decrypt a secret. I am using the --vault-password-file argument to supply the decryption password via a script. Since the virtual env that I am using already has the hvac package installed, I wish to just use a brief Python script to pull the password from Hashicorp Vault. All works fine, except that I can't figure out how to set the shebang on this script to point at the virtual environment that Ansible is using.
If I can't get a usable answer to this, I suppose I can change to instead pull the password directly into Ansible and then use the --ask-vault-pass flag to pass it that way. It just seems to me that the interpreter should really be exposed somewhere by Ansible, so I'm trying this first.
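For context, the password script itself is short. A minimal sketch, assuming the hvac client, a KV v2 secrets engine, and placeholder path/key names (the shebang is exactly the part in question):

    #!/usr/bin/env python3  # <-- want Ansible's own interpreter here
    import os
    import hvac

    # VAULT_ADDR and VAULT_TOKEN are assumed to be in the environment;
    # the secret path and key below are placeholders for illustration.
    client = hvac.Client(url=os.environ["VAULT_ADDR"],
                         token=os.environ["VAULT_TOKEN"])
    resp = client.secrets.kv.v2.read_secret_version(path="ansible/vault")
    print(resp["data"]["data"]["password"])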
As described in Special Variables, the ansible_playbook_python variable holds the path to the Python interpreter being used by Ansible on the controller.
With gather_facts: yes, you should be able to get the active Python using the ansible_facts.python variable.
Let's say I have some code running on a Heroku dyno (such as this autoscaling script), that needs access to the Platform API. To access the API, I have to authenticate using my app's API Key.
What's the right way to do this?
The script I referenced hard-codes the API key in the script itself.
A better practice generally is to put secrets in environment variables, which is what Heroku normally recommends. However, they warn: "Setting the HEROKU_API_KEY environment variable on your machine will interfere with normal functioning of auth commands from Toolbelt."
Clearly I could store the API key under a different key name.
What's the right way? I couldn't find this in the documentation, but it seems like a common issue.
Yes, storing this token in a config var is the right way to go.
As for HEROKU_API_KEY, that warning applies because, locally, the Toolbelt looks for that environment variable as one way to fetch your token.
This won't impact your production environment (the Heroku Toolbelt isn't available within dynos).
Locally, you can also set it easily with a tool like python-dotenv, which lets you keep a local .env file (don't check it into source control, or your token could be compromised) with all of its values available as environment variables in your dev app.
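A minimal local sketch of that, assuming python-dotenv and a hypothetical variable name such as PLATFORM_API_KEY (picked to avoid the HEROKU_API_KEY clash):

    import os
    from dotenv import load_dotenv  # pip install python-dotenv

    # Locally this pulls values from .env; on Heroku the config var is
    # already in the environment, so the call is effectively a no-op.
    load_dotenv()
    api_key = os.getenv("PLATFORM_API_KEY")  # name is an assumption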
So I am creating a local Python script which I plan to export as an executable. However, this script needs a MongoDB instance running in the background as a service or daemon. How could one include this MongoDB service along with their own ported application?
I have this configuration manually installed on my own computer: a MongoDB database running as a local Windows service, and Python, with my script adding to and removing from the database as events fire. Is there any way to distribute this setup without a manual installation of Python and MongoDB?
If you want to include installations of all your utilities, I recommend pynsist. It'll allow you to make a Windows installer that makes your code launchable as an app on the client's system, and it can include any other files and/or folders that you want.
Py2exe converts Python scripts and their dependencies into Windows executable files. It has some limitations, but it may work for your application.
You might also get away with not installing Mongo by embedding something like EJDB in your application: https://github.com/Softmotions/ejdb. This may require you to rewrite your data-access code.
If you can't or don't want to do that, you could have all your clients share a multi-tenant Mongo instance that you host somewhere in the cloud.
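In that hosted scenario the client-side code barely changes; a rough sketch assuming pymongo, a placeholder connection string, and one database per tenant:

    from pymongo import MongoClient  # pip install pymongo

    # The connection string and names below are placeholders.
    client = MongoClient("mongodb+srv://user:pass@cluster.example.net/")
    db = client["tenant_acme"]                   # one database per client
    db["events"].insert_one({"type": "fired"})   # add a record on an event
    db["events"].delete_one({"type": "fired"})   # remove it on another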
Finally, if you can't or won't convert your Python script to a standalone exe with an embedded database, and you don't want to host a shared Mongo instance for your clients, there are legions of installer builders that make deploying Mongo and Python, setting up an execution environment, creating services, etc., pretty easy. Some are free, some cost money. A long list can be found here: https://en.m.wikipedia.org/wiki/List_of_installation_software