I am trying to use azcopy from Python. I have already used it from the CLI and it is working!
I have successfully executed the following commands:
For upload:
set AZCOPY_SPA_CLIENT_SECRET=<my client secret>
azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>
azcopy copy "D:\azure\content" "https://dummyvalue.blob.core.windows.net/container1/result4" --overwrite=prompt --follow-symlinks --recursive --from-to=LocalBlob --blob-type=Detect
Similarly, for download:
azcopy copy "https://dummyvalue.blob.core.windows.net/container1/result4" "D:\azure\azcopy_windows_amd64_10.4.3\temp\result2" --recursive
Now I want to automate these commands using Python. I know that azcopy can also be used with SAS keys, but that is out of scope for my work.
First attempt:
from subprocess import call
call(["azcopy", "login", "--service-principal", "--application-id=<removed>", "--tenant-id=<removed>"])
Second attempt:
import os
os.system("azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>")
I have already set AZCOPY_SPA_CLIENT_SECRET in my environment.
I am using Python 3 on Windows.
Every time I get this error:
Failed to perform login command: service principal auth requires an
application ID, and client secret/certificate
NOTE: If your credential was created in the last 5 minutes, please
wait a few minutes and try again.
I don't want to use an Azure VM to do this job.
Could anyone please help me fix this problem?
This is because the set command does not set a permanent environment variable; it only takes effect in the current Windows cmd prompt.
You should set the environment variable manually via the UI, or use the setx command.
I did a test using your code after manually setting the AZCOPY_SPA_CLIENT_SECRET environment variable via the UI, and the code ran without issues (it may take a few minutes to take effect).
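If you would rather not touch the machine-wide environment at all, you can also hand the secret to the child process from Python itself. Here is a minimal sketch of that approach (the secret and IDs are placeholders, as in your commands):

import os
import subprocess

# Copy the current environment and add the secret for the child process only.
env = os.environ.copy()
env["AZCOPY_SPA_CLIENT_SECRET"] = "<my client secret>"

# azcopy sees the variable because we pass env= explicitly.
subprocess.run(
    ["azcopy", "login", "--service-principal",
     "--application-id=<removed>", "--tenant-id=<removed>"],
    env=env,
    check=True,
)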
I am creating a blog using Flask. However, when I try to run my code on a local server, I am unable to do so, as it fails with this error:
Error: Failed to find Flask application or factory in module "flaskblog". Use "FLASK_APP=flaskblog:name" to specify one.
I had typed "set FLASK_APP=flaskblog.py" in my terminal beforehand, then typed "flask run". What would be the next best steps to take so I can run the code on a local server? I am running this on a Windows 10 computer.
If you're using a bash CLI, try "export FLASK_APP=flaskblog.py" instead; set might only work if you are using something like Windows CMD.
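For completeness, each shell has its own syntax for setting the variable; since you are on Windows 10, note that PowerShell uses yet another form:

$env:FLASK_APP = "flaskblog.py"
flask run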
To give you some background, I have a bash script being launched from Asterisk via a Python AGI; it runs against Amazon Polly and generates a .sln file. I have this working on a CentOS server but am attempting to migrate it to a Debian server.
This is the line of code that is giving me problems:
aws polly synthesize-speech --output-format pcm --debug --region us-east-2 --profile asterisk --voice-id $voice --text "$1" --sample-rate 8000 $filename.sln >/dev/null
I keep getting this error:
ProfileNotFound: The config profile (foo) could not be found
This is an example of my /root/.aws/config
[default]
region = us-east-2
output = json
[profile asterisk]
region = us-east-2
output = json
[asterisk]
region = us-east-2
output = json
The /root/.aws/credentials looks similar but with the keys in them.
I've even tried storing all this data in environment variables and falling back to the default profile to get past this, but then it throws "unable to locate credentials", or "must define region" (I got past that one by defining the region inline). It's almost as if Asterisk is running this out of some isolated session that I can't get the config/credentials files into. From my research, and from how I set it up, it is currently running as root, so that should not be an issue.
Any help is much appreciated, thanks!
Asterisk should be run under the asterisk user for security.
Likely on your previous install it ran under root, so everything worked.
Please ensure you have set up AWS Polly for the asterisk user, or create a sudoers entry and use sudo.
If you use the System command, it also has no shell (bash), so you have to start it via a bash script and set up PATH and other required variables yourself, as sketched below.
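As a rough illustration of that last point, the Python AGI can hand the aws CLI an explicit environment rather than relying on whatever Asterisk inherits. The paths, home directory, and voice here are assumptions to adapt to your setup:

import subprocess

# Asterisk spawns the AGI without a login shell, so PATH, HOME, and the AWS
# config locations must be provided by hand (paths below are assumptions).
env = {
    "PATH": "/usr/local/bin:/usr/bin:/bin",
    "HOME": "/var/lib/asterisk",
    "AWS_CONFIG_FILE": "/var/lib/asterisk/.aws/config",
    "AWS_SHARED_CREDENTIALS_FILE": "/var/lib/asterisk/.aws/credentials",
}

subprocess.run(
    ["aws", "polly", "synthesize-speech",
     "--output-format", "pcm",
     "--region", "us-east-2",
     "--profile", "asterisk",
     "--voice-id", "Joanna",   # example voice; substitute your $voice
     "--text", "hello world",  # substitute your "$1"
     "--sample-rate", "8000",
     "/tmp/output.sln"],       # substitute your $filename.sln
    env=env,
    check=True,
)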
I'm on a Windows 10 machine. I have GPU running on the Google Cloud Platform to train deep learning models.
Historically, I have been running Jupyter notebooks on the cloud server without problems, but I recently began preferring to run Python notebooks in VS Code instead of the server-based Jupyter notebooks. I'd like to train my VS Code notebooks on my GPUs, but I don't have access to my Google instances from VS Code; I can only run locally on my CPU.
Normally, to run a typical model, I spin up my instance on the cloud.google.com Compute Engine interface. I use the Ubuntu installation on the Windows Subsystem for Linux and I get in like this:
gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
I have tried installing the Cloud Code extension on VS Code so far, but as I go through the tutorials, I always get stuck somewhere. One error I keep experiencing is that gcloud won't work in anything EXCEPT my Ubuntu terminal. I'd like it to work in the terminal inside VS Code.
Alternatively, I'd like to run the code . command on my Ubuntu command line so I can open VS Code from there, and that won't work either. I've googled a few solutions, but they lead me back to these same problems: neither gcloud nor code . works.
Edit: I just tried the Google Cloud SDK installer from https://cloud.google.com/sdk/docs/quickstart-windows
and then I tried running gcloud compute ssh from PowerShell within VS Code. This is the new error I got:
(base) PS C:\Users\user\Documents\dev\project\python> gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME -- -L 8080:localhost:8080
WARNING: The PuTTY PPK SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
ERROR: (gcloud.compute.ssh) could not parse resource []
It still runs from Ubuntu under WSL; I logged in fine. I guess I just don't know enough about how the environments are separated, what's shared, what's missing, and how to get all my command lines using the same configuration.
It seems as if your SSH key paths are configured correctly for your Ubuntu terminal but not for the VS Code one. If your account is not configured to use OS Login, with which Compute Engine stores the generated key with your user account, local SSH keys are needed. SSH keys are specific to each instance you want to access, and here is where you can find them. Once you have found them, you can specify their path using the --ssh-key-file flag.
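For example, a hypothetical invocation (the key path is an assumption; adjust it to wherever your key was actually generated) would look like:

gcloud compute ssh --zone=$ZONE jupyter@$INSTANCE_NAME --ssh-key-file=C:\Users\user\.ssh\google_compute_engine -- -L 8080:localhost:8080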
Another option is to use OS Login, as I have mentioned before.
Here is another thread with a problem similar to yours.
I need to log in to an AWS Linux server, then create a folder, add some ownership to it, and lastly restart Tomcat.
I know that I should be using Ansible or some config-management tool, and that's the easy way... but out of curiosity I want to do it using Python.
So basically, the steps that need to be followed are:
Log in to the machine
mkdir /mnt/some_new_folder
Give permissions: chown tomcat7:tomcat7 /mnt/some_new_folder
Restart Tomcat: sudo service tomcat7 restart
Lastly, log out
Is it possible to do all this via a Python script?
With open source tools like Python everything is possible. Only your knowledge sets the limit.
I would suggest using the sh module, which allows easy execution of remote commands over SSH.
sh + SSH tutorial.
You can use it like:
import sh
print(sh.ssh("username@example.com", "mkdir /foo/bar"))
First you need to set up proper SSH keys and an SSH agent.
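Putting your full sequence together with the same module, a minimal sketch might look like this (the host is a placeholder, and passwordless sudo for the remote user is assumed):

import sh

# Bind ssh to the target machine once, then reuse it for each remote command.
server = sh.ssh.bake("user@example.com")

server("sudo mkdir /mnt/some_new_folder")                  # create the folder
server("sudo chown tomcat7:tomcat7 /mnt/some_new_folder")  # set ownership
server("sudo service tomcat7 restart")                     # restart Tomcat
# Each call opens its own SSH session, so there is no explicit logout step.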
I want to query the Google Analytics API using Python to periodically download data from my Analytics account and store data in a local database. I am basically following the steps as given in the basic tutorial. I am using the Google client API library for Python in this process.
My script is working fine so far when I am running it on my local dev machine (Mac). When I start the script, my browser opens and I am prompted to grant access to my Analytics data from the app. Afterwards I can run my script as often as I want and get access to my data.
On my server (Ubuntu, only terminal available), the w3m browser opens, but I cannot access my Google account from there. I can only quit w3m and kill the program with Ctrl-C. There is an error message like:
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?scope=some_long_url&access_type=offline
If your browser is on a different machine then exit and re-run this
application with the command-line parameter
--noauth_local_webserver
However, when I run my script with the parameter --noauth_local_webserver, I get the same result: w3m opens and I cannot authenticate.
How can I get --noauth_local_webserver to work? Is there another way to authenticate without a local browser on the same machine?
When you use FLAGS = gflags.FLAGS, you actually need to pass the command-line arguments to FLAGS (this may or may not have tripped me up as well :) ). See here for an Analytics-centric example of how to do it (code below, as links tend to go away after a while). The general idea is that the argv arguments are passed into the FLAGS variable, which then become available to other modules.
# From samples/analytics/sample_utils.py in the google-api-python-client source
def process_flags(argv):
  """Uses the command-line flags to set the logging level.

  Args:
    argv: List of command line arguments passed to the python script.
  """
  # Let the gflags module process the command-line arguments.
  try:
    argv = FLAGS(argv)
  except gflags.FlagsError, e:
    print '%s\nUsage: %s ARGS\n%s' % (e, argv[0], FLAGS)
    sys.exit(1)

  # Set the logging according to the command-line flag.
  logging.getLogger().setLevel(getattr(logging, FLAGS.logging_level))
Also, it turns out that we aren't alone! You can track this bug to see when this will get added to the documentation.
You can also use GA as a service API: https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py
This works perfectly fine. Just remember to convert the P12 to an unencrypted PEM file using openssl:
$ openssl pkcs12 -in client_secrets.p12 -nodes -nocerts > client_secrets.pem
The import password is printed out when you download the P12 from the Google Developers Console.
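Once you have the PEM file, the whole flow can run headless. Here is a minimal sketch using the oauth2client/apiclient libraries of that era (the service account email and file name are placeholders from your own Developers Console project):

import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# Read the unencrypted private key converted from the P12 file.
with open("client_secrets.pem") as f:
    private_key = f.read()

# Placeholder service account email; copy yours from the Developers Console.
credentials = SignedJwtAssertionCredentials(
    "1234567890@developer.gserviceaccount.com",
    private_key,
    scope="https://www.googleapis.com/auth/analytics.readonly",
)

# Build an authorized Analytics v3 client; no browser is involved.
http = credentials.authorize(httplib2.Http())
service = build("analytics", "v3", http=http)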
I ran into the same issue and managed to solve it by SSHing into my server. Example:
ssh -L 8080:127.0.0.1:8080 <server-name>
I then ran my script through SSH. When I was presented with the URL (https://accounts.google.com/o/oauth2/auth?scope=some_long_url&access_type=offline), I copied and pasted it into the browser on my machine to complete the authentication flow.
I ran it on my PC, got a token.json, and just copied the token to the home folder on the server (i.e., the working directory of the script); that solved it.
No authentication is needed if you use the same token.