To give you some background, I have a bash script, launched from Asterisk via a Python AGI, that runs against Amazon Polly and generates a .sln file. I have this working on a CentOS server but am attempting to migrate it to a Debian server.
This is the line of code that is giving me problems:
aws polly synthesize-speech --output-format pcm --debug --region us-east-2 --profile asterisk --voice-id $voice --text "$1" --sample-rate 8000 $filename.sln >/dev/null
I keep getting this error
ProfileNotFound: The config profile (foo) could not be found
This is an example of my /root/.aws/config
[default]
region = us-east-2
output = json
[profile asterisk]
region = us-east-2
output = json
[asterisk]
region = us-east-2
output = json
The /root/.aws/credentials file looks similar, but with the keys in it.
I've even tried storing all of this data in environment variables and using the default profile to get past this, but then it throws "unable to locate credentials", or "must define region" (I got past that by defining the region inline). It's almost as if Asterisk is running this in some isolated session that the credentials and config files can't reach. From my research and the way I set it up, it is currently running as root, so that should not be an issue.
Any help is much appreciated, thanks!
Asterisk should be run under the asterisk user for security.
Likely on your previous install it was running as root, so everything worked.
Please ensure you have set up the AWS config/credentials for the asterisk user, or create a sudo entry and use sudo.
If you use the System() command, there is also no shell (bash), so you have to start it via a bash script and set up PATH and the other required variables yourself.
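For example, since the AGI is already Python, here is a minimal sketch of calling the aws CLI with an explicit environment so it can find its config and credentials even when launched from Asterisk. The home directory, file paths and voice below are assumptions; point them at files the asterisk user can actually read.

import os
import subprocess

def synthesize(text, out_path, voice="Joanna"):
    # Build an explicit environment instead of relying on whatever
    # Asterisk happens to inherit for HOME and PATH.
    env = dict(os.environ)
    env.update({
        "HOME": "/var/lib/asterisk",  # assumed home of the asterisk user
        "PATH": "/usr/local/bin:/usr/bin:/bin",
        "AWS_CONFIG_FILE": "/etc/asterisk/aws/config",  # assumed locations
        "AWS_SHARED_CREDENTIALS_FILE": "/etc/asterisk/aws/credentials",
    })
    cmd = [
        "aws", "polly", "synthesize-speech",
        "--output-format", "pcm",
        "--region", "us-east-2",
        "--profile", "asterisk",
        "--voice-id", voice,
        "--text", text,
        "--sample-rate", "8000",
        out_path,
    ]
    subprocess.run(cmd, env=env, check=True)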
I am trying to use azcopy from Python. I have already used it from the CLI and it works!
I have successfully executed the following commands:
For upload:
set AZCOPY_SPA_CLIENT_SECRET=<my client secret>
azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>
azcopy copy "D:\azure\content" "https://dummyvalue.blob.core.windows.net/container1/result4" --overwrite=prompt --follow-symlinks --recursive --from-to=LocalBlob --blob-type=Detect
Similarly, for download:
azcopy copy "https://dummyvalue.blob.core.windows.net/container1/result4" "D:\azure\azcopy_windows_amd64_10.4.3\temp\result2" --recursive
Now I want to automate these commands using Python. I know that azcopy can also be used with SAS keys, but that is out of scope for my work.
First attempt:
from subprocess import call
call(["azcopy", "login", "--service-principal", "--application-id=<removed>", "--tenant-id=<removed>"])
Second attempt:
import os
os.system("azcopy login --service-principal --application-id=<removed> --tenant-id=<removed>")
I have already set AZCOPY_SPA_CLIENT_SECRET in my environment.
I am using Python 3 on Windows.
Every time I get this error:
Failed to perform login command: service principal auth requires an
application ID, and client secret/certificate
NOTE: If your credential was created in the last 5 minutes, please
wait a few minutes and try again.
I don't want to use an Azure VM to do this job.
Could anyone please help me fix this problem?
This is because the set command does not set a permanent environment variable; it only takes effect in the current Windows cmd prompt.
You should set the environment variable manually via the UI, or try using the setx command.
I did a test using your code and manually set the AZCOPY_SPA_CLIENT_SECRET environment variable via the UI; the code then ran without issues (it may take a few minutes to take effect).
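Alternatively, if you would rather not set it machine-wide, here is a small sketch of passing the secret only to the azcopy child process from Python (the secret and IDs are placeholders):

import os
import subprocess

# Pass the client secret only to the child process instead of setting it
# machine-wide; the secret and IDs below are placeholders.
env = dict(os.environ)
env["AZCOPY_SPA_CLIENT_SECRET"] = "<my client secret>"

subprocess.run(
    ["azcopy", "login", "--service-principal",
     "--application-id=<removed>", "--tenant-id=<removed>"],
    env=env,
    check=True,
)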
Is it possible to run a Python script through the AWS CLI (user-data)? I tried, but it didn't run, and I have the following in my logs:
boot.log:2015-08-07 10:08:30,660 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: './step-1
cloud-init.log:2015-08-07 10:08:30,660 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: './step-1'
cloud-init-output.log:2015-08-07 10:08:30,660 - __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: './step-1'
Note: step-1 is the script I am trying to pass as user-data. My script is also in the same directory from which I am running the command, so it should be picked up.
The default interpreter seems to be Python, so if you simply want to execute a shell script you'll need to start it with a hash-bang, for example:
#!/bin/bash
/yourpath/step-1
Please note, in order to debug this, try: cat /var/log/cloud-init-output.log
And see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
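If you end up launching the instance from Python anyway, here is a hedged sketch of passing the script (shebang included) as user-data via boto3; the AMI ID, instance type and key name are placeholders:

import boto3

# Read the local script, including its #!/bin/bash line, and hand it to
# EC2 as user-data at launch time; the IDs below are placeholders.
with open("step-1") as f:
    user_data = f.read()

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="t2.micro",
    KeyName="my-key",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)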
You can run any command under user-data. I have used user-data to bootstrap Windows instances with a Domain Controller setup or a domain join using PowerShell; given that it is on EC2, this applies whether you are running a Unix-based or Windows-based instance.
Since you have specified Python, please ensure the following:
Python is already installed, then take an image and use that image to bootstrap.
You enable user-data and pass the user-data commands at launch time.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
The AWS documentation says that only shell scripts and cloud-init directives are supported.
I'm trying to upload a simple "hello world" app with Google App Engine (for Python). Every time I do, I get the following error:
Error 404: --- begin server output ---
This application does not exist (app_id=u'sqeekytest').
--- end server output ---
I've checked and double-checked that the app_id matches. I found a possible solution in the following thread; however, I am a complete noob at this and I'm not sure that I am doing it correctly.
This application does not exist (app_id=xxx)
It seems that the most common solution to this problem is to run appcfg.py update . --no_cookies. What I don't know is where to run it. Is it in cmd, the Python shell, or the Google Cloud SDK shell that comes with the program? I've tried it in a few different places and the only result I have gotten is PyCharm launching. Either the solution is not working for me or I am doing something dumb (more likely). I cannot figure this out.
Thanks
Open a terminal or command prompt and cd to the directory of your project
$ cd path/to/project
Then from there do
$ appcfg.py update . --no_cookies
That assumes your app.yaml is in the root directory of your project. If it's in a different directory, specify it by
$ appcfg.py update /path/to/directory/where/app.yaml/is --no_cookies
It could be that appcfg.py is not in your PATH; in that case you could add it to the PATH or simply specify its location, again from within the root directory of your project.
$ path/to/google-cloud-sdk/bin/appcfg.py update . --no_cookies
This will open a webpage in a browser asking for permission, and once you have granted it, the deploy process will automatically continue in the terminal/command prompt.
I am a newbie to EC2 and boto. I have a running EC2 instance and I want to execute a shell command such as apt-get update through boto.
I searched a lot and found a solution using user_data in the run_instances command, but what if the instance is already launched?
I don't even know if it is possible. Any pointer in this regard would be a great help.
The boto.manage.cmdshell module can be used to do this. To use it, you must have the paramiko package installed. A simple example of its use:
import boto.ec2
from boto.manage.cmdshell import sshclient_from_instance
# Connect to your region of choice
conn = boto.ec2.connect_to_region('us-west-2')
# Find the instance object related to my instanceId
instance = conn.get_all_instances(['i-12345678'])[0].instances[0]
# Create an SSH client for our instance
# key_path is the path to the SSH private key associated with instance
# user_name is the user to login as on the instance (e.g. ubuntu, ec2-user, etc.)
ssh_client = sshclient_from_instance(instance,
                                     '<path to SSH keyfile>',
                                     user_name='ec2-user')
# Run the command. Returns a tuple consisting of:
# The integer status of the command
# A string containing the output of the command
# A string containing the stderr output of the command
status, stdout, stderr = ssh_client.run('ls -al')
That was typed from memory but I think it's correct.
You could also check out Fabric (http://docs.fabfile.org/) which has similar functionality but also has much more sophisticated features and capabilities.
I think you can use Fabric for your requirements. Just check out the Fabric wrapper. You can execute commands in the remote server's shell through the Fabric library.
It is very easy to use, and you can integrate boto and Fabric; together they work brilliantly.
Plus, the same command can be executed on any number of nodes, which I believe could be one of your requirements.
Just check it out.
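As a rough sketch with Fabric's Connection API (the host, user and key path are placeholders for your instance's details):

from fabric import Connection

# Connect over SSH to the instance's public address; the host, user and
# key path below are placeholders.
conn = Connection(
    host="ec2-54-0-0-1.compute-1.amazonaws.com",
    user="ubuntu",
    connect_kwargs={"key_filename": "/path/to/key.pem"},
)

result = conn.run("sudo apt-get update", hide=True)
print(result.stdout)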
Yes, you can do this with AWS Systems Manager. AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach Instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy. This role enables the instance to communicate with the Systems Manager API.
Install SSM Agent:
The EC2 instance must have the SSM Agent installed. The SSM Agent processes the run command requests and configures the instance as per the command.
Execute command:
Example usage via AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with the EC2 instance ID.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/
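Since the question asks about doing this from Python, here is a minimal sketch of the same call via boto3 (the instance ID and region are placeholders):

import time
import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

# Send the shell command to the instance; the instance ID is a placeholder.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Comment="listing services",
    Parameters={"commands": ["service --status-all"]},
)
command_id = response["Command"]["CommandId"]

# Give the agent a moment to run, then fetch the invocation output.
time.sleep(2)
output = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(output["StandardOutputContent"])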
I want to query the Google Analytics API using Python to periodically download data from my Analytics account and store the data in a local database. I am basically following the steps given in the basic tutorial, using the Google API Client Library for Python.
My script is working fine so far when I am running it on my local dev machine (Mac). When I start the script, my browser opens and I am prompted to grant access to my Analytics data from the app. Afterwards I can run my script as often as I want and get access to my data.
On my server (Ubuntu, only terminal available), the w3m browser opens, but I cannot access my Google account from there. I can only quit w3m and kill the program with Ctrl-C. There is an error message like:
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?scope=some_long_url&access_type=offline
If your browser is on a different machine then exit and re-run this
application with the command-line parameter
--noauth_local_webserver
However, when I run my script with the parameter --noauth_local_webserver, I get the same result: w3m opens and I cannot authenticate.
How can I get --noauth_local_webserver to work? Is there another way to authenticate without a local browser on the same machine?
When you use FLAGS = gflags.FLAGS, you actually need to pass the command-line arguments to FLAGS (this may or may not have tripped me up as well :) ). See here for an Analytics-centric example of how to do it (code below, as links tend to go away after a while). The general idea is that the argv arguments are passed into the FLAGS variable, which then becomes available to other modules.
# From samples/analytics/sample_utils.py in the google-api-python-client source
def process_flags(argv):
  """Uses the command-line flags to set the logging level.

  Args:
    argv: List of command line arguments passed to the python script.
  """
  # Let the gflags module process the command-line arguments.
  try:
    argv = FLAGS(argv)
  except gflags.FlagsError, e:
    print '%s\nUsage: %s ARGS\n%s' % (e, argv[0], FLAGS)
    sys.exit(1)

  # Set the logging according to the command-line flag.
  logging.getLogger().setLevel(getattr(logging, FLAGS.logging_level))
Also, it turns out that we aren't alone! You can track this bug to see when this will get added to the documentation.
You can also use the GA API with a service account: https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py
This works perfectly fine. Just remember to convert the p12 to an unencrypted PEM file using openssl:
$ openssl pkcs12 -in client_secrets.p12 -nodes -nocerts > client_secrets.pem
The import password is printed out when you download the p12 from the Google Developers Console.
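As a rough sketch of how the service-account flow looked with the older oauth2client library that shipped alongside the Python client at the time (the service-account email and file names are placeholders, and the scope here is read-only Analytics):

import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# The service-account email is a placeholder; the PEM file comes from the
# openssl conversion shown above.
with open("client_secrets.pem", "rb") as f:
    private_key = f.read()

credentials = SignedJwtAssertionCredentials(
    "my-service-account@developer.gserviceaccount.com",
    private_key,
    scope="https://www.googleapis.com/auth/analytics.readonly",
)

http = credentials.authorize(httplib2.Http())
analytics = build("analytics", "v3", http=http)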
I ran into the same issue and managed to solve it by SSHing into my server. Example:
ssh -L 8080:127.0.0.1:8080 <server-name>
I then ran my script through SSH. When I was presented with the URL (https://accounts.google.com/o/oauth2/auth?scope=some_long_url&access_type=offline), I copied and pasted it into the browser on my machine to complete the authentication flow.
I ran it on my PC, got a token.json, and just copied the token to the server's home folder (i.e., the working directory of the script); that solved it.
No authentication is needed if you use the same token.