I've installed Ansible on Ubuntu and running:
ansible testserver -m linode -a 'state=stopped'
gives the error:
testserver | FAILED >> {
"failed": true,
"msg": "linode-python required for this module"
}
I installed linode-python successfully with pip install linode-python and I can run import linode in Python. So how can I get this module working?
Just to be sure: you have to install linode-python on the remote machine, not on the host.
Actually I realised this should be a local action, because we're not actually trying to run a command on the remote server. Which means I have to run this against localhost. So first I had to ensure I could ssh into localhost:
cd ~/.ssh; cat id_rsa.pub >> authorized_keys
Then I changed the machine to localhost:
ansible localhost -m linode -a 'state=stopped ...'
I'm still having some issues with that, but it seems to be running the module now.
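For reference, the SSH-into-localhost step can be avoided entirely by using a local connection; a minimal playbook sketch (the linode_id and api_key values are placeholders):

```yaml
# stop-linode.yml -- run the linode module as a local action,
# so no SSH connection to localhost is needed
- hosts: localhost
  connection: local
  tasks:
    - name: Stop the Linode
      linode:
        state: stopped
        linode_id: 12345
        api_key: "{{ linode_api_key }}"
```

The same effect is available ad hoc with ansible localhost --connection=local -m linode -a 'state=stopped ...'.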
Blogged it.
I am using python SDK package to run docker from python.
Here is the docker command I tried to run using python package:
docker run -v /c/Users/msagovac/pdf_ocr:/home/docker jbarlow83/ocrmypdf-polyglot --skip-text 0ce9d58432bf41174dde7148486854e2.pdf output.pdf
Here is a python code:
import docker
client = docker.from_env()
client.containers.run('jbarlow83/ocrmypdf-polyglot', '--skip-text "0ce9d58432bf41174dde7148486854e2.pdf" "output.pdf"', "-v /c/Users/msagovac/pdf_ocr:/home/docker")
The error says file not found. I am not sure where to set the run options:
-v /c/Users/msagovac/pdf_ocr:/home/docker
Try with named parameters:
client.containers.run(
    image='jbarlow83/ocrmypdf-polyglot',
    command='--skip-text "0ce9d58432bf41174dde7148486854e2.pdf" "output.pdf"',
    volumes={'/c/Users/msagovac/pdf_ocr': {'bind': '/home/docker', 'mode': 'rw'}},
)
Also, it seems the path of the volume to mount is incorrect; try with C:/Users/msagovac/pdf_ocr.
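If the mount path is the problem, a tiny helper can normalize the Git-Bash style path into the Windows form before building the volumes mapping that containers.run() expects — a sketch, with a helper name of my own:

```python
# Sketch: build the ``volumes`` dict for docker-py's containers.run(),
# converting a Git-Bash style path like /c/Users/... to C:/Users/...
# (the helper name to_volumes is hypothetical).

def to_volumes(host_path, container_path, mode="rw"):
    """Return a docker-py volumes mapping for one bind mount."""
    # /c/Users/foo -> C:/Users/foo
    if len(host_path) > 2 and host_path[0] == "/" and host_path[2] == "/":
        host_path = host_path[1].upper() + ":" + host_path[2:]
    return {host_path: {"bind": container_path, "mode": mode}}

volumes = to_volumes("/c/Users/msagovac/pdf_ocr", "/home/docker")
# -> {'C:/Users/msagovac/pdf_ocr': {'bind': '/home/docker', 'mode': 'rw'}}
```

The resulting dict can then be passed as the volumes= keyword argument shown above.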
How can I connect to an Oracle database server from Python on a Unix server?
I can't install any packages like cx_Oracle, pyodbc, etc.
Please note that even pip is not available to install.
It is my UNIX production server, so I have a lot of restrictions.
I tried running the SQL script from the sqlplus command and it works.
Ok, so there is sqlplus and it works, this means that oracle drivers are there.
Try to proceed as follows:
1) create a python virtualenv in your $HOME. In python3
python -m venv $HOME/my_venv
2) activate it
source $HOME/my_venv/bin/activate[.csh] # .csh is for cshell, for bash otherwise
3) install pip using the Python binary from your new virtualenv; it is well described here: https://pip.pypa.io/en/stable/installing/
TL;DR:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py (this should install pip into your virtualenv as $HOME/my_venv/bin/pip[3])
4) install cx_Oracle:
pip install cx_Oracle
Now you should be able to import it in your python code and connect to an oracle DB.
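For instance, a minimal connection sketch (host, port, service name and credentials are placeholders, and the actual connect call is commented out since it needs a live database):

```python
# Sketch of connecting once cx_Oracle is installed in the virtualenv.
# All connection details below are placeholders -- substitute your own.

def ez_connect_dsn(host, port, service_name):
    """Build an EZConnect string (host:port/service) accepted by cx_Oracle.connect()."""
    return "{}:{}/{}".format(host, port, service_name)

dsn = ez_connect_dsn("dbhost.example.com", 1521, "ORCLPDB1")

# import cx_Oracle
# conn = cx_Oracle.connect("scott", "tiger", dsn)
# cur = conn.cursor()
# cur.execute("select sysdate from dual")
# print(cur.fetchone())
# conn.close()
```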
I tried to connect to the Oracle database via SQL*Plus, and I am calling the script in the following way:
os.environ['ORACLE_HOME'] = '<ORACLE PATH>'
os.chdir('<DIR NAME>')
VARIABLE = os.popen('./script_to_Call_sql_script.sh select.sql').read()
My shell script: script_to_Call_sql_script.sh
#!/bin/bash
envFile=ENV_FILE_NAME
envFilePath=<LOCATION_OF_ENV>${envFile}
ORACLE_HOME=<ORACLE PATH>
if [[ $# -eq 0 ]]
then
    echo "USAGE: please provide the positional parameter"
    echo "`basename $0` <SQL SCRIPT NAME>"
    exit 1
fi
ECR=`$ORACLE_HOME/bin/sqlplus -s /@<server_name> <<EOF
set pages 0
set head off
set feed off
@$1
exit
EOF`
echo $ECR
The above helped me get my work done on the production server.
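As a side note, subprocess.run (Python 3.5+) is a more robust alternative to os.popen for calling such a wrapper script, since it lets you check the exit status — a sketch with placeholder paths:

```python
import os
import subprocess

def run_sql_script(shell_script, sql_file, oracle_home, cwd=None):
    """Run the wrapper shell script and return its stdout, raising on failure.

    Arguments here are placeholders -- pass your real script path and
    ORACLE_HOME value.
    """
    # Inherit the current environment, overriding ORACLE_HOME
    env = dict(os.environ, ORACLE_HOME=oracle_home)
    result = subprocess.run(
        [shell_script, sql_file],
        cwd=cwd,
        env=env,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout
```

Unlike os.popen().read(), a non-zero exit code is surfaced as an exception instead of being silently ignored.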
I am trying to set up Salt Stack for local development, but in masterless mode.
I have copied my states (top.sls, mystate.sls) to /srv/salt.
I have followed the instructions on the local development page and the salt masterless quickstart page, but when I run
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local salt.highstate -l debug
All I get is
[DEBUG ] Could not LazyLoad salt.highstate
'salt.highstate' is not available.
I'm running salt in a vagrant ubuntu/trusty64 virtualbox virtual machine on a Mac.
It seems like other modules load (I see them in the debug listing) but for some reason highstate (highstate.py?) is not being loaded.
What am I doing wrong? Is there something additional I have to do for masterless development?
I got help on #salt IRC channel from whytewolf - the problem was that the command should be state.highstate (not salt.highstate):
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local state.highstate -l debug
Problem solved!
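For reference, a minimal masterless minion config might look like this (assuming the default /srv/salt file root; adjust the config directory to your virtualenv layout):

```yaml
# <config-dir>/minion -- masterless setup
file_client: local
file_roots:
  base:
    - /srv/salt
```

With file_client: local set, salt-call can even be run without the --local flag.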
I'm trying to run a Python script I've uploaded as part of my AWS Elastic Beanstalk application from my development machine, but can't figure out how to. I believe I've located the script correctly, but when I attempt to run it under SSH, I get an import error.
For example, I have a Flask-Migrate migration script as part of my application (pretty much the same as the example in the documentation), but after successfully SSHing to my EB instance with
> eb ssh
and locating the script with
$ sudo find / -name migrate.py
when I run in the directory (/opt/python/current) where I located it with
$ python migrate.py db upgrade
at the SSH prompt I get
Traceback (most recent call last):
File "db_migrate.py", line 15, in <module>
from flask.ext.script import Manager
ImportError: No module named flask.ext.script
even though my requirements.txt (present along with the rest of my files in the same directory) has flask-script==2.0.5.
On Heroku I can accomplish all of this in two steps with
> heroku run bash
$ python migrate.py db upgrade
Is there equivalent functionality on AWS? How do I run a Python script that is part of an application I uploaded in an AWS SSH session? Perhaps I'm missing a step to set up the environment in which the code runs?
To migrate your database, the best approach is to use container_commands: commands that run every time you deploy your application. There is a good example in the Elastic Beanstalk documentation (Step 6):
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
The reason you're getting an ImportError is that Elastic Beanstalk installs your packages in a virtualenv. Before running arbitrary scripts from your application over SSH, first change to the directory containing your (latest) code with
cd /opt/python/current
and then activate the virtualenv
source /opt/python/run/venv/bin/activate
and set the environment variables (that your script probably expects)
source /opt/python/current/env
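For the Flask-Migrate case in the question, a sketch of the equivalent container_commands entry (the file name is illustrative; adapt the command to your script):

```yaml
# .ebextensions/01_migrate.config (illustrative file name)
container_commands:
  01_upgrade_db:
    command: "python migrate.py db upgrade"
    leader_only: true
```

leader_only: true ensures the migration runs on a single instance per deployment rather than on every instance.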
I'm trying to use fabric to deploy a Django project and I get this error when I run hg pull:
[myusername.webfactional.com] run: hg pull
[myusername.webfactional.com] out: remote: Warning: Permanently added the RSA host key for IP address '207.223.240.181' to the list of known hosts.
[myusername.webfactional.com] out: remote: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
[myusername.webfactional.com] err: abort: no suitable response from remote hg!
Fatal error: run() encountered an error (return code 255) while executing 'hg pull'
I can run other mercurial commands like hg status, and hg log just fine from my fab file.
I have generated an SSH key on the server and added it to my bitbucket account. This works as I can SSH in and run hg pull and it works fine, it's only when using fabric.
This is my fabfile:
from __future__ import with_statement
from fabric.api import *

env.hosts = ['myusername.webfactional.com']
env.user = "myusername"

def development():
    # Update files
    local("hg push")
    with cd("~/webapps/mysite/mysite"):
        run("hg pull")

    # Update database
    with cd("~/webapps/mysite/mysite"):
        run("python2.6 manage.py syncdb")
        run("python2.6 manage.py migrate")

    # Reload apache
    run("~/webapps/mysite/apache2/bin/restart")
Any ideas?
EDIT:
Got this working using https
so instead of
hg pull
I'm using
hg pull https://myusername@bitbucket.org/myusername/mysite
Can't reproduce.
zada$ fab development
[ostars.com] Executing task 'development'
[ostars.com] run: hg pull
[ostars.com] out: pulling from ssh://hg@bitbucket.org/Zada/b
[ostars.com] out: no changes found
Done.
Disconnecting from ostars.com... done.
zada$ hg --version
Mercurial Distributed SCM (version 1.6.3)
zada$ ssh ostars.com "hg --version"
Mercurial Distributed SCM (version 1.6)
zada$ fab --version
Fabric 0.9.2
Possible reasons: version mismatch. Or just a glitch on Bitbucket :)
Try making run("hg pull") more verbose, e.g. run("hg pull -v").
To use SSH to clone, pull, or push a repository on Bitbucket you need to follow these instructions (the document covers Mercurial on Mac OS X or Linux):
https://confluence.atlassian.com/pages/viewpage.action?pageId=270827678
If you want to set up other SSH clients to work with Bitbucket, here is the full documentation:
https://confluence.atlassian.com/display/BITBUCKET/How+to+install+a+public+key+on+your+Bitbucket+account