Run local commands with Fabric - python

My environments are based on Windows, with Vagrant or Docker as the actual environments. I'd like to set up a quick way of deploying stuff ad hoc directly from Windows, though. It would be great if I could just run
fab deploySomething
and that would, for example, locally build a React app, commit, and push to the server. However, I'm stuck at the local bit.
My setup is:
Windows 10
Fabric 2
Python 3
I've got a fabfile.py set up with a simple test:
from fabric import Connection, task, Config
@task
def deployApp(context):
    config = Config(overrides={'user': 'XXX', 'connect_kwargs': {'password': 'YYY'}})
    c = Connection('123.123.123.123', config=config)
    # c.local('echo ---------- test from local')
    with c.cd('../../app/some-app'):
        c.local('dir')  # this is correct
        c.local('yarn install', echo=True)
But I'm just getting:
'yarn' is not recognized as an internal or external command, operable program or batch file.
You can replace 'yarn' with pretty much anything; I can't run any command with local that works fine when run manually. With debugging on, all I get is:
DEBUG:invoke:Received a possibly-skippable exception: <UnexpectedExit: cmd='cd ../../app/some-app && yarn install' exited=1>
which isn't very helpful... Has anyone come across this? Any examples of local commands with Fabric I can find seem to refer to the old 1.x versions.

To run local commands, run them off of your context and not your connection. n.b., this drops you to the invoke level:
from fabric import task
@task
def hello(context):
    with context.cd('/path/to/local_dir'):
        context.run('ls -la')
That said, the issue is probably that you need the fully qualified path to yarn, since your environment's PATH hasn't been sourced.
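For example, a minimal sketch of what that could look like (the yarn.cmd path below is an assumption; point it at wherever yarn actually lives on your machine):
from fabric import task

@task
def deployApp(context):
    # Run the local build through the invoke-level context, not the Connection.
    # NOTE: this path is an assumption; adjust it to your actual yarn install.
    yarn = r'"C:\Program Files (x86)\Yarn\bin\yarn.cmd"'
    with context.cd('../../app/some-app'):
        context.run(yarn + ' install', echo=True)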

Related

AWS Batch: /usr/local/bin/python: cannot execute binary file

I built an AWS Batch compute environment. I want to run a Python script in jobs.
Here is the Dockerfile I'm using:
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
ENTRYPOINT ["/bin/bash"]
The command in my task definition is:
python /runscript.py
When I submit a job in AWS console I get this error in CloudWatch:
/usr/local/bin/python: /usr/local/bin/python: cannot execute binary file
And the job gets the status FAILED.
What is going wrong? When I run the container locally, I can launch the script without any errors.
Delete your ENTRYPOINT line, and replace it with a CMD that says what the container should actually run.
There are two parts to the main command that a Docker container runs, ENTRYPOINT and CMD; these are combined together into one command when the container starts. The command your container is running is probably something like
/bin/bash python /runscript.py
So bash finds a python in its $PATH (successfully), and tries to run it as a shell script (leading to that error).
You don't strictly need an ENTRYPOINT, and here it's causing trouble. Conversely, there's usually a single thing you want the container to do, so you should just specify that in the Dockerfile.
# No ENTRYPOINT
CMD ["python", "/runscript.py"]
You can try the following Dockerfile and task definition.
Dockerfile
FROM python:slim
RUN apt-get update
RUN pip install boto3 matplotlib awscli
COPY runscript.py /
CMD ["/bin/python"]
Task Definition
["/runscript.py"]
Passing the script name in the task definition gives you the flexibility to run any script when submitting a job. Refer to the example below to submit a job and override the task definition's command.
import boto3

session = boto3.Session()
batch_client = session.client('batch')
response = batch_client.submit_job(
    jobName=job_name,
    jobQueue=AWS_BATCH_JOB_QUEUE,
    jobDefinition=AWS_BATCH_JOB_DEFINITION,
    containerOverrides={
        'command': [
            '/main.py'
        ]
    }
)
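If it helps, you can also poll the submitted job afterwards; a short sketch continuing from the response above:
# The job ID comes back in the submit_job response.
job_id = response['jobId']
job = batch_client.describe_jobs(jobs=[job_id])['jobs'][0]
print(job_id, job['status'])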

Python Fabric 2.4 no environment variables

I have a problem with Fabric (2.4). I have no access to environment variables on the remote server (I'm using FreeBSD).
In my ~/.profile file I have the variable:
export MY_KEY="123456789"
In my fabfile.py I have a simple task:
from fabric import task
@task(hosts=['user@myhost.com'])
def deploy(context):
    context.run('echo 123')
    context.run('echo $MY_KEY')
When I run the fab deploy command, I see only 123, but after connecting via SSH the variable is visible.
Fabric runs remote commands in a non-login shell, so ~/.profile isn't sourced there. What about using Connection.prefix as a context manager?
with context.prefix('MY_KEY="123456789"'):
    context.run('echo 123')
    context.run('echo $MY_KEY')
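Another option (a sketch, not part of the original answer): Fabric 2.3+ can send variables along with the command itself via inline_ssh_env, which avoids relying on ~/.profile (only sourced by login shells) or on the remote sshd accepting the variables:
from fabric import Connection, task

@task
def deploy(context):
    # inline_ssh_env prepends the variables to the remote command line.
    c = Connection('myhost.com', user='user', inline_ssh_env=True)
    c.run('echo $MY_KEY', env={'MY_KEY': '123456789'})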

Running a Python function from Ansible script

I have a Django project hosted on a remote server. It contains a file called tmp_file.py, and there's a function called fetch_data() inside that file. Usually I follow the approach below to run that function:
# Inside Django Project
$ python manage.py shell
[Shell] from tmp_file import fetch_data
[Shell] fetch_data()
Also, the file doesn't contain an if __name__ == '__main__' block, so it can't be run as a standalone script. What's the best way to perform this task using Ansible? I couldn't find anything useful in the Ansible docs.
There's a --command switch for the django-admin shell command.
So you can try in Ansible:
- name: Fetch data
  command: "django-admin shell --command='from tmp_file import fetch_data; fetch_data()'"
  args:
    chdir: /path/to/tmp_file
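Note that django-admin only works when DJANGO_SETTINGS_MODULE is set (manage.py normally handles that for you), so you may need to provide it through the task's environment. Alternatively, here is a sketch of a small wrapper script you could copy to the server and run with Ansible's command module; the settings module name is an assumption, substitute your project's:
# run_fetch.py - standalone wrapper around fetch_data()
import os
import django

# Assumption: replace 'myproject.settings' with your real settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django.setup()

from tmp_file import fetch_data

fetch_data()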

Can't connect MongoDB on AWS EC2 using Python

I have installed MongoDB 3.0 using this tutorial:
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given the 'ec2-user' permissions to all the data and log folders, i.e. var/lib/mongo and var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongod server always fails to start with the command
sudo service mongod start
It just says failed, nothing else.
While if I run the command
mongod --dbpath var/lib/mongo
it starts the mongod server correctly (though I have specified the same dbpath in the .conf file as well).
What is it I am doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults (port 27017, a database path of /data/db, etc.) - that is why you got the error about not being able to find that folder. The packaged default config is only used when you point mongod at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, then there will be now - you have run the process, with your normal config (pointing at your usual dbpath and log) as the root user. That means that there are going to now be a number of files in that normal MongoDB folder with the user:group of root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
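Once mongod is running, a quick way to confirm you can actually connect from Python (a sketch; assumes pymongo is installed and mongod listens on the default port 27017):
from pymongo import MongoClient

# Fail fast if the server isn't reachable.
client = MongoClient('localhost', 27017, serverSelectionTimeoutMS=2000)
print(client.server_info()['version'])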

Best practices for debugging vagrant+docker+flask

My goal is to run a Flask web server from a Docker container. Working on a Windows machine, this requires Vagrant for creating a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
This is my Dockerfile
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
#copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "." # searches for a local Dockerfile
  end
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__chown: false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is using the command
vagrant docker-logs
I had broken the Dockerfile because I did not recognize good behaviour as such: nothing visible happens when the app runs as it should. vagrant docker-logs confirms that the Flask app is listening for requests.
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to modify the permissions of run.py to make it executable:
...
#copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built.
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.
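On that note, make sure the app binds to 0.0.0.0 rather than the default 127.0.0.1, or the forwarded port won't reach it from outside the container. A minimal run.py sketch (assuming your Flask app object is named app):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the container'

if __name__ == '__main__':
    # Bind to all interfaces so the forwarded port can reach the app.
    app.run(host='0.0.0.0', port=5000)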
