gunicorn not accepting arguments - python

I am setting up a Flask app on a Google VM. I have previously done something very similar on the same VM, but for whatever reason after creating a new virtual environment things aren't working.
cd /home/joshuasmith6556/joshthings-server
sudo /home/joshuasmith6556/joshthings-server/env/bin/gunicorn --bind 0.0.0.0:443 --certfile=/etc/letsencrypt/live/api.joshthings.com/fullchain.pem --keyfile=/etc/letsencrypt/live/api.joshthings.com/privkey.pem --workers=6 app:app
When I run the above command to launch my server over HTTPS with the cert files I have generated, gunicorn throws an error: Error: No such option: --bind. If I put app:app before the other flags, it says Error: Got unexpected extra argument (app:app). If I put --certfile first, I get Error: No such option: --certfile. Essentially, gunicorn always complains about the first argument I give it and never launches the server. Any ideas about what is wrong or what I can do to fix this?

The main source of the error here might be a typo: "bind" is missing an "=" sign after it. It should be something like:
sudo /home/joshuasmith6556/joshthings-server/env/bin/gunicorn --bind=0.0.0.0:443 --certfile=/etc/letsencrypt/live/api.joshthings.com/fullchain.pem --keyfile=/etc/letsencrypt/live/api.joshthings.com/privkey.pem --workers=6 app:app
The WSGI_APP argument is expected at the end, so there is no conflict there. I suspect that when you put --certfile first, you may also have omitted the equals sign, which is why it gave you an error.
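If the flag parsing keeps rejecting whichever option comes first, you can also sidestep the command line entirely with a gunicorn config file, which is plain Python. A minimal sketch, reusing the cert paths from the question (the filename gunicorn.conf.py is my choice; it is passed with -c):
# gunicorn.conf.py - the equivalent of the command-line flags above,
# using gunicorn's documented setting names
bind = "0.0.0.0:443"
certfile = "/etc/letsencrypt/live/api.joshthings.com/fullchain.pem"
keyfile = "/etc/letsencrypt/live/api.joshthings.com/privkey.pem"
workers = 6
Then launch with sudo /home/joshuasmith6556/joshthings-server/env/bin/gunicorn -c gunicorn.conf.py app:app. If that still fails, it may be worth confirming with env/bin/pip show gunicorn that the gunicorn in the new virtualenv really is the gunicorn package.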

Related

Executing bash commands from flask application

Application Details: Ubuntu 16.04 + flask + nginx + uwsgi
I am trying to execute a bash command from a flask application.
import os

@app.route('/hello', methods=('GET', 'POST'))
def hello():
    os.system('mkdir my_directory')
    return "Hello"
The above code runs successfully but doesn't create any directory on the server. It does create the directory on my local machine, which doesn't have any nginx-level setup.
I also tried following ways:
subprocess.call(['mkdir', 'my_directory'])               # Throws Internal server error
subprocess.call(['mkdir', 'my_directory'], shell=True)   # No error but directory not created
subprocess.Popen(['mkdir', 'my_directory'])              # Throws Internal server error
subprocess.Popen(['mkdir', 'my_directory'], shell=True)  # No error but directory not created
Do I need any nginx-level configuration changes?
I finally figured it out. I followed Python subprocess call returns “command not found”, Terminal executes correctly.
What I was missing was the absolute path of mkdir. When I executed subprocess.call(["/bin/mkdir", "my_directory"]), it created the directory successfully.
The above link contains the complete details.
I would also be thankful if anyone could explain why I need to specify the absolute path for mkdir.
Thanks to all. :)
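As to the why: a server process launched by uwsgi under nginx typically starts with a much leaner environment than an interactive shell, so its PATH may not include the directory holding mkdir; an absolute path bypasses the PATH lookup entirely. Note also that a relative path like my_directory resolves against the server process's working directory, which may not be the folder you are checking, and that the shell=True variants above silently ran mkdir with no arguments (with a list plus shell=True, only the first element is treated as the command). For creating directories specifically, here is a sketch that needs no subprocess at all (my_directory is the name from the question):
import os
import subprocess

# Print the PATH the server process actually sees; compare it with your shell's
print(os.environ.get("PATH"))

# Simplest: the standard library can create directories without shelling out
os.makedirs("my_directory", exist_ok=True)

# If you do need a subprocess, use the absolute path, as in the accepted fix
subprocess.call(["/bin/mkdir", "-p", "my_directory"])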

Ansible ec2.py runs standalone but fails in playbook

I've seen a few posts on this topic with odd, hard-to-reproduce behaviours. Here's a new set of data points.
Currently the following works:
cd ./hosts
./ec2.py --profile=dev
And this fails:
AWS_PROFILE=dev; ansible-playbook test.yml
These were both working a couple days ago. Something in my environment changed. Still investigating. Any guesses?
Error message:
ERROR! The file ./hosts/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x ./hosts/ec2.py.
ERROR! Inventory script (./hosts/ec2.py) had an execution error: ERROR: "Error connecting to AWS backend.
You are not authorized to perform this operation.", while: getting EC2 instances
ERROR! ./hosts/ec2.py:3: Error parsing host definition ''''': No closing quotation
Note that the normal credentials error is:
ERROR: "Error connecting to AWS backend.
You are not authorized to perform this operation.", while: getting EC2 instances
...
Hmmm. Error message has shifted.
AWS_PROFILE=dev; ansible-playbook test.yml
ERROR! ERROR! ./hosts/tmp:2: Expected key=value host variable assignment, got: {
Looks like the problem was a temporary file in the hosts folder. After removing it, the problems went away. This appears to be standard Ansible behaviour: it pulls in ALL files in the hosts folder as inventory.
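One hedged footnote on the failing command itself: AWS_PROFILE=dev; ansible-playbook test.yml is two shell statements, so the variable is set in the shell but never exported, and ansible-playbook will not see it. Dropping the semicolon (AWS_PROFILE=dev ansible-playbook test.yml) or using export fixes that. If you drive playbooks from Python, passing the environment explicitly makes the intent unmistakable; a sketch, with test.yml taken from the question:
import os
import subprocess

# Build a copy of the current environment with AWS_PROFILE set, and pass it
# explicitly so ec2.py's AWS calls are guaranteed to see the right profile
env = dict(os.environ, AWS_PROFILE="dev")
subprocess.check_call(["ansible-playbook", "test.yml"], env=env)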

Can't connect MongoDb on AWS EC2 using python

I have installed Mongodb 3.0 using this tutorial -
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given the 'ec2-user' permissions on all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongodb server always fails to start with the command
sudo service mongod start
it just says failed, nothing else.
While if I run the command
mongod --dbpath var/lib/mongo
it starts the mongodb server correctly (even though I have specified the same dbpath in the .conf file as well).
What am I doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults - port 27017, a database path of /data/db, etc. - which is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now - you have run the process, with your normal config (pointing at your usual dbpath and log), as the root user. That means there will now be a number of files in that normal MongoDB folder owned by root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
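To double-check the ownership repair before retrying the service, here is a small diagnostic sketch; the mongod:mongod user/group and both paths come from this thread (on Ubuntu installs substitute mongodb:mongodb):
import os
import pwd
import grp

def wrongly_owned(path, user="mongod", group="mongod"):
    # Walk `path` and collect anything not owned by user:group
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    bad = []
    for root, dirs, files in os.walk(path):
        for p in [root] + [os.path.join(root, f) for f in files]:
            st = os.stat(p)
            if st.st_uid != uid or st.st_gid != gid:
                bad.append(p)
    return bad

for d in ("/var/lib/mongo", "/var/log/mongodb"):
    print(d, wrongly_owned(d))
Anything the script prints still needs a chown before sudo service mongod start will succeed.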

Best practices for debugging vagrant+docker+flask

My goal is to run a Flask web server from a Docker container. Since I am working on a Windows machine, this requires Vagrant to create a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
This is my Dockerfile:
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
# Copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.provider "docker" do |d|
d.build_dir = "." #searches for a local dockerfile
end
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__chown: false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is the command
vagrant docker-logs
I had broken the Dockerfile without realising it, because I did not recognize correct behaviour as such: nothing visible happens when the app runs as it should. docker-logs confirms that the Flask app is listening for requests.
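To confirm from the host that the container really is serving, a quick probe can help. A sketch: it assumes the container's port 5000 is forwarded to localhost, which may require the port-forwarding change mentioned at the end of the next answer:
# Expect a 200 status and the app's response body if the Flask app is up
from urllib.request import urlopen

resp = urlopen("http://localhost:5000/")
print(resp.status, resp.read())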
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to modify the permissions of run.py to make it executable:
...
# Copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built.
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.
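One more hedged gotcha in this setup: inside a container, Flask's development server must bind to 0.0.0.0 rather than its default of 127.0.0.1, or forwarded ports will look dead from outside. A minimal run.py sketch (the filename comes from the Dockerfile above; everything else is an assumption):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the container"

if __name__ == "__main__":
    # 0.0.0.0 makes the app reachable through Docker/Vagrant port forwarding;
    # port 5000 matches the EXPOSE 5000 line in the Dockerfile
    app.run(host="0.0.0.0", port=5000)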

Using nginx and gunicorn to serve django

I am receiving the error:
ImportError at /
No module named Interest.urls
even though my settings file has been changed several times:
ROOT_URLCONF = 'urls'
or
ROOT_URLCONF = 'interest.urls'
I keep getting the same error; no matter what I put in my settings file, it still looks for Interest.urls, even though my urls file is located at Interest (the Django project)/interest/urls.py.
I have restarted my nginx server several times and it changes nothing. Is there another place I should be looking to change where it looks for my urls file?
Thanks!
I had to do a restart through supervisorctl, which restarted the gunicorn server that was actually serving the Django files.
There's no need to restart nginx; you can do these steps instead:
Install fabric (pip install fabric)
Create a "restart" function into fabfile.py that has the following:
from fabric.api import sudo  # Fabric 1.x
def restart():
    # the [y] in the grep pattern stops grep from matching its own process
    sudo('kill -9 `ps -ef | grep -m 1 \'[y]our_project_name\' | awk \'{print $2}\'`')
Call the function with:
$ fab restart
Optionally, you might want to put the command into a script; you can supply your password by adding "-p mypass" to the fab command.
That will kill all your gunicorn processes, allowing supervisord to start them again.
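A gentler variant, if kill -9 feels heavy-handed: gunicorn's master process reloads its configuration and gracefully restarts its workers on SIGHUP. A sketch, assuming gunicorn was started with a pidfile at the hypothetical path below (e.g. via --pid /run/gunicorn.pid):
import os
import signal

PIDFILE = "/run/gunicorn.pid"  # hypothetical path; must match gunicorn's --pid flag

with open(PIDFILE) as f:
    master_pid = int(f.read().strip())

# SIGHUP tells the gunicorn master to reload and restart workers gracefully
os.kill(master_pid, signal.SIGHUP)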
