I have installed MongoDB 3.0 using this tutorial:
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given the 'ec2-user' permissions to all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the MongoDB server always fails to start with the command
sudo service mongod start
It just says "failed", nothing else.
While if I run the command
mongod --dbpath /var/lib/mongo
it starts the MongoDB server correctly (though I have specified the same dbpath in the .conf file as well).
What am I doing wrong here?
When you run sudo mongod it does not load a config file at all; it starts with the compiled-in defaults - port 27017, a database path of /data/db, etc. - which is why you got the error about not being able to find that folder. The "Ubuntu default" config is only used when you point mongod at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now - you have run the process with your normal config (pointing at your usual dbpath and log file) as the root user. That means there will now be a number of files in that normal MongoDB data folder owned by root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
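For example, a quick check might look like this (a minimal sketch; the exact directories and user depend on your distribution and config - on Amazon Linux the service user is mongod and the data directory is /var/lib/mongo rather than the Ubuntu-style paths used above):
ls -al /var/log/mongodb /var/lib/mongodb   # every entry should be owned by the service user, not root
sudo service mongod start
sudo service mongod status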
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
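For example, a rough sketch assuming the default Amazon Linux paths from the tutorial (check your own mongod.conf and adjust accordingly):
# see which paths your config actually points at
grep -i path /etc/mongod.conf
# create them if missing and make sure the mongod user owns them
sudo mkdir -p /var/lib/mongo /var/log/mongodb /var/run/mongodb
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb /var/run/mongodb
sudo service mongod start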
This is my Fabric script that runs on the Jenkins server:
sudo('/home/myjenkins/killit.sh',pty=False)
sudo('/home/myjenkins/makedir.sh',pty=False)
sudo('/home/myjenkins/runit.sh',pty=False)
This kills the old server, creates a virtualenv, installs the requirements and restarts the server.
The problem is with the script that starts the server, runit.sh:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
When Jenkins starts the server and I navigate to the homepage, it gives me a 404 Page Not Found: it says /static/index.html was not found, but the file exists. When I run 'sudo bash runit.sh' myself and access the homepage, it works fine.
Here is the script that sets up the environment:
mkdir -p /home/myjenkins/devserver
cp -rf /home/myjenkins/workspace /home/myjenkins/devserver/
cp -f /home/myjenkins/dev_settings.py /home/myjenkins/devserver/workspace/mywebsite/settings.py
cd /home/myjenkins/devserver
virtualenv -p python3 dev
cd /home/myjenkins/devserver/workspace
../dev/bin/pip install -r requirements.txt
Please ask me for more details if you need them.
EDITED 9/2/18
When I start the script from the folder containing manage.py, the server is able to serve the files. But Jenkins was starting the script from the home folder, and if I also start the script from the home folder, the server is not able to find the files. Look at my comment for more details. It would be great if someone could explain why this happens even though I've specified the full path in the script:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
Okay, I figured out the whole deal. My Django server was picking up the output of the npm build from the wrong folder.
In the settings.py file, the variable STATICFILES_DIRS was set as:-
STATICFILES_DIRS = ('frontend/dist',)
instead of:-
STATICFILES_DIRS = (os.path.join(BASE_DIR,'frontend/dist'),)
Thus, when Jenkins was running the script, it was doing so from the home folder. This made Django's staticfiles finders look at /home/myjenkins/frontend/dist instead of the frontend/dist inside the project, because a relative entry in STATICFILES_DIRS is resolved against the current working directory, not against the project root.
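You can see the working-directory dependence with Django's findstatic management command (an illustration only - the paths match this question's layout and assume the old relative setting is still in place):
cd /home/myjenkins && devserver/dev/bin/python devserver/workspace/manage.py findstatic index.html
# reports no match: the relative entry resolves to /home/myjenkins/frontend/dist
cd /home/myjenkins/devserver/workspace && ../dev/bin/python manage.py findstatic index.html
# finds frontend/dist/index.html, because the entry now happens to resolve inside the project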
I have followed a few posts on here trying to run either a Python or shell script on my EC2 instance after every boot, not just the first boot.
I have tried:
Adding [scripts-user, always] to the /etc/cloud/cloud.cfg file
Adding the script to the ./scripts/per-boot folder
and
Adding the script to /etc/rc.local
Yes, the permissions were changed to 755 for /etc/rc.local.
I am attempting to redirect the output of the script into a file located in the /home/ubuntu/ directory, but the file does not contain anything after boot.
If I run the scripts (.sh or .py) manually they work.
Any suggestions, or requests for additional info that would help?
So the current solution appears to be a method I wrote off in my initial question post, as I may not have performed the setup exactly as outlined in the link below...
This link -->
How do I make cloud-init startup scripts run every time my EC2 instance boots?
The link shows how to modify the /etc/cloud/cloud.cfg file, changing scripts-user to [scripts-user, always] in the cloud_final_modules list. It also says to add your *.sh file to the /var/lib/cloud/scripts/per-boot directory.
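Putting those two steps together, the setup might look something like this (a rough sketch - the script name is just an example, and the cloud.cfg edit can equally be done by hand in an editor):
# in /etc/cloud/cloud.cfg, change the cloud_final_modules entry " - scripts-user" to " - [scripts-user, always]"
sudo sed -i 's/^ - scripts-user$/ - [scripts-user, always]/' /etc/cloud/cloud.cfg
# drop the script into the per-boot directory and make it executable
sudo cp myscript.sh /var/lib/cloud/scripts/per-boot/
sudo chmod 755 /var/lib/cloud/scripts/per-boot/myscript.sh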
Once you reboot your system, your script should have executed, and you can verify this in the log: sudo cat /var/log/cloud-init.log
If your script still fails to execute, try erasing the instance state of your server with the following command: sudo rm -rf /var/lib/cloud/instance/*
--NOTE:--
It appears print output from a Python script does not get redirected (>>) as expected, but echo output redirects without issue.
Fails to redirect:
sudo python test.py >> log.txt
Redirects successfully:
echo "HI" >> log.txt
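One likely explanation (an assumption on my part, not verified here) is Python's stdout buffering: when stdout is a file rather than a terminal, print output sits in a buffer and can be lost if the script is cut short during boot. Running the interpreter unbuffered usually helps:
# -u disables stdout/stderr buffering, so print output reaches log.txt immediately
sudo python -u test.py >> log.txt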
Is this something along the lines of what you want?
It copies the script to the instance, connects to the instance, and runs the script right away.
ec2 scp ~/path_to_script.py : instance_name -y && ec2 ssh instance_name -yc "python script_name.py" 1>/dev/null
I read that the use of rc.local is getting deprecated. One thing to try is a line in /etc/crontab like this:
@reboot full-path-of-script
If there's a specific user you want to run the script as, you can list it after @reboot.
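For example, to run a script as root on every boot (the script path is just a placeholder; in /etc/crontab the user field sits between @reboot and the command):
echo '@reboot root /usr/local/bin/myscript.sh' | sudo tee -a /etc/crontab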
I am writing an app in Kivy and a part of the app is to turn off the rpi display's back-light after a certain amount of time and to turn the back-light back on when pressing an invisible button. I need to use sudo python when launching the app in order to open the file:
/sys/class/backlight/rpi-backlight/bl_power
The problem is that by default, I get an error saying "no module named kivy.app" when using 'sudo python'. If I add the line:
Defaults env_keep += "PYTHONPATH"
to the /etc/sudoers file, it allows me to run the app with 'sudo python', but then none of the buttons in the app function. The app runs, but touch functionality is lost. Is there a way to make this work?
I'd suggest a different approach: make /sys/class/backlight/rpi-backlight/bl_power writable by the user running the Python script (most likely pi). Temporarily, this can be done with
sudo chmod a+w /sys/class/backlight/rpi_backlight/bl_power
(this grants write rights to all users). But this will also be reset at the next restart. The solution for that is to write a udev rule. They live in /etc/udev/rules.d and on my system, 99-com.rules was a good starting point. Here is what I have in a file called 98-backlight.rules:
SUBSYSTEM=="backlight", PROGRAM="/bin/sh -c 'chown -R root:video /sys/class/backlight && chmod -R 770 /sys/class/backlight; chown -R root:video /sys/devices/platform/rpi_backlight && chmod -R 770 /sys/devices/platform/rpi_backlight'"
This changes the owning group to video and grants group write rights. User pi is by default a member of video. Then all you need is a restart (or sudo udevadm control --reload-rules followed by sudo udevadm trigger) to activate the new rule.
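Once the rule is active, the pi user should be able to toggle the backlight without sudo - a quick sanity check from a shell (on the official display, writing 1 typically turns the backlight off and 0 turns it back on):
echo 1 > /sys/class/backlight/rpi_backlight/bl_power   # backlight off
echo 0 > /sys/class/backlight/rpi_backlight/bl_power   # backlight back on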
Finally I migrated my development env from runserver to gunicorn/nginx.
It'd be convenient to replicate the autoreload feature of runserver in gunicorn, so the server automatically restarts when the source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question, you need to know that ever since version 19.0, gunicorn has had the --reload option.
So no third-party tools are needed any more.
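For example (the module path here is just a placeholder for your own WSGI application):
gunicorn --reload myproject.wsgi:application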
One option would be to use --max-requests to limit each spawned process to serving only one request, by adding --max-requests 1 to the startup options. Every newly spawned process should see your code changes, and in a development environment the extra startup time per request should be negligible.
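Again with a placeholder module path, and only sensible for development:
gunicorn --max-requests 1 myproject.wsgi:application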
Bryan Helmig came up with this and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these three commands into a shell in your Django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
I use git push to deploy to production and set up git hooks to run a script. The advantage of this approach is that you can also do your migration and package installation at the same time: https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
First, set up a bare repository on the production server:
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a post-receive hook script at /home/git/project_name.git/hooks/post-receive:
#!/bin/bash
# check out the pushed code into the live project directory
GIT_WORK_TREE=/path/to/project git checkout -f
# activate the project's virtualenv, install requirements, run migrations
source /path/to/virtualenv/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
# restart the app via supervisor (requires passwordless sudo, see below)
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive, and add the user to sudoers, allowing it to run sudo supervisorctl without a password: https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
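For example, a sketch that assumes the push is received by a user named git and that supervisorctl lives at /usr/bin/supervisorctl (adjust both to your setup):
chmod u+x /home/git/project_name.git/hooks/post-receive
echo 'git ALL=(ALL) NOPASSWD: /usr/bin/supervisorctl' | sudo tee /etc/sudoers.d/git-supervisorctl
sudo chmod 440 /etc/sudoers.d/git-supervisorctl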
From my local/development machine, I set up a git remote that allows me to push to the production server:
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the output as the script is running, so you will see if there is any issue with the migration, package installation, or supervisor restart.
My goal is to run a Flask web server from a Docker container. Working on a Windows machine, this requires Vagrant for creating a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
This is my Dockerfile
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
#copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "." # searches for a local Dockerfile
  end
  config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__chown: false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is using the command:
vagrant docker-logs
I had been breaking the Dockerfile because I did not recognize good behaviour as such: nothing visibly happens when the app runs as it should. docker-logs confirms that the Flask app is listening for requests.
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to make run.py executable:
...
#copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.