Django runserver under Jenkins does not work - python

I have created a script to build my Django project, which is called by my Jenkins CI every time a push is made.
The script runs just fine if I run it manually, but it fails to start the web server when it is run automatically.
No errors are thrown, but the last line of the script:
nohup python manage.py runserver 0:9000 > /dev/null 2>&1 &
has absolutely no effect.
I am 100% sure that the script is run as the jenkins user, under my virtualenv, so that's not the problem. Permissions are not a problem either, I have checked. Like I said, no error is thrown, so I don't really know what is happening.
Any ideas?

So, thanks to tomrom95 I found the solution: adding BUILD_ID=dontKillMe in front of the command fixed everything. It's kind of funny.
Here is the link to a more complete answer explaining why this didn't work and why it works now.
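For reference, the working line is just the original command with the Jenkins BUILD_ID override in front of it; Jenkins normally kills every process spawned during a build once the build finishes, and overriding BUILD_ID is what exempts the server from that cleanup:
BUILD_ID=dontKillMe nohup python manage.py runserver 0:9000 > /dev/null 2>&1 &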

Related

How do I allow permissions so that I can run my Python server?

I was trying to practice using Django.
When I executed python3 manage.py runserver, I received a "permission denied" error, and I'm not sure what that means.
I have seen this before, but I am not sure how I would go about allowing the permissions. I thought I enabled everything at start-up, but I feel I am missing something.
I will post a picture here from when I went into the folder and tried to run the server.
Have you tried using sudo before the "python3 ..." command?
(I can't add this to comments due to my low reputation)
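That is, something along these lines, run from the project directory that contains manage.py:
sudo python3 manage.py runserver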

How to properly reconnect via SSH after screen detached

I'm working on a Django project that is hosted on a remote server, and at work we connect to it through a computer that has MobaXterm. The thing is, I'm not at all experienced, and the connection broke (or I broke it, I don't know). I then ran screen -r -d, which is what I used to use before doing the git pull and then the supervisorctl restart to apply the changes from my local project.
After the disconnection it said there were no screens to detach, so after searching online I managed to get into a screen, made my way to the correct path so that I could run python manage.py etc..., and activated the virtualenv.
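In case it matters, the commands involved are basically these (the named session at the end is just an example of starting a fresh one when nothing is listed):
screen -ls        # list existing sessions
screen -r -d      # reattach to an existing session, detaching it from anywhere else first
screen -S deploy  # start a brand-new named session if none are listed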
The thing is, I'm now getting errors I didn't have before, so I don't know if I have to do something else. For example, when I try to run python manage.py showmigrations, I get an error saying:
FileNotFoundError: [Errno 2] No such file or directory: '/webapps/project_name/app_name/logs/dialogflow.log'
And different errors occur when I run python manage.py makemigrations, python manage.py migrate, etc. What could the last screen I was working in have had that this one doesn't? My ex-coworker, who set everything up before I arrived, hasn't answered, so I'd at least like a clue about how to troubleshoot this, because I'm lost.
Any help would be amazing.
A co-worker helped me remotely, and we found the problem: I also had to run the virtualenv's postactivate script after the activate.
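Roughly, and assuming a standard virtualenv layout (the actual path will differ for your project):
source /path/to/virtualenv/bin/activate
source /path/to/virtualenv/bin/postactivate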

Run python script on Google Cloud Compute Engine

I know this is an exact copy of this question, but I've been trying different solutions for a while and haven't come up with anything.
I have this simple script that uses PRAW to find posts on Reddit. It takes a while, so I need it to stay alive when I log out of the shell as well.
I tried to set it up as a start-up script and to use nohup in order to run it in the background, but none of this worked. I followed the quickstart and I can get the hello world app to run, but all these examples are for web applications, and all I want is to start a process on my VM and keep it running when I'm not connected, without using .yaml configuration files and such. Can somebody please point me in the right direction?
Well, in the end using nohup was the answer. I'm new to the GNU environment and I just assumed it didn't work when I first tried. My program was exiting with an error, but I didn't check the nohup.out file, so I was unaware of it.
Anyway, here is a detailed guide for future reference (using Debian Stretch):
Make your script an executable
chmod +x myscript.py
Run the nohup command to execute the script in the background. The trailing & puts the process in the background, and nohup is what keeps it alive after you log out. I've added a shebang line to my Python script, so there's no need to call python here:
nohup /path/to/script/myscript.py &
Log out from the shell if you want:
logout
Done! Now your script is up and running. You can log back in and make sure that your process is still alive by checking the output of this command:
ps -e | grep myscript.py
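If it isn't there, it probably exited with an error; the output nohup captured (nohup.out, written in the directory you launched the script from) is the first place to look:
tail nohup.out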

Django/Travis CI - configuring a .travis.yml file to first start a localhost server, then run my tests without hanging?

I'm starting to set up a CI environment for one of my (first) Django projects. I'm having trouble with Travis; when I test my code locally, I first have to run 'manage.py runserver' to start my server; then I can run my unit tests with 'manage.py test'.
So in my .travis.yml file, I have it start the server first, then run the unit tests, then run the integration tests (which are in a self-made Python script). My main problem is that this causes Travis to hang: it starts the server and then waits for input. Which is expected; the documentation, which I've pored over all day, says that it will error out a build if it receives no output for longer than ten minutes. How would I set up my .travis.yml file to start the localhost server, THEN keep it running and move on to the integration tests? Unit tests are fine, I can run them without spinning up the server of course, but I need the server running to pass my integration tests; the only test I've written so far starts off with an assert to check whether 'Django' is the page's title. This fails without the server, of course.
Any help would be appreciated. I know the code's technically fine, since this whole problem is due to me not being able to configure the .travis.yml file correctly and therefore not letting Travis run my tests on its own localhost instance, but after searching for hours and getting my fifth or sixth email about failed builds on my dev branch I got irritated.
Travis file:
http://pastebin.com/U0GpnF5y
So, silly me, after fighting for almost two days straight I figured it out. Dead simple answer and I feel stupid, but I'll post it here just in case someone else who doesn't do more than simple stuff with the Linux shell stumbles upon it. I appended an ampersand to the end of the python manage.py runserver line (this tells the shell to run the command as a background job and move straight on to the next task) and kept it in the before_script section. So the end of the .yml file looks like:
before_script:
  - python manage.py runserver &
script:
  - coverage run manage.py test
  - coverage run functional_tests.py
after_script:
  - coveralls
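One caveat worth adding: the backgrounded dev server needs a moment to bind to its port before the functional tests start hitting it, so a short pause after the runserver line is a common (if crude) safeguard; the three seconds below is an arbitrary choice:
before_script:
  - python manage.py runserver &
  - sleep 3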

Localhost has stopped updating when various flask/python scripts are run, how do I fix this?

I've been testing out a few .py files with Flask, referring to 127.0.0.1:5000 frequently to see if they're interfacing correctly with the HTML. After running a .py file I get the following, as normal:
* Running on http://127.0.0.1:5000/
* Restarting with reloader
However, 127.0.0.1:5000 has suddenly stopped updating when scripts are run; it stays as it was after the first script run since my computer was turned on (restarting the machine is the only way I've found to get a fresh look at my work). To confirm that it's not an issue within my .py files or my templates: Flask's hello world example with app.run(debug=True) also does not update this localhost page when run. Is there any way to remedy this?
Two things that may or may not be involved in this issue:
(1) I am not using virtualenv, but simply running .py files from folders on my desktop (following the proper layout for Flask and the template engine, though). (2) Around the time the problem started, I installed SQLAlchemy and its Flask extension, Flask-SQLAlchemy, with pip.
After tracking the processes down by running $ netstat -a -o in the command line, it turns out it wasn't a code error, but rather multiple instances of pythonw.exe, which can be taken care of in the Task Manager. I'm not sure why the processes keep running after I close all Python windows, or why they keep communicating with 127.0.0.1:5000, however, so thoughts on this would still be appreciated.
That's right. Just press Ctrl+Shift+Esc to open the Task Manager.
Scroll down to find the 'python3.exe' processes and end the task manually.
The reason is that Ctrl+C doesn't work for me (it just copies the text in the terminal window), so I have to manually kill the Python interpreter running in the background. It's hard work, but hey, at least you don't have to restart your computer every time!
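If you would rather stay on the command line, the same cleanup can be done with the PID that netstat reports for port 5000 (the PID below is a placeholder, and the process may show up as python.exe or pythonw.exe rather than python3.exe):
netstat -a -o | findstr :5000
taskkill /PID 1234 /F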
