My Apache server runs under a non-default (non-root) account. When it runs a Python script that in turn executes a Subversion checkout command, 'svn checkout' fails with the following error message:
svn: Can't open file '/root/.subversion/servers': Permission denied
At the same time, running that Python script (with the Subversion checkout command inside) from the command line under the same user account works perfectly well.
Apache 2.2.6 with mod_python 3.2.8 runs on a Fedora Core 6 machine.
Can anybody help me out? Thanks a lot.
It sounds like the environment your Apache process is running under is a little unusual. For whatever reason, svn seems to think the user configuration files it needs are in /root. You can stop svn from using root's copies of those files by specifying on the command line which config directory to use, like so:
svn --config-dir /home/myuser/.subversion checkout http://example.com/path
While this doesn't fix your environment, it will at least allow your script to run properly...
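If the checkout is being launched from the Python script via subprocess, the same flag can be passed there; a minimal sketch (the config dir, repository URL and target path are placeholders, not your real values):

import subprocess

# Point svn at an explicit config dir so it never falls back to /root/.subversion.
subprocess.check_call([
    "svn", "checkout",
    "--config-dir", "/home/myuser/.subversion",
    "http://example.com/path",
    "/var/www/working-copy",
])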
Try granting the Apache user (the user that the apache service is running under) r+w permissions on that file.
Doesn't Apache's error log give you a clue?
Maybe it has to do with SELinux. Check /var/log/audit/audit.log and adjust your SELinux configuration accordingly, if the audit.log file indicates that it's SELinux which denies Apache access.
The Permission Denied error is showing that the script is running with root credentials, because it's looking in root's home dir for files.
I suggest you change the hook script to something that does:
id > /tmp/id
so that you can check the results of that to make sure what the uid/gid and euid/egid are. You will probably find it's not actually running as the user you think it is.
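If it's easier to edit the Python script than the command it runs, a rough equivalent of that check (writing to the same /tmp/id scratch file) is:

import os

# Dump the real and effective ids plus HOME so you can see exactly what
# the Apache-spawned process is running as.
f = open("/tmp/id", "w")
f.write("uid=%d euid=%d gid=%d egid=%d\n" %
        (os.getuid(), os.geteuid(), os.getgid(), os.getegid()))
f.write("HOME=%r\n" % os.environ.get("HOME"))
f.close()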
My first guess, like Troels, was also SELinux, but that would only be my guess if you are absolutely sure the script through Apache is running with exactly the same user/group as your manual test.
Well, thanks to all who answered the question. Anyway, I think I solved the mystery.
SELinux is completely disabled on the machine, so the problem is definitely that 'svn co' cannot find the config dir for the user account it runs under.
Apache / mod_python doesn't read in the shell environment of the user account Apache is running under. Thus, for example, no $HOME is seen by mod_python when Apache is running under some real user (not nobody).
Now 'svn co' has a --config-dir flag which points to the configuration directory to read parameters from. By default it is $HOME/.subversion, i.e. it corresponds to the user account's home directory. Apparently, when no $HOME exists, svn falls back to root's home directory (/root) and tries to fiddle with the .subversion content there - which obviously fails miserably.
Putting
SetEnv HOME /home/qa
into /etc/httpd/conf/httpd.conf doesn't solve the problem, because SetEnv has nothing to do with the shell environment - it only sets Apache-related environment variables.
Likewise PythonOption only sets mod_python-related variables, which can then be read with req.get_options().
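To illustrate the distinction, here is a rough mod_python handler sketch: PythonOption values only surface via req.get_options(), while os.environ still lacks HOME:

import os
from mod_python import apache

def handler(req):
    # PythonOption directives show up here ...
    req.log_error("PythonOptions: %s" % str(req.get_options()))
    # ... but the process environment is still missing HOME
    req.log_error("os.environ HOME: %r" % os.environ.get("HOME"))
    return apache.OK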
Running 'svn co --config-dir /home/ ...' definitely gives a workaround for running from within mod_python, but it gets in the way of those who will try to run the script from the command line.
So the proposed (and working) solution is to set the HOME environment variable prior to starting Apache.
For example, in the /etc/init.d/httpd script:
QAHOME=/home/qa
...
HOME=$QAHOME LANG=$HTTPD_LANG daemon $httpd $OPTIONS
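For completeness, a sketch-level alternative (with an assumed path) is to set HOME from inside the Python script just before invoking svn; it avoids touching the init script but has to be repeated in every script:

import os
import subprocess

env = dict(os.environ)
env.setdefault("HOME", "/home/qa")  # assumed home directory for the qa account

# svn will now derive its default config dir from this HOME
subprocess.call(["svn", "checkout", "http://example.com/path", "/var/www/working-copy"],
                env=env)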
What is happening is that Apache is being started with root's environment variables, so it thinks it should find its config files in /root/. This is NOT the case.
What happens is that if you do sudo apache2ctl start, it pulls the $HOME=/root/ variable in from the sudo environment.
I have just found a solution to this problem myself (although with mod_perl, but it's the same thing).
Run these commands (if it's Apache 1, remove the 2):
sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start
When /etc/init.d/apache2 starts apache, it sets all the proper environment variables that apache should be running under.
System:
Windows 10 x64 (enterprise computer with some restrictions)
Apache 2.4 64-bit
Python 3.7.1 64-bit
mod_wsgi (built today from github using python setup.py install)
I am working on getting an Apache server with Python running on a Windows machine, and I have the server configured correctly to get the Hello World! example from the mod_wsgi documentation working.
If I simply launch C:\Apache24\bin\httpd.exe, this works and I see Hello World at http://localhost:5000/
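For reference, the test application is essentially the minimal WSGI Hello World from the mod_wsgi documentation, roughly:

def application(environ, start_response):
    # Minimal WSGI app, approximately the mod_wsgi "Hello World" example
    status = '200 OK'
    output = b'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]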
Then I wanted to get it running as a service, so I call
httpd.exe -k install
In the ApacheMonitor I start the new Apache2.4 service but I get a failed to start error. In the Windows system event log it says Event ID: 7024 with a service specific error: Incorrect function.
When I run httpd.exe -k start -n "Apache2.4" -t it says Syntax OK
What I can't find is any more information about the service error. Nothing populates in the error.log file and I don't know where else to look, and I am asking for any further information on how to diagnose this.
Before I started configuring Apache to use mod_wsgi, launching the service was successful, so this happened after doing that, and I haven't configured anything else at this point.
Go to the Command Prompt, move to the apache/bin folder and type:
>httpd -t
This will give you more information about the error preventing Apache from starting.
I was getting this error after updating my httpd.conf file. The problem was that the final tag in my httpd.conf was left unclosed: </directory without the closing >.
Recently I was facing the same situation: Windows 10, Apache 2.4, a Django app, Python 3.8, failing to start as a service, service events showing error 7024...
So, after a lot of struggling and research, I would like to add my solution, even though this is an old question.
The solution was to add two environment variables - not in httpd.conf, not in Python code/config, but at the OS level (Windows > Environment Variables):
PYTHONHOME = c:\your\path\to\installed\python
PYTHONPATH = c:\your\path\to\installed\virtualenv
And magic! Apache now works as a service and the Django app is always available on localhost.
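To double-check that the service really picked up those variables, a throwaway WSGI view that reports the interpreter paths can help; this is just a sketch, not part of the original fix:

import sys

def application(environ, start_response):
    # If PYTHONHOME / PYTHONPATH were picked up, sys.prefix and sys.path
    # will point at the expected Python install and virtualenv.
    body = ("sys.prefix = %s\nsys.path = %s\n" % (sys.prefix, sys.path)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]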
I want to generate pop-ups for certain events in my python script. I am using 'notify-send' for that purpose.
subprocess.Popen(['notify-send', "Authentication", "True/False"])
The above command executes fine in a terminal, but when I run it from a systemd service it does not generate any pop-up.
When I check the logs there are no errors.
You first need to set the environment variables so that root can communicate with the currently logged-in user and send the notification to the GUI.
In my case, I did it as follows:
[Unit]
Description=< write your description>
After=systemd-user-sessions.service systemd-journald.service
[Service]
Type=simple
ExecStart=/bin/bash /<path to your script file>.sh
Restart=always
RestartSec=1
KillMode=process
IgnoreSIGPIPE=no
RemainAfterExit=yes
Environment="DISPLAY=:0" "XAUTHORITY=/home/<User name>/.Xauthority"
[Install]
WantedBy=multi-user.target
Here,
RemainAfterExit=yes
is very important to have in the service file.
Make sure to change all the parameters like Description, the user name, and the path to your script file.
Also, make sure that the script file has executable permission by running:
sudo chmod +x <path to your script file>.sh
Here my script file is written in bash, and it shows the notification using the same 'notify-send' command.
Now here the Environment parameter is doing all the magic.
You can read more about this behavior and the problem discussed over here.
I certainly don't know the complete working of these files or how this worked, but for me, it worked just fine.
So you can give it a try.
Please let me know whether this works in your case.
Running graphical applications requires the DISPLAY environment variable to be set. It is set when you run the command from the CLI, but not when it runs from systemd (unless you set it explicitly).
This issue is covered more in Writing a systemd service that depends on XOrg.
I agree with the general advice that systemd may not be the best tool for the job. You may be better off using an "auto start" feature of your desktop environment to run your app, which would set the correct things in the environment that you need.
If you are running notify-send for desktop notifications from cron, note that notify-send sends its values over D-Bus, so it needs to be told which bus to connect to. The address can be found by examining the DBUS_SESSION_BUS_ADDRESS environment variable of your desktop session and setting it to the same value.
Copy the values of DISPLAY and DBUS_SESSION_BUS_ADDRESS from your running session and set them in the unit's [Service] Environment= entries.
More info on the Arch Wiki:
https://wiki.archlinux.org/index.php/Cron#Running_X.org_server-based_applications
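In Python, the same idea can be applied by passing those variables explicitly to the subprocess; a sketch, where the display and bus address values are typical defaults and must match your actual session:

import os
import subprocess

env = dict(os.environ)
env.setdefault("DISPLAY", ":0")  # typical value; check your own session
env.setdefault("DBUS_SESSION_BUS_ADDRESS", "unix:path=/run/user/1000/bus")  # 1000 = your uid

subprocess.Popen(["notify-send", "Authentication", "True/False"], env=env)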
We have a project on nginx/Django, using VirtualBox.
When we try to run the command VBoxManage list runningvms under nginx, we get the following error:
Failed to initialize COM because the global settings directory '/.config/VirtualBox' is not accessible!
If we run this command in the console, it works fine.
What can we do to make it work correctly under nginx?
Other details:
nginx is run by the user "www-data"; the console command is run by another user (Administrator).
We have fixed the issue.
The environment variable HOME (os.environ['HOME']) was wrong. We changed it, and the problem was gone.
Using the VirtualBox Python API instead of ssh can really help with that problem, as RegularlyScheduledProgramming suggested - we added the Python API too.
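A minimal sketch of that kind of fix in the code that shells out to VBoxManage (the HOME path here is hypothetical, not our real one):

import os
import subprocess

# Give the worker process a usable HOME so VirtualBox can find
# ~/.config/VirtualBox.
os.environ["HOME"] = "/home/vboxuser"

output = subprocess.check_output(["VBoxManage", "list", "runningvms"])
print(output)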
Thanks!
I have a flask application that creates a directory with this code:
if not os.path.exists(target_path):
    os.makedirs(target_path)
The created directory's default permission is 0755, and the owner and group are _www.
So, only the owner can write to the directory.
I always have to change the permission manually to 0775 to be able to create files in it.
How can I make the default directory permission 0775? I use Apache as the web server.
This has something to do with Apache setup.
For Mac:
Open /System/Library/LaunchDaemons/org.apache.httpd.plist
Add <key>Umask</key> and <integer>002</integer>
Restart with sudo apachectl restart
I found the solution on this site; for Linux, I guess Setting the umask of the Apache user can give some hints.
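If you want to verify what umask the Apache worker actually runs with, you can read it from inside the app; a small sketch:

import os

def current_umask():
    # os.umask() returns the previous mask while setting a new one,
    # so set it to 0 and immediately restore it to read the value.
    mask = os.umask(0)
    os.umask(mask)
    return mask

print("umask is 0o%03o" % current_umask())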
Change the code to
if not os.path.exists(target_path):
    os.makedirs(target_path, mode=0o775)
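Note that os.makedirs applies the process umask on top of the mode argument, so with a 022 umask you may still end up with 0755; chmod-ing afterwards pins the bits regardless (a sketch, with an example path):

import os

target_path = "/var/www/app/uploads"  # example path

if not os.path.exists(target_path):
    os.makedirs(target_path, mode=0o775)
    # makedirs is subject to the umask, so set the permissions explicitly as well
    os.chmod(target_path, 0o775)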
I am having difficulty running Django on my Ubuntu server. I am able to run Django but I don't know how to run it as a service.
Distributor ID: Ubuntu
Description: Ubuntu 10.10
Release: 10.10
Codename: maverick
Here is what I am doing:
I log onto my Ubuntu server
Start my Django process: sudo ./manage.py runserver 0.0.0.0:80 &
Test: Traffic passes and the app displays the right page.
Now I close my terminal window and it all stops. I think I need to run it as a service somehow, but I can't figure out how to do that.
How do I keep my Django process running on port 80 even when I'm not logged in?
Also, I get that I should be linking it through Apache, but I'm not ready for that yet.
Don't use manage.py runserver to run your server on port 80. Not even for development. If you need that for your development environment, it's still better to redirect traffic from 8000 to 80 through iptables than to run your Django application as root.
In the Django documentation (or in other answers to this post) you can find out how to run it behind a real web server.
If, for any other reason, you need a process to keep running in the background after you close your terminal, you can't just run it with &: it will run in the background but keep your session's session id, and it will be closed when the session leader (your terminal) is terminated.
You can circumvent this behaviour by running the process through the setsid utility. See the setsid man page for more details.
Anyway, if after reading the other comments you still want to use manage.py runserver, just add "nohup" in front of your command line:
sudo nohup /home/ubuntu/django_projects/myproject/manage.py runserver 0.0.0.0:80 &
For this kind of job, since you're on Ubuntu, you should use the awesome Ubuntu upstart.
Just specify a file, e.g. django-fcgi, in case you're going to deploy Django with FastCGI:
/etc/init/django-fcgi.conf
and put the required upstart syntax instructions.
Then you would be able to start and stop your runserver command simply with:
start runserver
and
stop runserver
Examples of managing the deployment of Django processes with Upstart: here and here. I found those two links helpful when setting up this deployment structure myself.
The problem is that & runs a program in the background but does not separate it from the spawning process. However, an additional issue is that you are running the development server, which is only for testing purposes and should not be used for a production environment.
Use gunicorn or Apache with mod_wsgi. The documentation for Django and these projects should make it explicit how to serve it properly.
If you just want a really quick-and-dirty way to run your django dev server on port 80 and leave it there -- which is not something I recommend -- you could potentially run it in a screen. screen will create a terminal that will not close even if you close your connection. You can even run it in the foreground of a screen terminal and disconnect, leaving it to run until reboot.
If you are using virtualenv, the sudo command will execute the manage.py runserver command outside of the virtual environment context, and you'll get all kinds of errors.
To fix that, I did the following:
While working in the virtual env, type:
which python
outputs: /home/oleg/.virtualenvs/openmuni/bin/python
then type:
sudo !!
outputs: /usr/bin/python
Then all that's left to do is create a symbolic link from the global python to the python in the virtualenv that you currently use and would like to run on 0.0.0.0:80.
First move the global python binary to a backup location:
mv /usr/bin/python /usr/bin/python.old
That should do it:
ln -s /home/oleg/.virtualenvs/openmuni/bin/python /usr/bin/python
That's it! Now you can run sudo python manage.py runserver 0.0.0.0:80 in the virtualenv context!
Keep in mind that if you are using a Postgres DB in your local development setup, you'll probably need a root role.
Credit to #ydaniv