I am executing a Python script through a systemd service file. The Python script is responsible for creating three more scripts and then executing them one by one. I am also giving 777 permissions to all of them and to a folder in my home directory.
The problem is that when the service runs, none of the Python files or the folder actually receives the permissions.
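The permission step inside the script looks roughly like this minimal sketch (the paths are placeholders, not the real ones):
import os
# placeholder paths standing in for the three generated scripts and the folder
for path in ['/home/jetson/script1.py', '/home/jetson/script2.py',
             '/home/jetson/script3.py', '/home/jetson/upload_data']:
    os.chmod(path, 0o777)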
The following is my service file:
[Unit]
Description=systemd service to run upload script
[Service]
Type=simple
User=jetson
ExecStart=/usr/bin/python3 /home/project/file_upload.py
[Install]
WantedBy=multi-user.target
The folder I am trying to give permissions to is created by the Azure IoT Edge module.
Please let me know if I need to make any changes in the service file.
I resolved this issue by creating the folder and giving it permissions before starting the service, so now there are no issues.
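For reference, a minimal sketch of how that pre-creation step could instead live in the unit file itself, assuming a hypothetical folder path (the + prefix makes systemd run these commands with full privileges even when User= is set; it requires systemd 231+):
[Service]
Type=simple
User=jetson
# hypothetical folder; create it and open its permissions before the script runs
ExecStartPre=+/bin/mkdir -p /home/jetson/upload_data
ExecStartPre=+/bin/chmod 777 /home/jetson/upload_data
ExecStart=/usr/bin/python3 /home/project/file_upload.py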
I am using Django and Gunicorn to create a blog and want to run the config file that I have created.
This is the path to my config file (the conf folder is at the same level as manage.py):
/var/www/website.co.uk/blog/DjangoBlog/Articles/conf/gunicorn_config.py
I am in this path: /var/www/website.co.uk/blog/DjangoBlog/Articles and run this command:
gunicorn -c gunicorn_config.py Articles.wsgi
However, it returns the error:
Error: 'gunicorn_config.py' doesn't exist
Is Gunicorn looking in the wrong place for my config file?
Any help would be massively appreciated. I have not been able to solve this for a while.
Kind regards
You don't seem to be using the correct directory for the Gunicorn config file. Try this:
gunicorn -c ./conf/gunicorn_config.py Articles.wsgi
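Alternatively, an absolute path removes any dependence on the directory you run the command from:
gunicorn -c /var/www/website.co.uk/blog/DjangoBlog/Articles/conf/gunicorn_config.py Articles.wsgi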
I am trying to set up an MLflow tracking server on a remote machine as a systemd service.
I have an SFTP server running and have created an SSH key pair.
Everything seems to work fine except the artifact logging. MLflow does not seem to have permission to list the artifacts saved in the mlruns directory.
I create an experiment and log artifacts in this way:
import mlflow

uri = 'http://192.XXX:8000'
mlflow.set_tracking_uri(uri)
mlflow.create_experiment('test', artifact_location='sftp://192.XXX:_path_to_mlruns_folder_')
experiment = mlflow.get_experiment_by_name('test')

with mlflow.start_run(experiment_id=experiment.experiment_id, run_name=run_name) as run:
    mlflow.log_param(_parameter_name_, _parameter_value_)
    mlflow.log_artifact(_an_artifact_, _artifact_folder_name_)
I can see the metrics in the UI and the artifacts in the correct destination folder on the remote machine. However, in the UI I receive this message when trying to see the artifacts:
Unable to list artifacts stored
under sftp://192.XXX:path_to_mlruns_folder/run_id/artifacts
for the current run. Please contact your tracking server administrator
to notify them of this error, which can happen when the tracking
server lacks permission to list artifacts under the current run's root
artifact directory.
I cannot figure out why, as the mlruns folder has drwxrwxrwx permissions and all the subfolders have drwxrwxr-x. What am I missing?
UPDATE
Looking at it with fresh eyes, it seems weird that it tries to list files through sftp://192.XXX:; it should just look in the folder _path_to_mlruns_folder_/_run_id_/artifacts. However, I still do not know how to work around that.
The problem seems to be that by default the systemd service is run by root.
Specifying a user and creating an SSH key pair for that user to access the same remote machine worked.
[Unit]
Description=MLflow server
After=network.target
[Service]
Restart=on-failure
RestartSec=20
User=_user_
Group=_group_
ExecStart=/bin/bash -c 'PATH=_yourpath_/anaconda3/envs/mlflow_server/bin/:$PATH exec mlflow server --backend-store-uri postgresql://mlflow:mlflow@localhost/mlflow --default-artifact-root sftp://_user_@192.168.1.245:_yourotherpath_/MLFLOW_SERVER/mlruns -h 0.0.0.0 -p 8000'
[Install]
WantedBy=multi-user.target
_user_ and _group_ should be the same as those listed by ls -la in the mlruns directory.
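A quick way to confirm the key pair is in place for that user is to open the same connection as the service user (same placeholders as in the unit above):
sudo -u _user_ sftp _user_@192.168.1.245
If this prompts for a password, the key pair is not set up for that user yet.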
I'm trying to deploy a Django web app to Microsoft Azure. It is correctly deployed by the pipeline on Azure DevOps, but I get the error message (ModuleNotFoundError: No module named 'django') in the Azure portal and cannot reach my app via the URL.
The app also works properly locally.
Here is the whole error message: https://pastebin.com/mGHSS8kQ
How can I solve this error?
I understand you have tried the steps suggested in the SO thread Eyap shared, and a few things here already cover that. Kindly review these settings.
You can use this command instead: source /antenv3.6/bin/activate.
As a side note, antenv will be available only after a deployment is initiated. Kindly check the "/" path from SSH and you should see a folder with a name starting with antenv.
Browse to .python_packages/lib/python3.6/site-packages/ or .python_packages/lib/site-packages/ and verify that the file path exists.
Review the Application logs as well (/home/LogFiles folder) from Kudu: https://<yourwebapp-name>.scm.azurewebsites.net/api/logs/docker
The App Service deployment engine automatically activates a virtual environment and runs
pip install -r requirements.txt
The requirements.txt file must be in the project root for dependencies to be installed.
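Since the error is 'No module named django', the first thing to check is that Django itself is listed there. An illustrative requirements.txt might look like this (versions are examples only):
Django==3.1.2
gunicorn==20.0.4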
For Django apps, App Service looks for a file named wsgi.py within your app code, and then runs Gunicorn using the following command, where <module> is the name of the folder that contains wsgi.py:
gunicorn --bind=0.0.0.0 --timeout 600 <module>.wsgi
If you want more specific control over the startup command, use a custom startup command, replace <module> with the name of the folder that contains wsgi.py, and add a --chdir argument if that module is not in the project root.
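For example, assuming a hypothetical layout where wsgi.py sits in src/myapp/, a custom startup command could look like:
gunicorn --bind=0.0.0.0 --timeout 600 --chdir src myapp.wsgi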
For additional details, please check out these documents:
Configure a Linux Python app for Azure App Service
Quickstart: Create a Python app in Azure App Service on Linux
I am trying to start a Python script on a Raspberry Pi via a systemd service, but it cannot find any of the modules installed via pip3 and gives the error:
raspberrypi python3[1017]: ModuleNotFoundError: No module named 'paho'
Running the same script via an SSH terminal works fine. From my research, it could relate to the PYTHONPATH, though I have been unable to find it in .bashrc.
The modules that cannot be found are installed here:
./.local/lib/python3.7/site-packages (1.5.0)
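For reference, the install location and the user site directory can be cross-checked like this (paho ships as the paho-mqtt package):
pip3 show paho-mqtt
python3 -c "import site; print(site.getusersitepackages())"
If both point at /home/pi/.local/..., the service is simply not running as that user.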
This is the service file in /etc/systemd/user/mytest.service which starts the script unsuccessfully:
[Unit]
Description=TestScript Service
After=network-online.target
[Service]
Type=idle
ExecStart=/usr/bin/python3 /home/pi/MyProject/my_script.py > /home/pi/my_script.log 2>&1
[Install]
WantedBy=network-online.target
How can I let the service know where the modules are located?
Kind regards
Here is a quick fix to the problem:
By specifying a User in the .service file under [Service], the Python script will find all installed libraries.
[Service]
User=pi
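Applied to the unit above, a minimal sketch would be the following. Note that systemd does not interpret shell redirections in ExecStart, so (assuming the original redirection was meant for logging) the log file is expressed with StandardOutput/StandardError instead; the append: target needs systemd 240+:
[Unit]
Description=TestScript Service
After=network-online.target
[Service]
Type=idle
User=pi
ExecStart=/usr/bin/python3 /home/pi/MyProject/my_script.py
StandardOutput=append:/home/pi/my_script.log
StandardError=append:/home/pi/my_script.log
[Install]
WantedBy=network-online.target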
I am running Ubuntu with an Apache web server using mod_python. The root directory of the web server is /var/www.
I have a form for uploading files. The uploaded files should be stored in the folder /var/www/xy/uploads by a Python script.
But when I use this script, I receive an error:
Permission denied: '/var/www/xy/uploads/316.jpg'
Here are the relevant parts of the code that should handle the received files:
import os

targetdir_path = "/var/www/xy/uploads"
newid = 316
# binary mode, since the uploads are image files
f = open(os.path.join(targetdir_path, str(newid) + '.jpg'), "wb")
I assume there is a problem with the access rights of the uploads directory. They are set to drwxr-xr-x.
Can anyone explain to me what I need to change? Thanks for the help!
Your directory permissions only allow writing by the owner of the directory.
Try this:
sudo chown www-data:www-data /var/www/xy/
sudo chmod -R g+rw /var/www/xy/uploads
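To verify the change took effect:
ls -ld /var/www/xy/uploads
The group permissions should now include write (drwxrwxr-x rather than drwxr-xr-x).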
Also, I'd advise against using mod_python as it is deprecated; look into mod_wsgi instead.