When I deploy my Django project to CentOS, the uploaded media files are not accessible through their URLs.
The media files uploaded from the Django admin are owned by the user and group nobody:nobody.
When I change the ownership to my user, they become accessible. How can I allow these files to be accessed without running chmod/chown explicitly every time?
You can use ACLs (Access Control Lists).
With this method, you can define the default owner and permissions for content created in a folder.
On CentOS, you can install the tools with the following command:
yum install acl
Once installed, the getfacl command returns the ACL setup for a specific file or folder:
getfacl /path/to/your/folder
The setfacl command sets up the access.
To set default permissions:
setfacl -Rm d:u:username:rwx,d:g:groupname:rwx /path/to/your/folder
The content created in this folder will inherit the default ACL.
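If you prefer to apply this once from a setup or deployment script rather than by hand, a minimal Python sketch could look like the following (username and the folder path are the same placeholders used in the commands above):

import subprocess

media_root = "/path/to/your/folder"  # e.g. your Django MEDIA_ROOT
user = "username"                    # the user that needs access to the uploaded files

# Grant the user access to existing content (u:) and make it the default (d:u:)
# so files created later by the web server inherit it automatically.
subprocess.run(
    ["setfacl", "-Rm", f"u:{user}:rwx,d:u:{user}:rwx", media_root],
    check=True,
)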
Run ls -la to see the owner of the file, then change the owner so it can be reached from outside, for example:
chown admin <file>
First-time question, so please excuse my poor question formatting. I'm running a reasonably up-to-date Python and AWS CLI v2 installed with the MSI; the config and credentials files both have values, no environment variables are set, Windows 10, using Command Prompt.
When I input:
C:\Users\correctuser> aws --version
This is what returns:
aws-cli/2.5.2 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
When I type:
C:\Users\correctuser> aws configure list
This is what I get:
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key <not set> None None
secret_key <not set> None None
region us-west-2 config-file ~/.aws/config
I have both the access key ID and the secret access key set in the 'credentials' file at C:\users\correctuser\.aws\credentials, and the region and format set in C:\users\correctuser\.aws\config.
C:\users\correctuser\.aws\config:
[default]
region = us-west-2
output = json
C:\users\correctuser\.aws\credentials:
[default]
aws_access_key_id = thisisfakeaccesskeyID
aws_secret_access_key = thisisfakesecretaccesskeyID
And then when I type:
C:\Users\GitUser>aws configure
AWS Access Key ID [None]: thisisfakeaccesskeyID
AWS Secret Access Key [None]: thisisfakesecretaccesskeyID
Default region name [us-west-2]:
Default output format [json]:
[Errno 13] Permission denied: 'c:\\users\\GitUser\\.aws'
I've heard that a missing backslash at the end of a file path can sometimes cause an error, though I don't think that's the issue here. I've also tried running Command Prompt as administrator, and that didn't help.
What do y'all think?
To fix the problem you have to give write permission to the files.
If you are on Unix/macOS:
# Set permissions
sudo chmod -R 777 /{Your_Path}/.aws/credentials
sudo chmod -R 777 /{Your_Path}/.aws/config
On Windows:
1. Right-click the target file, select Properties, then select the Security tab.
2. Click Advanced and make sure inheritance is disabled.
3. Click Apply, then click Edit in the Security menu.
4. Remove all users except the Admin user, which should have full control (the Admin account should have all checkboxes checked in the Allow column except Special permissions).
5. Click Apply and then click OK.
I first tried giving the permissions, but that alone did not fix the error.
When I checked the location, the config.ini and credentials.ini files did not exist.
After I created the files by editing a text file and saving it as those .ini files, the error disappeared.
I am using python to dump csv data into a database using Psycopg2. I need to give Postgres permission to a specific filepath in order to use the COPY command (documentation: https://www.postgresql.org/docs/10/static/sql-copy.html). I need to give permission to a specific directory path route and file to avoid the following error:
COPY database.table_name FROM '/home/development_user/Documents/App/CSV/filename.csv' delimiter ',' csv header
ERROR: could not open file "/home/development_user/Documents/App/CSV/filename.csv" for reading: Permission denied
To simplify things, I want to add postgres to the development user's group. That way, postgres should have the group read permissions that the development user can easily define on a path-by-path basis. I added the postgres user to the development_user group using the following command and verified that it was successful:
$ sudo usermod -a -G development_user postgres
$ groups postgres
postgres : postgres development_user
Here is the output of a permissions path trace using the namei -l [path] command:
$ namei -l /home/development_user/Documents/App/CSV/filename.csv
drwxr-xr-x root root /
drwxr-xr-x root root home
drwxr-x--- development_user development_user development_user
drwxr-xr-x development_user development_user Documents
drwxr-xr-x development_user development_user App
drwxrwxr-x development_user development_user CSV
-rw-rw-r-- development_user development_user filename.csv
As you can see, anyone in the development_user group should now have read (r) and execute (x) permissions on all directories in the path, and read and write permissions on the final file. If postgres tried to access the same file only as "other", it would be blocked by the permissions on the development_user home directory.
However, when I try to access the file I get the permission error noted above. When I open up the development_user home directory with read and execute permissions for "other", as in the command below, Postgres is able to read the file:
$ chmod o+rx /home/development_user
However, I do not want to grant "other" read and execute permissions on the development_user home directory, and I can't see why the postgres user is not able to use the group permissions outlined above to access the same file, since I added postgres to the development_user group.
Any ideas whether giving postgres permission to read a file by adding it to the user's group is a viable strategy? I do not want to use another solution, such as the ones mentioned here (PostgreSQL - inconsistent COPY permissions errors) or here (Postgres ERROR: could not open file for reading: Permission denied), which advise opening up permissions by setting the file owner to postgres:postgres or opening the directory permissions too widely, such as allowing all users to read and execute on the development home directory. I also do not want to create another directory under the system directories and be forced to save files there, as suggested here (psql ERROR: could not open file "address.csv" for reading: No such file or directory).
From the PostgreSQL Manual:
COPY naming a file or command is only allowed to database superusers,
since it allows reading or writing any file that the server has
privileges to access.
So the PostgreSQL user doing the copying must be a database superuser.
You can do this with the ALTER ROLE command:
ALTER ROLE <rolename> WITH SUPERUSER;
Also:
COPY with a file name instructs the PostgreSQL server to directly read
from or write to a file. The file must be accessible by the PostgreSQL
user (the user ID the server runs as) and the name must be specified
from the viewpoint of the server.
...
Files named in a COPY command are read or written directly by the
server, not by the client application. Therefore, they must reside on
or be accessible to the database server machine, not the client.
The default system user that PostgreSQL runs as is postgres. Ensure that this user has access to the files you want to copy. You can test this by running sudo -i -u postgres to become the postgres user and then trying to view the files.
The particular way I solved this problem was to use the psycopg2 cursor class function copy_expert (docs: http://initd.org/psycopg/docs/cursor.html). copy_expert allows you to use STDIN, therefore bypassing the need to grant superuser privileges to the postgres user.
From Postgres COPY Docs (https://www.postgresql.org/docs/current/static/sql-copy.html):
Do not confuse COPY with the psql instruction \copy. \copy invokes
COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in
a file accessible to the psql client. Thus, file accessibility and
access rights depend on the client rather than the server when \copy
is used.
You can also leave the permissions on the development_user home folder and the App folder set strictly.
sql = "COPY table_name FROM STDIN DELIMITER '|' CSV HEADER"
self._cursor.copy_expert(sql, open(csv_file_name, "r"))
A slight variation on @jonnyjandles' answer, since that one shows a mysterious self._cursor; a more typical invocation might look like:
copy_command = "COPY table_name FROM STDIN CSV HEADER;"
with connection.cursor() as cursor:
    cursor.copy_expert(copy_command, open(some_file_path, "r"))
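For completeness, here is a minimal end-to-end sketch of the same approach (the connection parameters are placeholders, not from the original posts; the file path is the one from the question):

import psycopg2

# Connection details here are placeholders -- adjust to your environment.
conn = psycopg2.connect(dbname="mydb", user="development_user", host="localhost")
copy_command = "COPY table_name FROM STDIN DELIMITER ',' CSV HEADER;"

# The CSV is read by this client process, not by the PostgreSQL server,
# so only this process needs read access to the file.
with open("/home/development_user/Documents/App/CSV/filename.csv", "r") as f:
    with conn.cursor() as cursor:
        cursor.copy_expert(copy_command, f)

conn.commit()
conn.close()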
I've got a Python script named test.cgi in /Library/WebServer/CGI-Executables. I have an index.html file in /Library/WebServer/Documents. My HTML file contains a form that posts to the CGI script, and that works fine. When my script attempts to write a file, I get a permission denied error on the output path (e.g. /Users/user/Documents/pictures/lol.jpg).
It doesn't matter what I specify as the output dir; I get the same error message. I've tried changing the permissions on the cgi-bin folder and on the script, but that doesn't work either. Any suggestions?
On Linux, a web server normally runs as an unprivileged user and group. Often user=www-data and group=www-data, but it depends on your setup. The CGI inherits this user and group.
To create a file as www-data, you need to ensure the directory is writable by that user.
One common way is to make sure that the directory is in group www-data and writable. The following commands are an example:
$ chgrp www-data /Users/user/Documents/pictures
$ chmod g+rwx /Users/user/Documents/pictures
This will only work if you are yourself in group www-data (or root).
You might want to make existing files in that directory writable:
$ chgrp www-data /Users/user/Documents/pictures/*
$ chmod g+rw /Users/user/Documents/pictures/*
You also need to check that all the directories above /Users/user/Documents/pictures are accessible to www-data. So chgrp/chmod them as well if they are not open to anyone.
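If you are not sure which user and group the CGI actually runs as, a small illustrative snippet like this (not from the original answer; the pictures path is the one from this thread) can be dropped into the script:

import grp
import os
import pwd

# Required CGI header before any other output.
print("Content-Type: text/plain")
print()

# Which user/group is this CGI process running as?
print("running as user:", pwd.getpwuid(os.getuid()).pw_name)
print("running as group:", grp.getgrgid(os.getgid()).gr_name)

# Can that user write to the target directory?
target = "/Users/user/Documents/pictures"
print("target writable:", os.access(target, os.W_OK))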
Looks like you don't have the write permissions at some point along the way to /Users/user/Documents/pictures/lol.jpg - you should modify permissions in there accordingly (whilst bearing in mind security implications)
First I created a new group, tcpdumpers, added the current user to that group, and then edited /etc/sudoers according to the top answer of this link: Running commands from within python that need root access
import os
os.system("% sudo tcpdump")
os.system("cd /var/www/tbg/media/uploads/")
os.system("mkdir " + str(request.user.id))
os.system("cat .htaccess")
os.system("chown www-data:www-data .htaccess")
myfile = open(".htaccess", "a")
This yields the error Permission denied: .htaccess
This is Apache within Django, so request.user.id is the user object ID in Django.
I also used the touch command instead of cat which yielded identical results. It seems that Python has no rights at all no matter what I do.
EDIT: I'd like to point out that no directory is created either with the mkdir command. So the problem starts there, not with .htaccess. It doesn't even exist.
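An explanatory aside on the code above (not part of the original post): each os.system() call starts its own shell, so the cd does not carry over to the later calls; they all run in the Apache/Django process's original working directory. A minimal illustration:

import os

os.system("cd /var/www/tbg/media/uploads/")  # the cd happens in a child shell that exits immediately
print(os.getcwd())                           # the Python process's working directory is unchanged

os.chdir("/var/www/tbg/media/uploads/")      # this changes it for the Python process itself
print(os.getcwd())                           # now the uploads directory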
I am trying to upload an image through the admin page, but it keeps saying:
[Errno 13] Permission denied: '/path/to/my/site/media/userfolder/2014/05/26'
The folders userfolder/2014/05/26 are created dynamically during the upload.
In the traceback, I found that the error occurs in /usr/lib64/python2.6/os.py, line 157, while calling
mkdir(name, mode)
meaning it cannot create the folders because it does not have permission to do so.
The server runs openSUSE. In httpd.conf, I have this:
<Directory /path/to/my/site/media>
Order allow,deny
Allow from all
</Directory>
Do I have to chmod or chown something?
You need to change the directory ownership or permissions so that the web server process can write to the directory.
To change ownership of the directory, use chown:
chown -R user-id:group-id /path/to/the/directory
To see which user the web server process runs as (change httpd accordingly):
ps aux | grep httpd | grep -v grep
OR
ps -efl | grep httpd | grep -v grep
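Once you know which user the server runs as, you can also check, from code running under that same user (for example inside a Django view), whether it can write to the media directory. A small illustrative check, using the media path from the question:

import os
import pwd

media_root = "/path/to/my/site/media"  # MEDIA_ROOT from the question
print("process runs as:", pwd.getpwuid(os.geteuid()).pw_name)
print("can write to media root:", os.access(media_root, os.W_OK))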
This may also happen if you have a slash before the folder name, which makes the path absolute (anchored at the filesystem root, where you usually don't have write permission):
path = '/folder1/folder2'
OSError: [Errno 13] Permission denied: '/folder1'
raises the error, but this relative path works fine:
path = 'folder1/folder2'
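A quick way to see what each form of the path actually points at (illustration only):

import os

print(os.path.abspath('/folder1/folder2'))  # absolute: anchored at the filesystem root
print(os.path.abspath('folder1/folder2'))   # relative: resolved against the current working directory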
You are probably hitting the problem when the maybe_download function in base.py makes a download request.
There is a conflict in the permissions of the temporary files; I couldn't work out a way to change those permissions myself, but I was able to work around the problem.
Do the following:
1. Download the four .gz files of the MNIST data set from http://yann.lecun.com/exdb/mnist/
2. Make a folder named MNIST_data (or a name of your choice) in your working directory, or in the site-packages folder under tensorflow\examples.
3. Copy the files directly into that folder.
4. Copy the address of the folder (it will probably be something like C:\Python\Python35\Lib\site-packages\tensorflow\examples\tutorials\mnist\MNIST_data).
5. Change each "\" to "/", since "\" is the escape character in Python string literals.
6. Lastly, if you are following the tutorials, your call would be mnist = input_data.read_data_sets("MNIST_data/", one_hot=True); change the "MNIST_data/" parameter to your folder location, which in my case would be the path used in the snippet below.
Then it's all done.
Hope it works for you.
Another option is to ensure the file is not open anywhere else on your machine.
Supplementing @falsetru's answer: run id in the terminal to get your user id and group id.
Go to the directory where you are facing the problem, open a terminal, type id and press Enter. This will show your user id and group id. Then run:
chown -R user-id:group-id .
Replace user-id and group-id with your own values; the . at the end refers to the current directory.
chown -R 1001:1001 .    # that was my case
Simply try:
sudo cp /source /destination
Just close the file in case it is open in the background; the error disappears by itself.
In my case, I was using the Python 3 os package to perform operations on a directory for which I didn't have sufficient permissions; the problem was resolved by running the Python file with sudo (as root), i.e.:
sudo python python_file_name.py
Any other utility you plan to use to chmod or chown that directory will likewise only work when you run it with sudo.
# file_name.py
import os

base_path = "./parent_dir/child_dir/"
file_name = "new_dir"  # placeholder: name of the directory to create under base_path

user = os.stat(base_path).st_uid   # uid of the current owner of the dir
group = os.stat(base_path).st_gid  # gid of the current group owner of the dir
print("Present owner and group of the specified path")
print("Owner:", user)
print("Group:", group)

os.chown(base_path, user, group)   # change directory ownership (requires root unless you already own it)
print("\nOwner id of the file:", os.stat(base_path).st_uid)
print("Group id of the file:", os.stat(base_path).st_gid)

os.mkdir(base_path + file_name, mode=0o666)
Run the above file with sudo:
sudo python file_name.py
Hope this answer works out for you.
Forever indebted to stackoverflow and the dev community. All hail the devs.