When starting /etc/init.d/celeryd I get the error below. I made sure all the directories are readable and writable by the user I am starting it as. I even ran touch on every file referenced by a configuration directive, to make sure the files it writes to exist.
(community)community#community:~$ /etc/init.d/celeryd start
bot_server.settings.production
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1.bot_server#community.net: Traceback (most recent call last):
File "/home/community/community-forums/bot_server/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/djcelery/management/commands/celeryd_detach.py", line 26, in run_from_argv
detached().execute_from_commandline(argv)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/celery/bin/celeryd_detach.py", line 160, in execute_from_commandline
**vars(options)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/celery/bin/celeryd_detach.py", line 42, in detach
with detached(logfile, pidfile, uid, gid, umask, working_directory, fake):
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/celery/platforms.py", line 383, in detached
maybe_drop_privileges(uid=uid, gid=gid)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/celery/platforms.py", line 520, in maybe_drop_privileges
initgroups(uid, gid)
File "/home/community/.virtualenvs/community/local/lib/python2.7/site-packages/celery/platforms.py", line 473, in initgroups
return os.initgroups(username, gid)
OSError: [Errno 1] Operation not permitted
* Child terminated with errorcode 1
FAILED
I am using:
Django==1.6.3
celery==3.1.11
celerymon==1.0.3
django-celery==3.1.10
django-celery-with-redis==3.0
OSError errno 1 looks like a permissions issue:
OSError: [Errno 1] Operation not permitted
The issue appears to be in os.initgroups, which calls initgroups on the system -- see man initgroups.
From your prompt it looks like you're using some sort of role account, so I'd guess there is an issue with how your role/group permissions are set up with respect to the files Django is accessing.
Edit:
From man initgroups:
DESCRIPTION
The initgroups() function initializes the group access list by reading
the group database /etc/group and using all groups of which user is a member.
The additional group group is also added to the list.
Can community read /etc/group?
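You can answer that from the role account itself. The following probe (a sketch; diagnose_initgroups is a made-up helper name) checks both conditions the traceback depends on: whether the group database is readable, and whether the account has the privilege os.initgroups needs:

```python
import getpass
import os

def diagnose_initgroups(username, gid):
    """Probe the two things initgroups() needs: a readable group
    database and the privilege to reset the supplementary group
    list.  Returns findings instead of raising, so it is safe to
    run from the unprivileged role account."""
    findings = {
        "group_db_readable": os.access("/etc/group", os.R_OK),
        "running_as_root": os.geteuid() == 0,
    }
    try:
        # Needs CAP_SETGID (normally root); an unprivileged caller
        # gets OSError: [Errno 1] Operation not permitted -- the
        # exact error from the traceback above.
        os.initgroups(username, gid)
        findings["initgroups_ok"] = True
    except OSError as exc:
        findings["initgroups_ok"] = False
        findings["error"] = exc.strerror
    return findings

print(diagnose_initgroups(getpass.getuser(), os.getgid()))
```

If `initgroups_ok` is False while `group_db_readable` is True, the problem is the missing root privilege when celery tries to drop to the configured uid/gid, not file permissions.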
I was trying Streamlit 1.13 on Windows 10, where I encountered the following error:
Z:\>streamlit run st1.py
2022-10-04 02:25:28.218 INFO numexpr.utils: NumExpr defaulting to 4 threads.
Welcome to Streamlit!
If you're one of our development partners or you're interested in getting
personal technical support or Streamlit updates, please enter your email
address below. Otherwise, you may leave the field blank.
http://localhost:8501
Traceback (most recent call last):
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\tornado\http1connection.py", line 276, in _read_message
delegate.finish()
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\tornado\routing.py", line 268, in finish
self.delegate.finish()
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\tornado\web.py", line 2322, in finish
self.execute()
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\tornado\web.py", line 2344, in execute
self.handler = self.handler_class(
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\tornado\web.py", line 239, in __init__
self.initialize(**kwargs) # type: ignore
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\streamlit\web\server\routes.py", line 49, in initialize
self._pages = get_pages()
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\streamlit\web\server\server.py", line 397, in <lambda>
for page_info in source_util.get_pages(
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\streamlit\source_util.py", line 155, in get_pages
"script_path": str(main_script_path.resolve()),
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\pathlib.py", line 1215, in resolve
s = self._flavour.resolve(self, strict=strict)
File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\pathlib.py", line 215, in resolve
s = self._ext_to_normal(_getfinalpathname(s))
OSError: [WinError 1] Incorrect function: 'st1.py'
The installation of Streamlit was complete: there was initially a dependency conflict, which I fixed. I also installed it in Anaconda, and the error was the same.
I checked the exact Streamlit file which raised the exception and changed the script to print the actual path of the script; the path was correct, and the file was there.
#File "C:\Users\...\AppData\Local\Programs\Python\Python39\lib\site-packages\streamlit\source_util.py", line 155, in get_pages
def get_pages(main_script_path_str: str) -> Dict[str, Dict[str, str]]:
    global _cached_pages
    print("main_script_path_str=", main_script_path_str)  # DEBUG
    # Avoid taking the lock if the pages cache hasn't been invalidated.
    pages = _cached_pages
    if pages is not None:
        return pages
The problem was the location of the script on a RAM disk. Moving it to a regular disk resolved it.
Another question with this Windows error reminded me that the drivers of Windows RAM drives, at least the ones I've used such as ImDisk or OSFMount, seem to lack support for some OS file functions.
OSError: [WinError 1] Incorrect function
Rust, for example, also produced errors when building source located on any of these RAM drives on Windows.
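Before pointing Streamlit (or any other tool) at a drive, you can check whether it has this problem by probing the same pathlib call that fails in the traceback. This is a diagnostic sketch, not part of Streamlit; the helper name is made up:

```python
import tempfile
from pathlib import Path

def supports_final_path_resolution(directory):
    """Create a scratch file in *directory* and try to fully
    resolve its path.  On Windows, Path.resolve() goes through the
    same _getfinalpathname call that fails in the traceback above,
    so a RAM drive with an incomplete driver returns False here."""
    probe = Path(directory) / "_resolve_probe.txt"
    try:
        probe.write_text("probe")
        probe.resolve(strict=True)  # strict: raise instead of guessing
        return True
    except OSError:
        return False
    finally:
        if probe.exists():
            probe.unlink()

print(supports_final_path_resolution(tempfile.gettempdir()))
```

On a normal disk this prints True; on an affected RAM drive it should print False instead of crashing, which confirms the driver is the culprit.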
So I've been working on a project for a while and haven't really changed the models at all, so I haven't done any migrations. Now I needed to add two new fields and delete another one, which should normally be fine. I'm using django-tenants, so my command to run the migrations is ./manage.py migrate_schemas.
Now, whenever I run that, I get the error KeyError: "prune" (the full traceback is below) in what seems to be internal Django code.
Afterwards I tried reverting my changes, so there was no new migration, and running the command again gave the same error. I also thought the database might have gotten "dirty" at some point, so I tried migrating a clean database from scratch, with the same result. The migrations don't even start.
Has anyone ever encountered something similar?
The full traceback (I have simplified the paths):
[standard:public] === Starting migration
Traceback (most recent call last):
File ":directory/./manage.py", line 22, in <module>
main()
File ":directory/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "$venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "$venv/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "$venv/lib/python3.9/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "$venv/lib/python3.9/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "$venv/lib/python3.9/site-packages/django_tenants/management/commands/migrate_schemas.py", line 52, in handle
executor.run_migrations(tenants=[self.PUBLIC_SCHEMA_NAME])
File "$venv/lib/python3.9/site-packages/django_tenants/migration_executors/standard.py", line 11, in run_migrations
run_migrations(self.args, self.options, self.codename, self.PUBLIC_SCHEMA_NAME)
File "$venv/lib/python3.9/site-packages/django_tenants/migration_executors/base.py", line 53, in run_migrations
MigrateCommand(stdout=stdout, stderr=stderr).execute(*args, **options)
File "$venv/lib/python3.9/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "$venv/lib/python3.9/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "$venv/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 188, in handle
if options["prune"]:
KeyError: 'prune'
This looks like a version incompatibility.
The prune option was added to Django a couple of months ago (Jan 22, 2022).
If you want to use a newer Django, you have to patch django-tenants manually and add the --prune argument:
def add_arguments(self, parser):
    parser.add_argument(
        '--prune', action='store_true', dest='prune',
        help='Delete nonexistent migrations from the django_migrations table.',
    )
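The crash itself is easy to reproduce outside Django (a sketch with made-up option values): django-tenants re-executes Django's MigrateCommand with an options dict built from its own, older argument parser, which never defines "prune", while newer Django versions read that key unconditionally in handle().

```python
# django-tenants builds its options dict without "prune":
options_from_tenants = {"database": "default", "fake": False}

# Newer Django's migrate.handle() does the equivalent of
# `if options["prune"]:` -- which is the crash in the traceback:
try:
    options_from_tenants["prune"]
except KeyError as exc:
    print("KeyError:", exc)

# Once add_arguments() registers --prune, argparse supplies a
# default (False), so the key is always present:
options_patched = dict(options_from_tenants, prune=False)
assert options_patched["prune"] is False
```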
PS: I couldn't find an existing issue related to prune, so you may want to create a new one ;)
I have the following Snakefile (using Snakemake v7.6.0):
from snakemake.remote.GS import RemoteProvider as GSRemoteProvider
GS = GSRemoteProvider(project="my-gcp-project-id")
rule example2:
    output:
        GS.remote('my-gcs-bucket/path/to/test.txt', stay_on_remote=True)
    shell: 'touch test.txt && gsutil mv test.txt gs://my-gcs-bucket/path/to/test.txt'
When I run this in Linux (using WSL2), the job completes successfully -- the file is uploaded to GCS, and Snakemake reports that it finished the job:
...
Finished job 0.
1 of 1 steps (100%) done
However, when I run the same rule in Windows (using GitBash), I get an error:
Traceback (most recent call last):
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\__init__.py", line 722, in snakemake
success = workflow.execute(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\workflow.py", line 1110, in execute
success = self.scheduler.schedule()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\scheduler.py", line 510, in schedule
self.run(runjobs)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\scheduler.py", line 558, in run
executor.run_jobs(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 120, in run_jobs
self.run(
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 486, in run
future = self.run_single_job(job)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 539, in run_single_job
self.cached_or_run, job, run_wrapper, *self.job_args_and_prepare(job)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\executors\__init__.py", line 491, in job_args_and_prepare
job.prepare()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\jobs.py", line 770, in prepare
f.prepare()
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\io.py", line 618, in prepare
raise e
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\site-packages\snakemake\io.py", line 614, in prepare
os.makedirs(dir, exist_ok=True)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "C:\Users\my-user-name\Miniconda3\envs\my-miniconda-env-name\lib\os.py", line 223, in makedirs
mkdir(name, mode)
FileNotFoundError: [WinError 161] The specified path is invalid: 'gs://my-gcs-bucket'
If the file is already present in GCS (e.g., because I ran the rule in Linux already), the Windows job can see that the file is there:
Building DAG of jobs...
Nothing to be done (all requested files are present and up to date).
I realize that the example above could be refactored to touch {output} and allow Snakemake itself to upload the file. However, I have a use-case where I have large files that I want to be able to write directly to GCS (e.g., streaming and processing files that would otherwise be too large to fit on my local disk all at once). Thus, the example above is meant to represent a wider type of use-case.
Is my usage in Linux incorrect? If not, is there a way to make this work on Windows (in a POSIX-like shell such as GitBash) as well? I'm interested in understanding why this does not work in the Windows shell.
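For what it's worth, the traceback shows that snakemake.io.prepare() calls os.makedirs() on the parent of the output path; with stay_on_remote=True the path keeps its gs:// prefix, which Linux tolerates (a colon is a legal filename character there) but Windows rejects as an invalid drive specification (WinError 161). The kind of guard that would be needed can be sketched like this (is_remote is a hypothetical helper, not Snakemake API):

```python
from urllib.parse import urlparse

def is_remote(path):
    """True for scheme-prefixed paths such as gs://bucket/key,
    which must not be handed to os.makedirs() on the local
    filesystem -- on Windows the 'gs:' prefix is parsed as a
    (nonexistent) drive letter and mkdir fails."""
    return urlparse(path).scheme in {"gs", "s3", "http", "https"}

print(is_remote("gs://my-gcs-bucket/path/to/test.txt"))  # True
print(is_remote("results/test.txt")) 
```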
I am trying to distribute a small number of files to each node in a Ray cluster on AWS EC2, using the file_mounts block in the configuration file:-
file_mounts: {
    "./": "./run_files"
}
The cluster launches with only a master node, onto which the contents of the run_files directory have been correctly copied. However, the two worker nodes that were requested do not launch. If I omit the file_mounts section, the workers launch. The Ray monitor indicates that there is a problem locating the file libtcl.so in the matplotlib sub-directory of the Anaconda3 installation. This file is on the correct path on the master node so it appears that the setup on worker nodes is not working properly:-
$ ray exec ray_conf.yaml 'tail -n 100 -f /tmp/ray/session_*/logs/monitor*'
2019-05-29 19:36:14,019 INFO updater.py:95 -- NodeUpdater: Waiting for IP of i-073950262949fe9a8...
2019-05-29 19:36:14,019 INFO log_timer.py:21 -- NodeUpdater: i-073950262949fe9a8: Got IP [LogTimer=362ms]
2019-05-29 19:36:14,025 INFO updater.py:272 -- NodeUpdater: Running tail -n 100 -f /tmp/ray/session_*/logs/monitor* on 54.175.173.233...
==> /tmp/ray/session_2019-05-29_23-35-49_842129_4407/logs/monitor.err <==
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/monitor.py", line 376, in <module>
redis_password=args.redis_password)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/monitor.py", line 54, in __init__
self.load_metrics)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 349, in __init__
self.reload_config(errors_fatal=True)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 523, in reload_config
raise e
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 516, in reload_config
new_config["worker_start_ray_commands"]
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 790, in hash_runtime_conf
add_content_hashes(local_path)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 778, in add_content_hashes
add_hash_of_file(fpath)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/ray/autoscaler/autoscaler.py", line 764, in add_hash_of_file
with open(fpath, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: './anaconda3/pkgs/matplotlib-2.1.0-py36hba5de38_0/lib/libtcl.so'
==> /tmp/ray/session_2019-05-29_23-35-49_842129_4407/logs/monitor.out <==
(Note that this problem follows on from the question "Workers not being launched on EC2 by ray", I have continued in a new question because the source of the error is now more specifically identified.)
I think that the libtcl.so error message is very misleading. The problem is that the file_mounts remote path cannot be the home directory on the workers (neither ./ nor ~/ works); it has to be a sub-directory. So the following was successful:-
file_mounts: {"~/run_files": "./run_files"}
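As for why the libtcl.so message appears at all: the traceback shows the autoscaler hashing every file under the mounted paths, and open() on a dangling symlink (common inside Anaconda's pkgs tree) raises exactly this FileNotFoundError. A minimal sketch of that failure mode, inferred from the traceback rather than verified against Ray's source:

```python
import hashlib
import os
import tempfile

def hash_tree(root):
    """Hash every file under *root*, roughly the way the
    autoscaler's add_content_hashes() does.  A dangling symlink is
    listed by os.walk() but cannot be opened, so open() raises the
    FileNotFoundError seen in monitor.err."""
    digest = hashlib.sha1()
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            with open(os.path.join(dirpath, name), "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()

# Demonstrate with a deliberately broken symlink, like a stale
# libtcl.so link inside an Anaconda pkgs tree:
with tempfile.TemporaryDirectory() as tmp:
    os.symlink(os.path.join(tmp, "missing"), os.path.join(tmp, "libtcl.so"))
    try:
        hash_tree(tmp)
    except FileNotFoundError as exc:
        print("FileNotFoundError:", exc.filename)
```

Mounting to the home directory pulls the whole Anaconda installation into that hash walk, which is consistent with a sub-directory mount fixing the problem.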
So I have been following this documentation so that I can print to my thermal printer through Python.
I am running Ubuntu and have the pyusb module installed.
The printer is a Rongta RP58, and its idVendor and idProduct were found through the lsusb command (0fe6:811e).
Just like in the instructions, I enter:
from escpos.printer import Usb
p = Usb(0x0fe6, 0x811e)
but the error appears
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/testerdell/.local/lib/python2.7/site-packages/escpos/printer.py", line 51, in __init__
self.open()
File "/home/testerdell/.local/lib/python2.7/site-packages/escpos/printer.py", line 62, in open
check_driver = self.device.is_kernel_driver_active(0)
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/core.py", line 1061, in is_kernel_driver_active
self._ctx.managed_open()
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/core.py", line 102, in wrapper
return f(self, *args, **kwargs)
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/core.py", line 120, in managed_open
self.handle = self.backend.open_device(self.dev)
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/backend/libusb1.py", line 786, in open_device
return _DeviceHandle(dev)
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/backend/libusb1.py", line 643, in __init__
_check(_lib.libusb_open(self.devid, byref(self.handle)))
File "/home/testerdell/.local/lib/python2.7/site-packages/usb/backend/libusb1.py", line 595, in _check
raise USBError(_strerror(ret), ret, _libusb_errno[ret])
usb.core.USBError: [Errno 13] Access denied (insufficient permissions)
The documentation suggested creating a udev rule that grants access, but it hasn't worked for me.
sudo nano /etc/udev/rules.d/99-escpos.rules
is the command I use to edit the file, and this is what's inside
SUBSYSTEM=="usb", ATTRS{idVendor}=="0fe6", ATTRS{idProduct}=="811e", MODE="0664", GROUP="dialout"
After changing anything in that file, I run this command:
sudo service udev restart
I am not sure how else I can give Python access to the USB device.
When using direct root access with
sudo su root
the system says "ImportError: No module named escpos.printer" (escpos was installed under my user's ~/.local, which root doesn't see). I wouldn't want to enter the password for root access every single time anyway.
Is there a problem with my udev rules, groups, mode, user permissions?
Any help is greatly appreciated! Thanks in advance :)
Change the mode from 664 to 666 in your udev rule and it will work.
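To see why 0664 fails and 0666 works: libusb opens the device node read/write. With MODE="0664" only the owner (root) and the group ("dialout") may write; every other account gets the EACCES that pyusb surfaces as "Access denied (insufficient permissions)". The permission bits spell this out:

```python
import stat

def other_can_read_write(mode):
    """True if the 'others' class has both read and write access,
    which is what libusb needs to open the device node."""
    return bool(mode & stat.S_IROTH and mode & stat.S_IWOTH)

print(other_can_read_write(0o664))  # False: others are read-only
print(other_can_read_write(0o666))  # True: others can read and write
```

A less permissive alternative is to keep 0664 and add your user to the dialout group (`sudo usermod -aG dialout $USER`, then log out and back in). Note also that `sudo udevadm control --reload-rules && sudo udevadm trigger` may be needed for rule changes to take effect without replugging the printer.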