I have a process running as root that needs to spawn child processes to run as various users. That part is working fine, but I need a way to communicate between the child processes and the parent process.
When I try using multiprocessing.Manager() with some lists, dictionaries, Locks, Queues, etc., I always get permission-denied errors in the process that has lowered permissions.
Is there a way to grant access to a user or PID to fix this?
Basic code that represents what I'm running into (run as root):
#!/usr/bin/env python
import multiprocessing, os
manager = multiprocessing.Manager()
problematic_list = manager.list()
os.setuid(43121) # or whatever your user is
problematic_list.append('anything')
Result:
root@liberator:/home/bscable# python asd.py
Traceback (most recent call last):
File "asd.py", line 8, in <module>
problematic_list.append('anything')
File "<string>", line 2, in append
File "/usr/lib/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/lib/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python2.7/multiprocessing/connection.py", line 169, in Client
c = SocketClient(address)
File "/usr/lib/python2.7/multiprocessing/connection.py", line 293, in SocketClient
s.connect(address)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 13] Permission denied
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/util.py", line 261, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 200, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/managers.py", line 625, in _finalize_manager
process.terminate()
File "/usr/lib/python2.7/multiprocessing/process.py", line 137, in terminate
self._popen.terminate()
File "/usr/lib/python2.7/multiprocessing/forking.py", line 165, in terminate
os.kill(self.pid, signal.SIGTERM)
OSError: [Errno 1] Operation not permitted
The first exception is the one that matters here; the second traceback just shows that, after setuid(), the now-unprivileged process can no longer SIGTERM the root-owned manager process during cleanup.
Python (at least 2.6) uses a UNIX socket for this communication, which looks like this:
/tmp/pymp-eGnU6a/listener-BTHJ0E
We can grab that path and change the permissions on it like so:
#!/usr/bin/env python
import multiprocessing, os, grp, pwd
manager = multiprocessing.Manager()
problematic_list = manager.list()
fullname = manager._address
dirname = os.path.dirname(fullname)
gid = grp.getgrnam('some_group').gr_gid
uid = pwd.getpwnam('root').pw_uid # should always be 0, but you never know
os.chown(dirname, uid, gid)
os.chmod(dirname, 0770)
os.chown(fullname, uid, gid)
os.chmod(fullname, 0770)
os.setgid(gid)
os.setuid(43121) # or whatever your user is
problematic_list.append('anything')
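For what it's worth, the same approach should carry over to Python 3 with only the octal-literal syntax changed (a sketch, assuming the manager still listens on a Unix socket under /tmp on that version):

os.chmod(dirname, 0o770)  # Python 3 spelling of 0770
os.chmod(fullname, 0o770)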
I am trying to send a file from the master node to minion nodes using a Python script, but the error OSError: Failure keeps coming up.
I am trying to send the file from one local machine to another local machine.
My code:
#!/usr/bin/python3
import paramiko
import os

# Defining the connection worker
def workon(host):
    # Making a connection
    ssh_client = paramiko.SSHClient()
    # Auto-add missing host keys
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(hostname=host, username='username', password='password')
    ftp_client = ssh_client.open_sftp()
    ftp_client.put("/home/TrialFolder/HelloPython", "/home/")
    ftp_client.close()
    #stdin, stdout, stderr = ssh_client.exec_command("ls")
    #lines = stdout.readlines()
    #print(lines)

def main():
    hosts = ['192.16.15.32', '192.16.15.33', '192.16.15.34']
    threads = []
    for h in hosts:
        workon(h)

main()
Error:
Traceback (most recent call last):
File "PythonMultipleConnectionUsinhSSH.py", line 28, in <module>
main()
File "PythonMultipleConnectionUsinhSSH.py", line 26, in main
workon(h)
File "PythonMultipleConnectionUsinhSSH.py", line 15, in workon
ftp_client.put("/home/Sahil/HelloPython", "/home/")
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 759, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 714, in putfo
with self.file(remotepath, "wb") as fr:
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 372, in open
t, msg = self._request(CMD_OPEN, filename, imode, attrblock)
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 813, in _request
return self._read_response(num)
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 865, in _read_response
self._convert_status(msg)
File "/usr/local/lib/python3.6/site-packages/paramiko/sftp_client.py", line 898, in _convert_status
raise IOError(text)
OSError: Failure
First, you should make sure the target directory /home/ is writable for you. Then you should review the documentation for the put method. It says this about the second argument (remotepath):
The destination path on the SFTP server. Note that the filename should be included. Only specifying a directory may result in an error.
Try including the filename in the path, like:
...
ftp_client.put("/home/TrialFolder/HelloPython", "/home/HelloPython")
...
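If you would rather compute the remote path than hard-code it, a small sketch (my illustration, not from the paramiko docs) is to join the target directory with the local file's base name:

import os.path

local_path = "/home/TrialFolder/HelloPython"
# Make sure the remote path ends in a filename, not a bare directory.
remote_path = "/home/" + os.path.basename(local_path)  # "/home/HelloPython"
ftp_client.put(local_path, remote_path)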
After several tests, I found that this problem is caused by the nesting in manager.list(manager.list(...)). But I really need it to be two-dimensional. Any suggestion would be appreciated!
I'm trying to build a server and multiple clients across multiple nodes.
One node acts as the server, which initializes a manager.list() for the clients to use.
The other nodes act as clients, which attach to the server to get the list and work with it.
The firewall is disabled, and when the server and clients run on a single node, everything works fine.
I get a problem like this:
Traceback (most recent call last):
File "main.py", line 352, in <module>
train(args)
File "main.py", line 296, in train
args, proc_manager, device)
File "main.py", line 267, in make_gossip_buffer
mng,sync_freq=args.sync_freq, num_nodes=args.num_nodes)
File "/home/think/gala-master-distprocess-changing_to_multinodes/gala/gpu_gossip_buffer.py", line 49, in __init__
r_events = read_events[rank]
File "<string>", line 2, in __getitem__
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 819, in _callmethod
kind, result = conn.recv()
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 943, in RebuildProxy
return func(token, serializer, incref=incref, **kwds)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 793, in __init__
self._incref()
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 847, in _incref
conn = self._Client(self._token.address, authkey=self._authkey)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 620, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
The server runs on a single node. The server code is shown below:
import torch.multiprocessing as mp
from multiprocessing.managers import ListProxy, BarrierProxy, AcquirerProxy, EventProxy
from gala.arguments import get_args

mp.current_process().authkey = b'abc'

def server(manager, host, port, key, args):
    # num_learners comes from args elsewhere in the full program
    read_events = manager.list([manager.list([manager.Event() for _ in range(num_learners)])
                                for _ in range(num_learners)])
    manager.register('get_read_events', callable=lambda: read_events, proxytype=ListProxy)
    print('start service at', host)
    s = manager.get_server()
    s.serve_forever()

if __name__ == '__main__':
    mp.set_start_method('spawn')
    args = get_args()
    manager = mp.Manager()
    server(manager, '10.107.13.120', 5000, b'abc', args)
The clients run on other nodes; those nodes connect to the server over Ethernet. The client IP is 10.107.13.80.
The client code is shown below:
import torch.multiprocessing as mp

mp.current_process().authkey = b'abc'

def make_gossip_buffer(mng):
    read_events = mng.get_read_events()
    gossip_buffer = GossipBuffer(parameters)

def train(args):
    proc_manager = mp.Manager()
    proc_manager.register('get_read_events')
    proc_manager.__init__(address=('10.107.13.120', 5000), authkey=b'abc')
    proc_manager.connect()
    make_gossip_buffer(proc_manager)

if __name__ == "__main__":
    mp.set_start_method('spawn')
    train(args)
Any help would be appreciated!
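Going by the update at the top of this question, one possible direction (only a sketch; whether it sidesteps the cross-node proxy problem is untested) is to flatten the two dimensions into a single manager.list and recover 2-D indexing arithmetically:

# Sketch only: one flat proxy instead of nested manager.list(manager.list(...)).
# read_events[i][j] becomes read_events[i * num_learners + j].
read_events = manager.list([manager.Event()
                            for _ in range(num_learners * num_learners)])

def get_event(read_events, i, j, num_learners):
    # Map the 2-D index (i, j) onto the flat list.
    return read_events[i * num_learners + j]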
I'm trying to use Celery with SQS as a broker. In order to use SQS from my container, I need to assume a role, and for that I'm using STS. My code looks like this:
import os

import boto3
from celery import Celery

role_info = {
    'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution',
    'RoleSessionName': 'roleExecution'
}

sts_client = boto3.client('sts', region_name='eu-central-1')
credentials = sts_client.assume_role(**role_info)

aws_access_key_id = credentials["Credentials"]['AccessKeyId']
aws_secret_access_key = credentials["Credentials"]['SecretAccessKey']
aws_session_token = credentials["Credentials"]["SessionToken"]

os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1'
os.environ["AWS_SESSION_TOKEN"] = aws_session_token

broker = "sqs://"
backend = 'redis://redis-service:6379/0'

celery = Celery('tasks', broker=broker, backend=backend)
celery.conf["task_default_queue"] = 'my-queue'
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}
In the same file I have the following task:
@celery.task(name='my-queue.my_task')
def my_task(content) -> int:
    print("hello")
    return 0
When I run the worker, I get the following error:
[2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start
c.connection = c.connect()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read
self.app.connection_for_read(heartbeat=heartbeat))
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected
callback=maybe_shutdown,
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection
callback, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect
return self.connection
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel
channel = self.Channel(connection)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__
self._update_queue_cache(self.queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache
resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.
If I use boto3 directly without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu tries to list queues when I specify the predefined_queues configuration, which is supposed to avoid this behavior (from the docs):
If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined_queue_urls setting
Source here
Does anyone know what is happening? How should I modify my code to make it work? It seems that Celery is not using the credentials at all.
The versions I'm using:
celery==4.4.7
boto3==1.14.54
kombu==4.5.0
Thanks!
PS: I created an issue on GitHub to track whether this might be a library error...
I solved the problem by updating the dependencies to the latest versions:
celery==5.0.0
boto3==1.14.54
kombu==5.0.2
pycurl==7.43.0.6
I was able to get celery==4.4.7 and kombu==4.6.11 working by setting the following configuration option:
celery.conf["task_create_missing_queues"] = False
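For reference, a minimal sketch of the question's queue configuration combined with this option (same queue name and region as above):

celery.conf["task_default_queue"] = 'my-queue'
# Per the answer above, disabling queue creation stopped the failing
# ListQueues call on celery==4.4.7 / kombu==4.6.11.
celery.conf["task_create_missing_queues"] = False
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}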
I have been stuck on this setup for almost a week now. I hope someone can guide me through it.
Setup
I have set up an IIS server running Flask Python code (using wfastcgi.py).
I have configured the application pool identity to my own account (with admin permission).
I have changed the permissions on all files needed for this web deployment to "Everyone" with Full Control (read, write, execute). (I understand the security risks; this is my staging environment.)
The web server is running fine, and I have used the code below to check that my Python process runs as administrator.
import os
import ctypes

def am_i_admin():
    try:
        is_admin = os.getuid() == 0
    except AttributeError:
        # os.getuid() does not exist on Windows, so ask the Windows shell instead
        is_admin = ctypes.windll.shell32.IsUserAnAdmin() != 0
    if is_admin:
        return "ADMIN"
    else:
        return "USER"
Problem Statement
I am trying to run administrator-privileged code on my Flask IIS server that users on the same network can trigger, such as:
subprocess.run(['ipconfig'], stdout=subprocess.PIPE)
pyautogui.screenshot()  # takes a screenshot of the web server and sends it to the client
I ran these in my local Jupyter notebook and the above functions worked perfectly well, but they fail on the IIS Flask server.
I have also tried pyautogui on a standalone Flask server (without IIS), and it worked.
What is the issue with the IIS server? Are there more things I need to configure, or security features I can disable?
Subprocess error message:
Error occurred while reading WSGI handler:
Traceback (most recent call last):
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 791, in main
env, handler = read_wsgi_handler(response.physical_path)
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 633, in read_wsgi_handler
handler = get_wsgi_handler(os.getenv("WSGI_HANDLER"))
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 600, in get_wsgi_handler
handler = __import__(module_name, fromlist=[name_list[0][0]])
File ".\my_app.py", line 58, in <module>
out = os.popen("ipconfig").read()
File "c:\users\aspnet\anaconda3\lib\os.py", line 990, in popen
bufsize=buffering)
File "c:\users\aspnet\anaconda3\lib\subprocess.py", line 753, in __init__
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
File "c:\users\aspnet\anaconda3\lib\subprocess.py", line 1090, in _get_handles
errwrite = _winapi.GetStdHandle(_winapi.STD_ERROR_HANDLE)
OSError: [WinError 6] The handle is invalid
StdOut:
StdErr:
pyautogui error:
Error occurred while reading WSGI handler:
Traceback (most recent call last):
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 791, in main
env, handler = read_wsgi_handler(response.physical_path)
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 633, in read_wsgi_handler
handler = get_wsgi_handler(os.getenv("WSGI_HANDLER"))
File "c:\users\aspnet\anaconda3\lib\site-packages\wfastcgi.py", line 600, in get_wsgi_handler
handler = __import__(module_name, fromlist=[name_list[0][0]])
File ".\my_app.py", line 45, in <module>
pyautogui.screenshot()
File "c:\users\aspnet\anaconda3\lib\site-packages\pyscreeze\__init__.py", line 135, in wrapper
return wrappedFunction(*args, **kwargs)
File "c:\users\aspnet\anaconda3\lib\site-packages\pyscreeze\__init__.py", line 427, in _screenshot_win32
im = ImageGrab.grab()
File "c:\users\aspnet\anaconda3\lib\site-packages\PIL\ImageGrab.py", line 44, in grab
include_layered_windows, all_screens
OSError: screen grab failed
StdOut:
StdErr:
File "c:\users\aspnet\anaconda3\lib\site-packages\PIL\ImageGrab.py", line 44, in grab
include_layered_windows, all_screens
OSError: screen grab failed
To resolve the issue set stderr and stdin to subprocess.PIPE:
subprocess.Popen(['where', 'wkhtmltopdf'], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0].strip()
Reference: https://github.com/foliojs/pdfkit/issues/714
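Applied to the ipconfig call from the question, a sketch might look like this (my illustration; the assumption is that the IIS worker process has no console, so the child must not ask for console handles):

import subprocess

# Redirect all three standard streams so the child never requests
# console handles that don't exist under IIS/wfastcgi.
proc = subprocess.Popen(
    ['ipconfig'],
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out.decode(errors='replace'))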
File "c:\users\aspnet\anaconda3\lib\site-packages\PIL\ImageGrab.py", line 44, in grab
include_layered_windows, all_screens
OSError: screen grab failed
Use the code below:
from PIL import ImageGrab
OR
from PIL import Image
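A minimal usage sketch of that import (assuming the process runs in a session that actually has a desktop to capture, which is the sticking point under IIS):

from PIL import ImageGrab

im = ImageGrab.grab()  # capture the full screen
im.save('screenshot.png')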
I'm currently facing this problem when batch-uploading videos with youtube-upload from a .sh script. What can I do to prevent this? Can anyone show me how to write something in the .sh script that handles this error? Should I retry the last row of the script, or something else?
Traceback (most recent call last):
File "/usr/local/bin/youtube-upload", line 5, in <module>
main.run()
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 214, in run
sys.exit(lib.catch_exceptions(EXIT_CODES, main, sys.argv[1:]))
File "/Library/Python/2.7/site-packages/youtube_upload/lib.py", line 35, in catch_exceptions
fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 211, in main
run_main(parser, options, args)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 153, in run_main
video_id = upload_youtube_video(youtube, options, video_path, len(args), index)
File "/Library/Python/2.7/site-packages/youtube_upload/main.py", line 121, in upload_youtube_video
request_body, progress_callback=progress.callback)
File "/Library/Python/2.7/site-packages/youtube_upload/upload_video.py", line 37, in upload
RETRIABLE_EXCEPTIONS, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/youtube_upload/lib.py", line 71, in retriable_exceptions
raise exc
socket.error: [Errno 54] Connection reset by peer
This is what I'm using in the .sh file; I've repeated this row 20 times for different videos.
youtube-upload --title="" --client-secrets=client_secrets.json --description="" --tags="" --thumbnail="" --playlist="" --privacy="unlisted" /users/desktop/video/4.mp4
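Since the question asks how to react to this error from the script, here is a hedged sketch (my own illustration in Python rather than sh, and not part of youtube-upload itself) that retries a command a few times when it exits nonzero; "Connection reset by peer" is usually transient, so a simple retry loop often suffices:

import subprocess
import time

def upload_with_retry(video_path, max_attempts=3):
    # Hypothetical wrapper: retry on nonzero exit, pausing between attempts.
    cmd = ['youtube-upload', '--client-secrets=client_secrets.json',
           '--privacy=unlisted', video_path]
    for attempt in range(1, max_attempts + 1):
        if subprocess.call(cmd) == 0:
            return True
        print('Attempt %d failed for %s, retrying...' % (attempt, video_path))
        time.sleep(10)
    return False

upload_with_retry('/users/desktop/video/4.mp4')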