After several tests, I found that this problem is caused by the nesting of manager.list(manager.list(...)). But I really need it to be two-dimensional. Any suggestion would be appreciated!
I'm trying to build a server and multiple clients across multiple nodes.
One node acts as the server, which initializes a manager.list() for the other clients to use.
The other nodes act as clients, which attach to the server to get the list and work with it.
The firewall is turned off, and when the server and a client run on a single node, it works fine.
But across multiple nodes I get an error like this:
Traceback (most recent call last):
File "main.py", line 352, in <module>
train(args)
File "main.py", line 296, in train
args, proc_manager, device)
File "main.py", line 267, in make_gossip_buffer
mng,sync_freq=args.sync_freq, num_nodes=args.num_nodes)
File "/home/think/gala-master-distprocess-changing_to_multinodes/gala/gpu_gossip_buffer.py", line 49, in __init__
r_events = read_events[rank]
File "<string>", line 2, in __getitem__
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 819, in _callmethod
kind, result = conn.recv()
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 943, in RebuildProxy
return func(token, serializer, incref=incref, **kwds)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 793, in __init__
self._incref()
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/managers.py", line 847, in _incref
conn = self._Client(self._token.address, authkey=self._authkey)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/home/think/anaconda3/envs/AC/lib/python3.7/multiprocessing/connection.py", line 620, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
The server runs on a single node. The server code is shown below:
import torch.multiprocessing as mp
from multiprocessing.managers import ListProxy, BarrierProxy, AcquirerProxy, EventProxy
from gala.arguments import get_args

mp.current_process().authkey = b'abc'

def server(manager, host, port, key, args):
    read_events = manager.list([manager.list([manager.Event() for _ in range(num_learners)])
                                for _ in range(num_learners)])
    manager.register('get_read_events', callable=lambda: read_events, proxytype=ListProxy)
    print('start service at', host)
    s = manager.get_server()
    s.serve_forever()

if __name__ == '__main__':
    mp.set_start_method('spawn')
    args = get_args()
    manager = mp.Manager()
    server(manager, '10.107.13.120', 5000, b'abc', args)
The clients run on other nodes; those nodes connect to the server over Ethernet. The client IP is 10.107.13.80.
The client code is shown below:
import torch.multiprocessing as mp

mp.current_process().authkey = b'abc'

def make_gossip_buffer(mng):
    read_events = mng.get_read_events()
    gossip_buffer = GossipBuffer(parameters)

def train(args):
    proc_manager = mp.Manager()
    proc_manager.register('get_read_events')
    proc_manager.__init__(address=('10.107.13.120', 5000), authkey=b'abc')
    proc_manager.connect()
    make_gossip_buffer(proc_manager)

if __name__ == "__main__":
    mp.set_start_method('spawn')
    train(args)
Any help would be appreciated!
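For reference, the bare register/serve/connect pattern that this setup builds on can be reduced to a minimal two-node sketch that shares a single flat list and no nested proxies. The address, port, and authkey below are illustrative, not taken from the original code:

# server.py (runs on the server node)
from multiprocessing.managers import BaseManager

shared = []  # plain list owned by the server process

class ShareManager(BaseManager):
    pass

ShareManager.register('get_shared', callable=lambda: shared)

if __name__ == '__main__':
    mgr = ShareManager(address=('0.0.0.0', 5000), authkey=b'abc')
    mgr.get_server().serve_forever()

# client.py (runs on the other nodes)
from multiprocessing.managers import BaseManager

class ShareManager(BaseManager):
    pass

ShareManager.register('get_shared')

if __name__ == '__main__':
    mgr = ShareManager(address=('10.107.13.120', 5000), authkey=b'abc')
    mgr.connect()
    shared = mgr.get_shared()  # proxy to the server-side list
    shared.append('hello from client')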
Related
I am trying to use Azure Service Bus as the broker for my Celery app.
I have patched together a solution by referring to various sources.
The goal is to use Azure Service Bus as the broker and PostgreSQL as the backend.
I created an Azure Service Bus and copied the credentials for the RootManageSharedAccessKey into the Celery app.
Following is the tasks.py:
from time import sleep
from celery import Celery
from kombu.utils.url import safequote

SAS_policy = safequote("RootManageSharedAccessKey")  # SAS Policy
SAS_key = safequote("1234222zUY28tRUtp+A2YoHmDYcABCD")  # Primary key from the previous SS
namespace = safequote("bluenode-dev")

app = Celery('tasks', backend='db+postgresql://afsan.gujarati:admin@localhost/local_dev',
             broker=f'azureservicebus://{SAS_policy}:{SAS_key}=@{namespace}')

@app.task
def divide(x, y):
    sleep(30)
    return x/y
When I try to run the Celery app using the following command:
celery -A tasks worker --loglevel=INFO
I get the following error:
[2020-10-09 14:00:32,035: CRITICAL/MainProcess] Unrecoverable error: AzureHttpError('Unauthorized\n<Error><Code>401</Code><Detail>claim is empty or token is invalid. TrackingId:295f7c76-770e-40cc-8489-e0eb56248b09_G5S1, SystemTracker:bluenode-dev.servicebus.windows.net:$Resources/Queues, Timestamp:2020-10-09T20:00:31</Detail></Error>')
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 918, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 1225, in _perform_request
resp = self._filter(request)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_http/httpclient.py", line 211, in perform_request
raise HTTPError(status, message, respheaders, respbody)
azure.servicebus.control_client._http.HTTPError: Unauthorized
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 311, in start
blueprint.start(self)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/connection.py", line 21, in start
c.connection = c.connect()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 398, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 404, in connection_for_read
return self.ensure_connected(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 430, in ensure_connected
conn = conn.ensure_connection(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 383, in ensure_connection
self._ensure_connection(*args, **kwargs)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 435, in _ensure_connection
return retry_over_time(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/utils/functional.py", line 325, in retry_over_time
return fun(*args, **kwargs)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 866, in _connection_factory
self._connection = self._establish_connection()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 801, in _establish_connection
conn = self.transport.establish_connection()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 938, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 920, in create_channel
channel = self.Channel(connection)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/azureservicebus.py", line 64, in __init__
for queue in self.queue_service.list_queues():
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 313, in list_queues
response = self._perform_request(request)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 1227, in _perform_request
return _service_bus_error_handler(ex)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_serialization.py", line 569, in _service_bus_error_handler
return _general_error_handler(http_error)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_common_error.py", line 41, in _general_error_handler
raise AzureHttpError(message, http_error.status)
azure.common.AzureHttpError: Unauthorized
<Error><Code>401</Code><Detail>claim is empty or token is invalid. TrackingId:295f7c76-770e-40cc-8489-e0eb56248b09_G5S1, SystemTracker:bluenode-dev.servicebus.windows.net:$Resources/Queues, Timestamp:2020-10-09T20:00:31</Detail></Error>
I don't see a straightforward solution for this anywhere. What am I missing?
P.S. I did not create the queue in Azure Service Bus. I am assuming that Celery will create the queue by itself when the Celery app is executed.
P.P.S. I also tried to use the exact same credentials with Python's Service Bus client and it seemed to work. It feels like a Celery issue, but I am not able to figure out exactly what is wrong.
If you want to use the Azure Service Bus transport to connect to Azure Service Bus, the URL should be azureservicebus://{SAS policy name}:{SAS key}@{Service Bus Namespace}.
For example:
Get the Shared access policy RootManageSharedAccessKey.
Code:
from celery import Celery
from kombu.utils.url import safequote

SAS_policy = "RootManageSharedAccessKey"  # SAS Policy
# Primary key from the previous SS
SAS_key = safequote("X/*****qyY=")
namespace = "bowman1012"

app = Celery('tasks', backend='db+postgresql://<>@localhost/<>',
             broker=f'azureservicebus://{SAS_policy}:{SAS_key}@{namespace}')

@app.task
def add(x, y):
    return x + y
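Assuming the module is saved as tasks.py as above and the worker now starts without the authorization error, the task can be invoked from another Python shell in the usual Celery way:

from tasks import add

result = add.delay(4, 4)        # send the task to the broker
print(result.get(timeout=60))   # wait for the worker and fetch 8 from the result backend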
The next problem happens only when SSL is enabled!
I am running a server with Bottle version 0.12.6 and CherryPy version 3.2.2 over HTTPS.
The client code sends a file to the server and the server saves it.
When I send a file smaller than 102199 bytes, it is received and saved successfully. However, when I send a file of 102199 bytes or more, I get the exception shown further below.
The Server Code:
from bottle import request, response, static_file, run, server_names
from OpenSSL import crypto, SSL
from bottle import Bottle, run, request, server_names, ServerAdapter

app = Bottle()
app.mount('/test', app)

class MySSLCherryPy(ServerAdapter):
    def run(self, handler):
        from cherrypy import wsgiserver
        server = wsgiserver.CherryPyWSGIServer((self.host, self.port), handler)
        server.ssl_certificate = "./cert"
        server.ssl_private_key = "./key"
        try:
            server.start()
        finally:
            server.stop()

@app.post('/upload')
def received_file():
    file = request.files.file
    # file.save("./newfile")
    file_path = "./newfile"
    with open(file_path, 'w') as open_file:
        open_file.write(file.read())

if __name__ == '__main__':
    server_names['mysslcherrypy'] = MySSLCherryPy
    run(app, host='0.0.0.0', port=4430, server='mysslcherrypy')
    exit(0)
Why does the server fail to receive files above a given limit? Is there a limit that I need to change?
(I tried to set the constant MEMFILE_MAX inside the function received_file but it didn't help.)
The problem vanishes if the server uses HTTP instead of HTTPS!
The exception in plain text (in case you cannot view the image):
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/bottle.py", line 861, in _handle
return route.call(**args)
File "/usr/lib/python2.6/site-packages/bottle.py", line 1727, in wrapper
rv = callback(*a, **ka)
File "testser", line 28, in received_file
file = request.files.file
File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
if key not in storage: storage[key] = self.getter(obj)
File "/usr/lib/python2.6/site-packages/bottle.py", line 1106, in files
for name, item in self.POST.allitems():
File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
if key not in storage: storage[key] = self.getter(obj)
File "/usr/lib/python2.6/site-packages/bottle.py", line 1222, in POST
args = dict(fp=self.body, environ=safe_env, keep_blank_values=True)
File "/usr/lib/python2.6/site-packages/bottle.py", line 1193, in body
self._body.seek(0)
File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
if key not in storage: storage[key] = self.getter(obj)
File "/usr/lib/python2.6/site-packages/bottle.py", line 1162, in _body
for part in body_iter(read_func, self.MEMFILE_MAX):
File "/usr/lib/python2.6/site-packages/bottle.py", line 1125, in _iter_body
part = read(min(maxread, bufsize))
File "/usr/lib/python2.6/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 329, in read
data = self.rfile.read(size)
File "/usr/lib/python2.6/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1052, in read
assert n <= left, "recv(%d) returned %d bytes" % (left, n)
AssertionError: recv(47) returned 48 bytes
Solution
In the file bottle.py I changed the value of MEMFILE_MAX to 10000000, and this solved the problem. The best way to do this is from your server code, by adding the following line:
bottle.BaseRequest.MEMFILE_MAX = 30000000000
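In the server code from the question, that means setting the attribute before run() is called, e.g. near the top of the server script; the limit value below is only an example:

import bottle

# Raise Bottle's in-memory request-body buffer limit (in bytes) before the server starts
bottle.BaseRequest.MEMFILE_MAX = 30 * 1024 * 1024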
Is there any way to pool connections or use one connection across multiple processes?
I am trying to use one connection across multiple processes. Here is the code (running on Python 2.7 with pyodbc).
# Import custom python packages
import pathos.multiprocessing as mp
import pyodbc

class MyManagerClass(object):
    def __init__(self):
        self.conn = None
        self.result = []

    def connect_to_db(self):
        conn = pyodbc.connect("DSN=cpmeast;UID=dntcore;PWD=dntcorevs2")
        cursor = conn.cursor()
        self.conn = conn
        return cursor

    def read_data(self, *args):
        cursor = args[0][0]
        data = args[0][1]
        print 'Running query'
        cursor.execute("WAITFOR DELAY '00:00:02';select GETDATE(), '"+data+"';")
        self.result.append(cursor.fetchall())

def read_data(*args):
    print 'Running query', args
    # cursor.execute("WAITFOR DELAY '00:00:02';select GETDATE(), '"+data+"';")

def main():
    dbm = MyManagerClass()
    conn = pyodbc.connect("DSN=cpmeast;UID=dntcore;PWD=dntcorevs2")
    cursor = conn.cursor()

    pool = mp.ProcessingPool(4)
    for i in pool.imap(dbm.read_data, ((cursor, 'foo'), (cursor, 'bar'))):
        print i
    pool.close()
    pool.join()

    cursor.close()
    dbm.conn.close()

    print 'Result', dbm.result
    print 'Closed'

if __name__ == '__main__':
    main()
I am getting the following error:
Process PoolWorker-1:
Traceback (most recent call last):
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 227, in _bootstrap
self.run()
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 85, in run
self._target(*self._args, **self._kwargs)
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/pool.py", line 54, in worker
for job, i, func, args, kwds in iter(inqueue.get, None):
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/queue.py", line 327, in get
return recv()
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 209, in loads
return load(file)
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 199, in load
obj = pik.load()
File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 1083, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: object.__new__(pyodbc.Cursor) is not safe, use pyodbc.Cursor.__new__()
Process PoolWorker-2:
Traceback (most recent call last):
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 227, in _bootstrap
self.run()
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/process.py", line 85, in run
self._target(*self._args, **self._kwargs)
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/pool.py", line 54, in worker
for job, i, func, args, kwds in iter(inqueue.get, None):
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/processing/queue.py", line 327, in get
return recv()
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 209, in loads
return load(file)
File "/home/amit/envs/py_env_clink/lib/python2.7/site-packages/dill-0.2.4-py2.7.egg/dill/dill.py", line 199, in load
obj = pik.load()
File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/home/amit/envs/py_env_clink/lib/python2.7/pickle.py", line 1083, in load_newobj
obj = cls.__new__(cls, *args)
TypeError: object.__new__(pyodbc.Cursor) is not safe, use pyodbc.Cursor.__new__()
The problem is in the pickling stage. Pickle doesn't inherently know how to serialize a connection. Consider:
import pickle
import pymssql

a = {'hello': 'world'}

server = 'server'
username = 'username'
password = 'password'
database = 'database'
conn = pymssql.connect(host=server, user=username, password=password, database=database)

with open('filename.pickle', 'wb') as handle:
    pickle.dump(conn, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open('filename.pickle', 'rb') as handle:
    b = pickle.load(handle)

print(a == b)
This results in the following error message:
Traceback (most recent call last):
File "pickle_ex.py", line 10, in <module>
pickle.dump(conn, handle, protocol=pickle.HIGHEST_PROTOCOL)
File "stringsource", line 2, in _mssql.MSSQLConnection.__reduce_cython__
TypeError: no default __reduce__ due to non-trivial __cinit__
But if you replace conn with a in pickle.dump, the code will run and print out True.
You may be able to define a custom __reduce__ method in your class, but I wouldn't try it, considering that this would result in temp tables acting like global temp tables that are only accessible across these processes, which shouldn't be allowed to happen anyway.
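A common workaround (not part of the original answer) is to avoid sharing the connection at all and instead open one connection per worker process, for example through a pool initializer. A rough sketch, with the DSN string and query as placeholders:

import multiprocessing
import pyodbc

_conn = None  # one connection per worker process

def init_worker(dsn):
    global _conn
    _conn = pyodbc.connect(dsn)

def run_query(tag):
    cursor = _conn.cursor()
    cursor.execute("select GETDATE(), ?", tag)
    return cursor.fetchall()

if __name__ == '__main__':
    dsn = "DSN=cpmeast;UID=dntcore;PWD=dntcorevs2"
    pool = multiprocessing.Pool(4, initializer=init_worker, initargs=(dsn,))
    for rows in pool.imap(run_query, ['foo', 'bar']):
        print(rows)
    pool.close()
    pool.join()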
Links:
My pickle code is from here: How can I use pickle to save a dict?
CORE
The server part is Core, which is responsible for registering modules and for the interaction between them. Core runs as a ThreadedServer. CoreService provides module registration. When registering, I keep a list of connections so that I can use them later. A module calls a function on the Core telling it to call another module. But using the stored list of connections does not work; execution goes into an infinite loop.
class CoreService(rpyc.Service):
    __modules = {}

    def exposed_register_module(self, module_name):
        if module_name in self.__modules:
            return False
        self.__modules[module_name] = self._conn
        return True

    def exposed_execute_query_module(self, module_name, attribute_name, args):
        # <-- THIS is where it hangs
        if module_name in self.__modules:
            self.__modules[module_name].root
        # return None
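For context, a minimal sketch of how such a service would typically be started as a ThreadedServer; the port number is illustrative, not taken from the original code:

from rpyc.utils.server import ThreadedServer

if __name__ == '__main__':
    # each incoming module connection is served in its own thread
    server = ThreadedServer(CoreService, port=18861)
    server.start()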
Run test
When I run the test, I get into a loop, which I interrupt with a key combination and get the following output:
^CTraceback (most recent call last):
File "/home/kpv/perseus/control-lib/perseus_control_lib/module.py", line 67, in __getattr__
return self.__core_connector.root.execute_query_module(self.__proxy_module_name, name, args)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/netref.py", line 196, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/netref.py", line 71, in syncreq
return conn.sync_request(handler, oid, *args)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 438, in sync_request
self.serve(0.1)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 387, in serve
data = self._recv(timeout, wait_for_lock = True)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 344, in _recv
if self._channel.poll(timeout):
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/channel.py", line 43, in poll
return self.stream.poll(timeout)
File "/usr/local/lib/python2.7/dist-packages/rpyc/core/stream.py", line 41, in poll
rl, _, _ = select([self], [], [], timeout)
KeyboardInterrupt
I have a process running as root that needs to spin off threads to be run as various users. This part is working fine, but I need a way to communicate between the child processes and the parent process.
When I try using multiprocessing.Manager() with some lists, dictionaries, Locks, Queues, etc., it always hits permission denied errors in the process that has lowered permissions.
Is there a way to grant access to a user or PID to fix this?
Basic code that represents what I'm running into (run as root):
#!/usr/bin/env python
import multiprocessing, os
manager = multiprocessing.Manager()
problematic_list = manager.list()
os.setuid(43121) # or whatever your user is
problematic_list.append('anything')
Result:
root@liberator:/home/bscable# python asd.py
Traceback (most recent call last):
File "asd.py", line 8, in <module>
problematic_list.append('anything')
File "<string>", line 2, in append
File "/usr/lib/python2.7/multiprocessing/managers.py", line 755, in _callmethod
self._connect()
File "/usr/lib/python2.7/multiprocessing/managers.py", line 742, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python2.7/multiprocessing/connection.py", line 169, in Client
c = SocketClient(address)
File "/usr/lib/python2.7/multiprocessing/connection.py", line 293, in SocketClient
s.connect(address)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 13] Permission denied
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/util.py", line 261, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 200, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/managers.py", line 625, in _finalize_manager
process.terminate()
File "/usr/lib/python2.7/multiprocessing/process.py", line 137, in terminate
self._popen.terminate()
File "/usr/lib/python2.7/multiprocessing/forking.py", line 165, in terminate
os.kill(self.pid, signal.SIGTERM)
OSError: [Errno 1] Operation not permitted
The first exception appears to be the one that is important here.
Python (at least 2.6) uses a UNIX socket for communication, which appears like so:
/tmp/pymp-eGnU6a/listener-BTHJ0E
We can grab that path and change the permissions on it like so:
#!/usr/bin/env python
import multiprocessing, os, grp, pwd
manager = multiprocessing.Manager()
problematic_list = manager.list()
fullname = manager._address
dirname = os.path.dirname(fullname)
gid = grp.getgrnam('some_group').gr_gid
uid = pwd.getpwnam('root').pw_uid # should always be 0, but you never know
os.chown(dirname, uid, gid)
os.chmod(dirname, 0770)
os.chown(fullname, uid, gid)
os.chmod(fullname, 0770)
os.setgid(gid)
os.setuid(43121) # or whatever your user is
problematic_list.append('anything')