Changing mutables inside socketserver.handle() - Python 3.3

I have a problem changing the data variable in the class NetworkManagerData. Every time a request with 'SIT' comes to the server, the values 'master_ip' and 'time_updated' should be updated. I chose a dictionary as the container for my values because it is mutable. But every time I get a new request, it still has the old values in it.
Like:
First Request:
>>False
>>True
Second Request:
>>False
>>True
Third Request without 'SIT':
>>False
>>False
Do I have some misunderstanding of these mutables? Or are there some special issues with using dictionaries in multiprocessing?
Code to start the server:
HOST, PORT = "100.0.0.1", 11880
network_manager = NetworkManagerServer((HOST, PORT), NetworkManagerHandler)
network_manager_process =
multiprocessing.Process(target=network_manager.serve_forever)
network_manager_process.daemon = True
network_manager_process.start()
while True:
if '!quit' in input():
network_manager_process.terminate()
sys.exit()
Server:
from multiprocessing import Lock
import os
import socketserver

class NetworkManagerData():
    def __init__(self):
        self.lock = Lock()
        self.data = {'master_ip': '0.0.0.0', 'time_updated': False}

class NetworkManagerServer(socketserver.ForkingMixIn, socketserver.TCPServer):
    def __init__(self, nmw_server, RequestHandlerClass):
        socketserver.TCPServer.__init__(self, nmw_server, RequestHandlerClass)
        self.nmd = NetworkManagerData()

    def finish_request(self, request, client_address):
        self.RequestHandlerClass(request, client_address, self, self.nmd)

class NetworkManagerHandler(socketserver.StreamRequestHandler):
    def __init__(self, request, client_address, server, nmd):
        self.request = request
        self.client_address = client_address
        self.server = server
        self.setup()
        self.nmd = nmd
        try:
            self.handle(self.nmd)
        finally:
            self.finish()

    def handle(self, nmd):
        print(nmd.data.get('time_updated'))  # <<<- False ->>>
        while True:
            self.data = self.rfile.readline()
            if self.data:
                ds = self.data.strip().decode('ASCII')
                header = ds[0:3]
                body = ds[4:]
                if 'SIT' in header:
                    # ...
                    nmd.lock.acquire()
                    nmd.data['master_ip'] = self.client_address[0]  # <-
                    nmd.data['time_updated'] = True  # <-
                    nmd.lock.release()
                    # ...
                    print(nmd.data.get('time_updated'))  # <<<- True ->>>
            else:
                print("Connection closed: " + self.client_address[0] + ":" +
                      str(self.client_address[1]))
                return
Thanks!

OK, using multiprocessing.Value and multiprocessing.Array has solved my problem. :)
If you hand variables that are not part of the multiprocessing library to a forked process, the process only gets its own copy of them; there is no connection between the original and the copy. The variable is still mutable, but only within that copy.
To work on the original variable in memory you have to use multiprocessing.Array or multiprocessing.Value. There are other tools, such as managers or queues, to get this done; which one to use depends on your case.
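For illustration, here is a minimal sketch of the manager alternative mentioned above (not the approach I ended up using; names are illustrative). A multiprocessing.Manager runs a server process and hands out proxies, so updates made in a forked handler are visible everywhere:
import multiprocessing

manager = multiprocessing.Manager()
shared = manager.dict({'master_ip': '0.0.0.0', 'time_updated': False})
lock = manager.Lock()

def handle_sit(client_ip):
    # Updates go through the manager's proxy, so every process sees them.
    with lock:
        shared['master_ip'] = client_ip
        shared['time_updated'] = True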
So I changed the data manager class:
class NetworkManagerData():
    def __init__(self):
        self.lock = multiprocessing.Lock()
        self.master_ip = multiprocessing.Array('B', (255, 255, 255, 255))
        self.time_updated = multiprocessing.Value('B', False)
To set the IP I am using this now:
nmd.lock.acquire()
ip_array = []
for b in self.client_address[0].split('.'):
    ip_array.append(int(b))
nmd.master_ip[:] = ip_array
nmd.lock.release()
To read the IP I am using this:
self.wfile.write(("GIP|%s.%s.%s.%s" %
                  (nmd.master_ip[0], nmd.master_ip[1], nmd.master_ip[2],
                   nmd.master_ip[3]) + '\n').encode('ASCII'))
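For completeness, the time_updated flag can be handled the same way. A small sketch of what the 'SIT' branch might look like with the shared Value (the surrounding handler code is assumed from above):
# Inside the 'SIT' branch of handle(); nmd is the shared NetworkManagerData.
nmd.lock.acquire()
nmd.master_ip[:] = [int(b) for b in self.client_address[0].split('.')]
nmd.time_updated.value = True  # a multiprocessing.Value is updated via .value
nmd.lock.release()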

Related

Python: Instance of BaseHTTPRequestHandler resets after GET Request

I have a Python BaseHTTPRequestHandler class that is called by an HTTPServer class. Basically the BaseHTTPRequestHandler just runs a basic algorithm and then responds to a GET request. The issue is that every time I do a GET request, I get the correct response, but all the data gathered in the BaseHTTPRequestHandler is reset, as if each request sent to the HTTPServer creates a new instance of the BaseHTTPRequestHandler. I can't find anything online that really explains what's going on behind the scenes. I've attached a simplified version of my code. Any help or explanation would be greatly appreciated.
Before anyone suggests a class or global variable: I am using a thread to create multiple instances of this class at once, and doing so makes all the instances on each thread share and replace each other's data.
CODE (indentation off when copy and pasted)
BaseHTTPRequestHandler
This simplified version just keeps track of the number of alerts that have happened. The problem is that the count always resets to 0 when I call a GET request, as if the instance of the class is reset.
class SimulationServer(BaseHTTPRequestHandler):
    def __init__(self, address, port, randomNumberMax, *args):
        self.IP_ADDRESS = address
        self.PORT = port
        self.RANDOM_NUMBER_MAX = randomNumberMax
        self.COUNT = 0
        BaseHTTPRequestHandler.__init__(self, *args)

    def do_GET(self):
        if self.headers['Authorization'] == 'Basic ' + str(key):
            print("send response")
            self.do_HEAD()
            randomNumberMax = self.RANDOM_NUMBER_MAX
            response = ""
            if randint(0, randomNumberMax) == 0:
                self.generateAlert()
            base_path = urlparse(self.path).path
            print('base_path: ' + base_path)
            if base_path == '/count':
                response = self.getCount()
            self.wfile.write(bytes(response, 'utf-8'))

    def getCount(self):
        count = self.COUNT
        jsonString = '{"_sig": "","count": ' + str(count) + '}'
        return jsonString

    def generateAlert(self):
        newAlert = {}
        newAlert['siteId'] = "siteId" + str(self.COUNT)
        newAlert['mesg'] = "Simulated Alert"
        newAlert['when'] = int(time.time())
        self.COUNT += 1
HTTPServer
class CustomHTTPServer(HTTPServer):
    key = ''

    def __init__(self, address, handlerClass=SimulationServer):
        super().__init__(address, handlerClass)

    def set_auth(self, username, password):
        self.key = base64.b64encode(
            bytes('%s:%s' % (username, password), 'utf-8')).decode('ascii')

    def get_auth_key(self):
        return self.key
Main
This class creates the HTTPServer and attaches the handler
class RunSimulator(object):
    def run(self, alertFrequency=50, port=9000):
        ipAddress = "127.0.0.1"

        def handler(*args):
            SimulationServer(ipAddress, port, alertFrequency, *args)

        simulationServer = CustomHTTPServer((ipAddress, port), handler)
        simulationServer.set_auth('username', 'password')
        try:
            simulationServer.serve_forever()
        except KeyboardInterrupt:
            pass
        simulationServer.server_close()
        print(time.asctime(), "Server Stops - %s:%s" % (ipAddress, port))

if __name__ == "__main__":
    from sys import argv
    simu = RunSimulator()
    simu.run()
So the short answer is that it does create a new instance every time a request is sent.
BaseHTTPServer.HTTPServer is a subclass of SocketServer.TCPServer, which itself is a subclass of SocketServer.BaseServer. Every time a request comes in, the server processes it via process_request and then calls finish_request. finish_request instantiates a fresh instance of whatever your request handler class is.
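A common way around this (a sketch of one option, not the only one; CountingServer and Handler are illustrative names) is to keep the counter on the long-lived server object instead of the per-request handler. Handlers can reach the server through self.server:
from http.server import BaseHTTPRequestHandler, HTTPServer

class CountingServer(HTTPServer):
    def __init__(self, address, handler_class):
        super().__init__(address, handler_class)
        self.count = 0  # lives as long as the server, not the request

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.server.count += 1  # handler instances are per-request; the server is not
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(b'{"count": %d}' % self.server.count)

# CountingServer(('127.0.0.1', 9000), Handler).serve_forever()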

How can I use threadlocal variable with ThreadPoolExecutor?

I want my threads to have some local variables; with threading.Thread it can be done elegantly like this:
class TTT(threading.Thread):
    def __init__(self, lines, ip, port):
        threading.Thread.__init__(self)
        self._lines = lines
        self._sock = initsock(ip, port)
        self._sts = 0
        self._cts = 0

    def run(self):
        for line in self._lines:
            query = genquery(line)
            length = len(query)
            head = 0xFFFFFFFE
            q = struct.pack('II%ds' % len(query), head, length, query)
            self._sock.send(q)
            self._sock.recv(4)
            length, = struct.unpack('I', self._sock.recv(4))
            result = ''
            remain = length
            while remain:
                t = self._sock.recv(remain)
                result += t
                remain -= len(t)
            print(result)
As you can see, _lines, _sock, _sts and _cts will be independent in every thread.
But with concurrent.futures.ThreadPoolExecutor, it seems that it's not that easy. With ThreadPoolExecutor, how can I do this elegantly? (no more global variables)
New edit:
class Processor(object):
    def __init__(self, host, port):
        self._sock = self._init_sock(host, port)

    def __call__(self, address, adcode):
        self._send_data(address, adcode)
        result = self._recv_data()
        return json.loads(result)

def main():
    args = parse_args()
    adcode = {"shenzhen": 440300}[args.city]
    if args.output:
        fo = open(args.output, "w", encoding="utf-8")
    else:
        fo = sys.stdout
    with open(args.file, encoding=args.encoding) as fi, fo,\
            ThreadPoolExecutor(max_workers=args.processes) as executor:
        reader = csv.DictReader(fi)
        writer = csv.DictWriter(fo, reader.fieldnames + ["crfterm"])
        test_set = AddressIter(args.file, args.field, args.encoding)
        func = Processor(args.host, args.port)
        futures = map(lambda x: executor.submit(func, x, adcode), test_set)
        for row, future in zip(reader, as_completed(futures)):
            result = future.result()
            row["crfterm"] = join_segs_tags(result["segs"], result["tags"])
            writer.writerow(row)
Using a layout very similar to what you have now would be the easiest thing. Instead of a Thread, have a normal object, and instead of run, implement your logic in __call__:
class TTT:
    def __init__(self, lines, ip, port):
        self._lines = lines
        self._sock = initsock(ip, port)
        self._sts = 0
        self._cts = 0

    def __call__(self):
        ...
        # do stuff to self
Adding a __call__ method to a class makes it possible to invoke instances as if they were regular functions. In fact, normal functions are objects with such a method. You can now pass a bunch of TTT instances to either map or submit.
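For example (a minimal sketch; initsock, ip, port and the per-task line chunks are assumed from the question):
from concurrent.futures import ThreadPoolExecutor

chunks = [lines1, lines2, lines3]  # assumed: the work split into per-task line lists
with ThreadPoolExecutor(max_workers=3) as executor:
    # Each TTT instance carries its own socket and counters; calling
    # executor.submit(t) invokes t.__call__() on some worker thread.
    futures = [executor.submit(TTT(chunk, ip, port)) for chunk in chunks]
    for f in futures:
        f.result()  # propagate any exceptions raised inside __call__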
Alternatively, you could absorb the initialization into your task function:
def ttt(lines, ip, port):
    sock = initsock(ip, port)
    sts = cts = 0
    ...
Now you can call submit with the correct parameter list or map with an iterable of values for each parameter.
I would prefer the former approach for this example because it opens the port outside the executor. Error reporting in executor tasks can be tricky sometimes, and I would prefer to make the error-prone operation of opening a port as transparent as possible.
EDIT
Based on your related question, I believe that the real question you are asking is about function-local variables (which are automatically thread-local as well) not being shared between function calls on the same thread. However, you can always pass references between function calls.
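If genuinely per-thread state is what's needed, threading.local can also be combined with a pool. A rough sketch under that assumption (initsock, HOST, PORT and lines are placeholders from the question): each worker thread lazily opens one socket and reuses it across tasks:
import threading
from concurrent.futures import ThreadPoolExecutor

tls = threading.local()  # one independent namespace per thread

def process(line):
    # The first call on each worker thread opens that thread's socket.
    if not hasattr(tls, 'sock'):
        tls.sock = initsock(HOST, PORT)  # placeholder from the question
    # ... send the query for `line` over tls.sock and return the reply ...

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process, lines))  # `lines` assumed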

Python equivalent of Perl's HTTP::Async->next_response

I'm looking for a way to do the equivalent of the next_response method of Perl's HTTP::Async module.
The HTTP::Async module doesn't spawn any background threads, nor does it use any callbacks. Instead, every time anyone (in my case, the main thread) calls next_response on the object, all the data that has been received by the OS so far is read (blocking, but instantaneous since it only processes data that's already been received). If this is the end of the response, then next_response returns an HTTP::Response object; otherwise it returns undef.
Usage of this module looks something like (pseudocode):
request = HTTP::Async(url)
do:
    response = request->next_response()
    if not response:
        sleep 5  # or process events or whatever
while not response
# Do things with response
As far as I can see, Python's urllib or http.client don't support this style. As for why I want to do it in this style:
This is for an embedded Python environment where I can't spawn threads, nor have Python spawn any.
I'm restricted to a single thread that is actually the embedding application's thread. This means I cannot have any delayed callbacks either - the application decides when to let my Python code run. All I can do is request the embedding application to invoke a callback of my choosing every 50 milliseconds, say.
Is there a way to do this in Python?
For reference, this is an example of the Perl code I have right now and that I'm looking to port to Python:
my $httpAsync = HTTP::Async->new();

sub httpRequestAsync {
    my ($url, $callback) = @_; # $callback will be called with the response text
    $httpAsync->add(new HTTP::Request(GET => $url));
    # create_timer causes the embedding application to call the supplied callback every 50ms
    application::create_timer(50, sub {
        my $timer_result = application::keep_timer;
        my $response = $httpAsync->next_response;
        if ($response) {
            my $responseText = $response->decoded_content;
            if ($responseText) {
                $callback->($responseText);
            }
            $timer_result = application::remove_timer;
        }
        # Returning application::keep_timer will preserve the timer to be called again.
        # Returning application::remove_timer will remove the timer.
        return $timer_result;
    });
}

httpRequestAsync('http://www.example.com/', sub {
    my $responseText = $_[0];
    application::display($responseText);
});
Edit: Given that this is for an embedded Python instance, I'll take all the alternatives I can get (part of the standard library or otherwise) as I'll have to evaluate all of them to make sure they can run under my particular constraints.
Note: if you're interested in only retrieving data when YOU ask for it, simply add a flag, check it in the sleep block inside handle_read, and you'll get data only when you call your function.
#!/usr/bin/python
# -*- coding: iso-8859-15 -*-
import asyncore, errno
from socket import AF_INET, SOCK_STREAM
from threading import Thread
from time import sleep

class sender():
    def __init__(self, sock_send):
        self.s = sock_send
        self.bufferpos = 0
        self.buffer = {}
        self.alive = 1

    def send(self, what):
        self.buffer[len(self.buffer)] = what

    def writable(self):
        return (len(self.buffer) > self.bufferpos)

    def run(self):
        while self.alive:
            if self.writable():
                logout = str([self.buffer[self.bufferpos]])
                self.s(self.buffer[self.bufferpos])
                self.bufferpos += 1
            sleep(0.01)

class SOCK(asyncore.dispatcher, Thread):
    def __init__(self, _s=None, config=None):
        self.conf = config
        Thread.__init__(self)
        self._s = _s
        self.inbuffer = ''
        #self.buffer = ''
        self.lockedbuffer = False
        self.is_writable = False
        self.autounlockAccounts = {}
        if _s:
            asyncore.dispatcher.__init__(self, _s)
            self.sender = sender(self.send)
        else:
            asyncore.dispatcher.__init__(self)
            self.create_socket(AF_INET, SOCK_STREAM)
            #if self.allow_reuse_address:
            #    self.set_reuse_addr()
            self.bind((self.conf['SERVER'], self.conf['PORT']))
            self.listen(5)
            self.sender = None
        self.start()

    def parse(self):
        self.lockedbuffer = True
        ## Parse here
        print self.inbuffer
        self.inbuffer = ''
        self.lockedbuffer = False

    def readable(self):
        return True

    def handle_connect(self):
        pass

    def handle_accept(self):
        (conn_sock, client_address) = self.accept()
        if self.verify_request(conn_sock, client_address):
            self.process_request(conn_sock, client_address)

    def process_request(self, sock, addr):
        x = SOCK(sock, config={'PARSER': self.conf['PARSER'], 'ADDR': addr[0],
                               'NAME': 'CORE_SUB_SOCK_(' + str(addr[0]) + ')'})

    def verify_request(self, conn_sock, client_address):
        return True

    def handle_close(self):
        self.close()
        if self.sender:
            self.sender.alive = False

    def handle_read(self):
        data = self.recv(8192)
        while self.lockedbuffer:
            sleep(0.01)
        self.inbuffer += data

    def writable(self):
        return True

    def handle_write(self):
        pass

    def run(self):
        if not self._s:
            asyncore.loop()

imap = SOCK(config={'SERVER': '', 'PORT': 6668})
imap.run()
while 1:
    sleep(1)
Something along the lines of this?
An asyncore socket that always appends to the inbuffer when there's data to receive.
You can modify it however you want to; I just pasted a piece of code from another project that happens to be threaded :)
Last attempt:
class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(8192)
        if data:
            self.send(data)
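For the original next_response-style constraint (no threads, no event loop of your own), here is a rough standard-library sketch under those assumptions: a non-blocking socket plus select lets a 50ms timer callback drain whatever bytes the OS has buffered and return None until the response is complete. The HTTP handling is deliberately minimal and not production-grade:
import select, socket

class PollingHTTPGet(object):
    """Issue a GET, then call next_response() periodically; returns the raw
    response bytes once the server closes the connection, else None."""
    def __init__(self, host, path='/', port=80):
        self.buf = b''
        self.done = False
        self.sock = socket.create_connection((host, port))
        self.sock.sendall(b'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n'
                          % (path.encode(), host.encode()))
        self.sock.setblocking(False)

    def next_response(self):
        if self.done:
            return self.buf
        # select with timeout 0 never blocks: read only what is already buffered.
        while select.select([self.sock], [], [], 0)[0]:
            chunk = self.sock.recv(4096)
            if not chunk:  # server closed the connection: response is complete
                self.done = True
                self.sock.close()
                return self.buf
            self.buf += chunk
        return None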

How do I control which socket thread I want to connect to in an asynchronous reverse server in Python?

Good evening. This is my first time on this site. I have been programming a Python-based user monitoring system for my work for the past 3 months, and I am almost done with my first release. However, I have run into a problem controlling which computer I want to connect to.
If I run the two code samples in this post I can receive the client and send commands to it from the server, but only one client at a time, and the server dictates which client I can send to and which one is next. I am certain the problem is server-side, but I am not sure how to fix it, and a Google search does not turn up anyone who has tried this.
I have attached both client and server base networking code in this post.
client:
import asyncore
import socket
import sys

do_restart = False

class client(asyncore.dispatcher):
    def __init__(self, host, port=8000):
        serv = open("srv.conf", "r")
        host = serv.read()
        serv.close()
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

    def writable(self):
        return 0

    def handle_connect(self):
        pass

    def handle_read(self):
        data = self.recv(4096)
        # Rest of code goes here

serv = open("srv.conf", "r")
host = serv.read()
serv.close()
request = client(host)
asyncore.loop()
server:
import asyncore
import socket
import sys

class soc(asyncore.dispatcher):
    def __init__(self, port=8000):
        asyncore.dispatcher.__init__(self)
        self.port = port
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(('', port))
        self.listen(5)

    def handle_accept(self):
        channel, addr = self.accept()
        while 1:
            j = raw_input(addr)
            # Rest of my code is here

server = soc(8000)
asyncore.loop()
Here is a fast and dirty idea that I threw together.
The use of raw_input has been replaced with another dispatcher that is asyncore-compatible, referencing this other question here.
And I am expanding on the answer given by @user1320237 to defer each new connection to a new dispatcher.
You wanted a single command-line interface that can send control commands to any of the connected clients. That means you need a way to switch between them. What I have done is create a dict to keep track of the connected clients. Then we also create a set of available commands that map to callbacks for your command line.
This example has the following:
list: list current clients
set <client>: set current client
send <msg>: send a msg to the current client
server.py
import asyncore
import socket
import sys
from weakref import WeakValueDictionary

class Soc(asyncore.dispatcher):
    CMDS = {
        'list': 'cmd_list',
        'set': 'cmd_set_addr',
        'send': 'cmd_send',
    }

    def __init__(self, port=8000):
        asyncore.dispatcher.__init__(self)
        self._conns = WeakValueDictionary()
        self._current = tuple()
        self.port = port
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(('', port))
        self.listen(5)
        self.cmdline = Cmdline(self.handle_input, sys.stdin)
        self.cmdline.prompt()

    def writable(self):
        return False

    def handle_input(self, i):
        tokens = i.strip().split(None, 1)
        cmd = tokens[0]
        arg = ""
        if len(tokens) > 1:
            arg = tokens[1]
        cbk = self.CMDS.get(cmd)
        if cbk:
            getattr(self, cbk)(arg)
        self.cmdline.prompt(self._addr_to_key(self._current))

    def handle_accept(self):
        channel, addr = self.accept()
        c = Conn(channel)
        self._conns[self._addr_to_key(addr)] = c

    def _addr_to_key(self, addr):
        return ':'.join(str(i) for i in addr)

    def cmd_list(self, *args):
        avail = '\n'.join(self._conns.iterkeys())
        print "\n%s\n" % avail

    def cmd_set_addr(self, addr_str):
        conn = self._conns.get(addr_str)
        if conn:
            self._current = conn.addr

    def cmd_send(self, msg):
        if self._current:
            addr_str = self._addr_to_key(self._current)
            conn = self._conns.get(addr_str)
            if conn:
                conn.buffer += msg

class Cmdline(asyncore.file_dispatcher):
    def __init__(self, cbk, f):
        asyncore.file_dispatcher.__init__(self, f)
        self.cbk = cbk

    def prompt(self, msg=''):
        sys.stdout.write('%s > ' % msg)
        sys.stdout.flush()

    def handle_read(self):
        self.cbk(self.recv(1024))

class Conn(asyncore.dispatcher):
    def __init__(self, *args, **kwargs):
        asyncore.dispatcher.__init__(self, *args, **kwargs)
        self.buffer = ""

    def writable(self):
        return len(self.buffer) > 0

    def handle_write(self):
        self.send(self.buffer)
        self.buffer = ''

    def handle_read(self):
        data = self.recv(4096)
        print self.addr, '-', data

server = Soc(8000)
asyncore.loop()
Your main server now never blocks on stdin and always accepts new connections. The only work it does is the command handling, which should either be a fast operation or signal the connection objects to handle the message.
Usage:
# start the server
# start 2 clients
>
> list
127.0.0.1:51738
127.0.0.1:51736
> set 127.0.0.1:51736
127.0.0.1:51736 >
127.0.0.1:51736 > send foo
# client 127.0.0.1:51736 receives "foo"
To me,
while 1:
    j = raw_input(addr)
seems to be the problem:
you only accept a socket and then do something with it until the end.
You should create a new dispatcher for every client connecting:
class conn(asyncore.dispatcher):
    ...
    def handle_read(self):
        ...

class soc(asyncore.dispatcher):
    def handle_accept(self):
        ...
        c = conn()
        c.set_socket(channel)
Asyncore will call you back for every read operation possible.
Asyncore uses only one thread; this is its strength. Every dispatcher that has a socket is called one after another with those handle_* functions.
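To make that concrete, here is a minimal runnable sketch of the pattern this answer describes (illustrative names; an echo server with one dispatcher per accepted connection):
import asyncore
import socket

class ClientConn(asyncore.dispatcher_with_send):
    # One instance per connected client; asyncore multiplexes them all
    # on a single thread via the handle_* callbacks.
    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)  # echo back

class Listener(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(('', port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            channel, addr = pair
            ClientConn(channel)  # hand the socket to its own dispatcher

# Listener(8000); asyncore.loop()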

How do I pass a Python object using a remote manager?

I'm developing a simple client-server application in python. I'm using a manager to set up shared queues, but I can't figure out how to pass an arbitrary object from the server to the client. I suspect it has something to do with the manager.register function, but it's not very well explained in the multiprocessing documentation. The only example there uses Queues and nothing else.
Here's my code:
# manager_demo.py
from multiprocessing import Process, Queue, managers
from multiprocessing.managers import SyncManager
import time

class MyObject():
    def __init__(self, p, f):
        self.parameter = p
        self.processor_function = f

class MyServer():
    def __init__(self, server_info, obj):
        print '=== Launching Server ... ====='
        (ip, port, pw) = server_info
        self.object = obj  # Parameters for task processing
        # Define queues
        self._process_queue = Queue()  # Queue of tasks to be processed
        self._results_queue = Queue()  # Queue of processed tasks to be stored
        # Set up IS_Manager class and register server functions
        class IS_Manager(managers.BaseManager): pass
        IS_Manager.register('get_processQ', callable=self.get_process_queue)
        IS_Manager.register('get_resultsQ', callable=self.get_results_queue)
        IS_Manager.register('get_object', callable=self.get_object)
        # Initialize manager and server
        self.manager = IS_Manager(address=(ip, port), authkey=pw)
        self.server = self.manager.get_server()
        self.server_process = Process(target=self.server.serve_forever)
        self.server_process.start()

    def get_process_queue(self): return self._process_queue
    def get_results_queue(self): return self._results_queue
    def get_object(self): return self.object

    def runUntilDone(self, task_list):
        # Fill the initial queue
        for t in task_list:
            self._process_queue.put(t)
        # Main loop
        total_tasks = len(task_list)
        while not self._results_queue.qsize() == total_tasks:
            time.sleep(.5)
            print self._process_queue.qsize(), '\t', self._results_queue.qsize()
            if not self._results_queue.empty():
                print '\t', self._results_queue.get()
        # Do stuff
        pass

class MyClient():
    def __init__(self, server_info):
        (ip, port, pw) = server_info
        print '=== Launching Client ... ====='
        class IS_Manager(managers.BaseManager): pass
        IS_Manager.register('get_processQ')
        IS_Manager.register('get_resultsQ')
        IS_Manager.register('get_object')
        # Set up manager, pool
        print '\tConnecting to server...'
        manager = IS_Manager(address=(ip, port), authkey=pw)
        manager.connect()
        self._process_queue = manager.get_processQ()
        self._results_queue = manager.get_resultsQ()
        self.object = manager.get_object()
        print '\tConnected.'

    def runUntilDone(self):  # , parameters):
        print 'Starting client main loop...'
        # Main loop
        while 1:
            if self._process_queue.empty():
                print 'I\'m bored here!'
                time.sleep(.5)
            else:
                task = self._process_queue.get()
                print task, '\t', self.object.processor_function(task, self.object.parameter)
        print 'Client process is quitting. Bye!'
        self._clients_queue.get()
And a simple server...
from manager_demo import *

def myProcessor(x, parameter):
    return x + parameter

if __name__ == '__main__':
    my_object = MyObject(100, myProcessor)
    my_task_list = range(1, 20)
    my_server_info = ('127.0.0.1', 8081, 'my_pw')
    my_crawl_server = MyServer(my_server_info, my_object)
    my_crawl_server.runUntilDone(my_task_list)
And a simple client...
from manager_demo import *

if __name__ == '__main__':
    my_server_info = ('127.0.0.1', 8081, 'my_pw')
    my_client = MyClient(my_server_info)
    my_client.runUntilDone()
When I run this it crashes on:
erin@Erin:~/Desktop$ python client.py
=== Launching Client ... =====
    Connecting to server...
    Connected.
Starting client main loop...
2 Traceback (most recent call last):
  File "client.py", line 5, in <module>
    my_client.runUntilDone()
  File "/home/erin/Desktop/manager_demo.py", line 84, in runUntilDone
    print task, '\t', self.object.processor_function( task, self.object.parameter )
AttributeError: 'AutoProxy[get_object]' object has no attribute 'parameter'
Why does Python have no trouble with the Queues or the processor_function, but choke on the object's parameter? Thanks!
You're encountering this issue because the parameter attribute on your MyObject() class is not callable.
The documentation states that _exposed_ is used to specify a sequence of method names which proxies for this typeid should be allowed to access. In the case where no exposed list is specified, all "public methods" of the shared object will be accessible. (Here a "public method" means any attribute which has a __call__() method and whose name does not begin with '_'.)
So, you will need to manually expose the parameter attribute on MyObject, presumably as a method, by changing your MyObject():
class MyObject():
    def __init__(self, p, f):
        self._parameter = p
        self.processor_function = f

    def parameter(self):
        return self._parameter
Also, you will need to change your task to:
self.object.processor_function(task, self.object.parameter())
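Alternatively (a sketch, not part of the original fix), the server-side registration can name the exposed members explicitly instead of relying on the default "all public methods" rule; parameter is assumed to have been turned into a method as above:
# Hypothetical variant of the registration inside MyServer.__init__:
IS_Manager.register('get_object', callable=self.get_object,
                    exposed=['parameter', 'processor_function'])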
HTH.
