I am trying to activate a function that will, once I'm finished, change the control variables for some motors. The commands come in over WiFi via a socket server. Here is the code:
import SocketServer
import Tkinter as Tk
from Tkinter import *

class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.DriveSend = self.request.recv(1024).strip()
        self.SteeringSend = self.request.recv(1024).strip()
        #print("{} wrote:".format(self.client_address[0]))
        #print(self.DriveSend)
        #print(self.SteeringSend)
        #self.request.sendall(self.DriveSend.upper())
        #self.request.sendall(self.SteeringSend.upper())
        return (self.DriveSend,self.SteeringSend)
        MotorControl()

def MotorControl():
    MotorVar = MyTCPHandler()
    MotorVar.handle()
    MotorVar.DriveSend
    MotorVar.SteeringSend
    print(MotorVar.DriveSend)
    print(MotorVar.SteeringSend)
    print('test')

if __name__ == "__main__":
    HOST, PORT = "192.168.2.12", 9999
    server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    server.serve_forever()
As you can see, the server runs constantly and watches for incoming messages. I would like it to run the function MotorControl every time it receives a new message, ideally from my client program. I tried the code above, but it doesn't print any values (printing is my way of testing with something basic before I try to control anything). The commented-out sections are bits of code from the original testing of the server side, all of which works fine.
There are a couple of things wrong here. Firstly, the handle method returns before calling the function. When a return in a function is hit, execution returns immediately and no more code in that function will be executed.
Secondly, you have the handler calling the MotorControl function, but then the function instantiates a new instance of the class, which naturally doesn't have any of the information set. Instead, you should pass your existing instance to the function.
So:
class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.DriveSend = self.request.recv(1024).strip()
        self.SteeringSend = self.request.recv(1024).strip()
        MotorControl(self)
        return (self.DriveSend,self.SteeringSend)

def MotorControl(motor_var):
    print(motor_var.DriveSend)
    print(motor_var.SteeringSend)
    print('test')
Although on reflection, it's not clear why you want this in a standalone function anyway. You should perhaps make MotorControl a method on MyTCPHandler; then it will already have access to self.
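For illustration, that variant might look roughly like this (a sketch; the motor-driving logic itself is still just the test prints):

import SocketServer

class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.DriveSend = self.request.recv(1024).strip()
        self.SteeringSend = self.request.recv(1024).strip()
        self.motor_control()   # runs for every connection the server handles

    def motor_control(self):
        # DriveSend/SteeringSend were set by handle() just above
        print(self.DriveSend)
        print(self.SteeringSend)
        print('test')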
I have a problem with my program using sockets and threads.
I have made a socket server that adds each client to a thread, but the client thread never starts...
Here is my code:
The socket server:
import socket, threading, logging, sys
from client_thread import ClientThread

class SocketServer:
    CLIENTS = list()

    def __init__(self, server_ip, server_port, max_connections):
        try:
            self.tcpsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.tcpsock.bind((server_ip, server_port))
            self.tcpsock.listen(10)
            logging.info('Socket server successfully started !')
        except Exception as e:
            logging.error(format(e))

    def start(self):
        from src.realmserver.core.hetwan import EmulatorState, Core
        while (Core.STATE == EmulatorState.IN_RUNNING):
            try:
                (clientsock, (ip, port)) = self.tcpsock.accept()
                new_client = threading.Thread(target=ClientThread, args=[len(self.CLIENTS), ip, port, clientsock])
                self.CLIENTS.append(new_client)
                new_client.start()
            except Exception as e:
                print format(e)
        for client in self.CLIENTS:
            client.join()
And the client thread:
import logging, string, random

class ClientThread:
    def __init__(self, client_id, client_ip, client_port, socket):
        self.client_id = client_id
        self.client_ip = client_ip
        self.client_port = client_port
        self.socket = socket
        logging.debug('(%d) Client join us !', client_id)

    def run(self):
        key = ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in range(32))
        print self.send('HC%s' % key)
        while True:
            entry = self.socket.recv(4096)
            entry.replace("\n", "")
            if not entry:
                break
            else:
                logging.debug('(%d) Packet received : %s', self.client_id, str(entry))
        self.kill()

    def send(self, packet):
        return self.socket.send("%s\x00" % packet)

    def kill(self):
        self.socket.close()
        logging.debug('(%d) Client is gone...', self.client_id)
Sorry for the bad indentation; it's the posting form, not my file.
Please help me :(
Thank you in advance (and sorry for my bad English, I'm French).
You have this line of code in your Server instance start function:
new_client = threading.Thread(target=ClientThread,
                              args=[len(self.CLIENTS), ip, port, clientsock])
The target= argument to threading.Thread needs to be a callable function. Here ClientThread is the name of the constructor function for your class ClientThread, so it is a callable function, returning an instance of that class. Note that it is not actually called yet! The args= argument is more normally a tuple, but a list actually works. These are the arguments that will be passed to the target function once it's eventually called, when you use this particular threading model. (You can also pass keyword arguments using kwargs= and a dictionary.)
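As a quick aside, here is a toy example (not your code) showing that the target is only called after start():

import threading

def work(x, y):
    print('work called with', x, y)

t = threading.Thread(target=work, args=(1, 2))  # work is NOT called here
t.start()    # a new thread is created, and it calls work(1, 2)
t.join()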
What happens now is a bit tricky. Now that the two parameters (target= and args=) have been evaluated, the Python runtime creates a new instance of a threading.Thread class. This new instance is, at the moment, just a data object.
If we add a print statement/function (it's not clear whether this is py2k or py3k code) we can see the object itself:
print('new_client id is', id(new_client))
which will print something like this:[1]
new_client id is 34367605072
Next, you add this to a list and then invoke its start:
self.CLIENTS.append(new_client)
new_client.start()
The list add is straightforward enough, but the start is pretty tricky.
The start call itself actually creates a new OS/runtime thread (whose ID is not related to the data object's ID; the raw thread ID is an internal implementation detail). This new thread starts running at its run method.[2] The default run method is in fact:[3]
try:
    if self.__target:
        self.__target(*self.__args, **self.__kwargs)
finally:
    # Avoid a refcycle if the thread is running a function with
    # an argument that has a member that points to the thread.
    del self.__target, self.__args, self.__kwargs
Since you are using a regular threading.Thread instance object, you are getting this default behavior, where new_thread.start() creates the new thread itself, which then calls the default run method, which calls its self.__target which is your ClientThread class-instance-creation function.
So now, inside the new thread, Python creates an instance of a ClientThread object, calling its __init__ with the self.__args and self.__kwargs saved in the new_thread instance (which is itself shared between the original thread and the new thread).
This new ClientThread object executes its __init__ code and returns. This is the equivalent of having the run method read:
def run(self):
    ClientThread(**saved_args)
Note that this is not:
def run(self):
    tmp = ClientThread(**saved_args)
    tmp.run()
That is, the run method of the ClientThread instance is never called. Only the run method of the threading.Thread instance is called. If you modify your ClientThread's __init__ method to print out its ID, you will see that this ID differs from that of the threading.Thread instance:
class ClientThread:
    def __init__(self, client_id, client_ip, client_port, socket):
        print('creating', id(self), 'instance')
which will print a different ID (and definitely print after the "new_client id is" line):
new_client id is 34367605072
creating 34367777464 instance
If you add additional prints to your run method you will see that it is never invoked.
What to do about this
You have two main options here.
You can either make your ClientThread a subclass of threading.Thread:
class ClientThread(threading.Thread):
    def __init__(self, client_id, client_ip, client_port, socket):
        ...
        threading.Thread.__init__(self)
In this case, you would create the client object yourself, rather than using threading.Thread to create it:
new_thread = ClientThread(...)
...
new_thread.start()
The .start method would be threading.Thread.start since you have not overridden that, and that method would then create the actual OS/runtime thread and then call your run method, which—since you did override it—would be your run.
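Applied to your code, the subclass option might look roughly like this (a sketch; only the pieces relevant to the threading change are shown):

import threading

class ClientThread(threading.Thread):
    def __init__(self, client_id, client_ip, client_port, socket):
        threading.Thread.__init__(self)
        self.client_id = client_id
        self.client_ip = client_ip
        self.client_port = client_port
        self.socket = socket

    def run(self):
        # your existing run() body goes here; threading.Thread.start()
        # will call it in the new thread
        pass

# and in SocketServer.start(), create the object directly:
#     new_client = ClientThread(len(self.CLIENTS), ip, port, clientsock)
#     self.CLIENTS.append(new_client)
#     new_client.start()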
Or, you can create a standard threading.Thread object, supply it with a target, and have this target invoke your object's run method, e.g.:
new_client = ClientThread(...)
new_thread = threading.Thread(target=new_client.run, ...)
...
new_thread.start()
The choice is yours: to subclass, or to use separate objects.
[1] The actual ID is highly implementation-dependent.
[2] The path by which it reaches this run function is somewhat convoluted, passing through bootstrap code that does some internal initialization, then calls self.run for you, passing no arguments. You are only promised that self.run gets entered somehow; you should not rely on the "how".
[3] At least, this is the code in Python 2.7 and 3.4; other implementations could vary slightly.
I have a problem with a Python script on my Raspberry Pi. If I create a Process object, it starts automatically and blocks everything else. I want it to run in the background, and to be able to start it by calling its start() method.
network_manager.py:
import socketserver

class NetworkManagerHandler(socketserver.StreamRequestHandler):
    def handle(self):
        print("Got some Data!")

class NetworkManagerServer(socketserver.ForkingMixIn, socketserver.TCPServer):
    pass
core.py:
import multiprocessing
from network_manager import NetworkManagerServer, NetworkManagerHandler

HOST, PORT = "100.0.0.1", 11891

network_manager = NetworkManagerServer((HOST, PORT), NetworkManagerHandler)

network_manager_process = multiprocessing.Process(target=network_manager.serve_forever())
# !-> Program is blocking here, but the Server is working. <-!
network_manager_process.daemon = True
network_manager_process.start()

print("Networkmanager is running. (%s:%s)" % (HOST, PORT))

# network_manager.shutdown()
Thanks.
This:
network_manager_process = multiprocessing.Process(target=network_manager.serve_forever())
Should be this:
network_manager_process = multiprocessing.Process(target=network_manager.serve_forever)
You don't actually want to call serve_forever, you just want to pass the function to the Process object.
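In context, the corrected core.py would look something like this (a sketch using the same names as the question):

import multiprocessing
from network_manager import NetworkManagerServer, NetworkManagerHandler

HOST, PORT = "100.0.0.1", 11891
network_manager = NetworkManagerServer((HOST, PORT), NetworkManagerHandler)

# Note: no parentheses after serve_forever; the child process calls it.
network_manager_process = multiprocessing.Process(target=network_manager.serve_forever)
network_manager_process.daemon = True
network_manager_process.start()
print("Networkmanager is running. (%s:%s)" % (HOST, PORT))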
I need to check whether the Python script is already running and then call a method from that same running script. But it must happen in the same process (PID), not a new process. Is this possible?
I tried some code, but it didn't work.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import Tkinter as tk
from Tkinter import *
import socket

class Main():
    def mainFunc(self):
        self.root = tk.Tk()
        self.root.title("Main Window")
        self.lbl = Label(self.root, text = "First Text")
        self.lbl.pack()
        openStngs = Button(self.root, text = "Open Settings", command=self.settingsFunc)
        openStngs.pack()

    def settingsFunc(self):
        stngsRoot = Toplevel()
        stngsRoot.title("Settings Window")
        changeTextOfLabel = Button(stngsRoot, text = "Change Main Window Text", command=self.change_text)
        changeTextOfLabel.pack()

    def change_text(self):
        self.lbl.config(text="Text changed")

# the get_lock from http://stackoverflow.com/a/7758075/3254912
def get_lock(process_name):
    lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        print lock_socket
        lock_socket.bind('\0' + process_name)
        print 'I got the lock'
        m.mainFunc()
        mainloop()
    except socket.error:
        print 'lock exists'
        m.settingsFunc()
        mainloop()
    # sys.exit()

if __name__ == '__main__':
    m=Main()
    get_lock('myPython.py')
You either need:
A proactive check in your running process to look at the environment (for instance, the contents of a file or data coming through a socket) to know when to fire the function,
or for your running process to receive unix signals or some other IPC (possibly one of the user-defined signals) and perform a function when one is received.
Either way you can't just reach into a running process and fire a function inside that process (it MIGHT not be literally impossible if you hook the running process up to a debugger, but I wouldn't recommend it).
Tkinter necessarily has its own event loop system, so I recommend reading up on how that works and how to either run something on a timer in that event loop system, or set up a callback that responds to a signal. You could also wrap a non-event loop based system in a try/except block that will catch an exception generated by a UNIX signal, but it may not be straightforward to resume the operation of the rest of the program after that signal is caught, in that case.
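To make the signal option above concrete, here is a minimal sketch without Tkinter (the handler name and the choice of SIGUSR1 are just for illustration; wiring this into a Tkinter event loop takes more care):

import os
import signal
import time

def on_usr1(signum, frame):
    # runs in the existing process whenever it receives SIGUSR1
    print("external trigger received; firing the function")

signal.signal(signal.SIGUSR1, on_usr1)
print("PID: %d  (run 'kill -USR1 <pid>' from another shell)" % os.getpid())

while True:
    time.sleep(1)   # the handler interrupts this sleep when the signal arrives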
Sockets are a good solution to this kind of interprocess communication problem.
One possible approach would be to set up a socket server in a thread in your original process; this can be used as an entry point for external input. A (rather stupid) example might be:
# main.py
import socket
import SocketServer # socketserver in Python 3+
import time
from Queue import Queue
from threading import Thread

# class for handling requests
class QueueHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        self.server = server
        server.client_address = client_address
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)

    # receive a block of data
    # put it in a Queue instance
    # send back the block of data (redundant)
    def handle(self):
        data = self.request.recv(4096)
        self.server.recv_q.put(data)
        self.request.send(data)

class TCPServer(SocketServer.TCPServer):
    def __init__(self, ip, port, handler_class=QueueHandler):
        SocketServer.TCPServer.__init__(self, (ip, port), handler_class, bind_and_activate=False)
        self.recv_q = Queue() # a Queue for data received over the socket
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

    def shutdown(self):
        SocketServer.TCPServer.shutdown(self)

    def __del__(self):
        self.server_close()

# This is the equivalent of the main body of your original code
class TheClassThatLovesToAdd(object):
    def __init__(self):
        self.value = 1
        # create an instance of the server attached to some port
        self.server = TCPServer("localhost", 9999)
        # start it listening in a separate control thread
        self.server_thread = Thread(target=self.server.serve_forever)
        self.server_thread.start()
        self.stop = False

    def add_one_to_value(self):
        self.value += 1

    def run(self):
        while not self.stop:
            print "Value =", self.value
            # if there is stuff in the queue...
            while not self.server.recv_q.empty():
                # read and parse the message from the queue
                msg = self.server.recv_q.get()
                # perform some action based on the message
                if msg == "add":
                    self.add_one_to_value()
                elif msg == "shutdown":
                    self.server.shutdown()
                    self.stop = True
            time.sleep(1)

if __name__ == "__main__":
    x = TheClassThatLovesToAdd()
    x.run()
When you start this running, it should just loop over and over printing to the screen. Output:
Value = 1
Value = 1
Value = 1
...
However the TCPServer instance attached to the TheClassThatLovesToAdd instance now gives us a control path. The simplest looking snippet of control code would be:
# control.py
import socket
import sys
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.settimeout(2)
sock.connect(('localhost',9999))
# send some command line argument through the socket
sock.send(sys.argv[1])
sock.close()
So if I run main.py in one terminal window and call python control.py add from another, the output of main.py will change:
Value = 1
Value = 1
Value = 1
Value = 2
Value = 2
...
Finally to kill it all we can run python control.py shutdown, which will gently bring main.py to a halt.
This is by no means the only solution to your problem, but it is likely to be one of the simplest.
One can try GDB, but I am not sure how to call a function from within [an idle thread].
Perhaps someone well versed in gdb and in debugging/calling Python functions from within GDB can improve this answer.
One solution would be to use a messaging service (such as ActiveMQ or RabbitMQ). Your application subscribes to a queue/topic, and whenever you want to send it a command, you write a message to its queue. I'm not going to go into details because there are thousands of examples online. Queues/messaging/MQTT etc. are very simple to implement and are how most business systems (and modern control systems) communicate. Do a search for paho-mqtt.
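As a rough sketch of the messaging approach (the broker address, topic name, and command string below are placeholders, not part of the original question):

import paho.mqtt.client as mqtt

def do_the_thing():
    print("command received; running the function")

def on_message(client, userdata, message):
    # called by the MQTT network loop whenever a message arrives on a subscribed topic
    if message.payload == b"do_the_thing":
        do_the_thing()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)    # broker address/port are placeholders
client.subscribe("myapp/commands")   # topic name is a placeholder
client.loop_forever()                # blocks and dispatches callbacks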
I'm trying to write unittests for my application that uses Autobahn.
I want to test my controllers which gets received data from protocol, parses it and reacts to it.
But when my test reaches the point where the protocol should be disconnected (self.sendClose), it raises the error
exceptions.AttributeError: 'MyProtocol' object has no attribute 'state'.
I tried to makeConnection using proto_helpers.StringTransport, but then I get errors too:
exceptions.AttributeError: StringTransport instance has no attribute 'setTcpNoDelay'
I'm using trial and I don't want to run dummy server/client for testing purposes only, because it's not recommended.
How should I write my tests so I can test functions that send data, read data, disconnect, etc., using a fake connection and trial?
It is difficult to say exactly what is going on without having a peek at the MyProtocol class. The problem sounds a lot like it is caused by the fact that you are directly messing around with low-level functions, and therefore also with the state attribute of the WebSocket class, which is, well, a representation of the internal state of the WebSocket connection.
According to the autobahn reference doc, the APIs from WebSocketProtocol that you could directly use and override are:
onOpen
onMessage
onClose
sendMessage
sendClose
Your approach of using the StringTransport to test your protocol is not ideal. The problem lies in the fact that MyProtocol is a tiny layer on top of the WebSocketProtocol framework provided by autobahn which, for better or worse, hides the details about managing the connection, the transport and the internal protocol state.
If you think about it, you want to test your stuff, not WebSocketProtocol and therefore if you do not want to embed a dummy server or client, your best bet is to test directly the methods that MyProtocol overrides.
An example of what I am saying is the following
import unittest
from twisted.test import proto_helpers
from autobahn.websocket import WebSocketProtocol, WebSocketServerProtocol, WebSocketServerFactory

class MyPublisher(object):
    cbk = None

    def publish(self, msg):
        if self.cbk:
            self.cbk(msg)

class MyProtocol(WebSocketServerProtocol):
    def __init__(self, publisher):
        WebSocketServerProtocol.__init__(self)
        # Defining callback for publisher
        publisher.cbk = self.sendMessage

    def onMessage(self, msg, binary):
        # Stupid echo
        self.sendMessage(msg)

class NotificationTest(unittest.TestCase):
    class MyProtocolFactory(WebSocketServerFactory):
        def __init__(self, publisher):
            WebSocketServerFactory.__init__(self, "ws://127.0.0.1:8081")
            self.publisher = publisher
            self.openHandshakeTimeout = None

        def buildProtocol(self, addr):
            protocol = MyProtocol(self.publisher)
            protocol.factory = self
            protocol.websocket_version = 13  # Hybi version 13 is supported by pretty much everyone (apart from IE <8 and android browsers)
            return protocol

    def setUp(self):
        publisher = MyPublisher()
        factory = NotificationTest.MyProtocolFactory(publisher)
        protocol = factory.buildProtocol(None)
        transport = proto_helpers.StringTransport()
        def play_dumb(*args): pass
        setattr(transport, "setTcpNoDelay", play_dumb)
        protocol.makeConnection(transport)
        self.protocol, self.transport, self.publisher = protocol, transport, publisher

    def test_onMessage(self):
        # Following 2 lines are the problematic part. Here you are manipulating explicitly a hidden state which your implementation should not be concerned with!
        self.protocol.state = WebSocketProtocol.STATE_OPEN
        self.protocol.websocket_version = 13
        self.protocol.onMessage("Whatever")
        self.assertEqual(self.transport.value()[2:], 'Whatever')

    def test_push(self):
        # Following 2 lines are the problematic part. Here you are manipulating explicitly a hidden state which your implementation should not be concerned with!
        self.protocol.state = WebSocketProtocol.STATE_OPEN
        self.protocol.websocket_version = 13
        self.publisher.publish("Hi there")
        self.assertEqual(self.transport.value()[2:], 'Hi there')
As you might have noticed, using the StringTransport here is very cumbersome. You must have knowledge of the underlying framework and bypass its state management, something you don't really want to do. Unfortunately autobahn does not provide a ready-to-use test object that would permit easy state manipulation, and therefore my suggestion of using dummy servers and clients is still valid.
Testing your server WITH network
The test provided shows how you can test server push, asserting that what you are getting is what you expect, and also using a hook to determine when to finish.
The server protocol
from twisted.trial.unittest import TestCase as TrialTest
from autobahn.websocket import WebSocketServerProtocol, WebSocketServerFactory, WebSocketClientProtocol, WebSocketClientFactory, connectWS, listenWS
from twisted.internet.defer import Deferred
from twisted.internet import task

START = "START"

class TestServerProtocol(WebSocketServerProtocol):
    def __init__(self):
        # The publisher task simulates an event that triggers a message push
        self.publisher = task.LoopingCall(self.send_stuff, "Hi there")

    def send_stuff(self, msg):
        # this method sends a message to the client
        self.sendMessage(msg)

    def _on_start(self):
        # here we trigger the task to execute every second
        self.publisher.start(1.0)

    def onMessage(self, message, binary):
        # According to this stupid protocol, the server starts sending stuff when the client sends a "START" message
        # You can plug other commands in here
        {
            START: self._on_start
            # Put other keys here
        }[message]()

    def onClose(self, wasClean, code, reason):
        # After closing the connection, we tell the task to stop sending messages
        self.publisher.stop()
The client protocol and factory
The next class is the client protocol. It basically tells the server to start pushing messages. It calls close_condition on each message to see if it is time to close the connection, and as a last thing, it calls the assertion function on the messages it received to see whether the test was successful or not.
class TestClientProtocol(WebSocketClientProtocol):
    def __init__(self, assertion, close_condition, timeout, *args, **kwargs):
        self.assertion = assertion
        self.close_condition = close_condition
        self._received_msgs = []
        from twisted.internet import reactor
        # This is a way to set a timeout for your test
        # in case you never meet the conditions dictated by close_condition
        self.damocle_sword = reactor.callLater(timeout, self.sendClose)

    def onOpen(self):
        # After the connection has been established,
        # you can tell the server to send its stuff
        self.sendMessage(START)

    def onMessage(self, msg, binary):
        # Here you get the messages pushed from the server
        self._received_msgs.append(msg)
        # If it is time to close the connection
        if self.close_condition(msg):
            self.damocle_sword.cancel()
            self.sendClose()

    def onClose(self, wasClean, code, reason):
        # Now it is the right time to check our test assertions
        self.assertion.callback(self._received_msgs)

class TestClientProtocolFactory(WebSocketClientFactory):
    def __init__(self, assertion, close_condition, timeout, **kwargs):
        WebSocketClientFactory.__init__(self, **kwargs)
        self.assertion = assertion
        self.close_condition = close_condition
        self.timeout = timeout
        # This parameter needs to be forced to None to not leave the reactor dirty
        self.openHandshakeTimeout = None

    def buildProtocol(self, addr):
        protocol = TestClientProtocol(self.assertion, self.close_condition, self.timeout)
        protocol.factory = self
        return protocol
The trial based test
class WebSocketTest(TrialTest):
    def setUp(self):
        port = 8088
        factory = WebSocketServerFactory("ws://localhost:{}".format(port))
        factory.protocol = TestServerProtocol
        self.listening_port = listenWS(factory)
        self.factory, self.port = factory, port

    def tearDown(self):
        # cleaning up stuff otherwise the reactor complains
        self.listening_port.stopListening()

    def test_message_reception(self):
        # This is the test assertion; we are testing that the messages received were 3
        def assertion(msgs):
            self.assertEquals(len(msgs), 3)

        # This class says when the connection with the server should be finalized.
        # In this case the condition to close the connection is for the client to get 3 messages
        class CommunicationHandler(object):
            msg_count = 0
            def close_condition(self, msg):
                self.msg_count += 1
                return self.msg_count == 3

        d = Deferred()
        d.addCallback(assertion)

        # Here we create the client...
        client_factory = TestClientProtocolFactory(d, CommunicationHandler().close_condition, 5, url="ws://localhost:{}".format(self.port))
        # ...and we connect it to the server
        connectWS(client_factory)

        # returning the assertion as a deferred purely for demonstration
        return d
This is obviously just an example, but as you can see, I did not have to mess around with makeConnection or any transport explicitly.
This is the problem.
My primary goal is to deliver the "s" object to the handle method in the TestRequestHandler class.
My first step was to deliver the "s" object through the point method to the TestServer class, but here I'm stuck. How do I deliver the "s" object to TestRequestHandler? Any suggestions?
import threading
import SocketServer
from socket import *

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)
        return

    def setup(self):
        return SocketServer.BaseRequestHandler.setup(self)

    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data

    def finish(self):
        return SocketServer.BaseRequestHandler.finish(self)

class TestServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_class=TestRequestHandler):
        print "__init__"
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        return

    def point(self, obj):
        self.obj = obj
        print "point"

    def server_activate(self):
        SocketServer.TCPServer.server_activate(self)
        return

    def serve_forever(self):
        print "serve_forever"
        while True:
            self.handle_request()
        return

    def handle_request(self):
        return SocketServer.TCPServer.handle_request(self)

if __name__ == '__main__':
    s = socket(AF_INET, SOCK_STREAM)
    address = ('localhost', 6666)
    server = TestServer(address, TestRequestHandler)
    server.point(s)
    t = threading.Thread(target=server.serve_forever())
    t.setDaemon(True)
    t.start()
If I understand correctly, I think you perhaps are misunderstanding how the module works. You are already specifying an address of 'localhost:6666' for the server to bind on.
When you start the server via your call to serve_forever(), this is going to cause the server to start listening to a socket on localhost:6666.
According to the documentation, that socket is passed to your RequestHandler as the 'request' object. When data is received on the socket, your 'handle' method should be able to recv/send from/to that object using the documented socket API.
If you want a further abstraction, it looks like your RequestHandler can extend from StreamRequestHandler and read/write to the socket using file-like objects instead.
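For illustration, a minimal echo handler built on StreamRequestHandler might look like this (a sketch, not your final code):

import SocketServer

class EchoHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # rfile/wfile are file-like wrappers around the accepted socket
        line = self.rfile.readline()
        if line:
            self.wfile.write(line)

if __name__ == '__main__':
    server = SocketServer.TCPServer(('localhost', 6666), EchoHandler)
    server.serve_forever()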
The point is, there is no need for you to create an additional socket and then try to force your server to use the new one instead. Part of the value of the SocketServer module is that it manages the lifecycle of the socket for you.
On the flip side, if you want to test your server from a client's perspective, then you would want to create a socket that you can read/write your client requests on. But you would never pass this socket to your server, per se. You would probably do this in a completely separate process and test your server via IPC over the socket.
Edit based on new information
To get server A to open a socket to server B when server A receives data, one solution is simply to open a socket from inside your RequestHandler. That said, there are likely some other design concerns that you will need to address based on the requirements of your service.
For example, you may want to use a simple connection pool that say opens a few sockets to server B that server A can use like a resource. There may already be some libraries in Python that help with this.
Given your current design, your RequestHandler has access to the server as a member variable so you could do something like this:
class TestServer(SocketServer.TCPServer):
    def point(self, socketB):
        self.socketB = socketB  # hold serverB socket

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if (data):
            self.request.send(data)
            print data
            self.server.socketB ... # Do whatever with the socketB
But like I said, it may be better for you to have some sort of connection pool or other object that manages your server B socket such that your server A handler can just acquire/release the socket as incoming requests are handled.
This way you can better deal with conditions where server B breaks the socket. Your current design wouldn't be able to handle broken sockets very easily. Just some thoughts...
If the value of s is set once and not reinitialized, you could make it a class variable as opposed to an instance variable of TestServer, and then have the handler retrieve it via a class method of TestServer in the handler's constructor.
e.g.: TestServer._mySocket = s
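A minimal sketch of that idea (assuming s only needs to be set once before the server starts; the handler here just forwards the received data as an example):

import SocketServer
from socket import *

class TestServer(SocketServer.TCPServer):
    _mySocket = None                      # class-level slot for the shared socket

class TestRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)
        if data:
            # the shared socket is reachable through the class (or via self.server)
            TestServer._mySocket.send(data)
            self.request.send(data)

if __name__ == '__main__':
    s = socket(AF_INET, SOCK_STREAM)
    s.connect(('localhost', 7777))        # the "hard" connection to B-server
    TestServer._mySocket = s
    server = TestServer(('localhost', 6666), TestRequestHandler)
    server.serve_forever()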
OK, my main task is this: construct a listening server (A-server, localhost:6666) which, on startup, opens a "hard" (persistent) connection to a different server (B-server, localhost:7777).
When the client sends data to A-server, A-server forwards the data over that persistent connection to B-server, receives the reply from B-server, and sends the reply back to the client.
Then again: the client sends data, A-server receives it, sends it to B-server, receives the reply from B-server, and sends the data back to the client.
And so on, round and round. The connection to B-server is closed only when A-server stops.
The code above is my test of making this work.