I am trying to create a two-player game in pygame using sockets. The thing is, when I try to receive data on this line:
message = self.conn.recv(1024)
python hangs until it gets some data. The problem is that this pauses the game loop whenever the client is not sending anything through the socket, causing a black screen. How can I stop recv from blocking?
Thanks in advance
Use nonblocking mode. (See socket.setblocking.)
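For instance, a minimal sketch of that approach in Python 3, assuming self.conn is the already-connected socket from the question:

self.conn.setblocking(False)  # recv now raises instead of waiting for data

try:
    message = self.conn.recv(1024)
except BlockingIOError:
    # nothing to read right now; let the game loop carry on
    message = None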
Or check whether there is data available before calling recv.
For example, using select.select:
import select

r, _, _ = select.select([self.conn], [], [], 0)
if r:
    # data is ready; recv will not block
    message = self.conn.recv(1024)
The final argument of 0 is the timeout, which makes the check return immediately instead of blocking.
You can use the signal module to interrupt a recv call that is hanging in another thread.
In the receiving thread:
try:
    data = sock.recv(1024)
except KeyboardInterrupt:
    pass
In the main thread:
signal.pthread_kill(your_recving_thread.ident, signal.SIGINT)
I know that this is an old post, but since I worked on a similar project recently, I wanted to add something that hasn't been mentioned yet for anybody having the same issue.
You can use threading to create a new thread, which will receive data. After this, run your game loop normally in your main thread, and check for received data in each iteration. Received data should be placed inside a queue by the data receiver thread and read from that queue by the main thread.
#other imports
import queue
import threading

class MainGame:
    def __init__(self):
        #any other setup code here
        self.data_queue = queue.Queue()
        data_receiver = threading.Thread(target=self.data_receiver, daemon=True)  # daemon so it won't block exit
        data_receiver.start()
        self.gameLoop()

    def gameLoop(self):
        while True:
            data = None  # avoids a NameError when the queue is empty
            try:
                data = self.data_queue.get_nowait()
            except queue.Empty:
                pass
            self.gameIteration(data)

    def data_receiver(self):
        #Assuming self.sock exists
        while True:  # keep receiving for the lifetime of the game
            data = self.sock.recv(1024).decode("utf-8")
            #edit the data in any way necessary here
            self.data_queue.put(data)

    def gameIteration(self, data):
        #Assume this method handles updating, drawing, etc.
        pass
Note that this code is in Python 3.
I am using these Python 3 modules:
requests for HTTP GET calls to a few Particle Photons, which are set up as simple HTTP servers
As a client I am using a Raspberry Pi (which is also an access point) as an HTTP client, which uses multiprocessing.dummy.Pool for making HTTP GET requests to the above-mentioned Photons
The polling routine is as follows:
def pollURL(url_of_photon):
    """
    pollURL: Obtain the IP Address and create a URL for HTTP GET Request

    :param url_of_photon: IP address of the Photon connected to the A.P.
    """
    create_request = 'http://' + url_of_photon + ':80'
    while True:
        try:
            time.sleep(0.1)  # poll every 100 ms
            response = requests.get(create_request)
            if response.status_code == 200:
                # if success then dump the data into a temp dump file
                with open('temp_data_dump', 'a+') as jFile:
                    json.dump(response.json(), jFile)
            else:
                # currently just break
                break
        except KeyboardInterrupt as e:
            print('KeyboardInterrupt detected ', e)
            break
The url_of_photon values are simple IPv4 addresses obtained from the dnsmasq.leases file available on the Pi.
The main() function:
def main():
    # obtain the IP and MAC addresses from the lease file
    IP_addresses = []
    MAC_addresses = []
    with open('/var/lib/misc/dnsmasq.leases', 'r') as leases_file:
        # split lines and words to obtain the useful stuff
        for lines in leases_file:
            fields = lines.strip().split()
            # use logging in future
            print('Photon with MAC: %s has IP address: %s' % (fields[1], fields[2]))
            IP_addresses.append(fields[2])
            MAC_addresses.append(fields[1])

    # create the thread pool
    pool = ThreadPool(len(IP_addresses))
    results = pool.map(pollURL, IP_addresses)
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
Problem
The program runs well, but when I press CTRL + C it does not terminate. Upon digging I found that the way to kill it is CTRL + \
How do I use this in my pollURL function to exit the program safely, i.e. so that pool.join() runs and no leftover processes remain?
Notes:
KeyboardInterrupt is never recognized inside the function, hence I am having trouble trying to detect CTRL + \.
pollURL is executed in another thread, and in Python signals are handled only in the main thread. Therefore SIGINT will raise the KeyboardInterrupt only in the main thread.
From the signal documentation:
Signals and threads
Python signal handlers are always executed in the main Python thread, even if the signal was received in another thread. This means that signals can’t be used as a means of inter-thread communication. You can use the synchronization primitives from the threading module instead.
Besides, only the main thread is allowed to set a new signal handler.
You can implement your solution in the following way (pseudocode).
event = threading.Event()

def looping_function( ... ):
    while event.is_set():
        do_your_stuff()

def main():
    try:
        event.set()
        pool = ThreadPool()
        pool.map( ... )
    except KeyboardInterrupt:
        event.clear()
    finally:
        pool.close()
        pool.join()
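Applied concretely to the polling code from the question, the pseudocode might become something like this (a sketch; the hard-coded IP address stands in for the dnsmasq.leases parsing):

import threading
import time

import requests
from multiprocessing.dummy import Pool as ThreadPool

event = threading.Event()

def pollURL(url_of_photon):
    create_request = 'http://' + url_of_photon + ':80'
    while event.is_set():  # exits once the main thread clears the event
        time.sleep(0.1)
        response = requests.get(create_request)
        # ... handle the response as in the original pollURL ...

def main():
    IP_addresses = ['192.168.4.2']  # placeholder; parse dnsmasq.leases as before
    pool = ThreadPool(len(IP_addresses))
    try:
        event.set()
        pool.map(pollURL, IP_addresses)  # blocks; Ctrl+C is raised here, in the main thread
    except KeyboardInterrupt:
        event.clear()  # signal all worker threads to stop looping
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    main()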
I use multiprocessing.connection.Listener for communication between processes, and it works like a charm for me. Now I would really love my main loop to do something else between commands from the client. Unfortunately listener.accept() blocks execution until a connection from the client process is established.
Is there a simple way to manage a non-blocking check for multiprocessing.connection? A timeout? Or should I use a dedicated thread?
# Simplified code:

from multiprocessing.connection import Listener

def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        conn = listener.accept()  # <--- This blocks!
        msg = conn.recv()
        print('got message: %r' % msg)
        conn.close()
One solution that I found (although it might not be the most "elegant") is to use conn.poll (documentation). poll returns True if the Listener has new data and, most importantly, is non-blocking if no argument is passed to it. I'm not 100% sure this is the best way to do it, but I've had success with running listener.accept() only once, and then using the following pattern to repeatedly get input whenever any is available:
from multiprocessing.connection import Listener

def mainloop():
    running = True
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    conn = listener.accept()
    msg = ""
    while running:
        while conn.poll():
            msg = conn.recv()
            print(f"got message: {msg}")
            if msg == "EXIT":
                running = False
        # Other code can go here
        print(f"I can run too! Last msg received was {msg}")
    conn.close()
The while in the inner loop can be replaced with if, if you only want to read at most one message at a time. Use with caution, as it seems somewhat hacky, and I haven't found references to conn.poll being used for this purpose elsewhere.
You can run the blocking function in a thread:
conn = await loop.run_in_executor(None, listener.accept)
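For context, here is a minimal sketch of how that one-liner might sit inside an asyncio program (the address and authkey are the placeholders from the question):

import asyncio
from multiprocessing.connection import Listener

async def mainloop():
    loop = asyncio.get_running_loop()
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        # run the blocking accept() on a worker thread so the event
        # loop stays free for other tasks
        conn = await loop.run_in_executor(None, listener.accept)
        msg = await loop.run_in_executor(None, conn.recv)
        print('got message: %r' % msg)
        conn.close()

asyncio.run(mainloop())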
I've not used the Listener object myself; for this task I normally use multiprocessing.Queue; documentation at the following link:
https://docs.python.org/2/library/queue.html#Queue.Queue
That object can be used to send and receive any pickle-able object between Python processes with a nice API; I think you'll be most interested in:
In process A:
.put('some message')
In process B:
.get_nowait()  # raises Queue.Empty if nothing is available; handle that to move on with your execution
The only limitation is that you'll need control of both Process objects at some point, in order to be able to hand the queue to them, something like this:
import time
from Queue import Empty
from multiprocessing import Queue, Process

def receiver(q):
    while 1:
        try:
            message = q.get_nowait()
            print 'receiver got', message
        except Empty:
            print 'nothing to receive, sleeping'
            time.sleep(1)

def sender(q):
    while 1:
        message = 'some message'
        q.put(message)
        print 'sender sent', message
        time.sleep(1)

some_queue = Queue()

process_a = Process(
    target=receiver,
    args=(some_queue,)
)
process_b = Process(
    target=sender,
    args=(some_queue,)
)

process_a.start()
process_b.start()

print 'ctrl + c to exit'
try:
    while 1:
        time.sleep(1)
except KeyboardInterrupt:
    pass

process_a.terminate()
process_b.terminate()

process_a.join()
process_b.join()
Queues are nice because you can have as many consumers and as many producers for the exact same Queue object as you like (handy for distributing tasks).
I should point out that just calling .terminate() on a Process is bad form: you should use your shiny new messaging system to pass a shutdown message or something of that nature.
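As a sketch of that idea (Python 3 syntax, unlike the Python 2 example above; the STOP sentinel is my own convention):

from multiprocessing import Process, Queue

STOP = 'STOP'  # sentinel value; any unique object works

def receiver(q):
    while True:
        message = q.get()  # blocks until something arrives
        if message == STOP:
            break          # clean shutdown instead of terminate()
        print('receiver got', message)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=receiver, args=(q,))
    p.start()
    q.put('some message')
    q.put(STOP)  # ask the receiver to exit on its own
    p.join()     # no terminate() needed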
The multiprocessing module comes with a nice feature called Pipe(). It is a nice way to share resources between two processes (I have never tried more than two). Python 3.8 added a shared memory facility to the multiprocessing module, but I have not really tested that, so I cannot vouch for it.
You will use the pipe function something like this:
from multiprocessing import Pipe, Process
# ...

def sending(conn):
    message = 'some message'
    # perform some code
    conn.send(message)
    conn.close()

receiver, sender = Pipe()
p = Process(target=sending, args=(sender,))
p.start()
print(receiver.recv())  # prints "some message"
p.join()
With this you should be able to have separate processes running independently, up to the point where you need the input from one of them. If the other process has not yet delivered its data, you can sleep, or use a while loop that repeatedly checks whether the other process has finished its task and sent the result over:
while not receiver.poll():  # poll() checks for pending data without blocking
    time.sleep(5)
message = receiver.recv()
This keeps it in a loop until the other process is done running and sends the result. This is also about 2-3 times faster than Queue; although Queue is also a good option, personally I do not use it.
I have this piece of code that basically runs channel.start_consuming().
I want it to stop after a while.
I think channel.stop_consuming() is the right method:
def stop_consuming(self, consumer_tag=None):
    """ Cancels all consumers, signalling the `start_consuming` loop to
    exit.
But it doesn't work: start_consuming() never ends (execution never exits from this call, and "end" is never printed).
import unittest
import pika
import threading
import time

_url = "amqp://user:password@xxx.rabbitserver.com/aaa"

class Consumer_test(unittest.TestCase):

    def test_startConsuming(self):
        def callback(channel, method, properties, body):
            print("callback")
            print(body)

        def connectionTimeoutCallback():
            print("connectionTimeoutCallback")

        def _closeChannel(channel_):
            print("_closeChannel")
            time.sleep(1)
            print("close")
            if channel_.is_open:
                channel_.stop_consuming()
                print("stop_consuming")
            else:
                print("channel is closed")
            #channel_.close()

        params = pika.URLParameters(_url)
        params.socket_timeout = 5
        connection = pika.BlockingConnection(params)
        #connection.add_timeout(2, connectionTimeoutCallback)
        channel = connection.channel()
        channel.basic_consume(callback,
                              queue='test',
                              no_ack=True)

        t = threading.Thread(target=_closeChannel, args=[channel])
        t.start()

        print("start_consuming")
        channel.start_consuming()  # start consuming (loop never ends)

        connection.close()
        print("end")
connection.add_timeout solves my problem, and maybe calling basic_cancel would too, but I want to use the right method.
Thanks
Note:
I can't respond or add a comment to this (pika, stop_consuming does not work) due to my low reputation points.
Note 2:
I think I'm not sharing a channel or connection across threads (pika doesn't support this), because I use the "channel_" passed as a parameter and not the "channel" instance of the class (am I wrong?).
I was having the same problem; pika is not thread safe, i.e. connections and channels can't safely be shared across threads.
So I used a separate connection to send a shutdown message, then stopped consuming on the original channel from the callback function.
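A sketch of that approach (assuming the newer pika 1.x API, where basic_consume takes on_message_callback; the queue name, URL, and SHUTDOWN sentinel are placeholders of mine):

import pika

params = pika.URLParameters("amqp://user:password@host/vhost")  # placeholder URL
connection = pika.BlockingConnection(params)
channel = connection.channel()

def on_message(ch, method, properties, body):
    if body == b"SHUTDOWN":
        # stop_consuming() is safe here because the callback runs in the
        # same thread as start_consuming()
        ch.stop_consuming()

channel.basic_consume(queue="test", on_message_callback=on_message, auto_ack=True)
channel.start_consuming()  # returns once stop_consuming() is called above
connection.close()

Any other connection, opened from another thread or process, can then publish SHUTDOWN to the same queue to end the consumer cleanly.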
I am trying to write a daemon in Python, but I have no idea how to use a thread to start a parallel TCP server in this daemon. And which type of server should I use: asyncore? SocketServer? socket?
This is part of my code:
import os
import sys

def demonized():
    child_pid = os.fork()
    if child_pid == 0:
        child_pid = os.fork()
        if child_pid == 0:  # fork twice for demonize
            file = open('###', "r")  # open file
            event = file.read()
            while event:
                # TODO check for changes, put changes in list variable
                event = file.read()
            file.close()
        else:
            sys.exit(0)
    else:
        sys.exit(0)

if __name__ == "__main__":
    demonized()
So in the loop I have a list variable with some data appended every cycle, and I want to start a thread with a TCP server that waits for a connection in the loop and, if a client connects, sends it this data (zeroing the variable). I do not need to handle multiple clients; there will be only one client at a time. What is the optimal way to implement this?
Thank you.
In case you want to avoid repeating boilerplate, Python will soon have a standard module that does the fork() pair and standard-I/O manipulations (which you have not added to your program yet?) that make it a daemon. You can download and use this module right now, from:
http://pypi.python.org/pypi/python-daemon
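Usage is roughly as follows (a sketch, assuming the python-daemon package from that link is installed; DaemonContext is its main entry point):

import daemon  # the python-daemon package

def run():
    # the file-watching loop from your demonized() function goes here
    ...

with daemon.DaemonContext():
    # by this point the double fork, session setup and std-stream
    # redirection have all been done for you
    run()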
Running a TCP server in a separate thread is often as simple as:
import threading

def my_tcp_server():
    sock = socket.socket(...)
    sock.bind(...)
    sock.listen()
    while True:
        conn, address = sock.accept()
        ...
        ... talk on the connection ...
        ...
        conn.close()

def main():
    ...
    threading.Thread(target=my_tcp_server).start()
    ...
I strongly recommend against trying to get your file-reader thread and your socket-answering thread talking with a list and lock of your own devising; such schemes are hard to get working and hard to keep working. Instead, use the standard library's Queue.Queue() class which does all of the locking and appending correctly for you.
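For illustration, a sketch of that pattern in Python 3 syntax (the module is named Queue in Python 2; poll_file_for_changes and the address are hypothetical stand-ins for the question's file-watching logic):

import queue      # named Queue in Python 2
import socket
import threading

events = queue.Queue()

def file_reader():
    while True:
        event = poll_file_for_changes()  # hypothetical helper
        events.put(event)                # Queue handles the locking for us

def tcp_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('localhost', 9000))       # placeholder address
    sock.listen(1)
    while True:
        conn, address = sock.accept()
        items = []
        try:
            while True:
                items.append(events.get_nowait())  # drain without blocking
        except queue.Empty:
            pass
        conn.sendall('\n'.join(items).encode())
        conn.close()

threading.Thread(target=file_reader, daemon=True).start()
tcp_server()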
Do you want to append items to the list in the while event: ... loop and serve this list simultaneously? If so, then you have two writers and you must somehow protect your list.
In the sample below, SocketServer.TCPServer and threading.Lock are used:
import threading
import SocketServer
import time

class DataHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        self.server.list_block.acquire()
        self.wfile.write(', '.join(map(str, self.server.data)))
        self.wfile.flush()
        del self.server.data[:]  # clear in place so the main loop's list stays shared
        self.server.list_block.release()

if __name__ == '__main__':
    data = []
    list_block = threading.Lock()
    server = SocketServer.TCPServer(('localhost', 0), DataHandler)
    server.list_block = list_block
    server.data = data

    t = threading.Thread(target=server.serve_forever)
    t.start()

    while True:
        list_block.acquire()
        data.append(1)
        list_block.release()
        time.sleep(1)
I have a Python script that opens a websocket to the Twitter API and then waits. When an event is passed to the script via AMQ, I need to open a new websocket connection and close the old one just as soon as the new connection is registered.
It looks something like this:
stream = TwitterStream()
stream.start()

for message in broker.listen():
    if message:
        new_stream = TwitterStream()
        new_stream.start()
        # need to close the old connection as soon as the
        # new one connects here somehow
        stream = new_stream
I'm trying to figure out how to establish a 'callback' to notify my script when the new connection is established. The TwitterStream class has an is_running boolean that I can check, so I was thinking of something like:
while not new_stream.is_running:
    time.sleep(1)
But it seems kind of messy. Does anyone know a better way to achieve this?
A busy loop is not the right approach, since it obviously wastes CPU. There are threading constructs that let you communicate such events, instead. See for example: http://docs.python.org/library/threading.html#event-objects
Here is an example with a threading Event:
import threading
from time import sleep

evt = threading.Event()
result = None

def background_task():
    global result
    print("start")
    result = "Started"
    sleep(5)
    print("stop")
    result = "Finished"
    evt.set()

t = threading.Thread(target=background_task)
t.start()

# optional timeout
timeout = 3
evt.wait(timeout=timeout)
print(result)
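Applied to the original question, the busy loop can be replaced with an Event that the stream sets once its connection is up (a sketch; FakeStream stands in for TwitterStream, and the one-second sleep simulates connection setup):

import threading
import time

class FakeStream:
    """Stands in for TwitterStream: 'connects' after a short delay."""
    def __init__(self):
        self.connected = threading.Event()

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        time.sleep(1)         # simulate opening the websocket
        self.connected.set()  # signal: connection established

    def close(self):
        print("old stream closed")

stream = FakeStream()
stream.start()

new_stream = FakeStream()
new_stream.start()
new_stream.connected.wait()  # blocks without burning CPU
stream.close()               # safe to drop the old connection now
stream = new_stream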