I am trying to use GLib.IOChannels to send data from a client to a server running a GLib.MainLoop.
The file used for the socket should be located at /tmp/so/sock, and the server should simply run a function whenever it receives data.
This is the code I've written:
import sys
import gi
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    loop = GLib.MainLoop()
    with open(ADRESS, 'r') as sock_file:
        sock = GLib.IOChannel.unix_new(sock_file.fileno())
        GLib.io_add_watch(sock, GLib.IO_IN,
                          lambda *args: print('received:', args))
        loop.run()

def client(argv):
    sock_file = open(ADRESS, 'w')
    sock = GLib.IOChannel.unix_new(sock_file.fileno())
    try:
        print(sock.write_chars(' '.join(argv).encode('utf-8'), -1))
    except GLib.Error:
        raise
    finally:
        sock.shutdown(True)
        # sock_file.close() # calling close breaks the script?

if __name__ == '__main__':
    if len(sys.argv) > 1:
        client(sys.argv[1:])
    else:
        server()
When called without arguments, it acts as the server; when called with arguments, it sends them to a running server.
When starting the server, I immediately get the following output:
received: (<GLib.IOChannel object at 0x7fbd72558b80 (GIOChannel at 0x55b8397905c0)>, <flags G_IO_IN of type GLib.IOCondition>)
I don't know why that is. Whenever I send something, I get an output like (<enum G_IO_STATUS_NORMAL of type GLib.IOStatus>, bytes_written=4) on the client side, while nothing happens server-side.
What am I missing? I suspect I understood the documentation wrong, as I did not find a concrete example.
I got the inspiration to use the IOChannel instead of normal sockets from this post: How to listen socket, when app is running in gtk.main()?
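For reference, a minimal sketch of the direction a fix might take, assuming the intent was a unix domain socket rather than a regular file (open() on a path gives a plain file, which always polls as readable and never connects two processes); the on_readable name is illustrative:

import os
import socket
from gi.repository import GLib

ADRESS = '/tmp/so/sock'

def server():
    # create a real unix domain socket instead of opening a regular file
    if os.path.exists(ADRESS):
        os.unlink(ADRESS)
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(ADRESS)
    listener.listen(1)

    def on_readable(channel, condition):
        conn, _ = listener.accept()  # a pending connection made the fd readable
        print('received:', conn.recv(4096).decode('utf-8'))
        conn.close()
        return True  # returning True keeps the watch installed

    GLib.io_add_watch(GLib.IOChannel.unix_new(listener.fileno()),
                      GLib.IO_IN, on_readable)
    GLib.MainLoop().run()

def client(argv):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(ADRESS)
    sock.sendall(' '.join(argv).encode('utf-8'))
    sock.close()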
Related
I have two scripts I am actively using for a programming class. My code is commented nicely and my teacher prefers outside resources since there are many different solutions.
Getting to the actual problem: I need to create a server with a socket (which works) and then allow another computer to connect to it using a separate script (which also works). The problem comes after the connection is made. I want the two to be able to send messages back and forth. The way I have it set up, messages have to be sent as bytes, and the raw bytes returned are hard to read. I can decode them, but I want the result conveniently located in the Command Prompt with everything else. I attempt to import the main script (Connection.py) into the secondary script (Client.py), but then it runs the main script. Is there any way I can prevent it from running?
Here is my main script (the one creating the server)
#Import socket and base64#
import socket
import base64

#Creating variable for continuous activity#
neverland = True

#Create socket object#
s = socket.socket()
print ("Socket created") #Just for debugging purposes#

#Choose port number for connection#
port = 29759 #Used a random number generator to get this port#

#Bind to the port#
s.bind(('', port))
print ("Currently using port #%s" %(port)) #Just for debugging purposes#

#Make socket listen for connections#
s.listen(5)
print ("Currently waiting on a connection...") #Just for debugging purposes#

#Loop for establishing a connection and sending a message#
while neverland == True:
    #Establish a connection#
    c, addr = s.accept()
    print ("Got a connection from ", addr) #Just for debugging purposes#

    #Sending custom messages to the client (as a byte)#
    usermessage = input("Enter your message here: ")
    usermessage = base64.b64encode(bytes(usermessage, "utf-8"))
    c.send(usermessage)

    #End the connection#
    c.close()
And here is my secondary script (the one that connects to the main one)
#Import socket module#
import socket
import Connection
#Create a socket object#
s = socket.socket()
#Define the port on which you want to connect#
port = 29759
#Connect to the server on local computer#
s.connect(('127.0.0.1', port))
#Receive data from the server#
print (s.recv(1024))
usermessage = base64.b64decode(str(usermessage, "utf-8"))
print (usermessage)
#Close the connection#
s.close()
Upon running them both in the command prompt, an error occurs (shown as a screenshot in the original post): the import attempts to run the main script again. How can I prevent it?
The way you'd commonly achieve this is to not execute any actions when a script is imported. I.e. you just define your functions, classes and variables, and if the script is meant to be called directly, you check whether it was called as such and refer to the appropriate entry point. E.g.:
myvar = "world"
def main():
print("Hello", myvar)
if __name__ == "__main__":
main()
This way you can call your script as python example.py or import it from another one and use its content where needed: import example; print(example.myvar).
You can also, and this is not mutually exclusive with above, refactor your scripts and have one file with common/shared definitions which is imported into and used by both of your scripts.
I have a long-running Python script which collects tweets from Twitter, and I would like to know how it's doing every once in a while.
Currently, I am using the signal library to catch interrupts, at which point I call my print function. Something like this:
import functools
import os
import signal

def print_info(count):
    print "#Tweets:", count

#Print out the process ID so I can interrupt it for info
print 'PID:', os.getpid()

#Start listening for interrupts
signal.signal(signal.SIGUSR1, functools.partial(print_info, tweet_count))
And whenever I want my info, I open up a new terminal and issue my interrupt:
$ kill -USR1 <pid>
Is there a better way to do this? I am aware I could have my script print something at scheduled intervals, but I am more interested in knowing on demand, and potentially issuing other commands as well.
Sending a signal to the process would interrupt it. Below you will find an approach that uses a dedicated thread to emulate a python console. The console is exposed as a unix socket.
import traceback
import importlib
from code import InteractiveConsole
import sys
import socket
import os
import threading
from logging import getLogger

# template used to generate file name
SOCK_FILE_TEMPLATE = '%(dir)s/%(prefix)s-%(pid)d.socket'

log = getLogger(__name__)

class SocketConsole(object):
    '''
    Ported from :eventlet.backdoor.SocketConsole:.
    '''
    def __init__(self, locals, conn, banner=None):  # pylint: disable=W0622
        self.locals = locals
        self.desc = _fileobject(conn)
        self.banner = banner
        self.saved = None

    def switch(self):
        self.saved = sys.stdin, sys.stderr, sys.stdout
        sys.stdin = sys.stdout = sys.stderr = self.desc

    def switch_out(self):
        sys.stdin, sys.stderr, sys.stdout = self.saved

    def finalize(self):
        self.desc = None

    def _run(self):
        try:
            console = InteractiveConsole(self.locals)
            # __builtins__ may either be the __builtin__ module or
            # __builtin__.__dict__; in the latter case typing
            # locals() at the backdoor prompt spews out lots of
            # useless stuff
            import __builtin__
            console.locals["__builtins__"] = __builtin__
            console.interact(banner=self.banner)
        except SystemExit:  # raised by quit()
            sys.exc_clear()
        finally:
            self.switch_out()
            self.finalize()

class _fileobject(socket._fileobject):
    def write(self, data):
        self._sock.sendall(data)

    def isatty(self):
        return True

    def flush(self):
        pass

    def readline(self, *a):
        return socket._fileobject.readline(self, *a).replace("\r\n", "\n")

def make_threaded_backdoor(prefix=None):
    '''
    :return: started daemon thread running :main_loop:
    '''
    socket_file_name = _get_filename(prefix)

    db_thread = threading.Thread(target=main_loop, args=(socket_file_name,))
    db_thread.setDaemon(True)
    db_thread.start()

    return db_thread

def _get_filename(prefix):
    return SOCK_FILE_TEMPLATE % {
        'dir': '/var/run',
        'prefix': prefix,
        'pid': os.getpid(),
    }

def main_loop(socket_filename):
    try:
        log.debug('Binding backdoor socket to %s', socket_filename)
        check_socket(socket_filename)
        sockobj = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sockobj.bind(socket_filename)
        sockobj.listen(5)
    except Exception, e:
        log.exception('Failed to init backdoor socket %s', e)
        return

    while True:
        conn = None
        try:
            conn, _ = sockobj.accept()
            console = SocketConsole(locals=None, conn=conn, banner=None)
            console.switch()
            console._run()
        except IOError:
            log.debug('IOError closing connection')
        finally:
            if conn:
                conn.close()

def check_socket(socket_filename):
    try:
        os.unlink(socket_filename)
    except OSError:
        if os.path.exists(socket_filename):
            raise
Example program:
make_threaded_backdoor(prefix='test')

while True:
    pass
Example session:
mmatczuk@cactus:~$ rlwrap nc -U /var/run/test-3196.socket
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> os.getpid()
3196
>>> quit()
mmatczuk@cactus:~$
This is a pretty robust tool that can be used to:
dump threads,
inspect process memory,
attach debugger on demand, pydev debugger (work for both eclipse and pycharm),
force GC,
monkeypatch function definition on the fly
and even more.
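For instance, dumping the stacks of all threads from the backdoor prompt needs nothing beyond the standard library (a sketch of a session; sys._current_frames() is stock Python):

>>> import sys, traceback
>>> for thread_id, frame in sys._current_frames().items():
...     print 'Thread %s:' % thread_id
...     traceback.print_stack(frame)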
I personally write information to a file so that I have it afterwards, although this has the disadvantage of perhaps being slightly slower because it has to write to a file every time or every few times it retrieves a tweet.
Anyways, if you write it to a file "output.txt", you can open up bash and either type tail output.txt for the latest 10 lines printed in the file, or tail -f output.txt, which continuously updates the terminal with the lines being written to the file. If you wish to stop, just press Ctrl-C.
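A minimal sketch of the writing side, assuming a made-up output.txt in the working directory; flushing after each write matters, since otherwise tail -f only sees data once the buffer fills:

# append one status line per tweet and flush so `tail -f` sees it immediately
with open('output.txt', 'a') as logfile:
    logfile.write('#Tweets: %d\n' % count)
    logfile.flush()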
Here's an example long-running program that also maintains a status socket. When a client connects to the socket, the script writes some status information to the socket.
#!/usr/bin/python

import os
import sys
import argparse
import random
import threading
import socket
import time
import select

val1 = 0
val2 = 0
lastupdate = 0
quit = False

# This function runs in a separate thread. When a client connects,
# we write out some basic status information, close the client socket,
# and wait for the next connection.
def connection_handler(sock):
    global val1, val2, lastupdate, quit

    while not quit:
        # We use select() with a timeout here so that we are able to catch the
        # quit flag in a timely manner.
        rlist, wlist, xlist = select.select([sock], [], [], 0.5)
        if not rlist:
            continue

        client, clientaddr = sock.accept()
        client.send('%s %s %s\n' % (lastupdate, val1, val2))
        client.close()

# This function starts the listener thread.
def start_listener():
    sock = socket.socket(socket.AF_UNIX)
    try:
        os.unlink('/var/tmp/myprog.socket')
    except OSError:
        pass
    sock.bind('/var/tmp/myprog.socket')
    sock.listen(5)

    t = threading.Thread(
        target=connection_handler,
        args=(sock,))
    t.start()

def main():
    global val1, val2, lastupdate

    start_listener()

    # Here is the part of our script that actually does "work".
    while True:
        print 'updating...'
        lastupdate = time.time()
        val1 = val1 + random.randint(1, 10)
        val2 = val2 + random.randint(100, 200)
        print 'sleeping...'
        time.sleep(5)

if __name__ == '__main__':
    try:
        main()
    except (Exception, KeyboardInterrupt, SystemExit):
        quit = True
        raise
You could write a simple Python client to connect to the socket, or you could use something like socat:
$ socat - unix:/var/tmp/myprog.socket
1403061693.06 6 152
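The simple Python client mentioned above might look like this (a sketch, using the socket path from the script):

import socket

# connect to the status socket, print one status line, disconnect
sock = socket.socket(socket.AF_UNIX)
sock.connect('/var/tmp/myprog.socket')
print sock.recv(1024)
sock.close()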
I have written a similar application before.
Here is what I did:
When only a few commands were needed, I just used signal as you did, to keep things simple. By command, I mean something that you want your application to do, such as print_info in your post.
But as the application was updated and more commands were needed, I began to use a dedicated thread listening on a socket port (or reading a local file) to accept commands. Suppose the application needs to support print_info1, print_info2 and print_info3; then a client can connect to the target port and write print_info1 to make the application execute the print_info1 command (or simply write print_info1 to a local file, if you are using the file-reading mechanism).
With the socket mechanism, the disadvantage is that it takes a bit more work to write a client to issue commands; the advantage is that you can give orders from anywhere.
With the local-file mechanism, the disadvantage is that the thread has to check the file in a loop, which costs a bit of resources; the advantage is that giving orders is very simple (just write a string to a file) and you don't need to write a client or a socket server.
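A minimal sketch of the file-reading variant, with made-up paths and command names; a daemon thread polls a command file and dispatches to a handler:

import os
import threading
import time

COMMAND_FILE = '/tmp/myapp.cmd'  # made-up path

def print_info1():
    print 'info1'

# map command strings to handler functions
COMMANDS = {'print_info1': print_info1}

def watch_command_file():
    while True:
        if os.path.exists(COMMAND_FILE):
            with open(COMMAND_FILE) as f:
                cmd = f.read().strip()
            os.unlink(COMMAND_FILE)  # consume the command
            if cmd in COMMANDS:
                COMMANDS[cmd]()
        time.sleep(1)

watcher = threading.Thread(target=watch_command_file)
watcher.setDaemon(True)
watcher.start()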
rpyc is the perfect tool for this task.
In short, you define a rpyc.Service class which exposes the commands you want to expose, and start an rpyc.Server thread.
Your client then connects to your process, and calls the methods which are mapped to the commands your service exposes.
It's as simple and clean as that. No need to worry about sockets, signals, object serialization.
It has other cool features as well, for example the protocol being symmetric.
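A minimal sketch of that shape, assuming rpyc's ThreadedServer and a made-up print_info command; methods prefixed with exposed_ become callable by clients:

import threading

import rpyc
from rpyc.utils.server import ThreadedServer

class CommandService(rpyc.Service):
    def exposed_print_info(self):
        # tweet_count is assumed to live in the enclosing script
        return '#Tweets: %d' % tweet_count

server = ThreadedServer(CommandService, port=18861)
t = threading.Thread(target=server.start)
t.setDaemon(True)  # don't keep the process alive just for the server
t.start()

The client side then reduces to:

import rpyc

conn = rpyc.connect('localhost', 18861)
print conn.root.print_info()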
Your question relates to interprocess communication. You can achieve this by communicating over a unix socket or TCP port, by using shared memory, or by using a message queue or cache system such as RabbitMQ or Redis.
This post talks about using mmap to achieve shared memory interprocess communication.
Here's how to get started with redis and RabbitMQ, both are rather simple to implement.
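As a taste of the message-queue route, a sketch using redis-py pub/sub; it assumes a Redis server on localhost and a made-up 'commands' channel:

import redis

r = redis.Redis()  # connects to localhost:6379 by default
pubsub = r.pubsub()
pubsub.subscribe('commands')

# block on the channel and react to incoming commands
for message in pubsub.listen():
    if message['type'] == 'message' and message['data'] == 'print_info':
        print 'got print_info command'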
I need to check whether the python script is already running, and if it is, call a method in that same running script. It must happen in the same process (pid); no new process. Is this possible?
I tried some code, but it did not work.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import Tkinter as tk
from Tkinter import *
import socket

class Main():
    def mainFunc(self):
        self.root = tk.Tk()
        self.root.title("Main Window")
        self.lbl = Label(self.root, text = "First Text")
        self.lbl.pack()
        openStngs = Button(self.root, text = "Open Settings", command=self.settingsFunc)
        openStngs.pack()

    def settingsFunc(self):
        stngsRoot = Toplevel()
        stngsRoot.title("Settings Window")
        changeTextOfLabel = Button(stngsRoot, text = "Change Main Window Text", command=self.change_text)
        changeTextOfLabel.pack()

    def change_text(self):
        self.lbl.config(text="Text changed")

# the get_lock from http://stackoverflow.com/a/7758075/3254912
def get_lock(process_name):
    lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        print lock_socket
        lock_socket.bind('\0' + process_name)
        print 'I got the lock'
        m.mainFunc()
        mainloop()
    except socket.error:
        print 'lock exists'
        m.settingsFunc()
        mainloop()
    # sys.exit()

if __name__ == '__main__':
    m = Main()
    get_lock('myPython.py')
You either need:
A proactive check in your running process to look at the environment (for instance, the contents of a file or data coming through a socket) to know when to fire the function,
or for your running process to receive unix signals or some other IPC (possibly one of the user-defined signals) and perform a function when one is received.
Either way, you can't just reach into a running process and fire a function inside that process (it MIGHT not be literally impossible if you hook the running process up to a debugger, but I wouldn't recommend it).
Tkinter necessarily has its own event loop system, so I recommend reading up on how that works and how to either run something on a timer in that event loop, or set up a callback that responds to a signal. You could also wrap a non-event-loop-based system in a try/except block that catches an exception generated by a UNIX signal, but it may not be straightforward to resume the rest of the program after that signal is caught.
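For the timer route in Tkinter, a minimal sketch using the standard after() callback; the body of check_for_commands is a placeholder for whatever environment check you choose (file contents, a nonblocking socket, a queue):

import Tkinter as tk

root = tk.Tk()

def check_for_commands():
    # placeholder: inspect a file, a queue, or a nonblocking socket here,
    # and fire the appropriate function when a command shows up
    root.after(500, check_for_commands)  # re-arm the timer (milliseconds)

root.after(500, check_for_commands)  # first arm
root.mainloop()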
Sockets are a good solution to this kind of interprocess communication problem.
One possible approach would be to set up a socket server in a thread in your original process; this can be used as an entry point for external input. A (rather stupid) example might be:
# main.py
import socket
import SocketServer # socketserver in Python 3+
import time
from Queue import Queue
from threading import Thread

# class for handling requests
class QueueHandler(SocketServer.BaseRequestHandler):
    def __init__(self, request, client_address, server):
        self.server = server
        server.client_address = client_address
        SocketServer.BaseRequestHandler.__init__(self, request, client_address, server)

    # receive a block of data
    # put it in a Queue instance
    # send back the block of data (redundant)
    def handle(self):
        data = self.request.recv(4096)
        self.server.recv_q.put(data)
        self.request.send(data)

class TCPServer(SocketServer.TCPServer):
    def __init__(self, ip, port, handler_class=QueueHandler):
        SocketServer.TCPServer.__init__(self, (ip, port), handler_class, bind_and_activate=False)
        self.recv_q = Queue() # a Queue for data received over the socket
        self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_bind()
        self.server_activate()

    def shutdown(self):
        SocketServer.TCPServer.shutdown(self)

    def __del__(self):
        self.server_close()

# This is the equivalent of the main body of your original code
class TheClassThatLovesToAdd(object):
    def __init__(self):
        self.value = 1
        # create an instance of the server attached to some port
        self.server = TCPServer("localhost", 9999)
        # start it listening in a separate control thread
        self.server_thread = Thread(target=self.server.serve_forever)
        self.server_thread.start()
        self.stop = False

    def add_one_to_value(self):
        self.value += 1

    def run(self):
        while not self.stop:
            print "Value =", self.value
            # if there is stuff in the queue...
            while not self.server.recv_q.empty():
                # read and parse the message from the queue
                msg = self.server.recv_q.get()
                # perform some action based on the message
                if msg == "add":
                    self.add_one_to_value()
                elif msg == "shutdown":
                    self.server.shutdown()
                    self.stop = True
            time.sleep(1)

if __name__ == "__main__":
    x = TheClassThatLovesToAdd()
    x.run()
When you start this running, it should just loop over and over printing to the screen. Output:
Value = 1
Value = 1
Value = 1
...
However the TCPServer instance attached to the TheClassThatLovesToAdd instance now gives us a control path. The simplest looking snippet of control code would be:
# control.py
import socket
import sys
sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
sock.settimeout(2)
sock.connect(('localhost',9999))
# send some command line argument through the socket
sock.send(sys.argv[1])
sock.close()
So if I run main.py in one terminal window and call python control.py add from another, the output of main.py will change:
Value = 1
Value = 1
Value = 1
Value = 2
Value = 2
...
Finally to kill it all we can run python control.py shutdown, which will gently bring main.py to a halt.
This is by no means the only solution to your problem, but it is likely to be one of the simplest.
One can try GDB, but I am not sure how to call a Python function from within an idle thread.
Perhaps someone well versed in gdb and debugging/calling Python functions from within GDB can improve this answer.
One solution would be to use a messaging service (such as ActiveMQ or RabbitMQ). Your application subscribes to a queue/topic, and whenever you want to send it a command, you write a message to its queue. I'm not going to go into details because there are thousands of examples online. Queues/messaging/MQTT etc. are very simple to implement and are how most business systems (and modern control systems) communicate. Do a search for paho-mqtt.
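A sketch of the subscribing side with paho-mqtt, assuming a broker on localhost and a made-up myapp/commands topic:

import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # every message published to the topic lands here
    print 'command received: %s' % message.payload

client = mqtt.Client()
client.on_message = on_message
client.connect('localhost', 1883)
client.subscribe('myapp/commands')
client.loop_forever()  # or loop_start() for a background thread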
I am writing a tool in python (platform is linux); one of the tasks is to capture a live tcp stream and apply a function to each line. Currently I'm using
import subprocess

proc = subprocess.Popen(['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
flowgrep: a python tool which looks just like what I need, BUT: it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my gentoo system, and it ships with a patched version of libnids which I couldn't compile without further tweaking.
scapy: this is a packet manipulation program/library for python; I'm not sure if tcp stream reassembly is supported.
pypcap or pylibpcap as wrappers for libpcap. Again, libpcap is for packet capturing, where I need stream reassembly, which is not possible according to this question.
Before I dive deeper into any of these libraries, I would like to know if maybe someone has a working code snippet (this seems like a rather common problem). I'm also grateful if someone can give advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date at:
http://jon.oberheide.org/pynids/
So, this might permit you to further explore flowgrep. Pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples.
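For a taste of that style, here is a sketch along the lines of those examples; the device name is illustrative and the callback only prints the reassembled server-side data:

import nids

def handle_tcp_stream(tcp):
    if tcp.nids_state == nids.NIDS_JUST_EST:
        # new connection: ask libnids to collect data in both directions
        tcp.client.collect = 1
        tcp.server.collect = 1
    elif tcp.nids_state == nids.NIDS_DATA:
        tcp.discard(0)  # keep the reassembled bytes buffered
        print tcp.addr, tcp.server.data[:tcp.server.count]

nids.param('device', 'eth0')  # illustrative interface name
nids.init()
nids.register_tcp(handle_tcp_stream)
nids.run()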
Just as a follow-up: I abandoned the idea of monitoring the stream on the tcp layer. Instead I wrote a proxy in python and let the connection I want to monitor (an http session) connect through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need with the sent data here; I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start with python helper_proxy.py. This will create a proxy listening for http connections on port 8080 and listening for another python program on port 6000. Once the other python program has connected on that port, the helper proxy will send all http replies to it. This way the helper proxy can continue to run, keeping up the http connection, and the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the http connection you want to monitor.
I'm trying to write a makefile that will replicate a client/server program I've written (which is really just two Python scripts, but that's not the real concern here)...
test:
	python server.py 7040 &
	python subscriber.py localhost 7040 &
	python client.py localhost 7040;
So I run make test
and I get the ability to enter a message from client.py:
python server.py 7040 &
python subscriber.py localhost 7040 &
python client.py localhost 7040;
Enter a message:
When the client enters an empty message, he closes the connection and quits successfully. Now, how can I automate the subscriber (who is just a "listener") of the chat room to close, which will in turn exit the server process?
I was trying to get the process IDs from these calls using pidof, but wasn't really sure if that was the correct route. I am no makefile expert; maybe I could just write a quick Python script that gets executed from my makefile to do the work for me? Any suggestions would be great.
EDIT:
I've gone the write-a-Python-script route, and have the following:
import server
import client
import subscriber
#import subprocess
server.main(8092)
# child = subprocess.Popen("server.py",shell=False)
subscriber.main('localhost',8090)
client.main('localhost', 8090)
However, now I'm getting errors that my global variables are not defined (I think it's directly related to adding the main methods to my server, and to subscriber and client, but I'm not getting that far yet). This may deserve a separate question...
Here's my server code:
import socket
import select
import sys
import thread
import time

# initialize list to track all open_sockets/connected clients
open_sockets = []

# thread for each client that connects
def handle_client(this_client, sleeptime):
    global message, client_count, message_lock, client_count_lock
    while 1:
        user_input = this_client.recv(100)
        if user_input == '':
            break
        message_lock.acquire()
        time.sleep(sleeptime)
        message += user_input
        message_lock.release()
        message = message + '\n'
        this_client.sendall(message)

    # remove 'this_client' from open_sockets list
    open_sockets.remove(this_client)
    this_client.close()
    client_count_lock.acquire()
    client_count -= 1
    client_count_lock.release()

def main(a):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    port = a
    server.bind(('', port))
    server.listen(5)

    message = ''
    message_lock = thread.allocate_lock()
    client_count = 2
    client_count_lock = thread.allocate_lock()

    for i in range(client_count):
        (client, address) = server.accept()
        open_sockets.append(client)
        thread.start_new_thread(handle_client, (client, 2))

    server.close()

    while client_count > 0:
        pass

    print '************\nMessage log from all clients:\n%s\n************' % message

if __name__ == "__main__":
    if sys.argv[1]:
        main(int(sys.argv[1]))
    else:
        main(8070)
Use plain old bash in the script, get the PID and use kill.
Or, much much much much better, create a testing script that handles all that and call that from your Makefile. A single run_tests.py, say.
You want to keep as much logic as possible outside the Makefile.
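A sketch of such a run_tests.py, assuming the script names and port from the Makefile above; it starts the server and subscriber in the background, waits for the client to finish, then cleans up:

# run_tests.py
import subprocess
import time

server = subprocess.Popen(['python', 'server.py', '7040'])
time.sleep(1)  # give the server a moment to bind its port
subscriber = subprocess.Popen(['python', 'subscriber.py', 'localhost', '7040'])
client = subprocess.Popen(['python', 'client.py', 'localhost', '7040'])

client.wait()           # the client exits when the user enters an empty message
subscriber.terminate()  # then stop the listener...
server.terminate()      # ...and the server

The Makefile target then shrinks to a single python run_tests.py line.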
Related to the 'global' issue: define handle_client inside main and remove the global message, client_count, ... line.