In C, ZeroMQ's various calls return error codes.
But if the following Python is executed in two processes on one server, the second process fails completely silently. How can I detect this situation?
a) Why is it not raising an error?
b) Why do none of the Python ZMQ calls return status codes, if they don't assert?
import zmq
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")
socket.send("Hello world")
a) The respective ZeroMQ Python-binding author decided whether or not to raise it.
The user always has the option to inspect zmq_errno(), hasn't she/he?
b) Again, that is nobody's decision but the Python-binding author's Hamlet decision: to do, or not to do so.
Conservative design schools advise being explicit about resource mapping in .bind() method calls. Under due design practice, an explicitly controlled mapping ought never to collide on resource mapping when .bind()-ing, ought it?
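A minimal sketch of making the failure visible, assuming a reasonably recent pyzmq, where .bind() raises zmq.ZMQError with errno set (zmq.EADDRINUSE is among pyzmq's errno constants):

import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
try:
    socket.bind("tcp://*:5556")
except zmq.ZMQError as e:
    # The second process to bind the same port should land here with EADDRINUSE
    if e.errno == zmq.EADDRINUSE:
        print("tcp://*:5556 is already bound by another process")
    raise
socket.send(b"Hello world")

If the second process really does die with no traceback, wrapping the calls this way at least pins down which call fails and with what errno.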
I need to make a chat between two clients, but I don't know how to tell whether a socket is closed. Is there a way to check?
Here is the part of the code that I need to fix:
def main():
    """Implements the conversation with server."""
    # Open client socket, Transport layer: protocol TCP, Network layer: protocol IP
    client_socket = socket.socket()
    client_socket.connect((HOST_IP, PORT))
    # start conversation with new client in parallel thread
    name = input("enter your name ")
    protocol.send_request(client_socket, name)
    thread_for_responses = threading.Thread(target=get_responses,
                                            args=(client_socket,))
    thread_for_responses.start()
    while True:
        # Get request from keyboard
        client_request_str = input()
        if client_request_str:  # if client_request_str is not an empty string
            # send request according to the protocol
            protocol.send_request(client_socket, client_request_str)
            # Get response from server
Instead of while True, I need to check whether the socket is closed, so the code won't loop and crash by using a closed socket.
Python programmers usually say it is easier to ask for forgiveness than for permission. By this they mean "handle the exception".
For example, you cannot divide by zero. Here are two ways you could deal with this fact:
def print_quotient(a, b):
    if b == 0:
        print("quotient is not a number")
    else:
        print("quotient is {}".format(a / b))

vs

def print_quotient(a, b):
    try:
        print("quotient is {}".format(a / b))
    except ZeroDivisionError:
        print("quotient is not a number")
These functions behave the same way, so it doesn't make much difference which approach you take, because b cannot change between the check and the division. Your example is different: the socket can change. External factors influence its state, and this changes the behavior of trying to use it (for example, to send bytes with it). In this case exception handling is superior, because it does not have to ensure in advance that nothing can go wrong; it simply handles things when they do go wrong. There is no case where the code thinks it has set everything up to work correctly and then turns out to have missed something.
So, when you use socket operations, apply exception handling for any exceptions that might arise from those operations.
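As a sketch against the loop from the question (client_socket and protocol.send_request are the question's own names; in Python 3, operations on a closed or reset socket raise OSError or one of its subclasses such as BrokenPipeError):

while True:
    client_request_str = input()
    if client_request_str:
        try:
            # send request according to the protocol
            protocol.send_request(client_socket, client_request_str)
        except OSError:
            # The socket is closed or the connection dropped:
            # stop looping instead of crashing on the next send.
            print("connection closed, leaving the loop")
            break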
I'm trying to connect to a TeamSpeak server using the QueryServer to make a bot. I've taken advice from this thread; however, I still need help.
This is the TeamSpeak API that I'm using.
Before the edits, this was the summary of what actually happened in my script (1 connection):
It connects.
It checks for the channel ID (and its own client ID).
It joins the channel and starts reading everything.
If someone says a specific command, it executes the command and then it disconnects.
How can I make it so it doesn't disconnect? How can I make the script stay in a "waiting" state so it can keep reading after the command is executed?
I am using Python 3.4.1.
I tried learning threading, but either I'm dumb or it doesn't work the way I thought it would. There's another "bug": while waiting for events, if I don't trigger anything with a command, it disconnects after 60 seconds.
# Libraries
import ts3
import threading
import datetime
from random import choice, sample

# Data needed #
USER = "thisisafakename"
PASS = "something"
HOST = "111.111.111.111"
PORT = 10011
SID = 1

class BotPrincipal:
    def __init__(self, manejador=False):
        self.ts3conn = ts3.query.TS3Connection(HOST, PORT)
        self.ts3conn.login(client_login_name=USER, client_login_password=PASS)
        self.ts3conn.use(sid=SID)
        channelToJoin = Bot.GettingChannelID("TestingBot")
        try:  # Login with a client that is ok
            self.ts3conn.clientupdate(client_nickname="The Reader Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])
            self.suscribirEvento("textchannel", ChannelToJoin)
            self.ts3conn.on_event = self.manejadorDeEventos
            self.ts3conn.recv_in_thread()
        except ts3.query.TS3QueryError:  # Name already exists; 2nd client connects with this info
            self.ts3conn.clientupdate(client_nickname="The Writer Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])

    def __del__(self):
        self.ts3conn.close()

    def GettingMyData(self):
        respuesta = self.ts3conn.whoami()
        return respuesta.parsed[0]

    def GettingChannelID(self, nombre):
        respuesta = self.ts3conn.channelfind(pattern=ts3.escape.TS3Escape.unescape(nombre))
        return respuesta.parsed[0]["cid"]

    def MoveUserToChannel(self, idCanal, idUsuario, passCanal=None):
        self.ts3conn.clientmove(cid=idCanal, clid=idUsuario, cpw=passCanal)

    def suscribirEvento(self, tipoEvento, idCanal):
        self.ts3conn.servernotifyregister(event=tipoEvento, id_=idCanal)

    def SendTextToChannel(self, idCanal, mensajito="Error"):
        self.ts3conn.sendtextmessage(targetmode=2, target=idCanal, msg=mensajito)  # This works
        print("test")  # PROBLEM HERE This doesn't work. Why? The line above did work

    def manejadorDeEventos(sender, event):
        message = event.parsed[0]['msg']
        if "test" in message:  # This works
            Bot.SendTextToChannel(ChannelToJoin, "This is a test")  # This works

if __name__ == "__main__":
    Bot = BotPrincipal()
    threadprincipal = threading.Thread(target=Bot.__init__)
    threadprincipal.start()
Prior to using 2 bots, I tested launching SendTextToChannel when it connects, and it works perfectly, allowing me to do anything I want after it sends the text to the channel. The bug that made the entire Python script stop only happens when it's triggered by manejadorDeEventos.
Edit 1 - Experimenting with threading.
I messed it up big time with threading, ending up with 2 clients connecting at the same time. Somehow I think one of them is reading the events and the other one is answering. The script doesn't close itself anymore, and that's a win, but having a clone connection doesn't look good.
Edit 2 - Updated code and actual state of the problem.
I managed to make the double connection work more or less "fine", but it disconnects if nothing happens in the room for 60 seconds. I tried using threading.Timer but I'm unable to make it work. The question's code has been updated accordingly.
I would like an answer that helps me do both reading from the channel and answering to it without connecting a second bot (like it's actually doing...). And I would give extra points if the answer also helps me understand an easy way to make a query to the server every 50 seconds so it doesn't disconnect.
From looking at the source, recv_in_thread doesn't create a thread that loops around receiving messages until quit time; it creates a thread that receives a single message and then exits:
def recv_in_thread(self):
    """
    Calls :meth:`recv` in a thread. This is useful,
    if you used ``servernotifyregister`` and you expect to receive events.
    """
    thread = threading.Thread(target=self.recv, args=(True,))
    thread.start()
    return None
That implies that you have to repeatedly call recv_in_thread, not just call it once.
I'm not sure exactly where to do so from reading the docs, but presumably it's at the end of whatever callback gets triggered by a received event; I think that's your manejadorDeEventos method? (Or maybe it's something related to the servernotifyregister method? I'm not sure what servernotifyregister is for and what on_event is for…)
That manejadorDeEventos brings up two side points:
You've declared manejadorDeEventos wrong. Every method has to take self as its first parameter. When you pass a bound method, like self.manejadorDeEventos, that bound self object is going to be passed as the first argument, before any arguments that the caller passes. (There are exceptions to this for classmethods and staticmethods, but those don't apply here.) Also, within that method, you should almost certainly be accessing self, not a global variable Bot that happens to be the same object as self.
If manejadorDeEventos is actually the callback for recv_in_thread, you've got a race condition here: if the first message comes in before your main thread finishes the on_event assignment, the recv_in_thread won't be able to call your event handler. (This is exactly the kind of bug that shows up one time in a million runs, making it a huge pain to debug when you discover it months after deploying or publishing your code.) So, reverse those two lines.
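Putting the two points together with the re-arming idea above, a minimal sketch (untested against this library, and assuming the channel ID is stored on the instance as a self.channelToJoin attribute, which the original code does not have):

def __init__(self, manejador=False):
    ...
    self.channelToJoin = self.GettingChannelID("TestingBot")
    # Assign the handler BEFORE starting to receive, avoiding the race:
    self.ts3conn.on_event = self.manejadorDeEventos
    self.ts3conn.recv_in_thread()

def manejadorDeEventos(self, event):  # note the explicit self
    message = event.parsed[0]['msg']
    if "test" in message:
        self.SendTextToChannel(self.channelToJoin, "This is a test")
    # recv_in_thread only receives a single message, so re-arm it here
    self.ts3conn.recv_in_thread()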
One last thing: a brief glimpse at this library's code is a bit worrisome. It doesn't look like it's written by someone who really knows what they're doing. The method I copied above only has 3 lines of code, but it includes a useless return None and a leaked Thread that can never be joined, not to mention that the whole design of making you call this method (and spawn a new thread) after each event received is weird, and even more so given that it's not really explained. If this is the standard client library for a service you have to use, then you really don't have much choice in the matter, but if it's not, I'd consider looking for a different library.
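If you do end up working around the library, one option is a single long-lived receive loop instead of a thread per message. This is only a sketch, assuming that recv(True) blocks until one message arrives, which is what recv_in_thread's own body suggests:

import threading

def recv_loop(ts3conn):
    # Handle one message per iteration instead of spawning a new thread each time
    while True:
        ts3conn.recv(True)

recv_thread = threading.Thread(target=recv_loop, args=(bot.ts3conn,))
recv_thread.daemon = True  # don't let a blocked recv keep the process alive
recv_thread.start()

Here bot stands for your BotPrincipal instance.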
I've tried posting this on the Reverse Engineering Stack Exchange, but I thought I'd cross-post it here for more visibility.
I'm having trouble switching from debugging one thread to another in pydbg. I don't have much experience with multithreading, so I'm hoping that I'm just missing something obvious.
Basically, I want to suspend all threads, then start single stepping in one thread. In my case, there are two threads.
First, I suspend all threads. Then, I set a breakpoint on the location where EIP will be when thread 2 is resumed. (This location is confirmed by using IDA). Then, I enable single-stepping as I would in any other context, and resume Thread 2.
However, pydbg doesn't seem to catch the breakpoint exception! Thread 2 seems to resume, and even though it MUST hit that address, there is no indication that pydbg is catching the breakpoint exception. I added a print "HIT BREAKPOINT" statement inside pydbg's internal breakpoint handler, and it never seems to be executed after resuming Thread 2.
I'm not too sure about where to go next, so any suggestions are appreciated!
dbg.suspend_all_threads()
print dbg.enumerate_threads()[0]
oldcontext = dbg.get_thread_context(thread_id=dbg.enumerate_threads()[0])
if dbg.disasm(oldcontext.Eip) == "ret":
    print disasm_at(dbg, oldcontext.Eip)
    print "Thread EIP at a ret"
    addrstr = int("0x" + (dbg.read(oldcontext.Esp + 4, 4))[::-1].encode("hex"), 16)
    print hex(addrstr)
    dbg.bp_set(0x7C90D21A, handler=Thread_Start_bp_Handler)
    print dbg.read(0x7C90D21A, 1).encode("hex")
    dbg.bp_set(oldcontext.Eip + dbg.instruction.length, handler=Thread_Start_bp_Handler)
    dbg.set_thread_context(oldcontext, thread_id=dbg.enumerate_threads()[0])
    dbg.context = oldcontext
dbg.resume_thread(dbg.enumerate_threads()[0])
dbg.single_step(enable=True)
return DBG_CONTINUE
Sorry about the "magic numbers", but they are correct as far as I can tell.
One of your problems is that you try to single-step through Thread 2, yet your code only ever refers to the first thread:
dbg.enumerate_threads()[0]  # <--- Returns a handle to the first thread.
In addition, the code that you posted does not reflect the complete structure of your script, which makes it hard to judge whether you have other errors. You also try to set a breakpoint within the sub-branch that disassembles your instructions, which does not make a lot of sense to me logically. Let me try to explain what I know and lay it out in an organized manner. That way you can look back at your code, rethink it, and correct it.
Let's start with basic framework of debugging an application with pydbg:
Create debugger instance
Attach to the process
Set breakpoints
Run it
Breakpoint gets hit - handle it.
This is how it could look:
from pydbg import *
from pydbg.defines import *

# This is the maximum number of instructions we will log
MAX_INSTRUCTIONS = 20

# Instruction counter used by the single step handler defined below
total_instructions = 0

# Address of the breakpoint (as an integer)
func_address = 0x7C90D21A

# Create debugger instance
dbg = pydbg()

# PID to attach to
pid = int(raw_input("Enter PID: "))

# Attach to the process with the debugger instance created earlier.
# Attaching the debugger will pause the process.
dbg.attach(pid)

# Let's set the breakpoint and handler as thread_step_setter,
# which we will define a little later...
dbg.bp_set(func_address, handler=thread_step_setter)

# Let's set our "personalized" handler for the single step exception.
# It will get triggered if execution of a thread goes into single step mode.
dbg.set_callback(EXCEPTION_SINGLE_STEP, single_step_handler)

# Setup is done. Let's run it...
dbg.run()
Now that we have the basic structure, let's define our personalized handlers for the breakpoint and for single stepping. The snippet below defines our "custom" handlers: when the breakpoint hits, we iterate through the threads and put them into single-step mode. That in turn triggers single-step exceptions, which we handle by disassembling up to MAX_INSTRUCTIONS instructions:
def thread_step_setter(dbg):
    dbg.suspend_all_threads()
    for thread_id in dbg.enumerate_threads():
        print "Single step for thread: 0x%08x" % thread_id
        h_thread = dbg.open_thread(thread_id)
        dbg.single_step(True, h_thread)
        dbg.close_handle(h_thread)
    # Resume execution, which will pass control to the step handler
    dbg.resume_all_threads()
    return DBG_CONTINUE

def single_step_handler(dbg):
    global total_instructions
    if total_instructions == MAX_INSTRUCTIONS:
        dbg.single_step(False)
        return DBG_CONTINUE
    else:
        # Disassemble the current instruction
        current_instruction = dbg.disasm(dbg.context.Eip)
        print "#%d\t0x%08x : %s" % (total_instructions, dbg.context.Eip, current_instruction)
        total_instructions += 1
        dbg.single_step(True)
        return DBG_CONTINUE
Disclaimer: I do not guarantee that the code above will work if copied and pasted. I typed it out and haven't tested it. However, with a basic understanding, any small syntactical errors can be fixed easily. I apologize in advance if there are any; I don't currently have the means or time to test it.
I really hope it helps you out.
My code looks like this:
... # class Site(Resource)
    def render_POST(self, request):
        otherclass.doAssync(request.args)
        print '1'
        return "done"  # that returns the HTTP response, always the same
...
def doAssync(self, msg):
    d = defer.Deferred()
    reactor.callLater(0, self.doStuff, d, msg)
    d.addCallback(self.sucess)

def doStuff(self, d, msg):
    # do some stuff
    time.sleep(2)  # just for example
    d.callback('ok')

def sucess(msg):
    print msg
The output:
1
ok
So far, so good. But the HTTP response (return "done") only happens after the delay (time.sleep(2)).
I can tell because the browser keeps 'loading' for 2 seconds.
What am I doing wrong?
What you are doing wrong is running a blocking call (time.sleep(2)), while Twisted expects you to only perform non-blocking operations. Things that don't wait. Because you have that time.sleep(2) in there, Twisted can't do anything else while that function is sleeping. So it can't send any data to the browser, either.
In the case of time.sleep(2), you would replace that with another reactor.callLater call. Assuming you actually meant for the time.sleep(2) call to be some other blocking operation, how to fix it depends on the operation. If you can do the operation in a non-blocking way, do that. For many such operations (like database interaction) Twisted already comes with non-blocking alternatives. If the thing you're doing has no non-blocking interface and Twisted doesn't have an alternative to it, you may have to run the code in a separate thread (using for example twisted.internet.threads.deferToThread), although that requires your code is actually thread-safe.
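For the example as written, a minimal sketch of the callLater replacement (doAssync and self.sucess are the question's names):

from twisted.internet import reactor, defer

def doAssync(self, msg):
    d = defer.Deferred()
    # Schedule the result two seconds from now instead of sleeping;
    # the reactor stays free to finish the HTTP response in the meantime.
    reactor.callLater(2, d.callback, 'ok')
    d.addCallback(self.sucess)
    return d

If the real work is genuinely blocking and thread-safe, a deferToThread variant could look like this, where some_blocking_operation is a placeholder for your own function:

from twisted.internet import threads

def doAssync(self, msg):
    d = threads.deferToThread(some_blocking_operation, msg)
    d.addCallback(self.sucess)
    return d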
In the C world, a function can return an error code to represent the exit status and use INOUT/OUT parameters to carry the actual fruit of the process. When it comes to XML-RPC, there are no INOUT/OUT parameters. Is there any best practice or convention for representing the exit status and the actual result?
The context is that I am trying to write an agent/daemon (a Python SimpleXMLRPCServer) running on the server, and I want to design the "protocol" to interact with it.
Any advice is appreciated.
EDIT:
Per S.Lott's comment, to make the problem clearer:
It is more about OS convention than C convention. I agree with that.
The job of the agent is more or less to run some command on the server, which inherently has an exit code/result idiom.
One simple way to implement this in Python is with a tuple. Have your function return a tuple of: (status, result) where the status can be numeric or a string, and the result can be any Python data structure you fancy.
Here's an example, adapted from the module documentation. Server code:
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler

# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

# Create server
server = SimpleXMLRPCServer(("localhost", 8000),
                            requestHandler=RequestHandler)

def myfunction(x, y):
    status = 1
    result = [5, 6, [4, 5]]
    return (status, result)
server.register_function(myfunction)

# Run the server's main loop
server.serve_forever()
Client code:
import xmlrpclib
s = xmlrpclib.ServerProxy('http://localhost:8000')
print s.myfunction(2, 4)
The server function returns a tuple; XML-RPC marshals it as an array, so the client prints it as a list: [1, [5, 6, [4, 5]]].
"in the C world, a function can return error code to represent the exit status, and use INOUT/OUT parameter to carry the actual fruit of the process"
Consider an exit status to be a hack. It's not a C-ism, it's a Linux-ism. C functions return exactly one value. C doesn't have exceptions, so there are several ways to indicate failure, all pretty bad.
Exception handling is what's needed. Python and Java have this, and they don't need exit status.
OSes, however, still depend on exit statuses, because shell scripting is still very primitive and some languages (like C) can't produce exceptions.
Consider in/out variables also to be a hack. This is a terrible hack because the function has multiple side-effects in addition to returning a value.
Both of these "features" aren't really the best design patterns to follow.
Ideally, a function is "idempotent" -- no matter how many times you call it, you get the same results. In/Out variables break idempotency in obscure, hard-to-debug ways.
You don't really need either of these features, that's why you don't see many best practices for implementing them.
The best practice is to return a value or raise an exception. If you need to return multiple values you return a tuple. If things didn't work, you don't return an exit status, you raise an exception.
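A minimal sketch of that practice over XML-RPC with the stock libraries (run_command and its whitelist are hypothetical, and s is the ServerProxy from the client snippet above). SimpleXMLRPCServer turns an uncaught server-side exception into an XML-RPC fault, which the client receives as xmlrpclib.Fault:

# Server side: raise on failure instead of returning a status code.
def run_command(name):
    if name not in ("uptime", "df"):  # hypothetical whitelist
        raise ValueError("unknown command: %r" % name)
    return "output of %s" % name  # hypothetical result
server.register_function(run_command)

# Client side: a remote exception arrives as an xmlrpclib.Fault.
try:
    print s.run_command("uptime")
except xmlrpclib.Fault as fault:
    print "remote failure:", fault.faultString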
Update: since the remote process is basically RSH running a remote command, you should do what remctl does.
You need to mimic http://linux.die.net/man/1/remctl precisely. You have to write a Python client and server. The server returns a message with a status code (and any other summary, like run time). The client exits with that same status code.
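A sketch of that contract under the same assumptions as above (run_remote is a hypothetical name; server and s are the XML-RPC server and ServerProxy from the earlier snippets):

# Server side: run the command and ship back output and exit status together.
import subprocess

def run_remote(cmd_args):
    proc = subprocess.Popen(cmd_args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return {'status': proc.returncode, 'stdout': out, 'stderr': err}
server.register_function(run_remote)

# Client side: relay the remote exit status as the local one, remctl-style.
import sys

result = s.run_remote(["uptime"])
sys.stdout.write(result['stdout'])
sys.stderr.write(result['stderr'])
sys.exit(result['status'])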