After a long search I've found this Python script that does what I need: it sends a real-time notification to my iOS app when a new email arrives. I usually write in Objective-C, and this is the first time I'm dealing with Python. Before I try to set up and run the script, I'd like to understand it a bit better.
This is the part that I'm not sure about:
    # Because this is just an example, exit after 8 hours
    time.sleep(8*60*60)
#finally:
    # Clean up.
    idler.stop()
    idler.join()
    M.close()
    # This is important!
    M.logout()
My questions:
Should I comment out time.sleep(8*60*60) if I want to keep the connection active at all times?
What's the use of the Clean up section? Do I need it if I want to keep the connection?
Why is M.logout() important?
The main question, which includes all of the above: what changes (if any) do I need to make to this script so that it runs without stopping or timing out?
Thanks
The script has started another thread; the actual work is done in that other thread.
The main thread is left with nothing to do, which is why the author put in the time.sleep(8*60*60) to occupy it for a while.
If you want to keep the connection active at all times you need to uncomment the try:/finally:, see below.
If you are new to Python, beware that indentation is used to define blocks of code. The cleanup part might not actually be useful if you don't plan to stop the program, but with the try:/finally: the cleanup code will be executed even if you stop the program with Ctrl+C.
Not tested:
# Had to do this stuff in a try-finally, since some testing
# went a little wrong.....
try:
    # Set the following two lines to your creds and server
    M = imaplib2.IMAP4_SSL("imap.gmail.com")
    M.login(USER, PASSWORD)
    # We need to get out of the AUTH state, so we just select
    # the INBOX.
    M.select("INBOX")
    numUnseen = getUnseen()
    sendPushNotification(numUnseen)
    #print M.status("INBOX", '(UNSEEN)')
    # Start the Idler thread
    idler = Idler(M)
    idler.start()
    # Sleep forever, one minute at a time
    while True:
        time.sleep(60)
finally:
    # Clean up.
    idler.stop()
    idler.join()
    M.close()
    # This is important!
    M.logout()
I am trying to get a Python program to run continuously until a certain AWS log entry is registered and printed. It is supposed to:
Run indefinitely even if no events happen
List whatever events occur (in my case, a log stating that the task is finished)
Stop running
The command looks like this: python3 watch_logs.py <log-source> --start=15:00:00
The logs are working fine, and the Python script can print them between given time frames as long as they already exist. A continuously running task prints events to the log file, and the Python script should filter out the events I am looking for and print them.
But when I run the script, it won't print the event even though I can see the log entry appear in the file. If I kill the process and run it again with the same timestamp, it finds the log entry and ends the script like it should.
The code is fairly short:
logs = get_log_events(
    log_group=log_group,
    start_time=start_time,
    end_time=end_time
)

while True:
    for event in logs:
        print(event['message'].rstrip())
        sys.exit("Task complete")
Any insight into why this is happening would help a lot. I am fairly new to Python.
The value in the logs variable is stale once the file is updated; you need to refresh it. For example, if you were to use logs = myfile.read() at the start of your script, the value in logs would be a snapshot of the file at that moment.
Try storing event['message'].rstrip() in a variable and checking with an if statement whether it corresponds to the log entry you want to find, as in the sketch below.
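A minimal, untested sketch of that suggestion, where "task finished" is a stand-in for whatever message marks completion in your logs:
for event in logs:
    message = event['message'].rstrip()
    print(message)
    # Only stop once the specific entry we are waiting for shows up
    if "task finished" in message:
        sys.exit("Task complete")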
If you don't want to read the file each time through the loop, you should have a look at pygtail (https://pypi.org/project/pygtail/).
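A quick sketch of how that could look (untested; mylog.log is a placeholder path, and pygtail remembers its position in an offset file, so each pass only yields lines added since the last read):
import time
from pygtail import Pygtail

while True:
    for line in Pygtail("mylog.log"):
        print(line.rstrip())
    time.sleep(5)  # check for new lines every few seconds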
I was overthinking the problem. I put the logs assignment inside the loop so it is re-evaluated on each cycle:
while True:
    logs = get_log_events(
        log_group=log_group,
        start_time=start_time,
        end_time=end_time
    )
    for event in logs:
        print(event['message'].rstrip())
        sys.exit("Task complete")
I am relatively new to Python, and to programming as a whole, but I am progressively getting the hang of it. However, I have been stumped lately by one of my latest projects. I have a set of Atlas Scientific EZO circuits with their corresponding sensors hooked up to my Raspberry Pi 3. I can run the i2c script fine, and the majority of the code makes sense to me.
However, I would like to pull data from the sensors and log it with a timestamp in a CSV file, taking data points at timed intervals. Making CSVs in Python is fairly simple, as is filling them with data, but I cannot seem to understand how to make the data that goes into the CSV match what is displayed in the terminal when one runs the Poll function. Linked below is the i2c sample code from Atlas' website, which I have annotated a bit to help me understand it better.
I have already attempted to make sense of the poll function, but am confused by the self.file_write and self.file_read methods used throughout the code. I believe they would be of use here, but I am stumped on the implementation. Here is the link to the Python script (i2c.py) written by Atlas Scientific:
https://github.com/AtlasScientific/Raspberry-Pi-sample-code/blob/master/i2c.py
I'm guessing by "the polling function" you are referring to this section of the code:
# continuous polling command automatically polls the board
elif user_cmd.upper().startswith("POLL"):
    delaytime = float(string.split(user_cmd, ',')[1])

    # check for polling time being too short, change it to the minimum timeout if too short
    if delaytime < AtlasI2C.long_timeout:
        print("Polling time is shorter than timeout, setting polling time to %0.2f" % AtlasI2C.long_timeout)
        delaytime = AtlasI2C.long_timeout

    # get the information of the board you're polling
    info = string.split(device.query("I"), ",")[1]
    print("Polling %s sensor every %0.2f seconds, press ctrl-c to stop polling" % (info, delaytime))

    try:
        while True:
            print(device.query("R"))
            time.sleep(delaytime - AtlasI2C.long_timeout)
    except KeyboardInterrupt: # catches the ctrl-c command, which breaks the loop above
        print("Continuous polling stopped")
If this is the case, then it looks like you can recycle most of this code. The string you are seeing in your console comes from device.query("R"); instead of printing it, grab the return value and write it to your CSV.
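For example, a rough, untested sketch of that loop: CSV_PATH is a placeholder of our own, while device, delaytime and AtlasI2C.long_timeout come from the POLL branch above.
import time
from datetime import datetime

CSV_PATH = "sensor_log.csv"  # placeholder file name

try:
    while True:
        reading = device.query("R")  # the same string POLL prints
        # Append a timestamped row instead of printing to the console
        with open(CSV_PATH, "a") as data_file:
            data_file.write("%s,%s\n" % (datetime.now().isoformat(), reading))
        time.sleep(delaytime - AtlasI2C.long_timeout)
except KeyboardInterrupt:
    print("Continuous polling stopped")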
I think you should add a method to the AtlasI2C class that writes data to a file.
Add this method just below AtlasI2C's __init__():
def update_file(self, new_data):
    with open(self.csv_file, 'a') as data_file:
        try:
            data = "{}\n".format(str(new_data))
            data_file.write(data)
        except Exception as e:
            print(e)
Then add an attribute with the CSV file name in AtlasI2C's __init__():
self.csv_file = "<my_filename>.csv"  # replace <my_filename> with your file name
and then, under line 51 (char_list = list(map(lambda x: chr(ord(x) & ~0x80), list(response[1:])))), add this line:
self.update_file(''.join(char_list))
Hope it helps.
Cheers,
Fenrir
I have a script, runReports.py, that is executed every night. If for some reason the script takes too long to execute, I want to be able to stop it from the terminal by issuing a command like ./runReports.py stop.
I tried to implement this by having the script create a temporary file when the stop command is issued.
The script checks for existence of this file before running each report.
If the file is there the script stops executing, else it continues.
But I am not able to find a way to make the issuer of the stop command aware that the script has stopped successfully. Something along the following lines:
$ ./runReports.py stop
Stopping runReports...
runReports.py stopped successfully.
How can I achieve this?
For example, if your script runs in a loop, you can catch a signal (http://en.wikipedia.org/wiki/Unix_signal) and terminate the process:
import signal

class SimpleReport(BaseReport):
    def __init__(self):
        ...
        self.is_running = True

    def _signal_handler(self, signum, frame):
        # Set the flag on the instance; a bare local assignment here
        # would not be visible to run()
        self.is_running = False

    def run(self):
        signal.signal(signal.SIGUSR1, self._signal_handler)  # set signal handler
        ...
        while self.is_running:
            print("Preparing report")
        print("Exiting ...")
To terminate the process, just run kill -SIGUSR1 <pid>.
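As a hedged sketch, the stop sub-command could send that signal and then wait for the process to disappear. This assumes, hypothetically, that the running script writes its PID to /tmp/runReports.pid at startup; that convention is our own, not part of the snippet above.
import os
import signal
import sys
import time

if len(sys.argv) > 1 and sys.argv[1] == "stop":
    with open("/tmp/runReports.pid") as f:
        pid = int(f.read())
    print("Stopping runReports...")
    os.kill(pid, signal.SIGUSR1)
    # Poll until the process is gone; signal 0 only checks existence
    while True:
        try:
            os.kill(pid, 0)
            time.sleep(0.5)
        except OSError:
            break
    print("runReports.py stopped successfully.")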
You want inter-process communication. You should first explore the different ways to do that: System V IPC (in-memory, very versatile, possibly baffling API), sockets including Unix domain sockets (in-memory, more limited, clean API), and the file system (persistent on disk, almost architecture independent), and choose yours.
Since you are asking about files, there are two ways to communicate using them: file content (feature-rich, harder to implement) or simply file presence. But the problem with files is that if a program terminates because of an error, it may not be able to write its exit status to disk.
IMHO, you should clearly define your requirements before choosing file-system-based communication (detecting the end of a program is not really what it is best at), unless you also need architecture independence.
To directly answer your question: if you use file-system communication, the only reliable way to know whether a program has ended is to browse the list of currently active processes, and the simplest way IMHO is to run ps -e in a subprocess, as sketched below.
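A rough sketch of that idea (Unix only; the ps column choice and the script name are assumptions for illustration):
import subprocess

def is_running(script_name):
    # `ps -e -o args=` prints the full command line of every process
    output = subprocess.check_output(["ps", "-e", "-o", "args="])
    return script_name in output.decode()

if not is_running("runReports.py"):
    print("runReports.py stopped successfully.")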
Instead of a temporary file, you could have a permanent file (config.txt) that contains some tags, and check whether the tag running = True is set.
Achieving this is quite simple: if your code has a loop in it (I imagine it does), just write a function/method that checks this file each time around.
def continue_running():
    with open("config.txt") as f:
        for line in f:
            tag, condition = line.split(" = ")
            # strip() removes the trailing newline so the comparison works
            if tag == "running" and condition.strip() == "True":
                return True
    return False
In your script you will do this:
while True:  # or your termination condition
    if continue_running():
        pass  # your regular code goes here
    else:
        break
So all you have to do to stop the loop is change running in config.txt to anything but True.
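For instance, the stop command could simply rewrite the file (a sketch; note this overwrites the whole file, so any other tags would need to be rewritten too):
with open("config.txt", "w") as f:
    f.write("running = False\n")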
I'm trying to connect to a TeamSpeak server using the QueryServer to make a bot. I've taken advice from this thread; however, I still need help.
This is the TeamSpeak API that I'm using.
Before the edits, this was the summary of what actually happened in my script (1 connection):
It connects.
It checks for the channel ID (and its own client ID)
It joins the channel and starts reading everything
If someone says a specific command, it executes the command and then disconnects.
How can I make it so it doesn't disconnect? How can I make the script stay in a "waiting" state so it can keep reading after the command is executed?
I am using Python 3.4.1.
I tried learning threading, but either I'm dumb or it doesn't work the way I thought it would. There's another "bug": while waiting for events, if I don't trigger anything with a command, it disconnects after 60 seconds.
# Libraries
import ts3
import threading
import datetime
from random import choice, sample

# Data needed #
USER = "thisisafakename"
PASS = "something"
HOST = "111.111.111.111"
PORT = 10011
SID = 1

class BotPrincipal:
    def __init__(self, manejador=False):
        self.ts3conn = ts3.query.TS3Connection(HOST, PORT)
        self.ts3conn.login(client_login_name=USER, client_login_password=PASS)
        self.ts3conn.use(sid=SID)
        channelToJoin = Bot.GettingChannelID("TestingBot")
        try: #Login with a client that is ok
            self.ts3conn.clientupdate(client_nickname="The Reader Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])
            self.suscribirEvento("textchannel", ChannelToJoin)
            self.ts3conn.on_event = self.manejadorDeEventos
            self.ts3conn.recv_in_thread()
        except ts3.query.TS3QueryError: #Name already exists, 2nd client connect with this info
            self.ts3conn.clientupdate(client_nickname="The Writer Bot")
            self.MyData = self.GettingMyData()
            self.MoveUserToChannel(ChannelToJoin, Bot.MyData["client_id"])

    def __del__(self):
        self.ts3conn.close()

    def GettingMyData(self):
        respuesta = self.ts3conn.whoami()
        return respuesta.parsed[0]

    def GettingChannelID(self, nombre):
        respuesta = self.ts3conn.channelfind(pattern=ts3.escape.TS3Escape.unescape(nombre))
        return respuesta.parsed[0]["cid"]

    def MoveUserToChannel(self, idCanal, idUsuario, passCanal=None):
        self.ts3conn.clientmove(cid=idCanal, clid=idUsuario, cpw=passCanal)

    def suscribirEvento(self, tipoEvento, idCanal):
        self.ts3conn.servernotifyregister(event=tipoEvento, id_=idCanal)

    def SendTextToChannel(self, idCanal, mensajito="Error"):
        self.ts3conn.sendtextmessage(targetmode=2, target=idCanal, msg=mensajito) #This works
        print("test") #PROBLEM HERE This doesn't work. Why? the line above did work

    def manejadorDeEventos(sender, event):
        message = event.parsed[0]['msg']
        if "test" in message: #This works
            Bot.SendTextToChannel(ChannelToJoin, "This is a test") #This works

if __name__ == "__main__":
    Bot = BotPrincipal()
    threadprincipal = threading.Thread(target=Bot.__init__)
    threadprincipal.start()
Prior to using 2 bots, I tested launching SendTextToChannel when it connects, and it works perfectly, letting me do anything I want after it sends the text to the channel. The bug that makes the entire Python script stop only happens when it's triggered by manejadorDeEventos.
Edit 1 - Experimenting with threading.
I messed things up big time with threading and ended up with 2 clients connecting at the same time. Somehow I think one of them is reading the events and the other one is answering. The script doesn't close itself anymore, and that's a win, but having a clone connection doesn't look good.
Edit 2 - Updated code and actual state of the problem.
I managed to make the double connection work more or less "fine", but it disconnects if nothing happens in the room for 60 seconds. I tried using threading.Timer but couldn't make it work. The question's code has been updated accordingly.
I would like an answer that helps me both read from the channel and answer to it without connecting a second bot (like it's currently doing...), and I would give extra points if the answer also shows an easy way to send a query to the server every 50 seconds so it doesn't disconnect.
From looking at the source, recv_in_thread doesn't create a thread that loops around receiving messages until quit time; it creates a thread that receives a single message and then exits:
def recv_in_thread(self):
    """
    Calls :meth:`recv` in a thread. This is useful,
    if you used ``servernotifyregister`` and you expect to receive events.
    """
    thread = threading.Thread(target=self.recv, args=(True,))
    thread.start()
    return None
That implies that you have to repeatedly call recv_in_thread, not just call it once.
I'm not sure exactly where to do so from reading the docs, but presumably it's at the end of whatever callback gets triggered by a received event; I think that's your manejadorDeEventos method? (Or maybe it's something related to the servernotifyregister method? I'm not sure what servernotifyregister is for and what on_event is for…)
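A minimal, untested sketch of that idea, using the question's own names (and with the self parameter fixed, as discussed below):
def manejadorDeEventos(self, event):
    message = event.parsed[0]['msg']
    if "test" in message:
        self.SendTextToChannel(ChannelToJoin, "This is a test")
    # recv_in_thread only receives a single message, so re-arm it
    # here to keep listening for the next event
    self.ts3conn.recv_in_thread()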
That manejadorDeEventos brings up two side points:
You've declared manejadorDeEventos wrong. Every method has to take self as its first parameter. When you pass a bound method, like self.manejadorDeEventos, that bound self object is going to be passed as the first argument, before any arguments that the caller passes. (There are exceptions to this for classmethods and staticmethods, but those don't apply here.) Also, within that method, you should almost certainly be accessing self, not a global variable Bot that happens to be the same object as self.
If manejadorDeEventos is actually the callback for recv_in_thread, you've got a race condition here: if the first message comes in before your main thread finishes the on_event assignment, recv_in_thread won't be able to call your event handler. (This is exactly the kind of bug that often shows up one time in a million, making it a huge pain to debug when you discover it months after deploying or publishing your code.) So, reverse those two lines.
One last thing: a brief glimpse at this library's code is a bit worrisome. It doesn't look like it's written by someone who really knows what they're doing. The method I copied above only has 3 lines of code, but it includes a useless return None and a leaked Thread that can never be joined, not to mention that the whole design of making you call this method (and spawn a new thread) after each event received is weird, and even more so given that it's not really explained. If this is the standard client library for a service you have to use, then you really don't have much choice in the matter, but if it's not, I'd consider looking for a different library.
I've tried posting this in the reverse-engineering stack-exchange, but I thought I'd cross-post it here for more visibility.
I'm having trouble switching from debugging one thread to another in pydbg. I don't have much experience with multithreading, so I'm hoping that I'm just missing something obvious.
Basically, I want to suspend all threads, then start single stepping in one thread. In my case, there are two threads.
First, I suspend all threads. Then, I set a breakpoint on the location where EIP will be when thread 2 is resumed. (This location is confirmed by using IDA). Then, I enable single-stepping as I would in any other context, and resume Thread 2.
However, pydbg doesn't seem to catch the breakpoint exception! Thread 2 seems to resume, and even though it MUST hit that address, there is no indication that pydbg is catching the breakpoint exception. I added a print "HIT BREAKPOINT" statement inside pydbg's internal breakpoint handler, and it never seems to be called after resuming Thread 2.
I'm not too sure about where to go next, so any suggestions are appreciated!
dbg.suspend_all_threads()
print dbg.enumerate_threads()[0]
oldcontext = dbg.get_thread_context(thread_id=dbg.enumerate_threads()[0])
if (dbg.disasm(oldcontext.Eip) == "ret"):
    print disasm_at(dbg, oldcontext.Eip)
    print "Thread EIP at a ret"
    addrstr = int("0x" + (dbg.read(oldcontext.Esp + 4, 4))[::-1].encode("hex"), 16)
    print hex(addrstr)
    dbg.bp_set(0x7C90D21A, handler=Thread_Start_bp_Handler)
    print dbg.read(0x7C90D21A, 1).encode("hex")
    dbg.bp_set(oldcontext.Eip + dbg.instruction.length, handler=Thread_Start_bp_Handler)
dbg.set_thread_context(oldcontext, thread_id=dbg.enumerate_threads()[0])
dbg.context = oldcontext
dbg.resume_thread(dbg.enumerate_threads()[0])
dbg.single_step(enable=True)
return DBG_CONTINUE
Sorry about the "magic numbers", but they are correct as far as I can tell.
One of your problems is that you are trying to single-step through Thread 2, yet your code only ever refers to the first thread:
dbg.enumerate_threads()[0] # <--- returns the first thread's ID
In addition, the code you posted does not reflect the complete structure of your script, which makes it hard to judge whether you have other errors. You also set breakpoints inside the sub-branch that disassembles your instructions, which does not make a lot of sense to me logically. Let me try to explain what I know and lay it out in an organized manner; that way you can look back at your code, rethink it, and correct it.
Let's start with the basic framework for debugging an application with pydbg:
Create debugger instance
Attach to the process
Set breakpoints
Run it
Breakpoint gets hit - handle it.
This is how it could look:
from pydbg import *
from pydbg.defines import *

# This is the maximum number of instructions we will log
MAX_INSTRUCTIONS = 20

# Address of the breakpoint
func_address = 0x7C90D21A

# Create debugger instance
dbg = pydbg()

# PID to attach to
pid = int(raw_input("Enter PID: "))

# Attach to the process with the debugger instance created earlier.
# Attaching the debugger will pause the process.
dbg.attach(pid)

# Let's set the breakpoint and handler as thread_step_setter,
# which we will define a little later...
dbg.bp_set(func_address, handler=thread_step_setter)

# Let's set our "personalized" handler for the single step exception.
# It will get triggered if execution of a thread goes into single step mode.
dbg.set_callback(EXCEPTION_SINGLE_STEP, single_step_handler)

# Setup is done. Let's run it...
dbg.run()
Now that we have the basic structure, let's define our personalized handlers for the breakpoint and for single stepping. When the breakpoint hits, we iterate through the threads and put them into single-step mode. That in turn triggers the single-step exception, which we handle by disassembling up to MAX_INSTRUCTIONS instructions:
# Shared counter for how many instructions we have logged so far
total_instructions = 0

def thread_step_setter(dbg):
    dbg.suspend_all_threads()
    for thread_id in dbg.enumerate_threads():
        print "Single step for thread: 0x%08x" % thread_id
        h_thread = dbg.open_thread(thread_id)
        dbg.single_step(True, h_thread)
        dbg.close_handle(h_thread)
    # Resume execution, which will pass control to the step handler
    dbg.resume_all_threads()
    return DBG_CONTINUE

def single_step_handler(dbg):
    global total_instructions
    if total_instructions == MAX_INSTRUCTIONS:
        dbg.single_step(False)
        return DBG_CONTINUE
    else:
        # Disassemble the instruction at the current EIP
        current_instruction = dbg.disasm(dbg.context.Eip)
        print "#%d\t0x%08x : %s" % (total_instructions, dbg.context.Eip, current_instruction)
        total_instructions += 1
        dbg.single_step(True)
        return DBG_CONTINUE
Disclaimer: I do not guarantee that the code above will work if copied and pasted. I typed it out and haven't tested it. However, once the basic idea is understood, any small syntax errors should be easy to fix. I apologize in advance if there are any; I don't currently have the means or time to test it.
I really hope it helps you out.