Run Python script line-by-line in shell vs atomically

I'm integrating MicroPython into a microcontroller and I want to add a debug step-by-step execution mode to my product (via a connection to a PC).
Thankfully, MicroPython includes REPL (i.e. Python shell) functionality: I can feed it one line at a time and have it executed.
I want to use this feature to single-step on the PC side and send the lines of the Python script one by one.
Is there ANY difference, besides possibly timing, between running a Python script one line at a time vs python my_script.py?

Passing one line of code at a time on stdin is a completely unacceptable alternative to a proper debugger.
Let's say that you want to debug the following:
def foo():                            # 1
    for i in range(10):               # 2
        if i == 5:                    # 3
            raise Exception("Argh!")  # 4
                                      # 5
foo()                                 # 6
...in a proper step-by-step debugger, the user could use it like so:
break 4
run
Now, how are you going to do that? If you enter the function in a REPL, the function is defined as one operation, and when you call it on line 6 it runs as one operation: it doesn't stop at the breakpoint on line 4, and it doesn't let you proceed line by line. The same is true of the for loop: entering the text of the for loop one line at a time doesn't let you step it before the exception is thrown.
If you eliminate the function, and eliminate the loop (generating code like _something = iter(range(10)); i = next(_something), maybe?), then you need to emulate the effects of scoping. That means you have a hugely different language from the one you're purportedly "debugging".
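To make the scoping problem concrete, here is a hypothetical transcript of that manual unrolling at a REPL; the helper name _something is taken from the line above, and note how it leaks into the global namespace, which the original for statement never creates:
>>> _something = iter(range(10))   # helper leaks into globals
>>> i = next(_something)           # "step" 1: i == 0
>>> i = next(_something)           # "step" 2: i == 1
>>> '_something' in globals()      # the "debugged" program has been changed
True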

I don't know whether MicroPython has compile() and exec() built in.
But when the embedded Python has them and the MCU has enough RAM, I do the following:
Send a line to the embedded shell to start the creation of a variable holding a multiline string:
_code = """
Send the code I wish executed (line by line or however).
Close the multiline string with """.
Send an exec command to run the transferred code stored in the variable on the MCU, and pick up the output.
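A minimal PC-side sketch of that sequence, assuming the MCU's REPL is reachable over a serial port via pyserial (the port name and baud rate are hypothetical, and a real script would also have to consume the REPL's echo and prompts):
import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # hypothetical port

def send_line(line):
    ser.write((line + '\r\n').encode())

send_line('_code = """')              # open the multiline string
with open('my_script.py') as f:
    for line in f:
        send_line(line.rstrip('\n'))  # transfer the script body verbatim
send_line('"""')                      # close the multiline string
send_line('exec(_code)')              # run the transferred code on the MCU
print(ser.read(4096).decode())        # pick up whatever output came back
Note that this naive framing breaks if the script itself contains triple quotes or backslashes; escaping (or MicroPython's raw-REPL paste mode, where available) would be needed for the general case.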
If your RAM is small and you cannot transfer the whole code at once, you should transfer it in blocks that can be executed on their own, like functions, loops, etc.
If you can compile bytecode for MicroPython on a PC, then you should be able to transfer it and prepare it for execution; this would use a lot less RAM.
But whether you can inject the raw bytecode into the shell and run it depends on how much MicroPython resembles CPython.
And yep, there are differences. As explained in another answer, line-by-line execution can be tricky, so blocks of code are your best bet.

Is there ANY difference ...
Yes.
The code below, for example, works in a .py file but is a SyntaxError in the interactive interpreter:
x = 1
if x == 1:
    pass
x = 2
There are many other differences, but this alone should be enough to scare you away from the idea.

Related

How to make a python script stopable from another script?

TL;DR: If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time? (Without KeyboardInterrupt or killing the task)
--
I've recently posted this question: How to make my code stopable? (Not killing/interrupting)
The answers did address my question, but from a termination/interruption point of view, and that's not really what I wanted. (Although my question didn't make that clear.)
So, I'm rephrasing it.
I created a generic script for example purposes. So I have this class that gathers data from a generic API and writes the data into a CSV. The code is started by typing python main.py in a terminal window.
import time, csv
import GenericAPI

class GenericDataCollector:
    def __init__(self):
        self.generic_api = GenericAPI()
        self.loop_control = True

    def collect_data(self):
        while self.loop_control:  # Can this var be changed from outside of the class? (Maybe one solution)
            data = self.generic_api.fetch_data()  # Returns a JSON with some data
            self.write_on_csv(data)
            time.sleep(1)

    def write_on_csv(self, data):
        with open('file.csv', 'wt') as f:
            writer = csv.writer(f)
            writer.writerow(data)

def run():
    obj = GenericDataCollector()
    obj.collect_data()

if __name__ == "__main__":
    run()
The script is supposed to run forever OR until I command it to stop. I know I can just use KeyboardInterrupt (Ctrl+C) or abruptly kill the task, but that isn't what I'm looking for. I want a "soft" way to tell the script it's time to stop, not only because interruption can be unpredictable, but also because it's a harsh way to stop.
If that script were running in a Docker container (for example), you wouldn't be able to Ctrl+C unless you happened to be in a terminal/bash session inside that container.
Or another situation: if that script was made for a customer, I don't think it's OK to tell the customer to just use Ctrl+C/kill the task to stop it. That's definitely counterintuitive, especially for a non-tech person.
I'm looking for a way to code another script (assuming that's a possible solution) that would set the attribute obj.loop_control to False, letting the loop finish its current iteration and then stop. Something that could be run by typing python stop_script.py in a (different) terminal.
It doesn't necessarily need to be this way; other solutions are also acceptable, as long as they don't involve KeyboardInterrupt or killing tasks. If I could use a method inside the class, that would be great, as long as I can call it from another terminal/script.
Is there a way to do this?
If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time?
In general, there are two main ways of doing this (as far as I can see). The first one would be to make your script check some condition that can be modified from outside (like the existence or the content of some file/socket), or, as @Green Cloak Guy stated, to use pipes, which are one form of interprocess communication.
The second one would be to use the built-in mechanism for interprocess communication called signals, which exists in every OS where Python runs. When the user presses Ctrl+C, the terminal sends a specific signal to the process in the foreground. But you can send the same (or another) signal programmatically (i.e. from another script).
Reading the answers to your other question, I would say that what is missing to address this one is a way to send the appropriate signal to your already-running process. Essentially this can be done by using the os.kill() function. Note that although the function is called 'kill', it can send any signal (not only SIGKILL).
In order for this to work you need to have the process id of the running process. A commonly used way of knowing this is to have your script save its process id, when it launches, into a file stored in a common location. To get the current process id you can use the os.getpid() function.
So, summarizing, I'd say the steps to achieve what you want would be:
Modify your current script to store its process id (obtainable via os.getpid()) in a file in a common location, for example /tmp/myscript.pid. Note that if you want your script to be portable, you will need to address this in a way that works on non-Unix-like OSs such as Windows.
Choose a signal (typically SIGINT or SIGTERM; SIGSTOP and SIGKILL cannot be caught) and modify your script to register a custom handler with signal.signal() that performs the graceful termination of your script.
Create another script (note that it could be the same script with some command-line parameter) that reads the process id from the known file (i.e. /tmp/myscript.pid) and sends the chosen signal to that process using os.kill(); a sketch of both sides follows this list.
Note that an advantage of using signals to achieve this instead of an external mechanism (files, pipes, etc.) is that the user can still press Ctrl+C (if you chose SIGINT) and that will produce the same behavior as the 'stop script' would.
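A minimal sketch of both sides, assuming a Unix-like OS and the pid-file path from the steps above:
# main.py -- write a pid file, stop gracefully on SIGTERM
import os
import signal
import time

running = True

def handle_term(signum, frame):
    global running
    running = False              # let the loop finish its current iteration

signal.signal(signal.SIGTERM, handle_term)

with open('/tmp/myscript.pid', 'w') as f:
    f.write(str(os.getpid()))

while running:
    time.sleep(1)                # do the real work (fetch/write data) here

# stop_script.py -- read the pid file and send the chosen signal
import os
import signal

with open('/tmp/myscript.pid') as f:
    pid = int(f.read())
os.kill(pid, signal.SIGTERM)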
What you're really looking for is any way to send a signal from one program to another, independent program. One way to do this would be to use an inter-process pipe. Python has a module for this (which does, admittedly, seem to require a POSIX-compliant shell, but most major operating systems should provide that).
What you'll have to do is agree on a filepath beforehand between your running program (let's say main.py) and your stopping program (let's say stop.sh). Then you might make the main program run until someone writes something to that pipe:
import pipes
...
t = pipes.Template()
# create a pipe in the first place
t.open("/tmp/pipefile", "w")
# create a lasting pipe to read from that
pipefile = t.open("/tmp/pipefile", "r")
...
And now, inside your program, change your loop condition to "as long as there's no input from this file": unless someone writes something to it, .read() will return an empty string.
while not pipefile.read():
    # do stuff
To stop it, you create another file or script or something that will write to that pipe. This is easiest to do with a shell script:
#!/usr/bin/env sh
echo STOP >> /tmp/pipefile
which, if you're containerizing this, you could put in /usr/bin and name it stop, give it at least 0111 permissions, and tell your user "to stop the program, just do docker exec containername stop".
(using >> instead of > is important because we just want to append to the pipe, not to overwrite it).
Proof of concept on my python console:
>>> import pipes
>>> t = pipes.Template()
>>> t.open("/tmp/file1", "w")
<_io.TextIOWrapper name='/tmp/file1' mode='w' encoding='UTF-8'>
>>> pipefile = t.open("/tmp/file1", "r")
>>> i = 0
>>> while not pipefile.read():
...     i += 1
...
At this point I go to a different terminal tab and do
$ echo "Stop" >> /tmp/file1
then I go back to my python tab, and the while loop is no longer executing, so I can check what happened to i while I was gone.
>>> print(i)
1704312

Python Unusual error with multiline code in IDLE Shell

I was testing some code in IDLE for Python, which I haven't used in a while, and stumbled on an unusual error.
I was attempting to run this simple code:
for i in range(10):
    print(i)
print('Done')
I recall that the shell works on a line-by-line basis, so here is what I did first, typing each line with no indentation:
>>> for i in range(10):
print(i)
print('Done')
This resulted in an indentation error.
I tried another way, thinking that perhaps the next statement needed to be at the start of the line, like this:
>>> for i in range(10):
    print(i)
print('Done')
But this gave a syntax error, oddly.
The way IDLE works here is quite odd to me.
Take note:
I am actually testing a much more complex program and didn't want to create a small Python file for a short test. After all, isn't IDLE's shell meant for short tests anyway?
Why is multi-line coding causing this issue? Thanks.
Just hit Return once or twice after the print(i), until you get the >>> prompt again. Then you can type the print('Done'). What's going on is that Python is waiting for you to tell it that you're done working inside that for block, and you do that by hitting Return.
(You'll see, though, that the for loop is executed right away.)
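In other words, a working IDLE session looks like this; the blank line after print(i) is what ends the loop body:
>>> for i in range(10):
    print(i)

0
1
2
3
4
5
6
7
8
9
>>> print('Done')
Done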

Executing commands at breakpoint in gdb via python interface

Not a duplicate of this question, as I'm working through the Python interface to gdb.
This one is similar but does not have an answer.
I'm extending gdb.Breakpoint in Python so that it writes certain registers to a file and then jumps to an address: at 0x4021ee, I want to write stuff to a file, then jump to 0x4021f3.
However, nothing in command is ever getting executed.
import gdb

class DebugPrintingBreakpoint(gdb.Breakpoint):
    def __init__(self, spec, command):
        super(DebugPrintingBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.command = command

    def stop(self):
        with open('tracer', 'a') as f:
            # int() converts the gdb.Value; the with block closes the file
            f.write(chr(int(gdb.parse_and_eval("$rbx")) ^ 0x71))
        return False

gdb.execute("start")
DebugPrintingBreakpoint("*0x4021ee", "jump *0x4021f3")
gdb.execute("continue")
If I explicitly add gdb.execute(self.command) to the end of stop(), I get Python Exception <class 'gdb.error'> Cannot execute this command while the selected thread is running.
Anyone have a working example of command lists with breakpoints in python gdb?
A couple of options to try:
Use gdb.post_event from stop() to run the desired command later (see the sketch after this list). I believe you'll need to return True from your function and then call continue from your event.
Create a normal breakpoint and listen to events.stop to check whether your breakpoint was hit.
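A sketch of the first option, under the assumption that gdb.post_event and the jump command behave as documented (untested against a real binary; the addresses are the questioner's):
import gdb

class JumpingBreakpoint(gdb.Breakpoint):
    def __init__(self, spec, target):
        super(JumpingBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.target = target

    def stop(self):
        # Defer the jump until gdb has finished processing this stop.
        gdb.post_event(lambda: gdb.execute("jump " + self.target))
        return True   # actually stop, so the posted event can run

JumpingBreakpoint("*0x4021ee", "*0x4021f3")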
The Breakpoint.stop method is called when, in gdb terms, the inferior is still "executing". Partly this is a bookkeeping oddity: of course the inferior isn't really executing; it is stopped while gdb does a bit of breakpoint-related processing. Internally it is more like gdb hasn't yet decided to report the stop to other interested parties inside gdb. This funny state is what lets stop work so nicely vis-à-vis next and other execution commands.
Some commands in gdb can't be invoked while the inferior is running, like jump, as you've found.
One thing you could try -- I have never tried this and don't know if it would work -- would be to assign to the PC in your stop method. This might do the right thing; but of course you should know that the documentation warns against doing weird stuff like this.
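For reference, that PC-assignment idea might look like the following inside stop() (hypothetical and untested, exactly as cautioned above; the target address is the questioner's):
def stop(self):
    # ...write the register trace as before, then move the program counter:
    gdb.parse_and_eval("$pc = 0x4021f3")   # assumption: $pc is assignable from stop()
    return False                           # resume from the new address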
Failing that, I think the only approach is to fall back to using commands to attach the jump to the breakpoint. This has the drawback that it will interfere with next.
One final way would be to patch the running code to insert a jump, or just a sequence of nops.

Best way to return a value from a python script

I wrote a script in Python that takes a few files, runs a few tests, and counts the number of total_bugs while writing new files with information for each (bugs + more).
To take a couple of files from the current working directory:
myscript.py -i input_name1 input_name2
When that job is done, I'd like the script to 'return total_bugs', but I'm not sure of the best way to implement this.
Currently, the script prints stuff like:
[working directory]
[files being opened]
[completed work for file a + num_of_bugs_for_a]
[completed work for file b + num_of_bugs_for_b]
...
[work complete]
A bit of help (notes/tips/code examples) would be helpful here.
Btw, this needs to work on Windows and Unix.
If you want your script to return values, just do return [1,2,3] from a function wrapping your code, but then you'd have to import your script from another script to even have any use for that information:
Return values (from a wrapping function)
(again, this would have to be run by a separate Python script and imported in order to do any good):
import ...

def main():
    # calculate stuff
    return [1,2,3]
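On the calling side, that might be used like so (a sketch; it assumes the wrapping function lives in the questioner's myscript.py):
import myscript          # the script above, imported as a module

bugs = myscript.main()   # [1, 2, 3]
print(len(bugs))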
Exit codes as indicators
(This is generally only good for when you want to indicate to a governor what went wrong, or simply the number of bugs/rows counted, or whatever. Normally 0 is a good exit and >= 1 is a bad exit, but you could interpret them in any way you want to get data out of them.)
import sys
# calculate and stuff
sys.exit(100)
And exit with a specific exit code depending on what you want to tell your governor.
I used exit codes when running scripts under a scheduling and monitoring environment to indicate what had happened.
(os._exit(100) also works, and is a bit more forceful.)
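On the governor's side, picking up that exit code could look like this (a sketch using the stdlib subprocess module; the command line matches the question):
import subprocess

result = subprocess.run(['python', 'myscript.py', '-i', 'input_name1', 'input_name2'])
print('total_bugs =', result.returncode)   # whatever was passed to sys.exit()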
Stdout as your relay
If not, you'd have to use stdout to communicate with the outside world (as you've described).
But that's generally a bad idea unless it's a parser executing your script that can catch whatever it is you're reporting.
import sys
# calculate stuff
sys.stdout.write('Bugs: 5|Other: 10\n')
sys.stdout.flush()
sys.exit(0)
If you're running your script in a controlled scheduling environment, then exit codes are the best way to go.
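On the parser's side, catching that stdout relay might look like this (a sketch; the 'Bugs: 5|Other: 10' format is the one shown above, and capture_output needs Python 3.7+):
import subprocess

out = subprocess.run(['python', 'myscript.py'],
                     capture_output=True, text=True).stdout
fields = dict(item.split(': ') for item in out.strip().split('|'))
print(fields['Bugs'])   # '5'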
Files as conveyors
There's also the option to simply write information to a file, and store the result there.
# calculate
with open('finish.txt', 'w') as fh:
    fh.write(str(5) + '\n')
And pick up the value/result from there. You could even do it in CSV format for others to read easily.
Sockets as conveyors
If none of the above works, you can also use network sockets locally (Unix sockets are a great option on *nix systems). These are a bit more intricate and deserve their own post/answer, but I'm adding the option here as it's a good way to communicate between processes, especially ones that run multiple tasks and return values.
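For completeness, a minimal Unix-socket sketch (*nix-only; the path /tmp/bugs.sock is hypothetical). The receiving side:
import os
import socket

path = '/tmp/bugs.sock'
if os.path.exists(path):
    os.remove(path)                  # clean up a stale socket file

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)
conn, _ = srv.accept()
print('total_bugs =', conn.recv(64).decode())
conn.close()
And the reporting side, run from the script when it finishes:
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect('/tmp/bugs.sock')
s.sendall(b'5')                      # the computed total_bugs
s.close()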

Python File append error

This code works fine for me. Appends data at the end.
def writeFile(dataFile, nameFile):
    fob = open(nameFile, 'a+')
    fob.write("%s\n" % dataFile)
    fob.close()
But the problem is that when I close the program and later run it again, I find that all the previous data were lost: writing starts from the beginning, and there is no old data in the file.
Yet during a single run it correctly appends each line at the end of the file.
I can't understand the problem. Please, someone help.
NB: I am using Ubuntu 10.04 with Python 2.6.
There is nothing wrong with the code you posted here... I tend to agree with the other comments that this file is probably being overwritten elsewhere in your code.
The only suggestion I can think of to test this explicitly (if your use case can tolerate it) is to throw in an exit() statement at the end of the function, then open the file externally (e.g. in gedit) and see if the last change took.
As an alternative to the exit, you could run the program in the terminal and include a call to pdb at the end of this function, which would interrupt the program without killing it:
import pdb; pdb.set_trace()
You will then have to hit c to continue the program each time this runs.
If that checks out, do a search for other places this file might be getting opened.
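For what it's worth, the usual culprit looks something like this (a hypothetical illustration, not code from the question): an open in write mode somewhere at program startup, which truncates the file and discards everything earlier runs appended.
fob = open(nameFile, 'w')   # 'w' truncates the file to zero length on every run;
                            # it must be 'a' if old contents are to survive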
