I was testing some code in IDLE for Python, which I haven't used in a while, and stumbled on an unusual error.
I was attempting to run this simple code:
for i in range(10):
    print(i)
print('Done')
I recall that the shell works on a line-by-line basis, so here is what I did first:
>>> for i in range(10):
        print(i)
    print('Done')
This resulted in an indentation error:
I then tried another way, thinking that perhaps the next statement needed to be at the start of the line, like this:
>>> for i in range(10):
        print(i)
print('Done')
But this, oddly, gave a syntax error.
The way IDLE handles this seems quite odd to me.
Take Note:
I am actually testing a much more complex program and didn't want to create a small Python file just for a short test. After all, isn't IDLE's shell meant for short tests anyway?
Why is multi-line coding causing this issue? Thanks.
Just hit Return once or twice after the print(i), until you get the >>> prompt again. Then you can type the print('Done'). What's going on is that Python is waiting for you to tell it that you're done working inside that for loop, and you do that by hitting Return.
(You'll see, though, that the for loop is executed right away.)
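In other words, a working session looks roughly like this (output shortened):

>>> for i in range(10):
        print(i)

0
1
...
9
>>> print('Done')
Done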
Related
I wrote a little python3 script that runs another program with Popen and processes its output to create a little dashboard for it. The script generates a long string with information about the other program, clears the terminal and prints the string. Every time the screen refreshes, the whole terminal flickers.
Here are the important parts of my script:
import os

def screen_clear():
    # clear the terminal on both POSIX systems and Windows
    if os.name == 'posix':
        os.system('clear')
    else:
        os.system('cls')

def display(lines):
    # lines is a list of, well, lines I guess
    s = ''
    for line in lines:
        s = s + '\n' + str(line)
    screen_clear()
    print(s)
I bet there's a more elegant way to do this without flickering, right?
Thanks in advance for any help!
The only solution I can think of to try would be using print(s, end='\r') instead of clearing the screen first and printing again. The \r (carriage return) tells the console to go back to the start of the line, so the next output overwrites it.
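A minimal sketch of that idea, assuming the dashboard fits on a single line (which is the main limitation of the \r trick):

import time

# rewrite a single status line in place with '\r' instead of clearing
# the whole screen; this only works while the output stays on one line
for i in range(10):
    print('progress: %d/10' % (i + 1), end='\r', flush=True)
    time.sleep(0.5)
print()  # finish with a newline so the shell prompt isn't overwritten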
In the end, I'm sorry to say that consoles are simply not made to be used as a dashboard with permanently changing values. If the aforementioned solution doesn't work, maybe try implementing your dashboard in another way; Python offers lots of options for that.
This is probably a dumb question, but I'm new to programming and I have a recursive function set up that I'm trying to figure out. For any print call in Python, is it necessarily true that lines are printed in the order in which they are written in the script? Or, for larger outputs, is it possible that a smaller output gets printed to the console first even though its print statement comes later in the code (maybe due to some memory lag)?
Example:
def test_print():
    # don't run this, it was just meant for scale. Is there any chance
    # the 1 would print before the list of lists?
    print([[i for i in range(10000)] for j in range(10000)])
    print(1)
Print statements send their output to stdout in the order the code is written, top to bottom. It isn't possible any other way, because that's the order in which the code is interpreted. Memory lag doesn't play any role here: the output in your console is a line-for-line rendition of the data that was written to stdout, and the order in which that data was written can't change, so chronology is maintained. Of course, you can always play around with how the print function itself works, but I wouldn't recommend tampering with standard-library functions.
As said above, print() calls are executed in the order in which they appear in your code. But you yourself can change that order; after all, you have every right to instruct the code to do whatever you want.
You'll always get the same order in the output as the order in which you execute the print() calls in Python.
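A scaled-down version of the question's example (small enough to actually run) shows the same thing: the list of lists is always printed in full before the 1.

def test_print():
    # much smaller dimensions than in the question, so this is safe to run
    print([[i for i in range(100)] for j in range(100)])
    print(1)  # always appears after the list of lists

test_print()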
I'm integrating MicroPython into a microcontroller and I want to add a debug step-by-step execution mode to my product (via a connection to a PC).
Thankfully, MicroPython includes a REPL (i.e. Python shell) functionality: I can feed it one line at a time to execute.
I want to use this feature to single-step on the PC side and send the lines of the Python script in one by one.
Is there ANY difference, besides possibly timing, between running a Python script one line at a time vs python my_script.py?
Passing one line of code at a time on stdin is a completely unacceptable alternative to a proper debugger.
Let's say that you want to debug the following:
def foo():                            # 1
    for i in range(10):               # 2
        if i == 5:                    # 3
            raise Exception("Argh!")  # 4
                                      # 5
foo()                                 # 6
...in a proper step-by-step debugger, the user could use it like so:
break 4
run
Now, how are you going to do that? If you enter the function in a REPL, the function is defined as one operation, and it runs as one operation. It doesn't stop at line 6. It doesn't let you proceed line by line. The same is true of the for loop: entering the text of the for loop one line at a time doesn't let you step through it before the exception is thrown.
If you eliminate the function, and eliminate the loop (generating code like _something = iter(range(10)); i = next(_something), maybe?), then you need to emulate the effects of scoping. That means you have a hugely different language than the one you're purportedly "debugging".
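For comparison, here is roughly what that break/run session looks like with CPython's pdb, which standard MicroPython builds do not include (pdb's continue plays the role of run, and the path is just a placeholder):

$ python -m pdb my_script.py
> /path/to/my_script.py(1)<module>()
-> def foo():                            # 1
(Pdb) break 4
Breakpoint 1 at /path/to/my_script.py:4
(Pdb) continue
> /path/to/my_script.py(4)foo()
-> raise Exception("Argh!")  # 4
(Pdb)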
I don't know whether MicroPython has compile() and exec() built in.
But when the embedded Python has them, and when the MCU has enough RAM, then I do the following:
1. Send a line to the embedded shell that starts building a variable holding a multiline string: _code = """\
2. Send the code you wish to have executed (line by line, or however you like).
3. Close the multiline string with """.
4. Send an exec command to run the transferred code stored in that variable on the MCU, and pick up the output.
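A rough PC-side sketch of those steps; send_line here is just a stand-in for whatever function writes one line to the board's REPL over serial, and no escaping of quotes inside the transferred code is handled:

def run_remotely(send_line, script_lines):
    # 1. start building a multiline string variable on the MCU
    send_line('_code = """\\')
    # 2. transfer the code itself, line by line
    for line in script_lines:
        send_line(line)
    # 3. close the multiline string
    send_line('"""')
    # 4. execute the transferred code on the MCU and pick up the output
    send_line('exec(_code)')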
If your RAM is small and you cannot transfer the whole code at once, you should transfer it in blocks that will be executed, like functions, loops, etc.
If you can compile bytecode for MicroPython on a PC, then you should be able to transfer it and prepare it for execution. This would use a lot less RAM.
But whether you can inject the raw bytecode into the shell and run it depends on how much MicroPython resembles CPython.
And yep, there are differences. As explained in another answer, line-by-line execution can be tricky, so blocks of code are your best bet.
Is there ANY difference ...
Yes.
The code below, for example, works in a .py file, but is a SyntaxError in the interactive interpreter:
x = 1
if x == 1:
    pass
x = 2
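For example, typed line by line into the classic CPython interactive interpreter, the session looks roughly like this:

>>> x = 1
>>> if x == 1:
...     pass
... x = 2
  File "<stdin>", line 3
    x = 2
    ^
SyntaxError: invalid syntax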
There are many other differences, but this alone should be enough to scare you away from the idea.
This code works fine for me; it appends data at the end of the file.
def writeFile(dataFile, nameFile):
    fob = open(nameFile, 'a+')
    fob.write("%s\n" % dataFile)
    fob.close()
But the problem is that when I close the program and run it again later, I find that all the previous data is lost. The process starts writing from the beginning, and there is no old data in the file.
During the run, though, it adds lines at the end of the file perfectly.
I can't understand the problem. Please, someone help.
NB: I am using Ubuntu 10.04 with Python 2.6.
There is nothing wrong with the code you posted here... I tend to agree with the other comments that this file is probably being overwritten elsewhere in your code.
The only suggestion I can think of to test this explicitly (if your use case can tolerate it) is to throw an exit() call in at the end of the function, then open the file externally (e.g. in gedit) and see if the last change took.
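Something along these lines, purely as a temporary test (the exit() call is only there to stop the program right after the write):

def writeFile(dataFile, nameFile):
    fob = open(nameFile, 'a+')
    fob.write("%s\n" % dataFile)
    fob.close()
    exit()  # temporary: stop here so you can inspect the file externally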
As an alternative to the exit, you could run the program in a terminal and include a call to pdb at the end of this function, which would interrupt the program without killing it:
import pdb; pdb.set_trace()
You will then have to hit c to continue the program each time this runs.
If that checks out, do a search for other places this file might be getting opened.
How can I skip over a loop using pdb.set_trace()?
For example,
import pdb; pdb.set_trace()
for i in range(5):
    print(i)
print('Done!')
pdb prompts before the loop. I want to input a command so that the whole loop runs (printing 0 through 4), and then I'd like to be prompted by pdb again before the print('Done!') executes.
Try the until command.
Go to the last line of the loop (with next or n) and then use until or unt. This will take you to the next line, right after the loop.
http://www.doughellmann.com/PyMOTW/pdb/ has a good explanation
You should set a breakpoint after the loop ("break main.py:4" presuming the above lines are in a file called main.py) and then continue ("c").
In the link mentioned by the accepted answer (https://pymotw.com/3/pdb/), I found this section somewhat more helpful:
To let execution run until a specific line, pass the line number to the until command.
Here's an example of how that can work with loops:
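For instance, assuming the snippet from the question is saved as a file with import pdb; pdb.set_trace() on line 1 (the path below is just a placeholder), a session looks roughly like this:

> /path/to/example.py(2)<module>()
-> for i in range(5):
(Pdb) until 4
0
1
2
3
4
> /path/to/example.py(4)<module>()
-> print('Done!')
(Pdb)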
It spares you from two things: having to create extra breakpoints, and having to navigate to the end of a loop (especially when you might already have iterated past the spot, so you couldn't get there without re-running the debugger).
Here are the Python docs on until. By the way, I'm using pdb++ as a drop-in replacement for the standard debugger (hence the formatting), but until works the same in both.
You can set another breakpoint after the loop and continue to it (when debugging) with c:
import pdb; pdb.set_trace()
for i in range(5):
    print(i)
pdb.set_trace()
print('Done!')
That is, if I understood the question correctly.
One possible way of doing this would be:
Once you get your pdb prompt, just hit n (next) 10 times to step out of the loop.
However, I am not aware of a way to jump straight out of a loop in pdb.
You could use r to exit a function, though.