I use the ipython debugger interactively in Spyder. I need to debug with breakpoints across modules i.e. I'd like the debugger to step into a function that is contained within a particular .py file which isn't the .py file I start the debugger on.
At the moment I can't make this work. The debugger only 'sees' the breakpoints (which I manually add before running the debugger) in the current module. But I can't keep every function in this one module, because it would grow too large and it isn't good practice. For testing, I really need to step into these smaller, generic functions that I hope to place in other modules.
Thanks
Hey everyone!
Just moved over to VScode and dealing with some initial transition problems.
I'm using VScode for Python and have been using the interactive window and debugger. For my python interpreter, I've been selecting Python 3.9.7, which is a part of my Anaconda installation.
I've noticed that when I change and save functions in one .py file and then call them from another file, the changes I've made aren't reflected in the output.
It's worth noting that when the changes are made and saved in a file, and the same file is run, the changes WILL be reflected, so it's purely an issue between files. I reload the functions from the file after I make the changes and save them, so it's not an issue of reloading the function.
To provide some context for the photo: I have different functions in the file "Metric_Functions.py", and I'm testing the code using different tests in the file "UnitTestCode.py". However, as I run the tests (reloading the functions and running the cell with the specific test), I noticed that when I made updates in "Metric_Functions", those changes were not reflected in the unit test results.
Any help/experience with this kind of issue/suggestions of where to start to look would be really appreciated! Really inexperienced with VScode, so any help would be awesome.
Thanks!
In IPython and Jupyter, imported modules persist throughout the session. If a module has already been imported, running the import statement again does nothing at all, since the interpreter can see that the module already exists in sys.modules.
For the changes to an external module to be seen in IPython/Jupyter, you need to kill/restart the current instance and then run the import again.
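A quick illustration of the caching behaviour described above (the module name is just a placeholder):
import sys
import mymodule            # first import: the file is executed and cached

import mymodule            # no-op: Python finds 'mymodule' in sys.modules and reuses it
print('mymodule' in sys.modules)   # True for the rest of the session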
I started programming some scripts with the IPython notebook, but now the project is becoming too big for a notebook. Nevertheless, I love executing my stuff in an IPython notebook (load the data only once, inline figures...).
What I want is to program everything in Eclipse but execute it in IPython. I know I can save the notebooks as .py files by starting the notebook with the --script option. But now I want to automate the process the other way around: I want my IPython notebook to reload the code I modify with Eclipse.
Is it possible?
Should I make a program that makes it using the converter?
Thanks!!
I found the solution for manually updating the functions without rerunning the whole .ipynb file. However, I do not know how to make it automated. Here is the solution:
Synchronizing code between jupyter/iPython notebook script and class methods
To cut it short, you need to call the reload function from the importlib module in the cell of interest:
import babs_visualizations
from importlib import reload
reload(babs_visualizations)
Just a little addition: make sure you address the function as module.function. If you previously imported the function with from module import function and then reload the module, the function will not be reloaded. You can put a call to the function inside a notebook cell and rerun it to see how the changes in the module affect the function's output in the notebook.
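Here is a small sketch of that pitfall (babs_visualizations comes from the code above; the function name is hypothetical):
from importlib import reload
import babs_visualizations

reload(babs_visualizations)
babs_visualizations.plot_usage()   # picks up the edited code

from babs_visualizations import plot_usage
reload(babs_visualizations)
plot_usage()                       # still runs the OLD code: the name was bound at import time
# rerun the 'from ... import ...' line after reload() to rebind the name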
I hope this was helpful for someone.
I am using Portable Python 1.1 with Python 2.6.2; PyScripter is 1.9.9.6. I open all the files I am working on with PyScripter. I run my main file and an error shows up in code from one of my imported files. I fix it and run the main file again, but the same error shows up. It is as if the imported file is still the old one, even though PyScripter is correctly saving the files I edit. Restarting PyScripter fixes it, but it is a pain to do that for every bug. I confirmed what is happening by adding a print statement that only showed up after restarting, and then removing it and still seeing the print statement.
You can use reload(imported_module_name) in the interactive shell to reload the module before re-running your script. PyScripter does everything in a single Python instance, which makes debugging easier, but also, as you discovered, makes fixing imported files a bit trickier.
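For example, in the interactive shell (the module name is hypothetical):
import mymodule     # the module you just edited
reload(mymodule)    # reload() is a built-in in Python 2
# now re-run your main script and it will pick up the updated code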
You can also completely reinitialize the Remote Engine from the Run menu to get a fresh interpreter.
How can you save functions/classes you've written in a Python interactive session to a file? Specifically, is there a way to do this in PyDev / Eclipse's interactive session (on a Mac)?
I just started learning python - and am enjoying using the interpreter's interactive session for testing and playing with modules I've written. However, I find myself writing functions in the interpreter, which I think, oh it would be cool to save that to my script files. How do I do this?
I tried:
import pickle
pickle.dump(my_function, open("output.p", "w"))
But it seems to be more of a binary serialization, or at least nothing that I could copy and paste into my code...
Are there ways to see the code behind classes & functions I've defined in the interpreter? And then copy them out of the interpreter?
Update:
Ok, here's what I've learned so far:
I missed the easiest of all - PyDev's interactive session in Eclipse allows you to right-click and save your session. You still have to remove the >>> prompts, but it gets the job done.
IPython is apparently the way to go to do this.
How to save a Python interactive session? has more details.
The best environment for interactive coding sessions has to be IPython, in my opinion. It's built on and extends the basic Python interpreter with a lot of magic, including history. For example, you can issue the command %logstart to dump all subsequent input to a file, which still needs to be edited afterward before it will be a script, but gives you a lot to work with.
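For example, inside an IPython session (the log filename is just an example):
%logstart my_session.py    # everything typed from here on is appended to my_session.py
def area(r):
    return 3.14159 * r ** 2

area(2.0)
%logstop                   # stop logging; tidy the file up afterwards to turn it into a script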
When installing IPython, don't forget pyreadline.
In general, however, it is best to write code in an IDE and then run it. IPython helps here as well. If you write and save the script, then use IPython's %run command to run it, the entire global namespace of the script will be available for inspection in your IPython session. Additionally, you can pass the -d argument to %run to execute the script under the pdb debugger (and %pdb on will drop you into the debugger on any unhandled exception).
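Roughly, in an IPython session (the script name is hypothetical):
%run analysis.py       # run the script; its globals are now available for inspection
%run -d analysis.py    # run it under pdb, stopping at the first line
%pdb on                # from now on, an unhandled exception drops you into the debugger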
If you're more of a straightlaced IDE and debugger kind of guy, then the easiest and best lightweight environment has to be PyScripter.
I think the answer is to change your workflow.
What I do is write my functions in an editor (emacs), and then press a key combination (Ctrl-c Ctrl-e) to send the region of text to the (i)python interpreter.
That way I can save the function if I want, and also play with it in an interpreter.
Emacs is central to how I do it, but I'm sure there must be similar approaches with many editors (vim, gedit, etc) and IDEs.
PS. Finding a good editor is crucial when working with Python. The editor must be able to move blocks of code to the left and right easily, or the whitespace issue becomes too onerous.
I dislike typing blocks of code in the python interpreter because it doesn't allow me to shift blocks easily. You'll like Python even more when you find the right editor.
You can setup a python history file which stores everything you type into the interpreter.
Here's how:
http://docs.python.org/tutorial/interactive.html
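The linked page boils down to something like the following startup file (the path and filename are just examples); point the PYTHONSTARTUP environment variable at it:
# ~/.pythonstartup
import atexit
import os
import readline

histfile = os.path.join(os.path.expanduser("~"), ".python_history")
try:
    readline.read_history_file(histfile)    # restore history from previous sessions
except IOError:
    pass                                    # no history file yet
atexit.register(readline.write_history_file, histfile)  # save history on exit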
I think it can't be done.
Python can perform introspection with the inspect module, but the inspect.getsource function won't work without a source file.
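A quick way to see the limitation for yourself:
import inspect

def double(x):
    return 2 * x

# If double() lives in a .py file this prints its source; if it was typed
# straight into the plain interactive interpreter, inspect.getsource()
# raises an IOError/OSError because there is no source file to read from.
print(inspect.getsource(double))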
I am developing a Python package using a text editor and IPython. Each time I change any of the module code I have to restart the interpreter to test this. This is a pain since the classes I am developing rely on a context that needs to be re-established on each reload.
I am aware of the reload() function, but this appears to be frowned upon (also since it has been relegated from a built-in in Python 3.0) and moreover it rarely works since the modules almost always have multiple references.
My question is - what is the best/accepted way to develop a Python module/package so that I don't have to go through the pain of constantly re-establishing my interpreter context?
One idea I did think of was using the if __name__ == '__main__': trick to run a module directly so the code is not imported. However this leaves a bunch of contextual cruft (specific to my setup) at the bottom of my module files.
Ideas?
A different approach may be to formalise your test driven development, and instead of using the interpreter to test your module, save your tests and run them directly.
You probably know of the various ways to do this with Python. I imagine the simplest way to start in this direction is to copy and paste what you do in the interpreter into a docstring as a doctest, and add the following to the bottom of your module:
if __name__ == "__main__":
    import doctest
    doctest.testmod()
Your informal test will then be repeated every time the module is called directly. This has a number of other benefits. See the doctest docs for more info on writing doctests.
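For example, a doctest pasted in from an interpreter session might end up looking like this (the function and values are made up):
def mean(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> mean([1, 2, 3, 4])
    2.5
    >>> mean([10])
    10.0
    """
    return sum(values) / float(len(values))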
IPython does allow reloads; see the %run magic function in the IPython docs, or, if modules underneath the one you changed have also changed, the recursive dreload() function.
If you have a complex context, is it possible to create it in another module, or to assign it to a global variable that will stay around, since the interpreter is not restarted?
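A rough sketch of the reload side of this in an IPython session (package and module names are hypothetical; in recent IPython versions dreload lives in IPython.lib.deepreload):
%run setup_context.py                   # re-run a setup script; its globals land in the session
from IPython.lib.deepreload import reload as dreload
import mypackage.mymodule
dreload(mypackage.mymodule)             # recursively reloads the module and what it imports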
How about using nose with nosey to run your tests automatically in a separate terminal every time you save your edits to disk? Set up all the state you need in your unit tests.
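A minimal sketch of the kind of test module nose would pick up automatically (all names are hypothetical):
# test_metrics.py
import mypackage

def setup_module():
    # build the expensive context once per test module, not once per interpreter session
    global data
    data = mypackage.load_data("sample.csv")

def test_metric_is_positive():
    assert mypackage.compute_metric(data) > 0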
You could create a Python script that sets up your context and run it with
python -i context-setup.py
-i   When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command. It does not read the $PYTHONSTARTUP file. This can be useful to inspect global variables or a stack trace when a script raises an exception.
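A sketch of what such a context-setup.py could contain (package and function names are hypothetical):
# context-setup.py: build the expensive context once, then stay interactive
import mypackage

data = mypackage.load_data("big_dataset.csv")    # the slow step you only want to run once
model = mypackage.build_model(data)
# launch with:  python -i context-setup.py
# the prompt that follows already has 'data' and 'model' defined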