Hey everyone!
Just moved over to VS Code and dealing with some initial transition problems.
I'm using VS Code for Python and have been using the interactive window and debugger. For my Python interpreter, I've been selecting Python 3.9.7, which is part of my Anaconda installation.
I've noticed that when I change and save my functions in one .py file and then call them from another file, the changes I've made aren't reflected in the output.
It's worth noting that when the changes are made and saved in a file and that same file is run, the changes WILL be reflected, so it's purely an issue between files. I reload the functions from the file after I make and save the changes, so it's not an issue of forgetting to reload.
To provide some context for the photo: I have different functions in the file "Metric_Functions.py", and I'm testing the code with different tests in the file "UnitTestCode.py". However, as I ran the tests (reloading the functions and running the cell with the specific test), I noticed that when I made updates in "Metric_Functions", those changes were not reflected in the unit test results.
Any help, experience with this kind of issue, or suggestions of where to start looking would be really appreciated! I'm really inexperienced with VS Code, so any help would be awesome.
Thanks!
In IPython and Jupyter, imported modules persist throughout the session. If a module has already been imported, running the import statement again does nothing at all, since the interpreter can see that the module already exists in the namespace.
For changes to an external module to be seen in IPython/Jupyter, you need to kill/restart the current instance and then run the import again.
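If restarting the kernel every time is too disruptive, here is a minimal workaround sketch using importlib.reload (Metric_Functions is the module from the question; some_metric is just a placeholder name):

import importlib
import Metric_Functions  # the first import gets cached in sys.modules

# ... edit and save Metric_Functions.py ...

importlib.reload(Metric_Functions)  # re-executes the module's source
Metric_Functions.some_metric()      # now runs the updated code

IPython/Jupyter also ship the autoreload extension (%load_ext autoreload followed by %autoreload 2), which re-imports changed modules automatically before each cell runs.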
Related
So I have a project with multiple regular Python files, and I'm using a JupyterLab notebook as the 'main' file that imports and runs all the rest of the code. But if I make changes to those Python files, the JupyterLab notebook does not automatically pick up the updates, and it takes a long time before the code runs properly with the changes.
The main problem is that I have a text file that I constantly update, and the JupyterLab notebook reads from it, but it takes forever before the changes in the text file are actually noticed and the code runs off them. Is this just a known issue with JupyterLab?
There is no code, so it's difficult to know what is happening here. But how would the Jupyter environment "notice" these changes? You have to re-run the code, and keep in mind that Jupyter keeps variables in memory until the kernel is restarted (Python's garbage collector won't free them while they are still referenced).
I've tried to erase variables with del, but Jupyter always keeps a reference to the old value (I don't know why). For that reason I put my code inside a function's scope; that way the variables die when the function finishes. This is the only way I've found to deal with this problem.
I always try to work with functions, because it's hard to debug code in Jupyter with stale variable values.
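A minimal sketch of that function-scope workaround (the file name and function name are made up for illustration):

def run_analysis(path="data.txt"):  # "data.txt" is a made-up file name
    # Re-read the file on every call and keep the values local,
    # so nothing stale survives once the function returns.
    with open(path) as f:
        values = [float(line) for line in f]
    return sum(values) / len(values)

print(run_analysis())  # rerun this cell after updating data.txt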
I use the ipython debugger interactively in Spyder. I need to debug with breakpoints across modules i.e. I'd like the debugger to step into a function that is contained within a particular .py file which isn't the .py file I start the debugger on.
At the moment, I can't seem to make this work. The debugger only 'sees' the breakpoints (which I manually add before running the debugger) in the current module. But I can't keep every function in this module because it would grow too large, and it's not good practice. For testing, I really need to step into these smaller, generic functions that I hope to place in other modules.
Thanks
I am coming into Python from R, and installed Python 3.5 with Anaconda. Now, PyCharm console has a prompt identical to an iPython Notebook, i.e. instead of >>>, it shows [1] at the command line.
After writing a toy line of code (below) in a .py document and running it from within PyCharm with no errors, I was under the assumption that the function toss(), which was defined in the .py document, would be ready to use in the console. However, this did not seem to be the case. I ended up copying and pasting the pertinent lines of code into the console and hitting enter, and then, finally, the function toss() was accessible to produce random examples of the roll of a die.
Logically, there has to be a smoother way of moving code from a .py file in the Editor to the environment accessible from the Python Console. But this shorter way doesn't seem to be simply running the .py file.
Code:
import random
def toss():
    return random.randint(1, 6)
So how do you make the code in a Python file in the Editor accessible in the local environment?
You need to import it first. Let's say that your function toss() is in a file called foo.py then that means that you can do
from foo import toss
toss()
in your Python Console to use your function. A Python source file is, by definition, a module and you'll need to import it in order to use any functions defined there.
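Equivalently, you can import the whole module and call the function through it (assuming foo.py is importable from the console's working directory):

import foo         # makes the module's namespace available
print(foo.toss())  # call the function through the module name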
I started programming some scripts with the IPython notebook, but now the project is becoming too big for a notebook. Nevertheless, I love executing my stuff in an IPython notebook (load the data only once, online figures...).
What I would want is to program everything with Eclipse but execute it in IPython. I know I can save the notebooks as .py by adding the --script option at the beginning. But now I want to automate the process the other way around: I want my IPython notebook to reload the code I modify with Eclipse.
Is it possible?
Should I make a program that makes it using the converter?
Thanks!!
I found the solution for manually updating the functions without rerunning the whole .ipynb file. However, I do not know how to make it automated. Here is the solution:
Synchronizing code between jupyter/iPython notebook script and class methods
To cut it short, you need to call the reload function from the importlib module in the cell of interest:
import babs_visualizations
from importlib import reload
reload(babs_visualizations)
Just a little addition: make sure that you are addressing the function in the form module.function. If you previously imported the function with from module import function and then reloaded the module, the function will not be reloaded. You can put the function call inside the notebook cell and rerun it to see how the changes in the module affected the function output in the notebook.
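Here is a small sketch of that pitfall (babs_visualizations is the module from above; plot_summary is a made-up function name):

from importlib import reload

import babs_visualizations
from babs_visualizations import plot_summary  # plot_summary is hypothetical

# ... edit and save babs_visualizations.py ...

reload(babs_visualizations)
babs_visualizations.plot_summary()  # uses the reloaded code
plot_summary()                      # still the old object bound at import time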
I hope this was helpful for someone.
I am using Portable Python 1.1 with Python 2.6.2. PyScripter is 1.9.9.6. I open all the files I am working on with PyScripter. So, I run my main file and an error shows up in code from one of my imported files. I fix it and run the main file again, but the same error shows up. It is as if the imported file were still the old one, even though PyScripter is correctly saving the files I edit. Restarting PyScripter fixes it, but it is a pain to do that for every bug. I confirmed this is what happens by adding a print statement that only showed up after restarting, and then removing it and still seeing the print statement.
You can use reload(imported_module_name) in the interactive shell to reload the module before re-running your script. PyScripter does everything in a single Python instance, which makes debugging easier, but also, as you discovered, makes fixing imported files a bit trickier.
You can also completely reinitialize the Remote Engine from the Run menu to get a fresh interpreter.
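For example, in the interactive shell (Python 2, where reload is a builtin; helpers stands in for whichever module you edited):

import helpers   # the module already imported by your main script
reload(helpers)  # re-executes helpers.py so the fix is picked up
# now re-run your main file

In Python 3 the same thing is done with importlib.reload(helpers).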