Editing packages not taking effect in Jupyter

I forked a repo on GitHub, cloned it to my local machine, and opened all the files in Jupyter. Now I have access to all the .py files and all the notebooks.
Let's say I want to add a new function to the package as follows:
def teste(self):
    return self
I do this by writing the function in the here.py file. To make sure it works, I test it in a notebook by calling it in a cell and executing that cell:
print(here.teste('worked'))
However, this doesn't work. My guess is that I have not updated the package itself, so the function teste() does not exist. How do I commit this change to the package locally (without using a pull request)?

Most likely you need to restart your Jupyter kernel for the changes to take effect.
Git is merely a versioning system; it does not care what Python does and does not influence how Python works.
Python loads your package when it is imported (import my_package as mp). When you make changes to that package while Python is running, it is not aware of those changes. If you try to re-import, Python will merely check whether the package is already imported (it is) and do nothing, so the changes still do not take effect. Only when you restart the kernel and import the package again will they take effect. You can also re-import a package with the following (Python 3.4 and greater):
import importlib
importlib.reload(package)  # re-executes the already-imported module's code in place
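If restarting the kernel every time is too disruptive, IPython also ships an autoreload extension that re-imports changed modules before each cell runs. A minimal sketch (this is the stock IPython extension, enabled per session):
%load_ext autoreload
%autoreload 2
import my_package as mp  # edits to my_package's .py files are now picked up before each cell runs
Autoreload has known limitations (e.g., changes to class definitions of already-created objects can behave oddly), so a kernel restart remains the most reliable option.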

Related

`pipreqs` generating blank requirements.txt in Docker Container

I am using Python in a Jupyter Lab notebook in a Docker container. I have the following code in one cell:
import numpy as np
import os
import pandas as pd
Then I run the following cell:
!pipreqs /app/loaded_reqs
and get:
INFO: Successfully saved requirements file in /app/loaded_reqs/requirements.txt
But when I open the requirements.txt, it shows up empty/blank. I expected numpy, os and pandas to be in this requirements.txt file. Why might it not be working?
According to this Medium post by Iván Lengyel, pipreqs doesn't support Jupyter notebooks. (This issue in the pipreqs repo, open since 2016, convinces me of the veracity of that assertion. Nicely, the issue post also suggests the solution I had already found when searching the terms 'pipreqs jupyter' on Google.) Plus, importantly, you generally don't run tools that act on notebook files from inside the notebook you are trying to act on. (Or at least it is something to always watch out for, [or test if possible], similar in a way to avoiding iterating on a list you are modifying inside the loop.)
Solution -- use pipreqsnb instead:
In that Medium post saying pipreqs doesn't work with notebooks, Iván Lengyel proffers a wrapper for it that does work with notebooks. So in a terminal outside the notebook, but in the same environment (inside the Docker container, in your case), install pipreqsnb via pip install pipreqsnb. Then run it pointing at your specific notebook file. I'll give an example in the next paragraph.
I just tried it, and it worked in temporary sessions launched from here by pressing the 'launch binder' badge there. When the session came up, I opened a terminal and ran pip install pipreqsnb and then pipreqsnb index.ipynb. That first time, I saw requirements.txt get made with details on the versions of matplotlib, numpy, scipy, and seaborn. To fully test it was working, I opened index.ipynb in the running session, added a cell with import pandas as pd typed in it, and saved the notebook. Then I shut down the kernel and, over in the terminal, ran pipreqsnb index.ipynb again. When I re-examined the requirements.txt file, pandas had been added, with details about the version.
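For reference, the whole sequence from that test session was just (run in a terminal in the same environment as the notebook kernel; in your case, inside the Docker container):
pip install pipreqsnb
pipreqsnb index.ipynb
Substitute the path to your own notebook file; pipreqsnb mirrors pipreqs' arguments, so pointing it at a directory containing the notebook should also work.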
More about why !pipreqs /app/loaded_reqs may have failed:
I had the idea that maybe you needed to save the notebook first after adding the cell with the import statements? However, never mind: that still won't help, because, as stated here and further confirmed in the pipreqs issues list, pipreqs doesn't support Jupyter notebooks.
Also, keep in mind that using an exclamation point in a notebook to run a command in the shell doesn't mean that shell will be in the same environment as the kernel of the notebook; see the second paragraph here for more perspective on that. (This can be useful to understand for future things, though, such as why you want to use the %pip or %conda magic commands when installing from inside a notebook, see here, and not put an exclamation point in front of those commands in modern Jupyter.)
Or, inside the notebook at the end, I'd suggest trying %watermark --iversions; see watermark. You could then write some code to generate the requirements.txt from that. (Note, though, that I had seen there was a bug in watermark related to some packages imported with from X import Y, see here.)
Or I'd suggest trying %pip freeze inside the notebook. That gives you the full environment information, though, not just what the notebook needs.
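If you want to go the 'make some code' route, here is a minimal sketch that pins whatever is currently imported in the kernel, using only the standard library. Caveat: it assumes the import name matches the distribution name, which isn't always true (e.g., cv2 vs. opencv-python):
import sys
from importlib.metadata import version, PackageNotFoundError

# Collect top-level names of everything imported so far in this kernel
names = sorted({m.split('.')[0] for m in sys.modules if not m.startswith('_')})
lines = []
for name in names:
    try:
        lines.append(f"{name}=={version(name)}")
    except PackageNotFoundError:
        pass  # stdlib and local modules have no installed distribution, so they are skipped
with open("requirements.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
This also neatly explains why os was never going to show up in requirements.txt: it is part of the standard library, not a pip-installable package.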

Cannot find module "qfi" for running JdeRobot drone_cat_mouse exercise from source

I want to run JdeRobot drone_cat_mouse on my Ubuntu 20.04 machine. I'm using ROS Noetic and have faithfully followed these installation instructions. Everything they told me to test was working properly.
When I first ran roslaunch drone_cat_mouse.launch, there was an import error for teleopWidget and sensorsWidget, which I fixed by using relative imports. Then I got the error No module named qfi.
Unlike teleopWidget and sensorsWidget, I couldn't find the qfi module in the JdeRobot/drones source code. So I googled it, and the only relevant result that popped up was this, which led to this link. They said to:
sudo touch /usr/lib/python2.7/dist-packages/qfi/__init__.py
But I ran that command and this happened!
Not even pip has a "qfi" module!
So I thought to check JdeRobot's entire set of repositories. It turns out qfi was in JdeRobot/base, and that repo is not maintained anymore!
After further digging, there was this issue, which basically tells us to forget about it and move to the web release! But I can't; circumstances force me to use the source code option (the deliverables are drone_cat_mouse.world and my_solution.py; it's impossible for me to get the former in the Docker web version, and the latter's format differs between the source code version and the web version).
In a nutshell, how do I fix this qfi module problem so that I can run the exercises from source like these people?
I'm just stupid, as usual. All I needed to do was clone https://github.com/JdeRobot/ThirdParty, get the qfi module, copy it to
~/catkin_ws/src/drones/rqt_drone_teleop/src/rqt_vel_teleop/, and replace all qfi imports with their relative import versions. All common sense.
No errors in the terminal, and Gazebo runs, but somehow the rqt widget for drone vision never appears.
Forget it, I'm giving up on this dumpster fire of a program.
Edit: I did another fresh install, followed the steps, and this time noticed the troubleshooting section for qfi (which required qmake), but got the same end result.
If you're trying to launch drone_cat_mouse, there is an issue with the namespace of the RQT widget that occurs on launch.
Namely, the topics that exist for drone_cat_mouse are prefixed by cat/ or mouse/, but RQT will try to access these topics without the prefix and run into an error. Alternatively, since you have a local install, you can try to run the code manually by running
python my_solution.py
Just make sure to change the place where the DroneWrapper class is instantiated, in the following manner:
HAL = DroneWrapper('drone', 'cat/')
Here 'drone' is the name of the node you are creating and 'cat/' is the namespace given to the DroneWrapper class.
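Putting the pieces together, the top of my_solution.py would look something like this (the import line is an assumption on my part; keep whatever import your exercise template already uses):
# Hypothetical import; keep the one your exercise template already has
from drone_wrapper import DroneWrapper

# 'drone' is the node name; 'cat/' is the topic namespace prefix
HAL = DroneWrapper('drone', 'cat/')
The mouse side would presumably take 'mouse/' as its namespace instead.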

VS Code / Pylance / Pylint Cannot resolve import

The Summary
I have a Python import that works when run from the VS Code terminal, but that VS Code's editor is giving warnings about. Also, "Go to Definition" doesn't work.
The Problem
I have created a Docker container from the image tensorflow/tensorflow:1.15.2-py3, then attached to it using VS Code's "Remote - Containers" extension. Then I created the following file in the container.
main.py:
import tensorflow.compat.v1 as tf
print(tf.__version__)
This runs fine in the VS Code terminal, but the Editor and the Problems pane both give me an unresolved import 'tensorflow.compat' warning. Also "Go to Definition" doesn't work on tf.__version__.
I'm using several extensions but I believe the relevant ones are the Microsoft Python extension (installed in the container), as well as the Remote - Containers extension, and now the Pylance extension (installed in the container).
The Things I've Tried
I've tried this with the default pylint, and then also after installing Pylance, with similar results. I've also seen some docs about similar issues, but they related to setting the correct source folder location for modules that were part of a project. In contrast, my own code within my project seems to work fine with imports and go-to-definition; it's external libraries that don't seem to work.
Also, for the sake of this minimal example, I've attached to the container as root, so I am guessing it's not an issue of elevated permissions.
I've also tried disabling all the extensions except the following, but got the same results:
Remote - Containers (local)
Remote - WSL (local)
Python (on container)
Jupyter (on container, required by Python for some reason)
All the extensions above are on the latest versions.
I've also fiddled around with setting python.autocomplete.extraPaths, but I'm not sure what the right path is. It also seems like the wrong thing to have to add libraries to the path that are installed in the global python installation, especially since I'm not using a virtual environment (it being in a docker container and all).
The Question
How do I fix VS Code so that it recognizes this import and I can use "Go to Definition" to explore these tensorflow functions/classes/etc?
tldr;
TensorFlow defines some of its modules in a way that pylint & pylance aren't able to recognize. These errors don't necessarily indicate an incorrect setup.
To Fix:
pylint: The pylint warnings can be safely ignored.
IntelliSense: The best way I know of at the moment to fix IntelliSense is to replace the imports with the modules they are aliasing (found by importing the alias in a REPL as x, then running help(x)). Because the target of the alias in my case is an internal name, you probably don't want to check these changes into source control. Not ideal.
Details
Regarding the linting: it seems that tensorflow defines its modules in a way that the tools can't understand. Also, it appears that the package is an alias of some kind for another package. For example:
import tensorflow.compat.v1 as tf
tf.estimator.RunConfig()
The above code gives the pylint warning and breaks IntelliSense. But if you manually import the above in a REPL and run help(tf), it shows you the package below, which you can use instead:
import tensorflow_core._api.v1.compat.v1 as tf
tf.estimator.RunConfig()
This second example does not cause the pylint warning, and the IntelliSense features (Go to Definition, Ctrl+Click, etc.) work with it.
However, based on the _api, it looks like that second package name is an internal namespace, so I'm guessing it is probably best to only use this internal name for local debugging.
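To see what an alias actually points at, this is all it takes in a REPL (the exact target name may differ across TensorFlow versions; tensorflow_core._api.v1.compat.v1 is what it resolved to in my 1.15 setup):
import tensorflow.compat.v1 as tf
print(tf.__name__)  # prints the real module name the alias resolves to
help(tf)            # the first line of the help output shows the same name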
Confirmation/Tickets
pylint: I've found a ticket about pylint having issues with a couple tensorflow imports that looks related.
Intellisense: I've opened a ticket with pylance.
In my case, I was trying to
import pandas as pd
but I got the error
"pd" is not accessedPylance (module) pd
So what I did was reload the Python IntelliSense (Pylance) extension, and that solved my issue.
I had the same problem but with all kinds of packages.
My solution was to go to the VS Code settings, search for "python.analysis.extraPaths", and add the path to your site-packages.
In my case, I added C:\Code\Python39\Lib\site-packages, and now it's working fine.
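For reference, that setting lands in settings.json and would look something like this (the path is machine-specific; use your own interpreter's site-packages):
{
    "python.analysis.extraPaths": [
        "C:\\Code\\Python39\\Lib\\site-packages"
    ]
}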
What usually solves the Pylance issues for me is pointing my Python interpreter to the virtualenv one.
Open the command palette: Ctrl + Shift + P
Type: Python: Select Interpreter
It will show a list of all the Python interpreters it detects.
Select Enter interpreter path
Type in the path to your local venv/bin folder, or click Find to navigate using the file explorer.
Your path should look something like:
venv/bin/python3.9
I changed "import tensorflow as tf" to "from tensorflow import compat as tf".
It'll even work for tf.gfile.GFile()

How to update source files for pytest?

pytest appears to be using old source code and failing tests because of it. I'm not sure how to get it to pick up the new code.
Test code:
from nba_stats import league
class TestLeaders():
    def test_default(self):
        leaders = league.Leaders()
        print(leaders)
Source code (league.py):
from nba_stats.nba_api import NbaAPI
from nba_stats import constants
class Leaders:
    ...
When I run pytest on my parent directory, I get an error that refers to an old import statement.
_____________________________ ERROR collecting test/test_league.py ______________________________
ImportError while importing test module '/home/mfb/src/nba_stats/test/test_league.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_league.py:1: in <module>
from nba_stats import league
../../../.virtualenvs/nba_stats_dev/lib/python3.6/site-packages/nba_stats/league.py:1: in <module>
from nba_stats import _api_scrape, _get_json
E ImportError: cannot import name '_api_scrape'
I tried resetting my virtual environment and also reinstalling my package via pip. What do I need to do to tell it to see the new import statement, and why is this happening?
Edit: Deleting my virtual environment completely and then creating a new one seemed to fix it, but this recurs with any further source code changes. Surely there must be a way to avoid resetting my virtual environment each time?
Looks like you installed that package (possibly as a dependency of something else, if not directly) and also have it cloned locally for development. You can look into local editable installs (https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs), but personally, I prefer to make the test refer directly to the package it is run against, since then it can be used as-is right after cloning. Do this by modifying sys.path in your test_league.py. I.e., assuming a structure with the python code under python/nba_stats in the parent directory of `test`, put
import os
import sys

sys.path = [os.path.join(os.pardir, 'python')] + sys.path
at the top of test_league.py. This puts your local package up front, and import will consider it first.
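For completeness, the editable-install route mentioned above would look something like this (assuming the cloned repo has a setup.py or pyproject.toml at its root; the path here is taken from your traceback):
cd /home/mfb/src/nba_stats
python -m pip install -e .  # links the installed package to the working tree instead of copying it
With an editable install, source edits take effect immediately, so the stale-copy problem goes away entirely.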
EDIT:
Since you tried it and it still did not work (please do make sure that the snippet above points to the local python package as in your actual structure; the above is just a common layout, but yours may differ), here is how you can see which directories are considered, in order, and which are eventually selected:
python -vv -m pytest -svx
You will be able to see whether there are spurious directories in sys.path, whether the directory tried (as in the snippet above) matches as expected, any leftover .pyc files that get picked up, etc.
EDITv2: Since you stated that python -m pytest works but pytest does not, have a look at where that pytest executable is coming from with which pytest. Likely it's a system one that refers to a different python than the one in your virtualenv. To see which python it picks up, do:
cat `which pytest`
and look at the top line (the shebang).
If that is not the same as what which python gives you (with your desired virtualenv activated), you may have to install pytest for that virtualenv (python -m pip install pytest).

Anki python scripting: Multiple modules missing

I'm trying to follow the tutorial at https://www.juliensobczak.com/tell/2016/12/26/anki-scripting.html
I've gotten the basic "listcards.py" script set up, having cloned anki and installed the virtual environment as well as the requirements from the anki/requirements.txt file.
However, when I run the "listcards.py" script from the tutorial, I get a notice that the module 'anki.sched' is not found ("ModuleNotFoundError: No module named 'anki.sched'").
While I could pip install each package, I have a feeling there must be an underlying reason these packages are missing. Is there a way to have python automatically pull in the named module even if it isn't pre-installed, in the manner of how node.js installs referenced dependencies automatically, so that I won't have to manually install every missing package?
I ran into this same problem. anki.sched is a package contained within the anki repository you cloned, so it does exist on your machine. You won't be able to install it using pip.
The solution for me was to write the absolute path of the cloned anki repository in sys.path.append rather than a relative path. For example, if your script lives in /Users/anki/scripts and your cloned anki repository lives in /Users/anki/anki, write this in your script before importing any anki modules:
sys.path.append("/Users/anki/anki")
rather than this (which is what is provided in the tutorial):
sys.path.append("../anki")
I'm not 100% sure why the latter fails, but a relative path is resolved against the current working directory rather than the script's location, so Python ends up looking for the anki.sched module in the wrong place.
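If you'd rather not hard-code the absolute path, here is a small sketch that derives it from the script's own location, so the current working directory no longer matters (it assumes the scripts/ and anki/ directories are siblings, as in the tutorial):
import os
import sys

# Resolve ../anki relative to this script file, not relative to wherever python was launched from
repo = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir, "anki")
sys.path.append(repo)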
What I did, and I know this is probably not the correct way, was to simply wipe out the root anki folder and copy all the application scripts into it; then the imports worked.
