import pandas as pd
kulfi=['Chocolate','Mango','Vanilla','Kesar']
pd.Series(kulfi)
When I run this program in PyCharm it doesn't show any output in the console, whereas it shows output in Google Colab.
Please note that I have already installed python-tk with pip3 for graphical output (if needed).
Try adding print(pd.Series(kulfi)). They are different environments: Google Colab has an interactive, Jupyter-notebook-like interface, while PyCharm is an IDE.
This is the expected behavior, since you are not using the print() function.
Use the following approach to get the desired result in PyCharm or any other console.
import pandas as pd
kulfi=['Chocolate','Mango','Vanilla','Kesar']
print(pd.Series(kulfi))
This can happen in any IDE. Google Colab, like any Jupyter-style notebook, already does a lot for you: it automatically displays the value of the last expression in a cell, so it knows you want the output. In an IDE you have to call print(pd.Series(kulfi)) explicitly to get everything going.
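To make the difference concrete, here is a minimal sketch of the same code as it has to look in a plain .py file; in a notebook, a bare `s` as the last line of a cell would be echoed automatically:

```python
import pandas as pd

kulfi = ['Chocolate', 'Mango', 'Vanilla', 'Kesar']
s = pd.Series(kulfi)

s         # bare expression: a script silently discards this value
print(s)  # explicit print works everywhere: script, PyCharm, or notebook
```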
Related
I am using Python in a Jupyter Lab notebook in a Docker container. I have the following code in one cell:
import numpy as np
import os
import pandas as pd
Then I run the following cell:
!pipreqs /app/loaded_reqs
and get:
INFO: Successfully saved requirements file in /app/loaded_reqs/requirements.txt
But when I open the requirements.txt, it shows up empty/blank. I expected numpy, os and pandas to be in this requirements.txt file. Why might it not be working?
According to this Medium post by Iván Lengyel, pipreqs doesn't support Jupyter notebooks. (This issue in the pipreqs repo, open since 2016, convinces me of the veracity of that assertion. Conveniently, the issue post also suggests the solution I had already found when searching the terms 'pipreqs jupyter' on Google.) Plus, importantly, you generally don't use tools that act on notebook files inside the notebook you are trying to use. (Or at least it is something to always watch out for, or test if possible; it's similar in a way to avoiding iterating over a list you are modifying inside the loop.)
Solution -- use pipreqsnb instead:
In that Medium post saying pipreqs doesn't work with notebooks, Iván Lengyel offers a wrapper for it that does work with notebooks. So in the terminal outside the notebook, but in the same environment (inside the Docker container, in your case), install pipreqsnb via pip install pipreqsnb. Then run it, pointing it at your specific notebook file. I'll give an example in the next paragraph.
I just tried it, and it worked in temporary sessions launched from here by pressing the 'launch binder' badge there. When the session came up, I opened a terminal and ran pip install pipreqsnb and then pipreqsnb index.ipynb. That first time, I saw requirements.txt get made with details on the versions of matplotlib, numpy, scipy, and seaborn. To fully test it was working, I opened index.ipynb in the running session, added a cell with import pandas as pd typed in it, and saved the notebook. Then I shut down the kernel and, over in the terminal, ran pipreqsnb index.ipynb again. When I re-examined the requirements.txt file, pandas had been added with details about its version.
More about maybe why !pipreqs /app/loaded_reqs failed:
I had the idea that maybe you needed to save the notebook first after adding the cell with the import statements? However, never mind: that still won't help, because, as stated here and further confirmed at the pipreqs issues list, pipreqs doesn't support Jupyter notebooks.
Also, keep in mind that using an exclamation point in a notebook to run a command in the shell doesn't mean that shell will be in the same environment as the kernel of the notebook; see the second paragraph here for more perspective on that. (This can be useful to understand for future things, though, such as why you want to use the %pip or %conda magic commands when installing from inside a notebook, see here, and not put an exclamation point in front of that command in modern Jupyter.)
Or, inside the notebook at the end, I'd suggest trying %watermark --iversions; see watermark. Then make some code to generate the requirements.txt from that. (Also, I had seen there was a bug in that related to some packages imported with from X import Y, see here.)
Or I'd suggest trying %pip freeze inside the notebook for the full environment information. That covers everything installed, though, not just what the file needs.
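If it helps, the %pip freeze idea can also be approximated in plain Python with importlib.metadata. This is a rough sketch (the output filename requirements-full.txt is my own choice), and like pip freeze it pins the whole environment, not just the notebook's imports:

```python
from importlib import metadata

# Collect every installed distribution as a "name==version" pin.
lines = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
)

# Write them out in requirements.txt format.
with open("requirements-full.txt", "w") as f:
    f.write("\n".join(lines))

print(f"wrote {len(lines)} pinned requirements")
```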
TLDR
How can one know the syntax differences between a Jupyter notebook and "normal" Python (i.e., code in a .py file run in a terminal with python myfile.py)? Specifically for Pillow's PIL.Image, e.g.
PIL.Image.open('/path2file.jpg')
why does this display (show) an image in a Jupyter notebook but not in "normal" Python?
Background
I was running through the image classification tutorial for TensorFlow and started copy/pasting code snippets into PyCharm (as I don't use Jupyter for development), and noted a few key differences between the Jupyter notebook code and what was necessary within PyCharm (or a terminal, for that matter).
The two main differences I noticed were in the line
PIL.Image.open(str(roses[0]))
First off, the import statement:
import PIL
is the only thing specified in the Jupyter notebook before using the Image.open() method. In PyCharm the Image class will not resolve unless I import:
from PIL import Image
Does this imply that Jupyter notebooks are importing ALL classes from a package, similar to from PIL import *? I thought this was bad practice?
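One way to check what is going on (a sketch, assuming Pillow is installed): plain `import PIL` does not itself load the `Image` submodule. `PIL.Image` only resolves once something imports the submodule, here an explicit `import PIL.Image`, but in the tutorial most likely one of TensorFlow's own imports had already done so. So the notebook is not doing a wildcard import; an earlier import had simply already pulled in the submodule.

```python
import sys
import PIL  # imports only the PIL package itself, not its submodules

import PIL.Image  # the explicit submodule import; `from PIL import Image`
                  # in PyCharm triggers the same thing

# Once the submodule has been imported (by this file or by another library
# such as TensorFlow), it is registered in sys.modules and reachable:
print("PIL.Image" in sys.modules)
```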
Also in the jupyter notebook I notice that the rose image is displayed as suggested by the tutorial upon
PIL.Image.open(str(roses[0]))
whereas in PyCharm or a console it only loads the object into memory and shows nothing.
PIL.Image.open(str(roses[0])).show()
is necessary to actually show the image on screen, which agrees with what is specified by the Pillow docs.
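A self-contained sketch of the difference (it builds a small image in memory instead of loading roses[0], so it runs anywhere Pillow is installed):

```python
from PIL import Image

# Stand-in for Image.open(str(roses[0])): a 64x64 red image built in memory.
img = Image.new("RGB", (64, 64), color="red")

img  # bare expression: a script discards this; a Jupyter cell would render the image inline
# img.show()  # in a script, this opens the OS image viewer
#             # (left commented so the snippet runs headlessly)
print(img.size, img.mode)
```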
So my overall question is: how do you know how to code for a Jupyter notebook as opposed to "normal" Python code in a console/terminal/.py file/PyCharm? It seems Jupyter is doing things beyond what the code calls for, so how would I, as a programmer, know ahead of time that I don't have to call the .show() method in a Jupyter notebook, without trial and error?
Note, I'm sure I could read through all the docs on Jupyter and become an expert to answer my own question, but if someone knows of a single part of the docs that covers these kinds of differences, I'd very much appreciate a summary, as I'm not interested in using Jupyter regularly; I just want to know the major differences so I can port things to "real" Python from examples like the TensorFlow one given above.
Because Jupyter provides a feature to run code and view the output in the notebook itself: the last expression in a cell is displayed automatically through IPython's display system. That is a feature of Jupyter notebooks, and it makes it very easy to see the commands and their output together; a plain script only shows what you explicitly print() or .show().
I'm fairly new to IDEs and I'm trying to take courses in Python. No matter what I try, I cannot successfully run a Python script that has import pandas and import numpy in it in either Visual Studio Code or Eclipse (running on Windows 10). I have Python 3.8 installed, and when I try running those commands in the shell it works fine. I suspect that when I execute an actual Python script instead of using the console, it might be using a different interpreter; I only get errors when I try doing this, saying numpy is not defined. I also get the error "cannot import name 'numpy' from partially initialized module 'pandas' (most likely due to a circular import)" when I specify "from pandas import numpy" rather than "from pandas import *".
I am very frustrated and don't know how to fix this problem. I've tried searching for help but not having a programming background, I don't know where to go to resolve this or how.
I also cannot get pip or pip3 to work at all to install packages. Those commands don't get recognized.
Please help!
I recommend using Jupyter Notebooks or PyCharm (an IDE). Both are very useful for learning Python and working with data: data manipulation and data visualization.
PyCharm knows everything about your code. Rely on it for intelligent code completion, on-the-fly error checking, quick-fixes, and easy project navigation.
Meanwhile, Jupyter Notebooks can run code cell by cell, rerun specific cells after making changes, and their inline output is very useful for debugging and visualizations. You can get Jupyter from https://jupyter.org.
Zeppelin Notebooks can also serve as an alternative.
I am running a python notebook in VS Code (see image). It runs fine but when I try to inspect a dataframe with the Data Viewer I get:
"Python package 'pandas' is required for viewing data."
The package is installed; otherwise, the code would not work and the data frames would not be present in the variable panel. When I click on "Install" I get: "Error: All data science packages require an interpreter be passed in".
I only have two environments, Anaconda and one created by VS Code. I have tried selecting either one and nothing changes; the code runs on both and I get the same errors on both.
Any ideas on how to fix this problem?
EDIT: The previous question Viewing data in the VSCode variable explorer requires pandas does not solve my issue. As mentioned above I have selected different environments without fixing it.
EDIT 2: Updating pandas did not fix it either. It was only solved by updating Anaconda, as suggested by Mrinal Roy.
This answer on a similar issue Viewing data in the VSCode variable explorer requires pandas mentioned it was fixed around a year ago.
It seems that the version.release string of pandas I was using includes characters that are not correctly parsed by the extension. They mentioned that it was going to be addressed yesterday in the Insider version. Looking forward to validating.
So, you probably have an older version of Anaconda or VS Code. Otherwise, what version of pandas do you have in both conda environments? The VS Code Data Viewer requires pandas 0.20 or later; try upgrading it and related packages to the latest and check.
I just installed IPython on a remote desktop at work. I had to create a shortcut on my desktop to connect to IPython because the remote desktop does not have internet access. I am able to successfully open the IPython notebook. However, when I try to import pandas
import pandas as pd
I get this error that I have never seen before
The history saving thread hit an unexpected error (OperationalError('database or disk is full',)).History will not be written to the database.
Does this error relate to how it was installed on the remote desktop?
I suffered from this problem for a long time. My dirty fix was to simply restart the kernel and go about my work. However, I did find a way to eliminate it for good. This question seems to have mixed answers for different users, so I'll try to list them all, based on answers elsewhere (all links at the end).
The issue seems to be caused by a certain nbsignatures.db file, and we simply need to remove it. You may find the file in any one of these locations:
~/.local/share/jupyter/nbsignatures.db (I found mine here)
~/.ipython/profile_default/security/nbsignatures.db
~/Library/Jupyter/nbsignatures.db
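If you want to script the cleanup, here is a minimal sketch that walks the candidate locations above (the paths assume default Jupyter/IPython setups; adjust as needed):

```python
from pathlib import Path

# The usual homes for nbsignatures.db, per the list above.
candidates = [
    Path.home() / ".local/share/jupyter/nbsignatures.db",
    Path.home() / ".ipython/profile_default/security/nbsignatures.db",
    Path.home() / "Library/Jupyter/nbsignatures.db",
]

for path in candidates:
    if path.exists():
        path.unlink()  # delete the signatures database; Jupyter recreates it
        print(f"removed {path}")
    else:
        print(f"not found: {path}")
```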
All links:
https://github.com/ipython/ipython/issues/9293
IPython Notebook error: Error loading notebook