I am a newbie to Python and have searched for two days but could not find any solution. Please help me.
I have two Python files (newkmeans.py, newbisecting.py) and one IPython/Jupyter notebook. I am running newbisecting.py from the notebook by calling it as %run newbisecting.py, which in turn calls newkmeans.py. newkmeans.py generates a few variables, for example a list named list1. I want to access this list1 in the Jupyter notebook. How can I do this? Please help me out.
I have tried using from newkmeans import list1 in the Jupyter notebook but could not get any results.
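A minimal sketch of how this is usually handled, assuming %run is the magic being used and that newkmeans.py defines list1 at module level (everything beyond the file and variable names from the question is an assumption):

# Run the driver script inside the notebook's own namespace so that any
# variables it defines become available in later cells.
%run -i newbisecting.py

# Alternatively, import the module and read the variable as an attribute.
# This only works if newkmeans.py actually builds list1 when it is imported.
import newkmeans
print(newkmeans.list1)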
I'm starting to work more with Jupyter notebooks, and am really starting to like them. However, I find them difficult to use with my particular setup.
I have a workstation for running all the notebooks, but for a large part of my day I'm on-the-go with a space-constrained laptop with no power outlets. I'd like to be able to edit (but not run) these notebooks without installing and running the full Jupyter server backend, which I imagine would suck up a lot of power.
My question is: Is it possible for me to edit (and not run) notebooks without running the Jupyter server?
You could use one of the following options:
1. ipynb-py-convert
With this module you can do a conversion from .py to .ipynb and vice-versa:
ipynb-py-convert ~/name_of_notebook.ipynb ~/name_of_notebook.py
where, according to the documentation, the cells are left as they are. To get back a Jupyter notebook:
ipynb-py-convert ~/name_of_notebook.py ~/name_of_notebook.ipynb
2. IPython nbconvert
Alternatively, you can convert to .py when you want to work on the notebook in an editor like VS Code or Sublime Text, after you have downloaded your .ipynb file, using ipython:
ipython nbconvert --to python name_of_your_notebook.ipynb
As I was asking this question, I had opened the notebook locally in Visual Studio Code, but the preview was just the raw text representation of the notebook, so I had assumed that it needed the backend to run.
However, I was about to press submit on the question when I checked back in on it, and the notebook showed up just fine. So one solution is to open it in VS Code and wait a little bit.
I am having an issue with displaying a dataframe properly in the Jupyter Notebook that is integrated into PyCharm.
Here is what I am seeing:
This happens when I run print(df). I know that in Jupyter Notebook you can simply write df and it should output the dataframe.
Ideally, I'd like the dataframe to be output as a table, as I have seen this before in Jupyter. How can I configure this?
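A minimal sketch of what usually produces the table rendering, assuming df is a pandas DataFrame and the cell runs in a Jupyter/IPython kernel (the sample data below is purely hypothetical):

import pandas as pd
from IPython.display import display

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})  # hypothetical example data

# print(df) always uses the plain-text repr; display(df), or leaving df as the
# last expression of a cell, asks the frontend for the rich HTML table instead.
display(df)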
I'm trying to import a Python script as a module into one of my Notebooks, but the import statement and the importlib module do not work in Jupyter Notebook the way they do in a standard Python file and terminal.
I've come across the %load command, but as far as I understand and have seen in my own Notebook, it only loads the contents of the script into the current cell, is invisible to the other cells, and it also outputs the code of the script being imported for everyone to see.
I want to be able to load the script and make it available to all cells, as well as hide the code for the purpose of encapsulation and keeping the Notebook neat and tidy - focusing only on the code relevant to the topic of the Notebook. Is this possible?
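A minimal sketch of the usual approach, assuming the script sits next to the notebook as my_script.py and defines a function some_function (both names are hypothetical):

import sys
import importlib

# Make the script's directory importable (the notebook's working directory
# usually is already, so this line is often unnecessary).
sys.path.append('.')

import my_script                     # runs my_script.py once; its code stays out of the notebook
result = my_script.some_function()   # hypothetical function defined in the script

# After editing my_script.py, pick up the changes without restarting the kernel:
importlib.reload(my_script)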
So I'm trying to familiarize myself with the Jupyter Notebook and am running into issues.
When I run the following code in a normal .py file in my PyCharm IDE it runs perfectly; however, if I run it in my notebook the [*] never disappears, meaning it just runs continuously without ending. Any idea why that might be the case? All answers much appreciated!
import pandas as pd

# Raw strings avoid any problems with backslashes in the Windows paths.
train_file = r'C:\Users\DDautel\Anaconda2\PycharmProjects\Kaggle\Titanic\RUN.csv'
test_file = r'C:\Users\DDautel\Anaconda2\PycharmProjects\Kaggle\Titanic\RUN2.csv'
train = pd.read_csv(train_file)
test = pd.read_csv(test_file)
print train.describe()
print test.describe()
For others' gratification:
Glad it's solved.
Make sure the python, ipython, and jupyter from Anaconda (possibly from a conda env) come first in your PATH, before any others. Order is important.
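A quick way to check this from inside the notebook (a minimal sketch; pandas is just an example of a module whose location you might inspect):

import sys
import pandas as pd

print(sys.executable)   # should point at the Anaconda interpreter, not a system Python
print(sys.path[:3])     # first entries on the import path
print(pd.__file__)      # confirms which pandas installation the kernel is importing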
After updating various packages (I'm running the Anaconda Python distribution), DataFrame.to_csv() will not work from within IPython Notebook. Everything else pandas-related seems to work fine (reading data into dataframes, manipulating them, etc.), but DataFrame.to_csv() will not generate an output file. A snippet of the code:
hives_output = hives_data.groupby(['user_id','replay']).apply(hives_processor).sort(['user_id','replay'])
hives_output.to_csv('hives_output.csv',index=False)
The first command runs fine, but no output file is produced by the second command. No error is produced whatsoever.
Weirder still, the code runs fine (i.e. it does generate an output file) within a regular Python or IPython interpreter, or when run from the command line, so the problem would seem to be specific to IPython Notebook.
I've tried downgrading various packages, but have yet to isolate the problem. Has anyone experienced this? Any ideas on what might be the cause? Some sort of incompatibility between IPython Notebook and pandas?
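One thing worth ruling out (a minimal sketch, not the confirmed cause): the notebook kernel's working directory may not be the folder you expect, so a file written with a relative path could be landing somewhere else. The output path below is hypothetical.

import os

# Where will a relative path like 'hives_output.csv' actually land?
print(os.getcwd())

# Writing to an explicit absolute path removes that ambiguity.
output_path = os.path.join(os.getcwd(), 'hives_output.csv')  # or any absolute path you prefer
hives_output.to_csv(output_path, index=False)
print(os.path.exists(output_path))  # confirm the file was created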