Looping through ipynb files in Python

Say I have the same piece of code inside 10 separate .ipynb files with different names, and let's say the code is as follows:
x = 1+1
Pretty simple stuff, but I want to change the variable x to y. Is there any way, using Python, to loop through each .ipynb file and do some sort of find and replace, so anywhere it sees x it replaces it with y? Or will I have to open each file in Jupyter Notebook and make the change manually?

I never tried this before, but .ipynb files are simply JSON. Once loaded, they pretty much function like nested dictionaries. The cells are contained under the key 'cells', and 'cell_type' tells you whether a cell is a code cell. You then access the contents of a code cell (the code itself) through the 'source' key.
In a notebook I am writing I can look for a particular piece of code like this:
import json

with open('UW_Demographics.ipynb') as f:
    ff = json.load(f)

for cell in ff['cells']:
    if cell['cell_type'] == 'code':
        for elem in cell['source']:
            if "pd.read_csv('UWdemographics.csv')" in elem:
                print("OK")
You can iterate over your ipynb files, identify the code you want to change using the above, change it and save using json.dump in the normal way.
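For instance, here is a minimal sketch of the whole loop, assuming the notebooks sit in the current directory and the target line is exactly x = 1+1:

import glob
import json

for path in glob.glob('*.ipynb'):
    with open(path) as f:
        nb = json.load(f)
    # rewrite matching lines in every code cell
    for cell in nb['cells']:
        if cell['cell_type'] == 'code':
            cell['source'] = [line.replace('x = 1+1', 'y = 1+1')
                              for line in cell['source']]
    with open(path, 'w') as f:
        json.dump(nb, f, indent=1)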

Related

How to save python notebook cell code to file in Colab

TLDR: How can I make a notebook cell save its own python code to a file so that I can reference it later?
I'm doing tons of small experiments where I make adjustments to Python code to change its behaviour, and then run various algorithms to produce results for my research. I want to save the cell code (the actual python code, not the output) into a new uniquely named file every time I run it so that I can easily keep track of which experiments I have already conducted. I found lots of answers on saving the output of a cell, but this is not what I need. Any ideas how to make a notebook cell save its own code to a file in Google Colab?
For example, I'm looking to save a file that contains the entire below snippet in text:
df['signal adjusted'] = df['signal'].pct_change() + df['baseline']
results = run_experiment(df)
All cell source code is stored in a list variable named In.
For example, you can print the latest cell with:
print(In[-1])  # show itself
# output: print(In[-1])  # show itself
So you can easily save the content of In[-1] or In[-2] to wherever you want.
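For example, a hedged sketch: run this in its own cell right after the experiment cell, so that In[-2] is the code you just executed (the timestamped filename is just one way to get a unique name):

import time

# In[-2] holds the source of the previous cell; write it to a uniquely named file
fname = f'experiment_{time.strftime("%Y%m%d_%H%M%S")}.py'
with open(fname, 'w') as f:
    f.write(In[-2])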
Posting one potential solution but still looking for a better and cleaner option.
By defining the entire cell as a string, I can execute it and save to file with a separate command:
cell_str = '''
df['signal adjusted'] = df['signal'].pct_change() + df['baseline']
results = run_experiment(df)
'''
exec(cell_str)
with open('cell.txt', 'w') as f:
    f.write(cell_str)
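The same idea can be wrapped in a small helper so each experiment gets its own file (the name run_and_save is hypothetical):

def run_and_save(cell_str, fname):
    exec(cell_str, globals())  # run the cell body in the notebook's namespace
    with open(fname, 'w') as f:
        f.write(cell_str)      # and keep a copy of its source

run_and_save(cell_str, 'experiment_001.txt')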

How do I import images with filenames corresponding to column values in a dataframe?

I'm a doctor trying to learn some code for work, and was hoping you could help me solve a problem I have with regards to importing multiple images into python.
I am working in Jupyter Notebook, where I have created a dataframe (named df_1) using pandas. In this dataframe each row represents a patient, and the first column shows the case number for each patient (e.g. 85).
Now, what I want to do is import multiple images (.bmp) from a given folder (the same location as the .ipynb file). There are many images in this folder, and I do not want all of them, only the ones whose filenames correspond to the "case_number" column in my dataframe (e.g. 85.bmp).
I already read this post, but I must admit it was way too complicated for me to understand.
Is there some simple loop (or something else) I could create to import all images with filenames corresponding to the values of the "case number" column in the dataframe?
I was imagining something like the below would be possible, I just do not know how to write it.
for i=[(df_1['case_number'()]
cv2.imread('[i].bmp')
The images don't really need to be added to the dataframe; I would just like to be able to view them in my notebook, e.g. with something like plt.imshow(85) afterwards.
Here is an image of the head of my dataframe
Thank you for helping!
You can access all of your files using this:
imageList = []
fileNames = []
for i in range(len(df_1)):
    path = './' + str(df_1['case_number'][i]) + '.bmp'
    fileNames.append(path)
    imageList.append(cv2.imread(path))  # store the loaded image, not just its path

plt.imshow(imageList[x])
This loops through every item in the case_number column. The ./ means the file is in the current directory, and converting everything to strings and joining them makes a readable file path; the joined path should look something like ./85.bmp, which opens your desired file. The loaded images are appended to imageList so they can be shown with plt.imshow(), and the paths are kept in fileNames so you can look up an image by its filename.
If you would like to access the files based on their name, you can use another variable (which could be set as an input) and implement the code below
fileName = input('Enter Your Value: ')
inputFile = fileNames.index('./' + fileName + '.bmp')
and from here you can use the same plt.imshow(imageList[x]), replacing x with the inputFile variable: plt.imshow(imageList[inputFile]).
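An alternative sketch: a dict keyed by case number gets you very close to the plt.imshow(85) usage from the question, assuming every case number has a matching .bmp file (note that cv2 loads images as BGR, so the channels are converted for matplotlib):

import cv2
import matplotlib.pyplot as plt

# map each case number to its loaded image
images = {n: cv2.cvtColor(cv2.imread('./' + str(n) + '.bmp'), cv2.COLOR_BGR2RGB)
          for n in df_1['case_number']}
plt.imshow(images[85])  # view the image for case 85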

How to get headless LibreOffice Calc to recalculate and save new values from uno?

I am trying to open an Excel file from Python, get it to recalculate, and then save it with the newly calculated values.
The spreadsheet is large and opens fine in LibreOffice with the GUI, initially showing old values. If I then do Data -> Calculate -> Recalculate Hard I see the correct values, and I can of course Save As and all seems fine.
But there are multiple large spreadsheets I want to do this for, so I don't want to use the GUI; instead I want to use Python. The following all seems to work to create a new spreadsheet, but it doesn't have the new values (unless I again manually do a Recalculate Hard).
I'm running on Linux. First I do this:
soffice --headless --nologo --nofirststartwizard --accept="socket,host=0.0.0.0,port=8100,tcpNoDelay=1;urp"
Then, here is sample python code:
import uno
local = uno.getComponentContext()
resolver = local.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", local)
context = resolver.resolve("uno:socket,host=localhost,port=8100;urp;StarOffice.ServiceManager")
remoteContext = context.getPropertyValue("DefaultContext")
desktop = context.createInstanceWithContext("com.sun.star.frame.Desktop", remoteContext)
document = desktop.getCurrentComponent()
file_url="file://foo.xlsx"
document = desktop.loadComponentFromURL(file_url, "_blank", 0, ())
controller=document.getCurrentController()
sheet=document.getSheets().getByIndex(0)
controller.setActiveSheet(sheet)
document.calculateAll()
file__out_url="file://foo_out.xlsx"
from com.sun.star.beans import PropertyValue
pv_filtername = PropertyValue()
pv_filtername.Name = "FilterName"
pv_filtername.Value = "Calc MS Excel 2007 XML"
document.storeAsURL(file__out_url, (pv_filtername,))
document.dispose()
After running the above code and opening foo_out.xlsx, I see the "old" values, not the recalculated ones. I know calculateAll() takes a little while, as I would expect if it is doing the recalculation, but the new values don't seem to actually get saved.
If I open it in Excel it does an auto-recalculate and shows the correct values and if I open in LibreOffice and do Recalculate Hard it shows the correct values. But, what I need is to save it, from python like above, so that it already contains the recalculated values.
Is there any way to do that?
Essentially, what I want to do from python is:
open, recalculate hard, saveas
It seems that this was a problem with an older version of LibreOffice. I was using 5.0.6.2, on Linux, and even though I was recalculating, the new values were not even showing up when I extracted the cell values directly.
However, I upgraded to 6.2 and the problem has gone away, using the same code and the same input files.
I decided to just answer my own question instead of deleting it, as this was a source of frustration until I solved it.
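For anyone stuck on an older build, a hedged sketch of forcing the same hard recalculation the GUI menu performs, by dispatching .uno:CalculateHard (this reuses context, remoteContext and document from the code above; I have not verified it on 5.0.x):

# dispatch the equivalent of Data -> Calculate -> Recalculate Hard
dispatcher = context.createInstanceWithContext(
    "com.sun.star.frame.DispatchHelper", remoteContext)
frame = document.getCurrentController().getFrame()
dispatcher.executeDispatch(frame, ".uno:CalculateHard", "", 0, ())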

Saving multiple items to HDFS with Spark

I'm used to programming in Python. My company now has a Hadoop cluster with Jupyter installed. Until now I have never used Spark / PySpark for anything.
I am able to load files from HDFS as easy as this:
text_file = sc.textFile("/user/myname/student_grades.txt")
And I'm able to write output like this:
text_file.saveAsTextFile("/user/myname/student_grades2.txt")
What I'm trying to achieve is to use a simple for loop to read the text files one by one and write their content into one HDFS file. So I tried this:
file_list = ['text1.txt', 'text2.txt', 'text3.txt', 'text4.txt']
for i in file_list:
    text_file = sc.textFile("/user/myname/" + i)
    text_file.saveAsTextFile("/user/myname/all.txt")
So this works for the first element of the list, but then gives me this error message:
Py4JJavaError: An error occurred while calling o714.saveAsTextFile.
: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory
XXXXXXXX/user/myname/all.txt already exists
To avoid confusion I blurred out the IP address with XXXXXXXX.
What is the right way to do this?
I will have tons of datasets (like 'text1', 'text2' ...) and want to apply a Python function to each of them before saving them to HDFS. But I would like to have all the results together in one output file.
Thanks a lot!
MG
EDIT:
It seems my final goal was not really clear. I need to apply a function to each text file separately, and then append the output to the existing output directory. Something like this:
for i in file_list:
    text_file = sc.textFile("/user/myname/" + i)
    text_file = really_cool_python_function(text_file)
    text_file.saveAsTextFile("/user/myname/all.txt")
I wanted to post this as comment but could not do so as I do not have enough reputation.
You have to convert your RDD to a DataFrame and then write it in append mode. To convert an RDD to a DataFrame, please look at this answer:
https://stackoverflow.com/a/39705464/3287419
or this link http://spark.apache.org/docs/latest/sql-programming-guide.html
To save dataframe in append mode below link may be useful:
http://spark.apache.org/docs/latest/sql-programming-guide.html#save-modes
Almost the same question is also here: Spark: Saving RDD in an already existing path in HDFS. But the answer provided is for Scala; I hope something similar can be done in Python too.
There is yet another (but ugly) approach: convert your RDD to a string, say resultString, and use subprocess to append that string to the destination file, i.e.
subprocess.call("echo "+resultString+" | hdfs dfs -appendToFile - <destination>", shell=True)
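A hedged variant of the same idea that avoids shell quoting problems by piping the string through stdin (using the all.txt path from the question as the destination):

import subprocess

# hdfs dfs -appendToFile - <dst> reads the data to append from stdin
subprocess.run(['hdfs', 'dfs', '-appendToFile', '-', '/user/myname/all.txt'],
               input=resultString.encode(), check=True)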
You can read multiple files at once and save them with:
textfile = sc.textFile(','.join(['/user/myname/' + f for f in file_list]))
textfile.saveAsTextFile('/user/myname/all')
You will get all the part files within the output directory.
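If you also need to transform each file separately before combining, as in the question's edit, here is a minimal sketch assuming really_cool_python_function takes and returns an RDD:

rdds = [really_cool_python_function(sc.textFile('/user/myname/' + f))
        for f in file_list]
# merge the transformed RDDs and write a single output directory (must not already exist)
sc.union(rdds).saveAsTextFile('/user/myname/all_output')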
If the text files all have the same schema, you could use Hive to read the whole folder as a single table, and directly write that output.
I would try this; it writes each input to its own output directory (note the output path must differ from the input file, since saveAsTextFile refuses to write to an existing path):
file_list = ['text1.txt', 'text2.txt', 'text3.txt', 'text4.txt']
for i in file_list:
    text_file = sc.textFile("/user/myname/" + i)
    text_file.saveAsTextFile(f"/user/myname/{i}_out")  # separate output dir per input

Python - Extract private variables from a function?

I have a function f2(a, b)
It is only ever called by a minimize algorithm, which iterates the function with different values of a and b each time. I would like to store these iterations in Excel for plotting.
Is it possible to extract these values easily (I only need to paste them all into Excel or a text file)? Conventional return and print won't work within f2. Is there some other way to extract the values a and b to a public list in the main body?
The algorithm may iterate dozens or hundreds of times.
So far I have tried:
Print to console (I can't paste this data into Excel easily)
Write to file (csv) within f2; the csv file gets overwritten within the function each time though.
Append the values to a global list.
values = []

def f2(a, b):
    values.append((a, b))
    # todo: actual function logic goes here
Then you can look at values in the main scope once you're done iterating.
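For example, a hedged sketch of the whole round trip, assuming the minimizer is scipy.optimize.minimize and that f2 returns the objective value:

import csv
from scipy.optimize import minimize

result = minimize(lambda p: f2(p[0], p[1]), x0=[0.0, 0.0])
print(result.x)  # best (a, b) found

# dump the recorded (a, b) pairs to a CSV you can open in Excel
with open('iterations.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['a', 'b'])
    writer.writerows(values)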
Write to file (csv) within f2; the csv file gets overwritten within the function each time though.
Not if you open the file in append mode:
with open("file.csv", "a") as myfile:
    myfile.write(f"{a},{b}\n")  # appends one line per call instead of overwriting
