TensorBoard for API 1.0 - Python

In older versions, we could use this command after creating the network structure and the session:
writer = tf.train.SummaryWriter("logs/", sess.graph)
And type this in the command line after running your script:
tensorboard --logdir="logs"
Then you copy the printed link into your browser.
But it shows this error:
No graph definition files were found.
To store a graph, create a tf.summary.FileWriter and pass the graph
either via the constructor, or by calling its add_graph() method. You
may want to check out the graph visualizer tutorial.
Please help. I also tried using tf.summary.FileWriter() instead:
file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph)
And I get the same error.

There is probably something wrong with the folder you are writing to or reading from. After you tried the FileWriter, can you verify that it actually wrote something to your logs folder?
If this is the case, start your tensorboard with this command:
tensorboard --logdir="logs" --debug
Take a look at this line:
INFO:tensorflow:TensorBoard path_to_run is: {'/Users/test/logs': None}
Verify that this is the same path! This is what went wrong for me when I had the same issue. More debugging ideas can be found on this page, by the way: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorboard/README.md#my-tensorboard-isnt-showing-any-data-whats-wrong
If this does not help, let us know!
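For reference, here is a minimal, self-contained TF 1.x sketch (with a toy graph) that writes the graph to the logs/ directory used in the commands above; closing the writer flushes the events file so TensorBoard can find it:

import tensorflow as tf

a = tf.constant(1.0, name='a')
b = tf.constant(2.0, name='b')
c = tf.add(a, b, name='c')

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs/', sess.graph)  # pass the graph via the constructor
    print(sess.run(c))
    writer.close()  # flushes the events file to disk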

Related

Jenkins api4jenkins download last build job console output

I am trying to get the last successful build output from Jenkins using the api4jenkins library. I can get the build number, but I am unable to find an option to download the console text. Is that option available in this framework?
If not, is there any other way to download the console text using the CLI?
Thanks,
SR
You can do something like this.
from api4jenkins import Jenkins
j = Jenkins('http://localhost:8080', auth=('admin', 'admin'))
job = j['Sample'] # Getting the Job by name
for line in job.get_build(1).console_text():
    print(line)
Also remember that it's always possible to download the log directly. For example, refer to the following URL:
http://localhost:8080/job/<JOBNAME>/lastSuccessfulBuild/consoleText
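If you prefer not to go through the library, here is a minimal sketch of fetching that endpoint directly with requests; the job name and credentials are placeholders mirroring the example above:

import requests

# 'Sample' and the admin credentials are assumptions from the example above.
url = 'http://localhost:8080/job/Sample/lastSuccessfulBuild/consoleText'
resp = requests.get(url, auth=('admin', 'admin'))
resp.raise_for_status()  # fail loudly on 404/401 instead of printing an error page
print(resp.text)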

How To Download Google Pegasus Library Model

I am a newbie and am currently working on my final project. I watched a YouTube video that taught me how to code abstractive text summarization with Google's Pegasus model. It works fine, but I need it to be more efficient.
So here is the code:
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
Every time I run that code, it downloads the google/pegasus-xsum model, which is about 2.2 GB in size.
Here is a sample notebook with the code: https://github.com/nicknochnack/PegasusSummarization/blob/main/Pegasus%20Tutorial.ipynb
Running it downloads the model again every time.
Is there any way to download the model first and save it locally, so that every time I run the code it just loads the model from the local copy?
Something like caching or saving the model locally, maybe?
Thanks.
Mac
Using inspect, you can find and locate the modules easily.
import inspect
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
print(inspect.getfile(PegasusForConditionalGeneration))
print(inspect.getfile(PegasusTokenizer))
You will get their paths, something like this:
/usr/local/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py
/usr/local/lib/python3.9/site-packages/transformers/models/pegasus/tokenization_pegasus.py
Now, if you look at what is inside the tokenization_pegasus.py file, you will notice that the vocabulary of google/pegasus-xsum is probably fetched by the following lines:
PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {"google/pegasus-xsum": "https://huggingface.co/google/pegasus-xsum/resolve/main/spiece.model"}
}
If you open
https://huggingface.co/google/pegasus-xsum/resolve/main/spiece.model
the vocabulary file will be downloaded directly to your machine.
UPDATE
After some searching on Google, I found something important: you can get the models you use, and all their related files, downloaded to your working directory with the following:
tokenizer.save_pretrained("local_pegasus-xsum_tokenizer")
model.save_pretrained("local_pegasus-xsum_tokenizer_model")
Ref:
https://github.com/huggingface/transformers/issues/14561
After running this, you will see these folders saved automatically in your working directory, and you can then load the models directly from them; see the sketch below.
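A minimal sketch of the full round trip, reusing the directory names above; from_pretrained accepts a local directory as well as a Hub model name:

from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# First run: download from the Hugging Face Hub and save locally.
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
tokenizer.save_pretrained("local_pegasus-xsum_tokenizer")
model.save_pretrained("local_pegasus-xsum_tokenizer_model")

# Later runs: pass the local directories instead of the model name; nothing is downloaded.
tokenizer = PegasusTokenizer.from_pretrained("local_pegasus-xsum_tokenizer")
model = PegasusForConditionalGeneration.from_pretrained("local_pegasus-xsum_tokenizer_model")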
Also, the 2.2 GB file whose local path you wanted to know is located here online:
https://huggingface.co/google/pegasus-xsum/tree/main
After downloading the models to your directory, you will see that the weights file is named pytorch_model.bin, just as it is named online.

"HTTPError: HTTP Error 404: Not Found" while using translation function in TextBlob

When I try to use the translate function of the TextBlob library in a Jupyter notebook, I get:
HTTPError: HTTP Error 404: Not Found
My code is posted below for reference. It worked well when I ran it for the first time 5-6 days ago, but ever since then it gives the same error message. I have been trying to run it for the last 4-5 days, and it has never worked again.
My code:
from textblob import TextBlob
en_blob = TextBlob('Simplilearn is one of the world’s leading certification training providers.')
en_blob.translate(to='es')
I am new to Stack Overflow and this is my first question on the platform, so please pardon me if it does not follow the rules.
The TextBlob library uses the Google Translate API in the backend. Google has recently made some changes to its API, and because of this, TextBlob's translation feature has stopped working. I noticed that by making a minor change in the translate.py file (in the folder where all the TextBlob files are located), as shown below, we can get rid of this error:
original code:
url = "http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
Change the above line in translate.py to the following:
url = "http://translate.google.com/translate_a/t?client=te&format=html&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
I have just tried this. It did not work for me at first. I restarted the Anaconda Prompt and IPython, re-ran my snippets, and the problem went away after the fix. I am using Windows 10 without a virtual environment; the two files that were changed are:
C:\Users\behai\anaconda3\pkgs\textblob-0.15.3-py_0\site-packages\textblob\translate.py
C:\Users\behai\anaconda3\Lib\site-packages\textblob\translate.py
I also found that I had to use tab indentation for the new line.
Added on 02/01/2021:
I did not do anything much at all. I applied the suggestion by Manish (the accepted answer above). I had this 404 problem in the Anaconda environment. After applying the change, I just restarted the Anaconda Prompt (anaconda3), and it worked.
This is the change as suggested above, in:
C:\Users\behai\anaconda3\Lib\site-packages\textblob\translate.py
# url = "http://translate.google.com/translate_a/t?client=webapp&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
url = "http://translate.google.com/translate_a/t?client=te&format=html&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&dt=t&dt=at&ie=UTF-8&oe=UTF-8&otf=2&ssel=0&tsel=0&kc=1"
The file below gets auto-updated:
C:\Users\behai\anaconda3\pkgs\textblob-0.15.3-py_0\site-packages\textblob\translate.py
It's fixed at https://github.com/sloria/TextBlob/pull/398
You should use a tagged version with that fix.
# requirements.txt
git+https://github.com/sloria/TextBlob@0.17.1#egg=textblob
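Once the patched or pinned version is installed, the snippet from the question should work again; a quick check:

from textblob import TextBlob

en_blob = TextBlob('Simplilearn is one of the world’s leading certification training providers.')
print(en_blob.translate(to='es'))  # should print the Spanish translation instead of raising a 404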

pickled python machine learning model uses hardcoded paths, doesn't run on other machine - what to do?

I use AutoGluon to create ML models locally on my computer.
Now I want to deploy them through AWS, but I realized that all the pickle files created in the process use hardcoded path references to other pickle files:
/home/myname/Desktop/ETC_PATH/AutoGluon/
I use cloudpickle.dump(predictor, open('FINAL_MODEL.pkl', 'wb')) to pickle the final ensemble model, but AutoGluon creates numerous other pickle files of the individual models, which are then referenced as /home/myname/Desktop/ETC_PATH/AutoGluon/models/ and /home/myname/Desktop/ETC_PATH/AutoGluon/models/specific_model/ and so forth...
How can I make sure that all absolute paths everywhere are replaced by relative paths like root/AutoGluon/WHATEVER_PATH, where root could be set to anything, depending on where the model is later saved?
Any pointers would be helpful.
EDIT: This solved the problem: if, instead of loading FINAL_MODEL.pkl (which seems to hardcode paths), I use AutoGluon's predictor = task.load(model_dir), it finds all dependencies correctly, whether or not the AutoGluon folder as a whole was moved. This issue on GitHub helped.
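For completeness, a minimal sketch of that working approach, assuming the older TabularPrediction API that the task.load call above implies; the directory name is a placeholder:

from autogluon import TabularPrediction as task

# 'AutoGluon/' is a placeholder for the directory AutoGluon trained into.
# A relative path works no matter where the folder is copied, as long as
# you load it from the deployment's working directory.
model_dir = 'AutoGluon/'
predictor = task.load(model_dir)
# y_pred = predictor.predict(test_data)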

ggplot2 no plot because ggsave does not save

I have a Python script using the pyper library (which pipes to R), and I am trying to get some output out of ggplot2. I have tried both the ggsave method and the device(...); dev.off() method, and nothing is output.
I have to use pyper because I am using 64 bits everywhere (Python and R), so rpy[2] isn't an option for me.
The code looks like the following:
r("png(filename='test.png',width=720,height=540)") #comment if ggsave
r("p<-ggplot(DB,aes(X,Y,group=cfg))")
r("""p <- p + geom_path(aes(colour=factor(f1))) + scale_x_log10('X label') +
scale_y_continuous('Y label',breaks=myb,labels=myl) +
geom_point(data=subset(DB,pts==dot),aes(colour=factor(f1),size=factor(f2),
shape=factor(f3))) + labs(colour='l1',size='l2',shape='l3')""")
r("print(p)")
# r("ggsave(filename='test.png',width=10,height=7.5) #comment out if using png
r("dev.off()") # comment if using ggsave
No file is created in either case. I have checked to make certain that the DB data table has entries (1000s). What could I try?
This all turned out to be an issue with libraries and environment variables. Some of the loaded libraries, like ggplot2, don't load all of their dependencies, like the digest library. The error only occurs at the print(p) portion of the code.
In addition, there are differences in the x64 library locations that need to be set correctly. Make certain that the R_HOME and R_LIBS variables match your configuration.
Pyper didn't appear to tell me that libraries hadn't loaded; it just kept going, so qplot wasn't loading in R initially. After getting it loaded in the right place, you need to make certain you always use either your user account or the administrator account (or you can have multiple paths in R_LIBS, but I didn't try that).
qplot and ggsave worked fine, so long as the libraries were loaded.
Thanks for all the dedicated folks and the directions for debug!
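For anyone debugging the same thing: pyper swallows R's library-loading messages, so it helps to print them explicitly. A small sketch, assuming pyper is installed and R is on the PATH:

import pyper

r = pyper.R()
# require() returns FALSE instead of raising an error, and printing the
# result makes success or failure visible in pyper's captured output.
for pkg in ('ggplot2', 'digest'):
    print(r('print(require(%s))' % pkg))
# Also verify which library paths R is actually searching (cf. R_LIBS):
print(r('print(.libPaths())'))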
