Data Privacy with Tensorboard - python

I've recently begun using TensorFlow via Keras and Python 3.5 to analyze company data. I am by no means an expert and have only recently built my first "real-world" model.
With my experimental data I used TensorBoard to visualize how my neural network was working, and I would like to do the same with my real data. However, my company is extremely strict about company data leaving our servers, so my question is this:
Does TensorBoard take the raw data used in the model and upload it off-site to generate its reports/visuals, or does it only use processed data/results from my model?
I've done several google searches already, and I haven't found anything conclusive one way or the other.
If I'm not asking this question correctly, please let me know - I'm new to all of this.
Thank you.

No, TensorBoard does not upload the data to "the cloud" or anywhere outside the computer where it is running; it just reads and interprets the log data produced by your model.
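For example, with Keras the TensorBoard callback just writes event files into a local log directory, and the tensorboard command you run afterwards is a local web server that reads those files from disk. A minimal sketch (the model, dummy data, and ./logs directory are placeholders, not anything specific to your setup):

import numpy as np
import tensorflow as tf

# A throwaway model, only to show where TensorBoard's data comes from.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The callback writes summaries/event files to ./logs on the local disk;
# nothing is sent over the network.
tb = tf.keras.callbacks.TensorBoard(log_dir="./logs", histogram_freq=1)

x, y = np.random.rand(64, 4), np.random.rand(64, 1)
model.fit(x, y, epochs=2, callbacks=[tb])

# Then, on the same machine:
#   tensorboard --logdir ./logs
# starts a local server that only reads the files under ./logs.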

Related

How to build a data streaming pipeline with Python?

I want to build an application that analyzes tweets in real time and applies a machine-learning model.
What would be the best architecture to use that would allow easy integration with the Python ecosystem? I've found Faust from Robinhood, but it looks like it is no longer maintained.
Would really appreciate your suggestions.

Swamped with real time data and tasked with building a database

I work for a power company and have been tasked with building a database. I have a beginner/intermediate understanding of Python, and can muddle through decently with MSSQL. They have procured Azure for this project, and I am completely lost as to how to start this task.
Here is one of the sources of data that I want to scrape every minute.
http://ets.aeso.ca/ets_web/docroot/tradingPage.html - this is a complete overview of the Alberta power market in real time.
Ideally, I would want to be able to scrape this data and other sources, then modify it to fit a certain format and push it onto the SQL server.
Do I need virtual machines that just loop over Python scripts? Or do I need managed instances? This data also needs to be queryable right after it is scraped. Eventually this data may feed machine learning algorithms (I don't know jack about that either, but I have been told it should play friendly with that type of environment).
Just looking to see if anyone has any insight into how you would approach this, and can tell me what I clearly don't know and haven't thought of. Any insight is truly appreciated.
Thanks!
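For what it's worth, here is a rough sketch of the minute-by-minute loop described above: fetch the page, reshape it, and append it to a SQL table. The table index, target table name, and connection string are all assumptions (I have not checked how that AESO page is laid out), so treat this as an outline of the flow rather than a working scraper:

import time
import pandas as pd
from sqlalchemy import create_engine

# Assumed Azure SQL connection string -- replace with your own.
engine = create_engine(
    "mssql+pyodbc://user:password@yourserver.database.windows.net/powerdb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

URL = "http://ets.aeso.ca/ets_web/docroot/tradingPage.html"

while True:
    # read_html returns every <table> on the page as a DataFrame;
    # which one holds the market data is something to verify by hand.
    tables = pd.read_html(URL)
    df = tables[0]

    # Clean/reshape df to match your target schema here, then append.
    df.to_sql("aeso_trading", engine, if_exists="append", index=False)

    time.sleep(60)  # repeat every minute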

How to read/understand the code behind TF/Keras API?

I started using TensorFlow/Keras for basic neural network architectures such as feed-forward networks or RNNs.
Although it is working well, and there is plenty of information on the internet about how this works in principle, I could not find any direct explanation of the TensorFlow/Keras source code.
When I have a look in the source directory of the package, there are thousands of files, and there is virtually no way (at least for me) to find the relevant information in there. Everything seems highly nested, and I can't find the code corresponding to the maths behind the layers I call.
So I'd like tips on how to find such information in the TensorFlow/Keras code, or any resource that explains the inner workings of basic networks with direct links to the source of the API implementation.
Well, I am not sure TensorFlow/Keras was ever meant to be "used" for this purpose.
You may want to look at EpyNN.
It is an educational project which provides an API, examples, and a documentation website. The source code is written to be read.
While it is basic compared to TensorFlow/Keras, what it provides has been validated against it: for identical configurations, the results are identical. So you could use it to understand the relevant parts of what is behind the TensorFlow/Keras API for the basic use you mention.
Disclaimer: I am the main author of EpyNN.
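On the other part of the question, finding the maths inside TensorFlow/Keras itself: one small trick is to let Python point you at the file instead of browsing the package directory by hand. The inspect module can jump straight from a layer class to its source. A minimal sketch, assuming a TensorFlow 2.x install where tf.keras is available:

import inspect
import tensorflow as tf

# Where the Dense layer is defined on disk.
print(inspect.getsourcefile(tf.keras.layers.Dense))

# The forward pass: Dense.call is where the matmul, bias add and
# activation actually happen.
print(inspect.getsource(tf.keras.layers.Dense.call))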

Is there a way to use a pre-trained R ML model in a Python web app?

More of a theoretical question:
Use case: create an API that takes JSON input, triggers an ML algorithm, and returns the result to the user.
I know that in the case of a Python ML model, I could just pack the whole thing into a pickle and use it easily inside my web app. The problem is that all our algorithms are currently written in R, and I would rather avoid rewriting them in Python. I have checked a few libraries that allow running R code within Python, but I cannot find a way to pack the model "the pickle way" and then just use it.
It may be a stupid question, but I have not had much to do with R so far.
Thank you in advance for any suggestions!
Not sure what calling R code from Python has to do with ML models.
If you have a trained model, you can try converting it into ONNX format (emerging standard), and try using the result from Python.
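A rough sketch of the Python side, assuming the R model has already been exported to a file named model.onnx (the file name and feature vector below are placeholders, not fixed by ONNX):

import numpy as np
import onnxruntime as ort

# Load the ONNX model exported from R.
session = ort.InferenceSession("model.onnx")

# Check what input name and shape the exported model actually expects.
print([(i.name, i.shape) for i in session.get_inputs()])

# Score a dummy feature vector.
features = np.array([[0.1, 0.2, 0.3, 0.4]], dtype=np.float32)
outputs = session.run(None, {session.get_inputs()[0].name: features})
print(outputs)

The session object can then be called from whatever web framework serves your JSON API.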

Microsoft Speech Recognition Custom Training

I have been wanting to create an application using Microsoft Speech Recognition.
My application's users are expected to often say abbreviations, such as 'LHC' for 'Large Hadron Collider', or 'CERN'. Given those two utterances in that exact order, my application will return
You said: At age C.
You said: Cern
While it did work for 'CERN', it failed very badly for 'LHC'.
However, if I could make my own custom training files, I could easily place the term 'LHC' somewhere in there. Then, I could make the user access the Speech Control Panel and run my training file.
All the links I have found for this have been frustratingly useless, as they just say things like 'This is ----, you should try going to the ---- forum instead'.
If it does help, here is a list of the links:
http://compgroups.net/comp.speech.users/add-my-own-training/153194
https://groups.google.com/forum/#!topic/microsoft.public.speech.server/v58SH1ov22s
http://social.msdn.microsoft.com/Forums/en/servercorefordevelopers/thread/f7a35f3f-b352-464a-b264-e16eb4afd049
Is what I want even possible? Or are the training files themselves in a special format? If so, can that format be reproduced?
A solution that can also work on Windows XP would be ideal.
Thanks in advance!
P.S. If there are any libraries or modules out there already for this, could anyone point me to some? A Python or C/C++ solution would be splendid. Also, since I'd rather not post another question regarding this, is it possible to use the training utilities from the command prompt (or without the GUI visible, but still having full control)?
Okay, pulling this from a thing I wrote three or four years ago now, but I believe you want to do something like this.
The grammar library is a trained system which can recognize words. You can create your own grammar library cued to specific words.
C#, sorry
using System.Speech;
using System.Speech.Recognition;
using System.Speech.AudioFormat;

// Create an in-process recognizer.
SpeechRecognitionEngine sre = new SpeechRecognitionEngine();

// Restrict recognition to the phrases you care about.
string[] words = { "L H C", "CERN" };
Choices choices = new Choices(words);
GrammarBuilder gb = new GrammarBuilder(choices);
Grammar grammar = new Grammar(gb);
sre.LoadGrammar(grammar);
That is as far as I can get you. From the docs, it looks like you can define pronunciations somehow, so perhaps that way you could have 'LHC' map directly to a single word. Here are the docs on the Grammar class - http://msdn.microsoft.com/en-us/library/system.speech.recognition.grammar.aspx
Small update - see the example in their docs here: http://msdn.microsoft.com/en-us/library/ms554228.aspx
