I am trying to run an experiment with a deep learning model.
I found that TensorFlow is the best way to do this,
but the problem is that TensorFlow needs to be written in Python,
and my program contains many nested loops, like this:
for i = 1 to 2000
    for j = 1 to 2000
I know this is a big drawback for Python;
it's much slower than C.
I know TensorFlow has a C++ API, but its documentation is not clear:
https://www.tensorflow.org/api_docs/cc/index.html
(This is the worst specification I have ever read.)
Can someone give me a simple example of using it?
All I need are two short pieces of code:
one showing how to create a graph,
and the other showing how to load that graph and run it.
I need this urgently; I hope someone can help me out.
It's not so easy, but it is possible.
First, you need to create the TensorFlow graph in Python and save it to a file.
This article may help you:
https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f#.krslipabt
Second, you need to compile libtensorflow, link it to your program (you need the TensorFlow headers as well, so it's a bit tricky), and load the graph from the file.
This article may help you this time:
https://medium.com/jim-fleming/loading-tensorflow-graphs-via-host-languages-be10fd81876f#.p9s69rn7u
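As a rough illustration of the first step, here is a minimal sketch of building a graph in Python and serializing it to a .pb file. It assumes TensorFlow 2 with the tf.compat.v1 graph-mode API; the node names ("input", "output") and the file name are made up, and are simply what the C++ loader would later look up.

```python
import tensorflow as tf

# Minimal sketch: build a graph in TF1-style graph mode (via tf.compat.v1
# under TensorFlow 2) and serialize its GraphDef to a binary protobuf file.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 2], name="input")
    w = tf.constant([[1.0], [2.0]], name="weights")
    y = tf.matmul(x, w, name="output")  # "output" is the node the C++ side looks up

# Write graph.pb; the C++ program then loads this file via the TensorFlow C/C++ API.
tf.io.write_graph(g.as_graph_def(), ".", "graph.pb", as_text=False)
```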
Related
I am trying to speed up some Python code. It's a fairly complex system, including a loop that runs inference of a convolutional neural network (TensorFlow 2) and subsequently handles its results in a reconstruction algorithm. The NN running on the GPU is fast enough, but the subsequent Python code can't keep up. So I profiled the code and found that the hot path includes a lot of what seems to be error-handling:
In a production setup I'd love to get rid of this for performance. Is there any way I can achieve this in Python? It seems there is some backtracing/error handling involved in each function call. Is that correct? If so, in C++ I'd probably try inlining functions as a remedy. Are there similar strategies in Python? I'd appreciate any general advice.
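To illustrate the kind of overhead I mean, here is a hypothetical sketch (all names made up) comparing a per-element helper call with the same logic inlined into the loop; in CPython, each call pays for frame setup and argument handling:

```python
import timeit

def check(v):
    # tiny helper called once per element; every call pays
    # Python's function-call overhead (frame setup, argument handling)
    if v < 0:
        raise ValueError("negative value")
    return v * 2

def with_calls(values):
    return [check(v) for v in values]

def inlined(values):
    # same logic with the helper's body inlined into the loop
    out = []
    for v in values:
        if v < 0:
            raise ValueError("negative value")
        out.append(v * 2)
    return out

data = list(range(10_000))
assert with_calls(data) == inlined(data)

t_calls = timeit.timeit(lambda: with_calls(data), number=200)
t_inline = timeit.timeit(lambda: inlined(data), number=200)
print(f"per-call: {t_calls:.3f}s  inlined: {t_inline:.3f}s")
```

Exact timings vary by machine and interpreter version, so measure on your own hot path before committing to this kind of manual inlining.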
Note: no minimal working example or code, as it's a fairly complex setup and I figure the question is not very specific to my code. But I can share the code and provide more detailed information if people think it'd be helpful.
I've built a recommender system using the Python Surprise library.
The next step is to update the algorithm with new data, for example when a new user or a new item is added.
I've dug into the documentation and found nothing for this case. The only apparent option is to train a new model from scratch from time to time.
It looks like I missed something, but I can't figure out what exactly.
Can anybody point out how I can refit an existing algorithm with new data?
Unfortunately, Surprise doesn't support partial fitting yet.
In this thread there are some workarounds and forks with partial fit implemented.
Until now I've been using the Streamlit framework for most of my plotting and visualizing, but recently I've had some new ideas. Long story short, something like Unreal's Blueprint editor is what I need (that's a bit much, I know). For now I would be content with something at least remotely similar to it. Only a few people will be using it, so it is not a product, just a sketch.
Maybe we can omit some of the details and say that we have a Pipeline.
Meaning it has some Steps, which in turn have Inputs and Outputs.
Then we say that earlier steps do not have access to later outputs.
Now we have a picture. And that picture would be an acyclic graph!
But maybe you see other options. How would you approach such a problem?
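To make the picture concrete, here is a tiny sketch of how I imagine the ordering would work, using the standard library's graphlib (Python 3.9+); the step names are made up:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each step maps to the set of steps whose
# outputs it consumes. A topological order of this DAG is a valid
# execution order: earlier steps never see later outputs.
pipeline = {
    "load": set(),
    "preprocess": {"load"},
    "train": {"preprocess"},
    "plot": {"preprocess", "train"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)  # e.g. ['load', 'preprocess', 'train', 'plot']
```

TopologicalSorter also raises CycleError on a cyclic graph, which would catch a step accidentally depending on its own output.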
I am hoping someone here has experience training object detection models with TensorFlow. I am a complete newbie trying to learn. I ran through a few of the tutorials on the TensorFlow site and am now going to try a real-world example. I am following the tutorial here. I am at the point where I need to label the images.
My plan is to try to detect scallops, but the images I am using contain several scallops each. Some I wouldn't really be able to tell were scallops, other than the fact that I have context: they are likely scallops because they are next to a mound of other scallops.
My questions are:
Am I better off cutting them out and treating them individually, or labeling images that have several scallops?
When labeling the scallops, there are many that might look just like round rocks if I didn't have the context of seeing other scallops. Should I still label them?
I am guessing I will also need to find some images with differing backgrounds?
I know I can experiment to see how the models perform, but labeling these images is a labour-intensive task, so I am hoping I can borrow from the experience of someone who has attempted something similar in the past. Here is an example of one of the images that I am partway through labeling:
1) Good question! The answer is easy: you should label the images as the model will see them at inference time. There's no reason to "lie" to your model (by not labeling something); you'll only confuse it. Be truthful: if you see a scallop, label it. If you don't label something, it acts like a negative example, which will confuse the model. ==> A: multiple scallops
2) It seems the model will take images of (many) scallops as input, so it's not a problem that it learns that "round objects next to a mound of scallops are likely also scallops"; it's even a good thing, because they often are. So, again, be truthful and label everything.
3) That depends: how will you use the model at inference time? Will the images all have the same background then? If yes, you don't need different backgrounds; if no, you do need them.
I'm interested in saving a model created in scikit-learn (e.g., EmpiricalCovariance, MinCovDet, or OneClassSVM) and re-applying it later.
I'm familiar with the option of saving a PKL file with joblib; however, I would prefer to save the model explicitly rather than as a serialized Python object.
The main motivation for this is that it makes it easy to view the model parameters.
I found one reference to doing this:
http://thiagomarzagao.com/2015/12/07/model-persistence-without-pickles/
My questions are:
Can I count on this working over time (i.e., with new versions of sklearn)? Is this too much of a "hacky" solution?
Does anyone have experience doing this?
Thanks
Jonathan
I don't think it's a hacky solution. A colleague has done a similar thing, where he exports a model to be consumed by a scorer written in Go, which is much faster than the scikit-learn scorer. If you're worried about compatibility with future versions of sklearn, you should consider using an environment manager like conda or virtualenv; in any case, this is just good software engineering practice and something you should get used to anyway.
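As a hypothetical sketch of what explicit persistence can look like: the attribute names below mirror what a fitted EmpiricalCovariance exposes (location_, covariance_), but the values are made-up plain lists, and the file name is arbitrary.

```python
import json

# Persist model parameters explicitly as JSON instead of pickling the
# estimator object: human-readable, diffable, and language-independent.
params = {
    "model": "EmpiricalCovariance",
    "location_": [0.1, 0.2],
    "covariance_": [[1.0, 0.0], [0.0, 1.0]],
}

with open("model_params.json", "w") as f:
    json.dump(params, f, indent=2)

# Later (possibly in another process, or another language entirely):
# reload the parameters and inspect or reapply them.
with open("model_params.json") as f:
    restored = json.load(f)

print(restored["model"], restored["location_"])
```

The trade-off is that you must reassign the parameters to a freshly constructed estimator yourself, but the format survives sklearn version changes far better than a pickle of the object's internals.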