How to predict an image in Google Cloud machine learning - Python

I'm using Google Cloud Machine Learning and would like to identify different images.
I have now trained my model on different types of images (using TensorFlow's Inception model), and I have created a version in Google Cloud Machine Learning from the results.
How can I get a prediction for a new image?
Do you have any ideas to help me?
Many thanks!

I'm not quite clear on what you're asking. Without more information, I will just point you to the Google blog post and code sample that detail how to train on images.
But back to what I think you're asking: for a model to be deployed to Google Cloud ML, a few things have to happen:
It needs to have its inputs and outputs collections declared in the TensorFlow model before saving the checkpoint.
The model checkpoint needs to be copied to GCS.
You must use gcloud to create a new "model" (as far as gcloud is concerned, a model is a namespace for many different TensorFlow checkpoints) and then deploy your checkpoint to that gcloud model.
The prediction quickstart has a very similar example here.
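Once a version is deployed, online prediction comes down to one API call. Here is a minimal sketch using the Python API client; the project and model names are placeholders, and the image_bytes/key instance layout assumes your graph declares those inputs (as the Inception/flowers sample does):

import base64
from googleapiclient import discovery

# Client for the Cloud ML Engine online prediction API (uses Application Default Credentials).
ml = discovery.build("ml", "v1")
name = "projects/my-project/models/my_image_model"  # placeholder project and model name

# The JSON API expects raw bytes to be sent base64-encoded under a "b64" key.
with open("new_image.jpg", "rb") as f:
    instance = {
        "image_bytes": {"b64": base64.b64encode(f.read()).decode("utf-8")},
        "key": "new_image.jpg",
    }

response = ml.projects().predict(name=name, body={"instances": [instance]}).execute()
print(response["predictions"])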

Using GCP's Vertex AI Image Classification exported (TF SavedModel) model for prediction

I've trained an Image Classification model via Google Cloud Platform's Vertex AI framework and liked the results. I then proceeded to export it in TensorFlow SavedModel format (it shows up as a 'Container' export) for custom prediction, because I like neither the slowness of Vertex's batch prediction nor the high cost of using a Vertex endpoint.
In my Python code I used
model = tensorflow.saved_model.load(model_path)
infer = model.signatures["serving_default"]
When I tried to inspect what infer requires, I saw that its input consists of two parameters: image_bytes and key. Both are string-type tensors.
This question can be broken down into several sub-questions that together make a whole:
Isn't inference done on multiple data instances? If so, why is it image_bytes and not images_bytes?
Is image_bytes just the output of open("img.jpg", "rb").read()? If so, don't I have to resize it first? To what size? How do I check that?
What is key? I have absolutely no clue or guess regarding this one's meaning.
The documentation for GCP is paid only, so I have decided to ask for help here. I searched Google for an answer over multiple days but found no relevant article.
Thank you for reading and your help would be greatly appreciated and maybe even useful to future readers.
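A minimal sketch of what calling this signature might look like, assuming image_bytes is the raw encoded JPEG (i.e. the exported graph handles decoding and resizing itself) and key is just a per-instance identifier string that gets echoed back:

import tensorflow as tf

model = tf.saved_model.load(model_path)
infer = model.signatures["serving_default"]

# Raw, encoded JPEG bytes; no manual resizing under the assumption above.
with open("img.jpg", "rb") as f:
    img_bytes = f.read()

result = infer(
    image_bytes=tf.constant([img_bytes]),  # batch of one encoded image
    key=tf.constant(["img-1"]),            # arbitrary identifier, echoed back per instance
)
print(result)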

Which is the best option to call a trained machine learning model in Google Cloud?

I have a trained machine learning model in Python that produces a regression output; the model is trained with scikit-learn.
I want to insert these predictions into Firestore. I am going to do it with a Cloud Function, scheduled every day with Cloud Scheduler.
My question is: where should I store this trained machine learning model?
Can I store it in Google Cloud Storage and load it in my Cloud Function to obtain predictions?
Or should I store it in AI Platform?
If the answer is AI Platform, why? What advantages do I get if I store it in AI Platform? Can I train the model with new data from there?
I have been reading that this is possible, but I don't know why it is better or how to do it.
There are several answers to your question.
Do you want to build a monolith or two microservices?
Monolith: the same service (Cloud Functions or a container) is triggered by the scheduler, loads the model, performs the prediction, and saves it to Firestore.
Microservices:
1 service is triggered by the scheduler, requests a prediction, and stores the result in Firestore.
1 service loads the model and answers prediction queries.
In the monolith case, AI Platform is not recommended. In the microservice case, you can host your prediction service on AI Platform and the other service on Cloud Functions.
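As a minimal sketch of the monolith option, assuming a scikit-learn model serialized with joblib and stored in Cloud Storage (the bucket, blob, and collection names below are placeholders):

import joblib
from google.cloud import firestore, storage

def daily_predictions(event, context):
    # Download the serialized model from Cloud Storage into the function's /tmp space.
    storage.Client().bucket("my-models-bucket") \
        .blob("models/regressor.joblib").download_to_filename("/tmp/model.joblib")
    model = joblib.load("/tmp/model.joblib")

    # Run the regression and store the result in Firestore.
    features = [[1.0, 2.0, 3.0]]  # replace with the real input features
    prediction = float(model.predict(features)[0])
    firestore.Client().collection("predictions").add({"value": prediction})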
With TensorFlow I have also proposed another solution for hosting the model: on Cloud Run. I wrote an article on this. I don't know scikit-learn well enough to tell you whether the same thing is possible, but it's a good alternative.
About where to store your trained model: definitely on Cloud Storage. Even if you build a Cloud Run service with a container as described in my article, where I download the model and load it into the container (so the model is downloaded from Storage only at build time, not at runtime), Cloud Storage is the best place for immutable objects.
Finally, your last question about AI Platform: same name, several services. You can host your model there and perform online prediction, and you can also train your model there. These are not the same internal service, not the same usage, and not the same API. There is no difference or advantage when training a new model whether you host your online prediction on AI Platform or not.

Training on Google Cloud ML Engine actually in the cloud - clarification on the approach

I am trying to implement cloud-based predictions for an sklearn model using Google Cloud ML Engine. I am able to do this, but it seems that even when using the REST API, the job always references a trainer module that is actually trained offline, or on a standard Python 3 runtime that has sklearn installed, rather than by any Google service:
training_inputs = {'scaleTier': 'BASIC',
                   #'masterType': 'standard',
                   #'parameterServerType': 'large_model',
                   #'workerCount': 9,
                   #'parameterServerCount': 3,
                   'packageUris': ['gs://pathto/trainer/package/packages/trainer-0.0.0.tar.gz'],
                   'pythonModule': 'trainer.task',
                   'region': 'europe-west1',
                   'jobDir': ,
                   'runtimeVersion': '1.12',
                   'pythonVersion': '3.5'}
So, the way I see it, whether using gcloud (command-line submission) or the REST API via:
request = ml.projects().jobs().create(body=job_spec, parent=project_id)
The actual training is done by my Python code running sklearn, i.e. all Google Cloud ML Engine does is receive the model specs from an sklearn model.bst file and then run the actual predictions. Is my understanding correct? Thanks for your help.
To answer your question, here is some background about ML Engine: the module referred to in the command is the main module that starts the whole training process. This process includes the training file and evaluation file in the code, as in this example, and ML Engine is in charge of creating the model based on these files. Therefore, when submitting a training job to ML Engine, the training process uses ML Engine resources for each training step to create the model, which can then be deployed to ML Engine for prediction.
Regarding your question, ML Engine does not interfere with the training datasets or the model code. That is why it needs the trainer module with the model specification and code. It provides the resources for model training and prediction, and it manages the different versions of the model. The diagram in this document should be a good reference for what ML Engine does.
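For illustration, a minimal sketch of submitting such a job from Python with the training_inputs above (the project id and job id are placeholders, and jobDir must point to a Cloud Storage path):

from googleapiclient import discovery

project_id = "projects/my-project"  # placeholder project
job_spec = {"jobId": "sklearn_training_001", "trainingInput": training_inputs}

ml = discovery.build("ml", "v1")
request = ml.projects().jobs().create(body=job_spec, parent=project_id)
response = request.execute()  # ML Engine provisions resources and runs trainer.task in the cloud
print(response)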

Tensorflow CNN model - RESTful API

I developed a convolutional neural network algorithm in Python which classifies images (.jpg) with a specific label, by
1) defining a custom CNN model;
2) setting up an estimator, which locally saves summaries and checkpoints (the save_summary and save_checkpoint steps);
3) training the estimator with the estimator.train function.
Now, if I run the estimator.predict function with a new image, it returns the predicted label.
How can I deploy this trained estimator as RESTful API so that I can call it from a WEB page or an application?
It would help to know whether you have used a known framework (Keras, TensorFlow, MXNet...), as most have recommended ways to serve models over an API.
If you built your solution from scratch, you can "just" use any web framework to deliver your model's predictions. To get you started, you may want to take a look at Flask.
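For example, a minimal Flask sketch, where predict_label() is a hypothetical helper that wraps your estimator.predict call for a single image:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    image_bytes = request.files["image"].read()  # image sent as multipart form data
    label = predict_label(image_bytes)           # hypothetical wrapper around estimator.predict
    return jsonify({"label": label})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)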

Sessions with tensorflow

I'm a TensorFlow beginner, so excuse my question if it is stupid.
I checked a GitHub example implementing a CNN using MNIST data and TensorFlow,
at the link below:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
However, I need to save the model generated by this code, but I don't know how to do it, as this code does not involve the use of sessions. How do I incorporate a session into it?
Would appreciate your response.
The linked code is using tf.estimator.Estimator to train the model. Its documentation includes how to save the model using export_savedmodel. A saved model can be imported by specifying its location through the model_dir argument of the tf.estimator.Estimator initialiser.
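A minimal sketch of such an export, assuming the tf.estimator.Estimator built in the linked script (called model below) and the 'images' feature name with a flat 784-pixel input, as in that script's input function:

import tensorflow as tf

def serving_input_receiver_fn():
    # Placeholder matching the shape the model expects at serving time.
    images = tf.placeholder(dtype=tf.float32, shape=[None, 784], name="images")
    return tf.estimator.export.ServingInputReceiver(
        features={"images": images}, receiver_tensors={"images": images})

# Writes a SavedModel under the given directory, ready to be reloaded or served.
model.export_savedmodel("exported_model", serving_input_receiver_fn)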
