Do you know how to back up machine learning models in Azure Machine Learning Studio during idle periods when no subscription has been purchased? I would preferably back those models up to an Azure DB/DWH on other Azure accounts/instances. Is it actually possible to copy a model's flow to another location or share it with other users?
I would appreciate the answer.
Based on my understanding, I think you want to export your experiments in Azure Machine Learning Studio as a file or some other kind of resource. There does not seem to be any official way to do this on Azure, but there is a third-party tool named azuremlps, a PowerShell module for Azure ML Studio written by a Microsoft employee. You can use its Export-AmlExperimentGraph cmdlet to export a specific experiment graph to a file in JSON format.
Hope it helps.
There's no way to back up the experiments you've created directly. You can share the models with other users in two ways.
Share publicly through the Gallery, where anyone can see the experiment you've created.
Share privately, which makes the experiment visible only to people who have the link to your published experiment.
Use the 'Publish to Gallery' operation for either of the above.
On my company PC, I do not have full permissions to install Python packages (this usually has to be requested for approval from IT, which is very painful and takes a very long time).
I am thinking of asking my manager to invest in Anaconda Enterprise so that the security aspect of using open-source Python is no longer an issue. However, my boss is also looking to move to the cloud, and I was wondering whether Anaconda Enterprise can be used interchangeably on-premises (offline from the cloud, i.e., no use of cloud storage or cloud compute resources) and, when needed for big data processing, switched to 'cloud mode' by connecting to AWS, GCP, or Azure to rent GPU instances? Any advice welcome.
Yes, that can be a good approach for your company. I have used it in many projects on GCP and IBM Cloud over Debian 7, 8, and 9, and it works well. Depending on your needs, you can also create a package channel with the Enterprise version and manage the permissions over your packages. It also has a deployment tool where you can manage and audit the deployments for your projects and APIs, as well as track deployments and assign them to owners.
You can move your server nodes to different servers, or add and remove nodes as needed. Depending on your environment this can be difficult at the beginning, but it works well once implemented.
Below are some links where you can see more information about what I'm talking about:
using-anaconda-enterprise
conda-offline-install-update
server-nodes
Depending on your preferences, it may not be necessary to use Anaconda Enterprise on GCP. If your boss is looking to move to the cloud, GCP has some great options for analyzing big data. Using the AI Platform you can deploy a new instance and choose R, Python, CUDA, TensorFlow, etc. Once the instance is deployed, you can install whatever libraries you need (NumPy, SciPy, Pandas, Matplotlib, etc.) and start your data preprocessing and manipulation.
If you use something like Jupyter Notebooks, you can prepare your work offline before moving to the GCP platform to run the model training.
GCP also has many labs for trying out its data science platform:
https://www.qwiklabs.com/quests/43
GCP has many free promos these days; below is a link to one.
GCP - Build your cloud skills for free with Google Cloud
Step by step usage for AI Platform
I have a Python program that is quite heavy and my computer can't run it efficiently, so I want to run it in the cloud.
Please tell me how to do it. Is there any step-by-step tutorial available?
Thanks.
Based on my experience, I would recommend Amazon Web Services: https://aws.amazon.com/.
I would suggest looking into creating an EC2 instance and running your code there. An EC2 instance is essentially a virtual server, and you can also automate your Python script on it.
There's a tutorial that helped me a lot in getting a clearer picture of running a Python script on AWS (specifically EC2): https://www.youtube.com/watch?v=WE303yFWfV4.
For further information about Amazon's cloud services and products, see: https://aws.amazon.com/products/.
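If you later want to create instances from code rather than from the console, here is a minimal sketch using boto3; the AMI ID, key pair name, and instance type are placeholders you would replace with your own, and it assumes your AWS credentials are already configured:

```python
import boto3

# Assumes AWS credentials are already configured (e.g. via `aws configure`).
ec2 = boto3.resource("ec2", region_name="us-east-1")

# ImageId, KeyName and InstanceType are placeholders - pick the AMI,
# key pair and instance size that fit your workload.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",            # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)

instances[0].wait_until_running()
print("Instance running:", instances[0].id)
```

Once the instance is running, you can SSH in, copy your script over, and run it there.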
You can try Heroku. It's free and has its own tutorials, but it's good enough only for learning purposes. AWS, Azure, or Google Cloud are much better for production.
I am working on extracting features from CSV files, and I use Python to perform the task. I am in Azure and created a Python application using Visual Studio 2017. It works perfectly fine, and I am looking for ways to automate the process so that it runs in batches on a schedule.
I don't want to post it as a WebJob because the script has some references to files on the local disk of my VM. Could someone tell me the options available to run this solution in batch?
Based on your description, here are several ways to run your solution in batch.
1. Web Job
Actually, you can package your Python script's dependent modules or references together and send them to the WebJob. You can then find their absolute paths on Kudu and refer to them in your script, so this does not prevent you from using a WebJob. For this process you can refer to a case I answered previously: Python libraries on Web Job.
Note that the minimum schedule interval for a WebJob is one second.
2. Azure Scheduler
Azure Scheduler allows you to declaratively describe actions to run in the cloud. It then schedules and runs those actions automatically. You can periodically call your app script URL. For more details, please refer to the official tutorial.
Note that the minimum schedule interval for Azure Scheduler is one minute.
3. Azure Functions
As with the previous method, you can use an Azure Functions timer trigger to periodically call your app script URL (see the sketch after this list). For more details, please refer to the official tutorial.
4. Azure Batch
Azure Batch schedules compute-intensive work to run on a managed collection of virtual machines and can automatically scale compute resources to meet the needs of your jobs. Since Azure Batch is aimed at large-scale data operations, the cost would be relatively high for your situation and I don't suggest using it. For more details, please refer to the official tutorial.
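For option 3, a minimal sketch of a Python timer-triggered Azure Function is shown below; the schedule lives in the function's function.json binding (e.g. "0 */5 * * * *" for every five minutes), and the feature-extraction call is a placeholder for your own logic:

```python
import datetime
import logging

import azure.functions as func


def main(mytimer: func.TimerRequest) -> None:
    # The schedule itself is defined in function.json, e.g. "0 */5 * * * *".
    utc_now = datetime.datetime.utcnow().isoformat()

    if mytimer.past_due:
        logging.info("The timer is past due!")

    # Placeholder: call your CSV feature-extraction routine here,
    # e.g. extract_features("input.csv").
    logging.info("Feature extraction triggered at %s", utc_now)
```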
Hope it helps you.
I wonder if it is possible to design ML experiments without using the drag-and-drop functionality (which is very nice, by the way). I want to use Python code in a notebook (within Azure ML Studio) to access the algorithms in the Studio (e.g., the Matchbox recommender, regression models, etc.) and design experiments. Is this possible?
I appreciate any information and suggestions!
The algorithms provided as modules in Azure ML Studio cannot currently be used directly as code in Python programs.
That being said, you can publish the outputs of the out-of-the-box algorithms as web services, which can then be consumed by Python code in the Azure ML Studio notebooks (a minimal example is sketched below). You can also create your own algorithms and use them as custom Python or R modules.
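For illustration, calling a published Azure ML Studio (classic) request-response web service from a notebook might look roughly like the sketch below; the endpoint URL, API key, and input column names are placeholders that you would copy from the service's consumption page:

```python
import json
import urllib.request

# Placeholders - copy the real values from your web service's consumption page.
URL = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0&details=true"
API_KEY = "<your-api-key>"

# The input shape depends on your experiment; this two-column payload is illustrative only.
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["feature_1", "feature_2"],
            "Values": [[0.5, 1.2]],
        }
    },
    "GlobalParameters": {},
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_KEY,
    },
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```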
I am trying to run through the following tutorial on Bluemix:
https://www.ibm.com/watson/developercloud/doc/retrieve-rank/get_start.shtml
However, I am not able to install Python locally onto my system due to security policies. Is there a way I can run this tutorial through hosting my code in IBM DevOps Services using the Python runtime on Bluemix?
I am not sure if the Bluemix Python runtime can be leveraged like a natively installed Python and accept the command line instructions from:
Stage 4: Create and train the ranker.
Any feedback would be greatly appreciated!
An alternative to the manual Python and curl commands in that tutorial is to use the web interface.
I've written about it in some detail on my blog which also includes a video walkthrough of how the web tool works. (You can also find the official documentation on ibm.com).
But to sum up, it'd mean you could do everything through a web browser and not have to install or run anything locally - including training a ranker.
There is a small wrinkle in this plan right now, unfortunately. The Solr schema used in the Python/curl tutorial you've followed isn't compatible with the web tool, but we're working on that. It means that if you use the web tool, you'll need to start again with a new cluster and collection. On the other hand, you could then use your own documents, content, and training questions instead of the Cranfield test data, so hopefully this is a good thing!