Azure Python: create empty VHD blob - python

I am using the Azure Python API to create a page blob with create_blob and updating the header as described in http://blog.stevenedouard.com/create-a-blank-azure-vm-disk-vhd-without-attaching-it/ , then writing my actual image data with update_page. But when I try to boot the VHD, I get a provisioning error in Azure: "Could not provision the virtual machine". Can anyone please suggest what might be wrong?

I think there may be something wrong with your VHD image. I would suggest you have a look at this article.
Here is a snippet of that article:
Please make sure of the following when uploading a VHD for use with Azure VMs:
A VM must be generalized to use as an image from which you will create other VMs. For Windows, you generalize with the sysprep tool. For Linux you generalize with the Windows Azure Linux Agent (waagent). Provisioning will fail if you upload a VHD as an image that has not been generalized.
A VM must not be generalized if it will be used as a disk for only a single VM (and not as a base for other VMs). Provisioning will fail if you upload a VHD as a disk that has been generalized.
When using third-party storage tools for the upload, make sure to upload the VHD as a page blob (provisioning will fail if the VHD was uploaded as a block blob). Add-AzureVHD and Csupload will handle this for you; it is only with third-party tools that you could inadvertently upload it as a block blob instead of a page blob.
Upload only fixed VHDs (not dynamic, and not VHDX). Windows Azure Virtual Machines do not support dynamic disks or the VHDX format.
Note: Using CSUPLOAD or Add-AzureVHD to upload VHDs automatically converts the dynamic VHDs to fixed VHDs.
The maximum size of the VHD is 127 GB. While data disks can be up to 1 TB, OS disks must be 127 GB or less.
The VM must be configured for DHCP and not assigned a static IP address. Windows Azure Virtual Machines do not support static IP addresses.

I think there are two points you could focus on:
1. The blob name should be a .vhd file name, so your code should use blob_name='a-new-vhd.vhd'.
2. The storage account and the VM you create should be in the same location (region).
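For reference, here is a minimal sketch of creating a blank fixed-size VHD page blob, assuming the legacy azure-storage SDK's PageBlobService (the same create_blob/update_page calls used in the question); the account, key and container names are placeholders, and build_vhd_footer is a hypothetical helper standing in for the 512-byte fixed-VHD footer construction shown in the linked blog post:
from azure.storage.blob import PageBlobService

account_name = 'myaccount'   # placeholder
account_key = '...'          # placeholder
container = 'vhds'
blob_name = 'a-new-vhd.vhd'  # note the .vhd extension

size_in_gb = 10
vhd_size = size_in_gb * 1024 * 1024 * 1024 + 512  # data + 512-byte fixed-VHD footer

blob_service = PageBlobService(account_name=account_name, account_key=account_key)
blob_service.create_blob(container, blob_name, content_length=vhd_size)

# The last 512 bytes of a fixed VHD must be a valid VHD footer;
# build_vhd_footer is a hypothetical helper standing in for the footer
# construction described in the blog post linked in the question.
footer = build_vhd_footer(vhd_size - 512)
blob_service.update_page(container, blob_name, footer,
                         start_range=vhd_size - 512, end_range=vhd_size - 1)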
Hope it helps. Any concerns, please feel free to let me know.

Related

How to download file from website to S3 bucket without having to download to local machine

I'm trying to download a dataset from a website. However, the files I want to download add up to about 100 GB, which I don't want to download to my local machine and then upload to S3. Is there a way to download directly to an S3 bucket? Or do you have to use EC2, and if so, could somebody give brief instructions on how to do this? Thanks
S3's put_object() method accepts a Body parameter that can be bytes or a file-like object:
Python example:
import boto3

client = boto3.client('s3')
response = client.put_object(
    Body=b'bytes',     # raw bytes or an open file object
    Bucket='string',   # your bucket name
    Key='string',      # the object key to write
)
So to download a file in Python you'd use the requests.get() method (in .NET, either HttpWebRequest or WebClient), and then upload the content as a byte array, so you never need to save it locally. It can all be done in memory.
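For instance, a minimal sketch of streaming a download straight into S3 with requests and boto3, so nothing is written to the local disk (the URL, bucket and key are placeholders):
import boto3
import requests

url = 'https://example.com/big-file.dat'  # placeholder source URL
s3 = boto3.client('s3')

# Stream the HTTP response body straight into S3 without saving it locally.
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    r.raw.decode_content = True  # transparently handle gzip/deflate encoding
    s3.upload_fileobj(r.raw, 'my-bucket', 'my-key')  # placeholder bucket/key
upload_fileobj does a chunked multipart upload under the hood, so even a 100 GB file does not need to fit in memory.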
Or do you have to use EC2
An EC2 instance is just a VM in the cloud; you can do this task (downloading 100 GB into S3) programmatically from your desktop PC or laptop. Simply open a command window or a terminal and type:
aws configure
Enter an IAM user's credentials, then use the AWS CLI or an AWS SDK like the Python example above. You can give the S3 bucket a policy document that allows access to that IAM user. Note that this approach still pulls everything through your local machine.
If you want to run this on an EC2 instance and avoid pulling everything through your local PC, modify the role assigned to the instance and give it Put privileges on S3. This is the easiest and most secure option. If you use the in-memory/bytes approach, the data is still downloaded, but it is never saved to disk.

Is there any way by which I can upload a local image in a google doc file using the google docs API? [duplicate]

I want to create new slides out of local images using Python.
Unfortunately, the official documentation only shows how to add an image from a URL, not how to upload a local image.
That's why I'm wondering whether it is possible to do that at all. If it is, I would appreciate any kind of help.
Thank you very much
From the official documentation:
https://developers.google.com/slides/how-tos/add-image
If you want to add private or local images to a slide, you'll first need to make them available on a publicly accessible URL. One option is to upload your images to Google Cloud Storage and use signed URLs with a 15 minute TTL. Uploaded images are automatically deleted after 15 minutes.
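For example, a minimal sketch of that Cloud Storage option, assuming the google-cloud-storage package, a service account able to sign URLs, and placeholder bucket/object names:
import datetime
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-temp-bucket')  # placeholder bucket
blob = bucket.blob('photo.png')           # placeholder object name
blob.upload_from_filename('photo.png')    # upload the local image

# Short-lived signed URL that the Slides API can fetch.
image_url = blob.generate_signed_url(
    expiration=datetime.timedelta(minutes=15),
    version='v4',
    method='GET',
)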
Google requires a publicly accessible URL as mentioned in the docs.
You can either:
upload your file to an Internet-accessible server, or
allow the Internet to access your local system.
I'll describe the 2nd approach a bit because the 1st approach is fairly straightforward.
To allow Google to access your local file without uploading the file to a server, you can use a local HTTP server and expose it to the Internet using a TCP tunneling service like ngrok:
https://ngrok.com/
You can use any local server. If you don't have one, http-server on NPM is easy to use:
https://www.npmjs.com/package/http-server
I've used the ngrok approach successfully.
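Whichever way you expose the image, inserting it is then an ordinary createImage request. A minimal sketch, assuming google-api-python-client, credentials already loaded in creds, and placeholder presentation/page IDs and image URL (e.g. the ngrok URL from above):
from googleapiclient.discovery import build

slides = build('slides', 'v1', credentials=creds)  # creds obtained elsewhere

requests_body = [{
    'createImage': {
        'url': 'https://YOUR-NGROK-ID.ngrok.io/photo.png',  # publicly reachable image URL
        'elementProperties': {
            'pageObjectId': 'SLIDE_PAGE_ID',  # placeholder slide ID
        },
    },
}]

slides.presentations().batchUpdate(
    presentationId='PRESENTATION_ID',  # placeholder
    body={'requests': requests_body},
).execute()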

How do I programmatically upload files to JupyterLab from my local storage?

I was wondering if JupyterLab has an API that allows me to programmatically upload files from my local storage to the JupyterLab portal. Currently, I am able to manually select "Upload" through the UI, but I want to automate this.
I have searched their documentation but no luck. Any help would be appreciated. Also, I am using a chromebook (if that matters).
Thanks!!
Firstly, you can use the Python packages "requests" and "urllib" to upload files:
https://stackoverflow.com/a/41915132/11845699
This method is effectively the same as clicking the upload button, but the upload speed is not great, so I don't recommend it if you are uploading lots of files or large files.
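That linked answer boils down to a PUT against the Jupyter contents REST API; here's a minimal sketch, where the server URL, API token and target path are all placeholders:
import base64
import requests

base_url = 'http://localhost:8888'  # placeholder JupyterLab URL
token = 'YOUR_TOKEN'                # placeholder API token
local_file = 'data.csv'
remote_path = 'uploads/data.csv'    # path relative to the Jupyter working directory

with open(local_file, 'rb') as f:
    content = base64.b64encode(f.read()).decode('ascii')

resp = requests.put(
    f'{base_url}/api/contents/{remote_path}',
    headers={'Authorization': f'token {token}'},
    json={'type': 'file', 'format': 'base64', 'content': content},
)
resp.raise_for_status()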
I don't know whether your JupyterLab server is managed by your administrator or by yourself. In my case, I'm the administrator of the server in my lab, so I set up an NFS disk and mounted it to a folder in the JupyterLab working directory. Users can access this NFS disk over our local network or the internet. NFS handles lots of large files well, which is much more efficient than the Jupyter upload button. I learned this from a talk by a TA at Berkeley: https://bids.berkeley.edu/resources/videos/teaching-ipythonjupyter-notebooks-and-jupyterhub
I highly recommend this if you can contact the person who has access to the file system of your Jupyter server. If you don't use Linux, then WebDAV is an alternative to NFS. Actually, anything that gives you access to a folder on the remote server is an option, such as Nextcloud or Pydio.
(If you can't ask the administrator to deploy such a service, then just use the Python packages.)

How to find VHD files in Azure using Python

I want to create a VM from a disk in Azure using Python, so I need the VHD file of the disk.
How can I download or obtain the VHD file in Azure using Python? And in the Azure portal, where are the VHD files listed?
I watched a video that said to look under storage account -> containers -> vhds, but I did not find anything there. Then I found blob_service.get_blob_to_path, but it also did not work for me.
Can anyone help me with this problem?
I want to obtain the VHD file of the disk in Azure using Python.
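In case it helps, here is a minimal sketch of how get_blob_to_path is typically used, assuming the legacy azure-storage SDK and an unmanaged-disk VM (managed disks are not exposed as blobs in a storage account, which may be why the vhds container is missing); the account, key, container and blob names are placeholders:
from azure.storage.blob import PageBlobService

blob_service = PageBlobService(account_name='myaccount', account_key='...')  # placeholders

# List the blobs in the container that holds the OS disks (often named 'vhds').
for blob in blob_service.list_blobs('vhds'):
    print(blob.name)

# Download one VHD to a local file.
blob_service.get_blob_to_path('vhds', 'my-os-disk.vhd', './my-os-disk.vhd')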

How to run Python code on the Amazon EC2 web service?

I have never used Amazon Web Services, so I apologize for the naive question. I am looking to run my code on a cluster, as the quad-core architecture on my local machine doesn't seem to be doing the job. The documentation seems overwhelming, and I don't even know which AWS services I would need to run my script on EC2. Would I have to use their storage service (S3)? I assume that to run my script I have to store it somewhere in the cloud that the cluster instance can access, or do I upload my files somewhere else while working with EC2? If it is S3, is it possible to upload my entire directory, with all the files my application requires, to S3? Any guidance would be much appreciated. So I guess my question is: do I have to use S3 to store my code in a place accessible by the cluster? If so, is there an easy way to do it? I have only seen examples of creating buckets in which one file is transferred per bucket. Can you transfer an entire folder into a bucket?
If we don't need to use S3, which other service should I use to give the cluster access to the scripts to be executed?
Thanks in advance!
You do not need to use S3; you would likely want to use EBS for storing the code if you need it to be preserved between instance launches. When you launch an instance, you have the option to attach an EBS storage volume. That volume will automatically be mounted on the instance and you can access it just like a drive on any physical machine. Copy your code up to the Amazon machine over SSH (e.g. with scp) and fire away.
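And if you do decide to stage code or data in S3, a bucket is not limited to one file; here is a minimal sketch of uploading a whole directory with boto3 (the folder and bucket names are placeholders):
import os
import boto3

s3 = boto3.client('s3')
local_dir = 'my_project'    # placeholder local folder
bucket = 'my-code-bucket'   # placeholder bucket name

# Walk the directory and upload every file, keeping relative paths as object keys.
for root, _dirs, files in os.walk(local_dir):
    for name in files:
        local_path = os.path.join(root, name)
        key = os.path.relpath(local_path, local_dir).replace(os.sep, '/')
        s3.upload_file(local_path, bucket, key)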
