GCP Compute Engine Python API: proper way to create clients

What is the current "standard" way to create a GCP compute client in Python? I have seen both:
import uuid

import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1', credentials=credentials)
body = {
    "autoCreateSubnetworks": False,
    "description": "",
    "mtu": 1460,
    "name": "test-network",
    "routingConfig": {
        "routingMode": "REGIONAL"
    }
}
network = compute.networks().insert(project=project_id, body=body, requestId=str(uuid.uuid4())).execute()
and:
import uuid

from google.cloud import compute_v1

net = compute_v1.Network()
net.auto_create_subnetworks = False
net.description = ""
net.mtu = 1460
net.name = "test-network"
net.routing_config = compute_v1.NetworkRoutingConfig(routing_mode="REGIONAL")

request = compute_v1.InsertNetworkRequest()
request.project = project_id
request.request_id = str(uuid.uuid4())
request.network_resource = net

network = compute_v1.NetworksClient(credentials=credentials).insert(request=request)
Does Google plan on only supporting one somewhere down the road?

According to the google-api-python-client repository, that library is still supported for now, but there is no date yet for when it will stop being updated:
This library is considered complete and is in maintenance mode. This means that we will address critical bugs and security issues but will not add any new features.
It's recommended to use the google-cloud-python repository instead, which offers three support levels: GA (General Availability), Beta and Alpha.
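For what it's worth, one practical difference with the newer google-cloud-compute client is worth knowing (a sketch, reusing project_id and credentials from the question): its mutating calls return a long-running operation that you can block on.

from google.cloud import compute_v1

client = compute_v1.NetworksClient(credentials=credentials)
net = compute_v1.Network(
    auto_create_subnetworks=False,
    mtu=1460,
    name="test-network",
    routing_config=compute_v1.NetworkRoutingConfig(routing_mode="REGIONAL"),
)

# insert() returns an operation object; result() blocks until the
# network is actually created (or raises on error).
operation = client.insert(project=project_id, network_resource=net)
operation.result()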

Related

Rancher API: Unauthorized 401: must authenticate

I'm trying to communicate with the Rancher API. I've tried different combinations and get the same result every time: Unauthorized 401: must authenticate.
Steps to reproduce:
1) Create a Rancher API key and secret.
2) Create a simple script that uses them to deploy a test workload.
import requests

api_url = "https://myrancherurl.com/v3/project/c-m-qh7tkqn4/jobs"
access_key = "token-zmdpqs"
secret_key = "fr9v6z9xxfqdgmjv2k9z44zvx6mlrandomtoke"
token = access_key + ":" + secret_key

# Set the API token
headers = {"Authorization": "Bearer " + token}

# Set the payload for the API request
payload = {
    "name": "my-job",
    "jobConfig": {
        "image": "nginx:latest",
        "command": ["nginx", "-g", "daemon off;"],
        "restartPolicy": {
            "name": "Never"
        }
    }
}

# Send the API request to create the job
response = requests.post(api_url, json=payload, headers=headers)

# Print the API response
print(response.json())
I'm not 100% sure what exactly the "Project id" is, so I tried different combinations; the results are the same. I have the impression that additional configuration has to be done on the Rancher side, but I can't find any info.
Any ideas?
I've also tried using the official Python library, but it seems outdated (and also returns the same error).
Every object in the Rancher API has an id. A project is like a group used to organize various namespaces and workloads.
There are three types of clients that are used to interact with the Rancher API:
Management (used to interact with general objects not tied to any cluster/project)
Cluster (used to interact with objects that are tied to a specific cluster)
Project (used to interact with objects that are tied to a specific project)
Sample code:
pip install git+https://github.com/rancher/client-python.git#master
import rancher

rancher_url = 'https://rancher.example.com/v3'
rancher_token = 'token-abcde:abcdefghijklmnopqrstuvwxyz0123456789'
rancher_verify_ssl_certs = True

# Management client: global objects such as clusters and projects
management_client = rancher.Client(
    url=rancher_url,
    token=rancher_token,
    verify=rancher_verify_ssl_certs
)

clusters_info = management_client.list_cluster(name='leo-downstream')
my_cluster = clusters_info.data[0]

projects_info = management_client.list_project(name='Default')
default_project = projects_info.data[0]

# Project client: points at the project's own API endpoint
default_project_url = default_project.links['self'] + '/schemas'
default_project_client = rancher.Client(url=default_project_url, token=rancher_token, verify=False)

containers = [
    {
        'name': 'one',
        'image': 'nginx',
    }
]

default_project_client.create_workload(
    name='test-workload-creation',
    namespaceId='default',
    scale=1,
    containers=containers
)
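If you'd rather stick with plain requests, Rancher API keys can, as far as I know, also be sent as HTTP basic auth credentials (access key as the username, secret key as the password). A minimal sketch, reusing the hypothetical URL and keys from the question:

import requests

api_url = "https://myrancherurl.com/v3/clusters"
access_key = "token-zmdpqs"
secret_key = "fr9v6z9xxfqdgmjv2k9z44zvx6mlrandomtoke"

# The API key pair doubles as basic auth credentials.
response = requests.get(api_url, auth=(access_key, secret_key))
response.raise_for_status()
print(response.json())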

create azure vm using python with generalized image in existing rg, vnet, subnet with no public ip

I am able to create a VM from a Shared Image Gallery image version via az cli commands with:
az vm create --resource-group $RG2 \
--name $VM_NAME --image $(az sig image-version show \
--resource-group $RG \
--gallery-name $SIG \
--gallery-image-definition $SIG_IMAGE_DEFINITION \
--gallery-image-version $VERSION \
--query id -o tsv) \
--size $SIZE \
--public-ip-address "" \
--assign-identity $(az identity show --resource-group $RG2 --name $IDENTITY --query id -o tsv) \
--ssh-key-values $SSH_KEY_PATH \
--authentication-type ssh \
--admin-username admin
This works great. I am trying to do the same with Python.
The examples I see create everything from scratch: resource groups, NICs, subnets, vnets, etc., but that is not what I need. I am literally trying to do what this az cli command does. Is there a way to do this with Python?
How do I set the public IP to nothing so one doesn't get created? I want it to use the vnet, subnet, etc. that the resource group already has defined, just like the az cli.
You can modify the example script in our doc to do this. Essentially, you need to get rid of step 4 and modify step 5 to not send a public IP when creating the NIC. This has been validated in my own subscription.
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute import ComputeManagementClient
import os

print(f"Provisioning a virtual machine...some operations might take a minute or two.")

# Acquire a credential object using CLI-based authentication.
credential = AzureCliCredential()

# Retrieve subscription ID from environment variable.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

# Step 1: Provision a resource group

# Obtain the management object for resources, using the credentials from the CLI login.
resource_client = ResourceManagementClient(credential, subscription_id)

# Constants we need in multiple places: the resource group name and the region
# in which we provision resources. You can change these values however you want.
RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
LOCATION = "westus2"

# Provision the resource group.
rg_result = resource_client.resource_groups.create_or_update(
    RESOURCE_GROUP_NAME,
    {
        "location": LOCATION
    }
)

print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")

# For details on the previous code, see Example: Provision a resource group
# at https://learn.microsoft.com/azure/developer/python/azure-sdk-example-resource-group

# Step 2: provision a virtual network

# A virtual machine requires a network interface client (NIC). A NIC requires
# a virtual network and subnet along with an IP address. Therefore we must provision
# these downstream components first, then provision the NIC, after which we
# can provision the VM.

# Network and IP address names
VNET_NAME = "python-example-vnet"
SUBNET_NAME = "python-example-subnet"
IP_NAME = "python-example-ip"
IP_CONFIG_NAME = "python-example-ip-config"
NIC_NAME = "python-example-nic"

# Obtain the management object for networks
network_client = NetworkManagementClient(credential, subscription_id)

# Provision the virtual network and wait for completion
poller = network_client.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    VNET_NAME,
    {
        "location": LOCATION,
        "address_space": {
            "address_prefixes": ["10.0.0.0/16"]
        }
    }
)

vnet_result = poller.result()

print(f"Provisioned virtual network {vnet_result.name} with address prefixes {vnet_result.address_space.address_prefixes}")

# Step 3: Provision the subnet and wait for completion
poller = network_client.subnets.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    VNET_NAME,
    SUBNET_NAME,
    {"address_prefix": "10.0.0.0/24"}
)
subnet_result = poller.result()

print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")

# Step 4: Provision an IP address and wait for completion
# Removed as not needed

# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    NIC_NAME,
    {
        "location": LOCATION,
        "ip_configurations": [{
            "name": IP_CONFIG_NAME,
            "subnet": {"id": subnet_result.id}
            # No "public_ip_address" entry here, so the NIC gets no public IP.
        }]
    }
)

nic_result = poller.result()

print(f"Provisioned network interface client {nic_result.name}")

# Step 6: Provision the virtual machine

# Obtain the management object for virtual machines
compute_client = ComputeManagementClient(credential, subscription_id)

VM_NAME = "ExampleVM"
USERNAME = "azureuser"
PASSWORD = "ChangePa$$w0rd24"

print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")

# Provision the VM with an Ubuntu Server image on a Standard DS1 v2 size,
# using the NIC created above (which has no public IP).
poller = compute_client.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP_NAME,
    VM_NAME,
    {
        "location": LOCATION,
        "storage_profile": {
            "image_reference": {
                "publisher": 'Canonical',
                "offer": "UbuntuServer",
                "sku": "16.04.0-LTS",
                "version": "latest"
            }
        },
        "hardware_profile": {
            "vm_size": "Standard_DS1_v2"
        },
        "os_profile": {
            "computer_name": VM_NAME,
            "admin_username": USERNAME,
            "admin_password": PASSWORD
        },
        "network_profile": {
            "network_interfaces": [{
                "id": nic_result.id,
            }]
        }
    }
)

vm_result = poller.result()

print(f"Provisioned virtual machine {vm_result.name}")
(From the asker) Here is how we adapted it to look up the existing vnet and subnet instead of creating new ones:
import random

resource_name = f"myserver{random.randint(1000, 9999)}"

# Names of the existing network resources in our resource group
VNET_NAME = "myteam-vpn-vnet"
SUBNET_NAME = "myteam-subnet"
IP_NAME = resource_name + "-ip"
IP_CONFIG_NAME = resource_name + "-ip-config"
NIC_NAME = resource_name + "-nic"

# Look up the existing subnet rather than creating one
subnet = network_client.subnets.get(resource_group_name, VNET_NAME, SUBNET_NAME)

# Step 5: Provision the network interface client
poller = network_client.network_interfaces.begin_create_or_update(
    resource_group_name,
    NIC_NAME,
    {
        "location": location,
        "ip_configurations": [{
            "name": IP_CONFIG_NAME,
            "subnet": {"id": subnet.id},
        }]
    }
)
nic_result = poller.result()
Yes, we removed step 4 and adjusted step 5 as suggested. The NIC was then referenced in the VM creation as such:
"network_profile": {
    "network_interfaces": [
        {
            "id": nic_result.id
        }
    ]
},
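To mirror the remaining parts of the az cli command (the Shared Image Gallery image and SSH-only auth), the VM call can reference the gallery image version by its resource ID. A sketch under stated assumptions: it reuses compute_client, resource_group_name, location, VM_NAME and nic_result from the snippets above, and the gallery ID and SSH key are placeholders:

# Placeholder resource ID of the Shared Image Gallery image version.
GALLERY_IMAGE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Compute/galleries/<gallery>/images/<definition>/versions/<version>"
)

poller = compute_client.virtual_machines.begin_create_or_update(
    resource_group_name,
    VM_NAME,
    {
        "location": location,
        # Reference the gallery image version by resource ID instead of
        # the publisher/offer/sku marketplace tuple.
        "storage_profile": {"image_reference": {"id": GALLERY_IMAGE_ID}},
        "hardware_profile": {"vm_size": "Standard_DS1_v2"},
        "os_profile": {
            "computer_name": VM_NAME,
            "admin_username": "azureuser",
            # SSH-key-only auth, mirroring --authentication-type ssh
            "linux_configuration": {
                "disable_password_authentication": True,
                "ssh": {"public_keys": [{
                    "path": "/home/azureuser/.ssh/authorized_keys",
                    "key_data": "<ssh-public-key>",
                }]},
            },
        },
        "network_profile": {"network_interfaces": [{"id": nic_result.id}]},
    },
)
vm_result = poller.result()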

Updating Azure Container Registry Public Access IP with Python

I've tried looking online but could not find an answer, as the documentation (and the API) for the Azure Python SDK is just horrible.
I have a Container Registry on Azure with a list of allowed IPs for public access. I'd like to modify that list by adding a new IP using Python.
I'm not sure the API supports it or how to achieve this using ContainerRegistryManagementClient.
Can't agree more that the documentation (and the API) for the Azure Python SDK is just horrible :)
If you want to add a list of allowed IPs for public access to your Container Registry on Azure, just try the code below using the REST API:
from azure.identity import ClientSecretCredential
import requests

TENANT_ID = ""
CLIENT_ID = ""
CLIENT_SECRET = ""
SUBSCRIPTION_ID = ""
GROUP_NAME = ""
REGISTRIES = ""

# your public ip list here
ALLOWED_IPS = [
    {
        "value": "167.220.255.1"
    },
    {
        "value": "167.220.255.2"
    }
]

clientCred = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
authResp = clientCred.get_token("https://management.azure.com/.default")

requestURL = 'https://management.azure.com/subscriptions/' + SUBSCRIPTION_ID + '/resourceGroups/' + GROUP_NAME + '/providers/Microsoft.ContainerRegistry/registries/' + REGISTRIES + '?api-version=2020-11-01-preview'
requestBody = {
    "properties": {
        "publicNetworkAccess": "Enabled",
        "networkRuleSet": {
            "defaultAction": "Deny",
            "virtualNetworkRules": [],
            "ipRules": ALLOWED_IPS
        },
        "networkRuleBypassOptions": "AzureServices"
    }
}

r = requests.patch(url=requestURL, json=requestBody, headers={"Authorization": "Bearer " + authResp.token})
print(r.text)
Before you run this, please make sure that your client app has been granted the required permissions (an Azure subscription role such as Contributor).
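For completeness, the management SDK can also do this without raw REST calls. A minimal sketch, assuming azure-mgmt-containerregistry is installed and reusing the placeholder names above; double-check the model names against your SDK version (older versions expose update() rather than begin_update()):

from azure.identity import ClientSecretCredential
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerregistry.models import (
    IPRule, NetworkRuleSet, RegistryUpdateParameters,
)

credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
client = ContainerRegistryManagementClient(credential, SUBSCRIPTION_ID)

# Replace the registry's IP allow-list with the new set of rules.
update = RegistryUpdateParameters(
    network_rule_set=NetworkRuleSet(
        default_action="Deny",
        ip_rules=[IPRule(ip_address_or_range=ip["value"]) for ip in ALLOWED_IPS],
    )
)
client.registries.begin_update(GROUP_NAME, REGISTRIES, update).result()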

Deploy App Engine version from Python with service account

Trying to deploy an App Engine instance from Python by using a service account. The goal is to spin up a lot of instances, do some heavy network tasks (download and upload files) and shut them down afterwards.
I'm trying to do it with a service account from the Python runtime, but I'm getting the following error:
TypeError: Missing required parameter "servicesId"
What could be wrong, or is there a better solution for such a task? Thanks. The code is below:
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'service.json'

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
gcp = build('appengine', 'v1', credentials=credentials)
res = gcp.apps().create(body={"id": "251499913983"})
app_json = {
    "deployment": {
        "files": {
            "my-resource-file1": {
                "sourceUrl": "https://storage.googleapis.com/downloader_sources/hello-world/main.py"
            }
        }
    },
    "handlers": [
        {
            "script": {
                "scriptPath": "main.app"
            },
            "urlRegex": "/.*"
        }
    ],
    "runtime": "python27",
    "threadsafe": True
}
res2 = gcp.apps().services().versions().create(body=app_json)
I guess you need to specify the service you want to deploy to. You could use default:
gcp.apps().services().versions().create(appsId='251499913983', servicesId='default', body=app_json).execute()
See the docs for more details.

How to create a commit and push into repo with GitHub API v3?

I want to create a repository and commit a few files to it via any Python package. How do I do that?
I do not understand how to add files for a commit.
Solution using the requests library:
NOTE: I use the requests library to make the calls to the GitHub REST API v3.
1. Get the last commit SHA of a specific branch
# GET /repos/:owner/:repo/branches/:branch_name
last_commit_sha = response.json()['commit']['sha']
2. Create the blobs with the file's content (encoding base64 or utf-8)
# POST /repos/:owner/:repo/git/blobs
# {
# "content": "aGVsbG8gd29ybGQK",
# "encoding": "base64"
# }
base64_blob_sha = response.json()['sha']
# POST /repos/:owner/:repo/git/blobs
# {
# "content": "hello world",
# "encoding": "utf-8"
# }
utf8_blob_sha = response.json()['sha']
3. Create a tree which defines the folder structure
# POST /repos/:owner/:repo/git/trees
# {
# "base_tree": last_commit_sha,
# "tree": [
# {
# "path": "myfolder/base64file.txt",
# "mode": "100644",
# "type": "blob",
# "sha": base64_blob_sha
# },
# {
# "path": "file-utf8.txt",
# "mode": "100644",
# "type": "blob",
# "sha": utf8_blob_sha
# }
# ]
# }
tree_sha = response.json()['sha']
4. Create the commit
# POST /repos/:owner/:repo/git/commits
# {
# "message": "Add new files at once programatically",
# "author": {
# "name": "Jan-Michael Vincent",
# "email": "JanQuadrantVincent16#rickandmorty.com"
# },
# "parents": [
# last_commit_sha
# ],
# "tree": tree_sha
# }
new_commit_sha = response.json()['sha']
5. Update the reference of your branch to point to the new commit SHA (example on the master branch)
# PATCH /repos/:owner/:repo/git/refs/heads/master
# {
# "sha": new_commit_sha
# }
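Putting those five calls together with requests might look like the sketch below (owner, repo, branch, and token are placeholders; each response's sha feeds the next step):

import requests

OWNER, REPO, TOKEN = "octocat", "hello-world", "ghp_yourtoken"
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
headers = {"Authorization": f"token {TOKEN}"}

# 1. Last commit SHA of the branch
last_commit_sha = requests.get(f"{API}/branches/master", headers=headers).json()["commit"]["sha"]

# 2. Create a blob with the file content
blob_sha = requests.post(f"{API}/git/blobs", headers=headers,
                         json={"content": "hello world", "encoding": "utf-8"}).json()["sha"]

# 3. Create a tree that places the blob at a path
tree_sha = requests.post(f"{API}/git/trees", headers=headers,
                         json={"base_tree": last_commit_sha,
                               "tree": [{"path": "file-utf8.txt", "mode": "100644",
                                         "type": "blob", "sha": blob_sha}]}).json()["sha"]

# 4. Create the commit object
new_commit_sha = requests.post(f"{API}/git/commits", headers=headers,
                               json={"message": "Add new files at once programmatically",
                                     "parents": [last_commit_sha],
                                     "tree": tree_sha}).json()["sha"]

# 5. Move the branch ref to the new commit
requests.patch(f"{API}/git/refs/heads/master", headers=headers,
               json={"sha": new_commit_sha})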
Finally, for a more advanced setup read the docs.
You can see if the newer GitHub CRUD API (May 2013) can help:
The repository contents API has allowed reading files for a while. Now you can easily commit changes to single files, just like you can in the web UI.
Starting today, these methods are available to you:
File Create
File Update
File Delete
Here is a complete snippet:
import base64
import json

import requests

def push_to_github(filename, repo, branch, token):
    url = "https://api.github.com/repos/" + repo + "/contents/" + filename
    base64content = base64.b64encode(open(filename, "rb").read())
    # Fetch the current file to get its SHA and content
    data = requests.get(url + '?ref=' + branch, headers={"Authorization": "token " + token}).json()
    sha = data['sha']
    if base64content.decode('utf-8') + "\n" != data['content']:
        message = json.dumps({
            "message": "update",
            "branch": branch,
            "content": base64content.decode("utf-8"),
            "sha": sha
        })
        resp = requests.put(url, data=message, headers={"Content-Type": "application/json", "Authorization": "token " + token})
        print(resp)
    else:
        print("nothing to update")

token = "lskdlfszezeirzoherkzjehrkzjrzerzer"
filename = "foo.txt"
repo = "you/test"
branch = "master"
push_to_github(filename, repo, branch, token)
GitHub provides a Git database API that gives you access to read and write raw objects and to list and update your references (branch heads and tags). For a better understanding of the topic, I would highly recommend reading the Git Internals chapter of the Pro Git book.
As per the documentation, it is a seven-step process to commit a change to a file in your repository:
get the current commit object
retrieve the tree it points to
retrieve the content of the blob object that tree has for that particular file path
change the content somehow and post a new blob object with that new content, getting a blob SHA back
post a new tree object with that file path pointer replaced with your new blob SHA getting a tree SHA back
create a new commit object with the current commit SHA as the parent and the new tree SHA, getting a commit SHA back
update the reference of your branch to point to the new commit SHA
This blog post does a great job of explaining the process using Perl. For a Python implementation, you can use the PyGithub library.
Based on the previous answer, here is a complete example. Note that you use POST on /git/refs to create a reference for a new branch, and PATCH on /git/refs/{ref} to update an existing one.
import base64
import http.client
import json
import urllib.parse

GITHUB_TOKEN = "WHATEVERWILLBEWILLBE"
def github_request(method, url, headers=None, data=None, params=None):
    """Execute a request to the GitHub API, handling redirects."""
    if not headers:
        headers = {}
    headers.update({
        "User-Agent": "Agent 007",
        "Authorization": "Bearer " + GITHUB_TOKEN,
    })
    url_parsed = urllib.parse.urlparse(url)
    url_path = url_parsed.path
    if params:
        url_path += "?" + urllib.parse.urlencode(params)
    data = data and json.dumps(data)
    # Default to the GitHub API host when a bare path is passed in.
    conn = http.client.HTTPSConnection(url_parsed.hostname or "api.github.com")
    conn.request(method, url_path, body=data, headers=headers)
    response = conn.getresponse()
    if response.status == 302:
        return github_request(method, response.headers["Location"])
    if response.status >= 400:
        headers.pop('Authorization', None)
        raise Exception(
            f"Error: {response.status} - {json.loads(response.read())} - {method} - {url} - {data} - {headers}"
        )
    return (response, json.loads(response.read().decode()))

def upload_to_github(repository, src, dst, author_name, author_email, git_message, branch="heads/master"):
    # Get the last commit SHA of the branch
    resp, jeez = github_request("GET", f"/repos/{repository}/git/ref/{branch}")
    last_commit_sha = jeez["object"]["sha"]
    print("Last commit SHA: " + last_commit_sha)
    # Create a blob with the file content (base64-encoded)
    base64content = base64.b64encode(open(src, "rb").read())
    resp, jeez = github_request(
        "POST",
        f"/repos/{repository}/git/blobs",
        data={
            "content": base64content.decode(),
            "encoding": "base64"
        },
    )
    blob_content_sha = jeez["sha"]
    # Create a tree that adds the blob at the destination path
    resp, jeez = github_request(
        "POST",
        f"/repos/{repository}/git/trees",
        data={
            "base_tree": last_commit_sha,
            "tree": [{
                "path": dst,
                "mode": "100644",
                "type": "blob",
                "sha": blob_content_sha,
            }],
        },
    )
    tree_sha = jeez["sha"]
    # Create the commit object
    resp, jeez = github_request(
        "POST",
        f"/repos/{repository}/git/commits",
        data={
            "message": git_message,
            "author": {
                "name": author_name,
                "email": author_email,
            },
            "parents": [last_commit_sha],
            "tree": tree_sha,
        },
    )
    new_commit_sha = jeez["sha"]
    # Move the branch ref to the new commit
    resp, jeez = github_request(
        "PATCH",
        f"/repos/{repository}/git/refs/{branch}",
        data={"sha": new_commit_sha},
    )
    return (resp, jeez)
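A hypothetical invocation (the repository, paths, and author are placeholders):

resp, jeez = upload_to_github(
    "octocat/hello-world",           # owner/repo (placeholder)
    "local-file.txt",                # source path on disk
    "docs/remote-file.txt",          # destination path in the repo
    "Jane Doe", "jane@example.com",  # commit author
    "Upload docs via the API",
)
print(resp.status)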
I'm on Google App Engine (GAE), so besides Python, I can create a new file, update it, and even delete it via a commit and push into my repo on GitHub with the GitHub API v3 in PHP, Java, and Go.
Having checked and reviewed some of the available third-party libraries for this (like the example script presented in Perl), I would recommend the following:
PyGithub (Python)
GitHub API for php (PHP)
GitHub API for Java (Java)
go-github (Go)
As you're aware, you get one site per GitHub account and organization, and unlimited project sites, where the websites are hosted directly from your repo and powered by Jekyll by default.
Combining Jekyll, webhooks, and a GitHub API script on GAE, along with appropriate GAE settings, opens up wide possibilities, like calling an external script and creating a dynamic page on GitHub.
Other than GAE, there is also the option to run it on Heroku: use JekyllBot, which lives on a (free) Heroku instance, to silently generate JSON files for each post and push the changes back to GitHub.
I have created an example of committing multiple files using Python:
import datetime
import os

import github

# If you run this example using your personal token the commit is not going to be verified.
# It only works for commits made using a token generated for a bot/app
# during the workflow job execution.
def main(repo_token, branch):
    gh = github.Github(repo_token)
    repository = "josecelano/pygithub"
    remote_repo = gh.get_repo(repository)
    # Update files:
    #   data/example-04/latest_datetime_01.txt
    #   data/example-04/latest_datetime_02.txt
    # with the current date.
    file_to_update_01 = "data/example-04/latest_datetime_01.txt"
    file_to_update_02 = "data/example-04/latest_datetime_02.txt"
    now = datetime.datetime.now()
    file_to_update_01_content = str(now)
    file_to_update_02_content = str(now)
    # Create one blob per file and wrap each in a tree element
    blob1 = remote_repo.create_git_blob(file_to_update_01_content, "utf-8")
    element1 = github.InputGitTreeElement(
        path=file_to_update_01, mode='100644', type='blob', sha=blob1.sha)
    blob2 = remote_repo.create_git_blob(file_to_update_02_content, "utf-8")
    element2 = github.InputGitTreeElement(
        path=file_to_update_02, mode='100644', type='blob', sha=blob2.sha)
    commit_message = f'Example 04: update datetime to {now}'
    # Build a tree on top of the branch head and commit it
    branch_sha = remote_repo.get_branch(branch).commit.sha
    base_tree = remote_repo.get_git_tree(sha=branch_sha)
    tree = remote_repo.create_git_tree([element1, element2], base_tree)
    parent = remote_repo.get_git_commit(sha=branch_sha)
    commit = remote_repo.create_git_commit(commit_message, tree, [parent])
    # Move the branch ref to the new commit
    branch_refs = remote_repo.get_git_ref(f'heads/{branch}')
    branch_refs.edit(sha=commit.sha)
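A hypothetical way to run it, e.g. in a CI job where a token is exposed via an environment variable (the variable name and branch are assumptions):

if __name__ == "__main__":
    # GITHUB_TOKEN is an assumed environment variable name.
    main(os.environ["GITHUB_TOKEN"], "main")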
