I want to link Amazon API Gateway to a function on my EC2 instance but am finding little online about how to do this.
Currently I have set up the API call as follows:
Can anyone shed any light on how I could connect the API call to my Python function called 'test.py' in the root folder of my EC2 instance?
I suppose you might be able to do this with the AWS Run Command service, but it is a weird way of doing things. The AWS Service Proxy proxies the AWS API. So telling it to proxy the AWS EC2 service exposes the AWS API for managing EC2 instances. Managing EC2 instances includes things like creating and deleting servers. It does not include things like initiating an SSH connection to the server, logging into the server, and then running a command on it.
The standard way to run a script on a server via API Gateway is to expose that script via a web server on the EC2 server, and then have API Gateway hit the appropriate URL.
API Gateway cannot directly execute a Python function sitting on the file system of your EC2 instance. API Gateway can only interact with EC2 instances via HTTP/HTTPS endpoints. If you must run your Python function on an EC2 instance, then you'll need to run a web server or application server on your EC2 instance and set it up to execute your Python function when it gets a request on a specific path. Then set up your API Gateway HTTP integration endpoint to use that path.
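For example, a minimal sketch of such a web server using Flask, assuming test.py exposes a callable named run() (that function name is an assumption, not something from your post):

# app.py on the EC2 instance -- a minimal sketch; module/function names are illustrative
from flask import Flask, jsonify
import test  # your existing test.py, assumed to expose a run() function

app = Flask(__name__)

@app.route("/run-test", methods=["GET"])
def run_test():
    result = test.run()  # call into your existing code
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

You would then point the API Gateway HTTP integration at http://<instance-public-dns>:8080/run-test.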
If you just need to execute this Python function and don't necessarily need it to run on this EC2 instance, then you could set up a Lambda function containing your Python function and set up your API Gateway to call the Lambda function. Using the Lambda approach means that you don't need to manage the EC2 instance. Also, for low-volume use cases, Lambda can be much more cost effective than running a dedicated EC2 instance.
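As a rough sketch, the Lambda version could look like this (do_work() is just a placeholder for whatever test.py currently does):

import json

def lambda_handler(event, context):
    # Placeholder for the logic currently in test.py
    result = do_work()
    return {
        "statusCode": 200,
        "body": json.dumps({"result": result}),
    }

def do_work():
    # move the body of your existing function here
    return "ok"

With a Lambda proxy integration, API Gateway passes the request details in the event and returns the statusCode/body you produce.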
You can do it by invoking the Systems Manager "Send Command" action from an API Gateway Integration Request. The EC2 instance has to be managed by SSM, and an instance role with SSM permissions has to be associated with your EC2 instance.
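For reference, the call the integration ends up making is ssm:SendCommand; a boto3 sketch of the equivalent request (the instance ID and script path below are placeholders) looks like this:

import boto3

ssm = boto3.client("ssm")
# Run the script on the instance via SSM Run Command; instance ID and path are placeholders
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["python3 /test.py"]},
)
print(response["Command"]["CommandId"])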
I am trying to run reportportal on my AWS EC2 instance and connect it to my pytest framework.
I have followed the guide available at reportportal.io on installation.
ReportPortal Installation Guide
ReportPortal Pytest Integration
Locally I am able to get reportportal up and running and also able to log the test results.
I am able to access the ReportPortal running on the AWS EC2 instance in a browser using the public IP (Elastic IP) assigned to the instance.
I am also able to log the test results to the ReportPortal running on AWS EC2 using SSH tunneling. I am not able to understand what the issue is and why it fails to log directly using the rp_uuid and other info taken from the ReportPortal instance running on AWS EC2.
If anyone has got it up and running successfully, let me know what additional config changes are needed or what I am doing wrong.
I am adapting my client's test code, which works fine against their app outside of GKE. But when running inside a GKE cluster and accessing the endpoint IP, I get:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot post path \"/\"","reason":"Forbidden","details":{},"code":403}
I understand that I will need a service account when this goes into production, but right now I am just trying to run from the command line while logged in with:
gcloud auth login
How do I do this? I tried binding my login ID to an auth role, but that seems not to have done it.
system:anonymous means that an unauthenticated user is trying to reach a resource on your cluster, which is forbidden. You need to create an RBAC role and role binding on the GKE cluster that grants your identity the required access.
As mentioned by @Gari Singh, you might be trying to access the Kubernetes API endpoint and not the endpoint for your app. To access your app's endpoint, you need to connect to the application running on a Pod.
Steps to access the endpoint from your app:
To deploy your app to the GKE cluster you created, you need two Kubernetes objects.
A Deployment to define your app.
A Service to define how to access your app.
Deploy an app:
The app has a frontend server that handles the web requests. You define the cluster resources needed to run the frontend in a new file called deployment.yaml. These resources are described as a Deployment. You use Deployments to create and update a ReplicaSet and its associated Pods.
Deploy a Service:
Services provide a single point of access to a set of Pods. While it's possible to access a single Pod, Pods are ephemeral and can only be accessed reliably by using a Service address. In your app, the Service defines a load balancer to access the app Pods from a single IP address. This Service is defined in the service.yaml file. Get the external IP address of the Service by using the command kubectl get services.
View a deployed app:
Use the external IP address from the previous step to load the app in your web browser, and see your running app: http://EXTERNAL_IP
Or, you can make a curl call to the external IP address of the Service: curl http://EXTERNAL_IP. This will be your application endpoint.
Refer to Deploying a language-specific app and Exposing applications using Services for more information.
I am writing a Python script to determine which EC2 instances have CloudWatch agents installed and which do not. I got some information from CloudWatch Agent Troubleshooting but don't know how to implement it programmatically. Do I use SSM, EC2, or something else?
I'm not aware of any external visibility into the status of a particular EC2 instance's CloudWatch agent.
If your EC2 instances have the SSM Agent preinstalled, then you could use boto3 to invoke SSM Run Command to run a collector script on each instance (example).
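A rough sketch of that approach with boto3, assuming Linux instances with the CloudWatch agent's control script at its default install path (both assumptions):

import boto3

ssm = boto3.client("ssm")

# Only instances managed by SSM show up here and can be queried this way
for inst in ssm.describe_instance_information()["InstanceInformationList"]:
    # Ask the instance for its CloudWatch agent status; the path assumes a default Linux install
    resp = ssm.send_command(
        InstanceIds=[inst["InstanceId"]],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": [
            "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a status"
        ]},
    )
    print(inst["InstanceId"], resp["Command"]["CommandId"])

You would then poll ssm.get_command_invocation() with each command ID and instance ID to read the status output.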
I think the boto3 function send_command is what I am looking for. Thanks, everybody.
In the Azure Portal I can set auto-shutdown for a VM, but I can't find the corresponding API call in the Python SDK anywhere. Is this possible at the moment? Do I have to use DevTest Labs? The SDK in question: https://github.com/Azure/azure-sdk-for-python
1. The auto-shutdown in the portal is configured as part of the VM's deployment template. If you updated the template, you could certainly deploy it in whatever way you choose.
2. You could schedule a script using the REST API in Azure to start/stop the VM on any schedule of your choosing. The script could be deployed on WebJobs, Azure Functions, or Azure Scheduler.
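For option 2, a minimal sketch of the stop/deallocate call with the Python SDK (azure-identity plus azure-mgmt-compute); the subscription, resource group, and VM names are placeholders, and older SDK versions use deallocate() instead of begin_deallocate():

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders -- substitute your own subscription ID, resource group, and VM name
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.virtual_machines.begin_deallocate("<resource-group>", "<vm-name>")
poller.wait()  # deallocating stops compute billing, unlike powering off from inside the OS

You would run this on whatever schedule you choose, for example from an Azure Function with a timer trigger.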
So when I run the application on the Amazon instance, it tries to get the data from Google, as shown in this image.
When it goes to the callback, this is the output I get.
I have run a Python application with Google OAuth2 successfully locally. Gunicorn serves HTTP and Google OAuth2 requires HTTPS. Locally I used certificates I had generated and it worked successfully, but when I try to deploy it on an Amazon EC2 instance, it doesn't work. Has anyone faced this kind of problem? Will using nginx be helpful in that case?