How to debug Dockerfiles when using gcloud and Google App Engine - Python

I am using Google App Engine, PyCharm 4.0.4, and a gcloud managed VM.
I am attempting to get more information about the build of my Dockerfile than is given by the console output when running locally with dev_appserver.py.
medusavm is a Python Linux console app that can convert Python code to Dart code, among other things.
In my Dockerfile, medusavm installs without a hitch. I have managed to set up debugging with PyCharm, so I have breakpoint debugging available if that is required.
The problem is running medusavm: even though it installs cleanly, at the moment I can only get information from the console. I am running a local debug using the gcloud version of dev_appserver.py, accessed from:
C:/Program Files/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/dev_appserver.py
I have successfully run the gcloud GAE tutorial app using this exact setup, so I think the problem lies in the modifications I made to my Dockerfile to install medusavm.
A link to the text of my Dockerfile is here, and the console output from building the Dockerfile (the part I would like more information on for debugging, if you know how) is here.
If, miraculously, you know the problem I am facing (which I currently do not, other than that it originates from medusavm not installing properly), that would be incredibly helpful!
Also, if you happen to know how to debug Dockerfiles while using gcloud and GAE, I would greatly appreciate that too.
Thank you for taking the time to read this.
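
One way to get more build output than dev_appserver.py shows is to build and inspect the image with the Docker CLI directly; a minimal sketch, assuming the Docker CLI is available on the machine and using a hypothetical tag debug-image:

docker build -t debug-image .          # prints every build step in full
docker run --rm -it debug-image bash   # open a shell inside the built image

With the classic Docker builder, a failed build also prints the ID of the last successfully built intermediate layer, which can be run the same way to inspect the state just before the failure.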

Related

Firebase Cloud Functions running a python script - needs dependencies

I'm building a website with React and Firebase that utilizes an algorithm I wrote in Python. The database and authentication for the project are both handled by Firebase, so I would like to keep the cloud functions in that same ecosystem if possible.
Right now, I'm using the python-shell npm package to send and receive data between Node.js and my Python script.
I have local unit testing set up so I can test the https.onCall functions locally without needing to deploy and test from the client.
When I am testing locally, everything works perfectly.
However, when I push the functions to the cloud and trigger the function from the client, the logs in the Firebase console show that the Python script is missing dependencies.
What is the best way to ensure that the script has all the dependencies available to it up on the server?
I have tried:
- Copying the actual dependency folders from my library/.../site-packages and putting them in the same directory, under the /functions folder, with the Python script. This almost works, but I run into an issue with numpy: "No module named 'numpy.core._multiarray_umath'" is printed to the Firebase logs.
I apologize if this is an obvious answer. I'm new to Python, and the solutions I've found online seem way too elaborate or involve hosting the Python code in another ecosystem (like AWS or Heroku). I am especially hesitant to go to all that work because it runs fine locally. If I can just find a way to send the dependencies up with the script, I'm good to go.
Please let me know if you need any more information.
the logs in the Firebase console show that the python script is missing dependencies.
That's because the Node.js runtime targeted by the Firebase CLI doesn't have everything you need to run Python programs.
If you need to run a function that's primarily written in Python, you should not use the Firebase CLI; instead, use the Google Cloud tools to target the Python runtime, which should do everything you want. Yes, it may be extra work to learn new tools, and you will not be able to use the Firebase CLI, but it is the right way to run Python in Cloud Functions.
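
To make that concrete, here is a minimal sketch (mine, not the answerer's) of an HTTP Cloud Function on the Python runtime; the function name run_algorithm and the payload shape are hypothetical. Dependencies go in a requirements.txt next to the source, so numpy is installed on the server rather than uploaded by hand:

# main.py
import json
import numpy as np

def run_algorithm(request):
    # HTTP Cloud Function: 'request' is a flask.Request.
    data = request.get_json(silent=True) or {}
    values = np.asarray(data.get("values", []), dtype=float)
    mean = float(values.mean()) if values.size else 0.0
    return json.dumps({"mean": mean})

# requirements.txt, in the same directory:
#   numpy

Deployed with something like:

gcloud functions deploy run_algorithm --runtime python39 --trigger-http --allow-unauthenticated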

Deploying a Python application

I have created, in the PyCharm IDE, a Python application containing REST APIs that call machine learning code. I want to deploy the REST APIs on IIS.
I copied and pasted the complete PyCharm project into the virtual directory. The issue I am facing is that dependencies like TensorFlow and Keras are not being found, due to which the API gives "Internal server error"; however, I am able to call the REST services.
Please advise.
As FishingCode suggested, please share a code snippet to help pinpoint the exact issue.
Please include a requirements.txt file and install it in the virtual environment.
Also, you may try installing using the Docker tutorial.
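
A minimal sketch of that requirements.txt workflow (standard pip commands, not from the answer): freeze the dependencies where the app already works, then install them into the virtual environment that IIS uses:

# on the development machine, inside the project's environment
pip freeze > requirements.txt

# on the server, inside the virtual environment behind IIS
pip install -r requirements.txt

This keeps TensorFlow, Keras, and their transitive dependencies pinned to the versions the code was developed against.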

Is it possible to have a Heroku app that is just a Python console?

I have been writing a pretty simple Python quiz system (called game.py), and I am working to deploy it on Heroku. The app functions exclusively within the confines of a Python console, with no interface of any kind but that provided by a terminal.
As such, I would like to be able to have the application on Heroku simply be akin to what you obtain with a one-off dyno, available on the dashboard (or in a terminal with the CLI) with:
heroku run python game.py
The application works perfectly well in its deployed form (exclusively from the Heroku git) and locally, but for the app to be available to a larger public, I would need such a console to appear at the "https://[appname].herokuapp.com/" URL that you are given on deployment of the app.
Naively, I would think this to be unspeakably simple to pull off, but I have yet to find a way to do it.
The only reasonable approach I have found is to create a Procfile, but lacking documentation on the commands available, I have only been able to try variations of:
web: run python game.py
Which doesn't create a web console. And:
web: bash
Which simply crashes with error code H10, with no other information given.
Any help, any suggestion, any workaround you can think of would be extremely appreciated.
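
For context on that H10 (an explanation of Heroku's contract, not something from the thread): Heroku only routes HTTP traffic to a web process that binds to the port given in the $PORT environment variable shortly after start; bash and a stdin-driven script never bind a port, so the dyno is deemed crashed and requests return H10. Any public-facing version of the quiz therefore needs some web layer in front of it. A minimal sketch of the binding contract itself, with a hypothetical file name:

# server.py -- the bare minimum a Heroku "web" process must do
import os
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"A web front end for the quiz would go here.\n")

port = int(os.environ.get("PORT", 8000))  # Heroku injects PORT at runtime
HTTPServer(("", port), Hello).serve_forever()

with a Procfile of:

web: python server.py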

How do I include dependencies for embedded console apps when using Run From Package

I'm deploying my Azure Function app using a CI/CD pipeline in Azure DevOps. The function invokes three console applications that are included in the package. One of the console applications is a standalone .exe; it works without issue. The other two depend on a number of DLLs that are also included in the package. This setup works well on my local machine, and when deployed using WebDeploy.
When instead deploying using Run From Package to a freshly created Function App Service, the function app itself loads fine, as does the standalone .exe console app, but both console apps that have DLL dependencies fail to run, and both return exit code 0xC0000135 to my function app (STATUS_DLL_NOT_FOUND, indicating that a DLL failed to load).
Now, if I deploy once using WebDeploy and then deploy again using Run From Package, I get the latest build installed - and the console apps now work (!). I think this might be because the .exe cannot access the virtual file system when loading the DLLs; is this correct?
I could stick with WebDeploy, but I really want to use the package deploy, since the cold-start time is much faster during scale-out (I will need 100+ instances in production). I am also concerned that this way the app actually needs to copy both the zip package and the site structure under wwwroot, causing additional overhead.
What is the best way to include dependencies such as DLLs in a package when using Run From Package with Azure Functions?
(The function app is v3, built using .NET Core 3.1)

Unable to find/modify the Dockerfile of a Google App Engine Managed VM that uses a standard runtime (python27)

I want to modify the Dockerfile of a Google App Engine managed VM that uses a standard runtime (python27).
I want to do this to add a C++ library that needs to be called to implement an HTTP request. This library is pretty much the only addition I need to the sandboxed python27 runtime.
The documentation makes it quite clear that this is possible:
Each standard runtime uses a default Dockerfile, which is supplied by the SDK. You can extend and enhance a standard runtime by adding new docker commands to this file.
Elsewhere they say that the Dockerfile of a standard runtime will be generated in the project directory:
When you use gcloud to run or deploy a managed VM application based on a standard runtime (in this case Python27), the SDK will create a minimal Dockerfile using the standard runtime as a base image. You'll find this Dockerfile in your project directory...
This is the one I am supposed to modify according to the same page:
Later steps in this tutorial will show you how to extend the capabilities of your runtime environment by adding instructions to the Dockerfile.
The problem is that when I do run my application on the dev server, I cannot find the Dockerfile anywhere, so I can't make any changes to it.
Has anyone managed to modify the standard runtime Dockerfile for Google App Engine? Any help would be appreciated.
When using google-api-python-client I had the same issue, because I needed pycrypto. I always got the error:
CryptoUnavailableError: No crypto library available
To solve this I created an instance start handler that installs all needed libs. It's ugly but it works.
app.yaml:
handlers:
- url: /_ah/start
  script: start_handler.app
start_handler.py:
import webapp2
import logging
import os

class StartHandler(webapp2.RequestHandler):
    def execute(self, cmd):
        # Run a shell command and log its combined stdout/stderr.
        logging.info(os.popen("%s 2>&1" % cmd).read())

    def get(self):
        # Only install the packages on the deployed VM, never on the dev server.
        if not os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
            self.execute("apt-get update")
            self.execute("apt-get -y install build-essential libssl-dev libffi-dev python-dev")
            self.execute("pip install cryptography")
            self.execute("pip install pyopenssl")

app = webapp2.WSGIApplication([
    ('/_ah/start', StartHandler)
], debug=True)
It seems the Dockerfile is generated only when using gcloud preview app run, and not dev_appserver.py, which is what I was using.
However, I am still not able to modify the Dockerfile and run a custom managed VM; but that is a separate error (related to --custom_entrypoint).
This whole situation is a nightmare fueled by atrocious documentation and support. A warning for other developers considering Google App Engine.
It turns out that extending the Dockerfile in your app does not work the way the documentation purports (Link). In fact, if a Dockerfile is present you will get the following error:
"ERROR: (gcloud.preview.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /[...]/app.yaml is currently set to [runtime: python27]. To use your Dockerfile to build a custom runtime, set the runtime field in [...]/app.yaml to [runtime: custom]. To continue using the [python27] runtime, please omit the Dockerfile from this directory"
The only way I've been able to use a customized Dockerfile is using a custom runtime.
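For reference, a minimal sketch of the custom-runtime setup (mine, not from the answer; the base image name is the managed-VM-era python-compat image and should be checked against the docs for your SDK version):

# app.yaml
runtime: custom
vm: true

# Dockerfile
FROM gcr.io/google_appengine/python-compat
# custom additions go here, e.g. installing the C++ library:
RUN apt-get update && apt-get -y install build-essential
ADD . /app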
Google has a really good GitHub example for deploying Django to a managed VM using a custom Python runtime (here).
Since you're using the custom runtime you'll have to implement health checking yourself. However, if you need to access Google APIs, Google has an example of how to set that up on GitHub (here).
For help implementing health checking, or integrating with Google APIs you can follow the Google Compute Engine, Getting Started series of tutorials (here).
