I have been experimenting with Python Cloud Functions. One of my cloud functions utilizes a large text file that I would love to bundle with my .py file when I deploy it. The docs are fairly limited on this topic.
https://cloud.google.com/functions/docs/quickstart
I was wondering: would I just include that file and a requirements.txt in the same directory as my function when deploying, or do I have to somehow require it in my code?
Also, is there any information on how to use a database trigger instead of an HTTP trigger? I was trying to work out whether my file didn't get included because I had defined the trigger the wrong way. How would you create an onCreate trigger, or something like that?
gcloud beta functions deploy hello_get --runtime python37 --trigger-http
When you deploy your function using gcloud, your project directory is zipped and uploaded. Any files contained in the current directory (and child directories) will be uploaded as well, which includes static assets like your large text file.
Just make sure your text file is in your project directory (or one of the child directories), then use a relative reference to your file in your Python code. It should just work.
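For example, a minimal sketch, assuming the text file is named data.txt and lives next to main.py (both names are placeholders):
import os

# Resolve the bundled file relative to this source file, not the working directory
DATA_PATH = os.path.join(os.path.dirname(__file__), 'data.txt')

def hello_get(request):
    with open(DATA_PATH) as f:
        data = f.read()
    return 'Loaded {} characters'.format(len(data))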
We have a Python application on a Windows Azure VM that reads data from an API and loads the data into an on-prem DB. The application works just fine, and the code is source-controlled in an Azure DevOps repo. The current deployment process is for someone to pull the main branch and copy the application from their local machine to c:\someapplication\ on the STAGE/PROD server. We would like to automate this process. There are plenty of tutorials on how to build API and web applications, but they require an Azure subscription and app name (which we don't have). My two questions are:
Is there a way to do a simple copy to the c:\someapplication folder from Azure DevOps for this application?
If there is, would the copy solution be the best approach, or should I consider something else?
Should I simply clone the main repo to each folder location above and then automate the git pull via the Azure pipeline? Any advice or links would be greatly appreciated.
According to your description, you could try the CopyFiles@2 task: set the local folder up as a shared folder and use it as the TargetFolder (the target folder or UNC path that will contain the copied files).
The YAML looks like this:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)' # string. Source Folder.
    Contents: '**' # string. Required. Contents. Default: **.
    TargetFolder: 'c:\someapplication' # string. Required. Target Folder.
Please check if it meets your requirements.
I've worked with deployment manager at small scale before.
However, things are getting bigger and I want to switch from Jinja templates to Python.
I was wondering whether it is also possible to define the main config.yaml itself in Python, and not only the templates.
I hoped to see something in the official samples.
I read the guide for Deployment Manager at scale, where it looks like only Python files are used, not YAML.
But unfortunately the links to GitHub end up as 404s.
The following makes me think the main file could be a python file:
gcloud deployment-manager deployments create hierarchy-org-example-dev \
    --template env_demo_project.py --properties=envName:dev
gcloud deployment-manager deployments create hierarchy-org-example-test \
    --template env_demo_project.py --properties=envName:test
gcloud deployment-manager deployments create hierarchy-org-example-prod \
    --template env_demo_project.py --properties=envName:prod
Does anyone know where these configuration_hierarchy samples went, or can anyone provide me with another example?
You can't have your main file in Python. It has to be YAML:
A configuration file defines all the Google Cloud Platform resources that make up a deployment. You must have a configuration file to create a deployment. A configuration file must be written in YAML syntax.
However, as you noticed, your template files can be in Python:
Templates can be written in either Jinja 2.10.x or Python 3.x. Jinja maps more closely to the YAML syntax, so it might be easier to write templates in Jinja if you are more familiar with YAML.
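For reference, a minimal Python template exposes a generate_config(context) function that returns a dict of resources. A sketch along the lines of the env_demo_project.py calls above (the bucket resource is purely illustrative):
def generate_config(context):
    # context.properties holds the values passed via --properties (e.g. envName)
    env = context.properties['envName']
    resources = [{
        'name': 'demo-bucket-' + env,  # placeholder resource name
        'type': 'storage.v1.bucket',
        'properties': {'location': 'US'},
    }]
    return {'resources': resources}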
I have a Django app deployed on an Azure Web App, and I want to dynamically create webjobs. More precisely, when I save a Django model called Importer, I want to be able to create a new webjob, made of 3 files:
run.py : the Python webjob
settings.job : the cron schedule
config.ini : a config file for the webjob
The content of the "settings.job" & "config.ini" comes from an admin form and is specific to the webjob.
When I save my model, the code creates a new directory in App_Data\jobs\triggered\{my job name} and copies run.py there from my app directory.
This works. However, when I try to create a new text file called settings.job in the job directory and write the cron schedule to it, I get a server error.
I tried many things, but the following basic test causes a failure:
with open('test.txt', 'w') as f:
    f.write('this is a test')
It seems that I don't have the right to write a file to the disk. How can that be solved?
Also, I want to be able to modify the content of the config and settings.job files when I update the corresponding Django model.
In addition, I tried to copy another file called run2.py to the webjob directory, and that fails too! I cannot copy any file other than run.py into that directory.
According to your description and my experience, I think the issue is caused by using a relative path in your code.
On Azure Web Apps, we have permission to perform any operations under the path D:/home.
My suggestion is to use an absolute path instead of a relative one, such as D:/home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt or /home/site/wwwroot/App_Data/jobs/triggered/<triggered-job-name>/test.txt instead of test.txt. Also make sure the directory and its parent directories exist: use os.path.exists(path) to check, and os.makedirs to create them before writing a file.
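A minimal sketch of that suggestion (the job name is a placeholder):
import os

# Absolute path under D:/home, where Azure Web Apps allows writes;
# 'myjob' stands in for the actual triggered-job name
job_dir = r'D:\home\site\wwwroot\App_Data\jobs\triggered\myjob'

# Create the directory and any missing parents before writing
if not os.path.exists(job_dir):
    os.makedirs(job_dir)

with open(os.path.join(job_dir, 'settings.job'), 'w') as f:
    f.write('{"schedule": "0 */15 * * * *"}')  # example cron schedule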
Meanwhile, you can also try the WebJobs API to do some operations, such as creating a webjob by uploading a zip file or updating settings.job. To use these WebJobs APIs, be sure you have configured the Deployment credentials in the Azure portal, and add the basic auth header Authorization: Basic <BASE64-Encode("deployment-username":"password")> to the request.
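For example, a sketch of uploading a triggered webjob as a zip through that API (site name, job name, zip file and credentials are all placeholders):
import requests

site = 'mysite'        # placeholder site name
job_name = 'myjob'     # placeholder job name
auth = ('deployment-username', 'password')  # the deployment credentials above

url = 'https://{}.scm.azurewebsites.net/api/triggeredwebjobs/{}'.format(site, job_name)

with open('myjob.zip', 'rb') as f:
    # PUT the zipped job files; the service unpacks them into App_Data/jobs/triggered/<job>
    resp = requests.put(url, data=f,
                        headers={'Content-Type': 'application/zip'},
                        auth=auth)
resp.raise_for_status()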
Here is my analysis:
there is no problem copying files other than run.py to the webjob directory
the code crashes after (successfully) copying the file run.py from my Django app directory to the webjob directory. It does not matter whether I use shutil.copy/shutil.copyfile or simply open("{path}/run.py","w"); the problem occurs when I try to write a file called run.py to the disk in the webjob directory.
I think that when we create a new webjob directory, if the system detects a file called run.py, it tries to carry out some operations on it. Then there is a conflict, with two processes trying to access the same file at the same time.
My solution: I copy the Python script to the webjob directory under the name myrun.txt, and then rename it to run.py using os.rename.
It works!
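A sketch of that workaround (both paths are placeholders):
import os
import shutil

src = r'D:\home\site\wwwroot\myapp\run.py'                        # placeholder app path
job_dir = r'D:\home\site\wwwroot\App_Data\jobs\triggered\myjob'   # placeholder job path

# Copy under a neutral name first, so the webjob machinery ignores the file...
tmp = os.path.join(job_dir, 'myrun.txt')
shutil.copyfile(src, tmp)

# ...then rename it to run.py once the copy is complete
os.rename(tmp, os.path.join(job_dir, 'run.py'))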
Is there an easy way to edit our python files in the Jenkins workspace UI?
It would be super nice if we could get code highlighting too!
There is a Jenkins plugin that allows you to edit files: Config File Provider.
It can't edit arbitrary files, but you can use it to achieve what you want.
The plugin stores its data as XML files in the Jenkins home folder. This means you could write a script that recreates those files wherever you need them by parsing the XML (the plugin does this for the workspace, although it requires a build step). For instance, I could add a new custom config file like this:
Name: script.sh
Comment: /var/log
Content: ....
This will then be available in an XML file, which you could parse within a cron job to create the actual files where you need them, as in the sketch below.
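A rough illustration of such a cron-driven script. The storage file name and element names below are assumptions, so check your Jenkins home folder for the actual layout:
import xml.etree.ElementTree as ET

# Assumed location and layout of the plugin's storage; adjust to what you find
STORE = '/var/lib/jenkins/org.jenkinsci.plugins.configfiles.GlobalConfigFiles.xml'

tree = ET.parse(STORE)
for entry in tree.getroot().iter():
    name = entry.findtext('name')        # e.g. script.sh
    content = entry.findtext('content')  # the file body stored by the plugin
    if name and content:
        # Recreate the stored file; the target directory is a placeholder
        with open('/var/log/' + name, 'w') as f:
            f.write(content)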
The closest thing I can think of that Jenkins offers is a file upload. You can upload a file with local changes and then trigger a build; the file will be placed at an already specified location. This feature is used by making your build parameterized and adding a File Parameter option. Below is what Jenkins says in the description of this feature.
Accepts a file submission from a browser as a build parameter. The uploaded file will be placed at the specified location in the workspace, which your build can then access and use.
This is useful for many situations, such as:
Letting people run tests on the artifacts they built.
Automating the upload/release/deployment process by allowing the user to place the file.
Performing data processing by uploading a dataset.
It is possible not to submit any file. If that's the case and no file is already present at the specified location in the workspace, then nothing happens. If there's already a file present in the workspace, then that file will be kept as-is.
I'm working on a project utilizing Django on Google App Engine. I've been asked if some of the code can be deployed as compiled only.
So I guess the question is: can I upload only a .pyc file that contains the piece of code in question? I've done a basic test with a views.pyc file in an application, and things don't work. Is there some configuration or other setting I can use to make Google App Engine just use the .pyc files?
No, you can't: you can only upload source code. There's no good reason to do this, though: your code will be bytecode-compiled on the servers when needed, and nobody is able to access your code in any case.
I realise you couldn't do this when you asked this question, but you can now, if you use Python 2.7. See Sandboxing in Python 2.7:
You can upload and use .pyc, but not in combination with .py files. You can upload zip files containing .py or .pyc files (or a combination).
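For instance, a minimal sketch of producing the .pyc files locally before uploading (the directory name is a placeholder; on Python 2.7 each .pyc lands next to its .py):
import compileall
import os

APP_DIR = 'myapp'  # placeholder for your application directory

# Byte-compile every .py file under the app directory
compileall.compile_dir(APP_DIR, force=True)

# Remove the .py sources, since .py and .pyc files can't be mixed in the upload
for root, dirs, files in os.walk(APP_DIR):
    for name in files:
        if name.endswith('.py'):
            os.remove(os.path.join(root, name))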
Why would you want to do that in the first place? Because your .py files are uploaded to Google infrastructure and can be seen only if you explicitly give permissions.
But yes, there is no reason why uploading only .pyc files should not work. If you try it in your dev environment, you will find it works, just as BaseHTTPServer can take compiled Python modules as handlers.
Also, recent GAE supports automatic precompilation for Python files, which means that as soon as you update your application, the Python files are precompiled and served. So you might have to play with --no_precompilation during appcfg.py upload if you expect .py files to be checked on the App Engine side.