git pull on running code causes corruption - python

We are developing a product that runs on a Raspberry Pi CM3+ module (Raspbian Buster). It is WiFi-connected and runs a Python Flask server.
The code (server.py) is run from a locally cloned git repository. The server has a page where the user can check for remote updates by clicking a button in the Flask UI.
The issue is that when we do a git pull, the server.py file size becomes 0 kB, and when we subsequently try to interact with git we get:
error: object file .git/objects/3b/1f1939a417f8bbb15e759c899911cc664b5488 is empty
fatal: loose object 3b1f1939a417f8bbb15e759c899911cc664b5488 (stored in .git/objects/3b/1f1939a417f8bbb15e759c899911cc664b5488) is corrupt
We have to remove and re-clone the repository.
Is this expected behaviour when performing a git pull on running code? What procedure should we follow to update our product's code remotely?

I solved this using a custom workflow. I took the advice of #torek and moved away from using git, since it was corrupting both itself and my code.
I took a three-step approach.
Step 1. I created an API which allowed the application to check if a new version existed. For this I used AWS lambda and AWS API Gateway. I hosted my new code in a .zip file in an AWS S3 bucket.
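As a rough sketch, the client-side check for step 1 might look like the following, assuming the API returns JSON with an update_available flag (the URL, parameter, and field names here are placeholders, not the actual API):
import requests

# Placeholder URL; the real endpoint comes from AWS API Gateway.
VERSION_URL = 'https://example.execute-api.eu-west-1.amazonaws.com/prod/version'

def update_available(current_version):
    # Ask the Lambda-backed API whether a newer version exists.
    resp = requests.get(VERSION_URL, params={'current': current_version}, timeout=10)
    resp.raise_for_status()
    return resp.json().get('update_available', False)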
Step 2. Since my application is in Python, if the API returned true (a new version exists) I would download the archive with boto3:
import boto3

s3_client = boto3.client('s3')
# Arguments: bucket name, object key of the zipped code, local destination path.
s3_client.download_file('bucket', '<code_folder_in_bucket>', '<download_dir_on_device>')
Step 3. I created a bash service that runs on start-up (via systemd) and checks whether a .zip file exists in a specific directory. If it does, it unzips it, overwrites the application folder with the new code, and then deletes the .zip file so the next start is clean. The service that runs after this one is the service that launches my application code. A Python sketch of the same unpack-and-replace logic follows below.
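My version is a bash script run by systemd, but the logic of step 3, expressed in Python with placeholder paths, is roughly:
import os
import zipfile

# Placeholder paths; adjust to wherever the download step put the archive.
ZIP_PATH = '/home/pi/updates/update.zip'
APP_DIR = '/home/pi/app'

if os.path.exists(ZIP_PATH):
    # Unpack the new code over the application folder.
    with zipfile.ZipFile(ZIP_PATH) as zf:
        zf.extractall(APP_DIR)
    # Delete the archive for a clean start next time.
    os.remove(ZIP_PATH)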
This works very reliably.

Related

Move python code from Azure Devops repo to windows Azure VM

We have a Python application on a Windows Azure VM that reads data from an API and loads the data into an on-prem DB. The application works just fine, and the code is source-controlled in an Azure DevOps repo. The current deployment process is for someone to pull the main branch and copy the application from their local machine to c:\someapplication\ on the STAGE/PROD server. We would like to automate this process. There are a bunch of tutorials on how to do a build for API and web applications, but they require an Azure subscription and app name (which we don't have). My two questions are:
Is there a way to do a simple copy to the c:\someapplication folder from Azure DevOps for this application?
If there is, would the copy solution be the best solution for this, or should I consider something else?
Should I simply clone the main repo to each folder location above and then automate the git pull via the Azure pipeline? Any advice or links would be greatly appreciated.
According to your description, you could try the CopyFiles@2 task: expose the local folder as a shared folder and use it as the TargetFolder (the target folder or UNC path that will contain the copied files).
YAML like:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)' # string. Source Folder.
    Contents: '**' # string. Required. Contents. Default: **.
    TargetFolder: 'c:\someapplication' # string. Required. Target Folder.
Please check if it meets your requirements.

How can I get the token-sheets.pickle and token-drive.pickle files to use ezsheets for Google Sheets?

I am trying to set up ezsheets for use with Google Sheets. I followed the instructions from here https://ezsheets.readthedocs.io/en/latest/ and here https://automatetheboringstuff.com/2e/chapter14/
The set-up process works quite differently on my computer: somehow I could download credentials-sheets.json, but I still need to download the token-sheets.pickle and token-drive.pickle files. When I run import ezsheets, no browser window is opened as described in the set-up instructions. Nothing happens.
Is there another way to download both files?
I followed the steps you referenced and managed to generate the files, but I also ran into the same issue before figuring out the cause. The problem is that there are a few possible causes, and the script fails silently without telling you exactly what happened.
Here are a few suggestions:
First off you need to configure your OAuth Consent Screen. You won't be able to create the credentials without it.
Make sure that you have the right credentials file. To generate it you have to go to the Credentials page in the Cloud Console. The docs say that you need an OAuth Client ID. Make sure that you have chosen the correct app at the top.
Then you will be prompted to choose an application type. According to the docs you shared the type should be "Other", but this is no longer available so "Desktop app" is the best equivalent if you're just running a local script.
After that you can just choose a name and create the credentials. You will be prompted to download the file afterwards.
Check that the credentials-sheets.json file has that exact name.
Make sure that the credentials-sheets.json file is located in the same directory where you're running your Python script or console commands.
Check that you've enabled both the Sheets and Drive API in your GCP Project.
Python will try to set up a temporary server on http://localhost:8080/ to retrieve the pickle files. If another application is using port 8080, this will also fail. In my case a previously failed Python script was hanging on to that port.
To find and close the processes using port 8080 you can refer to this answer for Linux/Mac or this other answer for Windows. Just make sure that the process is not something you're currently using.
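If you want a quick way to check from Python whether something is already listening on port 8080 (just a convenience check, not part of ezsheets):
import socket

# connect_ex returns 0 if something accepted the connection, i.e. the port is taken.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    in_use = s.connect_ex(('localhost', 8080)) == 0
print('port 8080 in use:', in_use)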
I just used a single import ezsheets command to get the files, so after getting token-sheets.pickle I had to run it again to get token-drive.pickle; after that, the library should detect that you already have both files.

Python push files to Github remote repo without local working directory

I am working on a Python-based web application for collaborative XML/document editing, and one requirement from the client is that users should be able to push the files they created (and saved on the server) directly to a GitHub remote repo, without ever creating a local clone on the server (i.e., no local working directory or tracking of any sort). In GUI terms, this would correspond to going to the GitHub website and manually adding the file to the remote repo by clicking the "Upload files" or "Create new file" button, or simply editing an existing file on the remote repo on the GitHub website and then committing the change inside the web browser. Is this functionality even possible to achieve, either using some Python GitHub modules or by writing code from scratch against the GitHub API?
You can create files via the API, and if the user has their own GitHub account, you can upload them as that user.
Let's use github3.py as an example of how to do this:
import github3

# Authenticate and get a handle on the target repository.
gh = github3.login(username='foo', password='bar')
repository = gh.repository('organization-name', 'repository-name')

for file_info in files_to_upload:
    # Read the file's raw bytes and create it in the remote repo.
    with open(file_info, 'rb') as fd:
        contents = fd.read()
    repository.create_file(
        path=file_info,
        message='Start tracking {!r}'.format(file_info),
        content=contents,
    )
You will want to check that it returns an object you'd expect, to verify the file was successfully uploaded. You can also specify committer and author dictionaries, so you can attribute the commit to your service and people aren't under the assumption that the person authored it on a local git set-up.
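For example, a hypothetical call with explicit attribution might look like this (the path, content, names, and emails are all placeholders):
# Hypothetical example; replace path, content, names, and emails with real values.
repository.create_file(
    path='docs/example.xml',
    message='Add example file on behalf of a user',
    content=b'<root/>',
    committer={'name': 'My Web App', 'email': 'bot@example.com'},
    author={'name': 'Jane Doe', 'email': 'jane@example.com'},
)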

Creating a KhanAcademy clone via Google App Engine - issues with application name in app.yaml

I'm trying to create a KhanAcademy (KA) clone on Google App Engine (GAE). I downloaded the offline version of KA (http://code.google.com/p/khanacademy/downloads/list) for Mac, and set it up with GoogleAppEngineLauncher (https://developers.google.com/appengine/). Because KA was produced on Python 2.5, I have the setup running through the Python 2.5 included in the KA offline version download, and I added these extra flags to the app (essentially duplicating the functionality of the included Run file):
--datastore_path=/Users/Tadas/KhanAcademy/code/datastore --use_sqlite
As is, GAELauncher is able to get that up and running perfectly fine on localhost. However, to get it up on my Google appspot domain, I need to change the application name in app.yaml. When I change "application: khan-academy" in app.yaml to a new name and try to run the local version via GAELauncher (or the included Run file), the site comes up but all the content (exercises, etc.) has disappeared (essentially, the site loses most of its functionality). If I try to deploy the app in this state, I receive a 500 Server Error when I go to the appspot website. Any ideas as to what could be going wrong?
Thanks.
The problem is that your 'clone' application does not have access to Khan Academy's App Engine datastore, so there is no content to display. Even if you use all of the code for their application, you are still going to have to generate all of your own content.
Even if you are planning to 'clone' their content too, you are going to have to do a lot of (probably manual) work to get it into your application's datastore.

How to use github and ec2 together to deploy a python application

I am currently using github to develop a python application and am looking to deploy it on EC2.
Is there a good way to automatically handle the messiness this entails (setting up SSH key pairs on the EC2 instance for github, pulling from the github repository every time a commit is pushed to the master branch, etc.) without a bunch of custom scripts? Alternatively, is there an open-source project that has focused on this?
I wrote a simple Python script to do this once. I also posted about it on my blog.
You set up mappings of your repositories and branches to local folders which already contain a checkout of that repo and branch. Then you enable GitHub's post-receive hooks to hit the script, which automatically triggers a git pull in the appropriate folder.
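A minimal sketch of such a hook receiver, assuming Flask and a hard-coded repo-to-folder mapping (the route, repo name, and paths are placeholders, not the blog post's actual script):
import subprocess

from flask import Flask, request

app = Flask(__name__)

# Placeholder mapping: "owner/repo" -> local checkout of the branch to update.
REPOS = {'myuser/myapp': '/srv/myapp'}

@app.route('/webhook', methods=['POST'])
def webhook():
    payload = request.get_json(force=True)
    full_name = payload.get('repository', {}).get('full_name', '')
    folder = REPOS.get(full_name)
    if folder:
        # Pull the latest commits into the mapped checkout.
        subprocess.check_call(['git', 'pull'], cwd=folder)
    return 'ok'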
