Vagrant and Google App Engine are not syncing files - python

I am currently using Vagrant to spin up a VM and run GAE's dev_appserver inside it.
The synced folder works and I can see all the files.
But after I start the dev_appserver, changes made to Python files on the host machine are not picked up dynamically.
To see updates to my Python files, I have to relaunch the dev_appserver in my virtual machine.
I also have grunt tasks that watch HTML/CSS files. These watchers likewise fail to trigger when the files are edited outside the virtual machine.
I suspect it has something to do with the way Vagrant syncs files that were changed on the host machine.
Has anyone found a solution to this problem?

Finally found the answer!
In the latest version of Google App Engine, there is a new parameter you can pass to dev_appserver.py.
Running dev_appserver.py --use_mtime_file_watcher=True works!
The change takes 1-2 seconds to be detected, but it still works!
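For reference, a minimal invocation might look like the line below; the app.yaml path and the --host value are placeholders for your own setup, not part of the original answer.
# mtime polling sees changes coming through Vagrant/VirtualBox shared folders,
# which the default inotify-based watcher misses
dev_appserver.py --use_mtime_file_watcher=True --host=0.0.0.0 ./app.yaml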

Related

Installing Python in a Docker container

I'm new to coding and have been fiddling around with Docker containers and services.
I have installed a temporary VS Code server on my Raspberry Pi and deployed it on my local LAN so I can access it from various machines.
Now I've been trying to create a Flask app and run it from the container, and I'm trying to figure out how to publish and run the Flask web server, since I can't work out which IP I should host it on (the default I always used was host=127.0.0.1, port=8080, but that would point me back to the local machine I'm visiting it from).
While troubleshooting the exposed ports, I stopped the container and changed the docker-compose file. (I have a path set for the config's permanent storage, so my VS Code settings are saved and persist between deployments.)
But I'm having the problem that every time I stop and redeploy the container I lose my Python 3 installation, and have to rerun apt update, apt upgrade, apt install python3-pip, and reinstall every Python package I need for the project.
Where am I going wrong?
Silly question, but where does Python get installed, and why isn't it persistent since I have my config path set?
I read that Python gets installed in /usr/local/lib; should I also map those directories to the persistent storage folder?
How should I do that?
Thanks
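As a rough illustration of why this happens (a sketch, not part of the original question): packages installed with apt or pip inside a running container live in the container's writable layer, which is thrown away when the container is recreated, and only the paths mapped as volumes survive. Baking the interpreter and packages into the image, for instance with a hypothetical Dockerfile like the one below, makes them persist across redeployments (the base image and package are assumptions):
# Dockerfile sketch: everything installed here becomes part of the image,
# so it is still there after the container is stopped and recreated
FROM python:3.11-slim
RUN pip install flask    # example project dependency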

PyCharm synchronization between local files and (local) docker

I know that as of now (June 2021) PyCharm does not support fully remote development the way VS Code does; however, it supports a remote interpreter, and in the case of an SSH connection it supports automatic upload of local files to the server.
My question is: can I use an interpreter running in a Docker container and synchronize my files with the files within the container?
Apparently, I can define a remote Docker interpreter and set up a "Mapping".
However, I don't see any way to automatically upload my local files / changes to the container. When I try to configure a new deployment, I don't see any Docker option.
Am I missing something, or is it that PyCharm does not support such a "Docker deployment"?
(This might be related, but not a clear conclusion there...)
(UPDATE: after studying this more, I'm not even sure I can use an interpreter in a Docker container. It seems that PyCharm only uses a Docker image, and anything I would do inside the container... is gone in the next session. Or am I wrong?)
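One way to avoid a separate upload step entirely, sketched here under the assumption that the interpreter is configured from a Docker Compose service rather than a bare image, is to bind-mount the project into the container; the paths below are illustrative, not taken from the question.
# docker-compose.yml sketch: the bind mount keeps host and container files identical,
# so there is nothing to "deploy" after editing locally
services:
  app:
    build: .
    volumes:
      - ./:/opt/project    # /opt/project is PyCharm's usual default mapping target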

vscode run / debug python in docker instance

I'm using Jupyter notebooks to prototype, and I write the majority of my code as Python packages using VS Code, installed like so:
pip install -e .
This works well, as I can rapidly prototype and test in Jupyter but still maintain reusable / testable code by keeping most of the heavy lifting in the package(s).
I'd like to move my Python/Jupyter environment to Docker. Is there any way to configure VS Code to work well with a "remote" development environment running in a Docker container?
Since May 2019 (version 1.35), the VS Code remote development feature has been present in the stable release. It splits the VS Code program in two:
a server part that can be run on a remote computer, container, or WSL environment
a client part, mainly the GUI, that is run locally
When properly configured, debugging/linting/... operations will be executed inside the container. To answer your specific question, you can get a debugging experience identical to that of an uncontainerized setup.
See here for a quick overview of this feature. You can find a vscode-issued tutorial on how to setup vscode with docker here.
If you expose the Jupyter instance running in the container to your machine, you may be able to specify it as a remote Jupyter server.
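As an example of that last point (a sketch; the image name and the /workspace path are assumptions), you could publish the notebook port and bind the server to all interfaces, then point VS Code or a browser on the host at localhost:8888:
# publish Jupyter's port and mount the source so edits on the host are visible in the container
docker run -p 8888:8888 -v "$(pwd)":/workspace my-jupyter-image \
    jupyter notebook --ip=0.0.0.0 --no-browser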

Django in Docker using PyCharm

I'm trying to develop a database manager in Django and want to develop and deploy it in Docker. As my IDE, I'd like to continue using PyCharm, but I'm having trouble understanding how it interacts with Docker.
I am new to Docker and its integration in PyCharm. My system runs Windows 10 and Docker for Windows.
I already tried using PyCharm's remote interpreter, but I have to activate the port forwarding manually (using Kitematic), since PyCharm somehow does not forward the exposed port automatically.
I also tried using a "Docker Deployment" run configuration. However, I can't get requests to localhost:8000 to get through to the Django server. All I get are empty response errors.
(Note: this last issue was addressed in the accepted answer.)
It would really help me to have an explanation of how PyCharm's two options (remote interpreter and Docker deployment) really work, and ideally an up-to-date tutorial for setting up Django with them. Unfortunately I could only find outdated tutorials, and JetBrains' help pages are either outdated or do not explain it in enough detail.
Could someone help me out and guide me through this or point me to good resources?
Assuming you have the latest Docker (for Mac or for Windows) along with an updated version of PyCharm, you could achieve the port forwarding (binding) this way:
Create a new run configuration
Select your Docker server in the Deployment tab. If nothing shows, create a new one. Test that it actually works by clicking View > Tool Windows > Docker and connecting to the Docker server. You should see the existing images and running containers.
In the Container tab, make sure to add the right port bindings.
An important note
Make sure that you are running your Django server on 0.0.0.0:8000 and not localhost:8000
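For example, inside the container the development server would be started with the standard command below (assuming the usual manage.py layout), so that it listens on all interfaces and the bound port is reachable from the host:
# listen on all interfaces, not just the container's loopback
python manage.py runserver 0.0.0.0:8000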

Docker container and virtual python environment

I'm getting started working with Docker. I installed Docker Toolbox on Windows 10 and downloaded the desired container. I need full access to the container's filesystem, with the ability to add and edit files. Can I transfer the contents of the container into a virtual Python environment in the Windows filesystem? How can I do it?
Transferring files between Windows and Linux might be a little annoying because of different line endings.
Putting that aside, it sounds like you are looking to create a Docker-based development environment. There are good tutorials online that walk you through setting one up; I would start with one of these:
Running a Rails Development Environment in Docker. This one is about Rails, but the principles will be the same. Section 3 specifically talks about sharing code between your host machine and the Docker container.
How To Work with Docker Data Volumes on Ubuntu 14.04 includes a brief introduction to Docker containers, different use cases for data volumes, and how to get each one working. The "Sharing Data Between the Host and the Docker Container" section covers what you are trying to do. Its example is about reading log files created inside the container, but the principle is the same for adding/updating files in the container.
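As a rough sketch of the bind-mount approach those tutorials describe (the /c/Users share is Docker Toolbox's default, and the image name and paths are made up for illustration):
# files edited under C:\Users\me\project on Windows show up in /app inside the container
docker run -it -v /c/Users/me/project:/app my-image bash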
