How to speed up django development - Code reload - python

I have been moving from PHP to Python for my web development and have selected Django as my preferred framework. One thing that bugs me is the time it takes for my changes to the Python code to reload during development: roughly 10 seconds.
Some of those seconds are probably due to my setup: Docker for Mac with a mounted volume. But even if it were down to 5 seconds it would be annoying. I have moved away from the built-in Django development server to Apache 2.4 with mod_wsgi; this improves the speed of the application a lot, but not the Python code reloading.
I know it's like comparing apples and oranges, but coming from PHP, my code changes are available immediately. Does anyone have any tips to speed this up?

Traced it back to slow disk access with Docker for Mac.

Related

Advice on running flask app ONLY locally forever

I want to create a web form that stays on forever on a single computer. Users can come to the computer, fill out the form, and submit it. After submitting, it will record the responses in an Excel file and send emails. The next user can then come and fill out a new form automatically. I was planning on using Flask for this task since it is simple to create, but since I am not doing this on some production server, I will just have it running locally in development on the single computer.
I have never seen anyone do something like this with Flask, so I was wondering if my idea is possible or if I should avoid it. I am also new to web development, so I was wondering what problems there could be with keeping a Flask application running 24/7 on a local development computer.
Thanks
There is nothing wrong with doing this in principle; however, it is likely not the best solution in terms of time-to-reward payoff.
First, to answer your question: this could easily be done, even by a beginner; completing it in a few hours with minimal Python and HTML experience is definitely possible. Your app could crash in the background for many reasons (running out of space, bad memory addresses, etc.), but most likely you will be fine.
As for actually building it, it is all possible. There are libraries you can use to add the results to an Excel file, or you can simply append to a CSV (which is what I would recommend). Creating and sending an email is similarly straightforward, but again, doing it without Python would be much easier.
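For the CSV route, a minimal sketch using only the standard library (the file path and field names here are placeholders for whatever your form collects):

    import csv
    import os
    from datetime import datetime

    CSV_PATH = "responses.csv"  # hypothetical output file
    FIELDS = ["timestamp", "name", "email", "answer"]  # hypothetical form fields

    def append_response(row):
        # Write the header only when the file does not exist yet,
        # then append the submission as one CSV row.
        write_header = not os.path.exists(CSV_PATH)
        with open(CSV_PATH, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if write_header:
                writer.writeheader()
            writer.writerow(row)

    append_response({
        "timestamp": datetime.now().isoformat(),
        "name": "Jane Doe",
        "email": "jane@example.com",
        "answer": "Yes",
    })

Because each submission is a single append, there is no spreadsheet library to install, and the file can still be opened in Excel later.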
If you are not set on Flask/Python, you could check out Google Forms; but if you are set on Python, or want to use this as a learning experience, it can definitely be done.
Your idea is possible and while there are many ways to do this kind of thing, what you are suggesting is not necessarily to be avoided.
All apps that run on a computer over a long period of time start a process and keep it going until closed. That is essentially what you are doing.
Having done this myself (and still currently doing it) at my business, I can say that it works great.
The only caveat is that to ensure it will always be available, you need to have the process monitored by some tool that restarts it if it ever dies, which can happen for a variety of reasons.
On Linux, supervisor is a great tool for doing that. On Windows you could register it as a service. But you could also just create an easy way to restart the app, so the user can bring it back up if it is down when they need it; a rough sketch of that follows.
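As one possible shape for that last option (assuming the Flask app's entry point is a hypothetical app.py), a tiny watchdog script that relaunches the process whenever it exits:

    import subprocess
    import time

    # Keep relaunching the app whenever its process exits, pausing briefly
    # so a crash loop does not spin the CPU. "app.py" is a placeholder for
    # your actual Flask entry point.
    while True:
        result = subprocess.run(["python", "app.py"])
        print(f"app exited with code {result.returncode}; restarting in 5 seconds")
        time.sleep(5)

This is no substitute for supervisor or a Windows service, but it gives a non-technical user a single script to run if the form ever goes down.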
Yes, this could be done. It's very similar to the applications that run on the servers in data centers.
To keep the application running forever, or to restart it after the system reboots, you'll need a service manager such as systemd on Linux. On Windows you could use NSSM (the Non-Sucking Service Manager) or Service Control to monitor your application and restart it if it crashes; this will also have to be enabled on startup.
Other than this, you could use Waitress to serve your Flask application. Waitress is a WSGI server with which you can easily configure the number of worker threads so that multiple users can be served at the same time.
In a production environment, it's always suggested to use a proper WSGI server like Gunicorn or Waitress.
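A minimal sketch of serving the app with Waitress (assuming your Flask object is named app in a hypothetical module app.py):

    from waitress import serve

    from app import app  # hypothetical module exposing the Flask application

    # Waitress dispatches each request to a pool of worker threads;
    # tune `threads` to the number of simultaneous users you expect.
    serve(app, host="0.0.0.0", port=8080, threads=8)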

Django Initial Page Load Slow After Python 2 to 3 Upgrade

I run Django 1.11.20. I have just (finally) made the jump from Python 2.7 to 3.7.
I've noticed that since upgrading to Python 3, the very first page load after clearing the cache is very slow. So far I only have my Python 3 environment on my development machine, so I'm experiencing this when using ./manage.py runserver and the FileBasedCache backend. I don't know if I will experience the same thing on a production stack (nginx, uwsgi, redis, etc).
I'm using Django Debug Toolbar, which gives me some hints as to what the problem isn't, but I'm not sure where I should look to see what the problem is.
The times given by the debug toolbar when loading a simple page with an empty cache in the Python 2.7 environment on my development machine are:
CPU: 4877.89ms
SQL: 462.41ms
Cache: 1154.54ms
The times given by the debug toolbar when loading the same page with an empty cache in the Python 3.7 environment on my development machine are:
CPU: 91661.71ms
SQL: 350.44ms
Cache: 609.65ms
(I'm using the file based caching on this development machine, so when I say "with an empty cache" I just mean that I rm -r cache before loading the URL in my browser.)
So the SQL and Cache times got slightly faster with the upgrade, but the CPU time increased almost twentyfold. When I open the "Time" panel in the debug toolbar, I see that the "timing attribute" that increased is "request".
This same thing happens on every page (including pages that are just HTML generated by TemplateView, so they're not doing anything tricky), but only when it is first loaded from an empty cache. After that very first response, all pages are back to about the same response time as in the Python 2 environment. So it is something to do with the initial request.
I'm not sure where to look to try to figure out what Django is doing only on the very first request. Any pointers? Or is there something obvious that I can expect to be slower in the jump from Python 2.7 to 3.7?
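One generic way to see where that very first request goes is to profile at the WSGI layer; a sketch that wraps the project's wsgi.py application with the standard library's cProfile and prints the biggest cumulative costs per request ("myproject.settings" is a placeholder):

    # wsgi.py -- temporary profiling wrapper, for development only
    import cProfile
    import io
    import os
    import pstats

    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    _wrapped = get_wsgi_application()

    def application(environ, start_response):
        # Profile each request and print the 20 biggest cumulative costs,
        # so a slow first request is easy to compare with later ones.
        profiler = cProfile.Profile()
        response = profiler.runcall(_wrapped, environ, start_response)
        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(20)
        print(out.getvalue())
        return response

Python 3.7 also gained the python -X importtime flag, which can show whether the cost is at import time rather than inside the request itself.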
I was finally able to trace this problem back to the Django Debug Toolbar itself, which was bumped from 1.11 to 2.0 during my Python 3 upgrade. Specifically, the toolbar's SQL panel was causing the slowdown. I have a context processor which executes a query if a cached object does not exist. The query itself is fast, but for some reason Django Debug Toolbar 2.0 is extremely slow generating the stacktrace for it. I have not figured out what changed between 1.11 and 2.0 to cause this terrible performance loss, but now that I know where the problem lies I can work around it.
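Since the slowdown came from the toolbar collecting stack traces, one plausible work-around is to turn those traces off (a sketch for settings.py; ENABLE_STACKTRACES is a django-debug-toolbar option):

    # settings.py -- stop django-debug-toolbar from collecting stack traces
    # for its SQL and cache panels, which was the expensive part here.
    DEBUG_TOOLBAR_CONFIG = {
        "ENABLE_STACKTRACES": False,
    }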

Slow page loading on apache when using Flask

The Issue
I am using my laptop with Apache to act as a server for a local project involving tensorflow and Python, which uses an API written in Flask to service GET and POST requests coming from an app and maybe another user on the local network. The problem is that the initial page keeps loading when I specifically import tensorflow (or the object detection folder within the research folder of the tensorflow GitHub repository), and it never seems to finish, effectively getting stuck. I suspect the issue has to do with the packages being large, but I didn't have any issue with that when running the application on the development server provided with Flask.
Are there any pointers I should look at when trying to solve this issue? I checked the memory and CPU usage, and neither seems to be rising substantially.
Debugging process
I am able to print a basic hello world to the root page quite quickly, but I isolated the issue to the point where the import takes place, which is where it gets stuck.
The only thing I can think of is to limit the number of threads that are launched, but limiting the number of threads per child to 5 and the number of connections to 5 in the httpd-mpm.conf file didn't help.
The error/access logs don't provide much insight to the matter.
A few notes:
Thus far, I used Flask's development server with multi-threading enabled to serve those requests, but I found it prone to crashing after 5 minutes of continuous running, so I am now trying Apache with the WSGI interface in order to use Python scripts.
I should also note that I am not serving HTML files, just basic GET and POST requests; I am just viewing the responses in the browser.
If it helps, I also don't use virtual environments.
I am using Windows 10, Apache 2.4, and mod_wsgi 4.5.24.
The tensorflow module, being a C extension module, may not be implemented in a way that works properly in Python sub-interpreters. To work around this, force your application to run in the main Python interpreter context (in mod_wsgi this is done with the WSGIApplicationGroup %{GLOBAL} directive). Details in:
http://modwsgi.readthedocs.io/en/develop/user-guides/application-issues.html#python-simplified-gil-state-api

Configuring python

I am new to Python and struggling to find out how to control the amount of memory a Python process can take. I am running Python on a CentOS machine with more than 2 GB of main memory. Python is taking up only 128 MB of this and I want to allocate it more. I have searched all over the internet for the last half hour and found absolutely nothing! Why is it so difficult to find information on Python-related stuff :(
I would be happy if someone could shed some light on how to configure Python for various things like allowed memory size, number of threads, etc.
A link to a site where most of Python's controllable parameters are described would be appreciated as well.
Forget all that: Python just allocates more memory as needed. There is no myriad of command-line arguments for the VM as in Java; just let it run. For all command-line switches you can run python -h or read man python.
Are you sure that the machine does not have a 128 MB process limit? If you are running the Python script as a CGI inside a web server, it is quite likely that a process limit is set; you will need to look at the web server configuration.
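If such a limit exists, you can inspect it from inside the process with the standard library's resource module (Unix only); a minimal sketch:

    import resource

    # Read the current address-space limit for this process.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    print("address space limit: soft=%r hard=%r" % (soft, hard))

    # A process may lower its own soft limit (here, cap at 512 MiB);
    # raising it above the hard limit requires elevated privileges.
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, hard))

Note that this only inspects or tightens limits; as the answer above says, Python itself will keep allocating as needed until the OS or such a limit stops it.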

Running Django with FastCGI or with mod_python

Which would you recommend? Which is faster and more reliable: Apache with mod_python, or nginx/lighttpd with FastCGI?
I've done both, and Apache/mod_python tended to be easier to work with and more stable. But these days I've jumped over to Apache/mod_wsgi, which is everything I've ever wanted and more:
Easy management of daemon processes.
As a result, much better process isolation (running multiple sites in the same Apache config with mod_python almost always ends in trouble -- environment variables and C extensions leak across sites when you do that).
Easy code reloads (set it up right and you can just touch the .wsgi file to reload instead of restarting Apache).
More predictable resource usage. With mod_python, a given Apache child process' memory use can jump around a lot. With mod_wsgi it's pretty stable: once everything's loaded, you know that's how much memory it'll use.
lighttpd with FastCGI will be nominally faster, but really the time it takes to run your python code and any database hits it does is going to absolutely dwarf any performance benefit you get between web servers.
mod_python and Apache will give you a bit more flexibility feature-wise if you want to write code outside of Django that does things like digest auth or any fancy HTTP header getting/setting. Perhaps you want to use other built-in features of Apache such as mod_rewrite.
If memory is a concern, staying away from Apache/mod_python will help a lot. Apache tends to use a lot of RAM, and the mod_python code that glues into all of the Apache functionality occupies a lot of memory space as well. Not to mention that the multiprocess nature of Apache tends to eat up more RAM, as each process grows to the size of its most memory-intensive request.
Nginx with mod_wsgi
I'm using it with nginx. I'm not sure if it's really faster, but it certainly means less RAM/CPU load. It's also easier to run several Django processes and have nginx map each URL prefix to a different socket. I'm still not taking full advantage of nginx's memcached module, but first tests show a huge speed advantage.
There's also mod_wsgi; it seems to be faster than mod_python, and its daemon mode operates similarly to FastCGI.
Personally, I've had it working with FastCGI for some time now (6 months or so) and the response times 'seem' quicker when loading a page that way versus mod_python. The critical reason for me, though, is that I couldn't see an obvious way to host multiple sites from the same Apache/mod_python install, whereas FastCGI was a relative no-brainer.
I've not conducted any particularly thorough experiments though :-)
[Edit] Speaking from experience though, setting up FastCGI can be a bit of a pain the first time around. I keep meaning to write a guide!
I'd recommend WSGI configurations; I keep meaning to ditch Apache, but there is always some legacy app on the server that seems to require it. Additionally, the WSGI app ecosystem is very diverse, and it allows neat tricks such as daisy-chaining WSGI "middleware" between the server and the app.
However, there are currently known issues with some apps under Apache/mod_wsgi, particularly some ctypes apps, so be wary if you are trying to run, say, GeoDjango, which uses ctypes extensively. I'm currently working around those issues by going back to FastCGI myself.
