How to transfer a session to another compute node with Python? [closed]

Closed 5 days ago. This question needs to be more focused. It is not currently accepting answers.
How can I transfer a session to another compute node with Python in the following cases?
Case 1: if using Kubernetes
Case 2: if using autoscaling
Case 3: if using Amazon (AWS)
How can I transfer the session to another compute node with Python, so that the program can run forever?

Nope, none of those things can transfer a process with all of its in-memory and on-disk state across hosts.
If you’re looking at Kubernetes already, I’d encourage you to design your application so that it doesn’t have any local state. Everything it knows about lives in a database that’s maintained separately (if you’re into AWS, it could be an RDS hosted database or something else). Then you can easily run multiple copies of it (maybe multiple replicas in a Kubernetes ReplicaSet or Deployment) and easily kill one off to restart it somewhere else.
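As a rough sketch of that pattern (the Redis host name and key layout here are placeholders, not anything from the question), session data lives in an external store that every replica can reach, so any pod can pick up where another left off:

    import json
    import redis  # third-party client; assumes a Redis instance reachable as "redis-service"

    r = redis.Redis(host="redis-service", port=6379, decode_responses=True)

    def save_session(session_id, data):
        # Store the session as JSON under a namespaced key.
        r.set("session:" + session_id, json.dumps(data))

    def load_session(session_id):
        # Return an empty session if nothing was stored yet.
        raw = r.get("session:" + session_id)
        return json.loads(raw) if raw else {}

    # Any replica, on any node, can now serve this user:
    save_session("user-42", {"cart": ["book"], "step": 3})
    print(load_session("user-42"))

With the state held externally, Kubernetes is free to kill a pod and reschedule it on another node; the replacement simply reads the session back from the store.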
One of the high-end virtualization solutions might be able to do what you’re asking, but keeping a program running forever is pretty hard, particularly in a scripting language like Python. (How do you update the program? How do you update the underlying OS when it needs to reboot to take a kernel update?)

Related

How to choose appropriate AWS EC2 instance [closed]

Closed 1 year ago. This question is opinion-based. It is not currently accepting answers.
I want to deploy a website that uses Django on the server side and a ReactJS frontend on AWS. I'm estimating traffic of about 40,000 per month. Amazon has several hosting packages. I like the EC2 instances, but I don't know which particular instance type would suit me while being cost-effective. I want to host for 6 months to get to understand the platform and later extend it if I'm comfortable with their services. The database isn't that robust. I'd also like to push updates at least twice a week.
Honestly, I haven't used AWS before and I don't even know if EC2 is the best fit. Which of the packages will be best for my project?
Your main choice is Amazon EC2 (highly flexible) vs Amazon Lightsail (similar to a VPS).
Nobody can advise you on sizing your system, since it is totally dependent upon what your application does (video vs compute vs caching, etc.) and how your users interact with the app. You should pick an instance type, set up a test system that simulates typical usage, and then monitor how it runs.
Alternatively, pick something reasonably big, run it in production and monitor for any issues. If it all looks good, you could downsize the system.
AWS EC2 is an IaaS (Infrastructure as a Service) model, and it's well suited to people who are just starting with cloud computing. It's similar to an on-premises environment: you have almost full control over aspects of computing like memory, storage, and networking, but on the other hand you have to manage all of it yourself, which in some cases you don't want or need to do. In your case, with the information provided, EC2 seems to be the best choice, so you can try AWS safely (safely because you will need to keep an eye on costs, and with the IaaS model you have more predictability) while learning about other services. The next step will depend on the specifics of your application, such as whether it is stateful or stateless and whether it needs to scale. Hope I have contributed.
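For the "monitor how it runs" part, here is a hedged sketch using boto3 and CloudWatch (the instance ID and region are placeholders):

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Average CPU utilisation of one instance over the last 24 hours, hourly.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1), "% CPU")

If the averages stay well below the instance's capacity under realistic traffic, downsizing is a low-risk change; if they sit near 100%, move up a size.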

Allowing users to execute python code on the server [closed]

Closed 6 years ago. This question needs to be more focused. It is not currently accepting answers.
I know this is technically a duplicate question, but I believe it is valid since the original question was submitted 7 years ago and Python/web security has come a long way since then.
I would like to build a web app that allows users to input python code (through the Ace editor package) and then execute it in the browser using a python interpreter on the server. I cannot use pypy.js because I need to use numpy, pandas, and matplotlib. Essentially I would like to make my own Codecademy (I am a teacher and would like to create Codecademy-like courses for my students). Sadly the create-a-course thing Codecademy mentioned at one point has come to nothing.
I'm using Flask, but I could learn Django if that would be easier.
What is the best way to allow my users to run the python code without allowing them to affect the rest of the program or access files outside of what they're allowed to?
There have been no fundamental changes in Python or web security in the last 7 years. It is still suicidal to let users run arbitrary code on your server.
However, what did change is the availability of lightweight container solutions like Docker.
For an example of how this could work, have a look at https://civisanalytics.com/blog/engineering/2014/08/14/Using-Docker-to-Run-Python/. I won't reproduce that solution here, as you will find other examples even if this one goes away.
However, while this is safer than running user code directly on your server, keep in mind that:
the user code is still running on your server. Even if it can't escape the Docker container, a malicious user could still upload, for example, a denial-of-service tool and start an attack from your server, or sniff your network traffic.
there are, or at least might be, ways to break out of the Docker container.
For a controlled environment like a classroom those risks might be acceptable, but for a public server you would need a lot of security know-how to further lock down the server and the Docker image, and to filter the available Python functionality.
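As a very rough illustration of the Docker approach (the image name, resource limits, and mount path are all assumptions, and this alone does not make the setup safe), the Flask view could write the submitted code to a temporary file and run it in a short-lived, network-less container:

    import subprocess
    import tempfile

    def run_user_code(code, timeout=10):
        # Write the submitted code to a temp file that the container mounts read-only.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            script = f.name
        cmd = [
            "docker", "run", "--rm",
            "--network", "none",      # no network access from inside the container
            "--memory", "256m",       # cap memory
            "--cpus", "0.5",          # cap CPU
            "--pids-limit", "64",     # blunt a fork bomb
            "-v", script + ":/sandbox/script.py:ro",
            "python:3.11-slim",       # in practice, a custom image with numpy/pandas/matplotlib
            "python", "/sandbox/script.py",
        ]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
            return result.stdout + result.stderr
        except subprocess.TimeoutExpired:
            return "Execution timed out."

Note that the timeout only kills the docker client process; a container that keeps running would still have to be stopped separately, which is one more reason this is a sketch rather than a hardened sandbox.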

How to transfer pre trained machine learning model to App Engine [closed]

Closed 6 years ago. This question needs to be more focused. It is not currently accepting answers.
I have a few machine learning models which I have already trained (300 MB to 4 GB in size). Now I want to query them using a REST API. I build my APIs in Flask, which requires these models to be in memory. Is Google App Engine suitable for this, or should I use Compute Engine instead?
It's unsuited to App Engine for reasons like these:
App Engine does not have instances with that much RAM (unless you use the flexible environment, which is basically the same as the GCE option).
Even if it did, instances can go away at any time, which means constantly reloading the data into memory (though it could be kept in memcached).
Time restrictions on frontend instances make it very unlikely you'd even get time to load the data into memory, let alone analyze it. A backend-type instance could do it, but it would be harder than doing it from a regular VM.
Good luck loading the libraries you need, since nothing is allowed to write to file storage, even for a temp file.
Thus Compute Engine is the correct place.
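On Compute Engine the usual pattern is to load the model once, at process start, and keep it in memory for the life of the VM. A minimal sketch, assuming the models were saved with joblib and expose a scikit-learn-style predict() (the path is a placeholder):

    import joblib
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Loaded once when the process starts; every request reuses the same object.
    MODEL = joblib.load("/opt/models/model.pkl")

    @app.route("/predict", methods=["POST"])
    def predict():
        features = request.get_json()["features"]
        prediction = MODEL.predict([features])
        return jsonify({"prediction": prediction.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)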

Viewing and managing socket usage in Python [closed]

Closed 8 years ago. This question needs details or clarity. It is not currently accepting answers.
I am trying to create a utility using Python (3) that, among other things, needs to look at and manage socket usage ON WINDOWS (no, I really don't care if it works on other OS's).
Looking at socket usage: To be clear, I don't want to create a socket or bind to an existing one, I want to be able to get a full list of what sockets are open and what programs have opened them. If you're not sure about what I mean, take a look at TCPView, which does exactly what I'm talking about.
Managing socket usage: Basically, I want to be able to stop programs from connecting from the internet, if necessary. I would assume that the easiest way to do this is to use os.system() to add a new rule to the Windows Firewall, but as that doesn't seem too elegant I'm open to suggestions.
As that's obviously not all the utility will do, I would prefer a library/module of some sort over a 3rd-party program.
You can launch the command "netstat -nabo" to get the list of all active connections and parse the output to get the source, destination, process name, and PID. There is no straightforward way to get the active connections from the Python standard library alone. You can also get the same information from Python by invoking the iphlpapi Windows API. To block or allow a connection, Windows has a command line (netsh advfirewall) to add or remove rules from the Windows Firewall.
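A rough sketch of the netstat route, using "-ano" rather than "-nabo" because -ano keeps the owning PID on the same line and is easier to parse; psutil (a third-party library, not an external program) then maps each PID to a process name:

    import subprocess
    import psutil

    def list_tcp_connections():
        output = subprocess.check_output(["netstat", "-ano"], text=True)
        connections = []
        for line in output.splitlines():
            parts = line.split()
            # TCP rows look like: TCP  local_addr  foreign_addr  STATE  PID
            if len(parts) == 5 and parts[0] == "TCP":
                proto, local, remote, state, pid = parts
                try:
                    name = psutil.Process(int(pid)).name()
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    name = "?"
                connections.append((proto, local, remote, state, int(pid), name))
        return connections

    for conn in list_tcp_connections():
        print(conn)

psutil.net_connections() returns the same information directly if adding that dependency is acceptable, and a block rule can be added from Python with subprocess and "netsh advfirewall firewall add rule".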

Python R/W to text file network [closed]

Closed 9 years ago. This question needs to be more focused. It is not currently accepting answers.
What could happen if multiple users run copies of the same Python script, designed to read/write data to a single text file stored on a network device, at the same time?
Will the processes stop working?
If so, what could be the solution?
Many bad things can happen. I don't think the processes will stop working, at least not because of concurrent access to the file, but what you can get is an inconsistent file: for example, if one process writes hello while another accesses the file concurrently, you might end up with a line like hhelllolo.
A solution I can see is to use a database, as suggested, or to create a mechanism for locking the file against concurrent access (which might be cumbersome because you're working over a network, not on the same computer).
Another solution I can think of is to create a simple server-side script that handles the requests and locks the file against concurrent access. This is almost the same as using a database; you'd be building a storage system from scratch, so why bother :)
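If you go that route anyway, a minimal sketch (the port, path, and the choice of HTTP are arbitrary) is a single process on the machine that owns the file, accepting lines over HTTP and serializing the writes with a lock, so the clients never open the file themselves:

    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOCK = threading.Lock()
    DATA_FILE = "shared.txt"

    class AppendHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            line = self.rfile.read(length).decode("utf-8").rstrip("\n")
            with LOCK:  # only one request writes at a time
                with open(DATA_FILE, "a", encoding="utf-8") as f:
                    f.write(line + "\n")
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), AppendHandler).serve_forever()

Each copy of the script then sends its line with a POST request (for example via urllib.request) instead of writing to the network share directly.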
Hope this helps!
