Before I buy my first setup, I'll launch my deep-learning pipeline on something like vast.ai.
I've never done this before, but how can I protect my script from being "stolen"?
This is a serious launch and training will take around 7 days to finish.
Google Colab doesn't allow enough memory and RAM for what I need (I need around 64GB of RAM).
Is there a way to run a Python script encrypted? (Note: it makes use of libraries.)
It is hard to run Python encrypted. However, you could try storing the code on encrypted disk space.
There are several ways, from fully obfuscating your script to converting it into equivalent Cython, or creating an executable out of it using the likes of Nuitka.
You may also implement some of the important logic in C/C++ (as extensions) and then call it from your script.
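For example, a minimal sketch of the Cython route could look like this - train_core.py is just a placeholder name for whatever module you want to compile, and it assumes Cython and setuptools are installed:

    # setup.py - compiles the sensitive module into a binary extension (.so / .pyd)
    from setuptools import setup
    from Cython.Build import cythonize

    setup(
        ext_modules=cythonize("train_core.py", compiler_directives={"language_level": 3}),
    )

Build it with python setup.py build_ext --inplace and ship the compiled extension instead of the .py file. Keep in mind this only raises the bar against casual copying; anyone with root on the rented machine can still dump memory or reverse-engineer the binary.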
You may also set up a server somewhere you trust and send over only the bits that need to be executed, basically creating a distributed system.
As you can see there are many ways, and the deeper you go the more complex it gets.
You might also want to have a look here.
Related
How can I run C/C++ code within Python in the form:
def run_c_code(code):
    # Do something to run the code

code = """
Arbitrary code
"""
run_c_code(code)
It would be great if someone could provide an easy solution which does not involve installing packages. I know that C is not a scripting language, but it would be great if it could do a 'mini'-compile that is able to run the code in the console. The code should run as if it were compiled normally, but it needs to work on the fly as the rest of the program runs, and, if possible, run as fast as normal and be able to create and edit variables so that Python can use them. If necessary, the code can be pre-compiled into the code = """something""".
Sorry for all the requirements, but if you can make the C code run in Python, that would be great. Thanks in advance for all the answers.
As somebody else already pointed out, to run C/C++ code from "within" Python, you'd have to write that C/C++ code into its own file, compile it correctly, and then execute the resulting program from your Python code.
You can't just type one command, compile it, and execute it. You always have to have the whole "framework" set up. You can't compile a program when you haven't yet written the } that ends the class/function/statement 20 lines later. At that point you'd already have to write the whole C/C++ program for it to work. It's simply not meant to be interpreted on the fly, line by line. You can do that with Python, bash/dash/batch, and a few others, but C/C++ definitely isn't one of them.
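To make that concrete, here is a rough sketch of the write-compile-execute approach in the shape the question asked for. It assumes gcc is on the PATH and Python 3.7+ (for capture_output); error handling is minimal:

    import os
    import subprocess
    import tempfile

    def run_c_code(code):
        """Write the C source to a temp file, compile it with gcc, run it, and return stdout."""
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "snippet.c")
            exe = os.path.join(tmp, "snippet")
            with open(src, "w") as f:
                f.write(code)
            subprocess.run(["gcc", src, "-o", exe], check=True)               # compile
            result = subprocess.run([exe], capture_output=True, text=True, check=True)
            return result.stdout

    code = """
    #include <stdio.h>
    int main(void) { printf("hello from C\\n"); return 0; }
    """
    print(run_c_code(code))

Every call pays for a full compile, so short snippets will never run "as fast as normal", and exchanging variables still has to happen through files, pipes, or command-line arguments.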
With that come several issues. Firstly, the C/C++ part probably needs data from the Python part. I don't know of any way of doing it in RAM alone (maybe there is one, but I don't know of it), so the Python part would have to write it into a file, the C/C++ part would read and process it, then put the processed data into another file, and then the Python part would read that and continue.
Which brings up another point. Here we're already getting into multi-processing territory, because the moment you execute that C/C++ program you're dealing with a second process. So, somehow, you'd have to coordinate those programs so that the Python part only continues once the C/C++ part is done. It shouldn't be a huge problem to get running, but it can be a nightmare for performance and RAM if done wrongly.
Without knowing to what extent you use that program, I also like to add that C/C++ isn't platform-independent like Python. You'll have to compile that program for every single different OS that you run it on. That may come with minor changes to the code and in general just a lot of work because you have to debug and test it for every single system.
To sum up, I think it may be better to find another solution. I don't know why you'd want to run this specific part in C/C++, but I'd recommend trying to get it done in one language. If there's absolutely no way you can get it done in Python (which I doubt, since there are libraries for almost everything), you should move your Python code to C/C++ instead.
If you want to run C/C++ code - you'll need either a C/C++ compiler, or a C/C++ interpreter.
The former is quite easy to arrange (though probably not suitable for an end user product) and you can just compile the code and run as required.
The latter requires that you attempt to process the code yourself and generate python code that you can then import. I'm not sure this one is worth the effort at all given that even websites that offer compilation tools wrap gcc/g++ rather than implement it in javascript.
I suspect that this is an XY problem; you may wish to take a couple of steps back and try to explain why you want to run c++ code from within a python script.
I saw this post on Medium, and wondered how one might go about managing multiple python scripts.
How I Hacked Amazon's Wifi Button
This describes a system where you need to run one or more scripts continuously to catch and react to events in your network.
My question: let's say I had multiple Python scripts that I wanted to run while I work on other things. What approaches are available to manage these scripts? I have to imagine there is a better way than having a large number of terminal windows running each script individually.
I am coming back to python, and have no formal training in computer programming, so any guidance you can provide will be greatly appreciated.
Let's say I had multiple Python scripts that I wanted to run. What approaches are available to manage these scripts? I have to imagine there is a better way than having a large number of terminal windows running each script individually.
If you have several .py files in a directory that you want to run, without having a specific order, you can do:
import glob

pyFiles = glob.glob('path/*.py')   # collect every .py file in the directory
for pyFile in pyFiles:
    execfile(pyFile)               # Python 2 only; see the note below for Python 3
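Note that execfile() was removed in Python 3; a rough equivalent there is to read and exec each file yourself:

    import glob

    for pyFile in glob.glob('path/*.py'):
        with open(pyFile) as f:
            # compile() keeps the filename in tracebacks; exec() runs it in the current namespace
            exec(compile(f.read(), pyFile, 'exec'))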
Your system already runs a large number of background processes, with output to the system log or occasionally to a service-specific log file.
A common arrangement for quick and dirty deployments -- where you don't necessarily want to invest in making the scripts robust and well-behaved enough to run as proper services -- is to start the script inside screen or tmux. You can detach when you don't need to be looking at it, and can reattach at any time -- even from a remote login -- to view the output, or to troubleshoot.
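If you'd rather drive everything from a single Python process instead of screen/tmux windows, a tiny "supervisor" along these lines is another option (the script names are made up and this is only a sketch, not production-grade process management):

    import subprocess
    import sys
    import time

    scripts = ["watch_button.py", "log_events.py"]    # whatever scripts you want kept running

    # start each script as its own child process
    procs = {name: subprocess.Popen([sys.executable, name]) for name in scripts}

    while True:
        time.sleep(5)
        for name, proc in procs.items():
            if proc.poll() is not None:               # the script exited (or crashed)...
                procs[name] = subprocess.Popen([sys.executable, name])   # ...so restart it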
Take a look at luigi (I've not used it).
https://github.com/spotify/luigi
These days (five years after the question was asked) a lot of people use docker compose. But that's a little heavyweight depending on what you want to do.
I just came across bugy's script-server today. Maybe it could be a solution for you or somebody else.
(I am just trying to find a Tampermonkey-style script structure for Python.)
I have tried to find an answer, but I must not be searching correctly, or what I'm trying to do is the wrong way to go about it.
I have a simple Python script that creates a chess board and pieces in a command-line environment. You can input commands to move the pieces. One of my coworkers thought it would be cool to play each other over the network. I agreed and tried it by creating a text file on a network share that we both read from and write to; we would each run the script that reads that file. The problem I ran into is that I pretty much DoS-attacked that file share, since the script kept checking the file on the network share for an update.
I am still new to Python and haven't ever written code that travels the internet, or even a simple local network. So my question is: how should I go about properly allowing 2 people to access this data at the same time without stealing all the network resources?
Oh, also I'm using version 2.6 because that's what everyone else uses and they refuse to change to newer syntax.
You need to do it the proper networking way. It's not that hard for a simple networked program like yours.
Use the socket module from Python's stdlib: http://docs.python.org/library/socket.html (also take a look at the examples at the bottom of the page).
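As a very small sketch of what that can look like for a two-player, turn-based game - the port number, one connection per player, and "each move fits in one short message" are all assumptions here:

    import socket

    HOST, PORT = "", 5000                      # listen on all interfaces, arbitrary port

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(2)

    # wait for both players to connect
    players = [server.accept()[0] for _ in range(2)]

    while True:
        for me, other in ((0, 1), (1, 0)):
            move = players[me].recv(1024)      # block until this player sends a move
            if not move:
                raise SystemExit("player disconnected")
            players[other].sendall(move)       # forward it to the opponent

Each player's script just connects with socket.create_connection((server_host, 5000)), sends its move, and blocks on recv() waiting for the opponent's reply - no polling of a shared file at all.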
First off, without knowing how many times you are checking the file with the moves, it is difficult to know why the file share is getting DoS-ed. Most networks and network shares these days can handle that level of traffic - they are all gigabit Ethernet - so unless you are transferring large chunks of data each time, you should be OK. If you are transferring the whole file each time, then I'd suggest you look at optimizing that.
That said, coming to your second question of how this is handled at a network level: to be honest, you are already doing it in a certain way - you are accessing a file on a network share and modifying it. The only optimization required is to do it efficiently. Concurrent systems over the network do the same thing; at scale that means a fast in-memory database storing the changes, a high-scale RDBMS, or, in the case of fast web servers, better async I/O.
In the current case, since there are two users playing the game, I suggest you work out a way to transmit only the difference in the moves each time over the network. So, instead of modifying the file over the network share, you can send the moves to a server component and have it synchronize the changes locally to the file. Of course, this means you will need to create a server component that does something like this:
user1's moves <--> server <--> user2's moves. The server modifies the moves file.
Once you start doing this, you get into the realm of server programming / preventing race conditions etc. It will be a good learning experience.
We have begun upgrading hardware and software to a 64-bit architecture using Apache with mod_jk and four Tomcat servers (the new hardware). We need to be able to test this equipment with a large number of simultaneous connections while still actually doing things in the app (logging in, etc.)
I currently am using Python with the Mechanize library to do this, but it's just not cutting it. Threading is not "real" in Python (the GIL keeps threads from running in parallel), and multiprocessing makes the local box work harder than the machines we are trying to test, since it has to load so much into memory for Mechanize.
The bottom line is that I need something that will really hammer this thing's connections and hold a session to make sure that the sticky sessions are working in mod_jk. I need to be able to code it quickly, it needs to be lightweight, and being able to do true multithreading would be a perk. Other than that, I am open-minded.
Any input will be greatly appreciated. Thanks.
Open Source Testing Tools
Not knowing the full requirements makes it difficult, however something from the list might fit the bill.
In order to accomplish what I wanted to do, I just went back to basics. Mechanize is somewhat bulky, and there was a lot of bloat involved in the main functionality tests I had before. So I started with a clean slate and just used cookielib.CookieJar and urllib2 to build a linear test and then run them in a while 1 loop. This provided enough strain on the Apache system to see how it would react in the new environment, and for the record, it did VERY well.
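A stripped-down sketch of that approach (Python 2, since cookielib and urllib2 became http.cookiejar and urllib.request in Python 3; the URLs and form fields here are invented):

    import cookielib
    import urllib
    import urllib2

    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    # log in once so the session cookie is established
    login = urllib.urlencode({"username": "loadtest", "password": "secret"})
    opener.open("http://test-box/app/login", login).read()

    while 1:
        # keep replaying requests with the same cookie jar; mod_jk should keep
        # routing this client to the same Tomcat instance (sticky session)
        opener.open("http://test-box/app/dashboard").read()
        opener.open("http://test-box/app/report?id=1").read()

Running a few copies of this as separate processes is one crude way to hold several concurrent sessions open at once.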
We have a process here at work where an input file is created by SAS. That input file is then read by a legacy application and that legacy application creates results. SAS then reads the results and summarizes them. A non-programmer usually takes care of these actions one by one. So the person just creates the input file. They know when it is done, and then they run the legacy application, and they know when that is done. Then they run the summary program.
I have a situation where my boss wants to run about 100 variations. I have access to 3 or 4 computers that share a network drive. Here is my plan: Use computer A, I start creating the 100 input files, one by one. Using computer B, I run the legacy program on each input file. I would like to start running the program when input is ready. So if input1 is done being created on computer A, I want to run the legacy application on input1 on computer B while input2 is being created on computer A. I know python best, so I will probably use python to glue all of this together.
Now I know there are a lot of things I could do, but I think this approach is adequate and will let me just get the job done for the time being. I don't really have time to design and test a very elegant solution that takes advantage of all the cores on all the machines or uses a database to help me synchronize all of this. I appreciate suggestions like that, but I really just want to know: is there a way, in Python, to tell if a file on a network drive is open for writing by any application on any computer? If not, I will probably just come up with a simple way to create an indicator that the job is done - for example, create a file "doneA" whose existence means the "input1" file is complete. I would add a step to the SAS program that creates the indicator file after the input file has been created.
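If there isn't, here is roughly what I had in mind for the side that waits on the indicator file (the UNC path and poll interval are just placeholders):

    import os
    import time

    def wait_for(flag_path, poll_seconds=30):
        """Sleep-poll until the indicator file shows up, without hammering the share."""
        while not os.path.exists(flag_path):
            time.sleep(poll_seconds)

    wait_for(r"\\server\share\jobs\doneA")
    # ...now it is safe to run the legacy application on input1...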
Sorry for the really long explanation, but I just don't want you to waste your time offering alternate solutions that I probably will not be able to implement.
I have read this question and its responses. I don't think I can use anything like lsof because these files would be open on different computers.
Write output to a temp file. When done writing, close it, then rename it to the name the other program is waiting for. That way, the file only appears when it is ready to be read.
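A bare-bones sketch of that idea (the filenames are just examples; note that on Windows os.rename fails if the target already exists, so use os.replace on Python 3.3+ if you need to overwrite):

    import os

    tmp_name = "input1.tmp"
    final_name = "input1"

    # write everything to the temp name first
    with open(tmp_name, "w") as f:
        f.write("...rows for the legacy application...\n")

    # on the same volume the rename is effectively atomic, so the reader either
    # sees no file at all or the complete one - never a half-written file
    os.rename(tmp_name, final_name)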
if there is a way, in python, to tell if a file on a network drive is open for writing by any application on any computer?
Not really.
Windows will let you open the file several times and really muck things up.
You have to use some explicit synchronization. Rather than synchronize each of the three steps 100 different ways, my preference is to do the following. Create 100 copies of the three-step dance. Don't worry about synchronization among the steps.
import subprocess

# Python 3 syntax (on Python 2, add: from __future__ import print_function)
for variant in range(100):
    name = "variant_{0}.bat".format(variant)
    with open(name, "w") as script:
        print("run some SAS thing", file=script)      # step 1: create the input file
        print("run some legacy thing", file=script)   # step 2: run the legacy application
        print("run some SAS thing", file=script)      # step 3: summarize the results
    subprocess.Popen("start {0}".format(name), shell=True)
I suspect that this will crush the life out of your processor by running all 100 in parallel.
As a practical matter, you probably don't want to launch all of them at once with subprocess.Popen() in Python. Instead, you probably want to create several of these "start variant_x" batch files, each of which runs a few variants in parallel. You can create some kind of master bat file that runs a sequence of processing steps, where each step starts several parallel 3-step variants; a rough sketch follows.
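For example, one way to sketch that grouping - the group count and the use of "call" are assumptions about how much parallelism the box can handle:

    import subprocess

    GROUPS = 4                                   # run roughly 4 variants at a time
    for group in range(GROUPS):
        name = "group_{0}.bat".format(group)
        with open(name, "w") as master:
            for variant in range(group, 100, GROUPS):
                # "call" runs each variant's 3-step .bat to completion before the next one starts
                print("call variant_{0}.bat".format(variant), file=master)
        subprocess.Popen("start {0}".format(name), shell=True)   # the groups themselves run in parallel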