Obviously, the same program is being run, and the code is the same. Why do two run configurations for the same script appear?
This can happen when you execute scripts that have the same name but different file locations or run configurations.
It can be useful in a variety of situations, for example when you want to run the same script under multiple versions of Python: each run configuration can be set up to use a separate interpreter. In that case, though, it would be wise to rename the run configuration.
Here, PyCharm simply took the name of the script and appended (1) when it saw that another run configuration with that name already existed.
Related
I have separate preprocessing and training Python scripts. I would like to track my experiments using mlflow.
Because my scripts are separate, I am using a PowerShell script (think of it as a shell script, but on Windows) to trigger the Python scripts and to make sure that they all operate with the same configuration, parameters, and data.
How can I track across scripts into the same mlflow run?
I am passing the same experiment name to the scripts. To make sure the same run is picked up, I thought to generate a (16-byte hex) run ID in my PowerShell script and pass it to all the Python scripts.
This doesn't work: when a run ID is passed to mlflow.start_run(), mlflow expects that run ID to already exist, and it fails with mlflow.exceptions.MlflowException: Run 'dfa31595f0b84a1e0d1343eedc81ee75' not found.
If I pass a run name, each of the scripts gets logged to a different run anyways (which is expected).
I cannot use mlflow.last_active_run() in subsequent scripts, because I need to preserve the ability to run and track/log each script separately.
I have a function in Python that I sometimes run manually and sometimes run automatically with Windows Task Scheduler. I want some functionality to differ depending on how it is run. Can I do something within the function that tells me how it was run?
The reason I need to know is that when I run it manually, I want to double-check with the user (e.g. me) before proceeding with some data overriding, but when it is run automatically, no one is there to confirm (and as it is done automatically, it was probably done right).
UPDATE:
My problem was solved in a different way: the automatic scripts are concentrated in a special path, so I can use os.getcwd() to check whether the script is run from that path, which means it was run automatically.
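A minimal sketch of that working-directory check; AUTO_DIR is a hypothetical path standing in for the "special path" the scheduled scripts live in:

```python
# Hypothetical sketch of the cwd-based check described above.
import os

AUTO_DIR = r"C:\automation\scripts"  # assumed Task Scheduler start-in path

def is_automatic_run():
    # Task Scheduler is configured to start the script from AUTO_DIR,
    # so the current working directory reveals how we were launched.
    return os.getcwd() == AUTO_DIR

def confirm_overwrite():
    if is_automatic_run():
        return True  # unattended run: proceed without asking
    answer = input("Overwrite existing data? [y/N] ")
    return answer.strip().lower() == "y"
```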
When running Pylint on my file I am getting the following message.
refactor (R0915, too-many-statements, function) Too many statements (95/50)
I want to raise the number of statements a function can have to 100 instead of 50 in order to avoid the above message from Pylint.
Pylint works from configured settings, whose defaults follow PEP 8 and common style conventions. Whether customizing them is good or bad can be a discussion for another time, as they are kept that way for a reason. For example, if you have a method with more than 50 statements, it usually means you are increasing cyclomatic and cognitive complexity as well as making the code difficult to unit test and cover.
OK, arguments aside, I think the following approach can help you customize the linting rule.
Open a command line where the pylint executable is available (it is installed into your Python Scripts folder, for example D:\Python37\Scripts, or into your virtual environment) and run the configuration generator:
pylint --generate-rcfile > custom_standard.rc
Now you will have a file named custom_standard.rc in the folder. Let’s copy it to some place around your project, say, D:\lint_config\custom_standard.rc.
Open the configuration file. You can see a setting for most of the rules. Now, for your question of number of statements inside a method, find the setting called
max-statements=50
Change it to:
max-statements=100
Save the configuration file. Now when you run the Pylint executable, use the option --rcfile to specify your custom configuration:
pylint --rcfile=D:\lint_config\custom_standard.rc project_dir
If you want to integrate this with an IDE like PyCharm, there are plugins that allow configuring the same.
But again, it is usually not a good decision to loosen the defaults :-)
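If only one function legitimately needs to exceed the limit, a narrower alternative to a project-wide rcfile is Pylint's inline disable comment, which is scoped to the function it annotates:

```python
# Suppresses R0915 (too-many-statements) for this one function only;
# the rest of the project keeps the default max-statements limit.
def long_report():  # pylint: disable=too-many-statements
    total = 0
    for i in range(10):
        total += i
    return total
```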
I have a Python script which takes a while to finish executing, depending on the passed argument. If I run it from two terminals with different arguments, does each get its own version of the code? I can't see two .pyc files being generated.
Terminal 1 runs: python prog.py 1000 > out_1000.out
Before the script running in terminal 1 terminates, I start another; thus terminal 2 runs: python prog.py 100 > out_100.out
Or basically, my question is: could they interfere with each other?
If both invocations wrote their output to the same file on disk, then yes, it would be overwritten. However, you are actually printing to stdout and redirecting it to two different files, so that is not the case here.
Now the answer to your question is simple: there is no interaction between two executions of the same code. When you execute a program or a script, the OS loads the code into memory and executes it; subsequent changes to the source have no effect on code that is already running. Technically, a running program is called a process. When you run the code in two different terminals, there are two different processes on the OS, one for each, and there is no way for two processes to interfere unless you explicitly arrange it (IPC, or inter-process communication), which you are not doing here.
So, in summary, you can run your code simultaneously in different terminals; the runs will be completely independent.
Each Python interpreter process is independent. How the script reacts to itself being run multiple times depends on the exact code in use, but in general they should not interfere.
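To make the independence concrete, here is a small sketch that launches the same one-liner as two separate processes with different arguments; the doubling code is a stand-in for prog.py:

```python
# Two concurrent invocations of the same code: separate OS processes,
# separate memory, each seeing only its own argument.
import subprocess
import sys

code = "import sys; print(int(sys.argv[1]) * 2)"  # stand-in for prog.py
procs = [subprocess.Popen([sys.executable, "-c", code, str(n)],
                          stdout=subprocess.PIPE, text=True)
         for n in (1000, 100)]
results = [p.communicate()[0].strip() for p in procs]
print(results)  # each process computed from its own argument
```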
Reference on .pyc files: http://effbot.org/pyfaq/how-do-i-create-a-pyc-file.htm
Python automatically compiles your script to compiled code, so-called byte code, before running it. When a module is imported for the first time, or when the source is more recent than the current compiled file, a .pyc file containing the compiled code will usually be created in the same directory as the .py file.
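As a quick check, py_compile can produce the byte-code file explicitly; note that on Python 3 the compiled file lands in a __pycache__ subdirectory rather than next to the .py file:

```python
# Compile a throwaway script to byte code without importing it.
import os
import py_compile
import tempfile

src = os.path.join(tempfile.mkdtemp(), "prog.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

pyc_path = py_compile.compile(src)  # path of the generated .pyc
print(pyc_path)
```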
If you are afraid your code might get overwritten by some mistake, you should learn to put your code under VERSION CONTROL. Register on GitHub and use git to do that.
The greater-than sign ">" redirects output: if you specify a file name, the output is pushed to that file. Even from different terminals, if you run the code inside the same folder and point ">" at the SAME file name, the file on the right of the ">" will definitely get overwritten.
A program's SOURCE CODE IS NOT mutable during execution (short of advanced self-modifying tricks).
Each program runs inside its own execution workspace. Unless your code taps into the same resources (like changing the same file, or other shared resources), there is no interference. (Except that if one process exhausts all CPU or memory, the second one will be affected, but that is another story.)
I am using the Command Prompt in Windows 8 to perform some analytical tasks with Python.
I'm using a few external libraries, some .py files with functions I need, and some setup commands (like setting certain variables based on databases I need to load).
In all, there are about 20 statements. The problem is that each time I want to work on it, I have to manually enter (copy/paste) all of these commands into the shell, which adds a few minutes each time.
Is there any way I can save all of these commands somewhere and automatically load them into the prompt when needed?
Yes.
Save your commands in a file in your home directory, then set your PYTHONSTARTUP environment variable to point to that file. Every time you start the Python interpreter, it will run the commands from the startup file.
Here is the link to an example of such a file and a more detailed explanation.
If you need different startup files for different projects, make a set of shell scripts, one per project. The scripts should look like this:
#!/bin/bash
export PYTHONSTARTUP=~/proj1Settings.py
python
And so on. Or you can simply change the value of the PYTHONSTARTUP variable before you start working on a particular project. Personally, I use macOS with iTerm2, so I set up multiple profiles for different projects. When I need to work on a particular project, I simply launch a tab with the profile configured for it.
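For completeness, a hypothetical ~/proj1Settings.py might look like this; everything defined in it is available in the interactive session's namespace when the interpreter starts (the project path and helper below are made-up examples):

```python
# Hypothetical PYTHONSTARTUP file (~/proj1Settings.py); the project
# path and helper below are illustrative assumptions.
import os
import sys

PROJECT_ROOT = os.path.expanduser("~/proj1")  # assumed project layout
sys.path.insert(0, PROJECT_ROOT)              # make project modules importable

def greet():
    """Example helper available at the interactive prompt."""
    return "proj1 environment loaded"

print(greet())
```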