I am trying to initialize a local server in Python.
How is it possible that the server initialization I show in the image below is working?
Funnily enough, app.run is not crashing the program. I was expecting some error of the kind: "app" does not have any run method, or "app" has not been defined, or something like that.
From PEP 8:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
If you put your imports at the top of the file, before you initialize and run app, everything should be pulled in correctly and work.
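For example, here is a minimal sketch of what that looks like with the imports at the top (this assumes the standard Flask pattern, since the image itself is not reproduced here):

from flask import Flask   # import first, at the top of the file

app = Flask(__name__)     # then define the module-level app object

@app.route("/")
def index():
    return "Hello, world!"

if __name__ == "__main__":
    app.run()             # by this point app exists and has a run() method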
I am using VS Code for Python scripts. These scripts run only on the server.
The server, however, passes certain variables to the script while executing it.
e.g. mbo is always passed in; mbo is a special name that corresponds to a certain class.
Sample mbo.py
class Mbo:
    def getString(self, column: str) -> str:
        return 'ABC'

    def setString(self, columnName: str) -> None:
        # do something with columnName.
        pass
Goal:
In my project, in any Python file, whenever the user types mbo followed by a ., VS Code should show autocomplete for .getString() and .setString() without importing this class, as it is passed to the script by the server.
I can try to write a VS Code extension to add this feature.
I am stuck on what kind of extension is needed here. An LSP? I don't want to lose the features of the existing Python LSP.
Can anyone proficient with the VS Code extension API guide me in the right direction?
Note: I cannot simply import this Mbo class for autocompletion in VS Code, because if I import it and then run the same script on the server, the server throws errors about the file.
You could try making the imports conditional:
try:
    mbo
except NameError:
    import mbo
That might be enough to make one of the two IntelliSense engines in the Python extension work.
Otherwise you are looking at writing your own extension. An LSP is obviously the best option, but there's also the classic style of registering providers directly through the extension API; the VS Code docs have the details. But you are still probably going to clash with the Python extension, as both it and your extension will be registered to work with Python files.
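To make that guard concrete, here is a sketch of how it might sit at the top of one of the server scripts. This adapts the snippet above to the Mbo class from the question; the local instantiation and the column names are purely illustrative, and exist only so the editor can resolve the name when working locally:

# Editing-time fallback only: on the server, mbo is already injected.
try:
    mbo
except NameError:
    from mbo import Mbo   # local stub module from the question
    mbo = Mbo()

# The rest of the script uses mbo as usual.
site = mbo.getString('SITEID')   # 'SITEID' is just an example column
mbo.setString('DESCRIPTION')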
My request seems unorthodox, but I would like to quickly package an old repository consisting mostly of executable Python scripts.
The problem is that those scripts were not designed as modules, so some of them execute code directly at the module top level, and some others have an if __name__ == '__main__' block.
How would you distribute those scripts using setuptools, without too much rewrite?
I know I could just put them under the scripts option of setup(), but it's not advised, and also it doesn't allow me to rename them.
I would like to skip defining a main() function in all those scripts, also because some scripts call weird recursive functions with side effects on global variables, so I'm a bit afraid of breaking stuff.
When I try providing only the module name as console_scripts (e.g. "myscript=mypkg.myscript" instead of "myscript=mypkg.myscript:main"), it logically complains after installation that a module is not callable.
Is there a way to create scripts from modules? At least when they have an if __name__ == '__main__' block?
I just realised part of the answer:
In the case where the module executes everything at the top level, i.e. on import, it's fine to define a dummy "no-op" main function, like so:
# Content of mypkg/myscript.py
print("myscript being executed!")

def main():
    pass  # Do nothing!
This still forces me to add these lines to the existing scripts, but I think it's a quick yet cautious solution.
There is no solution this way when the code is under an if __name__ == '__main__' guard, though...
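For completeness, here is a minimal setup.py sketch that wires such a module into a console_scripts entry point (the package and script names are illustrative, following the example from the question):

from setuptools import setup, find_packages

setup(
    name="mypkg",
    version="0.1",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # The entry point must be a callable, hence the no-op main().
            "myscript=mypkg.myscript:main",
        ],
    },
)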
You can use the following code:
def main():
    pass  # or do something

if __name__ == "__main__":
    main()
I have a Python Flask server. In the main of my script, I get a variable from the user that defines the mode of my server. I want to set this variable in main (write only) and use it (read only) in my controllers. I am currently using os.environ, but I'm searching for a more Flask-like way to do this.
I googled and tried the following options:
flask.g: it is reset for each request, so I can't use it inside controllers when it is set somewhere else.
flask.session: it is not accessible outside a request context, and I can't set it in main.
Flask-Session: like the second item, I can't set it in main.
In main you use app.run(), so app is available in main. It should also be available at all times in all functions, so you can try to use
app.variable = value
but Flask also has
app.config
to keep any settings. See the docs: Configuration Handling
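A minimal sketch of that approach (the key name MODE is illustrative; current_app is Flask's standard way to reach the app object from inside a view):

from flask import Flask, current_app

app = Flask(__name__)

@app.route("/mode")
def show_mode():
    # Read only: the value was written once in main before the server started.
    return current_app.config["MODE"]

if __name__ == "__main__":
    app.config["MODE"] = input("Server mode: ")  # write once in main
    app.run()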
Some time ago I created a script using Python; the script executes some actions on an instance based on a configuration file.
This is the issue: I created two configuration files.
Config.py
instance= <Production url>
Value1= A
Value2= B
...
TestConfig.py
instance= <Development url>
Value1= C
Value2= D
...
So when I want the script to execute the tasks in a development instance to do tests, I just import the TestConfig.py instead of the Config.py.
Main.py
# from Config import *
from TestConfig import *
The problem comes when I update the script using git. If I want to run the script in development, I have to modify the file manually, which means I will have uncommitted changes on the server.
Making this change takes about a minute of my time, but I feel like I'm doing something wrong.
Do you know if there's a standard or right way to accomplish this kind of task?
Use this:
try:
    from TestConfig import *
except ImportError:
    from Config import *
On production, remove TestConfig.py
Export an environment variable on your machines, and choose the settings based on that environment variable.
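A minimal sketch of that idea (the variable name APP_ENV is illustrative; the module names follow the question):

import os

# e.g. run `export APP_ENV=development` on the development machine
if os.environ.get("APP_ENV") == "development":
    from TestConfig import *
else:
    from Config import *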
I think Django addresses this issue best with local_settings.py. Based on that approach, at the end of all your imports (after from config import *), just add:
# Add this at the end of all imports.
# This is safe to commit and even push to production so long as you don't have local_config on your production server.
try:
    from local_config import *
except ImportError:
    pass
And create a local_config.py per machine. What this will do is import everything from config, and then again from local_config, overriding global configuration settings, should they have the same name as the settings inside config.
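For example, the local_config.py on the development machine might contain only the values that differ (the contents below are illustrative, mirroring TestConfig.py from the question):

# Content of local_config.py on the development machine (not committed)
instance = "<Development url>"
Value1 = "C"
Value2 = "D"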
The other answers here offer perfectly fine solutions if you really want to differentiate between production and test environments in your script. I would advocate for a different approach, however: to properly test your code, you should create an entirely separate test environment and run your code there, without any changes (or changes to the config files).
I can't make any specific suggestions for how to go about this since I don't know what the script does. In general though, you should try to create a sandbox that spoofs your production environment and is completely isolated. You can create a wrapper script that will run your code in the sandbox and modify the inputs and outputs as necessary to make your code interact with the test environment instead of the production environment. This wrapper is where you should be choosing which environment the code runs in and which config files it uses.
This approach abstracts the testing away from the code itself, making both easier to maintain. Designing for test is a reasonable approach for hardware, where you are stuck with the hardware you have after fabrication, but it makes less sense for software, where wrappers and spoofed data are easier to manage. You shouldn't have to modify your production code base just to handle testing.
It also entirely eliminates the chance that you'll forget to change something when you want to switch between testing and deployment to production.
I don't know if it is feasible to paste all of the code here, but I am looking at the code in this Git repo.
If you look at the example, they do:
ec2 = EC2('access key id', 'secret key')
...but there is no EC2 class. However, it looks like there is a dict in libcloud/providers.py that maps EC2 to the EC2NodeDriver found in libcloud/drivers/ec2.py. The correct mapping is calculated by get_driver(provider), but that method doesn't appear to be called anywhere.
I am new to Python, obviously, but not to programming. I'm not even sure what I should be looking up in the docs to figure this out.
example.py includes an import statement that reads:
from libcloud.drivers import EC2, Slicehost, Rackspace
This means that the EC2 class is imported from the libcloud.drivers module. However, in this case, libcloud.drivers is actually a package (a Python package contains modules), which means that EC2 should be defined in the file __init__.py in libcloud/drivers/, but it's not. So in this specific case, their example code is actually wrong. (I downloaded the code and got an import error when running example.py, and as you can see, the file libcloud/drivers/__init__.py does not contain any definitions at all, least of all an EC2 definition.)
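As an alternative, the provider mapping mentioned in the question could be used instead of the broken import. A rough sketch, assuming the get_driver/providers layout described above (module paths may differ between libcloud versions):

from libcloud.providers import get_driver
from libcloud.types import Provider

# Look up the EC2 driver class through the provider mapping
# instead of importing EC2 from libcloud.drivers directly.
Driver = get_driver(Provider.EC2)
ec2 = Driver('access key id', 'secret key')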
Checking out libcloud/examples.py might be helpful. I saw this:
from libcloud.drivers import EC2, Slicehost, Rackspace
The Python import statement brings in the class from another Python module, in this case from the libcloud.drivers package.