How to call the yolov7 detect method from a Python program

So far, I am using detect.py with appropriate arguments for object detection tasks, using a custom-trained model.
How can I call the detect method with the parameters (weights, source, conf, and img_size) from a Python program, instead of using the CLI script?
I have been unable to do so.

You can create a main.py file where you call all these methods from.
Make sure you import them at the top of main.py, e.g. from detect import detect (note that the module name in an import statement has no .py suffix), or whatever else you want to call from this file.
It's hard to give more precise advice without more input from you.
Then you just run your main file.
Alternatively, consider using a Jupyter notebook. It's not the 'nicest' way, but it makes everything more convenient for testing.
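If the import route proves awkward (the stock yolov7 detect.py parses its options with argparse inside an if __name__ == '__main__' block, so the detect function is not trivially callable in isolation), here is a minimal sketch that drives the script from Python with the same arguments instead. The flag names assume the stock yolov7 detect.py; the weights and source values are placeholders:

import subprocess
import sys

# Run detect.py as a child process with the usual CLI flags;
# check=True raises CalledProcessError if detection fails
subprocess.run([
    sys.executable, "detect.py",
    "--weights", "best.pt",            # placeholder: your custom model
    "--source", "inference/images",    # placeholder: image file or folder
    "--conf-thres", "0.25",
    "--img-size", "640",
], check=True)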

Related

Prevent any file system usage in Python's pytest

I have a program that, for data security reasons, should never persist anything to local storage when deployed in the cloud. Instead, any input/output needs to go to the connected (encrypted) storage.
To allow deployment locally as well as to multiple clouds, I am using the very useful fsspec. However, other developers are working on the project as well, and I need a way to make sure that they aren't accidentally using local file I/O methods, which may pass unit tests but fail when deployed to the cloud.
My idea is to mock/replace any I/O methods in pytest with ones that don't work, so that such tests fail. However, this is probably not straightforward to implement. Has anyone else had this problem, and do best practices or a library for this already exist?
During my research I found pyfakefs, which looks very close to what I am trying to do, except that I don't want to simulate another file system; I want there to be no local file system at all.
Any input appreciated.
You cannot make this secure with pytest add-ons alone; there will always be ways around them. Even if you patch everything in the Python standard library, the code can still use third-party C libraries that can't be patched from the Python side.
Even if you somehow restricted every way the Python process can write a file, it would still be able to ask the OS or another process to write something for it.
The only real options are to run only trusted code, or to run the process in some kind of sandbox.
On Unix-like operating systems, a workable solution may be to create a chroot and run the program inside it.
If you're OK with just preventing files from being opened via the open function, you can patch that function in the builtins module:
import builtins
import pytest

_original_open = builtins.open

class FileSystemUsageError(Exception):
    pass

def patched_open(*args, **kwargs):
    # Fail loudly instead of touching the local file system
    raise FileSystemUsageError()

@pytest.fixture
def disable_fs():
    # Swap open out for the failing version for the duration of the test
    builtins.open = patched_open
    yield
    builtins.open = _original_open
I based this example on a pytest plugin written by the company I currently work for, which prevents network use in pytest. You can see the full example here: https://github.com/best-doctor/pytest_network/blob/4e98d816fb93bcbdac4593710ff9b2d38d16134d/pytest_network.py
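For illustration, a hypothetical test using the disable_fs fixture above (the test name and file name are made up):

def test_no_local_io(disable_fs):
    # Any attempt to open a local file now raises FileSystemUsageError
    with pytest.raises(FileSystemUsageError):
        open("data.txt", "w")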

Running Python from R

I am aware that there are multiple libraries for both languages (R/Python) to call modules from the other one. I am looking for a way to have the backend of my code running in Python, mainly because of .pyc and speed, and the front end running in R so I can have a Shiny app. I couldn't find a way to set up a Python engine for the backend. If anyone knows how to do this in R/RStudio, please respond.
I don't have any good benchmarks on its speed, but the reticulate package is the best way I know of to pass data to and from a Python script without using a web server. It lets you import Python objects into R, where they act like R objects, accepting arguments and returning values.
I ran into a few wonky problems when I just wanted to run functions from a single file: there were issues with import statements and with multiple functions calling each other. What worked well was to run the import statements separately (see the sapply() call below) and to merge all the code in my Python script into a single object. This worked nicely and seemed about as fast as running it in Python normally (though I haven't done any real benchmarking).
library(reticulate)
use_python(python = '/usr/bin/python') # optionally specify python location
# Import statements are here, not in the file
sapply(c("import mysql.connector", "import re"), py_run_string)
# File contains only the definition of class MismatchFinder
source_python("python_script.py")
# Now we can call on that python object from R
result <- MismatchFinder()$find_mismatch(arg1, arg2)
My impression is that it might be simpler if you make your Python code into a module and load it with py_module <- import_from_path('my_python_module', path = 'PATH'), but I didn't try that.
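For illustration, a minimal hypothetical my_python_module.py in that style, with a single class wrapping all the logic so reticulate only has to import one object (the class and method names mirror the MismatchFinder example above; the body is made up):

class MismatchFinder:
    def find_mismatch(self, expected, actual):
        # Return the indices at which the two strings differ
        return [i for i, (a, b) in enumerate(zip(expected, actual)) if a != b]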
Hope this helps!
I believe what you are looking for is the code below. It will run a Python script from R.
system('python3 file_name.py')

Python calling function from DLL (WinDLL)

I have tried a lot of methods, but each one behaves strangely. Some return an error, some return a value, but not the value I need ("0" rather than a string).
So I am trying to use Everything from Python code. I made an example tool, but when I started using the API I got stuck: simply put, I can't call the function, or I'm calling it wrongly.
You can see my code on GitHub (with comments).
To run it you need the DLL (which you can also find on GitHub), Everything itself, and Python 3.5.
Run it in cmd with the command: fs [path] (e.g. C:\, though the path isn't used right now, as I can't call the function).
The relevant code starts at line 73 and ends at line 126.
Rather than struggling with calling "Everything", I would suggest taking a look at the Python built-in library functions os.walk, fnmatch.fnmatch, and glob.glob. They work very well for doing what Everything does, with no external dependencies and a simple, Pythonic calling interface.
Additionally, if you write your code using them, your code will work on every platform that supports Python.
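For example, here is a short sketch of a recursive file search built from those functions (find_files is a hypothetical helper name, and the root path and pattern are placeholders):

import fnmatch
import os

def find_files(root, pattern):
    # Walk the tree under root, yielding paths whose file name matches pattern
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            yield os.path.join(dirpath, name)

for path in find_files("C:\\", "*.txt"):
    print(path)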

How can I call a python script with arguments from within Processing?

I have a python script which outputs JSON when called with different arguments. I am looking for a way to call that script from within Processing and load the output using something like loadJSONObject().
The problem is that I don't know how to call the python script with arguments from within Processing.
Any tip will be appreciated, thanks!
One option, as pointed out in the comments, is to use open, and then load the file it generates the normal way.
Another, arguably much better, way is not to do this at all, but to run your Python script as a service with a web interface instead, so that it sits listening on http://localhost:1234, for instance, and your Processing sketch can simply load "http://localhost:1234/somefile?input=whatever" without caring what actually generates the content.
The upside there is also that you can run your script anywhere that can be reached via a URL, and nothing then relies on python being available as an executable.
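As a sketch of that idea using only the Python standard library (the port 1234 matches the example above; the "input" query parameter and the JSON fields are assumptions for illustration):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class SketchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the argument the Processing sketch appends to the URL
        query = parse_qs(urlparse(self.path).query)
        arg = query.get("input", [""])[0]
        # Build whatever JSON your script would normally produce
        body = json.dumps({"input": arg}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 1234), SketchHandler).serve_forever()

The Processing sketch can then call loadJSONObject("http://localhost:1234/?input=whatever") directly, since Processing's load functions also accept URLs.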

Display in pdb++ not working

I was trying to modify pdb++ to add a watch capability to it. However, I found that it is already implemented: the display command in pdb++ does watch variables.
I'm able to watch all the variables inside my test file (test.py) if I do
python pdb.py test.py
But if I add the following code inside test.py (and run it as python test.py), many commands like display, sticky, etc. no longer work.
import pdb as bp
bp.set_trace()
Does anyone have any ideas what I'm missing? Thanks!
