It is supposedly possible to debug a Python3/Cython project using gdb, after building gdb from source and configuring it with python2.7 as specified in the Cython debugging docs.
However, the example in the docs:
is sometimes vague (e.g. the code should be built with python3 and debugger run with python2, but I discovered python-gdb is actually linked to python2 ... so how does that work?)
is incomplete (some steps covered in code blocks, others vaguely referred to in text)
is inconsistent (e.g. references to both source.pyx and myfile.pyx)
Furthermore, it:
does not take the use of virtual environments into account
seems to assume the main function resides in the .pyx (but mine resides in a regular main.py)
does not specify what to do when your files live in different directories (e.g. like my main.py and cythonCode.pyx do)
Could someone please explain (preferably with a working example) how to debug a Python3/Cython project in a situation involving all 3 points just mentioned?
At the moment it seems I can actually get DDD to work by following this Cython wiki article, but I then discovered that this is the 'old' way of doing it, and it refers to the current debugging docs I also linked to. At this point, however, it is unclear to me how the 'new' method works (the old one makes more sense to me), and it certainly seems more complex to get working.
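For reference, the workflow I have pieced together from the docs so far (which may well be wrong, which is partly why I am asking; cythonCode.pyx and main.py are my own files) is roughly a setup.py along these lines:

# setup.py: build the extension with Cython debug information
from setuptools import setup, Extension
from Cython.Build import cythonize

setup(ext_modules=cythonize(
    [Extension("cythonCode", ["cythonCode.pyx"])],
    gdb_debug=True,  # emit the debug files that cygdb looks for
))

followed by something like:

python3 setup.py build_ext --inplace
cygdb . -- --args python-dbg main.py

but it is exactly how these steps interact with python2/python3, virtual environments and my directory layout that I do not understand.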
Related
I have a question regarding simple imports that I cannot get my head around.
Take a look at the attached screenshot to see my project layout.
The file somefile.py imports the class SayHello from a file called someclass.py and calls it. someotherfile.py does the exact same thing. They both use from someclass import SayHello.
In PyCharm both files run. However, from the command line or from VSCode, somefile.py runs, but someotherfile.py errors out with the following error:
ModuleNotFoundError: No module named 'someclass'
I believe it has something to do with PYTHONPATH/environment variables or something like that, but every explanation I have read has confused me so far (even this one, which I thought was going to set me straight: Relative imports for the billionth time).
Can someone explain in simple terms what is happening here? What is PyCharm doing by default that other editors are not, such that my imported modules are found? How can I make someotherfile.py work in VSCode without modifying it?
Thanks in advance!
PyCharm adds your project directory to the Python path by default. Check the configuration of the "PyCharm run" that you execute and you will see a few checkboxes like these:
If those are checked, PyCharm creates the PYTHONPATH environment variable for you, which tells Python where to look for the someclass module.
You will have to configure VSCode to define the PYTHONPATH environment variable for the python command you run and include your project root directory in it.
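For example (a rough sketch; the path is a placeholder for your project root, and the exact behaviour depends on the Python extension version), you can put a .env file in the workspace root, which the extension picks up via its python.envFile setting (default ${workspaceFolder}/.env):

# .env in the workspace root
PYTHONPATH=/absolute/path/to/your/project/root

After reloading the window, someotherfile.py should find someclass without any change to the code itself.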
TLDR: Mess with the line starting with ./automated, pointing it to various directories in your project, until it works haha.
Long rambling answer: Alright, now that I am not in a frenzy from trying to figure this out and it has been a day, I feel like I can make a coherent response (let's see if that is true).
So my original answer was an attempt to simplify my problem into what I thought was the issue, due to a ModuleNotFound error I was getting. I had been trying to get unittests in Python inside of Visual Studio Code to work (hint hint: just use PyCharm, it works out of the box there), but the integrated testing feature could not find my tests, giving ModuleNotFound as the reason.
The solution to my problem actually just concerned the line ./automated-software-testing-with-python/blog.
In the below screenshot the relevant part is ./automated-software-testing-with-python/blog.
This is it when it is correctly configured (My tests work Woo hoo!)
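In concrete terms, the screenshot corresponds to something like this in .vscode/settings.json (the exact setting names depend on the extension version, and the file pattern is just a guess at what the wizard generated for me):

"python.testing.unittestEnabled": true,
"python.testing.unittestArgs": [
    "-v",
    "-s", "./automated-software-testing-with-python/blog",
    "-p", "*_test.py"
],

The argument after -s is the start directory that test discovery uses, and that is the part I had to point at blog.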
Now you can go ahead and read the official documentation for what this line is doing, but I am convinced that the documentation is wrong. Due to how my code is structured and how my files are named, what the documentation says it's looking for definitely does not exist. But that is another can of worms...
I actually got it to work by accident. Originally, when you go through the wizard to set up what's in that screenshot above, it defaulted to ./automated-software-testing-with-python, which did not work. I then manually edited it to something else (not what is in the screenshot) hoping to get it to work.
It was only when I pointed it to that blog directory by accident (thinking I was in a different file after trying to debug this for hours and hours in a blind rage) that it ended up working.
I did a bunch of PYTHONPATH manipulation and environment variable mumbo jumbo, and I originally thought that had an effect, but when I cloned my repo to my other machine (which did not have any of that environment variable / PYTHONPATH stuff going on) it worked too (again, provided that the line in question pointed to blog).
Ok, hopefully that might help someone someday, so I'll end it there. Not to end on a bitter-sounding zinger, but I will never cease to be amazed by how doing such simple things as configuring the most basic unit test can be so difficult in our profession haha. Well, now I can start working. Thanks for the help to those who answered!
I built an EXE file from a Python script using PyInstaller, using
pyinstaller --onefile myscript.py
Packages I used:
pandas, numpy, imutils, opencv, logging, os, random, json, string, csv, datetime, uuid
The EXE runs fine on my PC. However, when I try it on another PC I get the error shown in this screenshot: https://www.screencast.com/t/msZrURL4v
Any idea what the problem is?
The error you posted just says "I was looking for one specific DLL and did not find it".
Rather than installing other packages and extensions that might, or might not, be or somehow contain the right DLL, you now need to determine exactly what it is that isn't to be found.
I can suggest three complementary methods, none absolutely certain to pinpoint the exact problem (of course the voodoo method of "install some package at random and see whether it fixes it" might also work, and often does -- but that's magic, not computer science):
the quickest: check the pyimod03_importers.py file at line 714, see what it was doing when the exception was thrown. Due to Windows' library loading strategies, you might be handed a red herring, with a file reported not to be there when it actually is, because it relies on a second missing file whose name you won't be told.
the easiest: use a tool like DEPENDS.EXE (Dependency Walker) to inspect the OMR.EXE file. This is almost guaranteed not to work in this case, because the needed imports might be specified in Python format, not in any form that DEPENDS.EXE will recognize.
the most comprehensive, but least easy: use a tool like SysInternals' PROCMON, set up the filters to exclude the background noise of Windows' idle state - there will be an awful lot of that - and then fake running OMR.EXE; exclude the additional noise generated by that. You'll need fortyish filters to be set up. Finally, run OMR.EXE. Near the end, you will see a series of attempts to load SOMETHING.DLL, all failing; the first location is where the DLL is supposed to be (by either Python or OMR), the others are all suitable alternatives.
Then:
if the DLL is one of yours, find out how to pack it into the EXE bundle (see the command sketch after this list).
if it is not, you need to reliably assess where it can be found.
It might well be that the suggestion you were given - install this-or-that version of the MSVC redistributable - was absolutely correct. Libraries with names like MSVCnn... belong to that package. MSO... files usually belong to Microsoft Office redistributables. MSJET... files are found in several Microsoft packages, for example the .NET redistributable.
otherwise, Google and possibly MSDN Search Engine are your friends.
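For the "one of yours" case, PyInstaller can bundle the file itself; a sketch (the DLL name and path are placeholders):

pyinstaller --onefile --add-binary "C:\libs\something.dll;." myscript.py

Note that the separator inside --add-binary is a semicolon on Windows and a colon on Linux/macOS; the "." means the DLL is dropped at the root of the unpacked bundle.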
From past experience, I suggest setting up a virtual machine for testing, then seeing what packages are needed. This is because the first DLL crash will hide any subsequent ones, and you might need to repeat the above steps several times. The fact that the first library you need is supplied by the NETFX64 package and the second by the Microsoft Office runtime might be true, but when you find out that the second library is needed, you might also find out that the MSO runtime would have supplied the first also; so at that point, and not before, you discover that the NETFX64 package wasn't really needed, and can simplify your installation requirements to the MSO runtime alone.
Boiling down the requirements to a short list might be a lengthy task and you will want to restart the machine from scratch more than once. With a VM, that is easy to do.
(I've kept referring to the MSO runtime because I figure that your program will process a checkbox answers module, and will likely need, or believe it needs, some scanner recognition features, which the MSO runtime supplies. If that is so, they'll probably come last.)
I'm new to Python and trying to get comfortable with the syntax and the language. I gave PyCharm a shot and found it very comfortable.
The only problem is that auto-completion isn't working as I expected and it is very important to me as part of the learning process and looking into some modules.
My code works even without the autocomplete but I'm very used to it and really wish to enjoy this feature.
I tried changing my project interpreter back and forth and nothing changed. I tried restarting PyCharm, the computer - didn't work. I tried Invalidate Cache, made sure the power save mode is off - nada.
Here is an example of missing autocomplete for lxml:
And here is the interpreter window:
Python is a dynamically typed language, so the return type of a function is not always known in advance. PyCharm generally looks at the source of a function to guess what it returns. It can't in this case because etree.parse is written in Cython, not Python. If you ask PyCharm to go to the definition of the function it gives you a stub.
The Python ecosystem has recently started to tackle this problem by providing various ways to annotate files with type hints for use by external tools, including PyCharm. One way is through .pyi files. A large collection of these can be found in the typeshed project. This issue shows that writing hints for lxml was proving difficult, and not wanting to have incomplete stubs in the typeshed repo, they were moved to their own repo here. The stubs are indeed very incomplete, and when I tried downloading and using them in PyCharm the results were pretty dismal. They correctly identify that etree.parse returns an etree._ElementTree, but the stub for _ElementTree only has two methods.
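For context, a .pyi stub is nothing more than a file of annotated signatures with no bodies; a deliberately tiny, hypothetical stub for the parts of lxml.etree used here might look like:

# etree.pyi (hypothetical and heavily simplified)
class _ElementTree:
    def getroot(self): ...
    def write(self, file, encoding=..., xml_declaration=...): ...

def parse(source, parser=..., base_url=...) -> _ElementTree: ...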
I got much better results by annotating directly in the Python file, e.g.
tree = etree.parse(path) # type: etree._ElementTree
(you can find out the type by checking type(tree))
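On Python 3.6+ a variable annotation works just as well, and PyCharm picks it up the same way:

tree: etree._ElementTree = etree.parse(path)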
PyCharm itself somehow knows what the methods on _ElementTree are so now autocomplete works. Unfortunately it seems that using .pyi files makes PyCharm forget this knowledge.
Here is documentation on type hinting in PyCharm.
And yes, in general you will have to get used to having less autocompletion, type information, and static analysis. Fortunately, I think there is a lot to make up for it that isn't possible in other languages :)
Install Kite, it's a super fast auto-suggest engine for Python. It works with PyCharm, Sublime, etc.
For more details, view this YouTube video.
New to Python, so excuse my lack of specific technical jargon. Pretty simple question really, but I can't seem to grasp or understand the concept.
It seems that a lot of modules require using pip or easy_install and running setup.py to "install" into your Python installation or your virtualenv. What is the difference between installing a module and simply taking it and importing it into another script? It seems that you access the modules the same way.
Thanks!
It's like the difference between:
Uploading a photo to the internet
Linking the photo URL inside an HTML page
Installing puts the code somewhere python expects those kinds of things to be, and the import statement says "go look there for something named X now, and make the data available to me for use".
For a single module, it usually doesn't make any difference. For complicated webs of modules, though, an installation program may do many things that wouldn't be immediately obvious. For example, it may also copy data files into locations where the new modules can find them, put executables (binary libraries, or DLLs on Windows, for example) where the new modules can find them, do different things depending on which version of Python you have, and so on.
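For instance (a small illustration; site.getsitepackages() may not exist inside some virtualenvs), you can see both halves of the story from Python itself:

import site
import sys

print(site.getsitepackages())  # where "pip install ..." puts packages
print(sys.path)                # the directories that "import x" searches, in order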
If deploying a web of modules were always easy, nobody would have written setup programs to begin with ;-)
I made a Python module (https://github.com/Yannbane/Tick.py) and a Python program (https://github.com/Yannbane/Flatland.py). The program imports the module, and without it, it cannot work. My intention is that people download both of these files before they can run the program, but I am a bit concerned about this.
In the program, I've added these lines:
sys.path.append("/home/bane/Tick.py")
import tick
"/home/bane/Tick.py" is the path to my local repo of the module that needs to be included, but this will obviously be different to other people! How can I solve this situation better?
What @Lattyware suggested is a viable option. However, it's not uncommon to have core dependencies bundled with the main program (Django and PyDev do this, for example). This works fine, especially if the main code is tweaked against a specific version of the library.
In order to avoid the maintenance troubles mentioned by Lattyware, you should look into git submodules, which allow precisely this kind of layout while keeping code versioning sane.
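For example, something along these lines from the root of the Flatland.py repository (the target directory name is up to you):

git submodule add https://github.com/Yannbane/Tick.py tick

People cloning your program afterwards would then use git clone --recursive to pull the module in as well.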
From the structure of your directory it seems that both files live in the same directory. This might be a tell-tale sign that they are two modules of the same package. In that case you should simply add an empty file called __init__.py to the directory, and then your import could work as:
import bane.tick
or
from bane import tick
Oh, and yes... you should use lower case for module names (it's worth taking an in-depth look at PEP8 if you are going to code in Python! :)
HTH!
You might want to try submitting your module to the Python Package Index; that way people can easily install it (pip install tick) into their path, and you can just import it without having to add it to the Python path.
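A minimal packaging sketch (the version number is a placeholder) could be as small as:

# setup.py, placed next to tick.py
from setuptools import setup

setup(
    name="tick",
    version="0.1.0",
    py_modules=["tick"],
)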
Otherwise, I would suggest simply telling people to download the module as well, and place it in a subdirectory of the program. If you really feel that is too much effort, you could place a copy of the module into the repository for the program (of course, that means ensuring you keep both versions up-to-date, which is a bit of a pain, although I imagine it may be possible just to use a symlink).
It's also worth noting that your repo name is a bit misleading; capitalisation is often important, so you might want to call the repo tick.py to match the module and Python naming conventions.