Does Python have a rope data structure?

In writing some Python code, I came upon a need for a string-like data structure that offers fast insertion into, access to, and deletion from arbitrary positions. The first data structure that came to mind was a rope. Does Python have a rope data structure already implemented somewhere? I've looked through the standard library and PyPI, but I haven't seen one. (It doesn't help that there's a refactoring library for Python by the name of Rope as well as a company called Python Rope that sells wire rope.)

There isn't one in the standard library, but there are implementations out there, e.g. pyropes.
There's also this list of various non-built-in data structure implementations for Python.

Yes! There is a package for the rope data structure available on PyPI (named pyropes), written purely in Python 3. You can install it using
pip install pyropes
It also has full documentation on how to use it, though it requires Python >= 3.6 (because it uses f-strings).
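If you just want a feel for how a rope achieves fast insertion, here is a minimal, illustrative sketch of the data structure in pure Python. This is not the pyropes API; the class and function names are made up, and balancing is omitted:

class Leaf:
    """A leaf holds a short, immutable piece of the string."""
    def __init__(self, text):
        self.text = text
        self.length = len(text)

    def index(self, i):
        return self.text[i]

    def split(self, i):
        return Leaf(self.text[:i]), Leaf(self.text[i:])

class Concat:
    """An internal node; `weight` is the length of the left subtree,
    so indexing can skip whole subtrees instead of copying strings."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.weight = left.length
        self.length = left.length + right.length

    def index(self, i):
        if i < self.weight:
            return self.left.index(i)
        return self.right.index(i - self.weight)

    def split(self, i):
        if i < self.weight:
            left, right = self.left.split(i)
            return left, Concat(right, self.right)
        left, right = self.right.split(i - self.weight)
        return Concat(self.left, left), right

def insert(rope, i, text):
    """Insert text at position i without copying the whole string."""
    left, right = rope.split(i)
    return Concat(Concat(left, Leaf(text)), right)

rope = Concat(Leaf("Hello, "), Leaf("world!"))
rope = insert(rope, 7, "rope ")
print("".join(rope.index(i) for i in range(rope.length)))  # Hello, rope world!

A production rope implementation would also keep the tree balanced, so that indexing, insertion and deletion stay logarithmic rather than degrading to linear.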

Related

Django: requirements.txt

So far I have known requirements.txt entries like this: Django==2.0. Now I have seen this style of writing: Django>=1.8,<2.1.99
Can you explain to me what it means?
requirements.txt is a file in which one specifies dependencies. For example, your program here depends on Django (you probably do not want to implement Django yourself).
In case one only writes a custom application and does not plan to export it (for example as a library) to other programmers, one can pin the version of the library, for example Django==2.0.1. Then you can always assume (given pip manages to install the correct package) that your environment will have the correct version, and thus that if you follow the correct documentation, no problems will (well, should) arise.
If you however implement a library, for example mygreatdjangolibrary, then you probably do not want to pin the version: it would mean that everybody who wants to use your library would have to install Django==2.0.1. Imagine that they want a feature that is only available in django-2.1; then, given they follow the dependencies strictly, they cannot do this: your library requires 2.0.1. This is of course not manageable.
So typically in a library, one aims to give as much freedom as possible to the user of the library. Ideally, your library would work regardless of the Django version the user has installed.
Unfortunately this would result in a lot of trouble for the library developer. Imagine that you have to take into account that a user can be running anything from Django-1.1 up to django-2.1. Through the years, several features have been introduced that the library then cannot use, since the programmer should be conservative and take into account that these features may not exist in the Django version the user installed.
It gets even worse, since Django went through some refactoring: some features have later been removed, so we cannot simply program against django-1.1 and hope that everything works out.
So in that case, it makes sense to specify a range of versions we support. For example, we can read the documentation of django-2.0, look at the release notes to see whether something relevant changed in django-2.1, and let tox run our tests against both versions. We can then specify a range like Django>=2.0,<2.1.99.
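To make the difference concrete, here is what this typically looks like in practice. The file contents are only illustrative, and mygreatdjangolibrary is the made-up library name from above:

# requirements.txt of an application: pin the exact versions you tested with
Django==2.0.1

# setup.py of a library: declare the range you support instead
from setuptools import setup

setup(
    name="mygreatdjangolibrary",
    version="0.1",
    packages=["mygreatdjangolibrary"],
    install_requires=["Django>=2.0,<2.1.99"],
)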
This is also important if you depend on several libraries that share a common requirement. Say for example you want to install a library liba and a library libb, both of which depend on Django, but the two have different ranges, for example:
liba:
Django>=1.10, <2.1
libb:
Django>=1.9, <1.11
This thus means that we can only install a Django version that satisfies both >=1.10 and <1.11, so effectively only the 1.10.x releases.
The above easily gets even more complex, since liba and libb of course have versions as well, for example:
liba-0.1:
Django>=1.10, <2.1
liba-0.2:
Django>=1.11, <2.1
liba-0.3:
Django>=1.11, <2.2
libb-0.1:
Django>=1.5, <1.8
libb-0.2:
Django>=1.10, <2.0
So if we now want to install any liba and any libb, we need to find a version of liba and a version of libb that "allow" us to install a common Django version, and that is not trivial: if we were to pick libb-0.1, for example, there would be no version of liba that supports an "overlapping" Django range.
To the best of my knowledge, pip currently has no dependency resolution algorithm. It looks at the specification, each time aims to pick the most recent version that satisfies the constraints, and recursively installs the dependencies of these packages.
Therefore it is up to the user to make sure that (sub)dependencies do not conflict: if we specified liba libb==0.1, then pip would probably install Django-2.1, and only then find out that libb cannot work with it.
There are some dependency resolution programs, but the problem turns out to be quite hard (it is NP-hard, if I recall correctly). That means that, for a given dependency tree, it can take years to find a valid configuration.

How can I include a pip installed module with my python script?

I wrote an encryption program in Python 3.6 that uses the module pycryptodome, specifically these imports:
from Crypto.Cipher import AES
from Crypto.Hash import SHA256
from Crypto.Random import get_random_bytes
The program works, but it cannot be used without the user installing the pycryptodome module themselves.
Is there any way to include that package somehow or can I create a "first time setup" to install it for the end user?
Short answer - as already mentioned in comments - you need to package your script.
Long answer - I have been there myself and it's annoying. The way Python modules and scripts are packaged is constantly evolving, and not all available documentation is kept up to date. You can easily end up reading recent documentation which, by the time of reading, is already flawed (i.e. obsolete).
The easiest approach, without going all the way down that road, is a simple file for pip that describes your project's dependencies. It's called requirements.txt. Up-to-date information can be found here.
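For the script above, such a file could be as small as this (the version bound is only an example); the user then runs pip install -r requirements.txt once:

# requirements.txt
pycryptodome>=3.6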
If this is not enough, you have to package your application. A good but little-maintained and somewhat outdated overview/introduction can be found here. It's good for a start, but do NOT try to follow it to the letter! Another OUTDATED manual for beginners commonly cited can be found here. Read it for basic understanding and do NOT try to follow it to the letter.
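To give an idea of what packaging involves, a minimal setup.py for a small tool like yours might look roughly like this (the project, package and entry-point names are made up); pip will then pull in pycryptodome automatically when someone installs your package:

# setup.py - minimal, illustrative sketch
from setuptools import setup, find_packages

setup(
    name="myencryptiontool",                  # made-up project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=["pycryptodome"],        # installed automatically by pip
    entry_points={
        "console_scripts": ["myencryptiontool=myencryptiontool.cli:main"],
    },
)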
Once you have come to this point, it's time to read "What the Hell? -- A Journey Through the Nine Circles of Python Packaging". It gives a lot of practical advice. The most practical of all: look at how other projects are doing it and copy-paste it ... ehm ... learn from it.
If you are not scared off by now, I can actually recommend looking at Python module templates. One of the best ones I know of is the "Cookiecutter PyPackage". Its well-maintained documentation is available here. Once you have learned the basics of Python packaging, it's a quick and reliable way of creating all the files and infrastructure required for packaging your code.
Honorable mention: There are tools, which try to streamline the entire process. Number one on my list and also already mentioned in the comments is PyInstaller (manual). Another tool (for Windows), which is usually mentioned, is py2exe (no up-to-date documentation available, AFAIK). Another up-and-coming and promising (yet not production-ready?) tool is Briefcase (documentation). There are more of those, but they all have their issues. There is a good chance you end up reading into the above literature trying to understand those issues ...
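For completeness, the PyInstaller route usually boils down to two commands (assuming your entry script is called encrypt.py, a made-up name); the result is a self-contained executable in the dist/ directory that does not require the user to install Python or pycryptodome:

pip install pyinstaller
pyinstaller --onefile encrypt.py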

How to use shared dynamic libraries with python-cffi (in linux)?

OS: CentOS 6 (64bit)
I have a dynamic library (.so) written in C, and I want to create an abstraction layer of Python over it and then use it to implement my logic. I have decided to use CFFI for this since it doesn't involve any kind of DSL (domain-specific language).
Couple of things I wanted to know:
Is there some good starting point I can refer to for doing this (loading and using dynamic libraries with cffi)? The docs on the official site talk about this, but I was looking for a concrete reference somewhere with some examples, or for someone who might have tried it.
Are there any possible drawbacks to this approach?
Thanks
Two good starting points:
The CFFI documentation, and specifically the ABI out of line example: https://cffi.readthedocs.org/en/latest/overview.html#out-of-line-example-abi-level-out-of-line
My CFFI example repository: https://github.com/wolever/python-cffi-example
Between the two you shouldn't have too much trouble putting together your wrapper.
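To give a flavour of the ABI level ("in-line" mode, the simplest variant), a minimal sketch that loads the C math library looks like this. The library name libm.so.6 is what you would typically find on CentOS 6; substitute your own .so:

from cffi import FFI

ffi = FFI()

# Declare only the functions you need, in plain C syntax.
ffi.cdef("double cos(double x);")

# dlopen() loads the shared library at run time; no compiler is needed.
libm = ffi.dlopen("libm.so.6")

print(libm.cos(0.0))  # -> 1.0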
And to your second question: if the shared library you're wrapping is very simple (e.g. a few function calls, simple data structures) you might find ctypes simpler (as it's part of the standard library).
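For comparison, the same call via ctypes (standard library, no third-party dependency) would be roughly:

from ctypes import CDLL, c_double

libm = CDLL("libm.so.6")
libm.cos.restype = c_double       # ctypes needs the signature spelled out
libm.cos.argtypes = [c_double]
print(libm.cos(0.0))  # -> 1.0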

Tracking global migration to Python 3.x

Python 3.x is looking ever more tempting with cleaned-up syntax (I like it, others may not), new features, and what looks like a gradual progression towards more speed and better multithreading.
But Python 3.x is still held back by the lack of 3rd party support. Important packages like Django, Twisted, etc. are not ported. It's hard to get an overview of where the bottlenecks in the migration are, how far it has come, and whether it's progressing at all. The migration dependencies are also hard to map. Also, projects are probably waiting for Python 3.x to offer some major improvement over 2.x that would justify the effort of porting.
Ideally, there would be a site for tracking this migration overall, with (links to) migration plans and dependencies shown so that people willing to help the migration globally could coordinate their efforts and help specific projects. Perhaps also linking to projects' bug tracking systems for relevant migration-related bugs.
But perhaps I'm just not looking hard enough. Does someone know of any efforts to track global migration to Python 3.x?
(By "global", I mean the universe of open source projects built on Python.)
Update:
There's a poll right now on the Python home page which asks about packages you'd like to see ported to Python 3.x.
George Brandl has made a script that generates a graph of the number of packages supporting Python 3.
The link on the CheeseShop front page shows the packages in question: http://pypi.python.org/pypi?%3aaction=browse&c=533&show=all
There is also a (pretty crummy) list of unported packages ordered by how many packages depend on them: http://onpython3yet.com/ Why do I say it's crummy? Because it is done entirely without manual fixing up, resulting in things like listing Python itself as a package. This is to a large extent because people don't know that the "Dependencies" listing isn't a place to list any sort of random dependencies; it should be used to list the packages that should be auto-installed when you use easy_install/pip. In the Django world, for example, they don't know that, so you see things like "django-saddle" depending on Django and Python, and hence not being easy_installable.
That said, the list is interesting, and we see that PIL really should get ported.
Now this is not anything "global"; it's just the packages on PyPI, and as such they tend to be mostly Python modules, not separate applications. But I think the general trend is visible there anyway.
The Python Package Index (PyPI) allows you to search for Python 3rd-party modules that support Python 3.x. It even has a Python 3 packages link which lists them all.
But that doesn't track individual projects' progress on Python 3 support. It just tells you which projects have achieved it.
Something I'd be interested to see is a graph of the total number/percentage of Python 3 packages in PyPI over time (from Python 3 release until present). I don't know if anyone has tracked this, or if the PyPI administrators have enough history data to produce such graphs.

Writing an API for Python that can be installed using the setup.py method

I am new at writing APIs in Python, or in any language for that matter. I was hoping to get pointers on how I can create an API that can be installed using the setup.py method and used in other Python projects, something similar to the twitterapi.
I have already created and coded all the methods I want to include in the API. I just need to know how to implement the installation so others can use my code to leverage ideas they may have, or whether I need to format the code a certain way to facilitate installation.
I learn best with examples or tutorials.
Thanks so much.
It's worth noting that this part of Python is undergoing some changes right now. It's all a bit messy. The most current overview I know of is the Hitchhiker's Guide to Packaging: http://guide.python-distribute.org/
The current state of packaging section is important: http://guide.python-distribute.org/introduction.html#current-state-of-packaging
The Python packaging world is a mess (like poswald said). Here's a brief overview along with a bunch of pointers. Your basic problem (using setup.py etc.) is solved by reading the distutils guide which msw mentioned in his comment.
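As a starting point, a minimal setup.py (the package name here is made up) can be as small as this; with it, python setup.py sdist builds a distributable archive and python setup.py install installs it so other projects can simply import your package:

# setup.py - minimal, illustrative sketch
from distutils.core import setup  # or: from setuptools import setup

setup(
    name="mytwitterishapi",           # made-up package name
    version="0.1",
    description="A small API wrapper",
    packages=["mytwitterishapi"],     # the directory containing your modules
)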
Now for the dirt. The basic infrastructure for distribution in the Python standard library is distutils, referred to above. It's limited in some ways, so a series of extensions called setuptools was written on top of it. Setuptools, besides actually increasing the functionality, provided a command-line "installer" called "easy_install".
Setuptools maintenance was not too great, so it was forked, and a more active branch called "distribute" was set up; it is the preferred alternative right now. In addition to this, a replacement for easy_install named pip was created, which was more modular and useful.
Now there's a huge project underway which attempts to fold the changes from distribute and related tools into a unified library that will go into the stdlib. It's tentatively called "distutils2".
