PyCharm intellisense for boto3 - python

I'm having problems seeing full intellisense (code completion) options in PyCharm.
I'm working with Python 3.4 on Windows.
The suggestions are partially working:
import boto3
s = boto3.Session()      # typing boto3. brings up a list of methods/params of the boto3 object
ec2 = s.resource('ec2')  # resource is a suggested method!
ec2.                     # <<<< this brings up nothing. For some reason PyCharm can't detect what methods the ec2 object would have.
While I can work off documentation alone, intellisense is just such a nice feature to have!
I've had similar problems getting it to complete lxml syntax, but I thought that was because I had to install lxml directly as a binary (too many hoops to jump through on Windows to install it via pip).
Anyone else encounter similar problems?
While we are here:
I see a lot of different libraries around for working with AWS from Python: boto, boto3, troposphere, etc. What are some advantages of using one over the other? Amazon states that boto3 is the preferred library over boto, but my use case of starting/stopping EC2 instances could easily be done with the older boto.

I was frustrated with the same issue, so I decided to parse the boto3 documentation and generate wrapper classes from it. Here is the link to the project:
https://github.com/gehadshaat/pyboto3
To install it:
pip install pyboto3
To use it:
import boto3
s3 = boto3.client('s3')
""" :type : pyboto3.s3 """
# s3. -> will give you autocomplete for s3 methods in PyCharm
Make sure that you first:
Install pyboto3: pip install pyboto3 (or pip3.x install pyboto3)
Check your interpreter settings and verify that you see pyboto3 in the list
Do File -> Invalidate Caches/Restart
After PyCharm restarts you should see intellisense working in your favor, with all of the available methods for the service you are using (s3 in the case above) suggested to you!

This is happening because all of the methods on the boto3 clients and resource objects are generated at runtime based on a JSON file that describes what operations the service supports. PyCharm would have to have specific knowledge of this process in order to autocomplete method names.
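You can see this for yourself: the client class exists only at runtime, so there is nothing in any source file for PyCharm's static analysis to index. A minimal sketch (no credentials are needed just to construct the client):
import boto3

s3 = boto3.client('s3', region_name='us-east-1')
print('list_buckets' in dir(s3))  # True: the method exists at runtime
print(type(s3))                   # <class 'botocore.client.S3'>, a class built on the fly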
For your second question: boto3 is the official AWS SDK for Python. One of the main advantages of boto3 is that, because of this JSON-model-driven process for describing the AWS APIs, most new service features only require a simple model update. This means API updates happen in a quick, consistent, and reliable manner.
But if you're using boto in existing code and it's working for you, feel free to keep using it. You can always install boto3 alongside boto if you need to pull in new functionality.

The room's getting a little crowded here, but I have also created a boto3 typing solution (GitHub link), boto3_type_annotations. I took the pyboto3 approach and parsed the docstrings of service objects, then programmatically wrote class definitions for them, annotating arguments and return types with the typing module. Unlike pyboto3, I created everything, including service resources, paginators, and waiters. There's also a variant where I left the docstrings in, so PyCharm's quick documentation will work. But fair warning: that package is really big.
# Without docs
pip install boto3_type_annotations
# Or with docs
pip install boto3_type_annotations_with_docs
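Usage then looks roughly like this (a sketch; the bucket and file names are placeholders, and annotating the variable with the generated Client class is what gives PyCharm the signatures):
import boto3
from boto3_type_annotations.s3 import Client

client: Client = boto3.client('s3')
client.upload_file('local.txt', 'my-bucket', 'remote.txt')  # now autocompletes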

Probably not an official method, but I did find something that works.
In PyCharm, open the Python console (Tools -> Python Console). The console also has a variable list on the right side. If you initialize a resource object in the console, its sub-objects are listed in the variable object tree, with some limited intellisense as well.
The way I started doing it is to write code right in the interpreter, using the variable watch window as a cheat sheet. Once the code is written, I copy/paste it into the actual script file. Clunky...

I love boto3, but I was also frustrated that every time I want to make a simple ad-hoc request I have to open the boto3 documentation. So I wrote autoboto:
https://pypi.org/project/autoboto/
It doesn't just auto-complete. It also returns dataclasses, which means that you don't have to look up the names of attributes of the returned objects. PyCharm will tell you what is available.
At the moment, it's also probably very slow because of all the generic serialisation and deserialisation.
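A rough sketch of what that looks like (based on autoboto's generated-client style; treat the exact module path and attribute names as assumptions):
from autoboto.services import s3  # assumed module path

client = s3.Client()
response = client.list_buckets()  # returns a dataclass, not a plain dict
for bucket in response.buckets:   # attribute access, so PyCharm can complete it
    print(bucket.name)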

While autocomplete solutions for boto3 are being discussed, I'm surprised that nobody has mentioned botostubs yet. It works in any IDE and is automatically kept up to date.

This works for me, if you are using Python 3:
python3 -m pip install boto3-stubs
python3 -m pip install 'boto3-stubs[essential]'
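A minimal usage sketch (the S3Client annotation comes from the mypy_boto3_s3 package pulled in by boto3-stubs[essential]; the annotation is optional, since the stubs also type the boto3.client() factory call):
import boto3
from mypy_boto3_s3 import S3Client

client: S3Client = boto3.client('s3', region_name='us-east-1')
response = client.list_buckets()  # autocompletes and type-checks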

boto3-stubs
Type annotations for boto3 1.16.62, compatible with VSCode, PyCharm, Emacs, Sublime Text, mypy, pyright, and other tools.

Related

Can I see library versions for a Google Cloud Function?

I've got a cloud function I deployed a while ago. It's running fine, but some of its dependent libraries were updated, and I didn't specify == in the requirements.txt, so now when I try to deploy again pip can't resolve dependencies. I'd like to know which specific versions my working, deployed version is using, but I can't just do a pip freeze of the environment as far as I know.
Is there a way to see which versions of libraries the function's environment is using?
I would suggest using pip list, as it has an option to display outdated packages via the --outdated (-o) flag.
You can check the pip list documentation for additional information and flags that may be useful to your project.
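For example, to list installed packages that have newer versions available:
pip list --outdated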
I am still unaware of how to get this information directly from Google Cloud Platform; I think it may not be surfaced after deployment. But a coworker had a workaround if you've deployed from a CI pipeline: go back and look in that pipeline's logs to see which packages were installed at deploy time. It's printed there. This didn't quite save me, because I'd deployed my function manually from a terminal, but it got me closer, because I could see which versions were in use around that time.

Jenkins shared library: How to run code from a python module contained in a package?

I want to run a Python script as part of a Jenkins pipeline triggered from a GitHub repo. If I store the script directly in the repo itself, I can just do sh 'python path/to/my_package/script.py', which works perfectly. However, since I want to use this from multiple pipelines in multiple repos, I want to put it in a Jenkins shared library.
I found this question, which suggested storing the Python file in the resources directory and copying it to a temp file before use. That only works if the script is one standalone file. Unfortunately, mine is a package with multiple Python files and imports between them, so that's a no-go. I also tried to copy the entire folder containing the Python package, following the answer to this question, which suggests getting the location of the library with:
import groovy.transform.SourceURI
import java.nio.file.Path
import java.nio.file.Paths

class ScriptSourceUri {
    @SourceURI
    static URI uri
}
but it gives me the following error:
Scripts not permitted to use staticMethod java.net.URI create java.lang.String. Administrators can decide whether to approve or reject this signature.
It seems that some additional permissions are required, which I don't think I'll be able to acquire (it's a shared machine).
So: does anyone know how I can run a Python package from a Jenkins shared library? Right now the only solution I can think of is to manually recreate the directory structure of the Python package, which is obviously very messy and non-generic.
PS: There is no particular reason for using the Python script over writing the same script in Groovy. It's just that the Python script is well tested, well understood, and well supported. Rewriting the whole thing in Groovy just isn't feasible right now.
You can go to the http://host:8080/jenkins/scriptApproval/ page of your Jenkins installation and approve the pending signature request for your scripts. See the Jenkins documentation on in-process script approval for more information.

LIRC Python client bindings

I need to use the LIRC Python client bindings for a project. The LIRC website has good documentation on them, but I have no idea how to actually get them besides copying and pasting the Python code. Nowhere on the site have I seen where to actually obtain them.
Where/how do I get these bindings?
http://www.lirc.org/html/lirc_client.html
http://www.lirc.org/api-docs/html/group__python__bindings.html
I think you need to install python-lirc or python3-lirc, which are available on PyPI.
This is a Python binding to LIRC.
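For example (assuming the PyPI package names match your Python version):
pip install python-lirc     # Python 2
pip3 install python3-lirc   # Python 3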
You could install the latest version available from the upstream site: http://sf.net/p/lirc. This is the version actually described in the docs. The PyPI package described in the previous reply is another beast.

Edit package installed by pip

I'm trying to edit a package that I installed via pip, called py_mysql2pgsql (I had an error when converting my DB from MySQL to PostgreSQL, just like this one).
However, when I go to the folder /usr/local/lib/python2.7/dist-packages/py_mysql2pgsql-0.1.5.egg-info, I cannot find the source code for the package. I only find PKG-INFO and text files.
How can I find the actual source code for a package (or in particular, this package)?
Thanks
TL;DR:
Modifying in place is dangerous. Modify the source and then install it from your modified version.
Details
pip is a tool for managing the installation of packages. You should not modify files created during package installation. At best, doing so would mean pip believes a particular version of the package is installed when it isn't. This would not interact well with the upgrade function: I suspect pip would just overwrite your customizations, discarding them forever, but I haven't confirmed. The other possibility is that it checks whether files have changed and throws an error if so. (I don't think that's likely.) It also misleads other users of the system: they see that you have a package installed, but you don't actually have the version indicated; you have a customized version. This is likely to result in confusion if they try to install the unmodified version somewhere else or if they expect some particular behavior from the installed version.
If you want to modify the source code, the right thing to do is modify the source code and either build a new, custom package or just install from source. py-mysql2pgsql provides instructions for performing a source install:
> git clone git://github.com/philipsoutham/py-mysql2pgsql.git
> cd py-mysql2pgsql
> python setup.py install
You can clone the source, modify it, and then install without using pip. You could alternatively build your own customized version of the package if you need to redistribute it internally. This project uses setuptools for building its packages, so you only need to familiarize yourself with setuptools to make use of their setup.py file. Make sure that installing it this way doesn't create any misleading entries in pip's package list. If it does, either find a way to make sure the entry is more clear or find an alternative install method.
Since you've discovered a bug in the software, I also highly recommend forking it on Github and submitting a pull request once you have it fixed. If you do so, you can use the above installation instructions just by changing the repository URL to your fork. If you don't fork it, at least file an issue and describe the changes that fix it.
Alternatives:
You could copy all the source code into your project, modify it there, and then distribute the modified version with the rest of your code. (Make sure you don't violate the license if you do so.)
You might be able to solve your problem at runtime. Monkey-patching the module is a little risky if other people on your team might not expect the change in behavior, but it can be done for global modification of the module's behavior, as sketched below. You could also create some additional code that wraps the buggy code: it can take the input, call the buggy code, and either prevent or handle the bug (e.g., modifying the input to make it work, or catching an exception and handling it).
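A minimal monkey-patching sketch (buggymodule and buggy_function are hypothetical placeholders for the actual buggy code):
import buggymodule  # hypothetical module containing the bug

_original = buggymodule.buggy_function  # keep a reference to the real implementation

def patched(*args, **kwargs):
    # Work around the known failure instead of crashing.
    try:
        return _original(*args, **kwargs)
    except ValueError:
        return None  # handle the buggy case however suits your app

buggymodule.buggy_function = patched  # every later caller now gets the wrapper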
Just print out the .__file__ attribute of the module:
>>> import numpy
>>> numpy.__file__
'/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/numpy/__init__.py'
Obviously the path and specific package will be different for you, but this is a pretty foolproof way of tracking down the source file of any module in Python.
You can patch pip packages quite easily with the patch command.
When you do this, it's important that you specify exact version numbers for the packages that you patch.
I recommend using Pipenv: it creates a lock file in which the versions of all dependencies and sub-dependencies are pinned, so that the same versions of packages are always installed. It also manages your virtualenv and makes it convenient to use the method described here.
The first argument to the patch command is the file you want to patch. That should be the module of the pip package, which is probably inside a virtualenv.
If you use Pipenv, you can get the virtualenv path with pipenv --venv, so you could patch the requests package like this:
patch $(pipenv --venv)/lib/python3.6/site-packages/requests/api.py < requests-api.patch
The requests-api.patch file is a diff file, which could look like this:
--- requests/api.py 2022-05-03 21:55:06.712305946 +0200
+++ requests/api_new.py 2022-05-03 21:54:57.002368710 +0200
@@ -54,6 +54,8 @@
<Response [200]>
"""
+ print(f"Executing {method} request at {url}")
+
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
You can make the patch file like this:
diff -u requests/api.py requests/api_new.py > requests-api.patch
Where requests/api_new.py would be the new, updated version of requests/api.py.
The -u flag to the diff command gives a unified diff format, which can be used to patch files later with the patch command.
So this method can be used in an automated process. Just make sure that you specify an exact version number for the module that you patch: you don't want the module to upgrade unexpectedly, because you might have to update the patch file. Also keep in mind that if you ever manually upgrade the module, you should check whether the patch file needs to be recreated, and recreate it if necessary. That is only needed when the file you are patching has changed in the new version of the package.
The py_mysql2pgsql package is hosted on PyPI: https://pypi.python.org/pypi/py-mysql2pgsql
If you want the code for that specific version, just download the source tarball from PyPI (py-mysql2pgsql-0.1.5.tar.gz).
Development is hosted on GitHub: https://github.com/philipsoutham/py-mysql2pgsql

Python web app that can download itself

I'm writing a small web app in which I'd like to include the ability to download itself. The ideal solution would be for users to be able to "pip install" the full app, and for users of the running app to be able to download a version of it to use themselves (perhaps with reduced functionality or without some of the less essential dependencies).
I'm currently using Bottle as I'd like to keep everything as close to the standard library as possible. Users could be on different platforms or Python versions, which are other reasons for minimising the use of extra modules. (Though I'll assume 2.7 or 3.3 will be in use regardless of platform).
My current thinking is to have the app use __file__ or similar and zip itself up. It could also use setuptools/distribute and call sdist on itself. Users could then execute the zip file or install the app using the source distribution. (Ideally I'd like to provide both of these options.)
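Something like this sketch of the zip-itself-up idea (the myapp package name and main() entry point are assumptions; the __main__.py written at the zip root is what makes the archive directly executable):
import os
import zipfile

def build_self_zip(dest='myApp.zip'):
    # Locate the installed package via __file__ and archive its .py files.
    pkg_dir = os.path.dirname(os.path.abspath(__file__))
    parent = os.path.dirname(pkg_dir)
    with zipfile.ZipFile(dest, 'w') as zf:
        for dirpath, _, filenames in os.walk(pkg_dir):
            for name in filenames:
                if name.endswith('.py'):
                    full = os.path.join(dirpath, name)
                    zf.write(full, os.path.relpath(full, parent))
        # A __main__.py at the zip root lets users run "python myApp.zip".
        zf.writestr('__main__.py', 'from myapp import main\nmain()\n')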
The app would include aggressive import checking to fallback to available modules, with Bottle being the only requirement (and would be included in the downloaded file).
Can anyone think of a robust approach to providing this functionality?
Update: users of the app cannot be guaranteed to have internet access at all times, hence the requirement for being able to download a version of the app from someone who has previously installed it. Python experience cannot be assumed either, hence the idea of letting users run python myApp.zip to run their own version.
Update II: as the level of Python experience also cannot be guaranteed, I'd want the simplest way for a user to get a mostly working version of the app. Experienced users would then be free to 'upgrade' the app by installing their own choice of additional modules. The vast majority of these would be different servers to host the app with (CherryPy, Twisted, etc.) and so would not strictly count as dependencies but as nice-to-haves.
Update III: based on the answer below I will look into a PyPI/buildout-based solution, but I would still be interested in whether there is a specific solution to the approach above.
Just package your app and put it on PyPI. Trying to automatically package the code running on the server seems over-engineered. Then you can let people use pip to install your app. In your app, provide a link to the PyPI page.
You can also declare dependencies in setup.py, and pip will install them for you. It seems like you are trying to build your own packaging infrastructure, but you don't have to. Use what's out there.
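A minimal setup.py sketch for that route (project name, version, and entry point are placeholders):
from setuptools import setup, find_packages

setup(
    name='myapp',             # placeholder project name
    version='0.1.0',
    packages=find_packages(),
    install_requires=[
        'bottle>=0.12',       # the app's one hard dependency
    ],
    entry_points={
        'console_scripts': ['myapp=myapp.main:run'],  # placeholder entry point
    },
)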
