Webassets + Typescript, cannot resolve symbols/modules - python

I have a flask project with the following structure:
├─ app.py
├─ project
│  ├─ __init__.py
│  └─ static
│     └─ typescript
│        └─ app.ts
└─ typings
   ├─ globals
   │  └─ ...  # multiple imported ts libraries
   └─ index.d.ts
I'm using Flask-Assets, the webassets integration for Flask. I've set up the compilation like so (in __init__.py):
import os

from flask_assets import Bundle
from webassets.filter import get_filter

ts = get_filter('typescript')
ts.load_paths = [
    # os.path.join(config.APP_ROOT, '..', 'typings'),  # doesn't do anything :/
    os.path.join(app.static_folder, 'typescript')
]

assets.register('javascript', Bundle(
    'typescript/app.ts',
    filters=(ts, 'jsmin'),
    output='js/app-%(version)s.js'
))
My app.ts is, more or less,
class SomeClass {
    // ... various class methods, using things like jQuery and CryptoJS
}
There are no imports; I'm not really sure whether or not I need them.
The specific error I'm getting is
Cannot find name 'JQuery'.
../../../../../var/folders/5t/4x0gmsdx0dbbgv_fr3cv3x6m0000gn/T/tmphFTSQo.ts(7,17): error TS2503: Cannot find namespace 'CryptoJS'.
../../../../../var/folders/5t/4x0gmsdx0dbbgv_fr3cv3x6m0000gn/T/tmphFTSQo.ts(10,27): error TS2304: Cannot find name '$'.
... a bunch more about other symbols

I kind of solved it...
import glob

glob_string = os.path.join(config.APP_ROOT, '..', 'typings', '*', '*', '*.d.ts')

assets.register('javascript', Bundle(
    glob.glob(glob_string),
    'typescript/app.ts',
    filters=('typescript', 'jsmin'),
    output='js/app-%(version)s.js'
))
Basically, I just "manually" add all the definition files to the bundle (using glob). It's not sufficient to add only the index.d.ts in the root of the typings dir, because the typescript filter copies the .ts to a temp file (in /tmp) before compiling, and the paths in index.d.ts are relative.
It should also be noted that ts.load_paths still does nothing.


How to test program using python-can module

I'm implementing a class which uses a CAN bus instance (the bus instance is static because all objects should use the same one).
# ./mymodule/__init__.py
import can


class UsingCanBUS:
    _bus = can.ThreadSafeBus(channel='can0', bustype='socketcan')

    def __init__(self) -> None:
        # other code
        # v here every object interacts with the bus
        self._listener = can.Listener()
        self._listener.on_message_received = self._handle_message

    def _send_message(self, id, data) -> bool:
        msg = can.Message(arbitration_id=id, data=data, extended_id=False)
        try:
            self._bus.send(msg)
        except can.CanError:
            return False
        else:
            return True
This code will eventually run on a Raspberry Pi, where the CAN interface is correctly set up in the system.
Now, if I want to unit test any of the class methods, or any file in the module, the bus tries to initialize and, since I'm not on the target system, it throws an OSError (which is to be expected).
The folder structure is as follows:
.
|- mymodule/
| |- __init__.py
| |- utils.py
|
|- tests/
| |- __init__.py
| |- test_utils/
| | |- __init__.py
| | |- test_utils.py
It's not clear to me how I should test this piece of code.
I tried patching the can module:
# ./tests/test_utils/test_utils.py
import pytest
from unittest.mock import patch


@patch('mymodule.can')
def test_something(mock_can):
    from mymodule.utils import some_function
    # This doesn't work: the real python-can methods get called instead of the mocked ones
    assert some_function() == expected_result
I don't understand if I'm using the patch decorator wrong or if my approach is completely off course.
I expect every class in the can module imported from mymodule to be patched with mocked classes, but that doesn't seem to be the case.
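For illustration only (this sketch is not from the original thread): because _bus is created while the class body executes at import time, a fake can module has to be in place before mymodule is first imported. A minimal sketch of that ordering with pytest's monkeypatch fixture, assuming the layout above (test file name and assertions are hypothetical):

# tests/test_utils/test_bus_mocked.py  (hypothetical file, for illustration)
import sys
from unittest.mock import MagicMock


def test_bus_is_faked_at_import(monkeypatch):
    fake_can = MagicMock()
    # Install the fake *before* mymodule is (re)imported, so the class-level
    # `can.ThreadSafeBus(...)` call resolves to the mock instead of socketcan.
    monkeypatch.setitem(sys.modules, 'can', fake_can)
    monkeypatch.delitem(sys.modules, 'mymodule', raising=False)

    import mymodule  # now imports cleanly off-target

    fake_can.ThreadSafeBus.assert_called_once_with(channel='can0', bustype='socketcan')
    assert mymodule.UsingCanBUS._bus is fake_can.ThreadSafeBus.return_value
    # Note: mymodule stays cached in sys.modules with the fake bound to it;
    # drop it from sys.modules afterwards if other tests need the real module.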
The Raspberry Pi doesn't come with the CAN driver, so you can't directly install can-utils and simulate a virtual CAN bus. Use a CAN transceiver on top of the Raspberry Pi. You could go with this particular one, which I'm also using for my simulations:
RS485-CAN-HAT

Combine protobuf with python namespaced package in bazel

I am trying to set up Bazel on an existing project that consists of three applications in Python and Groovy, and a shared protobuf IDL.
For the Python applications, I currently have a custom command in setup.py that generates a Python module from the protobuf IDL. When generating the python module, I place it inside the application packages, so it can be imported like any other module in the application.
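For context, that custom command is essentially a wrapper around protoc; a minimal, hypothetical sketch (the actual command name, package name and paths in the project may differ):

# setup.py (hypothetical sketch of the pre-Bazel generation step)
import subprocess
from setuptools import Command, setup


class GenerateProto(Command):
    """Run protoc and drop messages_pb2.py inside the application package."""
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        subprocess.check_call([
            "protoc",
            "--proto_path=protobuf",
            "--python_out=python/ibidem/codetanks/domain",
            "protobuf/messages.proto",
        ])


setup(
    name="codetanks-domain",  # hypothetical package name
    cmdclass={"generate_proto": GenerateProto},
)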
When trying to put the whole project under Bazel I'm struggling to find out how to deal with the generated Python module. protoc will only generate a single file. In order to put the file in a package, I need to create a directory structure and move the file into place. Some googling has led me to a solution that combines some pkg_tar rules to create a tarball with the correct layout, but I can't figure out how to make the jump from there to a Python library.
The files are laid out like this:
.
├── BUILD
├── protobuf
│   └── messages.proto
└── python
    └── ibidem
        ├── __init__.py
        └── codetanks
            ├── __init__.py
            └── domain
                └── __init__.py
I want the generated module to be placed in ibidem/codetanks/domain, so that it can be imported with from ibidem.codetanks.domain import messages_pb2.
My current BUILD file:
load("#build_stack_rules_proto//python:python_proto_library.bzl", "python_proto_library")
load("#rules_pkg//:pkg.bzl", "pkg_tar", "pkg_deb")
proto_library(
name = "messages_proto",
srcs = ["protobuf/messages.proto"],
)
python_proto_library(
name = "messages_python_proto",
deps = [":messages_proto"],
)
pkg_tar(
name = "python_messages_tarball",
strip_prefix = "protobuf/",
package_dir = "ibidem/codetanks/domain",
srcs = [":messages_python_proto"],
)
filegroup(
name = "python_domain_files",
srcs = glob([
"python/**/*.py",
]),
)
pkg_tar(
name = "python_domain_tarball",
strip_prefix = "python/",
srcs = [":python_domain_files"],
)
# This fails because the tarballs doesn't have the `py` or `PyInfo` provider .
# If I use a `pkg_tar` rule here, the tarball has exactly the contents I'd want to have as a python library.
py_library(
name = "python",
deps = [
":python_domain_tarball",
":python_messages_tarball",
],
)
I've found some places that say this can be solved by putting the messages.proto file inside the directory structure, in the same place I want the generated file to wind up. That sounds like a bad workaround, considering that placement won't make sense for any language other than Python. I also generate a Java package, and in the future the plan is to add other languages too.
Is this simply a limitation of Bazel, or can it be solved in some fancy way that I haven't been able to google my way to?
With the link provided by Sjoerd Visscher, I found a solution that seems to do the trick. The first step was to separate into language-specific packages. Then use copy_file to move the generated file into a subdirectory. Once that was done, combining the helper files with the generated file in a py_library was pretty straightforward.
The file layout is now:
.
├── protobuf
│   ├── BUILD
│   └── messages.proto
└── python
    ├── BUILD
    └── ibidem
        ├── __init__.py
        └── codetanks
            ├── __init__.py
            └── domain
                └── __init__.py
Contents of protobuf/BUILD:
package(default_visibility = ["//domain:__subpackages__"])

load("@build_stack_rules_proto//python:python_proto_library.bzl", "python_proto_library")

proto_library(
    name = "messages_proto",
    srcs = ["messages.proto"],
)

python_proto_library(
    name = "python_messages_proto",
    deps = [":messages_proto"],
)
Contents of python/BUILD:
load("#build_stack_rules_proto//python:python_proto_library.bzl", "python_proto_library")
load("#rules_python//python:defs.bzl", "py_library")
load("#bazel_skylib//rules:copy_file.bzl", "copy_file")
copy_file(
name="python_messages_file",
src="//domain/protobuf:python_messages_proto",
out="ibidem/codetanks/domain/messages_pb2.py",
)
filegroup(
name = "python_helper_files",
srcs = glob([
"ibidem/**/__init__.py",
]),
)
py_library(
name = "messages",
srcs = [
":python_helper_files",
":python_messages_file",
],
visibility = ["//visibility:public"]
)

Trouble instantiating class in a subdirectory

I have a module in a subdirectory, and when I try to import it I get NameError: name 'foo' is not defined. When I put the class code directly into the __main__.py file it works fine. The __init__.py files are empty.
I've tried the following, each attempt with a different error:
MyProject/
├── __init__.py
├── __main__.py
├── foo/
│   ├── bar.py
│   ├── __init__.py
bar.py
class Bar:
    def __init__(self):
        print('am here')
        pass
__main__.py
from MyProject import foo
# from MyProject import bar  # errors with: cannot import bar from MyProject
# from foo import bar        # errors with: No module named foo

if __name__ == '__main__':
    w = Bar()
Is there perhaps a better way to organise this?
The Bar class is in the file bar.py, so I think you'd need to do
from MyProject.foo import bar
w = bar.Bar()
or
from MyProject.foo.bar import Bar
w = Bar()
You didn't share your foo/__init__.py, but you could fix the situation by adding something like this to it:
from .bar import Bar
That adds Bar to the namespace of foo and causes Python to find Bar when you just import foo.
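For example, with that one line in the package's __init__.py (a minimal sketch; both file contents are assumed):

# foo/__init__.py
from .bar import Bar

# __main__.py
from MyProject import foo

if __name__ == '__main__':
    w = foo.Bar()  # Bar is now reachable through foo's namespace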
However, you would probably do well to look at a few standard examples for writing a package. For one, you probably shouldn't name your project MyProject, as that name signals it's a class (with TitleCase). Naming it 'project' further confuses the issue, as it appears you're writing a package, so my_package sounds about right.
If you don't know which directory the file will be run from, use . to indicate where the package is:
from .foo.bar import Bar
or
from .foo import bar
w = bar.Bar()
or
from . import foo
w = foo.bar.Bar()
(this last form also needs foo/__init__.py to import bar, otherwise foo.bar isn't loaded).
A . before the package name means the package is located in the same directory as the current file.

Bazel: Reading a file with relative path to package, not workspace

Suppose we have a project like this:
project-path
├── root
│   ├── BUILD
│   ├── gen
│   │   ├── a2.txt
│   │   └── a.txt
│   └── use.py
└── WORKSPACE
And in use.py:
f = open("gen/a.txt", "r")
f2 = open("gen/a2.txt", "r")
print(f.read())
print(f2.read())
And BUILD:
py_binary(
    name = "use",
    srcs = ["use.py"],
    data = ["gen/a.txt", "gen/a2.txt"],
)
when I bazel run root:use, it errors:
FileNotFoundError: [Errno 2] No such file or directory: 'gen/a.txt'
It expects paths relative to the WORKSPACE directory, not the current package (root/gen/a.txt here). But I want to access files relative to each package.
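(As an aside, not part of the answer below: since the data files end up next to use.py inside the runfiles tree for same-package data deps, one common workaround is to anchor the paths to the script's own location instead of the working directory. A minimal sketch under that assumption:)

# use.py -- resolve the data files relative to this file rather than the
# working directory (which bazel run points at the runfiles workspace root)
import os

HERE = os.path.dirname(os.path.abspath(__file__))

with open(os.path.join(HERE, "gen", "a.txt")) as f:
    print(f.read())
with open(os.path.join(HERE, "gen", "a2.txt")) as f2:
    print(f2.read())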
Add Skylib to your project, i.e. extend your WORKSPACE file:
WORKSPACE
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "bazel_skylib",
sha256 = "74d544d96f4a5bb630d465ca8bbcfe231e3594e5aae57e1edbf17a6eb3ca2506",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
"https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz",
],
)
load("#bazel_skylib//:workspace.bzl", "bazel_skylib_workspace")
bazel_skylib_workspace()
Then replace the paths in the Python script using text replacement:
load("#bazel_skylib//rules:expand_template.bzl", "expand_template")
expand_template(
name = "modifiy_use_for_bazel",
out = "prepared_for_bazel_use.py",
substitutions = {
"gen/a.txt": "root/gen/a.txt",
"gen/a2.txt": "root/gen/a.txt",
},
template = "use.py",
)
py_binary(
name = "use",
main = "prepared_for_bazel_use.py",
srcs = ["prepared_for_bazel_use.py"],
data = [
"gen/a.txt",
"gen/a2.txt",
],
)
You can now run the script via bazel run root:use.
Note: Tested with Bazel 6.0.0.
To make this all a bit more convenient you could implement your own rule for it, since this seems to be a common problem (e.g. when supporting two build systems at the same time, where Bazel is not the primary build system and the other build system can cope with relative path names).
A similar problem is described here: Change test execution directory in Bazel?

Sphinx not removing doctest flags in html output

I cannot eliminate the doctest flags (i.e. <BLANKLINE>, # doctest: +ELLIPSIS) in the HTML output. I am able to generate the documentation as I would like, so no errors there, but it includes these flags, which I would like removed. The Sphinx documentation here claims this is possible, so I must be doing something wrong. My documentation examples are in NumPy style and I have tried using both the napoleon and numpydoc extensions.
Here are the steps I have taken:
1. run sphinx-quickstart (enabling the autodoc and doctest extensions)
2. run sphinx-apidoc to generate .rst files
3. run make doctest (all tests are passing)
4. run make html
I have tried setting the trim_doctest_flags and doctest_test_doctest_blocks variables in conf.py with no success.
Is there something I am missing to trigger sphinx to remove these for the html docs? I am hoping this is enough information to get pointed in the right direction since the docs look good except for this one issue. However, I can provide more details or an example if necessary.
Update: MCV Example (Using Sphinx 1.8.2)
directory and file structure
.
├── trial
│   ├── __init__.py
│   └── trial.py
└── trialDocs
    ├── build
    ├── Makefile
    └── source
        ├── _static
        ├── _templates
        ├── conf.py
        ├── index.rst
        ├── modules.rst
        └── trial.rst
conf.py
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
import os
import sys
sys.path.insert(0, os.path.abspath('../../trial'))
project = 'trial'
copyright = '2019, trial'
author = 'trial'
version = ''
release = 'trial'
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.doctest',
    'sphinx.ext.napoleon',
]
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
language = None
exclude_patterns = []
pygments_style = None
html_theme = 'alabaster'
htmlhelp_basename = 'trialdoc'
latex_elements = {}
latex_documents = [(master_doc, 'trial.tex', 'trial Documentation', 'trial', 'manual'),]
man_pages = [(master_doc, 'trial', 'trial Documentation', [author], 1)]
texinfo_documents = [(master_doc, 'trial', 'trial Documentation', author, 'trial', 'One line description of project.', 'Miscellaneous'),]
epub_title = project
epub_exclude_files = ['search.html']
doctest_global_setup = """
from trial import *
"""
trim_doctest_flags=True
trial.rst - this was generated using sphinx-apidoc
trial package
=============

Module contents
---------------

.. automodule:: trial
   :members:
   :undoc-members:
   :show-inheritance:
trial.py
def withBlankline():
    """
    Use blanklines in example.
    Determine if sphinx will eliminate <BLANKLINE> for html.

    Examples
    --------
    >>> withBlankline()
    <BLANKLINE>
    blanklines above and below
    <BLANKLINE>
    """
    print()
    print('blanklines above and below')
    print()


class Example():
    def __init__(self):
        pass

    def withEllipsis(self):
        """
        Use ellipsis in example.
        Determine if sphinx will eliminate # doctest: +ELLIPSIS for html.

        Examples
        --------
        >>> e = Example()
        >>> e.withEllipsis()  # doctest: +ELLIPSIS
        abc...xyz
        """
        print('abcdefghijklmnopqrstuvwxyz')
using make html or sphinx-build -b html source build
trial.html output: [screenshot omitted]
Based on the comment by @mzjn, it appears that this was a bug, fixed in Sphinx 2.2.0:
Issue: doctest comments not getting trimmed since Sphinx 1.8.0 - Issue #6545
