Briefcase build - Unable to load file - python

I am trying to create my first app using Briefcase and Python through BeeWare. I am following the tutorial as listed on the website; however, upon running briefcase build I receive the following messages in the console:
briefcase build
[helloworld] Building App...
Unable to load file: "src\Hello World.exe"
Setting stub app details...
Unable to update details on stub app for helloworld.
The only thing I have noticed that is different from the tutorial is that upon running briefcase create I receive this message:
[helloworld] Created windows\app\Hello World
Instead of this one, as shown in the tutorial:
[helloworld] Created windows\msi\Hello World
Any help would be greatly appreciated, thank you :)

It's difficult to say from the details you've provided. The error would be consistent with using the wrong Windows template for the app, but if you're doing the tutorial, you shouldn't be modifying anything that would cause this to be a problem.
The app vs msi reference isn't anything to be concerned about; that was a change in the most recent version of Briefcase, and we've neglected to update the tutorial to reflect the change.
My initial guess would be that a network error occurred while Briefcase was downloading the Windows app template. Your Briefcase project should contain a file named windows\app\Hello World\src\Hello World.exe. The error you're reporting indicates that file doesn't exist.
My best suggestion for a fix would be to clear out your cookiecutter cache. Look in your home directory (C:\Users\<your username>) for a .cookiecutters folder; in that folder, there should be a briefcase-windows-app-template subfolder. Delete that subfolder; then delete the windows folder in your Briefcase project, then re-run briefcase create.
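If you'd rather script that cleanup than delete the folders by hand, here is a minimal sketch using only Python's standard library. The cookiecutter cache path comes from the answer above; the project path is a placeholder you'd replace with the location of your own Briefcase project.
import shutil
from pathlib import Path

# Cookiecutter cache for the Windows app template (see the answer above).
cookiecutter_cache = Path.home() / ".cookiecutters" / "briefcase-windows-app-template"

# Placeholder: point this at your own Briefcase project checkout.
project_windows_dir = Path(r"C:\path\to\helloworld") / "windows"

for folder in (cookiecutter_cache, project_windows_dir):
    if folder.exists():
        shutil.rmtree(folder)
        print(f"Deleted {folder}")

# Afterwards, re-run `briefcase create` from the project directory.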

Related

Getting "system cannot find specified file" when using pydeps

I am trying to visualize the dependencies of a GitHub repository, so I am using pydeps for the purpose.
But I keep getting an error saying that the system cannot find the specified file.
Please help, someone.
I tried adding an __init__.py file to the downloaded repository, but that didn't work.
I also tried running pydeps from inside the project directory, but that did not work either.
This is how my code looks

Error loading shared library libpython3.7m.so.1.0: No such file or directory (needed by /usr/local/bin/coverage)

I seem to be having issues with my Python interpreter. I am getting the following error in my terminal when trying to start the Django web server:
1) Error loading shared library libpython3.7m.so.1.0: No such file or directory (needed by /usr/local/bin/coverage)
2) Error relocating /usr/local/bin/coverage: _Py_UnixMain: symbol not found
Any ideas or clues on what I should be looking for with regard to the error above? I am running Ubuntu on my system. This problem seems to have started after I created a new Django project.
Thank you.
This seems like a path-related issue; there are existing answers on how you can alter the path. Alternatively, you can simply fix the issue the following way:
sudo apt-get install libpython3.x-dev
This way you won't need to make any changes to the environment path manually.
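As a quick diagnostic (my own suggestion, not part of the original answer), you can ask Python whether the dynamic loader can locate the libpython library that coverage is linked against; if this prints None, the library is missing or not on the loader's search path:
from ctypes.util import find_library

# Prints the library name if the loader can find libpython3.7m, otherwise None.
print(find_library("python3.7m"))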

Project interpreter with PyCharm (with Anaconda plugin). How can I resolve this error telling me that PyCharm "Failed to Create Interpreter"?

I used to use PyCharm, then switched to Miniconda and used Spyder for a time; now I'm back to using PyCharm (this time the version with the Anaconda plugin). I've been able to set up a project and run it successfully, but I tried to create a new project and can't figure out how to configure a new interpreter for it. I've been able to get my new project to run by using the project interpreter from my old project, but I would like to create a new one, as these are separate projects.
I would like to use a conda environment for the project interpreter.
Here is the error I am receiving; I've pasted the message as text below:
Collecting package metadata: ...working... failed
UnavailableInvalidChannel: The channel is not accessible or is invalid.
channel name: pypi
channel url: https://pypi.python.org/pypi
error code: 404
You will need to adjust your conda configuration to proceed.
Use conda config --show channels to view your configuration's current state,
and use conda config --show-sources to view config file locations.
I know I've created a new environment for my new project, but I'm still a beginner and I'm not sure if I'm doing it correctly. I'm also not very familiar with the general rules of thumb for configuring projects and environments using PyCharm (obviously, or I wouldn't be having this problem). I'd love any extra information you can give, as I would like to learn, but bear in mind that I'm not familiar with a lot of the terminology, so if you can explain it to me like I'm a kid, I wouldn't mind.

How to run WordCountTopology from storm-starter in IntelliJ

I have been working with Storm for a while already, but want to get started with development. As suggested, I am using IntelliJ (up to now I was using Eclipse and only wrote topologies against the Java API).
I was also looking at
https://github.com/apache/storm/tree/master/examples/storm-starter#intellij-idea
This documentation is not complete. At first I was not able to run anything in IntelliJ. I figured out that I need to remove the scope of the storm-core dependency (in the storm-starter pom.xml). (Found here: storm-starter with intellij idea, maven project could not find class.)
After that I was able to build the project. I can also run ExclamationTopology with no problems within IntelliJ. However, WordCountTopology fails.
First I got the following error:
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
Traceback (most recent call last):
File "splitsentence.py", line 16, in
import storm
ImportError: No module named storm
I was able to resolve it via apt-get install python-storm (found on StackOverflow).
Update: installing python-storm is actually not required to make it work.
However, I don't speak Python and was wondering what the problem is and why I could resolve it like this. I just want to get a deeper understanding of it. Maybe someone can explain.
Unfortunately, I am getting a different error now:
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
Traceback (most recent call last):
File "splitsentence.py", line 18, in
class SplitSentenceBolt(storm.BasicBolt):
AttributeError: 'module' object has no attribute 'BasicBolt'
I did not find any solution on the Internet. Asking at dev@storm.apache.org did not help either. I got the following suggestion:
I think it was always assumed that the topology would be invoked through the storm command line. Thus the working directory would be ${STORM-INSTALLATION}/bin/storm. Since storm.py is in this directory, splitsentence.py would be able to find the storm module. Can you set the working directory to a path where storm.py is present and then try? If it works, we can add it later to the documentation.
However, changing the working directory did not solve the problem.
And since I am not familiar with Python and am new to IntelliJ, I am stuck now. Because ExclamationTopology runs, I guess my basic setup is correct.
What am I doing wrong? Is it possible at all to run WordCountTopology in a LocalCluster in IntelliJ?
Unfortunately, AFAIK you can't use the multilang feature with LocalCluster without having a packaged file.
ShellProcess relies on the codeDir of TopologyContext, which is used by the supervisor.
Workers are serialized to stormcode.ser, but multilang files have to be extracted outside of the serialized file so that python/ruby/node/etc. can load them.
Accomplishing this in distributed mode is easy because there's always a user-submitted jar, and the supervisor knows it is what the user submitted.
But accomplishing this in local mode is not easy, because the supervisor cannot know the user-submitted jar, and users can run a topology in local mode without packaging.
So, the supervisor in local mode looks for a resource directory ("resources") in each jar (files ending with "jar") on the classpath, and copies the first occurrence to codeDir.
storm jar places the user topology jar at the front of the classpath, so it can be run without issue.
So normally it's natural for ShellProcess to not find "splitsentence.py". Maybe your working directory or PYTHONPATH did the trick.
I struggled with a similar issue, not with the sample topology, but with my own topology that uses a Python bolt.
I also experienced the "AttributeError: 'module' object has no attribute 'BasicBolt'" exception, both in local mode and when submitting to the cluster.
There are very few resources on this; I found your question and little else discussing the issue.
In case someone else has the same problem:
Make sure you include the correct Maven "multilang-python" dependency in your pom file. This will package the correct runtime dependencies into the JAR file needed to run your topology.
I managed to run it on my VirtualBox VM with Storm version 1.2.2:
just download https://github.com/apache/storm/blob/master/storm-multilang/python/src/main/resources/resources/storm.py and put it into any folder you want, for example /apache-storm-1.2.2/examples/storm-starter/multilang/resources/, and then change the main function:
public static void main(String[] args) throws Exception {
    // Point the Python subprocess at the folder containing storm.py.
    SplitSentence pythonSplit = new SplitSentence();
    Map<String, String> env = new HashMap<>();
    env.put("PYTHONPATH", "/apache-storm-1.2.2/examples/storm-starter/multilang/resources/");
    pythonSplit.setEnv(env);

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spout", new RandomSentenceSpout(), 5);
    builder.setBolt("split", pythonSplit, 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));

    Config conf = new Config();
    conf.setDebug(true);

    if (args != null && args.length > 0) {
        // Submit to a real cluster when a topology name is given.
        conf.setNumWorkers(3);
        StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
    } else {
        // Otherwise run in-process for ten minutes, then shut down.
        conf.setMaxTaskParallelism(3);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("word-count", conf, builder.createTopology());
        Thread.sleep(600000);
        cluster.shutdown();
    }
}
The full instructions can be found on my blog, which also covers other issues encountered when running it in Local Mode and Local Cluster mode: https://lyhistory.com/storm/
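For readers who, like the asker, don't speak Python: the bolt script at the center of these errors is roughly the following. This is a paraphrased sketch of storm-starter's multilang/resources/splitsentence.py rather than the exact file, but it shows why the storm helper module (storm.py) and its BasicBolt class must be importable from the subprocess's working directory or PYTHONPATH:
import storm  # resolves only if storm.py is on the Python path

class SplitSentenceBolt(storm.BasicBolt):
    def process(self, tup):
        # The incoming tuple carries a sentence; emit one tuple per word.
        for word in tup.values[0].split(" "):
            storm.emit([word])

SplitSentenceBolt().run()
A guess of my own, not confirmed in the thread: if import storm picks up an unrelated module named storm (the Ubuntu python-storm package installs Canonical's Storm ORM, not Apache Storm's helper), the import succeeds but BasicBolt is missing, which would match the AttributeError above.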

How to correctly import a Python library into a Django project (hosted on Heroku)?

I need to include the Recurly API library into a Django project.
The library is on GitHub, and the project is deployed to Heroku.
Currently, I have the following added to requirements.txt:
-e git://github.com/recurly/recurly-client-python.git#egg=recurly-client-python
This may work once the app is on Heroku (?), but it's not getting picked up when developing locally (running the local server via foreman). In my test app's views.py, I have:
import recurly
I get:
Exception Type: ImportError
Exception Value:
No module named recurly
Exception Location: /Users/pete/Documents/code/django/simpleblog/subscriptions/views.py in <module>, line 7
Python Executable: /Users/pete/.virtualenvs/django/bin/python
I'm pretty new to Django/Python, as well as to working with APIs in this environment. How should I install and include it so that it works both locally and once deployed? I tried searching online to no avail.
First method:
What you can do is clone the code to your desktop:
git clone https://github.com/recurly/recurly-client-python.git
and then from this new directory run
python setup.py install
(This is how you can install any reusable Python app into your environment.)
EDIT1:
Second method:
simply change, in requirements.txt,
"-e git://github.com/recurly/recurly-client-python.git#egg=recurly-client-python" to "recurly"
If you are new to Python and want an easy and fast implementation, use the second method. If you are new to Python and want to learn how things work in Python, use the first one; it will help.
EDIT2:
Want to learn more? Check which version got installed by each of these two methods (pip list | grep recurly).
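To see which copy Python actually imports in your local virtualenv (a small sanity check of my own, not part of the original answer), you can print where the module was loaded from:
import recurly

# __file__ shows which installed copy of the client is actually being used.
print(recurly.__file__)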
