I am trying to execute the tutorial given in https://marcobonzanini.com/2015/10/24/building-data-pipelines-with-python-and-luigi/.
I am able to run the program on its own using local scheduler, giving me:
Scheduled 2 tasks of which:
* 2 ran successfully:
- 1 PrintNumbers(n=1000)
- 1 SquaredNumbers(n=1000)
This progress looks :) because there were no failed tasks or missing external dependencies
===== Luigi Execution Summary =====
However, when I try to run luigid --background to use the visualization on the central scheduler server, it throws an error saying I don't have the pwd module.
I cannot find a pwd module that I can install with pip on Windows.
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages
\luigi\process.py", line 79, in daemonize
import daemon
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages
\daemon\__init__.py", line 42, in <module>
from .daemon import DaemonContext
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages
\daemon\daemon.py", line 25, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
I am working in Anaconda Spyder with Python 3.6.
I was able to fix this by installing python-daemon==2.1.2.
If you already have python-daemon, try downgrading to version 2.1.2.
Do this before installing luigi.
Example:
pip install python-daemon==2.1.2
then
pip install luigi
For some reason, if you don't use the --background parameter on Windows, it will start just fine.
Just run luigid in cmd.
The base problem here is that luigid --background tries to spawn a python-daemon, which is a Unix-specific thing.
See section titled "The luigid server" here: http://luigi.readthedocs.io/en/stable/central_scheduler.html
Specifically:
Note that this requires python-daemon. By default, the server starts on AF_INET and AF_INET6 port 8082 (which can be changed with the --port flag) and listens on all IPs. (To use an AF_UNIX socket use the --unix-socket flag)
This existing stack overflow answer provides more detail:
How to start daemon process from python on windows?
Options I see here are:
Log a request with Luigi on GitHub to improve their Windows support so that the --background switch spawns luigid as a Windows process.
Run a virtual machine with a proper Unix OS in it on Windows and run your Luigi pipelines there.
Follow Steven G's suggestion and run luigid in a separate command prompt (a scripted variant of this is sketched below).
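If you want to script option 3 instead of starting the scheduler by hand, here is a minimal, hedged sketch in Python (it assumes luigid is on your PATH and that the default port 8082 is fine; CREATE_NEW_CONSOLE gives the scheduler its own console window, so it keeps running after the launching script exits):
import subprocess

# Launch the central scheduler in its own console on Windows instead of
# relying on the unsupported --background flag.
subprocess.Popen(
    ["luigid", "--port", "8082"],
    creationflags=subprocess.CREATE_NEW_CONSOLE,
)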
To reproduce the root cause of this issue, open a Python prompt on Windows and type:
>>> import daemon
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Anaconda3\lib\site-packages\daemon\__init__.py", line 42, in <module>
    from .daemon import DaemonContext
  File "C:\Anaconda3\lib\site-packages\daemon\daemon.py", line 25, in <module>
    import pwd
ModuleNotFoundError: No module named 'pwd'
I could run the server on port 8000, but when I try to use port 80 with
python manage.py runserver myip:80 I get:
You don't have permission to access that port.
If I use sudo python manage.py runserver myip:80 I get:
File "manage.py", line 14
) from exc
^
SyntaxError: invalid syntax
If I run python in the console it reports version 3.5.5 and my env is activated.
EDIT:
Using sudo python3 manage.py runserver myip:80 I get:
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named 'django'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 14, in <module>
) from exc
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
You are getting this because under sudo you aren't using Python 3: sudo runs the system python, which is Python 2 and chokes on the Python-3-only raise ... from exc syntax in manage.py. The easiest way to fix this is to create a virtualenv that uses python3 as its Python executable.
sudo pip3 install virtualenv
virtualenv -p python3 envname
source envname/bin/activate  # or "workon envname" if you use virtualenvwrapper
pip install django
pip install your_other_dependencies
Still, as others have said, running the Django webserver in a production environment is dicey at best, and spending some time setting up Gunicorn/Nginx (or an appropriate substitute) will pay dividends long term.
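As a rough sketch of what that setup looks like (myproject is a placeholder, not something from the question): Gunicorn serves the WSGI application on an unprivileged local port, and Nginx listens on port 80 and proxies requests to it.
pip install gunicorn
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
An Nginx server block for your domain then simply does proxy_pass http://127.0.0.1:8000; to hand requests over to Gunicorn.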
There are a couple of things going on here. First of all, only privileged users (e.g. root or other users via sudo) can bind to ports under 1024.
But more importantly, manage.py runserver should never be used in production:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
I strongly advise you to set up a proper web server instead. If you search for "EC2 Django" you'll find plenty of walkthroughs on how to do this properly.
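To see the first point in isolation, here is a minimal sketch in plain Python (nothing Django-specific): binding below port 1024 as an unprivileged user fails, while the same code with a high port succeeds.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 80))  # ports below 1024 require root (or CAP_NET_BIND_SERVICE)
except PermissionError as exc:
    print("cannot bind port 80 as an unprivileged user:", exc)
finally:
    s.close()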
I am using:
yowsup-celery: https://github.com/jlmadurga/yowsup-celery
to try to integrate WhatsApp into my system.
I have been able to successfully store messages and now want to run Celery in daemon mode rather than running it in a terminal.
To run it normally we use:
celery multi start -P gevent -c 2 -l info --yowconfig:conf_wasap
To run it in daemon mode we use:
sudo /etc/init.d/celeryd start
How can I pass the config file as an argument here, or is there a way to remove the dependency on passing it as an argument and instead read the file inside the script?
Since yowsup-celery 0.2.0 it has been possible to pass the config file path through the configuration instead of as a command-line argument:
YOWSUPCONFIG = "path/to/credentials/file"
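For example, here is a minimal sketch of where that setting could live, assuming yowsup-celery picks YOWSUPCONFIG up from your Celery configuration module (the module name, app name, and file path below are placeholders):
# celeryconfig.py
YOWSUPCONFIG = "path/to/credentials/file"  # path to your yowsup credentials file

# app.py
from celery import Celery

app = Celery("myapp")  # "myapp" is a placeholder name
app.config_from_object("celeryconfig")  # assumption: yowsup-celery reads YOWSUPCONFIG from here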
I'm trying to run a Python script I've uploaded as part of my AWS Elastic Beanstalk application from my development machine, but can't figure out how to. I believe I've located the script correctly, but when I attempt to run it under SSH, I get an import error.
For example, I have a Flask-Migrate migration script as part of my application (pretty much the same as the example in the documentation), but after successfully SSHing to my EB instance with
> eb ssh
and locating the script with
$ sudo find / -name migrate.py
when I run in the directory (/opt/python/current) where I located it with
$ python migrate.py db upgrade
at the SSH prompt I get
Traceback (most recent call last):
File "db_migrate.py", line 15, in <module>
from flask.ext.script import Manager
ImportError: No module named flask.ext.script
even though my requirements.txt (present along with the rest of my files in the same directory) has flask-script==2.0.5.
On Heroku I can accomplish all of this in two steps with
> heroku run bash
$ python migrate.py db upgrade
Is there equivalent functionality on AWS? How do I run a Python script that is part of an application I uploaded in an AWS SSH session? Perhaps I'm missing a step to set up the environment in which the code runs?
To migrate your database, the best approach is to use container_commands; these are commands that run every time you deploy your application. There is a good example in the EBS documentation (Step 6):
container_commands:
01_syncdb:
command: "django-admin.py syncdb --noinput"
leader_only: true
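For the Flask-Migrate script from the question, a hedged adaptation of the same idea might look like this in an .ebextensions config file (the command is the one from the question; leader_only restricts it to a single instance):
container_commands:
  01_db_upgrade:
    command: "python migrate.py db upgrade"
    leader_only: true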
The reason you're getting an ImportError is that EBS installs your packages in a virtualenv. Before running arbitrary scripts from your application over SSH, first change to the directory containing your (latest) code with
cd /opt/python/current
and then activate the virtualenv
source /opt/python/run/venv/bin/activate
and set the environment variables (that your script probably expects)
source /opt/python/current/env
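Putting those three steps together, the equivalent of the Heroku workflow from the question looks like this at the eb ssh prompt:
cd /opt/python/current
source /opt/python/run/venv/bin/activate
source /opt/python/current/env
python migrate.py db upgrade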
I'm attempting to run stratum-mining-proxy with minerd. Proxy starts and runs with the following command:
python ./mining_proxy.py -o ltc-stratum.kattare.com -p 3333 -pa scrypt
The proxy starts fine. Then I run minerd (U/P removed):
minerd -a scrypt -r 1 -s 6 -o http://127.0.0.1:3333 -O USERNAME.1:PASSWORD
The following errors are received. This one is from the proxy:
2013-07-18 01:33:59,981 ERROR protocol protocol.dataReceived # Processing of message failed
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/stratum-0.2.12-py2.7.egg/stratum/protocol.py", line 185, in dataReceived
self.lineReceived(line, request_counter)
File "/usr/local/lib/python2.7/dist-packages/stratum-0.2.12-py2.7.egg/stratum/protocol.py", line 216, in lineReceived
raise custom_exceptions.ProtocolException("Cannot decode message '%s'" % line)
ProtocolException: Cannot decode message 'POST / HTTP/1.1
And this one is from minerd. What am I doing wrong? Any help is appreciated!
[2013-07-18 01:33:59] HTTP request failed: Empty reply from server
[2013-07-18 01:33:59] json_rpc_call failed, retry after 30 seconds
I am a little curious; I don't know this as a fact, but I was under the impression that the mining proxy was for BTC, not LTC.
But anyway, I believe I got a similar message when I first installed it as well. To fix it, or rather to actually get it running, I had to use the Git installation method instead of installing manually.
Installation on Linux using Git
This is an advanced option for experienced users, but it gives you the easiest way of updating the proxy.
1. git clone git://github.com/slush0/stratum-mining-proxy.git
2. cd stratum-mining-proxy
3. sudo apt-get install python-dev # The Python development package is necessary
4. sudo python distribute_setup.py # This will upgrade the setuptools package
5. sudo python setup.py develop # This will install the required dependencies (namely the Twisted and Stratum libraries), but doesn't install the package into the system.
6. You can start the proxy by typing "./mining_proxy.py" in the terminal window. Using default settings, the proxy connects to Slush's pool interface.
7. If you want to connect to another pool or change other proxy settings, type "./mining_proxy.py --help".
8. If you want to update the proxy, type "git pull" in the package directory.
I would like to run an IPython instance on one machine and connect to it (over LAN) from a different process (to run some python commands). I understand that it is possible with zmq : http://ipython.org/ipython-doc/dev/development/ipythonzmq.html .
However, I can not find documentation on how to do it and whether it is even possible yet.
Any help would be appreciated!
EDIT
I would like to be able to connect to an IPython kernel instance and send it Python commands. However, this should not be done via a graphical tool (qtconsole); I want to be able to connect to that kernel instance from within a different Python script...
e.g.
external.py
somehow_connect_to_ipython_kernel_instance
instance.run_command("a=6")
If you want to run code in a kernel from another Python program, the easiest way is to connect a BlockingKernelManager. The best example of this right now is Paul Ivanov's vim-ipython client, or IPython's own terminal client.
The gist:
ipython kernels write JSON connection files, in IPYTHONDIR/profile_<name>/security/kernel-<id>.json, which contain information necessary for various clients to connect and execute code.
KernelManagers are the objects that are used to communicate with kernels (execute code, receive results, etc.).
A working example:
In a shell, do ipython kernel (or ipython qtconsole, if you want to share a kernel with an already running GUI):
$> ipython kernel
[IPKernelApp] To connect another client to this kernel, use:
[IPKernelApp] --existing kernel-6759.json
This wrote the 'kernel-6759.json' file
Then you can run this Python snippet to connect a KernelManager, and run some code:
from IPython.lib.kernel import find_connection_file
from IPython.zmq.blockingkernelmanager import BlockingKernelManager
# this is a helper method for turning a fraction of a connection-file name
# into a full path. If you already know the full path, you can just use that
cf = find_connection_file('6759')
km = BlockingKernelManager(connection_file=cf)
# load connection info and init communication
km.load_connection_file()
km.start_channels()
def run_cell(km, code):
    # now we can run code. This is done on the shell channel
    shell = km.shell_channel
    print
    print "running:"
    print code
    # execution is immediate and async, returning a UUID
    msg_id = shell.execute(code)
    # get_msg can block for a reply
    reply = shell.get_msg()
    status = reply['content']['status']
    if status == 'ok':
        print 'succeeded!'
    elif status == 'error':
        print 'failed!'
        for line in reply['content']['traceback']:
            print line
run_cell(km, 'a=5')
run_cell(km, 'b=0')
run_cell(km, 'c=a/b')
The output of a run:
running:
a=5
succeeded!
running:
b=0
succeeded!
running:
c=a/b
failed!
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
/Users/minrk/<ipython-input-11-fb3f79bd285b> in <module>()
----> 1 c=a/b
ZeroDivisionError: integer division or modulo by zero
See the message spec for more information on how to interpret the reply. If relevant, stdout/err and display data will come over km.iopub_channel, and you can use the msg_id returned by shell.execute() to associate output with a given execution.
PS: I apologize for the quality of the documentation of these new features. We have a lot of writing to do.
If you just want to connect interactively, you can use SSH forwarding. I didn't find this documented anywhere on Stack Overflow; this question comes closest. This answer has been tested on IPython 0.13. I got the information from this blog post.
Run ipython kernel on the remote machine:
user@remote:~$ ipython3 kernel
[IPKernelApp] To connect another client to this kernel, use:
[IPKernelApp] --existing kernel-25333.json
Look at the kernel-25333.json file:
user@remote:~$ cat ~/.ipython/profile_default/security/kernel-25333.json
{
"stdin_port": 54985,
"ip": "127.0.0.1",
"hb_port": 50266,
"key": "da9c7ae2-02aa-47d4-8e67-e6153eb15366",
"shell_port": 50378,
"iopub_port": 49981
}
Set up port-forwarding on the local machine:
user@local:~$ ssh user@remote -f -N -L 54985:127.0.0.1:54985
user@local:~$ ssh user@remote -f -N -L 50266:127.0.0.1:50266
user@local:~$ ssh user@remote -f -N -L 50378:127.0.0.1:50378
user@local:~$ ssh user@remote -f -N -L 49981:127.0.0.1:49981
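The four forwards can also be combined into a single ssh invocation:
user@local:~$ ssh user@remote -f -N -L 54985:127.0.0.1:54985 -L 50266:127.0.0.1:50266 -L 50378:127.0.0.1:50378 -L 49981:127.0.0.1:49981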
Copy the kernel-25333.json file to the local machine:
user@local:~$ rsync -av user@remote:.ipython/profile_default/security/kernel-25333.json ~/.ipython/profile_default/security/kernel-25333.json
Run ipython on the local machine using the new kernel:
user@local:~$ ipython3 console --existing kernel-25333.json
Python 3.2.3 (default, Oct 19 2012, 19:53:16)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1.rc2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import socket; print(socket.gethostname())
remote
Update to minrk's answer after the split to Jupyter. With jupyter_client (4.1.1), the simplest code is now something like:
import jupyter_client

# resolve the short kernel id ('6759') to the full connection-file path
cf = jupyter_client.find_connection_file('6759')
km = jupyter_client.BlockingKernelClient(connection_file=cf)
# read the ports and key from the connection file, then send code to the kernel
km.load_connection_file()
km.execute('a=5')
Note that:
jupyter_client.BlockingKernelClient is also aliased as jupyter_client.client.BlockingKernelClient.
the shell channel (km.shell_channel) no longer has the execute() and get_msg() methods.
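If you also want to read back the execution reply with this newer client, a hedged sketch (method names as of the 4.x-era jupyter_client API; they may differ in later releases):
msg_id = km.execute('a=5')
# block until the corresponding reply arrives on the shell channel
reply = km.get_shell_msg(timeout=5)
print(reply['content']['status'])  # 'ok' or 'error'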
Currently it is quite difficult to find up-to-date documentation; there is nothing yet on http://jupyter-client.readthedocs.org/en/latest/ for BlockingKernelClient. There is some code in https://github.com/jupyter/jupyter_kernel_test. Any link is welcome.
The above answers are a bit old. The solution for the latest version of IPython is much simpler, but it is not well documented in one place, so I thought I would document it here.
Solution to connect from any OS to an IPython kernel running on Windows
If either the client or the server is Linux or another operating system, just change the location of kernel-1234.json appropriately, based on Where is kernel-1234.json located in Jupyter under Windows?
On the Windows machine that will host the kernel, make sure ipykernel is installed using pip install ipykernel
Start the ipykernel using ipython kernel -f kernel-1234.json
Locate the kernel-1234.json file on your Windows machine. The file will probably have a different number (not 1234) and will most likely be located in 'C:\Users\me\AppData\Roaming\jupyter\runtime\kernel-1234.json': https://stackoverflow.com/a/48332006/4752883
Install Jupyter Console (or Jupyter QtConsole/Notebook) using pip install jupyter-console or pip install qtconsole: https://jupyter-console.readthedocs.io/en/latest/
If you are on Windows, run ipconfig to find the IP address of your Windows server. (On Linux, run ifconfig at the shell prompt.) In the kernel-1234.json file, change the ip address from 127.0.0.1 to the IP address of your server. If you are connecting from another Windows machine, copy the kernel-1234.json file to your local computer and note down the path.
Navigate to the folder containing the kernel-1234.json and start Jupyter Console using jupyter console --existing kernel-1234.json
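For reference, the edit to the ip address described above only touches the ip field of the connection file; every other value stays exactly as the kernel generated it. Using the example file from the earlier answer and a made-up LAN address of 192.168.1.10 (yours will be whatever ipconfig reports), it would end up looking like:
{
  "stdin_port": 54985,
  "ip": "192.168.1.10",
  "hb_port": 50266,
  "key": "da9c7ae2-02aa-47d4-8e67-e6153eb15366",
  "shell_port": 50378,
  "iopub_port": 49981
}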
If you're using Anaconda, in OS X the JSON file is stored at
/Users/[username]/Library/Jupyter/runtime/
In Windows:
c:\Users\[username]\AppData\Roaming\jupyter\runtime\