I have compiled myscripts.py to myscripts.exe using pyinstaller --onefile.
myscripts.py contains:
import os
# Spawn the celery worker as a child process
os.popen("celery -A tasks worker --loglevel=info -P solo -c 1")
Once I had the .exe file, I registered it as a service in my NSIS script:
SimpleSC::InstallService "ERP" "ERP Data Cloud" "16" "2" "$INSTDIR\myscripts.exe" "" "" ""
SimpleSC::StartService "ERP" "" 30
I compiled this with NSIS and got my setup.exe.
Now when I look at the Services window I can see the service has been added, but its status is blank. Even when I try to start the service manually, I get an error:
"The service did not respond to the start or control request in a timely fashion."
After install, via
!define MUI_FINISHPAGE_RUN "$INSTDIR\dist\myscripts.exe"
I am able to run myscripts.exe, which starts celery with no problem, but I want it to run as a service.
Now the questions:
Am I doing it completely the wrong way, or do I need to add something?
What am I missing?
A service has to call specific service functions. If you are unable to write a proper service, you could try a helper utility like Srvany...
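In Python that usually means implementing the Service Control Manager handshake with pywin32. A minimal sketch, assuming pywin32 is installed and reusing the service name and celery command line from the question:

    import subprocess

    import win32event
    import win32service
    import win32serviceutil

    class ErpService(win32serviceutil.ServiceFramework):
        # Names mirror the SimpleSC::InstallService call above
        _svc_name_ = "ERP"
        _svc_display_name_ = "ERP Data Cloud"

        def __init__(self, args):
            win32serviceutil.ServiceFramework.__init__(self, args)
            self.stop_event = win32event.CreateEvent(None, 0, 0, None)
            self.worker = None

        def SvcDoRun(self):
            # Start celery as a child process, then block until SvcStop fires
            self.worker = subprocess.Popen(
                ["celery", "-A", "tasks", "worker",
                 "--loglevel=info", "-P", "solo", "-c", "1"])
            win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

        def SvcStop(self):
            # Tell the SCM we heard the stop request, then shut down celery
            self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
            if self.worker:
                self.worker.terminate()
            win32event.SetEvent(self.stop_event)

    if __name__ == "__main__":
        win32serviceutil.HandleCommandLine(ErpService)

Without that handshake the SCM never hears back from the process, which is exactly the "did not respond ... in a timely fashion" timeout you are seeing.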
My environments are based on Windows, with Vagrant or Docker as the actual environments. I'd like to set up a quick way of ad hoc deploying things directly from Windows, though; it would be great if I could just run
fab deploySomething
and have that, for example, locally build a React app, commit, and push to the server. However, I'm stuck at the local bit.
My setup is:
Windows 10
Fabric 2
Python 3
I've got a fabfile.py set up with a simple test:
from fabric import Connection, task, Config

@task
def deployApp(context):
    config = Config(overrides={'user': 'XXX', 'connect_kwargs': {'password': 'YYY'}})
    c = Connection('123.123.123.123', config=config)
    # c.local('echo ---------- test from local')
    with c.cd('../../app/some-app'):
        c.local('dir')  # this is correct
        c.local('yarn install', echo=True)
But I'm just getting:
'yarn' is not recognized as an internal or external command, operable program or batch file.
You can replace 'yarn' with pretty much anything; I can't run a command with local that works fine when run manually. With debugging on, all I get is:
DEBUG:invoke:Received a possibly-skippable exception: <UnexpectedExit: cmd='cd ../../app/some-app && yarn install' exited=1>
which isn't very helpful... Has anyone come across this? All the examples of local commands with Fabric I can find seem to refer to the old 1.x versions.
To run local commands, run them off of your context and not your connection. n.b., this drops you to the invoke level:
from fabric import task

@task
def hello(context):
    with context.cd('/path/to/local_dir'):
        context.run('ls -la')
That said, the issue is probably that you need the fully qualified path to yarn since your environment's path hasn't been sourced.
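On Windows that usually means pointing at the yarn.cmd shim. A rough sketch, where the install path is an assumption (check yours with where yarn):

    from fabric import task

    # Hypothetical install location; verify with `where yarn`
    YARN = r'C:\Program Files (x86)\Yarn\bin\yarn.cmd'

    @task
    def deployApp(context):
        with context.cd('../../app/some-app'):
            context.run('"{}" install'.format(YARN), echo=True)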
I have a bunch of Python code that is basically executed via Click CLI framework entry points.
I am exploring how to turn some of the CLI functions into Web Actions, and was looking at IBM Cloud Functions, which is basically Apache OpenWhisk.
I am brand new to OpenWhisk and IBM CloudFunctions.
I am following the Help Docs here:
https://console.bluemix.net/docs/openwhisk/openwhisk_actions.html#creating-python-actions
I am trying to mimic the virtualenv method.
When I translate their basic example into Click CLI commands as follows
(below are the contents of a file __main__.py, which started out as a file named hello_too.py but was renamed while following along with the IBM docs):
import click

@click.command()
@click.argument('params', nargs=-1)
def main(params):
    # name = args.get("name", "stranger")
    greeting = "Hello " + "foo" + "!"
    print(greeting)
    return {"greeting": greeting}

if __name__ == "__main__":
    main()
and then zip it and upload it (as per their virtualenv example) as a web action, I get the following error:
{
"error": "The action did not produce a valid JSON response: Internal Server Error"
}
I saw on some other blogs that running python with -i is a good sanity check for the OpenWhisk runtime.
When I run this code with -i, I get a stack trace around SystemExit:
Traceback (most recent call last):
File "hello_too.py", line 12, in <module>
main()
File "/Users/mcmasty/projects/ppf-github/experiments/ibm-api-connect/venv/lib/python2.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/Users/mcmasty/projects/ppf-github/experiments/ibm-api-connect/venv/lib/python2.7/site-packages/click/core.py", line 700, in main
ctx.exit()
File "/Users/mcmasty/projects/ppf-github/experiments/ibm-api-connect/venv/lib/python2.7/site-packages/click/core.py", line 484, in exit
sys.exit(code)
SystemExit: 0
but when I run the example code that isn't Click-enabled, the interactive interpreter does not complain.
Any advice on the easiest path to port Click CLI scripts to OpenWhisk Actions / IBM Cloud Functions?
I tried to get Click's standalone_mode to change the SystemExit behavior, but could not seem to get it to work.
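For reference, the variant I was attempting looked roughly like this; standalone_mode=False makes Click return the callback's value instead of calling sys.exit():

    if __name__ == "__main__":
        # Returns the dict instead of raising SystemExit
        result = main(standalone_mode=False)
        print(result)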
I also tried naming the command echo and using the --main echo option on OpenWhisk action create (same result).
I also tried many variations of returning a JSON string (via json.dumps()), either through return or by writing to stdout, with both zip packaging and Docker image packaging... (same results).
Since the Python dictionary result is basically hard-coded, my best guess right now is that this stack trace from the Click-enabled script is the root of my problem when deploying to IBM Cloud Functions.
Thanks in advance.
Additional info in response to comments
The code is provided above. It is in a file called __main__.py (as per the IBM docs: https://console.bluemix.net/docs/openwhisk/openwhisk_actions.html#creating-python-actions).
Then, following the IBM docs, it's:
docker run --rm -v "$PWD:/tmp" openwhisk/python2action bash -c "cd tmp && virtualenv virtualenv && source virtualenv/bin/activate && pip install -r requirements.txt"
The only requirement in requirements.txt is click.
Then, also following along with the IBM docs:
zip -r hello_too.zip virtualenv __main__.py
And as a sanity check:
python -i hello_too.zip
throws the SystemExit exception / stack trace, similar to the example above, but
python hello_too.zip
completes normally.
Then I deploy to Cloud Functions / Web Actions:
ibmcloud wsk action create hello_too --kind python:2 hello_too.zip --web true
then invoke it via the command line:
ibmcloud wsk action invoke --result hello_too
I get the following message:
{
"error": "The action did not produce a valid JSON response: Internal Server Error"
}
but the hard-coded response
return {"greeting": greeting}
is identical to their sample code in the "Creating and invoking a Python action" section (https://console.bluemix.net/docs/openwhisk/openwhisk_actions.html#creating-python-actions),
so I am assuming this is not the root cause of the issue. (I ran their sample code as outlined in the docs, and returning a Python dict worked fine.)
It's just when I try to use the Click version that I get stuck.
The click module is causing a runtime error that kills the underlying Python process running the code.
The click module is designed for building command-line interface tools. Python code for OpenWhisk actions is dynamically evaluated and invoked by an existing Python script. You will need to refactor your application to expose the core functions as raw functions rather than through the click module.
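A minimal sketch of that refactor, keeping the question's hard-coded greeting; the function and option names here are illustrative:

    import click

    def greet(params):
        # Core logic, free of any CLI concerns
        name = params.get("name", "stranger")
        return {"greeting": "Hello " + name + "!"}

    # OpenWhisk entry point: a raw function that returns a dict
    def main(params):
        return greet(params)

    # CLI entry point: Click wraps the same core function
    @click.command()
    @click.option('--name', default='stranger')
    def cli(name):
        click.echo(greet({"name": name})["greeting"])

    if __name__ == "__main__":
        cli()

The action is created against the raw main function, while the Click command stays available for local use.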
I wrote a script on Windows using PuTTY:
import time
from pywinauto.application import Application

# `ip` is assumed to be defined earlier (the target switch address)
app = Application().Start(
    cmd_line=r'C:\Program Files (x86)\PuTTY\putty.exe'
             ' -l user -pw **pwd** -load Proxy_10.153.1.250 ' + ip + ' -ssh')
putty = app.PuTTY
putty.Wait('ready')
time.sleep(7)
cmd1 = "show log " + "{ENTER}"
This script will be executed for many switches, but while it is running I cannot do other tasks on Windows or the script gets interrupted. Is it possible to execute it in the background?
You need a proper tool for CLI automation. Just run subprocess.call('ssh user@host <the rest of cmd>') or use Paramiko to run the remote SSH command.
BTW, the pywinauto code is incomplete; I don't see .type_keys(cmd1). You may try .send_chars(cmd1) instead and call putty.minimize() first. But send_chars is not guaranteed to work with every app (it's experimental), so you can just try.
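For the Paramiko route, a minimal sketch; the host, user, and password are placeholders to fill in from your inventory:

    import paramiko

    # Placeholder credentials; substitute your own
    host, user, password = '10.153.1.250', 'user', 'secret'

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)

    # Runs remotely with no window on the desktop, so other
    # Windows tasks are unaffected
    stdin, stdout, stderr = client.exec_command('show log')
    print(stdout.read().decode())
    client.close()

Because nothing is driven through the GUI, any number of switches can be handled in a loop or in background threads.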
This doc shows the command to download the source of an app I have in App Engine:
appcfg.py -A [YOUR_APP_ID] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
That's fine, but I also have services that I deployed. Using this command, I can only seem to download the "default" service. I also deployed "myservice01" and "myservice02" to App Engine in my GCP project. How do I specify which service's code to download?
I tried this command as suggested:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
It didn't fail, but this is the output I got (and it didn't download anything):
01:30 AM Host: appengine.google.com
01:30 AM Fetching file list...
01:30 AM Fetching files...
Now, as a test, I tried it with the name of a module I know doesn't exist, and I got this error:
Error 400: --- begin server output ---
Version ... of Module ... does not exist.
So at least I know it's successfully finding the module and version, but it doesn't seem to want to download them.
Also specify the module (services used to be called modules):
-M MODULE, --module=MODULE
Set the module, overriding the module value from
app.yaml.
So something like:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
Side note: YOUR_APP_VERSION should really read YOUR_MODULE_VERSION :)
Of course, the answer assumes the app code downloads were not permanently disabled from the Console's GAE App Settings page:
Permanently prohibit code downloads
Once this is set, no one, including yourself, will ever be able to
download the code for this application using the appcfg download_app
command.
I have installed MongoDB 3.0 using this tutorial:
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given 'ec2-user' permissions to all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the mongodb server always fails to start with the command
sudo service mongod start
It just says failed, nothing else.
Whereas if I run the command
mongod --dbpath var/lib/mongo
it starts the mongodb server correctly (though I have set the same dbpath in the .conf file as well).
What am I doing wrong here?
When you run sudo mongod, it does not load a config file at all; it literally starts with the compiled-in defaults: port 27017, a database path of /data/db, etc. That is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now: you have run the process, with your normal config (pointing at your usual dbpath and log), as the root user. That means there are now going to be a number of files in that normal MongoDB folder owned by root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files; most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you're starting mongod as a service using
sudo service mongod start
make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
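For reference, with the old-style config keys named in that answer, the relevant entries in /etc/mongod.conf would look roughly like this (paths taken from the question and the Amazon Linux package defaults, so verify against your own file; newer packages ship the equivalent YAML keys systemLog.path, storage.dbPath, and processManagement.pidFilePath):

    # excerpt from /etc/mongod.conf (old-style config keys)
    logpath=/var/log/mongodb/mongod.log
    dbpath=/var/lib/mongo
    pidfilepath=/var/run/mongodb/mongod.pid

Each of those three locations must exist and be writable by the mongod user before sudo service mongod start will succeed.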