I've seen a few posts on this topic with odd, hard-to-reproduce behaviours. Here's a new set of data points.
Currently the following works:
cd ./hosts
./ec2.py --profile=dev
And this fails:
AWS_PROFILE=dev; ansible-playbook test.yml
These were both working a couple of days ago. Something in my environment changed. Still investigating. Any guesses?
Error message:
ERROR! The file ./hosts/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x ./hosts/ec2.py.
ERROR! Inventory script (./hosts/ec2.py) had an execution error: ERROR: "Error connecting to AWS backend.
You are not authorized to perform this operation.", while: getting EC2 instances
ERROR! ./hosts/ec2.py:3: Error parsing host definition ''''': No closing quotation
Note that the normal credentials error is:
ERROR: "Error connecting to AWS backend.
You are not authorized to perform this operation.", while: getting EC2 instances
...
Hmmm. The error message has shifted.
AWS_PROFILE=dev; ansible-playbook test.yml
ERROR! ERROR! ./hosts/tmp:2: Expected key=value host variable assignment, got: {
Looks like the problem was a temporary file in the hosts folder. After removing it, the problems went away. This appears to be standard Ansible behaviour: it pulls in ALL files in the hosts directory as inventory.
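One hedged way to guard against stray files (assuming an Ansible version that supports the inventory_ignore_extensions setting) is to tell Ansible which extensions to skip when scanning an inventory directory. Note that this matches extensions only, so an extensionless file like the ./hosts/tmp one above still has to be kept out of the hosts folder:
# ansible.cfg (sketch): skip editor/temp droppings when scanning ./hosts
[defaults]
inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .tmp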
Related
I am setting up a Flask app on a Google VM. I have previously done something very similar on the same VM, but for whatever reason after creating a new virtual environment things aren't working.
cd /home/joshuasmith6556/joshthings-server
sudo /home/joshuasmith6556/joshthings-server/env/bin/gunicorn --bind 0.0.0.0:443 --certfile=/etc/letsencrypt/live/api.joshthings.com/fullchain.pem --keyfile=/etc/letsencrypt/live/api.joshthings.com/privkey.pem --workers=6 app:app
When I run the above script to launch my server on HTTPS using some cert files I have generated, gunicorn throws an error: Error: No such option: --bind. If I put app:app before the other flags, it says Error: Got unexpected extra argument (app:app). If I put --certfile first, I get Error: No such option: --certfile. Essentially, gunicorn always complains about the first argument I give it and never launches the server. Any ideas about what is wrong or what I can do to fix this?
Here the main source of error might be a typo: --bind is missing an "=" sign after it. It should be something like:
sudo /home/joshuasmith6556/joshthings-server/env/bin/gunicorn --bind=0.0.0.0:443 --certfile=/etc/letsencrypt/live/api.joshthings.com/fullchain.pem --keyfile=/etc/letsencrypt/live/api.joshthings.com/privkey.pem --workers=6 app:app
The WSGI_APP argument (app:app) is expected to come last, so there is no conflict there. I think that when you put --certfile first, you also left out the equals sign, and that's why it gave you an error.
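If every flag is rejected no matter the order, it may also be worth confirming that the binary being invoked really is gunicorn and that it recognizes these options, for example:
/home/joshuasmith6556/joshthings-server/env/bin/gunicorn --version
/home/joshuasmith6556/joshthings-server/env/bin/gunicorn --help | grep -- --bind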
Application Details: Ubuntu 16.04 + flask + nginx + uwsgi
I am trying to execute a bash command from a Flask application.
import os

@app.route('/hello', methods=('GET', 'POST'))
def hello():
    os.system('mkdir my_directory')
    return "Hello"
The above code runs successfully but doesn't create any directory. It does, however, create the directory on my local machine, which doesn't have any nginx-level setup.
I also tried the following ways:
subprocess.call(['mkdir', 'my_directory'])               # Throws Internal server error
subprocess.call(['mkdir', 'my_directory'], shell=True)   # No error but directory not created
subprocess.Popen(['mkdir', 'my_directory'])              # Throws Internal server error
subprocess.Popen(['mkdir', 'my_directory'], shell=True)  # No error but directory not created
Do I need any nginx-level configuration changes?
Finally I got it. I followed Python subprocess call returns "command not found", Terminal executes correctly.
What I was missing was the absolute path of mkdir. When I executed subprocess.call(["/bin/mkdir", "my_directory"]), it created the directory successfully.
The linked question contains complete details.
I would also be thankful if anyone could explain why I need to specify the absolute path for mkdir.
Thanks to all. :)
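For anyone hitting the same thing: the usual explanation is that the service environment uwsgi runs under has a minimal PATH, so bare command names are not resolved the way they are in a login shell. A minimal sketch of two safer alternatives (my_directory is just the example name from the question):
import os
import subprocess

# Option 1: no external command at all, so no PATH lookup is needed.
# A relative name is created under the service's current working directory.
os.makedirs('my_directory', exist_ok=True)  # exist_ok requires Python 3

# Option 2: if you must call the external binary, give its absolute path.
subprocess.call(['/bin/mkdir', '-p', 'my_directory'])

# Aside: subprocess.call(['mkdir', 'my_directory'], shell=True) runs a bare
# "mkdir" with no arguments, because with shell=True only the first list
# element is treated as the command. That is why it reported no error yet
# created nothing.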
We are building a data pipeline using the Beam Python SDK and trying to run it on Dataflow, but we are getting the error below:
A setup error was detected in beamapp-xxxxyyyy-0322102737-03220329-8a74-harness-lm6v. Please refer to the worker-startup log for detailed information.
But we could not find detailed worker-startup logs.
We tried increasing the memory size, worker count, etc., but we still get the same error.
Here is the command we use:
python run.py \
--project=xyz \
--runner=DataflowRunner \
--staging_location=gs://xyz/staging \
--temp_location=gs://xyz/temp \
--requirements_file=requirements.txt \
--worker_machine_type n1-standard-8 \
--num_workers 2
Pipeline snippet:
data = pipeline | "load data" >> beam.io.Read(
    beam.io.BigQuerySource(query="SELECT * FROM abc_table LIMIT 100")
)
data | "filter data" >> beam.Filter(lambda x: x.get('column_name') == value)
The above pipeline just loads data from BigQuery and filters it on some column value. It works like a charm with DirectRunner but fails on Dataflow.
Are we making any obvious setup mistake? Is anyone else getting the same error? We could use some help resolving the issue.
Update:
Our pipeline code is spread across multiple files, so we created a Python package. We solved the setup error by passing the --setup_file argument instead of --requirements_file.
We resolved this setup error by sending a different set of arguments to Dataflow. Our code is spread across multiple files, so we had to create a package for it. If we use --requirements_file, the job will start but eventually fail, because it cannot find the package on the workers. The Beam Python SDK sometimes does not throw an explicit error message for this; instead, it retries the job and fails. To get your code running as a package, you need to pass the --setup_file argument, pointing at a setup.py that lists your dependencies. Make sure the package created by the python setup.py sdist command includes all the files required by your pipeline code.
If you have a privately hosted Python package dependency, pass --extra_package with the path to the package.tar.gz file. A better way is to store it in a GCS bucket and pass that path here.
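For example, a hypothetical invocation, assuming the private dependency has been built into a tarball named mylib-0.1.tar.gz (the package name is a placeholder, not from the original post):
python run.py \
  --project=xyz \
  --runner=DataflowRunner \
  --staging_location=gs://xyz/staging \
  --temp_location=gs://xyz/temp \
  --setup_file=./setup.py \
  --extra_package=./dist/mylib-0.1.tar.gz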
I have written an example project to get started with the Apache Beam Python SDK on Dataflow: https://github.com/RajeshHegde/apache-beam-example
Read about it here: https://medium.com/@rajeshhegde/data-pipeline-using-apache-beam-python-sdk-on-dataflow-6bb8550bf366
I'm building a prediction pipeline using Apache Beam/Dataflow. I need to include the model files inside the dependencies available to the remote workers. The Dataflow job failed with the same error log:
Error message from worker: A setup error was detected in beamapp-xxx-xxxxxxxxxx-xxxxxxxx-xxxx-harness-xxxx. Please refer to the worker-startup log for detailed information.
However, this error message didn't give any details about the worker-startup log. Eventually, I found a way to get at the worker log and solve the problem.
Dataflow creates Compute Engine instances to run jobs and saves logs on them, so we can access those VMs to see the logs. We can connect to the VM used by our Dataflow job from the GCP console via SSH. Then we can check the boot-json.log file located in /var/log/dataflow/taskrunner/harness:
$ cd /var/log/dataflow/taskrunner/harness
$ cat boot-json.log
One thing to pay attention to here: when running in batch mode, the VMs created by Dataflow are ephemeral and are shut down when the job fails. Once a VM is shut down, we can't access it anymore. But a failing work item is retried 4 times, so normally there is enough time to open boot-json.log and see what is going on.
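For reference, the console's SSH button has a command-line equivalent; a sketch with placeholder names (the instance name is the beamapp-...-harness-... worker from the error message):
$ gcloud compute ssh <worker-instance-name> --zone=<zone> --project=<project>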
Finally, here is my Python packaging solution; it may help someone else:
main.py
...
model_path = os.path.dirname(os.path.abspath(__file__)) + '/models/net.pd'
# pipeline code
...
MANIFEST.in
include models/*.*
setup.py complete example
import setuptools

REQUIRED_PACKAGES = [...]

setuptools.setup(
    ...
    include_package_data=True,             # include files declared in MANIFEST.in
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
    package_data={"models": ["models/*"]},
    ...
)
Run Dataflow pipelines
$ python main.py --setup_file=/absolute/path/to/setup.py ...
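One quick way to confirm the model files actually made it into the package (a hedged check, assuming the default dist/ output directory):
$ python setup.py sdist
$ tar -tzf dist/*.tar.gz | grep models/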
Recently, when I was installing OpenStack on 3 VMs on CentOS 7 using an answer file, I got the following error:
10.7.35.174_osclient.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 10.7.35.174_osclient.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list python-iso8601' returned 1: Error: No matching Packages to list
You will find full trace in log /var/tmp/packstack/20160318-124834-91QzZC/manifests/10.7.35.174_osclient.pp.log
Please check log file /var/tmp/packstack/20160318-124834-91QzZC/openstack-setup.log for more information
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 10.7.35.174. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.7.35.174/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
I have already installed that module manually, but the same problem occurs anyway.
That command only succeeds in this form:
/usr/bin/yum -d 0 -e 0 -y list python2-iso8601
Is there any way to get the installer to use that package name instead?
Do you have any ideas how to solve this?
I found that the Kilo version works fine.
This is the error message; any help will be appreciated.
2011-02-23 23:09:11 Running command: "['C:\\Python25\\pythonw.exe', '-u', 'C:\\Program Files\\Google\\google_appengine\\appcfg.py', '--no_cookies', u'--email=adham587@gmail.com', '--passin', 'update', 'C:\\Users\\adham\\Desktop\\images']"
Application: refacingme; version: 1.
Server: appengine.google.com.
Scanning files on local disk.
Initiating update.
2011-02-23 23:09:42,223 WARNING appengine_rpc.py:405 ssl module not found.
Without the ssl module, the identity of the remote host cannot be verified, and
connections may NOT be secure. To fix this, please install the ssl module from
http://pypi.python.org/pypi/ssl .
To learn more, see http://code.google.com/appengine/kb/general.html#rpcssl .
Password for XXXXX@gmail.com: Error 409: --- begin server output ---
Another transaction by user XXXXXXXX is already in progress for this app and major version. That user can undo the transaction with appcfg.py's "rollback" command.
--- end server output ---
2011-02-23 23:09:46 (Process exited with code 1)
You can close this window now.
This problem can occur if an update is started and does not finish for whatever reason. As the error message notes, the correct thing to do is to give appcfg.py the rollback command. That will undo the failed changes and reset your app so it is ready for an update.
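For example, using the application directory shown in the log above:
appcfg.py rollback C:\Users\adham\Desktop\images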
Try
easy_install ssl
The message says that the ssl module is missing.
@Adam: duly noted. I apologize for missing the "Warning text".
I believe his error will go away in a while, since another operation might be going on at the same time. He should install the ssl module nevertheless. If after a while it doesn't help, he should perform a rollback.
Google App Engine: appcfg.py rollback