having trouble installing awslogs agent - python

I'm having issues trying to install the awslogs agent on my EC2 node. When I run this command:
sudo python ./awslogs-agent-setup.py --region us-east-1
it seems to fail at step 2 like this:
Launching interactive setup of CloudWatch Logs agent ...
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ...
Traceback (most recent call last):
File "./awslogs-agent-setup.py", line 1144, in <module>
main()
File "./awslogs-agent-setup.py", line 1140, in main
setup.setup_artifacts()
File "./awslogs-agent-setup.py", line 696, in setup_artifacts
self.install_awslogs_cli()
File "./awslogs-agent-setup.py", line 523, in install_awslogs_cli
subprocess.call([AWSCLI_CMD, 'configure', 'set', 'plugins.cwlogs', 'cwlogs'], env=DEFAULT_ENV)
File "/usr/lib64/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
What directory or file is it missing?

Amazon Linux 2
The awslogs agent is now available as a yum package: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
sudo yum install -y awslogs
sudo systemctl start awslogsd
sudo systemctl enable awslogsd.service
Make sure to change the AWS Region as mentioned in the doc.
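For reference, with the yum package the Region is set in a config file rather than by the interactive installer. Per the linked QuickStart doc, /etc/awslogs/awscli.conf should look roughly like this (adjust the region value to yours):
[plugins]
cwlogs = cwlogs
[default]
region = us-east-1
After changing it, restart the agent with sudo systemctl restart awslogsd.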

I solved this by explicitly passing the Python interpreter to use:
sudo python ./awslogs-agent-setup.py --region us-east-1 --python=/usr/bin/python3.5
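If you're not sure where a suitable Python 3 interpreter lives on your instance, something like this should work too (the exact path, /usr/bin/python3.5 in my case, varies by distro):
which python3
sudo python ./awslogs-agent-setup.py --region us-east-1 --python=$(which python3)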

Although this question is a bit old, I'd like to add an answer to it, as I recently ran into the same problem but managed to find a way around it. I was trying to install this on an instance running CentOS 7.
When I ran the installation command for the first time, I got exactly the same error log reported by #user2061886. The installer logs to a file at the following path: /var/log/awslogs-agent-setup.log. I tailed that file and found that internally the installer was complaining about not being able to find the file "Python.h":
creating build/temp.linux-x86_64-2.7
checking if libyaml is compilable
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-
D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-
size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c build/temp.linux-x86_64-2.7/check_libyaml.c -o build/temp.linux-x86_64-2.7/check_libyaml.o
checking if libyaml is linkable
gcc -pthread build/temp.linux-x86_64-2.7/check_libyaml.o -L/usr/lib64 -lyaml -o build/temp.linux-x86_64-2.7/check_libyaml
building '_yaml' extension
creating build/temp.linux-x86_64-2.7/ext
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c ext/_yaml.c -o build/temp.linux-x86_64-2.7/ext/_yaml.o
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
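(For reference, on CentOS 7 the Python.h header for Python 2.7 comes from the python-devel package, e.g. sudo yum install -y gcc python-devel, as other answers here point out.)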
I couldn't get it working with Python 2.7, so I switched to Python 3.5. To install Python 3.5 on CentOS 7:
yum -y update
yum install -y epel-release
yum install -y http://dl.iuscommunity.org/pub/ius/stable/CentOS/7/x86_64/ius-release-1.0-13.ius.centos7.noarch.rpm
yum -y update
yum install -y python35u*
I ran the installer command again and got past the error reported by #user2061886. I could install and configure the CloudWatch Logs agent. However, soon after I started the service (sudo service awslogs start), I ran into a second problem. This time I had to tail the following file to spot the issue: /var/log/awslogs.log. The CloudWatch Logs agent was complaining about not being able to find the cwlogs package:
Traceback (most recent call last):
File "/var/awslogs/bin/aws", line 27, in <module>
sys.exit(main())
File "/var/awslogs/bin/aws", line 23, in main
return awscli.clidriver.main()
File "/usr/lib/python3.5/site-packages/awscli/clidriver.py", line 55, in main
driver = create_clidriver()
File "/usr/lib/python3.5/site-packages/awscli/clidriver.py", line 64, in create_clidriver
event_hooks=emitter)
File "/usr/lib/python3.5/site-packages/awscli/plugin.py", line 44, in load_plugins
modules = _import_plugins(plugin_mapping)
File "/usr/lib/python3.5/site-packages/awscli/plugin.py", line 58, in _import_plugins
plugins.append(__import__(path))
ImportError: No module named 'cwlogs'
I solved this by installing the package manually with pip:
pip3.5 install awscli-cwlogs
That got the problem solved!

I had the same problem trying to install on a CentOS Docker image. It turns out I could do without updating Python after installing these packages:
python-devel libpython-dev which initscripts cronie

^^ Yeah, I fixed a similar issue with some missing dependencies that were pointed to in /var/log/awslogs.log:
apt-get update && apt-get install -y python-pip libpython-dev

So guys, I just figured it out!
File "./awslogs-agent-setup.py", line 520, in install_awslogs_cli
venv_in_path = (subprocess.call(["which", "virtualenv"], stderr=self.log_file, stdout=self.log_file) == 0)
In the error above, you can see the "which" command is being used, and unfortunately "which" was not installed.
Once I installed it, everything started working.
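On yum-based systems the missing utility is simply the which package, so something along these lines should fix it (assuming yum; adjust for your package manager):
sudo yum install -y which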
Cheers! Hope it helps someone.

I know I'm 2+ years late, but I wasn't able to find an answer to this.
I was having the same problem, and it was because the disk was running out of inodes (I think running out of disk space can cause the same problem). I solved it by running sudo apt-get autoremove.
You can check your inodes with df -i.
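To check both possibilities at once (inodes and plain disk space):
df -i    # inode usage per filesystem
df -h    # disk space per filesystem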
I hope this may help anyone having this problem.

Related

gcc cannot find file even though it is on the include path

I'm trying to install a local Python package on Scientific Linux 7.9, Python version 3.8. The package contains Cython, so it needs the Python headers to build. These are installed and on the include path, but gcc still claims it can't find Python.h. Is this a permissions issue?
$ python setup.py install
running install
...
building 'farm.rasters.water_fill' extension
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64
-mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python38/root/usr/include -O2 -g -pipe
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4
-grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -I/opt/rh/rh-python38/root/usr/include
-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4
-grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC
-I/home/jon/.jenkins/workspace/Farmmap_revision_linux_py36/TOXENV/py38/venv/lib64/python3.8/site-packages/numpy/core/include
-I/home/jon/.jenkins/workspace/Farmmap_revision_linux_py36/TOXENV/py38/venv/include
-I/opt/rh/rh-python38/root/include/python3.8 -c farm/rasters/water_fill.c
-o build/temp.linux-x86_64-3.8/farm/rasters/water_fill.o
farm/rasters/water_fill.c:19:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'gcc' failed with exit status 1
I have installed rh-python38-python-devel, which includes the headers and they are in the -I path given above.
$ ls -l /opt/rh/rh-python38/root/usr/include/python3.8/
...
-rw-r--r--. 1 root root 3615 Jun 28 11:08 Python.h
Do I just need to chown this directory? I haven't needed to do this on other machines I have installed the package on.
I have figured it out - while the two python3.8 executables (/opt/rh/rh-python38/root/usr/bin/python3.8 and /opt/rh/rh-python38/root/bin/python3.8) appear to be the same, only one of them has the associated header files.
When I create a venv using
$ /opt/rh/rh-python38/root/usr/bin/python3.8 -m venv venv
I can build my project, but when I create the venv with
$ /opt/rh/rh-python38/root/bin/python3.8 -m venv venv
I cannot.
When I try to look for the includes associated with the second executable, there is nothing there.
$ ls /opt/rh/rh-python38/root/include/python3.8
ls: cannot access /opt/rh/rh-python38/root/include/python3.8: No such file or directory
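If you want to double-check which include directory a given interpreter actually reports, and whether Python.h is there, a quick sanity check (works with either executable) is:
/opt/rh/rh-python38/root/usr/bin/python3.8 -c 'import sysconfig; print(sysconfig.get_paths()["include"])'
and then ls that directory to confirm Python.h is present.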
If it is not showing the path and instead gives you a "path not found" error:
Simply try opening Windows PowerShell as administrator and run
sfc /scannow
and wait a few minutes. If it reports that some files are corrupted (or shows other corruption warnings), then run
Dism /Online /Cleanup-Image /RestoreHealth
and wait; it takes time, up to half an hour. Restart your PC and then check again whether it is working.
Hope it helps you 🙂.

boost python library linking issue -- undefined symbol

I am using boost-python built for python3 to expose a simple hello-world program. The example can be found here: https://github.com/TNG/boost-python-examples/blob/master/01-HelloWorld/hello.cpp
I ran the following commands to get the shared object:
g++ -fPIC -c -I/usr/include/python3.4m -I/usr/include/python3.4m -Wno-unused-result -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -L/usr/lib/x86_64-linux-gnu/libboost_python-py34 hello.cpp
g++ -shared hello.o -o hello.so
After this, I run the python3 -c 'import hello' command and I get the following error:
Traceback (most recent call last):
File "", line 1, in
ImportError: hello.so: undefined symbol: _ZTIN5boost6python7objects21py_function_impl_baseE
I partly understand this issue: it may be because my boost-python installation was built for a different Python version (for instance Python 2.7). When I run the command:
ls /usr/lib/x86_64-linux-gnu/libboost_python*.so
There are three .so files:
1. libboost_python-py27.so
2. libboost_python-py34.so
3. libboost_python.so
How can this issue be circumvented?
Use pkg-config to retrieve the ldflags and cflags of your boost library.
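Whichever way you obtain the flags, note that the undefined symbol here lives in libboost_python itself, so the link step also needs that library. A minimal sketch, assuming the py34 library name listed in the question:
g++ -fPIC -c -I/usr/include/python3.4m hello.cpp -o hello.o
g++ -shared hello.o -o hello.so -lboost_python-py34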
Installing miniconda might be an option for you. https://conda.io/miniconda.html
This will provide a complete, isolated Python environment. You can then
conda install boost
I've tested this on my system and it worked well. I modified the Makefile from http://www.shocksolution.com/python-basics-tutorials-and-examples/linking-python-and-c-with-boostpython/
My Makefile can be found here:
https://github.com/grelleum/boost-python-with-anaconda

pip install pubnub throws 'gcc' failed error

I am trying to install the pubnub libraries, and I get the following error when I do pip install pubnub:
Compiling support for Intel AES instructions
building 'Crypto.Hash._MD2' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DLTC_NO_ASM -DHAVE_CPUID_H -Isrc/ -I/usr/include/python2.7 -c src/MD2.c -o build/temp.linux-x86_64-2.7/src/MD2.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/src/MD2.o -L/usr/lib64 -lpython2.7 -o build/lib.linux-x86_64-2.7/Crypto/Hash/_MD2.so
/usr/bin/ld: cannot find -lpython2.7
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
These are the steps I followed:
curl -O https://bootstrap.pypa.io/get-pip.py
sudo python27 get-pip.py
sudo yum install git
git clone https://github.com/pubnub/python && cd python/python
sudo update-alternatives --config python
sudo yum install python-devel
sudo yum install gcc
Thanks
You need to install Python's header files. How you do that will depend on your operating system.
On Debian or Ubuntu, for example, something like
sudo apt-get install python-dev
should do it.
On Fedora / CentOS / Red Hat, try
sudo yum install python-devel
To solve this, I had to follow these steps.
ld -lpython2.7 --verbose
attempt to open /usr/x86_64-amazon-linux/lib64/libpython2.7.so failed
attempt to open /usr/x86_64-amazon-linux/lib64/libpython2.7.a failed
attempt to open /usr/local/lib64/libpython2.7.so failed
attempt to open /usr/local/lib64/libpython2.7.a failed
attempt to open /lib64/libpython2.7.so failed
attempt to open /lib64/libpython2.7.a failed
attempt to open /usr/lib64/libpython2.7.so failed
attempt to open /usr/lib64/libpython2.7.a failed
attempt to open /usr/x86_64-amazon-linux/lib/libpython2.7.so failed
attempt to open /usr/x86_64-amazon-linux/lib/libpython2.7.a failed
attempt to open /usr/lib64/libpython2.7.so failed
attempt to open /usr/lib64/libpython2.7.a failed
attempt to open /usr/local/lib/libpython2.7.so failed
attempt to open /usr/local/lib/libpython2.7.a failed
attempt to open /lib/libpython2.7.so failed
attempt to open /lib/libpython2.7.a failed
attempt to open /usr/lib/libpython2.7.so failed
attempt to open /usr/lib/libpython2.7.a failed
Check the ldconfig softlink for Python and find out what it's pointing to:
ldconfig -p | grep python2.7
libpython2.7.so.1.0 (libc6,x86-64) => /usr/lib64/libpython2.7.so.1.0
This shows that only libpython2.7.so.1.0 was present, so I added the libpython2.7.so softlink the linker was looking for:
sudo ln -s /usr/lib64/libpython2.7.so.1.0 /usr/lib64/libpython2.7.so
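After creating the softlink, the same verbose check as above should now show a successful attempt, e.g.:
ls -l /usr/lib64/libpython2.7.so
ld -lpython2.7 --verbose | grep libpython2.7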
and then I had to run pip like this (using the location where pip was installed):
sudo /usr/local/bin/pip install pubnub
Worked pretty well.

gcc cannot find cc1plus

I'm trying to install the python package pandas on CentOS 6 but I'm having problems with the gcc compiler:
sudo pip install pandas
...
creating build/temp.linux-x86_64-2.7/pandas/msgpack
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/msgpack -Ipandas/src/klib -Ipandas/src -I/opt/rh/python27/root/usr/lib64/python2.7/site-packages/numpy/core/include -I/opt/rh/python27/root/usr/include/python2.7 -c pandas/msgpack/_packer.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack/_packer.o -Wno-unused-function
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Cleaning up...
...
So it appears I need cc1plus, which from reading around is provided by gcc-c++. But I already have gcc-c++:
sudo yum install gcc-c++
...
Package gcc-c++-4.4.7-16.el6.x86_64 already installed and latest version
Nothing to do
About gcc and cc1plus:
gcc --version
gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-15)
which gcc
/opt/rh/devtoolset-2/root/usr/bin/gcc
locate cc1plus
/usr/libexec/gcc/x86_64-redhat-linux/4.4.4/cc1plus
My own solution is below. Does anybody have a better way of addressing the problem?
Solution to my own question:
It seems that cc1plus, although present, is not visible to gcc because it is not on the PATH. So one solution is to link cc1plus into a directory on the PATH:
sudo ln -s /usr/libexec/gcc/x86_64-redhat-linux/4.4.4/cc1plus /usr/local/bin/
Now sudo pip install pandas succeeds.
(But why did the package manager put cc1plus there in the first place?)
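For what it's worth, you can see where gcc expects to find cc1plus (handy for sanity-checking this kind of mismatch) with:
gcc -print-prog-name=cc1plus
If it prints just cc1plus rather than a full path, gcc isn't finding the helper.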

'gcc' failed during pandas build on AWS Elastic Beanstalk

Getting the following error when trying to install Pandas (0.16.0), which is in my requirements.txt file, on an AWS Elastic Beanstalk EC2 instance:
building 'pandas.msgpack' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -D__LITTLE_ENDIAN__=1 -Ipandas/src/klib -Ipandas/src -I/opt/python/run/venv/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pandas/msgpack.cpp -o build/temp.linux-x86_64-2.7/pandas/msgpack.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
I'm running on 64bit Amazon Linux 2015.03 v1.3.0 running Python 2.7 and previously ran into this same error on a t1.micro instance, which was resolved when I changed to an m3.medium. But I'm now running an m3.xlarge, so it can't be a memory issue.
I have also ensured that gcc is installed as a package in .ebextensions/00_gcc.config:
packages:
  yum:
    gcc: []
    gcc-c++: []
For pandas being compiled on Elastic Beanstalk, make sure to have both packages: gcc-c++ and python-devel
packages:
  yum:
    gcc-c++: []
    python-devel: []
Install python-dev:
sudo apt-get install python-dev
For Python 3:
sudo apt-get install python3-dev
On EC2 instances, if you run into a gcc error, try this:
sudo yum install gcc python-setuptools python-devel postgresql-devel
sudo su -
sudo pip install
I had to upgrade Amazon's EC2 pip. You can do this by editing the .config file in .ebextensions:
commands:
  00_update_pip:
    command: "/opt/python/run/venv/bin/pip install --upgrade pip"
I solved this issue by SSHing into the Elastic Beanstalk machine and updating pip:
pip install -U pip
