I am able to call my Python from Node.js on AWS Lambda using the function below. However, because I need specific Python libraries, I created a virtualenv in the env directory, zipped everything up, and pushed it to Lambda. But when I try to call Python from the virtual directory I get a Permission Denied error.
I attempted to modify the chmod permissions on Lambda before calling Python but got Operation Not Permitted. How can I get this to run?
console.log('Loading event');
var exec = require('child_process').exec;

exports.handler = function(event, context) {
    // run the bundled interpreter, passing the event as a JSON argument
    exec('env/bin/python district.py \'' + JSON.stringify(event) + '\'', function(error, stdout) {
        var obj = stdout.toString();
        context.done(error, obj);
    });
};
Here's the error:
{
    "errorMessage": "Command failed: /bin/sh: env/bin/python: Permission denied\n",
    "errorType": "Error",
    "stackTrace": [
        "",
        "ChildProcess.exithandler (child_process.js:658:15)",
        "ChildProcess.emit (events.js:98:17)",
        "maybeClose (child_process.js:766:16)",
        "Process.ChildProcess._handle.onexit (child_process.js:833:5)"
    ]
}
The error most likely signals that the python file does not have the executable bit set. Note, however, that even if you set the x bit on a Windows-built virtualenv, it still won't work: its python.exe is a Windows executable, and Windows executables won't run on Lambda's Linux environment.
Note, this virtualenv was created on Windows. I also attempted one created from Linux, i.e. env/bin/python district.py, with no help.
env/bin/python is the correct command. If you still get the Permission Denied error, then it means that the python file is missing the executable bit.
In the AWS Lambda runtime environment, you are not allowed to change permissions of files, nor to change user, therefore you must set the executable bit (or any other permission bit you need) when creating the .zip archive.
To sum up:
Lambda runs on Linux machines, so use Linux executables.
Set the executable bit of the executables before creating the archive (see the sketch below).
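If you build the archive with Python's own zipfile module, you can set the permission bits while zipping. A minimal sketch (the archive name lambda.zip is illustrative; env/bin/python is the interpreter from the question):

import zipfile

# ZipInfo keeps Unix permission bits in the high 16 bits of external_attr,
# so 0o755 stores the interpreter as rwxr-xr-x inside the archive
with zipfile.ZipFile('lambda.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    info = zipfile.ZipInfo('env/bin/python')
    info.external_attr = 0o755 << 16
    with open('env/bin/python', 'rb') as f:
        zf.writestr(info, f.read())

On a Linux machine, a plain chmod +x env/bin/python before running zip -r achieves the same thing, since zip preserves the mode bits.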
Try this out:
exec('python district.py "' + JSON.stringify(event) + '"', function(error, stdout) {
    console.log('Python returned: ' + stdout + '.');
    context.done(error, stdout);
});
Amazon has a tutorial on using Python in Lambda here.
I am starting to integrate AWX into our environment and would like to move & schedule some Python scripts there, but I am facing an issue triggering a Python script from an Ansible playbook. The .yml and the .py are located in the same GitHub repository & directory. I trigger the Ansible playbook, which initiates the Python script per the two lines below (there is more code, of course), and everything completes OK, but the script is not triggered. Previously all Python scripts were located locally on the host and scheduled via Windows Task Scheduler.
CODE:
- name: Checking for duplicate clients
  script: duplicate_clients.py
It seems the Python script on the Windows host is not even started, as I do not see python running in Task Manager.
DEBUG:
<server_XXX> PUT "/runner/project/MISC/duplicate_clients.py" TO "C:\Users\C017317\AppData\Local\Temp\ansible-tmp-1656571782.4292357-31-141461256091413\duplicate_clients.py"
EXEC (via pipeline wrapper)
EXEC (via pipeline wrapper)
changed: [server_XXX] => {
    "changed": true,
    "rc": 0,
    "stderr": "#< CLIXML<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\"><Obj S=\"progress\" RefId=\"0\"><TN RefId=\"0\"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N=\"SourceId\">1</I64><PR N=\"Record\"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>",
    "stderr_lines": [
        "#< CLIXML<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\"><Obj S=\"progress\" RefId=\"0\"><TN RefId=\"0\"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N=\"SourceId\">1</I64><PR N=\"Record\"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>"
    ],
    …
META: ran handlers
However, I have tried a different approach to start the Python script (as I am not sure how exactly it should be done when the script is located on GitHub).
CODE:
- name: Checking for duplicate clients
  command: py -3 duplicate_clients.py
DEBUG:
> EXEC (via pipeline wrapper)
Using module file /usr/local/lib/python3.8/site-packages/ansible/modules/command.py
Pipelining is enabled.
EXEC (via pipeline wrapper)
[WARNING]: No python interpreters found for host server_XXX (tried ['python3.10', 'python3.9', 'python3.8', 'python3.7', 'python3.6', 'python3.5', '/usr/bin/python3', '/usr/libexec/platform-python', 'python2.7', 'python2.6', '/usr/bin/python', 'python'])
fatal: [server_XXX]: FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
It seems Ansible is looking for python on a Linux path, though the target host is Windows.
The Python script here is not an issue, as it simply creates a test file. Running the script locally completes with RC=0.
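For reference, a minimal stand-in for such a script, since its real contents are not shown here (the path and file name are assumed):

# create_file.py - assumed stand-in: proves the interpreter ran
# by dropping a marker file on the Windows host
with open(r'C:\Temp\ansible_marker.txt', 'w') as f:
    f.write('script executed\n')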
So the issue has been resolved by providing the Python executable path (the nested quotes are needed because the path contains spaces):
- name: Checking for duplicate clients
  script: create_file.py
  args:
    executable: '"C:\Program Files (x86)\Python\python.exe"'
I am trying to run a Python script in Laravel. I am using composer require symfony/process for this; I don't want to use shell_exec(). When I try to run it, an error is returned, although everything is okay on the command line. My Python script is located in the public folder.
My Controller:
use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

public function plagiarismCheck() {
    $process = new Process(['C:/Program Files/Python39/python.exe', 'C:/Users/User/Desktop/www/abyss-hub/public/helloads.py']);
    $process->run();

    // executes after the command finishes
    if (!$process->isSuccessful()) {
        throw new ProcessFailedException($process);
    }

    return $process->getOutput();
}
Python script:
print('Hello Python')
Error:
The command ""C:/Program Files/Python39/python.exe" "C:/Users/User/Desktop/www/abyss-hub/public/helloads.py"" failed.

Exit Code: 1 (General error)

Working directory: C:\Users\User\Desktop\www\abyss-hub\public

Output:
================

Error Output:
================
Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized
You need to configure the process environment variables:
https://symfony.com/doc/current/components/process.html#setting-environment-variables-for-processes
$process = new Process(["python", $pathToFilePython, ...$yourArgs], env: [
    'SYSTEMROOT' => getenv('SYSTEMROOT'),
    'PATH' => getenv('PATH'),
]);
$process->run();
$process->getOutput();
PATH: so that the "python" command can be found.
SYSTEMROOT: OS dependencies Python needs to start up (including the random-number source behind the error above).
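The fatal error itself can be reproduced outside Laravel. A minimal sketch (Windows only; it assumes python resolves through the parent process's PATH):

import subprocess

# an empty env drops SYSTEMROOT, so the child interpreter cannot reach the
# Windows random-number source and aborts in _Py_HashRandomization_Init
result = subprocess.run(
    ['python', '-c', "print('hi')"],
    env={}, capture_output=True, text=True,
)
print(result.returncode)  # non-zero
print(result.stderr)      # Fatal Python error: _Py_HashRandomization_Init ...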
I'm running a Python script that interacts with Slack. I'm getting the Slack API token into the Python script with
the_token = os.environ.get('SLACK_TOKEN')
I tried to puppetize the python environment with
$var_name = 'SLACK_TOKEN'
$token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'

python::virtualenv { $virtualenv_path:
  ensure       => present,
  requirements => '/opt/<dir>/<dir>/<dir>/requirements.txt',
  owner        => $::local_username,
  version      => '3',
  require      => [Class['<class>']],
  environment  => ["${var_name}=${token}"],
}
I thought the last line of the 'virtualenv' block would set the environment variable, but apparently not.
In manifests/virtualenv.pp there is an exec that runs pip commands to install/start virtual environments (sorry, I'm no expert on Python virtual environments).
exec { "python_requirements_initial_install_${requirements}_${venv_dir}":
command => "${pip_cmd} --log ${venv_dir}/pip.log install ${pypi_index} ${proxy_flag} --no-binary :all: -r ${requirements} ${extra_pip_args}",
refreshonly => true,
timeout => $timeout,
user => $owner,
subscribe => Exec["python_virtualenv_${venv_dir}"],
environment => $environment, <----- HERE
cwd => $cwd,
}
When the exec runs, Puppet opens a shell, pipes in the environment variables, runs the command, and closes the shell, so the environment variables exist within the shell Puppet started but not in the Python environment it created. Its intended function is probably to set up paths to commands if they are in a different place, or to pass in proxy configuration so pip can pull packages from external sites.
I verified the exec handles what you're sending it correctly using this:
class test {
  $var_name = 'test'
  $token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'

  exec { "test":
    command     => "/bin/env > /tmp/env.txt",
    environment => ["${var_name}=${token}"],
  }
}
But you'll notice I had to dump the environment the exec ran in out to a file to see it.
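Seen from the Python side, the consequence is simple; a minimal check (nothing here is specific to the Puppet module):

import os

# SLACK_TOKEN is only visible if the process that runs this script sets it;
# the variable Puppet passed to its pip exec vanished with that shell
print(os.environ.get('SLACK_TOKEN'))  # None unless exported at runtime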
I want to invoke a bash script named myScript.sh from a newly created Lambda function.
Step 1: I created a Lambda function named myLambda.py with source code like:
import subprocess

print("start")
subprocess.call("./myScript.sh")
Step 2: Create a bash script named myScript.sh under the same path as myLambda.py.
Step 3: Click the test button, which returned the response:
{
    "errorMessage": "[Errno 13] Permission denied: './myScript.sh'"
}
Does anybody know how to deal with the permission denied issue in the AWS Lambda function environment?
Since the files are added as per the guideline in https://docs.aws.amazon.com/lambda/latest/dg/code-editor.html, using the Linux command "chmod +x" to change the file permission there doesn't help.
It was resolved by moving myScript.sh to the /tmp folder and adding a permission-change command:
subprocess.run(["chmod", "+x", "/tmp/myScript.sh"])
You can't execute scripts that don't have execute permission. You can supply execute permissions using some variant of:
chmod +x /somepath/myScript.sh
You can run this using your current subprocess approach. Run chmod before you run myScript.sh.
I am trying to run an Azure function locally on my Mac and I am getting the following error: The binding type(s) 'blobTrigger' are not registered. Please ensure the type is correct and the binding extension is installed.
I'm working with Python 3.6.8 and have installed azure-functions-core-tools using homebrew (brew tap azure/functions; brew install azure-functions-core-tools).
I set up my local.settings.json file with the expected configuration, so the function should be listening to the correct storage container hosted in Azure.
I'm certain I have not changed any code or configuration files since it was working last week.
host.json file contains:
{
    "version": "2.0",
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[1.*, 2.0.0)"
    }
}
function.json file contains:
{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "name": "xmlblob",
            "type": "blobTrigger",
            "direction": "in",
            "path": "<directory>/{name}",
            "connection": "AzureStorageAccountConnectionString"
        }
    ]
}
requirements.txt file contains:
azure-cosmos==3.1.0
azure-functions-worker==1.0.0b6
azure-storage==0.36.0
azure-storage-blob==2.0.1
xmljson==0.2.0
xmlschema==1.0.11
Then I run the following commands in my terminal:
1) pip install -r requirements.txt
2) source .env/bin/activate
3) func host start
I then get the following error:
<Application name>: The binding type(s) 'blobTrigger' are not registered. Please ensure the type is correct and the binding extension is installed.
You have done everything correctly by the looks of it, but you need to have the .NET Core framework and runtime installed locally in order to execute the trigger.
For me on Ubuntu, I followed this guide. Once installed, I was able to trigger a blob function locally.
For Mac, I would take a look here about installing .NET Core.
I got this error because there was something wrong with the bundle it had downloaded and cached. In addition to the error message, earlier in the log there was a warning about not being able to load the Extension Bundle, and a few lines before that the log shows the path it loads the bundle from, like C:\Users\<redacted>\AppData\Local\Temp\Functions\ExtensionBundles\Microsoft.Azure.Functions.ExtensionBundle. I deleted the ExtensionBundles folder and it was re-downloaded.
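If you want to script that cleanup, a small sketch (the cache location is taken from the log path above and assumes the default temp directory):

import os
import shutil
import tempfile

# remove the cached extension bundles so the Core Tools re-download them
bundle_cache = os.path.join(tempfile.gettempdir(), 'Functions', 'ExtensionBundles')
shutil.rmtree(bundle_cache, ignore_errors=True)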
I followed nathan shumoogum's solution, which he describes in a comment on another answer to this question. It worked.
The process is:
Uninstall the Python dependencies, then Python 3.6.8, then azure-functions-core-tools, and finally all versions of the .NET Core 2.2 SDK (in that order). Then reinstall everything in the reverse order for macOS.
Make sure you have the Microsoft.Azure.WebJobs.Extensions.Storage NuGet package installed.
If you do it from the portal, it prompts you to install the extension.