After editing my Lambda function inline (Python 3.8) and pressing "Save", I get the message "successfully deployed package". Normally the updated function should run the next time the API is called, but the updated code either isn't being deployed or, for whatever reason, isn't being called. I don't know what's happening there; it says the package was deployed successfully.
Does it take a while for AWS to really make the new function available to the public?
Thanks for reply.
EDIT:
I found out that my browser had the old version in its cache, I think. But this is a problem: why do I have to clear the cache to test a new version of the API? It was published correctly, but my browser used the old version. How can I prevent that?
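One way to prevent this is to have the API itself forbid caching of its responses. A minimal sketch, assuming a Lambda proxy integration behind API Gateway (the handler name and payload are illustrative):

    import json

    def lambda_handler(event, context):
        # "Cache-Control: no-store" tells browsers and intermediate caches
        # never to reuse a stale copy of this response.
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json",
                "Cache-Control": "no-store, no-cache, must-revalidate",
            },
            "body": json.dumps({"message": "hello from the new version"}),
        }

With these headers, the browser fetches a fresh response on every call, so a newly deployed function is picked up without clearing the cache.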
My Azure function works correctly locally, but when I deploy it to Azure it returns this message:
Deployment successful. deployer = ms-azuretools-vscode deploymentPath = Functions App ZipDeploy. Extract zip. Remote build.
Syncing triggers...
Querying triggers...
No HTTP triggers found.
Uploading settings...
Added the following settings:
- AzureWebJobsFeatureFlags
- FUNCTIONS_WORKER_PROCESS_COUNT
11:43:54 AM: Ignored the following settings that were already the same:
- FUNCTIONS_WORKER_RUNTIME
11:43:54 AM: WARNING: This operation will not delete any settings in "vietnam-trademark-scraper-dev-test". You must manually delete settings if desired.
11:43:54 AM: Excluded the following settings:
- AzureWebJobsStorage
11:43:54 AM: Error: Error "appSettings.properties with value "1" must be of type string." occurred in serializing the payload - "StringDictionary".
Locally, I see all the triggers work:
I use the Python V2 programming model for my Azure Functions triggers, and I deploy with a dedicated plan.
I've tried searching for this problem on Google and have no idea how to fix it. Can someone explain this problem and suggest some solutions? Thanks.
Glad that adding the setting AzureWebJobsFeatureFlags:EnableWorkerIndexing worked for you, as shown in one of my workarounds.
I added this setting and redeployed, and it works. But I don't understand why it works. Can you explain it?
It is because Microsoft explicitly says to add that application setting when running the Python v2 programming model in Azure.
The values added under AzureWebJobsFeatureFlags enable features that are not yet ready for production; they are experimental until they are fully released, as defined in the MS doc on Azure Functions app settings.
Also, in the V2 programming model the Python environment does not yet support multiple workers (FUNCTIONS_WORKER_PROCESS_COUNT greater than 1) out of the box, so this setting needs to be added as a feature flag for it to work.
Refer to the GitHub article on the Azure Functions host's worker indexing changes for the Python environment for more information.
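For context, worker indexing is what lets the host discover functions declared in code rather than in per-function function.json files. A minimal sketch of a v2-model function (the route and function name are illustrative); without indexing enabled, the host finds no triggers in a file like this, which matches the "No HTTP triggers found" symptom:

    import azure.functions as func

    app = func.FunctionApp()

    # In the v2 model, triggers are declared with decorators instead of
    # function.json; the host discovers them via worker indexing.
    @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
    def hello(req: func.HttpRequest) -> func.HttpResponse:
        return func.HttpResponse("Hello from the v2 programming model")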
I've set up a Flask server hosted on Firebase and integrated with Cloud Run. I'm only making changes to HTML at the moment, and I'm using the command "firebase serve" with my localhost. However, when I refresh the window, and even when I stop the server and restart it, my changes still don't show up. I must be googling wrong, because I can't find what I'm looking for: is there some sort of update command, or do I need to re-build and re-deploy every time?
If the Firebase emulator suite isn't proxying the request to Cloud Run in the way you expect, you should open an issue on the firebase-tools GitHub and provide reproduction steps so they can diagnose it. Also make sure that your installation of firebase-tools is fully up to date.
Note that the CLI will not deploy any new code to Cloud Run. You still have to run gcloud to update the backend.
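A typical rebuild-and-redeploy loop looks roughly like this (the service name, image path, and region are placeholders for your own values):

    # Build a new container image from the current source
    gcloud builds submit --tag gcr.io/PROJECT_ID/flask-app

    # Roll the Cloud Run service forward to the new image
    gcloud run deploy flask-app --image gcr.io/PROJECT_ID/flask-app --region us-central1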
I'm having the exact same issue as another unanswered post, but I'm willing to provide whatever code/setup is needed to get the question answered properly.
As in the post I mentioned above, I am trying to deploy files to S3 with the AWS CLI, and I receive the same error:
upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna
I have the newest versions of Python and the AWS CLI. I can get the Python shell to import encodings.idna, but the AWS CLI process boots its own shell to run the commands, I assume, which may mean that I need to somehow inject the import statement into the AWS CLI process. I've tried editing the aws.cmd programs (one in /bin and one in /scripts), but nearly every change stopped the program from working properly.
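For illustration, the commonly suggested workaround for this error is exactly that one import, which forces the idna codec to be registered; the error usually comes from frozen or embedded Python environments where codec auto-discovery fails. A minimal sketch (the generic workaround, not necessarily the fix for this particular installation):

    # Importing the codec module explicitly registers the "idna" encoding,
    # which can be missing in frozen/embedded Python environments.
    import encodings.idna  # noqa: F401

    # This now succeeds instead of raising "LookupError: unknown encoding: idna".
    "example.com".encode("idna")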
I'm not sure what to post that can help determine what my issue is, so please let me know.
I just saw that I left this question unanswered. So, here's what I did to fix the issue.
Shortly after posting this question, I tried the AWS CLI Windows Installer once more and it worked. I still don't know why the initial installation didn't work, but I am able to upload to S3 via CLI without Python encoding errors.
I am writing an Azure timer trigger using Python 3.x. I've already got one such function running, so I thought I knew how to do it: create a function from the JS template, then delete index.js and create a run.py. But this time, when I run my Python function, I always get an error saying "No such file: index.js". I can't see any remaining link between the function and the index.js file.
Any thoughts?
You can add the Python function from the Azure portal directly. If you want to create a timer trigger function, you can change the trigger type.
The following are my detailed steps to create a Python timer trigger function.
1. Create an Azure Function App.
2. Add a Python function.
3. Change the HTTP trigger to a timer trigger:
a. Delete the HTTP trigger and the HTTP output.
b. Add the timer trigger.
4. Add the test code and test it from the Azure portal (see the sketch after these steps).
The default Python version is 2.7.8. If you want to use Python 3.x, you can follow this tutorial to update the Python version.
5. Update the Python version:
a. Install the extension for the Azure Function App.
b. Add a Handler Mappings entry so as to use Python 3.x via FastCGI.
6. Test it from the Azure portal.
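For step 4, the test code can be as small as a script that logs each tick. A minimal sketch (the message text is arbitrary):

    # Minimal body for the timer function: runs on every tick of the schedule.
    import datetime

    print("Timer trigger fired at {}".format(datetime.datetime.utcnow()))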
I followed the tutorial in the comment and reproduced your issue on my side, even though I refreshed the portal.
However, after waiting for some time, it worked. I suspect it's due to caching.
I suggest creating the Python Azure function on Kudu directly: just create run.py and function.json in a new folder instead of changing the JS template. The function.json might look like the sketch below.
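A minimal sketch of a timer binding in function.json (the five-minute CRON schedule and binding name are just examples); run.py can contain the same kind of script as in the sketch above:

    {
      "bindings": [
        {
          "name": "myTimer",
          "type": "timerTrigger",
          "direction": "in",
          "schedule": "0 */5 * * * *"
        }
      ],
      "disabled": false
    }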
Hope it helps you.
In my case, run.py was recognized and run after I restarted Azure Functions from the portal:
Azure Functions > Overview > Restart
[screenshot]
Yesterday this code was working fine on both the local and production servers:
    import logging
    import cloudstorage

    def filelist(self):
        # Handler method: list the objects in a GCS bucket and write them out.
        gs_bucket_name = "/bucketname"
        bucket_list = cloudstorage.listbucket(gs_bucket_name)
        logging.warning(bucket_list)
        self.write(bucket_list)
        for e in bucket_list:
            self.write(e)
            self.write("<br>")
Between yesterday and today I upgraded GAE Launcher and changed the billing options (I was using a free trial and now have a paid account). I'm not sure if that has anything to do with it, but just to give extra information.
But today the code stopped working locally (it works fine in production).
This is the beginning of the error log:
WARNING 2015-02-20 09:50:21,721 admin.py:106] <cloudstorage.cloudstorage_api._Bucket object at 0x10ac31e90>
ERROR 2015-02-20 09:50:21,729 api_server.py:221] Exception while handling service_name: "app_identity_service"
method: "GetAccessToken"
request: "\n7https://www.googleapis.com/auth/devstorage.full_control"
request_id: "WoMrXkOyfe"
The warning shows a bucket object, but as soon as I try to iterate over the list I get the exception from the identity service.
What is happening? It seems that I need to authorize the local dev server's GCS mock, but I'm not sure how.
Remember this is only happening in devserver, not in production.
Thanks for your help
This is a problem with the latest release (1.9.18). For now, until it gets fixed, you can downgrade to 1.9.17 by downloading the installer from here and just running it: https://storage.googleapis.com/appengine-sdks/featured/GoogleAppEngineLauncher-1.9.17.dmg
As per the answer below, 1.9.18 has been patched with a fix for this. If you still want to install the 1.9.17 version, please follow this link: https://storage.googleapis.com/appengine-sdks/deprecated/1917/GoogleAppEngineLauncher-1.9.17.dmg
It's a known bug in the dev_appserver, where the SDK does not cope with credentials persisted by earlier versions. For me (on Ubuntu 15.10 with SDK 1.9.33) it helped to simply remove a file:
rm ~/.config/gcloud/application_default_credentials.json
as suggested in the bug issue filed by Jari Wiklund.
Update: As of March 5th, 2015 this was fixed in the public release of 1.9.18, which may be a simpler way to get the fix.
Note: while the fix was in Python, the issue can surface in Java, PHP, and Go as well, because they use the Python local dev server code.
I believe the cause of the problem is a bug in the recently released local dev server (GoogleAppEngineLauncher).
I'm experiencing something similar in the PHP runtime: GloudStorage fails locally