I have a Python script that is split across different files (one for importing, one for calculations, et cetera). These are all in the same folder, and when I need a function from another file I do something like
import file_import
file_import.do_something_usefull()
where, of course, in file_import there is a function do_something_usefull() that, uhm, does something useful. How can I accomplish the same in Azure?
I found it out myself. It is documented on Microsoft's site here.
The steps, very short, are:
Include all the Python files you want in a .zip
Upload that zip as a dataset
Drag the dataset onto the third input (the script bundle) of the 'Execute Python Script' block (example below)
Execute said function by running import Hello (the name of the file, not the zip) and then Hello.do_something_usefull()
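For illustration, a minimal sketch of what the code inside the 'Execute Python Script' block might look like, assuming the zip attached to the script bundle input contains Hello.py (Azure ML makes the zip's contents importable for you):

# A minimal sketch of the 'Execute Python Script' entry point, assuming the
# zip attached to the script bundle input contains Hello.py
import Hello                       # the file inside the uploaded zip, not the zip itself

def azureml_main(dataframe1=None, dataframe2=None):
    Hello.do_something_usefull()   # call the function defined in Hello.py
    return dataframe1,             # the module expects a sequence of DataFrames back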
For reference, there is a similar answered thread you can refer to: see Access Azure blob storage from within an Azure ML experiment.
My Python application works well and uses swi_prolog's consult, asserts, and query functions along with a .pl file. However, when I call the code via the web, I get an access error at consult when it tries to open the .pl file.
So I thought of using Prolog without consulting the .pl file. I just want to embed the .pl file's content into Prolog so that I can use it in a similar way and continue with the query steps. Can anyone guide me in doing this?
Thanks in advance,
Ferda
The SWI-Prolog manual has a chapter on deploying applications. In particular, it allows you to create so-called saved states of your program. This mechanism allows you to create a stand-alone package from your application, either from inside the application or from the command line.
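If the goal is simply to avoid opening the .pl file at runtime, a different route (not the saved-state mechanism above, and assuming pyswip is the Python/SWI-Prolog bridge in use) is to embed the clauses as strings and assert them at startup. A minimal sketch:

# A minimal sketch, assuming pyswip is the Python/SWI-Prolog bridge in use.
# Instead of consulting a .pl file, the clauses (just illustrative facts here)
# are embedded as strings and asserted at startup.
from pyswip import Prolog

EMBEDDED_CLAUSES = [
    "parent(tom, bob)",
    "parent(bob, ann)",
]

prolog = Prolog()
for clause in EMBEDDED_CLAUSES:
    prolog.assertz(clause)

# Query as before, without ever opening a .pl file
for result in prolog.query("parent(X, Y)"):
    print(result["X"], result["Y"])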
I tried with SASpy but it's not working. I am able to open the SAS .egp file but not able to run the multiple scripts within it in sequence.
import subprocess

def OpenProject(sas_exe, egp_path):
    # Launch the application with the project file as an argument
    subprocess.call([sas_exe, egp_path])

sas_exe = "path\\path\\"           # placeholder path to the executable
egp_path = "path\\path\\path\\"    # placeholder path to the .egp project
OpenProject(sas_exe, egp_path)
This depends a bit on exactly what the workflow is. A few side notes, then the full solution.
First: EGP is not really intended to store production processes, in my opinion. EGP should really be used for development, then production is done with .sas (text) files. EGP can directly store the nodes as .sas files; ask a new question about that if you want to know more, but it's pretty easy to figure out. Best practice is to have EGP save the code modules as .sas files, then run those - SASPy will easily do that for you.
Second: If you use SAS's built-in Git connectivity, then you can do this a bit more easily I suspect. Consider doing that if you already use Git for your other processes. Again, then you end up with a .sas file, and can directly run that via SASPy.
So: how can you do this in Python, with the assumption that you do have to use the .egp itself, without too many different moving parts? The key here is the .egp format. EGP is a container file (actually a .zip container) that holds, among other things, all of the SAS code you want to run, as text. Text in XML format, but still text.
You can write a Python program that opens the .egp as a .zip file, using the zipfile library, and then use xml.etree.ElementTree to parse the project.xml file inside that project. Exactly what you do from there depends on your particular details, and is well out of scope for a Stack Overflow answer, but if you do better visually you can simply rename the .egp to .zip, open it in the unzip program of your choice, browse project.xml in your text editor, and find the nodes and the code related to those nodes.
You can then extract the .sas code as text, and submit it directly via SASPy, or extract it to a .sas file and then submit that however you prefer (SASPy or something else).
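To make that concrete, here is a minimal sketch. The file name is illustrative, and the element names inside project.xml vary by Enterprise Guide version, so the XML handling below is a guess you will need to adapt after inspecting your own project:

import zipfile
import xml.etree.ElementTree as ET

import saspy

# Open the .egp as the zip container it really is and pull out project.xml
with zipfile.ZipFile("myproject.egp") as egp:      # illustrative file name
    root = ET.fromstring(egp.read("project.xml"))

# Hypothetical: gather the text of the embedded code nodes. Inspect your own
# project.xml to find the actual element names that hold the SAS code.
code_chunks = [elem.text for elem in root.iter()
               if elem.tag.endswith("Text") and elem.text]

sas = saspy.SASsession()            # uses your saspy configuration
for chunk in code_chunks:
    result = sas.submit(chunk)      # submit each code node's text to SAS
    print(result["LOG"])            # inspect the SAS log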
I do something similar to this for a project - I don't actually run code from it, I'm just parsing it to verify that the correct programs were synced from the EGP to production - but it would be trivial to actually submit the code from what I've written, which is about 50 lines of code total. I may write an SGF paper this year or next year on this topic, in which case I'll try to remember to submit it here - or you can head over to my github page and see if it's there (in the future!).
This is a Python App Engine question, using the mapreduce library 1.9.21.
I have code writing lines to a blob in the local blobstore, then processing that using mapreduce BlobstoreLineInputReader.
Given that the Files API is going away, I thought I'd retarget all my processing to Cloud Storage.
I would expect to find a class called GoogleCloudStorageLineInputReader, but there isn't anything like that. Is it hiding somewhere?
Is there some way I can use GoogleCloudStorageInputReader to read lines?
Another possibility is using GoogleCloudStorageRecordInputReader, but for that my input file needs to be in LevelDB format and I don't know how to create that except with a GoogleCloudStorageConsistentRecordOutputWriter, which I don't know how to use outside a mapreduce context. How might I do that?
Or am I doing this all wrong, is there some other possibility I've missed?
At first, I attempted thinkjson's CloudStorageLineInputReader but had no success.
Then I found this pull request... which led me to rbruyere's fork. It has some linting issues (like the spelling of GoolgeCloudStorageLineInputReader), but at the bottom of the pull request it is mentioned that it works fine, and the author asks if the project needs to be taken over.
Hope that helps!
I am developing an Ansible module that generates a url, fetches (like get_url) the tarball at that url from my internal artifactory and then extracts it. I am wondering if there is a way to include or extend the get_url Ansible core module in my module. I can't have this in multiple steps because the url being used is generated from a git hash and requires a multi-step search.
If there isn't a way, I will probably just copy the whole get_url module and use it in my module, but I would like to avoid that.
I'd like to do something like:
module_json_response = module.get_module('get_url').issue_command('url=http://myartifactory.com/my_artifact.tar.gz dest=/path/to/local/my_artifact.tar.gz');
My understanding of Ansible is that it uploads the module in use and executes it; including another module isn't supported, or at least isn't documented.
Thanks in advance for any help.
To quote Michael DeHaan's post here:
Generally speaking, Ansible allows sharing code through
"lib/ansible/module_common.py" to make writing functionality easier.
It does not, however, make it possible for one module to call another,
which has not, to date, really been needed -- that's not entirely
true, we used to have something like this for file and copy until we
got smart and moved the file attribute code into common :)
It seems like since url access is frequent enough we could make a
common function in module common for url downloads -- IF we modify the
get_url code to also use it so we aren't repeating ourselves.
He later followed up with:
You can access the way template works by writing an action
plugin, but it's more involved than writing a simple client module.
+1 to moving get_url code into common, that's come up a few times.
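Following that suggestion, here is a minimal sketch of a custom module that reuses Ansible's shared URL code (module_utils.urls.fetch_url) rather than calling the get_url module itself. The way the artifact URL is built from the git hash is purely hypothetical:

#!/usr/bin/python
# A minimal sketch, not the get_url module itself: it reuses Ansible's shared
# URL helpers from module_utils. The artifactory URL scheme built from the
# git hash below is purely hypothetical.
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url


def main():
    module = AnsibleModule(
        argument_spec=dict(
            base_url=dict(required=True),
            git_hash=dict(required=True),
            dest=dict(required=True),
        )
    )
    # Hypothetical: build the artifact URL from the git hash
    url = "%s/%s/my_artifact.tar.gz" % (module.params["base_url"],
                                        module.params["git_hash"])

    response, info = fetch_url(module, url)
    if info["status"] != 200:
        module.fail_json(msg="Download failed: %s" % info.get("msg", ""))

    with open(module.params["dest"], "wb") as f:
        f.write(response.read())

    module.exit_json(changed=True, url=url, dest=module.params["dest"])


if __name__ == "__main__":
    main()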
I would like to provide my Python GAE website in the user's own language, using only the tools available directly in App Engine. For that, I would like to use GNU gettext files (.po and .mo files).
Has someone successfully combined Python Google App Engine and gettext files? If so, could you please provide the steps you used?
I had started a discussion in GAE's Google group, but haven't been able to extract from it how to do what I'd like: I don't want to add external dependencies, like Babel (suggested in the discussion). I want to use plain vanilla Google App Engine, so no manual update of Django or that kind of stuff.
At first, I will start using the language sent by the browser, so no need to manually force the language by using cookies etc. However, I might add a language changing feature later, once the basic internationalization works.
As a background note to give you more details about what I'm trying to do: I would like to internationalize Issue Tracker Tracker, an open source application I've hosted on Launchpad. I plan to use Launchpad's translation platform (which explains why I'd like to use .mo files). You can have a look at the source code in its Bazaar branch (sorry, no link due to Stack Overflow's spam prevention limit for new users...)
Thanks for helping me advance on this project!
As my needs were simple, I used a small hack instead of (unavailable) gettext: I created a file with the string translations, translate.py, approximately like this:
en = {}
ru = {}
en['default_site_title'] = u"Site title in English"
ru['default_site_title'] = u"Название сайта по-русски"
Then, in the main code, I defined a function which returns a dictionary with translations in the most suitable language from the list (the first language in the list that has a translation is used, falling back to English):
import translate

def get_messages(languages=[]):
    msgs = translate.en
    for lang in languages:
        if hasattr(translate, lang):
            msgs = getattr(translate, lang)
            break
    return msgs
Usage:
msgs = get_messages(["it","ru","en"])
hi = msgs['hello_message'] % 'yourname'
I also defined a helper function which extracts a list of languages from Accept-Language header.
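For completeness, a minimal sketch of such a helper (not the author's actual code), which orders the language codes by their q-values so they can be fed straight into get_messages():

# A minimal sketch of extracting language codes from an Accept-Language
# header, best match first; duplicates are dropped.
def parse_accept_language(header):
    weighted = []
    for part in header.split(','):
        pieces = part.strip().split(';')
        lang = pieces[0].split('-')[0].strip().lower()
        q = 1.0
        for piece in pieces[1:]:
            piece = piece.strip()
            if piece.startswith('q='):
                try:
                    q = float(piece[2:])
                except ValueError:
                    q = 0.0
        if lang:
            weighted.append((q, lang))
    ordered = []
    for q, lang in sorted(weighted, reverse=True):
        if lang not in ordered:
            ordered.append(lang)
    return ordered

# Example: parse_accept_language("ru,en-US;q=0.8,en;q=0.6") -> ['ru', 'en']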
It's not the most flexible solution, but it doesn't have any external dependencies and it works for me (in a toy project). I think translate.py could be generated automatically from gettext files.
In case you want to see more, my actual source is here.
You can use the Django internationalisation tool, as explained here.
They also say that there is no easy way to do this.
I hope that helps you :)