I need help with something. I need to convert a Markdown file to JSON format, but I don't know how to do this. I did a Google search for "markdown to json", but the tools I found didn't work for me. Has anyone experienced this before?
PS: I can use Node.js or Python for this, but the Node.js and Python modules I tried did not work.
Example Markdown:
```{python}
from __future__ import division
from deltasigma import *
```
### 5th-order modulator: NTF *with* zeros optimization
This time we enable the zeros optimization, setting `opt=1` when calling synthesizeNTF(), then replot the NTF as above.
* 0 -> not optimized,
* 1 -> optimized,
* 2 -> optimized with at least one zero at band-center,
* 3 -> optimized zeros (with optimizer)
* 4 -> same as 3, but with at least one zero at band-center
* [z] -> zero locations in complex form
I would like this or a similar JSON output:
{
  "code": "...",
  "header": "...",
  "content": "..."
}
In fact, as long as the code and the other content blocks are separated, there is no problem.
Failing that, I could even write my own converter in Node.js, but that could take a very long time.
Thanks in advance
There is a very straightforward approach in case you are working with Rasa.
from rasa_nlu.training_data import load_data

input_training_md_file = "nlu_data.md"
output_json_file = "nlu_data.json"

# convert the Rasa NLU markdown training data to its JSON representation
with open(output_json_file, "w") as f:
    f.write(load_data(input_training_md_file).as_json())
You mentioned you Googled this, so I'm sure you saw it, but for a working example of this in Python, check out: https://github.com/njvack/markdown-to-json
Beyond that, I recommend reading that project's README for some things you're going to run into. Ultimately, Markdown and JSON are not the same kind of format, so the conversion isn't reversible without guesswork. Notice how the existing packages each produce a different output: they invent a JSON key-value structure to make the result semi-understandable.
If existing tools don't do what you need, look at what they're doing and adapt it to your needs.
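If you do end up writing your own converter, the core of it is small. This is a deliberately naive sketch (the key names `code`, `header`, and `content` follow the structure asked for above; everything else is my own choice) that splits Markdown into fenced code blocks, headers, and remaining content:

```python
import json

FENCE = "`" * 3  # the fenced-code marker

def markdown_to_blocks(md_text):
    """Very naive splitter: separates fenced code, headers, and other content."""
    blocks = []
    in_code = False
    buf = []

    def flush(kind):
        # emit whatever has accumulated so far under the given key
        if buf:
            blocks.append({kind: "\n".join(buf)})
            buf.clear()

    for line in md_text.splitlines():
        if line.startswith(FENCE):
            flush("code" if in_code else "content")
            in_code = not in_code
        elif not in_code and line.startswith("#"):
            flush("content")
            blocks.append({"header": line.lstrip("# ")})
        else:
            buf.append(line)
    flush("code" if in_code else "content")
    return blocks

md = "\n".join([
    "### 5th-order modulator",
    "",
    "This time we enable the zeros optimization.",
    FENCE,
    "from deltasigma import *",
    FENCE,
])
print(json.dumps(markdown_to_blocks(md), indent=2))
```

This ignores many Markdown features (nested lists, indented code blocks, inline code), so treat it as a starting point rather than a complete parser.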
Related
Is there an easy way to extract a list of all variables with start attribute from a Modelica model? The ultimate goal is to run a simulation until it reaches steady-state, then run a python script that compares the values of start attribute against the steady-state value, so that I can identify start values that were chosen badly.
In the Dymola Python interface I could not find such functionality. Another approach could be to generate the modelDescription.xml and parse it; I assume the information is available somewhere in there, but for that approach I also need help getting started.
Similar to this answer, you can extract that info easily from the modelDescription.xml inside a FMU with FMPy.
Here is a small runnable example:
from fmpy import read_model_description
from fmpy.util import download_test_file
from pprint import pprint

fmu_filename = 'CoupledClutches.fmu'

# download an example FMU to inspect
download_test_file('2.0', 'CoSimulation', 'MapleSim', '2016.2', 'CoupledClutches', fmu_filename)

# collect the local variables that define a start value
model_description = read_model_description(fmu_filename)
start_vars = [v for v in model_description.modelVariables
              if v.start and v.causality == 'local']
pprint(start_vars)
The files dsin.txt and dsfinal.txt might help you with this. They have the same structure, containing the values at the start and at the end of the simulation; by renaming dsfinal.txt to dsin.txt you can start your simulation from the (e.g. steady-state) values you computed in a previous run.
It might be worth working with these two files if you already plan to use such values for running other simulations.
They also give you information about solver/simulation settings that you won't find in the .mat result files (if that is of any interest for your case).
However, if it is only a comparison between start and final values of variables that are present in the result files anyway, a better choice might be to use Python and a library to read the result.mat file (DyMat, ModelicaRes, etc.). It is then a matter of comparing the start and end values of the signals of interest.
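The comparison step itself is small, whichever reader you use. Here is a hedged sketch assuming you have already extracted the start values and the steady-state values into two dicts keyed by variable name; the variable names and the default tolerances below are made up and will need tuning for a real model:

```python
import math

def badly_chosen_starts(start_values, final_values, rel_tol=0.1, abs_tol=1e-2):
    """Return variables whose start value is far from the steady-state value."""
    suspects = {}
    for name, start in start_values.items():
        final = final_values.get(name)
        if final is None:
            continue  # variable missing from the result file
        # isclose handles both relative and absolute tolerance in one check
        if not math.isclose(start, final, rel_tol=rel_tol, abs_tol=abs_tol):
            suspects[name] = (start, final)
    return suspects

# hypothetical values extracted from a result file
start_values = {"clutch1.w": 0.0, "clutch2.w": 10.0}
final_values = {"clutch1.w": 0.001, "clutch2.w": 2.0}
print(badly_chosen_starts(start_values, final_values))
# → {'clutch2.w': (10.0, 2.0)}
```

The sensible tolerances depend on the physical units of each variable, so in practice you may want per-variable (or per-unit) tolerances rather than one global pair.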
After some trial and error, I came up with this Python code snippet to get that information from modelDescription.xml:
import xml.etree.ElementTree as ET

root = ET.parse('modelDescription.xml').getroot()

for ScalarVariable in root.findall('ModelVariables/ScalarVariable'):
    # find the child element (Real, Integer, ...) that carries a start attribute
    varStart = ScalarVariable.find('*[@start]')
    if varStart is not None:
        name = ScalarVariable.get('name')
        value = varStart.get('start')
        print(f"{name} = {value};")
To generate the modelDescription.xml file, run Dymola translation with the flag
Advanced.FMI.GenerateModelDescriptionInterface2 = true;
The Python standard library has several modules for processing XML:
https://docs.python.org/3/library/xml.html
This snippet uses ElementTree.
This is just a first step, not sure if I missed something basic.
I've always used ConfigParser. Now that I need nested sections, I've found ConfigObj, which seems to really fit my needs. The problem comes when I try to interpolate variables from other subsections. Is this possible? Otherwise nested sections stop making sense in my case.
I have been looking for interpolation syntax in ConfigObj and it looks like this has not been implemented... I just wanted to be sure, and to learn about other options to deal with this.
This is an example of what I would like to do:
[global]
[[dirs]]
software = /path-to-software-dir/
dbs = /path-to-dbs-dir/
[A]
[[softs]]
soft1 = {global.dirs.software}/soft1
soft2 = {global.dirs.software}/soft2
[[dbs]]
db1 = {global.dirs.dbs}/db1
db2 = {global.dirs.dbs}/db2
Any ideas?
We had a similar problem. We ended up computing the paths in the application. This has the added advantage that you can normalize the path using os.path.join() and friends.
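A minimal sketch of that approach, assuming the config file only stores the base directories and the application joins the rest (plain dicts stand in for the parsed ConfigObj object here):

```python
import os.path

# what [global] [[dirs]] would give you after parsing the config file
dirs = {"software": "/path-to-software-dir/", "dbs": "/path-to-dbs-dir/"}

# compute the full paths in the application instead of interpolating
# them inside the config file
soft1 = os.path.normpath(os.path.join(dirs["software"], "soft1"))
soft2 = os.path.normpath(os.path.join(dirs["software"], "soft2"))
db1 = os.path.normpath(os.path.join(dirs["dbs"], "db1"))

print(soft1)
print(db1)
```

This keeps the config file free of duplicated prefixes, and normpath cleans up doubled slashes from trailing separators in the base paths.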
Recently I found this awesome 2-factor authentication code generator written in Python 3. I was trying to convert it to Swift 3, but I am having trouble with one specific part, though:
def get_hotp_token(secret, intervals_no):
    key = base64.b32decode(secret)
    msg = struct.pack(">Q", intervals_no)  # 8-byte big-endian counter
    h = hmac.new(key, msg, hashlib.sha1).digest()
    o = h[19] & 15  # dynamic truncation: offset from the low nibble of the last byte
    h = (struct.unpack(">I", h[o:o+4])[0] & 0x7fffffff) % 1000000
    return h
So far I have only been able to translate the first line of the function body :p using code from here
func getHotpToken(secret: String) -> [Int] {
let data = secret.base32DecodedData
<...>
return theTokens
}
I tried reading the documentation on struct.pack here and reading about what packing actually is here, but I still find the concept/implementation confusing, and I have no idea what the equivalent would be in Swift.
According to the documentation, struct.pack returns the packed bytes in the given format. The format in my case is >Q, which means that the byte order is big-endian and the C type is an unsigned long long. Again, I am not exactly sure how this is supposed to look in Swift.
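To make the packing step less abstract: here is what struct.pack(">Q", ...) actually produces, namely eight raw bytes with the most significant byte first. This is plain Python, just to inspect the value the Swift code has to reproduce:

```python
import struct

# pack the counter 1 as an 8-byte big-endian unsigned long long
msg = struct.pack(">Q", 1)
print(msg)        # b'\x00\x00\x00\x00\x00\x00\x00\x01'
print(list(msg))  # [0, 0, 0, 0, 0, 0, 0, 1]

# a larger counter fills in from the right
print(list(struct.pack(">Q", 256)))  # [0, 0, 0, 0, 0, 0, 1, 0]
```

In Swift the equivalent is just producing those same 8 bytes, e.g. by taking the big-endian representation of a UInt64.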
... And that is only the second line! I don't really understand how HMAC works (I can't even find the actual 'background' code), so I can't even translate the entire function. I could not find any native library for Swift that has this behavior.
Any pointers or help translating this function will be highly appreciated!
P.S. I checked and I think that this is on topic
Relevant imports:
import base64, struct, hmac, hashlib
I just finished converting my code to Swift 3. This is a little different from the Python version, since this is more of a framework-type thing. It took a lot of experimentation to get get_hotp_token to work (for example, the Wikipedia page says it uses SHA256, but it actually uses SHA1).
You can find it here.
When you use this, be sure to add a bridging header with #import <CommonCrypto/CommonHMAC.h>
Enjoy!
My goal is to take a text file with a number list generated by R (e.g. 1 2 3 4) and "translate" the numbers into music21 notes (that is, to compose a melody where each note is identified by a number).
Having the number list, one idea I had was creating an R vector of strings that match music21 note names, and trying to get a new output with the note names instead of numbers. But I'm not very sure about that, and I don't know how to proceed after that.
I also read some topics about running R as a subprocess from Python, but again, I couldn't clearly understand how that works (the fact that running the subprocess almost crashes my poor old laptop had something to do with that...)
How can I proceed here?
Personally, I would try to use only Python. I realize you have little experience with it, but Python is more general-purpose than R and should be able to do anything R can do. Trying to use both at the same time seems like it would add complexity and overhead you simply don't need.
It looks like music21 takes notes and lengths; however, there are also rests. Let's say you have a list of durations called durations, and a list of notes (and rests) called notes:
from music21 import *

mymusic = stream.Stream()
notes = ["F4", "F4", "rest", "F4"]
durations = [0.25, 1, 0.25, 1]

for n, d in zip(notes, durations):
    if n == "rest":
        mymusic.append(note.Rest(quarterLength=d))
    else:
        mymusic.append(note.Note(n, quarterLength=d))

mymusic.show("midi")
Music21 uses a special kind of list called a stream. We make an empty stream first, and then populate it with notes and durations. zip lets us walk through both lists at the same time. We check whether the note is supposed to be a rest: if it is, we add a rest with the right duration; otherwise we add a note with the right duration. (Notice I am not a composer; you could generate the notes and durations any way you like :-).)
If you really wanted to, you could write a CSV file (or similar) of notes and durations in R and read it in Python. However, I think generating the lists in Python itself is a cleaner approach.
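To connect this back to the original number list: here is a small sketch of the mapping step, assuming the R output is a whitespace-separated text file and that the numbers 1-7 should map to the C major scale (both assumptions are mine; adjust the table to whatever encoding you actually use):

```python
# hypothetical mapping from numbers to music21 note names
NUMBER_TO_NOTE = {1: "C4", 2: "D4", 3: "E4", 4: "F4",
                  5: "G4", 6: "A4", 7: "B4", 0: "rest"}

def numbers_to_notes(text):
    """Turn '1 2 3 4' (as written by R) into music21-style note names."""
    return [NUMBER_TO_NOTE[int(tok)] for tok in text.split()]

# in practice you would read the R output file instead:
#   with open("numbers.txt") as f:
#       text = f.read()
print(numbers_to_notes("1 2 3 4 0 1"))  # ['C4', 'D4', 'E4', 'F4', 'rest', 'C4']
```

The resulting list plugs straight into the notes list of the stream-building loop above.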
Thanks for introducing me to this music21 library, it looks very neat.
Construct is a DSL implemented in Python used to describe data structures (binary and textual). Once you have described a data structure, Construct can parse and build it for you, which is good ("DRY", "Declarative", "Denotational-Semantics"...).
Usage example:
# code from construct.formats.graphics.png
itxt_info = Struct("itxt_info",
    CString("keyword"),
    UBInt8("compression_flag"),
    compression_method,
    CString("language_tag"),
    CString("translated_keyword"),
    OnDemand(
        Field("text",
            lambda ctx: ctx._.length - (len(ctx.keyword) +
                len(ctx.language_tag) + len(ctx.translated_keyword) + 5),
        ),
    ),
)
I am in need of such a tool for Haskell, and I wonder if something like this exists. I know of:
Data.Binary: the user implements parsing and building separately
Parsec: only for parsing? Only for text?
I guess one must use Template Haskell to achieve this?
I'd say it depends on what you want to do, and whether you need to comply with any existing format.
Data.Binary will (surprise!) help you with binary data, both reading and writing.
You can either write the code to read/write yourself, or let go of the details and generate the required code for your data structures using some additional tools like DrIFT or Derive. DrIFT works as a preprocessor, while Derive can work as a preprocessor and with TemplateHaskell.
Parsec will only help you with parsing text. No binary data (as easily), and no writing. Work is done with regular Strings. There are ByteString equivalents on hackage.
For your example above I'd use Data.Binary and write custom put/get functions myself.
Have a look at the parser category at hackage for more options.
Currently (afaik) there is no equivalent to Construct in Haskell.
One can be implemented using Template Haskell.
I don't know anything about Python or Construct, so this is probably not what you are searching for, but for simple data structures you can always just derive Read:
data Test a = I Int | S a deriving (Read,Show)
Now, for the expression
read "S 123" :: Test Double
GHCi will emit: S 123.0
For anything more complex, you can make an instance of Read using Parsec.