Generating expressions from permutations of variables and operators - python

So, I've decided that it's time to learn regular expressions. Thus, I set out to solve various problems, and after a bit of smooth sailing, I seem to have hit a wall and need help getting unstuck.
The task:
Given a list of characters and logical operators, find all possible combinations of these characters and operators that are not gibberish.
For example, given:
my_list = ['p', 'q', '&', '|']
the output would be:
answers = ['p', 'q', 'p&q', 'p|q'...]
However, strings like 'pq&' and 'p&|' are gibberish and therefore not allowed.
Naturally, the more elements are added to my_list, the more complicated the process becomes.
My current approach:
(I'd like to learn how to solve it with regex, but I'm also curious whether there's a better way... again, though, my focus is regex.)
step 1:
find all permutations of the elements such that each permutation is 3 <= x <= len(my_list) long.
step 2:
Loop over the list, and if a regex match is found, pull that element out and put it in the answers list.
(I'm not married to this 2-step approach, it is just what seemed most logical to me)
My current code, minus the regex:
import re
from itertools import permutations

my_list = ['p', 'q', '~r', 'r', '|', '&']
foo = []
answers = []
count = 3
while count < len(my_list) + 1:
    for i in permutations(my_list, count):
        foo.append(''.join(i))
    count += 1
for i in foo:
    if re.match(r'insert_regex', i):
        answers.append(i)
print(answers)
Now, I have tried a vast slew of different regexes to get this to work (too many to list them all here), but some of the main ones are:
A straightforward approach: find all the cases that have two letters side by side, or two operators side by side, and then, instead of appending to 'answers', just remove them from 'foo'. This is the regex I tried:
r'(\w\w)[&\|]{2,}'
and did not even come close.
I then decided to try and find the strings that I wanted, as opposed to the ones I did not want.
First I tested:
r'^[~\w]'
to make sure I could get the strings whose first character was a letter or a negation. This worked. I was happy.
I then tried:
r'^[~\w][&\|]'
to try and get the next logical operator; however, it only picked up strings whose first character was a letter, and ignored all of the strings whose first character was a negation.
I then tried a conditional so that if the first character was a negation, the next character would be a letter, otherwise it would be an operator:
r'^(?(~)\w|[&\|])'
but this threw me "error: bad character in group name".
I then tried to resolve this error by:
r'^(?:(~)\w|[&\|])'
But that returned only strings that started with '~' or an operator.
I then tried a slew of other things related to conditionals and groupings (2 days' worth, actually), but I can't seem to find a solution. Part of the problem is that I don't know enough about regex to know where to look for the solution, so I have been wandering around the internet aimlessly.
I have run through a lot of tutorials and explanation pages, but they are all rather opaque and don't piece things together in a way that is conducive to understanding... they just sort of throw out code for you to copy and paste or mimic.
Any insights you have would be much appreciated, and as much as I would love an answer to the problem, if possible, an ELI5 explanation of what the solution does would be excellent for my own progress.

In a bitter twist of irony, it turns out that I had the solution written down (I documented all the regexes I tried), but it originally failed because I was removing strings from the list I was iterating over instead of from a copy.
If anyone is looking for a solution to the problem, the following code worked on all of my test cases (can't promise beyond that, however).
import re
from itertools import permutations
import copy

a = ['p', 'q', 'r', '~r', '|', '&']
foo = []
count = 3
while count < len(a) + 1:
    for j in permutations(a, count):
        foo.append(''.join(j))
    count += 1
foo_copy = copy.copy(foo)
for i in foo:
    if re.search(r'(^[&\|])|(\w\w)|(\w~)|([&\|][&\|])|([&\|]$)', i):
        foo_copy.remove(i)
print(foo_copy)

You have a list of variables (characters), binary operators, and/or variables prefixed with a unary operator (like ~). The last case can be treated just like a plain variable.
As binary operators need a variable at either side, we can conclude that a valid expression is an alternation of variables and operators, starting and ending with a variable.
So, you could first divide the input list into two lists based on whether an item is a variable or an operator. Then you could increase the size of the output you will generate, and for each size, get the permutations of both lists and zip these in order to build a valid expression each time. This way you don't need a regular expression to verify the validity.
Here is the suggested function:
from itertools import permutations, zip_longest, chain

def expressions(my_list):
    answers = []
    variables = [x for x in my_list if x[-1].isalpha()]
    operators = [x for x in my_list if not x[-1].isalpha()]
    max_var_count = min(len(operators) + 1, len(variables))
    for var_count in range(1, max_var_count + 1):
        for vars in permutations(variables, var_count):
            for ops in permutations(operators, var_count - 1):
                answers.append(''.join(list(chain.from_iterable(zip_longest(vars, ops)))[:-1]))
    return answers

print(expressions(['p', 'q', '~r', 'r', '|', '&']))
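To see why the zip_longest/chain combination builds a valid expression, here is the trick in isolation (a minimal sketch using the same vars/ops shapes the function produces):

from itertools import zip_longest, chain

vars, ops = ('p', 'q'), ('&',)
pairs = zip_longest(vars, ops)           # ('p', '&'), ('q', None)
flat = list(chain.from_iterable(pairs))  # ['p', '&', 'q', None]
print(''.join(flat[:-1]))                # 'p&q' -- the trailing None is dropped

Since a valid expression always has one more variable than operators, zip_longest pads the final pair with None, and the [:-1] slice removes that padding before joining.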

Related

Approaches for finding matches in a large dataset

I have a project where, given a list of ~10,000 unique strings, I want to find where those strings occur in a file with 10,000,000+ string entries. I also want to include partial matches if possible. My list of ~10,000 strings is dynamic data and updates every 30 minutes, and currently I'm not able to process all of the searching to keep up with the updated data. My searches take about 3 hours now (compared to the 30 minutes I have to do the search within), so I feel my approach to this problem isn't quite right.
My current approach is to first create a list from the 10,000,000+ string entries. Then each item from the dynamic list is searched for in the larger list using an in-search.
results_boolean = [keyword in n for n in string_data]
Is there a way I can greatly speed this up with a more appropriate approach?
Using a generator with a set is probably your best bet... I think this solution will work, and it should be faster:
def find_matches(target_words, filename_to_search):
    targets = set(target_words)
    with open(filename_to_search) as f:
        for line_no, line in enumerate(f):
            matching_intersection = targets.intersection(line.split())
            if matching_intersection:
                yield (line_no, line, matching_intersection)  # there was a match

for match in find_matches(["unique", "list", "of", "strings"], "search_me.txt"):
    print("Match: %s" % (match,))
    input("Hit Enter for next match:")  # py3 ... just to see your matches
Of course it gets harder if your matches are not single words, especially if there is no reliable grouping delimiter; a rough fallback is sketched below.
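As a minimal sketch (assuming the targets can be arbitrary substrings, including phrases containing spaces), you could fall back to a plain substring scan per line, which is slower but does not depend on split() finding word boundaries:

def find_substring_matches(target_phrases, filename_to_search):
    with open(filename_to_search) as f:
        for line_no, line in enumerate(f):
            # check every target against the raw line, not its split() words
            hits = [t for t in target_phrases if t in line]
            if hits:
                yield (line_no, line, hits)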
In general, you would want to preprocess the large, unchanging data in some way to speed repeated searches. But you've said too little to suggest something clearly practical. Like: how long are these strings? What's the alphabet (e.g., 7-bit ASCII or full-blown Unicode)? How many characters are there in total? Are characters in the alphabet equally likely to appear in each string position, or is the distribution highly skewed? If so, how? And so on.
Here's about the simplest kind of indexing: building a dict with a number of entries equal to the number of unique characters across all of string_data. It maps each character to the set of string_data indices of strings containing that character. Then a search for a keyword can be restricted to only the string_data entries known in advance to contain the keyword's first character.
Now, depending on details that can't be guessed from what you said, it's possible even this modest indexing will consume more RAM than you have - or it's possible that it's already more than good enough to get you the 6x speedup you seem to need:
# Preprocessing - do this just once, when string_data changes.
def build_map(string_data):
    from collections import defaultdict
    ch2ixs = defaultdict(set)
    for i, s in enumerate(string_data):
        for ch in s:
            ch2ixs[ch].add(i)
    return ch2ixs

def find_partial_matches(keywords, string_data, ch2ixs):
    for keyword in keywords:
        ch = keyword[0]
        if ch in ch2ixs:
            result = []
            for i in ch2ixs[ch]:
                if keyword in string_data[i]:
                    result.append(i)
            if result:
                print(repr(keyword), "found in strings", result)
Then, e.g.,
string_data = ['banana', 'bandana', 'bandito']
ch2ixs = build_map(string_data)
find_partial_matches(['ban', 'i', 'dana', 'xyz', 'na'],
                     string_data,
                     ch2ixs)
displays:
'ban' found in strings [0, 1, 2]
'i' found in strings [2]
'dana' found in strings [1]
'na' found in strings [0, 1]
If, e.g., you still have plenty of RAM, but need more speed, and are willing to give up on (probably silly - but can't guess from here) 1-character matches, you could index bigrams (adjacent letter pairs) instead.
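For instance, a hedged sketch of that bigram variant (same shape as build_map above, just keyed on two-character slices; keywords shorter than 2 characters are simply skipped):

from collections import defaultdict

def build_bigram_map(string_data):
    bg2ixs = defaultdict(set)
    for i, s in enumerate(string_data):
        for j in range(len(s) - 1):
            bg2ixs[s[j:j+2]].add(i)  # map each adjacent character pair to the string's index
    return bg2ixs

def find_partial_matches_bigram(keywords, string_data, bg2ixs):
    for keyword in keywords:
        if len(keyword) < 2:
            continue  # the bigram index can't help with 1-character keywords
        candidates = bg2ixs.get(keyword[:2], set())
        result = [i for i in candidates if keyword in string_data[i]]
        if result:
            print(repr(keyword), "found in strings", result)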
In the limit, you could build a trie out of string_data, which would require lots of RAM, but could reduce the time to search for an embedded keyword to a number of operations proportional to the number of characters in the keyword, independent of how many strings are in string_data.
Note that you should really find a way to get rid of this:
results_boolean = [keyword in n for n in string_data]
Building a list with over 10 million entries for every keyword search makes every search expensive, no matter how cleverly you index the data.
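For example, if all you need is whether a keyword occurs at all, a generator expression with any() stops at the first hit instead of materializing a 10-million-element boolean list:

found = any(keyword in n for n in string_data)  # short-circuits on the first match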
Note: a probably practical refinement of the above is to restrict the search to strings that contain all of the keyword's characters:
def find_partial_matches(keywords, string_data, ch2ixs):
    for keyword in keywords:
        keyset = set(keyword)
        if all(ch in ch2ixs for ch in keyset):
            ixs = set.intersection(*(ch2ixs[ch] for ch in keyset))
            result = []
            for i in ixs:
                if keyword in string_data[i]:
                    result.append(i)
            if result:
                print(repr(keyword), "found in strings", result)
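For example, reusing the earlier data, 'dana' now only has to be checked against the strings containing all of 'd', 'a', and 'n':

string_data = ['banana', 'bandana', 'bandito']
ch2ixs = build_map(string_data)
find_partial_matches(['dana', 'xyz'], string_data, ch2ixs)
# 'dana' found in strings [1]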

Python functions: return method inside a 'for' loop

I have the following code:
def encrypt(plaintext, k):
return "".join([alphabet[(alphabet.index(i)+k)] for i in plaintext.lower()])
I don't understand how python can read this kind of syntax, can someone break down what's the order of executions here?
I came across this kind of "one-line" writing style in python a lot, which always seemed to be so elegant and efficient but I never understood the logic.
Thanks in advance, have a wonderful day.
In Python we call this a list comprehension. Other Stack Overflow posts have covered this topic extensively, such as: What does "list comprehension" mean? How does it work and how can I use it? and Explanation of how nested list comprehension works?.
In your example the code is not complete, so it is hard to figure out what "alphabet" or "plaintext" are. However, let's try to break down what it does at a high level.
"".join([alphabet[(alphabet.index(i)+k)] for i in plaintext.lower()])
Can be broken down as:
"".join( # The join method will stitch all the elements from the container (list) together
[
alphabet[alphabet.index(i) + k] # alphabet seems to be a list, that we index increasingly by k
for i in plaintext.lower()
# we loop through each element in plaintext.lower() (notice the i is used in the alphabet[alphabet.index(i) + k])
]
)
Note that we can rewrite the comprehension as a for-loop. I have created a similar example that I hope clarifies things:
alphabet = ['a', 'b', 'c']
some_list = []
for i in "ABC".lower():
    some_list.append(alphabet[alphabet.index(i)])  # no shift here (k = 0), as a dummy example
bringing_alphabet_back = "".join(some_list)
print(bringing_alphabet_back)  # abc
And lastly, the return just returns the result; it is similar to returning the final value of bringing_alphabet_back.

Longest alphabetical substring - where to begin

I am working on the "longest alphabetical substring" problem from the popular MIT course. I have read a lot of the information on SO about how to code it but I'm really struggling to make the leap conceptually. The finger exercises preceding it were not too hard. I was wondering if anyone knows of any material out there that would really break down the problem solving being employed in this problem. I've tried getting out a pen and paper and I just get lost. I see people employing "counters" of sorts, or strings that contain the "longest substring so far" and when I'm looking at someone else's solution I can understand what they've done with their code, but if I'm trying to synthesize something of my own it's just not clicking.
I even took a break from the course and tried learning via some other books, but I keep coming back to this problem and feel like I need to break through it. I guess what I'm struggling with is making the leap from knowing some Python syntax and tools to actually employing those tools organically for problem solving or "computing".
Before anyone points me towards it, I'm aware of the course materials that are aimed at helping out. I've seen some videos that one of the TAs made that are somewhat helpful but he doesn't really break this down. I feel like I need to pair program it with someone or like... sit in front of a whiteboard and have someone walk me step by step and answer every stupid question I would have.
For reference, the problem is as follows:
Assume s is a string of lower case characters.
Write a program that prints the longest substring of s in which the letters occur in alphabetical order. For example, if s = 'azcbobobegghakl', then your program should print
Longest substring in alphabetical order is: beggh
In the case of ties, print the first substring. For example, if s = 'abcbcd', then your program should print
Longest substring in alphabetical order is: abc
I know that it's helpful to post code but I don't have anything that isn't elsewhere on SO because, well, that's what I've been playing with in my IDE to see if I can understand what's going on. Again, not looking for code snippets - more some reading or resources that will expand upon the logic being employed in this problem. I'll post what I do have but it's not complete and it's as far as I get before I start feeling confused.
s = 'azcbobobegghakl'
current = s[0]
longest = s[0]
for letter in range(0, len(s) - 1):
    if s[letter + 1] >= s[letter]:
        current.append(s[letter + 1])
        if len(current) > len(longest):
            longest = current
    else:
        current =
Sorry for formatting errors, still new to this. I'm really frustrated with this problem.
You're almost there in your example; it just needs a little tweaking:
s = 'azcbobobegghakl'
longest = [s[0]]  # make them lists so we can manipulate them (unlike strings)
current = [s[0]]
for letter in range(0, len(s) - 1):
    if s[letter + 1] >= s[letter]:
        current.append(s[letter + 1])
        if len(current) > len(longest):
            longest = current
    else:
        current = [s[letter + 1]]  # reset current if we break an alphabetical chain
longest_string = ''.join(longest)  # turn our list back into a string
output of longest_string:
'beggh'
If you are struggling with the concepts and logic behind solving this problem, I would recommend perhaps stepping back a little and going through easier coding tutorials and interactive exercises. You might also enjoy experimenting with JavaScript, where it might be easier to get creative right from the outset, building out snippets and/or webpages that one can immediately interact with in the browser. Then when you get more fun coding vocabulary under your belt, the algorithmic part of it will seem more familiar and natural. I also think letting your own creativity and imagination guide you can be a very powerful learning process.
Let's forget about the alphabetical part for the moment. Imagine we have a bag of letters that we pull out one at a time without knowing which is next and we have to record the longest run of Rs in a row. How would you do it? Let's try to describe the process in words, then pseudocode.
We'll keep a container for the longest run we've seen so far and another to check the current run. We pull letters until we hit two Rs in a row, which we put in the "current" container. The next letter is not an R, which means our run ended. The "longest-so-far" container is empty, so we pour the "current" container into it and continue. The next four letters are not Rs, so we just ignore them. Then we get one R, which we put in "current", and then an H. Our run ended again, but this time our one R in "current" was less than the two we already have in "longest-so-far", so we keep those and empty "current".
We get an A, a B, and a C, and then a run of five Rs, which we put into the "current" container one by one. Our bag now contains the last letter, a T. We see that our run ended again and that "current" container has more than the "longest-so-far" container so we pour out "longest" and replace its contents with the five Rs in "current." That's it, we found the longest run of Rs in the bag. (If we had more runs, each time one ended we'd choose whether to replace the contents of "longest-so-far.")
In pseudocode:
// Initialise
current <- nothing
longest <- nothing
for letter in bag:
if letter == 'R':
add 'R' to current
else:
if there are more letters
in current than longest:
empty longest and pour
in letters from current
otherwise:
empty current to get ready
for the next possible run
Now the alphabetical stipulation just slightly complicates our condition. We need to keep track of the most recent letter placed in "current", and for a run to continue it is no longer seeing the same letter again that counts: rather, the next letter has to be "greater" (lexicographically) than the last one we placed in current; otherwise the run ends and we perform our length check against "longest-so-far". A sketch of this in Python follows.
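Here is a minimal sketch translating the pseudocode above into Python with the alphabetical twist added (one possible rendering, not the course's official solution):

def longest_alpha_run(s):
    current = s[0]  # the run we're currently building
    longest = s[0]  # the best run seen so far
    for ch in s[1:]:
        if ch >= current[-1]:  # run continues
            current += ch
        else:                  # run ended: compare, then reset
            if len(current) > len(longest):
                longest = current
            current = ch
    if len(current) > len(longest):  # don't forget the final run
        longest = current
    return longest

print(longest_alpha_run('azcbobobegghakl'))  # beggh

Note that the strict > comparison is what preserves the tie rule: a later run of equal length never replaces the first one.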
Generally, it is easier to create a listing of all possibilities from the input, and then filter the results based on the additional logic needed. For instance, when finding longest substrings, all substrings of the input can be found, and then only elements that are valid sequences are retained:
def is_valid(d):
    return all(d[i] <= d[i+1] for i in range(len(d) - 1))

def longest_substring(s):
    substrings = list(filter(is_valid, [s[i:b] for i in range(len(s)) for b in range(i + 1, len(s) + 1)]))
    max_length = len(max(substrings, key=len))  # the length of the longest valid substring, used if a tie is discovered
    return [i for i in substrings if len(i) == max_length][0]

l = [['abcbcd', 'abc'], ['azcbobobegghakl', 'beggh']]
for a, b in l:
    assert longest_substring(a) == b
print('all tests passed')
Output:
all tests passed
One way of dealing with implementation complexity, for me, is to write some unit tests: at some point, if I can't figure out from "reading the code" what is wrong, and/or what parts are missing, I like to write unit tests, which is an "orthogonal" approach to the problem (instead of thinking "how can I solve this?" I ask myself "what tests should I write to verify it works OK?").
Then, by running the tests I can observe how the implementation behaves, and try to fix problems one by one, i.e. concentrate on making the next unit test pass.
It's also a way of cutting a big problem into smaller problems that are easier to reason about. A minimal example is sketched below.
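For instance, a minimal sketch of such tests (assuming a longest_substring(s) function like the one in the previous answer):

import unittest

class TestLongestSubstring(unittest.TestCase):
    def test_example(self):
        self.assertEqual(longest_substring('azcbobobegghakl'), 'beggh')

    def test_tie_returns_first(self):
        self.assertEqual(longest_substring('abcbcd'), 'abc')

    def test_single_character(self):
        self.assertEqual(longest_substring('a'), 'a')

if __name__ == '__main__':
    unittest.main()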
s = 'azcbobobeggh'
ls = []  # create a new empty list
for i in range(len(s) - 1):  # iterate s from index 0 to index -2
    if s[i] <= s[i+1]:  # compare adjacent letters
        ls.append(s[i])  # still in order: append the letter to the new list
    else:
        ls.append(s[i])
        ls.append('mark')  # place a 'mark' to separate the chunks that are in order
ls.append(s[-1])  # add back the last character, which the loop misses
# at this point ls is:
# ['a', 'z', 'mark', 'c', 'mark', 'b', 'o', 'mark', 'b', 'o', 'mark', 'b', 'e', 'g', 'g', 'h']
ls = str(''.join(ls))  # 'azmarkcmarkbomarkbomarkbeggh'
ls = ls.split('mark')  # ['az', 'c', 'bo', 'bo', 'beggh']
res = max(ls, key=len)  # now just find the longest string in the list
print('Longest substring in alphabetical order is: ' + res)
# Longest substring in alphabetical order is: beggh

Find all the possible N-length anagrams - fast alternatives

I am given a sequence of letters and have to produce all the N-length anagrams of the sequence given, where N is the length of the sequence.
I am following a kinda naive approach in python, where I am taking all the permutations in order to achieve that. I have found some similar threads like this one but I would prefer a math-oriented approach in Python. So what would be a more performant alternative to permutations? Is there anything particularly wrong in my attempt below?
from itertools import permutations

def find_all_anagrams(word):
    pp = permutations(word)
    perm_set = set()
    for i in pp:
        perm_set.add(i)
    ll = [list(i) for i in perm_set]
    ll.sort()
    print(ll)
If there are lots of repeated letters, the key will be to produce each anagram only once instead of producing all possible permutations and eliminating duplicates.
Here's one possible algorithm which only produces each anagram once:
from collections import Counter

def perm(unplaced, prefix):
    if unplaced:
        for element in unplaced:
            yield from perm(unplaced - Counter(element), prefix + element)
    else:
        yield prefix

def permutations(iterable):
    yield from perm(Counter(iterable), "")
That's actually not much different from the classic recursion to produce all permutations; the only difference is that it uses a collections.Counter (a multiset) to hold the as-yet-unplaced elements instead of just using a list.
The number of Counter objects produced in the course of the iteration is certainly excessive, and there is almost certainly a faster way of writing it; I chose this version for its simplicity and (hopefully) its clarity.
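For example, a quick check that repeated letters no longer produce duplicates:

print(list(permutations("aab")))  # ['aab', 'aba', 'baa'] -- 3 results, not 3! = 6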
This is very slow for long words with many repeated characters - slow compared to the theoretical maximum performance, that is. For example, permutations("mississippi") will produce a much longer list than necessary: it will have a length of 39916800, but the set has a size of only 34650.
>>> len(list(permutations("mississippi")))
39916800
>>> len(set(permutations("mississippi")))
34650
So the big flaw with your method is that you generate ALL anagrams and then remove the duplicates. Use a method that only generates the unique anagrams.
EDIT:
Here is some working, but extremely ugly and possibly buggy code. I'm making it nicer as you're reading this. It does give 34650 for mississippi, so I assume there aren't any major bugs. Warning again. UGLY!
# Returns a dictionary with letter counts
# get_letter_list("mississippi") returns
# {'i': 4, 'm': 1, 'p': 2, 's': 4}
def get_letter_list(word):
    w = sorted(word)
    dd = {}
    dd[w[0]] = 1
    for l in range(1, len(w)):
        if w[l] == w[l-1]:
            dd[w[l]] = dd[w[l]] + 1
        else:
            dd[w[l]] = 1
    return dd
def sum_dict(d):
    s = 0
    for x in d:
        s = s + d[x]
    return s
# Recursively create the anagrams. It takes a letter list
# from the above function as an argument.
def create_anagrams(dd):
    if sum_dict(dd) == 1:  # if there's only one letter left
        for l in dd:
            return l  # ugly hack, because I'm not used to dicts
    a = []
    for l in dd:
        if dd[l] != 0:
            newdd = dict(dd)
            newdd[l] = newdd[l] - 1
            if newdd[l] == 0:
                newdd.pop(l)
            newl = create_anagrams(newdd)
            for x in newl:
                a.append(str(l) + str(x))
    return a

>>> print(len(create_anagrams(get_letter_list("mississippi"))))
34650
It works like this: for every unique letter l, create all unique permutations with one less occurrence of the letter l, and then prepend l to each of these permutations.
For "mississippi", this is way faster than set(permutations(word)) and it's far from optimally written. For instance, dictionaries are quite slow and there's probably lots of things to improve in this code, but it shows that the algorithm itself is much faster than your approach.
Maybe I am missing something, but why don't you just do this:
from itertools import permutations

def find_all_anagrams(word):
    return sorted(set(permutations(word)))
You could simplify to:
from itertools import permutations

def find_all_anagrams(word):
    word = set(''.join(sorted(word)))
    return list(permutations(word))
In the docs for permutations the code is detailed, and it seems already optimized.
I don't know Python, but I want to try to help you: there are probably other, more performant algorithms, but I've thought about this one: it's completely recursive and it should cover all the cases of a permutation. I want to start with a basic example:
permutation of ABC
Now, this algorithm works in this way: Length times, you rotate the letters by one position, so that the letter falling off one end wraps around to the other (you could easily do this with a queue).
Back to the example, we will have:
ABC
BCA
CAB
Now you repeat the first (and only) step with the substring built from the second letter to the last one.
Unfortunately, with this algorithm you cannot handle permutations with repeated letters.
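Translating that description into Python (a hedged sketch; the function name and the use of collections.deque as the queue are my own choices, and, as noted above, it assumes all letters are distinct):

from collections import deque

def rotation_permutations(word):
    if len(word) <= 1:
        return [word]
    results = []
    q = deque(word)
    for _ in range(len(word)):
        head, rest = q[0], ''.join(list(q)[1:])
        for sub in rotation_permutations(rest):  # recurse on everything after the first letter
            results.append(head + sub)
        q.rotate(-1)  # the first letter wraps around to the back
    return results

print(rotation_permutations("ABC"))
# ['ABC', 'ACB', 'BCA', 'BAC', 'CAB', 'CBA']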

python complementing a complex regular expression

Trying to learn regular expressions, and despite some great posts on here and links to a regex site, I have a case that I kept hacking at out of sheer stubbornness but that defied producing the match I was looking for. To understand it, consider the following code, which lets us pass in a list of strings and a pattern and find out whether the pattern matches all items in the list or none of them:
import re

def matchNone(pattern, lst):
    return not any([re.search(pattern, i) for i in lst])

def matchAll(pattern, lst):
    return all([re.search(pattern, i) for i in lst])
To help w/ debugging, this simple code allows us to just add _test to a function call and see what is being passed to the any() or all() functions which ultimately return the result:
def matchAll_test(pattern, lst):
    return [re.search(pattern, i) for i in lst]

def matchNone_test(pattern, lst):
    return [re.search(pattern, i) for i in lst]
This pattern and list produces True from matchAll():
wordPattern = "^[cfdrp]an$"
matchAll(wordPattern, ['can', 'fan', 'dan', 'ran', 'pan']) # True
This pattern on the surface appears to work with matchNone() in our effort to reverse the pattern:
wordPattern = "^[^cfdrp]an|[cfdrp](^an)$"
matchNone(wordPattern, ['can', 'fan', 'dan', 'ran', 'pan']) # True
It returns True as we hoped it would. But a true reversal of this pattern would return False for a list of values where none of them are equivalent to our original list ['can', 'fan', 'dan', 'ran', 'pan'] regardless of what else we pass into it. (i.e. "match anything except these 5 words")
In testing to see what changes to the words in this list will get us a False, we quickly discover the pattern is not as successful as it first appears. If it were, it would fail for matchNone() on anything not in the aforementioned list.
These permutations helped uncover the shortcomings of my pattern tests:
["something unrelated", "p", "xan", "dax", "ccan", "dann", "ra"]
In my exploration, I tried other permutations as well: taking the original list, using the _test versions of the functions, changing one letter at a time in the original words, and/or modifying or adding one term from permutations like the ones above.
If anyone can find the true inverse of my original pattern, I would love to see it so I can learn from it.
To help with your investigation:
This pattern also works with matchAll() for all words, but I could not seem to create its inverse either: "^(can|fan|dan|ran|pan)$"
Thanks for any time you expend on this. I'm hoping to find a regEx guru on here who spots the mistake and can propose the right solution.
I hope I understood your question. This is the solution that I found:
^(?:[^cfdrp].*|[cfdrp][^a].*|[cfdrp]a[^n].*|.{4,}|.{0,2})$
[^cfdrp].*: matches if the text does not start with c, f, d, r, or p
[cfdrp][^a].*: the text starts with c, f, d, r, or p; match if the second character is not an a
[cfdrp]a[^n].*: the text starts with [cfdrp]a; match if the third character is not an n
.{4,}: matches anything with more than 3 characters
.{0,2}: matches anything with 0, 1, or 2 characters
It is equal to:
^(?:[^cfdrp].*|.[^a].*|..[^n].*|.{4,}|.{0,2})$
What you are looking to do is find the complement. To do this for any regular expression is a difficult problem. There is no built-in for complementing a regex.
There is an open challenge on PPCG to do this. One comment explains the difficulty:
It's possible, but crazy tedious. You need to parse the regexp into an NFA (eg Thompson's algorithm), convert the NFA to a DFA (powerset construction), complete the DFA, find the complement, then convert the DFA to a RE (eg Brzozowski's method). ie slightly harder than writing a complete RE engine!
There are Python libraries out there that will convert from a regular expression (the original specification refers to a "regular language", which only has literals, "or", and "star" -- much simpler than the type of regex you're thinking of [more info here]) to an NFA, to a DFA, complement it, and convert it back. It's pretty complicated.
Here's a related SO question: Finding the complement of a DFA?
In summary, it's far simpler to find the result of the original regular expression instead, then use Boolean negation.
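That is, keep the original pattern and negate the match result in Python (reusing the matchNone helper from the question):

import re

wordPattern = r'^(can|fan|dan|ran|pan)$'

def matchNone(pattern, lst):
    return not any(re.search(pattern, i) for i in lst)

print(matchNone(wordPattern, ['xan', 'something unrelated']))  # True: nothing matches
print(matchNone(wordPattern, ['can', 'xan']))                  # False: 'can' matches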
