Why does IDLE 3.4 take so long on this program? - python

EDIT: I'm redoing the question entirely. The issue has nothing to do with time.time()
Here's a program:
import time
start=time.time()
a=9<<(1<<26) # The line that makes it take a while
print(time.time()-start)
This program, when saved as a file and run with IDLE in Python 3.4, takes about 10 seconds, even though the elapsed time it prints is 0.0. The issue is very clearly with IDLE, because when run from the command line this program takes almost no time at all.
Another program that has the same effect, as found by senshin, is:
def f():
    a = 9<<(1<<26)
I have confirmed that this same program, when run in Python 2.7 IDLE or from the command line on python 2.7 or 3.4, is near instantaneous.
So what is Python 3.4 IDLE doing that makes it take so long? I understand that calculating this number and holding it in memory is expensive, but what I'd like to know is why Python 3.4 IDLE performs this computation and the associated memory write when Python 2.7 IDLE and command-line Python apparently do not.

I would look at that line and pick it apart. You have:
9 << (1 << 26)
(1 << 26) is the first expression evaluated: shifting 1 left by 26 bits multiplies it by 2 to the power of 26, producing the number 2 ** 26 (67,108,864). That part is not the problem. You then shift 9 left by 2 ** 26 places, which produces a number roughly 20 million decimal digits long (around 8 MB of memory), because the left shift is simply huge. Be careful in the future: shifts by what seem like small amounts grow very fast. If it were much larger, your program might not have run at all. Mathematically, your expression evaluates to 9 * 2 ** (2 ** 26), if you were curious.
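You can estimate how big the result is without actually building it. A quick back-of-the-envelope check (my own sketch, not part of the original answer):
import math

# Size of 9 << (1 << 26) without materializing it:
# shifting left by k adds k bits, and 9 itself is 4 bits wide.
bits = (1 << 26) + 4                 # total bit length: 67,108,868 bits
digits = bits * math.log10(2)        # ~20.2 million decimal digits
mib = bits / 8 / 2 ** 20             # ~8 MiB of raw magnitude data
print(bits, int(digits), round(mib, 1))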
The confusion in the comment section probably comes down to how this huge chunk of memory is handled by Python under the hood, not by IDLE.
EDIT 1:
I think what is happening is that a mathematical expression is evaluated to its answer even when placed inside a function that hasn't been called yet, but only if the expression is self-sufficient (built entirely from constants). If a variable appears in the expression, it is left untouched in the bytecode and not evaluated until the function actually runs. The function still has to be compiled, and in that process I think your value is actually computed, resulting in the slower times. I am not sure about this, but I strongly suspect it is the root cause. Even if it isn't, you have to admit that 9<<(1<<26) kicks the computer in the behind; there's not much optimization to be done there.
In[73]: def create_number():
            return 9<<(1<<26)
In[74]: # Note that this seems instantaneous, but try calling the function!
In[75]: %timeit create_number()
# Python environment crashes because the task is too hard
There is a slight deception in this kind of testing however. When trying this with the regular timeit, I got:
In[3]: from timeit import timeit
In[4]: timeit(setup = 'from __main__ import create_number', stmt = 'create_number()', number = 1)
Out[4]: 0.004942887388800443
Also keep in mind that printing the value is not do-able, so something like:
In[102]: 9<<(1<<26)
should not even be attempted.
For even more added support:
I felt like a rebel, so I decided to see what would happen if I timed the raw evaluation of the expression:
In[107]: %timeit 9<<(1<<26)
10000000 loops, best of 3: 22.8 ns per loop
In[108]: def empty(): pass
In[109]: %timeit empty()
10000000 loops, best of 3: 96.3 ns per loop
This is really fishy, because the expression apparently evaluates faster than the time it takes Python to call an empty function, which obviously cannot mean the shift is being redone each time. The computation is not instantaneous; the timing probably reflects the interpreter retrieving an already-computed object somewhere in memory and reusing that value instead of recalculating the expression. Anyway, nice question.
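One plausible explanation for the suspiciously fast %timeit figure is that CPython's peephole optimizer folds a constant expression like 9<<(1<<26) into a single precomputed constant at compile time, so the timed statement only loads an object that already exists. You can probe for that without printing the value (a diagnostic sketch, not part of the original answers):
def create_number():
    return 9 << (1 << 26)

# If the compiler folded the expression, the huge value already sits in the
# function's constants table and each call just loads it; if not, the shift
# is redone on every call. Check bit lengths rather than printing anything,
# since the decimal repr of the folded constant would be ~20 million digits.
for const in create_number.__code__.co_consts:
    if isinstance(const, int):
        print(const.bit_length())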

I am really puzzled. Here are more results with 3.4.1. Running either of the first two lines from the editor in either 3.4.1 or 3.3.5 gives the same contrast.
>>> a = 1 << 26; b = 9 << a # fast, about .5 sec
>>> c = 9 << (1 << 26) # slow, about 3 sec
>>> b == c # fast
True
>>> exec('d=9<<(1<<26)', globals()) # fast
>>> c == d # fast
True
The difference between normal execution and IDLE's is that IDLE execs user code in an exec call like the one above, except that the 'globals' passed to exec is not globals() but a dict configured to look like globals(). I do not know of any 2.7-to-3.4 difference in IDLE in this respect other than the change of exec from a statement to a function. How can exec'ing an exec be faster than a single exec? How can adding an intermediate binding be faster?
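For readers who have not seen that mechanism, here is a minimal sketch of exec'ing user code against a substitute globals dict (an illustration only, not IDLE's actual code):
# Run user code against a dict that merely stands in for the real module
# globals, roughly the shape of what IDLE's run machinery does.
fake_globals = {'__name__': '__main__', '__builtins__': __builtins__}
exec('d = 9 << (1 << 26)', fake_globals)
print(fake_globals['d'].bit_length())   # 67108868; don't print d itself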

Related

String concatenation much faster in Python than Go

I'm looking at using Go to write a small program that's mostly handling text. I'm pretty sure, based on what I've heard about Go and Python, that Go will be substantially faster. I don't actually have a specific need for insane speeds, but I'd like to get to know Go.
The "Go is going to be faster" idea was supported by a trivial test:
# test.py
print("Hello world")
$ time python test.py
Hello world
real 0m0.029s
user 0m0.019s
sys 0m0.010s
// test.go
package main
import "fmt"
func main() {
fmt.Println("hello world")
}
$ time ./test
hello world
real 0m0.001s
user 0m0.001s
sys 0m0.000s
Looks good in terms of raw startup speed (which is entirely expected). Highly non-scientific justification:
$ strace python test.py 2>&1 | wc -l
1223
$ strace ./test 2>&1 | wc -l
174
However, my next contrived test was how fast is Go when faffing with strings, and I was expecting to be similarly blown away by Go's raw speed. So, this was surprising:
# test2.py
s = ""
for i in range(1000000):
    s += "a"
$ time python test2.py
real 0m0.179s
user 0m0.145s
sys 0m0.013s
// test2.go
package main
func main() {
    s := ""
    for i := 0; i < 1000000; i++ {
        s += "a"
    }
}
$ time ./test2
real 0m56.840s
user 1m50.836s
sys 0m17.653s
So Go is hundreds of times slower than Python.
Now, I know this is probably due to Schlemiel the Painter's algorithm, which explains why the Go implementation is quadratic in the number of iterations (10 times more iterations leads to a 100-times slowdown).
However, the Python implementation seems much faster: 10 times more loops only slows it down by about a factor of two. The same effect persists if you concatenate str(i), so I doubt there's some kind of magical JIT optimization to s = 1000000 * 'a' going on. And it's not much slower if I print(s) at the end, so the variable isn't being optimised out.
Naivety of the concatenation methods aside (there are surely more idiomatic ways in each language), is there something here that I have misunderstood, or is it simply easier in Go than in Python to run into cases where you have to deal with C/C++-style algorithmic issues when handling strings (in which case a straight Go port might not be as uh-may-zing as I might hope without having to, ya'know, think about things and do my homework)?
Or have I run into a case where Python happens to work well, but falls apart under more complex use?
Versions used: Python 3.8.2, Go 1.14.2
TL;DR summary: basically you're testing the two implementations' allocators / garbage collectors and heavily weighting the scale on the Python side (by chance, as it were, but this is something the Python folks optimized at some point).
To expand my comments into a real answer:
Both Go and Python have counted strings, i.e., strings are implemented as a two-element header thingy containing a length (a byte count or, for Python 3 strings, a Unicode character count) and a data pointer.
Both Go and Python are garbage-collected (GCed) languages. That is, in both languages, you can allocate memory without having to worry about freeing it yourself: the system takes care of that automatically.
But the underlying implementations differ in one particularly important way here: the version of Python you are using has a reference-counting GC, and the Go system you are using does not.
With a reference count, the inner bits of the Python string handler can do this. I'll express it as Go (or at least pseudo-Go) although the actual Python implementation is in C and I have not made all the details line up properly:
// add (append) new string t to existing string s
func add_to_string(s, t string_header) string_header {
    need := s.len + t.len
    if s.refcount == 1 { // can modify string in-place
        data := s.data
        if cap(data) >= need {
            copy_into(data + s.len, t.data, t.len)
            return s
        }
    }
    // s is shared or s.cap < need
    new_s := make_new_string(roundup(need))
    // important: new_s has extra space for the next call to add_to_string
    copy_into(new_s.data, s.data, s.len)
    copy_into(new_s.data + s.len, t.data, t.len)
    s.refcount--
    if s.refcount == 0 {
        gc_release_string(s)
    }
    return new_s
}
By over-allocating—rounding up the need value so that cap(new_s) is large—we get about log2(n) calls to the allocator, where n is the number of times you do s += "a". With n being 1000000 (one million), that's about 20 times that we actually have to invoke the make_new_string function and release (for gc purposes because the collector uses refcounts as a first pass) the old string s.
[Edit: your source archaeology led to commit 2c9c7a5f33d, which suggests less than doubling but still a multiplicative increase. To other readers, see comment.]
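To make the log2(n) claim concrete, here is a toy model that counts allocations and bytes copied when appending one character n times. The doubling growth factor is my own simplification, not CPython's actual roundup rule:
def simulate(n, overallocate):
    # Append one byte n times, growing a buffer either exactly to fit or by
    # doubling. Returns (number of allocations, total bytes copied).
    allocs = copied = cap = length = 0
    for _ in range(n):
        length += 1
        if length > cap:
            cap = length * 2 if overallocate else length
            allocs += 1
            copied += length - 1   # existing bytes moved into the new buffer
    return allocs, copied

print(simulate(1000000, overallocate=False))  # ~1,000,000 allocations, ~5e11 bytes copied
print(simulate(1000000, overallocate=True))   # ~20 allocations, ~1e6 bytes copied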
The current Go implementation allocates strings without a separate capacity header field (see reflect.StringHeader and note the big caveat that says "don't depend on this, it might be different in future implementations"). Between the lack of a refcount—we can't tell in the runtime routine that adds two strings, that the target has only one reference—and the inability to observe the equivalent of cap(s) (or cap(s.data)), the Go runtime has to create a new string every time. That's one million memory allocations.
To show that the Python code really does use the refcount, take your original Python:
s = ""
for i in range(1000000):
s += "a"
and add a second variable t like this:
s = ""
t = s
for i in range(1000000):
s += "a"
t = s
The difference in execution time is impressive:
$ time python test2.py
0.68 real 0.65 user 0.03 sys
$ time python test3.py
34.60 real 34.08 user 0.51 sys
The modified Python program still beats Go (1.13.5) on this same system:
$ time ./test2
67.32 real 103.27 user 13.60 sys
and I have not poked any further into the details, but I suspect the Go GC is running more aggressively than the Python one. The Go GC is very different internally, requiring write barriers and occasional "stop the world" behavior (of all goroutines that are not doing the GC work). The refcounting nature of the Python GC allows it to never stop: even when the refcount is 2, rebinding s drops the old string's refcount to 1, and the next assignment to t drops it to zero, releasing the memory block for re-use on the next trip through the main loop. So it's probably picking up the same memory block over and over again.
(If my memory is correct, Python's "over-allocate strings and check the refcount to allow expand-in-place" trick was not in all versions of Python. It may have first been added around Python 2.4 or so. This memory is extremely vague and a quick Google search did not turn up any evidence one way or the other. [Edit: Python 2.7.4, apparently.])
Well. You should never, ever use string concatenation in this way :-)
In Go, try strings.Builder:
package main
import (
    "strings"
)
func main() {
    var b1 strings.Builder
    for i := 0; i < 1000000; i++ {
        b1.WriteString("a")
    }
}
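The corresponding advice on the Python side, if plain += ever does become the bottleneck, is to collect the pieces and join them once (a standard idiom, shown here for symmetry rather than taken from the original answer):
# Build the string in one pass instead of rebinding s a million times.
s = "".join("a" for _ in range(1000000))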

Is slicing really slower in Python 3.4?

This question and my answer got me thinking about this peculiar difference between Python 2.7 and Python 3.4. Take the simple example code:
import timeit
import dis
c = 1000000
r = range(c)
def slow():
    for pos in range(c):
        r[pos:pos+3]
dis.dis(slow)
time = timeit.Timer(lambda: slow()).timeit(number=1)
print('%3.3f' % time)
In Python 2.7, I consistently get about 0.165, and for Python 3.4 I consistently get about 0.554. The only significant difference between the disassemblies is that Python 2.7 emits the SLICE+3 byte code while Python 3.4 emits BUILD_SLICE followed by BINARY_SUBSCR. Note that I've eliminated the candidates for potential slowdown from the other question, namely strings and the fact that xrange doesn't exist in Python 3.4 (Python 2's xrange is supposed to be similar to Python 3's range class anyway).
Using itertools' islice yields nearly identical timings between the two, so I highly suspect that it's the slicing that's the cause of the difference here.
Why is this happening and is there a link to an authoritative source documenting change in behavior?
EDIT: In response to the answer, I have wrapped the range objects in list, which did give a noticeable speedup. However as I increased the number of iterations in timeit I noticed that the timing differences became larger and larger. As a sanity check, I replaced the slicing with None to see what would happen.
500 iterations in timeit.
c = 1000000
r = list(range(c))
def slow():
    for pos in r:
        None
yields 10.688 and 9.915 respectively. Replacing the for loop with for pos in islice(r, 0, c, 3) yields 7.626 and 6.270 respectively. Replacing None with r[pos] yielded 20~ and 28~ respectively. r[pos:pos+3] yields 67.531 and 106.784 respectively.
As you can see, the timing differences are huge. Again, I'm still convinced the issue is not directly related to range.
On Python 2.7, you're iterating over a list and slicing a list. On Python 3.4, you're iterating over a range and slicing a range.
When I run a test with a list on both Python versions:
from __future__ import print_function
import timeit
print(timeit.timeit('x[5:8]', setup='x = list(range(10))'))
I get 0.243554830551 seconds on Python 2.7 and 0.29082867689430714 seconds on Python 3.4, a much smaller difference.
The performance difference you see after eliminating the range object is much smaller. It comes primarily from two factors: addition is a bit slower on Python 3, and Python 3 needs to go through __getitem__ with a slice object for slicing, while Python 2 has __getslice__.
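To see the Python 3 dispatch concretely: a slice expression is packed into a slice object and handed to __getitem__. A tiny probe class illustrates this (my own example, not from the original answer):
class Probe:
    def __getitem__(self, key):
        # In Python 3, x[5:8] arrives here as a slice object; Python 2 would
        # have routed a simple slice to __getslice__(5, 8) instead.
        print(type(key), key)

Probe()[5:8]   # prints: <class 'slice'> slice(5, 8, None)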
I wasn't able to replicate the timing difference you saw with r[pos]; you may have had some confounding factor in that test.

Is doing multiplication multiple times or assigning to a variable faster?

Let's say I have this code:
foo = int(input("Number"))
bar = int(input("Number"))
for number in range(0, 10):
    if foo*bar > 0:
        print("hello")
But, I could also have this code:
foo = int(input("Number"))
bar = int(input("Number"))
top = foo*bar
for number in range(0, 10):
    if top > 0:
        print("hello")
Which one is faster?
Using Python 3.
I realize that this question is similar; however, it's for Java, and Python may be different. Additionally, they're asking about memory efficiency, whereas I am asking about processor efficiency.
Just so we can end this already, here is the answer. Using timeit on my computer gives the following results.
Do the multiplication in the loop every time:
$ python -m timeit 'for _ in xrange(10): 10 * 20 > 0'
1000000 loops, best of 3: 1.3 usec per loop
Do the multiplication outside the loop:
$ python -m timeit 'foo = 10 * 20
> for _ in xrange(10): foo > 0'
1000000 loops, best of 3: 1.07 usec per loop
Running the multiplication outside the loop instead of inside of it saves us 0.23 microseconds - about 18%.
Note that I'm using Python 2.7.10 on a 2007 iMac (2.4 GHz) running OS X 10.11.0, developer preview 7. Your exact results may vary with Python version, hardware, OS version, CPU, and the other applications you have running (just Safari with 3 tabs in my case). But you should see a comparable pattern: reading a stored variable is trivial compared to repeatedly performing the same operation and discarding the result every time.
In the future, OP, please just use timeit. It comes built in with Python for doing performance checks just like this. See the documentation here: https://docs.python.org/3/library/timeit.html (On that note, do not fear the documentation. Read it. If you have questions, ask them on Stack Overflow; you may reveal shortcomings in the documentation that need to be improved or clarified. Or at the very least, your questions will be answered and you'll know more afterwards than you did before.)
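Since the question targets Python 3, here is the same comparison using the timeit module directly (a sketch; absolute numbers will vary with machine and interpreter):
import timeit

# Multiplication repeated inside the loop vs. hoisted into a variable.
inside = timeit.timeit("for _ in range(10): 10 * 20 > 0", number=1000000)
outside = timeit.timeit("for _ in range(10): top > 0",
                        setup="top = 10 * 20", number=1000000)
print(inside, outside)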

Can Go really be that much faster than Python?

I think I may have implemented this incorrectly because the results do not make sense. I have a Go program that counts to 1000000000:
package main
import (
    "fmt"
)
func main() {
    for i := 0; i < 1000000000; i++ {}
    fmt.Println("Done")
}
It finishes in less than a second. On the other hand I have a Python script:
x = 0
while x < 1000000000:
    x += 1
print 'Done'
It finishes in a few minutes.
Why is the Go version so much faster? Are they both counting up to 1000000000 or am I missing something?
One billion is not a very big number. Any reasonably modern machine should be able to do this in a few seconds at most, if it's able to do the work with native types. I verified this by writing an equivalent C program, reading the assembly to make sure that it actually was doing addition, and timing it (it completes in about 1.8 seconds on my machine).
Python, however, doesn't have a concept of natively typed variables (or meaningful type annotations at all), so it has to do hundreds of times as much work in this case. In short, the answer to your headline question is "yes". Go really can be that much faster than Python, even without any kind of compiler trickery like optimizing away a side-effect-free loop.
pypy actually does an impressive job of speeding up this loop
import time

def main():
    x = 0
    while x < 1000000000:
        x += 1

if __name__ == "__main__":
    s = time.time()
    main()
    print time.time() - s
$ python count.py
44.221405983
$ pypy count.py
1.03511095047
That cuts the run time by about 97% (roughly 43x faster)!
Clarification for the 3 people who didn't "get it": the Python language itself isn't slow. The CPython implementation is a relatively straightforward way of running the code. PyPy is another implementation of the language that does many tricky things (especially the JIT) that can make enormous differences. Directly answering the question in the title: Go isn't "that much" faster than Python, Go is that much faster than CPython.
Having said that, the code samples aren't really doing the same thing. Python needs to instantiate 1000000000 of its int objects. Go is just incrementing one memory location.
This scenario will highly favor decent natively compiled, statically typed languages. Such languages are capable of emitting a very trivial loop of, say, 4-6 CPU opcodes that uses a simple condition check for termination. This loop has effectively zero branch-prediction misses and can be thought of as performing an increment every CPU cycle (this isn't entirely true, but close).
Python implementations have to do significantly more work, primarily due to dynamic typing. Python must make several different calls (internal and external) just to add two ints together. In Python it must call __add__ (it is roughly i = i.__add__(1)), which in turn has to check the type of the value passed (to make sure it is an int), then it adds the integer values (extracting them from both objects), and then the new integer value is wrapped up again in a new object. Finally it rebinds the local variable to the new object. That's significantly more work than a single increment opcode, and it doesn't even address the loop itself; by comparison, the Go/native version is likely only incrementing a register as a side effect.
Java will fare much better in a trivial benchmark like this and will likely be fairly close to Go; the JIT and the static typing of the counter variable can ensure this (it uses a special integer-add JVM instruction). Once again, Python has no such advantage. Now, there are some implementations like PyPy/RPython, which run a static-typing phase and should fare much better than CPython here.
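To make the per-iteration overhead concrete, you can disassemble the Python loop and count the bytecodes executed for each increment (an illustrative sketch, not from the original answers):
import dis

def count(n):
    x = 0
    while x < n:
        x += 1
    return x

# Each pass through the loop runs several bytecodes (a comparison, a jump,
# LOAD_FAST / LOAD_CONST / INPLACE_ADD / STORE_FAST), and the add itself goes
# through the generic object protocol rather than one native instruction.
dis.dis(count)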
You've got two things at work here. The first of which is that Go is compiled to machine code and run directly on the CPU while Python is compiled to bytecode run against a (particularly slow) VM.
The second, and more significant, thing impacting performance is that the semantics of the two programs are actually significantly different. The Go version makes a "box" called "x" that holds a number and increments that by 1 on each pass through the program. The Python version actually has to create a new "box" (int object) on each cycle (and, eventually, has to throw them away). We can demonstrate this by modifying your programs slightly:
package main
import (
    "fmt"
)
func main() {
    for i := 0; i < 10; i++ {
        fmt.Printf("%d %p\n", i, &i)
    }
}
...and:
x = 0
while x < 10:
    x += 1
    print x, id(x)
This is because Go, due to its C roots, takes a variable name to refer to a place, whereas Python takes variable names to refer to things. Since an integer is considered a unique, immutable entity in Python, we must constantly make new ones. Python should be slower than Go, but you've picked a worst-case scenario: in the Benchmarks Game, we see Go being, on average, about 25x faster (100x in the worst case).
You've probably read that, if your Python programs are too slow, you can speed them up by moving things into C. Fortunately, in this case, somebody's already done this for you. If you rewrite your empty loop to use xrange() like so:
for x in xrange(1000000000):
    pass
print "Done."
...you'll see it run about twice as fast. If you find loop counters to actually be a major bottleneck in your program, it might be time to investigate a new way of solving the problem.
@troq
I'm a little late to the party but I'd say the answer is yes and no. As @gnibbler pointed out, CPython is slower in the simple implementation, but PyPy is JIT-compiled for much faster code when you need it.
If you're doing numeric processing with CPython, most people do it with numpy, resulting in fast operations on arrays and matrices. Recently I've been doing a lot with numba, which allows you to add a simple decorator to your code. For this one I just added @njit to a function incALot() which runs your code above.
On my machine CPython takes 61 seconds, but with the numba wrapper it takes 7.2 microseconds, which will be similar to C and maybe faster than Go. That's an 8-million-times speedup.
So, in Python, if things with numbers seem a bit slow, there are tools to address it - and you still get Python's programmer productivity and the REPL.
import time
from numba import njit

def incALot(y):
    x = 0
    while x < y:
        x += 1

@njit('i8(i8)')
def nbIncALot(y):
    x = 0
    while x < y:
        x += 1
    return x

size = 1000000000
start = time.time()
incALot(size)
t1 = time.time() - start
start = time.time()
x = nbIncALot(size)
t2 = time.time() - start
print('CPython3 takes %.3fs, Numba takes %.9fs' % (t1, t2))
print('Speedup is: %.1f' % (t1/t2))
print('Just Checking:', x)
CPython3 takes 58.958s, Numba takes 0.000007153s
Speedup is: 8242982.2
Just Checking: 1000000000
The problem is that Python is interpreted and Go isn't, so there's no real way to benchmark raw speeds against each other. Interpreted languages usually (though not always) have a VM component, and that's where the problem lies: any test you run is being run within interpreter bounds, not actual runtime bounds. Go is slightly slower than C in terms of speed, and that is mostly due to it using garbage collection instead of manual memory management. That said, Go compared to Python is fast because it's a compiled language; the only thing lacking in Go is bug testing. I stand corrected if I'm wrong.
It is possible that the compiler realized that you didn't use the "i" variable after the loop, so it optimized the final code by removing the loop.
Even if you used it afterwards, the compiler is probably smart enough to substitute the loop with
i = 1000000000;
Hope this helps =)
I'm not familiar with Go, but I'd guess that the Go version skips the loop since the body of the loop does nothing. On the other hand, in the Python version you are incrementing x in the body of the loop, so it's probably actually executing the loop.

Why is python slower compared to Ruby even with this very simple "test"?

See Is there something wrong with this python code, why does it run so slow compared to ruby? for my previous attempt at understanding the differences between python and ruby.
As igouy pointed out there, the reason I came up with for Python being slower could be something other than the recursive function calls (and the stack they involve).
I made this
#!/usr/bin/python2.7
i = 0
a = 0
while i < 6553500:
    i += 1
    if i != 6553500:
        a = i
    else:
        print "o"
print a
In ruby it is
#!/usr/bin/ruby
i = 0
a = 0
while i < 6553500
  i += 1
  if i != 6553500
    a = i
  else
    print "o"
  end
end
print a
Python 3.1.2 (r312:79147, Oct 4 2010, 12:45:09)
[GCC 4.5.1] on linux2
time python pytest.py
o
6553499
real 0m3.637s
user 0m3.586s
ruby 1.9.2p0 (2010-08-18 revision 29036) [x86_64-linux]
time ruby rutest.rb
o6553499
real 0m0.618s
user 0m0.610s
Letting it loop higher gives higher differences. Adding an extra 0, ruby finishes in 7s, while python runs for 40s.
This is run on an Intel(R) Core(TM) i7 CPU M 620 @ 2.67GHz with 4GB of memory.
Why is this so?
First off, note that the Python version you show is incorrect: you're running this code in Python 2.7, not 3.1 (it's not even valid Python3 code). (FYI, Python 3 is usually slower than 2.)
That said, there's a critical problem in the Python test: you're writing it as global code. You need to write it as a function. It runs about twice as fast when written correctly, in both Python 2 and 3:
def main():
    i = 0
    a = 0
    while i < 6553500:
        i += 1
        if i != 6553500:
            a = i
        else:
            print("o")
    print(a)

if __name__ == "__main__":
    main()
When you write code globally, you have no locals; all of your variables are global variables. Locals are much faster than globals in Python, because globals are stored in a dict. Locals can be referenced directly by the VM by index, so no hash table lookups are needed.
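You can see the difference in the bytecode: a global name compiles to LOAD_GLOBAL/STORE_GLOBAL (dict lookups by name), while a name inside a function compiles to LOAD_FAST/STORE_FAST (indexing into a preallocated array). A quick illustration (my own sketch, not part of the original answer):
import dis

g = 0

def use_global():
    return g + 1      # LOAD_GLOBAL: a dict lookup on every access

def use_local():
    l = 0
    return l + 1      # LOAD_FAST: a direct index into the locals array

dis.dis(use_global)
dis.dis(use_local)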
Also, note that this is such a simple test, what you're really doing is benchmarking a few arbitrary bytecode operations.
Why is this so?
Python's loops (for, while) aren't fast at handling dynamic types; in a case like this, it loses its advantage.
But Cython comes to the rescue.
The pure Python version below is borrowed from Glenn Maynard's answer (without the print).
The Cython version is based directly on it, and it's easy enough for a new Python programmer to read:
def main():
    cdef int i = 0
    cdef int a = 0
    while i < 6553500:
        i += 1
        if i != 6553500:
            a = i
        else:
            pass # print "0"
    return a

if __name__ == "__main__":
    print main()
On my PC, the Python version needs 2.5 s and the Cython version needs 5.5 ms:
In [1]: import pyximport
In [2]: pyximport.install()
In [3]: import spam # pure python version
In [4]: timeit spam.main()
1 loops, best of 3: 2.41 s per loop
In [5]: import eggs # cython version
In [6]: timeit eggs.main()
100 loops, best of 3: 5.51 ms per loop
Update: as Glenn Maynard points out in the comments, while i < N: i += 1 is not Pythonic, so I also tested an xrange implementation.
spam.py is the same as Glenn Maynard's version. foo.py's code is:
def main():
    for i in xrange(6553500):
        pass
    a = i
    return a

if __name__ == "__main__":
    print main()
~/code/note$ time python2.7 spam.py # Glenn Maynard's while version
6553499
real 0m2.128s
user 0m2.080s
sys 0m0.044s
~/code/note$ time python2.7 foo.py # xrange version, as Glenn Maynard pointed out in a comment
6553499
real 0m0.618s
user 0m0.604s
sys 0m0.016s
On my friend's laptop (Windows 7 64-bit, Python 2.6, 3GB RAM), Python takes only around 1 second for the 6553500 input and 10 seconds for 65535000. I wonder why your computer is taking so much time. It also shaves off some time on the larger input when I use xrange and local variables instead.
I cannot comment on Ruby since it's not installed on this computer.
It's too simple a test. Maybe Ruby is faster at things like this. But Python has the advantage when you need to work with more complex data types and their methods. Python has many more implemented "ways to do this", and you must choose the one that is simplest and sufficient. Ruby works with more abstract entities and requires less knowledge. But for the same task in Python, if you dig deeper into its manuals and find more and more combinations of types and methods/functions, you can over time end up with a much faster program than in Ruby.
Ruby is simpler and easier to write if you just want a result, not performance. But the more complex your task is, the more of a performance advantage Python will have after hours of optimizing.
UPD: While Ruby and Python have so much in common, performance and high-levelness tend to be inversely proportional.
