Created on 2015-01-18 12:32 by cmn, last changed 2022-04-11 14:58 by admin. This issue is now closed.
I found the code used to collapse addresses to be very slow on a large number (64k) of island addresses which are not collapsible. The code at https://github.com/python/cpython/blob/0f164ccc85ff055a32d11ad00017eff768a79625/Lib/ipaddress.py#L349 turned out to be the culprit, especially the index lookup.

The patch changes the code to discard the index lookup and have _find_address_range return the number of items consumed. That way the set operation to dedup the addresses can be dropped as well.

Numbers from the test rig I adapted from http://bugs.python.org/issue20826, with 8k non-consecutive addresses:

Execution time: 0.6893927365541458 seconds
vs.
Execution time: 12.116527611762285 seconds

Best regards,
Markus Kötter
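For context, a minimal sketch of the idea (simplified names, not the committed patch; it assumes the input list is sorted and deduplicated):

def _find_address_range(addresses):
    # Find a run of consecutive addresses at the start of a sorted list.
    # Returns (first, last, consumed) so the caller can advance by index
    # instead of searching for `last` with list.index().
    first = last = addresses[0]
    consumed = 1
    for ip in addresses[1:]:
        if ip._ip != last._ip + 1:
            break
        last = ip
        consumed += 1
    return first, last, consumed

def _summarize_runs(ips, summarize_address_range):
    # Each O(n) ips.index(last) lookup is replaced by a constant-time
    # index increment, turning the scan over many small runs from
    # O(n^2) into O(n).
    addrs = []
    i = 0
    while i < len(ips):
        first, last, consumed = _find_address_range(ips[i:])
        i += consumed
        addrs.extend(summarize_address_range(first, last))
    return addrs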
Added the test rig.
This is great, thank you. Can you sign the contributor's agreement? https://www.python.org/psf/contrib/contrib-form/
Here is an updated patch with a fix to the tests and docstrings.
I just signed the agreement; ewa@ is processing it.
New changeset f7508a176a09 by Antoine Pitrou in branch 'default': Issue #23266: Much faster implementation of ipaddress.collapse_addresses() when there are many non-consecutive addresses. https://hg.python.org/cpython/rev/f7508a176a09
Ok, I've committed the patch. Thank you!
Deduplication should not have been omitted. Dropping it slowed down the collapsing of duplicated addresses.
$ ./python -m timeit -s "import ipaddress; ips = [ipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "ipaddress.collapse_addresses(ips)"
Before f7508a176a09:
100 loops, best of 3: 13.4 msec per loop
After f7508a176a09:
10 loops, best of 3: 129 msec per loop
The proposed patch restores performance for duplicated addresses and simplifies the code using generators.
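A rough sketch of what a generator-based variant could look like (illustrative only, not the committed patch): _find_address_range yields (first, last) runs in one pass, and the caller restores deduplication via sorted(set(...)):

def _find_address_range(addresses):
    # Yield (first, last) runs of consecutive addresses.
    # Assumes `addresses` is sorted and free of duplicates.
    it = iter(addresses)
    try:
        first = last = next(it)
    except StopIteration:
        return
    for ip in it:
        if ip._ip != last._ip + 1:
            yield first, last
            first = ip
        last = ip
    yield first, last

def _summarize(ips, summarize_address_range):
    for first, last in _find_address_range(sorted(set(ips))):
        yield from summarize_address_range(first, last)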
Good catch. What is the performance on the benchmark posted here?
> What is the performance on the benchmark posted here?

The same as with the current code.
Then +1. The patch looks fine to me.
New changeset 021b23a40f9f by Serhiy Storchaka in branch 'default': Issue #23266: Restore the performance of ipaddress.collapse_addresses() with duplicated addresses. https://hg.python.org/cpython/rev/021b23a40f9f
My initial patch was wrong wrt. _find_address_range: it did not loop over equal addresses.
That's why performance with many equal addresses was degraded when dropping the set().
Here is a patch to fix _find_address_range, drop the set, and improve performance again.
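A sketch of the fix described above (illustrative, not the actual patch): an equal address is treated as part of the current run, so sorted input no longer has to be deduplicated with set() first:

def _find_address_range(addresses):
    # Yield (first, last) runs from a sorted list that may contain duplicates.
    it = iter(addresses)
    try:
        first = last = next(it)
    except StopIteration:
        return
    for ip in it:
        if ip._ip == last._ip:
            continue                # duplicate: stay in the current run
        if ip._ip != last._ip + 1:
            yield first, last       # gap: close the current run
            first = ip
        last = ip
    yield first, last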
python3 -m timeit -s "import bipaddress; ips = [bipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "bipaddress.collapse_addresses(ips)"
1000 loops, best of 3: 1.76 msec per loop
python3 -m timeit -s "import aipaddress; ips = [aipaddress.ip_address('2001:db8::1000') for i in range(1000)]" -- "aipaddress.collapse_addresses(ips)"
1000 loops, best of 3: 1.32 msec per loop
A single duplicated address is a degenerate case. When there are many duplicated addresses in a range, the patch causes a regression.
$ ./python -m timeit -s "import ipaddress; ips = [ipaddress.ip_address('2001:db8::%x' % (i%100)) for i in range(100000)]" -- "ipaddress.collapse_addresses(ips)"
Unpatched: 10 loops, best of 3: 369 msec per loop
Patched: 10 loops, best of 3: 1.04 sec per loop
Eliminating duplicates before processing is faster once the overhead of the set operation is less than the time required to sort the larger dataset with duplicates.
So we are basically comparing sort(data) to sort(set(data)).
The optimum depends on the input data.
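The tradeoff can be illustrated directly, independent of ipaddress (an illustrative micro-benchmark; the numbers depend on the duplicate ratio and pre-sortedness of the input):

import random
import timeit

data = [random.randrange(10**6) for _ in range(100000)]  # mostly unique values

print(timeit.timeit(lambda: sorted(data), number=10))       # sort with duplicates
print(timeit.timeit(lambda: sorted(set(data)), number=10))  # dedup, then sort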
python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(100000)]; random.shuffle(ips)" -- "bipaddress.collapse_addresses(ips)"
10 loops, best of 3: 1.49 sec per loop
vs.
10 loops, best of 3: 1.59 sec per loop
If the data is pre-sorted (possible if you retrieve it from a database), things are drastically different:
python3 -m timeit -s "import random; import bipaddress; ips = [bipaddress.ip_address('2001:db8::') + i for i in range(100000)]; " -- "bipaddress.collapse_addresses(ips)"
10 loops, best of 3: 136 msec per loop
vs
10 loops, best of 3: 1.57 sec per loop
So for my use case, where I basically have less than 0.1% duplicates (if any), dropping the set() would be better, but ... other use cases will exist.
Still, it is easy to "emulate" sorted(set()) from a user's perspective - just call collapse_addresses(set(data)) in case you expect duplicates, and get a speedup by passing in unique, possibly even sorted, data.
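For example (a user-side sketch, assuming duplicates are expected):

import ipaddress

ips = [ipaddress.ip_address('2001:db8::1000') for i in range(1000)]
collapsed = list(ipaddress.collapse_addresses(set(ips)))
# -> [IPv6Network('2001:db8::1000/128')]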
On the other hand, if you have a huge load of 99.99% sorted, non-collapsible addresses, there is no way from a user's perspective to drop the set() operation inside sorted(set()) and speed things up, and the slowdown you get is 10x.
That said, I'd drop the set().
The optimum depends on the input data, and dropping the set() lets users optimize based on the nature of their input data.