Are we doing this right? Slow insert performance on IndexedDB. #8864
Unanswered · michael-kitchin asked this question in Q&A
Replies: 2 comments
- @michael-kitchin is this using the
- @michael-kitchin It would be interesting to see if your performance was any better using the in-memory adapter. 550K records is a lot of data for a browser application.
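As a rough sketch, swapping in the in-memory adapter for a comparison run might look like this (assuming the `pouchdb-adapter-memory` package; the database name is hypothetical):

```js
// Sketch: run the same bulkDocs() benchmark against the in-memory adapter.
// Assumes the pouchdb-adapter-memory package is installed.
import PouchDB from 'pouchdb';
import memoryAdapter from 'pouchdb-adapter-memory';

PouchDB.plugin(memoryAdapter);

// 'insert-benchmark' is a hypothetical database name.
const db = new PouchDB('insert-benchmark', { adapter: 'memory' });
```

Comparing throughput between the two runs would separate IndexedDB's storage overhead from PouchDB's own document-processing cost.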
We're having difficulty achieving the performance we're looking for when bulk-inserting data into a local-only, IndexedDB-backed PouchDB (v8.0.1), and at this point we're trying to determine whether we should expect what we're seeing or whether we're doing something wrong.
Our current observed performance inserting ~550k records is 1-2k insertions per second using `bulkDocs()` into an empty database (i.e., completely removed, via dev tools). This nets out to ~5 min, all said and done.

Other specifics:

- Records are of the form `{"key1": ["val1"], "key2": ["val2"], "key3": {"key3.1": "val3.1", "key3.2": "val3.2"}}`.
- `_id` fields are prefixed UUIDs of the form `01234567890-f357fd5d-00ed-4b70-99fb-c512d9a75e35`.
- Inserts are batched via `_.chunk()`, with the resulting promises wrapped up using `Promise.all()`. The "1-2k insertions per second" figure is based on batch timing, and therefore excludes database creation.

So what should we be expecting, do you think?