Replies: 2 comments 1 reply
- I am aware that my plan violates the data integrity principles of downstream records / calculated results, but I am willing to live with that, since I suspect the divergences will be minimal after compression and I can recalculate the downstream results at a later point.
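
  For illustration only, a minimal sketch of what that later recalculation could look like in DataJoint, assuming a downstream computed table named `Downstream` and a restriction `stale_keys` covering the affected records (both names are placeholders, not from this discussion):

  ```python
  # Hypothetical recompute pattern: drop the stale derived results and
  # let DataJoint recompute them from the compressed stacks.
  (Downstream & stale_keys).delete()   # remove the outdated downstream entries
  Downstream.populate(stale_keys)      # recompute them for the same keys
  ```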
- Consider doing this as part of the DataJoint 2.0 migration. There is a new feature of codecs for objects. What is your data type? What compression are you using? You may be able to convert the objects into zarr for compression and lazy loading.
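
  For illustration only, a minimal sketch of re-encoding a tif stack as a compressed zarr array (assuming the zarr v2 API with numcodecs, stacks readable with tifffile, and placeholder file names):

  ```python
  import tifffile
  import zarr
  from numcodecs import Blosc

  # Load the full stack into memory as a numpy array (placeholder file name).
  stack = tifffile.imread("stack.tif")

  # Write it as a chunked, compressed zarr array; one chunk per frame
  # allows lazy, per-frame reads later.
  z = zarr.open(
      "stack.zarr",
      mode="w",
      shape=stack.shape,
      chunks=(1,) + stack.shape[1:],
      dtype=stack.dtype,
      compressor=Blosc(cname="zstd", clevel=5, shuffle=Blosc.BITSHUFFLE),
  )
  z[:] = stack
  ```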
- I have a computed table that stores tif stacks as `attach@my_store` objects in a storage bucket. The size of these is getting out of hand (and so are their costs). I want to reduce the size of my external store (`my_store`) by compressing those stacks. How do I handle this safely? More specifically, my question is: what happens when I do an `update1()` on each record in that table that has those `attach@my_store` objects? Does the blob in external storage get overwritten, or does it end up under a different name in external storage? And how can I safely clean those old, massive blobs (my original goal) from the bucket?
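
  For context, a minimal sketch of the operation being asked about, assuming a table shaped roughly like the one described (`Stacks`, `compress_stack`, and the schema name are hypothetical placeholders, not names from this discussion):

  ```python
  import datajoint as dj

  schema = dj.schema("my_schema")  # placeholder schema name

  @schema
  class Stacks(dj.Computed):
      definition = """
      stack_id : int
      ---
      stack : attach@my_store   # tif stack kept in the external bucket
      """

  def compress_stack(key):
      """Hypothetical helper: fetch the original tif for `key`, re-encode it
      with compression, and return the path of the smaller file."""
      ...

  # Replace each attached stack with a smaller, compressed file via update1().
  # Whether the original blob in my_store is overwritten, renamed, or left
  # orphaned is exactly the question above.
  for key in Stacks.fetch("KEY"):
      compressed_path = compress_stack(key)
      Stacks.update1({**key, "stack": compressed_path})
  ```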