Martin
My feedback
5 results found
4 votes
Martin supported this idea
12 votes
Martin commented: I want to add a use case other than GDPR that we are facing right now:
We are running ETL pipelines for financial transactions in which we transform and aggregate data. Once in a while we notice a bug in the first stage of the pipeline and need to re-run the aggregation stage. However, the data source, i.e. the raw/detailed data, is archived after 3 months.
In that case we want to do a full re-import of the data source for a particular timeframe into the cluster. That data would be archived again after 3 months and create duplicates inside the Online Archive. It would be nice to have a built-in solution for such a pipeline scenario.
One solution would be to tag the re-imported data somehow to differentiate the duplicate items inside the Online Archive (see the sketch below). Another would be to allow delete queries for specific Online Archive paths in S3.
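A rough sketch of the tagging idea (Python/pymongo; the database, collection, and "reimport_batch" field names are made up purely for illustration, this is not an existing Atlas feature):

```python
# Sketch only: tag every re-imported document so that, once both copies have
# been archived, the duplicates can be told apart by the tag.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")   # placeholder URI
transactions = client["finance"]["transactions"]      # assumed names

BATCH_ID = "reimport-2021-Q3"                          # hypothetical tag value

def reimport(raw_docs):
    """Insert re-imported source documents with a batch tag attached."""
    for doc in raw_docs:
        doc["reimport_batch"] = BATCH_ID
        doc["reimported_at"] = datetime.now(timezone.utc)
    transactions.insert_many(raw_docs)

# Later, queries (including federated ones that span the Online Archive) can
# select or exclude the re-imported copies explicitly via the tag:
originals_only = transactions.find({"reimport_batch": {"$exists": False}})
reimported_only = transactions.find({"reimport_batch": BATCH_ID})
```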
Martin supported this idea
2 votes
Martin commented: Is this still the case, or is there some intelligent housekeeping going on in the background nowadays?
Martin shared this idea
9 votes
Martin commented: For anyone who needs a quick workaround, e.g. for automatic TV dashboard deployments: https://gist.github.com/MartinLoeper/c138cbb564d0dcc1cd5a35b73f72c088
Martin commented: Yep, this is so important for our Raspberry Pi-connected TV dashboard! Please make this little change - I don't want to spend time scripting Tampermonkey again. :O
Martin supported this idea
42 votes
We’re currently researching the best way to add support for this.
Martin commented: We would like to see this feature in the future, too. It is particularly important for our log archival solution, which deals with millions of audit logs per month; they should all expire and be deleted at some point to cut storage costs.
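For context, the behavior we are after is roughly what TTL indexes already provide on live collections; a minimal pymongo sketch with assumed names ("logging", "audit_logs", "createdAt"):

```python
# TTL index on a live collection: documents are deleted automatically about
# 90 days after their "createdAt" timestamp. An equivalent expiration
# mechanism for data in the Online Archive is what this request is about.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")   # placeholder URI
audit_logs = client["logging"]["audit_logs"]          # assumed names

audit_logs.create_index(
    [("createdAt", ASCENDING)],
    expireAfterSeconds=90 * 24 * 60 * 60,
)
```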
Martin supported this idea
Such a good point! I was wondering about the exact same detail regarding provisioned IOPS and gp3 volumes...
Is Atlas using gp3 volumes at all or are they using io2 volumes when checking the "provisioned IOPS" checkbox?
The docs read as if it were io2, see: https://www.mongodb.com/docs/atlas/customize-storage
For io2, throughput would scale proportionally with IOPS, see: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html
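Back-of-the-envelope numbers for the io2 case. The 0.256 MiB/s per provisioned IOPS factor and the 1,000 MiB/s ceiling are my reading of the linked AWS page, so please double-check against the current docs before relying on them:

```python
# Rough sketch: proportional throughput scaling for io1/io2 under the
# assumed figures above (not authoritative, verify against the AWS docs).
MIBPS_PER_IOPS = 0.256   # assumed MiB/s gained per provisioned IOPS
MAX_MIBPS = 1000         # assumed per-volume throughput ceiling

def approx_throughput_mibps(provisioned_iops: int) -> float:
    """Approximate maximum throughput for a given provisioned IOPS value."""
    return min(provisioned_iops * MIBPS_PER_IOPS, MAX_MIBPS)

for iops in (1000, 3000, 4000, 16000):
    print(f"{iops:>6} IOPS -> ~{approx_throughput_mibps(iops):.0f} MiB/s")
```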