This would be nice; as it stands, restore is an extra step for us since you have to launch a matching MongoDB server and then dump the data just to make it usable.
We created a k8s Pod that we run to load the data after we have extracted it to a PVC:
```
---
apiVersion: v1
kind: Pod
metadata:
  name: mongo-s3-restore-pod
  namespace: mongo-s3-backup
spec:
  containers:
    - name: mongo-s3-restore-pod
      image: "mongo:3.6-xenial"
      imagePullPolicy: Always
      envFrom:
        - configMapRef:
            name: mongo-s3-backup-config
      volumeMounts:
        # the whole PVC, where the snapshot archive gets extracted
        - name: snapshot-volume
          mountPath: /snapshots
        # the same PVC again, with subPath pointing at the extracted
        # data directory so mongod picks it up as /data/db
        - name: snapshot-volume
          mountPath: /data/db
          subPath: "path/to/dump"
  restartPolicy: OnFailure
  volumes:
    - name: snapshot-volume
      persistentVolumeClaim:
        claimName: mongo-s3-restore-pod
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-s3-restore-pod
  namespace: mongo-s3-backup
  labels:
    app: mongo-s3-restore-pod
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: 750Gi
```
We also have an entire script that runs daily and archives the backups to our own S3. We would really rather not have to do this, as it is prone to errors and it is one more thing we have to maintain ourselves.
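For context, the daily archive piece is roughly the sketch below, not our exact script: the CronJob name, schedule, aws-cli image, and bucket path are all placeholder assumptions, and it presumes the job can mount the same PVC the snapshots are extracted to and that the config map carries the AWS credentials.
```
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongo-s3-archive          # hypothetical name
  namespace: mongo-s3-backup
spec:
  schedule: "0 3 * * *"           # once a day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: archive
              image: amazon/aws-cli:2.15.0   # assumed; any image with the aws CLI works
              envFrom:
                - configMapRef:
                    name: mongo-s3-backup-config
              command:
                - sh
                - -c
                # sync the extracted backups to our own bucket; bucket path is a placeholder
                - aws s3 sync /snapshots "s3://our-backup-bucket/mongo/$(date +%F)/"
              volumeMounts:
                - name: snapshot-volume
                  mountPath: /snapshots
                  readOnly: true
          restartPolicy: OnFailure
          volumes:
            - name: snapshot-volume
              persistentVolumeClaim:
                claimName: mongo-s3-restore-pod
```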