
Data Domain: PowerProtect DD Virtual Edition (“DDVE”) Metadata Guidelines



Article Content


Instructions

Synopsis:

PowerProtect DD Virtual Edition (DDVE) provides unmatched performance and TCO for customers seeking the benefits of a Data Domain appliance in a virtual form factor. However, care must be taken before deploying DDVE in certain types of environments, as some workloads can increase metadata usage beyond the prescribed guideline and thereby affect the TCO experienced by the user. The increase in metadata in the scenarios discussed below is a natural consequence of high deduplication: the product is working as designed. We highly recommend that you assess the potential impact of these scenarios on TCO before deployment.
This document discusses the known scenarios that can lead to increased metadata usage when DDVE writes to an object store.

Scenario 1: Virtual Synthetic File System backups

Protecting file system backups on PowerProtect DDVE with a backup application (such as Avamar) that has virtual synthetic backups enabled can produce a high deduplication ratio at the cost of increased metadata usage. A high effective deduplication rate exerts downward pressure on TCO, while increased metadata usage exerts upward pressure; the balance of these forces determines the overall TCO experienced by the user. It is important to assess, before deployment, whether the user's environment has this type of workload.
If virtual synthetic backups cannot be turned off in the backup application, and an application that does not use virtual synthetics cannot be chosen, the only known solution in such cases is to add metadata disks to keep the system operational.

Scenario 2: Partial Containers

Writing partial containers to the object store, which can be caused by frequent commits or by writing small files (<1 MB), increases the metadata overhead of the system. In this scenario the spike is temporary, and the system can be brought back to steady state by running cleaning, which copies forward the partial containers and eliminates the fragmentation. Note, however, that running cleaning has a cost of its own, which can affect TCO if it is run too often. The best-practice guideline is to run cleaning once a week and size the metadata disks accordingly.
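The effect of copy forward on partial containers can be illustrated with a small back-of-envelope sketch. The container size and per-container metadata cost below are invented placeholders for this example, not actual DDVE values:

```python
import math

CONTAINER_MB = 4.0          # assumed container size (illustrative only)
META_PER_CONTAINER_KB = 64  # assumed metadata cost per container (illustrative)

def containers_needed(write_sizes_mb, packed):
    """Count containers for a series of writes.

    packed=False: each frequent commit seals its own (possibly partial) container.
    packed=True:  cleaning has copied data forward into as few full containers
                  as possible.
    """
    if packed:
        return math.ceil(sum(write_sizes_mb) / CONTAINER_MB)
    return len(write_sizes_mb)

writes = [0.5] * 64  # 64 commits of 0.5 MB each (small writes / frequent commits)
before = containers_needed(writes, packed=False)  # 64 partial containers
after = containers_needed(writes, packed=True)    # 8 full containers
print(before * META_PER_CONTAINER_KB, after * META_PER_CONTAINER_KB)
# per-container metadata drops from 4096 KB to 512 KB after copy forward
```

Whatever the real constants are, the shape of the result is the same: metadata cost scales with container count, and cleaning reduces the container count for the same data.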

Scenario 3: Small Files

In cases where the user's environment has millions of small files (<1 MB), the metadata overhead is high relative to the data. This is an unsupported workload, and DDVE might not be the right solution for protecting it.
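A rough sketch shows why small files are disproportionately expensive: if each file carries a roughly fixed metadata cost, the metadata-to-data ratio grows as file size shrinks. The per-file metadata figure below is a made-up placeholder, not a DDVE number:

```python
META_PER_FILE_KB = 16  # assumed fixed metadata cost per file (illustrative only)

def metadata_ratio(file_count, file_size_kb):
    """Metadata bytes as a fraction of data bytes for a uniform file set."""
    return (file_count * META_PER_FILE_KB) / (file_count * file_size_kb)

small = metadata_ratio(5_000_000, 512)   # millions of sub-1 MB files
large = metadata_ratio(5_000, 512_000)   # the same data in 500 MB files
print(f"small files: {small:.3%}, large files: {large:.6%}")
```

Note that the file count cancels out: the ratio is driven entirely by file size, which is why "millions of small files" is a per-file-size problem rather than a capacity problem.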

Scenario 4: Dense Markers

Certain applications, such as Oracle or Commvault, introduce markers into the backup stream. DDVE deduplicates these markers, leading to high deduplication rates. Once again, the user faces a tradeoff: a high deduplication rate comes at the expense of increased metadata usage. The user can turn off in-application optimization in the backup application to reduce DDVE metadata usage, or accept the higher deduplication rates along with the increased metadata usage.
If in-application optimization cannot be turned off in the backup application, the only known solution in such cases is to add metadata disks to keep the system operational.

Scenario 5: Non-Virtual Synthetic backups with high deduplication rates

This scenario is uncommon, and the user needs to be aware of the potential impact on TCO: high deduplication rates can cause a significant increase in metadata usage. The only known solution in such cases is to add metadata disks to keep the system operational.

Summary

In the specific scenarios described above, we expect a significant increase in metadata consumption, and the recommended solution is to add metadata disks to keep the system operational. The product is operating as designed, and we highly recommend that you assess the potential impact of the above scenarios on TCO before deployment.
Workloads with a higher deduplication ratio need more metadata. Metadata storage can be dynamically expanded. When metadata storage space usage exceeds 80%, an alert is raised, and a new metadata disk should be added to the DDVE immediately to avoid running out of space. Follow the administration guide for the storage expansion procedure. We recommend always using 1 TB disks.
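The 80% rule above can be expressed as a trivial check. This is a minimal sketch in this example's own terms, not a DDVE API; the threshold and the 1 TB expansion unit come from the guidance above:

```python
ALERT_THRESHOLD = 0.80  # an alert is raised above 80% metadata usage
DISK_TIB = 1.0          # recommended expansion unit: 1 TB metadata disks

def needs_metadata_disk(size_gib: float, used_gib: float) -> bool:
    """Return True when metadata usage exceeds the 80% alert threshold."""
    return used_gib / size_gib > ALERT_THRESHOLD

# Values from the CLI example below: 4390.2 GiB used of 24290.4 GiB (~18%)
print(needs_metadata_disk(24290.4, 4390.2))   # False: well under threshold
print(needs_metadata_disk(24290.4, 20000.0))  # True: ~82% used, expand now
```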
The following CLI command (documented in the DDVE Installation and Administration Guide) shows active tier local-metadata usage.
For example:

sysadmin@atos-ddve# filesys show space tier active local-metadata
Active Tier: local-metadata usage
Size GiB   Used GiB   Avail GiB   Use%
--------   --------   ---------   -----
 24290.4     4390.2     19900.2   18.0%
--------   --------   ---------   -----
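If this usage needs to be checked from a script, the table can be parsed by picking out the one row whose last field is a percentage. This is a sketch against the sample output above; the exact output format may vary by DD OS release:

```python
def parse_usage(output: str):
    """Return (size_gib, used_gib, avail_gib, use_pct) from the usage table."""
    for line in output.splitlines():
        fields = line.split()
        # The data row has exactly four numeric columns, the last ending in '%'
        if len(fields) == 4 and fields[-1].endswith("%"):
            size, used, avail = (float(f) for f in fields[:3])
            return size, used, avail, float(fields[3].rstrip("%"))
    raise ValueError("usage row not found")

sample = """\
Active Tier: local-metadata usage
Size GiB   Used GiB   Avail GiB   Use%
--------   --------   ---------   -----
 24290.4     4390.2     19900.2   18.0%
--------   --------   ---------   -----"""
print(parse_usage(sample))  # (24290.4, 4390.2, 19900.2, 18.0)
```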

Article Properties


Affected Product

Data Domain

Product

Data Domain, Data Domain Virtual Edition

Last Published Date

12 Mar 2021

Version

3

Article Type

How To