Ask Sawal

Discussion Forum

What is persistent recovery in Commvault?

10 Answer(s) Available
Answer # 1 #

When you perform file-level restores from a backup copy, multiple block recoveries are submitted to the Job Controller, in addition to the restore job, as a single job called a Persistent Recovery job. The default timeout for a persistent recovery job is 7 days.

[71]
Pavati Master
Educator at Freelancing
Answer # 2 #

In part one, we showcased the capabilities of the Commvault Distributed Storage (formerly known as Hedvig) CSI Driver to support complete storage lifecycle management for stateful container workloads. In this blog, we’ll feature an in-depth overview of Commvault Distributed Storage snapshots and clones, the benefits of snapshots, and how they are seamlessly integrated into container orchestrators through the Commvault Distributed Storage CSI Driver.

With the growing adoption of Kubernetes within enterprise organizations, the cloud has become the destination of choice for not only modern but also legacy applications. With a cloud-first strategy, an organization’s data can be spread across multiple on-premises and/or cloud sites. When organizational data is spread across multiple disparate sites, continuous data protection can pose a significant challenge without a uniform data protection scheme.

With a single storage fabric that spans multiple sites, declarative data placement policies, and built-in snapshot capabilities, the Commvault Distributed Storage platform provides a uniform, location-transparent scheme for protecting organizational data.

Continuous data protection using snapshots

A snapshot can be defined as the state of a storage volume captured at a given point in time. Persistent point-in-time states of volumes provide a fast recovery mechanism in the event of failures, with the ability to restore from known working points. This capability has proven to be extremely beneficial in failure and recovery scenarios such as the ransomware attack discussed below.

Commvault Distributed Storage Snapshots

Commvault Distributed Storage volume snapshots are space-efficient metadata-based zero-copy snapshots. Every newly created Commvault Distributed Storage volume has a version number and a version tree associated with it. The version number starts with “1” and is incremented on every successful snapshot operation along with an update to the version tree. Every block of data written is versioned with the version number associated with the Commvault Distributed Storage volume at the time of the corresponding write operation.

Let’s take ransomware attacks as an example to understand how Commvault Distributed Storage snapshots provide data protection. Consider a volume that has been snapshotted successfully twice, so that its current version number is 3, when the attack begins.

At this point, any new writes that happen as part of the ransomware attack are recorded with version number 3. By reverting the Commvault Distributed Storage volume back to the previous version (2), the application can be recovered instantly.

The process of reverting a Commvault Distributed Storage volume to an earlier version does not depend on the size of the volume or the amount of data it contains. No data of the volume needs to be copied during either the snapshot or the revert operation, resulting in a data protection scheme that is simple, fast, and inexpensive from an operational point of view.
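
To make the versioning scheme concrete, here is a toy sketch (not Commvault’s actual implementation): every write is tagged with the volume’s current version, a snapshot merely increments the version number, and a revert simply discards blocks written after the chosen version.

```python
# Toy illustration of version-based, zero-copy snapshots (not Commvault's actual code).
class VersionedVolume:
    def __init__(self):
        self.version = 1                  # every new volume starts at version 1
        self.blocks = {}                  # block_id -> list of (version, data), newest last

    def write(self, block_id, data):
        self.blocks.setdefault(block_id, []).append((self.version, data))

    def snapshot(self):
        self.version += 1                 # zero-copy: no data is duplicated
        return self.version

    def revert(self, target_version):
        # Discard every block version written after the snapshot we revert to.
        for block_id, versions in self.blocks.items():
            self.blocks[block_id] = [v for v in versions if v[0] <= target_version]
        self.version = target_version

    def read(self, block_id):
        versions = self.blocks.get(block_id, [])
        return versions[-1][1] if versions else None


vol = VersionedVolume()
vol.write("b0", "application data")          # written at version 1
vol.snapshot()                               # version 2
vol.snapshot()                               # version 3
vol.write("b0", "encrypted by ransomware")   # ransomware writes land at version 3
vol.revert(2)                                # instant, size-independent recovery
print(vol.read("b0"))                        # -> "application data"
```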

Scheduled snapshots and SLAs

Data protection schemes for application workloads are defined in terms of Service Level Agreements (SLAs). At a minimum, an SLA specifies how frequently the data must be protected (the snapshot periodicity) and how long each recovery point must be retained (the retention period).

SLAs are set to align with your organization’s business needs with an inherent focus on business continuity. More specifically, SLAs are created to fulfill compliance rules for organizational data. As more applications move to the cloud, SLAs are also created to meet the application need for continuous delivery.

As an organization’s data grows, so do the SLAs, and the manual process of creating and updating SLAs can become a deal-breaker. It is of utmost importance to have a policy-driven method of managing SLAs that offers create-and-forget semantics, so that new data is inherently protected.

Data protection for containerized applications

In this section, we will string together the concepts presented thus far and demonstrate how they can be applied effectively to protect containerized applications. Commvault Distributed Storage CSI driver provides users the ability to create on-demand snapshots as well as automated scheduled snapshots of stateful containerized applications. Snapshot management through the Commvault Distributed Storage CSI driver is completely policy-driven, thereby enabling automation to be extended all the way to the data layer. Let’s take a look at how this is done.

On-demand Snapshots

The workflow for on-demand snapshots is implemented as follows:

Create a VolumeSnapshotClass for creating snapshots of persistent volumes.

Create a VolumeSnapshot of an existing persistent volume using this class.

Use the volume snapshot to create a new PersistentVolumeClaim.
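
A minimal sketch of these three steps, using the official Kubernetes Python client, is shown below. The resource names and the CSI driver string are placeholders, not values from the original post; substitute the driver name registered by your Commvault Distributed Storage CSI deployment. The equivalent YAML manifests applied with kubectl work just as well.

```python
# Sketch of the on-demand snapshot workflow with the official Kubernetes Python client
# (pip install kubernetes). All names and the driver string are placeholders.
from kubernetes import client, config

config.load_kube_config()
crd = client.CustomObjectsApi()
core = client.CoreV1Api()
ns = "default"
GROUP, VERSION = "snapshot.storage.k8s.io", "v1"

# 1. Create a VolumeSnapshotClass for creating snapshots of persistent volumes.
crd.create_cluster_custom_object(GROUP, VERSION, "volumesnapshotclasses", {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "VolumeSnapshotClass",
    "metadata": {"name": "cds-snapclass"},
    "driver": "csi.example.com",                 # placeholder for your CSI driver's name
    "deletionPolicy": "Delete",
})

# 2. Create a VolumeSnapshot of an existing persistent volume claim using this class.
crd.create_namespaced_custom_object(GROUP, VERSION, ns, "volumesnapshots", {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "demo-snap"},
    "spec": {
        "volumeSnapshotClassName": "cds-snapclass",
        "source": {"persistentVolumeClaimName": "demo-pvc"},
    },
})

# 3. Use the volume snapshot as the dataSource of a new PersistentVolumeClaim.
core.create_namespaced_persistent_volume_claim(ns, {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-pvc-restored"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {"apiGroup": GROUP, "kind": "VolumeSnapshot", "name": "demo-snap"},
    },
})
```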

Scheduled Snapshots

While the ability to create on-demand snapshots is an important feature, it is not a feasible option when it comes to managing large-scale production container ecosystems. With scheduled snapshots, users can easily create snapshot schedules for their persistent volumes, and the built-in snapshot scheduler of the Commvault Distributed Storage CSI driver takes consistent snapshots as per the specified SLA.

Kubernetes (and the CSI Spec) does not provide a native type for creating snapshot schedules. Snapshot schedules are implemented as a CRD (CustomResourceDefinition) and are created by the Commvault Distributed Storage CSI driver. After the CSI driver has been deployed, a user can create snapshot schedules by specifying the periodicity and the retention period as follows:
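
The schedule resource itself is vendor-specific, and its exact CRD group, kind, and field names are not reproduced in this answer; the sketch below uses hypothetical names purely to illustrate the shape of such a resource, matching the one-minute interval and two-minute retention described next.

```python
# Hypothetical snapshot-schedule resource: the CRD group, kind, and field names below are
# placeholders, NOT the Commvault Distributed Storage CSI driver's actual schema.
from kubernetes import client, config

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    "example.storage.io", "v1", "default", "snapshotschedules", {
        "apiVersion": "example.storage.io/v1",
        "kind": "SnapshotSchedule",
        "metadata": {"name": "every-minute"},
        "spec": {
            "interval": "1m",     # take a new snapshot every minute
            "retention": "2m",    # delete each snapshot after two minutes
        },
    })
```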

This example creates a simple interval schedule that creates a new snapshot every minute and deletes the snapshot after two minutes. Snapshot schedules can be easily customized to meet the application needs.

After a snapshot schedule has been created, create a new storage class with the snapshot schedule. This will ensure that any new persistent volume provisioned using this storage class will be protected as per the snapshot schedule.
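
A sketch of such a storage class follows; the provisioner string and the snapshotSchedule parameter key are placeholders rather than the driver’s documented values.

```python
# Sketch of a StorageClass that attaches a snapshot schedule to newly provisioned volumes.
# The provisioner and the "snapshotSchedule" parameter key are placeholders.
from kubernetes import client, config

config.load_kube_config()
client.StorageV1Api().create_storage_class({
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "cds-scheduled"},
    "provisioner": "csi.example.com",                    # placeholder CSI driver name
    "parameters": {"snapshotSchedule": "every-minute"},  # hypothetical parameter key
})
```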

After the storage class has been created, create a persistent volume claim (PVC).
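
For example, a claim against the scheduled storage class might look like this (the claim name and size are illustrative):

```python
# A PVC provisioned from the scheduled storage class; any volume created this way
# is protected according to the associated snapshot schedule.
from kubernetes import client, config

config.load_kube_config()
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-app-data"},
    "spec": {
        "storageClassName": "cds-scheduled",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
})
```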

Based on the associated snapshot schedule, you should now see snapshots created for this persistent volume claim every minute. To list the snapshots, run the following command:
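
The command itself is not reproduced in this answer; with kubectl it is typically `kubectl get volumesnapshot -n <namespace>`. An equivalent check with the Python client, printing each snapshot’s name and age, might look like:

```python
# List the VolumeSnapshots in a namespace and print each one's age, mirroring the
# AGE column that `kubectl get volumesnapshot` displays.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
snaps = client.CustomObjectsApi().list_namespaced_custom_object(
    "snapshot.storage.k8s.io", "v1", "default", "volumesnapshots")
now = datetime.now(timezone.utc)
for item in snaps["items"]:
    created = datetime.fromisoformat(
        item["metadata"]["creationTimestamp"].replace("Z", "+00:00"))
    print(item["metadata"]["name"], now - created)
```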

Use the snapshot AGE column to verify that snapshots are deleted according to the retention period.

[5]
Frans Kurita
Ticket Inspector
Answer # 3 #

The default timeout for a persistent recovery job is 7 days. For block-level restores using the Virtual Server Agent, the persistent recovery job remains open for 7 days.

[4]
Ranbir Pandya
Marine Engineer
Answer # 4 #

Aside from the restore job, multiple extent recoveries are submitted to the Job Controller as a single job called a Persistent Recovery job. The default timeout for a persistent recovery job is 7 days.

[3]
Yash Sood
Master's in Economics & Chinese (language), University of California, San Diego
Answer # 5 #

To show persistent recovery jobs for the Virtual Server Agent in the CommCell Console Job Controller, add an additional setting on the CommServe system.

[3]
Viti Mand
B. Tech from Delhi Technological University
Answer # 6 #

For block-level restores, in addition to the restore job, the Job Controller launches a persistent recovery job that opens a common pipeline, enabling multiple block recoveries to be processed as a single job.

[1]
Sai Anand
Title Examiner
Answer # 7 #

But before starting, let’s understand how the demand for Commvault professionals is continuously increasing worldwide:

Now, I hope you have a clear idea of Commvault’s demand, career opportunities, and salary insights. To simplify your interview preparation, we have grouped the Commvault interview questions into the following categories:

Commvault is a data management platform that assists organizations with data backup and recovery, cloud, virtualization, disaster recovery, security, and compliance.

Commvault software consists of modules to backup, restore, archive, replicate, and search data.

Simpana is the name of Commvault’s enterprise backup software platform, designed for backup, archiving, and reporting of data. Later Simpana releases also introduced Commvault’s built-in deduplication.

The key features of Commvault are listed below:

The various ways of doing High availability are listed below:

A storage policy is a data management entity with a set of rules that describe the lifecycle management of the protected data in a subclient’s content. It manages the subclient’s data even when that data resides on other servers in the CommCell.

The rules define how the data is managed and protected, where it resides, and other management options such as compression, encryption, and deduplication of the data in protected storage.

Below mentioned are the different types of storage policies:

A storage policy has only one primary copy, to which different types of auxiliary (secondary) copies can be added. The available types of secondary copies are as follows:

To configure the Commvault Proxy on the new client, follow the below steps:

Step 1: In the CommCell Browser, right-click the client computer to be used as the Commvault proxy and select Properties.

Step 2: In the Properties dialog box, click the Network button and select the Configure Firewall Settings checkbox.

Step 3: Click the Options tab and select the "This computer is in DMZ and will work as a proxy" checkbox.

Step 4: Click the Incoming Connections tab and select Add.

Step 5: Select the CommServe_client_name from the list.

Step 6: From the State list, select RESTRICTED and click OK.

A Commcell environment is the logical grouping of all the software components that secure, transfer, and manage data and information. It contains a CommServe host, MediaAgent, and Clients.

CommServe - It is the central management component that coordinates and executes all Commcell operations.

MediaAgent - It is a data transmission manager in the Commcell environment, which enables high-performance data movement and handles data storage libraries.

Client - A Client is a logical group of agents that defines the protection, management, and movement of data associated with the client.

A Silo is a collection of disk volume folders that are associated with the Deduplication Database. It contains the deduplicated data that are written on the disk storage.

In Commvault’s deduplication process, whenever a block of data is read from the source, a unique signature is generated for that block using a hash algorithm; blocks with matching signatures are stored only once.

The default block size for deduplicated storage policies is 128 KB.
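
Commvault does not spell out the hash function in this answer, so the sketch below is only illustrative: it reads a file in 128 KB blocks, computes a signature per block, and stores each unique block once, which is the essence of signature-based deduplication.

```python
# Illustrative deduplication sketch: hash 128 KB blocks and keep each unique block once.
# SHA-256 is used for illustration only; it is not stated to be Commvault's algorithm.
import hashlib

BLOCK_SIZE = 128 * 1024   # default block size for deduplicated storage policies
block_store = {}          # signature -> block data (stands in for the DDB plus disk library)

def backup(path):
    new_blocks = dedup_hits = 0
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            signature = hashlib.sha256(block).hexdigest()
            if signature in block_store:
                dedup_hits += 1              # duplicate block: only a reference is recorded
            else:
                block_store[signature] = block
                new_blocks += 1
    return new_blocks, dedup_hits
```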

This can be a generalized answer and the ratio that we could maintain is as follows:

You can download and view the reports in Commvault through the web console.

You can access reports through the cloud services portal and web console.

Cloning creates a full, independent copy of the data, while a snapshot captures an initial point-in-time state and then records only the subsequent changes. Both cloning and snapshots are considered good approaches for disaster recovery.

IRM stands for IntelliSnap Recovery Manager, which integrates hardware storage snapshots directly into a recovery/protection process for smaller environments. It offers fast and easy snap backups, centralized configuration, application support, etc.

The following steps show how IRM works:

Commvault software runs a building block approach for data management. A building block is a combination of server and storage to protect data sets regardless of the type or origin of the data.

Deduplication Building Blocks will allow you to protect huge amounts of data with minimal infrastructure, better scalability, and faster backups.

DDB seeding is a predefined workflow that allows you to transfer the initial baseline backup between two sites using a removable disk drive.

ContinuousDataReplicator (CDR) replicates data from source to destination. It provides data protection and recovery support for all types of data including application data and file systems.

Commvault provides efficient backup and restore of enterprise data from mainstream applications, operating systems, and databases. Commvault backup uses agents to interface with applications or files and facilitates the transfer of data from production systems to protected environments.

There are various types of backups available in Commvault, including full, incremental, differential, and synthetic full backups.

Some of the common or more frequent backup issues reported are as follows:

The backup copy operation allows you to copy snapshots of the data to any media. The primary snapshot stores the metadata information associated with the snap backups and when the backup copy operation is running, the primary snapshot data is copied to the media associated with the primary copy.

The backup copy can be configured in two ways:

Performing a backup captures the data of all the virtual machines in a subclient.

The below-mentioned steps will help you perform a VMware backup:

1. From the CommCell Console, navigate to Client Computers and select the virtualization client -> Virtual Server -> VMware -> backup set.

2. Right-click a subclient and click Backup.

Select the backup type:

3. If you want to modify the default settings or specify advanced options, click Advanced.

4. To run the job immediately, click Immediate; to create a backup schedule instead, click Configure to specify the schedule, and then click OK.

5. Click on Home->Job Controller to track the progress of the job.

Transport modes are selected automatically for backups and restores, based on the VSA proxy being used and on the setup of the virtual machines being backed up or restored.

The following transport modes are available for VMware: SAN, HotAdd, NBD (LAN), and NBD SSL.

First things first: the backup admin has to monitor whether all scheduled jobs are running as designed. If not, they have to go through the failed jobs, understand the fault, and rectify it.

Further, they have to perform some of the following health checks:

This should be an honest reply to the interviewer, because it is a real-world question that can only be answered from the experience you actually have.

During the backup copy operations, the following activities are executed:

[0]
Mayuri Krishna
Software Developer
Answer # 8 #

We are performing several Sybase and SAP HANA restores. One thing I noticed: the progress shown in Commvault is "completely wrong", or at least misleading.

It remains at 5%, while the HANA and Sybase logs in Commvault show that, for example, 30% is already loaded.

Because the progress remains at 5%, people think the restore is stuck (it can stay at 5% for hours with bigger databases), and sometimes people kill the restore because of that.

Why can't Commvault show the real status of the restore? For Sybase, for example, the clsybagent.log from Commvault shows the exact status, so Commvault knows it... so why not show it in the GUI/Command Center?

Even if it's not an exact figure, showing it as the job status would remove some frustration for users.

Only a question, not a request to change something :)

[0]
Meet Rafelson
Promotional Model
Answer # 9 #

Recalling the version 10 stubs creates a persistent recovery job. While that job is running, you can then recall the version 9 stubs.

[0]
Saloni Kalla
I am professional Youtube Video Game Player, Uploader
Answer # 10 #

These log files are created to log the persistent recovery jobs during stub recalls. Type: Integer. Location: LotusNotesDMAgent.

[0]
Shrishti Maharaj
Import Export Business