Salesforce Data Management at Scale: Recovery after Data Loss, Data Mining, Data Archive

Fortune 500 Technology Enterprise

A business enabling wireless communication and delivering technology pivotal to the Fourth Industrial Revolution.

This California-headquartered enterprise is known for innovative products ingrained in many of the digital devices used by billions of people around the globe. Daily wireless connectivity would not be possible without this organization's innovation. The company powers customer support with a complex, multi-org Salesforce implementation. CapStorm helps the business know its customers better by enabling data mining and near real-time data visualization, while the enterprise also minimizes costs through Salesforce data archival and streamlined sandbox seeding.

Problem

Huge Salesforce environments can be difficult to search, making robust analytics, queries, and data mining nearly impossible without a complementary solution.

This enterprise was an early adopter of Salesforce, and with over 50,000 employees, its Salesforce data volumes grew exponentially. Data grew to such a high volume that the Case object was archived and then reimplemented once the table exceeded 55 million records. The enterprise needed to leverage Salesforce data to gain insight into customer trends, but neither native reporting nor third-party Salesforce data extract connectors could handle such large data volumes. The business also required a solution for Salesforce backup and recovery, architected to support these massive record counts, both for Salesforce data extraction and for complex restores that could contain millions of interconnected records.

Solution

  1. Incremental Salesforce Backup

    Salesforce data is backed up incrementally into the enterprise’s Oracle databases. Salesforce objects that contain enterprise-critical data are backed up every 15 minutes in order to meet Recovery Point Objectives (RPO). Less critical data is replicated every 4 hours. This staged approach minimizes Salesforce API consumption and provides three key benefits: (1) a Salesforce backup, (2) a staging area for data mining and analytics, and (3) a storage area for records archived out of Salesforce.
  2. Recovery Testing with Sandboxes

    Recovery Time Objectives (RTO) are tested by restoring data sets to Salesforce sandboxes. The production recovery process is identical to Salesforce sandbox seeding, ensuring that multiple people in the organization know how to perform disaster recovery before an actual disaster strikes. In 2016, a production data loss occurred due to a Salesforce data center outage. Over six hours of data was lost; however, the business recovered all of it in minutes using the tested recovery solution.
  3. Data Mining & Analytics

    Data anomalies and patterns are detectable by connecting out-of-the-box data tools directly to the extraction database. The solution automatically maintains the database’s schema to mirror Salesforce, including when new objects are created, packages are added, or field definitions change. In addition to a database structure that dynamically mimics Salesforce, data retention is set by the business on an object-by-object level. This granular control over data retention gives the business the flexibility to monitor trends over long periods while also controlling data storage costs.
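CapStorm's replication engine is proprietary, but the staged, incremental approach in step 1 can be illustrated with a small sketch. The function below is hypothetical, not CapStorm code: it builds an incremental SOQL query that pulls only records modified since the last sync, keyed on the standard `SystemModstamp` audit field, and the per-object cadence table mirrors the 15-minute vs. 4-hour schedule described above.

```python
from datetime import datetime, timezone

# Illustrative per-object sync cadence in minutes, matching the staged
# approach described above: critical objects every 15 minutes, less
# critical objects every 4 hours. Object names here are assumptions.
SYNC_CADENCE_MINUTES = {
    "Case": 15,     # enterprise-critical
    "Account": 15,  # enterprise-critical
    "Task": 240,    # less critical
}

def build_incremental_query(sobject: str, fields: list[str],
                            last_sync: datetime) -> str:
    """Build a SOQL query fetching only records changed since last_sync.

    SystemModstamp is a standard Salesforce audit field updated on every
    record change, which makes it a reliable incremental watermark.
    """
    # SOQL expects UTC timestamps in ISO 8601 form, e.g. 2024-01-01T00:00:00Z
    watermark = last_sync.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        f"SELECT {', '.join(fields)} FROM {sobject} "
        f"WHERE SystemModstamp > {watermark}"
    )

if __name__ == "__main__":
    last = datetime(2024, 1, 1, tzinfo=timezone.utc)
    print(build_incremental_query("Case", ["Id", "Status", "SystemModstamp"], last))
    # SELECT Id, Status, SystemModstamp FROM Case WHERE SystemModstamp > 2024-01-01T00:00:00Z
```

A scheduler would run this query at each object's cadence and upsert the results into the Oracle replica, so only changed rows cross the Salesforce API.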
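The data-mining pattern in step 3, pointing standard SQL tools at the replicated database rather than at Salesforce itself, can be sketched as follows. This is an illustrative example, not CapStorm code: `sqlite3` stands in for the enterprise's Oracle replica so the snippet is self-contained, and the anomaly rule (daily case volume above mean + 1 standard deviation) is a hypothetical stand-in for whatever analytics the business actually runs.

```python
import sqlite3
import statistics

# sqlite3 stands in for the Oracle replica; the table loosely mirrors
# the replicated Salesforce Case object.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sf_case (id INTEGER PRIMARY KEY, created_date TEXT)")

# Seed four days of case volume, with an anomalous spike on 2024-01-04.
volumes = {"2024-01-01": 5, "2024-01-02": 4, "2024-01-03": 6, "2024-01-04": 25}
conn.executemany(
    "INSERT INTO sf_case (created_date) VALUES (?)",
    [(day,) for day, n in volumes.items() for _ in range(n)],
)

def anomalous_days(conn, threshold=1.0):
    """Flag days whose case volume exceeds mean + threshold * stdev.

    With only a few sample points a single outlier caps the z-score near
    1.5, so a low threshold is used here; production analytics over years
    of replicated data could apply a stricter cutoff.
    """
    counts = conn.execute(
        "SELECT created_date, COUNT(*) FROM sf_case GROUP BY created_date"
    ).fetchall()
    values = [c for _, c in counts]
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [day for day, c in counts if c > mean + threshold * stdev]

print(anomalous_days(conn))  # -> ['2024-01-04']
```

Because the query runs against the replica, it costs no Salesforce API calls and is unconstrained by SOQL's aggregation limits; any BI tool with a SQL connector can run the same analysis.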

Outcome

Proven Salesforce data recovery with RPO and RTO measured in minutes.

Improved customer experience and increased revenue by mining Salesforce data for anomalies and trends.

Salesforce storage cost control with self-hosted Salesforce record archival.

