Everything You Need to Know About Salesforce BigObjects

BigObjects are Salesforce's answer to handling massive amounts of data on the Salesforce platform. Unlike Salesforce standard and custom objects, whose data counts against the organization's storage limit (an allowance that varies by Salesforce edition), BigObjects are built for scale. Today's mission is to dive deeply into BigObjects, including how to use Salesforce BigObjects, BigObject limitations, and best practices when creating BigObjects.

What are BigObjects?

Salesforce has three ways to store data on the platform: Standard Objects, Custom Objects, and BigObjects. Most CRM data is stored in Salesforce standard and custom objects. These types of data are frequently accessed for day-to-day customer transactions. BigObjects provide an alternative location for data storage, particularly for data with millions, even billions of records. To fully understand the benefits of BigObjects, we must first define Standard and Custom objects to highlight the critical difference between them, particularly their impact on data storage.

Salesforce Standard & Custom Objects

A new Salesforce org contains a pre-built configuration to support the most common use cases. For example, Sales environments start with Standard objects to collect Lead, Opportunity, and Account data. These Standard objects are automatically present for every Salesforce customer, and they support a wide range of customization, including adding new fields or record types. 

Salesforce custom objects expand the data model by allowing you to create new database tables, aka objects. Custom objects are created by a Salesforce administrator or automatically whenever a managed package from the Salesforce AppExchange is installed into an org.

Salesforce's standard and custom objects are not designed to handle massive data volumes. When data volumes grow, a Salesforce org will see performance issues, particularly in reporting, on top of increased data storage costs. Any object containing over 1 million records can cause performance degradation, especially in combination with many custom fields. When data volumes on Salesforce get high, there are two options:

  1. Archive data off of the Salesforce platform
  2. Move data to a BigObject 

BigObjects

Salesforce custom and standard objects are designed to capture transactional data, such as a customer's interactions with a business's sales and support teams. This data is updated frequently, and a single record can connect to hundreds or thousands of related records. Conversely, BigObjects are created to aggregate and maintain massive data sets, and the data contained in BigObjects is rarely edited. Instead, BigObjects tend to grow as records are archived into them, offsetting data storage costs and improving overall organization performance.
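
As a minimal sketch of how records land in a BigObject, assume a hypothetical custom BigObject named Case_Archive__b (the __b suffix marks BigObjects; the object and its fields are illustrative, not part of any real org):

```apex
// BigObject rows are written with Database.insertImmediate rather than
// standard DML. Writes are keyed on the object's index, so re-inserting
// a row with the same index values overwrites it instead of duplicating it.
Case_Archive__b row = new Case_Archive__b(
    Created_Date__c = System.now(),
    Case_Number__c  = '00001042'
);
Database.insertImmediate(row);
```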

FieldHistoryArchive is the most common BigObject: Salesforce automatically provides this standard BigObject to contain archived record field history (it is populated when the Field Audit Trail add-on is enabled).
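
As a sketch, a query against FieldHistoryArchive must filter in the object's index order (FieldHistoryType, then ParentId, then CreatedDate); the record ID below is a placeholder:

```apex
// Retrieve the archived field history for a single Account record.
List<FieldHistoryArchive> history = [
    SELECT Field, OldValue, NewValue, CreatedDate
    FROM FieldHistoryArchive
    WHERE FieldHistoryType = 'Account'
      AND ParentId = '001xx000003DGQRAA4'
      AND CreatedDate > 2022-01-01T00:00:00Z
];
```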

Why You Should Use a BigObject

There are two common use cases that drive adoption of Salesforce BigObjects:

Storage of big data

BigObjects can store billions of records while keeping the data accessible natively inside Salesforce. This lets users query and report on the data without requiring additional off-platform storage or integrations. As a result, BigObjects can help a business analyze customer trends over a long period.

Reduction of data costs

The per-record storage rate for a BigObject is significantly lower than the per-record cost for a Salesforce organization exceeding its data storage limit. When this article was published, the general Salesforce pricing guide listed BigObjects as a $16,800 yearly investment, or roughly $0.0003 (three hundredths of a cent) per record. Alternatively, while not publicly published, Salesforce's data storage rate is rumored to be $125 per month for 500 MB of extra data storage. Let's do some simple math:

BigObject cost:

$0.0003 per record

Salesforce standard or custom object cost:

$125 per month = $1,500 per year for a 500 MB storage block. At Salesforce's standard accounting of 2 KB per record, that block holds roughly 250,000 records:

$1,500 ÷ 250,000 records = $0.006 per record, or 20x the BigObject rate
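
The same math as a quick sketch for an Apex anonymous window (the 2 KB-per-record figure is Salesforce's standard storage accounting; the prices are the ones quoted above):

```apex
// Back-of-the-envelope per-record cost comparison using the prices above.
Decimal bigObjectPerRecord = 0.0003;        // USD per record, from the published BigObject price
Decimal blockAnnualCost    = 125 * 12;      // USD: $125/month for 500 MB of extra storage
Decimal recordsPerBlock    = 500000 / 2;    // 500 MB at ~2 KB per record = 250,000 records
Decimal standardPerRecord  = blockAnnualCost / recordsPerBlock;
System.debug(standardPerRecord);            // 0.006 USD per record, 20x the BigObject rate
```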

With such a large cost discrepancy, it should be no surprise that Salesforce BigObjects are rapidly being adopted to support both data storage and data cost use cases! 

BigObject Technical Consideration: Index

Unlike with Salesforce's custom objects, you determine which fields are used as the index when a BigObject is created. This is perhaps the most critical technical consideration, as it determines the future use of the BigObject: the index controls how the data stored in the object can be filtered and queried.

An index is set upon BigObject creation, and it cannot be changed later

Consider the implications of this statement: a BigObject can contain billions of records, and an index that does not promote efficient data filtering or searching could negate the value of using this technology.
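
To make this concrete, here is a minimal sketch of how the index constrains queries. The object and field names (Customer_Event__b, Event_Date__c, Event_Id__c, Status__c) are hypothetical:

```apex
// Assume Customer_Event__b is a custom BigObject whose index is,
// in order: Event_Date__c (1st field), Event_Id__c (2nd field).

// Works: the filter follows the index order, and the range operator
// is applied to the last field in the filter.
List<Customer_Event__b> events = [
    SELECT Event_Id__c, Status__c
    FROM Customer_Event__b
    WHERE Event_Date__c >= 2023-01-01T00:00:00Z
];

// Fails at query time: Status__c is not part of the index, so the
// platform has no efficient way to locate matching rows.
// WHERE Status__c = 'Archived'
```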

Example: A bad index

This index uses an Account lookup as the first indexed field. Here's why this could cause a big problem in the future:

There are three records in my BigObject today:

  • Initech  
  • Massive Dynamic
  • Umbrella Corporation

When a new record is created –– for this example, Stark Industries –– the record will be inserted into the index between Massive Dynamic and Umbrella Corporation. And because the first index field is a lookup, multiple records can share the same first index value. This means we have no efficient way to search the index to see which records are new or which records have changed!

Example: A good index 

This index is designed for queries: our first indexed field is based on the record creation date, creating an iterator. For example, we can then query for records created since a specific date (see the sketch after the list below). The second part of our index is a unique case number, which functions as a unique identifier even if we end up with two records created on the same date and time. Finally, the Account lookup and phone number are captured much later in our index.

A good BigObject index must contain two core components:

  1. An iterator, like a timestamp
  2. A uniqueness indicator, like an ID
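
A minimal sketch of an incremental query against this good index, again using the hypothetical Case_Archive__b object with index fields, in order, Created_Date__c, Case_Number__c, Account__c, and Phone__c:

```apex
// Incremental extraction: the iterator (Created_Date__c) lets us ask for
// everything since the last run, and Case_Number__c gives each returned
// row a unique identity even when timestamps collide.
Datetime lastRun = Datetime.newInstanceGmt(2023, 1, 1);
List<Case_Archive__b> newRows = [
    SELECT Created_Date__c, Case_Number__c, Account__c, Phone__c
    FROM Case_Archive__b
    WHERE Created_Date__c > :lastRun
];
```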

CapStorm Supports Full Backups of Salesforce BigObjects

When BigObjects are used, the data stored within them may need to be backed up to ensure the organization has a comprehensive Salesforce business continuity plan. At the time this article was published, Salesforce’s Backup Solution did not support the backup of BigObjects. FieldHistoryArchive, the most common BigObject, was not supported by Ownbackup (Knowledge Base: Salesforce Objects Included and Excluded in Backups), Grax, Spanning, and others.

CapStorm supports the replication of FieldHistoryArchive and other BigObjects. The CopyStorm application supports incremental replication from Salesforce BigObjects that have well-constructed indexes: added or modified records are replicated during each backup, so BigObject backups can run at a high frequency. For BigObjects that lack a uniqueness indicator and an iterator, CapStorm supports a full extract of the records.

Want to learn more? Check out a short demo video below:

For even more information on CapStorm’s Salesforce backup solutions, talk to one of our experts today!

Rebecca Gray

Rebecca is a 5-year Salesforce fanatic and certified Salesforce Admin, Service Cloud Consultant, Sales Cloud Consultant, and App Builder. She volunteers in the Salesforce community, leading the Saint Louis, MO Salesforce Admin Group, and is a former Lightning Champion. In her day job, Rebecca supports Customer Success, helping CapStorm customers achieve their goals for Salesforce data management.

About CapStorm

CapStorm is the most technologically advanced Salesforce data management platform on the market. Billions of records per day flow through CapStorm software, and our solutions are used across industries, from credit card companies and telecom providers to insurance agencies, global banks, and energy providers.
