How We Manage Testing Our Products at CapStorm

The entire staff at CapStorm takes Quality Assurance (QA) extremely seriously; it is embedded in our DNA. Before every release of a new product or a new version of an existing product, a dedicated cycle of testing is conducted to exercise as many features, both old and new, as possible. Should any discrepancies be discovered during that process, it’s back to the drawing board in the Software Development Life Cycle (SDLC).

Software Development Life Cycle Defined 

Such testing falls into the categories of Functional Testing, Integration Testing, System Testing, Performance Testing, Concurrency Testing, and other related testing types with overloaded industry names that, sadly, have definitions that vary widely depending on which team you are talking to.


SDLC with CapStorm

However, CapStorm’s Software Development Life Cycle doesn’t begin with testing. Before submitting new functionality (in the form of new code) to our Version Control System (VCS) and running a build, our Agile development processes strive for 90% (or better) code coverage when exercising our suite of Unit Tests, each of which is intended to test a granular piece of functionality (as opposed to an entire system). Additionally, every line of code that is added or modified is reviewed by the technology team before it is submitted to the VCS. Every night, a build is run on our Continuous Integration platforms; part of that process runs the Unit Tests and emails a report of any test failures to the technology team.
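
To make this concrete, here is a minimal sketch of what one such granular Unit Test might look like, written against JUnit 4. The class and the field-name mapping it tests are invented for illustration; they are not CapStorm source code.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical example: a granular Unit Test exercises one small piece
    // of functionality in isolation, not an entire system.
    public class FieldNameMapperTest {

        // Toy implementation under test, included so the example is
        // self-contained.
        static String toDatabaseColumn(String salesforceField) {
            return salesforceField.toLowerCase();
        }

        @Test
        public void lowercasesSalesforceFieldNames() {
            // One focused assertion keeps the test granular and the
            // failure report unambiguous.
            assertEquals("account_name__c", toDatabaseColumn("Account_Name__c"));
        }
    }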


To put this into perspective, what is considered ‘good’ code coverage varies widely, with some (Atlassian, for example) considering 80%, or even levels as low as 70%, to be acceptable for general-release software.


Unlike most other software products, especially hosted SaaS model platforms, CapStorm’s products run on the database of our customer’s choice. This is one of CapStorm’s strengths in the industry. It’s your Salesforce data, it’s your choice of which database you would like to use to manage your Salesforce data, and it’s your choice of what you want to do with your data.


This complicates our testing strategies, which must now include testing across the numerous databases supported by CapStorm (and there are many).
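
As a rough illustration of that extra testing dimension, here is a hedged sketch of running the same smoke test against several JDBC databases. The URLs are placeholders (credentials omitted), and the table exercise is invented; this is not CapStorm’s actual test matrix.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Hypothetical sketch: run one and the same smoke test against every
    // supported database. The JDBC URLs below are placeholders.
    public class MultiDatabaseSmokeTest {

        private static final String[] JDBC_URLS = {
            "jdbc:postgresql://localhost/capstorm_test",
            "jdbc:sqlserver://localhost;databaseName=capstorm_test",
            "jdbc:oracle:thin:@localhost:1521/capstorm_test",
            "jdbc:mysql://localhost/capstorm_test"
        };

        public static void main(String[] args) throws Exception {
            for (String url : JDBC_URLS) {
                // The same DDL must behave identically on each database,
                // despite differences in SQL dialect and driver behavior.
                try (Connection conn = DriverManager.getConnection(url);
                     Statement stmt = conn.createStatement()) {
                    stmt.execute("CREATE TABLE smoke_test (id INTEGER)");
                    stmt.execute("DROP TABLE smoke_test");
                    System.out.println("OK: " + url);
                }
            }
        }
    }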


Add to this the fact that, while the Backup and Restore of Salesforce data sounds simple, it is, in fact, extremely complex. Our flagship product, CopyStorm, which pulls data down from Salesforce into our customer’s on-premises database, has to take into consideration the millions of things that can go wrong. It’s the nature of the beast, and problematic for all client-server software systems. Here’s an analogy.


When you phone a friend and hear a busy signal, or are immediately directed to voicemail, do you know if your friend is busy? Or if something terrible has happened to your friend? You often don’t know. What is your recourse? Well, a retry, of course. You will keep calling back in the hopes of eventually connecting with your friend, especially if they never return your call. The same is true of computer communications.


SDLC Complexity with Salesforce

When CopyStorm calls out to Salesforce for information (what objects have been defined for my organization, what are the field names and types, etc.), there is always the possibility of a connection error or a timeout. And just like phoning your friend, our software doesn’t know if the network is down or merely running very slowly. The software’s best recourse is to retry. But how many times should we retry? How long should we wait for a response from Salesforce before assuming there is a connection problem in the first place?


So, when the answer to those fundamental questions is “well, it depends …”, it’s a signal that the customer will need to be provided with the tools to fine-tune those kinds of parameters.
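
Here is a minimal sketch of what such a tunable retry loop might look like. The parameter names (maxRetries, initialWaitMs) and the exponential-backoff choice are illustrative assumptions, not CopyStorm’s actual configuration keys.

    import java.util.concurrent.Callable;

    // Hypothetical sketch of a tunable retry loop. A real product would
    // read these values from customer-facing configuration.
    public class RetryPolicy {

        private final int maxRetries;      // how many times to "call back"
        private final long initialWaitMs;  // how long to wait before redialing

        public RetryPolicy(int maxRetries, long initialWaitMs) {
            this.maxRetries = maxRetries;
            this.initialWaitMs = initialWaitMs;
        }

        public <T> T call(Callable<T> salesforceRequest) throws Exception {
            long waitMs = initialWaitMs;
            Exception lastFailure = null;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    return salesforceRequest.call();
                } catch (Exception connectionProblem) {
                    // Like redialing a friend: we cannot tell whether the
                    // network is down or just slow, so wait and try again.
                    lastFailure = connectionProblem;
                    Thread.sleep(waitMs);
                    waitMs *= 2; // back off a little more each time
                }
            }
            throw lastFailure; // out of retries: surface the last error
        }
    }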


These are just two of the many, many knobs, buttons, and switches that CopyStorm needs to expose so the customer can fine-tune their backup strategy. But the more knobs, buttons, and switches the product offers, the more permutations need to be tested: ten independent on/off switches alone yield 2^10 = 1,024 combinations.


SDLC with Automated Testing

Being a lean and mean technology team means we have to innovate to alleviate the need for exhaustive manual testing. It is our experience that throwing bodies at problems rarely solves the hard ones. If anything, more people means more communication breakdown, which is a problem in itself.


Our solution is to supplement the Unit Test suite that each software developer is responsible for creating with Test Automation tools we have developed specifically for our software products.


One of our innovations in Test Automation is a Domain Specific Language (DSL) written to allow for creating small scripts with commands relevant to the Backup and Restore (and Governance) processes our products provide. This mini programming language removes the huge amount of scaffolding code that would otherwise be needed to loop through nested sequences of values for those knobs, buttons, and switches. With a single command to define them, the DSL engine takes over and whips through each permutation, executes a test, and validates the result. These scripts, commands, and validators can then be shared with all members of the technology team, so this process is proving to be an invaluable Test Automation technique.
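
CapStorm has not published the DSL itself, so the following is only a sketch of the underlying idea: declare each knob’s values once and let an engine visit every combination, instead of hand-writing nested loops. All names and values are invented.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the permutation idea behind a testing DSL.
    public class PermutationEngine {

        // Visit the cartesian product of the given value lists.
        static void forEachPermutation(List<List<String>> knobs,
                                       List<String> chosen) {
            if (chosen.size() == knobs.size()) {
                runOneTest(chosen);               // a complete combination
                return;
            }
            for (String value : knobs.get(chosen.size())) {
                chosen.add(value);
                forEachPermutation(knobs, chosen);
                chosen.remove(chosen.size() - 1); // backtrack
            }
        }

        static void runOneTest(List<String> settings) {
            // The real engine would run a backup/restore test with these
            // settings and validate the result; here we just print them.
            System.out.println("batchSize=" + settings.get(0)
                + " maxRetries=" + settings.get(1)
                + " timeoutMs=" + settings.get(2));
        }

        public static void main(String[] args) {
            List<List<String>> knobs = List.of(
                List.of("200", "1000", "2000"),   // batchSize
                List.of("0", "3", "10"),          // maxRetries
                List.of("5000", "60000"));        // timeoutMs
            forEachPermutation(knobs, new ArrayList<>()); // 3 x 3 x 2 = 18 tests
        }
    }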


Taking this a step further is the introduction of what we refer to as a Test Bot (short for robot): rather than whip through every possible permutation, the DSL engine randomly generates command sequences and then runs the test and validation for each one. Such randomness helps catch problems like concurrency/thread-safety issues that plague the software industry (are you doing concurrency testing? Are you sure?). Let those random Bot tests run for a couple of days, and if no problems are reported, then it is likely that the software does not have multi-threading issues.
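
The real Bot is internal to CapStorm, so this is only a hedged sketch of the core idea; the command names are invented. Note the logged seed, which is what makes an otherwise random run repeatable.

    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch of a Test Bot: pick random command sequences
    // rather than enumerating every permutation.
    public class TestBot {

        private static final List<String> COMMANDS = List.of(
            "backup-incremental", "backup-full", "restore-object",
            "validate-checksums", "simulate-timeout");

        public static void main(String[] args) {
            long seed = System.currentTimeMillis();
            System.out.println("bot seed: " + seed); // log it for replay
            Random random = new Random(seed);

            while (true) { // the point is to let this run for days
                String command = COMMANDS.get(random.nextInt(COMMANDS.size()));
                System.out.println("executing: " + command);
                // The real Bot would execute the command (possibly on
                // several threads, to shake out concurrency bugs) and
                // validate the result after each step.
            }
        }
    }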


Finally, our Test Automation captures each random Bot sequence and creates a replay script from it. There’s nothing worse than running a Bot test overnight, only to come back to the office in the morning and observe a failure without knowing which commands were executed (and in what order) leading up to it. By capturing these command sequences as a replay script and running it through the DSL engine, and assuming that our Test scripts and Test data are largely deterministic, there is a good chance of reproducing the problem. Reproducible problems are fixable problems. The first step in fixing any software bug is to make it reproducible. Our DSL engine, combined with the replay scripts, has proven to be a lifesaver.
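
A minimal sketch of the capture-and-replay idea, assuming a simple one-command-per-line script format (the real format is not published):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Hypothetical sketch: the Bot appends each command to a script file
    // as it runs; after a failure, the same file is fed back through the
    // engine to reproduce the problem deterministically.
    public class ReplayRunner {

        // Called by the Bot after each command it executes.
        static void capture(Path script, String command) throws IOException {
            Files.writeString(script, command + System.lineSeparator(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Re-execute last night's captured sequence, in the original order.
        static void replay(Path script) throws IOException {
            for (String command : Files.readAllLines(script)) {
                // With largely deterministic test scripts and test data,
                // replaying has a good chance of reproducing the failure.
                System.out.println("replaying: " + command);
            }
        }

        public static void main(String[] args) throws IOException {
            replay(Path.of(args[0]));
        }
    }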


Are there any improvements that can be made to our Test Automation strategy? Of course there are. But we are getting a lot of mileage out of what we already have built, and we are continuously striving for improvement. Stay tuned!

Steve Widom

With over three decades of software development experience, Steve is grateful to have lived through the Computer Age, and is a member of the CapStorm technology team specializing in data Governance, as well as Test Automation.

About CapStorm

CapStorm is the most technologically advanced Salesforce data management platform on the market. Billions of records per day flow through CapStorm software, and our solutions are used in industries ranging from credit card companies and telecom providers to insurance agencies, global banks, and energy providers.
