How Automation Testing Improved Development Speed & Quality
Today we discuss the importance of software testing in blockchain, outline the benefits of automation testing and explain how to get involved in the Quality Assurance of our open source project.
Many blockchain projects don’t survive long after reaching production. For most, the lack of proper software testing is one of the main reasons for their demise. It’s estimated that over half a billion dollars worth of cryptocurrency has been lost due to bad code in the last year alone. You have probably heard about The DAO’s code loophole, which allowed attackers to drain 3.6 million ETH (worth $70 million at the time) from the Ethereum-based smart contract.
Another notorious case was the Parity bug which resulted in over $150 million permanently frozen.
Even Bitcoin itself is not immune to bugs. Late last year, a flaw discovered in the code would have allowed malicious individuals to artificially inflate Bitcoin’s supply by spending the same input twice. Had the bug not been quickly identified and addressed, it could have had catastrophic effects on the network. This is just the tip of the iceberg — there are plenty of smaller incidents caused by inexperienced or inattentive developers that don’t make the headlines.
What does this tell us? In development, things can go wrong fast and the outcome can be ugly. This is why software testing is so important for any project utilizing blockchain technology, such as blockchain platforms, blockchain applications or blockchain-based services.
In this article, we will discuss our experience and best practices with software testing while developing Lisk, a blockchain application platform. We will also show you how implementing automation testing improved our internal workflows and code reliability. Lastly, we will show you how you can get involved in testing our open source software.
Introduction to Blockchain and Lisk
You’ve probably heard about blockchain in the context of cryptocurrencies like Bitcoin, but what makes this new technology so special? The blockchain, which is a type of distributed ledger technology (DLT), is an open, distributed database that is able to record transactions among parties permanently and in an efficient, verifiable way.
Those transactions are packed into blocks, which are cryptographically signed and form the actual chain. Data stored in a blockchain can’t be altered or tampered with, as all records are immutable. Once data is saved to the ledger, it stays there forever.
The blockchain is also a decentralized network, meaning there is no central authority with control over it.
Lightcurve is a blockchain product development studio and consultancy based in Berlin, Germany. We’re currently focused on developing all aspects of Lisk, including the product, marketing, community and design.
What is Software Testing?
Software testing is defined as a set of activities performed to ensure that software behaves as expected and is free of bugs. A proper software testing process helps you identify and prevent mistakes, ensure that the actual implementation of a feature matches its requirements and increase overall confidence in the code.
Testing blockchain applications is not much different from testing non-blockchain ones. However, with blockchain, additional test metrics are involved, for example:
- Chain size. The longer the blockchain is, the more data it contains, and the more space it takes on the datastore. The chain can grow very fast and there is no limit on its size, as new blocks are constantly being added. We need to know the maximum possible amount of data stored on the chain over a certain period of time. This way we can estimate how much space the blockchain will take, for example a year from now.
- Throughput in blockchain is measured as the number of transactions per second (TPS). Higher TPS is always better, but it comes at the cost of increased overall network load, and not every node is able to catch up. TPS as a factor of scalability is a challenge and a hot topic of discussion in the blockchain industry. Many projects chase TPS blindly without focusing on other, more important indicators of performance. This often results in projects becoming less decentralized, which in turn undermines the core idea of blockchain.
- Security & cryptography. The code needs to be constantly reviewed and audited to ensure that there are no flaws in the creation of new tokens, the maintenance of account balances, or the verification of blocks and transaction signatures.
- Data integrity ensures that all the data stored on the blockchain is consistent between network nodes. For blocks and transactions, this is guaranteed by cryptography. However, some blockchain applications also calculate and maintain state (for example account balances) in memory or in helper databases; to make sure this derived data stays consistent, there is no other way than comparing it directly between nodes.
- Data propagation is the distribution of data from one node to another. A decentralized network can consist of thousands of nodes and all of them need to maintain a reasonable amount of connections with each other. When some portion of data hits one node, it needs to be forwarded to the entire network without disruption. The time this propagation takes is an important metric.
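To make the chain-size metric above concrete, growth can be roughly estimated from the block interval and an assumed average block size. The sketch below uses hypothetical parameters, not Lisk’s actual protocol values:

```javascript
// Rough estimate of blockchain growth over time.
// All parameters are hypothetical examples, not actual Lisk values.
function estimateChainGrowth({ blockIntervalSec, avgBlockSizeBytes, days }) {
  const blocksPerDay = (24 * 60 * 60) / blockIntervalSec;
  const totalBlocks = blocksPerDay * days;
  const totalBytes = totalBlocks * avgBlockSizeBytes;
  return { totalBlocks, gigabytes: totalBytes / 1024 ** 3 };
}

// Example: 10-second blocks, 5 KiB average block size, one year.
const estimate = estimateChainGrowth({
  blockIntervalSec: 10,
  avgBlockSizeBytes: 5 * 1024,
  days: 365,
});
console.log(estimate.totalBlocks);          // 3153600 blocks in a year
console.log(estimate.gigabytes.toFixed(2)); // ~15.04 GiB
```

The same arithmetic works in reverse: given a target storage budget, you can derive the maximum sustainable average block size.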
Why blockchain developers need to pay much more attention to detail
If a bug was introduced in a centralized system, providing a fix would be relatively straightforward. Even if it corrupted some data, it would be easier to correct. This is because in most cases, the company maintaining the application has complete control over the data. Given that blockchains are immutable ledgers, corrupted data is incredibly hard if not impossible to correct.
Complicating the process even further, delivering a fix needs to be coordinated with all participants of the decentralized network. With Lisk, we must coordinate every release with hundreds of node operators, as well as block producers called delegates in our DPoS consensus algorithm.
Taking all these scenarios into account, the consequences of bugs in blockchain applications can be much more dangerous than in centralized software.
Now that we have discussed the importance of testing in a blockchain project, we can divide the actual tests according to the way we execute them:
- Manual testing is executed by QA/test engineers and is useful when some test scenarios, such as those for new features, are not yet migrated or ready to be executed in an automated way. However, not everything can be done this way for various reasons, including difficulty, time and budget constraints. Manual testing is overall the most time-consuming method.
- Automation testing is the basis of continuous delivery, a development methodology that allows teams to safely deploy changes to production. Continuous delivery enables developers to find bugs quickly and helps teams ship new releases with confidence. For complex applications, automation testing can reduce the time required for the release process from months or years to days or even hours. Investing time in a high-quality test suite can dramatically enhance developers’ productivity. However, it takes effort to implement and maintain both the test scenarios and the infrastructure for executing them.
Different types of automated tests
We can distinguish a few types of automated tests, such as unit tests, integration tests and functional tests. However, in some existing test suites these types are confused with each other, with no clear distinction between them. Such a test suite ends up not being well suited for any particular purpose. It’s very important for a developer to understand the different types of tests, as each one has a unique role to play.
- Unit testing is used as a fast feedback mechanism for a developer and ‘first line of defense’ during the development process. In unit testing, particular units of code (functions) are tested independently from each other with simulated input and the main focus is to test the logic in the unit. This means I/O operations (like file access, Ajax/RPC calls, DOM interaction), timers and expensive functions (encryption) are usually faked.
- Integration testing is a defense mechanism against protocol changes (e.g. argument order) in dependencies that were mocked at the unit level. Stubbing is often avoided in integration tests, and the actual interaction between units is tested. Integration tests are therefore more expensive than unit tests.
- Functional testing, to paraphrase Eric Elliott, is usually considered a subset of integration testing, as it tests all pieces of the application together (in the context of the running application).
Continuous Integration is a best practice when it comes to automation testing
Continuous Integration (CI) is a software development practice which depends on frequent integration of the code into a shared repository. Every time a team member commits code changes to version control (for example Git), the automated process of building and testing the code can be triggered. Developers are encouraged to submit their code together with tests (unit, functional, integration) after completing every small task, such as fixing an issue or implementing a feature.
The automated build system fetches the latest version of the code from the shared repository and applies changes if needed. It then executes all scheduled actions against it, such as running tests. CI is considered a best practice because software developers need to integrate their changes with those made by other members of the development team. It helps to avoid merge conflicts, difficult bugs and duplicated effort.
This is because CI forces the code to be merged into a shared version control branch continuously, making it possible to identify problems earlier and more easily. It also minimizes both the time spent on debugging and the time required for code reviews, allowing developers to focus more on adding features.
Which platform to pick? Travis CI vs. CircleCI vs. Jenkins
The popular CI platforms currently available vary in features and flexibility. Some are free; for others, you need to pay.
- CircleCI is very easy to start with. There are some free plans available, but with some limitations, like 1 concurrent job with 1 container and no parallelism. It’s a cloud-based tool, so you don’t need to host the infrastructure on your own. It has various integrations such as GitHub, Slack, Jira, etc. The initial setup is very easy and they have great customer support.
- Travis CI is very similar to CircleCI, but more flexible. It allows you to run builds and tests on Linux and Mac OS X at the same time, and supports more languages (they even provide tutorials for them). It’s free for open source projects.
- Jenkins is the leading open source automation server and definitely the most flexible option. You need to host it yourself, so it requires more effort to set up initially and to maintain later. Jenkins gives you full control over every aspect of your builds. It can also be extended with plugins, and hundreds are already available, so you can integrate it with basically any tool you want. While using Jenkins for small projects can be overkill, it’s great for large ones.
If you want to compare CI platforms in more detail, there is a very nice comparison available on Stackshare.io.
Software testing is not enough — introducing Quality Assurance
While software testing is very important, it belongs to a wider scope of Quality Assurance. What does this term mean?
Quality Assurance (QA) is much more than just testing. It encompasses the entire software development process. Quality Assurance includes processes such as requirements definition, software design, coding, source code control, code reviews, software configuration management, testing, release management, and product integration.
Manual testing slowed down our software development process
It’s common in tech startups to face challenges in the first years getting processes in place and it’s no different for us at Lightcurve. We didn’t have enough resources to dedicate to software testing, but we still had to do as many tests as possible to ensure the quality and reliability of every new software release. For example, testing a bug fix or a feature on a private network level required:
- Preparing the binaries (build from source)
- Spinning up the cloud infrastructure (multiple virtual machines, from 10 to 500)
- Deploying the software on all of the machines
- Performing actual test scenarios
- Gathering logs for further investigation
- Cleaning up the instances (destroying VMs)
- Analyzing the logs gathered in the process
The majority of our tests were initially manual and therefore time-consuming. In many cases, software testing also required coordination and significant help from our DevOps team. We were not able to test all the protocol features and scenarios in a reasonable amount of time, as the effort required was considerable. As a result, we experienced delays when making improvements and adding new features to our product suite.
However, I am happy to confirm that we no longer depend solely on manual testing. Four months ago, we established our own QA team within our network development team to cover all the missing parts related to software testing, implementation processes, automation testing and enforcing high-quality standards.
How we implemented Quality Assurance at Lightcurve
Now that we’ve established different types of testing, let’s have a look at how exactly QA is performed at Lightcurve and what exact processes we introduced to eliminate the risks of delivering unreliable code to production.
Having a QA team in place improved the following areas:
- Designing test plans together with test scenarios. The QA team works closely with developers to identify the features being developed and then prepares well thought out test scenarios. This step is required before the actual release. In most cases, QA is also responsible for writing tests that cover the prepared scenarios, executing them and evaluating the results.
- Automated Test Framework. We implemented various test scenarios that are executed in an automated way. Our automated tests cover sanity testing, regression testing, network testing (block and transaction propagation, P2P communication, backward compatibility, etc.), as well as security and fault tolerance network tests. These tests are part of our Continuous Integration (CI) and can also be executed by developers on demand.
- Jenkins and Ansible for Continuous Integration. At Lightcurve, we benefit from Jenkins’ flexibility when executing multiple jobs in parallel, and we want full control over the entire workflow. We have automated the process of creating builds and spinning up test networks using cloud providers. To make our tests as close as possible to real-world scenarios, we deploy nodes in different regions (US, China, Europe, Asia, etc.). We also use Ansible as an orchestration tool; it enables us to roll out the software and spin up those networks with the push of a button.
- NewRelic APM for Performance Testing. One of the main indicators of a blockchain project’s vitality is the network’s ongoing performance, which makes monitoring the performance of every release important. Our QA team uses NewRelic APM to determine whether there has been an improvement or a degradation in performance. We then give feedback to the development team so the problem can be rectified before we release. To ensure the network behaves as expected during high volumes of transactions, we run various types of stress tests (different transaction types, different workloads). We monitor metrics such as CPU and memory usage, I/O throughput and API response times. Another important thing to check for is memory leaks. When code needs memory for a particular task, it is allocated automatically (for example when creating an object) and should be released when it’s no longer needed. Sometimes this doesn’t happen and the application never frees the memory, which then stays consumed without any real need for it. Memory leaks cause the memory used by an application to grow slowly (sometimes very slowly) until it finally takes all available memory and results in a crash. To improve overall agility and code reliability in development, we’re currently in the process of migrating to TypeScript across our product suite.
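As a simplified illustration of the memory-leak pattern described above (not code from Lisk), an unbounded module-level cache is a common culprit in Node.js services, and a bounded cache with eviction is one straightforward fix:

```javascript
// Simplified illustration of a memory leak: an unbounded module-level
// cache holds references forever, so the garbage collector can't free them.
const cache = [];

// Anti-pattern: every request adds an entry that is never evicted,
// so heap usage grows for the lifetime of the process.
function handleRequestLeaky(payload) {
  cache.push({ payload, receivedAt: Date.now() });
  return cache.length;
}

// One fix: bound the cache and evict the oldest entry when it overflows.
const MAX_ENTRIES = 1000;
function handleRequestBounded(payload) {
  cache.push({ payload, receivedAt: Date.now() });
  if (cache.length > MAX_ENTRIES) cache.shift(); // evict oldest entry
  return cache.length;
}

for (let i = 0; i < 2000; i++) handleRequestBounded(`tx-${i}`);
console.log(cache.length); // stays capped at 1000 despite 2000 requests
```

In a real node, the symptom shows up in APM dashboards as heap usage that climbs steadily between restarts rather than plateauing.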
Our blockchain network’s QA testing process
In blockchain, even minor releases go through stages of testing before they reach the production network. In our case, we have the following types of networks:
- Devnet is a temporary, short-lived network that we create to execute tests against new changes that are not a part of a release on a case-by-case basis.
- Alphanet is a network in which we test alpha versions of new releases. At this stage we need a larger network that reproduces actual real-world conditions.
- Betanet is a public network in which we test beta releases. This happens only if there are very big changes in the codebase; in most cases, we skip this network.
- Testnet is a public network to which we push release candidates. Lisk’s Testnet has a huge set of historical data. You can check out our Testnet here.
- Mainnet is a public production network and contains the actual blockchain.
Our automation testing is configured to enable our developers to run tests on Devnet or Alphanet. The actual network size is configurable, ranging from 10 to 500 nodes. NewRelic APM monitoring is integrated with our software and enabled for each node. Once all the required tests are executed and their results evaluated, a decision can be made to release a feature or fix to Testnet. After a reasonable amount of time (depending on the size and complexity of the release) we push it to production, otherwise known as Mainnet.
The above picture depicts the Jenkins CI pipeline flow and a test report. The Jenkins CI pipeline consists of multiple stages, which include:
- Building Lisk Core software: In this stage, the Lisk Core software is built from a specific branch (development by default). A successful build creates a tar file with a unique hash in its name (e.g. lisk-1.5.0-alpha.2-b430af6-Linux-x86_64.tar.gz).
- Deploying the software to multiple machines: Once the software is built successfully, it will be deployed to multiple nodes to replicate the network behavior.
- Enabling delegates to forge: At this point all the nodes have started and have loaded the network’s genesis block. Now we need to make the blockchain move, so in this step we enable forging so that delegates start producing blocks.
- Executing protocol test scenarios: Once the network is moving, Lisk protocol feature tests are executed against the network. These tests include sanity, regression and new-feature scenarios, ensuring all basic protocol-related behavior works as intended.
- Managing network stress tests: To ensure the network stays reliable even under very high transaction loads, we run stress tests. They involve sending the maximum supported number of transactions. We expect the network to handle the load and accept all of the transactions within the given block slots.
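The stress-test expectation in the last step can be sketched as simple arithmetic: given a transaction burst and a per-block capacity, we know how many block slots the network should need to absorb it. The parameters below are illustrative, not Lisk’s actual protocol constants:

```javascript
// Sketch of a stress-test expectation: given a burst of transactions,
// how many block slots should the network need to absorb them all?
// Parameters are illustrative, not Lisk's actual protocol constants.
function slotsToAbsorb(txCount, maxTxPerBlock, blockIntervalSec) {
  const slots = Math.ceil(txCount / maxTxPerBlock);
  return { slots, seconds: slots * blockIntervalSec };
}

// Example: a burst of 5000 transactions, 25 per block, 10-second slots.
const { slots, seconds } = slotsToAbsorb(5000, 25, 10);
console.log(slots);   // 200 block slots
console.log(seconds); // 2000 seconds

// A stress test would send the burst, wait roughly `seconds`, then assert
// that every transaction was included in a block within that window.
```

If transactions are still pending after the expected window, the run is flagged as a failure and the logs are collected for investigation.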
The pipeline is configured to run nightly, which helps the development team deliver each release on time and with proper quality. As a result, developers can test features at the network level as they develop them, using the automated QA framework. This gives developers instant feedback on any failures, backward compatibility issues or performance changes.
Get involved in our open source automation testing
Lisk is developed in the spirit of the open source ethos. Therefore, we would like to encourage all developers to participate in ensuring the continued quality and security of our open source network with our QA tools.
How to start contributing to our QA
Observe our quality assurance progress by following our public Jenkins interface. If you want to try using the test suite yourself, you will need to set up your own node and network. To do so, read through Lisk’s official documentation. In particular, follow the Lisk Core setup section to get the blockchain network up and running. Next, you can set up the QA tools by following the instructions in the Lisk Core QA repository.
Which QA tools can we offer you?
Now that you know how to set up your Lisk Core node, you can participate in the following:
- QA cycle checklist template to cover all possible scenarios
- BDD feature scenarios and their step_definitions implementations
- Support and utility class for testing
- Network configuration tools
- Stress test scenarios
If you are a developer and want to contribute to Lisk’s Quality Assurance process, you can follow these contribution guidelines. You can then share your insights or join the discussion on Lisk.Chat’s Network Channel.
From immutability to decentralization, blockchain’s development presents its own set of challenges. This makes software testing even more important for our industry than it already is for centralized applications. To complicate things further, software testing in itself is a whole universe of options.
The introduction of automation testing at Lightcurve, along with a professionalized QA department, significantly improved our development speed and the quality of Lisk’s codebase. When it comes to blockchains, however, community equals security. Use the above QA tools to get involved in testing and contribute to our network’s development starting today.