
TDD and its impact on security

Test-driven development (TDD) is a software development approach in which automated tests are written before the actual code. Development then follows a cycle of writing a failing test, writing just enough code to make the test pass, and then refactoring the code. TDD was originally developed to ensure the quality, maintainability, and extensibility of the software over the long term. The specific knowledge about individual source code passages is also captured in the tests, which supports the transfer of responsibility between developers. Better than any documentation, tests are always up to date regarding the functionality that has been implemented in the source code.

However, TDD also has a positive impact on the security of a program. And that’s what we’re going to look at now.

But first, let’s get back to TDD. There are a variety of theoretical approaches here, which I will briefly touch on to give an overview. The question to be addressed is whether the choice of TDD approach influences the effect on security.

Classic TDD – Kent Beck

Unit tests and the units they test are developed continuously in parallel. The actual programming is done in small, repeating micro-iterations. One such iteration, which should only take a few minutes, consists of three main parts, known as red-green-refactor.

Red: Write a test for a new behaviour (functionality) to be programmed, starting with the simplest example. If the feature is older, it could also be a known bug or a piece of functionality still to be implemented. This test is initially not fulfilled by the existing program code, so it must fail.

Green: Change and extend the program code with as little effort as possible until all tests pass in the test run that follows.

Refactor: Then clean up the code: remove repetitions (code duplication), abstract where necessary, align with the binding code conventions, and so on. In this phase, no new behaviour may be introduced that is not already covered by tests. After each change, the tests are run; if they fail, the change that apparently introduced the bug can be reverted immediately. The goal of the cleanup is to make the code simple and easy to understand.

These three steps are repeated until the known bugs are fixed, the code provides the desired functionality, and the developer can’t think of any more useful tests that could fail. The unit treated this way is then considered complete for now. However, the tests created together with it are retained, so that after future changes it can be verified that the behaviour already achieved is still intact.
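To make the cycle concrete, here is a minimal sketch of a single red-green-refactor iteration, assuming JUnit 5; the PriceCalculator class and its applyDiscount method are hypothetical examples:

```java
// Red: write a failing test first (PriceCalculator does not exist yet at this point).
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}

// Green: the simplest implementation that makes the test pass.
class PriceCalculator {
    double applyDiscount(double price, double rate) {
        return price * (1.0 - rate);
    }
}

// Refactor: remove duplication and improve names while the test stays green.
```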

For the changes in step 2 (also called transformations) to lead to the goal, each change must lead to a more general solution; for example, it may not merely satisfy the current test case at the expense of others. Tests that become more and more specific drive the code towards a more and more general solution. Regular attention to transformation priorities leads to more efficient algorithms.

Consistently following this approach results in an evolutionary design method, where every change evolves the system.

Outside-In TDD

Outside-In Test-Driven Development is an approach to software development that starts the development process by creating high-level acceptance tests or end-to-end tests. These tests define the desired behaviour of the system from the point of view of its users or external interfaces. The approach is closely related to behaviour-driven development (BDD).

With Outside-In TDD, the development process begins with writing a failing acceptance test that describes the desired behaviour of the system. This test is usually written from the perspective of a user or of a high-level component that interacts with the system. The test is expected to fail initially, as the system does not yet have the required functionality.

Once the first acceptance test has been written, the next step is to write a failing unit test for the smallest possible unit of code needed to make the acceptance test pass. This unit test defines the desired behaviour of a specific module or component within the system. It fails because the corresponding code has not been implemented yet.

The next step is implementing the code to make the failing unit test pass. The code is written incrementally, focusing on the immediate needs of the failing test. The developer writes further tests and code step by step until the acceptance test passes and the desired behaviour of the system is achieved.
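A minimal sketch of this outside-in flow, assuming JUnit 5; the GreetingService and GreetingFormatter classes are hypothetical examples:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Outside: an acceptance test describing behaviour from the user's perspective.
class GreetingAcceptanceTest {
    @Test
    void userReceivesPersonalGreeting() {
        assertEquals("Hello, Alice!", new GreetingService().greet("Alice"));
    }
}

// Inside: a unit test for the smallest unit needed to pass the acceptance test.
class GreetingFormatterTest {
    @Test
    void formatsNameIntoGreeting() {
        assertEquals("Hello, Alice!", new GreetingFormatter().format("Alice"));
    }
}

// Implementation, grown incrementally until both tests pass.
class GreetingFormatter {
    String format(String name) {
        return "Hello, " + name + "!";
    }
}

class GreetingService {
    private final GreetingFormatter formatter = new GreetingFormatter();

    String greet(String name) {
        return formatter.format(name);
    }
}
```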

The idea behind Outside-In TDD is to drive the development process from the outside, starting with the higher-level behaviour and moving inward to the lower-level implementation details. This approach helps ensure that the system is developed to meet the needs and expectations of its users. It also fosters testability and decoupling by encouraging the creation of small, focused tests and modular code.

By practising outside-in TDD, developers can gain confidence in their code and ensure the system behaves as expected. It also helps uncover design flaws early in the development process and encourages the creation of loosely coupled and easily maintainable code.

Overall, Outside-In TDD is a methodology that combines testing and development, focusing on providing value to users and driving the development process from an outside perspective.

Acceptance Test-Driven Development

In this variant, system tests are continuously developed, or at least specified, before the system itself. System development is then no longer aimed at meeting written requirements but at passing the specified system tests.

Acceptance Test-Driven Development (ATDD), while related to Test-Driven Development, differs in its approach. ATDD is a communication tool between the customer or user, the developer, and the tester to ensure that the requirements are well described. ATDD does not require test cases to be automated, although automation is useful for regression testing. The acceptance tests of ATDD must also be readable for non-developers. In many cases, the unit tests of TDD can be derived from the acceptance tests of ATDD. For this approach to contribute to system security, the boundary conditions must go beyond the purely functional aspects, which is only done in some cases.
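To illustrate what "readable for non-developers" can look like, here is a minimal sketch of an acceptance test in a given-when-then style, assuming JUnit 5; the Shop class and its methods are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class CheckoutAcceptanceTest {
    @Test
    @DisplayName("A customer with a 10 EUR voucher pays the reduced price at checkout")
    void customerWithVoucherPaysReducedPrice() {
        // Given a cart worth 100 EUR and a 10 EUR voucher
        Shop shop = new Shop();
        shop.addToCart("book", 100.0);
        shop.applyVoucher("WELCOME10", 10.0);

        // When the customer checks out
        double total = shop.checkout();

        // Then the customer pays 90 EUR
        assertEquals(90.0, total, 0.001);
    }
}

// Minimal hypothetical implementation to keep the sketch self-contained.
class Shop {
    private double cartTotal;
    private double discount;

    void addToCart(String item, double price) { cartTotal += price; }
    void applyVoucher(String code, double value) { discount += value; }
    double checkout() { return cartTotal - discount; }
}
```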

What test coverage should I use?

For the effect on security, it is important that tests run automatically. This leads to test coverage that can be measured and compared with previous runs. The question arises as to which kind of test coverage is most suitable, and it has been shown that there are significant differences in the strength of the respective approaches. I am a strong proponent of mutation test coverage, as it is one of the most effective coverage metrics. If you want to know more, I refer to my videos “Mutation Testing in German” and “Mutation Testing in English” on my YouTube channel. In any case: the better the test coverage, the greater the effect that can be achieved.
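As a quick illustration of why mutation coverage is stronger than plain line coverage, consider the following sketch, assuming JUnit 5; the Access class is a hypothetical example (for Java, a tool such as PIT generates the mutants automatically):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Code under test; a mutation tool would generate mutants such as
// replacing ">=" with ">".
class Access {
    static boolean isAdult(int age) {
        return age >= 18;
    }
}

class AccessTest {
    @Test
    void boundaryValueKillsTheConditionalMutant() {
        // age = 18 fails on the ">" mutant and therefore "kills" it.
        // A suite testing only age = 30 would reach 100% line coverage
        // and still let the mutant survive.
        assertTrue(Access.isAdult(18));
        assertFalse(Access.isAdult(17));
    }
}
```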

How exactly is the security of the application supported?

1. Identify vulnerabilities early: Writing tests before implementing features helps identify potential security gaps early. By considering security concerns during the test design phase, developers can anticipate possible attack vectors and design their code with security guidelines in mind. This proactive approach makes it possible to detect security problems before they become deeply embedded in the code base.

2. Encouraging Secure Coding Practices: TDD encourages writing modular, well-structured, and testable code. This enables developers to adopt secure coding practices such as input validation, proper error handling, and secure data storage. By writing tests that simulate different security scenarios, developers are more likely to consider edge cases and verify that their code behaves securely (see the sketch after this list).

3. Regression Testing for Security: TDD relies on running automated tests regularly to preserve existing functionality as new code is added. This approach helps prevent regressions in security-related behaviour. If a security-related test case exists and initially passes, subsequent changes to the code can be tested against it to ensure that vulnerabilities are not accidentally reintroduced.

4. Building a Security-Aware Culture: TDD’s incorporation of security testing as an integral part of the development process helps foster a security-aware culture within the development team. By performing security-focused testing alongside functional testing, developers are more likely to prioritize security and see it as a fundamental aspect of their job. This mindset encourages a proactive attitude towards security rather than just seeing it as a side issue.

5. Facilitate Collaboration and Code Review: TDD encourages collaboration and code review between team members. When developers write tests first, they can discuss security concerns and get feedback from peers. This collaborative approach increases overall security awareness within the team and enables early detection and resolution of security issues.
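As referenced in point 2, here is a minimal sketch of a security-focused unit test, assuming JUnit 5; the UsernameValidator class is a hypothetical example:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Whitelist-based input validation: only letters, digits and underscores, 3-20 chars.
class UsernameValidator {
    static boolean isValid(String name) {
        return name != null && name.matches("[A-Za-z0-9_]{3,20}");
    }
}

class UsernameValidatorTest {
    @Test
    void rejectsInjectionStyleInput() {
        // Edge cases written as tests document the attack vectors that were considered.
        assertFalse(UsernameValidator.isValid("'; DROP TABLE users; --"));
        assertFalse(UsernameValidator.isValid("<script>alert(1)</script>"));
        assertFalse(UsernameValidator.isValid(null));
    }

    @Test
    void acceptsHarmlessInput() {
        assertTrue(UsernameValidator.isValid("alice_42"));
    }
}
```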

Although TDD is not a security panacea, it offers a systematic and disciplined development approach that indirectly strengthens the security posture of the software. By promoting security-centric thinking, encouraging secure coding practices, and facilitating early detection of vulnerabilities, TDD can help build more secure and resilient applications. However, it is important to note that additional security measures such as threat modelling, code analysis, and penetration testing are still needed to meet security requirements fully.

A few more practical approaches

What are Compliance Issues, and how to remove them?

Compliance issues are elements that are published under a license that does not permit their use in the given project. This can happen in all technology areas within a project. Libraries and frameworks do not usually change their license over the years, but it occasionally happens. All dependencies of the elements used must be checked as well, down to the last link in the chain.

To fix such a violation, the affected element must be replaced by a semantic equivalent that is available under an appropriate license.

What are Vulnerabilities, and how to remove them?

Vulnerabilities are unintentional errors in the source code that allow the system to be manipulated to an attacker’s advantage. These errors can occur throughout the tech stack. It is important to recognize all vulnerabilities in all technologies used in order to get an overview of which vulnerabilities can be exploited in which combination for an attack on the system; looking at them individually is not enough. This holistic view makes it possible to recognize the different attack vectors and to identify the weak points that cannot actually be exploited in this system. This allows you to target your efforts precisely where they secure the system.

Why do we need efficient dependency management?

What tools are available to developers to address compliance issues and vulnerabilities? This is where effective dependency management combined with robust test coverage comes in handy, because in all cases dependencies have to be exchanged. For a compliance issue, the replacement is a semantically equivalent surrogate; for a vulnerability, it is the same dependency in a different version, which can be a higher or a lower one. Overall, it is about the clever combination of the various elements. Strong test coverage makes it easy to verify afterwards whether the functionality is still intact.

Conclusion:

In summary, strong test coverage is beneficial when modifying and exchanging external elements as well as your own source code. You can rely on the results of the CI environment without adding manual verifications, and thus implement a fast and lean release process with as little time loss as possible.

Quality and security support each other.

Happy coding

Sven

The Quick Wins of DevSecOps

Hello and welcome to my DevSecOps post. Here in Germany, it’s winter right now, and the forests are quiet. The snow slows down everything, and it’s a beautiful time to move undisturbed through the woods.

There you can pursue your thoughts, and I had to think about a subject that customers and conference participants ask me about repeatedly.

The question is almost always:

What are the quick wins or low-hanging fruits if you want to deal more with the topic of security in software development?

And I want to answer this question right now!

For the lazy ones: this post is available as a YouTube video as well.

Let’s start with the definition of a phrase that is often used in the business world.

Make Or Buy

Even as a software developer, you will often hear this phrase during meetings with the management and sales side of the company.

The phrase is: “Make or Buy“. Typically, we have to decide if we want to build something ourselves or spend money to buy the requested functionality. The bought solution could offer less, more, or different functionality, so we may have to adjust ourselves to use it in our context.

But as software developers, we have to deal with the same question every day. I am talking about dependencies. Should we write the source code ourselves or just add the next dependency? Who will be responsible for removing bugs, and what is the total cost of this decision? But first, let’s take a look at the make-or-buy ratio across the full tech stack.

The difference between Make and Buy on all layers

If we look at all layers of a cloud-native stack and compare the share of “make” to “buy”, we will see that the “buy” component is the bigger one in every layer. But first things first.


The first step is the development of the application itself.

Assuming that we are working with Java and using Maven as a dependency manager, we are most likely adding more lines of code indirectly as dependencies than we are writing ourselves. The dependencies are the more prominent part, and they are developed by third parties. We have to be careful here, and it is good advice to check these external binaries for known vulnerabilities.

We should do the same regarding compliance and license usage. The next layer will be the operating system, in our case Linux.

And again, we add some configuration files, and the rest are existing binaries.

The result is an application running inside the operating system that is a composition of external binaries based on our configuration.

The two following layers, Docker and Kubernetes, lead us to the same result. And so far, we have not even looked at the tool stack of the production line itself.

All programs and utilities that are directly or indirectly used under the umbrella called DevSecOps are dependencies as well.

In all layers, the dependencies are by far the most significant part.

Checking these binaries against known Vulnerabilities is the first logical step.

One-time and recurring efforts for compliance and vulnerabilities

Comparing the effort of scanning for known vulnerabilities with scanning for compliance issues, we see a few differences.

Let’s start with the Compliance issues.

Compliance issues:

The first step will be defining which licenses are allowed in which part of the production line. This definition of allowed licenses includes the dependencies at coding time as well as the tools and runtime environments used. The list of non-critical license types should be checked by a specialised lawyer. With this whitelist of license types, we can let the machine scan the full tool stack on a regular basis. If the machine finds a violation, the affected element has to be removed and replaced by one that is available under a whitelisted license.
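A toy sketch of what such a machine check boils down to, in Java; the Dependency record and the license list are hypothetical (a real setup would use a scanner such as JFrog Xray instead):

```java
import java.util.List;
import java.util.Set;

// A dependency together with the license it was published under.
record Dependency(String name, String license) {}

class LicenseCheck {
    // The whitelist of license types approved by the lawyer.
    static final Set<String> ALLOWED = Set.of("Apache-2.0", "MIT", "BSD-3-Clause");

    static List<Dependency> violations(List<Dependency> dependencies) {
        return dependencies.stream()
                .filter(d -> !ALLOWED.contains(d.license()))
                .toList();
    }

    public static void main(String[] args) {
        List<Dependency> dependencies = List.of(
                new Dependency("lib-a", "Apache-2.0"),
                new Dependency("lib-b", "GPL-3.0")); // not on the whitelist
        violations(dependencies).forEach(d ->
                System.out.println("Compliance violation: " + d.name() + " (" + d.license() + ")"));
    }
}
```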

Vulnerabilities:

Compared to that, the recurring effort on the compliance side is low; vulnerabilities produce much more work and need a slightly different workflow. The machine can still do the scanning on a regular basis without much preparation, but the identification of a vulnerability triggers a workflow that includes human interaction: the vulnerability must be classified internally, which leads to the decision about what the following action will be.

Compliance Issues: just singular points in your full-stack

There is one other difference between compliance issues and vulnerabilities. A compliance issue is a singular point inside the overall environment. Only this single element is defective, and it does not influence other elements of the environment.


Vulnerabilities: can be combined into different attack vectors.

Vulnerabilities are a bit different. They do not only exist at the point where they are located; they can also be combined with other existing vulnerabilities in any other layer of the environment. Vulnerabilities can be combined into different attack vectors, and every possible attack vector must be seen and evaluated on its own. A set of minor vulnerabilities in different layers of the application can combine into a highly critical risk.


Vulnerabilities: the timeline from discovery until the fix is in production

The next thing I want to have an eye on is the timeline from the moment a vulnerability is found until the fix is in production. Once a vulnerability exists in a binary, we have nearly no control over how long it takes until it is found. It depends on the finder whether the vulnerability is reported to the creator of the binary, to a commercial security service, to a government, or whether it is sold on a darknet marketplace. But assuming the information is reported to the binary’s creator, it will still take some time until the data is publicly available. We have no control over the duration from finding the vulnerability to the time the information is publicly available. The next period is driven by the commercial side of this business.

As consumers, the only way to get the information as early as possible is to spend money. This state of affairs is not nice, but it is mostly the truth.

Nevertheless, at some point, the information is consumable for us. If you are using JFrog Xray, even from the free tier, you will get the information very fast. JFrog consumes different security information resources and merges everything into a single vulnerability database. Whenever this database is fed with new information, all JFrog Xray instances are updated. From this point on, you can act.


Test-Coverage is your safety-belt; try Mutation Testing.

Until now, the only thing you can do to speed up the information flow is spend money for a professional security information aggregator. But as soon as the information is consumable for you, the timer runs. It depends on your environment how fast this security fix will be up and running in production. To minimise this time, a fully automated CI pipeline is one of the critical factors.

But even more critical is excellent and robust test coverage.

Good test coverage allows you to switch dependency versions immediately and push the change into production after a green test run. I recommend using a more substantial coverage metric than pure line coverage. The technique called “mutation test coverage” is a powerful one.

Mutation Test Coverage
If you want to know more about this one, check out my YouTube channel. I have a video that explains the theoretical part as well as a practical one for Java and Kotlin.

The need for a single point that understands all repo-types
To get a picture of the full impact graph based on all known vulnerabilities, it is crucial to understand all package managers involved through the dependencies. Focusing on just one layer of the tech stack is by far not enough.

JFrog Artifactory provides information, including the vendor-specific metadata that is part of the package managers.

JFrog Xray can consume all this knowledge and scan all binaries hosted inside the repositories managed by Artifactory.


Vulnerabilities – IDE plugin

Shift Left means that vulnerabilities must be eliminated as early as possible inside the production pipeline. One early stage, right after the concept phase, is the coding itself. The moment you start adding dependencies to your project, you are possibly adding vulnerabilities as well.

The fastest way to get feedback regarding your dependencies is the JFrog IDE Plugin. This plugin connects your IDE to your JFrog Xray instance. The free tier will give you access to vulnerability scanning. The plugin is open source and available for IntelliJ, VS Code, Eclipse, and more. If you need additional features, file a feature request on GitHub, or fork the repository, add your changes, and open a pull request.

Try it out by yourself – JFrog Free Tier

How to use the IDE plugin?

If you add a dependency to your project, the IDE Plugin can understand this information based on the package manager used. The IDE Plugin is connected to your JFrog Xray instance and queries it whenever the dependency definition of your project changes. The information provided by Xray includes the known vulnerabilities of the added dependency. If a fixed version of the dependency is available, the new version number is shown.

If you want to see the IDE Plugin in action without registering for a free tier, have a look at my YouTube video.

Conclusion

With the JFrog Free Tier, you have the tools in your hands to practice Shift Left and push it right into your IDE.

Create repositories for all included technologies, use Artifactory as a proxy for your binaries and let Xray scan the full stack.

With this, you have a complete impact graph based on your full stack and information about known vulnerabilities as early as possible inside your production line.

You don’t have to wait until your CI pipeline starts complaining, and that will save you a lot of time.