The Lifeline of a Vulnerability

Intro

Again and again, the IT news reports security vulnerabilities that have been found. The more severe the classification of such a vulnerability, the more attention it gets in the general press. Most of the time, however, you never hear or read about the many security holes that are less prominent than, for example, the SolarWinds hack. But what is the typical lifeline of such a security gap?

Created until found

Let’s start with the birth of a vulnerability. This birth can happen in two differently motivated ways. On the one hand, any developer can create a security hole through an unfortunate combination of source code pieces. On the other hand, it can also be the result of targeted manipulation. However, this has essentially no effect on the further course of the vulnerability’s lifeline. In the following, we assume that a security hole has been created and is now active in some piece of software. This can be an executable program or a library that is integrated into other software projects as a dependency.

Found until publicly available

In most cases, it is impossible to determine precisely when a security hole was created. But let’s assume that the security hole exists and that it will be found. It matters a great deal which person or team finds this weak point, because that has a severe impact on the subsequent course of events. Let’s start with the case that the vulnerability is found by people who are interested in exploiting it themselves or in having it exploited by other parties. In that case, the information is either kept under lock and key or offered for sale in the relevant places on the Internet. The motives here are primarily financial or political, and I do not want to go into them here. What can clearly be seen, however, is that the information is passed on through channels that are not available to the general public.

However, if the security gap is found by people or groups who are interested in making the knowledge about it available to the general public, various mechanisms take effect. One must not forget that commercial interests will also play a role in most of these cases; only the motivation is different. If the company or the project itself is affected by the vulnerability, there is usually an interest in presenting the information as relatively harmless. The feared damage can even lead to the security gap being fixed while knowledge about it remains hidden. This approach must be viewed critically, as it has to be assumed that other groups or people will gain this knowledge as well.

But let’s assume that the vulnerability was found by people who are not directly involved with the affected components. In most cases, the motivation is then to sell the knowledge of the vulnerability. Besides the affected projects or products, the providers of vulnerability databases are potential buyers. These companies have a direct and obvious interest in acquiring this knowledge. But to which company will the finder sell it? It can be assumed, with very high probability, that it will be the company that pays the better price. This has another side effect that concerns the classification of the vulnerability. Many vulnerabilities are rated using the CVSS. The base value is assigned by different people, and different people have different personal interests, which are then reflected in this value.

Here I refer to my blog post “CVSS – explained – the Basics”: https://svenruppert.com/2021/04/07/cvss-explained-the-basics/

Regardless of the detours via which the knowledge reaches the vulnerability databases: only when the information has arrived at one of these points can one assume that it will become available to the general public over time.

Publicly available until consumable

One fact can be seen very clearly at this point: regardless of which vulnerability provider you choose, its data set will only ever contain a subset of all known vulnerabilities. As an end consumer, there is only one sensible way to go here. Instead of contacting the individual providers directly, you should rely on aggregators: services that integrate various sources, then process and merge them into one offering. It is also essential that the findings are prepared so that further processing by machines is possible, which means meta-information such as the CVE identifier and the CVSS value is supplied. Only then can other programs work with this information. Take the CVSS value as an example: it is used in CI environments to interrupt further processing when a certain threshold value is reached. Only when the information is prepared in this way and available to the end-user can it be called consumable. Since this information generally represents considerable financial value, it can be assumed that in the vast majority of cases the commercial providers of such data sets will have access to updated information more quickly than freely available data collections.
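
To make the CI threshold idea concrete, here is a minimal sketch in Java. The report format and the dependency coordinates are hypothetical; real scanners deliver this data in their own formats, but the gating logic looks essentially like this:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of a CI gate: a hypothetical scan report maps
// dependency coordinates to the highest CVSS score found for them.
public class CvssGate {
    // Project-specific choice: stop the pipeline at "High" severity and above.
    private static final double THRESHOLD = 7.0;

    static void failOnThreshold(Map<String, Double> scanReport) {
        List<String> violations = scanReport.entrySet().stream()
            .filter(e -> e.getValue() >= THRESHOLD)
            .map(e -> e.getKey() + " (CVSS " + e.getValue() + ")")
            .collect(Collectors.toList());
        if (!violations.isEmpty()) {
            // A non-zero exit code interrupts further processing in most CI servers.
            System.err.println("Build stopped, vulnerable dependencies: " + violations);
            System.exit(1);
        }
    }

    public static void main(String[] args) {
        failOnThreshold(Map.of("demo:lib:1.0.0", 9.8, "demo:other:2.1.3", 3.1));
    }
}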

Consumable until running in Production

If the information can now be consumed, i.e. processed with the tools used in software development, the storyline continues in your own projects. Whichever provider you have decided on, the information is available from a certain point in time, and from then on you can react to it yourself. The goal is to activate the necessary changes in production as quickly as possible, because this is the only way to avoid the potential damage that can result from the vulnerability. This leads to various requirements for your software development processes. The most obvious one concerns throughput times: only those who have implemented a high degree of automation in their delivery processes can achieve short response times. It is also an advantage if the team concerned can make the necessary decisions itself, and quickly. Lengthy approval processes are not just annoying at this point; they can cause extensive damage to the company.

Another point with high potential is providing security-critical information at all production stages involved. The earlier this data is taken into account, the lower the cost of removing a security gap. We’ll come back to that in more detail when the shift-left topic is discussed.

Another question that arises is that of the effective mechanisms against vulnerabilities.

Test coverage is your safety belt; try Mutation Testing

The best knowledge of security gaps is of no use if it cannot be put to work. But what tools does software development offer to act efficiently against known security gaps? Here I would like to highlight one metric in particular: the test coverage of your own source code. If you have strong test coverage, you can make changes to the system and rely on the test suite. If all affected system components pass their tests, nothing stands in the way of releasing the software from a technical point of view.

But let’s take a step back and look at the situation more closely. In most cases, known vulnerabilities are removed by changing the version of the dependency in use. This means that efficient version management gives you the agility you need to react quickly. Only in rare cases do the affected components have to be replaced by semantic equivalents from other vendors. And to classify the new composition of versions as valid, strong test coverage is required. Manual tests would go far beyond the available time frame and cannot be carried out with the same quality in every run. But what is strong test coverage?

I use the technique of mutation testing. It gives you a much more meaningful measure of test quality than conventional line or branch coverage. Unfortunately, a complete description of this procedure is not possible at this point.
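
A deliberately weak, hypothetical example (assuming JUnit 5 as the test framework) shows why plain line coverage can mislead: the test below executes every line of the method, so line coverage reports 100%, yet it asserts nothing, and every mutant a mutation testing tool generates (for example, changing > to >= or altering the discount factor) survives.

import org.junit.jupiter.api.Test;

// Hypothetical example: 100% line coverage, zero mutation coverage.
class PriceCalculator {
    double discountedPrice(double price, int items) {
        if (items > 10) {       // a mutant may change '>' to '>='
            return price * 0.9; // a mutant may change 0.9 to 1.0
        }
        return price;
    }
}

class PriceCalculatorTest {
    @Test
    void executesAllLinesButChecksNothing() {
        PriceCalculator calc = new PriceCalculator();
        calc.discountedPrice(100.0, 11); // covers the if-branch
        calc.discountedPrice(100.0, 5);  // covers the else-branch
        // No assertions: line coverage reports 100%, but every mutant survives.
    }
}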

However, if you want to learn more about mutation testing, visit the following URL. The video explains the theoretical and practical part of mutation testing for Java and Kotlin.

The need for a single point that understands all repo types

If we now assume that we want to search for known security vulnerabilities during both development and operation of the software, we need a place where the search processes can be carried out. Several areas are suitable, but there is one requirement for an efficient scan across all technologies used: a logical central point through which all binaries must pass. By that I mean not just the jar files declared as dependencies, but also all other files, such as Debian packages or Docker images. Artifactory is suitable as this central hub since it supports pretty much all package managers in one place. Because it knows the individual files and holds their metadata, the following things can be evaluated as well.

First of all, not only the direct dependencies can be captured: knowing the structures of the package managers used means that all transitive dependencies are also known. Second, cross-technology evaluations become possible, which means the full impact graph is available to capture the practical meaning of each vulnerability. The JFrog tool that provides this overview is JFrog Xray, which is directly connected to JFrog Artifactory. Whichever tool you choose, it is crucial that you don’t just scan one technology layer. Only with a comprehensive look at the entire tech stack can you ensure that no known security gaps, direct or indirect, make it into production.

Conclusion

Now we come to a conclusion. We have seen that we have little influence on most sections of the typical lifeline of an IT security vulnerability. There are really only two sections that we can influence directly. The first is the quickest and most comprehensive possible access to reliable vulnerability databases. Here it is essential not to entrust yourself to a single provider but to rely on so-called “mergers” or “aggregators”. Using such supersets can compensate for the economically motivated vagueness of the individual providers; I named JFrog Xray as an example of such an aggregator. The second section of the typical lifeline lies in your own house: as soon as you know about a security gap, you have to act yourself. Robust automation and a well-coordinated DevSecOps team help here. We will deal with precisely this section from the security perspective in another blog post. However, we have already seen that strong test coverage is one of the critical elements in the fight against vulnerabilities. Here I would like to refer again to mutation testing, which is a very effective tool in TDD.

And what can I do right now?

You can, of course, take a look at my YouTube channel and find out a little more about the topic there. I would be delighted to welcome you as a new subscriber. Thanks!

https://www.youtube.com/@OutdoorNerd

Happy Coding

Sven

Howto: Building a Bushcrafting Seat in the Woods

In this episode, I’ll show you how to make a comfortable bushcrafting chair out of a few sticks and a little string.

Step 1 – Finding suitable wood

For this seat, we need three wooden poles. These must be stable enough to bear the weight of whoever wants to sit on them. When choosing the pieces of wood, you should, as always, make sure that only dead wood is used. Besides not damaging living trees, this also has a practical background: with fresh wood, you always have to expect that some moisture and resin is still in the workpiece and will leak out in different places over time. In my search, I opted for seasoned beech wood. All poles have a diameter of approx. 10 cm and are dry and individually stable enough to hold my weight. Rotten wood is, of course, not advisable.


Step 2 – Cutting the workpieces

For the two outer poles, two branches with a length of approx. 1.50 m to 1.80 m are required. With these two poles, it does not matter whether they are straight; slightly crooked workpieces can be used here as well. For the seat itself, a wooden pole with a length of 1 m to 1.50 m can be used. This pole may be a little thicker so that the seating comfort is a little greater later on.


Step 3 – Smoothing the workpieces

After the branches have all been sawn to the correct length, all protruding twigs should be removed. In this step, you can remove the twigs and any remaining bark with a hatchet. Removing the bark residues ensures that the pieces of wood can withstand the weather for longer: not only does moisture collect under the bark, but insects will also quickly find a new home there. When all three pieces of wood have been sawn, debarked and smoothed, the assembly begins.

Step 4 – Frame construction

The frame structure itself consists of the two long poles. They are tied together on one side, similar to a tripod. For this, I use a rope made from natural fibres such as hemp. Of course, you can also use paracord, but I advise against it if you do not want to dismantle the seat immediately after use. If the bushcrafting seat remains in the forest, please do not use cords containing plastic. The connection itself is kept quite simple and essentially consists of a few loops and a final knot.


Step 5 – The seat

After the two long poles have been connected, this frame should be set up once. On the one hand, you can see whether the newly created connection holds; on the other hand, you can use the third pole to try out where it should be attached. Once you have decided on a position, you can begin to attach the last piece. To do this, put the frame construction back on the ground. After the seat pole has been placed on the two legs of the frame structure, both sides can be tied. Make sure that these connections are sufficiently stable and resilient, as this is where the highest loads occur.


Step 6 – Commissioning

After all connections have been made, you can set up the construction and try it out right away. The two long poles are leaned with their upper part against a tree trunk. The angle to the tree trunk determines the seat height and stability.
And the bushcraft chair is ready. The entire construction should take about 15 minutes to set up once the workpieces have been found. The short construction time makes this project ideal for a shared adventure with children.

Have fun rebuilding it!

CVSS – explained – the Basics

Intro

What is the Common Vulnerability Scoring System, CVSS for short? Who is behind it, what do we do with it, and what does a CVSS value mean for you? I will explain how a CVSS score is calculated, what its different elements mean and what the differences between the CVSS versions are.

The Basic Idea Of CVSS

The basic idea behind CVSS is to provide a general classification of the severity of a security vulnerability. This is about the classification and evaluation of weak points. But what does the abbreviation CVSS mean?

What does the abbreviation CVSS mean?

The letters stand for Common Vulnerability Scoring System, which translates roughly as a general vulnerability rating system. The weak points found are evaluated from various points of view, and these elements are weighted against each other so that a standardized number between 0 and 10 is obtained at the end.

What do you need such a rating system for?

A rating system that provides a standardized number allows us to evaluate different weak points abstractly and to derive follow-up actions from them. The focus is on standardizing the handling of these weak points so that you can define actions based on value ranges. By that I mean processes in the value chains that are affected by the weak point.

What is the basic structure of this assessment?

In principle, CVSS relates the probability of occurrence and the maximum possible damage using predefined factors. The basic formula for this is: risk = probability of occurrence x damage.

The Basic Values from 0..10

The evaluation in the CVSS is based on various criteria called “metrics”. For each metric, one value is selected from a firmly defined set of options. This selection results in a value between 0.0 and 10.0, where 0.0 is the lowest and 10.0 the highest risk value. The entire range of values is subdivided into the groups “None”, “Low”, “Medium”, “High” and “Critical”. The metrics themselves are divided into three areas that are weighted differently against one another: the “Basic Metrics”, the “Temporal Metrics” and the “Environmental Metrics”. In each area, different aspects are queried, and each must be assigned a single value. The weighting among each other and the subsequent composition of the three group values give the final result.
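
The mapping from score ranges to these named groups is fixed by the CVSS v3.x qualitative severity rating scale (for example, 9.0 and above is “Critical”). A minimal Java sketch of this lookup:

// Qualitative severity rating scale of CVSS v3.x.
public enum Severity {
    NONE, LOW, MEDIUM, HIGH, CRITICAL;

    public static Severity fromScore(double score) {
        if (score < 0.0 || score > 10.0) {
            throw new IllegalArgumentException("CVSS scores range from 0.0 to 10.0");
        }
        if (score == 0.0) return NONE;   // 0.0
        if (score <= 3.9) return LOW;    // 0.1 - 3.9
        if (score <= 6.9) return MEDIUM; // 4.0 - 6.9
        if (score <= 8.9) return HIGH;   // 7.0 - 8.9
        return CRITICAL;                 // 9.0 - 10.0
    }
}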

However, all component values that lead to this result are always supplied as well. This ensures transparency at all times about how the values originally came about. Next, the three sub-areas will be explained individually in detail.

The Basic Metrics

The basic metrics form the foundation of this rating system. The aim is to record the technical details of the vulnerability that will not change over time; think of it as an assessment that is independent of other, changing elements. The calculation of the base value can be carried out by different parties: by the discoverer, by the manufacturer of the project or product concerned, or by a party charged with eliminating the weak point (a CERT). One can imagine that the value itself will turn out differently depending on who carries out this initial assessment, since the individual groups pursue different goals.

Necessary prerequisites:

The base value evaluates the prerequisites that are necessary for a successful attack via this security gap. This includes, for example, the distinction of whether a user account on the target system is required for the attack or whether the system can be compromised without knowledge of any system user. It also plays a significant role whether a system is vulnerable over the Internet or whether physical access to the affected component is required.

Complexity of the attack:

The base value should also reflect how complex the attack is to carry out. In this case, the complexity relates to the necessary technical steps and includes assessing whether interaction with a regular user is essential. Is it sufficient to get any user to interact, or does this user have to belong to a specific system group (e.g. administrators)? At this point, it already becomes evident that assessing a new vulnerability requires exact knowledge of that vulnerability and the systems concerned. The correct classification is not a trivial process.

Assessment of the damage:

The basic metrics also take into account the damage that an attack could cause to the affected component. This means the possibilities to extract data from the system, to manipulate it, or to prevent the system’s use entirely. One speaks here of three areas:

  • Confidentiality
  • Integrity
  • Availability

However, you have to be careful with the weighting of these possibilities. In one case, it can be worse if data is stolen than if it is changed. In another case, the unusability of a component can be the worst conceivable damage.

Scope-Metric:

The scope metric has been available since CVSS version 3.0. This metric looks at the effects of an affected component on other system components. For example, one can imagine that a compromised element in a virtualized environment enables access to the carrier system. A successful change of scope represents a greater risk for the overall system and is therefore also evaluated with this factor. This point alone clearly shows that interpreting the values requires adjusting them to one’s own situation. And so we come to the temporal and environmental metrics.

The Temporal Metrics

The time-dependent components of the vulnerability assessment are brought together in the group of temporal metrics. The peculiarity at this point is that the temporal components can only reduce the base value, because the initial rating is intended to represent the worst-case scenario.

This has both advantages and disadvantages, if you bear in mind that very different interests can come into play during the initial assessment of a vulnerability. At this point, two things need to be highlighted:

1) Which factors influence the temporal metrics? 

The elements that change over time influence the “Temporal Metrics”.

On the one hand, this refers to changes in the availability of tools that support the exploitation of the vulnerability; these can be exploits or step-by-step instructions. On the other hand, a distinction must be made as to whether a weak point is merely theoretical or whether the manufacturer has officially confirmed it. All of these events change the resulting overall value.

2) What is the influence on the initial evaluation?

The influence on the initial evaluation comes from external framework conditions. These occur over an undefined time frame and are not relevant to the actual basic assessment. Even if an exploit is already in circulation at the time of the base value assessment, this knowledge is not included in the primary assessment; the temporal metrics can then only reduce the base value. This approach takes some getting used to and is often the subject of criticism, but the reason for this decision is understandable from a theoretical point of view: the base value is intended to denote the greatest possible damage.
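
In CVSS v3.x, this “reduce only” property follows directly from the formula: the temporal score is the base score multiplied by three factors that are each at most 1.0. A minimal sketch, with example weights as given in the CVSS v3.1 specification:

// Temporal score in CVSS v3.1: base score times three factors, each <= 1.0,
// so the result can never exceed the base score.
public class TemporalScore {
    // Example weights from the CVSS v3.1 specification.
    static final double EXPLOIT_PROOF_OF_CONCEPT = 0.94; // Exploit Code Maturity: PoC
    static final double REMEDIATION_OFFICIAL_FIX = 0.95; // Remediation Level: Official Fix
    static final double CONFIDENCE_CONFIRMED = 1.0;      // Report Confidence: Confirmed

    static double temporal(double base, double exploit, double remediation, double confidence) {
        double raw = base * exploit * remediation * confidence;
        return Math.ceil(raw * 10.0) / 10.0; // round up to one decimal, like the spec's Roundup
    }

    public static void main(String[] args) {
        // A critical base score of 9.8 drops to 8.8 once a PoC and an official fix exist.
        System.out.println(temporal(9.8, EXPLOIT_PROOF_OF_CONCEPT,
                                    REMEDIATION_OFFICIAL_FIX, CONFIDENCE_CONFIRMED));
    }
}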

Precisely this initial assessment gives rise to a conflict. The person or group who found a security gap will try to set the base value as high as possible: a highly critical loophole sells better and can be exploited better in the media, and the reputation of the finder increases as a result. The affected company or project is interested in exactly the opposite assessment. It therefore matters who finds the security gap, how it is to be utilized and which body carries out the first evaluation. The only offsetting component is the environmental metrics.

The Environmental Metrics

In the case of the environmental metrics, your own system landscape is set in relation to the security gap, meaning the evaluation is adjusted to the real situation. In contrast to the temporal metrics, the environmental metrics can correct the base value in both directions. The environment can therefore lead to a higher classification and must also be constantly adapted to changes on your side. This combination raises purely practical questions. Let’s assume there is a patch from the manufacturer for a security hole. The mere availability of this modification lowers the total value via the temporal metrics. However, as long as the patch has not been activated in your own systems, the overall value must be corrected drastically upwards again via the environmental metrics. Why drastically? As soon as a patch is available, it can be used to understand the security gap and its effects in detail. Attackers thus have more and more detailed information at their disposal, which reduces the resistance of systems that have not yet been hardened.

The Final Score

At the end of an evaluation, the final score is obtained, calculated from the three values mentioned above. I will not explain the calculation details at this point; I have a separate post on that. The resulting value is then assigned to a value group. But there is one more point that is very important to me personally. In many cases, I see that the final score is simply carried over: the individual adjustment by means of the environmental score does not take place, and the value is adopted one-to-one as the risk assessment. In many cases, this behaviour leads to an evaluation that is dangerous and incorrect for the overall system concerned.

Conclusion

We come to the management summary. With the CVSS, we have a value system for evaluating security gaps in software. Since there are no alternatives, and since the system has been in use worldwide for over ten years and is constantly being developed, it is a de facto standard. The evaluation consists of three components. First, there is the base score, which depicts a purely technical worst-case scenario. The second component is the evaluation of time-dependent corrections based on external influences: whether further findings, tools or patches exist for this security gap. The peculiarity here is that this value can only reduce the base score. The third component is the assessment of your own system environment with regard to the weak point; with this consideration, the rating is adjusted to the real situation on site. This value can lower or raise the base rating. Finally, an overall evaluation is made from these three values, resulting in a number between 0.0 and 10.0.

This final value can be used to control your own actions in defending against the security gap in the overall context. At first glance, everything looks quite abstract; it takes some practice to get a feel for the resulting figures.

The CVSS can only develop its full effect if you engage with it more deeply in the context of your own systems.

I will explain the underlying calculations in detail in further posts.

Build a small Bushcraft bench from a log

When I’m out in the forest, I always enjoy having some tools with me. The backpack is certainly a little heavier, but there are countless possibilities. My son was there again on one of my last tours. At the time I write this blog post (early 2021), he is ten years old. It’s always lovely to see the enthusiasm with which new things are tried.
We had set ourselves the goal of building a bench. As tools, we had an axe, a saw and some paracord with us.
To find a suitable workpiece, forest areas with deciduous trees are best. Conifers are usually not ideal because they contain a lot of resin. Now and then, you will find places where some medium-sized trees such as birch, beech or oak have been felled by the last storm and left lying. Of course, you have to be careful in such places and check whether the tree trunks are still under tension. We found just such a place and immediately started to sift through the wood. Since the logs had been down for some time, it was safe to move between them.

What should a suitable workpiece look like?

Trunks that do not rest entirely on the ground are best. These pieces are usually dry and have a firm consistency. If you tap the wood with your fingers, you can hear a bright sound. If the sound is dull, or if the wood is a bit musty, it is usually not worth sawing out a piece.
The length can be chosen freely and is only limited by the total weight you can carry yourself. Depending on the length, the trunk section must, of course, have a suitable diameter. After all, this piece of wood is supposed to withstand the weight of the people sitting on it. The comfort of the seat is, of course, greater the deeper the seat is. If in doubt, choose a larger diameter.
In our case, we chose a fairly short piece of about one meter. The diameter is about 25 cm and therefore sufficient for the expected weight.

Sawing the sections

To sit comfortably on the bench, one long side is now removed from the tree trunk piece. The aim is to get a flat surface over the entire width of the beam. So that the stability does not suffer too much, it is advisable not to split the trunk directly in the middle. It is possible to saw off a piece over the entire length, but that is quite a laborious task. I prefer to saw across the workpiece at intervals of about a hand’s breadth. Each cut should be as deep as the portion to be removed.

Manufacture the seat

When the trunk has been sawed like this, the unneeded part of the tree trunk can be removed with the axe. It is best to start with a middle segment. I hold the workpiece upright with my left hand and then use the axe with my right hand. Thanks to the saw cuts, the pieces come off quite easily. When all segments have been removed with the axe, the fine work can begin. With the axe or a knife, you can now work the seat as finely as you like.

Find and saw wood for tripods

The main piece is now finished. What is still missing are the pieces of wood for the two tripods. Six sticks of the same length, if possible, with a diameter of about 4 cm or a little more are needed; of course, you have to take the total weight into account here too.
In our case, sticks a little more than 40 cm long and about 4 cm in diameter were sufficient. Again, dry and stable dead wood from deciduous trees should be used.

Tie a tripod

To tie a tripod out of three sticks, you need a little bit of paracord; a 2 m long piece of string should be sufficient. Place the three sticks next to each other so that there is no space between them. The cord is attached to one of the outer sticks in the middle and then wrapped around all sticks. Make sure that the sticks stay tightly together. Towards the end, you can wrap the end piece between the sticks a few more times. The end of the paracord should then be knotted tightly. The tripod is now ready and can be set up.

Assemble the bench

Now it is time for the last work step. Take the main workpiece and place it on the two tripods. After a little fine adjustment, the bench should be ready.

First use

The high point is trying out the bench and seeing whether everything holds up. All work steps can be carried out in about 30 minutes, which makes this an ideal small leisure project. The time needed to find suitable wood is, of course, not included.

Building a Rocket-Stove with Axe and Saw only

Today we’re going to look at how to make a rocket stove with just an axe and a saw.
The construction does not require any other materials such as wire or the like.
This is a variant that you can use to boil water.
Even in wet weather, this variant works very well because, with the correct selection of the workpiece, the dry inner wood is used to generate sufficient embers.
The construction uses the physical effect that hot air rises faster than the colder ambient air. This is, among other things, the reason why this variant works better the colder the ambient temperature is.



Search / select wood

The first step is to choose a suitable piece of wood. There are a few things to consider here.
Only a tree that has already died should be used for this work. There are several reasons for this.
On the one hand, the principle “leave no trace” applies to me, so I avoid leaving traces as much as possible.
Therefore, damaging or even cutting down trees that are still alive is an absolute no-go for me.
But there are also purely practical reasons to concentrate mainly on deadwood when searching.
Dead wood is much better to use as fuel than fresh and therefore still damp wood.
Exceptions are woods with a very high proportion of resin. Many conifers belong to this group, but birch is also an excellent raw material for a fire. Birch burns very well, even when it is wet. This is also one of the reasons why birch bark is very popular as a tinder material.

(But I’ll show that in more detail in another video)

However, there is one more thing to keep in mind. If you want to use this rocket stove for cooking, you should avoid wood that is too resinous. During combustion, many particles settle on the cookware and have to be removed later with great difficulty. So think a little about the work that follows.

Wood that lies on the ground is usually more humid, possibly even slightly rotten. Such wood is no longer usable: it has a poor calorific value and is usually home to many insects.
It is best if the part of the tree trunk is suspended in the air, i.e. not touching the ground. If you knock on the wood, you can already tell from the sound how much moisture is to be expected in it and how far the decomposition process has progressed. A bright sound is usually a good sign.

Let’s come to the size that is well suited for this project.
I prefer pieces whose diameter is no more than the length of my palm.
There are several reasons for this. First, they are easier to work in the following steps.
Second, pieces of this size can still easily be worked with handy tools. For thicker workpieces, you usually have to improvise tools.
(How to split very thick tree trunks with a small axe is something I will describe in detail in another video.)

Saw off wood

For the length, experience has shown that I prefer pieces between 30 and 50 cm.
These have the advantage that they stand stably in the fire for a long time, can be processed quickly and do not burn too long. The burning time is sufficient to prepare a meal for up to two people, followed by coffee, and to warm up a little while the coffee is being enjoyed. The burning time, however, depends heavily on the type of wood used and its quality. For sawing, I use a saw with a relatively long blade on my tours. Shorter saw blades can certainly be used as well, but I prefer to carry a few grams more and have more comfort when working on the wood.

Splitting the wood

As soon as the workpiece has been found and sawed out, you can start splitting.
An old tree stump is suitable as a base for this.
It has the advantage that the work can be carried out fairly quietly.
Who would want to reveal their position to all hunters and foresters with loud noises?
Stones are only conditionally suitable as a base: you have to expect that the axe blade will hit the stony surface while working and thus quickly lose its sharpness.

The piece of wood must now be split lengthways into three or four parts of roughly equal size.
The trunk must be divided along its entire length.
It is also crucial that the parts are equally thick, if possible, so that all parts burn at the same speed later during use.

Remove insides

Now that we have three or four pieces, the chimney can be carved out. To do this, chips are lifted along the inner edge of each piece, creating a free space in the middle. When the parts are put back together, this forms an internal tube that serves as a stove pipe. Ultimately, the point is to let the hot air rise in a targeted manner: the rising air creates a draft that constantly draws in fresh air from below, which is the chimney effect. When all parts are put back together, you should be able to look through the workpiece along its whole length.
The work itself does not have to be carried out too carefully. Likewise, not too much material should be removed, as that would only shorten the burning time. A fairly small opening is enough for a good draft; I use my thumb as a measure for the diameter.

Cut the combustion chamber

There are different ways to operate the rocket stove. I usually cut a small combustion chamber into the bottom. For this purpose, a notch is carved into the underside of one of the side parts.
You can do the rough preparatory work with the saw by cutting a triangle and then work the opening out a little more with the axe. The exact shape is up to you.
The opening just needs to be big enough to add fuel. This makes it easier to build up a sufficiently large flame during the first few minutes of lighting.

Assembling and installation

A relatively flat base is required to operate the stove. There, the parts are placed next to each other.
Most of the solutions I see from others use a piece of wire to hold the pieces together.
But first, you must have carried such a wire with you, and second, the wire must not be left behind afterwards.

Attach side supports

I use material from the surrounding area to support the parts. A few small branches anchored in the ground to hold the side parts are sufficient. With that, the construction is done and you can start operating the stove. As it burns down, the small sticks are simply moved along with the sides of the fire.
You could also sharpen the underside of the individual parts. This way, you can anchor the stove in the ground itself, which even saves you the side supports.
So, leave the wire at home and use a couple of small sticks.

Collect birch bark

Birch bark is ideal as a tinder material. Birch, a pioneer plant, can be found in many places in the forests. Some birches have areas where the bark has already peeled off. It is better to use dead birch trees; you can peel the bark off these trees with a knife or axe. The birch wood itself can also be used, as it contains quite a lot of essential oils, which enables it to ignite even when moist.

Prepare the tinder

Now we come to the operation of the rocket stove. For this, we need tinder to grow the flame large enough to light the wood with it. You can use birch bark to start a flame with a few sparks from a fire steel. Chop up the birch bark a little and place it on top of additional tinder material. Old bark is suitable as a base for carrying the glowing nest into the combustion chamber. If you have fatwood, you can, of course, also use it.

Kindle

As soon as you have lit the fire on the bark underneath, you can use a stick to push the embers into the combustion chamber. The chimney effect should set in immediately and ensure a reasonably constant flow of fresh air. After a short time, the fire begins to rise along the chimney pipe. The resulting heat flows out of the top opening and can be used immediately to heat food or water.
However, it is not advisable to place the mug directly on the chimney opening, as this interrupts the airflow; a lot of smoke is usually the result. Two small sticks under the cup solve the problem.

Conclusion

We are now able to build a rocket stove ourselves with minimal effort. This gives us a controlled fireplace that is well suited to heating food and water in damp weather. It is also recommended as a heat source near a camp. The wood consumption is very low, with a good heat yield at the same time.

To extinguish the flame, you can pull the individual parts of the rocket stove apart and press the embers against the ground. The flying sparks are easy to control with this type of fireplace. In an emergency, the fire can be put out quickly with a little water if you let it run directly into the chimney opening. Bear in mind, though, that the embers are not necessarily fully extinguished by this.

Patterns from the practical life of a software developer

Builder-Pattern

The book by the “Gang of Four” is part of the essential reading in just about every computer science curriculum. It describes and groups the basic patterns to give a good start on the topic of design patterns. But how do these patterns look later, in actual use?
Here we will take a closer look at one pattern and expand it.



The Pattern – Builder

The builder pattern is currently enjoying increasing popularity, as it allows you to build a fluent API.
It is also nice that an IDE can generate this pattern quite quickly. But what about using this design pattern in daily life?

The basic builder pattern

Let’s start with the basic pattern, the initial version with which we have all gained our first experience.
As an example, I’ll take a Car class with the attributes Engine and List<Wheel>. This is certainly not a precise description of a car, but it is enough to demonstrate some specific builder-pattern behaviours.

Now let’s start with the Car class.

public class Car {
     private Engine engine;
     private List<Wheel> wheelList;
     //SNIPP
 }

In this listing, I leave out the get and set methods. If you have a builder generated for this (including a static factory method newBuilder() on Car that returns a new Builder instance), you get something like the following.

public static final class Builder {
        private Engine engine;
        private List<Wheel> wheelList;
        private Builder() {
        }
        public Builder withEngine(Engine engine) {
            this.engine = engine;
            return this;
        }
        public Builder withWheelList(List<Wheel> wheelList) {
            this.wheelList = wheelList;
            return this;
        }
        public Car build() {
            return new Car(this);
        }
    }

Here the builder is implemented as a static inner class. The constructor of the “Car” class has also been modified.

    private Car(Builder builder) {
        setEngine(builder.engine);
        wheelList = builder.wheelList;
    }

On the one hand, the constructor’s visibility has changed from public to private; on the other hand, an instance of the builder is now passed in as a parameter. The builder is then used as follows:

    Car car = Car.newBuilder()
        .withEngine(engine)
        .withWheelList(wheels)
        .build();

An example – the car

If you now work with the builder pattern, you get to the point where you have to build complex objects. Let us extend our example by looking at the attributes of all the classes involved.

public class Car {
    private Engine engine;
    private List<Wheel> wheelList;
}
public class Engine {
    private int power;
    private int type;
}
public class Wheel {
    private int size;
    private int type;
    private int colour;
}

Now you can have a corresponding builder generated for each of these classes. If you stick to the basic pattern, it looks something like this for the class Wheel:

public static final class Builder {
        private int size;
        private int type;
        private int colour;
        private Builder() {}
        public Builder withSize(int size) {
            this.size = size;
            return this;
        }
        public Builder withType(int type) {
            this.type = type;
            return this;
        }
        public Builder withColour(int colour) {
            this.colour = colour;
            return this;
        }
        public Wheel build() {
            return new Wheel(this);
        }
    }

But what does it look like if you want to create an instance of the class Car? For each complex attribute of Car, we will create an instance using the builder. The resulting source code is quite extensive; at first glance, there was no reduction in volume or complexity.

public class Main {
  public static void main(String[] args) {
    Engine engine = Engine.newBuilder().withPower(100).withType(5).build();
    Wheel wheel1 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel2 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel3 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    List<Wheel> wheels = new ArrayList<>();
    wheels.add(wheel1);
    wheels.add(wheel2);
    wheels.add(wheel3);
    Car car = Car.newBuilder()
                 .withEngine(engine)
                 .withWheelList(wheels)
                 .build();


    System.out.println("car = " + car);
  }
}

This source code is not very nice and by no means compact. So how can you adapt the builder pattern so that, on the one hand, you have to write as little builder code as possible yourself and, on the other hand, you get more comfort when using it?

WheelListBuilder

Let’s take a little detour first. To unlock the full potential, we have to make the source code homogeneous, because this strategy lets us recognize patterns more easily. In our example, the creation of the List<Wheel> is to be outsourced to its own builder, a WheelListBuilder.

public class WheelListBuilder {
    public static WheelListBuilder newBuilder(){
      return new WheelListBuilder();
    }
    private WheelListBuilder() {}
    private List<Wheel> wheelList;
    public WheelListBuilder withNewList(){
        this.wheelList = new ArrayList<>();
        return this;
    }
    public WheelListBuilder withList(List<Wheel> wheelList){
        this.wheelList = wheelList;
        return this;
    }
    public WheelListBuilder addWheel(Wheel wheel){
        this.wheelList.add(wheel);
        return this;
    }
    public List<Wheel> build(){
        //test if there are 4 instances....
        return this.wheelList;
    }
}

Now our example from before looks like this:

public class Main {
  public static void main(String[] args) {
    Engine engine = Engine.newBuilder().withPower(100).withType(5).build();
    Wheel wheel1 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel2 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel3 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    List<Wheel> wheelList = WheelListBuilder.newBuilder()
        .withNewList()
        .addWheel(wheel1)
        .addWheel(wheel2)
        .addWheel(wheel3)
        .build();//more robust if you add tests at build()
    Car car = Car.newBuilder()
        .withEngine(engine)
        .withWheelList(wheelList)
        .build();
    System.out.println("car = " + car);
  }
}

Next, we connect the builder of the Wheel class with the WheelListBuilder class. The goal is a fluent API, so that we no longer create the instances of the Wheel class individually and then add them to the WheelListBuilder via the addWheel(Wheel w) method. For the developer, the usage should then look like this:

List<Wheel> wheels = wheelListBuilder
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .build();

So what happens here is the following: as soon as the addWheel() method is called, a new builder for the Wheel class is returned. The addWheelToList() method then builds the Wheel instance and adds it to the list. To make this work, the two builders involved have to be modified. On the WheelBuilder side, a back-reference to the WheelListBuilder is stored (set via withWheelListBuilder(..)), and the addWheelToList() method is added. It adds the freshly built instance of the Wheel class to the WheelListBuilder and returns the instance of the WheelListBuilder class.

private WheelListBuilder wheelListBuilder;
public Builder withWheelListBuilder(WheelListBuilder wheelListBuilder){
  this.wheelListBuilder = wheelListBuilder;
  return this;
}
public WheelListBuilder addWheelToList(){
  this.wheelListBuilder.addWheel(this.build());
  return this.wheelListBuilder;
}

On the side of the WheelListBuilder class, only the method addWheel() is added; it registers the WheelListBuilder as parent on the new Wheel builder.

  public Wheel.Builder addWheel() {
    Wheel.Builder builder = Wheel.newBuilder();
    builder.withWheelListBuilder(this);
    return builder;
  }

If we now transfer this to the other builders, we come to a pretty good result:

      Car car = Car.newBuilder()
          .addEngine().withPower(100).withType(5).done()
          .addWheels()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
          .done()
          .build();

The NestedBuilder

So far, the builders have been modified individually by hand. However, this can be implemented generically quite easily since it is just a tree of builders.

Every builder knows its children and its parent. The implementation required for this can be found in the NestedBuilder class. It is assumed here that the methods for setting attributes always begin with the prefix with. Since this seems to be the case with most builder generators, no manual adjustment is necessary. The method done() sets the result of the build() method on the parent. The call is made using reflection, and with this, the parent knows the instance of the child. At this point, I assume that the name of the attribute is the same as the class name; we will see later how different attribute names can be handled. The method withParentBuilder(..) enables the parent to announce itself to its child, so we now have a bidirectional connection.

public abstract class NestedBuilder<T, V> {

  public T done() {
    Class<?> parentClass = parent.getClass();
    try {
      V build = this.build();
      // convention: the parent builder has a setter named "with" + child class name
      String methodname = "with" + build.getClass().getSimpleName();
      Method method = parentClass.getDeclaredMethod(methodname, build.getClass());
      method.invoke(parent, build);
    } catch (NoSuchMethodException
            | IllegalAccessException
            | InvocationTargetException e) {
      e.printStackTrace();
    }
    return parent;
  }

  public abstract V build();

  protected T parent;

  // called by the parent builder to announce itself to this child builder
  public <P extends NestedBuilder<T, V>> P withParentBuilder(T parent) {
    this.parent = parent;
    return (P) this;
  }
}

Now the specific methods for connecting with the children can be added to a parent. The parent itself does not need to derive from NestedBuilder.

public class Parent {
  private KidA kidA;
  private KidB kidB;
  //snipp.....
  public static final class Builder {
    private KidA kidA;
    private KidB kidB;
    //snipp.....
    // to add manually
    private KidA.Builder builderKidA = KidA.newBuilder().withParentBuilder(this);
    private KidB.Builder builderKidB = KidB.newBuilder().withParentBuilder(this);
    public KidA.Builder addKidA() { return this.builderKidA; }
    public KidB.Builder addKidB() { return this.builderKidB; }
    //---------
    public Parent build() {
      return new Parent(this);
    }
  }
}

For the children, it looks like this: here, you only have to derive from NestedBuilder.

public class KidA {
  private String note;
  //snipp.....
  public static final class Builder extends NestedBuilder<Parent.Builder, KidA> {
    //snipp.....
  }
}

The use is then very compact, as shown in the previous example.

public class Main {
  public static void main(String[] args) {
    Parent build = Parent.newBuilder()
        .addKidA().withNote("A").done()
        .addKidB().withNote("B").done()
        .build();
    System.out.println("build = " + build);
  }
}

Any combination is, of course, also possible: a builder can be parent and child at the same time. Nothing stands in the way of building complex structures.

public class Main {
  public static void main(String[] args) {
    Parent build = Parent.newBuilder()
        .addKidA().withNote("A")
                  .addKidB().withNote("B").done()
        .done()
        .build();
    System.out.println("build = " + build);
  }
}

Happy Coding

Make a Temporary Drinking Cup from Wood and Paracord

Intro:

Sometimes you need a small container to catch a little water, hold small things together, or simply serve as a temporary drinking cup. Today we will look at how a makeshift cup can be made from a piece of round wood with simple means. All we need is a saw, a knife and a little paracord. But one thing at a time. Let’s start by choosing the right piece of wood.

Selecting The Right Stick Of A Tree

There are a few things to consider when choosing the appropriate piece of wood. First of all, I would like to explicitly ask you to use dead wood whenever possible. This is not only so that no trees are damaged; dry deadwood also has the advantage that residual moisture will not affect the taste.

Under no circumstances should poisonous woods such as yew be used. Most yew species, such as the European yew (Taxus baccata), contain highly toxic ingredients such as taxine B. Bark, needles and seeds are poisonous; only the red seed coat contains no toxins. Cases of fatal yew poisoning are known in humans, cattle and horses.


The use of softwood can also be unfavourable, as these woods often have a high resin content. This resin not only gums up the tools used but is also very stubborn on the skin. The resins themselves leave a nutty to very bitter taste that can be quite unpleasant.


When the right piece of deadwood has been found, the question of the right size comes up. For the first attempts at a drinking cup, I recommend a piece you can enclose with your hand. Up to this size, the work steps can still be carried out quickly with relatively small tools. If the pieces are too thick, a larger tool is quickly needed.


The wood should also not have been dead for too long, so that the structure is still firm and not decomposed by insects. If you knock on the piece of wood and it makes a dull sound, it may have become too damp. Pieces of wood that do not touch the ground are usually more suitable, as they are drier than pieces lying directly on the ground.
In terms of structure, areas with few or no knotholes are suitable. Branches that have grown out of the trunk leave holes that are not conducive to a cup’s function.

Saw The Workpiece To Size

When sawing out the workpiece, a length of about the palm of my hand including the fingers has proven practical for me. The longer the pieces, the more difficult it is to split them with small tools. The sawing itself should be carried out cleanly so that the edges do not splinter or break off. After the first cut, be sure to check the inside of the wood for damage from insects or fungi. If the wood is already severely damaged from the inside, further use is not recommended.

Split It Into Parts

The piece of wood must now be split into three or four parts. You can use an axe for this. It is also possible to use a knife and a wooden stick as a hammer. If you do, it is best to use a full-tang knife.

Process The Individual Parts With The Knife

As soon as the three or four parts are in place, you can start flattening the insides. The goal is to have a cavity in the middle when you put all the pieces back together later. So that you don’t accidentally work on the entire length, you can either mark the area with a pen or use the saw: with a saw cut, you mark the inside where the bottom of the vessel is to be.

You should not work on the side walls. If you can work very precisely, it may succeed, but most of the time, the result is bad. Use the surfaces that resulted from splitting and leave them as they are. This gives excellent results in terms of water-tightness.

Assemble And Tie With Paracord

The last step is to put the individual parts back together. It is, of course, easier if you have marked the individual workpieces beforehand. As soon as all parts have been brought together, you can start to wrap a piece of paracord tightly around the bottom of the cup and complete this wrapping with a knot. The same is then repeated at the top of the cup. When everything is tightly wrapped, you can start with the first operational test.

Function Test With Water or Coffee

Finally, you can test the cup by filling it with water and looking for leaks. If you want, you can seal the seams with liquid wax; in my case, I didn’t. Please note that for a cup that is to be used for drinking, only drinking water should be used in the test phase, since proper rinsing is not possible due to the relatively rough wooden surface.

Conclusion

We have now seen how you can make a makeshift cup in a few minutes with an axe, a saw and two pieces of paracord. It is crucial to choose the right piece of wood. And once again, the important note: you must not use poisonous woods.
Have fun!
Cheers Sven

Delegation Versus Inheritance In Graphical User Interfaces

Intro

In this article, we will look at the difference between the concepts of inheritance and delegation. Or, to put it better: why I prefer delegation, and why I want to emphasize this rarely-used approach in Java.

The Challenge

The challenge we face today is quite common in the field of graphical user interfaces, whether desktop or web apps. Java is widely used as the development language in both worlds, and it does not matter whether we are in classic Swing, JavaFX, or a web framework like Vaadin. I have deliberately opted for a pseudo class model in core Java, as I’d like to look at the design patterns without any framework-specific technical details.

The goal is to create a custom component that consists of a text input field and a button. Both elements should be displayed next to each other, i.e. in a horizontal layout. The respective components have no function in this example; I want to work exclusively towards the differences between inheritance and delegation.

Too lazy to read? Check out my YouTube version!

The Base Class Model

Most frameworks provide the essential basic components. In our case these are a TextField, a Button, and a horizontal or vertical layout. However, all of these components are embedded in an inheritance structure. In our case, I chose the following construction: each component implements the Component interface, for which there is an abstract implementation called AbstractComponent.

The class AbstractComponent contains framework-specific, technology-based implementations. The Button as well as the TextField extend the class AbstractComponent. Layouts are usually separate and therefore form a specialized group of components; in our case this leads to an abstract class named Layout, which inherits from AbstractComponent.

In this abstract class, there are layout-specific implementations that are the same for all kinds of layouts. The implementations HorizontalLayout and VerticalLayout are based on it. Altogether, this is already quite a complex initial model.
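To make the following examples easier to follow, here is a minimal sketch of this pseudo model. It is deliberately reduced: method bodies are empty or trivial, each type would live in its own file, and HasLogger is assumed to be a small helper interface; the names simply follow the description above.

public interface Component { }

public interface HasLogger {
  default java.util.logging.Logger logger() {
    return java.util.logging.Logger.getLogger(getClass().getName());
  }
}

public abstract class AbstractComponent implements Component {
  // framework-internal child handling, shared by all components
  public void addComponent(Component component) { }
  // technically motivated framework method; in a real framework this
  // might be abstract, forcing every subclass to deal with it
  public void doFrameworkSpecificThings() { }
}

public class Button extends AbstractComponent {
  public void click() { }
}

public class TextField extends AbstractComponent {
  private String text = "";
  public void setText(String text) { this.text = text; }
  public String getText() { return text; }
}

public abstract class Layout extends AbstractComponent {
  // layout-specific implementations shared by all layouts
  public void doSomethingLayoutSpecific() { }
}

public class HorizontalLayout extends Layout {
  public void horizontalSpecific() { }
}

public class VerticalLayout extends Layout { }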

Inheritance — First Version

In the first version, I show a solution that I have often seen in projects. As a basis for a custom component, a base component from the framework is used as the parent. Direct inheritance from a layout is often used to structure all other internal child components on the screen. Inside the constructor, the internally required elements are created and added to the inherited layout structure.

public class InputComponent
    extends HorizontalLayout // Layout itself is abstract
    implements HasLogger {

  private final Button button = new Button();
  private final TextField textField = new TextField();

  public InputComponent() {
    addComponent(textField);
    addComponent(button);
  }

  public void click() {
    button.click();
  }

  public void setText(String text) {
    textField.setText(text);
  }

  public String getText() {
    return textField.getText();
  }
}

If you now look at how the component behaves in later use, it becomes apparent that deriving from a fundamental component brings its own pitfalls.

What exactly happened here? An instance of the custom component InputComponent can now be viewed as a layout. But conceptually it is not a layout; treating it as one is even wrong. All public methods inherited from the layout implementation are available on this component as well. But we wanted to achieve something else: first of all, to reuse the existing code provided by the component implementation HorizontalLayout.

On the other hand, we want a component that exposes to the outside only the methods needed for the intended interaction, in this case, symbolically, the public methods of the Button and the TextField. Besides, this component is tied to a visual layout, which allows interactions that are not part of the domain-specific behaviour of this component. This technical debt should be avoided as much as possible.

In practical terms, general methods from the HorizontalLayout implementation are made visible to the outside. If somebody uses exactly these methods and the parent later becomes a VerticalLayout, the source code will no longer compile without further corrections.

public class MainM01 implements HasLogger {
  public static void main(String[] args) {
    var inputComponent = new InputComponent();
    inputComponent.setText("Hello Text M01");
    inputComponent.click();
    // critical things
    inputComponent.doSomethingLayoutSpecific();
    inputComponent.horizontalSpecific();
    inputComponent.doFrameworkSpecificThings();
  }
}

Inheritance — Second Version

The custom component has to fit into the already existing component hierarchy of the framework. A place must be found inside the inheritance hierarchy to start from; otherwise, the custom component cannot be used. But at the same time, we want neither to own specific implementation details nor the effort of implementing basic technical requirements of the framework. The point at which you hook into the inheritance must be chosen wisely.

Assume that the class AbstractComponent is the starting point we are looking for. If you derive your class from it, you certainly get the essential features that you would like to have as a user of the framework. However, this abstraction usually comes with framework-specific obligations. This abstract class is an internally used, fundamental element, and starting from it very likely leads to the need to implement internal, technically motivated methods. As an example, the method doFrameworkSpecificThings() has been created and implemented with just a log message.

public class InputComponent
    extends AbstractComponent
    implements HasLogger {

  private final Button button = new Button();
  private final TextField textField = new TextField();

  public InputComponent() {
    var layout = new HorizontalLayout();
    layout.addComponent(textField);
    layout.addComponent(button);
    addComponent(layout);
  }

  public void click() {
    button.click();
  }

  public void setText(String text) {
    textField.setText(text);
  }

  public String getText() {
    return textField.getText();
  }

  // too deep into the framework for the end user
  public void doFrameworkSpecificThings() {
    logger().info("doFrameworkSpecificThings - "
        + this.getClass().getSimpleName());
  }
}

In use, such a component is already a little less dangerous. Only the internal methods that are visible on every other component are accessible on this one as well.

public class MainM02 implements HasLogger {
  public static void main(String[] args) {
    var inputComponent = new InputComponent();
    inputComponent.setText("Hello Text M02");
    inputComponent.click();
    // critical things
    inputComponent.doFrameworkSpecificThings();
  }
}

But I am not happy with this solution yet. Very often, there is no requirement for new components on the technical side. Instead, they are compositions of already existing basic elements, combined in a domain-specific context.

Composition — My Favorite

So what can you do at this point? The beautiful thing about the following solution is that you can use it to put a wrapper around already existing components that were built with inheritance. One solution is to create a generic composite: Composite&lt;T extends AbstractComponent&gt;.

This class serves as an envelope for the composition of the required components. It can even implement the interface Component directly, so the technical methods of the abstract implementation are neither repeated nor exposed to the outside. The type T is the type of the outermost component held in the composition; in our case, it is the HorizontalLayout. With the method getComponent(), you can access this instance if necessary.
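A minimal sketch of what such a Composite could look like in the pseudo model above; a comparable class in a real framework will differ in detail:

public class Composite<T extends AbstractComponent> implements Component {

  private final T component;

  protected Composite(T component) {
    this.component = component;
  }

  // access to the wrapped root component, e.g. the layout
  protected T getComponent() {
    return component;
  }
}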

public final class InputComponent
    extends Composite<HorizontalLayout>
    implements HasLogger {

  private final Button button = new Button();
  private final TextField textField = new TextField();

  public InputComponent() {
    super(new HorizontalLayout());
    getComponent().addComponent(textField);
    getComponent().addComponent(button);
  }

  public void click() {
    button.click();
  }

  public void setText(String text) {
    textField.setText(text);
  }

  public String getText() {
    return textField.getText();
  }
}

Seen this way, it is a neutral shell, but towards the outside it behaves as a minimal component, since it fulfils only the minimal contract of the Component interface. Again, only the methods that are explicitly delegated are visible to the outside. Using the component is therefore harmless.

public class MainSolution {
  public static void main(String[] args) {
    var inputComponent = new InputComponent();
    inputComponent.setText("Hello Text M03");
    inputComponent.click();
  }
}

Targeted Inheritance

Let’s conclude with what I believe is a rarely used Java feature at the class level: the keyword final.

To prevent unintentional derivation, I recommend the targeted use of final classes. From that point on, nobody can continue the unfavourable inheritance on top of the composition. Understandably, most frameworks make no use of it; after all, you want to allow the user of the Button component to offer a specialized version. But at the boundary of your own abstraction level, you can use it very well.
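Applied to the component from the composition example, the effect is immediately visible; the subclass name below is, of course, made up.

public final class InputComponent
    extends Composite<HorizontalLayout>
    implements HasLogger {
  // ... as shown above
}

// does not compile: cannot inherit from final InputComponent
// class SpecialInputComponent extends InputComponent { }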

Conclusion

At this point, we have seen how you can achieve a more robust variant of a component by using composition and delegation rather than inheritance. You can also use this approach if you are confronted with legacy source code containing this anti-pattern. It is not always possible to clean up everything or change it down to the last detail. But I hope this has given you an incentive to approach such situations.

The source code for this example can be found on GitHub.

Cheers Sven!

A Challenge of the Software Distribution

The four factors that are working against us

Software development depends on more and more dependencies, and the frequency of deployments is increasing; both trends reinforce each other. Another element that turns the delivery of software into a network bottleneck is the use of compound artefacts. And the last trend working against us is the exploding number of edges, better called edge nodes. All four trends together are a challenge for the infrastructure. But what can we do about it?

Edge-Computing

Before we look at the acceleration strategies, let me briefly explain the term "Edge", or better "Edge Computing", because it is often used in this context.


What Is Edge Computing?

The principle of edge computing states that data processing takes place at the Edge of the network. Which device is ultimately responsible for processing the data can differ depending on the application and the implementation of the concept.
An edge device is a device on the network periphery that generates, processes or forwards data itself. Examples of edge devices are smartphones, autonomous vehicles, sensors or IoT devices such as fire alarms.
An edge gateway is installed between the edge devices and the network. It receives data from edge devices that does not have to be processed in real time, processes specific data locally or selectively, and sends the remaining data on to other services or central data centres. Edge gateways have wireless or wired interfaces both to the edge devices and to the communication networks of private or public clouds.


Pros of Edge Computing

The data processing takes place in the vicinity of the data source, minimising transmission and response times; communication is possible almost in real time. At the same time, data throughput and bandwidth usage in the network are reduced, since only the data that cannot be processed locally need to be transmitted to central data centres. Many functions can be maintained even if the network, or parts of it, fail. The performance of edge computing scales by providing more intelligent devices at the network periphery.

Cons of Edge Computing

Edge computing offers more security due to locally limited data storage, but only if appropriate security concepts are in place for the decentralised devices. Due to the heterogeneity and the large number of different devices, the effort involved in implementing these security concepts increases.

Fog Computing

Edge computing and fog computing are both decentralised data-processing concepts. Fog computing inserts another layer with so-called fog nodes between the edge devices and the cloud. These are small, local data centres in the access areas of the cloud. The fog nodes collect the data from the edge devices, select the data to be processed locally or decentrally, and either forward it to central servers or process it directly themselves.
Taking the best of both worlds means combining the principles of edge and fog computing.

What Are the Acceleration Options for Software Distribution?

There are different strategies to scale the distribution of binaries, and every solution suits a specific use case. We will not limit ourselves to cloud-only solutions, because companies operate worldwide and have to deal with different governmental regulations and restrictions. In addition to these restrictions, I want to highlight the need for hybrid solutions as well. Hybrid solutions include on-prem resources as well as air-gapped infrastructure used for high-security environments.

a) Custom Solution based on replication or scaling servers

One possibility to scale inside your own network and architecture is scaling hardware and working with direct replication. Implementing this yourself will most likely consume a considerable budget of workforce, knowledge, time, and money, since this is not a trivial project. At the same time, this approach is confined to the borders of the infrastructure you have access to.

b) P2P Networks

Peer-to-peer networks are based on equal nodes that share the binaries among themselves. The peer-to-peer approach implies that you will have many copies of your files. If you download a file from the network, all nodes can serve parts of it independently. Splitting up files and delivering them from different nodes simultaneously to the requesting node leads to constant, efficient network usage and reduced download times.

c) CDN – Content Delivery Network

CDNs are optimised to deliver large files across regions. The network itself is built out of a huge number of nodes that cache files for regional delivery. With this strategy, the origin server will not be overloaded.

Check out the video "DevSecOps - the Low hanging fruits" on my YouTube channel. It describes the balance between writing code yourself and adding a dependency, in each layer of a cloud-native app. The question is: what does this mean for DevSecOps?

JFrog Solution

With the three techniques mentioned, you can build a powerful architecture that fits your needs. But integrating all these technologies and products is not easy. We faced this challenge as well, and over the years we found solutions that we integrated into a single DevSecOps platform called "The JFrog Platform". I don’t want to give an overview of all components here; for that, check out my YouTube channel. I want to focus only on the components responsible for the distribution of binaries.

JFrog Distribution

With JFrog Distribution, the knowledge about the content of the repositories and the corresponding metadata is used to provide a replication strategy. The replication solution is designed for internal and external repositories, bringing the binaries all the way down to the place where they are needed. The infrastructure can be built in a hybrid model, including on-prem and cloud nodes. Even air-gapped setups are possible with import/export mechanisms. In this scenario, we are focusing on a scalable caching mechanism that is optimised for reads.

What is a Release Bundle?

A Release Bundle is a composition of binaries. These binaries can be of different types, like Maven, Debian, or Docker. The Release Bundle can be seen as a Bill of Materials (BOM). The content as well as the Release Bundle itself is immutable. This immutability makes it possible to implement efficient caching and replication mechanisms across different networks and regions.

What is an Edge Node in this context?

An Edge Node in our context is a node that provides the functionality of a read-only Artifactory instance. With this Edge Node, the delivery process is optimised, and replication is done in a transactional way. The difference to the original meaning of an edge node is that this instance is not the consuming or producing element itself. It can be seen as a fog node: the first layer above the real edge-node layer.

P2P Download

The P2P solution focuses on environments that need to handle download bursts inside the same network or region. Such download bursts occur in scenarios like updating a server farm or updating a microservice mesh. The usage is unidirectional: the consumers do not publish updates themselves; they just wait for a new version, and all consumers update at the same time. This behaviour is a perfect case for the P2P solution. Artifactory, or an Edge Node in the same network or region, initiates an update of all P2P nodes with a new version of a binary. The consumers then request the binary from a P2P node and no longer from the Artifactory instance. The responsible Artifactory instance manages the P2P nodes, which leads to zero maintenance on the user side. Keep in mind that RBAC is active at the P2P nodes as well.

CDN Distribution

The CDN solution is optimised to deliver binaries to different parts of the world. It comes in two flavours. One is public and mostly used to distribute SDKs, drivers, or other freely available binaries. The other flavour focuses on private distribution. Whatever solution you use, the RBAC defined inside the Access module is respected, including setups with authentication and authorisation and unique links with access tokens.

Conclusion

OK, it is time for the conclusion. What did we discuss today?
With the increasing number of dependencies, a higher frequency of deployments, and the constantly growing number of applications and edge nodes, we are facing scalability challenges.
We looked at three ways to increase your delivery speed. The discussed solutions are based on:

a) JFrog Distribution helps you build a strong replication strategy inside your hybrid infrastructure to speed up the development cycle.
b) JFrog P2P allows you to handle massive download bursts inside a network or region, for example when binaries must be distributed to a high number of consumers concurrently.
c) JFrog CDN delivers binaries worldwide via regional data centres to make the experience for the consumer as good as possible.


All this is bundled into the JFrog DevSecOps Platform. 


Cheers Sven

DevSecOps – Be Independent Again

What do the developments in the news of the last few months have to do with risk management and the practice of keeping artefacts in stock, and why is this an elementary component of DevSecOps?


If you prefer to watch this post as a video, check out the version on my YouTube channel.


What Has Happened So Far

Again and again, changes happen that set things in motion which were considered settled. In some cases, services or products have been freely available for many years, or their restrictions have not changed in a long time. I am taking one of the most recent changes as an occasion to show the resulting behaviour and to outline solutions that help you deal with it.

In software development, repositories are one of the central elements that enable you to efficiently deal with the abundance of dependencies in a software environment. A wide variety of types and associated technologies have evolved over the decades. But there is a common pattern: most technologies ended up with a global central authority that is seen as the essential reference.

I examined the topic of repositories from a generic point of view in a little more detail on YouTube.

As an example, I would like to briefly show what a minimal technology stack can look like today. Java is used for the application itself, whose dependencies are defined using Maven; for this, we need access to Maven repositories. Debian repositories [Why Debian Repos are mission-critical..] are used for the operating system on which the application runs. The components are then packaged into Docker images via Docker registries, and finally the applications are orchestrated as a composition of Docker images using Kubernetes. Here alone, we are dealing with four different repository types. And I have not even mentioned the generic repositories needed to provide the tools used within the DevSecOps pipeline.
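As a purely illustrative sketch of how these repository types meet in a single build (image name, package, and paths are placeholders): the base image comes from a Docker registry, the operating-system packages from Debian repositories, and the application JAR was built beforehand against Maven repositories. A Kubernetes manifest would then reference the resulting image.

# base image pulled from a Docker registry
FROM debian:stable-slim

# operating-system packages installed from Debian repositories
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
 && rm -rf /var/lib/apt/lists/*

# application built beforehand with Maven, i.e. resolved from Maven repositories
COPY target/app.jar /opt/app/app.jar

CMD ["java", "-jar", "/opt/app/app.jar"]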

DockerHub And Its Dominance

The example that inspired me to write this article was DockerHub’s announcements. Access to this service was free, and there were no further restrictions on storage space and storage duration for freely available Docker images. This fact has led to a large number of open source projects using this repository for their purposes. Over the years a whole network of dependencies between these images has built up.

Docker Hub was in the news recently for two reasons.

Storage Restrictions

Previously, Docker images were stored indefinitely on DockerHub. On the one hand, this meant that nobody cared about the storage space of the Docker images. On the other hand, pretty much everyone counted on this not changing. Unfortunately, it has now changed: the retention period for inactive Docker images has been reduced to six months. What doesn’t sound particularly critical at first turns out to be quite uncomfortable in detail.

Download Throttling

Docker has limited the download rate to 100 pulls per six hours for anonymous users, and 200 pulls per six hours for free accounts. 200 sounds pretty bearable, but it makes sense to take a closer look. 200 requests per 6 hours are 200 requests per 360 minutes, i.e. roughly 0.55 requests per minute at a constant rate. First, many systems run more than one build, and therefore more than one request, every 2 minutes. Second, once the limit is reached, it can take more than half a business day to regain access. The latter is very critical: as a rule, limits are given per hour, which leads to a delay of a little less than an hour at most. Six hours is a different order of magnitude.
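One common countermeasure, anticipating the strategy discussed below, is to operate your own pull-through cache and point the Docker daemon at it via a registry mirror in /etc/docker/daemon.json. Each image is then fetched from Docker Hub only once and served from your own infrastructure afterwards; the hostname is a placeholder:

{
  "registry-mirrors": ["https://docker-mirror.internal.example.com"]
}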

Maven and MavenCentral

If you look at the different technologies, a similar monoculture emerges in the Maven area. Here, Maven Central is a singular point operated by one company, and that company was recently bought by a larger one. What does this mean for the future of this repository? I don’t know. However, it is not uncommon for costs to be optimised after a takeover. A legitimate question arises here: what economic advantage does the operator of such a central, free-of-charge infrastructure have?

JDKs

There have been so many structural changes here that I’m not even sure what the official name is. But there is one thing I observe with eagle eyes in projects: different versions, platforms, and providers of JDKs are a source of "joy" in LTS projects that should not be underestimated. Here, too, it is not guaranteed how long the providers will keep the respective builds of a JDK for a platform available. What is offered today can be optimised away tomorrow. You should also take a look at the JDKs that are used not only internally but also by customers. Who has all the installers of the JDKs in use in stock? Are these JDKs also used within your own CI pipeline, or do you trust the availability of specific Docker images?

Moderate Independence

How can this be countered? The answer is straightforward: fetch everything you need exactly once and then store it in your own systems. With that, we are running against the efforts of the last few years, so, as in most other cases, moderate use of this approach is recommended. More important than ever is the sensible use of freely available resources. A stringent retention strategy helps: not everything has to be kept indefinitely, and many elements held in the caches are no longer needed after a while. Sophisticated handling of repositories and the nesting of resources helps here. I cannot go into too much detail at this point, but it can be summarised in short form.

The structure of the respective repositories makes it possible, on the one hand, to create concrete compositions and, on the other hand, to carry out very efficient maintenance. Sources should be kept in individual repositories and then merged using virtual repositories. Set up well, this approach can even drastically reduce the number of build cycles.
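For Maven, the same idea can be sketched as follows: a mirror entry in ~/.m2/settings.xml routes every repository request through a single virtual repository on your own repository manager instead of hitting Maven Central directly. Host and repository names are placeholders:

<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Internal virtual repository</name>
      <!-- route all repository requests through the internal virtual repository -->
      <mirrorOf>*</mirrorOf>
      <url>https://repo.internal.example.com/artifactory/maven-virtual</url>
    </mirror>
  </mirrors>
</settings>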

DevSecOps – Risk Minimization

There is another advantage in dealing with the subject of "independence": all files that are kept in your own structures can be analysed with regard to vulnerabilities and compliance. When these elements are in one place, in a repository manager, I have a central location where I can scan them. The result is a complete dependency graph that includes the dependencies of an application as well as the associated workbench. That, in turn, is one of the key statements when you turn to the topic of DevSecOps: security is like quality. It’s not just a tool; it’s not just one person responsible for it. It is a philosophy that has to run through the entire value chain.

Happy Coding,

Sven Ruppert 
