Author Archives: Sven Ruppert

Patterns from the practical life of a software developer

Builder-Pattern

The book by the “Gang of Four” is essential reading in just about every computer science curriculum. It describes and groups the basic patterns to provide a good start on the topic of design patterns. But how do they look later in use?
Here we will take a closer look at one pattern and expand it.



The Pattern – Builder

The builder pattern is currently enjoying increasing popularity as it allows you to build a fluent API.
It is also lovely that an IDE can generate this pattern quite quickly. But how about using this design pattern in daily life?

The basic builder pattern

Let’s start with the basic pattern, the initial version with which we have already gained all our experience.
As an example, I’ll take a Car class with the attributes Engine and List<Wheel>. This description of a car is certainly not very precise, but it is enough to demonstrate some specific builder-pattern behaviours.

Now let’s start with the Car class.

public class Car {
     private Engine engine;
     private List<Wheel> wheelList;
     //SNIPP
 }

At this point, I leave out the get and set methods in this listing. If you generate a builder for this, you get something like the following.

    // the enclosing Car class also gets a factory method for the builder
    public static Builder newBuilder() {
        return new Builder();
    }

    public static final class Builder {
        private Engine engine;
        private List<Wheel> wheelList;
        private Builder() {
        }
        public Builder withEngine(Engine engine) {
            this.engine = engine;
            return this;
        }
        public Builder withWheelList(List<Wheel> wheelList) {
            this.wheelList = wheelList;
            return this;
        }
        public Car build() {
            return new Car(this);
        }
    }

Here the builder is implemented as a static inner class. The constructor of the “Car” class has also been modified.

    private Car(Builder builder) {
        setEngine(builder.engine);
        wheelList = builder.wheelList;
    }

On the one hand, the constructor has been changed from public to private, and on the other hand, an instance of the builder has been added as its parameter.

    Car car = Car.newBuilder()
        .withEngine(engine)
        .withWheelList(wheels)
        .build();

An example – the car

If you work with the builder pattern, you soon get to the point where you have to build complex objects. Let us now extend our example by looking at the attributes of the classes that make up a Car.

public class Car {
    private Engine engine;
    private List<Wheel> wheelList;
}
public class Engine {
    private int power;
    private int type;
}
public class Wheel {
    private int size;
    private int type;
    private int colour;
}

Now you can have a corresponding builder generated for each of these classes. If you stick to the basic pattern, it looks something like this for the class Wheel:

public static final class Builder {
        private int size;
        private int type;
        private int colour;
        private Builder() {}
        public Builder withSize(int size) {
            this.size = size;
            return this;
        }
        public Builder withType(int type) {
            this.type = type;
            return this;
        }
        public Builder withColour(int colour) {
            this.colour = colour;
            return this;
        }
        public Wheel build() {
            return new Wheel(this);
        }
    }

But what does it look like if you want to create an instance of the class Car? For each complex attribute of Car, we will create an instance using the builder. The resulting source code is quite extensive; at first glance, there was no reduction in volume or complexity.

public class Main {
  public static void main(String[] args) {
    Engine engine = Engine.newBuilder().withPower(100).withType(5).build();
    Wheel wheel1 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel2 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel3 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    List<Wheel> wheels = new ArrayList<>();
    wheels.add(wheel1);
    wheels.add(wheel2);
    wheels.add(wheel3);
    Car car = Car.newBuilder()
                 .withEngine(engine)
                 .withWheelList(wheels)
                 .build();


    System.out.println("car = " + car);
  }
}

This source code is not very nice and by no means compact. So how can you adapt the builder pattern so that, on the one hand, you have to write as little of the builder yourself as possible and, on the other hand, you get more comfort when using it?

WheelListBuilder

Let’s take a little detour first. To exploit the full potential, we have to make the source code homogeneous. This strategy enables us to recognize patterns more easily. In our example, the creation of the List<Wheel> is to be outsourced to its own builder, a WheelListBuilder.

public class WheelListBuilder {
    public static WheelListBuilder newBuilder(){
      return new WheelListBuilder();
    }
    private WheelListBuilder() {}
    private List<Wheel> wheelList;
    public WheelListBuilder withNewList(){
        this.wheelList = new ArrayList<>();
        return this;
    }
    public WheelListBuilder withList(List<Wheel> wheelList){
        this.wheelList = wheelList;
        return this;
    }
    public WheelListBuilder addWheel(Wheel wheel){
        this.wheelList.add(wheel);
        return this;
    }
    public List<Wheel> build(){
        //test if there are 4 instances....
        return this.wheelList;
    }
}

Now our example from before looks like this:

public class Main {
  public static void main(String[] args) {
    Engine engine = Engine.newBuilder().withPower(100).withType(5).build();
    Wheel wheel1 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel2 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    Wheel wheel3 = Wheel.newBuilder().withType(2).withColour(3).withSize(4).build();
    List<Wheel> wheelList = WheelListBuilder.newBuilder()
        .withNewList()
        .addWheel(wheel1)
        .addWheel(wheel2)
        .addWheel(wheel3)
        .build();//more robust if you add tests at build()
    Car car = Car.newBuilder()
        .withEngine(engine)
        .withWheelList(wheelList)
        .build();
    System.out.println("car = " + car);
  }
}

Next, we connect the builder of the Wheel class and the WheelListBuilder class. The goal is a fluent API, so that we no longer have to create the instances of the Wheel class individually and then add them to the WheelListBuilder via the addWheel(Wheel w) method. In use, it should then look like this for the developer:

List<Wheel> wheels = wheelListBuilder
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
     .build();

So what happens here is the following: As soon as the addWheel() method is called, a new instance of the class Wheel.Builder is returned. The addWheelToList() method creates the instance of the Wheel class and adds it to the list. To achieve that, you have to modify the two builders involved. On the Wheel.Builder side, the addWheelToList() method is added. It adds the instance of the Wheel class to the WheelListBuilder and returns the instance of the WheelListBuilder class.

private WheelListBuilder wheelListBuilder;
// called by WheelListBuilder.addWheel() to connect the two builders
public Builder withWheelListBuilder(WheelListBuilder wheelListBuilder){
  this.wheelListBuilder = wheelListBuilder;
  return this;
}
public WheelListBuilder addWheelToList(){
  this.wheelListBuilder.addWheel(this.build());
  return this.wheelListBuilder;
}

On the side of the WheelListBuilder class, only the method addWheel()  is added.

  public Wheel.Builder addWheel() {
    Wheel.Builder builder = Wheel.newBuilder();
    builder.withWheelListBuilder(this);
    return builder;
  }

If we now transfer this to the other builders, we come to a pretty good result:

      Car car = Car.newBuilder()
          .addEngine().withPower(100).withType(5).done()
          .addWheels()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
            .addWheel().withType(1).withSize(2).withColour(2).addWheelToList()
          .done()
          .build();

The NestedBuilder

So far, the builders have been modified individually by hand. However, this can be implemented generically quite easily since it is just a tree of builders.

Every builder knows its children and its parent. The implementations required for this can be found in the NestedBuilder class. It is assumed here that the methods for setting attributes always begin with the prefix with. Since this seems to be the case with most generators for builders, no manual adjustment is necessary here. The method done() sets the result of its build() method on its parent. The call is made using reflection. With this, a parent knows the instance of its child. At this point, I assume that the name of the attribute is the same as the class name. We will see later how this can be achieved with different attribute names. The method withParentBuilder(..) enables the parent to announce itself to its child. We now have a bidirectional connection.

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public abstract class NestedBuilder<T, V> {

  public T done() {
    Class<?> parentClass = parent.getClass();
    try {
      V build = this.build();
      String methodName = "with" + build.getClass().getSimpleName();
      Method method = parentClass.getDeclaredMethod(methodName, build.getClass());
      method.invoke(parent, build);
    } catch (NoSuchMethodException
            | IllegalAccessException
            | InvocationTargetException e) {
      e.printStackTrace();
    }
    return parent;
  }

  public abstract V build();

  protected T parent;

  @SuppressWarnings("unchecked")
  public <P extends NestedBuilder<T, V>> P withParentBuilder(T parent) {
    this.parent = parent;
    return (P) this;
  }
}

Now the specific methods for connecting with the children can be added to a father. There is no need to derive from NestedBuilder.

public class Parent {
  private KidA kidA;
  private KidB kidB;
  //snipp.....
  public static final class Builder {
    private KidA kidA;
    private KidB kidB;
    //snipp.....
    // to add manually
    private KidA.Builder builderKidA = KidA.newBuilder().withParentBuilder(this);
    private KidB.Builder builderKidB = KidB.newBuilder().withParentBuilder(this);
    public KidA.Builder addKidA() { return this.builderKidA; }
    public KidB.Builder addKidB() { return this.builderKidB; }
    //---------
    public Parent build() {
      return new Parent(this);
    }
  }
}

And with the children, it looks like this: Here, you only have to derive from NestedBuilder.

public class KidA {
  private String note;
  //snipp.....
  public static final class Builder extends NestedBuilder<Parent.Builder, KidA> {
    //snipp.....
  }
}
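
For completeness, here is a sketch of what could be hidden behind the //snipp comments in KidA, assuming the basic builder pattern from the beginning; the names newBuilder, withNote and build are my assumption, and the Parent.Builder would analogously need the withKidA(KidA) and withKidB(KidB) setters that done() looks up via reflection.

public class KidA {
  private String note;
  public void setNote(String note) { this.note = note; }
  private KidA(Builder builder) { setNote(builder.note); }
  public static Builder newBuilder() { return new Builder(); }

  public static final class Builder extends NestedBuilder<Parent.Builder, KidA> {
    private String note;
    private Builder() {}
    // setter with the "with" prefix, as expected by done()
    public Builder withNote(String note) {
      this.note = note;
      return this;
    }
    @Override
    public KidA build() {
      return new KidA(this);
    }
  }
}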

The use is then very compact, as shown in the previous example.

public class Main {
  public static void main(String[] args) {
    Parent build = Parent.newBuilder()
        .addKidA().withNote("A").done()
        .addKidB().withNote("B").done()
        .build();
    System.out.println("build = " + build);
  }
}

Any combination is, of course, also possible. This means that a builder can be a parent and a child at the same time. Nothing stands in the way of building complex structures.

public class Main {
  public static void main(String[] args) {
    Parent build = Parent.newBuilder()
        .addKidA().withNote("A")
                  .addKidB().withNote("B").done()
        .done()
        .build();
    System.out.println("build = " + build);
  }
}

Happy Coding

Make a Temporary Drinking Cup from Wood and Paracord

Intro:

Sometimes you need a small container to catch a little water, hold small things together, or simply serve as a temporary drinking cup. Today we will look at how a makeshift cup can be made from a round piece of wood with simple means. All we need is a saw, a knife and a little paracord. But one thing at a time. Let’s start by choosing the right piece of wood.

Selecting The Right Stick Of A Tree

There are a few things to consider when choosing the appropriate piece of wood. First of all, I would explicitly like to ask you to use deadwood whenever possible. This is not only so that no trees are damaged; dry deadwood also has the advantage that residual moisture will not affect the taste.

Under no circumstances should poisonous woods such as yew be used. Most yew species, such as the European yew (Taxus baccata), contain very toxic ingredients such as Taxin B. Bark, needles, and seeds are poisonous. However, the red seed coat does not contain any toxins. Cases of fatal poisoning by yew trees are known from humans, cattle and horses.


The use of softwood can also be unfavourable, as these woods often have a high resin content. This resin not only gums up the tools used but is also very stubborn on the skin. The resins themselves leave a nutty to very bitter taste that can be very unpleasant.


When the right piece of deadwood has been found, the question of the right size comes up. For the first attempts, I recommend a piece you can enclose with your hand if it is meant to be a drinking cup. Up to this size, the work steps can still be carried out quickly with relatively small tools. If the pieces are too thick, you quickly need larger tools.


The wood should also not have been dead for too long, so that the structure is still firm and not decomposed by insects. If you knock on the piece of wood and it makes a dull sound, it may have become too damp. Pieces of wood that do not touch the ground are usually more suitable, as they are dry compared to pieces that lie directly on the ground.
In terms of structure, areas that have few or no knotholes are suitable. Branches that have grown out of the trunk leave holes in the trunk that are not conducive to a cup’s function.

Saw The Workpiece To Size

When sawing out the workpiece, the length of the palm of the hand, including fingers, has proven to be practical for me. The longer the pieces, the more difficult it is to split them with small tools. The sawing itself should be carried out cleanly so that the edges do not splinter or break off. After the first cut, be sure to check the inside of the wood for damage from insects or fungi. If the wood is already severely damaged from the inside, further use is not recommended.

Split It Into Parts

The piece of wood must now be split into three or four parts. You can use an axe for this. It is also possible to use a knife and a wooden stick as a hammer. Please make sure to use a full-tang knife if possible.

Process The Individual Parts With The Knife

As soon as the three or four parts are in place, you can start flattening the insides. The goal is to have a cavity in the middle when you put all the pieces back together later. So that you don’t accidentally work on the entire length, you can either mark it with a pen or use the saw. With the saw, you can cut the inside where the bottom of the vessel is to be.

You should not work on the side walls. If you can work very precisely, it may work, but most of the time the result is bad. Use the surface structure that resulted from splitting and leave it as it is. This gives excellent results in terms of water tightness.

Assemble And Tie With Paracord

The last step is to put the individual parts back together. It is, of course, easier if you have marked the individual workpieces. As soon as all parts have been brought together, you can start to wrap a piece of paracord tightly around the bottom of the cup. Complete this process with a knot. The same must then be repeated at the top of the cup. When you have everything tightly wrapped, you can start with the first operational test.

Function Test With Water or Coffee

Finally, you can now test the cup by filling it with water and looking for leaks. If you want, you can also seal the seams with liquid wax. In my case, I didn’t do it. Please note that only drinking water should be used in the test phase with a cup that is to be used for drinking. Subsequent rinsing is not really possible due to the relatively rough wooden surface.

Conclusion

We have now seen how you can make a makeshift cup in a few minutes with an axe, a saw and two pieces of paracord. It is crucial to choose the right piece of wood. Here again the important note: you must not use poisonous woods.
Have fun!
Cheers Sven

Delegation Versus Inheritance In Graphical User Interfaces

Intro

In this article, we will look at the difference between the inheritance and delegation concepts. Or, to put it better, why I prefer delegation and why I want to emphasize this rarely-used feature in Java.

The Challenge

The challenge we face today is quite common in the field of graphical user interfaces like desktop or web apps. Java is widely used as the development language for both worlds, and it does not matter whether we are in classic Swing, JavaFX, or the field of web frameworks like Vaadin. I’ve deliberately opted for a pseudo-class model in core Java, as I’d like to look at the design patterns here without any technical details.

The goal is to create a custom component that consists of a text input field and a button. Both elements should be displayed next to each other, i.e. in a horizontal layout. The respective components have no function in this example. I want to focus here exclusively on the differences between inheritance and delegation.

Too lazy to read? Check out my YouTube version!

The Base Class Model

Mostly, there are the respective essential components in a framework. In our case, it is a TextField, a button, and a horizontal or vertical layout. However, all of these components are embedded in an inheritance structure. In our case, I chose the following construction. Each component corresponds to the Component interface, for which there is an abstract implementation called AbstractComponent.

The class AbstractComponent contains framework-specific and technologically-based implementations. The Button, as well as the TextField, extend the class AbstractComponent. Layouts are usually separate and, therefore, a specialized group of components that leads in our case to an abstract class named Layout, which inherits from the class AbstractComponent.

In this abstract class, there are layout-specific implementations that are the same for all sorts of layouts. The implementations HorizontalLayout and VerticalLayout are based on this. Altogether, this is already a quite complex initial model.
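
A minimal sketch of this pseudo-class model could look like the following. This is my assumption of its shape; each type would live in its own file, and the method bodies are only stubs. The method names match the ones used in the examples further down.

public interface Component { }

public abstract class AbstractComponent implements Component {
    // framework-specific, technically motivated implementations
    public void addComponent(Component component) { /* ... */ }
    public void doFrameworkSpecificThings() { /* ... */ }
}

public class Button extends AbstractComponent {
    public void click() { /* ... */ }
}

public class TextField extends AbstractComponent {
    private String text;
    public void setText(String text) { this.text = text; }
    public String getText() { return text; }
}

public abstract class Layout extends AbstractComponent {
    // layout-specific implementations shared by all layouts
    public void doSomethingLayoutSpecific() { /* ... */ }
}

public class HorizontalLayout extends Layout {
    public void horizontalSpecific() { /* ... */ }
}

public class VerticalLayout extends Layout { }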

Inheritance — First Version

In the first version, I show a solution that I have often seen in projects. As a basis for a custom component, a base component from the framework is used as a parent. The direct inheritance from a layout is often used to structure all other internally child components on the screen. Inside the constructor, the internally required elements are generated and added to the inherited layout structure.

public class InputComponent
  extends HorizontalLayout // Layout is abstract
  implements HasLogger {
  private Button button = new Button();
  private TextField textField = new TextField();
  public InputComponent() {
    addComponent(textField);
    addComponent(button);
  }
  public void click() {
    button.click();
  }
  public void setText(String text) {
    textField.setText(text);
  }
  public String getText() {
    return textField.getText();
  }
}

If you now look at how the component will behave during later use, it becomes visible that a derivation from a fundamental component brings its pitfalls with it.

What exactly happened here? If an instance of the custom component InputComponent is used, it can be treated as a layout. But semantically that is not the case anymore; on the contrary, it is even wrong. All methods inherited from the layout implementation are also publicly available on this component. But you wanted to achieve something else. First of all, we wanted to reuse the existing code provided in the component implementation HorizontalLayout.

On the other hand, you want a component that only exposes, by delegation, the methods needed for the necessary interaction, i.e. for the custom behaviour. In this case, these are the public methods from the Button and the TextField, used here symbolically. Besides, this component is tied to a visual design, which leads to possible interactions that are not part of the domain-specific behaviour of this component. This technical debt should be avoided as much as possible.

In practical terms, general methods from the implementation of the HorizontalLayout are made visible to the outside. If somebody uses exactly these methods, and later on the parent becomes a VerticalLayout, the source code cannot compile without further corrections.

public class MainM01 implements HasLogger {
   public static void main(String[] args) {
     var inputComponent = new InputComponent();
     inputComponent.setText("Hello Text M01");
     inputComponent.click();
     // critical things
     inputComponent.doSomethingLayoutSpecific();
     inputComponent.horizontalSpecific();
     inputComponent.doFrameworkSpecificThings();
   }
 }

Inheritance — Second Version

The custom component has to fit into the already existing component hierarchy of the framework. A place must be found inside the inheritance to start from; otherwise, the custom component cannot be used. But at the same time, we want to own neither framework-specific implementation details nor the effort of implementing basic technical requirements of the framework. The point at which you hook into the inheritance must be chosen wisely.

Please assume that the class AbstractComponent is what we are looking for as a starting point.
If you derive your class from it, you certainly have the essential features that you would like to have as a user of the framework. However, this abstraction mostly comes with the fact that framework-specific things also have to be considered. This abstract class is an internally used, fundamental element. Starting with this internal abstract class very likely leads to the need to implement internal, technically motivated methods. As an example, the method signature with the name doFrameworkSpecificThings() has been created and implemented with just a log message.

 public class InputComponent
     extends AbstractComponent
     implements HasLogger {
   private Button button = new Button();
   private TextField textField = new TextField();
   public InputComponent() {
     var layout = new HorizontalLayout();
     layout.addComponent(textField);
     layout.addComponent(button);
     addComponent(layout);
   }
   public void click() {
     button.click();
   }
   public void setText(String text) {
     textField.setText(text);
   }
   public String getText() {
     return textField.getText();
   }
   // too deep into the framework for the end user
   public void doFrameworkSpecificThings() {
     logger().info("doFrameworkSpecificThings - "
                             + this.getClass().getSimpleName());
   }
 }

In use, such a component is already a little less dangerous. Only the internal methods that are visible on every other component are accessible on this component as well.

public class MainM02 implements HasLogger {
   public static void main(String[] args) {
     var inputComponent = new InputComponent();
     inputComponent.setText("Hello Text M02");
     inputComponent.click();
     // critical things
     inputComponent.doFrameworkSpecificThings();
   }
 }

But I am not happy with this solution yet. Very often, there is no requirement for new components on the technical side. Instead, they are compositions of already existing essential elements, composed in a professional, domain-specific context.

Composition — My Favorite

So what can you do at this point? The beautiful thing about the following solution is that you can use it to put a wrapper around components that already exist and were built using inheritance. One solution may be to create a generic composite class: Composite<T extends AbstractComponent>.

This class serves as an envelope for the composition of the required components. This class can even implement the interface Component itself, so that the technical methods of the abstract implementation are neither repeated nor exposed to the outside. The type T is the type of the outermost component held in the composition. In our case, it is the horizontal layout. With the method getComponent(), you can access this instance if necessary.
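
A minimal sketch of such a Composite class, assuming the Component and AbstractComponent types from the model above, could look like this:

public abstract class Composite<T extends AbstractComponent> implements Component {

    private final T component;

    protected Composite(T component) {
        this.component = component;
    }

    // gives subclasses access to the wrapped root component
    protected T getComponent() {
        return component;
    }
}

Based on this wrapper, the custom component then looks like this: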

public final class InputComponent
   extends Composite<HorizontalLayout>
   implements HasLogger {
   private Button button = new Button();
   private TextField textField = new TextField();
   public InputComponent() {
     super(new HorizontalLayout());
     getComponent().addComponent(textField);
     getComponent().addComponent(button);
   }
   public void click() {
     button.click();
   }
   public void setText(String text) {
     textField.setText(text);
   }
   public String getText() {
     return textField.getText();
   }
 }

Seen this way, it is a neutral shell, but towards the outside it behaves as a minimal component, since it only fulfils the minimum contract via the Component interface. Again, only the methods that are explicitly provided are made visible to the outside by delegation. Its use is therefore harmless.

public class MainSolution {
   public static void main(String[] args) {
     var inputComponent = new InputComponent();
     inputComponent.setText("Hello Text M03");
     inputComponent.click();
   }
 }

Targeted Inheritance

Let’s conclude with what I believe is a rarely used Java feature at the class level: the keyword final.

To prevent an unintentional derivation, I recommend the targeted use of final classes. From this point on, unfavourable inheritance from the composition is no longer possible. Understandably, most frameworks do not make use of it. After all, you want to allow the user of the component Button to offer a specialized version. But at the top of your own abstraction level, you can very well use it.

Conclusion

At this point, we have seen how you can achieve a more robust variant of a composition by delegation rather than inheritance. You can also use this approach if you are confronted with legacy source code that contains this anti-pattern. It’s not always possible to clean up everything or change it to the last detail. But I hope this has given you an incentive to approach this situation.

The source code for this example can be found on GitHub.

Cheers Sven!

A Challenge of Software Distribution

The four factors that are working against us

Software development is becoming more and more dependent on dependencies, and the frequency of deployments is increasing. Both trends reinforce each other. Another element that turns the delivery of software into a network bottleneck is the usage of compound artefacts. And the last trend that is working against us is the exploding number of edges, or better called edge nodes. All four trends together are a challenge for the infrastructure. But what can we do about it?

Edge-Computing

Before we look at the acceleration strategies, I will briefly explain the term “Edge”, or better “Edge Computing”, because it is often used in this context.


What is Edge or better edge computing?

The principle of edge computing states that data processing takes place at the Edge of the network. Which device is ultimately responsible for processing the data can differ depending on the application and the implementation of the concept.
An edge device is a device on the network periphery that generates, processes or forwards data itself. Examples of edge devices are smartphones, autonomous vehicles, sensors or IoT devices such as fire alarms.
An edge gateway is installed between the edge device and the network. It receives data from edge devices that does not have to be processed in real-time, processes specific data locally or selectively, and sends the rest to other services or central data centres. Edge gateways have wireless or wired interfaces to the edge devices and to the communication networks of private or public clouds.


Pros of Edge Computing

The data processing takes place in the vicinity of the data source, minimising transmission and response times. Communication is possible almost in real-time. At the same time, the data throughput and the bandwidth usage in the network are reduced, since only data that is not to be processed locally needs to be transmitted to central data centres. Many functions can also be maintained even if the network or parts of it fail. The performance of edge computing scales by providing more intelligent devices at the network periphery.

Cons of Edge Computing

Edge computing offers more security due to the locally limited data storage, but this is only the case if appropriate security concepts are available for the decentralised devices. Due to the heterogeneity and the large number of different devices, the effort involved in implementing these security concepts increases.

Fog Computing

Edge computing and fog computing are both decentralised data processing concepts. Fog computing inserts another layer with the so-called fog nodes between the edge devices and the cloud. These are small, local data centres in the access areas of the cloud. These fog nodes collect the data from the edge devices. They select the data to be processed locally or decentrally and forward it to central servers, or process it directly themselves.
Taking the best of both worlds means combining the principles of edge and fog computing.

What are the acceleration options for SW Distribution?

There are different strategies to scale the distribution of binaries, and every solution suits a specific use case. We will not look at cloud-only solutions, because companies operate worldwide and have to deal with different governmental regulations and restrictions. In addition to these restrictions, I want to highlight the need for hybrid solutions as well. Hybrid solutions include on-prem resources as well as air-gapped infrastructure used for high-security environments.

a) Custom Solution based on replication or scaling servers

One possibility to scale inside your network/architecture is scaling hardware and working with direct replication. Implementing this by yourself will most likely consume a bigger budget of workforce, knowledge, time and money, based on the fact that this is not a trivial project. At the same time, this approach is bound to the borders of the infrastructure you have access to.

b) P2P Networks

Peer-to-peer networks are based on equal nodes that share the binaries among each other. The peer-to-peer approach implies that you will have a bunch of copies of your files. If you download a file from the network, all nodes can serve parts of it independently. This approach of splitting up files and delivering them from different nodes simultaneously to the requesting node leads to constant and efficient network usage and reduced download times.
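
To illustrate the principle only (a toy sketch in Java, not any concrete P2P product API): each peer serves a different byte range of the same file, the client fetches the chunks in parallel and reassembles them afterwards. The peer URLs and the assumption that the peers support HTTP Range requests are hypothetical.

import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Toy sketch of the chunking idea: every peer URL delivers one byte range of the same file.
public class ChunkedDownload {
  public static byte[] download(List<String> peerUrls, long chunkSize) {
    HttpClient client = HttpClient.newHttpClient();
    List<CompletableFuture<byte[]>> chunks = new ArrayList<>();
    for (int i = 0; i < peerUrls.size(); i++) {
      long from = i * chunkSize;
      long to = from + chunkSize - 1;
      HttpRequest request = HttpRequest.newBuilder(URI.create(peerUrls.get(i)))
          .header("Range", "bytes=" + from + "-" + to) // each peer serves one chunk
          .build();
      chunks.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofByteArray())
          .thenApply(HttpResponse::body));
    }
    ByteArrayOutputStream result = new ByteArrayOutputStream();
    for (CompletableFuture<byte[]> chunk : chunks) {
      result.writeBytes(chunk.join()); // reassemble the chunks in order
    }
    return result.toByteArray();
  }
}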

c) CDN – Content Delivery Network

CDNs are optimised to deliver large files across regions. The network itself is built out of a huge number of nodes that cache files for regional delivery. With this strategy, the original server will not be overloaded.

Check out the video with the title “DevSecOps – the Low Hanging Fruits” on my YouTube channel. This video describes the balance between writing the code yourself and adding a dependency in each cloud-native app layer. The question is, what does this mean for DevSecOps?

JFrog Solution

With the three techniques mentioned, you can build up a huge and powerful architecture that fits your needs. But the integration of all these technologies and products is not easy. We faced this challenge as well, and over the years we found solutions that we integrated into a single DevSecOps platform called “The JFrog Platform”. I don’t want to give an overview of all components; for this, check out my YouTube channel. Here I want to focus only on the components that are responsible for the distribution of the binaries.

JFrog Distribution

With JFrog Distribution, the knowledge about the content of the repositories and the corresponding metadata is used to provide a replication strategy. The replication solution is designed for internal and external repositories to bring the binaries all the way down to the place where they are needed. The infrastructure can be built in a hybrid model, including on-prem and cloud nodes. Even air-gapped solutions are possible with import/export mechanisms. In this scenario, we are focussing on a scalable caching mechanism that is optimised for reads.

What is a Release Bundle?

A Release Bundle is a composition of binaries. These binaries can be of different types, like Maven, Debian or Docker. The Release Bundle can be seen as a Bill of Materials (BOM). The content as well as the Release Bundle itself is immutable. This immutability makes it possible to implement efficient caching and replication mechanisms across different networks and regions.

What is an Edge Node in this context?

An Edge Node in our context is a node that provides the functionality of a read-only Artifactory. With this Edge Node, the delivery process is optimised, and replication is done in a transactional way. The difference to the original meaning of an edge node is that this instance is not the consuming or producing element. It can rather be seen as a fog node, i.e. the first layer above the real edge node layer.

P2P Download

The P2P solution focuses on environments that need to handle download bursts inside the same network or region. These download bursts could be scenarios like “updating a server farm” or “updating a microservice mesh”. The usage is unidirectional, which means that the consumers do not push updates from their side. They are just waiting for a new version, and all consumers update at the same time. This behaviour is a perfect case for the P2P solution. Artifactory, or an Edge Node in the same network or region, triggers an update of all P2P nodes with a new version of a binary. The consumer itself will request the binary from the P2P node and no longer from the Artifactory instance. The responsible Artifactory instance manages the P2P nodes, which leads to zero maintenance on the user side. Keep in mind that RBAC is active at the P2P nodes as well.

CDN Distribution

The CDN solution is optimised to deliver binaries to different parts of the world. We have it in two flavours. One is for the public and is mostly used to distribute SDKs, drivers or other freely available binaries. The other flavour focuses on private distribution. Whatever solution you are using, the RBAC defined inside the Access module is respected, including solutions with authentication and authorisation and unique links including access tokens.

Conclusion

OK, it is time for the conclusion. What we discussed today:
With the increasing amount of dependencies, a higher frequency of deployments and the constantly growing number of applications and edge nodes, we are facing scalability challenges.
We had a look at three ways to increase your delivery speed. The discussed solutions are based on:

a) JFrog Distribution helps you build up a strong replication strategy inside your hybrid infrastructure to speed up the development cycle.
b) JFrog P2P allows you to handle massive download bursts inside a network or region. This solution fits tasks that need to distribute binaries to a high number of consumers concurrently during download bursts.
c) JFrog CDN delivers binaries worldwide into regional data centres to make the experience for the consumer as good as possible.


All this is bundled into the JFrog DevSecOps Platform. 


Cheers Sven

DevSecOps – Be Independent Again

What do the effects of the news of the last few months have to do with risk management and the assumption of permanent storage, and why is this an elementary component of DevSecOps?


If you want to see this post as a video, check out the following one from my YouTube channel.


What Has Happened So Far

Again and again, changes happen that set things in motion that were considered settled. In some cases, services or products have been freely available for many years, or the type of restriction has not changed. I am taking one of the latest changes as an occasion to show the resulting behaviour and to formulate solutions that help you deal with it.

In software development, repositories are one of the central elements that enable you to deal efficiently with the abundance of dependencies in a software environment. A wide variety of types and associated technologies have evolved over the decades. But the common approach has mostly resulted in a global central authority that is seen as an essential reference.

I examined the topic of repositories from a generic point of view in a little more detail on YouTube.

As an example, I would like to briefly show what a minimal technology stack can look like today. Java is used for the application itself, the dependencies of which are defined using Maven. For this, we need access to Maven repositories. Debian repositories [Why Debian Repos are mission-critical..] are used for the operating system on which the application is running. The components are then packaged into Docker images, which use Docker registries, and finally, the applications are orchestrated as a composition of Docker images using Kubernetes. Here alone, we are dealing with four different repository types. At this point, I have left out the need for generic repositories to provide the required tools used within the DevSecOps pipeline.

DockerHub And Its Dominance

The example that inspired me to write this article was DockerHub’s announcements. Access to this service was free, and there were no further restrictions on storage space and storage duration for freely available Docker images. This fact has led to a large number of open source projects using this repository for their purposes. Over the years a whole network of dependencies between these images has built up.

Docker Hub was in the news recently for two reasons.

Storage Restrictions

Previously, Docker images were stored indefinitely on DockerHub. On the one hand, this meant that nobody cared about the storage space of the Docker images. On the other hand, pretty much everyone has been counting on this not to change. Unfortunately, that has now changed. The retention period for inactive Docker images has been reduced to six months. What doesn’t sound particularly critical at first turns out to be quite uncomfortable in detail.

Download Throttling

Docker has limited the download rate to 100 pulls per six hours for anonymous users, and 200 pulls per six hours for free accounts. The number 200 sounds pretty bearable. However, it makes sense to take a more detailed look here. 200 requests / 6 h are 200 requests / 360 min. We’re talking about roughly 0.55 requests per minute at a constant request rate. First, many systems do more than one build, and therefore more than one request, every two minutes. Second, if the limit is reached, it can take more than half a business day to regain access. The latter is to be viewed as very critical. As a rule, limit values are given per hour, which then only leads to a delay of a little less than an hour. Six hours is a different order of magnitude.

Maven and MavenCentral

If you look at the different technologies, a similar monoculture emerges in the Maven area. Here, Maven Central is a singular point operated by one company. A larger company bought this company. What does this mean for the future of this repository? I don’t know. However, it is not uncommon for costs to be optimized after a takeover by another company. A legitimate question arises here: what economic advantage does the operator of such a central, free-of-charge infrastructure have?

JDKs

There have been so many structural changes here that I’m not even sure what the official name is. But there is one thing I observe with eagle eyes in projects. Different versions, platforms and providers of the JDKs are a source of joy in LTS projects that should not be underestimated. Here, too, it is not guaranteed how long the providers will keep the respective builds of a JDK for a platform. What is planned today can be optimized tomorrow. Here, too, you should take a look at the JDKs that are not only used internally but also by customers. Who has all the installers for the JDKs in use in stock? Are these JDKs also used within your own CI pipeline, or do you trust the availability of specific Docker images?

Moderate Independence

How can this be countered now? The answer is straightforward. You get everything you need just once and then store it in your own systems. With this, we are running counter to the efforts of the last few years. As in most other cases, moderate use of this approach is recommended. More important than ever is the sensible use of freely available resources. It can help if a stringent retention tactic is used. Not everything has to be kept indefinitely. Many elements that are held in the caches are no longer needed after a while. Sophisticated handling of repositories and the nesting of resources helps here. Unfortunately, I cannot go into too much detail here, but it can be noted in short form.

The structure of the respective repositories enables you, on the one hand, to create concrete compositions and, on the other hand, to carry out very efficient maintenance. Sources must be kept in individual repositories and then merged using virtual repositories. This process can be built up so efficiently that it can even drastically reduce the number of build cycles.

DevSecOps – Risk Minimization

There is another advantage in dealing with the subject of “independence”. All files that are kept in these structures can be analyzed with regard to vulnerabilities and compliance. Now that these elements are in one place, in a repository manager, I have a central location where I can scan them. The result is a complete dependency graph that includes the dependencies of an application but also the associated workbench. That, in turn, is one of the critical statements when you turn to the topic of DevSecOps. Security is like quality! It’s not just a tool; it’s not just a person responsible for it. It is a philosophy that has to run through the entire value chain.

Happy Coding,

Sven Ruppert 

The quick Wins of DevSecOps

Hello and welcome to my DevSecOps post. Here in Germany, it’s winter right now, and the forests are quiet. The snow slows down everything, and it’s a beautiful time to move undisturbed through the woods.

Here you can pursue your thoughts, and I had to think about a subject that customers or participants at conferences ask me repeatedly.

The question is almost always:

What are the quick wins or low hanging fruits if you want to deal more with the topic of security in software development?

And I want to answer this question right now!

For the lazy ones, you can watch it as a YouTube video as well.

Let’s start with the definition of a phrase that is often used in the business world.

Make Or Buy

Even as a software developer, you will often hear this phrase during meetings with the company’s management and sales part.

The phrase is “Make or Buy”. Typically, we have to decide whether we want to do something ourselves or spend money to buy the requested functionality. The bought functionality could be less, more, or different, so that we have to adjust ourselves to use it in our context.

But as software developers, we have to deal with the same question every day. I am talking about dependencies. Should we write the source code ourselves or just add the next dependency? Who will be responsible for removing bugs, and what is the total cost of this decision? But first, let’s take a look at the make-or-buy decision across the full tech stack.

Diff between Make / Buy on all layers.

If we look at all layers of a cloud-native stack to compare the share of “make” with “buy”, we will see that the “buy” component is the bigger one in all layers. But first things first.


The first step is the development of the application itself.

Assuming that we are working with Java and using Maven as a dependency manager, we are most likely adding more lines of code indirectly as dependencies than the number of lines we are writing ourselves. The dependencies are the more prominent part, and third parties develop them. We have to be careful, and it is good advice to check these external binaries for known vulnerabilities.

We should behave the same way regarding compliance and license usage. The next layer will be the operating system, in our case Linux.

And again, we are adding some configuration files and the rest are existing binaries.

The result is an application running inside the operating system that is a composition of external binaries based on our configuration.

The two following layers, Docker and Kubernetes, are leading us to the same result. Until now, we are not looking at the tool-stack for the production line itself.

All programs and utilities that are directly or indirectly used under the hood of what is called DevSecOps are dependencies as well.

In all layers, the dependencies are by far the most significant part.

Checking these binaries against known Vulnerabilities is the first logical step.

One-time and recurring efforts for Compliance/Vulnerabilities

Comparing the effort of scanning against known Vulnerabilities and for Compliance Issues, we see a few differences.

Let’s start with the Compliance issues.

Compliance issues:

The first step will be defining which licenses are allowed in which part of the production line. This definition of allowed licenses includes the dependencies during coding time as well as the usage of tools and runtime environments. The definition of the non-critical license types should be checked by a specialised lawyer. With this list of whitelisted license types, we can let the machine scan the full tool stack on a regular basis. If the machine finds a violation, we have to remove this element and replace it with another one that is licensed under a whitelisted license.
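
As a minimal illustration of this check only (a sketch, not any concrete scanner API): the declared licenses of all dependencies are compared against the whitelist, and everything else is reported as a violation. The license identifiers are example values.

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of a license whitelist check (illustrative only, not a real scanner API).
public class LicenseCheck {

  // the license types approved by the specialised lawyer (example values)
  private static final Set<String> WHITELIST = Set.of("Apache-2.0", "MIT", "EPL-2.0");

  public static List<String> violations(List<String> dependencyLicenses) {
    return dependencyLicenses.stream()
        .filter(license -> !WHITELIST.contains(license))
        .collect(Collectors.toList());
  }
}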

Vulnerabilities:

The recurring effort on the compliance side is low compared to the amount of work that vulnerabilities produce. A slightly different workflow is needed for the handling of found vulnerabilities. Without more significant preparations, the machine can do the scanning on a regular basis here as well. The identification of a vulnerability will trigger a workflow that includes human interaction. The vulnerability must be classified internally, which leads to the decision about what the following action will be.

Compliance Issues: just singular points in your full-stack

There is one other difference between compliance issues and vulnerabilities. If there is a compliance issue, it is a singular point inside the overall environment. Only this single part is defective, and it does not influence other elements of the environment.


Vulnerabilities: can be combined into different attack vectors.

Vulnerabilities are a bit different. They do not only exist at the point where they are located. They can also be combined with other existing vulnerabilities in any other layer of the environment. Vulnerabilities can be combined into different attack vectors. Every possible attack vector must be seen and evaluated on its own. A set of minor vulnerabilities in different layers of the application can be combined into a highly critical risk.


Vulnerabilities: timeline from being found until the fix is active in production

The next thing I want to look at is the timeline from the moment a vulnerability is found until the fix is in production. Once a vulnerability exists in a binary, we have nearly no control over the time until it is found. It depends on the finder whether the vulnerability is reported to the creator of the binary, a commercial security service or a government, or whether it is sold on a darknet marketplace. But, assuming that the information is reported to the binary’s creator, it will take some time until the data is publicly available. We have no control over the duration from finding the vulnerability to the time that the information is publicly available. The next period is based on the commercial aspect of this issue.

As consumers, we can only get the information as early as possible by spending money.
This state of affairs is not nice, but it is mostly the truth.

Nevertheless, at some point, the information is consumable for us. If you are using JFrog Xray, for example from the free tier, you will get the information very fast. JFrog consumes different security information resources and merges all information into a single vulnerability database. After this database is fed with new information, all JFrog Xray instances are updated. After this stage is reached, you can act.


Test-Coverage is your safety-belt; try Mutation Testing.

Until now, the only thing you can do to speed up the information flow is spending money on a professional security information aggregator. But as soon as the information is consumable for you, the timer runs. It depends on your environment how fast this security fix will be up and running in production. To minimise this amount of time, a fully automated CI pipeline is one of the critical factors.

But even more critical is excellent and robust test coverage.

Good test coverage will allow you to switch dependency versions immediately and push this change into production after a green test run. I recommend using a stronger measure of test coverage than pure line coverage. The technique called “mutation test coverage” is a powerful one.

Mutation Test Coverage
If you want to know more about this one, check out my YouTube channel. I have a video that explains the theoretical part and the practical one for Java and Kotlin.
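
As a tiny sketch of the idea only (not taken from the video): a mutation testing tool slightly changes the code under test and checks whether the test suite notices the change. The class and test names are hypothetical; the test uses JUnit 5.

// Production code: a mutation tool would, for example, flip > into >=.
public class Discount {
  public boolean isEligible(int amount) {
    return amount > 100;
  }
}

// A test that only checks amount = 200 passes for the original and for the
// mutant, so the mutant "survives" and reveals a weak spot in the test suite.
// The boundary test below kills the >= mutant.
class DiscountTest {
  @org.junit.jupiter.api.Test
  void boundaryValueIsNotEligible() {
    org.junit.jupiter.api.Assertions.assertFalse(new Discount().isEligible(100));
  }
}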

The need for a single point that understands all repo types
To get a picture of the full impact graph based on all known vulnerabilities, it is crucial to understand all package managers involved in the dependencies. Focussing on just one layer of the tech stack is by far not enough.

JFrog Artifactory provides information, including the vendor-specific metadata that is part of the package managers.

JFrog Xray can consume all this knowledge and can scan all binaries that are hosted inside the repositories that are managed by Artifactory.

Alt Text

Vulnerabilities – IDE plugin

Shift Left means that vulnerabilities must be eliminated as early as possible inside the production pipeline. One early stage after the concept phase is the coding itself. The moment you start adding dependencies to your project, you are possibly adding vulnerabilities as well.

The fastest way to get feedback regarding your dependencies is the JFrog IDE plugin. This plugin connects your IDE to your JFrog Xray instance. The free tier will give you access to vulnerability scanning. The plugin is open source and available for IntelliJ, VS Code, Eclipse, … If you need additional features, make a feature request on GitHub, or fork the repository, add your changes and make a merge request.

Try it out by yourself – JFrog Free Tier

How to use the IDE plugin?

If you add a dependency to your project, the IDE plugin can understand this information based on the used package manager. The IDE plugin is connected to your JFrog Xray instance, which is queried whenever there is a change inside your project’s dependency definition. The information provided by Xray includes the known vulnerabilities of the added dependency. If there is a fixed version of the dependency available, the new version number will be shown.

If you want to see the IDE plugin in action without registering for a Free Tier, have a look at my YouTube video.

Conclusion

With the JFrog Free Tier, you have the tools in your hands to practise Shift Left and push it right into your IDE.

Create repositories for all included technologies, use Artifactory as a proxy for your binaries and let Xray scan the full stack.

With this, you have a complete impact graph based on your full stack and the information about known vulnerabilities as early as possible inside your production line.

You don’t have to wait until your CI Pipeline starts complaining. This will save a lot of your time.
