The Importance of UI in Import Processes

Why an import needs a UI at all

Import functions are often treated as purely technical details in applications. Data is read in, processed and then made available – ideally without further interaction. In practice, however, an import is rarely an invisible process. It marks a transition between existing system states, between old and new data, between trust and control. This is exactly where the need for a user interface arises.

In the URL shortener, the import is not designed to run in the background; it is an explicit, visible process. The UI does not take on the role of a technical decision-maker, but that of a transparent mediator. It creates a clearly demarcated space in which an import can occur without immediately modifying existing data. This decoupling alone justifies its own import interface.

The current state of the source code is available on

GitHub: https://github.com/svenruppert/url-shortener or https://3g3.eu/url

[Screenshot: overview of the URL shortener tool showing a table with short codes, URLs, creation dates, active status, expiration, and action options.]

Importing is always associated with uncertainty. Even if the data format is known, it remains unclear how incoming datasets interact with existing datasets. Are there overlaps? Do conflicts arise? Are individual entries incomplete or invalid? These questions cannot be answered by a technical process alone; they require visibility. The UI makes this intermediate state tangible without prematurely evaluating it.

Instead of treating the import as an atomic step, the URL shortener understands it as a state. The user interface displays this state, records it, and prevents it from being overlooked or from slipping inadvertently into the production database. This makes the import process controlled, even if the UI itself does not control the content.

The clear distribution of roles is important here. The user interface does not interpret data, validate content, or make decisions about its technical correctness. Its task is exclusively to make the current technical status of the import visible. This deliberate restraint is crucial, as it prevents UI and import logic from being conflated.

So the import doesn’t need a UI because it’s complex, but because it’s a transition. The interface acts as a boundary between file and system, between potential changes and existing persistence. It gives the import a place, a time and a clear context – and that’s exactly where its raison d’être lies.

The entry point in the application

The import does not start automatically in the URL shortener; it only starts via a deliberately designed entry point in the application. This entry point sits in the view where users manage existing short URLs and inspect the current database. This clearly positions the import as an exceptional act rather than part of the everyday processing flow.

[Screenshot: the import section of the URL shortener, with a ZIP upload area and a preview showing the staging ID, the number of new items, conflicts, and invalid entries, above a table of short codes, URLs, and activation statuses.]

Entry happens via an explicit action that opens a modal dialogue. This action only creates a new instance of the import dialogue and gives it the necessary dependencies. Further logic is deliberately not provided at this point. By opening the dialogue, the user leaves the application’s normal working context and enters a clearly demarcated area where the import is fully encapsulated.

The modal character of the dialogue fulfils an important function here. It indicates that the import is not a background process running in parallel, but a coherent process with its own state. As long as the dialogue is open, the rest of the UI state remains unchanged. There is no implicit update and no automatic data transfer.

From the user guidance perspective, this creates a clean break. The import is not understood as an extension of the table view, but as a temporary workspace. This separation prevents import actions from being mixed with regular editing steps. At the same time, it creates a mental orientation: everything that happens within this dialogue belongs to the import, and nothing beyond it.

From a technical standpoint, the UI assumes no further responsibility at this point. The entry point does not initiate validation or server communication; instead, it delegates the entire import process to the dialogue itself. Only when the user consciously uploads a file does the import process begin.

Button importButton = new Button("Import");
importButton.addClickListener(event -> {
  ImportDialog dialog =
      new ImportDialog(urlShortenerClient, () -> {
        refreshGrid();
      });
  dialog.open();
});

An excerpt from the corresponding view illustrates this principle. The button opens the import dialogue and optionally passes a callback, allowing a return to the normal UI context after a successful import.

The code makes it clear that the entry point merely instantiates and opens the dialogue. There is no pre-check or interaction with the server or import APIs. The callback is used only for downstream view updates and is not part of the import process itself.
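The contract between dialogue and view can be sketched without any Vaadin dependency: the dialogue merely stores the callback and runs it after a successful apply. The following is a hedged, framework-free sketch; the names ImportCallbackSketch and applySucceeded are illustrative and not part of the project:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Framework-free sketch of the callback contract: the dialog stores the
// callback at construction time and invokes it only after a confirmed apply.
final class ImportCallbackSketch {
  private final Runnable onImportApplied;

  ImportCallbackSketch(Runnable onImportApplied) {
    this.onImportApplied = onImportApplied;
  }

  // Hypothetical hook, called only after the server confirmed the apply step.
  void applySucceeded() {
    if (onImportApplied != null) {
      onImportApplied.run(); // e.g. refreshGrid() in the calling view
    }
  }

  public static void main(String[] args) {
    AtomicBoolean refreshed = new AtomicBoolean(false);
    ImportCallbackSketch dialog = new ImportCallbackSketch(() -> refreshed.set(true));
    dialog.applySucceeded();
    System.out.println("grid refreshed: " + refreshed.get()); // true
  }
}
```

In the real dialogue, the callback would be the refreshGrid() lambda passed in by the view; the dialogue never reaches back into the view directly.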

Thus, the entry point defines not only where the import starts, but also how it is perceived. It is a clear turning point in the context of use and forms the basis for all subsequent steps in the import dialogue.

The import dialog as a closed workspace

When the import dialogue opens, the application deliberately switches to a well-defined working mode. The dialogue is not intended as a mere overlay, but as a standalone UI space with its own logic, state, and clearly defined boundaries. Everything that happens within this dialogue belongs exclusively to the import and has no effect on the existing database until the process is explicitly completed.

This partitioning is a central design element. While the dialogue is open, the rest of the application remains frozen in its previous state. Tables, filters or active selections are neither updated nor influenced. This creates a clear separation between ongoing work and the import process, providing both technical and mental orientation.

The import dialogue is implemented as a standalone Vaadin component. It is fully initialised when opened and has no implicit return channel to the calling view. This decoupling allows the dialog to manage its internal state independently. Neither when opened nor during the interaction is it assumed that import data already exists. The initial state is always neutral and empty.

Structurally, the dialogue is divided into distinct sections, each representing a specific phase of import. However, these areas are not implemented as step-by-step wizards; instead, they coexist within a common framework. This keeps the dialogue fully visible at all times and avoids implicit transitions that could obscure the actual state.

It is particularly important that the dialogue does not make a technical interpretation of the import data. It simply provides UI surfaces where results are displayed as soon as the server delivers them. Whether these spaces remain empty or are filled depends solely on the current import state, not on assumptions or expectations of the UI.

The control elements of the dialogue also follow this principle. Actions such as validation or application of the import are not permanently available, but are tied to the internal state of the dialogue. As long as there is no corresponding import data, the dialogue remains inactive. The UI does not enforce an order or progression, but only reacts to clearly defined state changes.

The import dialogue thus acts as a controlled container for a potentially critical operation. It fully encapsulates the import, makes its current status visible, and prevents incomplete or unchecked data from taking effect unnoticed. This clear boundary underpins all subsequent steps in the import process and explains why the dialogue was deliberately designed as a closed workspace.

Implementation of the ImportDialog in Vaadin

The import dialogue in the URL shortener clearly shows how little “framework magic” it takes to build a complete, state-driven UI with Vaadin Flow. The entire functionality is built from a few easy-to-understand building blocks: a dialogue container, standard components such as upload, grid, tabs, and button, and clear state variables that carry the dialogue throughout the import process. The result is an interface that treats the import as a separate workspace while keeping the source code manageable.

public final class ImportDialog
    extends Dialog
    implements HasLogger {

  private final URLShortenerClient client;
  private final Upload upload = new Upload();
  private byte[] zipBytes;
  private final Button btnValidate = new Button("Validate");
  private final Button btnApply = new Button("Apply Import");
  private final Button btnClose = new Button("Close");
  private final Div applyHint = new Div();
  private String stagingId;

Getting started is already remarkably simple. ImportDialog is a normal Java class that extends Vaadin’s Dialog. This means the UI is not a declarative construct or a template; it can be fully traced in Java code. At the same time, the dialogue deliberately remains stateful: it holds both the uploaded ZIP bytes and the stagingId generated by server-side validation.

With this structure, it’s already clear how Vaadin Flow works here: components are objects, and the dialogue holds them as ordinary fields. The UI is thus directly addressable without requiring “UI IDs” or bindings. At the same time, the import state is not kept externally; it belongs precisely to the dialogue in which it is made visible.

The constructor then shows the core of the Vaadin mechanics. With just a few lines, the dialogue is configured as a modal, resizable container. There is no separate configuration file and no DSL, but pure Java calls. This allows the import dialogue’s visual framework to be understood directly.

setHeaderTitle("Import");
setModal(true);
setResizable(true);
setDraggable(true);
setWidth("1100px");
setHeight("750px");

Immediately afterwards, it becomes apparent how Vaadin Flow typically models interactions. The buttons are initially deactivated and are unlocked only via events. This initialisation is a central part of the state model. The dialogue always starts neutrally and does not force progress unless there is a ZIP file.

btnValidate.addThemeVariants(ButtonVariant.LUMO_PRIMARY);
btnValidate.setEnabled(false);
btnApply.addThemeVariants(ButtonVariant.LUMO_SUCCESS);
btnApply.setEnabled(false);
btnClose.addClickListener(_ -> close());
btnValidate.addClickListener(_ -> validate());
btnApply.addClickListener(_ -> applyImport());

The interaction of UI and file upload can also be read directly. The upload is limited to ZIP files and to a maximum file size. The key point here is the in-memory handler: once the file is successfully uploaded, the bytes are placed in zipBytes and the next step is unlocked by enabling btnValidate. The transition from “upload available” to “validation possible” is thus a single, very understandable state change.

upload.setAcceptedFileTypes(".zip", APPLICATION_ZIP);
upload.setMaxFiles(1);
upload.setMaxFileSize(IMPORT_MAX_ZIP_BYTES);
UploadHandler inMemoryUploadHandler = UploadHandler
    .inMemory((metadata, bytes) -> {
      String fileName = metadata.fileName();
      long contentLength = metadata.contentLength();
      logger().info("uploaded file: fileName: {} , contentLength {}", fileName, contentLength);
      zipBytes = bytes;
      logger().info("setting zipBytes..");
      btnValidate.setEnabled(true);
    });
upload.setUploadHandler(inMemoryUploadHandler);

The actual UI structure is then created in buildContent(). Here, too, the strength of Vaadin Flow becomes apparent: Layouts are components, and the dialogue is simply built by assembling components. Headings, upload area, preview summary, tabs, and a central content container are merged into a single VerticalLayout. The code reads like a description of the interface.

var root = new VerticalLayout(
    new H3("Upload ZIP"),
    upload,
    new H3("Preview"),
    summary,
    applyRow,
    tabs,
    tabContent
);
root.setSizeFull();
root.setPadding(false);
renderTab(tabContent);
return root;

For displaying results, the dialogue uses two grid instances and switches between them via tabs. The decisive factor is that the tab interaction does not trigger any backend logic; it only updates the visible projection. In Vaadin Flow, switching is a single listener that empties the container and uses the appropriate combination of paging bar and grid.

tabs.addSelectedChangeListener(e -> renderTab(tabContent));

private void renderTab(VerticalLayout container) {
  container.removeAll();
  container.setSizeFull();
  if (tabs.getSelectedTab() == tabInvalid) {
    container.add(pagingInvalid, gridInvalid);
    container.expand(gridInvalid);
  } else {
    container.add(pagingConflicts, gridConflicts);
    container.expand(gridConflicts);
  }
}

At this point, it becomes clear why the implementation remains so compact despite several states. Vaadin Flow provides the UI building blocks, and the dialogue connects them via a few, clearly defined state variables and listeners. The actual import process remains server-side. The UI only needs to know when a state is reached and how to display it.

This makes the ImportDialog a good example of how Vaadin Flow reduces complexity: not by hiding logic, but by using a direct programming model in which UI components, events, and state converge into a single, easy-to-read Java codebase. Especially for a minimalist import function, a complete, robust interface can be created quickly without requiring additional framework layers.

File upload: Transport instead of interpretation

In the import dialogue, the actual import process does not begin with a technical decision, but with a purely technical step: uploading a ZIP file. This moment marks the transition from an empty, neutral dialogue state to a potentially processable import without already making any content evaluation.

The upload component is firmly anchored in the dialogue and visible from the beginning. It deliberately serves as the first point of interaction within the closed work area. However, its task is clearly limited. The upload is used exclusively to receive a file and make its content temporarily available in the UI context. Neither the structure nor the semantics of the contained data play a role at this point.

This attitude is directly reflected in the source code. The upload is configured to accept exactly one ZIP file whose size must not exceed a defined maximum. These restrictions are not content validation; they merely safeguard the technical integrity of the UI process.

upload.setAcceptedFileTypes(".zip", APPLICATION_ZIP);
upload.setMaxFiles(1);
upload.setMaxFileSize(IMPORT_MAX_ZIP_BYTES);

Once a file is selected and uploaded, an in-memory upload handler processes it. The dialogue saves the complete contents of the ZIP file as a byte array in an internal field. At this point, there is no check to verify whether the file contains import data or is structured correctly. In the UI, the file is initially a binary block.

UploadHandler inMemoryUploadHandler = UploadHandler
    .inMemory((metadata, bytes) -> {
      zipBytes = bytes;
      btnValidate.setEnabled(true);
    });
upload.setUploadHandler(inMemoryUploadHandler);

These few lines mark a crucial UI state change. Upon successful upload completion, the validation button is enabled. That’s all that happens at this point. The upload alone does not trigger any server communication and does not create an import state. It merely signals that there is now enough information to trigger the next user step.

Errors during uploads are also dealt with exclusively from a technical perspective. If a file is rejected, for example, due to an incorrect data type or a file size that is too large, the UI responds with a corresponding notification. However, there is still no evaluation of the file’s content.

upload.addFileRejectedListener(event -> {
  String errorMessage = event.getErrorMessage();
  Notification notification = Notification.show(errorMessage, 5000,
      Notification.Position.MIDDLE);
  notification.addThemeVariants(NotificationVariant.LUMO_ERROR);
});

The upload section of the dialogue thus clarifies a central design principle: The user interface does not interpret imported data. It merely ensures that a file is received in a technically correct manner and transfers this state into a clearly recognisable next possibility for action.

Only by clicking the “Validate” button does the dialogue leave this purely technical state of preparation. The file upload thus serves as the necessary, but deliberately content-free, basis for all subsequent steps of the import process.

Validation as UI state change

By clicking on the “Validate” button, the import dialogue leaves its purely preparatory state for the first time. Up to this point, only technical requirements were created: a file was uploaded and stored in memory without its content being interpreted or further processed. The validate() method now marks the clear transition from a passive UI state to a state-changing step that makes the import visible for the first time.

From a user interface perspective, validation is not a technical review process, but a coordinated flow of multiple UI updates, all triggered by a single user action. The dialogue deliberately assumes no responsibility for the content. It asks the server to verify the uploaded ZIP content and processes only the technical response it receives.

The introduction to the method is correspondingly defensive. First, the dialogue state checks whether a ZIP file exists. If this is missing, the UI displays a brief notification and cancels the process. At this point, no exception is propagated, and no internal state is changed. The validation is thus clearly tied to a previous, explicit user action.

if (zipBytes == null || zipBytes.length == 0) {
  Notification.show("No ZIP uploaded.", 2500, Notification.Position.TOP_CENTER);
  return;
}

Only when this prerequisite is fulfilled is the actual validation step triggered. The dialogue passes the complete contents of the ZIP file to the client and calls the dedicated server endpoint for import preview. For the UI, this call is a black box. It does not know the internal validation rules or the criteria by which entries are classified as new, conflicting, or invalid.

String previewJson = client.importValidateRaw(zipBytes);

The server’s response is received in its entirety as a JSON string. Instead of converting this into a fixed data model, the UI extracts the individual values relevant to representing the import state. This includes the generated stagingId, as well as the counts of new, conflicting, and invalid entries. These values are immediately visible in the interface and constitute the first concrete import state displayed by the dialogue.

this.stagingId = extractJsonString(previewJson, "stagingId");
int newItems = extractJsonInt(previewJson, "newItems", 0);
int conflicts = extractJsonInt(previewJson, "conflicts", 0);
int invalid = extractJsonInt(previewJson, "invalid", 0);
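The extraction helpers themselves are not shown here. A minimal sketch of what they might look like – assuming a flat JSON object without escaped quotes inside values, which is a simplification and not the project's actual implementation – could be:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of the extraction helpers used by validate().
// Assumes a flat JSON object; a missing key yields null / the fallback,
// mirroring the "empty is a valid state" principle of the dialogue.
final class JsonExtractSketch {
  static String extractJsonString(String json, String key) {
    Matcher m = Pattern
        .compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*\"([^\"]*)\"")
        .matcher(json);
    return m.find() ? m.group(1) : null;
  }

  static int extractJsonInt(String json, String key, int fallback) {
    Matcher m = Pattern
        .compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*(-?\\d+)")
        .matcher(json);
    return m.find() ? Integer.parseInt(m.group(1)) : fallback; // absent key -> fallback, not an error
  }

  public static void main(String[] args) {
    String preview = "{\"stagingId\":\"abc-123\",\"newItems\":5,\"conflicts\":2}";
    System.out.println(extractJsonString(preview, "stagingId")); // abc-123
    System.out.println(extractJsonInt(preview, "invalid", 0));   // 0 (key absent)
  }
}
```

The fallback parameter matches the call sites above: a value that the server did not deliver is simply treated as zero.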

With these values set, the character of the dialogue changes noticeably. The previously empty preview area is populated, and the import receives an identity for the first time: the stagingId. Nevertheless, the UI remains strictly descriptive. It does not evaluate whether the figures are plausible or in a certain relationship to each other. It only shows the current technical status.

At the same time, the dialogue reads the result lists directly from the server response. Iterators are used to read JSON arrays for conflicts and invalid entries, and to convert them into UI-internal row objects. These are then assigned to the corresponding grids. Here, too, there is no interpretation. If an array is empty or non-existent, the tables remain empty – a state that the UI treats as completely valid.

for (String obj : new ItemsArrayIterator(r, "conflictItems")) {
  Map<String, String> m = JsonUtils.parseJson(obj);
  conflictRows.add(ConflictRow.from(m));
}

After the grids are filled, paging information is updated, and the associated UI components are synchronised. This step is also performed exclusively based on the previously determined result lists, without any additional feedback to the server.
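The paging update is a purely local computation over the already collected rows. As a hedged sketch (the helper name is hypothetical, not taken from the project), the page count can be derived like this:

```java
// Hypothetical sketch: deriving the number of pages from the result list
// collected during validation. No server round-trip is involved.
final class PagingSyncSketch {
  static int pageCount(int totalRows, int pageSize) {
    if (pageSize <= 0) {
      throw new IllegalArgumentException("pageSize must be positive");
    }
    // Ceiling division; an empty result still yields one (empty) page,
    // which the UI treats as a completely valid state.
    return Math.max(1, (totalRows + pageSize - 1) / pageSize);
  }

  public static void main(String[] args) {
    System.out.println(pageCount(0, 50));   // 1
    System.out.println(pageCount(120, 50)); // 3
  }
}
```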

With the completion of the validate() method, the import dialogue is in a new, stable state. The uploaded data is validated on the server side; the results are visible, and the dialogue can now decide which further actions to offer the user. Note that validation itself does not trigger an import. It only changes the UI state and creates transparency about what a subsequent import would do.

The validate() method is thus the central linchpin of the entire import dialogue. It combines the upload and presentation of results without making technical decisions itself. In this role, it becomes the core of the UI-controlled import process.

Presentation of results without interpretation

After validation, the import dialogue is in a state where concrete results are available for the first time. However, these results are not interpreted, weighted, or further processed; they are only presented. This section describes exactly this moment: the transformation of raw information delivered on the server side into visible UI structures – without the user interface drawing its own conclusions.

Central to this section is the realisation that the dialogue has no domain-level view of conflicts or invalid entries. It knows neither their significance nor their effects. Instead, it performs a purely mechanical task: it takes lists of results and renders them into prepared UI components.

This projection is done via two separate tables, each associated with a specific result type. Conflicts and invalid entries are deliberately not presented together; they are kept in separate contexts. The dialogue offers two tabs for this purpose, which the user can switch between. However, this change only changes the view of the data, not its content or state.

The technical basis of this representation consists of two grid components, which are already fully configured when the dialogue is set up. Columns, widths, and display logic are fixed before it is even known whether data will ever be displayed. The grids thus exist independently of the existence of results and represent a stable projection surface.

gridConflicts.addColumn(ConflictRow::shortCode).setHeader("shortCode").setAutoWidth(true).setFlexGrow(0);
gridConflicts.addColumn(ConflictRow::diff).setHeader("diff").setAutoWidth(true).setFlexGrow(0);
gridConflicts.addColumn(ConflictRow::existingUrl).setHeader("existingUrl").setAutoWidth(true);
gridConflicts.addColumn(ConflictRow::incomingUrl).setHeader("incomingUrl").setAutoWidth(true);

These grids are filled not through a traditional data model, but via direct iteration over JSON fragments in the server response. The UI reads each object from its corresponding array and converts it into a simple line representation. These row objects contain exactly the fields that should be displayed, no more and no less.

for (String obj : new ItemsArrayIterator(r, "conflictItems")) {
  Map<String, String> m = JsonUtils.parseJson(obj);
  conflictRows.add(ConflictRow.from(m));
}

It is noteworthy that the dialogue makes no assumptions about the number of expected entries or about which fields must be present. If an array is empty or cannot be read, the associated table remains empty. This state is not treated by the UI as an error; it is considered a valid result of validation.

The tab control also follows this principle of neutrality. The currently selected tab only determines which table is visible. When switching, no data is reloaded, and no states are changed. The UI only shows a different section of the same import state.

tabs.addSelectedChangeListener(e -> renderTab(tabContent));

The presentation of results is supplemented by a simple paging component that operates only on results already loaded. It provides a better overview of large datasets but is fully local and does not execute any additional server queries. Here, too, no content filter is applied; paging is purely representation-oriented.
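Because paging operates only on loaded results, a page is nothing more than a slice of the in-memory list. A minimal, framework-free sketch (names are illustrative, not the project's actual code):

```java
import java.util.List;

// Hypothetical sketch of purely local paging: the visible page is cut out
// of the rows already loaded by validation; no server query, no filtering.
final class LocalPagingSketch {
  static <T> List<T> page(List<T> rows, int pageIndex, int pageSize) {
    int from = Math.min(pageIndex * pageSize, rows.size());
    int to = Math.min(from + pageSize, rows.size());
    return rows.subList(from, to); // out-of-range pages yield an empty list
  }

  public static void main(String[] args) {
    List<Integer> rows = List.of(1, 2, 3, 4, 5);
    System.out.println(page(rows, 1, 2)); // [3, 4]
    System.out.println(page(rows, 5, 2)); // []
  }
}
```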

The interplay of grids, tabs, and paging results in a deliberately restrained user interface. It shows what’s there and doesn’t show anything that hasn’t been clearly delivered. Neither is there an attempt to compensate for missing data, nor are implicit assumptions made about the “actual” state of the import.

This makes it clear that the result displayed in the import dialogue is not an analysis tool, but a mirror. It reflects the state delivered by the server exactly and defers any further evaluation to subsequent steps in the import process.

Action Control and Apply Logic

After uploading, validating, and displaying the results, one central question remains in the import dialogue: Can the import actually be used? The answer to this question is not implicit or automatic. Instead, it is completely controlled by the UI state.

The basis of this control is not an additional server call, but the already known import state. All information relevant to the decision is already available to the UI. The updateApplyState() method serves as a central hub that interprets the state and translates it into concrete UI activations or deactivations.

The starting point is a strictly defensive default state. Regardless of what happened before, the Apply button will be disabled first. The UI never assumes that an import is automatically applicable. Only when all conditions are explicitly fulfilled is this state lifted.

btnApply.setEnabled(false);
btnApply.setText("Apply Import");

The first hard test point is the presence of a stagingId. Without this identifier, there is no valid import context from a UI perspective. Even if the data is already displayed, the import remains inapplicable until a server-side confirmed staging state is reached. The UI does not treat this case as an error, but as an incomplete state.

Then, the two result dimensions that have already been made visible in the previous chapters are considered: invalid entries and conflicts. Invalid entries generally block the import. As soon as at least one invalid record exists, the Apply function remains deactivated, and the dialogue explicitly communicates this state via a hint text. The UI does not force a correction; it simply makes it clear that an import is not possible under these conditions.

Conflicts, on the other hand, are treated differently. From a UI perspective, they do not represent an absolute exclusion but rather a deliberate decision-making process. The dialogue includes a checkbox for this purpose, allowing the user to specify whether conflicting entries should be skipped during import. Only with this explicit consent is the import released despite existing conflicts.

This differentiation is directly evident in the interplay of the checkbox, information text and apply button. Activating or deactivating the checkbox immediately triggers a reassessment of the UI state without loading or recalculating any additional data. The UI responds only to the known state.

if (conflicts > 0 && !chkSkipConflicts.getValue()) {
  applyHint.setText("Apply disabled: " + conflicts
      + " conflict(s). Tick \"Skip conflicts on apply\" to proceed.");
  return;
}

When the import is finally approved, not only does the button’s activation change, but its label does as well. This allows the UI to clearly signal the conditions under which the import will be executed. This visual notice is part of the action management and serves transparency, not the enforcement of business rules.
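The gating rules described above can be distilled into a single, framework-free decision function. This is a sketch of the logic under the stated rules, not the project's actual updateApplyState(), which additionally manipulates the Vaadin components:

```java
// Hypothetical distillation of the apply-gating rules:
// no staging context -> blocked; invalid entries -> blocked;
// conflicts -> blocked unless the user explicitly opts to skip them.
final class ApplyStateSketch {
  static boolean applyEnabled(String stagingId, int invalid, int conflicts, boolean skipConflicts) {
    if (stagingId == null || stagingId.isBlank()) return false; // no server-confirmed staging state yet
    if (invalid > 0) return false;                              // invalid entries always block the import
    if (conflicts > 0 && !skipConflicts) return false;          // conflicts require explicit consent
    return true;
  }

  public static void main(String[] args) {
    System.out.println(applyEnabled(null, 0, 0, false));  // false: no staging context
    System.out.println(applyEnabled("s-1", 0, 3, false)); // false: conflicts, no consent
    System.out.println(applyEnabled("s-1", 0, 3, true));  // true: conflicts skipped explicitly
  }
}
```

The real method would call this kind of logic and then translate the boolean into setEnabled(), label, and hint-text updates.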

By clicking “Apply Import”, the dialogue leaves its purely display and decision modes. Only at this point is another server call triggered, which actually applies the previously validated import. Up to this point, the UI has only managed states, displayed them and demanded decisions.

The action control thus forms the deliberate conclusion of the import dialogue. It bundles all previously built-up information and converts it into an explicit user decision. It is precisely this restraint – applying nothing automatically and implying nothing – that makes the dialogue a controlled and comprehensible tool within the application.

JSON export in Vaadin Flow

Export functions are often seen as a purely technical side task: one button, one download, done. In a Vaadin-based application, however, it quickly becomes apparent that exporting is much more than writing data to a file. It is a direct extension of the UI state, an infrastructural contract between frontend and backend, and a decisive factor for maintainability and predictability.

This article shows how a JSON-based export was deliberately designed as a UI-driven workflow in the URL shortener project. The focus is not on file formats or complex backend abstractions, but on the clean embedding of the export in a Vaadin Flow interface: filter coupling, download mechanics, paging boundaries and clear responsibilities between UI, client and server.

The current source code can be found on GitHub under https://github.com/svenruppert/url-shortener or https://3g3.eu/url

[Screenshot: overview page of the URL shortener listing shortened URLs with user IDs, creation dates, status indicators, and link management options.]

Export from a UI point of view: more than a download button

In classic web applications, export is often thought of as an isolated API endpoint. From the perspective of a Vaadin UI, this consideration falls short. For the user, export is not a technical process but a consequence of the current UI state: the filters applied, the sorting, and the paging limits.

An export that ignores this coupling immediately leads to cognitive breaks. The display in the grid shows a certain amount of data, but the export provides something else – be it more, less or simply different data. This is exactly where it is determined whether an application is perceived as consistent.

The claim in the project was therefore clear: The export is not a special function, but a mirror of the UI state.

This decision shapes all further design steps – from the filter generation to the download mechanics to the structure of the export data itself.

Initial situation: a functional export, but not UI-driven

Before the revision, an export was already technically possible. Data could be read on the server and returned to the client as JSON, ensuring the basic functionality was fulfilled. However, from a user interface perspective, this implementation introduced several structural issues that only became apparent upon closer inspection.

The response structures were inconsistent and required the client to interpret them in context. The meaning and interaction of HTTP status codes and response bodies were not explicitly defined; they were derived from implicit assumptions in the client code. At the same time, there was no clear connection between the export and the currently visible filters in the interface. From the UI’s perspective, it was only possible to trace the data that was actually exported indirectly. In addition, there was special logic for empty results or error cases that could not be derived consistently from the response itself but were distributed across several places in the client.

From Vaadin’s perspective, the export was not an integrated UI workflow but rather an isolated technical endpoint. The UI had to carry knowledge of special cases, status codes, and response formats that no explicit contract covered. This state of affairs was also reflected in the test landscape: tests were often based on concrete string representations or complete JSON output rather than on clearly defined functional structures. Changes to filters, sorting or response formats therefore had to be followed up in several places and carried an increased risk of unintended side effects.

In short, the export worked technically but did not meet the requirements for a UI-enabled, traceable, and maintainable component in a Vaadin application.

Design Goal: Export as a Deterministic UI Workflow

The central goal of the redesign was not “more features”, but predictability. For the Vaadin UI, this means:

The export uses the same filters as the grid and thus reflects the current UI state. Paging limits are deliberately set and comprehensible for developers and users alike, so that the scope and character of the export remain clearly recognisable. Success, empty results and error cases are clearly distinguishable and can be handled in the UI without special logic. At the same time, the download behaves browser-compliantly and UI-stably without affecting the current UI state.

From the UI’s perspective, an export must not have its own state. It must not “think” anything, expand anything, or change anything implicitly. It is a snapshot of what the user sees – nothing more, nothing less.

Uniform responses as a prerequisite for clean UI logic

In a Vaadin application, API responses have an immediate effect on the UI code, as each response is typically translated directly into UI states, component logic, and user feedback. In contrast to purely client-side frontends, the UI logic is tightly coupled to server-side processing: Each response directly updates component state, enables or disables controls, and renders feedback to the user.

In this context, inconsistent response formats inevitably lead to complex if-else cascades in the UI code. Special treatment for seemingly trivial cases, such as “empty” exports or different error states, has to be checked for explicitly. The UI code ends up interpreting technical details of the API – such as specific HTTP status codes or the presence of individual JSON fields – instead of relying on clearly defined business signals. This not only increases code complexity but also makes the interface behaviour harder to understand and more error-prone as the application grows.

In the URL shortener project, this problem was solved by introducing an explicit and stable response structure. Regardless of whether an export record is empty or contains an error, the response always follows the same structure. HTTP status codes are still used to signal the rough outcome of a request, but they do not serve as the sole signifier. The actual technical information – such as the context, scope, and content of the export – is transmitted in full and consistently in JSON format.

A simplified, real export from the system illustrates this approach:

{
  "formatVersion": "1",
  "mode": "filtered",
  "exportedAt": "2026-02-05T11:28:54.582886239Z",
  "total": 9,
  "items": [ /* domain records */ ]
}

This creates a clear and stable contract for the Vaadin UI. The UI code can rely on the fact that metadata such as mode, exportedAt, or total is always present and interpreted consistently. The interface no longer has to guess whether an export was successful or whether there are special cases. Instead, the process can be designed in a linear, deterministic way: metadata is evaluated, the scope is checked, and the user data is processed or reported back to the user.

This structure has far-reaching consequences for UI logic. Loading indicators, confirmation dialogs or error messages can be derived directly from the structured response, without additional special logic or context-dependent checks. This keeps the interface clear, predictable, and closely tied to the business meaning of the response rather than to technical special cases or implicit assumptions.
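
The linear flow described above can be sketched in plain Java. The `ExportResponse` record, the `Item` record, and the `handle` method below are illustrative assumptions, not the project’s actual types:

```java
import java.util.List;

// Hypothetical sketch: because the response contract is stable, UI-side
// handling becomes a linear sequence without status-code branching.
// ExportResponse, Item and handle(...) are illustrative names only.
public class ExportResponseHandling {

  record Item(String shortCode, String originalUrl, boolean active) {}

  record ExportResponse(String formatVersion, String mode,
                        String exportedAt, int total, List<Item> items) {}

  static String handle(ExportResponse response) {
    // Metadata is always present, so no null checks or special cases:
    if (response.total() == 0) {
      return "No records match the current filters.";
    }
    return response.total() + " records exported at " + response.exportedAt();
  }

  public static void main(String[] args) {
    ExportResponse sample = new ExportResponse(
        "1", "filtered", "2026-02-05T11:28:54Z", 1,
        List.of(new Item("a1B2c3", "https://example.org", true)));
    System.out.println(handle(sample)); // a single, deterministic path
  }
}
```

The point of the sketch is the shape of the control flow: metadata first, scope second, user data last – no branching on transport details.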

Filter logic as a common language between the grid and export

A crucial Vaadin-specific point is the reuse of the filter logic. There is no separate export filter in the project. Instead, the export is generated exclusively from the current UI state.

The SearchBar acts as the single source of truth:

public UrlMappingListRequest buildFilter(int page, int size) {
  UrlMappingListRequest req = new UrlMappingListRequest();
  req.setPage(page);
  req.setSize(size);
  req.setActiveState(activeState);
  req.setCodePart(codeField.getValue());
  req.setUrlPart(urlField.getValue());
  req.setFrom(from);
  req.setTo(to);
  req.setSort(sort);
  req.setDir(dir);
  return req;
}

This Request object is used for both grid display and export. This guarantees:

Display and export thus produce identical results, since both are based on the same filter definitions. Changes to filters or collations automatically and consistently affect display and export without requiring additional code. At the same time, there are no hidden or implicit export parameters, so the export behaviour can be fully explained by the UI state.

From a maintenance perspective, this is a significant advantage: if you understand the UI, you understand the export.

Download mechanics in Vaadin: Button ≠ Download

A common mistake in Vaadin applications is trying to start a file download directly from a button click. Technically, this is problematic: a button click primarily triggers server-side logic, whereas a download is a resource from the browser’s perspective.

In Vaadin, a button click is primarily a server-side UI event. The browser does not send a “classic” download request; instead, Vaadin processes the click via its UI/RPC communication (server round-trip, event listener, component update). From the browser’s perspective, this is not a normal navigation or resource retrieval. That is exactly why “button click → browser downloads file” is unreliable: the browser typically only starts a download cleanly when it retrieves a resource (link/navigation) or submits a form – i.e., something it perceives as a “real request for a file”.

The anchor (<a>) element solves this problem because it is a standard download target for the browser: it has an href attribute that points to a resource, and the download attribute signals to the browser: “This is a file”. In Vaadin, you bind this href to a StreamResource. This creates a separate HTTP request when clicking the anchor, which is not part of the Vaadin UI event flow but rather an independent resource retrieval. Only at this moment is the StreamResource “pulled”, and the export content is generated on demand.

In practice, this has three major advantages:

  1. Browser compliance and reliability: The download is started via a mechanism that the browser natively supports. This reduces edge cases in which a download triggered by a UI event is blocked or behaves inconsistently (e.g., pop-up/download policies, timing, UI updates).
  2. Decoupling from the UI lifecycle: The download occurs in a separate request. Even if Vaadin processes UI requests in parallel, if the user clicks on or rerenders the interface, the download can continue to run stably. This is especially important if export generation takes longer or is streamed.
  3. Clean accountability: The button is purely UI/UX (icon, tooltip, permissions, enable/disable, visual feedback). The anchor is purely “transport” (browser download). The StreamResource is purely a “data supplier” (the export is generated only when needed). This separation makes the code more maintainable and reduces the side effects.
The button itself only delegates the click to the anchor:

Button btnExport = new Button(VaadinIcon.DOWNLOAD.create());
btnExport.setTooltipText("Export current result set as ZIP");
btnExport.addClickListener(e ->
    exportAnchor.getElement().callJsFunction("click")
);

The actual download behaviour lives in the anchor, which is connected to a StreamResource:

StreamResource exportResource =
    new StreamResource("export.zip", () -> {
      UrlMappingListRequest filter =
          searchBar.buildFilter(1, chunkSize);
      return urlShortenerClient.exportAllAsZipDownload(filter);
    });
exportAnchor.setHref(exportResource);
exportAnchor.getElement().setAttribute("download", true);

This pattern clearly separates the responsibilities: the UI interaction is limited to the button, which serves exclusively as a trigger for export. The browser download is triggered via the anchor element and is therefore treated as a regular resource request. Finally, the data is made available via the StreamResource, which only generates the export content when it is actually downloaded.

The export is only generated when the browser actually retrieves the resource – not when the user clicks on it.

StreamResource: Export on demand instead of in advance

The use of StreamResource is not a detail, but a deliberate architectural decision. The export is generated on demand while the browser reads the stream.

This has several advantages. On the UI side, the memory footprint remains low because the export does not need to be fully pre-generated and buffered. At the same time, the UI thread is not blocked because the data transfer occurs outside the regular UI lifecycle. The download can continue regardless of the current UI state, even if the user navigates or performs further actions during this time. If errors occur during stream generation, they can be propagated cleanly via a separate HTTP request without causing the UI state to become inconsistent.

The export is thus technically decoupled from the UI lifecycle, although it is logically triggered by the UI.

Paging boundaries as a protective mechanism

Another explicitly UI-related aspect of the export implementation is the deliberate limit on export quantity. The export uses the same chunkSize as the grid in the interface and is additionally limited by a fixed upper limit. This decision ensures that the export always remains within a clearly defined framework and can be derived directly from the current UI state.

From an architectural perspective, this limitation prevents the export from processing large amounts of data in an uncontrolled manner when a user triggers an export. Especially in Vaadin applications, where UI interactions are typically synchronous with server-side logic, this protective measure is crucial. It reduces the risk of heavy memory loads, long runtimes, or blocking operations that could negatively impact other users or the entire server.
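
The guard can be reduced to a single clamping step. The constant `MAX_EXPORT_SIZE` and the method name below are illustrative assumptions, not the project’s actual code:

```java
// Hypothetical sketch of the export paging guard. MAX_EXPORT_SIZE and
// effectiveExportSize are illustrative names, not the project's actual code.
public class ExportPagingGuard {

  // Fixed upper bound: the export never exceeds this, regardless of UI settings.
  static final int MAX_EXPORT_SIZE = 1_000;

  static int effectiveExportSize(int chunkSize) {
    // The export uses the grid's chunk size, clamped to the fixed limit
    // (and never below zero).
    return Math.min(Math.max(chunkSize, 0), MAX_EXPORT_SIZE);
  }

  public static void main(String[] args) {
    System.out.println(effectiveExportSize(50));     // within the limit
    System.out.println(effectiveExportSize(10_000)); // clamped to the limit
  }
}
```

Keeping the bound in one place makes the export scope auditable: whatever the UI requests, the server-side limit is the final word.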

At the same time, the paging boundary conveys a clear technical semantics to the outside world. The export is deliberately defined as an image of the currently visible result set. It mirrors exactly what the user sees in the grid, including filtering, sorting, and paging configurations. This does not imply a claim to completeness, as is typically associated with a backup.

This clarity is particularly relevant for user expectations. The export does not provide a complete system dump or a historically complete data set, but a specifically selected excerpt. The limitation makes this character explicit and prevents misinterpretations, such as assuming that an export can fully restore the system.

From a maintenance and operations perspective, the paging boundary also serves as a natural safety line. It forces export scenarios to be designed consciously and, if necessary, separate mechanisms to be provided for backups or bulk data extraction. As a result, the export remains a controllable UI tool and does not insidiously become an infrastructural backdoor for unlimited data queries.

In summary, limiting export volume is not a technical constraint but a deliberate design decision. It combines UI state, user expectations and system stability into a consistent overall picture and underlines once again that the export in the URL shortener is understood as a UI-driven result set – and expressly not as a substitute for a backup.

Real JSON export from the running system

The architectural decisions described above are particularly well understood from a real export of the running system. The following JSON export was generated directly from the Vaadin interface and represents a specific UI state at a defined point in time.

Even at the top level, the export contains all the necessary contextual information to enable independent classification. The formatVersion field explicitly defines the export format version, providing a stable foundation for future extensions. Changes to the internal data model do not automatically propagate to the export contract, provided the version contract is respected.

The mode field is deliberately self-describing. The value filtered makes it unmistakably clear that this is not a complete data dump, but a result set restricted by UI filters. This information is crucial because it prevents the export from being mistakenly interpreted as a backup. The export does not capture the entire system state; it only includes the section the user has seen in the grid.

With exportedAt, the exact time of snapshot creation is recorded. The export thus clearly refers to a defined system state. Later changes to individual data records are deliberately not included and can be clearly delineated on the basis of this time stamp. This context is supplemented by the total field, which indicates the number of exported data records and enables a quick plausibility check without analysing the actual user data.

The actual domain data is located exclusively in the items array. Each entry describes a single URL-mapping dataset, including domain-relevant properties such as shortCode, originalUrl, and active, as well as the temporal attributes createdAt and, optionally, expiresAt. Notably, these objects contain no UI- or export-specific metadata. They are deliberately reduced to the domain core and could equally come from other contexts in the same form.

It is precisely this clear separation between top-level metadata and functional user data in the items array that makes the export an explainable artefact in itself. Even without knowledge of the internal code or the Vaadin interface, it is possible to determine when the export was created, under what conditions, its scope, and where the actual technical data begins.

The real export thus confirms the design goals described above. It is reproducible, rich in context and clearly recognisable as a UI-driven result set. Instead of merely transporting data, it also conveys its meaning and context of creation – a property that is crucial for maintainability, analysis and long-term further processing.

Effects on maintainability and comprehensibility

The tight coupling between the export and the UI state ensures behaviour that is predictable for developers and users alike. The export follows the same rules as the grid display and contains no hidden special paths or implicit deviations. As a result, the export automatically evolves with the UI: any adjustment to filters, sorting, or paging mechanisms has a consistent effect on both paths without requiring additional synchronisation code.

From a developer’s perspective, this architecture significantly reduces cognitive load. No separate mental model is needed for the export, as its behaviour can be completely derived from the known UI state. If you understand the grid and its filter logic, you automatically understand the export. This not only simplifies onboarding new developers but also reduces the risk of unintentional inconsistencies during refactorings or functional enhancements.

Testability also benefits directly from this clarity. Since the export has no state and relies on stable request and response structures, it can be tested in isolation. Tests can be run with specific filter combinations and validate the resulting exports without simulating the entire UI or complex interaction sequences. At the same time, UI tests remain lean because they can focus on correctly generating the filter state.

In the long term, this structure improves the maintainability of the overall system. Changes to the UI do not introduce hidden side effects in the export, and conversely, further development of the export does not require parallel adjustments elsewhere. The risk of divergent logic paths between display and export is not only reduced but systematically eliminated.

In summary, the close integration of UI state and export logic ensures that export is not a special case in the system. It becomes a transparent, explainable, long-term, and maintainable component of the application that fits seamlessly into the existing Vaadin architecture.

Conclusion

The export in the URL shortener is not an isolated API endpoint, but an integral part of the Vaadin UI architecture. It follows the same rules as the grid, uses the same filters and respects the same boundaries.

Vaadin Flow applications in particular show that a cleanly integrated export is less a question of the file format – and much more a question of clear responsibilities, explicit contracts and a consistently conceived UI workflow.

Introduction to the URL‑Shortener Advent Calendar 2025

December 2025 is all about a project that has grown steadily in recent months: the Java-based URL Shortener, an open-source project implemented entirely with Core Java, Jetty, and Vaadin Flow. The Advent calendar accompanies users every day with a new feature, a technical deep dive, or an architectural improvement – from the basic data structure and REST handlers to UI components and security aspects.

Motivation: Why your own URL shortener?

The entire project is developed openly on GitHub. There, users will not only find the current state of the code, but also all historical development steps, patches, branches, and PRs discussed in the context of the Advent calendar. The source code can be viewed at https://github.com/svenruppert/url-shortener.

Commercial URL shorteners are practical – but often non-transparent. They collect usage data, offer proprietary APIs, or block essential functions behind paywalls. The goal of this project is therefore a completely self-hosted, transparent, and secure shortener suitable for internal networks, development teams, companies, and hobby projects alike.

The project pursues three central basic principles: transparency, in which every decision is documented in the code in a comprehensible way; didactics, as the project is not only intended to be a tool, but above all a learning platform; and extensibility, because each feature is designed to be modular and can therefore be easily adapted or expanded.

A diagram illustrating the process of a URL shortener, showing the transformation from a long URL to a short URL.

What can users expect in the Advent calendar?

The technical journey through Advent is visible both in terms of content and directly in the repository. Each day is created as its own Git branch, starting with feature/advent-2025-day-01 and continuing until the final Christmas door. This allows the project’s development to be traced in small, clearly defined steps – including all refactorings, architectural adjustments, and new functions. In the same structure, the Advent Calendar presents a precisely outlined feature every day, which is reproduced using the associated patch or PR. Supplementary code excerpts from the core and UI modules, architectural explanations and references to typical anti-patterns deepen the understanding. Each door is rounded off with suitable visualisations and a compact summary, so that the project’s development becomes transparent and tangible step by step.

Reasons for the choice of technology

For the URL shortener, technologies were deliberately chosen that are both practical and didactically valuable. Every decision supports the project’s goal: to create a modern yet easy-to-understand system that requires no unnecessary dependencies.

Core Java (Java 24)

The project’s basis is pure Core Java. The reasons for this are:

The use of pure Core Java enables maximum transparency: without any framework magic, users can see exactly how data models, handlers, error handling or serialisation are structured. By deliberately avoiding additional libraries, the attack surface is reduced and the risk of supply-chain problems is minimised. At the same time, this approach makes it very clear to developers how robust and comprehensive the JDK already is today: many tasks that used to require external dependencies can now be solved with the JDK’s built-in tools.

Jetty as an embedded web server

A deliberate alternative to Spring Boot or Jakarta EE:
Jetty is characterised by its lightweight design and rapid launch time, which are especially beneficial in development scenarios. At the same time, Jetty offers complete control over routing and servlets, so that HTTP mechanics can be precisely demonstrated and implemented in a targeted manner. Thanks to its modular structure, Jetty is ideal for small, well-defined microservices while remaining a stable, proven technology whose long production history speaks for high reliability.

Jetty offers just the right balance between simplicity and technical relevance for this open source project.

Vaadin Flow for the UI

The project uses Vaadin Flow to implement an entirely server-side, Java-focused UI that does not require an additional JavaScript or TypeScript stack, making it particularly suitable for developers who want to focus entirely on Java. Instead of relying on a complex front-end ecosystem, Vaadin enables end-to-end development in a single language, significantly reducing the cognitive load and flattening the learning curve. The component-based architecture enables a productive and structured way of working, in which user interfaces can be clearly modelled and reused. At the same time, server-side rendering eliminates the need for direct REST calls from the browser, increasing security and reducing attack surfaces. Despite this technical simplicity, Vaadin offers modern styling and a user experience reminiscent of professional enterprise applications. This makes the framework ideal for internal tools, administrative interfaces and corporate projects where security, robustness and long-term maintainability are paramount.

EclipseStore as a persistence solution

EclipseStore replaces classic databases with an object-oriented persistence approach that deliberately does not require ORM layers, entities, table models or complex abstractions. Instead of having to convert data into a relational structure, it remains in its natural form: Java records, lists and other object structures are directly persisted. This eliminates the usual mapping logic, significantly reducing the complexity of the persistence layer and making the architecture more straightforward to understand.

EclipseStore shows its strength especially in small, focused services. Persistence fits seamlessly into the domain model without forcing developers to adapt their data structures to a relational mindset. This direct approach not only leads to more elegant modelling but also to excellent performance. Since EclipseStore manages in-memory object graphs, data access is very fast and does not require additional caching mechanisms or complex optimisations. The result is a lightweight yet powerful persistence system ideal for a modern, compact Java service.

Didactically, EclipseStore provides valuable insight into how persistent data models can operate when not constrained by relational database constraints. Developers gain a deeper understanding of what object lifecycles, data flows, and modelling decisions look like in an environment that is designed entirely from the Java world. This makes EclipseStore particularly suitable for this URL shortener.

Reference to previously published articles

Before we get into the Advent calendar, it’s worth taking a look at the previous publications, which already present the basics, architecture and the first UI concepts of the project. These articles form the foundation on which the content of the Advent calendar is built:

These articles offer a first technical introduction and are perfect preparation for the individual doors of the Advent calendar.

Why an Advent calendar?

A project like a URL shortener doesn’t happen in a weekend – it consists of many small steps, refactorings, decisions and improvements. An Advent calendar offers the perfect format for this: short, daily, understandable bites that come together to form an overall understanding.

It is, at the same time, a review of the project’s journey and a motivating start to the new year: a project that users can run, expand, change, or completely rethink.

Part II – UrlShortener – first Implementation

1. Introduction to implementation

1.1 Objectives and differentiation from the architectural part

The first part of this series focused on theory: We explained why a URL shortener is not just a convenience tool, but a security-relevant element of digital infrastructure. We discussed models for collision detection, entropy distribution, and forwarding logic, as well as analysed architectural variants – from stateless redirect services to domain-specific validation mechanisms.

This second part now turns to the concrete implementation. We develop a first working version of a URL shortener in Java 24, consciously without the use of frameworks such as Spring Boot or Jakarta EE. The goal is to achieve a transparent, modularly structured solution that provides all core functions: URL shortening, secure storage of mappings, HTTP forwarding, and optional presentation via a Vaadin-based user interface.

Particular attention is paid to the clean separation between encoding, storage, API, and UI. The entire application is delivered as a monolithic artefact – specifically, a classic WAR file, which is compatible with standard servlet containers such as Jetty or Tomcat. This decision enables rapid deployment and facilitates onboarding and testability.

1.2 Technological guardrails: WAR, Vaadin, Core JDK

The implementation is based on a modern, yet deliberately lean technology stack. Only the built-in tools of the JDK and Vaadin Flow are used as the UI framework. The decision to use Vaadin is based on the requirement to implement interactive administration interfaces without additional JavaScript or separate front-end logic, entirely in Java.

The project uses a multi-module structure. The separation between core logic, API layer, and UI remains visible and maintainable in the code. Maven is used as the build tool, supplemented by a WAR packaging plugin that creates a classic servlet deployment structure. The use of Java 24 enables modern language features, including records, pattern matching, sequenced collections, and virtual threads.

The goal is a production-oriented, comprehensible implementation that can be used both as a learning resource and as a starting point for further product development.

1.3 Overview of the components

The application consists of the following core components:

  • A Base62-Encoder, which transforms consecutive IDs into URL-compatible short forms
  • A Mapping-Store, which manages the mapping between the short link and the original URL
  • A REST service, which allows URL shortening and resolution via redirect
  • An optional UI based on Vaadin Flow for manual management of mappings
  • A configurable WAR deployment that integrates all components

The architecture follows the principle: “As little as possible, as much as necessary.” Each part of the application is modular and allows for later splitting if necessary – for example, into separate services for reading, writing or analysis.

In the next chapter, we will focus on the concrete project structure and the module structure.

2. Project structure and module organisation

2.1 Structure of a modular WAR project

The first executable version of the URL shortener is realised as a monolithic Java application, delivered as a classic WAR. The project’s structure is based on a clear, layered architecture, which is prepared for later decomposition. The project is organised modularly, distinguishing between core logic, HTTP interface, and user interface. This separation not only allows for better maintainability but also forms the basis for the service decomposition planned in Part III or IV.

The project consists of three main modules:

  • shortener-core: Contains all business logic, including URL encoding, data model and store interfaces.
  • shortener-api: Implements the REST API based on the Java HTTP server (com.sun.net.httpserver.HttpServer).
  • shortener-ui-vaadin: Optional UI module with Vaadin Flow for managing and visualising mappings.

These modules are combined by a central WAR project (shortener-war), which handles the delivery configuration and bundles all dependencies. The WAR project is the only one that deals with servlet-specific aspects (e.g., web.xml and the required servlet registrations) – the remaining modules remain entirely independent of it.

2.2 Separation of domain, API and UI code

The modularisation of the project is based on the principle of technological isolation: The core business logic must know nothing about HTTP, servlet containers, or UI frameworks. This way, it remains fully testable, interchangeable, and reusable—for example, for future CLI or event-based variants of the shortener.

The core module defines all central interfaces (UrlMappingStore, ShortCodeEncoder) as well as the base classes (ShortUrlMapping, Base62Encoder). These components do not contain any I/O logic.

The api module is responsible for parsing HTTP requests, routing, and generating redirects and JSON responses. It accesses the core logic internally but remains detached from UI aspects.

The ui-vaadin module uses Vaadin Flow to implement a web-based interface, integrates the core logic directly, and is initialised in the WAR via a dedicated servlet definition.

Additional modules can be optionally added—for example, for persistence, monitoring, or analysis—without compromising the coherence of the structure.

2.3 Tooling and Build (Maven, JDK 24, WAR Plugin)

The build system is based on the current version of Maven. Each module is managed as a standalone Maven project with its pom.xml, with shortener-war configured as the parent WAR application. WAR packaging is handled using the standard servlet model, allowing the resulting file to be easily deployed in Tomcat, Jetty, or any Servlet 4.0+ compatible container.

Java 24 is required at runtime, which is particularly relevant for modern language features such as record, pattern matching, and SequencedMap. Release 21 or higher is recommended as the target platform to ensure compatibility with modern runtimes.

Vaadin Flow integration is handled purely on the server side via the Vaadin Servlet and does not require a separate front-end build pipeline. Resources such as themes and icons are loaded entirely from the classpath.

3. URL encoding: Base62 and ID generation

The key requirement for a URL shortener is to generate unique, shortest possible character strings that serve as keys for accessing the original URL. To meet this requirement, the first implementation utilises a sequential ID scheme that assigns a consecutive numeric ID to each new URL. This ID is then converted into a URL-compatible format—specifically, Base62.

Base62 includes the 26 uppercase letters, the 26 lowercase letters, and the 10 decimal digits. Unlike Base64, Base62 does not contain special characters such as +, /, or =, making it ideal for URLs: The generated strings are readable, system-friendly, and easily transferable in all contexts.

The resulting scheme is thus based on a two-step process:

  • Assigning a unique numeric ID (e.g., 1, 2, 3, …)
  • Converting this ID to a Base62 string (e.g., 61 → Z, 62 → 10, 125 → 21, …)

This method guarantees unique codes. Sequential IDs are inherently predictable, but predictability can be reduced by not starting the ID counter at 0 or by additionally shuffling the codes.

3.2 Implementation of a Base62 encoder

The Base62 encoder is implemented as a standalone utility class in the core module. It contains two static methods:

  • encode(long value): converts a positive integer to a Base62 string
  • decode(String input): converts a Base62 string back to an integer

The alphabet is defined internally as a constant character string, and the conversion process is carried out purely mathematically, comparable to the representation of a number in another place value system.

This implementation creates a deterministic, stable, and thread-safe encoder that requires no external libraries. The resulting codes are significantly shorter than the underlying decimal number and contain no special characters—a key advantage for embedded or typed links.

For exceptional cases—such as custom aliases—the encoder remains optional, as such aliases can be stored directly as separate strings. However, by default, the Base62 encoder is the preferred method.

3.3 Alternatives: Random, Hashing, Custom Aliases

In addition to the sequential approach, there are other methods for generating short links that can be considered in later stages of development:

  • Random-based tokens (e.g., UUID, SecureRandom) increase unpredictability, but require collision detection and add memory overhead.
  • Hashing the destination URL (e.g., with SHA-1) yields stable codes for identical destinations, but truncated hashes are prone to collisions.
  • Custom aliases enable readable, short links (e.g., /helloMax), but require additional checking for collisions, syntactic validity, and protection of reserved terms.

For the first version, we focus on the sequential model with Base62 transformation – a stable and straightforward approach.

3.4 Implementation: Base62Encoder.java

The goal is to provide a simple utility class that converts integers to Base62 strings and vice versa. This class is thread-safe, stateless, and implemented without any external dependencies.

First, the complete source code:
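Since the class is small, a complete sketch fits in a few lines. The digits-first alphabet order matches the description in 3.5 below; the exact exception handling is illustrative:

```java
// Sketch of a Base62 encoder: stateless, thread-safe, no external dependencies.
// Alphabet order (digits, lowercase, uppercase) follows the description below.
public final class Base62Encoder {

    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static final int BASE = ALPHABET.length(); // 62

    private Base62Encoder() { } // utility class, no instances

    /** Converts a non-negative integer into its Base62 representation. */
    public static String encode(long value) {
        if (value < 0) {
            throw new IllegalArgumentException("value must be non-negative: " + value);
        }
        if (value == 0) {
            return "0";
        }
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            sb.insert(0, ALPHABET.charAt((int) (value % BASE)));
            value /= BASE;
        }
        return sb.toString();
    }

    /** Converts a Base62 string back into its numeric value. */
    public static long decode(String input) {
        if (input == null || input.isEmpty()) {
            throw new IllegalArgumentException("input must not be empty");
        }
        long result = 0;
        for (char c : input.toCharArray()) {
            int index = ALPHABET.indexOf(c);
            if (index < 0) {
                throw new IllegalArgumentException("invalid Base62 character: " + c);
            }
            result = result * BASE + index;
        }
        return result;
    }
}
```

The conversion is a plain change of place value system, which is why no library support is needed.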

3.5 What is happening here – and why?

This class encapsulates all Base62 encoding behaviour in two static methods. The character set consists of digits (0–9), lowercase letters (a–z), and uppercase letters (A–Z), resulting in exactly 62 different characters.

  • The method encode(long number) converts an integer into its representation in the base-62 place value system. The remainder of the division by 62 is calculated successively, and the corresponding character is prepended. The result is a short, URL-friendly string.
  • The decode(String input) method reverses this process: it converts a Base62 string back into its numeric representation. Each character is replaced by its index in the alphabet and weighted accordingly.

This implementation rejects invalid input with an exception, operates entirely in memory, and can be used directly for ID generation or URL mappings.

Implementation: ShortCodeGenerator.java

The class abstracts ID generation from the concrete storage mechanism. It is suitable for both the in-memory version and future persistent variants. The generator is thread-safe and uses only JDK resources.
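A sketch of the generator; the Base62 encoding is inlined here so the example stands on its own, while the project delegates it to the encoder class:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a sequential short-code generator: a thread-safe counter whose
// values are Base62-encoded. Uses only JDK resources.
public class ShortCodeGenerator {

    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Monotonically increasing counter; getAndIncrement() is non-blocking.
    private final AtomicLong counter;

    /** The initial value allows a persisted counter to be resumed after a restart. */
    public ShortCodeGenerator(long initialValue) {
        this.counter = new AtomicLong(initialValue);
    }

    /** Returns the next unique short code, thread-safe without explicit locking. */
    public String nextCode() {
        return encode(counter.getAndIncrement());
    }

    // Inlined Base62 encoding; the project delegates this to Base62Encoder.
    private static String encode(long value) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            sb.insert(0, ALPHABET.charAt((int) (value % 62)));
            value /= 62;
        }
        return sb.toString();
    }
}
```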


Explanation

This class encapsulates a sequential counter that is incremented each time nextCode() generates a new, unique short code. The output is based on a monotonically increasing ID encoded into a Base62 string.

The getAndIncrement() method of AtomicLong is non-blocking and thus highly performant, even under high concurrency conditions. The generated code is unambiguous, compact, and deterministic—properties that are ideal for auditing, logging, and subsequent analysis.

The constructor allows you to configure the initial value. This is useful, for example, if you want to continue a persistent counter after a restart.

Example usage (e.g. in the mapping store)
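A hypothetical usage sketch, with a plain ConcurrentHashMap standing in for the mapping store introduced in the next section and the encoding inlined for self-containedness:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: the generator supplies the key under which a target
// URL is stored; the map stands in for the mapping store of section 4.
public class MappingExample {

    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Inlined Base62 encoding; the project uses ShortCodeGenerator for this.
    static String encode(long value) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            sb.insert(0, ALPHABET.charAt((int) (value % 62)));
            value /= 62;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong(1);                // sequential IDs
        Map<String, String> store = new ConcurrentHashMap<>(); // code -> URL

        String code = encode(counter.getAndIncrement());
        store.put(code, "https://example.org/foo/bar");
        System.out.println(code + " -> " + store.get(code));
    }
}
```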

4. Mapping Store: Storage of the mapping

4.1 Interface-Design: UrlMappingStore

The mapping store forms the heart of the URL shortener. It manages the mapping between a short code (e.g., kY7zD) and the corresponding target URL (https://example.org/foo/bar). It also handles duplicate use, expiration times, and potential aliases.

In the first stage of development, a purely in-memory-based solution is used. This is fast, simple, and ideal for starting out, even though its contents are lost on restart. Persistence is deliberately postponed to a later phase.

The store is abstracted via a simple interface. This interface allows for later substitution (e.g., with a file- or database-based version) without affecting the API or UI components.

4.2 In-memory implementation with ConcurrentHashMap

The first concrete implementation utilises a ConcurrentHashMap to ensure reliable access even under heavy load. Each mapping entry is represented by a simple record object (ShortUrlMapping) that contains the target URL, creation time, and optional expiration information.

The combination of ConcurrentHashMap and ShortCodeGenerator allows deterministic and thread-safe ID assignment without the need for explicit synchronisation. This creates a high-performance solution that operates reliably even under high load.

4.3 Extensibility for later persistence

All entries are accessible via a central interface. This interface is used not only for storage and retrieval but also forms the basis for later extensions, such as a persistence layer with flat files or EclipseStore, process control via TTL, or even event-driven backends.

The data structure can be extended with metrics, validity logic, or audit fields without requiring changes to the API – a classic application of interface orientation in the sense of the open/closed principle.

4.4 Implementation

This structure represents the basic assignment and allows the optional specification of an expiration time.
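A sketch of such a record; the field names are assumptions based on the description (target URL, creation time, optional expiration):

```java
import java.time.Instant;
import java.util.Optional;

// Sketch of a mapping entry: target URL, creation time, and an optional
// expiration timestamp. Field names are illustrative.
public record ShortUrlMapping(String targetUrl,
                              Instant createdAt,
                              Optional<Instant> expiresAt) {

    /** An entry is expired once its optional expiration time lies in the past. */
    public boolean isExpired(Instant now) {
        return expiresAt.map(now::isAfter).orElse(false);
    }
}
```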

Store interface: UrlMappingStore.java

The interface is deliberately kept slim and abstracts the two core operations: insert (with creation) and lookup.
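A sketch of that interface; the method names shorten and resolve are assumptions derived from the two core operations described:

```java
import java.util.Optional;

// Sketch of the slim store interface: insert (with code creation) and lookup.
public interface UrlMappingStore {

    /** Creates a new mapping for the given target URL and returns its short code. */
    String shorten(String targetUrl);

    /** Resolves a short code to its target URL, if present and not expired. */
    Optional<String> resolve(String shortCode);
}
```

Because the interface carries no implementation detail, a file- or database-backed store can later be substituted without touching API or UI code.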

Implementation: InMemoryUrlMappingStore.java
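A sketch of the in-memory variant. For brevity it maps codes directly to URL strings, inlines the Base62 encoding, and omits the implements clause so it compiles on its own; the project version uses ShortUrlMapping entries, the ShortCodeGenerator, and the store interface:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the in-memory store: ConcurrentHashMap for the entries,
// an AtomicLong-backed counter for collision-free code assignment.
public class InMemoryUrlMappingStore {

    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Concurrent reads and writes remain consistent without external locking.
    private final Map<String, String> mappings = new ConcurrentHashMap<>();
    private final AtomicLong counter = new AtomicLong(1);

    /** Stores the target URL under a freshly generated short code. */
    public String shorten(String targetUrl) {
        String code = encode(counter.getAndIncrement());
        mappings.put(code, targetUrl);
        return code;
    }

    /** Looks up the target URL for a short code, if one exists. */
    public Optional<String> resolve(String shortCode) {
        return Optional.ofNullable(mappings.get(shortCode));
    }

    // Inlined Base62 encoding; the project delegates this to Base62Encoder.
    private static String encode(long value) {
        if (value == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (value > 0) {
            sb.insert(0, ALPHABET.charAt((int) (value % 62)));
            value /= 62;
        }
        return sb.toString();
    }
}
```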

4.5 Why so?

The use of a ConcurrentHashMap ensures that concurrent write and read operations can be handled consistently and efficiently. The combination with AtomicLong in ShortCodeGenerator prevents collisions. The interface allows for a persistent implementation to be introduced later without changing the API or UI behaviour.

5. HTTP API with Java tools

5.1 HTTP server with com.sun.net.httpserver.HttpServer

Instead of relying on heavyweight frameworks like Spring or Jakarta EE, we use the lightweight HTTP server implementation that the JDK already includes in the package com.sun.net.httpserver. This API, although rudimentary, is performant, stable, and perfectly sufficient for our use case.

The server is configured in just a few lines, requires no XML or annotation-based mappings, and can be controlled entirely programmatically. For each path, we define a separate HTTP handler that receives the request, processes it, and returns a structured HTTP response.

5.2 POST /shorten: Shorten URL

The first endpoint allows a long URL to be passed to the shortener via HTTP POST. In response, the server returns the generated short form, in the simplest case as a JSON object containing the shortCode.

Example request:

POST /shorten

Content-Type: application/json

{ "url": "https://example.org/foo/bar" }

Response:

200 OK

Content-Type: application/json

{ "shortCode": "kY7zD" }

Missing or invalid input is answered with 400 Bad Request.

5.3 GET /{code}: Redirect to the original URL

When a short code is called (e.g., GET /kY7zD), the server checks whether a mapping exists. If so, it responds with an HTTP 302 redirect to the original address. If the code is unknown or expired, a 404 Not Found is returned.

This redirection is stateless and allows for later isolation into a read-only redirect service.

5.4 Implementation

Starting point: ShortenerServer.java
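A sketch of the bootstrap: a few lines, purely programmatic, no XML or annotation-based mappings. The inline lambdas mark the spots where the project wires in its ShortenHandler and RedirectHandler:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

// Sketch of the server bootstrap using the JDK's built-in HTTP server.
public class ShortenerServer {

    static HttpServer create(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // One handler per path; placeholders for the project's handlers.
        server.createContext("/shorten", exchange -> {   // ShortenHandler in the project
            exchange.sendResponseHeaders(200, -1);       // -1: no response body
            exchange.close();
        });
        server.createContext("/", exchange -> {          // RedirectHandler in the project
            exchange.sendResponseHeaders(404, -1);
            exchange.close();
        });
        return server;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = create(8080);
        server.start(); // uses the default executor created by start()
        System.out.println("Shortener listening on http://localhost:8080/");
    }
}
```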


POST Handler: ShortenHandler.java
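A sketch of the POST handler. The store is injected as a simple function, and the JSON extraction is a naive inline stand-in for JsonUtils:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

// Sketch: reads the JSON body, stores the URL, answers with the shortCode.
public class ShortenHandler implements HttpHandler {

    private final Function<String, String> shorten; // stand-in for the store

    public ShortenHandler(Function<String, String> shorten) {
        this.shorten = shorten;
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        if (!"POST".equals(exchange.getRequestMethod())) {
            exchange.sendResponseHeaders(405, -1);
            return;
        }
        String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
        String url = extractValue(body, "url"); // JsonUtils takes this role in the project
        if (url == null || url.isBlank()) {
            exchange.sendResponseHeaders(400, -1); // missing or invalid input
            return;
        }
        byte[] response = ("{\"shortCode\":\"" + shorten.apply(url) + "\"}")
                .getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "application/json");
        exchange.sendResponseHeaders(200, response.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(response);
        }
    }

    // Naive extraction of a string value from a flat JSON object.
    static String extractValue(String json, String key) {
        int k = json.indexOf("\"" + key + "\"");
        if (k < 0) return null;
        int start = json.indexOf('"', json.indexOf(':', k) + 1);
        int end = json.indexOf('"', start + 1);
        return (start < 0 || end < 0) ? null : json.substring(start + 1, end);
    }
}
```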

GET Handler: RedirectHandler.java
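A sketch of the GET handler; the resolver function stands in for the mapping store:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.util.Optional;
import java.util.function.Function;

// Sketch: resolves the short code from the request path and answers with
// 302 + Location, or 404 for unknown/expired codes.
public class RedirectHandler implements HttpHandler {

    private final Function<String, Optional<String>> resolver; // store stand-in

    public RedirectHandler(Function<String, Optional<String>> resolver) {
        this.resolver = resolver;
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        String code = stripLeadingSlash(exchange.getRequestURI().getPath());
        Optional<String> target = resolver.apply(code);
        if (target.isPresent()) {
            exchange.getResponseHeaders().set("Location", target.get());
            exchange.sendResponseHeaders(302, -1); // stateless redirect
        } else {
            exchange.sendResponseHeaders(404, -1); // unknown or expired
        }
        exchange.close();
    }

    static String stripLeadingSlash(String path) {
        return path.startsWith("/") ? path.substring(1) : path;
    }
}
```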

JsonUtils.java (Minimal Java JSON without external libraries)

Since we do not want to use external dependencies such as Jackson or Gson in this first implementation, we need our own utility class to process simple JSON objects, specifically:

  • String → Map<String, String>: for processing HTTP POST payloads (/shorten)
  • Map<String, String> → JSON string: to generate responses (e.g., { "shortCode": "abc123" })

This class is sufficient for simple key-value structures, as used in the shortener. It is not intended for nested objects or arrays, but as a pragmatic solution for the start.

Implementation: JsonUtils.java
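A sketch of the helper; the method names parse and toJson are assumptions, and the parser deliberately handles only flat, unescaped key/value objects:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a minimal JSON helper: flat key/value objects only,
// no nesting, no arrays, no escape handling beyond surrounding quotes.
public final class JsonUtils {

    private JsonUtils() { } // utility class, no instances

    /** Parses a flat JSON object like {"url":"..."} into a map. */
    public static Map<String, String> parse(String json) {
        Map<String, String> result = new LinkedHashMap<>();
        String trimmed = json.trim();
        if (trimmed.length() < 2 || trimmed.charAt(0) != '{') {
            return result; // tolerate trivial parsing errors: return an empty map
        }
        String body = trimmed.substring(1, trimmed.length() - 1).trim();
        if (body.isEmpty()) return result;
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":", 2); // limit 2: values may contain ':'
            if (kv.length == 2) {
                result.put(unquote(kv[0]), unquote(kv[1]));
            }
        }
        return result;
    }

    /** Serialises a flat map into a JSON object string. */
    public static String toJson(Map<String, String> map) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : map.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    private static String unquote(String s) {
        String t = s.trim();
        if (t.length() >= 2 && t.startsWith("\"") && t.endsWith("\"")) {
            return t.substring(1, t.length() - 1);
        }
        return t;
    }
}
```

Note that values containing commas would break this parser; that is an accepted limitation of the prototype.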

Properties and limitations

This implementation is:

  • fully JDK-based (no third-party libraries)
  • suitable for flat JSON objects, i.e., { "key": "value" }
  • robust against trivial parsing errors, but without JSON Schema validation
  • deliberately minimalistic to stay within the scope of the prototype

It is sufficient for:

  • POST /shorten (client sends { "url": "…" })
  • the response to this POST (server sends { "shortCode": "…" })

Example use
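A hypothetical round trip: parse an incoming /shorten payload and build the matching response. parse and toJson are inlined stand-ins for the JsonUtils methods so the example compiles on its own:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical round trip for the /shorten endpoint.
public class JsonUtilsExample {

    // Inlined stand-in for JsonUtils.parse (flat objects only).
    static Map<String, String> parse(String json) {
        Map<String, String> result = new LinkedHashMap<>();
        String body = json.trim();
        body = body.substring(1, body.length() - 1).trim(); // strip { }
        for (String pair : body.split(",")) {
            String[] kv = pair.split(":", 2);
            result.put(unquote(kv[0]), unquote(kv[1]));
        }
        return result;
    }

    // Inlined stand-in for JsonUtils.toJson.
    static String toJson(Map<String, String> map) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : map.entrySet()) {
            if (sb.length() > 1) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
        }
        return sb.append("}").toString();
    }

    private static String unquote(String s) {
        String t = s.trim();
        return t.startsWith("\"") && t.endsWith("\"") ? t.substring(1, t.length() - 1) : t;
    }

    public static void main(String[] args) {
        Map<String, String> request = parse("{\"url\":\"https://example.org/foo/bar\"}");
        System.out.println("to shorten: " + request.get("url"));
        System.out.println("response:   " + toJson(Map.of("shortCode", "abc123")));
    }
}
```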

For productive systems, the following is recommended in the future:

  • Gson (lightweight, idiomatic)
  • Jackson (extensive, also for DTO binding)
  • JSON-B (standard API, Jakarta-conformant)

However, for our first implementation in the Core JDK, the solution shown above deliberately remains the appropriate middle ground.

5.6 Core Java Client Implementation

The class URLShortenerClient functions as a minimalist HTTP client for interacting with a URL shortener service. Its structure allows connection to a configurable or locally running server, with the default address being http://localhost:8080/. This enables easy integration into local development environments, test runs, or automated system tests without the need for additional configuration.

At the heart of the functionality is the method shortenURL(String originalUrl). It initiates an HTTP POST call against the server endpoint /shorten, transmits the URL to be shortened in a simple JSON document, and immediately evaluates the server’s response. Success is indicated exclusively by status code 200 OK. In this case, the method extracts the contained shortCode using the static auxiliary method extractShortCode() from the class JsonUtils. If the server returns a different HTTP code, the process is aborted with a corresponding IOException, which enforces explicit error handling at the application level. This maintains a clear semantic separation between regular usage and exceptional situations.

The second central method, resolveShortcode(String shortCode), explicitly resolves a short URL. It sends a GET request directly to the server’s root context, supplemented by the passed code. The behaviour of this method largely corresponds to that of a web browser, except that automatic redirects are deliberately disabled. This way, the method can read the actual target address, if present, from the HTTP header field Location and return it as its result. It clearly distinguishes between valid redirects (status 301 or 302), non-existent codes (status 404), and other unexpected responses. In the latter case, an IOException is thrown, analogous to the shortening process.

Technically speaking, the URLShortenerClient is built exclusively on the Java SE API, namely HttpURLConnection, URL, and stream-based input and output routines. All communication is UTF-8 encoded, ensuring high interoperability with modern JSON-based REST interfaces. The class also implements the interface HasLogger, which suggests a project-wide logging infrastructure and implicitly supports good traceability in server communication.

This client is particularly recommended for integration tests, command-line tools, or administrative scripts that require specific URLs to be shortened or verified. Due to its lean structure, the class is also suitable as a starting point for further abstractions, such as service-oriented encapsulation in larger architectures.
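A condensed sketch of such a client under the constraints described above (HttpURLConnection only, UTF-8 throughout, redirects disabled); the HasLogger interface is omitted and the JSON extraction is inlined instead of delegated to JsonUtils:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URI;
import java.nio.charset.StandardCharsets;

// Sketch of a minimal client for the shortener's HTTP API.
public class URLShortenerClient {

    private final String baseUrl;

    public URLShortenerClient(String baseUrl) {
        this.baseUrl = baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
    }

    public URLShortenerClient() {
        this("http://localhost:8080/"); // default for local development
    }

    /** POSTs the URL to /shorten and returns the shortCode from a 200 OK response. */
    public String shortenURL(String originalUrl) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) URI.create(baseUrl + "shorten").toURL().openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(("{\"url\":\"" + originalUrl + "\"}").getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != 200) {
            throw new IOException("unexpected status: " + conn.getResponseCode());
        }
        String body = new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        return extractShortCode(body); // JsonUtils.extractShortCode in the project
    }

    /** GETs /{code} without following redirects and reads the Location header. */
    public String resolveShortcode(String shortCode) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) URI.create(baseUrl + shortCode).toURL().openConnection();
        conn.setInstanceFollowRedirects(false); // browser behaviour, minus the redirect
        int status = conn.getResponseCode();
        if (status == 301 || status == 302) {
            return conn.getHeaderField("Location");
        }
        if (status == 404) {
            return null; // unknown or expired code
        }
        throw new IOException("unexpected status: " + status);
    }

    // Naive extraction of the shortCode value from a flat JSON response.
    static String extractShortCode(String json) {
        int k = json.indexOf("\"shortCode\"");
        if (k < 0) return null;
        int start = json.indexOf('"', json.indexOf(':', k) + 1);
        int end = json.indexOf('"', start + 1);
        return json.substring(start + 1, end);
    }
}
```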

5.7 Summary

With just a few lines, you can create a functional HTTP API that serves as an ideal testbed and proof of concept. The structure is minimal but open to extensions such as error objects, rate limiting, or logging. What’s particularly remarkable is that the entire API works without servlet containers, external frameworks, or reflection—ideal for embedded applications and lightweight deployments.

In the next blog post, we will create a graphical interface to map user interactions.

Happy Coding

Sven