JavaZone 2025: A Speaker’s Perspective

I usually write about quantum computing, and sometimes about Java and related topics. However, fresh from my recent experience as a speaker at one of the largest European conferences (and one ranked among the best), I thought that you, dear Readers, might be interested in a slightly different article. This time I write about what it was like to be one of the one hundred speakers at JavaZone 2025. I thought you might find it interesting to learn how to secure a spot at such a big event, prepare, and give it a go.

Let’s begin with some key facts. JavaZone 2025 took place on 2nd–4th September 2025 in Lillestrøm, Norway. According to this database, it hosted 90 speakers, 150 exhibitors, and 2,500 attendees. It definitely felt crowded there, especially in the partners’ area. The first day actually took place in Oslo and hosted exclusively workshops. The two main conference days, with presentations and lightning talks, were 3rd–4th September. There was one special event the day before: a dinner for the speakers. I couldn’t attend it for personal reasons, and I quite regret it. It would have been a great opportunity to hang out with the other speakers, since I didn’t actually get a chance to during the conference days.

I gave a lightning talk titled “Quantum Leap with Java: An Unrealistic Dream?” You can find the recording here. Be sure to watch at least a minute or two of it, since there’s one important thing I write about later. I think it was well received, though, to be honest, I didn’t get that much feedback. Anyway, I’m quite happy with how I performed. And speaking of which, how did it actually go?

Getting a Spot

None of this would’ve happened, of course, if I hadn’t got a spot in the first place. It all started on 18th February, when I got an email saying that the call for proposals had opened. This email isn’t sent to everyone, but if you’ve ever submitted, you’ll get it unless you’ve unsubscribed. This was actually my second attempt at getting a talk accepted. Two years ago, in 2023, I submitted a longer form: a 45-minute presentation about the modulith architecture and the different libraries supporting it. It was rejected, but it taught me a precious lesson on how to make a second attempt. For various reasons I couldn’t try last year, so I waited until 2025. The call for proposals ended on 28th April, and on 25th June I learned mine was accepted. How did I secure it this time?

First of all, I realised that the modulith pattern might have been too weird to be accepted. After all, there are many Java developers with very bad memories that surface whenever the word monolith is spoken aloud (me included). And though I believe the modulith architecture is one of the best ever proposed (and quite different from a real monolith), it hadn’t actually gained any momentum until this year. Instead, I opted for something “sexier”. Quantum computing is definitely famous, despite still being a niche field. I had started learning it a year before submitting, so I was already well prepared to talk about it. Moreover, I’ve long been concerned that Java developers are missing yet another opportunity to develop one of the great technologies of this decade (the other being AI), so I thought it was a great opportunity to kill two birds with one stone.

Second of all, a shorter form is easier to “squeeze in” in case your topic isn’t among the top contenders but is still solid. Therefore, I opted for a lightning talk instead of a full presentation. Moreover, it suited my purpose better. Java libraries for quantum development are somewhat disappointing, and I didn’t see much point in presenting what other languages are capable of at a conference with the word “Java” in its name. However, one of the purposes of lightning talks is to spread ideas, and in my case – to alert our Java community. It’s beyond my competence to decide what we should do about the fact that Java is heavily lagging behind in the quantum race, but I felt obligated to shout it out anyway. Hence the choice of the lightning-talk form, which I believe worked quite well.

Once I’d made up my mind, it was time to apply. You have to fill in a pretty long form to submit your talk. There you must provide a title for your talk and a short, publicly visible description. This is the abstract that eventually ends up in the programme. There’s also a separate, longer description visible only to the programme committee, meant to help its members evaluate your proposal. There you must provide a more detailed outline. I did it as a list of bullet points, but no style is enforced whatsoever. You also have to specify the form (a presentation, a lightning talk, or a workshop), the duration, and the language. I opted for English for two reasons: I speak it far better than Norwegian, and it made more sense given it’s an international conference. Finally, you must provide information about yourself, including a short bio, links to your online presence, and your conference availability. For example, if you aren’t fully flexible, you can state it there.

There’s one caveat when filling in the form, though: sometimes it doesn’t get saved. So it’s safer to write everything down in a notepad first, and then just paste and send, which is what I did.

Preparing the Talk

Regardless of what subject you have chosen, there’s one thing you must remember about above anything else: time. Every form of presentation is timeboxed, and the technical crew is pretty strict about staying within this limit. It means that you have to take extra care about the amount of material you want to show.

My rule of thumb is to prepare the stuff for only 75% of the available time. This gives me some safety margin in case I talk a bit too long about something or improvise an anecdote or two. I guess it happens to all speakers, especially when nerves and adrenaline kick in. Having those extra minutes can be a lifesaver. In the case of my talk, I planned to prepare the material for 15 minutes, and have 5 minutes just in case. The actual talk took me slightly longer than 18 minutes, so it worked pretty well.

The starting point for my preparation was the outline of the talk that I had submitted a couple of months before. Don’t worry, it isn’t carved in stone. Speakers are allowed to modify their submissions, within some degree of freedom obviously. I’ve never done this, so I don’t have first-hand experience of how it goes.

Regardless of whether I’m preparing public talks or courses, I always write down what I want to say, exactly the way I’ll be saying it. This way it’s much easier for me to prepare for the actual event: it reduces variability when repeating the talk during preparation and helps me memorise the key points. When this is done, I prepare the slides too (or any other form of presentation, e.g. code). With the whole “presentation pack” in place, I read it out loud with a stopwatch, simulating actually giving it as closely as possible. The goal is to fit within 75% of the maximum time available. If I can’t make it most of the time, it means I’ve prepared too much material and have to cut it.

If I’m happy with the way this part goes, I use a highlighter to mark all the important keywords and parts of the talk. It’ll be crucial during the repetition phase, which comes next. During it, I read the talk aloud. I haven’t counted how many times I did it, but the point wasn’t to learn it by heart. Rather, it was to memorise it well enough to give it smoothly even when I’m stressed or something unplanned happens.

Once I’m happy with it, I practise giving it the way I actually want to give the talk: with slides, code, and everything else. This helps me automate the moments of switching slides or live coding. Again, it’s to prevent stress from ruining the talk. Why is it so important? I’m a pretty calm person, but I still always get a bit nervous when speaking at such events. I’m fairly certain most of you do too. Even if it isn’t full-blown stage fright, it can still ruin your talk. So I prefer not to take chances and simply prepare well enough.

Finally, I “give” the speech with the stopwatch again, just to double-check that I still make it in time after all this practice. If so, I take a couple of days just before the conference free from the talk, to let it sink into my head and rest a bit. And last but not least, getting a good night’s sleep the night before the talk is crucial.

The Conference Day

The conference day is no less important than all the preparation before it. I always make sure that the means of transport I take can get me there well before my talk. I wasn’t at the opening session at JavaZone, but I still made it there before 10 o’clock. That was much earlier than my talk, which was a bit after 4 o’clock, but I wanted to attend some presentations and hang out with friends. These are actually two traps one can easily fall into.

Attending presentations can be quite mentally exhausting. I always try to avoid participating in too many when I’m a speaker, so this time I chose the three I was most interested in and skipped the rest. You may be thinking I was too harsh on myself and could’ve attended more. However, it’s important to remember that the attendees are at the conference to get the most out of the talks. It’s a matter of professional ethics to be at your best when you’re giving a talk. That’s why I do my best to avoid cognitive overload, and that’s why I went only to the talks I really wanted to attend.

One of the talks I attended was in the room where mine was scheduled later that day. JavaZone actually hosts a special walk-around the day before the conference starts, meant for speakers to get familiar with the rooms and facilities. Unfortunately, I couldn’t attend it, so I had to rely on that little trick to learn what the room looked like. I didn’t want to discover anything surprising just before my talk.

The second trap is networking, or hanging out with friends. Not because it can be tiresome; usually it isn’t. However, talking for a couple of hours puts a huge strain on your throat. So while I really enjoyed the time with them, I made sure I had at least one hour of keeping my mouth shut so my throat could rest.

What Went Well and What Went Wrong

Well, I got the spot. But more seriously, I believe that the choice of form (a lightning talk) and a “sexy” topic (at least I believe quantum computing can be considered one) contributed to getting my proposal accepted. I’ve learned that there were around 800 submissions, so only approximately one in eight got accepted. Without a better strategy this time, I might well have been rejected again.

I’m also quite happy with the amount of preparation I put into the talk. It went pretty well even though I caught some nerves in the middle of it (of course!). I’m fairly certain that if I hadn’t memorised it so well, I could’ve panicked, or at least not delivered it so smoothly.

According to my estimate (which can be wildly wrong, since I made it on stage, two minutes before the talk), there were about 20–30 people in the audience. Given the relatively late slot, I think the talk was still fairly popular. I may well be deluding myself, but I’m satisfied with that anyway.

And last but not least, I actually got some feedback and follow-up questions. The feedback was generally positive, though I also got some remarks about things I should work on if I’m to give another talk. That’s actually quite valuable; I like to take every chance to improve my skills. Plus, it showed me that at least a couple of people were genuinely interested in the talk.

That would be it for the positive things; now for the downsides. First of all, despite planning to conserve my energy and not show up too early, I failed to do so. I really wanted to hang out with friends and attend some talks, but in turn it made me more tired than I’d wished. It wasn’t good from the talk’s perspective, so I definitely should’ve balanced this out better. On the other hand, it’s hard to just turn your folks down, so I may well repeat the same mistake again.

That was more about perfecting my performance. But there was something I really screwed up, and I didn’t realise it until it was way too late. I have eye-comfort settings turned on on my devices almost all the time. The lights in the rooms at Nova in Lillestrøm were quite warm, so I didn’t turn them off. After connecting, I checked that the presenter’s screen showed neutral colours, which it did. This was to ensure that the presentation would be displayed without any filter on. But I didn’t consider that the software used to record my presentation would also pick up the warm colour that the eye-comfort shield bathed my display in. When you look at the slides in the Vimeo recording, you’ll see they have an unnaturally warm tone. I haven’t confirmed it with the JavaZone organisers, but I’m fairly certain it’s because of my settings. Next time I have to remember to turn this off, no matter what, to avoid peach vibes emanating from the slides.

Conclusion 

It was a great experience to give a lightning talk at this year’s JavaZone. I really enjoyed it, and I hope my audience did too. I wouldn’t have been there if not for a carefully prepared strategy for picking the topic and the form of the presentation. I also spent a lot of time preparing, so that I could deliver the talk as professionally as possible regardless of the conditions or my psychological state. The most important thing was to be mindful of the time limit at every stage of preparation, but proper repetition did its job too.

The talk went quite well, and I got enough positive feedback to call it a success. Despite overestimating my stamina, I avoided most of the pitfalls that surrounded my talk. However, one can never be overprepared: my eye-comfort settings badly affected the way the slides were recorded, effectively downgrading the experience for anyone watching my talk on Vimeo. This is something to work on in the future.

Finally, this isn’t meant to be a guide on how to be a speaker at JavaZone (or any bigger conference as a matter of fact). However, if you find this article useful in your preparations, I couldn’t be happier.

Quantum Computing in Java

If you are new to quantum computing, then the title of this post will not sound suspicious or provocative to you. However, if you are a seasoned quantum user, then you will know that something must be off here. After all, your language of choice is most likely Python, or maybe C if you are working on high-performance solutions.

The majority of frameworks and libraries in the field of quantum computing are written in Python, and made for this language. I am thinking of Qiskit, Ocean, and Cirq, among others. There is a big ongoing effort by IBM to develop a parallel Qiskit framework for the C language, and one can even try Q# from Microsoft. What they have in common is comprehensive functionality, big-company backing, and integration with real quantum hardware (to different extents, though).

But what about Java? Is it even possible to do any useful quantum development in this language? For a big part of my career I have been a Java/Kotlin developer, so this question is somewhat personal to me. And the answer is… it is complicated. There are two libraries that I have managed to find, and two JEPs that are focused on quantum computing. How useful are they? Let us find out.

JQuantum

The first library we are going to take a look at is called JQuantum. If you click on the link, the first thing you will notice is that it has not been updated since 30th July 2018. Being so outdated disqualifies it from any production project. Can it be of any use anyway?

JQuantum contains implementations of basic quantum computing concepts. You can find classes for qubits, gates, and circuits, and even the most famous algorithms, such as Grover’s algorithm and Shor’s algorithm.

Sadly, the library does not have any form of a simulator, not to mention the possibility of running the code against a real QPU. You can run your programs and output their results, but what JQuantum does behind the scenes is just linear algebra on vectors and matrices. Since the amount of memory necessary to store the state vector scales exponentially, the number of qubits you can use in JQuantum is severely limited. After all, even the simplest single-qubit gates are 2×2 matrices, and each added qubit doubles the length of the state vector representing the system (and quadruples the number of entries in the matrices acting on it).
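To make that scaling concrete, here is a tiny plain-Java sketch (independent of JQuantum; all names are made up) that applies the X gate as a 2×2 matrix to a single-qubit state vector, and shows how quickly the number of amplitudes grows with the qubit count:

```java
import java.util.Arrays;

public class StateVectorDemo {
    // Apply the X (NOT) gate, a 2x2 matrix, to a single-qubit state vector.
    static double[] applyX(double[] state) {
        double[][] x = {{0, 1}, {1, 0}};
        double[] out = new double[2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                out[i] += x[i][j] * state[j];
        return out;
    }

    public static void main(String[] args) {
        double[] zero = {1, 0}; // the |0> state as a vector of amplitudes
        System.out.println(Arrays.toString(applyX(zero))); // X|0> = |1>, i.e. [0.0, 1.0]

        // An n-qubit system needs 2^n amplitudes, so memory doubles per qubit:
        for (int n : new int[]{1, 10, 20, 30}) {
            System.out.printf("%2d qubits -> %,d amplitudes%n", n, 1L << n);
        }
    }
}
```

This is exactly the kind of matrix-on-vector arithmetic a state-vector library performs behind the scenes, which is why simulating even a few dozen qubits this way quickly exhausts classical memory.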

Okay, so the library is outdated, and its functionality is quite limited. Can we put it to any use then? In fact, we can. Let us take a look at a sample code.

class JQuantumExample implements Example {
    @Override
    public void show() {
        var qubitValue = 0;
        var quantumRegister = new QubitRegister(1, qubitValue);
       
        var notGate = QuantumGate.X;
        notGate.accept(quantumRegister);
        var measurement = quantumRegister.measure();
       
        System.out.println("JQuantum Example");
        System.out.println("----------------");
        System.out.println("Applied X gate to |" + qubitValue + "⟩; result is |" + measurement +"⟩.");
        System.out.println();
    }
}

In the code snippet above, we first create a qubit in the |0⟩ (read 0-ket) state. If you do not know what kets are, please read my post about them. Then, we create a quantum register (processor’s memory) with one qubit. We instantiate a NOT gate, often called an X gate. Then, we apply the X gate to the register, effectively flipping the value of the qubit from |0⟩ to |1⟩. Finally, we measure.

Measurement in quantum computing is the act of “translating” the result of a computation from quantum encoding to classical (i.e. binary) encoding. Thus, whatever the program yields ends up as either |0⟩ or |1⟩ per qubit. Any other quantum state (and there are infinitely many of them) cannot be processed by classical computers.
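For a single qubit with real amplitudes, this collapse can be sketched in a few lines of plain Java (a didactic toy, not any library’s API): a state (α, β) yields 0 with probability α² and 1 with probability β².

```java
import java.util.Random;

public class MeasurementDemo {
    // Collapse a single-qubit state (alpha, beta) to a classical bit:
    // 0 with probability alpha^2, 1 with probability beta^2 (real amplitudes).
    static int measure(double alpha, double beta, Random rng) {
        return rng.nextDouble() < alpha * alpha ? 0 : 1;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double s = Math.sqrt(0.5); // equal superposition (|0> + |1>)/sqrt(2)
        int ones = 0;
        for (int i = 0; i < 10_000; i++) ones += measure(s, s, rng);
        // Roughly half of the shots collapse to |1>:
        System.out.println("Measured |1> in " + ones + " of 10000 shots");
    }
}
```

Running the superposition many times and counting outcomes is precisely what “shots” mean on real quantum hardware.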

This code may not earn you any money, but it can still demonstrate basic principles of quantum programs. And this is what the JQuantum library is great for: education. If you have never written a line of code in any language other than Java, you can still learn a solid piece of quantum computing with this library.

If you would like to run this program, you must add the Quantum.jar file to your classpath. You can find it in the root of the JQuantum repository.

Strange

If you are a Marvel fan, then you may be disappointed that I will not be writing about your favourite doctor here. Instead, Strange is the name of a quantum library written for the book Quantum Computing in Action by Johan Vos.

Strange is an open-source learning resource that can be used alongside the aforementioned book, or on its own. It is available via Maven Central, so no manual installation is needed, as was the case with JQuantum.

It has somewhat similar functionality to that library. You will find abstractions for qubits, gates, circuits (under the name of programs), and quantum algorithms. The collection is a bit smaller than in JQuantum, though.

On the upside, Strange does contain a quantum simulator. It is a simple one, without any sophisticated features of modern-day simulators (like noise or hardware-specific characteristics), but it can still be pretty useful. There are traces in the code that cloud integration was in progress, but it has never been finished. Thus, you cannot run code written with Strange on any actual quantum hardware.

Let us take a look at a code example.

public class StrangeExample implements Example {
   @Override
   public void show() {
       var qubitValue = 0;
       var program = new Program(1);
       var x = new X(qubitValue);

       var negationStep = new Step();
       negationStep.addGate(x);
       program.addStep(negationStep);

       var executionEnvironment = new SimpleQuantumExecutionEnvironment();
       var result = executionEnvironment.runProgram(program);
       var measurement = result.getQubits()[0].measure();
       
       System.out.println("Strange Example");
       System.out.println("---------------");
       System.out.println("Applied X gate to |" + qubitValue + "⟩; result is |" + measurement +"⟩.");
       System.out.println();
   }
}

This program does exactly the same thing as the previous one, and the first three steps are almost identical. The difference starts with the definition of a step. Steps are phases of a circuit, which Strange calls a program. So in order to code the negation, we must first define a step for it (negationStep) and then add it to the program with the addStep method. Next, we instantiate the simulator in the form of executionEnvironment. Finally, we can run the program and obtain a measurement from the result.

Though JQuantum and Strange are very alike, the functionality of Strange is somewhat more robust. Both, however, are great learning resources for quantum computing newbies. Unfortunately, doing any real-life project with Strange is as unfeasible as with JQuantum.

Post-quantum Cryptography

Believe it or not, quantum computers are going to break much of the cryptography we use on an everyday basis. RSA, Diffie-Hellman, and related public-key protocols are useless against the codebreaking capabilities of future quantum computers running Shor’s algorithm, while symmetric primitives such as SHA are merely weakened by Grover’s algorithm. Luckily, such machines are not here yet. Even more luckily, I will spare you all the difficult maths related to this issue.

Although quantum computers with enough qubits to run Shor’s algorithm on inputs the size of today’s cryptographic keys are still at least a couple of years away, cryptographers have already started working on cryptographic protocols that are quantum-safe, meaning they cannot be cracked by any known quantum computer or algorithm (at least for now). Why do they do this? There is an old attack pattern, often called “harvest now, decrypt later”, that relies on eavesdropping on data now for later decryption. An attacker following it can, for example, steal the packets in which you authenticate with your bank, hoping to decrypt them whenever possible and obtain your credentials. It may seem futile at first, but if a practical quantum computer arrives in two years, a lot of this data may still be valid. After all, how often do you rotate your bank password?

I mentioned that cryptographers have already begun working on a solution. It arrived in the form of NIST standards in August 2024. Shortly after that, two JEPs were published: 496 and 497. The first provides an implementation of ML-KEM, for key generation and key encapsulation. The second is very similar, but provides ML-DSA, used for digital signatures rather than key encapsulation. They can be used in the following way.

class PostQuantumCryptographyExample implements Example {
   @Override
   public void show() {
       try {
           var mlKem = "ML-KEM";
           var encryptionKeyGenerator = KeyPairGenerator.getInstance(mlKem);
           encryptionKeyGenerator.initialize(NamedParameterSpec.ML_KEM_512);
           var encryptionKeys = encryptionKeyGenerator.generateKeyPair();

           System.out.println("Post-Quantum Cryptography Example");
           System.out.println("---------------------------------");

           var publicEncryptionKey = encryptionKeys.getPublic().getEncoded();
           String publicKeyBase64 = Base64.getEncoder().encodeToString(publicEncryptionKey);
           System.out.println("Public key generated with " + mlKem + ": " + publicKeyBase64);

           var mlDsa = "ML-DSA";
           var signingKeyGenerator = KeyPairGenerator.getInstance(mlDsa);
           var signingKeys = signingKeyGenerator.generateKeyPair();
           var privateSigningKey = signingKeys.getPrivate();

           var helloWorld = "Hello World";
           byte[] message = helloWorld.getBytes();
           Signature signature = Signature.getInstance(mlDsa);
           signature.initSign(privateSigningKey);
           signature.update(message);
           byte[] signedMessage = signature.sign();

           System.out.println("Message \"" + helloWorld + "\" signed with " + mlDsa + ": " + Base64.getEncoder().encodeToString(signedMessage));
       } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException | InvalidKeyException |
                SignatureException e) {
           throw new RuntimeException(e);
       }
   }
}

If you have ever used other cryptographic keys in Java, then I am quite sure this code feels familiar to you. I will not describe it in detail here, since there is nothing related to quantum computing in it. We cannot use these implementations for working with qubits, gates, or circuits. In fact, they do not even require a quantum computer to run. Instead, they rely on lattice-based constructions that are immune to Shor’s algorithm, since they do not depend on the multiplication of big primes.
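To see what the reliance on big-prime multiplication means in practice, here is a deliberately toy textbook-RSA round trip (tiny numbers, no padding; never use this as real cryptography). Its security rests entirely on the hardness of factoring n = p × q, which is exactly the problem Shor’s algorithm solves efficiently:

```java
import java.math.BigInteger;

public class ToyRsaDemo {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(61);
        BigInteger q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q); // 3233; the public modulus
        BigInteger phi = p.subtract(BigInteger.ONE)
                .multiply(q.subtract(BigInteger.ONE)); // 3120
        BigInteger e = BigInteger.valueOf(17);  // public exponent
        BigInteger d = e.modInverse(phi);       // private exponent

        BigInteger message = BigInteger.valueOf(42);
        BigInteger ciphertext = message.modPow(e, n);
        BigInteger decrypted = ciphertext.modPow(d, n);
        System.out.println("Round trip: " + decrypted); // prints 42

        // Knowing the factors of n reveals d. Here brute force suffices;
        // for real key sizes only a quantum computer (Shor) could do this.
        for (BigInteger f = BigInteger.TWO; ; f = f.add(BigInteger.ONE)) {
            if (n.mod(f).signum() == 0) {
                System.out.println("Factored n: " + f + " x " + n.divide(f));
                break;
            }
        }
    }
}
```

Lattice-based schemes like ML-KEM and ML-DSA avoid this trapdoor altogether, which is why they survive Shor’s algorithm.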

Conclusion

While Java may not be the first choice for quantum computing due to the lack of robust, up-to-date libraries and frameworks, there are still learning resources available for those interested. The JQuantum and Strange libraries, although outdated or limited in functionality, can serve as valuable educational tools for beginners.

Moreover, with the impending threat of today’s cryptography being broken by future quantum computers, Java has already started addressing this challenge through JEPs 496 and 497. These provide implementations for key encapsulation and digital signatures, helping our data remain secure even in a post-quantum world. Although they cannot be used for general quantum computing, they should at least provide a cornerstone for the reliability of Java applications in the quantum era.

While Java may not be the language of choice for quantum computing today, it is still possible to learn basic principles and prepare for a future where quantum-safe cryptography becomes the norm.

You can find all the code examples on my GitHub.

Post Scriptum

If you are interested in the prospects of quantum computing in Java and its future, I warmly invite you to join me at JavaZone 2025 in Lillestrøm, Norway, where I’ll give a lightning talk titled Quantum Leap with Java: An Unrealistic Dream? You can keep an eye on the program here. I am hoping to see you there!

How to Implement the Pipes and Filters Architecture with Java and Azure

In my previous blog post, I provided an in-depth explanation of the Pipes and Filters architecture, which you can check out here. To recap, the Pipes and Filters architecture breaks down a system into small, self-contained processing components known as filters. Each filter is responsible for performing a specific task or transformation on the data it receives, promoting modularity and reusability. These filters are connected via pipes, which facilitate the flow of data from one filter to the next. This architecture is particularly effective in scenarios involving data integration, processing workflows, transformation pipelines, and stream processing.

In this blog post, I will walk you through a sample implementation of the Pipes and Filters architecture using Java and Azure. Our example project will centre around a chatbot designed to assist in creative thinking activities such as brainstorming.

Sample Project Overview

The goal of this project is to create a tool that integrates with a company’s creative thinking solutions. Specifically, it’s a chatbot that aids teams during brainstorming sessions and other creative activities. The process begins when a user interacts with the application by typing a question, such as “How will the ongoing AI revolution affect the financial industry?” This question is then sent to the application for processing.

How the System Works

  1. Input Validation: The first filter is responsible for validating the user’s question. The question might be in a language that the AI model doesn’t understand, or it might be too long or contain sensitive information. Therefore, the first task is to verify whether the question can be processed further.
  2. Prompt Engineering: If the question is valid, the application uses predefined templates to enrich it. These templates provide context to the AI-powered tool, making the model’s output more valuable. For example, a template might be: “You are a CEO. Given a strategic prompt, you will create X futuristic, hypothetical scenarios that happen Y years from now. The strategic prompt is: Z”. This step is crucial as it leverages prompt engineering to guide the AI model in generating more meaningful responses.
  3. AI Model Interaction: The final step involves sending the enriched prompts to the AI model, which processes them and generates answers. These answers are then displayed back to the user.
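The three steps can be sketched end to end in plain Java; all names and validation rules below are hypothetical, and the model call is a stub rather than a real AI integration:

```java
import java.util.Optional;
import java.util.function.Function;

public class PipelineSketch {
    // Step 1: validate -- reject empty or overly long questions (hypothetical rules).
    static Optional<String> validate(String question) {
        return (question == null || question.isBlank() || question.length() > 500)
                ? Optional.empty() : Optional.of(question.trim());
    }

    // Step 2: prompt engineering -- wrap the question in a predefined template.
    static String enrich(String question) {
        return "You are a CEO. Given a strategic prompt, you will create 3 futuristic, "
                + "hypothetical scenarios that happen 10 years from now. "
                + "The strategic prompt is: " + question;
    }

    // Step 3: AI model interaction -- stubbed instead of calling a real model.
    static String askModel(String prompt) {
        return "[model answer for: " + prompt + "]";
    }

    public static void main(String[] args) {
        Function<String, String> pipeline =
                ((Function<String, String>) PipelineSketch::enrich)
                        .andThen(PipelineSketch::askModel);
        validate("How will the ongoing AI revolution affect the financial industry?")
                .map(pipeline)
                .ifPresentOrElse(System.out::println,
                        () -> System.out.println("Question rejected by validation"));
    }
}
```

In the real system each step runs as a separate Azure Function connected by queues rather than direct method composition; this sketch only illustrates the data flow.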

Implementation Details

The system consists of three filters:

  1. Input Validation Filter: Validates the user’s input according to the application’s data requirements.
  2. Prompt Engineering Filter: Analyses and enriches the validated input to create a prompt.
  3. AI Model Facade Filter: Sends the engineered prompt to the AI model and handles the response.

The First Filter: Input Validation

The first filter is implemented as an Azure Function, and its primary role is to validate the incoming question.

@FunctionName("QuestionValidationFunction")
public HttpResponseMessage validate(
       @HttpTrigger(name = "question",
               methods = {HttpMethod.POST},
               authLevel = AuthorizationLevel.FUNCTION)
       HttpRequestMessage<String> question,
       @QueueOutput(name = "questionQueue",
               queueName = "question-queue",
               connection = "AzureWebJobsStorage")
       OutputBinding<String> questionQueue,
       ExecutionContext executionContext) {
    // Implementation of validation.
}

The validate method, annotated with @FunctionName("QuestionValidationFunction"), is triggered by an HTTP request. It takes three parameters: the HTTP request containing the question, an output binding to a storage queue named "question-queue", and the execution context. The method validates the question and, if it is valid, sends it down the pipeline.

The Second Filter: Prompt Engineering

The second filter enriches the validated question with a template to maximise the AI model’s response quality.

@FunctionName("PromptEngineeringFunction")
public void sendPrompt(
       @QueueTrigger(
               name = "question",
               queueName = "question-queue",
               connection = "AzureWebJobsStorage")
       String question,
       @QueueOutput(
               name = "promptQueue",
               queueName = "prompt-queue",
               connection = "AzureWebJobsStorage")
       OutputBinding<String> promptQueue,
       ExecutionContext executionContext) {
   // Prompt engineering logic.
}

This function is triggered by messages in the "question-queue". When a new message arrives, the function is invoked, and the question is enriched before being sent to the next queue, "prompt-queue".
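Each storage queue here acts as a pipe between two filters. Locally, you can mimic that wiring with an in-memory queue (a simplified, single-threaded sketch; the queue names mirror the Azure ones):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueuePipeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Local stand-ins for the Azure storage queues wiring the filters together.
        BlockingQueue<String> questionQueue = new ArrayBlockingQueue<>(10);
        BlockingQueue<String> promptQueue = new ArrayBlockingQueue<>(10);

        // Filter 1 output: a validated question lands on the question queue.
        questionQueue.put("How will AI affect the financial industry?");

        // Filter 2: triggered by the question queue, enriches, forwards downstream.
        String question = questionQueue.take();
        promptQueue.put("You are a CEO. ... The strategic prompt is: " + question);

        // Filter 3 would be triggered by the prompt queue next.
        System.out.println(promptQueue.take());
    }
}
```

The queues decouple the filters completely: each one only knows the message format of its input and output pipe, never its neighbours.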

The Third Filter: AI Model Facade

The third filter handles communication with the AI model. This filter is implemented using the Spring Cloud Function framework, which decouples infrastructure configuration from the business logic. I’ll describe it in detail in the next blog post, but I’ll give you a short description here so you understand the code.

The functions are implemented as Java functional interfaces and autowired into their respective request handlers. The handlers contain the logic that configures integration with the serverless platform provider; in our case that is the Azure SDK, examples of which we have seen above. With this setup, you can change the cloud provider simply by rewriting the handlers (and changing the build definition), without any need to rewrite the functions themselves.

Let’s now look at the function’s code. 

@Bean
public Function<String, String> answer(ModelClient modelClient) {
   	// Function’s logic
}

The answer function is a simple Java function interface that handles the logic for interacting with the AI model. It is autowired into a handler that manages the integration with Azure.
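The elided body is essentially a one-liner delegating to the model client. A plain-Java sketch, without the Spring wiring (ModelClient and its method are hypothetical stand-ins for the real client):

```java
import java.util.function.Function;

public class AnswerFunctionSketch {
    // Hypothetical stand-in for the real client talking to the AI model.
    interface ModelClient {
        String complete(String prompt);
    }

    // What the @Bean method might return: a function delegating to the client.
    static Function<String, String> answer(ModelClient modelClient) {
        return prompt -> modelClient.complete(prompt);
    }

    public static void main(String[] args) {
        ModelClient stub = prompt -> "[stubbed answer for: " + prompt + "]";
        System.out.println(answer(stub).apply("What is quantum computing?"));
    }
}
```

Because the function only sees plain Strings, swapping Azure for another provider means touching the handler, not this logic.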

@Component
public class AnswerHandler {

   private final Function<String, String> answer;

   public AnswerHandler(Function<String, String> answer) {
       this.answer = answer;
   }

   @FunctionName("answer")
   public void answer(
           @QueueTrigger(
                   name = "promptQueue",
                   queueName = "prompt-queue",
                   connection = "AzureWebJobsStorage")
           String prompt,
           @QueueOutput(
                   name = "answerQueue",
                   queueName = "answer-queue",
                   connection = "AzureWebJobsStorage")
           OutputBinding<String> answerQueue
   ) {
       // Handler’s logic
   }
}

This handler is similar to the previous filters, but it delegates the business logic to the answer function. The answerQueue is used to send the final answer for further consumption. 

Deployment

With all three filters implemented, you can now deploy the application to Azure and play with the code. The deployment process can be accomplished using Maven, as described in this article.

In summary, we implemented a complete Pipes and Filters architecture using both the Azure SDK and Spring Cloud Function. The system comprises three filters – each responsible for a distinct part of the application’s workflow: input validation, prompt engineering, and AI model communication. The unidirectional data flow is managed primarily by queues, ensuring a clean separation of concerns and easy scalability.

Summary

This blog post demonstrates how to implement the Pipes and Filters architecture using Java and Azure for a chatbot that assists in creative thinking activities. The architecture is broken down into three filters: input validation, prompt engineering, and AI model interaction. Each filter handles a specific task in the data processing pipeline, ensuring modularity and reusability. The post also covers the deployment process using Azure and Spring Cloud Function, highlighting the benefits of separating business logic from infrastructure configuration.

If you’re interested in how this architecture style can be used to implement serverless solutions, and how to work with Azure Functions in Java, check out my Udemy course that covers these topics in detail.  

The working code example can be found on my GitHub.