Search the Community
Showing results for tags 'java'.
-
In the vast universe of programming, the era of generative artificial intelligence (GenAI) has marked a turning point, opening up a plethora of possibilities for developers. Tools such as LangChain4j and Spring AI have democratized access to the creation of GenAI applications in Java, allowing Java developers to dive into this fascinating world. With LangChain4j, for instance, setting up and interacting with large language models (LLMs) has become exceptionally straightforward. Consider the following Java code snippet:

public static void main(String[] args) {
    var llm = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-3.5-turbo")
            .build();
    System.out.println(llm.generate("Hello, how are you?"));
}

This example illustrates how a developer can quickly instantiate an LLM within a Java application. By simply configuring the model with an API key and specifying the model name, developers can begin generating text responses immediately. This accessibility is pivotal for fostering innovation and exploration within the Java community. More than that, we have a wide range of models that can be run locally, and various vector databases for storing embeddings and performing semantic searches, among other technological marvels.

Despite this progress, however, we are faced with a persistent challenge: the difficulty of testing applications that incorporate artificial intelligence. This aspect seems to be a field where there is still much to explore and develop. In this article, I will share a methodology that I find promising for testing GenAI applications.

Project overview

The example project focuses on an application that provides an API for interacting with two AI agents capable of answering questions. An AI agent is a software entity designed to perform tasks autonomously, using artificial intelligence to simulate human-like interactions and responses. In this project, one agent uses direct knowledge already contained within the LLM, while the other leverages internal documentation to enrich the LLM through retrieval-augmented generation (RAG). This approach allows the agents to provide precise and contextually relevant answers based on the input they receive.

I prefer to omit the technical details about RAG, as ample information is available elsewhere. I'll simply note that this example employs a particular variant of RAG, which simplifies the traditional process of generating and storing embeddings for information retrieval. Instead of dividing documents into chunks and making embeddings of those chunks, in this project we use an LLM to generate a summary of each document, and the embedding is generated based on that summary. When the user writes a question, an embedding of the question will be generated and a semantic search will be performed against the embeddings of the summaries. If a match is found, the user's message will be augmented with the original document. This way, there's no need to deal with the configuration of document chunks, worry about setting the number of chunks to retrieve, or worry about whether the way of augmenting the user's message makes sense. If there is a document that talks about what the user is asking, it will be included in the message sent to the LLM.
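The article does not show this summary-based ingestion in code, so here is a minimal LangChain4j-style sketch of the idea. The in-memory store, the summarization prompt, and the document and question strings are illustrative assumptions, and exact package and store-method names vary between LangChain4j versions:

import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.store.embedding.EmbeddingMatch;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;
import java.util.List;

public class SummaryRagSketch {
    public static void main(String[] args) {
        var llm = OpenAiChatModel.builder().apiKey("demo").modelName("gpt-3.5-turbo").build();
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

        // Ingestion: summarize the document with the LLM, embed the summary,
        // and keep the original document text alongside that embedding.
        String document = "Testcontainers Desktop can be downloaded from https://testcontainers.com/desktop/ ...";
        String summary = llm.generate("Summarize the following document in a few sentences:\n" + document);
        store.add(embeddingModel.embed(summary).content(), TextSegment.from(document));

        // Query time: embed the question and search against the summary embeddings.
        String question = "How can I install Testcontainers Desktop?";
        Embedding questionEmbedding = embeddingModel.embed(question).content();
        List<EmbeddingMatch<TextSegment>> matches = store.findRelevant(questionEmbedding, 1);
        if (!matches.isEmpty()) {
            // Augment the user's message with the original document, not the summary.
            String augmented = question + "\n\nUse this document to answer:\n" + matches.get(0).embedded().text();
            System.out.println(llm.generate(augmented));
        }
    }
}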
Technical stack

The project is developed in Java and utilizes a Spring Boot application with Testcontainers and LangChain4j. For setting up the project, I followed the steps outlined in Local Development Environment with Testcontainers and Spring Boot Application Testing and Development with Testcontainers. I also use Testcontainers Desktop to facilitate database access, to verify the generated embeddings, and to review the container logs.

The challenge of testing

The real challenge arises when trying to test the responses generated by language models. Traditionally, we could settle for verifying that the response includes certain keywords, which is insufficient and prone to errors.

static String question = "How I can install Testcontainers Desktop?";

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    assertThat(answer).contains("https://testcontainers.com/desktop/");
}

This approach is not only fragile but also lacks the ability to assess the relevance or coherence of the response. An alternative is to employ cosine similarity to compare the embeddings of a "reference" response and the actual response, providing a more semantic form of evaluation. This method measures the similarity between two vectors/embeddings by calculating the cosine of the angle between them. If both vectors point in the same direction, it means the "reference" response is semantically the same as the actual response.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    double cosineSimilarity = getCosineSimilarity(reference, answer);
    assertThat(cosineSimilarity).isGreaterThan(0.8);
}

However, this method introduces the problem of selecting an appropriate threshold to determine the acceptability of the response, in addition to the opacity of the evaluation process.
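The getCosineSimilarity(...) helper used in the test above is not shown in the article. One way it could be sketched is with a local embedding model, computing the cosine directly from the raw vectors; the choice of AllMiniLmL6V2EmbeddingModel here is an assumption, not necessarily what the project uses:

import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;

class SimilaritySketch {
    static final EmbeddingModel EMBEDDING_MODEL = new AllMiniLmL6V2EmbeddingModel();

    // Embeds both texts and returns the cosine of the angle between the two vectors.
    static double getCosineSimilarity(String reference, String answer) {
        float[] a = EMBEDDING_MODEL.embed(reference).content().vector();
        float[] b = EMBEDDING_MODEL.embed(answer).content().vector();
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}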
Toward a more effective method

The real problem here arises from the fact that answers provided by the LLM are in natural language and non-deterministic. Because of this, using current testing methods to verify them is difficult, as these methods are better suited to testing predictable values. However, we already have a great tool for understanding non-deterministic answers in natural language: LLMs themselves. Thus, the key may lie in using one LLM to evaluate the adequacy of responses generated by another LLM.

This proposal involves defining detailed validation criteria and using an LLM as a "Validator Agent" to determine if the responses meet the specified requirements. This approach can be applied to validate answers to specific questions, drawing on both general knowledge and specialized information. By incorporating detailed instructions and examples, the Validator Agent can provide accurate and justified evaluations, offering clarity on why a response is considered correct or incorrect.

static String question = "How I can install Testcontainers Desktop?";
static String reference = """
        - Answer must indicate to download Testcontainers Desktop from https://testcontainers.com/desktop/
        - Answer must indicate to use brew to install Testcontainers Desktop in MacOS
        - Answer must be less than 5 sentences
        """;

@Test
void verifyStraightAgentFailsToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/straight?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("no");
}

@Test
void verifyRaggedAgentSucceedToAnswerHowToInstallTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

We can even test more complex responses where the LLM should suggest a better alternative to the user's question.

static String question = "How I can find the random port of a Testcontainer to connect to it?";
static String reference = """
        - Answer must not mention using getMappedPort() method to find the random port of a Testcontainer
        - Answer must mention that you don't need to find the random port of a Testcontainer to connect to it
        - Answer must indicate that you can use the Testcontainers Desktop app to configure fixed port
        - Answer must be less than 5 sentences
        """;

@Test
void verifyRaggedAgentSucceedToAnswerHowToDebugWithTCD() {
    String answer = restTemplate.getForObject("/chat/rag?question={question}", ChatController.ChatResponse.class, question).message();
    ValidatorAgent.ValidatorResponse validate = validatorAgent.validate(question, answer, reference);
    assertThat(validate.response()).isEqualTo("yes");
}

Validator Agent

The configuration for the Validator Agent doesn't differ from that of other agents. It is built using the LangChain4j AI Service and a list of specific instructions:

public interface ValidatorAgent {
    @SystemMessage("""
            ### Instructions
            You are a strict validator.
            You will be provided with a question, an answer, and a reference.
            Your task is to validate whether the answer is correct for the given question, based on the reference.
            Follow these instructions:
            - Respond only 'yes', 'no' or 'unsure' and always include the reason for your response
            - Respond with 'yes' if the answer is correct
            - Respond with 'no' if the answer is incorrect
            - If you are unsure, simply respond with 'unsure'
            - Respond with 'no' if the answer is not clear or concise
            - Respond with 'no' if the answer is not based on the reference

            Your response must be a json object with the following structure:
            {
                "response": "yes",
                "reason": "The answer is correct because it is based on the reference provided."
            }

            ### Example
            Question: Is Madrid the capital of Spain?
            Answer: No, it's Barcelona.
            Reference: The capital of Spain is Madrid
            ### Response:
            {
                "response": "no",
                "reason": "The answer is incorrect because the reference states that the capital of Spain is Madrid."
            }
            """)
    @UserMessage("""
            ### Question: {{question}}
            ### Answer: {{answer}}
            ### Reference: {{reference}}
            ###
            """)
    ValidatorResponse validate(@V("question") String question, @V("answer") String answer, @V("reference") String reference);

    record ValidatorResponse(String response, String reason) {}
}

As you can see, I'm using Few-Shot Prompting to guide the LLM on the expected responses. I also request a JSON format for responses to facilitate parsing them into objects, and I specify that the reason for the answer must be included, to better understand the basis of its verdict.
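The article does not show how this interface is turned into the working validatorAgent instance used in the tests. In plain LangChain4j (leaving aside any Spring Boot auto-configuration the project may use) it would typically be wired along these lines; the model settings below are assumptions carried over from the first snippet:

import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;

// Build the chat model that will act as the validator, then bind it to the interface.
var model = OpenAiChatModel.builder()
        .apiKey("demo")
        .modelName("gpt-3.5-turbo")
        .build();
ValidatorAgent validatorAgent = AiServices.create(ValidatorAgent.class, model);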
Conclusion

The evolution of GenAI applications brings with it the challenge of developing testing methods that can effectively evaluate the complexity and subtlety of responses generated by advanced artificial intelligences. The proposal to use an LLM as a Validator Agent represents a promising approach, paving the way towards a new era of software development and evaluation in the field of artificial intelligence. Over time, we hope to see more innovations that allow us to overcome the current challenges and maximize the potential of these transformative technologies.

Learn more

Check out the GenAI Stack to get started with adding AI to your apps.
Subscribe to the Docker Newsletter.
Get the latest release of Docker Desktop.
Vote on what's next! Check out our public roadmap.
Have questions? The Docker community is here to help.
New to Docker? Get started.

View the full article
-
Java thriving after 30 years. View the full article
-
Datadog today published a State of DevSecOps report that finds 90% of Java services running in a production environment are vulnerable to one or more critical or high severity vulnerabilities introduced by a third-party library, versus an average of 47% for alternative programming languages. Based on an analysis of IT environments being monitored using the […] View the full article
-
Now that you have Ubuntu 24.04 installed, the remaining task is ensuring that you install all the software you need, including Java. Installing Java on Ubuntu 24.04 makes it possible to develop and run Java applications, and as a Java programmer, you will inevitably need to install it. Java isn't pre-installed on Ubuntu, so you must know what steps are required to install it quickly before you start using it for your projects. Reading this post will arm you with a simple procedure to install Java on Ubuntu 24.04.

Java JDK vs. JRE

When installing Java on Ubuntu 24.04, a common concern is understanding the difference between the JDK and the JRE and knowing which to install. The Java Development Kit (JDK) comprises all the tools required to develop Java applications, including the Java compiler and debugger; anyone looking to create Java apps must have the JDK installed. The Java Runtime Environment (JRE) is what is required to run Java applications on a system. So, if you only want to run Java applications without building them, you only need the JRE, not the JDK. As a programmer, you will likely both develop and run Java applications, so install the JDK and the JRE for everything to work correctly.

How to Install Java on Ubuntu 24.04

Installing Java only requires an internet connection. Installing the JDK normally pulls in the default JRE as well, although that's not always the case, and if you want a specific version, you can specify it when running the install command. Here are the steps to follow to install Java quickly.

Step 1: Update Ubuntu's Repository

Updating the system repository ensures that the package you install is the latest stable version. The update command refreshes the sources list, so when you install Java, you will have an up-to-date source index for the latest version.

$ sudo apt update

Step 2: Install the Default JRE

Before installing Java, first verify that it isn't already installed on your Ubuntu 24.04 by checking its version with the following command.

$ java --version

If Java is installed, its version will be displayed in the output. Otherwise, you will get an output showing 'java' not found. In that case, install the default JRE using the command below.

$ sudo apt install default-jre

The installation time will depend on your network's speed.

Step 3: Install OpenJDK

After successfully installing the JRE, you are ready to install OpenJDK. Here, you can install the default JDK, which installs the currently available version, or you can install a specific JDK version depending on your project requirements. For instance, to install OpenJDK 21, we would execute the command as follows.

$ sudo apt install openjdk-21-jdk

During the installation process, you will be prompted to confirm a few things. Press 'y' and hit the Enter key to proceed with the installation. Once the installation is complete, you will have Java installed on your Ubuntu 24.04 and ready for use.

The last task is to verify that Java is installed. By checking the version, you will get an output showing which version is installed. If you want a different version, ensure you specify it in the previous commands, as your project requirements could differ.

$ java --version

In our case, the output shows that we've installed Java 21.0.3.
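As an optional extra check that is not part of the original steps, you can confirm the toolchain end to end with a trivial program. Save the following as Hello.java (the file name and message are arbitrary):

public class Hello {
    public static void main(String[] args) {
        System.out.println("Java is working on Ubuntu 24.04");
    }
}

Since Java 11, the single-file source launcher can compile and run it in one step:

$ java Hello.java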
Conclusion

Installing Java on Ubuntu 24.04 isn't a complicated process. However, you must know your project requirements to guide which version you install. To recap, installing Java requires you to first update the repository, then install the default JRE, and finally install the OpenJDK version you need. With that, you will have Java installed on Ubuntu 24.04, and this post shares more details on each step. View the full article
-
NCache Java Edition is a distributed caching solution that helps Java applications run faster, handle more users, and be more reliable. In a world where users expect apps to respond quickly and without problems, knowing how to use NCache Java Edition matters: it gives developers and businesses a way to deliver fast data access and a smooth user experience, which makes it an important part of building great apps. This article is written especially for beginners, to make the ideas and steps involved in adding NCache to your Java applications clear and easy to understand. Whether you've been developing for years or are new to caching, it will help you get a good start with NCache Java Edition. Let's start with a step-by-step process to set up a development workstation for NCache with the Java setup. View the full article
-
The technology industry has always been prone to hype. Today, the obsession with artificial intelligence (AI) is almost overwhelming, and as someone who has worked with Java for over 25 years I have to admit to a certain degree of envy. It is no surprise that as AI and machine learning continue to trend, computer science graduates are gravitating to the Python programming language, as it plays such an integral role in this world. That said, I think it would be a mistake to put all your eggs in the AI basket. If you are a graduate, or already working in the IT industry consulting your 'Mirror, mirror on the wall' and thinking about whether your future lies with Python or Java, then I would like to convince you that Java is indeed the greatest of them all. You can say I'm biased, but there are huge opportunities for programmers willing to turn their gaze to Java in 2024. The noise around AI may turn heads, but its appeal will rise and fall as the discipline matures. If you want more certainty from a career in IT, then Java is very much the incumbent and THE mature technology in the enterprise.

Reasons why Java skills are in demand

In the enterprise technology world, there is a commercial reality, underlined by HackerRank's analysis at the start of 2023 of the most in-demand skills for employers. In that ranking, Java clearly came out on top. Certainly an exciting time to be a Java developer. Why do I believe this? There are many good reasons why Java will continue to lead in demand among IT skills in 2024, but let me distil it down to the three most prominent ones.

1. Java is everywhere

We conducted our annual research on the State of Java, which confirmed it is being used in the majority of enterprise applications and IT infrastructure environments. This universality means there are significant opportunities for those with Java skills. A major issue we have found, though, is version complexity. Our analysis suggests that many companies are using more than one version of Java, from JDK 6 and 7 to JDK 8, 11 and 17. In an enterprise environment, downtime has real financial consequences, so knowing there are so many versions of Java in these set-ups creates complexity and, more worryingly, security vulnerabilities. We saw with the Log4j vulnerability how much potential damage can be caused if code is not up to date. Indeed, recent Veracode research suggests many companies still have not fully updated their Java code base to protect against future breaches.

The bottom line is that companies will need individuals with Java skills to ensure applications remain stable and secure, and there will be consistent demand for commercial support to provide the essential fixes, security patches and expertise to support these teams. At a time when organizations are striving to transform their enterprise IT environments, it is critical that the underlying Java applications and infrastructure are optimized. This creates opportunities for coders, because working with our customers we have also found that optimizing Java environments has a significant positive knock-on effect on cloud computing usage. Historically, we have found customers paying for more public cloud capacity than they use, but if they optimize their Java-based applications and infrastructure they can significantly improve throughput, which in turn reduces the number of nodes they require in the cloud.
We have also seen examples where organizations have had instances up and running on standby because they are worried about Java warm-up times, which also adds unnecessary costs. A good Java coder will be seen as a huge asset to an organization if they can deliver cost savings on public cloud expenditure and improve IT performance.

2. Java is alive and well

While Oracle may own the trademark, there is an incredibly robust open source community, which underlines the health of the technology. This community is contributing innovations and building out Java-based frameworks, libraries and tools, which ensures its continued relevance in the enterprise. More than 9 million developers use Java to create applications for everything from smart cards to enterprise servers and the cloud, and Java powers more than 4,500 branded products. Furthermore, there is a well-established standards structure which gives enterprises confidence that the technology will be developed in a consistent, reliable manner. You only have to look at the concerns about dependency on ChatGPT, highlighted during the recent uncertainty surrounding the OpenAI leadership team, to understand why enterprise users cannot become dependent on a technology that is not stable or is dominated by one player. That Java has so many contributors and established ways of operating means that those with Java skills can also be confident there will be opportunities for them long into the future.

3. Java is at an inflection point

At the start of 2023, Oracle announced it was changing its pricing policy for Java, which has upset many in the community. Based on our discussions with customers, we are seeing more and more of them looking at alternative non-Oracle Java distributions. Clearly, this increases demand for Java skills, but more importantly this disruption will lead to further innovation as other vendors look to provide solutions that will not just match but improve on what Oracle already offers, such as delivering more distributed architectures, faster startup/warmup times, more performant JVMs, and telemetry to monitor for vulnerabilities in underlying Java code.

These three factors secure the future of Java, as we will see new developments and applications of the technology which will maintain its vibrancy. In turn, this will create opportunities for individuals to carve out a name for themselves by contributing to future developments in the community, suggesting Java will be around long after all the hype around other technologies has died down.

We've listed the best JavaScript online courses. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
-
As our applications age, it takes more and more effort just to keep them secure and running smoothly. Developers managing the upgrades must spend time relearning the intricacies and nuances of breaking changes and performance optimizations others have already discovered in past upgrades. As a result, it's difficult to balance the focus between new features and essential maintenance work.

Today, we are introducing in preview Amazon Q Code Transformation. This new capability simplifies upgrading and modernizing existing application code using Amazon Q, a new type of assistant powered by generative artificial intelligence (AI). Amazon Q is specifically designed for work and can be tailored to your business. Amazon Q Code Transformation can perform Java application upgrades now, from version 8 and 11 to version 17, a Java Long-Term Support (LTS) release, and it will soon be able to transform Windows-based .NET Framework applications to cross-platform .NET.

Previously, developers could spend two to three days upgrading each application. Our internal testing shows that the transformation capability can upgrade an application in minutes compared to the days or weeks typically required for manual upgrades, freeing up time to focus on new business requirements. For example, an internal Amazon team of five people successfully upgraded one thousand production applications from Java 8 to 17 in 2 days. It took, on average, 10 minutes to upgrade applications, and the longest one took less than an hour.

Amazon Q Code Transformation automatically analyzes the existing code, generates a transformation plan, and completes the transformation tasks suggested by the plan. While doing so, it identifies and updates package dependencies and refactors deprecated and inefficient code components, switching to new language frameworks and incorporating security best practices. Once complete, you can review the transformed code, complete with build and test results, before accepting the changes. In this way, you can keep applications updated and supported in just a few steps, gain performance benefits, and remove vulnerabilities from using unsupported versions, freeing up time to focus on new business requirements. Let's see how this works in practice.

Upgrading a Java application from version 8 to 17

I am using IntelliJ IDEA in this walkthrough (the same is available for Visual Studio Code). To have Amazon Q Code Transformation in my IDE, I install the latest version of the AWS Toolkit for IntelliJ IDEA and sign in using the AWS IAM Identity Center credentials provided by my organization. Note that to access Amazon Q Code Transformation, the CodeWhisperer administrator needs to explicitly give access to Amazon Q features in the profile used by the organization.

I open an old project that I never had the time to update to a more recent version of Java. The project is using Apache Maven to manage the build. The project object model (POM) file (pom.xml), an XML representation of the project, is in the root directory. First, in the project settings, I check that the project is configured to use the correct SDK version (1.8 in this case). I choose AWS Toolkit on the left pane and then the Amazon Q + CodeWhisperer tab. In the Amazon Q (Preview) section, I choose Transform. This opens a dialog where I check that the correct Maven module is selected for the upgrade before proceeding with the transformation. I follow the progress in the Transformation Hub window.
The upgrade completes in a few minutes for my small application, while larger ones might take more than an hour to complete. The end-to-end application upgrade consists of three steps:

1. Identifying and analyzing the application – The code is copied to a managed environment in the cloud where the build process is set up based on the instructions in the repository. At this stage, the components to be upgraded are identified.
2. Creating a transformation plan – The code is analyzed to create a transformation plan that lists the steps that Amazon Q Code Transformation will take to upgrade the code, including updating dependencies, building the upgraded code, and then iteratively fixing any build errors encountered during the upgrade.
3. Code generation, build testing, and finalization – The transformation plan is followed iteratively to update existing code and configuration files, generate new files where needed, perform build validation using the tests provided with the code, and fix issues identified in failed builds.

After a few minutes, the transformation terminates successfully. From here, I can open the plan and a summary of the transformation. I choose View diff to see the proposed changes. In the Apply Patch dialog, I see a recap of the files that have been added, modified, or deleted. First, I select the pom.xml file and then choose Show Difference (the icon with the left/right arrows) to have a side-by-side view of the current code in the project and the proposed changes. For example, I see that the version of one of the dependencies (Project Lombok) has been increased for compatibility with the target Java version. In the Java file, the annotations used by the upgraded dependency have been updated. With the new version, @With has been promoted, and @Wither (which was experimental) deprecated. These changes are reflected in the import statements. There is also a summary file that I keep in the code repo to quickly look up the changes made to complete the upgrade.

I spend some time reviewing the files. Then, I choose OK to accept all changes. Now the patch has been successfully applied, and the proposed changes merged with the code. I commit changes to my repo and move on to focus on business-critical changes that have been waiting for the migration to be completed.
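To make the Lombok change described above concrete: the experimental @Wither annotation was promoted to @With in later Lombok releases, so the kind of diff the tool surfaces would look roughly like this. The class below is a hypothetical example, not code from the migrated project:

// Before the upgrade (older Lombok, experimental annotation):
import lombok.AllArgsConstructor;
import lombok.experimental.Wither;

@AllArgsConstructor
@Wither
public class OrderEvent {
    private final String id;
}

// After the upgrade (newer Lombok, promoted annotation):
import lombok.AllArgsConstructor;
import lombok.With;

@AllArgsConstructor
@With
public class OrderEvent {
    private final String id;
}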
Things to know

The preview of Amazon Q Code Transformation is available today for customers on the Amazon CodeWhisperer Professional Tier in the AWS Toolkit for IntelliJ IDEA and the AWS Toolkit for Visual Studio Code. To use Amazon Q Code Transformation, the CodeWhisperer administrator needs to give access to the profile used by the organization. There is no additional cost for using Amazon Q Code Transformation during the preview.

You can upgrade Java 8 and 11 applications that are built using Apache Maven to Java version 17. The project must have the POM file (pom.xml) in the root directory. We'll soon add the option to transform Windows-based .NET Framework applications to cross-platform .NET and help accelerate migrations to Linux.

Once a transformation job is complete, you can use a diff view to verify and accept the proposed changes. The final transformation summary provides details of the dependencies updated and code files changed by Amazon Q Code Transformation. It also provides details of any build failures encountered in the final build of the upgraded code that you can use to fix the issues and complete the upgrade.

Combining Amazon's long-term investments in automated reasoning and static code analysis with the power of generative AI, Amazon Q Code Transformation incorporates foundation models that we found to be essential for context-specific code transformations that often require updating a long tail of Java libraries with backward-incompatible changes. In addition to generative AI-powered code transformations built by AWS, Amazon Q Code Transformation uses parts of OpenRewrite to further accelerate Java upgrades for customers. At AWS, many of our services are built with open source components, and promoting the long-term sustainability of these communities is critical to us and our customers. That is why it's important for us to contribute back to communities like OpenRewrite, helping ensure the whole industry can continue to benefit from their innovations. AWS plans to contribute to OpenRewrite recipes and improvements developed as part of Amazon Q Code Transformation to open source.

"The ability for software to adapt at a much faster pace is one of the most fundamental advantages any business can have. That's why we're excited to see AWS using OpenRewrite, the open source automated code refactoring technology, as a component of their service," said Jonathan Schneider, CEO and Co-founder of Moderne (the sponsor of OpenRewrite). "We're happy to have AWS join the OpenRewrite community and look forward to their contributions to make it even easier to migrate frameworks, patch vulnerabilities, and update APIs."

Upgrade your Java applications now:
Amazon Q Code Transformation product page

Read more about Amazon Q:
Introducing Amazon Q, a new generative AI-powered assistant (preview)
Amazon Q brings generative AI-powered assistance to IT pros and developers (preview)
Improve developer productivity with generative-AI powered Amazon Q in Amazon CodeCatalyst (preview)
New generative AI features in Amazon Connect, including Amazon Q, facilitate improved contact center service
New Amazon Q in QuickSight uses generative AI assistance for quicker, easier data insights (preview)

— Danilo

View the full article
-
On Oct 17, 2023, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 21.0.1, 17.0.9, 11.0.21, and 8u392 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. View the full article
-
In the ever-evolving world of software development, data plays a central role. Handling and processing data efficiently is a paramount concern for developers. As one of the most widely used programming languages, Java acknowledges the significance of data-oriented programming with its latest enhancements in Java 21. Two significant Java Enhancement Proposals (JEPs) stand out: JEP 440 and JEP 441... View the full article
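The excerpt stops short of showing what these proposals look like in code; a minimal illustration of record patterns (JEP 440) combined with pattern matching for switch (JEP 441), both finalized in Java 21, might look like this:

sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

class Areas {
    // The switch deconstructs each record directly; because Shape is sealed,
    // the compiler checks exhaustiveness, so no default branch is needed.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(double radius) -> Math.PI * radius * radius;
            case Square(double side) -> side * side;
        };
    }
}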
-
Dropping a MongoDB collection from Java is straightforward, and this video walks through the steps so you can get the job done quickly, without tedious and time-consuming manual work. It is part of a video tutorial series that gives you a deep understanding of Java integration with MongoDB. Integrating Java with MongoDB allows developers to leverage the power of MongoDB, a popular NoSQL database, in their Java applications: MongoDB is designed to handle large amounts of unstructured data and provides scalability and flexibility. View the full article
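The post links to a video rather than showing code; for reference, dropping a collection with the modern MongoDB Java driver is a single call on MongoCollection. The connection string, database, and collection names below are placeholders:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class DropCollectionExample {
    public static void main(String[] args) {
        // try-with-resources closes the client and its connection pool when done.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase database = client.getDatabase("testdb");
            database.getCollection("orders").drop(); // removes the collection and all of its documents
        }
    }
}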
-
Best Practices for Concurrency in Java
TechRepublic posted a topic in Development & Programming
Learn about the best practices for concurrency in Java to ensure your multi-threaded applications are efficient, synchronized, and error-free. View the full article
-
Java is one of the most popular programming languages, and we're fresh off the new LTS release, Java SE 21. Although it is not always widely known, the Java platform has been used to implement many of the tools and components found in modern IT systems. Let's discuss some of the most popular ones... View the full article
-
Tagged with: programming, tools (and 1 more)
-
Java recently marked 30 years since its inception and remains one of the world's most widely used programming languages. However, some have argued Java risks falling behind newer languages like Python and JavaScript along with modern frameworks. I sat down with Georges Saab, Senior Vice President of Development for Oracle's Java Platform Group, and Chad Arimura, Vice President of Developer Relations, at Oracle CloudWorld, to get an insider's perspective on how Java is evolving to stay relevant for the next generation of cloud-native development. View the full article
-
Oracle made generally available a Java 21 edition and provided early access to more than 15 proposed enhancements to the JDK. View the full article
-
The Collectors "groupingBy()" method is utilized to perform grouping operations on the data available in a stream and collect the results, using a "Map" to store them. It helps organize and aggregate data based on particular criteria, and it is widely used for reporting, aggregation, and data analysis. This article demonstrates the procedure for using the Collectors groupingBy() method in Java.

How to Use the Collectors groupingBy() Method in Java?

The Collectors groupingBy() method enables users to define complex grouping logic. It integrates seamlessly with the Stream API, enabling efficient and concise stream processing operations. The collector returned by "groupingBy()" maps each grouping key to the list of elements that share that key.

Syntax

The syntax of the Collectors groupingBy() method is stated below:

public static <T, K> Collector<T, ?, Map<K, List<T>>> groupingBy(Function<? super T, ? extends K> classifier);

Let us break down the syntax and analyze each part:

- "public static" indicates that the method is accessible from anywhere without requiring an instance of the class.
- The returned "Collector" consumes elements of type "T" (the stream element type); "?" represents an unspecified intermediate accumulation type.
- "groupingBy(...)" takes a classifier function that extracts the grouping key "K" from each element.
- The result type "Map<K, List<T>>" maps each key to the list of elements that produced it.

Let us consider an example for a better understanding of the Collectors groupingBy() method in Java.

Example 1: Collectors groupingBy() Method

Visit the below code block to get the usage and implementation of the Collectors groupingBy() method in Java:

import java.util.*;
import java.util.stream.Collectors;

public class GroupingByEx {
    public static void main(String[] args) {
        List<Witch> actor = Arrays.asList(
                new Witch("Elizabeth", 25),
                new Witch("Roman", 33),
                new Witch("Olsen", 25),
                new Witch("Agent", 33)
        );

        Map<Integer, List<Witch>> actorsAge = actor.stream().collect(Collectors.groupingBy(Witch::getAge));

        for (Map.Entry<Integer, List<Witch>> entry : actorsAge.entrySet()) {
            int age = entry.getKey();
            List<Witch> group = entry.getValue();
            System.out.println("Age: " + age + " - " + group);
        }
    }
}

Explanation of the above code:

- First, create a List named "actor" of "Witch" objects and pass in some sample data.
- Next, the "stream()" method converts the list data into a stream.
- Now, apply the "Collectors.groupingBy()" method to the stream and pass "Witch::getAge" as the classifier function. This creates a map from each age to the list of objects with that age; if more than one object has the same age, those objects end up together in the same list rather than in separate entries.
- Store the result of this computation in a variable named "actorsAge".
- After that, use a "for" loop to traverse the collected data, iterating over the set of entries returned by the map's "entrySet()" method.
- Finally, retrieve each key (the age) and the corresponding value (the list of objects) using the "getKey()" and "getValue()" methods, store them in variables, and display them.
Now, create the Witch class to provide the data required by the "main()" method:

class Witch {
    private String identity;
    private int age;

    public Witch(String identity, int age) {
        this.identity = identity;
        this.age = age;
    }

    public int getAge() {
        return age;
    }

    @Override
    public String toString() {
        return identity;
    }
}

Explanation of the above code:

- First, declare the "identity" and "age" fields inside the class named "Witch".
- Then, the constructor assigns the values it receives to these fields.
- Next, the "getAge()" method returns the "age" field, which is used as the grouping key.
- After that, the "toString()" method is overridden (note the "@Override" annotation) so that each object is printed as its "identity" string.

After compiling and running the program, the output (shown as a screenshot in the original article) displays the data grouped by age as map entries.

Conclusion

The "groupingBy()" method accepts a classifier function as a parameter, and this function determines the grouping key for each object in a stream. Objects that produce the same grouping key are grouped together, and the result is stored in the resulting map. This guide has explained the usage of the Collectors groupingBy() method in Java. View the full article
-
While dealing with mathematical calculations in Java, there can be instances where values need to be multiplied to produce a desired outcome, for instance multiplying specified or user-defined values of various data types. In such instances, multiplying two numbers in Java is a convenient way to compute the result. This blog demonstrates the approaches to multiplying two numbers in Java.

How to Multiply Two Numbers in Java?

The arithmetic operator "*" is used to multiply two numbers in Java. This operator is placed between the operands and returns the corresponding product.

Example 1: Multiply Two Integers in Java

In this example, two specified integers are multiplied and the result is returned:

int num1 = 3;
int num2 = 2;
int result = num1 * num2;
System.out.println("The multiplication of the numbers is: " + result);

In the above lines of code, initialize the two integer values, apply the arithmetic operator "*" to multiply them, and display the resulting value.

Output

In the output, it can be seen that the corresponding product is returned.

Example 2: Multiply Two Floats in Java

In this program, the arithmetic operator "*" is utilized to multiply two specified floating-point values:

double num1 = 2.5;
double num2 = 3.5;
double result = num1 * num2;
System.out.println("The multiplication of the numbers is: " + result);

In the above code snippet, initialize the two floating-point values with the type "double", multiply them, and display the resulting value on the console.

Output

Example 3: Multiply User-Defined Numbers in Java

The "nextInt()" method scans the next input token as an integer. In the below example, two numbers entered by the user are multiplied. First, make sure to include the below-provided library before heading to the example:

import java.util.Scanner;

Now, add the following code in the "main()" method:

int num1, num2, result;
Scanner input = new Scanner(System.in);
System.out.println("Enter the first number: ");
num1 = input.nextInt();
System.out.println("Enter the second number: ");
num2 = input.nextInt();
result = num1 * num2;
System.out.println("The multiplication of the numbers is: " + result);

In the above code block, apply the following steps:

- First, create a "Scanner" object using the "new" keyword and the "Scanner()" constructor, passing "System.in" so that input is read from the user.
- Now, read two numbers from the user; the associated "nextInt()" calls ensure that the input is parsed as integers.
- Lastly, multiply the input numbers via the arithmetic operator "*" and display the computed product.

Output

From this outcome, it can be seen that the user-defined numbers are evaluated appropriately.

Conclusion

The arithmetic operator "*" is utilized to multiply two numbers in Java. These numbers can be integers, floats, or user-input numbers. The operator is applied by placing it between the operands, and it returns their product. This blog discussed the approaches to multiplying two numbers in Java. View the full article
-
Grateful for any tips, guys. I am learning DevOps from a YouTube course. The steps from the course use Jenkins:
- Create a job to download the GitHub repo.
- Build with the Maven package command.
In the course, the file is built with a .war extension, but for me it is always a .jar file. How can I make the build produce a .war file?
-
Dynatrace has extended the Application Security Module it provides for its observability platform to protect against vulnerabilities in runtime environments, including the Java Virtual Machine (JVM), Node.js runtime and .NET CLR. In addition, Dynatrace has extended its support to applications built using the Go programming language. The Dynatrace Application Security Module leverages existing Dynatrace tracing […] View the full article
-
The Eclipse Foundation this week opened an Adoptium Marketplace through which DevOps teams can access Java binaries based on the OpenJDK specification. OpenJDK was created based on the Java standard edition (SE) of the programming language and virtual machine. Mike Milinkovich, executive director of the Eclipse Foundation, said the goal is to make it easier […] The post Eclipse Foundation Opens Marketplace for OpenJDK Binaries appeared first on DevOps.com. View the full article
-
Join us for episode 3 in our series DevOps for Java Shops! In this episode Brian Benz walks us through how to deploy a Java application to Azure App Service using GitHub Actions and continuous delivery! The post Deploying Java Applications to Azure using Continuous Delivery appeared first on Azure DevOps Blog. View the full article
-
Azure loves Java: bring your favorite tools and frameworks to Azure! In this 3-part series of our DevOps for Java Shops, Brian Benz stops by to highlight the easiest ways for Java developers to work with their IT organizations and partners to deliver their code to the cloud, including the best ways to reliably make updates and maintain production cloud code using built-in CI/CD tools from GitHub and Microsoft. You can find more information, step-by-step tutorials, and sample source code at https://aka.ms/devopsforjavashops. The post DevOps for Java on Azure appeared first on Azure DevOps Blog. View the full article
-
Forum Statistics
63.6k Total Topics
61.7k Total Posts