Java Optimised is a project focused on enhancing Java/Spring Boot workflows and deployments by improving build times, reducing image sizes, and lowering memory and CPU usage. It demonstrates optimisation techniques across build workflows, deployments, and runtime performance, using a basic User Service application—a CRUD API built with Spring Boot and Spring Data JPA—as a test service for evaluating these strategies.
The project focuses on two primary areas of optimisation:
- Build Workflow Optimisations
- Runtime Performance Optimisations
Each optimisation technique is explored in detail, with links to corresponding branches where the implementations can be found.
Efficient build workflows reduce development cycle times and improve deployment efficiency. The following optimisations target build processes:
Multi-Stage Builds (user-service example)
Description: Multi-stage builds split the build and runtime environments into separate Docker layers, reducing the final image size and simplifying deployment processes. By using multi-stage builds, you can exclude unnecessary tools from the final image and reduce security risks.
Example Dockerfile snippet:
```dockerfile
# Stage 1: Build
FROM maven:3.9.5-eclipse-temurin-17 AS builder
WORKDIR /app
COPY . .
RUN mvn clean package -DskipTests

# Stage 2: Runtime
FROM amazoncorretto:17-alpine
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```
Key Benefits:
- Smaller image size (see size comparisons between Single-Stage Builds and Multi-Stage Builds below)
- Reduced build times
- Improved security by minimising the attack surface
- Simpler CI/CD Pipelines
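Multi-stage builds work best when the build context itself stays small. A `.dockerignore` along these lines (the entries are illustrative — adjust to your project layout) keeps build output, VCS history, and IDE files out of the context sent to the Docker daemon:

```
# .dockerignore — keep the build context lean
target/
build/
.git/
.idea/
*.iml
```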
Gradle & Docker Caching (user-service example)
Description: This technique leverages caching mechanisms for Gradle and Docker layers to avoid redundant build steps and speed up subsequent builds. Caching is essential for large projects, where rebuilding the same layers repeatedly can waste time and resources.
Key Benefits:
- Faster build times by avoiding redundant steps
- Reduced network usage during builds
- Optimised CI/CD pipelines for faster deployments
Example Gradle Cache Workflow:
```yaml
# ...
# Cache Gradle dependencies
- name: Cache Gradle
  uses: actions/cache@v4
  with:
    path: |
      ~/.gradle/caches
      ~/.gradle/wrapper
    key: gradle-${{ runner.os }}-${{ hashFiles('**/*.gradle*', '**/gradle-wrapper.properties') }}
    restore-keys: |
      gradle-${{ runner.os }}-
      gradle-
# ...
```
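As an alternative to a hand-rolled cache step, `actions/setup-java` can manage the Gradle cache itself. A sketch of the equivalent step (action and JDK versions here are assumptions):

```yaml
- name: Set up JDK 17
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '17'
    cache: 'gradle'
```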
Example Docker Layer Cache Workflow:
```yaml
# ...
# Cache Docker layers
- name: Cache Docker layers
  uses: actions/cache@v4
  with:
    path: /tmp/.buildx-cache
    key: docker-cache-${{ github.ref_name }}-${{ hashFiles('Dockerfile') }}
    restore-keys: |
      docker-cache-${{ github.ref_name }}-
      docker-cache-
# ...
```
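Caching `/tmp/.buildx-cache` only pays off if the build step actually reads and writes that directory. With `docker/build-push-action` this is wired up via `cache-from`/`cache-to`; a sketch (image tag is a placeholder, and the cache is rotated so it does not grow unbounded):

```yaml
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: example/user-service:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max

# Replace the old cache with the freshly written one
- name: Move cache
  run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```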
Results:
Caching significantly reduced the build and push time for a Java source code change (with no modifications to Gradle dependencies or the Dockerfile).
| Metric | No Caching | Caching Enabled |
|---|---|---|
| Build Time | 57 s | 4 s |
| Push Time | 28 s | 2 s |
Optimising runtime performance can significantly improve application scalability and reduce operational costs. The following techniques focus on runtime improvements:
Lightweight Runtime Image (user-service example)
Description: Using lightweight base images, such as Alpine or Distroless, reduces the final image size and enhances security by limiting the number of installed packages. This can improve application performance and reduce cloud infrastructure costs.
Key Benefits:
- Smaller image sizes
- Enhanced security
- Reduced resource consumption
Here are commonly used Java 17 base images for various requirements. Image sizes vary significantly, affecting resource usage and costs, so choosing an optimised image can help lower cloud expenses.
| Image | Base OS | Size | Use Case | Pros | Cons |
|---|---|---|---|---|---|
| eclipse-temurin:17-jdk | Debian/Ubuntu | ~300 MB | General purpose | Well-maintained, widely used, secure | Larger image size |
| amazoncorretto:17 | Amazon Linux 2 | ~200 MB | AWS deployments | Optimised for AWS, long-term support by Amazon | Tied to AWS ecosystem |
| openjdk:17-jdk | Debian/Ubuntu | ~300 MB | General purpose | Official OpenJDK build, widely compatible | Larger size compared to other JDKs |
| eclipse-temurin:17-jdk-alpine | Alpine Linux | ~80 MB | Minimal image size | Very small image, suitable for lightweight apps | Potential compatibility issues |
| ghcr.io/graalvm/graalvm-ce:java17 | Oracle Linux | ~350 MB | Native builds and performance | Supports native compilation with native-image | Larger size and complex native builds |
| Runtime Image | Base OS | Size | Use Case | Pros | Cons |
|---|---|---|---|---|---|
| eclipse-temurin:17-jre | Debian/Ubuntu | ~70 MB | General purpose runtime | Well-maintained, widely used, secure | Slightly larger than Alpine-based JRE |
| amazoncorretto:17-al2-jre | Amazon Linux 2 | ~40 MB | AWS deployments | Optimised for AWS, secure | AWS-specific |
| eclipse-temurin:17-jre-alpine | Alpine Linux | ~30 MB | Minimal image size | Very small, suitable for lightweight apps | Potential glibc compatibility issues |
| cgr.dev/chainguard/jre:17 | Distroless | ~50 MB | Security-focused runtime | Minimal attack surface, no shell | Limited debugging options |
| gcr.io/distroless/java17-debian11 | Distroless | ~50 MB | Secure production deployments | Reduced attack surface, no package manager | No shell or package manager |
| ghcr.io/graalvm/graalvm-ce:java17 | Oracle Linux | ~80 MB | Native builds | Use native-image to produce a native executable | Native builds require more setup |
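Applying the tables above to the earlier multi-stage Dockerfile, swapping the runtime stage from a full JDK to a JRE-based Alpine image is often the simplest win. A sketch of the runtime stage (image tag as listed above; the non-root user step is an optional hardening assumption):

```dockerfile
# Stage 2: Runtime on a minimal JRE base (~30 MB vs ~300 MB for a full JDK)
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
# Run as a non-root user to shrink the attack surface further
RUN addgroup -S app && adduser -S app -G app
USER app
ENTRYPOINT ["java", "-jar", "app.jar"]
```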
GraalVM Native Image (user-service example)
Description: GraalVM enables Java applications to be compiled into native executables, resulting in faster startup times and lower memory usage. This is particularly beneficial for serverless environments and microservices where cold start times are critical.
Key Benefits:
- Faster startup times
- Reduced resource usage
- Optimised for microservices and serverless functions
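With Spring Boot 3 and the GraalVM Native Build Tools Gradle plugin applied (plugin setup not shown — the commands assume it is configured), a native executable or a native container image can be produced roughly as follows:

```
# Compile the application to a native executable (requires a GraalVM JDK)
./gradlew nativeCompile

# Or build a native container image via Cloud Native Buildpacks
./gradlew bootBuildImage
```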
Caveats of Using Native Images:
While native images (e.g., built with GraalVM Native Image) offer significant advantages in startup time and memory usage, they also come with several caveats and potential drawbacks that should be weighed carefully before adopting them in production.
Summary of Trade-offs:
| Benefit | Caveat |
|---|---|
| Faster startup time | Longer build times |
| Lower memory usage | Increased build complexity |
| Reduced cold start issues | Limited reflection support |
| Cloud-native friendly | Compatibility issues with some libraries |
| No JVM required at runtime | Platform-specific builds |
Native images are best suited for cloud-native, serverless, and microservices architectures where startup time and memory consumption are critical. However, for long-running, CPU-intensive applications, traditional JVM builds may be more efficient.
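The "limited reflection support" caveat usually surfaces as missing-class or missing-member errors at run time. GraalVM can be told about reflectively accessed types via a `reflect-config.json` (the class name below is a hypothetical example):

```json
[
  {
    "name": "com.example.userservice.User",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```

With Spring Boot 3, the AOT engine generates most of these hints automatically, so manual entries are typically only needed for libraries the AOT processing cannot see through.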
Results:
Since JVM metrics are not available with native builds, the statistics were gathered using docker stats combined with the psrecord tool, while a simple load test drove CPU and memory consumption. The graphs below show the comparison between the standard JVM and GraalVM Native Image.
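For reference, the kind of psrecord invocation used for such measurements looks like this (the PID, duration, and file names are illustrative):

```
# Sample CPU and memory of the Java process once per second for 2 minutes
psrecord 12345 --interval 1 --duration 120 --plot usage.png --log usage.txt
```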
*(Side-by-side graphs: CPU and memory profiles during the load test — Standard Image on the left, Native Image on the right.)*
This table highlights the differences between Standard JVM and GraalVM Native Image builds.
| Metric | Standard JVM | GraalVM Native Image |
|---|---|---|
| Max CPU Usage (%) | 159.7 | 7.0 |
| Average CPU Usage (%) | 19.63 | 1.58 |
| Max Real Memory (MB) | 348.42 | 214.16 |
| Average Real Memory (MB) | 341.92 | 203.98 |
| Max Virtual Memory (MB) | 8924.07 | 1704.35 |
| Average Virtual Memory (MB) | 8916.54 | 1695.44 |
| Startup Time (s) | 12.24 | 0.067 |
Key Takeaway:
The comparison highlights a dramatic reduction in resource usage when switching from Standard JVM to GraalVM Native Image:
- 🔥 **CPU Usage**: max CPU drops from 159.7% to just 7%, with average usage down by over 90%.
- 💾 **Memory Consumption**: real memory is reduced by over 35%, and virtual memory drops from ~8.9 GB to ~1.7 GB, a reduction of more than 80%.
- ⚡ **Startup Time**: improves dramatically from 12.24 seconds to just 0.067 seconds, making GraalVM ideal for serverless and microservices.
These improvements can lead to:
- Lower cloud infrastructure costs due to reduced CPU and memory usage.
- Improved scalability with faster startup times, reducing cold start penalties in pay-per-use serverless models.
- Better user experience with faster response times.
Switching to GraalVM Native Image can be a game-changer for high-traffic microservices or serverless applications, offering both performance gains and significant cost savings at scale.
For a typical AWS ECS task, a Java application using a Standard JVM may require 2 vCPU and 4 GB memory to handle a given workload. By optimising with GraalVM Native Image, the same application can run efficiently with 0.5 vCPU and 1 GB memory.
| Configuration | Standard JVM | GraalVM Native Image |
|---|---|---|
| vCPU | 2 | 0.5 |
| Memory (GB) | 4 | 1 |
| Estimated Cost/Month (per task) | ~$100 | ~$25 |
| Total Cost/Month (50 tasks) | ~$5,000 | ~$1,250 |
Assuming a fleet of 50 ECS tasks, switching to GraalVM could result in monthly savings of ~$3,750, or ~$45,000 annually.
The techniques covered in this repository — multi-stage builds, lightweight runtime images, Gradle and Docker caching, and GraalVM native images — are among the most impactful ways to optimise Java/Spring Boot applications. These approaches significantly improve performance by reducing build times, lowering memory and CPU usage, and minimising image sizes.
However, there are several additional techniques that could further enhance your Java applications. These methods were not covered in this repository but are worth considering based on your project's specific requirements:
- **JVM Tuning**
  - Customise garbage collection (e.g., G1GC, ZGC)
  - Optimise heap size and thread usage based on application load
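As a starting point, container-aware flags such as these are commonly tuned (the values are illustrative, not recommendations):

```
java -XX:+UseG1GC \
     -XX:MaxRAMPercentage=75.0 \
     -XX:ActiveProcessorCount=2 \
     -jar app.jar
```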
- **Lazy Bean Initialization**
  - Reduce startup time by only initializing beans when they are first required
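Spring Boot exposes this as a single property (a global switch; individual beans can opt back in to eager creation with `@Lazy(false)`):

```properties
# application.properties — create beans on first use instead of at startup
spring.main.lazy-initialization=true
```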
- **AOT Compilation (Spring Native)**
  - Use Spring's Ahead-of-Time (AOT) compilation for even faster startup and a lower memory footprint
- **Quarkus or Micronaut Frameworks**
  - Explore alternative JVM frameworks designed for cloud-native environments
- **JIT Profiling**
  - Use tools like JFR (Java Flight Recorder) and Mission Control to profile and optimise runtime performance
- **Container Optimisation**
  - Limit container resource usage with Kubernetes resource requests/limits
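In Kubernetes terms, that means setting requests and limits on the container spec (the values below are placeholders):

```yaml
resources:
  requests:
    cpu: "250m"
    memory: "512Mi"
  limits:
    cpu: "500m"
    memory: "1Gi"
```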
- **Database Connection Pooling**
  - Fine-tune HikariCP or other connection pool configurations for better performance
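HikariCP is Spring Boot's default pool; typical knobs look like this (the values are illustrative and should be sized to your database and load):

```yaml
spring:
  datasource:
    hikari:
      maximum-pool-size: 10
      minimum-idle: 2
      connection-timeout: 30000   # ms to wait for a free connection
      idle-timeout: 600000        # ms before an idle connection is retired
```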
- **Caching Strategies**
  - Implement appropriate caching layers (e.g., Redis, Caffeine) to reduce repetitive calls to slow services or databases
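As a minimal illustration of the idea (a real deployment would reach for Caffeine or Redis as mentioned above, which add eviction and expiry), an in-process memoising cache can be sketched with nothing but the JDK:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Tiny in-process memoising cache: each key's value is computed once,
// then served from memory on subsequent lookups.
public class MemoCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public MemoCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent runs the loader only on a cache miss
        return store.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        MemoCache<String, Integer> lengths = new MemoCache<>(String::length);
        System.out.println(lengths.get("user-service")); // computed
        System.out.println(lengths.get("user-service")); // served from cache
    }
}
```

Note that, unlike Caffeine, this sketch never evicts entries, so it only suits small, bounded key spaces.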
- **Reactive Programming**
  - Consider frameworks like Project Reactor to optimise resource usage for high-throughput, non-blocking applications
- **Security Hardening and Scanning**
  - Use tools to scan for vulnerabilities and harden Docker images and dependencies
These additional techniques can be layered on top of the existing optimisations in this project to achieve even greater performance gains. Each of them addresses different aspects of Java application performance and should be considered as part of a holistic optimisation strategy.