The Latest in OpenJDK and JCP Expert Group: Insights with Simon Ritter

News Room
Published 22 December 2025 (last updated 22 December 2025, 6:17 AM)

Transcript

Michael Redlich: Hello everybody. This is Michael Redlich. I’m the lead Java editor at InfoQ. I’m very happy to have with me Simon Ritter, who is the deputy CTO for Azul. He’s also a member of the JDK 26 Expert Group, along with Iris Clark, Stephan Herrmann and Christoph Langer. And since I report on this all the time, I’ve noticed you’ve been on the expert group for a number of the JDK releases.

Simon Ritter: Yes, I actually started at JDK 9, so I’ve been doing it for a few years now.

Michael Redlich: Oh, wow. Okay.

Simon Ritter: As you said, I’m the deputy CTO at Azul. I’ve been doing Java right from the very beginning. I joined Sun Microsystems way back in February of 1996, about the same time that JDK 1.0 came out, and I followed Java all the way through the Sun years until Sun was acquired by Oracle. I spent another five years at Oracle doing Java, and then for the last 10 years I’ve been at Azul.

Serving on the JCP Expert Group [01:31]

Michael Redlich: One of the things that dawned on me is your experience being on the expert group. So what’s it like day-to-day? What are the challenges? What’s the time commitment? I think that might be something to let our listeners know.

Simon Ritter: Yes. It’s interesting because the way that Java is developed has changed. So if we go back to pre-JDK 9, a lot of the work that was done in terms of developing new features and so on was done through the JCP, under JSRs, and we had what we called component JSRs and then an umbrella JSR, which was the one that actually covered the particular version of Java. And we moved away from that. So what we do now is we have an umbrella JSR, which includes all of the things which relate to the Java SE specification, and that’s the Java Language Specification, the Java Virtual Machine Specification and the definition of all the core class libraries. But OpenJDK, the open-source project, is really where a lot of the ideas and the individual features get developed.

So we have these things called JEPs now, JDK Enhancement Proposals, and those are a lot smaller, finer grained than a JSR. And what happens there is that the JEPs are the things that actually drive the features that get put into a version of Java, and then that gets fed into the Java SE specification. So the work of the expert group is a lot less than it used to be. What we really do now is evaluate what’s coming in, in terms of those JEPs and what gets fed into the Java SE specification, make sure that it fits with what needs to be done, and then vote on the Java SE specification once it’s finalized. So the amount of commitment in terms of time and effort is actually quite small, a lot smaller than you might think, because the majority of the work happens in the OpenJDK project.

Generational Shenandoah [03:26]

Michael Redlich: Oh, awesome. Just curious as far as the challenges: once Rampdown Phase One hits, the feature set is frozen, and there was one instance, I think it was Generational Shenandoah, that was actually pulled at the last minute from JDK 21. And obviously I think there were good reasons for it, but is that something that you and the rest of the expert group were involved in, that decision-making, or making an exception to that rule?

Simon Ritter: No. Generational Shenandoah is actually an implementation detail of the JVM, so it doesn’t come under the purview of the expert group. The expert group, as I say, in terms of what we look at, would be the Virtual Machine Specification. And if you look at the VM Spec, it doesn’t even say you literally have to have a garbage collector, and it certainly doesn’t specify how a garbage collector should work, which is why we see things like Generational Shenandoah. We’ve seen ZGC. We’ve seen G1 and obviously all the other ones before that. So those are implementation specific. They don’t affect the specification.

It’s an interesting point, as you said, that at the last minute that got pulled from JDK 21, and that really demonstrates how well the new way of doing the releases of Java works, because we switched at, I suppose, JDK 9 or 10. So 10 was the first one where we moved to a six-month release cycle, and that means that we’re on a fixed release cycle in terms of timing, not delaying the release based on features that might not be ready. Now, delaying is how it worked with JDK 9 and earlier. JDK 9 got delayed because of the modularity, and there were some discussions about that, and that’s where the expert group did get involved. They actually delayed JDK 9 because of the difficulties they were experiencing with making sure that all worked properly.

So with the six-month release cadence, what we’re now seeing is that if something isn’t quite ready, in the case of Shenandoah, they may just pull it because they know that it’s only going to be six months until the next release. So anything that would require a couple more weeks of work, it’s not going to be two or three years before it can get in. It’s another six months, so it isn’t too much of an issue to delay it into the next release.

Six-Month Release Cycle [05:43]

Michael Redlich: Yes, that’s one of the definite benefits of the six-month release cycle now. I was talking to some of the Oracle folks at JavaOne earlier this year, and they said it’s easier for them, just like you described, because in the past they would try to get a feature in because it would be two to three years later if they didn’t get it in for that particular release.

Simon Ritter: Yes. Another point about that, and one of the things that we’ve really seen and I think is a huge benefit from switching to this time-based release cadence, is that we have the ability to include preview features and incubator modules. We couldn’t do that before because of the length of time between releases. So if you had introduced something in, let’s say, JDK 7 as a preview feature, and you had to wait until JDK 8 before you could either revise it, actually make it a full feature, or even remove it, then that’s just way too long. But with this six months between releases, it has proved to be very, very powerful in terms of allowing the inclusion of features which are complete, so it’s not a beta test or anything like that, but which still require some feedback in terms of how those things should be used. And then there’s the opportunity to make changes before they actually get formalized in the specification and set in stone.

So it’s much harder to change them at that point. And we’ve even seen in one case where we actually removed a feature. So the String Templates was a great example because they looked at that and they said, “Actually, it’s not really going to work the way we thought it was, and we need to have a complete rethink on this“. And so they said, “Right, we’re actually going to remove it completely“. It was a preview feature, so that didn’t impact backwards compatibility and things like that. So that, again, a very useful way of doing things.

Michael Redlich: Yes, that was quite surprising to me when that happened, last year or year before, because I hadn’t really paid attention that it got removed, because there were blog posts and videos on that. But one of my co-directors at the Garden State JUG said, “Hey Mike, what happened to string templates?” I’m like, “Well, what do you mean?” So I did some digging and I learned what happened, but yes, the six-month release cycle is beneficial in many, many ways.

JDK Flight Recorder [07:47]

Michael Redlich: So turning to a different topic, JDK 25 of course was just released a month and a half ago or so, and there were, what, 18 new features? Three of them were related to JDK Flight Recorder, and so I was not necessarily surprised, but I noticed, wow, there’s three JEPs related to this. I had a chance to experiment with one of them. I think it was the JFR Method Timing & Tracing.

Simon Ritter: Yes.

Michael Redlich: So can you tell us more about the focus on JFR these days?

Simon Ritter: Yes. I mean, and this is where it gets interesting, just briefly circling back to the expert group and so on. Java Flight Recorder is not part of the specification, so it comes completely under the JDK. So it’s just interesting that that is separate from that. But you’re right, we have had three JEPs included in this particular release. Obviously there’s been some work going on in the background and they’ve been looking at how to improve Flight Recorder, and that’s how we’ve seen these three particular things come about. And it comes back to the idea that we’re always looking for better observability of applications because that allows us to see what’s going on. And as we all know, as being developers, there are oftentimes when the application doesn’t behave in the way you expect it to.

Certainly from a performance point of view, you might find there’s bottlenecks and things like that, and it’s how do you analyze that? How do you find out where the actual bottleneck is and what’s causing that problem? And that can be very difficult if you don’t have the right kind of tooling. And Java Flight Recorder is a fantastic tool because what we’re doing really is taking the information, most of the information that the JVM already collects in terms of class loading. It maintains all the information about what’s going on in the heap. So if we can expose that information outside of the JVM and make it usable, and that’s where Mission Control comes in because that gives you a graphical representation of the data that you’re collecting through Flight Recorder, then it allows you to start looking at things and saying, “Oh, okay, so I’m allocating too many objects here. That’s causing the excessive garbage collection and things like that“.

So it’s all about observability, but of course one of the problems, and you mentioned the Method Timing & Tracing, one of the problems that we see with observability is Heisenberg’s Uncertainty Principle. Now, I studied physics at university, and Heisenberg’s Uncertainty Principle is all about quantum mechanics, but it actually applies really well in computer science as well. The act of observing something changes what you’re observing, and method tracing is a great example of that, because I remember back when I was at Sun, we did a lot of stuff with DTrace and applying DTrace to Java to see what was going on.

If you start actually looking at every single method in your application, you’ll find that the overhead of intercepting the method entry and the method exit, recording information about what parameters are passed, what time it is, and then on the way out, also if there’s a return value and what time you exit it, that then starts skewing the performance of your application, because you actually start applying too much instrumentation to your application and then it starts slowing down. And you can miss things because of that. So the idea of the Method Timing & Tracing is being selective. So you say, “Okay, well, I don’t want to look at every single method I’m calling. I only want to look at certain ones which are of interest to me because of the application code I’m writing“, and that can give you that ability to then to focus on things and look at what’s going on.
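The selective instrumentation Simon describes is also visible in the `jdk.jfr` event API that has shipped since JDK 11: instead of tracing every method, you attach an event to just the code path you care about. A minimal sketch (the event and class names here are made up for illustration):

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;
import jdk.jfr.Recording;

public class JfrSelective {
    // A hand-rolled event for the one code path we care about,
    // instead of instrumenting every method in the application.
    @Name("demo.Checkout")
    @Label("Checkout Timing")
    static class CheckoutEvent extends Event {
        @Label("Item count")
        int items;
    }

    public static void main(String[] args) throws Exception {
        try (Recording recording = new Recording()) {
            recording.start();

            CheckoutEvent event = new CheckoutEvent();
            event.begin();                // timestamp on entry
            int items = doCheckout();     // the one method we want timed
            event.items = items;
            event.end();                  // timestamp on exit
            event.commit();               // hand the event to JFR

            recording.stop();
            System.out.println("recorded items: " + items);
        }
    }

    static int doCheckout() {
        return 3; // stand-in for real work
    }
}
```

Dumping the recording to a file and opening it in Mission Control would show only this one event type, which is the point: the observation cost stays proportional to what you chose to observe.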

Michael Redlich: Okay, that makes sense. This is what I loved. A lot of us obviously studied computer science in college and possibly grad school, but it’s just, it’s seeing all that and studying what actually goes on behind the scenes.

Simon Ritter: It is fascinating, because I remember that when we were doing the DTrace stuff, I actually had an application which would create a call graph. So you could say, “Okay, if I call this method what actually happens underneath?” And trace all of the different method calls from that, and I actually instrumented and traced the application that I’d written to display that information, and I said, “Okay, well, I’m going to change one particular element in my graph“. So I colored it yellow, so you could highlight things and then look at information about it. And just changing that one rectangle from being white to yellow, how many method calls do you think that actually took? The answer is 4 million.

Michael Redlich: Really?

Simon Ritter: Yes, I was totally blown away when I looked at this.

Project Leyden [12:07]

Michael Redlich: Definitely mind-blowing. It’s great to see the science in computer science and I’ve always appreciated that when I was in college studying computer science. But what else do we have in 25? Let’s see, we had stuff from Leyden.

Simon Ritter: Yes, I was going to say that’s probably the other big thing is the Project Leyden stuff.

Michael Redlich: So there was JEP 483, Ahead-of-Time Class Loading & Linking, in JDK 24 earlier this year, and then we had JEP 514 and 515, the Ahead-of-Time Command-Line Ergonomics and Ahead-of-Time Method Profiling. And JEP 516, Ahead-of-Time Object Caching with Any GC, is on the verge of being proposed to target?

Simon Ritter: Yes.

Michael Redlich: I haven’t seen the formal email from Mark Reinhold yet, but if you look at the JEP itself, it says it’s proposed. It was just updated yesterday, and there’s a to-be-determined JEP number for Ahead-of-Time Code Compilation.

Simon Ritter: Yes.

Michael Redlich: So a lot of stuff is going on in Leyden. What can you tell me about what’s going on there? And is Leyden an alternative to GraalVM? I guess GraalVM is more about compiling things to native code, but it all has to do with performance.

Simon Ritter: Yes. So this is one of the perennial, not exactly problems, but the features. Let’s call it a feature. It’s one of the perennial features of Java is the fact that you have this write once, run anywhere approach, which means that what we’re doing is we’re taking byte codes for a virtual machine rather than native instructions for a specific platform. And that’s where we see the differences between if you can compile C or C++ versus you compile Java, you get classes with byte codes. Now when the JVM executes those, initially, it does it in interpreted mode, meaning it takes individual byte codes, converts those using a template into the native instructions, executes it, and then just literally does that in a big loop. That’s quite inefficient in terms of getting performance for your application.

So we have this idea of adaptive compilation, Just-In-Time compilation that has been there since JDK 1.2, and that allows us to say, “Okay, when you have a frequently used method“, rather than executing all the byte codes every time, you say, “Let’s take that method and compile it for the platform we’re running on as the application runs“. That will give us better performance, and we do that in a number of stages. So there’s a C1 JIT compiler which compiles quickly but doesn’t apply a lot of optimizations. Once we’ve compiled it that way, we then profile the method and when it becomes a very hot method, we recompile it with C2, and that then uses all the information from the profiling to more heavily optimize it and give you better performance.
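The C1/C2 pipeline Simon walks through can be observed on any HotSpot JVM with the standard `-XX:+PrintCompilation` flag and a single hot method. A small sketch (class and method names are ours):

```java
// Run with: java -XX:+PrintCompilation HotLoop
// In the log, watch HotLoop::sum appear with increasing tier numbers
// (tiers 1-3 are C1, tier 4 is C2) as it gets compiled and recompiled.
public class HotLoop {

    // A small, frequently called method: a natural JIT candidate.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        // Enough invocations to cross the C1 and then C2 compile thresholds.
        for (int i = 0; i < 20_000; i++) {
            total += sum(1_000);
        }
        System.out.println(total); // 9990000000
    }
}
```

The exact tier transitions you see depend on the JVM version and flags, but the C1-then-C2 progression for the hot method is the behavior described above.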

So that’s all very good, but the drawback is that it takes time to get to the optimum level of performance, because you’ve got to identify all the methods that need to be compiled, compile them with C1, profile them, compile them with C2, potentially un-compile them because you’ll be using speculative optimizations which result in deoptimizations, and then we get to our steady state of performance. If you restart your application, we go through exactly the same process each time. So it’s very wasteful if you’re doing things like microservices and spinning up new instances of the same service.

So what we are trying to do is to solve that problem. Now, Graal, as you rightly say, took one approach, which was to say, “Rather than use bytecodes, let’s actually take the same approach that we do with C or C++ and compile Java directly into native instructions for a target platform”. That’s really good from the point of view of startup, because it means you get instant startup and instant maximum level of performance, because you’ve got all your compiled code in there. But it does have a number of drawbacks. It’s what we call a closed world, meaning you don’t have the same adaptability that you have in Java.

So there’s no dynamic class loading, though you can probably get around that with some things that they did in Graal recently. Reflection isn’t straightforward: you have to declare all the reflection you want to do ahead of time, so you can work around that. But also, because you’re fixing the compilation before you actually run the application, you’re not able to see how the application runs and then adapt your compilation to that, and a lot of the performance benefits come through those speculative optimizations. So what Leyden is doing is saying, “Okay, well, rather than taking that approach of compiling everything ahead of time, what can we do to minimize the amount of effort that’s required to get to that performance level?” And that’s through things like the Ahead-of-Time Class Loading.

So we say, “Okay, we know what classes need to be loaded straight away. We don’t have to go through the whole process of figuring that out“. What classes need to be initialized and so on? We can do that straight away. And what they’re working towards is this idea of also keeping a record so that when you’ve run your application and warmed it up, you can take a profile and then feed that back into the next time you run it. And what that will allow you to do is say, “Okay, we can even cache the code that was compiled“. If you’re running on the same machine, you don’t have to recompile everything because you’ve got the copy of the stuff that was generated last time. Feed that back in and get very heavily optimized performance much, much more quickly. So it’s an interesting approach, that allows you to not have the closed world. You can still do reflection in exactly the same way. You can do dynamic class loading, dynamic bytecode generation, all that works very happily. So the Project Leyden stuff is very interesting.
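The workflow Simon describes is driven from the command line. A hedged sketch of the three-step flow as laid out in JEP 483 for JDK 24 (the jar name, main class, and file names here are placeholders):

```shell
# 1. Training run: record which classes get loaded and linked
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.App

# 2. Create the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start with classes already loaded and linked
java -XX:AOTMode=on -XX:AOTCache=app.aot -cp app.jar com.example.App
```

JDK 25’s command-line ergonomics JEP is aimed at collapsing the first two steps into a single training run, and the follow-on JEPs extend what the cache can hold (method profiles, and eventually compiled code).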

Michael Redlich: Yes, absolutely, and it’s only been around a couple of years, but there was a gap. I remember it was introduced, there was a gap, and then it came back, which is good. One thing I thought of is Coordinated Restore at Checkpoint, otherwise known as CRaC, which everybody makes their jokes about, right? With Project Leyden and CRaC, is there any overlap, or do they complement each other?

Simon Ritter: Yes, they’re complementary because if you think of Leyden as being the approach of saying, “Okay, so we need to avoid the warm-up time, so what we’re doing is recording all the information about what’s actually happened in terms of the JIT compilation“. So we record the classes that are loaded, classes are initialized, which methods are being compiled, the compiled code, the profiling information, all that sort of stuff. So that gives us a good head start in terms of making the application available quickly, at the point where you actually enter your main method. What CRaC is about is saying, “Well, go beyond that“. And rather than thinking just about the initial startup, once you’ve got to your main entry point, you then start doing application-specific initialization. So you’ll be opening databases, establishing connections with different services, doing various work that you need for your application.

Now, that will all take time. What we then say is, “Okay, why not take a snapshot of your running application so that you can start it from exactly the same point at a later time?” And that’s what CRaC is all about. So we basically freeze the application. The way that I describe this is it’s a bit like when you’re using your laptop and you’ve got a web browser open, you’ve got a Word document open. You close your laptop, travel to the office down the road, open your laptop, everything is exactly as you left it. So what we’re doing is the same thing with a Java application. So you’re just restarting it from the same point. It’s a little bit more complicated than that because of course if you try and do that in a situation where you’ve got database connections, you’ve got network connections or open files, those things may well go stale and you might have problems in trying to access your database again.

So what CRaC does is, the first C in CRaC is about coordinated. What we do there is we say, “All the methods, all the classes which are going to use things like database connections or open files, they need to be made aware that they’re going to do a checkpoint. And before the checkpoint actually happens, they can then gracefully shut down those connections so that when they start up again, we can gracefully restore those connections“. We can do things like reestablish database connection, reopen a file, but even to the point where we can store a checksum of an open file and then say, “When you start it up again, do a comparison between the checksums“. If the file has changed, there may need to be processing that occurs, but you have that ability to do a controlled restart of the application, not just saying, “Okay, freeze everything and then try and restart it from that point“, which most of the time probably wouldn’t work.
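The coordination Simon describes is exposed to applications as a `Resource` callback interface (`org.crac.Resource` in the compatibility library, with `beforeCheckpoint` and `afterRestore` hooks). Since that library isn’t part of a stock JDK, here is a self-contained sketch of the pattern using a stand-in interface of our own:

```java
import java.util.ArrayList;
import java.util.List;

public class CracPatternDemo {
    // Stand-in for the CRaC Resource interface (the real one is org.crac.Resource).
    interface Resource {
        void beforeCheckpoint() throws Exception;
        void afterRestore() throws Exception;
    }

    // A resource that must close gracefully before a snapshot is taken.
    static class DbConnection implements Resource {
        boolean open = true;
        public void beforeCheckpoint() { open = false; System.out.println("closing connection"); }
        public void afterRestore()    { open = true;  System.out.println("reopening connection"); }
    }

    public static void main(String[] args) throws Exception {
        List<Resource> registered = new ArrayList<>();
        DbConnection db = new DbConnection();
        registered.add(db); // real code registers with Core.getGlobalContext()

        // Checkpoint: notify every resource so it can shut down gracefully.
        for (Resource r : registered) r.beforeCheckpoint();
        // ... the process snapshot would be taken here ...
        // Restore: notify resources (the real API restores in reverse order).
        for (int i = registered.size() - 1; i >= 0; i--) registered.get(i).afterRestore();

        System.out.println("connection open: " + db.open);
    }
}
```

The point of the callbacks is exactly what Simon describes: connections are closed before the snapshot and re-established after restore, so nothing comes back stale.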

This is proving to be very popular. Projects like Spring Boot have integrated it. The AWS people, with their SnapStart, are not using the underlying technology, but they are using the same API that we developed for it.

Michael Redlich: Yes, Azul and BellSoft are the downstream distribution vendors that have integrated CRaC into your OpenJDK. Do you see other vendors following suit at some point? As far as I know you’re the only two, but maybe I missed a couple, like Corretto?

Simon Ritter: Yes, I think the limitation is that most of the other distributions are just building straight from OpenJDK. So whatever’s in OpenJDK, they will take for that particular release. BellSoft have followed us. We developed CRaC. We published it as a project on OpenJDK, and what they’ve decided to do is take it and integrate it into their distribution so that people can use it with their distribution. In terms of getting into more distributions, I think the next stage is really to start looking at whether we can make it a formal part of OpenJDK, maybe even integrate it into the Java SE specification. That’s quite a significant hurdle in terms of the amount of work it requires, because creating a JEP, which is what we would need to do, probably more than one JEP for this, is quite a lot of work, and we would have to work with the OpenJDK engineers, who are primarily from Oracle, in terms of figuring out how to integrate that.

I know that there is… How should I put this politely? A bit of resistance from OpenJDK in terms of integrating this. Now, part of that is very fair, because right now it’s not cross-platform. Right now we only provide it on Linux. So if we really want to make it part of Java, we need to have it cross-platform and find a solution that will run on Windows and a solution that will run on Mac. Now, we are working on some stuff around that. You could always stub it out, but that’s not really the best way to do it. We are working on an alternative solution, because we initially used what’s called CRIU underneath, that’s Checkpoint/Restore In Userspace, which is available for Linux. But now we’ve got a thing called Warp, built by somebody who’s worked on the Unix kernel. It’s really quite scary, because essentially what we’re doing is we’re forcing a crash dump and then restoring from a crash dump.

Michael Redlich: Oh, wow.

Simon Ritter: But it works. Our engineering people will tell me off for that because it’s not really a crash dump. It’s just, we’re dumping the whole state of the process and then restoring from that. But it’s very cool technology that we’re working on for that, and hopefully we can do that on other platforms as well. But coming back to the idea of making it broader in terms of other platforms or other distributions, there’s obviously the cross-platform part, but also we need to work with the OpenJDK engineers. They’re focused very much on the Leyden approach and they’ve got some other ideas for things like that. So will we get it into OpenJDK? I think we are still quite a long way off from that yet.

Michael Redlich: Yes, I was going to ask about the cross-platform part, so thank you for answering that already. A lot of interesting things are happening in the Java space. I think we covered a lot of the trends that were in JDK 25. The Vector API is an interesting one. It’s on its 11th… 10th or 11th incubation?

Simon Ritter: 10th. Yes, 10th in JDK 25.

Michael Redlich: Yes, the Vector API is also on the cusp of being proposed for JDK 26. That’s JEP 529. I think a lot of people already know, they’re waiting on Project Valhalla to get some JEPs in there. I think we’re getting closer and closer to that. I think that’s one of the most commonly asked questions of Brian Goetz.

Simon Ritter: Yes.

Michael Redlich: But JDK 26 obviously will come out in March. There are three JEPs at the moment that are targeted.

Removal of the Applet API [24:20]

Michael Redlich: Well, it took quite a while, but the Applet API is finally going to be removed. Nobody does applets anymore. There’s support for HTTP/3 in JEP 517, and then another garbage collector one, G1 GC: Improve Throughput by Reducing Synchronization. Is there anything you can tell us about any of those?

Simon Ritter: Obviously, like you said, applets are finally getting removed. And this is one of the things that we’ve seen going all the way back to JDK 9. JDK 9 was quite significant because it was the first JDK release to remove things as well as to add things. Up until then we hadn’t removed anything. We’ve seen a steady trickle of various bits and pieces in the JDK that have been gradually removed. So in JDK 11, they got rid of all the Java Enterprise APIs that had been included in Java SE, things like CORBA and JAXB. And obviously things like Nashorn have been removed, and various other small APIs have dropped out. Plus there’s a lot of work on trying to figure out how to eliminate all possible use of the internal APIs in the JDK. So this is where sun.misc.Unsafe comes in, and we’ve seen the introduction of various public APIs that replace the private APIs of sun.misc.Unsafe.

We’re not quite there yet, but close. And so, yes, we are gradually getting rid of some of the legacy aspects of Java and applets is clearly one of those. It makes perfect sense because nobody’s going to be using applets on JDK 25 or 26. If anybody’s still using applets, which there are actually plenty of people who do, but they’re going to be running on JDK 8, or JDK 7, or JDK 6, because after that you start losing the browser plugin and all the stuff that sits around it. So just having the API isn’t enough to run an applet. And we saw in JDK 25, the final nail in the coffin, if you like, of the 32-bit port on x86. So that got removed as well. So we’d removed the Windows port, I think in 24, and then the Linux port got removed in 25. So that’s it for the 32-bit x86 ports now.
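One of the public replacements Simon alludes to is `java.lang.invoke.VarHandle`, which covers the atomic field access that code used to reach into `sun.misc.Unsafe` for (the Foreign Function & Memory API covers the off-heap side). A minimal sketch (class and field names are ours):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class CounterDemo {
    private volatile int counter;

    // A VarHandle gives the same lock-free atomic field access that
    // code once obtained via sun.misc.Unsafe field offsets.
    private static final VarHandle COUNTER;
    static {
        try {
            COUNTER = MethodHandles.lookup()
                    .findVarHandle(CounterDemo.class, "counter", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int incrementAndGet() {
        // getAndAdd returns the previous value atomically.
        return (int) COUNTER.getAndAdd(this, 1) + 1;
    }

    public static void main(String[] args) {
        CounterDemo d = new CounterDemo();
        d.incrementAndGet();
        d.incrementAndGet();
        System.out.println(d.counter); // 2
    }
}
```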

Michael Redlich: Yes, I’d be curious to see what else comes to JDK 26 before the Rampdown Phase One, which will be in early December, right?

Simon Ritter: There’ll probably be a fair number of announcements next month, because that’s when they tend to do it. They say, “Okay, these are the ones we think we’re going to get in there”. But they don’t put them in until around a month before the rampdown, so that they can figure out: will they actually go in? Are they ready to go in, rather than putting them in and then having to take them out again? So I would expect to see quite a lot of activity in terms of JEPs next month.

Michael Redlich: Yes, the ones that have not been formally announced via email, but that are proposed to target if you look at the JEPs, are the Vector API (Eleventh Incubator), Ahead-of-Time Object Caching with Any GC, and Prepare to Make Final Mean Final.

Simon Ritter: Yes, that’s another interesting one. One of the things in 25 is that Stable Values arrived, and stable values are related to the idea of final in that they wanted the ability to make things truly final. Because of course, even though you can mark something as final in your code, which means you can’t change it directly in your code, reflection, and deep reflection in particular, still allows you to change a final variable. What they’re proposing with that JEP is that, even with deep reflection, you won’t be able to change a final value, and that’s going to be very useful, because it makes the underlying way that we deal with these things better, and we can then get better performance out of the JVM.
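The loophole Simon is describing is easy to demonstrate on current JDKs: deep reflection can still write to a final instance field (at least from code in the same unnamed module), which is exactly what the Prepare to Make Final Mean Final JEP starts to close off. A small sketch (class and field names are ours):

```java
import java.lang.reflect.Field;

public class FinalMutation {
    static class Config {
        final int port;          // final, so not directly assignable
        Config(int port) { this.port = port; }
    }

    public static void main(String[] args) throws Exception {
        Config c = new Config(8080);

        Field f = Config.class.getDeclaredField("port");
        f.setAccessible(true);    // deep reflection
        f.setInt(c, 9090);        // mutates a final field today

        System.out.println(f.getInt(c)); // prints 9090, not 8080
    }
}
```

Once final really means final, the JVM can treat such fields as constants for optimization, which is the performance angle Simon mentions; the JEP’s first step is to warn on, and eventually disallow, the `setInt` call above.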

What’s New and Exciting That We Didn’t Expect in the Java Ecosystem [28:05]

Michael Redlich: I didn’t realize there was a correlation between stable values and that. Learned something new today, which is always good, right? So we’ll look forward to seeing that. Of course, you may not know, I write our weekly news roundup and I follow the life cycle of all the JEPs as they go from draft, to candidate, to targeted. So stay tuned for that. I wanted to get into some of the questions I ask of the participants of the Java Trends Report and get your input. So one of the questions is: what is new and exciting that we didn’t expect in the Java community or in the Java space?

Simon Ritter: That’s an interesting question. What didn’t we expect? See, I think that’s one of the things about the way that Java is being developed now. There isn’t much that comes as a surprise because the whole idea of JEPs is to give a roadmap for the next three years of Java. So if you track the JEPs and if you look at the JEPs that are submitted, you look at the JEPs and as they go through quite a complicated life cycle, you can have a fairly good view of what’s likely to happen in the next three years in terms of Java. So I’m not really sure that I would come up with anything that has really surprised me in terms of features that are being added to Java. I think not so much of a surprise, but I think one of the areas that they’ve been focusing on with Project Amber is pattern matching. And that I think is a very useful feature or set of features, because they’re not just one. And the application of pattern matching to all the different areas that we’re seeing is really very helpful, and so that’s good stuff.

Michael Redlich: Oh, absolutely. Yes, there is a pattern matching JEP, I think, that’s coming out. It might be.

Simon Ritter: Yes, because there’s pattern matching, for instance, of primitive values in instanceof.

Michael Redlich: Oh yes, there it is.

Simon Ritter: Yes, you see, now that’s an interesting one, because if you look at the JEP for 25, it’s been put forward as a preview feature, and it was previewed without any changes from the previous iteration. Now if you look at JDK 26, which I expect will include the same feature, that is going to include some changes. I was talking to Gavin Bierman, who’s one of the lead architects for this, at the JCP Executive Committee meeting a couple of weeks ago, and I did a session at Devoxx in Belgium on what I call Java puzzles. Several of the puzzles relate to pattern matching, and some of them relate to primitives in pattern matching. And there are certain situations where you get these edge cases. I found a couple of examples where I couldn’t explain why it was doing what it was doing.

I spoke to Gavin about this and he said, “Yes, we saw your email. We’ve had a lot of discussion about that and we’ve had to go through this“. And so there are a number of situations where they’re having to rework the way they defined this, so that they can make absolutely certain you don’t end up with some slightly strange behavior. But again, this is the wonderful thing about preview: you provide feedback to the team, the team looks at it and goes, “Oh, yes, you’re right. If we do it this way, we need to tidy that up and tighten up the specification“. So it’s all good stuff.

Michael Redlich: Yes, I found it in my one Google Doc for next week’s roundup is JEP 530 that was just elevated from its candidate status and, yes, they propose a fourth preview with two changes that would enhance the definition of unconditional exactness.

Simon Ritter: I was going to say, it’s unconditional exactness, which is the issue. Because the problem is that if you combine primitive types with reference types, it all gets a little bit complicated, because reference types are unconditionally exact, in that you know exactly… It’s a bizarre way of describing it, but unconditionally exact means you know what the type is. Whereas with a primitive, you don’t necessarily know which primitive you’re dealing with, because what they do is say, “If you’re passing a value as a numeric, let’s say an integral value, and you pass in 42, what they’ll do is a runtime evaluation to see: will that 42 fit into a byte? Will it fit into a short, or will it fit into an int?“

So you don’t know until runtime which specific primitive you’re dealing with. Whereas if you use Integer, you do know, because it’s an Integer object and that’s a reference type. So there are a number of situations, when you use both a reference type and a primitive type, where you’re mixing something unconditionally exact with something that has to be evaluated at runtime, and the number of combinations and situations you have to evaluate becomes quite complicated.
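The runtime check Simon describes — will 42 fit into a byte? — can be approximated in plain Java. This is a sketch that mirrors the idea, not the JEP's actual implementation:

```java
public class Exactness {
    // Approximates the "does this int value fit exactly in a byte/short?"
    // test that primitive patterns perform at run time: cast down, cast
    // back up, and see whether the value survived unchanged.
    static boolean fitsInByte(int v)  { return (byte) v == v; }
    static boolean fitsInShort(int v) { return (short) v == v; }

    public static void main(String[] args) {
        System.out.println(fitsInByte(42));   // true: 42 is representable as a byte
        System.out.println(fitsInByte(300));  // false: 300 overflows a byte
        System.out.println(fitsInShort(300)); // true: 300 fits in a short
    }
}
```

This is why a primitive pattern like `case byte b` cannot always be decided at compile time: whether it matches depends on the actual value.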

Michael Redlich: That’s interesting. Yes, there’s, I guess what you were describing, I know there’s implicit conversions now. I’m going back to my C++ days, but yes, sometimes like you said, a primitive type could be, like you just mentioned 42, could be a 42.0, could be implicitly converted, and this is what you were talking about at runtime.

Simon Ritter: Well, yes, because in Java you get automatic promotion. So if you pass a value of 42 to a method and it takes an int as a parameter, even though you might have a byte, it automatically gets promoted into an int so that you will always get that int because it fits in an int rather than double. So yes, again, there’s situations where you get automatic promotion and then you get the evaluation at runtime to say, “Will it fit into a byte again?” And then you narrow it down again.
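The automatic promotion Simon mentions is visible in ordinary overload resolution. A small sketch (the `argType` overloads are illustrative):

```java
public class Promotion {
    static String argType(int v)  { return "int"; }
    static String argType(long v) { return "long"; }

    public static void main(String[] args) {
        byte b = 42;
        // The byte is automatically widened to int, the most specific
        // applicable overload, rather than to long.
        System.out.println(argType(b));   // int
        System.out.println(argType(42L)); // long
    }
}
```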

Michael Redlich: Yes, the other change was applying tighter dominance checks in switch constructs.

Simon Ritter: Yes, because this is actually the one that I told him about, because if you do a switch statement or switch expression, either, and you have two primitives and you say, “case int“, and then, “case byte“, obviously anything that will fit into a byte will also fit into an int. So that gives you case pattern dominance of the int over the byte. But if you use case Integer, the wrapper reference type-

Michael Redlich: Oh, right.

Simon Ritter: … and then case byte, then you don’t get pattern dominance, even though there should really be pattern dominance in certain situations. And that’s what they’ve had to tighten up is because in the initial, or the development they’ve done, they allowed this situation. So they’ve tightened up the pattern dominance so that will not allow you to do that.

What is Getting You Personally Excited in the Java Ecosystem? [34:12]

Michael Redlich: Yes, look forward to seeing what’s coming out of that, and that JEP itself and the others. So, well, one of the other questions I had is what’s getting you personally and really excited in the Java space?

Simon Ritter: Yes, again, I’m very much a language person. I like programming languages. One of the things I did at university in my computer science course was compiler design. I really like the idea of programming languages, and how you compile them and so on. So the thing that I like looking at, mostly from a Java perspective, is what’s happening with the language syntax, and that’s where we get into this idea of pattern matching and those sorts of things. So I really like those kinds of features, and understanding exactly how they work, and what the benefits are and so on.

Compact Object Headers [34:57]

Michael Redlich: Awesome. Yes, I’m going to aggregate all the results from everybody who’s participating in this year’s trends report. So we have the editors at InfoQ, some of the editors and some of the folks from the outside. So stay tuned for that. Hope to get that out soon. So in terms of wrapping up, are there any final thoughts you might have? Anything I may have missed in terms of trends or what people can expect in… Anything regarding the language?

Simon Ritter: I guess the only thing that I can think of, because one thing that did pop up in JDK 25 was Compact Object Headers, and that’s a nice little feature from a performance perspective, because what it does is reduce the amount of memory that’s used in the heap. They’ve seen some quite interesting and fairly impressive results in terms of running benchmarks, in that you can reduce the heap usage and even the CPU utilization as well. So there are things like that, but overall what we’re seeing is that the switch to the six-month release cadence has worked extremely well.

I have to really give credit to people like Brian Goetz, Stuart Marks, and Gavin Bierman, and all the others who are involved in this, for the way that they’ve managed to move the platform forward in a very controlled way. They’ve done a lot of work in terms of Project Amber, Project Leyden, Project Panama, and Valhalla as we’ll see, and really moved the platform forward in a controlled way that hasn’t broken backwards compatibility. But it’s allowed us to add new features and take some of the rough edges off, which is what Amber is all about, taking some of the rough edges off the programming language to make developers more productive, make writing code easier, make reading code easier, which is really the big thing that we should focus on. So yes, I’m very impressed with the way that Java is moving forward.

Michael Redlich: Yes, that’s right. I just remembered that it was an experimental feature for JDK 24, and then a product feature, as I read about it, within one release cycle. Is that unusual or does that speak to the hard work of the folks actually implementing that?

Simon Ritter: Yes, it’s one of those features where they introduced it as an experimental feature, because pretty much everything now gets introduced in preview or experimental form first, so there is at least some time for feedback. But with things like Compact Object Headers, it doesn’t expose anything to developers. There’s nothing in terms of APIs, there’s nothing in terms of the language. It’s all inside the JVM. So it’s really more of an implementation detail, and individual developers won’t see it other than the performance benefit. A lot of work, I know, went into this before they even introduced it as an experimental feature. Nobody had anything dramatic to say, because they tried their applications and everything worked, and so they said, “Right. Good. We’ll just introduce it as a full feature“.
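Because Compact Object Headers lives entirely inside the JVM, it is enabled purely from the command line. As a sketch (`app.jar` is a placeholder): in JDK 24 the flag was gated behind the experimental-options switch, while in JDK 25 it is a product option, still off by default:

```
# JDK 24 (experimental):
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar app.jar

# JDK 25 (product option, off by default):
java -XX:+UseCompactObjectHeaders -jar app.jar
```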

Michael Redlich: Oh, okay. For those who may be interested in learning more, our good friend Ben Evans actually wrote an InfoQ news piece on that. It came out almost a year ago, the 13th of November last year. So for those who may not know, I took over as lead editor from Ben, so I had some big shoes to fill. I hope I’ve accomplished that.

Conclusion [38:03]

Simon, this was awesome. I think it’s needless to say that if you’re ever interested in coming out to New York City to visit with the Garden State Java User Group, the New York JavaSIG, and possibly Philly, that would be great to have you visit with us. Hopefully we’ll run into each other at a conference at some point down the road.

Simon Ritter: Yes.

Michael Redlich: Thank you very much for your time. I know you’re busy and this was a great discussion today.

Simon Ritter: Thank you very much.

