WASM in the Enterprise: Secure, Portable, and Ready for Business

News Room
Published 3 December 2025

Transcript

Andrea Peruffo: I'm here to present to you WebAssembly in the Enterprise: Secure, Portable, and Ready for Business. I'm Andrea Peruffo. I work as a principal engineer at Red Hat. Currently, my main focus is on Chicory and WebAssembly. Let's go through what we will cover in this talk. There will be two parts. The first is about the motivation that brings this talk here and what we are trying to achieve. The second is when the rubber hits the road: practical use cases of WebAssembly, all the troubles and problems you will face when trying to integrate your first WebAssembly programs into your applications, and how to overcome them to end up with genuinely great applications.

WebAssembly – Overview

A couple of words about WebAssembly. First off, how many people have heard about WebAssembly before? WebAssembly is a portable binary execution format for programs. It is used extensively in the browser; WebAssembly really loves to run in the browser. It was built to overcome the limitations of running performance-critical code, like graphics engines and cryptographic libraries, inside the browser, where you can usually run only JavaScript, a dynamic language that is comparatively slow. With WebAssembly, you can instead run systems programming languages like C, C++, and Rust, which are extremely fast. They enable companies like Google to speed up applications like Google Maps and Excel.

Even if you don't know it, you are running WebAssembly behind the scenes on almost any reasonably sized site in your browser. There are further arguments for running WebAssembly inside the browser, the main one being the secure sandbox. WebAssembly is designed to be embedded into systems, and it executes in a secure sandbox that cannot be escaped, which means the code that runs there is completely controlled by the outside system. The standard is part of the W3C, so it is used widely in the browser today and it is going to stay, because it's a standard that everyone can target, both from the application development point of view and from the browser point of view.

We are in a track about server-side WebAssembly, and to talk about server-side WebAssembly you have to see the power of this idea. What if we could take WebAssembly and all the work that has been done in the frontend for the browser, and bring it to our backend systems? In this way, we can have polyglot interfaces: we can compile from many languages to WebAssembly, so we can have plugin systems that target WebAssembly. It's also easily embeddable. WebAssembly is designed to be embedded in a host programming language, so embedding it into another programming language feels natural; you can follow the JavaScript APIs, or you can go wilder and use different approaches.

This feels really natural and helps you design your system correctly. It's secure: it runs in a sandbox. The memory used by the WebAssembly module is completely under control and completely defined, so you don't have to worry about strange things happening; everything stays within the boundaries you have given it. Again, it's a standard, so reusing something built on a standard means you can keep using it for years to come.

We are talking about many different things, but when we talk about WebAssembly, there are always two sides to it. On one side, many languages like Go, Rust, C, Kotlin, and Scala can compile to WebAssembly bytecode. On the other side, you want to run those WebAssembly payloads inside a system, and for that there are a lot of different runtimes. Some of them come from the web browser, like V8 or SpiderMonkey and those kinds of engines, and some are new. Executing WebAssembly inside another language is a superpower, and that's why today we are going to concentrate on running WebAssembly in Java, thanks to Chicory. Chicory is the project that I am the main maintainer of. It is a new Java WebAssembly runtime written in pure Java with zero dependencies, and it can run on basically any stock JVM. Go star our GitHub project and check out the documentation; it is actually pretty interesting.

The Ubiquitous JVM, and Wasm

Why? I gave you a few motivations for why we started this journey, but the full vision is the fact that the JVM is everywhere. We are running Java programs across whole infrastructures, in healthcare, in the cloud, wherever. It is very widespread. What can we do to unlock the potential of using WebAssembly inside those services? That's why we started Chicory. There are already several ways to integrate WebAssembly into Java today, but most of them require foreign function interfaces. You can call out to V8, or you can call out to Wasmtime, but by going through a foreign function interface, that is, calling a native library to run your WebAssembly payload, you are effectively escaping the safety boundaries of the JVM, and this has a number of downsides.

First of all, distribution. WebAssembly is intended to be a single target architecture that everyone can compile to and that is completely portable across platforms. If you have to ship a native library with your Java code, you have to compile it for each target architecture and operating system, which is a problem because you always need a build for every target that will run it. With WebAssembly, this is completely decoupled: you compile once to WebAssembly and ship a payload that runs unmodified on any platform, whether ARM, PowerPC, or even IBM Z mainframes, and those kinds of things.

Then you have runtime issues, because escaping the JVM defeats the JVM's ability to monitor and provide observability for the application running on it. Staying within the boundaries means that your WebAssembly module runs within the memory of the JVM, without exceeding the maximum you give it, which is extremely good for cloud-native environments. It gets all the observability the JVM provides, which means all the debugging tools continue to work without any change. We have the memory safety of WebAssembly, and we have fault isolation: if a native library such as Wasmtime or V8, or any native library you use to run WebAssembly, crashes, it crashes your JVM. That is, of course, something you don't want to happen in production, and avoiding it is a benefit of running code, especially untrusted code, inside the JVM.

Then you get all the other benefits the JVM brings, like just-in-time compilation, and your program is completely self-contained because it runs on something it already knows: the bare JVM. That's why, at the end of the day, we decided that bringing WebAssembly payloads onto the JVM is something really good, something that we want to do, and something that we want to enable.

Case Study 1 – JRuby

During the next use cases, we will see a bunch of things. Let's start with the first of the case studies. The first one is something I really care about, because it's JRuby. It was the very first real-world use case we had for Chicory. We were developing a WebAssembly runtime, and we started, of course, with an interpreter: just running the code in an interpreter inside the JVM. The JRuby people came to me and said, we have something to talk about. What is JRuby? JRuby is a high-performance port of the Ruby programming language to the JVM. They built it so you can take advantage, again, of all the benefits of the JVM, especially the threading model and the advanced just-in-time compilation of Ruby code. It can run even faster than native Ruby in some cases, especially for long-lived applications. That's why JRuby is a super interesting project.

When you look at Ruby itself as a language, you have to know that Ruby is mostly about user experience; in most libraries and in most of the ecosystem, Ruby is a DSL that you use to interact with your system. Ruby was designed to be really human-friendly, and this means that it tends to be really slow. What happens is that below the surface, most of the code you run when developing Ruby applications is C, which boils down to running directly on hardware and goes really fast. There are many problems in porting Ruby to different runtimes. The first is that you have to write a parser, and parsers are not easily portable. The Ruby syntax was designed entirely for humans, which means it is extremely difficult for computers to interpret and understand.

You can find a ton of blog posts on the internet describing how difficult it is to parse Ruby. It also means that each port of Ruby to a different platform has re-implemented its own Ruby parser, with different nuances and different edge cases. We all know our industry, and we know what happened: we have 14 standards, we need the 15th one. I have to say that in this case it is working out pretty well, as far as I'm concerned, because the idea was: we have a ton of different parsers for Ruby, so let's build one from scratch. With the current knowledge, after many years of trying to parse Ruby in many different ways, write one full, decent parser for Ruby as a C implementation, and make it the single one for all the Ruby implementations. They have already reached 100% compatibility with the existing parsers, which is great. The project is called Prism, and I encourage you to check it out, because it's really cool.

As you can imagine, why would we want to use Chicory in JRuby for Prism? Because Prism is a C library, which can easily be compiled to WebAssembly. If you run that WebAssembly inside a pure-Java WebAssembly runtime with zero dependencies, you can boot JRuby anywhere there is a JVM. You don't need to precompile your parser, which is the very first brick of your infrastructure when trying to run Ruby code on a different platform. You don't have to recompile the parser for a different architecture, for Windows, or for embedded operating systems. You just run the WebAssembly you have, which is extremely portable and enables JRuby to boot anywhere the JVM can go. If the JVM has already been ported to some embedded operating system, you can directly run JRuby there, and it will bootstrap in pure Java without native dependencies.

Running a full, decent parser written in C inside a WebAssembly interpreter is really slow. That was our first attempt and the first deliverable of the Chicory project, but we needed to go further. So we implemented a WebAssembly-to-Java-bytecode translator, an ahead-of-time (AOT) compiler. If you look at WebAssembly bytecode, it is very similar to Java bytecode; it doesn't differ that much. Implementing a translator between the two is relatively easy, and it makes it possible to run things much faster, as pure Java code, instead of needing an interpreter to go instruction by instruction through the code being executed.

At this point, using the Chicory ahead-of-time compiler, you can translate the WebAssembly payload into a few class files and some metadata that can be embedded, without external dependencies, into your final JAR, and it will run as pure Java without any change, taking advantage of everything the JVM offers. Performance increases dramatically: a 10x to 40x speedup from using the ahead-of-time compiler. Now we are fast.
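As a toy illustration of why this translation is so direct (the opcode constants below are hypothetical and have nothing to do with Chicory's real encoding or output), compare an interpreter dispatching a stack-machine "add" function with the plain Java method it collapses to:

```java
public class WasmToJvmSketch {
    // Hypothetical opcodes for the sketch; real Wasm opcodes differ.
    static final int LOCAL_GET_0 = 0, LOCAL_GET_1 = 1, I32_ADD = 2, END = 3;

    // Interpreter: dispatches every opcode against a value stack.
    static int interpret(int[] code, int a, int b) {
        int[] stack = new int[16];
        int sp = 0;
        for (int op : code) {
            switch (op) {
                case LOCAL_GET_0 -> stack[sp++] = a;
                case LOCAL_GET_1 -> stack[sp++] = b;
                case I32_ADD -> {
                    int y = stack[--sp];
                    int x = stack[--sp];
                    stack[sp++] = x + y;
                }
                case END -> {
                    return stack[--sp];
                }
            }
        }
        throw new IllegalStateException("missing end opcode");
    }

    // The translated form: the same stack program collapses to plain Java,
    // which the JVM's JIT then optimizes like any other method.
    static int translated(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int[] addFn = {LOCAL_GET_0, LOCAL_GET_1, I32_ADD, END};
        System.out.println(interpret(addFn, 2, 3)); // 5
        System.out.println(translated(2, 3));       // 5
    }
}
```

Both paths compute the same result; the compiled form just skips the per-opcode dispatch loop entirely, which is where the 10x to 40x speedup comes from.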

Case Study 2 – Trino

The second use case I want to show you comes from a friend of mine, one of the creators of TrinoDB, formerly known as PrestoSQL. Trino is a distributed SQL query engine that happens to be written in Java. At some point very early in the story of Chicory, one of its creators, David Phillips, came to us and said: I really want to run Python for user-defined functions inside my database. This was a big, super interesting challenge. Python and WebAssembly: Python actually really loves WebAssembly. Starting from release 3.13 of Python, WebAssembly is a tier-2 release target architecture, which means it sits just behind the tier-1 architectures: highly supported and extensively tested. Python cares very much about being able to run on WebAssembly.

In this case, I'm talking not about generic Python but about CPython, because the way you run Python in WebAssembly is not by compiling your Python code function by function, but by compiling the interpreter itself, CPython, to WebAssembly. Then you can run the CPython interpreter inside any other WebAssembly runtime, which is actually interesting. When you start the interpreter (and in this case you are constrained to a single thread), the first thing it does is load the standard library of the language. The standard library of a language like Python, with a lot of APIs, is pretty big: it has to load files from the file system and preload all the basic functionality for you to use. This is going to be slow.

To overcome this limitation, you can start taking advantage of the amazing tools that are out there for WebAssembly. The first one is wasi-vfs. wasi-vfs gives you a file system embedded directly into your WebAssembly payload, which means you don't have to go to the operating system, load files, and then share the data with the WebAssembly module; the module itself contains the file system data and reads it directly. It has a few limitations: it only supports static file systems, so the set of files cannot change during the WebAssembly execution, but it is pretty good for encoding the entire standard library, plus any default libraries you want loaded into the environment. This was the first tool we used.
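Conceptually, an embedded static file system of this kind boils down to a read-only map from paths to bytes baked into the payload. A minimal Java sketch of the idea (the path and contents below are made up; wasi-vfs's real on-disk format is different):

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Conceptual sketch of an embedded, read-only file system like the one
// wasi-vfs bakes into a Wasm payload: paths map to byte contents that live
// inside the module itself, so no host OS access is needed at runtime.
public class EmbeddedVfs {
    private final Map<String, byte[]> files;

    public EmbeddedVfs(Map<String, byte[]> files) {
        this.files = Map.copyOf(files); // static: contents fixed at build time
    }

    public byte[] read(String path) {
        byte[] data = files.get(path);
        if (data == null) throw new IllegalArgumentException("no such file: " + path);
        return data;
    }

    public static void main(String[] args) {
        // Illustrative path and content only.
        EmbeddedVfs vfs = new EmbeddedVfs(Map.of(
            "/lib/python3.13/os.py", "import sys".getBytes(StandardCharsets.UTF_8)));
        System.out.println(new String(vfs.read("/lib/python3.13/os.py"), StandardCharsets.UTF_8));
    }
}
```

The point of the design is that every "file read" is an in-memory lookup, so loading the standard library never crosses the host boundary.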

The second one is called Wizer, and it is a WebAssembly pre-initializer. What does pre-initializer mean? It means you can run your WebAssembly module up to a point, stop the execution there, and snapshot the current state into another WebAssembly module. The produced module starts extremely fast, because it has already gone through the whole initialization process and done all the work that needed to happen before actually starting. You start from a pre-warmed environment, so you get the low latency and quick startup time you usually expect from WebAssembly code.

On the JVM, running in interpreter mode (so extremely slow), the initialization of Python used to take 40 seconds to load the entire standard library. After applying wasi-vfs and Wizer, it takes only one second. You can say that's still not optimal, but it's a big delta, it makes the thing usable, and we can apply further optimizations to make it appealing for wider use. With that, we have been fast.

Case Study 3 – Debezium

I have a third use case, which is Debezium. Debezium is a super interesting framework that implements the change data capture (CDC) pattern: it listens on your database or data source, which can be Postgres, MySQL, MongoDB, or a number of other databases, watches for changes, and propagates those changes as events to other technologies, like Redis, but normally message queues such as Kafka. Think of it as a glorified tail -f on your database: something produces data into the database, you connect it to a different system, and you propagate the events that are happening.

If you think about it, Debezium is used as an architectural tool inside many Kubernetes-native frameworks and infrastructures. Kubernetes is built on Go, but Debezium at its core is built on Kafka Connect, which is Java at the end of the day. So this is a Java framework that really needs to play well with an ecosystem written in Go on top of Kubernetes. One thing Debezium lets you do on top of those changes is single message transformation: each change produced on the database side is emitted as an event, one message going through Kafka, and you can define operations to perform on each of those changes.

The most common ones are filtering and routing. For example, you can take a message, check whether it contains something, and decide whether to pass it along, or you can route it, delivering it to different downstream queuing technologies.

How do you let people write single message transformations in Go for Debezium? The standard approach is to use a separate service or container for the operation. Debezium is built on Kafka Connect, so it receives a message from Kafka and deserializes it; it then has to serialize it again to send it through the user-defined single message transformation, which deserializes the message once more, applies the transformation, and reserializes it to get back into Debezium. This is slow again. It is not feasible, it makes the architecture unusable, and no one would really want it for this kind of use case. That's why we instead run single message transformations, notably ones written in Go, inside WebAssembly, thanks to a Go PDK (Plugin Development Kit) used for writing those filters and routers; the transformations can lazily access the fields of your messages.

The PDK offers helper methods to access, through a pointer, the object that has been deserialized into the Java heap. You deserialize the message coming from Kafka Connect once, into heap memory, and then you give your WebAssembly module a way to lazily access each of the fields, navigating the structure with a familiar dot syntax and materializing only the values it is interested in.

If you look at this example, only value.source.version is materialized as a string inside the WebAssembly guest module, so that it can be compared against whatever you like, and you can decide whether to pass the message on. This is extremely interesting because you no longer have to reserialize the message, which can be big, up to a megabyte or so, into the WebAssembly module; you access it on the host side. As we said before, WebAssembly was designed to be embedded in other languages, and these are the kinds of powers you get from that. Of course, this is fast again, and we are happy again, and we can run these things nicely on the JVM.
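A rough host-side sketch of that idea in Java (hypothetical class and method names and made-up message data; the real Debezium/Chicory PDK API differs): the host holds the deserialized message in its heap, and the guest pulls single fields by dotted path, so only the requested value crosses the boundary:

```java
import java.util.Map;

// Sketch of lazy field access: the host deserializes the message once,
// and the guest asks for individual fields via a host function,
// materializing only what it needs instead of the whole payload.
public class LazyMessageHost {
    private final Map<String, Object> message; // deserialized once, host side
    int materializedFields = 0;                // values that crossed the boundary

    public LazyMessageHost(Map<String, Object> message) {
        this.message = message;
    }

    // The host function a guest would call, e.g. get("source.version").
    @SuppressWarnings("unchecked")
    public String get(String dottedPath) {
        Object node = message;
        for (String part : dottedPath.split("\\.")) {
            node = ((Map<String, Object>) node).get(part);
        }
        materializedFields++;
        return String.valueOf(node);
    }

    public static void main(String[] args) {
        // Illustrative message shape only.
        Map<String, Object> msg = Map.of(
            "source", Map.of("version", "2.7.0", "connector", "postgresql"),
            "op", "c");
        LazyMessageHost host = new LazyMessageHost(msg);
        System.out.println(host.get("source.version")); // only this field crosses
        System.out.println(host.materializedFields);    // 1
    }
}
```

However large the message gets, the guest only pays for the fields it actually reads.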

Case Study 4 – SQLite

I want to speak a little bit about SQLite. SQLite is very popular, used everywhere: on your mobile devices, on your laptops, in a lot of desktop applications, on embedded devices, everything you can think of. It's very widespread. We don't have a really great SQLite story in the Java world, so SQLite is not used in many Java projects, because SQLite is written in C, and calling from Java into C brings all the problems and issues we analyzed before. But we can reuse the ahead-of-time compiler we used with Prism and JRuby. We take the SQLite amalgamation, the exact source code produced and distributed in SQLite releases, and compile it to WebAssembly first. Once we have the WebAssembly version of SQLite, we translate the WebAssembly bytecode into Java bytecode. The result runs as plain Java methods on the JVM, so we have all the functionality of SQLite directly embedded in a Java program.

Running all those C functions and all the database algorithms translated from Wasm had problems. We noticed it was really slow, and we started profiling to find the root cause. You can't see this flame graph very closely, but what is highlighted there, what takes most of the span of the flame graph and therefore most of the execution time, is the call to WebAssembly's memory.grow. How does memory.grow work? SQLite wants to allocate a string or some space in the heap, and when the allocation doesn't fit the current size of the memory, it asks to grow the memory a little. SQLite happens to call memory.grow one by one for each allocation that exceeds the current memory size: you have a memory of 1, you need to store 2 bytes, and it calls memory.grow so it can store the 2 bytes it needs.

Then this happens again for 3 bytes, for 5 bytes. Those little incremental grows happen one after another, for each and every allocation SQLite makes. That doesn't look so bad, but our first implementation of the allocator simply respected what the guest module, SQLite in this case, was asking for: it grew the memory exactly as the WebAssembly module requested. Growing the memory is extremely costly, because the WebAssembly memory is, in our case, just a byte array. Think of the guest module's memory as a byte array: to grow it, you cannot simply resize the array in place. You have to create a new array with the new size, copy the entire content of the previous array into the new one, and go on from there. It's an extremely costly operation, and if you do it over and over again, you end up with the flame graph we have seen.

We implemented pre-allocation, which is nothing new: when the WebAssembly module asks us for 2 bytes of memory, we pre-allocate a whole chunk and show the module only the 2 bytes it asked for. We decoupled the memory allocated by the runtime from the memory requested by the WebAssembly module, so most memory.grow operations end up being no-ops. The effect was macroscopic, even in a test run. In one specific test we were running about 50,000 inserts, and it was extremely slow, taking more than 50 seconds to execute a single test. Just by decoupling the memory allocated by the runtime from the memory requested by the WebAssembly module, we have been able to be fast again.
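A minimal Java sketch of the pre-allocation idea (a simplification of what a real runtime's memory class does; the 4096-byte initial capacity and doubling policy here are illustrative choices, not Chicory's actual numbers):

```java
import java.util.Arrays;

// Sketch of pre-allocation: the byte-array "memory" keeps a larger physical
// capacity than the size the guest has requested, so most grow calls are pure
// bookkeeping, and the costly copy-to-a-bigger-array happens rarely.
public class GrowableMemory {
    private byte[] buffer = new byte[4096]; // pre-allocated physical capacity
    private int size = 0;                   // size the guest believes it has
    int copies = 0;                         // how many times we really reallocated

    // Grow the guest-visible memory by delta bytes; reallocate only on overflow.
    public void grow(int delta) {
        size += delta;
        if (size > buffer.length) {
            int newCapacity = Math.max(buffer.length * 2, size);
            buffer = Arrays.copyOf(buffer, newCapacity); // the expensive path
            copies++;
        }
    }

    public int size() {
        return size;
    }

    public static void main(String[] args) {
        GrowableMemory memory = new GrowableMemory();
        for (int i = 0; i < 1000; i++) {
            memory.grow(2); // tiny incremental grows, as SQLite issues them
        }
        System.out.println(memory.size()); // 2000 guest-visible bytes
        System.out.println(memory.copies); // 0: all fit in the pre-allocated chunk
    }
}
```

With the naive allocator, those 1000 tiny grows would mean 1000 array copies; with pre-allocation they cost nothing until the chunk is exhausted.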

Conclusions

Let me draw some conclusions about using WebAssembly in real-world projects today. Use it when you need safety, a very secure environment; when you want to port your plugins anywhere; when you want to be fast, because at the end of the day WebAssembly was designed to overcome JavaScript's limitation of not being fast enough; and when you want to be polyglot and let your users write their functions in whatever language they want. It helps especially when you have a constrained problem: WebAssembly works extremely well when you have a very stable, well-defined low-level API in C, C++, or Rust, or a very stable application binary interface, as with single message transformations or queuing systems, so there is a very clear boundary between your application and what the WebAssembly module should do.

There are a lot more use cases where you can apply it. You don't have to fear the sloth: your program doesn't need to be slow and doesn't need to run slow. You can take advantage of WebAssembly, run fast again, and execute on any platform you care about.

Questions and Answers

Participant 1: What’s the future for Chicory? Are there any big projects, or features you might want to have?

Andrea Peruffo: I'm currently working full-time on Chicory, on the core implementation. We are implementing a lot of proposals now. We already have exception handling implemented in the interpreter, and we are bringing it to the compiler. The next one will be WasmGC, which will be heavy lifting, to be able to run Kotlin, Scala, and Dart, for example, on top of Chicory. That's the core development. Regarding use cases, we are looking at much closer integrations with other runtimes and projects, to make WebAssembly run in Java user applications without people even noticing. The first one I will probably focus on next is a tight integration with the QuickJS runtime, which lets you run JavaScript in a sandboxed environment. This is extremely useful for Java programs, which nowadays are more often than not compiled with GraalVM native image and are therefore very static: it restores the dynamism we used to have on the JVM, in this case by running JavaScript for plugins and those kinds of systems. I'm happy to hear about even more use cases and to support them, because we have many things going on.

Participant 2: There is another tool that Red Hat says is super-fast, and that's Quarkus. How do you see WebAssembly and Quarkus, or Chicory and Quarkus, joining forces?

Andrea Peruffo: Chicory has been designed from the ground up to run in Quarkus. It runs in Quarkus. It just generates pure bytecode, which is what Quarkus really loves, so you can use Chicory, produce the bytecode you need, and if you go through a Quarkus application, you get all the advantages of compiling with native image and running at the speed of Quarkus. For the future, we foresee much closer integration with the framework, to take advantage of Chicory more easily with a more Quarkus-idiomatic API, rather than them being two separate projects that happen to play well together.

Participant 3: How does the sandboxing in Chicory work?

Andrea Peruffo: The sandboxing works as in any WebAssembly runtime. The first thing is that WebAssembly defines your memory as bounded: it has an initial size and it has a limit. The implementation, at the end of the day, is nothing magic; it's literally just a byte array. The memory safety of your WebAssembly module depends on the size you define for the byte array that serves as the module's memory. This is extremely safe: there is no escaping that memory, nothing. It is allocated by the JVM and lives fully inside the JVM. The second question that normally follows is that you usually need to bound the computation as well, the CPU time your WebAssembly module will use. We tackle this in two ways. The first is honoring thread interruption on the JVM: if you run your WebAssembly module on a thread and the thread receives an interrupt, the interrupt is honored. So the program can easily have a timeout; you can time out the execution after 10 seconds or whatever. The second way is a very low-level listener in the interpreter. This is not available when you translate from WebAssembly to Java bytecode. In the interpreter, you get a very low-level callback at each opcode execution, so you have fine-grained visibility into what is being executed in the WebAssembly runtime. You can implement things like fuel, or run for a thousand instructions, or run for a thousand instructions that only contain certain operations. That's what we have.
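A minimal sketch of that memory model in plain Java (illustrative only; Chicory's actual memory class has a much richer API): the guest's linear memory is a byte array of fixed maximum size, so every access is bounds-checked by the JVM and an escape attempt surfaces as an exception the runtime can turn into a Wasm trap:

```java
// Sketch of the sandbox memory described above: a plain byte array with a
// host-chosen maximum. The guest can never address anything outside it,
// because the JVM's own array bounds check traps any out-of-range access.
public class SandboxedMemory {
    private final byte[] memory;

    public SandboxedMemory(int maxBytes) {
        this.memory = new byte[maxBytes]; // the sandbox boundary, set by the host
    }

    public void write(int addr, byte value) {
        memory[addr] = value; // out-of-range addr throws, i.e. the guest traps
    }

    public byte read(int addr) {
        return memory[addr];
    }

    public static void main(String[] args) {
        SandboxedMemory mem = new SandboxedMemory(64);
        mem.write(10, (byte) 42);
        System.out.println(mem.read(10)); // 42
        try {
            mem.write(64, (byte) 1); // one past the limit
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("trapped: guest tried to escape its memory");
        }
    }
}
```

Because the array lives on the JVM heap, it also counts against the JVM's own memory limits, which is the cloud-native benefit mentioned earlier.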

Participant 4: If you use ahead-of-time compiling, you inject instructions that from time to time check the thread interruption?

Andrea Peruffo: No. This is available only in the interpreter mode.

Participant 4: In AOT, you can’t actually interrupt the thread?

Andrea Peruffo: No, you can: thread interruption checks are not injected at every instruction, but they are emitted at all the sensitive control-flow instructions. We don't check for interruption during straight-line code execution, only at control flow. Control flow happens very often in any Wasm module that does anything useful. We check for interruption in loops, blocks, and unconditional jumps, for those kinds of opcodes. This happens in the translated Java bytecode as well.
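A sketch of what checking only at control flow looks like, written as ordinary Java rather than the bytecode Chicory actually emits: the loop back-edge tests the interrupt flag, while the straight-line body runs unchecked, and the check surfaces as an exception the host can treat as a trap:

```java
// Illustrative only: a "compiled" hot loop with an interruption check on the
// loop back-edge, mirroring the strategy of instrumenting control-flow
// opcodes (loops, blocks, jumps) but not straight-line code.
public class InterruptibleLoop {
    public static long countTo(long n) {
        long i = 0;
        while (i < n) { // loop back-edge: a control-flow point
            if (Thread.currentThread().isInterrupted()) {
                // surfaces like a trap; the host catches it as a timeout
                throw new RuntimeException("interrupted");
            }
            i++; // straight-line body: no checks here
        }
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread guest = new Thread(() -> {
            try {
                countTo(Long.MAX_VALUE); // effectively infinite work
            } catch (RuntimeException e) {
                System.out.println("guest stopped: " + e.getMessage());
            }
        });
        guest.start();
        Thread.sleep(100); // let the guest run a bit
        guest.interrupt(); // host-imposed timeout
        guest.join();
    }
}
```

The cost is one flag read per iteration instead of per instruction, which is why instrumenting only control flow keeps the compiled code fast while still allowing the host to cancel a runaway guest.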

Participant 4: This generates a trap, or you can continue?

Andrea Peruffo: It generates an exception.

 
