Join us as we dive deep into the evolution of blockchain infrastructure, data availability, and decentralization with Douwe Faasen, co-founder of HyveDA. From early coding adventures to pioneering high-throughput solutions, this interview unveils the visionary insights behind the next generation of decentralized technology.
Ishan Pandey: Hi Douwe Faasen, welcome to our ‘Behind the Startup’ series. Your journey from coding at 11 to founding HyveDA is fascinating. How did your early experiences with Bitcoin faucets and smart contracts shape your vision for data availability solutions?
Douwe Faasen: Hey Ishan, thank you, it’s great to have the opportunity to be a part of this series. And thank you for that question, I always enjoy reminiscing about the past. It was quite a journey for me to go from simplistic websites to building a high-throughput data availability solution. Along the way, I experimented with Bitcoin faucets and started learning how to develop smart contracts in 2017.
That taught me a lot about the atomicity of blockchains, but it also exposed me to the problem of data infrastructure limitations in blockchain development. Ethereum often became too restrictive to build something that scales, so teams would spin up their own chain where they could issue the gas token themselves and build at a much bigger scale. Ultimately, people stuck to Ethereum and those chains became ghost chains.
This observation, coupled with my passion for data, gave me the drive to build the tools that let decentralized technologies reach scale without leaving behind the network effects of existing blockchains. HyveDA is the result of this: we’re building data infrastructure for rollups, validiums, app chains, and verifiable services to build upon, with limitless scale and capacity.
Ishan Pandey: We’re seeing a significant trend in the re-utilization of Ethereum infrastructure, particularly with “based” rollups. How do you see this evolution impacting the future of blockchain scalability?
Douwe Faasen: I think re-utilization is the perfect framing for this. Based rollups offer quite a few improvements over current rollup designs in the way they handle security and decentralization. One notable element is that they leverage Ethereum validators for sequencing while outsourcing execution to the rollup’s execution nodes, which is a great modular design and can be very scalable.

In my opinion, based rollups are a great testament to Ethereum’s ability to operate as a settlement layer, but they also highlight the importance of data availability.
When scaling execution horizontally and vertically is no longer a problem for rollups, the bottleneck becomes data availability. Rollups need to ensure that the data required to continue the chain and to verify its results is available to all participants, including other rollups that need to interoperate with each other. With all these puzzle pieces aligned, blockchain scalability becomes limitless.
Ishan Pandey: You’ve been vocal about the importance of home stakers in maintaining true decentralization. Could you elaborate on why this matters in an era where high-TPS solutions often favor large data centers?
Douwe Faasen: Solo-stakers are a crucial part of the puzzle in mitigating the risks posed by centralized entities and protecting networks against threats like censorship or 51% attacks. I strongly believe that our focus should be on building truly decentralized networks that are open to everyone and resilient to attacks from large, centralized actors. Security should never be compromised for the sake of performance.
Even in high-TPS environments, there are several ways to protect decentralization. For instance, you can design the system so that each node processes only a portion of the data, rather than requiring all nodes to handle the entire dataset. This portion can be so small that even a basic home computer can handle it at scale, ultimately contributing to greater throughput for the network as a whole.
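To make that idea concrete, here is a minimal, hypothetical Python sketch (my illustration, not HyveDA’s actual protocol) of splitting a blob into chunks and assigning each chunk to a handful of nodes, so that no single node ever stores the full dataset:

```python
import hashlib

def assign_chunks(blob: bytes, num_nodes: int, chunk_size: int, replication: int):
    """Split a blob into fixed-size chunks and assign each chunk to
    `replication` distinct nodes, so no node ever holds the full blob."""
    chunks = [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]
    assignment = {node: [] for node in range(num_nodes)}
    for idx in range(len(chunks)):
        # Deterministic pseudo-random placement derived from the chunk index.
        seed = int.from_bytes(hashlib.sha256(idx.to_bytes(8, "big")).digest()[:8], "big")
        for r in range(replication):
            assignment[(seed + r) % num_nodes].append(idx)
    return chunks, assignment

chunks, assignment = assign_chunks(bytes(1_000_000), num_nodes=100,
                                   chunk_size=1_000, replication=3)
heaviest = max(len(ids) for ids in assignment.values()) * 1_000
print(f"{len(chunks)} chunks; heaviest node stores ~{heaviest:,} bytes of 1,000,000")
```

With 100 nodes and a replication factor of 3, each node ends up holding roughly 3% of the blob, which is exactly the kind of per-node load a basic home machine can handle.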
Ishan Pandey: With your background in building indexers before The Graph, what unique insights have you gained about data availability challenges in the blockchain space?
Douwe Faasen: Indexer networks and data availability are completely different concepts, but they do share a few underlying patterns. You could even argue that The Graph helps maintain data availability for indexed data. Of course, the security assumptions and cryptographic properties are fundamentally different. Still, one key thing I’ve learned from building my own indexers, as well as working with The Graph and StreamingFast, is just how much data redundancy can cost you.

Nodes can disappear at any moment, and as a user, you never want that to affect your application. That means you’ve got to replicate your data across multiple nodes. Naturally, this gets expensive. And that concept of redundancy is one of the reasons data availability (DA) became such an important topic in the Ethereum community.
Sure, having all nodes process the same transaction data is great, because you only need one honest node to ensure data integrity. But it’s also a bottleneck, because to scale, every node needs to scale up too, and that just cranks up the network’s costs.

A high-throughput network really needs to split data responsibilities across several nodes to ensure redundancy. However, you can’t just replicate everything everywhere, or you end up with a massive bottleneck. Striking that balance between redundancy and efficiency is one of the things that makes data availability such a challenging problem.
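As a back-of-the-envelope illustration (my numbers, not Hyve’s), compare full replication with k-of-n erasure coding, where any k fragments are enough to reconstruct the blob:

```python
def storage_cost(blob_mb: float, num_nodes: int, k: int) -> dict:
    """Total storage for full replication vs. k-of-n erasure coding.

    Full replication: every node keeps the whole blob.
    Erasure coding: the blob is encoded into `num_nodes` fragments such
    that any `k` of them reconstruct it, so each node stores blob_mb / k.
    """
    full = blob_mb * num_nodes
    erasure = (blob_mb / k) * num_nodes
    return {"full_replication_mb": full,
            "erasure_coded_mb": erasure,
            "savings_factor": full / erasure}

# A 1 MB blob on 100 nodes, reconstructable from any 50 fragments:
print(storage_cost(1.0, num_nodes=100, k=50))
# {'full_replication_mb': 100.0, 'erasure_coded_mb': 2.0, 'savings_factor': 50.0}
```

The arithmetic favors erasure coding heavily, but the coded fragments must be regenerated whenever nodes churn, which is part of why the redundancy-versus-efficiency balance is genuinely hard.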
Ishan Pandey: There’s ongoing debate about the balance between decentralization and verifiability, especially with ZK technology. How does HyveDA approach this balance in its solutions?
Douwe Faasen: That’s a great question. I personally believe that ZK is a fantastic tool to prove that execution was done right. You would only need consensus for the ordering of the transactions and to ensure the state is correct after applying the proofs. We’re actively exploring ZK for DA and it’s a great way to aid decentralization.
ZK removes the requirement for any kind of trust between nodes, which means that more nodes can join the network and consensus can be reached in a simpler way. Ultimately, this increases verifiability and decentralization at the same time.

In the context of DA, we’re particularly interested in how ZK can ensure correctness without requiring extensive overhead. For example, ZK proofs could be used to guarantee that nodes in the DA network are storing and serving data correctly, while Ethereum’s consensus ensures the global integrity and sequencing of these proofs.
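Real ZK storage proofs involve heavy machinery, but the shape of the interaction can be shown with a simpler stand-in. The hypothetical Python sketch below uses a Merkle commitment and challenge-response openings in place of an actual ZK proof of custody: the root would live on-chain, and a DA node answers challenges by opening random chunks against it:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commitment to the chunks a DA node claims to store."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes (plus left/right flags) proving one chunk is in the tree."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root: bytes, chunk: bytes, path: list[tuple[bytes, bool]]) -> bool:
    node = h(chunk)
    for sibling, is_left in path:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)            # posted on-chain as the commitment
proof = merkle_path(chunks, 5)        # the node answers a challenge for chunk 5
assert verify(root, chunks[5], proof)
```

A real ZK proof of custody would compress many such openings into one succinct proof, so the chain verifies storage without ever touching the data itself.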
Ishan Pandey: You’ve mentioned that real use cases will be enabled by verifiability through Ethereum rather than complete decentralization. Can you share some concrete examples of how this might play out?
Douwe Faasen: Absolutely. Let me start by saying that decentralization remains extremely important, and trustless verifiability wouldn’t exist without it. Verifiability through decentralized consensus-based systems like Ethereum means you can create trustless guarantees that operations or computations were correctly executed and applied, without every step having to run in a decentralized setup. Let me give you some examples of that:

Game engines could run their infrastructure off-chain, enabling the high throughput they usually deal with, but periodically prove game state transitions and submit them on-chain. The chain’s consensus guarantees correct game outcomes, but none of the logic or assets are hosted on-chain.
Players, especially in games with expensive in-game assets, can remain confident they’re not being cheated.

Another use case would be decentralized orderbooks. Trade matching and execution would happen off-chain, but the proofs would be verified on-chain. A decentralized exchange like this could be run by anyone, without needing expensive and lengthy consensus for order matching. Another fun thing about this is that you could design it in such a way that deposits and withdrawals simply happen on-chain, also minimizing any trust assumptions in a centralized entity.

Payment systems can also profit from this.
Consensus creates several limitations, such as latency and bandwidth constraints. For a payment processor, it would be hard to handle millions of transactions per second with a consensus algorithm in the loop. What matters more to end users is transparency on fees and accounting. Decentralized verifiability can enable this without relying on a consensus algorithm inside the payment processor itself.
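The pattern behind all three examples is the same: execute off-chain, verify on-chain. Here is a deliberately simplified Python sketch of that flow for the orderbook case. The `prove` function is a hash-based stand-in, not a real ZK proof; a real verifier would reject incorrect state transitions without ever re-seeing the batch:

```python
import hashlib
import json

def commit(obj) -> str:
    """Hash commitment to any JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# --- Off-chain: high-throughput execution, e.g. an order-matching engine ---
def execute_batch(state: dict, trades: list) -> dict:
    new_state = dict(state)
    for t in trades:
        new_state[t["buyer"]] = new_state.get(t["buyer"], 0) + t["qty"]
        new_state[t["seller"]] = new_state.get(t["seller"], 0) - t["qty"]
    return new_state

def prove(old_state, trades, new_state) -> str:
    # Stand-in for a ZK proof: binds the old state, the batch, and the result.
    # A real proof would also attest that new_state = execute_batch(old_state, trades).
    return commit({"old": commit(old_state),
                   "batch": commit(trades),
                   "new": commit(new_state)})

# --- "On-chain": the contract stores only a state commitment and checks proofs ---
class SettlementContract:
    def __init__(self, genesis_state: dict):
        self.state_root = commit(genesis_state)

    def submit(self, old_state, trades, new_state, proof) -> bool:
        ok = (commit(old_state) == self.state_root
              and proof == prove(old_state, trades, new_state))
        if ok:
            self.state_root = commit(new_state)  # advance the canonical state
        return ok

state = {"alice": 10, "bob": 5}
trades = [{"buyer": "bob", "seller": "alice", "qty": 3}]
new_state = execute_batch(state, trades)

contract = SettlementContract(state)
assert contract.submit(state, trades, new_state, prove(state, trades, new_state))
print(contract.state_root == commit({"alice": 7, "bob": 8}))  # True
```

Note that the contract never runs the matching engine; it only tracks a commitment and accepts proofs, which is what keeps the on-chain footprint small regardless of off-chain throughput.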
Ishan Pandey: Looking ahead, what developments in data availability and blockchain infrastructure are you most excited about, and how is HyveDA positioning itself for these changes?
Douwe Faasen: There is a lot happening in infrastructure right now and it’s all pretty exciting. The most exciting thing for the industry as a whole, in my opinion, is how far ZK has come in such a short period of time. It is technology that can advance decentralization and distributed systems in terms of security, speed, and reliability, but previously required a team of PhDs to actually implement it.
A big shoutout is due here to Succinct, who have eliminated this requirement: developers can now build with ZK without extensive prior academic knowledge on the topic. At Hyve, we’re actively researching and developing ways to bring ZK into our data availability layer for faster finality and simpler data availability security assumptions. This will ultimately make us even more reliable for DA integrators, while maintaining the same throughput.
Ishan Pandey: Thank you for your time and insights, Douwe!