Traditional social platforms control your data, decide what you see, and can censor or make money from your content without asking. By switching to a browser-only P2P model, each user’s browser acts as both client and server, allowing them to own their data and decide who to connect with. This removes central control points, giving users complete control over their posts, timelines, and privacy.
Difference between a centralized micro-blog (Twitter) and a browser-only P2P mesh
On Twitter(X), every tweet goes through and is stored on Twitter’s servers. You depend on their uptime, their rules, and their business model. In a browser-only P2P mesh, users find each other directly (with minimal signaling), share updates over WebRTC, and use a CRDT-based store to stay in sync. There’s no single authority, no server farm, and no central point of failure.
Prerequisites
Before starting with the code, make sure you have the latest version of Node.js (version 20 LTS or higher) installed, and choose a package manager like npm or pnpm. You also need a modern browser that supports WebRTC and ES modules. I recommend using the latest version of a Chromium-based browser (Chrome, Edge, or Brave) or Firefox. These browsers provide the necessary APIs for peer-to-peer connections, IndexedDB storage, and ES module imports without additional bundling.
On the conceptual side, you should be familiar with the basics of WebRTC handshakes. Understanding ICE candidates, STUN/TURN servers, and the SDP offer/answer exchange will be important when we set up peer communication. It will also help to know about CRDTs (Conflict-free Replicated Data Types) and how they manage updates in distributed systems. If you’ve used libraries like Yjs or Automerge, you’ll recognize similar concepts in our timeline store: every peer eventually agrees on the same order of posts, even if they go offline or lose network connections.
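If CRDTs are new to you, the core idea fits in a few lines. Below is a sketch of a last-writer-wins register, one of the simplest CRDTs; Yjs uses far more sophisticated structures, but the convergence guarantee is the same: any two replicas that exchange states end up identical, no matter the order of delivery.

```typescript
// A last-writer-wins (LWW) register: each replica holds a value plus a
// (timestamp, replicaId) tag; merging keeps whichever tag sorts higher.
type LWW<T> = { value: T; ts: number; replica: string }

function merge<T>(a: LWW<T>, b: LWW<T>): LWW<T> {
  // Higher timestamp wins; the replica id breaks ties deterministically,
  // so merge(a, b) and merge(b, a) always pick the same state
  if (a.ts !== b.ts) return a.ts > b.ts ? a : b
  return a.replica > b.replica ? a : b
}
```

Because merge is commutative, associative, and idempotent, peers can gossip states in any order and still converge, which is exactly the property our timeline store relies on.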
If you are new to programming, I will guide you through the process. I will break down each code snippet in this guide so you can understand what we are doing.
Bootstrap the Project Folder
To begin, we’ll set up a new Vite project that’s ready for React and TypeScript. Vite is great because it provides near-instant hot-module replacement and supports ES modules out of the box, which is perfect for our browser-only P2P app.
First, run this in your terminal:
npx create-vite p2p-twitter --template react-ts
Here’s what happens behind the scenes:
- npx create-vite starts Vite’s project setup without needing a global install.
- p2p-twitter is used as both the folder name and the project’s npm package name.
- --template react-ts tells Vite to set up a React project with TypeScript, including tsconfig.json, React build settings, and type-safe JSX.
Once that command finishes, change into your new directory:
cd p2p-twitter
Inside, you’ll see the default Vite structure: a src folder with main.tsx and App.tsx, a public folder for static assets, and basic configuration files (package.json, tsconfig.json, vite.config.ts). Now, start the development server to make sure everything is working:
npm install # or `pnpm install` if you prefer
npm run dev
Visit the URL shown in your terminal (usually http://localhost:5173) to see the Vite welcome screen. With the basic setup running smoothly, you’re ready to add P2P signalling, WebRTC channels, and the other features of our serverless Twitter clone.
Add a Minimal Signalling Stub
Our peer-to-peer mesh needs a simple “lobby” to handle the initial handshake (exchanging session descriptions and ICE candidates) before browsers can communicate directly. Instead of reaching for a helper package such as tiny-ws, we’ll build a minimal stub with the ws library. Once peers have each other’s connection info, all further data flows peer-to-peer over WebRTC.
1. Install the signalling library
npm install ws
Tip: To use ESM import syntax in Node, make sure your package.json includes "type": "module", or rename your stub file to server.mjs.
2. Create the signalling server
Create a file called server.js (or server.mjs):
import { WebSocketServer, WebSocket } from 'ws';

const PORT = 3000;
const wss = new WebSocketServer({ port: PORT });

console.log(`⮞ WebSocket signalling server running on ws://localhost:${PORT}`);

wss.on('connection', (ws) => {
  console.log('⮞ New peer connected');

  ws.on('message', (data) => {
    // Broadcast to all *other* clients
    for (const client of wss.clients) {
      // the OPEN constant lives on WebSocket, not WebSocketServer
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(data);
      }
    }
    console.log('⮞ Broadcasted message to peers:', data.toString());
  });

  ws.on('close', () => console.log('⮞ Peer disconnected'));
});
This stub will:
- Listen on port 3000 for incoming WebSocket connections.
- When one client sends a message (an SDP offer/answer or ICE candidate), forward it to every other connected client.
- Log connections, broadcasts, and disconnections to the console.
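Stripped of the WebSocket plumbing, the relay’s core is a single broadcast step. Here is that step as a pure, testable function; PeerLike and the OPEN constant mirror the shape of the ws library’s client objects (OPEN is 1), but the function itself is an illustrative sketch, not part of ws:

```typescript
// Forward a message to every open client except the sender;
// returns how many peers actually received it.
interface PeerLike { readyState: number; send(data: string): void }
const OPEN = 1 // same value as ws's WebSocket.OPEN

function broadcast(clients: Iterable<PeerLike>, sender: PeerLike, data: string): number {
  let forwarded = 0
  for (const client of clients) {
    if (client !== sender && client.readyState === OPEN) {
      client.send(data)
      forwarded++
    }
  }
  return forwarded
}
```

Everything else in the stub is bookkeeping around this one loop, which is why no heavier signalling framework is needed.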
3. Add a convenience script
In your package.json, under "scripts", add:
{
  "scripts": {
    "dev:signal": "node server.js",
    // …your existing scripts
  }
}
4. Run your app + signalling stub
1. Start the Vite React app (usually on port 5173):

   npm run dev

2. In a second terminal, start the signalling server:

   npm run dev:signal

3. Open two browser windows pointing at your React app. In the signalling server’s terminal you’ll see logs like:

   ⮞ New peer connected
   ⮞ Broadcasted message to peers: {"type":"offer","sdp":"…"}
Once both offer and answer messages appear, your peers have exchanged ICE and SDP, and can establish a direct WebRTC connection.
With just this tiny stub, you’ve replaced tiny-ws without adding any heavyweight dependencies or extra servers: just the essential broadcast logic to bootstrap your P2P Twitter clone.
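On the browser side, all a client needs is a WebSocket plus JSON framing. Here is a minimal sketch, written against a WsLike interface so it can be exercised outside the browser; the helper names (makeSignalling, sendSignal, onSignal) are our own convention, not an established API:

```typescript
// Minimal browser-side counterpart to the signalling stub.
// The stub relays raw data, so JSON framing is purely our convention.
interface WsLike {
  send(data: string): void
  addEventListener(type: 'message', fn: (evt: { data: string }) => void): void
}

function makeSignalling(ws: WsLike) {
  return {
    // serialize and ship a signalling message (offer/answer/ICE)
    sendSignal(msg: unknown) {
      ws.send(JSON.stringify(msg))
    },
    // parse every incoming frame and hand it to the caller
    onSignal(handler: (msg: any) => void) {
      ws.addEventListener('message', (evt) => handler(JSON.parse(evt.data)))
    }
  }
}
```

In the browser you would construct it as makeSignalling(new WebSocket('ws://localhost:3000')) and pass sendSignal/onSignal into the WebRTC setup in the next section.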
Establish Browser-to-Browser WebRTC Channels
Browsers can’t connect directly until they share enough information to find each other. That’s where ICE (Interactive Connectivity Establishment) helps: it collects possible connection points (your local IPs, your public IP discovered through a STUN server, and a TURN relay if no direct path works). Once the peers have exchanged ICE candidates and a pair of SDP (Session Description Protocol) blobs, an offer from one peer and an answer from the other, an RTCPeerConnection ties everything together.

In your React app, create a module (for example webrtc.ts) that exports a function to set up a peer connection:
// webrtc.ts
export async function createPeerConnection(
  sendSignal: (msg: any) => void,
  onData: (data: any) => void,
  initiator = true // only one side should create the offer, or the two offers collide
) {
  const config = {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' }
    ]
  };
  const pc = new RTCPeerConnection(config);

  // negotiated: true with a fixed id means both sides construct the same
  // channel locally, so no in-band negotiation round is needed
  const channel = pc.createDataChannel('chat', {
    negotiated: true,
    id: 0,
    maxPacketLifeTime: 3000
  });
  channel.binaryType = 'arraybuffer';
  channel.onmessage = ({ data }) => onData(data);
  (window as any).channel = channel; // expose for quick console testing

  pc.onicecandidate = ({ candidate }) => {
    if (candidate) sendSignal({ type: 'ice', candidate });
  };

  // Fires only for additional, non-negotiated channels the remote peer opens
  pc.ondatachannel = ({ channel: remote }) => {
    remote.binaryType = 'arraybuffer';
    remote.onmessage = ({ data }) => onData(data);
  };

  // Begin handshake (initiator only; the other side answers in handleSignal)
  if (initiator) {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendSignal({ type: 'offer', sdp: pc.localDescription });
  }

  return async function handleSignal(message: any) {
    if (message.type === 'offer') {
      await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      sendSignal({ type: 'answer', sdp: pc.localDescription });
    } else if (message.type === 'answer') {
      await pc.setRemoteDescription(new RTCSessionDescription(message.sdp));
    } else if (message.type === 'ice') {
      await pc.addIceCandidate(new RTCIceCandidate(message.candidate));
    }
  };
}
Here’s what’s happening:
- The RTCPeerConnection is configured with a public STUN server so each browser can discover its public-facing address.
- We immediately open a data channel named “chat” with negotiated parameters (no out-of-band negotiation round) and allow up to 3 seconds of packet retransmission (maxPacketLifeTime). Setting binaryType = 'arraybuffer' ensures we can handle both text and binary blobs later.
- As ICE candidates are gathered, onicecandidate fires; we serialize each candidate through our signalling stub.
- If the remote peer opens an additional, non-negotiated channel, ondatachannel catches it so both sides can send and receive.
- We kick off negotiation by creating an SDP offer, sending it, then waiting in handleSignal to react to offer, answer, or ICE messages.
To test, connect this to your app’s console. In one tab, run:

window.signalHandler = await createPeerConnection(msg => ws.send(JSON.stringify(msg)), data => console.log('received', data));

…in another tab, do the same (here ws is a WebSocket connected to the signalling stub), and forward incoming WebSocket messages into window.signalHandler(JSON.parse(evt.data)). Once both peers have exchanged the offer, answer, and all ICE candidates, type into one console (this assumes the data channel is reachable there, e.g. exposed as window.channel):
const buf = new TextEncoder().encode('ping');
channel.send(buf);
The other tab’s console should log received ArrayBuffer(4) (run new TextDecoder().decode(data) on it to get 'ping' back), confirming a direct browser-to-browser channel. From this point, every message, including our future CRDT updates, travels peer-to-peer without ever touching a backend server again.
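Since this channel will soon carry both JSON control messages (strings) and raw binary payloads (ArrayBuffers), it helps to separate the two kinds up front. A small, testable sketch of that dispatch; the Incoming type and function name are our own convention:

```typescript
// Classify incoming data-channel traffic: strings are JSON control
// messages, everything else is raw binary.
type Incoming =
  | { kind: 'json'; value: unknown }
  | { kind: 'binary'; bytes: Uint8Array }

function decodeIncoming(data: string | ArrayBuffer): Incoming {
  if (typeof data === 'string') {
    return { kind: 'json', value: JSON.parse(data) }
  }
  return { kind: 'binary', bytes: new Uint8Array(data) }
}
```

The chunked blob-transfer code later in this guide relies on exactly this string-versus-ArrayBuffer distinction.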
Wire Up a CRDT Timeline Store
To keep every peer’s timeline in sync, even when someone goes offline or multiple people post at once, we’ll use Yjs, a battle-tested CRDT library, along with its y-webrtc adapter. Yjs ensures all updates merge without conflicts, while y-webrtc propagates them over WebRTC data channels that it manages for each room.
First, install both packages in your project root:
npm install yjs y-webrtc
Here, yjs provides the core CRDT types and algorithms; y-webrtc connects peers in the same room over its own WebRTC data channels so changes propagate quickly to every connected peer.
Next, create a new file (src/crdt.ts) to initialize and export your shared timeline:
// src/crdt.ts
import * as Y from 'yjs'
import { WebrtcProvider } from 'y-webrtc'

// A Yjs document represents the shared CRDT state
const doc = new Y.Doc()

// The room name “p2p-twitter” ensures all peers join the same room
const provider = new WebrtcProvider('p2p-twitter', doc, {
  // optional: custom signalling servers, a room password, etc.
})

// Use a Y.Array to hold an ordered list of posts
const posts = doc.getArray<{ id: string; text: string; ts: number }>('posts')

// Whenever `posts` changes, fire a callback so the UI can re-render
posts.observe(() => {
  // you’ll wire this to your React state later
  renderTimeline(posts.toArray())
})

export function addPost(text: string) {
  const entry = { id: crypto.randomUUID(), text, ts: Date.now() }
  // CRDT push: this update goes to every peer
  posts.push([entry])
}

export function getPosts() {
  return posts.toArray()
}
Here’s what happens above:
- A Y.Doc holds all your CRDT types in one in-memory document.
- WebrtcProvider joins the “p2p-twitter” room, and y-webrtc relays Yjs updates between every peer in that room over WebRTC.
- doc.getArray('posts') creates (or returns) a shared array keyed by the string “posts.” Each element is an object containing a UUID, the tweet text, and a timestamp.
- Calling addPost(...) pushes a new entry into the array locally and broadcasts the change to every connected peer.
- Subscribing via posts.observe(...) lets you react to any remote or local update, perfect for calling your React state setter to refresh the UI.
To verify, run your Vite dev server (npm run dev) and open the app in two separate browser windows (or devices). In one window’s console, type (the console needs the dynamic import form):

const { addPost } = await import('/src/crdt.ts')
addPost('Hello from Tab A!')
Almost instantly, in the other window, you should see your renderTimeline callback fire with the new post. That single addPost call demonstrates complete P2P, CRDT-backed replication with no server involved. From here, you can connect addPost to a form submit handler and call getPosts() to initialize your React state, giving every user the same chronological feed of tweets.
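One subtlety worth pinning down now: sorting purely by timestamp leaves the order ambiguous when two posts share the same millisecond, so different peers could render different feeds. A deterministic comparator fixes that; the id tie-break below is our own convention, not something Yjs provides:

```typescript
interface Post { id: string; text: string; ts: number }

// Newest-first ordering with a deterministic tie-break on id,
// so every peer renders an identical feed even for simultaneous posts.
function byNewest(a: Post, b: Post): number {
  if (b.ts !== a.ts) return b.ts - a.ts
  return a.id < b.id ? -1 : a.id > b.id ? 1 : 0
}
```

You can drop byNewest straight into the timeline’s sort call in the UI section that follows.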
Build the Tweet-Like UI
With our CRDT store set up, let’s create a user-friendly interface for writing and viewing posts. We’ll use Tailwind CSS to style quickly without writing custom CSS.

First, install Tailwind (this guide uses the v3 setup flow) and generate its config files:

npm install -D tailwindcss@3 postcss autoprefixer
npx tailwindcss init -p
This creates tailwind.config.js and a PostCSS setup. In your tailwind.config.js, ensure the content array covers all your React files:
/** @type {import('tailwindcss').Config} */
export default {
  content: ['./index.html', './src/**/*.{js,ts,jsx,tsx}'],
  theme: { extend: {} },
  plugins: []
}
Next, open src/index.css and replace its contents with Tailwind’s directives:
@tailwind base;
@tailwind components;
@tailwind utilities;
Now every utility class, like p-4, bg-gray-100, or rounded-xl, is available.
Inside src/App.tsx, import your CRDT helpers and Tailwind styles:
import React, { useEffect, useState } from 'react'
import './index.css'
import { addPost, getPosts } from './crdt'

export default function App() {
  const [timeline, setTimeline] = useState(getPosts())
  const [draft, setDraft] = useState('')

  // Re-render on CRDT updates
  useEffect(() => {
    const handleUpdate = () => setTimeline(getPosts())
    // wire handleUpdate to posts.observe in crdt.ts; left as a stub here
    return () => { /* unsubscribe when you wire it up */ }
  }, [])

  function submitPost(e: React.FormEvent) {
    e.preventDefault()
    if (!draft.trim()) return
    addPost(draft.trim())
    setDraft('')
  }

  return (
    <div className="max-w-xl mx-auto p-4">
      <form onSubmit={submitPost} className="mb-4">
        <textarea
          value={draft}
          onChange={e => setDraft(e.currentTarget.value)}
          placeholder="What’s happening?"
          className="w-full p-2 border rounded-lg focus:outline-none focus:ring"
          rows={3}
        />
        <button
          type="submit"
          className="mt-2 px-4 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600"
        >
          Tweet
        </button>
      </form>
      <div className="space-y-4">
        {[...timeline] // copy before sorting so React state isn’t mutated in place
          .sort((a, b) => b.ts - a.ts)
          .map(post => (
            <div
              key={post.id}
              className="p-4 bg-gray-100 rounded-lg shadow-sm"
            >
              <p className="text-gray-800">{post.text}</p>
              <time className="text-xs text-gray-500">
                {new Date(post.ts).toLocaleTimeString()}
              </time>
            </div>
          ))}
      </div>
    </div>
  )
}
Here, the form uses Tailwind’s utility classes for padding (p-2), borders (border), and focus states (focus:ring). The button’s background changes on hover, and each post card has a soft shadow and rounded corners (rounded-lg shadow-sm). We link the textarea to the draft state and call addPost on submit. Since our CRDT store broadcasts changes immediately, every connected peer’s getPosts() returns the new entry; once the observer is wired to setTimeline, React re-renders the timeline on every local or remote update.
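The useEffect above is still a stub; one way to complete it is a tiny subscription helper exported from crdt.ts that posts.observe() forwards into. The names subscribe and notify below are assumptions, not Yjs APIs:

```typescript
// A minimal listener registry: the CRDT module calls notify() whenever
// posts change, and React components subscribe/unsubscribe to it.
type Listener = () => void
const listeners = new Set<Listener>()

export function subscribe(fn: Listener): () => void {
  listeners.add(fn)
  // returning the unsubscribe function fits useEffect's cleanup contract
  return () => { listeners.delete(fn) }
}

export function notify() {
  for (const fn of listeners) fn()
}
```

In crdt.ts you would call notify() inside posts.observe(...), and the component effect becomes useEffect(() => subscribe(() => setTimeline(getPosts())), []).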
By connecting your React component hierarchy directly to CRDT updates, you’ve created a dynamic, real-time feed that looks and feels like Twitter’s timeline, yet runs entirely in the browser.
Persist Offline with IndexedDB
Even without an internet connection, we want each user to reload and see their timeline as it was. Yjs provides a y-indexeddb provider that saves the CRDT document in your browser’s IndexedDB and restores it on startup.
Start by installing the package:
npm install y-indexeddb
In your src/crdt.ts, import and wire up the IndexedDB persistence alongside the WebRTC provider:
// existing imports...
import * as Y from 'yjs'
import { WebrtcProvider } from 'y-webrtc'
import { IndexeddbPersistence } from 'y-indexeddb'

const doc = new Y.Doc()

// persist the “p2p-twitter” doc locally
const persistence = new IndexeddbPersistence('p2p-twitter-storage', doc)

persistence.once('synced', () => {
  // IndexedDB snapshot has been loaded into `doc`
  console.log('loaded local snapshot from IndexedDB')
})

// now connect to peers over WebRTC as before
const webrtc = new WebrtcProvider('p2p-twitter', doc, {
  // you can pass provider options here if needed
})

// When you want to force a write of the current state to disk:
export function flushToDisk() {
  // note: check that your y-indexeddb version exposes flush(); the provider
  // already persists every update automatically with a short debounce
  return persistence.flush()
}
Here’s what’s happening: when the page loads, IndexeddbPersistence checks for an existing serialized Yjs document under the key p2p-twitter-storage. If it finds one, it deserializes all CRDT state into doc, triggering any observers right away. This means your renderTimeline callback will show past posts before peers reconnect. The synced event lets you know when this restoration is finished.
Whenever you make new CRDT updates (like with addPost), Yjs automatically schedules writes to IndexedDB in the background, so in practice nothing extra is required. If you want an explicit save point before the user closes the page or goes offline, call await flushToDisk(); if your y-indexeddb version doesn’t expose flush(), the automatic batched writes already persist each change shortly after it happens.
To check that this works, stop your signalling stub (or just disable your network) and reload the tab. You’ll see the console log loaded local snapshot from IndexedDB, and your timeline posts reappear right away. From now on, each peer keeps a local copy of the timeline that survives reloads and offline sessions, and seamlessly resyncs when the network is back.
Share Images & Large Blobs
Text and small JSON objects move smoothly over WebRTC data channels, but large binary files like images or videos need to be split into smaller messages: SCTP data channels impose per-message size limits (roughly 64 KB for reliable cross-browser interop), and oversized messages can fail silently or close the channel. We’ll use a dedicated data channel configured for reliable, ordered binary transfer, and add simple chunking logic on top.
Start by creating a dedicated data channel for file transfer in webrtc.ts with these options:

const channel = pc.createDataChannel('file', {
  ordered: true
  // leaving out maxPacketLifeTime and maxRetransmits makes the channel
  // fully reliable: lost fragments are retransmitted until delivered
});
channel.binaryType = 'arraybuffer';
With ordered: true and neither reliability limit set, the browser will buffer and resend lost fragments until the entire blob is received or the connection closes. Next, write two helper functions: one to send a File or Blob in chunks, and another to reassemble them on the receiving side.
const CHUNK_SIZE = 16 * 1024; // 16 KB per packet

export function sendBlob(blob: Blob) {
  const total = blob.size;
  let offset = 0;
  const id = crypto.randomUUID();

  function sliceAndSend() {
    const end = Math.min(offset + CHUNK_SIZE, total);
    blob.slice(offset, end).arrayBuffer().then(buffer => {
      // announce the chunk, then send its raw bytes
      channel.send(JSON.stringify({ type: 'chunk-meta', id, total, offset }));
      channel.send(buffer);
      offset = end;
      if (offset < total) sliceAndSend();
      else channel.send(JSON.stringify({ type: 'chunk-end', id }));
    });
  }

  sliceAndSend();
}
const incomingBuffers: Record<string, Uint8Array[]> = {};
let currentId: string | null = null; // id announced by the latest chunk-meta

channel.onmessage = (event) => {
  if (typeof event.data === 'string') {
    const meta = JSON.parse(event.data);
    if (meta.type === 'chunk-meta') {
      currentId = meta.id;
      if (!incomingBuffers[meta.id]) incomingBuffers[meta.id] = [];
    } else if (meta.type === 'chunk-end') {
      const full = new Blob(incomingBuffers[meta.id]);
      delete incomingBuffers[meta.id];
      displayImage(full); // your UI hook
    }
  } else if (currentId) {
    // binary ArrayBuffer payload; the ordered channel guarantees it arrives
    // right after the chunk-meta message that announced it
    incomingBuffers[currentId].push(new Uint8Array(event.data));
  }
};
Here’s how it works: when you call sendBlob(file), it breaks the file into 16 KB slices. Before each slice, it sends a small JSON “chunk-meta” message containing the unique transfer ID, total size, and current offset, then sends the raw ArrayBuffer. Once done, it emits a “chunk-end” marker. On the receiving end, string messages initialize or finalize an array of Uint8Arrays; binary messages append to the buffer for the most recently announced ID. After chunk-end, you reconstruct a Blob and call your UI rendering function, perhaps converting it to an object URL and injecting it into an <img> tag.
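The slicing arithmetic in sendBlob can be isolated as a pure function, which makes the boundary cases (a short final chunk, an empty blob) easy to check:

```typescript
// Compute the [start, end) byte ranges sendBlob walks through for a
// payload of `total` bytes; the final chunk is whatever remains, so it
// may be shorter than chunkSize.
function chunkOffsets(total: number, chunkSize = 16 * 1024): Array<[number, number]> {
  const ranges: Array<[number, number]> = []
  for (let offset = 0; offset < total; offset += chunkSize) {
    ranges.push([offset, Math.min(offset + chunkSize, total)])
  }
  return ranges
}
```

Keeping this logic pure also makes it trivial to experiment with different chunk sizes without touching the channel code.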
To test, add a file input to your React form:
<input
  type="file"
  accept="image/*"
  onChange={e => {
    const file = e.target.files?.[0];
    if (file) sendBlob(file);
  }}
/>
Drop an image, and moments later it should appear in every connected peer’s timeline just like a tweet, without any centralized upload server. This chunked, peer-to-peer transfer unlocks truly serverless sharing of rich media.
Secure the Mesh
In a completely serverless mesh, it’s important to authenticate peers and prevent bad actors from injecting fake posts. We’ll use TweetNaCl’s Ed25519 signatures so each message carries a verifiable signature, and peers can check public-key fingerprints out of band before trusting each other.
Run this command once in your project root to install the crypto library:
npm install tweetnacl tweetnacl-util
Here, tweetnacl provides fast, audited primitives for generating key pairs and signing; tweetnacl-util offers helpers to convert between strings, Uint8Arrays, and base64.
In a new module (src/crypto.ts), set up each peer’s identity and export utility functions:
// src/crypto.ts
import nacl from 'tweetnacl'
import { encodeBase64, decodeBase64, decodeUTF8, encodeUTF8 } from 'tweetnacl-util'

// Generate or load a persistent keypair; here we generate fresh on each reload
const { publicKey, secretKey } = nacl.sign.keyPair()

// Compute a stable fingerprint for display (first 8 bytes of SHA-256 of pubkey)
async function fingerprint(pk: Uint8Array) {
  const hash = await crypto.subtle.digest('SHA-256', pk)
  const bytes = new Uint8Array(hash).slice(0, 8)
  return Array.from(bytes).map(b => b.toString(16).padStart(2, '0')).join('')
}

// Sign arbitrary JSON-serializable payloads
export function signMessage(payload: any) {
  const json = JSON.stringify(payload)
  const msgUint8 = decodeUTF8(json)
  const signed = nacl.sign(msgUint8, secretKey)
  return encodeBase64(signed)
}

// Verify and decode a signed message; the author's public key travels
// alongside the signed blob, since nacl.sign.open needs it to verify
export function verifyMessage(signedB64: string, authorB64: string) {
  const signed = decodeBase64(signedB64)
  const opened = nacl.sign.open(signed, decodeBase64(authorB64))
  if (!opened) throw new Error('Invalid signature')
  return JSON.parse(encodeUTF8(opened))
}

export { publicKey, fingerprint }
Here’s what each part does: you generate an Ed25519 keypair using nacl.sign.keyPair(). The fingerprint helper hashes the public key and returns the first 16 hex characters, making it easy for peers to visually compare and confirm over chat or a QR code. signMessage converts your payload to JSON, signs it, and returns a base64 string. verifyMessage reverses the process: it decodes the base64 blob, verifies the signature against the author’s public key (passed alongside the message, since Ed25519 verification needs it), and parses the JSON if it’s valid; otherwise it throws an error.
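If you want to poke at the sign/verify flow outside the browser, Node’s built-in crypto module supports Ed25519 too. This sketch mirrors the flow above using Node’s API rather than tweetnacl’s (the two APIs differ; this is for illustration and testing only):

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto'

// Fresh Ed25519 keypair, analogous to nacl.sign.keyPair()
const { publicKey, privateKey } = generateKeyPairSync('ed25519')

function signJson(payload: unknown): { msg: string; sig: string } {
  const msg = JSON.stringify(payload)
  // Ed25519 signs the raw message; the digest algorithm argument must be null
  const sig = sign(null, Buffer.from(msg), privateKey).toString('base64')
  return { msg, sig }
}

function verifyJson(msg: string, sigB64: string): unknown {
  const ok = verify(null, Buffer.from(msg), publicKey, Buffer.from(sigB64, 'base64'))
  if (!ok) throw new Error('Invalid signature')
  return JSON.parse(msg)
}
```

The flow is identical in both worlds: serialize, sign, ship the signature plus the public key, and verify before trusting anything.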
Next, integrate signing into your CRDT updates in crdt.ts. Instead of pushing raw objects, wrap them:
import { encodeBase64 } from 'tweetnacl-util'
import { signMessage, verifyMessage, publicKey } from './crypto'

const author = encodeBase64(publicKey)

// Modify addPost: store the signed blob plus the author's public key,
// so receivers can verify without knowing the key in advance
export function addPost(text: string) {
  const entry = { id: crypto.randomUUID(), text, ts: Date.now() }
  const signed = signMessage(entry)
  posts.push([{ signed, author }])
}

// When loading posts from the CRDT:
posts.observe(event => {
  event.changes.added.forEach(item => {
    const { signed, author } = item.content.getContent()[0]
    try {
      const { id, text, ts } = verifyMessage(signed, author)
      renderTimelineEntry({ id, text, ts, author })
    } catch {
      console.warn('Discarded forged post')
    }
  })
})
Each post now carries its author’s public key and Ed25519 signature. On receipt, you call verifyMessage; if verification fails, you silently drop the update. In your UI (e.g. in the app’s header), display your own fingerprint:
const [fp, setFp] = useState<string>()

useEffect(() => {
  fingerprint(publicKey).then(setFp)
}, [])

// Render: Your fingerprint: {fp}
When a peer connects, ask them to share their fingerprint over a separate channel for verification. Once confirmed, all signed messages from that peer are trusted. If a signature doesn’t check out, a console warning appears and the forged entry is ignored. This way, your network remains decentralized but authenticated: only posts signed by known keys appear in your timeline.
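The fingerprint scheme is easy to sanity-check outside the browser as well: the same first-8-bytes-of-SHA-256 rule, computed synchronously with node:crypto instead of crypto.subtle (the function name is ours, for illustration):

```typescript
import { createHash } from 'node:crypto'

// First 8 bytes of SHA-256, hex-encoded: the 16 characters peers
// compare out of band, matching the async browser version in crypto.ts
function fingerprintSync(pk: Uint8Array): string {
  return createHash('sha256').update(pk).digest().subarray(0, 8).toString('hex')
}
```

Sixteen hex characters keep fingerprints short enough to read aloud or embed in a QR code while still making accidental collisions extremely unlikely.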
Conclusion
You’ve now successfully created a fully serverless, Twitter-like microblog that works entirely in the browser. By using a simple signaling stub, direct WebRTC channels, a CRDT timeline store, offline storage in IndexedDB, chunked binary transfers, and end-to-end message signing, you’ve eliminated all centralized parts. Your posts go directly from peer to peer, your data stays under your control, and users can verify each other’s identities without a middleman. From here, you could make the app a Progressive Web App (PWA) so mobile users can install it like a native app, or strengthen your fallback options with a dedicated TURN relay for peers with strict NATs. You might also use Vultr Object Storage to serve your static bundle worldwide while keeping your signaling stub light, or even replace it with a decentralized signaling network. Whatever you decide, the groundwork you’ve set shows that true, serverless social networking is not only possible but also easily accessible for your users.