SSE
SSE provides unidirectional server-to-client streaming over HTTP. Unlike WebSocket's bidirectional connections, SSE is designed for scenarios where the server pushes updates to clients - live feeds, notifications, real-time dashboards.
The framework provides an SSE meta-process implementation that integrates SSE connections with the actor model. Each connection becomes an independent actor addressable from anywhere in the cluster.
The Integration Problem
SSE connections need two capabilities:
HTTP streaming: Connection must keep HTTP response open and stream events to client. Standard HTTP handlers return immediately - SSE requires long-lived responses.
Asynchronous writing: Backend actors must be able to push events to the client at any time - notifications, updates, data changes from the actor system.
This is exactly what meta-processes solve. The SSE connection meta-process holds the HTTP response open. Actor Handler receives messages from backend actors and writes formatted SSE events to the response stream.
Components
Two meta-processes work together:
SSE Handler: Implements http.Handler interface. When HTTP request arrives, sets SSE headers and spawns Connection meta-process. Returns after connection closes.
SSE Connection: Meta-process managing one SSE connection. Actor Handler receives messages from actors, formats them as SSE events, writes to HTTP response stream. Connection lives until client disconnects or error occurs.
For client-side connections:
SSE Client Connection: Meta-process connecting to external SSE endpoint. External Reader continuously reads SSE stream, parses events, sends them to application actors.
Creating SSE Server
Use sse.CreateHandler to create handler meta-process:
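A minimal sketch of the server side, not the package's verbatim API: it assumes sse.CreateHandler returns a value implementing both gen.MetaBehavior and http.Handler (as the Components section above describes), that the actor embeds act.Actor, and that the import path for the SSE meta package is as shown.

```go
import (
	"net/http"
	"time"

	"ergo.services/ergo/act"
	"ergo.services/ergo/gen"
	// assumed import path for the SSE meta package
	"ergo.services/meta/sse"
)

type MyWeb struct {
	act.Actor
}

func (w *MyWeb) Init(args ...any) error {
	// Create the SSE handler meta-process. With an empty ProcessPool,
	// connection messages are delivered to this (parent) actor.
	handler := sse.CreateHandler(sse.HandlerOptions{
		Heartbeat: 30 * time.Second,
	})

	// Spawn the handler as a meta-process of this actor.
	if _, err := w.SpawnMeta(handler, gen.MetaOptions{}); err != nil {
		return err
	}

	// The handler implements http.Handler, so mount it on the server mux.
	mux := http.NewServeMux()
	mux.Handle("/events", handler)
	// ... pass mux to your web server setup ...
	return nil
}
```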
Handler options:
ProcessPool: List of process names that will receive messages from SSE connections. When connection is established, handler round-robins across this pool to select which process handles this connection. If empty, connection sends to parent process.
Heartbeat: Interval for sending comment heartbeats to keep connection alive. Default 30 seconds. Heartbeats prevent proxies and load balancers from closing idle connections.
Connection Lifecycle
When client connects:
1. HTTP request arrives with Accept: text/event-stream
2. Handler sets SSE response headers
3. Handler spawns Connection meta-process
4. Connection sends MessageConnect to application
5. Connection blocks waiting for client disconnect
6. Actor Handler waits for backend messages
During connection lifetime:
Server events: Application sends message -> Actor Handler formats and writes SSE event
Heartbeats: Periodic comment lines keep connection alive
Connection remains open until client disconnects
When client disconnects:
1. HTTP request context is cancelled
2. Connection sends MessageDisconnect to application
3. Meta-process terminates
4. HTTP handler returns
Messages
Four message types flow between connections and actors:
sse.MessageConnect: Sent when connection established.
Receive this to track new connections:
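A sketch of tracking connections in the receiving actor. It assumes the MyWeb actor from the earlier sketch also carries a connections map[gen.Alias]bool field, and that sse.MessageConnect exposes the connection alias as an ID field.

```go
func (w *MyWeb) HandleMessage(from gen.PID, message any) error {
	switch m := message.(type) {
	case sse.MessageConnect:
		// Remember the connection alias so events can be pushed to it later.
		w.connections[m.ID] = true
		w.Log().Info("sse client connected: %s", m.ID)
	}
	return nil
}
```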
sse.MessageDisconnect: Sent when connection closes.
Receive this to clean up connection state:
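A matching sketch for teardown, with the same assumptions about the ID field and the connections map:

```go
func (w *MyWeb) HandleMessage(from gen.PID, message any) error {
	switch m := message.(type) {
	case sse.MessageDisconnect:
		// The client went away; drop its alias and per-connection state.
		delete(w.connections, m.ID)
		w.Log().Info("sse client disconnected: %s", m.ID)
	}
	return nil
}
```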
sse.Message: An event to send to the client (on the server side) or an event received from the server (on the client side).
Send events to client:
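A sketch from inside a MyWeb method: push an event by sending an sse.Message to the connection's alias. The Event, ID, and Data field names, and Data being a byte slice, are assumptions about the struct.

```go
// connID is the gen.Alias captured from sse.MessageConnect.
err := w.Send(connID, sse.Message{
	Event: "price-update",
	ID:    "42",
	Data:  []byte(`{"symbol":"BTC","price":97000}`),
})
if err != nil {
	w.Log().Error("failed to send event: %s", err)
}
```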
Wire format for the above message:
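Given the field values assumed in the sketch above, the connection writes the event to the stream as:

```
event: price-update
id: 42
data: {"symbol":"BTC","price":97000}

```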
sse.MessageLastEventID: Sent when client reconnects with Last-Event-ID header.
Handle reconnection to resume from last event:
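A sketch of resuming a reconnected client. The ID and LastEventID fields on sse.MessageLastEventID and the eventsSince helper are hypothetical.

```go
func (w *MyWeb) HandleMessage(from gen.PID, message any) error {
	switch m := message.(type) {
	case sse.MessageLastEventID:
		// Replay everything the client missed while it was disconnected.
		for _, ev := range w.eventsSince(m.LastEventID) { // hypothetical helper
			if err := w.Send(m.ID, ev); err != nil {
				return err
			}
		}
	}
	return nil
}
```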
SSE Wire Format
SSE events follow a simple text format:
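For example, an event using every field (with a multi-line payload) is written as:

```
event: price-update
id: 42
retry: 3000
data: first line of payload
data: second line of payload

```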
event: - Event type. Client listens with addEventListener("type", ...). Optional, defaults to "message".
id: - Event ID. Client sends as Last-Event-ID header on reconnect. Optional.
retry: - Suggested reconnection delay in milliseconds. Client uses this if connection drops. Optional.
data: - Event payload. Can span multiple lines, each prefixed with data:. Required.
An empty line terminates the event.
The sse.Message struct maps directly to this format. Multi-line data is handled automatically.
Client Connections
Create client-side SSE connections with sse.CreateConnection:
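A minimal sketch of a client actor. The exact signature of sse.CreateConnection and the option types (URL as a string, Headers as http.Header) are assumptions based on the option list below.

```go
func (c *FeedConsumer) Init(args ...any) error {
	// Create the client connection meta-process. With an empty Process
	// option, events are delivered to this (parent) actor.
	conn := sse.CreateConnection(sse.ConnectionOptions{
		URL: "https://example.com/events",
		Headers: http.Header{
			"Authorization": []string{"Bearer <token>"},
		},
		ReconnectInterval: 3 * time.Second,
	})

	// Spawn it; the External Reader starts streaming immediately.
	if _, err := c.SpawnMeta(conn, gen.MetaOptions{}); err != nil {
		return err
	}
	return nil
}
```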
Connection options:
URL: SSE server endpoint. Use http:// or https:// scheme.
Process: Process name that will receive events from server. If empty, sends to parent process.
Headers: Custom HTTP headers for the request. Useful for authentication.
LastEventID: Initial Last-Event-ID header value for resuming from specific event.
ReconnectInterval: Default reconnection delay. Can be overridden by server's retry: field. Default 3 seconds.
Client connections receive the same message types. External Reader parses SSE stream and sends sse.Message to application:
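A sketch of consuming the stream in the receiving actor; the field names on sse.Message are assumptions, as above.

```go
func (c *FeedConsumer) HandleMessage(from gen.PID, message any) error {
	switch m := message.(type) {
	case sse.MessageConnect:
		c.Log().Info("connected to SSE server")
	case sse.Message:
		// One parsed event from the External Reader.
		c.Log().Info("event %q id=%s: %s", m.Event, m.ID, m.Data)
	case sse.MessageDisconnect:
		c.Log().Warning("SSE connection closed")
	}
	return nil
}
```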
Network Transparency
Connection meta-processes have gen.Alias identifiers that work across the cluster. Any actor on any node can send events to any connection:
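A sketch, where process is any gen.Process on any node and connAlias is the connection's gen.Alias (shared, for example, via MessageConnect or a registry):

```go
// Runs on any node in the cluster; connAlias identifies the SSE connection.
err := process.Send(connAlias, sse.Message{
	Event: "notification",
	Data:  []byte("deployment finished"),
})
```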
Network transparency makes every SSE connection addressable like any other actor. Backend logic scattered across cluster nodes can push updates to specific clients without intermediaries.
Process Pool Distribution
Handler accepts ProcessPool - list of process names to receive connection messages. Handler distributes connections across this pool using round-robin:
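A sketch of the pool configuration the example below assumes (option name from the handler options above; the gen.Atom element type is an assumption):

```go
handler := sse.CreateHandler(sse.HandlerOptions{
	// Connections are assigned to these processes in round-robin order.
	ProcessPool: []gen.Atom{"handler1", "handler2", "handler3"},
})
```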
Connection 1 sends to "handler1", connection 2 to "handler2", connection 3 to "handler3", connection 4 to "handler1", etc. This distributes load across multiple handler processes.
Useful for scaling: spawn multiple handler processes, each managing a subset of connections. This prevents a single handler from becoming a bottleneck.
Differences from WebSocket
| | WebSocket | SSE |
|---|---|---|
| Direction | Bidirectional | Server to client only |
| Protocol | Upgrade to ws:// | Standard HTTP streaming |
| Client to server | WriteMessage() | Not supported (use separate HTTP requests) |
| Browser support | Requires WebSocket API | Native EventSource API |
| Reconnection | Manual implementation | Built-in with Last-Event-ID |
| Binary data | Supported | Text only (base64 encode if needed) |
| Proxy support | May require configuration | Works through standard HTTP proxies |
Choose SSE when:
Server pushes updates to clients (notifications, live feeds, dashboards)
Clients only need to receive, not send through same connection
Working with proxies that may not support WebSocket
Want automatic reconnection with event replay
Choose WebSocket when:
True bidirectional communication needed
Binary data transfer required
Low latency in both directions critical