Decentralized Load Balancing Architecture

To ensure high availability and horizontal scalability, DeCloudX implements a decentralized load balancing and traffic distribution model—similar in principle to AWS Elastic Load Balancing, but powered by a distributed mesh of compute nodes.

🔄 Horizontal Scaling via Multi-Instance Containers

Rather than relying on a single node with a large memory allocation (e.g., 5 GB of RAM), applications can be deployed as multiple smaller containers across the network:

  • Example: An app needing 5 GB of RAM can be split into three containers allocated 2 GB, 1.5 GB, and 1.5 GB respectively.

  • Each container runs on a different compute node in DeCloudX.

  • The total processing capacity is preserved while eliminating single-node dependency.
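The splitting idea above can be sketched as a simple capacity-partitioning routine. This is an illustrative example, not DeCloudX code: the function name, the first-fit-decreasing strategy, and the node capacities are all assumptions made for the sketch.

```python
# Hypothetical sketch: split an app's total RAM requirement across
# several smaller containers, each destined for a different compute node.
# The greedy largest-first strategy here is an assumption, not part of
# the DeCloudX protocol.

def split_workload(total_gb, node_capacities_gb):
    """Greedily carve RAM slices out of node capacities until the
    requirement is met; raises if aggregate capacity is insufficient."""
    remaining = total_gb
    plan = []
    for capacity in sorted(node_capacities_gb, reverse=True):
        if remaining <= 0:
            break
        slice_gb = min(capacity, remaining)
        plan.append(slice_gb)
        remaining -= slice_gb
    if remaining > 0:
        raise ValueError("insufficient aggregate capacity")
    return plan

# A 5 GB app spread over nodes offering 2, 1.5 and 1.5 GB:
print(split_workload(5.0, [2.0, 1.5, 1.5]))  # → [2.0, 1.5, 1.5]
```

The total capacity (5 GB) is preserved across the three slices, which is the property the bullet list describes.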

🌐 Gateway-Driven Traffic Distribution

DeCloudX includes decentralized gateway nodes that serve as intelligent routing layers:

  • Accept incoming traffic to .dcx domains or mapped Web2 domains.

  • Distribute traffic to healthy, geographically optimal compute nodes.

  • Perform failover rerouting if a node goes offline.

This design ensures that no single point of failure can bring down the backend, and performance remains consistent across global users.
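The gateway behavior described above (health filtering, geo-aware selection, failover) can be sketched in a few lines. The node records, latency figures, and function name below are invented for the example and do not reflect the actual DeCloudX gateway implementation.

```python
# Illustrative sketch (not the actual DeCloudX gateway code): route a
# request to the healthy compute node with the lowest latency for the
# user's region, so that a failed node is skipped automatically.

def route(nodes, region):
    """nodes: list of dicts with 'id', 'healthy', and per-region
    'latency_ms'. Returns the id of the chosen node."""
    candidates = [n for n in nodes if n["healthy"]]
    if not candidates:
        raise RuntimeError("no healthy compute nodes available")
    return min(candidates, key=lambda n: n["latency_ms"][region])["id"]

nodes = [
    {"id": "node-eu", "healthy": False, "latency_ms": {"eu": 12, "us": 95}},
    {"id": "node-us", "healthy": True,  "latency_ms": {"eu": 90, "us": 10}},
    {"id": "node-ap", "healthy": True,  "latency_ms": {"eu": 160, "us": 140}},
]

# node-eu is offline, so an EU user fails over to the next-nearest node:
print(route(nodes, "eu"))  # → node-us
```

Because unhealthy nodes are filtered out before selection, failover rerouting falls out of the same code path as normal geo-routing.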

🛠️ Configuration Parameters (Sample)

```yaml
app_name: khunti-backend
image: khunti/php:latest
replicas: 3
router: enabled
autoscale: true
warm_standby: 1
```

⚙️ Benefits of DeCloudX Load Routing

| Feature | Functionality |
| --- | --- |
| Load Balancing | Routes traffic evenly across all active replicas |
| Redundancy | Ensures no downtime from single-node failures |
| Geo-Routing | Connects users to the nearest available node |
| Warm Standby Nodes | Automatically promotes standby containers in case of failure |
| Stateless Microservices | Recommended design for dynamic scaling and fast failover |
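Two of the behaviors in the table, even round-robin distribution across active replicas and warm-standby promotion on failure, can be combined in one small sketch. The `ReplicaPool` class and its method names are hypothetical, chosen only to illustrate the mechanism under assumed data structures.

```python
# Sketch of round-robin load balancing plus warm-standby promotion.
# All names here are invented for illustration; DeCloudX's internal
# scheduler is not shown in this document.
from itertools import cycle

class ReplicaPool:
    def __init__(self, active, standby):
        self.active = list(active)
        self.standby = list(standby)
        self._rr = cycle(self.active)  # round-robin over active replicas

    def next_replica(self):
        """Return the next active replica in rotation."""
        return next(self._rr)

    def fail(self, replica):
        """Drop a failed replica and promote a warm standby in its place,
        rebuilding the rotation so traffic stays evenly spread."""
        self.active.remove(replica)
        if self.standby:
            self.active.append(self.standby.pop(0))
        self._rr = cycle(self.active)

pool = ReplicaPool(["r1", "r2", "r3"], ["warm-1"])
pool.fail("r2")
print(pool.active)  # → ['r1', 'r3', 'warm-1']
```

Keeping replicas stateless, as the table recommends, is what makes this promotion safe: any standby can take over without session migration.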

This model brings AWS-grade infrastructure behavior into a fully decentralized, sovereign, and trustless environment—empowering builders to launch globally resilient applications with zero reliance on Big Tech cloud providers.
