
Mobile Wallet Payment System

Yash Sachdeva

As part of this post, we'll cover the design of a mobile wallet payment system that supports:

  • Top-ups (add money to the wallet from a bank account or card)
  • P2P transfers (wallet → wallet)
  • Basic fraud detection
  • Concurrency control, with clear trade-offs between strong and eventual consistency at scale

Let’s start with a basic design and then we can scale it up.

1. Single node with relational DB

CREATE TABLE wallet (
    id            BIGINT PRIMARY KEY,
    owner_id      BIGINT NOT NULL,
    balance_cents BIGINT NOT NULL,
    version       BIGINT NOT NULL DEFAULT 0
);

CREATE TABLE wallet_transaction (
    id               BIGSERIAL PRIMARY KEY,
    from_wallet_id   BIGINT,
    to_wallet_id     BIGINT,
    amount_cents     BIGINT NOT NULL,
    payment_status   VARCHAR(32) NOT NULL,
    created_at       TIMESTAMP NOT NULL DEFAULT now()
);

We use a single DB transaction per operation to ensure atomicity, acquiring row locks (e.g. via SELECT … FOR UPDATE) on the wallet rows involved.

But there are still lots of issues with this basic design:

  • Balance is a mutable column: bugs can overwrite it.
  • No strong audit guarantees: we can’t easily replay transactions or recover from failures.

This sets the stage for a ledger-based design.

2. Ledger-Based Design

Modern wallets usually move from “balance column” to a ledger-based design with double-entry style accounting.

Ledger Schema

Instead of directly mutating balance, we only append immutable ledger entries. Balance is then calculated by summing up the entries for a wallet.

CREATE TABLE wallet (
    id       BIGINT PRIMARY KEY,
    owner_id BIGINT NOT NULL
);

CREATE TABLE ledger_entry (
    id               BIGSERIAL PRIMARY KEY,
    wallet_id        BIGINT NOT NULL,
    amount_cents     BIGINT NOT NULL, -- positive for credit, negative for debit
    transaction_id   BIGINT NOT NULL,
    entry_type       VARCHAR(32) NOT NULL, -- e.g. TRANSFER_DEBIT, TRANSFER_CREDIT, TOP_UP
    created_at       TIMESTAMP NOT NULL DEFAULT now()
);

CREATE TABLE transaction (
    id               BIGSERIAL PRIMARY KEY,
    external_id      VARCHAR(64), -- for idempotency or PSP reference
    type             VARCHAR(32) NOT NULL,
    status           VARCHAR(32) NOT NULL,
    from_wallet_id   BIGINT,
    to_wallet_id     BIGINT,
    amount_cents     BIGINT NOT NULL,
    created_at       TIMESTAMP NOT NULL DEFAULT now(),
    UNIQUE (external_id)
);

A top-up handler, idempotent on external_id, can then look like this:
@Transactional
public TransactionEntity topUp(Long walletId, long amountCents, String externalId) {
    TransactionEntity tx = transactionRepository
            .findByExternalId(externalId)
            .orElseGet(() -> {
                TransactionEntity t = new TransactionEntity();
                t.setExternalId(externalId);
                t.setType("TOP_UP");
                t.setFromWalletId(null);
                t.setToWalletId(walletId);
                t.setAmountCents(amountCents);
                t.setStatus("PENDING");
                return transactionRepository.save(t);
            });

    // Note: a concurrent request with the same externalId will hit the
    // UNIQUE (external_id) constraint; the caller can catch that and re-read.
    if ("SUCCESS".equals(tx.getStatus())) {
        return tx; // idempotent replay
    }

    LedgerEntryEntity ledger = new LedgerEntryEntity();
    ledger.setWalletId(walletId);
    ledger.setAmountCents(amountCents);
    ledger.setTransactionId(tx.getId());
    ledger.setEntryType("TOP_UP_CREDIT");
    ledgerRepository.save(ledger);

    tx.setStatus("SUCCESS");
    return transactionRepository.save(tx);
}

Balance Query

public long getBalance(Long walletId) {
    Long sum = ledgerRepository.sumAmountByWalletId(walletId);
    return sum != null ? sum : 0L;
}

There are still issues with this design:

  • Performance: naively summing every ledger entry on each read is expensive; we need a cached or materialized balance.
  • Concurrency control: nothing yet prevents two operations from racing on the same wallet.

3. Ledger + Materialized Balance

CREATE TABLE ledger_entry (
    id               BIGSERIAL PRIMARY KEY,
    wallet_id        BIGINT NOT NULL,
    amount_cents     BIGINT NOT NULL,  -- +10000 for credit, -5000 for debit
    tx_id            BIGINT NOT NULL,
    entry_type       VARCHAR(32) NOT NULL,  -- TOPUP_CREDIT, TRANSFER_DEBIT
    created_at       TIMESTAMP DEFAULT now()
);

-- Index for fast wallet scans
CREATE INDEX idx_ledger_wallet_created ON ledger_entry(wallet_id, created_at);

Materialized Balance Table

For fast reads, maintain a projection updated inside the same transaction as ledger writes.

CREATE TABLE wallet_balance (
    wallet_id              BIGINT PRIMARY KEY,
    available_balance_cents BIGINT NOT NULL DEFAULT 0,
    locked_balance_cents    BIGINT NOT NULL DEFAULT 0,
    updated_at              TIMESTAMP DEFAULT now(),
    version                 BIGINT DEFAULT 0  -- for optimistic locking
);
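The "projection updated inside the same transaction as ledger writes" rule can be sketched in plain Java. This is an illustrative in-memory stand-in (WalletStore and its nested LedgerEntry record are made up for the example); the synchronized block plays the role of the DB transaction, so the ledger append and the balance update succeed or fail together:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WalletStore {

    // Illustrative in-memory stand-ins for ledger_entry and wallet_balance.
    record LedgerEntry(long walletId, long amountCents, long txId, String entryType) {}

    private final List<LedgerEntry> ledger = new ArrayList<>();
    private final Map<Long, Long> availableBalanceCents = new HashMap<>();

    // The synchronized block stands in for the single DB transaction:
    // the ledger stays append-only, and the projection moves with it.
    public synchronized void append(long walletId, long amountCents, long txId, String entryType) {
        ledger.add(new LedgerEntry(walletId, amountCents, txId, entryType));
        availableBalanceCents.merge(walletId, amountCents, Long::sum);
    }

    public synchronized long balance(long walletId) {
        return availableBalanceCents.getOrDefault(walletId, 0L);
    }
}
```

Reads now hit the projection in O(1) instead of summing the ledger, while the ledger remains the source of truth.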

Double-Entry Extension (for advanced audits)

For full accounting compliance, extend the ledger to double‑entry:

-- Every txn has debit + credit entries across accounts (user wallet ↔ treasury)
ALTER TABLE ledger_entry ADD COLUMN account_id BIGINT;
ALTER TABLE ledger_entry ADD COLUMN direction VARCHAR(10);  -- DEBIT/CREDIT

To catch drift, run a nightly job that recomputes balances and compares against the materialized table. Alert if there is a mismatch.
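One way such a reconciliation pass might look, as a hedged sketch over in-memory data (BalanceReconciler and Entry are illustrative types, not part of the schema above; a production job would page through the ledger and alert on any mismatch):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BalanceReconciler {

    record Entry(long walletId, long amountCents) {}

    /**
     * Recompute balances from the ledger and return wallet IDs whose
     * materialized balance has drifted. For brevity, only wallets present
     * in the materialized table are checked in this sketch.
     */
    public static List<Long> findDrift(List<Entry> ledger, Map<Long, Long> materialized) {
        Map<Long, Long> recomputed = new HashMap<>();
        for (Entry e : ledger) {
            recomputed.merge(e.walletId(), e.amountCents(), Long::sum);
        }
        return materialized.entrySet().stream()
                .filter(b -> !b.getValue().equals(recomputed.getOrDefault(b.getKey(), 0L)))
                .map(Map.Entry::getKey)
                .toList();
    }
}
```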

4. Concurrency Control for Balances

What are the challenges when dealing with concurrency control?

  • Double-spend / lost updates: when multiple operations hit the same wallet at the same time (e.g., two transfers spending the same money).
  • Duplicate transactions: from retries, flaky networks, payment gateway timeouts.
  • High contention: on “hot” wallets (e.g., popular merchants, exchanges).
  • Race conditions: across services (e.g., wallet service vs. fraud service vs. notification service).

Let’s try to solve these one by one.

A. Preventing Double-Spend per Wallet

We should never allow two balance modifying operations on the same wallet to run concurrently, even under high load.

To achieve this, we can use a per-wallet lock so that operations on different wallets can still proceed concurrently. We can either implement custom striped locks ourselves or simply use Guava's Striped locks.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedStripedLockManager {

    private final ConcurrentHashMap<Integer, Lock> stripes = new ConcurrentHashMap<>();
    private final int stripesCount;

    public BoundedStripedLockManager(int stripesCount) {
        this.stripesCount = stripesCount; // e.g. 1024
    }

    // Many wallet IDs map onto a smaller, fixed set of lock stripes.
    private int stripeKey(long walletId) {
        int h = Long.hashCode(walletId);
        h ^= (h >>> 16); // spread high bits into the low bits before modulo
        int idx = h % stripesCount;
        return idx < 0 ? idx + stripesCount : idx;
    }

    public Lock lockForWallet(long walletId) {
        int key = stripeKey(walletId);
        return stripes.computeIfAbsent(key, k -> new ReentrantLock());
    }
}

Or simply with Guava:

import com.google.common.util.concurrent.Striped;

import java.util.concurrent.locks.Lock;

public class WalletLockManager {

    // 1024 stripes => up to 1024 locks spread across wallet IDs
    private final Striped<Lock> walletLocks = Striped.lock(1024);

    public Lock lockForWallet(long walletId) {
        return walletLocks.get(walletId);
    }
}

public class WalletService {

    private final WalletLockManager lockManager;

    public WalletService(WalletLockManager lockManager) {
        this.lockManager = lockManager;
    }

    public void transfer(long fromWalletId, long toWalletId, long amountCents) {
        Lock fromLock = lockManager.lockForWallet(fromWalletId);
        Lock toLock   = lockManager.lockForWallet(toWalletId);

        // Acquire in a deterministic order to avoid deadlocks. With striping
        // we must order by the lock objects, not by wallet ID: stripe order
        // does not follow ID order. ReentrantLock is reentrant, so both
        // wallets mapping to the same stripe is also safe.
        Lock firstLock  = System.identityHashCode(fromLock) <= System.identityHashCode(toLock) ? fromLock : toLock;
        Lock secondLock = (firstLock == fromLock) ? toLock : fromLock;

        firstLock.lock();
        try {
            secondLock.lock();
            try {
                doTransfer(fromWalletId, toWalletId, amountCents);
            } finally {
                secondLock.unlock();
            }
        } finally {
            firstLock.unlock();
        }
    }

    private void doTransfer(long fromWalletId, long toWalletId, long amountCents) {
        // DB transaction: check balances, write ledger entries, update projections, etc.
    }
}

Striping maps many wallets onto a limited number of locks, so a large wallet system doesn't have to maintain one tiny lock object per wallet ID.

B. DB Level Concurrency: Row Locks and Optimistic Retries

Even with in-memory locks, multiple instances could race on the same wallet database row. You need DB-level concurrency control as well.

a. Pessimistic Locking

public interface WalletBalanceRepository extends JpaRepository<WalletBalanceEntity, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select b from WalletBalanceEntity b where b.walletId = :walletId")
    WalletBalanceEntity findByWalletIdForUpdate(@Param("walletId") long walletId);
}

b. Optimistic Locking with Retries

Under high contention, pessimistic locking can cause long waits and deadlocks. Optimistic locking avoids this by only checking for conflicts at commit time. Attach a version field to wallet_balance. On update, JPA will throw OptimisticLockException if the version has changed since load. Retry the operation with fresh state.

@Entity
@Table(name = "wallet_balance")
public class WalletBalanceEntity {

    @Id
    private Long walletId;

    private Long availableBalanceCents;

    @Version
    private Long version;

    // getters/setters
}

@Service
public class OptimisticWalletService {

    private static final int MAX_RETRIES = 5;

    @Autowired
    private WalletBalanceRepository balanceRepository;
    @Autowired
    private LedgerRepository ledgerRepository;

    public void transfer(long fromWalletId, long toWalletId, long amountCents, long txId) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                doTransferOnce(fromWalletId, toWalletId, amountCents, txId);
                return;
            } catch (ObjectOptimisticLockingFailureException ex) {
                if (attempt == MAX_RETRIES) {
                    throw new ConcurrentModificationException("Too much contention, please retry later", ex);
                }
                // backoff could be added here
            }
        }
    }

    // Note: in Spring, transfer() calling this @Transactional method directly
    // (self-invocation) bypasses the transaction proxy; in practice, move it
    // to a separate bean or use TransactionTemplate.
    @Transactional
    protected void doTransferOnce(long fromWalletId, long toWalletId, long amountCents, long txId) {
        WalletBalanceEntity from = balanceRepository.findById(fromWalletId).orElseThrow();
        WalletBalanceEntity to   = balanceRepository.findById(toWalletId).orElseThrow();

        if (from.getAvailableBalanceCents() < amountCents) {
            throw new IllegalStateException("Insufficient funds");
        }

        from.setAvailableBalanceCents(from.getAvailableBalanceCents() - amountCents);
        to.setAvailableBalanceCents(to.getAvailableBalanceCents() + amountCents);

        balanceRepository.save(from);
        balanceRepository.save(to);

        ledgerRepository.insertTransferEntries(fromWalletId, toWalletId, amountCents, txId);
    }
}

C. Idempotency

The request can carry an idempotency key generated on the client side (e.g. a random UUID), with uniqueness enforced at the DB level:

ALTER TABLE transaction
ADD COLUMN idempotency_key VARCHAR(64),
ADD CONSTRAINT uk_tx_idempotency UNIQUE (idempotency_key, from_wallet_id);
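A minimal sketch of the client-side key generation (IdempotencyKeys is a hypothetical helper; a random UUID's 36 characters comfortably fit the VARCHAR(64) column):

```java
import java.util.UUID;

public class IdempotencyKeys {

    // Hypothetical helper: a random, collision-resistant key that the client
    // attaches to the request and reuses on every retry of the same operation.
    public static String newKey() {
        return UUID.randomUUID().toString(); // 36 chars, fits VARCHAR(64)
    }
}
```

On a duplicate submit, the second insert fails on uk_tx_idempotency and the service can return the original transaction instead of moving money twice.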

D. Available vs Locked Balance Under Concurrency

  • Available balance: money that can be spent.
  • Locked balance: reserved for in-flight operations.
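The reserve-then-settle flow over these two buckets might be sketched like this (HeldBalance is an illustrative in-memory class; in the real system each step would be a DB transaction against wallet_balance):

```java
public class HeldBalance {

    private long availableCents;
    private long lockedCents;

    public HeldBalance(long availableCents) {
        this.availableCents = availableCents;
    }

    // Reserve funds for an in-flight operation: available -> locked.
    public synchronized void reserve(long amountCents) {
        if (availableCents < amountCents) throw new IllegalStateException("Insufficient funds");
        availableCents -= amountCents;
        lockedCents += amountCents;
    }

    // Settle: the locked funds leave the wallet for good.
    public synchronized void settle(long amountCents) {
        lockedCents -= amountCents;
    }

    // Release: the operation failed, return locked funds to available.
    public synchronized void release(long amountCents) {
        lockedCents -= amountCents;
        availableCents += amountCents;
    }

    public synchronized long available() { return availableCents; }
    public synchronized long locked() { return lockedCents; }
}
```

Because the spend is reserved up front, a concurrent second transfer sees the reduced available balance and cannot double-spend the in-flight amount.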

E. Concurrency in Caches and Projections

You’ll likely cache wallet balances (Redis, in-memory) and maintain projections for history and analytics. Challenges include:

  • Two threads updating the cache concurrently and overwriting each other's writes.
  • The cache becoming inconsistent with the DB under failures or retries.

Therefore, we should:

  • Treat the ledger and balance table as the only source of truth; caches are ephemeral.
  • Use atomic operations in the cache (e.g., Redis INCR guarded by Lua scripts) if you adjust balances in the cache.
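As an in-process analogue of the "atomic operations in the cache" advice, here is a hedged sketch where a ConcurrentHashMap stands in for Redis (BalanceCache is a made-up name); merge() makes each read-modify-write a single atomic step per key, so concurrent adjustments cannot overwrite each other:

```java
import java.util.concurrent.ConcurrentHashMap;

public class BalanceCache {

    private final ConcurrentHashMap<Long, Long> cache = new ConcurrentHashMap<>();

    // Atomic read-modify-write per key: two concurrent adjustments to the
    // same wallet both apply, neither is lost.
    public long adjust(long walletId, long deltaCents) {
        return cache.merge(walletId, deltaCents, Long::sum);
    }

    // Caches are ephemeral: on any doubt, drop the entry and re-read the DB.
    public void invalidate(long walletId) {
        cache.remove(walletId);
    }
}
```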

F. Fraud Detection and Risk

Before committing a transaction, apply fast rules. For example:

  • Daily transaction amount limits per user tier (KYC‑based).
  • Velocity checks: number of transactions in last N minutes.
  • Device/IP velocity: too many attempts from same device.
  • Risky patterns: new device + large amount.
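A velocity check of the kind listed above might be sketched as a sliding-window counter (VelocityChecker is illustrative, and the thresholds are assumptions, not from the post):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class VelocityChecker {

    private final int maxTxns;
    private final long windowMillis;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public VelocityChecker(int maxTxns, long windowMillis) {
        this.maxTxns = maxTxns;
        this.windowMillis = windowMillis;
    }

    // Returns true if a transaction at 'nowMillis' stays within the limit.
    public synchronized boolean allow(long nowMillis) {
        while (!timestamps.isEmpty() && nowMillis - timestamps.peekFirst() >= windowMillis) {
            timestamps.pollFirst(); // drop events that fell out of the window
        }
        if (timestamps.size() >= maxTxns) {
            return false; // over the velocity limit, flag or reject
        }
        timestamps.addLast(nowMillis);
        return true;
    }
}
```

In practice this state would live in a shared store (e.g. Redis) keyed by user, device, or IP, so all instances see the same counts.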

5. End to End Flow

(Diagram: end-to-end payment system flow.)
