CVE Analysis 2026-03-02 · 8 min read

CVE-2026-0030: OOB Write in __host_check_page_state_range Enables LPE

An incorrect bounds check in __host_check_page_state_range of mem_protect.c allows an out-of-bounds write, enabling local privilege escalation with no additional privileges required.

#memory-corruption #out-of-bounds-write #bounds-check-failure #privilege-escalation #local-attack
[Attack flow — CVE-2026-0030 · Memory Corruption: local attacker → OOB write in hypervisor metadata → arbitrary code execution → full device compromise. No confirmed exploits in the wild.]

Vulnerability Overview

CVE-2026-0030 is a memory corruption vulnerability in the Android hypervisor's memory protection layer, specifically in __host_check_page_state_range() within mem_protect.c. The bug is an out-of-bounds write caused by a defective bounds check on page state iteration. An unprivileged local attacker can exploit this to achieve local escalation of privilege — no additional execution privileges and no user interaction required. CVSS scores this at 8.4 (HIGH).

This function operates in the Protected KVM (pKVM) hypervisor context on Android devices running GKI kernels. pKVM enforces memory ownership between the host Linux kernel and guest VMs; a corruption here can rewrite hypervisor-managed page ownership metadata, breaking the entire isolation model.

Root cause: __host_check_page_state_range computes the iteration end pointer using an unchecked page count derived from caller-supplied parameters, allowing the state-check loop to walk past the end of the hyp_page array and corrupt adjacent hypervisor memory.

Affected Component

  • File: arch/arm64/kvm/hyp/nvhe/mem_protect.c
  • Function: __host_check_page_state_range()
  • Subsystem: pKVM (Protected KVM) hypervisor — nVHE path
  • Privilege boundary: EL1 (host kernel) → EL2 (hypervisor)
  • Affected platforms: Android devices with pKVM-enabled GKI kernels (arm64)
  • Patch level: March 2026 Android Security Bulletin

Root Cause Analysis

The pKVM hypervisor tracks physical page ownership using a hyp_page metadata array indexed by PFN (page frame number). __host_check_page_state_range() is called during host memory donation and reclaim operations to assert that all pages in a supplied range are in the expected ownership state before any stage-2 mapping changes are committed.
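The PFN-indexed lookup can be sketched in plain C. This is illustrative only — the real struct hyp_page layout and macro definitions vary across kernel versions, and the array length here is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for the EL2 metadata entry (illustrative;
 * the real struct hyp_page carries refcount, order, state flags). */
struct hyp_page {
    uint8_t order;
    uint8_t host_state;
};

/* One metadata entry per tracked PFN; length fixed at EL2 init. */
static struct hyp_page hyp_vmemmap[0x10000];

/* hyp_phys_to_page-style lookup: physical address -> metadata entry */
static struct hyp_page *hyp_phys_to_page(uint64_t phys)
{
    return &hyp_vmemmap[phys >> PAGE_SHIFT];
}
```

Note that the lookup itself performs no bounds check — it trusts the caller, which is exactly the trust the vulnerable range function fails to enforce.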

The vulnerable logic reconstructs an end-of-range pointer using arithmetic on the supplied nr_pages value without validating that the resulting range stays within the bounds of the hyp_vmemmap array:

/*
 * arch/arm64/kvm/hyp/nvhe/mem_protect.c
 * Simplified pseudocode — vulnerable version
 */

static int __host_check_page_state_range(u64 addr, u64 size,
                                          enum pkvm_page_state state)
{
    struct hyp_page *p;
    struct hyp_page *end;
    u64 nr_pages;

    /* Convert byte size to page count — attacker influences addr+size */
    nr_pages = size >> PAGE_SHIFT;

    /* Resolve base hyp_page entry for starting PFN */
    p   = hyp_phys_to_page(addr);

    /*
     * BUG: end pointer computed from caller-supplied nr_pages
     * with no validation that (p + nr_pages) stays within
     * hyp_vmemmap array bounds. If addr + size exceeds
     * hyp_vmemmap coverage, end walks past the array.
     */
    end = p + nr_pages;   // BUG: no bounds check against hyp_vmemmap_end

    while (p < end) {
        /* Compound-page tail: skip to head */
        if (p != hyp_page_to_head(p)) {
            p = hyp_page_to_head(p) + compound_nr(hyp_page_to_page(p));
            continue;
        }

        if (hyp_page_state(p) != state)
            return -EPERM;

        /* BUG: write path in callers uses same unchecked arithmetic,
         * state update via hyp_page_ref_inc / ownership tag store
         * will dereference 'p' beyond array end               */
        p += 1UL << p->order;
    }
    return 0;
}

The critical issue is the unchecked end = p + nr_pages assignment. The hypervisor's hyp_vmemmap array has a fixed length determined at EL2 initialization time. When the host kernel (EL1) triggers a memory protection hypercall with a crafted addr/size pair, nr_pages can be set such that end points beyond hyp_vmemmap into adjacent EL2 memory. Callers that mutate page state (e.g., __host_set_page_state_range, which shares the same pattern) then perform writes through this out-of-bounds pointer.
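The overshoot is pure index arithmetic. A standalone sketch (hypothetical array length, not kernel code) shows how far the loop bound lands past the array for a given addr/size pair:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT      12
#define NR_TRACKED_PFNS 0x10000ULL   /* hypothetical hyp_vmemmap length */

/*
 * Number of hyp_page entries the loop bound lands past the end of the
 * array for a given addr/size pair — 0 means the range is in bounds.
 * Mirrors the unchecked end = p + nr_pages computation as plain index
 * arithmetic.
 */
static uint64_t oob_entries(uint64_t addr, uint64_t size)
{
    uint64_t start_idx = addr >> PAGE_SHIFT;
    uint64_t nr_pages  = size >> PAGE_SHIFT;
    uint64_t end_idx   = start_idx + nr_pages;

    return end_idx > NR_TRACKED_PFNS ? end_idx - NR_TRACKED_PFNS : 0;
}
```

With a start index of 0xFFFF (the last valid entry) and a size covering 0x41 pages, the computed end index is 0x10040 — 0x40 entries past the array, matching the memory-layout diagram below.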

Memory Layout

EL2 memory during pKVM initialization (representative layout, arm64):

EL2 (Hypervisor) Virtual Address Space — nVHE:

  0xFFFF800010000000  hyp_vmemmap[]          <-- struct hyp_page array start
                      [ pfn=0x00000 ] +0x00
                      [ pfn=0x00001 ] +0x08
                      ...
  0xFFFF800010080000  hyp_vmemmap[] END       <-- one past last valid entry
  0xFFFF800010080000  [hyp_pool metadata]     <-- WRITE TARGET (adjacent)
                       .free_pages list head
                       .lock
  0xFFFF800010080018  [kvm_pgtable root]
                       .pgd  = EL2 page table root ptr

BEFORE CORRUPTION (normal __host_set_page_state_range call):
  p   = &hyp_vmemmap[0x0FFFF]   (valid, last page in range)
  end = &hyp_vmemmap[0x10000]   (one-past-end, loop exits cleanly)

AFTER CORRUPTION (crafted nr_pages overflows past array):
  p   = &hyp_vmemmap[0x0FFFF]   (valid)
  end = &hyp_vmemmap[0x10040]   (0x40 = 64 entries past end — into hyp_pool)

  ITERATION WRITES:
  hyp_vmemmap[0x10000] → host = CORRUPTED  (hyp_pool.free_pages.next)
  hyp_vmemmap[0x10001] → host = CORRUPTED  (hyp_pool.free_pages.prev)
  hyp_vmemmap[0x10002] → host = CORRUPTED  (hyp_pool.lock)
  ...
  RESULT: hyp_pool list head now points to attacker-controlled page
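Under the representative layout above, the aliasing falls out of simple struct adjacency. A C sketch (names, field sizes, and array length all hypothetical; assumes a 64-bit target) computes which vmemmap indices overlay the pool metadata:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct hyp_page { uint64_t bits; };      /* 8 bytes per entry, as diagrammed */

struct list_head { void *next, *prev; }; /* 16 bytes on a 64-bit target */

/*
 * Hypothetical EL2 region: metadata array immediately followed by
 * allocator state. Illustrative layout, not the kernel's.
 */
struct el2_region {
    struct hyp_page vmemmap[0x10000];
    struct {
        struct list_head free_pages;     /* aliases entries 0x10000-0x10001 */
        uint64_t         lock;           /* aliases entry 0x10002 */
    } pool;
};

/* vmemmap index whose 8-byte slot overlays pool.free_pages.next */
static size_t first_oob_index(void)
{
    return offsetof(struct el2_region, pool) / sizeof(struct hyp_page);
}
```

Any loop that walks past index 0xFFFF therefore writes directly into the free-list head — no heap spray or offset guesswork is needed for the adjacency itself, only for what the pool fields end up containing.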

Exploitation Mechanics

EXPLOIT CHAIN — CVE-2026-0030:

1. SETUP: Attacker runs as unprivileged process on Android device.
   Obtains access to /dev/kvm or leverages a vendor driver that
   exercises the protected-VM interface (KVM_CAP_ARM_PROTECTED_VM)
   on behalf of callers.

2. GROOM: Spray EL2 hyp_pool to achieve deterministic layout.
   Allocate/free EL2 pages to position hyp_pool metadata immediately
   after hyp_vmemmap[] array end at a known offset.

3. TRIGGER: Issue PKVM_HOST_SHARE_HYP / PKVM_HOST_DONATE_HYP hypercall
   with crafted parameters:
     addr      = last_valid_pfn_base - (N * PAGE_SIZE)
     size      = (hyp_vmemmap_len + OVERFLOW_PAGES) * PAGE_SIZE
   This causes nr_pages to extend 'end' 0x40 hyp_page entries past
   the array, into hyp_pool metadata.

4. WRITE: __host_set_page_state_range iterates and stamps
   PKVM_PAGE_OWNED state tag into each hyp_page.host_state byte.
   At offset +0x00 within struct hyp_page this is a 1-byte store;
   8 consecutive corrupted entries reconstruct a full 8-byte pointer.
   With controlled addr, attacker controls the byte pattern written.

5. PIVOT: Corrupted hyp_pool.free_pages list head now points to
   attacker-controlled page (previously prepared with fake hyp_page
   metadata). Next EL2 pool allocation returns attacker page as
   a legitimate EL2 allocation.

6. EL2 WRITE PRIMITIVE: Attacker now has a page mapped at EL2 that
   they control from EL1. Write shellcode or a fake stage-2 PTE
   to disable memory isolation for a target process.

7. LPE: Map kernel .text RWX, overwrite SELinux enforcement hook,
   escalate to root + SELinux disabled.

Step 3 is achievable without direct /dev/kvm access on devices where vendor HALs (e.g., DRM/Widevine, TEE proxy drivers) invoke protected memory hypercalls with partially user-controlled parameters — a common pattern on Pixel and partner SKUs.

Patch Analysis

The fix introduces an explicit upper-bound validation of the computed end pointer against the known extent of hyp_vmemmap before the iteration loop is entered. Both the check function and all mutating callers sharing the same pattern are patched.

// BEFORE (vulnerable — mem_protect.c):
static int __host_check_page_state_range(u64 addr, u64 size,
                                          enum pkvm_page_state state)
{
    struct hyp_page *p   = hyp_phys_to_page(addr);
    struct hyp_page *end = p + (size >> PAGE_SHIFT);  // unchecked

    while (p < end) {
        if (hyp_page_state(p) != state)
            return -EPERM;
        p += 1UL << p->order;
    }
    return 0;
}

// AFTER (patched — March 2026 ASB):
static int __host_check_page_state_range(u64 addr, u64 size,
                                          enum pkvm_page_state state)
{
    struct hyp_page *p   = hyp_phys_to_page(addr);
    struct hyp_page *end = p + (size >> PAGE_SHIFT);

    /* PATCH: Validate computed range stays within hyp_vmemmap bounds */
    if (end < p || end > hyp_vmemmap + hyp_vmemmap_nr_pages)
        return -EINVAL;

    while (p < end) {
        if (hyp_page_state(p) != state)
            return -EPERM;
        p += 1UL << p->order;
    }
    return 0;
}

// Same guard added to __host_set_page_state_range():
static int __host_set_page_state_range(u64 addr, u64 size,
                                        enum pkvm_page_state state)
{
    struct hyp_page *p   = hyp_phys_to_page(addr);
    struct hyp_page *end = p + (size >> PAGE_SHIFT);

    /* PATCH: Same bounds guard — prevents OOB write path */
    if (end < p || end > hyp_vmemmap + hyp_vmemmap_nr_pages)
        return -EINVAL;

    while (p < end) {
        hyp_set_page_state(p, state);
        p += 1UL << p->order;
    }
    return 0;
}

The two-part guard (end < p catches unsigned wraparound; end > hyp_vmemmap + hyp_vmemmap_nr_pages catches overshoot) is the minimal correct fix. Both conditions are necessary: wraparound alone could bypass a single upper-bound check by wrapping to a low address that is numerically less than the limit.
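The interplay of the two conditions can be demonstrated on raw 64-bit values. This is a model of the guard, not the kernel code — unsigned arithmetic stands in for the pointer math so that the wraparound is well-defined, and `limit` plays the role of `hyp_vmemmap + hyp_vmemmap_nr_pages`:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the patched two-part guard on raw 64-bit "addresses".
 * Returns 1 if the range [p, p + nr_entries * entry_size) passes,
 * 0 if either guard condition rejects it.
 */
static int range_ok(uint64_t p, uint64_t nr_entries, uint64_t entry_size,
                    uint64_t limit)
{
    uint64_t end = p + nr_entries * entry_size;  /* may wrap mod 2^64 */

    if (end < p || end > limit)                  /* the two-part guard */
        return 0;
    return 1;
}
```

In the wraparound case, a start near the top of the address space with a large count produces an `end` that wraps to a small value below `limit` — an upper-bound-only check would accept it, and only the `end < p` half rejects it.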

Detection and Indicators

Direct kernel-level indicators on a targeted device:

// dmesg / kernel log artifacts:

[  142.883214] kvm [1]: pKVM: host_donate_hyp rejected: EINVAL (addr=0xXXXX size=0xXXXX)
// Patched kernels — hypercall rejected at bounds check.
// Unpatched kernels — silent corruption, NO log entry.

// Detectable via hypervisor integrity monitoring:
// - Unexpected change in hyp_pool free-list pointer
// - EL2 allocation returning a page in host-accessible IPA range
// - Stage-2 fault on page previously marked PKVM_PAGE_HYP_OWNED

// Android kernel audit:
# dmesg | grep pkvm
# perf stat -e kvm:kvm_exit -a sleep 10   (spike in HVC exits)

Because exploitation is silent on unpatched kernels (no WARN/BUG is triggered), behavioral detection is more reliable: monitor for privilege escalation following unusual HVC activity from a userspace process without a legitimate VM management role. On patched kernels, the -EINVAL return will propagate to the calling HAL and may surface as a tombstone or a cryptic service crash.

Remediation

  • Apply the March 2026 Android Security Patch Level (2026-03-01) immediately. This is the authoritative fix.
  • Verify patch application: adb shell getprop ro.build.version.security_patch must return 2026-03-01 or later.
  • OEM/vendor kernels that have backported pKVM must independently verify that both __host_check_page_state_range and __host_set_page_state_range carry the bounds guard — partial patches that only fix one function remain exploitable via the other.
  • Audit all hypercall dispatch paths that accept addr/size from EL1 callers for analogous patterns: anywhere hyp_phys_to_page(addr) + (size >> PAGE_SHIFT) is computed without subsequent validation against hyp_vmemmap extent is a candidate for the same class of bug.
  • Mitigation where the patch cannot yet be applied: restrict access to /dev/kvm via SELinux policy to trusted system contexts only; audit vendor HALs for hypercall passthrough of user-controlled parameters.
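The patch-level verification above can be scripted. A minimal sketch, assuming adb is on PATH and exploiting the fact that YYYY-MM-DD strings sort lexicographically in date order:

```shell
# patched_since LEVEL REQUIRED -> exit 0 if LEVEL >= REQUIRED
patched_since() {
    # YYYY-MM-DD sorts lexicographically in date order, so the later
    # of the two strings is the later date.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort | tail -n1)" = "$1" ]
}

# Usage against a connected device (requires adb):
#   LEVEL="$(adb shell getprop ro.build.version.security_patch | tr -d '\r')"
#   patched_since "$LEVEL" 2026-03-01 && echo "OK: $LEVEL" \
#       || echo "VULNERABLE: $LEVEL predates 2026-03-01"
```

The `tr -d '\r'` strips the carriage return that adb shell output carries on some hosts, which would otherwise break the string comparison.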
CypherByte Research
Mobile security intelligence · cypherbyte.io