From bacbfafc865397fa29a30bfebf9e5a283efae505 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Thu, 15 May 2014 16:10:11 +0200
Subject: [PATCH] qcow1: Validate L2 table size (CVE-2014-0222)

Too large L2 table sizes cause unbounded allocations. Images actually
created by qemu-img only have 512 byte or 4k L2 tables.

To keep things consistent with cluster sizes, allow ranges between 512
bytes and 64k (in fact, anything down to 1 entry = 8 bytes would
technically work, but L2 table sizes smaller than a cluster don't make
a lot of sense).

This also means that the number of bytes on the virtual disk that are
described by the same L2 table is limited to at most 8k * 64k = 2^29,
which preemptively avoids any integer overflows.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
(cherry picked from commit 42eb58179b3b215bb507da3262b682b8a2ec10b5)

Conflicts:
	tests/qemu-iotests/092
	tests/qemu-iotests/092.out
---
 block/qcow.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/block/qcow.c b/block/qcow.c
index 2379132..bdd2003 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -136,6 +136,14 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags)
         goto fail;
     }
 
+    /* l2_bits specifies number of entries; storing a uint64_t in each entry,
+     * so bytes = num_entries << 3. */
+    if (header.l2_bits < 9 - 3 || header.l2_bits > 16 - 3) {
+        error_report("L2 table size must be between 512 and 64k");
+        ret = -EINVAL;
+        goto fail;
+    }
+
     if (header.crypt_method > QCOW_CRYPT_AES) {
         ret = -EINVAL;
         goto fail;