In some cases pmd_none_or_clear_bad() is called with the mmap_sem held
in read mode. In those cases huge page faults can allocate hugepmds
under pmd_none_or_clear_bad(), and that can trigger a false positive
from pmd_bad(), which does not expect to see a pmd materializing as
trans huge.
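
For reference, the generic helper involved looks roughly like the
sketch below (a simplified sketch, not part of this patch; the exact
code and its location may differ slightly by kernel version):

	static inline int pmd_none_or_clear_bad(pmd_t *pmd)
	{
		if (pmd_none(*pmd))
			return 1;
		if (unlikely(pmd_bad(*pmd))) {
			pmd_clear_bad(pmd);
			return 1;
		}
		return 0;
	}

A trans huge pmd is neither none nor a pointer to a regular pagetable
page, so pmd_bad() can flag it as bad, and pmd_clear_bad() then logs
"bad pmd ..." and wipes the entry. That is the corruption analyzed in
the quoted report below.
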
khugepaged is not the problem: khugepaged holds the mmap_sem in write
mode (and all those sites must hold the mmap_sem in read mode to
prevent pagetables from going away from under them; during code review
it seems vm86 mode on 32-bit kernels requires that too, unless it's
restricted to one thread per process or to UP builds). The race is only
with the huge page faults that can convert a pmd_none() into a
pmd_trans_huge().

Effectively all these pmd_none_or_clear_bad() sites running with the
mmap_sem in read mode are somewhat speculative with respect to the page
faults, and the result is always undefined when they run
simultaneously. This is probably why it wasn't common to run into this.
For example, if madvise(MADV_DONTNEED) runs zap_page_range() shortly
before the page fault, the hugepage will not be zapped; if the page
fault runs first, it will be zapped.

Altering pmd_bad() not to error out if it finds hugepmds won't be
enough to fix this, because zap_pmd_range would then proceed to call
zap_pte_range (which would be incorrect if the pmd had become a
pmd_trans_huge()).

The simplest way to fix this is to read the pmd into the local stack
(regardless of what we read; no actual CPU barriers are needed, only a
compiler barrier), and be sure it is not changing under the code that
computes its value. Even if the real pmd is changing under the value we
hold on the stack, we don't care. If we actually end up in
zap_pte_range it means the pmd was not none already and it was not
huge, and it can't become huge from under us (khugepaged locking
explained above).
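
Condensed, the idiom looks like the sketch below; the complete helper
this patch adds, pmd_none_or_trans_huge_or_clear_bad(), is in the
include/asm-generic/pgtable.h hunk further down:

	pmd_t pmdval = *pmd;	/* racy snapshot, whatever we read is fine */
	barrier();		/* compiler barrier: both checks below use the same pmdval */
	if (pmd_none(pmdval))
		return 1;	/* treat it as none */
	if (unlikely(pmd_bad(pmdval))) {
		if (!pmd_trans_huge(pmdval))
			pmd_clear_bad(pmd);	/* only clear genuinely corrupted pmds */
		return 1;
	}
	return 0;		/* stable: a regular, not-none, not-huge pmd */

Because pmd_bad() and pmd_trans_huge() are both evaluated on the same
snapshot, a racing hugepmd can no longer be mistaken for a corrupted
pmd; when the race does occur the result is undefined anyway, so
behaving as if the pmd were none is safe.
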
All we need is to enforce that there is no longer any way that, in a
code path like the one below, pmd_trans_huge() can be false but
pmd_none_or_clear_bad() can still run into a hugepmd. The overhead of a
barrier() is just a compiler tweak and should not be measurable (I only
added it for THP builds). I don't exclude that different compiler
versions may have prevented the race too by caching the value of *pmd
on the stack (that hasn't been verified, but it wouldn't be impossible
considering that pmd_none_or_clear_bad, pmd_bad, pmd_trans_huge and
pmd_none are all inlines and no external function is called in between
pmd_trans_huge and pmd_none_or_clear_bad).

		if (pmd_trans_huge(*pmd)) {
			if (next-addr != HPAGE_PMD_SIZE) {
				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
				split_huge_page_pmd(vma->vm_mm, pmd);
			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
				continue;
			/* fall through */
		}
		if (pmd_none_or_clear_bad(pmd))

Because this race condition could be exercised without special
privileges, it was reported as CVE-2012-1179.

The race was identified and fully explained by Ulrich, who debugged it.
I'm quoting his accurate explanation below, for reference.

====== start quote =======
  mapcount 0 page_mapcount 1
  kernel BUG at mm/huge_memory.c:1384!

At some point prior to the panic, a "bad pmd ..." message similar to the
following is logged on the console:

  mm/memory.c:145: bad pmd ffff8800376e1f98(80000000314000e7).

The "bad pmd ..." message is logged by pmd_clear_bad() before it clears
the page's PMD table entry.

    143 void pmd_clear_bad(pmd_t *pmd)
    144 {
->  145         pmd_ERROR(*pmd);
    146         pmd_clear(pmd);
    147 }

After the PMD table entry has been cleared, there is an inconsistency
between the actual number of PMD table entries that are mapping the page
and the page's map count (_mapcount field in struct page). When the page
is subsequently reclaimed, __split_huge_page() detects this inconsistency.

   1381         if (mapcount != page_mapcount(page))
   1382                 printk(KERN_ERR "mapcount %d page_mapcount %d\n",
   1383                        mapcount, page_mapcount(page));
-> 1384         BUG_ON(mapcount != page_mapcount(page));

The root cause of the problem is a race of two threads in a multithreaded
process. Thread B incurs a page fault on a virtual address that has never
been accessed (PMD entry is zero) while Thread A is executing an madvise()
system call on a virtual address within the same 2 MB (huge page) range.

           virtual address space
          .---------------------.
          |                     |
          |                     |
        .-|---------------------|
        | |                     |
        | |                     |<-- B(fault)
        | |                     |
  2 MB  | |/////////////////////|-.
  huge <  |/////////////////////|  > A(range)
  page  | |/////////////////////|-'
        | |                     |
        | |                     |
        '-|---------------------|
          |                     |
          |                     |
          '---------------------'

- Thread A is executing an madvise(..., MADV_DONTNEED) system call
  on the virtual address range "A(range)" shown in the picture.

sys_madvise
  // Acquire the semaphore in shared mode.
  down_read(&current->mm->mmap_sem)
  ...
  madvise_vma
    switch (behavior)
    case MADV_DONTNEED:
         madvise_dontneed
           zap_page_range
             unmap_vmas
               unmap_page_range
                 zap_pud_range
                   zap_pmd_range
                     //
                     // Assume that this huge page has never been accessed.
                     // I.e. content of the PMD entry is zero (not mapped).
                     //
                     if (pmd_trans_huge(*pmd)) {
                         // We don't get here due to the above assumption.
                     }
                     //
                     // Assume that Thread B incurred a page fault and
         .---------> // sneaks in here as shown below.
         |           //
         |           if (pmd_none_or_clear_bad(pmd))
         |               {
         |                 if (unlikely(pmd_bad(*pmd)))
         |                     pmd_clear_bad
         |                     {
         |                       pmd_ERROR
         |                         // Log "bad pmd ..." message here.
         |                       pmd_clear
         |                         // Clear the page's PMD entry.
         |                         // Thread B incremented the map count
         |                         // in page_add_new_anon_rmap(), but
         |                         // now the page is no longer mapped
         |                         // by a PMD entry (-> inconsistency).
         |                     }
         |               }
         |
         v
- Thread B is handling a page fault on virtual address "B(fault)" shown
  in the picture.

...
do_page_fault
  __do_page_fault
    // Acquire the semaphore in shared mode.
    down_read_trylock(&mm->mmap_sem)
    ...
    handle_mm_fault
      if (pmd_none(*pmd) && transparent_hugepage_enabled(vma))
          // We get here due to the above assumption (PMD entry is zero).
          do_huge_pmd_anonymous_page
            alloc_hugepage_vma
              // Allocate a new transparent huge page here.
            ...
            __do_huge_pmd_anonymous_page
              ...
              spin_lock(&mm->page_table_lock)
              ...
              page_add_new_anon_rmap
                // Here we increment the page's map count (starts at -1).
                atomic_set(&page->_mapcount, 0)
              set_pmd_at
                // Here we set the page's PMD entry which will be cleared
                // when Thread A calls pmd_clear_bad().
              ...
              spin_unlock(&mm->page_table_lock)

The mmap_sem does not prevent the race because both threads are acquiring
it in shared mode (down_read). Thread B holds the page_table_lock while
the page's map count and PMD table entry are updated. However, Thread A
does not synchronize on that lock.
====== end quote =======

Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/kernel/vm86_32.c     |    2 +
 fs/proc/task_mmu.c            |    9 ++++++
 include/asm-generic/pgtable.h |   57 +++++++++++++++++++++++++++++++++++++++++
 mm/memcontrol.c               |    4 +++
 mm/memory.c                   |   14 ++++++++--
 mm/mempolicy.c                |    2 +-
 mm/mincore.c                  |    2 +-
 mm/pagewalk.c                 |    2 +-
 mm/swapfile.c                 |    4 +--
 9 files changed, 87 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
index b466cab..328cb37 100644
--- a/arch/x86/kernel/vm86_32.c
+++ b/arch/x86/kernel/vm86_32.c
@@ -172,6 +172,7 @@ static void mark_screen_rdonly(struct mm_struct *mm)
 	spinlock_t *ptl;
 	int i;
 
+	down_write(&mm->mmap_sem);
 	pgd = pgd_offset(mm, 0xA0000);
 	if (pgd_none_or_clear_bad(pgd))
 		goto out;
@@ -190,6 +191,7 @@ static void mark_screen_rdonly(struct mm_struct *mm)
 	}
 	pte_unmap_unlock(pte, ptl);
 out:
+	up_write(&mm->mmap_sem);
 	flush_tlb();
 }
 
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 7dcd2a2..3efa725 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -409,6 +409,9 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	} else {
 		spin_unlock(&walk->mm->page_table_lock);
 	}
+
+	if (pmd_trans_unstable(pmd))
+		return 0;
 	/*
 	 * The mmap_sem held all the way back in m_start() is what
 	 * keeps khugepaged out of here and from collapsing things
@@ -507,6 +510,8 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	struct page *page;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
@@ -670,6 +675,8 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	int err = 0;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	/* find the first VMA at or above 'addr' */
 	vma = find_vma(walk->mm, addr);
@@ -961,6 +968,8 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 		spin_unlock(&walk->mm->page_table_lock);
 	}
 
+	if (pmd_trans_unstable(pmd))
+		return 0;
 	orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	do {
 		struct page *page = can_gather_numa_stats(*pte, md->vma, addr);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 76bff2b..10f8291 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -443,6 +443,63 @@ static inline int pmd_write(pmd_t pmd)
 #endif /* __HAVE_ARCH_PMD_WRITE */
 #endif
 
+/*
+ * This function is meant to be used by sites walking pagetables with
+ * the mmap_sem held in read mode to protect against MADV_DONTNEED and
+ * transhuge page faults. MADV_DONTNEED can convert a transhuge pmd
+ * into a null pmd and the transhuge page fault can convert a null pmd
+ * into a hugepmd or into a regular pmd (if the hugepage allocation
+ * fails). While holding the mmap_sem in read mode the pmd becomes
+ * stable and stops changing under us only if it's not null and not a
+ * transhuge pmd. When those races occur and this function makes a
+ * difference vs the standard pmd_none_or_clear_bad, the result is
+ * undefined, so behaving as if the pmd was none is safe (because it
+ * can return none anyway). The compiler-level barrier() is critically
+ * important to compute the two checks atomically on the same pmdval.
+ */
+static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
+{
+	/* depend on compiler for an atomic pmd read */
+	pmd_t pmdval = *pmd;
+	/*
+	 * The barrier will stabilize the pmdval in a register or on
+	 * the stack so that it will stop changing under the code.
+	 */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	barrier();
+#endif
+	if (pmd_none(pmdval))
+		return 1;
+	if (unlikely(pmd_bad(pmdval))) {
+		if (!pmd_trans_huge(pmdval))
+			pmd_clear_bad(pmd);
+		return 1;
+	}
+	return 0;
+}
+
+/*
+ * This is a noop if Transparent Hugepage Support is not built into
+ * the kernel. Otherwise it is equivalent to
+ * pmd_none_or_trans_huge_or_clear_bad(), and shall only be called in
+ * places that already verified the pmd is not none and that want to
+ * walk ptes while holding the mmap_sem in read mode (write mode
+ * doesn't need this). If THP is not enabled, the pmd can't go away
+ * under the code even if MADV_DONTNEED runs, but if THP is enabled
+ * we need to run pmd_trans_unstable before walking the ptes after
+ * split_huge_page_pmd returns (because it may have run while the pmd
+ * was null, after which a page fault can map in a THP and not a
+ * regular page).
+ */
+static inline int pmd_trans_unstable(pmd_t *pmd)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	return pmd_none_or_trans_huge_or_clear_bad(pmd);
+#else
+	return 0;
+#endif
+}
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d0e57a3..67b0578 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5193,6 +5193,8 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE)
@@ -5355,6 +5357,8 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(walk->mm, pmd);
+	if (pmd_trans_unstable(pmd))
+		return 0;
 retry:
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	for (; addr != end; addr += PAGE_SIZE) {
diff --git a/mm/memory.c b/mm/memory.c
index fa2f04e..e3090fc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1251,12 +1251,20 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 				VM_BUG_ON(!rwsem_is_locked(&tlb->mm->mmap_sem));
 				split_huge_page_pmd(vma->vm_mm, pmd);
 			} else if (zap_huge_pmd(tlb, vma, pmd, addr))
-				continue;
+				goto next;
 			/* fall through */
 		}
-		if (pmd_none_or_clear_bad(pmd))
-			continue;
+		/*
+		 * Here there can be other concurrent MADV_DONTNEED or
+		 * trans huge page faults running, and if the pmd is
+		 * none or trans huge it can change under us. This is
+		 * because MADV_DONTNEED holds the mmap_sem in read
+		 * mode.
+		 */
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
+			goto next;
 		next = zap_pte_range(tlb, vma, pmd, addr, next, details);
+	next:
 		cond_resched();
 	} while (pmd++, addr = next, addr != end);
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 47296fe..0a37570 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -512,7 +512,7 @@ static inline int check_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	do {
 		next = pmd_addr_end(addr, end);
 		split_huge_page_pmd(vma->vm_mm, pmd);
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
 		if (check_pte_range(vma, pmd, addr, next, nodes,
 				    flags, private))
diff --git a/mm/mincore.c b/mm/mincore.c
index 636a868..936b4ce 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -164,7 +164,7 @@ static void mincore_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 			}
 			/* fall through */
 		}
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			mincore_unmapped_range(vma, addr, next, vec);
 		else
 			mincore_pte_range(vma, pmd, addr, next, vec);
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 2f5cf10..aa9701e 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -59,7 +59,7 @@ again:
 			continue;
 
 		split_huge_page_pmd(walk->mm, pmd);
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			goto again;
 		err = walk_pte_range(pmd, addr, next, walk);
 		if (err)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d999f09..f31b29d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -932,9 +932,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (unlikely(pmd_trans_huge(*pmd)))
-			continue;
-		if (pmd_none_or_clear_bad(pmd))
+		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
 		ret = unuse_pte_range(vma, pmd, addr, next, entry, page);
 		if (ret)