Glibc Heap Exploitation Basics : ptmalloc2 internals (Part 3) : The Main Arena


Hi folks, this post is part of a series in which I explore the internals of ptmalloc2, glibc's heap allocator. In this post I'm going to pay specific attention to the main_arena and the malloc_state structure, which holds the important bookkeeping pointers (bins, top chunk, flags and so on) the allocator uses when searching heap memory.

The main arena

Glibc bakes the main_arena struct right into its own data segment in process memory. It's a struct of type malloc_state and holds the following fields (extract from glibc-2.23/malloc/malloc.c):

1686 struct malloc_state
1687 { 
1688   /* Serialize access.  */
1689   mutex_t mutex;
1690   
1691   /* Flags (formerly in max_fast).  */
1692   int flags;
1693   
1694   /* Fastbins */
1695   mfastbinptr fastbinsY[NFASTBINS];
1696   
1697   /* Base of the topmost chunk -- not otherwise kept in a bin */
1698   mchunkptr top;
1699   
1700   /* The remainder from the most recent split of a small request */
1701   mchunkptr last_remainder;
1702   
1703   /* Normal bins packed as described above */
1704   mchunkptr bins[NBINS * 2 - 2];
1705   
1706   /* Bitmap of bins */
1707   unsigned int binmap[BINMAPSIZE];
1708   
1709   /* Linked list */
1710   struct malloc_state *next;
1711   
1712   /* Linked list for free arenas.  Access to this field is serialized
1713      by free_list_lock in arena.c.  */
1714   struct malloc_state *next_free;
1715 
1716   /* Number of threads attached to this arena.  0 if the arena is on
1717      the free list.  Access to this field is serialized by
1718      free_list_lock in arena.c.  */
1719   INTERNAL_SIZE_T attached_threads;
1720 
1721   /* Memory allocated from the system in this arena.  */
1722   INTERNAL_SIZE_T system_mem;
1723   INTERNAL_SIZE_T max_system_mem;
1724 };
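main_arena itself is just a global instance of this struct living in libc's writable data, so with glibc debug symbols loaded gdb can resolve it by name. As a quick sanity check (the address here lines up with the gdb output later in this post, but it will differ between systems):

(gdb) p &main_arena
$1 = (struct malloc_state *) 0x7ffff7dd1b20 <main_arena>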


Here's what some of the interesting fields mean (in addition to the already very helpful docs):

  • mutex_t mutex - this field is an integer that can be used to stop other threads from messing with the arena while it's being modified. We can see a confirmation of this in the code:

 29 /* The mutex functions used to do absolutely nothing, i.e. lock,
 30    trylock and unlock would always just return 0.  However, even
 31    without any concurrently active threads, a mutex can be used
 32    legitimately as an `in use' flag.  To make the code that is
 33    protected by a mutex async-signal safe, these macros would have to
 34    be based on atomic test-and-set operations, for example. */
 35 typedef int mutex_t;
 36 
 37 # define mutex_init(m)          (*(m) = 0)
 38 # define mutex_lock(m)          ({ *(m) = 1; 0; })
 39 # define mutex_trylock(m)       (*(m) ? 1 : ((*(m) = 1), 0))
 40 # define mutex_unlock(m)        (*(m) = 0)
 41 
 42 #endif /* !defined mutex_init */
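To see where that lock matters, here's a heavily simplified sketch of the locking pattern around an allocation (the shape of __libc_malloc in glibc 2.23, not the literal source):

/* heavily simplified sketch -- not the literal glibc code */
mutex_lock (&av->mutex);            /* take the arena before touching its bins     */
victim = _int_malloc (av, bytes);   /* all the fastbin/bin surgery happens in here */
mutex_unlock (&av->mutex);          /* let other threads at the arena again        */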


  • int flags - an integer field the arena uses to mark itself with properties, for instance whether its memory is contiguous and whether it currently holds any fast chunks (remember there can be more than one arena in a process; peep at the linked-list fields and attached_threads below). To make working with this field easy there is an accompanying set of macros in the code:
1640 #define have_fastchunks(M)     (((M)->flags & FASTCHUNKS_BIT) == 0)
1641 #define clear_fastchunks(M)    catomic_or (&(M)->flags, FASTCHUNKS_BIT)
1642 #define set_fastchunks(M)      catomic_and (&(M)->flags, ~FASTCHUNKS_BIT)
1643 
...
1652 
1653 #define NONCONTIGUOUS_BIT     (2U)
1654 
1655 #define contiguous(M)          (((M)->flags & NONCONTIGUOUS_BIT) == 0)
1656 #define noncontiguous(M)       (((M)->flags & NONCONTIGUOUS_BIT) != 0)
1657 #define set_noncontiguous(M)   ((M)->flags |= NONCONTIGUOUS_BIT)
1658 #define set_contiguous(M)      ((M)->flags &= ~NONCONTIGUOUS_BIT)

These are pretty self-documenting.
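If you ever want to decode the raw flags value of an arena by hand (say, out of a gdb memory dump), the two bits boil down to the checks below; dump_arena_flags is just a hypothetical helper I'm using for illustration:

#include <stdio.h>

#define FASTCHUNKS_BIT    (1U)   /* set when the arena holds NO fast chunks      */
#define NONCONTIGUOUS_BIT (2U)   /* set when the arena's memory is noncontiguous */

/* hypothetical helper: interpret an arena's flags value */
void dump_arena_flags (unsigned int flags)
{
  printf ("fast chunks present : %s\n", (flags & FASTCHUNKS_BIT) == 0 ? "yes" : "no");
  printf ("contiguous memory   : %s\n", (flags & NONCONTIGUOUS_BIT) == 0 ? "yes" : "no");
}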

  • fastbinsY - the array of fastbin list heads, one entry per fastbin size. This gives the allocator a common starting point for running down the fastbin lists. As I may have mentioned before, fastbins are organised by size, so indexing this array by size drops you straight onto the right free list.
  • mchunkptr top - pointer to the top chunk on the heap.
  • skipping last_remainder for now
  • mchunkptr bins - the array holding the heads of the normal (unsorted, small and large) bins, stored as pairs of forward/backward pointers. Free chunks above the fastbin max size end up linked in here, and the first two indexes belong to the unsorted bin according to the documentation.
  • unsigned int binmap - a bitmap with one bit per bin, telling the allocator which bins might hold free chunks so it can skip the empty ones. We can see how it's used in the do_check_malloc_state function, which is thrown into the source to aid debugging:

1576 #define mark_bin(m, i) ((m)->binmap[idx2block (i)] |= idx2bit (i))
1577 #define unmark_bin(m, i) ((m)->binmap[idx2block (i)] &= ~(idx2bit (i)))
1578 #define get_binmap(m, i) ((m)->binmap[idx2block (i)] & idx2bit (i))
...
2111 static void
2112 do_check_malloc_state (mstate av)
2113 {
2114   int i;
2115   mchunkptr p;
2116   mchunkptr q;
...
2188       /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2189       if (i >= 2)
2190         {
2191           unsigned int binbit = get_binmap (av, i);
2192           int empty = last (b) == b;
2193           if (!binbit)
2194             assert (empty);
2195           else if (!empty)
2196             assert (binbit);
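For reference, the idx2block and idx2bit helpers that mark_bin and get_binmap lean on are defined nearby in malloc.c; they come down to roughly this (each 32-bit map word covers 32 bins):

#define BINMAPSHIFT      5
#define BITSPERMAP       (1U << BINMAPSHIFT)
#define BINMAPSIZE       (NBINS / BITSPERMAP)

#define idx2block(i)     ((i) >> BINMAPSHIFT)
#define idx2bit(i)       ((((unsigned int) 1) << ((i) & ((1U << BINMAPSHIFT) - 1))))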

So do_check_malloc_state uses get_binmap to extract a "binbit" for each bin and asserts that it agrees with whether the bin is actually empty. Anyway, that's enough about the fields; let's see them in action.

Exploring the main_arena with gdb

To explore the main_arena I whipped up a simple C program that allocates some chunks in series according to a size I specify on the command line.

> cat arena.c 

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <time.h>


char *make_string(size_t length){
    char *arr = (char *) malloc(length);
    asm("int $3");   /* drop into gdb right after the malloc */
    return arr;
}

void free_string(char *arr){
    free(arr);
    asm("int $3");   /* drop into gdb right after the free */
    return;
}

/*
Generate chunks in a list with a single size
- shows us how fast chunks work
*/
void make_chunk_field(size_t chunk_length, size_t amount_of_chunks){
    size_t index = 0;
    char **chunks = malloc(amount_of_chunks * sizeof(char *));
    //printf("[*] chunk array head at [%p]\n",&chunks);

    for (index = 0; index < amount_of_chunks; index++){
        chunks[index] = make_string(chunk_length);
        memset(chunks[index], 0x40 + index, chunk_length);
    }
    for (index = 0; index < amount_of_chunks; index++){
        memset(chunks[index], 0xFF, chunk_length);
        free_string(chunks[index]);
    }
}

int main(int argc, char **argv){
    if (argc < 4){
        printf("Usage : %s [chunk length (bytes)] [number of chunks] [rounds]\n", argv[0]);
        return 2;
    }
    size_t chunk_length = atoi(argv[1]);
    size_t number_of_chunks = atoi(argv[2]);
    int cycles = atoi(argv[3]);
    int index = 0;
    for (index = 0; index < cycles; index++){
        make_chunk_field(chunk_length, number_of_chunks);
    }
    return 0;
}
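For anyone following along, something like this builds and launches it (add -no-pie if your gcc defaults to position-independent executables, so the heap still shows up around 0x603000 as in the dumps below):

gcc -g -o arena arena.c
gdb ./arena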

I then ran this in gdb and set up a .gdbinit to dump the main_arena on every stop. Simple .gdbinit file:

> cat ~/.gdbinit
define hook-stop
x/16xg 0x603000
x/18xg &main_arena
info threads
end

The hook also dumps what is usually the start of the heap (0x603000 when launching this binary under gdb) along with some thread information. The first thing I wanted to know was where each fastbin size ends up in practice. The basic procedure was:

  1. Assign a bunch of chunks of a given size
  2. free up all of them
  3. at each free, check the main_arena fastbinsY array contents
We of course need to know where fastbinsY starts. With gdb and glibc symbols this is pretty easy: run the program, land on a breakpoint, and issue this command:


(gdb) x/1xg &main_arena->fastbinsY
0x7ffff7dd1b28 <main_arena+8>: 0x0000000000000000
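The same one-liner works for any other field of the struct (addresses and contents will obviously differ per run), for example:

(gdb) x/1xg &main_arena.top
(gdb) x/1xg &main_arena.system_mem
(gdb) x/4xg &main_arena.bins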

As you can see, it's pretty much the same story for whichever field you're curious about. Okay, so we know where fastbinsY starts. Let's see what happens when we increase the chunk size by 10 bytes each time; basically I just ran arena.c like this:
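(gdb) r 10 5 1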



The r 10 5 1 here means: run with chunks of 10 bytes, allocate an array of 5 chunks, and allocate and then deallocate them for 1 round. After collecting data for chunk sizes of 10, 20, 30... and so on, up to the point where fastbinsY was no longer being used, I saw this:




So clearly, as soon as a request is bigger than 120 bytes on my machine the freed chunk stops showing up in fastbinsY and lands in the unsorted bin instead, which is exactly what the dump shows when we request 130-byte chunks.
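That 120-byte boundary isn't arbitrary: a 120-byte request rounds up to a 128-byte chunk, which is exactly the default fastbin ceiling on 64-bit, while a 130-byte request rounds up to 144 bytes and misses the fastbins entirely. The relevant definitions in malloc.c boil down to this:

#define DEFAULT_MXFAST     (64 * SIZE_SZ / 4)   /* 128 bytes when SIZE_SZ == 8 */

/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz) \
  ((((unsigned int) (sz)) >> (SIZE_SZ == 8 ? 4 : 3)) - 2)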


So what happens if you try to change main_arena fields while the heap is in use, i.e. while the program is running? Let's see:


Glibc will panic when you mess with the main_arena, but the error is interesting: it isn't about the main_arena itself, it's about a fast chunk. Which suggests some legitimate fastbin handling happened using the corrupted data. We can see what is going on with another experiment, looking at which field of the fast chunk actually ends up in the main_arena, by doing the following:


The screenshot above was produced by running the alloc/dealloc cycle for 2 rounds. What I did was:


  1. assigned some chunks to prep the heap (all the same size, running the same arena.c quoted above)
  2. then re-assigned them
  3. and while in flight I tinkered with the main_arena pointers. 
After injecting a sample pointer we can see that the 0x434343 value gets popped into the main_arena at the end of the error dump:


Pretty interesting! The field that gets popped out is none other than the chunk's fd pointer, which points to the next free chunk in that fastbin. So we now know that when a fastbin chunk is handed back out, whatever its fd pointer holds becomes the new bin head in main_arena. Right now I can't really see a useful way to abuse this, it just surfaces some behavior that may be useful later.
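To make that concrete, here's roughly what the fastbin paths in _int_free and _int_malloc do with that fd field, with the atomic compare-and-exchange machinery from the real glibc 2.23 code stripped out:

/* free side (simplified): push the chunk onto the front of its fastbin */
fb = &fastbin (av, fastbin_index (size));
p->fd = *fb;            /* freed chunk points at the old bin head       */
*fb   = p;              /* fastbinsY entry in the arena now points at p */

/* malloc side (simplified): pop the head and promote its fd */
victim = *fb;
*fb    = victim->fd;    /* whatever fd holds becomes the new bin head -- */
                        /* this is the value we saw land in main_arena   */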

That's going to be it for this one. The next post is going to cover some stuff about the heap life cycle: which functions actually get called when the heap sets up and tears down behind the scenes.

References and Reading


  1. https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/malloc_state.html 
  2. http://core-analyzer.sourceforge.net/index_files/Page335.html 
  3. https://articles.forensicfocus.com/2017/10/16/linux-memory-forensics-dissecting-the-user-space-process-heap/ 
  4. Heap Consistency Checking (GNU.org) https://www.gnu.org/software/libc/manual/html_node/Heap-Consistency-Checking.html#Heap-Consistency-Checking
  5. https://stackoverflow.com/questions/1665419/do-threads-have-a-distinct-heap 




