Update: The second part of this post has been published and can be found here


Update 2023-06-02: The vulnerability has been assigned CVE-2023-33476.

This post will go over the details and root cause of a heap buffer overflow vulnerability I discovered in the HTTP chunk parsing code of MiniDLNA, affecting up to version 1.3.2. This vulnerability can be exploited to achieve remote code execution in the context of the user that the minidlna server is running as.

The second part of this post contains a detailed write-up of the exploit development process along with two fully weaponized exploits for both x86_64 and ARM32 targets and can be found here.

Vulnerability Summary

MiniDLNA is a simple media server software, with the aim of being fully compliant with DLNA/UPnP-AV clients. It is commonly deployed on Linux servers and across a wide range of embedded devices like routers and NAS devices.

The MiniDLNA/ReadyMedia media server (through version 1.3.2) contains a vulnerability in the HTTP request processing code responsible for handling requests that use chunked transfer encoding, which can result in an out-of-bounds read/write leading to remote code execution. The issue lies in the chunk size validation logic in ParseHttpHeaders(): the result of a comparison expression is incorrectly saved to the variable used to track the parsed chunk size, rather than the return value of the strtol() call that parses the size. This allows chunk size values larger than the total request size to pass validation; the application later parses these values and passes them as the size argument in call(s) to memmove(), resulting in an OOB read/write on the heap.

Affected Versions

  • All versions between 1.1.5 and 1.3.2 (inclusive)
  • Default versions provided by apt on Debian 11 and Ubuntu 22.04
  • Version deployed on the Netgear Nighthawk RAX30 w/ latest patches

Minimal Testcase to Trigger the Bug

This testcase triggers the bug by passing a huge chunk size (0xffffff) that is much larger than the total request length, resulting in an OOB read past the end of the request buffer allocation and into unmapped memory. The application should crash with a segmentation fault.

GET /status HTTP/1.0\r\nTransfer-Encoding:chunked\r\n\r\nffffff\r\n0\r\n\r\n


I originally discovered this vulnerability while fuzzing an older version of the software during a bug hunt on the Netgear RAX45. I wasn't familiar enough with the code base to know exactly the right place to fuzz, so I chose the most reachable part of the code: HTTP request handling. Fuzzing was done using both libFuzzer and AFL++ with custom harnesses. I made some minor changes to the code to improve fuzzability, including removing the network read/write functionality, but otherwise no other changes were needed.

The core portion of the harness used to find this particular bug is shown below. The full harness code and other helper code will be released soon.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "minixml.h"
#include "upnphttp.h"
#include "upnpsoap.h"
#include "containers.h"
#include "upnpreplyparse.h"
#include "scanner.h"
#include "log.h"

void ProcessHttpQuery_upnphttp(struct upnphttp *);

int LLVMFuzzerTestOneInput(char *buf, size_t size)
{
    struct upnphttp *h = New_upnphttp(1);
    const char *endheaders;

    h->req_buf = (char *)malloc(size + 1);
    if (!h->req_buf)
        return 0;
    memcpy(h->req_buf, buf, size);
    h->req_buflen = size;
    h->req_buf[h->req_buflen] = '\0';

    /* search for the string "\r\n\r\n" marking the end of the headers */
    endheaders = strstr(h->req_buf, "\r\n\r\n");
    if (endheaders)
    {
        h->req_contentoff = endheaders - h->req_buf + 4;
        h->req_contentlen = h->req_buflen - h->req_contentoff;
        /* hand the request off to the parsing code under test */
        ProcessHttpQuery_upnphttp(h);
        return 0;
    }
    return -1;
}

After a few days of fuzzing, tweaking the harnesses, and fuzzing some more, I had come across a handful of crashes that seemed somewhat promising. Among them was this one.

An interesting side note: thanks to Netgear's terrible practices with their GPL code releases, it turned out that the actual device I was testing against (RAX45) was not only running a newer version than the one included in their GPL package, it was also a custom fork that had apparently already fixed these bugs. I confirmed this by reversing the binary taken straight from the device. Maybe it's just me, but it's nuts that they're fixing vulnerabilities in their internal forks of open-source code without providing those fixes in their GPL packages, let alone pushing them upstream. After discovering this, I decided to pivot to the latest code from the MiniDLNA Git repo and confirmed that version was vulnerable as well.

Root Cause Analysis

The vulnerable code is reached for any valid request that includes the Transfer-Encoding:chunked HTTP header and that meets the following conditions:

  • Correctly terminates the HTTP headers with \r\n\r\n sequence
  • Includes a terminator chunk at the end of the request body with a chunk size of 0
  • Correctly follows each chunk size value with the terminator sequence \r\n

The function call chain is shown below, beginning in Process_upnphttp() (upnphttp.c:1096):

Process_upnphttp() →
    ProcessHttpQuery_upnphttp(h) →
        ParseHttpHeaders(h) ← (returns)
    ProcessHttpQuery_upnphttp(h) -- VULNERABLE CODE

Initial Request Handling: Process_upnphttp()

The code responsible for the initial reception and processing of requests is in Process_upnphttp() and is described below.

Process_upnphttp(), upnphttp.c:1096

Data is read from the socket using recv(), up to 2048 bytes at a time, into a local 2048-byte char buffer. If data is received, the code calculates new_req_buflen as req_buflen plus the number of bytes received (plus one for a NUL terminator) and checks whether the new buffer length would exceed a max value of 1MB. If so, an error is returned; otherwise the code continues.

    struct upnphttp *h = ev->data;
    char buf[2048];
    int new_req_buflen;
    const char *endheaders;

    // new_req_buflen is the sum of the last calculated req_buflen, the number
    // of bytes the call to `recv` returned, and 1 for the NUL terminator
    new_req_buflen = n + h->req_buflen + 1;

    // check to see if the new buf len exceeds a max value (1MB)
    if (new_req_buflen >= 1024 * 1024)
    {
        DPRINTF(E_ERROR, L_HTTP, "Receive headers too large (received %d bytes)\n", new_req_buflen);
        h->state = 100;
    }

Further down in this function, realloc() is called using the calculated new_req_buflen as the size and passing the pointer h->req_buf as the buffer to perform the reallocation on. On the first round of processing (i.e. the first 2048 bytes received), h->req_buf will be NULL and realloc() behaves like a normal call to malloc(). The data is copied from the local buffer into the buffer pointed to by h->req_buf, and the h->req_buflen field of the upnphttp struct is updated with the new size.

                h->req_buf = (char *)realloc(h->req_buf, new_req_buflen);
                if (!h->req_buf)
                {
                    DPRINTF(E_ERROR, L_HTTP, "Receive headers: %s\n", strerror(errno));
                    h->state = 100;
                }

                // copy n bytes from the local `buf[2048]` to the alloc'ed memory;
                // req_buflen will be 0 on the first round of processing since it's
                // not updated until the next line
                memcpy(h->req_buf + h->req_buflen, buf, n);
                // update req_buflen
                h->req_buflen += n;

Next, the buffer is null terminated and then passed to strstr() to search for an \r\n\r\n sequence to determine whether the full HTTP headers section of the request has been received. Upon finding this sequence, the start of the request body and its size are calculated and the respective values in the upnphttp struct (req_contentoff and req_contentlen) are updated. The code then calls ProcessHttpQuery_upnphttp() to move onto parsing.

            h->req_buf[h->req_buflen] = '\0';

            // search for the string "\r\n\r\n" and calculate content offset and
            // content length if found
            endheaders = strstr(h->req_buf, "\r\n\r\n");
            if (endheaders)
            {
                h->req_contentoff = endheaders - h->req_buf + 4;
                h->req_contentlen = h->req_buflen - h->req_contentoff;
            }

NOTE: The headers have only been read into h->req_buf at this point without parsing; they will be parsed into fields of the upnphttp struct within the call to ProcessHttpQuery_upnphttp() at the end of this code block.


The first call to ProcessHttpQuery_upnphttp() happens only once the \r\n\r\n sequence is found, indicating the end of the HTTP header section has been received. After parsing the HTTP verb and path, it calls ParseHttpHeaders() to perform the actual parsing of the header data before doing anything else; the source of the vulnerability is found here.

When ParseHttpHeaders() returns having set h->req_chunklen to 0, indicating the full request was received, processing of the chunks in the HTTP body resumes in ProcessHttpQuery_upnphttp(); this is where the corruption caused by the bug is triggered, when h->req_chunklen is passed to memmove() without validating that it does not exceed the allocated buffer.

BUG: Incorrect Chunk Size Validation in ParseHttpHeaders()

After reading data from the socket and finding the end of the HTTP header section as indicated by the presence of the \r\n\r\n sequence, Process_upnphttp() passes the request off to ProcessHttpQuery_upnphttp(), which in turn calls ParseHttpHeaders() to perform the actual parsing of the header data into the upnphttp struct. If the HTTP headers contain the Transfer-Encoding:chunked header, a flag is set on the structure which results in the application reaching the vulnerable code on line 428.

After some rudimentary sanity checks of the h->req_chunklen and h->req_contentoff fields of the struct, the code iterates through the rest of the request body, attempting to read the numeric size values for each of the chunks at the expected offsets based on sizes read. It combines the step of reading the size value and attempting to perform the size validation inside the conditions of the following while loop:

    while( (line < (h->req_buf + h->req_buflen)) &&
           (h->req_chunklen = strtol(line, &endptr, 16) > 0) &&
           (endptr != line) )

The following checks are performed:

  • Check whether the char pointer line has been incremented past the end of the allocation in h->req_buf; line is incremented by the parsed req_chunklen at the end of the inner block of the while loop.
  • Attempt to parse a size value using strtol() and ensure the value is greater than 0.
  • Ensure the char pointer endptr is not pointing to the same location as line after the call to strtol(), which would indicate no parsable digit was found.

The bug occurs in the evaluation of the second condition, where strtol() is called; the intent is for the return value of strtol() to be saved to h->req_chunklen and for the saved value to be compared to 0 for the validation step. Instead, because > binds more tightly than =, the result of the comparison expression strtol(...) > 0 is saved to h->req_chunklen, so h->req_chunklen holds 1 for any chunk size greater than 0, resulting in an incorrect calculation of the total expected request size.

Within the inner block of the while loop, the value saved to h->req_chunklen is used to increment the line pointer to the location where the next chunk size is expected, indicating the true intent of the code (comments added for annotation).

        endptr = strstr(endptr, "\r\n");
        if (!endptr)
            break; /* malformed chunk: no terminating "\r\n" */

        // if strtol() returned a size greater than 0, h->req_chunklen will only
        // ever hold 1 (the Boolean result of the comparison), so `line` advances
        // by at most 3 bytes and the first validation condition in the while
        // loop will not detect large values that exceed the size of the request
        // buffer allocation
        line = endptr + h->req_chunklen + 2;

This means that even for very large chunk sizes, the line pointer advances by at most 3 bytes per iteration (req_chunklen of 1, plus 2 for the \r\n), so the bounds check in the first condition of the while loop never catches sizes that exceed the length of data sent in the request body.

OOB read/write on chunk sizes > request length in ProcessHttpQuery_upnphttp()

After the headers have been parsed in ParseHttpHeaders(), execution returns to ProcessHttpQuery_upnphttp(), where the value saved to h->req_chunklen is checked; if it is 0 after the request header parsing, it is assumed that the full request has been received and that the request buffer at h->req_buf is large enough to fit all the expected data based on chunk sizes.

Parsing of the actual chunks from the request body then continues in the code block below (upnphttp.c:893) after returning from ParseHttpHeaders():

    char *chunkstart, *chunk, *endptr, *endbuf;
    // chunk, endbuf, and chunkstart all begin pointing to the start of the http request body
    chunk = endbuf = chunkstart = h->req_buf + h->req_contentoff;

    while ((h->req_chunklen = strtol(chunk, &endptr, 16)) > 0 && (endptr != chunk))
    {
        endptr = strstr(endptr, "\r\n");
        if (!endptr)
            break; /* malformed chunk */
        endptr += 2;

        // this call to memmove will use the chunklen parsed by strtol() above
        // without checking that it doesn't read beyond the end of the request buf
        memmove(endbuf, endptr, h->req_chunklen);

        endbuf += h->req_chunklen;
        chunk = endptr + h->req_chunklen;
    }
    h->req_contentlen = endbuf - chunkstart;
    h->req_buflen = endbuf - h->req_buf;
    h->state = 100;

Summary of while loop conditions/checks:

  • Condition 1:
    • Attempt to parse a chunk size as a long using strtol(), passing in the chunk pointer, which initially points to the start of the request body section (immediately following the headers). The number is parsed as base 16, meaning hex digits A-F are considered valid.
    • The return value of the call to strtol() is saved to h->req_chunklen and compared to 0; the comparison must evaluate true for the parsing to continue.
  • Condition 2:
    • Check that the value that strtol() saved to endptr does not point to the same place as chunk, which would indicate that no valid numeric value could be parsed from the string.

The code in this block relies on the validation performed during the header parsing step and so it parses and uses the user-controlled chunk size as the size argument in calls to memmove() without bounds checking. This results in the application accepting chunk size values that exceed the number of bytes received in the request, leading to an OOB read/write.

The while loop used to iterate through the chunks in this block is nearly identical to the one in ParseHttpHeaders(), except this one includes an additional set of parentheses around the assignment and comparison expressions in the call to strtol(), resulting in the correct assignment of the return value to h->req_chunklen. Had the same bug present in the ParseHttpHeaders() chunk size parsing code been introduced here, it would have probably been noticed much sooner as chunks would likely get truncated as a result of the incorrect logic.


Suggested Fix

The issue can be fixed by wrapping the assignment expression in the second condition of the while loop in ParseHttpHeaders() (upnphttp.c:150) in parentheses, correctly separating the assignment from the comparison against 0.

The fixed code would be:

    while ((line < (h->req_buf + h->req_buflen)) &&
           ((h->req_chunklen = strtol(line, &endptr, 16)) > 0) && // FIX HERE
           (endptr != line) )

A patch with this fix was provided to the package maintainer along with the vulnerability report.

Disclosure Timeline

  • 2023-04-18: Submitted vulnerability report to Zero Day Initiative RE: vulnerable Netgear RAX30
  • 2023-05-04: ZDI rejects the vulnerability report (not interested in the product)
  • 2023-05-05: Request a CVE ID from Mitre
  • 2023-05-05: Unable to find a private avenue to report to the package maintainer directly, so instead submit reports to Debian and Ubuntu security teams since they have vuln versions in their repos
  • 2023-05-08: Debian security team shares the email address of the package maintainer; reach out to them over email and submit a private bug report on the Sourceforge page.
  • 2023-05-08: Package maintainer acknowledges report and begins working on fix
  • 2023-05-31: Package maintainer releases fixed version, 1.3.3
  • 2023-05-31: Follow up message to Mitre, CVE assignment pending
  • 2023-06-02: Mitre assigns the vulnerability CVE-2023-33476


As this is a heap-based vulnerability, exploitability will depend, to some extent, on the glibc version the application is linked against and on the exploit mitigations it was compiled with. Ultimately, because the bug provides a strong read/write primitive, there are various options for exploitation.

Part 2 of this series going over the exploit development process and the exploits I wrote for this bug can be found here.