An issue was discovered in Squid before 4.13 and 5.x before 5.0.4. Due to incorrect data validation, HTTP Request Splitting attacks may succeed against HTTP and HTTPS traffic, leading to cache poisoning. Any client, including browser scripts, can bypass local security and poison the browser cache and any downstream caches with content from an arbitrary source. Squid uses a string search instead of parsing the Transfer-Encoding header to detect chunked encoding, which allows an attacker to hide a second request inside the Transfer-Encoding header value: Squid interprets the value as chunked and splits the hidden data out into a second request delivered upstream. Squid then delivers two distinct responses to the client, corrupting any downstream caches.
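As a rough illustration of the flaw described above (this is not Squid's actual C++ code), the sketch below contrasts a hypothetical substring-based check for chunked encoding with a stricter token parse of the Transfer-Encoding value. The function names and the crafted value are assumptions made for the example.

```python
# Illustrative sketch only; not Squid source code.

def is_chunked_lax(te_value: str) -> bool:
    """Hypothetical string-search check: any occurrence of "chunked"
    anywhere in the header value is treated as chunked transfer coding."""
    return "chunked" in te_value.lower()


def is_chunked_strict(te_value: str) -> bool:
    """Stricter check: split the value into comma-separated transfer-coding
    tokens and require the final token to be exactly "chunked"."""
    tokens = [t.strip().lower() for t in te_value.split(",")]
    return bool(tokens) and tokens[-1] == "chunked"


# A hypothetical malformed Transfer-Encoding value: it contains the word
# "chunked" but is not the "chunked" transfer-coding token itself.
crafted = "xchunked"

print(is_chunked_lax(crafted))     # True  -> a lax parser treats the body as chunked
print(is_chunked_strict(crafted))  # False -> a strict parser does not
```

When a proxy and an upstream server disagree in this way about message framing, the bytes one side treats as body data can be interpreted by the other side as a second, attacker-controlled request, which is the splitting behavior described above.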
The product compares two entities in a security-relevant context, but the comparison is incorrect, which may lead to resultant weaknesses.
Name | Vendor | Start Version | End Version |
---|---|---|---|
Squid | Squid-cache | * | 4.13 (excluding) |
Squid | Squid-cache | 5.0 (including) | 5.0.4 (excluding) |
Red Hat Enterprise Linux 7 | RedHat | squid-7:3.5.20-17.el7_9.4 | * |
Red Hat Enterprise Linux 8 | RedHat | squid:4-8020020200827100059.4cda2c84 | * |
Red Hat Enterprise Linux 8.0 Update Services for SAP Solutions | RedHat | squid:4-8000020200827105727.f8e95b4e | * |
Red Hat Enterprise Linux 8.1 Extended Update Support | RedHat | squid:4-8010020200827104751.c27ad7f8 | * |
Squid | Ubuntu | devel | * |
Squid | Ubuntu | focal | * |
Squid | Ubuntu | groovy | * |
Squid | Ubuntu | hirsute | * |
Squid | Ubuntu | trusty | * |
Squid3 | Ubuntu | bionic | * |
Squid3 | Ubuntu | precise/esm | * |
Squid3 | Ubuntu | trusty | * |
Squid3 | Ubuntu | xenial | * |
This Pillar covers several possibilities:

* The comparison checks one factor incorrectly.
* The comparison should consider multiple factors, but it does not check at least one of those factors at all.
* The comparison checks the wrong factor.
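As a generic illustration of this weakness class (not taken from Squid's code; the hostnames and function names are hypothetical), the sketch below shows a partial comparison used where an exact match is required, so an attacker-chosen value slips past the check.

```python
# Generic illustration of an incorrect comparison; hostnames are made up.

TRUSTED_HOST = "updates.example.com"

def is_trusted_lax(host: str) -> bool:
    # Incorrect comparison: only a prefix of the hostname is checked,
    # so "updates.example.com.attacker.net" is also accepted.
    return host.startswith(TRUSTED_HOST)

def is_trusted_strict(host: str) -> bool:
    # Correct comparison: the whole value must match exactly.
    return host == TRUSTED_HOST

print(is_trusted_lax("updates.example.com.attacker.net"))     # True (flaw)
print(is_trusted_strict("updates.example.com.attacker.net"))  # False
```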