A vulnerability has been identified in Cerberus PRO EN Engineering Tool (All versions < IP8), Cerberus PRO EN Fire Panel FC72x IP6 (All versions < IP6 SR3), Cerberus PRO EN Fire Panel FC72x IP7 (All versions < IP7 SR5), Cerberus PRO EN X200 Cloud Distribution IP7 (All versions < V3.0.6602), Cerberus PRO EN X200 Cloud Distribution IP8 (All versions < V4.0.5016), Cerberus PRO EN X300 Cloud Distribution IP7 (All versions < V3.2.6601), Cerberus PRO EN X300 Cloud Distribution IP8 (All versions < V4.2.5015), Cerberus PRO UL Compact Panel FC922/924 (All versions < MP4), Cerberus PRO UL Engineering Tool (All versions < MP4), Cerberus PRO UL X300 Cloud Distribution (All versions < V4.3.0001), Desigo Fire Safety UL Compact Panel FC2025/2050 (All versions < MP4), Desigo Fire Safety UL Engineering Tool (All versions < MP4), Desigo Fire Safety UL X300 Cloud Distribution (All versions < V4.3.0001), Sinteso FS20 EN Engineering Tool (All versions < MP8), Sinteso FS20 EN Fire Panel FC20 MP6 (All versions < MP6 SR3), Sinteso FS20 EN Fire Panel FC20 MP7 (All versions < MP7 SR5), Sinteso FS20 EN X200 Cloud Distribution MP7 (All versions < V3.0.6602), Sinteso FS20 EN X200 Cloud Distribution MP8 (All versions < V4.0.5016), Sinteso FS20 EN X300 Cloud Distribution MP7 (All versions < V3.2.6601), Sinteso FS20 EN X300 Cloud Distribution MP8 (All versions < V4.2.5015), Sinteso Mobile (All versions < V3.0.0). The network communication library in affected systems does not validate the length of certain X.509 certificate attributes which might result in a stack-based buffer overflow. This could allow an unauthenticated remote attacker to execute code on the underlying operating system with root privileges.
The product copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow.
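As an illustration only, the following minimal C sketch (not taken from the affected products; the function and buffer names are hypothetical) shows the CWE-120 pattern described above and a bounded alternative that rejects over-long input:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of CWE-120: the attribute value comes from an
 * untrusted source (e.g. a certificate field), but its length is never
 * checked against the destination buffer. */
void copy_attribute_unsafe(const char *attr_value)
{
    char buf[64];                 /* fixed-size stack buffer          */
    strcpy(buf, attr_value);      /* overflows if attr_value > 63 B   */
    printf("attribute: %s\n", buf);
}

/* Bounded variant: reject over-long input instead of overflowing. */
int copy_attribute_checked(const char *attr_value)
{
    char buf[64];
    size_t len = strlen(attr_value);

    if (len >= sizeof(buf))
        return -1;                        /* input too long -> reject */

    memcpy(buf, attr_value, len + 1);     /* copy including the NUL   */
    printf("attribute: %s\n", buf);
    return 0;
}
```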
Name | Vendor | Start Version | End Version |
---|---|---|---|
Cerberus_pro_en_engineering_tool | Siemens | * | ip8 (excluding) |
Cerberus_pro_en_fire_panel_fc72x | Siemens | * | ip8 (excluding) |
Cerberus_pro_en_x200_cloud_distribution | Siemens | * | 4.0.5016 (excluding) |
Cerberus_pro_en_x300_cloud_distribution | Siemens | * | 4.2.5015 (excluding) |
Sinteso_fs20_en_engineering_tool | Siemens | * | mp8 (excluding) |
Sinteso_fs20_en_fire_panel_fc20 | Siemens | * | mp8 (excluding) |
Sinteso_fs20_en_x200_cloud_distribution | Siemens | * | 4.0.5016 (excluding) |
Sinteso_fs20_en_x300_cloud_distribution | Siemens | * | 4.2.5015 (excluding) |
Sinteso_mobile | Siemens | * | 3.0.0 (excluding) |
Use a language that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.
For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer.
Be wary that a language’s interface to native code may still be subject to overflows, even if the language itself is theoretically safe.
Use a vetted library or framework that does not allow this weakness to occur or provides constructs that make this weakness easier to avoid.
Examples include the Safe C String Library (SafeStr) by Messier and Viega [REF-57], and the Strsafe.h library from Microsoft [REF-56]. These libraries provide safer versions of overflow-prone string-handling functions.
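For instance, a sketch using the Strsafe.h routine StringCchCopyA on Windows (buffer size and error handling are illustrative) could look like this; the call fails with an error HRESULT instead of writing past the destination:

```c
#include <windows.h>
#include <strsafe.h>
#include <stdio.h>

/* Sketch of a bounded copy using Strsafe.h (Windows only). */
int copy_name(const char *untrusted_name)
{
    char buf[64];
    HRESULT hr = StringCchCopyA(buf, sizeof(buf), untrusted_name);

    if (FAILED(hr)) {
        /* e.g. STRSAFE_E_INSUFFICIENT_BUFFER: input did not fit */
        return -1;
    }
    printf("name: %s\n", buf);
    return 0;
}
```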
Use automatic buffer overflow detection mechanisms that are offered by certain compilers or compiler extensions. Examples include: the Microsoft Visual Studio /GS flag, Fedora/Red Hat FORTIFY_SOURCE GCC flag, StackGuard, and ProPolice, which provide various mechanisms including canary-based detection and range/index checking.
D3-SFCV (Stack Frame Canary Validation) from D3FEND [REF-1334] discusses canary-based detection in detail.
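As a rough example of how such mechanisms are typically enabled (exact flags and defaults vary by compiler version), the build commands might look like:

```
# GCC/Clang: stack canaries plus fortified libc wrappers
# (_FORTIFY_SOURCE requires optimization, e.g. -O2)
gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 -o app app.c

# MSVC: buffer security check (/GS is on by default in recent toolchains)
cl /GS app.c
```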
Consider adhering to the following rules when allocating and managing an application's memory:

- Double check that your buffer is as large as you specify.
- When using functions that accept a number of bytes to copy, such as strncpy(), be aware that if the destination buffer size is equal to the source buffer size, it may not NUL-terminate the string.
- Check buffer boundaries if accessing the buffer in a loop and make sure you are not in danger of writing past the allocated space.
- If necessary, truncate all input strings to a reasonable length before passing them to the copy and concatenation functions.
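The strncpy() caveat above is easy to miss; the following C sketch shows one common way to keep the copy bounded and explicitly NUL-terminated:

```c
#include <string.h>
#include <stdio.h>

/* strncpy() does not NUL-terminate when the source is at least as long as
 * the destination; terminate explicitly after the copy. */
void copy_bounded(const char *src)
{
    char dst[16];

    strncpy(dst, src, sizeof(dst) - 1);  /* leave room for the terminator */
    dst[sizeof(dst) - 1] = '\0';         /* guarantee NUL termination     */

    printf("%s\n", dst);
}
```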
Assume all input is malicious. Use an “accept known good” input validation strategy, i.e., use a list of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.
When performing input validation, consider all potentially relevant properties, including length, type of input, the full range of acceptable values, missing or extra inputs, syntax, consistency across related fields, and conformance to business rules. As an example of business rule logic, “boat” may be syntactically valid because it only contains alphanumeric characters, but it is not valid if the input is only expected to contain colors such as “red” or “blue.”
Do not rely exclusively on looking for malicious or malformed inputs. This is likely to miss at least one undesirable input, especially if the code’s environment changes. This can give attackers enough room to bypass the intended validation. However, denylists can be useful for detecting potential attacks or determining which inputs are so malformed that they should be rejected outright.
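A sketch of the accept-known-good approach, reusing the "red"/"blue" example above (the length limit and helper name are illustrative, not part of any specific API):

```c
#include <string.h>
#include <stddef.h>

/* Accept-known-good validation: the field is only valid if it matches one
 * of the expected values exactly (length and content). */
int is_valid_color(const char *input)
{
    static const char *const allowed[] = { "red", "blue" };
    size_t i;

    if (input == NULL || strlen(input) > 8)   /* cheap length sanity check */
        return 0;

    for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
        if (strcmp(input, allowed[i]) == 0)
            return 1;
    }
    return 0;                                  /* reject everything else   */
}
```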
Run or compile the software using features or extensions that randomly arrange the positions of a program’s executable and libraries in memory. Because this makes the addresses unpredictable, it can prevent an attacker from reliably jumping to exploitable code.
Examples include Address Space Layout Randomization (ASLR) [REF-58] [REF-60] and Position-Independent Executables (PIE) [REF-64]. Imported modules may be similarly realigned if their default memory addresses conflict with other modules, in a process known as “rebasing” (for Windows) and “prelinking” (for Linux) [REF-1332] using randomly generated addresses. ASLR for libraries cannot be used in conjunction with prelink since it would require relocating the libraries at run-time, defeating the whole purpose of prelinking.
For more information on these techniques see D3-SAOR (Segment Address Offset Randomization) from D3FEND [REF-1335].
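A sketch of how randomization-friendly binaries are commonly produced (flag availability depends on toolchain and platform):

```
# GCC/Clang: build a position-independent executable so ASLR can randomize
# the program image as well as its libraries
gcc -O2 -fPIE -pie -o app app.c

# MSVC: enable ASLR support via the linker
cl app.c /link /DYNAMICBASE
```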
Use a CPU and operating system that offers Data Execution Protection (using hardware NX or XD bits) or the equivalent techniques that simulate this feature in software, such as PaX [REF-60] [REF-61]. These techniques ensure that any instruction executed is exclusively at a memory address that is part of the code segment.
For more information on these techniques see D3-PSEP (Process Segment Execution Prevention) from D3FEND [REF-1336].
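For example, on a GCC/Clang or MSVC toolchain one might request and verify a non-executable stack and DEP roughly as follows (options vary by version):

```
# GCC/Clang: ensure the stack is not marked executable
gcc -O2 -Wl,-z,noexecstack -o app app.c

# Inspect the GNU_STACK program header; "RW" without "E" means non-executable
readelf -lW app | grep GNU_STACK

# MSVC: opt in to hardware DEP
cl app.c /link /NXCOMPAT
```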
Run the code in a “jail” or similar sandbox environment that enforces strict boundaries between the process and the operating system. This may effectively restrict which files can be accessed in a particular directory or which commands can be executed by the software.
OS-level examples include the Unix chroot jail, AppArmor, and SELinux. In general, managed code may provide some protection. For example, java.io.FilePermission in the Java SecurityManager allows the software to specify restrictions on file operations.
This may not be a feasible solution, and it only limits the impact to the operating system; the rest of the application may still be subject to compromise.
Be careful to avoid CWE-243 and other weaknesses related to jails.
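As an illustrative C sketch only (the jail path, UID, and GID are placeholders, and real deployments usually rely on OS tooling such as AppArmor or SELinux instead), a chroot-based jail that avoids the CWE-243 pitfall changes the working directory immediately after chroot() and drops privileges afterwards:

```c
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Hypothetical privilege-reduction sketch: enter a chroot jail, then drop
 * root. Requires root to call chroot() in the first place. */
static void enter_jail(const char *jail_dir, uid_t uid, gid_t gid)
{
    if (chroot(jail_dir) != 0) {
        perror("chroot");
        exit(EXIT_FAILURE);
    }
    if (chdir("/") != 0) {       /* CWE-243: do not keep a cwd outside jail */
        perror("chdir");
        exit(EXIT_FAILURE);
    }
    if (setgid(gid) != 0 || setuid(uid) != 0) {   /* drop group, then user  */
        perror("drop privileges");
        exit(EXIT_FAILURE);
    }
}
```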