Data Continuity

Backup and recovery services are a necessity for today's networks. We can help determine where and when your data needs to live to ensure it's always available.

IT Consulting, Service and Management

Our decades of implementation and integration experience allow us to deliver best-in-class IT services to our customers.

Cloud Services

With so many options and implementation scenarios available, let us help you determine how best to use new services available from the cloud.

Since 1996, our goal has been to help our clients maximize productivity and efficiency by expertly maintaining existing infrastructures, as well as designing and implementing new technologies, allowing them to continue growing into the future.

...

We focus on business process design, developing and implementing policies for continuous improvement and integration.
  • Knowledgeable and friendly staff
  • Flexible consumption-based pricing models
  • Online strategy and consulting services
  • Decades of experience
Our Services

News, updates, trends, and the latest info you need to know about IT

VU#253266: Keras 2 Lambda Layers Allow Arbitrary Code Injection in TensorFlow Models

Overview
Lambda Layers in third-party TensorFlow-based Keras models allow attackers to inject arbitrary code into models built with versions of Keras prior to 2.13; that code may then unsafely run with the same permissions as the running application. For example, an attacker could use this feature to trojanize a popular model, save it, and redistribute it, tainting the supply chain of dependent AI/ML applications.
Description
TensorFlow is a widely-used open-source software library for building machine learning and artificial intelligence applications. The Keras framework, implemented in Python, is a high-level interface to TensorFlow that provides a wide variety of features for the design, training, validation and packaging of ML models. Keras provides an API for building neural networks from building blocks called Layers. One such Layer type is a Lambda layer that allows a developer to add arbitrary Python code to a model in the form of a lambda function (an anonymous, unnamed function). Using the Model.save() or save_model() method, a developer can then save a model that includes this code.
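A minimal Keras 2 sketch of this mechanism follows (the file name and layer shapes are illustrative and not taken from the advisory):

    # The lambda body below is ordinary Python; it is serialized into the
    # saved artifact together with the model's weights.
    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Lambda(lambda x: x * 2.0, input_shape=(4,)),
        keras.layers.Dense(1),
    ])
    model.save("model_with_lambda.keras")  # Model.save()/save_model() both embed the code
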
The Keras 2 documentation for the Model.load_model() method describes a mechanism for disallowing the loading of a native version 3 Keras model (.keras file) that includes a Lambda layer, controlled by the safe_mode parameter (from the documentation):

safe_mode: Boolean, whether to disallow unsafe lambda deserialization. When safe_mode=False, loading an object has the potential to trigger arbitrary code execution. This argument is only applicable to the Keras v3 model format. Defaults to True.

This is the behavior of version 2.13 and later of the Keras API: an exception will be raised in a program that attempts to load a model with Lambda layers stored in version 3 of the format. This check, however, does not exist in the prior versions of the API. Nor is the check performed on models that have been stored using earlier versions of the Keras serialization format (i.e., v2 SavedModel, legacy H5).
This means systems incorporating versions of the Keras code base prior to version 2.13 may be susceptible to running arbitrary code when loading older versions of TensorFlow-based models.
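The check described above can be seen directly when loading such a model. The following is a hedged sketch that reuses the illustrative file from the earlier example and assumes Keras 2.13 or later with the v3 (.keras) format:

    from tensorflow import keras

    # With Keras 2.13+ and a .keras (v3) file, the default safe_mode=True
    # refuses to deserialize the Lambda layer's embedded code.
    try:
        keras.models.load_model("model_with_lambda.keras")
    except Exception as err:
        print("refused to load unsafe model:", err)

    # Opting out re-introduces the arbitrary-code-execution risk; pre-2.13
    # Keras and legacy formats (SavedModel, H5) perform no such check at all.
    model = keras.models.load_model("model_with_lambda.keras", safe_mode=False)
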
Similarity to other frameworks with code injection vulnerabilities
The code injection vulnerability in the Keras 2 API is an example of a common security weakness in systems that provide a mechanism for packaging data together with code. For example, the security issues associated with the Pickle mechanism in the standard Python library are well documented, and arise because the Pickle format includes a mechanism for serializing code inline with its data.
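As a point of comparison, the Pickle weakness can be shown with a minimal, benign sketch (the class name and printed message are invented for illustration): any object can define __reduce__ so that unpickling calls an arbitrary function.

    import pickle

    class Innocuous:
        def __reduce__(self):
            # During unpickling, pickle will call print() -- or any other
            # callable an attacker chooses -- with these arguments.
            return (print, ("code ran during unpickling",))

    payload = pickle.dumps(Innocuous())
    pickle.loads(payload)  # deserializing the data executes the embedded call
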
Explicit versus implicit security policy
The TensorFlow security documentation at https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md includes a specific warning about the fact that models are not just data, and makes a statement about the expectations of developers in the TensorFlow development community:

Since models are practically programs that TensorFlow executes, using untrusted models or graphs is equivalent to running untrusted code. (emphasis in earlier version)

The implications of that statement are not necessarily widely understood by all developers of TensorFlow-based systems. The last few years have seen rapid growth in the community of developers building AI/ML-based systems and publishing pretrained models through community hubs like Hugging Face (https://huggingface.co/) and Kaggle (https://www.kaggle.com). It is not clear that all members of this new community understand the potential risk posed by a third-party model; some may (incorrectly) trust that a model loaded using a trusted library will only execute code that is included in that library. Moreover, a user may also assume that a pretrained model, once loaded, will only execute included code whose purpose is to compute a prediction, and will not exhibit any side effects outside of those required for those calculations (e.g., that a model will not include code to communicate with a network).
To the degree possible, AI/ML framework developers and model distributors should strive to align the explicit security policy and the corresponding implementation to be consistent with the implicit security policy implied by these assumptions.
Impact
Loading third-party models built using Keras could result in arbitrary untrusted code running at the privilege level of the ML application environment.
A CVE ID has not yet been assigned to this vulnerability. We are currently negotiating with the vendor on this process.
Solution
Upgrade to Keras 2.13 or later. When loading models, ensure the safe_mode parameter is not set to False (per https://keras.io/api/models/model_saving_apis/model_saving_and_loading, it is True by default). Note: upgrading Keras may also require upgrading its dependencies; see https://keras.io/getting_started/ for details.
If running pre-2.13 applications in a sandbox, ensure no assets of value are in scope of the running application to minimize potential for data exfiltration.
Advice for Model Users
Model users should only use models developed and distributed by trusted sources, and should always verify the behavior of models before deployment. They should apply the same development and deployment best practices to applications that integrate ML models as they would to any application incorporating a third-party component. Developers should upgrade to the latest practical versions of the Keras package (v2.13+ or v3.0+), and use version 3 of the Keras serialization format both to load third-party models and to save any subsequent modifications.
Advice for Model Aggregators
Model aggregators should distribute models based on the latest, safe model formats when possible, and should incorporate scanning and introspection features to identify models that include unsafe-to-deserialize features and either to prevent them from being uploaded, or flag them so that model users can perform additional due diligence.
Advice for Model Creators
Model creators should upgrade to the latest versions of the Keras package (v2.13+ or v3.0+). They should avoid the use of unsafe-to-deserialize features in order to avoid the inadvertent introduction of security vulnerabilities, and to encourage the adoption of standards that are less susceptible to exploitation by malicious actors. Model creators should save models using the latest version of formats (Keras v3 in the case of the Keras package), and, when possible, give preference to formats that disallow the serialization of models that include arbitrary code (i.e., code that the user has not explicitly imported into the environment). Model developers should re-use third-party base models with care, only building on models from trusted sources.
General Advice for Framework Developers
AI/ML-framework developers should avoid the use of naïve language-native serialization facilities (e.g., the Python pickle package has well-established security weaknesses, and should not be used in sensitive applications).
In cases where it’s desirable to include a mechanism for embedding code, restrict the code that can be executed, for example by (a sketch of the first approach follows this list):

  • disallowing certain language features (e.g., exec)
  • explicitly allowing only a “safe” language subset
  • providing a sandboxing mechanism (e.g., to prevent network access) to minimize potential threats
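
The following is a minimal sketch of the first approach, a blocklist applied to untrusted embedded source (the function name and blocklist are illustrative and not part of any real framework's API); an allowlist of a “safe” subset is stronger, since blocklists are easy to bypass.

    import ast

    FORBIDDEN_NAMES = {"exec", "eval", "compile", "__import__", "open"}

    def check_untrusted_source(source: str) -> None:
        """Raise ValueError if the embedded source uses disallowed constructs."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # Reject direct references to dangerous built-ins.
            if isinstance(node, ast.Name) and node.id in FORBIDDEN_NAMES:
                raise ValueError(f"disallowed name: {node.id}")
            # Reject imports entirely; embedded model code should not pull in modules.
            if isinstance(node, (ast.Import, ast.ImportFrom)):
                raise ValueError("imports are not allowed in embedded code")

    check_untrusted_source("lambda x: x * 2.0")                  # passes
    # check_untrusted_source("__import__('os').system('id')")   # raises ValueError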

Acknowledgements
This document was written by Jeffrey Havrilla, Allen Householder, Andrew Kompanek, and Ben Koo.

VU#123335: Multiple programming languages fail to escape arguments properly in Microsoft Windows

Overview
Various programming languages lack proper validation mechanisms for commands and, in some cases, also fail to escape arguments correctly when invoking commands within a Microsoft Windows environment. The command injection vulnerability in these programming languages, when running on Windows, allows attackers to execute arbitrary code disguised as arguments to the command. This vulnerability may also affect applications that execute commands without specifying the file extension.
Description
Programming languages typically provide a way to execute commands on the operating system (e.g., os/exec in Golang) to facilitate interaction with the OS. They also typically allow for passing arguments, which are considered data (or variables) for the command to be executed. The arguments themselves are expected not to be executable, and the command is expected to be executed along with the properly escaped arguments as its inputs. Microsoft Windows typically processes these commands using a CreateProcess function that spawns a cmd.exe for execution of the command. Microsoft has documented concerns about how arguments should be properly escaped before execution as early as 2011; see https://learn.microsoft.com/en-us/archive/blogs/twistylittlepassagesallalike/everyone-quotes-command-line-arguments-the-wrong-way.
A vulnerability was discovered in the way multiple programming languages fail to properly escape arguments in a Microsoft Windows command execution environment. This can lead to confusion at execution time, where an expected argument for a command could itself be executed as another command. An attacker with knowledge of the programming language can carefully craft inputs that will be processed by the compiled program as commands. This unexpected behavior is due to a lack of neutralization of arguments by the programming language (or its command execution module) that initiates a Windows execution environment. The researcher found that multiple programming languages and their command execution modules fail to perform such sanitization and/or validation before processing these inputs in their runtime environment.
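A hedged illustration of the mechanism, written in Python purely for readability (the affected runtimes and their patch status are covered in the vendor information and the researcher's blog post); it assumes a Windows host and a hypothetical batch script tool.bat:

    import subprocess

    untrusted = '"&whoami'  # attacker-controlled input containing cmd.exe metacharacters

    # Passing arguments as a list looks safe, but a .bat file is ultimately run
    # by cmd.exe, whose quoting rules differ from the ordinary CreateProcess
    # conventions. On an unpatched runtime, the embedded quote and & can end
    # the intended argument and start a second command.
    subprocess.run(["tool.bat", untrusted])
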
Impact
Successful exploitation of this vulnerability permits an attacker to execute arbitrary commands. The complete impact of this vulnerability depends on the implementation that uses a vulnerable programming language or such a vulnerable module.
Solution
Updating the runtime environment
Please visit the Vendor Information section to see if your programming language vendor has released a patch for this vulnerability, and update the runtime environment to prevent abuse of this vulnerability.
Update the programs and escape manually
If the runtime of your application doesn’t provide a patch for this vulnerability and you want to execute batch files with user-controlled arguments, you will need to perform the escaping and neutralization of that data yourself to prevent unintended command execution.
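A conservative sketch of such manual neutralization (the helper name and metacharacter set are illustrative): rather than attempting to quote for cmd.exe, reject arguments that contain its metacharacters.

    import re
    import subprocess

    _CMD_UNSAFE = re.compile(r'[&|<>^%!"\r\n]')

    def run_batch_safely(script: str, *args: str) -> None:
        # Refuse any argument that could be re-interpreted by cmd.exe.
        for arg in args:
            if _CMD_UNSAFE.search(arg):
                raise ValueError(f"refusing to pass unsafe argument to {script}: {arg!r}")
        subprocess.run([script, *args], check=True)
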
The security researcher provides more detailed information in a blog post, which identifies the specific languages affected and their status.
Acknowledgements
Thanks to the reporter, RyotaK. This document was written by Timur Snoke.

VU#155143: Linux kernel on Intel systems is susceptible to Spectre v2 attacks

Overview
A new cross-privilege Spectre v2 vulnerability that impacts modern CPU architectures supporting speculative execution has been discovered. CPU hardware utilizing speculative execution that is vulnerable to Spectre v2 branch history injection (BHI) is likely affected. An unauthenticated attacker can exploit this vulnerability to leak privileged memory from the CPU by speculatively jumping to a chosen gadget. Current research shows that existing mitigation techniques of disabling privileged eBPF and enabling (Fine)IBT are insufficient to stop BHI exploitation against the kernel/hypervisor.
Description
Speculative execution is an optimization technique in which a computer system performs some task preemptively to improve performance and provide additional concurrency as and when extra resources are available. However, these speculative executions leave traces of memory accesses or computations in the CPU’s cache, buffer, and branch predictors. Attackers can take advantage of these and, in some cases, also influence speculative execution paths via malicious software to infer privileged data that is part of a distinct execution. See article Spectre Side Channels for more information. Attackers exploiting Spectre v2 take advantage of the speculative execution of indirect branch predictors, which are steered to gadget code by poisoning the branch target buffer of a CPU used for predicting indirect branch addresses, leaking arbitrary kernel memory and bypassing all currently deployed mitigations.
Current mitigations rely on the unavailability of exploitable gadgets to eliminate the attack surface. However, researchers demonstrated that, using their gadget analysis tool InSpectre Gadget, they can uncover new, exploitable gadgets in the Linux kernel and that these gadgets are sufficient to bypass deployed Intel mitigations.
Impact
An attacker with access to CPU resources may be able to read arbitrary privileged data or system registry values by speculatively jumping to a chosen gadget.
Solution
Please update your software according to the recommendations from the respective vendors, applying the latest available mitigations to address this vulnerability and its variants.
Acknowledgements
Thanks to Sander Wiebing, Alvise de Faveri Tron, Herbert Bos, and Cristiano Giuffrida from the VUSec group at VU Amsterdam for discovering and reporting this vulnerability, as well as supporting coordinated disclosure. This document was written by Dr. Elke Drennan, CISSP.

Visit Our News Page

Contact us today if you'd like to know more about how we can keep your network working at its best.

VistaNet, Inc. is a technology consulting and services company, helping enterprises marry scale with agility to achieve competitive advantage.

We'd love to talk about your technology needs

Our experts would love to contribute their expertise and insights to your potential projects.