Software runs on data, and data is often regarded as the new oil. So it makes sense to put data as close as possible to where it is being processed, in order to reduce latency for performance-hungry processing tasks.
Some architectures call for big chunks of memory-like storage located near the compute function, while in other cases it makes more sense to move the compute nearer to the bulk storage.
In this series of articles we explore the architectural decisions driving modern data processing… and, specifically, we look at computational storage.
“Computational storage is defined as architectures that provide Computational Storage Functions (CSF) coupled to storage, offloading host processing or reducing data movement. These architectures enable improvements in application performance and/or infrastructure efficiency through the integration of compute resources (outside of the traditional compute & memory architecture) either directly with storage or between the host and the storage. The goal of these architectures is to enable parallel computation and/or to alleviate constraints on existing compute, memory, storage and I/O.”
Fern writes as follows:
When it comes to computational storage, people ask: how exactly should software engineers identify, target and architect specific elements of the total processing workload so that they reside closer to computational storage functions?
With quantum computing on the horizon (whether that be a couple of years away or more than a decade), software engineers need to be conscious of the threat it poses to traditional digital encryption and the knock-on effects for how data is processed and stored.
Solutions developed to protect that data – and how they operate – will have a clear impact on how we architect specific elements of the processing workload and where they are located.
Data disaggregation
For example, data disaggregation offers real potential to future-proof data security in a quantum computing world, but there is a trade-off, to some degree, between security and speed of data processing that may require a rethink of how we architect storage and processing functions.
Disaggregating data inherently creates some latency when it comes to accessing that data, but there are ways of storing and processing disaggregated data that allow system architects to bring the data that is needed quickly close to the CPU while maintaining its wider disaggregated form.
How? By making the storage ‘content-aware’.
The Content Addressable Filesystem (CAFS) approach may have fallen by the wayside over recent years as massively parallel processing (MPP) architectures became prominent, but it’s actually quite a nifty way of doing things. It allows data exchange to take place remotely from the processor while also being aware enough to know that the next bit of data – which may be further away from the processor – will take longer to access and should therefore be moved closer.
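To make the idea concrete, here is a minimal sketch in Python of the content-addressing principle behind CAFS-style storage; the class and function names are illustrative rather than taken from any particular filesystem. Chunks are stored and fetched by a hash of their contents rather than by their location, which is what allows a content-aware layer to reason about what it holds and where each piece should live.

```python
import hashlib


class ContentAddressableStore:
    """Toy content-addressable store: chunks are keyed by the SHA-256
    hash of their contents rather than by a file path or block address."""

    def __init__(self):
        self._chunks = {}  # content hash -> raw bytes

    def put(self, data: bytes) -> str:
        """Store a chunk and return its content address."""
        address = hashlib.sha256(data).hexdigest()
        self._chunks[address] = data
        return address

    def get(self, address: str) -> bytes:
        """Retrieve a chunk by its content address."""
        return self._chunks[address]


# A file becomes a list of content addresses; identical chunks are stored
# once, and any node holding the store can serve them.
store = ContentAddressableStore()
manifest = [store.put(chunk) for chunk in (b"page-1", b"page-2", b"page-1")]
reassembled = b"".join(store.get(address) for address in manifest)
assert reassembled == b"page-1page-2page-1"
```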
Crucially, there are ways in which this and other technologies can be applied to make the cloud inherently safe and secure, delivering an outcome analogous to homomorphic encryption – querying the data in its disaggregated form without ever opening it up to the airwaves in the clear.
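To illustrate the kind of outcome being described – and this is a minimal sketch, not homomorphic encryption and not any specific product – data can be held remotely as separately encrypted chunks, with a query resolving to just the chunk it needs; only that ciphertext travels, and decryption happens inside the local trust boundary. The use of the Python cryptography library's Fernet scheme and the chunk labels here are assumptions made for illustration.

```python
from cryptography.fernet import Fernet  # symmetric encryption, for the sketch only

key = Fernet.generate_key()
fernet = Fernet(key)

# Remote (disaggregated) side: encrypted chunks keyed by labels the remote
# store can index without ever seeing the plaintext.
remote_store = {
    "orders/2024-q1": fernet.encrypt(b"order rows for Q1 2024"),
    "orders/2024-q2": fernet.encrypt(b"order rows for Q2 2024"),
}


def query(label: str) -> bytes:
    """Fetch only the encrypted chunk the query needs and decrypt it locally;
    the rest of the dataset is never transferred or opened up."""
    ciphertext = remote_store[label]   # travels encrypted
    return fernet.decrypt(ciphertext)  # decrypted inside the trust boundary


print(query("orders/2024-q1"))
```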
Latency you can live with
Yes, there is some additional latency, but there are three clear advantages to this approach:
Firstly, the data itself is never revealed in its unencrypted state;
Secondly, any latency incurred in disaggregating, re-aggregating, encrypting and decrypting the data can be mitigated by the use of smart storage that identifies and pre-fetches relevant data, bringing it closer to the CPU as needed (see the sketch after this list);
Thirdly, limiting the amount of data stored near the CPU reduces the security risk of holding it all close to the processor, while that risk is further mitigated by a retrieval process that focuses on identifying and pulling back smaller pieces of data – effectively bringing just the relevant page from a book rather than the whole bookcase at any one time.
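The pre-fetching in the second point can be pictured as a small, content-aware cache sitting near the CPU: it serves the chunk being asked for and, using whatever awareness the storage layer has of how content is laid out, pulls the next likely chunks in ahead of demand. The sketch below is a generic illustration under that assumption; the next_likely heuristic and the names used are hypothetical.

```python
from collections import OrderedDict


class PrefetchingCache:
    """Small near-CPU cache that fetches a requested chunk on demand and,
    as part of serving that request, pre-fetches the chunks a simple
    heuristic predicts will be needed next (illustrative only)."""

    def __init__(self, fetch, next_likely, capacity=8):
        self._fetch = fetch              # pulls a chunk from remote storage
        self._next_likely = next_likely  # predicts follow-on chunk ids
        self._capacity = capacity
        self._cache = OrderedDict()      # chunk id -> data, in LRU order

    def read(self, chunk_id):
        if chunk_id not in self._cache:
            self._store(chunk_id, self._fetch(chunk_id))  # cache miss: remote fetch
        for nxt in self._next_likely(chunk_id):           # warm the cache for what comes next
            if nxt not in self._cache:
                self._store(nxt, self._fetch(nxt))
        self._cache.move_to_end(chunk_id)
        return self._cache[chunk_id]

    def _store(self, chunk_id, data):
        self._cache[chunk_id] = data
        if len(self._cache) > self._capacity:             # evict least recently used
            self._cache.popitem(last=False)


# Example: sequential pages of a "book" -- only the page being read, plus the
# next one, ever sits close to the CPU.
remote = {f"page-{i}": f"contents of page {i}".encode() for i in range(100)}
cache = PrefetchingCache(
    fetch=remote.__getitem__,
    next_likely=lambda cid: [f"page-{int(cid.split('-')[1]) + 1}"],
)
print(cache.read("page-3"))
```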
This approach does mean reviewing what have become the accepted norms of data storage, processing and system architecture, but there's an argument that approaches such as CAFS were not wrong, just ahead of their time in terms of the solutions they enable.
The refactoring X-factor
So how much refactoring does it take to bring about a successful computational storage deployment and, crucially, how much tougher is that operation when the DevOps team in question is faced with a particularly archaic legacy system?
Our aim at Nerdcore Computers is to make the deployment of computational storage as transparent as possible, so that it doesn't interrupt workflows either during the transition period or in-life.
In essence, our approach to data storage is mapped in traditional storage terms but delivered virtually, meaning you don't have to switch everything over on day one and risk breaking the integrated whole of IT systems and human processes. Instead, you can deploy new storage at a manageable rate, decommissioning old systems as each phase of the transition completes and evolving your business practices and workflows gradually, as capacity and capability allow.
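One way to picture storage that is "mapped in traditional storage terms but delivered virtually" is a thin routing layer that presents a single, familiar volume to applications while directing each read to whichever backend currently owns the data, flipping entries over as each migration phase completes. The sketch below is a general illustration of that pattern, under assumed names, not a description of Nerdcore Computers' actual implementation.

```python
class DictBackend:
    """Stand-in for a storage backend (legacy array or new computational storage)."""

    def __init__(self, data=None):
        self._data = dict(data or {})

    def read(self, path: str) -> bytes:
        return self._data[path]

    def write(self, path: str, data: bytes) -> None:
        self._data[path] = data


class VirtualVolume:
    """Presents one familiar volume to applications while routing each read
    to whichever backend currently owns the data. As migration phases
    complete, entries are flipped from 'legacy' to 'new' without the
    application changing how it addresses storage."""

    def __init__(self, legacy_backend, new_backend):
        self._backends = {"legacy": legacy_backend, "new": new_backend}
        self._owner = {}  # path -> "legacy" or "new"; default is legacy

    def read(self, path: str) -> bytes:
        backend = self._backends[self._owner.get(path, "legacy")]
        return backend.read(path)

    def migrate(self, path: str) -> None:
        """Copy one item to the new backend, then flip its routing entry."""
        self._backends["new"].write(path, self._backends["legacy"].read(path))
        self._owner[path] = "new"


legacy = DictBackend({"reports/2023.csv": b"old data"})
volume = VirtualVolume(legacy, DictBackend())
volume.migrate("reports/2023.csv")                       # one phase of the transition
assert volume.read("reports/2023.csv") == b"old data"    # same path, new home
```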
It’s a new approach but not a novel one; more a change of mindset about how to structure computational storage and implement change in a way that smooths the transition, especially where existing structures are particularly archaic or need to incorporate – initially at least – legacy systems.
Author: Nerdcore Computers