Storage

Memory market likely to improve in 2020: Micron

Micron Technology Inc. recently celebrated the grand opening of its Global Development Center (GDC) in Hyderabad, India. The site will play a key role in developing the technologies behind breakthroughs in a wide range of areas, such as artificial intelligence (AI) and machine learning (ML).

Jeff VerHeul.

Jeff VerHeul, senior VP of Non-Volatile Engineering, Micron, said: “We are excited about the new data center. We are growing a substantial team. We are now approaching 200 engineers. We are giving major programs to teams here from day one. The wealth of talent in India is great. We have teams in Hyderabad and Bangalore.”

Speaking about the memory and storage markets, VerHeul added: “The ASPs have fallen. We have stated that there is strong demand, with improvement expected over the next year. We do look at the emerging memory technologies. Specifically, there are many parts: mobile products, emerging memory, etc.”

Dr. Scott DeBoer, executive VP, Technology & Products, Micron, added: “There is greater demand for memory densities. Higher performance and greater density are important for autonomous driving, etc. The need for memory expansion is great for applications.

Dr. Scott DeBoer.

“If you look at the edge, there are power-sensitive needs. Non-volatile memory with high performance is needed. We do process development of memory technologies. Density, power, cost, etc., are all key.”

Micron is a user and manufacturer of IoT devices. At the edge, there are stringent power requirements. Micron sees that in many other applications as well, and considers the segment a growing opportunity. More compute needs to be enabled at the edge. Across applications, there is a spectrum of needs. Some new and emerging memories combine low latency with low power and compute.

Talking about autonomous driving, VerHeul said: “I am the owner of a Tesla 3. It does things that imply it’s not a flawless device. That’s about 3-5 years away. The rate of development of technology is growing very fast. There are also regulatory hurdles.

“It may seem easy to think about taking a car from point A to point B. But what happens on a snowy day? Does the car have to take into account the boundary conditions? Greater compute power and memory are required to make this foolproof. Micron is developing future memories.

“We are in partnership with Intel as of now. Our first system products are due in the next few quarters. We also had a public project with Sony. We also had an STT-MRAM project with the Singapore Government. Certain memories are more applicable for embedded, and some for high density.”

DeBoer added: “We also have a mobile business, which is a robust one. We are doing the right things in working with the chipset partners and the OEM partners. We are aligning our offerings with their requirements. It should change the user behavior. A large part of smart manufacturing in semiconductors is within our facilities.”

Pliops storage processor architecture increases data center storage efficiency by over 60X

Pliops demonstrated its latest storage processor at the ongoing Flash Memory Summit 2019 (FMS), being held at the Santa Clara Convention Center, California, USA. The revolutionary new architecture increases data center storage efficiency by over 60X.

Pliops, based in San Jose, USA and Tel Aviv, Israel, is a storage processor company. It has 40 employees and deep experience in database and SSD technologies. Pliops has completed work on its core technology. The first product is to be released in Q4 2019.

The Pliops storage processor enables cloud and enterprise customers to offload and accelerate data-intensive workloads, using just a fraction of the computational load and power consumption.

At FMS 2019, Pliops talked about cloud data center trends. In networking, 100Gb is currently mainstream and is now moving to 400Gb. For CPUs, clock speed (GHz) now takes roughly 20 years to double, and adding cores only marginally improves performance. NVMe SSDs deliver 1,000x the IOPS of HDDs and 10x the IOPS of SATA SSDs, and 16TB drives are currently mainstream. The growing gap between networking and storage performance on one side and CPU performance on the other will increase data center sprawl and costs.

Within the database/storage stack sit key-value storage engines such as RocksDB, WiredTiger, and InnoDB. These are responsible for data persistence and for keeping data sorted, and they were traditionally based on B-trees. LSM-tree designs have since taken over, with RocksDB a popular example. All of these engines are extremely complex and prone to variable performance.
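To make concrete why these engines keep data sorted, and why that work is expensive, here is a minimal LSM-style sketch in Python. It is an illustration only; the class name and file layout are invented for this example and do not reflect how RocksDB or any Pliops software is implemented. Writes land in an in-memory memtable and are periodically flushed as sorted, immutable segments, while reads search from newest to oldest.

```python
# Minimal LSM-style write path sketch (illustration only).
# Real engines add write-ahead logs, bloom filters, and compaction.
import json
import os


class TinyLSM:
    def __init__(self, data_dir, memtable_limit=4):
        self.data_dir = data_dir
        self.memtable = {}                 # in-memory buffer, sorted only at flush
        self.memtable_limit = memtable_limit
        self.segments = []                 # on-disk segment paths, newest last
        os.makedirs(data_dir, exist_ok=True)

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for path in reversed(self.segments):   # newest segment wins
            with open(path) as f:
                segment = json.load(f)
            if key in segment:
                return segment[key]
        return None

    def _flush(self):
        # Sorting on flush keeps range scans cheap -- and is what later
        # forces re-sorting/merging (compaction) in production engines.
        path = os.path.join(self.data_dir, f"seg-{len(self.segments):04d}.json")
        with open(path, "w") as f:
            json.dump(dict(sorted(self.memtable.items())), f)
        self.segments.append(path)
        self.memtable = {}


db = TinyLSM("/tmp/tiny_lsm")
db.put("user:1", "alice")
db.put("user:2", "bob")
print(db.get("user:1"))   # -> alice
```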

If we examine the sources of key-value inefficiency, the trade-offs include: how to efficiently map variable-sized data onto fixed-size blocks; huge in-memory maps vs. multiple flash accesses per lookup; and speed vs. space efficiency. A rough illustration of the first trade-off follows below.
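The sketch below shows the variable-size vs. fixed-block mismatch. The 4 KB block size and record sizes are assumptions chosen only for the example, not figures Pliops presented: a naive packer that places variable-sized records into fixed-size flash blocks either wastes the tail of each block or must split records across blocks.

```python
# Variable-sized key-value records vs. fixed-size flash blocks (4 KB assumed).
BLOCK_SIZE = 4096


def pack_records(records):
    """Greedy packer: start a new block when the next record doesn't fit."""
    blocks, current, used = [], [], 0
    for key, value in records:
        size = len(key) + len(value) + 8          # 8 bytes assumed for length headers
        if used + size > BLOCK_SIZE:
            blocks.append((current, BLOCK_SIZE - used))   # (records, wasted tail bytes)
            current, used = [], 0
        current.append((key, value))
        used += size
    if current:
        blocks.append((current, BLOCK_SIZE - used))
    return blocks


records = [(f"key{i}".encode(), b"x" * (500 + (i * 137) % 900)) for i in range(40)]
blocks = pack_records(records)
wasted = sum(w for _, w in blocks)
print(f"{len(blocks)} blocks, {wasted} bytes of tail waste "
      f"({100 * wasted / (len(blocks) * BLOCK_SIZE):.1f}% of written capacity)")
```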

There are high CPU and I/O costs for sorting, re-sorting, and garbage-collecting data. There is also high read and write amplification, typically 20-100x. This either reduces flash lifetime or requires more expensive flash, and it reduces effective application bandwidth. When using disaggregated block storage, 20-100x the application bandwidth is required from the storage backend.
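A back-of-the-envelope calculation shows why that range matters. Only the 20-100x amplification range comes from the article; the drive bandwidth and endurance figures below are assumed purely to make the arithmetic concrete.

```python
# Effect of write amplification on what the application actually gets.
# Drive specs below are assumptions for illustration, not vendor figures.
drive_write_bw_gbs = 3.0       # assumed NVMe SSD sequential write bandwidth, GB/s
drive_endurance_pbw = 10.0     # assumed rated endurance, petabytes written

for write_amp in (1, 20, 100):
    effective_bw = drive_write_bw_gbs / write_amp          # bandwidth the app sees
    effective_endurance = drive_endurance_pbw / write_amp  # app data before wear-out
    print(f"write amp {write_amp:>3}x: app bandwidth {effective_bw:.2f} GB/s, "
          f"app data before wear-out {effective_endurance:.2f} PB")
```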

A thin driver layer can be added to the database/storage stack for applications such as MySQL, MongoDB, and Ceph. Here is where the Pliops storage processor comes in.

Elaborating on the role of hardware, Pliops listed management of a highly compressed object memory map as primary. It is extremely memory-efficient, and software alternatives are much costlier. The processor takes care of key sorting, object garbage collection, compression and encryption, and data persistence and logging. It also frees memory and compute resources to run applications rather than manage storage.
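For intuition only, the application's view of such an offload might reduce to a simple variable-sized key-value interface, with sorting, garbage collection, compression, and logging handled below it. The sketch below is hypothetical; none of the names come from Pliops and it does not represent the actual Pliops API.

```python
# Hypothetical host-side view of a key-value offload device (not the Pliops API).
class KVOffloadDevice:
    def __init__(self):
        self._store = {}          # stand-in for the hardware-managed index

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value  # real device would compress, sort, and log

    def get(self, key: bytes):
        return self._store.get(key)

    def delete(self, key: bytes) -> None:
        self._store.pop(key, None)  # real device handles garbage collection


dev = KVOffloadDevice()
dev.put(b"user:42", b'{"name": "demo"}')
print(dev.get(b"user:42"))
```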

Pliops offers up to a 13X performance benefit over software. Comparing Pliops vs. software on MySQL, Pliops delivers 5X the queries per second and over 7X more transactions per second, along with 20 percent NVMe flash space savings and a 9.5X write-amplification improvement for flash.

Pliops offered three deployment options. First, DAS with the accelerator card. Second, the accelerator card in a storage engine node. Third, SEaaS (storage engine as a service).

Pliops’ solution solves the scalability challenges raised by the cloud data explosion and the increasing data requirements of AI/ML applications.