EDA software adoption by IT companies contributing to growth: Dr. Wally Rhines


The electronic design automation (EDA) industry revenue increased 12.6 percent in Q2-2020 to $2,783.9 million, compared to $2,472.1 million in Q2-2019, with most categories logging double-digit increases, as per the Electronic System Design (ESD) Alliance Market Statistics Service (MSS).

The four-quarter moving average, which compares the most recent four quarters to the prior four quarters, increased by 6.7 percent. The ESD Alliance is a SEMI Technology Community.
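The two growth metrics the MSS reports can be sketched in a few lines, using the Q2 figures quoted above (values in $ millions; the eight-quarter series in the second helper is illustrative, not ESD Alliance data):

```python
def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth, as a percentage."""
    return (current - prior) / prior * 100

# Q2-2020 vs. Q2-2019 EDA revenue, per the ESD Alliance MSS:
print(f"Q2 YoY growth: {yoy_growth(2783.9, 2472.1):.1f}%")  # prints 12.6

def moving_avg_growth(last_eight_quarters: list[float]) -> float:
    """Compare the sum of the most recent four quarters to the prior four."""
    prior, recent = last_eight_quarters[:4], last_eight_quarters[4:]
    return (sum(recent) - sum(prior)) / sum(prior) * 100
```

The four-quarter comparison smooths out single-quarter noise, which is why it shows 6.7 percent while the single quarter shows 12.6 percent.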

Dr. Walden (Wally) C. Rhines, Executive Sponsor, SEMI EDA Market Statistics Service, President and CEO, Cornami, and CEO Emeritus, Mentor, A Siemens Business, said that the EDA industry is experiencing amazingly strong growth right now, at least through the second quarter.

Dr. Wally Rhines.

“We just reported 12.6 percent worldwide growth in revenue compared to the same quarter last year. The last four quarters show growth of 6.7 percent. In the second quarter of 2020, every category tracked by the ESDA Market Statistics Program grew in double digits, except PCB design and services. Even so, PCB is still on track to be the fastest-growing segment in 2021, with 12.4 percent growth in the last 12 months.”

Right now, all the segments are increasing. Will this trend continue, going forward?

Dr. Rhines said: “Of course, no one can predict the future. But, the underlying fundamentals causing current growth suggest that this is not a short-term effect. The biggest contributor right now is the adoption of EDA software by companies that have not historically designed their own electronics. That includes the IT community of companies like Google, Facebook, Amazon, Alibaba, and many more. In addition, the other systems companies in areas like automotive electronics are doing their own chip and board designs, while continuing their dependence upon tier one providers like Bosch, Denso, and many more.”

So, what growth is likely for EDA during 2021? According to him, Japan grew at a very strong 9 percent rate in Q2-20 versus Q2-19. But, for the last 12 months, it has been flat. Korea EDA revenue decreased about 10 percent over the past 12 months, compared to prior years, and was flat in Q2-20 vs. Q2-19.

And, how are the semiconductor markets in Korea and Japan looking right now? Dr. Rhines noted that Japan is relatively flat. Korea has easier comparisons with last year, since the decrease in semiconductor revenue in 2019 was heavily influenced by memory price declines. He would expect that Korea will grow its semiconductor sales more than the overall world average in 2020 due to some recovery in memory pricing, influenced by the strong demand for server capacity in data centers.

Logic performance
Next, will logic performance improvement at fixed power slow down in 2021? How do you get around that? Dr. Rhines said that semiconductor logic revenue was relatively flat in 2019 despite the overall semiconductor market decline. It appears to be continuing that trend in 2020. The year 2021 will depend upon a post-COVID-19 economic recovery. That is by no means certain, and it’s probably influenced by other factors, such as the elections in the USA.

Will there also be more heterogeneous integration, enabled by 3D technologies? “Absolutely! Heterogeneous integration is growing rapidly,” noted Dr. Rhines. And, multi-chip 2D and 3D packaging has made many new capabilities possible. Integration of PCB layout tools with IC design environments has also helped, as has the support of foundries, like TSMC, with their “3D Fabric”. Chiplets are an interesting extension to this packaging capability, and it is worth watching what AMD and Intel are doing in this space.

NVM and edge AI on the rise
Further, does the industry see the emerging non-volatile memories on the rise? He added that non-volatile memories are currently leading the transistor cost learning curve for the semiconductor industry. Around 512 layers in flash memory is achievable, making for amazing NAND flash capacity in a single package. At the same time, it is a period of growing interest in new memory process technologies, like MRAM, ReRAM, FRAM, and more. Cost per bit continuing on the long-term learning curve, and total memory storage continuing to double, both appear very predictable.
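The cost-per-bit learning curve Dr. Rhines refers to follows Wright's-law behavior: cost falls by a fixed fraction each time cumulative volume doubles. A minimal sketch, where the 30% learning rate is an illustrative assumption rather than a published industry number:

```python
import math

def cost_per_bit(initial_cost: float, cumulative_bits: float,
                 initial_bits: float, learning_rate: float = 0.30) -> float:
    """Wright's law: each doubling of cumulative bits shipped cuts
    cost per bit by `learning_rate` (30% here, an illustrative value)."""
    doublings = math.log2(cumulative_bits / initial_bits)
    return initial_cost * (1 - learning_rate) ** doublings

# Three doublings of cumulative volume at a 30% learning rate leaves
# cost per bit at 0.7**3 = 34.3% of where it started.
```

This is why "cost per bit continues on the learning curve" and "total storage keeps doubling" are really two views of the same predictable trend.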

By the same token, I asked for his thoughts on the edge AI chip industry, going forward. As per Dr. Rhines, edge AI is entering a new wave of growth. It is inevitable that the intelligence in the cloud will make its way downward to embedded systems. It always has in the past, as silicon capability allows us to compute locally what we used to compute centrally in mainframes or servers.

“Dozens of new post-Von Neumann neuromorphic computing architectures have been funded as chip startups starting in 2017, and the pace continues at about $2 billion of venture investment per year in these companies. Working for one of these companies, Cornami, which promises orders-of-magnitude further performance and power dissipation improvements, gives me some visibility into this trend. It is not slowing down.”

Finally, does the NAND industry need to consolidate to generate sufficient returns? He said that most semiconductor mergers and acquisitions are no longer driven by manufacturing economies of scale, unlike the 1970s and 1980s. Memory manufacturing efficiency does, however, depend upon scale. Samsung has nearly 30 percent of the market, and is very profitable, although, the commodity nature of memory makes the revenue and pricing more volatile than non-memory semiconductor products.

“Behind Samsung, we have Kioxia (formerly Toshiba), Micron, Western Digital (SanDisk) and SK Hynix. I suspect that harvesting economies of scale from merging any of these companies would be difficult because of the differences in products, processes and geographic locations. But, it could certainly happen!”

PCM to be leading SCM thanks to 3D XPoint, particularly NVDIMMs: Yole


The emerging embedded NVM market is said to have entered the takeoff phase. The embedded market segment is projected to grow at a 118% CAGR between 2019 and 2025, reaching more than US$2 billion by 2025, according to Yole Développement (Yole), Lyon-Villeurbanne, France. It will be driven by two key segments: low-latency storage (enterprise and client drives) and persistent memory, or non-volatile dual in-line memory modules (NVDIMMs).
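CAGR figures like these follow the standard compounding formula; a small helper makes them easy to check (the numbers in the example are illustrative, not Yole's):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction (1.18 means 118%)."""
    return (end / start) ** (1 / years) - 1

# Illustrative: a segment doubling every year for three years
# (e.g. $100M -> $800M) has a CAGR of exactly 100%.
print(cagr(100, 800, 3))  # prints 1.0
```

At a 118% CAGR, a market more than doubles every year, which is why a segment that is tiny today can still clear $2 billion by 2025.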

Status of newly emerging applications
Let’s start with the status of the newly emerging applications: stand-alone code/data storage and embedded NVM for analog ICs. Simone Bertolazzi, Technology & Market Analyst at Yole, said: “The emerging NVMs bring new features and functionalities, but at a higher price. For instance, stand-alone STT-MRAM (spin-transfer torque magnetic random-access memory) offers a promising combination of low-power consumption and high speed, but its price-per-bit is orders of magnitude higher than DRAM. This makes its adoption into mainstream DRAM-like applications (e.g. NVDIMM) very challenging at this stage.


“However, the price gap is much smaller vs. stand-alone NOR, a technology that is being used nowadays for code/data storage in a host of semiconductor products. Discrete STT-MRAM is currently available at densities of 1Gb, thanks to the forefront work of the Everspin-GlobalFoundries partnership.”

Through further scaling in the coming years, STT-MRAM will approach NOR price level, and start targeting NOR replacement in applications requiring high endurance and low-power consumption, particularly at densities above 1Gb.

He added: “NVM is a key element in analog ICs, where it is used for storing controller code or configuration settings in numerous device applications, spanning sensing, measuring, and power management ICs (PMICs). Various companies are focusing on the development of NVM elements using embedded RRAM.

“For instance, TSMC developed a 40nm BCD (bipolar-CMOS-DMOS) technology for PMIC applications, and enriched its ultra-low power (ULP) process with embedded RRAM to enable low-power, highly integrated mobile applications. The South Korean analog and mixed-signal foundry, DB HiTek, has licensed Adesto’s CBRAM (conductive bridging RAM) technology for use in IoT and other ultra-low power ICs manufactured at 180nm. Weebit Nano is also in active discussions with an analog foundry about implementing embedded RRAM in a 90-180nm foundry process.”

Now, let’s look at the market size of the SCM (storage-class memory) market segment and its expected evolution. Also, how will the PCM (phase-change memory) sector evolve?

He added: “The last three years have witnessed the take-off and rapid expansion of the storage-class memory (SCM) market – low-latency drives and, more recently, persistent memory modules – which is projected to reach multi-billion-dollar revenues by 2025.

“Key to this was the introduction of the 3D XPoint, a stand-alone PCM-based technology developed by Micron and Intel, and commercialized by Intel, since 2017, under the brand name Optane.


“PCM will be the leading SCM technology, thanks to the sales of 3D XPoint products – particularly NVDIMMs – that are sold by Intel in a bundle with its server CPUs. The stand-alone PCM market is expected to grow to ~$3B in 2025, with a 2019-2025 CAGR of ~38%.”

China’s position
In that case, it will be interesting to note the market positioning of the Chinese memory companies.

Bertolazzi added that in China, the priority is clearly given to the mainstream memories that are critical for the growing datacenter and mobile businesses. These are developed by key players such as YMTC (3D NAND) and CXMT (DRAM). NAND/DRAM projects have captured the majority of financial resources, and investments in other memory technologies have been rather limited or are focused mainly on the most promising players.

However, China is by no means overlooking the emerging NVM business. It has now initiated a number of projects (e.g. AMT, Hikstor, Ciyu, and more) that aim at acquiring new memory know-how and IP, and at developing new technology processes and products, to get ready for the next evolution in the overall semiconductor memory business.

Currently, the highest RRAM density is 8Mb. How much progress has been made in this field?

In 2013, Panasonic shipped the first microcontroller with 64KB (512Kb) embedded RRAM manufactured at 180nm. Later, in 2016, Panasonic partnered with Fujitsu to make a discrete 4Mb RRAM part, which was updated in August 2019 with a factor-of-2 improvement in density.

So far, RRAM has been commercialized mainly by Adesto with low-density CBRAM products, as well as by Panasonic and Fujitsu. Due to the relatively high price-per-bit and the limited number of commercial players, RRAM has targeted niche applications (EEPROM replacement).

Embedded MRAM?
Next, where is embedded MRAM going? And, what about PCM and RRAM?

He added that among the emerging NVM technologies, embedded MRAM has advanced at a relatively faster pace, thanks to the strong involvement of IDM/foundries and to the support of equipment suppliers that have been providing new solutions to difficult technical challenges (e.g. etching, deposition, and metrology).

MRAM is being developed as a potential replacement for eFlash for code/data storage, as well as a low-power, low-footprint working memory (SRAM-like). Yole forecasts a ~$1.7B embedded MRAM market in 2025.

However, PCM (phase-change memory) and RRAM are not out of the race, due to their unique memristive properties and synapse-like behavior. They are both promising for analog in-memory-computing applications that could take off by 2023-2024.

Moreover, embedded PCM is being developed for automotive applications by STMicroelectronics on 28nm FD-SOI, and the new products could hit the market in the coming years.

Estimated market size
So, what is the estimated market size over the next five years?

The stand-alone emerging NVM market is projected to grow from ~$500M in 2019 to ~$4.1B in 2025. It will be driven by two key segments, namely, persistent memory (NVDIMM) and low-latency storage (enterprise and client SCM drives).

The embedded emerging NVM market is now in the takeoff phase, and will be driven by MCUs and IoT, as well as memory buffers for ASIC products, such as AI accelerators, display drivers and CMOS image sensors. It is expected to grow rapidly, at a 2019-2025 CAGR of ~118%, to ~$2.5B by 2025.

Memory market likely to improve in 2020: Micron


Micron Technology Inc. recently celebrated the grand opening of its Global Development Center (GDC) in Hyderabad, India. The site will play a key role in contributing to the development of technologies behind breakthroughs in a wide range of areas, such as artificial intelligence (AI) and machine learning (ML).

Jeff VerHeul.

Jeff VerHeul, senior VP of Non-Volatile Engineering, Micron, said: “We are excited about the new data center. We are growing a substantial team. We are now approaching 200 engineers. We are giving major programs to teams here from day one. The wealth of talent in India is great. We have teams in Hyderabad and Bangalore.”

Speaking about the memory and storage markets, VerHeul added: “The ASPs have fallen. We have stated that there is strong demand, with improvement over the next year. We do look at the emerging memory technologies. Specifically, there are many parts, mobile products, emerging memory, etc.”

Dr. Scott DeBoer, executive VP, Technology & Products, Micron, added: “There is greater demand for memory densities. Higher performance and greater density is important for autonomous driving, etc. The need for memory expansion is great for applications.

Dr. Scott DeBoer.

“If you look at edge, there are power-sensitive needs. NV, with high performance, is needed. We do process development of memory technologies. Density, power, cost, etc., are all key.”

Micron is both a user and a manufacturer of IoT devices. At the edge, there are stringent power demands. Micron sees that in many other applications as well, and considers the segment a growing opportunity. More compute needs to be enabled at the edge. Across applications, there is a spectrum of needs. Some new and emerging memories combine low latency with low power and compute capability.

Talking about autonomous driving, VerHeul said: “I am the owner of a Tesla 3. It does things that imply it’s not a flawless device. That’s about 3-5 years away. The rate of development of technology is growing very fast. It is also a case of regulatory hurdles.

“It may seem easy to think about taking a car from point A to point B. But, what happens on a snowy day? Does the car have to take into account the boundary conditions? Greater compute power and memory is required to make this foolproof. Micron is developing future memories.

“We are in partnership with Intel as of now. Our first system products are due in the next few quarters. We also had a public project with Sony. We also had a project on STT-MRAM with the Singapore government. Certain memories are more applicable for embedded, and some for high density.”

DeBoer added: “We also have a mobile business, which is a robust one. We are doing the right things in working with the chipset partners and the OEM partners. We are aligning our offerings with their requirements. It should change the user behavior. A large part of smart manufacturing in semiconductors is within our facilities.”

Emerging memories enable AI market


Stanford University, USA, in collaboration with Atascadero, California, USA-based Coughlin Associates, is organizing a workshop on emerging non-volatile memories (NVM) and artificial intelligence (AI) on August 29, 2019.

The one-day workshop at Stanford University, put on by the Stanford Center for Magnetic Nanotechnology and Coughlin Associates, features invited expert speakers to talk about various emerging NVMs, and how they will enable the next-generation of AI devices in the home, in the factory and in the industry.

Subhasish Mitra, Prof. Electrical Engineering and Computer Science, Stanford University, will talk about RRAM integrated on silicon CMOS for AI applications. A very good friend, Thomas Coughlin, president of Coughlin Associates, will be speaking on how emerging memories enable the AI market.

RRAM, also known as ReRAM (resistive random access memory), is a form of non-volatile storage that operates by changing the resistance of a specially formulated solid dielectric material.

Tom Coughlin

Elaborating on RRAM for AI applications, specifically RRAM integrated on silicon CMOS, Tom Coughlin said: “AI inference engines are looking at MRAM, and possibly, RRAM for storing ML weighting functions. This would be either for edge computing or end-point applications. RRAM, as well as PCM, are being pursued for neuromorphic computing architectures that use memory cell technology for analog computing, similar to the way that neurons work in the brain.”
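The analog in-memory-computing idea Coughlin describes can be sketched as a toy memristive crossbar: stored cell conductances play the role of ML weights, input voltages play the role of activations, and Ohm's and Kirchhoff's laws perform the multiply-accumulate. All values below are illustrative:

```python
def crossbar_mac(conductances: list[list[float]],
                 voltages: list[float]) -> list[float]:
    """Column currents of a crossbar: I_j = sum_i V_i * G_ij,
    i.e. one analog dot product per column."""
    n_rows, n_cols = len(conductances), len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]

weights = [[0.1, 0.2],   # cell conductances (siemens), i.e. stored weights
           [0.3, 0.4]]
inputs = [1.0, 0.5]      # input voltages (volts), i.e. activations
print(crossbar_mac(weights, inputs))  # ≈ [0.25, 0.4]
```

The appeal for inference engines is that the dot product happens where the weights are stored, avoiding the memory-to-compute data movement of a conventional Von Neumann design.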

How can we understand ML and its potential in the semicon industry? Coughlin added that AI allows the unlocking of greater value in the data and information that we capture, and thus making better decisions based upon that data. This has a huge value, and semiconductor technologies enabling AI will play a big role.

I also got his views on edge AI and the rise of the neural accelerators. He added: “A lot of edge work will be done with inference engines using models developed in data centers, although there are approaches that allow some continuous learning. I see this as becoming a big enabler of smart devices and applications.”

Pliops storage processor architecture increases data center storage efficiency by over 60X


Pliops demonstrated its latest storage processor at the ongoing Flash Memory Summit 2019 (FMS), being held at the Santa Clara Convention Center, California, USA. The revolutionary new architecture increases data center storage efficiency by over 60X.

Pliops, based in San Jose, USA and Tel Aviv, Israel, is a storage processor company. It has 40 employees, and has deep experience in database and SSD technologies. Pliops has completed work on its core technology. The first product is to be released in Q4 2019.

The Pliops storage processor enables cloud and enterprise customers to offload and accelerate data-intensive workloads, using just a fraction of the computational load and power consumption.

Pliops, at FMS 2019, talked about cloud networking trends. In networking, 100Gb is currently mainstream, and it is now moving to 400Gb. For CPUs, clock speed has been doubling only every 20 years, and adding cores only marginally adds to performance. NVMe SSDs, meanwhile, deliver 1,000x the IOPs of HDDs and 10x the IOPs of SATA; 8-16TB storage is currently mainstream. The growing gap between networking and storage performance vs. CPU performance will increase data center sprawl and costs.

Among the key-value storage engines in the database/storage stack – such as RocksDB, WiredTiger, and InnoDB – the engine is responsible for data persistency. These engines also keep the data sorted, and are traditionally based on B-trees. LSM trees have since taken over, with RocksDB remaining popular. All of these are extremely complex and prone to variable performance.
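The B-tree-to-LSM shift these engines made can be illustrated with a toy LSM-style engine. This is a minimal sketch of the idea only, not how RocksDB is implemented; real engines add write-ahead logs, bloom filters, and background compaction:

```python
class ToyLSM:
    """Toy LSM engine: writes land in an in-memory memtable, which is
    flushed to immutable sorted runs; reads check the memtable first,
    then runs from newest to oldest."""

    def __init__(self, memtable_limit: int = 2):
        self.memtable: dict[str, str] = {}
        self.runs: list[list[tuple[str, str]]] = []  # newest first
        self.memtable_limit = memtable_limit

    def put(self, key: str, value: str) -> None:
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            # Flush: sort the memtable and persist it as an immutable run.
            self.runs.insert(0, sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key: str):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.runs:  # the newest run wins
            for k, v in run:
                if k == key:
                    return v
        return None

db = ToyLSM()
db.put("user:1", "alice")
db.put("user:2", "bob")    # fills the memtable, triggering a flush
db.put("user:1", "carol")  # newer value shadows the flushed one
print(db.get("user:1"))    # prints carol
```

Note that writes are always sequential (append a sorted run), which is what makes LSM designs flash-friendly, at the cost of the re-sorting and compaction overheads discussed next.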

If we examine the sources of key-value inefficiency, questions arise such as: how do you efficiently map variable-sized data to fixed-size blocks? There are also trade-offs between huge memory maps and multiple flash accesses, and between speed and space efficiency.

There are high CPU and I/O costs for sorting, resorting, and garbage collection of data. There is also high read and write amplification – typically 20-100x. This either reduces the flash lifetime or requires expensive flash. It also reduces effective application bandwidth. When using disaggregated block storage, 20-100x the application bandwidth is required.
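Write amplification itself is just a ratio, which a tiny helper makes concrete. The 30x figure in the example is an illustrative value inside the 20-100x range quoted here, not a Pliops measurement:

```python
def write_amplification(app_bytes: float, flash_bytes: float) -> float:
    """Bytes physically written to flash divided by bytes the
    application logically wrote (1.0 would be the ideal)."""
    return flash_bytes / app_bytes

# An engine that, through sorting and garbage collection, writes 30 GB
# to flash for every 1 GB of application data has a WA of 30x.
print(write_amplification(1.0, 30.0))  # prints 30.0
```

Since flash endurance is rated in total bytes written, a 30x write amplification divides the drive's effective lifetime by roughly the same factor.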

A thin driver layer can be added to the database/storage stack (e.g., MySQL, MongoDB, or Ceph). This is where the Pliops storage processor comes in.

Elaborating on the role of hardware, Pliops listed management of a highly compressed object memory map as the prime task. It is extremely memory-efficient, whereas software alternatives are much costlier. The hardware takes care of key sorting, object garbage collection, compression and encryption, and data persistency and logging. It also frees memory and compute resources to run applications, rather than manage storage.

Pliops offers a 13X performance benefit over software. Comparing Pliops vs. software on MySQL, Pliops offers 5X faster queries per second, and over 7X more transactions per second. There are 20 percent NVMe flash space savings, and 9.5X write-amplification improvements for flash.

Pliops offers three deployment options: first, DAS with the accelerator card; second, the accelerator card in a storage engine node; and third, SEaaS (storage engine as a service).

Pliops’ solution solves the scalability challenges raised by the cloud data explosion and the increasing data requirements of AI/ML applications.

Toshiba Memory first to sample UFS 3.0 embedded memory devices


Toshiba Memory America Inc. has begun sampling the industry’s first Universal Flash Storage (UFS) Ver. 3.0 embedded flash memory devices.

Toshiba UFS V3.0

Available in three capacities (128, 256 and 512GB), the new lineup utilizes Toshiba’s cutting-edge 96-layer BiCS FLASH 3D flash memory. High-speed read/write performance and low power consumption make the new devices ideal for applications such as mobile devices, smartphones, tablets, and augmented/virtual reality systems.

Elaborating on the features, Scott Beekman, director of managed flash memory products for Toshiba Memory America, said: “There are two key new features:
(1) Increased performance enabled by a faster interface. Ver. 3.0 UFS has double the interface speed of the prior-generation Ver. 2.1. Ver. 2.1 can support two lanes, each with a data transfer rate of up to 5.8Gbps (11.6Gbps [gigabits per second] in total), whereas Ver. 3.0 can support two lanes, each with a data transfer rate of up to 11.6Gbps (23.2Gbps in total).

“(2) UFS Ver. 3.0 enables lower power consumption by supporting a VCC power supply voltage of 2.5V (it also supports 3.3V), whereas Ver. 2.1 only supports a VCC power supply voltage of 3.3V. The main motivation for developing the JEDEC Ver. 3.0 standard was the first feature, the performance increase. The second feature helps support this faster performance while suppressing increases in power consumption.”
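The interface arithmetic in the quote is easy to verify from the per-lane rates given (2 lanes × 5.8Gbps for Ver. 2.1, 2 lanes × 11.6Gbps for Ver. 3.0):

```python
def total_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Aggregate interface bandwidth across lanes, in gigabits/second."""
    return lanes * gbps_per_lane

ufs_2_1 = total_gbps(2, 5.8)    # 11.6 Gbps
ufs_3_0 = total_gbps(2, 11.6)   # 23.2 Gbps
print(f"Ver. 3.0 is {ufs_3_0 / ufs_2_1:.0f}x the Ver. 2.1 interface speed")
```

Note this is raw interface speed; as the article notes later, the realized device-level gains (+70 / +80 percent sequential read/write) also depend on the flash and the controller.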

JEDEC, the global leader in developing open standards for the microelectronics industry, has enhanced the previous versions of the UFS standard.

Scott Beekman.

How is this different from the competition?

He added that Toshiba is the first to introduce Ver. 3.0 UFS. “We have this faster product before our competition does. At some point, it is expected that competitors will also have Ver. 3.0 UFS.”

Features of 96-layer BiCS FLASH 3D flash memory
Elaborating on the 96-layer BiCS FLASH 3D flash memory, and how it is different, he said: “The 96-layer BiCS FLASH is Toshiba’s latest generation of 3D memory. This UFS Ver. 3.0 device is using the latest generation of 3D flash memory that Toshiba has available, which helps to enable faster performance, and more density per given area of die. The prior Ver. 2.1 UFS memory from Toshiba used 64-layer BiCS FLASH 3D memory.

“The combination of the faster JEDEC standard Ver. 3.0 interface, and using our latest generation of 96-layer BiCS FLASH, along with Toshiba’s UFS controller, is enabling an actual performance increase of about +70 percent for Sequential Read and +80 percent for Sequential Write over our prior generation Ver. 2.1 UFS.”

What is the next advancement likely from Toshiba?

He noted: “For UFS, we will continue to support future versions/generations of JEDEC standard UFS that enable even faster performance. This will enable applications, such as smartphones, AR/VR and many others, to continue to realize increases in performance – and may even enable new use cases and applications that we aren’t currently able to envision.”

DVC provides fantastic opportunity: NetApp


NetApp has introduced the Data Visionary Engineering Center (DVC) in Bangalore. Paul van Linden, manager, EMEA and APAC EBC Program, said that, as of now, there are four DVCs: Sunnyvale, California, and RTP, North Carolina, in the USA; Amsterdam, the Netherlands; and now, Bangalore.

Having a DVC does make a difference. Linden said: “Partners are hugely important. In a 2017 APBM survey, 86 percent said their purchase size increased due to the visit. 30 percent said that NetApp is a trusted advisor. 42 percent said that their sales cycle had reduced (by up to 9 percent). And, 79 percent said that they discovered additional products (up by 15 percent).” He added, “We provide proven business acceleration.”

On the question of why have a DVC in Bangalore, he said: “Global customers have some very unique requirements. E.g., they would like to have detailed conversations with coders. This (DVC) is a fantastic opportunity.”

Anil Valluri, president, Sales, India and SAARC, said: “It is a recognition of two things – one, the vibrancy of the market, and two, the huge amount of engineering talent in India. There are a lot of services being launched by the government. There is a growing market, with a lot of cutting-edge technology. We can tell people how to embrace digital transformation.

“The global SIs architecture centers are here. They can come here, and use technologies. It is a recognition of the potential of the Indian market. We can also serve as the knowledge center.”

Deepak Vishweswaraiah, MD and SVP, Data Fabric and Manageability Group, noted: “The whole digital transformation is not unique to NetApp. We are helping customers to progress on their data journey visions. Customers need to find new ways to do business. They have to find newer customers and newer ways to do business.

“We are also introducing the NetApp Cloud Volumes for Google Cloud Platform (GCP). We are now delivering data services with all the world’s largest hyper-scalers, such as Azure, AWS and Google Cloud Platform.

“We have modernized the IT architecture with Cloud Connected Flash. We power AI and high-performance applications with the world’s fastest enterprise all-flash array, the end-to-end NVMe AFF A800.

“The NetApp ONTAP 9.4 storage OS improves performance, efficiency and data protection, also providing the industry’s first enterprise 30TB SSDs. It enables GDPR compliance and secures the data. New, intelligent cloud services further reduce TCO. The Active IQ provides insights for higher operational efficiency.

“We have also announced the NetApp Cloud Insights – Hybrid Cloud ITIM, delivered via SaaS. It improves customer satisfaction, proactively prevents failures, and optimizes to reduce cost. We have automated the tamper-proof retention of critical financial data.

“We are now accelerating our data visionary footprint in India. We have the largest R&D teams for NetApp in India.”

Semicon industry in for an exciting decade ahead: Lam Research


Lam Research has been a global leader in wafer fabrication equipment and services since 1980. It is the world’s second-largest semiconductor equipment manufacturer. Lam Research India was established for software development and support in 2000. Now, it provides hardware and software engineering design services, and plays a strategic role as part of the Product Engineering and Global Operations teams.

With a centre in Bengaluru that houses over 800 employees, Lam Research India’s proximity to the customer and supplier base in Asia, as well as 24×7 operational support enabled by the time zone difference with the headquarters in Fremont, CA, makes Lam India an indispensable part of Lam.

While Lam does not manufacture in India, there is a manufacturing support system here that is involved in planning, procurement and logistics that caters to a worldwide network of suppliers and manufacturers.

Innovation in semicon
Let’s look at the work and innovation happening in the semiconductor space.

Krishnan Shrinivasan, MD, Lam Research (India), said: “It is a very exciting time to be in the semiconductor ecosystem. There is a full spectrum of next-generation solutions that we have been working on for about five years now. We have made some headway in its implementation. Non-volatile memory (NVM), which is about the cloud and data storage, driven by the amount of distributed sensors that are collecting data that needs to be stored and monetized, has possibly experienced the highest growth.

“Another key transition is from two-dimensional architecture to a three-dimensional architecture. In a two-dimensional architecture, one is constantly working on shrinking, but on a single dimension. Now, we have an opportunity to continue to work on shrinking, but, also have an almost unlimited opportunity to vertically scale. We are just in the third- or fourth-generation of an inflection that will create an impact for at least ten generations to come.

“In terms of the logic roadmap, it has already transitioned from the world of the planar transistor to the FinFET transistor scheme, and there are further generations of innovation in FinFET technology and a new transistor structure in later architectures.

“This roadmap is a 5-10 year one for the logic industry. While the clearest roadmap for the industry from a technology point of view is in NVM, all of the elements, logic and memory, including DRAM and NVM, have a technical roadmap that is as strong as – if not stronger than – it has been in many years.

“The semiconductor industry is looking at an exciting decade from a technological advancement point of view. The level of innovation is being driven by an increasing number of applications for predictive medicine, autonomous vehicles, innovations in space and climate. All this would not have been possible without silicon.

“The innovation in silicon enables the development of the application space. Application development and growth can only be sustained through continuous innovation in the semiconductor industry.”

Transformative memory tech
Next, what about transformative memory technology and its latest inflection?

Shrinivasan added: “The semiconductor industry is facing multiple technology inflections simultaneously. Revolutionary approaches are being sought in place of incremental or evolutionary scaling strategies, in order to provide consumers with smaller, faster and more power-efficient devices.

“The current inflections are focused on multiple patterning, FinFET, advanced packaging, and 3D NAND. NAND flash has traditionally been made using two-dimensional (2D) or planar methods. However, in order to squeeze in more memory capacity without having to shrink feature dimensions, 3D NAND provides a viable option. This memory structure is different; therefore, it requires new fabrication methods, which are being developed. 3D NAND is being driven by several important advantages that it offers, including its ability to deliver higher capacity with a lower cost per bit.
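The capacity argument for 3D NAND can be sketched with a back-of-the-envelope model: bits per die scale with layer count and bits per cell, so capacity grows without shrinking the planar feature size. The cell counts below are illustrative assumptions, not Lam or Toshiba figures:

```python
def die_capacity_gbit(layers: int, cells_per_layer: float,
                      bits_per_cell: int) -> float:
    """Rough 3D NAND die capacity in gigabits: vertical layers times
    cells per layer times bits stored per cell (e.g. 3 for TLC)."""
    return layers * cells_per_layer * bits_per_cell / 1e9

# Doubling the layer count (e.g. 48 -> 96) doubles capacity at the
# same die footprint and the same planar feature size:
low = die_capacity_gbit(48, 4e9, 3)    # illustrative TLC die
high = die_capacity_gbit(96, 4e9, 3)
print(high / low)  # prints 2.0
```

This is the "almost unlimited opportunity to vertically scale" Shrinivasan describes: each generation adds layers rather than relying solely on lithographic shrink.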