Japan needs to put its mind to bridging the semiconductor gap; China in catch-up mode


TRT World is a Turkish public broadcaster. In Money Talks, a TRT World production, it was reported that the global semiconductor market is expected to grow 13.1 percent to $588.36 billion in 2024.

Memory chips are anticipated to lead with 44.8 percent growth, driving the market. The Americas are expected to grow by 22.3 percent, and Asia Pacific by 12 percent, in 2024. TSMC, Nvidia, Samsung and Intel are expected to lead the global AI chip trends.

Malcolm Penn, Chairman and CEO, Future Horizons, told Money Talks about the importance of semiconductors, and their impact on the stock market in the near future. He said memory has always been an important segment of the semiconductor industry. It generally accounts for a quarter to a third of overall sales. What happens in memory affects the overall semiconductor market.

Memory has seen a down-cycle for the last two years. It is now on the upside, and the recovery will be very strong. It is also the driving force behind the overall recovery we are seeing. The other parts of the semiconductor market, including discretes, have been growing in single digits so far. We are seeing a memory-led recovery. Memory is always the first into the downturn, and the first into the upturn.

Regarding the chip war among China, Taiwan, and the USA, he added that China is always playing catch-up in terms of technology, which is quite normal. New regions are entering the semiconductor market. It will take time to catch up. China is two or three nodes behind the leading edge. China has aspirations for Taiwan. That led the rest of the world to look at what could be done to stall China's entry into Taiwan, and into the high end of the market. That led to the current restrictions on China, led by America. There is this move to push China back, and try to slow it down. They want to keep the lead with TSMC, in particular. The whole world depends on TSMC.

Japan saw modest growth of 4.4 percent last year. How can it maintain growth? Penn noted that Japan was very strong in the 1980s. It was steaming ahead in technology, manufacturing, etc. It has gone from a 28 percent share of the global semiconductor market to just 8 percent. It has lost a lot of its OEM segment, which was the market for semiconductors. It has also fallen back in technology.

Japan is now making a very brave effort to pull back and catch up. It is certainly doing very well, and has some very good investment projects going on. But that is not going to happen overnight. Japan remains a huge factor in the equipment industry and in the materials industry. Japan now has the pieces in place. It needs to put its mind to it, and bridge that gap.

Utility-scale quantum computing — advances and future challenges


At IRPS 2024, Dr. Rajeev Malik, Program Director, System Development, Benchmarking & Deployment, IBM Quantum, USA, presented another keynote on utility-scale quantum computing — advances and future challenges. He said the quantum timeline started in the 1970s with Charles Henry Bennett, and the field has been advancing and growing since. About five years ago, we introduced Quantum System One.

Quantum computers are the only novel hardware that changes the game. We need to solve several hard problems; factoring is one example. The value of quantum computing becomes apparent as problems scale beyond what classical computers can handle. That's the goal!

Dr. Rajeev Malik.

We are seeing increasing utility in quantum computing. Any quantum computer can have errors, so we are doing error correction, with increasing circuit complexity. We have scale + quality + speed as performance metrics. These are the three key metrics for delivering quantum computing performance.

Scale involves the number of qubits. Quality is measured as error per layered gate; quantum computers are noisy, and we need to lower the error rates of two-qubit gates (<0.1 percent). Speed is calculated in circuit layers per second. We need individual gate operations to complete in the <1 us range to have reasonable runtimes for real workloads. IBM is striving to bring useful quantum computing to the world, and make it quantum safe.
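
To get a feel for why the speed metric matters, here is a back-of-the-envelope sketch in Python. The layer and shot counts are hypothetical; only the sub-microsecond gate-time target comes from the talk.

```python
# Rough arithmetic sketch, not an IBM benchmark: how gate time, circuit depth,
# and repeated shots combine into wall-clock runtime.
gate_time_s = 1e-6      # target: individual gate operations under ~1 microsecond
layers = 5_000          # hypothetical number of circuit layers (depth)
shots = 10_000          # hypothetical repetitions needed to gather statistics

single_run_s = layers * gate_time_s
total_s = single_run_s * shots
print(f"one execution ~{single_run_s * 1e3:.0f} ms, {shots} shots ~{total_s:.0f} s")
```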

IBM quantum platform
He talked about the IBM quantum platform. We are working on everything from designing the processor to how users are going to use the system. Qiskit is the software development kit, followed by tools that run the systems and users' workloads.

A quantum circuit is the fundamental unit of quantum computation. A quantum circuit is a computational routine consisting of coherent quantum operations on quantum data, such as qubits, and concurrent real-time classical computation. Relevant problems need hundreds to thousands of qubits and 1 million gates or more to solve.
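
As a minimal sketch of what such a circuit looks like in Qiskit (assuming the package is installed; this two-qubit Bell-state example is illustrative, not one of the utility-scale workloads discussed in the keynote):

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)   # 2 qubits, 2 classical bits
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # read both qubits out into classical bits

print(qc.draw())            # text diagram of the circuit
```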

IBM Quantum Network is mapping interesting problems to quantum circuits. The IBM Quantum Development Roadmap is to run quantum circuits faster on quantum hardware and software. Different things are being used to build qubits. These include photons, ions, solid-state defects, nanowires, neutral atoms, superconducting circuits, etc.

An IBM quantum system has a dilution refrigerator with cryogenic interconnects and components, including the processor. It has custom third-generation room-temperature electronics for control and readout. Classical co-compute servers enable the Qiskit Runtime to execute use-case workloads efficiently.

The IBM quantum roadmap includes demonstrating quantum-centric supercomputing in 2025. We can scale quantum computers by 2027. In 2029, we can deliver a fully error-corrected system. In 2030+, we can deliver quantum-centric supercomputers with thousands of logical qubits. Beyond 2033, quantum-centric supercomputers will include thousands of qubits capable of running 1 billion gates, unlocking the full power of quantum computing.

From Falcon with 27 qubits in 2019, IBM moved to Osprey with 433 qubits in 2022. We are scaling QPUs, with Osprey using scalable I/O. IBM's advanced device stackup includes an interposer chip, wiring plane, readout plane, and qubit plane. A lot of technologies are borrowed from semiconductors. In 2023, IBM released Condor, pushing the limits of scale and yield with 1,121 qubits. IBM also released Heron with 133 qubits, with I/O complexity on par with Osprey. IBM aims to extend Heron to Flamingo and Crossbill via modular coupling.

Accelerated timescale
We are now upgrading everything, and development is on an accelerated timescale: processors, connectors, and control electronics are getting updated every 12-18 months. Density, power, and cost remain key focus areas. There is migration from discrete to integrated on-board components, and of control electronics to 4K temperatures or cryo CMOS. We are improving the long-term reliability of cryogenic components. We are looking at predictability, availability, and stability of the system. We have achieved system uptime of over 97 percent, and over 90 percent availability to run jobs.

We have deployments at Fraunhofer, Germany; the University of Tokyo; Cleveland Clinic; and PINQ2, Canada; with deployments pending at Yonsei University, Seoul; Riken, Japan; Ikerbasque, Spain; etc., plus 40 innovation centers. We need a disruptive change to unlock the potential of quantum computation. We had multiple systems in place by H2-2023, with more to come.

Innovative technologies for sustainable future of semiconductor industry: IRPS 2024


IEEE International Reliability Physics Symposium (IRPS) 2024 was held in Dallas, Texas, USA. Su Jin Ahn, EVP, Advanced Technology Development Office, Samsung Semiconductor R&D Center, presented the plenary on innovative technologies for sustainable future of semiconductor industry.

Semiconductor market growth will be from $224 billion to $350 billion for computing and data storage. Automotive electronics has moved to $150 billion. We are seeing an explosive growth of data and rapid development of AI. Data creation is accelerated by GenAI. We had AlphaGo, developed by London-based DeepMind Technologies, an acquired subsidiary of Google, in 2015.

Today, we are witnessing growth in wafers and semiconductor fabs. The number of fabs using 300mm wafers has increased threefold over the last 15 years. There has been a paradigm shift in computing over the past 80 years: from the mainframe era, we have now come to hyper-scale connectivity and AI. Moore's Law-based geometry scaling has been at the heart of all these developments. We have seen the evolution of photolithography, multi-patterning, etc. There have been structure and materials innovations as well. There is a changing landscape of memory and logic devices in IRPS papers. Reliability issues need to be overcome for new structures and materials.

Su Jin Ahn.

We are seeing the evolution of backend-of-line metals. We have relied on low resistivity and high reliability for the performance, speed, and power efficiency of chips. There is an evolution of NAND flash memory, which is now entering 3D stack-up and wafer bonding. Stacking has moved to heterogeneous integration. There are mutual thermal, chemical, and stress effects, and potential reliability issues. Cell-to-cell variations from top to bottom increase due to the deep contact holes. There are stress-related problems in wafer bonding.

Technologies in the works
Logic scaling continues, but CPP scaling has slowed down due to short-channel effects and contact resistance, and cell-height scaling has slowed down due to rising metal resistance. We have prospects for the logic transistor beyond the planar TR. We moved from the planar TR to the FinFET at 14nm. At 3nm, we have the gate-all-around (GAA) MBCFET. At <1nm, we have the 3DS FET and CFET.

We have potential reliability issues in the GAA transistor. Thin body and process complexity degrade HCI. Structural complexity and increased packing density make it vulnerable to poor heat dissipation. Self-heating is a concern for maintaining temperature in the thin nanosheet channels.

We have potential reliability issues in the 3DS FET. Accumulation of process damage causes TDDB and BTI degradation. The complex 3D layout causes Vth variation, and self-heating worsens. We also have potential reliability issues in 2D channels. We have to look at material quality, full integration into the logic process, defect control, etc.

We have prospects for the DRAM cell beyond 10nm. Area scaling continues via the vertical channel transistor or vertically-stacked cell array. Potential reliability issues include undesired hole accumulation in the thin floating channel that increases sub-threshold leakage. Beyond the Si channel, we have the depositable IGZO channel transistor in the vertical channel transistor (VCT). We also have potential reliability issues in the IGZO channel. There can be thermal instability in process integration, abnormal PBTI, and Ion-Vth trade-off behavior.

Future prospects
In the future, we will be transitioning to the 3D stack era. We will move to stacked cell array and periphery circuit, and heterogeneous integration. We are moving to wafer bonding and advanced packaging. We are seeing the evolution of package technology. We are pursuing fine-pitch bonding (<2um) for interconnect density (>2e5/um2) compatible with SoC. We are moving to wafer-to-wafer bonding and multi-chiplets.

There are potential issues in 3D-IC. These include massive bonding interfaces (multi-chiplet, multi-stage HBM) that increase EM risks. Pitch scaling and low-temperature processes weaken bonding interface stability.

There are several future reliability challenges. They include TDDB, EM, FBE, HCI, BTI, self-heat, etc. Thanks to new semiconductor technologies being developed, we have managed the challenges well, so far. More is expected in future.

Intel launches Gaudi 3 AI accelerator, Lunar Lake, Xeon 6, and more!


Intel Vision 2024 was held recently in Phoenix, Arizona, USA. AI represents a paradigm shift in how humans and technology interact. Pat Gelsinger, Intel CEO, talked about the tangible outcomes AI can enable for businesses, and what the future of AI in the enterprise looks like.

He said that we had a proud moment in Arizona on March 20, when the first major CHIPS Act award came to Intel for our Ocotillo facility. It was the largest investment in semiconductor history. The CHIPS Act is the most important industrial policy since World War II. Every aspect of our life is becoming digital! The era of AI is driving this huge momentum.

We have Intel Foundry as the systems foundry for the AI era. Intel Products are modular platforms for the AI era. Intel Foundry is committed to becoming the world's no. 2 foundry by the end of the decade. We will also be opening the doors of our manufacturing for the first time ever to companies across the industry. Intel Foundry and Intel Products form a deadly combination. We are in the most intense period of innovation.
Every company will be an AI company in the future. AI workload is a key driver of the $1 trillion+ semiconductors TAM by 2030. AI is making everything exciting like we have never seen. It will change every aspect of business.

You need technology infrastructure that is scalable and flexible. You also need tangible business outcomes. Intel has the mission of bringing AI everywhere! We help enable AI for every aspect of your business. Wi-Fi made every office, coffee shop, etc., wireless. The AI PC is like the Centrino moment. We are seeing AI PC momentum, using Intel Core Ultra. Over 5 million AI PCs have been shipped to date. We have a 40-million-unit goal by the end of this year. 500 models are optimized for Core Ultra. We are working with OEM partners like Dell, HP, Lenovo, etc. Our roadmap is strong, and off to a good start.

Lunar Lake.

Welcome Lunar Lake!
Intel's next platform is Lunar Lake. It is the second chip we have launched, and the flagship SoC for next-gen AI PCs. It has 3x the AI performance, with over 100 platform TOPS and 45 NPU TOPS. The third generation is currently in the fab. We are going to drive the AI PC category.

AI will be helping every business worker by automating, streamlining, and collaborating, with new insights. You need to refresh! It is the time to upgrade. Today, every app is going through an AI makeover. He encouraged participants to call their IT head and ask for an AI PC refresh policy.

The next piece of bringing AI everywhere is the edge and the enterprise. The killer app for the next-gen edge is AI. Intel is building on open standards platforms, and investing. There are three laws of the edge: economics, physics, and the land. It is too expensive to bring data back to the cloud; economics says move it to the edge. Next, we have skill requirements. The laws of physics drive you to the edge. The laws of the land are also important: every nation has some form of GDPR-like regulation. The edge is becoming increasingly important.

RAG unleashes data!
We have retrieval-augmented generation (RAG). It's all about unleashing data! RAG is becoming an important workflow for corporations to take advantage of their data and GenAI. RAG encodes domain- or business-specific data in a database and packages it together with complex queries into open-standard LLM environments.
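
As a toy, self-contained sketch of that workflow (illustrative only; a real deployment would use embeddings, a vector database, and an actual LLM call, none of which are shown here), the retrieval step pulls the most relevant internal document and packages it with the question before anything is sent to the model:

```python
# Toy RAG sketch: retrieve the most relevant internal document for a query,
# then hand it to an LLM as context. Scoring here is simple word overlap.
def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    return max(docs, key=lambda d: score(query, d))

docs = [
    "Q3 travel policy: economy class for flights under six hours.",
    "Data retention policy: customer logs are kept for 13 months.",
]
query = "How long do we keep customer logs?"
context = retrieve(query, docs)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```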

We now have the Open Platform for enterprise AI. How can you deploy using existing infrastructure, and do seamless integration? Now is the time to build an Open Platform for enterprise AI. We also have strong support from companies such as SAP, IBM, RedHat, VMware, Yellowbrick, etc. Based on this strong ecosystem, we will have effective benchmarking of these solutions.

Bringing an Open Platform will help make blueprints and reference designs available, demonstrate performance, interoperability, and trustworthiness, and ensure effective benchmarking and certification of these solutions. We will be rolling out the next steps for the Open Platform for enterprise AI in Seattle next week.

Lan Guan.

Lan Guan, Accenture's Chief AI Officer, said we help customers develop their AI strategies. There are three challenges. One, ambiguous value realization: clients find it hard to realize value from their AI investments. You really need to take the enterprise reinvention approach. The second challenge is insufficient data quality. GIGO could not be truer, especially in the era of AI. As an example, Accenture does a lot of work in contact-center AI; one client had 37 versions of an SOP.

The third challenge is the widening talent gap. Many organizations need talent to build, operate, and manage AI. There are also a lot of business users that need to be trained for effective prompt engineering.

You need to build with value. It is also time to build your digital core with data and AI foundations. Next, bridge your talent gaps. Further, you need to put your responsible AI framework in place. GenAI is also about continued reinvention. Take a measured approach.

Xeon 6 to the fore
Gelsinger said Intel has been working for years on the data center and the cloud. The Xeon innovation machine is not slowing down. We are now seeing rapid evolution of AI workloads. Intel is announcing the next-gen Xeon 6 processors. It is the new brand for the next generation of Efficient-core and Performance-core solutions.

Xeon 6 with E-cores uniquely addresses these challenges. It is based on Intel 3. We will be moving Sierra Forest into production this quarter, and will work with customers and OEMs to make it available. We are delivering 2.7x better rack density and 2.5x performance-per-watt improvements. With Xeon 6, we can reduce a telco data center to just 72 racks, with the same performance and capabilities. Xeon 6 provides up to a 27 percent lower embodied carbon footprint over previous-generation systems.

The big brother of the Sierra Forest processor is the Xeon 6 processor with P-cores, code-named Granite Rapids. We will be launching it shortly after the Sierra Forest product, and are excited to be ramping it this year.

We are now looking at maximizing the power of your data. Over 60 percent of data is on the cloud. A vast majority of the data is still on-prem. 66 percent of that data is unused, and 90 percent of unstructured data is unused. LLMs and RAG provide an extraordinary opportunity to unlock this hidden asset. Xeon is tremendous for running RAG environments. It can run LLMs as well.

Intel is working to drive standards, in particular microscaling formats through the MX Alliance. We are working with Arm, Qualcomm, Nvidia, etc. One format is called MXFP4, a 4-bit floating-point format whose exponent and mantissa encoding lets reduced-precision AI workloads operate effectively.
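
To give a feel for the idea behind such block-scaled low-bit formats, here is a simplified Python illustration of shared-scale 4-bit quantization over a 32-element block. It uses a plain integer grid for the elements rather than the actual FP4 (E2M1) element encoding, so it is a conceptual sketch, not the MXFP4 specification.

```python
import numpy as np

def quantize_block_4bit(block):
    """Quantize one block of values with a single shared power-of-two scale."""
    max_abs = np.max(np.abs(block))
    if max_abs == 0:
        return np.zeros_like(block), 1.0
    scale = 2.0 ** np.ceil(np.log2(max_abs / 7.0))  # 7 = largest 4-bit signed magnitude
    q = np.clip(np.round(block / scale), -7, 7)     # 4-bit element codes
    return q, scale

x = np.random.randn(32).astype(np.float32)          # one 32-element block
q, scale = quantize_block_4bit(x)
x_hat = q * scale                                    # dequantized values
print("max abs error:", float(np.max(np.abs(x - x_hat))))
```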

Next, we have the Xeon 6 P-core, or Granite Rapids. In a Gen 2 vs. Gen 4 test, there is a 3x improvement in latency as we move to modern data types such as FP4. From Gen 4 to Gen 5, we were around the 100 ms threshold; with Gen 6, we are at 82 ms. We can run hefty models on the Xeon platform, with a 6.4x improvement from Gen 4 to Xeon 6.

Data is now your most valuable asset. Intel is working on confidential computing. We have the EU AI Act, and US Act. We have the Intel TDX. Confidential computing and AI are now available on the cloud with major CSPs. We are building a truly end-to-end confidential computing environment. This is a sweet spot for enterprise AI adoption.

Enterprises are also looking for cost-effective, high-performance enterprise AI training and inferencing environments. They are turning to Intel's Gaudi 2, which has price-performance advantages. Gaudi is an alternative to the Nvidia H100 for training LLMs. We are seeing the acceleration of our Gaudi offering in 2024-25 and beyond.

UEC building AI fabrics
We are now bringing Xeon and Gaudi together. Customers are now asking for open ecosystem for AI connectivity. Ultra Ethernet Consortium (UEC) has the mission to deliver an Ethernet-based open, interoperable, high-performance, full-communications stack architecture to meet the growing network demands of AI and HPC at scale. Intel is among the steering members.

Lhyfe announces progress in green hydrogen projects


Hydrogen producer Lhyfe, from the city of Nantes, France, put its first production facility into operation in Oct. 2021. Today, the company offers renewable energy solutions, bio-gas, smart grids, and batteries.

Matthieu Guesné.

Matthieu Guesné, Chairman and CEO, Lhyfe, talked about the company's achievements during FY 2023. FY 2023 revenues were €1.3 million, double those of FY 2022. Lhyfe signed multiple new clients in France and Germany, including Avia, Manitou, Iveco, John Deere, Hypion, Hype, Symbio, Bretetche Hydrogen, etc.

New sites
Two new sites were inaugurated in France (Buléon and Bessières), making Lhyfe the first producer of renewable hydrogen in the country. Eight other sites are currently under construction or extension, mainly in France and Germany, more than any other player in the sector in Europe. We have continued to innovate with the world's first offshore green hydrogen production.

Lhyfe is also boosting its scale-up, with a €149m grant from the French government for a 100 MW project near Le Havre in France. It has strengthened its financing strategy, with a €28m first green corporate syndicated loan, and an increase in secured grants to around €230 million as of December 2023.

Bouin (France) site.

The Bouin (France) site is now running at full speed. The factory was completed in 2021, and it is now fully booked. An extension is planned to take output up to 1 tonne of green hydrogen per day, representing 2.5 MW of installed electrolysis capacity after extension. The onsite storage capacity will be extended from 700 kg to 5 tonnes. The extension is scheduled for completion by the end of FY 2024.

This is Lhyfe's first green hydrogen production site, with a current production capacity of up to 300 kg of green hydrogen per day (installed capacity of 0.75 MW). It has a direct connection to a wind farm, and a secured PPA with Vendée Energie. It serves mobility clients. Lhyfe has a 100 percent success rate in deliveries.
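
A back-of-the-envelope reading of those figures (a sketch, not a Lhyfe disclosure; it assumes the 0.75 MW installed capacity runs continuously to make the stated 300 kg per day):

```python
# Implied electricity use per kg of green hydrogen at Bouin, assuming the
# installed electrolysis capacity runs flat out around the clock.
installed_mw = 0.75
kg_per_day = 300

kwh_per_day = installed_mw * 1000 * 24
print(f"~{kwh_per_day / kg_per_day:.0f} kWh of electricity per kg of H2")  # ~60 kWh/kg
```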

Providing a site update on Buléon (France), he said it is located in Brittany (Morbihan; Buléon is near Lorient). The site has a production capacity of up to 2 tonnes of green hydrogen per day (5 MW installed capacity). Lhyfe is addressing mobility (70 percent) and bulk industry (30 percent). The main source of energy is a wind PPA with VSB énergies Nouvelles. A client has already been signed. The site was installed as of end 2023. Commercial ramp-up will start by the end of H1-2024.

The Bessières (France) site update was next. It is located in Occitanie (Bessières, near Toulouse). Production capacity is up to 2 tonnes of green hydrogen per day (5 MW installed capacity). The main source of energy is a wind PPA. It is also a winner of the Corridor Hydrogen tender for projects. This plant is under commissioning. Commercial ramp-up will start by the end of H1-2024.

Lhyfe has several sites under construction in Germany. Tübingen, Germany has a capacity of up to 200 kg per day (1 MW installed capacity). It is aimed at supplying hydrogen-powered trains on the Pforzheim-Horb-Tübingen line from 2024. Lhyfe signed a contract with Deutsche Bahn. The unit has been installed and is ready for the client's start of operations.

Schwäbisch Gmünd, Germany has up to 4 tpd (10 MW installed capacity). It is mostly for mobility. Construction work was launched at the end of 2023. Brake, Germany has up to 4 tpd (10 MW installed capacity); site construction also started at the end of 2023. It is 100 percent used for bulk.

Sites under construction in France include those in Croixrault and Sorigny. Croixrault has up to 2 tonnes of green hydrogen per day (5 MW of installed electrolysis capacity). It is located on the Mine d'Or industrial area, alongside the A29 motorway. It is the first production unit in the Hauts-de-France region to make renewable hydrogen available to a wide market. It will supply local uses in mobility and industry. Civil works started in early 2024.

Green hydrogen can decarbonize ammonia.

Sorigny has up to 2 tonnes of green hydrogen per day (5 MW of installed electrolysis capacity). It is part of the Hy'Touraine project. Green hydrogen will be supplied for uses in mobility and industry, with many local authorities and businesses already identified as having hydrogen needs in the area. Civil works started in early 2024. In total, Lhyfe will have 10 plants. We are also developing in Spain.

Lhyfe has Fortress pipeline, excluding projects already under construction. Bulk projects are in Wallsend (UK) – 20 MW, HOPE Project (Belgium) – 10 MW, Bussy St-Georges (France) – 5 MW, Vallmoll (Spain) – 15 MW, Duisburg (Germany) – 20 MW, Milan (Italy) – 5 MW, and Le Cheylas (France) – 5 MW.

Onsite projects are in Gonfreville l’Orcher (France) – 100 MW, Nantes Saint-Nazaire Port (France) – 210 MW, Fonderies du Poitou (France) – 100 MW, Epinal (France) – 70 MW, SouthH2Port (Sweden) – 600 MW, Delfzijl (Netherlands) – 200 MW, etc. Backbone projects are in Aaland Island (Finland) – X GW, Lubmin (Germany) – 800 MW, and Perl (Germany) – 70 MW.

Lhyfe has secured a €149m grant from the French government to support the 100 MW project near Le Havre in France. There is 28,000 m² of available space at the planned site of Gonfreville-l'Orcher, which will host 100 MW of capacity. This confirms Lhyfe's ability to raise significant subsidies and de-risk large projects. It also confirms its status as a key player in the renewable hydrogen industry, and the know-how and expertise of Lhyfe's teams, pioneers in the industry.

The project has been approved by the European Commission as part of the third wave of IPCEI (Important Projects of Common European Interest) on hydrogen.

SEALHYFE pilot.

Offshore hydrogen production
Lhyfe is also paving the way for offshore hydrogen production. The SEALHYFE pilot has generated a unique set of data for a concrete step forward in offshore hydrogen development. It became the first offshore hydrogen production unit in the world in 2022, producing green hydrogen offshore in the Atlantic Ocean during the pilot period from May to Nov. 2023.

Green hydrogen was produced under stressed conditions (corrosion, direct connection to a wind turbine, strong accelerations, fully remote operations). Millions of data points were collected to support the next phase (the HOPE project). The reliability of offshore hydrogen production in an isolated environment, and the management of the platform's movements, were demonstrated. There was validation of production software and algorithms. The pilot was decommissioned at the end of Nov. 2023.

HOPE, or Hydrogen Offshore Production for Europe, is a world first. Green hydrogen will be produced at sea, and delivered ashore via a composite pipeline to local customers for use in the industry and transport sectors.

It will deliver up to 4 tpd of green hydrogen with 10 MW of installed capacity. It is located in the North Sea, off the port of Ostend. Operations are expected in early 2026. €33m in grants has been awarded, of which €20m from the EU and €13m from the Belgian government. This project is coordinated by Lhyfe, and implemented together with eight European partners.

Åland, off the west coast of Finland, is an autonomous, demilitarized, Swedish-speaking region of Finland. Lhyfe has a project to develop large-scale hydrogen production on Åland, integrated with gigawatt-scale offshore wind in Åland waters, for use on Åland and in the wider European region. Lhyfe has signed an MoU with CIP, the world's largest dedicated fund manager within greenfield renewable energy investments, and a global leader in offshore wind and green hydrogen.

Lhyfe is well positioned to answer future offshore bids to be launched in Europe from 2024 onward. Another 80 actions will be implemented over the coming years to address the Group's ESG strategic orientations. Over 80 tonnes of green hydrogen have been produced and sold to date.

HPC Vega — Slovenian peta-scale supercomputer powering scientific discovery


European Technology Platform (ETP) for High-Performance Computing (HPC) or ETP4HPC organized a conference today on the Vega system.

The session covered EuroHPC supercomputers, with the HPC Vega system hosted by IZUM in Maribor, Slovenia. Aleš Zemljak and Žiga Zebec from IZUM presented on Vega. IZUM is the Institute of Information Science, Maribor, Slovenia.

Slovenian peta-scale supercomputer
Aleš Zemljak gave an overview of HPC Vega: "HPC Vega — Slovenian Peta-scale Supercomputer". He touched on the system's design, architecture, and installation, focusing on the most user-relevant basic concepts of HPC and their relation to HPC Vega.

HPC Vega.

HPC Vega is the Slovenian peta-scale supercomputer and the most powerful supercomputer in Slovenia. It is the first operational EuroHPC JU system, in production since April 2021. It has a performance of 6.9 PFLOPS and is built on the Atos Sequana XH2000 with 1,020 compute nodes and 100 Gb/s InfiniBand. It has 18 PB of large-capacity Ceph storage and 1 PB of high-performance Lustre storage. It consumes under 1 MW of power and has a PUE below 1.15.
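
Quick arithmetic on those headline figures (a rough sketch treating the "<1 MW" and "<1.15" ceilings as upper bounds, not an official benchmark):

```python
# Peak performance per watt and facility power implied by the figures above.
peak_pflops = 6.9
it_power_mw = 1.0   # "< 1 MW" taken as the upper bound
pue = 1.15          # "< 1.15" taken as the upper bound

gflops_per_watt = peak_pflops * 1e6 / (it_power_mw * 1e6)
facility_mw = it_power_mw * pue
print(f"~{gflops_per_watt:.1f} GFLOPS/W peak, facility draw up to ~{facility_mw:.2f} MW")
```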

App domains
HPC app domains include earth sciences, such as seismology, earthquake simulations and predictions, climate change, weather forecasting, earth temperatures, ocean streams, forest fires, volcano analysis, etc. They also include high-energy physics and space exploration, such as particle physics, the ATLAS experiment at the Large Hadron Collider, astronomy, the Large Synoptic Survey Telescope, the Gaia satellite, supernovas, new stars, planets, the sun, the moon, etc.

They also cover medicine, health, chemistry, and molecular simulation, including diseases, drugs, vaccines, DNA sequencing, bioinformatics, molecular chemistry, etc.; mechanical engineering and computational fluid dynamics; and machine learning, deep learning, AI, etc., such as autonomous driving, walk simulations, speech and face recognition, robotics, language analytics, etc.

HPC Vega has 10 design goals. These are: general-purpose HPC for user communities, HPC compute-intensive CPU/GPU partitions, high-performance data analytics (HPDA) extreme data processing, AI/ML, compute-node WAN connectivity, a hyper-converged network, remote access for job submission, good scalability for massively parallel jobs, fast throughput for a large number of small jobs, and high sequential and random storage access.

Funded EU projects include interTwin (exploitation of the HPC Vega environment, two FTEs at IZUM and JSI), EPICURE, and SMASH (MSCA co-funded), which is on-boarding its first postdocs. EUmaster4HPC is preparing an offer for a summer internship.

Supporting projects/activities (non-funded) are: EuroCC SLING, MaX3 CoE, etc. Others are: European Digital Infrastructure Consortium (EDIC) – national resources reserved, high-level app support help for Leonardo, CASTIEL2, Container Forum, MultiXscale CoE, and EVEREST (Experiments for Validation and Enhancement of higher REsolution Simulation Tools).

The future involves new data centers and "Project NOO", under the Recovery and Resilience Plan. The goals are archive facilities for research data, space for hosting equipment of public research institutions and universities, and space for future HPC(s). The project is due to be completed in June 2026. There is EUR 15.2 million for two data centers and long-lasting storage equipment for research data.

We envision two identical facilities or buildings for two data centers. One will be located at Dravske elektrarne, Mariborski otok; acquisition of the land has been completed. The other is at the JSI (nuclear research) reactor site at Podgorica, near Ljubljana. We will use the ground floor for HPC, and the first floor for the research data archive, Arnes's equipment, and hosted equipment. Slovenia is going to need a new supercomputer by the end of 2026. EuroHPC JU co-funding is expected (this system is not part of "Project NOO").

Powering scientific discovery
Dr. Žiga Zebec presented: “HPC Vega: Powering Scientific Discovery”, focusing on the science conducted on HPC Vega, or “use cases”.

Slovenian research institutions using HPC Vega include: Kemijski inštitut (laboratory for molecular modeling); Univerza v Ljubljani (cognition modeling lab, and the FMF physics department); Univerza v Mariboru (laboratory of physical chemistry); and Institut Jozef Stefan (theoretical physics, experimental particle physics, reactor physics, Centre for Astrophysics and Cosmology, etc.).

Major domestic projects include the development of Slovene in a digital environment, whose goal is to meet the needs for computational tools and services in language technologies for Slovene; the development of meteorological and oceanographic test models; smart hospital development based on AI, with the goal of developing AI-based hospitals; and robot textile and fabric inspection and manipulation, which aims to advance the state of the art in perception, inspection, and robotic manipulation of textile and fabric, and bridge the technological gap in this industry.

We have the Slovenian Genome project, a systematic study of the genomic variability of Slovenians. It can enable faster and more reliable diagnostics of rare genetic diseases.

There are scientific projects running on the Slovenian share of HPC Vega. These include deep-learning ensemble for sea level and storm tide forecasting, All-Atom Simulations of cellular senescence (process of deterioration with age), first-principles catalyst screening, dynamics of opioid receptor, visual realism assessment of deepfakes, etc. Scientific projects are also running on EuroHPC share of HPC Vega such as understanding skin permeability with molecular dynamics simulations.

Vega is involved in several international projects. These include SMASH, interTwin, EUMaster4HPC, Epicure, etc.

New leaders can capture the chiplet revolution


TechInsights, USA, organized a fireside chat today on the global semiconductor industry.

G. Dan Hutcheson, Vice Chair, TechInsights, said the Chinese economy is starting to recover now. We are also getting into a new PC up-cycle. Companies are also trying to move their centers of excellence to other countries. We are seeing a normal upside right now. You do get some variation in supply over a period of 12 months. We are also moving into the 2nm era next year.

The "magnificent seven" for everybody includes Apple, Amazon, Google, Meta, Microsoft, Nvidia, and Tesla. Microsoft and Apple came out of the PC era. Amazon and Google came out in the 2000s. Nvidia came out of semiconductors. Tesla happened later. Apple is still riding on the smartphone. We also have the growing EV market. AI has also been emerging strongly. However, AI stocks were the worst performing among semiconductor stocks last week.

Nvidia has done a double lock-up recently: it has the GPUs and the whole system. They are re-architecting the way the data center works. That is why Nvidia is where it is today. We now need a new technology to be the next big thing. When the Apple iPhone first came in, it started a new revolution. It always surprises you!

We will have new leaders in future. We will also see new leaders capturing the growing chiplet revolution. The foundries that exist would not have been possible without the EDA revolution. Chiplets have now emerged as the new revolution.

We have neural network processors already. We also have cellphone APUs. There are some really cool things coming that will help organize your life, especially using the smartphone.

AI is seeing a huge explosion in entrepreneurial pursuit. Several AI chip startups will be coming up. GPUs have always had an innate advantage, but GPU chips are power hungry. We now need to partition that down to smaller parts. PCs had a closed-architecture partnership between Intel and Microsoft; we later saw the explosion of innovation around apps. AI is more of a curiosity right now. IBM used it to help physicians diagnose cancer; today, that has become routine. AI solutions will take a step forward, and bring real value.

China needs to catch up
As for domestic Chinese companies in AI, China is developing its own core technology. Taiwan has been incredibly successful as it has access to global technologies. China also needs to do a lot of classical innovation to move forward. Doing a lot of innovation can be very cultural; Silicon Valley is one example to follow. We are hoping that China can catch up, and that we can get back to the global order.

AI will be used on chips to improve MCU/MPU performance. Synopsys is a world leader that enables all of that. We are also seeing new process technologies being developed. However, we still need human intelligence to make all of this happen. AI, as a tool for engineers, may make some of them struggle. People were locked into their tools earlier. You have to be really good at using all the weapons at your disposal; if you don't, you can be left behind. We are also going to go through another productivity surge in the future.

Regarding alternatives to silicon, he said that God was bullish on silicon. It has proved to be the best material. Today, we have substrates with specific functions. We have to get around the interconnect level. Data centers are migrating further down to the new chips. Quantum does not replace silicon; it will co-exist with it.

Lead times are driven largely by the complexity of the problem addressed. Today, we have about 2,000 process steps, but the lead time is still 12-13 weeks. We have to address complexity. We had the case of just-in-time; we may create a disaster if we move to just-in-case. Shrinking lead times requires you to decrease utilization. We saw lead times decrease when utilization dropped to 60 percent. Intel had increased utilization by increasing hot spots.

We also need to look at the supply chain. As we become more efficient, we may also be dealing with even more complexity. We cannot see that either happening, or decreasing, in the foreseeable future. Regarding NAND demand, we are witnessing incoming demand, at least from data centers.

Japan getting back mojo!
Finally, which country can emerge as a semiconductor powerhouse? Japan is finally getting over its lost decades. Japan is certainly coming back; it appears that Japan has got its mojo back after a long time. China is also going to grow. India has the advantage of a cheap labor force, but it may have difficulty duplicating its success in software. It has advantages and disadvantages.

Japan and South Korea are much ahead right now. The US recovery is also taking place. Mexico is starting to rise. That’s driving new factories inside Mexico. Canada has a liberal immigration policy. Some of the best and brightest are present there. There is always opportunity. With technology, you need to run faster, work smarter, etc.

He hoped that everyone is safe in Taiwan, following the earthquake. Despite the severity of the tremors, the impact on Taiwan's semiconductor manufacturing capacity appears to be limited. TSMC has done, and continues to do, incredible work.

Policies and partnerships needed to support semiconductor startups


Semiconductor Industry Association (SIA), USA, organized a seminar on: Encouraging innovation: Policies and partnerships needed to support semiconductor startups.

Startups are a critical part of the semiconductor ecosystem, driving growth and innovation in the industry and exploring new frontiers of chip technology. Unfortunately, startups in the semiconductor sector face significant challenges and barriers to entry. Creative and ambitious policy solutions and expanded public-private collaboration are needed to help semiconductor startups grow and strengthen.

SIA and Dan Armbrust from Silicon Catalyst (the world's only incubator and accelerator for startups focused on semiconductor solutions) had a discussion on the opportunities and challenges facing semiconductor startups. They looked at the actions needed to reinforce and expand this important part of the semiconductor ecosystem.

John Neuffer, SIA President, said that we represent two-thirds of the global chip industry. Startups have been an essential part of the ecosystem. There are barriers to entry. We have to overcome them.

Dan Armbrust said that, in terms of company valuations and profitability, semiconductor companies hold eight of the top 20 market caps in technology. It is the third most profitable industry. AI is profoundly hardware limited, and it's the next gold rush. Semiconductors are essential assets in a geopolitical sea change away from globalism.

A surge of investments is underway. There are CHIPS Act(s) in various countries and regions. VCs are wading back in, as there are green shoots in deep tech and specialty funds. We have reasonable M&A and IPO opportunities for startups. Chiplets and advanced packaging can be advantageous for startups.

Semiconductor startups face daunting challenges. There is an escalating cost of innovation, including prototyping access and costs. There has also been a sustained decline of VC funding for semiconductors. Achieving product-market fit remains challenging. We also see diminished customer appetite to award design wins to startups.

More research will not lead to commercialization unless we continue to build the startup playbook. We must aggressively implement CHIPS Act investments for prototyping and startup funds with a sense of urgency. We can supplement these with existing government programs and funding streams. We need to strengthen the startup ecosystem for translation to industry.

How it all started?
In the 1990s, the foundry business model was pioneered by TSMC in Taiwan. In the 2010s, we had the Moore's Law slowdown, the rise of AI, the emergence of the Chinese threat, and pricing power. In the 2020s, pandemic chip shortages, CHIPS Act(s), restrictions on China's access, and GenAI are in action.

We have been witnessing consolidation and concentration in each segment: across chip design costs, DRAM, logic/foundry platforms, and the equipment market. Also, scaling is in trouble, as evidenced by the Moore's Law slowdown. We need a very solid roadmap for the next decade: a CMOS roadmap to <1.0 nm, along with advances in EUV lithography, advanced packaging, and backside power distribution.

Today, system companies, such as Apple, Google, Microsoft, Meta, Cisco, Huawei, along with IBM, Samsung, etc., are becoming silicon houses. China export controls and trade restrictions are stressing globalism. VC has also moved past semiconductors to software and services over the years. Investment has been around $6.5 billion, only 2.5 percent of $244.5 billion, in 2022.

VC investment and model
Venture capital investments in AI/ML have escalated. The majority have been in vertical apps. We had the first wave of domain-specific accelerators/architectures for AI, largely around edge/cloud. There have been some investments in optical/photonics, in-memory, and neuromorphic chips.

Today's VC model at a glance: the goal is to return 3-5x, or a 20-30 percent annual IRR, over the 10-year life of the fund. The fund is invested in 20-25 companies, which represent 0.1-1 percent of deal flow. A hits-driven business means we need one to three firms to return 10-100x of the investment. VCs are compensated 2 percent of the fund annually for opex, and retain 20 percent of profits (carry). Each startup funding round is led by a new VC that sets the valuation and investing terms for others. For existing investors, exercising pro-rata rights is key. VCs raise follow-up funds based on the track record of prior funds.

Silicon Catalyst role over the years.

The VC model dictates where investments are made, and why semiconductors struggle. Investments in semiconductors are less attractive compared to software and services: higher capital is required, with a longer time to revenue ramp, higher innovation failure rates, longer time to liquidity, and lower returns.

Semiconductors require extensive and specific due diligence, a skill that has mostly atrophied. Product-market fit is hard to predict based on early measures of traction and adoption. Incubator and accelerator services have helped startups in other arenas, apart from semiconductors. He said the Silicon Catalyst accelerator model is tuned to semiconductor startup needs, and Silicon Catalyst services are available from the industry's ecosystem.

What’s coming up?
Within semiconductors, materials/process changes, new materials and devices, new equipment and processes, and EDA for emerging technologies are coming up in the future.

For substrates, we have SiC, silicon-on-insulator, GaN, compound semiconductors, etc. For wafer fabs, there are patternable materials, planarization materials, gases, cleaning solutions, etc. Device performance covers 2D semiconductors, graphene, diamond, ferroelectrics, spintronics, etc. Interconnects cover metals, metal oxides, metal barriers, carbon nanotubes, isolation materials, dielectrics, etc. Packaging materials include solders, ceramics, encapsulants, thermal management materials, insulators, etc.

CHIPS Act
The CHIPS and Science Act was signed into law in August 2022. The innovation gap concerns the CHIPS Act R&D provisions. Gaps exist in prototyping at scale, the scale-up business model, startup funding, and government-agency coordination.

The CHIPS Act Industrial Advisory Committee (IAC) was also set up later. The IAC's recommendations on R&D gaps include:

  • Establish easily accessible prototyping capabilities in multiple facilities and enact the ability to rapidly try out CMOS+X at a scale that is relevant to industry.
  • Create a semiverse digital twin.
  • Establish a chiplets ecosystem and 3D heterogeneous integration platform for chiplet innovation and advanced packaging.
  • Build an accessible platform for chip design and enable new EDA tools that treat 3D (monolithic or stacked) as an intrinsic assumption.
  • Create a nurturing ecosystem for promising startups.

CHIPStart UK is an example of a fast-moving government-led initiative. Last year, 11 startups were admitted to a 9-month program. In Feb. 2024, there was a call for second-cohort applications.

Recommendations
It is recommended that the USA execute what has been authorized and appropriated under the CHIPS Act. Accelerate access to affordable prototyping capabilities for startups through the various CHIPS Act initiatives. This includes the NSTC for silicon, the NAPMP for packaging, and Manufacturing USA for the digital twin, plus the DoD Commons (Hubs) for "lab to fab"; eight regional hubs were launched in Sept. 2023. We also have to implement the NSTC's Innovation Fund at a minimum of $0.5B, consistent with IAC and SCSP recommendations.

We can enhance existing SBIR/STTR and DIU programs with a fast-track entrepreneur lane to 3x funding across NSF/DoE/DARPA/DoD/NIH. Leverage ongoing government initiatives by ensuring that startup investment and procurement are included (e.g., DoD NDIS (National Defense Industrial Strategy) and SBICCT (Small Business Investment Company Critical Technologies)), along with DoE Office of Science (BES) and AMO funding and loan programs.

We can complement all this by attracting further private investment. Increase corporate VC (CVC) investments by 2x to provide signals to VC for early-stage startups with innovative technologies, especially in materials, metrology, processes and EDA.

We must commission the OSTP to establish means of collaboration with allied nations’ CHIPS Acts and ensure coordination across government agencies on initiatives that support startups. Increase the number of “hard tech” and specialty fund VCs by identifying and addressing gaps in incentives and policies via a neutral technology-based organization (e.g., MITRE, SRI, CoC – Council on Competitiveness). Enhance the capital gains provisions for entrepreneurs and investors that have long liquidity timelines (QSBS – Qualified Small Business Stock for capital gains).

Regarding EDA startups, they face similar problems to semiconductor companies. EDA historically has had significant startup and M&A activity. That has been a significant contributor to how the EDA industry has grown. There are major opportunities in chiplets and advanced packaging enablement, moving forward, and the use of AI to improve designer productivity.

Regarding agencies' response to targeted and augmented SBIRs, he added that the DoD and DARPA have modified and offered funding along these lines.

CHIPS R&D workshop addresses digital twin data interoperability standards


Digital twins in manufacturing enable proactive decision-making, predictive maintenance, scenario testing, collaboration among stakeholders, etc. This workshop focused on standards needs for a specific use case: application of a digital twin for manufacturing in the chiplet-packaging module.

CHIPS R&D, USA, organized a conference today. Participants discussed the potential for digital twin technologies to drive progress in the semiconductor and microelectronics industry. They looked at the role of data interoperability standards for digital twins in semiconductor manufacturing ecosystem.

Factors to be considered in identifying standards priorities include potential for broad impact, feasibility for accelerated development, and suitability for various standards development channels, including through alliances, incubators and accelerators, and standards setting organizations.

Eric Forsythe.

CHIPS Manufacturing USA
Eric Forsythe, CHIPS R&D, provided an introduction to CHIPS Manufacturing USA. We have a funding opportunity and context. CHIPS Manufacturing USA falls under the CHIPS Act. We will try and solve problems, specifically for the digital twin. We have a workforce initiative going on, as well. We are focused on workforce development. Within the R&D portfolio, we have Natcast, NAPMP, CHIPS Manufacturing USA, with 17 institutes across the network, and CHIPS metrology program.

Manufacturing USA's purpose is to accelerate discovery to US production. We are creating an effective collaboration for applied industry research to bridge the gap from discovery to production. The process spans basic research, proof of concept, production in the lab, capacity to produce prototypes, capacity in the production environment, and demonstration of production rates.

We have a minimum NIST commitment of ~$200 million over a five-year period. We are analyzing RFI responses, industry feedback, etc., for digital twins. The objectives include reducing the time and cost of chip development and manufacturing, accelerating adoption of semiconductor manufacturing innovations, and increasing access to semiconductor manufacturing training. We have established shared resource capabilities, and will competitively fund industry-led technical and workforce development projects. We have a digital framework for interoperable data, shared and validated data, etc. We are also creating a shared marketplace of digital twin models.

Digital twins in semiconductor manufacturing standardization
There was a panel discussion on defining the landscape, scope, and focus of digital twins in semiconductor manufacturing standardization efforts.

Kemaljeet Ghotra.

Kemaljeet Ghotra, Enterprise Data Strategist, PDF Solutions, said digital twin is a virtual representation of the physical world that is capable of producing intelligent feedback with simulation, emulation, data analytics, and modelling.

We are now developing virtual processes, tools and devices, and the virtual fab. Data is the horizontal across all of them. We have grand challenges such as generating data, creating models, and sharing and using data, with security around it. Digital twin framework requirements include DT re-usability, interoperability, validation and verification, maintainability, capability, extensibility, accuracy, security, provenance, hierarchical relations, historian model life cycle, etc.

We need an operationally focused digital twin for the extended semiconductor supply chain, bringing in end-to-end traceability for products. The operational DT allows for centralized management of a globally distributed supply chain, built on PDF's Exensio platform capabilities. We need to allow sharing of data in a much more secure environment.

PDF has AI models for automation and real-time insights for the DT. We have added fault detection and classification, predictive maintenance, virtual metrology and sensing, and a fab predictive model in the PDF solution. The operational DT leverages and expands PDF's existing solutions and market presence. The focus is to move off different proprietary data types so they can talk to each other.

James Moyne.

James Moyne, Research Scientist, University of Michigan, stated that the scope of DT is across the manufacturing ecosystem. We have existing DT solutions. New ones are emerging from improved ecosystem integration, and solution integration. We also have improved reuse of solutions.

Enabling a collaborative DT environment across the industry requires agreement on specs for the DT and the DT framework. A DT is a purpose-driven digital replica of a physical asset, process, system, or product. It quantifies prediction and prediction accuracy. The DT framework involves aggregation and generalization examples.

We need to understand the requirements driving the DT and DT framework definition. We have already done a lot of work to identify requirements. We have a path forward for getting results into industry practice, such as the International Roadmap for Devices and Systems (IRDS), SEMI, and other standards organizations.

Ben Davaji.

Ben Davaji, Asst. Prof., Northeastern University, stated that development of targeted domain-specific DTs could be more efficient. DT for semiconductor manufacturing includes manufacturing process — such as drifts, aging, tool PM, diagnostics, etc. We can accelerate process development and characterization for manufacturing equipment. We can develop new process equipment and reduce evaluation times.

We can enable fast adoption of novel and emerging materials and substrates. We can do innovation in process design to enable novel device architectures. We can enable accelerated PDK development, etc.

DT for nanofabrication involves DUV lithography process and plasma etch process as examples. He talked about DNN-enhanced virtual metrology. We can have minimum viable DT with data standards, quantitative and multimodal data, TCAD and EDA to generate large data sets and calibrate using experimental data.
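
To make the virtual-metrology idea concrete in a simplified way (this is a generic sketch with synthetic data, not the DNN models or fab data discussed in the talk), a regressor can be trained to predict a metrology value such as etch depth from summarized equipment sensor signals, so that not every wafer has to be physically measured:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))      # 8 summarized sensor signals per wafer (synthetic)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=500)  # synthetic "etch depth"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out wafers:", round(model.score(X_te, y_te), 3))
```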

We can have DT black box from tool manufacturer and material suppliers. We can develop process and test datasets. We can have computational infrastructure to support secure computing and federated learning. We can also have an open environment for the integration of DTs, enabling interconnections.

Serge Leef.

Serge Leef, Head of Secure Microelectronics, Microsoft, said we have been witnessing the convergence of the electronic and physical worlds. DT was previously limited to chip-level modeling and simulation. We are seeing a computing continuum up to 2030.

Modern systems are domain specific, highly heterogeneous, distributed over networks, and highly interactive with the physical world, and everything really has to work together. Physical prototyping for complex systems is a huge task; typically, only one or two prototypes can get built. We also need DT simulation, with its heterogeneity challenge.

We now need to execute meaningful scenarios at near-real-time speed at near-zero modelling cost to gain actionable insights. Microsoft has developed vision for automotive and aerospace DTs. There are some walls between disciplines that need to be broken down.

Cloud-based architecture is leveraging speculative parallelism. We need to use ML to train reduced order models on real-world data. We have standards opportunities for simulation backplanes, modeling interfaces, and testing frameworks.

Gurtej Sandhu.

Gurtej Sandhu, Principal Fellow and CVP, Micron Technology, noted we have an end-to-end Si virtual model. A DT of a chip fab is a virtual model of the entire process flow to accelerate technology development and ramp inline and packaging yield.

Cost of developing chips is increasing exponentially. Tools are needed to make more informed decisions and build faster process flows. Successful collaborations require multi-disciplinary collaboration across the entire framework of chip building discipline, structure and materials, etc. Achieving this requires breakthroughs in multi-scale modelling. We also need partnerships among chip makers, tool providers, etc.

We have the fab technology co-optimization (FTCO) framework. We can have DT models on the top. It can be followed by metrology, process, tools, efficiency, and partners. A module-level co-optimization requires over 20 process steps/modules. A typical fab co-optimization requires over 1,000 steps. The DT platform is a virtual test bed. Members can develop tools/models, access data for testing/validation, and deliver DT solutions.

Victor Zhirnov.

Victor Zhirnov, Chief Scientist, Semiconductor Research Corp. (SRC), talked about the DT for microelectronics. SRC is creating the CHIPS Manufacturing USA Institute, with SEMI partnering with SRC. A DT should have plug-and-play capabilities; it is a tool for rapid innovation enablement. The MAPT plan calls for a DT infrastructure.

We need to look at data standards that can be applicable to electronics manufacturing. We are quite familiar with design automation, modelling and simulation, and industrial automation.

Standards must be set to enable the interchange of materials and 3D data between the various entities involved in SiP design and manufacturing. Key drivers include the multi-physics nature of design. DT interoperability is currently a problem, and we need a universal standard covering the use cases.

CHIPS R&D semiconductor supply chain trust becomes essential!

Posted on

CHIPS R&D Semiconductor Supply Chain Trust & Assurance Data Standards Workshop started today in Rockville, Maryland, USA.

As semiconductor products are manufactured, key transactions are captured as data in different digital twin ecosystem modules (e.g., raw materials acquisition, design, layout, tape-out, mask making, chip fabrication, testing, packaging, and assembly). Digital twin modules must be linked together to allow backward traceability across these ecosystems, and to enable access to accumulated supply chain data for traceability, authentication, and provenance tracking.

Yaw Obeng, CHIPS R&D, welcomed the audience. He also introduced the Workshop Planning Committee.

Carl McCants.

Addressing supply chain issues
Carl McCants, Special Assistant to the DARPA Director, presented the opening keynote on DARPA's history in semiconductor supply chain trust and assurance standards. DARPA has been focused on addressing supply chain issues. We had a grand challenge in 2005, where we wanted autonomous cars. We failed back then.

DARPA has been creating breakthrough, paradigm-shifting solutions, accepting and managing risks as well. Concern with the globalized microelectronics ecosystem has also been addressed within the DoD since 2000. The DARPA TRUST and IRIS programs developed techniques for validating design and process integration before distribution.

He also talked about EDA and testing, and whether the tools were doing what they were expected to do. IRIS focused on what is happening in the manufacturing process. DARPA SHIELD will develop the capability to provide 100 percent assurance against certain known threat modes quickly, and at any step of the supply chain.

The semiconductor manufacturing supply chain needs to address trust and assurance challenges. We need to maintain the confidentiality of the technology delivered, protect the IP, and have continuous and sustained access to the technology needed. There are challenges around data and definitions, so that a semiconductor product can be delivered without compromise to its integrity, trustworthiness, and authenticity.

For IP protection, we need to incorporate IP into a design, and verify and validate it. We need to protect the logic design and simulation of the chip. We also need to be able to transmit and store the functional test programs at the wafer fab facility, and at the assembly, packaging, and testing (APT) facility. We also have to aggregate package-level test data in the APT facility, and take that to the customer.

Eric Forsythe.

Model and simulate semiconductor supply chain
Eric Forsythe, Technical Director, CHIPS R&D, introduced CHIPS Manufacturing USA. The grand challenge is to seamlessly model and simulate the entire semiconductor supply chain. We need to create an effective collaboration environment for applied industry research to bridge the gap from discovery to production.

The CHIPS Manufacturing USA Institute is meeting the digital twin institute objectives: reduce the time and cost of chip development and manufacturing, accelerate the adoption of semiconductor manufacturing initiatives, etc.

The top three areas to look at were: data that is reliable, secure, and accessible; workforce development; and model development and validation. These are the big challenges for developing digital twin technologies for semiconductor manufacturing.

Electronics supply chain digital security standardization
There was a panel discussion on landscape, scope, and focus of electronics supply chain digital security standardization efforts. The participants were Gretchen Greene, NIST, Chris Ritter, Idaho National Lab, and Christophe Bégué, PDF Solutions.

Gretchen Greene.

Gretchen Greene, Group Leader, Data Science Group, NIST, said we are currently building trusted chip environments (TCE). We are modernizing the ecosystem and leveraging digital technology. Security and interoperability remain the main issues.

In the CHIPS supply value chain, there are design, fabrication, package, assembly and test, and commercial sectors. These are addressed by players in multi-physics and modelling, IP, open source, manufacturing process and tooling, materials and resources, photonics, microelectronics, etc.

The granularity of the semiconductor supply chain is at the heart of the standards challenge. Interoperability at scale supporting coarse-grained digital assets has been inconsistent, and even non-existent. We have the opportunity to impact the industry and open several windows of commercial opportunity for marketplace innovation.

We are also standardizing protocols, such as information sharing, smart connections, etc. This spans protocol specs, payload types, synchronization of process flows, status, managing authorities, verification/validation and resolver services, and registry/curation for monitoring, nodes/hubs, etc.
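As a rough illustration only, a minimal information-sharing payload might carry the elements listed above; the field names below are hypothetical assumptions and do not reproduce any NIST or CHIPS specification.

```python
# Hypothetical sketch of a minimal information-sharing payload reflecting the
# elements listed above (spec version, payload type, status, managing
# authority). Field names are illustrative only, not a published spec.
from dataclasses import asdict, dataclass, field
import json

@dataclass
class ExchangePayload:
    spec_version: str
    payload_type: str          # e.g., "process-flow-sync", "status-update"
    managing_authority: str    # party responsible for this record
    status: str                # e.g., "draft", "verified", "released"
    body: dict = field(default_factory=dict)

msg = ExchangePayload(
    spec_version="0.1",
    payload_type="status-update",
    managing_authority="example-foundry",
    status="verified",
    body={"lot_id": "LOT240518", "step": "final test"},
)
print(json.dumps(asdict(msg), indent=2))
```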

We are also developing a knowledge network via the CHIPS exchange. Semiconductor knowledge can be shared across digital assets — taxonomies, machine-actionable data, analytics, visualization, etc.

Goals include: federate across the supply chain through a digital architecture connecting generations, standards, TREs, and stakeholders; strengthen exchange, reuse, and interoperability; and enable discovery and access, etc.

Chris Ritter.

Digital engineering mission
Chris Ritter, Idaho National Lab, said that we have a digital engineering mission. Digital engineering transforms the way we design and operate energy assets. It is an innovator and key success driver across all initiatives, and a key enabler for the net-zero program.

With DE, design links facility information, and operations enable the digital twin. He talked about Deep Lynx and its virtual and physical platforms. The open-source Deep Lynx model is a centralized digital twin data warehouse and live event system, with ontological and time-series storage of digital twin data streams. The event system can push and pull data in real time around a digital twin. It has been proven in the operation of the MAGNET digital twin.
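As a hedged illustration of such a live event system, the sketch below pushes and pulls a time-series reading over a REST endpoint; the host, endpoint paths, and payload fields are hypothetical and are not the documented Deep Lynx API.

```python
# Hypothetical sketch of a live event push into a digital twin data warehouse.
# The URL, endpoint paths, and payload fields are illustrative assumptions,
# not the documented Deep Lynx API.
import datetime
import requests

DT_BASE_URL = "https://dt.example.org/api"   # placeholder host
CONTAINER_ID = "magnet-demo"                 # hypothetical twin/container name

def push_reading(sensor_id: str, value: float) -> None:
    """Send one time-series reading to the twin's event endpoint."""
    event = {
        "sensor_id": sensor_id,
        "value": value,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    resp = requests.post(f"{DT_BASE_URL}/containers/{CONTAINER_ID}/events",
                         json=event, timeout=10)
    resp.raise_for_status()

def pull_latest(sensor_id: str) -> dict:
    """Read back the most recent value stored for a sensor."""
    resp = requests.get(f"{DT_BASE_URL}/containers/{CONTAINER_ID}/timeseries/{sensor_id}/latest",
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

# push_reading("coolant_temp_C", 42.7)
# print(pull_latest("coolant_temp_C"))
```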

Idaho National Lab has an open ontology for threads and twins. The general entity model (GEM) is an extensible, upper-level ontology. It has an advanced manufacturing app, and digital twin demonstrations across lifecycle stages.

Christophe Bégué.

Supply chain traceability
Christophe Bégué, PDF Solutions, said the semiconductor market is currently looking at reliability, RMA or failures in the field, security, and regulation.

Supply chain traceability can provide fast and precise analysis of a reliability or security issue. We can enable short- and long-term containment plans to reduce cost and preserve brand. We can have assurance and preferred supply through provenance and traceability.

We need standards for single-device traceability. The SEMI E142 standard defines a data model for devices within a wafer or a complex assembly. Devices get a virtual identifier (VID) based on this model. E142 forms the basis for single-device tracking.
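As an illustration of the idea, the sketch below composes a VID-like identifier from lot, wafer, and die coordinates; the field names and format are assumptions for illustration and do not reproduce the actual SEMI E142 data model.

```python
# Hypothetical sketch of a single-device "virtual identifier" built from
# lot / wafer / die coordinates. Field names and format are illustrative
# only and do not reproduce the SEMI E142 data model.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceId:
    lot_id: str
    wafer_id: str
    die_x: int
    die_y: int

    def vid(self) -> str:
        """Compose a human-readable, unique device identifier."""
        return f"{self.lot_id}.{self.wafer_id}.X{self.die_x:03d}Y{self.die_y:03d}"

device = DeviceId(lot_id="LOT240518", wafer_id="W07", die_x=12, die_y=34)
print(device.vid())   # LOT240518.W07.X012Y034
```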

We need standards for supply chain traceability. SEMI is developing the Specification for Supply Chain Traceability using Distributed Ledger Technology (DLT), a standard proposal to record chain of custody and provenance. The standard currently defines the data and transaction model, asset lifecycle, and services.
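To show the basic idea behind ledger-based chain of custody, here is a minimal sketch of hash-linked custody records; it illustrates the concept only and is not the data or transaction model in the SEMI standard proposal.

```python
# Hypothetical sketch of chain-of-custody records linked by hashes, the core
# idea behind ledger-based traceability. Concept illustration only; not the
# SEMI DLT standard proposal's data model.
import hashlib
import json

def add_record(chain: list, owner: str, step: str, device_vid: str) -> None:
    """Append a custody record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"owner": owner, "step": step, "device_vid": device_vid, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list) -> bool:
    """Re-hash every record and check the links; tampering breaks the chain."""
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != expected_prev or rec["hash"] != recomputed:
            return False
    return True

custody = []
add_record(custody, "FabCo", "wafer fabrication", "LOT240518.W07.X012Y034")
add_record(custody, "OSAT Inc", "assembly and test", "LOT240518.W07.X012Y034")
print(verify(custody))   # True
```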