
To be, or not to be fault tolerant! Or fault intolerant?


Semiconductors is a tough business, and definitely not for the faint-hearted, said Suman Narayan, senior VP for Semiconductors, IoT and Analytics, Cyient. If you are in DFT, you are in the insurance business. He was moderating a panel discussion on ‘fault tolerance vs. fault intolerance’.

Rubin Parekhji, senior technologist, Texas Instruments, said that a system is fault tolerant if it produces no error, and an application is fault tolerant if there is no fault it cannot tolerate. An affordable system should be fault tolerant. Which faults are important? How is hardware-software fault tolerance achieved? If not done well, it leads to bulky devices. There is a need to optimize and differentiate, and to build fault-tolerant systems using fault-intolerant building blocks.

Jais Abraham, director of engineering, Qualcomm, said that device complexity has increased 6X since 2010. There is a disproportionate increase in test cost versus the benefits of node shrinks. Are we good at fault finding? It’s our fault. Be intolerant to faults, but don’t be maniacal. Think of the entire gamut of testing. Think of the system, and not just the chip. Think of the manufacturing quality, and find remedies. Fault tolerance may mean testing just enough to meet the quality requirements of customers, who are becoming intolerant. We continue to invest in fault-tolerance architectures.

Ruchir Dixit, technical director, Mentor, felt that making a system robust is the real choice. The key is whether the machine we make is robust, because customers expect a robust, quality system. Simpler systems make up a complex system, and a successful system deals with malfunctions. There are regenerative components. The ISO 26262 standard drives robustness.

Dr Sandeep Pendharkar, engineering director, Intel, felt that there is increased usage of semiconductors in applications such as ADAS and medical devices. Functional safety (FuSa) requires unprecedented quality levels. Now, DPPM (defective parts per million) has changed to DPPB (defective parts per billion).

Achieving near-zero DPPB on the newest node is nearly impossible. Fault tolerance is the way forward. How should test flows change to comprehend all this? Should we cap the number of recoverable faults before declaring a chip unusable?

Ram Jonnavithula, VP of engineering, Tessolve, said that a pacemaker should be fault tolerant, with zero defects. Fault tolerance requires redundancy and mechanisms to detect and isolate faults. Sometimes, fault tolerance could mean reduced performance, but the system still functions.

Adit D. Singh, Prof. Electrical & Computer Engineering, Auburn University, USA, highlighted the threats to electronics reliability. These are:
* Test escapes (DPPM), especially escapes from component testing, and timing defects.
* New failures that occur during operation, including those due to aging.
* Poor system design, for which there is really no solution; there can be design errors and improper shielding.

Test diversity helps reduce costs, and design diversity helps reduce the cost of fault tolerance. Design triplicated modules independently, and avoid correlated failures.
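To make the triplication idea concrete, here is a minimal, hedged sketch of triple modular redundancy (TMR) with majority voting, the textbook way to build a fault-tolerant function out of fault-intolerant building blocks. The replica functions and the injected fault below are hypothetical examples, not anything presented on the panel.

```python
def majority_vote(a, b, c):
    """Return the value that at least two of the three replicas agree on."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("all three replicas disagree - uncorrectable fault")

def tmr(replica1, replica2, replica3):
    """Wrap three independently designed replicas behind one voted output."""
    def voted(x):
        return majority_vote(replica1(x), replica2(x), replica3(x))
    return voted

if __name__ == "__main__":
    correct = lambda x: x * x          # two healthy replicas
    faulty = lambda x: x * x + 1       # one replica suffers a fault
    square = tmr(correct, faulty, correct)
    assert square(5) == 25             # the voter masks the single fault
```

Design diversity matters here because the voter only helps if the three replicas do not fail in the same way at the same time, which is exactly the point about avoiding correlated failures.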

So, what’s it going to be? Be fault tolerant! Or, fault intolerant?


Need for end-user driven test strategy: Raja Manickam, Tessolve


At the ongoing ITC 2018 conference, Raja Manickam, founder and CEO, Tessolve, spoke on the ‘Always On Era’.

Every chip is tested. More than 10 million chips are tested every day. A chip carries millions of bits of data and also does continuous self-test. The chip is expected to be always on. Engineers look at all possible combinations, and try to solve problems quickly.

Design is supposed to be pure genius; testing, however, is the necessary evil. There is DFT, probe, final test (FT) and system-level test (SLT). We just keep on adding tests.

The players who help drive us are academia, EDA companies, fabs, and ATE vendors (who add more instruments and make the machines bigger). What matters is that the chip must work in a particular manner, all the time.

Test leadership creates an environment for test strategy and drives it. Test leadership must be given flexibility and room for innovation. The focus should be on end-user relevance.

Next-gen BIST provides M-BIST and scan compression engines on a separate DFx die. The ATE interface can exist in the DFx die. The base functional die provides power and clocks.
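As a rough illustration of what an M-BIST engine does, here is a software sketch of a March C- style memory test. A real M-BIST controller implements this sequence in hardware; the word-wide 0/1 values below simply stand in for all-zeros/all-ones data backgrounds.

```python
def march_c_minus(mem, n):
    """Run a March C- sequence over an n-word memory; return failing addresses."""
    fails = set()

    def w(addr, val):
        mem[addr] = val

    def r(addr, expect):
        if mem.get(addr) != expect:
            fails.add(addr)

    for a in range(n):           w(a, 0)           # up:   w0
    for a in range(n):           r(a, 0); w(a, 1)  # up:   r0, w1
    for a in range(n):           r(a, 1); w(a, 0)  # up:   r1, w0
    for a in reversed(range(n)): r(a, 0); w(a, 1)  # down: r0, w1
    for a in reversed(range(n)): r(a, 1); w(a, 0)  # down: r1, w0
    for a in range(n):           r(a, 0)           # up:   r0
    return sorted(fails)

if __name__ == "__main__":
    healthy_memory = {}
    assert march_c_minus(healthy_memory, 16) == []  # no failing addresses
```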

While a machine is running, less than 20 percent of the instruments are in use at any time. That’s not the best use of assets.

Factors influencing traditional ATE include loopback testing, the need for ATE to test deserialized parallel data, miniature MEMS loopback devices to improve signal integrity (SI), MEMS RF relays, and the use of FPGAs.

There are adaptive tests and predictive algorithms. The ATE could look like instrumentation and intelligence built into the load board or hardware. There could be three-dimensional handlers; the handler will go vertical. There should be an end-user driven test strategy, and the test strategy should be holistic.

Automotive electrification drives use of design IP: Dr Yervant Zorian, Synopsys


At the ongoing ITC 2018, Dr Yervant Zorian, Synopsys fellow and chief architect, delivered the keynote. He said that today, change is in automotive and IoT. There were 15 competitors in 2004, which has come down to six in autonomous driving. The connected car segment is growing at 25 percent, and 92 percent of cars built in 2020 should be connected.

As of 2017, a car has around 100 million lines of code, moving on to 300 million by 2025. The emerging technology trends are:
* Semi-autonomous /autonomous vehicles
* V2V (Vehicle to vehicle)
* V2I (Vehicle to infrastructure)
* Cloud connectivity
* Security
* Target users across all age groups

Automotive apps need different SoC architectures. There are high-end ADAS, infotainment and MCUs.

ADAS is among the fastest-growing automotive applications. Sensor fusion involves massive amounts of data, and we act fast after interpreting that data. The more data we provide to machine learning, the better the results we can get. AI is another growing area; AI chips are expected to be worth $38.6 billion by 2025.

Automotive-grade IP reduces risk and increases safety. Automotive test phases are production test, power-on self-test and in-field testing. Innovations benefit advanced designs on established nodes. We are now seeing EPPM below 1.

Automotive electrification drives the use of design IP, and the amount and variety of IP is increasing. ASIL D/D is now ready for AEC-Q100 testing. Each fin has to be accurate, to reduce repair. There is on-chip self-repair as well.

On the logic side, ATPG is there for autonomous testing. The SoC-level hierarchical flow is automated. Fault diagnosis is done via the DDR PHY. Automotive-grade IP for functional safety (FS) reduces risk.

There is also the automotive safety integrity level (ASIL). The ASIL level of the final product depends on the implementation. Reliability reduces risk. Mission-critical automotive ICs need ECC. Adding RAS (reliability, availability and serviceability) is also important.
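As a hedged illustration of the kind of protection ECC provides, here is a minimal Hamming(7,4) single-error-correcting sketch in software. Production automotive ICs typically use wider SECDED codes implemented in hardware, so this only shows the principle.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)   # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1                # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]

if __name__ == "__main__":
    data = [1, 0, 1, 1]
    word = hamming74_encode(data)
    word[5] ^= 1                            # inject a single-bit fault
    assert hamming74_decode(word) == data   # the error is corrected
```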

Automotive test phases include the production test phase, the power-on phase and the mission-mode phase.

Power-on/off and periodic self-test for mission mode are available. Periodic testing can be done block by block. We need to pay attention to the safety manager. Also, M-BIST implementation is a must for functional safety. There is the Synopsys M-BIST.
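A minimal sketch, assuming a purely hypothetical set of blocks and a run_bist() hook (not the Synopsys implementation), of how block-by-block periodic self-test in mission mode might be scheduled: each block is tested in its own time slot so that the whole chip is covered within the required diagnostic interval.

```python
import itertools
import time

BLOCKS = ["cpu_l1", "cpu_l2", "dsp_mem", "can_ctrl"]    # hypothetical blocks

def run_bist(block):
    """Stand-in for kicking off the hardware BIST engine of one block."""
    return True                                          # pretend the block passed

def mission_mode_self_test(slot_seconds=0.01, rounds=2):
    """Cycle through the blocks, one per time slot, and flag any failure."""
    schedule = itertools.islice(itertools.cycle(BLOCKS), rounds * len(BLOCKS))
    for block in schedule:
        if not run_bist(block):
            raise RuntimeError(f"in-field BIST failed on {block}")
        time.sleep(slot_seconds)                         # yield back to the application

if __name__ == "__main__":
    mission_mode_self_test()
```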

Security is very critical once we are connecting cars. Secure hardware is the root of trust. tRoot provides a chipset that cannot be tampered with. There must be detection and protection. Safety, quality and security all play important roles.

Evolving to a world of smart everything: Dr. Aart de Geus, Synopsys


Dr. Aart de Geus, chairman and co-CEO, Synopsys, graced the ongoing SNUG India 2018, being held in Bangalore.

He delivered the keynote titled, “At the Heart of Impact,” discussing how the demand for computation power is changing virtually all industries — from automotive to healthcare to financial services.

Silicon has arrived at a state that is now making software possible that we could only dream of years ago. The world is moving into its next age: the age of smart everything. If you look at chips, there’s only one word, and that’s Moore’s Law. This is easily the most rapid sustained exponential in the existence of mankind.

The push to smaller geometries has been predicted as ending so many times. Moore’s Law is supposedly dead; Moore’s Law is now more expensive, but it’s certainly not dead! We also have the opportunity to count the first 500 designs in each technology. FinFETs were impossible and too hard to do, but here we are! The most advanced chips have all moved to this!

Let’s propose a thinking model of how to look at what EDA really is. The first question is: can you capture it on a computer? If you can capture it, can you actually model its behavior? If you can model it, can you simulate? If you can simulate, can you analyze a result? If you can understand the analysis, can you optimize, and if you can optimize, can you ultimately automate?

Digital twin
Digital simulation first, and then synthesis, completely changed the productivity picture in our field. The productivity push has continued. The notion of IP re-use is itself not new: the transistor became the gate, which became the register, then a small processor, and ultimately big building blocks. The notion of a digital twin is essential.

AI is interesting, because it somewhat parallels the history of EDA. You have to bring the data together, collect it, and structure it so that it’s usable. We have a lot of data in our programs. In the case of AI, it’s called learning, rather than simulating. That learning is then interpreted actively on edge devices, which creates some limited field of action, and the long-term goal is autonomous behavior in many different fields. We have seen some remarkable advances that may initially not have gone quite as far as what we’re all familiar with, but the notion of a digital twin is essential.

There are four forces to understand. The first two actually act in tandem. More computation allows for more machine learning, and more machine learning wants one thing: even more machine learning, which will push for more computation. This will drive the semiconductor industry for the next few decades.

The consumption of silicon will increase significantly. It is accentuated by one more thing: more data. Big data is a common term at this point. If you look at the number of sensors being put all over the world and into all kinds of products, the amount of data becoming available is so large that it is difficult to even make sense of it, unless you apply the very machine-learning techniques that are now evolving very rapidly.

You need quantum physics to predict. It’s called ab initio, meaning ‘from the start’: fundamentally going to the very basics of physics that are applicable at the atomic and molecular level. This technology is at Synopsys. We have also added super-fine meshing to be able to describe the devices, and the ability to extract parameters.

We have invested in and are working with a number of partners on DTCO or design technology co-optimization. It has one objective. What do I tweak in the technology so that the design gets better?

If you look at the car, it is a source of data like you haven’t seen before: cameras, multiple radars, lidar, ultrasonic sensors, and others. These cars are going to generate about four terabytes of data per day. This is where the cloud and AI/machine learning will continue to blossom and grow at an unbelievable speed.
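A quick back-of-the-envelope sketch (my own arithmetic, not from the keynote) of what roughly four terabytes per day per car means as a sustained data rate:

```python
TB_PER_DAY = 4                                    # figure quoted in the keynote
bytes_per_day = TB_PER_DAY * 1e12                 # using decimal terabytes
seconds_per_day = 24 * 3600
rate_mb_per_s = bytes_per_day / seconds_per_day / 1e6
print(f"~{rate_mb_per_s:.0f} MB/s sustained")     # roughly 46 MB/s per car
```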

Simulation essential
Simulation is going to be essential in the digital world. It will take many more dimensions. We are very much focused on the digital electronic side of things.

What made it all work is the fusion of three things: logic optimization, which was the heart of it, but also, in parallel, integrated timing, and the notion of libraries.

On the electronic side, we have all the necessities for modeling and simulation. We have invested in and grown the capability to do device simulation for photonic devices. We have even incorporated RedHawk inside IC Compiler II.

Now, EUV is coming into the picture and many other materials accentuate this. We have invested massively by working closely with all of the most advanced fabs in the world.

The vision processor is itself a fusion of multiple things, and Synopsys has put quite a bit of effort into it as a building block. Our vision processor has a CPU that can have up to four cores, each made up of a scalar CPU and a vector DSP. Around that is a convolutional neural network engine that helps do a number of the tasks. It brings the interpretation of the neural-network learning data into a common implementation platform, with all the software that makes this possible.

Safety and cars have already been a theme for us. Automotive IP has specific needs, and there is a substantial number of standards around that.

Energy distribution
Let’s also look at the notion of energy distribution. Are you going to put big cables to filling stations? How do you get the energy there? Is it local energy generation, and, by the way, can we use that to deal with some of the clean-energy needs? It’s quite remarkable to find that Tesla, which is aiming at being an electric car company while increasingly going autonomous, also has a battery business, an energy distribution business through its recharging stations, and an energy generation business through owning a solar-cell company.

Synopsys has invested in a number of things around security, on a simple premise: all systems have inputs and outputs. These very quickly turn into digital data flowing through electronic systems that are really systems on a chip, compute systems running software; and the software itself is algorithms and proprietary code, plus, more dangerously, third-party code and open-source code with vulnerabilities.

Today, networking and the cloud form just one big computational continuum, which is now refining itself and becoming savvy. Be ready for the next phase! It has absolutely started and is upon us. The age of smart everything is happening at an incredibly fast clip and will have a very big impact, at a human evolutionary scale.

All the market segments are digitizing themselves. The semiconductor ecosystem is at the center of making this happen. This transformation is changing and challenging the semiconductor industry as we evolve to a world of smart everything that is networked, mobile, and also under increased pressure to become more secure. Synopsys is both humbled and privileged to be in the midst of this transformation.

Dr. Pradip K. Dutta, group VP and MD of Synopsys India and Sri Lanka, said: “As we move closer to completing two decades of SNUG in India, we are committed to making it the leading platform for the electronic design engineering community to connect with each other and learn to innovate from silicon to software. As the semiconductor industry and the ecosystem evolve with the accelerating pace of digital transformation, the center of gravity is moving to the intersection of hardware and software.”

ITC India to address design, test, and yield challenges


The forthcoming International Test Conference (ITC) will be held on July 22nd-24th, 2018, at the Radisson Blu Hotel in Marathalli, Bangalore.

I must thank Navin Bishnoi, General Chair, ITC India, and director, ASIC India Design Center, GLOBALFOUNDRIES, and Veeresh Shetty, senior marketing manager, Mentor, for apprising me of developments.

Now in its second edition, the conference is the world’s premier event dedicated to the electronic test of devices, boards and systems. At ITC India, design, test, and yield professionals can confront challenges faced by the industry, and learn how these challenges are being addressed by the combined efforts of academia, design tool and equipment suppliers, designers, and test engineers. ITC India is being run under the guidance of ITC USA, and is supported by the IEEE Bangalore Section and IESA.

Let’s look at the test challenges that the conference seeks to address. Navin Bishnoi said: “DFT, test and reliability domains are seeing a huge focus, with the need for standard test practices for a variety of applications across communication, automotive, computing and industrial.

“In addition, the cost of implementation and testing continues to be challenged, asking designers to look at innovative ways to optimize test without impacting quality. ITC India brings the best minds from academia, research and industry to share best practices that enable standard DFT/test practices for a variety of applications, with reduced cost and high quality.

“The conference covers sessions on emerging test needs for topics such as: artificial intelligence, automotive and IoT, hardware security, system test, analog and mixed signal test, yield learning, test analytics, test methodology, benchmarks, test standards, memory and 3D test, diagnosis, DFT architectures, functional- and software-based tests.”

Next, what is the focus on DFT architecture and DFT strategy in automotive and other devices with low-cost testing requirements?

He added: “Today’s automotive safety-critical chips need multiple in-system self-test modes, such as power-on self-test and repair, periodic in-field self-test during mission mode, advanced error correction solutions, redundancy, etc. The conference has numerous presentations on summarizing the implications of automotive test, reliability and functional safety on all aspects of the SoC lifecycle, while accelerating the time-to-market for automotive SoCs.

“There is a strong focus on understanding the increasing use of system-level tests to screen smartphone and notebook processors for manufacturing defects by taking an in-depth look at the limitations of state-of-the-art scan test methodology. In addition, there is continuous study in the fields of DFT, diagnosis, yield learning, and root cause analysis, which use machine learning algorithms for solving various problems.”

Trends in modelling
Let us also look at the trends in modelling and the simulation of defects in analog circuits, and their applications, that the conference seeks to address.

Bishnoi said that digital circuits have by now evolved standardized fault modeling and simulation. For analog circuits, however, work is in progress on new methods for modeling and simulating different types of faults using a mixed-signal fault-injection methodology.

“Modeling defects in analog circuits uses transient analyses that leverage different methods to inject faults. This is critical for today’s use-case applications, like automotive, sensors, and industrial, which have significant analog content in the SoC. One of the trends that will be addressed at the conference is layout-based fault modelling, which is in fact a statistical analysis of process defects.”

Now, to the directions in advanced packaging technology. What’s the road ahead?

Bishnoi added: “Packaging technology has exploded in complexity in recent times due to the need for stacked dies, which involves changes in processes, materials and equipment, as well as in SoC implementation and sign-off. Advanced packaging enables small form-factor chips with high-speed functionality for the consumer market.”

And, how are challenges in analog loopback testing for RF transceivers being addressed?

He said: “The main challenge in the implementation of loopback testing for RF transceivers is distinguishing the non-linearity effects of Rx and Tx, the performance of channels during parallel testing, as well as coupling effects. Various test solutions will be discussed during the conference to address the challenges of analog loopback testing of RF transceivers. Solutions employing BiST techniques to achieve a quick TAT (turnaround time) during manufacturing test will also be discussed.”

For those unaware, BiST, or built-in self-test, is a design technique in which parts of a circuit are used to test the circuit itself.
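To make that definition a little more concrete, here is a rough, illustrative sketch of the classic logic-BIST structure: an on-chip LFSR generates pseudo-random test patterns and a MISR compacts the circuit’s responses into a signature that is compared against a known-good value. The widths, taps and stand-in “circuit under test” below are arbitrary choices for the example, not any particular product’s implementation.

```python
def lfsr_patterns(seed, count=8, width=16, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR producing pseudo-random test patterns."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> (width - t)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def misr_signature(responses, width=16, taps=(16, 14, 13, 11)):
    """Compact a stream of circuit responses into a single signature word."""
    sig = 0
    for resp in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> (width - t)) & 1
        sig = (((sig << 1) | fb) ^ resp) & ((1 << width) - 1)
    return sig

if __name__ == "__main__":
    cut = lambda p: (p ^ 0xA5A5) & 0xFFFF              # stand-in circuit under test
    responses = [cut(p) for p in lfsr_patterns(0xACE1)]
    golden = misr_signature(responses)                 # signature of a healthy circuit
    assert misr_signature(responses) == golden         # test passes: signatures match
```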

Finally, which edition of the conference is this? Are we going to see regular ones? Bishnoi noted: “This is the second edition of the conference. We went through rigorous analysis and discussions with global leaders about the frequency and venue of the conference. It was decided to hold it annually (given the amount of growth in the test/reliability space), and to keep it in Bangalore for the first five years, before we review whether we should take it to other cities in India.

“The conference includes four keynotes from visionary leaders from Synopsys, Tessolve, Intel and Mentor Graphics, an exciting panel discussion on fault tolerance vs. fault intolerance, as well as a variety of technical sessions and exhibits/demos from sponsors. It also has a dedicated day (Sunday) for tutorials on six topics covering automotive, analog test, IEEE standards, machine learning in test, system-level test and security.”

I will be present at ITC India 2018 in Bangalore, and look forward to meeting many of you, the attendees, as well! 🙂

Cadence @ DAC 2018: Electronic systems and semiconductor design for cloud


Cadence Design Systems made a host of announcements at the ongoing 55th Design Automation Conference, at Moscone Center, San Francisco, USA. These are:

  • Cadence delivers the first broad cloud portfolio for the development of electronic systems and semiconductors.
  • Cadence collaborates with Amazon Web Services (AWS) to deliver electronic systems and semiconductor design for the cloud.
  • Cadence and Microsoft collaborate to facilitate semiconductor and system design on the Microsoft Azure Cloud platform.
  • Cadence collaborates with Google Cloud to enable cloud-based development of electronic systems and semiconductors.
  • Cadence launches Liberate Trio Characterization Suite employing machine learning and cloud optimizations.

Very interesting!

I caught up with Carl Siva, VP of Information Technology, Cloud, Cadence Design Systems Inc. and Craig Johnson, VP of Cloud Business Development, Cadence Design Systems Inc., to find out more.

First, why has Cadence chosen to go the cloud way now?

Carl Siva said: “Designs are continuing to become more complex, process nodes are getting smaller, and the volume of chip design data is increasing exponentially, creating peak compute needs. Traditional data center models, where the data center is company-owned, -housed and -managed, cannot support peak needs.

“Cadence has been engaged with cloud vendors and customers on this topic for several years. Our decision to launch our portfolio now is based on the increased customer interest and their growing confidence in the security of the cloud. Cadence’s cloud approach has been proven internally. So, it was logical to draw upon that extensive experience to drive customer adoption, so that they can achieve the productivity, scalability, security and flexibility benefits of the cloud.”

The Cadence Cloud portfolio includes customer-managed and Cadence-managed cloud environments. What’s in there and how are they different?

Craig Johnson said: “The Cadence Cloud portfolio offerings are targeted toward small, mid-sized and enterprise systems and semiconductor companies, providing improved productivity, scalability, security and flexibility benefits. For example, the platform can enable customers to gain access to dedicated compute resources in as little as five minutes.

“With the Cadence Cloud-Hosted Design Solution (the Cadence-managed option), small companies benefit because it eliminates the need for a costly internal infrastructure and the overhead of a large computer-aided design (CAD) and IT staff, allowing these companies to focus on chip design innovation.

“Mid- to large-sized companies also benefit, because the Cadence-managed environment allows them to move an entire design project or team to the cloud to offload the strain on their on-premise environments. It also includes the Palladium Cloud, a managed and scalable emulation environment for customers who want to use our hardware without the responsibilities of equipment installation or maintenance.

“With the Cadence Cloud Passport model (the customer-managed option), mid- to large-sized companies that have the means to manage their own cloud infrastructure internally and small companies that are cloud-savvy can use Cadence software tools via their current IaaS provider.

“The Cadence Cloud Passport model includes the Cloud-ready Cadence software tools that have been tested for use in the cloud, a cloud-based license server for high reliability, and access to Cadence software through familiar download mechanisms.

Start-ups sizzle @ NetApp Excellerator day!


Last week, there was a session with the start-ups of the NetApp Excellerator Program at the NetApp Bangalore Campus, ITPL Main Road, Hoodi, Bangalore.

Ajeya Motaganahalli, senior director, Engineering Programs, and leader of NetApp Excellerator, NetApp India, said: “We received 450 applications in 2018. Earlier, we received 250+ applications in 2017. Out of the 450, 21 start-ups were selected to pitch. We had boot camps with 11 of them, and narrowed this down to six start-ups.

“June 28 is the demo day where these start-ups will demonstrate before the investors. There is no IP contamination. We are also opening the applications for the third batch of cohorts on June 28.”

Now, let’s have a look at the various start-ups and their wares.

Anlyz: It provides a next-generation security product, with granularity and visibility into the enterprise threat landscape, using machine/deep learning (ML) and artificial intelligence (AI) to address enterprise cybersecurity needs.

Gartner forecasts that by 2019, total enterprise spending on security outsourcing services will be 75 percent of the spending on security software and hardware products, up from 63 percent in 2016.

Also, the worldwide security spending will reach $96 billion in 2018, up 8 percent from 2017. Enterprise security budgets are also shifting towards detection and response, and this trend will drive security market growth during the next five years.

Apoorv Garg, co-founder, said it provides a cognitive and coherent cybersecurity product. The problems it addresses are:
* The ability to detect is not supported with the surrounding data path.
* Delayed analysis impacts the overall capability to combat threats.
* Today’s products are not very agile.

He added: “There is a need for an inclusive solution. Enterprise security budgets are shifting toward detection and response. We also offer one-month PoC (proof of concept).”

The Anlyz PoC allows an organization to quantify its security posture. Scalable features enable the CISO team to take quick decisions, with tremendous visibility and with the complete data path to identify, analyze and detect basic and advanced security threats to the organization.

The NetApp Excellerator is adding wings to Anlyz.

ArchSaber: It provides intelligent infrastructure analytics. It automates the diagnosis and prediction of issues occurring in a large and complex IT stack, thereby offering real-time monitoring and alerting of all core and non-core infrastructure components, and advanced data science techniques to detect and fix these issues.

Arpit Jain, team leader, said the problem is how to ease incident diagnosis in a fast-paced environment. “We provide a performance analytics platform for data analytics. We enable accurate alert prediction, to prevent issues from becoming large.”

ArchSaber also provides easy knowledge sharing. It auto-documents diagnosis and post-mortems, and shares them across teams and community. The market size of IT monitoring is roughly $10 billion.

Jain added: “We are targeting high-growth SMBs such as Ola, Zomato, etc. We are enabling fast-paced development. There will also be an effective use of precious engineering resources.” It is running non-commercial beta pilots with the likes of Zomato, Lenskart, hypertrack, etc.

He said: “We leverage NetApp’s years of experience in building and evolving IT infrastructure. It is perhaps the best accelerator programme in India focused on deep-technology ideas.”