
Three trends for CIOs in 2019: Bob Gault, Extreme


Extreme Networks is focused on customer-driven networking to improve transformation, innovation, and customer experience – from the enterprise edge to the cloud – with software-driven solutions that are agile, adaptive, and secure. Now, it has announced its path to the new, agile data center. How is it different from the biggies?

Bob Gault.

Bob Gault, Chief Revenue and Services Officer, Extreme Networks, said: “Most vendors today sell closed technology stacks with domain-level visibility. This makes it difficult for customers to adopt new, software-driven approaches that drive the business forward, or to test new technologies that aren’t driven by the vendor they have aligned with.

“In contrast, Extreme provides technology that works in a multi-vendor heterogeneous environment, eliminating vendor lock-in. We deliver real, multi-vendor capabilities that meet the needs of the modern enterprise. For example, the Extreme Management Center allows for full visibility and management of multi-vendor networks, and Extreme Workflow Composer enables cross-domain, multi-vendor IT automation that allows organizations to automate at their pace—from automating simple tasks to deploying sophisticated workflows.

“Further, our standards-based, multi-vendor interoperable and adaptable data center fabric gives customers the ability to build once and re-use many times. All of this gives organizations the ability to accelerate their digital transformation initiatives, and to adapt and respond to new service demands with cloud speed.”

Enabling digital transformation
In that case, how does Extreme Networks enable digital transformation? He added: “According to a recent study, 89 percent of enterprises worldwide either plan to adopt, or have already adopted, a digital-first business strategy. The key enabler of digital transformation is an organization’s network infrastructure.

“With the advent of IoT, pervasive mobility and growing cloud service adoption, the network has become increasingly distributed. As such, Extreme collaborates with our customers to build open, software-driven networking solutions from the enterprise edge to the cloud that are agile, adaptive, and secure to enable digital transformation.

“We are a group of dedicated professionals who are passionate about helping our customers – and each other – succeed. Our 100% in-sourced services and support are #1 in the industry, and even with 30,000 customers globally, including half of the Fortune 50, we remain nimble and responsive to ensure customer and partner success. We call this Customer-Driven Networking.”

The three core drivers are: user experience; data and insights; and the foundation to keep all that data secure. I asked Gault to elaborate.

He said: “On top of providing the network building blocks for wired and wireless LAN access (routers, switches, access points, etc.), Extreme offers an array of software capabilities including analytics, artificial intelligence and machine learning to help customers gather granular insights into who is using what application, when, and where.

“With that data, customers can understand usage patterns to optimize applications, do capacity planning and fine-tune the infrastructure for optimal performance. By applying machine learning to the data, Extreme’s analytics can detect anomalies from devices and applications, and block potentially malicious access. Collectively, these capabilities allow organizations to deliver a better customer experience via personalized offers and engagement based on user behavior and with maximum security, network uptime and greater throughput.
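As an illustration of the kind of ML-driven anomaly detection Gault describes (a minimal sketch only, not Extreme’s analytics engine; the telemetry features, values, and contamination setting are invented for the example), an unsupervised model can learn normal per-device usage and flag outliers for blocking:

# Generic sketch of ML-based anomaly detection on network telemetry.
# Not Extreme product code; feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_per_min, flows_per_min, distinct_ports, failed_auths]
baseline = np.array([
    [1200, 14, 3, 0],
    [1150, 12, 2, 0],
    [1300, 15, 4, 1],
    [1250, 13, 3, 0],
    [1180, 14, 3, 0],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)  # learn the normal usage pattern

# One normal device, and one suddenly scanning ports with auth failures
live = np.array([[1225, 14, 3, 0],
                 [9800, 240, 150, 37]])
for row, flag in zip(live, model.predict(live)):  # +1 normal, -1 anomaly
    if flag == -1:
        print("anomalous device traffic, candidate for blocking:", row)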

“Extreme offers the industry’s only end-to-end, single pane of glass solution that enables customers to accelerate digitization while saving IT operations cost with automation, visibility, analytics, and control. Our solution helps secure our customers’ networks and ensure exceptional user experiences with fast Mean Time to Innocence, application performance insights, security and forensics, and automated roll-out of consistent policies.”


Disruptive innovation with Xilinx Versal ACAP super FPGA


Xilinx Inc. recently announced Versal – the super FPGA. Versal is said to be the first adaptive compute acceleration platform (ACAP), a fully software-programmable, heterogeneous compute platform.

Versal ACAP combines scalar engines, adaptable engines, and intelligent engines to achieve dramatic performance improvements of up to 20X over today’s fastest FPGA implementations, and over 100X over today’s fastest CPU implementations—for data center, wired network, 5G wireless, and automotive driver-assist applications.

Versal ACAP
Versal is the first ACAP by Xilinx. What exactly is an ACAP? For which applications does it work best?

Victor Peng

Victor Peng, president and CEO, Xilinx, said: “An ACAP is a heterogeneous, hardware adaptable platform that is built from the ground up to be fully software programmable. An ACAP is fundamentally different from any multi-core architecture, as it provides hardware programmability, but the developer does not have to understand any of the hardware details.

“From a software standpoint, it includes tools, libraries, run-time stacks and everything that you’d expect from a modern software-driven product. The tool chain, however, takes into account every type of developer—from the hardware developer, to the embedded developer, to the data scientist, and to the framework developer.”

Differences from classic FPGA and SoC
So, how does a Versal device differ technically from a classic FPGA, and from an SoC?

He said: “A Versal ACAP is significantly different from a regular FPGA or SoC. Zero hardware expertise is required to boot the device. Developers can connect to a host via CCIX or PCIe and get memory-mapped access to all peripherals (e.g., AI engines, DDR memory controllers).

“The Network-on-Chip is at the heart of what makes this possible. It provides ease-of-use, and makes the ACAP inherently software programmable—available at boot and without any traditional FPGA place-and-route or bitstream. No programmable logic experience is required to get started, but designers can design their own IP or add from the large Xilinx ecosystem.

“With regard to Xilinx’s hardware programmable SoCs (Zynq-7000 and Zynq UltraScale+), the Zynq platform partially integrated two of the three engine types (Scalar Engines and Adaptable Hardware Engines).

“Versal devices add a third engine type (Intelligent Engines). More importantly, the ACAP architecture tightly couples them together via the Network-on-Chip (NoC), enabling each engine type to deliver 2-3x the computational efficiency of a single-engine architecture, such as a SIMT GPU.”
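To make “memory-mapped access to all peripherals” concrete, here is a minimal host-side sketch of generic memory-mapped register access. Everything below is hypothetical (the device node, region size, and register offsets are invented for illustration); a real Versal flow would go through Xilinx’s runtime and kernel drivers rather than raw mmap:

# Generic illustration of host-side memory-mapped register access.
# The device node and register offsets are hypothetical; real Versal
# designs use Xilinx's runtime and kernel drivers.
import mmap
import os
import struct

BAR_SIZE = 0x1000     # hypothetical size of one PCIe BAR region
CTRL_REG = 0x0        # hypothetical control register offset
STATUS_REG = 0x4      # hypothetical status register offset

fd = os.open("/dev/hypothetical_acap0", os.O_RDWR | os.O_SYNC)
bar = mmap.mmap(fd, BAR_SIZE)

# Write the control register to kick off an engine, then read status.
bar[CTRL_REG:CTRL_REG + 4] = struct.pack("<I", 0x1)
(status,) = struct.unpack("<I", bar[STATUS_REG:STATUS_REG + 4])
print("engine status:", hex(status))

bar.close()
os.close(fd)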

Does this mean that, going forward, Xilinx will address application engineers in addition to classic hardware designers?

He noted: “Xilinx has been addressing software developers with design abstraction tools as well as its hardware programmable SoC devices (Zynq-7000 and Zynq UltraScale+) for multiple generations. However, with ACAP, software programmability is inherently designed into the architecture itself for the entire platform, including its hardware adaptable engines and peripherals.”

Global semicon industry likely to grow +4.4pc in 2019: Dr. Wally Rhines, Mentor


Happy new year to all of you. 🙂 And it gets even better, with a discussion with Dr. Walden C. Rhines, CEO and Chairman of the Board of Directors of Mentor, A Siemens Company, on the global semiconductor industry trends for the year 2019.

Semiconductor industry in 2018, and 2019
First, I needed to know how the global semiconductor industry performed last year, and what the way forward is in 2019.

Dr. Walden C. Rhines.

Dr. Wally Rhines said: “2018 was another strong growth year for the global semiconductor industry. IC bookings for the first 10 months remain above 2017 levels, and silicon area shipments for the last six quarters have also been above the trend line, with fourth-quarter YoY growth of 10 percent. And, IC revenues overall continue to show strong double-digit growth for 2018, with fourth-quarter YoY growth of nearly 23 percent.

“However, analysts are expecting much more modest growth in 2019. Individual analyst predictions for growth in 2019 vary from -2 to +8 percent, with the average forecasts at +4.4 percent.

“Much of this is due to the softening memory market, along with concerns about tariffs, inflation and global trade war. While the rest of the IC business has been relatively strong with Samsung and Intel noting solid demand for ICs for servers and PCs, sentiment by senior managers of semiconductor companies is near a record low level. So, I’m not expecting much growth, if any, in 2019 and more likely a decline.

EDA in 2019
On the same note, how is the global EDA industry performing, and what’s the path in 2019?

He said: “Revenue growth of the EDA industry continues to be remarkably strong, fueled by new entrants into the IC design world, like networking companies (e.g. Google, Facebook, Amazon, Alibaba, etc.) and automotive system and Tier1 companies, as well as a plethora of new AI-driven fabless semiconductor start-ups. Design activity precedes semiconductor revenue growth so it would not be surprising to continue to see strong EDA company performance even with a weak semiconductor market in 2019.

“EDA venture funding has rebounded, reaching a six-year high of $16.5 million, showing renewed confidence in the future of EDA. The major companies have all cited better-than-expected results. On the semiconductor side of EDA, there seem to be more technology challenges than the industry has faced in a long time.

“Some of those include new compute architectures, the emergence of photonics, increased lithographic complexities involving EUV and other techniques, new and more complex packaging, massive increases in data, and the multiplication of sources of design data (often created according to differing standards).

“The challenges on the system side of EDA are multiplying as expected. It is becoming more difficult to be at the leading edge when designing end-products in silos. Embedded software, mechanical, PCB, packaging, electrical interconnect, networking (access to the intranet) and security are just a few of the domains that need to work closer together in a more integrated manner. The increasing complexity is also making each of the domains more challenging. This all pushes new materials and methodologies into each of the domains listed above.”

Five trends in semicon for 2019
I wanted to find out about the top five trends in semicon for 2019.

He said: “The top five semiconductor technology trends include:
* the ongoing ramp of next-generation technologies, led by Machine Learning, Artificial Intelligence and cloud, and SaaS demand on the datacenter,
* the roll-out of IoT – especially in manufacturing,
* 5G development,
* computing on the edge, and
* the increasing semiconductor content within electrical devices.”

Achronix’s Speedcore Gen4 eFPGA IP raises performance by 60pc


Achronix Semiconductor Corp. recently announced the immediate availability of its Speedcore Gen4 embedded FPGA (eFPGA) IP.

Speedcore Gen4 is said to increase performance by 60 percent, reduce die area by 65 percent, and cut power by 50 percent. In addition, the new Machine Learning Processor (MLP) blocks deliver 300 percent higher performance for AI/ML applications. This is a remarkable improvement across performance, power, and die area, which will help Speedcore users develop significantly better AI/ML applications.

So, what is this proven methodology used to deliver Speedcore Gen4 eFPGA?

Steve Mensor, Achronix VP of Marketing, said: “Achronix has shipped multiple Speedcore 16t (16nm) eFPGA instances. Our customers have:

  • integrated their Speedcore eFPGA instance in their SoC
  • closed timing
  • taped out
  • brought up their SoC silicon with the Speedcore eFPGA
  • completed ATE testing with over 99 percent coverage
  • completed HTOL testing, and
  • began production.

“The methodology that we use is proven by the fact that all of the deliveries of our Speedcore eFPGA IP to customers have worked and are fully functional.”

These can be licensed for FinFET processes such as TSMC 16FF+ and now TSMC N7.

Can these be licensed to other processes as well? Mensor said: “Other nodes and other foundries can be supported. Achronix delivers Speedcore as a hard macro, in GDSII form. This means that it is optimized for a given node and metal stack. Achronix would need to port Speedcore in order for it to be supported on other process technologies and other foundries. It takes Achronix four months to port to a new process node on TSMC, and nine months to port to a new foundry.”

The Speedcore Gen4 eFPGA IP is for integration into users’ SoCs. Mensor said: “This simply means that Speedcore is IP that companies can integrate into their SoC. Speedcore is an all-digital IP (no mixed-signal/analog functionality). Companies integrate it just like they would integrate any other digital IP.”

Global semiconductor industry trends 2019: Jaswinder Ahuja, Cadence


Today happens to be my birthday! 😉 And, what better way to celebrate than with a discussion on the global semiconductor industry and the expected trends for 2019.

I caught up with my good friend, Jaswinder Ahuja, Corporate VP & MD of Cadence Design Systems India Pvt Ltd, and asked him about the global semiconductor industry trends for 2019. So, how is the global semicon industry performing this year? How does Cadence see it going in 2019?

Jaswinder Ahuja, Cadence.

Global semicon industry trends
Jaswinder Ahuja said: “The semiconductor industry is doing very well. Estimates say that it has crossed $400 billion in revenue. This growth is being driven by four or five waves that have emerged over the last couple of years. These are:

  • Cloud and data center applications are booming, and the top names in this space, including Amazon and Google, are now designing their own chips.
  • Automotive is (and has been) going through a transformation over the last few years. ADAS is just the beginning. From infotainment to safety, the whole vehicle is driven by precision electronics.
  • Industrial IoT is another wave. By incorporating artificial intelligence (AI) into manufacturing and industrial processes, we are looking at a revolution—what is being called Industry 4.0.
  • Mobile and wireless have, of course, driven growth in the last decade to decade-and-a-half, and it doesn’t show any signs of slowing down.
  • Consumer and IoT devices can also be considered a wave, although the consumer wave started some time ago. IoT is the game-changer there, with billions of connected devices forecast over the next 5-10 years.

“Thanks to these technology waves, our sense is that the growth will continue into 2019, and probably beyond, especially as AI and ML become more prevalent across applications.”

Global EDA and memory industries
How is the global EDA industry performing this year? How do you see it going in 2019?

He said: “Cadence has seen strong results in 2018 so far across product lines. This is thanks to multiple technology waves, especially machine learning, that are driving increased design activity and our System Design Enablement strategy, as well as our continued focus on innovation and launching new products.”

And, what’s the road ahead for memory? Is memory attracting more investment?

He added: “The memory market is being driven by the data-driven economy, and the need to store and process data at the edge and in the cloud. Added to that is the huge demand for smart and connected devices, for which memory is crucial.

“There isn’t any data about investments, but keeping in mind the consolidation that is happening across the industry, we may well witness some M&A activity among memory companies as well. The merger of SanDisk and Western Digital is one such example.”

EUV lithography trends
Has EUV lithography progressed? By when is EUV lithography likely to go mainstream?

Ahuja noted: “As technology advances, both manufacturing and design complexity grow. Designs are being scaled down to meet the ever-increasing demand for more functionality contained in a single chip, creating unique implementation challenges.

“Manufacturing is facing huge challenges in terms of printability, manufacturability, yield ramp-up and variability. Unfortunately, restrictions on power, performance and area (PPA) or turnaround time (TAT) do not scale up along with these factors.

“Foundries have been talking about EUV for years now. However, the power and performance improvements with EUV don’t look very significant at this time. Clearly, there is still some distance to go before EUV becomes mainstream.

“On a related note, in February 2018, Cadence and imec, the world-leading research and innovation hub in nanoelectronics and digital technologies, announced that their extensive, long-standing collaboration had resulted in the industry’s first 3nm test chip tapeout.

“The tapeout project, geared toward advancing 3nm chip design, was completed using EUV and 193 immersion (193i) lithography-oriented design rules, and the Cadence Innovus Implementation System and Genus Synthesis Solution.”

Trends in power and verification
Finally, what is the latest regarding coverage and power across all the aspects of verification?

He said: “Over the past decade, verification complexity and demands on engineering teams have continued to rise rapidly. Applying innovative solution flows, automation tools, and best-in-class verification engines is necessary to overcome the resulting verification gap.

“With regard to verification coverage, the challenge is always to know when you are done (the process of verification signoff). Cadence has a unique methodology and technology for measuring and signing off on the design and verification metrics used during the many milestones typical in any integrated circuit (IC) development, and it is called Metric Driven Verification (MDV).

“While milestones and metrics vary by design type and end-application, the final verification signoff will, at a minimum, contain the criteria and metrics within a flexible, human-readable and user-defined organizational structure. Automated data collection, project tracking, dashboards and in-depth report techniques are mandatory elements to eliminate subjectivity, allowing engineers to spend more time on verification and less time manually collecting and organizing data.
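As a toy illustration of that signoff idea (the metric names and goal percentages below are invented, and a real metric-driven flow pulls these numbers from verification databases rather than hard-coded values):

# Toy sketch of metric-driven verification signoff: roll per-metric
# coverage into one go/no-go view. Metrics and goals are invented.
SIGNOFF_GOALS = {
    "code_coverage": 100.0,
    "functional_coverage": 95.0,
    "assertion_coverage": 90.0,
}

measured = {
    "code_coverage": 99.2,
    "functional_coverage": 96.4,
    "assertion_coverage": 88.1,
}

def signoff_report(goals, actual):
    """Return (is_done, metrics still below their goals)."""
    gaps = [(name, actual.get(name, 0.0), goal)
            for name, goal in goals.items()
            if actual.get(name, 0.0) < goal]
    return (not gaps), gaps

done, gaps = signoff_report(SIGNOFF_GOALS, measured)
if done:
    print("verification signoff criteria met")
else:
    for name, got, goal in gaps:
        print(f"{name}: {got:.1f}% measured vs {goal:.1f}% goal")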

“Power-optimization techniques are creating new complexities in the physical and functional behavior of electronic designs. An integral piece of a functional verification plan, Cadence’s power-aware verification methodology can help verify power optimization without impacting design intent, minimizing late-cycle errors and debugging cycles. After all, simulating without power intent is like simulating with some RTL code black-boxed.

“The methodology brings together power-aware elaboration with formal analysis and simulation. With power-aware elaboration, all of the blocks as well as the power management features in the design are in place, so design verification with power intent is possible. Power intent introduces power/ground nets, voltage levels, power switches, isolation cells, and state retention registers. Any verification technology—simulation, emulation, prototyping, or formal—can be applied on a power-aware elaboration of the design.”
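As a toy model of two of the power-intent behaviors listed above, isolation clamping and state retention (purely illustrative Python, not Cadence tool code or any power-format syntax):

# Toy model of power-intent semantics that power-aware verification
# must check: isolation cells clamp outputs of a powered-down domain,
# and retention registers restore state after power-up.
class RetentionRegister:
    def __init__(self):
        self.value = 0
        self.shadow = 0           # retention latch, stays powered

    def power_down(self):
        self.shadow = self.value  # save state before power-off
        self.value = None         # register contents are lost

    def power_up(self):
        self.value = self.shadow  # restore the saved state

def isolated(output, domain_on, clamp=0):
    # With the source domain off, the isolation cell drives a known
    # clamp value instead of an unknown ('X') output.
    return output if domain_on else clamp

reg = RetentionRegister()
reg.value = 0xAB
reg.power_down()
assert isolated(reg.value, domain_on=False) == 0   # clamped, not 'X'
reg.power_up()
assert reg.value == 0xAB                           # state survived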

To be, or not to be fault tolerant! Or fault intolerant?


Semiconductors is a tough business, and definitely not for the faint-hearted, said Suman Narayan, senior VP for Semiconductors, IoT and Analytics, Cyient. If you are in DFT, you are in the insurance business. He was moderating a panel discussion on ‘fault tolerance vs. fault intolerance’.

Rubin Parekhji, senior technologist, Texas Instruments, said that a system is fault tolerant if a fault does not result in an error, and an application is fault tolerant if it contains no fault it cannot tolerate. An affordable system should be fault tolerant. Which faults are important? How can hardware and software together be made fault tolerant? If not done well, fault tolerance will lead to bulky devices, so there is a need to optimize and differentiate. The need is to build fault-tolerant systems using fault-intolerant building blocks.

Jais Abraham, director of engineering, Qualcomm, said that device complexity has increased 6X since 2010, with a disproportionate increase in test cost versus node-shrink benefits. Are we good at fault finding? It’s our fault. Be intolerant to faults, but don’t be maniacal. Think of the entire gamut of testing. Think of the system, and not just the chip. Think of manufacturing quality, and find remedies. Fault tolerance may mean testing enough that the chip meets the quality requirements of customers, who are becoming intolerant. We continue to invest in fault-tolerant architectures.

Ruchir Dixit, technical director, Mentor, felt that the real choice is to make the system robust. The key is the machine that we make, and whether it is robust; customers expect a quality, robust system. Simpler systems make up a complex system, and a successful system deals with malfunctions. There are regenerative components. The ISO 26262 standard drives robustness.

Dr. Sandeep Pendharkar, engineering director, Intel, felt that there is increased usage of semiconductors in applications such as ADAS and medical. Functional safety (FuSa) requires unprecedented quality levels. Now, DPPM (defective parts per million) has given way to DPPB (defective parts per billion).

Achieving near-zero DPPB on the latest node is nearly impossible, so fault tolerance is the way forward. How should test flows change to comprehend all this? Should we cap the number of recoverable faults before declaring a chip unusable?

Ram Jonnavithula, VP of engineering, Tessolve, said that a pacemaker should be fault tolerant, with zero defects. Fault tolerance requires redundancy, and mechanisms to detect and isolate faults. Sometimes, fault tolerance could mean reduced performance, but the system still functions.

Adit D. Singh, professor of electrical and computer engineering, Auburn University, USA, highlighted the threats to electronics reliability. These are:
* Test escapes (DPPM) – especially escapes from component testing, as well as timing defects.
* New failures that occur during operation, which can also be due to aging.
* Poor system design, for which there is really no solution; there can be design errors and improper shielding.

Test diversity helps with costs; design diversity helps with the cost of fault tolerance. Design triplicated modules independently, and avoid correlated failures.
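A minimal sketch of the classic way to get a fault-tolerant system from fault-intolerant blocks, triple modular redundancy (TMR) with a majority voter, follows; the three deliberately different implementations of the same function echo the advice to design triplicated modules independently so that failures are not correlated:

# Sketch of triple modular redundancy: three independently designed
# modules compute the same function, and a majority voter masks a
# single faulty result.
def square_a(x):
    return x * x

def square_b(x):
    return x ** 2

def square_c(x):
    # deliberately different implementation (repeated addition)
    return sum(x for _ in range(x)) if x >= 0 else x * x

def tmr_vote(x, modules):
    results = [m(x) for m in modules]
    for candidate in results:
        if results.count(candidate) >= 2:  # majority agreement
            return candidate
    raise RuntimeError("no majority: multiple modules failed")

print(tmr_vote(12, [square_a, square_b, square_c]))  # prints 144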

So, what’s it going to be? Be fault tolerant! Or, fault intolerant?

Need for end-user driven test strategy: Raja Manickam, Tessolve


At the ongoing ITC 2018 conference, Raja Manickam, founder and CEO, Tessolve, spoke on the ‘Always-on Era’.

Every chip is tested; about 10 million-plus chips are tested every day. A chip carries millions of data points and also does continuous self-test. The chip is expected to be always on. Engineers look at all possible combinations, and try to solve problems quickly.

Design is supposed to be pure genius; testing, however, is the necessary evil. There is DFT, probe, final test (FT), and system-level test (SLT). We just keep on adding tests.

The players who help drive us are academia, EDA companies, fabs, and ATE vendors (who add more instruments, and make the testers bigger). What matters is that the chip must work in a particular manner, all the time.

Test leadership creates an environment for a test strategy and drives it. Test leadership must be given flexibility and room for innovation, with a focus on end-user relevancy.

Next-gen BIST provides M-BIST and scan compression engines on a separate DFx die. The ATE interface can exist in the DFx die, while the base functional die provides power and clocks.
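For readers unfamiliar with BIST internals, here is an illustrative sketch of its two classic building blocks, an LFSR pattern generator and a MISR response compactor (the 8-bit polynomial taps and the stand-in circuit are invented for the example, not taken from any product):

# Illustrative logic-BIST building blocks: an LFSR generates
# pseudo-random patterns on-chip, and a MISR compacts the circuit's
# responses into a signature compared against a golden value.
def lfsr_patterns(seed, taps=(7, 5, 4, 3), width=8, count=16):
    state = seed
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)

def misr_signature(responses, taps=(7, 5, 4, 3), width=8):
    sig = 0
    for r in responses:
        feedback = 0
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = (((sig << 1) | feedback) ^ r) & ((1 << width) - 1)
    return sig

# Stand-in for the circuit under test
circuit = lambda v: (v ^ 0x3C) & 0xFF
responses = [circuit(p) for p in lfsr_patterns(seed=0x1)]
print("BIST signature:", hex(misr_signature(responses)))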

At any time a machine is running, less than 20 percent of the instruments are used. That’s not the best use of assets.

Factors influencing traditional ATE include loop-back testing, the need for ATE to test deserialized parallel data, miniature MEMS loop-back devices to improve SI, MEMS RF relays, and the use of FPGAs.

There are adaptive tests and predictive algorithms. The ATE could look like instrumentation and intelligence built into the load board or hardware. There could be three-dimensional handlers; the handler will go vertical. There should be an end-user driven test strategy, and it should be holistic.